Hello everyone,
Going forward, one of the things we need to do is to revisit how
micro-services are structured. This is a big task, for which there
have been previous discussions on the mailing list and related github
issues. Those discussions mainly focused on splitting the
micro-services out into a separate repository. While this was given
a shot, it did not work out smoothly.
With this email, I want to set the high-level requirements for a
plugin architecture that will help with separating not only the
micro-services, but also the frontends and backends, into their own
packages, and make it easy to plug in more micro-services, frontends,
and backends (or other types of plugins).
Currently, the Satosa repository contains the core of satosa, the
supported backends, the supported frontends, and some micro-services.
Ideally, the satosa repository would contain only the core component.
What I want to do is have each backend, frontend and micro-service be
its own package. These packages can then specify their desired
dependencies, example configurations, documentation, as well as
installation and deployment instructions. Finally, the individual
plugins can be developed and updated without the need to change the
core.
And, we can almost do that. We can separate the backends, frontends, and
micro-services out - not into a grouped repository (i.e., a
"micro-services repository"), but each plugin into its own
repository and python package. Little work is needed to enable
this, mainly decoupling the core from two micro-services (CMservice
and AccountLinking, which have special requirements hardcoded).
But there is more to this than separating the plugins out. Separating the
plugins enables us to specify the control we want to have over the
provided functionality, the way certain aspects of the flow take
place, how state is managed, etc. By defining what a plugin is, we can
treat frontends, backends and micro-services in a uniform way.
A plugin is an external component that will be invoked by Satosa at
certain points of an authentication flow. We need to _name_ those
points and have plugins declare them in their configuration, thus
deciding when they should be invoked (and whether they should be
invoked at multiple points). At different points in the flow, different
information is available. Plugins should be provided most of the
available information in a structured manner (the internal
representation).
Right now, we have four kinds of plugins (which can be thought of as
roles), invoked at specific points of the flow:
- frontends: invoked when an authentication request is received
- backends: invoked when an authentication response is received
- request micro-services: invoked before the authentication request is
converted to the appropriate protocol
- response micro-services: invoked before the authentication response
is converted to the appropriate protocol
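To make the four roles concrete, here is a minimal sketch of what named invocation points and a uniform plugin interface could look like. To be clear, every name in this snippet (InvocationPoint, Plugin, process, and so on) is hypothetical, not existing Satosa API:

```python
# Hypothetical sketch - none of these names are existing Satosa API.
# A plugin declares the named points at which it wants to be invoked,
# and receives the internal representation at each of them.
from enum import Enum


class InvocationPoint(Enum):
    REQUEST_RECEIVED = "request_received"    # frontends
    RESPONSE_RECEIVED = "response_received"  # backends
    BEFORE_REQUEST = "before_request"        # request micro-services
    BEFORE_RESPONSE = "before_response"      # response micro-services


class Plugin:
    invocation_points = []  # declared in the plugin's configuration

    def process(self, point, internal_data):
        """Receive the internal representation; return it, possibly modified."""
        raise NotImplementedError


class AttributeFilter(Plugin):
    # This plugin asks to be invoked at two points of the flow.
    invocation_points = [InvocationPoint.BEFORE_REQUEST,
                         InvocationPoint.BEFORE_RESPONSE]

    def process(self, point, internal_data):
        internal_data["attributes"] = {
            key: value
            for key, value in internal_data.get("attributes", {}).items()
            if not key.startswith("private_")
        }
        return internal_data
```

With such a shape, the core only needs to know the invocation points; it can treat all four roles through the same process() call.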
I'm not certain that this separation is the best; I can see that some
micro-services need to know more than just the internal
representation of the available information. This can be solved in two
ways: introduce more points in the flow where a plugin can be invoked,
and hope one of those points is better suited for the intended purpose;
or enumerate what the needed information is and provide it in a safe
way. The example I have in mind is a micro-service that
needs to select certain SAML attributes to generate an id, but the
available information is limited to the internal attribute names,
which introduces an indirect coupling.
When talking about invoking external components, we usually think in
blocking terms. This, however, may not always be the case. Examples
include the need to do heavy IO operations, or to invoke another service
over the network from which we do not expect a response (i.e., sending
logs, stats or crash reports to a monitoring service). For such cases,
we may want to set a certain plugin to work in async mode.
There are also cases where we do expect an answer from an external
service, but it may come at an undefined time in the future and does
not interfere with the current flow. For these cases, we need to keep
some kind of state. This is now done in the form of a front-channel
mechanism (i.e., a cookie). What I'd like is to make this explicit and available
to the plugins as a module/function that handles state in a uniform
way.
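A minimal sketch of what such a uniform state module could look like; this keeps state in memory purely for illustration, and the class and method names are hypothetical. A real implementation would persist to an encrypted cookie or a server-side store keyed by session:

```python
# Hypothetical sketch of a uniform state module handed to plugins, instead
# of each plugin touching the cookie directly. In-memory for illustration;
# a real implementation would persist to an encrypted cookie or a
# server-side store keyed by session.

class State:
    def __init__(self):
        self._data = {}

    def store(self, plugin_name, key, value):
        """Remember a value for a pending (possibly async) operation."""
        self._data.setdefault(plugin_name, {})[key] = value

    def retrieve(self, plugin_name, key, default=None):
        return self._data.get(plugin_name, {}).get(key, default)

    def delete(self, plugin_name):
        """Drop a plugin's state once its pending operation completes."""
        self._data.pop(plugin_name, None)
```

Namespacing by plugin name means two plugins cannot trample each other's state, which is hard to guarantee when everyone writes to the same cookie.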
Moving over to this structure will allow plugins to be much more
flexible. But there is still an issue hidden there. If we have plugins
as separate packages, developed and updated independently from the
core of Satosa, we also need a way to signal Satosa that such a
package has been installed or updated and should be reloaded. This
will affect how all plugins are initialized and loaded internally, and
most probably it will affect how Satosa itself is initialized.
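One way this could work - an assumption on my part, not current Satosa behavior - is setuptools entry points: each plugin package registers itself under an agreed entry-point group, and the core scans that group at startup. The group name "satosa.plugins" is invented for this sketch:

```python
# An assumption, not current Satosa behavior: plugin packages register
# themselves under a setuptools entry-point group, and the core discovers
# every installed plugin at startup by scanning that group. The group
# name "satosa.plugins" is invented for this sketch.
import pkg_resources  # ships with setuptools


def discover_plugins(group="satosa.plugins"):
    """Map plugin name -> loaded object for each entry point in `group`."""
    return {ep.name: ep.load() for ep in pkg_resources.iter_entry_points(group)}
```

A plugin package would then advertise itself from its own setup.py, along the lines of entry_points={"satosa.plugins": ["myplugin = mypkg:MyPlugin"]}, and installing or upgrading the package is enough for the core to pick it up on the next (re)load.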
Along with that work, intrusive work needs to be done in error
handling and logging. At the moment, errors end up as plain text
messages (usually not that helpful) in the browser (which wraps the
text into basic html) and the logs. This needs to change in the
direction of structured error messages. Logging will also change
in that direction. Since the logs will contain this information
in a structured manner, the same payload can be returned as the error
message. I would like to have messages structured in JSON format (most
probably), with context, data and a backtrace included among other
information (such as timestamp, hostname, process-id, src-map,
request-id, and more.) Provided this information, another process (a
frontend/error-handling service) can parse it and present the user
with the appropriate error message.
The structured logger and the error-handling service should be part of
the parameters that initialize a plugin. The plugins should make use of
them, so that the service has a uniform way of handling these
cross-cutting concerns.
The library I'm looking into, to take care of the log format is structlog:
https://github.com/hynek/structlog
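For illustration, here is a stdlib-only sketch of the kind of structured payload described above; structlog's JSONRenderer would emit something similar, and every field name here is an example, not a settled format:

```python
# A stdlib-only sketch of the kind of structured error payload described
# above. structlog's JSONRenderer would emit something similar; every
# field name here is an example, not a settled format.
import json
import os
import socket
import traceback
from datetime import datetime, timezone


def structured_error(exc, context=None, request_id=None):
    """Build a JSON-serializable error record with context and a backtrace."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "hostname": socket.gethostname(),
        "process_id": os.getpid(),
        "request_id": request_id,
        "error": type(exc).__name__,
        "message": str(exc),
        "backtrace": traceback.format_exception(type(exc), exc, exc.__traceback__),
        "context": context or {},
    }


# The same payload can go to the logs and be returned as the error body.
try:
    raise ValueError("unknown entity id")
except ValueError as err:
    payload = json.dumps(structured_error(
        err, context={"plugin": "saml2_backend"}, request_id="abc-123"))
```

A separate frontend/error-handling service can then parse this payload and decide what is safe to show the user, while the full record stays in the logs.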
Another thing to look at in the future is grouping and high-level
coordination between plugins in the form of invocation
strategies. Given three plugins, I want to invoke them in order, until
one succeeds with returning a result. Or, I want to invoke them in
parallel and get an array of results. Or, I want to invoke this plugin
that does a network operation, and if it fails, I want to retry 3
times with an exponential backoff.
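The three strategies could be sketched roughly like this; the function names and signatures are invented for illustration:

```python
# Illustrative sketches of the three invocation strategies mentioned
# above. The names and signatures are invented for this example.
import time


def first_success(plugins, data):
    """Invoke plugins in order until one returns a non-None result."""
    for plugin in plugins:
        result = plugin(data)
        if result is not None:
            return result
    return None


def all_results(plugins, data):
    """Invoke every plugin in turn and collect the results into a list."""
    return [plugin(data) for plugin in plugins]


def retry_with_backoff(plugin, data, retries=3, base_delay=0.5):
    """Retry a failing plugin, doubling the delay between attempts."""
    for attempt in range(retries):
        try:
            return plugin(data)
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts, propagate the failure
            time.sleep(base_delay * (2 ** attempt))
```

The point is that the coordination logic lives in the core, once, instead of being reimplemented inside every plugin that happens to need it.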
To sum up, this was a (non-technical) overview of the things I'd
like to do in relation to the "plugins". For some of the above, the
technical parts are still under consideration. There are more things
to be done for Satosa, both technical and not, which I hope to write
down, discuss, and do with everyone's help and suggestions.
Cheers,
--
Ivan c00kiemon5ter Kanakarakis >:3
Hi-
After upgrading our Satosa to the latest, I am getting some deprecation warnings in the Satosa log.
I'll address the other 2 warnings later (I think they are related to python3), but for now I'm interested in how to use the new hasher microservice. I am not having much luck finding documentation for it.
The warning in the log is this:
"/usr/local/lib/python3.6/site-packages/satosa/base.py:56: DeprecationWarning: 'USER_ID_HASH_SALT' configuration option is deprecated. Use the hasher microservice instead."
Can you point me to some help in using that new microservice?
Thanks!
Hi all,
I would like to find out what the licenses are of *all* components and
dependencies that make up SaToSa. Any suggestions on how to do that in an
automated fashion?
I could walk down the libraries manually in
virtualenv/lib/python3.5/site-packages, but that is rather tedious and
error prone. I also found some scripts to do this for me; however,
these seem to just report what is in use at the OS/system level, not
specifically for our virtualenv.
Any suggestions, or a solution you may have used previously?
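One stdlib-only way to automate this (Python 3.8+, via importlib.metadata) is to enumerate the distributions installed in the active environment, i.e. the virtualenv, and read the license metadata each one declares. The output is only as accurate as each package's own metadata:

```python
# Stdlib-only sketch (Python 3.8+): enumerate the distributions installed
# in the *active* environment (e.g. your virtualenv) and report the
# license metadata each one declares. Only as accurate as each package's
# own metadata.
from importlib import metadata


def installed_licenses():
    """Map each installed distribution to its declared license info."""
    licenses = {}
    for dist in metadata.distributions():
        name = dist.metadata.get("Name", "unknown")
        # Trove classifiers are often more reliable than the License field.
        classifiers = [c for c in (dist.metadata.get_all("Classifier") or [])
                       if c.startswith("License ::")]
        licenses[name] = classifiers or [dist.metadata.get("License") or "UNKNOWN"]
    return licenses
```

There are also third-party tools on PyPI (pip-licenses, for example) that do roughly this against the current environment rather than the whole OS.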
Niels
--
Niels van Dijk Technical Product Manager Trust & Security
Mob: +31 651347657 | Skype: cdr-80 | PGP Key ID: 0xDE7BB2F5
SURFnet BV | PO.Box 19035 | NL-3501 DA Utrecht | The Netherlands
www.surfnet.nl | www.openconext.org
Dear Scott K.,
You mentioned last week something about accessing the metadata of the
IdP used to authenticate the user from within a microservice. This was
in the context of using information such as REFEDS R&S or SirTiFi
compliance to make access control decisions in SATOSA at time of
authentication. Would you mind elaborating?
Best wishes,
Matthew
--
"The lyf so short, the craft so longe to lerne."
Hi all,
I know this is not entirely the correct venue, but does anybody know
about an entity that would be able to provide a SaToSa training for
members of the AARC project?
Cheers,
Niels
--
Niels van Dijk Technical Product Manager Trust & Security
Mob: +31 651347657 | Skype: cdr-80 | PGP Key ID: 0xDE7BB2F5
SURFnet BV | PO.Box 19035 | NL-3501 DA Utrecht | The Netherlands
www.surfnet.nl | www.openconext.org
Hi all (and specifically Ivan as he was committing stuff),
I note a commit mentioning eIDAS integration, however what was committed
(https://github.com/IdentityPython/SATOSA/commit/a0b7cf9eb73714cef76d6ab7249…)
seems a bit too little to actually engage with eIDAS. I am for example
not seeing any reference to eIDAS-specific SAML extensions. Is all of
that covered in pySAML? I found this as well
(https://github.com/grnet/pysaml2eidas/tree/devel) , but that does not
seem to be used by SatoSa?
What would I need to pull together to set up a satosa-based eIDAS gateway?
thanks,
Niels
--
Niels van Dijk Technical Product Manager Trust & Security
Mob: +31 651347657 | Skype: cdr-80 | PGP Key ID: 0xDE7BB2F5
SURFnet BV | PO.Box 19035 | NL-3501 DA Utrecht | The Netherlands
www.surfnet.nlwww.openconext.org
Hi all,
I would like to mint a new attribute scheme called voPerson
(https://voperson.org/) for use in SaToSa. What would be the best
approach?
thanks!
Niels
--
Niels van Dijk Technical Product Manager Trust & Security
Mob: +31 651347657 | Skype: cdr-80 | PGP Key ID: 0xDE7BB2F5
SURFnet BV | PO.Box 19035 | NL-3501 DA Utrecht | The Netherlands
www.surfnet.nl | www.openconext.org
Hey everyone,
I am writing a SATOSA front end that implements SAML 2.0 IdP-initiated
(unsolicited) SSO. Currently, I plan to generate a SAML AuthnRequest
using a request variable (`providerID`) that names the service provider.
Eventually, I'd like to implement the same interface as Shibboleth
(request variables `shire`, `target`, and `time`) because I'm just not
that creative.
I have some (well, a lot of) questions:
- How do I get a list of SAMLFrontend endpoints?
- There could be more than one SAMLFrontend configured. How would I
know which one to use?
- I don't want to rely on JavaScript or the user to submit a form.
Can I send the AuthnRequest to the selected SAMLFrontend's HTTP-Redirect
endpoint via satosa.response.Redirect?
- Is it OK to omit the RelayState?
- In the SAML AuthnRequest, can I specify
AssertionConsumerServiceIndex="0"?
- If not, how do I look up the SP's AssertionConsumerServiceURL?
- In the SAML AuthnRequest, can I omit the Destination?
- If not, which endpoint should I set Destination to---HTTP-Redirect
or HTTP-POST?
- If I construct the redirect URL manually, do I base64-encode the
AuthnRequest using Python's base64.urlsafe_b64encode()?
- Should I use the urllib or requests library to construct the URL
instead?
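On the encoding question specifically: my understanding of the SAML HTTP-Redirect binding is that it does not use urlsafe_b64encode. The AuthnRequest XML is DEFLATE-compressed (raw, with no zlib header), base64-encoded with the standard alphabet, and then URL-encoded as the SAMLRequest query parameter. A stdlib-only sketch, for illustration only, since pysaml2 has helpers that do this for you:

```python
# My understanding of the SAML HTTP-Redirect binding (illustrative only;
# pysaml2 has helpers that do this for you): the AuthnRequest XML is
# DEFLATE-compressed (raw, no zlib header), base64-encoded with the
# *standard* alphabet, then URL-encoded as the SAMLRequest parameter.
import base64
import urllib.parse
import zlib


def redirect_url(endpoint, authn_request_xml, relay_state=None):
    # wbits=-15 gives raw DEFLATE, as the binding requires.
    compressor = zlib.compressobj(9, zlib.DEFLATED, -15)
    deflated = compressor.compress(authn_request_xml.encode("utf-8"))
    deflated += compressor.flush()
    params = {"SAMLRequest": base64.b64encode(deflated).decode("ascii")}
    if relay_state is not None:  # RelayState is optional in the binding
        params["RelayState"] = relay_state
    return endpoint + "?" + urllib.parse.urlencode(params)
```

The urlencode step takes care of the characters that urlsafe_b64encode would otherwise have been trying to avoid, which is why the standard base64 alphabet is the right one here.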
Thanks in advance! :)
Best wishes,
Matthew
--
"The lyf so short, the craft so longe to lerne."
Hey everyone,
Now that satosa-microservices are split into their own repository, we
should decide on the process by which a deployment acquires them. There
are many options here:
- have each microservice be its own python package and selectively
install it using pip
- have the microservices repo be a package itself and use pip to install it
- have the microservices repo as a git submodule under satosa (not suggested)
- have microservices as something completely external and fetch them
using http/git (as shown below). This could mean a lot of different
things - i.e., should microservices use code from satosa? If so, satosa
is a dependency of the microservices, and as such this makes
microservices a package with dependencies, etc.
- (more options?)
Skoranda mentioned that
> If you need the LDAP Attribute Store microservice you must also install ldap3 using pip:
This indicates that certain microservices have their own dependencies.
Users cannot guess what dependencies are needed for a certain
microservice. This information should be explicit and automatically
resolved by the microservice installation process.
This leads me to think that having each microservice as a separate
(python) package, with its own dependencies and deployment process, is
the way to go.
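As an illustration of that option, a micro-service packaged on its own could declare its dependencies in its packaging metadata, so pip resolves them automatically. The package and module names below are invented for this sketch:

```python
# Hypothetical setup.py for a micro-service packaged on its own; the
# package and module names are invented for illustration.
from setuptools import setup

setup(
    name="satosa-ldap-attribute-store",
    version="0.1.0",
    py_modules=["ldap_attribute_store"],
    # The micro-service's own dependencies: pip resolves these for the
    # user, so nobody has to guess that ldap3 is required.
    install_requires=[
        "ldap3",
        "satosa",  # if the micro-service uses code from the core
    ],
)
```

Then "pip install satosa-ldap-attribute-store" would pull in ldap3 without the user having to know about it.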
This is not a simple decision to make. Let's have a discussion on how
the dev-community thinks this is best solved.
Cheers,
PS: This discussion was triggered by skoranda's PR here:
https://github.com/IdentityPython/SATOSA/pull/168#discussion_r149634137
--
Ivan c00kiemon5ter Kanakarakis >:3
Hey everyone,
In the example config, there's this list of internal attributes:
https://raw.githubusercontent.com/IdentityPython/SATOSA/master/example/internal_attributes.yaml.example
How does that relate to the attribute maps defined in the Docker
container?
https://github.com/IdentityPython/SATOSA/tree/master/docker/attributemaps
Those appear to be the same as the ones defined in pysaml2 (with the
exception of several EIDAS attribute mappings missing from SATOSA's
version):
https://github.com/rohe/pysaml2/tree/master/src/saml2/attributemaps
What's the Right Way to configure or extend the attribute mappings?
Can I assume the ones from pysaml2 are already loaded?
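For what it's worth, my understanding is that a pysaml2 attribute map is just a Python module defining a MAP dict, and that shipping your own module in a directory pointed to by pysaml2's attribute_map_dir config option is one way to extend the mappings. A sketch of the structure (the two entries are the standard mail and eduPersonPrincipalName OID pairs; the structure is what matters):

```python
# Sketch of a pysaml2 attribute map module: a MAP dict translating
# between wire names (URIs/OIDs) and friendly names, in both directions.
# The two entries are standard mail / eduPersonPrincipalName OID pairs.
MAP = {
    "identifier": "urn:oasis:names:tc:SAML:2.0:attrname-format:uri",
    "fro": {  # wire name -> friendly name
        "urn:oid:0.9.2342.19200300.100.1.3": "mail",
        "urn:oid:1.3.6.1.4.1.5923.1.1.1.6": "eduPersonPrincipalName",
    },
    "to": {  # friendly name -> wire name
        "mail": "urn:oid:0.9.2342.19200300.100.1.3",
        "eduPersonPrincipalName": "urn:oid:1.3.6.1.4.1.5923.1.1.1.6",
    },
}
```

The "fro" and "to" dicts should be exact inverses of each other, so tooling can translate in either direction.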
Also, where do I find the attribute mappings for other protocols, like
OIDC? I looked at the pyoidc sources, but I didn't find similar data
structures or source code.
Best wishes,
Matthew
--
"The lyf so short, the craft so longe to lerne."