Hi,
For a SAML proxy deployment (no OIDC here yet) I need a sophisticated
attribute release policy that should be largely driven by SAML metadata
entity categories. The policy is essentially to release the union of
the attributes for each of the entity categories (REFEDS R&S, CoCo, and
a few others) to which the entity belongs.
Later I am sure there will need to be per-entity adjustments (there
always are...).
Have other deployments already built tooling to implement such a
policy?
I don't see that the SAMLFrontend (or its subclasses) has the requisite
functionality, or that pySAML2 has it--the policy-based functionality
appears to be mostly statically configured filtering, not something
driven by SAML metadata. Am I missing existing functionality?
If I need to develop the functionality, then I am wondering whether it
is best done by implementing one or more response microservices, or by
evolving the SAMLFrontend code?
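For concreteness, the core of such a policy is small enough to sketch in plain Python. The category URIs and the mapping below are illustrative, and this is only the filtering logic: the real work is wiring it into a response microservice and looking up the SP's entity categories from the proxy's metadata store.

```python
# Hypothetical mapping from entity category URI to the attribute bundle
# that category permits. The URIs and bundles here are examples only.
CATEGORY_ATTRIBUTES = {
    "http://refeds.org/category/research-and-scholarship": {
        "eduPersonPrincipalName", "mail", "displayName",
    },
    "http://www.geant.net/uri/dataprotection-code-of-conduct/v1": {
        "eduPersonPrincipalName", "mail",
    },
}


def release_attributes(user_attributes, entity_categories):
    """Release the union of the attributes allowed by the SP's categories."""
    allowed = set()
    for category in entity_categories:
        allowed |= CATEGORY_ATTRIBUTES.get(category, set())
    return {k: v for k, v in user_attributes.items() if k in allowed}
```

In a response microservice, the categories would come from the requesting SP's metadata and the filtered result would be written back to the internal data before it reaches the frontend; per-entity adjustments could then layer on top of the same mapping.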
Thoughts?
Thanks,
Scott K
Hi,
I just submitted a first draft of a PR for the SAMLUnsolicitedFrontend
class.
The class provides all of the functionality of the SAMLFrontend class
but also enables an "unsolicited" endpoint that can be used to initiate
a SAML flow using a proprietary set of query string parameters that are
not part of any SAML standard but closely follow similar functionality
from the Shibboleth project.
For example, one might do a GET to (line breaks added and query string
not encoded for clarity)

https://myproxy.my.org/saml2/unsolicited?
    providerId=https://mysp.my.org/sp/shibboleth
    &target=https://mysp.my.org/secure
    &shire=https://mysp.my.org/Shibboleth.sso/SAML2/POST
    &discoveryURL=https://mydisco.my.org/
This will cause a flow where
https://mydisco.my.org/
is used for IdP discovery followed by a SAML <Response> being sent to
the SP with entityID
https://mysp.my.org/sp/shibboleth
at the ACS URL
https://mysp.my.org/Shibboleth.sso/SAML2/POST
and with relay state
https://mysp.my.org/secure
The providerId query string parameter is required. The others are
optional.
To prevent the endpoint from being an open relay:
- the SP entityID must match one found in the trusted metadata,
- the ACS URL must match one found in the trusted metadata for that SP,
- the relay state URL must have a scheme, host, and port that match the
  ACS URL, and
- the discovery service URL must be whitelisted in the configuration.
The relay state URL condition could be made more sophisticated.
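Roughly, the checks amount to the following (a standalone sketch with hypothetical parameter shapes; the actual PR works against the pysaml2 metadata store rather than plain dicts):

```python
from urllib.parse import urlsplit


def validate_unsolicited_request(provider_id, acs_url, relay_state, disco_url,
                                 trusted_acs_urls_by_entity, disco_whitelist):
    """Reject requests that would turn the endpoint into an open relay.

    trusted_acs_urls_by_entity maps SP entityID -> set of ACS URLs taken
    from trusted metadata (a hypothetical shape for this sketch).
    """
    # The SP entityID must be present in the trusted metadata.
    if provider_id not in trusted_acs_urls_by_entity:
        return False
    # The ACS URL must be registered for that SP.
    if acs_url is not None and acs_url not in trusted_acs_urls_by_entity[provider_id]:
        return False
    # The relay state must share scheme, host, and port with the ACS URL.
    if relay_state is not None and acs_url is not None:
        a, r = urlsplit(acs_url), urlsplit(relay_state)
        if (a.scheme, a.hostname, a.port) != (r.scheme, r.hostname, r.port):
            return False
    # The discovery service URL must be explicitly whitelisted.
    if disco_url is not None and disco_url not in disco_whitelist:
        return False
    return True
```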
The functionality and the names of the input query parameters are
inspired by the Shibboleth IdP functionality described at
https://wiki.shibboleth.net/confluence/display/IDP30/UnsolicitedSSOConfigur…
and the SP functionality for discoveryURL described at
https://wiki.shibboleth.net/confluence/display/SP3/ContentSettings
I am not married to the names of the query parameters--aligning with
the Shibboleth project seems to make some sense, but they use 'shire'
for historical reasons, and a better name would be something like
'acs_url' or even just 'acs'.
The configuration is the same as for the SAMLFrontend base class, except
that you add something like

unsolicited:
  endpoint: unsolicited
  discovery_service_whitelist:
    - https://mydisco.my.org/
With that configuration the endpoint is exposed at <backend name>/unsolicited
If instead you had

unsolicited:
  endpoint: foo/bar
  discovery_service_whitelist:
    - https://mydisco.my.org/
then it would be exposed at <backend name>/foo/bar
I have not yet written any tests or updated the documentation. If
nobody raises any objections I will proceed with doing that.
All input welcome.
Thanks,
Scott K
Hi,
I know it has been talked about as "doable", but has anybody already
deployed SATOSA with a response microservice that implements a "step-up"
flow to leverage a second factor (like Duo) when the authenticating IdP
does not assert that MFA was used?
If so, are you considering sharing it and/or contributing it to the code
base?
If not, but you are considering such an implementation/deployment, can
you indicate if you are interested in collaborating on the development
and testing?
Thanks,
Scott K
Hi to everybody,
I developed a microservice that can map specific target entity IDs to
specific SATOSA backends. A configuration example could be:
````
module: satosa.micro_services.custom_routing.DecideBackendByTarget
name: TargetRouter
config:
  target_mapping:
    "http://idpspid.testunical.it:8088": "spidSaml2"
    "http://strangeIDP.testunical.it:8081/saml2/metadata": "strangeSaml2"
````
I needed backend routing based on the target entity ID because I have
some SAML2 IdPs that only accept highly customized authn requests and
metadata. An example would be the Italian SPID federation, through which
my organization will soon federate via SATOSA. Another example could be
the need to use different configurations, like encryption and digest
algorithms, depending on the target IdP.
I was looking into the DecideBackendByRequester microservice, but I soon
realized that it was made for different goals: its subject is the
requester entity ID, not the target entity ID.
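Stripped of the SATOSA plumbing, the decision the microservice makes is a small lookup (a sketch of the logic only; the actual implementation is in the PR):

```python
def decide_backend(target_entity_id, target_mapping, default_backend):
    """Pick the backend for a given target IdP entityID.

    target_mapping mirrors the 'target_mapping' config option above;
    entity IDs not listed fall through to the normally routed backend.
    """
    return target_mapping.get(target_entity_id, default_backend)
```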
As you can see in https://github.com/IdentityPython/SATOSA/pull/220
I made a single branch to pull only this feature.
I'm also curious about the SATOSA milestones: which features are in
development, which will compose the next release, and whether it would
be possible to have a dev branch to open PRs against.
I don't know whether this microservice sounds useless to you; I searched
a lot before programming it, and I hope to have written a middleware
that could be useful to the SATOSA community.
Hope to hear your comments soon
Hi,
Right now the saml2.py in src/satosa/backends/ has
def disco_query(self):
    """
    Makes a request to the discovery server

    :type context: satosa.context.Context
    :type internal_req: satosa.internal.InternalData
    :rtype: satosa.response.SeeOther

    :param context: The current context
    :param internal_req: The request
    :return: Response
    """
    return_url = self.sp.config.getattr("endpoints", "sp")["discovery_response"][0][0]
    loc = self.sp.create_discovery_service_request(self.discosrv, self.sp.config.entityid, **{"return": return_url})
    return SeeOther(loc)
Essentially this restricts the flow to one and only one IdP discovery
service that is configured statically.
I propose that this method be enhanced so that it can inspect the
context and internal data, and if it finds a URL for the discovery
service to use, it overrides what is in the configuration.
Then one can configure a request microservice that uses some logic to set
the URL for the discovery service, such as which SP the authentication
request came from.
Since the comment for the method already includes a mention of the context
and internal data, I suspect this functionality was designed but never
implemented.
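A minimal sketch of the lookup I have in mind ('disco_srv' is a hypothetical key that a request microservice would set; the real change would live inside disco_query itself):

```python
def choose_discovery_service(configured_url, internal_data):
    """Prefer a discovery service URL set by a request microservice.

    'disco_srv' is a hypothetical key that a request microservice would
    set (e.g. based on which SP the authentication request came from);
    otherwise we fall back to the statically configured service.
    """
    override = (internal_data or {}).get("disco_srv")
    return override if override else configured_url
```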
Any objections to me implementing it?
Any other comments or input?
Thanks,
Scott K
We collected some use cases during the meeting @TIIME2019. I put these into the wiki and added short descriptions:
https://github.com/IdentityPython/SATOSA/wiki
These are drafts, please feel free to improve them directly. In particular I did not grasp UC7/Policy Enforcement; that one needs a useful description (Scott K.?)
Cheers, Rainer
Hello everyone,
Going forward, one of the things we need to do is revisit how
micro-services are structured. This is a big task, for which there
have been previous discussions on the mailing list and related GitHub
issues. Those discussions mainly focused on splitting the
micro-services out into a separate repository. While this was given
a shot, it didn't work quite smoothly.
With this email, I want to set out the high-level requirements for a
plugin architecture that will help with separating the
micro-services, but also frontends and backends, into their own
packages, and make it easy to plug in more micro-services, frontends,
and backends (or other types of plugins).
Currently, the Satosa repository contains the core of satosa, the
supported backends, the supported frontends, and some micro-services.
Ideally, the satosa repository would contain only the core component.
What I want to do is have each backend, frontend, and micro-service be
its own package. These packages can then specify their desired
dependencies, example configurations, documentation, as well as
installation and deployment instructions. Finally, the individual
plugins can be developed and updated without the need to change the
core.
And we can almost do that. We can separate the backends, frontends, and
micro-services out - not into a grouped repository (ie a
"micro-services repository"), but each of those plugins into its own
repository and Python package. There is little work that needs to be
done to enable this, mainly decoupling the core from two
micro-services (CMservice and AccountLinking, which have special
requirements hardcoded).
But there is more than separating the plugins out. Separating the
plugins enables us to specify the control we want to have over the
provided functionality, the way certain aspects of the flow take
place, how state is managed, etc. By defining what a plugin is, we can
treat frontends, backends and micro-services in a uniform way.
A plugin is an external component that will be invoked by Satosa at
certain points of an authentication flow. We need to _name_ those
points and have plugins declare them in their configuration, thus
deciding when they should be invoked (and whether they should be
invoked at multiple points). At different points in the flow, different
information is available. Plugins should be provided most of the
available information in a structured manner (the internal
representation).
Right now, we have four kinds of plugins (which can be thought of as
roles), invoked at specific points of the flow:
- frontends: invoked when an authentication request is received
- backends: invoked when an authentication response is received
- request micro-services: invoked before the authentication request is
converted to the appropriate protocol
- response micro-services: invoked before the authentication response
is converted to the appropriate protocol
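To make the uniform treatment concrete, here is a purely illustrative sketch (none of these names exist in SATOSA today) of a single plugin interface that declares its invocation points:

```python
# Illustrative only: named invocation points and a uniform plugin shape.
REQUEST_RECEIVED = "request_received"    # frontends
RESPONSE_RECEIVED = "response_received"  # backends
BEFORE_REQUEST = "before_request"        # request micro-services
BEFORE_RESPONSE = "before_response"      # response micro-services


class Plugin:
    # Points of the flow at which this plugin wants to be invoked.
    invocation_points = ()

    def process(self, point, data):
        """Receive the internal representation; return it, possibly modified."""
        return data


class AttributeUppercaser(Plugin):
    """Toy example: rewrite attribute values just before the response."""
    invocation_points = (BEFORE_RESPONSE,)

    def process(self, point, data):
        data["attributes"] = {k: [v.upper() for v in vs]
                              for k, vs in data["attributes"].items()}
        return data
```

The core would then dispatch to every registered plugin whose declared points include the current one, regardless of whether it plays the frontend, backend, or micro-service role.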
I'm not certain that this separation is the best; I can see the need
for some micro-services to know more than just the internal
representation of the available information. This can be solved in two
ways: introduce more points in the flow where a plugin will be invoked
and hope one of them is better suited for the intended purpose, or
enumerate what that needed information is and provide it in a safe way.
The example I have in mind is a situation where a micro-service
needs to select certain SAML attributes to generate an id, but the
available information is limited to the internal attribute names,
which introduces an indirect coupling.
When talking about invoking external components, we usually think in
blocking terms. This, however, may not always be the case. Examples
include the need to do heavy IO operations, or invoke another service
over the network from which we do not expect a response (ie, send
logs, stats or crash reports to a monitoring service). For such cases,
we may want to set a certain plugin to work in async mode.
There are also cases where we do expect an answer from an external
service, but it may come at an undefined time in the future and does
not interfere with the current flow. For these cases, we need to keep
some kind of state. This is now done in the form of a front channel
(ie, a cookie). What I'd like is to make this explicit and available
to the plugins as a module/function that handles state in a uniform
way.
Moving over to this structure will allow plugins to be much more
flexible. But there is still an issue hidden there. If we have plugins
as separate packages, developed and updated independently from the
core of Satosa, we also need a way to signal Satosa that such a
package has been installed, or updated and should be reloaded. This
will affect how all plugins are initialized and loaded internally, and
most probably it will affect how Satosa itself is initialized.
Along with that work, intrusive work needs to be done in error
handling and logging. At the moment, errors end up as plain text
messages (usually not that helpful) in the browser (which wraps the
text into basic html) and the logs. This needs to change in the
direction of structured error messages. Logging will also change in
that direction. Since the logs will contain this information
in a structured manner, the same payload can be returned as the error
message. I would like to have messages structured in JSON format (most
probably), with context, data and a backtrace included among other
information (such as timestamp, hostname, process-id, src-map,
request-id, and more). Provided this information, another process (a
frontend/error-handling service) can parse it and present the user
with the appropriate error message.
The structured logger and the error-handling service should be part of
the parameters that initialize a plugin. The plugins should make use of
them, in order for the service to have a uniform way of handling these
cross-cutting concerns.
The library I'm looking into to take care of the log format is structlog:
https://github.com/hynek/structlog
Other things to look at in the future are grouping and high-level
coordination between plugins in the form of invocation
strategies. Given three plugins, I may want to invoke them in order
until one succeeds in returning a result. Or I may want to invoke them
in parallel and get an array of results. Or I may want to invoke a
plugin that does a network operation, and if it fails, retry 3
times with an exponential backoff.
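Those strategies are simple enough to sketch as plain functions (illustrative only; a real implementation would also have to handle async plugins and error reporting):

```python
import time


def first_success(plugins, data):
    """Invoke plugins in order; return the first non-None result."""
    for plugin in plugins:
        result = plugin(data)
        if result is not None:
            return result
    return None


def all_results(plugins, data):
    """Invoke every plugin and collect the results ('parallel' semantics)."""
    return [plugin(data) for plugin in plugins]


def retry_with_backoff(plugin, data, retries=3, base_delay=0.01):
    """Retry a failing plugin, doubling the delay after each failure."""
    for attempt in range(retries):
        try:
            return plugin(data)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```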
To sum up, this was a (non-technical) overview of the things that I'd
like to do in relation to the "plugins". For some of the above, the
technical parts are still under consideration. There are more things
to be done for Satosa, both technical and not, which I hope to write
down, discuss, and do with everyone's help and suggestions.
Cheers,
--
Ivan c00kiemon5ter Kanakarakis >:3
Hi-
After upgrading our Satosa to the latest, I am getting some deprecation warnings in the Satosa log.
I'll address the other two warnings later (I think they are related to Python 3), but for now I'm interested in how to use the new hasher microservice. I am not having much luck finding documentation for it.
The warning in the log is this:
"/usr/local/lib/python3.6/site-packages/satosa/base.py:56: DeprecationWarning: 'USER_ID_HASH_SALT' configuration option is deprecated. Use the hasher microservice instead."
Can you point me to some help in using that new microservice?
Thanks!
Hi all,
I would like to find out the licenses of *all* components and
dependencies that make up SATOSA. Any suggestions on how to do that in
an automated fashion?
I could walk through the libraries manually in
virtualenv/lib/python3.5/site-packages, but that is rather tedious and
error-prone. I also found some scripts to do this for me; however,
these seem to just report what is in use at the OS/system level, not
specifically for our virtualenv.
Any suggestions, or a solution you may have used previously?
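One stdlib-only approach, as a sketch, is to walk the installed distributions of the active virtualenv with importlib.metadata (Python 3.8+). License metadata is inconsistent across packages, so the extraction below is best-effort:

```python
from importlib.metadata import distributions


def license_of(metadata):
    """Best-effort license extraction from a distribution's metadata."""
    # Prefer a 'License ::' trove classifier over the free-form field,
    # which is often empty or set to 'UNKNOWN'.
    for classifier in metadata.get_all("Classifier") or []:
        if classifier.startswith("License ::"):
            return classifier.split("::")[-1].strip()
    return metadata.get("License") or "UNKNOWN"


def licenses_in_environment():
    """Map each installed package in this environment to its license."""
    results = {}
    for dist in distributions():
        try:
            meta = dist.metadata
            results[meta["Name"] or "UNKNOWN"] = license_of(meta)
        except Exception:
            continue  # skip distributions with broken metadata
    return results


if __name__ == "__main__":
    for name, lic in sorted(licenses_in_environment().items(), key=str):
        print(f"{name}: {lic}")
```

Running this inside the virtualenv reports only what that environment actually has installed, which avoids the OS-level noise of the system-wide scripts.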
Niels
--
Niels van Dijk Technical Product Manager Trust & Security
Mob: +31 651347657 | Skype: cdr-80 | PGP Key ID: 0xDE7BB2F5
SURFnet BV | PO.Box 19035 | NL-3501 DA Utrecht | The Netherlands
www.surfnet.nlwww.openconext.org
Hello everyone,
I have been looking into PR #495
https://github.com/IdentityPython/pysaml2/pull/495
What the user needs is an option to configure the signing algorithm
that will be used by the SP to sign an authentication request. This
goes hand in hand with the digest algorithm and, by extension, with
which entities should be signed.
What the proposed one-line change does is allow this value to exist
in the pysaml2 Config object. By itself, it does not affect anything in
the code - it does not set the signing algorithm. It is only a
placeholder for a value. Something else is supposed to look at that
value, at the right time, and pass it as an argument to the
appropriate function/method.
This doesn't seem right. If it is in the configuration, then it should
actually do something - it should affect the way the library behaves.
Looking into this I stumbled upon some commits, made about a year ago,
that implement some of this functionality for the IdP part:
* 2aedfa0 - make both sign response and assertion configurable
  (adds the sign_assertion and sign_response options)
* bd4303a - Signing signature and digest algorithm configuration
  (adds the sign_alg and digest_alg options)
These are implemented in the SATOSA repository (see
satosa/frontends/saml2.py, the _handle_authn_response method). However,
their configuration hides inside the pysaml2 configuration (under
service/idp/policy). This is wrong - each project should be responsible
for its own configuration. The code that decides what should be signed
and how should live in pysaml2. If it is handled by SATOSA, then it
should be part of the SATOSA configuration and an override of the
pysaml2 configuration.
Moreover, it seems that this functionality was partially already there
in pysaml2 in the first place. See the Policy class in
saml2/assertion.py and its get_sign method. An option named 'sign' can
be defined under the service/idp/policy part of the configuration,
which takes an array of values that represent what should be signed,
for example:
service:
  idp:
    policy:
      sign:
        - response
        - assertion
So now we have both the above 'sign' option, plus 'sign_assertion' and
'sign_response', which should do the same thing.
What I would like to do is move the code introduced by the commits
above into pysaml2: this will allow a consistent behaviour whether
pysaml2 is used by SATOSA or some other project. Then the same options
can be used by the backend too, which would satisfy the user request.
Once that is done we can look into making the configuration work in
one way.
This is bigger than it looks. What happens now is that we define, for
example, that we want to use SHA512 as the sign_alg. This will be used
when an authentication response is formed, but it is ignored when, for
example, a logout request is to be created. This happens because the
configured signing algorithm is only applied to the authentication
response.
There are two ways to fix this:
- we either assume that a configuration option like 'sign_alg' is
global, and as such, it affects the signing algorithm of anything that
is to be signed,
- or we assume that it relates to the authentication req/response only
(and in that case it should probably be called authn_sign_alg or alike)
and require new options for other kinds of signatures
(logout_sign_alg, metadata_sign_alg, etc).
The first solution requires that we find all places in the code that
use signatures and make sure they respect the configuration. I have
already noted (a lot of) entry points to pysaml2 that should be
looking into the configuration to derive the signing and digest
algorithm values.
The second option is "easier" to work with, as it allows for an
incremental implementation of this request. It is also more flexible
for the end user, but at the same time more complex, as it requires
more configuration values to be set.
Of course we can have both: use an option like sign_alg to define the
signing algorithm, and use "suboptions" like authn_sign_alg to
override the sign_alg setting where needed.
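As a purely hypothetical illustration of that combined scheme (neither option name below is implemented today; the algorithm URIs are the standard xmldsig identifiers):

sign_alg: http://www.w3.org/2001/04/xmldsig-more#rsa-sha512
# hypothetical override for authentication req/responses only:
authn_sign_alg: http://www.w3.org/2001/04/xmldsig-more#rsa-sha256

Anything without a specific suboption (logout requests, metadata, etc) would then fall back to the global sign_alg.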
I hope this makes sense (even though it mixes at least four
configuration options together). If you have any comments, I'd like to
hear.
Cheers,
--
Ivan c00kiemon5ter Kanakarakis >:3