On Fri, 31 May 2019 at 12:37, Rainer Hoerbe <rainer at hoerbe.at> wrote:
>
> I summarized the handling of SATOSA_STATE as discussed in Tuesday’s meeting in the attached diagrams. I have two questions:
>
> a) Is this picture correct?
Looks right to me. btw, I think those diagrams are really helpful,
thanks for making them ;)
>
> b) Is there any purpose in loading the context from SATOSA_STATE when an AuthnRequest is received?
>
In general, no; it is there as part of the generic process (it does
not specialize for the authn-request flow).
But one can use this to make special things happen, e.g., using the
cookie for feature flags.
--
Ivan c00kiemon5ter Kanakarakis >:3
Hi,
I require the SATOSA SAMLFrontend to assert the following XML for
eduPersonTargetedID as part of an <AttributeStatement>
<Attribute FriendlyName="eduPersonTargetedID"
           Name="urn:oid:1.3.6.1.4.1.5923.1.1.1.10"
           NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri">
    <AttributeValue>
        <NameID
            Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"
            NameQualifier="https://proxy.my.org/idp/satosa"
            SPNameQualifier="https://service.my.org/sp/shibboleth">
            1088878806
        </NameID>
    </AttributeValue>
</Attribute>
Here the value 1088878806 for <NameID> is to be taken from a database (I will
be using an LDAP directory, but in general it is pulled from some storage),
the NameQualifier is the entityID of the SATOSA proxy IdP frontend (the
SAMLFrontend instance), and the SPNameQualifier is the entityID of the SP
that sent the <AuthnRequest> to the SATOSA proxy IdP frontend.
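For reference, the value above maps onto pysaml2's classes roughly as
follows; a minimal sketch, assuming the identifier has already been fetched
from LDAP (this is not existing SATOSA configuration):
````
# Minimal sketch (not existing SATOSA functionality): building the desired
# eduPersonTargetedID value with pysaml2, assuming the persistent identifier
# has already been looked up in LDAP or another store.
from saml2.saml import NameID, NAMEID_FORMAT_PERSISTENT

name_id = NameID(
    format=NAMEID_FORMAT_PERSISTENT,
    name_qualifier="https://proxy.my.org/idp/satosa",  # proxy frontend entityID
    sp_name_qualifier="https://service.my.org/sp/shibboleth",  # requesting SP
    text="1088878806",  # value pulled from storage
)
````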
Has anybody configured the existing SATOSA and pysaml2 code to assert such
XML? If so can you share some details of your configuration?
If not, is anybody using modified SATOSA or pysaml2 code that has not been
contributed yet to assert such XML and is willing to share and/or collaborate
on getting it contributed (relatively quickly)?
Thanks,
Scott K
P.S. I am fully aware that there are reasons to deprecate the use of
eduPersonTargetedID. For this particular deployment I need to assert this
attribute for the time being using SATOSA.
Hi,
Am I correct that the SATOSA SAMLBackend class currently has no way to
dynamically set "force_authn" so that the SAML authn request sent to the
authenticating (campus) IdP includes the forced reauthentication flag?
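For context, the knob does exist one layer down; a hedged sketch of the
pysaml2-level call that any dynamic setting would need to reach (function
name is made up):
````
# Sketch only: pysaml2's Saml2Client.prepare_for_authenticate() forwards
# extra keyword arguments to create_authn_request(), which understands
# force_authn. Nothing in SAMLBackend currently decides this per request.
from saml2.client import Saml2Client

def make_forced_authn_request(sp_client: Saml2Client, idp_entity_id: str):
    req_id, http_info = sp_client.prepare_for_authenticate(
        entityid=idp_entity_id,  # the authenticating (campus) IdP
        force_authn="true",      # serialized as ForceAuthn="true"
    )
    return req_id, http_info
````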
Thanks,
Scott K
Hi,
For a SAML proxy deployment (no OIDC here yet) I need a sophisticated
attribute release policy that should be largely driven by SAML metadata
entity categories. The policy is essentially to release the union of the
attribute sets for the entity categories (REFEDS R&S, CoCo, and a few
others) to which the entity belongs.
Later I am sure there will need to be per-entity adjustments (there
always are...).
Have other deployments already implemented tooling to implement such a
policy?
I don't see that the SAMLFrontend (or its subclasses) has the requisite
functionality. Nor does pySAML2 appear to have it; its policy-based
functionality appears to be mostly about statically configured filtering,
not driven by SAML metadata. Am I missing existing functionality?
If I need to develop the functionality, I am wondering whether it is
best done by implementing one or more response microservices, or by
evolving the SAMLFrontend code? A rough sketch of the microservice option
follows.
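Roughly like this, with a hypothetical get_entity_categories() helper and
made-up attribute sets:
````
# Rough sketch of the microservice option (names are made up): filter the
# attributes about to be released down to the union of the attribute sets
# tied to the entity categories the requesting SP belongs to.
from satosa.micro_services.base import ResponseMicroService

# Hypothetical policy: entity category URI -> internal attribute names.
RELEASE_POLICY = {
    "http://refeds.org/category/research-and-scholarship":
        {"givenname", "sn", "displayname", "mail", "edupersonprincipalname"},
    "http://www.geant.net/uri/dataprotection-code-of-conduct/v1":
        {"displayname", "mail"},
}

class EntityCategoryFilter(ResponseMicroService):
    def process(self, context, data):
        # get_entity_categories() is hypothetical; it would read the SP's
        # entity-category entity attributes from the trusted metadata.
        categories = get_entity_categories(data.requester)
        allowed = set().union(*(RELEASE_POLICY.get(c, set()) for c in categories))
        data.attributes = {
            name: values for name, values in data.attributes.items()
            if name in allowed
        }
        return super().process(context, data)
````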
Thoughts?
Thanks,
Scott K
Hi,
I just submitted a first draft of a PR for the SAMLUnsolictedFrontend
class.
The class provides all of the functionality of the SAMLFrontend class
but also enables an "unsolicited" endpoint that can be used to initiate
a SAML flow using a proprietary set of query string parameters; these are
not part of any SAML standard but closely follow similar functionality
from the Shibboleth project.
For example, one might do a GET to (line breaks added and query string
not encoded for clarity)
https://myproxy.my.org/saml2/unsolicited?
    providerId=https://mysp.my.org/sp/shibboleth
    &target=https://mysp.my.org/secure
    &shire=https://mysp.my.org/Shibboleth.sso/SAML2/POST
    &discoveryURL=https://mydisco.my.org/
This will cause a flow where
https://mydisco.my.org/
is used for IdP discovery followed by a SAML <Response> being sent to
the SP with entityID
https://mysp.my.org/sp/shibboleth
at the ACS URL
https://mysp.my.org/Shibboleth.sso/SAML2/POST
and with relay state
https://mysp.my.org/secure
The providerId query string parameter is required. The others are optional.
To prevent being an open relay:
- the SP entityID must match one found in the trusted metadata,
- the ACS URL must match one found in the trusted metadata for the SP,
- the relay state URL must have a scheme, host, and port that match the
  ACS URL, and
- the discovery service URL must be whitelisted in the configuration.
The relay state URL condition could be made more sophisticated; a sketch
of these checks is below.
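In pseudo-Python, the checks amount to something like this (placeholder
names, not the PR's actual code; trusted_acs_urls would come from the SP's
entry in the trusted metadata):
````
# Sketch of the anti-open-relay checks (placeholder names, not the PR code).
from urllib.parse import urlsplit

def validate_unsolicited(sp_entity_id, acs_url, relay_state, disco_url,
                         known_entity_ids, trusted_acs_urls, disco_whitelist):
    if sp_entity_id not in known_entity_ids:
        raise ValueError("unknown SP entityID")
    if acs_url and acs_url not in trusted_acs_urls:
        raise ValueError("ACS URL not found in trusted metadata for the SP")
    if relay_state:
        rs = urlsplit(relay_state)
        acs = urlsplit(acs_url or trusted_acs_urls[0])
        # netloc covers host and port; scheme covered separately
        if (rs.scheme, rs.netloc) != (acs.scheme, acs.netloc):
            raise ValueError("relay state does not match the ACS URL origin")
    if disco_url and disco_url not in disco_whitelist:
        raise ValueError("discovery service URL is not whitelisted")
````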
The functionality and the names of the input query parameters are
inspired by the Shibboleth IdP functionality described at
https://wiki.shibboleth.net/confluence/display/IDP30/UnsolicitedSSOConfigur…
and the SP functionality for discoveryURL described at
https://wiki.shibboleth.net/confluence/display/SP3/ContentSettings
I am not married to the names of the query parameters--aligning with the
Shibboleth project seems to make some sense, but they are using 'shire'
for historical reasons and a better name would be something like
'acs_url' or even just 'acs'.
The configuration is the same as for the SAMLFrontend base class, except
that you add something like

    unsolicited:
      endpoint: unsolicited
      discovery_service_whitelist:
        - https://mydisco.my.org/
With that configuration the endpoint is exposed at <backend name>/unsolicited
If instead you had

    unsolicited:
      endpoint: foo/bar
      discovery_service_whitelist:
        - https://mydisco.my.org/

then it would be exposed at <backend name>/foo/bar
I have not written any tests yet nor updated the documentation. If
nobody raises any objections I will proceed with doing that.
All input welcome.
Thanks,
Scott K
Hi,
I know it has been talked about as "doable", but has anybody already
deployed SATOSA with a response microservice that implements a "step-up"
flow to leverage a second factor (like Duo) when the authenticating IdP
does not assert that MFA was used?
If so, are you considering sharing it and/or contributing it to the code
base?
If not, but you are considering such an implementation/deployment, can
you indicate if you are interested in collaborating on the development
and testing?
Thanks,
Scott K
Hi to everybody,
I developed a microservice that can map specific SATOSA backends to
specific target entity IDs. A configuration example:
````
module: satosa.micro_services.custom_routing.DecideBackendByTarget
name: TargetRouter
config:
  target_mapping:
    "http://idpspid.testunical.it:8088": "spidSaml2"
    "http://strangeIDP.testunical.it:8081/saml2/metadata": "strangeSaml2"
````
I needed backend routing based on the target entity ID because I have
some SAML2 IdPs that only accept highly customized authn requests and
metadata. One example is the Italian SPID federation, through which my
organization will soon federate via SATOSA. Another example is the need
to use different configurations, like encryption and digest algorithms,
depending on the target IdP.
I looked into the DecideBackendByRequester microservice, but soon
realized it was made for a different goal: there the routing is keyed
on the requester entity ID, not the target entity ID.
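In essence the new microservice does something like this (a simplified
sketch; the real code is in the PR below):
````
# Simplified sketch of DecideBackendByTarget: reroute the flow to a
# different backend when the target entity ID that the frontend decorated
# the context with appears in the configured mapping.
from satosa.context import Context
from satosa.micro_services.base import RequestMicroService

class DecideBackendByTarget(RequestMicroService):
    def __init__(self, config, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.target_mapping = config["target_mapping"]

    def process(self, context, data):
        target = context.get_decoration(Context.KEY_TARGET_ENTITYID)
        if target in self.target_mapping:
            context.target_backend = self.target_mapping[target]
        return super().process(context, data)
````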
As you can see in https://github.com/IdentityPython/SATOSA/pull/220
I made a dedicated branch so the PR contains only this feature.
I'm also curious about the SATOSA milestones: which features are
currently in development, which will make up the next release, and
whether it would be possible to have a dev branch to open PRs against.
I don't know whether this microservice sounds useless to you; I searched
a lot before programming it, and I hope I have built middleware that can
be useful for the SATOSA community.
Hope to hear your comments soon
Hi,
Right now the saml2.py in src/satosa/backends/ has
    def disco_query(self):
        """
        Makes a request to the discovery server
        :type context: satosa.context.Context
        :type internal_req: satosa.internal.InternalData
        :rtype: satosa.response.SeeOther
        :param context: The current context
        :param internal_req: The request
        :return: Response
        """
        return_url = self.sp.config.getattr("endpoints", "sp")["discovery_response"][0][0]
        loc = self.sp.create_discovery_service_request(
            self.discosrv, self.sp.config.entityid, **{"return": return_url})
        return SeeOther(loc)
Essentially this restricts the flow to one and only one IdP discovery
service that is configured statically.
I propose that this method be enhanced so that it inspects the context
and internal data, and if it finds a URL for the discovery service to
use, that URL overrides what is in the configuration.
Then one can configure a request microservice that uses some logic to set
the URL for the discovery service, based on, say, which SP the
authentication request came from.
Since the comment for the method already mentions the context and
internal data, I suspect this functionality was designed but never
implemented. A sketch of what I have in mind is below.
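Concretely, something along these lines (the context parameter and the
"discovery_service_url" decoration key are assumptions for illustration,
not existing API):
````
# Sketch of the proposed enhancement: a request microservice decorates the
# context with a discovery service URL, and disco_query() prefers it over
# the statically configured one.
def disco_query(self, context=None):
    return_url = self.sp.config.getattr("endpoints", "sp")["discovery_response"][0][0]
    disco_url = self.discosrv
    if context is not None:
        # fall back to the configured service when no decoration is present
        disco_url = context.get_decoration("discovery_service_url") or self.discosrv
    loc = self.sp.create_discovery_service_request(
        disco_url, self.sp.config.entityid, **{"return": return_url})
    return SeeOther(loc)
````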
Any objections to me implementing it?
Any other comments or input?
Thanks,
Scott K
We collected some use cases during the meeting @TIIME2019. I put them into the wiki and added short descriptions:
https://github.com/IdentityPython/SATOSA/wiki
These are drafts; please feel free to improve them directly. In particular, I did not grasp UC7/Policy Enforcement; that one needs a useful description (Scott K.?)
Cheers, Rainer
Hello everyone,
Going forward, one of the things we need to do is revisit how
micro-services are structured. This is a big task, for which there
have been previous discussions on the mailing list and related github
issues. Those discussions mainly focused on splitting the
micro-services out into a separate repository. While this was given
a shot, it didn't work out smoothly.
With this email, I want to set the high-level requirements for a
plugin architecture which will help with separating the
micro-services, but also frontends and backends, into their own
packages, and make it easy to plug in more micro-services, frontends,
backends (or other types of plugins).
Currently, the Satosa repository contains the core of satosa, the
supported backends, the supported frontends, and some micro-services.
Ideally, the satosa repository would contain only the core component.
What I want to do is have each backend, frontend and micro-service be
its own package. These packages can then specify their desired
dependencies, example configurations, documentation, as well as
installation and deployment instructions. Finally, the individual
plugins can be developed and updated without the need to change the
core.
And we can almost do that. We can separate backends, frontends, and
micro-services out - not into a grouped repository (i.e., a
"micro-services repository"), but each plugin into its own
repository and python package. There is little work that needs to be
done to enable this, mainly decoupling the core from two
micro-services (CMservice and AccountLinking, which have special
requirements hardcoded).
But there is more than separating the plugins out. Separating the
plugins enables us to specify the control we want to have over the
provided functionality, the way certain aspects of the flow take
place, how state is managed, etc. By defining what a plugin is, we can
treat frontends, backends and micro-services in a uniform way.
A plugin is an external component that will be invoked by Satosa at
certain points of an authentication flow. We need to _name_ those
points and have plugins declare them in their configuration, thus
deciding when they should be invoked (and whether they should be
invoked at multiple points). At different points in the flow, different
information is available. Plugins should be provided most of the
available information in a structured manner (the internal
representation).
Right now, we have four kinds of plugins (which can be thought of as
roles), invoked at specific points of the flow:
- frontends: invoked when an authentication request is received
- backends: invoked when an authentication response is received
- request micro-services: invoked before the authentication request is
converted to the appropriate protocol
- response micro-services: invoked before the authentication response
is converted to the appropriate protocol
I'm not certain that this separation is the best; I can see the need
by some micro-services to know more than just the internal
representation of the available information. This can be solved in two
ways: introduce more points in the flow where a plugin can be invoked
and hope one of them better suits the intended purpose, or enumerate
what the needed information is and provide it in a safe way. The
example I have in mind is a situation where a micro-service needs to
select certain SAML attributes to generate an id, but the available
information is limited to the internal attribute names, which
introduces an indirect coupling.
When talking about invoking external components, we usually think in
blocking terms. This, however, may not always be the case. Examples
include the need to do heavy IO operations, or to invoke another
service over the network from which we do not expect a response (e.g.,
sending logs, stats or crash reports to a monitoring service). For such
cases, we may want to set a certain plugin to work in async mode; see
the sketch below.
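One possible shape of that, as a sketch (invoke() and async_mode are
made-up names, not existing Satosa API):
````
# Fire-and-forget invocation on a worker thread for plugins whose result
# the current flow does not need; blocking plugins run inline as before.
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=4)

def invoke(plugin, context, data):
    if getattr(plugin, "async_mode", False):
        _executor.submit(plugin.process, context, data)  # do not block
        return data
    return plugin.process(context, data)
````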
There are also cases where we do expect an answer from an external
service, but it may come at an undefined time in the future and does
not interfere with the current flow. For these cases, we need to keep
some kind of state. This is now done through a front channel
(i.e., a cookie). What I'd like is to make this explicit and available
to the plugins as a module/function that handles state in a uniform
way.
Moving over to this structure will allow plugins to be much more
flexible. But there is still an issue hidden there. If we have plugins
as separate packages, developed and updated independently from the
core of Satosa, we also need a way to signal Satosa that such a
package has been installed or updated and should be reloaded. This
will affect how all plugins are initialized and loaded internally, and
most probably how Satosa itself is initialized.
Along with that work, intrusive work needs to be done in error
handling and logging. At the moment, errors end up as plain text
messages (usually not that helpful) in the browser (which wraps the
text into basic html) and in the logs. This needs to change in the
direction of structured error messages. Logging will also change in
that direction. Since the logs will contain this information in a
structured manner, the same payload can be returned as the error
message. I would like to have messages structured in JSON format (most
probably), with context, data and a backtrace included among other
information (such as timestamp, hostname, process-id, src-map,
request-id, and more). Provided this information, another process (a
frontend/error-handling service) can parse it and present the user
with the appropriate error message.
The structured logger and the error-handling service should be part of
the parameters that initialize a plugin. The plugins should make use of
them, so the service has a uniform way of handling these cross-cutting
concerns.
The library I'm looking into, to take care of the log format is structlog:
https://github.com/hynek/structlog
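A toy illustration of the kind of structured entries I mean (the
processor choice is just an example):
````
# Emit JSON log lines carrying structured context alongside the event name.
import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer(),
    ]
)

log = structlog.get_logger()
log.info("authn_response_received",
         request_id="abc123", frontend="Saml2IDP", backend="Saml2")
# -> {"request_id": "abc123", "frontend": "Saml2IDP", "backend": "Saml2",
#     "event": "authn_response_received", "timestamp": "..."}
````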
Other things to look at in the future are grouping and high-level
coordination between plugins in the form of invocation strategies.
Given three plugins, I may want to invoke them in order until one
succeeds in returning a result. Or, I may want to invoke them in
parallel and get an array of results. Or, I may want to invoke a plugin
that does a network operation and, if it fails, retry 3 times with
exponential backoff. Toy sketches of such strategies are below.
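For example (a plugin here is just a callable; all names are made up):
````
# Toy sketches of the three invocation strategies mentioned above.
import time
from concurrent.futures import ThreadPoolExecutor

def first_success(plugins, *args):
    # Invoke plugins in order until one returns a result.
    for plugin in plugins:
        result = plugin(*args)
        if result is not None:
            return result
    return None

def in_parallel(plugins, *args):
    # Invoke all plugins in parallel and collect an array of results.
    with ThreadPoolExecutor() as pool:
        return [f.result() for f in [pool.submit(p, *args) for p in plugins]]

def with_retries(plugin, *args, attempts=3, base_delay=1.0):
    # Retry a failing plugin with exponential backoff.
    for attempt in range(attempts):
        try:
            return plugin(*args)
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
````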
To sum up, this was a (non-technical) overview of the things that I'd
like to do in relation to the "plugins". For some of the above, the
technical parts are still under consideration. There are more things
to be done for Satosa, both technical and not, which I hope to write
down, discuss and do with everyone's help and suggestions.
Cheers,
--
Ivan c00kiemon5ter Kanakarakis >:3