Hello everyone,
I have been looking into PR #495
https://github.com/IdentityPython/pysaml2/pull/495
What the user needs is an option to configure the signing algorithm
that the SP will use to sign an authentication request. This goes
hand in hand with the digest algorithm and, by extension, with the
question of which entity should be signed.
What the proposed one-line change does is allow this value to exist
in the pysaml2 Config object. By itself, it does not affect anything
in the code - it does not set the signing algorithm. It is only a
placeholder for a value; something else is expected to look at that
value, at the right time, and pass it as an argument to the
appropriate function/method.
This doesn't seem right. If it is in the configuration, then it should
actually do something - it should affect the way the library behaves.
Looking into this I stumbled upon some commits, made about a year ago,
that implement some of this functionality for the IdP part:
* 2aedfa0 - make both sign response and assertion configurable
  (adds the sign_assertion and sign_response options)
* bd4303a - Signing signature and digest algorithm configuration
  (adds the sign_alg and digest_alg options)
These are implemented in the SATOSA repository (see the
_handle_authn_response method in satosa/frontends/saml2.py). However,
their configuration is embedded within the pysaml2 configuration
(under service/idp/policy). This is wrong - each project should be
responsible for its own configuration. The code that decides what
should be signed, and how, should live in pysaml2. If it is handled
by SATOSA, then it should be part of the SATOSA configuration and act
as an override of the pysaml2 configuration.
Moreover, it seems that part of this functionality was already there
in pysaml2 in the first place. See the Policy class in
saml2/assertion.py and its get_sign method. An option named 'sign'
can be defined under the service/idp/policy part of the
configuration, holding an array of values that represent what should
be signed, for example:
service:
  idp:
    policy:
      sign:
        - response
        - assertion
So now we have both the above 'sign' option and the 'sign_assertion'
and 'sign_response' options, which are supposed to do the same thing.
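To illustrate the kind of lookup involved (this is a hypothetical sketch, not the actual code of saml2.assertion.Policy.get_sign), a per-SP policy with a 'default' fallback could resolve what to sign like this:

```python
# Hypothetical sketch of a policy lookup, similar in spirit to
# saml2.assertion.Policy.get_sign; NOT the actual pysaml2 code.

def get_sign(policy, sp_entity_id):
    """Return the list of entities to sign for a given SP.

    Per-SP settings override the 'default' section, mirroring how
    pysaml2 policies are usually structured.
    """
    conf = policy.get(sp_entity_id, policy.get("default", {}))
    return conf.get("sign", [])

policy = {
    "default": {"sign": ["response"]},
    "https://sp.example.org": {"sign": ["response", "assertion"]},
}

print(get_sign(policy, "https://sp.example.org"))    # ['response', 'assertion']
print(get_sign(policy, "https://other.example.org")) # ['response']
```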
What I would like to do is move the code introduced by the commits
above into pysaml2: this will give consistent behaviour whether
pysaml2 is used by SATOSA or by some other project. The same options
can then be used by the backend too, which would satisfy the user's
request. Once that is done, we can look into consolidating the
configuration so that it works in a single, consistent way.
This is bigger than it looks. What happens now is that we define, for
example, that we want to use SHA512 as the sign_alg. This will be
used when an authentication response is formed, but it is ignored
when, for example, a logout request is created. This happens because
the configured signing algorithm is only honoured for the
authentication response.
There are two ways to fix this:
- we either assume that a configuration option like 'sign_alg' is
global and, as such, affects the signing algorithm of anything that
is to be signed
- or we assume that it relates to the authentication request/response
only (in which case it should probably be called authn_sign_alg or
similar) and require new options for other kinds of signatures
(logout_sign_alg, metadata_sign_alg, etc).
The first solution requires that we find all the places in the code
that use signatures and make sure they respect the configuration. I
have already noted (a lot of) entry points to pysaml2 that should be
looking into the configuration to derive the signing and digest
algorithm values.
The second option is "easier" to work with, as it allows for an
incremental implementation of this request. It is also more flexible
for the end user, but at the same time more complex, as it requires
more configuration values to be set.
Of course, we can have both: use an option like sign_alg to define
the signing algorithm, and use "suboptions" like authn_sign_alg to
override the sign_alg setting where needed.
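A minimal sketch of how such a layered lookup could be resolved (the option names authn_sign_alg and logout_sign_alg are the hypothetical ones discussed above, and the default URI is just a plausible placeholder):

```python
# Hypothetical option resolution for the layered scheme discussed above:
# a context-specific option overrides the global sign_alg, which in turn
# overrides a library default.

DEFAULT_SIGN_ALG = "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"

def resolve_sign_alg(config, context):
    """Pick the signing algorithm for a context ('authn', 'logout', ...).

    Order of precedence: <context>_sign_alg > sign_alg > library default.
    """
    return (
        config.get(f"{context}_sign_alg")
        or config.get("sign_alg")
        or DEFAULT_SIGN_ALG
    )

config = {
    "sign_alg": "http://www.w3.org/2001/04/xmldsig-more#rsa-sha512",
    "authn_sign_alg": "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256",
}

print(resolve_sign_alg(config, "authn"))   # context-specific override wins
print(resolve_sign_alg(config, "logout"))  # falls back to the global sign_alg
```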
I hope this makes sense (even though it mixes at least four
configuration options together). If you have any comments, I'd like
to hear them.
Cheers,
--
Ivan c00kiemon5ter Kanakarakis >:3
We did not have time in the last call to discuss this:
There are use cases where we need to share state between a request and a response microservice. We already have two such cases, so I suggest defining a common method to achieve this. The same approach could also be used to access the common config, e.g. if you need to know the proxy configuration (such as a backend entityid) in a microservice.
A simple mechanism would be to use a module-level variable as a singleton:
=====================
shared_state.py

state = {}
---
plugins/microservices/a.py

from shared_state import state  # the import executes only once
…
state['a'] = 'foo'
---
plugins/microservices/b.py

from shared_state import state

whatever(state['a'])
=====================
I think that for just passing request status to response microservices and passing config data around, this should be good enough. There are several alternatives, like the Borg pattern, which I find harder to read.
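For comparison, a minimal sketch of the Borg pattern mentioned above: instead of a module-level dict, every instance of the class shares one __dict__:

```python
# Minimal Borg pattern sketch: all instances share the same state dict,
# so attribute writes on one instance are visible on every other.

class Borg:
    _shared_state = {}

    def __init__(self):
        self.__dict__ = self._shared_state

a = Borg()
b = Borg()
a.value = "foo"
print(b.value)  # "foo" - the state is shared across instances
```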
- Rainer
Hi all,
I created a simple LDAP client on top of ldap3, just as our
ldap_store_attributes MS is.
https://github.com/peppelinux/pyLDAP
I think it would be better to handle all the configuration parameters
grouped according to the API methods that are implemented, like in
https://github.com/peppelinux/pyLDAP/blob/master/settings.py.example#L4
At the moment we map them manually from the configuration into the
ldap_store MS; that approach would instead permit something like:
https://github.com/peppelinux/pyLDAP/blob/master/client.py#L32
https://github.com/peppelinux/pyLDAP/blob/master/client.py#L41
Things would come as they are from the YAML configuration, without
any additional mapping in the MS code.
Another topic is the possibility to decouple standalone clients from
the MSs, as a guiding principle.
The tool I showed can fetch data from multiple sources using a single
configuration.
It can also apply embedded rewrite rules. Doing this in the client,
and keeping that client decoupled from the MS code, would make
debugging and app/code reuse easier. In the MS code we would only
include calls to the client's API, to get the clients working and
fetch from them what is needed. They could also scale up more easily
in a multiprocessing setup this way: multiple clients with the same
methods and a similar API could be parallelized for faster data
aggregation and account linking. The same methods could be used for a
WS service with a SOAP client, a noSQL client, and others.
These thoughts also tie in with our latest comments about the "global
configuration visible to MSs" and "a shareable context for MSs", in
case these ideas can help in a wider approach with potential benefits
for the future.
I share it as it is;
see you soon.
Hi folks,
I'm considering developing a microservice on top of asyncio to manage
multiple connections to many LDAP servers (or servers of any kind).
I think this would be the best solution for performant, flexible, and
highly customizable account linking.
What do you think?
https://docs.python.org/3/library/asyncio.html
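A minimal asyncio sketch of the idea: query several servers concurrently and merge the answers for account linking. The query_server coroutine is a stand-in for a real LDAP search (e.g. via ldap3's async strategies); names and delays are made up:

```python
# Hypothetical sketch: query several (stand-in) LDAP servers concurrently
# with asyncio and aggregate the results for account linking.

import asyncio

async def query_server(name, delay, result):
    """Stand-in for a real async LDAP search against one server."""
    await asyncio.sleep(delay)  # simulate network latency
    return name, result

async def aggregate(servers):
    """Run all lookups concurrently and merge the answers by server."""
    results = await asyncio.gather(
        *(query_server(name, delay, result) for name, delay, result in servers)
    )
    return dict(results)

servers = [
    ("ldap1.example.org", 0.01, {"uid": "alice"}),
    ("ldap2.example.org", 0.02, {"mail": "alice at example.org"}),
]
merged = asyncio.run(aggregate(servers))
print(merged)
```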
--
____________________
Dott. Giuseppe De Marco
CENTRO ICT DI ATENEO
University of Calabria
87036 Rende (CS) - Italy
Phone: +39 0984 496961
e-mail: giuseppe.demarco at unical.it
I would like to discuss a logging feature that I would like to see in SATOSA. In PR 237 I proposed adding a log filter that would enhance SATOSA's logging capabilities. Ivan rejected it for the (in general good) reason that the proxy should log everything and log processing should be external.
I agree in general, but there are bits where I still think this would be useful in SATOSA. These are:
1. In production environments it is unlikely that the full set of debug information is pushed to a logging service. However, it might be useful to get debug-level data for certain selections. Usually that would be based on IP addresses, which should not be too complicated to implement.
2. In a dev environment one is easily inundated with debug data. Shibboleth has a nice feature providing logging levels for certain aspects, such as XML tooling or SAML message de-/encoding. I find this capability quite useful, because in my dev environment I do not have an elaborate log processor. Debugging attribute configuration in SATOSA could be helped by selectively enabled messages.
If done properly, this change has the following impact on all modules that instantiate a logger:
1. refactor the satosa_logger wrapper back to the native logger with a similar signature
2. add a log filter after each get_logger()
The log filter (logging.Filter) is orthogonal to structured logging, and may even help to improve it.
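A hedged sketch of the IP-based selection described in point 1, using the standard library's logging.Filter. The client_ip record attribute is an assumption about how the request IP could be attached to log records (e.g. via the extra= argument), not an existing SATOSA convention:

```python
# Sketch of an IP-selective logging.Filter: DEBUG records pass only when
# they carry a client_ip in the configured allow-list; INFO and above
# always pass. The 'client_ip' record attribute is hypothetical.

import logging

class DebugForSelectedIPs(logging.Filter):
    def __init__(self, debug_ips):
        super().__init__()
        self.debug_ips = set(debug_ips)

    def filter(self, record):
        if record.levelno >= logging.INFO:
            return True
        return getattr(record, "client_ip", None) in self.debug_ips

logger = logging.getLogger("satosa.demo")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.addFilter(DebugForSelectedIPs(["192.0.2.10"]))
logger.addHandler(handler)

# DEBUG is emitted only for the selected IP:
logger.debug("verbose detail", extra={"client_ip": "192.0.2.10"})   # emitted
logger.debug("verbose detail", extra={"client_ip": "203.0.113.5"})  # filtered
logger.info("always visible")                                       # emitted
```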
see: https://github.com/IdentityPython/SATOSA/pull/237
Cheers, Rainer
Hi,
As already mentioned to Ivan during our previous meeting, I do not
use docker but a bootstrap procedure on top of virtualenv.
In production I use uwsgi instead of gunicorn. A configuration
example is here:
https://github.com/peppelinux/Satosa-saml2saml/tree/master/example/uwsgi_se…
If it could be useful to the community, we could ship these examples
directly in SATOSA. With uwsgi I get a lot of professional features,
like an HTTP statistics server in JSON format, triggers that reload
workers on file change (or simply a touch), and many others that are
not yet included there.
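For illustration, a minimal uwsgi ini sketch using the stats server and touch-reload features mentioned above. All paths, ports, and the module name are assumptions, not taken from the linked example:

```ini
[uwsgi]
; hypothetical minimal SATOSA deployment; adjust paths to your setup
module = satosa.wsgi:app
master = true
processes = 4
http-socket = :8080
; JSON statistics server mentioned above
stats = :9191
; reload workers when this file changes (or is simply touched)
touch-reload = /etc/satosa/proxy_conf.yaml
```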
I share it as it comes; if useful, I can do a style and comments
clean-up.