*Idpy meeting 3 September 2024*
Attendees: Johan W, Johan L, Shayna, Ivan, Hannah S
0 - Agenda bash
1 - Project review
a. General - Ivan's plan is to merge things that don't break anyone's flow.
b. OIDC libraries - https://github.com/IdentityPython (idpy-oidc, JWTConnect-Python-CryptoJWT, etc)
- pyop - there are some changes that can go ahead and won't block anything, and then there will be a new release - https://github.com/IdentityPython/pyop/pull/55
- the plan is still to move away from pyop, however.
- more patches coming up for idpy-oidc - these are in internal repos, so no PRs.
- a configuration change is needed to handle redirect URIs better - URLs with special characters such as spaces work with some flows but not with others (see the sketch below).
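A generic illustration of the kind of mismatch this may refer to (an assumption, not taken from the idpy-oidc code; the URLs are invented): a registered redirect URI containing a space can arrive percent-encoded in one flow and literal in another, so a plain string comparison only matches after normalization.

    # hypothetical redirect URI comparison; exact matching fails until the
    # received value is decoded back to its literal form
    from urllib.parse import unquote

    registered = "https://rp.example.org/cb path"   # registered with a literal space
    received = "https://rp.example.org/cb%20path"   # percent-encoded by one flow

    print(registered == received)           # False: literal space vs %20
    print(registered == unquote(received))  # True once normalized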
- reuse indicators - there are specific use cases for how they are to be treated, and those have been encoded into tests; the code changes come later. Also separating what happens when reuse indicators are in place with regard to token exchange - the two specs reference each other but also conflict in some ways.
- introducing new concepts around audience policies (a hypothetical sketch follows below)
- a mechanism that allows you to state an audience
- and what requirements you have for the audience: allow multiple values, one value, etc.
- this has nothing to do with reuse indicators, where you signal which value or values should be set as the audience.
- there are also some questions as to how things work and when resolution takes place based on the different layers - you could request resource X and that means the audience will be service 1 - the identifiers can be different.
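As a purely hypothetical sketch of what such an audience policy could look like (the names and structure below are invented for illustration and are not the actual idpy-oidc API):

    # hypothetical audience policy: which audience values are acceptable and
    # whether one or several values may end up in the token
    audience_policy = {
        "allowed_audiences": ["https://service1.example.org",
                              "https://service2.example.org"],
        "multiple_values": False,  # exactly one audience per token
    }

    def resolve_audience(requested, policy):
        """Keep only the requested audiences that the policy permits."""
        allowed = [aud for aud in requested if aud in policy["allowed_audiences"]]
        if not policy["multiple_values"]:
            allowed = allowed[:1]
        return allowed

    # requesting resource X may resolve to the identifier of service 1,
    # so the requested identifier and the resulting audience can differ
    print(resolve_audience(["https://service1.example.org"], audience_policy))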
c. SATOSA - https://github.com/IdentityPython/SATOSA
- Anything behind a feature flag can probably be merged, such as the logout capabilities that Hannah S and Ali have been working on.
- logout PRs that can be merged:
  - https://github.com/IdentityPython/SATOSA/pull/444
  - https://github.com/IdentityPython/SATOSA/pull/431
- backend/frontend connections - need some discussion - complex:
  - https://github.com/IdentityPython/SATOSA/pull/449
  - https://github.com/IdentityPython/SATOSA/pull/450
- These will be easy to pull in:
  - Apache configuration: https://github.com/IdentityPython/SATOSA/pull/462
  - TU Wien SP configuration example: https://github.com/IdentityPython/SATOSA/pull/469
  - EntraID backend: https://github.com/IdentityPython/SATOSA/pull/461
  - documentation cleanup: https://github.com/IdentityPython/SATOSA/pull/458
  - xmlsec breaking: https://github.com/IdentityPython/SATOSA/pull/452
  - dev processes - pre-commit and flake: https://github.com/IdentityPython/SATOSA/pull/454
- a bit harder:
  - types - needs thought but can probably move forward: https://github.com/IdentityPython/SATOSA/pull/435
  - removing pyoidc, separating dependencies between SATOSA and pysaml2 - this is a breaking change; it will require people using SATOSA to install pysaml2 separately now: https://github.com/IdentityPython/SATOSA/pull/442
- more involved:
  - Kristof - base paths - need to make sure we're not breaking anything. Paths that were there before should still just work: https://github.com/IdentityPython/SATOSA/pull/451
  - adding new member services - exposing information - needs to be done a different way: https://github.com/IdentityPython/SATOSA/pull/448
  - LDAP plugins - add tests - not pressing, on hold
  - backend and frontend names are unique - this PR should go in, but not in the suggested format.
d. pySAML2 - https://github.com/IdentityPython/pysaml2
- To be merged:
  - xmlenc: https://github.com/IdentityPython/pysaml2/pull/964
  - EC types: https://github.com/IdentityPython/pysaml2/pull/897
  - MDQ: https://github.com/IdentityPython/pysaml2/pull/959
  - domain validation: https://github.com/IdentityPython/pysaml2/pull/951 - needs a few changes, then will be easy to pull in
  - UTC: https://github.com/IdentityPython/pysaml2/pull/939 - can go in with a little bit of checking (a small illustration follows this list)
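As a general Python illustration of the kind of UTC handling involved (not code taken from the PR): naive datetime.utcnow() values carry no timezone, while timezone-aware values compare and serialize unambiguously.

    # naive vs timezone-aware UTC timestamps in Python
    from datetime import datetime, timezone

    naive = datetime.utcnow()           # no tzinfo attached
    aware = datetime.now(timezone.utc)  # explicitly UTC

    print(naive.tzinfo)       # None
    print(aware.isoformat())  # e.g. 2024-09-03T12:00:00+00:00

    # ordering comparisons between the two raise a TypeError
    try:
        print(naive < aware)
    except TypeError as exc:
        print(exc)  # can't compare offset-naive and offset-aware datetimes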
- Windows support - these will probably be closed and done differently - maybe using signals from garbage collector cleanup would be better as a workaround? Really needs to be addressed by Python itself. (A rough sketch of the workaround idea follows this item.)
  - https://github.com/IdentityPython/pysaml2/pull/933
  - https://github.com/IdentityPython/pysaml2/pull/931
  - https://github.com/IdentityPython/pysaml2/pull/665
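A rough sketch of the garbage-collector-cleanup idea, under the assumption that the Windows problem concerns temporary files that cannot be reopened while still held open (the class and names below are invented, not taken from the PRs):

    import os
    import tempfile
    import weakref

    class TempXmlFile:
        """Create a temp file that can be reopened on Windows and is removed
        again when this wrapper object is garbage collected."""

        def __init__(self, data: bytes):
            # delete=False so an external tool can reopen the file on Windows
            tmp = tempfile.NamedTemporaryFile(suffix=".xml", delete=False)
            tmp.write(data)
            tmp.close()
            self.name = tmp.name
            # finalizer runs when the object is collected and unlinks the file
            self._finalizer = weakref.finalize(self, os.unlink, tmp.name)

    f = TempXmlFile(b"<x/>")
    print(os.path.exists(f.name))  # True while the wrapper is alive
    del f                          # collection triggers the cleanup finalizer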
- important: encryption algos: https://github.com/IdentityPython/pysaml2/pull/924 - this one needs to be checked - cannot just be merged
- dev processes - these will probably be merged - run tests when a merge request is opened; release packages when a merge request is merged; etc.
  - https://github.com/IdentityPython/pysaml2/pull/882
  - https://github.com/IdentityPython/pysaml2/pull/816
- lxml: https://github.com/IdentityPython/pysaml2/pull/940 - not complete; it is a draft. It is a basis for using lxml everywhere in the project. The lxml parser is QName-aware - it knows when an XML attribute contains a namespace or a type. The default Python parser does not do anything with namespaces, so when you try to do validation the namespace is missing because Python has optimized it away (removed it). There are certain use cases where this is a problem. Ivan may also talk to a person who has an XML validator that has a way of using the default Python parser but is still able to check for those edge cases. (A small illustration of the QName issue follows this item.)
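A small illustration of the QName point, assuming the familiar xsi:type case (a generic example, not code from the draft PR): with the standard-library parser, the prefix used inside an attribute value can no longer be resolved, while lxml keeps the in-scope namespace declarations on each element.

    from io import BytesIO

    DOC = (b'<a:Attribute xmlns:a="urn:example:attrs" '
           b'xmlns:xs="http://www.w3.org/2001/XMLSchema" '
           b'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" '
           b'xsi:type="xs:string">hello</a:Attribute>')

    # standard-library parser: the attribute value is just the text "xs:string";
    # the mapping from the "xs" prefix to its namespace has been dropped
    import xml.etree.ElementTree as ET
    elem = ET.parse(BytesIO(DOC)).getroot()
    print(elem.get("{http://www.w3.org/2001/XMLSchema-instance}type"))  # xs:string

    # lxml keeps the in-scope declarations, so the QName in the attribute
    # value can still be resolved during validation
    from lxml import etree
    lelem = etree.parse(BytesIO(DOC)).getroot()
    print(lelem.nsmap["xs"])  # http://www.w3.org/2001/XMLSchema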
e. Any other project (pyFF, djangosaml2, pyMDOC-CBOR, etc)
- a question on Slack concerning pyFF from Hannah at CERN. Ivan will try to get to it today.
- pyFF - Ivan needs to look at this issue: https://github.com/IdentityPython/pyFF/issues/264
- pyFF - MDQ - Ivan would like a configuration that says whether output should go into the file system (what happens now), into S3, or into a database (in which case you don't need a Discovery Service - everything can be an API call). In the database case, things can be quickly sorted and indexed the way you like. The problem is that there is no mapping between XML and a table, so we need to think about how to do indexing without the schemas, and so on. This will unlock capabilities that we don't have right now and also simplify what we do with the discovery service. (A rough sketch of the idea follows this item.)
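A rough sketch of the pluggable-output idea (invented names, not existing pyFF code), showing how the same pipeline could write per-entity metadata to the file system, to S3, or later to a database:

    from abc import ABC, abstractmethod
    import hashlib
    import os

    class MetadataStore(ABC):
        @abstractmethod
        def put(self, entity_id: str, entity_xml: bytes) -> None: ...

    class FileSystemStore(MetadataStore):
        """Roughly what happens today: one file per entity on disk."""
        def __init__(self, base_dir: str):
            self.base_dir = base_dir
        def put(self, entity_id: str, entity_xml: bytes) -> None:
            name = hashlib.sha1(entity_id.encode()).hexdigest() + ".xml"
            with open(os.path.join(self.base_dir, name), "wb") as fh:
                fh.write(entity_xml)

    class S3Store(MetadataStore):
        """Same interface, but the objects land in a bucket."""
        def __init__(self, bucket: str):
            import boto3  # assumes boto3 is installed
            self.bucket = bucket
            self.client = boto3.client("s3")
        def put(self, entity_id: str, entity_xml: bytes) -> None:
            self.client.put_object(Bucket=self.bucket, Key=entity_id,
                                   Body=entity_xml)

    # a database-backed store would add indexed columns (registration
    # authority, entity category, ...) so discovery becomes a query/API call
    # instead of a static file lookup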
- Also need to change the way we parse XML into memory - this can be done within the entities descriptor and shouldn't be hard. It would mean we don't need a big machine or lots of resources to parse a large aggregate every 6 hours. pyFF could possibly be put into a lambda (a parsing sketch follows this item).
  - Or using S3 could make this a serverless process.
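A sketch of the incremental-parsing idea using a generic lxml pattern (not existing pyFF code): iterate over one EntityDescriptor at a time inside the EntitiesDescriptor and release it after processing, so the whole aggregate never has to sit in memory.

    from lxml import etree

    MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

    def iter_entities(path):
        """Yield (entityID, element) pairs while keeping memory use flat."""
        for _event, elem in etree.iterparse(
                path, events=("end",), tag=f"{{{MD_NS}}}EntityDescriptor"):
            yield elem.get("entityID"), elem
            # drop the processed element and any already-handled siblings
            elem.clear()
            while elem.getprevious() is not None:
                del elem.getparent()[0]

    # usage: stream a large aggregate straight into a store, e.g.
    # for entity_id, entity in iter_entities("aggregate.xml"):
    #     store.put(entity_id, etree.tostring(entity))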
- SATOSA itself can also be simplified, but the whole configuration would need to change. They have looked at moving toward a framework like Django - not sure whether this would be done as SATOSA or as a SATOSA version 2. A new approach in parallel with what we have now - does that make sense time-wise and maintenance-wise? How do we do this without breaking what is there now? Need to experiment with Django. The async parts of Django would make some parts of SATOSA easier: background things like statistics that don't need to interact with the actual flow but need to be there - perhaps an API call to Elasticsearch to record that a new flow happened. OpenTelemetry - asynchronous calls to the logger - tracing - do these in a way that doesn't affect the timing of the flow itself (see the sketch below).
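A minimal sketch of the "keep background work off the request path" idea (illustrative only, not SATOSA code; the function names are made up): the statistics/telemetry call is scheduled as a fire-and-forget task so it does not add to the flow's response time.

    import asyncio

    async def record_flow_event(event: dict) -> None:
        # stand-in for an Elasticsearch or OpenTelemetry export call
        await asyncio.sleep(0.1)  # simulated I/O latency
        print("recorded", event)

    async def handle_authentication(request_id: str) -> str:
        # schedule the recording without awaiting it; the response returns now
        asyncio.create_task(record_flow_event({"request": request_id, "ok": True}))
        return f"response for {request_id}"

    async def main() -> None:
        print(await handle_authentication("req-1"))
        await asyncio.sleep(0.2)  # let the background task finish in this demo

    asyncio.run(main())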
2 - AOB
- Ivan is doing a lot of work on EOSC with the AI integration.
- Next meeting - 17 September. Shayna will not be available but will
send out the meeting reminder. Ivan will take notes and send them to Shayna
to distribute.