Hi,
Thanks for the note/update, Leif.
Could this note, or some form of it, become a roadmap document that you
keep and maintain somewhere in
https://github.com/IdentityPython/pyFF
?
I confess that I thought I understood where you were going with breaking
pyFF into backend and frontend pieces, but when I look at
https://github.com/TheIdentitySelector
I am confused about just what I would *do* to build a discovery service
that can leverage a pyFF backend server.
I think it would be helpful (for me at least) to have some higher level
documentation that showed the architecture, the pieces, and how you
expect them to fit together.
In short, I am trying to understand whether there is a path for a small
science team with limited resources to use pyFF + (something) to get what
they get right now from pyFF, or whether this will really only be
accessible to large teams going forward.
I am happy to create the architecture diagram(s) and associated
documentation with your help if you can dialogue with me. Please let me
know if you want me to contribute in that way.
Thanks,
Scott K
Hey,
Sorry for the crosspost...
After a few weeks of spending all of my available development bits on
the various parts of RA21 (cf
github.com/TheIdentitySelector, yes it's
all nodejs!) I'm back to working on pyFF for a bit.
Here is what I have planned for in the quite near term:
1. merge the api-refactory branch, which includes a Pyramid-based API
2. merge documentation PR from Hannah Sebuliba (thx!)
3. tag and release the last monolithic version of pyFF
4. in HEAD, which becomes the new 1.0.0 release:
- remove all frontend bits (old discovery, management web app)
- pyffd will now start the Pyramid-based API server (a client sketch
  follows this list)
- WSGI deployment will be available/recommended
- create a new "frontend app" as a separate webpack+nodejs project
- create docker-compose.yaml that starts pyffd (API) + frontend app
5. tag and release 1.0.0 thereby moving pyFF over to semantic versioning
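To make the split concrete, here is a minimal sketch of how a separate
frontend (or any discovery client) might talk to the pyffd API once the
frontend bits are removed. It assumes only the standard MDQ-style
"/entities/" endpoints that pyFF serves; the base URL, port, and the use of
the requests library are placeholders, not a statement about the final API:

    # Minimal sketch of a client for a pyFF MDQ/API backend.
    # Assumption: pyffd is reachable at http://localhost:8080 (placeholder).
    import requests
    from urllib.parse import quote

    MDQ_BASE = "http://localhost:8080"  # wherever pyffd (the API server) runs

    def fetch_entity(entity_id: str) -> bytes:
        """Fetch a single EntityDescriptor via the MDQ /entities/ endpoint."""
        url = f"{MDQ_BASE}/entities/{quote(entity_id, safe='')}"
        resp = requests.get(
            url, headers={"Accept": "application/samlmetadata+xml"}
        )
        resp.raise_for_status()
        return resp.content

    def fetch_all() -> bytes:
        """Fetch the full aggregate (GET /entities with no identifier)."""
        resp = requests.get(f"{MDQ_BASE}/entities/")
        resp.raise_for_status()
        return resp.content

A frontend app (webpack+nodejs or otherwise) would do the equivalent over
HTTP and render the results; the point is that nothing frontend-specific
lives in pyFF itself any more.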
After step 4 it makes sense to talk about things like...
- new redis/NoSQL backends
- work on reducing memory footprint
- pubsub for notifications between MDQ servers
- more instrumentation & monitoring
- adaptive aggregation for large-scale deployments
- Elasticsearch support
- management APIs for integrated editing of local metadata
- OIDC
- generating offline MDQ directory structures (cf scripts/mirror-mdq.sh)
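As an illustration of that last item, here is a rough sketch of what an
offline MDQ directory structure amounts to: one file per EntityDescriptor,
named with the standard MDQ "{sha1}" identifier transform, served
statically. This is not the logic of scripts/mirror-mdq.sh, just a toy
version of the idea; the paths and the flat layout are assumptions:

    # Sketch: split a SAML metadata aggregate into per-entity MDQ files.
    import hashlib
    import os
    from xml.etree import ElementTree as ET

    MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

    def mirror(aggregate_xml: str, outdir: str) -> None:
        """Write each EntityDescriptor to entities/{sha1}<hex-of-entityID>."""
        os.makedirs(os.path.join(outdir, "entities"), exist_ok=True)
        tree = ET.parse(aggregate_xml)
        for ed in tree.getroot().iter(f"{{{MD_NS}}}EntityDescriptor"):
            entity_id = ed.get("entityID")
            if not entity_id:
                continue
            # MDQ identifier transform: "{sha1}" + hex SHA-1 of the entityID
            name = "{sha1}" + hashlib.sha1(entity_id.encode("utf-8")).hexdigest()
            path = os.path.join(outdir, "entities", name)
            ET.ElementTree(ed).write(path, encoding="utf-8", xml_declaration=True)

    # e.g. mirror("metadata.xml", "/var/www/mdq"), then serve /var/www/mdq
    # from any static web server to get a read-only MDQ responder.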
Thoughts etc are as usual welcome.
Cheers Leif