
Date and time

10-11 EST




Kafka partitions in a multi-tenant Folio environment

Florian Gleixner wrote: "The TC got an RFC that deals with an option to reduce the number of Kafka partitions in a multi-tenant FOLIO environment. The RFC is now in the public review stage according to the TC's RFC process. You will find the RFC here: We invite you to ask questions or discuss the RFC in the Slack channel #public-review-rfc-kafka-partitions"

This will require some changes in how the modules handle Kafka messages, but it will reduce Kafka cloud hosting costs.

Do we have questions or comments?

Are there security issues, given that modules have access to the messages of all tenants in a tenant collection?


Upgrades are a problem for modules that use Kafka, and this must be solved by SysOps: which module picks up the pending Kafka messages, the new one or the old one? There is no rule for this; SysOps must resolve it using environment variables, e.g. by putting the flower release name in the ENV variable. Kafka builds the topic name from the ENV variable and the namespace. SysOps need to test and document this upgrade procedure.

Meeting Notes:

number of partitions = number of topics * replication factor * number of tenants
  this will soon be in the thousands
  you have to pay per partition
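The formula above can be made concrete with a small back-of-the-envelope calculation. The specific figures below (topic count, replication factor, tenant count) are assumptions for illustration, not numbers from the RFC:

```python
# Illustrative partition-count estimate based on the formula in the notes.
# All concrete numbers here are assumptions, not taken from the RFC.
topics = 50              # Kafka topics used across FOLIO modules (assumed)
replication_factor = 3   # a common Kafka replication factor (assumed)
tenants = 20             # tenants in the environment (assumed)

partitions = topics * replication_factor * tenants
print(partitions)  # 3000 -- already "in the thousands" for a modest consortium
```

Since hosted Kafka is typically billed per partition, the cost grows linearly with the number of tenants under the current topic-per-tenant scheme.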
The RFC wants to add the option to combine the topics of multiple tenants into a shared topic for a so-called tenant collection.
Usually you do not separate the topics for different tenants.
Example: Catalogue and mod-search need data. Both modules subscribe to these events in Kafka.
The topic name contains the tenant name, as of now.
In the future, it will contain only a tenant collection name (if you make use of this option).
The tenant name is lower case, the tenant collection name is upper case.
The modules need to implement more logic to filter out the tenant, as it will no longer be in the topic name.
The RFC proposes to extend the FOLIO Kafka library. Tenant collection can be used optionally.
Tenant collection names are set in the environment. They are not persistent.
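A minimal sketch of the naming change described above. The exact topic-name layout, the environment-variable name, and the header field are assumptions for illustration; the real convention is defined by the FOLIO Kafka library:

```python
# Sketch of per-tenant vs. tenant-collection topic naming (assumed layout).
def topic_name(env: str, namespace: str, tenant_or_collection: str, event: str) -> str:
    return f"{env}.{namespace}.{tenant_or_collection}.{event}"

# Today: one topic per tenant; the tenant is recoverable from the name.
print(topic_name("folio", "Default", "diku", "inventory.instance"))

# With a tenant collection (upper case by convention, per the notes),
# the topic name only identifies the collection:
print(topic_name("folio", "Default", "CONSORTIUM", "inventory.instance"))

# So consumers must read the tenant from the message itself.
# The header name below is an assumption for illustration.
def tenant_of(message: dict) -> str:
    return message["headers"]["x-okapi-tenant"]

msg = {"headers": {"x-okapi-tenant": "diku"}, "payload": {}}
print(tenant_of(msg))  # diku
```

This is the extra filtering logic the modules would need: instead of trusting the topic name, each consumer inspects every message to decide which tenant it belongs to.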
Q: What are the drawbacks of not having that many partitions?
A: Both solutions, the current one and the proposed one, have drawbacks. Kafka opens one file per partition, so you can run out of file descriptors. That is a drawback you already have now.
Tenant collections can also have drawbacks. It can be possible that other tenants have to wait until their messages are being consumed.
The partitions are consumed round-robin. With one partition per tenant, one tenant cannot influence another.
Julian thinks that in most cases Kafka will not be the bottleneck, but rather the modules that consume the messages.
For self-hosting you have to monitor Kafka and see if it runs out of file descriptors.
If you don't have to pay for Kafka you don't have that problem. The default will be the topic-per-tenant setting.
Security issues: All modules share the same secret for accessing Kafka. You might see this as a security problem, but it is the same situation as with the database: all modules can see all messages of all tenants, just as they can see all tenants in the database. The separation has to happen in the module, and will happen in the module in the future.
Separate credentials per tenant are only useful if you want to attach external services to Kafka or to the database. But that is not how we want to integrate external services; we want to use the APIs.
What is not handled by the RFC and has to be tested and documented: if you run a multi-tenant environment and are in an upgrade situation, you will have more than one module consuming the messages. You will have the same module in different versions, and you cannot say which version will subscribe to which message. This is not part of the RFC. The suggested solution is that the system administrator sets an environment variable that becomes the first part of the topic name: this variable + the namespace + the event name = the topic name. Then you can ensure that these messages are consumed only by modules of the same flower release.
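The upgrade mitigation above can be sketched as follows. The environment-variable name and the release name are assumptions for illustration, not part of the RFC:

```python
# Sketch of the upgrade mitigation: prefix topic names with a release
# identifier taken from an environment variable set by SysOps.
# Variable name and release name are assumptions, not from the RFC.
import os

os.environ["KAFKA_TOPIC_PREFIX"] = "quesnelia"  # set per flower release

def upgrade_safe_topic(namespace: str, event: str) -> str:
    prefix = os.environ["KAFKA_TOPIC_PREFIX"]
    # variable + namespace + event name = topic name, as described above
    return f"{prefix}.{namespace}.{event}"

print(upgrade_safe_topic("Default", "inventory.instance"))
# quesnelia.Default.inventory.instance
```

Because old and new module versions would see different topic names, they no longer compete for the same pending messages during an upgrade.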
Problem: the sender (producer) may not get an update while the consumer (receiver) does. In that case you may need to delay the upgrade until the producer module is also updated.
Jeremy: We are using AWS S3 for data-export and data-import.

Implementing both a MinIO client and an AWS client in mod-bulk-operations and mod-data-export-worker quadruples the Docker image size.

[FOLS3CL-5] Make aws-sdk-java optional (maven scope: provided) - FOLIO Issue Tracker

Do we want/need MinIO and AWS support in the same module? For institutions that use MinIO, a thin client would be desirable.

[MODBULKOPS-64] S3_IS_AWS is non-functional - FOLIO Issue Tracker

It is unclear why both clients have been implemented, because the two clients should be compatible with each other.

Meeting Notes:

We need to have someone (in this group) who can explain the incompatibilities to us.

Why is there a need for the AWS client?
Julian asked Craig. We will discuss this in the next TC meeting.

Ingolf: Consider becoming a candidate for the next TC elections. Terms run two years, starting July 1st. Half of the TC members change each year, so you will be a "new" member in the first year and an experienced member in the second. You will gain insight into new modules before they become part of the official distribution.

Action items
