Towards a microservice architecture?

Want to post some food for thought after discussing with @MyriamBoure her work on the Data Food Consortium initiative and how it can relate to the future of OFN.

One of the objectives of the Data Food Consortium is that food platforms would make their data available via a set of API services using the same representations of the world. Example: each platform would make its product catalog available (with many benefits; producers would need to manage their products in only one platform, for instance).

(Note the red arrow, where OFN itself accesses the product catalogue only via the API.)

This could be the opportunity to migrate OFN towards a more modular, so-called microservice architecture. How? Well, one of the nice consequences of abstracting the behavior of a system into services is that once the services are defined and used, one can modify the internals of the system without impacting the outside world - provided the services themselves are not modified. And this can be done at the same granularity as the services, functionality by functionality. No need to re-write everything from scratch :slight_smile:

Concrete example: in the diagram above, one could re-write the “product catalog” functionality as an independent application (= microservice) that would then serve the client food platforms, including OFN. Step by step, the entire set of OFN’s functionalities could be refactored into modular, independent applications.
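
To make this concrete, here is a minimal sketch of how OFN could consume such a catalog service over HTTP. The host, paths and JSON shape are my assumptions for illustration, not an agreed Data Food Consortium spec:

```ruby
require "net/http"
require "json"

# Hypothetical client: OFN fetching products from a standalone catalog
# service instead of its own database. Endpoint names are invented.
class ProductCatalogClient
  BASE = "https://catalog.example.org/api/v1"

  # Returns the parsed product list of one producer.
  def products_for(producer_id)
    uri = URI("#{BASE}/producers/#{producer_id}/products")
    response = Net::HTTP.get_response(uri)
    raise "catalog unavailable: #{response.code}" unless response.is_a?(Net::HTTPSuccess)

    JSON.parse(response.body)
  end
end

# ProductCatalogClient.new.products_for(42)
```

As long as this contract stays stable, the service behind it can be rewritten freely without touching OFN or the other client platforms.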

Another example: some of the functionalities to be implemented for LP in France. The new UI module that producers will use to manage their orders could be a separate “microservice” interacting with the rest of OFN via APIs, instead of being developed as part of the OFN core stack.

Some pre-requisites:

  • the API interface breakdown, focusing on the main data entities (see the sketch after this list)
  • the microservice breakdown: each should own its own data
  • the technologies and tools to be used - this is more complex to manage, and without proper tooling chaos follows.
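
For the first point, a shared “product” entity could look something like this - all field names here are illustrative guesses, not the consortium’s actual vocabulary:

```ruby
require "json"

# A guessed shared representation of a product, serialised as JSON by
# every platform that exposes its catalog.
product = {
  id:          "urn:ofn:product:123",
  name:        "Sourdough rye loaf",
  producer_id: "urn:ofn:enterprise:42",
  unit:        { quantity: 800, measure: "gram" },
  price:       { amount: "4.50", currency: "EUR" }
}

puts JSON.pretty_generate(product)
```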

I'm certainly interested in this as a future evolution. One thing that needs careful consideration, though, is the impact on developers, who are often part-time and remote. It would be risky to increase the effort of setting up a development environment too much because it would need multiple microservices in place to work. I'm wondering if this is a greater risk for us than it might be for a full-time, possibly co-located team?

This is definitely a fantastic future evolution. Beyond just software interoperability, it would allow any other software or service, written in any language (or at least one that supports the communication protocol of that μService), to integrate features of OFN without installing it.
I wouldn't worry about the complexity of installation/setup, as the 'simple' need for continuous and autonomous deployment, resilience etc. will force the creation of easily runnable (Docker) images and the setup of proper orchestration and monitoring for those running containers. So in the end it should be even easier.

However, there are many more things to consider than the prerequisites described above in order to take advantage of μServices. Also, an HTTP/REST API breakdown could be seen as focusing on resource/data entities, whereas from the perspective of μServices the split would rather be by functionality - and those two don't need to be the same.

I could see the evolution in several (simplified) steps:

  1. make separate gems from the current code, to work on top of Spree (or the least-modified possible fork of it)
  2. abstract and split the Spree hooks and other boundaries, the OFN logic, and the persistence layers of those gems
  3. make implementations (other gems) that enable those gems to run autonomously (without Spree and its DB), of course keeping the ability to communicate with other such modules - see the sketch below
  4. set up monitoring of all those modules
  5. play around with configurations, scaling, mocking, testing etc. (possibly going back to 3) or even 2) to rework)
  6. collaborate and drink :slight_smile:

Note that point 3) is bloody hard :slight_smile: and it includes the software interoperability topic.
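
To make steps 2) and 3) a bit more tangible, here is a rough sketch (all names invented) of how the OFN logic could depend on an abstract catalog interface, with one adapter still hooked into Spree and one for the autonomous module:

```ruby
# Hypothetical illustration of steps 2) and 3): OFN code talks to a
# catalog "port"; interchangeable adapters hide whether the data lives
# in Spree's database or in the module's own storage.
module OfnCatalog
  # Step 2: still running on top of Spree.
  class SpreeAdapter
    def products_for(producer_id)
      Spree::Product.where(supplier_id: producer_id) # OFN-style Spree query
    end
  end

  # Step 3: autonomous module with its own persistence.
  class StandaloneAdapter
    def initialize(repository)
      @repository = repository
    end

    def products_for(producer_id)
      @repository.fetch_products(producer_id)
    end
  end
end

# Swapping adapters becomes a configuration change, not a rewrite:
# catalog = OfnCatalog::SpreeAdapter.new
# catalog.products_for(42)
```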

@Kirsten @RohanM @maikel @oeoeaio @danielle I want to invite you to join this conversation, as it seems this is exactly what we are heading towards with the potential LP French project. At least when talking to Kirsten the other day, I realized that's what we were proposing, and I almost jumped for joy (@Kirsten you can testify :-)) as it is so well aligned with the vision we are heading toward with the Data Food Consortium project. I have the impression we are really taking a big step here that can make the whole ecosystem move. @emak and @sylvain can also testify how much it resonates with the LP IT architect vision after the call we had this morning. I have the feeling this is really the “future that wants to emerge” :slight_smile: But happy to see the more tech people answering on that too :wink:

When I first heard about the Open Food Network, I had a much more decentralised system in mind. At the moment, we have one piece of software modelling the whole network. I like the idea of breaking it down, introducing APIs and making it more modular. But it's actually quite difficult to do in a way that is better than the current system. The first step is defining APIs. The “one-piece” OFN can offer all the micro services through these APIs even though it's not made of independent modules. Then other parts, like independent shop fronts, can access those APIs. And that is exactly what we are planning with the LP project.

Breaking up the OFN into several parts is difficult, because the database structure is complex. To extract the product catalogue you have to solve the permission problem. Products can be changed by the farmers, but also by certain hubs that were granted permission. Hubs can also add product overrides. And the permissions are managed by enterprise-to-enterprise relationships that don't belong in a product database. The permission logic would reside in the OFN: it could have full access to all products created through the OFN, or it could manage access keys that can be exchanged to grant other people permission. These are possibilities that have to be explored with a collection of use cases to meet everybody's requirements.
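
As a thought experiment, here is a minimal sketch of the second option (access keys), with everything invented for illustration: the catalog stores no enterprise relationships itself and only checks keys that OFN issued when a permission was granted.

```ruby
# Hypothetical permission check for a standalone catalog service.
class CatalogPermissions
  def initialize
    # access_key => list of product ids that key may edit
    @grants = Hash.new { |hash, key| hash[key] = [] }
  end

  # Called by OFN when an enterprise-to-enterprise permission is granted.
  def grant(access_key, product_id)
    @grants[access_key] << product_id
  end

  def may_edit?(access_key, product_id)
    @grants[access_key].include?(product_id)
  end
end

permissions = CatalogPermissions.new
permissions.grant("hub-7-key", "product-123")
permissions.may_edit?("hub-7-key", "product-123")  # => true
permissions.may_edit?("other-key", "product-123")  # => false
```

The point is that the messy enterprise-relationship logic stays inside OFN; the catalog only ever sees opaque keys.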

Extracting the product database would probably also mean replacing a lot of the Spree logic. It affects not only product management, but also the cart, checkout, orders, invoices etc.

Yes, I understand @maikel that it seems complicated, but I think this can be done module by module (from my non-tech perspective, that's what I understood, especially when talking to @sylvain). We have started a very precise prototype on that with the Data Food Consortium, where we are describing some basic use cases to start with. I will share the blogpost here when I have written it, but we are in the use case analysis, describing for example the whole process for the use case “referencing a product catalog”, then another process for the use case “modifying the product catalog”, and in the process description we specify the “rules”, like permissions, etc. We are just at the first step so I can't tell you more, but I am sure there will be concrete solutions experimented with in this consortium.

Hi all, this all seems interesting / cool to me - but I just want to be careful that there isn't 'over-promising' going on to the 'potential major French client' about what / how much will happen within the MVP of that project. I guess some of the permissions issues will need to be considered / worked out with the lean producer interface that is proposed, but it's very much my understanding that that would be talking directly to an OFN database, not via a 3rd-party product management thing?! @MyriamBoure @maikel etc

I agree: if one thing must remain stable, it is the web APIs. Then work can happen in the background to extract modules and run them in different containers if needed. But API stability is key as soon as external services start plugging into them. So designing future-proof APIs is the key problem right now.
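
One common way to do this in Rails is to freeze the contract behind an explicit version namespace, so internals can move while v1 keeps serving external clients. A minimal sketch (route names are illustrative, not OFN's actual routes):

```ruby
# config/routes.rb - versioned API namespace sketch
Rails.application.routes.draw do
  namespace :api do
    namespace :v1 do
      resources :products, only: [:index, :show]
    end
    # Breaking changes would go into a v2 namespace later.
  end
end
```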

Just wanted to point out that there are other options in between the monolithic Rails app and microservices, for instance Component-Based Rails Applications: https://leanpub.com/cbra

What I'm saying is that it can be easier to move decoupled pieces of code into “modules/components” (Rails engines, for instance) than to jump directly into the microservices jungle.
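
For instance, the catalog code could first become a Rails engine living inside the same monolith - a minimal sketch, with a hypothetical module name:

```ruby
# lib/ofn_catalog/engine.rb - isolating the catalog as an engine
module OfnCatalog
  class Engine < ::Rails::Engine
    isolate_namespace OfnCatalog
  end
end

# The host app then mounts it in config/routes.rb:
# mount OfnCatalog::Engine, at: "/catalog"
```

Once the boundary is clean at the engine level, promoting it to a separate service later becomes a much smaller step.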


For sure it becomes harder to test the whole thing end-to-end, but at the same time each module is much easier to test on its own. And provided the API is kept untouched (including its intent), end-to-end testing may not be needed in most cases.
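
Concretely, each module can be tested against its API contract alone. A hypothetical RSpec sketch, where FakeCatalog stands in for whichever implementation is plugged in:

```ruby
require "rspec"

# Contract spec: the module is exercised only through its public
# interface, so internals can be rewritten as long as this passes.
class FakeCatalog
  def products_for(_producer_id)
    [{ "id" => "p1", "name" => "Rye loaf", "price" => "4.50" }]
  end
end

RSpec.describe "product catalog contract" do
  it "returns products with the agreed fields" do
    products = FakeCatalog.new.products_for(42)

    expect(products).to all(include("id", "name", "price"))
  end
end
```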

Designing “future-proof” anything is difficult, but we can start and iterate. The good news is that it should be easier with this Data Food Consortium initiative as it gives a much larger perspective than just OFN.

Definitely! And it can be done functionality by functionality, no need for a big rewrite.

Yes. But you need to test an API with a real-world application before you can consider it stable. So usually an informal API is tested out first, and then it is formalised and standardised so that everybody can rely on it.


Some interesting thoughts on when to move from a monolith to microservices:


Hi all,

Everything has a start, so this will be mine. Not a simple topic, but a crucial one - the root of every other technical decision, I feel. Apologies if my lack of background on the project makes me say some nonsense.

I feel there might be some confusion here between micro-services and decentralization.

Micro-services is when features are implemented in independent services, e.g. profile management, messaging, payments, billing etc. This is popular at the moment because it is simpler to test and maintain, though as said above it can be harder to set up (but Docker shall help indeed :smile: ). It could definitely be helpful to isolate some services, but it won't directly help the consortium create a product description standard.

Decentralized means different systems can talk to each other. For now, there are several instances of the openfoodnetwork codebase deployed, but as far as I know, they are 100% independent and don't talk to each other.

In order to achieve this goal, I would definitely suggest that each of us have a look at the Mastodon project, which in short is a decentralized Twitter. The good things are that it is open source AND written in Ruby. :heart:

I am not yet aware of how the decentralization works there, but I guess there might be a list of all available instances that is shared and synced among instances. To join, an instance simply has to add itself to the list. Deeper technical details can be kept for later.

This would allow any instance of the OFN to crawl products that belong to another instance, just like tweets in Mastodon. Ain't this what we are trying to achieve?
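
A hedged sketch of that guessed mechanism - a shared instance list that each deployment crawls for products (all URLs and endpoints invented, and nothing here is an actual OFN or Mastodon API):

```ruby
require "net/http"
require "json"

# Hypothetical cross-instance crawl, in the spirit of Mastodon's shared
# instance list: ask every known instance for its public products.
known_instances = ["https://ofn.example.fr", "https://ofn.example.ca"]

all_products = known_instances.flat_map do |base|
  response = Net::HTTP.get_response(URI("#{base}/api/products"))
  response.is_a?(Net::HTTPSuccess) ? JSON.parse(response.body) : []
end

puts "Crawled #{all_products.size} products from #{known_instances.size} instances"
```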

Indeed, the first step towards this is to create a standard description for each product. That's the consortium's job. What is the current status of this @MyriamBoure?

But then this can definitely be independent from the local architecture (micro-service vs monolithic), and it should not bother the current APIs either (though unifying those with the future standard one shall be a reasonable objective at some point - step by step).

Comments welcome!

As I already stated in this and other threads, I think that our challenge is not at the infrastructure level, where the mentioned architectures could come in handy, but at the code organization level.

I'm starting to believe that we need to clean up our code, modularize it and make a repo for each instance. Every instance would then be able to pick the modules that best fit the reality it faces, without adding customizations to the core that maybe only one enterprise in one instance will use.
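
In Gemfile terms (all gem names invented for illustration), each instance would compose its own deployment from a core plus optional modules:

```ruby
# Gemfile of a hypothetical instance - pick only the modules you need
source "https://rubygems.org"

gem "ofn_core"           # shared base: enterprises, products, orders
gem "ofn_hubs"           # multi-farm hub features
# gem "ofn_buying_club"  # left out: this instance doesn't need it
```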

This decision would come with some cost, of course; I'm working on a proposal with more details. My idea is to present it in December.

In any case, I just wanted to stress that our challenge now is to fit each local instance's needs. The time it takes to make OFN match the needs of our local hubs and producers is too long IMHO, and modularization is one possible (although challenging) way to fix it.


I am trying to follow this @enricostn and I look forward to discussing your ideas in Australia in December (turns out I can go!!). A question though: wouldn't any instance putting in place a public infrastructure want all modules available anyway? I can't really imagine an instance that wouldn't. When you first set up an instance, you don't know exactly who the users are or what the uses will be. Indeed, I've received requests that I wouldn't have imagined when we set up the instance here in Canada. I'm thinking that maybe there are other advantages to the modular approach - like if I just want to set up a farm store, I don't need to see or know about all the options for a food hub… I just want to make sure that going modular still leaves the possibility for a user to 'grow' into more complicated uses, because I think that is very likely. A farmer might start with a small farm store selling their own goods, but then advance to a multi-farm CSA, and then maybe a wholesale buying club… I just don't understand why an instance wouldn't want to make all modules available all the time - which would mean more updating, wouldn't it? (Sorry if I don't understand - maybe I should just wait to discuss.)

Never heard of the Mastodon project, sounds interesting.

Another project that OFN could potentially benefit from is the Diaspora Project, a decentralized social network. They use a federation protocol to connect their pods (their name for individual instances).
It is written in Ruby (available as a Ruby gem), open source and focused on decentralization and privacy.

Here is the federation protocol specification of the Diaspora project:
https://diaspora.github.io/diaspora_federation/

My initial idea, at a point at which I wasn't too aware of what OFN can and can't do yet (which was yesterday :D when I stumbled across it), was to use OFN for a buying group. We would set up a server instance ourselves, which means we would be independent of other instances (e.g. downtimes, abandoning of the server, …).
If we only bought from suppliers that are already registered on other OFN instances, our instance would only require a small number of running modules. One advantage would be that we could potentially rent a cheaper server (assuming fewer modules = less processing power required).