Is there any way to address the JSON structure issues by only serving ‘new’ (and ‘improved’) API endpoints with the new structure? In the knowledge that ‘new and improved’ API endpoints will also not need to serve our FE, can this be part of our transition strategy?
If so, this can form part of our ‘officially supported’ criteria.
Can you roughly t-shirt-size this work?
On the first point about structure, I don’t think it would require much work, I’d say Small-ish.
The second point on changing the serialization would need a spike to get even a vague idea of the size.
I think there would be real value in figuring out one standard and sticking to it, rather than having a mish-mash of different endpoints with totally different outputs.
Totally agree on the value of having one standard and sticking to it - in an ideal world at least.
In the reality of this team, though, I’m super scared that we’ll just be doing groundwork forever. Already we see weeks/months of work on adjustments blocking the tax reports. The reality is that we won’t create officially supported endpoints on these reports unless we can find a way to be agile.
If both of these things are requirements to supporting API endpoints then I suggest that the scope creep is too large and thus tax reports endpoints will not be the first officially supported API endpoints.
Thanks for the info
Yeah, I think a bit of groundwork would be sensible (and wouldn’t take very long), but maybe the big serialization change is too much. There are definitely things we can get stuck into that probably can/should be done before tax reports is delivered.
There seemed to be a general feeling in last night’s meeting that we can’t do any work on the API until we’ve determined exactly which users will use it and exactly which use cases the API will cater to.
My feeling is that we should take the complete opposite approach when thinking about how to actually design and implement the API, which is: we should assume that the external clients are an indeterminately broad and diverse group, and that the use cases are potentially infinite and similarly indeterminable. It’s actually much clearer and simpler if you start with that assumption.
With the former perspective, the central question is “how can we determine exactly what all the potential users want”, which leads to analysis paralysis and no obvious way forward. With the latter perspective, the central question becomes “how can we design this tool in a way that’s as broadly and generically useful for as wide a range of clients and use-cases as possible”, which actually leads to really clear and obvious answers.
Anyway, that’s just my 2 cents. I was a bit too tired to be articulate last night…
I’m so glad for this excitement about the API and I trust that the technical aspects of this are in very, very capable hands. I don’t follow those discussions, but it’s fantastic to see them happening. I am writing now from the perspective of a different kind of contributor in the global community. I think I contribute to broad strategic and governance aspects of our community. In particular, I think one of my roles is to reflect on and consider how our values and other aspects of governance are operational and evident in all the decisions we make. In my experience with OFN, there are always broader political-economic-sovereignty-justice… considerations for everything we develop (tech is not politically neutral). It seems to me that our API strategy should be informed by these considerations in addition to technical considerations. I’m wondering where and in what process we might have these discussions? Specifically, I think it requires:
1. An introduction to APIs for the non-technical members of our community.
2. A Q&A time where we can consider our community governance (pledge, values…) and float questions about the API from that perspective.
3. Consideration/response time and isolation of the thorny bits.
4. A consensus decision process on the thorny bits (if there are indeed any thorny bits).
I hope I’m not out of line with this post. I just feel that, in a way, some of us are gatekeepers of our community values and governance, and maybe there is due diligence for us to do here.
Just eavesdropping on the convo, and I don’t want to interrupt, but I had to say what a thrill it gives me to see all this activity towards API development! I hope this is the year we finally get a chance to talk about OFN/farmOS integrations.
And FWIW, farmOS 2.0 will be using JSON API as well, so cool to see you converging on the same solution!
Hey @lin_d_hop, just FYI:
Hendrik and I reviewed the use cases and we have noted which ones “we like too”.
Also I have added two more use cases at the very end.
so glad to hear this @jgaehring - and as a further aside - OFN-Canada is currently working toward interoperability with LiteFarm (I’m sure you know them) - so it will be a big party!
Update from API Strategy Meeting 11/02
Proposal on DFC Implementation
Although we were a bit short of the appropriate folks to make this decision, there was a very strong feeling amidst the group present that we need to pivot on the DFC implementation. The work to date has been done as an engine within OFN in which the DFC engine reads directly from the OFN DB. This strategy has made development slow, with a high requirement for OFN developers to really understand and be involved in the DFC implementation. It’s just not working.
The proposal is that the DFC should be a connector on top of the REST API.
This proposal has been floated in the past. Unless anyone has a strong reaction or a reason that this should not be the route for the DFC implementation, this is the direction we will follow. It has implications for the prioritisation of REST API endpoints.
Resourcing API Development
Those present in the meeting recognised a huge and pressing need to be able to commit to API development work outside of the existing pipe. We discussed the potential of having our new developers assigned specifically to API work. We discussed how the new DFC implementation might enable some funding toward work on the REST API. We discussed the ongoing issue of funding other work like code reviews. There was a general agreement that we need to understand cost and clearly spec the REST API tech debt work that has been outlined in other places, so that we can use this information to budget, resource and recruit.
REST API Tech debt
In previous conversations we’ve understood that there is some outstanding tech debt to be completed on the REST API before we can move toward implementing specific endpoint requirements.
- Reworking the API Structure at the top level as per here.
- Improving response times and load efficiency as per here.
In order to resource this (both hiring and funds) we need to spec and spike. This work needs to be prioritised, probably in the delivery train. There is also a pretty strong feeling that this work should not slow down the adjustments refactoring or the Rails upgrade, as both of these tasks are blocking pipe movement.
Hello, nice, sounds like progress.
I haven’t followed the details here but I can add that there’s some test coverage of the current API endpoints (it’s not very complete but it’s a very good start). We should use any new API project to start using request specs instead of controller specs for the API: RSpec 3.5 has been released!
For non-tech people: the current REST API has a decent amount of tests, but we need to improve it if we want external users to use the API. As we do this we should write the tests in a slightly different and improved way called request specs (instead of the old-style way we currently use, called controller specs).
Oh, and one more thing: the existing request spec for the orders endpoint was built by Steve with a gem called rswag that generates the swagger file (docs) automatically from the spec! It’s not perfect and requires a bit of work, but something to explore. If we use rswag as in spec/requests/api/orders_spec.rb we get the swagger file (swagger/v1/swagger.yaml) for free; otherwise we need to keep the swagger file manually updated, like we do with the swagger.yaml at the root of the project.
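To give a flavour of what an rswag request spec looks like, here’s a minimal sketch modelled loosely on that pattern. To be clear, the endpoint, parameters and fixture names below are made up for illustration, not the actual OFN spec; a real one needs the app’s `swagger_helper` and rswag setup:

```ruby
# Hypothetical rswag request spec (illustrative names, not real OFN endpoints).
# The rswag DSL describes the request; run_test! executes it, asserts the
# response code, and records the example in the generated swagger file.
require 'swagger_helper'

RSpec.describe 'Customers API', swagger_doc: 'v1/swagger.yaml' do
  path '/api/customers' do
    get 'List customers' do
      tags 'Customers'
      produces 'application/json'
      parameter name: :enterprise_id, in: :query, type: :integer

      response '200', 'customers listed' do
        let(:enterprise_id) { 1 } # assumed test data
        run_test! # asserts the 200 and feeds swagger/v1/swagger.yaml
      end
    end
  end
end
```

The nice property is that the docs can’t drift from the tests: regenerating the swagger file is just a matter of running the spec suite.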
Pulling this thread back up as we continue on our journey toward a roadmap for our API.
I’ve spent quite a bit of time in the Use Case deck to understand the fundamentals of what everyone wants to do with the API. I’ve then pulled my notes together into a minimally coherent document that attempts to pin down these use cases into a rough set of needs with reference to the models that they touch.
Find the summary here.
Our needs can be solved with the following:
- Effective Discovery Endpoints - Shops and Order Cycles.
- Create and Update on Customers Endpoint - Including tags
- Products Endpoint and DFC Integration
- Read Orders with and without Line Items - Cleanup of existing endpoint required.
- Create and Update on Orders with and without Line Items
- Reports via the API
- Full Shopping Experience via the API
- Create and Update Enterprises
Note that none of these pieces of work have been incepted. The next step is to choose the first priority, then we can invest more heavily in the inception and specification work to ensure that all our use cases can be solved by the work.
Of these needs the following are currently blocked:
3. Products Endpoint and DFC Integration requires auth work on the DFC Engine to be prioritised.
4. Read Orders is blocked by the adjustments work.
5. Create and Update Orders is blocked by the adjustments work.
6. Reports is awaiting the Reports project to be incepted. This will be part of that work.
The rest are not blocked:
1. Discovery: small inception needed.
2. Create & Update on Customers: very likely the smallest piece.
7. Shopping Experience: large inception needed and would work best as a funded partnership.
8. Create & Update Enterprises: mid-sized inception needed.
9. Authentication in the DFC engine.
I would therefore like to propose that we explore putting either 1, 2 or 9 into the Funded Features pipe as the first API endpoint funded feature. I would suggest that there are two ways we could decide which first:
- Funding: the funder deciding seems fair in the Funded Features pipe to me.
- Ease: the smallest and simplest is a good first option, dependent on funding of course.
All in all though, the path forward feels a lot clearer to my mind.
Personally I would be very happy with all three
What would people like as a next step?
Shall we make a decision on one of these three, estimate, liberate the funds and deliver?
In particular ping @Rachel and @Kirsten as the two people that have suggested that they can contribute funding to API work via the Funded Features pipe.
Very good summary @lin_d_hop ! Feeling excited about things being more clear now
Totally biased due to my role in OFN, I would personally choose 2, assuming there’s interest and funds for it, of course. It feels to me that it’s the most tightly scoped and the one that will get us to something production-ready pretty soon.
At Coopdevs we’re currently working on an integration for another marketplace that looks exactly like that. We’re updating WordPress users based on data fetched from the customer’s CRM. This experience may come in handy.
Thanks for this summary @lin_d_hop it’s awesome
Maybe we can put this topic on the agenda of our next product or delivery circle? I think we need a clear view on the budget available first. So that means we gather how much funding we have, but also get time estimates on 1, 2 and 9?
A small group had a chat on Friday about product syncing and the possibility of using the DFC. So that’s point 3 in the list here. Lynne summarised very well that it’s not a low hanging fruit because it needs a form of authentication first. But it would bring us an easy way of syncing products with other platforms via the existing DFC prototype which is a high priority, at least in Australia.
While I would like to push the product syncing here, I now think that Lynne’s priority list makes a lot of sense and that it would be good to trust the velocity of the development team. If we get the easier things done quickly, we’ll get to the rest soon. → Action.
@lin_d_hop asked if I could come up with a rough estimate for adding authentication to the existing DFC engine before the next delivery train. I think it’d take about 1 developer-week (so roughly 30 hours) for this to get a PR ready for Code Review. The bulk would be getting your head around how the auth token should be passed between the two apps; the code itself that’s produced would be relatively straightforward.
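For anyone curious what “passing an auth token between the two apps” might involve: one common pattern (purely a hypothetical sketch here, the actual scheme for the DFC engine hasn’t been designed yet) is for both apps to hold a shared secret and sign each payload with an HMAC, so the receiving app can verify the token without a round trip:

```ruby
require 'openssl'

# Hypothetical shared-secret HMAC scheme, for illustration only.
# Assumption: both OFN and the DFC engine are configured with the same secret.
SHARED_SECRET = 'replace-with-a-real-secret'.freeze

# The sending app signs the payload and attaches the token to the request.
def sign(payload)
  OpenSSL::HMAC.hexdigest('SHA256', SHARED_SECRET, payload)
end

# The receiving app recomputes the signature and compares in constant time,
# so token checks don't leak timing information.
def valid_token?(payload, token)
  OpenSSL.secure_compare(sign(payload), token)
end
```

A real implementation would also need token expiry and key rotation, which is part of why even “simple” auth work takes a developer-week.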
Conversation continued in 2023 API Roadmap