Introduction
There has been significant discussion about the approach to versioning the Consumer Data Right standards (and the API endpoints in particular): the conventions adopted for the standards and the reasons for preferring them over other versioning models used for APIs.
Available Approaches
There are three or four standard paradigms for versioning APIs. Most jurisdictions that set standards use block versioning: they release the next version of the standards (the full collection of APIs) at a specified cadence, such as every quarter or every year, and every aspect of the standards is versioned at that one time. We have adopted that to a degree, in that we use semantic versioning for the standards as a whole. However, the CDR regime is unusual when compared to many other regimes around the world, Open Banking in particular, in that it is intended to cover many sectors. Conceivably, in a number of years the CDR might cover five, six, or even a dozen different sectors, and each of those sectors will have its own change path. The APIs for, say, Superannuation might be changing at a different pace to the APIs for Banking, which might be very stable at that point. As new sectors are brought into the regime, wholly new APIs will need to be developed, and they will need to undergo several iterations as they are tested, implemented, and rolled out, whereas previous sectors may be relatively stable and unchanging.
Semantic Versioning
For that reason, although we apply semantic versioning to the standards as a whole, so that, like any project that implements semantic versioning, we can accommodate changes that are major and breaking in nature, changes that are minor but still impactful, and changes that are really just bug fixes or corrections, we have also implemented an endpoint versioning model using headers. The intent is that, as we move forward and the CDR becomes multi-sector, we could be in a situation where there are hundreds of Data Holders and thousands of Accredited Data Recipients, all using these standards to communicate with each other on a daily basis. In that scenario, we need to manage the forward and backward compatibility of any change.
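The header exchange itself is simple. The Python sketch below assumes the x-v / x-min-v request header convention used in the Consumer Data Standards, together with a hypothetical Data Holder base URL and access token; it asks for version 2 of an endpoint while declaring that version 1 is still acceptable.

```python
# Minimal sketch of header-based endpoint versioning, assuming the x-v /
# x-min-v request headers described in the Consumer Data Standards.
# The base URL, endpoint path, and token handling are placeholders only.
import requests

BASE_URL = "https://data.holder.example.com/cds-au/v1"  # hypothetical Data Holder

def get_accounts(access_token: str) -> dict:
    """Call a banking accounts endpoint, preferring v2 but accepting v1."""
    response = requests.get(
        f"{BASE_URL}/banking/accounts",
        headers={
            "x-v": "2",      # highest endpoint version the client can handle
            "x-min-v": "1",  # oldest endpoint version the client can still process
            "Authorization": f"Bearer {access_token}",
        },
        timeout=30,
    )
    response.raise_for_status()
    # The Data Holder indicates which version it actually responded with.
    print("Responded with endpoint version:", response.headers.get("x-v"))
    return response.json()
```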
In this environment, it is conceivable that introducing a new version of an API with a breaking change may take months, possibly even a year, before all participants have upgraded to it. We have therefore implemented endpoint versioning to support Data Holders that wish to implement at a different pace, using different change cycles. They may adopt a new version of an API at various times, and Accredited Data Recipients can choose to wait before using a new version until a quorum of Data Holders has implemented it and they are comfortable that, most of the time, when they ask for data from that version of the endpoint it will actually be returned. This mechanism also allows us to manage obsolescence. We can, and will, make statements in the standards such as: here is version two of an endpoint, which will be binding from some future date, say next October; and here is the old version, which will become obsolete three months after the new version becomes binding. This creates a compliance requirement both on Data Holders to have the new version available, and on Accredited Data Recipients to upgrade their clients to the new version, within known time frames.
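During such a transition window, a Data Recipient's client has to cope with holders that are at different points in their upgrade cycle. The sketch below shows one illustrative way to handle that, assuming the same x-v / x-min-v convention and that a holder which cannot satisfy the requested version range responds with HTTP 406 Not Acceptable; the function name and version numbers are hypothetical.

```python
# Sketch of version negotiation from an Accredited Data Recipient's side
# during a transition window. Assumes the x-v / x-min-v headers above and
# that a Data Holder supporting no version in the requested range returns
# HTTP 406 Not Acceptable. Names and version numbers are illustrative.
import requests

def fetch_versioned(url: str, access_token: str,
                    preferred: int = 2, minimum: int = 1) -> dict:
    response = requests.get(
        url,
        headers={
            "x-v": str(preferred),
            "x-min-v": str(minimum),
            "Authorization": f"Bearer {access_token}",
        },
        timeout=30,
    )
    if response.status_code == 406:
        # This holder supports neither the preferred version nor any version
        # at or above the declared minimum; the recipient may defer migration
        # or keep requesting the older version until the holder upgrades.
        raise RuntimeError(f"No mutually supported endpoint version for {url}")
    response.raise_for_status()

    served = int(response.headers["x-v"])
    if served < preferred:
        # The holder has not yet rolled out the newer version. The payload is
        # still usable because it falls within the x-min-v floor the client
        # declared it can process.
        pass
    return response.json()
```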
Conclusion
The versioning model the DSB has implemented uses semantic versioning to make it clear when, why, and how the standards change, while also applying endpoint versioning within each version of the standards to manage lower-level, more granular change. The combination allows for the evolution of the CDR standards and more flexible deployment of changes by participants across the regime, which was an important consideration from the outset of the standards development.