
Detecting breaking changes in APIs

When a pull request with code changes is made, an automated check runs to verify that the change does not introduce a breaking change in the APIs. Once an API has been exposed, it is implicitly guaranteed to keep working without consumers having to make changes, which is why breaking changes should be prevented. If a breaking change is ever necessary, it should be made by creating a new version of the API.

More on what qualifies as breaking changes in APIs: https://stackoverflow.com/questions/1456785/a-definitive-guide-to-api-breaking-changes-in-net

Overview

In gDCC (and other products that expose APIs), OpenAPI is used to generate swagger.json files that represent our API specification. The main idea behind detecting breaking changes automatically is to compare the swagger files generated before and after the code changes. The best place to perform this check is in the pull request pipelines, where a pull request can be rejected if it introduces breaking changes.

Tool choice

Due to the open-source nature of the OpenAPI ecosystem, there are many libraries that attempt to detect breaking changes. oasdiff, used as a CLI tool, was chosen: at the time of implementation (October 2024) it is the most actively maintained and updated option, and it is written in Go, which allows us to add it to the pipeline with relative ease. Similar tools exist but fall short in different ways. The official Azure/openapi-diff project is not up to date with the newest OpenAPI specifications. OpenAPITools/openapi-diff requires Java, which is less practical for the pipeline setup than the Go tool, and its latest release is close to three years old at the time of writing (October 2024). NPM also provides a version of the tool, but at the time of writing (October 2024) it has an open issue that affects this project's swagger file.
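For illustration, a minimal local invocation could look like the sketch below; the file names are placeholders, and the install command should be confirmed against the oasdiff documentation for the version pinned in the pipeline.

```bash
# Illustrative only: install oasdiff (module path as documented upstream; verify
# against the oasdiff README) and compare a baseline spec with a revised one.
go install github.com/tufin/oasdiff@latest

# Prints the breaking changes found between the two files; prints nothing if there are none.
oasdiff breaking old_swagger.json swagger.json
```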

Prerequisites

Tracking an autogenerated file in git is usually not considered best practice. However, the way the breaking change detection is implemented requires that the swagger.json file is tracked by git and that its merge policy is set to binary. More on this choice in the Tracking the swagger file section.
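A sketch of one way to set this up, assuming the merge policy is driven by a .gitattributes entry and that the swagger file lives at a path like the placeholder below:

```bash
# Hypothetical path; 'merge=binary' is git's built-in binary merge driver, which
# forces a conflict so that one whole side of the file must be chosen during a merge.
echo 'src/MyApi/swagger.json merge=binary' >> .gitattributes
git add .gitattributes
```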

Pipeline steps

Once the changes are committed, the pull request pipeline performs the following steps (only the steps relevant to this topic are listed):

  1. Check out the old swagger file: The main branch is checked out and its swagger file is copied to a temporary file.
  2. Reset to a clean state: The swagger file is reset to the committed version, which represents the updated API specification.
  3. Install Go and the oasdiff tool: The needed folders are created and the specific versions of Go and the oasdiff tool are installed.
  4. Run the oasdiff tool: The oasdiff CLI tool compares the old and new swagger.json files and detects any breaking changes between the two versions. The script that performs this step checks whether the tool prints any results: if nothing is printed there are no breaking changes, and if breaking changes are detected the pipeline fails (a local sketch of these steps follows the list).
  5. Verify that the committed swagger file is correct: The committed swagger file is temporarily copied and then compared with the one generated by the build process.
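A minimal sketch of steps 1 to 4, runnable locally from the repository root; the swagger path is a placeholder, and the empty-output convention follows the description in step 4:

```bash
#!/usr/bin/env bash
set -euo pipefail

SWAGGER="src/MyApi/swagger.json"   # placeholder path to the tracked swagger file

# 1. Check out the old swagger file: copy the main branch version to a temporary file.
git fetch origin main
git show origin/main:"$SWAGGER" > /tmp/old_swagger.json

# 2. Reset to a clean state: keep the committed (updated) swagger file in the worktree.
git checkout -- "$SWAGGER"

# 3. Install Go (assumed to be present locally) and the oasdiff tool.
go install github.com/tufin/oasdiff@latest

# 4. Run oasdiff; any printed result means a breaking change, so fail in that case.
#    ('|| true' because some oasdiff versions/flags exit non-zero on breaking changes.)
RESULT="$("$(go env GOPATH)/bin/oasdiff" breaking /tmp/old_swagger.json "$SWAGGER" || true)"
if [ -n "$RESULT" ]; then
  echo "Breaking changes detected:"
  echo "$RESULT"
  exit 1
fi
echo "No breaking changes detected."
```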

While steps 1 to 4 are reusable and templated in azure-pipelines/templates/breaking_changes.yaml, step 5 differs because it depends on the build process of the specific application. In this case, a .dockerfile is used to specify the build steps, and the verification consists of the following substeps:

  1. Copy the committed swagger file to a temporary folder.
  2. Delete the one in the original location. This forces the build to generate a new swagger file, as it may not do so otherwise.
  3. Compare the contents of the built swagger file to the copied one.
  4. If they are identical, continue the pipeline and delete the copy. Otherwise, fail the pipeline.

The reason for these additional steps is to ensure that the committed swagger file accurately reflects the current state of our API, as it will be used as the baseline for comparison in future pull requests.
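A minimal sketch of the same verification expressed as shell commands; the build command and the swagger path are placeholders for whatever this project's .dockerfile actually runs:

```bash
SWAGGER="src/MyApi/swagger.json"   # placeholder path to the tracked swagger file

# 1. Copy the committed swagger file aside.
cp "$SWAGGER" /tmp/committed_swagger.json

# 2. Delete the original so the build is forced to regenerate it.
rm "$SWAGGER"

# 3. Run the build that regenerates the swagger file (placeholder command).
./build.sh

# 4. Compare the regenerated file with the committed copy; fail the pipeline if they differ.
if ! cmp -s "$SWAGGER" /tmp/committed_swagger.json; then
  echo "The committed swagger.json does not match the one generated by the build."
  exit 1
fi
rm /tmp/committed_swagger.json
```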

Design choices

Placing the checks in the pipeline

This was a straightforward choice, as the check is guaranteed to run and acts as a PR gate. The check can also be performed locally by executing the same steps as the pipeline.

Tracking the swagger file

This decision was discussed quite a bit, as it covers edge cases that other approaches could not. The key takeaways of those discussions are summarized below: the alternatives are compared and the key differences pointed out.

Mathematical properties of breaking changes as a relation

Let's define a relation D (written aDb) which states that API b does not introduce breaking changes to API a. We can then look at the properties of D:

  1. It is reflexive: aDa is true.
  2. It is not symmetric: aDb does not imply bDa.
  3. It is antisymmetric*: if aDb and bDa then a = b.
    • *Antisymmetric in the sense of API specifications; different code versions can have the same API specification.
  4. It is transitive: if aDb and bDc then aDc.

It can be concluded that D is an ordering relation, so in the cases further down you can think of it as a <= (less than or equal) relation; the implications are the same.
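The same properties written compactly (purely a restatement of the list above, with a, b, c ranging over API specifications):

```latex
\begin{align*}
  &\text{reflexive:}     && a\,D\,a \\
  &\text{not symmetric:} && a\,D\,b \not\Rightarrow b\,D\,a \\
  &\text{antisymmetric:} && (a\,D\,b \wedge b\,D\,a) \Rightarrow a = b \\
  &\text{transitive:}    && (a\,D\,b \wedge b\,D\,c) \Rightarrow a\,D\,c
\end{align*}
```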

Not using the newest swagger file

Without the formalization above, it might seem like a swagger file that is not the absolute newest could be used for comparison, for example the one that is currently in production and published remotely in our artifacts. Say the production code is at some version A and two pull requests, B and C, are open. B and C might each introduce no breaking changes relative to A, yet introduce breaking changes relative to each other, which would not be detected in this case. To draw a parallel with the <= relation: if A = 10, B = 15 and C = 12, the commit carrying B is accepted, and then C is compared with A and also accepted, yet it is not true that B <= C.

Store the newest swagger file in a remote location once the pull request is complete

Doing this could end up as a classic case of a "dirty read". Two pull requests could run their pipelines and take the stored value as the baseline. Both might succeed, and on completion both would attempt to update the remotely stored swagger file. The first one writes its changes successfully, and then the second one also succeeds, but it performed its comparison against a swagger file that is no longer the newest version, so its changes might actually introduce breaking changes compared to the result of the first pull request.

Merge policy for swagger file

A similar issue to the one mentioned above could occur if the swagger file were stored in git without the binary merge policy. The idea behind the binary merge policy is that it should not be possible to complete a pull request unless it has the newest version of swagger.json to compare the code with. This may seem too aggressive at first, and one might argue that merge conflicts in the code alone should be sufficient, so the following section is dedicated to exposing the flaws of that approach.

Code with no merge conflicts can cause breaking changes

One might argue that if two code changes do not cause merge conflicts, they cannot cause breaking changes. Here is an example of two changes that do not cause merge conflicts and individually do not cause breaking changes compared to main, yet cause a breaking change between each other.

In this case, the main branch contains class A, which has a property of type B. Pull request 1 exposes an endpoint that returns an instance of class A. Pull request 2 changes class A so that the property is of type C instead of B. If both pull requests are made at the same time, the check for breaking changes passes for each of them. Both might then be completed, with perhaps only pull request 1 included in the first release and pull request 2 in a second release. The second release then introduces a breaking change that the check never caught.
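To make the scenario concrete, here is an illustrative sketch at the OpenAPI level: pr1.yaml stands in for the swagger file after pull request 1 alone, and pr1_and_pr2.yaml for the swagger file after both pull requests, with the change of property b's type standing in for the switch from B to C. The file names, spec contents and oasdiff invocation are assumptions for demonstration only.

```bash
# Spec as it would look after pull request 1 alone: /items returns A, whose
# property b still has its original type.
cat > pr1.yaml <<'EOF'
openapi: 3.0.0
info:
  title: Demo
  version: "1.0"
paths:
  /items:
    get:
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/A'
components:
  schemas:
    A:
      type: object
      properties:
        b:
          type: string
EOF

# Spec after pull request 2 is applied on top: the type of property b has changed.
sed 's/type: string/type: integer/' pr1.yaml > pr1_and_pr2.yaml

# In this scenario each change on its own is not breaking relative to main (which does
# not expose /items), but comparing the two combined states reveals the breaking change.
oasdiff breaking pr1.yaml pr1_and_pr2.yaml
```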

Generate the code from the main branch and then compare it to the generated code of the pull request

This idea also suffers from a possible dirty read, and it is slow, as the build step would have to be performed twice.