A Field Guide to Terrible APIs

You integrate with enough third-party APIs over the course of a career and you develop a kind of instinct. Within the first few hours of reading the docs, or more often, the first few hours of discovering there aren’t any, you can tell whether this integration is going to take a week or consume your will to live for the next three months.

I recently worked with two different APIs in quick succession. One was well-documented and a pleasure to integrate with. The other was not. The contrast was so sharp that it felt like a public service to catalog exactly what the bad one got wrong, in the hope that someone building an API might read this and quietly fix a few things before inflicting them on the rest of us.

This is that catalog.

Why does the same field have three different names?

The first sign you’re dealing with a troubled API is when the same concept shows up with different names depending on which endpoint you’re looking at. customerId in the order response. customer_id in the invoice endpoint. cust_ID in the webhook payload. All referring to the same value, all requiring mapping code that shouldn’t need to exist.

This usually means the API was built by multiple teams who never talked to each other, or by one team that never talked to themselves. Either way, you’re the one who gets to reconcile their naming disagreements in your integration layer.
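In practice that reconciliation ends up as a small translation layer you maintain forever. A minimal sketch, assuming the three aliases from above (the real alias table would come from each endpoint's actual payloads):

```python
# Hypothetical alias table: every spelling the API uses, mapped to one canonical name.
FIELD_ALIASES = {
    "customer_id": "customerId",  # invoice endpoint
    "cust_ID": "customerId",      # webhook payload
}

def normalize_keys(payload: dict) -> dict:
    """Rewrite every known alias to its canonical name, leaving other keys alone."""
    return {FIELD_ALIASES.get(key, key): value for key, value in payload.items()}
```

It is trivial code, which is exactly the complaint: it shouldn't need to exist.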

The good API I worked with picked camelCase and stuck with it. It’s such a low bar that it feels ridiculous to praise, and yet here we are.

What version of this API am I even using?

Versioning is one of those things that seems straightforward until you encounter an API that treats it as a suggestion. You’ll see /v2/ in the URL, but the response format matches what /v1/ used to return, except for two fields that were renamed without notice. Or some endpoints are on v3 while others are still on v1, and the migration guide references a v2 that no longer exists.

The best part is when a breaking change ships without a version bump. Your integration stops working on a Tuesday afternoon, you spend two hours debugging, and then you find a changelog entry dated three weeks ago that casually mentions the field you depend on now returns a different data type. No deprecation warning, no version increment. Just surprise.
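When the version number can't be trusted, the only defense is to assert your own contract at the integration boundary. A sketch, with hypothetical field names standing in for whatever your integration actually depends on:

```python
# Hypothetical contract: the fields and types this integration relies on.
EXPECTED_TYPES = {"id": str, "amount": int, "status": str}

def check_contract(payload: dict) -> list[str]:
    """Describe every field that is missing or has drifted from its expected type."""
    problems = []
    for field, expected in EXPECTED_TYPES.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            problems.append(
                f"{field}: expected {expected.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return problems
```

Run it on every response and the Tuesday-afternoon surprise becomes a loud, specific log line instead of two hours of debugging.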

A version number is a promise. If the promise means nothing, the number is decoration.

What does this field do?

You’re reading the API response and there it is: a field called status with a value of 7. What does 7 mean? The documentation says the field is a “string” (it is not a string) and offers no further explanation. No enum reference, no mapping table. Just a Slack thread from 2022 where someone asked the same question and was told to “refer to the docs.”

Undocumented enums force you into an archaeological expedition. You start making requests with different parameters, collecting the distinct values that come back, and reverse-engineering what each one means from context. It's like being handed a map with all the labels removed and being told the territory is well-documented.
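The expedition itself is easy to sketch. Tally the distinct values across captured responses, then record what you think each one means. The meanings below are invented for illustration; that's the point — with an undocumented enum, yours will be guesses too:

```python
import collections

def survey_field(responses: list[dict], field: str) -> collections.Counter:
    """Tally the distinct values of an undocumented field across captured responses."""
    return collections.Counter(r[field] for r in responses if field in r)

# Hypothetical mapping, reverse-engineered from context — documented nowhere.
STATUS_MEANINGS = {
    1: "pending",    # seen right after order creation
    4: "shipped",    # seen once a tracking number exists
    7: "cancelled",  # seen after a refund went through
}
```

A comment block like STATUS_MEANINGS is the documentation the vendor should have written, living in your codebase instead of theirs.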

The good API published every enum as a dedicated section in their docs, with human-readable descriptions and a note about when each value was added. It took someone maybe an afternoon to write. It saved me days.

Why is the order of my JSON keys a crime?

This one still haunts me. The API rejects your request if the keys in your JSON payload aren’t in a specific order. Not the values. The keys. The thing that the JSON specification explicitly says is unordered. The thing that most serialization libraries handle in whatever order they feel like. That thing has to be arranged just so, or you get a 400 back.

The documentation doesn’t mention this requirement. You discover it after an hour of comparing your request to the single example in the docs, character by character, until you notice that you put lastName before firstName and apparently that’s a problem.
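If you're stuck integrating anyway, the workaround in Python is that dicts preserve insertion order and json.dumps serializes keys in that order, so you build the payload in whatever order the API demands. The required order here is hypothetical, recovered by diffing against the docs' single example:

```python
import json

def build_payload(first_name: str, last_name: str) -> str:
    # Hypothetical required key order: firstName before lastName.
    # Python dicts keep insertion order, and json.dumps emits keys in
    # that order, so inserting keys in the demanded order is enough.
    payload = {"firstName": first_name, "lastName": last_name}
    return json.dumps(payload)
```

The fact that this workaround exists does not make the requirement acceptable.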

If your API requires ordered JSON keys, your parser is broken. That’s not a feature request. That’s a bug.

Why do these endpoints look like they were built by different companies?

You learn the patterns for the customer endpoints. Resources are fetched with GET, created with POST, responses come back in a consistent wrapper object with data and meta keys. You feel good. You build your integration layer with confidence.

Then you move on to the billing endpoints and everything is different. The list endpoint returns a raw array instead of the wrapper. The create endpoint expects query parameters instead of a request body. The date formats don't match. The authentication header is the same, which is genuinely the only reason you believe these endpoints belong to the same API.
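So you write a shim that flattens both shapes into one. A sketch, assuming the two shapes described above (a data/meta wrapper and a raw array):

```python
def unwrap(response_json):
    """Normalize both response shapes into a (items, meta) pair."""
    # Wrapper shape: {"data": [...], "meta": {...}}
    if isinstance(response_json, dict) and "data" in response_json:
        return response_json["data"], response_json.get("meta", {})
    # Raw-array shape: [...]
    if isinstance(response_json, list):
        return response_json, {}
    raise ValueError(f"unrecognized response shape: {type(response_json).__name__}")
```

Every branch in that function is a design decision the API vendor declined to make.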

This is what happens when an API grows organically without anyone stepping back to ask whether the whole thing makes sense as a product. Each endpoint works in isolation. Together they feel like a junk drawer.

What does “Error: An error occurred” mean?

You send a request. It fails. The response body says: {"error": "An error occurred"}. Thank you. Technically accurate and completely useless.

The variety of bad error messages is almost impressive. An empty 400 with no body. A 200 that succeeded at the HTTP level but contains an error buried three levels deep in the JSON. A 500 that returns an HTML error page because the API server is apparently also a website. My personal favorite is the error message that references an internal identifier you have no way to look up.
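Defending against that whole menagerie means writing an error extractor that assumes nothing. A sketch covering the failure modes above, with the candidate key names being my guesses rather than anything documented:

```python
import json

def extract_error(status: int, body: str) -> str:
    """Pull something human-readable out of whatever the API sends on failure."""
    if not body.strip():
        return f"HTTP {status} with empty body"
    if body.lstrip().startswith("<"):
        return f"HTTP {status} returned HTML instead of JSON"
    try:
        parsed = json.loads(body)
    except json.JSONDecodeError:
        return f"HTTP {status}: unparseable body"
    # The message hides under different keys depending on the endpoint;
    # check the usual suspects.
    for key in ("error", "message", "detail"):
        if isinstance(parsed, dict) and key in parsed:
            return f"HTTP {status}: {parsed[key]}"
    return f"HTTP {status}: {body[:200]}"
```

None of this code would exist if the API returned one structured error format.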

The good API returned structured errors with a code, a human-readable message, and a details array that told you exactly which field failed validation and why. When something went wrong, you knew what to fix. That’s genuinely all anyone is asking for.

Why am I debugging your product for you?

When you hit a bug in a well-designed API, you file a ticket, someone investigates, and you get a resolution. When you hit a bug in a bad API, you file a ticket and begin a months-long relationship with a support process that was apparently designed to test your patience.

It starts with a request for logs. Fair enough. Then they want you to reproduce the issue in their sandbox environment, which doesn’t behave like production. Then they ask you to try a slightly different payload, then another, then another. At some point you realize you’re doing their QA for them. You’re testing permutations of their own API, documenting results, and reporting back like an unpaid contractor.

The resolution, when it finally arrives weeks or months later, is a message that says: “We’ve deployed a fix. Can you confirm it’s working?” No details about what changed, no release notes. Just a question that puts the burden of verification back on you, the person who already did the investigation, the reproduction, and most of the diagnosis.

A support ticket that lasts three months is not a support ticket. It’s a project.

What the good API got right

The good API wasn’t built by a bigger company or a better-funded team. It was built by people who thought about the developer on the other end. Naming was consistent, errors told you what went wrong, and the docs covered the things you actually needed to know. When I filed a support ticket, the response included specifics instead of a request for more logs.

None of this is hard. It’s the difference between treating your API as a product that other people have to use and treating it as an implementation detail that happens to be exposed to the internet. One of those costs the API developer a few extra hours. The other costs every integrating developer days, repeatedly, forever.

I think about status: 7 more often than I’d like to admit.