If this is causing you a problem, it will cause your putative ecosystem of developers a problem (e.g. when they try to develop an alternative UI). If you are really eating your own dog food, make the API (and the rate limiting) work for your application. Here are some suggestions:
- Do not rate limit by IP address. Instead, rate limit by something tied to the user, e.g. their user ID, and apply the limit at the authentication stage.
- Design your API so that users do not need to call it continuously (e.g. provide a list call that returns many results, rather than a repeated call that returns one item at a time).
- Design your web app under the same constraints you expect your developer ecosystem to have, i.e. make sure it works within reasonable throttling rates.
- Ensure your back end is scalable (preferably horizontally) so you don't need to impose throttling at levels so low they actually cause a problem for a UI.
- Ensure your throttling can cope with bursts as well as limiting longer-term abuse.
- Ensure your throttling takes sensible actions tailored to the abuse you are trying to curb. For instance, consider queuing or delaying mild abusers rather than refusing the connection. Most web front ends will only open four simultaneous connections at once, so if you delay an attempt to open a fifth you'll only hit the case where someone is using a CLI at the same time as the web client (or two web clients). If you delay the n-th API call in a burst rather than failing it, the end user sees things slow down rather than break. Combine this with queuing at most N API calls at once and you will only affect people who are parallelising large numbers of API calls, which is probably not the behaviour you want anyway - e.g. 100 simultaneous API calls followed by an hour's gap is normally far worse than 100 sequential API calls spread over an hour.
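The burst-tolerant, delay-rather-than-reject behaviour described above can be sketched as a token bucket keyed by user ID. This is illustrative, not a production implementation: the names (`throttle`, `TokenBucket`) and the limits are made up, and a real deployment would need per-process locking and shared state (e.g. Redis) across servers.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    """Per-user token bucket: allows short bursts, enforces a long-term rate."""
    rate: float          # tokens refilled per second (long-term limit)
    burst: float         # bucket capacity (burst allowance)
    tokens: float = 0.0
    last: float = field(default_factory=time.monotonic)

    def delay_for_next_call(self) -> float:
        """Return 0 if the call may proceed now, else seconds to wait.

        Delaying (instead of refusing) means abusers see a slowdown,
        not an error."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return 0.0
        return (1.0 - self.tokens) / self.rate

# Keyed by user ID, not IP address.
buckets: dict[str, TokenBucket] = {}

def throttle(user_id: str, rate: float = 1.0, burst: float = 5.0) -> float:
    bucket = buckets.setdefault(
        user_id, TokenBucket(rate=rate, burst=burst, tokens=burst)
    )
    return bucket.delay_for_next_call()
```

With these example numbers a user can burst five calls immediately, after which further calls are delayed to one per second rather than rejected.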
Did this not answer your question? Well, if you really need to do what you are asking, rate limit at the authentication stage and apply a different limit based on the group the user belongs to. One set of credentials (used by your devs and QA team) gets a higher rate limit than the rest. But you can immediately see why this will inevitably lead to your ecosystem seeing issues that your dev and QA teams do not.
My company has developed a rate-limited API. Our goal is twofold:
- A: Create a strong developer ecosystem around our product.
- B: Demonstrate the power of our API by using it to drive our own application.
Clarification: Why rate-limit at all?
We rate limit our API because we sell it as an addition to our product. Anonymous access to our API has a very low threshold of API calls per hour, whereas our paid customers are permitted upwards of 1,000 calls per hour.
So the question is:
How do you secure an API so that rate limiting can be lifted for trusted clients, in a way that cannot be easily spoofed?
Explored Solutions (and why they didn't work)
Verify the Referer header against the Host header. -- Flawed because the Referer header is easily faked.
Proxy the request and sign the request in the proxy -- Still flawed, as the proxy itself exposes the API.
I am looking to the brilliant minds on Stack Overflow to present alternate solutions. How would you solve this problem?
Unfortunately, there is no perfect solution to this.
Can you stand up a separate instance of the UI and throttle-free API, and then restrict access to IP addresses coming from your organisation?
E.g., deploy the whole thing behind your corporate firewall, and attach the application to the same database as the public-facing instance if you need to share data between instances.
- Whitelist source IP addresses
- Use a VPN, whitelist VPN members
- A proxy solution or browser add-on that adds HTTP headers should be fine if you can secure the proxy and aren't concerned about MITM attacks sniffing the traffic
- Any solution involving secrets can mitigate the impact of leaks by rotating secrets on a daily basis
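The daily secret rotation in the last bullet can be sketched by deriving a per-day signing key from a master secret, so a leaked key is only useful for about a day. This is an assumption-laden illustration (the master secret, payload format, and one-day skew window are all made up), not the asker's actual scheme.

```python
import datetime
import hashlib
import hmac

MASTER_SECRET = b"replace-with-a-real-secret"  # illustrative placeholder

def daily_key(day: datetime.date) -> bytes:
    """Derive a per-day key; rotating daily limits the blast radius of a leak."""
    return hmac.new(MASTER_SECRET, day.isoformat().encode(), hashlib.sha256).digest()

def sign(payload: bytes, day: datetime.date) -> str:
    return hmac.new(daily_key(day), payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str, day: datetime.date) -> bool:
    # Accept today's and yesterday's key to tolerate clock skew at rotation.
    for d in (day, day - datetime.timedelta(days=1)):
        if hmac.compare_digest(sign(payload, d), signature):
            return True
    return False
```

Clients (your own UI, or trusted partners) sign each request with the current day's key; anything signed with an older key simply stops verifying.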