In our environment, we have:
A few of our processes have optimization problems and can take a second or two to resolve. If a user gets click-happy, they can trigger the following:
We've been able to mitigate this somewhat in two ways:

1. Stopping some of the button-triggered requests (i.e., shoehorning in JS to disable the button that was causing the request).
2. Implementing NGINX rate limiting.
The problem we get is that at scale it doesn't take much to bog down the system.
In plain English, all I want to do is:
If the same requester asks for the same thing three times in a short period, stop passing that through to the application for a time.
It appears that NGINX rate limiting doesn't allow this (unless I'm missing something).
AWS's Web Application Firewall rate-based rules cover "the maximum number of requests from a single IP address that are allowed in a five-minute period," with a minimum of 2000.
I'm looking more for "3 times in 10 seconds" not "2000 times in 5 minutes".
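For what it's worth, `limit_req_zone` keys can combine variables, so keying the zone on client IP plus request URI might approximate "same requester, same thing." This is only a sketch: the zone name, rate, and burst values below are placeholders, and it assumes identical requests share the same URI.

```nginx
# Hypothetical sketch: key the limit on client IP + request URI, so only
# repeats of the *same* request count against the same bucket.
# rate=18r/m is roughly 3 requests per 10 seconds; burst=3 nodelay lets
# the first 3 through immediately and rejects the excess.
limit_req_zone $binary_remote_addr$request_uri zone=dupes:10m rate=18r/m;

server {
    location /app/ {
        limit_req zone=dupes burst=3 nodelay;
        limit_req_status 429;   # reject with 429 instead of the default 503
        proxy_pass http://backend;
    }
}
```

One caveat: `$request_uri` doesn't include POST bodies, so two POSTs to the same endpoint with different payloads would look identical to this rule.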
Perhaps it's something we'll need to include at the application layer.
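If it does end up at the application layer, a minimal in-process sketch of "same requester asking for the same thing more than 3 times in 10 seconds gets refused" could look like the following. All names and thresholds here are made up for illustration, and a real deployment with multiple workers would need a shared store (e.g., Redis) instead of process-local memory.

```python
import hashlib
import time
from collections import defaultdict, deque

WINDOW = 10.0  # seconds to remember identical requests
LIMIT = 3      # identical requests allowed per window

# fingerprint -> timestamps of recent identical requests (process-local)
_seen = defaultdict(deque)

def fingerprint(requester, method, path, body=b""):
    """Hash requester identity + request details into one dedup key."""
    h = hashlib.sha256()
    for part in (requester, method, path):
        h.update(part.encode())
    h.update(body)
    return h.hexdigest()

def allow(requester, method, path, body=b"", now=None):
    """Return False if this exact request has already been seen
    LIMIT times within the last WINDOW seconds."""
    now = time.monotonic() if now is None else now
    stamps = _seen[fingerprint(requester, method, path, body)]
    # Drop timestamps that have aged out of the window.
    while stamps and now - stamps[0] > WINDOW:
        stamps.popleft()
    if len(stamps) >= LIMIT:
        return False
    stamps.append(now)
    return True
```

A middleware would call `allow()` before dispatching and return 429 when it comes back `False`, so the expensive process never starts for the duplicate.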
Mostly I'm fishing for strategies. We can't be the only ones to have this problem where long-running application processes chew up resources even though they've been cancelled.
Is there a silver bullet method for dropping these requests?
Is there a way to actually cancel the application process after NGINX aborts the connection?
Is there a way in NGINX or ELB/WAF to deny identical requests?