One solution is to investigate whether the problem is caused by the elasticsearch client creating a new HTTP connection for each action. I'm not actually certain that my local instance creates this many connections: following the port allocations between the client and server locally with lsof, the same ports always seem to stay allocated. Perhaps the churn is only an artefact of nginx acting as a proxy.
If we do find that this is the case, it should be possible to tweak the client in such a way that the connections are kept alive and reused (although I hope, and think, this is actually what we are already doing).
Another solution is to stop using the elasticsearch client and use a plain JS HTTP client instead. This would take some time and might introduce a load of bugs; we would also lose the retrying, node detection etc. functionality that the client already gives us.
Yet another solution is to deploy our own elasticsearch cluster on the wikifactmine labs project. Unfortunately, since that is still a shared environment, we would still need to front it with our own nginx proxy (or similar) to prevent unrestricted access to the server. However, we would have direct access to the proxy's configuration and logs, which might mean we can introduce a config change to the proxy that solves the problem.
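If we did control the proxy, the relevant change would likely be enabling upstream keep-alive, which nginx does not do by default. A sketch of the directives involved (names are real nginx directives, but the values and layout here are illustrative, not tested against our setup):

```
upstream elasticsearch {
    server 127.0.0.1:9200;
    keepalive 16;                    # pool of idle connections held to the upstream
}

server {
    location / {
        proxy_pass http://elasticsearch;
        proxy_http_version 1.1;          # upstream keep-alive requires HTTP/1.1
        proxy_set_header Connection "";  # clear "Connection: close" from clients
        # plus auth / rate limiting to prevent unrestricted access
    }
}
```

Without `proxy_http_version 1.1` and the cleared `Connection` header, nginx opens a fresh upstream connection per request even if the client side keeps its own connection alive.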