AWS Lambda Node Timeout Issue

Some further details.

I’m deploying the code to the Lambda function, and it runs perfectly for about 2 hours: roughly 82 invocations, all served by one container.

Then Lambda dumps the container (perfectly normal) and a new container is created, obviously running the same Node.js code. This time, however, right from the first invocation, I’m getting the LCB_ERR_TIMEOUT error.

I’m racking my brains, but can’t see why this would happen.

Is Couchbase running out of connections? Is the old connection somehow being held open with no timeout? Can I just ‘hack’ this and open and close the connection with every invocation? I’m using SDK 3.0, and there doesn’t seem to be any command for closing a cluster connection.
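For what it’s worth, here is the shape of the workaround I’m considering: cache the connection across invocations, but throw it away and reconnect after a timeout instead of opening and closing on every call. This is only a sketch under my own assumptions; `makeConnectionCache`, `connect`, and `close` are hypothetical names, not Couchbase SDK APIs — the idea is that `connect` would wrap whatever the SDK’s connect call is, and `close` whatever teardown it offers (if any).

```javascript
// Hypothetical connection cache for a Lambda handler. The caller supplies
// connect/close functions (e.g. wrapping the Couchbase SDK); nothing here
// is an actual SDK API.
function makeConnectionCache(connect, close) {
  let cached = null;

  return {
    // Reuse the cached connection; only open a new one when none exists,
    // so a warm container pays the connect cost once.
    async get() {
      if (!cached) {
        cached = await connect();
      }
      return cached;
    },

    // Drop the cached connection (e.g. after an LCB_ERR_TIMEOUT) so the
    // next get() reconnects instead of retrying a dead socket.
    async reset() {
      if (cached) {
        try {
          await close(cached);
        } catch (err) {
          // Best effort: the old connection may already be unusable.
        }
        cached = null;
      }
    },
  };
}

module.exports = { makeConnectionCache };
```

The handler would then call `get()` at the top of each invocation and `reset()` in its error path, which avoids the per-invocation open/close cost while still recovering when the new container inherits (or hits) a stale connection.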