OK. So the solution is finally here with Couchbase NodeJS SDK 3.1.1. The coding approach isn't much different from Couchbase NodeJS SDK 3.0.4, but it surely deals with that timeout problem. There is still one small issue, but it can be handled easily.
The first notable change in SDK 3.1.1 is that we no longer construct the Cluster object ourselves. So, we no longer connect to the cluster like this:
const couchbase = require('couchbase')

const cluster = new couchbase.Cluster('couchbase://XXXXXXXXXXXXXXXXXXX', {
  username: 'USERNAME',
  password: 'PASSWORD'
})
Instead, we do:
const couchbase = require('couchbase')

couchbase.connect('couchbase://XXXXXXXXXXXXXXXXXXX', {
  username: 'USERNAME',
  password: 'PASSWORD'
})
  .then(cluster => {
    // DO WHATEVER
  })
  .catch(err => {
    // AGAIN, DO WHATEVER
  })
Now, to set timeouts on the cluster connection, we can pass the timeout parameters (in milliseconds) alongside the username and password. So we can write the code like this:
const couchbase = require('couchbase')

couchbase.connect('couchbase://XXXXXXXXXXXXXXXXXXX', {
  username: 'USERNAME',
  password: 'PASSWORD',
  kvTimeout: 3600000,
  kvDurableTimeout: 3600000,
  viewTimeout: 3600000,
  queryTimeout: 3600000,
  analyticsTimeout: 3600000,
  searchTimeout: 3600000,
  managementTimeout: 3600000
})
  .then(cluster => {
    // DO WHATEVER
    cluster.analyticsQuery(QUERY_STRING, { ...OPTIONS })
      .then(result => {
        // the promise resolves with a result object; no (err, rows) callback here
        console.log(result.rows)
      })
      .catch(err => console.error(err))
  })
  .catch(err => {
    // AGAIN, DO WHATEVER
  })
And THIS WORKS. It surely does. This way, there is no more timeout error after 75 seconds, and analytics queries on big datasets can easily run for hours.
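For anyone who prefers async/await, the same working flow can be sketched like this (just an equivalent of the code above; the connection string, credentials, QUERY_STRING and OPTIONS are placeholders as before, and the function name runLongAnalytics is only for illustration):

const couchbase = require('couchbase')

// A sketch of the same flow using async/await instead of .then chains.
async function runLongAnalytics() {
  const cluster = await couchbase.connect('couchbase://XXXXXXXXXXXXXXXXXXX', {
    username: 'USERNAME',
    password: 'PASSWORD',
    kvTimeout: 3600000,
    kvDurableTimeout: 3600000,
    viewTimeout: 3600000,
    queryTimeout: 3600000,
    analyticsTimeout: 3600000,
    searchTimeout: 3600000,
    managementTimeout: 3600000
  })

  // analyticsQuery resolves with a result object; the rows live on result.rows
  const result = await cluster.analyticsQuery(QUERY_STRING, { ...OPTIONS })
  console.log(result.rows)
}

runLongAnalytics().catch(err => console.error(err))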
But there is a little problem. If we open a bucket to execute a certain operation and afterwards run the analytics query, it fails: the timeouts no longer apply to analytics. For example,
const bucket = cluster.bucket(BUCKET_NAME_STRING)
const collection = bucket.defaultCollection()

collection.get(DOCUMENT_KEY, (err, res) => {
  if (err)
    throw err
  else {
    cluster.analyticsQuery(QUERY_STRING, { ...OPTIONS })
      .then(result => console.log(result.rows))
      .catch(err => console.error(err))
  }
})
In the above piece of code, the analytics query will hit a timeout after 75 seconds. So basically, all the timeout options we passed alongside the username and password no longer work. To find the reason, I read the Couchbase package files inside the node_modules directory and found this:
Whenever we connect to a bucket for a certain operation [i.e., const bucket = cluster.bucket(BUCKET_NAME_STRING)], the SDK redefines the connection without passing the timeout parameters.
The cluster.bucket method inside the node_modules/couchbase/lib/cluster.js file returns a Bucket instance, whose constructor is defined inside node_modules/couchbase/lib/bucket.js.
The Bucket constructor accepts two parameters: the cluster object and the bucketName string. When we call the cluster.bucket method, we pass only the bucketName, which is handed down here. The Bucket constructor then calls the cluster._getConn method with { bucketName: bucketName } only, which re-creates the connection. This is where the timeout parameters get lost.
Solution:
I had to update two files specifically.
node_modules/couchbase/lib/cluster.js
I defined an instance property (an object)
this._user_timeout_opts = {
  kvTimeout: options.kvTimeout,
  kvDurableTimeout: options.kvDurableTimeout,
  viewTimeout: options.viewTimeout,
  queryTimeout: options.queryTimeout,
  analyticsTimeout: options.analyticsTimeout,
  searchTimeout: options.searchTimeout,
  managementTimeout: options.managementTimeout
}
just above this line:
this._connStr = connStr;
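For context, after this edit the relevant part of the Cluster constructor looks roughly like the sketch below. Only the _user_timeout_opts block is new; the surrounding lines stand in for the SDK's own code, which may differ slightly between versions:

constructor(connStr, options) {
  // ... the SDK's own option handling ...

  // added: remember the user's timeout options so buckets can reuse them
  this._user_timeout_opts = {
    kvTimeout: options.kvTimeout,
    kvDurableTimeout: options.kvDurableTimeout,
    viewTimeout: options.viewTimeout,
    queryTimeout: options.queryTimeout,
    analyticsTimeout: options.analyticsTimeout,
    searchTimeout: options.searchTimeout,
    managementTimeout: options.managementTimeout
  }

  this._connStr = connStr;

  // ... rest of the SDK's constructor ...
}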
node_modules/couchbase/lib/bucket.js
In the Bucket constructor, I replaced this code block:
this._conn = cluster._getConn({
  bucketName: bucketName
});
with this:
this._conn = cluster._getConn({
  bucketName: bucketName,
  ...cluster._user_timeout_opts
});
Now it doesn't matter how many times we connect to different buckets; the timeout parameters we passed while connecting to the cluster are carried into every new connection. Hope this helps.
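To wrap things up, here is a rough end-to-end sketch of the flow that used to fail, assuming the two patched files above are in place (BUCKET_NAME_STRING, DOCUMENT_KEY, QUERY_STRING and OPTIONS are placeholders, as in the earlier snippets):

const couchbase = require('couchbase')

couchbase.connect('couchbase://XXXXXXXXXXXXXXXXXXX', {
  username: 'USERNAME',
  password: 'PASSWORD',
  kvTimeout: 3600000,
  kvDurableTimeout: 3600000,
  viewTimeout: 3600000,
  queryTimeout: 3600000,
  analyticsTimeout: 3600000,
  searchTimeout: 3600000,
  managementTimeout: 3600000
})
  .then(cluster => {
    // with the patch, the bucket connection inherits the timeouts above
    const bucket = cluster.bucket(BUCKET_NAME_STRING)
    const collection = bucket.defaultCollection()

    return collection.get(DOCUMENT_KEY)
      .then(res => {
        // ... use res.content as needed ...
        // the long-running analytics query no longer times out after 75 seconds
        return cluster.analyticsQuery(QUERY_STRING, { ...OPTIONS })
      })
      .then(result => console.log(result.rows))
  })
  .catch(err => console.error(err))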