I run two indexer nodes in a cluster. The OS is Ubuntu 16.04. The machine has 6 cores, 12 threads, and 64 GB of RAM. I have around 4.2M documents and 3 indexes that cover about 400k documents.
The following error occurs:
Service ‘indexer’ exited with status 134. Restarting. Messages:
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.7.3/go/src/runtime/proc.go:259 +0x13a fp=0xc4275cec30 sp=0xc4275cec00
runtime.selectgoImpl(0xc4275cef28, 0x0, 0x18)
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.7.3/go/src/runtime/select.go:423 +0x11d9 fp=0xc4275cee58 sp=0xc4275cec30
runtime.selectgo(0xc4275cef28)
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.7.3/go/src/runtime/select.go:238 +0x1c fp=0xc4275cee80 sp=0xc4275cee58
github.com/couchbase/gometa/protocol.(*messageListener).start(0xc42ef8ca50)
/home/couchbase/jenkins/workspace/couchbase-server-unix/goproj/src/github.com/couchbase/gometa/protocol/leader.go:403 +0x4d3 fp=0xc4275cefb8 sp=0xc4275cee80
runtime.goexit()
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.7.3/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4275cefc0 sp=0xc4275cefb8
created by github.com/couchbase/gometa/protocol.(*Leader).AddWatcher
/home/couchbase/jenkins/workspace/couchbase-server-unix/goproj/src/github.com/couchbase/gometa/protocol/leader.go:256 +0x45d
[goport(/opt/couchbase/bin/indexer)] 2018/07/09 04:39:58 child process exited with status 134
I am using Couchbase 5.1, installed on a VirtualBox VM running 2 indexers. I believe (but am not sure) I got into this state (the indexers are perpetually warming up) by rebooting my VM a few times. I am confident it has nothing to do with the number of objects in Couchbase, as we currently only have a few dozen. I would like to resolve this issue. If you need to see log information, please let me know what would be helpful.
Hi Deepkaran, right now our Couchbase “sandbox” is a single node; if the indexes are corrupt on this single node, we obviously cannot failover/rebalance. How would we go about rebuilding the indexes? Dropping them and recreating them?
In any event, in a multi-node scenario, would the failover/rebalance sequence be done via the dashboard or CLI? If CLI, what would be the commands to execute?
@steve.cecutti, you may try to drop the index, but if the indexer is repeatedly failing due to a corrupt disk file, it may not be able to process the request. In that case, for a single-node setup, the only option is to do a fresh install.
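If the drop does go through, a rebuild is just a drop followed by a create. A rough sketch using cbq (the bucket name, index name, indexed field, and credentials below are all placeholders, not taken from this thread):

    # Drop the suspect index (placeholder bucket/index names).
    cbq -e http://localhost:8093 -u Administrator -p password \
        -s "DROP INDEX \`mybucket\`.\`idx_docs\`;"

    # Recreate it with a deferred build, then build it explicitly.
    cbq -e http://localhost:8093 -u Administrator -p password \
        -s "CREATE INDEX \`idx_docs\` ON \`mybucket\`(type) WITH {\"defer_build\": true};"
    cbq -e http://localhost:8093 -u Administrator -p password \
        -s "BUILD INDEX ON \`mybucket\`(\`idx_docs\`);"

The same statements can be run from the Query Workbench in the UI if you prefer.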
For a multi-node scenario, failover/rebalance can be done via both the UI and the CLI. You can check out the CLI documentation here.
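As a sketch of the CLI route (host names and credentials are placeholders), a hard failover of a bad index node followed by a rebalance would look something like:

    # Hard failover of the failed node (--hard forces the failover even
    # when the node is unresponsive).
    couchbase-cli failover -c good-node:8091 -u Administrator -p password \
        --server-failover bad-node:8091 --hard

    # Rebalance the remaining nodes.
    couchbase-cli rebalance -c good-node:8091 -u Administrator -p password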
Sorry for reviving an old thread, but I am running into exactly the same problem, and I am on the latest EC2 AMI build.
I have a single node, as I am evaluating Couchbase, and my indexes keep cycling over and over into the warming up state. I don’t see any panic messages in the logs; the dashboard reports 4.8 GB of RAM unallocated and 70 GB of free disk space. However, I do see one of the following 2 messages when this happens:
So I have a new node up and running, but to actually do a cbcollect I need shell access to the system. Unfortunately, it seems that the AWS AMI provided by Couchbase disables this functionality, or at the very least renames the default user name associated with the private key that is installed on the system during the bootstrap process.
@Antek - You can do a cbcollect from the Logs → Collect Information tab in the UI, choose the “upload to Couchbase” option, and share the link with me if you are still seeing the indexer exited issue.
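If shell access does work out, the command-line equivalent is cbcollect_info; a minimal run on a default Linux install (the output path is just an example) would be:

    # Gather logs and diagnostics from this node into one zip file.
    sudo /opt/couchbase/bin/cbcollect_info /tmp/cbcollect.zip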
@Antek, if you still need shell access, log in as the default user that Couchbase creates, and then you can always change the default ssh configuration that the Couchbase AMI ships with using sudo.
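For example, assuming the default user is couchbase and a stock sshd setup (both assumptions, so check your AMI’s notes):

    # Log in with the key pair selected at instance launch; the user name
    # "couchbase" is an assumption based on the post above.
    ssh -i ~/.ssh/my-ec2-key.pem couchbase@<ec2-public-dns>

    # Then edit the ssh daemon config and reload it.
    sudo vi /etc/ssh/sshd_config
    sudo service ssh reload    # the service may be named sshd on some distros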