I run a 4-node cluster with a basic setup, and since we had some issues losing one host after a VMware upgrade went south, we are making sure this won't happen again. One interesting thing I noticed is that if one of my nodes goes down, I lose all indexes for one bucket while another bucket's indexes are still available.
Is there a way to make indexes redundant? It is nice to have data redundancy, but if you can't access the data because of a missing index, it's not that redundant after all.
What is the ideal node count and configuration for data and index redundancy?
If using EE, use replicas: Availability and Performance | Couchbase Docs.
- WITH {"nodes": ["node1:8091", "node2:8091", "node3:8091"]}
- WITH {"num_replica": 2}
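For example, in full CREATE INDEX statements (bucket and field names here are hypothetical; note that num_replica counts extra copies, so 2 means three copies total):

```sql
-- Let Couchbase place the replicas (1 active + 2 replicas)
CREATE INDEX idx_type ON `travel-sample`(type) WITH {"num_replica": 2};

-- Or pin the index and its replicas to specific nodes
CREATE INDEX idx_city ON `travel-sample`(city)
WITH {"nodes": ["node1:8091", "node2:8091", "node3:8091"]};
```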
If using CE, there are no partitioned indexes and no index replication. But you can get high availability by creating duplicate indexes, described in Point 9 (Duplicate Index) of Create the Right Index, Get the Right Performance.
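A sketch of the CE workaround (index and node names are illustrative; two indexes with identical definitions but different names are placed on different nodes, and the query engine load-balances across the equivalent indexes):

```sql
-- Duplicate definitions, different names, one node each
CREATE INDEX idx_city_1 ON `travel-sample`(city) WITH {"nodes": ["node1:8091"]};
CREATE INDEX idx_city_2 ON `travel-sample`(city) WITH {"nodes": ["node2:8091"]};
```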
So in EE I can specify which servers should hold a replica via WITH nodes, or if I want Couchbase to decide, I can use num_replica. Which I assume means I have to go and drop the old indexes and recreate them…
Is there a way to actually get info on which node holds which index? I don't see any info on that.
Also, if you set indexer.settings.num_replica to a number higher than 0, will that take care of previously created indexes, or only of new ones created after the value has been raised above 0?
Hi @makeawish ,
You can use ALTER INDEX: ALTER INDEX | Couchbase Docs
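For example, to raise the replica count on an existing index (names here are hypothetical; see the linked docs for the exact actions your server version supports):

```sql
-- Add replicas to an existing index without dropping it
ALTER INDEX `travel-sample`.idx_city
WITH {"action": "replica_count", "num_replica": 2};
```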
@deepkaran.salooja will answer specific questions.
Thanks, but that isn't much less work, and I'm not sure how big the performance gain is vs. drop and create. With ALTER I still need to go and get all the indexes' CREATE statements and append WITH {"num_replica": X}.
On that note, can I pull out via N1QL the original CREATE INDEX syntax for all indexes of a bucket?
@deepkaran.salooja has expertise in this. Let him get back on that.
Using N1QL to drop Index and recreate covers how to get the index statements.
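A sketch of the underlying metadata query (the exact fields exposed by system:indexes can vary by server version, so verify against your release):

```sql
-- List index definitions for one bucket from the system catalog
SELECT name, keyspace_id, index_key, `condition`, state
FROM system:indexes
WHERE keyspace_id = "travel-sample";
```

From these fields (name, indexed keys, filter condition) the original CREATE INDEX statements can be reconstructed.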
@makeawish, ALTER INDEX is more efficient, as it only needs to create the extra replica. Drop/recreate takes longer because the original index gets rebuilt as well. And you may need to create the new index with extra replicas first, before dropping the old one, to avoid downtime. Depending on how many resources are available in the cluster, this may slow down index creation.
So ALTER will not drop the old index and is smart enough to know the only change is index replication? What's the default behavior, and how does Couchbase select on which node to place a bucket's indexes?
Can one query where the indexes for a bucket are stored? In my case, on a 4-node cluster, when I lost the node which held the index for a bucket, I could no longer use it, since apps complained that there was no primary index. Let's assume I lose the node which holds the index for my bucket: if I do a failover and rebalance, do the other nodes have a copy of the index, or what would happen?
FYI: each replica is created on a different index node. With num_replica=1 you have two copies (master and replica), so the index is still available if one index node goes down. If two index nodes go down, it might be a problem (if the index exists only on those nodes).
https://docs.couchbase.com/server/5.5/clustersetup/rebalance.html#rebalancing-the-index-service
So ALTER will not drop the old index and is smart enough to know the only change is index replication?
Yes, that’s right.
What's the default behavior, and how does Couchbase select on which node to place a bucket's indexes?
Indexer tries to find the best placement based on resource utilization and HA constraints. More details here.
Can one query where the indexes for a bucket are stored? In my case, on a 4-node cluster, when I lost the node which held the index for a bucket, I could no longer use it, since apps complained that there was no primary index.
This information is available on the Admin Console.
Let's assume I lose the node which holds the index for my bucket: if I do a failover and rebalance, do the other nodes have a copy of the index, or what would happen?
If you create an index with a replica, then yes. If one node fails over and that index copy becomes unavailable, the replica will handle all the traffic. Also note that whenever the failed-over node is recovered, the index automatically becomes available again, regardless of whether there is a replica or not.