We’re looking at switching from views to N1QL and have a question about best practices for creating indexes.
In our current setup, each service has a rake task (we’re using Ruby for the most part) that looks through all the map/reduce scripts it needs and adds any that aren’t already there. This task runs each time an instance of the service starts, and because the views live on the cluster, the views are still guaranteed to be there even if the admins add or remove Couchbase nodes.
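For context, the task boils down to something like the following sketch. The names and view definitions here are illustrative, not our real code; the actual task also handles talking to the cluster:

```ruby
# Sketch of the idempotent sync logic our rake task performs:
# given the map/reduce definitions a service needs and the views
# already on the cluster, work out which ones still need publishing.
# (All names and definitions below are made up for illustration.)
DESIRED_VIEWS = {
  "by_email"   => "function (doc, meta) { if (doc.email) emit(doc.email, null); }",
  "by_created" => "function (doc, meta) { if (doc.created_at) emit(doc.created_at, null); }"
}

def views_to_publish(existing_view_names)
  # Keep only the views the cluster doesn't already have
  DESIRED_VIEWS.reject { |name, _| existing_view_names.include?(name) }
end

# On startup, each service publishes only what is missing:
to_publish = views_to_publish(["by_email"])
to_publish.each_key { |name| puts "publishing view #{name}" }
```

Because the task only adds what’s missing, it’s safe to run from every instance on every start.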
It seems this plan won’t work with GSI indexes because each index is tied to a specific node. If a service adds an index it needs, the index gets placed on node1, and that node is later taken down, the index is no longer available (which could essentially “crash” our app).
How do people handle this in practice? We don’t want our services to have to know anything about Couchbase nodes or keep track of which indexes are available where.
- Would this all be done “externally” by the DevOps team? They don’t know which indexes each service needs.
- Is the solution to add all the indexes to every node and let Couchbase decide which one to use? If so, when is this done?
- Should each query create the index it needs, on the assumption that it will simply error out if the index already exists? This seems pretty inefficient if you’re handling thousands of queries a second.
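To make that last option concrete, the closest thing to our current pattern I can imagine is a one-time startup check rather than a per-query CREATE INDEX. This is a hypothetical sketch; the index names and statements are made up, and the list of existing names would come from querying `system:indexes`:

```ruby
# Hypothetical startup check: compare the indexes a service needs
# against what the cluster reports (e.g. the names returned by
# "SELECT name FROM system:indexes") and create only the missing ones.
REQUIRED_INDEXES = {
  "idx_users_by_email" => "CREATE INDEX idx_users_by_email ON `users`(email)",
  "idx_orders_by_date" => "CREATE INDEX idx_orders_by_date ON `orders`(created_at)"
}

def statements_to_run(existing_index_names)
  # Return only the CREATE INDEX statements we still need to execute
  REQUIRED_INDEXES.reject { |name, _| existing_index_names.include?(name) }.values
end

# Run once at service startup, not on every query:
statements_to_run(["idx_users_by_email"]).each { |stmt| puts stmt }
```

Even with this, the node-placement problem above remains: the check tells us the index exists somewhere, not that it will survive a node being removed.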
Unless I’m missing something (which I certainly might be), there seems to be a lot of opportunity here for indexes to get out of sync with the services that consume them.
Thanks for any help,
James