I have one collection where each document has a `type` field. All documents with the same `type` value share the same schema (some properties may be missing), but documents with different `type` values have different schemas.
{
  "type": "type1",
  "s1": "",
  .
  .
  "s100": "",
  "someOtherKeys32": ""
}
{
  "type": "type2",
  "s1": "",
  .
  .
  "s100": "",
  "someOtherKeys12": ""
}
I have also given users the ability to query using any of the keys s1…s100. For that I have created 100 covering indexes, each a combination of the `type` field and one key:
[type, s1],
[type, s2], up to
[type, s100]
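In SQL++, the index definitions described above would look roughly like this (the keyspace name `myBucket` and the index names are placeholders, not from the original post):

```sql
-- One covering index per queryable key, 100 in total (placeholder names).
CREATE INDEX idx_type_s1   ON myBucket(`type`, s1);
CREATE INDEX idx_type_s2   ON myBucket(`type`, s2);
-- ... one per key ...
CREATE INDEX idx_type_s100 ON myBucket(`type`, s100);
```

A query such as `SELECT s1 FROM myBucket WHERE type = "type1" AND s1 = "x"` would then be covered by `idx_type_s1`.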
So what happens when a new record is added that does not use any field from s1…s100, like
{
  "type": "type3",
  "nonIndexField": "RandomValue"
}
is that all 100 indexes get updated, and in the UI I see N mutations remaining.
What could be a good design for this?
The `type` field can have N different values, so I don't think partitioning via scopes and buckets will help me here.