@paulharter
The replicator stores a checkpoint doc on the target DB; in Couchbase Server these have a _sync:local: name prefix.
If the local checkpoint doc is missing on the target DB when a replication starts, the replication will start from sequence 0.
There is no state stored to local disk on the Sync Gateway instances.
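If you want to check for a checkpoint without opening the bucket, something along these lines should work against the admin API's _local document endpoint. Note that the checkpoint ID below is only a placeholder; the real ID is generated internally by the replicator.

import requests

ADMIN = "http://localhost:4985"
# Hypothetical ID -- the real checkpoint doc ID is generated by the replicator
CHECKPOINT_ID = "checkpoint-id-placeholder"

resp = requests.get("{0}/target/_local/{1}".format(ADMIN, CHECKPOINT_ID))
if resp.status_code == 200:
    print(resp.json())  # e.g. {"_rev": "0-5", "lastSequence": "5"}
else:
    # A 404 here means no checkpoint, so the next replication run
    # starts again from sequence 0
    print("no checkpoint:", resp.status_code)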
I ran a scenario similar to yours using a single Sync Gateway instance with two buckets. I did not see any issues with this test; here is my config:
{
  "log": ["*"],
  "adminInterface": "0.0.0.0:4985",
  "replications": [
    {"source": "http://localhost:4985/source/", "target": "http://localhost:4985/target/", "continuous": true, "replication_id": "continuousA-B"},
    {"source": "http://localhost:4985/target/", "target": "http://localhost:4985/source/", "continuous": true, "replication_id": "continuousB-A"}
  ],
  "databases": {
    "source": {
      "server": "http://localhost:8091",
      "bucket": "bucket-1",
      "users": {
        "GUEST": {"disabled": false, "admin_channels": []}
      }
    },
    "target": {
      "server": "http://localhost:8091",
      "bucket": "bucket-2",
      "users": {
        "GUEST": {"disabled": false, "admin_channels": []}
      }
    }
  }
}
I flushed both buckets before starting SG.
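Once SG is up, a quick way to confirm both continuous replications are running is a sketch like the following, assuming the 1.x admin API, which lists replications under _active_tasks:

import requests

# List the running replications via the admin API's _active_tasks endpoint
tasks = requests.get("http://localhost:4985/_active_tasks").json()
for task in tasks:
    print(task.get("replication_id"), task.get("source"), "->", task.get("target"))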
I added two documents to DB A and two documents to DB B. All 4 docs were replicated to both DBs A and B.
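For reference, the equivalent steps through the admin REST API look roughly like this; the doc IDs and bodies are just examples:

import time
import requests

ADMIN = "http://localhost:4985"

# Create two docs in each database (IDs and bodies are arbitrary examples)
for db, doc_ids in (("source", ("a-1", "a-2")), ("target", ("b-1", "b-2"))):
    for doc_id in doc_ids:
        requests.put("{0}/{1}/{2}".format(ADMIN, db, doc_id), json={"created_in": db})

time.sleep(5)  # give the continuous replications a moment to catch up

# Expect all four doc IDs to appear in both databases
for db in ("source", "target"):
    rows = requests.get("{0}/{1}/_all_docs".format(ADMIN, db)).json()["rows"]
    print(db, sorted(row["id"] for row in rows))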
In each CBS bucket there was a single _sync:local: doc with the following content:
{
  "_rev": "0-5",
  "lastSequence": "5"
}
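You can also read the raw doc straight out of the bucket. A minimal sketch, assuming the Python couchbase SDK 2.x API and a placeholder key (the exact key is the _sync:local: prefix plus an ID the replicator generates; you can find it in the CBS documents UI):

from couchbase.bucket import Bucket

bucket = Bucket("couchbase://localhost/bucket-1")
# Hypothetical key suffix -- the real key is "_sync:local:" plus a replicator-generated ID
key = "_sync:local:checkpoint-id-placeholder"
print(bucket.get(key).value)  # e.g. {"_rev": "0-5", "lastSequence": "5"}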
I shut down SG and flushed bucket A.
After restarting SG, DB A contained all 4 docs.
In the CBS bucket for SG DB B the _sync:local: doc was unchanged; in the CBS bucket for SG DB A the _sync:local: doc content was:
{
  "_rev": "0-1",
  "lastSequence": "5"
}