I have been encountering an error when bulk-inserting data with the upsert function and cannot figure out how to fix it. Does anyone know what is wrong here? The program essentially grabs data from a SQL Server database and loads it into our Couchbase bucket on an Amazon instance. It does initially begin loading, but after about 10 or so upserts it crashes.
My error is as follows: "Collection was modified; enumeration operation may not execute." Here are the screenshots of the error (sorry, the error is only reproducible on my other Amazon server instance, not locally): http://imgur.com/a/ZJB0c
Here is the function that calls the upsert method. It is called multiple times, since I retrieve only part of the data at a time because the SQL table is very large.
private void receiptItemInsert(double i, int k)
{
    const int BATCH_SIZE = 10000;
    APSSEntities entity = new APSSEntities();
    var data = entity.ReceiptItems.OrderBy(x => x.ID).Skip((int)i * BATCH_SIZE).Take(BATCH_SIZE);
    var joinedData = from d in data
                     join s in entity.Stocks
                         on new { stkId = (Guid)d.StockID } equals new { stkId = s.ID } into ps
                     from s in ps.DefaultIfEmpty()
                     select new { d, s };
    var stuff = joinedData.ToList();
    var dict = new Dictionary<string, dynamic>();
    foreach (var ri in stuff)
    {
        var ritem = new CouchModel.ReceiptItem(ri.d, k, ri.s);
        string key = "receipt_item:" + k.ToString() + ":" + ri.d.ID.ToString();
        dict.Add(key, ritem);
    }
    entity.Dispose();
    using (var cluster = new Cluster(config))
    {
        // open buckets here
        using (var bucket = cluster.OpenBucket("myhoney"))
        {
            bucket.Upsert(dict); // CRASHES HERE
        }
    }
}
Looks like you’ve stumbled upon a bug!
Just to clarify: does this error happen consistently? And is the batch upsert the very first key/value Couchbase operation you execute?
From the stacktrace, the culprit seems to be here.
My theory: probably due to the high concurrency of the batch, thread B sees the list created by thread A while thread A is still populating it, causing thread B’s list.Any() call to crash.
I wonder if, as a workaround, you could execute a single get before firing the batches; that would initialize the endpoints in the cluster map…
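To illustrate the workaround simonbasle is suggesting, here is a rough sketch: issue one ordinary Get before the first batch Upsert, so the SDK initializes its endpoint list on a single thread. The key name "warmup" is arbitrary and hypothetical; a cache miss is fine, since only the round trip matters. This assumes the 2.0 SDK's generic Get&lt;T&gt; method, and is a sketch rather than a confirmed fix.

```csharp
using (var cluster = new Cluster(config))
using (var bucket = cluster.OpenBucket("myhoney"))
{
    // Single-threaded warm-up operation: forces the cluster map's
    // endpoint list to be built before any concurrent batch work.
    var warmup = bucket.Get<dynamic>("warmup"); // warmup.Success may be false; that's OK

    // ...then fire the batch upserts as before
    bucket.Upsert(dict);
}
```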
@simonbasle Yeah, on every run the error occurs after a certain number of cycles. Yes, it’s the first operation I execute apart from the initial bucket connection setup. I had a feeling it might have been the threading concurrency. Here is a full solution of my code.
When do you reckon the bug will be fixed? I’ll try your workaround and see how that goes for now.
Hi @acac999, were you able to test that it doesn’t occur with a single GET before your batch of SETs?
I agree with simonbasle.
It could be a threading issue: one thread “foreach’ing” over the collection while another is modifying the items in it. Perhaps using a thread-safe version of Dictionary could work as a workaround?
https://msdn.microsoft.com/en-us/library/dd287191(v=vs.110).aspx?f=255&MSPPError=-2147217396
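As a sketch of that suggestion, the build loop from the question could swap Dictionary for ConcurrentDictionary, which supports enumeration while other threads add or remove entries. Note this only protects the application-side dictionary; if the crash is in the SDK's internal endpoint list, as simonbasle's theory suggests, it may not help, so treat it as an experiment.

```csharp
using System.Collections.Concurrent;

// Hypothetical variant of the loop from the question, using a
// thread-safe dictionary. ConcurrentDictionary implements
// IDictionary<TKey, TValue>, so it can be passed where the plain
// Dictionary was used before.
var dict = new ConcurrentDictionary<string, dynamic>();
foreach (var ri in stuff)
{
    var ritem = new CouchModel.ReceiptItem(ri.d, k, ri.s);
    string key = "receipt_item:" + k + ":" + ri.d.ID;
    dict.TryAdd(key, ritem); // TryAdd instead of Add; returns false on a duplicate key
}
```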
Hi acac999,
What version of the .NET SDK are you using?
@martinesmann I’m using v2.0.0.0. I downloaded the SDK and modified this code so that _syncObj is locked before the endpoint checks. That appears to have stopped the error, but I haven’t done any extensive testing, so I’m not sure if this is correct. @simonbasle At the moment I haven’t tried doing a single GET before the SETs, but I will hopefully do that soon.
[JsonIgnore]
public List<IPEndPoint> IPEndPoints
{
    get
    {
        lock (_syncObj)
        {
            if (_ipEndPoints == null || !_ipEndPoints.Any())
            {
                _ipEndPoints = new List<IPEndPoint>();
                foreach (var server in ServerList)
                {
                    _ipEndPoints.Add(IPEndPointExtensions.GetEndPoint(server));
                }
            }
        }
        return _ipEndPoints;
    }
}
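A slightly more defensive variant of that patch, sketched here rather than taken from the SDK, builds the endpoints into a local list and publishes it with a single assignment. That way, even a reader that somehow bypasses the lock can never observe a half-populated list, which is the failure mode simonbasle described.

```csharp
[JsonIgnore]
public List<IPEndPoint> IPEndPoints
{
    get
    {
        lock (_syncObj)
        {
            if (_ipEndPoints == null || !_ipEndPoints.Any())
            {
                // Build into a local list first, then publish in one
                // assignment so _ipEndPoints is never seen half-filled.
                var endPoints = new List<IPEndPoint>();
                foreach (var server in ServerList)
                {
                    endPoints.Add(IPEndPointExtensions.GetEndPoint(server));
                }
                _ipEndPoints = endPoints;
            }
            return _ipEndPoints;
        }
    }
}
```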
@acac999 -
I created a JIRA ticket for this issue.
If you feel your patch resolves the issue and would like to contribute, you can create a pull request in the GitHub repo. Let me know if you need any help doing this.
Thanks,
Jeff
The fix in question was the one I had in mind, @acac999.
As Jeff said, if you want to contribute, feel free to do so.
The idea of a single GET before batching SETs was to force this portion of code to execute in a single-threaded context, where the issue wouldn’t arise. That would just confirm the issue is related to threading and act as a workaround.
Indeed, fixing the lock is the long-term solution.