I implemented a custom Document/Transcoder pair; it doesn't validate the flags, it only writes a log entry.
I get the following log:
192.168.1.106:7092 22:36:29 INFO [org.pigai.couchbase.CikuuTranscoder:31] - Flags (0x2) indicate non-JSON document for id 845d955dfec831dbcac5c5c37d677df8ba5ccef9, could not decode. content={"0":{"w":"^","l":"^","p":"_decl","b":"","t":2,"s":{}},"1":{"w":"I","l":"i","p":"_r","b":"anim,humn","t":1,"s":{"1":"_n"}},"2":{"w":"bless","l":"bless","p":"_v","b":"","t":2,"s":{}},"3":{"w":".","l":".","p":"_h","b":"","t":3,"s":{}}}
The metadata I get from the web console is:
meta: {
  id: "845d955dfec831dbcac5c5c37d677df8ba5ccef9",
  rev: "2-0034940b2b18b1780000000002000000",
  expiration: 0,
  flags: 33554432
},
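For reference, 33554432 is 0x02000000, and shifting the stored flags right by 24 bits gives 0x2, which as far as I understand the common flags format used by the 2.x Java SDK is the type code for JSON (so the 0x2 in the log is presumably the already-shifted value). A minimal sketch of that arithmetic:

public class FlagsCheck {
    public static void main(String[] args) {
        int storedFlags = 33554432;                    // "flags" value from the web console metadata
        System.out.printf("0x%08X%n", storedFlags);    // prints 0x02000000
        int commonType = storedFlags >>> 24;           // upper byte holds the common-flags type code
        System.out.println(commonType);                // prints 2 -> JSON type in the common flags format (my assumption)
    }
}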
My program uses Jetty as the web server and runs in multithreaded mode. When an HTTP request comes in, it reads from Couchbase using this code:
public static String getRawJson(Bucket bucket, String key, boolean validated) {
    try {
        if (cluster == null || bucket == null || key == null)
            return null;
        if (validated)
            key = snt_sha(key);
        CikuuJsonDocument doc = null;
        synchronized (bucket) {
            doc = bucket.get(key, CikuuJsonDocument.class,
                    bucket_read_timeout, TimeUnit.SECONDS);
        }
        return doc == null ? null : doc.content();
    } catch (Exception e) {
        logger.error("getRawJson error, for :" + e.toString() + "|" + key);
    }
    return null;
}
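For completeness, the bucket is opened with the custom transcoder registered, roughly like this (a simplified sketch; the seed node and bucket name are placeholders, and CikuuTranscoder/CikuuJsonDocument are my own classes):

import java.util.Collections;
import java.util.List;
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.Document;
import com.couchbase.client.java.transcoder.Transcoder;

public class BucketSetup {
    public static Bucket open() {
        Cluster cluster = CouchbaseCluster.create("192.168.1.106");
        // Register the custom transcoder so bucket.get(key, CikuuJsonDocument.class) uses it to decode.
        List<Transcoder<? extends Document, ?>> transcoders =
                Collections.<Transcoder<? extends Document, ?>>singletonList(new CikuuTranscoder());
        return cluster.openBucket("default", transcoders);
    }
}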
After processing, if the data is not in Couchbase, I store it using the following code:
public static void upsert(Bucket bucket, String key, String json) {
    try {
        if (cluster != null && bucket != null)
            bucket.upsert(RawJsonDocument.create(snt_sha(key), json),
                    bucket_write_timeout, TimeUnit.SECONDS);
    } catch (Exception e) {
        logger.error("upsert error, for :" + e.toString() + "|" + key);
    }
}

protected static String snt_sha(String snt) {
    return DigestUtils.shaHex(snt);
}
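Note that the write path stores a RawJsonDocument while the read path asks for a CikuuJsonDocument. For comparison, the same read done with the SDK's built-in RawJsonDocument would look like this (a minimal sketch reusing my helpers and timeout constants):

public static String getRawJsonPlain(Bucket bucket, String key) {
    // Same lookup as getRawJson, but decoded by the SDK's built-in RawJsonTranscoder.
    if (bucket == null || key == null)
        return null;
    RawJsonDocument doc = bucket.get(snt_sha(key), RawJsonDocument.class,
            bucket_read_timeout, TimeUnit.SECONDS);
    return doc == null ? null : doc.content();
}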
My cluster has three servers: one is 3.0.2-1603 Enterprise Edition (build-1603-rel), and two are 3.0.3-1716 Enterprise Edition (build-1716-rel).
The Java client is couchbase-java-client-2.1.2.jar.