Couchbase server document limit

Hi,

I am using one document to collect all the ping logs from a sensor. I read somewhere that document size is limited to 20,000,000 bytes. What will happen if the size exceeds 20 MB?

For my application, it makes more sense to store all pings in a single document rather than creating a new one every time I have a ping log. Please let me know what the best solution is.

Regards

PP

That is right: on Couchbase buckets a document is limited to 20 MB.

You would have to split it up manually if it gets larger. That said, what about doing something more "intelligent" on the application side and creating documents based on something different, like one per day/hour/second, whatever fits? With views, it doesn't make a difference anyway when you load the data.
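As a sketch of that idea (the key format and helper name are my own, not anything Couchbase prescribes): derive a time-bucketed document key from the sensor ID and the ping's timestamp, so every ping within the same hour lands in the same small document.

```javascript
// Sketch: derive a per-hour document key from a sensor ID and a timestamp.
// The "sensorId-YYYYMMDD-HH" format is an assumption for illustration.
function hourlyKey(sensorId, date) {
  const pad = (n) => String(n).padStart(2, "0");
  const day =
    date.getUTCFullYear() +
    pad(date.getUTCMonth() + 1) +
    pad(date.getUTCDate());
  return [sensorId, day, pad(date.getUTCHours())].join("-");
}

// All pings in the same hour map to the same (small) document.
const key = hourlyKey("sensor1", new Date(Date.UTC(2014, 9, 27, 14, 5)));
// key === "sensor1-20141027-14"
```

Your application would then append each ping to the document stored under that key, creating it if it does not exist yet.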

Thanks a lot for a quick answer.

Well, that's a good suggestion for the problem. Now that leads to more questions:

  1. How long can a document key be?
  2. In case of overflow (size > 20 MB), what will happen? Will the server throw an exception? Will it overwrite old data? Can we automate backups based on document size?

Thanks
PP

Depending on the SDK you are using, you will get a different error type (exception, error code), but yes, the server will refuse to store a document larger than 20 MB.

A key is a string that you choose, up to 250 characters. It should be descriptive enough that you can build patterns for quick lookups.

Let's say you want to group it per day per sensor: sensor1-20141027-1.
The trailing -N lets your application create more documents per day if there is a need for it. Again, your app needs to handle that, but if you follow such a pattern (a very common and very performant key-lookup pattern in Couchbase) you can also quickly construct those keys when querying. For example, if you want data for sensor X on 10 different days, you just load 10 documents, and you can build the keys in your SDK since you know how they look.
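For instance, building the keys for one sensor across several days might look like this (the helper name is hypothetical; the SDK's actual multi-get call is omitted):

```javascript
// Build document keys for one sensor across a list of days, following the
// "sensorId-YYYYMMDD-N" pattern described above.
function keysForDays(sensorId, days, docsPerDay = 1) {
  const keys = [];
  for (const day of days) {
    for (let n = 1; n <= docsPerDay; n++) {
      keys.push(`${sensorId}-${day}-${n}`);
    }
  }
  return keys;
}

const keys = keysForDays("sensorX", ["20141025", "20141026", "20141027"]);
// → ["sensorX-20141025-1", "sensorX-20141026-1", "sensorX-20141027-1"]
```

You would then pass those keys to your SDK's bulk/multi-get operation to fetch all the documents in one round trip.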

What do you mean with the backup case?

By backup I mean offline data: a "document size monitor thread" runs and moves any document that exceeds 20 MB to secondary storage.

Is there a way to do this ?

And thanks for your thoughtful replies.
I appreciate your help.
PP

Hi, no, there is no way to do something like this in Couchbase. Every document is treated the same, persisted and replicated, regardless of its size. If you want more logic, you need to implement it on the application side.

Repeatedly updating a large document will be inefficient — you’ll have to keep sending the entire document to the server every time you call Set, so the data traffic will keep growing. It’s better to use many small documents. You can then use a view to collect all the data in one call.