DecoderException thrown when reading a serialized doc

Recently I have hit this exception many times. Is there a limit on the serialized doc's size?
2016-10-10 16:32:13,507 INFO [STDOUT] 20328906 [cb-io-1-3] WARN com.couchbase.client.core.endpoint.AbstractGenericHandler - [/xxx.xxx.xxx.xxx:11210][KeyValueEndpoint]: Caught unknown exception: java.lang.OutOfMemoryError
2016-10-10 16:32:13,507 INFO [STDOUT] com.couchbase.client.deps.io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:418)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:245)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:243)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:278)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:278)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:962)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:485)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:399)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:371)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
2016-10-10 16:32:13,507 INFO [STDOUT] at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
2016-10-10 16:32:13,507 INFO [STDOUT] at java.lang.Thread.run(Thread.java:662)
2016-10-10 16:32:13,507 INFO [STDOUT] Caused by: java.lang.OutOfMemoryError
2016-10-10 16:32:13,507 INFO [STDOUT] at sun.misc.Unsafe.allocateMemory(Native Method)
2016-10-10 16:32:13,507 INFO [STDOUT] at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:101)
2016-10-10 16:32:13,507 INFO [STDOUT] at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)

What document operation are you doing, and which version of the Java client are you using? It looks like you are doing a get operation. The maximum document size that can be stored in Couchbase Server is 20MB; there isn't any limit on the client side.
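For reference, a plain get with the 2.x Java SDK looks roughly like the sketch below. The host, bucket name, and key are placeholders; `SerializableDocument` is the document type the SDK uses for Java-serialized objects, since you mentioned a serialized doc.

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.SerializableDocument;

public class GetExample {
    public static void main(String[] args) {
        // Placeholder host and bucket name for illustration only.
        Cluster cluster = CouchbaseCluster.create("localhost");
        Bucket bucket = cluster.openBucket("default");

        // Fetch a document that was stored as a Java-serialized object.
        SerializableDocument doc = bucket.get("some-key", SerializableDocument.class);
        if (doc != null) {
            System.out.println("Loaded object: " + doc.content());
        }

        cluster.disconnect();
    }
}
```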

I'm doing a get operation. I use java-client-2.3.1 and Couchbase 4.1.0.

Well, you are receiving an OutOfMemoryError from your JVM (!). You should take a heap dump and see what is causing your memory issues; maybe the JVM is under very high pressure (undersized for your workload), or maybe you have a leak somewhere.
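If it helps, a heap dump can be triggered from inside the JVM via the HotSpot diagnostic MXBean; a minimal sketch (the output path is just an example):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDump {
    public static void main(String[] args) throws Exception {
        // Obtain the HotSpot-specific diagnostic bean from the platform MBean server.
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // Dump only live objects to the given file (path is illustrative).
        diagnostic.dumpHeap("/tmp/app-heap.hprof", true);
    }
}
```

Alternatively, `jmap -dump:live,format=b,file=heap.hprof <pid>` produces the same kind of dump from the command line.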

I was monitoring the JVM when this OOM exception occurred, and I saw that a lot of heap space was still unused.
I have dug out the reason: it is thrown from my bulk get. My bulk get didn't limit the batch size, so sometimes I fetch more than 20MB in one batch. Why should a batch be no more than 1MB? Why does a big enough batch cause an OOM?