Android CBL 1.2 Attachments PUT request goes as chunked to Sync Gateway?

Hi Folks,

As per the discussion in the links below, CBL/Sync Gateway does not support chunked-data HTTP PUT.

But in the pcap, the PUT request's TCP stream shows some data chunks…


@jens @hideki Please comment on this behavior… is my understanding right?

Note: Apache HttpClient's default Transfer-Encoding is chunked.

Thanks
Nithin

The first issue says “Sync Gateway can handle chunked data”. No problem there.

The second and third issues are using “chunk” as a general term for breaking a large attachment into multiple pieces; they have nothing to do with HTTP chunk encoding.

@jens thanks, got it…
The comment on the first issue says “By setting the size of the underlying data on InputStreamBody at construction time, and returning the value in getContentLength(), this allows apache HTTPClient to send attachments to the server unchunked.”
Is this the current behavior?

If so, how and where is the octet split into 984/4280/4096 bytes done in Java or Android?
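
For context, the approach described in that first-issue comment would look roughly like the sketch below (assuming Apache HttpMime 4.x; this is an illustration, not the actual CBL class). Supplying a real length from getContentLength() is what lets HttpClient send the attachment unchunked:

```java
import java.io.InputStream;

import org.apache.http.entity.ContentType;
import org.apache.http.entity.mime.content.InputStreamBody;

// Hypothetical subclass (not the actual CBL code): the length is fixed at
// construction time, so the multipart entity can report a real Content-Length
// and HttpClient does not have to fall back to chunked transfer encoding.
public class KnownLengthInputStreamBody extends InputStreamBody {
    private final long length;

    public KnownLengthInputStreamBody(InputStream in, ContentType contentType,
                                      String filename, long length) {
        super(in, contentType, filename);
        this.length = length;
    }

    @Override
    public long getContentLength() {
        return length; // returning -1 here would mean "unknown" and force chunking
    }
}
```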

Hi @nitz_couchbase,

CBL Android/Java v1.2.0 does not set Content-Length for attachments on push. This means the Content-Length value is -1.

According to the following Javadoc, HttpClient must use chunk coding if the entity content length is unknown (-1).
https://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/entity/AbstractHttpEntity.html#setChunked(boolean)
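
A minimal sketch of that behavior (assuming Apache HttpClient 4.x; the URL and payload below are placeholders): when the entity reports -1, HttpClient has no Content-Length to send and falls back to Transfer-Encoding: chunked; when the length is known, the body goes out unchunked.

```java
import java.io.ByteArrayInputStream;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.InputStreamEntity;

public final class ChunkedOrNotSketch {
    public static void main(String[] args) {
        byte[] attachment = new byte[8192]; // placeholder payload

        // Length unknown (-1): the request is sent with Transfer-Encoding: chunked.
        HttpEntity unknownLength =
                new InputStreamEntity(new ByteArrayInputStream(attachment), -1);

        // Length known: the request is sent with a plain Content-Length header.
        HttpEntity knownLength =
                new InputStreamEntity(new ByteArrayInputStream(attachment), attachment.length);

        HttpPut put = new HttpPut("http://sync-gateway.example:4984/db/doc/attach1"); // placeholder URL
        put.setEntity(unknownLength); // swap in knownLength to avoid chunking
    }
}
```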

Android CBL 1.2 Attachments PUT request goes as chunked to Sync Gateway?

I believe HttpClient sends chunked data to Sync Gateway.

If so, how and where is the octet split into 984/4280/4096 bytes done in Java or Android?

We might need to read the HttpClient implementation to understand this.
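
For what it's worth, the 4096-byte pattern looks like a plain copy-buffer size. Below is a paraphrase (from memory, so it may differ by HttpMime version) of the kind of loop InputStreamBody.writeTo() uses; each full buffer handed to the chunked output stream tends to come out as one ~4096-byte chunk, and multipart boundaries/headers plus short reads would explain odd sizes like 984 or 4205 bytes:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Paraphrased sketch, not the actual HttpMime source.
final class CopyLoopSketch {
    static void writeTo(InputStream in, OutputStream out) throws IOException {
        try {
            byte[] tmp = new byte[4096]; // copy buffer; matches the commonly observed chunk size
            int l;
            while ((l = in.read(tmp)) != -1) {
                out.write(tmp, 0, l);
            }
            out.flush();
        } finally {
            in.close();
        }
    }
}
```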

NOTE:
From your attached REST command image, it says HTTP chunked response. Is that correct?

Thanks,
Hideki

I think what you’re asking about is an implementation detail of the HTTP library being used in Android. It shouldn’t be anything a CBL client app needs to deal with. Could you explain what specifically you need to know and why? Are you experiencing a bug, or performance problems, or something?

(Note that depending on details of the HTTP library is a bad idea, because we’re very likely to switch to a different library in our next release.)

OK, the changes to send as chunked were made in this fix… https://github.com/couchbase/couchbase-lite-java-core/commit/943438d3fb51b0acccc0cb32d3b314f41c31c559.

NOTE:
From your attached REST command image, it says HTTP chunked response. Is that correct?

I'm assuming that Wireshark shows an ACK as the response for each chunk sent? Is that not the case?

Basically, our proxy server (https://f5.com/products/modules/application-security-manager) is rejecting this PUT request because of an ASM policy:
1) HTTP Protocol Compliance - Unparsable request content
chunks number exceeds chunks request limit: 1000

Note: Still figuring out at the F5 level what the 1000 in the error context refers to. Is it 4096 bytes?

We see one data chunk of 4205 bytes. Why? All the others are 4096… Isn't 4096 the limit?
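
As a rough back-of-envelope check (assuming the limit really is 1000 chunks per request and most chunks stay around 4096 bytes): 1000 × 4096 bytes ≈ 4 MB, so any attachment pushed as a chunked body larger than roughly 4 MB would hit that policy, regardless of the individual chunk size.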

Thanks
Nithin

That doesn’t make any sense. There’s nothing in the HTTP/1.1 spec about a limit on the number of chunks in a chunk-coded body. This sounds more like some implementation limit in the proxy’s parser, and they’re making an excuse that it’s invalid input.

I don’t think any of us on the team know the details of how the HTTP library chunk-encodes bodies. You’d probably need to dig into the implementation of that library. (Sorry I can’t be more detailed, but I don’t work on our Java implementation.)

It is possible that we could improve the way we send this HTTP body such that it wouldn’t need to be chunk-encoded at all; generally if you can specify the size of the data up front, chunk encoding isn’t necessary. Again, I don’t know the details here. @hideki is the expert on the Java implementation.
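
To illustrate the idea (a hypothetical sketch, not what CBL currently does): if the attachment can be buffered so its exact size is known up front, HttpClient will send a plain Content-Length and the body won't be chunk-encoded at all. The trade-off is holding the whole attachment in memory.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.ByteArrayEntity;

// Hypothetical workaround sketch: buffer the attachment, then send it with a
// known length so HttpClient uses Content-Length instead of chunked encoding.
public final class UnchunkedPutSketch {
    static HttpPut buildPut(String url, InputStream attachment) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] tmp = new byte[8192];
        int n;
        while ((n = attachment.read(tmp)) != -1) {
            buffer.write(tmp, 0, n);
        }

        HttpPut put = new HttpPut(url);
        put.setEntity(new ByteArrayEntity(buffer.toByteArray())); // exact length is known
        return put;
    }
}
```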

But in the short term, I think your best option is to work around your proxy’s problem. Can you turn off the setting/policy that triggers this error?

Hi @nitz_couchbase,

In case encryption is enabled, the stored attachment is encrypted, so obtaining the file size via BlobStore.getSizeOfBlob() is not the correct approach.
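
A hedged illustration of that point (not CBL code; the helper below is only for explanation): with encryption on, the file in the blob store holds encrypted bytes, so the on-disk size reported by getSizeOfBlob() is not the length of the decrypted stream that is actually pushed. If a real length is needed, it has to be measured from the decrypted stream itself, for example:

```java
import java.io.IOException;
import java.io.InputStream;

// Illustrative helper only: counts the bytes of the (already decrypted)
// attachment stream, which is the length a Content-Length would need,
// as opposed to the size of the encrypted file on disk.
public final class AttachmentLengthSketch {
    static long measureLength(InputStream decrypted) throws IOException {
        long length = 0;
        byte[] tmp = new byte[8192];
        int n;
        while ((n = decrypted.read(tmp)) != -1) {
            length += n;
        }
        return length;
    }
}
```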

If you cannot customize the proxy server configuration, I recommend compiling from source code with your modification.

Thanks!