I did an investigation/prototype of delta compression last year, but it hasn’t made it into the feature list yet. It’s definitely something many customers want, though.
By “not much overhead” I mean that the cost is basically O(n), as you’d expect, not O(n^2) as it is when you repeatedly append to a single attachment (since a changed attachment gets re-sent in its entirety). For example, a document with 200 1k attachments will take about twice as long to replicate as one with 100 1k attachments. The entire doc gets sent as a single HTTP body in MIME multipart format, with each attachment as one body part. If only some of the attachments have changed, only those are included. So if you keep adding attachments one at a time, each replication only includes the new attachment.
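To make that concrete, here’s a rough sketch (hand-assembling the bytes in Python) of what a CouchDB-style multipart/related body might look like when one attachment out of 200 has changed. The document ID, revision, attachment names, and boundary are all hypothetical; the real replicator’s output will differ in detail:

```python
import json

# Hypothetical example only: shows the *shape* of a multipart doc body,
# not the replicator's exact output.
boundary = b"doc-part-boundary"

# The JSON body lists metadata for ALL attachments. Unchanged ones are
# stubs; the changed/new one is flagged "follows" and rides along as a
# later body part.
doc = {
    "_id": "mydoc",
    "_rev": "201-abc123",
    "_attachments": {
        "att_001": {"content_type": "image/jpeg", "length": 1024, "stub": True},
        # ... 198 more stubs ...
        "att_200": {"content_type": "image/jpeg", "length": 1024, "follows": True},
    },
}

changed_attachment = b"\xff" * 1024  # the one new attachment's data

parts = [
    b"--" + boundary,
    b"Content-Type: application/json",
    b"",
    json.dumps(doc).encode("utf-8"),
    b"--" + boundary,
    b"",                      # body parts match up, in order, with "follows" entries
    changed_attachment,
    b"--" + boundary + b"--",
]
body = b"\r\n".join(parts)
# Sent as one request with the header:
#   Content-Type: multipart/related; boundary="doc-part-boundary"
print(len(body), "bytes; only the changed attachment's data is included")
```

Note how the 199 unchanged attachments cost only a small metadata stub each; that’s what keeps the transfer proportional to the changed data rather than the total data.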
(The document JSON body does get sent every time, and it contains metadata about all the attachments in the _attachments property, so there’s a slight O(n^2) effect there, but it hopefully shouldn’t be a problem.)
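For a sense of scale on that metadata effect, here’s some back-of-the-envelope arithmetic. The 100 bytes per _attachments entry is just a guess for illustration:

```python
# Adding attachments one at a time: the k-th replication re-sends metadata
# for all k attachments accumulated so far.
meta_bytes_per_attachment = 100  # illustrative guess, not a measured figure
n = 200                          # attachments, added one per replication

total_metadata_sent = sum(k * meta_bytes_per_attachment for k in range(1, n + 1))
print(total_metadata_sent)  # 2010000 bytes (~2 MB) across all 200 replications

# Quadratic in n overall, but any single replication carries at most
# n * 100 bytes (20 KB here), so it only starts to matter when attachments
# are both tiny and very numerous.
```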