How to improve performance of writes in Couchbase DB

Size of record : one million
Number of nodes : 1
Size of one million record : 1.4 GB
Currently I did a bulk write with KVEndpoint configured.

It is taking close to a minute.

Is there any way to further improve performance?

I can employ at most two nodes.
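
For what it's worth, the setup is roughly along these lines. This is a sketch of the general pattern rather than the actual code, assuming the Java SDK 3.x builder style (on SDK 2.x the equivalent setting is kvEndpoints on the environment builder); the address and credentials are placeholders:

    import com.couchbase.client.core.env.IoConfig;
    import com.couchbase.client.java.Cluster;
    import com.couchbase.client.java.ClusterOptions;
    import com.couchbase.client.java.env.ClusterEnvironment;

    public class Connect {
        public static void main(String[] args) {
            // More KV connections per node allow more writes in flight at once.
            ClusterEnvironment env = ClusterEnvironment.builder()
                    .ioConfig(IoConfig.numKvConnections(4))
                    .build();

            Cluster cluster = Cluster.connect(
                    "127.0.0.1",
                    ClusterOptions.clusterOptions("username", "password")
                            .environment(env));
        }
    }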


I would appreciate some help with the query posted above.

I am getting limited responses on the queries I post.
I am new to Couchbase and trying to use this DB.

I had previously posted this query at the link below and waited for 12 days.

Please advise.

@tatpum13 you need to provide more details. As @graham.pople said, it depends on where the bottleneck is.

Network details: I am running Couchbase Server and the Java program on the same machine, so there should not be significant latency issues.
Amount of RAM on the server: 10 GB
Currently it takes more than a minute to insert 1 million records with one node.
I added one more node and rebalanced; however, this slowed the process down further.
Can I improve performance by adding one more node?
Are there any further performance tuning suggestions?

Can you provide more details:

  • You say “Size of record : one million” but “Size of one million record : 1.4 GB”. Those two numbers don’t match. If a record is one million bytes, then one million records would be 1TB.
  • How much memory is allocated to the bucket?
  • Does the computer in question have spinning disks or SSDs?
  • Are you reading the documents from disk before writing them to Couchbase? It’s possible that the server’s I/O system is being swamped by the Java program reading and Couchbase writing.
  • Did you try running the Java program on a separate machine from the Couchbase server?
  • Can you show the portion of the Java code that writes the records?
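
For comparison, a bulk load that keeps many KV operations in flight at once usually looks something like the following with the reactive API. This is a minimal sketch, assuming Java SDK 3.x with Reactor; the Doc type and the concurrency of 128 are illustrative, not taken from your post:

    import com.couchbase.client.java.ReactiveCollection;
    import com.couchbase.client.java.json.JsonObject;
    import reactor.core.publisher.Flux;

    import java.util.List;

    public class BulkLoad {
        // Illustrative holder for a pre-built document.
        record Doc(String id, JsonObject content) {}

        static void bulkUpsert(ReactiveCollection collection, List<Doc> docs) {
            Flux.fromIterable(docs)
                    // Keep up to 128 upserts in flight; tune until the server,
                    // not the client, becomes the bottleneck.
                    .flatMap(d -> collection.upsert(d.id(), d.content()), 128)
                    .blockLast(); // wait for every write to complete
        }
    }

A loop of blocking upserts pays a full network round trip per document, and a million sequential round trips can add up to exactly the kind of time reported here; overlapping them is usually the single biggest win.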

Thank you.

  • You say “Size of record : one million” but “Size of one million record : 1.4 GB”. Those two numbers don’t match. If a record is one million bytes, then one million records would be 1TB.

I am reading from an Excel file containing 1 million records and converting the data to JSON. After inserting one million records, the size of the bucket is 1.4 GB.
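
The conversion builds one JSON document per row, along these lines (a sketch with made-up field names, not the actual code; the real names come from the spreadsheet columns):

    import com.couchbase.client.java.json.JsonObject;

    public class RowToJson {
        // Hypothetical columns; the actual schema comes from the Excel sheet.
        static JsonObject toJson(String id, String name, double amount) {
            return JsonObject.create()
                    .put("id", id)
                    .put("name", name)
                    .put("amount", amount);
        }
    }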

  • How much memory is allocated to the bucket?

9.76 GB

  • Does the computer in question have spinning disks or SSDs?

Spinning disk

  • Are you reading the documents from disk before writing them to Couchbase? It’s possible that the server’s I/O system is being swamped by the Java program reading and Couchbase writing.

Total time taken by the Java program: 71 seconds
Total time taken by the Java program without the Couchbase lines: 5 seconds
(I commented out the Couchbase lines and then ran the Java code.)
Total insertion time: 71 - 5 = 66 seconds
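
To isolate this more directly, the write phase can also be timed by itself, e.g. (a sketch; bulkUpsert stands in for the actual write loop):

    // Time only the Couchbase write phase instead of subtracting whole runs.
    long start = System.nanoTime();
    bulkUpsert(collection, docs); // placeholder for the real write loop
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("Couchbase insert phase: " + elapsedMs + " ms");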

  • Did you try running the Java program on a separate machine from the Couchbase server?

Nope

  • Can you show the portion of the Java code that writes the records?

Unfortunately I cannot share the code.

Total RAM 10 GB; 9.76 GB allocated to the bucket. That leaves roughly 2.4% of total RAM for everything else, including the OS and applications. Depending on what else is running at the same time, that is potentially problematic performance-wise, yes? Best practice is to leave at least 20% of RAM for the OS, I think.

There is more than 20% of RAM left for the OS and other apps.

I am running it on a 16 GB machine.