I have one file that contains multiple JSONs separated by commas.
Now I want to load JSON 1 from that file into a bucket x (creating document y), and JSON 2 from the same file into the same bucket x (but as another document z).
Using cbcdocloader we can load a file into a bucket when it contains only one JSON, but not when it contains multiple JSONs.
Can anyone please help me find a solution for loading multiple JSONs from the same file into multiple documents in a bucket?
@ogrdsnielsen as Gerald said, I think you won’t get around a simple script, or in bash you split the file up into multiple docs first. That said, if you use a language where we have official SDKs, you’d be better off using KV directly, since it gives you better performance on those kinds of operations (inserts where you know the key and the value).
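A minimal sketch of the "split it up first" approach in Python, using the standard library's `json.JSONDecoder.raw_decode` to walk a string of comma-separated JSON objects (this assumes the file fits in memory; the sample data and any output filenames are illustrative, not from the thread):

```python
import json

def split_documents(text):
    """Split a string of comma-separated JSON objects into a list of dicts."""
    decoder = json.JSONDecoder()
    docs, idx = [], 0
    while idx < len(text):
        # Skip separating commas and whitespace between objects.
        while idx < len(text) and text[idx] in ", \t\r\n":
            idx += 1
        if idx >= len(text):
            break
        # raw_decode parses one JSON value and reports where it ended,
        # so the next iteration can continue from there.
        obj, idx = decoder.raw_decode(text, idx)
        docs.append(obj)
    return docs

if __name__ == "__main__":
    sample = '{"id": "y", "v": 1}, {"id": "z", "v": 2}'
    for i, doc in enumerate(split_documents(sample)):
        # Each document could then be written to its own file
        # (e.g. doc_0.json, doc_1.json) for a loader tool to pick up.
        print(i, doc)
```

Each parsed object can then be written to its own file, though as the rest of this thread shows, generating millions of small files runs into filesystem limits, which is why loading via an SDK is the better route at scale.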
Thank you @avsej… But we are already using a script to split the JSONs into multiple files.
However, we ran into a problem on the Unix box: a limit on the number of files.
For example, our table has data for some 20 million records, but when we run the script to generate the JSONs, only 4 million+ JSON files get created on the Unix box. Is there some threshold limit in Unix?
Is there any way to overcome that and generate all 20 million JSON files on the Unix box?
In this case, why don’t you use a regular SDK to load the documents from that huge file? You could use a streaming JSON parser (which does not load the full file into memory to parse it) and then upsert all the docs.