According to the documentation, when you insert or update an item with expiry set to 0 and the bucket has a non-zero TTL, the item's TTL is reset to the bucket TTL value. However, if you set the bucket TTL to 31 days, an item inserted or updated with expiry set to 0 in that bucket seems to expire immediately.
We are testing with Couchbase Enterprise Server 6.0.3, from a PHP application using PHP SDK v2.6.1 with fully updated Couchbase libraries.
This is our testing code:
test.php
<?php
$expiry = 0;
$keyTest = "testdoc::1";
$txtTest = "Testing document";
$bucketName = "TestBucket";
// Establish username and password for bucket-access
$authenticator = new \Couchbase\PasswordAuthenticator();
$authenticator->username('my-user')->password('mypass');
// Connect to Couchbase Server - using address of a KV (data) node
$cluster = new CouchbaseCluster("couchbase://myCBserverIP");
// Authenticate, then open bucket
$cluster->authenticate($authenticator);
$bucket = $cluster->openBucket($bucketName);
// Store a document
echo "Storing key $keyTest\n";
$result = $bucket->upsert($keyTest, array("txt" => $txtTest), array('expiry' => $expiry));
// Retrieve a document
echo "Getting back key $keyTest\n";
try {
    $result = $bucket->get($keyTest);
    $data = $result->value->txt;
    echo "Result: txt -> $data \n";
} catch (\Exception $e) {
    echo "Error Code: " . $e->getCode() . " --> " . $e->getMessage();
}
Basic testing results:
Results when TestBucket TTL is set to 30 days (2592000 seconds): the document is stored and can be read back as expected.
Results when TestBucket TTL is set to 31 days (2678400 seconds):
$ curl https://www.testing.dev/test.php -k
Storing key testdoc::1
Getting back key testdoc::1
Error Code: 13 --> LCB_KEY_ENOENT: The key does not exist on the server
The code does not show how the expiry value in both cases is being calculated. From what is said, it looks like a value in seconds is being provided in both cases, i.e. 30 days (2592000 seconds) and 31 days (2678400 seconds). If that is the case, then Couchbase Server is behaving correctly. The expiry value behaves differently depending on its magnitude: when the value provided is below 2592000 it is used as a delta in seconds; if it is greater, it is used as a Unix epoch timestamp. In this case 2678400 becomes 1970-02-01T00:00:00Z, and as this is in the past, the document is expired.
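If you really do want a per-item lifetime longer than 30 days, the usual pattern is to pass an absolute Unix timestamp rather than a delta. This is only a minimal sketch that would slot in place of the upsert line in test.php above, reusing its $bucket, $keyTest and $txtTest variables (the 31-day value is just an example):

// Sketch: pick between a relative offset and an absolute timestamp.
// Below 30 days (2592000 s) the server treats the value as a delta in
// seconds; above that boundary it is treated as an absolute Unix timestamp.
$thirtyDays = 30 * 24 * 60 * 60;   // 2592000 seconds
$desiredTtl = 31 * 24 * 60 * 60;   // 2678400 seconds (example only)
$expiry = ($desiredTtl <= $thirtyDays) ? $desiredTtl : time() + $desiredTtl;
$bucket->upsert($keyTest, array("txt" => $txtTest), array('expiry' => $expiry));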
The code is always setting the item expiry to 0, so the problem is not there.
If we insert/update an item with expiry set to 0, and the Bucket TTL is set to a value of 30 days (2592000 seconds), then, according to the Couchbase bucket expiration documentation, the item TTL is reset to the Bucket TTL value (30 days). But we have verified that if we set the Bucket TTL to 31 days (2678400 seconds) and insert/update an item with expiry set to 0, the item seems to expire immediately.
You can verify this by setting a bucket's maximum TTL to 2678400 seconds and then trying to add a document to that bucket using the document editor in the web console. Since you cannot set an expiration on the document through the web console document editor, when you click on save the item should take the Bucket TTL, but it does not. So I think it is a bug in applying the Bucket TTL to an item saved with expiry set to zero.
@jlopez I have experienced the same problem when I set a global default "max time-to-live" TTL on the bucket longer than 30 days (2592000 sec), for example 31 days (2678400 sec).
If I create a JSON item with TTL=0 (or without a TTL), it should be created with the bucket's default "max time-to-live" TTL, but instead it expires instantly.
@pvarley your link to the documentation is about item creation ("less than 30 days (i.e. 60 * 60 * 24 * 30), it is considered an offset. If it is greater, it is considered an absolute time stamp"), but my issue is with the global bucket TTL.
In the advanced configuration of the bucket it is only possible to set this time as a delta in seconds, up to a maximum of 2147483647 seconds (about 68 years). Logically, it is not possible to configure a Unix epoch timestamp there.
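For reference, the value stored on the bucket can also be read back over the REST API. This is only a sketch, assuming the default admin port 8091, the "cache" bucket used in the command examples below, and placeholder credentials:

$ curl -s -u Administrator:xxxxxxxxxxxxxxx http://localhost:8091/pools/default/buckets/cache | grep -o '"maxTTL":[0-9]*'
# expected output is something like: "maxTTL":2678400

The maxTTL field is always a plain number of seconds, which matches the fact that only a delta can be configured in the UI.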
Thank you so much for taking the time to respond quickly.
I have CB Enterprise Edition 6.0.3 build 2895 on Ubuntu 18.04.3 LTS with the latest package updates.
It is a 3 node cluster.
Could it be a bug in 6.0.3?
I have reproduced it in another identical development environment.
We pay for Enterprise support, but we prefer to post it on the forum so that it can be useful to more users.
1. Reproducing the bug with your command examples, Bucket TTL 2678400 seconds (31 days):
$ cbc-cat -U couchbase://localhost/cache -u Administrator -P "xxxxxxxxxxxxxxx" itemtest2
itemtest2 The key does not exist on the server (0xd) <------- !!!
$ cbc-subdoc -U couchbase://localhost/cache -u Administrator -P "xxxxxxxxxxxxxxx"
subdoc> get itemtest2 -x $document
itemtest2 The key does not exist on the server (0xd)
We pay for Enterprise support, but we prefer to post it on the forum so that it can be useful to more users.
There is no guaranteed response on the forums, and I would always recommend making use of the Support team; they're excellent. That said, it's great to see users getting involved in the community.
I have tried to reproduce the issue on Couchbase Server 6.0.3 on CentOS 6, Ubuntu 16.04 and Ubuntu 18.04.3; in all three cases it worked as expected.
Time is handled in a particular way inside Couchbase Server to deal with a number of edge cases around laptops and virtual environments, so maybe something in the environment is triggering this behaviour. To investigate this further, logs will be needed. At this point I think it would be best to go via the official Support channel, @treo.
I’ll open a ticket on the official enterprise support.
I prefer to collaborate on the forums because of the ease of searching and Google indexing for other users who are experiencing the same problems.
I'm a CB power user and fan; my only interventions in the forums have been to report confirmed bugs.
I’m a jinx with bad luck
Thank you very much @pvarley
Through the ticket that we opened with Enterprise Support we have collaborated with your coworker Matt on the diagnosis of the bug, and at all times we have been kept properly informed.
Besides being solved in 6.5.0 thanks to the refactor, we have also been informed that a fix will be released in 6.0.4.
Due to stability issues, and to avoid forcing a new rebalance for the upgrade, we will stay on 6.0.3 and, as a workaround, we will keep the TTL of the buckets at 29 days.
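In case it helps anyone applying the same workaround from the command line, the bucket maximum TTL can be lowered below the 30-day boundary with couchbase-cli. This is only a sketch, assuming the default admin port and using the bucket name and credentials from the examples above as placeholders:

$ couchbase-cli bucket-edit -c localhost:8091 -u Administrator -p "xxxxxxxxxxxxxxx" --bucket cache --max-ttl 2505600
# 2505600 seconds = 29 days, safely below the 2592000-second (30-day) boundary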