Cbbackupmgr backup fails with context deadline exceeded error

I’m using cbbackupmgr with Couchbase Community 6.5.1 in a Docker container on AWS. The backup command keeps failing with the following error message: “Error backing up cluster: context deadline exceeded”

How do I resolve this issue? Is there a parameter that I can add to the command that will increase the partition timeout limit?

Hi @liz,

The error message ‘context deadline exceeded’ may be returned for a number of reasons, but (as you’ve suggested) it generally indicates that something has timed out.

Please could you collect the logs using cbbackupmgr collect-logs and attach them to this post. These logs should contain more information which will allow us to debug the issue further.

If you’re unable to run collect-logs, please could you share the $ARCHIVE/logs/backup-0.log file, which should contain enough information to debug the issue.

Thanks,
James


Thanks for responding @jamesl33

I was able to run collect-logs. The backup succeeds up to and including transferring the full-text index definitions, then runs into an issue attempting to transfer the GSI index definitions.

Here’s the log print out:

(REST) (Attempt 1) (GET) Dispatching request to 'http://***/getIndexMetadata?bucket=name'
(REST) (Attempt 1) (GET) Failed to dispatch request to 'http://***/getIndexMetadata?bucket=name': Get "http://***/getIndexMetadata?bucket=name": context deadline exceeded -- rest.(*Request).execute() at request.go:220
(Cmd) Error backing up cluster: failed to execute cluster operations: failed to execute bucket operation for bucket 'name': failed to transfer index definitions for bucket 'name': failed to transfer GSI indexes: failed to get GSI index defintions: failed to get GSI index definitions: failed to execute request: http client failed to dispatch/receive request/response: Get "http://***/getIndexMetadata?bucket=name": context deadline exceeded
(Cmd) cbbackupmgr version 7.0.1-6102 Hostname: *** OS: linux Version: 4.14.198-152.320.amzn2.x86_64 Arch: amd64 vCPU: 2 Memory: 8141848576 (7.58GiB)
(Cmd) backup -a /***/cbbackups/current -r *** -c *** -u *** -p *****
(Cmd) mounted archive with id: ***
(REST) (Attempt 1) (GET) Dispatching request to 'http://***/pools/default/nodeServices'
(REST) (Attempt 1) (GET) (200) Received response from 'http://***/pools/default/nodeServices'
(REST) (Attempt 1) (GET) Dispatching request to 'http://***/pools'
(REST) (Attempt 1) (GET) (200) Received response from 'http://***/pools'
(REST) (Attempt 1) (GET) Dispatching request to 'http://***/pools/default'
(REST) (Attempt 1) (GET) (200) Received response from 'http://***/pools/default'
(REST) Successfully connected to cluster | {"enterprise":false,"uuid":"***","version":{"min_version":"6.5.1","is_mixed_cluster":false}}
(Transferable) Backing up cluster ***
(Cmd) Error backing up cluster: failed to get backup transferable: failed to create backup: The most recent backup `2022-01-03T23_03_53.308071062Z` did not finish properly. You can either resume this backup from where it left off by re-running the backup command and using the --resume flag. To delete the unfinished backup and start again use the --purge flag.

Hi @liz,

It looks like cbbackupmgr was unable to back up the index metadata because the HTTP request timed out. The endpoint being hit could be tested with a command such as the following:

curl -u ${USERNAME}:${PASSWORD} http://${HOSTNAME}:${PORT}/getIndexMetadata

This should be done with the same hostname/port that’s being used by cbbackupmgr (which has been redacted from the snippet you provided) from the same location (e.g. the AWS docker container).
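If it helps, the same check can be wrapped with an explicit timeout so a hang is distinguishable from an auth or connection error. This is only a sketch: the host, port, and credential values below are placeholders (yours were redacted), and 9102 as the index service’s HTTP port is an assumption based on a default install.

```shell
# Placeholder connection details -- substitute the host/port that
# cbbackupmgr is actually using (these defaults are assumptions).
HOST="${HOST:-127.0.0.1}"
PORT="${PORT:-9102}"   # 9102 is the index service's default HTTP port
CB_USER="${CB_USER:-Administrator}"
CB_PASS="${CB_PASS:-password}"

# --max-time bounds the whole request, mimicking the client-side deadline
# behind cbbackupmgr's "context deadline exceeded" error.
if curl -sS -o /dev/null -u "${CB_USER}:${CB_PASS}" --max-time 10 \
        "http://${HOST}:${PORT}/getIndexMetadata"; then
    RESULT="endpoint reachable"
else
    RC=$?   # curl exit code 28 means the request hit the 10-second deadline
    RESULT="request failed (curl exit ${RC})"
fi
echo "${RESULT}"
```

A timeout here (curl exit 28) would reproduce the same symptom cbbackupmgr is hitting, whereas an immediate connection error points at the port being closed instead.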

I have tried to reproduce this issue (outside of AWS) and am unable to do so, so I think it may be a configuration/environmental issue.

Please could you answer/check the following:

  1. Are all the required ports open which would allow cbbackupmgr to communicate with the index node?
  2. Is this a new problem (have you managed to take a backup before)?
  3. Can you reproduce this outside of AWS?
  4. Are you using alternative addressing?
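On point 1, the check can be sketched as a quick probe run from the same place cbbackupmgr runs (e.g. inside the Docker container). The port list is an assumption for a default install — 8091 for the cluster manager, 9102 for the index service — and the hostname is a placeholder.

```shell
# Quick reachability probe for the ports cbbackupmgr needs. The port list
# is an assumption for a default install: 8091 = cluster manager,
# 9102 = index service. Substitute the index node's real hostname.
HOST="${HOST:-127.0.0.1}"
OPEN=""
CLOSED=""
for port in 8091 9102; do
    # /dev/tcp is a bash feature, hence the explicit bash -c; timeout
    # stops a silently-dropped connection from hanging the check.
    if timeout 2 bash -c "exec 3<>/dev/tcp/${HOST}/${port}" 2>/dev/null; then
        OPEN="${OPEN} ${port}"
    else
        CLOSED="${CLOSED} ${port}"
    fi
done
echo "open:${OPEN} closed:${CLOSED}"
```

If 8091 is open but 9102 is closed, that would match what your logs show: the cluster-level REST calls succeed while the index metadata request times out.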

As a temporary workaround, you could configure a new backup archive using --disable-gsi-indexes; please note that this means GSI indexes will not be backed up. It will also only help if the Index Service is the only service cbbackupmgr can’t contact; there may be others.
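For clarity, that flag is set when configuring the new repository, not on the backup command itself. A sketch with placeholder paths, repository name, and credentials (not your real values, which were redacted):

```shell
# Sketch only -- the archive path, repo name, and credentials are placeholders.
# --disable-gsi-indexes is recorded in the repository config; subsequent
# backups into that repository then skip the GSI index definitions.
cbbackupmgr config -a /data/cbbackups/no-gsi -r nightly --disable-gsi-indexes
cbbackupmgr backup -a /data/cbbackups/no-gsi -r nightly \
    -c http://<host>:8091 -u <user> -p <password>
```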

Thanks,
James