I am testing a set of migration code on an existing DB. I have tried to execute multiple SQLite statements within a transaction in DB Browser for SQLite.

The backup eventually ends up writing to the `/tmp/` directory, I believe right as it begins uploading. It should use any passed configuration variables so that it writes to those tmp directories instead of the root `/tmp/` folder. I created a `duplicati` user that runs a shell command, and I tried:

1. Setting the `TMPDIR` linux environment variable
2. `asynchronous-upload-folder=/home/duplicati/upload`

My backup has completed successfully, thank you so much for the help. On top of the setting you suggested, I had to add the following: I set “number-of-retries” to 50 and “retry-delay” to 20, so that it tries 50 times in total before giving up and waits 20 seconds between attempts. This was needed because every once in a while the connection to my Google Workspace Drive would fail with 403, 500, 502, 503, and 504 errors. I’m not sure if this was a side effect of the backup hitting tiny drop-outs in connection stability, or a measure from Google to limit too many requests or uploads.

I also had to make sure to exclude from the backup the folder that holds the Duplicati database files, since I chose to store it on the same drive as all of my files; otherwise Duplicati would seemingly get stuck on “waiting for the upload to finish”. The issue is something I found described here, with the hint about excluding the folder: backup to google drive freezes on “waiting for the upload to finish”. With this, backups are now completing daily without any major issues, now that the first big backup of everything has completed.
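For reference, the retry and exclusion settings described above could be combined into a single `duplicati-cli` invocation along these lines. This is only a sketch: the target URL, source path, and excluded folder are placeholders, not values taken from this thread.

```shell
# Hypothetical duplicati-cli invocation (target URL and paths are placeholders).
# --number-of-retries / --retry-delay soften transient 403/5xx failures from
# Google Drive; --exclude keeps Duplicati's own database folder out of the
# backup set so the job does not hang on "waiting for the upload to finish".
duplicati-cli backup \
  "googledrive://my-backup-folder?authid=..." \
  /home/user/files/ \
  --number-of-retries=50 \
  --retry-delay=20s \
  --exclude="/home/user/files/duplicati-db/"
```

The exclude path should match wherever the local Duplicati database actually lives when it sits on the same drive as the backed-up data.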
I’m running Duplicati under Manjaro ARM on a Raspberry Pi 4 with 8GB of RAM. I used the AUR package here: AUR (en) - duplicati-latest. I keep getting this error and I have no idea how to solve it:

SQLite error cannot commit - no transaction is active

I’ve tried moving the database to a larger disk, and I’ve tried setting the tmpdir in the web GUI to the same larger disk. The latest log I have with the error is posted below. Does anyone know what could be wrong and any possible solutions?

```
7:18 PM: Failed while executing “Backup” with id: 1
(0x80004005): SQLite error
at 3.Reset ( stmt) in :0
at 3.Step ( stmt) in :0
at .NextResult () in :0
at …ctor ( cmd, behave) in :0
at (wrapper remoting-invoke-with-check) …ctor(,)
at .ExecuteReader ( behavior) in :0
at .ExecuteNonQuery () in :0
at .Commit () in :0
at . (System.Boolean isDisposing) in :0
at . () in :0
at .BackupHandler.RunAsync (System.String sources, filter, token) in :0
at ( task) in :0
at .BackupHandler.Run (System.String sources, filter, token) in :0
at +c_DisplayClass14_0.b_0 ( result) in :0
at .RunAction (T result, System.String& paths, & filter, System.Action`1 method) in :0
at .Backup (System.String inputsources, filter) in :0
at (+IRunnerData data, System.Boolean fromQueue) in :0
```

I have searched open and closed issues for duplicates.

First of all, thank you for developing duplicati. I have had successful small backups, but before really using this software on a large scale, I’ve had issues in production backups: when backing up 1TB of data to S3, it tries using a 50GB `/tmp/` partition that quickly fills and ends up killing the job.

**Operating system**: Redhat Enterprise Linux 7

When I try to specify any `tmp` directories in the CLI, duplicati ends up using the root `/tmp/` directory.
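As a quick sanity check before blaming Duplicati itself, it is worth confirming that the `TMPDIR` environment variable is actually taking effect in the shell that launches the service. A standard tool such as `mktemp` (GNU coreutils) honours `TMPDIR`, so the sketch below verifies that scratch files land under the chosen directory; the directory path is a stand-in, not one from this report.

```shell
# Minimal TMPDIR check using mktemp (GNU coreutils).
# If this file lands under the chosen directory but Duplicati still writes to
# /tmp/, the variable is probably not reaching the Duplicati process itself
# (e.g. it is started by systemd or a different user without this environment).
export TMPDIR="$PWD/duplicati-tmp"   # stand-in for a larger partition
mkdir -p "$TMPDIR"
scratch=$(mktemp)                    # should land under $TMPDIR, not /tmp
printf '%s\n' "$scratch"
```

If the check passes here but the daemon still uses `/tmp/`, the variable needs to be set in the service's own environment (for example in its systemd unit) rather than in an interactive shell.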