We have one MariaDB database in AWS RDS, deployed to three environments (dev, test and prod) using GitLab CI/CD. I am not a DBA but the developer of the API between the database and the frontend. We now need some kind of backup system so that we can recover in case of failure or data corruption. We have decided that, once a week, we want to copy everything in the prod database to the test database, so that the test environment always contains data at most a week older than prod, with prod as the source.
We also have database snapshots turned on, but restoring a snapshot creates a new instance with a new endpoint rather than restoring into the existing database. So if we restored an old snapshot we would have to change the endpoint in our connection configuration, which is doable but perhaps not ideal.
I see there is also the option of "restore database from S3", but if I understand correctly only MySQL and Aurora are supported there, not MariaDB.
My question is whether mysqldump is my best choice for performing scheduled backups. I have already created a task in ECS which can run mysqldump and copy the database from one instance to another (both in dev). If I first copy the mysqldump file to S3, then I could also copy the backup file between accounts, and keep old backups in case we discover that an error or data corruption has been present for longer than a few days.
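For reference, the pipeline I have in mind looks roughly like this. All hostnames, bucket and database names below are made-up placeholders, and the network commands only run when `RUN_BACKUP=1`, so the skeleton can be inspected without touching anything:

```shell
#!/bin/sh
# Placeholder values -- substitute your own endpoints and names.
SRC_HOST="prod-db.example.eu-west-1.rds.amazonaws.com"   # hypothetical prod endpoint
DST_HOST="test-db.example.eu-west-1.rds.amazonaws.com"   # hypothetical test endpoint
BUCKET="my-db-backups"                                   # hypothetical S3 bucket
DB_NAME="appdb"                                          # hypothetical schema name

# Build a dated S3 key so old backups are kept side by side.
backup_key() {
    echo "backups/prod-$(date -u +%Y-%m-%d).sql.gz"
}

if [ "${RUN_BACKUP:-0}" = "1" ]; then
    KEY=$(backup_key)
    # Dump prod, compress, and stream straight to S3 (no local temp file).
    # --single-transaction gives a consistent InnoDB dump without locking reads.
    mysqldump --host="$SRC_HOST" --user="$DB_USER" --password="$DB_PASS" \
        --single-transaction --routines --triggers "$DB_NAME" \
        | gzip \
        | aws s3 cp - "s3://$BUCKET/$KEY"

    # Load the same dump into the test instance.
    aws s3 cp "s3://$BUCKET/$KEY" - \
        | gunzip \
        | mysql --host="$DST_HOST" --user="$DB_USER" --password="$DB_PASS" "$DB_NAME"
fi
```

At under 1 GB the dump should complete in minutes, and keeping one dated object per week in S3 covers the "corruption discovered late" case.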
The database is not large, less than 1 GB, and is likely to stay under 1 GB for a long time. It is also not very busy, and mostly read from.
Do you think mysqldump and the above strategy are my best available choices?