I have a server with 150GB of disk space. Today I uploaded a dataset of 30GB. I cancelled the import because my internet connection died, then noticed that 29GB of space had gone missing in the database (meaning the CSV was uploaded, but the space was not released when I broke the operation). When I uploaded the data once again, it broke again and I lost another ~25GB. Now there isn't enough free space to upload the data.
This is hosted on AWS RDS, Postgres 10.6.
Is there a way to fix this, and will fixing it delete records? I'm currently hosting ~70GB of data and don't want to lose any of it. What's the best way to go about this?
The tuples were created, and they were left dead because the transaction was never committed. Simply run
VACUUM and the dead tuples will be purged. This marks the space as reusable within the table (the file won't shrink, but a re-run of the import can fill that space instead of growing the file). Or run
VACUUM FULL and the table will be rewritten from scratch, removing the dead tuples and returning the disk space to the operating system. If you have more dead tuples than live ones this may be faster, but it requires an ACCESS EXCLUSIVE lock on the table for the duration. It's usually massively slower, but it sounds like you may be an exception. Neither form deletes committed rows, so your ~70GB of live data is safe either way.
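As a rough sketch (assuming the affected table is called mydata — substitute your actual table name), you can first confirm how many dead tuples the failed imports left behind, then reclaim the space:

```sql
-- Check live vs. dead tuple counts for the table (table name is an assumption)
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'mydata';

-- Plain VACUUM: purges dead tuples and marks their space reusable;
-- no exclusive lock, but the file on disk does not shrink
VACUUM VERBOSE mydata;

-- VACUUM FULL: rewrites the table and returns space to the OS;
-- takes an ACCESS EXCLUSIVE lock, blocking reads and writes meanwhile
VACUUM FULL VERBOSE mydata;
```

If n_dead_tup dwarfs n_live_tup, the VACUUM FULL rewrite only has to copy the small live portion, which is why it can be the faster option in your situation.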