POSTGRES LOST SPACE

by ilovejq   Last Updated October 09, 2019 22:06

I have a server with 150 GB of disk space. Today I uploaded a 30 GB dataset. I cancelled the import because my internet connection dropped, and then noticed that 29 GB of space had gone missing from the database (meaning the CSV data had been written, but was not cleaned up when I aborted the operation). When I uploaded the data again, the import broke again and I lost another ~25 GB. Now there isn't enough free space left to upload the data.

This is hosted on AWS RDS, Postgres 10.6.

Is there a way to fix this, and will the fix delete records? I'm currently hosting ~70 GB of data and don't want to lose any records. What's the best way to go about this?
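
For reference, this diagnostic sketch (standard Postgres statistics views, nothing RDS-specific) lists which tables are carrying the most dead tuples and how big each one is on disk:

    -- Tables with the most dead tuples, plus their total on-disk size.
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size
    FROM   pg_stat_user_tables
    ORDER  BY n_dead_tup DESC
    LIMIT  10;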

Tags : postgresql


Answers 1


The tuples were created and then left dead because the transaction that wrote them was never committed. Simply run one of the following:

  • VACUUM, and the space held by the dead tuples will be marked reusable. Note that plain VACUUM returns the space to the table for reuse rather than shrinking the files on disk, but that is enough to re-run the import into the same table. Or run
  • VACUUM FULL, and the table will be rewritten without the dead tuples, returning the space to the operating system. If you have more dead tuples than live ones this may be faster, but it requires an exclusive lock on the table. It's usually massively slower, but it sounds like you may be an exception. Neither option removes committed rows (see the sketch after this list).
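
As a concrete sketch, assuming the aborted import went into a table named big_import (a hypothetical name, substitute your own), the two options look like this; VERBOSE is optional but reports how many dead tuples were removed:

    -- Option 1: mark the dead-tuple space as reusable (no exclusive lock).
    VACUUM (VERBOSE) big_import;

    -- Option 2: rewrite the table without the dead tuples, returning
    -- the space to the OS. Takes an ACCESS EXCLUSIVE lock while running.
    VACUUM FULL VERBOSE big_import;

    -- Compare the table's size before and after:
    SELECT pg_size_pretty(pg_total_relation_size('big_import'));

One caveat: VACUUM FULL needs enough free disk space to hold a complete new copy of the table while it runs, which matters on a nearly full volume.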
Evan Carroll
October 09, 2019 22:03
