How can I simulate JPEG quality degradation?

by Pi Net   Last Updated January 10, 2019 23:18

I am writing a report about image file formats & their properties. In the report I want to talk about how some file formats use lossy rather than lossless compression, & I wanted to create an image that shows how the quality of an image decreases with repeated saving. I tried doing this by saving a JPEG image over & over under a different name each time, so I had copies of the image at each generation. However, the file only decreased in size the first time I saved it.

Is the file just not getting compressed again?

Is the same data being compressed so there is no change?

Do I need to make changes to the image so it gets compressed?

If so, how would I do this to show quality degradation?

Thank you to anyone who helps me with this. I am not very good with image editing & I am very grateful for all of your help,

Pi Net



2 Answers


tl;dr

The JPEG implementations used by most graphics software are designed specifically to prevent degradation when an image is saved over and over, provided the content does not change. Every 8x8 block will come out identical when re-encoded with the same settings, as long as no pixels in the block have changed.

To simulate degradation, nudge the curves or levels center point up and down by a value between saves, or convert from RGB to Lab and back, to introduce minor changes that force the blocks to be re-compressed.
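As a sketch of that idea, here is a small script using Pillow (Pillow is my assumption; any editor that can batch-adjust brightness works the same way) that alternately brightens and darkens the image slightly between saves, so every block changes and accumulates fresh rounding error each generation:

```python
from io import BytesIO
from PIL import Image, ImageEnhance

# Build a synthetic gradient image so the sketch is self-contained.
img = Image.new("RGB", (256, 256))
img.putdata([(x, y, (x + y) // 2) for y in range(256) for x in range(256)])

sizes = []
for generation in range(10):
    # Nudge brightness slightly (up, then back down) so every 8x8
    # block changes and must be re-quantized on the next save.
    factor = 1.02 if generation % 2 == 0 else 1 / 1.02
    img = ImageEnhance.Brightness(img).enhance(factor)

    buf = BytesIO()
    img.save(buf, format="JPEG", quality=75)
    sizes.append(buf.tell())
    # Decode the freshly saved JPEG for the next round.
    img = Image.open(BytesIO(buf.getvalue()))

print(sizes)
```

Saving each intermediate `buf` to its own file would give the sequence of progressively degraded images the question asks for.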

Some basics on JPEG compression steps and compression vulnerability

The process goes through six distinct phases:

Color Space Transformation (1) and Downsampling (2)

The color space transformation and downsampling are usually bypassed in images saved at high quality. The short explanation is that this stage separates the luminance from the color data, then downsamples the color channels to reduce the raw data load with minimal visible loss. When this stage is active, it is a major source of degradation on re-saves, but most implementations in graphics software will preserve the full-resolution RGB channels.
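As an illustration of steps 1 and 2, here is a minimal pure-Python sketch of the JFIF RGB-to-YCbCr transform followed by a 2x2 chroma average (4:2:0 subsampling); the function names are mine:

```python
def rgb_to_ycbcr(r, g, b):
    # Standard JFIF conversion: luminance Y plus two chroma channels,
    # with the chroma channels offset to center on 128.
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def downsample_2x2(channel):
    # Replace each 2x2 block with its average -- this halves the
    # chroma resolution and is where subsampling discards data.
    h, w = len(channel), len(channel[0])
    return [[(channel[y][x] + channel[y][x + 1] +
              channel[y + 1][x] + channel[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]
```

A neutral gray of (128, 128, 128) maps to Y = 128 with both chroma channels sitting at the 128 midpoint, which is why grayscale content passes through this stage essentially untouched.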

Block splitting (3)

Block splitting cuts the image into 8x8 pixel blocks for compression. It is lossless except at the extreme edges of the image, where dummy data is added; better algorithms fill those partial blocks with data chosen to minimize artifacts.
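A sketch of step 3 in pure Python, with edge replication standing in for the smarter fill mentioned above (the helper name is mine):

```python
def split_into_blocks(channel, size=8):
    """Cut a 2-D channel into size x size blocks, padding the right and
    bottom edges by repeating the last row/column -- edge replication is
    one common way encoders fill the 'dummy' area of partial blocks."""
    h, w = len(channel), len(channel[0])
    blocks = []
    for by in range(0, h, size):
        for bx in range(0, w, size):
            block = [[channel[min(by + y, h - 1)][min(bx + x, w - 1)]
                      for x in range(size)]
                     for y in range(size)]
            blocks.append(block)
    return blocks
```

A 10x10 channel, for example, yields four 8x8 blocks, three of them mostly padding.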

Discrete cosine transform (4)

In most modern implementations made for image storage, it's steps 4 and 5 that cause the degradation.

The discrete cosine transform is too complex to describe here in a way that does it justice, but I recommend you read up on it at some point. To summarize: any 8x8 block of pixels can be perfectly represented by a linear combination of the 64 8x8 basis patterns shown below. In most photographic images, the broad detail patterns are much stronger than the fine detail patterns. The transform itself is not lossy, but each resulting coefficient is rounded to only a couple of digits after the decimal.

DCT Patterns
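For the curious, the transform itself fits in a few lines. This is a naive, unoptimized 2-D DCT-II of a single 8x8 block (real encoders use fast factored versions, but the result is the same):

```python
import math

def dct_8x8(block):
    """Naive 2-D DCT-II of one 8x8 block, level-shifted by -128 first
    as the JPEG pipeline does."""
    N = 8
    shifted = [[p - 128 for p in row] for row in block]

    def alpha(k):
        # Normalization factor; the k = 0 (DC) term is scaled differently.
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)

    out = []
    for u in range(N):
        row = []
        for v in range(N):
            s = sum(shifted[y][x]
                    * math.cos((2 * y + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * x + 1) * v * math.pi / (2 * N))
                    for y in range(N) for x in range(N))
            row.append(alpha(u) * alpha(v) * s)
        out.append(row)
    return out
```

A perfectly flat block produces a single nonzero coefficient in the DC corner and zeros everywhere else, which is why flat areas compress so well.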

Quantization (5)

With the values organized this way, we can take advantage of how the human eye and brain resolve detail: they favor broad detail over fine detail. This is where most of the compression actually occurs. Because the fine-detail coefficients both affect the output less and produce differences the eye barely notices, we can clip many of the smaller values to 0 and ignore them. The quantization matrix is stored in the file; every encoder provides its own, but it is preserved between saves. The raw coefficients are divided by the values in the quantization matrix and rounded to the nearest integer. This rounding is the primary source of the lossiness.
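Sketched in Python, using the example luminance table from Annex K of the JPEG specification (the `quantize`/`dequantize` names are mine):

```python
# Example luminance quantization table from Annex K of the JPEG spec;
# quality settings scale a table like this one up or down.
QUANT = [
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
]

def quantize(coeffs, table=QUANT):
    # Divide each DCT coefficient by its table entry and round to the
    # nearest integer -- this rounding is the lossy step.
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, table)]

def dequantize(levels, table=QUANT):
    # The decoder multiplies back; the rounding error stays lost.
    return [[l * q for l, q in zip(lrow, qrow)]
            for lrow, qrow in zip(levels, table)]
```

For example, a DC coefficient of 30 quantizes to round(30 / 16) = 2, which decodes back to 32: the difference is gone for good.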

Entropy coding (6)

Entropy coding reorders the final matrix from the most significant corner to the least significant corner, which generally groups the 0s at the end of the list. Those trailing 0s can then be truncated and the results encoded into the file. This, again, is a basically lossless process, but it saves a lot of space.
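The zig-zag ordering and the trailing-zero truncation can be sketched like this (the helper names are mine; a real encoder emits an end-of-block symbol and Huffman-codes the remaining values rather than storing them raw):

```python
def zigzag(block):
    # Visit the 8x8 block diagonal by diagonal, alternating direction,
    # so low-frequency coefficients come first.
    order = sorted(((y, x) for y in range(8) for x in range(8)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else -p[0]))
    return [block[y][x] for y, x in order]

def drop_trailing_zeros(seq):
    # The run of zeros at the end of the list is never stored explicitly.
    end = len(seq)
    while end > 0 and seq[end - 1] == 0:
        end -= 1
    return seq[:end]
```

A typical quantized block keeps only a handful of leading coefficients, so the truncated list is far shorter than the 64 values it started from.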

Running a JPEG through this process again, the final compressed blocks will be identical, because the detail data that would be filtered away has already been removed. And because this is done per 8x8 block, it even holds for the unedited portions of an image when other portions are changed between saves.
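That stability can be seen with a few lines of arithmetic: once a coefficient has been quantized and dequantized, it is an exact multiple of its quantizer step, so quantizing it again with the same step reproduces the same levels (toy coefficient values below, not real data):

```python
table = [16, 11, 10, 24, 40, 51]          # a few sample quantizer steps
coeffs = [30.0, -7.2, 3.9, 100.4, -55.0, 12.6]

levels1 = [round(c / q) for c, q in zip(coeffs, table)]   # first save
recon   = [l * q for l, q in zip(levels1, table)]         # decode
levels2 = [round(c / q) for c, q in zip(recon, table)]    # second save

print(levels1 == levels2)  # True: the re-save introduces no new loss
```

The first save rounds 30.0 down to level 2 (reconstructed as 32), but the second save sees exactly 32 and maps it straight back to level 2.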

LightBender
January 11, 2019 00:58 AM

Take a look at the linked post. This question will probably get closed, because the other one already answers the part about saving several times with the same compression: What factors cause or prevent "generational loss" when JPEGs are recompressed multiple times?

But if you want to document different compression settings, start from the original file, apply one setting (best quality) and save it as 01-BestQuality.jpg; then take the original image again, apply new compression values, and save it as 02-NotTheBestQuality:o).jpg
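A sketch of that workflow using Pillow (Pillow is my assumption, and the synthetic image stands in for the original file): the key point is that every version is encoded from the untouched original, never from a previous JPEG.

```python
from io import BytesIO
from PIL import Image

# Synthetic stand-in for the original image, so the sketch runs as-is.
original = Image.new("RGB", (128, 128))
original.putdata([((x * 2) % 256, (y * 2) % 256, (x ^ y) % 256)
                  for y in range(128) for x in range(128)])

sizes = []
for index, quality in enumerate([95, 75, 50, 25, 10], start=1):
    buf = BytesIO()
    # Always encode from the untouched original, never from a JPEG.
    original.save(buf, format="JPEG", quality=quality)
    sizes.append(buf.tell())
    # On disk this would be:
    # original.save(f"{index:02d}-quality{quality}.jpg", quality=quality)

print(sizes)
```

The printed sizes shrink as the quality setting drops, giving one file per setting for the report.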


But here is another related question: "Optimal" size of a JPEG image in terms of its dimensions

You can read in the answers that you need to take into account the content of the image.

A flat image of a pure blue sky will probably stop showing noticeable additional compression, because most of its compression happens on the first save.

But an image of a forest can potentially show more compression each time, because there is more detail to be compressed.

On this old page (which I will eventually update and translate) https://otake.com.mx/Apuntes/Imagen/PruebasDeCompresion2/3-CompresionJpg10Porciento.php you can see how the compression affects areas of detail, while flat areas remain largely unchanged.

Rafael
January 11, 2019 00:59 AM
