Well, the structure of a JPEG image (which is entirely separate from the structure of a JFIF file; .jpg/.jpeg files are actually JFIF files with JPEG images inside, but nothing else ever used JFIF to speak of, so calling them .jpegs works) is pretty trivial if you skip the single-pass/multiple-pass differences. (And you can skip them: conversion to/from 'progressive' images is lossless and can be done with external tools.)
Once you get past the JFIF wrapper, the JPEG itself is pretty basic: once you convert things to the color space it works in, decide on the downsampling factors, and run everything through the DCT, you're dealing with (essentially) flat planes of blocks of 64 numbers. All of the blocks can be arbitrarily scaled down with per-coefficient (NOT per-block) divisors that are declared ahead of time, before being truncated to integers and finally encoded.
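To make that concrete, here's a rough sketch of just the quantize/dequantize step on one 8x8 block, using the example luminance table from Annex K of the JPEG spec (the block itself is made up for illustration; real encoders obviously do the DCT first):

```python
import numpy as np

# Example luminance quantization table from Annex K of the JPEG spec.
# These are the per-coefficient divisors declared ahead of time.
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(dct_block):
    # Per-coefficient division, then rounding to integers;
    # most small high-frequency coefficients end up as 0.
    return np.round(dct_block / Q).astype(int)

def dequantize(q_block):
    # The decoder just multiplies back; the rounding loss is the lossy part.
    return q_block * Q

# A made-up DCT block: one big DC term, tiny high-frequency terms everywhere else.
block = np.full((8, 8), 5.0)
block[0, 0] = 500.0
q = quantize(block)
print(q[0, 0])  # the large DC value survives quantization
print(q[7, 7])  # the tiny high-frequency term rounds away to 0
```

The big divisors in the bottom-right corner of the table are why high-frequency detail is the first thing to go.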
Since the format relies on a lot of the later numbers becoming 0 when truncated to integers for its compression, simply zeroing out things toward the end of the list of numbers results in lowered quality and increased compression across all the blocks, since you can't (as I understand it) declare multiple quantization matrices (which is what the divisor blocks are called) for a single plane.
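"The end of the list" here means the end of the zigzag scan order the coefficients are serialized in. A minimal sketch of tail-zeroing one block, with the zigzag order generated rather than hard-coded (function names are mine, not from any library):

```python
def zigzag_indices(n=8):
    # Walk the anti-diagonals of an n x n block; the traversal
    # direction alternates per diagonal, matching JPEG's zigzag scan.
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag.reverse()
        order.extend(diag)
    return order

def zero_tail(q_block, keep):
    # Keep only the first `keep` coefficients in zigzag order and
    # zero the rest: lower quality, longer runs of zeros to encode.
    out = [row[:] for row in q_block]
    for (i, j) in zigzag_indices()[keep:]:
        out[i][j] = 0
    return out
```

Since the entropy coder spends almost nothing on long runs of trailing zeros, the more of the tail you kill, the smaller the block encodes.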
So basically you'd be looking at zeroing out arbitrary columns in the big fat table of blocks in individual planes, optionally zeroing out everything from a given column onwards. To be honest, losing things arbitrarily in the middle could result in interesting artifacting as well, so that's something else to experiment with, perhaps?
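The experiment above could be sketched like this, treating a plane as a list of blocks whose 64 coefficients are already in zigzag order (the function name and data layout are my own assumptions, not part of any JPEG library):

```python
def glitch_plane(blocks, kill):
    # `blocks`: one length-64 coefficient list per 8x8 block of the plane.
    # `kill`: set of zigzag positions (0..63) to zero in EVERY block,
    # since one quantization matrix applies to the whole plane.
    return [[0 if i in kill else c for i, c in enumerate(b)]
            for b in blocks]

# "From a given column onwards" is just a contiguous tail:
tail_kill = set(range(32, 64))
# ...while arbitrary mid-range positions give the weirder artifacting:
mid_kill = {5, 9, 14, 23}
```

Tail-kills look like ordinary heavy compression; mid-range kills remove specific frequencies while keeping finer detail above them, which is where the interesting glitches should live.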
Re: Unfortunately...
Date: 2006-09-25 07:13 am (UTC)