Compression with uneven block size

Picture of Victoria compressed with uneven block size

I came across this article which shows images that have been compressed using blocks of different sizes. It looked interesting, so I made my own version using Python.

The program works by reading in an image (as described here) and calculating the variance of pixel intensities in a 256 x 256 pixel square. If that variance is above a threshold, it splits the square into four 128 x 128 pixel squares and repeats the process for each of them, down to single pixels if necessary. If the variance is below the threshold at any point, it creates a square of that size filled with the region's mean colour. Changing the threshold changes how blocky the image is.
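The recursive split can be sketched roughly like this (a minimal grayscale version with my own function and variable names, not the attached script itself):

```python
import numpy as np

def contrast_blocks(img, x, y, size, threshold):
    """Recursively split a square region of a grayscale image while its
    intensity variance exceeds `threshold`; return a list of
    (x, y, size, mean_colour) blocks."""
    region = img[y:y + size, x:x + size]
    # Stop at single pixels, or once the region is uniform enough.
    if size == 1 or region.var() <= threshold:
        return [(x, y, size, region.mean())]
    half = size // 2
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += contrast_blocks(img, x + dx, y + dy, half, threshold)
    return blocks
```

A half-black, half-white image splits once into four uniform quarters, while a flat image comes back as a single block.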

I later added an option to output Processing.js code so I can recreate the image as a Khan Academy computer science program. The only real addition was to sort the squares by colour first, so that all squares of the same colour sit next to each other and fill() needs to be called only once per colour. I used it to create a version of the Mona Lisa, which is now my most upvoted program.
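The sort-then-group trick might look something like this, assuming each block is an (x, y, size, colour) tuple with a single grayscale colour value (the function name and output format are my own illustration, not the original script's):

```python
from itertools import groupby

def to_processing(blocks):
    """Emit Processing.js drawing code with one fill() call per colour.

    `blocks` is a list of (x, y, size, colour) tuples; sorting by colour
    makes equal colours adjacent, so groupby yields one run per colour.
    """
    lines = ["noStroke();"]
    for colour, group in groupby(sorted(blocks, key=lambda b: b[3]),
                                 key=lambda b: b[3]):
        c = int(round(colour))
        lines.append("fill(%d, %d, %d);" % (c, c, c))
        for x, y, size, _ in group:
            lines.append("rect(%d, %d, %d, %d);" % (x, y, size, size))
    return "\n".join(lines)
```

Without the sort, fill() would be emitted once per square rather than once per colour, which matters when the output has to fit in a Khan Academy program.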

[Update: I've put a version online at]

Below, I've attached a relatively user-friendly version of my Python program. You need to change the file extension from .txt to .py, and you need Python with the numpy library installed. To run it, call:

>>> python findContrastBlocks.py path/to/file.jpg

A png image will be created with the same filename, but ending in _contrast_blocks. The program uses a default variance threshold of 1200, which seems to work quite well on the images I've tried. To change the value, use the -t option, e.g.

>>> python findContrastBlocks.py path/to/file.jpg -t2000

To output the code for a Khan Academy program, add the -k option:

>>> python findContrastBlocks.py path/to/file.jpg -k
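For reference, the two options above can be handled with a few lines of argparse; the option letters come from the post, but the parsing details here are my own reconstruction, not the attached script:

```python
import argparse

def parse_args(argv=None):
    """Parse the command line described above: an image path, an
    optional -t variance threshold, and an optional -k flag."""
    parser = argparse.ArgumentParser(
        description="Compress an image with uneven block sizes")
    parser.add_argument("image", help="path to the input image")
    parser.add_argument("-t", dest="threshold", type=float, default=1200,
                        help="variance threshold (default 1200)")
    parser.add_argument("-k", dest="khan", action="store_true",
                        help="output Khan Academy (Processing.js) code")
    return parser.parse_args(argv)
```

argparse accepts the glued short-option form, so `-t2000` is read the same as `-t 2000`.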



Attachment: findContrastBlocks.txt (3.69 KB)
