UnBlur is a position-space image deblurring algorithm I developed between 1999 and 2001. In 2021 I re-implemented UnBlur using my current ANSI C codebase (available below).

*In principle*, the UnBlur algorithm can be applied with a completely
position-dependent point spread function (PSF).
In practice, however, I have only coded it up for the assumption of a
position-independent PSF.

As with all deblurring algorithms, UnBlur can only do a good job when the
number of pixels in the PSF is relatively small—for example, for a
moderate amount of motion blur (which need not be in a straight line,
and may be non-uniform in intensity).
When the PSF contains too many pixels, each pixel of the original unblurred
image is distributed amongst too many blurred image pixels for *any*
algorithm to be able to sensibly disentangle the blurring and
trace each contribution back to its source, given that quantization and other
errors will always be present in the blurred image.
The UnBlur code below automatically aborts if it detects that its estimate of
the unblurred image is worse than its previous
estimate; in the case of a PSF with a large number of pixels compared to the
typical features in the unblurred image, this will happen even after the
first iteration.
(The initial estimate is allowed to have a greater error than the input image,
since there are cases in which iterations of the process actually reduce such
an error below that of the input image.)

I have provided a brief history of the development of UnBlur on a separate page.

The programs provided below each extract the PSF to be used from a grayscale image provided by the user. The PSF can be provided as white on black (i.e. black is the zero-signal background) or black on white (i.e. white is the zero-signal background); each program analyzes the set of boundary pixels of the PSF image, and chooses whichever of these two choices is most consistent.
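The boundary analysis might be sketched as follows; this is an illustrative heuristic, not necessarily the exact one used in the programs, and all names here are hypothetical:

```c
/* Decide whether a grayscale PSF image is white-on-black or
   black-on-white by averaging its boundary pixels: a dark boundary
   suggests that black is the zero-signal background. (Illustrative
   sketch only; the actual programs' heuristic may differ.) */
static int psf_is_white_on_black(const unsigned char *image, int width,
                                 int height) {
  long sum = 0;
  int count = 0, x, y;
  for (x = 0; x < width; x++) {            /* top and bottom rows */
    sum += image[x];
    sum += image[(height - 1) * width + x];
    count += 2;
  }
  for (y = 1; y < height - 1; y++) {       /* left and right columns */
    sum += image[y * width];
    sum += image[y * width + width - 1];
    count += 2;
  }
  return sum / count < 128;                /* dark boundary: white on black */
}
```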

The PSF is automatically normalized, and is stored relative to its center of mass in each direction, rounded to the nearest pixel, so that neither blurring nor unblurring changes the position of the center of mass of any signal, to within half a pixel in each direction.

The two fundamental operations used in the programs below are
*blurring* and *back-propagation*.
Blurring simply convolves the PSF with an unblurred image, creating a blurred
image.
Back-propagation, described below, attempts to infer what unblurred image
would,
under blurring with the PSF, create a given blurred image.
In this sense it is an attempt at deblurring, but in most cases it is only one
step in the process: back-propagation and reblurring need to be applied iteratively
to successively reduce the error between the reblurred image and the supplied
blurred image.

Extension boundary conditions are applied in all cases. As with all image processing algorithms, this means that if there is a significant amount of signal information near any boundary, the extension of this signal across the boundary to the exterior of the image may cause artifacts.
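The blurring operation, together with extension boundary conditions, can be sketched as follows for a single channel (all names here are illustrative, not those of the UnBlur code):

```c
/* Clamp an index into [lo, hi]: reads outside the image repeat the
   nearest edge pixel (extension boundary conditions). */
static int clampi(int v, int lo, int hi) {
  return v < lo ? lo : (v > hi ? hi : v);
}

/* Convolve an image with a normalized PSF stored as weights w[k] at
   offsets (dx[k], dy[k]) from its center of mass. */
static void blur(const double *in, double *out, int width, int height,
                 const double *w, const int *dx, const int *dy, int n) {
  int x, y, k;
  for (y = 0; y < height; y++)
    for (x = 0; x < width; x++) {
      double sum = 0.0;
      for (k = 0; k < n; k++) {
        int sx = clampi(x - dx[k], 0, width - 1);   /* extend at edges */
        int sy = clampi(y - dy[k], 0, height - 1);
        sum += w[k] * in[sy * width + sx];
      }
      out[y * width + x] = sum;
    }
}
```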

In the following, “intensity” should be read to mean the three-vector of all three color components of intensity. (In practice, the three channels are simply processed separately, so that the “intensity” at any point in the algorithm is just that of the given channel.)

I will also refer to “original unblurred image,” as if this
actually existed and then was blurred by the PSF to provide the blurred image
that we have.
*Conceptually*, it did exist, even if only as the photons entering a camera
lens that may have moved relative to the objects being photographed while the
shutter was open, or may not have been perfectly focused.
For testing the UnBlur algorithm, of course, *actual* sharp
ground-truth images can be
blurred;
these sharp source images are the “original unblurred images”
that I refer to.

The heart of the UnBlur algorithm is the back-propagation algorithm.

It starts by constructing an *initial estimate* of the original unblurred
image.
For each pixel in the initial estimate, it looks at all of the pixels in the blurred
image that are connected by an entry in the PSF to the given estimate pixel.
For each such blurred pixel, it asks the question: what intensity value
would the estimate pixel need to have, to produce this blurred pixel,
*if that estimate pixel were the only source*?
To do this, it actually *divides* the blurred pixel intensity by the
weight of the corresponding entry in the PSF—“undoing”
the multiplication
used in the blurring operation.

For each estimate pixel, all of these estimated intensity values are
collected.
The next task is to decide on a single intensity value for this estimate pixel.
Now, since each of these values has been obtained by *dividing* by
the corresponding PSF weight, and since some PSF weights may be small,
it would not be sensible to simply compute, say, the mean of the set of values,
since the large values corresponding to small weights
could introduce instabilities.
Instead, the back-propagation algorithm computes the *median* of
these values, and stores it as the initial estimate for that pixel.
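This initial-estimate step might be sketched as follows, in one dimension for clarity (the real code is of course two-dimensional, and all names here are illustrative):

```c
#include <stdlib.h>

/* Comparison function for sorting doubles with qsort. */
static int cmp_double(const void *a, const void *b) {
  double d = *(const double *)a - *(const double *)b;
  return d < 0 ? -1 : (d > 0 ? 1 : 0);
}

/* Median of n values (the array is sorted in place). */
static double median(double *v, int n) {
  qsort(v, n, sizeof *v, cmp_double);
  return n % 2 ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

/* For each estimate pixel, collect blurred[x + dx[k]] / w[k] over all
   PSF entries ("undoing" the multiplication used in blurring), and
   store the median as the initial estimate. */
static void initial_estimate(const double *blurred, double *est, int width,
                             const double *w, const int *dx, int n) {
  double *vals = malloc(n * sizeof *vals);
  int x, k;
  for (x = 0; x < width; x++) {
    for (k = 0; k < n; k++) {
      int bx = x + dx[k];                  /* blurred pixel fed by x */
      if (bx < 0) bx = 0;
      if (bx >= width) bx = width - 1;     /* extension boundaries */
      vals[k] = blurred[bx] / w[k];        /* divide out the PSF weight */
    }
    est[x] = median(vals, n);
  }
  free(vals);
}
```

For a single isolated source pixel of intensity 4 blurred by the PSF (¼, ½, ¼), every back-propagated value at the source pixel is exactly 4, so the median recovers the source intensity exactly.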

Now consider those (in practice rare) cases in which the original unblurred image consists solely of isolated pixels of nonzero intensity, on a background of zero intensity, spatially separated far enough that none of their blurred versions overlap. In such cases it is straightforward to see that this estimate will actually be a perfect reconstruction of the original unblurred image: each of the back-propagated estimated intensity values for each estimate pixel will be precisely the intensity of the original unblurred image, and so their median will likewise be precisely this intensity.

In most practical cases, however, more than one source pixel will contribute
to any given blurred pixel.
The initial estimate overcounts all of those contributions.
So the next stage of the back-propagation algorithm is to reblur the initial
estimate with the PSF, and then determine *how much*
of an overcounting has occurred, for each reblurred pixel,
and decide on a “multiplier” for each such reblurred pixel that
compensates for this overcounting.
For this it simply compares each reblurred pixel with its original blurred
counterpart.
If they are of different sign (which becomes possible in later iterations of
the algorithm), then the estimate is way off, and the multiplier is set to zero.
If the original blurred pixel intensity is zero, but the reblurred intensity is
not, then again the multiplier is set to zero.
If the reblurred pixel intensity is zero, then we arbitrarily set the
multiplier to unity, since there is no contribution anyway.
If the reblurred pixel intensity is *less than* that of the original,
then in fact we have an undercounting, rather than an overcounting, and the
multiplier is set to unity.
(We don’t want to set the multiplier higher than unity, to avoid
introducing instabilities.)
Finally, in the case that the reblurred pixel intensity is in fact greater than
the original, then we set the multiplier to be the latter divided by the
former,
to “undo” the overcounting.
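The multiplier rules above can be collected into a single function (an illustrative sketch; the name is hypothetical):

```c
#include <math.h>

/* Given the original blurred intensity and the reblurred intensity at
   one pixel, return the compensating multiplier, which by construction
   lies in [0, 1]. */
static double multiplier(double blurred, double reblurred) {
  if (blurred * reblurred < 0.0) return 0.0;        /* opposite signs */
  if (blurred == 0.0 && reblurred != 0.0) return 0.0;
  if (reblurred == 0.0) return 1.0;                 /* no contribution */
  if (fabs(reblurred) < fabs(blurred)) return 1.0;  /* undercounting */
  return blurred / reblurred;                       /* undo overcounting */
}
```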

The back-propagation algorithm then back-propagates this set of multipliers itself, back to the original unblurred space, with the difference that values are not divided by the corresponding weight of the PSF. The medians for each unblurred-space pixel are then computed, as the “back-propagated multipliers.” By construction, it is clear that these multipliers will be between zero and unity.

The initial estimate is then scaled by this back-propagated multiplier image, and returned as the best estimate of the back-propagation algorithm.
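This final back-propagation step might be sketched as follows (again one-dimensional, with illustrative names): the multipliers are back-propagated like intensities, except that no division by the PSF weights is performed, and the per-pixel median then scales the initial estimate.

```c
#include <stdlib.h>

/* Comparison function and median, as used for back-propagation. */
static int cmp_double(const void *a, const void *b) {
  double d = *(const double *)a - *(const double *)b;
  return d < 0 ? -1 : (d > 0 ? 1 : 0);
}

static double median(double *v, int n) {
  qsort(v, n, sizeof *v, cmp_double);
  return n % 2 ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

/* Back-propagate the multiplier image and scale the estimate by the
   per-pixel median of the collected multipliers. */
static void apply_multipliers(double *estimate, const double *mult,
                              int width, const int *dx, int n) {
  double *vals = malloc(n * sizeof *vals);
  int x, k;
  for (x = 0; x < width; x++) {
    for (k = 0; k < n; k++) {
      int bx = x + dx[k];
      if (bx < 0) bx = 0;
      if (bx >= width) bx = width - 1;   /* extension boundaries */
      vals[k] = mult[bx];                /* note: no division by weight */
    }
    estimate[x] *= median(vals, n);      /* scale the initial estimate */
  }
  free(vals);
}
```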

The remainder of the UnBlur algorithm is rather simple. The back-propagated estimate is reblurred, and the error, namely the difference between this reblurred image and the original blurred image, is computed. This error is then itself back-propagated, and forms a delta to be added to the estimate.

This process is iterated, until either the mean absolute value of the error image is lower than the specified tolerance, or the maximum number of iterations specified is reached.
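The structure of this iteration might be sketched as follows, one-dimensional for brevity, with the blur and back-propagation operations supplied as callbacks; this illustrates the shape of the loop, not the actual UnBlur interface:

```c
#include <math.h>
#include <stdlib.h>

typedef void (*image_op)(const double *in, double *out, int width);

/* Iterate: reblur the estimate, compute the error against the supplied
   blurred image, and add the back-propagated error as a delta, until
   the mean absolute error falls below tol or max_iters is reached.
   Returns the number of iterations performed. */
static int unblur_iterate(const double *blurred, double *est, int width,
                          image_op blur_op, image_op backprop_op,
                          double tol, int max_iters) {
  double *reblurred = malloc(width * sizeof *reblurred);
  double *delta = malloc(width * sizeof *delta);
  int iter = 0, x;
  while (iter < max_iters) {
    double err = 0.0;
    blur_op(est, reblurred, width);              /* reblur the estimate */
    for (x = 0; x < width; x++) {
      reblurred[x] = blurred[x] - reblurred[x];  /* error image */
      err += fabs(reblurred[x]);
    }
    if (err / width < tol) break;                /* mean abs error small */
    backprop_op(reblurred, delta, width);        /* back-propagate error */
    for (x = 0; x < width; x++) est[x] += delta[x];  /* apply the delta */
    iter++;
  }
  free(reblurred);
  free(delta);
  return iter;
}

/* Trivial example operation: a one-pixel PSF, for which blurring and
   back-propagation are both the identity. */
static void identity_op(const double *in, double *out, int width) {
  int x;
  for (x = 0; x < width; x++) out[x] = in[x];
}
```

With the trivial one-pixel PSF, a zero initial estimate converges to the blurred image itself in a single iteration, since the first delta is the entire error.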

The UnBlur code also allows for an *initial estimate* image to be
provided.
This can arise in practice where one is deblurring frames of a video or film,
where one frame is sharp but the next is blurred by motion blur.
Because of the short time between the two frames, the sharp frame is (if
correctly rotated and translated to be precisely overlaid) often a very good
initial approximation to the unblurred version of the blurred frame.
To the extent that it is consistent with the blurred frame, the UnBlur algorithm
will leave it alone; to the extent of any inconsistencies, UnBlur will adjust
the estimate based on the back-propagation of the error.
This allows for a much more accurate reconstruction of the original unblurred
image to be obtained.
Note, however, that in this process one is “dropping in” detail of
the image from another image, and such a step only represents reality to the
extent that one can assume that the other image is a good first approximation.

Example uses of the code below are provided on this page.

The 2021 reference implementation of UnBlur is contained in the following ANSI C code:

- unblur.tar.gz
(146 KB; MD5:
`66703ac3666ad98bc043fcbdeb42377a`)

There are four sample programs included:

- `make_linear_blur`: Make a point spread function (PSF) file that is a simple linear blur of constant intensity.
- `make_magic_blur`: Make a circularly symmetric “blob” PSF file using the Magic Blur.
- `blur_image_psf`: Blur an image with a PSF.
- `unblur_image`: Deblur a blurred image using the given PSF.

There are also 47 unit test and death test executables provided.

Instructions for building the code are contained in the `_README`
file within the archive.

Note that all code provided here is from my personal codebase, and is supplied under the MIT License.

If you like UnBlur, then you may also find useful the following image processing resources that I have put into the public domain over the decades:

- The Magic Kernel
- World-beating resizing algorithm, superior to the popular Lanczos kernels, that helps power Facebook and Instagram (2006–2021).
- The Magic Edge Detector
- Edge detection algorithm, leveraging the power of the Magic Kernel, that is superior to the common Sobel, Scharr, Prewitt, and Roberts-cross algorithms (2011–2021).
- JPEG-Clear
- A top-down progressive image (mipmapping) algorithm, leveraging the power of the Magic Kernel (2006–2021).
- 3D JPEG-Clear
- The same algorithm as JPEG-Clear, but applied to 3D volumetric data (2010–2021).
- UnBlock
- Algorithm for removing the “blockies” from images and videos without the need for any parameter tuning (2005–2021).

This page describes personal hobby research that I have undertaken since 1999. All opinions expressed herein are mine alone. As described in the history page, I put UnBlur into the public domain, via this page and its predecessor on my previous domain, on September 9, 2001.

© 1999–2021 John Costella