Technical information about the implementation of UnBlur

This page contains general and technical background information. It is in question and answer format.

What is “UnBlur”?

“UnBlur” is a computer program written by the author that can remove some or all of the “blur” in an image.

What does it cost?


What are these other small programs that are listed with the UnBlur package?

They are simple utilities that are useful in manipulating Windows Bitmap (BMP) images, converting between BMP and the UnBlur image format, and creating two-dimensional point spread functions. The UnBlur program functions quite independently of these utilities.

Isn’t the correct term “deblur”, not “unblur”?

There are several answers to this. The Oxford English Dictionary recognises neither term, but it does recognise “unblurred” (although the two quoted precedents indicate it meaning “never having been blurred” rather than “removal of blur”). The term “deblur” does, however, seem to have some current usage. One might, then, consider “deblur” to be the generic term, and “UnBlur” the distinctive name of this package, in the same way that “wheat” is a generic term, and “Weet Bix” the distinctive name of a specific product.

I need a three-dimensional version for medical imaging. Any chance of this?

The UnBlur program can already process images of arbitrary dimensionality. All you need to do is specify this in the control file. Of course, the Windows BMP conversion utilities will be of no use; you will need to write some small programs to convert between your medical image data files and the UnBlur image format. Note that position-dependent point spread functions are not yet supported in Version 1.1, although the nature of the algorithm makes them possible. Please contact the author if you have a need for such point spread functions.

The documentation lists a four-dimensional example. When would anyone need this?

In some medical imaging applications, a three-dimensional image is obtained at regular time intervals. Often, a “signal” starting at one time will continue, at reduced strength, through a number of the following time intervals. This represents a “blurring in time”. The UnBlur program could, conceivably, be used to “unblur” such sequences of three-dimensional images in both the three dimensions of space and the one dimension of time.

What sorts of images can be “unblurred”?

Essentially, any image that consists of non-negative intensities, which have been blurred linearly by a non-negative point spread function, can be analysed. This excludes any blurring that is of a wavelike or interference nature, such as diffraction fringes. Please contact the author if you are unsure of whether your application satisfies these requirements.

Is it true that non-rectangular images, or images with holes, can be processed?

Yes. Any number of pixels in an input image can be listed as “void”, which means that no information is known about them. Since this description also applies to all pixels outside the image itself, such “void” pixels essentially allow the “valid” region to be moulded to any shape, or to have holes put in it. The UnBlur program handles such “edges” and “holes” just as easily as it does a rectangular image.

How long does it take to “unblur” an image?

There are many variables, but here is a rough example: a 200 by 200 pixel, full-colour (i.e. 24-bit) image, with a point spread function covering 40 pixels, took about an hour to process on a Pentium III 866 MHz. Processing time should be jointly proportional to the number of pixels in the image and the number of pixels in the point spread function, although “experimental” measurements of the performance of the program have not yet been made.

Do I require any third-party software or licences to run UnBlur?

No. The UnBlur program has been written by the author, entirely in ANSI C, building on a number of C libraries that the author has written from scratch over the past decade. No third-party software is required at all—not even Foundation Classes.

I don’t trust any code I can’t examine. Can I have the source code?

Yes. Simply download it instead of the Windows executables.

Why are the programs written in C, not C++?

There are historical and personal reasons, which go beyond the scope of this document. Suffice it to say that the code is written within a strongly object-oriented paradigm.

Why doesn’t the source code look anything like C?

The code has been written in the author’s own “flavour” of object-oriented C, based on a fairly extensive “wrapping” of the language using preprocessor macros. It compiles on any ANSI C compiler, but does not look like traditional C. However, if you digest the macros, you will find that it is just C wrapped up in some extra structure.

The reason for this approach is that the author’s “wrapping” employs a number of tools that provide strong checking, as well as automatic debugging facilities, on any ANSI C compiler. If the debug flag is not defined, the overheads of this extra “wrapping” are compiled away, yielding the fastest possible code.

I would be happy to discuss this further in a suitable forum.

Why use this UBI image file format? It is plain text, slow, and wasteful!

As noted above, the UnBlur program will process images of arbitrary dimensionality. Its image file format therefore needs to cater for this degree of flexibility. A UBI image file can also specify resolution uncertainties (i.e., roundoff errors) inherent in its data. Thus, it was felt more appropriate to create a new file format, and either provide or require conversion programs to and from other image formats where necessary. The file format was chosen to be plain text, to aid transmission over the Internet and to allow easy readability for debugging and other purposes. The UnBlur program also checks such input files quite comprehensively for possible errors or inconsistencies. It has been found that the time taken to load and write such files, whilst significantly longer than for other applications employing the same amount of image data, is not large compared to the whole run-time of the program.

I want to incorporate the UnBlur library in my own program. Can I do this?

Legally speaking, you cannot do it without my permission, although I am generally willing to grant permission for suitable purposes if you contact me first. Technically, the library has been written in such a way that such incorporation is quite straightforward, although again you should talk to me about it if you want to avoid massive technical headaches.

If you do not wish to seek my permission, you can call the UnBlur program from any C or C++ program by writing out a suitable control file and calling UnBlur using the system() command. This is also the safest option.

What are these options for “discrete statistics” and “granularity”?

In many precision physics applications, only a small number of particles (photons, electrons, etc.) are detected per pixel. The relationship between the number of particles measured, and the true intensity, is governed by the laws of quantum mechanics. In most practical cases, this leads to a Poisson distribution of particle number (often referred to as “photon noise” in visual applications). It is manifested in an image by a “dotty” appearance, which is most significant for low intensities, and relatively unimportant for high intensities. (Roughly speaking, the size of the noise is of the order of the square-root of the number of particles detected by the pixel detector, so that the relative noise, i.e., the noise as a fraction of the signal, is of the order of the reciprocal of the square root of the number of particles.)

The UnBlur program needs to be told if the number of particles detected is so small that this “noise” is significant; otherwise, the UnBlur algorithm will reject most of the information in the image as not matching the characteristics of the point spread function. If such “discrete statistics” are applicable, the program needs to be further told what intensity value corresponds to a single detected particle, and the “confidence level” that should be applied. The UnBlur program then takes this information into account when processing the image.

A phenomenon that may look superficially similar to discrete statistical noise is the “graininess” of a photographic image, which is due to the chemical and physical nature of the photographic emulsion. When scanned digitally, at high magnification, such granularity becomes apparent. However, unlike discrete statistical noise, which is usually independent for each particle detector in the imaging apparatus, granularity is a property of the emulsion itself, and will, under high enough magnification, display correlations over a number of adjacent pixels. In other words, a “grain” will cause a number of pixels to appear as a brighter or darker “blob” than the surrounding area, if the magnification is sufficiently high. Furthermore, the relative effects of this granularity do not generally decrease for high intensities in the way that occurs for discrete statistical noise.

Again, if the UnBlur program is not informed that the image is significantly granular, it will generally reject some of the information in the image, as not matching the characteristics of the point spread function, and the resulting reconstruction will exhibit artefacts of the granularity.

To avoid this, the program needs to be told that the image is granular. It also needs to be given a “radius of reliability” for each dimension of the image, defined so that, if an ellipse or ellipsoid with these semi-axes is centred on any pixel in the image, the true intensity value of the pixel lies within the range of values covered by the ellipse or ellipsoid. In other words, the ellipse or ellipsoid is larger than the grain size of the image.

This information is used when determining the overall constraints of the reconstructed image (maximum and minimum possible intensities at each pixel position), but is not applied when fine-tuning and optimising the best estimate of the reconstructed image. This allows some of the effects of the granularity to be bypassed, although the granularity will, ultimately, provide a limit to the fine detail that can be reconstructed.

Why doesn’t the DiscreteStatistics option work?

Version 1.1 of the package requires that the DiscreteStatistics and Granular entries be included. At present, the DiscreteStatistics option is not implemented at all, but the Granular option is implemented in full. The author is still considering the best ways of incorporating the DiscreteStatistics feature into the algorithm of the program.

Can I have a point spread function that is position-dependent?

The general philosophy of the UnBlur algorithm will allow it to handle the case in which the point spread function is not independent of position, i.e., is not simply a universal function of relative displacement. However, substantial additional programming will be required to implement this capability, which is not offered in Version 1.1. Please contact the author if you have a strong need for such capabilities.

I blurred an image and then “unblurred” it. Why didn’t I get back the original?

When an image is blurred, information is often irretrievably lost. The amount of loss depends on the nature of the original image and of the point spread function. Sometimes, perfect reconstruction is possible. In many cases, however, only the edges of objects can be reliably sharpened; fine details, i.e., changes that are rapid compared to the shape of the point spread function, are lost in the noise.

A good analogy is to consider ten people sitting in a line, each having a number of dollar coins in their pocket. Let us (for simplicity only) assume that they each start with a multiple of $10. Let us furthermore assume that their money is “blurred” by the following process: each person gives one-tenth of their original money to each of their nine colleagues. If I now tell you that, after such a “blurring”, each person has $20, can you tell me how much each of them started with?

If you play with this problem for a while, you quickly find that it is impossible. Namely, if you distribute 200 $1 coins amongst the ten people, in packets of $10, in any arbitrary way at all, then you will find that, after the above “blurring” (sharing) process, each of them will have $20. There is no way, after the blurring is complete, to determine what the original distribution of money was. This is the analogue of a case in which an image is blurred so badly that no feature at all can be reconstructed.

A more realistic situation, however, has the following analogue: Let us assume that the “blurring” occurs by each person sharing one-third of their coins with each of their two immediate neighbours, keeping the remaining one-third for themself. Let us furthermore assume, for simplicity, that each person starts with a multiple of $3. If I now tell you that, after this “blurring”, the people have, in order, $0, $5, $5, $5, $0, $5, $10, $10, $5, $0, then some logic tells you how much they originally had. Namely, since the first person has no coins, the second person cannot have had any to share either—otherwise they would have given some to the first person. But since the second person now has $5, they must have received these from the third person—who must, therefore, have started with $15. Continuing along the line, you find that the original distribution of money must have been $0, $0, $15, $0, $0, $0, $15, $15, $0, $0. In this case, the “unblurring” is complete: no information at all was lost.

In practice, of course, one is never in possession of precisely accurate data. A brief reflection on the above process shows that even small uncertainties in each “blurred” value will quickly snowball into large uncertainties in the reconstructed values, away from the “edges”. This is, essentially, why fine interior details of an image cannot often be reconstructed.

It must, therefore, always be borne in mind that many different original images may produce the same blurred image. For the second example above, an “original image” of $9, $0, $0, $9, $0, $0, $9, $0, $0, $9 would yield the same “blurred image” as an “original image” of $3, $3, $3, $3, $3, $3, $3, $3, $3, $3, at least if we assume that the pattern is continued indefinitely to the left and right, past the ten people that are “visible”. Thus, in full generality, “unblurring” is not usually unique.

The UnBlur program seeks to reconstruct an original image that will account for the observed blurred image as accurately as possible, without introducing fine details that cannot be justified with confidence. In other words, if there is a region within which many solutions are possible, the UnBlur program chooses the smoothest, most featureless one.

Why are “Gibbs fringes” mainly absent from the reconstructed images?

The Gibbs phenomenon is a consequence of applying Fourier transform methods to functions that possess discontinuities (i.e., sharp edges). In signal theory, its effects are generally referred to as Gibbs ringing, or simply “ringing”; in image processing, its effects are referred to as Gibbs fringes. This phenomenon is generally unavoidable when performing image deconvolution in the Fourier domain.

The UnBlur program does not use Fourier methods at all. All processing is done in the spatial domain. One consequence seems to be that the Gibbs phenomenon is very largely avoided. Another is the ability to process images having shapes that are not rectangular or cuboidal (see above question).