Sunday, 9 November 2014

Week 7: Digital Image Processing

Why use Digital Image Processing:
  • It removes the drawbacks of using a darkroom: the time required and the chemicals involved.
  • It allows for a flexible environment in which you can experiment as much as you want without consequence.
  • There is a large number of pre-defined options and operations available, as opposed to a traditional darkroom, and the number of these operations and options continues to grow.
Digital Camera Imaging Systems: 
A camera contains both a lens and a detector. In digital photography the detector is more often than not a CCD (charge-coupled device): a linear or matrix array of photosensitive electronic elements.

A 35 mm film frame measures 36x24 mm, but a typical CCD array is only about 6x4 mm. The lens of a digital camera must therefore be of high quality to condense the optical image onto an area 36 times smaller ((36 x 24) / (6 x 4) = 864 / 24 = 36).

Digital Camera Image Capture: 
On an area array sensor, a grid of many thousands of tiny photocells, each less than 5 micrometres square, creates pixels by sensing the light intensity of small parts of the image formed by the lens system.

Sensor Spatial Resolution:
Pixelization can be seen with the human eye if the resolution is too low. Increasing the number of cells in the sensor array increases the resolution of the captured image. The sensor devices described in the lecture have around 1 million cells.
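
As a rough Python sketch (my own illustration, not from the lecture; the file name photo.jpg is a placeholder), pixelization can be simulated by resampling an image down to a small grid of "cells" and scaling it back up with nearest-neighbour resampling, so each cell becomes a visible block:

  from PIL import Image

  # Open a source image (placeholder file name).
  img = Image.open("photo.jpg")

  # Simulate a low-resolution sensor: shrink the image to 100x67 cells...
  low_res = img.resize((100, 67), Image.NEAREST)

  # ...then scale back up so each cell becomes a visible block.
  pixelated = low_res.resize(img.size, Image.NEAREST)
  pixelated.show()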

Digital Camera Colour: 
In order to capture an image in colour, red, green and blue filters are placed over the photocells. Each cell is then assigned three 8-bit numbers (2 to the power of 8 is equal to 256 levels each) that correspond to the brightness values of red, green and blue. For example, an orange pixel might have brightness values of 227 red, 166 green and 97 blue.
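
A minimal Python sketch of this representation (the specific values are just the orange example above):

  # An RGB pixel is three 8-bit brightness values, each in 0-255.
  orange = (227, 166, 97)  # red, green, blue

  # Each channel has 2 ** 8 = 256 possible levels.
  levels_per_channel = 2 ** 8

  # The three channels are often packed into a single 24-bit number.
  packed = (227 << 16) | (166 << 8) | 97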

Digital Camera Optics: 
Before light collected by the lens is focused onto the sensor array, it passes through an optical low-pass filter that:
  • Removes picture detail beyond the sensor's resolution.
  • Compensates for false coloration and RGB moiré caused by large changes of colour contrast, such as pictures of thin stripes and fine point sources.
  • Reduces infrared and other non-visible light, both of which disturb the sensor's imaging process.
What is Moire and how do you prevent or remove it?
Source: http://static.photo.net/attachments/bboard/00a/00aJtp-461139584.jpg

Moiré is a pattern of wavy lines that occasionally appears on objects in digital captures. It occurs when fibres or fine details in an object interfere with the pixel grid of the imaging chip in the camera. Some cameras use anti-aliasing filters to blur these areas in captures, but others omit them because they sacrifice image sharpness. With or without this filtering, every camera can create moiré.
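
As an illustration (my own sketch, not from the lecture), moiré can be reproduced in Python by sampling a fine stripe pattern without low-pass filtering it first:

  import numpy as np

  # Fine vertical stripes at a spacing the sampled grid cannot resolve.
  x = np.arange(1000)
  stripes = (np.sin(2 * np.pi * x / 2.3) > 0).astype(np.uint8) * 255
  pattern = np.tile(stripes, (200, 1))

  # Keep every 4th column with no low-pass filtering: the stripes
  # alias into broad wavy bands (moire) instead of disappearing.
  aliased = pattern[:, ::4]

  # Blurring each row first (a crude low-pass filter) suppresses the
  # bands at the cost of sharpness, like a camera's anti-aliasing filter.
  kernel = np.ones(4) / 4.0
  blurred = np.apply_along_axis(lambda row: np.convolve(row, kernel, "same"), 1, pattern)
  filtered = blurred[:, ::4]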

Digital Image Fundamentals: 
Digital images are known as bit-maps or raster-scans. These are composed of an array of pixels. Each pixel in the image is a uniform patch of colour, but on the screen it is made up of red, green and blue phosphor dots or stripes.

The Pixel:
A pixel is the smallest digital image element that can be manipulated by image processing software. Each pixel is individually coloured, but due to their size pixels can only approximate the actual colouring of an object. Because of this, bit-maps show blocky areas when zoomed in.

Bit-Mapped Graphics:
A bit-mapped image is represented in memory as an array of bits, where each element of the array codes the colour of a single pixel. For example, 8 bits each for the red, green and blue levels needs a group of 24 bits per pixel. The array of pixels could be 640x480 (VGA spatial resolution) with 24 bits per pixel, so 640 x 480 x 24 = 7,372,800 bits, which is around 7.4 Mbit per image (1 Mbit = 10 to the power of 6 bits), or roughly 0.9 megabytes.
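
The same arithmetic as a small Python check:

  # Storage needed for an uncompressed 640x480 true-colour image.
  width, height, bits_per_pixel = 640, 480, 24

  total_bits = width * height * bits_per_pixel
  print(total_bits)                # 7372800 bits
  print(total_bits / 10 ** 6)      # ~7.37 Mbit
  print(total_bits / 8 / 10 ** 6)  # ~0.92 megabytes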

Dynamic Range:
The dynamic range of a visual scene is the number of colours or shades of grey (the grey scale) it contains. The dynamic range of a digital image, however, is fixed by the number of bits (the bit depth) the system uses to represent each pixel. This determines the maximum number of colours or shades of grey in its palette; the specific colours in use form the image palette.
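
The relationship is simply: maximum number of colours = 2 to the power of the bit depth. A one-line Python check:

  # Maximum palette size for a given bit depth.
  for bit_depth in (1, 8, 24):
      print(bit_depth, 2 ** bit_depth)
  # 1 -> 2 (black and white), 8 -> 256, 24 -> 16777216 (true colour)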

1Bit Depth:
A pixel with a bit depth of one has two possible values: black or white. To simulate shades of grey, a process called half-toning is used, which varies the spacing of black and white pixels.
Source: http://image.shutterstock.com/display_pic_with_logo/6049/6049,1290519588,1/stock-vector-halftone-rose-in-vector-format-65736655.jpg 
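
A minimal sketch using Pillow (the file name is a placeholder); its 1-bit conversion applies Floyd-Steinberg dithering, one way of half-toning:

  from PIL import Image

  # Convert a greyscale image to 1-bit; Pillow dithers by default,
  # simulating greys through the spacing of black and white pixels.
  img = Image.open("photo.jpg").convert("L")
  halftoned = img.convert("1")
  halftoned.save("halftone.png")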
 

8 Bit Depth of Grey and 8 Bit Depth of Colour and True Colour:
In an 8-bit deep grey-scale image every pixel has one of 256 possible shades of grey. In an 8-bit colour image, 256 possible colours can be represented per pixel. In a true-colour image, 8 bits are used to code the intensity of each of the three colours, so each pixel can represent one of more than 16 million possible colours (2 to the power of 24 = 16,777,216).

Colour Palette:
If the computer system determines the colour palette, then the same 256 system palette colours are used for every image. The fidelity of a 256-colour image is enhanced by selecting, for the palette, the 256 colours closest to the ones in the image. This is known as an adaptive palette, and it can cause problems when multiple images are displayed at the same time on a system that can only display 256 colours: the system then has to choose one palette and apply it to all the images shown.
Source: From Lecture
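
As a sketch of adaptive palettes (my example, not from the lecture; the file name is a placeholder), Pillow can quantize a true-colour image down to 256 colours chosen from the image itself:

  from PIL import Image

  # Reduce a true-colour image to 256 colours with an adaptive palette:
  # the 256 entries are chosen to best match this particular image.
  img = Image.open("photo.jpg")
  adaptive = img.quantize(colors=256)
  adaptive.save("palette256.png")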

256 Colour Palettes:
Source: From Lecture

What is Digital Image Processing?
There are four common categories for DIP operations. These are analysis, manipulation, enhancement and transformation.

Analysis operations provide information about the photometric features of an image; these include a colour count and a histogram. Manipulation operations change the content of the image; this includes flood fill and crop. Enhancement operations attempt to improve the quality of an image, for example by increasing the contrast or enhancing the edges; once more, an input image yields an output image. Transformation operations alter the image's geometry, for example rotation of the image; again, an input image yields an output image.

A Typical Digital Image Processing System:
Source: From Lecture

Analysis - The Histogram:
A common analysis operation is the histogram: a display that plots intensity levels on the horizontal axis and the number of pixels having each level on the vertical axis.
The histogram shown here is bi-modal: most pixels have either low or high intensities, and few have mid-range levels.
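
A minimal histogram computation in Python (the file name is a placeholder):

  import numpy as np
  from PIL import Image

  # One bin per grey level: hist[i] = number of pixels with intensity i.
  img = np.asarray(Image.open("photo.jpg").convert("L"))
  hist, _ = np.histogram(img, bins=256, range=(0, 256))

  # A bi-modal image shows two peaks: one near 0 (dark pixels)
  # and one near 255 (bright pixels).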

The histogram of an image that has good contrast alongside a good dynamic range will show good use of the intensity range. Meanwhile, the histogram of a low-contrast image will show restricted use of the available intensity range, and the histogram of a very high-contrast image will show that most of its pixels have either extremely low or extremely high intensities.

Transformation-Rotate:
The rotate transformation will rotate the image by 90 degrees or allow rotation by a free choice of angle. The first type of rotation is performed by remapping the pixel positions, row for column.
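
A sketch of the row-for-column remapping in Python with NumPy (my own illustration; equivalent to np.rot90):

  import numpy as np

  # Rotate an HxWx3 colour array 90 degrees anticlockwise by remapping
  # rows to columns: swap the row and column axes, then reverse the rows.
  def rotate90(img):
      return img.transpose(1, 0, 2)[::-1]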

Free rotation requires that an interpolation algorithm is employed, because rotating the image means a source pixel position may not map uniquely onto one output pixel. The interpolation works out an appropriate colour value for each pixel position in the output image.
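
A sketch using SciPy (my choice of library, not from the lecture); order=1 selects bilinear interpolation:

  from scipy import ndimage

  # img: a NumPy image array, e.g. loaded as in the histogram example.
  # Free rotation by 30 degrees: each output pixel's colour is
  # interpolated from the neighbouring input pixels.
  rotated = ndimage.rotate(img, angle=30, order=1, reshape=True)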

Manipulation-Block Fill:
Source: From Lecture

The area that is going to be changed is designated using a selection tool and is then indicated by the marquee. Pixel addresses within the selection are tested and modified by the chosen manipulation, in this case the block fill operation.
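
A minimal block fill in Python with NumPy (my own sketch):

  import numpy as np

  # Block fill: set every pixel inside the rectangular selection
  # (the marquee) to a single colour.
  def block_fill(img, top, left, bottom, right, colour):
      img[top:bottom, left:right] = colour
      return img

  # e.g. fill a block with red in an RGB image:
  # block_fill(img, 10, 20, 60, 100, (255, 0, 0))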

Enhancement-Filtering:
Filtering uses a kernel which moves over the image in small one-pixel steps. At each step, each part of the kernel multiplies the corresponding pixel value, and the products are then totalled to give the new output pixel value.
Source: From Lectures
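
A direct Python implementation of this sliding-kernel process (my own sketch; the sharpening kernel is one common example):

  import numpy as np

  # Slide a kernel over a greyscale image one pixel at a time; at each
  # position, multiply the kernel coefficients by the pixels beneath
  # them and total the products to get the output pixel value.
  def convolve(img, kernel):
      kh, kw = kernel.shape
      out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
      for y in range(out.shape[0]):
          for x in range(out.shape[1]):
              out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
      return out

  # A common 3x3 sharpening kernel.
  sharpen = np.array([[ 0, -1,  0],
                      [-1,  5, -1],
                      [ 0, -1,  0]])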

Enhancement - Example 1 (Depth):
Source: From Lecture

Enhancement - Example 2:

