combinations of edge statistics and morphology techniques. This algorithm achieves a 99.6 percent detection rate on 9,825 images, assuming that the license plate frame's edges are clear and horizontal. Moreover, this method of extracting characters from the binary image to define the number plate region is time-consuming because it processes all the binary objects. Furthermore, it gives incorrect results if there is other text in the image.
Greyscale images are images in which each pixel carries only a single value: its intensity. They are also known as black-and-white or monochrome images, since they appear mostly in shades of grey; the intensity is scaled so that black has the lowest value and white the highest. We first convert a color image into a greyscale image using the standard luminance expression:

R = 0.299 p_R + 0.587 p_G + 0.114 p_B

where R is the greyscaled image and p_R, p_G, p_B are the red, green, and blue channels of the color image p.
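As a minimal sketch, this conversion can be written in Python with NumPy (the weights are the standard luminance coefficients, assumed here rather than taken from the paper):

```python
import numpy as np

def to_greyscale(p):
    """Convert an RGB color image p (H x W x 3) to a greyscale image R
    as a weighted sum of the red, green, and blue channels."""
    return 0.299 * p[..., 0] + 0.587 * p[..., 1] + 0.114 * p[..., 2]
```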
Color processing is a fundamental step in image processing, and in plate recognition in particular, since most countries fix norms for plate colors and numbers; in India, for example, vehicles must display black letters on a white background. But due to poor lighting conditions and plate location, the raw output is not reliable, which is why we need color processing to retrieve the characters accurately and with greater efficiency.
V. RELATED WORK
A. Adaptive Thresholding
Before proceeding with thresholding, the images must be converted to greyscale. Thresholding is performed to create a binary image. In adaptive thresholding, a threshold value is calculated for each pixel; the pixel is replaced with black if its value is less than the threshold, or with white if its value is greater. The threshold is calculated from the local mean of pixel intensities in a window of m × n pixels:
O(x, y) = 255 if I(x, y) > T(x, y), and 0 otherwise,

where I and O are the input and output images respectively, and T(x, y) is the local mean over the m × n window centred on (x, y). The window size parameters, m and n, are chosen based on the character size in the region.
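A minimal (unoptimised) sketch of this local-mean adaptive thresholding in Python; the default window size is a hypothetical choice and should be tuned to the character size:

```python
import numpy as np

def adaptive_threshold(I, m=15, n=15):
    """Binarise greyscale image I: a pixel becomes 255 (white) when it
    exceeds the local mean T of its m x n window, else 0 (black)."""
    H, W = I.shape
    O = np.zeros((H, W), dtype=np.uint8)
    for y in range(H):
        for x in range(W):
            # Clip the window at the image borders.
            y0, y1 = max(0, y - m // 2), min(H, y + m // 2 + 1)
            x0, x1 = max(0, x - n // 2), min(W, x + n // 2 + 1)
            T = I[y0:y1, x0:x1].mean()  # local mean threshold
            O[y, x] = 255 if I[y, x] > T else 0
    return O
```

In practice a box filter or an integral image is used to compute the local means, which avoids the per-pixel window loop.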
B. CONTRAST EXTENSION
To expand the contrast of the image we perform histogram equalization. The contrast extension process increases the sharpness of the image. The grey-level histogram of an image is the distribution of its grey values. Histogram equalization is a popular method for improving the appearance of an image that has very poor contrast. The process is divided into four steps: (i) sum the histogram values cumulatively; (ii) divide these values by the total number of pixels to normalize them; (iii) multiply the normalized values by the highest grey-level value; (iv) map each pixel to its new grey level.
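The four steps above can be sketched in a few lines of NumPy, assuming an 8-bit greyscale input (0 to 255):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization in the four steps described above."""
    hist = np.bincount(img.ravel(), minlength=256)  # grey-level histogram
    cdf = hist.cumsum()                             # (i) cumulative sum
    cdf = cdf / img.size                            # (ii) normalise by pixel count
    lut = np.round(cdf * 255).astype(np.uint8)      # (iii) scale by max grey level
    return lut[img]                                 # (iv) remap every pixel
```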
C. MEDIAN FILTERING
The median filter is used to remove unwanted noise from the image. In this method a 3×3 window is passed over the image; its dimensions can be adjusted according to the noise level.
The process involves: (i) choosing one pixel of the 3×3 window as the centre pixel; (ii) taking the eight surrounding pixels as its neighbourhood; (iii) sorting these nine pixels from smallest to largest; (iv) assigning the fifth (median) element to the centre pixel; (v) applying this procedure to all pixels in the plate image.
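A direct sketch of these five steps for the interior pixels of an image (border pixels are left unchanged for simplicity):

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter: for each interior pixel, sort the nine
    neighbourhood values and take the fifth (the median) as the
    new centre value."""
    out = img.copy()
    H, W = img.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            window = img[y - 1:y + 2, x - 1:x + 2].ravel()
            out[y, x] = np.sort(window)[4]  # fifth element of nine
    return out
```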
VI. CHARACTER SEGMENTATION
By using the regionprops function of MATLAB, the characters of the resulting number plate region are segmented; regionprops returns the smallest bounding box that contains each character. This method is used to obtain the bounding boxes of all characters in the number plate.
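A rough Python analogue of what regionprops' bounding-box property computes is sketched below (this is an illustrative re-implementation, not the paper's MATLAB code): label each 4-connected white region of the binary plate image and return the smallest box enclosing it.

```python
import numpy as np
from collections import deque

def bounding_boxes(binary):
    """Return (top, left, bottom, right) for each 4-connected white
    region of a binary image, via breadth-first search."""
    H, W = binary.shape
    seen = np.zeros((H, W), dtype=bool)
    boxes = []
    for sy in range(H):
        for sx in range(W):
            if binary[sy, sx] and not seen[sy, sx]:
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                t, l, b, r = sy, sx, sy, sx
                while q:
                    y, x = q.popleft()
                    # Grow the box to cover this component pixel.
                    t, l = min(t, y), min(l, x)
                    b, r = max(b, y), max(r, x)
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < H and 0 <= nx < W \
                                and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                boxes.append((t, l, b, r))
    return boxes
```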
VII. FEATURE EXTRACTION
In the feature extraction process we find, mark, and save all the features of the segmented number plate. To recognize the characters in number plate images we use the zonal density feature. In the zonal density method the image is divided into zones, and the object pixels in each zone are counted; the density of a zone is its total count of object pixels. The number of zones in the image equals the number of features acquired. For 16 zonal density features we divide a 32×32 image into 16 zones, so that the image yields 16 features. To be divided evenly into 16, 64, 128, or 256 zones, the image should be 32×32 pixels.
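The zonal density computation can be sketched as follows for a square grid of zones (a 4×4 grid on a 32×32 character gives the 16 features mentioned above):

```python
import numpy as np

def zonal_density(binary, zones_per_side=4):
    """Split a square binary character image into zones_per_side^2
    equal zones and count the object pixels in each zone, giving
    one density feature per zone."""
    size = binary.shape[0]
    step = size // zones_per_side  # zone edge length in pixels
    features = []
    for zy in range(zones_per_side):
        for zx in range(zones_per_side):
            zone = binary[zy * step:(zy + 1) * step,
                          zx * step:(zx + 1) * step]
            features.append(int(zone.sum()))
    return features
```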