Friday, October 4, 2013

Automatic Number Plate Recognition System

Automatic vehicle identification is an essential stage in intelligent traffic systems. Vehicles now play a major role in transportation, and their use has been increasing in recent years because of population growth and human needs. Controlling this growing traffic is therefore becoming a difficult problem, and automatic vehicle identification systems are used for effective control.

Automatic number plate recognition (ANPR) is a form of automatic vehicle identification. It is an image processing technology used to identify vehicles by their number plates alone. In this study, the proposed algorithm is based on extraction of the plate region, segmentation of the plate characters, and recognition of the characters.

ANPR can be used to store the images captured by the cameras as well as the text from the number plate. Systems commonly use infrared lighting to allow the camera to take the picture at any time of the day. ANPR technology tends to be region-specific, owing to plate variation from place to place.
Concerns about these systems have centered on privacy fears of government tracking citizens' movements and media reports of misidentification and high error rates. However, as they have developed, the systems have become much more accurate and reliable.

1.1 Project Idea
The growth in day-to-day vehicle traffic makes it difficult to monitor every vehicle at toll plazas, parking lots, and housing societies. We are therefore developing a system, useful to everyone, that reduces both human labour and the time involved.
The automatic number plate recognition system helps by recognizing vehicles automatically, reading their number plates, and storing the numbers in a database.

1.2 Need of the Project

Due to the increase in the number of vehicles, a lot of time is spent checking vehicles and preparing receipts at places such as housing societies, toll stations, and company parking areas. To reduce this time and labour cost, our aim is to implement a system that automatically recognizes the registration numbers of vehicles. The system can also be used for security purposes.


1.3  Literature Survey

1.3.1 Image Processing

Image processing and analysis can be defined as the "act of examining images for the purpose of identifying objects and judging their significance". Image analysts study remotely sensed data and attempt, through a logical process, to detect, identify, classify, measure, and evaluate the significance of physical and cultural objects.

In computer science, image processing is any form of signal processing for which the input is an image, such as photographs or frames of video; the output of image processing can be either an image or a set of characteristics or parameters related to the image.

Remote sensing images are recorded in digital form and then processed by computers to produce images for interpretation. Images are available in two forms: photographic film and digital. On photographic film, variations in scene characteristics appear as variations in brightness; a part of the scene reflecting more energy appears bright, while a part reflecting less energy appears dark. A digital image consists of discrete picture elements called pixels. Associated with each pixel is a number, the DN (Digital Number), which depicts the average radiance of a relatively small area within the scene. The size of this area affects the reproduction of detail within the scene: as the pixel size is reduced, more scene detail is preserved in the digital representation.
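The pixel/DN idea above can be sketched in a few lines (shown in plain Python for illustration; the 3×3 scene and its DN values are hypothetical):

```python
# A digital image as a grid of pixels, each holding a Digital Number (DN)
# that represents the average radiance of a small area of the scene.
# Hypothetical 3x3 scene: a higher DN means a brighter area.
image = [
    [ 30,  80, 200],
    [ 45, 120, 210],
    [ 20,  60, 190],
]

def mean_dn(img):
    """Average radiance (DN) over the whole scene."""
    values = [dn for row in img for dn in row]
    return sum(values) / len(values)

print(mean_dn(image))  # overall scene brightness
```

Shrinking the area each pixel covers (i.e. using more pixels for the same scene) preserves more detail, at the cost of more data.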

1.3.2 Matrix Laboratory (MATLAB)

MATLAB, which stands for MATrix LABoratory, is a state-of-the-art mathematical software package used extensively in both academia and industry. It is an interactive program for numerical computation and data visualization which, along with its programming capabilities, provides a very useful tool for almost all areas of science and engineering.

Its Image Processing Toolbox provides a comprehensive set of reference-standard algorithms and graphical tools for image processing, analysis, visualization, and algorithm development. You can restore noisy or degraded images, enhance images for improved intelligibility, extract features, analyze shapes and textures, and register images. Most toolbox functions are written in the MATLAB language, giving you the ability to inspect the algorithms, modify the source code, and create your own custom functions.


1.3.3 Algorithm

The number plate is normalized for brightness and contrast, and then the characters are segmented to be ready for OCR.
There are five primary stages that the software requires for identifying a license plate:
  1. Plate localization and preprocessing – responsible for finding and isolating the plate in the picture. In this process the captured RGB image is converted to binary format, and filtering is then carried out to eliminate unwanted noise.
  2. Plate orientation and sizing – compensates for the skew of the plate and adjusts the dimensions to the required size. In this process the plate is isolated from the image and the plate image is resized. The centroid of the first number is located and the plate is cropped accordingly.
  3. Normalization – adjusts the brightness and contrast of the image. This is carried out to make the numbers more prominent than the background.
  4. Character segmentation – finds the individual characters on the plate. In this process the individual numbers are cropped using region properties (area, centroid) and the bounding box.
  5. Character recognition using neural networks – in this process the neural network is pre-trained with a number of fonts to identify the actual cropped number images.
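Steps 1 and 4 above can be sketched as follows (plain Python for illustration; the tiny grayscale strip and the threshold of 128 are hypothetical, and a real system would segment on a cleaned, deskewed plate image):

```python
# Step 1: binarize a grayscale plate image (1 = dark/character pixel).
def binarize(gray, threshold=128):
    return [[1 if px < threshold else 0 for px in row] for row in gray]

# Step 4 (simplified): segment characters by grouping consecutive columns
# that contain foreground pixels (a column-projection approach).
def segment_columns(binary):
    width = len(binary[0])
    has_ink = [any(row[c] for row in binary) for c in range(width)]
    spans, start = [], None
    for c, ink in enumerate(has_ink):
        if ink and start is None:
            start = c
        elif not ink and start is not None:
            spans.append((start, c - 1))
            start = None
    if start is not None:
        spans.append((start, width - 1))
    return spans

# Tiny hypothetical plate strip: two "characters" separated by a blank column.
gray = [
    [0, 0, 255, 0, 0],
    [0, 0, 255, 0, 0],
]
print(segment_columns(binarize(gray)))  # [(0, 1), (3, 4)]
```

The system described above uses region properties (area, centroid) and bounding boxes rather than raw column projection, but the idea of isolating each character as a separate sub-image is the same.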


The complexity of each of these subsections determines the accuracy of the system. During the third phase (normalization), this system uses the region-properties (centroid) technique to detect black portions of the picture. A median filter may also be used to reduce visual noise.
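The median filter mentioned above can be sketched as follows (plain Python, 3×3 window; in MATLAB the equivalent operation is `medfilt2`):

```python
import statistics

def median_filter_3x3(img):
    """Replace each interior pixel with the median of its 3x3 neighbourhood,
    which suppresses impulse ("salt and pepper") noise."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # border pixels copied unchanged
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = [img[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            out[r][c] = statistics.median(window)
    return out

# A single bright noise pixel in a flat region is removed:
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter_3x3(noisy)[1][1])  # 10
```

Unlike mean filtering, the median preserves character edges while removing isolated noise pixels, which is why it suits the plate-preprocessing stage.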

Thursday, October 3, 2013

IRIS RECOGNITION USING MATLAB

Introduction
Iris recognition is a method of biometric authentication that uses pattern-recognition techniques based on high-resolution images of the irises of an individual's eyes. Not to be confused with retina scanning, a less prevalent ocular-based technology, iris recognition uses camera technology with subtle infrared illumination to reduce specular reflection from the convex cornea, creating images of the detail-rich, intricate structures of the iris. These unique structures, converted into digital templates, provide mathematical representations of the iris that yield unambiguous positive identification of an individual.
Iris recognition efficacy is rarely impeded by glasses or contact lenses, and iris technology has the smallest outlier group (those who cannot use or enrol) of all biometric technologies. It is the only biometric authentication technology designed for use in a one-to-many search environment, and a key advantage is its stability, or template longevity: barring trauma, a single enrolment can last a lifetime.
Breakthrough work to create the iris recognition algorithms required for image acquisition and one-to-many matching was pioneered by John G. Daugman, who holds key patents on the method. Daugman's algorithms are the basis of almost all currently (as of 2006) commercially deployed iris-recognition systems. They have a so-far-unmatched practical false-accept rate of zero; that is, there is no known pair of images of two different irises that the Daugman algorithm in its deployed configuration mistakenly identifies as the same.
We will use the iris authentication technique to control the hardware in our project. The hardware can be an electro-mechanical lock, a generator, an access panel, an ATM, etc.



IRIS AUTHENTICATION STEPS:
  • Grayscale Conversion
  • NTSC Weighted Averaging Conversion
  • RGB Averaging Conversion
  • Segmentation
  • Sharpening
  • Threshold
  • Edge Detection (Horizontal, Vertical)
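The first three steps above are different ways of reducing an RGB pixel to a single grayscale value. A minimal sketch of the two averaging variants (the pixel value is hypothetical):

```python
# NTSC weighted averaging uses the standard luma weights, which account for
# the eye's differing sensitivity to red, green, and blue.
def ntsc_gray(r, g, b):
    """NTSC/luma weighted average: Y = 0.299 R + 0.587 G + 0.114 B."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Plain RGB averaging weights the three channels equally.
def mean_gray(r, g, b):
    return (r + g + b) / 3

pixel = (200, 100, 50)       # hypothetical RGB value
print(ntsc_gray(*pixel))     # approximately 124.2
print(mean_gray(*pixel))     # approximately 116.7
```

The weighted version tends to give grayscale images that better match perceived brightness, which helps the later thresholding and edge-detection steps.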

NORMALIZATION

DAUGMAN’S RUBBER SHEET MODEL
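Daugman's rubber sheet model remaps the iris region onto a dimensionless polar grid: each sample point is a linear blend between the pupil boundary and the outer iris boundary, x(r, θ) = (1 − r)·x_p(θ) + r·x_s(θ), and similarly for y. A minimal sketch, assuming circular boundaries (the centres and radii below are hypothetical):

```python
import math

def rubber_sheet_point(r, theta, pupil, iris):
    """Map normalised coordinates (r in [0, 1], theta) to image coordinates.

    pupil and iris are (cx, cy, radius) triples; the returned point is a
    linear blend between the pupil boundary and the outer iris boundary.
    """
    (pcx, pcy, pr), (icx, icy, ir) = pupil, iris
    xp = pcx + pr * math.cos(theta)   # point on the pupil boundary
    yp = pcy + pr * math.sin(theta)
    xs = icx + ir * math.cos(theta)   # point on the outer iris boundary
    ys = icy + ir * math.sin(theta)
    return ((1 - r) * xp + r * xs, (1 - r) * yp + r * ys)

# Concentric circles at the origin, pupil radius 20, iris radius 60:
# halfway out (r = 0.5) along theta = 0 lands at x = 40.
print(rubber_sheet_point(0.5, 0.0, (0, 0, 20), (0, 0, 60)))  # (40.0, 0.0)
```

Sampling this mapping over a fixed grid of (r, θ) values produces a normalized iris strip of constant size, regardless of pupil dilation or camera distance.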


FEATURE EXTRACTION
Feature Matching
IRIS Template Bit Pattern Generation (Please note that we won't be using Gabor filters for this, but rather histogram filters)


Bit Pattern Comparison
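Bit-pattern comparison between two iris templates is conventionally done with a normalised Hamming distance (the fraction of disagreeing bits); in Daugman-style systems a distance well below 0.5 indicates the same iris. A minimal sketch (the short bit strings here are hypothetical; real templates are far longer):

```python
def hamming_distance(a, b):
    """Fraction of positions where the two bit strings disagree."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

enrolled = "1011001110100101"   # hypothetical stored template
probe    = "1011001010100111"   # hypothetical freshly captured template
print(hamming_distance(enrolled, probe))  # 0.125
```

Two templates from different irises disagree on roughly half their bits, so a small distance like the one above would be taken as a match.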

Result

Device Control Using Hand-Gesture Recognition

Project Definition:  
This project aims at implementing real-time gesture recognition. The primary goal is to create a system that can identify human-generated gestures and use this information for device control.

The user performs a gesture in front of a camera linked to the computer. The picture of the gesture is then processed to identify the gesture indicated by the user. Once the gesture is identified, the corresponding control action assigned to it is actuated.

Scope: 
In this system, we basically recognize hand gestures through software and these hand gestures will be used for controlling certain software as well as hardware devices.

Objective:
·        Provide a more natural human-computer interface.
·        Provide physically challenged users a better way to interact with computers.
·        Interface the user with software.
·        Let the user give input without a keyboard or mouse, i.e. in the form of different body gestures.
·        Be helpful for handicapped people.
·        Find the directional gesture vector.
·        Use both direction and count for hardware control.
E.g. Advanced Hardware Control, Robot Control.

Relevant theory:
We propose a fast algorithm for automatically recognizing a limited set of gestures from hand images for a robot control application. Hand gesture recognition is a challenging problem in its general form. The algorithm is invariant to translation, rotation, and scale of the hand.

We demonstrate the effectiveness of the technique on real imagery. Vision-based automatic hand gesture recognition has been a very active research topic in recent years, with motivating applications such as human-computer interaction (HCI), robot control, and sign language interpretation. The general problem is quite challenging due to a number of issues, including the complicated nature of static and dynamic hand gestures, complex backgrounds, and occlusions. Attacking the problem in its generality requires elaborate algorithms demanding intensive computing resources. What motivates us in this work is a robot navigation problem, in which we are interested in controlling a robot by hand pose signs given by a human. Due to real-time operational requirements, we are interested in a computationally efficient algorithm.

Early approaches to the hand gesture recognition problem in a robot control context involved the use of markers on the fingertips. An associated algorithm is used to detect the presence and color of the markers, through which one can identify which fingers are active in the gesture.
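The marker-detection step described above can be sketched as a simple colour threshold (plain Python; the red-marker thresholds and the tiny frame below are hypothetical, and a real system would use a calibrated colour range):

```python
def find_marker_pixels(img, min_r=200, max_g=80, max_b=80):
    """Return (row, col) positions whose colour matches a red fingertip marker:
    a strong red channel with weak green and blue channels."""
    hits = []
    for r, row in enumerate(img):
        for c, (R, G, B) in enumerate(row):
            if R >= min_r and G <= max_g and B <= max_b:
                hits.append((r, c))
    return hits

# Hypothetical 2x2 RGB frame with one marker-coloured pixel at (0, 1):
frame = [
    [(30, 30, 30), (220, 40, 35)],
    [(28, 200, 30), (25, 25, 230)],
]
print(find_marker_pixels(frame))  # [(0, 1)]
```

Counting and locating the marker hits tells the system which fingers are raised, from which the gesture (and hence the control action) can be inferred.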
