Implementation of Edge Detection Using FPGA & Model Based Approach
Prof.(Dr.) P.K Dash
Department of Electrical Engineering
Institute of Technical Education & Research, Khandagiri,
BBSR-751030, Odisha, India.
Phone No: +91-674-2350181
pkdas@iter.ac.in

Prof. Shashank Pujari, Miss Sofia Nayak
Department of Embedded System Design, Sambalpur University Institute of Information Technology, Jyoti Vihar, Burla-768019, Odisha, India.
Phone No: +91-9861968757
Phone No: +91-8658389595
sspujari@suiit.ac.in
sofia.nayak@suiit.ac.in
Abstract—This paper describes the implementation of the Sobel and Prewitt approaches for edge detection in video and image processing applications using an FPGA and a Model Based Approach. The main theme behind this work is to understand the video and image processing techniques that are used for edge detection on video and images. For the Model Based Approach, it explains edge detection of video taken directly from a webcam and of video read from the hard drive. Simulink is the key component of the Model Based Approach; the edge detection model is built using the image processing block sets from the Simulink library of MATLAB. For the FPGA implementation, the entire video image edge detection algorithm is implemented on a Cyclone II FPGA device using the Altera DE2 FPGA kit. The input video or image comes from an NTSC/PAL camera and the edge-detected images are displayed on a VGA monitor. The functional implementation of the whole process in VHDL has been compiled with the Altera Quartus-II software tool.
Keywords—FPGA; Cyclone-II; Simulink; VHDL; edge detection; Altera.
I. INTRODUCTION
Computationally intensive DSP applications such as image processing are widely used in embedded systems for many purposes, such as object detection, space exploration, security and video surveillance. While technology is incorporating more object detection and recognition into devices, the heart of all these systems starts with edge detection. Some of the various systems that use edge detection for object detection are facial recognition, lane detection warning, lost object detectors, and more. [1, 4]
This paper presents the implementation of the Prewitt and Sobel approaches for edge detection using an FPGA and a Simulink Model Based Approach.
When one first learns how to create algorithms for edge detection, a Simulink model should be developed to fully understand how edge detection works. Since there is no easy way to program an FPGA system directly, the best way is to first create the edge detection algorithm in Simulink. By modeling in Simulink one can understand how edge detection works on live video and how to perform other operations. [1]
Video and image processing typically require very high computational power. Given the increasing processing demands, the parallel processing capabilities of Field Programmable Gate Arrays (FPGAs) make them an attractive implementation option for the highly repetitive tasks found in video and imaging functions. [2]
Image processing is a type of signal processing in which an image, or information regarding an image, is fed as the input signal and various operations are performed on it. These operations serve a host of applications such as image filtering, medical imaging, wireless communication, image compression and computer vision. Some of the most common operations on an image that come under the canopy of image processing are image scaling, conversion between various color formats, image rotation, noise removal, noise addition, filtering, blurring, edge detection and contour detection. Some combination of these algorithms is used in almost all image processing applications. [3]
Implementing image processing algorithms on reconfigurable hardware minimizes time-to-market cost, enables rapid prototyping of complex algorithms and simplifies debugging and verification. Therefore, FPGAs are an ideal choice for the implementation of real-time image processing algorithms. [4]
In the FPGA implementation, the entire video image-processing algorithm is implemented on an FPGA board (Altera DE2 board). The main aim behind the FPGA implementation is to reach minimum timing through maximum utilization of resources. The functional implementation of all processes is done using the Altera Quartus-II tool.
Section-II describes the background on edge detection; Section-III describes the FPGA implementation; Section-IV describes the Model Based Approach; Section-V and Section-VI present the Conclusion and Future Scope.
II. BACKGROUND ON EDGE DETECTION
Edge detection is the process of localizing pixel intensity transitions. Edge detection is used in segmentation, motion analysis, object recognition, target tracking and many more applications. Therefore, edge detection is one of the significant techniques in the field of image processing. The most well-known technique for edge detection is gradient-based. The gradient method locates edges by finding the maxima and minima in the first derivative of the image. [4]
The basic edge detection operator is a matrix-area gradient operation that determines the level of variance between different pixels. The operator is evaluated by forming a matrix centered on a pixel chosen as the center of the matrix area. If the value computed over this matrix area is above a given threshold, the middle pixel is classified as an edge.
Examples of gradient-based edge detectors are the Roberts, Prewitt and Sobel operators. All the gradient-based algorithms have kernel operators that calculate the strength of the slope in directions that are orthogonal to each other, generally horizontal and vertical. The contributions of the different components of the slope are then combined to give the total value of the edge strength. [2]
A. Prewitt Operator Approach
The Prewitt operator measures two components: the vertical edge component is calculated with kernel Kx and the horizontal edge component is calculated with kernel Ky, as shown in Fig-1. |Kx| + |Ky| gives an indication of the intensity of the gradient at the current pixel.

Kx:              Ky:
-1  0  1          1  1  1
-1  0  1          0  0  0
-1  0  1         -1 -1 -1

Fig1. Prewitt Vertical & Horizontal Operator
B. Sobel Operator Approach
The Sobel operator is similar to the Prewitt operator. The difference is that the Sobel operator assigns a higher weight to pixels located at shorter distances from the middle pixel; the kernels are shown in Fig-2.
Kx:              Ky:
-1  0  1          1  2  1
-2  0  2          0  0  0
-1  0  1         -1 -2 -1
Fig2. Sobel Vertical & Horizontal Operator
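As a functional illustration of these kernel operations (not the paper's VHDL implementation), the following MATLAB sketch convolves a grayscale image with the Prewitt kernels of Fig-1 and combines the two slope components into an edge map; the demo image and the threshold value are assumptions made for the example.

% Illustrative MATLAB sketch of gradient-based edge detection (not the VHDL used on the FPGA).
% The demo image 'peppers.png' ships with MATLAB; the threshold 0.5 is an arbitrary assumption.
I  = im2double(rgb2gray(imread('peppers.png')));

Kx = [-1 0 1; -1 0 1; -1 0 1];   % Prewitt vertical-edge kernel (Fig-1)
Ky = [ 1 1 1;  0 0 0; -1 -1 -1]; % Prewitt horizontal-edge kernel (Fig-1)
% For Sobel, the centre row/column weights would be 2 instead of 1 (Fig-2).

Gx = conv2(I, Kx, 'same');       % slope in the horizontal direction
Gy = conv2(I, Ky, 'same');       % slope in the vertical direction

G  = abs(Gx) + abs(Gy);          % |Kx| + |Ky| combination of the two components
E  = G > 0.5;                    % pixels above the threshold are classified as edges
imshow(E)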
III. FPGA IMPLEMENTATION
Fig-3 Video Processing Setup Based on Altera DE2 Board
Initially the video processing setup was understood and analyzed using the available Altera DE2 kit and its existing video processing VHDL modules implemented in the Cyclone-II FPGA. After that, the design of the Prewitt and Sobel algorithms in VHDL was carried out. Synthesis, simulation, implementation and testing were then performed, and finally the design was included in the video processing chain of the Fig-3 setup. Here the input video or image comes from an NTSC/PAL camera, is processed on the Altera DE2 board according to the loaded code, and the output, an edge-detected image, is displayed on the VGA monitor.

A. FPGA Based Design Methodology
In this video processing chain the video data source is an NTSC camera, whose signal is given to the TV decoder, which digitizes the analog signal. The digitized output is given as input to the itu656 decoder, which extracts the signals from the TV decoder and performs serial-to-parallel conversion of the digitized input video signal. This output in itu-656 format is given as input to the YCrCb converter, whose YCrCb output is passed to a dual-port line buffer that converts the interlaced signal into a de-interlaced signal. The de-interlaced signal is given to the YCrCb to RGB converter, which produces the RGB form of the signal. The RGB signal is applied to the edge detector block. The output of this filter is the filtered image, which is given to the DAC, which converts the digital input into analog form. The output of the DAC is fed to a VGA monitor of 640*480 resolution, on which the filtered image can be viewed. [4]
B. Functional Modules in FPGA
Description:
The block diagram shows the data flow path. The major blocks in this methodology are the itu_r656_decoder, Dual Port Line Buffer, HsyncX2, YCrCb2RGB, VGA Timing Generator and the edge detection filter. The figure also shows the TV Decoder (ADV7181) and the VGA DAC (ADV7123) chips used. The register values of the TV Decoder chip are used to configure the TV decoder via the I2C_AV_Config block, which uses the I2C protocol to communicate with the TV Decoder chip. The itu_656_decoder block extracts YCrCb (4:4:4) video signals from the 4:2:2 data source sent from the TV Decoder. It also generates a 13.5 MHz pixel clock (Pixel Clock) with blanking signals indicating the valid period of data output. Because the video signal from the TV Decoder is interlaced, de-interlacing must be performed on the data source. The Dual Port Line Buffer block and the Hsyncx2 block perform the de-interlacing operation, where the pixel clock is changed from 13.5 MHz to 27 MHz and the Hsync is changed from 15.7 kHz to 31.4 kHz. Internally, the Dual Port Line Buffer uses a 1 Kbyte dual-port SRAM to double the YCrCb data rate (the Y x 2, Cr x 2 and Cb x 2 signals in the block diagram). Finally, the YCrCb2RGB block converts the YCrCb x 2 data into RGB output. The VGA Timing Generator block generates the standard VGA sync signals VGA_HS and VGA_VS to enable the display on a VGA monitor. The video selection block is used to select the display pattern: external video, internal pattern, filtered video or unfiltered video.
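As a hedged functional reference for the YCrCb2RGB block described above (the board implements this step in fixed-point VHDL, not MATLAB), the conversion can be expressed with the usual ITU-R BT.601 coefficients:

% Functional MATLAB reference for the YCrCb2RGB block (illustrative only; the FPGA uses fixed-point VHDL).
% Assumes 8-bit, studio-range BT.601 YCbCr input as produced by the TV decoder chain.
function rgb = ycrcb_to_rgb(y, cb, cr)
    y  = double(y)  - 16;                 % remove the luma offset
    cb = double(cb) - 128;                % centre the chroma components
    cr = double(cr) - 128;
    r  = 1.164*y + 1.596*cr;              % BT.601 conversion coefficients
    g  = 1.164*y - 0.392*cb - 0.813*cr;
    b  = 1.164*y + 2.017*cb;
    rgb = uint8(min(max(cat(3, r, g, b), 0), 255));   % clamp to the 8-bit range
end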
Fig.4 Selection mode for edge detection
C. VHDL Code for Prewitt and Sobel edge detection algorithm:
--Prewitt edge detection
X = -1*a1 + 1*a3 - 1*a4 + 1*a6 - 1*a7 + 1*a9
  = (a3 + a6 + a9) - (a1 + a4 + a7)
Y = 1*a1 + 1*a2 + 1*a3 - 1*a7 - 1*a8 - 1*a9
  = (a1 + a2 + a3) - (a7 + a8 + a9)
--Sobel edge detection
X = -1*a1 + 1*a3 - 2*a4 + 2*a6 - 1*a7 + 1*a9
  = (a3 + 2*a6 + a9) - (a1 + 2*a4 + a7)
Y = 1*a1 + 2*a2 + 1*a3 - 1*a7 - 2*a8 - 1*a9
  = (a1 + 2*a2 + a3) - (a7 + 2*a8 + a9)
The magnitude at the center pixel of the matrix is replaced by √(X² + Y²), which would normally require the square-root function IP of the synthesis tool vendor. To reduce logic resources, the square root is approximated by the expression [0.9 * max(|X|, |Y|) + 0.4 * min(|X|, |Y|)]. This approximation works well for non-critical video image processing applications.
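The quality of this approximation can be checked quickly in MATLAB; in the sketch below the gradient values are arbitrary example numbers.

% Quick numeric check of the square-root approximation used to avoid a square-root IP core.
% X and Y are arbitrary example gradient values.
X = 120; Y = 45;
exact  = sqrt(X^2 + Y^2);                                     % true gradient magnitude
approx = 0.9*max(abs(X), abs(Y)) + 0.4*min(abs(X), abs(Y));   % approximation used on the FPGA
fprintf('exact = %.1f, approx = %.1f, error = %.1f%%\n', ...
        exact, approx, 100*abs(exact - approx)/exact);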
D. Compilation Report of Quartus-II Tool
Fig-5 Compilation report of Quartus-II tool for edge detection
E. Testing on Target
Fig-6 Program download console
Fig-7 Experimental setup for implementation of edge detection on Altera DE2 Board. Input is from an NTSC/PAL camera and the output is on a VGA monitor.
F. Result and Observation
Fig.8 The input before applying edge detection algorithm.
Fig.9 The output after applying edge detection algorithm
The edge detection algorithm was implemented in the Cyclone-II FPGA device EP2C35F672C8 using the Altera Quartus-II tool. The device utilization summary is given in Table-1.
TABLE-1. THE UTILIZED FPGA HARDWARE RESOURCES.

Name                        Used     Available   Percentage
Logic Elements              2608     33216       8%
Combinational Functions     2438     33216       7%
Dedicated Logic Registers   1038     33216       3%
Registers                   1038     -           -
Pins                        0        -           -
Memory Bits                 49152    483840      10%
Embedded Multipliers        9        70          11%
PLLs                        1        4           25%

From the above list it can be seen that the video edge detection system needs only a small part of the rich FPGA hardware resources. The system can therefore also be used for more complex video image processing algorithms, pattern recognition, etc.
IV. MODEL BASED APPROACH
Simulink is a software package for modeling, simulating, and analyzing dynamical systems. It supports linear and nonlinear systems, modeled in continuous time, sampled time, or a hybrid of the two. Systems can also be multi-rate, i.e., have different parts that are sampled or updated at different rates. For modeling, Simulink provides a graphical user interface (GUI) for building models as block diagrams, using click-and-drag mouse operations. With this interface, one can draw models just as one would with pencil and paper. Simulink includes a comprehensive block library of sinks, sources, linear and nonlinear components, and connectors. One can also customize and create one's own blocks. [6]
After defining a model, one can simulate it, using a choice of integration methods, either from the Simulink menus or by entering commands in MATLAB's command window. The menus are particularly convenient for interactive work, while the command-line approach is very useful for running a batch of simulations. Using scopes and other display blocks, one can see the simulation results while the simulation is running. In addition, one can change parameters and immediately see what happens, for "what if" exploration. The simulation results can be put in the MATLAB workspace for post-processing and visualization. [6]
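For example, a small batch of simulations can be launched from a script; the model name 'edge_detect_model' below is only a placeholder, not a model supplied with this paper.

% Hypothetical example of the command-line workflow; 'edge_detect_model' is a placeholder name.
for stopTime = [1 5 10]                          % run the same model for several stop times
    out = sim('edge_detect_model', 'StopTime', num2str(stopTime));
    disp(out)                                    % results land in the workspace for post-processing
end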
Running edge detection on live video can be accomplished by using a video from the hard drive or by using another source such as a webcam. These are explained in the following sections.
A. Model-1: Simulink model of edge detection based on video from the hard drive
To use a video from the hard drive, the block needed is "From Multimedia File". This block is located in the Sources library. In the following example the video taken from the hard drive is "viptrain.avi"; it is selected by double-clicking on "From Multimedia File" and choosing "viptrain.avi" as the input video. To display the output, three Video Viewer blocks are required. To apply the edge detection algorithm, an Edge Detection block is required, in which either the Sobel or the Prewitt algorithm can be chosen by double-clicking on it. A Color Space Conversion block is required to convert R'G'B' to intensity (intensity is gray scale), because for edge detection the color video first has to be converted to a grayscale image. (A plain-MATLAB equivalent of this model is sketched after the block list below.)
Required Blocks:
1. From Multimedia File [viptrain.avi]
2. Edge Detection
3. Color Space Conversion
4. Video Viewer [three]
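For reference, the same chain can be reproduced as a plain MATLAB script; this is not the Simulink model itself, and VideoReader stands in for the "From Multimedia File" block.

% Script equivalent of Model-1 (illustrative; the paper builds this as a Simulink model).
% viptrain.avi is a demo video shipped with the Computer Vision Toolbox.
v = VideoReader('viptrain.avi');
while hasFrame(v)
    rgb  = readFrame(v);           % "From Multimedia File"
    gray = rgb2gray(rgb);          % "Color Space Conversion": R'G'B' to intensity
    bw   = edge(gray, 'sobel');    % "Edge Detection" block with the Sobel method
    imshow(bw)                     % "Video Viewer"
    drawnow
end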
Fig.10 Simulink model of edge detection. Video is taken from hard drive.
In the above model example, the From Multimedia File block is connected as input to the Color Space Conversion block. The Color Space Conversion block is then connected to the Edge Detection block, where the Sobel edge detection algorithm is chosen. After that, three Video Viewer blocks are used, as shown in Fig.10.
Result and Observation:
Fig.11 Various stages of the result of the edge-detected video output taken from the hard drive. Three images are shown: original, RGB to intensity, and edge-detected image.
B. Model-2: Simulink model of edge detection based on video from webcam
To use a video from the webcam, a block called "From Video Device", located in the Image Acquisition Toolbox, is used. This block will find the appropriate device that will capture live video. The computer being used is equipped with a webcam, so that is the default device.
To get the "From Video Device" block into the design, click and hold on the block in the library, then drag it into the blank document. The next block needed is an output block, called "To Video Display" or "Video Viewer". This will output the final video signal onto a viewable screen that pops up once the model runs. This block is located in the Video and Image Processing library.
Double click on "From Video Device" and change the parameters as follows:
a. Device: Winvideo1 (Laptop Integrated Webcam)
b. Video format: YUY2_640X480
c. Port mode: One Multidimensional Signal
To implement edge detection, the color video first has to be converted to grayscale. To convert the video, it first has to be converted from Y'CbCr to R'G'B' and then from R'G'B' to intensity (intensity is grayscale).
Both of these conversions can be done using two instances of the same block, called Color Space Conversion. This block is located in the Conversions library. After the color conversion blocks, an Edge Detection block is needed; it is located in the Analysis and Enhancement library. By double-clicking on the Edge Detection block one can apply either the Sobel or the Prewitt edge detection algorithm. The output of the Edge Detection block is then connected to the Video Viewer block.
In the given example, three Video Viewers are used, as shown in Fig.13. (A command-line equivalent of this model is sketched after the block list below.)
Required Blocks:
1. From Video Device [YUY2_640X480]
2. Edge Detection [Sobel]
3. Color Space Conversion [two] [Y'CbCr to R'G'B'] [R'G'B' to Intensity]
4. Video Viewer [three]
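A command-line equivalent using the Image Acquisition Toolbox is sketched below; the device index and the YUY2_640x480 format mirror the block parameters above but are machine-dependent assumptions.

% Script equivalent of Model-2 (illustrative; the paper builds this as a Simulink model).
% The 'winvideo' adaptor, device index 1 and YUY2_640x480 format are machine-dependent assumptions.
vid = videoinput('winvideo', 1, 'YUY2_640x480');   % "From Video Device" block
for k = 1:100                                      % grab a fixed number of frames for this demo
    ycbcr = getsnapshot(vid);                      % frames arrive as Y'CbCr for the YUY2 format
    rgb   = ycbcr2rgb(ycbcr);                      % first Color Space Conversion block
    gray  = rgb2gray(rgb);                         % second Color Space Conversion block
    bw    = edge(gray, 'sobel');                   % Edge Detection block (Sobel)
    imshow(bw)                                     % "Video Viewer" / "To Video Display"
    drawnow
end
delete(vid)                                        % release the acquisition device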
Fig.13 Simulink model for Edge detection of webcam video.
C. Result and Observation