Real-time Image Acquisition and Processing System Design Based on DSP
Kuang Hang
Department of Computer Science, Chongqing Education College
kh.
Abstract—In this paper, the software algorithms for real-time video image acquisition, processing and identification are studied and discussed in depth. Drawing on the literature, an image acquisition and processing system with a DSP processor as its core is presented to achieve real-time video image acquisition, processing and identification. According to the characteristics of the target cave image, pre-processing algorithms including image threshold segmentation, edge detection and image rotation are implemented on the experimental platform of the DSP C6000 compiling environment, and the experimental results are satisfactory. After threshold segmentation, edge detection and the other pre-processing steps, a skeleton algorithm based on the geometric structure of the cave is proposed to identify the center of the target cave.
Keywords-DSP control; image acquisition; image processing
I. INTRODUCTION
The DSP chip, also known as a digital signal processor, is a type of microprocessor particularly suited to digital signal processing operations; its main role is to execute a variety of real-time digital signal processing algorithms rapidly. To achieve this, DSP chips generally use special hardware and software structures. An image is a two-dimensional function generated by observing (more precisely, perceiving) a scene; it depends on the observation geometry (namely, the projection geometry between the scene and the sensor), the scene lighting and its nature, and the sensor characteristics (such as focus, frequency response and geometric properties). In this paper, the design and implementation of the DSP-based image acquisition and processing system, and its image processing algorithms, are described in detail.
II. DSP-BASED REAL-TIME ACQUISITION AND PROCESSING SYSTEM COMPONENTS
The DSP-based real-time image acquisition and processing system is shown in the chart below. The experimental system mainly consists of five parts, namely: a CCD camera, a TMS320DM643 experimental box, a power supply, a computer, and the emulator.
Figure 1. Experimental system component chart
TABLE I. BASIC PARAMETERS OF EACH PART IN THE EXPERIMENTAL SYSTEM

Component name            Basic parameters
PC                        Processor: Sempron 3100+; memory: 512 MB DDR
DM643 experimental board  Core clock up to 600 MHz; external bus
                          clock up to 133 MHz
CCD camera                Type: OC-1350D; camera component:
                          SHARP 1/3" CCD; display: PAL format,
                          720 pixels per line, 576 lines per frame
JTAG emulator             SEED-XDSusb2.0
Power supply              +12 V (CCD camera); +5 V (experimental board)
III. VIDEO IMAGE PROCESSING ALGORITHM IMPLEMENTATION
A. Image threshold segmentation technology
The threshold segmentation algorithm achieves image segmentation through detection of the target area, since the aim of image segmentation is to extract the target region. Threshold segmentation is a representative and very important algorithm among region segmentation algorithms. Its basic approach is: first select a gray threshold from the gray-value range of the image, then apply that threshold to every pixel in the image. Generally speaking, the two sets of pixels produced by thresholding belong to different regions, which makes region segmentation of the image by this threshold possible.
978-1-4244-5586-7/10/$26.00 © 2010 IEEE
Choosing the right threshold is the key to successful threshold segmentation. The choice can be made interactively, or determined by a threshold detection method. The basic idea of threshold segmentation is to determine a threshold, compare it with the value of each pixel, and, according to the comparison result, assign the pixel to one of two classes: background or foreground. General threshold segmentation can thus be divided into three steps: first, determine the threshold; second, compare it with each pixel value; third, classify the pixels. Among these, the key is the first step: if a suitable threshold can be found, the image can be split correctly and easily.
The gray threshold transform converts a grayscale image into a black-and-white binary image. Its operation is: the user first specifies a threshold; if the gray value of a pixel is less than the threshold, the pixel value is set to 0, otherwise to 255.
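As a minimal sketch of this transform in C, assuming the frame is stored as a flat 8-bit grayscale buffer (function and parameter names here are illustrative, not from the paper):

```c
#include <stddef.h>
#include <stdint.h>

/* Gray threshold transform: pixels with gray value below t become 0,
 * all others become 255, producing a binary image. */
void threshold_segment(const uint8_t *src, uint8_t *dst, size_t n, uint8_t t)
{
    for (size_t i = 0; i < n; ++i)
        dst[i] = (src[i] < t) ? 0 : 255;
}
```

On the DM643 this loop would typically run over one captured frame per iteration; the buffer layout is an assumption here.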
The transformation function of the gray threshold can be expressed as follows:

B(m,n) = 0,   if f(m,n) < T
B(m,n) = 255, if f(m,n) ≥ T    (1)

where f(m,n) is the original image, B(m,n) is the image after threshold segmentation, and T is the specified threshold.

B. Image edge segmentation technology

Image edge detection is the most basic step in all edge-based image segmentation algorithms; when performing edge-based segmentation of an image, the first step is edge detection. According to the physiological characteristics of the human eye, which is most sensitive to areas of the scene where brightness changes quickly and where different objects meet, the edges of an image concentrate, in a sense, most of the image information. Identifying and extracting image edges is therefore very important for recognition and understanding of the whole image, and is also an important feature on which image segmentation relies. The so-called image edge refers to the parts with the most significant local brightness changes; the gray profile in such a region is generally regarded as a step, that is, a transition from a buffer area with a small gray value to one with a markedly larger gray value. Since an edge is the result of a gray-level discontinuity, derivative methods can be used to detect it; commonly, the first- and second-order derivatives are used. As shown in figure 2, the first row shows some sample images with edges, the second row shows the profile along the horizontal direction of each image, and the third and fourth rows are, respectively, the first and second derivatives of the profiles. The common edge profiles are of three kinds: (1) ladder-shaped (as shown in figures (a) and (b)); (2) pulse-shaped (as shown in figure (c)); (3) roof-shaped (as shown in figure (d)). A ladder-shaped edge lies between two adjacent regions with different gray values, a pulse-shaped edge mainly corresponds to a narrow strip where the gray value changes abruptly, and a roof-shaped edge rises and falls more slowly. Because of sampling, the edges of a digital image are always somewhat blurred, so in this paper vertical edge profiles are all drawn with a certain slope.
Figure 2. Edge and derivative relationship
In figure 2(a), the first derivative of the gray profile has an upward step at the dark-to-light transition in the image, and is zero elsewhere. This shows that the magnitude of the first derivative can be used to detect the presence of an edge, and its peak generally corresponds to the edge location. The second derivative has an upward pulse where the first derivative steps up and a downward pulse where it steps down; between the two pulses there is a zero-crossing, whose location corresponds to the edge position in the original image. Therefore, the zero-crossings of the second derivative can be used to detect edge locations, and the sign of the second derivative near a zero-crossing can be used to determine whether an edge pixel lies on the dark or the light side of the edge. Analyzing figure 2(b) yields a similar conclusion: here the image goes from light to dark, so compared with figure (a) the profile is mirrored, the first derivative is flipped top to bottom, and the second derivative is likewise mirrored. In figure 2(c), the pulse-shaped edge profile has the same shape as the first derivative in figure (a), so the first derivative in figure (c) has the same shape as the second derivative in figure (a), and its two second-derivative zero-crossings correspond exactly to the rising and falling edges of the pulse; by detecting these two zero-crossings, the extent of the pulse can be determined. In figure 2(d), the roof-shaped edge profile can be seen as a pulse whose bottom has been stretched out, so its first derivative is obtained by spreading the rising part of the pulse profile's first derivative, and its second derivative by pulling apart the rising and falling edges of the pulse profile's second derivative. By detecting the zero-crossing of the first derivative of a roof-shaped edge profile, the roof location can be determined.
The most classic and simple edge detection methods construct a differential operator sensitive to step changes in pixel gray level, or an edge operator built from certain neighborhood characteristics of the pixel, such as the Sobel operator and the Laplacian operator.

1) Sobel operator

For step edges, Sobel proposed an operator to detect edge points. For each pixel of the image, it examines the weighted gray-level differences of the up, down, left and right neighbors, with closer neighbors receiving larger weights. The two kernels shown in the figure below form the Sobel operator; each point of the image is convolved with both, one kernel responding maximally to a vertical edge and the other to a horizontal edge. The larger of the two responses is the output at that point, and the operation result is an edge magnitude image.
-1  0  1        -1 -2 -1
-2  0  2         0  0  0
-1  0  1         1  2  1

Figure 3. Sobel edge detection operator
Hereby, the Sobel operator is defined as follows:

S = |(f(i-1,j-1) + 2f(i-1,j) + f(i-1,j+1)) - (f(i+1,j-1) + 2f(i+1,j) + f(i+1,j+1))|
  + |(f(i-1,j-1) + 2f(i,j-1) + f(i+1,j-1)) - (f(i-1,j+1) + 2f(i,j+1) + f(i+1,j+1))|    (2)
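A sketch of this per-pixel Sobel computation in C, assuming an 8-bit grayscale image in row-major order and combining the two kernel responses as a sum of absolute values (function and parameter names are illustrative, not from the paper):

```c
#include <stdint.h>
#include <stdlib.h>

/* Sobel response |Gx| + |Gy| at interior pixel (i, j) of a
 * w-pixel-wide, row-major 8-bit grayscale image. */
int sobel_at(const uint8_t *img, int w, int i, int j)
{
    const uint8_t *p = img + i * w + j;
    /* weighted row above minus weighted row below */
    int gx = (p[-w - 1] + 2 * p[-w] + p[-w + 1])
           - (p[ w - 1] + 2 * p[ w] + p[ w + 1]);
    /* weighted column left minus weighted column right */
    int gy = (p[-w - 1] + 2 * p[-1] + p[ w - 1])
           - (p[-w + 1] + 2 * p[ 1] + p[ w + 1]);
    return abs(gx) + abs(gy);
}
```

Border pixels (i = 0, j = 0, i = h-1, j = w-1) are assumed to be skipped or padded by the caller.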
The Sobel operator has the advantages of simple computation and fast computing speed, and the method is relatively robust to noise.

2) Laplacian operator
For step-like edges, the second derivative has a zero-crossing at the edge point; that is, the second derivative takes different signs on the two sides of the edge point. Based on this, for each pixel of the digital image {f(i,j)}, the sum of the second-order differences along the x-axis and y-axis directions is taken:

∇²f(i,j) = Δx²f(i,j) + Δy²f(i,j)
         = f(i+1,j) + f(i-1,j) + f(i,j+1) + f(i,j-1) - 4f(i,j)    (3)
This is the Laplacian operator, an edge detection operator that is independent of edge direction. Because one usually cares only about the location of an edge point, not the actual gray differences around it, a direction-independent edge detection operator is usually selected. If ∇²f(i,j) has a zero-crossing at the point (i,j), then (i,j) is a step edge point.
For roof-shaped edges, the second derivative takes its minimum at the edge point. Based on this, for each pixel of the image {f(i,j)}, the negative of the sum of the second-order differences along the x-axis and y-axis directions is taken, that is, the negative of the Laplacian operator, as in the following formula:

L(i,j) = 4f(i,j) - f(i+1,j) - f(i-1,j) - f(i,j+1) - f(i,j-1)    (4)

where {L(i,j)} is called the edge image.
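A minimal C sketch of the discrete Laplacian of Eq. (3) and the edge image of Eq. (4), under the same row-major 8-bit image assumption as above (names illustrative):

```c
#include <stdint.h>

/* Discrete Laplacian at interior pixel (i, j) of a w-pixel-wide
 * row-major image: f(i+1,j)+f(i-1,j)+f(i,j+1)+f(i,j-1)-4f(i,j). */
int laplacian_at(const uint8_t *img, int w, int i, int j)
{
    const uint8_t *p = img + i * w + j;
    return p[w] + p[-w] + p[1] + p[-1] - 4 * p[0];
}

/* Edge image value of Eq. (4): the negated Laplacian. */
int edge_image_at(const uint8_t *img, int w, int i, int j)
{
    return -laplacian_at(img, w, i, j);
}
```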
C. Image rotation
In practical applications, especially in multimedia programming, image rotation is frequently used. As shown below, the point with coordinates (x0, y0) is transformed into (x1, y1) by a rotation through the angle θ.
Figure 4. Image rotation sketch map
Before rotation:

x0 = r cos(a)
y0 = r sin(a)    (5)

After rotation through θ:

x1 = r cos(a - θ) = r cos(a)cos(θ) + r sin(a)sin(θ) = x0 cos(θ) + y0 sin(θ)
y1 = r sin(a - θ) = r sin(a)cos(θ) - r cos(a)sin(θ) = -x0 sin(θ) + y0 cos(θ)    (6)

In matrix form:

[x1]   [ cos(θ)  sin(θ)  0] [x0]
[y1] = [-sin(θ)  cos(θ)  0] [y0]
[1 ]   [   0       0     1] [1 ]    (7)

The inverse operation is:

[x0]   [cos(θ)  -sin(θ)  0] [x1]
[y0] = [sin(θ)   cos(θ)  0] [y1]
[1 ]   [  0        0     1] [1 ]    (8)

The above rotation is carried out around the coordinate origin (0, 0). If the rotation is around a given point (a, b), the coordinate system should first be translated to that point, the rotation performed, and the result translated back to the new coordinate origin.
Now translate coordinate system I to coordinate system II, where the origin of system II has coordinates (a, b) in system I. The coordinate transformation matrix expression is:

[xI ]   [1 0 a] [xII]
[yI ] = [0 1 b] [yII]
[1  ]   [0 0 1] [1  ]    (9)

The inverse transformation matrix expression is:

[xII]   [1 0 -a] [xI]
[yII] = [0 1 -b] [yI]
[1  ]   [0 0  1] [1 ]    (10)

Supposing that the center coordinate of the image before rotation is (a, b) and the center coordinate after rotation is (c, d) (in the new coordinate system, whose origin is the upper-left corner of the new image after rotation), the rotation transformation matrix expression is:

[x0]   [1 0 a] [ cos(θ)  sin(θ)  0] [1 0 -c] [x1]
[y0] = [0 1 b] [-sin(θ)  cos(θ)  0] [0 1 -d] [y1]
[1 ]   [0 0 1] [   0       0     1] [0 0  1] [1 ]    (11)

The inverse transformation matrix expression is:

[x1]   [1 0 c] [cos(θ)  -sin(θ)  0] [1 0 -a] [x0]
[y1] = [0 1 d] [sin(θ)   cos(θ)  0] [0 1 -b] [y0]
[1 ]   [0 0 1] [  0        0     1] [0 0  1] [1 ]    (12)

Namely:

[x0]   [ cos(θ)  sin(θ)  -c cos(θ) - d sin(θ) + a] [x1]
[y0] = [-sin(θ)  cos(θ)   c sin(θ) - d cos(θ) + b] [y1]
[1 ]   [   0       0                1             ] [1 ]    (13)

Therefore,

x0 = x1 cos(θ) + y1 sin(θ) - c cos(θ) - d sin(θ) + a
y0 = -x1 sin(θ) + y1 cos(θ) + c sin(θ) - d cos(θ) + b    (14)
IV. VIDEO IMAGE RECOGNITION ALGORITHM IMPLEMENTATION
Shape analysis and recognition is a basic problem in pattern recognition, image processing and computer vision, and its key is how to represent the shape of the object. The skeleton method is one of the most widely used, and one of the simplest and most effective, ways to represent shape. It is used not only in traditional areas such as object recognition and representation, industrial parts inspection, printed circuit board verification and medical image analysis, but also in graphics deformation, computer-aided geometric design, and other studies such as equidistant curve generation. In this system, aimed at the basic outline of the image, the skeleton is first obtained and then the coordinates of the qualifying pixels are averaged, and this method achieves good results.
After a series of preprocessing steps, a continuous target edge map is obtained from the original image. The basic idea of the algorithm is: scan the edge map line by line; whenever a black pixel, that is, a pixel on the edge of the tunnel, is met, accumulate its horizontal and vertical coordinates in two separate arrays, and at the same time count the number of black pixels encountered. Finally, divide the accumulated horizontal and vertical coordinate sums by the number of black pixels to obtain the coordinates of the object center.
The tunnel center point location algorithm is described as follows:
Figure 5. Cave skeleton chart
As shown in the above figure, p1(x1, y1), p2(x2, y2), …, pn(xn, yn) are all the points of the target cave skeleton, in which x is the horizontal coordinate and y is the vertical coordinate. Then Σ(i=1..n) xi is the sum of all the horizontal coordinate values and, similarly, Σ(i=1..n) yi is the sum of all the vertical coordinate values. There are n points in total on the skeleton, so the coordinates of the center point (xc, yc) are:

xc = (1/n) Σ(i=1..n) xi ;    yc = (1/n) Σ(i=1..n) yi    (15)
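The line-by-line scan and the averaging of Eq. (15) can be sketched as follows, assuming the edge map is a row-major 8-bit image in which black (0) marks edge pixels (names are illustrative, not from the paper):

```c
#include <stdint.h>

/* Scan a w-by-h binary edge map line by line, accumulate the
 * coordinates of black (edge) pixels, and return their mean as the
 * center (xc, yc) per Eq. (15). Returns 0 if no edge pixel exists. */
int cave_center(const uint8_t *edge, int w, int h, double *xc, double *yc)
{
    long sx = 0, sy = 0, n = 0;
    for (int i = 0; i < h; ++i)            /* line-by-line scan */
        for (int j = 0; j < w; ++j)
            if (edge[i * w + j] == 0) {    /* black pixel on the edge */
                sx += j;                   /* horizontal coordinate */
                sy += i;                   /* vertical coordinate */
                ++n;
            }
    if (n == 0)
        return 0;
    *xc = (double)sx / n;
    *yc = (double)sy / n;
    return 1;
}
```

The two accumulator variables replace the two coordinate arrays mentioned in the text, since only the sums are needed for the average.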
Figure 6. Solving the cave center through the skeleton average algorithm
V. CONCLUSION
The main task of this paper was to design an image acquisition and processing system with a DSP processor as its core, achieving real-time video image acquisition, processing and identification. The implementation on the DSP system of the image processing and recognition algorithms, including image threshold segmentation, edge detection, image rotation, and the skeleton location algorithm, demonstrated the good performance of the DSP system in image processing and recognition.