Digital Image Processing and Edge Detection
Digital Image Processing
Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission, and representation for autonomous machine perception.
An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.
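This definition can be made concrete with a small sketch: a digital image stored as a grid of gray levels, indexed by its spatial coordinates. The tiny grid and the helper function below are illustrative inventions, not taken from the text:

```python
# A digital image as a finite grid of discrete intensity values.
# f[y][x] is the gray level (0 = black, 255 = white) of the pixel
# at spatial coordinates (x, y) in this tiny 4x3 example image.
f = [
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 16,  80, 144, 208],
]

def intensity(image, x, y):
    """Return the gray level of the pixel at coordinates (x, y)."""
    return image[y][x]

print(intensity(f, 2, 0))  # gray level at x=2, y=0 -> 128
```

Because every coordinate and every amplitude is a finite, discrete quantity, the whole image fits in an ordinary array, which is what makes processing by digital computer possible.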
Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.
There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.
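The average-intensity computation mentioned above, which takes an image in and yields a single number out, is easy to write down. This is a minimal sketch; the function name and the sample image are illustrative:

```python
def average_intensity(image):
    """Mean gray level of an image given as a list of pixel rows.

    The input is an image, but the output is a single scalar --
    exactly the kind of operation the image-in/image-out definition
    of image processing would exclude.
    """
    total = sum(sum(row) for row in image)
    count = sum(len(row) for row in image)
    return total / count

img = [[10, 20], [30, 40]]
print(average_intensity(img))  # -> 25.0
```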
There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.
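A low-level process of the kind listed above can be sketched with a simple contrast-stretching operation: an image goes in, and an image with the same dimensions comes out, with its gray levels linearly rescaled to span the full range. The function and parameter names here are assumptions chosen for illustration, not part of the text:

```python
def stretch_contrast(image, out_min=0, out_max=255):
    """Contrast enhancement: linearly rescale gray levels so the
    darkest pixel maps to out_min and the brightest to out_max.
    Both the input and the output are images, the hallmark of a
    low-level process."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    if hi == lo:
        # Flat image: no contrast to stretch; map all pixels to out_min.
        return [[out_min for _ in row] for row in image]
    scale = (out_max - out_min) / (hi - lo)
    return [[round(out_min + (p - lo) * scale) for p in row]
            for row in image]

dim = [[100, 110], [120, 130]]          # low-contrast input
print(stretch_contrast(dim))            # -> [[0, 85], [170, 255]]
```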
Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense." As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.
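The segmentation step in the text-analysis pipeline above can be illustrated, in greatly simplified form, by global thresholding: each pixel is labeled foreground (character ink) or background according to a fixed gray-level cutoff. Real character segmentation is far more involved; this is only a sketch, and the threshold value and labeling convention are assumptions:

```python
def threshold(image, t):
    """Simplest possible segmentation: label each pixel 1 (foreground)
    if its gray level is below t (dark ink on a light page), else 0.
    The input is an image; the output is a map of region labels."""
    return [[1 if p < t else 0 for p in row] for row in image]

page = [[10, 200],   # dark ink pixel, light paper pixel
        [50, 240]]
print(threshold(page, 128))  # -> [[1, 0], [1, 0]]
```

The output is no longer an image of gray levels but a set of labeled regions, which is why segmentation sits at the mid-level of the processing continuum.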
The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.