Edge Detection (Chinese-English Translation)


Digital Image Processing and Edge Detection
Digital Image Processing
Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.
An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.
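To make the definition concrete, here is a minimal sketch that represents a digital image as a small NumPy array of discrete gray levels; the values are arbitrary and purely illustrative.

```python
import numpy as np

# A digital image is a finite 2-D array of discrete intensity values.
# f[x, y] plays the role of f(x, y) in the text: each element is one pixel.
f = np.array([[ 12,  40,  40,  12],
              [ 40, 200, 200,  40],
              [ 40, 200, 200,  40],
              [ 12,  40,  40,  12]], dtype=np.uint8)

print(f.shape)   # (4, 4): the spatial extent in x and y
print(f[1, 2])   # 200: the gray level at one pair of coordinates
```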
Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.
There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated.
The area of image analysis (also called image understanding) is in between image processing and computer vision.
There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.
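As a hedged illustration of the low-/mid-level distinction (and of the edge detection in the article's title), the sketch below smooths an image (a low-level operation whose output is still an image) and then extracts an edge map with the Canny detector (a mid-level step whose output is an attribute of the image). The file name and threshold values are illustrative assumptions, not values from the text.

```python
import cv2

# "input.png" is a placeholder path; supply any grayscale test image.
gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
if gray is None:
    raise FileNotFoundError("provide a test image named input.png")

denoised = cv2.GaussianBlur(gray, (5, 5), 1.4)   # low-level: noise reduction, output is an image
edges = cv2.Canny(denoised, 50, 150)             # mid-level: edge attributes extracted from the image
cv2.imwrite("edges.png", edges)
```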
Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense." As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.
The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.
Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy. Each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in the figure below, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to the other.
Fig. 1. The electromagnetic spectrum grouped according to energy per photon, from gamma rays (highest energy) to radio waves (lowest energy).
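For readers who want the quantitative relation behind "energy per photon" (a standard physical fact, not stated in the excerpt), a photon of frequency ν and wavelength λ carries energy E = hν = hc/λ, where h ≈ 6.626 × 10⁻³⁴ J·s is Planck's constant and c is the speed of light; this is why gamma rays sit at the high-energy end of the figure and radio waves at the low-energy end.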
Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.
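As a small, hedged illustration of acquisition-stage preprocessing, the sketch below scales an already-digital image; the synthetic array and the 512 × 512 target size are illustrative assumptions.

```python
import cv2
import numpy as np

# A synthetic 200 x 300 array stands in for an image delivered by a sensor.
acquired = np.random.randint(0, 256, (200, 300), dtype=np.uint8)

# Scaling is a typical preprocessing step at the acquisition stage.
scaled = cv2.resize(acquired, (512, 512), interpolation=cv2.INTER_AREA)
print(acquired.shape, scaled.shape)   # (200, 300) -> (512, 512)
```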
Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image.
A familiar example of enhancement is when we increase the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.
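As a hedged example of a purely subjective enhancement (no degradation model involved), the sketch below applies histogram equalization to a deliberately low-contrast synthetic image; the ramp image is an illustrative stand-in for a real photograph.

```python
import cv2
import numpy as np

# A low-contrast image: gray levels confined to the narrow range 100-156.
low_contrast = np.tile(np.linspace(100, 156, 256).astype(np.uint8), (256, 1))

# Histogram equalization redistributes gray levels so detail "looks better".
equalized = cv2.equalizeHist(low_contrast)

print(low_contrast.min(), low_contrast.max())   # 100 156
print(equalized.min(), equalized.max())         # roughly 0 255 after equalization
```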
Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is used also in later chapters as the basis for extracting features of interest in an image.
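A minimal sketch of a basic color-model operation, assuming OpenCV's BGR channel order; the pixel values are illustrative.

```python
import cv2
import numpy as np

# Three pure-color pixels, stored in OpenCV's B, G, R channel order.
bgr = np.zeros((1, 3, 3), dtype=np.uint8)
bgr[0, 0] = (0, 0, 255)    # red
bgr[0, 1] = (0, 255, 0)    # green
bgr[0, 2] = (255, 0, 0)    # blue

# Convert to the HSV color model, often used when color drives feature extraction.
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
print(hsv[0, :, 0])   # hues of about 0, 60, 120 on OpenCV's 0-179 hue scale
```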
Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.
Fig. 2
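The following sketch builds a simple Gaussian pyramid, a related multiresolution representation (not the wavelet transform itself); the random test image is an illustrative assumption.

```python
import cv2
import numpy as np

# Successive smoothing and subsampling gives the image at several resolutions.
level0 = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in image
level1 = cv2.pyrDown(level0)   # 128 x 128
level2 = cv2.pyrDown(level1)   # 64 x 64
print(level0.shape, level1.shape, level2.shape)
```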
Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.
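To make the storage trade-off concrete, the hedged sketch below encodes the same synthetic image as JPEG at two quality settings; the gradient image and the quality values 95 and 10 are illustrative choices.

```python
import cv2
import numpy as np

# A synthetic gradient image stands in for a photograph.
img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))

# Encode in memory at high and low JPEG quality and compare the byte counts.
_, buf_hi = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 95])
_, buf_lo = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 10])
print(len(buf_hi), len(buf_lo))   # the low-quality encoding needs far fewer bytes
```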
Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.
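A minimal hedged sketch of one such tool, morphological opening, which removes small bright specks while preserving the shape of larger components; the synthetic binary image and 3 × 3 structuring element are illustrative.

```python
import cv2
import numpy as np

# A binary image with one large object and one single-pixel speck of noise.
img = np.zeros((64, 64), dtype=np.uint8)
cv2.rectangle(img, (16, 16), (48, 48), 255, -1)   # filled rectangle
img[5, 5] = 255                                   # noise speck

# Opening = erosion followed by dilation with a small structuring element.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
print(img[5, 5], opened[5, 5])   # 255 0: the speck is removed, the rectangle survives
```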
Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.
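A hedged sketch of one simple segmentation technique, automatic thresholding with Otsu's method; the synthetic image (a bright square on a darker background) is an illustrative assumption.

```python
import cv2
import numpy as np

# A dark noisy background with one brighter square object.
rng = np.random.default_rng(0)
img = rng.normal(60, 10, (128, 128))
img[32:96, 32:96] += 120
img = np.clip(img, 0, 255).astype(np.uint8)

# Otsu's method picks the threshold separating the two gray-level populations.
t, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("chosen threshold:", t)
print("object pixels:", int(np.count_nonzero(mask)))   # roughly 64 * 64
```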
Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.
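As a hedged sketch of a boundary representation followed by simple descriptors, the code below extracts the contour of a segmented disc and computes its area and perimeter (the OpenCV 4 return convention of findContours is assumed).

```python
import cv2
import numpy as np

# A filled disc plays the role of a segmented region.
img = np.zeros((100, 100), dtype=np.uint8)
cv2.circle(img, (50, 50), 30, 255, -1)

# Boundary representation: the contour is the set of pixels bounding the region.
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boundary = contours[0]

# Description: simple quantitative attributes of the region.
area = cv2.contourArea(boundary)
perimeter = cv2.arcLength(boundary, True)
print(area, perimeter)   # close to pi*r^2 and 2*pi*r for r = 30
```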
Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing.
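A minimal, hedged sketch of the recognition idea: an unknown object's descriptor vector is compared with stored class prototypes and given the nearest label. The descriptor values and class names are entirely illustrative.

```python
import numpy as np

# Stored prototype descriptor vectors for two hypothetical object classes
# (for example, [elongation, compactness] measured in earlier stages).
prototypes = {
    "vehicle":    np.array([0.80, 0.30]),
    "pedestrian": np.array([0.20, 0.70]),
}

# Descriptors of an unknown object coming out of the description stage.
unknown = np.array([0.75, 0.35])

# Nearest-neighbour assignment: the label of the closest prototype wins.
label = min(prototypes, key=lambda k: float(np.linalg.norm(prototypes[k] - unknown)))
print(label)   # "vehicle"
```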
