Translation of Foreign Literature
(Including the English original and its Chinese translation)
English Original
Neural Network Introduction
1 Objectives
As you read these words you are using a complex biological neural network. You have a highly interconnected set of some 10^11 neurons to facilitate your reading, breathing, motion and thinking. Each of your biological neurons, a rich assembly of tissue and chemistry, has the complexity, if not the speed, of a microprocessor. Some of your neural structure was with you at birth. Other parts have been established by experience.
Scientists have only just begun to understand how biological neural networks operate. It is generally understood that all biological neural functions, including memory, are stored in the neurons and in the connections between them. Learning is viewed as the establishment of new connections between neurons or the modification of existing connections.
This leads to the following question: Although we have only a rudimentary understanding of biological neural networks, is it possible to construct a small set of simple artificial "neurons" and perhaps train them to serve a useful function? The answer is "yes." This book, then, is about artificial neural networks.
The neurons that we consider here are not biological. They are extremely simple abstractions of biological neurons, realized as elements in a program or perhaps as circuits made of silicon. Networks of these artificial neurons do not have a fraction of the power of the human brain, but they can be trained to perform useful functions. This book is about such neurons, the networks that contain them and their training.
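To make the abstraction concrete, here is a minimal sketch of one such artificial neuron in Python. The weighted sum, bias, and hard-limit activation follow the standard textbook form, but the particular weights and the AND example are illustrative assumptions, not values given in this text.

    # A minimal artificial neuron: a weighted sum of the inputs plus a bias,
    # passed through an activation function.

    def step(n):
        """Hard-limit activation: output 1 when the net input is non-negative."""
        return 1 if n >= 0 else 0

    def neuron(inputs, weights, bias):
        """Compute a = f(sum_i w_i * p_i + b)."""
        net = sum(w * p for w, p in zip(weights, inputs)) + bias
        return step(net)

    # Example (assumed values): these weights make the neuron act as a logical AND.
    print(neuron([1, 1], weights=[1, 1], bias=-1.5))  # -> 1
    print(neuron([1, 0], weights=[1, 1], bias=-1.5))  # -> 0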
2 History
The history of artificial neural networks is filled with colorful, creative individuals from many different fields, many of whom struggled for decades to develop concepts that we now take for granted. This history has been documented by various authors. One particularly interesting book is Neurocomputing: Foundations of Research by John Anderson and Edward Rosenfeld. They have collected and edited a set of some 43 papers of special historical interest. Each paper is preceded by an introduction that puts the paper in historical perspective.
Histories of some of the main neural network contributors are included at the beginning of various chapters throughout this text and will not be repeated here. However, it seems appropriate to give a brief overview, a sample of the major developments.
At least two ingredients are necessary for the advancement of a technology: concept and implementation. First, one must have a concept, a way of thinking about a topic, some view of it that gives clarity not there before. This may involve a simple idea, or it may be more specific and include a mathematical description. To illustrate this point, consider the history of the heart. It was thought to be, at various times, the center of the soul or a source of heat. In the 17th century medical practitioners finally began to view the heart as a pump, and they designed experiments to study its pumping action. These experiments revolutionized our view of the circulatory system. Without the pump concept, an understanding of the heart was out of grasp.
Concepts and their accompanying mathematics are not sufficient for a technology to mature unless there is some way to implement the system. For instance, the mathematics necessary for the reconstruction of images from computer-aided tomography (CAT) scans was known many years before the availability of high-speed computers and efficient algorithms finally made it practical to implement a useful CAT system.
The history of neural networks has progressed through both conceptual innovations and implementation developments. These advancements, however, seem to have occurred in fits and starts rather than by steady evolution.
Some of the background work for the field of neural networks occurred in the late 19th and early 20th centuries. This consisted primarily of interdisciplinary work in physics, psychology and
neurophysiology by such scientists as Hermann von Helmholtz, Ernst Mach and Ivan Pavlov. This early work emphasized general theories of learning, vision, conditioning, etc., and did not include specific mathematical models of neuron operation.
The modern view of neural networks began in the 1940s with the work of Warren McCulloch and Walter Pitts [McPi43], who showed that networks of artificial neurons could, in principle, compute any arithmetic or logical function. Their work is often acknowledged as the origin of the neural network field.
McCulloch and Pitts were followed by Donald Hebb [Hebb49], who proposed that classical conditioning (as discovered by Pavlov) is present because of the properties of individual neurons. He proposed a mechanism for learning in biological neurons.
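Hebb's postulate is often summarized computationally as: strengthen a connection in proportion to the correlated activity of the two neurons it joins. A minimal sketch in Python, where the learning rate and the activity values are assumptions for illustration:

    # Hebbian update sketch: the weight grows when the pre- and postsynaptic
    # neurons are active together (delta_w = lr * pre * post).
    def hebb_update(w, pre, post, lr=0.1):
        return w + lr * pre * post

    w = 0.0
    for pre, post in [(1, 1), (1, 1), (0, 1)]:  # only correlated activity adds
        w = hebb_update(w, pre, post)
    print(w)  # 0.2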
The first practical application of artificial neural networks came in the late 1950s, with the invention of the perceptron network and associated learning rule by Frank Rosenblatt [Rose58]. Rosenblatt and his colleagues built a perceptron network and demonstrated its ability to perform pattern recognition. This early success generated a great deal of interest in neural network research. Unfortunately, it was later shown that the basic perceptron network could solve only a limited class of problems. (See Chapter 4 for more on Rosenblatt and the perceptron learning rule.) At about the same time, Bernard Widrow and Ted Hoff [WiHo60] introduced a new learning algorithm and used it to train adaptive linear neural networks, which were similar in structure and capability to Rosenblatt's perceptron. The Widrow-Hoff learning rule is still in use today. (See Chapter 10 for more on Widrow-Hoff learning.) Unfortunately, both Rosenblatt's and Widrow's networks suffered from the same inherent limitations, which were widely publicized in a book by Marvin Minsky and Seymour Papert [MiPa69]. Rosenblatt and Widrow were aware of the limitations and proposed new networks that would overcome them. However, they were not able to successfully modify their learning algorithms to train the more complex networks.
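The text defers the details to Chapters 4 and 10, but the flavor of these rules is easy to convey. The sketch below applies the classic perceptron update, w <- w + e*p and b <- b + e with e = target - output; the AND training set, epoch count, and zero initial weights are assumptions made for the demonstration. The Widrow-Hoff (LMS) rule is similar in spirit but computes the error from the linear output before thresholding.

    # Perceptron learning rule sketch: after each example, nudge the weights
    # by (target - output) * input until the examples are classified correctly.

    def step(n):
        return 1 if n >= 0 else 0

    def train_perceptron(samples, n_inputs, epochs=10):
        w = [0.0] * n_inputs
        b = 0.0
        for _ in range(epochs):
            for p, target in samples:
                a = step(sum(wi * pi for wi, pi in zip(w, p)) + b)
                e = target - a                      # error drives the update
                w = [wi + e * pi for wi, pi in zip(w, p)]
                b = b + e
        return w, b

    # Learn logical AND from examples (an assumed toy problem).
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    print(train_perceptron(data, n_inputs=2))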
Many people, influenced by Minsky and Papert, believed that further research on neural networks was a dead end. This, combined with the fact that there were no powerful digital computers on which to experiment, caused many researchers to leave the field. For a decade neural network research was largely suspended. Some important work, however, did continue during the 1970s. In 1972 Teuvo Kohonen [Koho72] and James Anderson [Ande72] independently and separately developed new neural networks that could act as memories. Stephen Grossberg [Gros76] was also very active during this period in the investigation of self-organizing networks.
Interest in neural networks had faltered during the late 1960s because of the lack of new ideas and powerful computers with which to