Neural Networks and Deep Learning
CHAPTER 1: Using neural nets to recognize handwritten digits
By Michael Nielsen / Dec 2014


The human visual system is one of the wonders of the world.
Consider the following sequence of handwritten digits:

Most people effortlessly recognize those digits as 504192. That ease is deceptive. In each hemisphere of our brain, humans have a primary visual cortex, also known as V1, containing 140 million neurons, with tens of billions of connections between them. And yet human vision involves not just V1, but an entire series of visual cortices - V2, V3, V4, and V5 - doing progressively more complex image processing. We carry in our heads a supercomputer, tuned by evolution over hundreds of millions of years, and superbly adapted to understand the visual world. Recognizing handwritten digits isn't easy. Rather, we humans are stupendously, astoundingly good at making sense of what our eyes show us. But nearly all that work is done unconsciously. And so we don't usually appreciate how tough a problem our visual systems solve.

The difficulty of visual pattern recognition becomes apparent if you attempt to write a computer program to recognize digits like those above. What seems easy when we do it ourselves suddenly becomes extremely difficult. Simple intuitions about how we recognize shapes - "a 9 has a loop at the top, and a vertical stroke in the bottom right" - turn out to be not so simple to express algorithmically. When you try to make such rules precise, you quickly get lost in a morass of exceptions and caveats and special cases. It seems hopeless.
Neural networks approach the problem in a different way. The idea is to take a large number of handwritten digits, known as training examples, and then develop a system which can learn from those training examples.
In other words, the neural network uses the examples to automatically infer rules for recognizing handwritten digits. Furthermore, by increasing the number of training examples, the network can learn more about handwriting, and so improve its accuracy. So while I've shown just 100 training digits above, perhaps we could build a better handwriting recognizer by using thousands or even millions or billions of training examples.

In this chapter we'll write a computer program implementing a neural network that learns to recognize handwritten digits. The program is just 74 lines long, and uses no special neural network libraries. But this short program can recognize digits with an accuracy over 96 percent, without human intervention.
Furthermore, in later chapters we'll develop ideas which can improve accuracy to over 99 percent. In fact, the best commercial neural networks are now so good that they are used by banks to process cheques, and by post offices to recognize addresses.

We're focusing on handwriting recognition because it's an excellent prototype problem for learning about neural networks in general. As a prototype it hits a sweet spot: it's challenging - it's no small feat to recognize handwritten digits - but it's not so difficult as to require an extremely complicated solution, or tremendous computational power. Furthermore, it's a great way to develop more advanced techniques, such as deep learning. And so throughout the book we'll return repeatedly to the problem of handwriting recognition. Later in the book, we'll discuss how these ideas may be applied to other problems in computer vision, and
also in speech, natural language processing, and other domains.

Of course, if the point of the chapter was only to write a computer program to recognize handwritten digits, then the chapter would be much shorter! But along the way we'll develop many key ideas about neural networks, including two important types of artificial neuron (the perceptron and the sigmoid neuron), and the standard learning algorithm for neural networks, known as stochastic gradient descent. Throughout, I focus on explaining why things are done the way they are, and on building your neural networks intuition. That requires a lengthier discussion than if I just presented the basic mechanics of what's going on, but it's worth it for the deeper understanding you'll attain. Amongst the payoffs, by the end of the chapter we'll be in position to understand what deep learning is, and why it matters.
Perceptrons
What is a neural network? To get started, I'll explain a type of artificial neuron called a perceptron. Perceptrons were developed in the 1950s and 1960s by the scientist Frank Rosenblatt, inspired by earlier work by Warren McCulloch and Walter Pitts. Today, it's more common to use other models of artificial neurons - in this book, and in much modern work on neural networks, the main neuron model used is one called the sigmoid neuron. We'll get to sigmoid neurons shortly. But to understand why sigmoid neurons are defined the way they are, it's worth taking the time to first understand perceptrons.
So how do perceptrons work? A perceptron takes several binary inputs, $x_1, x_2, \ldots$, and produces a single binary output:

In the example shown the perceptron has three inputs, $x_1, x_2, x_3$. In general it could have more or fewer inputs. Rosenblatt proposed a simple rule to compute the output. He introduced weights, $w_1, w_2, \ldots$, real numbers expressing the importance of the respective inputs to the output. The neuron's output, $0$ or $1$, is determined by whether the weighted sum $\sum_j w_j x_j$ is less than or greater than some threshold value. Just like the weights, the threshold is a real number which is a parameter of the neuron. To put it in more precise algebraic terms:

\[
\text{output} =
\begin{cases}
0 & \text{if } \sum_j w_j x_j \le \text{threshold} \\
1 & \text{if } \sum_j w_j x_j > \text{threshold}
\end{cases}
\tag{1}
\]

That's all there is to how a perceptron works!
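To make the rule concrete, here is a minimal Python sketch of equation (1). The function name perceptron_output, the list-based inputs, and the example weights are illustrative choices of mine, not code from the book.

def perceptron_output(x, w, threshold):
    """Return 1 if the weighted sum of the inputs exceeds the threshold, else 0.

    x: list of binary inputs (each 0 or 1)
    w: list of real-valued weights, one per input
    threshold: real-valued threshold of the neuron
    """
    weighted_sum = sum(w_j * x_j for w_j, x_j in zip(w, x))
    return 1 if weighted_sum > threshold else 0

# A perceptron with three inputs, as in the example above.
print(perceptron_output([1, 0, 1], [0.5, 0.5, 2.0], 1.0))  # weighted sum 2.5 > 1.0, prints 1
print(perceptron_output([0, 1, 0], [0.5, 0.5, 2.0], 1.0))  # weighted sum 0.5 <= 1.0, prints 0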
That's the basic mathematical model. A way you can think about the perceptron is that it's a device that makes decisions by weighing up evidence. Let me give an example. It's not a very realistic example, but it's easy to understand, and we'll soon get to more realistic examples. Suppose the weekend is coming up, and you've heard that there's going to be a cheese festival in your city. You like cheese, and are trying to decide whether or not to go to the festival. You might make your decision by weighing up three factors:
1. Is the weather good?
2. Does your boyfriend or girlfriend want to accompany you?
3. Is the festival near public transit? (You don't own a car.)

We can represent these three factors by corresponding binary variables $x_1, x_2$, and $x_3$. For instance, we'd have $x_1 = 1$ if the weather is good, and $x_1 = 0$ if the weather is bad. Similarly, $x_2 = 1$ if your boyfriend or girlfriend wants to go, and $x_2 = 0$ if not. And similarly again for $x_3$ and public transit.
Now, suppose you absolutely adore cheese, so much so that you're happy to go to the festival even if your boyfriend or girlfriend is uninterested and the festival is hard to get to. But perhaps you really loathe bad weather, and there's no way you'd go to the festival if the weather is bad. You can use perceptrons to model this kind of decision-making. One way to do this is to choose a weight $w_1 = 6$ for the weather, and $w_2 = 2$ and $w_3 = 2$ for the other conditions. The larger value of $w_1$ indicates that the weather matters a lot to you, much more than whether your boyfriend or girlfriend joins you, or the nearness of public transit. Finally, suppose you choose a threshold of $5$ for the perceptron. With these choices, the perceptron implements the desired decision-making model, outputting $1$ whenever the weather is good, and $0$ whenever the weather is bad. It makes no difference to the output whether your boyfriend or girlfriend wants to go, or whether public transit is nearby.
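As a sketch of this particular decision-making model, we can enumerate all eight combinations of the three binary factors and confirm that the output tracks the weather alone. The weights 6, 2, 2 and the threshold 5 come from the text; the helper function and variable names are mine.

def perceptron_output(x, w, threshold):
    """Output 1 if the weighted sum of the inputs exceeds the threshold, else 0."""
    return 1 if sum(w_j * x_j for w_j, x_j in zip(w, x)) > threshold else 0

weights = [6, 2, 2]   # weather, partner, public transit
threshold = 5

# Enumerate every combination of the three binary factors.
for weather in (0, 1):
    for partner in (0, 1):
        for transit in (0, 1):
            go = perceptron_output([weather, partner, transit], weights, threshold)
            print(f"weather={weather} partner={partner} transit={transit} -> go? {go}")

Running this prints go? 1 for exactly the four rows where weather=1, regardless of the other two inputs.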
By varying the weights and the threshold, we can get different models of decision-making. For example, suppose we instead chose a threshold of $3$. Then the perceptron would decide that you should go to the festival whenever the weather was good or when both the festival was near public transit and your boyfriend or girlfriend was willing to join you. In other words, it'd be a different model of decision-making. Dropping the threshold means you're more willing to go to the festival.
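Rerunning the same sketch with the lower threshold shows the new behavior: good weather alone still clears the threshold (6 > 3), but so now does the combination of a willing partner and nearby transit (2 + 2 = 4 > 3). Again, the helper is an illustrative stand-in, not code from the book.

def perceptron_output(x, w, threshold):
    """Output 1 if the weighted sum of the inputs exceeds the threshold, else 0."""
    return 1 if sum(w_j * x_j for w_j, x_j in zip(w, x)) > threshold else 0

weights = [6, 2, 2]   # weather, partner, public transit

print(perceptron_output([0, 1, 1], weights, 3))  # 1: partner and transit outweigh bad weather
print(perceptron_output([0, 1, 0], weights, 3))  # 0: partner alone isn't enough (2 <= 3)
print(perceptron_output([1, 0, 0], weights, 3))  # 1: good weather alone suffices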
Obviously, the perceptron isn't a complete model of human decision-making! But what the example illustrates is how a perceptron can weigh up different kinds of evidence in order to make decisions. And it should seem plausible that a complex network of perceptrons could make quite subtle decisions:
In this network, the first column of perceptrons - what we'll call the first layer of perceptrons - is making three very simple decisions, by weighing the input evidence. What about the perceptrons in the second layer? Each of those perceptrons is making a decision by weighing up the results from the first layer of decision-making. In this way a perceptron in the second layer can make a decision at a more complex and more abstract level than perceptrons in the first layer. And even more complex decisions can be made by the perceptron in the third layer. In this way, a many-layer network of perceptrons can engage in sophisticated decision-making.
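A minimal sketch of such a layered network, reusing the perceptron rule from above, feeds each layer's outputs into the next layer as inputs. The weights, thresholds, and layer sizes here are invented purely to illustrate the wiring; they are not taken from the book.

def perceptron_output(x, w, threshold):
    """Output 1 if the weighted sum of the inputs exceeds the threshold, else 0."""
    return 1 if sum(w_j * x_j for w_j, x_j in zip(w, x)) > threshold else 0

def feedforward(inputs, layers):
    """Propagate binary inputs through successive layers of perceptrons.

    layers: a list of layers, where each layer is a list of
    (weights, threshold) pairs, one pair per perceptron in that layer.
    """
    activations = inputs
    for layer in layers:
        activations = [perceptron_output(activations, w, t) for w, t in layer]
    return activations

# A hypothetical network with three inputs: three perceptrons in the
# first layer, two in the second, and one in the third making the
# final decision.
network = [
    [([3, 1, 1], 2), ([1, 3, 1], 2), ([1, 1, 3], 2)],  # first layer
    [([2, 2, 1], 2), ([1, 2, 2], 2)],                  # second layer
    [([3, 3], 2)],                                     # third layer
]
print(feedforward([1, 0, 1], network))  # prints [1] for this particular input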
Incidentally, when I defined perceptrons I said that a perceptron has just a single output. In the network above the perceptrons look like they have multiple outputs.
