How Neural Networks Learn

<figure class="wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube"><a href="https://blog.finxter.com/how-neural-networks-learn/"><img src="https://blog.finxter.com/wp-content/plugins/wp-youtube-lyte/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fx4HH2A23gvE%2Fhqdefault.jpg" alt="YouTube Video"></a><figcaption></figcaption></figure>
<p>Artificial neural networks have become a powerful tool providing many benefits in our modern world. They are used to filter out spam, to perform voice recognition, and are even being developed to drive cars, among many other things.</p>
<p>As remarkable as these tools are, they are readily within the grasp of almost anyone. If you have technical interest and some experience with computer programming, you can build your own <a href="https://blog.finxter.com/tutorial-how-to-create-your-first-neural-network-in-1-line-of-python-code/" data-type="post" data-id="2463" target="_blank" rel="noreferrer noopener">neural networks</a>.</p>
<p>But before you learn the hands-on details of building neural networks you should learn some of the fundamentals of how they work. This article will cover one of those fundamentals – how neural networks learn.</p>
<p class="has-global-color-8-background-color has-background"><strong>Note</strong>: This article includes some algebra and calculus. If you’re not comfortable with algebra, you should still be able to understand the content from the graphs and descriptions. The calculus is not done in any detail. Again you should still be able to follow along from the descriptions. You will not learn the details of how the calculations are done. Instead, you will gain an intuitive understanding of what is going on.</p>
<p>Before learning this, you should be familiar with the basics of how neural networks are structured and how they operate. The article <a href="https://blog.finxter.com/the-magic-of-neural-networks-how-they-work/" target="_blank" rel="noreferrer noopener">“The Magic of Neural Networks: History and Concepts”</a> covers these basics. Still, we offer the following brief refresher.</p>
<h2>Basic Fundamentals: How Neural Networks Work</h2>
<p>Figure 1 shows an artificial neuron. </p>
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" width="731" height="372" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-19.png" alt="" class="wp-image-568036" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-19.png 731w, https://blog.finxter.com/wp-content/uplo...00x153.png 300w" sizes="(max-width: 731px) 100vw, 731px" /><figcaption><strong>Figure 1</strong>: artificial neuron</figcaption></figure>
</div>
<p>Signals from other neurons come in through multiple inputs, each multiplied by its corresponding <strong>weight</strong> (weights express the connection strengths between the neuron and each of its upstream neurons). </p>
<p>A <strong>bias</strong> is input as well (bias expresses a neuron’s inherent activation, independent of its input from other neurons). All these inputs are added together, and the resulting total signal is then processed through the <strong>activation function</strong> (a sigmoid function is shown here).</p>
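<p>To make this concrete, here is a minimal Python sketch of a single artificial neuron. The input values, weights, and bias are made-up numbers for illustration:</p>
<pre class="wp-block-code"><code>import math

def sigmoid(z):
    # Squashes any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Each input is multiplied by its weight, all products are added
    # together with the bias, and the total goes through the activation.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Hypothetical example: three signals from upstream neurons.
print(neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))</code></pre>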
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" width="1000" height="450" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-20.png" alt="" class="wp-image-568044" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-20.png 1000w, https://blog.finxter.com/wp-content/uplo...00x135.png 300w, https://blog.finxter.com/wp-content/uplo...68x346.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /><figcaption><strong>Figure 2</strong>: neural network classifying an image (Dog photo by <a href="https://www.pexels.com/photo/shallow-focus-photography-of-a-golden-retriever-686094/" data-type="URL" data-id="https://www.pexels.com/photo/shallow-focus-photography-of-a-golden-retriever-686094/" target="_blank" rel="noreferrer noopener">Garfield Besa</a>)</figcaption></figure>
</div>
<p>Figure 2 shows a network of these neurons. Signals are introduced on the input side, and they progress through the network, passing through neurons and along their connections, getting processed by the calculations described above. How the signals are processed depends on the weights and biases among all the neurons. </p>
<p class="has-global-color-8-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f4a1.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> The <strong>key takeaway</strong> is that it is the settings of the weights and biases that establish how the network as a whole computes. In other words, the learning and memory of the network is encoded by the weights and biases.</p>
<p>So how does one program these weights and biases? </p>
<p>They are set by training the network with samples and letting it learn by example. The details of how that is done is the subject of this article.</p>
<h2>Overview of How Neural Networks Learn</h2>
<p>As mentioned, a neural network’s learning and memory is encoded by the connection weights and biases of the neurons throughout the network. </p>
<p>These weights and biases are set by training the network on examples, following this six-step training procedure (a code sketch follows the list):</p>
<ol>
<li>Provide a sample to the network.</li>
<li>Since the network is untrained, it will probably get the wrong answer.</li>
<li>Compute how far this answer is from the correct answer. This error is known as <strong>loss</strong>.</li>
<li>Calculate what changes in the weights and biases will make the loss smaller.</li>
<li>Make adjustments to those weights and biases as determined by those calculations.</li>
<li>Repeat this again and again with numerous samples until the network learns to answer the samples correctly.</li>
</ol>
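<p>In code, this six-step loop can be sketched in a few lines. The following runnable toy example uses a one-weight “network” (y = w · x) and made-up samples, purely to show the shape of the procedure:</p>
<pre class="wp-block-code"><code># Toy training set: (input, correct answer) pairs, made up for illustration.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.1             # untrained weight, starting at a small value
learning_rate = 0.01

for epoch in range(100):                 # step 6: repeat again and again
    for x, target in samples:
        y = w * x                        # steps 1-2: present sample, get a (wrong) answer
        loss = (y - target) ** 2         # step 3: how far off? (the loss)
        gradient = 2 * (y - target) * x  # step 4: how the loss changes as w changes
        w -= learning_rate * gradient    # step 5: adjust w to make the loss smaller

print(w)  # approaches 2.0, the weight that answers every sample correctly</code></pre>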
<h2>Presenting Samples and Calculating Loss</h2>
<p>Let’s review some of this in more detail while considering a use case. </p>
<p>Imagine we want to train a network to estimate crowd size. </p>
<p>To do this we must first train the network with a large set of images of crowds. For each image, the number of people is counted, and we include a <strong>label</strong> indicating the correct crowd size for each picture. This is known as a <strong>training set</strong>.</p>
<p>The pictures are submitted to the network, which then indicates its crowd estimate for each picture. Since the network is not trained, it surely gets the estimate wrong for each image. </p>
<p>For each image/label pair, the network calculates the loss for that sample. </p>
<p>There are multiple possible choices for calculating loss. One can choose any calculation that appropriately expresses how far the network’s answer is from the correct answer. </p>
<p>An appropriate choice for the crowd-size loss is the squared error:</p>
<div class="wp-block-image">
<figure class="aligncenter size-large is-resized"><img loading="lazy" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-21-1024x56.png" alt="" class="wp-image-568060" width="847" height="46" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-21-1024x56.png 1024w, https://blog.finxter.com/wp-content/uplo...300x16.png 300w, https://blog.finxter.com/wp-content/uplo...768x42.png 768w, https://blog.finxter.com/wp-content/uplo...age-21.png 1321w" sizes="(max-width: 847px) 100vw, 847px" /></figure>
</div>
<p>where:</p>
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" width="415" height="107" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-22.png" alt="" class="wp-image-568064" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-22.png 415w, https://blog.finxter.com/wp-content/uplo...300x77.png 300w" sizes="(max-width: 415px) 100vw, 415px" /></figure>
</div>
<p>Suppose we submit an image showing a crowd size of 500 people. Figure 3 shows how the error varies for crowd estimates around the true crowd size of 500 people.</p>
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" width="413" height="266" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-23.png" alt="" class="wp-image-568070" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-23.png 413w, https://blog.finxter.com/wp-content/uplo...00x193.png 300w" sizes="(max-width: 413px) 100vw, 413px" /><figcaption><strong>Figure 3</strong></figcaption></figure>
</div>
<p>If the network guesses 350 people, the loss is 22500. If the network guesses 600 people, the loss is 10000. </p>
<p>Clearly, the loss is minimized when the network guesses the correct crowd size of 500 people.</p>
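<p>In Python, this loss calculation is a one-liner. Assuming the squared-error formula above, with a true crowd size of 500:</p>
<pre class="wp-block-code"><code>def squared_error(estimate, correct):
    # The loss: the squared difference between the network's answer
    # and the correct answer.
    return (estimate - correct) ** 2

true_size = 500
print(squared_error(350, true_size))  # 22500
print(squared_error(600, true_size))  # 10000
print(squared_error(500, true_size))  # 0 -- minimized at the correct answer</code></pre>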
<p>But recall we said it is the weights and biases in the network that encode its learning and memory, so it is the weights and biases that determine if the network gets the right answer. So we need to adjust the weights and biases so that the network gets closer to the correct answer for this image.</p>
<p>In other words, we need to change the weights and biases to minimize the loss. To do that, we need to figure out how the loss varies when we vary the weights and biases.</p>
<h2>Minimizing Loss: Calculus and the Derivative</h2>
<p>So how do we calculate how loss changes when we vary weights and biases? </p>
<p>This is where calculus comes in.</p>
<p><em>(Don’t worry if you don’t know calculus, we’ll show you everything you need to know, and we’ll keep it intuitive.) </em></p>
<p>Calculus is all about determining how one variable is affected by changes in another variable.</p>
<p><em>(Strictly speaking there’s more to calculus than that, but this idea is one of the core ideas of calculus.)</em></p>
<p class="has-global-color-8-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f4a1.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> The loss L depends on network output y, but y depends on input, and on weights w and biases b. So there is a somewhat long and complicated chain of dependencies we have to go through to figure out how L varies when w and b vary. </p>
<p>However, for the sake of learning, let’s instead start by just examining how L varies when y varies, since this is simpler and will help develop an intuition for calculus.</p>
<p>How L depends on y is somewhat easy – we saw the equation for it earlier, and we saw the graph of that equation in Figure 3. We can tell by looking at the graph that if the network guesses 350 then we need to increase the output y in order to reduce the loss, and that if the network guesses 600 then we need to decrease the output y in order to reduce the loss.</p>
<p>But with neural networks, we never have the luxury of being able to examine the graph of the loss to figure it out. </p>
<p>We can, however, use calculus to get our answer. To do this, we do what is called <strong>taking the derivative</strong>. </p>
<p>Here is the derivative of the equation for the graph in Figure 3 (note: we will not explain how this is calculated; that is the domain of a calculus course):</p>
<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="70" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-24-1024x70.png" alt="" class="wp-image-568081" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-24-1024x70.png 1024w, https://blog.finxter.com/wp-content/uplo...300x20.png 300w, https://blog.finxter.com/wp-content/uplo...768x52.png 768w, https://blog.finxter.com/wp-content/uplo...age-24.png 1311w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
<p>This is typically referred to as “taking the derivative of L with respect to y”. You can read that <em><strong>dL/dy</strong></em> as saying “this is how L changes when y changes”. Now let’s calculate how L changes when y changes at the point y = 350:</p>
<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="70" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-25-1024x70.png" alt="" class="wp-image-568084" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-25-1024x70.png 1024w, https://blog.finxter.com/wp-content/uplo...300x21.png 300w, https://blog.finxter.com/wp-content/uplo...768x52.png 768w, https://blog.finxter.com/wp-content/uplo...age-25.png 1317w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
<p>So at y = 350, for every unit y increases, L decreases by 300. That tells us that increasing y will decrease the loss.</p>
<p>Now let’s calculate how L changes when y changes at the point y = 600:</p>
<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="60" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-26-1024x60.png" alt="" class="wp-image-568087" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-26-1024x60.png 1024w, https://blog.finxter.com/wp-content/uplo...300x18.png 300w, https://blog.finxter.com/wp-content/uplo...768x45.png 768w, https://blog.finxter.com/wp-content/uplo...age-26.png 1328w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
<p>So at y = 600, for every unit y increases, L increases by 200. Since we want to <em>decrease</em> L, that means we need to <em>decrease</em> y.</p>
<p>These calculations match what we concluded from looking at the graph.</p>
<p>You can also read <em>dL/dy</em> as saying <em>“this is the slope of the graph”</em>. </p>
<p>This makes sense: at point y = 350 the slope of the graph is -300 (sloping down steeply), while at point y = 600 the slope of the graph is 200 (sloping up, not quite so steeply).</p>
<p>So by using calculus and taking the derivative, we can figure out which way to change y to reduce the loss L, even when we can’t see the graph to figure it out.</p>
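<p>You can verify these slopes numerically, without doing any calculus by hand, by nudging y a tiny amount and seeing how L responds. This finite-difference check assumes the loss L = (y − 500)² from the example above:</p>
<pre class="wp-block-code"><code>def loss(y, target=500):
    # Squared-error loss for the crowd-size example.
    return (y - target) ** 2

def dL_dy(y, eps=1e-6):
    # Finite-difference approximation of the derivative dL/dy:
    # the change in L for a tiny change in y.
    return (loss(y + eps) - loss(y - eps)) / (2 * eps)

print(dL_dy(350))  # about -300: sloping down, so increase y to reduce L
print(dL_dy(600))  # about  200: sloping up, so decrease y to reduce L</code></pre>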
<p>Recall, however, that we want to figure out how to change the weights and biases to reduce the loss L. Also recall there is a chain of dependencies: L depends on y, which itself depends on w and b (for several layers’ worth of w and b!), and on the input. </p>
<p>So a full description could result in some rather complicated equations and some difficult derivatives. For those curious about the math details, the method for figuring out derivatives when there are such dependencies is called <strong>the chain rule</strong>.</p>
<p>Fortunately, with modern neural network software, the computer takes care of calculating derivatives and keeping track of and resolving the chains of dependencies in the derivatives. Just understand that, even if we can’t see its graph:</p>
<ul>
<li>there is some relationship between the loss L and the weights w and biases b (a "graph")</li>
<li>there is some set of weights and biases where the loss L is at a minimum for a given input</li>
<li>we can use calculus to figure out how to adjust the weights and biases to minimize loss</li>
</ul>
<h2>The Loss Surface and Gradient Descent</h2>
<p>Let’s consider a very simple case where there are just two weights, w1 and w2, and no biases. The graph of L as a function of w1 and w2 might look like Figure 4.</p>
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" width="397" height="254" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-27.png" alt="" class="wp-image-568092" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-27.png 397w, https://blog.finxter.com/wp-content/uplo...00x192.png 300w" sizes="(max-width: 397px) 100vw, 397px" /><figcaption><strong>Figure 4</strong>: bowl-shaped error graph</figcaption></figure>
</div>
<p>In this example, with two independent weights, we end up with a bowl-shaped surface for the loss graph. In this case, the loss is minimized when w1 = 4 and w2 = 3. In the beginning, when the network is not yet trained, the weights (initially set to small <a rel="noreferrer noopener" href="https://blog.finxter.com/create-a-list-of-random-numbers-the-most-pythonic-way/" data-type="post" data-id="10516" target="_blank">random</a> numbers) are almost certainly not at the correct values for the loss to be at a minimum.</p>
<p>We still figure out which direction to change the weights to reduce the loss by taking the derivative. </p>
<p>Only this time, since there are two independent variables, we take the derivative with respect to each independently. </p>
<p class="has-global-color-8-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f4a1.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Important</strong>: The result is, for any given point on the loss surface, a direction (a vector, or an arrow) pointing in which direction the loss increases the fastest (“uphill”Wink. This is known as the gradient (instead of derivative). Since we want to reduce loss, we move in the opposite direction, the <em>negative</em> of the gradient.</p>
<p>The larger point is we are still using calculus to figure out which direction to change weights to reduce loss. Repeatedly doing this moves the weights closer to the values which make the network give the correct answer for a given input. This is known as <strong>gradient descent</strong>.</p>
<p>However, most neural networks have many more than two weights, typically dozens for any given layer. </p>
<p>But the same ideas still apply: if we have a layer consisting of 16 weighted connections, the loss is a surface over a 16-dimensional space! You can’t visualize it, but it still exists mathematically, and the same principles apply! </p>
<p>You can still calculate the gradient, that is the derivative with respect to all 16 w’s, and figure out which direction to change the w’s to minimize the loss.</p>
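<p>Here is what gradient descent looks like in code for the two-weight case. For illustration, it assumes a bowl-shaped loss L = (w1 − 4)² + (w2 − 3)², which, like Figure 4, has its minimum at w1 = 4 and w2 = 3:</p>
<pre class="wp-block-code"><code># Assumed bowl-shaped loss surface with its minimum at w1=4, w2=3.
def loss(w1, w2):
    return (w1 - 4) ** 2 + (w2 - 3) ** 2

def gradient(w1, w2):
    # Derivative with respect to each weight independently; together
    # these form the gradient, the "uphill" direction.
    return 2 * (w1 - 4), 2 * (w2 - 3)

w1, w2 = 0.1, -0.2   # small random-ish starting weights
step_size = 0.1      # the learning rate, discussed below

for step in range(100):
    g1, g2 = gradient(w1, w2)
    w1 -= step_size * g1   # move opposite the gradient: downhill
    w2 -= step_size * g2

print(round(w1, 3), round(w2, 3))  # close to 4 and 3, the minimum</code></pre>
<p>The same loop works unchanged for 16 weights (or sixteen thousand); the gradient simply becomes a vector with one component per weight.</p>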
<p>So how much do we adjust the weights and biases? </p>
<p>Typically they are adjusted just a small amount. This is because large adjustments can cause problems. </p>
<p>Refer to the loss surface shown in Figure 4. If too large a step is made, you could jump right across the loss surface bowl, even going so far as to make the loss worse! </p>
<p class="has-global-color-8-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f4a1.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> The adjustment step size is known as the <strong>learning rate</strong>. Figuring out the best learning rate is one of the tricks to optimizing your network that a neural network engineer has to work out.</p>
<h2>Backpropagation</h2>
<p>Ultimately <em>all</em> of the weights and biases throughout the network have to be adjusted to minimize loss. This is done by starting from the loss and working back, layer by layer, to the beginning of the network, a process called <strong>backpropagation</strong>. </p>
<p>It has to be done this way because you can’t figure out how the first layer’s weights and biases affect loss until you know how the second layer’s weights and biases affect loss; you can’t tell how the second layer’s weights and biases affect loss until you know how the third layer’s weights and biases affect loss, and so on. </p>
<p>So calculations and adjustments are done starting with the last layer, then working back to the second-to-last layer, and so on back to the first layer.</p>
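<p>Here is a minimal sketch of backpropagation on a tiny two-layer network with one weight per layer (biases and activation functions are omitted so the chain of dependencies stays visible; the data and starting weights are made up):</p>
<pre class="wp-block-code"><code># Tiny two-layer network: y = w2 * (w1 * x), with squared-error loss.
x, target = 1.5, 6.0
w1, w2 = 0.5, 0.5
learning_rate = 0.01

for step in range(200):
    # Forward pass, layer by layer.
    h = w1 * x                 # first-layer output
    y = w2 * h                 # second-layer output: the network's answer
    # Backward pass: start at the loss, work back layer by layer.
    dL_dy = 2 * (y - target)   # how the loss changes with the output y
    dL_dw2 = dL_dy * h         # the last layer's weight is handled first...
    dL_dh = dL_dy * w2         # ...then the error signal is passed back
    dL_dw1 = dL_dh * x         # the first layer needs the second layer's result
    # Adjust both weights a small step against their gradients.
    w2 -= learning_rate * dL_dw2
    w1 -= learning_rate * dL_dw1

print(w2 * w1 * x)  # approaches the target, 6.0</code></pre>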
<p>So that’s the core algorithm of training a neural network:</p>
<ol>
<li>Present example image.</li>
<li>Calculate the loss.</li>
<li>Adjust the network weights and biases through backpropagation: calculate the gradient and make gradient-descent adjustments layer by layer.</li>
</ol>
<h2>Batch Size</h2>
<p>However, recall that the objective of the training is to adjust the weights and biases for <em>all</em> of the images, not just one. </p>
<p>So how does one train the network: one image at a time, or using the entire set of training images? Either choice is a possibility. </p>
<p class="has-global-color-8-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f4a1.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Ultimately the loss we want to minimize is the loss for the entire set of training samples, so a natural choice might be to run all samples through the network before making adjustments to the weights and biases. This is known as <strong>batch processing</strong>. </p>
<p>However, performing so many calculations before making adjustments can be very demanding on computer resources and can slow the training process down.</p>
<p class="has-global-color-8-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f4a1.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> How about adjusting weights and biases for each individual training sample? Optimum weights and biases will be different for each training sample, and this variation can introduce large randomness into the gradient descent. This is known as <strong>stochastic gradient descent</strong>.</p>
<p>To better understand the importance of this, refer to the hypothetical loss curve in Figure 5:</p>
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" width="471" height="282" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-28.png" alt="" class="wp-image-568110" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-28.png 471w, https://blog.finxter.com/wp-content/uplo...00x180.png 300w" sizes="(max-width: 471px) 100vw, 471px" /><figcaption><strong>Figure 5</strong>: local and global minimum</figcaption></figure>
</div>
<p>Notice that there is more than one minimum: there is a <strong>local minimum</strong> at point B, which is not quite the lowest loss, and a <strong>global minimum</strong> at point A that is truly the minimum where the loss is lowest. </p>
<p>It is entirely possible (even likely) to get loss curves like this, with multiple local minima, and it’s also possible for the network to get stuck in one of these local minima. </p>
<p>The randomness of single sample training can help knock the network out of a local minimum if it gets stuck in one, so there is some benefit to stochastic gradient descent. </p>
<p>However, the randomness can be so extreme that it can actually knock the network out of the true global minimum if it happens to reach it before a training cycle ends. This can slow the training as the network has to work back down to minimize the loss again.</p>
<p>So in practice, it turns out the best approach is to use <strong>minibatches</strong>. These are batch sizes of perhaps a few hundred samples that are run through the network, and <em>then</em> adjustments are made. </p>
<p>The network runs through minibatch after minibatch until the entire set of training samples has been processed. This has enough randomness to give the same benefit as stochastic gradient descent of pushing the network out of local minima, but not so much randomness that the loss gets worse.</p>
<p>Running through the entire set of training samples once is called an <strong>epoch</strong>. </p>
<p>Typically networks must run through many epochs to become fully trained. Also, the ordering and grouping of training samples within and between batches is randomized from epoch to epoch. This helps avoid <strong>overfitting</strong>. </p>
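<p>Here is a minimal sketch of this minibatch-and-epoch bookkeeping, reusing the toy one-weight model from earlier (the data, batch size, and learning rate are made up for illustration):</p>
<pre class="wp-block-code"><code>import random

# Made-up training set: inputs with labels following t = 2x.
samples = [(float(x), 2.0 * x) for x in range(1000)]
batch_size = 100
w, learning_rate = 0.0, 1e-6

for epoch in range(5):
    random.shuffle(samples)   # reshuffle the samples every epoch
    for i in range(0, len(samples), batch_size):
        minibatch = samples[i:i + batch_size]
        # One adjustment per minibatch: average the gradient over the batch.
        grad = sum(2 * (w * x - t) * x for x, t in minibatch) / len(minibatch)
        w -= learning_rate * grad

print(w)  # approaches 2.0 after a few epochs</code></pre>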
<p class="has-global-color-8-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f4a1.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Overfitting is when the network performs successfully on the training samples, but fails on samples it has not seen before. This is like a person memorizing a set of samples, rather than generalizing characteritics from those samples so that it can be successful on new samples.</p>
<p>After training, the network is tested on a <strong>test set</strong>: a set of samples the network has not seen before. This allows one to assess how well the trained network performs on unknown samples and to make sure overfitting has not occurred.</p>
<h2>How Neural Networks Learn</h2>
<p>So that is the full process of how neural networks learn (a framework-level code sketch follows this summary):</p>
<ol>
<li>Train the network by presenting it <strong>minibatches</strong> of samples from the <strong>training set</strong>.</li>
<li>The training algorithm calculates the <strong>loss</strong> for the minibatch.</li>
<li>The algorithm calculates the <strong>gradient</strong> of the loss.</li>
<li>The network adjusts <strong>weights</strong> and <strong>biases</strong> according to the gradient calculations, through the process of <strong>backpropagation</strong> and <strong>gradient descent</strong>.</li>
<li>Running this sequence through all training samples is called an <strong>epoch</strong>.</li>
<li>This is then repeated for multiple epochs, until the network is successfully trained on the training set.</li>
<li>Finally, the network is tested on a <strong>test set</strong> to make sure it works successfully and does not suffer from <strong>overfitting</strong>.</li>
</ol>
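<p>For completeness, here is how those steps map onto a modern framework. This sketch uses TensorFlow/Keras, with made-up random data standing in for real crowd images and labels:</p>
<pre class="wp-block-code"><code>import numpy as np
import tensorflow as tf

# Made-up stand-ins for crowd images (64 features each) and size labels.
x_train, y_train = np.random.rand(1000, 64), np.random.rand(1000, 1)
x_test, y_test = np.random.rand(200, 64), np.random.rand(200, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="sigmoid"),  # weights, biases, sigmoid activation
    tf.keras.layers.Dense(1),                         # single output: the crowd estimate
])
# Squared-error loss, minimized by stochastic gradient descent.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="mean_squared_error")

# Minibatches of 128, reshuffled every epoch, repeated for 20 epochs.
# The loss, gradients, and backpropagation all happen inside fit().
model.fit(x_train, y_train, batch_size=128, epochs=20, shuffle=True, verbose=0)

# Finally, test on samples the network has never seen.
print("test loss:", model.evaluate(x_test, y_test, verbose=0))</code></pre>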
<p>We hope you have found this lesson on how neural networks learn informative.</p>
<p>We wish you happy coding!</p>

