Tensors: The Vocabulary of Neural Networks

<div>
<p>In this article, we will introduce one of the core elements describing the mathematics of neural networks: <strong>tensors</strong>. <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f9ec.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<figure class="wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube"><a href="https://blog.finxter.com/tensors-the-vocabulary-of-neural-networks/"><img src="https://blog.finxter.com/wp-content/plugins/wp-youtube-lyte/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FVybtsVcIoSg%2Fhqdefault.jpg" alt="YouTube Video"></a><figcaption></figcaption></figure>
<p>Although you typically won’t work directly with tensors (they usually operate under the hood), it is important to understand what’s going on behind the scenes. In addition, you will often want to examine tensors so that you can look directly at the data, or at the arrays of weights and biases, so it’s important to be able to work with them.</p>
<p class="has-global-color-8-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f4a1.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Note</strong>: This article assumes you are familiar with how neural networks work. To review those basics, see the article <a href="https://blog.finxter.com/the-magic-of-neural-networks-how-they-work/" target="_blank" rel="noreferrer noopener">The Magic of Neural Networks: History and Concepts</a>. It also assumes you have some familiarity with <a href="https://blog.finxter.com/an-introduction-to-python-classes-inheritance-encapsulation-and-polymorphism/" data-type="post" data-id="30977" target="_blank" rel="noreferrer noopener">Python’s object oriented programming</a>.</p>
<p>Theoretically, we could use pure Python to implement neural networks. </p>
<ul>
<li>We could use <a rel="noreferrer noopener" href="https://blog.finxter.com/python-lists/" data-type="post" data-id="7332" target="_blank">Python lists</a> to represent <strong>data</strong> in the network; </li>
<li>We could use other lists representing <strong>weights and biases</strong> in the network; and </li>
<li>We could use <a rel="noreferrer noopener" href="https://blog.finxter.com/how-to-write-a-nested-for-loop-in-one-line-python/" data-type="post" data-id="11859" target="_blank">nested <code>for</code> loops</a> to perform the operations of multiplying the inputs by the connection weights.</li>
</ul>
<p>There are a few issues with this approach, however: pure Python, and the list data type in particular, is slow. Also, code built from nested <code>for</code> loops is not very readable.</p>
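<p>To make the readability problem concrete, here is a minimal pure-Python sketch of the kind of nested-loop code tensors replace; the matrix and vector values are illustrative:</p>

```python
# A 3x2 weight matrix as nested lists, and a 2-element input vector.
W = [[1, 4],
     [2, 5],
     [3, 6]]
x = [7, 8]

# Multiply the matrix by the vector with nested loops.
y = []
for row in W:                          # one pass per output neuron
    total = 0
    for weight, value in zip(row, x):  # weighted sum of the inputs
        total += weight * value
    y.append(total)

print(y)  # [39, 54, 69]
```

<p>Even for this tiny network, the loops obscure what is mathematically a single operation.</p>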
<p>Instead, the libraries that implement <a rel="noreferrer noopener" href="https://blog.finxter.com/how-neural-networks-learn/" data-type="post" data-id="568016" target="_blank">neural networks</a> in software packages such as <a rel="noreferrer noopener" href="https://blog.finxter.com/pytorch-developer-income-and-opportunity/" data-type="post" data-id="255891" target="_blank">PyTorch</a> use tensors, and they run much more quickly than pure Python. Also, as you will see, tensors allow much more readable descriptions of networks and their data.</p>
<h2 id="Tensors">Tensors</h2>
<p class="has-global-color-8-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/2139.png" alt="ℹ" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Tensors</strong> are essentially arrays of values. Since neural networks are essentially arrays of neurons, tensors are a natural fit for describing them. They can be used for describing the data, describing the network connection weights, and other things.</p>
<p>A one-dimensional tensor is known as a <strong>vector</strong>. Here is an example:</p>
<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" width="1024" height="100" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-75-1024x100.png" alt="" class="wp-image-616229" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-75-1024x100.png 1024w, https://blog.finxter.com/wp-content/uplo...300x29.png 300w, https://blog.finxter.com/wp-content/uplo...768x75.png 768w, https://blog.finxter.com/wp-content/uplo...age-75.png 1319w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>
<p>Vectors can also be written horizontally. Here’s the same vector written horizontally:</p>
<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" width="1024" height="49" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-76-1024x49.png" alt="" class="wp-image-616230" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-76-1024x49.png 1024w, https://blog.finxter.com/wp-content/uplo...300x14.png 300w, https://blog.finxter.com/wp-content/uplo...768x36.png 768w, https://blog.finxter.com/wp-content/uplo...age-76.png 1329w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>
<p>Switching a vector from vertical to horizontal, or vice versa, is called <strong>transposing</strong>, and is sometimes needed depending on the math specifics. We will not go into detail on this in this article (see <a href="https://blog.finxter.com/pandas-dataframe-t-and-transpose-method/" data-type="post" data-id="343967" target="_blank" rel="noreferrer noopener">here for more</a>).</p>
<p>Vectors are typically used to represent data in the network. For example, each individual element in a vector can represent the input value for each individual input neuron in the network.</p>
<h3>2D Tensor Matrix</h3>
<p>A two-dimensional tensor is known as a <strong>matrix</strong>. Here’s an example:</p>
<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" width="1024" height="75" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-77-1024x75.png" alt="" class="wp-image-616231" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-77-1024x75.png 1024w, https://blog.finxter.com/wp-content/uplo...300x22.png 300w, https://blog.finxter.com/wp-content/uplo...768x57.png 768w, https://blog.finxter.com/wp-content/uplo...age-77.png 1332w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>
<p>For a fully connected network, where each neuron in one layer connects to every neuron in the next layer, a matrix is typically used to represent all the connection weights. If there are <code>m</code> neurons connected to <code>n</code> neurons, you would need an <code>n x m</code> matrix (<code>n</code> rows, <code>m</code> columns) to describe all the connection weights.</p>
<p>Here’s an example of two neurons connected to three neurons. Here is the network, with connection weights included:</p>
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" width="442" height="402" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-78.png" alt="" class="wp-image-616232" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-78.png 442w, https://blog.finxter.com/wp-content/uplo...00x273.png 300w" sizes="(max-width: 442px) 100vw, 442px" /></figure>
</div>
<p>And here is the connection weights matrix:</p>
<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" width="1024" height="106" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-81-1024x106.png" alt="" class="wp-image-616241" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-81-1024x106.png 1024w, https://blog.finxter.com/wp-content/uplo...300x31.png 300w, https://blog.finxter.com/wp-content/uplo...768x80.png 768w, https://blog.finxter.com/wp-content/uplo...age-81.png 1331w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>
<h2 id="Why-We-Use-Tensors">Why We Use Tensors</h2>
<p>Before we finish introducing tensors, let’s use what we’ve seen so far to see why they’re so important to use when modeling neural networks. </p>
<p>Let’s introduce a two-element vector of data and run it through the network we just showed. </p>
<p class="has-base-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/2139.png" alt="ℹ" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Info</strong>: Recall neurons add together their weighted inputs, then run the result through an <a href="https://blog.finxter.com/how-neural-networks-learn/" data-type="post" data-id="568016" target="_blank" rel="noreferrer noopener">activation function</a>. </p>
<p>In this example, we are ignoring the activation function to keep things simple for the demonstration.</p>
<p>Here is our data vector:</p>
<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" width="1024" height="76" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-82-1024x76.png" alt="" class="wp-image-616242" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-82-1024x76.png 1024w, https://blog.finxter.com/wp-content/uplo...300x22.png 300w, https://blog.finxter.com/wp-content/uplo...768x57.png 768w, https://blog.finxter.com/wp-content/uplo...age-82.png 1328w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>
<p>Here’s a diagram depicting the operation:</p>
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" width="442" height="402" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-84.png" alt="" class="wp-image-616247" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-84.png 442w, https://blog.finxter.com/wp-content/uplo...00x273.png 300w" sizes="(max-width: 442px) 100vw, 442px" /></figure>
</div>
<p>Let’s calculate the operation (the neuron computations) by hand:</p>
<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" width="1024" height="236" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-83-1024x236.png" alt="" class="wp-image-616246" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-83-1024x236.png 1024w, https://blog.finxter.com/wp-content/uplo...300x69.png 300w, https://blog.finxter.com/wp-content/uplo...68x177.png 768w, https://blog.finxter.com/wp-content/uplo...age-83.png 1330w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>
<p>The final result is a 3 element vector:</p>
<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" width="1024" height="97" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-79-1024x97.png" alt="" class="wp-image-616236" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-79-1024x97.png 1024w, https://blog.finxter.com/wp-content/uplo...300x28.png 300w, https://blog.finxter.com/wp-content/uplo...768x73.png 768w, https://blog.finxter.com/wp-content/uplo...age-79.png 1333w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>
<p>If you have learned about matrices in grade school and remember doing <strong><a href="https://blog.finxter.com/numpy-matmul-operator/" data-type="post" data-id="374" target="_blank" rel="noreferrer noopener">matrix multiplication</a></strong>, you may note that what we just calculated is <em>identical</em> to matrix multiplication:</p>
<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" width="1024" height="99" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-80-1024x99.png" alt="" class="wp-image-616237" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-80-1024x99.png 1024w, https://blog.finxter.com/wp-content/uplo...300x29.png 300w, https://blog.finxter.com/wp-content/uplo...768x74.png 768w, https://blog.finxter.com/wp-content/uplo...age-80.png 1326w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>
<p class="has-base-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/2139.png" alt="ℹ" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Note</strong>: Recall matrix multiplication involves multiplying first matrix rows by second matrix columns element-wise, then adding elements together.</p>
<p>This is why tensors are so important for neural networks: <em>tensor math precisely describes neural network operation</em>.</p>
<p>As an added benefit, the equation above showing matrix multiplication is a much more succinct description than nested <code>for</code> loops would be. </p>
<p>If we introduce the nomenclature of bold lower case for a vector and bold upper case for a matrix, then the operation of vector data running through a neural network weight matrix is described by this very compact equation:</p>
<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" width="1024" height="51" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-85-1024x51.png" alt="" class="wp-image-616248" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-85-1024x51.png 1024w, https://blog.finxter.com/wp-content/uplo...300x15.png 300w, https://blog.finxter.com/wp-content/uplo...768x38.png 768w, https://blog.finxter.com/wp-content/uplo...age-85.png 1328w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>
<p>We will see later that matrix multiplication within PyTorch is a similarly compact code equation.</p>
<h2 id="Larger-dimensional-tensors">Higher Dimensional Tensors</h2>
<p>A three-dimensional (3D) tensor is known simply as a <em>tensor</em>. As you can see, the term <em>tensor</em> generically refers to <em>any dimensional array of numbers</em>. It’s just one-dimensional and two-dimensional tensors that have the unique names “vector” and “matrix” respectively.</p>
<p>You might think there is no need for three-dimensional and larger tensors, but they come up routinely in practice. </p>
<p>A grayscale image is clearly a two-dimensional tensor, in other words, a matrix. But a color image is actually three two-dimensional arrays, one each for red, green, and blue color channels. So a color image is essentially a three-dimensional tensor. </p>
<p>In addition, typically we process data in mini-batches. So if we’re processing a mini-batch of color images we have the three-dimensional aspect already noted, plus one more dimension of the list of images in the mini-batch. So a mini-batch of color images can be represented by a four-dimensional tensor.</p>
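<p>As a hedged sketch of the mini-batch idea, here is a four-dimensional tensor representing a batch of color images; the batch size and image dimensions are arbitrary example values:</p>

```python
import torch

# A mini-batch of 16 color images, each with 3 channels (red, green,
# blue) of 28x28 pixels, as a four-dimensional tensor of random values.
batch = torch.rand(16, 3, 28, 28)  # (batch, channels, height, width)

print(batch.shape)        # torch.Size([16, 3, 28, 28])
print(batch[0].shape)     # one image: torch.Size([3, 28, 28])
print(batch[0][0].shape)  # one color channel: torch.Size([28, 28])
```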
<h2 id="Tensors-in-Neural-Network-Libraries">Tensors in Neural Network Libraries</h2>
<p>One Python library that is well suited to working with arrays is <a rel="noreferrer noopener" href="https://blog.finxter.com/numpy-tutorial/" data-type="post" data-id="1356" target="_blank">NumPy</a>. In fact, NumPy is used by some users for implementing neural networks. One example is the <a href="https://blog.finxter.com/how-to-install-scikit-learn-in-python/" data-type="post" data-id="35974" target="_blank" rel="noreferrer noopener">scikit-learn</a> machine learning library which works with NumPy.</p>
<p>However, the PyTorch implementation of tensors is more powerful than NumPy arrays. PyTorch tensors are designed with neural networks in mind. PyTorch tensors have these advantages:</p>
<ol>
<li>PyTorch tensors have gradient calculations built in.</li>
<li>PyTorch tensors also support GPU calculations, substantially speeding up neural network calculations.</li>
</ol>
<p>However, if you are used to working with NumPy, you should feel fairly at home with PyTorch tensors: the commands to create them are slightly different, but they will feel familiar. For the rest of this article, we will focus exclusively on PyTorch tensors.</p>
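<p>In fact, you can move data between the two libraries directly. A brief sketch, with illustrative values:</p>

```python
import numpy as np
import torch

# Convert a NumPy array to a PyTorch tensor, and back again.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
t = torch.from_numpy(a)  # NumPy array -> PyTorch tensor
back = t.numpy()         # PyTorch tensor -> NumPy array

print(t)
# Note: from_numpy() shares memory with the original array,
# so modifying one is visible in the other.
```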
<h2 id="Tensors-in-PyTorch:-Creating-Them,-and-Doing-Math">Tensors in PyTorch: Creating Them, and Doing Math</h2>
<p>OK, let’s finally do some coding!</p>
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" width="498" height="466" src="https://blog.finxter.com/wp-content/uploads/2022/08/WhenTheCodingCodingGIF.gif" alt="" class="wp-image-616332"/></figure>
</div>
<p>First, make sure that you have PyTorch available, either by <a rel="noreferrer noopener" href="https://blog.finxter.com/how-to-install-pytorch-on-pycharm/" data-type="post" data-id="35142" target="_blank">installing</a> on your system or by accessing it through online Jupyter notebook servers. </p>
<p class="has-base-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f30d.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Reference</strong>: See <a rel="noreferrer noopener" href="https://PyTorch.org/get-started/locally/" data-type="URL" data-id="https://PyTorch.org/get-started/locally/" target="_blank">PyTorch’s website</a> for instructions on how to install it on your own system.</p>
<p>See this Finxter article for a review of available online Jupyter notebook services:</p>
<p class="has-base-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f30d.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Recommended Tutorial</strong>: <a href="https://blog.finxter.com/survey-of-python-notebook-options">Top 4 Jupyter Notebook Alternatives for Machine Learning</a></p>
<p>For this article, we will use the online Jupyter notebook service provided by Google called <a rel="noreferrer noopener" href="https://blog.finxter.com/how-to-check-your-tensorflow-version-in-colab/" data-type="post" data-id="29991" target="_blank">Colab</a>. PyTorch is already installed in Colab; we simply have to import it as a module to use it:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="python" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">import torch</pre>
<p>There are a number of ways of creating tensors in PyTorch. </p>
<p>Typically you would be creating tensors by importing data from data sets available through PyTorch, or by converting your own data into tensors. </p>
<p>For now, since we simply want to demonstrate the use of tensors we will use basic commands to create very simple tensors.</p>
<p>You can create a tensor from a list:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="python" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_list = torch.tensor([[1,2], [3,4]])
t_list</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="python" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[1, 2], [3, 4]])</pre>
<p>Note that when we evaluate the tensor variable, the output is labeled to indicate it is a tensor. This means it is a PyTorch <strong>tensor object</strong>: an object within PyTorch that behaves like a mathematical tensor, plus has various features provided by PyTorch (such as supporting gradient calculations and GPU processing).</p>
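<p>A brief sketch of those two PyTorch-specific features; the tensor values and the toy loss function are illustrative:</p>

```python
import torch

# 1. Gradient tracking: requires_grad=True tells PyTorch to record
#    operations so gradients can later be computed with backward().
t = torch.tensor([2.0, 3.0], requires_grad=True)
loss = (t ** 2).sum()  # a toy scalar computed from the tensor
loss.backward()        # compute d(loss)/dt
print(t.grad)          # tensor([4., 6.]), since d(x^2)/dx = 2x

# 2. GPU support: move a tensor to the GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
t_gpu = t.detach().to(device)
print(t_gpu.device)
```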
<p>You can create tensors filled with zeros, filled with ones, or filled with random numbers:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_zeros = torch.zeros(2,3)
t_zeros</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[0., 0., 0.], [0., 0., 0.]])</pre>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_ones = torch.ones(3,2)
t_ones
</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[1., 1.], [1., 1.], [1., 1.]])</pre>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_rand = torch.rand(3,2,4)
t_rand
</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[[0.9661, 0.3915, 0.0263, 0.2753], [0.7866, 0.0503, 0.3963, 0.1334]], [[0.4085, 0.1816, 0.2827, 0.3428], [0.9923, 0.4543, 0.0872, 0.0771]], [[0.2451, 0.6048, 0.8686, 0.8148], [0.7930, 0.4150, 0.6125, 0.3401]]])</pre>
<p>An important attribute to be familiar with to understand the shape of a tensor is the appropriately named <strong><code><a href="https://blog.finxter.com/how-to-get-shape-of-array/" data-type="post" data-id="268" target="_blank" rel="noreferrer noopener">shape</a></code></strong> attribute:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_rand.shape
# Output: torch.Size([3, 2, 4])</pre>
<p>This shows you that tensor “<code>t_rand</code>” is a three-dimensional tensor composed of three matrices of two rows by four columns each.</p>
<p class="has-global-color-8-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f4a1.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Note</strong>: The number of dimensions of a tensor is referred to as its <strong><code>rank</code></strong>. A one-dimensional tensor, or vector, is a rank-1 tensor; a two-dimensional tensor, or matrix, is a rank-2 tensor; a three-dimensional tensor is a rank-3 tensor, and so on.</p>
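<p>You can check the rank of any tensor directly with the <code>ndim</code> attribute; a quick sketch:</p>

```python
import torch

# The ndim attribute reports a tensor's number of dimensions (its rank).
vector = torch.tensor([1, 2, 3])         # rank-1 tensor
matrix = torch.tensor([[1, 2], [3, 4]])  # rank-2 tensor
tensor3d = torch.rand(3, 2, 4)           # rank-3 tensor

print(vector.ndim)    # 1
print(matrix.ndim)    # 2
print(tensor3d.ndim)  # 3
```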
<p>Let’s do some math with tensors – let’s add two tensors together:</p>
<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" width="1024" height="75" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-86-1024x75.png" alt="" class="wp-image-616249" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-86-1024x75.png 1024w, https://blog.finxter.com/wp-content/uplo...300x22.png 300w, https://blog.finxter.com/wp-content/uplo...768x56.png 768w, https://blog.finxter.com/wp-content/uplo...age-86.png 1325w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>
<p>Note the tensors are added together <a href="https://blog.finxter.com/how-to-add-two-lists-element-wise-in-python/" data-type="post" data-id="391288" target="_blank" rel="noreferrer noopener">element-wise</a>. Now here it is in PyTorch:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_first = torch.tensor([[1,2], [3,4]])
t_second = torch.tensor([[5,6],[7,8]])
t_sum = t_first + t_second
t_sum</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[ 6, 8], [10, 12]])</pre>
<p>Let’s add a scalar, that is, an independent number (or a rank-0 tensor!) to a tensor:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_add3 = t_first + 3
t_add3</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[4, 5], [6, 7]])</pre>
<p>Note that the scalar is added to each element of the tensor. The same applies when multiplying a scalar by a tensor:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_times3 = t_first * 3
t_times3</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[ 3, 6], [ 9, 12]])</pre>
<p>The same kind of thing applies to raising a tensor to a power; that is, the <a href="https://blog.finxter.com/python-exponent-operator/" data-type="post" data-id="31606" target="_blank" rel="noreferrer noopener">power operation</a> is applied element-wise:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_squared = t_first ** 2
t_squared
</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[ 1, 4], [ 9, 16]])</pre>
<p>Recall that after summing weighted inputs, the neuron processes the result through an activation function. The same behavior applies here as well: when a vector is processed through an <a href="https://blog.finxter.com/how-neural-networks-learn/" data-type="post" data-id="568016" target="_blank" rel="noreferrer noopener">activation function</a>, the operation is applied to the vector element-wise.</p>
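<p>A quick sketch of element-wise activation, using PyTorch’s built-in <code>relu</code> and <code>sigmoid</code> functions with illustrative input values:</p>

```python
import torch

# Activation functions operate on each element independently.
v = torch.tensor([-1.0, 0.0, 2.0])

print(torch.relu(v))     # tensor([0., 0., 2.])
print(torch.sigmoid(v))  # sigmoid applied to each element
```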
<p>Earlier, we pointed out that matrix multiplication is an important part of neural network calculations. </p>
<p>There are two ways to do this in PyTorch: you can use the <code><a href="https://blog.finxter.com/python-__matmul__-magic-method/" data-type="post" data-id="36136" target="_blank" rel="noreferrer noopener">matmul</a></code> function:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_matmul1 = torch.matmul(t_first, t_second)
t_matmul1</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[19, 22], [43, 50]])</pre>
<p>Or you can use the matrix multiplication symbol “<code><a href="https://blog.finxter.com/numpy-matmul-operator/" data-type="post" data-id="374" target="_blank" rel="noreferrer noopener">@</a></code>“:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_matmul2 = t_first @ t_second
t_matmul2
</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[19, 22], [43, 50]])</pre>
<p>Recall previously, we showed running an input signal through a neural network, where a vector of input signals was multiplied by a matrix of connection weights. </p>
<p>Here is that in PyTorch:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">x = torch.tensor([[7],[8]])
x</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[7], [8]])</pre>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">W = torch.tensor([[1,4], [2,5], [3,6]])
W</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[1, 4], [2, 5], [3, 6]])</pre>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">y = W @ x
y
</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[39],
        [54],
        [69]])</pre>
<p>Note how compact and readable that is instead of doing nested <code>for</code> loops.</p>
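<p>For comparison, here is a sketch of the same matrix-vector product written with explicit nested <code>for</code> loops in plain Python (no PyTorch needed):</p>

```python
# The same product y = W @ x, written with nested for loops.
# W is 3x2 and x is 2x1, so the result y is 3x1.
W = [[1, 4], [2, 5], [3, 6]]
x = [[7], [8]]

rows, inner, cols = len(W), len(x), len(x[0])
y = [[0] * cols for _ in range(rows)]
for i in range(rows):          # each row of W
    for j in range(cols):      # each column of x
        for k in range(inner): # accumulate the dot product
            y[i][j] += W[i][k] * x[k][j]

print(y)  # [[39], [54], [69]]
```

<p>Three loops and an accumulator versus a single <code>@</code>; this is exactly the bookkeeping the tensor notation hides.</p>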
<p>Other math can be done with tensors as well, but we have covered the operations most relevant to neural networks. If you find you need additional math with your tensors, check the PyTorch documentation or do a web search.</p>
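<p>As a small taste of what else is available, here are a few other common tensor operations (elementwise arithmetic, reductions, and the transpose); all of these are standard PyTorch tensor operations:</p>

```python
import torch

t = torch.tensor([[1., 2.], [3., 4.]])

print(t + 10)     # elementwise addition of a scalar
print(t * t)      # elementwise (Hadamard) product -- NOT matrix multiply
print(t.sum())    # tensor(10.)
print(t.mean())   # tensor(2.5000)
print(t.T)        # transpose of a 2D tensor
```

<p>Note that <code>*</code> multiplies element by element, while <code>@</code> does matrix multiplication; confusing the two is a classic source of silent bugs.</p>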
<h2 id="Indexing-and-Slicing-Tensors">Indexing and Slicing Tensors</h2>
<p><a href="https://blog.finxter.com/introduction-to-slicing-in-python/" data-type="post" data-id="731" target="_blank" rel="noreferrer noopener">Slicing</a> allows you to examine subsets of your data and better understand how the dataset is constructed. You may find you will use this a lot.</p>
<h3>Indexing and Slicing: PyTorch vs. NumPy vs. Python Lists</h3>
<p><a rel="noreferrer noopener" href="https://blog.finxter.com/numpy-boolean-indexing/" data-type="post" data-id="2877" target="_blank">Indexing</a> and slicing tensors work the same way they do with NumPy arrays. Note that the syntax differs from Python lists: with lists, a separate pair of brackets is used for each level of nesting, whereas with PyTorch a single pair of brackets contains all the dimensions, separated by commas.</p>
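<p>A minimal sketch of the difference, using a small nested list and its tensor equivalent:</p>

```python
import torch

nested = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
t = torch.tensor(nested)

# Python lists: one pair of brackets per nesting level.
print(nested[1][0][1])   # 6

# PyTorch (and NumPy): one pair of brackets, all dimensions
# separated by commas.
print(t[1, 0, 1])        # tensor(6)
```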
<p>Let’s find the item in tensor “<code>t_rand</code>” at the 2nd element, first row, third column. First, here is “<code>t_rand</code>” again:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_rand</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[[0.9661, 0.3915, 0.0263, 0.2753],
         [0.7866, 0.0503, 0.3963, 0.1334]],

        [[0.4085, 0.1816, 0.2827, 0.3428],
         [0.9923, 0.4543, 0.0872, 0.0771]],

        [[0.2451, 0.6048, 0.8686, 0.8148],
         [0.7930, 0.4150, 0.6125, 0.3401]]])</pre>
<p>And here is the item at the 2nd element, first row, and third column (don’t forget indexing starts at <a rel="noreferrer noopener" href="https://blog.finxter.com/daily-python-puzzle-list-indexing/" data-type="post" data-id="84" target="_blank">zero</a>):</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_rand[1, 0, 2]
# Output: tensor(0.2827)</pre>
<p>Let’s look at the slice second element, first row, second through third columns:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_rand[1, 0, 1:3]
# tensor([0.1816, 0.2827])</pre>
<p>Let’s look at the entire 3rd column:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_rand[:, :, 2]</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[0.0263, 0.3963],
        [0.2827, 0.0872],
        [0.8686, 0.6125]])</pre>
<p class="has-global-color-8-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/2139.png" alt="ℹ" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Important Slicing Tip</strong>: In the above, we use the standard Python convention that a blank before a “<code>:</code>” means “start from the beginning”, and a blank after a “<code>:</code>” means “go all the way to the end”. So a “<code>:</code>” alone means “include everything from beginning to end”.</p>
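<p>To see this convention in action, here is a small sketch using an <code>arange</code> tensor with the same shape as <code>t_rand</code> (the values here are just 0–23 rather than the random values above):</p>

```python
import torch

# An arange tensor with the same shape as t_rand: (3, 2, 4).
t = torch.arange(24).reshape(3, 2, 4)

# ":" alone means "everything along this dimension".
col = t[:, :, 2]     # every element, every row, column index 2
same = t[..., 2]     # "..." stands in for ":" on all leading dimensions

print(col)           # tensor([[ 2,  6], [10, 14], [18, 22]]) -- shape (3, 2)
```

<p>The <code>...</code> (Ellipsis) shorthand is handy when a tensor has many dimensions and you only care about the last one or two.</p>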
<p>A likely use for slicing is to look at one full array (i.e., a matrix) within a set of arrays, for example, one image out of a set of images. </p>
<p>Let’s pretend our “<code>t_rand</code>” tensor is a list of images. We may wish to sample just a few “images” to get an idea of what they are like. </p>
<p>Let’s examine the first “image” in our tensor (our “list of images”):</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_rand[0]</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[0.9661, 0.3915, 0.0263, 0.2753],
        [0.7866, 0.0503, 0.3963, 0.1334]])</pre>
<p>And here is the last array (“image”) in tensor “<code>t_rand</code>”:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">t_rand[-1]</pre>
<p>Output:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tensor([[0.2451, 0.6048, 0.8686, 0.8148],
        [0.7930, 0.4150, 0.6125, 0.3401]])</pre>
<p>Using small tensors to demonstrate indexing can be instructive, but let’s see it in action for real. Let’s examine some real datasets with real images.</p>
<h2>Real Example</h2>
<p>We won’t describe the following in detail, except to note that we are importing various libraries that let us download and work with a dataset. The last line creates a transform that converts tensors into PIL images:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">import torch
from torch.utils.data import Dataset
from torchvision import datasets
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt
import torchvision.transforms as T

conv_to_PIL = T.ToPILImage()
</pre>
<p>The following downloads the Caltech 101 dataset, which is a collection of over 8000 images in 101 categories:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">caltech101_data = datasets.Caltech101(
    root="data",
    download=True,
    transform=ToTensor(),
)
</pre>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">Extracting data/caltech101/101_ObjectCategories.tar.gz to data/caltech101
Extracting data/caltech101/Annotations.tar to data/caltech101</pre>
<p>This has created a <strong>dataset object</strong> which is a container for the data. These objects can be indexed like lists:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">len(caltech101_data)
# 8677

type(caltech101_data[0])
# tuple

len(caltech101_data[0])
# 2</pre>
<p>The above code shows the dataset contains 8677 items. Looking at the first item of the set, we can see they are tuples of 2 items each. Here are the types of the items in the tuples:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">type(caltech101_data[0][0])
# torch.Tensor

type(caltech101_data[0][1])
# int</pre>
<p>The two items in the tuple are the image as a tensor, and an integer code corresponding to the image’s category.</p>
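<p>To see what such an item looks like without downloading anything, here is a sketch using a dummy tensor standing in for a dataset item. The shape and value range mimic what <code>ToTensor()</code> produces (a float tensor with values in [0, 1] and shape channels × height × width), but the tensor itself is random, not a real Caltech 101 image:</p>

```python
import torch

# A dummy dataset item: (image tensor, integer label), mimicking
# the (tensor, int) tuples that caltech101_data yields.
item = (torch.rand(3, 337, 510), 0)  # 3 channels, 337 tall, 510 wide

image, label = item
print(image.shape)   # torch.Size([3, 337, 510])
print(label)         # 0
```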
<p>Jupyter notebooks, including Colab, provide a convenient <strong><code>display()</code></strong> function that renders images inline. First, we use the conversion function we created earlier to convert our tensors to a <a href="https://blog.finxter.com/pillow-to-convert-image-formats-png-jpg-and-more/" data-type="post" data-id="130706" target="_blank" rel="noreferrer noopener">PIL image</a>, then we display the images.</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">img = conv_to_PIL(caltech101_data[0][0])
display(img)</pre>
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" width="510" height="337" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-88.png" alt="" class="wp-image-616252" srcset="https://blog.finxter.com/wp-content/uploads/2022/08/image-88.png 510w, https://blog.finxter.com/wp-content/uplo...00x198.png 300w" sizes="(max-width: 510px) 100vw, 510px" /></figure>
</div>
<p>We can use indexing to sample and display a few other images from the set:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">img = conv_to_PIL(caltech101_data[1234][0])
display(img)</pre>
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" width="266" height="300" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-87.png" alt="" class="wp-image-616250"/></figure>
</div>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">img = conv_to_PIL(caltech101_data[4321][0])
display(img)</pre>
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" width="266" height="300" src="https://blog.finxter.com/wp-content/uploads/2022/08/image-87.png" alt="" class="wp-image-616251"/></figure>
</div>
<h2 id="Summary">Summary</h2>
<p>We have learned a number of things:</p>
<ol>
<li>What tensors are</li>
<li>Why tensors are key mathematical objects for describing and implementing neural networks</li>
<li>Creating tensors in PyTorch</li>
<li>Doing math with tensors in PyTorch</li>
<li>Doing indexing and slicing of tensors in PyTorch, especially to examine images in datasets</li>
</ol>
<p>We hope you have found this article informative. We wish you happy coding!</p>
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<h2>Programmer Humor</h2>
<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><a href="https://imgs.xkcd.com/comics/computers_vs_humans.png" target="_blank" rel="noreferrer noopener"><img loading="lazy" src="https://blog.finxter.com/wp-content/uploads/2022/06/image-163.png" alt="" class="wp-image-435467" width="578" height="282" srcset="https://blog.finxter.com/wp-content/uploads/2022/06/image-163.png 578w, https://blog.finxter.com/wp-content/uplo...00x146.png 300w" sizes="(max-width: 578px) 100vw, 578px" /></a><figcaption><em>It’s hard to train deep learning algorithms when most of the positive feedback they get is sarcastic.</em> — from <a href="https://imgs.xkcd.com/comics/computers_vs_humans.png" data-type="URL" data-id="https://imgs.xkcd.com/comics/computers_vs_humans.png" target="_blank" rel="noreferrer noopener">xkcd</a></figcaption></figure>
</div>
</div>


https://www.sickgaming.net/blog/2022/08/...-networks/