Now, what is a tensor? The name immediately sounds kind of complicated, but obviously this is going to be a primary aspect of TensorFlow, considering the similarity in names. Essentially, a tensor is a vector generalized to higher dimensions. So what is a vector? Well, if you've ever done any linear algebra, or even some basic vector calculus, you should hopefully know what that is. The way I like to describe it is as a data point, and the reason we call it a vector is that it doesn't have a fixed number of coordinates: a two-dimensional data point has, say, an x and a y value, or an x1 and an x2 value. A vector can have any number of dimensions. One dimension simply means one number; two dimensions means two numbers, like an x and a y value on a two-dimensional graph; three dimensions means three numbers, for a three-dimensional graph; four or five dimensions show up when we talk about things like image data and video data; and we can keep going and going with vectors. So, to make sure I haven't butchered anything, here's the formal definition from the actual TensorFlow website: "A tensor is a generalization of vectors and matrices to potentially higher dimensions. Internally, TensorFlow represents tensors as n-dimensional arrays of base datatypes." We'll understand what that means in a second, but hopefully that makes sense. Since tensors are so important to TensorFlow, they're the main object that we're going to be working with, manipulating, and viewing, and they're the main object that gets passed around through our program.
Now, what we can see here is that each tensor represents a partially defined computation that will eventually produce a value. Just like we talked about with graphs and sessions, when we create our program we're going to be creating a bunch of tensors, and TensorFlow is going to be creating them as well, and those store partially defined computations in the graph. Later, when we actually build the graph and have the session running, we will run different parts of the graph, which means we'll execute different tensors and be able to get different results from them. Now, each tensor has what we call a datatype and a shape, and that's what we're going to get into now. A datatype is simply what kind of information is stored in the tensor. It's very rare that we see datatypes other than numbers, although there is a string datatype and a few others as well; I haven't included all of them here because they're not that important. Some examples are float32, int32, string, and others. The shape is simply the representation of the tensor in terms of what dimensions it has, and I don't want to explain the shape until we can see some examples to really dial it in. But here are some examples of how we would create different tensors. What you can do is simply call tf.Variable with the value and the datatype of your tensor. In this case, we've created a string tensor, which stores one string, with the datatype tf.string; a number tensor, which stores some integer value, with the datatype tf.int16; and a floating-point tensor, which stores a simple floating-point value. These tensors have an empty shape, which simply means they are scalars. A scalar value, and you might hear me say this a lot, simply means just one value.
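The three tensors just described can be written out like this. This is a sketch assuming TensorFlow 2.x, and it passes the datatype with the explicit `dtype=` keyword (the safest form, since `tf.Variable`'s second positional argument is not the dtype):

```python
import tensorflow as tf

# One tensor of each datatype mentioned above.
string_tensor = tf.Variable("this is a string", dtype=tf.string)
number_tensor = tf.Variable(324, dtype=tf.int16)
floating_tensor = tf.Variable(3.567, dtype=tf.float32)

# Each tensor carries a datatype and a shape.
print(string_tensor.dtype)   # the tf.string datatype
print(number_tensor.shape)   # () -- an empty shape, i.e. a scalar
```

All three store a single value, which is why their shape prints as `()`.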
That's all it means. When we talk about vector values, that typically means more than one value, and with matrices it just keeps going up, but scalar simply means one number. So that's what we get for the different datatypes and creating tensors. We're not really going to do this very much in our program, but those are some examples of how we do it. I've imported TensorFlow, so I can actually run these, though we're not going to get any output by running this code, because, well, there's nothing to see. But now we're going to talk about the rank, or degree, of tensors. Another word for rank is degree; these terms are used interchangeably, and they simply mean the number of dimensions involved in the tensor. When we create a tensor of rank zero, which is what we've done up here, we call that a scalar. The reason this has rank zero is that it's simply one value; there's zero dimensionality to it. Whereas here we have an array, and when we have an array or a list, we immediately have at least rank one. The reason is that an array can store more than one value in one dimension, right? So I could put in something like "test", "ok", and "Tim" (which is my name), and we can run this. We're not going to get any output here, obviously, but this is what we would call a rank-one tensor, because it's simply one list, one array, which means one dimension. That's also what a vector is. Now, what we're looking at here is a rank-two tensor. The reason it's rank two is that we have a list inside of a list, or in this case multiple lists inside of a list. The way you can determine the rank of a tensor, at least in Python with this representation, is the deepest level of nesting in the list.
So here, we can see we have a list inside of a list, and then another list inside of that outer list, so this gives us rank two. This is what we typically call a matrix. Again, this one is of datatype tf.string. All of the tensors we've created have a datatype, a rank, and a shape, and we're going to talk about the shape in a second. To determine the rank of a tensor, we can simply use the method tf.rank. Notice that when I run this on the rank-two tensor, we get the shape, which is blank, and then numpy=2, which simply means this is of rank two. Now, if I do the same for that rank-one tensor and print it out, we get numpy=1, telling us it's simply rank one. And if I want to use one of the tensors from up above, let's try tf.rank(number): we print that here and get numpy=0, because that's rank zero, right? So we'll go back to what we had, the rank-two tensor. Those are the examples we wanted to look at. Okay, shapes of a tensor. This is a little bit different. What a shape tells us is how many items we have in each dimension. In this case, when we look at rank2_tensor.shape (.shape is an attribute of all of our tensors), we get (2, 2). Looking up here, we have two elements in the first dimension and two elements in the second dimension; that's pretty much what this is telling us. Now let's look at the shape of the rank-one tensor: we get (3,). Because it's only rank one, notice we only get one number, whereas when we had rank two, we got two numbers, and they told us how many elements were in each of those lists, right?
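The rank and shape checks just described can be sketched like this, assuming TensorFlow 2.x (the variable names `rank1_tensor` and `rank2_tensor` follow the naming used above):

```python
import tensorflow as tf

rank0_tensor = tf.Variable("Tim", dtype=tf.string)                      # a scalar
rank1_tensor = tf.Variable(["test", "ok", "Tim"], dtype=tf.string)      # one list
rank2_tensor = tf.Variable([["test", "ok"], ["test", "yes"]], dtype=tf.string)

# tf.rank reports the number of dimensions: the deepest nesting level.
print(tf.rank(rank0_tensor))   # rank 0
print(tf.rank(rank2_tensor))   # rank 2

# .shape reports how many items are in each dimension.
print(rank1_tensor.shape)      # (3,)
print(rank2_tensor.shape)      # (2, 2)
```

Note how the rank matches the number of entries in the shape: a rank-one tensor has a one-number shape, a rank-two tensor has a two-number shape.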
So if I go and add another element to just one of the lists, and we have a look now at the shape, oops, I've got to run this first, and we get an error: "Can't convert non-rectangular Python sequence to Tensor." So I need a uniform number of elements in each list here; I can't just do what I did there. We'll add a third element to the other list as well. Now we can run this, we shouldn't get any issues, and let's have a look at the shape. Notice we now get (2, 3): we have two lists, and each of those lists has three elements inside of it. So that's how the shape works. Now, I could go ahead and add another list in here if I wanted to. Okay, let's run this, hopefully no errors, looks like we're good. If we look at the shape again, we now get a shape of (3, 3), because we have three interior lists, and in each of those lists we have three elements. That's pretty much how that works. Again, we could go even further here: we could put another list inside each of these lists, which would give us a rank-three tensor, and the shape would then be three numbers representing how many elements we have in each of those dimensions. Okay, so, changing shape. This is something we need to do a lot of the time when we're dealing with tensors in TensorFlow. Essentially, there are many different shapes that can represent the same number of elements. Up here, we have three elements in a rank-one tensor, and here, we have nine elements in a rank-two tensor. There are ways to reshape this data so that we keep the same number of elements but in a different shape. For example, I could flatten this: take all of these elements and throw them into a rank-one tensor that is simply nine elements long. So how do we do that? Well, let me just run this code for us here and have a look at this.
So what we've done is create tensor1 with tf.ones, which creates a tensor completely populated with ones, of whatever shape we give it, in this case shape (1, 2, 3). Let's print out tensor1 just so I can better illustrate this. Look at the shape: (1, 2, 3). We have one interior list, which is what we're looking at here; inside that list we have two lists; and each of those lists has three elements. That's the shape we just defined. Now, we have six elements in here, so there must be a way to reshape this data to keep six elements but in a different shape. In fact, we can reshape it into a (2, 3, 1) shape: we're going to have two lists, three lists inside each of those, and one element inside each of those. So let's have a look at tensor2, actually, what am I doing, we can just print all of them here and have a look at them. For tensor1 we saw the shape, and for tensor2 we can see that we have two lists, inside each of those lists we have three lists, and inside each of those we have one element. Now, finally, tensor3 has a shape of (3, -1). What is the -1? When we put -1 here, TensorFlow infers what this number actually needs to be. So by defining an initial dimension of 3, what this says is: okay, we're going to have three lists at the first level, and then we need to figure out, based on how many elements we have, what this next dimension should be (this is the reshape method, which I didn't even talk about, and which we'll go into in a second). Now, obviously, this is going to need to be three, so (3, 3), right?
Actually, is that even correct? Let's check the shape: (3, 2), my bad. That last number actually needs to be two; I don't know why I said three there. But you get the point, right? We have three lists and six elements, so this number obviously needs to be two, because three times two gives us six. And that is essentially how you can determine how many elements are in a tensor just by looking at its shape. Now, this is the reshape method: all we need to do is call tf.reshape, give it the tensor, and give it the shape we want to change it to. As long as that's a valid shape, meaning the numbers in it multiply to the number of elements in the tensor, it will reshape it for us and give us that newly shaped data. This is very useful; we'll actually use it a lot as we go through TensorFlow, so make sure you're familiar with how it works. All right, now we're moving on to types of tensors. There are a bunch of different types of tensors that we can use. So far, the only one we've looked at is Variable: we've created tf.Variables and just hard-coded our own tensors. We're not really going to do that very much; it was just for the example. We have these different types: Constant, Placeholder, SparseTensor, and Variable, and there are actually a few other ones as well. We're not going to talk about the middle two that much, although the difference between Constant and Variable is important to understand, so let's read this: "With the exception of Variable, all of these tensors are immutable, meaning their value may not change during execution." Essentially, when we create any of these other tensors, we have a constant value, which means that whatever we've defined is not going to change, whereas a Variable tensor can change.
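The Variable-versus-Constant distinction can be sketched like this, assuming TensorFlow 2.x (`assign` is the method a Variable exposes for changing its value; a plain tensor has no such method):

```python
import tensorflow as tf

mutable = tf.Variable([1, 2, 3])
constant = tf.constant([1, 2, 3])

# A Variable's value may change during execution...
mutable.assign([4, 5, 6])
print(mutable.numpy())        # now holds 4, 5, 6

# ...but a constant tensor is immutable: there is no assign method,
# so the only option is to build a *new* tensor from it.
new_constant = constant * 2   # creates a fresh tensor; `constant` is unchanged
print(constant.numpy())       # still 1, 2, 3
```

This is why we reach for tf.Variable when we expect a value to change later, and a constant tensor otherwise.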
So that's just something to keep in mind: we use Variable when we think we might need to change the value of that tensor later on, whereas if we're using a constant-value tensor, we cannot change it. We can obviously copy it, but we can't change it. Okay, evaluating tensors. We're almost at the end of this section, I know, and then we'll get into some deeper code. There will be some times throughout this guide where we need to evaluate a tensor, and what we need to do to evaluate a tensor is create a session. Now, we're not going to do this that much, but I just figured I'd mention it to make sure that you guys are aware of what I'm doing if I start typing this later on. Essentially, sometimes we have some tensor object, and somewhere in our code we actually need to evaluate it to be able to do something else. To do that, all we need to do is use this kind of default template block of code, where we say "with tf.Session() as sess" (it doesn't really matter what name we put there), and then we can call .eval() on whatever the tensor's name is. Calling that will have TensorFlow figure out what it needs to do to find the value of this tensor; it will evaluate it and then allow us to actually use that value. I've put this in here so you guys can read through it if you want to understand in more depth how it works. The source for this is straight from the TensorFlow website; a lot of this is copied directly from there, and I've just added my own spin to it to make it a little bit easier to understand. Okay, so we've done all that. Let's just go in here and do a few examples of reshaping, to make sure that everyone's on the same page, and then we'll move on to actually talking about some simple learning algorithms.
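The session template mentioned above, written out as a runnable sketch: sessions belong to the TensorFlow 1.x graph API, so under TensorFlow 2.x this assumes the `tf.compat.v1` shim with eager execution disabled.

```python
import tensorflow as tf

# TF 2.x runs eagerly by default; sessions are part of the TF 1.x
# graph API, so we switch into graph mode through the compat layer.
tf.compat.v1.disable_eager_execution()

t = tf.ones([2, 2])  # a graph tensor: it has no value until evaluated

with tf.compat.v1.Session() as sess:   # the name after `as` doesn't matter
    value = t.eval()                   # evaluates the tensor in this (default) session

print(value)  # a 2x2 array of ones
```

Inside the `with` block, the session is installed as the default session, which is why the bare `t.eval()` call knows where to run.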
So I want to create a tensor that we can mess with and reshape, so what I'm going to do is just say t = tf.ones. Again, tf.ones creates a tensor of whatever shape we give it with all the values set to one (we can also do tf.zeros, which just gives us a bunch of zeros). Let's create some crazy shape and visualize it, say five by five by five by five. Obviously, if we want to figure out how many elements are going to be in here, we need to multiply these values: I believe it's going to be 625, because that's five to the power of four, so five times five times five times five. Let's actually print t and have a look at what this is. So we run this now, and you can see the output we're getting. Obviously this is a pretty crazy-looking tensor, but you get the point, and it tells us the shape is (5, 5, 5, 5). Now watch what happens when I reshape this tensor. If I want to take all of these elements and flatten them out, what I can do is simply say t = tf.reshape(t, [625]), reshaping the tensor t to just the shape (625,). If we do this and run, oops, I've got to print t at the bottom after we've done that, if I can spell the print statement correctly, you can see that now we've just got this massive list of 625 ones. And if we wanted to reshape this to something like 125 by something, and maybe we weren't that good at math and couldn't figure out that this last value should be five, we could put a -1, which means TensorFlow will infer what that dimension needs to be. And now when we look at it, what we get is 125 rows of five elements each, call them matrices or whatever you want, and our shape is (125, 5). So that is essentially how that works. That's how we reshape, that's how we deal with tensors.
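The reshaping walkthrough above, written out as code (a sketch assuming TensorFlow 2.x; the shape is five by five by five by five, so 5^4 = 625 elements):

```python
import tensorflow as tf

t = tf.ones([5, 5, 5, 5])          # 5 * 5 * 5 * 5 = 625 ones
print(t.shape)                     # (5, 5, 5, 5)

flat = tf.reshape(t, [625])        # flatten everything into one dimension
print(flat.shape)                  # (625,)

# Let TensorFlow infer the last dimension with -1: 625 / 125 = 5.
inferred = tf.reshape(t, [125, -1])
print(inferred.shape)              # (125, 5)
```

The -1 is only valid if the remaining dimensions divide the element count evenly; tf.reshape raises an error otherwise.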
That's how we create variables, and how all of that works in terms of sessions and graphs. Hopefully that gives you enough of an understanding of tensors, shapes, ranks, and values so that when we move into the next part of the tutorial, where we're actually writing code, and I promise we're going to be writing some more advanced code, you'll understand how it works. So with that being said, let's get into the next section.