**5 Useful Pytorch Functions to Get Started**

## Learn some useful PyTorch functions

“ Start like morning, end it like evening ” — Aurora

*What is PyTorch?*

- PyTorch is an open-source machine learning library used to develop and train neural-network-based deep learning models. It was developed primarily by Facebook’s AI Research group, released in 2016, and is written in C++ and Python. PyTorch is immensely popular in the research labs of Facebook, Microsoft, Uber, and others, though it is not yet as common on production servers, which are dominated by frameworks like TensorFlow (backed by Google). Unlike TensorFlow, which uses static computation graphs, PyTorch uses dynamic computation graphs, which allow greater flexibility when building complex architectures.

# Why is PyTorch so important?

- PyTorch is an optimized tensor library for deep learning on GPUs and CPUs; GPUs provide the greatest flexibility and speed. Neural networks are easy to build and handle with the torch.nn module, and PyTorch can serve as a replacement for NumPy that harnesses the power of GPUs. If you are familiar with the Python programming language, you can easily write code using PyTorch.

# Some PyTorch functions we will learn in this tutorial:

- torch.tensor
- torch.stack
- torch.unbind
- torch.split
- torch.cat

Installation: First, you have to install PyTorch on your system.

Now, the important part is to import the PyTorch library.
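Since the original setup cell is not shown, here is a minimal sketch, assuming a pip-based environment:

```python
# PyTorch is not part of the standard library; install it first, e.g.:
#   pip install torch
import torch

print(torch.__version__)  # confirms the library imported correctly
```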

# Function 1 — torch.tensor()

It returns a tensor object. To create a new tensor, the arguments are:

- data: The actual data to be stored in the tensor.
- dtype: Type of data. Note that type of all the elements of a tensor must be the same.
- device: To tell if GPU or CPU should be used.
- requires_grad: If you want the tensor to be differentiable (to compute gradient), set to True.

This creates a tensor with shape (10,2)
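The original code cell is not shown; a minimal reconstruction might look like this (the particular values and the dtype choice are assumptions):

```python
import torch

# Build a hypothetical (10, 2) tensor from a nested Python list
t = torch.tensor([[i, i + 1] for i in range(10)], dtype=torch.float32)
print(t.shape)  # torch.Size([10, 2])
```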

As we can see, an empty tensor with size (0) is created.
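A sketch of the empty-tensor cell, passing an empty list as the data argument:

```python
import torch

empty = torch.tensor([])  # no data, so the tensor has zero elements
print(empty.shape)  # torch.Size([0])
```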

A tensor with string values cannot be created.
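A sketch of what happens when string data is passed; tensors only support numeric and boolean dtypes, so the call raises an error:

```python
import torch

err = None
try:
    torch.tensor(["a", "b"])  # strings are not a valid tensor dtype
except (TypeError, ValueError) as e:
    err = e
print(type(err).__name__)
```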

torch.tensor() forms the core of any PyTorch project, quite literally, as it forms tensors.

# Function 2 — torch.stack()

Concatenates a sequence of tensors along a new dimension. All tensors need to be of the same size.

Here we stack the two matrices x and y along the first dimension (dim=0).
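A minimal sketch of this cell (the example values are assumptions):

```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
y = torch.tensor([[5, 6], [7, 8]])
s0 = torch.stack((x, y))  # dim=0 by default: a new leading dimension
print(s0.shape)  # torch.Size([2, 2, 2])
```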

Here we stack the two matrices x and y along the second dimension (dim=1).
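The same idea with dim=1 might look like this (values assumed):

```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
y = torch.tensor([[5, 6], [7, 8]])
s1 = torch.stack((x, y), dim=1)  # pairs x[i] with y[i] along a new axis
print(s1[0])  # the rows x[0] and y[0], stacked together
```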

This fails to stack because x and y do not have the same shape.
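A sketch of the failing case, using two tensors of different sizes:

```python
import torch

x = torch.ones(2, 3)
y = torch.ones(3, 3)  # different shape from x
err = None
try:
    torch.stack((x, y))  # stack requires equal-sized tensors
except RuntimeError as e:
    err = e
print(type(err).__name__)  # RuntimeError
```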

Use this function to stack multiple matrices into a single tensor along a chosen dimension.

# Function 3 — torch.unbind()

This function removes a dimension from a tensor and returns a tuple of slices of the tensor without the removed dimension. The arguments it takes are:

- A tensor
- Dimension to be removed (0 by default)

Returns a tuple of slices.

The 0th dimension (out of dimensions 0 and 1) is removed, so we get 3 slices of the tensor in a tuple.
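A minimal sketch of this cell, with a 3-row tensor (the values are assumptions):

```python
import torch

t = torch.tensor([[1, 2], [3, 4], [5, 6]])
slices = torch.unbind(t)  # dim=0 by default: one slice per row
print(slices)
```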

As seen above, now the data is sliced into data of 10 students for each day and stored in a tuple of tensors, that is, 7 different tensors are created, each corresponding to a day of the week. This sliced data can be used to further apply logic based on that particular day of the week.
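The students-and-days example might be reconstructed like this (the score values are hypothetical):

```python
import torch

# Hypothetical data: scores of 10 students recorded over 7 days
scores = torch.arange(70).reshape(7, 10)
per_day = torch.unbind(scores, dim=0)  # one tensor per day of the week
print(len(per_day))  # 7
```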

We had created a tensor with 2 dimensions and passed a value of 2 to the torch.unbind() function. Since 2 corresponds to the third dimension, which doesn’t exist in our example, the ‘Dimension out of range’ error is produced.
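A sketch of that failing call on a 2-D tensor:

```python
import torch

t = torch.ones(3, 2)  # a 2-D tensor: valid dims are 0 and 1
err = None
try:
    torch.unbind(t, dim=2)  # dim 2 does not exist
except IndexError as e:
    err = e
print(err)  # Dimension out of range
```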

This is a powerful PyTorch function that could be useful when you want to work on particular slices of the data along a dimension of the tensor.

# Function 4 — torch.split()

Splits the tensor into chunks. Each chunk is a view of the original tensor.

Here the split function splits the 3x3 matrix v into a chunk of its first two rows (shape 2x3) and a chunk of the last row (shape 1x3).
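A minimal sketch of this cell, assuming the default dim=0 and a chunk size of 2:

```python
import torch

v = torch.arange(9).reshape(3, 3)
chunks = torch.split(v, 2)  # chunks of 2 rows along dim 0
print([tuple(c.shape) for c in chunks])  # [(2, 3), (1, 3)]
```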

Here the function splits the 3x8 tensor z into three matrices of shapes 3x3, 3x3, and 3x2 along dimension 1.
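That cell might look like this, passing a list of section sizes:

```python
import torch

z = torch.arange(24).reshape(3, 8)
parts = torch.split(z, [3, 3, 2], dim=1)  # explicit column sections
print([tuple(p.shape) for p in parts])  # [(3, 3), (3, 3), (3, 2)]
```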

When an integer is specified for split_size_or_sections, torch.split will split the tensor even if that number does not divide it evenly; however, when a list of integers is passed, the sum of the numbers in the list must exactly equal the size of the tensor along the given dimension, or a runtime error is raised.
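A sketch of the failing case, where the list does not sum to the dimension size:

```python
import torch

z = torch.ones(3, 8)
err = None
try:
    torch.split(z, [3, 3], dim=1)  # 3 + 3 != 8, so this fails
except RuntimeError as e:
    err = e
print(type(err).__name__)  # RuntimeError
```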

# Function 5 — torch.cat()

Concatenates or combines the given sequence of tensors in the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty.

The tensor x is concatenated thrice row-wise.
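A minimal sketch of this cell (the example values are assumptions):

```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
c0 = torch.cat((x, x, x), dim=0)  # three copies stacked row-wise
print(c0.shape)  # torch.Size([6, 2])
```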

The tensor x is concatenated thrice column-wise.
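The column-wise version of the same cell might be:

```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
c1 = torch.cat((x, x, x), dim=1)  # three copies placed side by side
print(c1.shape)  # torch.Size([2, 6])
```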

The dimension should be within the shape of the tensor.
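A sketch of what happens when the dimension falls outside the tensor's shape:

```python
import torch

x = torch.tensor([[1, 2], [3, 4]])  # 2-D: valid dims are 0 and 1
err = None
try:
    torch.cat((x, x), dim=2)  # dim 2 does not exist
except IndexError as e:
    err = e
print(type(err).__name__)  # IndexError
```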

The concatenation function can be used whenever two or more tensors are to be combined. It concatenates same-shaped tensors; when the source data is split into several parts, torch.cat can be used to join them all back together.

# Conclusion:

This concludes our look at 5 PyTorch functions to get started. They are easy to learn and very useful in data science. This was a beginner-friendly introduction to PyTorch, and there is much more. From here it would be a good idea to explore the documentation, create your own tensors, and play around with other functions. As we tighten our grip on the basics day by day, we can move forward to neural networks and deep learning.

# Reference Links:


- Official documentation for tensor operations: https://pytorch.org/docs/stable/torch.html

Thanks For Reading…