PyTorch is an open source deep learning framework built to be flexible and modular for research, with the stability and support needed for production deployment. It provides a Python package for high-level features such as tensor computation (similar to NumPy) with strong GPU acceleration, and TorchScript for an easy transition between eager mode and graph mode. Recent releases add graph-based execution, distributed training, mobile deployment, and quantization.
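As a quick illustration of the tensor library, here is a minimal sketch that mirrors NumPy-style operations and moves the computation to a GPU when one is available:

import torch

# Create a 3x3 random matrix and a vector of ones
a = torch.rand(3, 3)
b = torch.ones(3)

# Operations mirror NumPy: broadcasting, matrix multiply, reductions
c = a @ b + 1.0
print(c.sum())

# Move the same computation to a GPU when one is available
if torch.cuda.is_available():
    a, b = a.to("cuda"), b.to("cuda")
    print((a @ b).device)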
While static graphs are great for production deployment, the research process involved in developing the next great algorithm is truly dynamic. PyTorch uses a technique called reverse-mode auto-differentiation, which lets developers change how a network behaves arbitrarily, with minimal overhead, speeding up research iterations.
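A minimal sketch of that dynamism: the autograd graph is rebuilt on every forward pass, so ordinary Python control flow can change the computation from one iteration to the next:

import torch

x = torch.randn(3, requires_grad=True)

# Plain Python control flow; the graph is recorded as operations run
y = x * 2
while y.norm() < 100:
    y = y * 2

# Reverse-mode autodiff walks the recorded graph backward
y.sum().backward()
print(x.grad)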
With TorchScript, PyTorch provides ease-of-use and flexibility in eager mode, while seamlessly transitioning to graph mode for speed, optimization, and functionality in C++ runtime environments.
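For example, here is a sketch of compiling an eager-mode function with torch.jit.script (the function itself is illustrative); the saved artifact can later be loaded from a C++ runtime:

import torch

def eager_fn(x: torch.Tensor) -> torch.Tensor:
    # Data-dependent control flow is captured by the TorchScript compiler
    if x.sum() > 0:
        return x * 2
    return -x

# Compile to a serializable, optimizable graph representation
scripted = torch.jit.script(eager_fn)
print(scripted(torch.randn(4)))
print(scripted.code)  # inspect the generated TorchScript

# Save for later use, e.g. from a C++ application
scripted.save("eager_fn.pt")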
Optimize performance in both research and production with native support for asynchronous execution of collective operations and peer-to-peer communication that is accessible from Python and C++.
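As a sketch of the asynchronous collective API, here is a single-process gloo group, purely for illustration; real jobs launch one process per rank (e.g. with torchrun):

import torch
import torch.distributed as dist

# One-process group on the gloo backend, for illustration only
dist.init_process_group(
    backend="gloo",
    init_method="tcp://127.0.0.1:29500",
    rank=0,
    world_size=1,
)

t = torch.ones(4)

# async_op=True returns a handle immediately; the collective runs in
# the background and wait() blocks until it has completed
work = dist.all_reduce(t, op=dist.ReduceOp.SUM, async_op=True)
work.wait()
print(t)  # with world_size=1 the tensor is unchanged

dist.destroy_process_group()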
PyTorch supports an end-to-end workflow from Python to deployment on iOS and Android. It extends the PyTorch API to cover common preprocessing and integration tasks needed for incorporating ML in mobile applications.
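A hedged sketch of the export side of that workflow, using a toy model and one common packaging path (TorchScript plus the mobile optimizer and lite-interpreter format):

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# A tiny model standing in for a real network
model = torch.nn.Sequential(
    torch.nn.Linear(16, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
).eval()

# Script the model, apply mobile-oriented optimizations, and save it
# in the format consumed by the iOS and Android runtimes
scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("model.ptl")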
Install PyTorch. Multiple installation options are supported, including building from source, installing with pip or conda, and using pre-built images on cloud services such as AWS. For the full set of options, see the Get Started page on pytorch.org.
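Once installed, a quick sanity check from Python (the CUDA line simply prints False on CPU-only installs):

import torch

# Confirm the installed version and whether a CUDA GPU is visible
print(torch.__version__)
print(torch.cuda.is_available())
print(torch.rand(2, 3))  # small random tensor as a smoke test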
Review documentation and tutorials to familiarize yourself with PyTorch's tensor library and neural networks.
If you are new to machine learning and PyTorch, the getting-started tutorials on pytorch.org are a good place to begin.
Check out tools, libraries, pre-trained models, and datasets to support your development needs.
Build, train, and evaluate your neural network. Here is an example that defines a simple convolutional network:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()
print(net)
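To round out the train-and-evaluate step, here is a minimal sketch of a single optimization step on this network, continuing from the block above and using random tensors as stand-in data (a real workflow would iterate over a DataLoader):

import torch.optim as optim

# Random stand-in data: a batch of four 32x32 single-channel images
inputs = torch.randn(4, 1, 32, 32)
labels = torch.randint(0, 10, (4,))

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

optimizer.zero_grad()              # clear gradients from any previous step
outputs = net(inputs)              # forward pass
loss = criterion(outputs, labels)  # compare predictions to labels
loss.backward()                    # backward pass: compute gradients
optimizer.step()                   # update the parameters
print(loss.item())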