PyTorch on ARC: Difference between revisions
Revision as of 20:07, 31 January 2022
Intro to Torch
Checkpointing
Installing PyTorch
You will need a working local Conda install in your home directory first. If you do not have it yet, please follow these instructions to have it installed.
Once you have your own Conda, activate it with
$ ~/software/init-conda
We will install PyTorch into its own conda environment.
It is very important to create the environment with python and pytorch in the same command. This way conda can select the best pytorch and python combination.
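A combined creation command might look like the sketch below; the environment name <code>pytorch</code>, the channel, and the package spellings are illustrative assumptions, not fixed by this guide, so adjust them to your cluster's instructions:

```shell
# Create python and pytorch together in one command, so conda can
# resolve a mutually compatible combination in a single solve.
# Environment name and channel here are assumptions, not ARC policy.
conda create -n pytorch -c pytorch python pytorch

# Activate the environment before using it.
conda activate pytorch
```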
Test script
torch-gpu-test.py:
#! /usr/bin/env python
# -------------------------------------------------------
import torch
# -------------------------------------------------------
print("Defining torch tensors:")
x = torch.Tensor(5, 3)  # uninitialized 5x3 tensor (contents are arbitrary)
print(x)
y = torch.rand(5, 3)
print(y)
# -------------------------------------------------------
# let us run the following only if CUDA is available
if torch.cuda.is_available():
    print("CUDA is available.")
    x = x.cuda()
    y = y.cuda()
    print(x + y)
else:
    print("CUDA is NOT available.")
# -------------------------------------------------------
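The test script can then be run inside the activated environment; a minimal sketch, assuming the environment was named <code>pytorch</code> as above (the message printed depends on whether a CUDA device is visible on the node):

```shell
# Run the GPU test script in the pytorch environment
# (environment name "pytorch" is an assumption from the setup step above).
conda activate pytorch
python torch-gpu-test.py
```

On a login node without GPUs you would normally expect the "CUDA is NOT available." branch; on a GPU node the tensors are moved to the device and their sum is printed.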