PyTorch on ARC
Installing PyTorch
You will need a working local Conda install in your home directory first. If you do not have it yet, please follow these instructions to have it installed.
Once you have your own Conda, activate it with
$ ~/software/init-conda
We will install PyTorch into its own conda environment.
It is very important to create the environment with python and pytorch in the same command. This way conda can select the best pytorch and python combination.
$ conda create -n pytorch python pytorch-gpu torchvision
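If you want to check which PyTorch build conda selected (for example, whether it is a CUDA-enabled build), you can list the relevant packages in the new environment. This check is optional, and the exact package names and versions you see will differ:

$ conda list -n pytorch | grep -i -E "pytorch|cuda|python"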
Once it is done, activate your pytorch environment:
$ conda activate pytorch
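As a quick, optional sanity check you can ask Python for the installed torch version and whether a GPU is visible:

$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

On the login node the second value will be False; this is expected, as explained below.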
You can test your installation with the torch-gpu-test.py script shown below.
Copy and paste the text into a file and run it from the command line:
$ python torch-gpu-test.py
If you try this on the login node, it should tell you that GPUs are not available. This is normal, as the login node does not have any. You will need a GPU node to test the GPUs.
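One way to run the test on a GPU node is to submit it as a batch job. The sketch below is only an illustration: it assumes the cluster scheduler is Slurm and that a GPU can be requested with --gres=gpu:1; the partition name, time, and memory limits are placeholders that you should replace with values from the ARC documentation, and the job script file name is arbitrary.

torch-gpu-test.slurm:

#!/bin/bash
#SBATCH --job-name=torch-gpu-test
#SBATCH --partition=gpu          # placeholder: use a real ARC GPU partition
#SBATCH --gres=gpu:1             # request one GPU
#SBATCH --time=00:10:00
#SBATCH --mem=4G

# set up conda in the job shell the same way you do interactively
~/software/init-conda            # or: source ~/software/init-conda, if it must be sourced
conda activate pytorch

python torch-gpu-test.py

Submit it with

$ sbatch torch-gpu-test.slurm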
Once you know that your pytorch environment is working properly, you can add more packages to the environment using conda.
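For example (the packages named here are only illustrations, not requirements):

$ conda install numpy scipy matplotlib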
To deactivate the environment use the
$ conda deactivate
command.
Test script
torch-gpu-test.py:
#! /usr/bin/env python
# -------------------------------------------------------
import torch
# -------------------------------------------------------
print("Defining torch tensors:")
x = torch.Tensor(5, 3)
print(x)
y = torch.rand(5, 3)
print(y)
# -------------------------------------------------------
# let us run the following only if CUDA is available
if torch.cuda.is_available():
    print("CUDA is available.")
    x = x.cuda()
    y = y.cuda()
    print(x + y)
else:
    print("CUDA is NOT available.")
# -------------------------------------------------------
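If you want a bit more detail when the test runs on a GPU node, you could add something like the following at the end of the script. This is an optional addition, not part of the original script; torch.cuda.get_device_name() is a standard PyTorch call.

# optional: report which GPU PyTorch is using
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))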