PyTorch on ARC
= Installing PyTorch =
=== Test script ===
<code>torch-gpu-test.py</code>:
<syntaxhighlight lang=python>
#! /usr/bin/env python
# -------------------------------------------------------
import torch
# -------------------------------------------------------
print("Defining torch tensors:")
x = torch.Tensor(5, 3)    # uninitialized 5x3 tensor: the printed values are arbitrary
print(x)
y = torch.rand(5, 3)      # 5x3 tensor of uniform random values in [0, 1)
print(y)
# -------------------------------------------------------
# run the GPU part only if CUDA is available
if torch.cuda.is_available():
    print("CUDA is available.")
    x = x.cuda()          # move both tensors to the GPU
    y = y.cuda()
    print(x + y)          # the addition now runs on the GPU
else:
    print("CUDA is NOT available.")
# -------------------------------------------------------
</syntaxhighlight>
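The script above moves tensors to the GPU with the older <code>.cuda()</code> call. As a minimal sketch (not part of the test script itself), the same check can be written in the device-agnostic style that current PyTorch documentation recommends, so the identical code runs on both CPU-only and GPU nodes:

<syntaxhighlight lang=python>
#! /usr/bin/env python
# Device-agnostic sketch: use the GPU if one is visible to the job, else fall back to the CPU.
import torch

# choose the device once and reuse it for every tensor
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

x = torch.rand(5, 3, device=device)   # allocate directly on the chosen device
y = torch.rand(5, 3, device=device)
print(x + y)                          # the sum is computed on that device

if device.type == "cuda":
    # report which GPU the scheduler assigned to this job
    print("GPU:", torch.cuda.get_device_name(device))
</syntaxhighlight>

Either version prints "CUDA is available." (or the device name) only when the job actually has a GPU allocated, which makes it a quick sanity check for the resource request described in the next section.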
= Requesting GPU Resources for PyTorch Jobs =