
PyTorch Basics (declaring tensors, tensor operations, model parameters/epoch/loss)

0. torch version check

import torch

print("Output : ", torch.__version__)


1. Declaring a torch.tensor (torch.zeros/torch.ones)

torch.Tensor data types reference: https://pytorch.org/docs/stable/tensors.html

 


import torch

x = torch.tensor([1,2,3,4,5,6])
print(x)
x = torch.zeros((5,6))
print(x)
x = torch.ones((5,6), dtype = torch.int8)
print(x)

ex) Creating simple tensors

import torch
x = torch.rand((2,3))
print(x)
x = torch.full((3,3), 100)
print(x)
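
Every tensor also exposes its shape, dtype, and device as attributes, which is handy for checking what a constructor actually produced. A minimal sketch:

import torch

x = torch.full((3, 3), 100)
print(x.shape)   # torch.Size([3, 3])
print(x.dtype)   # inferred from the fill value (torch.int64 on recent PyTorch versions)
print(x.device)  # cpu, unless the tensor was created on a GPU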

2. Tensor operations

 

  • Simple addition
import torch
x = torch.tensor([1,2,3,4])
y = torch.tensor([4,4,4,4])
z = x + y
print(z)

  • Addition with a bias (broadcasting; see the shape check below)
import torch
x = torch.tensor([[[1,2,3],
                 [4,5,6],
                 [7,8,9]]])
y = torch.tensor([100, 200, 300])
z = x + y
print(z)
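
The bias is added by broadcasting: y with shape (3,) is aligned to the last dimension of x with shape (1, 3, 3) and repeated across the rows. A minimal sketch that just checks the shapes involved:

import torch

x = torch.tensor([[[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]]])      # shape (1, 3, 3)
y = torch.tensor([100, 200, 300])    # shape (3,)
z = x + y                            # y is broadcast to (1, 3, 3) before the addition
print(z.size())                      # torch.Size([1, 3, 3])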

  • Simple multiplication
import torch
x = torch.tensor([9,8,7,6])
z = x ** 2 # element-wise square (** is the exponentiation operator)
print(z)
print(z*x) # z*x = x^3, element-wise

  • Matrix multiplication (torch.mm can produce both an inner product and an outer product, depending on the operand shapes; see the torch.matmul note below)
import torch
x = torch.tensor([[1,2,3,4]])
y = torch.tensor([[4],[4],[4],[4]])
z1 = torch.mm(x,y)
z2 = torch.mm(y,x)

print(x.size())
print(y.size())
print(z1)
print(z2)
print(x*y)
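
torch.mm only accepts 2-D tensors; torch.matmul (or the @ operator) covers the same case and additionally handles batched and 1-D inputs. A minimal sketch with the same shapes:

import torch

x = torch.tensor([[1, 2, 3, 4]])        # (1, 4)
y = torch.tensor([[4], [4], [4], [4]])  # (4, 1)

print(torch.matmul(x, y))  # same result as torch.mm(x, y): tensor([[40]])
print(x @ y)               # @ is shorthand for matmul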

  • Slicing
import torch

# Slicing
x = torch.tensor([range(10), range(10,20)])
print(x.size())
y = x[1, 4:8] # y = x[1][4:8]
print(x)
print(y)
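
Tensor slicing follows the usual Python slice syntax, so step sizes and negative indices work as well. A minimal sketch:

import torch

x = torch.arange(10)   # tensor([0, 1, ..., 9])
print(x[::2])          # every second element: tensor([0, 2, 4, 6, 8])
print(x[-3:])          # last three elements: tensor([7, 8, 9])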

  • Transpose
import torch

# Transpose
x = torch.zeros((3,5))
print(x)
x = x.transpose(0,1)
print(x)
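
transpose swaps exactly two dimensions; to reorder more than two dimensions at once, permute can be used. A minimal sketch:

import torch

x = torch.zeros((2, 3, 5))
print(x.transpose(0, 2).size())   # torch.Size([5, 3, 2])
print(x.permute(2, 0, 1).size())  # torch.Size([5, 2, 3])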

  • Squeeze/Unsqueeze
import torch

# Squeeze/Unsqueeze
x = torch.rand((1,1,3,4))
y = x.squeeze()
print(y.size())
print(y.unsqueeze(1).size())
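
squeeze() with no argument removes every size-1 dimension at once; passing a dimension index removes only that one. A minimal sketch with the same shape as above:

import torch

x = torch.rand((1, 1, 3, 4))
print(x.squeeze().size())    # torch.Size([3, 4])
print(x.squeeze(0).size())   # torch.Size([1, 3, 4]); only dim 0 is removed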


3. Autograd

  • Autograd (automatic differentiation): obtains the gradients of tensors
  • Autograd computes gradients for every operation performed on a tensor
  • Setting requires_grad=True makes PyTorch track every operation on that Tensor
import torch
x = torch.ones((3,3), requires_grad=True)
print(x)
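
Once requires_grad is set, every operation on the tensor records a grad_fn node that backward() later walks through. A quick check:

import torch

x = torch.ones((3, 3), requires_grad=True)
y = x + 2
print(y.requires_grad)  # True: y inherits the tracking
print(y.grad_fn)        # e.g. <AddBackward0 object at ...>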

  • Once the forward pass is finished and backward() is called, the gradients are computed automatically from the tracked operations
import torch
w = torch.tensor([[2.0, -3.0]], requires_grad=True)
x = torch.tensor([[-1.0], [-2.0]], requires_grad=True)
b = torch.tensor([[-3.0]], requires_grad=True)

def sigmoid(x):
    return 1/(1+torch.exp(-x))

y = torch.mm(w,x) + b
y = sigmoid(y)
print(y)
y.backward()
print(w.grad)
print(x.grad)
print(b.grad)
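
The printed gradients can be checked by hand: with z = w·x + b and y = sigmoid(z), the chain rule gives dy/db = sigmoid(z)(1 - sigmoid(z)), dy/dw = sigmoid'(z)·xᵀ, and dy/dx = sigmoid'(z)·wᵀ. A minimal sketch that recomputes these values directly, using the same w, x, b as above:

import torch

w = torch.tensor([[2.0, -3.0]])
x = torch.tensor([[-1.0], [-2.0]])
b = torch.tensor([[-3.0]])

z = torch.mm(w, x) + b   # z = 1.0
s = torch.sigmoid(z)     # sigmoid(z) ~= 0.7311
ds = s * (1 - s)         # sigmoid'(z) ~= 0.1966

print(ds * x.t())  # matches w.grad: tensor([[-0.1966, -0.3932]])
print(ds * w.t())  # matches x.grad: tensor([[ 0.3932], [-0.5898]])
print(ds)          # matches b.grad: tensor([[0.1966]])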


4. Creating a model

 

  • Model definition
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 5),
                      nn.ReLU(),
                      nn.Linear(5, 5),
                      nn.ReLU(),
                      nn.Linear(5, 5),
                      nn.ReLU(),
                      nn.Linear(5, 3),
                      nn.ReLU(),
                      nn.Linear(3, 1),
                      nn.ReLU())

print(model)

  • Creating an input
x = torch.rand((5, 3))
out = model(x)
print(out.size())
print(out)

  • Model parameters
for param in model.parameters():
    print(param)
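
The parameters can also be counted to get the total number of trainable weights. A minimal sketch, assuming the same model as above:

total = sum(p.numel() for p in model.parameters())
print("trainable parameters:", total)  # 102 for the Sequential model above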

  • Training the model / computing the loss
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(3, 5),
                      nn.ReLU(),
                      nn.Linear(5, 5),
                      nn.ReLU(),
                      nn.Linear(5, 5),
                      nn.ReLU(),
                      nn.Linear(5, 3),
                      nn.ReLU(),
                      nn.Linear(3, 1),
                      nn.ReLU())

optimizer = optim.Adam(model.parameters(), lr=1e-2)
criterion = nn.MSELoss()
epochs = 10
for epoch in range(epochs):
    inputs = torch.tensor([[0.1, 0.2, 0.3],
                         [0.4, 0.5, 0.6]])
    targets = torch.tensor([[0.1],
                            [0.2]])
    predictions = model(inputs)
    loss = criterion(predictions, targets)
    print("{}: loss: {:.4f}".format(epoch, loss))
    optimizer.zero_grad() # required: clear the gradients accumulated in the previous step
    loss.backward()
    optimizer.step()
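
After the loop, the trained model can be evaluated without building an autograd graph by wrapping the forward pass in torch.no_grad(). A minimal sketch, reusing model and inputs from the loop above:

model.eval()              # good habit, although this model has no dropout/batchnorm layers
with torch.no_grad():     # disable gradient tracking for inference
    predictions = model(inputs)
print(predictions)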
