Pytorch trains the basic process of a network in five steps
2022-04-23 07:17:00 【Breeze_】
- step 1. Load the data
- step 2. Define the network
- step 3. Define the loss function and optimizer
- step 4. Train the network, looping over steps 4.1 to 4.6 until the scheduled number of epochs is reached
  - step 4.1 Load a batch of data
  - step 4.2 Zero the gradients
  - step 4.3 Compute the forward pass
  - step 4.4 Compute the loss
  - step 4.5 Compute the gradients (backward pass)
  - step 4.6 Update the weights
- step 5. Save the weights
```python
# Training a classifier
import torch
import torch.nn as nn
import torch.utils.data
import torchvision.datasets
import torchvision.transforms as transforms
from torch import optim


def train():
    '''Training'''
    '''1. Load the data'''
    transform = transforms.Compose(
        [
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
        ]
    )
    trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=False, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
    testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=False, transform=transform)
    testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
    classes = (
        'plane', 'car', 'bird', 'cat', 'deer',
        'dog', 'frog', 'horse', 'ship', 'truck'
    )
    '''2. Define the network'''
    net = LeNet()
    '''3. Define the loss function and optimizer'''
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
    '''CUDA speed-up'''
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    net.to(device)
    # For multiple GPUs, wrap the model with nn.DataParallel
    '''4. Train the network'''
    print('Start training')
    for epoch in range(3):
        running_loss = 0.0
        for i, data in enumerate(trainloader, 0):
            inputs, labels = data               # 1. Load a batch of data
            inputs = inputs.to(device)
            labels = labels.to(device)
            optimizer.zero_grad()               # 2. Zero the gradients
            outputs = net(inputs)               # 3. Compute the forward pass
            loss = criterion(outputs, labels)   # 4. Compute the loss
            loss.backward()                     # 5. Compute the gradients
            optimizer.step()                    # 6. Update the weights
            running_loss += loss.item()
            if i % 20 == 19:
                print('epoch:', epoch, 'loss:', running_loss / 20)
                running_loss = 0.0
    print('Training done')
    '''5. Save the model parameters'''
    torch.save(net.state_dict(), 'cifar_LeNet.pth')


if __name__ == '__main__':
    train()
```
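The script above assumes a `LeNet` class that the article never shows. A minimal LeNet-5-style definition for CIFAR-10's 3-channel 32x32 images could look like the following; the exact layer sizes are an assumption, not taken from the original article:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LeNet(nn.Module):
    '''LeNet-5-style convolutional network for 3x32x32 inputs.'''

    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)    # 32x32 -> 28x28
        self.pool = nn.MaxPool2d(2, 2)     # halves the spatial size
        self.conv2 = nn.Conv2d(6, 16, 5)   # 14x14 -> 10x10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))  # conv + ReLU + pool: -> 6x14x14
        x = self.pool(F.relu(self.conv2(x)))  # conv + ReLU + pool: -> 16x5x5
        x = torch.flatten(x, 1)               # flatten all dims except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)                    # raw logits for CrossEntropyLoss
```

The saved `state_dict` can later be restored into a fresh instance with `net.load_state_dict(torch.load(path))`.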
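The training script builds `testloader` but never uses it. A sketch of how the trained network could be evaluated on the test set (the `net` and `testloader` names follow the code above; the helper itself is not from the original article):

```python
import torch


def evaluate(net, testloader, device=torch.device('cpu')):
    '''Return top-1 accuracy of `net` over `testloader`.'''
    net.to(device)
    net.eval()                      # switch off dropout / batch-norm updates
    correct = 0
    total = 0
    with torch.no_grad():           # no gradients needed for evaluation
        for inputs, labels in testloader:
            inputs, labels = inputs.to(device), labels.to(device)
            outputs = net(inputs)
            predicted = outputs.argmax(dim=1)   # class with highest logit
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    return correct / total
```

Calling `evaluate(net, testloader, device)` after training prints nothing itself; the returned fraction can be reported however the caller prefers.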
Copyright notice
This article was created by [Breeze_]. Please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/04/202204230610323060.html