Explanation of the nn.Module class
2022-04-23 09:11:00 【Graduate students are not late】
1 Introduction to torch.nn.Module
- It is the base class of all neural networks.
- All the networks we write should inherit this class.
- A minimal example of such a subclass is sketched below; a fuller Model class appears in section 2.1.
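A minimal sketch of inheriting from torch.nn.Module (the TinyNet name and layer sizes are made up for illustration): a subclass calls the parent's __init__, defines its submodules there, and defines its computation in forward.

import torch

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)  # submodules assigned here are registered automatically

    def forward(self, x):
        return self.fc(x)  # the computation the network performs

net = TinyNet()
print(net(torch.randn(1, 4)).shape)  # torch.Size([1, 2])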
2 add_module(name, module) Method
- Adds a submodule to the current module.

Method: add_module(name, module)

Adds a child module to the current module. The module can be accessed as an attribute using the given name.

Parameters

name (string) – name of the child module. The child module can be accessed from this module using the given name.

module (Module) – child module to be added to the module.
2.1 Code implementation
- The code below can be run directly; it was adapted from the original post:
https://blog.csdn.net/m0_46653437/article/details/112649366
import torch
import torch.nn as nn

torch.manual_seed(seed=20200910)

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = torch.nn.Sequential(  # Input torch.Size([64, 1, 28, 28])
            torch.nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),  # Output torch.Size([64, 64, 28, 28])
            torch.nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),  # Output torch.Size([64, 128, 28, 28])
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(stride=2, kernel_size=2)  # Output torch.Size([64, 128, 14, 14])
        )
        self.dense = torch.nn.Sequential(  # Input torch.Size([64, 14*14*128])
            torch.nn.Linear(14*14*128, 1024),  # Output torch.Size([64, 1024])
            torch.nn.ReLU(),
            torch.nn.Dropout(p=0.5),
            torch.nn.Linear(1024, 10)  # Output torch.Size([64, 10])
        )
        # Submodules registered here but never called in forward()
        self.layer4cxq1 = torch.nn.Conv2d(2, 33, 4, 4)
        self.layer4cxq2 = torch.nn.ReLU()
        self.layer4cxq3 = torch.nn.MaxPool2d(stride=2, kernel_size=2)
        self.layer4cxq4 = torch.nn.Linear(14*14*128, 1024)
        self.layer4cxq5 = torch.nn.Dropout(p=0.8)
        # Trainable parameters registered directly on the module
        self.attribute4cxq = nn.Parameter(torch.tensor(20200910.0))
        self.attribute4lzq = nn.Parameter(torch.tensor([2.0, 3.0, 4.0, 5.0]))
        self.attribute4hh = nn.Parameter(torch.randn(3, 4, 5, 6))
        self.attribute4wyf = nn.Parameter(torch.randn(7, 8, 9, 10))

    def forward(self, x):  # torch.Size([64, 1, 28, 28])
        x = self.conv1(x)  # Output torch.Size([64, 128, 14, 14])
        x = x.view(-1, 14*14*128)  # torch.Size([64, 14*14*128])
        x = self.dense(x)  # Output torch.Size([64, 10])
        return x
print('cuda(GPU) is available:', torch.cuda.is_available())
print('torch version:', torch.__version__)

model = Model()  # .cuda()
print("Testing the model (CPU)".center(100, "-"))
print(type(model))

print("Before calling torch.nn.Module.add_module(name, module)".center(100, "-"))
for name, child in model.named_modules():
    print('The name of the module is:', name, '### The module itself is:', child)

print("After calling torch.nn.Module.add_module(name, module)".center(100, "-"))
model.add_module('JUJU', torch.nn.Conv2d(38, 38, 38, 38))
for name, child in model.named_modules():
    print('The name of the module is:', name, '### The module itself is:', child)
cuda(GPU) is available: False
torch version: 1.11.0+cpu
--------------------------------------Testing the model (CPU)---------------------------------------
<class '__main__.Model'>
----------------------Before calling torch.nn.Module.add_module(name, module)-----------------------
The name of the module is:  ### The module itself is: Model(
(conv1): Sequential(
(0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(dense): Sequential(
(0): Linear(in_features=25088, out_features=1024, bias=True)
(1): ReLU()
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=1024, out_features=10, bias=True)
)
(layer4cxq1): Conv2d(2, 33, kernel_size=(4, 4), stride=(4, 4))
(layer4cxq2): ReLU()
(layer4cxq3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(layer4cxq4): Linear(in_features=25088, out_features=1024, bias=True)
(layer4cxq5): Dropout(p=0.8, inplace=False)
)
The name of the module is: conv1 ### The module itself is: Sequential(
(0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
The name of the module is: conv1.0 ### The module itself is: Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
The name of the module is: conv1.1 ### The module itself is: ReLU()
The name of the module is: conv1.2 ### The module itself is: Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
The name of the module is: conv1.3 ### The module itself is: ReLU()
The name of the module is: conv1.4 ### The module itself is: MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
The name of the module is: dense ### The module itself is: Sequential(
(0): Linear(in_features=25088, out_features=1024, bias=True)
(1): ReLU()
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=1024, out_features=10, bias=True)
)
The name of the module is: dense.0 ### The module itself is: Linear(in_features=25088, out_features=1024, bias=True)
The name of the module is: dense.1 ### The module itself is: ReLU()
The name of the module is: dense.2 ### The module itself is: Dropout(p=0.5, inplace=False)
The name of the module is: dense.3 ### The module itself is: Linear(in_features=1024, out_features=10, bias=True)
The name of the module is: layer4cxq1 ### The module itself is: Conv2d(2, 33, kernel_size=(4, 4), stride=(4, 4))
The name of the module is: layer4cxq2 ### The module itself is: ReLU()
The name of the module is: layer4cxq3 ### The module itself is: MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
The name of the module is: layer4cxq4 ### The module itself is: Linear(in_features=25088, out_features=1024, bias=True)
The name of the module is: layer4cxq5 ### The module itself is: Dropout(p=0.8, inplace=False)
-----------------------After calling torch.nn.Module.add_module(name, module)-----------------------
The name of the module is:  ### The module itself is: Model(
(conv1): Sequential(
(0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(dense): Sequential(
(0): Linear(in_features=25088, out_features=1024, bias=True)
(1): ReLU()
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=1024, out_features=10, bias=True)
)
(layer4cxq1): Conv2d(2, 33, kernel_size=(4, 4), stride=(4, 4))
(layer4cxq2): ReLU()
(layer4cxq3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(layer4cxq4): Linear(in_features=25088, out_features=1024, bias=True)
(layer4cxq5): Dropout(p=0.8, inplace=False)
(JUJU): Conv2d(38, 38, kernel_size=(38, 38), stride=(38, 38))
)
The name of the module is: conv1 ### The module itself is: Sequential(
(0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
The name of the module is: conv1.0 ### The module itself is: Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
The name of the module is: conv1.1 ### The module itself is: ReLU()
The name of the module is: conv1.2 ### The module itself is: Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
The name of the module is: conv1.3 ### The module itself is: ReLU()
The name of the module is: conv1.4 ### The module itself is: MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
The name of the module is: dense ### The module itself is: Sequential(
(0): Linear(in_features=25088, out_features=1024, bias=True)
(1): ReLU()
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=1024, out_features=10, bias=True)
)
The name of the module is: dense.0 ### The module itself is: Linear(in_features=25088, out_features=1024, bias=True)
The name of the module is: dense.1 ### The module itself is: ReLU()
The name of the module is: dense.2 ### The module itself is: Dropout(p=0.5, inplace=False)
The name of the module is: dense.3 ### The module itself is: Linear(in_features=1024, out_features=10, bias=True)
The name of the module is: layer4cxq1 ### The module itself is: Conv2d(2, 33, kernel_size=(4, 4), stride=(4, 4))
The name of the module is: layer4cxq2 ### The module itself is: ReLU()
The name of the module is: layer4cxq3 ### The module itself is: MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
The name of the module is: layer4cxq4 ### The module itself is: Linear(in_features=25088, out_features=1024, bias=True)
The name of the module is: layer4cxq5 ### The module itself is: Dropout(p=0.8, inplace=False)
The name of the module is: JUJU ### The module itself is: Conv2d(38, 38, kernel_size=(38, 38), stride=(38, 38))
Process finished with exit code 0
2.2 Summary
- From the last line of the output you can see that add_module really did take effect: the JUJU submodule was registered directly after the submodules defined in __init__. It is not called in forward, however, and only the layers invoked in forward take part in the model's subsequent computation.
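A minimal sketch of this point, reusing the Model class from section 2.1 (the name 'extra' is made up for illustration): the added module is reachable as an attribute under its given name, but it plays no part in forward.

m = Model()
m.add_module('extra', torch.nn.ReLU())
print(m.extra)                       # accessible by the given name: ReLU()
out = m(torch.randn(64, 1, 28, 28))  # 'extra' is never called inside forward
print(out.shape)                     # torch.Size([64, 10])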
3 apply(fn) Method
- Applies fn recursively to every submodule (as returned by .children()) as well as to the module itself. A typical use is initializing the parameters of a model, e.g. random weight initialization, as sketched below.
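A minimal sketch of that use, reusing the Model class from section 2.1 (the init_weights helper and its constants are made up for illustration):

def init_weights(m):
    if isinstance(m, torch.nn.Linear):  # only touch the fully connected layers
        torch.nn.init.normal_(m.weight, mean=0.0, std=0.02)
        torch.nn.init.zeros_(m.bias)

model = Model()
model.apply(init_weights)  # fn is called on every submodule, then on model itself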
4 bfloat16() Method
- Casts all floating-point parameters and buffers of the module to the bfloat16 dtype.
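A quick sketch, again reusing the Model class from section 2.1:

model = Model()
model.bfloat16()                       # casts floating-point parameters in place
print(next(model.parameters()).dtype)  # torch.bfloat16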
5 parameters() Method
- torch.nn.Parameter is a subclass of torch.Tensor; its main role is to act as a trainable parameter inside an nn.Module.
- The difference from torch.Tensor is that an nn.Parameter assigned to a module is automatically registered as one of the module's trainable parameters, i.e. it is added to the iterator returned by parameters(); an ordinary tensor stored on a module, one that is not an nn.Parameter, does not appear in parameters().
- The requires_grad attribute of an nn.Parameter defaults to True, i.e. it is trainable; this is the opposite of the default for a torch.Tensor object.
- Inside the nn.Module classes, PyTorch itself also uses nn.Parameter for the parameters of each module.
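A minimal sketch of the distinction (the Tiny class and its attribute names are made up for illustration):

class Tiny(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(3))  # registered as a trainable parameter
        self.t = torch.randn(3)                      # plain tensor: not registered

tiny = Tiny()
print([name for name, _ in tiny.named_parameters()])  # ['w'] ('t' is absent)
print(tiny.w.requires_grad)                           # True by default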
Copyright notice
This article was written by [Graduate students are not late]; please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/04/202204230657535165.html