Explanation of the nn.Module class
2022-04-23 09:11:00 【Graduate students are not late】
1 Introduction to torch.nn.Module
- It is the base class of all neural networks.
- Every network we write should inherit from this class.
- A simple Model subclass is sketched below.
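As a quick illustration (my own minimal sketch, not code from the original post; the class name TinyModel and the layer sizes are arbitrary), a subclass only needs to define __init__ and forward:

import torch

class TinyModel(torch.nn.Module):  # hypothetical minimal example
    def __init__(self):
        super(TinyModel, self).__init__()
        self.fc = torch.nn.Linear(4, 2)  # arbitrary sizes, for illustration only

    def forward(self, x):
        return self.fc(x)

model = TinyModel()
print(model(torch.randn(3, 4)).shape)  # torch.Size([3, 2])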

2 The add_module(name, module) method
- Adds a submodule to the current module.

Method: add_module(name, module)

Adds a child module to the current module. The child module can then be accessed as an attribute using the given name.

Parameters:

- name (string) – the name of the child module. The child module can be accessed from this module using the given name.
- module (Module) – the child module to be added to the module.
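Before the full example in 2.1, here is a minimal usage sketch (my own addition, not from the original post; the names net and relu1 are arbitrary):

import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 4))  # an arbitrary starting module
net.add_module('relu1', nn.ReLU())    # register a child module under the name 'relu1'
print(net.relu1)                      # ReLU(), reachable as an attribute with that name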
2.1 Code implementation
- The following code can be run directly. It was adapted from another post; link to the original:
https://blog.csdn.net/m0_46653437/article/details/112649366?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522165062788416780366588245%2522%252C%2522scm%2522%253A%252220140713.130102334.pc%255Fall.%2522%257D&request_id=165062788416780366588245&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2allfirst_rank_ecpm_v1~rank_v31_ecpm-1-112649366.142v9control,157v4control&utm_term=add_module%EF%BC%88name%2C+module%EF%BC%89&spm=1018.2226.3001.4187
import torch
import torch.nn as nn

torch.manual_seed(seed=20200910)

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = torch.nn.Sequential(  # input: torch.Size([64, 1, 28, 28])
            torch.nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),  # output: torch.Size([64, 64, 28, 28])
            torch.nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),  # output: torch.Size([64, 128, 28, 28])
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(stride=2, kernel_size=2)  # output: torch.Size([64, 128, 14, 14])
        )
        self.dense = torch.nn.Sequential(  # input: torch.Size([64, 14*14*128])
            torch.nn.Linear(14*14*128, 1024),  # output: torch.Size([64, 1024])
            torch.nn.ReLU(),
            torch.nn.Dropout(p=0.5),
            torch.nn.Linear(1024, 10)  # output: torch.Size([64, 10])
        )
        self.layer4cxq1 = torch.nn.Conv2d(2, 33, 4, 4)
        self.layer4cxq2 = torch.nn.ReLU()
        self.layer4cxq3 = torch.nn.MaxPool2d(stride=2, kernel_size=2)
        self.layer4cxq4 = torch.nn.Linear(14*14*128, 1024)
        self.layer4cxq5 = torch.nn.Dropout(p=0.8)
        self.attribute4cxq = nn.Parameter(torch.tensor(20200910.0))
        self.attribute4lzq = nn.Parameter(torch.tensor([2.0, 3.0, 4.0, 5.0]))
        self.attribute4hh = nn.Parameter(torch.randn(3, 4, 5, 6))
        self.attribute4wyf = nn.Parameter(torch.randn(7, 8, 9, 10))

    def forward(self, x):  # input: torch.Size([64, 1, 28, 28])
        x = self.conv1(x)  # output: torch.Size([64, 128, 14, 14])
        x = x.view(-1, 14*14*128)  # torch.Size([64, 14*14*128])
        x = self.dense(x)  # output: torch.Size([64, 10])
        return x

print('cuda (GPU) available:', torch.cuda.is_available())
print('torch version:', torch.__version__)

model = Model()  # .cuda()

print("Testing the model (CPU)".center(100, "-"))
print(type(model))

print("Before calling torch.nn.Module.add_module(name, module)".center(100, "-"))
for name, child in model.named_modules():
    print('Module name:', name, '### Module itself:', child)

print("After calling torch.nn.Module.add_module(name, module)".center(100, "-"))
model.add_module('JUJU', torch.nn.Conv2d(38, 38, 38, 38))
for name, child in model.named_modules():
    print('Module name:', name, '### Module itself:', child)
cuda (GPU) available: False
torch version: 1.11.0+cpu
--------------------------------------Testing the model (CPU)---------------------------------------
<class '__main__.Model'>
----------------------Before calling torch.nn.Module.add_module(name, module)-----------------------
Module name:  ### Module itself: Model(
  (conv1): Sequential(
    (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (dense): Sequential(
    (0): Linear(in_features=25088, out_features=1024, bias=True)
    (1): ReLU()
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=1024, out_features=10, bias=True)
  )
  (layer4cxq1): Conv2d(2, 33, kernel_size=(4, 4), stride=(4, 4))
  (layer4cxq2): ReLU()
  (layer4cxq3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (layer4cxq4): Linear(in_features=25088, out_features=1024, bias=True)
  (layer4cxq5): Dropout(p=0.8, inplace=False)
)
Module name: conv1 ### Module itself: Sequential(
  (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (1): ReLU()
  (2): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (3): ReLU()
  (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
Module name: conv1.0 ### Module itself: Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
Module name: conv1.1 ### Module itself: ReLU()
Module name: conv1.2 ### Module itself: Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
Module name: conv1.3 ### Module itself: ReLU()
Module name: conv1.4 ### Module itself: MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
Module name: dense ### Module itself: Sequential(
  (0): Linear(in_features=25088, out_features=1024, bias=True)
  (1): ReLU()
  (2): Dropout(p=0.5, inplace=False)
  (3): Linear(in_features=1024, out_features=10, bias=True)
)
Module name: dense.0 ### Module itself: Linear(in_features=25088, out_features=1024, bias=True)
Module name: dense.1 ### Module itself: ReLU()
Module name: dense.2 ### Module itself: Dropout(p=0.5, inplace=False)
Module name: dense.3 ### Module itself: Linear(in_features=1024, out_features=10, bias=True)
Module name: layer4cxq1 ### Module itself: Conv2d(2, 33, kernel_size=(4, 4), stride=(4, 4))
Module name: layer4cxq2 ### Module itself: ReLU()
Module name: layer4cxq3 ### Module itself: MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
Module name: layer4cxq4 ### Module itself: Linear(in_features=25088, out_features=1024, bias=True)
Module name: layer4cxq5 ### Module itself: Dropout(p=0.8, inplace=False)
-----------------------After calling torch.nn.Module.add_module(name, module)-----------------------
Module name:  ### Module itself: Model(
  (conv1): Sequential(
    (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (dense): Sequential(
    (0): Linear(in_features=25088, out_features=1024, bias=True)
    (1): ReLU()
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=1024, out_features=10, bias=True)
  )
  (layer4cxq1): Conv2d(2, 33, kernel_size=(4, 4), stride=(4, 4))
  (layer4cxq2): ReLU()
  (layer4cxq3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (layer4cxq4): Linear(in_features=25088, out_features=1024, bias=True)
  (layer4cxq5): Dropout(p=0.8, inplace=False)
  (JUJU): Conv2d(38, 38, kernel_size=(38, 38), stride=(38, 38))
)
Module name: conv1 ### Module itself: Sequential(
  (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (1): ReLU()
  (2): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (3): ReLU()
  (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
Module name: conv1.0 ### Module itself: Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
Module name: conv1.1 ### Module itself: ReLU()
Module name: conv1.2 ### Module itself: Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
Module name: conv1.3 ### Module itself: ReLU()
Module name: conv1.4 ### Module itself: MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
Module name: dense ### Module itself: Sequential(
  (0): Linear(in_features=25088, out_features=1024, bias=True)
  (1): ReLU()
  (2): Dropout(p=0.5, inplace=False)
  (3): Linear(in_features=1024, out_features=10, bias=True)
)
Module name: dense.0 ### Module itself: Linear(in_features=25088, out_features=1024, bias=True)
Module name: dense.1 ### Module itself: ReLU()
Module name: dense.2 ### Module itself: Dropout(p=0.5, inplace=False)
Module name: dense.3 ### Module itself: Linear(in_features=1024, out_features=10, bias=True)
Module name: layer4cxq1 ### Module itself: Conv2d(2, 33, kernel_size=(4, 4), stride=(4, 4))
Module name: layer4cxq2 ### Module itself: ReLU()
Module name: layer4cxq3 ### Module itself: MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
Module name: layer4cxq4 ### Module itself: Linear(in_features=25088, out_features=1024, bias=True)
Module name: layer4cxq5 ### Module itself: Dropout(p=0.8, inplace=False)
Module name: JUJU ### Module itself: Conv2d(38, 38, kernel_size=(38, 38), stride=(38, 38))
Process finished with exit code 0
2.2 Summary
- The last part of the output shows that add_module really took effect: the JUJU submodule is registered on the model just as if it had been assigned in __init__. It is not called in forward, though, and only the layers invoked in forward take part in the model's later computation.
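A short sketch of this point (my own addition; it assumes the Model class and the model object from section 2.1, after the add_module call, are already defined):

x = torch.randn(64, 1, 28, 28)
y = model(x)                     # forward never calls model.JUJU
print(y.shape)                   # torch.Size([64, 10]), unaffected by the added module
print(model.JUJU)                # the added module is reachable as an attribute
print('JUJU' in dict(model.named_children()))  # True: it is registered as a child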
3 The apply(fn) method
- Applies fn recursively to every submodule (as returned by .children()) as well as to the module itself. Its typical use is initializing the parameters of a model.
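A minimal sketch of the initialization use (my own example; the function name init_weights and the constant values are arbitrary choices):

import torch
import torch.nn as nn

@torch.no_grad()
def init_weights(m):
    # apply() calls this once for every submodule; pick out the Linear layers
    if isinstance(m, nn.Linear):
        m.weight.fill_(1.0)
        m.bias.fill_(0.0)

net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
net.apply(init_weights)   # recurses into both Linear children
print(net[0].weight)      # all ones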
4 The bfloat16() method
- Casts all floating-point parameters and buffers of the module to the bfloat16 type.
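A small sketch (my own) of the resulting dtype change; the cast happens in place and the method also returns the module:

import torch
import torch.nn as nn

layer = nn.Linear(4, 2)
print(layer.weight.dtype)  # torch.float32
layer.bfloat16()           # casts all floating-point parameters and buffers
print(layer.weight.dtype)  # torch.bfloat16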
5 The parameters() method
- torch.nn.Parameter is a subclass of torch.Tensor. Its main role is to act as a trainable parameter inside an nn.Module.
- The difference from torch.Tensor is that an nn.Parameter is automatically treated as a trainable parameter of the module, i.e. it is added to the parameters() iterator; an ordinary tensor inside a module that is not an nn.Parameter does not appear in parameters().
- The requires_grad attribute of an nn.Parameter defaults to True, i.e. it is trainable; this is the opposite of the default for a torch.Tensor object.
- Inside the nn.Module class, PyTorch also uses nn.Parameter for the parameters of each module.
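A sketch of this registration behaviour (my own example, echoing the attribute4* parameters in section 2.1): an nn.Parameter attribute shows up in parameters(), while a plain tensor attribute does not:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.w = nn.Parameter(torch.randn(3))  # registered: appears in parameters()
        self.t = torch.randn(3)                # plain tensor: not registered

net = Net()
for name, p in net.named_parameters():
    print(name, p.requires_grad)               # prints "w True"; t never appears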
Copyright notice
This article was created by [Graduate students are not late]; please include the original link when reposting. Thanks.
https://yzsam.com/2022/04/202204230657535165.html