PyTorch 19. Differences and Connections Between Similar Operations in PyTorch
2022-04-23 07:29:00 【DCGJ666】
view() and reshape()
A note before we start:
There is an excellent summary by another author that covers this topic well: Blog
Summary
- view(): when operating on a tensor, the tensor's memory must be contiguous, and when the shape is changed, view() does not allocate new memory. However, making a tensor contiguous with tensor.contiguous() does allocate new memory and stores the data contiguously there.
- reshape(): its effect is exactly the same as view(), but it is more flexible. When the tensor's memory is contiguous, reshape() does not allocate new memory; when the memory is not contiguous, reshape() allocates new memory first and then reshapes the tensor.
- In short, simply using reshape() saves you the trouble; see the sketch after this list.
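A minimal sketch of the points above (the shapes here are arbitrary choices for illustration):
import torch

x = torch.arange(12).reshape(3, 4)
y = x.t()                        # transpose -> a non-contiguous view of x
print(y.is_contiguous())         # False
# y.view(12) would raise a RuntimeError because y is not contiguous
z1 = y.contiguous().view(12)     # contiguous() copies the data, then view() works
z2 = y.reshape(12)               # reshape() handles the copy internally

a = torch.arange(6)
b = a.view(2, 3)                 # contiguous case: no copy, shares storage with a
print(b.data_ptr() == a.data_ptr())   # True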
expand() and repeat()
expand()
Returns a view of the current tensor expanded along certain dimensions. The expanded tensor does not allocate new memory; it only creates a new view (view) on the existing tensor, in which dimensions of size 1 are expanded to a larger size.
Example:
import torch
x = torch.tensor([1, 2, 3])
x.expand(2,3)
tensor([[1, 2, 3],
[1, 2, 3]])
Note that expand() can only expand dimensions whose size is 1; dimensions whose size is not 1 must be kept the same.
repeat()
Repeats the tensor along the specified dimensions. Unlike expand(), this function copies the tensor's data.
Example
import torch
x = torch.tensor([1, 2, 3])
x.repeat(3,2)
tensor([[1, 2, 3, 1, 2, 3],
[1, 2, 3, 1, 2, 3],
[1, 2, 3, 1, 2, 3]])
x2 = torch.randn(2, 3, 4)
x2.repeat(2, 1, 3).shape
torch.Size([4, 3, 12])
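A small sketch (values chosen arbitrarily) of the practical difference: expand() returns a view of the original storage, while repeat() copies the data:
import torch

base = torch.tensor([[1.], [2.], [3.]])   # shape (3, 1)
e = base.expand(3, 4)                     # view: no new memory is allocated
r = base.repeat(1, 4)                     # copy: new memory, shape (3, 4)

base[0, 0] = 100.
print(e[0])   # tensor([100., 100., 100., 100.]) -- the expanded view sees the change
print(r[0])   # tensor([1., 1., 1., 1.])         -- the repeated copy does not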
Multiplication operations
The multiplication operations in PyTorch are torch.mm(), torch.bmm(), torch.matmul(), torch.mul(), the operators @ and *, as well as torch.einsum().
Two-dimensional matrix multiplication torch.mm()
This function is only used to compute the matrix product of two 2-D matrices and does not support broadcasting.
torch.mm(mat1, mat2, out=None), where mat1 is (n×m), mat2 is (m×d), and the output dimension is (n×d).
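A minimal sketch (shapes are arbitrary examples):
import torch

mat1 = torch.randn(2, 3)        # (n×m)
mat2 = torch.randn(3, 4)        # (m×d)
out = torch.mm(mat1, mat2)
print(out.shape)                # torch.Size([2, 4]), i.e. (n×d)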
Three-dimensional batched matrix multiplication torch.bmm()
Both inputs must be 3-D tensors with the same first dimension (the batch dimension); broadcasting is not supported.
Because neural network training generally uses mini-batches, the inputs are often 3-D batched matrices, so PyTorch provides torch.bmm(bmat1, bmat2, out=None), where bmat1 is (b×n×m), bmat2 is (b×m×d), and the output out has dimension (b×n×d).
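A minimal sketch with an assumed batch size of 8:
import torch

bmat1 = torch.randn(8, 2, 3)    # (b×n×m)
bmat2 = torch.randn(8, 3, 4)    # (b×m×d)
out = torch.bmm(bmat1, bmat2)
print(out.shape)                # torch.Size([8, 2, 4]), i.e. (b×n×d)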
Multidimensional matrix multiplication torch.matmul()
torch.matmul(input, other, out=None) supports broadcasting.
For multidimensional inputs, matmul() performs the multiplication over the last two dimensions of the two arguments, and all other dimensions can be regarded as batch dimensions. Suppose the two inputs have dimensions input -> (1000×500×99×11) and other -> (500×11×99). The multiplication first performs matrix multiplication over the last two dimensions, (99×11) x (11×99) -> (99×99); then the batch sizes of the two arguments, (1000×500) and (500), are broadcast to (1000×500), so the final output has dimension (1000×500×99×99).
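A sketch of the same broadcasting pattern, with the sizes scaled down from the example above so it runs quickly:
import torch

a = torch.randn(10, 5, 9, 11)   # batch dims (10, 5), matrix (9×11)
b = torch.randn(5, 11, 9)       # batch dim  (5,),   matrix (11×9)
out = torch.matmul(a, b)        # batch dims broadcast to (10, 5)
print(out.shape)                # torch.Size([10, 5, 9, 9])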
Element-wise matrix multiplication torch.mul()
torch.mul(mat1, other, out=None), where the multiplier other can be a scalar or a tensor of any dimension, as long as the final multiplication satisfies broadcasting.
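A minimal sketch of the broadcasting behaviour (shapes are arbitrary examples):
import torch

mat = torch.randn(2, 3)
print(torch.mul(mat, 2.0).shape)                 # scalar multiplier -> torch.Size([2, 3])
print(torch.mul(mat, torch.randn(3)).shape)      # (3,) broadcast over rows -> torch.Size([2, 3])
print(torch.mul(mat, torch.randn(2, 1)).shape)   # (2, 1) broadcast over columns -> torch.Size([2, 3])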
Two operators @ and *
- @: matrix multiplication; automatically dispatches to the appropriate matrix multiplication function
- *: element-wise multiplication
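A short sketch of both operators (shapes are arbitrary examples):
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)
print((a @ b).shape)    # matrix multiplication -> torch.Size([2, 4])
print((a * a).shape)    # element-wise multiplication -> torch.Size([2, 3])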
register_parameter() and nn.Parameter()
- nn.Parameter()
A Parameter is a Tensor, i.e., it has all the attributes of a Tensor; for example, the parameter value can be accessed via .data and the parameter gradient via .grad.
import torch
from torch import nn

# Define a simple network
net = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))
# Convert the parameter generator to a list so it can be indexed
weight_0 = list(net[0].parameters())[0]
print(weight_0.data)
print(weight_0.grad)   # None before any backward pass
- register_parameter(name, param)
Adds a parameter to the module being built.
The biggest difference: a parameter added with register_parameter() can be retrieved by the name it was registered with.
Example
import torch
from torch import nn

class Example(nn.Module):
    def __init__(self):
        super(Example, self).__init__()
        # Assigning an nn.Parameter as an attribute registers it automatically
        self.W1_params = nn.Parameter(torch.rand(2, 3))
        # register_parameter() registers a parameter under an explicit name
        self.register_parameter('W2_params', nn.Parameter(torch.rand(2, 3)))

    def forward(self, x):
        return x
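A brief usage sketch continuing the class above: both parameters are registered under their names and can be retrieved accordingly:
model = Example()
for name, p in model.named_parameters():
    print(name, p.shape)        # W1_params torch.Size([2, 3]) / W2_params torch.Size([2, 3])
print(model.W2_params.shape)    # the registered name is also available as an attribute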
Copyright notice
This article was written by [DCGJ666]. Please include a link to the original when reposting. Thank you.
https://yzsam.com/2022/04/202204230611343663.html