PyTorch 19. Differences and relations of similar operations in PyTorch
2022-04-23 07:29:00 【DCGJ666】
view() and reshape()
To start with: there is a very thorough summary of this topic in this blog post.
Summary
- view(): when operating on a tensor, the tensor's memory must be contiguous, and changing the size with view() does not allocate new memory. However, making a tensor contiguous with tensor.contiguous() does allocate new memory, where the data is stored contiguously.
- reshape(): has exactly the same effect as view(), but is more flexible. If the tensor's memory is contiguous, reshape() does not allocate new memory; if the memory is not contiguous, reshape() first allocates new memory and then reshapes the tensor.
- In short, just using reshape() is the simpler choice; see the sketch below.
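A minimal sketch of these points (the transposed tensor is just one convenient way to obtain a non-contiguous case):

import torch

x = torch.arange(12).reshape(3, 4)

# Contiguous tensor: view() works and shares storage with x
v = x.view(4, 3)
print(v.data_ptr() == x.data_ptr())   # True: no new memory is allocated

# Transposing makes the tensor non-contiguous, so view() would fail here
t = x.t()
print(t.is_contiguous())              # False
# t.view(12)                          # would raise a RuntimeError

# reshape() handles both cases: it returns a view when possible,
# otherwise it copies the data into new, contiguous memory
r = t.reshape(12)
print(r.data_ptr() == x.data_ptr())   # False: a copy was made here

# contiguous() also allocates new memory and lays the data out contiguously
c = t.contiguous()
print(c.is_contiguous())              # True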
expand() and repeat()
expand()
Returns a view of the current tensor expanded along certain dimensions. Expanding a tensor does not allocate new memory; it only creates a new view on the existing tensor, in which a dimension of size 1 is expanded to a larger size.
Example:
import torch
x = torch.tensor([1, 2, 3])
x.expand(2,3)
tensor([[1, 2, 3],
[1, 2, 3]])
Note that expand() can only expand dimensions whose size is 1; dimensions whose size is not 1 must be kept the same.
repeat()
Repeats the tensor along the specified dimensions. Unlike expand(), this function copies the tensor's data.
Example:
import torch
x = torch.tensor([1, 2, 3])
x.repeat(3,2)
tensor([[1, 2, 3, 1, 2, 3],
[1, 2, 3, 1, 2, 3],
[1, 2, 3, 1, 2, 3]])
x2 = torch.randn(2, 3, 4)
x2.repeat(2, 1, 3).shape
torch.Size([4, 3, 12])
Multiplication operations
The multiplication operations in PyTorch are: torch.mm(), torch.bmm(), torch.matmul(), torch.mul(), the operators @ and *, and torch.einsum().
Two-dimensional matrix multiplication: torch.mm()
This function is generally only used to multiply two two-dimensional matrices and does not support broadcasting.
torch.mm(mat1, mat2, out=None), where mat1 is (n×m) and mat2 is (m×d); the output dimension is (n×d).
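A minimal sketch (shapes chosen arbitrarily for illustration):

import torch

mat1 = torch.randn(2, 3)   # (n, m)
mat2 = torch.randn(3, 4)   # (m, d)
out = torch.mm(mat1, mat2)
print(out.shape)           # torch.Size([2, 4]), i.e. (n, d)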
Three-dimensional batched matrix multiplication: torch.bmm()
Both inputs of this function must be three-dimensional matrices with the same first dimension (the batch dimension); broadcasting is not supported.
Because neural networks are usually trained with mini-batches, the inputs are often three-dimensional batched matrices, so PyTorch provides torch.bmm(bmat1, bmat2, out=None), where bmat1 is (b×n×m) and bmat2 is (b×m×d); the output out has dimension (b×n×d).
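A minimal sketch (shapes chosen arbitrarily for illustration):

import torch

bmat1 = torch.randn(10, 3, 4)    # (b, n, m)
bmat2 = torch.randn(10, 4, 5)    # (b, m, d)
out = torch.bmm(bmat1, bmat2)
print(out.shape)                 # torch.Size([10, 3, 5]), i.e. (b, n, d)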
Multidimensional matrix multiplication: torch.matmul()
torch.matmul(input, other, out=None) supports broadcasting.
For multidimensional inputs, matmul() can be thought of as multiplying the last two dimensions of the two arguments, with all remaining dimensions treated as batch dimensions. Suppose the dimensions of the two inputs are input -> (1000×500×99×11) and other -> (500×11×99). The multiplication first performs a matrix product over the last two dimensions, (99×11) × (11×99) -> (99×99); the batch sizes of the two arguments are (1000×500) and (500), which broadcast to (1000×500), so the final output has dimension (1000×500×99×99).
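A smaller-scale sketch of the same broadcasting behavior (shapes chosen arbitrarily for illustration):

import torch

a = torch.randn(2, 3, 4, 5)      # batch dims (2, 3), matrix dims (4, 5)
b = torch.randn(3, 5, 6)         # batch dims (3,),   matrix dims (5, 6)
out = torch.matmul(a, b)         # batch dims broadcast to (2, 3); (4x5) @ (5x6) -> (4x6)
print(out.shape)                 # torch.Size([2, 3, 4, 6])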
Element-wise matrix multiplication: torch.mul()
The function is torch.mul(mat1, other, out=None), where the multiplier other can be a scalar or a matrix of any dimension, as long as the final multiplication satisfies the broadcast rules.
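A minimal sketch (shapes are illustrative):

import torch

mat = torch.randn(2, 3)
print(torch.mul(mat, 2.0).shape)                 # scalar multiplier      -> torch.Size([2, 3])
print(torch.mul(mat, torch.randn(3)).shape)      # broadcast over rows    -> torch.Size([2, 3])
print(torch.mul(mat, torch.randn(2, 1)).shape)   # broadcast over columns -> torch.Size([2, 3])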
The two operators @ and *
- @: matrix multiplication; automatically dispatches to the appropriate matrix multiplication function
- *: element-wise multiplication (see the sketch below)
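A minimal sketch comparing the two operators (shapes are illustrative):

import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)
c = torch.randn(2, 3)

print((a @ b).shape)    # matrix multiply, same as torch.matmul(a, b) -> torch.Size([2, 4])
print((a * c).shape)    # element-wise,    same as torch.mul(a, c)    -> torch.Size([2, 3])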
register_parameter() and Parameter()
- Parameter()
A Parameter is a Tensor, i.e. it has all the attributes of a Tensor; for example, you can access the parameter's value through .data and its gradient through .grad.
from torch import nn

# Define a simple network
net = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))
# Use list() so the parameters can be indexed
weight_0 = list(net[0].parameters())[0]
print(weight_0.data)
print(weight_0.grad)
- register_parameter(name, param)
Adds a parameter to a network module.
The biggest difference: a parameter registered this way can also be retrieved by its name.
Example:
class Example(nn.Module):
def __init__(self):
super(Example, self).__init__()
self.W1_params = nn.Parameter(torch.rand(2,3))
self.register_parameter('W2_params', nn.Parameter(torch.rand(2,3)))
def forward(self, x):
return x
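A short usage sketch (reusing the Example module above): both the directly assigned nn.Parameter and the one added via register_parameter() appear in named_parameters(), and the registered one is also reachable as an attribute by its name.

model = Example()
for name, p in model.named_parameters():
    print(name, p.shape)        # W1_params torch.Size([2, 3]), W2_params torch.Size([2, 3])
print(model.W2_params.shape)    # the registered parameter is accessible by name as well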
Copyright notice
This article was written by [DCGJ666]; please include the original link when reposting. Thanks.
https://yzsam.com/2022/04/202204230611343663.html