The differences between torch.mm(), torch.sparse.mm(), torch.bmm(), torch.mul(), and torch.matmul()
2022-04-23 07:17:00 【Breeze_】
torch.mm()
Multiplies two 2-D matrices. If the input matrix mat1 has shape (m × n) and mat2 has shape (n × p), the output has shape (m × p). Only 2-D tensors are supported.
mat1 = torch.randn(2, 3)
mat2 = torch.randn(3, 3)
out = torch.mm(mat1, mat2)
'''
tensor([[ 0.4851,  0.5037, -0.3633],
        [-0.0760, -3.6705,  2.4784]])
'''
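Because torch.mm only accepts 2-D tensors, passing batched (3-D) inputs raises an error. A minimal sketch of this failure mode (the shapes are chosen for illustration):

```python
import torch

a = torch.randn(2, 2, 3)  # 3-D (batched) input
b = torch.randn(2, 3, 4)
try:
    torch.mm(a, b)  # mm only accepts 2-D tensors
except RuntimeError:
    print("torch.mm rejects 3-D inputs; use torch.bmm or torch.matmul instead")
```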
torch.sparse.mm()
Here a is a sparse matrix, and b can be either sparse or dense. torch.sparse.mm performs matrix multiplication just like torch.mm, but accepts sparse operands and supports gradients for them.
a = torch.randn(2, 3).to_sparse().requires_grad_(True)
b = torch.randn(3, 2, requires_grad=True)
y = torch.sparse.mm(a, b)
'''
a: tensor(indices=tensor([[0, 0, 0, 1, 1, 1],
                          [0, 1, 2, 0, 1, 2]]),
          values=tensor([ 1.5901, 0.0183, -0.6146, 1.8061, -0.0112, 0.6302]),
          size=(2, 3), nnz=6, layout=torch.sparse_coo, requires_grad=True)
b: tensor([[-0.6479,  0.7874],
           [-1.2056,  0.5641],
           [-1.1716, -0.9923]], requires_grad=True)
y: tensor([[-0.3323,  1.8723],
           [-1.8951,  0.7904]], grad_fn=<SparseAddmmBackward>)
'''
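Since both operands above carry requires_grad, gradients flow back through the sparse multiplication as well. A small sketch continuing the same setup:

```python
import torch

a = torch.randn(2, 3).to_sparse().requires_grad_(True)
b = torch.randn(3, 2, requires_grad=True)
y = torch.sparse.mm(a, b)

y.sum().backward()       # backprop through the sparse matmul
print(a.grad.is_sparse)  # gradient w.r.t. a keeps the sparse layout
print(b.grad.shape)      # dense gradient for the dense operand
```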
torch.bmm()
Similar to torch.mm, but with an extra batch_size dimension: the input tensor mat1 has shape (b × m × n), mat2 has shape (b × n × p), and the output has shape (b × m × p).
mat1 = torch.randn(10, 3, 4)
mat2 = torch.randn(10, 4, 5)
res = torch.bmm(mat1, mat2)
print(res.size())
# torch.Size([10, 3, 5])
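To make the batch semantics concrete, torch.bmm is equivalent to applying torch.mm to each batch slice. A minimal check:

```python
import torch

mat1 = torch.randn(10, 3, 4)
mat2 = torch.randn(10, 4, 5)
res = torch.bmm(mat1, mat2)

# equivalent to an explicit loop of 2-D multiplications
ref = torch.stack([torch.mm(mat1[i], mat2[i]) for i in range(10)])
print(torch.allclose(res, ref))
```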
torch.mul()
Multiplies each element of the input tensor input by other, which can be a scalar or another tensor, and returns a new tensor out. When other is a tensor, the two shapes must satisfy the broadcasting rules.
# Way 1: tensor multiplied by a scalar
a = torch.randn(3)
torch.mul(a, 100)
'''
a: tensor([ 0.2015, -0.4255,  2.6087])
tensor([ 20.1494, -42.5491, 260.8663])
'''
# Way 2: tensor multiplied by a tensor (shapes must satisfy the broadcasting rules)
a = torch.randn(4, 1)
b = torch.randn(1, 4)
c = torch.mul(a,b)
'''
c: tensor([[-0.1183, -0.4246, -0.0512,  0.1757],
           [-0.4215, -1.5121, -0.1823,  0.6257],
           [-0.0358, -0.1284, -0.0155,  0.0531],
           [ 0.1649,  0.5917,  0.0713, -0.2448]])
'''
# Way 3: element-wise multiplication of same-shaped tensors
a = torch.randn(3, 2)
b = torch.randn(3, 2)
c = torch.mul(a,b)
'''
c: tensor([[-1.9259, -0.0116],
           [-1.8523, -0.0392],
           [-0.4881, -0.4235]])
'''
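Note that torch.mul is the functional form of the * operator, so all three ways above can also be written with *. A quick check:

```python
import torch

a = torch.randn(3, 2)
b = torch.randn(3, 2)

# torch.mul and the * operator produce identical results
print(torch.equal(torch.mul(a, b), a * b))
print(torch.equal(torch.mul(a, 100), a * 100))
```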
torch.matmul()
Computes the matrix product of two tensors. The behavior depends on the tensors' dimensionality:
- If both tensors are 1-D, the dot product (a scalar) is returned.
- If both arguments are 2-D, the matrix-matrix product is returned.
- If the first argument is 1-D and the second is 2-D, a 1 is prepended to the first argument's shape for the matrix multiply; the prepended dimension is removed afterwards.
- If the first argument is 2-D and the second is 1-D, the matrix-vector product is returned.
- If both arguments are at least 1-D and at least one argument is N-dimensional (N > 2), a batched matrix multiply is returned. If the first argument is 1-D, a 1 is prepended to its shape for the batched multiply and removed afterwards. If the second argument is 1-D, a 1 is appended to its shape for the batched multiply and removed afterwards. The non-matrix (batch) dimensions are broadcast (and must therefore be broadcastable).
# vector x vector
tensor1 = torch.randn(3)
tensor2 = torch.randn(3)
torch.matmul(tensor1, tensor2).size()
# torch.Size([])

# matrix x vector
tensor1 = torch.randn(3, 4)
tensor2 = torch.randn(4)
torch.matmul(tensor1, tensor2).size()
# torch.Size([3])

# batched matrix x broadcasted vector
tensor1 = torch.randn(10, 3, 4)
tensor2 = torch.randn(4)
torch.matmul(tensor1, tensor2).size()
# torch.Size([10, 3])

# batched matrix x batched matrix
tensor1 = torch.randn(10, 3, 4)
tensor2 = torch.randn(10, 4, 5)
torch.matmul(tensor1, tensor2).size()
# torch.Size([10, 3, 5])

# batched matrix x broadcasted matrix
tensor1 = torch.randn(10, 3, 4)
tensor2 = torch.randn(4, 5)
torch.matmul(tensor1, tensor2).size()
# torch.Size([10, 3, 5])
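One case from the rules above that the examples do not show is a 1-D first argument with a 2-D second argument. A small sketch (shapes chosen for illustration):

```python
import torch

# vector x matrix: a 1 is prepended to the vector's shape,
# giving (1, 4) @ (4, 5) -> (1, 5), then the prepended dim is removed
tensor1 = torch.randn(4)
tensor2 = torch.randn(4, 5)
print(torch.matmul(tensor1, tensor2).size())  # torch.Size([5])
```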
Summary
- The product of two 2-D matrices: torch.mm() or torch.sparse.mm()
- The product of batches of 2-D matrices: torch.bmm()
- Scalar or element-wise products: torch.mul()
- Batched or broadcast matrix products: torch.matmul()
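As a cross-check of the summary, torch.matmul reduces to torch.mm for 2-D inputs and to torch.bmm for 3-D inputs, while torch.mul stays element-wise. A minimal sketch:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)
# 2-D inputs: matmul behaves like mm
print(torch.allclose(torch.matmul(a, b), torch.mm(a, b)))

x = torch.randn(5, 2, 3)
y = torch.randn(5, 3, 4)
# 3-D inputs: matmul behaves like bmm
print(torch.allclose(torch.matmul(x, y), torch.bmm(x, y)))

c = torch.randn(2, 3)
# mul is element-wise, not a matrix product
print(torch.equal(torch.mul(a, c), a * c))
```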
Copyright notice
This article was written by [Breeze_]. Please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/04/202204230610323214.html