Usage of topk() / eq() / gt() / lt() / t()
2022-08-08 04:04:00 【nine pounds and fifteen pence,】
eq() / gt() / lt() / t()
import torch
x1 = torch.Tensor([0.2, 0.8])
x2 = torch.Tensor([0, 3])
print('x1 == x2:', x1.eq(x2))   # element-wise equality
print('x1 > x2:', x1.gt(x2))    # element-wise greater-than
print('x1 < x2:', x1.lt(x2))    # element-wise less-than
# x1 == x2: tensor([False, False])
# x1 > x2: tensor([ True, False])
# x1 < x2: tensor([False, True])
--------------------------------------------------------
x3 = torch.Tensor([[2, 1], [3, 4]])
print('x3:', x3)
print('x3 transposed:', x3.t())
# x3: tensor([[2., 1.],
#             [3., 4.]])
# x3 transposed: tensor([[2., 3.],
#                        [1., 4.]])
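Because these comparisons return boolean tensors, they can be summed directly to count matching positions. A minimal sketch (the preds and labels tensors below are made up for illustration):
preds = torch.tensor([1, 0, 2, 1])
labels = torch.tensor([1, 1, 2, 0])
correct = preds.eq(labels)        # tensor([ True, False,  True, False])
print(correct.sum().item())       # 2 -> number of positions where they agree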
topk()
import torch
a = torch.randn((4, 8))   # a 4x8 tensor of standard-normal random numbers
print(a)
# tensor([[-0.6378, 0.4055, -1.1109, -0.2804, -0.5933, -0.8631, -0.7764, -0.2232],
# [-2.1446, 0.4058, 1.1801, 1.5446, 0.7786, 0.0172, -2.2552, 0.2385],
# [ 0.7129, -0.8664, -1.2198, -0.1463, 0.0565, -0.0409, -0.4247, 0.8256],
# [-0.3058, -0.5409, 0.1872, -1.4345, 0.1649, 0.7080, 1.5167, 1.2903]])
max()
Format:
torch.max(input, dim)
# Python's built-in max() simply picks the larger number; maxk is reused as k for topk() below
maxk = max((1, 3))
print(maxk)  # 3
Set keepdim=True to avoid squeezing out the reduced dimension:
_, indices_max = a.max(dim=1, keepdim=True)
print(_)            # max value of each row
print(indices_max)  # corresponding indices
#tensor([[0.4055],
# [1.5446],
# [0.8256],
# [1.5167]])
# tensor([[1],
# [3],
# [7],
# [6]])
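To see what keepdim changes, compare the shapes with and without it. A small sketch reusing the tensor a from above:
values_kept, idx_kept = a.max(dim=1, keepdim=True)
values_flat, idx_flat = a.max(dim=1)          # keepdim defaults to False
print(values_kept.shape)   # torch.Size([4, 1]) -> dimension 1 is kept with size 1
print(values_flat.shape)   # torch.Size([4])    -> dimension 1 is squeezed away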
torch.topk()
Format:
torch.topk(input, k, dim=None, largest=True, sorted=True)
input: the input tensor
k: the number of top values (and their indices) to return
dim: the dimension to sort along; defaults to the last dimension
largest: if True, return the largest k values (descending); if False, return the smallest k (ascending)
# _ holds the top maxk values, pred holds the corresponding indices
# dim picks the dimension: dim=1 takes the top k within each row, dim=0 within each column
_, pred = a.topk(maxk, 1, True, True)
print(_)
# tensor([[ 0.4055, -0.2232, -0.2804],
# [ 1.5446, 1.1801, 0.7786],
# [ 0.8256, 0.7129, 0.0565],
# [ 1.5167, 1.2903, 0.7080]])
print(pred)
# tensor([[1, 7, 3],
# [3, 2, 4],
# [7, 0, 4],
# [6, 7, 5]])
# k=1 along dim=1 is equivalent to max(dim=1, keepdim=True)
_, pred = a.topk(1, 1, True, True)
print(_)
# tensor([[0.4055],
# [1.5446],
# [0.8256],
# [1.5167]])
print(pred)
# tensor([[1],
# [3],
# [7],
# [6]])
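Setting largest=False flips the order and returns the smallest entries instead. A small sketch with the same tensor a (the outputs follow from the values printed above):
_, pred = a.topk(1, 1, largest=False)
print(_)
# tensor([[-1.1109],
#         [-2.2552],
#         [-1.2198],
#         [-1.4345]])
print(pred)
# tensor([[2],
#         [6],
#         [2],
#         [3]])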
# with dim=0, topk takes the top k within each column
_, pred = a.topk(1, 0, True, True)
print(_)
print(pred)
# tensor([[0.7129, 0.4058, 1.1801, 1.5446, 0.7786, 0.7080, 1.5167, 1.2903]])
# tensor([[2, 1, 1, 1, 1, 3, 3, 3]])
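Putting topk(), t() and eq() together gives the usual top-k accuracy pattern from classification code. A minimal sketch (logits and labels are made-up example tensors, not taken from the text above):
import torch

# made-up classifier outputs: 4 samples, 5 classes, plus their true labels
logits = torch.tensor([[0.10, 0.60, 0.20, 0.05, 0.05],
                       [0.30, 0.20, 0.40, 0.07, 0.03],
                       [0.05, 0.10, 0.20, 0.25, 0.40],
                       [0.70, 0.12, 0.08, 0.06, 0.04]])
labels = torch.tensor([1, 2, 3, 2])

k = 3
_, pred = logits.topk(k, dim=1, largest=True, sorted=True)  # indices of the k highest scores per sample
pred = pred.t()                                             # shape (k, batch): each row is one "rank"
correct = pred.eq(labels.view(1, -1).expand_as(pred))       # compare every rank against the labels
top1_acc = correct[:1].sum().item() / labels.size(0)
topk_acc = correct[:k].sum().item() / labels.size(0)
print(top1_acc, topk_acc)  # 0.5 1.0 -> 2 of 4 top-1 hits, 4 of 4 top-3 hits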