The tf.keras.layers.Conv1D / Conv2D / Conv3D functions
2022-04-23 02:56:00 [Live up to your youth]
1. The tf.keras.layers.Conv1D function
The function prototype
tf.keras.layers.Conv1D(filters,
kernel_size,
strides=1,
padding='valid',
data_format='channels_last',
dilation_rate=1,
groups=1,
activation=None,
use_bias=True,
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
**kwargs
)
Function description
A one-dimensional convolution layer convolves over a single spatial or temporal dimension. It is typically used in sequence models and natural language processing.

The convolution process is shown in the figure above. The input vector has size 20, the convolution kernel has size 5, and the stride (the distance moved at each step) is 1. Without padding, the output vector has size (20 - 5) / 1 + 1 = 16; with padding, the output vector has size 20 / 1 = 20.
More generally, suppose the input size is F, the kernel size is K, and the stride is S. With "VALID" padding (i.e., no padding), the output size is N = ⌊(F - K) / S⌋ + 1. With "SAME" padding (the input is padded so that at stride 1 the output has the same size as the input), the output size is N = ⌈F / S⌉.
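These formulas can be checked with a short helper (the function below is our own sketch, not part of the Keras API):

```python
import math

def conv_output_length(F, K, S=1, padding="valid"):
    """Output length along one spatial dimension of a convolution."""
    if padding == "valid":
        return (F - K) // S + 1
    if padding == "same":
        return math.ceil(F / S)
    raise ValueError(padding)

# The example from the text: input length 20, kernel size 5, stride 1
print(conv_output_length(20, 5, 1, "valid"))  # 16
print(conv_output_length(20, 5, 1, "same"))   # 20
```

The same formula applies per spatial dimension in the 2D and 3D cases below.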
The filters parameter is the dimensionality of the output space; it generally changes the last dimension of the input. For one-dimensional convolution, the input is usually three-dimensional, with shape (batch_size, d_size, n_dim): batch_size is the number of samples in a batch, d_size is the length of each sample, and n_dim is the dimensionality of each element.
The kernel_size parameter is the size of the convolution kernel; strides is the step size, 1 by default; padding is the padding mode, 'valid' (no padding) by default.
Some other commonly used parameters: activation is the activation function; use_bias indicates whether a bias vector is used (True by default); kernel_initializer is the initializer for the kernel matrix; bias_initializer is the initializer for the bias vector.
Three further parameters, kernel_regularizer, bias_regularizer, and activity_regularizer, apply regularization penalties to the kernel matrix, the bias vector, and the layer's output (its activation), respectively. The resulting losses are added to the model's total loss.
Regularizers are mainly used to prevent overfitting. The two most common are L1 and L2, which compute their penalties differently.
L1 computes its loss as loss = l1 * reduce_sum(abs(x)), where l1 = 0.01 by default; L2 computes its loss as loss = l2 * reduce_sum(square(x)), where l2 = 0.01 by default.
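As a sketch of that arithmetic in plain Python (this is not the TensorFlow implementation; in Keras these penalties come from tf.keras.regularizers.l1 and tf.keras.regularizers.l2):

```python
def l1_loss(weights, l1=0.01):
    # loss = l1 * reduce_sum(abs(x))
    return l1 * sum(abs(w) for w in weights)

def l2_loss(weights, l2=0.01):
    # loss = l2 * reduce_sum(square(x))
    return l2 * sum(w * w for w in weights)

weights = [0.5, -1.0, 2.0]
print(l1_loss(weights))  # 0.01 * 3.5  = 0.035
print(l2_loss(weights))  # 0.01 * 5.25 = 0.0525
```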
If this layer is the first layer of a model, provide the input_shape argument to specify the size of the input tensor.
Example usage
First example
model = tf.keras.Sequential([
    # 1D convolution as the first layer (input_shape given); output shape (None, 16, 8)
    tf.keras.layers.Conv1D(8, 5, activation="relu", input_shape=(20, 1)),
    # 1D max-pooling layer; output shape (None, 8, 8)
    tf.keras.layers.MaxPooling1D(2),
    # Flatten layer
    tf.keras.layers.Flatten(),
    # Fully connected layer
    tf.keras.layers.Dense(4)
])
Model: "sequential"
_________________________________________________________________
 Layer (type)                  Output Shape             Param #
=================================================================
 conv1d (Conv1D)               (None, 16, 8)            48
 max_pooling1d (MaxPooling1D)  (None, 8, 8)             0
 flatten (Flatten)             (None, 64)               0
 dense (Dense)                 (None, 4)                260
=================================================================
Total params: 308
Trainable params: 308
Non-trainable params: 0
_________________________________________________________________
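The parameter counts in the summary can be reproduced by hand; a quick sketch (the helper names are ours, not a Keras API):

```python
def conv1d_params(kernel_size, in_channels, filters, use_bias=True):
    # Each of the `filters` kernels has kernel_size * in_channels weights, plus one bias
    return (kernel_size * in_channels + int(use_bias)) * filters

def dense_params(in_features, units, use_bias=True):
    # A weight per input feature per unit, plus one bias per unit
    return (in_features + int(use_bias)) * units

print(conv1d_params(5, 1, 8))  # 48, the conv1d row
print(dense_params(64, 4))     # 260, the dense row
```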
Second example
model = tf.keras.Sequential([
    # Input layer; output shape (None, 20, 1)
    tf.keras.layers.InputLayer(input_shape=(20, 1)),
    # 1D convolution; output shape (None, 16, 8)
    tf.keras.layers.Conv1D(8, 5, activation="relu"),
    # 1D max-pooling layer; output shape (None, 8, 8)
    tf.keras.layers.MaxPooling1D(2),
    # 1D convolution; output shape (None, 6, 16)
    tf.keras.layers.Conv1D(16, 3, activation="relu"),
    # 1D max-pooling layer; output shape (None, 3, 16)
    tf.keras.layers.MaxPooling1D(2),
    # Flatten layer
    tf.keras.layers.Flatten(),
    # Fully connected layer
    tf.keras.layers.Dense(4)
])
Model: "sequential"
_________________________________________________________________
 Layer (type)                     Output Shape          Param #
=================================================================
 conv1d (Conv1D)                  (None, 16, 8)         48
 max_pooling1d (MaxPooling1D)     (None, 8, 8)          0
 conv1d_1 (Conv1D)                (None, 6, 16)         400
 max_pooling1d_1 (MaxPooling1D)   (None, 3, 16)         0
 flatten (Flatten)                (None, 48)            0
 dense (Dense)                    (None, 4)             196
=================================================================
Total params: 644
Trainable params: 644
Non-trainable params: 0
_________________________________________________________________
2. The tf.keras.layers.Conv2D function
The function prototype
tf.keras.layers.Conv2D(
filters,
kernel_size,
strides=(1, 1),
padding='valid',
data_format=None,
dilation_rate=(1, 1),
groups=1,
activation=None,
use_bias=True,
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
**kwargs
)
Function description
A two-dimensional convolution layer convolves over two dimensions and is generally used in image processing and computer vision.

Two-dimensional convolution works like one-dimensional convolution, with one extra dimension. As shown in the figure above, the input matrix is 5×5, the kernel matrix is 3×3, the strides in the x and y directions are (1, 1), and "SAME" padding is used, so the result is a matrix of the same size as the input (5×5).
The output-size formula matches the one-dimensional case. Suppose the input image is F×F, the kernel matrix is K×K, and the strides are (S, S). With "VALID" padding the output image is N×N with N = ⌊(F - K) / S⌋ + 1; with "SAME" padding, N = ⌈F / S⌉.
The parameters filters, kernel_size, strides, padding, and so on have the same meaning as the corresponding parameters of the one-dimensional layer; only their form differs. For example, kernel_size=5 in Conv1D means a kernel of size 5, while in Conv2D kernel_size should be (5, 5) for a 5×5 kernel.
The dilation_rate parameter is the dilation rate used to dilate the kernel matrix. It may be a single integer, which specifies the same value for every spatial dimension.
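Dilation spreads the kernel taps apart: a kernel of size K with dilation rate d covers d * (K - 1) + 1 input positions. A small sketch (our own helper, not a Keras API):

```python
def dilated_extent(K, d):
    # A kernel of size K with dilation rate d spans d * (K - 1) + 1 input positions
    return d * (K - 1) + 1

# Kernel size 3 with dilation rate 2 spans 5 positions, so a 'valid'
# convolution over an input of length 10 yields 10 - 5 + 1 = 6 outputs.
print(dilated_extent(3, 2))            # 5
print(10 - dilated_extent(3, 2) + 1)   # 6
```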
The input to a two-dimensional convolution layer must be four-dimensional, with shape (batch_size, height, width, channels): batch_size is the batch size, height and width are the height and width of the data, and channels is the number of channels.
Example usage
model = tf.keras.Sequential([
    # Input layer; output shape (None, 128, 128, 3)
    tf.keras.layers.InputLayer(input_shape=(128, 128, 3)),
    # 2D convolution; output shape (None, 42, 42, 16)
    tf.keras.layers.Conv2D(16, (5, 5), (3, 3), activation="relu"),
    # 2D max-pooling layer; output shape (None, 21, 21, 16)
    tf.keras.layers.MaxPooling2D((2, 2)),
    # 2D convolution; output shape (None, 5, 5, 32)
    tf.keras.layers.Conv2D(32, (5, 5), (4, 4), activation="relu"),
    # 2D max-pooling layer; output shape (None, 1, 1, 32)
    tf.keras.layers.MaxPooling2D((5, 5)),
    # Flatten layer; output shape (None, 32)
    tf.keras.layers.Flatten(),
    # Fully connected layer; output shape (None, 4)
    tf.keras.layers.Dense(4)
])
Model: "sequential"
_________________________________________________________________
 Layer (type)                     Output Shape          Param #
=================================================================
 conv2d (Conv2D)                  (None, 42, 42, 16)    1216
 max_pooling2d (MaxPooling2D)     (None, 21, 21, 16)    0
 conv2d_1 (Conv2D)                (None, 5, 5, 32)      12832
 max_pooling2d_1 (MaxPooling2D)   (None, 1, 1, 32)      0
 flatten (Flatten)                (None, 32)            0
 dense (Dense)                    (None, 4)             132
=================================================================
Total params: 14,180
Trainable params: 14,180
Non-trainable params: 0
_________________________________________________________________
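The spatial sizes in the summary above follow from the "VALID" formula N = ⌊(F - K) / S⌋ + 1 and from max pooling, whose stride defaults to the pool size. Tracing one side of the square input (sketch helpers, names ours):

```python
def conv_out(F, K, S):
    # 'valid' convolution output size along one dimension
    return (F - K) // S + 1

def pool_out(F, P):
    # Max pooling with the default stride equal to the pool size
    return F // P

side = 128
side = conv_out(side, 5, 3)  # Conv2D(16, (5, 5), (3, 3)) -> 42
side = pool_out(side, 2)     # MaxPooling2D((2, 2))       -> 21
side = conv_out(side, 5, 4)  # Conv2D(32, (5, 5), (4, 4)) -> 5
side = pool_out(side, 5)     # MaxPooling2D((5, 5))       -> 1
print(side)  # 1
```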
3. The tf.keras.layers.Conv3D function
The function prototype
tf.keras.layers.Conv3D(
filters,
kernel_size,
strides=(1, 1, 1),
padding='valid',
data_format=None,
dilation_rate=(1, 1, 1),
groups=1,
activation=None,
use_bias=True,
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
**kwargs
)
Function description
Three-dimensional convolution is used in fields such as medical imaging and video processing (e.g., detecting human actions); it convolves over three dimensions.

Three-dimensional convolution applies a three-dimensional filter to the data; the filter moves in three directions (x, y, z) to compute low-level feature representations. The output is a three-dimensional volume, such as a cube or box, which helps with object detection in video and in three-dimensional medical images.
The parameters have meanings similar to those of Conv1D and Conv2D, so they are not repeated here. The input of a three-dimensional convolution is a five-dimensional tensor of shape (batch_size, frames, height, width, channels): batch_size is the batch size, frames can be understood as the number of frames in a video (each frame being an image), height and width are the image height and width, and channels is the number of image channels. The output is also a five-dimensional tensor.
Example usage
model = tf.keras.Sequential([
    # Input layer; output shape (None, 128, 128, 128, 3)
    tf.keras.layers.InputLayer(input_shape=(128, 128, 128, 3)),
    # 3D convolution; output shape (None, 42, 42, 42, 16)
    tf.keras.layers.Conv3D(16, (5, 5, 5), (3, 3, 3), activation="relu"),
    # 3D max-pooling layer; output shape (None, 21, 21, 21, 16)
    tf.keras.layers.MaxPooling3D((2, 2, 2)),
    # 3D convolution; output shape (None, 5, 5, 5, 32)
    tf.keras.layers.Conv3D(32, (5, 5, 5), (4, 4, 4), activation="relu"),
    # 3D max-pooling layer; output shape (None, 1, 1, 1, 32)
    tf.keras.layers.MaxPooling3D((5, 5, 5)),
    # Flatten layer; output shape (None, 32)
    tf.keras.layers.Flatten(),
    # Fully connected layer; output shape (None, 16)
    tf.keras.layers.Dense(16, activation="relu"),
    # Fully connected layer; output shape (None, 4)
    tf.keras.layers.Dense(4, activation="tanh")
])
Model: "sequential"
_________________________________________________________________
 Layer (type)                     Output Shape              Param #
=================================================================
 conv3d (Conv3D)                  (None, 42, 42, 42, 16)    6016
 max_pooling3d (MaxPooling3D)     (None, 21, 21, 21, 16)    0
 conv3d_1 (Conv3D)                (None, 5, 5, 5, 32)       64032
 max_pooling3d_1 (MaxPooling3D)   (None, 1, 1, 1, 32)       0
 flatten (Flatten)                (None, 32)                0
 dense (Dense)                    (None, 16)                528
 dense_1 (Dense)                  (None, 4)                 68
=================================================================
Total params: 70,644
Trainable params: 70,644
Non-trainable params: 0
_________________________________________________________________
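The Conv3D parameter counts can be checked the same way as in the 1D case: each filter has one weight per kernel element per input channel, plus a bias. A sketch (helper name is ours):

```python
def conv3d_params(kd, kh, kw, in_channels, filters, use_bias=True):
    # Each filter has kd * kh * kw * in_channels weights, plus one bias
    return (kd * kh * kw * in_channels + int(use_bias)) * filters

print(conv3d_params(5, 5, 5, 3, 16))   # 6016, the conv3d row
print(conv3d_params(5, 5, 5, 16, 32))  # 64032, the conv3d_1 row
```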
Copyright notice
This article was written by [Live up to your youth]. Please include a link to the original when reposting.
https://yzsam.com/2022/04/202204220657127335.html