
Keras.layers: an introduction to the various layers


Original article (Chinese): https://www.cnblogs.com/peng8098/p/keras_7.html

I. Network layers

The main layer families in keras.layers are:

core layers (Core), convolutional layers (Convolutional), pooling layers (Pooling), locally connected layers, recurrent layers (Recurrent), embedding layers (Embedding), advanced activation layers, normalization layers, noise layers, and layer wrappers. You can also write your own custom layers.

Common operations on layer objects:

layer.get_weights()         # return the weights of the layer as a list of numpy arrays
layer.set_weights(weights)  # load weights (numpy arrays) into the layer
config = layer.get_config()        # save the layer's configuration
layer = layer_from_config(config)  # rebuild a layer from a configuration

# If the layer has a single node (i.e., it is not a shared layer), its input tensor,
# output tensor, input shape and output shape can be obtained with:
layer.input
layer.output
layer.input_shape
layer.output_shape

# If the layer has multiple nodes, use:
layer.get_input_at(node_index)
layer.get_output_at(node_index)
layer.get_input_shape_at(node_index)
layer.get_output_shape_at(node_index)
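As a quick illustration (a minimal sketch, assuming Keras 2-style imports and a model that has already been built), weights can be read and written like this:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(4, input_dim=8))

layer = model.layers[0]
w, b = layer.get_weights()                 # kernel of shape (8, 4), bias of shape (4,)
layer.set_weights([np.zeros_like(w), b])   # write modified weights back
print(layer.input_shape, layer.output_shape)   # (None, 8) (None, 4)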

1、Common layers (Core)

1.1、Dense layer (fully connected layer)

keras.layers.core.Dense(units,activation=None,use_bias=True,kernel_initializer='glorot_uniform',bias_initializer='zeros',kernel_regularizer=None,bias_regularizer=None,activity_regularizer=None,kernel_constraint=None,bias_constraint=None)

Parameters:

units: positive integer, the dimensionality of the output.
activation: activation function, given as the name of a predefined activation or an element-wise function. If not specified, no activation is applied (i.e., the linear activation a(x) = x).
use_bias: Boolean, whether to use a bias vector.
kernel_initializer: initializer for the weight matrix, either the string name of a predefined initializer or an initializer object.
bias_initializer: initializer for the bias vector, either the string name of a predefined initializer or an initializer object.
kernel_regularizer / bias_regularizer / activity_regularizer: regularizers applied to the weights, the bias and the layer output, respectively.
kernel_constraint / bias_constraint: constraints applied to the weights and the bias, respectively.
input_dim: dimensionality of the input (only needed when this is the first layer).

The layer implements the operation

output = activation(dot(input, kernel) + bias)
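For instance (a minimal sketch, assuming Keras 2-style imports), a small stack of Dense layers on 16-dimensional inputs:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, activation='relu', input_dim=16))  # output shape (None, 32)
model.add(Dense(1, activation='sigmoid'))              # output shape (None, 1)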

1.2、Activation layer

keras.layers.core.Activation(activation)

The Activation layer applies an activation function to the output of the preceding layer.

Parameters:

activation: the activation function to use, given as the name of a predefined activation or a TensorFlow/Theano function.
Input shape: arbitrary. When this layer is used as the first layer, specify input_shape.
Output shape: same as the input shape.

1.3、Dropout layer

keras.layers.core.Dropout(rate, noise_shape=None, seed=None)

Applies Dropout to the input. Dropout randomly sets a fraction rate of the input units to 0 at each parameter update during training, which helps prevent overfitting.

Parameters

rate: float between 0 and 1, the fraction of input units to drop.
noise_shape: integer tensor, the shape of the binary dropout mask that will be multiplied with the input. For example, if your input has shape (batch_size, timesteps, features) and you want the dropout mask to be the same for all timesteps, you can pass noise_shape=(batch_size, 1, features).
seed: integer, random seed.
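A minimal sketch (assuming Keras 2-style imports) of Dropout between two Dense layers:

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dropout(0.5))          # during training, half of the 64 activations are dropped
model.add(Dense(10, activation='softmax'))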

1.4、Flatten layer

keras.layers.core.Flatten()

The Flatten layer flattens the input, i.e., it turns a multi-dimensional input into a one-dimensional one. It is commonly used in the transition from convolutional layers to fully connected layers. Flatten does not affect the batch size.

demo:

model = Sequential()
model.add(Convolution2D(64, 3, 3,
            border_mode='same',
            input_shape=(3, 32, 32)))

# now: model.output_shape == (None, 64, 32, 32)

model.add(Flatten())
# now: model.output_shape == (None, 65536)

1.5、Reshape layer

keras.layers.core.Reshape(target_shape)

The Reshape layer reshapes the input into a given target shape.

Parameters

target_shape: the target shape, an integer tuple that does not include the samples (batch) dimension.
Input shape: arbitrary, but the total size of the input must be fixed. When this layer is used as the first layer of a model, specify input_shape.
Output shape: (batch_size,) + target_shape

demo:

# as first layer in a Sequential model
model = Sequential()
model.add(Reshape((3, 4), input_shape=(12,)))
# now: model.output_shape == (None, 3, 4)
# note: `None` is the batch dimension

# as intermediate layer in a Sequential model
model.add(Reshape((6, 2)))
# now: model.output_shape == (None, 6, 2)

# also supports shape inference using `-1` as dimension
model.add(Reshape((-1, 2, 2)))
# now: model.output_shape == (None, 3, 2, 2)

1.6、Permute layer

keras.layers.core.Permute(dims)

The Permute layer rearranges the input dimensions according to a given pattern. It is useful, for example, when connecting an RNN and a CNN.

Parameters

dims: integer tuple specifying the permutation pattern; it does not include the samples dimension, and indexing starts at 1. For example, (2, 1) permutes the second input dimension into the first output dimension and the first input dimension into the second output dimension.

model = Sequential()
model.add(Permute((2, 1), input_shape=(10, 64)))
# now: model.output_shape == (None, 64, 10)
# note: `None` is the batch dimension

Input shape: arbitrary. When this layer is used as the first layer, specify input_shape.
Output shape: same as the input shape, but with the dimensions reordered according to the given pattern.

1.7、RepeatVector layer

keras.layers.core.RepeatVector(n)

The RepeatVector layer repeats the input n times.

Parameters

n: integer, the number of repetitions.
Input shape: 2D tensor of shape (nb_samples, features).
Output shape: 3D tensor of shape (nb_samples, n, features).

Example

model = Sequential()
model.add(Dense(32, input_dim=32))
# now: model.output_shape == (None, 32)
# note: `None` is the batch dimension

model.add(RepeatVector(3))
# now: model.output_shape == (None, 3, 32)

1.8、Lambda layer

keras.layers.core.Lambda(function, output_shape=None, mask=None, arguments=None)

Wraps an arbitrary Theano/TensorFlow expression and applies it to the output of the previous layer.

Parameters

function: the function to apply; it takes a single argument, the output of the previous layer.
output_shape: the shape the function returns, either a tuple or a function that computes the output shape from the input shape.
mask: a mask.
arguments: optional dictionary of additional keyword arguments to pass to the function.
Input shape: arbitrary. When this layer is used as the first layer, specify input_shape.
Output shape: the shape given by the output_shape argument; with the TensorFlow backend it can be inferred automatically.

# add a x -> x^2 layer
model.add(Lambda(lambda x: x ** 2))

# add a layer that returns the concatenation
# of the positive part of the input and
# the opposite of the negative part

from keras import backend as K

def antirectifier(x):
    x -= K.mean(x, axis=1, keepdims=True)
    x = K.l2_normalize(x, axis=1)
    pos = K.relu(x)
    neg = K.relu(-x)
    return K.concatenate([pos, neg], axis=1)

def antirectifier_output_shape(input_shape):
    shape = list(input_shape)
    assert len(shape) == 2  # only valid for 2D tensors
    shape[-1] *= 2
    return tuple(shape)

model.add(Lambda(antirectifier,
         output_shape=antirectifier_output_shape))

1.9、ActivityRegularizer layer

keras.layers.core.ActivityRegularization(l1=0.0, l2=0.0)

The data passing through this layer is unchanged, but a regularization penalty based on its activations is added to the loss function.

Parameters

l1: L1 regularization factor (positive float).
l2: L2 regularization factor (positive float).
Input shape: arbitrary. When this layer is used as the first layer, specify input_shape.
Output shape: same as the input shape.

1.10、Masking layer

keras.layers.core.Masking(mask_value=0.0)
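The original text gives only the signature. For context (based on the standard Keras behaviour), Masking skips timesteps in a sequence: for each timestep of the input, if all values at that timestep equal mask_value, the timestep is masked (skipped) in all downstream layers that support masking. A minimal sketch:

from keras.models import Sequential
from keras.layers import Masking, LSTM

# sequences of length 10 with 16 features, padded with 0.0
model = Sequential()
model.add(Masking(mask_value=0.0, input_shape=(10, 16)))
model.add(LSTM(32))   # the LSTM ignores the masked (all-zero) timesteps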

2、Convolutional layers (Convolutional)

2.1、Conv1D layer

keras.layers.convolutional.Conv1D(filters, kernel_size, strides=1, padding='valid', dilation_rate=1, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)

1D convolution (i.e., temporal convolution), used for neighborhood filtering on a one-dimensional input. When this layer is used as the first layer, the keyword argument input_shape is required. For example, (10, 128) denotes a sequence of length 10 in which each step is a 128-dimensional vector, and (None, 128) denotes a variable-length sequence of 128-dimensional vectors.

This layer convolves the input with the convolution kernel along a single spatial (or temporal) dimension. If use_bias=True, a bias term is added; if activation is not None, the activation function is applied to the output.

Parameters

filters: the number of convolution kernels (the dimensionality of the output).
kernel_size: an integer or a list/tuple of a single integer, the length of the convolution window.
strides: an integer or a list/tuple of a single integer, the stride of the convolution. Any strides value != 1 is incompatible with any dilation_rate value != 1.
padding: the padding strategy, one of "valid", "same" or "causal". "causal" produces causal (dilated) convolutions, i.e., output[t] does not depend on input[t+1:]; this is useful when modeling temporal signals where the model should not violate the temporal order (see WaveNet: A Generative Model for Raw Audio, section 2.1). "valid" means no padding, so the borders are not processed. "same" pads so that the output usually has the same length as the input.
activation: activation function, given as the name of a predefined activation or an element-wise function. If not specified, no activation is applied (i.e., the linear activation a(x) = x).
dilation_rate: an integer or a list/tuple of a single integer, the dilation rate for dilated convolution. Any dilation_rate value != 1 is incompatible with any strides value != 1.
use_bias: Boolean, whether to use a bias vector.
kernel_initializer: initializer for the kernel weights, either the string name of a predefined initializer or an initializer object (see initializers).
bias_initializer: initializer for the bias vector, either the string name of a predefined initializer or an initializer object (see initializers).
kernel_regularizer: regularizer applied to the kernel weights (a Regularizer object).
bias_regularizer: regularizer applied to the bias vector (a Regularizer object).
activity_regularizer: regularizer applied to the layer output (a Regularizer object).
kernel_constraint: constraint applied to the kernel weights (a Constraints object).
bias_constraint: constraint applied to the bias vector (a Constraints object).
Input shape: 3D tensor of shape (samples, steps, input_dim).
Output shape: 3D tensor of shape (samples, new_steps, nb_filter); steps may change because of padding.

[Tips] Convolution1D can be viewed as a fast version of Convolution2D: a 1D convolution over a (10, 32) signal is equivalent to a 2D convolution with a kernel of shape (filter_length, 32).
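A minimal sketch (assuming Keras 2-style layer names) of a causal Conv1D stack for sequence data:

from keras.models import Sequential
from keras.layers import Conv1D, GlobalMaxPooling1D, Dense

model = Sequential()
model.add(Conv1D(32, 3, padding='causal', dilation_rate=2,
                 activation='relu', input_shape=(None, 128)))  # variable-length sequences
model.add(GlobalMaxPooling1D())
model.add(Dense(1, activation='sigmoid'))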

2.2、Conv2D layer

keras.layers.convolutional.Conv2D(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)

2D convolution, i.e., spatial convolution over images. This layer performs sliding-window convolution on a two-dimensional input. When this layer is used as the first layer, provide the input_shape argument, e.g. input_shape = (128, 128, 3) for 128*128 RGB images (data_format='channels_last').

Parameters

filters: the number of convolution kernels (the dimensionality of the output).
kernel_size: a single integer or a list/tuple of 2 integers, the height and width of the convolution kernel. A single integer means the same length in every spatial dimension.
strides: a single integer or a list/tuple of 2 integers, the stride of the convolution. A single integer means the same stride in every spatial dimension. Any strides value != 1 is incompatible with any dilation_rate value != 1.
padding: the padding strategy, "valid" or "same". "valid" means no padding, so the borders are not processed. "same" pads so that the output usually has the same spatial shape as the input.
activation: activation function, given as the name of a predefined activation or an element-wise function. If not specified, no activation is applied (i.e., the linear activation a(x) = x).
dilation_rate: a single integer or a list/tuple of 2 integers, the dilation rate for dilated convolution. Any dilation_rate value != 1 is incompatible with any strides value != 1.
data_format: string, one of "channels_first" or "channels_last", the position of the channel dimension of the images. This corresponds to image_dim_ordering in Keras 1.x: "channels_last" corresponds to the old "tf" and "channels_first" to the old "th". For a 128x128 RGB image, "channels_first" data is arranged as (3, 128, 128) and "channels_last" data as (128, 128, 3). The default is the value set in ~/.keras/keras.json; if it has never been set, it is "channels_last".
use_bias: Boolean, whether to use a bias vector.
kernel_initializer: initializer for the kernel weights (string name or initializer object, see initializers).
bias_initializer: initializer for the bias vector (string name or initializer object, see initializers).
kernel_regularizer: regularizer applied to the kernel weights (a Regularizer object).
bias_regularizer: regularizer applied to the bias vector (a Regularizer object).
activity_regularizer: regularizer applied to the layer output (a Regularizer object).
kernel_constraint: constraint applied to the kernel weights (a Constraints object).
bias_constraint: constraint applied to the bias vector (a Constraints object).
Input shape:
In 'channels_first' mode, a 4D tensor of shape (samples, channels, rows, cols).
In 'channels_last' mode, a 4D tensor of shape (samples, rows, cols, channels).

Note that the input shape here refers to the shape seen by the layer's internal implementation, not the input_shape argument passed to the layer; see the example below.

Output shape:
In 'channels_first' mode, a 4D tensor of shape (samples, nb_filter, new_rows, new_cols).
In 'channels_last' mode, a 4D tensor of shape (samples, new_rows, new_cols, nb_filter).

The number of output rows and columns may change depending on the padding.
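A minimal sketch (assuming channels_last data and Keras 2-style layer names):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', activation='relu',
                 input_shape=(128, 128, 3)))      # -> (None, 128, 128, 32)
model.add(MaxPooling2D((2, 2)))                   # -> (None, 64, 64, 32)
model.add(Conv2D(64, (3, 3), activation='relu'))  # 'valid' padding -> (None, 62, 62, 64)
model.add(Flatten())
model.add(Dense(10, activation='softmax'))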

2.3、SeparableConv2D layer

keras.layers.convolutional.SeparableConv2D(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, depth_multiplier=1, activation=None, use_bias=True, depthwise_initializer='glorot_uniform', pointwise_initializer='glorot_uniform', bias_initializer='zeros', depthwise_regularizer=None, pointwise_regularizer=None, bias_regularizer=None, activity_regularizer=None, depthwise_constraint=None, pointwise_constraint=None, bias_constraint=None)

This layer is a depthwise separable 2D convolution.

Separable convolution first performs a depthwise convolution (acting on each input channel separately) and then a pointwise convolution that mixes the results into the output channels. The depth_multiplier argument controls how many output channels the depthwise step generates per input channel.

Intuitively, separable convolution can be seen as factoring a convolution kernel into two smaller kernels, or as an extreme version of an Inception module.

When this layer is used as the first layer, provide the input_shape argument, e.g. input_shape = (3, 128, 128) for 128*128 RGB images (channels_first).

Parameters

filters: the number of convolution kernels (the dimensionality of the output).
kernel_size: a single integer or a list/tuple of 2 integers, the height and width of the convolution kernel. A single integer means the same length in every spatial dimension.
strides: a single integer or a list/tuple of 2 integers, the stride of the convolution; any strides value != 1 is incompatible with any dilation_rate value != 1.
padding: "valid" or "same", as in Conv2D.
activation: activation function; if not specified, no activation is applied (linear a(x) = x).
dilation_rate: a single integer or a list/tuple of 2 integers, the dilation rate for dilated convolution; any dilation_rate value != 1 is incompatible with any strides value != 1.
data_format: "channels_first" or "channels_last", as in Conv2D.
use_bias: Boolean, whether to use a bias vector.
depth_multiplier: the number of output channels generated per input channel in the depthwise step.
depthwise_initializer / pointwise_initializer / bias_initializer: initializers for the depthwise kernel, the pointwise kernel and the bias (string name or initializer object, see initializers).
depthwise_regularizer / pointwise_regularizer / bias_regularizer / activity_regularizer: regularizers applied to the depthwise kernel, the pointwise kernel, the bias and the layer output (Regularizer objects).
depthwise_constraint / pointwise_constraint / bias_constraint: constraints applied to the depthwise kernel, the pointwise kernel and the bias (Constraints objects).
Input shape
In 'channels_first' mode, a 4D tensor of shape (samples, channels, rows, cols).
In 'channels_last' mode, a 4D tensor of shape (samples, rows, cols, channels).

Note that the input shape here refers to the shape seen by the layer's internal implementation, not the input_shape argument passed to the layer.

Output shape
In 'channels_first' mode, a 4D tensor of shape (samples, nb_filter, new_rows, new_cols).
In 'channels_last' mode, a 4D tensor of shape (samples, new_rows, new_cols, nb_filter).

The number of output rows and columns may change depending on the padding.

2.4、Conv2DTranspose layer

keras.layers.convolutional.Conv2DTranspose(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)

This layer is a transposed convolution (sometimes called deconvolution). Transposed convolutions are typically needed when you want to go in the opposite direction of an ordinary convolution, e.g., to map a tensor with the output shape of some convolution to a tensor with the shape of its input, while keeping a connectivity pattern compatible with that convolution.

When this layer is used as the first layer, provide the input_shape argument, e.g. input_shape = (3, 128, 128) for 128*128 RGB images (channels_first).

Parameters

filters: the number of convolution kernels (the dimensionality of the output).
kernel_size: a single integer or a list/tuple of 2 integers, the height and width of the convolution kernel. A single integer means the same length in every spatial dimension.
strides: a single integer or a list/tuple of 2 integers, the stride of the convolution. A single integer means the same stride in every spatial dimension.
padding: "valid" or "same", as in Conv2D.
activation: activation function; if not specified, no activation is applied (linear a(x) = x).
data_format: "channels_first" or "channels_last", as in Conv2D.
use_bias: Boolean, whether to use a bias vector.
kernel_initializer / bias_initializer: initializers for the kernel weights and the bias (string name or initializer object, see initializers).
kernel_regularizer / bias_regularizer / activity_regularizer: regularizers applied to the kernel weights, the bias and the layer output (Regularizer objects).
kernel_constraint / bias_constraint: constraints applied to the kernel weights and the bias (Constraints objects).
Input shape
In 'channels_first' mode, a 4D tensor of shape (samples, channels, rows, cols).
In 'channels_last' mode, a 4D tensor of shape (samples, rows, cols, channels).

Note that the input shape here refers to the shape seen by the layer's internal implementation, not the input_shape argument passed to the layer.

Output shape
In 'channels_first' mode, a 4D tensor of shape (samples, nb_filter, new_rows, new_cols).
In 'channels_last' mode, a 4D tensor of shape (samples, new_rows, new_cols, nb_filter).

The number of output rows and columns may change depending on the padding.
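A minimal upsampling sketch (assuming channels_last data): with strides=(2, 2) and padding='same', the spatial dimensions are doubled.

from keras.models import Sequential
from keras.layers import Conv2DTranspose

model = Sequential()
model.add(Conv2DTranspose(16, (3, 3), strides=(2, 2), padding='same',
                          input_shape=(14, 14, 8)))
# now: model.output_shape == (None, 28, 28, 16)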

2.5、Conv3D layer

keras.layers.convolutional.Conv3D(filters, kernel_size, strides=(1, 1, 1), padding='valid', data_format=None, dilation_rate=(1, 1, 1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)

3D convolution performs sliding-window convolution on a three-dimensional input. When this layer is used as the first layer, provide the input_shape argument, e.g. input_shape = (3, 10, 128, 128) for 10 frames of 128*128 RGB images (channels_first). The position of the channel dimension is still controlled by the data_format argument.

Parameters

filters: the number of convolution kernels (the dimensionality of the output).
kernel_size: a single integer or a list/tuple of 3 integers, the size of the convolution kernel along each dimension. A single integer means the same length in every spatial dimension.
strides: a single integer or a list/tuple of 3 integers, the stride of the convolution. A single integer means the same stride in every spatial dimension. Any strides value != 1 is incompatible with any dilation_rate value != 1.
padding: "valid" or "same", as in Conv2D.
activation: activation function; if not specified, no activation is applied (linear a(x) = x).
dilation_rate: a single integer or a list/tuple of 3 integers, the dilation rate for dilated convolution; any dilation_rate value != 1 is incompatible with any strides value != 1.
data_format: "channels_first" or "channels_last", the position of the channel dimension of the data. For 128x128x128 data, "channels_first" data is arranged as (3, 128, 128, 128) and "channels_last" data as (128, 128, 128, 3). The default is the value set in ~/.keras/keras.json; if it has never been set, it is "channels_last".
use_bias: Boolean, whether to use a bias vector.
kernel_initializer / bias_initializer: initializers for the kernel weights and the bias (string name or initializer object, see initializers).
kernel_regularizer / bias_regularizer / activity_regularizer: regularizers applied to the kernel weights, the bias and the layer output (Regularizer objects).
kernel_constraint / bias_constraint: constraints applied to the kernel weights and the bias (Constraints objects).
Input shape
In 'channels_first' mode, a 5D tensor of shape (samples, channels, input_dim1, input_dim2, input_dim3).
In 'channels_last' mode, a 5D tensor of shape (samples, input_dim1, input_dim2, input_dim3, channels).

The input shape here refers to the shape seen by the layer's internal implementation, not the input_shape argument passed to the layer.

2.6、Cropping1D layer

keras.layers.convolutional.Cropping1D(cropping=(1, 1))

Crops a 1D input (e.g., a temporal sequence) along the time axis (axis 1).

Parameters

cropping: tuple of length 2, how many elements to crop from the beginning and the end of the sequence.
Input shape: 3D tensor of shape (samples, axis_to_crop, features).
Output shape: 3D tensor of shape (samples, cropped_axis, features).

2.7、Cropping2D layer

keras.layers.convolutional.Cropping2D(cropping=((0, 0), (0, 0)), data_format=None)

Crops a 2D input (e.g., images) along the spatial dimensions, i.e., height and width.

Parameters

cropping: a tuple of 2 tuples of integers, ((top_crop, bottom_crop), (left_crop, right_crop)), the number of elements to crop from each side in the height and width directions.
data_format: "channels_first" or "channels_last", as in Conv2D.
Input shape: 4D tensor of shape (samples, depth, first_axis_to_crop, second_axis_to_crop).
Output shape: 4D tensor of shape (samples, depth, first_cropped_axis, second_cropped_axis).

2.8、Cropping3D layer

keras.layers.convolutional.Cropping3D(cropping=((1, 1), (1, 1), (1, 1)), data_format=None)

Crops a 3D input (e.g., spatio-temporal data) along three dimensions.

Parameters

cropping: a tuple of 3 tuples of integers, the number of elements to crop from the beginning and the end along each of the three dimensions.
data_format: "channels_first" or "channels_last", as in Conv3D.
Input shape: 5D tensor of shape (samples, depth, first_axis_to_crop, second_axis_to_crop, third_axis_to_crop).
Output shape: 5D tensor of shape (samples, depth, first_cropped_axis, second_cropped_axis, third_cropped_axis).

2.9、UpSampling1D layer

keras.layers.convolutional.UpSampling1D(size=2)

Repeats each timestep size times along the time axis.

Parameters

size: integer, the upsampling factor.
Input shape: 3D tensor of shape (samples, steps, features).
Output shape: 3D tensor of shape (samples, upsampled_steps, features).

2.10、UpSampling2D layer

keras.layers.convolutional.UpSampling2D(size=(2, 2), data_format=None)

Repeats the rows and columns of the data size[0] and size[1] times, respectively.

Parameters

size: integer tuple, the upsampling factors for rows and columns.

data_format: "channels_first" or "channels_last", as in Conv2D.

Input shape:
In 'channels_first' mode, a 4D tensor of shape (samples, channels, rows, cols).
In 'channels_last' mode, a 4D tensor of shape (samples, rows, cols, channels).

Output shape:
In 'channels_first' mode, a 4D tensor of shape (samples, channels, upsampled_rows, upsampled_cols).
In 'channels_last' mode, a 4D tensor of shape (samples, upsampled_rows, upsampled_cols, channels).
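A minimal sketch (channels_last) showing how the spatial dimensions grow:

from keras.models import Sequential
from keras.layers import UpSampling2D

model = Sequential()
model.add(UpSampling2D(size=(2, 2), input_shape=(16, 16, 8)))
# now: model.output_shape == (None, 32, 32, 8)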

2.11、UpSampling3D layer

keras.layers.convolutional.UpSampling3D(size=(2, 2, 2), data_format=None)

Repeats the three dimensions of the data size[0], size[1] and size[2] times, respectively.

This layer is currently only available with the Theano backend.

Parameters

size: integer tuple of length 3, the upsampling factors for the three dimensions.
data_format: "channels_first" or "channels_last", as in Conv3D.
Input shape:
In 'channels_first' mode, a 5D tensor of shape (samples, channels, dim1, dim2, dim3).
In 'channels_last' mode, a 5D tensor of shape (samples, dim1, dim2, dim3, channels).

Output shape:
In 'channels_first' mode, a 5D tensor of shape (samples, channels, upsampled_dim1, upsampled_dim2, upsampled_dim3).
In 'channels_last' mode, a 5D tensor of shape (samples, upsampled_dim1, upsampled_dim2, upsampled_dim3, channels).

2.12、ZeroPadding1D layer

keras.layers.convolutional.ZeroPadding1D(padding=1)

Pads the beginning and the end of a 1D input (e.g., a temporal sequence) with zeros, in order to control the length of the output of a subsequent convolution.

Parameters

padding: integer, the number of zeros to add at the beginning and the end of the padded axis (axis 1; axis 0 is the samples dimension).
Input shape: 3D tensor of shape (samples, axis_to_pad, features).
Output shape: 3D tensor of shape (samples, padded_axis, features).

2.13、ZeroPadding2D layer

keras.layers.convolutional.ZeroPadding2D(padding=(1, 1), data_format=None)

Pads the borders of a 2D input (e.g., images) with zeros, in order to control the size of the feature maps produced by a subsequent convolution.

Parameters

padding: integer tuple, the number of zeros to add at the beginning and the end of the padded axes. The padded axes are axes 3 and 4 (i.e., the rows and columns of the image in 'th' mode); in 'channels_last' mode the padded axes are 2 and 3.
data_format: "channels_first" or "channels_last", as in Conv2D.
Input shape:
In 'channels_first' mode, a 4D tensor of shape (samples, channels, first_axis_to_pad, second_axis_to_pad).
In 'channels_last' mode, a 4D tensor of shape (samples, first_axis_to_pad, second_axis_to_pad, channels).

Output shape:
In 'channels_first' mode, a 4D tensor of shape (samples, channels, first_padded_axis, second_padded_axis).
In 'channels_last' mode, a 4D tensor of shape (samples, first_padded_axis, second_padded_axis, channels).
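A minimal sketch (channels_last):

from keras.models import Sequential
from keras.layers import ZeroPadding2D

model = Sequential()
model.add(ZeroPadding2D(padding=(1, 1), input_shape=(28, 28, 1)))
# now: model.output_shape == (None, 30, 30, 1)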

2.14、ZeroPadding3D layer

keras.layers.convolutional.ZeroPadding3D(padding=(1, 1, 1), data_format=None)

Pads the three dimensions of the data with zeros.

This layer is currently only available with the Theano backend.

Parameters

padding: integer tuple, the number of zeros to add at the beginning and the end of the padded axes (axes 3, 4 and 5 in 'channels_first' mode; axes 2, 3 and 4 in 'channels_last' mode).
data_format: "channels_first" or "channels_last", as in Conv3D.
Input shape:
In 'channels_first' mode, a 5D tensor of shape (samples, channels, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad).
In 'channels_last' mode, a 5D tensor of shape (samples, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad, channels).

Output shape:
In 'channels_first' mode, a 5D tensor of shape (samples, channels, first_padded_axis, second_padded_axis, third_padded_axis).
In 'channels_last' mode, a 5D tensor of shape (samples, first_padded_axis, second_padded_axis, third_padded_axis, channels).

3、Pooling layers (Pooling)

3.1、MaxPooling1D layer

keras.layers.pooling.MaxPooling1D(pool_size=2, strides=None, padding='valid')

Max pooling for 1D (temporal) data.

Parameters

pool_size: integer, the size of the pooling window.
strides: integer or None, the downsampling factor; e.g. 2 halves the length of the output. If None, it defaults to pool_size.
padding: 'valid' or 'same'.
Input shape: 3D tensor of shape (samples, steps, features).
Output shape: 3D tensor of shape (samples, downsampled_steps, features).

3.2、MaxPooling2D layer

keras.layers.pooling.MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format=None)

Max pooling for spatial (2D) data.

Parameters

pool_size: integer or tuple of 2 integers, the downsampling factors in the (vertical, horizontal) directions; (2, 2) halves the image in both dimensions. A single integer means the same factor in both dimensions.
strides: integer, tuple of 2 integers, or None; the stride values.
padding: 'valid' or 'same'.
data_format: "channels_first" or "channels_last", as in Conv2D.
Input shape
In 'channels_first' mode, a 4D tensor of shape (samples, channels, rows, cols).
In 'channels_last' mode, a 4D tensor of shape (samples, rows, cols, channels).

Output shape
In 'channels_first' mode, a 4D tensor of shape (samples, channels, pooled_rows, pooled_cols).
In 'channels_last' mode, a 4D tensor of shape (samples, pooled_rows, pooled_cols, channels).
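A minimal sketch (channels_last):

from keras.models import Sequential
from keras.layers import MaxPooling2D

model = Sequential()
model.add(MaxPooling2D(pool_size=(2, 2), input_shape=(64, 64, 3)))
# now: model.output_shape == (None, 32, 32, 3)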

3.3、MaxPooling3D layer

keras.layers.pooling.MaxPooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format=None)

Max pooling for 3D data (spatial or spatio-temporal). This layer is currently only available with the Theano backend.

Parameters

pool_size: integer or tuple of 3 integers, the downsampling factors in the three dimensions; (2, 2, 2) halves the size of the input in every dimension.
strides: integer, tuple of 3 integers, or None; the stride values.
padding: 'valid' or 'same'.
data_format: "channels_first" or "channels_last", as in Conv3D.
Input shape
In 'channels_first' mode, a 5D tensor of shape (samples, channels, len_pool_dim1, len_pool_dim2, len_pool_dim3).
In 'channels_last' mode, a 5D tensor of shape (samples, len_pool_dim1, len_pool_dim2, len_pool_dim3, channels).

Output shape
In 'channels_first' mode, a 5D tensor of shape (samples, channels, pooled_dim1, pooled_dim2, pooled_dim3).
In 'channels_last' mode, a 5D tensor of shape (samples, pooled_dim1, pooled_dim2, pooled_dim3, channels).

3.4、AveragePooling1D layer

keras.layers.pooling.AveragePooling1D(pool_size=2, strides=None, padding='valid')

Average pooling for 1D (temporal) data.

Parameters

pool_size: integer, the size of the pooling window.
strides: integer or None, the downsampling factor; e.g. 2 halves the length of the output. If None, it defaults to pool_size.
padding: 'valid' or 'same'.
Input shape: 3D tensor of shape (samples, steps, features).
Output shape: 3D tensor of shape (samples, downsampled_steps, features).

3.5、AveragePooling2D layer

keras.layers.pooling.AveragePooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format=None)

Average pooling for spatial (2D) data.

Parameters

pool_size: integer or tuple of 2 integers, the downsampling factors in the (vertical, horizontal) directions; (2, 2) halves the image in both dimensions. A single integer means the same factor in both dimensions.
strides: integer, tuple of 2 integers, or None; the stride values.
padding: 'valid' or 'same'.
data_format: "channels_first" or "channels_last", as in Conv2D.
Input shape
In 'channels_first' mode, a 4D tensor of shape (samples, channels, rows, cols).
In 'channels_last' mode, a 4D tensor of shape (samples, rows, cols, channels).

Output shape
In 'channels_first' mode, a 4D tensor of shape (samples, channels, pooled_rows, pooled_cols).
In 'channels_last' mode, a 4D tensor of shape (samples, pooled_rows, pooled_cols, channels).

3.6、AveragePooling3D layer

keras.layers.pooling.AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format=None)

Average pooling for 3D data (spatial or spatio-temporal). This layer is currently only available with the Theano backend.

Parameters

pool_size: integer or tuple of 3 integers, the downsampling factors in the three dimensions; (2, 2, 2) halves the size of the input in every dimension.
strides: integer, tuple of 3 integers, or None; the stride values.
padding: 'valid' or 'same'.
data_format: "channels_first" or "channels_last", as in Conv3D.
Input shape:
In 'channels_first' mode, a 5D tensor of shape (samples, channels, len_pool_dim1, len_pool_dim2, len_pool_dim3).
In 'channels_last' mode, a 5D tensor of shape (samples, len_pool_dim1, len_pool_dim2, len_pool_dim3, channels).

Output shape:
In 'channels_first' mode, a 5D tensor of shape (samples, channels, pooled_dim1, pooled_dim2, pooled_dim3).
In 'channels_last' mode, a 5D tensor of shape (samples, pooled_dim1, pooled_dim2, pooled_dim3, channels).

3.7、GlobalMaxPooling1D layer

keras.layers.pooling.GlobalMaxPooling1D()

Global max pooling for temporal data.

Input shape: 3D tensor of shape (samples, steps, features).
Output shape: 2D tensor of shape (samples, features).

3.8、GlobalAveragePooling1D layer

keras.layers.pooling.GlobalAveragePooling1D()

Global average pooling for temporal data.

Input shape: 3D tensor of shape (samples, steps, features).
Output shape: 2D tensor of shape (samples, features).

3.9、GlobalMaxPooling2D layer

keras.layers.pooling.GlobalMaxPooling2D(dim_ordering='default')

Global max pooling for spatial data.

Parameters

data_format: "channels_first" or "channels_last", as in Conv2D.
Input shape:
In 'channels_first' mode, a 4D tensor of shape (samples, channels, rows, cols).
In 'channels_last' mode, a 4D tensor of shape (samples, rows, cols, channels).

Output shape: 2D tensor of shape (nb_samples, channels).

3.10、GlobalAveragePooling2D layer

keras.layers.pooling.GlobalAveragePooling2D(dim_ordering='default')

Global average pooling for spatial data.

Parameters

data_format: "channels_first" or "channels_last", as in Conv2D.
Input shape:
In 'channels_first' mode, a 4D tensor of shape (samples, channels, rows, cols).
In 'channels_last' mode, a 4D tensor of shape (samples, rows, cols, channels).

Output shape: 2D tensor of shape (nb_samples, channels).
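A common use (a minimal sketch, channels_last) is to replace Flatten with global pooling before the classifier:

from keras.models import Sequential
from keras.layers import Conv2D, GlobalAveragePooling2D, Dense

model = Sequential()
model.add(Conv2D(64, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(GlobalAveragePooling2D())        # -> (None, 64), one value per channel
model.add(Dense(10, activation='softmax'))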

4、Locally connected layers (LocallyConnected)

5、Recurrent layers (Recurrent)

The recurrent layers include three models: LSTM, GRU and SimpleRNN.

5.1、Recurrent layer (abstract class, cannot be used directly)

keras.layers.recurrent.Recurrent(weights=None, return_sequences=False, go_backwards=False, stateful=False, unroll=False, consume_less='cpu', input_dim=None, input_length=None)

return_sequences: if True, return the full output sequence; if False, return only the last output of the sequence.

go_backwards: if True, process the input sequence in reverse; defaults to False.

stateful: Boolean, defaults to False. If True, the final state of sample i in one batch is used as the initial state of sample i in the next batch.

5.2、SimpleRNN layer (fully-connected RNN)

keras.layers.recurrent.SimpleRNN(output_dim, init='glorot_uniform', inner_init='orthogonal', activation='tanh', W_regularizer=None, U_regularizer=None, b_regularizer=None, dropout_W=0.0, dropout_U=0.0)

inner_init: initialization method for the inner (recurrent) weights.

dropout_W: float between 0 and 1, the fraction of input connections to drop.

dropout_U: float between 0 and 1, the fraction of recurrent connections to drop.

5.3、LSTM layer

keras.layers.recurrent.LSTM(output_dim, init='glorot_uniform', inner_init='orthogonal', forget_bias_init='one', activation='tanh', inner_activation='hard_sigmoid', W_regularizer=None, U_regularizer=None, b_regularizer=None, dropout_W=0.0, dropout_U=0.0)

forget_bias_init: initialization function for the forget gate bias; Jozefowicz et al. recommend initializing it with all ones.

inner_activation: activation function for the inner (recurrent) step.
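The signatures above follow the old Keras 1.x API. As a minimal sketch with the Keras 2-style API (where the first argument is called units), a two-layer LSTM over sequences of 100 steps with 32 features:

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(64, return_sequences=True, input_shape=(100, 32)))  # (None, 100, 64)
model.add(LSTM(32))                                                # (None, 32), last output only
model.add(Dense(1))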

6、Embedding layer

keras.layers.embeddings.Embedding(input_dim, output_dim, init='uniform', input_length=None, W_regularizer=None, activity_regularizer=None, W_constraint=None, mask_zero=False, weights=None, dropout=0.0)

The Embedding layer can only be used as the first layer of a model.

mask_zero: Boolean, whether the input value 0 should be treated as a 'padding' value that should be masked out. This is useful when using recurrent layers on variable-length input. If set to True, all subsequent layers in the model must support masking, otherwise an exception will be raised.
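A minimal sketch embedding a vocabulary of 10,000 tokens into 64-dimensional vectors, with 0 used as the padding index (the vocabulary size and sequence length are illustrative):

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(10000, 64, input_length=50, mask_zero=True))  # (None, 50, 64)
model.add(LSTM(32))                                               # masked timesteps are skipped
model.add(Dense(1, activation='sigmoid'))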

7、Merge layers

II. Network models

A network model combines the basic layers defined above.

Keras has two types of models: the sequential model (Sequential) and the functional model (Model). The functional model is the more general of the two; the sequential model is a special case of the functional model.

Some methods are shared by both model types.

Model methods:

model.summary(): prints an overview of the model; internally this calls keras.utils.print_summary.

model.get_config(): returns a Python dictionary containing the model's configuration.

model = Model.from_config(config): rebuilds the model from its config.
model = Sequential.from_config(config): rebuilds the model from its config.

model.get_weights(): returns a list of the model's weight tensors as numpy arrays.

model.set_weights(): loads weights (numpy arrays) into the model.

model.to_json(): returns a JSON string describing the model. It contains only the network structure, not the weights. The model can be rebuilt from the JSON string:

from keras.models import model_from_json

json_string = model.to_json()
model = model_from_json(json_string)

model.to_yaml(): like model.to_json(); the model can likewise be rebuilt from the YAML string:

from keras.models import model_from_yaml

yaml_string = model.to_yaml()
model = model_from_yaml(yaml_string)

model.save_weights(filepath): saves the model weights to the given path as an HDF5 file (suffix .h5).

model.load_weights(filepath, by_name=False): loads weights from an HDF5 file into the current model; by default the model structure is assumed to be unchanged. To load weights into a different model (with some layers in common), set by_name=True so that only layers with matching names are loaded.
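A minimal sketch (assuming model and other_model are already-built Keras models; the file name my_model_weights.h5 is just an illustrative placeholder):

model.save_weights('my_model_weights.h5')     # write weights to HDF5

# later, after rebuilding the same architecture:
model.load_weights('my_model_weights.h5')

# load into a different model that shares some layer names:
other_model.load_weights('my_model_weights.h5', by_name=True)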

Keras has two kinds of models: the Sequential model and the generic (functional) model.

2.1 Sequential Model

Sequential is a linear stack of layers.

A Sequential model can be constructed by passing a list of layers:

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    Dense(32, input_dim=784),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
])

Layers can also be added one by one with the .add() method:

model = Sequential()
model.add(Dense(32, input_dim=784))
model.add(Activation('relu'))

Two Sequential models can also be combined in some way using merge.

Sequential model methods:

compile(self, optimizer, loss, metrics=[], sample_weight_mode=None)

fit(self, x, y, batch_size=32, nb_epoch=10, verbose=1, callbacks=[], validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None)

evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None)

# generate output predictions for the input samples batch by batch; returns a numpy array of predictions
predict(self, x, batch_size=32, verbose=0)

# generate class predictions for the input samples batch by batch; returns a numpy array of class predictions
predict_classes(self, x, batch_size=32, verbose=1)

# generate class probability predictions for the input samples batch by batch; returns a numpy array of class probabilities
predict_proba(self, x, batch_size=32, verbose=1)

train_on_batch(self, x, y, class_weight=None, sample_weight=None)

test_on_batch(self, x, y, sample_weight=None)

predict_on_batch(self, x)


fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose=1, callbacks=[], validation_data=None, nb_val_samples=None, class_weight=None, max_q_size=10)

evaluate_generator(self, generator, val_samples, max_q_size=10)
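A minimal sketch of the typical workflow, reusing the Sequential model built above (Dense(32) -> ... -> softmax over 10 classes); the signatures listed are the Keras 1.x ones, and in Keras 2 nb_epoch is called epochs:

import numpy as np
from keras.utils import to_categorical

x_train = np.random.random((1000, 784))
y_train = to_categorical(np.random.randint(10, size=(1000,)), num_classes=10)

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=10, validation_split=0.1)

loss, acc = model.evaluate(x_train, y_train, batch_size=32)
preds = model.predict(x_train[:5])   # shape (5, 10)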

2.2 Generic (functional) model

Keras provides a generic model interface, Model:

It is the way to define models with multiple outputs, directed acyclic graphs of layers, or models with shared layers.

It applies to fully connected networks as well as multi-input, multi-output models.

For multiple inputs and outputs, the official example predicts how many likes and shares a news item will get: the main input is the news text itself, and extra inputs such as the publication date and the author can be added. See the official documentation for the full implementation:
http://keras-cn.readthedocs.io/en/latest/getting_started/functional_API/

So this kind of model leaves room for task-specific design.

Properties of a generic model:

model.layers: the layers that make up the model graph.
model.inputs: the list of the model's input tensors.
model.outputs: the list of the model's output tensors.

Methods: similar to those of the sequential model,
with the addition of get_layer.

get_layer(self, name=None, index=None)
This method retrieves a layer object by its index or name in the model. In a generic model, indices are assigned bottom-up, in order of horizontal traversal.

name: string, the name of the layer.
index: integer, the index of the layer.
The return value is the layer object.

from keras.models import Model
from keras.layers import Input, Dense

a = Input(shape=(32,))
b = Dense(32)(a)
model = Model(inputs=a, outputs=b)
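Continuing the example above, the Dense layer can be retrieved by index (the input layer is index 0):

layer = model.get_layer(index=1)   # the Dense(32) layer
print(layer.output_shape)          # (None, 32)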
