Convolutional Neural Network Notes
2022-08-06 06:17:00 【Mr Robot】
Event address: CSDN 21-day Learning Challenge
The biggest reason for learning is to get rid of mediocrity and get an extra splendid life one day earlier;
Convolutional Neural Networks (CNN)
A CNN can effectively reduce the complexity of a feedforward neural network (a traditional neural network). Common CNN architectures include LeNet-5, AlexNet, ZFNet, VGGNet, GoogLeNet, and ResNet. Among them ResNet, the ILSVRC 2015 champion, is more than 20 times deeper than AlexNet and 8 times deeper than VGGNet. From these architectures, one direction of CNN development is the increase in depth: the added nonlinearity lets the network approximate the objective function more closely and learn better feature representations, but it also increases the overall complexity of the network, making it harder to optimize and easier to overfit.
CNN is mainly used in application scenarios such as image classification and item recognition.
CNN is mainly divided into the following layers:
- Data input layer: Input Layer
- Convolution calculation layer: CONV Layer
- ReLU excitation layer: ReLU Layer
- Pooling layer: Pooling Layer
- Fully connected layer: FC Layer
As with other neural network / machine learning methods, the input data needs to be preprocessed. The main reasons are:
- The input dimensions have different units and scales, which may cause the neural network to converge slowly and train for a long time.
- Inputs with a large value range may dominate pattern classification, while inputs with a small range may contribute little.
- The activation functions in a neural network have a limited output range, so the training targets must be scaled accordingly. The sigmoid activation function, for example, is very flat outside a narrow interval around 0, so the discrimination becomes too small: f(100) and f(5) differ by only 0.0067.
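That sigmoid saturation figure is easy to verify numerically. A small sketch (mine, not from the original post):

```python
import math

def sigmoid(x):
    """Logistic sigmoid: saturates (becomes flat) for large |x|."""
    return 1.0 / (1.0 + math.exp(-x))

# Outside the steep region the outputs are almost indistinguishable:
print(round(sigmoid(100) - sigmoid(5), 4))  # → 0.0067
```

This is exactly why inputs with large magnitudes should be rescaled before training: two very different inputs map to nearly the same activation, and the gradient there is almost zero.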
Three common data preprocessing methods
De-mean
Center each dimension of the input data at 0.
Normalization
Normalize the magnitude of each dimension of the input data to the same range.
PCA / Whitening
Use PCA to reduce dimensionality; whitening then normalizes the amplitude on each feature axis of the data.
In the process of recognizing images, different cortical layers process different aspects of the data, such as color, shape, and brightness; the processing results of the different layers are then combined and mapped to produce the final result. The first part is essentially a local observation, and the second part combines those local results into a whole.
Convolution calculation layer: CONV Layer
Local association: each neuron is regarded as a filter
The window (receptive field) slides, and the filter calculates the local data
Related concepts:
- Depth (depth)
- Step size (stride)
- Padding value (zero-padding)
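The stride and zero-padding together determine the spatial size of the convolution output, via the standard formula (N − F + 2P) / S + 1 for input size N, filter size F, padding P, and stride S. A minimal sketch of this, plus a naive sliding-window convolution on a single square channel (function names are my own):

```python
import numpy as np

def conv_output_size(n, f, stride, padding):
    """Spatial output size: (n - f + 2*padding) // stride + 1."""
    return (n - f + 2 * padding) // stride + 1

def conv2d(x, w, stride=1, padding=0):
    """Naive 2D convolution (cross-correlation) of one square channel."""
    x = np.pad(x, padding)                     # zero-padding
    out_n = (x.shape[0] - w.shape[0]) // stride + 1
    out = np.empty((out_n, out_n))
    for i in range(out_n):                     # slide the window...
        for j in range(out_n):
            patch = x[i * stride:i * stride + w.shape[0],
                      j * stride:j * stride + w.shape[1]]
            out[i, j] = np.sum(patch * w)      # ...and apply the filter locally
    return out

# A 32x32 input with a 5x5 filter, stride 1 and padding 2 keeps the size at 32:
print(conv_output_size(32, 5, 1, 2))  # → 32
```

Each output cell is one "local observation" through the receptive field, which is exactly the local-association idea described above.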
Activation (excitation) layer suggestions
- In a CNN, try not to use sigmoid; if you must, use it only in the fully connected layers.
- Try ReLU first, because it iterates quickly, although it may not always work well.
- If ReLU fails, consider Leaky ReLU or Maxout, which handle most of the remaining cases.
- The tanh activation function works better in some cases, but such scenarios are rare.
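The activations recommended above are one-liners in numpy; the 0.01 slope for Leaky ReLU is a common default, chosen by me for illustration:

```python
import numpy as np

def relu(x):
    """ReLU: zero for negative inputs, identity for positive ones."""
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: a small slope alpha keeps gradients alive for x < 0."""
    return np.where(x > 0, x, alpha * x)

print(relu(np.array([-2.0, 3.0])))        # → [0. 3.]
print(leaky_relu(np.array([-2.0, 3.0])))  # → [-0.02  3.  ]
```

The small negative slope is the point of Leaky ReLU: unlike plain ReLU, a unit that receives negative inputs still passes a nonzero gradient, so it cannot permanently "die".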
The pooling layer sits between successive convolutional layers. Its main function is to reduce the number of parameters and the amount of computation in the network by progressively shrinking the spatial size of the representation; the pooling layer operates independently on each feature map. Using pooling layers compresses the data and parameter counts and reduces overfitting.
When compressing data and reducing the number of features, the pooling layer generally uses one of two strategies:
- Max Pooling: maximum pooling (the usual choice)
- Average Pooling: average pooling
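Both strategies can be sketched with the same sliding window; only the reduction differs. A minimal single-channel version (the 2x2 window with stride 2, the usual configuration, is my choice of default):

```python
import numpy as np

def pool2d(x, size=2, stride=2, mode="max"):
    """Pool one square feature map with a size x size window."""
    out_n = (x.shape[0] - size) // stride + 1
    out = np.empty((out_n, out_n))
    for i in range(out_n):
        for j in range(out_n):
            patch = x[i * stride:i * stride + size,
                      j * stride:j * stride + size]
            out[i, j] = patch.max() if mode == "max" else patch.mean()
    return out

x = np.array([[ 1.,  2.,  3.,  4.],
              [ 5.,  6.,  7.,  8.],
              [ 9., 10., 11., 12.],
              [13., 14., 15., 16.]])
print(pool2d(x))              # max:     [[ 6.  8.] [14. 16.]]
print(pool2d(x, mode="avg"))  # average: [[ 3.5  5.5] [11.5 13.5]]
```

Note that pooling has no learnable parameters: a 2x2/stride-2 window simply quarters the spatial size of each feature map, which is where the reduction in computation comes from.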