Classification of the CIFAR-100 dataset based on a convolutional neural network
2022-04-23 17:53:00 【Stephen_ Tao】
CIFAR100 dataset introduction
The CIFAR100 dataset contains 100 classes with 600 images each: 500 training images and 100 test images per class. The 100 classes are further grouped into 20 superclasses, so every image carries a "fine" label (the class it belongs to) and a "coarse" label (the superclass it belongs to).
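For reference, both label sets can be loaded through the label_mode argument of load_data (a minimal sketch; it uses the public tensorflow.keras import path rather than the tensorflow.python.keras path used later in this article):

from tensorflow.keras.datasets import cifar100

# "fine" labels: 100 class indices in [0, 99] (this is the default)
(x_train, y_fine), _ = cifar100.load_data(label_mode='fine')
# "coarse" labels: 20 superclass indices in [0, 19]
(_, y_coarse), _ = cifar100.load_data(label_mode='coarse')

print(x_train.shape)                 # (50000, 32, 32, 3)
print(y_fine.shape, y_coarse.shape)  # (50000, 1) (50000, 1)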
Code implementation
Reading the dataset
# Import the dataset
from tensorflow.python.keras.datasets import cifar100

class CNNMnist(object):
    def __init__(self):
        # Read the dataset
        (self.train, self.train_label), (self.test, self.test_label) = cifar100.load_data()
        # Normalize the dataset to [0, 1]
        self.train = self.train.reshape(-1, 32, 32, 3) / 255.0
        self.test = self.test.reshape(-1, 32, 32, 3) / 255.0
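As a quick check (illustrative, not part of the original code), instantiating the class at this point should yield normalized arrays of the expected shapes:

cnn = CNNMnist()
print(cnn.train.shape, cnn.test.shape)   # (50000, 32, 32, 3) (10000, 32, 32, 3)
print(cnn.train.min(), cnn.train.max())  # 0.0 1.0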
Building the network model
- Convolution layer: 32 5×5 kernels, stride 1, ReLU activation
- Pooling layer: pool size 2, stride 2
- Convolution layer: 64 5×5 kernels, stride 1, ReLU activation
- Pooling layer: pool size 2, stride 2
- Fully connected layer: 1024 neurons, ReLU activation
- Fully connected layer: 100 neurons, softmax activation
# Import the necessary packages
from tensorflow.python.keras import layers, losses, optimizers
from tensorflow.python.keras.models import Sequential
import tensorflow as tf

class CNNMnist(object):
    model = Sequential([
        layers.Conv2D(32, kernel_size=5, strides=1, padding='same',
                      data_format='channels_last', activation=tf.nn.relu),
        layers.MaxPool2D(pool_size=2, strides=2, padding='same'),
        layers.Conv2D(64, kernel_size=5, strides=1, padding='same',
                      data_format='channels_last', activation=tf.nn.relu),
        layers.MaxPool2D(pool_size=2, strides=2, padding='same'),
        layers.Flatten(),
        layers.Dense(1024, activation=tf.nn.relu),
        layers.Dense(100, activation=tf.nn.softmax)
    ])
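To sanity-check the layer output shapes against the list above, the model can be built explicitly and summarized (a minimal sketch; the original code builds the model lazily the first time fit() is called):

# Build with an explicit input shape just to inspect the architecture
CNNMnist.model.build(input_shape=(None, 32, 32, 3))
CNNMnist.model.summary()
# Expected output shapes:
#   Conv2D (32 filters, 'same' padding)  -> (None, 32, 32, 32)
#   MaxPool2D (pool 2, stride 2)         -> (None, 16, 16, 32)
#   Conv2D (64 filters, 'same' padding)  -> (None, 16, 16, 64)
#   MaxPool2D (pool 2, stride 2)         -> (None, 8, 8, 64)
#   Flatten                              -> (None, 4096)
#   Dense(1024) -> (None, 1024), Dense(100) -> (None, 100)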
Compiling the model
class CNNMnist(object):
    def compile(self):
        CNNMnist.model.compile(optimizer=optimizers.adam_v2.Adam(),
                               loss=losses.sparse_categorical_crossentropy,
                               metrics=['accuracy'])
        return None
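The sparse categorical cross-entropy loss is used because cifar100.load_data() returns integer class indices of shape (50000, 1) rather than one-hot vectors. The snippet below illustrates this, along with the one-hot alternative (a hedged sketch, not part of the original code):

import tensorflow as tf
from tensorflow.python.keras.datasets import cifar100

(_, train_label), _ = cifar100.load_data()
print(train_label.shape)          # (50000, 1): integer class indices in [0, 99]

# Equivalent alternative: one-hot encode the labels and use categorical_crossentropy instead
train_label_onehot = tf.one_hot(train_label.ravel(), depth=100)
print(train_label_onehot.shape)   # (50000, 100)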
Model training
class CNNMnist(object):
    def fit(self):
        CNNMnist.model.fit(self.train, self.train_label, epochs=1, batch_size=32)
        return None
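If you want to train for more than one epoch and watch the test accuracy as training progresses, the body of fit() could be replaced with something like the following (a hedged sketch, not part of the original course code; epochs=10 is an arbitrary choice):

class CNNMnist(object):
    def fit(self):
        # Illustrative variation: more epochs, with the test set used as validation data
        CNNMnist.model.fit(self.train, self.train_label,
                           epochs=10, batch_size=32,
                           validation_data=(self.test, self.test_label))
        return None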
Model evaluation
class CNNMnist(object):
    def evaluate(self):
        train_loss, train_acc = CNNMnist.model.evaluate(self.train, self.train_label)
        test_loss, test_acc = CNNMnist.model.evaluate(self.test, self.test_label)
        print("train_loss:", train_loss)
        print("train_acc:", train_acc)
        print("test_loss:", test_loss)
        print("test_acc:", test_acc)
        return None
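Optionally, a small prediction helper can be added in the same style to inspect individual predictions (a hedged sketch; the method name predict_samples and the choice of the first five test images are illustrative and not part of the original code):

import numpy as np

class CNNMnist(object):
    def predict_samples(self):
        # Softmax probabilities for the first five test images, shape (5, 100)
        probs = CNNMnist.model.predict(self.test[:5])
        # Predicted fine-label indices versus the true labels
        print(np.argmax(probs, axis=1), self.test_label[:5].ravel())
        return None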
Running the model
if __name__ == '__main__':
    cnn = CNNMnist()
    cnn.compile()
    cnn.fit()
    cnn.evaluate()
Run results
1563/1563 [==============================] - 199s 126ms/step - loss: 3.5098 - accuracy: 0.1748
1563/1563 [==============================] - 56s 35ms/step - loss: 2.8101 - accuracy: 0.3094
313/313 [==============================] - 11s 33ms/step - loss: 2.9732 - accuracy: 0.2672
train_loss: 2.81014084815979
train_acc: 0.3094399869441986
test_loss: 2.9731905460357666
test_acc: 0.2671999931335449
As the results show, the accuracy is still fairly low: the loss of a convolutional neural network does not fall as quickly as that of a fully connected network, and the code above trains for only one epoch. Compared with a fully connected network, however, the convolution layers share their weights and therefore require far fewer parameters, which lowers the demands on compute and hardware; this is why convolutional neural networks are widely used in pattern recognition and object detection.
Summary
This article focuses on how to build a convolutional neural network model; it does not make any further improvements to the model.
Note: the code in this article comes from the Dark Horse Programmer (黑马程序员) course.
Copyright notice: this article was written by Stephen_Tao; please include a link to the original when reposting.
https://yzsam.com/2022/04/202204230548468570.html