TensorFlow 2 Basics: CNN Image Classification

Updated: 2023-07-30 14:43:36

1. Imports
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
2. Image classification: fashion_mnist
Data processing
# Raw data
(X_train_all, y_train_all), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# Split into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(X_train_all, y_train_all, test_size=0.25)
# Standardize the data (dividing by 255 for plain normalization also works)
# The trailing 1 in the final reshape is the channel count: these are grayscale images
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.reshape(-1, 28 * 28)).reshape(-1, 28, 28, 1)
X_valid_scaled = scaler.transform(X_valid.reshape(-1, 28 * 28)).reshape(-1, 28, 28, 1)
X_test_scaled = scaler.transform(X_test.reshape(-1, 28 * 28)).reshape(-1, 28, 28, 1)
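Why fit the scaler on the training set but only transform the validation and test sets? A stdlib-only sketch of the per-feature z-score arithmetic that StandardScaler applies (the helper names `fit` and `transform` are introduced here for illustration, not sklearn's API):

```python
def fit(column):
    # learn mean and (population) standard deviation from the training data
    mean = sum(column) / len(column)
    std = (sum((x - mean) ** 2 for x in column) / len(column)) ** 0.5
    return mean, std

def transform(column, mean, std):
    # reuse the training-set statistics, just like scaler.transform above
    return [(x - mean) / std for x in column]

train_col = [0.0, 255.0]                 # two hypothetical pixel values
mean, std = fit(train_col)               # mean=127.5, std=127.5
print(transform(train_col, mean, std))   # [-1.0, 1.0]
print(transform([127.5], mean, std))     # [0.0]
```

Reusing the training-set mean and std keeps validation and test data on the same scale the model was trained on.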
Build the CNN model
model = tf.keras.models.Sequential()
# 多个卷积层
model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=[5, 5],
padding="same", activation="relu", input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPool2D(pool_size=[2, 2], strides=2))
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=[5, 5],
padding="same", activation="relu"))
model.add(tf.keras.layers.MaxPool2D(pool_size=[2, 2], strides=2))
# Flatten the multi-dimensional output of the convolutional layers into 1D
# The 7 below follows from the kernel_size, padding and MaxPool2D settings:
# Conv2D: 28*28 -> 28*28 (because padding="same")
# MaxPool2D: 28*28 -> 14*14
# Conv2D: 14*14 -> 14*14 (because padding="same")
# MaxPool2D: 14*14 -> 7*7
model.add(tf.keras.layers.Reshape(target_shape=(7 * 7 * 64,)))
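The shape trace in the comments above can be checked with a few lines of plain Python (the helper names `conv_same` and `max_pool` are illustrative, not Keras APIs):

```python
def conv_same(size):
    # Conv2D with padding="same" and stride 1 keeps the spatial size unchanged
    return size

def max_pool(size, pool=2, stride=2):
    # MaxPool2D output size: (size - pool) // stride + 1
    return (size - pool) // stride + 1

size = 28
size = max_pool(conv_same(size))   # Conv2D 28 -> 28, MaxPool2D 28 -> 14
size = max_pool(conv_same(size))   # Conv2D 14 -> 14, MaxPool2D 14 -> 7
print(size, size * size * 64)      # 7 3136, the Reshape target 7*7*64
```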
# Fully connected layers
model.add(tf.keras.layers.Dense(1024, activation="relu"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
# compile (labels are integer class ids, so use sparse categorical cross-entropy)
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd",
              metrics=["accuracy"])
Model training
callbacks = [
    tf.keras.callbacks.EarlyStopping(min_delta=1e-3, patience=5)
]
history = model.fit(X_train_scaled, y_train, epochs=15,
                    validation_data=(X_valid_scaled, y_valid),
                    callbacks=callbacks)
Train on 50000 samples, validate on 10000 samples
Epoch 1/15
50000/50000 [==============================] - 17s 343us/sample - loss: 0.5707 - accuracy: 0.7965 - val_loss: 0.4631 - val_accuracy: 0.8323
Epoch 2/15
50000/50000 [==============================] - 13s 259us/sample - loss: 0.3728 - accuracy: 0.8669 - val_loss: 0.3573 - val_accuracy: 0.8738
...
Epoch 13/15
50000/50000 [==============================] - 12s 244us/sample - loss: 0.1625 - accuracy: 0.9407 - val_loss: 0.2489 - val_accuracy: 0.9112
Epoch 14/15
50000/50000 [==============================] - 12s 240us/sample - loss: 0.1522 - accuracy: 0.9451 - val_loss: 0.2584 - val_accuracy: 0.9104
Epoch 15/15
50000/50000 [==============================] - 12s 237us/sample - loss: 0.1424 - accuracy: 0.9500 - val_loss: 0.2521 - val_accuracy: 0.9114
Plotting
def plot_learning_curves(history):
    pd.DataFrame(history.history).plot(figsize=(8, 5))
    # plt.gca().set_ylim(0, 1)
    plt.show()

plot_learning_curves(history)
Evaluate accuracy on the test set
model.evaluate(X_test_scaled, y_test)
[0.269884311157465, 0.9071]
As you can see, the CNN clearly improves classification accuracy: the previous model scored 0.8747, this one scores 0.9071.

3. Image classification: Dogs vs. Cats
3.1 Raw data
Download the raw data
Read one image and display it
image_string = tf.io.read_file("C:/Users/Skey/Downloads/datasets/cat_vs_dog/train/cat.28.jpg")
image_decoded = tf.image.decode_jpeg(image_string)
plt.imshow(image_decoded)
3.2 Loading images with the Dataset API
Because there are too many raw images to load into memory at once, TensorFlow provides the convenient Dataset API, which loads data from disk batch by batch for training.
Process local image paths and labels
import os

# Path to the training data
train_dir = "C:/Users/Skey/Downloads/datasets/cat_vs_dog/train/"
train_filenames = []  # file names of all images
train_labels = []     # labels of all images
for filename in os.listdir(train_dir):
    train_filenames.append(train_dir + filename)
    if filename.startswith("cat"):
        train_labels.append(0)  # label cats as 0
    else:
        train_labels.append(1)  # label dogs as 1

# Random split
train_filenames, valid_filenames, train_labels, valid_labels = train_test_split(
    train_filenames, train_labels, test_size=0.2)
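The cat=0 / dog=1 rule in the loop above, shown on a few hypothetical file names:

```python
# Hypothetical file names in the style of the Dogs vs. Cats training set
filenames = ["cat.0.jpg", "dog.0.jpg", "cat.1.jpg"]
labels = [0 if name.startswith("cat") else 1 for name in filenames]
print(labels)  # [0, 1, 0]
```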
Define a function to decode images
def _decode_and_resize(filename, label):
    image_string = tf.io.read_file(filename)            # read the image file
    image_decoded = tf.image.decode_jpeg(image_string)  # decode the JPEG
    image_resized = tf.image.resize(image_decoded, [256, 256]) / 255.0  # resize and normalize
    return image_resized, label
Define Datasets for loading the image data
# Training set
train_dataset = tf.data.Dataset.from_tensor_slices((train_filenames, train_labels))
train_dataset = train_dataset.map(
    map_func=_decode_and_resize,  # parse each filename into features and a label
    num_parallel_calls=tf.data.experimental.AUTOTUNE)
train_dataset = train_dataset.shuffle(buffer_size=128)  # shuffle buffer size
train_dataset = train_dataset.batch(32)                 # batch size
# Prefetching: the CPU loads the next batch from disk ahead of time,
# instead of waiting for the previous training step to finish
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)

# Validation set
valid_dataset = tf.data.Dataset.from_tensor_slices((valid_filenames, valid_labels))
valid_dataset = valid_dataset.map(
    map_func=_decode_and_resize,
    num_parallel_calls=tf.data.experimental.AUTOTUNE)
valid_dataset = valid_dataset.batch(32)
3.3 Build and train the CNN model
Build and compile the model
model = tf.keras.Sequential([
    # Convolution: 32 filters (kernels), each 3*3, stride 1
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(256, 256, 3)),
    # Pooling: default size 2*2, stride 2
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 5, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax')
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.sparse_categorical_crossentropy,
    metrics=[tf.keras.metrics.sparse_categorical_accuracy]
)
Model summary
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_2 (Conv2D)            (None, 254, 254, 32)      896
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 127, 127, 32)      0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 123, 123, 32)      25632
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 61, 61, 32)        0
_________________________________________________________________
flatten_1 (Flatten)          (None, 119072)            0
_________________________________________________________________
dense_2 (Dense)              (None, 64)                7620672
_________________________________________________________________
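The Output Shape and Param # columns in the summary follow from simple arithmetic; a sketch in plain Python (the helper function names are illustrative):

```python
def conv_valid(size, k):
    # Conv2D without padding: output = input - kernel + 1
    return size - k + 1

def max_pool(size):
    # default 2*2 pooling with stride 2 halves the size (floor division)
    return size // 2

def conv_params(k, c_in, c_out):
    # each filter has k*k*c_in weights plus one bias
    return (k * k * c_in + 1) * c_out

def dense_params(n_in, n_out):
    # one weight per input per unit, plus one bias per unit
    return (n_in + 1) * n_out

s = conv_valid(256, 3)  # 254
s = max_pool(s)         # 127
s = conv_valid(s, 5)    # 123
s = max_pool(s)         # 61
flat = s * s * 32       # 119072, the Flatten size
print(flat, conv_params(3, 3, 32), conv_params(5, 32, 32), dense_params(flat, 64))
# 119072 896 25632 7620672 -- matching the Param # column above
```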
