Facial Expression Recognition with Keras: Project Implementation
Introduction and Overview
Keras is a very powerful open-source Python library that runs on top of TensorFlow, Theano and other open-source machine learning libraries. It is used to develop and evaluate deep learning models and supports a variety of optimization techniques.
Some of Keras's notable strengths:
- Fully supports both recurrent and convolutional neural networks
- Runs smoothly on both CPU and GPU
- Networks are written in Python, emphasizing simplicity and strong debugging support
- Known for its remarkable expressiveness, flexibility and minimal structure
- Offers a consistent, simple and extensible API
- Highly scalable computationally, with broad support for a variety of platforms and backends
In this project, we will implement facial expression recognition using Keras. Our dataset (already split into training and test sets) consists of images of different facial expressions downloaded from a Kaggle repo.
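The data generators later in this project assume a one-folder-per-class layout, as in the FER2013 dataset on Kaggle; the folder names below match the class counts printed a few steps further down:
train/
    angry/
    disgust/
    fear/
    happy/
    neutral/
    sad/
    surprise/
test/
    (the same seven subfolders)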
Import Libraries
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import utils
import os
%matplotlib inline
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Dense, Input, Dropout, Flatten, Conv2D
from tensorflow.keras.layers import BatchNormalization, Activation, MaxPooling2D
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.utils import plot_model
from IPython.display import SVG, Image
from livelossplot import PlotLossesTensorFlowKeras
import tensorflow as tf
print("Tensorflow version:", tf.__version__)
Output —
Tensorflow version: 2.1.0
Plot Example Expression Images
utils.datasets.fer.plot_example_images(plt).show()
for expression in os.listdir("train/"):
    print(str(len(os.listdir("train/" + expression))) + " " + expression + " images")
Output —
3171 surprise images
7215 happy images
4965 neutral images
3995 angry images
4830 sad images
436 disgust images
4097 fear images
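Note that the utils module imported earlier is a helper bundled with the original project rather than a PyPI package. If you don't have it, a minimal stand-in that shows one sample image per class might look like this (a sketch; this plot_example_images and its layout are assumptions, not the project's actual helper):
import os
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

def plot_example_images(base="train/"):
    # show the first image found in each expression folder
    expressions = sorted(os.listdir(base))
    fig, axes = plt.subplots(1, len(expressions), figsize=(16, 3))
    for ax, expression in zip(axes, expressions):
        folder = os.path.join(base, expression)
        sample = os.path.join(folder, os.listdir(folder)[0])
        ax.imshow(mpimg.imread(sample), cmap="gray")
        ax.set_title(expression)
        ax.axis("off")
    plt.show()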
Generate Training and Validation Batches
To get the most out of our training examples, we will "augment" them via a number of random transformations, which in turn helps prevent overfitting and helps the model generalize better.
The keras.preprocessing.image.ImageDataGenerator class in Keras lets you instantiate generators of augmented image batches (and their labels) via .flow(data, labels) or .flow_from_directory(directory). These generators can then be used with the Keras model methods that accept data generators as input: fit_generator, evaluate_generator and predict_generator.
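This project only uses horizontal flips (see the code below), but ImageDataGenerator supports many other random transformations. A richer configuration might look like this sketch (the parameter values are illustrative, not tuned for this dataset):
datagen = ImageDataGenerator(
    horizontal_flip=True,    # randomly mirror images left/right
    rotation_range=10,       # random rotations of up to 10 degrees
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    zoom_range=0.1,          # random zooming in/out
    rescale=1./255           # scale pixel values into [0, 1]
)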
Configure the random transformations and normalization operations to apply to your image data during training:
img_size = 48
batch_size = 64
datagen_train = ImageDataGenerator(horizontal_flip=True)
train_generator = datagen_train.flow_from_directory("train/",
                                                    target_size=(img_size, img_size),
                                                    color_mode='grayscale',
                                                    batch_size=batch_size,
                                                    class_mode='categorical',
                                                    shuffle=True)
datagen_validation = ImageDataGenerator(horizontal_flip=True)
validation_generator = datagen_validation.flow_from_directory("test/",
                                                              target_size=(img_size, img_size),
                                                              color_mode='grayscale',
                                                              batch_size=batch_size,
                                                              class_mode='categorical',
                                                              shuffle=True)
Output —
Found 28709 images belonging to 7 classes.
Found 7178 images belonging to 7 classes.
Create the CNN Model
We will use a convnet for this task. Choosing the parameters of your model, i.e. the number of layers and the size of each layer, is important.
model = Sequential()
# 1st conv block
model.add(Conv2D(64, (3, 3), padding='same', input_shape=(48, 48, 1)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# 2nd conv block
model.add(Conv2D(128, (5, 5), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# 3rd conv block
model.add(Conv2D(512, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# 4th conv block
model.add(Conv2D(512, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
# 1st fully connected block
model.add(Dense(256))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
# 2nd fully connected block
model.add(Dense(512))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
# output layer: one softmax unit per expression class
model.add(Dense(7, activation='softmax'))
opt = Adam(lr=0.0005)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
Output —
Model: "sequential_3"
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_4 (Conv2D) (None, 48, 48, 64) 640
_________________________________________________________________
batch_normalization_6 (Batch (None, 48, 48, 64) 256
_________________________________________________________________
activation_6 (Activation) (None, 48, 48, 64) 0
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 24, 24, 64) 0
_________________________________________________________________
dropout_6 (Dropout) (None, 24, 24, 64) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 24, 24, 128) 204928
_________________________________________________________________
batch_normalization_7 (Batch (None, 24, 24, 128) 512
_________________________________________________________________
activation_7 (Activation) (None, 24, 24, 128) 0
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 12, 12, 128) 0
_________________________________________________________________
dropout_7 (Dropout) (None, 12, 12, 128) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 12, 12, 512) 590336
_________________________________________________________________
batch_normalization_8 (Batch (None, 12, 12, 512) 2048
_________________________________________________________________
activation_8 (Activation) (None, 12, 12, 512) 0
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 6, 6, 512) 0
_________________________________________________________________
dropout_8 (Dropout) (None, 6, 6, 512) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 6, 6, 512) 2359808
_________________________________________________________________
batch_normalization_9 (Batch (None, 6, 6, 512) 2048
_________________________________________________________________
activation_9 (Activation) (None, 6, 6, 512) 0
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 3, 3, 512) 0
_________________________________________________________________
dropout_9 (Dropout) (None, 3, 3, 512) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4608) 0
_________________________________________________________________
dense_3 (Dense) (None, 256) 1179904
_________________________________________________________________
batch_normalization_10 (Batc (None, 256) 1024
_________________________________________________________________
activation_10 (Activation) (None, 256) 0
_________________________________________________________________
dropout_10 (Dropout) (None, 256) 0
_________________________________________________________________
dense_4 (Dense) (None, 512) 131584
_________________________________________________________________
batch_normalization_11 (Batc (None, 512) 2048
_________________________________________________________________
activation_11 (Activation) (None, 512) 0
_________________________________________________________________
dropout_11 (Dropout) (None, 512) 0
_________________________________________________________________
dense_5 (Dense) (None, 7) 3591
=================================================================
Total params: 4,478,727
Trainable params: 4,474,759
Non-trainable params: 3,968
_________________________________________________________________
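As a quick sanity check on the summary, the first conv layer's 640 parameters can be reproduced by hand: each of the 64 filters has 3 x 3 x 1 weights plus one bias.
kernel_h, kernel_w, in_channels, filters = 3, 3, 1, 64
print(kernel_h * kernel_w * in_channels * filters + filters)  # 640, matching conv2d_4 above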
Train and Evaluate Model
epochs = 15
steps_per_epoch = train_generator.n//train_generator.batch_size
validation_steps= validation_generator.n//validation_generator.batch_size
checkpoint = ModelCheckpoint("model_weights.h5", monitor='val_accuracy',
                             save_weights_only=True,
                             mode='max', verbose=1)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=2,
                              min_lr=0.00001, mode='auto')
callbacks = [PlotLossesTensorFlowKeras(), checkpoint, reduce_lr]
history = model.fit(
    x=train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=validation_steps,
    callbacks=callbacks
)
Log-loss (cost function):
training (min: 1.032, max: 1.790, cur: 1.032)
validation (min: 1.041, max: 1.797, cur: 1.041)
accuracy:
training (min: 0.315, max: 0.608, cur: 0.608)
validation (min: 0.327, max: 0.612, cur: 0.612)
Epoch 00015: saving model to model_weights.h5
448/448 [==============================] - 27s 61ms/step - loss: 1.0317 - accuracy: 0.6082 - val_loss: 1.0407 - val_accuracy: 0.6124
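The 448 steps per epoch in the log are simply the 28709 training images divided (integer division) by the batch size of 64. If livelossplot is unavailable, the same curves can be plotted after training from the returned history object (a minimal sketch using the matplotlib import from earlier):
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.plot(history.history['accuracy'], label='train acc')
plt.plot(history.history['val_accuracy'], label='val acc')
plt.xlabel('epoch')
plt.legend()
plt.show()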
Save the Model as a JSON String
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)
Build a Flask App to Predict Facial Expressions
We will use an OpenCV cascade classifier to automatically detect faces in an image and draw bounding boxes around them; this is used to build the app and to feed faces to the facial expression model.
OpenCV provides both a training pipeline (Cascade Classifier Training) and pretrained cascade files that can be read with the cv::CascadeClassifier::load method. Object detection using Haar feature-based cascade classifiers, proposed by Paul Viola and Michael Jones, is a machine learning-based approach in which a cascade function is trained from a large number of positive and negative images.
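The haarcascade_frontalface_default.xml file used below ships with OpenCV itself, so if you don't have a local copy you can load it straight from the installed opencv-python package (a sketch using cv2.data):
import cv2

# path to the cascade files bundled with opencv-python
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
facec = cv2.CascadeClassifier(cascade_path)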
import cv2
from model import FacialExpressionModel
import numpy as np

facec = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
model = FacialExpressionModel("model.json", "model_weights.h5")
font = cv2.FONT_HERSHEY_SIMPLEX

class VideoCamera(object):
    def __init__(self):
        self.video = cv2.VideoCapture("path to video file")

    def __del__(self):
        self.video.release()

    # returns camera frames along with bounding boxes and predictions
    def get_frame(self):
        _, fr = self.video.read()
        gray_fr = cv2.cvtColor(fr, cv2.COLOR_BGR2GRAY)
        faces = facec.detectMultiScale(gray_fr, 1.3, 5)
        for (x, y, w, h) in faces:
            fc = gray_fr[y:y+h, x:x+w]
            roi = cv2.resize(fc, (48, 48))
            pred = model.predict_emotion(roi[np.newaxis, :, :, np.newaxis])
            cv2.putText(fr, pred, (x, y), font, 1, (255, 255, 0), 2)
            cv2.rectangle(fr, (x, y), (x+w, y+h), (255, 0, 0), 2)
        _, jpeg = cv2.imencode('.jpg', fr)
        return jpeg.tobytes()
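To run against a live webcam instead of a file, pass a device index to cv2.VideoCapture in __init__; 0 is usually the default camera:
self.video = cv2.VideoCapture(0)  # default webcam instead of a video file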
Build the model.py Script with the List of Facial Expressions
tf.keras.models.model_from_json parses the JSON model configuration string we serialized above and returns a model instance. Facial expression recognition is the task of classifying the expressions in face images into categories such as anger, fear, surprise, sadness, happiness and so on.
from tensorflow.keras.models import model_from_json
import numpy as np
import tensorflow as tf

# cap GPU memory usage so the app can run alongside other processes
config = tf.compat.v1.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.15
session = tf.compat.v1.Session(config=config)

class FacialExpressionModel(object):
    # ordered to match flow_from_directory's alphabetical class indices
    EMOTIONS_LIST = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

    def __init__(self, model_json_file, model_weights_file):
        # rebuild the architecture from JSON, then load the trained weights
        with open(model_json_file, "r") as json_file:
            loaded_model_json = json_file.read()
        self.loaded_model = model_from_json(loaded_model_json)
        self.loaded_model.load_weights(model_weights_file)
        self.loaded_model._make_predict_function()

    def predict_emotion(self, img):
        self.preds = self.loaded_model.predict(img)
        return FacialExpressionModel.EMOTIONS_LIST[np.argmax(self.preds)]
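As a quick standalone test of model.py, you can push a single grayscale face crop through it (a sketch; face.png is a placeholder path for any face image you have on disk):
import cv2
import numpy as np
from model import FacialExpressionModel

fem = FacialExpressionModel("model.json", "model_weights.h5")
face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
roi = cv2.resize(face, (48, 48))
# add batch and channel dimensions to match the model's (1, 48, 48, 1) input
print(fem.predict_emotion(roi[np.newaxis, :, :, np.newaxis]))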
Build an HTML Template for the Flask App
<html>
<head>
    <title>Facial Expression Recognition</title>
</head>
<body>
    <img id="bg" height=640px src="{{url_for('video_feed')}}">
</body>
</html>
Run Model to Recognize Facial Expressions
from flask import Flask, render_template, Response
from camera import VideoCamera

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

def gen(camera):
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
Run main.py and that's it.
You should see the tagged facial expressions on the video you used in your code.