Day 98 of 100 Days of ML Code: Emotion detection using Keras, part 2

February 18, 2019 · 100-Days-Of-ML-Code

In the previous blog we downloaded the data; in this post we'll develop an emotion detection model that can find the emotion in an image.

Let's first convert the data into the format accepted by the VGG16 architecture, in three steps:

  1. Normalize the pixel values by converting the pixels column to floats in the range 0–1 (dividing by 255) and reshape each row into a 48×48 matrix.
  2. Duplicate the 48×48 matrix across three channels to create a 48×48×3 input (the shape VGG16 expects).
  3. Feed the inputs to VGG16 and collect the 512 features it produces for each image (sketched below, after the VGG16 setup).

# Get the one-hot emotion labels from raw_data
emotion_array = process_emotion(raw_data[['emotion']])
# Convert the pixel strings to normalized 48x48x3 float matrices
pixel_array = process_image(raw_data[['pixels']])
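process_emotion and process_image are helper functions from the original project. A minimal sketch of what they likely do, based on the three steps above (NUM_CLASSES and the implementation details are assumptions):

import numpy as np
from keras.utils import to_categorical

NUM_CLASSES = 7  # assumption: FER2013 has 7 emotion labels

def process_emotion(emotion_df):
    # Hypothetical sketch: one-hot encode the integer emotion labels
    return to_categorical(emotion_df.values, num_classes=NUM_CLASSES)

def process_image(pixel_df):
    # Hypothetical sketch: parse the space-separated pixel strings,
    # scale to [0, 1], reshape to 48x48, and stack to 3 channels
    images = np.array([np.array(row.split(), dtype=np.float32)
                       for row in pixel_df['pixels']])
    images = (images / 255.0).reshape(-1, 48, 48, 1)
    return np.repeat(images, 3, axis=-1)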

Split the data into training and test sets:

y_train, y_test = split_for_test(emotion_array)
x_train_matrix, x_test_matrix = split_for_test(pixel_array)
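split_for_test is another project helper. A plausible stand-in, assuming a simple ordered 90/10 split (the real ratio is not shown in the post):

def split_for_test(array, train_fraction=0.9):
    # Hypothetical sketch: first 90% for training, remainder for test
    split_index = int(len(array) * train_fraction)
    return array[:split_index], array[split_index:]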

Get the pretrained VGG16 model through the Keras API:

from keras.applications.vgg16 import VGG16

# pooling='avg' squashes the (None, 1, 1, 512) output to (None, 512); see the TODO below
vgg16 = VGG16(include_top=False, input_shape=(48, 48, 3),
              weights='imagenet', pooling='avg')
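The training step further down uses x_train_feature_map and x_test_feature_map, the 512-float VGG16 outputs for each image. The post doesn't show how they are built; with pooling='avg' they can presumably be precomputed like this (a sketch, assuming the variable names match):

# Run every image through the frozen VGG16 once to get 512-d feature vectors
x_train_feature_map = vgg16.predict(x_train_matrix)
x_test_feature_map = vgg16.predict(x_test_matrix)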

Our Model:

Let's train a small top network, three fully connected layers plus a softmax output layer, that takes in the 512 float values:

from keras.models import Sequential
from keras.layers import Dense, Dropout

top_layer_model = Sequential()
top_layer_model.add(Dense(256, input_shape=(512,), activation='relu'))
top_layer_model.add(Dense(256, activation='relu'))
top_layer_model.add(Dropout(0.5))
top_layer_model.add(Dense(128))
top_layer_model.add(Dense(NUM_CLASSES, activation='softmax'))

Compile the model using the Adamax optimizer:

from keras.optimizers import Adamax

adamax = Adamax()

top_layer_model.compile(loss='categorical_crossentropy',
                        optimizer=adamax, metrics=['accuracy'])
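The training step below pulls n_epochs and batch_size from a FLAGS config object that the post doesn't define; any small stand-in works, for example (the values here are placeholders):

from types import SimpleNamespace

# Hypothetical stand-in for the project's FLAGS config
FLAGS = SimpleNamespace(n_epochs=10, batch_size=64)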

# Train the top network on the precomputed VGG16 features
top_layer_model.fit(x_train_feature_map, y_train,
                    validation_data=(x_train_feature_map, y_train),
                    epochs=FLAGS.n_epochs, batch_size=FLAGS.batch_size)
# Evaluate on the held-out test features
score = top_layer_model.evaluate(x_test_feature_map,
                                 y_test, batch_size=FLAGS.batch_size)

print("After top_layer_model training (test set): {}".format(score))

Now let's merge VGG16 and our trained top model:

from keras.models import Model
from keras.layers import Input

inputs = Input(shape=(48, 48, 3))
vg_output = vgg16(inputs)
print("vg_output: {}".format(vg_output.shape))
# TODO: the 'pooling' argument of the VGG16 model is important for this to work,
# otherwise you will have to squash the output from (?, 1, 1, 512) to (?, 512)
model_predictions = top_layer_model(vg_output)
final_model = Model(inputs=inputs, outputs=model_predictions)
final_model.compile(loss='categorical_crossentropy',
                    optimizer=adamax, metrics=['accuracy'])
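Once compiled, the merged model can be evaluated end to end on the raw 48×48×3 images instead of the precomputed features; since VGG16 is unchanged and the top layers are already trained, the score should roughly match the one above:

# Sanity-check the merged model on the raw test images
final_score = final_model.evaluate(x_test_matrix, y_test,
                                   batch_size=FLAGS.batch_size)
print("final_model evaluation (test set): {}".format(final_score))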

The code above, which I was running in Colab, is from here. You can find my Colab notebook here.