When Mom can't figure out whether it is a cat or a dog, her Data Raconteur son builds a classifier for her.

Hi, I am a student in XYZ school, but don't think of me as a kid; I know deep learning pretty well for a starter. I can tell you the difference between a perceptron and a deep neural network. I bet 40% of you don't even know that at my age. But forget it, I am not here to show you my arrogance; rather, I am here to share an experience that happened last night in my house while there was a power cut.

My mom accidentally stepped on some creature and, out of fright, screamed like hell, which disturbed my sleep. Being a lazy f***, I thought that if this happened again I would have to pay again by giving up my sleep, so I decided to build a classifier that Mom could use to classify the creatures. Mostly there are dogs and cats around, so I went up to Kaggle, searched for a dogs-and-cats dataset, and built the thing. Here I am sharing how I built the classifier.

As it comes from a competition, it contains train, test, and submission files.

I took the path shown by Prof. Andrew Ng in his course to build it (always mention your gurus). I used Keras and not deeplearning4j, so if you are waiting for a deeplearning4j tutorial, wait for my father to step on one of the creatures.

I used the color images and predefined the image height and width as 128, as I don't want any mess.

IMAGE_WIDTH = 128                          # every image is resized to 128 px wide
IMAGE_HEIGHT = 128                         # ...and 128 px tall
IMAGE_SIZE = (IMAGE_WIDTH, IMAGE_HEIGHT)
IMAGE_CHANNELS = 3                         # color images (RGB)

Then I created the dataset according to my need; I prefer cat to be 0 and dog to be 1.

    file          class
0   cat.2944.jpg  0
1   dog.1112.jpg  1
2   cat.7212.jpg  0
3   dog.3679.jpg  1
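Building that dataframe takes only a few lines. Here is a minimal sketch, assuming the training images sit in a ./train folder (a hypothetical path) and that the label comes from the filename prefix, as in the table above:

import os
import pandas as pd

# Collect (filename, class) pairs; Kaggle names the files like "cat.2944.jpg"
filenames = os.listdir("./train")          # "./train" is an assumed path
classes = [0 if name.startswith("cat") else 1 for name in filenames]

df = pd.DataFrame({"file": filenames, "class": classes})
print(df.head())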

Then I plotted the class counts to check whether the dataset is balanced or imbalanced.
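With the dataframe from above, that check is a one-liner sketch:

import matplotlib.pyplot as plt

# Bar plot of images per class (0 = cat, 1 = dog); a balanced dataset shows two equal bars
df["class"].value_counts().plot(kind="bar")
plt.show()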

Let me show you how the cute dogs and cats look while doing their photo session.

do you know her name ……

I found out there are 25,000 images in the dataset. I used a deep neural network to classify them, built from an input layer (the image), convolution layers, max pooling layers, a dense layer, and an output layer.

I took the Sequential model from Keras. First I used a convolution layer with 32 filters of size 3×3 and the ReLU activation function (relu(x) = max(x, 0)); then I did batch normalization, added a max pooling layer, and a dropout of 15%.

For the 2nd block I used the same, except with 64 filters.

For the 3rd block I used 128 filters of size 5×5 and a dropout of 25%, then a dense layer with 30% dropout. I use a lot of dropout because my father was a college dropout, and that's why he learned some valuable things in life; I want my model to learn too. I used cross-entropy for the loss and RMSprop as the optimizer; why, I will tell you when I become big. Don't believe that I did all this myself? Take a look:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 126, 126, 32)      896
_________________________________________________________________
batch_normalization_1 (Batch (None, 126, 126, 32)      128
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 63, 63, 32)        0
_________________________________________________________________
dropout_1 (Dropout)          (None, 63, 63, 32)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 61, 61, 64)        18496
_________________________________________________________________
batch_normalization_2 (Batch (None, 61, 61, 64)        256
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 30, 30, 64)        0
_________________________________________________________________
dropout_2 (Dropout)          (None, 30, 30, 64)        0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 26, 26, 128)       204928
_________________________________________________________________
batch_normalization_3 (Batch (None, 26, 26, 128)       512
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 13, 13, 128)       0
_________________________________________________________________
dropout_3 (Dropout)          (None, 13, 13, 128)       0
_________________________________________________________________
flatten_1 (Flatten)          (None, 21632)             0
_________________________________________________________________
dense_1 (Dense)              (None, 512)               11076096
_________________________________________________________________
batch_normalization_4 (Batch (None, 512)               2048
_________________________________________________________________
dropout_4 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 1026
=================================================================
Total params: 11,304,386
Trainable params: 11,302,914
Non-trainable params: 1,472
_________________________________________________________________
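If you want to rebuild that stack yourself, here is a minimal Keras sketch that matches the summary above layer for layer (the layer sizes and dropout rates come from the summary and the text; the exact code in my notebook may differ in small details):

from keras.models import Sequential
from keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                          Dropout, Flatten, Dense)

model = Sequential()

# Block 1: 32 filters of 3x3 with ReLU, then batch norm, max pooling, 15% dropout
model.add(Conv2D(32, (3, 3), activation='relu',
                 input_shape=(IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS)))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.15))

# Block 2: same as block 1, except 64 filters
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.15))

# Block 3: 128 filters of 5x5, 25% dropout
model.add(Conv2D(128, (5, 5), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# Head: dense layer with 30% dropout, then a 2-way softmax (cat vs dog)
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.30))
model.add(Dense(2, activation='softmax'))

# Cross-entropy loss with the RMSprop optimizer, as described above
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])
model.summary()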

What I learned from Coursera I use here: I use early stopping to stop the training if the validation loss does not decrease for 10 epochs, and I reduce the learning rate when the accuracy does not increase for 2 epochs. A callback can also stop training once the accuracy reaches our target.
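Those callbacks look roughly like this (a sketch: the patience values come from the text, while the reduction factor, the minimum learning rate, the 'val_acc' metric name, and the StopAtTarget helper are my assumptions):

from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback

# Stop training if the validation loss has not improved for 10 epochs
earlystop = EarlyStopping(monitor='val_loss', patience=10)

# Cut the learning rate in half when validation accuracy stalls for 2 epochs
learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc', patience=2,
                                            factor=0.5, min_lr=1e-5, verbose=1)

# Hypothetical helper: stop training once training accuracy reaches a target
class StopAtTarget(Callback):
    def on_epoch_end(self, epoch, logs=None):
        if logs and logs.get('acc', 0) >= 0.99:   # 0.99 is an assumed target
            self.model.stop_training = True

callbacks = [earlystop, learning_rate_reduction, StopAtTarget()]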

I split the dataset in an 80-20 manner, took a batch size of 15, and used an image generator to augment the images; I am sharing the pics with you.
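Here is a sketch of that setup (the split and batch size come from the text; the augmentation transforms and the ./train path are assumptions):

from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator

batch_size = 15

# 80-20 train/validation split
train_df, validate_df = train_test_split(df, test_size=0.20, random_state=42)

# flow_from_dataframe expects string labels for categorical mode
train_df["class"] = train_df["class"].astype(str)
validate_df["class"] = validate_df["class"].astype(str)

# Augment the training images; these particular transforms are assumed
train_datagen = ImageDataGenerator(rescale=1. / 255, rotation_range=15,
                                   horizontal_flip=True, zoom_range=0.1)

train_generator = train_datagen.flow_from_dataframe(
    train_df, directory="./train", x_col="file", y_col="class",
    target_size=IMAGE_SIZE, class_mode="categorical", batch_size=batch_size)

# Only rescaling for validation, no augmentation
validation_generator = ImageDataGenerator(rescale=1. / 255).flow_from_dataframe(
    validate_df, directory="./train", x_col="file", y_col="class",
    target_size=IMAGE_SIZE, class_mode="categorical", batch_size=batch_size)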

Using a Kaggle kernel, I finished my training within only 4781.64 seconds. See the accuracy graph.
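The training call and the graph came from something roughly like this (a sketch: fit_generator is the Keras 2 API of that era, and the epoch count is an assumed value):

import matplotlib.pyplot as plt

epochs = 50   # assumed; the post does not state the epoch count

history = model.fit_generator(
    train_generator,
    epochs=epochs,
    steps_per_epoch=len(train_df) // batch_size,
    validation_data=validation_generator,
    validation_steps=len(validate_df) // batch_size,
    callbacks=callbacks)

# Plot the accuracy curves ('acc' in the Keras of that era; newer Keras uses 'accuracy')
plt.plot(history.history['acc'], label='training accuracy')
plt.plot(history.history['val_acc'], label='validation accuracy')
plt.legend()
plt.show()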

But my mom still sometimes calls Jimmy (my dog) a cat, due to my poor understanding of deep learning. Can you suggest any improvements, since I am a kid? Next time I will show you how I detect my emotions via camera; till then, pray for my classifier.

GitHub link: click here.
