
Ultrasonic Radar

There was an interesting share in our SDC Nanodegree Bangalore WhatsApp group, and I wanted to try something similar with the parts I had on hand. The intent is to map a room using an ultrasonic sensor and a stepper motor.

The project is forked from Param’s pingray project (https://github.com/paramaggarwal/pingray), which uses ultrasonic sensors and a magnetic compass. It is an attempt to replicate the same results with a stepper motor and a single ultrasonic sensor, since I did not have a magnetic compass.

All the limitations mentioned in Param’s post (https://www.hackster.io/paramaggarwal/mapping-a-room-with-ultrasonic-distance-sensors-9725b7) apply. Additionally, this project can only capture a 180° sweep.
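Since the stepper position gives the sweep angle directly, each (angle, distance) reading from the ultrasonic sensor maps to one point on a 2D floor plan. Below is a minimal Python sketch of that conversion; the function name and the sample readings are hypothetical, for illustration only:

```python
import math

def polar_to_cartesian(angle_deg, distance_cm):
    """Convert one (stepper angle, ultrasonic distance) reading
    into an (x, y) point on the room map, sensor at the origin."""
    theta = math.radians(angle_deg)
    return (distance_cm * math.cos(theta), distance_cm * math.sin(theta))

# Sweep 0-180 degrees; the distances here are made-up readings in cm.
readings = [(0, 120), (45, 90), (90, 60), (135, 95), (180, 125)]
points = [polar_to_cartesian(a, d) for a, d in readings]
```

Plotting these points for a full sweep outlines the half of the room facing the sensor.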

Arduino Setup




Non Linear Regression Example with Keras and Tensorflow Backend

Out of curiosity, I wanted to try Keras for non-linear fitting. The simplicity of Keras made it possible to quickly try out a neural network model without deep knowledge of TensorFlow.

The data for fitting was generated using a non-linear continuous function with five inputs and one output. Both the training set and the validation set have around 1000 data points.

Y = sin(A) * exp(B) + cos(C * C) + D^5 - tanh(E)
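The CSV files used below are produced from this function. Here is a minimal sketch of how such a dataset could be generated; the input ranges, random seed, and header are my assumptions, since the original data-generation script is not shown:

```python
import numpy as np

rng = np.random.default_rng(42)

def target(A, B, C, D, E):
    # Y = sin(A) * exp(B) + cos(C * C) + D^5 - tanh(E)
    return np.sin(A) * np.exp(B) + np.cos(C * C) + np.power(D, 5) - np.tanh(E)

X = rng.uniform(-1, 1, size=(1000, 5))   # five inputs (range assumed)
Y = target(*X.T)                         # one output
data = np.column_stack([X, Y])

# Header row matches the skip_header=True used when reading the file back.
np.savetxt("training.csv", data, delimiter=",", header="A,B,C,D,E,Y", comments="")
```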

I realized that adding too many hidden layers worsened the fit. It seems that for continuous functions, one hidden layer with a sufficient number of nodes and a good choice of activation function is enough. I chose the hyperbolic tangent (tanh) as the activation function and Adam as the optimizer. The results were pretty good but required a fairly large number of epochs.

I plan to compare this with other regression algorithms available in Azure Machine Learning.

Complete code available on Github – https://github.com/shankarananth/Keras-Nonlinear-Regression

from keras.models import Sequential
from keras.layers import Dense
from sklearn.metrics import r2_score
import matplotlib.pyplot as plt
import numpy
%matplotlib inline

# Read data from CSV files for training and validation
TrainingSet = numpy.genfromtxt("training.csv", delimiter=",", skip_header=True)
ValidationSet = numpy.genfromtxt("validation.csv", delimiter=",", skip_header=True)

# Split into input (X) and output (Y) variables
X1 = TrainingSet[:, 0:5]
Y1 = TrainingSet[:, 5]

X2 = ValidationSet[:, 0:5]
Y2 = ValidationSet[:, 5]

# Create model: one hidden tanh layer, linear output for regression
model = Sequential()
model.add(Dense(20, input_dim=5, kernel_initializer='uniform', activation='tanh'))
model.add(Dense(1, kernel_initializer='uniform', activation='linear'))

# Compile model (accuracy is not meaningful for regression, so only MSE is tracked)
model.compile(loss='mse', optimizer='adam')

# Fit the model ('epochs' replaces the deprecated 'nb_epoch' argument)
model.fit(X1, Y1, epochs=5000, batch_size=10, verbose=2)

# Calculate predictions
PredTestSet = model.predict(X1)
PredValSet = model.predict(X2)

# Report goodness of fit on the training and validation sets
print("Training R2:", r2_score(Y1, PredTestSet))
print("Validation R2:", r2_score(Y2, PredValSet))

# Save predictions
numpy.savetxt("trainresults.csv", PredTestSet, delimiter=",")
numpy.savetxt("valresults.csv", PredValSet, delimiter=",")
Main Code

The results were pretty good




Udacity Self Driving Car Nano Degree – First Impressions and experience with lane finding project!

I would like to share my experience with the new Udacity Self-Driving Car Nanodegree. I am very excited about this technology because of my work experience in process control. There are similarities in the engineering concepts, but the control objective, sensors and final elements are very different. My end goal is to learn this technology and use it to work on potential innovations in the industry I work in (Oil and Gas).

Choosing to take this course was difficult. First, the cost is high by Indian standards; second, there is a selection process to go through; and third, the course is nine months long. I got a seat in the December 2016 cohort, which gave me some time to prepare the basics (Python, GitHub, machine learning, etc.).

The feeling I had when the course started was very different from other online courses. I got very excited, like going to university again. I was very nervous as well: though I was very familiar with algorithms, I had never worked on image processing. The course started with computer vision, including a project in the very first week. I used the weekend to complete the project with full support from my family. It was a wonderful experience with a sense of achievement; I felt like I was tuning my very first PID controller. The community (WhatsApp, Facebook, Slack, Udacity mentor) gave me the confidence and support to complete the project.

Finding Lanes Project

The code is available on Github (https://github.com/shankarananth/CarND-LaneLines-P1)

I used the following sequence of steps to arrive at the solution:

1) Grayscale Image

Gray Image

2) Gaussian Blur

Gaussian Blur Image

3) Canny Edge Detection

Canny Edge Detection Image

4) Region of Interest

ROI Image

5) Hough Transformation and Extrapolation

Hough Image

Hough Image

Some lessons learnt from the experience:

1) I used Anaconda on Windows and had some difficulty installing ffmpeg. The guideline in the following link solved the issue: https://github.com/adaptlearning/adapt_authoring/wiki/Installing-FFmpeg

2) I used a debug folder to save all intermediate images. This helped a lot in tuning the various parameters.

3) I did not attempt the optional challenge, to make the best use of the time available to me. Test runs of the current code on it were not successful.

4) In terms of improvement, I could further smooth the lines across video frames.

5) I did not have experience with Jupyter. It is very different from a typical coding environment; however, after using it I could clearly see the advantages of such an environment.

6) Line Extrapolation: I used y = mx + c to draw the extrapolated line. First identify the slope from the coordinates (m = (y2 - y1) / (x2 - x1)), then calculate c, and finally compute new coordinates for the given y values. (Note: I received feedback from the reviewer that it is possible to achieve the results without extrapolation, just by tuning the Hough transform parameters.)
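The extrapolation in point 6 can be sketched as follows; the coordinate values in the example are illustrative, not taken from the project:

```python
def extrapolate(x1, y1, x2, y2, y_bottom, y_top):
    """Extend the segment (x1, y1)-(x2, y2) to the given y range
    using y = m*x + c."""
    m = (y2 - y1) / (x2 - x1)        # slope from the two endpoints
    c = y1 - m * x1                  # intercept from one endpoint
    x_bottom = (y_bottom - c) / m    # solve y = m*x + c for x
    x_top = (y_top - c) / m
    return (int(round(x_bottom)), y_bottom), (int(round(x_top)), y_top)

# e.g. extend a short right-lane segment down to the image bottom (y = 540)
# and up toward the horizon (y = 330)
bottom_pt, top_pt = extrapolate(400, 330, 500, 430, 540, 330)
```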

The results of the program are uploaded to YouTube.


Female Infanticide: A Flash Back

Year 1996: My team from school would travel to Karumathur, a small village near Madurai, to collect data and statistics for a project to present at the National Science Congress. We had no clue about the risks of the project we were pursuing, which would later win us a National Award (after the District and State levels). The topics were chosen by our guides, and ours was on social science, which actually gave us a unique edge over the physical science projects.

I was on a team with Prasad, Safeek and Senthil to research a hot social topic: ‘Female Infanticide’, the intentional killing of female infants soon after birth. More detail on this topic is available in the classic 1986 India Today article ‘Female infanticide: Born to die’, or watch the movie Karuththamma.

We would go around the village interviewing people about why and how it is done. ‘Innocence is bliss’; I don’t think I would have the guts to do it now. We got to know about the killings recent at that time, and who executed them and how. Quite a few people gave us information on real cases.

It is almost two decades now, and I can hardly recall our interviews. But there was one interview that I could never forget. We were discussing with one villager a then-recent incident and how it was executed. We were testing our knowledge of methods (paddy and poisonous cactus milk, or ‘Kalli Paal’) that we had read about in the papers and watched in movies. But he brushed those off, claiming they were very old methods with a low success rate! The newer method, rather, was to put the infant in a jute bag and suffocate her to death! How cruel! The surprising fact was that he explained it in great detail, citing references, without fear or remorse.

I have no idea what has changed in the past two decades, but the ‘Thottil Kuzhandhai Thittam’, a baby hatch program of the Tamil Nadu Government, still exists today!

Some not so Interesting Facts:

  • Prenatal Sex Determination is banned in India
  • ‘Thottil Kuzhandhai Thittam’ – a baby hatch was set up in 1994 by the then Chief Minister, J. Jayalalithaa, to prevent female infanticide
  • In 2002, an “e-cradle” scheme was also introduced in Kerala. The high-tech cradle’s entrance opens automatically when a person approaches; he or she can deposit the unwanted baby in the cradle and leave. An electronic buzzer rings loudly once the person leaves, announcing the arrival of a new one.

Logos, Ethos, Pathos – On identifiable victim effect and why it is more powerful even when you realize that you are biased

Aristotle’s ‘On Rhetoric’ clearly explains the three modes of persuasion:

  • Logos – Logos is logical appeal or the simulation of it, and the term logic is derived from it. It is normally used to describe facts and figures that support the speaker’s claims or thesis.
  • Ethos – Ethos is an appeal to the authority or credibility of the presenter.
  • Pathos – Pathos is an appeal to the audience’s emotions, and the terms pathetic and empathy are derived from it.

Is ‘persuasion’ important? Think about it! We do it every day, and it is an essential part of our personal, corporate and social lives, be it asking for a date, getting a job or winning a vote!

You might logically think that, as an educated person, you are persuaded only by logic. Is that so? Look at the images below; they are self-explanatory. All of us are emotional!

How powerful are these images? The Syrian civil war has been in the news for quite some time and has resulted in the loss of many lives. However, this one recent image has generated enormous attention and will hopefully result in positive action.


The reason for this irrational behavior (responding to a single image but not to statistics) is cognitive bias, specifically the identifiable victim effect.

“We care more about suffering when it is represented by one individual”

See these famous quotes below

“One Man’s death is a tragedy, a million deaths is a statistic” – Joseph Stalin

“If I look at the masses, I will never act. If I look at the individual, I will” – Mother Teresa

How much has the image of the Syrian toddler impacted people’s emotions? There is a tool I use to read people’s minds: “Google”. “Google Trends” gives us an insight into what people think about or query.


Below is the result of a query for ‘syrian refugees’. Needless to say, it is very evident how much attention this one picture has brought compared to the rest of the year.


Update [06-Sep-2015]

The graph got even more vertical!

Past 12 Months

Since 2004


Here comes the dilemma: is ‘Pathos’ an ethical means of persuasion? Humans are rational, so isn’t ‘Logos’ the ethical way to persuade people? We do dislike politicians when they use ‘Pathos’ to win votes from the masses. But what about the picture of the Syrian toddler? If not for that picture, people would never have acted.

I am not concluding anything; just food for thought!

Credits: Dan Ariely, Google Trends, Collins and Taylor, Aristotle

Note: The author has an interest in behavioral psychology but no academic credentials on the subject (except for a Coursera course!). Apologies if the content seems creepy or inhumane.