Non-Linear Regression Example with Keras and TensorFlow Backend

Out of curiosity I wanted to try Keras for non-linear fitting. The simplicity of Keras made it possible to quickly try out a neural network model without deep knowledge of TensorFlow.

The data for fitting was generated using a non-linear continuous function with five inputs and one output. Both the training set and the validation set have around 1000 data points.

Y = sin(A) × exp(B) + cos(C²) + D⁵ − tanh(E)
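The exact data-generation script is not included here, so the following is only a minimal sketch of how such CSV files could be produced. The uniform sampling range and the fixed seeds are assumptions; the file names and the single header row match what the training code further down expects.

# Sketch: generate synthetic data for Y = sin(A)*exp(B) + cos(C*C) + D^5 - tanh(E)
# The sampling range [-1, 1] and the seeds are assumptions, not from the original post
import numpy

def make_dataset(n_points, seed):
    rng = numpy.random.RandomState(seed)
    A, B, C, D, E = [rng.uniform(-1.0, 1.0, n_points) for _ in range(5)]
    Y = numpy.sin(A) * numpy.exp(B) + numpy.cos(C * C) + numpy.power(D, 5) - numpy.tanh(E)
    return numpy.column_stack([A, B, C, D, E, Y])

# A single header row so that skip_header=True in the training code skips it
header = "A,B,C,D,E,Y"
numpy.savetxt("training.csv", make_dataset(1000, 1), delimiter=",", header=header, comments="")
numpy.savetxt("validation.csv", make_dataset(1000, 2), delimiter=",", header=header, comments="")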

I realized that adding too many hidden layers worsened the fit. It looks like, for continuous functions, one hidden layer with a sufficient number of nodes and a good choice of activation function is enough. I chose the hyperbolic tangent (tanh) as the activation function and Adam as the optimizer. The results were pretty good but required a fairly large number of training epochs.

I plan to compare this with other regression algorithms available in Azure Machine Learning.

Complete code is available on GitHub – https://github.com/shankarananth/Keras-Nonlinear-Regression

from keras.models import Sequential
from keras.layers import Dense
from sklearn.metrics import r2_score
import matplotlib.pyplot as plt
import numpy
%matplotlib inline

# Read training and validation data from CSV files
TrainingSet = numpy.genfromtxt("training.csv", delimiter=",", skip_header=True)
ValidationSet = numpy.genfromtxt("validation.csv", delimiter=",", skip_header=True)

# split into input (X) and output (Y) variables
X1 = TrainingSet[:,0:5]
Y1 = TrainingSet[:,5]

X2 = ValidationSet[:,0:5]
Y2 = ValidationSet[:,5]

# create model
model = Sequential()
model.add(Dense(20, input_dim=5, kernel_initializer='uniform', activation='tanh'))
model.add(Dense(1, kernel_initializer='uniform', activation='linear'))

# Compile model
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])

# Fit the model
model.fit(X1, Y1, epochs=5000, batch_size=10, verbose=2)

# Calculate predictions
PredTestSet = model.predict(X1)
PredValSet = model.predict(X2)

# Save predictions
numpy.savetxt("trainresults.csv", PredTestSet, delimiter=",")
numpy.savetxt("valresults.csv", PredValSet, delimiter=",")

The results were pretty good.
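The result plots are not reproduced here, but the r2_score and matplotlib imports in the listing above suggest how the fit was checked; a minimal sketch, assuming the variables from that listing, could look like this:

# Sketch: quantify and visualize the fit using the variables defined above
print("Training R2:", r2_score(Y1, PredTestSet))
print("Validation R2:", r2_score(Y2, PredValSet))

# Predicted vs. actual scatter plot for the validation set
plt.scatter(Y2, PredValSet, s=5)
plt.xlabel("Actual Y (validation)")
plt.ylabel("Predicted Y (validation)")
plt.title("Validation set: predicted vs. actual")
plt.show()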



Udacity Self-Driving Car Nanodegree – First Impressions and Experience with the Lane Finding Project!

I would like to share my experience with the new Udacity Self-Driving Car Nanodegree. I am very excited about this technology because of my work experience in process control. There are similarities in the engineering concepts, but the control objectives, sensors and final elements are very different for this technology. My end goal is to learn this technology and use it to work on potential innovations in the industry I work in (oil and gas).

Choosing to take this course was difficult: first, the cost is high by Indian standards; second, you have to go through a selection process; and third, it is nine months long. I got a seat in the December 2016 cohort, which gave me some time to prepare the basics (Python, GitHub, machine learning, etc.).

The feeling I had when the course started was very different from other online courses. I got very excited, like going to university again, and I was nervous as well. Though I was familiar with algorithms, I had never worked on image processing. The course started with computer vision, including a project in the very first week. I used the weekend to complete the project with full support from my family. It was a wonderful experience with a sense of achievement; I felt like I was tuning my very first PID controller. The support from the community (WhatsApp, Facebook, Slack, Udacity mentor) gave me the confidence and help I needed to complete the project.

Finding Lanes Project

The code is available on Github (https://github.com/shankarananth/CarND-LaneLines-P1)

I used the following sequence of steps to arrive at the solution; a condensed code sketch of the pipeline follows the list.

1) Grayscale Image


2) Gaussian Blur


3) Canny Edge Detection


4) Region of Interest


5) Hough Transformation and Extrapolation

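A condensed sketch of that pipeline with OpenCV is given below; the thresholds, kernel size, region-of-interest vertices and Hough parameters are illustrative placeholders, not the exact values used in the submitted project.

# Sketch of the lane-finding pipeline; all numeric parameters are illustrative
import cv2
import numpy

def find_lane_lines(image):
    # 1) Grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    # 2) Gaussian blur to suppress noise before edge detection
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # 3) Canny edge detection
    edges = cv2.Canny(blur, 50, 150)
    # 4) Region of interest: keep only a trapezoid covering the road ahead
    h, w = edges.shape
    roi = numpy.array([[(0, h), (0.45 * w, 0.6 * h), (0.55 * w, 0.6 * h), (w, h)]], dtype=numpy.int32)
    mask = numpy.zeros_like(edges)
    cv2.fillPoly(mask, roi, 255)
    masked = cv2.bitwise_and(edges, mask)
    # 5) Hough transform to detect line segments
    lines = cv2.HoughLinesP(masked, rho=2, theta=numpy.pi / 180, threshold=20,
                            minLineLength=20, maxLineGap=100)
    # Draw the detected segments on a copy of the original image
    out = image.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            cv2.line(out, (x1, y1), (x2, y2), (255, 0, 0), 5)
    return out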

Some lessons learnt from the experience:

1) I used Anaconda on Windows and had some difficulty installing ffmpeg. I used the guideline provided in the following link to solve the issue: https://github.com/adaptlearning/adapt_authoring/wiki/Installing-FFmpeg

2) I used a debug folder to save all intermediate images. This helped me a lot in tuning the various parameters.

3) I did not attempt the optional challenge in order to make the best use of the time available to me; test runs of the current code on it were not successful.

4) In terms of improvement, I could further smooth the lines across frames in the video.

5) I did not have prior experience with Jupyter, and it is very different from a regular coding environment. However, after using it I could clearly see the advantages of such an environment.

6) Line extrapolation – I used y = mx + c to draw the extrapolated line: first identify the slope from the segment coordinates ((y2-y1)/(x2-x1)), second calculate c, and third identify the new coordinates for the given y values; see the sketch after this list. (Note: I received feedback from the reviewer that it is possible to achieve the results without extrapolation just by tuning the Hough transform parameters.)
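A short sketch of that extrapolation step; the function name and the y-bounds passed in are illustrative:

# Sketch: extrapolate a detected segment to fixed y-bounds using y = m*x + c
def extrapolate_segment(x1, y1, x2, y2, y_bottom, y_top):
    if x2 == x1:
        return None  # vertical segment: slope is undefined
    # First: slope from the two detected end points
    m = (y2 - y1) / (x2 - x1)
    if m == 0:
        return None  # horizontal segment cannot be extrapolated to other y values
    # Second: intercept from y = m*x + c  =>  c = y - m*x
    c = y1 - m * x1
    # Third: new x coordinates at the requested y positions
    x_bottom = int((y_bottom - c) / m)
    x_top = int((y_top - c) / m)
    return (x_bottom, y_bottom), (x_top, y_top)

For example, extrapolate_segment(x1, y1, x2, y2, image_height, int(0.6 * image_height)) would stretch a short Hough segment so that it spans from the bottom of the frame up to a chosen horizon line.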

Results of the program were uploaded to YouTube.
