Teaching cars to not hit squirrels

15 min read
by: Rockwell Dec 29th, 2016

The term "self-driving car" has become something of a buzzword. It is a curious idea, is it not? It seems very mysterious: how can a machine run a program that somehow allows it to safely and effectively operate a motorized vehicle on a roadway? But that is exactly what happens. It has happened in the very recent past, it is happening right now, and it will be happening more and more in the future.

I'm really looking forward to a time when generations after us look back and say how ridiculous it was that humans were driving cars. - Sebastian Thrun

The purpose of this post is to discuss the tools and techniques used to enable vehicles to drive themselves. It will not be a Python programming tutorial or anything of that nature. I am going to do my best to steer clear of an in-depth how-to and instead discuss the techniques and processes at a high level.

For a car to operate, it needs some way to understand the environment around it. Let's think about some of the things that would be useful for a car to "understand":

  • Perceive lane markers, if there are any
  • Perceive signage, if there is any
  • Perceive other vehicles on the road with it
  • Perceive other objects, like pedestrians or debris, that could end up in its path
  • Perceive that pesky squirrel that hangs out a little too long eating the nut it found, forcing you to slow down and swerve a bit to avoid it

This is certainly not a comprehensive list, but it helps us start to get a picture of what we need to do at a high level.

"Darn it Chippy, get out of the road!"
Identifying lanes

Let's start with the first item on our list, finding lanes. Consider the image below.

[Image: a road with a solid white line on the right and a broken white line on the left]

In this image we want to focus first on the right and left lane lines. The right line is a solid white line, while the left is a broken white line. We have all seen lanes marked like this and have no problem identifying them. How would a computer "see" this?

What we have to realize is that an image is just a set of numbers: data points arranged in a large matrix. A color image is made up of three channels, one for red (R), one for green (G), and one for blue (B). Each pixel in each channel has a value from 0 to 255, with 255 being the brightest and 0 the darkest. So to pick out pure white, we would need to search for pixels where all three values are 255. But what does that information do for us? Let's find out.
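As a toy illustration of that idea, here is a tiny synthetic 2x2 "image" built by hand in NumPy (not a real photo, just small enough that we can see every number):

```python
import numpy as np

# A 2x2 image with three color channels (RGB), values 0-255.
# Top-left is pure white, bottom-right is pure black.
img = np.array([[[255, 255, 255], [120,  40,  10]],
                [[200, 200, 200], [  0,   0,   0]]], dtype=np.uint8)

print(img.shape)   # (2, 2, 3): height, width, color channels
print(img[0, 0])   # [255 255 255] -> the pure white pixel
print(img[1, 1])   # [0 0 0] -> the pure black pixel
```

A real road image is exactly this, just with many more rows and columns.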

We are going to use a Jupyter notebook to run some very basic code so we can pull some useful data out of our image. The code I am using is from here; if you are going to follow along, you will need Python 3 and Anaconda set up. Setting up those environments is beyond the scope of this post.

First we import a few libraries to help, specifically matplotlib, NumPy, and OpenCV.

import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2

I am going to read the image from my directory and show it.  In the notebook this would produce the image you see above.

# Read in the image
image = mpimg.imread('test_images/solidWhiteCurve.jpg')

# Make a copy of the image to modify later
color_select = np.copy(image)

# Show the image
plt.imshow(image)
plt.show()

Next we can target the values we want to filter out. If you remember, pure white is 255 in all three channels (RGB). In the real world we are not going to look only for pure white; we need to relax that threshold some to get useful data back. In the code below I set my threshold for each channel to 200, so only pixels brighter than that in all three channels will be kept.

# Define the threshold for the different color values
red_threshold = 200
green_threshold = 200
blue_threshold = 200
rgb_threshold = [red_threshold, green_threshold, blue_threshold]

We also have to decide what happens to the low end of the spectrum. In the code below, every pixel that falls below the threshold is set to black in order to produce the most contrast.

# Identify pixels that fall below the threshold in any channel
thresholds = (image[:,:,0] < rgb_threshold[0]) \
           | (image[:,:,1] < rgb_threshold[1]) \
           | (image[:,:,2] < rgb_threshold[2])

# Black out everything below the threshold
color_select[thresholds] = [0, 0, 0]

# Display the image with the thresholds in place
plt.imshow(color_select)
plt.show()

Running that code will produce the image seen below.

[Image: the thresholded result, with everything darker than the threshold blacked out]

Useful?

So how is this image useful to us? If you look at it, you will notice that we have filtered out a lot of stuff that we do not need to "see" in order to drive. We have also created an image with very clear contrast between the lane lines and everything else. This will become important when we discuss Canny edge detection.

This first step really just shows that we can use the data inside the image itself to begin to get some useful insights from it. It truly is just a first step, though. In the next post I will develop this further, with the eventual goal of drawing clear lane lines on an image and then on a video (which is just a series of images).

Chippy, is that you?

About Rockwell

Rockwell joined the TheoryThree team in 2015.  He is passionate about the art of crafting quality code and loves satisfying his curiosity by learning about new technologies.

"Every child is an artist.  The problem is how to remain an artist once we grow up." - Pablo Picasso


