#3: Crash

Last month, I ended my blog with a to-do list of the things I wanted to accomplish by the time this blog was due.  The list was:

  • Fix the PID algorithm
  • Add the “I” and the “D” of PID
  • Add weights to sensors
  • Add hazards
  • Harder to navigate roads

And among those, here is what got accomplished:

Which is demoralizing.

Autopsy report

I once read something to the effect of: 90% of your project will be coded in the first 10% of the time spent on it; the other 90% of the time goes to the last 10%.  This has been my first decently sized project since learning how to program, and my sample size of one agrees with that sentiment.

So, what happened?  Well, the simple answer is that I started to implement the next part of the project, and it did not work.  I ended last month with a sensor that could detect when it was off road and make a course correction based on that.  While that did get an object that could navigate a path, the point of this project is to navigate using proportional, integral, and derivative control.  So I added another type of sensor, located where the first sensor was, that followed the first sensor exactly until the edge of the road was found.  When the edge was found, the new sensor would stay at the edge of the road rather than continuing to move freely off road.  You would then have one sensor off the road and another sensor at the edge of the road.  With these two sensors reporting their pixel locations, the idea was to calculate an error based on their pixel distance from each other.
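A minimal sketch of that error calculation, assuming each sensor reports an (x, y) pixel location (the names here are hypothetical, not the project's actual API):

```python
import math

def pixel_error(free_sensor, edge_sensor):
    """Error as the pixel distance between the free-moving sensor
    (off road) and the sensor pinned to the edge of the road."""
    dx = free_sensor[0] - edge_sensor[0]
    dy = free_sensor[1] - edge_sensor[1]
    return math.hypot(dx, dy)

# The further the free sensor strays past the edge, the larger the error:
print(pixel_error((130, 40), (100, 40)))  # → 30.0
```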

The way I went about implementing the second sensor was to create a new class (the first sensor was called Color Sensor, and the new sensor Distance Sensor).  This new class was instantiated in the Color Sensor class's __init__ method, but did not inherit from Color Sensor.  This way I would always get a Color and Distance Sensor pair, which is what I wanted.  However, I based the Distance Sensor's movement on the movement of the Color Sensor.  As a result, the Distance Sensor tried to move with the Color Sensor, but was then restricted to the road.  Every time I encountered anything other than straight road, issues would happen (usually the Distance Sensor would fly off to infinity).

After banging my head against a wall for 2 weeks, I asked my mentor, Brian Krasnopolsky, for help.  His recommendation was to remove the Distance Sensor and combine the two sensors into one, reasoning that if you have 2n sensors instead of n sensors, you have doubled the number of things that can break.  I have no emotional attachment to any individual module of this project; my goal is to get it working.  So I spent some time rewriting how the Color Sensor works, described in this pseudocode:

```
def __notRoad(self, carCenter, target):
    update sensor
    find the edge of the car
    get the (x, y) distance from the center of the car to the target location
    start the sensor at the center of the car
    set dx, dy to an increment equal to half the screen

    while the sensor registers as on road:
        move the sensor location (x, y) to (x + dx, y + dy), towards the target location
        if sensor location (x, y) == target location (x, y):
            return (color, edge-of-car pixel location (x, y), sensor location (x, y))

    # color determines whether the car is on road or not
    read the color of the sensor at its current location
    return (color, edge-of-car pixel location (x, y), sensor location (x, y))
```
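That pseudocode can be turned into a runnable sketch; `is_road` here is a stand-in for the real color check, and a fixed step count replaces the half-screen increment:

```python
def march_sensor(car_center, target, is_road, steps=100):
    """Walk a sensor from the car's center toward a target point,
    stopping at the last pixel that still registers as road."""
    x, y = car_center
    tx, ty = target
    dx, dy = (tx - x) / steps, (ty - y) / steps
    for _ in range(steps):
        nx, ny = x + dx, y + dy
        if not is_road(nx, ny):
            break  # the next step would leave the road: stop at the edge
        x, y = nx, ny
    return (x, y)

# Toy road: everything left of x = 50.  Marching right stops at the edge:
print(march_sensor((0, 0), (100, 0), lambda x, y: x < 50))  # → (49.0, 0.0)
```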

As a result, the sensor can now tell how far away it is from the car, and never reads past the edge of the road.  The desired result of having the Color Sensor and Distance Sensor rolled into one has been achieved.  Furthermore, this is my first use of a private method (via the __functionName convention).  I know nothing is ever really private in Python, but since I plan to learn C/C++ Soon™, it made sense to use it here, as nothing else should have access to __notRoad.  I imagine that in a production environment, __functionName in Python mostly lets other developers on the project know that a method should only be accessed by its own class, rather than strictly forbidding it.
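For anyone curious what the double underscore actually does: Python mangles the name rather than truly hiding it.  A quick demonstration (the class and method here are stand-ins, not the project's code):

```python
class Sensor:
    def __notRoad(self):          # mangled to _Sensor__notRoad
        return "edge found"

    def read(self):
        return self.__notRoad()   # works normally inside the class

s = Sensor()
print(s.read())                   # → edge found
print(hasattr(s, "__notRoad"))    # → False: the plain name is hidden outside
print(s._Sensor__notRoad())       # → edge found: "private" is only a convention
```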

Light at the End of the Tunnel

This technically concludes the mentorship program, save for the presentation on this project to be given at a ChiPy meeting in a few months.  The key takeaway is that it is in a few months, which means I can continue working on this project.  Were the presentation in a few days, I would not participate out of embarrassment; the project feels like it has taken two steps backward from where I was a month ago.  Having said that, I know the code is better, satisfies my needs better, and is better organized.  But nobody cares if you rearranged your office furniture – they want a functional office.  So work on the project continues, and with luck and perseverance, I will have something more impressive to show off than I do right now.

#2: Off to the Races


A month has passed, and another blog is due.  I am in disbelief that only a month has gone by.  I say that because of this:

Not only do I have a functional car-like object, but it successfully navigates along a path.  Last month, I had big dreams of just getting the car moving in 8 directions by now.  I had anticipated that being able to drive the car around manually would be a lofty, but not outrageous goal.  Well, I blew past that and it travels in all 360 degrees of a circle, and while I still can control it manually, I simply don’t anymore.

Now, while I am excited and still riding the high of where the project is at, there is still a mountain of work to be done.  In the last blog, I briefly described what PID control is, and what is displayed in the gif above is absolutely not PID.  The sensors (white boxes) do check error and make a simple calculation to eliminate it, but the algorithm for even proportional control is more complicated.  Furthermore, I have only worked on proportional control, and if that is the only type of control you have, there should always be some residual error.  In the above gif, the error is corrected entirely, which should not currently be the case.
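That residual error is easy to see in a toy simulation (illustrative numbers, not the project's controller): with a constant load on the system, a proportional-only controller settles where its correction exactly cancels the load, short of the target.

```python
def p_only(target, kp, loss, steps=1000):
    """Proportional-only control of a toy system: each step the
    output gains kp * error but loses a constant amount to a load."""
    value = 0.0
    for _ in range(steps):
        error = target - value
        value += kp * error - loss
    return value

final = p_only(target=100.0, kp=0.5, loss=5.0)
print(final)  # → 90.0: a steady-state error of loss / kp = 10 remains
```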

The road less traveled

So how did I get here in such a short amount of time?  Oddly (to me), I actually did not spend all my time working on the project; I had to take a significant detour learning pygame.  In the last blog, I had random road generation going, but it was sloppy, and that was really all I had.

I followed this tutorial on YouTube.  After 9 days of watching and following the videos, this was the end result:


A lot of the work that went into building this game translated directly into my project.  I did not know that going in, and I worried I would come out the other side with all this information and nothing really gained.  Fortunately, that was not the case.  Of note, the things I learned and added into my project were:

  • Breaking up components into classes
  • The scrolling player object and the scrolling background
  • Adding GUI features like a menu with buttons, as well as the ability to pause

A Class of Their Own

As mentioned, the tutorial I followed used classes to build the game.  From the first week or so of learning Python, I was told to use classes.  That did not really mean anything to me, and when I tried, I failed to do anything productive and ended up breaking my scripts entirely.  Since moving back to my own project, I have started using classes left, right, and center.  It has become natural to organize and encapsulate things that are ultimately objects, along with their related functions, into classes.  I have gone from never using classes to using them everywhere:

```python
class PIDCar():

    def __init__(self, gameWindow):
        self.gameWindow = gameWindow
        pg.display.set_caption("Self-driving car")
        self.clock = pg.time.Clock()
        self.FPS = 60

        self.map = GameMap()
        self.car = CarActive()
        self.direction = DirectionOfMotion(self.car.image,
                                           (self.car.rect.centerx, self.car.rect.centery))
        self.dirReticle = DirectionReticle(self.car.image,
                                           (self.car.rect.centerx, self.car.rect.centery))
        self.controller = Controller()
        self.sensor1 = Sensor(self.gameWindow, True, .1665, 165, 1)   # Top right
        self.sensor2 = Sensor(self.gameWindow, True, 1.8335, 185, 1)  # Bottom right
        self.sensor3 = Sensor(self.gameWindow, True, .9, 115, 1)      # Behind center
        self.sensor4 = Sensor(self.gameWindow, True, 1.1, 115, 1)     # Behind center
        self.sensorList = [self.sensor1, self.sensor2, self.sensor3, self.sensor4]
```


In 28 lines of code, 10 classes are used or instantiated, with the last line being a list of the instantiated sensors (and a lot of lines are simply overruns of class instantiations that would not fit on a single line).  I now worry I might be overdoing it.

Road Blocks

As far as I’ve gotten, it was not without issues.  The current issue, as already mentioned, is that the PID is not really PID.  But as with all projects, you iterate until you get it right (or until it is due).  Another issue: the sensors shown in the first gif were originally completely filled white squares.  Only in the past week did I change them from pygame surfaces in the shape of a rectangle to drawn rectangles.  This sounds pedantic, but the way surfaces are rendered versus how shapes are drawn in pygame made all the difference.  When the sensors were filled squares, they were non-functional.  I had to stop blitting them to the game window for them to work, which meant I had a case not unlike Schrödinger’s cat: observing the sensors caused them to not work, but making the sensors work caused me to not be able to see them.  By drawing the sensors as empty squares, I could have functioning sensors that I could still observe as they changed positions in the game world.

Getting the rotation of the car object (currently a blue arrow) was another hassle.  Pygame has a built-in transform module that I first thought to use.  It led to interesting results:

In the end, the solution was to rotate the arrow image about its center and create individual pictures representing the arrow at each angle of motion.  This was a quick and dirty solution, and I imagine if I were better at software like Blender, I would create animated sprites with key frames.
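A sketch of the bookkeeping behind that approach: pre-render one image per angle step, then index into the list by heading.  The 15° step is an assumption for illustration, not the project's actual value.

```python
ANGLE_STEP = 15               # degrees between pre-rendered frames (assumed)
N_FRAMES = 360 // ANGLE_STEP  # 24 frames covering the full circle

def frame_index(heading_degrees):
    """Map an arbitrary heading to the nearest pre-rendered frame."""
    return round(heading_degrees / ANGLE_STEP) % N_FRAMES

# The frames themselves would be built once up front, e.g. with
# pygame.transform.rotate(base_image, i * ANGLE_STEP) for each index i.
print(frame_index(0))    # → 0
print(frame_index(97))   # → 6  (97° is nearest the 90° frame)
print(frame_index(359))  # → 0  (wraps back around to the 0° frame)
```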


The final month has begun, and all the progress I have made has inspired me to be more ambitious for April.  Maybe this is the reason companies never meet deadlines?  Anyway, here’s what I hope to have by this time next month:

  • Fix the PID algorithm

The current controller is not even proportional control, let alone PID control.  However, what was shown in the first image is probably what the final product should look like.  Just how it works under the hood will be more fleshed out.

  • Add the “I” and the “D” of PID

Right now I have only worked on proportional control.  The integral and derivative controls come next.

  • Add weights to sensors

Sensors that are further out from the car are more likely to come across things that are not road.  Conversely, when close-in sensors pick up something that is not road, the car needs to react much more aggressively to avoid hitting it.  A higher weight means a more aggressive correction to avoid obstacles.

  • Add hazards

Currently there’s only road and not road.  There should be other moving bodies (such as people) that are generated that the car should avoid.

  • Harder to navigate roads

The road presented in the first gif is a simple square shape.  The car can effectively navigate around its 4 corners, but it’s kind of boring and small.  I have a stretch goal of procedurally generating the road so that it is the equivalent of miles long and I would not necessarily know the path beforehand.  This will be a lot of work.  At the very least, I am already working on a manually coded path (like the square) with more zig-zags.
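The sensor-weighting item above can be sketched as a weighted sum of per-sensor errors (the numbers are illustrative; real weights would be tuned by experiment):

```python
def weighted_error(errors, weights):
    """Combine per-sensor errors into one correction, with close-in
    sensors weighted more heavily than far-out ones."""
    return sum(e * w for e, w in zip(errors, weights))

# A far sensor sees a big deviation, a near sensor a small one; the
# near sensor's larger weight still dominates the overall correction:
far_only  = weighted_error([8.0, 0.0], [0.25, 3.0])  # → 2.0
near_only = weighted_error([0.0, 2.0], [0.25, 3.0])  # → 6.0
```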

#1: Testing the waters


Whether you are baking a cake or trying to get a rocket into space, control theory is at work.  Most people are unaware of its presence, but take the baking example: how does your oven know what temperature it is currently at?  How does it know when the appropriate temperature has been reached?  What is the correct thing to do once that temperature has been reached?

Even if this is the first time you are hearing about control theory, you can probably answer at least a few of those questions: your oven has a thermometer, and it turns itself off when the desired temperature is reached.  This is the foundation of what is called feedback control:

  1. An input into a system is given.  Often a voltage, but in the case of the oven it is an ignited gas.
  2. A goal is set.  A desired voltage, a certain elevation, some maximum speed, or a temperature is to be achieved.
  3. Some output is given.  The oven outputs heat, the rocket shoots out propellant, the applied voltage causes the motor to spin.
  4. A sensor observes the output.  What is the resultant temperature?
  5. A controller modifies the input based on how much error exists between the sensor readings and the target.

To expand on step 5: an oven reading 75 °F (room temperature) when the target is 450 °F means there is a large error, so on the next “cycle” the input is modified to spit out as much gas as possible to get the heat flowing.  Some time later, the thermometer reads 465 °F.  Not only has the desired target been hit, it has been surpassed.  The next thing to do (in a “dumb” system like your oven) is to shut off the gas completely until enough heat bleeds off that the target temperature is met.
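That on/off cycle is simple enough to simulate in a few lines (a toy model with made-up heating and cooling rates):

```python
def oven(target=450, band=15, heat=10, cool=5, start=75, steps=200):
    """Toy on/off oven: the gas switches off above target + band and
    back on below target - band, so the temperature oscillates."""
    temp, gas_on, history = float(start), True, []
    for _ in range(steps):
        if temp >= target + band:
            gas_on = False
        elif temp <= target - band:
            gas_on = True
        temp += heat if gas_on else -cool
        history.append(temp)
    return history

trace = oven()
# After the initial warm-up, the temperature just bounces between
# roughly 435 °F and 465 °F, never settling on 450 °F exactly:
print(min(trace[60:]), max(trace[60:]))  # → 435.0 465.0
```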

For an oven, this is all the control you need.  If the system goes above the desired temperature, no big deal; your dessert will not be harmed if the gas is constantly being turned on and off, oscillating between 435 °F and 465 °F.  It is slow, relatively inaccurate, and not smart, but it is extremely cheap.  When speed, precision, and accuracy are required, control gets much more complicated (and also more expensive).  The next evolution is what is referred to as Proportional, Integral, and Derivative control, or PID control for short.  Each component still does the same basic thing: read the error between the current sensor reading of the output and the desired output.  How this error is interpreted is where they differ:

  1. Proportional control by itself is the simplest control, and the oven is an example of this kind of control.  It basically asks: what is the difference between 450 °F (target) and 75 °F?  If there is error, the input turns on.  If there is no error (target and current are equal), proportional control does nothing.  As a result, there is always going to be some error between the desired output and the real output.  For this reason, the other two types of control are used in conjunction.
  2. Integral control takes a history of past performance.  Integral is where calculus comes into play, and if you understand integration, then this control is observing the area under the curve that represents the error.  For those who have not learned calculus, it asks the question: what has happened to the error as time passed?  Has it gotten smaller or larger, and if so, by how much?  Using a combination of proportional and integral control, error can be eliminated almost entirely.
  3. Derivative control looks to future performance.  Again with calculus: when taking the derivative of a function, you are finding the slope of the curve at some instant in time.  Derivative control asks: where is the error going?  In the next iteration, will it be higher or lower than in the current iteration?

The combination of these three types of control yields a system that achieves a desired output quickly, accurately, and with high precision.  Derivative control has the added bonus of providing system stability.  An issue with controlled systems is that, if proper precautions are not taken, the system will spiral wildly out of control, leading to disastrous and even lethal results (e.g., if your oven were unstable, it would not stop at 450 °F, but would keep heating until it caught fire).
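Put together, the three terms form the classic PID update.  This is the textbook discrete form with illustrative, untuned gains, not the project's controller:

```python
class PID:
    def __init__(self, kp, ki, kd, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # running history of the error (I)
        self.prev_error = 0.0     # last error, for the slope (D)

    def update(self, target, measured):
        error = target - measured                         # P: where are we now?
        self.integral += error * self.dt                  # I: where have we been?
        derivative = (error - self.prev_error) / self.dt  # D: where are we going?
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a bare accumulator toward a target of 100; with the integral
# term included, the residual error is driven toward zero:
pid = PID(kp=0.4, ki=0.05, kd=0.1)
value = 0.0
for _ in range(200):
    value += pid.update(100.0, value)
print(round(value, 3))  # → 100.0
```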

Project Proposal

With all this in mind about control theory, the missing piece of the puzzle is the project.  The goal of the Chicago Python (ChiPy) Mentorship Program is to come out using Python better than we went in.  There are several high-level goals to be accomplished with this project:

  1. Become more fluent in Python
  2. Put my Python, high-level math, and problem-solving skills on display
  3. Present my project in a manner that could be easily understood by people with a non-technical background

Point 1 will be accomplished as I do points 2 and 3, because several necessary tools (mostly Python libraries) that I have never used before will need to be learned and applied in a meaningful way.  I will be using the following:

  1. Python (of course)
  2. SciPy – An open-source library for scientific computation and analysis.  I have tried to keep this blog post as non-technical as possible, but properly implementing controls requires an understanding of linear algebra and differential equations.
  3. NumPy – Sister to SciPy, used for arrays and matrices, which are essential to some mathematical operations.  Arrays and matrices in the NumPy library have more functionality than Python’s list object.
  4. matplotlib – For quickly plotting graphs.  Can be used to quickly determine aspects of a system and its controller.
  5. Pygame – A way to graphically display the behavior of the system and how it behaves with a controller.
  6. Git/Github – For quickly and efficiently handling version control, while also making it available to my mentor (and the public) for feedback.

This list may not be final.

The project is: Using the concepts of control theory and the aforementioned tools, I will build a simulation where a car follows a road.  The car will have sensors, and it will read information to provide feedback to the controller, telling the car if it is in an optimal location or not as it travels along this road.  There will (hopefully) be hazards such as pedestrians and potholes for the car to avoid.

First Week Progress

This first blog comes 10 days after the ChiPy Mentorship welcome dinner (Monday, February 5th).  My first meeting with my mentor, Brian Krasnopolsky, happened the following Wednesday.  This first meeting was primarily brainstorming and getting each other on the same page.  Brian got me on to the idea of using Pygame for a graphical way of showing off my project (my original intent was simply to use matplotlib).

Modeling System Behavior

Some of the work done in the first couple of days was simply review.  The class that I took that got me interested in the field of controls finished over a year ago, and I needed some time to get myself refreshed on the material.

Figure 1: Step response of a proportional controller


SciPy in combination with matplotlib can quickly model the behavior of a controlled system, as seen above in figure 1.  This is more abstract than a kitchen oven, but it is one of the ways I am familiar with representing a control system.  In short: the horizontal axis represents time, and the vertical axis represents the system response.  Figure 1 is an example of proportional control, and as I stated earlier, there is always error between the output of the system and the target.  The target for the system in figure 1 is set to 1, and even if I extended the graph to an hour (instead of .35 seconds), the output would never pass .5.

Figure 2: Adding integral control (left) and full PID control (right)


Integral and derivative control can each be added or excluded (although proportional-derivative control is relatively uncommon, so it is excluded here).  On the left, you can see the system has some initial instability, oscillating above and below the target until reaching equilibrium about 30 seconds in.  As mentioned, derivative control increases the stability of the system, so on the right you can see that it reaches the target relatively quickly (in about 3-4 seconds) and barely oscillates.  This is the ideal situation.

These functions are very quick and easy to get going in Python:

```python
import scipy.signal as tf
import matplotlib.pyplot as plt

# Transfer function: 2 / (s^2 + 2.3s + 2)
nums = [2]
dens = [1, 2.3, 2]

sys = tf.TransferFunction(nums, dens)
t, y = tf.step(sys)  # simulate the step response

plt.plot(t, y)
plt.show()
```

This code takes two arrays representing the numerator and denominator coefficients of the transfer function, which is defined as the output of the system divided by its input.  This is the equation that describes the behavior of the system.  It is then graphed using matplotlib.


Getting used to a graphical interface was difficult.  There was the initial shock of how to think about games (which I play a lot): the screen is redrawn from top to bottom as you move through the game world.  This happens so quickly (your frames per second hopefully being above 30) that the appearance of motion is achieved.  The first thing I set out to do was to generate a road on the fly.  A hard-coded seed for what was and was not road was made, and then numpy’s randint function was used to make the road move up or down as you “traveled” to the right.  There were some issues:

Figure 3: My first ‘roadblock’


Road generation started out fine, but as soon as it reached the bottom of the screen, part of the road got stuck there and the rest branched off.  When the branch reached the top, that also got stuck.  The solution: I was originally inserting new values with array.insert(-1, value), which places them before the last element, leaving the final element pinned in place.  I switched to array.append(value), which adds a new element at the end and places the value there.
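The difference is easy to demonstrate with a small list (standing in here for the road's column values):

```python
road = [10, 12, 11]

a = road.copy()
a.insert(-1, 13)   # insert at -1 slots the value *before* the last element
print(a)           # → [10, 12, 13, 11]  (the 11 stays pinned at the end)

b = road.copy()
b.append(13)       # append grows the list at the end, as the road needs
print(b)           # → [10, 12, 11, 13]
```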

Figure 4: Working random road generation


As far as fixes go, this one was relatively simple.  But actually finding the root of the problem felt like hunting for a missing semicolon in another language.

Further work

The next blog is due a month from now.  Brainstorming is all but finished; Brian and I have both tempered our desire to add more and more features.

Driving the car is going to be the main goal of this month.  The white square in figures 3 and 4 represents the car in this graphical interface.  Road generation is currently random and happens without input from the car.  But a car drives, so its motion should determine when new road appears at the edge of the screen (but not where, otherwise the point of this project is lost).  Furthermore, I want the car to drive in a total of 8 directions (cardinal and ordinal).

You can find all the files related to this project on GitHub.