
Building a Light-Seeking Robot with Q-Learning

Q-Learning is a well-known algorithm that allows machines to learn while unsupervised. The Lego Mindstorms kit, along with leJOS, contains everything needed to implement this fascinating algorithm. This article demonstrates how to build a robot that will learn to seek out a bright light.

One of the most powerful aspects of Lego Mindstorms is that it can be programmed to do whatever we want it to do. This can be interesting, but often these types of projects are very predictable. Rather than doing what we tell it to do, an even more fascinating robot would have the ability to learn on its own. The field of AI has produced numerous algorithms for learning. There are essentially two large subdivisions of learning (which apply to animals as well as robots): supervised learning and unsupervised learning.

Supervised learning is often accomplished using neural networks. In a supervised learning situation, a robot is given input/output pairs and, after many examples, it develops its own function that can decide what to do with a given input. For example, a computer hooked up to a camera could be shown a series of satellite photographs of a forest. Some of the pictures could contain tanks hiding in the trees, and others could be regular unoccupied forest. One photo after another, the robot is shown a picture and told whether or not tanks are present in the scene. Once the teaching process is done, a new picture is shown to the robot and it tries to identify whether or not a tank is present. This type of problem is ideal for neural networks. The robot in this situation is learning passively; that is, after each photograph is shown, it doesn't take an action or make any statements. It just sits back and learns.

Even more interesting than supervised learning is unsupervised learning. This type of robot receives feedback from each action it performs, which allows it to judge how effective the action was. The feedback is extracted from the environment, either through sensors or internal states such as counting. This feedback is then classified as a reward (or reinforcement). An algorithm decides the value of the reward, which can be either positive or negative. These built-in rewards are very similar to the instincts and feelings that guide humans and other animals. A small sampling of reinforcements that guide your typical day are hunger, pain, enjoyment of food, and sensing cold temperatures.

There are three main advantages of reinforcement learning:

  • Very little programming is required because the robot figures out the algorithm itself.

  • If the environment changes, it doesn't need to be reprogrammed. Even if the robot design is altered, it will relearn the optimal algorithm.

  • If the learning algorithm is properly designed, the robot is guaranteed to converge on the optimal policy.

Reinforcement learning shines when given a complex problem. Any problem with many different states and actions—so many that it is complicated for humans to fathom—is ideal for reinforcement learning. In robotics, if you want to program a six-legged walking robot, you need to understand which direction each of the motors turns, you need to pay attention to the sensors that indicate leg position relative to the others, and you need to pay attention to a myriad of physical conditions such as balance. This can be downright complex because a simple pair of reversed wires could throw everything off. With reinforcement learning, the robot can sit there experimenting with different walking gaits, measure how far a gait has caused it to move, and the best gait will reveal itself with enough reinforcement. The user could then change the length of the robot's legs, change motor sizes, and reverse wires; the robot will readapt to the new hardware. If the walking algorithm were manually programmed, everything would need to be reprogrammed.

There are two types of unsupervised reinforcement learning. The first requires a model of the world so it can make proper decisions. For example, a self-learning chess program would need to know the position of all the pieces and all the available moves to both players in order to make an informed decision. This can be complex because it needs to keep many statistics. The second type uses an action-value model, which creates a function to deal with different states. This is known as Q-Learning.

The rest of this article will reveal more about Q-Learning, including the algorithm and the parts that make it up. This includes building and programming a real Lego Mindstorms robot with Java. The result will be a robot that uses Q-Learning to develop its own light-seeking behavior.

The Q-Learning Algorithm

A Q-Learning robot can determine the value of an action right after the action is performed, and doesn't need to know about the larger world model. It just needs to know the available actions for each step. Because it requires no model, it is much simpler to program than other learning algorithms.

Q-Learning values are built on a reward scheme. We need to design a reward algorithm that will motivate our robot to perform a goal-oriented behavior. For this project, we'll create a goal-based robot that is rewarded for finding brighter areas of light. This turns out to be very easy to do, using the following criteria:

  1. Goal: Approach Light. The value of the current light reading minus the last light reading determines the reward (greater increase = greater reward). So if the current light reading is 56 and the previous light reading was 53, it receives a reward of +3.

  2. Goal: Avoid Obstacles. If one of the bumpers is pressed, the reward is -2.

  3. Goal: Avoid Staying Still. If the light reading hasn't changed in the last five steps, it receives a negative reward of -2. Presumably, if the robot is receiving identical light readings for five or more steps in a row, it is hung up or not moving.
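The three reward criteria above can be combined into a single function. The sketch below is illustrative only: the class and method names, and the idea of passing in a counter of unchanged readings, are assumptions for this article, not part of the leJOS API.

```java
// Hypothetical reward function combining the article's three criteria.
public class Reward {
    static final int STILL_LIMIT = 5;  // steps of identical readings before penalty

    /**
     * @param currentLight  current light-sensor reading
     * @param previousLight light reading from the previous step
     * @param bumperPressed true if one of the bumpers is pressed
     * @param stillCount    consecutive steps with an unchanged light reading
     */
    public static int computeReward(int currentLight, int previousLight,
                                    boolean bumperPressed, int stillCount) {
        int reward = currentLight - previousLight;    // goal 1: approach light
        if (bumperPressed) reward -= 2;               // goal 2: avoid obstacles
        if (stillCount >= STILL_LIMIT) reward -= 2;   // goal 3: avoid staying still
        return reward;
    }
}
```

With the example readings from criterion 1 (current reading 56, previous reading 53), this function returns the same +3 reward described above.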

So how are the actual Q-Values calculated? Basically we just need an equation that increases the Q-Value when a reward is positive, decreases the value when it is negative, and holds the value at equilibrium when the Q-Values are optimal. The equation is as follows:

Q(a,i) ← Q(a,i) + β(R(i) + Q(a1,j) - Q(a,i))

where the following is true:

Q—a table of Q-values
a—previous action
i—previous state
j—the new state that resulted from the previous action
a1—the action that will produce the maximum Q value
β—the learning rate (between 0 and 1)
R—the reward function


This calculation must occur after an action has taken place, so the robot can determine how successful the action was (hence the use of the previous action and previous state in the equation).
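Here is one way the update equation might look in Java. The two-dimensional table indexed by [state][action] and the learning rate of 0.5 are illustrative assumptions; finding Q(a1,j) is simply a scan for the maximum Q-value reachable from the new state.

```java
// Sketch of the Q-value update, assuming a table q[state][action].
public class QUpdate {
    static final double BETA = 0.5;  // learning rate β, between 0 and 1 (assumed value)

    /**
     * Applies Q(a,i) <- Q(a,i) + β(R(i) + Q(a1,j) - Q(a,i)).
     *
     * @param q          table of Q-values, indexed as q[state][action]
     * @param prevState  i, the previous state
     * @param prevAction a, the previous action
     * @param newState   j, the state that resulted from the previous action
     * @param reward     R(i), the reward received
     */
    public static void update(double[][] q, int prevState, int prevAction,
                              int newState, double reward) {
        // Q(a1,j): the best Q-value available from the new state
        double best = q[newState][0];
        for (double v : q[newState]) {
            if (v > best) best = v;
        }
        q[prevState][prevAction] +=
            BETA * (reward + best - q[prevState][prevAction]);
    }
}
```

Note that when the reward plus the best next Q-value equals the current Q-value, the update leaves the table unchanged—this is the equilibrium the text describes.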

In order to implement this algorithm, all movement by the robot must be divided into steps. Each step consists of reading the percepts, choosing an action, and evaluating how well the action performed. All Q-values will presumably be equal to zero for the first step, but during the next step (when the algorithm is invoked), it will set a Q-value for Q(a,i) based on the reward it received for the last action. So as the robot moves along, the Q-values are calculated repeatedly, gradually becoming more refined (and more accurate).

In order to better understand the overall flow of the program, it would be useful to examine this in abstract. The abstract algorithm would look something like this:

inputs = getInputs()
action = chooseAction(inputs, qvalues)
performAction(action)
qvalues = updateQvalues(qvalues, getFeedback())
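The abstract steps above might be fleshed out in Java roughly as follows. Everything robot-specific is stubbed out with placeholder methods—the class name, the table sizes, and the stub bodies are assumptions for illustration, not leJOS calls; on the real robot the stubs would read the light sensor and drive the motors.

```java
// Illustrative skeleton of the sense-act-learn step loop.
public class LightSeeker {
    static final double BETA = 0.5;                 // learning rate (assumed)
    static final int STATES = 10, ACTIONS = 4;      // table sizes (assumed)
    double[][] q = new double[STATES][ACTIONS];     // Q-value table
    int state = 0;

    // --- stubs standing in for the robot's sensor and motor code ---
    int readState()      { return state; }                    // read the percepts
    int act(int action)  { return (state + action + 1) % STATES; } // perform action, observe new state
    double reward(int s) { return s == STATES - 1 ? 3.0 : -1.0; }  // stub feedback

    /** Pick the action with the highest Q-value for state s. */
    int chooseAction(int s) {
        int best = 0;
        for (int a = 1; a < ACTIONS; a++) {
            if (q[s][a] > q[s][best]) best = a;
        }
        return best;
    }

    /** Best Q-value reachable from state s, i.e. Q(a1,j). */
    double maxQ(int s) {
        double best = q[s][0];
        for (double v : q[s]) if (v > best) best = v;
        return best;
    }

    /** One step: read percepts, choose and perform an action, update Q(a,i). */
    void step() {
        int i = readState();       // previous state
        int a = chooseAction(i);   // previous action
        int j = act(a);            // new state resulting from the action
        double r = reward(j);      // feedback from the environment
        q[i][a] += BETA * (r + maxQ(j) - q[i][a]);
        state = j;
    }
}
```

Calling step() repeatedly refines the table, just as the article describes; a practical version would also occasionally pick a random action so the robot keeps exploring.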