Let’s face it, robots are cool. They’re also going to run the world some day, and hopefully, at that time they will take pity on their poor soft fleshy creators (a.k.a. robotics developers) and help us build a space utopia filled with plenty. I’m joking of course, but only sort of.

In my eagerness to have some small influence over the matter, I took a course in autonomous robot control theory last year, which culminated in my building a Python-based robotic simulator that allowed me to practice control theory on a simple, mobile, programmable robot.

In this article, I’m going to show how to use a Python robot framework to develop control software, describe the control scheme I developed for my simulated robot, illustrate how it interacts with its environment and achieves its goals, and discuss some of the fundamental challenges of robotics programming that I encountered along the way.

In order to follow this tutorial on robotics programming for beginners, you should have a basic knowledge of two things:

  • Mathematics—we will use some trigonometric functions and vectors
  • Python—since Python is among the more popular basic robot programming languages—we will make use of basic Python libraries and functions

The snippets of code shown here are just a part of the entire simulator, which relies on classes and interfaces, so in order to read the code directly, you may need some experience in Python and object-oriented programming.

Finally, optional topics that will help you to better follow this tutorial are knowing what a state machine is and how range sensors and encoders work.

The challenge of the programmable robot: perception versus reality, and the fragility of control

The fundamental challenge of all robotics is this: It is impossible to ever know the true state of the environment. Robot control software can only guess the state of the real world based on measurements returned by its sensors. It can only attempt to change the state of the real world through the generation of control signals.


Thus, one of the first steps in control design is to come up with an abstraction of the real world, known as a model, with which to interpret our sensor readings and make decisions. As long as the real world behaves according to the assumptions of the model, we can make good guesses and exert control. As soon as the real world deviates from these assumptions, however, we will no longer be able to make good guesses, and control will be lost. Often, once control is lost, it can never be regained. (Unless some benevolent outside force restores it.)

This is one of the key reasons that robotics programming is so difficult. We often see videos of the latest research robot in the lab, performing fantastic feats of dexterity, navigation, or teamwork, and we are tempted to ask, “Why isn’t this used in the real world?” Well, next time you see such a video, take a look at how highly controlled the lab environment is. In most cases, these robots are only able to perform these impressive tasks as long as the environmental conditions remain within the narrow confines of their internal model. Thus, one key to the advancement of robotics is the development of more complex, flexible, and robust models—and said advancement is subject to the limits of the available computational resources.

One key to the advancement of robotics is the development of more complex, flexible, and robust models.

[Side Note: Philosophers and psychologists alike would note that living creatures also suffer from dependence on their own internal perception of what their senses are telling them. Many advances in robotics come from observing living creatures and seeing how they react to unexpected stimuli. Think about it. What is your internal model of the world? Is it different from that of an ant, and that of a fish? (Hopefully.) However, like the ant and the fish, it is likely to oversimplify some realities of the world. When your assumptions about the world are not correct, it can put you at risk of losing control of things. Sometimes we call this “danger.” The same way our little robot struggles to survive against the unknown universe, so do we all. This is a powerful insight for roboticists.]

The programmable robot simulator

The simulator I built is written in Python and very cleverly dubbed Sobot Rimulator. You can find v1.0.0 on GitHub. It does not have a lot of bells and whistles but it is built to do one thing very well: provide an accurate simulation of a mobile robot and give an aspiring roboticist a simple framework for practicing robot software programming. While it is always better to have a real robot to play with, a good Python robot simulator is much more accessible and is a great place to start.

In real-world robots, the software that generates the control signals (the “controller”) is required to run at a very high speed and make complex computations. This affects the choice of which robot programming languages are best to use: Usually, C is used for these kinds of scenarios, but in simpler robotics applications, Python is a very good compromise between execution speed and ease of development and testing.

The software I wrote simulates a real-life research robot called the Khepera, but it can be adapted to a range of mobile robots with different dimensions and sensors. Since I tried to program the simulator as similar as possible to the real robot’s capabilities, the control logic can be loaded into a real Khepera robot with minimal refactoring, and it will perform the same as the simulated robot. The specific features implemented refer to the Khepera III, but they can be easily adapted to the new Khepera IV.

In other words, programming a simulated robot is analogous to programming a real robot. This is critical if the simulator is to be of any use to develop and evaluate different control software approaches.

In this tutorial, I will be describing the robot control software architecture that comes with v1.0.0 of Sobot Rimulator, and providing snippets from the Python source (with slight modifications for clarity). However, I encourage you to dive into the source and mess around. The simulator has been forked and used to control different mobile robots, including a Roomba2 from iRobot. Likewise, please feel free to fork the project and improve it.

The control logic of the robot is contained in these Python classes/files:

  • models/supervisor.py—this class is responsible for the interaction between the simulated world around the robot and the robot itself. It evolves our robot state machine and triggers the controllers for computing the desired behavior.
  • models/supervisor_state_machine.py—this class represents the different states in which the robot can be, depending on its interpretation of the sensors.
  • The files in the models/controllers directory—these classes implement different behaviors of the robot given a known state of the environment. In particular, a specific controller is selected depending on the state machine.

The goal

Robots, like people, need a purpose in life. The goal of our software controlling this robot will be very simple: It will attempt to make its way to a predetermined goal point. This is usually the basic feature that any mobile robot should have, from autonomous cars to robotic vacuum cleaners. The coordinates of the goal are programmed into the control software before the robot is activated but could be generated from an additional Python application that oversees the robot movements. For example, think of it driving through multiple waypoints.

However, to complicate matters, the environment of the robot may be strewn with obstacles. The robot MAY NOT collide with an obstacle on its way to the goal. Therefore, if the robot encounters an obstacle, it will have to find its way around so that it can continue on its way to the goal.

The programmable robot

Every robot comes with different capabilities and control concerns. Let’s get familiar with our simulated programmable robot.

The first thing to note is that, in this guide, our robot will be an autonomous mobile robot. This means that it will move around in space freely and that it will do so under its own control. This is in contrast to, say, a remote-control robot (which is not autonomous) or a factory robot arm (which is not mobile). Our robot must figure out for itself how to achieve its goals and survive in its environment. This proves to be a surprisingly difficult challenge for novice robotics programmers.

Control inputs: sensors

There are many different ways a robot may be equipped to monitor its environment. These can include anything from proximity sensors, light sensors, bumpers, cameras, and so forth. In addition, robots may communicate with external sensors that give them information that they themselves cannot directly observe.

Our reference robot is equipped with nine infrared sensors—the newer model has eight infrared and five ultrasonic proximity sensors—arranged in a “skirt” in every direction. There are more sensors facing the front of the robot than the back because it is usually more important for the robot to know what is in front of it than what is behind it.

In addition to the proximity sensors, the robot has a pair of wheel tickers that track wheel movement. These allow you to track how many rotations each wheel makes, with one full forward turn of a wheel being 2,765 ticks. Turns in the opposite direction count backward, decreasing the tick count instead of increasing it. You don’t have to worry about specific numbers in this tutorial because the software we will write uses the traveled distance expressed in meters. Later I will show you how to compute it from ticks with an easy Python function.
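As a preview, the tick-to-distance conversion boils down to a couple of lines. The wheel radius and ticks-per-revolution values below are illustrative placeholders, not the robot’s official specifications:

```python
import math

# Placeholder geometry -- check your robot's datasheet for real values.
WHEEL_RADIUS = 0.021   # wheel radius in meters (assumed)
TICKS_PER_REV = 2765   # ticks for one full forward wheel turn

def ticks_to_meters(ticks):
    """Convert an encoder tick count into distance traveled by the wheel."""
    revolutions = ticks / TICKS_PER_REV
    return revolutions * 2 * math.pi * WHEEL_RADIUS
```

Negative tick counts (backward turns) naturally yield negative distances, which is exactly what the odometry needs later.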

Control Outputs: mobility

Some robots move around on legs. Some roll like a ball. Some even slither like a snake.

Our robot is a differential drive robot, meaning that it rolls around on two wheels. When both wheels turn at the same speed, the robot moves in a straight line. When the wheels move at different speeds, the robot turns. Thus, controlling the movement of this robot comes down to properly controlling the rates at which each of these two wheels turn.

API

In Sobot Rimulator, the separation between the robot “computer” and the (simulated) physical world is embodied by the file robot_supervisor_interface.py, which defines the entire API for interacting with the “real robot” sensors and motors:

  • read_proximity_sensors() returns an array of nine values in the sensors’ native format
  • read_wheel_encoders() returns an array of two values indicating total ticks since the start
  • set_wheel_drive_rates( v_l, v_r ) takes two values (in radians-per-second) and sets the left and right speed of the wheels to those two values

This interface internally uses a robot object that provides the data from sensors and the ability to move motors or wheels. If you want to create a different robot, you simply have to provide a different Python robot class that can be used by the same interface, and the rest of the code (controllers, supervisor, and simulator) will work out of the box!
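To illustrate the shape such a replacement robot class must have, here is a minimal stand-in. The three method names come from the API listed above; everything else (the sensor count, the stubbed return values) is an assumption for illustration:

```python
class FakeRobot:
    """Minimal stand-in satisfying the supervisor interface."""
    NUM_SENSORS = 9

    def __init__(self):
        self.wheel_rates = (0.0, 0.0)

    def read_proximity_sensors(self):
        # Raw sensor values in the robot's native format (stubbed here)
        return [0.0] * self.NUM_SENSORS

    def read_wheel_encoders(self):
        # Total ticks since start for (left, right)
        return (0, 0)

    def set_wheel_drive_rates(self, v_l, v_r):
        # Commanded wheel speeds in radians per second
        self.wheel_rates = (v_l, v_r)
```

Any class exposing these three methods could, in principle, be dropped in behind robot_supervisor_interface.py.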

The simulator

Just as you would use a real robot in the real world without paying too much attention to the laws of physics involved, you can ignore how the robot is simulated and just skip directly to how the controller software is programmed, since it will be almost the same between the real world and a simulation. But if you are curious, I will briefly introduce it here.

The file world.py is a Python class that represents the simulated world, with robots and obstacles inside. The step function inside this class takes care of evolving our simple world by:

  • Applying physics rules to the robot’s movements
  • Considering collisions with obstacles
  • Providing new values for the robot sensors

In the end, it calls the robot supervisors responsible for executing the robot brain software.

The step function is executed in a loop so that robot.step_motion() moves the robot using the wheel speed computed by the supervisor in the previous simulation step.
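A minimal sketch of that ordering might look like the following. The class and method names here follow the description above but are simplified stand-ins, not the actual world.py code; the recording robot exists only to make the call order visible:

```python
class RecordingRobot:
    """Stand-in robot that records the order of simulation calls."""
    def __init__(self):
        self.calls = []

    def step_motion(self, dt):
        self.calls.append("motion")

    def update_sensors(self):
        self.calls.append("sensors")

    def step_supervisor(self, dt):
        self.calls.append("supervisor")

class World:
    def __init__(self, robots, dt=0.05):  # 0.05 s per step => 20 Hz
        self.robots = robots
        self.dt = dt

    def step(self):
        for r in self.robots:
            r.step_motion(self.dt)       # apply physics to movements
        # (collision checks against obstacles would go here)
        for r in self.robots:
            r.update_sensors()           # provide new sensor values
        for r in self.robots:
            r.step_supervisor(self.dt)   # run the robot brain last
```

The key point is the ordering: motion first, then fresh sensor values, and only then the supervisor that decides the next wheel speeds.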


The apply_physics() function internally updates the values of the robot proximity sensors so that the supervisor will be able to estimate the environment at the current simulation step. The same concepts apply to the encoders.

A simple model

First, our robot will have a very simple model. It will make many assumptions about the world. Some of the important ones include:

  • The ground is always flat and even
  • Obstacles are never round
  • The wheels never slip
  • Nothing is ever going to push the robot around
  • The sensors never fail or give false readings
  • The wheels always turn when they are told to

Although most of these assumptions are reasonable inside a house-like environment, round obstacles could be present. Our obstacle avoidance software has a simple implementation and follows the boundary of obstacles in order to go around them. We will give readers hints on how to improve the control framework of our robot with an additional check to avoid circular obstacles.

The control loop

We will now enter into the core of our control software and explain the behaviors that we want to program inside the robot. Additional behaviors can be added to this framework, and you should try your own ideas after you finish reading! Behavior-based robotics software was proposed more than 20 years ago and it’s still a powerful tool for mobile robotics. As an example, in 2007 a set of behaviors was used in the DARPA Urban Challenge—the first competition for autonomous driving cars!

A robot is a dynamic system. The state of the robot, the readings of its sensors, and the effects of its control signals are in constant flux. Controlling the way events play out involves the following three steps:

  1. Apply control signals.
  2. Measure the results.
  3. Generate new control signals calculated to bring us closer to our goal.

These steps are repeated over and over until we have achieved our goal. The more times we can do this per second, the finer control we will have over the system. The Sobot Rimulator robot repeats these steps 20 times per second (20 Hz), but many robots must do this thousands or millions of times per second in order to have adequate control. Remember our previous introduction about different robot programming languages for different robotics systems and speed requirements.
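The three steps can be sketched as a single, framework-agnostic function. The four callables are hypothetical hooks standing in for whatever the robot framework provides; nothing here is specific to Sobot Rimulator:

```python
def control_loop_step(read_sensors, estimate_state, compute_control, actuate):
    """One iteration of the measure -> estimate -> act cycle described above.

    read_sensors:    returns raw measurements
    estimate_state:  turns measurements into a state estimate
    compute_control: turns the state estimate into control signals
    actuate:         applies the control signals to the hardware
    """
    measurements = read_sensors()
    state = estimate_state(measurements)
    signals = compute_control(state)
    actuate(signals)
    return state, signals
```

Running this function at 20 Hz (or faster) is what the simulator's main loop effectively does.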

In general, each time our robot takes measurements with its sensors, it uses these measurements to update its internal estimate of the state of the world—for example, the distance from its goal. It compares this state to a target value of what it wants the state to be (for the distance, it wants it to be zero), and calculates the error between the desired state and the actual state. Once this information is known, generating new control signals can be reduced to a problem of minimizing the error, which will eventually move the robot toward the goal.

A nifty trick: simplifying the model

To control the robot we want to program, we have to send a signal to the left wheel telling it how fast to turn, and a separate signal to the right wheel telling it how fast to turn. Let’s call these signals vL and vR. However, constantly thinking in terms of vL and vR is very cumbersome. Instead of asking, “How fast do we want the left wheel to turn, and how fast do we want the right wheel to turn?” it is more natural to ask, “How fast do we want the robot to move forward, and how fast do we want it to turn, or change its heading?” Let’s call these parameters velocity v and angular (rotational) velocity ω (read “omega”). It turns out we can base our entire model on v and ω instead of vL and vR, and only once we have determined how we want our programmed robot to move, mathematically transform these two values into the vL and vR we need to actually control the robot wheels. This is known as a unicycle model of control.

In robotics programming, it's important to understand the difference between unicycle and differential drive models.

Here is the Python code that implements the final transformation in supervisor.py. Note that if ω is 0, both wheels will turn at the same speed:

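In outline, the standard unicycle-to-differential transformation looks like this. The wheel base and wheel radius constants are placeholder values for illustration, not the robot’s official dimensions:

```python
# Placeholder robot geometry in meters (assumed, not official specs).
WHEEL_BASE = 0.0885    # distance between the two wheels
WHEEL_RADIUS = 0.021   # radius of each wheel

def uni_to_diff(v, omega):
    """Transform unicycle commands (v in m/s, omega in rad/s)
    into left and right wheel speeds in rad/s."""
    v_l = (2.0 * v - omega * WHEEL_BASE) / (2.0 * WHEEL_RADIUS)
    v_r = (2.0 * v + omega * WHEEL_BASE) / (2.0 * WHEEL_RADIUS)
    return v_l, v_r
```

With ω = 0 the two wheel speeds come out identical, and with v = 0 they are equal and opposite, spinning the robot in place.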

Estimating state: robot, know thyself

Using its sensors, the robot must try to estimate the state of the environment as well as its own state. These estimates will never be perfect, but they must be fairly good because the robot will be basing all of its decisions on these estimations. Using its proximity sensors and wheel tickers alone, it must try to guess the following:

  • The direction to obstacles
  • The distance from obstacles
  • The position of the robot
  • The heading of the robot

The first two properties are determined by the proximity sensor readings and are fairly straightforward. The API function read_proximity_sensors() returns an array of nine values, one for each sensor. We know ahead of time that the seventh reading, for example, corresponds to the sensor that points 75 degrees to the right of the robot.

Thus, if this value shows a reading corresponding to 0.1 meters distance, we know that there is an obstacle 0.1 meters away, 75 degrees to the right. If there is no obstacle, the sensor will return a reading of its maximum range of 0.2 meters. Thus, if we read 0.2 meters on sensor seven, we will assume that there is actually no obstacle in that direction.

Because of the way the infrared sensors work (measuring infrared reflection), the numbers they return are a non-linear transformation of the actual distance detected. Thus, the Python function for determining the distance indicated must convert these readings into meters. This is done in supervisor.py as follows:

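The actual conversion depends on the sensor’s calibration curve. As a sketch, here is one plausible inversion assuming an exponential falloff of reflected intensity with distance; the calibration constants are invented for illustration:

```python
import math

# Hypothetical calibration constants for an exponential falloff model.
# A real sensor needs a measured calibration curve instead.
MAX_READING = 3960.0   # raw value returned at zero distance (assumed)
DECAY = 30.0           # how quickly reflection fades with distance (assumed)
MAX_RANGE = 0.2        # sensor's maximum range in meters

def reading_to_meters(raw):
    """Invert a non-linear IR reading into an approximate distance."""
    if raw <= 0:
        return MAX_RANGE                       # nothing detected
    d = -math.log(raw / MAX_READING) / DECAY   # invert raw = MAX_READING * e^(-DECAY * d)
    return min(max(d, 0.0), MAX_RANGE)         # clamp to the sensor's range
```

Stronger reflections (larger raw values) map to shorter distances, and anything below the detection threshold maps to the maximum range.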

Again, we have a specific sensor model in this Python robot framework, while in the real world, sensors come with accompanying software that should provide similar conversion functions from non-linear values to meters.

Determining the position and heading of the robot (together known as the pose in robotics programming) is somewhat more challenging. Our robot uses odometry to estimate its pose. This is where the wheel tickers come in. By measuring how much each wheel has turned since the last iteration of the control loop, it is possible to get a good estimate of how the robot’s pose has changed—but only if the change is small.

This is one reason it is important to iterate the control loop very frequently in a real-world robot, where the motors moving the wheels may not be perfect. If we waited too long to measure the wheel tickers, both wheels could have done quite a lot, and it will be impossible to estimate where we have ended up.

Given our current software simulator, we can afford to run the odometry computation at 20 Hz—the same frequency as the controllers. But it could be a good idea to have a separate Python thread running faster to catch smaller movements of the tickers.

Below is the full odometry function in supervisor.py that updates the robot pose estimation. Note that the robot’s pose is composed of the coordinates x and y, and the heading theta, which is measured in radians from the positive X-axis. Positive x is to the east and positive y is to the north. Thus a heading of 0 indicates that the robot is facing directly east. The robot always assumes its initial pose is (0, 0), 0.

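A minimal version of the standard differential-drive odometry update is sketched below. The geometry constants are placeholder assumptions, and the real function in supervisor.py is structured differently, but the math is the textbook recipe:

```python
import math

# Placeholder geometry (assumed); real values come from the robot config.
WHEEL_RADIUS = 0.021
WHEEL_BASE = 0.0885
TICKS_PER_REV = 2765

def update_odometry(pose, left_ticks, right_ticks):
    """Update pose = (x, y, theta) from wheel tick deltas
    accumulated since the last control loop iteration."""
    x, y, theta = pose
    # Distance traveled by each wheel since the last update
    meters_per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = left_ticks * meters_per_tick
    d_right = right_ticks * meters_per_tick
    # Distance traveled by the robot center, and the change in heading
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    # Assume motion along the old heading over one short time step
    x += d_center * math.cos(theta)
    y += d_center * math.sin(theta)
    theta += d_theta
    return x, y, theta
```

Equal tick counts move the robot straight along its heading; opposite tick counts rotate it in place, which matches the differential-drive intuition from earlier.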

Now that our robot is able to generate a good estimate of the real world, let’s use this information to achieve our goals.

Python robot programming methods: go-to-Goal behavior

The ultimate purpose of our little robot’s existence in this programming tutorial is to get to the goal point. So how do we make the wheels turn to get it there? Let’s start by simplifying our worldview a little and assume there are no obstacles in the way.

This then becomes a simple task and can be easily programmed in Python. If we go forward while facing the goal, we will get there. Thanks to our odometry, we know what our current coordinates and heading are. We also know what the coordinates of the goal are because they were pre-programmed. Therefore, using a little linear algebra, we can determine the vector from our location to the goal, as in go_to_goal_controller.py:

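In essence, this is a change of reference frame: subtract the robot’s position and rotate by the negative of its heading. A self-contained sketch (the real controller code is organized differently):

```python
import math

def goal_in_robot_frame(pose, goal):
    """Express the goal point in the robot's own reference frame.
    pose = (x, y, theta) in world coordinates; goal = (gx, gy)."""
    x, y, theta = pose
    dx, dy = goal[0] - x, goal[1] - y
    # Rotate the world-frame vector by -theta into the robot frame
    gx = dx * math.cos(theta) + dy * math.sin(theta)
    gy = -dx * math.sin(theta) + dy * math.cos(theta)
    return gx, gy
```

For example, a robot at the origin facing north (theta = π/2) sees a goal one meter to its north as the point (1, 0): directly ahead on its own X-axis.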

Note that we are getting the vector to the goal in the robot’s reference frame, and NOT in world coordinates. If the goal is on the X-axis in the robot’s reference frame, that means it is directly in front of the robot. Thus, the angle of this vector from the X-axis is the difference between our heading and the heading we want to be on. In other words, it is the error between our current state and what we want our current state to be. We, therefore, want to adjust the turning rate ω so that the angle between our heading and the goal shrinks toward zero. We want to minimize the error:

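A bare-bones proportional version of this idea follows. The gain value is an arbitrary placeholder (the real controller is a full PID, as noted next), and the goal vector is assumed to already be in the robot’s reference frame:

```python
import math

K_P = 5.0  # assumed proportional gain, for illustration only

def heading_error(goal_robot_frame):
    """Angle between the robot's heading (its own +X axis) and the goal."""
    gx, gy = goal_robot_frame
    return math.atan2(gy, gx)

def turn_rate(goal_robot_frame):
    """Proportional control: turn faster the larger the heading error."""
    return K_P * heading_error(goal_robot_frame)
```

A goal dead ahead gives zero error and zero turn rate; a goal off to the left gives a positive (counterclockwise) turn rate.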

self.kP in the above snippet of the controller Python implementation is a control gain. It is a coefficient which determines how fast we turn in proportion to how far away from the goal we are facing. If the error in our heading is 0, then the turning rate is also 0. In the real Python function inside the file go_to_goal_controller.py, you will see more similar gains, since we used a PID controller instead of a simple proportional coefficient.

Now that we have our angular velocity ω, how do we determine our forward velocity v? A good general rule of thumb is one you probably know instinctively: If we are not making a turn, we can go forward at full speed, and then the faster we are turning, the more we should slow down. This generally helps us keep our system stable and acting within the bounds of our model. Thus, v is a function of ω. In go_to_goal_controller.py the equation is:

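Many functions satisfy the rule of thumb; one simple choice (the exact formula and constants here are illustrative, not the controller’s actual ones) is to divide the top speed by a growing function of |ω|:

```python
V_MAX = 0.3  # assumed top speed in m/s, for illustration

def forward_velocity(omega):
    """Slow down as the commanded turn rate grows;
    full speed when omega == 0, never fully stopping."""
    return V_MAX / (abs(omega) + 1.0) ** 2
```

At ω = 0 this returns the full V_MAX, and it decays smoothly (but never to zero) as the turn sharpens.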

A suggestion to elaborate on this formula is to consider that we usually slow down when near the goal in order to reach it with zero speed. How would this formula change? It has to somehow include a replacement of v_max() with something proportional to the distance. OK, we have almost completed a single control loop. The only thing left to do is transform these two unicycle-model parameters into differential wheel speeds, and send the signals to the wheels. Here’s an example of the robot’s trajectory under the go-to-goal controller, with no obstacles:

This is an example of the programmed robot's trajectory.

As we can see, the vector to the goal is an effective reference for us to base our control calculations on. It is an internal representation of “where we want to go.” As we will see, the only major difference between go-to-goal and other behaviors is that sometimes going toward the goal is a bad idea, so we must compute a different reference vector.

Python robot programming methods: avoid-obstacles behavior

Going toward the goal when there’s an obstacle in that direction is a case in point. Instead of running headlong into things in our way, let’s try to program a control law that makes the robot avoid them.

To simplify the scenario, let’s now forget the goal point completely and just make the following our objective: When there are no obstacles in front of us, move forward; when an obstacle is encountered, turn away from it until it is no longer in front of us.

Accordingly, when there is no obstacle in front of us, we want our reference vector to simply point forward. Then ω will be zero and v will be maximum speed. However, as soon as we detect an obstacle with our proximity sensors, we want the reference vector to point in whatever direction is away from the obstacle. This will cause ω to shoot up to turn us away from the obstacle, and cause v to drop to make sure we don’t accidentally run into the obstacle in the process.

A neat way to generate our desired reference vector is by turning our nine proximity readings into vectors, and taking a weighted sum. When there are no obstacles detected, the vectors will sum symmetrically, resulting in a reference vector that points straight ahead as desired. But if a sensor on, say, the right side picks up an obstacle, it will contribute a smaller vector to the sum, and the result will be a reference vector that is shifted toward the left.

For a general robot with a different arrangement of sensors, the same idea can be applied but may require changes in the weights and/or additional care when sensors are balanced in front and in the rear of the robot, as the weighted sum could become zero.

When programmed correctly, the robot can avoid these complex obstacles.

Here is the code that does this in avoid_obstacles_controller.py:

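The idea can be sketched compactly. The sensor angles and weights below are assumptions chosen to mimic the front-heavy skirt described earlier; the real controller’s placement and weights differ:

```python
import math

# Assumed sensor layout: angle of each sensor relative to the robot's
# heading, in radians, denser toward the front (negative = right side).
SENSOR_ANGLES = [math.radians(a) for a in
                 (-128, -75, -42, -13, 13, 42, 75, 128, 180)]
# Assumed weights: obstacles near the front matter more than side/rear ones.
SENSOR_WEIGHTS = [0.5, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.25]

def ao_heading_vector(distances):
    """Weighted sum of per-sensor obstacle vectors. A short reading on one
    side shrinks that side's contribution, shifting the sum the other way."""
    hx = hy = 0.0
    for d, angle, w in zip(distances, SENSOR_ANGLES, SENSOR_WEIGHTS):
        hx += w * d * math.cos(angle)
        hy += w * d * math.sin(angle)
    return hx, hy
```

With all sensors at maximum range the left/right contributions cancel and the vector points straight ahead; a close obstacle on the right tilts it to the left.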

Using the resulting ao_heading_vector as our reference for the robot to try to match, here are the results of running the robot software in simulation using only the avoid-obstacles controller, ignoring the goal point completely. The robot bounces around aimlessly, but it never collides with an obstacle, and even manages to navigate some very tight spaces:

This robot is successfully avoiding obstacles within the Python robot simulator.

Python robot programming methods: hybrid automata (behavior state machine)

So far we’ve described two behaviors—go-to-goal and avoid-obstacles—in isolation. Both perform their function admirably, but in order to successfully reach the goal in an environment full of obstacles, we need to combine them.

The solution we will develop lies in a class of machines that has the supremely cool-sounding designation of hybrid automata. A hybrid automaton is programmed with several different behaviors, or modes, as well as a supervising state machine. The supervising state machine switches from one mode to another at discrete times (when goals are achieved or the environment suddenly changed too much), while each behavior uses sensors and wheels to react continuously to environment changes. The solution was called hybrid because it evolves both in a discrete and continuous fashion.

Our Python robot framework implements the state machine in the file supervisor_state_machine.py.

Equipped with our two handy behaviors, a simple logic suggests itself: When no obstacle is detected, use the go-to-goal behavior; when an obstacle is detected, switch to the avoid-obstacles behavior until the obstacle is no longer detected.

As it turns out, however, this logic will produce a lot of problems. What this system will tend to do when it encounters an obstacle is to turn away from it, then as soon as it has moved away from it, turn right back around and run into it again. The result is an endless loop of rapid switching that renders the robot useless. In the worst case, the robot may switch between behaviors with every iteration of the control loop—a state known as a Zeno condition.

There are multiple solutions to this problem, and readers that are looking for deeper knowledge should check, for example, the DAMN software architecture.

What we need for our simple simulated robot is an easier solution: One more behavior specialized with the task of getting around an obstacle and reaching the other side.

Python robot programming methods: follow-wall behavior

Here’s the idea: When we encounter an obstacle, take the two sensor readings that are closest to the obstacle and use them to estimate the surface of the obstacle. Then, simply set our reference vector to be parallel to this surface. Keep following this wall until A) the obstacle is no longer between us and the goal, and B) we are closer to the goal than we were when we started. Then we can be certain we have navigated the obstacle properly.

With our limited information, we can’t say for certain whether it will be faster to go around the obstacle to the left or to the right. To make up our minds, we select the direction that will move us closer to the goal immediately. To figure out which way that is, we need to know the reference vectors of the go-to-goal behavior and the avoid-obstacle behavior, as well as both of the possible follow-wall reference vectors. Here is an illustration of how the final decision is made (in this case, the robot will choose to go left):

Utilizing a few types of behaviors, the programmed robot avoids obstacles and continues onward.

Determining the follow-wall reference vectors turns out to be a bit more complicated than either the avoid-obstacle or go-to-goal reference vectors. Take a look at the Python code in follow_wall_controller.py to see how it’s done.
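To give a flavor of the computation, here is a simplified sketch: a component parallel to the estimated wall surface, plus a component that holds the robot at a fixed stand-off distance from it. The point inputs, the stand-off value, and the overall structure are assumptions; the actual controller handles sensor selection and side-choosing as well:

```python
import math

def follow_wall_reference(p1, p2, standoff=0.05):
    """Sketch of a follow-wall reference vector. p1 is the closest detected
    obstacle point and p2 the second closest, both in the robot's frame."""
    # Unit vector along the estimated wall surface
    sx, sy = p2[0] - p1[0], p2[1] - p1[1]
    s_norm = math.hypot(sx, sy)
    sx, sy = sx / s_norm, sy / s_norm
    # Perpendicular from the robot to the wall (reject p1 from the surface)
    dot = p1[0] * sx + p1[1] * sy
    px, py = p1[0] - dot * sx, p1[1] - dot * sy
    p_norm = math.hypot(px, py)
    # Push toward the wall until only the stand-off distance remains
    tx = px - standoff * px / p_norm
    ty = py - standoff * py / p_norm
    # Combine: travel along the wall while converging to the stand-off
    return sx + tx, sy + ty
```

When the robot already sits at exactly the stand-off distance, the perpendicular component vanishes and the reference points purely along the wall.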

Final control design

The final control design uses the follow-wall behavior for almost all encounters with obstacles. However, if the robot finds itself in a tight spot, dangerously close to a collision, it will switch to pure avoid-obstacles mode until it is a safer distance away, and then return to follow-wall. Once obstacles have been successfully negotiated, the robot switches to go-to-goal. Here is the final state diagram, which is programmed inside the supervisor_state_machine.py:

This diagram illustrates the switching between robotics programming behaviors to achieve a goal and avoid obstacles.

Here is the robot successfully navigating a crowded environment using this control scheme:

The robot simulator has successfully allowed the robot software to avoid obstacles and achieve its original purpose.

An additional feature of the state machine that you can try to implement is a way to avoid circular obstacles by switching to go-to-goal as soon as possible instead of following the obstacle boundary until the end (which does not exist for circular objects!)

Tweak, tweak, tweak: trial and error

The control scheme that comes with Sobot Rimulator is very finely tuned. It took many hours of tweaking one little variable here, and another equation there, to get it to work in a way I was satisfied with. Robotics programming often involves a great deal of plain old trial-and-error. Robots are very complex and there are few shortcuts to getting them to behave optimally in a robot simulator environment…at least, not much short of outright machine learning, but that’s a whole other can of worms.

Robotics often involves a great deal of plain old trial-and-error.

I encourage you to play with the control variables in Sobot Rimulator and observe and attempt to interpret the results. Changes to the following all have profound effects on the simulated robot’s behavior:

  • The error gain kP in each controller
  • The sensor gains used by the avoid-obstacles controller
  • The calculation of v as a function of ω in each controller
  • The obstacle standoff distance used by the follow-wall controller
  • The switching conditions used by supervisor_state_machine.py
  • Pretty much anything else

When programmable robots fail

We’ve done a lot of work to get to this point, and this robot seems pretty clever. Yet, if you run Sobot Rimulator through several randomized maps, it won’t be long before you find one that this robot can’t deal with. Sometimes it drives itself directly into tight corners and collides. Sometimes it just oscillates back and forth endlessly on the wrong side of an obstacle. Occasionally it is legitimately imprisoned with no possible path to the goal. After all of our testing and tweaking, sometimes we must come to the conclusion that the model we are working with just isn’t up to the job, and we have to change the design or add functionality.

In the mobile robot universe, our little robot’s “brain” is on the simpler end of the spectrum. Many of the failure cases it encounters could be overcome by adding some more advanced software to the mix. More advanced robots make use of techniques such as mapping, to remember where it’s been and avoid trying the same things over and over; heuristics, to generate acceptable decisions when there is no perfect decision to be found; and machine learning, to more perfectly tune the various control parameters governing the robot’s behavior.

A sample of what’s to come

Robots are already doing so much for us, and they are only going to be doing more in the future. While even basic robotics programming is a tough field of study requiring great patience, it is also a fascinating and immensely rewarding one.

In this tutorial, we learned how to develop reactive control software for a robot using the high-level programming language Python. But there are many more advanced concepts that can be learned and tested quickly with a Python robot framework similar to the one we prototyped here. I hope you will consider getting involved in the shaping of things to come.
