Visual Servoing Platform  version 3.3.0
Tutorial: Visual servo simulation on a pioneer-like unicycle robot

This tutorial focuses on visual servoing simulation on a unicycle robot. The case study is a Pioneer P3-DX mobile robot equipped with a camera.

We suppose here that you have at least followed Tutorial: Image-based visual servo, which will help you understand this tutorial.

Note that all the material (source code) described in this tutorial is part of the ViSP source code and can be downloaded using the following command:

$ svn export https://github.com/lagadic/visp.git/trunk/tutorial/robot/pioneer

Unicycle with a fixed camera

In this section we consider the following unicycle:

This robot has 2 dof: $(v_x, w_z)$, the translational and rotational velocities that are applied at point E, considered as the end-effector. A camera is rigidly attached to the robot at point C. The homogeneous transformation between C and E is given by cMe. This transformation is constant.

The robot position evolves with respect to a world frame, given by wMe. When a new joint velocity is applied to the robot using setVelocity(), the position of the camera wrt the world frame, wMc, is also updated.
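
To make this update cycle concrete, here is a minimal sketch, not taken from the tutorial sources, that uses the simulator and the calls introduced in the listing below; the headers are the same as in the listing and the velocity values are arbitrary:

vpSimulatorPioneer robot;    // unicycle with a fixed camera
robot.setSamplingTime(0.04); // integration step used by the simulator

vpColVector v(2);            // v[0] = vx, v[1] = wz
v[0] = 0.1;                  // translational velocity (m/s), arbitrary value
v[1] = 0.05;                 // rotational velocity (rad/s), arbitrary value
robot.setVelocity(vpRobot::ARTICULAR_FRAME, v);

vpHomogeneousMatrix wMc;
robot.getPosition(wMc);      // camera pose wrt the world frame, updated by the simulator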

To control the robot by visual servoing we need to introduce two visual features. If we consider a 3D point located at O as the target, we can position the robot relative to the target using as visual features the coordinate $x$ of the point in the image plane and $\log(Z/Z^*)$, where $Z$ is the depth of the point in the camera frame and $Z^*$ its desired value. The first feature, implemented in vpFeaturePoint, is used to control $w_z$, while the second one, implemented in vpFeatureDepth, is used to control $v_x$. The position of the target in the world frame is given by the wMo transformation. Thus the current visual feature is ${\bf s} = (x, \log(Z/Z^*))^\top$ and the desired feature is ${\bf s}^* = (0, 0)^\top$.
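
As a reminder, and not part of the original tutorial text, the classical interaction matrices associated with these two features, expressed with respect to the camera velocity skew $(v_x, v_y, v_z, w_x, w_y, w_z)$, are:

\[ {\bf L}_x = \left[\begin{array}{cccccc} -1/Z & 0 & x/Z & xy & -(1+x^2) & y \end{array}\right] \]

\[ {\bf L}_{\log(Z/Z^*)} = \left[\begin{array}{cccccc} 0 & 0 & -1/Z & -y & x & 0 \end{array}\right] \]

Combined with ${\bf ^c V_e}$ and ${\bf ^e J_e}$, these matrices relate the feature velocities to the robot control inputs $(v_x, w_z)$.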

The code that does the simulation is provided in tutorial-simu-pioneer.cpp and given hereafter.

#include <iostream>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpVelocityTwistMatrix.h>
#include <visp3/gui/vpPlot.h>
#include <visp3/robot/vpSimulatorPioneer.h>
#include <visp3/visual_features/vpFeatureBuilder.h>
#include <visp3/visual_features/vpFeatureDepth.h>
#include <visp3/visual_features/vpFeaturePoint.h>
#include <visp3/vs/vpServo.h>
int main()
{
  try {
    // Desired camera position wrt the target
    vpHomogeneousMatrix cdMo;
    cdMo[1][3] = 1.2;
    cdMo[2][3] = 0.5;

    // Initial camera position wrt the target
    vpHomogeneousMatrix cMo;
    cMo[0][3] = 0.3;
    cMo[1][3] = cdMo[1][3];
    cMo[2][3] = 1.;
    vpRotationMatrix cRo(0, atan2(cMo[0][3], cMo[1][3]), 0);
    cMo.insert(cRo);

    // Instantiate the simulated robot and deduce the target and camera positions wrt the world frame
    vpSimulatorPioneer robot;
    robot.setSamplingTime(0.04);
    vpHomogeneousMatrix wMc, wMo;
    robot.getPosition(wMc);
    wMo = wMc * cMo;

    // Define the target as a 3D point and compute its coordinates in the camera frame
    vpPoint point(0, 0, 0);
    point.track(cMo);

    // Create the visual servo task
    vpServo task;
    task.setServo(vpServo::EYEINHAND_L_cVe_eJe);
    task.setInteractionMatrixType(vpServo::DESIRED, vpServo::PSEUDO_INVERSE);
    task.setLambda(0.2);

    vpVelocityTwistMatrix cVe;
    cVe = robot.get_cVe();
    task.set_cVe(cVe);

    vpMatrix eJe;
    robot.get_eJe(eJe);
    task.set_eJe(eJe);

    // Current and desired visual feature x: the point abscissa in the image plane
    vpFeaturePoint s_x, s_xd;
    vpFeatureBuilder::create(s_x, point);
    s_xd.buildFrom(0, 0, cdMo[2][3]);
    task.addFeature(s_x, s_xd, vpFeaturePoint::selectX());

    // Current and desired visual feature log(Z/Z*)
    vpFeatureDepth s_Z, s_Zd;
    double Z = point.get_Z();
    double Zd = cdMo[2][3];
    s_Z.buildFrom(s_x.get_x(), s_x.get_y(), Z, log(Z / Zd));
    s_Zd.buildFrom(0, 0, Zd, 0);
    task.addFeature(s_Z, s_Zd);

#ifdef VISP_HAVE_DISPLAY
    // Create a window (800 by 500) at position (400, 10) with 3 graphics
    vpPlot graph(3, 800, 500, 400, 10, "Curves...");

    // Init the curve plotter
    graph.initGraph(0, 2);
    graph.initGraph(1, 2);
    graph.initGraph(2, 1);
    graph.setTitle(0, "Velocities");
    graph.setTitle(1, "Error s-s*");
    graph.setTitle(2, "Depth");
    graph.setLegend(0, 0, "vx");
    graph.setLegend(0, 1, "wz");
    graph.setLegend(1, 0, "x");
    graph.setLegend(1, 1, "log(Z/Z*)");
    graph.setLegend(2, 0, "Z");
#endif

    int iter = 0;
    for (;;) {
      // Update the current camera position wrt the target
      robot.getPosition(wMc);
      cMo = wMc.inverse() * wMo;

      // Update the current visual features
      point.track(cMo);
      Z = point.get_Z();
      vpFeatureBuilder::create(s_x, point);
      s_Z.buildFrom(s_x.get_x(), s_x.get_y(), Z, log(Z / Zd));

      robot.get_cVe(cVe);
      task.set_cVe(cVe);

      robot.get_eJe(eJe);
      task.set_eJe(eJe);

      // Compute the control law and send the resulting (vx, wz) velocities to the robot
      vpColVector v = task.computeControlLaw();
      robot.setVelocity(vpRobot::ARTICULAR_FRAME, v);

#ifdef VISP_HAVE_DISPLAY
      graph.plot(0, iter, v);               // plot velocities applied to the robot
      graph.plot(1, iter, task.getError()); // plot error vector
      graph.plot(2, 0, iter, Z);            // plot the depth
#endif
      iter++;

      if (task.getError().sumSquare() < 0.0001) {
        std::cout << "Reached a small error. We stop the loop... " << std::endl;
        break;
      }
    }
#ifdef VISP_HAVE_DISPLAY
    graph.saveData(0, "./v2.dat");
    graph.saveData(1, "./error2.dat");

    const char *legend = "Click to quit...";
    vpDisplay::displayText(graph.I, (int)graph.I.getHeight() - 60, (int)graph.I.getWidth() - 150, legend, vpColor::red);
    vpDisplay::flush(graph.I);
    vpDisplay::getClick(graph.I);
#endif

    // Kill the servo task
    task.print();
    task.kill();
  } catch (const vpException &e) {
    std::cout << "Catch an exception: " << e << std::endl;
  }
}

We now provide a line-by-line explanation of the code.

Firstly we define cdMo, the desired position the camera has to reach wrt the target. The component $t_y=1.2$ should be different from zero to avoid a singular configuration. The camera has to keep a distance of 0.5 meter from the target.

vpHomogeneousMatrix cdMo;
cdMo[1][3] = 1.2; // ty
cdMo[2][3] = 0.5; // tz

Secondly we specify cMo, the initial position of the camera wrt the target.

vpHomogeneousMatrix cMo;
cMo[0][3] = 0.3; // tx
cMo[1][3] = cdMo[1][3]; // ty
cMo[2][3] = 1.; // tz
vpRotationMatrix cRo(0, atan2(cMo[0][3], cMo[1][3]), 0);
cMo.insert(cRo);

Thirdly, by introducing our simulated robot, we can compute the position of the target wMo and of the camera wMc wrt the world frame.

vpSimulatorPioneer robot;
robot.setSamplingTime(0.04);
vpHomogeneousMatrix wMc, wMo;
robot.getPosition(wMc);
wMo = wMc * cMo;

Once all the frames are defined, we define a 3D point and its coordinates (0,0,0) in the object frame as the target.

vpPoint point;
point.setWorldCoordinates(0,0,0);

We compute then its coordinates in the camera frame.

point.track(cMo);

A visual servo task is then instantiated.

vpServo task;

With the next line, we specify the kind of visual servoing control law that will be used to control our mobile robot. Since the camera is mounted on the robot, we consider the case of an eye-in-hand visual servo. The robot controller provided in vpSimulatorPioneer allows sending $(v_x, w_z)$ velocities. This controller also implements the robot jacobian $\bf ^e J_e$ that links the end-effector velocity skew vector $\bf v_e$ to the control velocities $(v_x, w_z)$. The velocity twist matrix $\bf ^c V_e$, also provided by this controller, allows transforming a velocity skew vector expressed in the end-effector frame into the camera frame.
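
task.setServo(vpServo::EYEINHAND_L_cVe_eJe);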

We then specify that the interaction matrix $\bf L$ is computed from the visual features at the desired position. The constant gain that ensures an exponential decrease of the feature error is set to 0.2.
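
task.setInteractionMatrixType(vpServo::DESIRED, vpServo::PSEUDO_INVERSE);
task.setLambda(0.2);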

To summarize, with the previous lines, the following control law will be used:

\[ \left[\begin{array}{c} v_x \\ w_z \end{array}\right] = -0.2 \left( {\bf L_{s^*} {^c}V_e {^e}J_e}\right)^{+} ({\bf s} - {\bf s}^*) \]

From the robot we retrieve the velocity twist transformation $\bf ^c V_e$, which is then passed to the task.

vpVelocityTwistMatrix cVe;
cVe = robot.get_cVe();
task.set_cVe(cVe);

We do the same with the robot jacobian $\bf ^e J_e$.

vpMatrix eJe;
robot.get_eJe(eJe);
task.set_eJe(eJe);

Let us now consider the visual features. We first instantiate the current and desired position of the 3D target point as a visual feature point.

vpFeaturePoint s_x, s_xd;

The current visual feature is directly computed from the perspective projection of the point position in the camera frame.
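
vpFeatureBuilder::create(s_x, point);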

The desired position of the feature is set to (0,0). The depth of the point cdMo[2][3] is required to compute the feature position.

s_xd.buildFrom(0, 0, cdMo[2][3]);

Finally only the position of the feature along x is added to the task.
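
task.addFeature(s_x, s_xd, vpFeaturePoint::selectX());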

We consider now the second visual feature $log(Z/Z^*)$ that corresponds to the depth of the point. The current and desired features are instantiated with:

vpFeatureDepth s_Z, s_Zd;

Then, we get the current Z and desired Zd depth of the target.

double Z = point.get_Z();
double Zd = cdMo[2][3];

From these values, we are able to initialize the current depth feature:

s_Z.buildFrom(s_x.get_x(), s_x.get_y(), Z, log(Z/Zd));

and also the desired one:

s_Zd.buildFrom(0, 0, Zd, 0);

Finally, we add the feature to the task:

task.addFeature(s_Z, s_Zd);

Then comes the material used to plot in real time the curves that show the evolution of the velocities, the visual error and the estimated depth. The corresponding lines are not explained in this tutorial, but should be easy to understand after reading Tutorial: Real-time curves plotter tool.

In the visual servo loop we retrieve the robot position and compute the new position of the camera wrt the target:

robot.getPosition(wMc);
cMo = wMc.inverse() * wMo;

We compute the coordinates of the point in the new camera frame:

point.track(cMo);

Based on these new coordinates, we update the point visual feature s_x:
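
vpFeatureBuilder::create(s_x, point);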

and also the depth visual feature:

Z = point.get_Z();
s_Z.buildFrom(s_x.get_x(), s_x.get_y(), Z, log(Z/Zd));

We also update the task with the values of the velocity twist matrix cVe and the robot jacobian eJe:

robot.get_cVe(cVe);
task.set_cVe(cVe);
robot.get_eJe(eJe);
task.set_eJe(eJe);

After all these updates, we are able to compute the control law:
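
vpColVector v = task.computeControlLaw();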

Computed velocities are sent to the robot:
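
robot.setVelocity(vpRobot::ARTICULAR_FRAME, v);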

At the end, we stop the infinite loop when the visual error reaches a value that is considered as small enough:

if (task.getError().sumSquare() < 0.0001) {
std::cout << "Reached a small error. We stop the loop... " << std::endl;
break;
}

Unicycle with a moving camera

In this section we consider the following unicycle:

This robot has 3 dof: $(v_x, w_z, \dot q_{1})$; as previously, the translational and rotational velocities are applied, here at point M, and $\dot q_{1}$ is the pan velocity of the head. The position of the end-effector E depends on the joint position $q_{1}$. The camera at point C is attached to the robot at point E. The homogeneous transformation between C and E is given by cMe. This transformation is constant.

If we consider the same visual features as previously, ${\bf s} = (x, \log(Z/Z^*))^\top$ with desired feature ${\bf s}^* = (0, 0)^\top$, we are able to simulate this new robot simply by replacing vpSimulatorPioneer by vpSimulatorPioneerPan. The code is available in tutorial-simu-pioneer-pan.cpp.

Just notice here that we compute the control law using the current interaction matrix, i.e. the one computed from the current visual feature values.

The following control law is used:

\[ \left[\begin{array}{c} v_x \\ w_z \\ \dot q_{1} \end{array}\right] = -0.2 \left( {\bf L_{s} {^c}V_e {^e}J_e}\right)^{+} ({\bf s} - {\bf s}^*) \]
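
The main changes with respect to tutorial-simu-pioneer.cpp are the robot simulator class and the interaction matrix type; the sketch below shows only these lines (refer to tutorial-simu-pioneer-pan.cpp for the complete source):

#include <visp3/robot/vpSimulatorPioneerPan.h>
...
vpSimulatorPioneerPan robot; // unicycle with a camera mounted on a pan head
...
task.setInteractionMatrixType(vpServo::CURRENT, vpServo::PSEUDO_INVERSE);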

Next tutorial

You are now ready to see the next Tutorial: How to boost your visual servo control law.
