Adding Machine Learning to the Arduino

My eventual goal is to create a few machine learning object classes for use with the Arduino, Processing, or both. In general, machine learning involves processor-intensive statistical calculations on very large data sets, which obviously does not lend itself well to a small 8-bit microcontroller platform like the Arduino. However, I believe it may be possible to do some very neat things with an Arduino-based system if most of the “training” phase of a given algorithm is performed on a PC with Processing, while the implementation/reinforcement-learning phase is conducted on the Arduino. If input data is limited to analog/digital sensor readings, a very simple open-source robot with the ability to learn may be possible.

I plan to do my best to implement some form of decision tree (supervised learning), Q-learning (reinforcement learning), and k-nearest neighbors (instance-based learning) algorithms, starting with the decision tree.

To begin, I plan to design and test my algorithms in MATLAB to speed development, then translate the code into either Processing or Arduino code as the case warrants.
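As a rough illustration of that division of labor, here is a minimal sketch of what the Arduino side might look like once a decision tree has been trained on the PC: the learned splits simply become nested conditionals over raw sensor readings. The pins, thresholds, and action codes below are hypothetical placeholders, not the output of a real training run.

// Hypothetical analog sensor pins
const int LIGHT_PIN = 0;
const int DIST_PIN = 1;

// Classify the current sensor readings into an action code.
// Each if/else corresponds to one split node of the (hypothetical) trained tree.
char classify(int light, int dist) {
  if (dist > 600) {           // split 1: is an obstacle close?
    return 'C';               // turn
  } else if (light > 512) {   // split 2: is it bright ahead?
    return 'A';               // move forward
  } else {
    return 'B';               // move backward
  }
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Read the sensors, classify, and report the chosen action once per second
  char action = classify(analogRead(LIGHT_PIN), analogRead(DIST_PIN));
  Serial.println(action);
  delay(1000);
}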

First Wireless Test Successful!

By adding an XBee module to my XBee shield, I was able to make the Arduino wireless. A second XBee module was attached to the ER1 through an XBee Explorer USB. Once the connection was made, as if by magic, the same program I wrote yesterday, which randomly moved the robot based on characters received over the serial port, ran just as it did before. The system is now wireless!

XBee Shield on Arduino and XBee Explorer attached to the ER1

Adding an XBee Wireless Connection to the ER1

With the eventual goal of placing biometric sensors on a subject and wirelessly transmitting their data to the ER1 robot, a wireless communication method is required. A very simple method for short/mid-range wireless communication is the ZigBee protocol. Because the ZigBee standard as defined is relatively difficult to work with, a very common approach for hobbyists is to use Digi’s XBee wireless modules, which wrap the ZigBee hardware protocol in an easy-to-use serial interface. With an XBee module attached through an FTDI USB virtual COM port, data can be transmitted wirelessly with no more effort than using a standard wired serial cable.
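For reference, pairing two modules mostly amounts to giving them matching network settings over that serial connection. Here is a minimal configuration sketch, assuming Series 1 XBee modules at their default 9600 baud and a serial terminal attached to each module in turn; the AT registers are standard XBee commands, but the ID and address values are arbitrary examples:

+++          (enter command mode; the module replies OK)
ATID 3001    (PAN network ID; must match on both modules)
ATMY 1       (this module's 16-bit address; use 2 on the second module)
ATDL 2       (destination address; use 1 on the second module)
ATWR         (save the settings to non-volatile memory)
ATCN         (exit command mode)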

Processing provides a serial library which allows programs written in Processing (such as the telnet program being used to control the ER1) to access data from serially connected external devices such as another computer, an Arduino, or an XBee module. In order to initially test the Processing serial library, an Arduino was connected to the ER1 laptop and configured to send serial data, including a random character “A”, “B”, or “C”, over the virtual COM port at one-second intervals. Processing would read the serial string, display it in the terminal, update the sketch window to show the current random character, and command the ER1 through the telnet connection to either move forward, move backward, or turn counterclockwise based on the latest character received. The Arduino and Processing code used for these basic tests are provided below.

Arduino Code

unsigned long time;   // timestamp of the last transmission (millis() returns unsigned long)
char randCharacter;   // the random character to send: 'A', 'B', or 'C'

void setup() {
  Serial.begin(9600);
  Serial.println("Arduino Serial to Processing Test");
  time = millis();
  randomSeed(analogRead(0));  // seed from a floating analog pin
}

void loop() {
  // Transmit once per second without blocking
  if (millis() - time > 1000) {
    time = millis();
    Serial.print("Test Transmission -- ");
    Serial.print("Time = ");
    Serial.print(millis() / 1000);
    Serial.print(" Sec");
    Serial.print(" random character = ");

    // random(65, 68) returns 65, 66, or 67 -- the ASCII codes for 'A' through 'C'
    randCharacter = random(65, 68);
    Serial.println(randCharacter);
  }
}

Processing Code

import processing.serial.*;
import processing.net.*;

Client myClient;   // telnet connection to the ER1 RCC
Serial myPort;     // serial connection to the Arduino
PFont fontA;

void setup() {
  size(200, 200);

  // Connect to the RCC API (the RCC must be open with API control enabled)
  myClient = new Client(this, "localhost", 9000);
  myClient.write("play phrase \"Hello World!\"\n");

  // List the available serial ports so the correct index can be chosen
  println("Available Serial Ports");
  println(Serial.list());

  // The Arduino was the third port on my machine; adjust the index as needed
  String portName = Serial.list()[2];
  myPort = new Serial(this, portName, 9600);

  background(255);
  fontA = loadFont("BookmanOldStyle-Bold-48.vlw");
  textAlign(CENTER);

  // Set the font and its size (in units of pixels)
  textFont(fontA, 48);
}

void draw() {
  // Read every character waiting in the serial buffer
  while (myPort.available() > 0) {
    char inByte = myPort.readChar();
    print(inByte);
    if (inByte == 'A') {
      fill(210, 25, 30);          // red square
      rect(50, 50, 100, 100);
      fill(255);
      text("A", 100, 120);
      myClient.write("move 6 inches\n");    // forward
    }
    else if (inByte == 'B') {
      fill(30, 210, 25);          // green square
      rect(50, 50, 100, 100);
      fill(255);
      text("B", 100, 120);
      myClient.write("move -6 inches\n");   // backward
    }
    else if (inByte == 'C') {
      fill(25, 30, 210);          // blue square
      rect(50, 50, 100, 100);
      fill(255);
      text("C", 100, 120);
      myClient.write("move 25 degrees\n");  // turn counterclockwise
    }
  }
}

Output

Processing Terminal and Window Output

Adding Face Detection with Processing

Adding face detection so that the ER1 robot can respond when it sees someone’s face was a tall order. Part of my reasoning for choosing Processing as an interface for controlling the ER1 was the many image-processing libraries and functions it provides. One of the most powerful is OpenCV, the open-source computer vision library originally created by Intel and now maintained by Willow Garage. By installing OpenCV to work with Processing and getting the webcam that came with the ER1 functional, I was able to give the robot a rudimentary ability to detect and react to a person’s face in its field of view.

First, the webcam needed to have its drivers installed. The drivers for the ER1 webcam appear to be available only for Windows XP, 2000, ME, and 98 (I told you this thing was old). The webcam itself is an IREZ Kritter, which now appears to be managed by GlobalMed, a telemedicine company. When you connect the camera’s USB to the computer and Windows asks for the location of the drivers, navigate to C:\Program Files\ER1 CD\CameraXP\

Once the camera’s drivers are installed, opening the RCC, choosing Settings -> Camera, and clicking the checkbox that says “Enable Camera Usage” should make the camera’s video visible in the ER1 interface. When connecting the camera to Processing, make sure that checkbox is NOT selected, or Processing will give an error that the hardware is already in use.

Now OpenCV needs to be installed. Follow the directions given on the Processing libraries webpage. The version of OpenCV to be installed is Intel(R) Open Source Computer Vision Library 1.0. I had to install, uninstall, and reinstall OpenCV a couple of times before I got it to work; hopefully it’s not so hard for you (if you ever have a reason to attempt this).

Lastly, in order to view any video with Processing, whether from a webcam or not, a very old plug-in called WinVDIG 1.0.1 is currently required. Once all this is installed and you’ve moved the unzipped example OpenCV sketches folder provided with the library into your Processing -> libraries folder, you should be all set, and can hope to get something like this running in no time.

Face Detection Example

Face Detection Example from Processing Using the ER1 Webcam
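The sketch behind a demo like the one above can be surprisingly short. Here is a minimal version in the spirit of the face-detection example bundled with the library, assuming the OpenCV Processing library’s hypermedia.video package described above; the spoken greeting is just one illustrative way the robot could react:

import hypermedia.video.*;   // the OpenCV library for Processing
import processing.net.*;
import java.awt.Rectangle;

OpenCV opencv;
Client myClient;
boolean sawFace = false;   // so the robot reacts once per appearance, not every frame

void setup() {
  size(320, 240);
  opencv = new OpenCV(this);
  opencv.capture(width, height);                    // open the webcam
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);   // load the frontal-face classifier
  myClient = new Client(this, "localhost", 9000);   // connect to the RCC API
}

void draw() {
  opencv.read();                  // grab a frame from the camera
  image(opencv.image(), 0, 0);    // show it in the sketch window

  Rectangle[] faces = opencv.detect();   // run the face detector

  // Outline each detected face
  noFill();
  stroke(255, 0, 0);
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }

  // Speak only when a face first appears
  if (faces.length > 0 && !sawFace) {
    myClient.write("play phrase \"Hello There!\"\n");
  }
  sawFace = faces.length > 0;
}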

Reviewing Last Semester

So here’s the deal. Last year I took a class named “Machine Learning”, in which we learned some of the basic uses and algorithms that comprise machine learning. One of the projects we attempted was to use a couple of very outdated Evolution Robotics ER1 robots to implement a machine learning task. The problem was that the robot hardware and software were both so out of use and in such bad repair that they were very difficult to use for the task at all. After the class concluded, I came back to see what could be done to make the robots somewhat functional. What I decided to do was use Processing and its free extension libraries to connect with and control the ER1 through the provided telnet API and the ER1 Robot Control Center (RCC). I successfully started that project last year, and I’ll now attempt to describe what was done and how it works.

First: The RCC and Processing must be installed on the laptop that is going to control the robot. The RCC should come on a CD or download with the ER1, and Processing is freely available from its website.

Second: The RCC must be configured to allow API control. This is done by clicking the “Settings” button in the upper left corner, followed by opening the “Remote Control” tab. Then the radio button “Allow API control of this instance” must be selected. The port defaults to 9000, and leaving it as such worked just fine. Clicking “OK” should then make the RCC ready for API control. If you did this step correctly, upon opening the RCC the message box should say something like “Listening for incoming API requests on port 9000”. In order to control the ER1 through Processing, the RCC must be left open, but it can be minimized.

Third: Open Processing and run a sketch that uses the network library to connect to the RCC telnet API. Here is one such example program that I use to verify that the ER1 is connected and functional:

import processing.net.*;

Client myClient;   // telnet connection to the RCC

void setup() {
  // Connect to the RCC API listening on port 9000
  myClient = new Client(this, "localhost", 9000);
  myClient.write("play phrase \"Hello World!\"\n");
}

void draw() {
  // Repeat the phrase every three seconds
  delay(3000);
  myClient.write("play phrase \"I am an E R 1 Robot, who is Here to Help!\"\n");
}

At this point, just about anything that can be done in Processing can be translated over to the ER1 itself. Telnet API commands other than “play phrase” can be used as well. The API is documented in the RCC; this documentation can be found by opening the RCC, clicking the “Help” button in the upper left corner, then opening the section titled “ER1 Command Line Interface”.

If, instead of using Processing, you wish to enter the control commands directly, open a Windows command line, type “telnet localhost 9000” without the quotation marks, and press Enter. If a blank black command line opens, you can control the robot from the command line. Have fun!
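For example, a short session using the same commands that appear in the sketches above might look like this (I’ve left out the RCC’s replies):

telnet localhost 9000
move 6 inches
move -6 inches
move 25 degrees
play phrase "Hello World!"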

hello_world

A traditional way for those of us who work with computers and robotics to test our systems is the pleasantly ubiquitous “hello world” program. This post is such a test program: both a way to gauge the abilities and features of the web interface I’m using to create it, and a test of my own skill at writing and maintaining such a thing as a “blog”. Through this blog and its associated pages, I hope to showcase some of my projects in robotics and electrical engineering, especially my master’s independent research project. To those of you who have stumbled upon this first post, be warned: I have not yet gained the ability to statistically predict future outcomes, and as such make no claims about the quality or extent of what is to follow. Enjoy.