At Xylos Inspire 2018, we demonstrated how to operate a remote-controlled Lego Mindstorms robot using your own body movements. In this blog post, we’ll illustrate how we did this.
Prototyping a controllable robot is easy with Lego Mindstorms. By combining Lego bricks, a programmable EV3 brick, motors and sensors, you can build your own robot that can walk, talk, shoot and move in almost any way you want. The motors and sensors are connected to the programmable EV3 brick. Available sensors include infrared sensors to measure distance, colour sensors, touch sensors and gyroscopes.
You can create your own design, but we decided to stick with one of the standard robots. The robot’s sensors are not used for this project.
The EV3 brick comes installed with a default operating system which supports several standard robots designed by Lego. You could also link it to the EV3 software and start programming with the building blocks included with the kit, but the downside is that your options are fairly limited if you work this way.
For our demo, we decided to start with a basic robot, so that we could fully control everything the robot does. We did this by leveraging ev3dev, a Debian Linux-based operating system which you flash onto an SD card and then insert into the brick. With Debian running on the robot, we could write our own code for it. We chose Python because it has the best open-source support (for a full list of supported programming languages, visit the ev3dev page). Being able to write our own Python code allowed us to send commands to the individual motors and to connect to a remote WebSocket server to respond to external commands. If you're interested in the code, you can find it in the GitHub repository linked at the end of this blog post.
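To give an idea of what the robot-side code looks like, here is a minimal sketch of how incoming commands could be translated into motor speeds. The command names follow the examples later in this post, but the speed values and function names are illustrative assumptions, not the exact code from our repository; on the brick itself, the resulting speeds would be passed to the ev3dev motor classes.

```python
# Hypothetical mapping from WebSocket command strings to speed percentages
# for the left and right drive motors. The actual values depend on your
# robot's build and are an assumption here.
COMMAND_SPEEDS = {
    "forward": (50, 50),
    "backward": (-50, -50),
    "left": (-30, 30),
    "right": (30, -30),
    "stop": (0, 0),
}

def speeds_for(command):
    """Translate a command received over the WebSocket into (left, right)
    motor speed percentages; unknown commands stop the robot."""
    return COMMAND_SPEEDS.get(command, (0, 0))
```

On the EV3 brick, a loop listening on the WebSocket would then apply these speeds to the drive motors (for example via the `ev3dev2` Python library's motor classes), keeping the hardware-specific code in one place.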
To summarise, the solution has three components: the client application that captures your body movements, an API with a WebSocket server acting as a middleman, and the Lego Mindstorms robot running ev3dev and our Python code.
For the client-to-robot communication, we decided to build an API to act as a middleman. The API is a container consisting of two parts. First, it handles all incoming requests from the user; multiple endpoints are available, such as ‘forward’, ‘backward’ and ‘shoot’. Second, the container hosts a WebSocket server to send messages to the robot: the API transforms every request into a message, which is then picked up by the listening WebSocket clients. The main reason we took this logic out of the robot is its limited hardware capabilities; and since we already had a Kubernetes cluster running, hosting it as a container was an easy decision. We’re not going to explain the Kubernetes configuration and the deployment to the cluster in detail in this blog, but if you’re interested, you can find the deployment file and container file in the git repository.
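As a rough sketch of the API's core logic, the snippet below shows how an incoming endpoint name could be validated and turned into the message that gets broadcast to the listening WebSocket clients. The JSON schema and function names here are assumptions for illustration, not the exact ones from our repository.

```python
import json

# Endpoint names taken from the examples above; the full API may expose more.
VALID_COMMANDS = {"forward", "backward", "shoot"}

def build_message(endpoint):
    """Validate an incoming API endpoint name and turn it into the JSON
    message that would be broadcast to all connected WebSocket clients."""
    if endpoint not in VALID_COMMANDS:
        raise ValueError(f"unknown command: {endpoint}")
    return json.dumps({"command": endpoint})
```

In the actual container, an HTTP framework would expose these endpoints and pass each request through a function like this before broadcasting the resulting message over the WebSocket server.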
For the demo, we decided to run the application locally and host the API which accepts commands and forwards them to the robot on Azure. With the application securely hosted on the internet, anyone with the necessary permissions could control the robot.
Interested in the source code for this project? You can find it here.