
Our visualization pipeline:

At first we were uncertain about which system to use to visualize Cabot. We started by testing a simple visualization and GUI in CODESYS, but since CODESYS is a slow program we wanted to use third-party software instead. We had two main options: Ignition and CDP-Studio. Both were new software packages to all of the group members, and either would require a communication protocol. As the project went on we deprioritized the visualization and GUI, until we heard from another group that they were using Blender. Blender is a 3D graphics program with no built-in support for communication with other software, but it has great modelling tools and rendering capabilities. Since Blender has a Python API, and we now had MQTT set up, we decided to give it a go; even though Blender is not made for this purpose, it is possible to adjust the program to our needs. While the 24_7 CDP-Studio dashboard was being developed, the Blender script and visualization were developed in parallel. Let's go into the details of the Blender pipeline.

Responsible: Magnus


Digital-Twin in Blender:

First of all we created the digital double, as seen in the image on the right. Since most parts of our digital double do not need to be dimensionally accurate, such as the motor size and the end-effector, it was a simple modelling job. The dimensions of the aluminum profiles do have to be accurate, though, so they were modelled accordingly. The motor and driver 3D models were taken from the official Omron website, where CAD files are offered for free. Now for the important part: the communication and methods we implemented to make the visualization behave as a digital twin and UI for our system.


We wanted the position of the end-effector in our digital representation to match the actual end-effector position on Cabot. A communication fieldbus was needed, and we chose MQTT because of the Python libraries we could use inside Blender. Creating the script requires some knowledge of bpy (the Blender Python API). As we had experience with this from earlier projects, we made a subscriber script first, which listens for updates from the PLC publisher. The messages are formatted in the most straightforward way: "x,y,z". Some adjustments were also made so that the coordinate system of the physical system maps correctly into the visualization. A sketch of the subscriber is shown below.
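A minimal sketch of such a subscriber follows. The broker address, the topic name cabot/position, and the object name RedCube are assumptions for illustration only; the key detail is that bpy data is touched exclusively from Blender's main thread, here via a timer, because the paho-mqtt callbacks run on a background network thread.

```python
import bpy
import paho.mqtt.client as mqtt

latest_payload = None  # last "x,y,z" message received from the PLC

def on_message(client, userdata, msg):
    # Runs on the network thread: only store the payload here;
    # bpy data must never be modified from outside the main thread.
    global latest_payload
    latest_payload = msg.payload.decode()

def apply_position():
    # Runs on Blender's main thread via a timer, where bpy access is safe.
    global latest_payload
    if latest_payload is not None:
        x, y, z = (float(v) for v in latest_payload.split(","))
        bpy.data.objects["RedCube"].location = (x, y, z)
        latest_payload = None
    return 0.05  # re-run every 50 ms

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x takes a CallbackAPIVersion first
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("cabot/position")
client.loop_start()  # network loop in a background thread

bpy.app.timers.register(apply_position)
```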

The Python library we used for MQTT was paho-mqtt. It makes it easy to create subscriber and publisher clients, which is exactly what we needed. After the subscriber tested well, we figured: why not make Blender our main UI system as well? This would make interaction with the system more natural, because its complex dynamics could be visualized at the same time as it is used for input. The first idea was simply to publish the position of the blue cube to the PLC, which should then dynamically move the end-effector to the target position; a sketch of this publisher follows the list below. The implementation went well, and we decided to add more features. The features that can now be used from Blender are:

  1. Send "real-time" updates of the blue cube's position to update the target position on the system.
  2. Subscribe to the estimated position from the forward kinematics and link that position to the red cube.
  3. Add a step to the path.
  4. Remove a step from the path.
  5. Show the full path.
  6. Send the full path.
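Below is a hedged sketch of the publisher side (feature 1). The topic name cabot/target and the object name BlueCube are illustrative, not necessarily the names in our setup. It uses a depsgraph handler so that dragging the cube in the viewport publishes its new position as soon as the scene updates:

```python
import bpy
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.connect("localhost", 1883)
client.loop_start()

def publish_target(scene, *args):
    # Fires on every scene update, e.g. when the blue cube is dragged,
    # so the PLC receives a near real-time stream of target positions.
    loc = bpy.data.objects["BlueCube"].matrix_world.translation
    client.publish("cabot/target", f"{loc.x:.3f},{loc.y:.3f},{loc.z:.3f}")

bpy.app.handlers.depsgraph_update_post.append(publish_target)
```

In practice the handler fires often, so some rate limiting or change detection may be worth adding before publishing.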



How the visualization looks to the end-user:

To make it easier to visually understand what is happening in the system at all times, the viewport is split into four views. In this setup the top-left view is from the front, the top-right is from the side, one of the remaining views is from the top, and the last is a free camera the user can move around at will. This is the typical setup when running the program, though all of it is completely adjustable.
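The split can be set up by hand in Blender (View > Area > Toggle Quad View, Ctrl+Alt+Q), but as a rough sketch the same toggle can also be triggered from a script. This assumes a single 3D Viewport area in the current screen and Blender 3.2+ for temp_override:

```python
import bpy

# Find the 3D Viewport and toggle its four-way split (same as Ctrl+Alt+Q).
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        region = next(r for r in area.regions if r.type == 'WINDOW')
        with bpy.context.temp_override(area=area, region=region):
            bpy.ops.screen.region_quadview()
        break
```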

The UI interface:

The items in the list above are also available as UI elements within the toolbar interface in the software, since pressing buttons in a panel was the only way we found to run simple commands from the interface. The UI is implemented as illustrated in the image on the far right of this section.
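As a rough sketch of how such a panel is built with bpy, the snippet below draws buttons in the 3D Viewport sidebar. The panel name and the operator identifiers (cabot.add_step and so on) are placeholders for the operators that would be registered elsewhere in the script:

```python
import bpy

class CABOT_PT_controls(bpy.types.Panel):
    bl_label = "Cabot Controls"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'   # the sidebar toolbar (N-panel)
    bl_category = "Cabot"

    def draw(self, context):
        col = self.layout.column()
        col.operator("cabot.add_step", text="Add step to path")
        col.operator("cabot.remove_step", text="Remove step from path")
        col.operator("cabot.show_path", text="Show full path")
        col.operator("cabot.send_path", text="Send full path")

bpy.utils.register_class(CABOT_PT_controls)
```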
