Computer vision based safety system: how to get the information

The system we described was originally operated by distributed units, called masters. These masters obtained the local occupancy information through a special circuit integrated into the board. However, network problems often caused errors in the safety logic, so we decided to introduce an additional layer of safety based on computer vision and complex event processing.

Now we will give some details about how we apply computer vision to recognize the trains and their locations.

The safety logic deployed to the embedded controllers has only binary information about the trains: we can detect whether a train is on a section of the system, and nothing more. There is no information about the direction of travel or the speed. Because of these limitations, the view the safety logic has is rather limited.

Because the logic itself cannot determine the direction, it must consider the worst-case scenarios. This causes deadlocks and unnecessary stops. This is the price we pay for safety.

The previously mentioned solution operates in a distributed manner. It’s safe, it’s reliable, it’s formally verified. As long as everything works correctly, that is.

So we decided to implement runtime verification of the local components, and we integrated a system-level monitor based on computer vision. We now show the latter in detail.

Our monitoring solution is a computer vision based one, using the open-source OpenCV framework. OpenCV is a very extensive library of optimized image processing and machine learning algorithms, ideal for the quick development of computer vision applications: you don’t have to worry about performance or programming complexity.

There are other solutions, possibly with better performance, but since OpenCV is open source and has a huge community behind it, our decision was straightforward.

This is an example of the markers we use on top of the trains. There are three markers: a red, a green, and a blue one.

Our needs were pretty simple: identify the trains and determine their positions. Circular patterns are great for this kind of computer vision task, because a circle is invariant to rotation: if you rotate it, nothing changes, so there is no additional complexity to handle.

So we decided to use markers to make our task easier!

Many of the people reading this article may think of the Hough circle algorithm, which can find circular patterns. The problem with this algorithm is its genericity: our board may contain many circles, not just the trains. We needed a fault-tolerant algorithm that can match a specific pattern even when only a partial circle is visible.

What we can do is use some math. Instead of traditional pattern matching, we can turn this into a math problem. Our pattern is very static: by static, I mean the circle pattern doesn’t vary in size. Because of this property, we can create a very specific matcher using convolution. Convolution takes two math functions and applies one over the other; the resulting function is the combination of the two. Although convolution is quite expensive to compute directly, if we transform our image to the frequency domain, the convolution becomes a simple multiplication, which is easy to do.

Let’s see an example of what happens:

  1. We create a pattern image with specific values. These values can be: 0, if we are not interested in what’s there; 1, if we want this area to be white; -1, if we want this area to be black.
  2. Convert this pattern image to the frequency domain.
  3. Read the image from the camera.
  4. Convert this image to the frequency domain.
  5. Multiply the two spectra.
  6. Convert the multiplied image back to the spatial domain.
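
To make these steps concrete, here is a minimal sketch in Python with OpenCV and NumPy. The file names, the gray-level cutoffs that encode the pattern values, and the use of still images instead of a live camera are all assumptions for illustration, not our exact implementation.

```python
import cv2
import numpy as np

# Hypothetical inputs; the real system processes live camera frames.
pattern = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)

# Step 1: encode the pattern as -1 (black), +1 (white), 0 (don't care).
# We assume mid-gray pixels in the pattern image mark the "don't care" areas.
kernel = np.zeros(pattern.shape, np.float32)
kernel[pattern > 192] = 1.0
kernel[pattern < 64] = -1.0

# Steps 2-4: zero-pad the kernel to the frame size and transform both
# images to the frequency domain.
padded = np.zeros(frame.shape, np.float32)
padded[:kernel.shape[0], :kernel.shape[1]] = kernel
f_frame = np.fft.fft2(frame.astype(np.float32))
f_kernel = np.fft.fft2(padded)

# Step 5: multiply the two spectra.
spectrum = f_frame * np.conj(f_kernel)

# Step 6: transform back; bright spots mark likely marker positions.
response = np.real(np.fft.ifft2(spectrum))
```

Note that the multiplication uses the conjugated kernel spectrum, which turns the convolution into a correlation; for a rotationally symmetric pattern like ours the two coincide anyway.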

The pattern

This is the pattern, where green has value 0, white has value 1, and black has value -1.

The image from the camera looking down at the MoDeS3 board

The convolution result of the camera image and the pattern

This is not a pitch-black image: if you look closely, you can see the bright points, which mark the places where markers were found.

Now we have a weird-looking image, where a brighter spot means a stronger match between the image and the pattern. On this image, we can use a simple threshold to get a binary image, in which it is trivial to find the brightest points.
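
Continuing the sketch above, the thresholding and the search for the bright points could look like this (the 0.9 factor is an assumed tuning value):

```python
import cv2
import numpy as np

# Keep only responses close to the global maximum (0.9 is an assumed factor).
_, hits = cv2.threshold(response.astype(np.float32),
                        0.9 * response.max(), 255, cv2.THRESH_BINARY)

# Each connected blob in the binary image is one marker candidate; its
# centroid gives the position. Label 0 is the background, so we skip it.
_, _, _, centroids = cv2.connectedComponentsWithStats(hits.astype(np.uint8))
marker_points = centroids[1:]
```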

We are not saying this is the most efficient algorithm for the task, but it’s really robust and precise. The precision is in the millimetre range, and its robustness can be summed up as: this solution does not make false detections. It might miss valid points for a short period, but we haven’t seen a false reading, not even in an 8-hour-long session. On the other hand, complex event processing can handle the cases when false values are observed for a short time interval.

So what happens after the detection of the circle patterns? There is a color ID inside the two circle patterns, and this color identifies the train. What we do is search for pairs of points whose distance exactly matches the known distance in the real world. If we find such a pair, we can be sure it’s a train marker. After identifying all the visible trains, we convert the data to JSON and publish it to the MQTT broker.
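
A sketch of this pairing and publishing step, building on the marker_points from the previous snippet: the expected distance, the tolerance, the broker address, and the topic name are all made-up placeholder values, and we use the paho-mqtt client purely for illustration.

```python
import json
import numpy as np
import paho.mqtt.client as mqtt

EXPECTED_DIST = 42.0  # assumed marker spacing in pixels
TOLERANCE = 3.0       # assumed matching tolerance

def find_marker_pairs(points):
    """Return point pairs whose distance matches the known marker spacing."""
    pairs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dist = np.linalg.norm(points[i] - points[j])
            if abs(dist - EXPECTED_DIST) < TOLERANCE:
                pairs.append((points[i], points[j]))
    return pairs

client = mqtt.Client()
client.connect("localhost", 1883)  # assumed broker address
for a, b in find_marker_pairs(marker_points):
    position = ((a + b) / 2).tolist()  # midpoint of the marker pair
    client.publish("modes3/trains", json.dumps({"position": position}))
```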

Our approach may seem a little non-standard, but it has proven its reliability, and after all, that’s what matters to us.

It’s always good to help!

Yesterday I was playing around with the Texas Instruments 4.4 real-time kernel. After installing the image, it turned out that the script copying the contents of the SD card to the eMMC was broken.

After some poking around and debugging, I found the source of the mystery and opened an issue with the solution.
https://github.com/RobertCNelson/boot-scripts/issues/20

Just a few hours later, RobertCNelson, the maintainer of everything in the BeagleBone Black project, issued a fix. That was fast!

The PRU

It’s one thing to have powerful hardware, but it doesn’t matter if we can’t trust it: we need software reliability too. Of course, you can always compile a real-time operating system, but that won’t solve your problem. A real-time OS guarantees the maximum time required to serve your interrupt, but it does not guarantee correctness. A real-time OS can run into a kernel panic too. Therefore you need something extra on top of your OS that can guarantee reliability.

This extra is called the PRU, the Programmable Real-Time Unit. It’s a very special and interesting idea, and currently only the BeagleBone Black has it. The PRU is a very small processor (32-bit, 200 MHz, 8 kB program memory, 8 kB data memory). The PRU’s architecture is similar to that of microcontrollers, but it is integrated next to the main CPU.

The block diagram of the PRU

The PRUs can control pins and communicate with the application processor. Why are they so useful? With an operating system, many levels of abstraction come into the equation when you are dealing with reliability: drivers, file systems, complex libraries. You need a simple, deterministic unit that can watch over your complex application. If something goes wrong, you can be sure that it will detect the error and take preventive measures.

The BeagleBone Black has two of these units, in which we can embed monitors and other helpful little applications. With these, we can guarantee more reliability than the real-time OS alone.

The past and the future of the hardware – Part 2

As Part 1 showed, using microcontrollers has serious implications. For the future of this project, we decided not to focus on the code generation side and on compressing the code into the AVR, but instead to invest in bigger hardware.
With a faster computing unit running a Linux operating system, many projects and add-ons can run at the same time, allowing many more people to experiment. And with a central deployment system, you can easily deploy applications and operating systems onto the embedded hardware.

Although Raspberry Pis are great embedded computers, it turned out they are not reliable enough. It’s not a big deal for everyday applications if the Pi suddenly restarts, but in our application this could be fatal, so we chose the BeagleBone Black single-board computer (SBC). Let’s not forget, our environment is a safety-critical one, so we need not only computing power but reliable hardware as well.

The BeagleBone Black

It has 4 GB of onboard flash memory, so you don’t need an SD card. The expansion header has more than twice the pin count of a Raspberry Pi 2. The Ethernet is not USB-based; it’s handled by the processor itself. The BeagleBone Black (BBB) is like a Raspberry Pi, but designed more for embedded applications, where media capabilities are not a priority.

The past and the future of the hardware – Part 1

In the following, I will give an overview of the past and the future of the hardware we use in the project!

In the post “What this project was all about”, zsoltmazlo described the application of Arduino boards. The choice was based on our experience with their easy-to-use libraries, but later, during the development of the models, it turned out that they had hit their limits.

As you may know, Arduinos are basically microcontrollers. They are a real all-in-one package. But with microcontrollers, the programmer faces very serious limitations. The Atmel ATmega328P has 32 KB of program memory and 2 KB of data memory, with a 20 MHz operating frequency. This is the processor on the famous Arduino Uno, and we also had an Ethernet-capable board. But the memory limitations are serious.


Compare this with even your smartphone: it simply lacks performance. However, we have to mention that although they look quite weak, they are very usable. About 98 percent of all processors are used in embedded systems, and most of them are microcontrollers like the one I mentioned.

Well, then what was the problem? We develop in a model-based fashion. With a safety-critical system, you can’t just write code and hope it works, or test it to a level where the developer thinks it’s safe enough. You need mathematical proofs of the correctness of your solution, and you can get those with model-based development. You create an abstract representation of your application with statecharts, flow diagrams, or other abstract formalisms (read the post “Statecharts are everywhere!” if you are interested). These are mathematically well-defined objects, therefore we can analyze them.
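
As a toy illustration (not one of the project’s actual models), even a tiny state machine written as an explicit transition table shows why such representations are analyzable: every state and every transition can be enumerated and checked.

```python
# A toy statechart of a track section: explicit states and transitions.
TRANSITIONS = {
    ("FREE", "train_enters"): "OCCUPIED",
    ("OCCUPIED", "train_leaves"): "FREE",
}

def step(state, event):
    """Fire a transition; undefined events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Because the whole behavior is a finite table, properties like
# "a section only becomes FREE after the train leaves" can be checked
# exhaustively instead of being hoped for.
assert step("FREE", "train_enters") == "OCCUPIED"
assert step("OCCUPIED", "train_enters") == "OCCUPIED"
```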

So what is the bottom line? When you are done analyzing the correctness of your model, you somehow need to make it runnable. From a model, you can generate code (e.g. C, C++, Java, etc.), which you can then compile and run. The problem with these code generators is that they are not prepared for the limitations of an embedded system: generated code tends to rely on dynamic memory handling and to waste a little memory. So little that you wouldn’t notice it on a PC, but with 2 KB of RAM you will, and the application will crash because it can’t allocate more memory for itself.

It turned out that overcoming these difficulties might be much harder than upgrading our hardware to the next level, so we chose the latter and started developing the new hardware to suit our needs. Sure, we could have worked out how to generate code that fits inside, but the MoDeS3 project isn’t just about us. It’s a tool for developing safe and smart applications, and hardware needs may vary between projects, so it is better to prepare for future needs.

Coming up: the new hardware, and a new horizon opens for us!

Bálint