Statecharts are everywhere! #5 – Validation example

As we mentioned in a previous post, our colleague Bence has extended the validation rules embedded in YAKINDU Statechart Tools with new ones. The high-level purpose of the validation rules is to reduce the ambiguity and nondeterminism of the design models and to avoid bad design practices and structures.

Let’s see how it works in practice.

Let’s imagine we designed a statechart that contains a composite state with two parallel regions, as depicted in the next figure. Each region has two states, connected by transitions. Most triggers differ between the two regions, but one trigger, Protocol.response, is used in both. Although these two transitions have the same trigger, their actions are different: in the first region the action is Protocol.actionA, in the other region it is Protocol.actionB. Since the execution order of transitions with the same trigger in parallel regions is undefined, this is a bad design pattern (a so-called antipattern).

Bence designed an EMF-IncQuery validation pattern to automatically recognize these kinds of design patterns. More on EMF-IncQuery validation patterns can be read in the respective blog post here.

As the validation rules are evaluated incrementally by the EMF-IncQuery Validation Framework, warnings are shown in the Problems view of Eclipse. If we double-click a warning message, the editor jumps to the respective model element, so the error can be corrected easily.

The PRU

Having powerful hardware is one thing, but it does not matter if we cannot trust it. We need software reliability, too. Of course, you can always run a real-time operating system, but that alone will not solve the problem. A real-time OS guarantees the maximum time required to serve your interrupt, but it does not guarantee correctness: a real-time OS can run into a kernel panic, too. Therefore you need something extra on top of your OS that can guarantee reliability.

This extra is called the PRU, the Programmable Real-Time Unit. It is a very special and interesting idea, and currently only the BeagleBone Black has it. The PRU is a very small processor (32-bit, 200 MHz, 8 kB program memory, 8 kB data memory). The PRU’s architecture is similar to that of a microcontroller, but it is integrated next to the main CPU.

The block diagram of the PRU

The PRUs can control pins and communicate with the application processor. Why are they so useful? With an operating system, many levels of abstraction come into the equation when you are dealing with reliability: drivers, file systems, complex libraries. You need a simple, deterministic unit that can watch over your complex application. If something goes wrong, you can be sure that it will detect the error and take preventive measures.

The BeagleBone Black has two of these units, in which we can embed monitors and other helpful little applications. With these, we can guarantee more reliability than the real-time OS alone.

The past and the future of the hardware – Part 2

As Part 1 showed, using microcontrollers has serious limitations. For the future of this project, we decided not to focus on the code generation side and squeeze the code into the AVR, but instead to invest in bigger hardware.
With a faster computing unit running a Linux operating system, many projects and add-ons can run at the same time, allowing many more people to experiment. With a central deployment system, you can easily deploy applications and operating systems onto the embedded hardware.

Although Raspberry Pis are great embedded computers, it turned out that they are not reliable enough. For everyday applications it is not a big deal if the Pi suddenly restarts, but in our application this can be fatal, so we have chosen the BeagleBone Black single-board computer (SBC). Let’s not forget that our environment is a safety-critical one, so we need not only computing power but reliable hardware as well.

The BeagleBone Black

The BeagleBone Black (BBB) is like a Raspberry Pi, but it is designed more for embedded applications, where media capabilities are not a priority. It has 4 GB of onboard flash memory, so you don’t need an SD card. The expansion header has more than twice the pin count of a Raspberry Pi 2’s. And the Ethernet is not USB-based; it is handled directly by the processor.

Status report of the MODES3 project

We are now close to the finish line!

The distributed logic developed in Yakindu is nearly finished. We are also finishing the MQTT-based communication between the components. Validation and verification helped us find design errors in the developed protocol! IncQuery and VIATRA really helped our work. The integration testing of the distributed protocol will be conducted tomorrow, stay tuned!

The hardware system is also coming together. Many of the components have already been produced and integrated, and most of the BeagleBone devices are built into the system. Right now we have some problems caused by a short circuit, but hopefully we will solve these as well in the next few days!

Computer vision can now recognize the movement of the trains, and the EMF metamodel that feeds the VIATRA-CEP engine with complex events has been developed. Our contribution to the open-source complex event processing engine is on its way: we have used some cool formal techniques to enhance both the language and the execution. New timing extensions can be used in the language, and the semantics were changed as well. Our improvements will probably be presented at the 1st Workshop on Monitoring and Testing of Cyber-Physical Systems, where we have submitted an abstract. So hopefully we can meet at CPS Week in Vienna!

We have some problems with the real-time unit of the BeagleBone, so we now plan to use a real-time operating system on the BBBs and run the runtime monitors on the application processor.

Our LEGO robot, which helps with the transportation, is also under development. Computer vision has been tested and the controller is finished. However, we are still debugging the MQTT communication between the components.

OpenModelica models are used to simulate the movement of the robot. We can not only predict the movement of the robot, but also validate the commands we plan to send!

After this weekend of hard work, we plan to integrate the components! Follow our work on this blog, and watch the pictures and videos here!

Statecharts are everywhere! #4 – Validation & Verification

In the previous parts of the Statecharts are everywhere series we discussed a lot of things. First, we talked about what the xtUML BridgePoint editor is and how we designed the ‘safety logic’ with it. Second, we talked about the reasons why we switched from BridgePoint to YAKINDU Statechart Tools (SCT). Third, we introduced YAKINDU Statechart Tools.

Now we are going to talk about the validation features delivered with SCT and about our extensions: validation based on incremental graph matching, and verification based on the timed automaton formalism.

YAKINDU Statechart Tools (SCT) comes with a Validation framework that ensures the detection of unreachable states, dead ends and references to unknown events. These validation constraints are checked live during editing [1].

Xtext

This framework is based on Xtext’s validation features, which provide both syntactic and semantic validation. The syntactic correctness of any textual input is validated automatically by the parser, and any broken cross-references can be checked generically [2].

Semantic validation means that the developer of the grammar (which can be either textual or graphical) can write validation rules for model elements in Java. These rules are applied to the model, and if a rule is violated, an error message is shown in the editor. More on validation can be read in the documentation of Xtext.
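
To give a flavour of what such a rule looks like, here is a minimal sketch of an Xtext-style validator in Java. The State and StatechartPackage classes are hypothetical metamodel artifacts standing in for the real ones; the actual SCT validators are of course more elaborate.

    import org.eclipse.xtext.validation.AbstractDeclarativeValidator;
    import org.eclipse.xtext.validation.Check;

    public class StatechartValidator extends AbstractDeclarativeValidator {

        // Sketch of rule 1 below: a state must have a name.
        @Check
        public void checkStateHasName(State state) {
            if (state.getName() == null || state.getName().isEmpty()) {
                // Marks the name feature of the offending state in the editor.
                error("A state must have a name",
                        StatechartPackage.Literals.STATE__NAME);
            }
        }
    }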

This way it can be ensured that the model is valid, i.e. that it meets the well-formedness constraints set against it.

Built-in validation rules

After this introduction, let us see what validation rules are built into SCT [3]. They mostly derive from the semantics of (UML) statecharts, which is a well-researched area.

  1. A state must have a name.
  2. Node is not reachable.
  3. A final state should have no outgoing transition.
  4. A state should have at least one outgoing transition.
  5. Initial entry should have no incoming transition.
  6. Initial entry should have a single outgoing transition.
  7. Entries must not have more than one outgoing transition.
  8. Outgoing transitions from entries cannot have a trigger or guard.
  9. Outgoing transitions from entries can only target sibling or inner states.
  10. Exit node should have no outgoing transition.
  11. Exit node should have at least one incoming transition.
  12. Exit node in top level region not supported – use final states instead.
  13. A choice must have at least one outgoing transition.
  14. The region can’t be entered using the shallow history. Add a default entry node.
  15. The region can’t be entered using the shallow history. Add a transition from default entry to a state.
  16. The source / target states of a synchronization must be orthogonal.
  17. The source / target states of a synchronization have to be contained in the same parent state within different regions.

Some other rules are applied to the SCT model as well [4], but these do not necessarily come from statechart semantics.

  1. A choice should have at least one outgoing default transition.
  2. In/Out declarations are not allowed in internal scope.
  3. Local declarations are not allowed in interface scope.
  4. The evaluation result of a time expression must be of type integer.
  5. The evaluation result of a guard expression must be of type boolean.
  6. Target state has regions without ‘default’ entries.
  7. Region must have a ‘default’ entry.
  8. The named entry is not used by incoming transitions.
  9. The named exit is not used by outgoing transitions.
  10. The parent composite state has no ‘default’ exit transition.
  11. The left-hand side of an assignment must be a variable.
  12. Missing trigger. Transition is never taken. Use ‘oncycle’ or ‘always’ instead.

EMF IncQuery

Our friend Bence has extended the validation rules mentioned above with new ones. These rules, however, were composed using EMF-IncQuery and are applied by its Validation Framework rather than by the Xtext validation framework.

EMF-IncQuery is a framework for defining declarative graph queries over EMF models, and executing them efficiently without manual coding in an imperative programming language such as Java.

More on the EMF-IncQuery Validation Framework can be read in its documentation.

EMF-IncQuery Validation Framework

[Figure: overview of the EMF-IncQuery Validation Framework]

Bence composed the new validation rules in IncQuery’s declarative query specification language. These rules are applied to the statechart model; if a rule is violated, an error message is displayed next to the offending model element.

So how is IncQuery used for validation? Let us show you through an example.

[Figure: the two IncQuery patterns used to detect unused events]

The first pattern is responsible for returning declared events that are not used in the state machine. The validation message is defined by the Constraint annotation above the first pattern. The second pattern is an auxiliary pattern.

First of all, let us talk about how to define IncQuery patterns. Let us take a look at the first pattern. Note that the pattern name must be preceded by the “pattern” keyword. Between the parentheses we must declare which nodes of the instance model we want access to. Naturally, each declared name must appear in the pattern body and refer to a node type.

“event : EventDefinition” says that we want access to the nodes that we refer to as “event” in the pattern body, and that they are instances of “EventDefinition”. Then the pattern body must be constructed. Each line states something about the result set. “EventDefinition(event);” states that through “event” we only want to refer to nodes that are instances of “EventDefinition”. Patterns can be reused with the “find” keyword. This can be imagined as a function call: all the statements of the called pattern will be applied in the pattern we call it from. The meaning of the find keyword can be inverted by preceding it with the “neg” modifier, which forbids a match of the called pattern with the given parameterization.

The second, auxiliary pattern matches the instances of “EventDefinition” that are referred to by an instance of “FeatureCall” or “ElementReferenceExpression”; the first pattern then excludes exactly these matches with “neg find”. “FeatureCall.feature(_featureCall, event);” states that there must be a “FeatureCall” instance that has a “feature” reference to “event” (which, as stated before, is an instance of EventDefinition). The statements of a body are in a logical AND relation; an OR relation can be declared as well, using the “or” keyword.

To mark the unused events for the Yakindu user, a Constraint annotation must be defined. In the annotation, the target editor, the message, and the severity must be given, along with the pattern parameters identifying the elements we want to mark. The marked elements are those returned by the pattern the annotation is bound to.
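
Putting the pieces together, here is a hedged sketch of what the two patterns might look like. The pattern names, the editor ID, and the ElementReferenceExpression.reference statement are reconstructions based on the description above, not Bence’s original code.

    // Sketch only: names and annotation parameters are reconstructed.
    @Constraint(targetEditorId = "org.yakindu.sct.ui.editor.StatechartDiagramEditor",
                severity = "warning",
                message = "Unused event.",
                key = {event})
    pattern unusedEvent(event : EventDefinition) {
        EventDefinition(event);
        neg find usedEvent(event);
    }

    // Auxiliary pattern: events that are referred to somewhere in the model.
    pattern usedEvent(event : EventDefinition) {
        FeatureCall.feature(_featureCall, event);
    } or {
        ElementReferenceExpression.reference(_refExpr, event);
    }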

New validation rules

The high-level purpose of the validation rules is to reduce the ambiguity and nondeterminism of the design models and to avoid bad design practices and structures.

  1. The transition has the same trigger as another transition in the parallel region, but different action.
  2. This transition is in a cycle of always-triggered transitions, causing a livelock.
  3. Ambiguity. This transition is not the only default transition of the exit event.
  4. The transition has the same trigger as another transition, which is on a higher level in the state hierarchy.
  5. Same trigger used on outgoing transitions.
  6. The transition is covered by an always-triggered transition.
  7. This choice should have at least two outgoing transitions.
  8. This final state should have no outgoing transition.
  9. This exit should have no outgoing transition.
  10. This entry should have no incoming transition.
  11. This entry has more than one outgoing transition.
  12. Missing trigger. Transition is never taken. Use ‘always’ instead.
  13. This region has no entry node.
  14. This region has more than one entry node.
  15. Unreachable vertex.
  16. Unused variable.
  17. Unused event.

Verification

Verification is used to make sure that the designed system meets its requirements and works as we imagined. Although there are a lot of verification techniques, we use the timed automaton formalism and computation tree logic.

UPPAAL

We chose the timed automaton formalism because it is similar (in theory) to the formal operation of YAKINDU statecharts, and we already have experience with UPPAAL. UPPAAL is a well-known industrial tool for the formal verification of real-time systems modelled as timed automata.

UPPAAL uses a subset of computation tree logic (CTL for short). The UPPAAL query language consists of path formulae quantified over none, one, more, or all paths of the model. The specifications used in UPPAAL can be classified into reachability, safety, and liveness properties. The next figure illustrates the formulae supported by UPPAAL; filled circles represent those states for which a given state formula holds.

[Figure: the path formulae supported by UPPAAL]
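
For reference, the basic forms of the UPPAAL query language are the following (p and q denote state formulae):

  1. E<> p – reachability: some execution can reach a state satisfying p.
  2. A[] p – invariance (safety): p holds in every reachable state.
  3. E[] p – there is an execution along which p holds in every state.
  4. A<> p – liveness: every execution eventually reaches a state satisfying p.
  5. p --> q – leads-to: whenever p holds, q eventually holds; a shorthand for A[] (p imply A<> q).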

Model transformation

Besides the validation rules, Bence developed a model transformation plugin for Yakindu as well. The plugin transforms a chosen SCT model into a timed automaton and automatically generates CTL expressions for formal verification.

The model transformation plugin is continuously evolving to support all features of statecharts (e.g. choice, entry and exit nodes, parallel regions, etc.).
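
For example, deadlock freedom of the generated automaton network is expressed by the UPPAAL query A[] not deadlock, while the reachability of a given state is expressed by a query of the form E<> Section.Occupied (the process and state names in the latter are illustrative, not the identifiers the plugin actually generates).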

As an example, the design model and the formal model of the main region of the Section statechart are depicted in the next figures.

Design model:

[Figure: the design model of the main region of the Section statechart]

Formal model:

[Figure: the timed automaton generated from the Section statechart]

Finally, the formal verification result of the timed automata transformed from the Section statechart is depicted in the next figure.

[Figure: the formal verification results for the Section timed automata in UPPAAL]

Designing the PCB for the BeagleBone Black

In previous entries we already mentioned that we would use BeagleBone Black units to run our safety logic, but in my opinion these systems have a problem: they only work if you provide them with a 5 V power source. Of course, that is not a big problem if you are starting a new project based on these computers, because you can choose your power source as you like.

But in our case, we already had a 12 V power source under the hood. Furthermore, we have to use the 12 V source to feed the railway lights, so we had to find a solution. In one of my previous entries I already made a prototype that would do the trick for us. But we need to install this solution somehow on the BeagleBone, so this weekend I designed a PCB (printed circuit board) to host it.

Here is the final version:  

[Figure: the final version of the PCB design]

Disclaimer: I may call it the final version, but every evening I find something to modify. So this is maybe not the final version, but it will see no major modifications in the near future.

First try, first failure

At the beginning, I started to design our board with Eagle: this program is a lightweight PCB designer, and it is lovable for being multiplatform, so it seemed perfect for me. But as soon as I started to design anything, one problem came after another. The biggest of all: I couldn’t find any library that provided the footprint of our chosen power module, the TI LMZ22005.

Intermezzo: we chose this module because it is super easy to use in any design. You only need a few capacitors and resistors and you are done (not to mention that the values of these components are also provided by TI).

So I had to make a decision: should I go on with Eagle and design the footprint of the power module myself, which held the danger of a faulty design, or choose another program and learn how to use it? Bálint pointed out that he had heard good reviews of Altium’s CircuitMaker, and after a short test we decided to give it a try.

Can I invite you for a cup of coffee?

When you start to use a new piece of software, at first it is totally unfamiliar. You have to learn how it works, how you can accomplish what you want, and so on. For PCB design applications it is even more difficult: they have a great set of functionality and a tremendous number of preferences. That’s why, for the first couple of hours, it felt like “we were just getting to know each other”.


But in the end, I managed to design the circuit. I chose every element of it (it is a big help that you have oodles of parts from Octopart to choose from); I “just” had to route the PCB itself.

The routing

So, after the logical design, you have to place every element on the board and figure out how to make the connections between them – that is the routing. For a senior PCB designer it is not that hard to do, but for me, who had no experience at all with making PCBs or with the program, it was quite a lesson.


This work was all about patience. Routing every single wire to the right place was not that hard, just time-consuming; measuring the distances and calculating and designing the perfect layout was the hard part. And not because of the math: if you do something wrong, the board will be useless, and fixing the problems and printing it again costs a lot of money, so you kind of have only one shot. Conscious of this, I took my time, measured everything twice, rethought everything at least once, and pursued the perfect solution.


Everything that has a beginning has an end

After finishing it, I talked to people who have more experience in PCB design and took their advice, and now I am waiting for the quote, hoping that it will not drive us into bankruptcy.


Introducing our Model-based Demonstrator for Smart and Safe Systems

Hi everyone! Let me start by introducing our project called MoDeS3, which stands for Model-based Demonstrator for Smart and Safe Systems. The main goal of our IoT challenge application is to demonstrate the many cool and innovative ways in which open source modeling tools can be used for systems development in the age of Internet-of-Things.


Our case study is a railway system: users can control the trains arbitrarily as long as it is not dangerous. Accidents and dangerous situations are detected using sensors embedded in the track: they sense the passing of the trains and send this information to the controllers. It is important to note that this is only local information, so we have to ensure that it is shared between the components. We employ six BeagleBone Black (BBB) embedded computers to run the safety logic, configured as a distributed system where each BBB is responsible for a few track sections.

As you can imagine, the real software engineering challenge is how to develop the distributed safety logic. We use the open source Yakindu Statechart Tools to design the software components. This is much easier than manual programming, as the tool provides a code generator to produce C or Java code.
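
As an illustration, driving the generated Java code typically looks something like the sketch below; the class, interface, and event names are illustrative guesses, not our actual generated API.

    public class SafetyLogicSketch {
        public static void main(String[] args) {
            // Hypothetical class generated by Yakindu for a "Section" statechart.
            SectionStatemachine sm = new SectionStatemachine();
            sm.init();   // set up the internal data structures
            sm.enter();  // enter the initial state configuration
            // Raise an incoming event (hypothetical name), then let the machine react.
            sm.getSCInterface().raiseTrainDetected();
            sm.runCycle(); // evaluate triggers and fire the enabled transitions
        }
    }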

In order to supercharge the expressive power of Yakindu models, we have developed custom validation and verification rules. For this purpose, we used the open source EMF-IncQuery engine. Our IncQuery patterns can be used to analyse the well-formedness of the models, for example to check whether a statechart is deterministic and complete. This turned out to be a useful feature, as many design-time errors were found well before deployment and debugging even began!

In addition to the validators, we have also developed model transformations to generate formal models from the statechart models. These formal models, together with associated model checking tools such as UPPAAL, are used to check deadlock freedom and the reachability of the states in the model. In the future, we plan to work more in this direction, in order to automatically analyse other properties, such as fault tolerance. We developed the model transformations using the VIATRA framework.

To make distributed systems work in practice, we need communication channels. For this project we are going to use MQTT, for its simplicity, reliability, and flexibility. The open source Eclipse Paho framework helps us establish and maintain the communication between the components.
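
To give a flavour of how lightweight this is, here is a minimal sketch of a Paho-based component in Java; the broker URL, client ID, and topic names are placeholders, not our actual configuration.

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;
    import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

    public class MqttSketch {
        public static void main(String[] args) throws MqttException {
            // Connect to a (placeholder) broker with an in-memory message store.
            MqttClient client = new MqttClient(
                    "tcp://localhost:1883", "section-controller-1", new MemoryPersistence());
            client.connect();

            // Listen to the occupancy status of the neighbouring sections.
            client.subscribe("modes3/section/+/occupancy", (topic, message) ->
                    System.out.println(topic + ": " + new String(message.getPayload())));

            // Publish this section's own status.
            client.publish("modes3/section/1/occupancy",
                    new MqttMessage("OCCUPIED".getBytes()));
        }
    }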

In order to provide system-wide safety guarantees, we plan to build an additional layer on top of the reliable communication channels to facilitate runtime monitors. These smart components will evaluate the behaviour of the local components and will run locally on the PRUs, the 32-bit microcontroller-like units of the BBBs. They will analyse whether the communication works correctly and whether there is any problem within the controller itself.

To augment the local monitors, the overall system status will be monitored using computer vision techniques. For this purpose, we attached a camera to a stage above the tracks to observe the movement of the trains. The video stream is processed with OpenCV, a state-of-the-art open source computer vision library, on top of which we have implemented train recognition algorithms to detect the positions of the trains.
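
As a taste, here is a hedged sketch of colour-marker-based detection with OpenCV's Java bindings (assuming OpenCV 3); the HSV thresholds and the marker-based approach are illustrative, not our actual recognition algorithm.

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint;
    import org.opencv.core.Rect;
    import org.opencv.core.Scalar;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.videoio.VideoCapture;

    public class TrainDetectorSketch {
        static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

        public static void main(String[] args) {
            VideoCapture camera = new VideoCapture(0); // the camera above the tracks
            Mat frame = new Mat(), hsv = new Mat(), mask = new Mat();
            while (camera.read(frame)) {
                // Work in HSV space, where colour thresholding is more robust.
                Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
                // Keep only pixels matching a (hypothetical) red marker on the trains.
                Core.inRange(hsv, new Scalar(0, 120, 120), new Scalar(10, 255, 255), mask);
                List<MatOfPoint> contours = new ArrayList<>();
                Imgproc.findContours(mask, contours, new Mat(),
                        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
                for (MatOfPoint contour : contours) {
                    Rect box = Imgproc.boundingRect(contour);
                    System.out.println("train candidate at " + box.x + ", " + box.y);
                }
            }
        }
    }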

The combination of local monitoring data and computer vision data will be aggregated at the system level and processed using complex event processing (CEP). The role of this high-level monitoring technique is to integrate the multiple monitoring data sources and to make sure that, even if the distributed safety logic does not work correctly, this additional level of logic can still bring the system to a safe state. Our system-level safety framework will be built using an open source complex event processing engine called VIATRA-CEP.