Getting Started - Rule engine

The Waylay engine is a rule engine that separates information, control and decision flow using the smart agent concept, in which sensors, logic and actuators are separate entities of the rule engine.

Waylay lambda functions (λ) are defined as either sensors or actuators. Sensors are “typed” λ functions, which can return a state, data or both.
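As an illustration, a “typed” sensor can be thought of as a function that returns one of its declared states together with data. This is a minimal sketch, not actual Waylay code; the names (`temperature_sensor`, the `"Above"`/`"Below"` states) are hypothetical:

```python
# Hypothetical sketch of a "typed" sensor lambda: it returns a state
# (one of its declared discrete states) plus arbitrary data.
def temperature_sensor(settings):
    # `settings` stands in for the sensor's input arguments.
    reading = settings.get("reading", 21.0)
    threshold = settings.get("threshold", 20.0)
    state = "Above" if reading > threshold else "Below"
    return {"state": state, "data": {"temperature": reading}}

result = temperature_sensor({"reading": 25.0, "threshold": 20.0})
```

An actuator, by contrast, would be a plain λ function invoked for its side effect, without returning a state.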

Smart agent concept

Any time sensors are executed, their results (both the sensor's data and the sensor's state) are fed into the rule engine's inference, which may result in the execution of actuators (other λ functions) or of other sensors.

Logic creation

Rules are created using a visual programming environment with drag-and-drop functionality, see the screenshot below. Once rules have been created, they are saved as JSON files. The visual programming environment allows the developer to make use of the library of sensors and actuators, logical gates as well as mathematical function blocks.
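Since rules are saved as JSON files, a saved rule might look roughly like the sketch below. The field names here are a hypothetical illustration only, not the actual Waylay schema; the visual designer produces the real JSON for you:

```json
{
  "sensors": [
    { "label": "temperatureSensor", "properties": { "threshold": 20 } }
  ],
  "actuators": [
    { "label": "sendAlert", "properties": { "channel": "mail" } }
  ],
  "relations": [
    { "from": "temperatureSensor", "state": "Above", "to": "sendAlert" }
  ]
}
```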

Designer view

Tasks

In the Waylay terminology, tasks are instantiated rules. There are two ways tasks can be instantiated:

  • one-off tasks, where sensors, actuators, logic and task settings are configured at the time the task is instantiated
  • tasks instantiated from templates, where task creation is based on the template (which describes sensors, actuators and logic)

The task also defines the “master clock” of the rule, such as the polling frequency, cron expressions etc. (these settings can be inherited by sensors as well). Before any λ function (sensor or actuator) is invoked, the engine makes a copy of the task context, providing, if required, the results and data of all sensors executed up to that moment to the calling λ function.
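The context-copy behaviour can be sketched as follows. This is a minimal illustration of the idea, not the actual engine code; the `Task` class and its methods are hypothetical:

```python
import copy

class Task:
    """Minimal sketch: a task keeps a context with all sensor results."""

    def __init__(self):
        self.context = {}  # node name -> {"state": ..., "data": ...}

    def invoke(self, node, fn):
        # The engine hands each lambda function a *copy* of the task
        # context, so the function sees the results of all sensors executed
        # so far without being able to mutate the shared context.
        snapshot = copy.deepcopy(self.context)
        result = fn(snapshot)
        self.context[node] = result
        return result

task = Task()
task.invoke("sensor_1", lambda ctx: {"state": "OK", "data": {"seen": list(ctx)}})
out = task.invoke("sensor_2", lambda ctx: {"state": "OK", "data": {"seen": list(ctx)}})
```

Here `sensor_2` sees `sensor_1`'s result in its context snapshot, while `sensor_1`, which ran first, saw an empty context.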

task

Let’s take a closer look at the picture above. Blue arrows mark the information flow, red arrows the control flow, and green arrows the decisions. Two sensors are shown on the left side of the picture, and on the right we find two actuators. Every sensor is composed of three parts:

  • Node settings, which define the control flow (when the function is executed)
  • Sensor settings - input arguments for the function
  • The λ function code itself, which returns states and data

In the picture below we see the sensor settings and the λ function code: task

Control flow

Sensor, a π›Œ function, can be invoked:

  • Via a polling frequency, a cron expression, or one time (defined either on the node level, or via inheritance, using the “master clock” of the task settings).
  • On new data arriving. If the node can be addressed via a resource, e.g. if the node is labeled with the resource testresource, the function will be called any time data arrives for that resource. The payload which triggered the sensor is available to the calling function (blue arrow).
  • As the result of other function calls (sensors), via state transitions of the attached sensor (depicted as the red arrow that goes from the top sensor to the other one).
  • As the outcome of multiple function executions, via inference (via logical gates).

And of course, all of these conditions can be combined if needed!
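The trigger conditions above can be sketched as a single dispatch check. This is a hypothetical illustration of the idea, not the engine's actual logic; all field and event names are invented:

```python
# Hypothetical sketch: the engine checks the configured trigger
# conditions and invokes the sensor when any of them fires.
def should_invoke(node, event):
    if event["type"] == "tick" and node.get("pollingInterval"):
        return True   # polling / cron trigger
    if event["type"] == "data" and event.get("resource") == node.get("resource"):
        return True   # new data arriving for the node's resource
    if event["type"] == "stateChange" and event.get("source") in node.get("watchedNodes", []):
        return True   # state transition of an attached sensor
    return False

node = {"resource": "testresource", "watchedNodes": ["sensor_1"]}
```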

Node settings are defined the moment the rule is configured: task

In this example, we decided to invoke the sensor only when data arrives for testresource, and we have also configured an eviction time of 10 seconds. This way we decide how long each sensor's information remains valid. That is also an elegant way of merging different event streams where information is valid only for a short period of time, which is a very important aspect to take into consideration when making decisions, as is explained here.
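Eviction can be pictured as a time-bounded store of sensor results: a result that is older than the eviction time is simply no longer there when a decision is made. The class below is a hypothetical sketch of that idea, not Waylay code:

```python
import time

class EvictingStore:
    """Minimal sketch of eviction: a sensor result is only valid for
    `eviction_seconds` after it was produced."""

    def __init__(self, eviction_seconds):
        self.eviction = eviction_seconds
        self.entries = {}  # node -> (timestamp, result)

    def put(self, node, result, now=None):
        self.entries[node] = (time.time() if now is None else now, result)

    def get(self, node, now=None):
        now = time.time() if now is None else now
        ts, result = self.entries.get(node, (None, None))
        if ts is None or now - ts > self.eviction:
            return None  # evicted: the information is no longer valid
        return result

store = EvictingStore(eviction_seconds=10)
store.put("sensor_1", {"state": "OK"}, now=0)
```

Reading the same entry 5 seconds later still returns the result; 11 seconds later it is gone, so a slow event stream can never contribute stale information to a decision.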

Information flow

Sensors can use the following as input arguments:

  • Input settings (e.g. city, database record etc.)
  • Task context (result of any other sensor)
  • Runtime data (the data which triggered the sensor's execution)
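The three kinds of input can be shown side by side in one sensor signature. This is an illustrative sketch only; the function name, settings and payload are hypothetical:

```python
# Hypothetical sketch: the three kinds of input a sensor can use.
def weather_sensor(settings, task_context, runtime_data):
    city = settings["city"]                      # input settings (e.g. city)
    previous = task_context.get("other_sensor")  # result of another sensor
    payload = runtime_data                       # data that triggered execution
    return {"state": "Triggered",
            "data": {"city": city,
                     "had_previous": previous is not None,
                     "payload": payload}}

out = weather_sensor({"city": "Ghent"},
                     {"other_sensor": {"state": "OK"}},
                     {"temperature": 18})
```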

Decisions

Decisions are modelled by attaching one or more actuators to a sensor state (or state transitions), or to a combination of multiple nodes/states. You can find more about rules on this link
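Attaching an actuator to a sensor state can be sketched as a simple binding table that the engine consults whenever a sensor reaches a state. All names below are hypothetical, for illustration only:

```python
# Hypothetical sketch: actuators attached to sensor states; the engine
# fires every actuator bound to the (node, state) pair that was reached.
bindings = []  # (node, state, actuator) triples

def attach(node, state, actuator):
    bindings.append((node, state, actuator))

def on_state(node, state):
    fired = []
    for n, s, actuator in bindings:
        if n == node and s == state:
            fired.append(actuator(node, state))
    return fired

attach("temperature_sensor", "Above", lambda n, s: f"alert: {n} is {s}")
result = on_state("temperature_sensor", "Above")
```

A combination of multiple nodes/states would, in the same spirit, bind the actuator to a logical-gate node whose state is inferred from several sensors.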