Bring Your Own ML model

If you already have a trained Machine Learning model or home-grown Python algorithm, there are multiple ways to integrate it with the Waylay rules engine. In all cases, you need a framework that can host your pre-trained model. This can either be Waylay itself or an external service such as AWS Sagemaker or Azure Machine Learning.

Waylay can host your model

If you do not have a serving framework to host your trained AI/ML model, we can host it for you on the Waylay platform. The assumption is that you have already trained your model offline, either on your own cluster/PC or with a cloud-based data science tool.

In this section, we are going to show how you can use a pre-trained TensorFlow model in a Waylay rule.

First, we package our model and upload it to the ML serving framework. This is done by uploading a zip file containing the TensorFlow SavedModel via our BYOML API.

You can download an example Jupyter notebook here. This example notebook contains some Python code that will zip your model and upload it directly to your Waylay environment (make sure you enter the correct configuration settings).
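The packaging step can be sketched in plain Python. The zipping logic below is standard library only; the upload function is illustrative — the exact BYOML endpoint path, form fields, and authentication scheme are assumptions, so check the example notebook for the request your environment actually expects.

```python
import io
import os
import zipfile


def zip_saved_model(model_dir: str) -> bytes:
    """Package a TensorFlow SavedModel directory as an in-memory zip archive."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for root, _, files in os.walk(model_dir):
            for name in files:
                path = os.path.join(root, name)
                # Store paths relative to the model directory so the archive
                # unpacks to saved_model.pb, variables/, etc. at the top level.
                archive.write(path, os.path.relpath(path, model_dir))
    return buffer.getvalue()


def upload_model(base_url: str, token: str, model_name: str, model_dir: str):
    """Upload the zipped SavedModel to the BYOML API.

    Endpoint path, form fields, and auth header are hypothetical here;
    the example notebook documents the real request.
    """
    import requests  # third-party: pip install requests

    response = requests.post(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {token}"},
        files={"file": ("model.zip", zip_saved_model(model_dir))},
        data={"name": model_name, "framework": "tensorflow"},
    )
    response.raise_for_status()
    return response.json()
```

Keeping the archive in memory avoids temp-file cleanup and makes the helper easy to reuse from a notebook cell.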

Next we can create a rule that calls this model. This is what the rule looks like (you can download this template here):

[image: tensorflow_demo rule]

This rule uses a new sensor, “byomlPredict”, which takes the following inputs:

  • modelName: the name of the trained ML model we are serving. Our model is called ‘demo’.
  • instances: the input data we want to feed to the ML model

The “byomlPredict” sensor will request the execution of a pretrained ML model (demo) with “instances” as input. The “instances” are pulled from the Waylay time series database (via the timeSeriesSensor) and transformed (via the scriptSensor) into a format that the ML model understands.
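The transformation step can be sketched as a small windowing function. The window length and payload shape below are illustrative — what the model actually expects depends on how it was trained, so adapt the scriptSensor logic accordingly.

```python
def to_instances(series, window=3):
    """Turn a flat list of time-series values into fixed-length input
    windows, one instance per window (a common shape for sequence models)."""
    return [series[i:i + window] for i in range(len(series) - window + 1)]


values = [1.0, 2.0, 3.0, 4.0, 5.0]  # e.g. pulled via the timeSeriesSensor
instances = to_instances(values)
# instances == [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]]
```

The resulting `instances` list is what the “byomlPredict” sensor would pass to the model.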

The output of the “byomlPredict” sensor is a series of predicted values and is used together with the observed values (retrieved via another timeSeriesSensor) by another “scriptSensor” that calculates the Root Mean Square Error (RMSE) between the observed values and the predicted values.
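The RMSE computed by that final scriptSensor is a standard formula; a minimal standalone version looks like this:

```python
import math


def rmse(observed, predicted):
    """Root Mean Square Error between two equal-length series:
    sqrt(mean((observed - predicted)^2))."""
    assert len(observed) == len(predicted), "series must have equal length"
    squared_errors = ((o - p) ** 2 for o, p in zip(observed, predicted))
    return math.sqrt(sum(squared_errors) / len(observed))


rmse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
```

A rising RMSE between observed and predicted values is a typical trigger condition for the rest of the rule (e.g. raising an alarm on model drift or anomalous behavior).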

For more examples, check out our GitHub repository. As most models are trained on normalized data, we also added some examples on how to upload the normalization along with the model itself.
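If the normalization is not baked into the uploaded model, the same scaling must be applied to the instances at inference time. A minimal sketch of per-feature standardization, assuming the training-time mean and standard deviation were saved alongside the model:

```python
def normalize(instances, mean, std):
    """Apply per-feature standardization (x - mean) / std to each instance,
    using the statistics computed on the training data."""
    return [
        [(x, m, s) and (x - m) / s for x, m, s in zip(row, mean, std)]
        for row in instances
    ]


# Hypothetical training-time statistics for a two-feature model.
norm = normalize([[10.0, 200.0]], mean=[8.0, 150.0], std=[2.0, 50.0])
# norm == [[1.0, 1.0]]
```

Uploading the normalization with the model (as in the repository examples) avoids having to duplicate these statistics in every rule that calls it.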

AWS Sagemaker

If you use AWS Sagemaker to host your trained AI/ML model, then you can call an AWS Sagemaker endpoint in a Waylay rule.

For example, the rule below uses the Waylay sensor “Sagemaker”. You can download this rule from here.

[image: Sagemaker rule]

The “Sagemaker” sensor takes the following inputs:

  • endpoint: the name of your Sagemaker endpoint
  • observations: the input data we want to feed to the ML model
  • contentType: this is typically ‘application/json’

Note: this sensor assumes your AWS Sagemaker configuration is stored as a JSON object in ‘Global Settings’ or ‘Vault’. This configuration needs to contain your AWS key, secret, region, and the Sagemaker execution role.

For example: "sagemaker_config": {"key": "XXX", "secret": "YYY", "region": "eu-west-1", "role": "arn:aws:iam::886760376729:role/service-role/AmazonSageMaker-ExecutionRole-20191230T103256"}

The “Sagemaker” sensor will request the execution of a pretrained ML model with “observations” as input. The “observations” are pulled from the time series database (via the timeSeriesSensor) and transformed (via a scriptSensor) into a format that the Sagemaker ML model understands.
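Under the hood, calling a Sagemaker endpoint boils down to a single `invoke_endpoint` request. The sketch below shows the equivalent call with boto3; the `{"instances": ...}` wrapper is a common convention for TensorFlow Serving models on Sagemaker and is an assumption here — adjust it to your model's input schema.

```python
import json


def build_payload(observations):
    """Serialize observations into the JSON body sent to the endpoint.

    The {"instances": ...} wrapper is illustrative; your model may
    expect a different schema (CSV, a bare array, etc.).
    """
    return json.dumps({"instances": observations})


def invoke_sagemaker(endpoint, observations, region="eu-west-1"):
    """Call a Sagemaker endpoint directly (requires AWS credentials)."""
    import boto3  # third-party: pip install boto3

    client = boto3.client("sagemaker-runtime", region_name=region)
    response = client.invoke_endpoint(
        EndpointName=endpoint,
        ContentType="application/json",
        Body=build_payload(observations),
    )
    return json.loads(response["Body"].read())
```

The “Sagemaker” sensor performs the same request for you, taking the key, secret, region, and role from the stored `sagemaker_config` object.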

The output of the “Sagemaker” sensor is a series of predicted values and is used together with the observed values (retrieved via another timeSeriesSensor) by another “scriptSensor” that calculates the Root Mean Square Error (RMSE) between the observed values and the predicted values.

Azure Machine Learning

If you use Azure Machine Learning to host your trained AI/ML model, then you can call an AzureML endpoint in a Waylay rule.

Similar to the use case above, you can build a rule that uses the Waylay sensor “AzureML”.