7 things to know before using AWS Panorama

Machine learning is becoming essential for many companies that want to use it to optimize their operations and build new services. One challenge is that you sometimes need to deploy a model in an environment with a limited internet connection and no operators to manage the ML infrastructure. In that case, you need to run machine learning on the edge and have a way to deploy and monitor your models and applications remotely.

AWS Panorama is a machine learning appliance from AWS with a software development kit and a corresponding AWS service that manages devices and applications. It is focused on running computer vision models on video streams. I received an AWS Panorama device from AWS for review and deployed a couple of applications, so in this blog post I want to cover the main things you need to know about AWS Panorama and whether it could apply to your case.

Quick dos and don’ts

  • The device outputs video streams from IP cameras with custom visualizations on top
  • The device doesn’t save video streams or images to the cloud unless you implement it in the code (see the sketch after this list)
  • The device doesn’t have a local state (e.g. a DB), so events can only be recorded via the cloud or an on-premise server
  • The device can’t be SSH’ed into or otherwise entered remotely, so you can only troubleshoot via telemetry or CloudWatch logs
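For example, if you do want to keep certain frames, your application code has to ship them to the cloud itself. Here is a minimal sketch using boto3; the bucket name, key format, and the OpenCV dependency are assumptions, and the application's runtime role must allow s3:PutObject:

```python
import time

import boto3
import cv2  # OpenCV has to be added to your application container's dependencies

s3 = boto3.client("s3")


def upload_frame(image, bucket="my-panorama-frames"):
    """Encode a single frame as JPEG and upload it to S3 (bucket/key are placeholders)."""
    ok, jpeg = cv2.imencode(".jpg", image)
    if not ok:
        return
    key = f"frames/{int(time.time() * 1000)}.jpg"  # hypothetical key format
    s3.put_object(Bucket=bucket, Key=key, Body=jpeg.tobytes(), ContentType="image/jpeg")
```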

Application architecture

For example, the simplest application would have the following architecture:

Nodes

  • Camera stream
  • Code
  • Model
  • Output

Edges

  • Camera stream <> Code
  • Code <> Output

Notice that the model is not connected to any other node via an edge; instead, it is called directly by the “Code” node.
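To make this concrete, here is a rough sketch of what the node graph of an application manifest (graph.json) looks like, written out as a Python dict. The node names and interface references are illustrative placeholders; the real manifest is generated and maintained by panorama-cli and also declares packages and interfaces:

```python
# Simplified sketch of the node graph in a Panorama manifest (graph.json).
# Names and interface references are illustrative, not a complete manifest.
node_graph = {
    "nodeGraph": {
        "envelopeVersion": "2021-01-01",
        "nodes": [
            {"name": "camera_node", "interface": "<camera_package>.interface"},
            {"name": "code_node", "interface": "<code_package>.interface"},
            {"name": "model_node", "interface": "<model_package>.interface"},
            {"name": "output_node", "interface": "<hdmi_output>.interface"},
        ],
        "edges": [
            # The model node has no edges; the code node calls it through the SDK.
            {"producer": "camera_node.video_out", "consumer": "code_node.video_in"},
            {"producer": "code_node.video_out", "consumer": "output_node.video_in"},
        ],
    }
}
```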

Code artifacts

The code container is responsible for the following (sketched below):

  • Processing each image from the video stream
  • Preprocessing the image before running the model
  • Running the model on the image
  • Annotating the image before sending it to the output
  • Producing CloudWatch logs and metrics

The container is packaged using panorama-cli and is built on the base Panorama Docker image public.ecr.aws/panorama/panorama-application.
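Here is a rough sketch of what the code node looks like, following the pattern used in the AWS Panorama samples. It is not a complete application: the node names (video_in, video_out, model_node) and the model input name must match your manifest and model, and the pre/postprocessing is a placeholder:

```python
import panoramasdk


class Application(panoramasdk.node):
    """Reads frames, runs the model node on each one, and annotates the output stream."""

    def process_streams(self):
        streams = self.inputs.video_in.get()      # frames from the camera node(s)
        for stream in streams:
            image = stream.image                  # current frame as a numpy array
            model_input = self.preprocess(image)  # placeholder preprocessing
            # "model_node" and the "data" input name are assumptions from the manifest/model.
            result = self.call({"data": model_input}, "model_node")
            # Placeholder annotation; postprocessing of `result` is omitted here.
            stream.add_label("my-app", 0.1, 0.1)  # label at normalized coordinates
        self.outputs.video_out.put(streams)       # send annotated frames to the output node

    def preprocess(self, image):
        # Resize / normalize the frame to the model's expected input shape here.
        return image


def main():
    app = Application()
    while True:
        app.process_streams()


if __name__ == "__main__":
    main()
```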

Model artifacts

Running locally

This notebook provides a way to do the following:

  • check how your model is compiled with SageMaker Neo (using a SageMaker compilation job; see the sketch below)
  • check how your graph and code run on a sample video
  • check how your model performs on the video and how your code annotates it

All three workflows above can be completed without an AWS Panorama device. If you do have a device that is set up, the notebook also shows how to deploy the application and the model programmatically.
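For reference, the compilation step boils down to a SageMaker Neo compilation job. A hedged sketch with boto3; the job name, role, S3 paths, framework, input shape, and target platform are all placeholders you would adapt to your model:

```python
import boto3

sagemaker = boto3.client("sagemaker")

# All names, paths, and shapes below are placeholders.
sagemaker.create_compilation_job(
    CompilationJobName="my-panorama-model-compilation",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerNeoRole",
    InputConfig={
        "S3Uri": "s3://my-bucket/models/model.tar.gz",
        "DataInputConfig": '{"data": [1, 3, 224, 224]}',  # model input name and shape
        "Framework": "PYTORCH",
    },
    OutputConfig={
        "S3OutputLocation": "s3://my-bucket/models/compiled/",
        # The Panorama appliance is an NVIDIA-accelerated ARM64 Linux device.
        "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"},
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)
```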

Deploying the application

To deploy an application you need to:

  • have a defined manifest, so keep it in mind before deployment; you can test your manifest using the notebooks from the section above
  • use panorama-cli to build the container and package the application. panorama-cli is a command-line interface for managing AWS Panorama applications and can be installed via pip (https://pypi.org/project/panoramacli/)

In case you want to deploy sample applications:

Also, keep in mind that if you want to update the model or the code container, you need to update the manifest and redeploy the application, either programmatically or via the console.
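Programmatic deployment goes through the AWS Panorama API. A minimal sketch with boto3; the manifest path, application name, and device ID are placeholders:

```python
import boto3

panorama = boto3.client("panorama")

# Placeholders: point these at your packaged application's manifest and your appliance.
with open("graphs/my_app/graph.json") as f:
    manifest = f.read()

response = panorama.create_application_instance(
    Name="my-application",
    ManifestPayload={"PayloadData": manifest},
    DefaultRuntimeContextDevice="device-1234567890abcdef0",  # your appliance's device ID
)
print(response["ApplicationInstanceId"])
```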

Monitoring

AWS Panorama reports logs to CloudWatch for each application and device. Application logs contain logs for each node, and device logs contain system-level logs.

More details about the logs are described here: https://docs.aws.amazon.com/panorama/latest/dev/monitoring-logging.html
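If you want to alert on application-level events (for example, detection counts) without access to the device, you can publish custom metrics from the code node. A small sketch with boto3; the namespace and metric name are hypothetical, and the application's runtime role needs cloudwatch:PutMetricData:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")


def report_detections(count):
    """Publish a custom metric so it can be monitored and alarmed on remotely."""
    cloudwatch.put_metric_data(
        Namespace="MyPanoramaApp",               # hypothetical namespace
        MetricData=[{
            "MetricName": "DetectionsPerFrame",  # hypothetical metric name
            "Value": float(count),
            "Unit": "Count",
        }],
    )
```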

Summary

I'm a senior machine learning engineer at Instrumental, where I work on analytical models for the manufacturing industry, and an AWS Machine Learning Hero.