7 things to know before using AWS Panorama

Quick dos and don’ts

  • The device only analyzes video streams from IP cameras on the local network
  • The device outputs video streams from IP cameras with custom visualizations rendered on top
  • The device doesn’t save video streams or images to the cloud unless you implement that in your code
  • The device has no local state (e.g. a database), so events can only be recorded via the cloud or an on-premise server
  • The device can’t be accessed via SSH or any other remote shell, so you troubleshoot via telemetry and CloudWatch logs

Application architecture

An AWS Panorama application is defined as a graph of nodes and edges (this graph is called the manifest). Nodes are models, code, camera streams, output, and parameters. Each node has inputs and/or outputs depending on its type (e.g. a code node usually takes video in and puts video out). Edges connect the nodes, for example:

  • Camera <> Code
  • Code <> Model
  • Code <> Output
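To make the graph concrete, here is a heavily trimmed sketch of what a manifest (graph.json) can look like. The package, node, and interface names below are made-up placeholders for illustration; see the AWS Panorama documentation for the exact schema:

```json
{
  "nodeGraph": {
    "envelopeVersion": "2021-01-01",
    "packages": [
      { "name": "123456789012::my_code_package", "version": "1.0" }
    ],
    "nodes": [
      { "name": "camera_input", "interface": "camera_interface_placeholder" },
      { "name": "code_node", "interface": "123456789012::my_code_package.interface" },
      { "name": "display_output", "interface": "hdmi_interface_placeholder" }
    ],
    "edges": [
      { "producer": "camera_input.video_out", "consumer": "code_node.video_in" },
      { "producer": "code_node.video_out", "consumer": "display_output.video_in" }
    ]
  }
}
```

Note how each edge connects a producer output of one node to a consumer input of another — this is exactly the Camera <> Code <> Output chain described above.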

Code artifacts

A code artifact is defined as a Docker container image. Code on AWS Panorama manages the following:

  • Preprocessing the image before running the model
  • Running the model on the image
  • Annotating the image before sending it to the output
  • Producing CloudWatch logs and metrics
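As an illustration of the preprocessing step, here is a minimal NumPy-only sketch (nearest-neighbor resize, ImageNet-style normalization, HWC to NCHW). It is a simplified stand-in: real Panorama applications typically use OpenCV for resizing and the panoramasdk API for frame access, and the mean/std values are assumptions — use whatever your model was trained with:

```python
import numpy as np

def preprocess(frame: np.ndarray, size: int = 224) -> np.ndarray:
    """Resize an HxWx3 uint8 frame and normalize it into a 1x3xSxS float32 batch."""
    h, w, _ = frame.shape
    # Nearest-neighbor resize via index sampling (cv2.resize would be used in practice).
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = frame[rows][:, cols]
    # ImageNet mean/std normalization (an assumption; use your model's actual stats).
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = (resized.astype(np.float32) / 255.0 - mean) / std
    # HWC -> CHW, then add a batch dimension -> NCHW.
    return np.ascontiguousarray(x.transpose(2, 0, 1))[np.newaxis, ...]

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
batch = preprocess(frame)
print(batch.shape)  # (1, 3, 224, 224)
```

The output shape matches the input shape the compiled model expects, which is also the shape you declare when compiling with SageMaker Neo (see the next section).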

Model artifacts

AWS Panorama supports the most popular frameworks (Keras/MXNet/ONNX/TensorFlow/PyTorch) but requires the format expected by SageMaker Neo (see https://docs.aws.amazon.com/sagemaker/latest/dg/neo-compilation-preparing-model.html). That’s because AWS Panorama uses Amazon SageMaker Neo to compile the model before sending it to the device. The model is typically run on a single image input, but models with several inputs also work.
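SageMaker Neo expects the trained model packaged as a gzipped tarball (the exact file layout inside depends on the framework — see the link above). A minimal sketch of producing such an archive, where the model file name is a placeholder standing in for your real exported model:

```python
import tarfile
from pathlib import Path

# Placeholder for an exported model file; in practice this would be e.g. a
# TorchScript file or a TensorFlow SavedModel, per the Neo packaging docs.
model_file = Path("model.pth")
model_file.write_bytes(b"placeholder weights")

# Neo expects the model artifacts inside a model.tar.gz archive.
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add(model_file, arcname=model_file.name)

with tarfile.open("model.tar.gz", "r:gz") as tar:
    print(tar.getnames())  # ['model.pth']
```

Alongside the archive, Neo compilation also needs the model's input name and shape (e.g. a 1x3x224x224 image tensor), which must match what your preprocessing code produces.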

Running locally

The best way to run an application locally is to use the AWS Panorama Samples (https://github.com/aws-samples/aws-panorama-samples). The samples repo contains notebooks (for example https://github.com/aws-samples/aws-panorama-samples/blob/main/samples/people_counter/people_counter_tutorial.ipynb) which can be run in the cloud or locally (the latter requires setting up a Docker environment). Running the samples lets you:

  • Check how your graph and code would run on a sample video
  • Check how your model would perform on video and how your code would annotate it

Deploying the application

There are two ways to deploy your application — via the web console or programmatically. Both require you to:

  • Use panorama-cli for building the container and packaging the application. panorama-cli is a command line interface for managing AWS Panorama applications and can be installed via pip (https://pypi.org/project/panoramacli/)
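A typical packaging flow with panorama-cli looks roughly like the sketch below. The asset and package names are placeholders, and the exact flags may differ between versions — check `panorama-cli --help` and the AWS Panorama developer guide:

```shell
pip install panoramacli

# Pull package skeletons and assets for an existing application definition.
panorama-cli import-application

# Build the code container image for the device.
panorama-cli build-container \
    --container-asset-name code_asset \
    --package-path packages/123456789012-my_code_package-1.0

# Upload the packaged assets and register the application version.
panorama-cli package-application
```

After packaging, the application can be deployed to a device from the web console or via the AWS API.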


Monitoring

An AWS Panorama device can be monitored using the web console and CloudWatch logs. The former is useful for making sure that your device and video stream are online, while the latter helps you verify that your application is working correctly.
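Device status and logs can also be checked from the AWS CLI. The device ID and log group name below are placeholders — Panorama writes application logs to CloudWatch log groups under a per-device path, which you can look up in the console:

```shell
# Check that the device is online.
aws panorama describe-device --device-id device-abc123

# Tail application logs (log group name is a placeholder).
aws logs tail /aws/panorama/devices/device-abc123 --follow
```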


Summary

AWS Panorama has great hardware and integration with AWS, which handles model and code deployment as well as monitoring. To understand whether it fits your use case, I would recommend trying out the samples from https://github.com/aws-samples/aws-panorama-samples and running them in local dev mode or in the cloud. That will give you a lot of insight into how it can work for you.


