A new press release reports, “Ambarella, Inc., an artificial intelligence (AI) vision silicon company, today announced that Ambarella and Amazon Web Services, Inc. (AWS) customers can now use Amazon SageMaker Neo to train machine learning (ML) models once and run them on any device equipped with an Ambarella CVflow®-powered AI vision system on chip (SoC). Until now, developers had to manually optimize ML models for devices based on Ambarella AI vision SoCs. This step could add considerable delays and errors to the application development process. Ambarella and AWS collaborated to simplify the process by integrating the Ambarella toolchain with the Amazon SageMaker Neo cloud service. Now, developers can simply bring their trained models to Amazon SageMaker Neo and automatically optimize the model for Ambarella CVflow-powered SoCs.”
The release continues, “Customers can build an ML model using MXNet, TensorFlow, PyTorch, or XGBoost and train the model using Amazon SageMaker in the cloud or on their local machine. Then, they can upload the model to their AWS account and use Amazon SageMaker Neo to optimize the model for Ambarella SoCs. They can choose CV25, CV22, or CV2 as the compilation target. Amazon SageMaker Neo compiles the trained model into an executable that is optimized for Ambarella’s CVflow neural network accelerator. The compiler applies a series of optimizations that can make the model run up to 2x faster on the Ambarella SoC. Customers can download the compiled model and deploy it to their fleet of Ambarella-equipped devices. The optimized model runs in the Amazon SageMaker Neo runtime, which is purpose-built for Ambarella SoCs and available in the Ambarella SDK. The Amazon SageMaker Neo runtime occupies less than 10% of the disk and memory footprint of TensorFlow, MXNet, or PyTorch, making it much more efficient to deploy ML models on connected cameras.”
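To make the workflow above concrete, the sketch below assembles the parameters for a SageMaker Neo compilation job using the `create_compilation_job` call in the boto3 SageMaker API. The bucket paths, job name, IAM role ARN, and input shape are placeholders; the `amba_cv2`/`amba_cv22`/`amba_cv25` target-device identifiers correspond to the CV2, CV22, and CV25 compilation targets named in the release, per AWS's documented Neo target list.

```python
def build_neo_compilation_request(job_name, role_arn, model_s3_uri,
                                  output_s3_uri, framework="TENSORFLOW",
                                  target_device="amba_cv22"):
    """Assemble parameters for sagemaker.create_compilation_job().

    target_device may be "amba_cv2", "amba_cv22", or "amba_cv25",
    matching the CV2/CV22/CV25 targets described in the article.
    All URIs and ARNs here are illustrative placeholders.
    """
    return {
        "CompilationJobName": job_name,
        "RoleArn": role_arn,
        "InputConfig": {
            # S3 location of the trained model artifact.
            "S3Uri": model_s3_uri,
            # Name and shape of the model's input tensor (NHWC here).
            "DataInputConfig": '{"data": [1, 224, 224, 3]}',
            "Framework": framework,
        },
        "OutputConfig": {
            # Where Neo writes the compiled, CVflow-optimized artifact.
            "S3OutputLocation": output_s3_uri,
            "TargetDevice": target_device,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }


params = build_neo_compilation_request(
    job_name="resnet50-cv22-demo",
    role_arn="arn:aws:iam::123456789012:role/NeoCompilationRole",
    model_s3_uri="s3://my-bucket/models/resnet50.tar.gz",
    output_s3_uri="s3://my-bucket/compiled/",
)

# With AWS credentials configured, the job would be submitted with:
#   import boto3
#   boto3.client("sagemaker").create_compilation_job(**params)
print(params["OutputConfig"]["TargetDevice"])  # → amba_cv22
```

Once the job completes, the compiled artifact in `S3OutputLocation` is what gets downloaded and deployed to the fleet of Ambarella-equipped devices, where it runs under the Neo runtime shipped with the Ambarella SDK.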
Read more at Business Wire.
Image used under license from Shutterstock.com