Chooch AI Launches Solution For Embedded Computer Vision AI Applications


According to a recent press release, “Chooch AI (http://Chooch.ai), the leader in visual training of artificial intelligence, today, in conjunction with TechCrunch Disrupt, officially announced the launch of Chooch Edge AI, a hardware-agnostic edge computer vision service that features the ability for AI to detect and classify events in local video streams from any type of camera. Chooch Edge AI is an executable package with trained neural networks that is easily installed on devices. The Chooch AI Dashboard is used to create custom packages, with set triggers that cause select events detected by the AI solution, such as faces, specific visual objects, actions, words, or any other class trained on the Dashboard.”

The release goes on, “The Chooch Edge AI application features a straightforward and simple setup. It is loaded onto a device, which must be connected to a network or WiFi-enabled camera. Once the camera is streaming, Chooch records events in a log with images and video for a set number of seconds. Data can be uploaded to the cloud for review asynchronously, allowing Chooch Edge AI to operate independently of cloud connectivity. Minimum device requirements include Linux, 1GB RAM, and an ARM 32, ARM 64, or Intel x86-64 chip. Example uses of this complete AI solution include facial authorization within security systems, event monitoring in remote locations, industrial IoT operations, and autonomous robotics, all with zero lag time and industry-leading accuracy. The addition of Chooch Edge AI expands the offering of the Chooch SDK, API, and Dashboard, broadening the applications for Visual AI to a diverse class of applications.”

Read more at Globe Newswire.

Image used under license from Shutterstock.com
