The Cutting Edge: Is Video the Uber-Sensor for Factory IoT?

By Will Ochandarena

When most people think of Industrial IoT use cases, they think about the insights that can be derived by monitoring thousands of little sensors.  For oil fields it may be temperature and pressure sensors, and for manufacturing lines it may be power draw or readings from a PLC.  The reason for this is fairly intuitive – environmental sensors tend to be inexpensive, they produce a manageable amount of data, and they’re (relatively) easy to write software for – whether you’re building a dashboard, setting an alert on a trigger, or trying to build a model capturing typical behavior.

In some of the newest Industrial IoT deployments, I'm seeing a new type of sensor emerge – video – and it's profoundly improving how these use cases perform.  In this article, I'll explore the impact video is having on factory IoT.

Unlike traditional environmental sensors, video is messy.  Cameras can be pricey, they produce a lot of data (~5Mbps per HD feed), and historically there hasn't been much to do with the feeds except pipe them to a screen that someone is staring at.  However, things are changing due to two factors: camera prices are coming down rapidly, as their presence in smartphones has driven mass production of HD video sensors, and advancements in Deep Learning have vastly simplified the process of having computers watch the feeds and derive insight.

Recently a company that manufactures printed circuit boards (PCBs) wanted to lower their operational costs by improving the quality of goods leaving the factory.  This improvement comes in two parts – process optimization to produce fewer defects, and early detection of defects before they leave the factory.  The former is best solved by adding some new environmental sensors to the factory floor, combining that data with the metrics being emitted by the PLCs, and using machine learning to detect and rectify kinks in the process.
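To make that first part concrete, here is a minimal sketch of detecting process kinks from joined sensor and PLC data. The metric names, values, and z-score threshold are all illustrative assumptions, not details from the deployment described above; a production system would use a richer model than a per-metric z-score.

```python
import statistics

# Hypothetical history of joined readings per time window: environmental
# sensors plus PLC metrics (names are illustrative, not from the article).
history = {
    "oven_temp_c":    [219.8, 220.1, 220.3, 219.9, 220.0, 220.2],
    "paste_pressure": [5.01, 4.98, 5.03, 5.00, 4.99, 5.02],
}

def anomalies(reading, history, z_threshold=3.0):
    """Flag any metric whose latest reading deviates more than
    z_threshold standard deviations from its historical mean."""
    flagged = []
    for name, values in history.items():
        mean = statistics.fmean(values)
        stdev = statistics.stdev(values)
        if stdev and abs(reading[name] - mean) / stdev > z_threshold:
            flagged.append(name)
    return flagged

# A reading near the historical operating envelope passes; an
# overheating oven is flagged for investigation.
print(anomalies({"oven_temp_c": 220.1, "paste_pressure": 5.00}, history))  # []
print(anomalies({"oven_temp_c": 228.0, "paste_pressure": 5.00}, history))  # ['oven_temp_c']
```

In practice the "rectify" step would feed flagged metrics back to operators or to the PLCs themselves, closing the loop on the process.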

However, PLCs and simple environmental sensors can't tell you if a particular part is bad.  For that, typically someone needs to power the board up and run a diagnostic.  Diagnostics take time to run, and they're often done at a different site, so there is huge value in detecting defects on the manufacturing line.  This company achieved that by placing a video camera at the end of the line, pointing down at the finished PCBs.  The video feed was fed into a Deep Learning model that had been trained to spot the difference between a good PCB and a defective one.  Using this model, the manufacturing process could route defective PCBs directly into the rework bin, saving valuable time and cost.
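The end-of-line inspection step can be sketched as follows. Instead of a full deep-learning model, this stub compares each frame against a "golden" reference image and flags boards whose average pixel deviation is too large; a real system would replace `pcb_is_defective` with inference from the trained model. All names, images, and thresholds here are illustrative assumptions, not details from the actual deployment.

```python
# Toy grayscale "golden" image of a known-good PCB (illustrative only).
GOLDEN = [
    [10, 10, 200, 10],
    [10, 200, 200, 10],
    [10, 10, 200, 10],
]

def pcb_is_defective(frame, golden=GOLDEN, max_mean_diff=20.0):
    """Stand-in for the trained model: returns True if the frame's mean
    per-pixel deviation from the golden reference exceeds the threshold."""
    total = sum(abs(p - g)
                for frow, grow in zip(frame, golden)
                for p, g in zip(frow, grow))
    pixels = sum(len(row) for row in golden)
    return total / pixels > max_mean_diff

def route(frame):
    """Divert defective boards to the rework bin before they ship."""
    return "rework_bin" if pcb_is_defective(frame) else "ship"

good = [row[:] for row in GOLDEN]
bad  = [[10, 10, 10, 10], [10, 10, 10, 10], [10, 10, 200, 10]]  # traces missing

print(route(good))  # ship
print(route(bad))   # rework_bin
```

The key design point is the same regardless of the model behind `pcb_is_defective`: inference happens inline on the manufacturing line, so the routing decision is made before the board ever leaves the factory.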

In my next post, I’ll give another industry example of how video is driving value in Industrial IoT.
