Nostradamus cast his divinations five centuries ago, but the information age hasn’t dimmed humans’ love of prediction. In an industrial world where lost seconds or degraded efficiency can cost companies billions of dollars, predictions become more than a flight of fancy. They’re the essence of a working industrial system, especially one that relies on thousands of sensors streaming real-time data back to a data center in the cloud.
Predictive analytics can be useful in a variety of settings: police may try to predict crime waves or the next terrorist attack, the IoT world may want to use real-time data to judge marketing effectiveness or usage in the field, and the industrial world wants “condition-based maintenance” regimes that tell companies exactly when a part is likely to fail.
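At its simplest, a condition-based maintenance check compares a part’s latest sensor reading against a baseline learned from its own recent history, and flags the part when the reading drifts too far. The sketch below illustrates the idea; the function name, window size, threshold, and vibration values are all hypothetical, not drawn from any particular platform.

```python
from statistics import mean, stdev

def needs_maintenance(readings, window=20, sigma=3.0):
    """Flag a part when its newest reading drifts more than `sigma`
    standard deviations away from the trailing baseline window."""
    if len(readings) <= window:
        return False  # not enough history to judge yet
    baseline = readings[-window - 1:-1]  # the window before the latest reading
    mu, sd = mean(baseline), stdev(baseline)
    return abs(readings[-1] - mu) > sigma * sd

# Hypothetical pump vibration readings (mm/s): stable, then a spike.
history = [2.0, 2.1, 1.9, 2.0, 2.2, 2.0, 1.9, 2.1] * 3 + [9.5]
print(needs_maintenance(history))  # → True
```

Real condition-based maintenance systems use far richer models, but even this toy version shows why the data pipeline matters: the check is only possible if each part’s history is accessible in one place.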
Predictions, of course, are only as good as your information, and that information is only as good as its accessibility. Companies have rushed in recent years to install hundreds of sensors on equipment rolling off the line, but they did it without thinking about where the data from all those sensors would actually go. When that data goes straight to a data repository without any data access platform in place to aggregate, thread, and find data for advanced analytics, it might as well have been routed to a black hole, because you’re unlikely to see it again.
Very few out-of-the-box analytics packages can really put a dent in specific Industrial Internet data problems; customization of apps and algorithms is inevitable. But it’s essential that the data architecture supporting those apps have a single, unified namespace and be able to grow at scale with the enterprise.
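One common way to give sensor data a unified namespace is a hierarchical path key (site/line/machine/metric), so analytics code can aggregate at any level without per-source plumbing. The paths and values below are a hypothetical sketch of that idea, not the scheme of any specific product.

```python
from collections import defaultdict

# Hypothetical readings keyed by a hierarchical namespace path.
readings = [
    ("plant1/line2/pump7/vibration", 2.1),
    ("plant1/line2/pump7/vibration", 2.3),
    ("plant1/line2/pump8/vibration", 8.9),
    ("plant1/line5/fan1/temperature", 71.0),
]

def aggregate(readings, depth):
    """Average readings grouped by the first `depth` path segments."""
    groups = defaultdict(list)
    for path, value in readings:
        key = "/".join(path.split("/")[:depth])
        groups[key].append(value)
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Roll up to the line level: one average per site/line prefix.
print(aggregate(readings, depth=2))
```

Because every source shares the same naming convention, the same three-line aggregation works whether the query spans one pump or an entire plant, which is exactly the property that lets the architecture grow with the enterprise.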