Looking down the industrial data pipeline

December 16, 2017

Industrial companies in 2018 will find that wrangling real-time data requires a game plan for data preparation and transformation. Sensors on industrial equipment are relaying data at rates that were unheard of just a few years ago. That means finding the right “data pipeline” to channel those streaming data sets is crucial to managing a large-scale advanced analytics program.

Though each data pipeline designed for unstructured data will need some customization depending on the source and format, data scientists will be looking for these qualities in a pipeline process:

  1. Cleaning and indexing on ingest: When the data comes in, advanced scripting processes it efficiently and prepares it for analytics.
  2. Application of algorithms: For industrial equipment data, you might apply prepackaged Condition Indicators – a set of signal processing and statistical steps aimed at extracting a representative number – that can predict failures.
  3. Monitoring dashboard: A view that tracks the data processing and can raise warnings when anomalies are detected.
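The three stages above can be sketched in a few lines of Python. This is an illustrative toy, not a production pipeline: the function names, the RMS-based condition indicator, and the anomaly threshold are all assumptions chosen to show the shape of the flow.

```python
def clean_and_index(raw_readings):
    """Stage 1: drop malformed readings and index the rest by timestamp."""
    cleaned = {}
    for r in raw_readings:
        if r.get("timestamp") is not None and isinstance(r.get("value"), (int, float)):
            cleaned[r["timestamp"]] = r["value"]
    return cleaned

def condition_indicator(indexed):
    """Stage 2: reduce a window of readings to one representative number
    (here, the RMS of the signal, a common vibration indicator)."""
    values = list(indexed.values())
    return (sum(v * v for v in values) / len(values)) ** 0.5

def check_anomaly(indicator, threshold=10.0):
    """Stage 3: flag the window so a dashboard can raise a warning.
    The threshold is a placeholder; in practice it comes from baselining."""
    return indicator > threshold

raw = [
    {"timestamp": 0, "value": 1.0},
    {"timestamp": 1, "value": "bad"},  # malformed reading, dropped on ingest
    {"timestamp": 2, "value": 3.0},
]
indexed = clean_and_index(raw)
ci = condition_indicator(indexed)
print(len(indexed), round(ci, 3), check_anomaly(ci))
```

In a real deployment each stage would run continuously over streaming windows, but the division of labor is the same: clean on ingest, distill to an indicator, then monitor the indicator.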

Data pipelines set the table for a number of complex solutions for industrial companies. For example, a wind-farm operator might want to design a condition-based maintenance schedule for 5,000 wind turbines, servicing only the units whose data indicates a likely breakdown within six months. Focusing resources on the problem turbines rather than doing blind maintenance on the whole fleet might save that operator tens of millions of dollars over time.
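That scheduling decision reduces to a simple filter-and-sort over the fleet's predicted failure horizons. The sketch below is hypothetical: the field names and the month figures are made up, and in practice the predictions would come from the condition indicators upstream.

```python
def units_to_service(fleet, horizon_months=6):
    """Return IDs of units predicted to fail within the horizon,
    most urgent first. 'predicted_failure_months' is an assumed field."""
    due = [u for u in fleet if u["predicted_failure_months"] <= horizon_months]
    return [u["id"] for u in sorted(due, key=lambda u: u["predicted_failure_months"])]

fleet = [
    {"id": "WT-001", "predicted_failure_months": 3},
    {"id": "WT-002", "predicted_failure_months": 18},
    {"id": "WT-003", "predicted_failure_months": 5},
]
print(units_to_service(fleet))  # only the two units due within six months
```

Only the units inside the six-month window get a service visit; the rest keep running, which is where the savings over blind calendar-based maintenance come from.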