Industrial Analytics

What is Industrial Analytics? | What are Peaxy’s solutions? | Industrial Analytics Use Cases | Powered by Aureum

What is Industrial Analytics?

We no longer live in an industrial world that’s all about steel and assembly lines. Today industry runs on Big Data and on what you can do with it to save your business time and money. Industrial Analytics, sometimes called advanced analytics, refers to the business insights industrial companies derive from analyzing modeled data, simulations and field sensor data together. Often this is accomplished by building a Digital Twin, a virtual representation of a physical component or system that processes all of this data simultaneously and delivers ongoing lessons learned.

But analytics alone isn’t enough. As a pioneer of Digital Twins, Peaxy has invested heavily in providing data access and data persistence. After all, you’re building something to last as long as the equipment… often 20 to 40 years. We start with a specially designed scalable data architecture, then deploy advanced analytic solutions and applications that provide transformative business capabilities. Peaxy generally works on use cases that require large-scale geometries, simulations and sensor telemetry (in the case of a Digital Twin, often all three).

Peaxy Industrial Analytics at Work

Our industrial analytics team designs solutions that bring previously dark unstructured data into the light, then process that data until it yields strategic business insights. Often this solution sits in an edge computing environment, which acts as a front end to a data lake or other data repository. Most data lakes are not designed for the scale of industrial simulations or other massive unstructured test data. Peaxy creates reduced-order data sets that are small enough to be sent to a data lake for higher-order processing.

Here’s an example: An engine maker has terabytes of simulation data on a particular gearbox that is prone to failure. A data lake collects reduced-order data with a "gearbox problem" code that indicates vibration issues.

After that initial step, Peaxy runs anomaly detection analytics on those simulations that generate "red flags." Once you've identified the problem, you can pinpoint where metal fatigue and rotation problems might lead to failure and fix the problem before it snowballs. Other engineers can access the gearbox vibration data from qualification tests to see how field data and test data correlate.
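The anomaly-detection step above can be sketched with a simple statistical filter. This is an illustrative z-score check, not Peaxy’s production analytics; the function name, threshold and sample readings are all hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(samples, z_threshold=3.0):
    """Flag vibration readings whose z-score exceeds the threshold.

    Returns the indices of samples that warrant a "red flag" review.
    """
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []  # perfectly steady signal: nothing to flag
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > z_threshold]

# Mostly steady vibration with one spike -- the kind of outlier
# that would map to a "gearbox problem" code.
readings = [0.9, 1.1, 1.0, 0.95, 1.05, 1.0, 9.0, 1.02, 0.98, 1.01]
print(flag_anomalies(readings, z_threshold=2.0))  # → [6]
```

In practice the flagged indices would point engineers at the specific simulation runs or test windows where metal fatigue and rotation problems are most likely developing.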

It's a virtuous data circle, and it’s only made possible if you have the data access architecture in place to do the analytics.

What are Peaxy’s solutions?

Our solutions impact several different parts of the enterprise, saving time in the sales and product development cycles while optimizing service revenues and costs.

Digital Twins
Peaxy’s Digital Twin solution updates models and simulations for a serialized piece of equipment or a system with a unique ID. Digital Twins play an extremely important role in industrial analytics. They can correctly predict a successful configuration or prescribe an optimized performance regime over the lifecycle of equipment in the field. Often a Digital Twin compares simulated data to streaming data from the field, creating the foundation for a full-fledged condition-based maintenance solution. It covers every phase of a product’s or system’s lifecycle, from conception, to birth, to its time in the field, to going offline.

For more information, see our Digital Twin page.
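The comparison a Digital Twin makes between simulated and field data can be illustrated with a toy residual check. This is only a sketch of the idea; the function, tolerance and sample values are hypothetical, not Peaxy’s implementation.

```python
def twin_residuals(simulated, field, tolerance=0.15):
    """Compare field telemetry against the twin's simulated baseline.

    Returns the time steps where the relative deviation exceeds the
    tolerance, i.e. where the physical unit drifts from its modeled
    behavior -- the trigger for condition-based maintenance.
    """
    flags = []
    for t, (sim, obs) in enumerate(zip(simulated, field)):
        if sim and abs(obs - sim) / abs(sim) > tolerance:
            flags.append(t)
    return flags

baseline = [1.00, 1.02, 1.01, 1.03, 1.00]   # simulated values
telemetry = [1.01, 1.05, 1.30, 1.02, 0.99]  # streamed from the field
print(twin_residuals(baseline, telemetry))  # → [2]
```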

Predictive and Condition-Based Maintenance (CBM) Regimes
Industrial companies are increasingly using real-time sensor data to better plan their maintenance schedules and resource planning. These CBM regimes rely heavily on data architectures like Aureum and advanced analytics. Digital Twins feed these CBM regimes real-time data and point the way to an optimized maintenance schedule. Once in place, CBMs help companies maximize the life of equipment, reduce unforeseen costs associated with failure and ultimately gain a competitive advantage.

Peaxy Data Pipeline
Predictive maintenance requires acquiring large volumes of real-time sensor data, then performing advanced analytics on that data. Ingesting high-volume real-time data is extremely challenging. Data preparation and transformation in streaming data sets are crucial to properly managing large-scale inbound data. Once the data is reduced and transformed, it’s ready for continuous analytics.

Peaxy Data Pipeline helps data scientists prepare data for analytics with a graphical Integrated Development Environment (IDE) to design the data flow. Data is cleaned with built-in data transformations during ingest. Advanced scripting capability with Python, R and Groovy in the pipeline helps make the data flow more efficiently.

That flow is monitored through a metrics dashboard. Anomalies in the sensor data are detected and corrected as needed in the automated data flow. Real-time warning of anomalies and outliers via data introspection, sampling, threshold rules and alerts make the Peaxy Data Pipeline an essential piece of many industrial data solutions.
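The ingest-time cleaning and threshold alerting described above can be sketched in miniature. Peaxy Data Pipeline is a graphical tool, not this code; the record shape, field names and limit below are hypothetical.

```python
def ingest(records, vib_limit=5.0):
    """Clean streaming sensor records during ingest and flag alerts.

    Each record is assumed to be a dict like
    {"sensor": "gearbox-1", "vib": 1.2}. Malformed readings are
    dropped; readings over vib_limit are tagged so downstream
    analytics can prioritize them.
    """
    for rec in records:
        vib = rec.get("vib")
        if not isinstance(vib, (int, float)):
            continue  # drop malformed readings instead of propagating them
        yield {**rec, "alert": vib > vib_limit}

stream = [
    {"sensor": "gearbox-1", "vib": 1.2},
    {"sensor": "gearbox-1", "vib": None},  # malformed: dropped
    {"sensor": "gearbox-1", "vib": 7.8},   # over threshold: alert
]
print(list(ingest(stream)))
```

A real pipeline would chain many such transforms and feed the alert flags into the metrics dashboard; the generator style mirrors how streaming data is processed one record at a time.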

Threaded Find
Threading helps engineers and subject matter experts easily find relevant data in an advanced analytics environment. Connecting data that share a common thread is the Holy Grail of industrial analytics. To implement a sophisticated industrial analytics solution down the road, enterprises must successfully ingest and aggregate data, tagging it with relevant metadata along the way. Digital threading provides a formal framework to access, integrate, transform and analyze data from disparate systems throughout the product lifecycle. Better yet, it translates that data into actionable information.
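The tag-and-aggregate idea behind digital threading can be illustrated with a toy metadata index. The file names and tags below are hypothetical, and this is a sketch of the concept rather than Threaded Find’s actual mechanism.

```python
from collections import defaultdict

def build_thread_index(files):
    """Index files by metadata tag so that items sharing a tag
    (a 'thread') can be retrieved together."""
    index = defaultdict(set)
    for name, tags in files.items():
        for tag in tags:
            index[tag].add(name)
    return index

# Hypothetical file-to-metadata mapping produced at ingest time.
files = {
    "gearbox_sim_042.dat": {"gearbox", "simulation", "vibration"},
    "qual_test_07.csv":    {"gearbox", "test", "vibration"},
    "blade_sim_003.dat":   {"blade", "simulation"},
}
index = build_thread_index(files)

# Intersecting threads answers questions like "which gearbox data
# sets involve vibration?" across simulations and qualification tests.
print(sorted(index["gearbox"] & index["vibration"]))
# → ['gearbox_sim_042.dat', 'qual_test_07.csv']
```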

Peaxy’s Threaded Find app, included as part of the Aureum data architecture, is an industrial engineer’s best friend. It enables engineers and scientists to locate the most important simulation data sets to create response surfaces, optimize system designs, and track down the best component designs and contributing factors. This improves day-to-day operations and pushes optimizations beyond what was possible in the past.

Find’s visual interface lets users quickly see how files are connected and how they can be used together as analytical inputs. This functionality can speed service delivery, accelerate development of new products from existing products and uncover opportunities for new revenue streams from data insights.

Without Find, engineers often have to rely on “tribal knowledge” rather than a sophisticated system to locate specific simulations or summarized results. They are forced to do extra work to get the data they need, lowering overall productivity. Even with that extra work, they lack the complete set of data needed to properly optimize designs and achieve efficiency goals.

Another app in the Threaded Find family is the Peaxy Digital Dossier. This app aggregates and connects all files attached to a serialized piece of industrial equipment or structure. The dossier is updated daily (or hourly) with every file a company collects on that equipment, tracking it from its “birth” when it comes off the assembly line to the day it is taken out of service. Only made possible with Aureum’s superior data infrastructure, the Dossier reaches into every data set that equipment touches, including simulations and test results, geometry, service records and in-the-field sensor data.
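The Digital Dossier’s lifecycle aggregation can be sketched as a record keyed by serial number that accumulates files from “birth” onward. The class, serial number and paths below are hypothetical, offered only to make the concept concrete.

```python
from datetime import date

class DigitalDossier:
    """Aggregate every file attached to one serialized piece of equipment."""

    def __init__(self, serial_id, birth):
        self.serial_id = serial_id
        self.birth = birth      # the day the unit came off the assembly line
        self.files = []         # (date, category, path) tuples

    def attach(self, when, category, path):
        """Record a file (simulation, test, geometry, service, sensor...)."""
        self.files.append((when, category, path))

    def by_category(self, category):
        """Return paths for one category, in chronological order."""
        return [p for (_, c, p) in sorted(self.files) if c == category]

dossier = DigitalDossier("GB-00172", birth=date(2019, 3, 1))
dossier.attach(date(2019, 2, 20), "simulation", "sims/gb00172_vib.dat")
dossier.attach(date(2020, 6, 5), "service", "records/gb00172_inspection.pdf")
print(dossier.by_category("service"))  # → ['records/gb00172_inspection.pdf']
```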

Industrial Analytics Use Cases


Powered by Aureum

Peaxy Aureum® empowers universal access to data, allowing your company to gain control over unstructured and structured data assets and making it easy to find, mine, manage and reuse information across silos, data types, applications and infrastructures (on and off premises). Aureum also serves as an enterprise-scale data substrate that enables sophisticated industrial analytics, such as CBM regimes and Digital Twins.

It’s no small task to gain control over exponentially growing data assets spread across the entire enterprise. The growth and complexity of unstructured data are accelerating, exerting pressure on IT infrastructures beyond what they were designed to handle. As a result, “crown-jewel” data sets “go dark,” blocking new business models like industrial predictive maintenance and Digital Twin technology.

Aureum’s data access capabilities speed time-to-value. We enable a broad set of stakeholders to readily find and analyze your company’s most valuable data. Using edge computing workflows, Aureum aggregates and prepares unstructured data for advanced analytics and data visualization.

Aureum enables enterprises to realize productivity gains by empowering teams with self-serve access, search and management across data types and silos. This saves time locating data and increases sharing across teams. It can recover weeks or months lost to task duplication when the right information can’t be found for maintenance or new product development.

Aureum delivers reliable access to data across silos, so teams can more accurately predict maintenance cycles and enable more efficient, cost-effective services. Aureum data aggregation and search can speed service delivery, accelerate development of new products from existing products and uncover opportunities for new revenue streams from data insights.

Aureum is hardware-agnostic and runs on industry-standard commodity hardware. It is built to run over heterogeneous hardware, so enterprises can mix and match vendors, form factors, hardware generations and types of storage devices. Across this ecosystem, Aureum offers a single namespace that aggregates all devices. Automatic tiering, built-in redundancy and self-healing reduce manual interventions, further lowering costs. This keeps the solution cost effective, lowering the total cost of ownership by more than 50 percent.


Aureum will:

Remain always available through redundancy and replication, performing faster than any comparable platform.
Categorize, catalog, index and provide contextual views to illuminate data assets.
Preserve mission-critical data for long-term use.
Integrate seamlessly into Windows and POSIX environments.
Run hardware-agnostic on existing and forthcoming systems.
Scale without limit.
Keep data long-term with minimal maintenance costs and make it available to solve current and future business problems.
Remain stable (immutable pathnames in a single namespace) and secure (via SSL/TLS, 2FA, etc.).