Data can’t simply be stored anymore. It must be preserved for the long term and made readily accessible wherever and whenever it’s needed. Unstructured engineering data such as models, CAD drawings and visual representations of a design are especially sensitive to long-term access issues.

Engineering teams design engines, turbines and other large pieces of equipment that stay in production for years, with service lives extending far beyond. Long after nearly everyone involved has left the company, the information they generated will be essential for servicing and troubleshooting the equipment, for training new employees, and as a reference for new or updated designs.

How do you make sure this critical information remains available and accessible for several decades to multiple engineering, manufacturing and service teams spread around the world?


Capturing tribal knowledge

The design for a new piece of equipment is embodied in one set of unstructured data, such as a series of computer-aided design (CAD) drawings that document its evolution and refinement.

But another set of information accounts for the design process, such as requirements, specifications, cost and ROI analyses, and the tradeoffs and deliberations that guided design decisions. Some of these, like decision-making rationales, are often not written down at all. So the first challenge is to make sure this tribal knowledge is recorded somewhere so it will not be lost when people leave the company.

An effective data access strategy supports the institutional memory of a company, creating a process to continually capture and curate tribal knowledge so that it is accessible to people as they join the organization.


Data lifecycle management

Data flows through a lifecycle that governs its availability, redundancy, and access requirements. By automating policies for managing this lifecycle, organizations ensure that data is preserved for the long term and is not lost to unexpected issues. Data classes manage how data is accessed and related, while storage classes manage where data is stored and how it is replicated. The two work in tandem to implement organizational policies.

For example, essential data for a project in progress can be asynchronously backed up and replicated to prevent loss. As that data becomes less current, it can be automatically migrated to less expensive archival storage tiers.
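To make the idea concrete, here is a minimal sketch of such a lifecycle policy. The class name, tier labels, and age thresholds are illustrative assumptions, not the API of any real storage product:

```python
from dataclasses import dataclass

@dataclass
class LifecyclePolicy:
    """Maps a file's age (days since last access) to a storage tier.

    Hypothetical thresholds: active project data stays on replicated
    fast storage; idle data moves to cheaper disk; stale data is
    migrated to archival storage automatically.
    """
    hot_days: int = 90    # actively used: replicated fast storage
    warm_days: int = 365  # recent but idle: replicated disk

    def tier_for(self, days_since_access: int) -> str:
        if days_since_access < self.hot_days:
            return "hot-replicated"
        if days_since_access < self.warm_days:
            return "warm-disk"
        return "cold-archive"

policy = LifecyclePolicy()
print(policy.tier_for(10))   # in-progress design files stay hot
print(policy.tier_for(400))  # stale files migrate to the archive tier
```

A real platform would evaluate a rule like this continuously and move data between tiers without changing how applications address it.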


Hardware agnosticism and technology standards

Server configurations and storage methods change quickly. Who can predict how we will be storing data 10 years from now, let alone 30?

Therefore, an effective data access program must be software-defined, running on standard commercial data servers for fungibility and agility. It should be built for heterogeneous hardware, so that enterprises can mix and match vendors, form factors, hardware generations and types of storage devices. Support for common technology lowers hardware costs, avoids the need to manage multiple types of network infrastructure, and reduces total cost of ownership.

By also adhering to file system standards, the platform lets existing applications take advantage of the latest storage technologies without significant modification.

The immutable pathname

Unstructured data often resides on storage systems designed for a three- to five-year technology cycle. With each cycle, the files are copied and moved to another disk drive. An effective data access platform must ensure that each file’s pathname is preserved across maintenance and technology upgrades.

This immutable pathname prevents data from going dark because it can no longer be found at its usual location. Applications should still be able to access the data and document references should still work when the next round of design reviews or troubleshooting begins.
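One common way to achieve this is a layer of indirection: applications address data by a fixed logical pathname, and the platform maps it to whatever physical location currently holds the bytes. The sketch below illustrates the idea; the class, method names, and example locations are hypothetical:

```python
class PathIndex:
    """Maps immutable logical pathnames to current physical locations."""

    def __init__(self) -> None:
        self._locations: dict[str, str] = {}

    def register(self, logical: str, physical: str) -> None:
        self._locations[logical] = physical

    def migrate(self, logical: str, new_physical: str) -> None:
        # Hardware refresh: the bytes move, but only the mapping
        # changes -- the logical pathname stays the same.
        self._locations[logical] = new_physical

    def resolve(self, logical: str) -> str:
        return self._locations[logical]

index = PathIndex()
index.register("/projects/turbine-x/assembly.cad", "nas01:/vol3/a1b2c3")

# Years later, the file is migrated to a new storage system:
index.migrate("/projects/turbine-x/assembly.cad", "obj01:/bucket7/a1b2c3")

# Applications and document references keep using the logical path:
print(index.resolve("/projects/turbine-x/assembly.cad"))
```

Because every document reference and application binds to the logical path, a decade of storage migrations never breaks a link.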



For many companies, simply storing unstructured data is no longer enough. They need a data preservation strategy and policies that institutionalize the capture and management of both "official" and tribal knowledge and meta-knowledge. Support for standards, hardware agnosticism, and immutable pathnames are the keys to ensuring unfettered access to that knowledge by multiple teams across the globe over the long term. An effective data access platform is essential to implementing that strategy and those policies.