... coping with the flood of data ...

Every day we generate a huge amount of data while documenting the manufacturing of products.

It is a simple fact: the idea that this mass of data could be managed just by using cutting-edge technology does not work.

In most cases, one and the same content is stored several times, copied and pasted into different databases in different formats. And, annoyingly, the relations between individual data instances are not documented sufficiently, even though those relations play a key role in high-quality data management.

Hoping that the flood of data will end one day is pointless. Digitization is speeding up, and with every additional piece of data you will have to answer the question: where is it supposed to be stored?

Too many kingdoms

"data kingdoms", "data silos", no matter how you call it - the situation is similar in most companies:

Our data is stored in many different software systems and databases. The large amount of manual work this creates leads to data inconsistency.

A data flow based on "copy and paste", thousands of office documents on several file servers and a confusing landscape of access rights lead to a loss of transparency and acceptance.

Consolidating the most important master data objects on a single platform and using modern streaming technology (Apache Kafka) allows a transparent and controlled data flow between the systems.
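
To make this concrete: the central platform can publish every change to a master data object as an event on a Kafka topic, and downstream systems subscribe to that topic instead of keeping their own copies. The sketch below is a minimal illustration using the kafka-python client; the broker address, topic name and event layout are assumptions chosen for the example, not a fixed design.

import json
from kafka import KafkaProducer

# Connect to a (hypothetical) local broker; serialize events as JSON so that
# every consuming system reads the same format.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# One change to a consolidated master data object ...
event = {
    "object_type": "material",
    "object_id": "MAT-100042",
    "changed_fields": {"description": "Hex bolt M8x40, stainless steel"},
}

# ... is published once; every downstream system consumes the topic
# instead of receiving its own copy by "copy and paste".
producer.send("masterdata.material", key=event["object_id"], value=event)
producer.flush()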

Data as a product

Today, selling a product also means being able to deliver the related product data, in whatever format the customer requires. This can range from a customer-specific CSV file to a standardized eCl@ss format. Mapping your own data to externally defined formats is essential.
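
As a simple illustration of such a mapping: the internal record stays the single source, and each customer layout is just a mapping table from the customer's column names to internal field names. The field names, column names and file name below are assumptions chosen for the example, not part of any standard.

import csv

# Internal representation of one product, as it might come from the platform.
product = {
    "material_number": "MAT-100042",
    "description": "Hex bolt M8x40, stainless steel",
    "weight_kg": 0.021,
}

# Customer-specific layout: customer column name -> internal field name.
customer_layout = {
    "ItemNo": "material_number",
    "ShortText": "description",
    "NetWeight": "weight_kg",
}

# Write the record in the customer's CSV layout.
with open("customer_export.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(customer_layout))
    writer.writeheader()
    writer.writerow({col: product[field] for col, field in customer_layout.items()})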

Questions about the completeness and currency of the delivered data must be answered quickly, backed by stable and transparent processes.
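
A completeness check can be one building block of such a process. The sketch below assumes a list of attributes the recipient has declared mandatory and reports which of them are missing or empty before anything is delivered; the field names are illustrative.

# Attributes the recipient has declared mandatory (assumed for the example).
REQUIRED_FIELDS = ["material_number", "description", "weight_kg"]

def missing_fields(record):
    """Return the required attributes that are absent or empty in a record."""
    return [f for f in REQUIRED_FIELDS if record.get(f) in (None, "")]

# An incomplete record is reported instead of being delivered silently.
print(missing_fields({"material_number": "MAT-100042", "description": ""}))
# -> ['description', 'weight_kg']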