
What’s wrong with the Spreadsheet PIM? Part 2

Carrying on from my What’s wrong with the Spreadsheet PIM? Part 1 blog post, today I’ll share with you the importance of controlling your product data and offer an alternative to the spreadsheet PIM.

Spreadsheets do not provide the mechanisms to control your data

Without application logic there is no control over data that resides within a spreadsheet. Control means accountability, responsibility and governance. Master data governance is a combination of business process and application logic. Often, business processes are encapsulated within application logic, but when you are working within a spreadsheet that just isn’t possible.

Spreadsheet files are very portable and can be ‘saved’ in many formats so the data can be easily retrieved by other programs and applications. While this is convenient and has many advantages, it makes data governance very difficult.  Product data is master data and allowing that data to become portable in such a way means it cannot be governed. Any organisation managing master data, whether that is customer data, financial data or product data, must have processes and controls in place to ensure there is a single version of the truth.

Virtually everyone with Microsoft Office has generated a spreadsheet for analysis based on a snapshot in time. How often do these spreadsheets get updated with current data after this first iteration? Do you ever share old and potentially incorrect information externally (or make decisions internally) from these spreadsheets because there is no mechanism to automatically update the data? This scenario plays itself out daily from the smallest start-up to the Fortune 500 companies.

Product data is seldom the responsibility of a single person or department. Although a single person may ultimately be accountable for that data, its compilation, aggregation and validation reside with those who are responsible for product attribution, e.g. ‘Logistics’ may be responsible for weight and dimensions, ‘Marketing’ for product declarations and ‘Legal’ for regulatory requirements.

Product data attribution is a process, and it’s not necessarily sequential in nature. Many attributes can be defined at any stage, but some, for example weight and dimensions, require the product to be packaged first. This means that workflow must form part of the process. A spreadsheet provides no way to define or configure workflow steps and approvals, such as:

  • Which individuals are responsible for what?
  • Which departments are responsible for which individuals?
  • Who is accountable for each department?
  • What happens when an individual in a workflow step is unavailable?
  • What is the sequence of events for product data attribution?
  • What does ‘complete’ look like?
  • Who approves the final record?
  • What systems should be updated with what data?

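In a dedicated PIM, the questions above become explicit configuration rather than tribal knowledge. As a rough sketch only (the data model, step names and user names here are all hypothetical, not taken from any real product), the idea can be expressed in a few lines of Python:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    """One stage of product data attribution (hypothetical model)."""
    name: str
    department: str   # which department is responsible
    responsible: str  # the individual doing the work
    accountable: str  # who signs off for that department
    backup: str       # who steps in when the responsible person is unavailable
    attributes: list  # the attributes this step must complete
    done: bool = False

@dataclass
class Workflow:
    steps: list = field(default_factory=list)
    final_approver: str = ""

    def is_complete(self) -> bool:
        # 'Complete' means every step is done and a final approver is named
        return bool(self.final_approver) and all(s.done for s in self.steps)

    def next_step(self):
        # The workflow is sequenced: the first unfinished step is up next
        return next((s for s in self.steps if not s.done), None)

# Example sequence: weight and dimensions can only be captured after packaging
flow = Workflow(
    steps=[
        WorkflowStep("Packaging", "Operations", "p.jones", "ops.lead", "a.smith",
                     ["pack_type"]),
        WorkflowStep("Measurement", "Logistics", "l.brown", "log.lead", "b.green",
                     ["weight", "dimensions"]),
        WorkflowStep("Declarations", "Marketing", "m.white", "mkt.lead", "c.black",
                     ["description", "claims"]),
    ],
    final_approver="data.steward",
)
print(flow.next_step().name)  # Packaging comes first
```

The point is not the code itself but that each of the questions in the list maps to a named field that the system can enforce — something a grid of cells cannot do.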
Spreadsheets allow you to share data, but not synchronize data

In the previous section we referred to a ‘single version of the truth’. This is the place where data is held to account and where data quality is measured. When data is shared it becomes vulnerable, especially if it is in a format that can be modified, such as a spreadsheet. Unfortunately, this is the reality today: product data is most commonly shared using spreadsheets, sent both internally and externally. At no point during that sharing process is there a controlled mechanism to ensure the data is the same in every copy. Spreadsheets can be ‘locked down’ using cell protection, but even then the content can still be cut and pasted or, even worse, re-keyed.

In order to share data and maintain a single point of truth, the sharing mechanism must support synchronization. When product data is synchronized between parties, every consumer of that information works from the same set of data, and when the data is amended at source, all parties are notified of the changes. Data synchronization is a two-way conversation, often achieved through a ‘publish’ and ‘subscribe’ model supported by the ability to accept, reject or review.

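To make the publish/subscribe idea concrete, here is a minimal, purely illustrative sketch in Python (it is not a real GDSN or PIM API — the class, the GTIN and the decision logic are all invented for the example). The source system holds the single version of the truth; every publish notifies every subscriber, who records an accept/reject/review decision:

```python
class ProductPublisher:
    """Minimal publish/subscribe sketch for product data synchronization."""

    def __init__(self):
        self.records = {}      # gtin -> attribute dict (single source of truth)
        self.subscribers = []  # callables notified on every change

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, gtin, **changes):
        # Amend the record at source, then notify every subscriber
        before = dict(self.records.get(gtin, {}))
        self.records.setdefault(gtin, {}).update(changes)
        for notify in self.subscribers:
            notify(gtin, before, dict(self.records[gtin]))

# A consuming trading partner reviews each change and records a decision
decisions = []
def retailer(gtin, before, after):
    decisions.append((gtin, "accept" if after.get("weight") else "review"))

bus = ProductPublisher()
bus.subscribe(retailer)
bus.publish("05012345678900", weight="1.2 kg")
```

Because every subscriber sees the same record at the same moment, no stale spreadsheet copy can drift out of line with the source.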
A popular mechanism for product data synchronization is the GS1 Global Data Synchronization Network (GDSN).  This system is used predominantly in the Retail, Consumer Packaged Goods (CPG), Food Service, Hardlines and Healthcare sectors.  As of May 2015, there were over 16.5 million items registered in the GDSN.  The number of trading partners (e.g. suppliers and consumers of data) using the network exceeds 37,000.

The GDSN contains a set of standardised product data attributes, which means everyone accessing the network is using the same data quality standards. However, the GDSN is, as its name suggests, a ‘network’ used to transport product data in a standardised format. The product data itself is stored in a data pool, of which there are many; the most popular and widely used data pool is operated by 1Worldsync (a joint venture operated by GS1 US and GS1 Germany). It is these interconnected data pools, all referencing a central database called the GS1 Global Registry, that make up the network.

The GDSN will only accept product data that is GS1 standards compliant, so prior to using the network your product data will need to be validated. Spreadsheets cannot contain these validation rules in their entirety.

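One small but representative GS1 validation is the GTIN check digit: working from the right, digits are weighted alternately 3 and 1, and the check digit must bring the weighted sum to a multiple of 10. A short Python sketch of that one rule (real PIM validation suites contain hundreds of such rules):

```python
def gtin_is_valid(gtin: str) -> bool:
    """Validate a GTIN check digit per the GS1 algorithm.

    Working from the right, the digit adjacent to the check digit is
    weighted 3, the next 1, and so on, alternating; the check digit makes
    the weighted sum a multiple of 10. Applies to GTIN-8/12/13/14.
    """
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    digits = [int(d) for d in gtin]
    body, check = digits[:-1], digits[-1]
    # Reversed, so index 0 is the digit next to the check digit (weight 3)
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10 == check

print(gtin_is_valid("4006381333931"))  # True: valid GTIN-13
print(gtin_is_valid("4006381333932"))  # False: wrong check digit
```

A spreadsheet can hold this one formula, but it cannot hold the full, versioned GS1 rule set — mandatory attributes, code lists, cross-field dependencies — that the GDSN enforces.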
Spreadsheets can’t maintain version history and audit trails


Maintaining a repository of master data that is the single point of truth for product information means proving that it is the single point of truth. One must be able to demonstrate that the system in use is where the data has been created, managed and controlled. Typically this involves version history and audit trails:

  • Who did what?
  • When did they do it?
  • What did they change?
  • What was the value before and after the change?
  • Who was the data published to?

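The questions above can be answered by an append-only change log. As an illustration only (the class and field names here are invented, not any vendor's schema), each change records who, when, what, the before and after values, and who the data was published to:

```python
from datetime import datetime, timezone

class AuditTrail:
    """Append-only change log: who / when / what / before / after (sketch)."""

    def __init__(self):
        self.entries = []

    def record(self, user, attribute, before, after, published_to=()):
        self.entries.append({
            "who": user,
            "when": datetime.now(timezone.utc).isoformat(),
            "what": attribute,
            "before": before,
            "after": after,
            "published_to": list(published_to),
        })

    def history(self, attribute):
        # Full version history for one attribute, oldest first
        return [e for e in self.entries if e["what"] == attribute]

trail = AuditTrail()
trail.record("l.brown", "net_weight", None, "1.2 kg")
trail.record("l.brown", "net_weight", "1.2 kg", "1.25 kg",
             published_to=["RetailerA"])
print(len(trail.history("net_weight")))  # 2 versions on record
```

Because entries are only ever appended, never edited, the log itself becomes the evidence that the repository is where the data was created and controlled.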
Product data can be very volatile. It is now influenced by many factors including trading partner requirements, global standards and local and international legislation. Data quality compliance is now so important that getting it wrong can have serious consequences, like:

  • Trading partner cost recovery/fines
  • Omission from catalogues or e-commerce sites
  • Law enforcement/prosecution
  • Trade/market exclusion
  • Consumer backlash and negative press coverage
  • Consumer health issues/adverse events

When data quality becomes an issue, an organization must be able to demonstrate that it has the records to respond to allegations or support an investigation.

Spreadsheets do not contain the features to enable such capability.

So what’s the alternative to the spreadsheet PIM?

Product Information Management systems have traditionally been the preserve of the larger enterprise, with large product portfolios, sophisticated requirements and ample budgets. It is no surprise, then, that the spreadsheet PIM became popular across the mid-market and small business.

With the product attribute explosion of the past few years, small and medium-sized businesses have become exposed to levels of sophistication that had been the preserve of large organizations. The impact has been more severe on the small business because of inadequate systems, a lack of specialist skills and knowledge, and the rapid pace of change.

To address this new market requirement, LANSA has re-imagined Enterprise PIM. We have taken everything we learned from providing behind-the-firewall PIM solutions over the past decade and encapsulated that expertise and knowledge, creating the industry’s first Cloud PIM solution that supports interchangeable data models. These data models address the specific requirements of a given industry. SyncManager has been launched with the following Industry Templates (data models):

Grocery – EU1169

Use this data model to get your product information compliant with the EU Food Information Regulation. Assess your readiness and publish compliant data using the GDSN and/or feed your eCommerce website.

Office Supplies

Use this data model if you are in the Office Supplies industry and organize your products using the eCl@ss product classification system. Publish and consume product information using BMEcat messaging.

Healthcare – UDI

Use this data model if you are a medical device manufacturer and must comply with the FDA UDI regulations. Assess your readiness and publish product information to the GUDID to become compliant.

So now you can move forward and leave the Spreadsheet PIM behind. Sign up for the free Solo account, select your data model and upload a sample of your spreadsheet data. If you like what you see, select a subscription plan that suits your needs or contact the SyncManager team at LANSA.



Ian Piddock’s career has been focused in the IT industry. Since 2000 he has worked in a variety of business development and marketing roles for global enterprise software companies. Ian has been a GS1 standards advocate and practitioner for over 10 years; he is a data quality evangelist and experienced in the practical application of the GS1 Data Quality Framework. In recent years Ian has been responsible for the design, branding and product management of LANSA’s Data Quality software tools – DQ Inspector and DQ Reporter. At LANSA Ian heads up Marketing for EMEA and manages the relationship with GS1 Member Organisations worldwide, seeking to identify where LANSA solutions can accelerate industry adoption of GS1 standards and use of the Global Data Synchronisation Network (GDSN). Ian is also a trusted advisor on the impact that regulation and legislation have on the product data management process and has worked on projects involving the EU Food Information Regulation and the FDA’s Unique Device Identification (UDI) rule.
