Data Synchronization In Real Life


As my colleague describes in his article “One Click Away: Where Does the Data Come From?”, meeting trading partner and global data standards requirements is a daunting task for manufacturers and retailers today. I know first-hand, having worked on an implementation project for a supplier that sought to comply with the GS1 standards set for the fresh fruit and vegetable industry. Data quality and consistency are hard to achieve when dealing with natural products. Many fresh fruit and vegetable suppliers leave the default values in the item attribute fields as is when they publish their products to the Global Data Synchronization Network (GDSN). This particular supplier went beyond GS1 compliance and became the first in the fresh fruit and vegetable sector to fill all item attributes with real values and close the entire loop from GDSN publication to EDI invoicing. This was accomplished by implementing new procedures and a Product Information Management (PIM) solution. You can probably imagine the project’s scope of work and how comprehensive a plan this achievement required.

Today I would like to focus on one component of a PIM implementation: data synchronization. Data synchronization is about integrating current product data, processes and existing systems with a GDSN- or non-GDSN-connected solution to support your daily operations in managing the synchronization process. It is an add-on to your current item management processes, allowing you to initiate workflows that eliminate inefficiencies, provide insight and guide you in a natural, integrated way of registering, validating and publishing data to your preferred data pool.

The continuously changing data synchronization environment, combined with regulatory changes and customer demands, calls for a flexible and configurable solution. Together with a firm implementation approach, this leads to a shortened time-to-market. Many companies feel that they are forced into data synchronization and see it primarily as an extra cost. Yet data synchronization and the process of improving data quality can deliver high-quality item data as added value, perhaps even a return on investment, for your own organization or your supply chain. In addition to economies of scale, you can feel confident in the accuracy of the data your consumers and trading partners take in. Not to mention that applications such as online web shops, product catalogues, data publishing, EDI and even mobile offerings (tablet/smartphone) will publish up-to-date product data.

A successful PIM implementation approach consists roughly of three major steps:

Preparation

Preparation is about starting your project with a project plan and analyzing your current item data, processes and systems.

Analysis

The analysis phase is about determining the attributes to publish, driven by your sector, regulation and customer demands, and possibly your own additions that you think are valuable for your trading partners. In my humble opinion, publishing only the mandatory product attributes and hierarchy stands in the way of a successful implementation. Exceeding your trading partners’ expectations does not cause extra pain. For example, adding a pallet GTIN with its dimensions, volume and so on is easy, even though it is not mandatory, and it helps your customer determine transportation needs, storage space and shipping/handling all in one go. The established list of attributes then needs to be mapped to existing item attributes in your current systems. You may need to enrich the attributes you are missing, or convert their format or content to allowed values. Furthermore, during this phase your processes need to be defined, e.g. what needs to happen, and by whom, when you change an item in your current systems?
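The mapping-and-enrichment step can be pictured as a simple translation table. The sketch below is purely illustrative: the ERP field names and GDSN-style attribute names are hypothetical, not GS1-official, and a real PIM would carry far richer metadata. The idea it shows is the one described above: translate internal fields to the target attribute list, and flag default or empty values that still need enrichment before publication.

```python
# Hypothetical sketch: mapping internal ERP item fields to GDSN-style
# attribute names and flagging attributes that still need enrichment.
# Field and attribute names are illustrative, not GS1-official.

ATTRIBUTE_MAP = {
    "item_desc": "tradeItemDescription",
    "net_weight_kg": "netWeight",
    "gross_weight_kg": "grossWeight",
    "pallet_gtin": "palletGTIN",
}

def map_item(erp_item: dict) -> tuple[dict, list[str]]:
    """Translate an ERP record into GDSN-style attributes and report
    which target attributes are missing a real value."""
    mapped, missing = {}, []
    for source, target in ATTRIBUTE_MAP.items():
        value = erp_item.get(source)
        if value in (None, "", 0):   # default/placeholder values need enrichment
            missing.append(target)
        else:
            mapped[target] = value
    return mapped, missing

item = {"item_desc": "Cauliflower 6ct", "net_weight_kg": 7.2,
        "gross_weight_kg": 0, "pallet_gtin": None}
mapped, missing = map_item(item)
print(missing)  # ['grossWeight', 'palletGTIN']
```

In practice the enrichment list would feed a workflow task for the product specialist rather than a print statement.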

Integration and Execution

And last but not least, the final phase of an implementation project includes the integration of your current systems with the PIM. I prefer a two-stage approach: staging tables receive and transform data from your current systems, after which the data is processed and validated, triggering workflow processes based on events (new or changed items).
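A minimal sketch of that two-stage flow, under stated assumptions: table layouts, column names and event names here are all hypothetical, and a production system would use the PIM vendor's own staging mechanism rather than SQLite. Stage one only lands raw records; stage two compares checksums against what the PIM already holds and raises workflow events for new or changed items only.

```python
# Illustrative two-stage staging sketch (all names hypothetical).
# Stage 1 lands raw records; stage 2 detects new/changed items by
# checksum and emits workflow events without touching unchanged data.
import sqlite3, json, hashlib

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (gtin TEXT PRIMARY KEY, payload TEXT, checksum TEXT)")
conn.execute("CREATE TABLE pim (gtin TEXT PRIMARY KEY, payload TEXT, checksum TEXT)")

def stage(items):
    """Stage 1: copy raw records into staging; the PIM is untouched."""
    for it in items:
        payload = json.dumps(it, sort_keys=True)
        checksum = hashlib.sha256(payload.encode()).hexdigest()
        conn.execute("INSERT OR REPLACE INTO staging VALUES (?,?,?)",
                     (it["gtin"], payload, checksum))

def process():
    """Stage 2: validate/compare and start event-driven workflows."""
    events = []
    for gtin, payload, checksum in conn.execute("SELECT * FROM staging"):
        row = conn.execute("SELECT checksum FROM pim WHERE gtin=?", (gtin,)).fetchone()
        if row is None:
            events.append(("NEW_ITEM", gtin))
        elif row[0] != checksum:
            events.append(("ITEM_CHANGED", gtin))
        else:
            continue  # unchanged: no workflow needed
        conn.execute("INSERT OR REPLACE INTO pim VALUES (?,?,?)",
                     (gtin, payload, checksum))
    return events

stage([{"gtin": "08712345678906", "desc": "Cauliflower 6ct"}])
print(process())  # [('NEW_ITEM', '08712345678906')]
```

The design point is that conflicts are resolved in the staging area, so neither the source system nor the synchronization solution is harmed by bad data in transit.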

Finally, the execution phase is also about installing your solution:

  • Configuring users, groups, roles, authorities, attributes, views and layouts, code lists (e.g., GPC codes), workflows, validation rules, trading partners, catalogues, etc.
  • Inspecting and determining product dimensions and weights.
  • Reporting KPIs based on standards and tolerances for inspection sets.
  • Performing the initial load of the item data.
  • Applying validation rules upfront, so that before you register your items in the data pool you can be assured your data is compliant. This prevents frustration and saves time.
  • Finally, registering and publishing your items to your chosen data pool.
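The upfront validation step can be as simple as a local rule set run before anything is sent to the data pool. The sketch below uses the real GS1 mod-10 check-digit algorithm for GTINs; the mandatory-attribute list and the `validate` helper are illustrative assumptions, since the actual mandatory set depends on your sector and target data pool.

```python
# Sketch of upfront validation before data-pool registration.
# The check-digit routine implements the real GS1 mod-10 rule;
# the mandatory-attribute set below is illustrative only.

def gtin_check_digit_ok(gtin: str) -> bool:
    """GS1 mod-10: from the right, body digits alternate weights 3,1
    (starting with 3 on the digit left of the check digit)."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    digits = [int(d) for d in gtin]
    body, check = digits[:-1], digits[-1]
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10 == check

def validate(item: dict) -> list[str]:
    """Collect rule violations; an empty list means safe to publish."""
    errors = []
    if not gtin_check_digit_ok(item.get("gtin", "")):
        errors.append("invalid GTIN check digit")
    for attr in ("description", "netWeight"):   # illustrative mandatory set
        if not item.get(attr):
            errors.append(f"missing mandatory attribute: {attr}")
    return errors

print(validate({"gtin": "4006381333931", "description": "Pencil"}))
# ['missing mandatory attribute: netWeight']
```

Catching these violations locally is what prevents the frustrating register-reject-correct loop with the data pool.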

This implementation process can become quite challenging. One of the major challenges at the beginning of a project is almost always a conflict between existing data and the attributes you want to publish or are required to publish. This mainly concerns item hierarchy, attribute definitions, data content differences and in some cases even inconsistencies, which require a proper solution that does not disturb existing procedures and systems. My approach is always to solve these challenges in a staging area that harms neither the existing system nor the synchronization solution.

Another challenge specific to the fresh food industry, and to the implementation project I worked on, is that the product at the consumer level is a natural product, e.g. a cauliflower. No two heads of cauliflower are the same, and the standard tolerances allowed are too narrow. The product has no fixed dimensions or weight. To make things worse, these natural products are often sold to the consumer in different ways by different retailers, as I discovered when I went shopping. One retailer sells the product at a unit price and another sells the same product at a price based on weight, both in my small home village. At the time there was no standard, and no answer on how to handle this. This provided an opportunity for us to work with GS1 in developing a new standard, which is now adopted throughout the entire fresh food industry. I can breathe again! To read more about the PIM implementation, read the ZON case study.

While the challenges many suppliers face in providing consumers with accurate product data and feeding it to downstream systems are significant, implementing a product information management solution is paying dividends to manufacturers and retailers today.

Author: Danny Hellemons

Danny Hellemons, Manager of Professional Services at LANSA Ltd. Danny has been an active member of the IT industry since 1989, starting out as a developer and working his way up to become a project manager and IT consultant. In 1992 he became familiar with LANSA, and has worked extensively with the organization and LANSA-related products ever since. Through various projects, assignments and research and development, Danny is at the forefront of the continuously progressing LANSA environment. Danny has a strong logistics background in systems development, and has played key roles on many projects as Senior Technical Consultant, System Architect, Project Manager and Business Analyst. He has been employed by LANSA since 2003.
