
“McKinsey research shows bold moves to adopt digital technologies early and at scale, combined with a heavy allocation of resources against digital initiatives and M&A, correlate highly with value creation” 

Digital strategy in a time of crisis, © McKinsey 2020

Imagine having to identify every customer data point in hundreds of business systems, capturing all of that out-of-step from BAU operations, and then purging it from wherever it lives. All of this triggered by a single customer’s request, under a law that grants a person ‘the right to be forgotten.’
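To make the shape of that problem concrete, here is a minimal sketch of what such a fan-out could look like if each system of record exposed a simple data API for locating and deleting a customer’s personal data. The system names, endpoint paths, and erasure semantics below are illustrative assumptions, not any particular product’s API.

```typescript
// Hypothetical sketch: fanning out a "right to be forgotten" request across
// per-system data APIs. System names and endpoint paths are assumptions.

type ErasureResult = { system: string; recordsFound: number; purged: boolean };

// Each system of record is assumed to expose a small API for locating and
// deleting a customer's personal data. In practice this list runs to hundreds.
const SYSTEM_APIS = [
  "https://crm.internal/api",
  "https://billing.internal/api",
  "https://marketing.internal/api",
];

export async function eraseCustomer(customerId: string): Promise<ErasureResult[]> {
  const results: ErasureResult[] = [];

  for (const baseUrl of SYSTEM_APIS) {
    // 1. Identify: ask the system which records reference this customer.
    const found = await fetch(`${baseUrl}/customers/${customerId}/records`);
    if (!found.ok) {
      results.push({ system: baseUrl, recordsFound: 0, purged: false });
      continue;
    }
    const records: unknown[] = await found.json();

    // 2. Purge: request deletion and record the outcome for the audit trail.
    const purge = await fetch(`${baseUrl}/customers/${customerId}`, {
      method: "DELETE",
    });
    results.push({ system: baseUrl, recordsFound: records.length, purged: purge.ok });
  }

  return results; // What the privacy officer reports against.
}
```

Even in this simplified form, the hard parts are visible: every system has to be known, reachable, and able to identify the customer consistently, which is exactly what a network of well-defined data APIs is meant to provide.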


We used to think that if our business entities complied with the (slowly changing) local laws and regulations of the countries they trade in, then responsible company officers had met their legal obligations. Yet over the last decade, a plethora of data-centric legislation has been layered on top of the imperative for businesses to remain profitable in challenging times, conspiring to erode value:

  • The 2010 Foreign Account Tax Compliance Act (FATCA) overstepped sovereign boundaries, such that U.S. citizens residing in foreign domiciles were impacted. The cost of domestic compliance with U.S. FATCA in Australia, for example, was estimated in 2014 at between ~$500M and ~$1.1B over ten years.
  • The 2018 General Data Protection Regulation (GDPR) did the same thing with personally identifiable information (PII) held by domestic businesses. GDPR compliance is costly: across the U.S. and UK alone, Forbes called it out as “a Herculean racket” costing companies ~$10B.
  • The 2018 California Consumer Privacy Act (CCPA) is something of a U.S. counterpart to GDPR. The cost of compliance for businesses is estimated at ~$55B.
  • The ongoing Australian Government requirement to comply with the Consumer Data Right across the banking, energy, and telecommunications sectors is another point of potentially expensive compliance, with few immediate upsides for businesses in 2020.

Can the future cost of compliance with far-reaching legislation like this be softened by delivering on a sound operational data strategy, one that also lifts your ability to mobilize and profit from your own data?

Can operational, in-stream improvements in your data management practices improve the quality of the analytical insights that you already use to drive business performance?

This article explores the role of a risk-based, ‘heads-up’ data strategy as a key enabler of your organization’s digital strategy ambitions, and how MuleSoft’s Anypoint Platform can operationalize that data strategy as part of your ongoing digital transformation journey.

Why ‘heads-down’ data strategies (mostly) fail

A strategy is commonly understood to be “a plan of action designed to achieve a long-term or overall aim,” and a ‘heads-down’ data strategy is commonly understood as a set of enterprise-wide plans that address the following topics across all enterprise data:

  • Data management, including the importance of data quality and lineage
  • Common data standards including business glossaries and data integration standards
  • Broad and deep data stewardship and governance by business reps
  • Data security implemented, not only at the edge but everywhere 
  • Data privacy compliance policed by internal and external audits

This collection of theories and best practices has matured over decades, starting in the 1960s-1980s when IT was called data processing, and evolving alongside the expansion of businesses’ data holdings (big data), the movement to the cloud (digital transformation), and the rise of roles like Chief Data Officer and Chief Digital Officer, which have split off from the CIO role in larger organizations.

Let’s take data management as a strategy topic. 

The presumption, stemming from the 1960s, is that data can and should be managed. Prior to big data and the proliferation of web and mobile experiences, that meant all data. What does this mean in practice?

In the 1990s it meant that data should be digitized and held in relational databases with known table and field definitions. Yet companies still had mountains of paper forms that they used for data capture.

While it would be great if there were just one such database for each business, the explosion of information technologies across the enterprise meant that never happened. Instead, complexity increased, and data and its meaning became spread across heterogeneous data stores.

Critically, this increased, and continues to increase, the propensity for chaos in your corporate data. It also enabled fiefdoms of data to emerge, irrespective of any desire to centralize systems (like an ERP or data warehouse) and manage data centrally.

This ‘heads-down’ strategy, to centrally manage and control all data, doesn’t work. 

It doesn’t work because: 

  • Different business units have their own needs for data at varying times, with different quality criteria, customers, and operating tempos.
  • Different business units also have their own culture and objectives and data “languages.”
  • The layers of specialized tooling added to address these issues are not baked into all operational systems, and they increase fragility and complexity.
  • Businesses perceive ‘heads-down’ data management as heavy-weight, expensive, slow, and essentially a non-value adding overhead.

The accounting department has a month-end process which needs financially accurate data mapped and aggregated from people, sales, and manufacturing systems. Marketing needs customer data to conduct regular marketing campaigns. The auditing team needs six months’ worth of data every half year. Your customers need on-demand access to your product catalogue and experience data, and you need access to your suppliers’ procurement catalogue punchouts. Some data, like geospatial or radiography data, is only relevant within that domain’s bounded context.

Organizational demand to consume data for optimizing operational performance meant that a large array of expensive tools (like ETL and data streaming) made it easy to move large volumes of data around. This, unfortunately, opened the way for data to be altered or even “tampered” with, and then, because of its usefulness, for each copy to become another “source of truth.” Fragmented global definitions then led to the mistaken decision to attempt to centrally standardize on a single “hub and spoke” master data management solution.

While the five classic data strategy topics listed above are relevant, the actual core strategy questions that can drive real business performance are:

  • Do we attempt to manage, integrate, and secure all of our data centrally, or not?
  • Should we aim to achieve common standards for all data or just some?
  • Can we preserve controls for operational data only, and afford to let analytical data tolerate a degree of looseness?

In Part 2, we will explore the answers to these questions by outlining what an operationally baked-in, heads-up data strategy looks like, and how you can realize this as part of your MuleSoft application network. 

To learn more about how to build an API-centric data strategy for your organization, download our whitepaper.

