The pace of legislative change within data access governance continues to accelerate. By 2024, three-quarters of the world’s population will have their personal data covered under privacy-related regulations. Incoming laws and directives are likely to impact all areas relating to data access management.
These developments are happening at both an industry level and a regional level. The result is a fragmented reality, requiring multiple data access management tools to suit different jurisdictions.
Longer-term, a more unified approach may emerge within data access governance. The EU has had GDPR since 2018. China’s Personal Information Protection Law went into effect near the end of 2021. Meanwhile in the US, proposals for the federal-level American Data Privacy and Protection Act remain on the table.
In the short term, data owners have to adapt their strategies to factor in new regulations and amendments to existing data privacy regulations.
How more regulation causes more disorganization
The challenge lies in today’s decentralized and disorganized regulatory landscape.
Cloud computing has made it easier than ever to move data across regions. At the same time, keeping cross-border data flows compliant is one of the biggest challenges for those tasked with data governance and access control.
There’s also the impact on productivity and knowledge-sharing. Without a consistent and standardized approach, more regulation magnifies and compounds internal silos and barriers to democratizing data.
One major example is the transfer of personal data originating from the EU to outside the European Economic Area (EEA). This comes with a legal obligation to complete a Data Transfer Impact Assessment, as recommended by the European Data Protection Board in 2021. Data controllers or processors must “identify and implement appropriate supplementary measures where they are needed to ensure an essentially equivalent level of protection to the data they transfer to third countries.”
Organizations using AI for data and automation face similar control and access obligations. In the US, the proposed AI Bill of Rights calls for AI usage to become more transparent, responsible, and explainable.
Those operating internationally must think locally to ensure compliance and avoid penalties for data loss or breaches. It’s a departure from the past and its drive for globalization, harmonization, and lowering of trade barriers. Naturally, this means more work to ensure due diligence across territories.
“Adapting to regional challenges within global companies is imperative. Not all regions are built the same — geopolitical conflict, regulations, culture, staffing availability, and other world events greatly influence the rate of breaches and timely response.”
Forrester
Just to add more complexity, there’s also the data access security angle to factor in. Damage from cyberattacks is expected to be 300% higher in 2025 than it was in 2015. Wider attack surfaces from distributed teams and decentralized infrastructure mean more breach risks, threats, and vulnerabilities.
At the same time, organizations are tasked with staying compliant by adapting their data access strategies. Consider the process of developing new drugs in the US. Researchers are required to keep secure and traceable electronic records for every step of the process, as part of complying with 21 CFR Part 11.
Alongside meeting strict patient confidentiality rules, there has to be access for FDA regulators to assess if trials – and eventual licensing – can proceed. There are strict FDA penalties for non-compliance. These include fines running into hundreds of thousands of dollars, criminal prosecution, or publicly issued warning letters that can impact corporate reputations.
What’s more, the global average cost of a data breach is reportedly at an all-time high ($4.35 million), according to IBM. The costs of data breaches aren’t just financial, though: there’s also the impact on brand reputation, potentially limiting future growth.
“All of the complicated relationship-building and information-sharing is for naught if trust is immediately lost via a data breach or if critical infrastructure is left unprotected.”
Deloitte
Data control: The key to solving data access challenges
Of course, staying on top of regulations and threats is only part of the data challenge. It’s also about unlocking data access for employees who need real-time insights. Customers also need to know their data is secure and being used appropriately.
This question covers everything from behavior and culture through to infrastructure and technology: workflows need to be dynamically tracked, processes intelligently adapted, and insights made accessible to the right people at the right time. The answer is to find the correct balance between control and access. Below are some of the elements required.
Data discovery
Identifying the location of data is the crucial first step towards extracting data’s value.
For structured data this may be more straightforward. Volumes are more likely to reside in well-known ERP systems and CRM databases. Unstructured data may be more of a challenge. Volumes are more likely to be found in data lakes, remote applications, or platforms such as SharePoint and Dropbox.
“From 80% to 90% of data generated and collected by organizations is unstructured, and its volumes are growing rapidly — many times faster than the rate of growth for structured databases.”
MongoDB
This distributed and often siloed reality is why organizations have to look toward technology, rather than relying on manual preparation, mapping, and analysis.
In particular, data catalog systems use metadata to tag and bring together disparate repositories across multi-cloud or hybrid infrastructure. The goal should be to unify and centralize data, making it discoverable and accessible.
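As a rough illustration, here’s a minimal Python sketch of how a catalog might register datasets from different repositories behind a single, tag-searchable index. The CatalogEntry and DataCatalog structures, tag names, and locations are illustrative assumptions rather than any particular product’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One dataset registered in a central data catalog."""
    name: str
    source_system: str   # e.g. an ERP, CRM, data lake, or SharePoint site
    location: str        # URI or path pointing at the underlying data
    owner: str
    tags: set = field(default_factory=set)

class DataCatalog:
    """Unifies datasets from multiple repositories behind one searchable index."""
    def __init__(self):
        self._entries = {}

    def register(self, entry: CatalogEntry) -> None:
        self._entries[entry.name] = entry

    def find_by_tag(self, tag: str):
        return [e for e in self._entries.values() if tag in e.tags]

# Structured and unstructured sources become equally discoverable once tagged.
catalog = DataCatalog()
catalog.register(CatalogEntry(
    name="customer_orders",
    source_system="ERP",
    location="postgres://erp/orders",
    owner="finance",
    tags={"pii", "structured", "eu-region"},
))
catalog.register(CatalogEntry(
    name="support_transcripts",
    source_system="SharePoint",
    location="https://example.sharepoint.com/sites/support",
    owner="customer-success",
    tags={"pii", "unstructured"},
))

print([e.name for e in catalog.find_by_tag("pii")])
# ['customer_orders', 'support_transcripts']
```

Once the ERP table and the SharePoint library carry the same tag, a single query surfaces them together, which is the unification a catalog is meant to deliver.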
Validate & cleanse data
This step, also known as data scrubbing, involves auditing current repositories and data access management workflows to identify gaps, inaccuracies, duplications, and incomplete areas.
Start by testing for anomalies using the validate-identify-resolve approach. For example, create rules to uncover data items that have (see the sketch after this list):
- Never been changed – checking for out-of-date or irrelevant information
- Been changed to an extreme extent – outliers may be signs of data errors
- Two versions that are vastly different – identifies potential duplicates, inaccurate exceptions, or structural errors (such as incorrect capitalization)
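To make these three rules concrete, here’s a minimal Python sketch of a validate-identify-resolve pass over a set of records. The record fields (id, value, previous_value, created_at, updated_at) and the thresholds are assumptions chosen for illustration, not a prescribed schema.

```python
from datetime import datetime, timedelta

def find_anomalies(records, stale_after_days=365, outlier_ratio=10.0):
    """Flag records that match the three example rules above.

    Each record is assumed to be a dict with 'id', 'value', 'previous_value',
    'created_at', and 'updated_at' fields (illustrative shape only).
    """
    flagged = []
    now = datetime.now()
    for rec in records:
        reasons = []
        # Rule 1: never been changed -> possibly out of date or irrelevant
        if rec["created_at"] == rec["updated_at"] and \
                now - rec["created_at"] > timedelta(days=stale_after_days):
            reasons.append("never changed since creation")
        # Rule 2: changed to an extreme extent -> outlier, possible data error
        prev, curr = rec.get("previous_value"), rec.get("value")
        if isinstance(prev, (int, float)) and isinstance(curr, (int, float)) and prev:
            if abs(curr / prev) > outlier_ratio:
                reasons.append("extreme change between versions")
        # Rule 3: two versions vastly different -> duplicate or structural error
        if isinstance(prev, str) and isinstance(curr, str):
            if prev.lower() == curr.lower() and prev != curr:
                reasons.append("capitalization differs between versions")
        if reasons:
            flagged.append((rec["id"], reasons))
    return flagged
```

Each flagged item then feeds the resolve step: review the reasons, correct or archive the record, and adjust the rules if they produce too much noise.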
Crucially for governance, this should also include checking that the data held is relevant and lawful, and making sure there are the necessary levels of visibility and transparency for regulators and stakeholders.
Aligning data catalogs and related policies also boosts future efficiency and consistency. For example, it should be possible to make the original data accessible to users, instead of forcing them to duplicate data for sharing and usage.
Classify data
This requires implementing data classification schemes. These identify the data types being stored, processed and accessed. For example, the US government’s most restrictive access category is “Top Secret”. This is based on the likelihood of harm arising from unauthorized disclosure.
The level of granularity depends on your business and industry, although AWS warns that “over-classification can incur unwarranted expenses by putting into place costly controls that can additionally impact business operations. This approach can also divert attention to less critical datasets and limit business use of the data through unnecessary compliance requirements.”
Three or four categories are a good place to start: for example, public/unclassified, internal-only, confidential, and restricted (where unauthorized access means non-compliance and potential fines).
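As a hedged example, the sketch below shows one way such a four-level scheme might be expressed in code, with each dataset inheriting the highest sensitivity implied by any of its catalog tags. The level names, the tag-to-level mapping, and the classify helper are hypothetical.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    """A simple four-level classification scheme (labels are illustrative)."""
    PUBLIC = 0        # public/unclassified: safe to share externally
    INTERNAL = 1      # internal-only working data
    CONFIDENTIAL = 2  # limited to specific teams or roles
    RESTRICTED = 3    # unauthorized access means non-compliance and potential fines

# Hypothetical mapping from catalog tags to the minimum required classification.
TAG_CLASSIFICATION = {
    "public-report": Sensitivity.PUBLIC,
    "internal-metrics": Sensitivity.INTERNAL,
    "customer-contact": Sensitivity.CONFIDENTIAL,
    "pii": Sensitivity.RESTRICTED,
    "health-record": Sensitivity.RESTRICTED,
}

def classify(tags):
    """A dataset inherits the highest sensitivity implied by any of its tags."""
    return max((TAG_CLASSIFICATION.get(t, Sensitivity.INTERNAL) for t in tags),
               default=Sensitivity.INTERNAL)

print(classify({"internal-metrics", "pii"}).name)  # RESTRICTED
```

Ordering the levels numerically keeps the rule simple: when tags disagree, the most restrictive classification wins.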
Automate policy control
Traditionally, this would have been done via role-based or attribute-based forms of access control. However, managing access, policies, and authorization – particularly at scale – requires something more: automation.
This can be split into two parts:
- Setting policies for automation – define, group, and aggregate the policy entities that belong together
- Implementing and controlling the policy automation – control commands are issued and updated whenever there’s a change to versioning, regulations, or business strategies
Policies can then be automatically reviewed, updated and approved according to the latest data and access rules. Data access management policy definitions can be reused or adapted for other workflows and applications in the business. Actions and exceptions can be updated consistently, dynamically and intelligently – reducing the need for manual input.
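For illustration only, here’s a minimal Python sketch of the two parts described above: declarative, versioned policy definitions that can be grouped and reused, and an enforcement check that consults the latest version on every request. The AccessPolicy fields and the is_access_allowed function are assumptions, not the API of Velotix or any other policy engine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessPolicy:
    """A declarative, versioned policy definition that can be grouped and reused."""
    name: str
    version: str
    allowed_roles: frozenset
    max_sensitivity: int   # highest classification level the policy permits
    regions: frozenset     # jurisdictions where this policy applies

# Part 1: define, group, and aggregate the policy entities that belong together.
POLICIES = {
    "eu-analyst-access": AccessPolicy(
        name="eu-analyst-access",
        version="2.1",
        allowed_roles=frozenset({"analyst", "data-steward"}),
        max_sensitivity=2,
        regions=frozenset({"EU"}),
    ),
}

# Part 2: the enforcement point consults the latest policy version on every
# request, so a change in regulation or strategy only needs one policy update.
def is_access_allowed(policy_name, role, sensitivity, region):
    policy = POLICIES[policy_name]
    return (role in policy.allowed_roles
            and sensitivity <= policy.max_sensitivity
            and region in policy.regions)

print(is_access_allowed("eu-analyst-access", "analyst", sensitivity=1, region="EU"))     # True
print(is_access_allowed("eu-analyst-access", "contractor", sensitivity=1, region="EU"))  # False
```

Because the check reads the policy store at request time, updating a single policy entry propagates the change everywhere it is enforced.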
How to keep your data governance up to date
The costs of bad data governance are clear: regulatory fines, plus the average $12.9 million that poor data quality costs organizations. Complex and ever-evolving regulations mean updating policies and maintaining data integrity will never be a case of “set it and forget it.” Controlling and cataloging access to data will be a moving target for businesses wanting to grow.
That’s why investing in the Velotix platform is the answer. Its unique symbolic AI engine is dynamic, learning how and when to apply your data policies. Access decisions are made in minutes, with behavioral accuracy constantly improving.
Policies can be as complex as the legislation in your region and industry requires. Policy rules and exceptions can be automatically created and updated, freeing your team to focus on edge cases.
Your data protection policies are built and enforced, with self-service options that support control, access, and faster business. The payoff is improved efficiency, automated governance, and simplified data access and control, no matter how complex the regulations become.
Contact us today to find out how Velotix can be your platform for data governance success.