Why the sovereign cloud matters in enterprise communications
Decentralized data storage was much easier to manage before data was launched into the clouds. Now, geographic and regulatory concerns abound.
Deploying technology is ultimately about deploying change intended to improve processes, productivity, or both. However, as has been observed countless times, technology improves much more quickly than the regulatory structures designed to place guardrails around it. This may never have been truer than it is today, with rapid, ubiquitous AI deployment across virtually every aspect of our lives. But the bottom line is this: Within an organizational structure, regardless of the industry, a key component of its evolution is retaining compliance with existing legal and regulatory strictures. It’s all about compliance, even if the regulations no longer fit the systems they were designed to manage.
While the phrase “sovereign cloud” is fairly new, the concept is easy to understand and devastatingly difficult to enforce. “Sovereign cloud” covers the “residence” of certain data that its owner is obligated, by law, to keep physically within a specific geographic or national boundary. In the age of the cloud, who really knows where anything physical resides, or when it moved from one place to another? And this is precisely the challenge, as certain data, as a matter of law or contract, must remain within prescribed geographic areas. This was much easier back in the day when data was not housed in the cloud.
Essentially, sovereign cloud consists of four separate components of digital sovereignty:
1) Data residency, that is, where the data is physically located;
2) Data privacy, which addresses who should and should not have access to said data;
3) Data security, which concerns how safe the data is from those who are not authorized to access it; and
4) Legal controls and obligations, which govern the level of security and integrity required to protect the subject data from those who are not authorized to reach it, let alone use it.
It’s also important to note that legal requirements go beyond the physical storage of databases. They can require not only that the data be processed within a defined geographic area but also, in some cases, that those who have access to the regulated data be citizens of a particular jurisdiction or hold prescribed security clearances.
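To make these components concrete, what follows is a minimal, purely illustrative sketch in Python of how an organization might encode such constraints as a pre-access policy check. Every name here (ResidencyPolicy, AccessRequest, check_access, and their fields) is a hypothetical construct invented for this sketch; in practice these rules live in cloud IAM policies, contracts, and audit tooling rather than in a single function.

```python
from dataclasses import dataclass

# Hypothetical, illustrative model of sovereign-cloud constraints.
# Names and fields are assumptions for this sketch, not a real API.

@dataclass
class ResidencyPolicy:
    allowed_regions: set[str]                 # 1) data residency: where data may live and be processed
    authorized_roles: set[str]                # 2) data privacy: who may access it
    required_clearance: str | None = None     # 4) legal controls: e.g., security clearance
    required_citizenship: str | None = None   # 4) legal controls: e.g., citizenship

@dataclass
class AccessRequest:
    storage_region: str        # where the data is physically stored
    processing_region: str     # where the data would be processed
    role: str                  # requester's role
    clearance: str | None = None
    citizenship: str | None = None

def check_access(policy: ResidencyPolicy, req: AccessRequest) -> list[str]:
    """Return a list of violations; an empty list means the request is compliant."""
    violations = []
    if req.storage_region not in policy.allowed_regions:
        violations.append("data stored outside allowed regions")
    if req.processing_region not in policy.allowed_regions:
        violations.append("data processed outside allowed regions")
    if req.role not in policy.authorized_roles:
        violations.append("requester role not authorized")
    if policy.required_clearance and req.clearance != policy.required_clearance:
        violations.append("requester lacks required clearance")
    if policy.required_citizenship and req.citizenship != policy.required_citizenship:
        violations.append("requester lacks required citizenship")
    return violations

# Example: EU-resident data that would be processed in the U.S. gets flagged.
policy = ResidencyPolicy(allowed_regions={"eu-west-1", "eu-central-1"},
                         authorized_roles={"analyst"},
                         required_citizenship="DE")
request = AccessRequest(storage_region="eu-west-1", processing_region="us-east-1",
                        role="analyst", citizenship="DE")
print(check_access(policy, request))  # ['data processed outside allowed regions']
```

Note that component 3, data security, is deliberately absent from the sketch: safeguarding the data itself (encryption, key management, hardening) is an engineering discipline of its own, not a per-request check.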
According to Elizabeth English, Founder of Global Tech and EE Associates, “The push-pull dynamic of globalization is back, demanding that global business adhere to tightening local requirements. This creates a collision point: the expansive, borderless nature of modern cloud computing and AI are meeting stricter in-country data autonomy laws.”
English added, “While Sovereign Cloud provides a necessary compliance framework to navigate this tension, the implementation itself is a daunting and challenging series of processes—especially on the financial side. However, committing to this framework and deploying steps to make it both viable and robust are worthwhile long-term strategic investments, positioning the organization for resilience regardless of whether the global economy is in a phase of expansion or contraction.”
Legally, in the U.S., the 2018 CLOUD Act (Clarifying Lawful Overseas Use of Data) amends the Stored Communications Act of 1986. Long before the advent of an official “sovereign cloud,” Congress and the federal government saw that vulnerabilities existed in the way large quantities of data are stored and processed, particularly when either or both occur outside U.S. boundaries. The revised act created useful tools for law enforcement to access electronic data irrespective of where on the globe that data is stored. In other words, from a legal perspective, the CLOUD Act assists with determining the appropriate jurisdiction when litigation occurs, particularly litigation related to data breaches.
While the U.S. has not taken the lead in these matters, the EU most certainly has, first with the GDPR and then with the EU AI Act. Both pieces of landmark legislation, armed with powerful enforcement tools (read: financial penalties), have defined terms and conditions for the EU as a whole. In addition, specific countries within the EU have their own regulations, which may be more specific, and potentially more onerous, than those of the EU itself.
In the case of the EU Artificial Intelligence Act, a four-level pyramid of risk has been created, with activities in the “high risk” category warranting the most attention. We addressed this in posts from December 2023 and October 2024 on the EU Artificial Intelligence Act, but what follows is a quick review.
As defined by the EU AI Act, the lowest level, “Minimal Risk,” covers systems such as spam filters and video games; here, only voluntary codes of conduct apply.
The second level of risk is categorized as “Limited Risk.” Included in this level are emotion recognition, biometric categorization, and AI-generated and manipulated content. Here, deployers must make individuals aware that such systems are in place and in use.
The third level, “High Risk,” triggers the deployer obligations defined in Articles 26 and 27 of the EU AI Act: relying on the provider’s instructions for actual use, monitoring and reporting issues, retaining logs, providing human oversight, and assessing whether any fundamental rights are, or could be, impacted by the AI system in question. Systems that handle biometric data or that relate to critical infrastructure, education, and employment and workforce management all fall into this category.
Deployers of high-risk AI systems must commit, interestingly, to use such systems “in accordance with the provider’s instructions for use,” and further agree to cooperate with law enforcement and other governmental authorities as requested. Such commitments to extensive and recorded oversight may go a long way in preventing unauthorized access to AI systems and data.
The highest level of risk is categorized as “Unacceptable,” and the EU AI Act explicitly prohibits such systems. This category covers subliminal, manipulative, and deceptive systems; systems that exploit vulnerabilities; systems that utilize or access facial recognition databases; systems that infer emotions; and systems that perform biometric categorization and social scoring.
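As a rough aide-mémoire, the tiered structure just described can be sketched as a simple lookup table. This is an abbreviated illustration drawn from the summary above, not the Act’s legal text; the tier names, example systems, and obligation strings are simplifications invented for this sketch.

```python
# Illustrative summary of the EU AI Act's four-level risk pyramid,
# abbreviated from the review above; not legal text.
EU_AI_ACT_RISK_TIERS = {
    "minimal": {
        "examples": ["spam filters", "video games"],
        "obligations": ["voluntary codes of conduct"],
    },
    "limited": {
        "examples": ["emotion recognition", "biometric categorization",
                     "AI-generated or manipulated content"],
        "obligations": ["inform individuals that the system is in use"],
    },
    "high": {
        "examples": ["biometric data", "critical infrastructure",
                     "education", "employment and workforce management"],
        "obligations": ["follow provider's instructions for use",
                        "monitor and report issues", "retain logs",
                        "provide human oversight",
                        "assess fundamental rights impact"],
    },
    "unacceptable": {
        "examples": ["subliminal or manipulative systems",
                     "exploitation of vulnerabilities", "social scoring"],
        "obligations": ["prohibited outright"],
    },
}

def obligations_for(tier: str) -> list[str]:
    """Look up the (abbreviated) deployer obligations for a given risk tier."""
    return EU_AI_ACT_RISK_TIERS[tier]["obligations"]

print(obligations_for("high"))
```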
It is still too early to tell how significantly this AI risk framework will impact enterprise communications offerings, whether in the EU or in the United States.
However, now that we’ve been refreshed on the risks of AI, how does the sovereign cloud fit in? The concept exists to minimize the risks to sensitive information, risks that have been defined both in industry regulations and in broader national or international statutes. As a legal and contractual matter, the sovereign cloud is very specific and not at all cloudy. It’s another example of how the devil really can be found in the details, both contractually and legally.
Originally posted on December 4, 2025 in No Jitter