European data autonomy
Dependence on hyperscalers
We talk about AI sovereignty.
And store our most sensitive company data with US providers.
We discuss:
- GDPR
- EU AI Act
- Digital sovereignty
- Dependence on hyperscalers
And yet we still build our data architecture entirely on US stacks.
I work with international technologies myself, but strategically we have to ask ourselves one question:
Do we want data autonomy, or just low entry prices?
The example of StackIT (the cloud offering of the Schwarz Group, Lidl/Kaufland) shows that a European counter-model is emerging. But it also shows how hard this is: not all old habits have been broken yet (see the contract with Google).
No one claims this is easy. But as an industry we must keep pushing so that at least our data management, structures, and processes are set up to keep dependency to a minimum.
Therefore:
- Model data around intelligent business-domain models instead of mirroring the technical metadata of source systems.
- Rely on CNCF.io and other open standards instead of hyperscaler-internal processes.
- Build on structured data platforms and open table formats instead of proprietary storage.
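The point of the last recommendation is portability: if data lives in an open, documented format, any engine can read it and no single vendor holds it hostage. A minimal sketch of the idea, using Python's built-in SQLite (an open, fully documented file format) as a stand-in for open table formats such as Parquet or Apache Iceberg; the table and values here are purely illustrative:

```python
import sqlite3

# Keep analytical data in an open, documented format so it stays
# readable by any tool, not just one vendor's stack. SQLite is used
# here only because it ships with Python; in a real data platform
# this role is played by open table formats (Parquet, Iceberg, ...).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("DE", 120.0), ("FR", 95.5)],
)

# Any SQL engine that understands the format can run this query --
# that interchangeability is what "open" buys you.
total = conn.execute("SELECT SUM(revenue) FROM sales").fetchone()[0]
print(total)
conn.close()
```

The same portability argument applies one level up: an open table format plus an open catalog lets you swap the query engine without migrating the data.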
Be GDPR-compliant.
Think European.
Work with partners like Exasol.
Exasol is an analytics database from Nuremberg.
It has been on the market for over 15 years.
Technologically extremely strong.
From a business-risk perspective, I see this as a real opportunity.
What do you think?
Is European data sovereignty realistic, or is it ultimately just a political pipe dream?