“We live in a change of era – not an era of change.”
Stefan Schnorr, State Secretary at the German Federal Ministry for Digital and Transport
As German State Secretary Stefan Schnorr put it so well, 2022 has been a transformative year – in tech as well as in geopolitics. The Russian war on Ukraine has drastically illustrated how technology can be weaponized for geopolitical aggression.
It is more apparent than ever that some form of tech regulation is needed to protect not only citizens but also the ideals of our democracies. But how can lawmakers regulate emerging technologies in a digital sphere where jurisdictional borders don’t exist?
Sovereignty and security can only be maintained if countries with aligned values and ideals – like Germany and the US – work together.
At the Transatlantic Summit, industry thought leaders, policymakers, and researchers discussed the most pressing issues of our digital age.
The core questions were:
- How can governments find the right balance between regulation and innovation?
- How can and must countries with similar values collaborate to solve the challenges of the (next) digital era?
- How can we harness the potential of new technologies to solve the world’s biggest problems?
Here are some of the solutions discussed:
Open-Source Strategy as a Means to Regulate
The generative AI solutions that burst onto the scene in 2022 are just the latest example of code being made available to the public. While it is still debated whether this is more beneficial or harmful to society, we need to acknowledge that open-source projects are the backbone of many technologies. We should therefore look at open source as a means of regulation.
99% of Fortune 500 companies already use open-source solutions, and 85% of IT departments report that they plan to increase their use of open-source software. The popularity of open-source technology makes sense, since it is a great way to self-regulate technologies and foster digital sovereignty:
- Increased security: With open-source technologies, the source code is available for anyone to review. This means that any vulnerabilities or security weaknesses can be quickly identified and addressed by the community.
- Improved reliability: Open-source technologies are typically built and maintained by a diverse community of volunteers. This means that there are many people working to ensure that the technology is reliable and of high quality.
- Control over data: With open-source technologies, individuals and organizations can control their own data rather than relying on proprietary software vendors.
- Independence from vendors: By using open-source technologies, organizations can avoid vendor lock-in and have more control over their technological infrastructure.
- Ability to customize: Since the source code is publicly available, users have the ability to modify and customize the technology to fit their specific needs.
- Stronger communities: Open-source technologies often have strong communities of users and developers who contribute to the project. This can lead to the creation of a supportive and collaborative environment.
Despite this large-scale adoption, there is an imbalance between open-source users and contributors: 64% of the most heavily used projects on GitHub are regularly maintained by only one or two developers. In the physical world, this would be like a bridge used by hundreds of daily commuters but receiving no maintenance – a real security threat.
Europe, and Germany specifically, have been trying to strengthen the open-source ecosystem by supporting initiatives like the Sovereign Tech Fund, which promotes open digital infrastructure for more security and digital sovereignty.
Creating Sandboxes for Regulations
The challenge of all tech regulation is finding the balance between ensuring security and fostering innovation and business growth. Legislators often struggle to develop a deep understanding of all aspects of emerging technologies and the systems they are operating in.
In Europe, lawmakers tend to focus on potential risks first and regulate technologies before they can cause harm. In the US, entrepreneurs typically build technologies before the government steps in to regulate them.
A middle ground could be to establish regulatory sandboxes where entrepreneurs, lawmakers, and researchers can experiment with potential outcome scenarios. This way, use cases and risk cases are weighed equally, and regulators get the chance to develop a deeper understanding of the technologies, allowing them to better tailor their laws to the actual implications.
Leveraging Blind Spots in AI to Solve the World’s Biggest Problems
AI took a huge leap last year. Apart from being predicted to create over 100 billion dollars in value, it also has the potential to solve the world’s biggest problems. But the models are only ever as good as the data they are trained on and the humans training them.
On top of that, most AI funding goes to growth-generating business models. Some of the world’s biggest problems, however, cannot be translated into profitable business models. Many experts say the reason OpenAI was able to overtake the tech giant Google is that its technology didn’t have to fit into existing corporate structures and revenue models. If we only look for the most profitable business case, we will miss the blind spots – the big problems – that AI could help to solve.
Some of those blind-spot areas could be:
- Healthcare: AI can be used to analyze medical data, identify patterns, and make recommendations for treatment. This can help improve healthcare outcomes and make care more efficient and cost-effective.
- Climate change: AI can analyze climate data and make predictions about future weather patterns, which can help us better prepare for and mitigate the impacts of climate change.
- Education: AI can personalize learning and provide customized educational experiences for students.
- Poverty: AI can identify patterns and trends in economic data and make recommendations for policies that could help reduce poverty.
- Human rights: AI can analyze data related to human rights violations and identify patterns that help prevent such violations.
- Environmental protection: AI can be used to monitor wildlife populations and identify patterns that help protect and conserve endangered species.
- Disaster response: AI can analyze data related to natural disasters and predict where and when they are likely to occur, which can help improve disaster response efforts.
Our governments could help shed light on these blind spots by funding more research and projects that use AI to develop solutions without a profitable business model behind them. This would also encourage young professionals to explore blind-spot areas and boost their motivation to tackle the big problems.
Digital Platform Governance to Fight Misinformation
“We can’t resolve misinformation, but we can try to manage it.”
Mis- and disinformation are among the biggest threats to our democracies. With the Digital Services Act (DSA), the EU is tackling this threat by defining rights and obligations for online service providers, users, and other stakeholders. This may include requirements for service providers to implement measures such as fact-checking. Moreover, the DSA seeks to restrict access to content that is found to be misleading or harmful. Digital service providers must therefore set up a mandatory risk assessment board that re-assesses content on a regular basis, and they face heavy fines and penalties if they don’t comply.
In the US, legislators are prioritizing freedom of speech. Fearing that too much moderation could suppress the voices of minorities, they are trying to strike a balance between too much and too little moderation. That is why mechanisms like regulated algorithmic down- and up-ranking are so controversially debated.
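To make the idea of down-ranking a bit more concrete, here is a minimal, purely hypothetical sketch in Python. The post fields, the misinformation score, the threshold, and the penalty factor are all assumptions made for this illustration; it does not describe how any real platform ranks content.

```python
# Hypothetical sketch of algorithmic down-ranking: posts flagged as likely
# misinformation are not removed, but pushed lower in a feed ranking.
# Scores, threshold, and penalty factor are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    engagement_score: float      # baseline relevance signal (e.g. likes, recency)
    misinformation_score: float  # 0.0 = credible, 1.0 = very likely misinformation

def rank_feed(posts: list[Post], threshold: float = 0.7, penalty: float = 0.2) -> list[Post]:
    """Sort posts by engagement, but dampen those above the misinformation threshold."""
    def effective_score(post: Post) -> float:
        if post.misinformation_score >= threshold:
            return post.engagement_score * penalty  # down-rank instead of delete
        return post.engagement_score
    return sorted(posts, key=effective_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("a", engagement_score=9.0, misinformation_score=0.9),
        Post("b", engagement_score=5.0, misinformation_score=0.1),
        Post("c", engagement_score=7.0, misinformation_score=0.3),
    ]
    for post in rank_feed(feed):
        print(post.id)  # prints c, b, a: the viral but flagged post drops to the bottom
```

The point of the sketch is the design choice at the heart of the debate: flagged content loses reach instead of being deleted, which is exactly what makes such mechanisms attractive to some regulators and worrying to free-speech advocates.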
We need a multifaceted approach that doesn’t only aim to regulate the giants in the field. Regulators on both sides of the Atlantic tend to focus too much on regulating Facebook without considering the implications for smaller platforms, like GitHub or Reddit, which generally already do a great job of content moderation.
Collaboration Is the Way to Digital Sovereignty
Discussing solutions with diverse stakeholders from both sides of the Atlantic has shown how important collaboration across borders is. Data doesn’t stop at national borders, so regulation won’t be effective unless it takes the form of a collaborative framework between countries aligned on the goals and ideals they want to protect.
Digital sovereignty can and must be achieved through collaboration. That’s why we need more platforms like the Transatlantic Summit, where experts, researchers, industry leaders, lawmakers, and other stakeholders can come together to exchange ideas and work on solutions – much like a sandbox with transatlantic players.
Get involved!
If you liked this article and want to stay up to date on the transatlantic dialogue on tech, business, and the digital economy, please subscribe to our newsletter and follow us on LinkedIn.