Testing AI in Smart Cities and Communities

AI faces many barriers in Europe today – here are 5 of the biggest hurdles

Image credit: Google DeepMind via Pexels

Despite its promise, AI faces several significant barriers that hinder its widespread adoption. Based on insights from AI innovators and experts, here are some of the biggest hurdles standing in the way of AI development and implementation in Europe today.

“Even the best technological solution will fail if it is not matched with concrete local problems and contexts. We need to foster a culture where public institutions feel confident to explore AI's potential, not fear it. By creating real-world testing environments and clearer regulations, Europe can unlock a future where AI drives sustainable societal progress.”
Dimitri Schuurman, lead researcher of a new paper on AI Testing & Experimentation Facilities.

#1 | Innovation aversion: a hesitant ecosystem

One of the top barriers identified by experts is innovation aversion—the reluctance of cities and communities to embrace new technologies. Many local governments and public organisations lack the expertise needed to fully understand AI, leading to cautious, slow, and often unclear decision-making. This hesitation stems from concerns over data privacy, cost, and the potential risks of AI, such as job displacement or ethical dilemmas. Without adequate support and education, cities are slow to adopt even promising AI innovations, which hampers progress across Europe.

Solution: To overcome this barrier, real-life demonstrations of AI applications, such as those provided by CitCom.ai and the TEFs, are crucial. Enhanced innovation support systems are also needed: cities require assistance in evaluating the risks and rewards of AI, coupled with initiatives that reduce the fear of innovation. By fostering confidence and providing clearer guidelines, Europe can accelerate AI adoption.

#2 | Lack of regulatory sandboxes

Though regulatory sandboxes are highly sought after, they remain underdeveloped and underutilised across Europe. Regulatory sandboxes provide a controlled environment where innovators can experiment with AI technologies while complying with legal constraints. These sandboxes help innovators and regulators alike better understand how AI functions in real-world conditions. However, their inconsistent availability and underuse have left a gap in the regulatory landscape.

Solution: A coordinated effort across Europe to develop and expand regulatory sandboxes would provide AI innovators with the much-needed space to test and refine their technologies within legal frameworks. This would bridge the gap between innovation and regulation, fostering a safer and more innovative AI ecosystem.

#3 | Complex regulations: navigating the legal maze

The rapidly evolving regulatory landscape is another major obstacle for AI developers. As Europe leads the way in AI regulation, including the new AI Act, innovators face challenges in understanding, complying with, and keeping up with complex laws such as GDPR, the Data Act, and AI-specific regulations. The complexity of these regulations creates uncertainty and increases the administrative burden for companies, especially small and medium-sized enterprises (SMEs), which often lack the resources to navigate such legal frameworks.

Solution: AI innovators need better regulatory support to help them understand and comply with both national and European Union legislation. Providing clearer guidance on AI-related regulations, particularly through regulatory sandboxes, would allow innovators to test and deploy their technologies without the fear of unintentionally violating legal requirements. Support systems tailored to AI startups and SMEs could alleviate much of the regulatory pressure.

#4 | Data silos: the fragmentation problem

Another critical barrier is the prevalence of data silos. AI thrives on vast amounts of structured and accessible data, yet in Europe, data is often scattered across different systems and sectors, making it difficult to access and use. Innovators report that data is frequently unstructured, incomplete, and fragmented across organisations. This fragmentation restricts the potential for innovation, as AI systems struggle to learn and improve without a consistent and reliable flow of information.

Solution: The creation of common data-sharing standards and frameworks would significantly improve data interoperability. By establishing clear rules for data exchange and building robust frameworks for data sharing, Europe can unlock the full potential of AI innovation. Standardisation would enable AI systems to access richer datasets, resulting in more effective and impactful AI solutions.
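To make the idea of a common data-sharing standard a little more concrete, here is a minimal sketch in Python (not taken from the paper; the schema, field names and sensor record are hypothetical examples) of how a data provider could check a record against an agreed structure before publishing it for AI use.

# Illustrative sketch: validating a sensor record against a shared data standard.
# The schema and record below are hypothetical examples, not an official CitCom.ai format.

SHARED_SCHEMA = {
    "sensor_id": str,   # unique identifier agreed across cities
    "timestamp": str,   # ISO 8601, e.g. "2024-05-01T12:00:00Z"
    "pm2_5": float,     # particulate matter, micrograms per cubic metre
    "location": str,    # municipality code
}

def conforms_to_standard(record: dict) -> bool:
    """Check that a record contains every agreed field with the agreed type."""
    return all(
        field in record and isinstance(record[field], expected_type)
        for field, expected_type in SHARED_SCHEMA.items()
    )

record = {"sensor_id": "ghent-042", "timestamp": "2024-05-01T12:00:00Z",
          "pm2_5": 8.3, "location": "BE-GHE"}
print(conforms_to_standard(record))  # True: the record can be exchanged as-is

In practice, checks like this would sit inside the shared data framework itself, so that every dataset entering a common data space already follows the agreed structure.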

#5 | Data availability: the 'data hunting' challenge

Finally, while AI thrives on data, many innovators struggle with data availability. In addition to fragmented data silos, some AI innovators report difficulties in finding the right datasets for training and testing AI models. This “data hunting” problem further slows the development of AI technologies, as the absence of relevant and high-quality data means algorithms cannot be properly trained to meet the needs of real-world applications.

Solution: Providing easier access to real-world data from public and private sources is key. Innovators would benefit from well-maintained, standardised, and widely accessible datasets that enable them to train and validate their AI systems more effectively. Open data initiatives and partnerships between public and private sectors could unlock valuable datasets that drive AI innovation forward.
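As a simple illustration of what easier access could look like in practice, the sketch below (Python; the portal URL and column names are hypothetical placeholders) downloads an openly published CSV file and reports how much of it is complete enough to use for training an AI model.

# Illustrative sketch: pulling an openly published dataset and checking whether it is
# complete enough for model training. The URL is a hypothetical placeholder, not a real portal.

import csv
import io
import urllib.request

OPEN_DATA_URL = "https://example.org/open-data/air-quality.csv"  # hypothetical placeholder

def load_open_dataset(url: str) -> list[dict]:
    """Download a CSV file and return its rows as dictionaries."""
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

def completeness(rows: list[dict], required: list[str]) -> float:
    """Share of rows in which every required field is present and non-empty."""
    usable = sum(1 for row in rows if all(row.get(field) for field in required))
    return usable / len(rows) if rows else 0.0

rows = load_open_dataset(OPEN_DATA_URL)
print(f"{completeness(rows, ['timestamp', 'pm2_5']):.0%} of rows are usable for training")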

Discover more about the research behind this article in our paper: Testing & Experimentation Facilities: Exploring the link with AI Regulatory Sandboxes, Living Labs & AI Testbeds.