An enabling environment for AI in science

A recent Mutual Learning Exercise identifies barriers to the adoption of AI in science and provides recommendations on how these can be addressed effectively.

The European Commission (EC) recently published its first thematic report, Framework conditions and funding for AI in Science: Mutual Learning Exercise on National Policies for AI in Science. AI in science is among the EC’s new priorities, in line with its plans to transform the EU into an AI continent (see SwissCore article). The EU’s AI Continent Action Plan explicitly calls for the development of an AI in science strategy, which “will seek to accelerate AI adoption and steward its use to preserve scientific integrity and rigour.” The associated consultation on AI in Science is open until 5 June 2025. The Mutual Learning Exercise (MLE) provided a forum for Member States and Horizon Europe Associated Countries to share their approaches to advancing the adoption of AI in science, and the barriers identified in the report will form the main building blocks of the AI in Science Strategy.

Although AI has proven to be an enabler of scientific discovery, its use varies greatly across domains due to a number of barriers. The report highlights five key obstacles: (1) compute infrastructure; (2) skills and talent; (3) data availability and quality; (4) interdisciplinary collaboration; and (5) trust and trustworthiness. These barriers are not insurmountable, however, and lessons can be drawn from successful AI in science initiatives to design future funding interventions. Although more than 60 national AI strategies or policies exist, each reflecting national priorities and approaches, common themes are identifiable, such as investment in R&D and innovation infrastructure, AI’s impact on labour markets, and AI governance. And while most national strategies do not mention AI in science explicitly, support can be implicit: investment in AI R&D, the identification of priority sectors for AI development, and certain policy interventions can all promote the adoption of AI in science. To address these barriers meaningfully, the EC identifies the need for an integrated approach connecting funding mechanisms, framework conditions, and on-the-ground adoption of AI in science.

Funding is key to advancing AI and its capabilities, and numerous approaches to funding AI exist. Given the novelty of the field, it is difficult to evaluate with certainty whether, for example, mission-oriented funding initiatives are more effective than public-private partnerships or large-scale investments. Drawing on successful national initiatives, the MLE highlights key factors that policymakers should consider when developing funding frameworks for AI in science: (1) balancing bottom-up and top-down funding approaches; (2) supporting interdisciplinary collaboration, given the nature of AI in science; (3) recognising different types of innovation and the support each requires; (4) understanding infrastructure needs, attending to human capital as well as large-scale infrastructure investments; and (5) enabling flexibility and sustainability by focusing funding schemes on desired outcomes and milestones, to account for the speed of change in AI. Through case studies on how AI in science has been institutionalised, the report shows that “successful AI adoption in science requires coherent, multi-faceted funding interventions”, which must address the technical, human, and cultural dimensions of AI in science and allow local flexibility in how barriers to adoption are overcome. The MLE therefore offers three recommendations for designing funding frameworks: (1) develop comprehensive funding portfolios; (2) balance investments between technical and human infrastructure; and (3) design funding instruments that enable interdisciplinary collaboration.

Furthermore, promoting certain framework conditions can support the responsible integration of AI, and policymakers should focus on: (1) targeting upcoming policy interventions at bridging the gap between governance frameworks and practice; (2) creating practical implementation tools; and (3) establishing anticipatory governance mechanisms. Firstly, policies and ethical frameworks alone are insufficient to drive responsible AI adoption in science: a gap exists between “high-level ethical principles and the practice of AI in science.” In a recent survey, 54% of scientists considered ethical concerns a barrier to AI adoption; researchers must be prepared to reflect on the consequences of using AI, while institutions must be capable of providing the resources needed for responsible AI implementation. Secondly, practical implementation tools are needed to translate open science frameworks, such as the European Open Science Cloud (EOSC), into practice. Open science is an enabler of AI in science, as elements of open science frameworks – for example, open access publications, open data initiatives, and open-source software development – increase the transparency and accessibility of AI in science research. Thirdly, given the rapid pace of technological and AI developments, anticipatory governance can help policymakers respond to new opportunities and challenges in AI in science. Combining these three elements, therefore, creates the framework conditions necessary for responsible AI integration.

Finally, taking an ecosystem approach to increase AI adoption in science is crucial, as “AI for science sits at the intersection of multiple policy domains”. Because different ministries or agencies are usually responsible for different policy agendas, cross-government coordination and policy coherence are essential; only then can effective support structures for AI in science be developed. To build an effective AI for science ecosystem, the report thus recommends: (1) establishing cross-government coordination mechanisms; (2) supporting communities of practice; and (3) designing policy feedback loops.

AI in science presents opportunities for innovation, but policymakers must focus on creating an enabling environment to reap the benefits. A coordinated approach – combining funding frameworks, governance mechanisms, infrastructure development, and talent strategies – is crucial. The report’s recommendations for supporting AI adoption in science provide a good basis for creating this environment.