The JRC’s report sheds light on the opportunities and challenges AI creates at each step of the scientific research process.
Since Francis Bacon formalised it in the 17th century, the scientific method has offered a structured rhythm: ask a research question, conduct background research, formulate hypotheses, test them through experimentation, analyse the results, and communicate the findings. The framework has not changed, but the tempo has, as AI plays an increasingly important role in how researchers frame questions, scan the literature, design and run experiments, and analyse and communicate results. The question is not ‘if’ AI belongs in science, but ‘how’ it is used and governed. The European Commission’s Joint Research Centre (JRC) explores this question in its report on The Role of Artificial Intelligence in Scientific Research, which serves as the scientific basis for the European Strategy for AI in Science.
Across each step of the scientific process, the JRC report maps concrete uses and the policy dependencies they trigger. The report explores where AI adds value, where it adds risk, and what the EU must put in place to make the most of it.
At the very start, AI can act as an “idea generator”, synthesising knowledge across fields to sharpen questions and surface blind spots; the benefit is speed and breadth, but the risk is that models over-index on what is already well documented, narrowing inquiry. For background research, large language models, citation networks and natural language processing (NLP) tools compress weeks of reading into hours of synthesis. Here, open science matters: access to high-quality, well-documented datasets and models is what keeps reviews comprehensive and reproducible rather than skewed or opaque.
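To make the mechanics concrete, here is a minimal sketch of the kind of NLP-assisted literature triage the report describes: ranking paper abstracts against a research question by textual similarity. TF-IDF stands in for the richer embeddings an LLM-based tool would use, and the abstracts and query are invented placeholders, not real papers.

```python
# Minimal sketch: rank abstracts against a research question by similarity.
# TF-IDF is a stand-in for the LLM embeddings a real tool would use;
# the abstracts below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {
    "paper_a": "Deep learning classifiers for transit detection in telescope light curves.",
    "paper_b": "Survey of citation network analysis for mapping research fields.",
    "paper_c": "Reproducibility of machine learning pipelines in computational biology.",
}
query = "Which machine learning methods detect planetary transits in light curves?"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(abstracts.values()) + [query])

# Cosine similarity between the query (last row) and each abstract.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for (paper, _), score in sorted(zip(abstracts.items(), scores),
                                key=lambda pair: -pair[1]):
    print(f"{paper}: {score:.2f}")
```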
Hypothesis building is where AI’s pattern-finding shines. By spotting non-obvious links and proposing testable relationships, AI broadens the space of plausible ideas. But that only works if teams combine domain depth with data and AI skills. The report is clear that hybrid, interdisciplinary teams are the engine of reliable AI-assisted science.
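The report does not single out one technique, but a common pattern behind such hypothesis generation is link prediction on a knowledge graph: concepts that share many neighbours yet are not directly connected are flagged as candidate relationships worth testing. The sketch below applies networkx’s Jaccard coefficient to an invented toy graph; real systems mine far larger graphs from the literature.

```python
# Minimal sketch: propose candidate hypotheses as missing links in a
# co-occurrence graph of concepts. The graph below is invented for
# illustration; real systems build such graphs from the literature.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("gene_x", "protein_y"), ("protein_y", "disease_z"),
    ("gene_x", "pathway_p"), ("pathway_p", "disease_z"),
    ("drug_d", "pathway_p"),
])

# The Jaccard coefficient scores non-adjacent pairs by shared neighbours:
# a simple proxy for "plausible but untested" relationships.
candidates = nx.jaccard_coefficient(G)
for u, v, score in sorted(candidates, key=lambda t: -t[2]):
    if score > 0:
        print(f"hypothesis: {u} -- {v}  (score {score:.2f})")
```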
When experiments begin, AI helps design and automate parts of the pipeline. According to the report, AI is used to select and automate the most promising avenues of investigation. This enables quicker testing, expands experimental scope and simplifies complex studies, accelerating scientific breakthroughs overall.
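Again the report stays method-agnostic, but a standard way to “select the most promising avenues” is Bayesian-optimisation-style experiment selection: a surrogate model predicts outcomes across candidate settings, and an acquisition rule picks which experiment to run next. The sketch below assumes a toy objective function and an upper-confidence-bound rule purely for illustration.

```python
# Minimal sketch: a surrogate model picks the most promising experiment
# to run next. The objective is a toy stand-in for a real, expensive
# experiment; the acquisition rule is an illustrative choice, not the
# report's method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_experiment(x):
    # Toy "laboratory": an unknown response with measurement noise.
    return -(x - 0.6) ** 2 + 0.05 * np.random.randn()

rng = np.random.default_rng(0)
candidates = np.linspace(0, 1, 101).reshape(-1, 1)
X = candidates[rng.choice(len(candidates), 3, replace=False)]
y = np.array([run_experiment(x[0]) for x in X])

for _ in range(5):
    # alpha models observation noise in the surrogate.
    gp = GaussianProcessRegressor(alpha=1e-3).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    # Upper-confidence-bound acquisition: favour high predicted value
    # plus high uncertainty, balancing exploitation and exploration.
    best = candidates[np.argmax(mean + 1.5 * std)]
    X = np.vstack([X, [best]])
    y = np.append(y, run_experiment(best[0]))

print(f"best setting found: {X[np.argmax(y)][0]:.2f}")
```

In a real laboratory, the surrogate would be fitted to actual measurements and the acquisition rule tuned to the cost of each experiment.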
On analysis, AI enables researchers to parse vast, multimodal datasets at a scale humans simply cannot match. An example is NASA’s ExoMiner, a deep learning classifier that recently validated 301 new exoplanets from Kepler telescope data in a single batch, demonstrating AI’s ability to analyse massive datasets and uncover complex patterns efficiently. The upside is faster pattern discovery; the downside is a higher bar for documentation, interpretability and validation, to ensure that what looks like a signal is not a mirage or hallucination.
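ExoMiner itself is a sophisticated deep neural network; the sketch below is a drastically simplified, hypothetical analogue meant only to illustrate the workflow: training a classifier to flag synthetic “light curves” that contain a transit-like dip in brightness.

```python
# Minimal sketch in the spirit of transit classification: a simple model
# flags synthetic light curves containing a dip. A drastic simplification
# for illustration, not NASA's ExoMiner architecture.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def light_curve(has_transit):
    flux = 1.0 + 0.01 * rng.standard_normal(200)   # noisy baseline brightness
    if has_transit:
        start = rng.integers(20, 160)
        flux[start:start + 20] -= 0.03             # transit-like dip
    return flux

labels = rng.integers(0, 2, 400)
curves = np.array([light_curve(bool(l)) for l in labels])

# Train on the first 300 curves, evaluate on the held-out 100.
clf = LogisticRegression(max_iter=1000).fit(curves[:300], labels[:300])
print(f"held-out accuracy: {clf.score(curves[300:], labels[300:]):.2f}")
```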
When drawing conclusions and writing up, AI helps connect the dots between experimental data and theory, and assists in checking that conclusions are causally sound and consistent with the literature. The report also addresses authorship, originality and accountability: AI is a tool, not an author, and responsibility stays with researchers, including when models hallucinate citations or overstate certainty.
From the report’s analysis of AI at each step of the scientific process, the policy challenges and conclusions supporting the adoption of the European Strategy for AI in Science come into focus. First are the challenges related to data, models and infrastructure. AI performance is contingent on the quality of, and access to, publicly available data repositories, open-source models and collaborative infrastructures. The report underscores that open science is a prerequisite for innovation and reproducibility, while also noting that advanced AI models require significant resources to train and deploy. Because these costs are often within reach only for large laboratories, EU initiatives such as the AI Factories and investments in High Performance Computing and open data repositories are essential to narrow the gap and ensure fair access across countries and institutions.
The second challenge relates to skills and innovation. Researchers who combine engineering and computer-science expertise with deep domain knowledge are essential for AI in scientific research. Policies should focus on attracting, developing and retaining interdisciplinary talent to ensure that human expertise remains central to the research process. The report points to EU efforts to build AI literacy and shared resources, such as the ‘AI Skills Academy’ under the AI Continent Action Plan, but retention and interdisciplinarity remain practical barriers for laboratories trying to use AI responsibly (see SwissCore article).
Third are the ethical and legal implications, which cut across every step of the scientific process. Privacy, fairness, transparency and explainability are concerns the European Commission believes must be addressed to ensure responsible development and deployment. The Commission’s recent consultation, reported by Science Business, acknowledges that “currently, there is no EU-level mechanism to report concerns about the misuse of AI in scientific research” and that researchers “lack trusted and secure channels to raise the alarm”, undermining trust in the system. The idea of an EU whistleblowing channel has been put forward in the ERA Act discussions, an example of aligning ethics with workable practice, not just principles.
Lastly, the report sheds light on a societal challenge posed by AI: the risk of epistemic drift, a fundamental shift in what counts as valid scientific knowledge and process. AI tools can unintentionally lock researchers into familiar ways of thinking, potentially shrinking the range of questions we ask. AI can also encourage a culture in which scientific results feel detached from their human sources and authors, weakening human oversight of how knowledge is produced and letting fabricated information, or ‘hallucinations’, go undetected. The report emphasises that scientific AI development should be treated not merely as a technical challenge but as a societal effort.
The report concludes that if the EU wants AI to raise the quality as well as the speed of science, it must invest in data, models, infrastructure, and interdisciplinary skills and career paths, while building guardrails for evaluation, transparency and accountability that align with how laboratories actually operate. In doing so, the report underpins the goals of the EU’s twin strategies, Apply AI and AI in Science, by showing that open data and models, shared computing, and teams combining domain and AI expertise, supported by simple guardrails for evaluation and accountability, are what allow AI to improve science rather than merely accelerate it.