Fighting Bias in AI Tools for Recruitment

The BIAS project brings the Bern University of Applied Sciences (BFH) into the fight against algorithmic bias in AI-based recruitment and develops tools for fairer hiring.

Artificial Intelligence (AI) is playing an increasingly central role in recruitment processes, with nearly a quarter of companies relying on AI-based tools to support hiring decisions. Although these systems, often powered by large language models (LLMs), offer significant efficiencies and new capabilities, they also risk reinforcing societal biases embedded in their training data. This can result in discrimination, especially against vulnerable or marginalised groups, creating urgent ethical and legal challenges. 

The BIAS project addresses this issue with an interdisciplinary approach, bringing together a large consortium to investigate, detect and mitigate bias in AI decision-making systems used in human resources. The project is funded under Horizon Europe's Digital, Industry and Space cluster and by the State Secretariat for Education, Research and Innovation (SERI). BIAS is coordinated by the Norwegian University of Science and Technology (NTNU) and involves a total of nine partners from different countries. As a multidisciplinary project, BIAS combines expertise from computer science, social sciences, law and ethics to ensure that the consortium’s solutions are technologically robust, legally compliant and socially fair.

Switzerland is one of the partner countries in the consortium, represented by the Bern University of Applied Sciences (BFH). Within BFH, the Applied Machine Intelligence (AMI) Research Group from the School of Engineering and Computer Science is involved in the BIAS project. Professor Mascha Kurpicz-Briki leads the project’s technical work package, which is responsible for developing its proof-of-concept system. Her team plays a crucial role in analysing how bias manifests in LLMs and other language-based AI systems, with a particular focus on gender stereotypes and other forms of discrimination. Their work includes creating multilingual benchmarking methods to measure bias in word embeddings and language models. This approach is notable for its cross-cultural and cross-linguistic inclusivity, enabling the identification and evaluation of bias indicators across multiple languages and societal contexts. Through this detailed analysis, BFH supports the development of practical tools that can be applied in recruitment processes to help mitigate discriminatory outcomes.
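Bias benchmarking in word embeddings is commonly done with association tests in the spirit of the Word Embedding Association Test (WEAT), which compares how strongly two sets of target words (e.g. occupations) associate with two sets of attribute words (e.g. gendered terms). The following is a minimal sketch of that general idea, not the project's actual method; the toy two-dimensional vectors and word lists are purely illustrative, whereas a real benchmark would use vectors from a trained multilingual embedding model.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, A, B, emb):
    """Mean similarity of `word` to attribute set A minus to attribute set B."""
    return (np.mean([cosine(emb[word], emb[a]) for a in A])
            - np.mean([cosine(emb[word], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    """WEAT-style effect size: how differently target sets X and Y
    associate with attribute sets A vs. B, in pooled-std units."""
    x_assoc = [association(x, A, B, emb) for x in X]
    y_assoc = [association(y, A, B, emb) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Toy, hand-crafted 2-D "embeddings" for illustration only.
emb = {
    "he": [1.0, 0.0], "man": [0.9, 0.1],
    "she": [0.0, 1.0], "woman": [0.1, 0.9],
    "engineer": [0.8, 0.2], "programmer": [0.7, 0.3],
    "nurse": [0.2, 0.8], "teacher": [0.3, 0.7],
}
emb = {w: np.array(v) for w, v in emb.items()}

effect = weat_effect_size(
    X=["engineer", "programmer"], Y=["nurse", "teacher"],
    A=["he", "man"], B=["she", "woman"], emb=emb,
)
# A large positive effect size indicates the first target set is more
# strongly associated with the first attribute set (a bias indicator).
```

Extending such a test across languages, as the BIAS work aims to, mainly means curating culturally appropriate target and attribute word lists per language, since direct translation of English stimuli often does not carry the same connotations.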

At the heart of the project’s technical innovation is the Debiaser, a proof-of-concept technology designed to evaluate and reduce bias in AI recruitment tools. The Debiaser detects biased patterns in job descriptions, CVs and LLM outputs while providing users with clear visual explanations of where and how bias occurs, supporting both technical understanding and human interpretability. In addition, this technology is meant to support organisations in implementing case-based reasoning and conducting regular checks to ensure fairness and consistency in the recruitment process.
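One basic building block of detecting biased patterns in job descriptions is lexicon-based flagging of gender-coded wording, an approach known from research on gendered language in job advertisements. The sketch below illustrates only that general idea, not the Debiaser's actual implementation; the word lists here are short, hypothetical stand-ins for the curated, validated lexicons a real tool would use.

```python
import re

# Hypothetical, abbreviated lists of gender-coded terms (illustration only).
MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "fearless"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "empathetic"}

def flag_coded_language(job_ad: str) -> dict:
    """Return the masculine- and feminine-coded terms found in a job ad,
    so they can be highlighted and explained to the user."""
    tokens = set(re.findall(r"[a-z]+", job_ad.lower()))
    return {
        "masculine": sorted(tokens & MASCULINE_CODED),
        "feminine": sorted(tokens & FEMININE_CODED),
    }

flags = flag_coded_language(
    "We seek a competitive, fearless rockstar to join our team"
)
# flags["masculine"] lists the matched masculine-coded terms, which a tool
# could highlight in place and pair with more neutral alternatives.
```

In practice such surface-level flagging would be only one signal among several; checking LLM outputs or CV rankings for bias requires the embedding- and model-level analyses described above.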

Beyond technological advancement, BIAS also launched seven National Labs across Europe in Estonia, Iceland, Italy, Norway, the Netherlands, Turkey and Switzerland, as well as an international lab that reaches stakeholders outside the consortium. These labs bring together AI developers, HR professionals, policymakers, investors, trade unions and advocacy groups to foster dialogue, co-create solutions and raise awareness about bias and discrimination in AI recruitment. This engagement not only enriches qualitative insights but also empowers stakeholders to become active contributors to a more inclusive digital future. Professor Kurpicz-Briki highlights the benefits of working within such a broad and interdisciplinary consortium, such as allowing the team to “learn about different working cultures in other domains and bring these together for deliverables”. 

In addition to research and stakeholder engagement, the BIAS project plays an important role in European policy discussions on trustworthy AI. By aligning with current EU priorities such as the AI Act and the Assessment List for Trustworthy AI (ALTAI), it can help shape the regulatory and ethical frameworks that will govern AI systems in the coming years. The tools and methodologies developed within BIAS will be tested in real-world scenarios, such as HR departments, ensuring their relevance and practical usability. Switzerland’s participation in BIAS through BFH clearly illustrates the country’s active contribution to responsible AI innovation and its commitment to fostering fairness, accountability and inclusion in digital transformation.