A lack of diversity in teams developing artificial intelligence can lead to in-built bias and discrimination in its decisions, says BCS, The Chartered Institute for IT, responding to a new report by a major government advisory body.
The report, published by the Committee on Standards in Public Life (CSPL), examines whether the existing frameworks and regulations around machine learning are sufficient to ensure high standards of conduct are upheld as technologically assisted decision-making is adopted widely across the public sector.
BCS Director of Policy Dr Bill Mitchell said: “Lack of diversity in product development teams is a concern, as non-diverse teams may be more likely to follow practices that inadvertently hardwire bias into new products or services.”
Diverse teams also make public authorities more likely to spot the ethical pitfalls of an AI project, the report suggests. Many contributors emphasised this point, telling the committee that diverse teams bring more diverse thinking, which in turn helps public authorities identify any potential adverse impact of an AI system.