Rolls-Royce has added a simple and effective new data bias tool to its pioneering artificial intelligence (AI) ethics and trustworthiness toolkit, The Aletheia Framework. We have also announced AI ethics collaborations with music cataloguing start-up Musiio and with international AI oncology experts.
Bias in the requirements, algorithms and data used to train AIs affects the effectiveness and trustworthiness of AI and is one of the hardest challenges to overcome. It causes inaccuracy and unfairness in the way an AI analyses data and subsequently makes decisions, eroding trust in a technology that should be a valuable partner in our daily lives at home and at work.
Sitting as part of The Aletheia Framework 2.0 ecosystem, released today, the new tool is based on a tried and tested method of identifying and managing risk in very complex and novel systems. It has been adapted to perform the same role in AI, helping developers and organisations achieve highly accurate and fairer outcomes from their use of the technology.
“We’re excited to be adding even greater practicality to The Aletheia Framework, which is uniquely concise and focused on navigating the day-to-day intricacies of applying AI in an ethical and trustworthy way, such as bias in data.”
— Caroline Gorski, Group Director for Rolls-Royce’s data innovation unit, R2 Data Labs.
“In the year since we first published the framework, we’ve been humbled by the level of interest, feedback and enthusiasm for something that started out as an answer to an internal challenge – crucially in a business-critical context.”
“To enhance its effectiveness, not only are we adding this new AI bias tool, but we’ve also sought out collaborations with Musiio and with international AI oncology experts to test how the framework performs and to hear how it can be made more user-friendly and flexible. All these lessons have been included in The Aletheia Framework v2.0, which is released today, and we believe it can be applied to any use of AI, either as a template or as a general guide for organisations to structure their thinking on this complex topic.”
“One of the exciting things about the Aletheia Framework is that even though it wasn’t designed specifically for the music industry, it has been designed to work for ‘every’ industry. When you think about applying AI ethically, there are many elements to consider, from social impact to security,” Musiio CEO and Co-founder Hazel Savage said.
“One of the ways to consider the social impact of AI is to ask ‘are we as a company improving the quality of life for people performing the jobs where AI technology is integrated?’ – or, as I think of it, does AI remove an element of ‘drudgery’? With tagging music, which is manually tedious, we do remove that drudgery of repetitive work, whilst keeping humans as the overseers and accuracy monitors, and therefore create a better working environment in our specific space. I’d encourage all music companies to consider AI ethics when building tools or evaluating partnerships.”
The new data bias tool also extends the ability of The Aletheia Framework to help organisations apply rigour across the entire life of their AI product: from pre-development ethical considerations, to training-data bias mitigation, to the trustworthiness check on the decisions an AI makes after it has been deployed.
Crucially, The Aletheia Framework does not scrutinise algorithms themselves, which are highly complex, often commercially sensitive and always evolving. Instead, it focuses on the inputs to those algorithms and continuously checks their outputs. This makes it simple and fast to use, as well as applicable in any AI context.
Examples of how The Aletheia Framework has been used
Original press release by Rolls-Royce