Getting Ahead of AI Bias With Inclusive Algorithms

Key idea: By acknowledging the inherent pitfalls of AI systems and building in bias controls and data-integrity measures, we can create an ethically positive future rather than an ethically ambiguous one.

Original author and publication date: BMW Group – August 12, 2022

Futurizonte Editor’s Note: A material can sense its own movement? What could happen if it were combined with AI? Or is AI already incorporated?

From the article:

The societal impact of machine learning algorithms and artificial intelligence systems is multifaceted. The use of big data and algorithms in a variety of fields, including insurance, advertising, education, and beyond, can lead to decisions that harm the poor, reinforce racism, and amplify inequality. Models relying on false proxies and bad datasets are scalable, amplifying any inherent biases to affect increasingly larger populations. At the same time, these systems can also provide groundbreaking solutions and result in societally positive efficiencies.

AI challenge from BMW wants to beat internal bias with data
BMW Group is launching the “Joyful Diversity with AI” challenge, encouraging participants to come up with new ideas for how AI can help the automaker support diversity, equity, and inclusion in its work environment and communications through data-driven solutions. The deadline for submissions is October 3, 2022, and winners will be announced this December. BMW

EU to regulate AI’s impact on life-altering decisions
To curb machine-based discrimination, the EU is planning to introduce a comprehensive global template for regulating the types of AI models used to support “high risk” decisions, such as filtering job, school, or welfare applications, and helping banks assess the creditworthiness of potential buyers.

All of these are potentially life-altering decisions that affect whether someone can afford a home, secure a student loan, or even be employed. The Guardian

FairPlay is the first ‘fairness-as-a-service’ solution to algorithmic bias
Designed primarily for financial institutions, FairPlay’s solution aims to keep the bias of the past from being coded into the algorithms deciding the future, and uses next-gen tools to assess automated decisioning models and increase both fairness and profits for financial institutions. FairPlay

READ the full article here