
Social Impact of AI and explainable ML

artificial intelligence, machine learning, explanation, ethics

Tracking the change from human coding to algorithms that automatically learn to solve tasks by observing many examples of the expected input/output behavior

The last decade has seen the rise of the ‘black box’ society, in which the internal reasoning of algorithms is often obscure even to their own developers. We assess black box AI systems for automated decision making, which are typically based on machine learning models trained on big data and which map a user’s features into a class that predicts his or her behavioral traits.
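As a concrete illustration of such a pipeline, the short sketch below (not SoBigData code; the feature names and data are purely hypothetical) trains an opaque classifier that maps user features to a predicted class, then applies a model-agnostic post-hoc explanation, permutation feature importance, to indicate which features drive the predictions.

# Minimal sketch: a "black box" classifier over user features plus a post-hoc explanation.
# Feature names and the synthetic data are illustrative assumptions, not project data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "income", "n_purchases", "avg_session_minutes"]  # hypothetical user features
X = rng.normal(size=(1000, len(feature_names)))
# Synthetic target: a "behavioral trait" class loosely driven by two of the features.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble model whose internal reasoning is hard to inspect directly.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# One model-agnostic, post-hoc explanation: permutation feature importance on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

Global importance scores like these only hint at a model's behavior; explaining individual decisions requires local methods, which is one focus of explainable ML research.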

Go to the SoBigData catalogue →