[arXiv] BigDataFr recommends: Human-Algorithm Interaction Biases in the Big Data Cycle: A Markov Chain Iterated Learning Framework

Comments: This research was supported by National Science Foundation grant NSF-1549981
Subjects: Learning (cs.LG); Human-Computer Interaction (cs.HC)

[…] Early supervised machine learning algorithms relied on reliable expert labels to build predictive models. However, the gates of data generation have recently been opened to a much wider base of users, who increasingly participate through casual labeling, rating, annotating, etc. The increased online presence and participation of humans have led not only to a democratization of unchecked inputs to algorithms, but also to a wide democratization of the "consumption" of machine learning algorithms' outputs by general users.

Hence, these algorithms, many of which are becoming essential building blocks of recommender systems and other information filters, have started interacting with users at unprecedented rates. The result is machine learning algorithms that consume more and more data that is unchecked or, at the very least, does not fit the conventional assumptions made by various machine learning algorithms. These violations include biased samples, biased labels, diverging training and testing sets, and cyclical interaction between algorithms, humans, the information consumed by humans, and the data consumed by algorithms. […]
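
The cyclical interaction described above is what the paper's title frames as a Markov chain iterated learning problem. As a rough, hedged illustration only, and not the authors' framework, the toy Python sketch below simulates one such feedback loop: a simple model filters which items humans see, humans label mostly what they were shown, and the model is retrained on that biased sample. Every name and number in it (`true_relevance`, `fit_threshold`, the 180/20 exposure split) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_relevance(x):
    # Hypothetical ground-truth probability that a user likes an item with feature x.
    return 1.0 / (1.0 + np.exp(-3.0 * x))

def fit_threshold(x, y):
    # Crude stand-in for a learner: the cutoff on x that best matches observed labels.
    candidates = np.linspace(-2.0, 2.0, 81)
    accuracy = [np.mean((x > t) == y) for t in candidates]
    return candidates[int(np.argmax(accuracy))]

items = rng.normal(0.0, 1.0, 2000)   # toy item catalog, one feature per item
threshold = 0.0                      # initial model: recommend items with x > 0

for generation in range(10):
    # The current model filters what humans get to see.
    recommended = items[items > threshold]
    # Human feedback comes mostly from recommended items (biased sample),
    # plus a small amount of unbiased exploration.
    shown = np.concatenate([
        rng.choice(recommended, size=180, replace=True),
        rng.choice(items, size=20, replace=True),
    ])
    labels = rng.random(shown.size) < true_relevance(shown)   # noisy human labels
    threshold = fit_threshold(shown, labels)                  # retrain on the biased data
    print(f"gen {generation}: threshold={threshold:+.2f}, "
          f"positive rate in labeled data={labels.mean():.2f}, "
          f"true base rate={true_relevance(items).mean():.2f}")
```

The point is purely structural: at each generation the training data is filtered by the previous model, so the labeled sample is no longer an unbiased draw from the catalog, which is a toy analogue of the biased samples and biased labels listed in the excerpt.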

Read paper
By Olfa Nasraoui, Patrick Shafto
Source: arxiv.org
