Literature Review: Bias problem for Artificial Intelligence
This “Ten Problems for Artificial Intelligence in the 2020s” booklet identifies ten relevant areas from very recent contributions put forward at academic level in the form of journal articles, conference proceedings and student theses. Ten freely accessible internet references have been selected for each area, and direct links are provided at the end of each chapter for the reader’s own consultation. Our selected references are not intended to mirror ranking indexes or to establish novel classifications. On the contrary, they are meant to represent peer-reviewed, diverse and scientifically sound case studies for vertical dissemination aimed at non-specialist readers. Readers will also be able to find even more references through the bibliography reported at the end of each selected reference.
Without further ado, these are the ten problems that we are going to introduce in this booklet:
- computer games,
- intelligent optimization,
- human intervention,
- ethical issues,
- software development,
- open source,
- operations research.
Each problem has its own dedicated chapter made of an introductory section, a short presentation of the ten selected references and a conclusions section.
The final chapter of this booklet restates the conclusions from each chapter in order to provide a complete executive summary.
THE PROBLEM — How can engineers understand and fix issues related to discrimination resulting from the application of machine-learning software? A neural network can even learn a bias, and the severity of that bias can be determined empirically. A taxonomy of fairness definitions is needed. Awareness of bias risks, and working to reduce them, is an urgent priority for companies and organizations, also to promote diversity and avoid the depersonalization of interactions in social environments. Law and technology are expected to work together.
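The claim that the severity of a learned bias can be determined empirically can be made concrete with a group-fairness metric. The following is a minimal illustrative sketch (not taken from the booklet or its references): it computes the "demographic parity difference," the gap in positive-prediction rates between groups, for a hypothetical set of model outputs. A value of 0.0 indicates parity; larger values indicate more severe bias.

```python
def demographic_parity_difference(preds, groups):
    """preds: parallel list of 0/1 model decisions; groups: group label per item.
    Returns the gap between the highest and lowest positive-decision rates."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical decisions for two groups "A" and "B":
preds = [1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A is approved 75% of the time, group B 0% of the time.
print(demographic_parity_difference(preds, groups))  # 0.75
```

Demographic parity is only one of many fairness definitions; a taxonomy such as the one the booklet calls for would catalogue this metric alongside alternatives (equalized odds, calibration, and so on), which can be mutually incompatible.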
CASE STUDIES — … buy this booklet from Amazon …
CONCLUSIONS — There are many examples where models built by machine-learning software have been found to exacerbate bias and make arguably unfair decisions. A method based on adversarial training strategies against bias encourages the model to learn features for the prediction task whose correlation with the bias vanishes. Temporal bias acts to undermine the validity of predictions by overemphasizing features close to the outcome of interest. One promising technique is “counterfactual fairness,” which ensures that a model’s decisions are the same in a counterfactual world where attributes deemed sensitive, such as race, gender, or sexual orientation, were changed. Little is known about the reactions of stakeholders to AI-based recruiting. Even within highly developed countries, many AI-modeling and data-collection efforts overlook or neglect underrepresented minorities. Gender bias can be studied as based on four forms of representation bias. The construct of Anthropomorphized Technology and a “bias–threat–illusion” model are proposed to classify the negative consequences of bias. Despite the significant scientific advances achieved so far, most of the currently used biomedical AI technologies do not account for bias detection. Rigorous ex-post tests for medical machine-learning programs are needed to tackle harmful biases.
TEN FREE REFERENCES FROM THE INTERNET — … buy this booklet from Amazon …
booklet updated on 19 Jun 2021, now on sale as version 1.2