++ All booklets are now selling at a 70% discount and are also fully available for free through the KDP Unlimited circuit. Get them while the offer lasts!
Tailored content can be provided upon request. Submit your technical specification through the contact form on the About page at https://www.tenproblems.com/about/ and get your quote.
Literature Review: The Machines Problem for Safety
This “Ten Problems for Safety in the 2020s” booklet identifies ten relevant areas from very recent contributions put forward at the academic level in the form of journal articles, conference proceedings and students’ theses. Ten freely accessible internet references have been selected for each area, and direct links are provided at the end of each chapter for the reader’s own consultation. Our selected references do not intend to mirror ranking indexes nor establish novel classifications. Rather, they are meant to represent peer-reviewed, diverse and scientifically sound case studies for vertical dissemination aimed at non-specialist readers. Readers will also be able to glean even more references from the bibliography reported at the end of each selected reference.
Without further ado, these are the ten problems that we are going to introduce in this booklet:
Each problem has its own dedicated chapter, made up of an introductory section, a short presentation of the ten selected references and a conclusions section.
The final chapter of this booklet restates the conclusions from each chapter in order to provide a complete executive summary.
THE PROBLEM — The idea of machines becoming sentient, autonomous and unpredictable has become such a concern for the safety of humankind that ethical principles and strict control frameworks are being invoked worldwide. Possible catastrophic risks of artificial intelligence include threats to democracy, the risk of totalitarianism, and threats to physical and digital safety.
CASE STUDIES — … buy this booklet from Amazon …
CONCLUSIONS — Under the headline “AI safety”, a wide-reaching issue is being discussed: whether in the future some “superhuman artificial intelligence”, or “superintelligence”, could pose a threat to humanity. It is impossible to precisely and consistently predict what strategies a smarter-than-human intelligent system will adopt to achieve its objectives, even if we know the terminal goals of the system. A key task will be reconciling system safety with artificial intelligence safety, combining system safety’s view of harm to humans with an understanding of what might go wrong. In the past several years, seemingly every organization with a connection to technology policy has authored or endorsed a set of principles for artificial intelligence. Recent highly targeted and evasive attacks embedded in benign carrier applications have demonstrated the intentional use of AI for harmful purposes. It is clear that self-driving car accidents are an inevitability, but the variety of accidents that ethically hurt the public and hinder the industry is not. Autonomous AI systems in healthcare are AI systems that make clinical decisions without human oversight. There are two types of artificial general intelligence safety solutions: global and local. Artificial intelligence risks need to be evaluated within the set of related global catastrophic risks. Open AI DevOps platforms with built-in ethics-relevant tools should be developed.
TEN FREE REFERENCES FROM THE INTERNET — … buy this booklet from Amazon …