++ All booklets now selling at a 70% discount and fully available for free through the Kindle Unlimited circuit. Get them while the offer lasts!
Tailored content can be provided upon request. Submit your technical specification through the contact form on the About page at https://www.tenproblems.com/about/ to get your quote.
Literature Review: The Self-Learning Problem for Robotics
This “Ten Problems for Robotics in the 2020s” booklet identifies ten relevant areas from very recent contributions put forward at the academic level in the form of journal articles, conference proceedings and student theses. Ten freely accessible internet references have been selected for each area, and direct links are provided at the end of each chapter for the reader's own consultation. Our selected references are not intended to mirror ranking indexes or to establish novel classifications. On the contrary, they are meant to represent peer-reviewed, diverse and scientifically sound case studies for vertical dissemination aimed at non-specialist readers. Readers will also be able to glean further references through the bibliography reported at the end of each selected reference.
Without further ado, these are the ten problems that we are going to introduce in this booklet:
- self learning,
- action planning,
Each problem has its own dedicated chapter, made up of an introductory section, a short presentation of the ten selected references and a conclusions section.
The final chapter of this booklet restates the conclusions from each chapter in order to provide a complete executive summary.
1 Self Learning
THE PROBLEM — Robots are required to operate safely in unknown environments for extended periods of time without human intervention. The ultimate challenge of reinforcement learning research is to train real agents to operate in the real environment. Human-robot interaction is highly desirable since it would provide user-personalized interaction, a crucial element in many scenarios. Mapping and exploration of a priori unknown environments is also a crucial capability for mobile robot autonomy. Legal aspects and product liability considerations remain a great obstacle to the deployment of fully autonomous robots and systems.
CASE STUDIES — … buy this booklet from Amazon …
CONCLUSIONS — Reinforcement learning holds promise for persistent autonomy because it can adapt to dynamic and unstructured environments by automatically learning optimal policies from the interactions between robots and their environments. Working with real physical environments poses significant challenges for the speed of progress in reinforcement learning research. There is nothing inherently normatively problematic about employing autonomous robots as workers; still, if we want to avoid blame, we must not put them to just any work. For a social robot to recreate this same kind of rich, human-like interaction, it should be aware of our needs and affective states and be capable of continuously adapting its behaviour to them. It is becoming easy to prototype and implement state-of-the-art reinforcement learning algorithms on surgical robotics problems. A deep reinforcement learning exploration policy can select an optimal or near-optimal exploratory sensing action with improved computational efficiency. The design of self-learning and self-play scenarios for improved agent-environment interaction is still an area of active development and research. Little is known about the principles of building and implementing, as opposed to using, robotic systems such as bots for process automation and chatbots. Networked mobile robots capable of self-learning their motion controllers are receiving growing attention in the mobile robotics research community. A self-driving car manufacturer who neglects to harden its visualization technology against adversarial image attacks would face claims under product liability.
TEN FREE REFERENCES FROM THE INTERNET — … buy this booklet from Amazon …