
Robotics problem: Action Planning


Literature Review: Action Planning problem for Robotics

This “Ten Problems for Robotics in the 2020s” booklet identifies ten relevant areas from very recent contributions put forward at academic level in the form of journal articles, conference proceedings and student theses. Ten freely accessible internet references have been selected for each area, and direct links are provided at the end of each chapter for the reader's own consultation. Our selected references do not intend to mirror ranking indexes or establish novel classifications. On the contrary, they are meant to represent peer-reviewed, diverse and scientifically sound case studies for vertical dissemination aimed at non-specialist readers. Readers will also be able to find further references through the bibliography reported at the end of each selected reference.

Without further ado, these are the ten problems that we are going to introduce in this booklet:

  1. self-learning,
  2. manipulation,
  3. research,
  4. motion,
  5. detection,
  6. action planning,
  7. simulation,
  8. soft,
  9. education,
  10. accountability.

Each problem has its own dedicated chapter made of an introductory section, a short presentation of the ten selected references and a conclusions section.

The final chapter of this booklet reports the conclusions from each chapter once more, providing a complete executive summary.


6 Action Planning

THE PROBLEM — Existing approaches that integrate action planning with reinforcement learning have not yet been able to map subgoals to low-level motion trajectories in realistic continuous-space robotic applications. Automated planning systems likewise remain ill-suited to runtime robotic execution, which takes place in uncertain environments.
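As a purely illustrative aid, and not taken from any of the selected references, the Python sketch below shows the gap in question: a discrete subgoal plan is mapped to continuous joint-space trajectories by a naive interpolation routine that stands in for the learned low-level motion policy such integrated approaches aim to provide. All names and numerical values (`plan`, `subgoal_targets`, `interpolate`, the joint configurations) are hypothetical.

```python
# Hypothetical sketch: mapping a discrete subgoal plan to continuous trajectories.
import numpy as np

# A symbolic task plan: ordered subgoals produced by a high-level planner.
plan = ["approach_object", "grasp_object", "move_to_bin"]

# Assumed mapping from each subgoal to a target joint configuration;
# learning this mapping is what the integrated approaches try to achieve.
subgoal_targets = {
    "approach_object": np.array([0.3, -0.5, 0.8, 0.0]),
    "grasp_object":    np.array([0.3, -0.7, 1.0, 0.6]),
    "move_to_bin":     np.array([-0.4, -0.2, 0.6, 0.6]),
}

def interpolate(start, goal, steps=20):
    """Naive straight-line interpolation in joint space, standing in for a
    learned low-level motion policy or a motion planner."""
    return [start + (goal - start) * t / (steps - 1) for t in range(steps)]

# Execute the plan by chaining one trajectory per subgoal.
current = np.zeros(4)  # assumed initial joint configuration
for subgoal in plan:
    trajectory = interpolate(current, subgoal_targets[subgoal])
    current = trajectory[-1]
    print(f"{subgoal}: {len(trajectory)} waypoints, final config {np.round(current, 2)}")
```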

CASE STUDIES — … buy this booklet from Amazon …

CONCLUSIONS — Action planning is based on discrete high-level action and state spaces, whereas reinforcement learning is usually driven by a continuous reward function. Static world representations, the lack of adaptive execution handling and the inability to replan in response to environmental observations still make the integration difficult. Imitation learning has become an especially convenient scheme for teaching robots skills for specific tasks, and learning from demonstration and reinforcement learning are now frequently combined. A novel Mimicry Constraint Policy Optimization improves policy optimization by enforcing occupancy-measure matching to the expert, and can be integrated into many reinforcement learning methods. A standardized integration of probabilistic planners into the ROSPlan framework allows for reasoning with non-deterministic effects and is agnostic to the probabilistic planner used. To illustrate the efficiency of a vision system in autonomous harvesting, a robotic harvesting experiment is conducted with an industrial robotic arm in a controlled environment. The use of Mivar expert systems makes it possible to solve problems of automatic planning of robot actions in the state space. More and more countries and regions have adopted robot contests as important platforms for popularizing robot knowledge, selecting robotic technical talent and transforming robot technology. A simple, unmanned, autonomous mobile robot can still carry an appropriate standard sprinkler firefighting system.
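To make the occupancy-measure idea concrete, here is a deliberately simplified Python sketch, not the cited Mimicry Constraint Policy Optimization implementation: a policy's state-action visitation distribution is estimated from rollouts and a KL-style penalty against the expert's distribution is subtracted from the task return. The toy rollouts, the `beta` weight and the return value are all invented for illustration.

```python
# Conceptual sketch of occupancy-measure matching as a penalty on an RL objective.
import numpy as np

def empirical_occupancy(state_action_pairs, n_states, n_actions):
    """Estimate a discrete state-action visitation distribution from samples."""
    counts = np.zeros((n_states, n_actions))
    for s, a in state_action_pairs:
        counts[s, a] += 1
    return counts / counts.sum()

def matching_penalty(policy_occ, expert_occ, eps=1e-8):
    """KL-style divergence between policy and expert occupancy measures."""
    return float(np.sum(policy_occ * np.log((policy_occ + eps) / (expert_occ + eps))))

# Toy rollouts over 3 states and 2 actions (purely illustrative data).
expert_rollout = [(0, 1), (1, 1), (2, 0), (2, 0)]
policy_rollout = [(0, 0), (1, 1), (1, 1), (2, 0)]

expert_occ = empirical_occupancy(expert_rollout, 3, 2)
policy_occ = empirical_occupancy(policy_rollout, 3, 2)

task_return = 5.2   # assumed average return from the RL objective
beta = 0.5          # assumed weight of the imitation constraint
objective = task_return - beta * matching_penalty(policy_occ, expert_occ)
print(f"penalized objective: {objective:.3f}")
```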

TEN FREE REFERENCES FROM THE INTERNET — … buy this booklet from Amazon …


“Ten Problems for Robotics in the 2020s” booklet for Amazon Kindle, 2020

By TenProblems

Literature Reviews for Inquisitive Minds