Towards a distributed planning of decision making under uncertainty for a fleet of robots
Abstract
Coordination is required to solve a multi-robot navigation problem: it enables an efficient and fast search for a solution while avoiding possible collisions. Planning for a fleet of robots can rely on the Multi-agent Markov Decision Process (MMDP) model. This model assumes that the robots' local perceptions can be shared at all times. However, the computation of a joint policy is not necessarily distributable between robots, as in multiple path planning, where the movement of one robot depends on the paths of all the others. The global search space is of exponential size (in the number of robots) in most multi-robot scenarios. Distributing the planning over the robots would allow each robot to compute its own policy while taking advantage of parallel computing. In this paper, this problem is addressed by an approach that starts from a simplified model which can be distributed, then adds robot interaction constraints while keeping the model distributable. The results of experiments with different configurations highlight some of the strengths and limitations of the current approach.