Creating coordinated multiagent policies in an environment with uncertainties is a challenging problem in multiagent learning research. In this paper, we propose a coordinated learning approach that enables agents to learn both individual policies and coordinated behaviors by exploiting the independence relationships inherent in many multiagent systems. We illustrate how this approach solves coordination problems in robot navigation domains. Experimental results on domains of different scales demonstrate the effectiveness of our learning approach. © 2012 Springer-Verlag.
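As a rough illustration of the setting only (not the paper's algorithm, whose details are not given in the abstract), the following sketch shows the general idea of exploiting independence in multiagent learning: two Q-learning agents on a toy 1-D corridor normally learn over their own positions, and the state is augmented with a coordination flag only when the agents are adjacent, so coordinated behavior is learned only where interactions actually occur. All names, parameters, and the corridor task itself are hypothetical.

```python
import random

# Hypothetical toy domain: two agents on a 1-D corridor of length N must
# swap ends. Each agent's state is its own position plus a flag that is
# set only when the other agent is nearby -- coordination is learned
# only in those "interaction" states.
N = 6
ACTIONS = (-1, 0, +1)  # move left, stay, move right


def state(pos, other):
    near = int(abs(pos - other) <= 1)  # coordination flag
    return (pos, near)


def step(p, a):
    return min(N - 1, max(0, p + a))


def train(episodes=3000, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [{}, {}]            # one independent Q-table per agent
    goals = (N - 1, 0)      # agents start at opposite ends
    for _ in range(episodes):
        pos = [0, N - 1]
        for _t in range(40):
            acts = []
            for i in (0, 1):
                s = state(pos[i], pos[1 - i])
                q = Q[i].setdefault(s, [0.0] * len(ACTIONS))
                if rng.random() < eps:
                    a = rng.randrange(len(ACTIONS))
                else:
                    a = max(range(len(ACTIONS)), key=q.__getitem__)
                acts.append(a)
            nxt = [step(pos[i], ACTIONS[acts[i]]) for i in (0, 1)]
            if nxt[0] == nxt[1]:  # collision: both stay put, penalty
                nxt = pos[:]
                rewards = [-5.0, -5.0]
            else:
                rewards = [1.0 if nxt[i] == goals[i] else -0.1
                           for i in (0, 1)]
            for i in (0, 1):
                s = state(pos[i], pos[1 - i])
                s2 = state(nxt[i], nxt[1 - i])
                q2 = Q[i].setdefault(s2, [0.0] * len(ACTIONS))
                Q[i][s][acts[i]] += alpha * (
                    rewards[i] + gamma * max(q2) - Q[i][s][acts[i]])
            pos = nxt
            if pos[0] == goals[0] and pos[1] == goals[1]:
                break
    return Q
```

Because each agent's Q-table is keyed only on its own position plus a single proximity flag, the state space stays close to single-agent size while still allowing collision-avoidance behavior to emerge in the states where the agents actually interact.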