Ergodic exploration has been shown to be an effective framework for autonomous sensing and exploration. The objective of ergodic control is to minimize the difference between the distribution induced by the time-averaged sensor trajectory and a spatial probability density function representing information density. In this way, the time a sensor spends sampling a region is driven to be proportional to the anticipated information density of that region. This paper introduces a trajectory optimization approach for ergodic exploration in the presence of stochastic sensor dynamics. A stochastic differential dynamic programming algorithm is formulated in the context of ergodic exploration. Numerical studies demonstrate the proposed framework’s ability to mitigate stochastic effects.
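To make the ergodic objective concrete, a commonly used spectral formulation (due to Mathew and Mezić) is sketched below; it is shown here only as a representative form of the metric described above, and the exact objective adopted in this work may differ in its weights or basis functions.

% Representative spectral ergodic metric; an illustrative assumption, not
% necessarily the exact objective used in this paper.
\begin{equation}
  \mathcal{E}\bigl(x(\cdot)\bigr) \;=\; \sum_{k \in \mathcal{K}} \Lambda_k \,
  \bigl| c_k\bigl(x(\cdot)\bigr) - \phi_k \bigr|^2,
  \qquad
  c_k\bigl(x(\cdot)\bigr) \;=\; \frac{1}{T} \int_0^T F_k\bigl(x(t)\bigr)\, dt,
\end{equation}
where $\phi_k$ are the Fourier coefficients of the information density, $c_k$ are the corresponding coefficients of the time-averaged trajectory statistics, $F_k$ are Fourier basis functions on the search domain, and $\Lambda_k$ are weights that penalize mismatch at low spatial frequencies more heavily. Driving $\mathcal{E}$ toward zero is what forces the fraction of time spent in each region to match the region's expected information density.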