In this paper, we investigate background subtraction techniques based on Gaussian Mixture Models (GMMs) in the presence of large illumination changes and background variations. We show that existing techniques suffer from a trade-off imposed by using a common learning rate to update both the mean and variance of the component densities, which causes the variance to degenerate and produces "saturated pixels". To address this problem, we propose a simple yet effective technique that uses separate learning rates for the mean and the variance, and imposes a constraint on the variance so as to avoid the degeneracy. Experimental results show that, compared to existing techniques, the proposed algorithm provides more robust segmentation in the presence of illumination variations and abrupt changes in the background distribution.
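The core idea of decoupling the two learning rates and bounding the variance can be sketched as a Stauffer–Grimson-style per-pixel update. This is a minimal illustration under stated assumptions, not the paper's algorithm: the parameter names (`alpha_mu`, `alpha_var`, `var_min`) and their default values are hypothetical, and the example handles a single grayscale pixel with a fixed number of components.

```python
import numpy as np

def update_gmm_pixel(x, mu, var, w, alpha_mu=0.05, alpha_var=0.01,
                     var_min=4.0, alpha_w=0.05, match_thresh=2.5):
    """One GMM update step for a single grayscale pixel value x.

    Unlike the common single-rate scheme, the mean and variance use
    separate learning rates (alpha_mu, alpha_var), and the variance is
    floored at var_min so it cannot collapse toward zero (the
    "saturated pixel" degeneracy). Parameter names are illustrative.
    """
    # Mahalanobis-style distance of x to each component mean
    d = np.abs(x - mu) / np.sqrt(var)
    matched = d < match_thresh
    if matched.any():
        # update the closest matching component
        k = int(np.argmin(np.where(matched, d, np.inf)))
        mu[k] = (1.0 - alpha_mu) * mu[k] + alpha_mu * x
        var[k] = (1.0 - alpha_var) * var[k] + alpha_var * (x - mu[k]) ** 2
        var[k] = max(var[k], var_min)  # constraint preventing degeneracy
    else:
        # no match: replace the least probable component with a new one
        k = int(np.argmin(w / np.sqrt(var)))
        mu[k], var[k] = x, 100.0  # broad initial variance (assumed value)
    m = np.zeros_like(w)
    m[k] = 1.0
    # weight update with its own rate, then renormalize
    w[:] = (1.0 - alpha_w) * w + alpha_w * m
    w /= w.sum()
    return mu, var, w
```

With a single learning rate, repeated observations of a static pixel drive the matched component's variance toward zero, so any small fluctuation is flagged as foreground; the `var_min` floor in the sketch prevents exactly that failure mode while the slower `alpha_var` keeps the variance estimate stable under illumination changes.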