This article presents a new method for crowd behavior recognition using dynamic features extracted from dense trajectories. Histogram of oriented gradients (HOG) and motion boundary histogram (MBH) descriptors are computed at densely sampled points along motion trajectories, which are tracked using median filtering and displacement information obtained from a dense optical flow field. A global representation of the scene is then built with a bag-of-words model of the extracted features, using locality-constrained linear coding (LLC) with sum pooling followed by power and L2 normalization. Finally, a support vector machine classifier is trained to recognize the crowd behavior in short video sequences. The proposed method is evaluated on two benchmark datasets, and its performance is compared with that of several existing methods. Experimental results show that the proposed approach achieves a classification rate of 93.8% on the PETS2009 S3 dataset and an area under the curve (AUC) score of 0.985 on the UMN dataset.
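The encoding and pooling stage of such a pipeline can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the codebook size (64), descriptor dimension (96), neighborhood size `k`, regularizer `beta`, and power exponent `alpha` are all illustrative assumptions. It shows locality-constrained linear coding of each descriptor against its k nearest codewords, followed by sum pooling over a clip and power plus L2 normalization of the resulting histogram.

```python
import numpy as np

def llc_encode(x, codebook, k=5, beta=1e-4):
    """LLC code for one descriptor x against a codebook (rows = codewords).

    Solves the small constrained least-squares problem over the k nearest
    codewords; the sum-to-one constraint gives a closed-form solution.
    """
    d = np.linalg.norm(codebook - x, axis=1)   # distance to every codeword
    idx = np.argsort(d)[:k]                    # indices of the k nearest
    z = codebook[idx] - x                      # shifted local base (k x D)
    # regularized local covariance (beta and its scaling are assumptions)
    C = z @ z.T + beta * np.trace(z @ z.T) * np.eye(k)
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                               # enforce sum-to-one constraint
    code = np.zeros(codebook.shape[0])
    code[idx] = w                              # sparse code over full codebook
    return code

def pool_and_normalize(codes, alpha=0.5):
    """Sum-pool per-descriptor codes, then power + L2 normalize."""
    h = codes.sum(axis=0)                      # sum pooling over the clip
    h = np.sign(h) * np.abs(h) ** alpha        # power (signed-sqrt) normalization
    return h / (np.linalg.norm(h) + 1e-12)     # L2 normalization

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 96))           # 64 codewords, 96-dim descriptors
descs = rng.normal(size=(200, 96))             # descriptors from one video clip
codes = np.stack([llc_encode(x, codebook) for x in descs])
hist = pool_and_normalize(codes)
print(hist.shape)                              # (64,): one feature vector per clip
```

The resulting fixed-length vector is what a linear SVM would consume; power normalization before L2 is the usual ordering, as it dampens bursty codeword counts before the final scaling.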