Neuroscience studies have revealed discrepancies in emotion expression between the left and right hemispheres of the human brain. Inspired by this finding, we propose a novel bi-hemispheric discrepancy model (BiHDM) that learns the discrepancy information between the two hemispheres to improve electroencephalograph (EEG) emotion recognition. Concretely, we first employ four directed recurrent neural networks (RNNs) based on two spatial orientations to traverse the electrode signals on the two separate brain regions. This enables the proposed model to obtain deep representations of all the EEG electrodes' signals while preserving their intrinsic spatial dependence. On top of this representation, a pairwise subnetwork is designed to explicitly capture the discrepancy information between the two hemispheres and to extract higher-level features for the final classification. Furthermore, considering the domain shift between training and testing data, we incorporate a domain discriminator that adversarially induces the overall feature-learning module to generate emotion-related but domain-invariant feature representations, so as to further promote EEG emotion recognition. Experiments are conducted on three public EEG emotion data sets, in which we evaluate the performance of the proposed BiHDM, investigate the brain areas important for emotion expression, and explore using fewer electrodes to achieve comparable results. These experimental results jointly demonstrate the effectiveness and advantages of the proposed BiHDM model in solving the EEG emotion recognition problem.
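To make the core idea concrete, the following is a minimal numpy sketch of the bi-hemispheric discrepancy computation described above: a simple RNN traverses each hemisphere's electrode features along one spatial orientation, and a pairwise subtraction captures the discrepancy between symmetric electrodes. All names, dimensions, and the vanilla tanh-RNN cell are illustrative assumptions, not the paper's actual implementation (which also uses other pairwise operations and a domain discriminator).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (real EEG caps have ~30+ electrodes per hemisphere).
n_pairs = 4        # number of symmetric left/right electrode pairs
d_in, d_h = 8, 16  # per-electrode feature dim, RNN hidden dim

# Simulated per-electrode EEG features for each hemisphere, ordered along
# one spatial orientation (e.g., front-to-back).
X_left = rng.standard_normal((n_pairs, d_in))
X_right = rng.standard_normal((n_pairs, d_in))

# Shared vanilla-RNN weights used to traverse the electrode sequence.
W_x = rng.standard_normal((d_in, d_h)) * 0.1
W_h = rng.standard_normal((d_h, d_h)) * 0.1

def traverse(X):
    """Run a simple tanh RNN over the electrode sequence; return all states."""
    h = np.zeros(d_h)
    states = []
    for x in X:
        h = np.tanh(x @ W_x + h @ W_h)
        states.append(h)
    return np.stack(states)

# Deep representations of each hemisphere's electrodes, preserving the
# spatial order in which they were traversed.
H_left = traverse(X_left)
H_right = traverse(X_right)

# Pairwise discrepancy between symmetric electrodes (here: subtraction);
# this per-pair feature would feed the higher-level classification layers.
discrepancy = H_left - H_right
print(discrepancy.shape)  # one d_h-dimensional discrepancy per electrode pair
```

In the full model, four such traversals (two orientations, two directions) are combined, and the discrepancy features are passed through further layers before classification.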