Statistics and Its Interface

Volume 11 (2018)

Number 3

Exploring generative perspective of convolutional neural networks by learning random field models

Pages: 515 – 529

DOI: https://dx.doi.org/10.4310/SII.2018.v11.n3.a13

Authors

Yang Lu (University of California at Los Angeles)

Ruiqi Gao (University of California at Los Angeles)

Song-Chun Zhu (University of California at Los Angeles)

Ying Nian Wu (University of California at Los Angeles)

Abstract

This paper studies the convolutional neural network (ConvNet or CNN) from a statistical modeling perspective. The ConvNet has proven to be a very successful discriminative learning machine. In this paper, we explore the generative perspective of the ConvNet. We propose to learn Markov random field models called FRAME (Filters, Random field, And Maximum Entropy) models using the highly sophisticated filters pre-learned by the ConvNet on the large ImageNet dataset. We show that the learned models can generate realistic and rich object and texture patterns in natural scenes. We explain that each learned model corresponds to a new ConvNet unit at the layer above the layer of filters employed by the model. We further show that it is possible to learn a generative ConvNet model with a new layer of multiple filters, and the learning algorithm admits an EM interpretation with binary latent variables.
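
The abstract describes the learning scheme at a high level: filter responses define an energy-based (FRAME) model, images are synthesized from it by Langevin dynamics, and the potential weights are updated so that statistics of synthesized images match those of observed images. Below is a minimal, self-contained Python sketch of such an analysis-by-synthesis loop, offered only as an illustration and not as the authors' implementation: it substitutes random full-image templates for the pre-learned ConvNet filters, uses placeholder data, and the names and settings (langevin_step, sigma2, step sizes, iteration counts) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-ins for pre-learned ConvNet filters: K random templates applied to
    # a flattened D-pixel image (the paper uses local convolutional filters taken
    # from a ConvNet layer pre-learned on ImageNet).
    D, K = 64, 8
    filters = rng.standard_normal((K, D)) / np.sqrt(D)
    w = np.zeros(K)      # FRAME potential weights, to be learned
    sigma2 = 1.0         # variance of the Gaussian reference distribution

    def responses(img):
        """Rectified filter responses h_k(I) = max(0, <F_k, I>)."""
        return np.maximum(filters @ img, 0.0)

    def energy_grad(img):
        """Gradient of U(I) = |I|^2 / (2 * sigma2) - sum_k w_k h_k(I)."""
        active = (filters @ img > 0).astype(float)       # derivative of the rectifier
        return img / sigma2 - filters.T @ (w * active)

    def langevin_step(img, eps=0.02):
        """One Langevin update: I <- I - (eps^2 / 2) dU/dI + eps * noise."""
        return img - 0.5 * eps**2 * energy_grad(img) + eps * rng.standard_normal(D)

    # Analysis by synthesis: move the average filter responses of synthesized
    # images toward those of observed images (the maximum likelihood gradient
    # for this exponential-family random field model).
    observed = rng.standard_normal((32, D))              # placeholder "data"
    synthesized = rng.standard_normal((32, D))           # persistent sampling chains

    for iteration in range(50):
        for _ in range(10):                              # a few Langevin sweeps per update
            synthesized = np.array([langevin_step(s) for s in synthesized])
        obs_stats = np.mean([responses(x) for x in observed], axis=0)
        syn_stats = np.mean([responses(s) for s in synthesized], axis=0)
        w += 0.01 * (obs_stats - syn_stats)              # stochastic gradient step on w

The weight update follows the generic maximum likelihood gradient for a random field, E_data[h_k] minus E_model[h_k], with the model expectation approximated by the Langevin samples; all specific values here are placeholders.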

Keywords

generative models, Langevin dynamics, Markov random fields, energy-based models

This work was supported by NSF DMS 1310391, DARPA SIMPLEX N66001-15-C-4035, ONR MURI N00014-16-1-2007, and DARPA ARO W911NF-16-1-0579.

Received 12 May 2017

Published 17 September 2018