# Tensorflow Guide: Exponential Moving Average for Improved Classification

22 Nov 2017

### Parameter Selection via Exponential Moving Average

When training a classifier via gradient descent, we update the current classifier's parameters $\theta$ via

$$\theta_{t+1} = \theta_t + \Delta \theta_t,$$

where $\theta_t$ is the current state of the parameters and $\Delta \theta_t$ is the update step proposed by your favorite optimizer. Often, after $N$ iterations, we simply stop the optimization procedure (where $N$ is chosen using some sort of decision rule) and use $\theta_N$ as our trained classifier's parameters.

However, we often observe empirically that a post-processing step can be applied to improve the classifier's performance. One such example is Polyak averaging. A closely related (and quite popular) procedure is to take an exponential moving average (EMA) of the optimization trajectory $(\theta_n)$,

$$\bar{\theta}_t = \lambda \bar{\theta}_{t-1} + (1 - \lambda)\, \theta_t,$$

where $\lambda \in [0, 1)$ is the decay rate or momentum of the EMA. It's a simple modification to the optimization procedure that often yields better generalization than simply selecting $\theta_N$, and has also been used quite effectively in semi-supervised learning.
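To build intuition for the recursion above, here is a toy one-dimensional sketch (the objective, step size, noise level, and decay value are all placeholders of my own choosing, not from the post):

```python
import numpy as np

# Toy example: gradient descent on f(theta) = theta^2 with noisy gradients,
# while maintaining an EMA of the parameter trajectory.
rng = np.random.default_rng(0)
lam = 0.9            # EMA decay rate (lambda)
theta = 5.0          # theta_0
theta_bar = theta    # initialize the EMA at the first iterate

for _ in range(200):
    grad = 2.0 * theta + rng.normal(scale=0.5)       # noisy gradient
    theta = theta - 0.1 * grad                       # optimizer update step
    theta_bar = lam * theta_bar + (1 - lam) * theta  # EMA update

# theta keeps jittering around the optimum at 0; theta_bar is much smoother.
print(theta, theta_bar)
```

Because the EMA averages out gradient noise, the averaged iterate typically sits closer to the optimum than any single late iterate.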

Implementation-wise, the best way to apply EMA to a classifier is to use the built-in `tf.train.ExponentialMovingAverage` function. However, the documentation doesn't provide a guide for how to cleanly use `tf.train.ExponentialMovingAverage` to construct an EMA-classifier. Since I've been playing with EMA recently, I thought it would be helpful to write a gentle guide to implementing an EMA-classifier in Tensorflow.

### Understanding `tf.train.ExponentialMovingAverage`

For those who wish to dive straight into the full codebase, you can find it here. To keep things self-contained, let's start with the code that constructs the classifier.
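The original snippet isn't reproduced here, but a minimal sketch of such a scoped classifier might look like the following. The architecture details are my own placeholder (far smaller than a real CIFAR-10 network), and the code is written against TF 2.x's `tf.compat.v1` module, onto which the post's TF 1.x `tf.*` calls map directly:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # the post predates eager execution

def classifier(x, reuse=None, getter=None):
    # Everything created inside lands under the "class/" variable scope.
    with tf1.variable_scope("class", reuse=reuse, custom_getter=getter):
        h = tf1.layers.conv2d(x, 32, 3, padding="same", activation=tf.nn.relu)
        h = tf1.layers.max_pooling2d(h, 2, 2)
        h = tf1.layers.flatten(h)
        logits = tf1.layers.dense(h, 10)  # 10 classes, e.g. CIFAR-10
    return logits

x = tf1.placeholder(tf.float32, [None, 32, 32, 3])
logits = classifier(x)
```

The `reuse` and `getter` arguments are unused for now; they become important once we want a second, EMA-backed copy of the classifier later on.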

Here, I use a fairly standard CNN architecture. The first thing to note is the use of variable scoping, which puts all of the classifier's variables within the scope `class/`. To create the classifier, we simply call `classifier` on the input batch.

Once the classifier is created in the computational graph, variable scoping allows for easy access to the classifier's trainable variables via `tf.get_collection`.
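Concretely (again a sketch against `tf.compat.v1`, with a toy dense layer of my own standing in for the CNN):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

def classifier(x):
    # Toy stand-in for the CNN; only the "class/" scoping matters here.
    with tf1.variable_scope("class", reuse=tf1.AUTO_REUSE):
        return tf1.layers.dense(x, 10)

x = tf1.placeholder(tf.float32, [None, 64])
logits = classifier(x)

# Variable scoping makes collecting the classifier's weights a one-liner.
var_class = tf1.get_collection(tf1.GraphKeys.TRAINABLE_VARIABLES, scope="class")
print([v.name for v in var_class])
```

Filtering by `scope="class"` is what keeps the EMA machinery from accidentally picking up variables belonging to other parts of the graph.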

After getting the list of trainable variables via `tf.get_collection`, we use `ema.apply`, which serves two purposes. First, it constructs an auxiliary (shadow) variable for each corresponding variable in `var_class` to hold the exponential moving average. Second, it returns a Tensorflow Op that updates the EMA variables. The `ema` object then provides access to each variable's EMA via the function `ema.average`.
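Both roles of `ema.apply` can be seen on a single toy variable (a sketch against `tf.compat.v1`; the decay value and the assignment standing in for an optimizer step are placeholders):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

with tf1.variable_scope("class"):
    w = tf1.get_variable("w", initializer=0.0)

var_class = tf1.get_collection(tf1.GraphKeys.TRAINABLE_VARIABLES, scope="class")

ema = tf1.train.ExponentialMovingAverage(decay=0.9)
ema_op = ema.apply(var_class)  # creates shadow variables, returns the update Op
w_ema = ema.average(w)         # the shadow (EMA) variable tracking w

step = tf1.assign(w, 1.0)      # stand-in for one optimizer update

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    sess.run(step)                         # w: 0.0 -> 1.0
    sess.run(ema_op)                       # shadow: 0.9 * 0.0 + 0.1 * 1.0
    w_val, ema_val = sess.run([w, w_ema])
    print(w_val, ema_val)
```

In a real training loop, you would group `ema_op` with the optimizer's train Op (e.g. via `tf.control_dependencies`) so the shadow variables are refreshed after every gradient step.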

### Populating the Classifier with the EMA Variables

So far, we've figured out how to create the EMA variables and how to access them. But what's the easiest way to make the classifier use the EMA variables? Here, we leverage the `custom_getter` argument of `tf.variable_scope`. According to the documentation, whenever you call `tf.get_variable`, the default getter retrieves an existing tensor according to the variable's name. A custom getter, however, can change the tensor that `tf.get_variable` returns.

To construct the custom getter, locally define `ema_getter` after you've already created the `ema` object.
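A common formulation of such a getter looks like this (a sketch against `tf.compat.v1`; the single toy variable stands in for the classifier's weights):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

with tf1.variable_scope("class"):
    w = tf1.get_variable("w", initializer=0.0)

ema = tf1.train.ExponentialMovingAverage(decay=0.9)
ema_op = ema.apply(tf1.get_collection(tf1.GraphKeys.TRAINABLE_VARIABLES,
                                      scope="class"))

def ema_getter(getter, name, *args, **kwargs):
    # Fetch the variable the default getter would return, then swap in
    # its EMA shadow variable (falling back if no shadow exists).
    var = getter(name, *args, **kwargs)
    ema_var = ema.average(var)
    return ema_var if ema_var is not None else var

# Reopening the scope with the custom getter now yields the shadow variable.
with tf1.variable_scope("class", reuse=True, custom_getter=ema_getter):
    w_test = tf1.get_variable("w")

print(w_test.name)
```

Defining `ema_getter` locally, after `ema` exists, lets the closure capture the `ema` object without any extra plumbing.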

To apply the EMA classifier at test time, we simply call `classifier` again, this time with the custom getter `ema_getter`.
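Putting the pieces together (a condensed sketch against `tf.compat.v1`; my toy dense layer stands in for the full CNN):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

def classifier(x, reuse=None, getter=None):
    # Tiny stand-in for the CNN; only the scoping/getter mechanics matter.
    with tf1.variable_scope("class", reuse=reuse, custom_getter=getter):
        return tf1.layers.dense(x, 10)

x = tf1.placeholder(tf.float32, [None, 4])
logits = classifier(x)  # training head, backed by the raw weights

var_class = tf1.get_collection(tf1.GraphKeys.TRAINABLE_VARIABLES, scope="class")
ema = tf1.train.ExponentialMovingAverage(decay=0.9)
ema_op = ema.apply(var_class)  # run this Op after each training step

def ema_getter(getter, name, *args, **kwargs):
    var = getter(name, *args, **kwargs)
    ema_var = ema.average(var)
    return ema_var if ema_var is not None else var

# Test-time head: identical graph structure, but every get_variable call
# now resolves to the corresponding EMA shadow variable.
logits_ema = classifier(x, reuse=True, getter=ema_getter)
```

Note that `reuse=True` is essential: it makes the second call look up the existing `class/` variables (through `ema_getter`) rather than trying to create new ones.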

And that’s it! We can now verify that applying EMA does in fact improve the performance of the classifier on the CIFAR-10 test data set.

You can find the full code for training the CIFAR-10 classifier in the codebase linked above.
