
Supplementary

1 Discriminator

For the discriminator, we use different configurations for different numbers of loop detectors. Table 1 shows the configuration of the discriminator for each number of loop detectors. Conv2D, Flatten, and FC denote 2-D convolution, flatten, and fully connected layers, respectively. We use batch normalization after each convolution layer, and the activation function after each layer is the hyperbolic tangent (Tanh). A sketch of this architecture is given after Table 1. Code is available at: https://anonymous.4open.science/r/TrafficFlowGAN.

Discriminator network structure

Layers    #loop=3    #loop=4    #loop=6
Conv2D    3×1×4      4×2×4      3×2×4
Conv2D    3×1×8      4×2×8      3×2×8
Conv2D    3×1×16     4×2×16     3×2×16
Flatten
FC        144        144        240
FC        64         64         64

Layers    #loop=10   #loop=14   #loop=18
Conv2D    3×2×4      3×2×4      3×2×4
Conv2D    3×2×8      3×2×8      3×2×8
Conv2D    3×2×12     3×2×12     3×2×12
Flatten
FC        96         96         144
FC        64         64         64
Table 1: Network structures of the discriminator for different numbers of loop detectors. For Conv2D layers, the first two numbers give the kernel size and the last number gives the number of output channels. For FC layers, the number gives the number of hidden neurons.
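The table translates directly into a small convolutional network. Below is a minimal PyTorch sketch for the #loop=3 column; the input channel count, stride, and padding are not specified in the paper, so the values used here (one input channel, unit stride, no padding) are assumptions.

import torch
import torch.nn as nn

def make_discriminator(kernel=(3, 1), channels=(4, 8, 16), fc=(144, 64)):
    # Defaults correspond to the #loop=3 column of Table 1.
    layers, in_ch = [], 1              # assumed: single input channel
    for out_ch in channels:
        layers += [
            nn.Conv2d(in_ch, out_ch, kernel_size=kernel),
            nn.BatchNorm2d(out_ch),    # batch norm after each conv layer
            nn.Tanh(),                 # Tanh after each layer
        ]
        in_ch = out_ch
    layers.append(nn.Flatten())
    for width in fc:                   # FC widths from Table 1
        layers += [nn.LazyLinear(width), nn.Tanh()]
    return nn.Sequential(*layers)

# Other columns only change the kernel and the FC widths, e.g. #loop=6:
# make_discriminator(kernel=(3, 2), channels=(4, 8, 16), fc=(240, 64))

nn.LazyLinear infers the flattened input size at the first forward pass, so the sketch does not need to fix the (unstated) spatial dimensions of the input.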

2 Generator

For our conditional flow model, the prior network (p-net) consists of a network for the prior mean 𝝁 and a network for the prior standard deviation 𝝈. Each of these two prior networks has 6 affine coupling layers, and each layer is a fully connected layer with 256 neurons. We use the Leaky Rectified Linear Unit (Leaky ReLU) as the activation function.
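Since each layer is stated to be a fully connected layer of 256 neurons, a natural reading is that each p-net branch is a 6-layer MLP. Below is a minimal sketch under that reading; the input dimension (the conditioning variable), the output dimension (the latent size), and the Leaky ReLU slope are assumptions not stated above.

import torch.nn as nn

def make_prior_branch(cond_dim, z_dim, hidden=256, n_layers=6):
    # 6 fully connected layers of 256 neurons with Leaky ReLU in between.
    layers, d = [], cond_dim
    for _ in range(n_layers - 1):
        layers += [nn.Linear(d, hidden), nn.LeakyReLU(0.2)]  # slope assumed
        d = hidden
    layers.append(nn.Linear(d, z_dim))  # map to the latent dimension
    return nn.Sequential(*layers)

# Hypothetical dimensions for illustration only:
mu_net = make_prior_branch(cond_dim=2, z_dim=2)     # prior mean branch
sigma_net = make_prior_branch(cond_dim=2, z_dim=2)  # prior std branch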

Each affine coupling layer in the generator consists of a scale function (k-net) and a translation function (b-net). In the experiments, we use 8 affine coupling layers, each with its own k-net and b-net. Every layer in the k-net and b-net is a fully connected layer with 256 neurons. The activation function after each layer is the Rectified Linear Unit (ReLU). We use batch normalization in our experiments.
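A minimal PyTorch sketch of one such coupling layer follows. It assumes the standard RealNVP-style split (half the variables pass through unchanged, the other half are scaled and shifted) and an exp() parameterization of the scale; neither of these, nor the depth of the k-net/b-net MLPs, is stated explicitly above.

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=256, depth=2):  # depth is an assumption
        super().__init__()
        assert dim % 2 == 0, "sketch assumes an even input dimension"
        half = dim // 2
        self.k_net = self._mlp(half, half, hidden, depth)  # scale function
        self.b_net = self._mlp(half, half, hidden, depth)  # translation function

    @staticmethod
    def _mlp(d_in, d_out, hidden, depth):
        # FC layers of 256 neurons with batch norm and ReLU, as described above.
        layers, d = [], d_in
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.BatchNorm1d(hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, d_out))
        return nn.Sequential(*layers)

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)       # keep x1, transform x2
        k = self.k_net(x1)                # log-scale
        y2 = x2 * torch.exp(k) + self.b_net(x1)
        log_det = k.sum(dim=-1)           # log|det Jacobian| of the layer
        return torch.cat([x1, y2], dim=-1), log_det

Stacking 8 such layers, swapping the two halves between layers so that every dimension is eventually transformed, gives the generator described above.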