Supplementary
1 Discriminator
For the discriminator, we use different configurations for different numbers of loop detectors. Table 1 shows the discriminator configurations under each number of loop detectors. Conv2D, Flatten, and FC denote a 2-D convolution layer, a flatten layer, and a fully connected layer, respectively. We apply batch normalization after each convolution layer. The activation function after each layer is the hyperbolic tangent (Tanh). Code is available at: https://anonymous.4open.science/r/TrafficFlowGAN
Table 1: Discriminator network structure

| Layers | loop=3 | loop=4 | loop=6 |
|---|---|---|---|
| Conv2D | | | |
| Conv2D | | | |
| Conv2D | | | |
| Flatten | | | |
| FC | 144 | 144 | 240 |
| FC | 64 | 64 | 64 |

| Layers | loop=10 | loop=14 | loop=18 |
|---|---|---|---|
| Conv2D | | | |
| Conv2D | | | |
| Conv2D | | | |
| Flatten | | | |
| FC | 96 | 96 | 144 |
| FC | 64 | 64 | 64 |
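The FC head in Table 1 can be sketched as follows. This is a minimal NumPy sketch of the loop=3 configuration only; the Conv2D kernel sizes and channel counts are not specified in the table, so the sketch starts from the flattened feature vector, and `feat_dim` and the weight initialization are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

feat_dim = 288  # assumed flattened feature size (not given in Table 1)

# FC 144 and FC 64 output sizes are taken from the loop=3 column of Table 1.
w1 = rng.normal(0, 0.1, (feat_dim, 144)); b1 = np.zeros(144)
w2 = rng.normal(0, 0.1, (144, 64));       b2 = np.zeros(64)

def discriminator_head(x):
    """Fully connected head with Tanh activations, as described in Section 1."""
    h = np.tanh(x @ w1 + b1)  # FC 144 + Tanh
    h = np.tanh(h @ w2 + b2)  # FC 64 + Tanh
    return h

x = rng.normal(size=(5, feat_dim))  # batch of 5 flattened feature vectors
out = discriminator_head(x)
print(out.shape)  # (5, 64)
```

The Tanh activations bound every output to (-1, 1); batch normalization after the convolution layers is omitted here since those layers' shapes are not given.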
2 Generator
For our conditional flow model, the prior network (p-net) consists of a network for the prior mean and a network for the prior standard deviation. Each of these two prior networks has 6 layers, and each layer is a fully connected layer with 256 neurons. We use the Leaky Rectified Linear Unit (Leaky ReLU) as the activation function.
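The p-net described above can be sketched as a 6-layer MLP with Leaky ReLU activations. The input and output dimensions below are placeholders (the paper does not state them), and the choice of no activation on the final layer is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def make_pnet(in_dim, out_dim, hidden=256, depth=6):
    """Build weights for one prior network: `depth` FC layers of `hidden` units."""
    dims = [in_dim] + [hidden] * (depth - 1) + [out_dim]
    return [(rng.normal(0, 0.05, (a, b)), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def pnet_forward(params, x):
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:  # assumed: no activation on the output layer
            x = leaky_relu(x)
    return x

mean_net = make_pnet(in_dim=2, out_dim=3)  # dimensions are placeholders
cond = rng.normal(size=(4, 2))             # batch of 4 condition vectors
mu = pnet_forward(mean_net, cond)
print(mu.shape)  # (4, 3)
```

A second network of the same shape would produce the prior standard deviation (typically parameterized as a log-std for positivity, though the paper does not specify this).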
Each affine coupling layer in the generator consists of a scale function (k-net) and a translation function (b-net). In the experiments, we use 8 layers for the k-net and the b-net, respectively, and every layer is a fully connected layer with 256 neurons. The activation function after each layer is the Rectified Linear Unit (ReLU). We use batch normalization in our experiments.
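The coupling transform itself can be sketched as follows. For brevity the k-net and b-net stand-ins below are single linear maps rather than the paper's 8-layer, 256-unit ReLU networks, and the input dimension is an assumption; the sketch only illustrates the affine coupling transform and its exact inverse.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4  # assumed input dimension (placeholder)
wk = rng.normal(0, 0.1, (d // 2, d // 2))
wb = rng.normal(0, 0.1, (d // 2, d // 2))

def k_net(x): return x @ wk  # stand-in for the scale network
def b_net(x): return x @ wb  # stand-in for the translation network

def coupling_forward(x):
    """Transform the second half of x; the first half passes through unchanged."""
    x1, x2 = x[:, :d // 2], x[:, d // 2:]
    y2 = x2 * np.exp(k_net(x1)) + b_net(x1)
    return np.concatenate([x1, y2], axis=1)

def coupling_inverse(y):
    """Exact inverse: undo translation, then scale, using the untouched half."""
    y1, y2 = y[:, :d // 2], y[:, d // 2:]
    x2 = (y2 - b_net(y1)) * np.exp(-k_net(y1))
    return np.concatenate([y1, x2], axis=1)

x = rng.normal(size=(3, d))
y = coupling_forward(x)
print(np.allclose(coupling_inverse(y), x))  # True
```

Because k-net and b-net only ever see the untouched half of the input, the layer is invertible regardless of how complex those networks are, and its log-determinant is simply the sum of the k-net outputs.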