This repository contains the full implementation, experiments, and analysis for our CS436/536 final project:
“Conditional GANs with Auxiliary Discriminative Classifier: Reproduction and Early Extensions”
(Crystal Sembhi, Mudit Golchha, Binghamton University)
This project reproduces ADC-GAN (Hou et al., ICML 2022) and compares it against AC-GAN and PD-GAN under a unified BigGAN/CIFAR-10 framework.
It also introduces a small extension, γ-ADC-GAN, which scales the auxiliary discriminative-classifier loss by a factor γ.
Conditional GANs aim to generate class-specific images.
The commonly used AC-GAN trains its auxiliary classifier only on real images, which causes:
- low intra-class diversity
- mode collapse tendencies
- training instability
ADC-GAN instead introduces a discriminative classifier trained on both real and fake samples, doubling the label space:
- Real class k → label 2k
- Fake class k → label 2k + 1

This improves stability, diversity, and early FID/IS performance.
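The 2K-way label mapping above can be sketched in PyTorch as follows. This is a minimal illustration, not the paper's code; the function names are ours, and the classifier head is assumed to emit 2K logits.

```python
import torch
import torch.nn.functional as F

def adc_labels(class_labels: torch.Tensor, is_real: bool) -> torch.Tensor:
    """Map K-way class labels to 2K-way discriminative labels:
    real class k -> 2k, fake class k -> 2k + 1."""
    return 2 * class_labels + (0 if is_real else 1)

def adc_classifier_loss(logits_real: torch.Tensor,
                        logits_fake: torch.Tensor,
                        y_real: torch.Tensor,
                        y_fake: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over the 2K joint (class, real/fake) labels,
    applied to both the real and the generated batch."""
    loss_real = F.cross_entropy(logits_real, adc_labels(y_real, is_real=True))
    loss_fake = F.cross_entropy(logits_fake, adc_labels(y_fake, is_real=False))
    return loss_real + loss_fake
```

Because the classifier must also separate real from fake within each class, its gradient carries a discriminative signal that the real-only AC-GAN classifier lacks.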
In this project:
- Reproduce ADC-GAN on CIFAR-10 using the BigGAN backbone
- Implement and compare AC-GAN and PD-GAN using identical training settings
- Develop γ-ADC-GAN, a lightweight extension using γ ∈ {1.0, 0.5}
- Evaluate all models on CIFAR-10 and a custom 1D synthetic task
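The γ scaling in γ-ADC-GAN can be sketched as below, assuming a hinge adversarial loss (as in the BigGAN backbone) plus a γ-weighted classifier term; function and argument names are illustrative, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def d_loss_gamma_adc(d_real: torch.Tensor,
                     d_fake: torch.Tensor,
                     clf_real: torch.Tensor,
                     clf_fake: torch.Tensor,
                     gamma: float = 0.5) -> torch.Tensor:
    """Discriminator objective: hinge adversarial loss plus the
    gamma-scaled auxiliary classifier terms. gamma = 1.0 recovers
    plain ADC-GAN; gamma = 0.5 halves the auxiliary signal."""
    adv = F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()
    return adv + gamma * (clf_real + clf_fake)
```

Setting γ below 1 trades some class-conditioning pressure for a smoother adversarial signal, which is the effect the γ = 0.5 runs probe.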
```
Conditional-GANs-with-Auxiliary-Discriminative-Classifier/
├── Custom_py
│   ├── custom_losses.py
│   └── custom_train_fns.py
├── experiments
│   ├── acgan_12k_20251205-000221
│   │   ├── metrics.csv
│   │   └── notes.md
│   ├── acgan_cifar10_biggan_20251203-170409
│   │   ├── metrics.csv
│   │   └── notes.md
│   ├── adcgan_cifar10_biggan_20251116-065557
│   │   ├── metrics.csv
│   │   └── notes.md
│   ├── adcgan_gamma05_12k_20251205-002946
│   │   ├── metrics.csv
│   │   └── notes.md
│   ├── adcgan_gamma05_20251204-004734
│   │   ├── metrics.csv
│   │   └── notes.md
│   ├── pdgan_12k_20251204-232757
│   │   ├── metrics.csv
│   │   └── notes.md
│   ├── pdgan_cifar10_biggan_20251202-203055
│   │   └── notes.md
│   └── pdgan_cifar10_biggan_20251203-163507
│       ├── metrics.csv
│       └── notes.md
├── Graph and Plots
│   ├── acgan_synthetic_1d.png
│   ├── adcgan_fid_cifar10.png
│   ├── adcgan_is_cifar10.png
│   ├── adcgan_synthetic_1d.png
│   ├── combine_losses.py
│   ├── combine_samples.py
│   ├── dloss_1d.png
│   ├── fid_0_12k_comparison.png
│   ├── fid_adc_pd_ac.png
│   ├── gamma_adcgan_synthetic_1d.png
│   ├── gloss_1d.png
│   ├── is_0_12k_comparison.png
│   ├── is_adc_pd_ac.png
│   ├── pdgan_synthetic_1d.png
│   ├── synthetic_1d_losses.png
│   └── synthetic_1d_samples.png
├── Main Notebook
│   └── MAIN_CODEBASE.ipynb
└── README.md
```
Install dependencies:

```shell
pip install torch torchvision tqdm numpy matplotlib
```

Inside BigGAN-PyTorch/:
ADC-GAN:

```shell
python train.py --loss adcgan --dataset C10 --use_ema --batch_size 50 --num_D_steps 4 --save_every 2000 --test_every 2000 --experiment_name adcgan_run
```

AC-GAN:

```shell
python train.py --loss acgan --dataset C10 --use_ema --experiment_name acgan_run
```

PD-GAN:

```shell
python train.py --loss hinge --dataset C10 --projection --use_ema --experiment_name pdgan_run
```

Set γ inside custom_losses.py / custom_train_fns.py:
```python
G_lambda = gamma
D_lambda = gamma
```

Run:

```shell
python train.py --loss adcgan --experiment_name gamma05_adcgan --use_ema
```

The synthetic dataset is a 3-mode Gaussian mixture with means at -4, 0, and +4.
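A minimal NumPy sketch of such a dataset (the per-mode standard deviation, mixing weights, and seed are assumptions; see train_1d.py for the actual values):

```python
import numpy as np

def sample_mixture(n: int, std: float = 0.5, seed: int = 0):
    """Draw n points from a 3-mode Gaussian mixture with means -4, 0, +4.
    Returns (x, y), where y in {0, 1, 2} is the mode/class label."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 3, size=n)           # equiprobable modes
    means = np.array([-4.0, 0.0, 4.0])
    x = means[y] + std * rng.normal(size=n)  # Gaussian noise around each mean
    return x, y
```

The well-separated modes make mode collapse easy to see: a collapsed generator covers one or two clusters instead of all three.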
Run:
```shell
python train_1d.py
```

This produces:
- acgan_synthetic_1d.png
- pdgan_synthetic_1d.png
- adcgan_synthetic_1d.png
- gamma_adcgan_synthetic_1d.png
- dloss_1d.png
- gloss_1d.png
These visualizations help compare diversity and training stability.
CIFAR-10 (BigGAN backbone), qualitative summary:

| Model | Early FID trend | Behavior |
|---|---|---|
| AC-GAN | Slow improvement | Unstable classifier |
| PD-GAN | Moderate | Noisy projection term |
| ADC-GAN | Best early FID | Stable and diverse |
| γ-ADC-GAN (0.5) | Slight FID improvement at several points | Smoothest overall training |
1D synthetic task:

| Model | Mode Coverage | Stability |
|---|---|---|
| AC-GAN | Collapses to 1–2 modes | Unstable |
| PD-GAN | Covers all modes | Noisy gradients |
| ADC-GAN | Best accuracy | Smooth discriminator signals |
| γ-ADC-GAN (0.5) | Preserves modes | Smoothest training curve |
Crystal Sembhi:
- Trained ADC-GAN and PD-GAN on CIFAR-10.
- Trained AC-GAN on the 1D synthetic dataset.
- Ran the evaluation metrics.

Mudit Golchha:
- Trained γ-ADC-GAN.
- Created graphs and plots for all models.
- Ran the evaluation metrics.