Conditional generative adversarial networks (cGANs) are often trained with a reconstruction loss in addition to the adversarial loss to compensate for the instability of adversarial training. However, reconstruction losses are known to conflict with the adversarial objective and to suppress output diversity (mode collapse). This problem is acknowledged by the community but is typically either ignored or addressed with sophisticated approaches. We promote a surprisingly simple yet overlooked alternative: replacing the reconstruction loss with the energy distance. With a minor implementation change, this resolves the conflict with the adversarial objective, prevents mode collapse, and produces high-quality results on several image-processing tasks.
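To illustrate the idea, the following is a minimal sketch (not the paper's exact implementation) of an energy-distance estimator as it is commonly used in generative modeling: given one real sample and two independent generator samples per condition, the term constant in the generator is dropped. The function name and the NumPy-based setup are our own assumptions for illustration.

```python
import numpy as np

def energy_distance(x, y1, y2):
    """Sketch of a per-batch energy-distance estimator.

    x       -- real samples, shape (batch, ...)
    y1, y2  -- two independent generator samples for the same
               conditions, each shape (batch, ...)

    Estimates 2*E||x - y|| - E||y - y'|| (the E||x - x'|| term is
    constant with respect to the generator and is omitted).
    """
    def l2(a, b):
        # Per-sample Euclidean distance after flattening each sample.
        return np.linalg.norm((a - b).reshape(len(a), -1), axis=1)

    # 2*E||x - y|| is estimated by ||x - y1|| + ||x - y2||.
    return float((l2(x, y1) + l2(x, y2) - l2(y1, y2)).mean())
```

Note that, unlike an L1/L2 reconstruction loss, this estimator is minimized when the generator matches the conditional distribution rather than its mean: the subtracted `l2(y1, y2)` term rewards spread between independent samples, which is what counteracts mode collapse.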