question about batch size and learning rate #10

@tejas-gokhale

Description

Hello authors,

I have access to a GPU server that can handle larger batch sizes, say around 128 (or more). I believe this would reduce the training time roughly 4x. What learning rate would you recommend at higher batch sizes? In your experience, is there a good heuristic you follow when training GANs for adjusting the learning rate along with the batch size?
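For context, the heuristic I'm aware of is the linear scaling rule (scale the learning rate proportionally with the batch size), though I've also seen the gentler square-root variant suggested for Adam. A minimal sketch of what I mean, where the base batch size of 32 and base learning rate of 2e-4 are my own placeholder assumptions, not values from your config:

```python
import math


def scaled_lr(base_lr: float, base_batch: int, new_batch: int, rule: str = "linear") -> float:
    """Scale a learning rate for a larger batch size.

    'linear' follows the linear scaling rule (lr grows in proportion
    to batch size); 'sqrt' is the square-root variant sometimes
    suggested for Adam.
    """
    ratio = new_batch / base_batch
    if rule == "linear":
        return base_lr * ratio
    if rule == "sqrt":
        return base_lr * math.sqrt(ratio)
    raise ValueError(f"unknown rule: {rule}")


# Assumed baseline: batch size 32 at lr 2e-4 (placeholder values).
print(scaled_lr(2e-4, 32, 128, "linear"))  # 8e-4
print(scaled_lr(2e-4, 32, 128, "sqrt"))    # ~4e-4
```

I'm not sure either rule transfers cleanly to GAN training, which is why I'd value your experience here.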

On a slightly unrelated note -- have you tried using Distributed Data Parallel (DDP) to speed up training? We've been trying to use it but keep running into strange errors; maybe you have some insights? If we manage to figure it out, I'd be happy to share the code and contribute it here.
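In case it helps to diagnose, the pattern we've been attempting looks roughly like the sketch below, with toy stand-in networks and hyperparameters (the real code would use this repo's models and config); launched with `torchrun`:

```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy stand-ins for the actual generator and discriminator
    netG = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32)).cuda(local_rank)
    netD = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1)).cuda(local_rank)

    # Wrap each network separately; device_ids pins it to this process's GPU.
    # If a forward pass skips some parameters (common in multi-branch GANs),
    # find_unused_parameters=True may be needed, at some speed cost.
    netG = DDP(netG, device_ids=[local_rank])
    netD = DDP(netD, device_ids=[local_rank])

    optG = torch.optim.Adam(netG.parameters(), lr=2e-4)
    optD = torch.optim.Adam(netD.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    # DistributedSampler shards data so each process sees a distinct slice;
    # effective batch size = per_gpu_batch * world_size
    data = TensorDataset(torch.randn(1024, 32))
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle differently each epoch
        for (real,) in loader:
            real = real.cuda(local_rank)
            noise = torch.randn(real.size(0), 64, device=real.device)
            ones = torch.ones(real.size(0), 1, device=real.device)
            zeros = torch.zeros(real.size(0), 1, device=real.device)

            # --- discriminator step ---
            optD.zero_grad()
            fake = netG(noise).detach()
            lossD = bce(netD(real), ones) + bce(netD(fake), zeros)
            lossD.backward()
            optD.step()

            # --- generator step ---
            optG.zero_grad()
            lossG = bce(netD(netG(noise)), ones)
            lossG.backward()
            optG.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
```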

Thanks!
