

```
python scripts/txt2img.py --prompt "a virus monster is playing guitar, oil on canvas" --ddim_eta 0.0 --n_samples 4 --n_iter 4 --scale 5.0 --ddim_steps 50
```

This will save each sample individually as well as a grid of size `n_iter` x `n_samples` at the specified output location (default: `outputs/txt2img-samples`).
Quality, sampling speed and diversity are best controlled via the `scale`, `ddim_steps` and `ddim_eta` arguments.
As a rule of thumb, higher values of `scale` produce better samples at the cost of reduced output diversity.
Furthermore, increasing `ddim_steps` generally also gives higher-quality samples, but returns diminish for values > 250.
Fast sampling (i.e. low values of `ddim_steps`) while retaining good quality can be achieved by using `--ddim_eta 0.0`.
Faster sampling (i.e. even lower values of `ddim_steps`) while retaining good quality can be achieved by using `--ddim_eta 0.0` together with `--plms` (see Pseudo Numerical Methods for Diffusion Models on Manifolds).

#### Beyond 256²

For certain inputs, simply running the model in a convolutional fashion on larger features than it was trained on can sometimes produce interesting results.
To try it out, tune the `H` and `W` arguments (which will be integer-divided by 8 in order to calculate the corresponding latent size), e.g. run `scripts/txt2img.py` with `--H 384 --W 1024`.

Sampling can also be done via `scripts/sample_diffusion.py`, e.g. run

```
CUDA_VISIBLE_DEVICES=<GPU_ID> python scripts/sample_diffusion.py -r models/ldm/<model_spec>/model.ckpt -l <logdir> -n <n_samples> --batch_size <batch_size> -c <ddim_steps> -e <eta>
```

# Train your own LDMs

## Data preparation

### Faces

For downloading the CelebA-HQ and FFHQ datasets, proceed as described in the taming-transformers repository.
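As a rough orientation only, here is a minimal sketch of where the prepared face data could be placed, assuming the `data/celebahq` and `data/ffhq` locations conventionally used by the taming-transformers data loaders; these paths and the symlink approach are assumptions, so verify them against that repository's instructions:

```
# Illustrative sketch, not part of this repo's own instructions:
# the taming-transformers loaders are assumed to look for the face data under data/.
# Adjust the source paths to wherever you downloaded the datasets.
mkdir -p data
ln -s /path/to/celebahq data/celebahq                 # CelebA-HQ data
ln -s /path/to/ffhq/images1024x1024 data/ffhq         # FFHQ images1024x1024 folder
```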
