stable-diffusion

Photo-realistic visuals are only a prompt away. Use Stable Diffusion to generate striking images from any text prompt and find never-ending inspiration.

Required: 10,000 $wRAI
58.7k runs
Input

  • prompt: Input prompt
  • negative_prompt: Things not to see in the output
  • image_dimensions: Pixel dimensions of the output image
  • num_inference_steps: Number of denoising steps (minimum: 1; maximum: 500)
  • guidance_scale: Scale for classifier-free guidance (minimum: 1; maximum: 20)
  • scheduler: The scheduler to use
  • seed: Random seed; leave blank to randomize
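As a sketch of how these inputs fit together, here is one way to assemble a request payload that enforces the documented ranges. The function name and the idea of a JSON payload are assumptions; only the parameter names, defaults, and ranges come from the list above:

```python
# Hypothetical helper that packages this model's inputs into a JSON-ready
# dict, enforcing the ranges documented above. The transport/endpoint is
# not shown here; only the parameters match this page.

def build_payload(prompt,
                  negative_prompt="",
                  image_dimensions="512x512",
                  num_inference_steps=50,
                  guidance_scale=7.5,
                  scheduler="K_EULER",
                  seed=None):
    """Validate the documented ranges and return a JSON-ready dict."""
    if not 1 <= num_inference_steps <= 500:
        raise ValueError("num_inference_steps must be between 1 and 500")
    if not 1 <= guidance_scale <= 20:
        raise ValueError("guidance_scale must be between 1 and 20")
    payload = {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "image_dimensions": image_dimensions,
        "num_inference_steps": num_inference_steps,
        "guidance_scale": guidance_scale,
        "scheduler": scheduler,
    }
    if seed is not None:  # leave blank (None) to let the server randomize
        payload["seed"] = seed
    return payload

payload = build_payload("city moscow, hd, neon lights, cyberpunk", seed=794286)
```

Leaving `seed` unset omits it from the payload, matching the "leave blank to randomize" behavior described above.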
Hold at least 10,000 wRAI to use this model
The Multi AI platform is completely free, but most models are accessible only to wRAI token holders. If you have any questions, feel free to ask in our Telegram chat.
Example input 1

  • seed: 794286
  • prompt: city moscow, hd, neon lights, cyberpunk
  • scheduler: K_EULER
  • num_outputs: 1
  • guidance_scale: 7.5
  • num_inference_steps: 50
Example input 2

  • seed: 76667
  • prompt: City Moscow, traffic jams, realism
  • scheduler: K_EULER
  • num_outputs: 1
  • guidance_scale: 7.5
  • num_inference_steps: 50
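Both examples use a guidance_scale of 7.5. This setting controls classifier-free guidance: at each denoising step the model predicts noise twice, with and without the prompt, and extrapolates toward the prompt-conditioned prediction. A minimal sketch, using plain floats as stand-ins for the latent noise tensors:

```python
# Minimal illustration of classifier-free guidance. Real predictions are
# latent tensors; plain floats are used here as stand-ins.

def cfg(uncond_pred, cond_pred, guidance_scale):
    """Combine unconditional and prompt-conditioned noise predictions."""
    return uncond_pred + guidance_scale * (cond_pred - uncond_pred)

# A scale of 1 reproduces the conditioned prediction; larger scales
# push further from the unconditional one.
assert cfg(0.0, 1.0, 1.0) == 1.0
assert cfg(0.0, 1.0, 7.5) == 7.5
```

Higher scales follow the prompt more literally at some cost to image diversity, which is why a moderate value like 7.5 is a common default.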

Readme

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.

  • Developed by: Robin Rombach, Patrick Esser
  • Model type: Diffusion-based text-to-image generation model
  • Language(s): English

We’ve generated a version of Stable Diffusion that runs very fast but can only produce 512x512 or 768x768 images. We’ll keep hosting versions of Stable Diffusion that generate variable-sized images, so don’t worry if you need variable dimensions.
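For context on those fixed sizes: as a latent diffusion model, Stable Diffusion denoises in a latent space that the VAE downsamples by a factor of 8, so each supported resolution corresponds to a small latent grid. The factor and channel count below are the standard Stable Diffusion v1/v2 values, not taken from this page:

```python
# Latent grid sizes for the two fixed resolutions mentioned above.
# Downsample factor 8 and 4 latent channels are the standard SD values
# (an assumption here, not stated on this page).
DOWNSAMPLE = 8
LATENT_CHANNELS = 4

def latent_shape(pixels):
    """Return the (channels, height, width) latent shape for a square image."""
    side = pixels // DOWNSAMPLE
    return (LATENT_CHANNELS, side, side)

print(latent_shape(512))  # (4, 64, 64)
print(latent_shape(768))  # (4, 96, 96)
```

Denoising over a 64x64 grid instead of 512x512 pixels is a large part of why fixed-size generation can run so fast.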

Direct Use

The model is intended for research purposes only. Possible research areas and tasks include:

  • Safe deployment of models which have the potential to generate harmful content.
  • Probing and understanding the limitations and biases of generative models.
  • Generation of artworks and use in design and other artistic processes.
  • Applications in educational or creative tools.
  • Research on generative models.