
instruct-pix2pix

The developers of the instruct-pix2pix model offer a new approach to image editing: given an input image and a written instruction that tells the model what to do, the model edits the image accordingly. instruct-pix2pix is trained on editing examples generated with a language model (GPT-3) and a text-to-image model (Stable Diffusion).
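As a concrete illustration of that workflow, here is a minimal sketch using the publicly released instruct-pix2pix checkpoint through the Hugging Face diffusers library. The checkpoint name, device, and file paths are assumptions made for the example; this is not a description of how the Multi AI backend itself runs the model.

# Minimal sketch: edit an image with a written instruction using the public
# instruct-pix2pix checkpoint via Hugging Face diffusers (illustrative only).
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = Image.open("input.png").convert("RGB")   # hypothetical input file
edited = pipe(prompt="make it look like winter", image=image).images[0]
edited.save("edited.png")                        # hypothetical output file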

Hold: 0
Required: 5,000 $wRAI
54.6k runs
Input

image
An image that will be repainted according to the prompt
prompt
Prompt to guide the image generation
negative_prompt
The prompt or prompts describing what you do not want to see in the generated image. Ignored when guidance is not used.
num_inference_steps
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. (minimum: 1; maximum: 500)
guidance_scale
Scale for classifier-free guidance. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. (minimum: 1; maximum: 20)
image_guidance_scale
Image guidance scale pushes the generated image towards the initial image. A higher image guidance scale encourages the model to generate images that are closely linked to the source image, usually at the expense of lower image quality. (minimum: 1)
scheduler
Choose a scheduler.
seed
Random seed. Leave blank to randomize the seed
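To make these inputs concrete, the sketch below shows how each documented parameter could map onto an argument of the diffusers pipeline loaded in the snippet above (it continues from that snippet's pipe and image). The argument names are the diffusers API; whether this platform forwards them identically is an assumption. The scheduler input corresponds to swapping pipe.scheduler, which the example reproduction further below demonstrates.

# Sketch: passing the documented inputs as pipeline arguments
# (placeholder values chosen only for illustration).
generator = torch.Generator("cuda").manual_seed(1234)   # seed; omit for a random seed

result = pipe(
    prompt="add fireworks to the sky",        # prompt
    image=image,                              # image to be repainted
    negative_prompt="blurry, low quality",    # negative_prompt
    num_inference_steps=50,                   # num_inference_steps (1-500)
    guidance_scale=7.5,                       # guidance_scale (1-20)
    image_guidance_scale=1.5,                 # image_guidance_scale (>= 1)
    generator=generator,
).images[0]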
Hold at least 5,000 wRAI to use this model
The Multi AI platform is completely free to use, but most models are accessible only to wRAI token holders. If you have any questions, feel free to ask in our Telegram chat.
Example input
guidance_scale: 7.5
image:
image_guidance_scale: 1.5
num_inference_steps: 100
num_outputs: 1
prompt: turn him into cyborg
scheduler: K_EULER_ANCESTRAL
seed: 437351
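Reproducing this example input with the same diffusers-based sketch would look roughly as follows; K_EULER_ANCESTRAL corresponds to diffusers' EulerAncestralDiscreteScheduler, num_outputs: 1 means a single image is returned, and the exact result still depends on hardware and library versions.

# Sketch: the example input above expressed against the pipeline from the
# earlier snippets (illustrative only).
from diffusers import EulerAncestralDiscreteScheduler

pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # scheduler: K_EULER_ANCESTRAL
generator = torch.Generator("cuda").manual_seed(437351)                              # seed: 437351

cyborg = pipe(
    prompt="turn him into cyborg",
    image=image,                     # the example's input image
    num_inference_steps=100,
    guidance_scale=7.5,
    image_guidance_scale=1.5,
    generator=generator,
).images[0]
cyborg.save("cyborg.png")            # hypothetical output path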