Understanding Midjourney prompt codes

Jan 3, 2026

Guide to Midjourney Prompt Codes and Structuring Prompts

Midjourney prompts consist of a description of what you want to see, followed by optional parameters (the "codes") that modify how the image is generated. This guide covers all the important Midjourney parameters (for versions 5.2 through 7 and current features) and tips on structuring your prompts for best results.

Prompt Structure Basics

When writing a prompt, describe the subject or scene in plain language first. After the description, add any parameters (each beginning with --) separated by spaces. Parameters should always come at the end of the prompt text.

/imagine prompt A majestic castle on a hill at sunset --ar 16:9 --v 7 --stylize 250

In this prompt:

  • Description: “A majestic castle on a hill at sunset”

  • Parameters: --ar 16:9 (aspect ratio), --v 7 (model version 7), --stylize 250 (stylization level).

Formatting rules: Make sure there’s a space before each -- code, and don’t put punctuation right before or after a parameter. All parameters should come after the descriptive text. For example, castle at sunset--ar 16:9 (missing space) or castle at sunset --ar 16:9, (with a comma) are incorrect.

Emphasizing or excluding concepts: Midjourney allows weighting parts of your prompt using the :: syntax. By default, all parts of a prompt have equal weight (1 each). You can add a number after :: to assign higher or lower weight to a section. For example: cat::2 dog::1 tells Midjourney to emphasize “cat” twice as much as “dog”. Negative weights can also be used to de-emphasize something (e.g. realistic::-0.5 to reduce realism). In practice, the --no parameter is an easy way to add a negative weight of about -0.5 for things you want to exclude. For instance, adding --no water to a prompt tries to ensure no water appears in the image.
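
As an illustration, here is one way a full prompt might combine weights and an exclusion (the subject and numbers are only examples):

/imagine prompt hot air balloon::2 desert canyon::1 --no clouds --ar 3:2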

Tip: Keep your descriptive text clear and focused on what you want in the image. Leave out style instructions in the text if you plan to use style parameters or reference images, to avoid conflicts. Shorter prompts with fewer concepts tend to give more room for parameters like Weird or Style References (discussed later) to have effect.

Model Versions and Base Styles

Midjourney has several model versions with different capabilities. By default, Midjourney will use the latest stable version (currently V7 if you have it enabled). You can explicitly specify a model with the Version parameter --version (or --v). For example, --v 7 uses version 7, while --v 6 uses the V6 model. This can be useful if you want to compare outputs or use an older model for a particular style. (Midjourney V7 generally offers the highest coherence and quality for hands, text, etc. as of its release.)

There is also a special Niji model for anime and illustration styles. Add --niji to your prompt to switch to the Niji model (latest Niji version is used automatically). The Niji model is tuned for “anime and Eastern aesthetics”, so use it whenever you want a manga/anime look or vivid illustrative style.
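
For instance, a prompt aiming for an anime look might be written like this (the wording is just one example):

/imagine prompt a swordswoman on a rainy rooftop, dramatic lighting --niji --ar 2:3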

Midjourney’s Raw Mode gives you more direct control over the output. Normally the bot adds its own “artistic” flair to interpret simple prompts, but if you add --raw in your prompt, it turns off those automatic embellishments. In Raw Mode, simple prompts tend to produce more plain or realistic results, and the AI follows your exact words more literally. Use --raw when you want photo-realism or when you have a very detailed prompt and you don’t want the AI to override your style choices. Example: portrait photo of an old warrior --raw --v 7 will likely yield a straightforward realistic portrait, whereas without --raw Midjourney might add its own dramatic lighting or artistic style. Note: Raw mode is available in versions 5.1 and up. You can also enable Raw mode as a default in your settings (on Discord with the /settings command or on the web UI) so you don’t have to type --raw every time.

Tip: If using --raw, you may want to provide more style clues in your text prompt since Midjourney won’t inject as much style on its own. Conversely, if you want the maximum Midjourney artistic style, leave Raw mode off and consider using a high Stylize value (next section) to let the AI be creative.

Aspect Ratio and Image Size

Midjourney generates square images by default, but you can change the frame’s shape using the Aspect Ratio parameter --ar (or --aspect). Aspect ratio is given as width:height. For example, --ar 16:9 produces a wide landscape image, while --ar 9:16 would be a tall portrait orientation. Some common aspect ratios are: 1:1 (square), 4:3 (slightly landscape), 3:2 (standard photo), 16:9 (widescreen), etc.

Choosing an aspect ratio does not directly specify resolution, but it influences the final pixel dimensions. A wider ratio means the generated image will have more width pixels relative to height. The actual size in pixels also depends on the model version and upscaling options. In Version 7, images in the initial grid are generally around ~1024px on the long side; after upscaling, you can get larger images (e.g. ~1664px with the standard upscaler, or more with Beta/Max upscalers if available).

To set aspect ratio, simply add --ar W:H. Example: castle on a hill --ar 3:4 yields a vertical image. You can set a default aspect ratio in your settings if you often prefer a certain format. Keep in mind extremely large ratios (very wide or very tall) are experimental – they might produce strange results or be slightly adjusted by Midjourney during upscale. Also, older models (v4 and earlier) had limited aspect ratio support, but v5+ and v7 allow most ratios (within reason).

If you need a specific dimension or print format, calculate the ratio from the dimensions. For example, for a 1920×1080 wallpaper, --ar 1920:1080 will automatically simplify to --ar 16:9. Midjourney doesn’t accept decimals, so use whole numbers (e.g. --ar 85:110 for an 8.5×11 inch page).

Quality and Speed Settings

Midjourney lets you control how much processing is devoted to an image using the Quality parameter --quality (or --q). Higher quality means the AI spends more GPU time, often yielding more detailed results. The default is --q 1 (normal quality). You can also use --q 2 for 2× more time (more detail), or even --q 4 for 4× time (maximum, currently available in V7). Low values like --q 0.5 or --q 0.25 use less time – these can make rough drafts faster/cheaper but with less detail.

A few notes on quality:

  • --q 2 roughly doubles generation time/cost and can produce finer details or more polished textures, at the cost of speed.

  • --q 4 (only in V7) is very slow and costly (4× GPU time) and is generally used for the absolute best detail. Note: You cannot use --q 4 with Omni-Reference images (discussed later).

  • You cannot set --q 3 – if you try, Midjourney will interpret it as --q 4 automatically.

  • Quality only affects the initial grid generation, not the upscale or variations steps. So if you generate initial images at --q 2 and then make variations or upscales, those subsequent steps don’t get extra detail beyond what was in the originals.
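
To make the trade-off concrete, a common pattern is a cheap exploratory pass followed by a higher-quality rerun of the idea you like (the subject and values here are only illustrative):

/imagine prompt ancient library interior, dusty light beams --q 0.25

/imagine prompt ancient library interior, dusty light beams --q 2 --ar 3:2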

If you want to save time/credits or experiment quickly, Midjourney v7 introduced Draft Mode. Using the parameter --draft will generate the initial images at a much faster speed and half the cost. Draft mode images are lower resolution and quality than normal, but look similar in composition – they are great for rapid iteration. For example, you might do /imagine a sci-fi cityscape --draft to see quick ideas, then if you like a result, use the “Enhance” or upscale button to re-render it at full quality. Draft mode can also be toggled on in the web UI (there’s a Draft toggle or conversational mode). It’s especially useful in V7 when combined with the new “conversational prompt editing” – you can use voice or quick text tweaks to iteratively change the scene in seconds. Just note that Draft images will be lower resolution and might have some artifacts – always upscale or re-run in normal mode for the final output.

Midjourney also has different speed/priority modes:

  • Fast Mode: Runs jobs quickly, consuming your paid GPU time. In Discord you toggle this with /fast. There’s also a parameter --fast you can append to a prompt to ensure that single job runs in fast mode.

  • Relax Mode: Available to users with certain plans (e.g. Pro plan), it queues jobs with lower priority but doesn’t deduct time. Toggle with /relax or use --relax per prompt. In V7, “Relax” may still run slower, but is useful if you have unlimited relax jobs.

  • Turbo Mode (V7): In the new Version 7, Turbo mode is a high-performance mode that runs very fast but at 2× cost per job. Currently, V7 Turbo is essentially the default “fast” mode (since standard V7 mode is still being optimized). You can explicitly use --turbo on a prompt if you want to ensure it uses Turbo speed. Turbo is great for quick results, but keep in mind each Turbo job eats double the credits of a normal job.

Tip: If you have a Pro plan, you can combine these settings: e.g. use Relax mode for bulk jobs or exploration (costs no credits, just slower), and then switch to Fast/Turbo for final high-quality images. In prompts, you might not often need to type --fast or --relax if you have a default mode set, but it’s good to know the flags exist. Also, when using --draft in V7, you’re implicitly in a fast iteration workflow – you can even use --repeat (discussed later) with --draft to generate many quick variations cheaply.
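
As an example of that fast-iteration workflow, a quick V7 brainstorming run might look like this (illustrative only):

/imagine prompt a sci-fi cityscape at dawn --draft --repeat 4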

Controlling Artistic Style: Stylize and Exp Parameters

Midjourney’s Stylize parameter (--stylize or --s) controls how much artistic freedom the AI has with your prompt. Think of it as a creativity dial from very literal to highly interpretive. A low stylize value means the bot will stick closely to your prompt details, yielding a more literal image. A high stylize means Midjourney can inject a lot of its own “learned” style and flair, potentially deviating from exact prompt wording to make a more artistic image.

Default stylize is 100. This is a medium amount of artistic flair (the default is often labeled “Stylize medium” in settings).

You can set stylize anywhere from 0 up to 1000 (for current models).

  • --stylize 0 essentially forces no style: the bot will try to render exactly what’s described, often yielding plain or technically correct but less arty images.

  • Higher values like --s 250, --s 500, --s 750 progressively allow more deviation and creativity. At extremely high values (e.g. 1000), Midjourney might produce very impressionistic results that only loosely match the prompt.

Tip: Use low stylize (like 0–50) if you need precise control (for example, if you’re doing a logo or a specific design with exact elements). Use high stylize (500+) if you want the image to be visually striking even at the expense of prompt accuracy, for instance in abstract art.

You can set a default stylize value via the Midjourney settings (the /settings command offers presets like “Stylize low/med/high” which correspond to certain values). The presets might be around: Stylize Low = 50, Med = 100, High = 250 (these can be checked in the “current suffix” in settings). But you can always override with a manual number.

Example: Prompt turtle riding a bicycle --s 0 will give a very straightforward depiction of a turtle on a bike (likely realistic/plain). turtle riding a bicycle --s 1000 might yield a wildly artistic image – maybe the turtle is drawn in a whimsical style, with colors and flourishes Midjourney imagined beyond the literal prompt.

Midjourney Version 7 introduced a new parameter called Experimental (--exp), which adds another “dimension” of creativity to images. The Exp parameter is somewhat like a second stylize-like control that affects intricacy and energy in the image. While Stylize governs how much Midjourney deviates into learned artistic style, Exp influences the level of detail, dynamism, and surreal touches in an image. It ranges from 0 to 100 (0 is off/default).

  • Lower --exp values (e.g. 5, 10) might add a subtle boost to texture or dynamism. The difference may be slight unless you look closely.

  • Medium values (20–50) start to make the image more detailed and “alive”, often with richer textures or a “tone-mapped” high-contrast look.

  • Very high values (above 50 towards 100) can overwhelm other parameters and the prompt itself. At --exp 100, the images become very stylized/energetic in terms of lighting and detail, but you might lose some fidelity to the original prompt or to a personalization profile. Essentially, high Exp can take over, making images that are dramatic and complex but possibly less controlled.

You can use Exp together with Stylize – for example, --s 500 --exp 50 – for a highly artistic and also richly detailed result. Just use caution at extreme values: users have observed that if Exp is very high, it can negate the effect of stylize entirely. For instance, an image at --exp 100 might look the same whether stylize is 300 or 900, because Exp is dominating the style choices. A good approach is to start with moderate Exp (say 10, 20) and see the effect, then increase if you want more. Midjourney’s own recommendation was to try values like 5, 10, 25, 50 first.

When to use --exp: If you want more visual richness or a dreamlike/cinematic quality, Exp can be great. It’s especially powerful when combined with personalization or custom style codes (covered later) – it can amplify those style nuances. On the other hand, if you need the image to strictly match your prompt (e.g. consistent characters or a specific layout), keep Exp low or zero. High Exp also tends to reduce image diversity – meaning the 4 images in a grid might look more similar to each other, and multiple runs might stick to a certain look. So for consistent outputs Exp can be fun, but for a broad range of interpretations, lower Exp or using Chaos (next section) might be better.
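
A simple way to see the effect is to run the same prompt at a low and a moderate Exp value and compare the results (the numbers are just suggested starting points):

/imagine prompt a lighthouse in a storm --s 250 --exp 10

/imagine prompt a lighthouse in a storm --s 250 --exp 50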

Example outputs comparing Stylize vs Exp: The top row shows a prompt with increasing --exp values (left to right: Exp 0, 10, 50, 100) and the bottom row shows the same prompt with increasing --stylize values (Stylize 0, 250, 500, 1000). As you can see, higher Exp values add dramatic lighting, detail, and “punch” to the image, whereas higher Stylize values make the composition more artistically interpretive (but in this example, Stylize changes color palette and style less drastically than Exp does). Both parameters can be combined, but at maximum Exp the Exp tends to dominate.

Tips: If you use very high Exp along with other style influences (like Stylize, Personalization --p, or Raw), be aware of the “competition” between them. For example, with --raw and high Exp, you might get an extremely detailed but somewhat uncontrolled image. Or if you use Exp with a style reference code and a high stylize, try moderate values first to see how they blend. The key is to experiment in small increments and find a balance that gives you the creativity you want without losing the essence of your idea.

Variation and Unpredictability: Chaos & Weird Parameters

To introduce variations and unpredictability into your outputs, Midjourney provides the Chaos and Weird parameters.

The Chaos parameter (--chaos or --c) controls how much variety you get in the initial grid of 4 images. Normally, with Chaos at 0 (default), the four images for a given prompt tend to be fairly similar in composition and overall concept. Increasing chaos gives you more divergent interpretations of the prompt within one job.

Range: 0 to 100. Default is --chaos 0 (no extra chaos).

  • Low chaos (e.g. 0–10): The results will be more predictable and similar to each other. This is good for consistency – if you want four variations that all stick to the prompt in a reliable way (useful when refining a concept).

  • High chaos (e.g. 50 or 100): The 4 images can be wildly different from each other. Midjourney will take more creative liberty in how it interprets the prompt each time, often resulting in at least one image that’s quite distinct in style or composition from the others. At --chaos 100, expect the maximum variety – it’s almost like getting four very different prompt interpretations.

Chaos does not necessarily make the image “chaotic” in content; it makes the set of outputs more varied. So one image might lean one way, and another image a completely different way in style, mood, or even what elements are emphasized. For example, prompt: “a small cottage in the woods, spring”:

  • With --chaos 0, you might get four cottages that all look similar (maybe just slight differences in angle or lighting).

  • With --chaos 100, one image might show a bright fairytale cottage, another might focus on a dark spooky cabin, another might emphasize a surrounding garden, etc., all from the same prompt text.

In other words, increasing Chaos “nudges the four variations away from the default harmonious center and towards four distinct directions”. It’s great for exploration when you’re not sure exactly what style or composition you want – a high chaos prompt can surprise you with an unexpected yet appealing result.

Tip: If you find a particular variant you like from a high-chaos grid, you can upscale or reroll with lower chaos to fine-tune that concept. You can also combine chaos with seeds for experimentation: a fixed seed anchors the starting noise, while chaos still introduces variety beyond it. At Chaos 0 with a fixed seed, re-running yields essentially the same result (deterministic); at higher Chaos, even the same seed can lead to noticeably different images.
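
An explore-then-refine sequence might look like this (the seed value is a placeholder for one you note from the first run):

/imagine prompt a small cottage in the woods, spring --chaos 80

/imagine prompt a small cottage in the woods, spring, bright fairytale mood --chaos 10 --seed 1234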

The Weird parameter (--weird or --w) is an experimental quirkiness slider. It tells Midjourney to get unconventional or “take some creative risks” in how it renders the prompt. Where Stylize/Exp make things more artistic or detailed, Weird makes images strange, edgy, or unexpected in content and style. Essentially, it pushes the AI away from the most common or obvious interpretations and into more unusual territory.

Range: 0 (off) up to 3000. By default it’s 0 (no weirdness). Values above 1000 are possible but not recommended to start with – the Midjourney team calls Weird an experimental feature and its effects can change over time. Practically, users often use values like 50, 100, 300, 500, up to 1000 for strong effect.

  • A little weird (--weird 50 or 200) might give subtle quirky twists – slight surreal elements, or mix mediums/styles in odd ways.

  • Moderate weird (300–500) yields distinctly offbeat results: your image may include unexpected elements or styles that deviate from typical art norms. For example, a portrait might come out with abstract, distorted features or an unusual art style blending.

  • Very high weird (1000+): The output can become truly bizarre and unpredictable. This could mean semi-abstract or dreamlike outputs that only loosely follow the prompt. It’s a way to discover very unique imagery, but it may stray far from your initial idea if too high.

One approach is to start with Weird around 200–300 to add some spice, and increase if you want more. Also, shorter prompts tend to let Weird have more room to play. If your prompt is very detailed and specific, Weird might not manifest strongly unless you go to extremely high values.

Important: Weird vs. Chaos – they are different. Chaos affects variation between images in the grid, whereas Weird affects the content/style of each image itself. High weirdness might produce a single image that’s very unusual, but if chaos is low, all 4 images might be unusual in a similar way. Conversely, high chaos might give four very different images but each one could be fairly normal if Weird=0. You can combine them (e.g. --weird 300 --chaos 50 to get many varied and odd interpretations).

Tip: Weird can work nicely with Stylize. A common suggestion is to use a high Stylize together with an equally high Weird to maintain aesthetic quality while being bizarre. For example, --stylize 700 --weird 700 might produce a beautifully rendered yet very unconventional image. If you leave stylize low and weird high, you might get something weird but also plain or not as visually interesting. Matching them somewhat can yield “distinctive yet beautiful” results. Of course, feel free to experiment – sometimes a pure weird with raw realism can create an uncanny effect too.

One more note: the Weird parameter isn’t fully compatible with seeds. This means that if you set a specific --seed and use a high weird value, you might not get deterministic results or the same weird pattern each time. Weirdness injects randomness beyond the normal seed’s influence (since it changes how the AI interprets things conceptually). So don’t expect reproducibility with Weird unless you keep Weird at 0.

Using Reference Images and Styles

Midjourney allows you to guide the image generation using reference images and predefined style codes. There are a few distinct features in this area:

  • Image Prompts (direct image inputs)

  • Style Reference (SREF) codes and Style Weight

  • Omni-Reference (new in V7 for injecting a specific character/object)

  • Personalization profiles and moodboards (discussed separately in the next section)

Image Prompts and Image Weight (--iw)

You can attach one or more images to your prompt to influence the outcome. Simply paste an image URL into the prompt (or upload via the Discord UI / drag into web prompt) before your text description. For example: /imagine prompt a floating city in the sky (with an image attached before the text). The image serves as inspiration for composition, style, color palette, or even specific elements.

By default, when you include an image, Midjourney balances the influence of the image and your text. If you want to control this balance, use the Image Weight parameter --iw. This sets how strongly the image(s) influence the result relative to the text prompt. The default is --iw 1 (meaning roughly equal weight). You can lower it (e.g. 0.5) so the image is just a slight influence, or raise it (e.g. 2 or 3) to make the output look much more like the image.

Valid range for --iw is typically 0 to 2 or 3 in current versions (older info mentions up to 5 or even 9, but extremes often had diminishing returns or weird effects). Most use cases will be in 0.5–2.0 range. Example: If you set --iw 2, the reference image’s style/composition will be emphasized, so the output might closely mimic that image’s look (while still incorporating your text).

If you include multiple images, Midjourney by default gives them equal weight total (and the default text weight remains 1). In such cases, you cannot individually weight images except by repeating one image or using separate commands. But --iw will affect the combined influence of all images vs text.

Tip: If your image prompt is very strong (like a distinct artwork) and you want only a slight hint of it, use a low --iw. If your image has a subject you absolutely want to keep, use a higher --iw. For instance, if you provide a photo of a person with the prompt “in futuristic armor”, a higher image weight will keep the person’s face and features clearer in the result, while a lower weight might just take colors or mood from the photo.
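
Putting that together, an image-prompted command might look like the following, where the URL is a placeholder for your own uploaded image:

/imagine prompt https://example.com/my-portrait.jpg a knight in futuristic armor --iw 2 --ar 2:3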

Style Reference Codes (--sref) and Style Weight (--sw)

Midjourney has an internal library of “Style Reference” codes, often called SREF codes. These are special numeric codes that correspond to particular art styles, color schemes, or visual vibes. By using a style code, you can apply a predefined style to your prompt without describing it in words. Essentially, SREF codes act as shortcuts for complex style prompts – “apply style #123456” might mean “90s grunge comic-book style with neon colors” (just an example).

Using a style code: Add --sref <code> to your prompt. For example: a portrait of a warrior princess --sref 2771306670. This will generate the image in the style associated with code 2771306670. The style might include certain color palette, brushstroke look, lighting, etc., as defined by that code.

These codes do not copy content from anywhere, just the stylistic attributes (colors, textures, lighting, medium, etc.). It’s like telling Midjourney “make it look like it was done in the style of X” without you having to spell out all the style adjectives.

How to get codes? You can find and browse style codes using Midjourney’s Style Explorer on the website. There are also community resources and lists (some users curate large lists of SREF codes on forums or sites). In the Midjourney web app, the Styles tab lets you search and click on styles, and it will automatically insert the corresponding --sref <number> for you.

Random style: If you want to explore, you can use --sref random. Midjourney will pick a random style from its library each time. Once you run it, the prompt will actually show the numeric code that was chosen (so you can reuse that code if you like it). Note that if you use --sref random and also use --repeat or a permutation (multiple prompts in one), each job gets a different random style. This is a fun way to generate lots of style variations for the same subject.

Each style reference can be further tuned by Style Weight (--sw). This is similar to image weight but for the style code’s influence. It ranges 0 to 1000, with default --sw 100. Higher --sw means the style code will heavily dictate the outcome’s look; lower means the style is applied more subtly. In Midjourney V7, it’s noted that --sw has more impact with style codes than with image examples. So if you find the style code is overpowering your subject, you can dial down --sw, or conversely raise it if the style isn’t coming through strongly enough.
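
For example, reusing the style code shown earlier and softening its influence (the value is just a starting point):

/imagine prompt a portrait of a warrior princess --sref 2771306670 --sw 50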

Important: Midjourney’s style reference system was updated in V7 (mid-2025). The old style codes (from earlier versions) may give different results now. If you have an older code that you love but it changed, you can use the Style Version parameter --sv. For example, --sv 4 uses the older style model so your code works as before, whereas --sv 6 (default) uses the latest style system. This mostly matters if you have legacy codes; new users will just use the default latest style versions.

You can also provide your own style reference image – effectively, using an image purely for style (not content). In the Midjourney web UI, there is a separate “Style Reference” slot where you can drag an image; in Discord, you would use --sref <image_url> similar to attaching an image prompt. When used this way, the image’s look and feel is applied to the new generation, without carrying over specific objects or people. This differs from a normal image prompt: a normal image prompt influences content and style, whereas a style reference explicitly is meant to only lend style.

Best practice for style refs: Keep your text prompt focused on content and not already describing style. Let the style reference handle the look. For example, if you use a watercolor painting as a style reference and your text says “in watercolor style”, that’s redundant – you might confuse the AI or double up the effect. Instead, just describe the subject (e.g. “a house by the lake”) and use the style image to dictate that it’s in watercolor.

If needed, you can combine multiple style references (just list multiple --sref <url_or_code> in the prompt, or in Discord add multiple codes/URLs separated by spaces). You can also mix an image style reference with a code at the same time. In such advanced cases, --sw applies overall; you can’t individually weight multiple style refs with separate values in one prompt.

Omni-Reference (--oref) and Omni Weight (--ow)

A brand-new feature in Midjourney V7 is Omni-Reference (--oref). This is a powerful tool to inject a specific character, person, object, or creature from a reference image into your generated scene. It replaces what used to be the limited “Character Reference” in earlier versions with something more flexible and effective.

How it works: You provide one image of a subject, and Midjourney will attempt to place that exact subject (with their appearance or the object’s unique look) into the new images you generate. For example, you could provide a photo of yourself and prompt “in a medieval royal attire in a throne room --oref <your_photo_url>”. Midjourney V7 will try to create an image of you wearing medieval clothes in a throne room. Or you might input a picture of a unique sculpture and ask for “in a lush forest setting --oref <sculpture_image_url>” to see that sculpture in a forest.

Usage: In Discord, write your prompt text and append --oref followed by the image URL at the end. On the web, there’s a dedicated “Omni Reference” slot in the prompt interface to drop the image in. Only one image can be used as an Omni-reference at a time. If you need to include two characters, you’d have to put them both in one image (e.g. a photo of two people together) because --oref only accepts a single reference slot.

The parameter --ow (Omni-reference Weight) controls how strongly the final image tries to match the reference. Range is 1 to 1000, default 100. Low --ow means the reference’s influence is lighter – the subject might resemble the original but the output might drift, especially if your text describes changes. High --ow means the output will very closely resemble the reference, capturing facial features, outfit, etc., almost like transplanting it into the new context. However, too high (near 1000) can make the AI focus so much on the reference that your text prompt’s new scenario might be under-emphasized. The documentation suggests weights up to 400 are usually enough, and to only go higher if necessary (and if stylize isn’t too high). Also, if you use very high stylize or Exp with Omni, you might actually need a higher --ow to keep the likeness – because stylize and exp are also “competing” for influence.
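
A sketch of an Omni-Reference prompt might look like this, where the URL is a placeholder for your own reference image:

/imagine prompt sitting on a throne in medieval royal attire --oref https://example.com/my-photo.jpg --ow 300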

Limitations of Omni-reference: It’s currently V7 only. It also uses a lot of resources (jobs with --oref cost 2× GPU time of a normal image). It doesn’t work with some features – for example, you cannot use Omni with inpainting/outpainting (those still rely on the V6 model), and you can’t do region editing (pan, zoom-out) on images that used --oref unless you remove the reference in the editor. Also you can’t use --q 4 (the highest quality) on the same job as Omni since that’s not allowed. Omni also cannot be used in Draft mode or Relax mode; you have to run it in standard/fast (turbo) mode.

Privacy/Policy note: When using external images (especially of real people), Midjourney has rules. You should have rights to the image, and you cannot use it to create disallowed content (like sexual or derogatory portrayals of a person, etc.). The system might refuse or block certain uses of reference images due to the community guidelines (e.g., referencing a famous person might be filtered as disallowed). So ensure your usage complies with their policies.

Tip: For best Omni results:

  • Use a clear image of the subject with not much background clutter. This helps the AI focus on the subject. For example, a portrait photo with a plain background is ideal if you want to isolate a person’s look.

  • Start with default weight (100) and if the subject’s identity isn’t coming through, increase it in increments (200, 300, etc).

  • If the style of the reference image is very different from what you want (say the ref is a painting but you want a photo output), you might mention the desired style strongly in your prompt and possibly use style reference or raw mode to override the style from dominating. Also lowering --ow in such a case can let the new style come in while still keeping the subject.

  • Remember you need some text prompt along with the reference. Omni-reference doesn’t generate an image only from the ref; you must describe the scene or context you want to put that subject into. If you just use --oref with no text, the system will likely give an error or a boring result.

  • If your Omni result is too faithful and not adapting, try reducing --ow or adding more descriptive text to push it. If it’s not faithful enough, increase --ow or remove competing high --stylize/--exp values as needed.

In summary, Omni-reference is great for continuity (like having the same character appear in multiple images) or placing real-world subjects into AI art. Just use it carefully and be mindful of its costs and limits.

Personalization Profiles (--p)

Midjourney now offers Personalization – a way to train the AI on what styles you like, by having you rate images. This creates personal “profiles” or custom style codes unique to you. If you have personalization enabled and have built a profile (e.g. by ranking a bunch of images on the Midjourney website’s Personalize page), you can apply that style to your generations using --profile or --p.

Using --p with no further argument will apply your default personal profile(s) to the prompt. Essentially, once your profile is unlocked and set to ON, Midjourney will automatically bias images towards the aesthetics you’ve shown you prefer. If you have multiple profiles (say you made one for “bright colorful art” and one for “dark horror style”), you can select which to use by either toggling it in the web UI or using the profile ID in the prompt. Each profile has an ID (and a code behind the scenes); you can copy the ID from the Personalize page and do --p <ID> to specifically use that one. The UI provides shortcuts (e.g. a “Use Profile” button that adds --p profileNameCode).
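
For instance, applying one specific profile might look like this, where the ID is a placeholder for the one you copy from your Personalize page:

/imagine prompt a rainy neon alleyway at night --p yourProfileID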

Important: You have to unlock personalization first by ranking a set of image pairs (Midjourney shows you images and asks which you prefer). After doing enough, your Global V7 Profile becomes active. Only then will --p do anything – if you try to use --p too early, you’ll get an error telling you to rank more images.

Once active, you can turn personalization on/off with a toggle (in the web Imagine bar, the little 🅿️ icon next to the prompt box). When ON, by default it might apply your Global profile to all prompts automatically (like an invisible --p). You can also have multiple profiles and select them to combine. For example, you might have a “Moody B&W” profile and a “Minimalism” profile – you could potentially apply both if that makes sense, or just one at a time. In Discord, you’d manually add --p profile1ID --p profile2ID to use multiples (or use the code names they’ve given).

Under the hood, personalization generates a special code (like a style code) that represents your profile’s preferences. When you use --p, it essentially attaches that custom style code to the prompt. You might even see it convert to an actual --p abcdef code when the job runs. These codes can update as you like more images or do more ranking, so grabbing the latest ID from your profile is wise. (You can list previous codes with the Discord command /list_personalize_codes if needed.)

Moodboards: Midjourney also allows creating moodboards – these are collections of images that define a style. They function similarly to a profile. If you create a moodboard on the site, you can use it via --p as well (each moodboard becomes a profile with an ID). Essentially moodboards are a manual way to curate style inspiration, whereas the rating method is an automated way to tune a profile. Both result in profiles you apply with --p.

Using Personalization effectively: If you have a specific look you often aim for, personalization is great. For instance, say you love high-contrast cyberpunk scenes – by liking a bunch of such images, your profile can bias any prompt to look more cyberpunk even if you don’t specify it. It’s like a personal style baseline. Combine personalization with other parameters carefully: a very high Exp or Stylize might overshadow a subtle profile, whereas a moderate Exp works with your profile to enhance your preferred aesthetics. If you want to turn it off for a prompt, just don’t use --p (or toggle it off in the UI).

Keep in mind personalization does not guarantee specific content (it’s about style bias). And if you share prompts, others can’t use your --p code unless they have your profile – it’s unique per user.

Other Handy Parameters and Commands

Finally, here are various other Midjourney options (“codes”) that are useful:

  • Seed (--seed) – Sets the random seed for image generation. Midjourney uses a seed to start the generative process. By default it’s random each time. If you find an image you like and want to reproduce or slightly tweak it, note the seed (shown in job info on the website or with /show <job_id> in Discord) and use --seed <###> to reuse it. Using the same seed with the same prompt and settings will give very similar (often essentially identical) output. Changing the prompt slightly while keeping the seed might preserve some composition but alter details. Note: across different model versions or after major updates, a given seed might not produce the exact same image, but it will produce a similar starting noise. Seeds are great for A/B testing small prompt changes or ensuring consistency across a series (e.g. the same scene with slight differences).

    If you want the 4 variant images to each use the same seed (normally each of the 4 has a different sub-seed), Midjourney offers a “Same Seed” option in /settings (the now-deprecated --sameseed parameter served the same purpose). Toggling “Same Seed” ON makes all 4 variations start identically – usually not what you want unless you’re testing. There’s also --repeat if your goal is simply more outputs (below).

  • Repeat (--repeat or --r) – Generates multiple independent batches of images from the same prompt. It’s like telling Midjourney “run this prompt N times.” For example, /imagine a fantasy landscape --repeat 3 will actually queue 3 separate jobs, each producing its own grid of four images (so you’d get 3×4 = 12 images total, delivered as 3 results). This is useful if you want to quickly get many variations beyond just 4, without retyping or manually resubmitting the prompt. It’s also helpful in combination with --chaos or --sref random to explore many different interpretations.

    Be cautious: using a large repeat value will consume your subscription fast since each repeat is a full job. Some accounts might have a cap on how many jobs can run at once or in queue.

    Permutation: Related to repeat, Midjourney also supports prompt permutations with {} braces (though it’s not a -- parameter, it’s a structuring trick). E.g. a house in {spring|summer|winter} would run 3 prompts (one for each season). You can combine that with repeat or style random. But that’s an advanced usage note – bottom line, --repeat is the straightforward way to spawn many jobs quickly with one command.
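
    For example, combining a permutation with --repeat (illustrative only) would queue six jobs – one per season, twice over – each producing its own grid of four images:

    /imagine a house in {spring|summer|winter} --repeat 2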

  • No (--no) – Allows you to specify things you do not want in the image. It’s basically a negative prompt. For example, --no text is commonly used to try to prevent any text or watermark-like artifacts in the image. You can say --no water to avoid water in a scene, etc. This is equivalent to giving that thing a negative weight (roughly ::-0.5) in the prompt. The --no parameter is very handy for excluding unwanted elements or styles that Midjourney might otherwise include by default.

    Tip: Use --no sparingly for big things you really don’t want. It might not always perfectly remove it, but it strongly biases against it. Don’t chain a ton of --no terms; focus on the main distractions. Example: a portrait of a woman --no glasses --no text --no watermark if you kept getting glasses on the woman or text artifacts.

    (Internally, there is a token limit, so you can’t exclude extremely long phrases or too many things at once, but usually 2-3 --no terms is fine.)

  • Tile (--tile) – When this is added, Midjourney generates the image in a seamless tiling manner. That means the left/right and top/bottom edges align so that the image can repeat without visible seams. This is fantastic for creating patterns or textures (for game design, wallpaper, fabric prints, etc.). For example, a pattern of gold flowers on red background --tile will yield a square image that can tile perfectly into an infinite wallpaper. Note: --tile currently works only with certain aspect ratios (it may force the output to be square for true tiling). Extremely complex scenes might not tile perfectly even if edges match, because if the composition has a clear “center”, tiling will show obvious repetition. So use tile for patterns or abstracts that benefit from repetition. (Tiling wasn’t supported in older models like v4, but is available in v5/v6/v7.)

  • Weird (--weird) – We covered this above in the creativity section. Use it to inject unusual aesthetics. Range 0-3000, default 0.

  • Stealth/Public (--stealth, --public) – These flags control whether your image is published in your Midjourney community gallery feed. By default, non-Pro users have their images public on the website gallery. Pro users (with the Stealth feature) can choose to hide images. Adding --stealth to a prompt will explicitly make that generation hidden from the public feed. --public would override and make it public if you normally are in stealth. (Most people just use the account setting or /stealth command rather than these per-prompt flags.) They do not affect the image content itself, just visibility.

  • Version (--version or --v) – Already discussed in model versions. Use it to switch model versions per prompt. E.g. --v 5.2 or --v 6. Note: --v 7 requires you to have access to V7 (which by now is likely default for everyone with personalization unlocked).

  • Niji (--niji) – Switches to the Niji (anime-focused) model. You can also specify versions of Niji if needed (e.g. maybe --niji 5 for an older Niji, but generally --niji uses the latest).

  • Video and Animation parameters: Midjourney has a new video generation (image-to-video) feature in V7. If you want to generate short videos (animations) instead of still images, there are special parameters:

    • --video – Put this in your prompt to tell Midjourney to output a short animated video (around 5 seconds) instead of a still image. You can provide a starting image (like an initial frame) by including an image URL before the prompt text or using the Starting Frame slot on the web, though that’s optional if you just want it to animate the prompt itself.

    • Motion mode (--motion) – Midjourney videos have two motion settings: --motion low (default) or --motion high. --motion low produces subtle movement, slower, more ambient animation (good for gentle scenes or slight camera pans). --motion high produces more dynamic movement – faster camera moves, objects moving more – but it can also lead to glitchiness or surreal jumps. If you want a lot of action in the video, use high; for looped calm scenes, low is better.

    • Loop and End frame (--loop, --end) – These are used if you provide a starting frame and you want a specific ending. --loop will tell MJ to use the starting image again as the ending frame, creating a perfect loop when the video repeats. If you want a different specific image to end on, you can supply an ending image URL with --end <image_url>. This way you can go from image A to image B over the course of the video. Without these, the video just ends on whatever it decides. Loop is great for making seamless looping animations of patterns or scenes (especially if starting = ending, it will smoothly transition).

    • Batch size for video (--bs) – By default, a video prompt will produce 4 variants (like four different videos) similar to how images produce 4 pics. If that’s too costly or you only want one, you can set --bs 1 or --bs 2 to only generate 1 or 2 videos per job instead of 4. This can save a lot of GPU minutes (since video is heavy). Use --bs 1 when you just want a single best video result.

    (After generating a video, you have options to Extend it via buttons – those aren’t invoked by -- codes but by the UI. You can extend up to 20 seconds.)

    • Video resolution: Videos start at 480p by default (Standard Definition). If you have a Standard or Pro plan, you can enable 720p HD in your settings on the web. There isn’t a --hd parameter; it’s a setting toggle. HD videos consume ~3× more GPU minutes than SD, so use SD for drafts and HD for the final output if needed.

    • Video Raw (--raw) – You can also use --raw in video prompts to reduce the extra stylization, just like image Raw Mode, for more precise control of how things move. This can help if the animation is adding too many creative flairs and you want it more literal to the prompt.

    Note: The video feature is relatively new (as of V7) and considered V1. It works best with simple prompts or a single subject. Using an image as a starting frame (--video with an image URL) will animate that image. For example, you can feed one of your Midjourney-generated images and get a 5-second pan or slight movement. If you add a prompt along with it, MJ tries to animate into that prompt scenario. The results can be hit-or-miss, but it’s a fun area to explore.
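
    As a sketch, a looping video prompt that starts from one of your own images could look like this (the URL is a placeholder for your starting frame):

    /imagine https://example.com/start-frame.png gentle waves rolling onto a beach --video --motion low --loop --bs 1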

  • Stop (--stop) – This parameter lets you stop the image generation early, at a given percentage of completion. For example, --stop 50 will stop at 50% of the process. The output looks less finalized, often more abstract or painterly because it didn’t go through all the refinement steps. This can be used artistically if you want a rough sketch look or to avoid too much detail. Default is 100 (full process). You can choose any value 10–100 (in increments of 10) to stop early. Keep in mind it still uses almost the same amount of GPU time as a full generation (it doesn’t scale linearly; stopping at 50 doesn’t mean half the cost). This is more of a niche use case, but sometimes cool for getting impressionistic or unfinished-style images.

  • Remix Mode – (Not a -- parameter, but worth mentioning). If you enable Remix mode (via /settings in Discord or the toggle on web), whenever you hit the Variations (V1, V2, etc.) button on an image, it will allow you to change the prompt or parameters for the new variation. This is super useful to iteratively refine images. For instance, you generated a scene but now want to add --stylize 500 or add “--no trees” for the next variation – Remix lets you do that. Essentially, it’s an interactive way to modify prompts between variation generations.

(Legacy parameters like --sameseed, --uplight, --hd (old mode), --test/--testp, etc., are deprecated in V7. If you see them mentioned in old posts, they no longer apply.)

Putting It All Together (Prompt Tips)

With so many options, it can be overwhelming. Here are some final tips on structuring prompts effectively:

  • Start simple, then add parameters as needed. Describe your subject and maybe one style hint in words. Run it plain to see what the default gives. Then decide, for example: “Hmm, I want it more cinematic – I’ll add --exp 30” or “It’s too polished, I want a sketch – I’ll try a style reference code for pencil drawing.”

  • Use images to guide when possible. One approach to a complex concept: provide a concept art image as an image prompt for composition, and a separate artwork as a style reference via --sref. This can outperform trying to describe the style in text.

  • Adjust weights when combining influences. If you use an Omni-reference for a character and a style code, remember you have --ow and --sw to play with if one is overpowering the other. Also consider using Raw mode if the style code plus Omni is causing conflicts, so that the only strong styles are the ones you explicitly want.

  • Leverage chaos and repeat for brainstorming. If you have a general idea (“steampunk city”), try --chaos 50 and maybe --repeat 2 – you’ll get 8 unique takes. Often one will speak to you. Then you can take that seed or that concept and drill down, perhaps lowering chaos and refining the prompt details for consistency.

  • Don’t be afraid of high values, but use them intentionally. A very high --stylize or --exp or --weird can produce amazing art, but also can drift far from your original idea. If the art is more important than the exact subject, that’s fine. If not, keep those values moderate. You can also do a trick: run one prompt with conservative settings and one with extreme, then remix-combine them (copy some prompt elements from one to the other) to find a middle ground.

  • Stay updated. Check official documentation (Midjourney’s docs and announcements) for any changes. The team frequently updates parameters – e.g. adding new style categories or adjusting ranges. As of V7, things like personalization and video are new frontiers, so keep an eye on those sources for the latest capabilities.

  • Use the settings UI: On the web or via the Discord /settings command, set up your defaults. If you always want --v 7 and Style med and Quality 1 and Raw off, etc., you can toggle those so every prompt has them implicitly. This saves time and ensures consistency. (You can still override per prompt by explicitly adding a parameter.)