Prompting · 8 min read

Midjourney sref codes: what they are and how to use them

Sref codes lock a visual style across every generation. How to find, use, and share style references in Midjourney v7 - Discord, the web app, or API.

Maya Torres

Creative Director · 2026-03-19

Sref codes solve a real problem: getting consistent visual style across multiple generations. Without them, every Midjourney prompt is a lottery - you describe a style in words and hope the output matches. With a sref code, you pin a specific aesthetic and reuse it exactly.

Quick answer

Sref (style reference) codes are numeric identifiers that lock Midjourney to a specific visual style. Add --sref <code> to any prompt - Discord, web app, or API - and your image will match that style regardless of what else you prompt.

You can also use an image URL instead of a numeric code to reference a style from any image you already have.

What sref codes actually are

Midjourney's training data includes millions of images, each associated with a style embedding in the model. A sref code is a pointer to one of those embeddings. When you add --sref 1234 to your prompt, Midjourney retrieves that style embedding and applies it to your generation - regardless of what your text prompt says.

The result: you can generate a product photo, a portrait, and a landscape that all share the same lighting, color grading, and mood - without writing a paragraph trying to describe the style every time.

Two types of sref

Numeric codes

These are 4–8 digit numbers that reference Midjourney's internal style library. You'll find them shared in communities, Reddit threads, and Discord servers. Most popular sref lists include a preview image so you can see what the code produces before using it.

# Using a numeric sref code

a minimalist product photo of a glass perfume bottle --sref 3456789 --ar 1:1 --v 7

Image URL references

You can also point --sref at any publicly accessible image URL. Midjourney will extract the style from that image and apply it to your generation. This is how designers maintain visual consistency with existing brand assets.

# Using an image URL as a style reference

a minimalist product photo of a glass perfume bottle --sref https://example.com/brand-reference.jpg --ar 1:1 --v 7

Controlling style strength with --sw

The --sw (style weight) parameter controls how strongly the sref style overrides your prompt. Default is 100.

| --sw value    | Effect                                          | Use when                                                  |
|---------------|-------------------------------------------------|-----------------------------------------------------------|
| 0–50          | Light style influence                           | You want the style to hint at an aesthetic, not define it |
| 100 (default) | Balanced - style and prompt share control       | Most use cases                                            |
| 200–500       | Strong style dominance                          | Consistent brand output across many prompts               |
| 1000          | Maximum - style nearly overrides prompt content | You want a pure style match regardless of subject         |

# Strong style dominance - useful for brand consistency

product photo of wireless earbuds --sref 3456789 --sw 400 --ar 1:1 --v 7

Using sref codes via API

If you're generating images programmatically, sref codes work exactly the same way - just include them in your prompt string. No special parameter needed. The full Midjourney parameter syntax is supported.

# Sref via JourneyAPI - same prompt syntax as Discord or web app

curl -X POST https://api.journeyapi.com/api/v1/imagine \
  -H "Authorization: japi_live_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "product photo of a skincare serum bottle --sref 3456789 --sw 300 --ar 2:3 --v 7",
    "webhook_url": "https://your-app.com/webhook"
  }'

This is particularly useful for apps that generate images with a consistent brand aesthetic - you hardcode the sref code in your prompt template and every user's generation shares the same visual style.
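As a sketch of that pattern, a prompt template with the sref code baked in might look like the following. The `BRAND_SREF` constant, `BRAND_SW` value, and `build_prompt` helper are illustrative names, not part of any official SDK:

```python
# Sketch of a prompt template with a hardcoded sref code.
# BRAND_SREF, BRAND_SW, and build_prompt are illustrative names,
# not part of any official Midjourney or JourneyAPI SDK.

BRAND_SREF = 3456789   # style code chosen for the brand
BRAND_SW = 300         # strong style weight for consistent output

def build_prompt(subject: str, aspect_ratio: str = "1:1") -> str:
    """Append the brand style parameters to any subject description."""
    return (
        f"{subject} "
        f"--sref {BRAND_SREF} --sw {BRAND_SW} "
        f"--ar {aspect_ratio} --v 7"
    )

print(build_prompt("product photo of a skincare serum bottle", "2:3"))
```

Every caller passes only the subject; the style parameters ride along automatically.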

How to find sref codes

There's no official Midjourney sref directory. The best sources are community-driven:

  • Reddit r/midjourney - regular posts sharing sref code collections with preview images
  • Midjourney Discord - the #prompt-craft channel and community servers focused on styles
  • X (formerly Twitter) - search "midjourney sref codes" for curated threads with screenshots
  • Generate your own - use --sref random in Discord to explore random style codes and note ones you like

Example sref codes worth trying

The following are widely shared codes with predictable, useful aesthetics. Results vary slightly with each generation - use --seed if you need reproducibility.

| Code            | Aesthetic                         | Good for                         |
|-----------------|-----------------------------------|----------------------------------|
| --sref 1000     | Painterly, impressionistic        | Editorial art, album covers      |
| --sref 2000     | Clean, editorial photography      | Portraits, lifestyle             |
| --sref 3456789  | Minimalist, high-contrast product | E-commerce, brand assets         |
| --sref 4215285  | Cinematic, moody lighting         | Movie stills, dramatic portraits |
| --sref 7654321  | Soft pastel, dreamy               | Fashion, beauty, wellness        |
| --sref random   | Explores random styles            | Discovery and experimentation    |

Always test a sref code with 2–3 different prompts before committing to it for production use. Codes can behave differently depending on subject matter - a code that looks great on portraits may not translate well to product photography.

Combining sref with character references (--cref)

Midjourney v7 supports both style references and character references simultaneously. --cref locks the identity or appearance of a character, while --sref controls the overall visual style. Used together, they give you precise control over both character consistency and brand aesthetic - essential for product illustration, avatar generation, or any content series.

# Character + style reference combined

a woman smiling at a coffee shop --cref https://your-cdn.com/character.jpg --sref 2000 --sw 200 --ar 3:2 --v 7

Do sref codes work on Meta AI?

Partially. In our testing for the Meta AI vs Midjourney comparison, about 60% of numeric sref codes produced recognizably similar styles on Meta AI. This is likely due to shared training data influence. However, Meta AI doesn't officially support the --sref parameter - you'd need to include the sref code in the prompt text, not as a parameter, and results are inconsistent.

For reliable, consistent sref code behaviour, stick with Midjourney.

Best practices

  • Use --sw 200–400 for brand work. The default 100 lets your prompt content compete with the style reference too strongly. Higher values give you more reliable consistency.
  • Combine with --style raw for product photography. Raw mode reduces Midjourney's own aesthetic processing, letting the sref code have cleaner control over the output.
  • Store your best codes. Keep a reference table of codes and their aesthetics. Over time this becomes a valuable brand asset - your own style library.
  • Use --seed for reproducibility. Sref codes don't fully determinize output. If you need to reproduce a specific result, note the seed from a generation you like and include it in future prompts.
  • Test across subject types. A sref code that works for portraits may behave differently on landscapes or product shots. Always verify with a few test prompts before production use.
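The practices above combine naturally into a small in-code style library. A sketch, where the `STYLE_LIBRARY` contents and `make_prompt` helper are illustrative, not prescribed:

```python
# A tiny style library: named aesthetics mapped to an sref code,
# a style weight, and an optional pinned seed for reproducibility.
# All names and values here are illustrative examples.

STYLE_LIBRARY = {
    "brand-product": {"sref": 3456789, "sw": 400, "seed": 42},
    "editorial":     {"sref": 2000,    "sw": 200, "seed": None},
}

def make_prompt(subject: str, style: str) -> str:
    s = STYLE_LIBRARY[style]
    prompt = f"{subject} --sref {s['sref']} --sw {s['sw']} --v 7"
    if s["seed"] is not None:
        prompt += f" --seed {s['seed']}"  # pin the seed to reproduce a result
    return prompt

print(make_prompt("wireless earbuds on a marble surface", "brand-product"))
```

Adding a new house style then means adding one dictionary entry rather than editing prompts scattered across the codebase.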

Using sref at scale via API

For developers building content pipelines or apps that need consistent visual output, sref codes are the most practical tool available. The workflow:

  1. Find a sref code that matches your brand aesthetic
  2. Hardcode it into your prompt template with an appropriate --sw value
  3. Every generation via the /imagine endpoint will share that style automatically

No additional logic, no prompt engineering per-request. One line in your template handles consistent style across thousands of generations.
