What’s Changed in Image Prompts (and How I’m Writing Them Now)
Image prompting has quietly shifted over the last few months. Not with a big announcement, but with small behavioral changes in the models themselves. I’ve noticed I can write less, be more literal, and still get better results—especially in newer image models inside tools like ChatGPT and Midjourney.
This post is a snapshot of how I’m adapting my image prompts right now, and a few patterns that seem to matter more than they used to.
The move from “style dumping” to intent-first prompts
I used to front-load prompts with long strings of adjectives and references. Lately, that backfires. The models seem better at inferring style once the intent is clear.
What works better for me now:
- Start with what the image is for
- State subject + action
- Then add one or two constraints, not ten
Example prompt I’ve been reusing:
“A square hero image for a blog post about AI workflows. A calm, modern desk scene with a laptop and handwritten notes. Neutral lighting, realistic photography.”
The results are more coherent than my older, overstuffed prompts—and easier to iterate on.
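The intent-first structure above (purpose, then subject + action, then one or two constraints) can be sketched as a tiny helper. This is a hypothetical function I'm using for illustration, not any tool's real API; the cap on constraints just encodes the "one or two, not ten" rule.

```python
def build_prompt(purpose, subject, constraints=()):
    """Assemble an intent-first image prompt: what the image is for,
    then subject + action, then at most two constraints.
    Hypothetical helper for illustration only."""
    if len(constraints) > 2:
        raise ValueError("keep it to one or two constraints, not ten")
    parts = [purpose, subject, *constraints]
    # Normalize each part to end with a single period, then join.
    return " ".join(p.rstrip(".") + "." for p in parts)

prompt = build_prompt(
    "A square hero image for a blog post about AI workflows",
    "A calm, modern desk scene with a laptop and handwritten notes",
    ("Neutral lighting, realistic photography",),
)
```

Building the prompt from named pieces like this also makes the one-change-at-a-time iteration later in the post easier, since each revision swaps exactly one part.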
Composition is the new secret weapon
One big improvement: models now respond much more reliably to composition language. Camera framing, distance, and layout matter more than named art styles.
I’m explicitly calling out things like:
- “Centered subject with negative space on the left”
- “Wide angle, eye-level perspective”
- “Shallow depth of field, background softly blurred”
Sample prompt: “Wide-angle illustration of a person sketching ideas on paper at a café table, subject on the right, empty space on the left for text, soft afternoon light.”
This is especially useful if you plan to overlay text later.
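For the text-overlay case specifically, I've found it helps to treat the composition clause as a reusable suffix. A minimal sketch, again with a hypothetical helper (the phrasing mirrors the bullets above):

```python
def with_text_space(prompt, side="left"):
    """Append composition language that reserves one side of the frame
    for a later text overlay. Hypothetical helper, not a real API."""
    opposite = "right" if side == "left" else "left"
    return (f"{prompt} Subject on the {opposite}, "
            f"negative space on the {side} for text overlay.")

p = with_text_space(
    "Wide-angle illustration of a person sketching ideas at a café table."
)
```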
Fewer references, clearer constraints
Named artists and brands still work, but I’m using them sparingly. The newer models seem to do better with constraints than references.
Instead of: “In the style of X meets Y meets Z”
I’ll try: “Flat illustration, limited color palette, no text, simple shapes, friendly tone.”
That “no text” constraint alone saves a lot of cleanup time.
A simple iteration loop that’s working for me
My current workflow looks like this:
- Write a plain-English prompt (no style flexing)
- Generate 2–4 images
- Revise the prompt once, focusing only on composition or mood
- Regenerate
Resisting the urge to rewrite everything has made iteration faster—and results more predictable.
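The loop above can be written down as pseudocode-ish Python. `generate` here is a stand-in for whatever image tool you use (I'm not modeling any real image API), and the key constraint is that `revise` touches only one dimension of the prompt:

```python
def iterate(prompt, revise, generate, n=3):
    """One-revision loop: generate a small batch, apply a single focused
    revision (composition or mood only), then regenerate.
    `generate` is a placeholder for your actual image tool."""
    first_batch = [generate(prompt) for _ in range(n)]
    revised = revise(prompt)  # change one thing, not the whole prompt
    second_batch = [generate(revised) for _ in range(n)]
    return first_batch, second_batch, revised

# Example with a placeholder generator that just echoes the prompt,
# and a revision that only adjusts mood (lighting):
_, second_batch, revised = iterate(
    "A calm desk scene, realistic photography.",
    revise=lambda p: p.replace(
        "realistic photography",
        "soft morning light, realistic photography",
    ),
    generate=lambda p: f"<image for: {p}>",
)
```

Forcing the revision through a single `revise` function is the point: it makes "rewrite everything" impossible by construction.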
If you’ve been frustrated with image prompts lately, try subtracting instead of adding. The models have grown up a bit. Our prompts should too.