  • Focusing on Apple’s AI Wearables Story (Redefining Human–Technology Interaction)

    In the evolution of human‑technology interaction, Apple’s acquisition of Q.ai is a quiet but profound inflection point. This isn’t about a new gadget. It’s about rethinking how humans engage with intelligence. Q.ai’s core innovation, interpreting subtle facial and micromovement signals to infer user intent, suggests a future where interaction doesn’t demand commands or screens.

    Historically, we’ve trained people to think like machines: click here, speak there, wait for responses. What Q.ai enables is the reverse, systems that meet people where they already are. Not louder technology but quieter presence; not visible interfaces but invisible support.

    This shift is significant because it reframes the question of progress. Progress isn’t measured by what tools can do. It’s measured by how naturally they integrate into human experience. When AI anticipates context and intention without interrupting flow, cognitive load decreases and human capacity expands.

    The challenge ahead isn’t engineering alone. It’s design that honors human rhythm, interaction that feels less like instruction and more like accompaniment, intelligence that listens to context, not just commands.

    We’re not just moving toward smarter machines. We’re moving toward machines that understand us on our terms.

    → 10:36 AM, Feb 9
  • Five prompts I’ve actually enjoyed testing lately.

    1. The “Argue With Yourself” Prompt (reasoning check)

    Use this when answers feel too confident.

    “Answer the question below.

    Then write a short rebuttal to your own answer.

    Finally, revise the answer based on the strongest rebuttal.

    Question: Is prompt engineering becoming less important?”

    Why it’s interesting: It exposes where the model is hand-waving versus reasoning.
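    One way to run this as a three-step chain, sketched here with the OpenAI Python SDK; the model name is a placeholder, not part of the original prompt:

    from openai import OpenAI

    client = OpenAI()

    def complete(messages):
        # Send the running conversation, return the assistant's reply.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=messages,
        )
        return response.choices[0].message.content

    question = "Is prompt engineering becoming less important?"
    messages = [{"role": "user", "content": f"Answer the question below.\n\nQuestion: {question}"}]

    answer = complete(messages)
    messages += [{"role": "assistant", "content": answer},
                 {"role": "user", "content": "Write a short rebuttal to your own answer."}]

    rebuttal = complete(messages)
    messages += [{"role": "assistant", "content": rebuttal},
                 {"role": "user", "content": "Revise the answer based on the strongest rebuttal."}]

    print(complete(messages))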


    2. The “What Would Break This?” Prompt (risk-first thinking)

    Great for plans, workflows, or optimistic takes.

    “Propose a simple workflow for using AI agents in daily work.

    Then list 5 realistic failure modes that would make it unusable.

    Rank those failures by likelihood.”

    Why it’s interesting: You get fewer buzzwords and more operational thinking.


    3. The “Editor From Hell” Prompt (clarity upgrade)

    I use this constantly for my own drafts.

    “Act as a brutally strict editor.

    Rewrite the text below to remove:

    • vague claims
    • filler adjectives
    • implied certainty

    Keep the tone neutral and concise.”

    Why it’s interesting: It forces models to cut, not embellish—still a weak spot for many.
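    One way to make it reusable is to pin the instructions as a system message. A rough sketch, again assuming the OpenAI SDK with a placeholder model name:

    from openai import OpenAI

    client = OpenAI()

    EDITOR_PROMPT = (
        "Act as a brutally strict editor. Rewrite the text below to remove "
        "vague claims, filler adjectives, and implied certainty. "
        "Keep the tone neutral and concise."
    )

    def strict_edit(draft: str) -> str:
        # The system message keeps the editor persona fixed across drafts.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[
                {"role": "system", "content": EDITOR_PROMPT},
                {"role": "user", "content": draft},
            ],
        )
        return response.choices[0].message.content

    print(strict_edit("Our groundbreaking platform will definitely revolutionize every workflow."))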


    4. The “Translate Across Mediums” Prompt (concept stress-test)

    Try this with abstract ideas.

    “Explain ‘prompt constraints’ as:

    • a kitchen recipe
    • a legal contract clause
    • a software interface setting

    Each explanation ≤40 words.”

    Why it’s interesting: If the idea survives translation, it’s probably solid.


    5. The “Diagram Without Words” Prompt (image models)

    Best with image tools such as Midjourney.

    “Create a simple diagram showing how an LLM responds to a prompt.

    Constraints:

    • no text
    • grayscale only
    • must clearly show user intent vs model output.”

    Why it’s interesting: You learn fast whether the model actually understands relationships—or just labels.

    → 9:13 PM, Jan 7
  • What’s Changed in Image Prompts (and How I’m Writing Them Now)

    Image prompting has quietly shifted over the last few months. Not with a big announcement, but with small behavioral changes in the models themselves. I’ve noticed I can write less, be more literal, and still get better results—especially in newer image models inside tools like ChatGPT and Midjourney.

    This post is a snapshot of how I’m adapting my image prompts right now, and a few patterns that seem to matter more than they used to.

    The move from “style dumping” to intent-first prompts

    I used to front-load prompts with long strings of adjectives and references. Lately, that backfires. The models seem better at inferring style once the intent is clear.

    What works better for me now:

    • Start with what the image is for
    • State subject + action
    • Then add one or two constraints, not ten

    Example prompt I’ve been reusing:

    “A square hero image for a blog post about AI workflows. A calm, modern desk scene with a laptop and handwritten notes. Neutral lighting, realistic photography.”

    The results are more coherent than my older, overstuffed prompts—and easier to iterate on.

    Composition is the new secret weapon

    One big improvement: models now respond much more reliably to composition language. Camera framing, distance, and layout matter more than named art styles.

    I’m explicitly calling out things like:

    • “Centered subject with negative space on the left”
    • “Wide angle, eye-level perspective”
    • “Shallow depth of field, background softly blurred”

    Sample prompt: “Wide-angle illustration of a person sketching ideas on paper at a café table, subject on the right, empty space on the left for text, soft afternoon light.”

    This is especially useful if you plan to overlay text later.

    Fewer references, clearer constraints

    Named artists and brands still work, but I’m using them sparingly. The newer models seem to do better with constraints than references.

    Instead of: “In the style of X meets Y meets Z”

    I’ll try: “Flat illustration, limited color palette, no text, simple shapes, friendly tone.”

    That “no text” constraint alone saves a lot of cleanup time.

    A simple iteration loop that’s working for me

    My current workflow looks like this:

    • Write a plain-English prompt (no style flexing)
    • Generate 2–4 images
    • Revise the prompt once, focusing only on composition or mood
    • Regenerate

    Resisting the urge to rewrite everything has made iteration faster—and results more predictable.
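    In code, the loop is deliberately boring. A rough sketch assuming the OpenAI images API; the model name and the revision line are placeholders, not recommendations:

    from openai import OpenAI

    client = OpenAI()

    def generate(prompt: str, count: int = 3) -> list[str]:
        # dall-e-3 returns one image per request, so loop for a small batch.
        urls = []
        for _ in range(count):
            response = client.images.generate(
                model="dall-e-3",  # placeholder; use whatever model you have
                prompt=prompt,
                size="1024x1024",
                n=1,
            )
            urls.append(response.data[0].url)
        return urls

    prompt = (
        "A square hero image for a blog post about AI workflows. "
        "A calm, modern desk scene with a laptop and handwritten notes. "
        "Neutral lighting, realistic photography."
    )
    first_pass = generate(prompt)

    # One revision only, and only about composition.
    prompt += " Centered subject with negative space on the left."
    second_pass = generate(prompt)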

    If you’ve been frustrated with image prompts lately, try subtracting instead of adding. The models have grown up a bit. Our prompts should too.

    → 12:58 PM, Jan 2