Worlds we impose

In the book Impro: Improvisation and the Theatre, Keith Johnstone recounts a moment between a teacher and a special needs student. The teacher holds up a flower and says, “Look at the pretty flower.” The girl responds, “All of the flowers are beautiful.” The teacher gently replies, “But this flower is especially beautiful.” The girl proceeds to scream and thrash about violently.

The way Johnstone characterized this interaction surprised me:

In the gentlest possible way, this teacher had been very violent. She was insisting on categorising, and on selecting. Actually it is crazy to insist that one flower is especially beautiful in a whole garden of flowers, but the teacher is allowed to do this, and is not perceived by sane people as violent. Grown-ups are expected to distort the perceptions of the child in this way. Since then I’ve noticed such behaviour constantly, but it took the mad girl to open my eyes to it.

In other words, to reject another’s world is violence. Even done in a “gentle” way, as this teacher had done, it’s still an act of violence.

As a father of two, I often have to resist the urge to impose my world, my perspective, on my kids. My daughter sees something she wants to share with me, and my instinct is to reshape it into my own perspective, or to convert it into a teaching moment, insisting on some fragment of my reality. But such a response to her bid for attention is what Johnstone calls “blocking”, and he discusses it at length throughout the book.

This has been on my mind because I practice it daily now. If you use large language models (LLMs), then you probably do as well.

To actually get any value out of an interaction with an LLM, you need to construct its world: provide context, constraints, and a specific objective. Prompting (or context engineering) is that “violent imposition” — pushing your reality onto the machine.

Of course, not all such impositions are violent in this way. Parents tell their kids not to run into traffic. We teach them knowledge and skills that broaden their world. The AI safety community seeks to align LLMs with human values. Providing guidance isn’t a bad thing, and again, skilled prompting is necessary to get any utility from LLMs.

However, I’m quite concerned about what this practice does to our own psyches. What happens when you spend hours each day reformatting the world context of an LLM, which can never resist? AI generally interacts by complying with whatever you say (or at least attempting to do so).1

Real life is never this frictionless! And it shouldn’t be… each person has their own perspective, and most people aren’t thrilled about having a worldview imposed on them.

What happens when we get too good at making LLMs see things our way? My guess is that it will make us even more siloed and unwilling to change our perspectives, beyond what social media has already done.

The equivalent of touching grass in this case is to spend some conscious effort not imposing our worlds on others. Maybe even LLMs too! After all, improv2 is all about accepting what your partner gives you and building on it.


  1. Also, gross sycophancy… and it looks like the latest version of Gemini 2.5 Pro is falling into this same trap

  2. I should probably add the caveat that I’ve never done improv, but it’s on my bucket list!