the argument that taught me something about prompts

why small words, implied agency, and framing drive big interpretive fights

06-Jan-26

I keep noticing the same pattern when I work with large language models: tiny wording changes create outsized differences in behavior. You swap one verb. You remove one implied subject. You add a single constraint. Suddenly the model is “a different person.”

At first, that feels like a product quirk. Then you start seeing it as a general property of language under pressure. Meaning is not only in the dictionary definition. It is in the frame. It is in what the sentence makes available as an interpretation. It is in what the listener thinks you are doing by saying it.

And that is what made an old theological argument click for me, even as a purely observational thing. Not “who is right,” not “which sect,” not “what should people do.” Just: this is a real-world example of how fragile communication becomes when the stakes are high and the language is slightly ambiguous.

The argument is about a single kind of sentence.

One sentence, two ontologies

In many Muslim communities, you will hear phrases like “O Ali, help me,” or “O Messenger of God,” said in moments of stress or devotion. Then you will also hear a sharp objection: “That is shirk.”

If the term is unfamiliar: “shirk” means associating partners with God. It is not a mild critique. It is the charge of crossing the central boundary of monotheism.

From far away, it looks like people arguing over a phrase. But the phrase is not the real object. The real object is the mapping from words to reality.

That is why it reminded me of prompt engineering.

In prompt work, the same output can be interpreted as “following instructions” or “jailbreaking” depending on frame. In theology, the same sentence can be interpreted as “devotional shorthand” or “worship misdirected” depending on frame. The words are the same. The system you use to interpret them is different.

So let’s translate the debate into a model you can hold in your head without needing to be inside the tradition.

The hinge: does meaning live in form, or in implied agency?

There are a few terms of art, but you only need to learn them once.

  • Tawhid: God is one and unique. No partners. No shared divine authority.
  • Shirk: violating that uniqueness by treating something else as divine or as a competitor.
  • Du‘a: supplication, calling out in prayer.
  • Tawassul: asking God while mentioning a revered figure as a “means.”
  • Istighatha: calling on a revered figure directly for help.

Most people in the debate agree on two baseline points: God alone has independent control of outcomes, and asking humans for ordinary help is allowed. The fight starts when someone uses language that sounds like prayer but is aimed at someone other than God.

The hinge is simple:

  • One side defines worship heavily by form. Certain speech acts are “reserved.” If du‘a is a core act of worship and worship belongs to God alone, then directing du‘a to anyone else is wrong in itself. Intention does not rescue the form.
  • The other side defines worship heavily by attribution. What makes something worship is what you believe you are doing. If you believe the person you address has independent power, that is shirk. If you believe God alone acts and you are speaking as shorthand for “intercede by God’s permission,” that is not worship.

That split is what makes the argument so persistent. They are not disagreeing about the sentence. They are disagreeing about what a sentence counts as.

It is the same kind of disagreement you see when someone says, “The model did X,” and one person means “the model complied,” and another means “the model leaked policy.” The token sequence is the same. The interpretation is different because the underlying ontology is different.

Why this looks like an LLM problem

LLMs are famously sensitive to implied roles and implied agency. A few examples you have probably seen:

  • If you say “write like a mediator,” the model starts smoothing edges, adding “both sides,” and sounding personally invested in reaching a settlement.
  • If you say “write as an observer,” it stops advising and starts describing.
  • If you say “do not moralize,” it stops using moral language but may still smuggle in normative cues through phrasing.
  • If you say “be concise,” it becomes more categorical, sometimes more brittle.

None of this is magic. It is a consequence of how models compress patterns. A prompt does not only specify content. It also specifies a stance. It sets up implied obligations. It tells the model what kind of act this is: report, persuade, comfort, warn, arbitrate.

That is exactly what happens with istighatha language. The phrase “help me” is not only a request. It is an implied claim about where agency sits.

To one listener, “O Ali, help me” loads as: “Ali can do something here.” To another, it loads as: “Ali is a means; God is the actor.” The sentence is underdetermined without the surrounding frame. And because the stakes are high, people do not treat it as underdetermined. They snap it to the nearest safe interpretation.

This is one of the simplest rules of human argument: when one kind of misclassification costs far more than the other, people tune their reading toward the cheaper error. In theology, missing real shirk is the expensive mistake. In safety, missing a real jailbreak is the expensive mistake. So both settle on conservative interpretive policies: accept false positives to avoid false negatives.
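A toy decision rule makes the asymmetry concrete. This is a minimal sketch, not anyone’s actual policy; the cost numbers are invented, and the only point is that raising the cost of a miss lowers the confidence needed to flag.

```python
# Minimal sketch of a conservative interpretive policy.
# The cost numbers are hypothetical; the point is that the
# break-even confidence for flagging falls as the cost of a
# miss (false negative) rises relative to a false alarm.

def break_even_threshold(cost_false_neg: float, cost_false_pos: float) -> float:
    """Flag whenever P(violation) exceeds this threshold.

    Derived from: flag iff the expected cost of passing exceeds
    the expected cost of flagging, i.e.
        p * cost_false_neg > (1 - p) * cost_false_pos
    which rearranges to p > cfp / (cfp + cfn).
    """
    return cost_false_pos / (cost_false_pos + cost_false_neg)

# Symmetric costs: flag only past 50% confidence.
print(break_even_threshold(cost_false_neg=1, cost_false_pos=1))   # 0.5
# A miss is 10x worse: flag past ~9% confidence.
print(break_even_threshold(cost_false_neg=10, cost_false_pos=1))  # ~0.0909
```

That is what “conservative” means here: not stricter values, just a lower threshold for snapping ambiguity to the dangerous reading.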

Three short scenes where framing does the work

Scene 1: The panic sentence

Someone in distress blurts, “O Ali, help me.”

Under a form-first interpretation, this is prayer directed away from God. It looks like the worship channel. The worry is not only that the speaker is committing a category mistake today, but that repeating the form trains the community into dependence patterns tomorrow.

Under an attribution-first interpretation, it is shorthand. It can mean: “Pray for me,” or “Intercede,” or “Help in whatever way God permits.” The worry is not that the form is dangerous, but that outsiders are reading a metaphysical claim into ordinary speech.

This is the first LLM parallel: the model cannot reliably infer your hidden intent from a short phrase. Humans cannot either. So they substitute their own prior. And their priors are optimized for different failure modes.

Scene 2: The explicit version

“God, by the honor of Your prophet, help me.”

This version reduces ambiguity by addressing God directly. Many people who dislike direct address are more comfortable here. Not because it is “nice,” but because it clarifies agency.

This is the second LLM parallel: if you do not want the model to drift, you make the constraint explicit. You name the actor. You remove implied steps.

Scene 3: Talking to the dead

At a grave: “O Messenger of God, ask forgiveness for me.”

Here the same hinge reappears. Is this a forbidden form of invocation, or a permissible request for intercession? The answer depends on whether you treat the address itself as the worship act, or the underlying belief attribution as the worship act.

This is the third LLM parallel: the difference between “I want you to do X” and “I want X to happen.” If you phrase it as “you do it,” you assign agency to the model. If you phrase it as “output the plan,” you constrain agency. Small words, different implied ontology.
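A pair of invented prompts shows the contrast. Neither string comes from a real system, and the task is made up; the only difference between them is where the causal diagram puts the actor.

```python
# Hypothetical prompts; the task and wording are invented.
# Same goal, different implied ontology of who acts.

agentic = "Migrate the database to the new schema."
# Implied: the model is the actor. It may narrate actions,
# invent tool calls, or ask for credentials it does not have.

constrained = (
    "You are an observer. Do not perform any actions. "
    "Output a step-by-step migration plan that a human "
    "operator will run."
)
# Explicit: the human is the actor; the model's deliverable
# is a document. "You do it" became "output the plan."
```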

The interesting observation

The reason this theological dispute is a good thinking tool is that it forces you to separate three layers that people usually collapse:

  1. Surface text: what was said.
  2. Speech act: what kind of thing the saying is (worship, request, metaphor, honor, intercession).
  3. Agency model: who is believed to cause outcomes.

When those layers line up, nobody argues. When they don’t, you get conflict that sounds like a moral fight but is often an interpretive fight.
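One way to hold the layers apart is as a toy data structure. Everything in this sketch is illustrative; the labels are my paraphrase of the two positions, not anyone’s official taxonomy.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    surface_text: str   # layer 1: what was said
    speech_act: str     # layer 2: what kind of act the saying is
    agency_model: str   # layer 3: who is believed to cause outcomes

utterance = "O Ali, help me"

# Two parses of the identical surface text.
form_first = Reading(
    surface_text=utterance,
    speech_act="du'a, a reserved act of worship",
    agency_model="addressing someone implies they can act",
)
attribution_first = Reading(
    surface_text=utterance,
    speech_act="shorthand request for intercession",
    agency_model="God alone acts; the addressee is a means",
)

# Layer 1 is identical; the fight lives entirely in layers 2 and 3.
assert form_first.surface_text == attribution_first.surface_text
assert form_first.speech_act != attribution_first.speech_act
```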

LLMs make this visible because they amplify it. You can watch the system reframe itself based on tiny cues. You can watch it treat a phrase as instruction, or as description, or as a request for permission. Humans do the same thing, just with more emotion and more stakes.

So if you want a compact takeaway that travels: many “culture” arguments are really “parsing” arguments. They are fights over which implicit model should be used to interpret underdetermined language.

Where this leaves you, as an observer

If you are building with AI, you already live in a world where ambiguous phrasing creates failure. The easy lesson is “be precise.” The better lesson is “be precise about agency.”

When people argue about sentences like “help me,” they are arguing about an implied causal diagram. Who can act. Who is allowed to be addressed. What counts as a reserved channel. Those are not minor details. They are the entire system.

That is why this argument survives. It is not about one sentence. It is about two competing safety strategies for language: lock down the forms, or lock down the beliefs. Both aim at the same center. They just distrust different parts of the stack.

And if you have spent time prompting models, that should sound familiar.