What Your AI Documentation Tool Is Being Prompted to Do
Most AI documentation tools are used through what users can see: a conversational interface, an input box, and a back-and-forth format in which the user enters requests and the model returns text, suggestions, or actions. What the user does not see is often more influential. The prompt structure behind the interface strongly shapes how the model behaves. If those instructions are generic, the output can be generic too. If they are built for clinical writing, documentation, safety, and grounding, that too can be reflected in the output. AI models have known systematic biases, and some of those effects can be reduced in part through culturally sensitive prompting. The problem is that many tools do not apply that kind of guidance consistently on every run.
Prompting can refer to more than one thing. In practice, it usually refers to both the user prompt and the system prompt. The user prompt is the visible part: what the user types. The system prompt is the invisible part: instructions written by the software provider or model maker and inserted automatically on every run. The system prompt is fixed and runs on every turn, while the user prompt changes from turn to turn and carries the user's immediate intent. Most tools do not show the system prompt, which can make it harder to evaluate how useful, reliable, or well-designed a given tool actually is.
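To make the distinction concrete, here is a minimal sketch of how a chat-style tool typically combines the two prompts on each run. This is illustrative only: the function name and the system-prompt text are hypothetical, not any particular vendor's implementation, though the role-based message structure shown is common across chat APIs.

```python
# A fixed system prompt, set by the tool vendor. The user never sees this,
# but it is sent with every single request.
SYSTEM_PROMPT = (
    "You are a clinical documentation assistant. "
    "Base content on information explicitly provided."
)

def build_request(user_message):
    """Combine the invisible system prompt with the user's visible message."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # fixed, every turn
        {"role": "user", "content": user_message},     # changes each turn
    ]

request = build_request("Draft a progress note from this shorthand: ...")
```

The key design point is that the system message is prepended automatically, so its instructions apply whether or not the user remembers to ask for them.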
Clinical writing is not generic text output, but a form of writing with many nuances that require grounding, restraint, context, ethical consideration, and the ability to handle ambiguity. A standard system prompt is usually not designed to meet those needs in clinical writing or clinical work, and that can become very evident in the output. A model can produce fluent text and still miss what matters clinically. It can smooth over uncertainty, overstate what is known, flatten meaningful context, or generate language that sounds polished without being appropriately grounded in the clinician’s reasoning or the client’s actual presentation.
What Better Prompting Design Addresses
At the system level, important aspects of prompt design include grounding to the clinician’s input text, prioritizing clinician-provided information over general model knowledge, handling ambiguity and contradictions appropriately, safety framing, attention to cultural factors, and attention to bias. At the user level, important aspects of prompting include providing enough specific clinical context, clearly stating the desired task or output, including relevant cultural or contextual factors, naming safety concerns explicitly, clarifying uncertainty when the clinical picture is mixed, and giving the model enough detail to stay grounded rather than fill major gaps on its own.
How to Prompt Better in Practice
In practice, it is worth paying attention to both the system prompt and the user prompt. At the system level, clinicians should explore what kinds of instructions are built into the tools they choose to use, especially for clinical work and documentation. If a tool does not make that clear, that is already useful information. At the user level, prompting should reflect the same concerns discussed above and should shape the conversation from the beginning rather than being added in later as an afterthought.
For example, if the task is to create a psychoeducational handout on panic attacks for a teenage client, the user can begin by grounding the model in accurate, research-based information and then add the relevant client context. That might include the client’s age, reading level, whether the handout should be written for the teen, parent, or both, any cultural or family context that matters, and whether the goal is to normalize panic symptoms, explain avoidance, or introduce coping skills. The point is not just to ask for a handout on panic attacks, but to frame the request in a way that gives the model the right material and the right constraints. The output should then still be reviewed carefully rather than accepted at face value.
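The framing advice above can be sketched as a small helper that assembles a structured user prompt from explicit context fields. This is a hypothetical illustration, not any tool's actual code; the function name, parameters, and grounding sentence are all my own assumptions about one reasonable way to structure such a request.

```python
def handout_prompt(topic, audience, reading_level, goals, context=""):
    """Assemble a structured user prompt for a psychoeducational handout.

    Each parameter maps to one of the framing questions discussed above:
    who the handout is for, at what reading level, and toward what goal.
    """
    lines = [
        f"Create a psychoeducational handout on {topic}.",
        f"Audience: {audience}. Reading level: {reading_level}.",
        "Goals: " + "; ".join(goals) + ".",
    ]
    if context:  # cultural or family context, included only when provided
        lines.append(f"Relevant context: {context}.")
    lines.append("Ground all claims in established research; "
                 "flag anything uncertain rather than guessing.")
    return "\n".join(lines)

prompt = handout_prompt(
    topic="panic attacks",
    audience="a 15-year-old client and their parent",
    reading_level="8th grade",
    goals=["normalize panic symptoms", "introduce basic coping skills"],
    context="bilingual household; family prefers plain, non-clinical language",
)
```

The value of this pattern is less the code than the discipline it imposes: every field the clinician fills in is one less gap the model will fill on its own.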
LocalScribe as an Example
LocalScribe is a useful example of prompting design that addresses these issues at both the system and user level. LocalScribe is built to start a new prompt each time rather than continue a running conversation. The system and user instructions are therefore fresh each turn, which may reduce some risk of sycophancy. Its instructions also explicitly ground the model in the clinician's input, which can reduce, though not eliminate, hallucinations. Safety and cultural considerations are built into the system instructions so that each output is generated with those factors already in view, which may help reduce biased outputs.
The current LocalScribe system prompt includes direct instructions such as:
"You are a clinical documentation assistant."
"Base content on information explicitly provided."
"Do not invent facts, names, dates, diagnoses, or scores."
"Use person-first, non-stigmatizing, culturally respectful language."
Those lines matter because they shape behavior on every run, not just when the user remembers to ask for those things explicitly. They tell the model to stay grounded, avoid invention, and write in a way that is both clinically professional and culturally respectful.
The user-side helper prompt matters too. In the current prompt assembly path, LocalScribe includes instructions such as:
"Write a [template] from the clinical shorthand below, using attachments and measures when provided."
It also adds specific guardrails against overreaching when the documentation is incomplete. One current instruction says:
"Do not convert missing documentation into negative findings. If a symptom, risk item (e.g., SI/HI), or MSE element is not documented, do not state it was denied, absent, not observed, or within normal limits; state only that it was not documented."
That kind of prompting is especially important in clinical writing because polished language can otherwise create a false impression of certainty. The problem is not just hallucinating an obviously false fact. It is also the quieter error of turning silence into a conclusion.
On the user side, LocalScribe is designed to work from the clinician's own material first. That can include clinical shorthand, relevant source documents, and supporting context such as terminology help or test information when those are needed. The goal is to give the model better context for the writing task so it stays closer to the clinician's reasoning and the actual material being used for documentation, rather than filling in gaps on its own.
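The per-run assembly pattern described above can be sketched as follows. To be clear, this is not LocalScribe's actual implementation: the quoted instructions are from the post, but the function name, section markers, and overall structure here are hypothetical, shown only to illustrate how a fixed guardrail and the clinician's material might be combined fresh on each request.

```python
# Guardrail text quoted from the post; sent with every documentation request.
GUARDRAIL = (
    "Do not convert missing documentation into negative findings. "
    "If a symptom, risk item (e.g., SI/HI), or MSE element is not documented, "
    "do not state it was denied, absent, not observed, or within normal "
    "limits; state only that it was not documented."
)

def assemble_user_prompt(template, shorthand, attachments=()):
    """Build a single-turn documentation prompt from the clinician's material.

    Each call starts from scratch: no prior conversation turns are included,
    so earlier model outputs cannot steer the next one.
    """
    parts = [
        f"Write a {template} from the clinical shorthand below, "
        "using attachments and measures when provided.",
        GUARDRAIL,
        "--- CLINICAL SHORTHAND ---",
        shorthand,
    ]
    for i, attachment in enumerate(attachments, start=1):
        parts += [f"--- ATTACHMENT {i} ---", attachment]
    return "\n\n".join(parts)
```

Because the guardrail is concatenated into every assembled prompt, the "silence into a conclusion" error is addressed on each run rather than left to the user to remember.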
Competence Still Matters
Competent use includes knowing what kinds of tasks a given AI tool was built to optimize for. A user does not need to know every technical detail, but a solid working understanding can help them choose the right tool for the job and use it more effectively. How a tool is used may also depend on prompting techniques that support more culturally responsive and helpful outputs. That is part of why AI documentation in clinical work is ultimately a competence question.
Prompt literacy is part of competence in using AI in clinical work. The tools themselves, and the companies behind them, should be held to meaningful standards of competence. At the same time, the stochastic and interactive nature of many AI systems also requires competence on the part of the user, especially in more sensitive clinical contexts. AI use is never automatic or required, but when it is chosen for an appropriate task, the quality and safety of the output depend on both a competent tool and a competent user.
Subscribe for future posts
If you want new writing on AI and psychology, ethics, and the implementation of AI in clinical practice, subscribe on Substack.
The views expressed here are my own and do not necessarily reflect the views of any current or future employer, training site, academic institution, or affiliated organization.