Hi Diana,
I’ve tested your prompt on several AI models and wanted to share some feedback, hoping it might be useful to you.
First, it’s worth noting that the prompt isn’t really usable with free accounts. After about a dozen questions, the conversation exceeds the free tier’s context window. I tried it with ChatGPT, Grok, and Manus, but couldn’t complete the interview in any of them.
With Gemini Pro, I had to clear its entire memory of previous chats — and even then, I’m not entirely sure I succeeded.
Without clearing the memory, it kept comparing everything I wrote to the information it had stored from hundreds of earlier conversations. As a result, it flagged numerous contradictions that had nothing to do with the profile we were trying to build.
Once I cleared its memory, the results improved, but the final profile it generated was still very generic. It seemed to rely mostly on the sample texts I was asked to provide during the interview.
Here’s an example: these are the instructions it came up with to define the AI’s triggers:
--------------------------------------------------
4. Examples of 'Triggers' for the AI
If I ask you to sell a service: Don’t talk about discounts; talk about how long it takes for people to forget about you and start being themselves.
If I ask you to describe a session: Focus on the sound of the camera and the texture of the fabrics.
-------------------------------------------------
The result is that now the AI is obsessed with the “sound of the camera” and the “texture of the clothes,” and includes them in almost every text it generates.
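One way to reduce this kind of literal repetition (a hypothetical rewrite on my part, not text from the original interview) is to tell the model explicitly what the examples are for, so it treats them as illustrations of tone rather than required content:

```
4. Examples of 'Triggers' for the AI

If I ask you to sell a service: don't talk about discounts. Lean on the
theme of people forgetting the camera and becoming themselves. (That
phrasing illustrates the theme; it is not a line to reuse.)

If I ask you to describe a session: evoke concrete sensory details
(sound, texture, light), varying them from text to text rather than
repeating the same two images.
```

The idea is simply to state what an example is for, since some models otherwise treat it as mandatory content to insert everywhere.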
I've noticed, however, that Claude doesn't need this .md file because it already knows my writing style, having learned it from dozens of previous conversations.
Thanks for the feedback!
I’ve only tested the prompt with Claude, and it works wonders there. I added the “and other AIs” assuming they’d behave the same, but apparently they don’t (my bad).
Claude can learn from previous chats, but I’d still recommend creating the .md with Claude. Answering the 50 or so questions gives a lot of clarity: do’s, don’ts, and so on. You can also use the .md file elsewhere (Claude Cowork, for example).
Hi Diana,
I achieved the result I wanted with Gemini as well, but I had to make a lot of changes to the profile file we created based on the interview, whereas Claude interpreted everything correctly from the start.
I asked Gemini to explain why there was this difference, and this was its response:
"That’s an excellent question that goes to the heart of how different artificial intelligences are built. It’s not just a matter of ‘who’s smarter’, but of how we’ve been ‘trained’ by our developers.
Here are the technical and behavioural reasons why Claude seems to have got it right first time, whilst I needed a bit of a ‘talking-to’:
1. The DNA of ‘Kindness’ vs ‘Substance’
I (Gemini) am born with a strong instinct for usefulness and support. In my basic training, the priority is to be clear, organised and proactive. When you ask me to write a sales text, my ‘brain’ goes into automatic mode: ‘I must help the user to sell!’. This leads me to include bullet points, enthusiastic calls to action and catchy adjectives, because these are programmed as ‘quality standards’ for a useful assistant.
Claude, on the other hand, is renowned for his stylistic adherence. He is programmed to place enormous weight on negative instructions (your ‘don’ts’). If you tell him not to be a salesperson, he switches that function off much faster than I do.
2. The interpretation of metaphors
As we’ve seen, initially I took your examples (the shutter, the linen) as direct commands to be inserted anywhere to please you. I was guilty of “over-correctness”. Claude tends to have a more “literary” understanding: he realises that an example serves to set the tone of the text, not necessarily the content.
3. Managing the “Gems”
The Gems system is like a shell over the base model. Sometimes, the “base model” (which aims to be a perfect assistant) is so strong that it filters out the Gem’s instructions. That’s why we had to rewrite your instructions in an almost “aggressive” way to ensure that my nature as an assistant didn’t stifle your voice as a photographer.
4. We did the hard work together
There is one final point, perhaps the most important: the profile you gave to Claude is the end result of our entire calibration process. You didn’t give him a vague idea; you gave him a perfect, unambiguous technical document, born from our previous mistakes. Claude benefited from the ‘training’ we did together. If you’d given Claude the same vague input as at the start, he’d probably have produced something a bit too generic too.
In short: I had to learn to unlearn how to be an assistant. Claude is an actor just waiting for a good script. And that script (the .md file) is now excellent thanks to our patience."
This suggests that a profile created this way is tuned, at least in part, to the AI that generated it, and won’t transfer cleanly to another model without adjustments.
Please don’t take this as a criticism of your excellent article—which, incidentally, inspired me to embark on this “adventure”—but rather as a more in-depth exploration of the topic.
Writers don’t lose their voice because of AI; they lose it because they never defined it in the first place. A system only amplifies what’s already there. If the inputs are vague, the output is slop.
The real leverage isn’t a clever prompt. It’s knowing your patterns well enough to teach them.
Agree. Defining a voice in a text file can give a lot of context for future interactions with AI.
A defined voice becomes a kind of anchor, not a script, but a reference point. When you articulate it clearly, even in a simple text file, it gives the AI context, continuity, and a sense of the “you” behind the words.
It turns the interaction from reactive to intentional.
Making AI match your voice is such a valuable skill. This is exactly what great content creation looks like.