Introduction
This is a note derived from reading the paper Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts. The paper can be found at this link.
While prompting LLMs can appear effortless, designing effective prompting strategies requires identifying the contexts in which these LLMs’ errors arise, devising prompting strategies to overcome them, and systematically assessing those strategies’ effectiveness.
In this work, they investigate how non-AI-experts intuitively approach prompt design when designing LLM-based chatbots, with an eye towards how non-AI-expert-facing design tools might help.
They note that chat-based interactions with LLMs can provide a powerful engine for a wide variety of tasks, including joke-writing, programming, writing college-level essays, medical diagnosis, and more.
So they create a no-code, LLM-based chatbot design tool, BotDesigner, which allows users to create an LLM-based chatbot solely through prompts.
They then examine how 10 participants perform a chatbot design task using BotDesigner.
Their observations:
- People can produce a final design, but tend to design opportunistically
- People struggle to make systematic, robust progress
- People tend to design prompts that resemble human-to-human instructions.
Contributions
- Describe a no-code, LLM-and-prompt-based chatbot design tool that encourages iterative design and evaluation of robust prompting strategies (rather than opportunistic experimentation).
- Describe how non-AI experts approach prompt design, and where they struggle.
- Identify opportunities for prompt-design tools aimed at non-AI experts.
Today's Non-Expert Prompt Design
One common design practice is as follows:
- First identify the chatbot's functionality or persona and draft ideal user-bot conversations
- Create a dialogue flow template (e.g., “(1) greeting message; (2) questions to collect user intention; (3) ...”)
- Fill the template with supervised NLP models (e.g., user intent classifier, response generator, etc.)
- Iterate on these components to achieve the desired conversational experience (a rough sketch of this pipeline follows the list).
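To make the pipeline concrete, here is a minimal Python sketch (not from the paper); `classify_intent` and `generate_response` are hypothetical placeholders for the supervised models each stage would need.

```python
# Hypothetical sketch of the traditional supervised-NLP chatbot pipeline
# described above. classify_intent() and generate_response() stand in for
# separately trained models; they are placeholders, not real library calls.

DIALOGUE_FLOW = [
    "greeting",          # (1) greeting message
    "collect_intent",    # (2) questions to collect user intention
    "fulfill_request",   # (3) ... further task-specific steps
]

def classify_intent(utterance: str) -> str:
    """Placeholder for a supervised intent classifier."""
    raise NotImplementedError

def generate_response(intent: str, utterance: str) -> str:
    """Placeholder for a trained response generator."""
    raise NotImplementedError

def run_turn(stage: str, user_utterance: str) -> str:
    # Each stage of the dialogue-flow template is backed by a rule or a
    # trained component, then iterated on until the conversation feels right.
    if stage == "greeting":
        return "Hi! How can I help you today?"
    intent = classify_intent(user_utterance)
    return generate_response(intent, user_utterance)
```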
But this requires substantial programming and machine learning knowledge.
The next design technique is pre-train, prompt, predict. This allows designers to create conversational agents with little to no training data, programming or ML skill, or even NLP knowledge.
- Use pre-trained LLMs, such as ChatGPT
- Create conversational systems directly, though their performance can sometimes be problematic
- Attach prompts to user inputs to improve performance (a minimal sketch follows this list).
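A minimal sketch of the prompt-attachment idea, assuming a hypothetical `call_llm()` wrapper around whichever pre-trained model is available; the prompt text and chatbot persona are illustrative, not from the paper.

```python
# Sketch: prepend an instruction prompt to each user message before sending
# it to a pre-trained LLM. call_llm() is a hypothetical wrapper around
# whichever model or API the designer has access to.

SYSTEM_PROMPT = (
    "You are a friendly cooking-instruction chatbot. "
    "Answer briefly, one step at a time, and ask before moving on."
)

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around a pre-trained LLM API."""
    raise NotImplementedError

def chat_turn(history: list[str], user_input: str) -> str:
    # The "design" lives entirely in the prompt text: instructions,
    # the conversation so far, and the new user input.
    transcript = "\n".join(history + [f"User: {user_input}", "Bot:"])
    return call_llm(SYSTEM_PROMPT + "\n\n" + transcript)
```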
Some ideas for improving LLM outputs in conversational contexts:
- Give examples of desired interactions in prompts (for example, n-shot prompting); see the sketch after this list
- Craft prompts that look like code, for example Jinja-style templates
- Repeat yourself. For example, with DALL-E, repeating the request, as in “a neon sign that reads backprop; backprop neon sign; a neon sign that backprop”, produces more effective results. (Unsure whether this carries over to text-based tasks.)
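A small sketch combining the first two ideas: the real jinja2 library renders a code-like prompt template, and two made-up example exchanges serve as n-shot demonstrations. The persona and example text are assumptions for illustration.

```python
# Sketch: a code-like (Jinja) prompt template that inlines example exchanges
# as n-shot demonstrations. Requires `pip install jinja2`.
from jinja2 import Template

PROMPT_TEMPLATE = Template(
    "You are a {{ persona }}.\n"
    "Here are examples of the desired interaction style:\n"
    "{% for ex in examples %}"
    "User: {{ ex.user }}\nBot: {{ ex.bot }}\n"
    "{% endfor %}"
    "User: {{ user_input }}\nBot:"
)

# Made-up few-shot examples showing the tone and turn-taking we want.
examples = [
    {"user": "How do I chop an onion?",
     "bot": "Halve it, peel it, then slice thinly. Ready for the next step?"},
    {"user": "Yes", "bot": "Great, now dice across your slices."},
]

prompt = PROMPT_TEMPLATE.render(
    persona="patient cooking instructor",
    examples=examples,
    user_input="What comes after dicing?",
)
print(prompt)  # This rendered text is what would be sent to the LLM.
```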
TODO....