Get Better Responses from LLMs
Written by Katherine Villasenor, Learning Analytics/LMS Specialist, Office of Digital Learning
Introduction: The Power of Prompt Engineering
It would have been easy to dismiss this tool after a couple of frustrating experiences: I desperately needed a quick, accurate answer from ChatGPT, and I just wasn't getting the responses I expected. Luckily, during an evening stroll around Austin's famous Zilker Park, I overheard some people talking about prompt engineering. That buzzword stayed with me, and the first thing I did when I got home was Google the term and, of course, ask ChatGPT about it as well. I realized that I wasn't getting the responses I wanted or expected because I wasn't phrasing my requests in a way that allowed these tools to provide better answers. To leverage these tools effectively, it's essential to understand the principles of prompt engineering. As in a classroom, the answers we get are only as good as our questions, or in this case, our prompts. This blog post shares practical tips and examples on how to achieve just that.
The Rise of AI in Education
As AI continues to grow, it has drawn a great deal of attention in the education world. With that attention has come increased interest in tools like Copilot and ChatGPT, which offer tremendous potential for enhancing teaching, research, and administrative tasks. Like many others, I started using ChatGPT out of curiosity and for entertainment, asking trivial questions just to see the limits of its knowledge and capabilities, which to my surprise seemed almost nonexistent. No matter how complex the question, there was always an answer. A correct answer? That's where it gets tricky: the answers are generally correct, but they are not always what the user was hoping for.
Understanding Prompt Engineering
First, let's define prompt engineering: it is the practice of crafting your input in a way that maximizes the relevance and accuracy of the AI's output. This matters because, as mentioned above, the quality of AI responses depends significantly on how questions, or prompts, are phrased.
Tips for Improving Your Prompts
In the section below, I share tips on how to improve your prompts to get better responses from LLMs (Large Language Models):
1. Be Specific and Clear: clearly state your query or task to reduce ambiguity.
• Example: Instead of asking, "Summarize this article," specify, "Summarize the
key findings of this article on climate change impacts on coastal cities."
2. Provide Context: give the AI context to better understand the task.
• Example: "Provide a summary of this research paper. It is about the effects of
remote learning on student engagement during the pandemic."
3. Break Down Complex Queries: divide complicated tasks into simpler, more manageable parts.
• Example: Rather than asking, "Analyze the trends in this dataset," break it down:
"First, identify the major trends in the dataset from 2010 to 2020. Second, explain
what these trends indicate about student enrollment patterns."
4. Use Examples: provide examples to guide the AI on the desired format or style.
• Example: "Generate a research proposal outline. For example, the outline should
include sections like Introduction, Literature Review, Methodology, Expected Results,
and Conclusion."
5. Specify the Desired Output Format: clearly mention the format you need the information in.
• Example: "Create a multiple-choice question based on this text with four options,
where one option is correct."
6. Iterate and Refine Prompts: don't hesitate to refine your prompts based on the initial responses.
• Example: If the response to "Explain the significance of this theory" is too broad,
refine to "Explain the significance of this theory in the context of modern educational
psychology."
7. Ask for Step-by-Step Responses: request detailed steps for processes or explanations.
• Example: "Explain the process of applying for a research grant step-by-step, including
tips for each stage."
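The tips above can also be combined into a reusable template. Below is a minimal Python sketch of that idea; the `build_prompt` function and its parameter names are illustrative inventions for this post, not part of any LLM library's API.

```python
def build_prompt(task, context=None, steps=None, examples=None, output_format=None):
    """Assemble a prompt that applies the tips above: a specific task,
    supporting context, step-by-step structure, examples, and a format."""
    parts = [task.strip()]                      # Tip 1: be specific and clear
    if context:
        parts.append(f"Context: {context.strip()}")        # Tip 2
    if steps:                                   # Tip 3: break down the query
        numbered = " ".join(f"{i}. {s.strip()}" for i, s in enumerate(steps, 1))
        parts.append(f"Work through these steps: {numbered}")
    if examples:                                # Tip 4: guide with examples
        parts.append("For example: " + "; ".join(examples))
    if output_format:                           # Tip 5: specify the format
        parts.append(f"Format the response as: {output_format.strip()}")
    return "\n".join(parts)

# A refined version of the vague "Summarize this article" prompt:
prompt = build_prompt(
    task="Summarize the key findings of this article on climate change "
         "impacts on coastal cities.",
    context="The summary is for a class discussion with undergraduates.",
    output_format="a bulleted list of no more than five findings",
)
print(prompt)
```

Even if you never write a line of code, the structure is the point: each optional piece you fill in gives the model one more constraint to work with.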
As with most things, it takes time to perfect the formula. An article by McKinsey & Company notes that getting good output isn't rocket science, but it can take patience and iteration. Be patient and get creative in how you describe and ask for what you need, and don't be afraid to overexplain.
Putting It into Practice: Examples
Now it's time to see this in action. In the examples below, we will apply the tips above and compare the prompts. I will be using Microsoft Copilot, since it is the recommended tool for our University thanks to its strong security and privacy protections.
Example 1: Lecture Preparation
- General Prompt: "Create a lecture outline on the topic of cognitive development in children."
- Refined Prompt: "Create a detailed lecture outline on Piaget's stages of cognitive development, including key concepts and examples for each stage."
Example 2: Research Assistance
- General Prompt: "Help me with a literature review on digital learning."
- Refined Prompt: "List the most recent studies (2018-2023) on the impact of digital learning tools on student engagement, summarizing their key findings."
Further Practice
I invite you to test the prompts below and compare the results you get. To take this a step further, think of tasks or projects you could use AI for and come up with a prompt that integrates the tips we discussed earlier. Maybe you'll discover that preparing lectures doesn't take as long, or find creative inspiration for your next assignments.
Example 3: Student Support
- General Prompt: "Provide advice for students struggling with online learning."
- Refined Prompt: "List practical strategies for students to improve their time management and focus during online learning sessions."
Example 4: Administrative Tasks
- General Prompt: "Write an email to students about the upcoming seminar."
- Refined Prompt: "Draft an email to students reminding them of the upcoming seminar on 'Innovative Teaching Methods' scheduled for July 15th, including details on the speakers and how to register."
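Tip 6, iterating on a prompt, can be sketched the same way: start from a general prompt and keep appending constraints until the response is narrow enough. The `refine_prompt` helper below is a hypothetical illustration, not a real library function.

```python
def refine_prompt(prompt, refinements):
    """Narrow a general prompt by appending one or more constraint clauses."""
    refined = prompt.rstrip(".")          # drop the trailing period, if any
    for clause in refinements:
        refined += ", " + clause.strip()  # add each constraint in turn
    return refined + "."

# The general prompt from Example 2, narrowed step by step:
p = "Help me with a literature review on digital learning."
p = refine_prompt(p, ["limited to studies from 2018-2023"])
p = refine_prompt(p, ["focused on student engagement",
                      "summarizing the key findings of each study"])
print(p)
```

In practice you would do this refinement conversationally, reading each response and adding the constraint it was missing; the loop above just makes the habit explicit.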
Conclusion: The Transformative Potential of AI in Academia
As you can see in the examples above, effective prompt engineering is the key to getting the most out of LLMs. I hope these tips help you as you dive into the world of AI and take advantage of this powerful tool. MIT published an article stating that integrating these strategies not only improves the efficiency of administrative and academic tasks but also enhances the overall educational experience for students. AI is here to stay and can make our jobs easier if we learn how to use these tools well.
As you refine your approach to working with AI, the process becomes more intuitive and productive. It's like having a responsive assistant that grows more attuned to your needs the more you work together. This improvement in the quality of the responses is the key to the transformative potential of AI in academia. As you start to see the benefits in your work, I believe you'll agree that mastering prompt engineering is a worthwhile investment of time and effort. I invite you to share with your colleagues the tips you have found useful when creating prompts for LLMs.