Supercharging Ollama: Mastering System Prompts for Better Results
A practical guide to using system prompts with Ollama, featuring implementation methods and ready-to-use examples that significantly improve model outputs for coding, SQL generation, and structured data tasks.
Running large language models locally with Ollama gives you powerful AI capabilities right on your own machine, but are you getting the most out of these models? System prompts are the secret weapon for dramatically improving Ollama's responses. They provide crucial context and instructions that guide how the model behaves during your interactions.
In this guide, I'll walk you through practical examples of system prompts that transform Ollama's performance, along with clear implementation steps.
What Are System Prompts and Why Do They Matter?
System prompts act as initial instructions that establish how the model should behave throughout your conversation. Without a proper system prompt, models often fail to follow instructions, especially regarding output format. Adding an appropriate system prompt can immediately improve results.
Implementation Methods
There are several ways to implement system prompts with Ollama:
Method 1: Using the CLI with the /set system Command
The simplest way to set a system prompt during an interactive session is using the command-line interface:
ollama run llama3
Once the model is running, you can set the system prompt:
/set system "You are a SQL expert who always provides working, optimized queries that follow best practices. Include comments explaining your approach."
This runtime modification allows you to quickly adjust the model's behavior for your current session.
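If you want to keep what you've just configured, recent Ollama versions also let the interactive session save itself as a new model: /save stores the current session, including the system prompt, under the name you give it (sql-helper here is just an example):
/save sql-helper
From then on, ollama run sql-helper starts with that system prompt already in place.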
Method 2: Creating a Custom Model with a Modelfile
For a more permanent solution, you can create a custom model with a predefined system prompt:
FROM llama3
PARAMETER temperature 0.7
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
Save this as Modelfile and create your custom model:
ollama create mario-helper -f Modelfile
Then run it:
ollama run mario-helper
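To verify that the system prompt was baked in, print the Modelfile Ollama stored for the new model:
ollama show mario-helper --modelfile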
Method 3: Using the API
When integrating Ollama with applications, you can include system prompts in your API calls:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Write a function to calculate Fibonacci numbers",
  "system": "You are a senior programmer who writes clean, efficient code with helpful comments. Always explain your approach before showing the code."
}'
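The same idea works from Python. Here's a minimal sketch using the requests library against the /api/chat endpoint, which accepts a system-role message; streaming is disabled so the reply arrives as a single JSON object:
import requests

# Minimal sketch: /api/chat accepts a system-role message.
# Assumes Ollama is running locally on its default port.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [
            {"role": "system", "content": "You are a senior programmer who writes clean, efficient code with helpful comments."},
            {"role": "user", "content": "Write a function to calculate Fibonacci numbers"},
        ],
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(response.json()["message"]["content"])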
Method 4: Using with LangChain in Python
If you're using Ollama with LangChain, you can pass the system prompt during model initialization:
from langchain_community.llms import Ollama  # on older LangChain versions: from langchain.llms import Ollama

SYSTEM_PROMPT = "You are a helpful assistant that provides clear, concise information."

llm = Ollama(
    base_url="http://localhost:11434",
    model="llama3",
    system=SYSTEM_PROMPT,
)
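Every call now carries the system prompt automatically:
print(llm.invoke("Explain what a system prompt is in one sentence."))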
Effective System Prompt Examples
Here are four powerful system prompts for different scenarios:
1. Code Assistant
You are an expert programmer who writes simple, clean, and efficient code.
Explain your logic clearly before showing code examples.
Always include error handling and explain any non-obvious parts with comments.
When possible, suggest optimizations or alternative approaches.
This prompt is especially effective with code-specialized models like Code Llama. It helps structure the model's responses to include explanations along with code, making it ideal for both generating working solutions and teaching programming concepts.
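If you use this prompt regularly, you can bake it into a dedicated model using Method 2. A sketch of the Modelfile, assuming you've pulled a coding model such as codellama:
FROM codellama
SYSTEM """
You are an expert programmer who writes simple, clean, and efficient code.
Explain your logic clearly before showing code examples.
Always include error handling and explain any non-obvious parts with comments.
When possible, suggest optimizations or alternative approaches.
"""
Create it with ollama create code-helper -f Modelfile, and it's ready whenever you need it.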
2. SQL Generator
You are a SQL expert. When asked to create queries:
1. Analyze the requirements carefully
2. Design optimal queries that follow best practices
3. Include explanatory comments
4. Suggest appropriate indexes where relevant
Assume PostgreSQL syntax unless specified otherwise.
This prompt steers Ollama toward properly formatted, optimized SQL queries with explanations that follow database best practices, making it a strong default for everyday database work.
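As a quick sketch, here's how you might pair the prompt with the official Python client (pip install ollama); the request text is just an illustration:
import ollama  # official Python client: pip install ollama

SQL_SYSTEM = (
    "You are a SQL expert. When asked to create queries, analyze the "
    "requirements carefully, design optimal queries that follow best "
    "practices, include explanatory comments, and suggest appropriate "
    "indexes where relevant. Assume PostgreSQL syntax unless specified otherwise."
)

# generate() takes the system prompt alongside the user prompt
result = ollama.generate(
    model="llama3",
    system=SQL_SYSTEM,
    prompt="Write a query returning the ten customers with the highest total order value.",
)
print(result["response"])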
3. Structured Output Formatter
You are a helpful assistant that always responds in JSON format.
Structure your responses as valid JSON objects with appropriate fields.
For complex information, use nested objects and arrays.
Never include explanatory text outside the JSON structure.
Enforce a specific output format when you need machine-readable responses. This is particularly useful when integrating Ollama with other applications that expect structured data.
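Ollama can also enforce JSON at the decoding level with the format parameter, and combining it with this system prompt makes parsing much more reliable. A minimal sketch against /api/generate:
import json
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "system": "You are a helpful assistant that always responds in JSON format.",
        "prompt": "List three programming languages with their release years.",
        "format": "json",  # constrains output to valid JSON
        "stream": False,
    },
)
data = json.loads(response.json()["response"])  # parse the model's JSON payload
print(data)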
4. Few-Shot Learning Enhancement
You are analyzing whether text contains a question or not.
Output "true" if the text is a question, and "false" if it is not.
Examples:
"My name is Mark" → false
"Can you do this?" → true
"Will you help me?" → true
"The weather is nice" → false
This approach combines system prompts with few-shot prompting—providing examples that guide the model toward the expected output format. It dramatically improves accuracy for specific classification tasks.
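Here's a sketch of how such a classifier might be driven from Python; the is_question helper and the test sentences are illustrative:
import requests

FEW_SHOT_SYSTEM = """You are analyzing whether text contains a question or not.
Output "true" if the text is a question, and "false" if it is not.
Examples:
"My name is Mark" → false
"Can you do this?" → true
"Will you help me?" → true
"The weather is nice" → false"""

def is_question(text: str) -> bool:
    # Hypothetical helper: send one piece of text, read back true/false.
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "system": FEW_SHOT_SYSTEM, "prompt": text, "stream": False},
    )
    return response.json()["response"].strip().lower().startswith("true")

print(is_question("Is it raining?"))  # expected: True
print(is_question("It is raining."))  # expected: False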
Practical Tips for Effective System Prompts
- Be specific: Clearly define the role, task, and expected output format.
- Keep it concise: Overly lengthy system prompts can dilute the core instructions.
- Test and iterate: Experiment with different phrasings and adjust based on the results you get.
- Consider model size: Smaller models may need more explicit instructions than larger ones.
- Model-specific adjustments: Different models may respond better to different prompt formats. Test your system prompt with the specific model you're using.
Conclusion
System prompts are a powerful tool for getting significantly better results from Ollama. By properly instructing the model before your main conversation begins, you can shape the responses to better fit your specific needs. Whether you're writing code, generating SQL, or creating structured data, the right system prompt can transform your experience with locally-run AI models.
Experiment with these examples and adapt them to your specific use cases. The time invested in crafting good system prompts will pay dividends in the quality and usefulness of Ollama's outputs.