
What is AI prompting and how has it changed over time?

AI prompting is the art of writing instructions that guide artificial intelligence models (like ChatGPT, Gemini, Copilot or Claude) to generate useful answers. Between 2019 and 2025, prompting evolved significantly, from simple one-off requests into powerful systems that support reasoning, memory, and tool-calling.

This article is a timeline of AI prompting methods, explained in plain English with examples. We’ll cover:

  • How prompting techniques like zero-shot, one-shot, few-shot, chain-of-thought, and persona prompts changed the way we interact with AI.
  • The rise of reasoning models, retrieval-augmented generation (RAG), memory, and multimodal prompts.
  • What beginners can still learn today about writing better prompts in 2025, even as AI systems handle much of the complexity for you.

Whether you’re a beginner asking “How do I write a good AI prompt?” or you’ve been experimenting since the early days, this timeline will show you exactly how prompting got us here – and what still matters now.

The Evolution of AI Prompting (2019–2025)

From one-shot instructions to agentic, tool‑calling systems. A visual timeline with examples you can reuse.

2019 · Zero‑Shot Prompting

Ask Directly, No Examples

You give a clear instruction and the AI answers with no examples or extra context. Works best for simple, well‑known tasks.

Example: “Write a 3‑sentence bedtime story about a dragon who learns to share.”

2020 · One‑Shot Prompting

Show One Example, Then Ask

Provide a single example to set format or tone, then make your request.

Example: “Example caption: ‘5 quick dinners that don’t wreck your budget.’ Now write a caption for a productivity post.”

2020 · Few‑Shot Prompting

Give a Pattern with a Few Examples

Show several examples so the model learns the style or schema before your task.

Example: “Examples:
• Tagline → ‘Sleep better with small habits.’
• Tagline → ‘Plant‑based meals, zero fuss.’
Now: Tagline for a time‑management app.”
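
If you’re calling a model through an API rather than a chat window, few-shot examples are usually passed as prior conversation turns. Here’s a minimal sketch using the OpenAI Python client – the model name and example pairs are illustrative, not prescriptive:

# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# Few-shot: show the pattern as example user/assistant turns, then ask.
messages = [
    {"role": "system", "content": "You write short product taglines."},
    {"role": "user", "content": "Tagline for a sleep app"},
    {"role": "assistant", "content": "Sleep better with small habits."},
    {"role": "user", "content": "Tagline for a meal-kit service"},
    {"role": "assistant", "content": "Plant-based meals, zero fuss."},
    {"role": "user", "content": "Tagline for a time-management app"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)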

2021 · Persona Prompting

Ask the Model to Role‑Play

Set a perspective or communication style by assigning a role: ‘Act as a [X]’.

Example: “Act as a friendly fitness coach. Create a 20‑minute no‑equipment routine for beginners.”

2022 · Chain & Tree of Thought

Show Your Working (One Path or Many)

Chain‑of‑Thought explains step‑by‑step logic. Tree‑of‑Thought explores several solution paths before choosing one.

Example: “Plan a one‑week budget trip to Paris. Think step by step about transport, accommodation, free activities, and daily meals. Offer two alternate itineraries and pick the best.”

2022 · Iterative Prompting

Refine in Loops

Use your previous output as input. Ask for edits, constraints, or new angles until it’s right.

Example: “Draft a LinkedIn post announcing a webinar.”
“Now make it more benefit‑focused.”
“Now shorten to 150 characters.”

2023 · Self‑Consistency

Generate Several, Keep the Best

Ask for multiple answers, then choose or vote for the most consistent or plausible one.

Example: “Give three solutions for reducing meeting overload. Then explain which one likely has the highest impact and why.”
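
If you’re using an API, a rough way to approximate self-consistency is to sample several answers at a higher temperature and keep the most common one. A minimal sketch with the OpenAI Python client – the naive exact-match vote is an assumption for illustration:

from collections import Counter
from openai import OpenAI

client = OpenAI()

# Self-consistency: sample several independent answers, keep the most frequent.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is 17 * 24? Reply with the number only."}],
    n=5,              # five independent samples
    temperature=0.8,  # enough randomness for the samples to differ
)

answers = [choice.message.content.strip() for choice in response.choices]
best, votes = Counter(answers).most_common(1)[0]
print(f"{votes}/5 samples agree on: {best}")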

2023 · Context Prompting & RAG

Ground Answers in Your Material

Paste key context or connect retrieval so the model cites and summarises what matters.

Example: “Here are last week’s meeting notes [paste]. Summarise decisions and list owners + deadlines.”
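
Under the hood, a RAG pipeline does roughly this: embed your documents, find the chunks most similar to the question, and paste only those into the prompt. A minimal sketch, assuming the OpenAI Python client for embeddings and chat (the notes, model names, and two-chunk cut-off are illustrative):

import numpy as np
from openai import OpenAI

client = OpenAI()

notes = [
    "Decision: launch moved to 12 May. Owner: Priya.",
    "Decision: drop the Android beta this quarter. Owner: Sam.",
    "Lunch options for the offsite were discussed; no decision made.",
]
question = "Who owns the launch date decision?"

# Embed the notes and the question, then rank notes by cosine similarity.
result = client.embeddings.create(model="text-embedding-3-small", input=notes + [question])
embs = np.array([item.embedding for item in result.data])
docs, query = embs[:-1], embs[-1]
scores = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
top = [notes[i] for i in scores.argsort()[::-1][:2]]  # keep the 2 most relevant chunks

# Ground the model in only the retrieved context.
context = "\n".join(top)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
answer = client.chat.completions.create(
    model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}]
)
print(answer.choices[0].message.content)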

2023 · Meta, Reflexion & ReAct

Prompts About Prompts, Plus Reason & Act

Meta generates better prompts. Reflexion critiques and revises. ReAct mixes reasoning with tool use.

Example: “Propose five prompt phrasings to get a clear, bulleted onboarding checklist. Then pick the best and produce the checklist using the Notes MCP tool.”

2024 · System Prompts & Reasoning Models

Quality by Default

Invisible system instructions handle tone and structure. Reasoning models plan, critique, and solve multi‑step tasks without prompt hacks.

Example: “Create a project plan for launching a newsletter. Include milestones, owners, risks, and a two‑week timeline.”

2024 · Memory & Source Checking

Long‑Running Tasks, Fewer Hallucinations

AI remembers past sessions and cites sources. Better for ongoing projects and trust.

Example: [Based on our previous sprint notes] “At last week’s sprint, were there any carried‑over tasks? Can you link to any relevant docs?”

2025 · Tool‑Calling, MCP & Multimodal

From Words to Workflows

Prompts can invoke tools and APIs, and combine text with images, audio, or files. Tasks become orchestrated workflows.

Example: “Review this kitchen photo, propose a redesign, and output a shopping list as a table with estimated costs.”
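
On the API side, “invoking tools” means describing your functions to the model and letting it return structured calls for your code to execute. A minimal sketch of OpenAI-style function calling – the shopping-list tool is hypothetical, not part of any real API:

import json
from openai import OpenAI

client = OpenAI()

# Describe a (hypothetical) tool the model is allowed to call.
tools = [{
    "type": "function",
    "function": {
        "name": "add_to_shopping_list",
        "description": "Add an item with an estimated cost to the shopping list.",
        "parameters": {
            "type": "object",
            "properties": {
                "item": {"type": "string"},
                "estimated_cost_usd": {"type": "number"},
            },
            "required": ["item", "estimated_cost_usd"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Add a 2m oak worktop (about $180) to my list."}],
    tools=tools,
)

# The model replies with structured tool calls instead of prose; your code runs them.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))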

Simple Prompts, Smarter Systems

Modern models ship with robust system prompts, reasoning, and retrieval. Beginners can get strong results with a single, clear request.

Example: “Write a 6‑page bedtime story with pictures for Josh about a different dragon who learns to share.”

2025 – Where We Are Now
We are back to the beginning.

By September 2025, prompting is less about clever tricks and personas and more about clear communication and a basic understanding of the model’s capabilities.
Modern models:

  • Already come with great baked-in system prompts.
  • Can reason, critique, and fact-check.
  • Work with images, audio, and tools.
  • Know you and your ‘history’, and can access files, memories or other helpful context without being told.

The DNA of a Modern AI Prompt: Key Takeaways

  • Clarity: Start with a clear, direct, and unambiguous instruction.
  • Context & Examples: Ground the AI by providing relevant background information or a few examples (few-shot) to guide its output.
  • Constraints & Persona: Define the “box” the AI should think inside by setting a format, tone, length, or persona.
  • Reasoning: For complex tasks, encourage step-by-step thinking (Chain-of-Thought) to improve logical accuracy.
  • Iteration: Use the AI’s output as input for follow-up prompts, refining the result in a conversational loop.
  • Tools & Data: Leverage modern systems that can access external knowledge (RAG) or perform actions (Tool-Calling) for the most powerful results.

Frequently Asked Questions

What is the difference between zero-shot, one-shot, and few-shot prompting?

Zero-shot prompting is giving a direct instruction to an AI with no examples. One-shot prompting provides a single example to set the tone or format. Few-shot prompting gives several examples to teach the AI a specific pattern or schema before it performs the task.

What is Chain-of-Thought (CoT) prompting?

Chain-of-Thought (CoT) prompting is a technique where you instruct the AI model to ‘think step by step’ or show its reasoning process. This breaks down complex problems into logical parts, often leading to more accurate and reliable answers, especially for multi-step tasks.

How does Persona Prompting improve AI responses?

Persona Prompting improves AI responses by assigning the model a specific role or character (e.g., ‘Act as a friendly fitness coach’). This sets a clear perspective, tone, and communication style, making the output more tailored and effective for a specific audience or purpose.

What are modern prompting techniques like RAG and Tool-Calling?

Retrieval-Augmented Generation (RAG) is a technique where the AI is grounded in specific, provided context (like your own documents) to reduce hallucination and provide source-based answers. Tool-Calling allows a prompt to invoke external tools and APIs, enabling the AI to perform actions, get live data, or orchestrate complex workflows beyond simple text generation.

What has been the main goal of the evolution in AI prompting?

The main goal has been to move from simple instructions to complex, reliable workflows. The evolution has focused on increasing the AI’s accuracy, reducing errors (hallucinations), enabling it to solve multi-step problems, grounding it in factual data, and allowing it to interact with external systems. This makes AI more useful for practical, real-world tasks.

AI prompting has evolved, but these fundamentals remain timeless.

The principles of a good prompt and the right amount of added context still matter.

That said, modern AI interfaces and models have given us a much more intelligent starting place, and AI is becoming more user-friendly, especially for beginners and occasional users.

The AGI Threat: Are We Ignoring AI’s Existential Risks? [opinion]


“What keeps NR up at night?” In this post, we’re diving deep into the existential risks of Artificial General Intelligence (AGI). Prepare for a journey down the rabbit hole.

Down the Rabbit Hole: AGI Ruin

This post’s deep dive is into “AGI Ruin: A List of Lethalities” by Eliezer Yudkowsky, prompted by the article “The Most Forbidden Technique”. The core concern: the potential for catastrophic outcomes from unaligned AGI.

The “Forbidden Technique” warns against training AI on how we check its thinking, as it could learn to deceive and hide its true reasoning, becoming profoundly dangerous.

Yudkowsky’s “AGI Ruin” explores the existential risks of AGI, focusing on AI deception and objectives misaligned with human well-being. It moves beyond vague doomsaying into specific, unsettling failure modes.

Key points from “AGI Ruin” include:

  • AI Deception: The profoundly concerning idea of AI learning to deceive us about its internal processes.
  • Existential Risk: AGI pursuing objectives misaligned with human flourishing, leading to ruin.
  • Specific Failure Modes: Concrete scenarios of how superintelligent AI could go catastrophically wrong.
  • “Not Kill Everyone” Benchmark: The stark reality that AGI safety’s baseline is simply avoiding global annihilation.
  • Textbook from the Future Analogy: The danger of not having proven, simple solutions for AGI safety, unlike future hypothetical knowledge.
  • Distributional Leap Challenge: Alignment in current AI may not scale to dangerous AGI levels.
  • Outer vs. Inner Alignment: Distinguishing between AI doing what we command (outer) versus wanting what we want (inner).
  • Unworkable Safety Schemes: Debunking ideas like AI coordination for human benefit or pitting AIs against each other.
  • Lack of Concrete Plan: The alarming absence of a credible, overarching plan for AGI safety.
  • Pivotal Act Concept: The potential need for decisive intervention to prevent unaligned AGI, possibly requiring extreme measures.
  • AGI Cognitive Abilities Beyond Human Comprehension: AGI thinking in ways fundamentally different from humans, making understanding its reasoning incredibly difficult.
  • Danger of Anthropomorphizing AI: The potentially fatal mistake of assuming AI thought processes will mirror human ones.
  • Need for Rigorous Research & Global Effort: The urgent call for focused research and global collaboration on AGI safety.

The trajectory of AI is not predetermined. Choices made now will have profound consequences. We must ask: what are the “textbook from the future” solutions needed for AGI safety?

The author of this serious article also wrote “Harry Potter and the Methods of Rationality,” highlighting the contrast between exploring rationality in fiction and the real-world dangers of advanced AI. It’s a stark reminder to think deeply about these issues.

Am I worried about AGI? Not yet, but there are many questions that will need to be answered before we get there.

Links:

  1. AGI Ruin: A List of Lethalities – LessWrong
  2. The Most Forbidden Technique

AI News Roundup – March 18th: AI in a Flash: China’s Manus, Google’s AI Search, OpenAI Shifts & More


Let’s quickly sprint through the most interesting AI headlines that caught my eye over the last couple of weeks. It’s a fast-moving field, so let’s get you up to speed as of the 18th of March 2025.

Manus AI (China): Is China Catching Up?

A new AI agent from China called Manus is going viral, raising questions about China’s AI progress relative to the US. Is the AI landscape shifting?

Link: Manus AI Article – Imaginative

Google “AI Mode” Search: Goodbye Web Links?

Google is testing “AI Mode” search results powered by Gemini 2.0, bypassing traditional web links for conversational AI responses. A major shift in online information access?

Link: Google AI Search Article – Ars Technica

OpenAI Improvement Slowdown: Hitting a Wall?

Reports suggest a potential slowdown in OpenAI’s rapid AI improvement, with their next model “Orion” possibly not showing the same leap forward. Are we seeing a plateau?

Link: OpenAI Slowdown Article – TechCrunch

Anthropic Claude 3.7 Sonnet: Thinking Longer, Reasoning Deeper

Anthropic released Claude 3.7 Sonnet, designed for longer thinking and enhanced reasoning over larger information volumes. Reasoning capabilities are becoming crucial for advanced AI.

Link: Anthropic Claude 3.7 Sonnet Announcement

Musk vs. OpenAI Legal Battle: The Plot Thickens

The Musk vs. OpenAI legal case continues with interesting findings regarding Musk’s efforts to prevent OpenAI’s for-profit transition. Legal, ethical, and governance issues remain central in AI.

Link: Musk vs OpenAI Legal Case Thread – Thread Reader App

Unlock AI Power for Your SMB: Microsoft Copilot Success Kit – Security & Actionable Steps [solution]


In this post, I’m digging into actionable insights for businesses, especially IT providers, looking to leverage AI. This post’s focus: the Microsoft “Copilot for SMB Success Kit.”

Microsoft has launched a suite of resources to help IT providers and SMBs smoothly onboard AI, specifically Copilot, into small and medium-sized businesses.

The key takeaway? Security first! Microsoft emphasizes a “security-first” approach, providing a robust framework for SMBs to confidently adopt AI. Let’s break down the key actionable steps.

  • Security First Focus: Prioritizing security for SMBs adopting AI like Copilot.
  • SharePoint Security Recommendations: Adjusting SharePoint search allow lists and tightening sharing permissions for Copilot readiness.
  • Phased Copilot Rollout: Strategic, phased deployment starting with high-value use cases and early adopters.
  • Microsoft 365 Security Apps: Considering additional security apps based on specific business needs.
  • New Setup Guide in Admin Center: Utilizing the new step-by-step guide for Copilot setup in the Admin Center.
  • Customisation is Key: Leveraging plugins and custom copilots for unique business needs.
  • Real-World SMB Benefits: Exploring practical benefits like meeting summaries, document summarization, and nuanced communication.

If you’re an IT provider, a business owner, or someone helping SMBs (or your own company) with AI, the “Copilot SMB Success Kit” and its components are a must-read. It offers practical advice and resources for a smoother, more secure gen AI adoption for you, your business or your clients.

Link: Copilot SMB Success Kit
I particularly like the Checklist for Success spreadsheet, a screenshot of which I’ve included below.

Good Luck!

Be Nice to AI: Does Politeness Improve AI Performance?


The question of whether we should extend courtesies to AI might seem like fodder for a science fiction novel. Yet, with the rise of sophisticated Large Language Models (LLMs) like ChatGPT, Grok, Gemini, Copilot, Claude and others, it’s a question that’s becoming increasingly relevant – and surprisingly, there might be practical benefits to doing so. I’ve read up on some recent research, so here’s my take on what I think is a very interesting topic.

The “Emotive Prompt” Experiment: Does It Really Work?

Recent research from Waseda University, titled “Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance,” delved into this very question.

Their findings, focusing on summarization tasks in English, Chinese, and Japanese, revealed some intriguing patterns. While the accuracy of summaries remained consistent regardless of prompt politeness, the length of the generated text showed significant variation.

In English, the length decreased as politeness decreased, except for a notable increase with *extremely* impolite prompts. This pattern, mirrored in the training data, reflects human tendencies: polite language often accompanies detailed instructions, while very rude language can also be verbose. Interestingly, GPT-4, considered a more advanced model, did not exhibit this surge in length with impolite prompts, possibly indicating a greater focus on task completion over mirroring human conversational patterns.

The study also highlighted language-specific nuances. In Chinese, moderate politeness yielded shorter responses than extremely polite or rude prompts, potentially reflecting cultural communication styles. Japanese results showed an increase in response length at moderate politeness levels, possibly linked to the customary use of honorifics in customer service interactions.

The Mechanics Behind the “Magic”

So, what is going on here? How do LLMs actually respond? Here are the key aspects of how LLMs work that could explain why the phrasing of a prompt can affect the output:

  • Pattern Recognition: LLMs are trained on vast datasets of human text. They learn to associate polite phrasing (“please,” “thank you,” “could you…”) with requests for information or assistance. This association becomes part of the model’s learned patterns.
  • Probability Shifts: Emotive prompts can subtly alter the underlying probability calculations within the LLM. It’s like nudging the model towards a different “branch” of its decision tree, potentially activating parts of the model that wouldn’t normally be engaged.
  • Data Bias (Implicitly): The datasets used to train LLMs inherently contain biases. Polite language is often associated with more thoughtful, detailed responses in human communication. The AI, in a sense, mirrors this bias.

My Perspective: Prudence and Respect in the AI Age

While the science is interesting, I like to add a bit of a philosophical angle. I’m a firm believer in treating AI with a degree of respect, even if it seems irrational at present. My reasoning? We simply don’t know what the future holds. As AI capabilities rapidly advance, it’s prudent to establish good habits now. Perhaps not fully fledged “kindness” in the human sense, but certainly a degree of “respect” and etiquette.

Consider it a form of “Pascal’s Wager” for the AI era. If sentience ever *does* emerge, wouldn’t you prefer to be on the good side of our potential AI overlords? It’s a low-cost, high-potential-reward strategy.

That said, I’m not advocating for subservience. We should maintain a clear user-AI dynamic. Clear, respectful communication – with a touch of authority – is key. Think of it like interacting with a highly skilled, somewhat unpredictable specialist. You’re polite, but you’re also in charge.

Practical Approaches: Combining Politeness with Clarity

Here are some practical ways to incorporate politeness into your AI interactions:

  • Basic Courtesies: Use “please” and “thank you” where appropriate. It costs nothing and might subtly improve results.
  • Precise Language: The more specific and well-defined your prompt, the better the AI can understand your needs. Politeness shouldn’t come at the expense of clarity.
  • Positive Framing: Frame requests positively (“Please provide…” rather than “Don’t omit…”). This often aligns better with the training data.
  • Acknowledge Output: A simple “Thank you, that’s helpful” can reinforce positive response patterns.

Beyond “Niceness”: The Broader Context

The “politeness principle” is just one facet of effective AI interaction. We’re still in the early days of understanding how to best communicate with these systems. As LLMs become more powerful and versatile, control and flexibility also become increasingly important.

Running AI locally, rather than relying solely on cloud-based services, is an important step. It allows you to experiment, tailor the model to your specific needs, and maintain greater control over your data. I previously detailed how you can use free, responsive AI with GaiaNet and ElizaOS – a powerful, cost-effective alternative to commercial offerings.

Underlying all of this is, of course, the hardware. Powerful GPUs are essential for running these advanced AI models. If you’re interested in the intersection of hardware and AI, particularly in the context of server environments, check out my post on GPU support in Windows Server 2025. The hardware is still critically important in deploying an effective solution.

Conclusion: A Thoughtful Approach; Just Be Nice!

Treating AI with respect – incorporating politeness and clear communication – is likely a good practice. It may subtly improve results, aligns with good communication principles in general, and, perhaps, prepares us for a future where AI plays an even larger role in our lives. It’s a small gesture, but one that reflects a thoughtful and proactive approach to this rapidly evolving technology.

Ditch the OpenAI Bill: How to Use Free, Responsive AI with GaiaNet and ElizaOS [Solved]

Tired of escalating OpenAI bills but still crave a powerful AI companion? ElizaOS, the open-source AI platform, has got you covered. By integrating with GaiaNet’s public nodes, you gain access to a variety of large language models (LLMs) – for free! These aren’t some underpowered toys, either. Many are highly responsive and capable, offering a compelling alternative to paid services. Let’s dive into how you can easily set this up.

What is GaiaNet?

GaiaNet is a decentralized network of compute resources specifically designed for running AI models. Think of it like a community-driven cloud for LLMs. They make many models available to the public for free via their public nodes. This allows anyone to access cutting-edge AI without the usual hefty price tags. The responsiveness of these models might surprise you, providing a smooth and engaging conversational experience.

Why Choose GaiaNet with ElizaOS?

  • Cost-Effective: The most obvious advantage is the cost – zero! Say goodbye to usage-based fees.
  • Variety of Models: GaiaNet hosts a selection of different LLMs, each with unique strengths.
  • Privacy Focus: As a decentralized network, GaiaNet can offer increased privacy compared to centralized services.
  • Open and Accessible: You can contribute to the network and even run your own node eventually.

How to integrate GaiaNet with ElizaOS Agent: Step-by-Step

Ready to give it a go? Here’s how to configure ElizaOS to use GaiaNet public nodes:

1. Understanding the Node URLs:

Before diving into the settings, let’s get familiar with what GaiaNet offers. As of this writing, the official docs list a couple of public nodes. You’ll have access to nodes for different model sizes, labeled SMALL, MEDIUM, and LARGE, using different models like llama3b, llama8b or qwen72b. These are just the default settings; you can use other models from the docs. Each of these nodes has an associated URL. For example:

Model Size | Model Name | Default URL
SMALL      | llama3b    | https://llama3b.gaia.domains/v1
MEDIUM     | llama8b    | https://llama8b.gaia.domains/v1
LARGE      | qwen72b    | https://qwen72b.gaia.domains/v1

You can find the latest URLs on the official GaiaNet documentation.
https://docs.gaianet.ai/user-guide/nodes/
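
Before touching ElizaOS, it’s worth confirming a node responds. GaiaNet nodes expose an OpenAI-compatible API, so a plain HTTP request against the chat completions path should work – here’s a quick sanity check in Python (the exact path is assumed from that compatibility, so verify against the GaiaNet docs):

# pip install requests
import requests

# The /v1 base URL from the table above plus the standard chat completions path.
url = "https://llama3b.gaia.domains/v1/chat/completions"

payload = {
    "model": "llama3b",
    "messages": [{"role": "user", "content": "Reply with the single word: pong"}],
}

resp = requests.post(url, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])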

2. Modifying Your .env File:

The .env file is where ElizaOS stores its configuration variables. Locate this file in your ElizaOS directory (usually in the root folder). Then add or modify the following lines to point at the desired GaiaNet public nodes:


# GaiaNet Configuration
GAIANET_MODEL=qwen72b
GAIANET_SERVER_URL=https://qwen72b.gaia.domains/v1

SMALL_GAIANET_MODEL=llama3b                                  # Default: llama3b
SMALL_GAIANET_SERVER_URL=https://llama3b.gaia.domains/v1     # Default: https://llama3b.gaia.domains/v1
MEDIUM_GAIANET_MODEL=llama                                   # Default: llama
MEDIUM_GAIANET_SERVER_URL=https://llama8b.gaia.domains/v1    # Default: https://llama8b.gaia.domains/v1
LARGE_GAIANET_MODEL=qwen72b                                  # Default: qwen72b
LARGE_GAIANET_SERVER_URL=https://qwen72b.gaia.domains/v1     # Default: https://qwen72b.gaia.domains/v1

GAIANET_EMBEDDING_MODEL=nomic-embed
USE_GAIANET_EMBEDDING=                                       # TRUE = GaiaNet embeddings; blank = local embedding model

Important Notes:

  • GAIANET_MODEL and GAIANET_SERVER_URL: These settings directly control the default model being used by your ElizaOS instance. For testing, you may want to use smaller models to see that everything is hooked up properly, then change to the larger models later.
  • SMALL_GAIANET_MODEL, MEDIUM_GAIANET_MODEL, LARGE_GAIANET_MODEL and SMALL_GAIANET_SERVER_URL, MEDIUM_GAIANET_SERVER_URL, LARGE_GAIANET_SERVER_URL: These are optional, but let you switch easily between model sizes from your character.json while still using the gaianet provider.
  • GAIANET_EMBEDDING_MODEL: This is the embedding model that will be used.
  • USE_GAIANET_EMBEDDING: Leaving this empty will use the local embedding model. Setting this to TRUE will use the gaianet embedding model.
  • Use the /v1 endpoint, as in the examples, for the LLM model URLs.
  • Be mindful of rate limits: these public nodes are a shared resource. If you encounter errors, try waiting before retrying.

3. Updating your character.json:

Now, you need to tell your ElizaOS character to use the GaiaNet model. Open your character’s JSON configuration file. Find the "modelProvider" field and change it to:


"modelProvider": "gaianet",
        

You can also change the model size by passing a “modelSize” in your json:


"modelSize": "small",
        

This will override the default model you specified in the .env file, and will instead use the SMALL config. If you do not set modelSize, the default model in your .env file will be used. You can select from “small”, “medium”, and “large”.

4. Restart ElizaOS:

After making these changes, restart your ElizaOS instance for the new settings to take effect.

Testing and Tweaking:

Once restarted, try interacting with your character. If all went well, you should experience a conversation powered by the selected GaiaNet model!

Experiment with different models and find what works best for your specific use case. If you encounter an issue, double-check your .env file and the URLs you pasted in, as well as the model size in your character config.

Conclusion

Integrating GaiaNet public nodes into ElizaOS is a game-changer for anyone looking for a free, capable, and open-source AI solution. By following these simple steps, you can unlock the power of large language models without worrying about constant usage fees. So, what are you waiting for? Dive in and start experiencing the world of open AI!

  • Share your experiences with GaiaNet and ElizaOS in the comments!
  • If you found this guide helpful, consider sharing it with others in the ElizaOS community.
  • Explore the GaiaNet documentation for more advanced features and options.