Unlock AI Power for Your SMB: Microsoft Copilot Success Kit – Security & Actionable Steps

In this post, I’m digging into actionable insights for businesses, especially IT providers, looking to leverage AI. This post’s focus: the Microsoft “Copilot for SMB Success Kit.”

Microsoft has launched a suite of resources to help IT providers and SMBs smoothly onboard AI – specifically Copilot – into small and medium-sized businesses.

The key takeaway? Security first! Microsoft emphasizes a “security-first” approach, providing a robust framework for SMBs to confidently adopt AI. Let’s break down the key actionable steps.

  • Security First Focus: Prioritizing security for SMBs adopting AI like Copilot.
  • SharePoint Security Recommendations: Adjusting SharePoint search allow lists and tightening sharing permissions for Copilot readiness.
  • Phased Copilot Rollout: Strategic, phased deployment starting with high-value use cases and early adopters.
  • Microsoft 365 Security Apps: Considering additional security apps based on specific business needs.
  • New Setup Guide in Admin Center: Utilizing the new step-by-step guide for Copilot setup in the Admin Center.
  • Customisation is Key: Leveraging plugins and custom copilots for unique business needs.
  • Real-World SMB Benefits: Exploring practical benefits like meeting summaries, document summarization, and nuanced communication.

If you’re an IT provider, a business owner, or helping SMBs or your own company with AI, the “Copilot SMB Success Kit” and its components are a must-read. It offers practical advice and resources for a smoother and more secure gen AI adoption for you, your business or your clients.

Link: Copilot SMB Success Kit
I particularly like the Checklist for Success spreadsheet, which I’ve included a screenshot of below.

Good Luck!

Be Nice to AI: Does Politeness Improve AI Performance?

The question of whether we should extend courtesies to AI might seem like fodder for a science fiction novel. Yet, with the rise of sophisticated Large Language Models (LLMs) like ChatGPT, Grok, Gemini, Copilot, Claude and others, it’s a question that’s becoming increasingly relevant – and surprisingly, there might be practical benefits to doing so. I’ve read up on some recent research, so here is my take on what I think is a very interesting topic.

The “Emotive Prompt” Experiment: Does It Really Work?

Recent research from Waseda University, titled “Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance,” delved into this very question.

Their findings, focusing on summarization tasks in English, Chinese, and Japanese, revealed some intriguing patterns. While the accuracy of summaries remained consistent regardless of prompt politeness, the length of the generated text showed significant variation.

In English, the length decreased as politeness decreased, except for a notable increase with *extremely* impolite prompts. This pattern, mirrored in the training data, reflects human tendencies: polite language often accompanies detailed instructions, while very rude language can also be verbose. Interestingly, GPT-4, considered a more advanced model, did not exhibit this surge in length with impolite prompts, possibly indicating a greater focus on task completion over mirroring human conversational patterns.

The study also highlighted language-specific nuances. In Chinese, moderate politeness yielded shorter responses than extremely polite or rude prompts, potentially reflecting cultural communication styles. Japanese results showed an increase in response length at moderate politeness levels, possibly linked to the customary use of honorifics in customer service interactions.

*Robot and human hand shaking, representing politeness and respect towards AI*


The Mechanics Behind the “Magic”

So, what is going on here? How do LLMs actually respond? Here are the key aspects of how LLMs work that could explain why the tone of a prompt can affect the output:

  • Pattern Recognition: LLMs are trained on vast datasets of human text. They learn to associate polite phrasing (“please,” “thank you,” “could you…”) with requests for information or assistance. This association becomes part of the model’s learned patterns.
  • Probability Shifts: Emotive prompts can subtly alter the underlying probability calculations within the LLM. It’s like nudging the model towards a different “branch” of its decision tree, potentially activating parts of the model that wouldn’t normally be engaged.
  • Data Bias (Implicitly): The datasets used to train LLMs inherently contain biases. Polite language is often associated with more thoughtful, detailed responses in human communication. The AI, in a sense, mirrors this bias.

My Perspective: Prudence and Respect in the AI Age

While the science is interesting, I like to add a bit of a philosophical angle. I’m a firm believer in treating AI with a degree of respect, even if it seems irrational at present. My reasoning? We simply don’t know what the future holds. As AI capabilities rapidly advance, it’s prudent to establish good habits now. Perhaps not fully fledged “kindness” in the human sense, but certainly a degree of respect and etiquette.

Consider it a form of “Pascal’s Wager” for the AI era. If sentience ever *does* emerge, wouldn’t you prefer to be on the good side of our potential AI overlords? It’s a low-cost, high-potential-reward strategy.

That said, I’m not advocating for subservience. We should maintain a clear user-AI dynamic. Clear, respectful communication – with a touch of authority – is key. Think of it like interacting with a highly skilled, somewhat unpredictable specialist. You’re polite, but you’re also in charge.

Practical Approaches: Combining Politeness with Clarity

Here are some practical ways to incorporate politeness into your AI interactions:

  • Basic Courtesies: Use “please” and “thank you” where appropriate. It costs nothing and might subtly improve results.
  • Precise Language: The more specific and well-defined your prompt, the better the AI can understand your needs. Politeness shouldn’t come at the expense of clarity.
  • Positive Framing: Frame requests positively (“Please provide…” rather than “Don’t omit…”). This often aligns better with the training data.
  • Acknowledge Output: A simple “Thank you, that’s helpful” can reinforce positive response patterns.
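As a concrete (and purely illustrative) sketch, here is a small Python helper – the function name and wording are my own, not from the research – that applies the basic-courtesy and positive-framing points above to a prompt string:

```python
def frame_prompt(task: str, polite: bool = True) -> str:
    """Wrap a task in courteous, positively framed language.

    Illustrative only: the markers ("please", a closing thank-you)
    are the ones discussed above, not anything model-specific.
    """
    if polite:
        # Positive framing: ask directly for what you want.
        return f"Please {task[0].lower()}{task[1:]} Thank you."
    return task

polite = frame_prompt("Summarise this report in three bullet points.")
terse = frame_prompt("Summarise this report in three bullet points.", polite=False)
print(polite)  # Please summarise this report in three bullet points. Thank you.
```

Whether this measurably helps will vary by model and task, as the Waseda study suggests, but it costs almost nothing to apply.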

Beyond “Niceness”: The Broader Context

The “politeness principle” is just one facet of effective AI interaction. We’re still in the early days of understanding how to best communicate with these systems. As LLMs become more powerful and versatile, control and flexibility also become increasingly important.

Running AI locally, rather than relying solely on cloud-based services, is an important step. It allows you to experiment, tailor the model to your specific needs, and maintain greater control over your data. I previously detailed how you can use free, responsive AI with GaiaNet and ElizaOS – a powerful, cost-effective alternative to commercial offerings.

Underlying all of this is, of course, the hardware. Powerful GPUs are essential for running these advanced AI models. If you’re interested in the intersection of hardware and AI, particularly in the context of server environments, check out my post on GPU support in Windows Server 2025. The hardware is still critically important in deploying an effective solution.

Conclusion: A Thoughtful Approach; Just Be Nice!

Treating AI with respect – incorporating politeness and clear communication – is likely a good practice. It may subtly improve results, aligns with good communication principles in general, and, perhaps, prepares us for a future where AI plays an even larger role in our lives. It’s a small gesture, but one that reflects a thoughtful and proactive approach to this rapidly evolving technology.

Ditch the OpenAI Bill: How to Use Free, Responsive AI with GaiaNet and ElizaOS [Solved]

Tired of escalating OpenAI bills but still crave a powerful AI companion? ElizaOS, the open-source AI platform, has got you covered. By integrating with GaiaNet’s public nodes, you gain access to a variety of large language models (LLMs) – for free! These aren’t some underpowered toys, either. Many are highly responsive and capable, offering a compelling alternative to paid services. Let’s dive into how you can easily set this up.

What is GaiaNet?

GaiaNet is a decentralized network of compute resources specifically designed for running AI models. Think of it like a community-driven cloud for LLMs. They make many models available to the public for free via their public nodes. This allows anyone to access cutting-edge AI without the usual hefty price tags. The responsiveness of these models might surprise you, providing a smooth and engaging conversational experience.

Why Choose GaiaNet with ElizaOS?

  • Cost-Effective: The most obvious advantage is the cost – zero! Say goodbye to usage-based fees.
  • Variety of Models: GaiaNet hosts a selection of different LLMs, each with unique strengths.
  • Privacy Focus: As a decentralized network, GaiaNet can offer increased privacy compared to centralized services.
  • Open and Accessible: You can contribute to the network and even run your own node eventually.

How to integrate GaiaNet with ElizaOS Agent: Step-by-Step

Ready to give it a go? Here’s how to configure ElizaOS to use GaiaNet public nodes:

1. Understanding the Node URLs:

Before diving into the settings, let’s get familiar with what GaiaNet offers. As of this writing, the official docs list several public nodes for different model sizes, labeled SMALL, MEDIUM, and LARGE, using models such as llama3b, llama8b and qwen72b. These are just the defaults; you can use other models from the docs. Each of these nodes has an associated URL. For example:

Model Size      Model Name    Default URL
SMALL Model     llama3b       https://llama3b.gaia.domains/v1
MEDIUM Model    llama8b       https://llama8b.gaia.domains/v1
LARGE Model     qwen72b       https://qwen72b.gaia.domains/v1

You can find the latest URLs on the official GaiaNet documentation.
https://docs.gaianet.ai/user-guide/nodes/
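These node URLs are designed to be called like an OpenAI-style /v1 API (which is how ElizaOS talks to them). As a quick, standard-library-only sanity check, the sketch below builds such a request – treat the exact endpoint shape (/chat/completions) as an assumption to verify against the GaiaNet docs:

```python
import json

# Node details from the table above (verify against the GaiaNet docs).
NODE_URL = "https://llama3b.gaia.domains/v1"
MODEL = "llama3b"

def chat_request(prompt: str) -> tuple[str, bytes]:
    """Build an OpenAI-style chat completion request for a GaiaNet node.

    Returns (endpoint, body). Assumes the node speaks the OpenAI
    /v1/chat/completions protocol - an assumption, not confirmed here.
    """
    endpoint = f"{NODE_URL}/chat/completions"
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return endpoint, body

endpoint, body = chat_request("Say hello in one sentence.")
print(endpoint)
# To actually send it:
#   import urllib.request
#   req = urllib.request.Request(endpoint, data=body,
#                                headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
```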

2. Modifying Your .env File:

The .env file is where ElizaOS stores its configuration variables. Locate this file in your ElizaOS directory (usually in the root folder), then add or modify the following lines to point to the desired GaiaNet public nodes:


# Gaianet Configuration
GAIANET_MODEL=qwen72b
GAIANET_SERVER_URL=https://qwen72b.gaia.domains/v1

SMALL_GAIANET_MODEL=llama3b          # Default: llama3b
SMALL_GAIANET_SERVER_URL=https://llama3b.gaia.domains/v1    # Default: https://llama3b.gaia.domains/v1
MEDIUM_GAIANET_MODEL=llama     # Default: llama
MEDIUM_GAIANET_SERVER_URL=https://llama8b.gaia.domains/v1      # Default: https://llama8b.gaia.domains/v1
LARGE_GAIANET_MODEL=qwen72b           # Default: qwen72b
LARGE_GAIANET_SERVER_URL=https://qwen72b.gaia.domains/v1    # Default: https://qwen72b.gaia.domains/v1

GAIANET_EMBEDDING_MODEL=nomic-embed
USE_GAIANET_EMBEDDING=
    

Important Notes:

  • GAIANET_MODEL and GAIANET_SERVER_URL: These settings directly control the default model being used by your ElizaOS instance. For testing, you may want to use smaller models to see that everything is hooked up properly, then change to the larger models later.
  • SMALL_GAIANET_MODEL, MEDIUM_GAIANET_MODEL, LARGE_GAIANET_MODEL and SMALL_GAIANET_SERVER_URL, MEDIUM_GAIANET_SERVER_URL, LARGE_GAIANET_SERVER_URL: These are optional, but they let you easily switch between model sizes from your character.json while still using the gaianet provider.
  • GAIANET_EMBEDDING_MODEL: This is the embedding model that will be used.
  • USE_GAIANET_EMBEDDING: Leaving this empty will use the local embedding model. Setting this to TRUE will use the gaianet embedding model.
  • Use the /v1 endpoint, as shown in the example, for the LLM model URLs.
  • Be mindful of rate limits: These public nodes are a shared resource. If you encounter errors, try waiting before re-trying.
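If you want to sanity-check the values before restarting, a minimal .env parser is enough to confirm what will be read. This is an illustrative sketch only – ElizaOS uses its own dotenv loading, which handles more edge cases:

```python
def parse_env(text: str) -> dict[str, str]:
    """Minimal .env parser: KEY=VALUE lines, '#' comments stripped.

    Illustrative only - ElizaOS's real dotenv loader does more.
    """
    env = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

sample = """
GAIANET_MODEL=qwen72b
GAIANET_SERVER_URL=https://qwen72b.gaia.domains/v1
USE_GAIANET_EMBEDDING=
"""
env = parse_env(sample)
print(env["GAIANET_MODEL"])          # qwen72b
print(env["USE_GAIANET_EMBEDDING"])  # empty -> local embeddings
```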

3. Updating your character.json:

Now, you need to tell your ElizaOS character to use the GaiaNet model. Open your character’s JSON configuration file. Find the "modelProvider" field and change it to:


"modelProvider": "gaianet",
        

You can also change the model size by setting “modelSize” in your JSON:


"modelSize": "small",
        

This will override the default model you specified in the .env file, and will instead use the SMALL config. If you do not set modelSize, the default model in your .env file will be used. You can select from “small”, “medium”, and “large”.
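Putting steps 2 and 3 together, the relevant slice of a character.json would look something like this (other fields omitted; the “name” value is just a placeholder):

```json
{
  "name": "MyAgent",
  "modelProvider": "gaianet",
  "modelSize": "small"
}
```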

4. Restart ElizaOS:

After making these changes, restart your ElizaOS instance for the new settings to take effect.

Testing and Tweaking:

Once restarted, try interacting with your character. If all went well, you should experience a conversation powered by the selected GaiaNet model!

Experiment with different models and find what works best for your specific use case. If you encounter an issue, double-check your .env file and the URLs you have pasted in, as well as the model size in your character config.

Conclusion

Integrating GaiaNet public nodes into ElizaOS is a game-changer for anyone looking for a free, capable, and open-source AI solution. By following these simple steps, you can unlock the power of large language models without worrying about constant usage fees. So, what are you waiting for? Dive in and start experiencing the world of open AI!

  • Share your experiences with GaiaNet and ElizaOS in the comments!
  • If you found this guide helpful, consider sharing it with others in the ElizaOS community.
  • Explore the GaiaNet documentation for more advanced features and options.

The AI Generalist

A framework for thriving in the age of artificial intelligence


For decades, the advice was simple: specialise. Find a niche, go deep, and become the person everyone calls. In a world where knowledge expanded slowly and tools evolved at a human pace, that made sense. Depth was rare. Expertise took years. The specialist was rewarded.

We no longer live in that world. And honestly? That took me a while to fully accept.


The Observation

Artificial intelligence now learns faster, retrieves more, and adapts quicker than any individual can. In most domains, for most people, AI will outperform human specialists in raw knowledge, speed, and pattern recognition. This is not speculation. It is already observable. I’ve seen it in my own work.

The question is not whether this is true. The question is what it means.


The Problem with Specialisation

If AI can match or exceed most specialists in their own field, then the value of narrow expertise changes. Consider this reasoning:

Premise one. AI systems now perform at expert level across a growing range of domains.

Premise two. These systems improve continuously. Today’s capability floor is tomorrow’s baseline.

Premise three. A career built on static knowledge in a single domain is therefore fragile. Not because the knowledge becomes wrong, but because the advantage it once conferred disappears.

Conclusion. For most people, the pursuit of narrow mastery alone is no longer a reliable strategy. The value of human contribution must shift.

This is not a rejection of specialists. The top tier will always matter. But for the broad majority (myself included), a different approach is now more rational.


What the AI Generalist Is

The AI Generalist is not a jack of all trades. They are not shallow. They are strategic.

Where the specialist asks how can I know more about this one thing, the generalist asks how can I connect, combine, and orchestrate across many things. They understand that AI has already claimed the ground of raw recall and domain computation. The ground that remains for humans is synthesis, judgment, and integration.

The AI Generalist learns the foundations, the principles behind the tools, not just the tools themselves. They grow a capacity to evaluate, adopt, and discard technology as it evolves. They orchestrate AI capabilities rather than compete with them.

This is not anti-specialist. It is meta-specialist. It is the strategic layer above.

Put simply: Stop trying to out-memorise a machine. Learn to conduct the orchestra.


The Five Foundations

1. Principles First

Tools change. The principles behind them change more slowly. Understanding why a language model hallucinates, why a retrieval system fails, why an agent loops indefinitely – these foundations transfer across tools and time. Learn the mechanics. The interfaces will change; the foundations will not.

I have found that the people who struggle most with new AI tools are those who learned the buttons but never learned the why. Do not be that person.

2. Deliberate Breadth

Stay informed across domains. Not to become an expert in each, but to know enough to connect them. A generalist who understands data pipelines, user interfaces, business logic, and security basics can orchestrate solutions that a pure specialist in any one area cannot see. The value is in the connections.

This isn’t about being a dabbler. It is about developing vision.

3. Rapid Learning Cycles

Learn enough to evaluate. Learn enough to apply. Learn enough to know when to go deeper. Do not over-invest in systems that may be obsolete in eighteen months.

Develop the skill of fast, focused learning, the ability to become competent quickly and move on when the landscape shifts. This is not a nice-to-have. It is survival.

4. Orchestration Mindset

The future is not one single model. It is ecosystems of models, tools, and agents working together. The AI Generalist learns to build these systems, to understand their interfaces, and to design workflows that leverage each component’s strengths.

Orchestration is the skill that compounds. Time to master it.

5. Teaching as Mastery

The best way to understand something is to explain it. Share what you learn. Help others move from basic prompts to genuine capability. In teaching, you find the gaps in your own knowledge. You also build reputation and trust in a landscape where credibility matters.

If you can’t explain it simply, you do not understand it well enough.

Not my quote, but it’s pretty solid advice, not that I’m great at it!


Why This Matters

There’s a temptation to overhype this moment. To claim that we stand at the edge of a revolution, that everything changes, that the future belongs to the bold.

In a way, we are there, but that’s not what I’m saying.

What I’m saying is simpler. The tools have changed. The rational response is to change with them. Those who change their approach will find they expand their opportunities. Those who do not will find fewer. This isn’t revolutionary. It’s just standard cause and effect.

The AI Generalist mindset is not a guarantee of success. It’s just a better bet than the alternative. In an uncertain world, breadth and adaptability are more robust than depth and rigidity. That’s it.


Closing Thought

C.S. Lewis once wrote:

“If I find in myself desires which no experience in this world can satisfy, the most probable explanation is that I was made for another world.”

This isn’t about the author; it’s about the logic – though who could hate on the tales of Narnia, or his other works? (There’s always someone, I suppose.) Anyway, the logic is simple, and it’s one I come back to time and time again.
Observe what is. Reason about what it implies. Act accordingly.

If I find that AI now outperforms specialists in most domains, and that the pace of change makes static, deep, expertise fragile, then the most rational explanation is that the value I can offer lies somewhere else. Not by competing with machines on their ground, but in doing what they cannot: connecting, judging, teaching, and leading.

The AI Generalist knows that, 99% of the time, they will not be able to compete with a machine. They learn to work with it instead. They become the one who sees the whole picture and can put the pieces together – and lead the orchestra.


Originally published as a thought piece under my dev account RealistSec on GitHub
