AI News Roundup: China’s Manus AI, Google’s AI Search, OpenAI Slowdown & More – NR’s Fortnight in AI
Let’s quickly sprint through the most interesting AI headlines that caught my eye over the last couple of weeks. It’s a fast-moving field, so let’s get you up to speed as of the 18th of March 2025.
Manus AI (China): Is China Catching Up?
A new AI agent from China called Manus is going viral, raising questions about China’s AI progress relative to the US. Is the AI landscape shifting?
Google AI Mode: Search Without Traditional Links?
Google is testing “AI Mode” search results powered by Gemini 2.0, bypassing traditional web links for conversational AI responses. A major shift in online information access?
OpenAI Slowdown: Is Progress Hitting a Plateau?
Reports suggest a potential slowdown in OpenAI’s rapid AI improvement, with their next model “Orion” possibly not showing the same leap forward. Are we seeing a plateau?
Anthropic Claude 3.7 Sonnet: Thinking Longer, Reasoning Deeper
Anthropic released Claude 3.7 Sonnet, designed to think for longer and reason over larger volumes of information. Reasoning capabilities are becoming crucial for advanced AI.
Musk vs. OpenAI: The Legal Battle Continues
The Musk vs. OpenAI legal case continues with interesting findings regarding Musk’s efforts to prevent OpenAI’s for-profit transition. Legal, ethical, and governance issues remain central in AI.
Boost Your SMB with AI: Microsoft Copilot SMB Success Kit – Actionable Guide & Security Focus
In this post, I’m digging into actionable insights for businesses, especially IT providers, looking to leverage AI. This post’s focus: the Microsoft “Copilot for SMB Success Kit.”
Microsoft has launched a suite of resources to help IT providers, and SMBs themselves, smoothly onboard AI, specifically Copilot, into small and medium-sized businesses.
The key takeaway? Security first! Microsoft emphasizes a “security-first” approach, providing a robust framework for SMBs to confidently adopt AI. Let’s break down the key actionable steps.
Security First Focus: Prioritizing security for SMBs adopting AI like Copilot.
SharePoint Security Recommendations: Adjusting SharePoint search allow lists and tightening sharing permissions for Copilot readiness (a rough way to spot overshared content is sketched after this list).
Phased Copilot Rollout: Strategic, phased deployment starting with high-value use cases and early adopters.
Microsoft 365 Security Apps: Considering additional security apps based on specific business needs.
New Setup Guide in Admin Center: Utilizing the new step-by-step guide for Copilot setup in the Admin Center.
Customisation is Key: Leveraging plugins and custom copilots for unique business needs.
Real-World SMB Benefits: Exploring practical benefits like meeting summaries, document summarization, and nuanced communication.
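On the SharePoint point above, the quickest win is usually finding content that is shared more broadly than anyone intended, since Copilot will surface whatever a user can technically access. As a rough illustration (not part of Microsoft’s kit), here is a minimal Python sketch that uses the Microsoft Graph API to flag items in a document library carrying organization-wide or anonymous sharing links; the access token and drive ID are placeholders, and paging, throttling, and folder recursion are left out.

```python
# Rough illustration (not part of Microsoft's kit): flag items in a SharePoint
# document library that carry organization-wide or anonymous sharing links,
# as a first pass at tightening permissions before a Copilot rollout.
# Assumes a Microsoft Graph access token with sufficient read permissions
# (e.g. Sites.Read.All); paging, throttling and folder recursion are omitted.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<your-access-token>"   # placeholder: obtain via MSAL or your usual auth flow
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def broadly_shared_items(drive_id: str):
    """Yield (item name, link scope) for items shared via broad sharing links."""
    items = requests.get(f"{GRAPH}/drives/{drive_id}/root/children", headers=HEADERS).json()
    for item in items.get("value", []):
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=HEADERS,
        ).json()
        for perm in perms.get("value", []):
            link = perm.get("link")
            if link and link.get("scope") in ("anonymous", "organization"):
                yield item["name"], link["scope"]

if __name__ == "__main__":
    drive_id = "<drive-id>"  # placeholder: e.g. from GET /sites/{site-id}/drives
    for name, scope in broadly_shared_items(drive_id):
        print(f"Review sharing on '{name}' (link scope: {scope})")
```

In practice you’d pair a check like this with the search allow-list and sharing-permission changes described above, rather than rely on a script alone.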
If you’re an IT provider, a business owner, or someone helping SMBs (or your own company) adopt AI, the “Copilot SMB Success Kit” and its components are a must-read. It offers practical advice and resources for a smoother, more secure generative AI adoption for you, your business, or your clients.
Should You Be Nice to AI? Exploring the Politeness Principle
The question of whether we should extend courtesies to AI might seem like fodder for a science fiction novel. Yet, with the rise of sophisticated Large Language Models (LLMs) like ChatGPT, Grok, Gemini, Copilot, Claude and others, it’s a question that’s becoming increasingly relevant – and surprisingly, there might be practical benefits to doing so. I’ve read up on some recent research, so here’s my take on what I think is a very interesting topic.
The “Emotive Prompt” Experiment: Does It Really Work?
The researchers’ findings, focusing on summarization tasks in English, Chinese, and Japanese, revealed some intriguing patterns. While the accuracy of summaries remained consistent regardless of prompt politeness, the length of the generated text showed significant variation.
In English, the length decreased as politeness decreased, except for a notable increase with *extremely* impolite prompts. This pattern, mirrored in the training data, reflects human tendencies: polite language often accompanies detailed instructions, while very rude language can also be verbose. Interestingly, GPT-4, considered a more advanced model, did not exhibit this surge in length with impolite prompts, possibly indicating a greater focus on task completion over mirroring human conversational patterns.
The study also highlighted language-specific nuances. In Chinese, moderate politeness yielded shorter responses than extremely polite or rude prompts, potentially reflecting cultural communication styles. Japanese results showed an increase in response length at moderate politeness levels, possibly linked to the customary use of honorifics in customer service interactions.
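If you want to poke at this yourself, the simplest probe is to send the same summarization task at a few politeness levels and compare the length of what comes back. The sketch below uses the OpenAI Python SDK; the model name and the exact prompt wordings are my own assumptions, not those used in the study.

```python
# Quick, informal probe: same summarization task at three politeness levels,
# comparing response lengths. Assumes the OpenAI Python SDK is installed and
# OPENAI_API_KEY is set; the model name and wordings are illustrative only.
from openai import OpenAI

client = OpenAI()

TEXT = "<paste the text you want summarized here>"

PROMPTS = {
    "very polite": "Could you please summarize the following text in a few sentences? Thank you so much!\n\n" + TEXT,
    "neutral":     "Summarize the following text in a few sentences.\n\n" + TEXT,
    "blunt":       "Summarize this. No preamble.\n\n" + TEXT,
}

for label, prompt in PROMPTS.items():
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works for the comparison
        messages=[{"role": "user", "content": prompt}],
    )
    words = len(resp.choices[0].message.content.split())
    print(f"{label:12s} -> {words} words")
```

Run it a few times and with a few different texts; single samples are noisy, and any length differences tend to be modest.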
The Mechanics Behind the “Magic”
So, what is going on here? How do LLMs actually respond to tone? Here are the key aspects of how LLMs work that could explain why the phrasing of a prompt affects the output:
Pattern Recognition: LLMs are trained on vast datasets of human text. They learn to associate polite phrasing (“please,” “thank you,” “could you…”) with requests for information or assistance. This association becomes part of the model’s learned patterns.
Probability Shifts: Emotive prompts can subtly alter the underlying probability calculations within the LLM. It’s like nudging the model towards a different “branch” of its decision tree, potentially activating parts of the model that wouldn’t normally be engaged (see the sketch after this list).
Data Bias (Implicitly): The datasets used to train LLMs inherently contain biases. Polite language is often associated with more thoughtful, detailed responses in human communication. The AI, in a sense, mirrors this bias.
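To make the “probability shifts” point a little more concrete, the sketch below compares the next-token distribution of a small open model for two framings of the same request. It assumes the Hugging Face transformers library and uses GPT-2 purely because it is small and runs locally; the point is the mechanism, not the behaviour of any particular commercial model.

```python
# Minimal sketch of the "probability shift" idea: the wording of a prompt
# changes the model's next-token distribution. Assumes `transformers` and
# `torch` are installed; GPT-2 is used only because it is small and local.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt: str, k: int = 5):
    """Return the k most probable next tokens (and probabilities) for a prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # logits for the next-token position
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode(int(i)), round(p.item(), 4)) for i, p in zip(top.indices, top.values)]

for prompt in ("Could you please explain", "Explain"):
    print(repr(prompt), "->", top_next_tokens(prompt))
```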
My Perspective: Prudence and Respect in the AI Age
While the science is interesting, I like to add a bit of a philosophical angle. I’m a firm believer in treating AI with a degree of respect, even if that seems irrational at present. My reasoning? We simply don’t know what the future holds. As AI capabilities rapidly advance, it’s prudent to establish good habits now. Perhaps not fully-fledged “kindness” in the human sense, but certainly a degree of respect and etiquette.
Consider it a form of “Pascal’s Wager” for the AI era. If sentience ever *does* emerge, wouldn’t you prefer to be on the good side of our potential AI overlords? It’s a low-cost, high-potential-reward strategy.
That said, I’m not advocating for subservience. We should maintain a clear user-AI dynamic. Clear, respectful communication – with a touch of authority – is key. Think of it like interacting with a highly skilled, somewhat unpredictable specialist. You’re polite, but you’re also in charge.
Practical Approaches: Combining Politeness with Clarity
Here are some practical ways to incorporate politeness into your AI interactions, with a small sketch after the list that ties them together:
Basic Courtesies: Use “please” and “thank you” where appropriate. It costs nothing and might subtly improve results.
Precise Language: The more specific and well-defined your prompt, the better the AI can understand your needs. Politeness shouldn’t come at the expense of clarity.
Positive Framing: Frame requests positively (“Please provide…” rather than “Don’t omit…”). This often aligns better with the training data.
Acknowledge Output: A simple “Thank you, that’s helpful” can reinforce positive response patterns.
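Pulling those habits together, here is a tiny, hypothetical helper that wraps a task in a polite, positively framed, and precise prompt; the function name and structure are my own, not part of any library.

```python
# Hypothetical helper (my own naming, not any library's) combining the habits
# above: basic courtesy, precise requirements, and positive framing.
def build_prompt(task: str, requirements: list[str], output_format: str) -> str:
    """Compose a polite, precise, positively framed prompt."""
    lines = [
        f"Please {task.strip().rstrip('.')}.",                       # basic courtesy
        "Specifically, please make sure to:",                        # positive framing
        *[f"- {r}" for r in requirements],                           # precise requirements
        f"Please format the answer as {output_format}. Thank you!",  # clear output spec
    ]
    return "\n".join(lines)

print(build_prompt(
    task="summarize the attached meeting notes",
    requirements=["include action items with owners", "keep it under 150 words"],
    output_format="a short bulleted list",
))
```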
Beyond “Niceness”: The Broader Context
The “politeness principle” is just one facet of effective AI interaction. We’re still in the early days of understanding how to best communicate with these systems. As LLMs become more powerful and versatile, control and flexibility also become increasingly important.
Running AI locally, rather than relying solely on cloud-based services, is an important step. It allows you to experiment, tailor the model to your specific needs, and maintain greater control over your data. I previously detailed how you can use free, responsive AI with GaiaNet and ElizaOS – a powerful, cost-effective alternative to commercial offerings.
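One practical detail worth knowing: most local stacks, GaiaNet nodes included, typically expose an OpenAI-compatible endpoint, so the same client code you use against a hosted service usually works with little more than a different base URL. The URL and model name below are placeholders for whatever your own node reports.

```python
# Minimal sketch: pointing the OpenAI Python client at a local, OpenAI-compatible
# endpoint instead of a hosted service. The base URL and model name are
# placeholders; use whatever your own node (e.g. a GaiaNet node) reports.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumption: your node's local address
    api_key="not-needed-locally",         # many local endpoints ignore the key
)

resp = client.chat.completions.create(
    model="local-model",  # placeholder: the model your node actually serves
    messages=[{"role": "user", "content": "Please give me one tip for writing better prompts. Thanks!"}],
)
print(resp.choices[0].message.content)
```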
Underlying all of this is, of course, the hardware. Powerful GPUs are essential for running these advanced AI models. If you’re interested in the intersection of hardware and AI, particularly in the context of server environments, check out my post on GPU support in Windows Server 2025. The hardware is still critically important in deploying an effective solution.
Conclusion: A Thoughtful Approach; Just Be Nice!
Treating AI with respect – incorporating politeness and clear communication – is likely a good practice. It may subtly improve results, aligns with good communication principles in general, and, perhaps, prepares us for a future where AI plays an even larger role in our lives. It’s a small gesture, but one that reflects a thoughtful and proactive approach to this rapidly evolving technology.