Monday, October 27, 2025

How Microsoft Copilot Replaced 40 Hours of Work in Just 5 Minutes

A client of mine had a real challenge. They were managing 95 corporate policies written over different years by various departments. Some policies even contradicted each other. It was a compliance risk, but no one had time to read through thousands of pages. 

They asked: can Microsoft Copilot help? It turns out it could. Here are the steps we took.

  1. Opened Microsoft Copilot Pro's Research Agent.
  2. Connected it to the SharePoint folder where all the policies were stored.
  3. Gave it a single prompt:
    • "Search all policies and identify where any two contradict each other. Return a table with this content:
    • Column A: Policy A, including the SharePoint file link and a summary description
    • Column B: Policy B, including the SharePoint file link and a summary description
    • Column C: The issue or contradiction found
    • Column D: A recommended action based on industry and policy best practices."
In about five minutes, Copilot returned a full table. It highlighted conflicts, provided file links, and suggested next steps based on best practices. It was not perfect, but it was a powerful starting point compared to spending 40 hours on manual review.

The team now has this prompt scheduled to run monthly, so Copilot scans new or updated policies and highlights issues automatically. Once they trusted the output, they shared the agent with others across the company to keep policy management clear and fast. For anyone curious what is happening under the hood, a rough sketch of the pattern is below.
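Here is a minimal sketch of the contradiction-scan pattern in Python. To be clear, this is not Copilot's actual implementation: the file names, the policy text, and the ask_llm helper are all illustrative stand-ins, and the stub would be replaced with a real model call and a real SharePoint connection.

    # A rough sketch of the policy-contradiction pattern, not Copilot's
    # actual implementation. ask_llm is a hypothetical stand-in for a
    # real model call; the policies would really come from SharePoint.
    from itertools import combinations

    policies = {
        "travel_policy.docx": "Employees may book business class for flights over 6 hours.",
        "expense_policy.docx": "All flights must be booked in economy class.",
    }

    def ask_llm(prompt: str) -> str:
        # Stubbed so the sketch runs offline; swap in a real LLM client here.
        return "CONTRADICTION: one policy allows business class, the other requires economy."

    # Compare every pair of policies and collect flagged conflicts.
    findings = []
    for (name_a, text_a), (name_b, text_b) in combinations(policies.items(), 2):
        verdict = ask_llm(
            "Do these two policies contradict each other?\n"
            f"Policy A ({name_a}): {text_a}\n"
            f"Policy B ({name_b}): {text_b}\n"
            "Answer CONTRADICTION: <issue> or NONE."
        )
        if verdict.startswith("CONTRADICTION"):
            findings.append((name_a, name_b, verdict))

    # Each row mirrors Copilot's table: the two policies plus the issue found.
    for row in findings:
        print(row)

Pairwise comparison like this gets expensive fast as the policy count grows, which is part of why handing the search to an agent that already has access to the whole document store is so attractive.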

 Lesson: You do not need a huge budget to save serious time. Teach your team how to think with AI and real value will show up in everyday work. So what has your team done with AI this week?

Friday, October 17, 2025

Your Phone Is About to Get Its Own AI

A new article from Harvard Business Review, The Case for Using Small Language Models, highlights an important shift. Instead of relying on large cloud-based AI systems, we are moving toward smaller, faster models that can eventually run directly on your phone.

These are called small language models, or SLMs. They are simplified versions of the AI behind tools like ChatGPT, but built to run locally. That means they respond quickly, protect your privacy, and do not need a constant internet connection.
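To make "runs locally" concrete, here is a minimal sketch using the Hugging Face transformers library. It assumes transformers (with a backend like PyTorch) is installed, and the model name is just an illustrative pick of a small open model; the point is that after the initial download, generation happens entirely on the device.

    # A minimal sketch of running a small language model locally.
    # Assumes the Hugging Face transformers library; the model name is
    # an illustrative choice of a small open model.
    from transformers import pipeline

    # After the first download, this runs entirely on-device: no cloud
    # round trip, and the prompt never leaves the machine.
    generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

    prompt = "Rewrite this note in a friendly tone: Meeting moved to 3 pm."
    result = generator(prompt, max_new_tokens=60)
    print(result[0]["generated_text"])

Today this runs comfortably on a laptop; the HBR article's argument is that phone hardware is closing the gap quickly.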

This shift is possible because hardware is catching up. Companies like NVIDIA are building AI hardware small and efficient enough to run in edge devices like phones and wearables.

The result is a future where your phone could have its own private AI assistant. One that understands your habits, communicates in your style, and keeps your data on your device.

It is not here yet, but it is coming. And it is going to change how we interact with technology.

Read the full article here: The Case for Using Small Language Models

Sunday, October 5, 2025

10 Hard-Learned Lessons in Prompt Engineering

A recent post on Reddit’s r/PromptEngineering shared ten powerful lessons learned from real-world experience. It’s a goldmine for anyone working with AI and building custom GPTs or agents. It is also great fundamental advice.

Key Insights 
  1.  Examples beat instructions. Models learn patterns faster from examples than from detailed written rules.
  2. Treat prompts like code. Use version control, testing, and performance tracking.
  3. Test coverage > prompt polish. Wide evaluation exposes hidden weaknesses better than clever phrasing.
  4. Domain expertise wins. Subject experts write more accurate prompts than general engineers.
  5. Don’t overlook temperature. Small tuning can solve major consistency problems.
  6. Every model is unique. What works for GPT-4o may fail on Claude or Llama.
  7. Keep reasoning simple. Complex “chain-of-thought” prompts don’t always outperform direct instructions.
  8. Use AI to optimize AI. Models can often refine their own prompts effectively.
  9. Strong system prompts matter most. Foundation setup drives most of the output quality.
  10. Plan for prompt injection defense early. Secure prompts before deployment.

These insights reflect a maturing field: prompt engineering is evolving from creative experimentation into disciplined software engineering. Lessons 2 and 3 in particular translate directly into code, as the sketch below shows.
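Here is a minimal sketch of a prompt regression test, assuming nothing beyond the Python standard library. The prompt, the test cases, and the call_model stub are all hypothetical; in practice the stub would be replaced with a call to your real LLM client.

    # A minimal sketch of treating prompts like code: version the prompt,
    # pin test cases, and fail loudly on regressions. call_model is a
    # hypothetical stand-in for a real LLM client.
    PROMPT_VERSION = "v3"
    SYSTEM_PROMPT = (
        "You are a support triage bot. Reply with exactly one word: "
        "BUG, BILLING, or OTHER."
    )

    TEST_CASES = [
        ("The app crashes when I upload a file", "BUG"),
        ("I was charged twice this month", "BILLING"),
        ("What time is the webinar?", "OTHER"),
    ]

    def call_model(system: str, user: str) -> str:
        # Stubbed so the sketch runs offline; replace with a real API call.
        if "crash" in user:
            return "BUG"
        if "charged" in user:
            return "BILLING"
        return "OTHER"

    def run_prompt_tests() -> None:
        failures = []
        for user, expected in TEST_CASES:
            got = call_model(SYSTEM_PROMPT, user).strip().upper()
            if got != expected:
                failures.append((user, expected, got))
        assert not failures, f"Prompt {PROMPT_VERSION} regressions: {failures}"

    run_prompt_tests()
    print(f"Prompt {PROMPT_VERSION}: all {len(TEST_CASES)} cases passed")

Checking the prompt and its test cases into version control means every prompt change goes through the same review and CI gates as any other code change, which is exactly what lesson 2 is arguing for.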

Credit: Original insights created and posted by a community member on Reddit’s r/PromptEngineering

Labels: Prompt Engineering, AI Development, LLM Optimization, Machine Learning, Best Practices

Monday, September 29, 2025

Your Employees Are Absolutely Using AI, and More Than 25% Are Using ChatGPT

It is time to make AI standard issue for anyone who works in front of a computer. It is common to hear that 95 percent of large-scale AI projects fail, according to MIT’s 2025 State of AI in Business report. I say SO WHAT. We are focused on the wrong thing, and we are not talking about what moves the needle.

While enterprise AI pilots may struggle, there is a wave of progress happening at the individual level that we aren't really talking about. According to recent OpenAI research (available below), 28 percent of employees are now using ChatGPT for work even if their company does not officially approve or pay for it. Employees are delivering results with or without company support. Is there a security risk? Absolutely. What is your company doing about it? Simply putting a note against AI usage in your policy handbook is not enough! 

When you look across all AI tools, about half of all employees are leveraging AI in some way. That means in any typical workplace, one out of every two people is using AI regularly, even if the company does not officially sanction any tools.

The value is clear. A Federal Reserve Bank of St. Louis study found that over half of frequent AI users save at least three to four hours of work each week. Harvard and MIT found that knowledge workers using generative AI produce work that is up to forty percent higher in quality compared to those who do not use AI. What is the real difference? The 95 percent failure rate applies to bigger enterprise projects using AI. Meanwhile, the real opportunity is to start small. Everyday tasks can lead the way to bigger and much more valuable AI automation. Hit me up! What do you see happening at your company? 

See more details of the study here: OpenAI Study 

#ArtificialIntelligence, #AI, #GenerativeAI, #AIForBusiness, #FutureOfWork, #Productivity, #WorkplaceInnovation, #Innovation, #Technology, #Leadership, #ChangeManagement, #DigitalTransformation, #AIAutomation, #ResponsibleAI, #AIAdoption, #ChatGPT

Friday, September 26, 2025

AI’s 95% Failure Rate Looks Just Like Cloud’s Early Struggles

MIT says 95% of AI pilots flop. Sounds scary, right? But this is not the end of the world. In fact, it looks a lot like the early days of cloud computing.

Fifteen years ago, most cloud projects went nowhere. Security fears, messy migrations, and grumpy IT teams kept things stuck in pilot mode. Today, AI pilots are tripping on the same issues. Poor workflow fit, bad data, and unrealistic hopes keep projects from scaling.

Here is the good news. Cloud eventually became the backbone of business. AI will get there too, and probably faster since it is standing on cloud’s shoulders.

So yes, most pilots fail. Think of them as practice rounds. The companies that stick with it and learn from the flops will be the ones cashing in when AI finally clicks.

Thursday, September 18, 2025

We Are the QA for AI

In my last post I wrote about Anthropic’s postmortem and how AI bugs do not look like the bugs we are used to. They hide in the answers. They are not obvious crashes. That leads to this.

In the past, if you wanted to complain or suggest a feature, you went to a support page. Maybe you submitted a ticket. Maybe you took a screenshot of the bug and attached it. Then you waited. Maybe someone read it. Maybe they did not.

AI is different. The thumbs up and thumbs down buttons are right there in front of you. No forms. No support tickets. Just a quick signal. And that signal matters. It is the easiest way to shape how AI learns and how reliable it becomes.

This is still new. Most people do not think of themselves as part of QA. But with AI we all are. Every tap is a way to make the system better. I am an advocate for this. If you want better AI at work and in life, smash those buttons.

Labels: AI Feedback, AI Reliability, Human Feedback, AI QA, Responsible AI

Bugs in AI Do Not Look Like the Bugs We Are Used To

When software broke in the past, it was often obvious. The screen froze. The app crashed. You knew you had a bug.

AI is different. When it breaks, the system keeps going. It still gives an answer. But something is off. A word out of place. A weaker response. A sentence that does not sound right.

Anthropic’s postmortem showed this. Claude slipped Thai characters into English text. That is not a crash. That is a bug hiding inside an answer.

This matters. The risk is not that AI stops working. The risk is that it keeps working while giving you something that is just a little wrong. And at work, a little wrong can be a big problem.

This is why the like and dislike buttons matter. They look small. But they are signals. They are how the system learns what can be trusted and what cannot. Smashing those buttons is part of keeping the AI useful. The old bugs broke software. The new bugs break trust.
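One practical upside: some of these quiet bugs can be caught with cheap automated checks before a human ever sees the answer. Below is an illustrative sketch in Python, not Anthropic’s method, that flags non-Latin letters in output that is supposed to be English.

    # An illustrative check for "quiet" bugs: unexpected scripts slipping
    # into English output. Not Anthropic's method, just the general idea.
    import unicodedata

    def unexpected_letters(text: str) -> set[str]:
        # Collect the Unicode names of any non-Latin letters in the text.
        return {
            unicodedata.name(ch, "UNKNOWN")
            for ch in text
            if ch.isalpha() and "LATIN" not in unicodedata.name(ch, "")
        }

    reply = "Your meeting is at 3 pm tomorroท."  # note the stray Thai character
    issues = unexpected_letters(reply)
    if issues:
        print("Flag for review:", issues)

A check like this would have caught the Thai-characters bug automatically, long before a thumbs down ever came in.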

Labels: Anthropic, Claude, AI Reliability, AI Bugs, AI Trust