Sunday, November 16, 2025

Teaching Practical AI: Building a Custom GPT to Fix Documentation Chaos

This week, I helped a colleague tackle a familiar headache. Their documentation was scattered across formats, styles, and half-filled templates. Everyone had created their own version, and every revision meant hours of reformatting. I showed them how to create a custom GPT that could collect and structure the information automatically for every new document.

We built it step by step. First, we defined what “good documentation” meant for their team. Then, we created a prompt framework that guided the GPT to ask for information in a clear, consistent way. Each response fed into a standard structure that produced a clean draft for review. The human stayed in control, approving and refining the content before publishing.
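The intake-and-draft pattern described above can be sketched in a few lines of code. This is a minimal illustration, not the team's actual GPT configuration: the question names and template headings are assumptions standing in for whatever a given team defines as "good documentation."

```python
# Sketch of the intake-and-draft pattern: a fixed question list feeds a
# standard template, and anything unanswered is flagged for human review.
# Question keys and template headings are illustrative assumptions.

QUESTIONS = [
    ("purpose", "What is the purpose of this document?"),
    ("audience", "Who is the intended audience?"),
    ("steps", "What are the key steps or procedures?"),
    ("owner", "Who owns and maintains this process?"),
]

TEMPLATE = """# {title}

## Purpose
{purpose}

## Audience
{audience}

## Procedure
{steps}

## Owner
{owner}
"""

def draft_document(title: str, answers: dict) -> str:
    """Fill the standard structure; missing answers are flagged for review."""
    filled = {key: answers.get(key, "[NEEDS REVIEW]") for key, _ in QUESTIONS}
    return TEMPLATE.format(title=title, **filled)
```

Because unanswered questions surface as explicit "[NEEDS REVIEW]" markers, the human reviewer stays in the loop before anything is published.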

The result was more than time saved. The team gained a repeatable process that ensured accuracy and consistency without removing human oversight. Now every question needed to produce publishable documentation is asked in the same order, with the same level of detail. The GPT became a collaborator, handling structure and input while people focused on communicating the necessary information.

This experience reinforced an important lesson. The most useful AI applications start small, with one clear pain point and a cheap, fast, reliable custom solution built around it. Teaching people how to frame prompts and design their own GPTs gives them independence. It turns AI from a black box into a practical tool they can shape to fit their work.


Thursday, October 30, 2025

Should We Let AI "Dream" Its Way to Discoveries?

 As large language models grow, researchers are noticing dream-like behavior when they run unsupervised. This sparks a bigger conversation: what should society do with this emerging capacity? On one hand, letting AI explore freely could reveal new patterns and even novel insights. On the other, we must ask whether such experiments are safe and responsible. 

Should we encourage AI to turn this creative energy toward medical breakthroughs or complex social challenges? The possibility that AI could stumble upon discoveries on its own makes these questions urgent. Instead of only testing what AI can do, we need to decide what we want it to do and how to guide its growth responsibly. 

Read the full article here


Labels: AI ethics, AI discovery, LLMs, AI safety


Monday, October 27, 2025

How Microsoft Copilot Replaced 40 Hours of Work in Just 5 Minutes

A client of mine had a real challenge. They were managing 95 corporate policies written over different years by various departments. Some policies even contradicted each other. It was a compliance risk, but no one had time to read through thousands of pages. 

They asked: can Microsoft Copilot help? It turned out it could. We took these steps:

  1. Opened Microsoft Copilot Pro's Research Agent.
  2. Connected it to the SharePoint folder where all the policies were stored.
  3. Gave it a single prompt:
    • "Search all policies and identify where any two contradict each other. Return a table with this content:
    • Column A: Policy A, including the SharePoint file link and a summary description
    • Column B: Policy B, including the SharePoint file link and a summary description
    • Column C: The issue or contradiction found
    • Column D: A recommended action based on industry and policy best practices."
In about five minutes, Copilot returned a full table. It highlighted conflicts, provided file links, and suggested next steps based on best practices. It was not perfect, but it was a powerful starting point compared to spending 40 hours on manual review.

The team has now scheduled this prompt to run monthly. Copilot scans new or updated policies and highlights issues automatically. Once they trusted the output, they shared the agent with others across the company to keep policy management clear and fast.
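For teams that want to post-process or re-render the same four-column report outside Copilot, the table structure is easy to model in code. This is an illustrative sketch of the report shape only; the policy names in the usage example are placeholders, not real findings.

```python
# Sketch of the four-column contradiction report the prompt asked for.
# Field names mirror the prompt's columns; the data itself would come
# from Copilot's output, not from this code.
from dataclasses import dataclass

@dataclass
class Contradiction:
    policy_a: str  # Column A: policy name, SharePoint link, summary
    policy_b: str  # Column B: policy name, SharePoint link, summary
    issue: str     # Column C: the issue or contradiction found
    action: str    # Column D: recommended action

def to_markdown(rows: list) -> str:
    """Render findings as a Markdown table for sharing or review."""
    lines = [
        "| Policy A | Policy B | Issue | Recommended Action |",
        "|---|---|---|---|",
    ]
    for r in rows:
        lines.append(f"| {r.policy_a} | {r.policy_b} | {r.issue} | {r.action} |")
    return "\n".join(lines)
```

Keeping the findings in a structured form like this makes it simple to diff one month's scan against the next.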

Lesson: You do not need a huge budget to save serious time. Teach your team how to think with AI, and real value will show up in everyday work. So what has your team done with AI this week?


Friday, October 17, 2025

Your Phone Is About to Get Its Own AI

A new article from Harvard Business Review, The Case for Using Small Language Models, highlights an important shift. Instead of relying on large cloud-based AI systems, we are moving toward smaller, faster models that can eventually run directly on your phone.

These are called small language models, or SLMs. They are simplified versions of the AI behind tools like ChatGPT, but built to run locally. That means they respond quickly, protect your privacy, and do not need a constant internet connection.

This shift is possible because hardware is catching up. Companies like NVIDIA are building powerful AI systems small enough to fit in edge devices like phones and wearables.

The result is a future where your phone could have its own private AI assistant. One that understands your habits, communicates in your style, and keeps your data on your device.

It is not here yet, but it is coming. And it is going to change how we interact with technology.

Read the full article here: The Case for Using Small Language Models


Sunday, October 5, 2025

10 Hard-Learned Lessons in Prompt Engineering

A recent post on Reddit’s r/PromptEngineering shared ten powerful lessons learned from real-world experience. It’s a goldmine for anyone working with AI and building custom GPTs or agents. It is also great fundamental advice.

Key Insights 
  1.  Examples beat instructions. Models learn patterns faster from examples than from detailed written rules.
  2. Treat prompts like code. Use version control, testing, and performance tracking.
  3. Test coverage > prompt polish. Wide evaluation exposes hidden weaknesses better than clever phrasing.
  4. Domain expertise wins. Subject experts write more accurate prompts than general engineers.
  5. Don’t overlook temperature. Small tuning can solve major consistency problems.
  6. Every model is unique. What works for GPT-4o may fail on Claude or Llama.
  7. Keep reasoning simple. Complex “chain-of-thought” prompts don’t always outperform direct instructions.
  8. Use AI to optimize AI. Models can often refine their own prompts effectively.
  9. Strong system prompts matter most. Foundation setup drives most of the output quality.
  10. Plan for prompt injection defense early. Secure prompts before deployment.

These insights reflect a maturing field: prompt engineering is evolving from creative experimentation into disciplined software engineering.
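Lesson 2 ("treat prompts like code") is worth a concrete sketch. The idea is to keep prompts in version control and run regression tests against them, just like any other code. The model call below is a deliberately trivial stub so the example is self-contained; in practice you would call your LLM API with a fixed temperature (lesson 5) so results are reproducible, and the prompt text and test cases here are illustrative assumptions.

```python
# Sketch of a prompt regression suite. The "model" is a stub standing in
# for a real LLM call, so this runs anywhere; the prompt and test cases
# are illustrative, not from the original post.

PROMPT_V2 = "Classify the sentiment of the following text as positive or negative:\n{text}"

def stub_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned classification."""
    return "positive" if "love" in prompt.lower() else "negative"

TEST_CASES = [
    ("I love this product", "positive"),
    ("This was a waste of money", "negative"),
]

def run_prompt_suite() -> list:
    """Run every test case against the current prompt version."""
    results = []
    for text, expected in TEST_CASES:
        output = stub_model(PROMPT_V2.format(text=text))
        results.append((text, output == expected))
    return results
```

When a prompt edit lands, rerunning the suite exposes regressions the same way a failing unit test would, which is exactly what "test coverage > prompt polish" (lesson 3) is getting at.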

Credit: Original insights created and posted by a community member on Reddit’s r/PromptEngineering

Labels: Prompt Engineering, AI Development, LLM Optimization, Machine Learning, Best Practices


Monday, September 29, 2025

Your employees are absolutely using AI and 25% or more are using ChatGPT

It is time to make AI standard issue for anyone who works in front of a computer. It is common to hear that 95 percent of large-scale AI projects fail according to MIT’s 2025 State of AI in Business report. I say SO WHAT. We are focused on the wrong thing and we are not talking about what moves the needle. 

While enterprise AI pilots may struggle, there is a wave of progress happening at the individual level that we aren't really talking about. According to recent OpenAI research (available below), 28 percent of employees are now using ChatGPT for work even if their company does not officially approve or pay for it. Employees are delivering results with or without company support. Is there a security risk? Absolutely. What is your company doing about it? Simply putting a note against AI usage in your policy handbook is not enough! 

When you look across all AI tools, about half of all employees are leveraging AI in some way. That means in any typical workplace, one out of every two people is using AI regularly, even if the company does not officially sanction any tools.

The value is clear. A Federal Reserve Bank of St. Louis study found that over half of frequent AI users save at least three to four hours of work each week. Harvard and MIT found that knowledge workers using generative AI produce work that is up to forty percent higher in quality compared to those who do not use AI. What is the real difference? The 95 percent failure rate applies to bigger enterprise projects using AI. Meanwhile, the real opportunity is to start small. Everyday tasks can lead the way to bigger and much more valuable AI automation. Hit me up! What do you see happening at your company? 

See more details of the study here: OpenAI Study 

#ArtificialIntelligence, #AI, #GenerativeAI, #AIForBusiness, #FutureOfWork, #Productivity, #WorkplaceInnovation, #Innovation, #Technology, #Leadership, #ChangeManagement, #DigitalTransformation, #AIAutomation, #ResponsibleAI, #AIAdoption, #ChatGPT


Friday, September 26, 2025

AI’s 95% Failure Rate Looks Just Like Cloud’s Early Struggles

MIT says 95% of AI pilots flop. Sounds scary, right? But this is not the end of the world. In fact, it looks a lot like the early days of cloud computing.

Fifteen years ago, most cloud projects went nowhere. Security fears, messy migrations, and grumpy IT teams kept things stuck in pilot mode. Today, AI pilots are tripping on the same issues. Poor workflow fit, bad data, and unrealistic hopes keep projects from scaling.

Here is the good news. Cloud eventually became the backbone of business. AI will get there too, and probably faster since it is standing on cloud’s shoulders. So yes, most pilots fail. Think of them as practice rounds. The companies that stick with it and learn from the flops will be the ones cashing in when AI finally clicks.
