Every prompt in this issue points at a specific M365 data source. That's the only reason they work.

Every two weeks: 3 verified AI stories that affect your workflow, tested prompts for one role (this issue: PMs), and one mental model for working with AI. Different role each time. Same bar: only what survived real use.

WHAT HAPPENED

Product launches and a geopolitical shockwave.

Microsoft shipped a PM agent and made Copilot edit your documents by default. Meanwhile, the US government blacklisted an AI company for the first time. All three stories change how you'll use AI at work, just at very different timescales.

Anthropic refused Pentagon terms. Got blacklisted. OpenAI signed instead. [Source] — Anthropic wouldn't let its models power autonomous weapons or surveillance, so the Pentagon blacklisted it, the first US company ever to make the list. OpenAI signed within days; Altman later called the deal "rushed" and "sloppy." Claude surged to #1 on the App Store. If your org evaluates AI vendors, the risk profile just shifted.

Copilot gets a Project Manager agent [Source] — A dedicated PM agent launches in public preview this March, inside Planner in Teams. Breaks goals into tasks, tracks project health. Still preview, don't build your workflow around it. But the direction is clear: not assistant, agent.

Copilot goes agentic in Word and PowerPoint [Source] — Copilot now edits documents directly by default in Word and creates presentations through conversation in PowerPoint Web. The side-panel era is ending. Already live.

THIS ISSUE: PROJECT MANAGERS
Every prompt below points Copilot at a specific M365 data source (Planner, Teams, Outlook) and asks for a transformation, not a creation. That's why they work.

1. The weekly status report (used weekly, saves 30-45 min)

From my Planner board for [project name], create a status report covering: tasks completed this week, tasks overdue with assignee names, tasks due next week, and any task that's been in progress for more than 10 days. Format for my stakeholder email.

Why it works: Pulls directly from your Planner data. You stop manually assembling updates.

2. The meeting context sweep (used before every meeting)

Summarize the last 2 weeks of conversation in the [project name] Teams channel. Extract: decisions made, action items mentioned with who said them, open questions, and anything flagged as a risk or blocker. I need this for my weekly status meeting.

Why it works: The Teams channel has the context — you just never have time to read it all.

3. The risk radar (used weekly)

From the [project name] Teams channel and recent emails, identify potential risks I might be missing. Look for: mentions of delays, resource concerns, scope questions, and any external dependency issues. Categorize each as schedule, budget, scope, or resource risk.

Why it works: Risks hide in email threads and chat messages. This surfaces them before they surprise your steering committee.

4. The "who's overloaded" check (used bi-weekly)

From my Planner boards across [project 1, project 2, project 3], show me which team members have the most tasks assigned, who has overdue items across multiple projects, and where I have single points of failure — one person assigned to a critical task with no backup.

Why it works: Resource conflicts are invisible until someone misses a deadline. This makes them visible before the deadline slips.

5. The stakeholder update draft (used 2-3x/week)

Draft a project update email for [stakeholder name] who cares about [budget / timeline / scope — pick one]. Pull the relevant data from my Project timeline and Planner board. Keep it under 150 words. Tone: confident but transparent about risks.

Why it works: Different stakeholders care about different things. This tailors the update to what they actually want to know.

WHAT DIDN'T WORK

"Create a project plan for [project name]" — Dependencies were invented. Procurement placed after construction because it "sounded right." Every plan needed rebuilding from scratch.

"Prioritize my tasks across all projects" — Sorted by due date. That's not prioritization. No concept of strategic importance or which deadline is real vs. aspirational.

"Summarize this project's risks" without pointing at data — Generic list of "common project risks." Described every project ever instead of yours.

The pattern: When a prompt asks Copilot to decide, create, or prioritize from nothing — it will. Confidently. Wrongly.

THE TAKEAWAY

Stop debugging your prompts. Debug your inputs.

When output is garbage, people rewrite the prompt. More detail, more instructions, more formatting requirements. They iterate five times on the same bad prompt and wonder why it's still bad.

Look at the 5 prompts above. None of them are clever. "From my Planner board, create a status report" isn't prompt engineering — it's pointing at data. Now look at the 3 failures. "Create a project plan" sounds like the better prompt. More ambitious. More useful if it worked. But there's nothing to transform. You're asking a tool built to process information to generate it instead.

Here's the mental model: AI is a new analyst on your team. Smart, fast, never sleeps — and knows nothing about your projects, your stakeholders, or your politics. You wouldn't hand a new analyst a blank page and say "write me a project plan." You'd hand them your Planner board, your last steering committee deck, and your risk register. Then you'd say "synthesize this."

Next time you get useless output, don't rewrite the prompt. Ask: what data did I forget to give it?

Try one prompt this week. If it saves you time, forward this to someone on your team who'd use the others.

Mathieu
