Microsoft’s Copilot Cowork arrives with smarter AI research tools to spot gaps in your work

Microsoft has rolled out Copilot Cowork in early access, alongside major upgrades to its Researcher tool, introducing Critique and Model Council to help you review ideas, spot gaps, and test decisions before acting.



Your work can now get a second opinion with Microsoft’s new Researcher features


Earlier this month, Microsoft unveiled Copilot Cowork, which is based on Anthropic’s Claude Cowork.

Now, the company has rolled out Copilot Cowork in early access through its Frontier program, alongside new upgrades to its Researcher tool that will help you plan, analyze, and make decisions at work.

So what can Microsoft’s Copilot Cowork do for you?


Copilot Cowork is an agentic AI tool built for handling long, multi-step tasks inside Microsoft 365. It can help you think through tasks, break down goals, and work alongside you like a colleague across documents and workflows.

You describe the outcome you want, and it creates a plan and completes the task while showing you its progress. You can also step in and redirect it at any point. It can handle everything from one-off requests to recurring workflows like monthly budget reviews.

New AI features in Copilot’s Researcher tool

Microsoft is also upgrading Researcher, its deep research feature inside Copilot, with two key additions.

The first is Critique, a new setup where two AI models work together on the same task. OpenAI’s GPT generates the initial response, and Anthropic’s Claude reviews it for accuracy and quality before it reaches you.

According to Reuters, Microsoft plans to make this interaction bi-directional in the future, meaning Claude’s drafts could eventually be reviewed by GPT, too.

According to Microsoft, this feature improved the Researcher tool’s score by 13.8% on the DRACO benchmark, which the company uses to measure the accuracy and quality of deep research.


The second addition is Model Council, a new feature that lets you pull responses from different AI models and compare them side by side. You can instantly see where they agree, where they differ, and what each brings uniquely to your question.

Microsoft says all of this is part of Wave 3 of Microsoft 365 Copilot, a push to move AI from a tool you experiment with to one that actively does your work for you.

Manisha Priyadarshini