AI can beat chess grandmasters, but it can’t adapt to modern video games
NYU researchers say AI’s biggest weakness is still adaptability, with modern systems struggling to handle new video games they have never seen before.
Modern video games are exposing what AI still can’t really do.
For all the noise around AI conquering chess, Go, and now even coding, there is still a glaring weakness hiding underneath those wins: AI is still bad at handling a new video game it has never seen before.
The core argument of a new paper from NYU researchers is that these headline-grabbing milestones have painted a misleading picture of how close machines are to real general intelligence.
That distinction really matters.
Chess and Go are impressive achievements, but they are games with fixed rules and structured environments, far simpler than complex modern video games. The NYU team argues that AI has yet to reach human-like intelligence precisely because it adapts so poorly.
Where AI remains lacking
According to researchers, many of AI’s biggest gaming successes are based on systems that are finely tuned to one specific game. In those defined boundaries, AI can basically become superhuman. But as soon as there are slight changes to the rules or environments, its impressive performance can collapse.
This is where video games come in as a real test of machine intelligence. Games aren't one-dimensional; they often demand a vast range of skills, including spatial reasoning, long-term planning, trial-and-error learning, and even social intuition. The paper argues that this variety makes gaming a far better measure of flexible intelligence than isolated benchmark tasks.
Reinforcement learning and LLMs both hit a wall
The paper adds that reinforcement learning can produce impressive results, but only after millions or billions of simulated runs. The system becomes an expert in the exact situation it was trained for, and that expertise falls apart the moment anything changes. Even something as simple as shifted colors or repositioned objects on screen can break it.
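That brittleness is easy to reproduce in miniature. The sketch below is my own toy illustration, not code from the NYU paper: a tabular Q-learning agent masters one fixed corridor layout, and its learned policy then fails outright when the goal is simply moved to the other end of the corridor.

```python
# Toy sketch (illustrative only, not from the NYU paper): a Q-learning agent
# trained on one fixed layout fails when the goal position changes.

SIZE, START = 7, 3
ACTIONS = (-1, 1)  # step left, step right

def train(goal, episodes=2000, alpha=0.5, gamma=0.9):
    """Learn a Q-table for a corridor where reaching `goal` pays +1.
    Optimistic initialization (all values 1.0) drives exploration."""
    q = {(s, a): 1.0 for s in range(SIZE) for a in ACTIONS}
    for _ in range(episodes):
        s = START
        for _ in range(SIZE * 4):
            a = max(ACTIONS, key=lambda a: q[(s, a)])
            s2 = min(max(s + a, 0), SIZE - 1)
            r = 1.0 if s2 == goal else 0.0
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
            if s == goal:
                break
    return q

def greedy_reaches(q, goal):
    """Follow the learned greedy policy; True if it finds `goal` quickly."""
    s = START
    for _ in range(SIZE * 2):
        s = min(max(s + max(ACTIONS, key=lambda a: q[(s, a)]), 0), SIZE - 1)
        if s == goal:
            return True
    return False

q = train(goal=SIZE - 1)
print(greedy_reaches(q, goal=SIZE - 1))  # original layout: True
print(greedy_reaches(q, goal=0))         # goal moved: False
```

The agent never learns "find the goal"; it memorizes "walk right", which is exactly the kind of narrow expertise the researchers describe.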
LLMs (large language models) do not solve this either. NYU says they perform surprisingly poorly on unfamiliar games. When an LLM does do well, it is usually leaning on custom, game-specific scaffolding to interpret game states, manage memory, and execute actions. Strip that extra support away, and performance drops fast.
The real benchmark
The researchers argue that a true game-playing AI would need to learn a new game from scratch in roughly the same amount of time as a skilled human player, perhaps tens of hours, without massive simulation or prior exposure. That remains beyond the capabilities of current systems.
And that is why this matters beyond gaming. If AI cannot reliably adapt to a brand-new video game, it is even less likely to handle the unpredictability of the real world. Chess may still make for a good headline, but modern games are showing just how far AI still has to go.
