Zoom is also getting an AI assistant, if that’s what you need for video calls
Zoom is evolving beyond video calls with its AI Companion 3.0, a web-based assistant that turns conversations into tasks, summaries, and content.
Zoom is rolling out its AI Companion 3.0 with a new web interface and expanded features that turn meetings, chats, and documents into actionable insights.
Zoom has launched its next-generation AI Companion 3.0, which brings its AI assistant out of the app and into a dedicated web interface. It's no longer just a meeting transcriber and summarizer; Zoom's latest AI can turn conversations into actionable tasks, generate daily reflection reports, draft follow-up emails, and even create documents from meetings or notes.
Free Zoom users can use the new AI Companion 3.0 for three meetings every month (including features like AI note-taking, in-meeting questions, summaries, and 20 questions via the side panel), while the entire suite of AI features costs $10 per month on top of the fee paid for Zoom Workplace.
This update marks a shift for Zoom from a simple video-conferencing platform to an AI-driven workspace. Adding agentic AI capabilities that can handle meetings, chats, documents, and connected apps helps the platform stand out.
Furthermore, the company is following a federated AI model approach, blending its own AI engines with popular models from OpenAI, Anthropic, and other open-source tools. If you're a hardcore Zoom user who spends the workday juggling meetings, emails, to-do lists, documents, and other connected apps, Zoom's new AI Companion 3.0 could help you offload a decent share of those tasks onto AI.
For instance, instead of manually summarizing meetings or writing follow-up messages, you can ask Zoom’s AI to do it for you.
This should free up time for higher-value tasks. In the near future, Zoom plans to expand AI Companion's integrations; for example, the company could add Gmail and Outlook connectors to the platform, further refining its personal workflows and document-creation tools.
Over time, this could reshape how teams collaborate on Zoom, not just by hosting calls but by helping everyone get through mundane tasks faster.

AMD to play safe at CES 2026, but it may still deserve your attention
AMD’s CES 2026 keynote is shaping up to be far more about AI strategy than shiny new consumer chips.
Over the years, the Consumer Electronics Show (CES) has evolved from a consumer-electronics showcase into a premier global launchpad for chipmakers, turning the event into a key battleground for leadership in computing and AI hardware. The upcoming 2026 edition is expected to be no exception.
AMD has confirmed that President and CEO Dr. Lisa Su will deliver the opening keynote on January 5, outlining the company's AI vision across cloud, enterprise, edge, and consumer devices. While we aren't expecting major announcements like a new GPU generation or a surprise Zen 6 tease (though we can still dream), some important launches are still on the cards.
ChatGPT gets major update (GPT-5.2) as OpenAI battles Google in AI arms race
OpenAI's GPT-5.2 upgrade boosts real-world productivity just as Google escalates the competition with its latest Deep Research model.

OpenAI has officially launched GPT-5.2, the latest iteration of its flagship AI model series and its answer to Google's Gemini 3. The new model is meant to be faster, smarter, and more helpful for complex, real-world queries, with improvements in reasoning and long-document processing.
It is rolling out to ChatGPT's paid subscribers on the Plus, Pro, Team, and Enterprise tiers, and to developers via the API. OpenAI offers GPT-5.2 in three variants: GPT-5.2 Instant, GPT-5.2 Thinking, and GPT-5.2 Pro (is it just me, or does the naming sound similar to that of the Gemini models?).
Gemini Nano Banana now turns your selfie into a full-body avatar for online try-ons
Google's upgraded AI try-on tool uses a single selfie to generate a realistic full-body avatar, making virtual clothes shopping faster, more personal, and far more accurate.

Google is updating its AI-powered visual try-on tool, and the best part is that you no longer need a full-body photo to use it. Thanks to the company's Nano Banana model (part of the Gemini 2.5 Flash Image suite), the tool can now turn a single selfie into a full-body digital avatar for trying on clothes, and it works across Google Search and Shopping.
All you need to do is upload a selfie and pick your usual size; Nano Banana will generate a handful of avatars. Using your face, the AI model generates the rest of your body while maintaining consistent proportions. Choose the one you think suits you best, and use the avatar to try on shirts, jackets, dresses, and more.