Google Gemini 3.0 introduced a 'Deep Think Mode' toggle, which leverages inference-time compute for more complex reasoning tasks. Advanced voice mode with live video analysis rolled out widely in July 2025, including limited access on the free tier.
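Deep Think-style modes trade extra compute at answer time for better reasoning. As a rough illustration of the general idea behind inference-time compute (not Google's implementation), the hedged Python sketch below samples several candidate answers and takes a majority vote (self-consistency); the `generate_candidate` function is a hypothetical stand-in for a real model call.

```python
# Conceptual sketch of inference-time (test-time) compute: spend more
# sampling at answer time to improve reasoning, e.g. via self-consistency
# (sample N candidate answers, return the majority vote).
# NOTE: `generate_candidate` is a hypothetical placeholder, and this is
# not Google's Deep Think implementation.
from collections import Counter
import random

def generate_candidate(question: str) -> str:
    """Placeholder for a single (stochastic) model answer."""
    # A real system would call an LLM here; we fake a noisy arithmetic answer.
    return str(7 * 8 + random.choice([0, 0, 0, 1, -1]))

def deep_answer(question: str, n_samples: int = 16) -> str:
    """More samples -> more inference-time compute -> a more reliable answer."""
    votes = Counter(generate_candidate(question) for _ in range(n_samples))
    answer, _count = votes.most_common(1)[0]
    return answer

if __name__ == "__main__":
    # Majority vote over 16 sampled answers to "What is 7 * 8?"
    print(deep_answer("What is 7 * 8?"))
```

The design choice being illustrated is simply that accuracy can be bought with additional sampling at inference time rather than with a larger model.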

Related Artificial Intelligence Facts

Microsoft and MSI have teamed up to create a unique stylus that doubles as an AI microphone for Copilot, offering fast charging and new AI integration for productivity.

Samsung's One UI 8.5 will feature a new Privacy Display and Bixby powered by Perplexity AI, enhancing privacy and assistant capabilities on Samsung devices.

Google TV is receiving a major upgrade powered by Gemini, bringing new AI-driven features and improvements to the platform.

Intel announced Panther Lake (Intel Core Ultra Series 3), a new AI chip for laptops, emphasizing local AI with up to 180 TOPS.

AMD launched a new line of Ryzen AI processors to expand its presence in AI-powered personal computers.

Nvidia announced Cosmos, an AI foundation model trained on massive datasets for simulating physics-governed environments, and Alpamayo, an AI model for autonomous driving, at CES 2026.

Nvidia unveiled its new Vera Rubin AI superchip platform, which is already in full production and offers 3.5x the training performance and 5x the AI performance rating compared to its predecessor, Blackwell, with improved power efficiency. The company also announced Alpamayo, an open-source AI model and simulation toolkit for autonomous vehicles, with code available on Hugging Face.

Perplexity AI CEO Aravind Srinivas highlighted the growing trend toward local compute at CES, emphasizing its advantages in speed, security, and cost-effectiveness over cloud-based AI. The company is focusing on making AI more accessible and efficient by shifting more processing onto local devices.