AI Reading for Saturday December 14
Real-time multimodal AI with ChatGPT Advanced Voice Assistant with Vision - Business Insider
I definitely recommend trying this out.
Real-time multimodal AI with Gemini 2.0 Flash - VentureBeat
Giving AI models IQ tests. Feel the AGI yet? - Tracking AI
Google rolling out advanced voice assistant on Nest smart speakers - The Verge
AI-assisted suicide prevention? - Rest of World
Feds propose new framework making cloud providers GPU compute gatekeepers and limiting total GPU exports. - Yahoo Finance
Designer used AI to assist in building a government website, people were not happy. - Creative Bloq
xAI rolls out faster Grok 2 - TechCrunch
David Sacks, from Musk minion to Trump’s AI and crypto tsar - FT
Meta joins Musk in asking the government to block OpenAI’s switch to a for-profit - The Verge
BuzzFeed Pivots to AI-Generated Content. - Futurism
Character.AI Appears to Have Accidentally Let Users See Each Other's Chat Histories. - Futurism
Serious privacy lapse. Very important in the enterprise to have AI running in single-tenant environments.
ChatGPT adds fancy folders - The Verge
Some funny Easter eggs in the OpenAI livestream, including 'how to act normal' and 'super secret AGI do not show live'.
Pika 2.0 launches in the wake of Sora - X (formerly Twitter)
You can now talk back to NotebookLM podcast hosts. - The Verge
Apple AI posts a terribly misleading summary of a BBC report
AI insiders love Claude. It just feels smarter. - NY Times
Sutskever gave a talk at NeurIPS, said the current method of pre-training is over - YouTube
Sarah Friar says OpenAI is still on track to spend billions training its next big LLM. - Bloomberg
Good news for NVDA if scaling laws are still a thing. Because if scaling laws don’t apply, who needs million-GPU clusters?
Is the AI revolution running out of data? - Nature
No. Running out of training data feels overrated to me … OK, you've trained as much as you can on all the Shakespeare, but most of the training data is Harry Potter at best. You have to ask what exactly the AI would learn from more Harry Potters.
It just doesn't seem that hard to use AI to make synthetic variations on the existing Harry Potters that would be sufficiently novel for the AI to keep learning, kind of like a teacher finding a few different ways to express an idea so that students will understand it.
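The paraphrase idea above can be shown with a toy sketch. Everything here is an assumption for illustration: the `SWAPS` table is a stand-in for what would, in practice, be an LLM paraphraser generating novel-enough variants of existing text.

```python
import random

# Toy stand-in for an LLM paraphraser: a hand-written substitution table.
# A real pipeline would generate variants with a model, not a dictionary.
SWAPS = {
    "wizard": ["sorcerer", "mage"],
    "school": ["academy", "institute"],
    "learned": ["studied", "mastered"],
}

def vary(sentence: str, seed: int) -> str:
    """Return one synthetic variant of a sentence by swapping known words."""
    rng = random.Random(seed)  # seeded so each variant is reproducible
    words = []
    for w in sentence.split():
        bare = w.strip(".,").lower()
        if bare in SWAPS:
            words.append(rng.choice(SWAPS[bare]))
        else:
            words.append(w)
    return " ".join(words)

original = "The wizard learned magic at school"
variants = {vary(original, seed) for seed in range(20)}
print(variants)
```

Different seeds pick different substitutions, so a single source sentence yields several distinct variants while preserving the underlying content, which is the same move a teacher makes when restating one idea a few different ways.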
Getting something that understands and generates human language was a miracle; taking it apart, understanding it, and making it train 10x more data-efficiently seems pretty likely on a timeframe of years to a decade, especially with better tooling, a better understanding of the brain, and compute that is multiples faster and more efficient.
Shakespeare didn't have to read everything in the 2024 Library of Congress to be Shakespeare. A child learns to speak well after a few million tokens. We now have a working model of an LLM that can learn to speak. The task is to find an architecture that has the same capability but is sample efficient and can also learn on the fly and adapt after its initial training.
The intern fired from ByteDance was first author on an award-winning NeurIPS paper, leading people to blog about his behavior and question the ethics of the award. - var-integrity-report.github.io
From Google Workspace to Agentspace - SiliconANGLE
Sundar never panicked. - Semafor
Long interview. The consensus narrative: Google has caught up on the tech, if not the perceived relevance, in the AI space. Unless OpenAI drops something huge next week.
24 AI tips from Google - Google
Surreal Videos of Competitive Gymnastics Floor Routines Generated by OpenAI's Sora - Laughing Squid
America's funniest AI home videos - Reddit
Follow the latest AI headlines via SkynetAndChill.com on Bluesky