

Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

It’s nearly Thanksgiving, and you know what that means: stuffing your face to the point of a food coma. Well, that and entertaining all the unsavory relatives you manage to keep at bay the rest of the year.

For those of us lucky (unlucky?) enough to be tasked with fixing this year’s feast, there isn’t much time left to settle on a menu. It’s always a tough decision, what to prepare. So, here’s a clever idea: ask a chatbot.

Yes, yes, it’s been done before — turning to AI for desperate Thanksgiving assistance. (The New York Times tried ChatGPT recipes in 2022.) The results have generally been middling. But perhaps the prompts were the issue.

Curiosity got the best of me. So I asked some of the more popular chatbots, ChatGPT and Claude, for a Thanksgiving menu “so unique it’d wow positively any family member.” That’d do the trick, I reckoned.
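(If you want to reproduce the experiment at home, here’s a minimal sketch of posing the same question through the two companies’ official Python SDKs. The model names below are assumptions, so swap in whatever is current.)

```python
# A rough sketch of asking both chatbots for a Thanksgiving menu via the
# official OpenAI and Anthropic Python SDKs. Model names are assumptions.
from openai import OpenAI
from anthropic import Anthropic

PROMPT = "Plan a Thanksgiving menu so unique it'd wow positively any family member."

openai_client = OpenAI()        # reads OPENAI_API_KEY from the environment
anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# ChatGPT's take
chatgpt_menu = openai_client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": PROMPT}],
)
print(chatgpt_menu.choices[0].message.content)

# Claude's take
claude_menu = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model name
    max_tokens=1024,
    messages=[{"role": "user", "content": PROMPT}],
)
print(claude_menu.content[0].text)
```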

Let me tell you, reader, the AI didn’t disappoint.

ChatGPT recommended starting with a cocktail hour — fancy! — featuring whipped sweet potato and goat cheese crostini. Claude, meanwhile, shot for the moon, suggesting an appetizer — “butternut squash bisque with sage foam” — that certainly checked the “unique” box.

“Pumpkin soup shooters with cinnamon crème fraîche” sound good? That’s what ChatGPT proposed for the appetizer, followed by a main course of miso-butter turkey with a ginger-soy glaze. Claude, once again the wild card, suggested “lavender and fennel dry-brined turkey with a honey-thyme glaze.” The chatbot described it as an herbaceous departure from classic roast turkey. Indeed.

What about sides? ChatGPT recommended a chili-lime corn bread and pistachio risotto. Claude said to whip out the fine liquor for a “wild mushroom and chestnut stuffing with aged sherry.”

For the big finish, both chatbots would have you stick to staples: pie, cheesecake, and healthy scoops of ice cream. The twist? The ice cream is saffron-flavored, and the cheesecake is chai-spiced.

“This menu takes familiar Thanksgiving flavors and elevates them through unexpected ingredients, techniques, and combinations,” Claude writes of its creations. “Each dish tells a story and invites conversation, making the meal not just about food, but about shared experience and creativity.”

I can’t argue with that. But as the designated cook this year … well, let’s just say I’m not going to be aiming for Top Chef.

News

OpenAI’s Sora leaks: A group appears to have leaked access to Sora, OpenAI’s video generator, in protest of what it’s calling “art washing” on OpenAI’s part.

Amazon backs Anthropic, again: Anthropic has raised an additional $4 billion from Amazon and has agreed to train its flagship generative AI models primarily on Amazon Web Services, Amazon’s cloud computing division.

AI app connectors: In other Anthropic news, the company has proposed a new standard, the Model Context Protocol, for connecting AI assistants to the systems where data resides.
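For a sense of what that looks like under the hood: MCP is built on JSON-RPC 2.0, so a client asking a server what tools it exposes sends messages roughly shaped like the sketch below. The method names follow the early public spec, and the tool name and arguments are hypothetical; treat the details as illustrative.

```python
# Illustrative only: MCP messages are JSON-RPC 2.0. The tool name and
# arguments below are hypothetical examples, not part of the spec.
import json

list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_database",          # hypothetical tool exposed by a server
        "arguments": {"sql": "SELECT 1"},  # hypothetical arguments
    },
}

# Over the stdio transport, each message is sent as a line of JSON.
print(json.dumps(list_tools_request))
print(json.dumps(call_tool_request))
```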

OpenAI funds “AI morality” research: OpenAI is pouring $1 million into a Duke University research program to develop algorithms that can predict humans’ moral judgments.

YouTube gets AI backgrounds: YouTube’s Dream Screen feature for Shorts, the platform’s short-form video format, now lets users create AI-generated video backdrops.

Brave adds AI chat: Search engine Brave introduced an AI chat mode for follow-up questions based on initial queries on Brave Search, an expansion of Brave’s Answer with AI feature that provides AI-generated summaries of web searches.

Ai2 open sources Tülu 3: The Allen Institute for AI (Ai2) has released Tülu 3, a generative AI model that can be fine-tuned and customized for a range of applications (e.g., solving math problems).
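If you want to kick the tires, here’s a minimal sketch of loading the model with Hugging Face’s transformers library. The repo ID is an assumption, so check Ai2’s model page for the exact checkpoint name.

```python
# Minimal sketch: loading Ai2's Tulu 3 with Hugging Face transformers and
# running a quick generation test. The repo ID below is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/Llama-3.1-Tulu-3-8B"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Solve for x: 3x + 7 = 22. Show your steps."
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```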

Crusoe raises cash: Crusoe Energy, a startup building data centers reportedly to be leased to Oracle, Microsoft, and OpenAI, is in the process of raising $818 million, according to an SEC filing.

Threads tests AI summaries: Meta’s Threads has begun testing AI-generated summaries of what people are discussing on the platform, taking a page from rival X.

Research paper of the week

AlphaQubit. Image Credits: Google DeepMind

DeepMind, Google’s AI research org, has developed a new AI system called AlphaQubit that it claims can accurately identify errors inside quantum computers.

Quantum computers are potentially far more powerful than conventional machines for particular workloads. But they’re also more prone to “noise,” or general errors.

AlphaQubit identifies these errors so that they can be mitigated and corrected for, helping make quantum computers more reliable.

It’s not a flawless system, though. Google acknowledges in a post that AlphaQubit is too slow to correct for errors in real time — and is not especially data-efficient. Work is underway on improved versions, says the company.

Model of the week

A sample from Runway’s Frames model. Image Credits: Runway

Runway, a startup building AI tools for content creators, has released a new image-generation model that the company claims offers better stylistic control than most.

Called Frames, the model, which is slowly rolling out to users of Runway’s Gen-3 Alpha video generator, can reliably create images that stay true to a particular aesthetic, Runway says.

Now, it’s worth noting that Runway may be playing fast and loose with copyright rules. A 404 Media report earlier this year suggested the company scraped YouTube footage from channels belonging to Disney and creators like MKBHD without permission to train its models.

When reached for comment, a Runway spokesperson declined to reveal the source of Frames’ training data.

Like many generative AI companies, Runway asserts its data-scraping practices are protected under fair use doctrine. That theory is being tested in a number of courtroom battles, including a class action suit filed against Runway and several of its art-generator rivals.

Grab bag

Image Credits: iunewind / Shutterstock

Nvidia has unveiled a model it’s calling “the world’s most flexible sound machine.”

Dubbed Fugatto, the chip giant’s model can create a mix of music, voices, and sounds from a text description and a collection of audio files. For example, Fugatto can create a music snippet based on a prompt, add instruments to or strip them from a song, and change the accent or emotion in a vocal performance.

Trained on millions of openly licensed sounds and songs, Fugatto can even generate things that don’t exist in the real world, Nvidia claims.

“For instance, Fugatto can make a trumpet bark or a saxophone meow,” the company wrote in a blog post. “With fine-tuning and small amounts of singing data, researchers found it could handle tasks it was not [trained] on, like generating a high-quality singing voice from a text prompt.”

Nvidia hasn’t released Fugatto, fearing it might be misused. But according to Reuters, the company is weighing how the model could eventually be launched “responsibly.”
