Sam Altman sees more great leaps forward for AI

By Axios | Created at 2024-09-24 09:45:33 | Updated at 2024-09-30 09:25:28

To those who contend that large language models are dumb word-predictors — "stochastic parrots" — that don't actually understand questions or solve problems, OpenAI CEO Sam Altman responds, in effect: So what?

The big picture: As the power of generative AI models continues to grow and the tech industry's bets on the technology pile up, a gulf remains between industry leaders like Altman — who believe AI will keep getting better as it gets bigger — and critics who argue that the technology will never prove fully reliable.


What they're saying: "I think people get very hung up on the fact that it's just being trained to predict the next token," Altman said Monday in an onstage interview with Axios' Ina Fried.

  • "Once it can start to prove unproven mathematical theorems, do we really still want to debate: 'Oh, but it's just predicting the next token?'" Altman said.
  • He spoke on a panel with UN tech envoy Amandeep Singh Gill at an OpenAI event on the sidelines of the UN General Assembly meeting in Manhattan.

Altman suggested that we've quickly taken for granted achievements most of us would have thought impossible just a few years ago.

  • Altman said the recent release of o1 (internally code-named Strawberry) lets anyone on the internet use a tool that's better at math than all but the top few hundred students in the U.S. — and works at the level of upper-echelon programmers.
  • Altman recalled an observation by mathematician Terence Tao: "The thing [he] said that stuck with me is that before, AI was like a very incompetent grad student. Now it's like a mediocre grad student that you could give tasks to. And soon, you can see a unique, useful research partner."

State of play: Altman said the new o1 model's reasoning capability has convinced him AI is getting closer to the goal of advancing scientific discovery.

  • "I think with o1, you can see the glimmers of how this is starting to work," Altman said.

The other side: Altman's comments stood in sharp contrast to the perspective provided by the UN's Gill, who talked about challenges he had seen in past work using AI to address global health issues.

  • "I don't think we can get a shortcut to all of this through large language models," he said.

Between the lines: Gill also lamented that people are using such compute-intensive systems to do simple tasks that could be better accomplished using other means — like just asking other people for help finding the nearest Starbucks.

Yes, but: Altman argued that such queries represent a trivial use of energy today, will use even less energy in the future, and are accompanied by other types of inquiries that save time and energy because they are solved by AI.

  • Altman pointed out that costs for a set AI computing task have rapidly come down.
  • He added that when OpenAI and its rivals cut prices, that's because they have found ways to accomplish the work using less computing power — and, therefore, less energy.
  • "The energy use of AI, I think relative to the value it's creating, is quite tiny today," Altman said. "I don't want to minimize it too much, because it's going to go up — like we will use gigawatts over time, out of, you know, terawatts on Earth."

Altman also said that the AI-powered device he's working on with former Apple design guru Jony Ive will not be a phone.

  • He also promised that OpenAI still has more big news this year, though he wouldn't say whether it is the long-rumored GPT-5 — if that's even what it will be called.