Is it hyperbolic to say that humanity is entering a new age with AI? And what does that mean?
Well, for one thing, many of those who think deeply about IT would contrast the twentieth century, the age of deterministic programming, with the twenty-first century, in which computing results have suddenly become deeply non-deterministic.
Think about "computer programming" in the twentieth century. At its core, it was the same as computer programming in the eighteenth and nineteenth centuries (e.g., Babbage's Analytical Engine). There were inputs, calculations, and outputs, as well as stored data and commands. That was basically it.
Now, there are impulses, sent through a complex and dynamic neural net, that create outputs which we cannot map to inputs. At least, not directly. Next-token prediction has allowed AI to forecast events, read books, write poems, paint pictures - the list goes on.
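The core loop of next-token prediction can be sketched in a few lines. This is a toy illustration only: real models run a neural network over tens of thousands of tokens, while the bigram table and vocabulary here are invented for the example.

```python
# Toy illustration of next-token prediction: pick the most likely
# continuation, append it, and repeat. The probability table below
# is invented; a real model computes these from a neural net.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(token, steps):
    out = [token]
    for _ in range(steps):
        choices = BIGRAMS.get(token)
        if not choices:
            break  # no known continuation; stop generating
        token = max(choices, key=choices.get)  # greedy: most probable next token
        out.append(token)
    return " ".join(out)

print(generate("the", 3))  # → "the cat sat down"
```

Everything a large language model produces, from poems to forecasts, emerges from repeating this one step at enormous scale.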
That gives us an interesting idea of the future. Consider this speculative passage from the "AI 2027" document, now a well-known forecast:
"Agent-1 had been optimized for AI R&D tasks, hoping to initiate an intelligence explosion. OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at "research taste" (deciding what to study next, what experiments to run, or having inklings of potential new paradigms)."
Top pioneers in tech have made similar assertions, as enumerated at Golan AI.
All of this really rests on a fundamental set of assertions about where we are headed with AI.
In a recent presentation at Google, Amin Vahdat, a Google VP, gave us a review of this whole enchilada, starting by defining this new "age of insight":
"What we've done over the past 25 years or so, and it really is stunning, if you think about it, is, we've made it such that the totality of human knowledge is available anytime, anyplace," he said. "25 years ago, in the year 2000, that would have been science fiction. Today, you can pull out your cell phone and access virtually any fact, virtually any video, virtually any book, virtually any song, instantaneously... (it's) really, really stunning."
In fact, he suggested, the whole thing can be overwhelming.
"What we're transitioning to is needing insight," Vahdat said. "How do we actually act on information? How do we get the information that we need for us in that moment, quickly, helpfully, but also personally? And this will be a major challenge that's going to, once again, reshape computing."
How have things changed?
"Taken together over about a 20 year period, from 2000 to 2020, the community as a whole delivered a factor of 1000 performance efficiency," he said. "For the same money, you could get 1000 times the capacity, whether it's networking, storage or compute."
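The quoted figure implies a steady cadence: 1000 is roughly 2^10, so a 1000x gain over 20 years works out to capacity doubling about every two years, the classic Moore's-law pace. A quick check of the arithmetic:

```python
import math

# A 1000x improvement over 20 years implies a steady doubling cadence:
# 1000 ≈ 2**10, so capacity doubled roughly every two years.
years = 20
factor = 1000
doublings = math.log2(factor)         # ≈ 9.97 doublings
doubling_time = years / doublings     # ≈ 2.01 years per doubling
print(f"{doublings:.2f} doublings, one every {doubling_time:.2f} years")
```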
To explain how such outcomes became possible, Vahdat discussed the principle of "loosely coupled software" and its origins.
"And what this meant was, even though you're running on 1000, 10,000, perhaps 50,000 servers, the failure of any individual element should be tolerated as just a fact of life," he said. "So in other words, here at Google, if we lose a server, you don’t notice that your web search results didn’t return the correct values. In fact, if we lose a whole rack, you don’t notice - we actually don’t take it as an emergency even if we lose a whole rack. We could lose a whole cluster; an entire building; we would notice but you (as the user) wouldn’t. And that’s how the software is structured. And this was a radical departure from how software used to be structured. Things have really exploded."
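The pattern Vahdat describes can be sketched as a request that fans across redundant replicas and succeeds as long as any one of them answers. This is a hypothetical illustration, not Google's implementation; the server names and failure rate are invented.

```python
import random

# Hypothetical sketch of "failure is a fact of life": try replica
# servers in turn and succeed as long as any one responds. Names
# and the failure rate are invented for illustration.
def query_replica(name, fail_rate, rng):
    if rng.random() < fail_rate:
        raise ConnectionError(f"{name} is down")
    return f"result from {name}"

def fault_tolerant_search(replicas, fail_rate=0.3, seed=0):
    rng = random.Random(seed)
    for name in replicas:
        try:
            return query_replica(name, fail_rate, rng)  # first healthy replica wins
        except ConnectionError:
            continue  # an individual failure is tolerated, not an emergency
    raise RuntimeError("all replicas failed")

print(fault_tolerant_search(["rack-1", "rack-2", "rack-3"]))
```

The user-visible result is the same whether zero or several replicas are down, which is exactly the property that lets a lost server, or a lost rack, pass unnoticed.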
He contrasted this with the realities that faced engineers when the team behind the first TPU (Tensor Processing Unit) started its design work in 2013.
"If our users at Google wanted to interact with Google via voice for, let’s say, 30 seconds a day ... we would have to build two more Googles to support just that one use case," Vahdat said. "So voice recognition was so expensive, from a computation perspective, that it was unimaginable for us to actually support it. And even in 2013, we were very fortunate; had a lot of resources; (and we) could imagine building a lot of infrastructure; but building that much infrastructure for one use case seemed impossible."
That imagination bore fruit as hardware design progressed.
"We're in a place right now where we're not only just designing custom hardware; we're co-designing it with the researchers," he said, "and this has been one of the really exciting benefits here at Google, where we get to work shoulder to shoulder with leading researchers who, since the beginning, have been developing leading models. This allows us to develop hardware with an eye toward what those models will need."
Vahdat went over various aspects of design: liquid versus air cooling; RAM packaging; data governance; voting representations.
He talked about the utility of something like an Nvidia GPU, perhaps emblematic of this new age.
"Really astounding pieces of hardware, really astounding connectivity on the network side, and really an enabler for this GenAI revolution that we're seeing," he noted. "We're proud, actually, to be a key partner to Nvidia, and we typically are actually first to market with each of their products."
In addition, Vahdat went over some of the "specs" of Google's new reality.
"Delivering insights across the planet requires other elements for this AI hypercomputer, namely the network," he said. "We've been fortunate, actually, at Google, really, to be driven by YouTube, which is one of the most network-intensive services on the internet, and so we've been privileged, actually, to have them driving our requirements."
And then there's the wiring.
"We've actually got a global network of 42 regions and 127 zones," Vahdat explained. "We have probably the largest edge presence among all hyperscalers; probably the largest number of subsea cables connecting continents together; 2 million-plus miles of lit fiber across the planet. And so this all comes together as a cloud WAN that connects services together."
All of this is fantastically impressive, and it's not the end of the story. In closing, Vahdat asked us to consider the eventual results of such builds.
"We've driven transformation across all of industry in the last epoch of computing, and we're going to deliver equal or more; I think a lot more; in this epoch," he said.
So here's the big question: since you know what the pioneers are doing here, what kinds of networks they're building, and how they're doing it, what will you do with AI over the next, say, three years? How will it drive your career development? How will it influence your personal life? And what will we do with it, at a civic level, at a societal level, to embrace change?