From here on, the primary approach to AI will increasingly shift (back) to AI-enhanced hardware, and that will not impact just processors. It is comparable to the birth of the Internet and the railroad manias of the past: in the beginning there was euphoria, with scores of small companies circling to build the winning solution. But ultimately the market sorts out where the real value, and thus the best solutions, lie.
Similarly, when AI entered mainstream consciousness, we saw the same pattern: everybody had a new AI or neural processing chip, and companies proclaimed that AI processing would be central to everything we do. Of course, most of these radical approaches failed to find a market, and the general trend has instead been to throw loads of general-purpose or GPU-centric compute at the AI opportunity. That works for a while, but it scales poorly and is not cost-effective.
Now as knowledge grows across industries, there’s a more pragmatic understanding in the market that Rome wasn’t built in a day.
For AI to take the next step toward ubiquity, it must be processed efficiently, which means a greater focus on AI-enhanced hardware; that does not mean going back to the discrete AI or neural chip approach. Rather, in 2022 and beyond, expect AI solutions to be more balanced and blended, just as we see AI techniques being added to or paired with virtually every kind of application, from computer-aided design to customer service to retail and communications.
As AI solutions evolve, we will take our cues from that amazing computing device, the human brain. Electronic circuits still use five to six orders of magnitude more energy than biological systems; it is remarkable how much work a human brain does on roughly 35 watts.
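To make the orders-of-magnitude claim concrete, here is a rough back-of-the-envelope sketch. All of the figures are illustrative assumptions, not numbers from the article: the brain's operation rate uses a high-end estimate of synaptic events per second, and the silicon side assumes a roughly 100 W processor retiring on the order of 10^11 useful operations per second.

```python
import math

# Back-of-the-envelope energy-per-operation comparison.
# All figures below are rough, commonly cited estimates chosen
# for illustration only; real numbers vary widely by workload.

BRAIN_POWER_W = 35.0        # power budget cited in the article
BRAIN_OPS_PER_S = 1e16      # high-end estimate of synaptic events/second

CHIP_POWER_W = 100.0        # assumed processor power draw
CHIP_OPS_PER_S = 1e11       # assumed useful operations/second

brain_j_per_op = BRAIN_POWER_W / BRAIN_OPS_PER_S   # joules per event
chip_j_per_op = CHIP_POWER_W / CHIP_OPS_PER_S      # joules per operation

gap_orders = math.log10(chip_j_per_op / brain_j_per_op)
print(f"brain:   {brain_j_per_op:.1e} J/op")
print(f"silicon: {chip_j_per_op:.1e} J/op")
print(f"gap: about {gap_orders:.1f} orders of magnitude")
```

Under these particular assumptions the gap comes out to roughly five and a half orders of magnitude; more conservative estimates of the brain's operation rate shrink it, which is why the article's five-to-six-orders figure should be read as a ballpark.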
Today's big, complex systems, such as advanced driver-assistance systems (ADAS), are insignificant compared with what the brain can do. The brain is essentially a large memory device that does a remarkable job of computation, and a big reason for that is its fixed-function areas: the region that handles vision is quite different from the regions that handle hearing or other cognitive tasks. The brain is, in effect, a large heterogeneous system of different fixed-function units with high-performance connectivity bringing it all together.
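The brain-as-heterogeneous-system idea can be sketched as a dispatcher that routes each task type to a dedicated fixed-function unit, falling back to a general-purpose engine only when no specialized unit exists. The unit names and energy figures below are invented purely for illustration:

```python
# Toy sketch of a heterogeneous system: each task type is routed to a
# specialized fixed-function unit instead of one general-purpose engine.
# Unit names and per-task energy costs are invented for illustration.

FIXED_FUNCTION_UNITS = {
    "vision": {"energy_per_task_mj": 2},    # e.g., a convolution engine
    "speech": {"energy_per_task_mj": 1},    # e.g., a sequence accelerator
    "language": {"energy_per_task_mj": 3},  # e.g., a matrix-multiply array
}
GENERAL_PURPOSE_MJ = 50  # fallback cost on a general-purpose core

def dispatch(task_type: str) -> int:
    """Return the energy cost (mJ) of running a task on the best unit."""
    unit = FIXED_FUNCTION_UNITS.get(task_type)
    if unit is not None:
        return unit["energy_per_task_mj"]
    return GENERAL_PURPOSE_MJ  # no specialized unit available

workload = ["vision", "speech", "vision", "planning", "language"]
specialized_total = sum(dispatch(t) for t in workload)
general_only_total = GENERAL_PURPOSE_MJ * len(workload)
print(specialized_total, general_only_total)  # prints "58 250"
```

Even in this toy model, most of the energy goes to the one task ("planning") with no dedicated unit, which is the argument for specialization plus fast interconnect rather than ever-larger general compute.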
The same concept is needed for AI: specialized subsystems for specific tasks rather than ever more massive general-purpose computing. This enables far better optimization of energy usage, which is desperately needed as AI continues to consume astronomical amounts of energy. OpenAI, for example, found that training a single language model used almost as much energy as three homes consume in a year. Clearly, if AI is going to be everywhere, it needs to be much more efficient.
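As a rough sanity check on the "three homes" comparison: the average U.S. household uses about 10,600 kWh of electricity per year (a widely cited EIA figure). The article does not state the training run's energy, so the number below is a hypothetical stand-in chosen only to show the arithmetic:

```python
# Sanity-check arithmetic for the "three homes for a year" comparison.
# The per-home figure is the approximate U.S. average (EIA); the
# training-run energy is a hypothetical placeholder, since the article
# does not give the actual number.

KWH_PER_HOME_PER_YEAR = 10_600   # approximate U.S. average annual usage
ASSUMED_TRAINING_KWH = 32_000    # hypothetical energy for one training run

homes_equivalent = ASSUMED_TRAINING_KWH / KWH_PER_HOME_PER_YEAR
print(f"~{homes_equivalent:.1f} home-years of electricity")  # ~3.0
```

Any training run on the order of tens of megawatt-hours lands in this "a few households for a year" range, which is what makes the comparison intuitive.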