Artificial general intelligence (AGI) refers to a hypothetical future state of AI in which machines can solve problems that presently require human intelligence.
As AI capabilities rapidly improve and AGI becomes more plausible, we may need new metrics to predict its arrival. Current large language models (LLMs) lack the reasoning and logic necessary for AGI, and because of practical limits on data and hardware, analysts expect the exponential growth of model size and compute power to slow. However, new inference technologies continue to emerge, enabling efficient predictions on edge devices and supporting low-latency applications.
In a recent article for CMSWire, Virtusa’s Head of Consulting, Frank Palermo, lays out AI’s journey and the path toward artificial general intelligence.
AI's future is increasingly tied to inference-centric workloads. While much attention remains on training complex AI models such as LLMs, in practice inference constitutes the bulk of deployed AI workloads. Enterprises that grasp the mechanics of inference are better positioned to enhance their products and services through more effective use of AI.
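The training/inference split above can be illustrated with a toy example. The sketch below is purely illustrative (not any specific framework's API; real systems use libraries such as PyTorch or ONNX Runtime): training repeatedly runs a forward pass *and* updates weights, while inference is just a single forward pass over frozen weights, which is why it is so much cheaper per request and can run on edge devices.

```python
# Toy linear model, y = w*x + b, to contrast training with inference.
# All function and variable names here are illustrative assumptions.

def forward(weights, bias, x):
    """Inference: one pass through the frozen model. No weight updates."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def train_step(weights, bias, x, target, lr=0.01):
    """Training: a forward pass PLUS gradient computation and a weight update."""
    pred = forward(weights, bias, x)
    error = pred - target
    new_weights = [w - lr * error * xi for w, xi in zip(weights, x)]
    new_bias = bias - lr * error
    return new_weights, new_bias

# Fit the toy target y = 2*x + 1 with plain stochastic gradient descent.
weights, bias = [0.0], 0.0
for _ in range(2000):
    for x0 in (0.0, 1.0, 2.0, 3.0):
        weights, bias = train_step(weights, bias, [x0], 2 * x0 + 1)

# After deployment, only forward() runs: no gradients, no updates.
print(round(forward(weights, bias, [4.0]), 2))
```

The asymmetry shown here is the point of the paragraph above: training is a one-time (or periodic) cost, while inference is what runs on every user request, so the aggregate compute shifts toward inference as applications scale.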