Nvidia founder and CEO Jensen Huang addressed the press this week at the company's annual GTC developer conference, sharing his thoughts on AI hallucinations and artificial general intelligence (AGI). AI hallucinations occur when an AI system gives plausible but inaccurate answers. These errors stem from a variety of factors, including insufficient training data, biases in the data used to train the model, or incorrect inferences drawn by the model.
Huang stated that AI hallucinations can be solved through the 'retrieval-augmented generation' (RAG) approach. RAG requires the AI system to retrieve relevant source material and verify its answer against those sources before responding, a practice similar to basic media literacy.
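For illustration, here is a minimal sketch of the RAG pattern in Python: a retriever selects the source documents most relevant to a question, and the prompt instructs the model to answer only from those sources. The toy document store, the keyword-overlap scoring, and the `call_llm` placeholder are all simplifying assumptions for this sketch, not Nvidia's implementation.

```python
from typing import List

# A toy in-memory "knowledge base". In a production RAG system this would be
# a vector store or search index over trusted source documents.
DOCUMENTS = [
    "Nvidia's GTC is an annual developer conference focused on GPU computing.",
    "Retrieval-augmented generation grounds model answers in retrieved documents.",
    "AI hallucinations are plausible-sounding but factually incorrect outputs.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank documents by naive keyword overlap and return the top k."""
    q_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    # Placeholder for any LLM client (e.g. a hosted API or a local model);
    # here it just echoes the grounded prompt so the sketch runs standalone.
    return f"[model response grounded in retrieved sources]\n{prompt}"

def answer_with_rag(query: str) -> str:
    """Build a prompt that forces the model to answer only from sources."""
    sources = retrieve(query, DOCUMENTS)
    context = "\n".join(f"- {s}" for s in sources)
    prompt = (
        "Answer using ONLY the sources below. "
        "If they do not contain the answer, reply 'I don't know.'\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer_with_rag("What is retrieval-augmented generation?"))
```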
Regarding AGI, Huang believes that, depending on how it is defined, it is likely to become reality within the next five years.
AI hallucinations can be harmful because they deceive people. The immediate issue is that they severely undermine consumer trust. As users come to see AI tools as authoritative, they place greater trust in them and are all the more shocked when that trust turns out to be misplaced. Nvidia's CEO suggests that AI systems should be able to admit when they don't know the answer, when there is no consensus on the right answer, or when a question concerns future events they cannot know.
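One simple way an application could encode that behaviour is as a standing system instruction prepended to every request. The sketch below is a hypothetical illustration of that idea, assuming a chat-style message format; the wording of the policy is not something Huang or Nvidia specified.

```python
# Hypothetical abstention policy reflecting the three cases Huang described:
# unknown answers, disputed answers, and unknowable future events.
ABSTAIN_INSTRUCTION = (
    "If you do not know the answer, say 'I don't know.' "
    "If sources disagree, say the answer is disputed rather than picking one. "
    "Decline to state outcomes of future events, such as election results."
)

def build_messages(user_question: str) -> list[dict]:
    """Prepend the abstention policy to every user request."""
    return [
        {"role": "system", "content": ABSTAIN_INSTRUCTION},
        {"role": "user", "content": user_question},
    ]

print(build_messages("Who will win the next World Cup?"))
```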
AGI, sometimes known as 'strong AI' or 'human-level AI', is a critical milestone in AI research. Unlike 'narrow AI,' which is designed for specialised tasks, AGI would be capable of performing a wide range of cognitive functions at or beyond human level. Huang suggests that AGI is achievable within five years if it is defined as a computer that performs better than almost any human on a broad battery of tests: math, reading, reading comprehension, logic, pre-medical and economics exams, bar exams, or standardised tests such as the GMAT and SAT. 'If we specify AGI to be something very specific, a set of tests where a software program can do very well, or maybe 8% better than most people, I believe we will get there within 5 years,' Huang argued. However, if AGI is defined as software that thinks independently and becomes sentient, then Huang believes its arrival is far more difficult to predict.