Are Large Language Model Developments Hitting Scaling Issues?
AI companies developing large language models (LLMs), such as OpenAI, are apparently hitting delays in training these models to perform more human-like thought processes.
Successes Are Limited in Scope
Gen AI models have been highly successful in business implementations and in fields such as healthcare: applications that require large volumes of well-shaped input data combined with in-depth data analysis, code scanning, system feedback, and quality and client management. These are essentially areas where massive computational power, linked with machine learning, can provide instant feedback or process-improvement adjustments.
In these areas AI has proven phenomenal, improving production, quality, resource allocation, feedback, customer interactions and much more.
However, taking AI to the next level will require more human-like thinking: reasoning that goes beyond simple deduction and moves towards independent thought processes, enabling a leap beyond OpenAI's GPT-4 model, which is now two years old and has not developed further.
Bigger is Not Necessarily Better
For this to happen, a number of factors are becoming apparent:
- Data accuracy, depth and shape are critical to achieving maximum potential. According to a recent MIT study on AI, 72% of CIOs surveyed indicated that data is their biggest challenge for AI, and 68% said that unifying their data platform for analytics and AI is crucial
- Scaling the volume of data and simply adding computational capacity does not automatically produce better AI capability; returns diminish as models grow (see the scaling-law sketch after this list)
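One way to make the diminishing-returns point concrete is the empirical scaling-law form reported in the research literature (the "Chinchilla" analysis by Hoffmann et al., 2022, is the best-known example; the article itself does not cite a formula). Loss falls only as a fractional power of model size and data volume:

```latex
% Empirical LLM scaling-law form (Hoffmann et al., 2022):
%   L = achievable pre-training loss
%   N = number of model parameters, D = number of training tokens
%   E, A, B, \alpha, \beta = fitted constants
%   (reported fits: \alpha \approx 0.34, \beta \approx 0.28)
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Because the exponents are well below 1, each doubling of parameters or training data buys a progressively smaller reduction in loss, which is consistent with the plateau described in the next section.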
Gen AI Scaling has Plateaued
Ilya Sutskever, co-founder of the AI labs Safe Superintelligence (SSI) and OpenAI, recently commented that results from scaling up pre-training, the phase of training in which an AI model learns language patterns and structures from vast amounts of unlabelled data, have plateaued.
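To make the term concrete, below is a minimal, hypothetical sketch of the pre-training objective Sutskever refers to: next-token prediction on raw, unlabelled text, where the text itself supplies the training signal. It is a toy PyTorch model for illustration only; real LLM pre-training uses transformer architectures and web-scale corpora, and nothing here is drawn from OpenAI's or SSI's actual systems.

```python
# Toy illustration of LLM pre-training: next-token prediction on
# unlabelled text. The "labels" are just the same text shifted by one
# token. (Illustrative only; real LLMs use transformers at vast scale.)
import torch
import torch.nn as nn

text = "the quick brown fox jumps over the lazy dog "  # stand-in for a web-scale corpus
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
ids = torch.tensor([stoi[ch] for ch in text])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)  # stand-in for a transformer
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # logits over the next token at each position

model = TinyLM(len(vocab))
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    x, y = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)  # input vs. shifted target
    loss = loss_fn(model(x).reshape(-1, len(vocab)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Scaling up this loop, with more parameters, more tokens and more compute, is exactly the lever Sutskever says has stopped paying off.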
Reports indicate that researchers at major AI development labs have been experiencing delays and disappointing outcomes in the race to release a large language model that surpasses OpenAI's GPT-4 in ability and scale.
What seems to be required for AI to make the next exponential leap is a different approach, one that brings about more nuanced thinking without adding more scale to the infrastructure capacity.
Such a breakthrough would require better reasoning to be applied without further data inputs, and it would also be likely to require new computer processors, which would be a growth catalyst for the entire AI ecosystem.
Finding the Right Touchstone
Ilya Sutskever has raised $1 billion in cash to fund SSI's work to develop safe artificial intelligence systems that far surpass human capabilities, and he has been on a drive to hire the top talent in the AI industry to tackle this great challenge in AI development.
“The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Everyone is looking for the next thing,” Sutskever said.
“Scaling the right thing matters more now than ever,” he believes. Sutskever has not revealed details on how his team is addressing the issue, other than to say that SSI is working on an alternative approach to scaling up AI pre-training.