State of the Art in AI Research

Vincent Granville
2 min read · Oct 4, 2024


Who publishes state-of-the-art AI research these days? The question is frequently asked. Of course, large companies contribute, but they tend to be discreet to protect their intellectual property. For the most part, academia is lagging, focusing on small incremental improvements rather than ground-breaking results that lead to applications.

Funding is one issue. Grant agencies are unlikely to fund projects based on foundational changes with unpredictable outcomes. In academia, research focuses on what is likely to get funded, which drastically limits innovation.

When a new technology works, everyone works on it, trying to improve it, but never questioning it or trying something else. I was reading Sebastian Raschka's new book about building an LLM from scratch. In the very first sentence, he describes LLMs as deep neural networks (DNNs), and then goes on to say that most use transformers. It illustrates how everyone got stuck on DNNs, to the point that everything else is looked down upon: in most people's view, everything else is standard ML or NLP. Yet no one is working on anything foundationally different, something radically different from both DNNs and standard ML.

In some ways, it reminds me of the time when maximum likelihood was considered the panacea for all estimation problems. That era lasted for decades, and you can still find maximum likelihood today, sometimes even inside DNNs. Doing statistics without maximum likelihood was inconceivable. It was taught in all curricula and discussed in all books, in the same way DNNs are taught today, creating professionals who exclusively use what they learned, generating a feedback loop and barriers without being aware of it.
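For readers who have not seen it in a while, here is a minimal sketch of what maximum likelihood estimation looks like in practice. This is my own illustration, not from the article: fitting the rate of an exponential model, where the closed-form maximizer of the log-likelihood happens to be one over the sample mean.

```python
import math
import random

def exp_log_likelihood(lam, data):
    # Log-likelihood of an exponential(lam) model: n*log(lam) - lam*sum(x)
    return len(data) * math.log(lam) - lam * sum(data)

def mle_exponential(data):
    # Setting the derivative of the log-likelihood to zero gives
    # lam_hat = n / sum(x), i.e. 1 / sample mean
    return len(data) / sum(data)

random.seed(0)
true_rate = 2.0
data = [random.expovariate(true_rate) for _ in range(10_000)]
lam_hat = mle_exponential(data)

# The estimate should score at least as high on the log-likelihood
# as any other candidate rate, since it is the exact maximizer
assert exp_log_likelihood(lam_hat, data) >= exp_log_likelihood(1.0, data)
```

With 10,000 samples, `lam_hat` lands close to the true rate of 2.0, which is exactly the property that made the method so dominant for so long.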

During all those years, I developed alternatives independently, some profoundly different from everything else. I tested them on various datasets and kept what worked best. In the end, I rewrote the entire statistics and ML corpus from scratch. Not subject to "publish or perish", not subject to corporate NDAs and politics, yet well self-funded, I was able to focus on what works best, without being blindfolded by external pressures.

All my AI research is public. If you are looking for something really new, I invite you to check my publications and case studies here, and to sign up for my newsletter so you don't miss future content. It includes 50 articles and 6 books published in the last two years, many about my groundbreaking architecture for RAG / LLM, though it covers a lot more. Most feature material very different from both DNNs and standard ML.


Written by Vincent Granville

Founder, MLtechniques.com. Machine learning scientist. Co-founder of Data Science Central (acquired by Tech Target).
