AI Mole

Which direction will AI take in 2024?


2023 was an excellent year for the whole artificial intelligence industry. But what breakthroughs should we expect from AI in 2024?

Martin Signoux, Public Policy Manager at Meta, shares his vision:

  1. AI smart glasses become a thing 😎

As multimodality rises, leading AI companies will double down on AI-first wearable devices. And what's better than the glasses form factor to host an AI assistant? The arms sit close to the ears to deliver audio, the cameras sit close to the eyes to capture ego-centric input, and they're hands-free and comfortable. Meta is leading the way with Ray-Ban, but think about the recent OpenAI x Snapchat rumor 👀 We're only getting started...

The year unfolds with the emergence of AI-powered smart glasses, marking a significant shift towards AI-first wearable devices. With companies like Meta leading the way through its Ray-Ban collaboration, and rumored partnerships like OpenAI x Snapchat on the horizon, the potential of AI smart glasses is just beginning to be explored. This form factor proves ideal for hosting AI assistants, combining proximity to the ears for audio delivery and to the eyes for capturing ego-centric input, all while offering a hands-free and comfortable experience.

  2. ChatGPT won't be to AI assistants what Google is to search

2023 started with ChatGPT taking all the light and ended with Bard, Claude, Llama, Mistral, and thousands of derivatives. As commoditization continues, ChatGPT will fade as THE reference ➡️ valuation correction.

While ChatGPT initially seized the spotlight in 2023, the landscape is diversifying with the advent of Bard, Claude, Llama, Mistral, and numerous other derivatives. As commoditization progresses, ChatGPT's role as the definitive reference may wane, ushering in a valuation correction.

  3. So long LLMs, hello LMMs

Large Multimodal Models (LMMs) will keep emerging and oust LLMs in the debate: multimodal evaluation, multimodal safety, multimodal this, multimodal that. Plus, LMMs are a stepping stone towards a truly general AI assistant.

  4. No significant breakthrough, but improvements on all fronts

New models won't bring a real breakthrough (👋 GPT-5) and LLMs will remain intrinsically limited and prone to hallucinations. We won't see any leap making them reliable enough to "solve basic AGI" in 2024. Yet, iterative improvements will make them "good enough" for various tasks.

Improvements in RAG, data curation, better fine-tuning, quantization, etc., will make LLMs robust/useful enough for many use cases, driving adoption in various services across industries.

While no groundbreaking leap is anticipated in 2024 (GPT-5 included), LLMs are expected to keep improving incrementally. Iterative enhancements in RAG, data curation, fine-tuning, and quantization will render these models robust and useful enough for many tasks, paving the way for widespread adoption across various industries.
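To make the RAG point concrete, here is a minimal sketch of the retrieve-then-generate loop. It assumes a toy TF-IDF retriever and a hypothetical `llm_generate` stand-in for whatever model API you actually call; production systems typically use dense embeddings, a vector store, and more careful prompt construction.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The retriever is plain TF-IDF for illustration only; `llm_generate`
# is a hypothetical placeholder for a real LLM call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Meta shipped Ray-Ban smart glasses with a built-in assistant.",
    "Mixtral is an open mixture-of-experts model from Mistral AI.",
    "Quantization stores model weights in lower-precision formats.",
]

def llm_generate(prompt: str) -> str:
    """Placeholder for a real LLM call (hosted API or local model)."""
    return f"[model response to a {len(prompt)}-character prompt]"

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    vectorizer = TfidfVectorizer().fit(docs + [query])
    doc_vecs = vectorizer.transform(docs)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vecs)[0]
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:k]]

def answer(query: str) -> str:
    """Ground the generation step in the retrieved context."""
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm_generate(prompt)

print(answer("Which company ships AI smart glasses?"))
```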

  5. Small is beautiful

Small Language Models (SLMs) are already a thing, but cost-efficiency and sustainability considerations will accelerate this trend. Quantization will also greatly improve, driving a major wave of on-device integration for consumer services.

Driven by considerations of cost-efficiency and sustainability, the adoption of Small Language Models (SLMs) gains momentum. Quantization plays a pivotal role in enhancing on-device integration for consumer services, aligning with the growing emphasis on efficiency.
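As a rough illustration of why quantization matters for on-device models, below is a toy symmetric int8 quantization of a weight matrix using plain NumPy. Real schemes (4-bit formats, per-channel scales, calibration-based methods like GPTQ) are considerably more involved, but the size-versus-precision trade-off is the same idea.

```python
# Toy symmetric int8 quantization: weights shrink 4x vs float32
# at the cost of a small reconstruction error.
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights to int8 values plus a per-tensor scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 values."""
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)  # one toy weight matrix
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).mean()
print(f"size: {w.nbytes / 1e6:.0f} MB -> {q.nbytes / 1e6:.0f} MB, "
      f"mean abs error: {error:.5f}")
```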

  6. An open model beats GPT-4, yet the open vs closed debate progressively fades

Looking back at the dynamism and progress made by the open-source community over the past 12 months, it's obvious that open models will soon close the performance gap. We're ending 2023 with only 13% left between Mixtral and GPT-4 on MMLU.

But most importantly, open models are here to stay and drive progress, and everybody has realised that. They will coexist with proprietary ones, no matter what open-source detractors do.

The open-source community has demonstrated remarkable dynamism, narrowing the performance gap between open models like Mixtral and proprietary models such as GPT-4. As we move forward, it becomes evident that open models are integral to driving progress and will coexist alongside proprietary ones.

  7. Benchmarking remains a conundrum

No set of benchmarks, leaderboard, or evaluation tool emerges as THE one-stop shop for model evaluation. Instead, we'll see a flurry of improvements (like HELM recently) and new initiatives (like GAIA), especially on multimodality.

In the absence of a definitive benchmark or evaluation tool, the AI community grapples with the challenge of assessing model performance. Initiatives like HELM and GAIA, especially focusing on multimodality, continue to emerge, indicating the ongoing quest for a comprehensive evaluation framework.
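For a sense of what these leaderboards actually measure, here is a bare-bones sketch of an MMLU-style multiple-choice evaluation loop. The `ask_model` function and the two sample items are hypothetical placeholders; real frameworks such as HELM or GAIA layer many more scenarios, metrics, and safeguards on top of this kind of accuracy computation.

```python
# Bare-bones multiple-choice evaluation: exact-match accuracy over
# (question, choices, answer) items. `ask_model` is a placeholder.
from dataclasses import dataclass

@dataclass
class Item:
    question: str
    choices: list[str]   # e.g. ["A) ...", "B) ...", "C) ...", "D) ..."]
    answer: str          # correct letter, e.g. "B"

def ask_model(question: str, choices: list[str]) -> str:
    """Placeholder: a real harness would prompt an LLM and parse the chosen letter."""
    return "A"

def accuracy(items: list[Item]) -> float:
    correct = sum(ask_model(i.question, i.choices) == i.answer for i in items)
    return correct / len(items)

dataset = [  # toy items for illustration only
    Item("2 + 2 = ?", ["A) 3", "B) 4", "C) 5", "D) 22"], "B"),
    Item("Capital of France?", ["A) Paris", "B) Rome", "C) Madrid", "D) Berlin"], "A"),
]
print(f"accuracy: {accuracy(dataset):.0%}")
```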

  8. Existential risks won't be much discussed compared to existing risks

While X-risks made the headlines in 2023, the public debate will focus much more on present risks and controversies related to bias, fake news, user safety, election integrity, etc.

While existential risks captured headlines in 2023, the public discourse in 2024 is expected to shift towards existing risks and controversies. Topics such as bias, fake news, user safety, and election integrity will dominate discussions, reflecting a broader concern for the immediate implications of AI technologies.

Even though his opinion is strongly intertwined with Meta's global point of view and strategy, stakeholders of this kind drive the market and shape its direction. So we'll see at the end of the year whether these predictions hit their target.