In the past year, have open-source large models truly achieved more progress and accomplishments than closed-source large models?

The Performance Gap Between Open and Closed Source Large Models

While open-source large models have made significant contributions, they still lag behind their closed-source counterparts in performance. Comparative studies consistently find a noticeable gap across a range of tasks.

It is worth noting that GPT-4 reportedly finished training in 2022 and was released in March 2023, so it is no longer "new." Even so, open-source models such as Llama-2-70B-chat still trail not only GPT-4 but even ChatGPT on many tasks.

Currently, the best-performing large models, such as OpenAI's ChatGPT/GPT-4 and Google's Gemini, are not open source, and they significantly outperform open-source models such as Meta's Llama and Alibaba's Tongyi Qianwen.

The Indispensable Role of Open-Source Large Models in Advancing the Field

Open-source large models, particularly the Llama family, have played a critical role in advancing the field. The proliferation of Llama-based models is a testament to this impact. For instance:

  1. LLaMA Pro: An extension of the Llama model by Tencent, grown via block expansion.
  2. TinyLlama: A capable small model (1.1B parameters).
  3. LlaMaVAE: Combines Llama with a variational autoencoder for latent sentence-space generation.
  4. Lag-Llama: For time-series forecasting.
  5. Lawyer Llama: Applications in the legal field.
  6. Fin-Llama: Specialized in finance.
  7. Code-Llama: Focused on the coding domain.

Without the open-sourcing of large models like Llama, many of these research avenues would have been out of reach.

The impact is also evident in paper citations. Llama-2, released in July, has already accumulated roughly half as many citations as GPT-4, released back in March, and a noticeably higher share of them are high-impact, methodological citations. GPT-4, in contrast, is more often cited merely in background introductions.

The gap in research influence is even more pronounced when the comparison is made against Llama-1, which was released only three weeks before GPT-4.

Thus, from the perspective of fostering the field's advancement, open-source large models are indispensable: not the dominant performers, but essential contributors to the field's progress.

The Ongoing Debate: Open vs Closed Source Large Models

The AI salon is still livestreaming, and people are already slipping in questions. First off, I believe this question has no definite answer, because we do not really know how far closed-source large models have advanced: closed-source models operate in the dark, while open-source models stand in the light. Moreover, closed-source teams can freely absorb the experience and new ideas of open-source models, whereas it is far harder to infer a closed-source model's technical details from its external behavior.

Therefore, a direct comparison is difficult. There is no doubt, however, that open-source large models have made significant achievements and progress over the past year. Internationally and domestically, Tongyi, ChatGLM, Baichuan, Yi, Llama, MistralAI, and Falcon are among the excellent open-source models available.

Meta, in particular, has led the trend of open-source large language models (LLMs): it released Llama in February 2023, followed by Llama-2 in July.

LeCun, for his part, is not only a critic of ChatGPT but also a staunch supporter of open source.

MistralAI, a French AI startup, pioneered the magnet-link release: terse and direct, simply dropping a download link with no accompanying announcement.

MistralAI not only open-sourced an 8x7B MoE model (Mixtral 8x7B) but also announced plans to open-source a "GPT-4 level model" in 2024.

At the end of the year, Microsoft open-sourced phi-2, showcasing the potential of smaller models: with sufficiently high data quality, small-model performance can perhaps be pushed much further. This year, we may well see more of these small models running on mobile devices and PCs (even CPU-only).
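To give a sense of how accessible such small models already are, here is a minimal sketch of running phi-2 on an ordinary CPU via the Hugging Face transformers library. The prompt and generation settings are illustrative choices, not from the original post, and older transformers versions may additionally require `trust_remote_code=True`:

```python
# Minimal sketch: CPU-only inference with a small open model (phi-2),
# using Hugging Face transformers. Settings here are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # ~2.7B parameters, small enough for a laptop CPU

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # plain fp32 on CPU; no GPU required
)

prompt = "Why do open-source language models matter?"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the example deterministic and reasonably fast on CPU.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern works for other small open models; only the model ID changes, which is exactly the kind of interchangeability that open weights make possible.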

Undoubtedly, open-source LLMs matter. The open-source community offers more room for ideas to collide and be tested, advancing everyone's understanding of the technology and allowing academia to play a more collaborative role in the progress of LLMs.

The field of large models naturally requires a significant amount of investment, especially in open-source development…
