A recently leaked Google memo reveals that while Microsoft and Google are getting all the hype about generative AI, open source developers could still win the market battle.

A Google AI engineer wrote: “The inconvenient truth is that we are not in a position to win this [Generative AI] arms race, and neither is OpenAI. While we argued, a third faction has been quietly eating our lunch.” And who is this hidden third party? Amazon Web Services (AWS)? IBM? Baidu? It’s none of them. It’s the open source community.

How can that be? Doesn’t generative AI require hyperscale clouds to serve large language models (LLMs) that produce high-quality answers? Actually, no, it doesn’t.

It turns out that you can run an LLM on a smartphone: people have run foundation models on a Pixel 6 at five tokens per second. As others have demonstrated, you can fine-tune a custom AI on your laptop in an evening.
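
To give a sense of how little is involved, here is a minimal sketch of local inference using the llama-cpp-python bindings. The model file path is a placeholder; you would supply your own quantized weights (for example, a 4-bit GGUF file), and this is an illustration rather than a specific setup described in the memo.

```python
# Minimal sketch: running a quantized LLaMA-family model on consumer hardware
# with the llama-cpp-python bindings. The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-7b-q4.gguf", n_ctx=512)  # hypothetical file

output = llm("Explain LoRA fine-tuning in one sentence.", max_tokens=64)
print(output["choices"][0]["text"])
```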

In other words, “Being able to customize a language model in hours on consumer hardware is a big deal, particularly for aspirations that involve incorporating new and different knowledge in near real time.”

The revolution

The key to this revolution? The recent leak of Meta’s Large Language Model Meta AI (LLaMA), which has spurred an avalanche of innovation from the open source community. Despite lacking initial instruction or conversation tuning, the model was rapidly iterated on, with instruction tuning, quantization, quality improvements, and more arriving in quick succession.

Chief among the game-changers is a low-cost fine-tuning mechanism known as low-rank adaptation (LoRA), which allows for model tuning at a fraction of the cost and time. By letting people customize a language model in hours on consumer hardware, it has significantly lowered the barrier to entry for training and experimentation.
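
For the curious, this is roughly what LoRA fine-tuning looks like with Hugging Face’s peft library. The model name and hyperparameters below are illustrative assumptions, not a recipe from the memo.

```python
# Minimal sketch: wrapping a causal language model with LoRA adapters via
# Hugging Face's peft library. Model name and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_7b")  # example model

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model's weights
```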

As our mysterious developer put it, “Part of what makes LoRA so great is that, like other forms of tuning, it’s stackable. Improvements such as instruction tuning can be applied and then leveraged as other contributors add dialogue, reasoning, or tool use. While the individual fine-tunes are low rank, their sum need not be, allowing full-rank updates of the model to accumulate over time. This means that as new and better datasets and tasks become available, the model can be inexpensively upgraded without ever having to pay the cost of a full run.”
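
One way to picture that stacking is to merge each low-rank adapter back into the weights before applying the next one. The sketch below assumes peft’s PeftModel and merge_and_unload; the adapter directories are hypothetical.

```python
# Sketch of stacking fine-tunes: merge one LoRA adapter into the base weights,
# then apply the next on top. The adapter paths below are hypothetical.
from transformers import AutoModelForCausalLM
from peft import PeftModel

model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_7b")  # example model

for adapter_dir in ["adapters/instruction-tuning", "adapters/dialogue", "adapters/tool-use"]:
    model = PeftModel.from_pretrained(model, adapter_dir)  # attach the low-rank update
    model = model.merge_and_unload()                       # fold it into the weights

# Each individual update is low rank, but the accumulated result need not be.
```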

So generative AI is now within reach of virtually any AI-savvy open source developer. The open source community has also been good at using high-quality, curated datasets for training, following the reasoning that data quality scales better than data size. These datasets are typically built with synthetic methods and scavenged from other projects.
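
As a rough illustration of what such curated data looks like, here is an Alpaca-style instruction record written out in Python. The fields and file name are assumptions for the sketch; the point is that a few thousand carefully reviewed examples can outperform vastly larger, noisier corpora.

```python
# Sketch of the kind of small, curated instruction-tuning dataset the open
# source community favors: a handful of high-quality, often synthetic records.
import json

records = [
    {
        "instruction": "Summarize why LoRA lowers the cost of fine-tuning.",
        "input": "",
        "output": "LoRA trains small low-rank update matrices instead of all of the model's weights.",
    },
    # ... a few thousand carefully reviewed examples rather than billions of raw tokens
]

with open("curated_instructions.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```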

Reevaluation

Recent advances in the open source community have prompted a reevaluation of the strategies of both Google and OpenAI. Rapid innovation, coupled with the lack of usage restrictions, makes open source AI models an attractive alternative for many users.

I think this is only appropriate. After all, while the FAANG companies have benefited from generative AI thus far, all of their work has been based on open source AI programs. Without TensorFlow, PyTorch, and Hugging Face’s Transformers, there would be no ChatGPT or Bard.

Of course, Meta, which started this revolution, is also uniquely positioned to get the most out of incorporating this open source work into its products. Perhaps the other big companies betting their futures on AI will realize that letting open source developers work with their models will work to their advantage. After all, that approach has driven pretty much every major software advance over the past twenty years. Why should generative AI be any different?

As our mystery developer at Google put it, “Competing directly with open source is a losing proposition. … we shouldn’t expect to be able to catch up. The modern internet runs on open source for a reason. Open source has some significant benefits that we can’t replicate.” Exactly.
