In a groundbreaking move, Stability AI has unveiled StableLM, an open-source language model that is set to revolutionize the AI landscape. This innovative model, currently available in an Alpha version with 3 billion and 7 billion parameters, is just the beginning. Stability AI has plans to roll out models with a staggering 15 billion to 65 billion parameters in the near future.
StableLM is not just another language model. It is a testament to Stability AI’s commitment to making foundational AI technology accessible to all. This model can be freely used and adapted for commercial or research purposes under the CC BY-SA 4.0 license, making it a valuable tool for a wide range of applications.
This launch follows the 2022 release of Stable Diffusion, an open and scalable image model, and is a continuation of Stability AI’s mission to democratize AI technology. StableLM is designed to generate text and code, powering various downstream applications and proving that small, efficiently trained models can indeed deliver high performance.
“The release of StableLM builds on our experience in open-sourcing earlier language models with EleutherAI, a nonprofit research hub. These language models include GPT-J, GPT-NeoX, and the Pythia suite, which were trained on The Pile open-source dataset. Many recent open-source language models continue to build on these efforts, including Cerebras-GPT and Dolly-2.”
Stability AI’s experience in open-sourcing language models with EleutherAI, including GPT-J, GPT-NeoX, and the Pythia suite, has been instrumental in the development of StableLM. This model is trained on a new experimental dataset, three times larger than The Pile open-source dataset, with 1.5 trillion content tokens. More details on this dataset will be released in due course.
Despite its relatively small size, StableLM packs a punch, delivering high performance in conversational and coding tasks. Alongside StableLM, Stability AI is also releasing instruction fine-tuned research models, built using a combination of five recent open-source datasets for conversational agents. These models, however, are for research use only and are released under the noncommercial CC BY-NC-SA 4.0 license.
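The instruction-tuned chat checkpoints expect a simple special-token chat markup. As a hedged sketch (token names are taken from the StableLM GitHub repository’s README; the system text here is purely illustrative), a prompt builder might look like this:

```python
# Hedged sketch: composing a prompt in the <|SYSTEM|>/<|USER|>/<|ASSISTANT|>
# chat markup used by the StableLM-Tuned-Alpha research checkpoints.
# Token names follow the StableLM GitHub README; the system text is illustrative.
SYSTEM_PROMPT = (
    "<|SYSTEM|>StableLM is a helpful and harmless open-source AI language model."
)

def build_prompt(user_message: str) -> str:
    """Wrap a user turn in the chat markup; the model continues after <|ASSISTANT|>."""
    return f"{SYSTEM_PROMPT}<|USER|>{user_message}<|ASSISTANT|>"

print(build_prompt("Suggest a name for an open-source AI project."))
```

The resulting string is what you would feed to the tokenizer; the model’s sampled continuation after the `<|ASSISTANT|>` marker is the assistant’s reply.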
Open-sourcing its models
As a champion of transparency, Stability AI remains steadfast in its commitment to open-sourcing its models. Why? Because the company believes that understanding should come without barriers and that trust is fostered through a clear workflow. By open-sourcing its models, Stability AI is effectively pulling back the curtain, allowing investigative exploration by researchers and AI enthusiasts alike.
One of the primary benefits of this transparency is the opportunity it affords researchers to “look under the hood” of the StableLM models: unravelling their complexities, scrutinizing their performance, and gaining an in-depth understanding of how they work. This not only lets researchers verify Stability AI’s claims about model performance, but also expedites the development of interpretability techniques, which are crucial for understanding how these models arrive at their conclusions.
Additionally, this process acts as an early warning system of sorts, helping to identify risks intrinsic to the models. It fuels the proactive development of safeguards: preemptive measures to contain and limit harms arising from unforeseen behavior. This sets up a robust system that prioritizes safety and accountability.
Public and private sectors
Widening the scope beyond the research community, Stability AI’s open-source models are also beneficial to organizations across both the public and private sectors. These entities can adapt, or “fine-tune”, the publicly available models for their own applications without sharing sensitive information or relinquishing control over their AI capabilities. They can customize the models to meet their unique requirements while preserving data security and maintaining sovereignty over their AI operations.
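To make the data-sovereignty point concrete, here is a deliberately toy sketch, not Stability AI’s method: a unigram “model” whose parameters are updated on private text entirely in-process. Real StableLM fine-tuning updates neural weights with gradient descent, but the property being illustrated is the same: only derived parameters change, and the raw documents never leave the machine.

```python
# Toy illustration (not Stability AI's method): "fine-tuning" a unigram
# model on private text locally, so only derived counts change and the
# raw documents are never transmitted anywhere.
from collections import Counter

class ToyUnigramLM:
    def __init__(self) -> None:
        self.counts: Counter = Counter()

    def fine_tune(self, private_corpus: list) -> None:
        # Only the aggregated counts (the "weights") are updated;
        # the documents themselves stay in local memory.
        for document in private_corpus:
            self.counts.update(document.lower().split())

    def most_likely_word(self) -> str:
        return self.counts.most_common(1)[0][0]

lm = ToyUnigramLM()
lm.fine_tune(["internal report revenue up", "internal memo hiring plan"])
print(lm.most_likely_word())  # "internal" occurs twice, more than any other word
```

The same locality argument carries over to real fine-tuning: an organization downloads the open weights, trains on-premises, and only its own copy of the parameters changes.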
In a nutshell, Stability AI’s open-source approach not just enables a greater transparency but also bolsters trust among our community, aids researchers in their explorations and allows organizations to utilize pre-existing models while maintaining their data privacy and sovereignty. This reinforces our commitment to meaningful innovation, ultimately aimed at fostering a collaborative and inclusive AI ecosystem.
In addition to these developments, Stability AI will be launching a crowd-sourced RLHF program and collaborating with community efforts such as Open Assistant to create an open-source dataset for AI assistants, a clear indication of its commitment to a collaborative and inclusive AI ecosystem. For more information on the StableLM language model, jump over to the official Stability AI website. The model can be downloaded from the official GitHub repository.
This repository contains Stability AI’s ongoing development of the StableLM series of language models and will be continuously updated with new checkpoints, according to the development team. On August 5, 2023, a patch was released for the StableLM-Alpha v2 models with 3B and 7B parameters. As soon as more information becomes available and updates are released, we will keep you up to speed as always.
Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, TechMehow may earn an affiliate commission. Learn about our Disclosure Policy.