
Tuesday, 18 July 2023

AlpaGasus: How to Train LLMs with Less Data and More Accuracy


Introduction

Have you ever wondered how to teach an "alpaca" to do amazing things with just a fraction of the usual training data? If so, you might be interested in AlpaGasus, a new model introduced by a team of researchers. The model was developed with the goal of improving the instruction-following ability of Alpaca-style language models while training on far fewer examples. (The 'source' section at the end of this article also links a community-cleaned version of the Alpaca dataset, maintained on GitHub by Gururise.)

What is AlpaGasus?

AlpaGasus is a machine learning model that obtains instruction-following capability through instruction-finetuning (IFT) on supervised instruction/response data. It uses a simple and effective data selection strategy that automatically identifies and removes low-quality training data using a strong LLM (e.g., ChatGPT).

Key Features of AlpaGasus

  • Efficient use of data: One of the key features of AlpaGasus is its ability to improve the accuracy of instruction-following capability in large language models (LLMs) using fewer data points. This makes the model more efficient and cost-effective, as it requires less data to achieve high levels of accuracy.
  • Transfer learning: AlpaGasus builds on a pre-trained LLM, and a separate strong LLM is used to rate the quality of the training data. Only the highest-quality data are kept for fine-tuning, resulting in improved accuracy.
  • Auto-grading of training data: AlpaGasus uses a novel approach to rate the quality of training data by prompting a strong API LLM, such as ChatGPT, to produce a score for each (instruction, input, response) triplet. This lets the model automatically select high-quality data for fine-tuning without expensive and time-consuming human annotation (a minimal sketch of this grading step follows this list).
  • Improved accuracy: The team behind AlpaGasus demonstrated that their model achieved improved accuracy in instruction-following capability compared to traditional fine-tuning methods. This improved accuracy makes AlpaGasus a valuable tool for improving the performance of LLMs.
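
To make the grading step concrete, here is a minimal sketch of what such an auto-grader might look like in Python. The prompt wording, the gpt-3.5-turbo model choice, and the 0-to-5 scale here are illustrative assumptions, not the authors' exact template (see the research paper for the real prompt):

```python
# Minimal sketch of LLM-based auto-grading; not the paper's exact prompt.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
import re
from openai import OpenAI

client = OpenAI()

def score_triplet(instruction: str, inp: str, response: str) -> float:
    """Ask a strong LLM to rate one (instruction, input, response) triplet 0-5."""
    prompt = (
        "Rate the quality of the following training example for "
        "instruction-tuning on a scale of 0 to 5. Reply with the score first.\n\n"
        f"Instruction: {instruction}\nInput: {inp}\nResponse: {response}"
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic grading
    )
    text = reply.choices[0].message.content
    match = re.search(r"\d+(\.\d+)?", text)  # pull the first number from the reply
    return float(match.group()) if match else 0.0
```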

How does AlpaGasus work?

AlpaGasus is a smart machine learning model that can follow instructions better than other models while using less data. How does it do that? It uses another powerful model to grade its own training data.

 AlpaGasus needs data to learn how to follow instructions. But not all data are good for learning. Some data are too easy, too hard, or too confusing. So AlpaGasus asks a strong model, like ChatGPT, to give a score to each piece of data. The score tells AlpaGasus how good the data are for learning.

Then AlpaGasus only picks the data with high scores to learn from. It ignores the data with low scores, because they are not helpful. This way, AlpaGasus can learn faster and better with less data.
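
In code, the filtering step is little more than a threshold test. Here is a toy sketch with made-up examples; the 4.5 cutoff on a 0-to-5 scale matches the threshold reported in the paper:

```python
# Toy filtering step: keep only high-scored training examples.
# SCORE_THRESHOLD = 4.5 follows the cutoff reported in the paper.
SCORE_THRESHOLD = 4.5

dataset = [
    {"instruction": "Name three primary colors.",
     "input": "", "response": "Red, blue, and yellow.", "score": 5.0},
    {"instruction": "Summarize the text.",
     "input": "", "response": "N/A", "score": 1.0},  # low quality: no real answer
]

filtered = [ex for ex in dataset if ex["score"] >= SCORE_THRESHOLD]
print(f"kept {len(filtered)} of {len(dataset)} examples")  # kept 1 of 2
```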

Figure: The AlpaGasus fine-tuning pipeline (source: https://lichang-chen.github.io/AlpaGasus/)

The figure above shows the fine-tuning pipeline of AlpaGasus: the high-scored data are used to fine-tune a base model from the LLaMA series, teaching it to follow many different types of instructions. This makes the LLaMA model even better at following instructions.

By using this clever trick of automatically grading and filtering the training data, AlpaGasus can improve its accuracy without relying on humans to label data quality. This saves time and money and makes AlpaGasus a very efficient and effective model.
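
Conceptually, this last stage is ordinary supervised fine-tuning on the filtered examples. Below is a rough sketch using the Hugging Face transformers and datasets libraries; the model id, prompt template, and hyperparameters are placeholders, not the paper's exact training setup:

```python
# Rough supervised fine-tuning sketch with Hugging Face transformers.
# Model id, prompt template, and hyperparameters are illustrative only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "huggyllama/llama-7b"  # placeholder; any causal LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

def to_text(ex):
    # Simplified Alpaca-style prompt template.
    return {"text": f"### Instruction:\n{ex['instruction']}\n\n"
                    f"### Input:\n{ex['input']}\n\n### Response:\n{ex['response']}"}

train = Dataset.from_list(filtered).map(to_text)  # `filtered` from the step above
train = train.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                  remove_columns=train.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="alpagasus-sketch",
                           per_device_train_batch_size=4,
                           num_train_epochs=3),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```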

How AlpaGasus Beats Other Models

The team behind AlpaGasus came up with an innovative way to improve instruction-finetuning (IFT) in large language models (LLMs). They created a prompt for a powerful LLM, like ChatGPT, to evaluate the quality of each (instruction, input, response) triplet, then filtered out the ones with low scores. When they applied this filter to the 52k examples used to train Alpaca, most of the data fell below the quality threshold. Fine-tuning on only the roughly 9k examples that survived the filter produced a much better model: AlpaGasus. This shows that quality is more important than quantity when it comes to IFT.

Figure: Performance of AlpaGasus on four test sets (source: https://lichang-chen.github.io/AlpaGasus/)

The team evaluated AlpaGasus on four different human-instruction test sets, using GPT-4 as the judge. In both the 7B and 13B model comparisons, AlpaGasus performed significantly better than Alpaca on all four test sets. They also ran a detailed evaluation of AlpaGasus on individual skill categories from the Vicuna test set, such as Generic, Roleplay, Knowledge, and Commonsense, and AlpaGasus did better on most of them.
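
The judging setup can be sketched in the same style as the grader above: GPT-4 sees the instruction plus two candidate answers and picks the better one. This is an illustrative pairwise-comparison prompt, not the paper's exact evaluation template; comparisons like this are sensitive to answer order, so evaluations in this line of work typically score both orderings:

```python
# Illustrative LLM-as-judge pairwise comparison; not the paper's exact template.
from openai import OpenAI

client = OpenAI()

def judge(instruction: str, answer_a: str, answer_b: str) -> str:
    """Ask GPT-4 which of two answers follows the instruction better."""
    prompt = (
        "You are a judge. Given the instruction and two answers, say which "
        "answer is better: reply 'A', 'B', or 'tie'.\n\n"
        f"Instruction: {instruction}\n\nAnswer A: {answer_a}\n\nAnswer B: {answer_b}"
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic judging
    )
    return reply.choices[0].message.content.strip()
```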

This discovery shows that prioritizing data quality can lead to a more efficient and effective way to fine-tune LLMs. By using their novel data-filtering strategy, the team behind AlpaGasus was able to produce a much better model than the original Alpaca in less time and with less data.

For other experimental results, please refer to the research paper linked in the source section.

How to access and use this model?

One way to explore AlpaGasus is through its project website and the linked GitHub repository, where you can find the cleaned Alpaca dataset. The code for the AlpaGasus model itself is not currently available, but interested readers can refer to the research paper for more information on how the model was developed.

The dataset is released under the MIT license, so it can be used for both commercial and non-commercial purposes; the exact terms are specified in the LICENSE file in its GitHub repository.
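
As a quick-start illustration, the cleaned Alpaca data can be fetched straight from the GitHub repository and loaded as plain JSON. The file name below matches the repository layout at the time of writing; check the repo if it has changed:

```python
# Download and inspect the cleaned Alpaca dataset (file name per the repo;
# verify against https://github.com/gururise/AlpacaDataCleaned/ if it moves).
import json
import urllib.request

url = ("https://raw.githubusercontent.com/gururise/"
       "AlpacaDataCleaned/main/alpaca_data_cleaned.json")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(len(data), "examples")
print(data[0].keys())  # each example has instruction / input / output fields
```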

If you are interested in learning more about AlpaGasus, all relevant links are provided under the 'source' section at the end of this article.

Limitations and Future Work

AlpaGasus is a great model that can follow instructions better than other models while using less data. But it is not perfect, and it has some limitations worth being aware of. Here are some of them:

  • It is not very big: AlpaGasus comes in 7B and 13B sizes, which are common for open-source LLMs. But there are bigger models out there, like 33B, 65B, or even 175B, and it is not yet known whether the same data-filtering recipe works as well at those scales.
  • It does not ask humans: AlpaGasus relies on another powerful model, like ChatGPT, to grade its training data and pick the best examples to learn from. No human feedback is collected on the data or on the model's responses, and human judges might disagree with ChatGPT or give richer feedback.
  • It only uses one dataset: AlpaGasus learns to follow instructions from a single dataset, the 52k Alpaca IFT dataset. Other instruction datasets cover different types of instructions, and the filtering approach has not yet been tested on them.

As part of future work, the team plans to improve AlpaGasus by testing bigger models, collecting human feedback, and experimenting with more datasets.

Conclusion

In conclusion, AlpaGasus is a promising new machine learning model that has been shown to follow instructions more accurately than comparable models while training on far fewer data points. Its key idea is a simple and effective data selection strategy that automatically identifies and removes low-quality training data using a strong LLM (e.g., ChatGPT). While further research is needed to determine its generalizability, AlpaGasus represents an exciting development in the field of machine learning.

source
research paper - https://arxiv.org/abs/2307.08701
research paper (PDF) - https://arxiv.org/pdf/2307.08701.pdf
Alpaca dataset (cleaned) - https://github.com/gururise/AlpacaDataCleaned/
project details - https://lichang-chen.github.io/AlpaGasus/
license - https://github.com/gururise/AlpacaDataCleaned/blob/main/LICENSE
