
Wednesday, 7 June 2023

SELFEE: AI Model That Uses Self-Feedback to Revise Its Text Generation


Introduction

Language models (LMs) are powerful tools that can generate natural language text for various tasks, such as summarization, question answering, and creative writing. However, LMs do not always produce high-quality text on their first attempt and may need to revise their output based on feedback. In this blog post, we will 
explain SELFEE, a new instruction-following LM that can generate its own feedback and revise its output iteratively until it reaches a satisfactory level of quality.

SELFEE was developed by a team of researchers from KAIST AI: Seonghyeon Ye, Yongrae Jo, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, and Minjoon Seo. The team was motivated by how effective self-feedback generation is at improving the output quality of LMs. They fine-tuned LLaMA (7B and 13B), a large-scale open LM, on 178K training instances containing self-feedback and revision data generated with ChatGPT, OpenAI's closed-API conversational model. The first version of SELFEE was released on May 31, 2023.

The goal behind SELFEE was to create a model that can generate high-quality text for humans without relying on external models or tools at inference time. The team also wanted SELFEE to be accessible and easy to use for anyone who needs natural language text generation.

What is SELFEE?

SELFEE is built on the LLaMA model (7B, 13B), a transformer-based LM pre-trained on large amounts of publicly available text. The team fine-tuned LLaMA on instruction, self-feedback, and revision data generated with ChatGPT, OpenAI's closed-API conversational model, so that SELFEE learns to critique and revise its own answers without calling any external model at inference time.
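
To make this fine-tuning setup more concrete, here is a minimal sketch of what one self-feedback training instance might look like and how it could be flattened into a single training sequence. The field names, section markers, and the build_training_text helper are illustrative assumptions for this post, not the exact schema used in the SelFee repository.

```python
# A hypothetical SELFEE-style training instance: an instruction, an initial
# answer, critique-style feedback, and a revised answer. Field names and
# section markers are placeholders, not the repository's exact schema.
example = {
    "instruction": "Summarize the plot of 'Romeo and Juliet' in two sentences.",
    "initial_answer": "Two young lovers from feuding families fall in love.",
    "feedback": "The answer misses the tragic ending; add it and keep it to two sentences.",
    "revised_answer": (
        "Two young lovers from feuding families in Verona secretly marry. "
        "A chain of misunderstandings ends with both taking their own lives, "
        "which finally reconciles their families."
    ),
}

def build_training_text(ex: dict) -> str:
    """Flatten one instance into a single sequence so the model can learn to
    produce the answer, the feedback, and the revision in one pass."""
    return (
        f"Instruction: {ex['instruction']}\n"
        f"Answer: {ex['initial_answer']}\n"
        f"Feedback: {ex['feedback']}\n"
        f"Revision: {ex['revised_answer']}\n"
    )

if __name__ == "__main__":
    print(build_training_text(example))
```

Flattening each instance this way is what would allow the fine-tuned model to emit answer, feedback, and revision in a single autoregressive generation.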

Key Features of SELFEE

SELFEE has several key features that make it unique and powerful:

  • It does not require a document retrieval process, few-shot demonstrations for in-context learning, a large LM over 100B in size, or task-specific models.
  • It can generate self-feedback and self-revision for any natural language instruction, regardless of the domain or task.
  • It can handle creative writing or long-form text generation tasks that require an iterative writing process to produce high-quality text for humans.
  • It can combine the knowledge of its base LM (LLaMA) with the feedback and revision behavior learned from ChatGPT-generated training data to improve its output.

Capabilities of SELFEE

SELFEE can be used for various natural language text generation tasks that require following instructions and producing high-quality text. Some examples are:

  • Writing summaries or reviews for books, movies, products, etc.
  • Writing answers or explanations for questions or problems.
  • Writing stories or poems based on prompts or themes.
  • Writing captions or headlines for images or videos.
  • Writing emails or messages based on scenarios or goals.

The architecture and feedback loop of SELFEE

SELFEE is an instruction-following LM that generates and revises its output based on self-feedback. Given an instruction Q, SELFEE produces an initial answer A0 and then generates a feedback sequence F0 for it. If F0 indicates that revision is needed, SELFEE generates a revised answer A1 conditioned on Q, A0, and F0, and then produces new feedback F1 for A1. This answer-feedback-revision cycle continues within a single autoregressive generation until the feedback indicates that no further revision is required.

source - https://github.com/kaistAI/SelFee

The figure above, taken from the SelFee repository, illustrates this pipeline. During training, answers, feedback, and revisions generated by ChatGPT are used to fine-tune LLaMA so that the resulting model learns to answer, critique, and revise on its own. During inference, SELFEE takes Q as input, outputs A0 and feedback F0, decides from F0 whether a revision is needed, and if so outputs A1 followed by new feedback F1, continuing until it produces an answer that requires no further revision or it reaches the maximum number of iterations.
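
To illustrate the feedback loop described above, here is a simplified sketch using Hugging Face Transformers. For readability it implements the answer, feedback, and revision steps as separate generate() calls, whereas the released SelFee checkpoints chain these steps inside a single generation; the checkpoint path, prompt wording, and the "NO REVISION NEEDED" marker below are placeholders rather than the project's real prompt format.

```python
# Simplified sketch of the answer -> feedback -> revision loop with Hugging Face
# Transformers. Checkpoint path, prompts, and the "NO REVISION NEEDED" marker
# are illustrative assumptions, not SelFee's actual format.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/selfee-7b"  # placeholder: use the checkpoint from the repo
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Return only the newly generated text, not the prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

def self_revise(instruction: str, max_iterations: int = 3) -> str:
    answer = generate(f"Instruction: {instruction}\nAnswer:")
    for _ in range(max_iterations):
        feedback = generate(
            f"Instruction: {instruction}\nAnswer: {answer}\n"
            "Feedback (write 'NO REVISION NEEDED' if the answer is already good):"
        )
        if "NO REVISION NEEDED" in feedback.upper():
            break  # the model judges its own answer to be good enough
        answer = generate(
            f"Instruction: {instruction}\nAnswer: {answer}\n"
            f"Feedback: {feedback}\nRevised answer:"
        )
    return answer

print(self_revise("Write a haiku about autumn rain."))
```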

How does SELFEE compare with other LLMs in terms of performance?
 
According to the Vicuna evaluation setting, SELFEE models (7B, 13B) outperform other open-sourced LLMs such as LLaMA, Alpaca, Vicuna, and Guanaco, and are on par with closed-API LLMs such as ChatGPT and Bard. The Vicuna evaluation uses GPT-4 as a judge to compare model answers on a fixed set of questions covering various domains and tasks.

source - https://github.com/kaistAI/SelFee
The figure shows the performance comparison of SELFEE with other LLMs. This result makes SELFEE models some of the most powerful open-sourced models available today. SELFEE is especially effective in creative writing and long-form text generation, tasks that benefit from an iterative writing process to produce high-quality text.
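
For readers curious how such comparisons are typically scored, here is a simplified sketch of a Vicuna-style pairwise evaluation in which GPT-4 grades two answers to the same question. The judge prompt, the score parsing, and the use of the 2023-era openai Python SDK are assumptions for illustration, not the exact evaluation scripts behind the reported numbers.

```python
# Simplified Vicuna-style pairwise evaluation: GPT-4 grades two answers to the
# same question on a 1-10 scale. Judge prompt and score parsing are illustrative
# assumptions. Uses the 2023-era `openai` SDK; set OPENAI_API_KEY first.
import os
import re
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def judge(question: str, answer_a: str, answer_b: str) -> tuple[float, float]:
    prompt = (
        "You are a helpful and impartial judge. Rate the two answers below to the "
        "same question on a scale of 1 to 10 for helpfulness, relevance, and "
        "accuracy. Reply with two numbers separated by a space, then a short "
        f"explanation.\n\nQuestion: {question}\n\n"
        f"Answer A: {answer_a}\n\nAnswer B: {answer_b}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    text = response["choices"][0]["message"]["content"]
    scores = re.findall(r"\d+(?:\.\d+)?", text)[:2]  # first two numbers are the scores
    return float(scores[0]), float(scores[1])

score_a, score_b = judge(
    "Explain why the sky is blue in one paragraph.",
    "Because of Rayleigh scattering of sunlight by air molecules.",
    "Because the ocean reflects its color into the air.",
)
print(score_a, score_b)
```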

How to access and use this model?

Discovering and utilizing this model is a straightforward process. SELFEE is conveniently accessible through an online demo, enabling users to input natural language instructions and witness the impressive capabilities of SELFEE as it generates and refines its output. Furthermore, users have the opportunity to observe the feedback sequences generated by SELFEE for each output, enhancing the understanding of its decision-making process. The demo itself can be accessed through the following link: https://kaistai.github.io/SelFee/demo.

For those who prefer local usage, SELFEE is open source, allowing individuals to leverage its power on their own systems. Comprehensive guidance on how to install and operate SELFEE locally can be found on GitHub at: https://github.com/kaistAI/SelFee.

It is essential to note that SELFEE operates under the Apache License 2.0, granting users the freedom to employ it for both commercial and non-commercial purposes. However, adherence to the terms and conditions outlined in the license is required.

Limitations

SELFEE is not perfect and has some limitations, such as:

  • It relies on the quality of its base LM (LLaMA) and its feedback generator (ChatGPT), which may have errors or biases in their outputs.
  • It underperforms closed-API models such as ChatGPT and Bard on math, reasoning, factuality, and coding tasks.
  • It may generate feedback that is irrelevant, redundant, or contradictory to the instruction or the output.
  • It may generate revisions that are worse than the original output or introduce new errors or inconsistencies.

Conclusion

In this blog post, we introduced SELFEE, an instruction-following language model whose distinctive ability is to generate its own feedback and keep refining its output until it reaches a satisfactory level of quality. We covered the motivation behind its development, what it is, its key features and capabilities, its architecture and feedback loop, how it compares with other models, and how to access it, along with its limitations.


Source
project details - https://kaistai.github.io/SelFee/
GitHub repo - https://github.com/kaistAI/SelFee
demo link - https://kaistai.github.io/SelFee/demo
related research paper (Self-Refine) - https://arxiv.org/abs/2303.17651
