Wednesday, 17 January 2024

ChartAssistant: New Vision-Language Model for Chart Comprehension and Generation

Introduction

Charts are everywhere in our daily lives, from business reports to scientific papers, from news articles to social media posts. They help us visualize, understand, and communicate data patterns and insights. However, general-purpose multimodal models, which combine vision and language to perform various tasks, struggle to process charts. Charts have unique features, such as graphical elements (e.g., bars, lines, pies) and textual components (e.g., labels, legends, titles), that require special attention and alignment. Moreover, charts can be used for different purposes, such as summarization, comparison, analysis, and prediction, each requiring different types of reasoning and comprehension skills.

To address these challenges, a team of researchers from OpenGVLab, a leading research group in computer vision and natural language processing, developed a universal chart multimodal language model that can handle diverse chart-related tasks across both basic and specialized chart types. The team aimed to tackle the difficulties posed by the unique combination of graphical elements and textual components in charts, which general-purpose multimodal models find hard to comprehend. The resulting model achieves competitive performance across various chart tasks without task-specific fine-tuning, and is called 'ChartAssistant'.

What is ChartAssistant?

ChartAssistant is a specialized vision-language model designed for comprehensive chart comprehension and reasoning. It utilizes a broad dataset, known as ChartSFT, to handle a variety of chart-related tasks across both basic and specialized chart types. Its proficiency in chart comprehension and reasoning makes it an essential tool for data visualization and pattern recognition.

Key Features of ChartAssistant

ChartAssistant has several key features that make it stand out from other vision-language models for chart tasks. Here are some of them:

  • Chart-to-table pre-training: ChartAssistant utilizes a large-scale dataset, ChartSFT, for pre-training. This dataset, comprising over 200,000 chart-table pairs across 12 chart types and 10 domains, aids in learning the alignment between chart and text, as well as the semantics and syntax of chart elements and attributes.
  • Multitask instruction tuning: The model is further fine-tuned on a multitask instruction-following dataset, which contains over 100,000 chart-instruction pairs covering 10 chart tasks and 12 chart types (an illustrative example of such a pair is sketched after this list). This process exposes ChartAssistant to the diversity and complexity of chart tasks and scenarios, and develops the logic and reasoning skills required for chart comprehension and generation.
  • Universal chart model: ChartAssistant is designed to handle various chart tasks and types without task-specific fine-tuning or adaptation. It can dynamically adjust its output format and content based on the input instruction and chart image, and generate relevant, coherent, and informative natural language responses. It can also handle unseen or rare chart types and tasks, as long as they are within the scope of its pre-training and fine-tuning data and tasks.
  • Competitive performance: The unique two-stage training process enables ChartAssistant to achieve competitive performance across various chart tasks without task-specific fine-tuning.
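
To make the multitask instruction-tuning data more concrete, the sketch below shows what a single chart-instruction pair could look like. This is a minimal illustration only: the field names and values are hypothetical placeholders and do not reproduce ChartSFT's actual schema.

```python
# Illustrative only: the keys and values below are hypothetical placeholders,
# not ChartSFT's actual schema.
chart_instruction_example = {
    "image": "charts/bar_chart_0421.png",    # rendered chart image
    "table": "tables/bar_chart_0421.csv",    # underlying data table
    "task": "chart question answering",      # one of the chart tasks
    "instruction": "Which category had the highest revenue in 2022?",
    "response": "Electronics had the highest revenue in 2022.",
}
```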

Capabilities of ChartAssistant

ChartAssistant has many capabilities that demonstrate its potential and value for chart understanding and reasoning. Some of them are listed below (a few are highlighted in the figure that follows):

ChartAssistant - Overview
source - https://arxiv.org/abs/2401.02384v2

  • Chart-to-text generation: ChartAssistant can generate natural language descriptions or summaries of charts, explaining the main trend, pattern, or insight of a chart, and highlighting the key data points or comparisons.
  • Chart question answering: The model can answer natural language questions about charts, such as querying the data or metadata of a chart, or inferring or comparing the information or meaning of a chart.
  • Chart captioning: ChartAssistant is capable of generating natural language captions for charts, providing a concise and informative title or label for a chart, or describing the main purpose or message of a chart.
  • Chart summarization: The model can generate natural language summaries for charts, providing a brief and comprehensive overview or analysis of a chart, or highlighting the key data points or comparisons.
  • Chart comparison: ChartAssistant can generate natural language comparisons for charts, providing a contrast or similarity analysis of two or more charts, or highlighting the differences or commonalities of two or more charts.
  • Chart analysis: The model can generate natural language analyses for charts, providing a deeper or broader interpretation or evaluation of a chart, or explaining the causes or effects of a chart.
  • Diverse use cases: ChartAssistant’s capabilities make it a valuable tool for a wide variety of applications, ranging from academic research to business analytics, aiding in data visualization and understanding data patterns.

Architecture

ChartAssistant is designed with a focus on accurately understanding the content of charts. As shown in the figure below, it comes in two variants: ChartAst-D and ChartAst-S, with 260M and 13B parameters respectively. Both variants excel at numerous chart-related tasks, with ChartAst-D being more compact and ChartAst-S offering better generalization.

ChartAst-D and ChartAst-S network architecture
source - https://arxiv.org/pdf/2401.02384v2.pdf

ChartAst-D is a vision-language model for chart understanding built upon Donut. It comprises a visual encoder, Swin-Base, and a BART text decoder. The visual encoder divides the image into fixed-size, non-overlapping windows and applies self-attention layers to consolidate information across these windows, transforming the image into a set of tokens. These tokens, together with the tokens of the text instruction, are used by the BART decoder to generate the corresponding response.
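
As a rough illustration of this Donut-style encoder-decoder pipeline, the sketch below runs the publicly available Donut base checkpoint from Hugging Face Transformers on a chart image. Note that naver-clova-ix/donut-base is the generic Donut model, not the ChartAst-D checkpoint, and the prompt string is purely illustrative.

```python
# Minimal sketch of a Donut-style (Swin encoder + BART decoder) pipeline.
# Uses the generic Donut base checkpoint, NOT the ChartAst-D weights.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")

image = Image.open("chart.png").convert("RGB")           # a chart screenshot
pixel_values = processor(image, return_tensors="pt").pixel_values

# The instruction is tokenized and fed to the BART decoder as a prompt
# (the wording here is illustrative, not ChartAssistant's actual template).
prompt = "Convert the chart into a data table."
decoder_input_ids = processor.tokenizer(
    prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values, decoder_input_ids=decoder_input_ids, max_length=512
)
print(processor.tokenizer.decode(outputs[0], skip_special_tokens=True))
```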

ChartAst-S, on the other hand, is a large vision-language model for chart understanding, built upon SPHINX. It preserves the original information of high-resolution images through sampling and partitioning, ensuring greater fidelity to the image content. ChartAst-S incorporates multiple visual encoders, such as DINOv2, CLIP, and ConvNeXt, to extract more informative visual features. Unlike ChartAst-D, ChartAst-S directly appends the visual tokens to the text tokens, and the merged sequence is then fed into the LLM to generate the response. Thanks to this intricate visual-encoder design and the powerful reasoning ability of the LLM, ChartAst-S performs well in various real-world chart-related applications.
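
The fusion step can be pictured with a short PyTorch sketch: features from each visual backbone are projected into the LLM's embedding space, and the resulting visual tokens are concatenated with the text token embeddings before entering the LLM. This is a simplified stand-in that assumes each encoder returns patch features of shape (batch, patches, dim); it is not the actual SPHINX/ChartAst-S implementation.

```python
import torch
import torch.nn as nn


class MixedVisualEncoder(nn.Module):
    """Simplified sketch of mixing several visual backbones (e.g. DINOv2,
    CLIP, ConvNeXt). The encoders passed in are stand-ins: any module that
    maps an image tensor to (batch, patches, feat_dim) features works."""

    def __init__(self, encoders, feature_dims, llm_dim):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)
        # One projection per backbone into the LLM's embedding space.
        self.projections = nn.ModuleList(
            [nn.Linear(d, llm_dim) for d in feature_dims]
        )

    def forward(self, image):
        visual_tokens = []
        for encoder, projection in zip(self.encoders, self.projections):
            features = encoder(image)              # (batch, patches, feat_dim)
            visual_tokens.append(projection(features))
        # Concatenate the token sequences from all backbones.
        return torch.cat(visual_tokens, dim=1)     # (batch, total_patches, llm_dim)


def build_llm_inputs(visual_tokens, text_embeddings):
    """ChartAst-S-style fusion: visual tokens are prepended to the text
    token embeddings, and the merged sequence is fed to the LLM."""
    return torch.cat([visual_tokens, text_embeddings], dim=1)
```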

Performance Evaluation

ChartAssistant’s performance has been assessed across various tasks and datasets. The evaluation process involved several tasks such as chart summarization, open-ended question answering, numerical question answering, and referring question answering. Datasets like Chart-to-text, OpenCQA, ChartQA, MathQA, and ReferQA were utilized for these evaluations.

For ChartQA, MathQA, and ReferQA, a relaxed correctness criterion was adopted: an answer counts as correct if it matches the ground truth exactly or, for numerical answers, falls within a 5% error of it. For Chart-to-Text and OpenCQA, BLEU was employed as the evaluation metric. For chart-to-table translation, the RMS_F1 score from DePlot was used.
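
For clarity, the relaxed-correctness criterion can be written as a small helper: a numeric prediction counts as correct if it is within 5% of the ground-truth value, while non-numeric answers must match exactly. This follows the common formulation used for ChartQA-style benchmarks; the normalization details in the paper's own evaluation code may differ.

```python
def relaxed_correctness(prediction: str, target: str, tolerance: float = 0.05) -> bool:
    """Relaxed accuracy: numeric answers may deviate from the ground truth
    by up to `tolerance` (5% by default); other answers must match exactly."""
    try:
        pred, gold = float(prediction), float(target)
        if gold == 0.0:
            return pred == 0.0
        return abs(pred - gold) / abs(gold) <= tolerance
    except ValueError:
        # Non-numeric answer: fall back to an exact (case-insensitive) match.
        return prediction.strip().lower() == target.strip().lower()
```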

Several models, including SPHINX, ChartLlama, UniChart, MatCha, Pix2Struct, T5, and Chart-T5, were chosen as baselines. These baselines were fine-tuned on the training set of each evaluation dataset.

Performance Eval with Other Models
source - https://arxiv.org/pdf/2401.02384v2.pdf

As the results summarized in the table above show, ChartAssistant consistently outperformed the baselines across all tasks without task-specific fine-tuning. It surpassed the current leading methods by 9.3% and 1.6% on the human-authored and augmented sets of ChartQA, respectively. In open-ended question answering, it showed a 33.1% improvement relative to UniChart.

Performance Eval - chart specific models
source - https://arxiv.org/pdf/2401.02384v2.pdf

For specialized chart types, as shown in the table above, ChartAssistant demonstrated a clear advantage over current chart-specific vision-language models in all five tasks involving specialized chart types.

Performance Eval - foundation Models
source - https://arxiv.org/pdf/2401.02384v2.pdf

Despite the strengths of current multimodal models, a domain gap exists between charts and general images, which makes chart-related tasks difficult for them. To address this, a small-scale test set was built for a comprehensive evaluation of these models. The test set features five tasks, each with 50 samples covering a variety of chart types; the samples include both real-world chart-table pairs and tables paired with API-generated charts. As indicated in the table above, ChartAssistant, even in its smaller ChartAst-D variant, notably outperformed multimodal models such as GPT-4V(ision) and Bard, particularly in tasks requiring precise chart understanding, such as chart-to-table translation and numerical question answering. Although GPT-4V(ision) does well in chart summarization, the authors suspect this is due to the task's bias towards text fluency, potentially at the expense of factual accuracy.

How to Access and Use This Model?

The code and data for ChartAssistant are available in the OpenGVLab GitHub repository. Users can access and run the model locally by following the instructions provided in the repository. If you are interested in learning more about this model, all relevant links are provided at the end of this article.

Conclusion

ChartAssistant represents a significant advancement in the field of chart comprehension and reasoning. Its unique approach to training and its impressive performance across various chart tasks make it a valuable tool for data visualization and informed decision-making. As the field continues to evolve, ChartAssistant is well-positioned to lead the way in chart-based vision-language modeling.


Source
research paper - https://arxiv.org/abs/2401.02384v2
research document - https://arxiv.org/pdf/2401.02384v2.pdf
GitHub repo - https://github.com/OpenGVLab/ChartAst
