
Friday, 7 February 2025

DeepSeek AI's Janus-Pro: A Leap in Multimodal Learning

Presentational View

Introduction

Advances in multimodal understanding and model scaling are pushing Artificial Intelligence toward ever more capable and versatile systems. Recent years have seen landmark progress in AI's ability to process information across data modalities, whether textual or visual, stepping beyond unimodal confines. This progress has largely been fueled by scaling models to billions of parameters and by improved training methods that yield better learning with greater efficiency.

Challenges remain. Seamlessly combining multimodal understanding with strong content generation, while guaranteeing stability and high-quality output, is difficult. Optimizing training strategies to overcome limitations in training-data quality and the bottlenecks that arise in unified multimodal architectures remains an area of intense research and development. Janus-Pro is a notable achievement that builds on this trend and confronts these challenges in multimodal AI.

What is Janus-Pro?

Janus-Pro is a unified multimodal model capable of both understanding and generating content across modalities. It builds on model scaling and refined training, and introduces key architectural and strategic optimizations. It is released in variants such as the 1B-parameter Janus-Pro-1B and the 7B-parameter Janus-Pro-7B, catering to varied computational needs and deployment environments. Each variant is engineered to balance high performance with resource efficiency.

Key Features of Janus-Pro

Janus-Pro delivers improved functionality and efficiency through several distinctive features.

  • Unified Multimodal Architecture: It applies a single, unified architecture to multimodal understanding and generation. It simplifies the design and increases efficiency since separate modality-specific pipelines are no longer needed.
  • Rich representations from multimodal fusion: Janus-Pro combines information from a variety of different modalities in order to form comprehensive multimodal representations. This deep fusion facilitates an understanding of context and makes it possible to create content that fluidly incorporates both textual and visual elements.
  • Scalable and Data-Efficient Design: Janus-Pro is built to scale, benefiting from larger datasets and greater computational resources while keeping the added demands proportionate, and it retains learning efficiency even when less data is available.
  • Resource-Efficient High Performance: Janus-Pro achieves strong results at relatively small model sizes, such as the 1B-parameter Janus-Pro-1B and the 7B-parameter Janus-Pro-7B, delivering high performance without heavy computational cost.

Capabilities and Use Cases of Janus-Pro

Janus-Pro's distinct capabilities enable diverse applications:

  • Coherent Multimodal Narrative Creation: By integrating textual and visual content, Janus-Pro can produce richer, more engaging material, such as comprehensive illustrated reports or illustrated storybooks.
  • Contextual Data Understanding: Improves tasks such as image captioning, going beyond mere object description to provide richer contextual information by integrating visual and textual cues.
  • Interactive AI Systems with Multimodal Responses: Facilitates the development of AI assistants that can process and respond with both text and images, enabling more natural and engaging user interaction.
  • Cross-Modal Information Retrieval: Supports retrieval across modalities, for example finding images that match a written description, or producing written summaries for a given image.

How does Janus-Pro work?

Janus-Pro handles multimodal processing efficiently by pairing decoupled visual encoders with a single Transformer backbone shared across tasks. For multimodal understanding, it uses the SigLIP encoder to obtain image representations aligned with text; these representations are then projected into the large language model's (LLM) embedding space by an understanding adaptor. For visual generation, a VQ tokenizer maps images into discrete codes, and a generation adaptor projects the corresponding codebook embeddings into the LLM's embedding space. The shared Transformer backbone then processes these combined feature sequences to produce coherent, contextually aware outputs across modalities.

Architecture of Janus-Pro
source - 
https://github.com/deepseek-ai/Janus/blob/main/janus_pro_tech_report.pdf

The architecture further includes a mixture-of-modality experts router that dynamically assigns experts based on the input modality, while cross-modal attention mechanisms allow the model to learn inter-modality relationships. These components work together to let Janus-Pro perform both understanding and generation tasks efficiently across different data types.
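
To make the data flow concrete, here is a minimal PyTorch-style sketch of the decoupled-encoder idea described above. The module names, dimensions, and the tiny Transformer are illustrative stand-ins for SigLIP, the VQ tokenizer, the two adaptors, and the actual LLM backbone; this is a sketch of the design, not Janus-Pro's implementation.

```python
import torch
import torch.nn as nn

class JanusStyleBackbone(nn.Module):
    """Toy sketch of a decoupled-encoder design: one visual pathway for
    understanding (SigLIP-like features) and one for generation (VQ codes),
    both projected into a shared autoregressive LLM's embedding space."""

    def __init__(self, d_model=512, vq_codebook=1024, vocab=32000):
        super().__init__()
        self.und_encoder = nn.Linear(768, 768)              # stand-in for SigLIP features
        self.und_adaptor = nn.Linear(768, d_model)          # understanding adaptor -> LLM space
        self.gen_codebook = nn.Embedding(vq_codebook, 256)  # VQ codebook embeddings (toy)
        self.gen_adaptor = nn.Linear(256, d_model)          # generation adaptor -> LLM space
        self.text_embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)  # shared backbone

    def forward(self, text_ids, siglip_feats=None, vq_ids=None):
        parts = [self.text_embed(text_ids)]
        if siglip_feats is not None:                        # multimodal understanding path
            parts.append(self.und_adaptor(self.und_encoder(siglip_feats)))
        if vq_ids is not None:                              # visual generation path
            parts.append(self.gen_adaptor(self.gen_codebook(vq_ids)))
        seq = torch.cat(parts, dim=1)                       # one interleaved feature sequence
        return self.backbone(seq)                           # processed by the shared Transformer

model = JanusStyleBackbone()
out = model(torch.randint(0, 32000, (1, 8)),
            siglip_feats=torch.randn(1, 16, 768),
            vq_ids=torch.randint(0, 1024, (1, 16)))
print(out.shape)  # torch.Size([1, 40, 512])
```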

Advanced Techniques Used in Janus-Pro

Janus-Pro features a set of advanced techniques, each of which contributes to better performance and greater efficiency.

  • Decoupled Visual Encoding: Decoupling visual encoding pathways for understanding and generation enhances performance by removing the inherent conflict between representational needs for the two different tasks.
  • Coherent Output via the Autoregressive Framework: Adhering to an autoregressive framework lets the model predict the next token at each step, which is essential for producing coherent, contextually relevant outputs (a minimal sketch of this generation loop appears after this list).
  • Dedicated Visual Encoders Optimized by Task: SigLIP serves as the encoder for understanding and a VQ tokenizer for generation, so feature extraction and visual representation are optimized for the task at hand.
  • Cross-Modal Attention Mechanisms for Information Fusion: Integrates attention mechanisms across modality to deepen understanding and leverages relationships, thereby facilitating useful information fusion from cross-modal cues.
  • Optimized Multi-Stage Training Strategy for Efficiency and Performance: A refined multi-stage training approach, including extended Stage I pixel dependence modeling, focused Stage II text-to-image training, and adjusted Stage III data ratios, enhances computational efficiency and overall performance.
  • Data Scaling for Enhanced Learning: Scaling training data for both understanding and generation, including adding 90M samples for Stage II and incorporating synthetic aesthetic data, improves model generalization and text-to-image stability.
  • Model Scaling for Better Convergence: Scaling Janus-Pro up to 7B parameters lets it benefit from a larger language model, yielding faster convergence and better performance on both understanding and generation tasks.
  • Modality Adaptors: Two adaptors project image features and codebook embeddings into the language model's embedding space, unifying both modalities within a single architecture.
  • Unified Transformer Architecture for Coherent Processing: A single shared Transformer backbone processes the concatenated multimodal feature sequences to generate contextually relevant outputs across modalities.
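
The autoregressive framing also applies to image generation: the model emits discrete VQ codes one at a time, and a VQ decoder (not shown) turns the finished code sequence into pixels. The loop below is a hedged, toy sketch; `llm_step`, the codebook size, and the number of image tokens are placeholder assumptions rather than Janus-Pro's actual interface.

```python
import torch

def generate_image_tokens(llm_step, prompt_ids, num_image_tokens=576, codebook_size=1024):
    """Toy autoregressive loop: after the text prompt, sample discrete VQ image
    codes one by one; a VQ decoder (not shown) would map them back to pixels.
    `llm_step` is any callable returning next-token logits over the image codebook."""
    tokens = list(prompt_ids)
    image_codes = []
    for _ in range(num_image_tokens):
        logits = llm_step(torch.tensor(tokens))           # logits over the VQ codebook
        next_code = torch.multinomial(torch.softmax(logits, dim=-1), 1).item()
        image_codes.append(next_code)
        tokens.append(next_code)                          # feed back autoregressively
    return image_codes

# Dummy "model" standing in for the shared LLM backbone plus generation head.
dummy_step = lambda toks: torch.randn(1024)
codes = generate_image_tokens(dummy_step, prompt_ids=[1, 5, 9], num_image_tokens=4)
print(codes)  # e.g. [412, 87, 990, 3]
```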

Performance Evaluation with Other Models

Janus-Pro was evaluated on text-to-image generation using the GenEval benchmark, which tests abilities such as generating single or multiple objects, counting items, matching colors correctly, and placing objects in the right positions. As the table below shows, Janus-Pro-7B scored 80%, higher than methods such as Transfusion (63%), SD3-Medium (74%), and DALL-E 3 (67%). This indicates that Janus-Pro follows instructions very well when creating images from text.

Evaluation of text-to-image generation ability on GenEval benchmark.
source - https://github.com/deepseek-ai/Janus/blob/main/janus_pro_tech_report.pdf

Table below shows Janus-Pro’s results on another test called DPG-Bench. This test checks how well a model works with long and detailed prompts. Janus-Pro scored 84.19, which is better than all the other methods. The test looks at overall picture consistency, clear object representation, detail accuracy, and understanding of relationships between items. Janus-Pro did very well on all these parts.

Performances on DPG-Bench
source - https://github.com/deepseek-ai/Janus/blob/main/janus_pro_tech_report.pdf

Janus-Pro was also tested on several other benchmarks for multimodal understanding. Tests like GQA, POPE, MME, SEED, MMB, and MMMU show that it performs strongly compared to top models. The results confirm that Janus-Pro is good at both understanding and creating images in many different situations.

How to Access and Use This Model?

Janus-Pro-7B is available on Hugging Face, giving developers and researchers easy access. You can explore its capabilities in the interactive demo space on Hugging Face. The project's GitHub repository provides code and resources for those who want to dig deeper and experiment further. The model is intended for research purposes only; details on commercial use can be found in the repository's licensing information.

Limitations

Janus-Pro, though a step forward for multimodal AI, still has notable limitations according to the source. Its input resolution is limited to 384 x 384 pixels, which affects fine-grained tasks such as OCR. In text-to-image generation, this low resolution, combined with reconstruction losses, produces images that lack fine detail. Earlier versions were also trained on low-quality real-world data, which made text-to-image generation unstable and visually poor. The synthetic aesthetic data used in Janus-Pro addresses this instability and improves aesthetic quality, but it may raise concerns about the model's diversity and real-world applicability. Adjustments to the data ratio during fine-tuning also point to a trade-off between optimizing visual generation and multimodal understanding.

Conclusion

Janus-Pro exemplifies the progress that scaling and improved training have brought to multimodal AI. Its ability to both understand and generate content across modalities demonstrates these advances. Scaling model capacity allows AI to learn complex multimodal relationships, while refined training methodologies improve learning efficiency and generalization. That synergy is essential for developing more sophisticated and intelligent multimodal AI that can better understand the world and interact with its surroundings.


Source
Project details: https://github.com/deepseek-ai/Janus
Research paper: https://github.com/deepseek-ai/Janus/blob/main/janus_pro_tech_report.pdf
Model weights: https://huggingface.co/deepseek-ai/Janus-Pro-7B
Trial: https://huggingface.co/spaces/deepseek-ai/Janus-Pro-7B


Disclaimer - This article is intended purely for informational purposes. It is not sponsored or endorsed by any company or organization, nor does it serve as an advertisement or promotion for any product or service. All information presented is based on publicly available resources and is subject to change. Readers are encouraged to conduct their own research and due diligence.

Saturday, 25 January 2025

DeepSeek-R1: Enhanced Reasoning via Reinforcement Learning

Presentational View

Introduction


The artificial intelligence field keeps pushing machines toward new capabilities, and one of the most sought-after advances is AI systems that can reason well. Today's LLMs excel at recognizing patterns and making statistical predictions, but they can fall short on problems that depend mainly on logical deduction, commonsense understanding, and complex problem solving. This gap between pattern recognition and true reasoning limits the range of potential LLM applications.

DeepSeek-R1 is an innovative approach that tackles this challenge head-on. It uses reinforcement learning (RL) to train LLMs to become more capable reasoners, a significant step in the pursuit of AI systems that do not merely process information but understand and reason about it.

Model Variants

DeepSeek-R1 comes in several variants, each with different characteristics and uses. The base model, DeepSeek-R1-Zero, is trained with large-scale reinforcement learning applied directly to the base model without preliminary supervised fine-tuning. It has 671B total parameters, 37B activated per token, and a 128K context length. DeepSeek-R1 builds on R1-Zero, addressing its limitations through a multi-stage training pipeline that improves reasoning performance. There are also smaller, dense models distilled from DeepSeek-R1, which achieve better performance than training models of that size directly with RL. Together, the variants span everything from a pure-RL exploration on the foundation model to the refined DeepSeek-R1 and its efficient distilled versions.

Key Features of DeepSeek-R1

  • Explicit Focus on Reasoning Ability: A defining strength of DeepSeek-R1 is its use of reinforcement learning specifically to train reasoning. Whereas many LLMs rely primarily on supervised learning, RL trains the model to produce answers that are not only correct but also accompanied by coherent, well-reasoned explanations, building robust reasoning skills.

    Example of DeepSeek-R1's Thinking Ability
    source - https://chat.deepseek.com/

  • Emergent Chain-of-Thought Reasoning: While nearly every model can be prompted into exhibiting chain-of-thought behavior, DeepSeek-R1's training procedure causes this behavior to emerge on its own. The model learns to produce explanations as part of its reasoning process rather than through specific prompting methods, which yields more robust and coherent chain-of-thought behavior.
  • Emphasis on Transparency and Explainability: DeepSeek-R1 also emphasizes transparency and explainability by explicitly training the model to give explanations. As a result, the model can lay out its reasoning process transparently for the user, fostering trust and supporting better debugging and analysis.
  • Generalization Benefits from RL: Even though training focuses on reasoning, general language tasks have also been observed to improve under large-scale RL training, indicating that training for reasoning has synergistic benefits for other language abilities.

Reinforcement Learning of DeepSeek-R1

Reinforcement learning (RL) is a machine learning technique in which an agent learns to make optimal decisions in an environment based on feedback received as rewards or penalties. Unlike supervised learning, RL does not rely on labelled examples. Over the last few decades, RL has grown significantly with the rise of deep learning and greater computing power. Reinforcement learning is crucial for DeepSeek-R1, particularly DeepSeek-R1-Zero, because it enables the model to learn reasoning without prior supervised fine-tuning. This direct use of RL helps the model learn to explain its thinking step by step, known as chain-of-thought reasoning, and shows how RL can make AI much better at complex reasoning.
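
To make the reward signal concrete, here is a minimal sketch of the kind of rule-based reward reportedly used for DeepSeek-R1-Zero, combining a format check with an accuracy check. The tag names, weights, and exact string comparison are illustrative assumptions, not the paper's implementation.

```python
import re

def reasoning_reward(response: str, reference_answer: str) -> float:
    """Toy rule-based reward: the model is rewarded for (1) wrapping its
    reasoning in the expected tags and (2) producing the correct final answer.
    Tag names and weights here are illustrative assumptions."""
    reward = 0.0
    # Format reward: reasoning enclosed in <think>...</think>, answer afterwards.
    if re.search(r"<think>.+?</think>", response, flags=re.DOTALL):
        reward += 0.2
    # Accuracy reward: compare the text after </think> with the reference answer.
    final = response.split("</think>")[-1].strip()
    if final == reference_answer.strip():
        reward += 1.0
    return reward

print(reasoning_reward("<think>2 + 2 is 4</think>4", "4"))  # 1.2
print(reasoning_reward("4", "4"))                           # 1.0 (correct, but no tags)
```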

Capabilities and Use Cases of DeepSeek-R1


DeepSeek-R1's new approach to reasoning opens up unique applications, pushing the boundaries of AI. Its key capabilities include:
  • Pioneering Reasoning Research via Pure RL: DeepSeek-R1 provides a groundbreaking research platform by showing that effective reasoning can develop without initial supervised fine-tuning, offering new insights into how reasoning emerges in LLMs. The availability of both the base and refined models allows direct study of different training methods.
  • Transforming Education: Excellent performance on educational tests suggests DeepSeek-R1's potential to change educational applications. This includes improving AI-driven search, enhancing data analysis tools for education, and creating better question-answering systems.
  • Enabling Custom Model Development: The open-source nature of DeepSeek-R1 and its models allows developers to fine-tune them for very specific reasoning tasks, enabling custom AI solutions for areas like scientific research and complex data analysis.
These are just some examples, and as DeepSeek-R1 improves, we can expect even more new uses.

Technological Advancements

DeepSeek-R1 introduces a training paradigm in which reinforcement learning (RL) is applied directly to the base model without initial supervised fine-tuning (SFT), enabling fully autonomous development of reasoning skills, as in DeepSeek-R1-Zero. That base model uses Group Relative Policy Optimization (GRPO), a specific RL method, to explore chain-of-thought (CoT) reasoning and complex problem solving. The process encourages self-verification, reflection, and the production of extended CoTs, showing that LLM reasoning can be enhanced without a preliminary SFT stage. With rewards reinforcing the validity and format of structured reasoning, an intrinsic self-evolution process allocates more computation to harder problems, and behaviors such as reflection and diverse problem-solving strategies emerge spontaneously.
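
At the core of GRPO is a group-relative advantage: several responses are sampled for the same prompt, and each response's reward is normalized by the group's mean and standard deviation, removing the need for a separate critic model. The snippet below sketches only that step; the clipped policy-ratio objective and KL penalty of the full algorithm are omitted.

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each sampled response's reward by the
    mean and standard deviation of its own group (responses to the same prompt),
    so no learned critic is needed. Simplified sketch; the full objective also
    uses a clipped importance ratio and a KL penalty to a reference policy."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0   # avoid division by zero
    return [(r - mean) / std for r in rewards]

# Rewards for a group of sampled responses to one prompt (e.g. from a rule-based reward).
print(group_relative_advantages([1.2, 0.0, 1.0, 0.2]))
```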

Building on R1-Zero, DeepSeek-R1 uses a multi-stage training pipeline. It begins with a 'cold-start' stage on a high-quality, curated dataset of long CoT examples, generated via few-shot prompting, direct prompting for detailed answers with reflection, and human annotation of R1-Zero outputs. The model is then improved through a reasoning-focused RL stage, followed by rejection sampling and SFT for general-purpose tasks. A final RL stage aligns the model with human preferences and addresses R1-Zero's limitations, such as readability and language mixing. DeepSeek also uses distillation, transferring reasoning patterns learned by larger models to smaller, more efficient ones. Remarkably, the distilled models outperform comparably sized models trained directly with RL and compare favorably against current open-source models. This integrated combination of RL, cold-start data, and distillation is one of the most effective strategies for building superior reasoning ability into LLMs.

Performance Evaluation with Other Models

DeepSeek-R1 was evaluated rigorously across a range of reasoning benchmarks and tasks. The comparison of mathematics, coding, and logical-reasoning benchmarks reported in the paper between DeepSeek-R1, OpenAI-o1-1217, and OpenAI-o1-mini, shown in the table below, reveals how often DeepSeek-R1 matches or exceeds its counterparts and the level of complexity the model can handle.

Comparison between DeepSeek-R1 and other representative models.
source - https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf

More importantly, DeepSeek-R1 won the length-controlled evaluation on AlpacaEval 2.0 with an 87.6% win rate and scored 92.3% on ArenaHard for open-ended generation, showing how well it handles non-exam-oriented questions. It also held a much larger lead over DeepSeek-V3 on long-context benchmarks, indicating improved long-context understanding. Notably, the distilled versions, especially the 32B and 70B models, set new records for dense models on reasoning benchmarks. For example, DeepSeek-R1-Distill-Qwen-7B scored 55.5% on AIME 2024, beating QwQ-32B-Preview.

These distilled models (such as the DeepSeek-R1-Distill-Qwen variants from 1.5B to 32B) were further evaluated on reasoning benchmarks, with notable gains over other open-source and even some closed-source models. For instance, the 14B distilled model outperformed QwQ-32B-Preview on all metrics, and the 32B and 70B models significantly exceeded o1-mini on most benchmarks. These findings indicate that distilling reasoning patterns from larger models gives better results than training smaller base models with reinforcement learning.

How to Access and Use DeepSeek-R1

DeepSeek-R1 can be accessed and used in several ways. Users can try it online on the DeepSeek website or through the API offered by the DeepSeek Platform, which is compatible with the OpenAI API. For running the model locally, instructions are provided in the DeepSeek-V3 repository. The lightweight distilled variants of DeepSeek-R1 also run on common inference stacks such as vLLM and SGLang, like other popular models. The official GitHub repository links to the research paper, the downloadable models, and the evaluation results.
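
Because the DeepSeek Platform API is OpenAI-compatible, a request can be sketched with the standard `openai` Python client. The base URL and the model identifier "deepseek-reasoner" below are assumptions drawn from DeepSeek's public documentation; check the platform docs and supply your own API key before use.

```python
from openai import OpenAI

# Assumed endpoint and model name; verify against the DeepSeek Platform docs.
client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY",
                base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",   # assumed identifier for DeepSeek-R1
    messages=[{"role": "user",
               "content": "How many prime numbers are there below 30?"}],
)
print(response.choices[0].message.content)
```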

Limitations 

DeepSeek-R1 still needs improvement: it currently falls short of DeepSeek-V3 on tasks such as function calling, multi-turn conversations, complex role-playing, and consistent JSON output. It is optimized for Chinese and English, so it may mix languages when handling queries in other languages, and it mostly falls back to English for reasoning and responses. Moreover, DeepSeek-R1 is quite sensitive to prompting; few-shot prompting can degrade performance, so zero-shot prompting is the recommended approach. To date, DeepSeek-R1 has not shown improvements over DeepSeek-V3 in software engineering, due to the cost of evaluating software engineering tasks during the reinforcement learning (RL) process.

Future Work

Future work will leverage longer chain-of-thought (CoT) reasoning to improve function calling, multi-turn conversations, and role-playing. Handling queries in languages other than Chinese and English is also important. In upcoming releases, software engineering performance will be improved by applying rejection sampling to relevant data or by running asynchronous evaluations during the RL process. The goal of this work is to improve the robustness and versatility of DeepSeek-R1 across more tasks.

Conclusion

DeepSeek-R1 leverages a novel reinforcement learning paradigm with emergent chain-of-thought reasoning and improved explainability, making it better at solving tough problems and communicating its reasoning. A significant contribution is the introduction of distilled models, which make sophisticated AI reasoning feasible on resource-constrained devices and thus expand its use cases. The open-source nature of the DeepSeek-R1 models empowers the community to explore and develop more powerful reasoning AI across science, education, software development, and everyday problem-solving.




Source
Website: https://api-docs.deepseek.com/news/news250120
Research Paper: https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf
GitHub Repo: https://github.com/deepseek-ai/DeepSeek-R1
Model weights of Variants: https://huggingface.co/collections/deepseek-ai/deepseek-r1-678e1e131c0169c0bc89728d
Try chat model: https://chat.deepseek.com/


Disclaimer - This article is intended purely for informational purposes. It is not sponsored or endorsed by any company or organization, nor does it serve as an advertisement or promotion for any product or service. All information presented is based on publicly available resources and is subject to change. Readers are encouraged to conduct their own research and due diligence.

Tuesday, 31 December 2024

DeepSeek-V3: Efficient and Scalable AI with Mixture-of-Experts

Presentational View

Introduction

Scalable and efficient AI models are among the focal topics of the current artificial intelligence agenda. The goal is to develop models that can solve increasingly difficult problems and process ever larger amounts of data without demanding outrageous amounts of computational power. Existing LLM techniques such as the transformer, though quite effective, carry relatively high computational costs at scale, which limits their usability.

Mixture-of-Experts (MoE) models have emerged as one of the best solutions to this challenge. MoE models split one model into multiple specialized, smaller sub-networks, known as 'experts', which lets the model greatly increase its capacity without a destructive escalation in computational expense. These models are not without problems, however, such as imbalanced distribution of data among experts and heavy computational demands during training.

DeepSeek-V3 addresses these challenges with advanced approaches such as improved gating for dynamic routing and reduced attention overhead within its MoE design. These enhancements balance the load across experts and keep computation efficient, offering an advanced, intelligent system for demanding applications in different fields.

What is DeepSeek-V3?

DeepSeek-V3 is a strong Mixture-of-Experts (MoE) large language model created by DeepSeek AI. Its architecture lets it achieve high performance with good efficiency and extensibility. It is available in more than one form, including a base version, so users can choose according to their computational demands.

Key Features of DeepSeek-V3

DeepSeek-V3 leverages its MoE architecture to achieve several key advantages:

  • Efficiency: DeepSeek-V3's Mixture-of-Experts (MoE) design activates only a portion of its parameters, 37B out of 671B, for any given input. This selective activation reduces computational costs considerably, delivering strong performance while staying frugal with computation.
  • Scalability: The MoE design enables effortless scaling by adding more specialized experts without retraining the whole model. This modularity makes DeepSeek-V3 easy to scale and ready for future improvements, including incorporating new capabilities without a full retraining process.
  • Specialization: Within the MoE architecture, individual experts can be trained on specific domains to improve performance in those areas. What DeepSeek-V3 lacks in general adaptability, it more than makes up for in specialized environments such as coding and mathematics, where domain-specific knowledge is valuable.
  • Improved Inference Speed: Because only a subset of the network is activated for a given input, DeepSeek-V3 achieves faster inference. This selective activation reduces response latency and makes interactions faster, which is useful for real-time services.

Capabilities/Use Cases of DeepSeek-V3

  • Enhanced Code Generation and Debugging: DeepSeek-V3's MoE architecture makes it possible to develop experts focused on particular programming languages or coding styles. This targeted approach leads to more effective code generation and debugging than general-purpose models, because the relevant experts are more aware of the subtleties of different languages and less prone to contextual errors.
  • Advanced Mathematical Problem-Solving: The MoE architecture lets DeepSeek-V3 draw on experts trained specifically on mathematics to achieve accuracy in this realm. Such experts can tackle difficult equations, logical proofs, and other demanding quantitative problems with greater precision, since dedicated capacity is assigned to mathematical tasks.
  • Next-Generation AI Assistants: Because DeepSeek-V3 integrates specialists across domains, it supports the development of improved AI assistants. These assistants can offer balanced, contextually grounded help spanning reasoning, coding, and mathematics, and the MoE structure lets them adapt to serve users across a wide range of areas.

DeepSeek-V3 Architecture and Key Components

Good information flow is one of the main characteristics of the DeepSeek-V3 architecture. Input data passes through a number of Transformer blocks, as shown in the figure below. Within each block, a Multi-Head Latent Attention module selectively computes attention over different parts of the input sequence to produce an output hidden state. This output is then passed to the DeepSeekMoE block, the novel part of the DeepSeek-V3 architecture. As the figure shows, the input passes through these key components in sequence.

Illustration of the basic architecture of DeepSeek-V3
source - https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf

The DeepSeekMoE block contains a set of multiple 'experts', each trained for a particular domain or task. The model routes different parts of the input to the relevant experts, supporting both optimization and specialization. Rather than invoking all the experts for every input, DeepSeek-V3 activates only the relevant ones, saving computation without compromising quality. This dynamic routing is paired with an auxiliary-loss-free approach to load balancing that distributes load evenly among the experts, preventing congestion and improving overall efficiency. Load balancing is paramount for scaling the model and making the best use of available resources.
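
To illustrate the routing idea, here is a generic top-k mixture-of-experts layer in PyTorch. It is only a sketch: DeepSeek-V3's DeepSeekMoE additionally uses shared experts, finer-grained experts, and auxiliary-loss-free load balancing, none of which are modeled here.

```python
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    """Generic top-k mixture-of-experts layer: a router scores experts per token
    and only the top-k experts are evaluated. The real DeepSeekMoE adds shared
    experts and auxiliary-loss-free load balancing, omitted here."""

    def __init__(self, d_model=64, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                           nn.Linear(4 * d_model, d_model)) for _ in range(num_experts)])
        self.router = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x):                       # x: (tokens, d_model)
        scores = torch.softmax(self.router(x), dim=-1)
        top_w, top_idx = scores.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):             # dispatch each token to its chosen experts
            for e in range(len(self.experts)):
                mask = top_idx[:, k] == e
                if mask.any():
                    out[mask] += top_w[mask, k:k + 1] * self.experts[e](x[mask])
        return out

layer = ToyMoELayer()
print(layer(torch.randn(5, 64)).shape)          # torch.Size([5, 64])
```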

DeepSeek-V3 includes other innovations beyond the MoE architecture and efficient routing. Multi-Token Prediction (MTP) trains the model to predict multiple future tokens at once, enhancing learning and enabling potentially more efficient decoding, and parallel processing during training helps as well. In addition, DeepSeek-V3 employs knowledge distillation to transfer reasoning ability from the DeepSeek-R1 series. Together, the MoE architecture, Multi-Token Prediction, and load balancing make the model both capable and efficient.
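
Multi-Token Prediction can be pictured as extra prediction heads that are also trained to predict tokens further ahead, densifying the training signal. The toy loss below conveys that flavor only; DeepSeek-V3's actual MTP uses sequential modules that preserve the causal chain at each prediction depth, so treat this as an assumption-laden simplification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMTPHeads(nn.Module):
    """Simplified multi-token prediction: on top of a shared hidden state, one
    head predicts the next token and extra heads predict tokens further ahead.
    (DeepSeek-V3's real MTP keeps a sequential causal chain per depth; this toy
    version only conveys the densified training signal.)"""

    def __init__(self, d_model=64, vocab=1000, depth=2):
        super().__init__()
        self.heads = nn.ModuleList([nn.Linear(d_model, vocab) for _ in range(depth)])

    def loss(self, hidden, targets):            # hidden: (B, T, d), targets: (B, T)
        total = 0.0
        for k, head in enumerate(self.heads, start=1):
            logits = head(hidden[:, :-k])       # predict the token k steps ahead
            total = total + F.cross_entropy(
                logits.reshape(-1, logits.size(-1)), targets[:, k:].reshape(-1))
        return total / len(self.heads)

mtp = ToyMTPHeads()
h, y = torch.randn(2, 16, 64), torch.randint(0, 1000, (2, 16))
print(mtp.loss(h, y).item())
```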

Performance Evaluation with Other Models

Strict comparison with other powerful language models shows DeepSeek-V3's strong performance convincingly. Benchmarks spanning both English and Chinese tasks are used to compare DeepSeek-V3 to open-source competitors such as Qwen2.5 and LLaMA-3.1 and closed-source competitors such as GPT-4o and Claude-3.5-Sonnet. These benchmarks cover several crucial areas: general facts and knowledge (MMLU, MMLU-Pro), logic and reasoning (DROP, LongBench v2), code writing (HumanEval-Mul, LiveCodeBench), and mathematics (AIME, MATH-500). The results show that DeepSeek-V3 is among the best models most of the time, on par with and sometimes outperforming other open-source counterparts, and almost always on par with or better than the closed-source benchmarks.

Comparison between DeepSeek-V3 and other representative chat models.
source - https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf

Beyond these comparative benchmarks, several other tests and experiments evaluate DeepSeek-V3's abilities. These include ablation studies that shed light on the contributions of particular architectural components and training strategies. Tests of integrated reference recognition and sequential recall assess how well DeepSeek-V3 handles comprehension of long text sequences. Coding and mathematical reasoning tasks in particular benefit from the new architecture, and the report credits knowledge distillation from DeepSeek-R1 as especially helpful.

Comparing DeepSeek-V3, Phi-4, and Llama 3.3

DeepSeek-V3, Phi-4, and Llama 3.3 have complementary strengths as large language models. DeepSeek-V3, thanks to its Mixture-of-Experts architecture and training on a significantly larger amount of data, beats even closed-source models on some benchmarks in math, code, and Chinese-language tasks, but it lags in other areas, for instance English factual knowledge. Phi-4 is trained on a mix of synthetic and organic data with a focus on reasoning, and it performs outstandingly in STEM Q&A and coding, sometimes giving more accurate results than its teacher model GPT-4o. Its limitations include a smaller context window and susceptibility to hallucinations.

Llama 3.3 places priority on multilingual dialogue and general language understanding, with a larger context window, suitable for processing extended text. Though it works well in multiple language tasks, it doesn't have the focused strengths of Phi-4 on STEM or DeepSeek-V3 on Chinese.

The choice of model depends on the specific application. Phi-4 is suitable for STEM use cases, Llama 3.3 for multilingual dialogue and long-context applications, and DeepSeek-V3 for math, code, and Chinese performance, although it is weak in English factual knowledge. Testing and safety assessments are important before deployment.

How to Access and Use this model?

DeepSeek-V3 can be queried and used in several ways. Researchers and developers can download different model forms, such as the base model, from Hugging Face. DeepSeek provides a chat demo that also shows how the model behaves. Those who want a deeper understanding of how the model works will find the source code and further resources in DeepSeek's GitHub repository. At this stage, DeepSeek-V3 is primarily aimed at research and development use; licensing may be required for commercial use.

Limitations and Future Work

Despite DeepSeek-V3's high accuracy, good efficiency, and satisfactory performance, it has several shortcomings. Its large recommended deployment footprint may be problematic for lean teams, with many components to configure. And while it outperforms its predecessor in generation speed, there is still room for improvement.

Future work will pursue further architectural optimization for better training and inference performance, a potential move beyond the Transformer architecture, and ideally an effectively unlimited context size. Subsequent studies will also focus on improving few-shot learning, stable alignment approaches, and more effective reinforcement learning reward signals.

Conclusion

Different stakeholders can benefit from DeepSeek-V3. For AI experts, its MoE architecture and training schemes provide a basis for research and a practical LLM implementation. Organizations gain flexibility and efficiency, making it easier to deploy complex NLP features at scale, such as conversational agents and code-generation models. For the general public, DeepSeek-V3 promises advanced, adaptive AI tools in everyday use, including better search, translation, and virtual-assistant features that improve the flow of information and simplify everyday tasks.


Source
Website: https://www.deepseek.com/
Research paper: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf 
GitHub Repo: https://github.com/deepseek-ai/DeepSeek-V3
DeepSeek-V3 model variant: https://huggingface.co/deepseek-ai/DeepSeek-V3
DeepSeek-V3 base model variant: https://huggingface.co/deepseek-ai/DeepSeek-V3-Base
Try model: https://chat.deepseek.com/


Disclaimer - This article is intended purely for informational purposes. It is not sponsored or endorsed by any company or organization, nor does it serve as an advertisement or promotion for any product or service. All information presented is based on publicly available resources and is subject to change. Readers are encouraged to conduct their own research and due diligence.

Saturday, 7 December 2024

ShowUI: Advanced Open-Source Vision-Language-Action Model for GUI

Presentational View

Introduction

Graphical User Interface (GUI) assistants help users interact with digital devices and applications. They range from simple voice-activated helpers to complex systems that understand and respond to natural-language commands. GUI visual agents, in particular, are a special type of GUI assistant that can 'see' and interact with the visible parts of a user interface.

These GUI visual agents differ from other GUI assistants in that they understand the interface visually. Early GUI assistants depended mostly on text-based information such as HTML or accessibility trees, which made it hard for them to perceive UI visuals as a human does or to interact with elements that lack a text description.

Recent developments in vision-language-action (VLA) models are pushing GUI visual agents toward more human-like interactions by processing both visual and textual data to generate actions. Challenges remain, however: processing high-resolution screenshots is costly, the complex mix of visual elements and actions is hard to manage, and obtaining diverse, high-quality training data is difficult. ShowUI is an AI model designed to show how these issues can be tackled to improve GUI visual agents.

Who Developed ShowUI? 

A team of researchers from Show Lab at the National University of Singapore and Microsoft developed ShowUI. Show Lab focuses on creating cutting-edge AI technologies to improve human-computer interaction.

What is ShowUI? 

ShowUI is a vision-language-action model for GUI visual agents. It combines visual input, language understanding, and action prediction to allow more natural and efficient interactions with computer interfaces.

Key Features of ShowUI

  • UI-Guided Visual Token Selection: ShowUI exploits the structured nature of screenshots by building a UI connected graph in RGB space, linking patches that share the same RGB values. This lets the model skip redundant visual tokens, increasing its efficiency.
  • Interleaved Vision-Language-Action Streaming: GUI actions across platforms are organized in a JSON format and documented through the system prompt, which helps the model interpret and demonstrate actions at test time.
  • Well-Selected Instruction-Following Dataset: ShowUI uses a small, high-quality dataset that focuses on visual elements rather than static text. The dataset draws on web screenshots, desktop elements, and mobile functions.

Capabilities and Use Cases of ShowUI


source - https://arxiv.org/pdf/2411.17465

ShowUI delivers impressive zero-shot screenshot grounding: a lightweight 2B model trained on 256K samples achieves 75.1% accuracy. Its token selection removes 33% of redundant visual tokens during training and makes training 1.4x faster.

Use Cases:

  • UI Automation and Testing: Automates repetitive tasks on user interfaces, which is just fantastic for software testing, including automated regression testing, to ensure the functionality remains consistent.
  • Accessibility Tools: Assists users who are visually impaired in identifying specific UI elements using text descriptions for finding, which helps a user to carry out tasks on the screen.
  • Real-time User Assistance: It gives dynamic app-specific help through real-time analytics of the screen and step-by-step visual instructions or suggestions based on the user's progress.

How ShowUI Works: Architecture, Design, and Workflow

ShowUI is built around three key elements for GUI tasks: UI-guided visual token selection, interleaved vision-language-action streaming, and carefully chosen training data. At its core, ShowUI starts from a user query, an initial action space, and an initial screenshot. It predicts the next action, performs it to update the screenshot, and proceeds in this cycle until the task is complete.

Illustration of ShowUI
source - https://arxiv.org/pdf/2411.17465

UI-Guided Visual Token Selection makes processing high-resolution screenshots efficient. By building a patch-wise UI connected graph based on similar RGB values, ShowUI processes only the necessary visual parts, reducing computational costs while preserving performance. Interleaved Vision-Language-Action Streaming improves ShowUI's ability to manage complicated GUI tasks by organizing actions in JSON format and by keeping track of past screenshots and actions for better navigation.

The workflow starts by processing a user query together with the initial screenshot. ShowUI predicts the next action, say a click on an element or typing text; the environment updates based on this action and produces a new screenshot observation. This observation and the updated action history feed back into ShowUI, starting the next cycle of prediction and action. The iterative process continues until the user's task is completed, showing how efficiently and effectively ShowUI manages GUI tasks.
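
The observe-predict-act cycle can be sketched as a simple loop. The dummy model, dummy environment, and the JSON-like action schema below are illustrative assumptions standing in for ShowUI's real interface.

```python
class DummyEnv:
    """Stand-in environment: returns screenshots and executes actions."""
    def screenshot(self):
        return "<screenshot pixels>"
    def execute(self, action):
        print("executing:", action)

class DummyModel:
    """Stand-in for ShowUI: returns one click, then signals completion."""
    def predict(self, query, screenshot, history):
        if history:
            return {"action": "STOP", "value": None, "position": None}
        return {"action": "CLICK", "value": None, "position": [0.42, 0.77]}

def run_gui_agent(model, env, query, max_steps=10):
    """Observe -> predict -> act loop in the spirit of ShowUI's workflow.
    The JSON-like action schema used here is an illustrative assumption."""
    screenshot = env.screenshot()
    history = []
    for _ in range(max_steps):
        action = model.predict(query=query, screenshot=screenshot, history=history)
        if action["action"] == "STOP":          # model signals task completion
            break
        env.execute(action)                     # click, type, scroll, ...
        history.append(action)
        screenshot = env.screenshot()           # new observation after the action
    return history

print(run_gui_agent(DummyModel(), DummyEnv(), "open the settings page"))
```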

Advanced Techniques Used to Build the ShowUI Model

  • Reverse Engineering: This technique was applied on the OmniAct dataset to extract all the detailed information regarding UI elements more than just their names. It enriches the dataset and hence enhances the model's understanding of diverse queries based on descriptions of appearance, spatial relationships, and intention.
  • Resampling Strategy: Serves to handle issues regarding the balance of exposures across different data types in the training set. This reduces variance and results in greater generalization and stability across repeated experiments.
  • Multi-Turn Dialogue Approach: Lets training predict multiple action annotations for a given screenshot in a single forward pass, improving the utilization of navigation and grounding data during training.
  • Union-Find Algorithm: Identifies connected components in the UI connected graph, grouping redundant areas so that token selection becomes simpler (see the sketch after this list).
  • Mixture-of-Depth (MoD) Inspiration: Inspired by the Mixture-of-Depth approach, ShowUI randomly skips a subset of tokens within the same component during training, lowering computational cost while preserving crucial positional information.
  • Function Calling: Makes use of a 'README' in the system prompt for documenting the usage of each action. This would help learn the semantics of the action space and generalize to novel actions at test time.
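
As referenced above, the union-find step can be pictured as grouping equal-colored, adjacent patches of a screenshot into connected components so that redundant patches can be skipped. The grid layout and color representation below are simplified assumptions, not ShowUI's actual patch pipeline.

```python
def group_patches(patch_colors, width):
    """Union-find over a grid of patch colors (flattened row-major): patches
    with identical color that are 4-adjacent end up in the same component,
    mirroring the idea of ShowUI's UI connected graph (simplified sketch)."""
    parent = list(range(len(patch_colors)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for idx, color in enumerate(patch_colors):
        r, c = divmod(idx, width)
        if c + 1 < width and patch_colors[idx + 1] == color:                      # right neighbour
            union(idx, idx + 1)
        if idx + width < len(patch_colors) and patch_colors[idx + width] == color:  # neighbour below
            union(idx, idx + width)
    return [find(i) for i in range(len(patch_colors))]   # component id per patch

# A 2x3 grid where the background color forms one connected component.
print(group_patches(["bg", "bg", "bg", "bg", "btn", "btn"], width=3))
```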

These are some of the sophisticated techniques that contribute to the overall efficiency and effectiveness of ShowUI.

Performance Evaluation with Other Models

ShowUI's performance stands out in key experiments, as shown in the table below, especially in zero-shot grounding on the Screenspot benchmark, which measures how accurately a model can locate and identify UI elements from a textual description across mobile, desktop, and web. Despite being a lightweight 2B model trained on a small dataset of 256K samples, ShowUI achieved an impressive 75.1% accuracy. This outperforms larger and more complex models such as CogAgent (18B, 47.4% accuracy) and SeeClick (9.6B, 53.4% accuracy), which use much more training data. ShowUI's edge comes from its smart UI-Guided Visual Token Selection and a well-curated dataset, demonstrating efficient learning and strong visual grounding.

Zero-shot grounding on Screenspot
source - https://arxiv.org/pdf/2411.17465

Another important test looks at ShowUI's navigation abilities, especially web navigation on the Mind2Web dataset. The table below compares ShowUI with other models in the cross-task, cross-website, and cross-domain settings. Even without fine-tuning on the dataset, ShowUI's zero-shot performance is comparable to the larger SeeClick model, which had both pre-training and fine-tuning. This shows that ShowUI can transfer its learned navigation skills to previously unseen websites and tasks, a critical property of robust GUI visual agents. The Interleaved Vision-Language-Action Streaming mechanism underpins this strong navigation performance by handling the complex interplay between visual observations, text instructions, and actions.

Web Navigation on Mind2Web.
source - https://arxiv.org/pdf/2411.17465

Its effectiveness in other navigation tasks is demonstrated as well, through mobile navigation on the AITW dataset and online navigation on the MiniWob benchmark. Evaluations show that ShowUI works across GUI environments, consistently performing well on various datasets and settings. This underlines ShowUI's potential to advance the development of sophisticated GUI visual agents and establishes it as a leading model in the field.

How to Access and Use ShowUI?

ShowUI is readily accessible on GitHub and Hugging Face. You can deploy it on Windows and macOS by following the instructions in the repository. Because it is open source, you can use the model freely for academic work, and commercial use is possible as well, subject to the licensing terms.

Limitations and Future Work

ShowUI's dependence on offline training data presents a challenge for real-world applications: it can fail to handle unexpected situations or errors that are not represented in its training data. Its zero-shot performance also lags behind models fine-tuned on specific datasets. Moreover, even though UI-guided visual token selection saves computation, subtle or contextual details can be missed, lowering accuracy.

These issues could be addressed in the future by incorporating reinforcement learning to strengthen ShowUI in online environments, letting the model interact directly with its environment and learn from experience to handle new situations better. Tailoring learning strategies to online settings, with methods for handling unforeseen errors and dynamic UI changes, could close the performance gap between offline and online use and make ShowUI more stable and robust in real applications.

Conclusion

ShowUI tackles major problems like high computing costs, tricky visual-action interactions, and the need for varied training data. It works well for many things like UI automation, accessibility tools, and real-time help for users. Although it relies on offline training data, future updates with reinforcement learning and customized online strategies could make it even more robust and flexible.


Source
research document: https://arxiv.org/pdf/2411.17465 
GitHub Repo: https://github.com/showlab/ShowUI
Hugging face model weights: https://huggingface.co/showlab/ShowUI-2B
Try demo: https://huggingface.co/spaces/showlab/ShowUI


Disclaimer - This article is intended purely for informational purposes. It is not sponsored or endorsed by any company or organization, nor does it serve as an advertisement or promotion for any product or service. All information presented is based on publicly available resources and is subject to change. Readers are encouraged to conduct their own research and due diligence.

Wednesday, 27 November 2024

Hymba by NVIDIA: Advancing SLMs with Hybrid-Head Architecture

Presentational View

Introduction

Recent achievements in small language models have pushed them toward greater effectiveness and efficiency. Innovations in architecture and training have made these models powerful and versatile.

Researchers have advanced the ways models process and store information, so that smaller models can often do everything at least as well as larger models, and considerably better on specific tasks. Hymba is one such model and shows considerable progress in this regard: its hybrid-head architecture, paired with learnable meta tokens, significantly improves efficiency and effectiveness, raising the bar for small language models.

Improved training methods have also led to stable and reliable models that cope well with different tasks. Hymba exemplifies these advances, and its strategic training approach delivers good performance across applications.

Who Designed Hymba?

The Hymba model was developed by a team at NVIDIA, which is known for its work in AI and deep learning. They built Hymba to challenge the assumption that small language models cannot be both efficient and capable, aiming for models that perform well across many tasks while using fewer resources.

What is Hymba?

Hymba is a small language model with a distinctive hybrid-head architecture. By combining the strengths of transformer attention mechanisms with state space models, Hymba is powerful and efficient at the same time.

Model Variants

Hymba comes in various versions, with each version suitable for specific purposes:

  • Hymba-1.5B-Base: A general-purpose model that strikes a strong efficiency-performance balance.
  • Hymba-1.5B-Instruct: This variant is specifically tuned for instruction tasks which makes it more suited for education and training purposes.

These variants allow Hymba to excel in different areas while maintaining high efficiency.

Key Features of Hymba

Some of the highlights of the Hymba model include:

  • Hybrid-Head Parallel Architecture: This brings together transformer attention mechanisms with state space models, so each layer can capitalize on both high-resolution recall and efficient context summarization.
  • Learnable Meta Tokens: These tokens store important information and act as compressed representations of world knowledge, allowing the model to only concentrate on meaningful details.
  • KV Cache Optimization: Hymba mixes global and local attention and shares KV caches across layers, which reduces memory usage and boosts performance.

These features make Hymba highly efficient and distinctive among small language models.

Capabilities/Use Cases of Hymba

These unique characteristics make Hymba well suited to many real-world applications:

  • Math Reasoning: Hymba is good at solving math problems, providing accurate and efficient solutions.
  • Function Calling: It can recognize and call functions, which is valuable for programming and automation.
  • Role-Playing: Hymba performs well in role-playing scenarios, making it ideal for interactive and educational applications.

These capabilities point out how diverse and potent Hymba may be in various situations.

How does Hymba work? / Architecture

Hymba differs from other SLMs in its innovative hybrid-head architecture. Unlike traditional transformer-based models that rely solely on the attention mechanism, Hymba integrates both transformer attention and state space models (SSMs) within every layer, as shown in the figure below. This parallel design lets the model take advantage of the strengths of both approaches: attention heads are good at high-resolution recall, capturing fine details, while SSM heads efficiently summarize the context, retaining the gist of the input. This dual processing mechanism, akin to human memory with its snapshot (attention) and fading (SSM) components, enables Hymba to handle diverse information flows and memory access patterns effectively. Moreover, Hymba uses several optimization techniques to improve its efficiency.

Visualize the hybrid-head module in Hymba
source - https://arxiv.org/pdf/2411.13676

Learnable meta tokens, prepended to the input sequence, act as a compressed representation of world knowledge, guiding attention toward relevant information and mitigating the 'forced-to-attend' issue. The model also uses cross-layer key-value (KV) sharing and a combination of global and local attention, which greatly reduces the KV cache size and computational costs. This efficient design, in combination with the parallel processing of hybrid heads, allows Hymba to achieve state-of-the-art performance for SLMs, outperforming even larger models while maintaining a smaller cache size and faster throughput.
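
The hybrid-head idea can be sketched as attention heads and SSM-style heads reading the same meta-token-prefixed sequence in parallel, with their outputs fused. In the toy block below, a GRU stands in for the real Mamba-style SSM heads and the fusion is a simple linear layer; this is an illustration of the design, not Hymba's implementation.

```python
import torch
import torch.nn as nn

class ToyHybridHeadBlock(nn.Module):
    """Sketch of Hymba's hybrid-head idea: attention heads and SSM-style heads
    see the same input in parallel and their outputs are fused. A GRU stands in
    for the real SSM (Mamba-style) heads, and the learnable meta tokens are
    simply prepended to the sequence; illustrative only."""

    def __init__(self, d_model=64, n_heads=4, n_meta_tokens=8):
        super().__init__()
        self.meta_tokens = nn.Parameter(torch.randn(1, n_meta_tokens, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ssm_standin = nn.GRU(d_model, d_model, batch_first=True)  # placeholder for SSM heads
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, x):                                   # x: (B, T, d)
        meta = self.meta_tokens.expand(x.size(0), -1, -1)
        seq = torch.cat([meta, x], dim=1)                   # prepend learnable meta tokens
        attn_out, _ = self.attn(seq, seq, seq)              # high-resolution recall
        ssm_out, _ = self.ssm_standin(seq)                  # efficient context summary
        fused = self.fuse(torch.cat([attn_out, ssm_out], dim=-1))
        return fused[:, meta.size(1):]                      # drop meta positions from the output

block = ToyHybridHeadBlock()
print(block(torch.randn(2, 10, 64)).shape)                  # torch.Size([2, 10, 64])
```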

Performance Evaluation with Other Models

Hymba proves to be better than other small language models. In benchmark tests, the Hymba-1.5B model outperforms all sub-2B models and in some cases beats the accuracy of Llama-3.2-3B, while using 11.67x less cache and delivering 3.49x higher throughput than Llama-3.2-3B, which clearly shows its efficiency and effectiveness across multiple tasks.

Benchmark Hymba with SOTA small LMs
source - https://arxiv.org/pdf/2411.13676

When comparing different architectures, namely the standard Transformer (Llama3), pure Mamba, Mamba with a feed-forward network (FFN), and Samba, Hymba is consistently the best in language modeling, recall tasks, reasoning, and question answering.

Apple-to-apple comparison of Hymba with other style architectures
source - https://arxiv.org/pdf/2411.13676

The instruction-tuned Hymba-1.5B-Instruct model is also very good at math reasoning, function calling, and role-playing, making it versatile for more complex tasks. These evaluations place Hymba at the leading edge of small language models.

Comparative Analysis of Hybrid Language Models

The development of hybrid architectures for small language models has significantly transformed performance and efficiency, with Hymba, Mamba2, and Samba as notable examples. Hymba's essential distinction is its hybrid-head design, which combines the transformer attention mechanism with a state space model in the same architecture to achieve top performance across a variety of tasks, with a focus on high-resolution recall and efficient context summarization.

Mamba2 combines the attention heads and memory units to improve sequential data handling and context management. The architecture is great for any task that needs detailed recall and deep understanding. Samba integrates attention mechanisms and feed-forward networks in a sequential layer design, balancing the strengths of both methods. This makes Samba robust in commonsense reasoning, question-answering, and language modeling.

A comparison of the models highlights Hymba's distinct learnable meta tokens and its KV-cache optimizations for efficiency and performance. While Mamba2 delivers strong recall and contextual handling and Samba offers versatile performance, it is Hymba's new design that distinguishes it as one of the best hybrid small language models.

How to Access and Use Hymba

Hymba is available on platforms such as Hugging Face, in both its base and instruct variants. It can be run locally or tried online through demos. Licensing information can be found on its Hugging Face pages.

Readers interested in this AI model can learn more by following the source links at the end of the article.

Limitations and Future Work

Hymba excels at many tasks but stumbles on intricately complex scenarios that require more elaborate background knowledge, such as precise medical diagnoses or legal interpretations. It also reflects biases from its internet-sourced training data and may produce harmful or socially unacceptable output. Reducing biased responses, particularly where ethical issues are involved, is therefore a clear area for refinement.

Future research will aim to increase Hymba's efficiency and broaden its capabilities. Continuous learning and updates with debiasing techniques will be introduced, and new architectures for handling longer sequences will be developed. These efforts should enhance performance in specific domains, compensate for present limitations, and make the model more successful on more sophisticated tasks.

Conclusion

The Hymba model shows how innovative designs and training methods can lead to powerful and efficient language processing tools. Hymba helps make these tools accessible for a multitude of different uses, supporting the rise of AI and its potential to change many parts of our lives.


Source
Research document: https://arxiv.org/pdf/2411.13676
HF base Models : https://huggingface.co/nvidia/Hymba-1.5B-Base
HF Instruct Models : https://huggingface.co/nvidia/Hymba-1.5B-Instruct


Disclaimer - This article is intended purely for informational purposes. It is not sponsored or endorsed by any company or organization, nor does it serve as an advertisement or promotion for any product or service. All information presented is based on publicly available resources and is subject to change. Readers are encouraged to conduct their own research and due diligence.
