
Monday 14 October 2024

Aria: Leading the Way in Multimodal AI with Expert Integration

Presentational View

Introduction

Multimodal Mixture-of-Experts (MoE) models are the latest wave in AI. They take multiple kinds of input, including text, images, and videos, into a single system, and in doing so learn to understand and create complex content. This makes them useful across many domains, from language processing to vision applications.

The latest innovations in multimodal MoE models make them far more efficient and powerful. Newer designs and training schemes let these models handle larger datasets and tackle harder challenges faster and more accurately. A prime example of this innovation is the multimodal MoE model Aria. With top-of-the-line performance across most tasks, it sets a new standard for the industry, and its advanced features and innovative design mark it as a key development in AI technology.

Who developed Aria?

Aria was created by Rhymes AI, a trailblazing AI start-up based in Tokyo. Rhymes AI is famous for its creative approach to AI, focusing on making open-source models that push the limits of what AI can do. Their mission is to make advanced AI technologies accessible to everyone and encourage a cooperative research environment. The main goal of developing Aria was to create a high-performance model that researchers and developers worldwide can easily use and adapt.

What is Aria?

Aria is an open-source, multimodal native Mixture-of-Experts (MoE) model. It is designed to handle and understand different types of input, such as text, images, video, and code, and uses its mixture-of-experts setup to manage these diverse data types efficiently within one system.

Key Features of Aria

  • Multimodal Native Capability: Unlike many other multimodal or MoE models, Aria is trained natively to handle text, images, videos, and code within the same model.
  • Large Context Window: With a 64K-token context window, Aria can take in larger and more detailed inputs.
  • Efficient Inference: The model activates only 3.9 billion parameters per token, promising high speed and low cost.
  • Open Source: Aria's code and weights are openly available to everyone, encouraging openness and collaboration across the AI community.

Capabilities and Use Cases of Aria

  • Video Understanding: Aria excels at analyzing and summarizing video content, making it very useful for media and entertainment companies.
  • Document Analysis: Its long context window makes it well suited for comprehensive document analysis and more advanced search functionality.
  • Language Processing: Aria can process and generate natural language, so it can be fine-tuned for natural language processing applications.
  • Multimodal Content Generation: The model can generate content that may include textual elements, images, or even video, which is very useful for creative industries and marketing.

Architecture and Efficiency in Multimodal AI

Aria's architecture pairs a vision encoder with an MoE decoder. The vision encoder transforms visual inputs, including images and videos, into visual tokens. The MoE decoder contains 66 experts per layer and allocates resources based on the type and complexity of the input: only the experts required for each task are activated at any given time, saving needless computation and memory.

The MoE decoder is trained together with the vision encoder on both language and multimodal data. The model learns relationships between different kinds of data, which further strengthens its visual processing, and this joint training becomes a significant foundation for Aria's visual understanding. Aria's MoE design also lets it handle inputs of very different sizes efficiently: instead of activating the complete model for each input, Aria activates only the experts that are needed, saving computation and memory compared with a traditional dense model that uses the whole network for every input.

Aria's multimodal native MoE decoder.
source - https://www.rhymes.ai/blog-details/aria-first-open-multimodal-native-moe-model

The MoE decoder of Aria uses dynamic routing along with balanced expert activation to raise efficiency even further. A router module picks the best set of experts for every input and activates only those, so just the necessary parts of the model are used. Additionally, Aria applies a load-balancing loss that discourages the router from always picking the same experts, keeping activation balanced and making full use of the model's experts.
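To make the routing idea concrete, here is a minimal PyTorch sketch of a top-k router with the kind of auxiliary load-balancing loss described above. It illustrates the general technique, not Rhymes AI's implementation: the hidden size and the number of experts activated per token are assumptions, while the 66-expert count follows the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKRouter(nn.Module):
    """Illustrative MoE router: picks the top-k experts per token and
    computes an auxiliary loss that keeps expert usage balanced."""

    def __init__(self, hidden_dim: int = 1024, num_experts: int = 66, top_k: int = 6):
        super().__init__()
        self.num_experts, self.top_k = num_experts, top_k
        self.gate = nn.Linear(hidden_dim, num_experts, bias=False)

    def forward(self, x: torch.Tensor):
        # x: (num_tokens, hidden_dim)
        probs = F.softmax(self.gate(x), dim=-1)                 # gate probability per expert
        topk_probs, topk_idx = probs.topk(self.top_k, dim=-1)   # experts actually activated
        # Load-balancing loss: penalize experts that receive both a large share of
        # tokens and a large mean gate probability, pushing usage toward uniform.
        routed_frac = F.one_hot(topk_idx, self.num_experts).float().sum(1).mean(0) / self.top_k
        balance_loss = self.num_experts * (routed_frac * probs.mean(0)).sum()
        weights = topk_probs / topk_probs.sum(-1, keepdim=True)  # renormalize over chosen experts
        return topk_idx, weights, balance_loss

router = TopKRouter()
tokens = torch.randn(8, 1024)                  # 8 tokens of a hypothetical hidden size
expert_ids, expert_weights, aux_loss = router(tokens)
print(expert_ids.shape, expert_weights.shape, aux_loss.item())
```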

Performance Evaluation with Other Models

The team benchmarked Aria against the best available open-source and proprietary models across a wide range of tests. Aria consistently outperforms open models (see the table below) such as Pixtral-12B and Llama3.2-11B on tasks like document understanding, chart reading, scene text recognition, visual question answering, and even coding. On the proprietary side, it is competitive with GPT-4o and Gemini-1.5, which speaks well for open multimodal models.

Performance comparison across various multimodal and language benchmarks.
source - https://arxiv.org/pdf/2410.05993

As the table below shows, Aria is much better at processing real-world data such as video subtitles and long documents. It outperforms other open models, such as Qwen2-VL-7B and LLaVA-OneVision-72B, in many instances, and sometimes even proprietary ones, such as GPT-4o mini on video tasks and Gemini-1.5-Flash on long documents.

Evaluation of long-context multimodal understanding on videos and documents.
source - https://arxiv.org/pdf/2410.05993

Aria was also evaluated on how well it specializes across various data types. The tests assess a wide range of skills, from making sense of weather forecasts and financial reports to explaining handwritten equations and debugging code from screenshots, along with summarizing research articles and understanding code presented in videos. These assessments show that Aria is a robust, high-performing, and versatile open-source multimodal model.

How to Access and Use Aria Model?

The Aria model can be accessed on Hugging Face, where installation steps and all dependent libraries are listed. After installing the required libraries, use the transformers library to download Aria's pre-trained weights and processor. A dedicated GitHub code base from Rhymes AI provides instructions for vLLM inference, examples, and scripts for fine-tuning on custom datasets. Aria can be fine-tuned either with full-parameter tuning or with LoRA (Low-Rank Adaptation), and multiple datasets can be mixed during fine-tuning.
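As a starting point, the snippet below sketches the standard transformers loading pattern for the repository linked at the end of this article. The exact processor arguments and prompt format may differ from the official model card, so treat this as a hedged template rather than the canonical recipe.

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "rhymes-ai/Aria"   # repository listed in the links below
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

image = Image.open("chart.png")   # any local image you want the model to describe
prompt = "Describe the trend shown in this chart."
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

with torch.inference_mode():
    output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```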

The model is open source and commercially usable under the Apache 2.0 license, making it accessible for a wide range of applications. Interested users can find all relevant links at the end of this article.

Limitations and Future Potential 

While Aria is impressive, its limits should be noted. For example, Aria is said to perform very close to models like GPT-4 and Gemini, but it is not always accurate or fluent on the more complex tasks. The training data may also contain biases that the correction process did not remove, so results can occasionally be unexpected.

More research and community feedback will refine Aria. As developers continue to work on Aria, many breakthroughs in areas such as real-time video analysis, human-computer interaction, and content creation are expected. The ongoing work will yield unique special-purpose variants of Aria that are suitable for particular tasks or industries.

Conclusion

Aria represents a major step forward in multimodal AI and Mixture-of-Experts architecture. It is very flexible and shows impressive performance, and as an open-source model it gives researchers and developers great tools that will spur creative ideas and collaboration. Aria's development is likely to trigger even newer ideas and applications within AI, helping us understand and work with many different kinds of data.

Source
Blog: https://www.rhymes.ai/blog-details/aria-first-open-multimodal-native-moe-model
Research document: https://arxiv.org/pdf/2410.05993
GitHub Repo: https://github.com/rhymes-ai/Aria
Model Weights: https://huggingface.co/rhymes-ai/Aria


Disclaimer - This article is intended purely for informational purposes. It is not sponsored or endorsed by any company or organization, nor does it serve as an advertisement or promotion for any product or service. All information presented is based on publicly available resources and is subject to change. Readers are encouraged to conduct their own research and due diligence.

Thursday 10 October 2024

Meta AI’s Movie Gen: Transforming Text into High-Quality Videos

Presentational View

Introduction

In the media world, instruction-based video editing and generation models have been revolutionary. They initially made the most basic yet tedious work easier, such as automating repetitive editing tasks and upgrading video quality with AI. As these models grew stronger, they developed more precise and advanced editing features, making more complex visual effects and new forms of content creation possible.

Movie Gen, developed by Meta's AI research team, is a step in this direction: it employs advanced AI to create quality videos based on users' needs, with the core aim of making video creation easy and accessible to everyone.

What is Movie Gen?

Movie Gen is an advanced AI model that generates high-quality videos with synchronized audio from text prompts. The foundation models in this collection excel at a range of tasks, including text-to-video synthesis, video personalization, and precise video editing.

Examples of the different capabilities of Movie Gen.
source - https://ai.meta.com/static-resource/movie-gen-research-paper

Key Features of Movie Gen

  • High-Quality Video Generation: Produces up to 16 seconds of 1080p video at 16 frames per second.
  • Audio Integration: Generates high-fidelity audio synchronized with video content.
  • Personalized Video Creation: Tailors videos based on user-supplied images or inputs.
  • Instruction-Based Editing: Allows precise control and editing of video content through text instructions.
  • Scalability and Efficiency: Achieves high scalability through innovations in parallelization and architecture simplifications.

Capabilities/Use Case of Movie Gen

  • Text-to-Video Synthesis: It gives fully realized videos given a natural-language description.
  • Personalized Video Creation:  Generates videos from user-provided images or other inputs.
  • Instruction-Based Video Editing: Enables highly precise video editing driven by text instructions.
  • Real-World Application Scenarios: Useful for creating social media content, film production, or highly targeted marketing campaigns. For example, screenwriters can use Movie Gen to develop ideas from scripts or test multiple plot directions, while content creators can build engaging stories for videos and animations.

How does Movie Gen Work?/Architecture/Workflow

Movie Gen is built with scalability and efficiency in mind. It uses a simple transformer backbone, much like LLaMa3, so it can process the large datasets needed for video generation. Movie Gen also uses flow matching for training, which offers better training and inference speed than diffusion models. Everything fits in a single model operating in a compressed space, which simplifies the architecture, makes training easier, and yields realistic video motion.

Overview of the joint image and video generation pipeline.
source - https://ai.meta.com/static-resource/movie-gen-research-paper

For the text-to-video model shown in the figure above, Movie Gen follows a straightforward workflow for turning text prompts into dynamic videos. First comes the user's text prompt, which is encoded using pre-trained text encoders such as UL2, ByT5, and MetaCLIP. These encoders capture both the meaning and the visual content of the prompt, providing rich context for the model. The encoded prompt then guides the generative process. At the core of the architecture sits the Temporal Autoencoder (TAE), which compresses input images and videos into a lower-dimensional space that is much easier to train and run inference on.

In this compressed latent space, a single transformer-based model inspired by LLaMa3 takes over. It uses the encoded text prompt to produce an output within the latent space, so one model handles both image and video generation, fed with vast amounts of data. The TAE decoder then converts the latent representation back into the final image or video. This efficient process lets Movie Gen create visual content that aligns well with the text.
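To illustrate the flow-matching objective mentioned above, here is a toy training step in PyTorch. The tiny network stands in for the LLaMa3-like backbone and the random tensors stand in for TAE latents; the real Movie Gen objective, conditioning, and scale are far more involved, so this is only a conceptual sketch.

```python
import torch
import torch.nn as nn

# Toy stand-in for the LLaMa3-like backbone operating on TAE latents.
backbone = nn.Sequential(nn.Linear(64 + 1, 256), nn.SiLU(), nn.Linear(256, 64))

def flow_matching_loss(clean_latents: torch.Tensor) -> torch.Tensor:
    noise = torch.randn_like(clean_latents)
    t = torch.rand(clean_latents.size(0), 1)             # one random timestep per sample
    x_t = (1.0 - t) * noise + t * clean_latents          # point on the noise-to-data path
    target_velocity = clean_latents - noise              # velocity the model should predict
    pred_velocity = backbone(torch.cat([x_t, t], dim=-1))
    return ((pred_velocity - target_velocity) ** 2).mean()

latents = torch.randn(8, 64)      # stand-in for TAE-compressed video latents
loss = flow_matching_loss(latents)
loss.backward()                   # gradients flow to the backbone as in ordinary training
print(loss.item())
```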

Advanced Technologies Behind Movie Gen Model

Movie Gen incorporates several smart AI and machine learning techniques to produce its videos. Here is a simplified look at key technologies it uses, aside from the ones mentioned above:

  • Supervised Fine-Tuning (SFT): After the first round of training, Movie Gen receives further training on high-quality videos and captions. This teaches it detailed concepts and more styles, making the videos look better and follow the captions more closely.
  • Multi-Step Training Pipeline: Training proceeds step-wise, starting with low-resolution images, then moving to higher-resolution images, and finally to videos. The model first learns basic visuals, then motion and scenes.
  • Model Parallelism: Because Movie Gen is huge, model parallelism is used to divide the workload across multiple GPUs. This makes training faster and allows larger models to be used.
  • 3D Convolutional Layers and Cross-Attention Modules: The 3D convolutional layers break video information into smaller parts that are fed into the main model, while the cross-attention modules inject the text prompt into the video generation process.
  • Vision Token Concatenation and Backtranslation: Vision token concatenation adapts generation for personalized video, and backtranslation is used to train the model for video editing without supervised data.

Together, these techniques make it possible for Movie Gen to generate high-quality videos.

Performance Evaluation with Other Models

The technical document first compares the design of Movie Gen and its features with other models, mainly for text-to-video generation. Overall video quality is the primary evaluation between Movie Gen and systems such as Runway Gen3, LumaLabs, and OpenAI Sora. The checks cover frame consistency, the natural motion of elements, and the completeness of the motion each model generates when rendering realistic and visually appealing videos. The outcome shows that Movie Gen creates higher-quality videos than its competitors.

Movie Gen Video net win rate vs. prior work
source - https://ai.meta.com/static-resource/movie-gen-research-paper

Another important test is text alignment, where the videos are compared on how well they match the user's text prompts. This means ensuring that subjects and their actions within a video align closely with the description given in the prompt. Movie Gen is pitted against the same commercial models on a set of text prompts covering several ideas and complexity levels.

Beyond these main tests, further evaluations cover other capabilities, including video personalization, video editing, and audio generation. These comparisons between Movie Gen and the best models in each area were meant to find where Movie Gen still needs improvement. Its video editing capabilities are tested using benchmarks such as TGVE+ and a new Movie Gen Edit Bench, comparing how well models follow user instructions, preserve the input video, and maintain overall visual quality.

How to Access and Use Movie Gen?

Currently, Movie Gen is not available for public use. Meta plans to collaborate with filmmakers and content creators to refine the model before a potential future release. Interested users who want to get the latest updates can find all relevant links for this AI model at the end of this article.

Limitations and Future Work

Movie Gen is quite powerful but has certain limitations: it only generates videos up to 16 seconds long and is computationally intensive. Future work aims to improve complex scene understanding, implement safeguards against misuse, and reduce resource requirements so the tool becomes as accessible as others.

Conclusion

Movie Gen is an advanced tool that pushes the boundaries of AI-driven video generation and editing. Its unique features and capabilities set it apart from other models, making it an important tool for content creators as well as filmmakers.


Source
Blog: https://ai.meta.com/blog/movie-gen-media-foundation-models-generative-ai-video/
Research Paper: https://ai.meta.com/static-resource/movie-gen-research-paper
Meta Website: https://ai.meta.com/research/movie-gen/


Disclaimer - This article is intended purely for informational purposes. It is not sponsored or endorsed by any company or organization, nor does it serve as an advertisement or promotion for any product or service. All information presented is based on publicly available resources and is subject to change. Readers are encouraged to conduct their own research and due diligence.

Monday 7 October 2024

Liquid Foundation Models: Revolutionizing AI with First-Principles Design

Presentational View

Introduction

Control of Artificial Intelligence (AI) capabilities has made significant advances in recent years. These advances are driven by the requirement that AI systems work and function safely, responsibly, and ethically. Properly defined boundaries, limitations, and guidelines aim to minimize the possible risks and negative outcomes of AI systems.

Still, the road to robust AI capability control is fraught with heavy challenges. Biased training data, lack of transparency in decision-making processes, and exploitation by bad actors remain significant hurdles. Advanced computational techniques for building more reliable and trustworthy AI systems are the means by which Liquid Foundation Models (LFMs) seek to surmount these challenges.

Who invented Liquid Foundation Models?

Liquid AI is a firm founded by former researchers from the Massachusetts Institute of Technology's (MIT) CSAIL, and it develops what it calls Liquid Foundation Models. The company comprises experts in dynamical systems, signal processing, and numerical linear algebra. The guiding goal behind LFMs is best-in-class intelligent and efficient systems at every scale, designed to handle large amounts of sequential multimodal data, enable advanced reasoning, and achieve reliable decision-making.

What are Liquid Foundation Models?

Liquid Foundation Models are a new class of generative AI models built from first principles. They achieve state-of-the-art performance at every scale while keeping a far smaller memory footprint and higher efficiency during inference. LFMs are designed to handle a wide variety of sequential data, including video, audio, text, time series, and signals.

Model Variants

Liquid Foundation Models are offered in three versions:

  • LFM-1B (1.3B parameters): Best suited for highly resource-constrained environments.
  • LFM-3B (3.1B parameters): Optimized for edge deployment.
  • LFM-40B (40.3B MoE): A Mixture-of-Experts model designed for tackling tougher problems.

Key Features of Liquid Foundation Models

  • Multi-Modal Support: LFMs natively support multiple data modalities such as text, audio, images, and video.
  • Token Mixing & Channel Mixing: The computational units specialize in token mixing and channel mixing, which improves the model's ability to process and consolidate different types of data.
  • Efficient Inference: Uses less memory and less compute at inference than equivalent transformer-based architectures.
  • Adaptive Computation: Includes adaptive linear operators that modulate computation based on the input.
  • Scalability: LFMs are optimised for performance, scalability, and efficiency on a wide range of hardware platforms.

Capabilities and Applications of LFMs

  • General and Specialized Knowledge: LFMs stand out in both general and specialized knowledge domains, enabling them to perform many tasks.
  • Math and Logical Reasoning: LFMs are excellent at math and logical reasoning. For instance, they can solve fairly complex problems very quickly, which is especially useful in engineering and data science work.
  • Handling Long Tasks: LFMs are efficient with long tasks, making them well suited for summarizing documents, writing articles, or powering conversational AI.
  • Financial Services: LFMs can filter through large data sets to detect fraud, enhance trading strategies, and surface the patterns that underpin intelligent investment decisions.
  • Biotechnology: LFMs help in drug development and genetic research, speeding up the digestion of complex biological data in the search for new treatments.

Innovative Architecture of LFMs

Liquid Foundation Models are built differently from transformer models. They employ a design with adaptive linear operators that act based on the input data, and consequently they can handle contexts of up to 1 million tokens in a memory-efficient way, rather than simply growing the model as traditional Large Language Models do. This lets them produce good results, adapt quickly, and use less memory.

Architectures feature custom computational units arranged in depth groups with additional featurizer interconnections
source - https://www.liquid.ai/liquid-foundation-models

LFMs use computational units adapted from other fields, such as dynamical systems, signal processing, and numerical linear algebra, arranged in depth groups. As shown in the figure above, this architecture promotes feature sharing and finer control over the model's computation, while also making it easier to understand how the model works. That, in turn, helps ensure the AI system operates safely and responsibly, in line with the intended ethical principles, preventing unintended consequences and increasing transparency in decision-making.

Instead of model scaling, Liquid focuses on 'featurization': the process of rearranging input data, in this case text or audio, into a structured format. This allows the computational units to be customized to the nature of the data and the target hardware. The aspects Liquid AI stresses most in its design are featurization and the complexity of the operators; by controlling these, Liquid AI balances model performance against efficiency, and through that balance maintains control over strong AI capability.
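Liquid AI has not published its architecture in code, so the PyTorch fragment below is only a conceptual illustration of what an input-dependent ("adaptive") linear operator can look like: the modulation applied to the output is itself computed from the input. The class name and gating scheme are invented for illustration and are not Liquid AI's design.

```python
import torch
import torch.nn as nn

class AdaptiveLinear(nn.Module):
    """Conceptual input-dependent linear operator: the per-feature modulation
    applied to the output is generated from the input itself."""

    def __init__(self, dim: int):
        super().__init__()
        self.base = nn.Linear(dim, dim)    # fixed linear transform
        self.gate = nn.Linear(dim, dim)    # produces an input-dependent modulation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The effective operator applied to x changes with x via the gate.
        return self.base(x) * torch.sigmoid(self.gate(x))

layer = AdaptiveLinear(32)
print(layer(torch.randn(4, 32)).shape)   # torch.Size([4, 32])
```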

Performance Evaluation

Liquid Foundation Models (LFMs) have shown top performance when compared to similar-sized language models using Eleuther AI’s evaluation tool. LFM-1B Model scores the highest in the 1B parameter category. It excels in benchmarks like MMLU (5-shot) with a score of 58.55 and HellaSwag (10-shot) with a score of 67.28. This shows how effective Liquid AI’s design is, especially in environments with limited resources.

Various benchmarks in the 1B category
source - https://www.liquid.ai/liquid-foundation-models

LFM-3B Model goes even further. It is more efficient than models like Phi-3.5 and Google’s Gemma 2, while using less memory. This makes it perfect for mobile and edge AI applications. It also outperforms other 3B parameter models, including transformers, hybrids, and RNNs, and even beats some 7B and 13B models. With a score of 38.41 on MMLU-Pro (5-shot), it is great for mobile text-based applications.

LFMs offer a new best performance/size tradeoff in the 1B, 3B, and 12B (active parameters) categories
source - https://www.liquid.ai/liquid-foundation-models

The LFM-40B model uses a Mixture-of-Experts (MoE) approach with 12B activated parameters, balancing size and output quality well. It performs as well as larger models but is more efficient because of its MoE design. As the figure above shows, the high scores on MMLU-Pro indicate that these LFMs are excellent at complex reasoning and problem-solving, highlighting their potential to tackle tough AI tasks that need advanced thinking skills.

Comparison with Other Leading AI Models

Among today's leading AI models are Liquid AI's Liquid Foundation Models (LFMs), Microsoft's Phi-3.5, and Google's Gemma 2, each with particular features and abilities. LFMs are built from first principles, drawing on dynamical systems, signal processing, and numerical linear algebra, which helps them perform well with less memory. Microsoft's Phi-3.5 models, including Phi-3.5-MoE, are designed to be powerful and cost-effective; they support several languages and come with robust safety features. Google's Gemma 2 models are lightweight and efficient, run well on various hardware platforms, and perform strongly despite their small size.

LFMs are unusual in that they do not depend on the transformer architecture, whereas Phi-3.5 and Gemma 2 do. This reduces the parameter count of LFMs, making them efficient while still performing well. Like Phi-3.5-MoE, the largest LFM applies a Mixture-of-Experts architecture, so only certain parameters are active during use, which makes it even more efficient. Gemma 2's main objectives are high performance and efficiency; it has strong safety features and can be integrated into many different frameworks.

LFMs are ideal for low-resource environments and, thanks to their novel design, are capable of tasks requiring advanced reasoning and decision-making. Phi-3.5 models are reliable and support many languages, making them a good fit for applications that require both. Gemma 2 models are highly cost-efficient and suit a wide range of applications, from cloud setups to local deployments. Overall, LFMs perform best in low-resource environments, making them a powerful tool for many AI applications.

How to Access and Use LFMs

LFMs are available for early testing and integration through a range of access points, including Liquid Playground, Lambda (both the Chat UI and the API), and Perplexity Labs. It is worth noting that while these access points allow for some degree of experimentation and, in certain cases, deployment, the models themselves are not open source.

Limitations and Future Work

Challenges LFMs face include zero-shot code tasks, exact numerical computation, and reasoning over time-sensitive information. Their training data is mostly English, so multilingual applications may not be fully effective. It is also unclear what the maximum token limit is.

Future work will scale model sizes, improve computational efficiency, and optimize for more modalities and hardware. Liquid AI hopes to achieve better alignment with human values through human preference optimization techniques.

Conclusion

Liquid Foundation Models (LFMs) provide an alternative focused on efficiency, scalability, and control. By combining conventional computing approaches with emerging computational paradigms, LFMs offer an effective and flexible solution for a variety of applications. These capabilities, together with the company's proprietary technology, make LFMs disruptive new tools capable of changing industries.


Source
Website: https://www.liquid.ai/liquid-foundation-models
LNN: https://www.liquid.ai/blog/liquid-neural-networks-research


Disclaimer - This article is intended purely for informational purposes. It is not sponsored or endorsed by any company or organization, nor does it serve as an advertisement or promotion for any product or service. All information presented is based on publicly available resources and is subject to change. Readers are encouraged to conduct their own research and due diligence.

Wednesday 2 October 2024

Llama3.2: Meta’s Open Source, Lightweight, and Multimodal AI Models

Presentational View

Introduction

Lightweight models for edge and mobile devices have seen wide adoption, reducing resource consumption while improving overall performance. They allow real-time processing and decision-making directly on devices, reducing latency and protecting privacy. Multimodal models, meanwhile, advance by incorporating diverse types of data, such as text and images, to deliver richer, more contextual outcomes. This integration opens up numerous applications, including image captioning and visual question answering.

These developments are part of a broader push to make AI more efficient, versatile, and able to handle complex operations accurately and at high speed. Llama3.2 embodies these improvements by offering enhanced edge AI and vision capabilities, providing both lightweight and multimodal functionality through a robust framework that developers will find highly useful for crafting innovative AI applications.

What is Llama3.2 ?

Llama3.2 is a new AI model family recently introduced by Meta, optimized to run on small devices such as phones and tablets. It is great for private, personalized AI, and it can work with both text and images, which makes it handy for many jobs.

Model Variations

  • Llama 3.2 1B: A small, text-only model, ideal for small devices.
  • Llama 3.2 3B: A larger text-only model with stronger capabilities.
  • Llama 3.2 11B Vision: Takes both text and images as input.
  • Llama 3.2 90B Vision: An even bigger model for more complex tasks, also accepting text and images.

Key Features of Llama3.2

  • Multimodal Capabilities: Handles both text and images, hence very versatile.
  • Optimized for Edge Devices: Runs well on small devices, making it fast and private.
  • Improved Performance: Better at following instructions and summarizing than older versions.
  • Long Context Length: Supports context lengths of up to 128K tokens, so it can comprehend and process that much input at once.
  • Improved Privacy: Processing on the device itself keeps information private.
  • Multilingual Support: Works on multiple languages such as English, German, French, Italian, Portuguese, Hindi, Spanish and Thai.

Capabilities/Use Cases of Llama3.2

  • Image Captioning: Llama3.2 can describe images in detail, which makes it useful for applications like automatic photo tagging and visual content generation.
  • Visual Question Answering: Its ability to answer questions about visual data increases its utility in educational applications and customer service.
  • Document Understanding: Llama3.2 can read and understand documents containing images, charts, graphs, and the like. It is very helpful for scanning complex documents, extracting relevant data, and preparing summaries.
  • Personalized AI Agents: The model can serve as an on-device assistant that summarizes and acts in multiple languages, helping users more effectively in their daily activities with personalized, context-aware services.

    source - https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/
  • Business Insights: Llama3.2 interprets visual business data and produces recommendations for improvement. It helps businesses derive actionable insights from their data, streamline operations, and base decisions on visual analytics.

Unique Architectural Enhancements in Llama 3.2

Llama 3.2 connects a pre-trained image encoder to a language model through special layers known as cross-attention layers. This lets the model handle images as fluently as text, so it can understand and generate natural language that relates to even complicated visual information. The vision adapter is used together with the existing Llama 3.1 language model, which retains all its language skills while gaining the ability to understand images.

The cross-attention layers let the model focus on relevant parts of an image when processing text, and vice versa, which helps with tasks that require associating parts of an image with text. They feed the image data into the main language model: raw image data is first processed by an image encoder, which turns it into a format the language model can understand. The adapter is trained on a huge set of image-text pairs. During training, the image encoder's parameters are updated while the language model's remain frozen, which lets the adapter connect image and text data without disturbing Llama 3.1's language skills.
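The snippet below sketches, in PyTorch, how a cross-attention layer can let text hidden states attend to image tokens, with a residual connection that leaves the language pathway intact. Dimensions, class names, and wiring are illustrative assumptions, not Meta's actual adapter code.

```python
import torch
import torch.nn as nn

class ImageTextCrossAttention(nn.Module):
    """Text hidden states attend to image tokens; the residual connection
    leaves the original language representation untouched."""

    def __init__(self, text_dim: int = 512, image_dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, text_dim)   # map image tokens to the text width
        self.attn = nn.MultiheadAttention(text_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(text_dim)

    def forward(self, text_states: torch.Tensor, image_tokens: torch.Tensor) -> torch.Tensor:
        img = self.image_proj(image_tokens)                            # (batch, img_len, text_dim)
        attended, _ = self.attn(query=text_states, key=img, value=img)
        return self.norm(text_states + attended)                       # residual keeps language skills

layer = ImageTextCrossAttention()
text = torch.randn(2, 16, 512)     # stand-in for language-model hidden states
image = torch.randn(2, 64, 768)    # stand-in for vision-encoder output
print(layer(text, image).shape)    # torch.Size([2, 16, 512])
```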

Pruning and Distillation—on the 1B and 3B models.
source - https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/

The Llama 3.2 1B and 3B models are light and efficient, reaching that efficiency through pruning and knowledge distillation applied to the original Llama 3.1 models. The process starts with structured pruning of the 8B Llama 3.1 model, systematically removing parts of the network and adjusting weights and gradients to shrink the model while preserving as much performance as possible. The pruned model is then subjected to knowledge distillation, with the larger 8B and 70B Llama 3.1 models acting as teachers: their output probabilities (logits) are incorporated into the pruned model's pre-training, helping it perform better than it would if trained from scratch alone. The result is a set of 1B and 3B models optimized for on-device deployment, balancing the constraints of smaller devices with the performance of full-sized models.
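A minimal sketch of logit-based knowledge distillation is shown below: the pruned "student" is trained to match the softened output distribution of a larger "teacher" alongside the usual next-token loss. The temperature, loss weighting, and vocabulary size are illustrative placeholders rather than Meta's training recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between softened teacher and student distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy on the ground-truth next tokens.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

vocab = 32000                                                # illustrative vocabulary size
student_logits = torch.randn(4, vocab, requires_grad=True)   # pruned-model outputs
teacher_logits = torch.randn(4, vocab)                       # outputs of the larger teacher
labels = torch.randint(0, vocab, (4,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```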

Performance Evaluation with Other Models

Llama 3.2 shows strong skills in recognizing images and understanding visual information. As shown in the table below, it performs well on many tests. For example, it excels in tasks like Visual Question Answering (VQA) and Document Visual Question Answering (DocVQA), meaning it can understand and answer questions based on images and document layouts. Llama 3.2 is also good at image captioning, finding images that match text, and connecting images to text, which proves its strong ability to understand and reason with images.

Vision Instruction-tuned Benchmarks
source - https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/

The lightweight 1B and 3B versions of Llama 3.2 are made for use on devices. They have shown they can compete with other similar models. Tests show that the 3B model does better than both Gemma 2 and Phi 3.5-mini in tasks like following instructions, summarizing, rewriting prompts, and using tools. The 1B model also performs well compared to the Gemma model. These results show that Llama 3.2 can balance efficiency and accuracy in text tasks, making it good for devices with limited resources.

Lightweight Instruction-tuned Benchmarks
source - https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/

Llama 3.2 competes well with top models like GPT-4o and Claude 3 Haiku, especially in image recognition and summarization tasks. It also performs better than its predecessor, Llama 3.1, in many tests. The improvement is clearest in visual and math reasoning tasks, where Llama 3.2 often outperforms models like Claude 3 Haiku and GPT-4o mini. This shows that Llama 3.2 handles both text and image tasks with better skill and efficiency.

How to Access and Use Llama3.2?

Llama3.2 is available for download from Hugging Face and from the official Llama website under the Llama 3.2 Community License Agreement. It can also be used through cloud services such as Amazon Bedrock or Google Cloud, or run locally on personal computers and edge devices. Detailed instructions and documentation are available on Llama's website and in the GitHub repository. Llama3.2 is free, open source, and commercially usable under its particular license. If you are interested in learning more, all relevant links are provided at the end of this article.
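For a quick start with one of the lightweight text-only variants, the hedged example below follows the usual Hugging Face transformers chat workflow. The repository id matches Meta's naming on Hugging Face but is gated behind license acceptance, and generation settings should be checked against the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-3B-Instruct"   # gated repo: accept the license on Hugging Face first
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize the benefits of on-device AI in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```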

Limitations and Future Work

Llama3.2 is a giant leap for AI technology, but problems still stand before it. Like other large language models, it can produce wrong, biased, or inappropriate answers, because it learns from voluminous datasets that contain incorrect or biased data. The vision capabilities of Llama3.2 work reliably only in English, which limits their utility for users of other languages. Moreover, the model cannot be used in regions with rigid regulations such as the EU and UK.

Future work on Llama3.2 will continue to focus on safety and reducing bias, along with extending vision features to more languages and improving the model's ability to reason and explain its answers.

Conclusion

Llama3.2 is a truly strong AI release: it deals well with text and images and, just as important, performs well enough to fit on small devices like phones and tablets. Thanks to its openness and customization options, it is a valuable resource for developers and businesses alike.


Source
Website: https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/
Huggingface models: https://huggingface.co/collections/meta-llama/llama-32-66f448ffc8c32f949b04c8cf
GitHub Repo: https://github.com/meta-llama/llama-models/tree/main/models/llama3_2


Disclaimer - This article is intended purely for informational purposes. It is not sponsored or endorsed by any company or organization, nor does it serve as an advertisement or promotion for any product or service. All information presented is based on publicly available resources and is subject to change. Readers are encouraged to conduct their own research and due diligence.

Thursday 26 September 2024

GRIN-MoE: Microsoft’s Revolutionary Mixture-of-Experts Model

Presentational View

Introduction

One of the large strides made by traditional Mixture-of-Experts (MoE) models is sparse computation: they only activate a few modules at a time. This has made MoE models much larger and more efficient for big tasks, but they still have problems, such as difficulty optimizing gradients through the discrete expert selection.

MoE models have tried to address these issues over time, but some problems remain unresolved. The GRIN-MoE model tries to solve them: it uses sparse gradient estimation for expert selection and sets up model parallelism to prevent token dropping. These features make MoE models more scalable and better performing, further assisting AI advancement. GRIN-MoE was developed by a team of researchers at Microsoft, motivated by the need to overcome the limitations of traditional MoEs and improve their scalability and efficiency.

What is GRIN-MoE?

GRIN-MoE is short for 'GRadient-INformed Mixture-of-Experts'. It is a new AI model that rethinks how a Mixture-of-Experts (MoE) should work, using special techniques that make such systems much more scalable and efficient than traditional MoE models.

Key Features of GRIN-MoE

  • Sparse Computation: GRIN-MoE activates only a subset of its parameters, making it both computationally efficient and powerful.
  • Sparse Gradient Estimation: It uses SparseMixer-v2 to estimate the gradients for expert routing, a big leap over older methods.
  • Model Parallelism: It sets up parallelism within the model so that tokens are not dropped, which also keeps training efficient.
  • High Performance: Despite its lean size, GRIN-MoE outscores several other models in coding and mathematics.
  • Efficient Resource Use: It activates only 6.6 billion parameters during inference, balancing performance with efficiency.
  • Scalability: The model scales up MoE training without relying on expert parallelism or token dropping, which is less demanding for organizations with limited resources.

Capabilities/Use Cases of GRIN-MoE

The GRIN-MoE model has demonstrated excellence on a variety of complex tasks by breaking problems down into smaller sub-problems, each handled by different experts. Some interesting use cases follow:

  • Multi-Modal Learning: GRIN-MoE provides in-depth descriptions of images, answers questions on images by tying together visual and language understanding, and develops immersive and interactive gaming experiences.
  • Personalized Suggestions: The model recommends products or services based on customer preferences, suggests articles, videos, or music according to the user's tastes, and creates personalized learning paths.
  • Drug Discovery and Development: GRIN-MoE computes 3D molecular structures for drug target discovery and models drug efficacy and side effects.
  • Climate Modeling and Prediction: The model can build precise climate models to understand shifting climate patterns, helping make extreme weather more predictable and improving disaster preparedness.

These applications depict the flexibility and efficiency of GRIN-MoE in dealing with complex tasks.

How GRIN-MoE model Works?

The GRIN-MoE model is a type of Mixture-of-Experts (MoE) architecture that uses sparse gradient estimation for expert routing and sets up model parallelism to avoid token dropping. It features 16 experts per layer and activates the top 2 experts for each input token, reducing the number of active parameters while maintaining high performance. The model employs SparseMixer-v2 to estimate the gradient related to expert routing more accurately than traditional methods. This technique allows the model to directly estimate the router gradient, enhancing training accuracy and effectiveness.
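To see why gradient estimation matters here, the sketch below shows a generic straight-through estimator for a discrete top-2 routing decision: the forward pass uses a hard expert mask while gradients flow through the soft gate. This illustrates the problem GRIN-MoE targets, but it is not SparseMixer-v2 itself, which estimates the router gradient more accurately; the expert sizes and gating layer are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def top2_route(gate_logits: torch.Tensor) -> torch.Tensor:
    probs = F.softmax(gate_logits, dim=-1)                     # (tokens, num_experts)
    hard = torch.zeros_like(probs).scatter(-1, probs.topk(2, dim=-1).indices, 1.0)
    # Straight-through trick: the forward value is the hard 0/1 top-2 mask,
    # while gradients flow through the differentiable softmax probabilities.
    return hard + probs - probs.detach()

experts = nn.ModuleList(nn.Linear(32, 32) for _ in range(16))  # 16 experts, as in GRIN-MoE
gate = nn.Linear(32, 16)                                       # illustrative gating layer
x = torch.randn(8, 32)                                         # 8 tokens
mask = top2_route(gate(x))
y = sum(mask[:, e:e + 1] * experts[e](x) for e in range(16))   # only 2 experts contribute per token
y.sum().backward()                                             # the gate still receives gradients
print(gate.weight.grad.abs().sum() > 0)                        # tensor(True)
```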

Additionally, GRIN-MoE’s model parallelism strategy eliminates the need for capacity factors and token dropping, which can hinder training efficiency in conventional MoE models. By leveraging pipeline and tensor parallelism, GRIN-MoE distributes different parts of the model across various devices, achieving impressive training speeds even with a larger number of parameters. The architecture is designed to scale more effectively and efficiently than traditional MoE models, demonstrating over 80% relative throughput compared to a dense model with the same active parameters.

Its scaling behavior remains consistent with dense models as the model size increases, making it an attractive solution for complex tasks that require dividing the problem into smaller sub-problems and using different 'experts' to handle each sub-problem. So overall, the GRIN-MoE model is efficient and scalable, making it a powerful tool for handling complex tasks.

Performance Evaluation of GRIN-MoE

The GRIN-MoE model demonstrates impressive performance across a wide range of benchmarks, as shown in table below. This comprehensive evaluation includes tasks spanning reasoning, mathematics, coding, and language understanding. Notably, GRIN-MoE outperforms many open-source models with similar active parameter counts, such as Mixtral 8×7B and Llama3 8B. It even surpasses Mixtral 8×22B on most tasks, showcasing its efficiency in leveraging its architecture. While it falls short of the performance of much larger models like Llama3 70B and GPT-4o, this is expected given the vast difference in computational and data resources used in training these models.

Model Performance on Popular Benchmarks
source - https://arxiv.org/pdf/2409.12136

However, the evaluation on LiveBench-2024-07-25, presented in Table below, reveals some limitations of GRIN-MoE. While the model excels in reasoning, coding, and mathematics tasks, it underperforms in natural language tasks. This discrepancy is likely due to the specific focus of its training data on reasoning and coding abilities. The model's average score of 16.9 on natural language tasks is notably low compared to other models with similar overall performance on this benchmark.

GRIN MoE performance on LiveBench-2024-07-25
source - https://arxiv.org/pdf/2409.12136

Beyond these standardized benchmarks, GRIN-MoE's performance was also evaluated on real-world tasks, including translated questions from the 2024 GAOKAO exam. The model demonstrated strong mathematical reasoning capabilities, outperforming larger models like Llama3 70B on these challenging problems. Additional analyses were conducted to understand the model's behavior, including studies of its routing distributions across different tasks and layers. These evaluations collectively paint a picture of GRIN-MoE as a highly capable model, particularly in STEM-related tasks, while also highlighting areas for potential improvement in natural language processing.

GRIN-MoE vs. Phi-3.5 MoE vs. Mixtral MoE

GRIN-MoE, Phi-3.5-MoE, and Mixtral MoE differ in their features and capabilities. GRIN-MoE's gradient-informed approach lets it route experts very efficiently, with fewer active parameters and high performance. This is especially beneficial in environments with limited memory or compute, and where low latency matters. Phi-3.5-MoE has 16 experts and 42 billion total parameters, activating 6.6 billion parameters when it uses two experts, which means more resource usage overall. Mixtral MoE has around 45 billion parameters and 8 experts per MLP layer, requiring activation of even more parameters, so it can be very resource intensive.

Comparing architectures, GRIN-MoE uses SparseMixer-v2 to estimate the gradient associated with expert routing, without dropping tokens or relying on expert parallelism, which sets it apart from Phi-3.5-MoE's reliance on supervised fine-tuning, proximal policy optimization, and direct preference optimization. Mixtral MoE is a decoder-only model that selects from 8 distinct groups of parameters per layer and activates about 12.9 billion parameters per token. GRIN-MoE is extremely efficient and scalable, reaching high performance without requiring extensive computational resources.

Thus GRIN-MoE leads on efficiency, performance, and specialized tasks, making it a strong choice where robust reasoning and optimal use of resources are needed. Built on novel architectural innovations and training mechanisms, it achieves high-end performance without the computational intensity of other Mixture-of-Experts variants. For applications that demand the best use of resources and high performance on coding and mathematics tasks, GRIN-MoE compares favorably with Phi-3.5-MoE and Mixtral MoE.

How to Access and Use GRIN-MoE?

GRIN-MoE is released under the MIT License for a wide range of uses. There are two main ways to access it: GitHub and Hugging Face. Step-by-step instructions for local execution are provided in the GitHub repository, and the model can also be run locally using Docker, which makes setup easy. An interactive demo is also provided for easy experimentation with GRIN-MoE. Interested users can find all relevant links at the end of this article.

Limitations and Future Work

Although GRIN-MoE represents progress in AI, it still has limitations. The model is less effective on natural language tasks because most of its training data came from reasoning and coding datasets; future work should include more diverse data with many natural language examples. The model also uses softmax to approximate the argmax operation, which works well, but approximating top-k sampling this way is trickier and requires more research. Improvements in these areas could make GRIN-MoE even better.

Conclusion

GRIN-MoE is far more scalable and efficient than previous MoE models. It relies on sparse gradient estimation and model parallelism to surpass the limitations where older MoE models failed, and it performs significantly better on challenging tasks like coding and math. GRIN-MoE uses resources economically and offers advanced features, making it a great tool for many different uses.

Source
Research Paper: https://arxiv.org/abs/2409.12136 
Research Document: https://arxiv.org/pdf/2409.12136  
GitHub Repo: https://github.com/microsoft/GRIN-MoE
Hugging Face: https://huggingface.co/microsoft/GRIN-MoE


Disclaimer - This article is intended purely for informational purposes. It is not sponsored or endorsed by any company or organization, nor does it serve as an advertisement or promotion for any product or service. All information presented is based on publicly available resources and is subject to change. Readers are encouraged to conduct their own research and due diligence.

Tuesday 24 September 2024

Qwen2.5: Versatile, Multilingual, Open-Source LLM Series

Presentational View

Introduction

In the context of model scale in AI, adding more parameters and computational capability allows models to capture more complicated processes and recognize more varied circumstances. This advancement is driven by the desire to enable new uses of artificial intelligence and better performance. High-quality data is equally important when training AI models, since it guarantees quality outputs: feeding models a large, clean, and realistic dataset provides many examples that help them produce very good results.

Incorporating external knowledge sources, such as knowledge graphs, into AI models increases the range and relevance of responses. This approach has advantages over models whose predictions depend solely on the training data. Progress in data gathering, model design, and training algorithms keeps improving these areas, but AI models still face problems such as data quality issues, imbalance, and the vast computation required. Qwen2.5 aspires to overcome these through high-quality data, large-scale architectures, and sophisticated training techniques, aiming to become a new reference point for AI progress.

Who Developed Qwen2.5?

Qwen2.5 was produced by the Qwen team within Alibaba Cloud. The team includes experts in artificial intelligence and machine learning who contributed to the model throughout data collection, algorithm design, and optimization. Alibaba Cloud, a major player in the cloud computing sector, is committed to advancing AI technology and creating innovative solutions for various applications.

What is Qwen2.5?

Qwen2.5 is a family of large language models: a series of dense, efficient, decoder-only architectures capable of performing a wide range of NLP tasks. They come in various sizes, from as few as 0.5 billion parameters to as many as 72 billion, underscoring their versatility.


source - https://qwenlm.github.io/blog/qwen2.5/

Model Variants

The Qwen2.5 models are organized into three main series to cater to different needs:

  • Qwen2.5 Series: The general-purpose series for text generation tasks. It includes base models and instruct variants, where the instruct variants are tuned for better instruction following and dialogue.
  • Qwen2.5-Coder Series: Models designed for coding tasks, trained on a large corpus of code. They handle code generation, completion, reasoning, and repair, and are best suited for software development and related uses.
  • Qwen2.5-Math Series: Models dedicated to mathematical tasks, supporting both Chinese and English. They use sophisticated approaches, including Chain-of-Thought and Tool-Integrated Reasoning, to solve computational challenges.

QWEN2.5 Large Language Models
source - https://qwenlm.github.io/blog/qwen2.5/

Every series includes models of different sizes, from 0.5B to 72B parameters, to match the available computational power and the requirements of specific tasks. In addition, proprietary variants such as Qwen-Plus and Qwen-Turbo are available through an API for certain use cases.

Key Features of Qwen2.5

  • Long Text Generation: Qwen2.5 can generate texts of up to 8K tokens, making it useful for creating long documents with specific information.
  • Structured Data Understanding: The model is notably better at comprehending structured data, such as tables, and at giving accurate answers in that context.
  • Multilingual Support: Qwen2.5 supports over 29 languages, including Chinese, English, French, and Spanish, for multilingual content creation and translation.
  • Enhanced Instruction Following: Qwen2.5 is very capable of handling executable directives and producing structured output, especially in JSON format.
  • Long Context Support: It is designed to process contexts of up to 128K tokens while maintaining coherence across the text.
  • Larger and Higher-Quality Pre-training Dataset: Drawing on up to 18 trillion tokens, Qwen2.5's pre-training emphasizes high-quality code, mathematics, and multilingual data to solve problems across various fields.

Use Cases of Qwen2.5

  • Enhanced Mobile Applications: Qwen2.5's small-size variants, including the 3B model, allow high-performing, flexible, AI-based mobile applications that remain effective on handheld devices.
  • Offline Language Translation: Qwen2.5 can power translation apps, enabling travelers to use translation services even where connectivity is very poor.
  • Personalized On-Device Assistants: With improved instruction following and dialogue generation, Qwen2.5 can support more complex on-device virtual assistants that comprehend multiple commands and user preferences.
  • Personalized Code Learning Assistants: With its knowledge of programming languages, Qwen2.5-Coder can power interactive code-learning platforms that adapt to each user's learning preferences and offer immediate feedback during coding tasks.
  • Solving Complex Mathematical Problems in Multiple Languages: Qwen2.5-Math's multilingual support helps researchers from different countries retrieve information and collaborate on mathematics.
  • Developing Accessible Math Learning Resources: Qwen2.5-Math's ability to produce explanations supports the creation of math learning material that students with learning disabilities can understand, making mathematics more accessible.

The above use cases illustrate how Qwen2.5 can serve as a versatile, sophisticated, and usable system that can be extended to improve applications in many fields.

Architecture and Design of Qwen2.5

Qwen2.5's transformer-based architecture comprises components such as Rotary Position Embeddings (RoPE), the SwiGLU activation non-linearity, and RMSNorm for stable training. RoPE encodes absolute positional information via a rotation matrix and introduces an explicit relative-position dependency into the self-attention computation. The model also adopts attention with QKV bias, which helps it manage the relative weight of different words in a sequence and improves quality.
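The short sketch below shows the core RoPE rotation applied to a single sequence of query (or key) vectors, using the "rotate-half" convention common in open-source implementations. Dimensions and the base frequency are illustrative; production code interleaves this with attention and caches the angles.

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # x: (seq_len, dim) query or key vectors, with dim even.
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)          # per-pair frequencies
    angles = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1) * freqs   # (seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    # Rotate each (x1, x2) pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = torch.randn(16, 64)            # 16 positions, head dimension 64
print(apply_rope(q).shape)         # torch.Size([16, 64])
```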

Beyond these, Qwen2.5 includes improvements for speed and for dealing with long sequences. During inference, techniques such as Grouped Query Attention (GQA) make more efficient use of the Key-Value cache and reduce the model's memory load. Dual Chunk Attention (DCA) and YARN, as in Qwen2, focus on efficient contextual comprehension, which is critical for long-context language modeling: DCA helps the model process lengthy contexts more readily, while YARN adjusts the positional encoding so the usable context window can be extended.

Finally, Qwen2.5's design allows it to incorporate large amounts of outside knowledge, which improves its reliability and helps it avoid hallucinations or false assertions. This design, coupled with its exposure to huge datasets, enables it to handle long contexts, understand structured information, and provide more accurate, context-specific responses. These advances ensure that Qwen2.5 can deal with complicated jobs more effectively.

Performance Evaluation

A number of key benchmark assessments demonstrate the performance of Qwen2.5 compared with other leading models. One evaluation concerns the base language models, especially Qwen2.5-72B. As shown in the table below, Qwen2.5-72B is tested on a variety of tasks, including general language comprehension (MMLU, MMLU-Pro), reasoning (ARC-C, Winogrande), mathematics and science (MATH, GSM8K), and programming (HumanEval, MBPP). Qwen2.5-72B clearly improves on Qwen2-72B in most of these experiments, with the greatest gains in general, mathematical, and coding tasks, signaling real improvements in knowledge representation, problem solving, and code generation. Furthermore, Qwen2.5-72B reaches accuracy comparable to Llama-3-405B while using roughly one-fifth as many parameters, marking high efficiency.

The performance of base models especially Qwen2.5-72B on a variety of tasks
source - https://qwenlm.github.io/blog/qwen2.5/

Further evidence comes from the instruction-tuned models, which are optimized for instruction following and dialogue. The results shown in the table below compare Qwen2.5-72B-Instruct with other instruction-tuned models such as Llama-3.1-70B-Instruct and Mistral-Large2-Instruct. Qwen2.5-72B-Instruct performs exceptionally well, exceeding even the larger Llama-3.1-405B-Instruct on critical tasks such as mathematics (MATH: 83.1), coding (LiveCodeBench: 55.5), and alignment with human preferences (Arena-Hard: 81.2). This underlines the effectiveness of Qwen2.5's instruction tuning and its high performance on intricate, human-like tasks.

Comprehensive results from instruction-tuned versions across various benchmarks
source - https://qwenlm.github.io/blog/qwen2.5/

Other Qwen2.5 variants, such as Qwen2.5-14B, Qwen2.5-32B, and Qwen2.5-7B, have also been tested. On benchmarks such as MMLU, BBH, MATH, HumanEval, and MBPP, these models consistently surpass competitors of similar or even greater scale. The results reaffirm that Qwen2.5 delivers sound performance from models that, while not especially large, can work under limited-capacity conditions. The evaluations also cover multilingual ability, coding efficiency and effectiveness, and mathematics-oriented tasks, again indicating improvements over previous versions and comparable models.

How to Use Qwen2.5?

Regarding usage, Qwen2.5 is already hosted on GitHub, Hugging Face, and ModelScope. Instructions for local deployment and usage are available in the Qwen2.5 GitHub repository. The Qwen2.5 collection on Hugging Face provides the different model forms and describes their functionality, while ModelScope is useful for deployment or running inference on the model. There are also options for running Qwen2.5 fully locally using frameworks such as llama.cpp and Ollama, and links to online demos give a remote glimpse of Qwen2.5. In addition, the model is open source, and its licenses allow commercial use.
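A typical way to try an instruct variant locally is the standard transformers chat workflow sketched below. The repository id follows the naming used in the Hugging Face collection linked at the end of this article and should be verified there; the prompt content is just an example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"   # check the exact repo name in the collection linked below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant. Answer in JSON."},
    {"role": "user", "content": "List three uses of structured data understanding."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```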

Conclusion

Qwen2.5 provides durable solutions across many applicable fields. Thanks to its flexibility in dealing with large amounts of text, structured data, and multiple languages, it is a very useful tool for both programmers and scientists. Against the existing difficulties, Qwen2.5 opens new opportunities and paths to innovation across many branches and spheres.


Source
Blog Website: https://qwenlm.github.io/blog/qwen2.5/
LLM Blog Website: https://qwenlm.github.io/blog/qwen2.5-llm/
GitHub Repo: https://github.com/QwenLM/Qwen2.5
Hugging Face Model collection: https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e
Base Model research paper: https://arxiv.org/pdf/2407.10671


Disclaimer - This article is intended purely for informational purposes. It is not sponsored or endorsed by any company or organization, nor does it serve as an advertisement or promotion for any product or service. All information presented is based on publicly available resources and is subject to change. Readers are encouraged to conduct their own research and due diligence.
