Alibaba Unveils Qwen2-VL: The New AI Model Capable of Analyzing Videos Over 20 Minutes Long

Alibaba Cloud, the cloud services division of the Chinese e-commerce giant, has unveiled Qwen2-VL, its latest vision-language model aimed at enhancing visual comprehension, video analysis, and multilingual text-image processing.

Qwen2-VL outperforms leading models such as Meta’s Llama 3.1, OpenAI’s GPT-4o, Anthropic’s Claude 3 Haiku, and Google’s Gemini-1.5 Flash, according to third-party benchmark tests. You can experiment with a hosted demo on Hugging Face.

Supported Languages: English, Chinese, most European languages, Japanese, Korean, Arabic, and Vietnamese.

Advanced Visual and Video Analysis

Alibaba aims to redefine AI interaction with visual data through Qwen2-VL. This model can analyze handwriting in multiple languages, identify and describe objects in images, and process live video in near real time, making it suitable for tech support and live operational tasks.

In a blog post on GitHub, the Qwen research team highlighted: “Beyond static images, Qwen2-VL extends its capabilities to video content analysis. It can summarize videos, answer related questions, and maintain real-time conversations, which enables it to function as a personal assistant for users, providing insights directly from video content.”

Notably, Qwen2-VL can analyze videos longer than 20 minutes and answer questions about their content.
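To make this concrete, here is a minimal sketch of how a video question might be posed to one of the open-weights checkpoints, following the usage pattern in Qwen's Hugging Face model cards. The class name Qwen2VLForConditionalGeneration, the qwen-vl-utils helper package, and the video path are assumptions that depend on your transformers version and local files; treat this as a sketch, not the definitive integration.

```python
# Sketch: asking an open Qwen2-VL checkpoint to summarize a local video.
# Assumes a recent `transformers` plus the `qwen-vl-utils` helper package;
# the video path is a placeholder.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "file:///path/to/video.mp4"},
        {"type": "text", "text": "Summarize this video in a few sentences."},
    ],
}]

# Build the chat prompt and extract the frames the processor expects.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens before decoding the model's answer.
answer = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```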

Example Video Summary:

In one demonstration, Qwen2-VL effectively summarized a video featuring astronauts discussing their mission inside a space station, giving viewers a compelling look at space exploration.

Model Variants and Open Source Options

Qwen2-VL comes in three variants: Qwen2-VL-72B (72 billion parameters), Qwen2-VL-7B, and Qwen2-VL-2B. The 7B and 2B versions are open source under the Apache 2.0 license, making them attractive options for enterprises. These variants are designed for competitive performance at an accessible scale and are available on platforms like Hugging Face and ModelScope.
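Because the open 2B and 7B instruct checkpoints share the same interface on Hugging Face, switching between variants is, in the model-card pattern, just a matter of changing the checkpoint ID. A brief sketch, assuming the hub names below:

```python
# Sketch: the open 2B and 7B instruct checkpoints load the same way,
# so choosing a variant is a checkpoint-ID swap.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor

checkpoint = "Qwen/Qwen2-VL-2B-Instruct"  # or "Qwen/Qwen2-VL-7B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(checkpoint)
```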

However, the largest 72B model will be made available later, under a separate license and through Alibaba’s API.

Functionality and Integration

The Qwen2-VL series builds on the Qwen model family, boasting advancements such as:

- Integration into devices like mobile phones and robots for automated operations based on visual and text inputs.

- Function calling capabilities that allow interaction with third-party software and applications, retrieving critical information such as flight statuses and package tracking (a generic sketch of this pattern follows below).

These features position Qwen2-VL as a powerful tool for tasks requiring complex reasoning and decision-making.
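As an illustration of the function-calling pattern only, the sketch below shows the usual flow: the application advertises a tool schema, the model emits a structured call, and the application executes it. The check_flight_status tool and the JSON call format are hypothetical stand-ins, not Qwen's documented protocol.

```python
# Sketch of a generic function-calling loop. The tool schema, the
# check_flight_status helper, and the JSON call format are hypothetical
# illustrations, not Qwen's documented protocol.
import json

TOOLS = [{
    "name": "check_flight_status",
    "description": "Look up the live status of a flight.",
    "parameters": {
        "type": "object",
        "properties": {"flight_number": {"type": "string"}},
        "required": ["flight_number"],
    },
}]

def check_flight_status(flight_number: str) -> dict:
    # Placeholder: a real app would call an airline or tracking API here.
    return {"flight_number": flight_number, "status": "on time"}

def handle_model_output(output: str) -> dict | None:
    """Parse a structured tool call emitted by the model, if any."""
    try:
        call = json.loads(output)
    except json.JSONDecodeError:
        return None  # plain-text answer, no tool call
    if call.get("name") == "check_flight_status":
        return check_flight_status(**call.get("arguments", {}))
    return None

# Example: the model, given TOOLS in its prompt, replies with a call.
model_output = '{"name": "check_flight_status", "arguments": {"flight_number": "MU583"}}'
print(handle_model_output(model_output))
```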

Architectural Innovations

Qwen2-VL incorporates several architectural advancements to enhance visual data processing. Naive Dynamic Resolution support lets the model handle images of arbitrary resolution by mapping each image into a dynamic number of visual tokens, keeping visual interpretation consistent across input sizes. Multimodal Rotary Position Embedding (M-RoPE) decomposes positional encoding into temporal, height, and width components, allowing the model to integrate positional information across text, images, and videos effectively.
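In the Hugging Face integration, the dynamic-resolution behavior can be bounded by giving the processor a pixel budget, which caps how many visual tokens an image expands into. A brief sketch following the model-card pattern; the specific min_pixels/max_pixels values are illustrative, not recommendations:

```python
# Sketch: bounding Naive Dynamic Resolution's token budget.
# Each image is mapped to a variable number of visual tokens; per the
# model card, min_pixels/max_pixels constrain that range (one visual
# token covers roughly a 28x28-pixel region). Values are illustrative.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    min_pixels=256 * 28 * 28,   # lower bound: at least ~256 visual tokens
    max_pixels=1280 * 28 * 28,  # upper bound: at most ~1280 visual tokens
)
```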

Future Developments from the Qwen Team

The Qwen Team is dedicated to advancing vision-language models by integrating additional modalities and enhancing the models' applications. The Qwen2-VL models are now available for developers and researchers eager to explore the potential of these cutting-edge tools.
