Meta Llama 3.2 introduced! The end of GPT-4o?

Meta introduced its Llama 3.2 artificial intelligence models at the Meta Connect 2024 event. With capabilities such as image understanding and voice interaction, the multimodal AI takes direct aim at competitors like GPT-4o.

What does Meta Llama 3.2 promise?

The new Llama 3.2 lineup includes 11-billion- and 90-billion-parameter versions that can understand both text and images. These models can extract information from graphs and charts, answer questions about data trends, and identify key objects and details in order to caption images.
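For developers who want to try the vision models, the multimodal checkpoints are distributed through the Hugging Face transformers library. Below is a minimal sketch of asking the 11B vision model about a chart; it assumes transformers 4.45 or newer, a GPU, access to the gated meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint, and a placeholder image URL.

```python
# Minimal sketch: asking the Llama 3.2 11B vision model about a chart.
# Assumes transformers >= 4.45, a GPU, and access to the gated
# meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint on Hugging Face.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder URL: any chart or graph image works here.
image = Image.open(requests.get("https://example.com/chart.png", stream=True).raw)

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "What trend does this chart show?"},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```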

Meta also introduced lightweight 1-billion- and 3-billion-parameter Llama 3.2 models aimed at developers and businesses. These text-only models are designed for building personalized applications, such as assistants on messaging platforms; a sketch of such a call follows below.
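As a rough illustration of how an application might call one of these lightweight models, here is a minimal sketch using the ollama Python client with a locally pulled llama3.2 model (the 3B variant by default); the prompt and setup are assumptions for demonstration, not Meta's own integration.

```python
# Minimal sketch: a local chat call to the lightweight Llama 3.2 3B model.
# Assumes the ollama Python client is installed and that
# `ollama pull llama3.2` has already downloaded the model locally.
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[
        {"role": "user", "content": "Draft a friendly reply confirming a customer's order."},
    ],
)
print(response["message"]["content"])
```

Running entirely on local hardware is the point of the smaller models: a 3B-parameter model is small enough to serve responses on a single consumer machine or mobile device.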

Meta claims that Llama 3.2 rivals models from Anthropic and OpenAI on tasks such as image recognition, and that it outperforms other conversational AI systems on language-understanding benchmarks.

Llama 3.2 can also respond with synthesized, human-like speech, and it will be able to use the voices of celebrities such as Dame Judi Dench, John Cena, and Kristen Bell. These AI voices will be added to Meta's platforms, including Messenger and Instagram.