
Meet Falcon 2: TII Releases New AI Model Series, Outperforming Meta’s New Llama 3

Falcon 2 Soars: Highlights

  • Open-Source, Multilingual, and Multimodal – and the Only AI Model with Vision-to-Language Capabilities.
  • New Falcon 2 11B Outperforms Meta’s Llama 3 8B and Performs on Par with Google’s Leading Gemma 7B Model, as Independently Verified by the Hugging Face Leaderboard
  • Next up, we’re looking to add 'Mixture of Experts' to enhance Falcon 2’s capabilities even further.
  • Try it for yourself here

What’s New

Falcon 2 is our best-performing model yet. We have released two ground-breaking versions:

  • Falcon 2 11B - a more efficient and accessible LLM trained on 5.5 trillion tokens with 11 billion parameters.
  • Falcon 2 11B VLM - distinguished by its vision-to-language model (VLM) capabilities.

We are especially excited about Falcon 2 11B VLM: it enables the seamless conversion of visual inputs into textual outputs. Both models are multilingual, but Falcon 2 11B VLM stands out as TII's first multimodal model – and currently the only top-tier model on the market with this image-to-text conversion capability, marking a significant advance in AI innovation.

How can you use Falcon?

We have released Falcon 2 11B under the TII Falcon License 2.0, a permissive software license based on Apache 2.0 that includes an acceptable use policy promoting the responsible use of AI. More information on the new model can be found at FalconLLM.TII.ae.

How does the Falcon fare?

When tested against several prominent pre-trained AI models in its class, Falcon 2 11B surpasses the performance of Meta’s newly launched Llama 3 8B and performs on par with Google’s first-place Gemma 7B, with an average-score difference of only 0.01 (Falcon 2 11B: 64.28 vs. Gemma 7B: 64.29), according to the Hugging Face evaluation.

More importantly, Falcon 2 11B and 11B VLM are both open-source, empowering developers worldwide with unrestricted access and no limitations on naming or usage in their implementations.

Multilingual and Multimodal

Falcon 2 11B models are equipped with multilingual capabilities to seamlessly tackle tasks in English, French, Spanish, German, Portuguese, and various other languages.
Falcon 2 11B VLM, a vision-to-language model, can also identify and interpret images and visuals from its environment, enabling a wide range of applications across sectors such as healthcare, finance, e-commerce, education, and law.
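
As a rough illustration of this image-to-text workflow, the sketch below assumes the VLM is published on Hugging Face under a repo id such as tiiuae/falcon-11B-vlm and exposed through transformers' LLaVA-style classes; the repo id and prompt template are assumptions, so check the official model card before relying on them.

```python
# Illustrative sketch only: the repo id, prompt template, and use of the
# LLaVA-Next classes are assumptions, not details confirmed by this post.
import torch
from PIL import Image
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

model_id = "tiiuae/falcon-11B-vlm"  # assumed Hugging Face repo id

processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to keep memory use low
    device_map="auto",           # place the model on the available GPU
)

image = Image.open("scanned_invoice.png")                  # any local image
prompt = "User:<image>\nDescribe this document.\nFalcon:"  # assumed template
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```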

These applications range from document management, digital archiving, and context indexing to supporting individuals with visual impairments. Furthermore, these models can run efficiently on just one graphics processing unit (GPU), making them highly scalable and easy to deploy and integrate into lighter infrastructures such as laptops and other devices.
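
For readers who want to try the 11B language model on a single GPU, a minimal sketch with the Hugging Face transformers library might look like the following; the repo id tiiuae/falcon-11B, the precision, and the generation settings are assumptions for illustration rather than details from this post.

```python
# A minimal sketch of single-GPU inference with Hugging Face transformers.
# The repo id "tiiuae/falcon-11B" and the settings below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-11B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so the 11B weights fit on one GPU
    device_map="auto",           # place the model on the available accelerator
)

prompt = "The Falcon 2 series of language models"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```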

Word of mouth


H.E. Faisal Al Bannai, Secretary General of ATRC and Strategic Research and Advanced Technology Affairs Advisor to the UAE President:

“With the release of Falcon 2 11B, we've introduced the first model in the Falcon 2 series. While Falcon 2 11B has demonstrated outstanding performance, we reaffirm our commitment to the open-source movement with it, and to the Falcon Foundation. With other multimodal models soon coming to the market in various sizes, our aim is to ensure that developers and entities that value their privacy have access to one of the best AI models to enable their AI journey.”


Dr. Hakim Hacid, Executive Director and Acting Chief Researcher of the AI Cross-Center Unit at TII:

“AI is continually evolving, and developers are recognizing the myriad benefits of smaller, more efficient models. In addition to reducing computing power requirements and meeting sustainability criteria, these models offer enhanced flexibility, seamlessly integrating into edge AI infrastructure, the next emerging megatrend. Furthermore, the vision-to-language capabilities of Falcon 2 open new horizons for accessibility in AI, empowering users with transformative image-to-text interactions.”

What's Next

Up next, Falcon 2 models will be further enhanced with advanced machine learning techniques such as 'Mixture of Experts' (MoE), aimed at pushing their performance to even more sophisticated levels.

This method combines smaller networks, each with a distinct specialization, so that the experts most relevant to a given input collaborate to deliver highly sophisticated and customized responses – almost like having a team of smart helpers who each know something different and work together to predict or make decisions when needed.

This approach not only improves accuracy, but it also accelerates decision-making, paving the way for more intelligent and efficient AI systems.
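
As a sketch of the general idea (not TII's actual architecture), a toy mixture-of-experts layer in PyTorch could look like this: a small gating network scores the experts for each token, only the top-scoring experts process that token, and their outputs are combined using the gate's weights. All sizes here are arbitrary.

```python
# A generic, minimal Mixture-of-Experts layer, shown only to illustrate the
# idea described above; it is not TII's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        # Each "expert" is a small feed-forward network with its own weights.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(num_experts)
        ])
        self.gate = nn.Linear(d_model, num_experts)  # the router / gating network
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        scores = F.softmax(self.gate(x), dim=-1)        # per-token expert weights
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only the top-k experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Tiny usage example: one MoE block applied to a batch of token embeddings.
layer = MoELayer(d_model=64)
tokens = torch.randn(2, 10, 64)
print(layer(tokens).shape)  # torch.Size([2, 10, 64])
```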

Watch this space...