EMEA AI Model Builders

Explore and deploy sovereign AI models. Built to ensure data control, governance, and independence, these models are accelerated by NVIDIA’s AI inference platform and powered by NVIDIA-accelerated infrastructure.

Explore Models

View Performance


EMEA AI Model Builder - Barcelona Supercomputing Center

ALIA is a collection of LLMs developed by the Barcelona Supercomputing Center (BSC). The collection includes Salamandra 2B, Salamandra 7B, and ALIA 40B, each in base and instruction-tuned variants, pretrained on 9.37 trillion tokens spanning 35 European languages and code, with priority given to the official languages of Spain: Castilian, Catalan, Basque, and Galician.

Explore

Explore sample applications to learn about different use cases for ALIA and Salamandra models.

Optimize

Optimize inference workloads for LLMs with TensorRT-LLM. Learn how to set up TensorRT-LLM and get started with ALIA and Salamandra.
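A minimal sketch of that workflow, assuming a recent TensorRT-LLM release with the high-level LLM API; the BSC-LT/salamandra-7b-instruct checkpoint id is an assumed example.

```python
# Sketch: run a Salamandra instruction-tuned checkpoint through the
# TensorRT-LLM high-level LLM API (model id is an assumed example).
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="BSC-LT/salamandra-7b-instruct")  # assumed Hugging Face id
params = SamplingParams(temperature=0.2, max_tokens=128)

prompts = ["Explica breument què és la supercomputació."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

The same pattern applies to the ALIA 40B checkpoints; only the model id changes.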

Get started with the models for your development environment.

Model

ALIA-40B Model 

ALIA-40B is a highly multilingual model pre-trained from scratch that will be released in base and instruction-tuned variants.

Model

Salamandra Model Card

Salamandra is a highly multilingual model pre-trained from scratch that comes in three different sizes — 2B, 7B and 40B parameters — with their respective base and instruction-tuned variants.

EMEA AI Model Builder - Black Forest Labs

FLUX is a family of advanced models engineered to transform imagination into precise and realistic images. From image generation to multi-stage image editing, creators can utilize natural language or multimedia context to control, transform, or refine visual content through an intuitive workflow. FP4-quantized checkpoints significantly reduce memory requirements, and NVIDIA TensorRT™ enhancements provide up to 2X faster inference.

Get started with the models for your development environment.

Model

FLUX.2 Model on Hugging Face

FLUX.2 by Black Forest Labs is a 32B-parameter flow-matching transformer capable of generating images and editing one or more reference images. The model is released under the FLUX.2-dev Non-Commercial License.

Model

FLUX.1-dev Model by Black-Forest-Labs | NVIDIA NIM

FLUX.1-dev by Black Forest Labs is a cutting-edge generative AI model that creates high-quality images from simple text prompts.
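A minimal text-to-image sketch, assuming the Hugging Face Diffusers FluxPipeline and the gated black-forest-labs/FLUX.1-dev weights (license acceptance required on Hugging Face).

```python
# Sketch: generate an image with FLUX.1-dev via Diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # reduces VRAM use at some speed cost

image = pipe(
    "a lighthouse on a rocky coast at dawn, photorealistic",
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
).images[0]
image.save("flux_dev_sample.png")
```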

EMEA AI Model Builder - Dicta

Dicta specializes in building high-performing, open language models for the Hebrew language, delivering robust performance and cultural awareness. The DictaBERT suite includes models fine-tuned for a range of difficult Hebrew tasks using custom-designed architectures, as well as powerful base models that can be fine-tuned for any task. DictaLM is a suite of generative large language models that sets a new baseline for accuracy and fluency in on-prem Hebrew AI, leveraging NVIDIA TensorRT-LLM and Dynamo optimizations for seamless production-scale deployment.

Get started with the models for your development environment.

Model

Dicta-LM 3.0 Model Demo

Dicta-LM 3.0 is a powerful open-weight collection of LLMs, including the Dicta-LM 3.0-thinking variant, trained on extensive corpora of Hebrew and English text.

Model

DictaLM 2.0 - Instruct Chat Demo

Check out DictaLM 2.0 Instruct on Hugging Face to chat in Hebrew or English with an advanced AI language model fine-tuned for instruction-based conversations.
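A minimal chat sketch with transformers, taking the dicta-il/dictalm2.0-instruct checkpoint id from the demo page as an assumption.

```python
# Sketch: one-turn Hebrew chat with DictaLM 2.0 Instruct via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dicta-il/dictalm2.0-instruct"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "ספר לי בקצרה על ירושלים."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```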

Model

DictaBERT Collection

A collection of state-of-the-art language models for Hebrew, fine-tuned for various tasks.

EMEA AI Model Builder - ETH Zürich, EPFL, CSCS

Apertus is Switzerland’s first fully open, transparent, and multilingual collection of foundation models, featuring powerful 8B- and 70B-parameter models. Trained on over 15T tokens covering 1,000+ languages and approximately 40% non-English data, it includes underrepresented languages like Swiss German and Romansh. It was collaboratively developed by ETH Zürich, the Swiss Federal Institute of Technology Lausanne (EPFL), and the Swiss National Supercomputing Centre (CSCS).

Integrate

Get started with the right tools and frameworks for your AI model development environment. 

Optimize

Deployment of the models is supported via the newest versions of LLAMA, MLX, SGLang, and vLLM.
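As one example of the engines listed above, here is a minimal vLLM sketch; the swiss-ai/Apertus-8B-Instruct model id is an assumption, so check the collection below for exact checkpoint names.

```python
# Sketch: offline batch inference for an Apertus checkpoint with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="swiss-ai/Apertus-8B-Instruct")  # assumed checkpoint id
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Erkläre kurz, was ein Sprachmodell ist."], params)
print(outputs[0].outputs[0].text)
```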

Get started with the models for your development environment.

Model

Apertus LLM Model Collection

Explore Apertus LLM on Hugging Face – Open Weights, Documentation, and Model Access.

EMEA AI Model Builder - JetBrains

JetBrains is a global software company specializing in intelligent, productivity-enhancing tools for software developers and teams. Its open-source LLM, Mellum, optimizes code-related tasks such as code completion and intelligent code suggestions within developer environments. Following a Llama-style architecture, the transformer-based model has 4B parameters and an 8,192-token context window and is trained on 4T+ tokens spanning multiple programming languages.

Integrate

Get started with the right tools and frameworks for your development environment.

Optimize

Optimize inference workloads for large language models (LLMs) with TensorRT-LLM. Learn how to set up and get started using TensorRT-LLM.

Get started with the models for your development environment.

Model

AI Models from JetBrains: Powering Developer Productivity with Advanced Machine Learning Tools on Hugging Face

The JetBrains Hugging Face page hosts AI models and tools developed by JetBrains to improve developer productivity and coding with advanced machine learning techniques.
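A minimal code-completion sketch with transformers, assuming the JetBrains/Mellum-4b-base checkpoint id for the 4B base model described above.

```python
# Sketch: plain causal-LM code completion with Mellum (assumed model id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JetBrains/Mellum-4b-base"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prefix = "def fibonacci(n: int) -> int:\n    "
inputs = tokenizer(prefix, return_tensors="pt").to(model.device)
completion = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(completion[0], skip_special_tokens=True))
```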

EMEA AI Model Builder - Mistral AI

Mistral AI delivers industry-leading accuracy and efficiency for enterprise AI with models such as the Mistral 3 family of open-source, multilingual, and multimodal models. Mistral’s lineup spans frontier-level to compact models, enabling distributed intelligence across cloud and edge. Their models are optimized across NVIDIA supercomputing and edge platforms, empowering researchers and developers to experiment, customize, and accelerate AI innovation while democratizing access to frontier-class technologies.

Integrate

Get started with the right tools and frameworks for your development environment.

Get started with the models for your development environment.

Model

Mistral Large 3

Mistral’s first mixture-of-experts model, Mistral Large 3 is trained with 41B active and 675B total parameters.

Model

Ministral 3 14B

Ministral 3 14B Instruct 2512 FP8 is a powerful and efficient language model with vision capabilities, fine-tuned for instruction tasks.

Model

Mistral Small 24B on NVIDIA Build

Mistral Small 3 (2501) sets a new benchmark in the "small" large language model category below 70B parameters, with 24 billion parameters and state-of-the-art capabilities comparable to larger models.
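A minimal sketch of calling the hosted endpoint on build.nvidia.com through its OpenAI-compatible API; the model string is an assumption, so copy the exact id from the model page and export NVIDIA_API_KEY first.

```python
# Sketch: chat completion against the NVIDIA API Catalog endpoint.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

response = client.chat.completions.create(
    model="mistralai/mistral-small-24b-instruct",  # assumed model id
    messages=[{"role": "user", "content": "Résumez l'intérêt des modèles compacts."}],
    temperature=0.2,
    max_tokens=256,
)
print(response.choices[0].message.content)
```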

EMEA AI Model Builder - Bielik.AI (SpeakLeash)

Bielik is a family of open-source, lightweight LLMs (Apache-2.0) developed by the SpeakLeash Foundation and trained with ACK Cyfronet AGH. Designed for business applications and continuously improved through community-driven development, Bielik excels in document understanding, summarization, and knowledge extraction.

Explore

Explore sample applications to learn about different use cases for Bielik models.

Integrate

Get started with the right tools and frameworks for your development environment.

Optimize

Optimize inference workloads for large language models (LLMs) with TensorRT-LLM. Learn how to set up and get started using TensorRT-LLM.

Get started with the models for your development environment.

Model

Bielik-11B on NVIDIA Build

Bielik-11B-v2.6-Instruct is a generative text model with 11 billion parameters, developed to process and understand the Polish language with high precision.

Model

Bielik-11B on Hugging Face

Bielik-11B-v2.6-Instruct is an 11-billion-parameter generative text model, fine-tuned for instruction following in Polish, developed by SpeakLeash and ACK Cyfronet AGH to deliver high-precision Polish language understanding and response generation.
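A minimal Polish-language sketch using a recent transformers text-generation pipeline; the speakleash/Bielik-11B-v2.6-Instruct id mirrors the name above and should be confirmed on Hugging Face.

```python
# Sketch: Polish instruction following with Bielik via a transformers pipeline.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="speakleash/Bielik-11B-v2.6-Instruct",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Streść w dwóch zdaniach, czym jest RAG."}]
result = pipe(messages, max_new_tokens=200)
print(result[0]["generated_text"][-1]["content"])  # assistant reply
```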

EMEA AI Model Builder - Technology Innovation Institute (TII)

The Falcon LLM series is a family of multilingual, open-source large language models developed by TII. Built for efficiency, long-context understanding, and strong reasoning, Falcon models span multiple architecture designs, including Transformers, state space models (SSMs), and hybrid Transformer-SSM designs, and support a wide range of global languages, including dedicated variants for Arabic and regional dialects. They excel in knowledge-intensive, STEM, and long-context tasks, making them highly versatile for diverse real-world applications.

Get started with the models for your development environment.

Model

Falcon3-7b-instruct Model by Tiiuae on NVIDIA NIM

Falcon3-7B-Instruct is an open foundation model designed for state-of-the-art performance in reasoning, language understanding, instruction following, code generation, and mathematics. It supports long-context tasks with a context window of up to 32K tokens and multilingual capabilities in English, French, Spanish, and Portuguese.
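A minimal sketch of querying a self-hosted Falcon3-7B-Instruct NIM through its OpenAI-compatible endpoint; the local URL and model id are assumptions that depend on how the microservice is deployed.

```python
# Sketch: stream a response from a locally deployed Falcon NIM container.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")  # assumed local NIM endpoint

stream = client.chat.completions.create(
    model="tiiuae/falcon3-7b-instruct",  # assumed model id
    messages=[{"role": "user", "content": "Explain long-context attention in two sentences."}],
    max_tokens=200,
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```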

Model

Falcon H1R 7B Collection

Falcon H1R 7B is a compact, high-performance large language model designed to deliver world-class reasoning capabilities within a 7-billion-parameter architecture.

Model

Falcon-H1 Family of Models and Falcon Arabic

Falcon-H1 Family of Hybrid-Head Language Models (Transformer-SSM), including 0.5B, 1.5B, 1.5B-Deep, 3B, 7B, and 34B (pretrained & instruction-tuned).

EMEA AI Model Builder - University College London (UCL)

UK-LLM is a sovereign AI initiative led by UCL, developed in partnership with NVIDIA and Bangor University to support speakers of minority UK languages such as Welsh, Cornish, Irish, Scottish Gaelic, and Scots. It uses open-source techniques and datasets from NVIDIA Nemotron™ to enable advanced reasoning in English and Welsh, ensuring cultural and linguistic accuracy for public services such as healthcare, education, and legal resources.

Explore

Explore sample applications to learn about different use cases for UK-LLM models.

Integrate

Get started with the right tools and frameworks for your development environment.

Optimize

Optimize inference workloads for large language models (LLMs) with TensorRT-LLM. Learn how to set up and get started using TensorRT-LLM.

Get started with the models for your development environment.

Model

BritLLM on Hugging Face

BritLLM is a collection of multilingual language-model training corpora and models, developed using machine-translated data from a single source language for broad pretraining applications.

EMEA AI Model Builder - University of Ljubljana

The Generative Model for Slovene (GaMS) is a large language model family developed by researchers in the Centre for Language Resources and Technologies at the University of Ljubljana’s Faculty of Computer and Information Science. The family of LLMs is designed specifically for Slovene while also supporting other South Slavic languages and English.

Integrate

Get started with the right tools and frameworks for your development environment.

Optimize

Optimize inference workloads for large language models (LLMs) with TensorRT-LLM. Learn how to set up and get started using TensorRT-LLM.

Get started with the models for your development environment.

Model

Try the GaMS3 model

GaMS3-12B-Instruct represents the next generation of the GaMS (Generative Model for Slovene) family. The model is based on Google's Gemma 3 family and continually pretrained on Slovene, English, and a portion of Croatian, Serbian, and Bosnian corpora. The supervised fine-tuning phase was performed on a combination of Slovene and English datasets.
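A minimal Slovene sketch with a transformers pipeline; the cjvt/GaMS3-12B-Instruct checkpoint id is an assumption, so check the GaMS collection on Hugging Face for the exact name.

```python
# Sketch: Slovene chat with a GaMS instruction-tuned checkpoint.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="cjvt/GaMS3-12B-Instruct",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Na kratko opiši Univerzo v Ljubljani."}]
result = pipe(messages, max_new_tokens=200)
print(result[0]["generated_text"][-1]["content"])  # assistant reply
```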

Model

GaMS-Instruct-Nemotron Collection

The next generation of GaMS models, including GaMS-Instruct models further improved using a subset of NVIDIA Nemotron post-training data.

More Resources

Join the NVIDIA Developer Program

Get Training and Certification

Accelerate Your Startup


Ethical AI

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns here.

Try top community models today.

Explore Models