General Information

LLM Hosting


Important

Please note that the service is currently in the beta phase, and access conditions may change in the future.

On our HPC infrastructure, we host several stand-alone open-weight Large Language Models (LLMs). The models can be accessed both through an API and via a web interface, providing a data-sovereign alternative to commercial providers such as OpenAI/ChatGPT and Meta.

This service enables users to process sensitive data securely, with our IT Center ensuring compliance with data protection regulations. We prioritize privacy: no prompts, request contents, or other user data are stored during usage.


Who is allowed to use LLMs?

Currently, members of RWTH Aachen University and users from WestAI can request access to the LLMs. For details on how to request access and how to reach the models via the graphical user interface or the API, please refer to the section on Accessing LLMs.
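Self-hosted LLM gateways commonly expose an OpenAI-compatible chat-completions API. As a rough illustration of what an API request to such a service could look like, the sketch below builds one with only the Python standard library. The base URL, model name, and API key shown are placeholders, not the actual service parameters; the real endpoint and credentials are provided when access is granted (see the section on Accessing LLMs).

```python
import json
import urllib.request

# Placeholder values -- the real endpoint, model name, and API key
# are documented in the "Accessing LLMs" section.
BASE_URL = "https://example-llm-host.example/v1"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                          # issued on access approval

def build_chat_request(prompt: str, model: str = "example-model") -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request (constructed, not sent)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Summarize this document in two sentences.")
# Sending would be urllib.request.urlopen(req); omitted here because the
# endpoint above is a placeholder.
```

Because the endpoint follows the widely used OpenAI request schema in this sketch, existing OpenAI-compatible client libraries could typically be pointed at such a gateway by overriding their base URL.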


Availability and Maintenance Information

Since the LLMs are hosted on the HPC infrastructure, their accessibility and availability may be temporarily impacted during HPC maintenance periods. If you encounter any issues, please check our HPC maintenance status portal for updates.


Quota Limitations

At the moment, we have not set any limits on requests or tokens, so that we can better evaluate demand and utilization.

However, should we encounter bottlenecks or an increase in demand or user activity, we reserve the right to set appropriate limits.