Troubleshooting
This page lists common problems, their possible causes, and recommended solutions for issues you may encounter when using our AI services.
Missing API key
Exception / Error Message:
No api key passed in.
Description / Solution:
To use our LLM service via API access, you need an API key, which must be specified in your requests. These keys typically start with sk-... You can find example requests on the Examples page.
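For illustration, the following minimal sketch shows how to pass the key using the openai Python package against an OpenAI-compatible endpoint. The base URL and model name are placeholders, not the actual values of our service:

from openai import OpenAI

# Placeholder values: replace base_url, api_key, and model with the
# values that apply to your account and the model you want to use.
client = OpenAI(
    base_url="https://your-endpoint.example/v1",  # hypothetical endpoint
    api_key="sk-...",  # your personal API key
)

response = client.chat.completions.create(
    model="model-name",  # placeholder
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)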
Uploading files to text-only LLMs
Exception / Error Message:
litellm.BadRequestError: OpenAIException - mistralai/Mixtral-8x22B-Instruct-v0.1
is not a multimodal model None.
Received Model Group=mixtral-8x22B
Available Model Group Fallbacks=None LiteLLM Retried: 1 times, LiteLLM Max Retries: 2
or when using KI:connect (RWTHgpt):
The request was malformed. Please check your input and try again.
Description / Solution:
Some of our hosted models are text-only and do not support multimodal input, i.e., additionally uploaded files such as images. Please choose a multimodal model for such use cases.
Furthermore, when using KI:connect (RWTHgpt), such errors can leave the chat in a broken state (see the next point), in which case you need to start a fresh conversation with the LLM.
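For illustration, the difference in the request payload looks roughly as follows (a hypothetical sketch in the OpenAI chat-completions message format; the file content is a placeholder):

# A multimodal message with an image part is rejected by text-only
# models such as mixtral-8x22B, but accepted by multimodal ones:
multimodal_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is shown in this image?"},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}},
    ],
}

# A plain text message works with every model:
text_only_message = {"role": "user", "content": "Please summarize this text: ..."}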
Broken chats
Exception / Error Message:
litellm.BadRequestError: OpenAIException - After the optional system message,
conversation roles must alternate user/assistant/user/assistant/...
Received Model Group=mistralai/Mixtral-8x22B-Instruct-v0.1
Available Model Group Fallbacks=None LiteLLM Retried: 1 times, LiteLLM Max Retries: 2
Description / Solution:
As the error message suggests, models usually expect the conversation to alternate between the user and the assistant. Ensure that these roles alternate in your API request.
However, when using KI:connect (RWTHgpt), you might end up in a state (e.g., after an error occurred or after you cancelled a pending request) where the user interface is not aware of the error or cancellation. If you then send another message, you end up with the following sequence, which is rejected by the LLM:
- assistant
- user
- assistant
- user <--
- user <--
To solve this, simply open a fresh conversation or chat.
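If you construct the message list yourself via the API, a small check like the following sketch (a hypothetical helper, not part of our service) can catch non-alternating roles before sending the request:

def roles_alternate(messages):
    """Check that, after an optional system message, the roles
    alternate user/assistant/user/assistant/..."""
    roles = [m["role"] for m in messages if m["role"] != "system"]
    return roles[:1] == ["user"] and all(a != b for a, b in zip(roles, roles[1:]))

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "Hello! How can I help?"},
    {"role": "user", "content": "Tell me a joke."},
]
assert roles_alternate(messages)  # passes: user/assistant/user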
Model not reachable
Exception / Error Message:
litellm.InternalServerError: InternalServerError: OpenAIException - Connection error..
Received Model Group=mistralai/Mistral-Small-3.2-24B-Instruct-2506
Available Model Group Fallbacks=None LiteLLM Retried: 1 times, LiteLLM Max Retries: 2
Description / Solution:
This message indicates that the model was not reachable or that no connection could be established. This can happen, for example, in the following cases:
- The hosted model crashed due to an unknown error.
- The GPU node(s) hosting the model had to be shut down or restarted.
- Instabilities in our file system occasionally cause longer delays in which requests run into a timeout; these issues typically resolve themselves within seconds.
We have an internal monitoring system that notifies us in such cases, and we try to fix these issues as soon as possible. Often, simply trying again after a few minutes already resolves the problem.
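Since such errors are usually transient, a simple client-side retry with exponential backoff often suffices. The following is a generic sketch, not specific to our service:

import time

def with_retries(send_request, max_retries=3, base_delay=2.0):
    """Call send_request, retrying with exponential backoff on failure;
    transient connection errors often resolve within seconds."""
    for attempt in range(max_retries + 1):
        try:
            return send_request()
        except Exception:  # e.g., an InternalServerError raised by the client library
            if attempt == max_retries:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Usage: wrap any API call in a callable, e.g.
# response = with_retries(lambda: client.chat.completions.create(...))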