Chatting with an LLM in Ollama, Anthropic, Gemini, or OpenAI

The basic interface for working with an LLM is the GtLlmChat class, defined in the Gt4Llm package; among other things, it keeps track of its provider and the messages exchanged so far. It allows the user to interact with an LLM programmatically or through a UI. The backend of these chats is abstracted and can be changed.

You can begin a chat using the default connection if one is set up. See Managing LLM provider connections and setting the default for how to check this and change the default connection.

chat := GtLlmChat new

You can also connect to a specific provider. To chat with OpenAI, for instance, ensure that you have added an API key (see Adding an OpenAI API key). Once the key has been added, you can chat with the LLM.

chat := GtLlmChat new provider: GtOpenAIProvider withApiKeyFromFile

Executing the snippet above will allow you to interact with the chat through a UI. You can also send messages through code.

chat sendMessage: 'Hello friend! Who are you?'
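You can also read the conversation back from code. The snippet below is a sketch, assuming the chat exposes its transcript through a #messages accessor mirroring the messages it keeps track of; the exact selector may differ in your version.

"Inspect the most recent entry in the transcript; #messages is
assumed to answer the collection of messages exchanged so far."
chat messages last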

Providers are the chat’s backend. Currently, backends exist for OpenAI, Anthropic, Ollama, and Gemini.

Providers can be changed at any time, even in the middle of a chat.

chat
	provider: (GtOllamaProvider new model: 'llama3.1');
	sendMessage: 'What is your name?'

Please note that if you do not already have the requested model locally, Ollama will download it first, which can take some time. Subsequent requests should be fast as long as the model remains on the machine.

Anthropic works similarly to OpenAI, and also needs an API key (see Adding an Anthropic API key).

chat
	provider: GtAnthropicProvider withApiKeyFromFile;
	sendMessage: 'And who are you?'

And lastly, Google Gemini works in the same way (see Adding a Gemini API key).

chat
	provider: GtGeminiProvider withApiKeyFromFile;
	sendMessage: 'And who are you, then?'
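
To confirm which backend a chat is currently using, you can read the provider back. This is a sketch assuming a #provider accessor paired with the #provider: setter used above; check the class for the exact selector.

"Answer the active provider; #provider is assumed to be the
getter matching the #provider: setter used throughout."
chat provider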