Working with the Ollama API client

Once Ollama is set up and running on your machine, you can start using it from code right away:

model := GtLModelFactory new ollama_generate: 'tinyllama'.
provider := model provider.
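Under the hood, clients like this talk to a local Ollama server over HTTP (port 11434 by default). If you want to see what a generate request looks like at that level, here is a minimal Python sketch; the `/api/generate` endpoint and its fields come from the Ollama REST API, while the helper names are illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address

def generate_payload(model, prompt):
    # Body for POST /api/generate; "stream": False requests one
    # complete JSON response instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def post(path, payload):
    # Sends the request; needs a running Ollama server to succeed.
    request = urllib.request.Request(
        OLLAMA_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

print(generate_payload("tinyllama", "Why is the sky blue?"))
# With a live server:
# post("/api/generate", generate_payload("tinyllama", "Why is the sky blue?"))
```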
  

This client is your entry point for interacting with Ollama. Next, you can download a new model. This step is optional if you’ve already downloaded all the models you want; note that pulling can take quite some time, depending on the size of the model.

provider pullModel.
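At the HTTP level, pulling corresponds to a POST to `/api/pull`, which streams progress objects (status, downloaded/total bytes) until the model is fully fetched. A sketch of the request body, with field names per the Ollama REST API and the helper name being illustrative:

```python
def pull_payload(model):
    # Body for POST /api/pull; Ollama streams progress updates
    # back while the model layers download.
    return {"model": model}

print(pull_payload("tinyllama"))
```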
  

To find out which models are available on your machine, you can query the provider for the list of models.

provider availableModels.
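The equivalent REST call is a GET to `/api/tags`, which answers a JSON object with a `models` array. A small sketch that extracts the model names; the sample response is made up for illustration, and the live call (commented out) needs a running server:

```python
import json
import urllib.request

def model_names(tags_response):
    # /api/tags answers {"models": [{"name": ...}, ...]};
    # pull out just the model names.
    return [entry["name"] for entry in tags_response["models"]]

sample = {"models": [{"name": "tinyllama:latest"}]}
print(model_names(sample))  # → ['tinyllama:latest']

# Against a live server:
# with urllib.request.urlopen("http://localhost:11434/api/tags") as r:
#     print(model_names(json.loads(r.read())))
```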
  

Then, we can start interacting with the model. The snippet below also adds a 'Behaviour' section to the chat's instructions, which acts like a system prompt that shapes every response:

chat := provider chat.
chat instructions
	sectionNamed: 'Behaviour'
	ifAbsent: [ :aSection |
		aSection addString: 'Begin all responses with ''Hi There!!!''.' ].
chat sendString: 'Hello Llama! How are you today?'.
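In Ollama's REST API, the same conversation is a POST to `/api/chat` with a list of role-tagged messages, where a `system` message plays the part of the instructions section. A sketch, with the helper name being illustrative:

```python
def chat_payload(model, system, user):
    # Body for POST /api/chat; the system message steers behaviour,
    # much like the instructions section in the chat above.
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }

print(chat_payload(
    "tinyllama",
    "Begin all responses with 'Hi There!!!'.",
    "Hello Llama! How are you today?"))
```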
  

And finally, we can delete the model when we are finished with it (you'll probably want something more capable than tinyllama):

provider deleteModel.
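Deletion maps to a DELETE request against `/api/delete`, with the model name in the JSON body. A sketch using Python's standard library (the explicit `method="DELETE"` is needed because `urllib` defaults to GET/POST); actually executing the request requires a running Ollama server:

```python
import json
import urllib.request

def delete_request(model, base_url="http://localhost:11434"):
    # DELETE /api/delete with the model name in the JSON body.
    return urllib.request.Request(
        base_url + "/api/delete",
        data=json.dumps({"model": model}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="DELETE")

request = delete_request("tinyllama")
print(request.method, request.full_url)
# With a live server: urllib.request.urlopen(request)
```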