How to set up the LLM integration

The LLM integration appears primarily as a button in the coder, as well as in various snippets through context menu actions and other interactions. You can learn more about its capabilities in the gt4llm section of the book.

While the integration is enabled by default, Glamorous Toolkit runs perfectly well without any LLM integration and does not use any LLM features unless prompted by the user. The coder and inspector integration can also be turned off (see below for how to do that).

To use these features, you first have to set up a connection. Connections are available for four different providers: Anthropic, Google Gemini, Ollama, and OpenAI. You can set up none, one, or several of them on your machine, and switch between them at your leisure.

To see which connectors are known by default, and to change the default, you can use the view below (right-click on a connector and choose Make default to make it the default connector):

If you want to add different connectors with different models, see Managing LLM provider connections and setting the default.

All providers require some amount of setup:

- Ollama, as a local model provider, requires a local installation. Visit the Ollama website to learn how.

- All other providers require an API key to be installed locally. You can find out how in the corresponding how-to page.
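If you set up Ollama, it can be useful to confirm that the local server is actually reachable before pointing Glamorous Toolkit at it. The sketch below assumes Ollama's documented default port, 11434; it is a generic shell check, not part of Glamorous Toolkit itself.

```shell
# Probe the local Ollama server on its default port (11434).
# Prints a hint instead of failing when the server is not running.
if curl -sf http://localhost:11434/api/tags > /dev/null; then
  echo "Ollama is running"
else
  echo "Ollama is not reachable; see the Ollama website for installation"
fi
```

The /api/tags endpoint lists locally installed models, so a successful response also confirms that at least the HTTP API is working.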

You can turn off coder and inspector integration using a feature toggle:

GtLlmFeatures disableInCoder

Conversely, if you want to experiment with new capabilities such as multi-step actions, you can enable the experimental features:

GtLlmFeatures beExperimental