LLM tools by example
A tool can be used by an LLM chat to execute logic. For example, the example below looks up the source code of multiple methods using GtLMagritteToolForMethodsSource.
methodSourcesForExistingMethods
	<gtExample>
	| tool call result model |
	tool := GtLMagritteToolForMethodsSource new.
	call := GtLFunctionToolCall new.
	call rawArguments: {
		'methodDescriptors' -> {
			{'methodClassName' -> 'Collection'.
			'methodName' -> 'withAll:'.
			'classSide' -> true} asDictionary.
			{'methodClassName' -> #GtLMagritteToolForMethodsSourceExamples.
			'methodName' -> 'methodSourcesForExistingMethods'.
			'classSide' -> false} asDictionary
		} asArray
	} asDictionary.
	result := tool performToolCall: call.
	self assert: call tool == tool.
	self assert: (result isKindOf: GtLMagritteInputModel).
	self assert: call arguments size equals: 1.
	self assert: (call arguments anyOne
		allSatisfy: [ :e | e isKindOf: GtLMethodReference ]).
	model := result model.
	self assert: model methods size equals: 2.
	self
		assert: model methods first sourceCode
		equals: (Collection class >> #withAll:) sourceCode.
	self
		assert: model methods second sourceCode
		equals: (GtLMagritteToolForMethodsSourceExamples >> #methodSourcesForExistingMethods) sourceCode.
	^ call
A tool's implementation has a name, takes arguments, and returns an instance of GtLMagritteInputModel that wraps the actual domain model. The input model knows how to serialize itself; for OpenAI it does so by implementing GtLInputModel>>#serializeToOpenAIResponseOutput.
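To illustrate the shape of such an implementation, here is a minimal sketch of a hypothetical tool. Only GtLMagritteInputModel and #performToolCall: appear in the example above; the tool class name, the #model: setter, and the argument key are illustrative assumptions, not APIs confirmed by this page.

	"Hypothetical sketch: a tool that answers with a class comment.
	#performToolCall: and GtLMagritteInputModel come from the example above;
	MyClassCommentTool, #model: and the 'className' key are assumptions."
	performToolCall: aToolCall
		| targetClass |
		"Resolve the class named in the call's raw arguments."
		targetClass := self class environment
			at: (aToolCall rawArguments at: 'className') asSymbol.
		"Wrap the domain object (here, the comment) in an input model
		so the chat can serialize it for the LLM provider."
		^ GtLMagritteInputModel new
			model: targetClass comment;
			yourself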
Each tool has corresponding examples that test its logic in isolation. For example, GtLMagritteToolForMethodsSource has examples in GtLMagritteToolForMethodsSourceExamples.