library(lang)
# Using an `ellmer` chat object
lang_use(ellmer::chat_openai(model = "gpt-4o"))
#>
#> ── `lang` session
#> Backend: 'OpenAI' via `ellmer`
#> Model: gpt-4o
#> Cache:
#> /var/folders/y_/f_0cx_291nl0s8h26t4jg6ch0000gp/T//RtmpTHbviH/_lang_cache4124265bf27d
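# Any other `ellmer` chat object can be passed the same way; for instance,
# a local Ollama model via `ellmer` (a sketch; assumes your installed
# version of `ellmer` provides the `chat_ollama()` constructor)
lang_use(ellmer::chat_ollama(model = "llama3.2"))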
# Using Ollama directly
lang_use("ollama", "llama3.2", seed = 100)
#>
#> ── `lang` session
#> Backend: Ollama
#> Model: llama3.2
#> Cache:
#> /var/folders/y_/f_0cx_291nl0s8h26t4jg6ch0000gp/T//RtmpTHbviH/_lang_cache4124265bf27d
# Turn off cache by setting it to ""
lang_use("ollama", "llama3.2", seed = 100, .cache = "")
#>
#> ── `lang` session
#> Backend: Ollama
#> Model: llama3.2
#> Cache: [Disabled]
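# Point the cache to a persistent folder instead, so results are re-used
# across sessions (a sketch; "my_lang_cache" is just an illustrative path)
lang_use("ollama", "llama3.2", seed = 100, .cache = "my_lang_cache")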
Specifies the LLM provider and model to use during the R session
lang_use
Description
Allows you to specify the back-end provider and model to use during the current R session.
Usage
lang_use(backend = NULL, model = NULL, .cache = NULL, ...)
Arguments
| Argument | Description |
|---|---|
| `backend` | "ollama" or an `ellmer` Chat object. If using "ollama", `lang` will use its out-of-the-box integration with that back-end. Defaults to "ollama". |
| `model` | The name of the model supported by the back-end provider. |
| `.cache` | The path where model results are saved so they can be re-used if the same operation is run again. To turn caching off, set this argument to an empty character: `""`. It defaults to a temp folder. If this argument is left `NULL` when calling this function, no changes to the path will be made. |
| `...` | Additional arguments that this function passes down to the integrating function. In the case of Ollama, they are passed to `ollamar::chat()` (see the example after this table). |
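As a concrete sketch of the `...` pass-through (assuming, as with `seed` in the examples above, that the named option is one `ollamar::chat()` accepts), a sampling option such as `temperature` could be forwarded the same way:

# `temperature` is forwarded to ollamar::chat() through `...`
lang_use("ollama", "llama3.2", temperature = 0)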
Value
Console output of the current LLM setup to be used during the R session.