library(lang)
# Using an `ellmer` chat object
lang_use(ellmer::chat_openai(model = "gpt-4o"))
#> Model: gpt-4o via OpenAI
#> Lang: en_US.UTF-8
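# Other `ellmer` chat objects can be used the same way; for example,
# assuming your installed `ellmer` version provides `chat_ollama()`:
lang_use(ellmer::chat_ollama(model = "llama3.2"))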
# Using Ollama directly
lang_use("ollama", "llama3.2", seed = 100)
#> Model: llama3.2 via Ollama
#> Lang: en_US.UTF-8
# Turn off the cache by setting `.cache` to ""
lang_use("ollama", "llama3.2", seed = 100, .cache = "")
#> Model: llama3.2 via Ollama
#> Lang: en_US.UTF-8
#> Cache: [Disabled]
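# Or point the cache at a folder of your choosing; "my_lang_cache" is a
# hypothetical path
lang_use("ollama", "llama3.2", seed = 100, .cache = "my_lang_cache")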
# Use `.lang` to set the target language to translate to;
# it will persist for the current R session
lang_use("ollama", "llama3.2", .lang = "spanish")
#> Model: llama3.2 via Ollama
#> Lang: spanish
#> Cache: [Disabled]
# Use `.silent` to avoid console output
lang_use("ollama", "llama3.2", .lang = "spanish", .silent = TRUE)
# To see the current settings, call the function with no arguments
lang_use()
#> Model: llama3.2 via Ollama
#> Lang: spanish
#> Cache: [Disabled]

Specifies the LLM provider and model to use during the R session
lang_use
Description
Allows us to specify the back-end provider and the model to use during the current R session. The target language is not processed by the function, as in converting “english” to “en” for example; the value is passed directly to the LLM, which interprets the target language itself.
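Because the value is passed verbatim, any phrasing the model understands should work. A minimal sketch, where the language string is left entirely to the LLM to interpret (the code “es” here is just an illustration):

lang_use("ollama", "llama3.2", .lang = "es")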
Usage
lang_use(
  backend = NULL,
  model = NULL,
  .cache = NULL,
  .lang = NULL,
  .silent = FALSE,
  ...
)

Arguments
| Argument | Description |
|---|---|
| backend | “ollama” or an `ellmer` Chat object. If using “ollama”, mall will use its out-of-the-box integration with that back-end. Defaults to “ollama”. |
| model | The name of the model supported by the back-end provider. |
| .cache | The path where model results are saved, so they can be re-used if the same operation is run again. To turn the cache off, set this argument to an empty character: "". It defaults to a temporary folder. If this argument is left NULL when calling this function, no changes to the path will be made. |
| .lang | Target language to translate to. This will override values found in the LANG and LANGUAGE environment variables. |
| .silent | Boolean flag that controls whether output is printed to the console. Defaults to FALSE. |
| … | Additional arguments that this function passes down to the integrating function. In the case of Ollama, they are passed to ollamar::chat() (see the sketch after this table). |
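Because the dots are forwarded, Ollama model options can be fixed at setup time. A minimal sketch, assuming the chosen options are accepted by ollamar::chat():

lang_use("ollama", "llama3.2", seed = 100, temperature = 0)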
Value
Console output of the current LLM setup to be used during the R session.