Specify the model to use

R/llm-use.R

llm_use

Description

Allows us to specify the back-end provider and the model to use during the current R session

Usage


llm_use(
  backend = NULL,
  model = NULL,
  ...,
  .silent = FALSE,
  .cache = NULL,
  .force = FALSE
)

Arguments

Argument Description
backend “ollama” or an ellmer Chat object. If using “ollama”, mall will use its out-of-the-box integration with that back-end. Defaults to “ollama”.
model The name of the model supported by the back-end provider.
... Additional arguments that this function will pass down to the integrating function. In the case of Ollama, it will pass those arguments to ollamar::chat().
.silent Avoids console output.
.cache The path to save model results, so they can be re-used if the same operation is run again. To turn off, set this argument to an empty character: "". It defaults to a temp folder. If this argument is left NULL when calling this function, no changes to the path will be made.
.force Flag that tells the function to reset all of the settings in the R session (see the example in the Examples section below).

Value

A mall_session object

Examples



library(mall)

llm_use("ollama", "llama3.2")
#> 
#> ── mall session object
#> Backend: ollama
#> LLM session: model:llama3.2
#> R session:
#> cache_folder:/var/folders/y_/f_0cx_291nl0s8h26t4jg6ch0000gp/T//RtmpLQHzPj/_mall_cache3a375fa49927

# Additional arguments will be passed as-is to the
# downstream R function; in this example, to ollamar::chat()
llm_use("ollama", "llama3.2", seed = 100, temperature = 0.1)
#> 
#> ── mall session object 
#> Backend: ollama
#> LLM session: model:llama3.2
#>   seed:100
#>   temperature:0.1
#> R session:
#> cache_folder:/var/folders/y_/f_0cx_291nl0s8h26t4jg6ch0000gp/T//RtmpLQHzPj/_mall_cache3a375fa49927

# During the R session, you can change any argument
# individually, and it will retain all of the previous
# arguments used
llm_use(temperature = 0.3)
#> 
#> ── mall session object 
#> Backend: ollama
#> LLM session: model:llama3.2
#>   seed:100
#>   temperature:0.3
#> R session:
#> cache_folder:/var/folders/y_/f_0cx_291nl0s8h26t4jg6ch0000gp/T//RtmpLQHzPj/_mall_cache3a375fa49927

# Use .cache to modify the target folder for caching
llm_use(.cache = "_my_cache")
#> 
#> ── mall session object 
#> Backend: ollama
#> LLM session: model:llama3.2
#>   seed:100
#>   temperature:0.3
#> R session: cache_folder:_my_cache

# Set .cache to an empty string to turn off this functionality
llm_use(.cache = "")
#> 
#> ── mall session object 
#> Backend: ollama
#> LLM session: model:llama3.2
#>   seed:100
#>   temperature:0.3

# Use .silent to avoid the printout
llm_use(.silent = TRUE)
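
# Use .force to reset all of the settings retained in the
# R session and start over (output not shown here; the
# exact session values will depend on your setup)
llm_use("ollama", "llama3.2", .force = TRUE)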

# Use an `ellmer` object
library(ellmer)
chat <- chat_openai(model = "gpt-4o")
llm_use(chat)
#> 
#> ── mall session object 
#> Backend: ellmer
#> LLM session: model:gpt-4o
#>   seed:100
#>   temperature:0.3
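
# llm_use() returns a mall_session object, so the current
# settings can be captured in a variable for later inspection
# (a minimal sketch; printing the captured object is assumed
# to show the same summary as above)
my_session <- llm_use(.silent = TRUE)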