Provide the full prompt that the LLM will process.
Parameters
| Name | Type | Description | Default |
|------|------|-------------|---------|
| `col` | str | The name of the text field to process | required |
| `prompt` | str | The prompt to send to the LLM along with the `col` | `''` |
| `pred_name` | str | The name of the new column where the prediction will be placed | `'custom'` |
Examples
```python
my_prompt = (
    "Answer a question."
    "Return only the answer, no explanation"
    "Acceptable answers are 'yes', 'no'"
    "Answer this about the following text, is this a happy customer?:"
)

reviews.llm.custom("review", prompt = my_prompt)
```
| review | custom |
|--------|--------|
| "This has been the best TV I've ever used. Great screen, and sound." | "Yes" |
| "I regret buying this laptop. It is too slow and the keyboard is too noisy" | "No" |
| "Not sure how to feel about my new washing machine. Great color, but hard to figure" | |
Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `labels` | list or dict | A list or a dict object that defines what the LLM should look for and return | `''` |
| `pred_name` | str | The name of the new column where the prediction will be placed | `'extract'` |
| `additional` | str | Inserts this text into the prompt sent to the LLM | `''` |
Examples
```python
# Use 'labels' to let the function know what to extract
reviews.llm.extract("review", labels = "product")
```
| review | extract |
|--------|---------|
| "This has been the best TV I've ever used. Great screen, and sound." | "tv" |
| "I regret buying this laptop. It is too slow and the keyboard is too noisy" | "laptop" |
| "Not sure how to feel about my new washing machine. Great color, but hard to figure" | "washing machine" |
```python
# Use 'pred_name' to customize the new column's name
reviews.llm.extract("review", "product", pred_name = "prod")
```
| review | prod |
|--------|------|
| "This has been the best TV I've ever used. Great screen, and sound." | "tv" |
| "I regret buying this laptop. It is too slow and the keyboard is too noisy" | "laptop" |
| "Not sure how to feel about my new washing machine. Great color, but hard to figure" | "washing machine" |
```python
# Pass a list to request multiple things; the results will be pipe-delimited
# in a single column
reviews.llm.extract("review", ["product", "feelings"])
```
| review | extract |
|--------|---------|
| "This has been the best TV I've ever used. Great screen, and sound." | "tv \| great" |
| "I regret buying this laptop. It is too slow and the keyboard is too noisy" | "laptop\|frustration" |
| "Not sure how to feel about my new washing machine. Great color, but hard to figure" | "washing machine \| confusion" |
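When `expand_cols` is not used, the pieces in the single pipe-delimited column can be separated downstream. A small helper sketch, not part of mall, that pairs each piece with its label:

```python
def split_extract(value: str, labels: list[str]) -> dict[str, str]:
    """Split a pipe-delimited extract value and pair each piece with its label."""
    parts = [part.strip() for part in value.split("|")]
    return dict(zip(labels, parts))

print(split_extract("washing machine | confusion", ["product", "feelings"]))
# → {'product': 'washing machine', 'feelings': 'confusion'}
```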
```python
# Set 'expand_cols' to True to split multiple labels
# into individual columns
reviews.llm.extract(
    col = "review",
    labels = ["product", "feelings"],
    expand_cols = True
)
```
| review | product | feelings |
|--------|---------|----------|
| "This has been the best TV I've ever used. Great screen, and sound." | "tv " | " great" |
| "I regret buying this laptop. It is too slow and the keyboard is too noisy" | "laptop" | "frustration" |
| "Not sure how to feel about my new washing machine. Great color, but hard to figure" | "washing machine " | " confusion" |
```python
# Set custom names for the resulting columns
reviews.llm.extract(
    col = "review",
    labels = {"prod": "product", "feels": "feelings"},
    expand_cols = True
)
```
| review | prod | feels |
|--------|------|-------|
| "This has been the best TV I've ever used. Great screen, and sound." | "tv " | " great" |
| "I regret buying this laptop. It is too slow and the keyboard is too noisy" | "laptop" | "frustration" |
| "Not sure how to feel about my new washing machine. Great color, but hard to figure" | | |
Define the model, backend, and other options to use to interact with the LLM.
Parameters
| Name | Type | Description | Default |
|------|------|-------------|---------|
| `backend` | str | The name of the backend to use. At the beginning of the session it defaults to "ollama". If passing `""`, it will remain unchanged | `''` |
| `model` | str | The name of the model that the backend should use. At the beginning of the session it defaults to "llama3.2". If passing `""`, it will remain unchanged | `''` |
| `_cache` | str | The path where cached results are saved. Passing `""` disables the cache | `'_mall_cache'` |
| `**kwargs` | | Arguments to pass to the downstream Python call; in this case, the `chat` function in `ollama` | `{}` |
Examples
```python
# Additional arguments will be passed 'as-is' to the
# downstream Python function; in this example, to ollama.chat()
reviews.llm.use("ollama", "llama3.2", options = dict(seed = 100, temperature = 0.1))
```
```python
# During the Python session, you can change any argument
# individually; it will retain all of the previous
# arguments used
reviews.llm.use(options = dict(temperature = 0.3))
```
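The `_cache` behavior can be pictured as a file-backed lookup keyed by the request. The sketch below is a hypothetical illustration of that idea, not mall's actual implementation; `cached_call` and its arguments are invented names:

```python
import hashlib
import json
from pathlib import Path

def cached_call(cache_dir: str, payload: dict, compute):
    """Return a cached result for `payload` if present; otherwise compute and save it.

    Hypothetical sketch: an empty cache_dir disables caching, mirroring _cache="".
    """
    if not cache_dir:
        return compute(payload)
    # Key the cache file on a stable hash of the request payload
    key = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    path = Path(cache_dir) / f"{key}.json"
    if path.exists():
        return json.loads(path.read_text())
    result = compute(payload)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(result))
    return result
```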
Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `what` | str | The statement or question that needs to be verified against the provided text | `''` |
| `yes_no` | list | A positional list of size 2 containing the values to return for true and false. The first position will be used as the 'true' value, and the second as the 'false' value | `[1, 0]` |
| `pred_name` | str | The name of the new column where the prediction will be placed | `'verify'` |
| `additional` | str | Inserts this text into the prompt sent to the LLM | `''` |
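The `yes_no` pair is purely positional; conceptually it behaves like the following mapping (an illustrative sketch, not mall code):

```python
def apply_yes_no(is_true: bool, yes_no=(1, 0)):
    """Map a boolean verification result onto the positional yes_no pair."""
    # First position is the 'true' value, second is the 'false' value
    return yes_no[0] if is_true else yes_no[1]

print(apply_yes_no(True))               # → 1
print(apply_yes_no(False, ("y", "n")))  # → n
```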
Examples
```python
reviews.llm.verify("review", "is the customer happy")
```
| review | verify |
|--------|--------|
| "This has been the best TV I've ever used. Great screen, and sound." | 1 |
| "I regret buying this laptop. It is too slow and the keyboard is too noisy" | 0 |
| "Not sure how to feel about my new washing machine. Great color, but hard to figure" | 0 |
```python
# Use 'yes_no' to modify the 'true' and 'false' values to return
reviews.llm.verify("review", "is the customer happy", ["y", "n"])
```
| review | verify |
|--------|--------|
| "This has been the best TV I've ever used. Great screen, and sound." | "y" |
| "I regret buying this laptop. It is too slow and the keyboard is too noisy" | "n" |
| "Not sure how to feel about my new washing machine. Great color, but hard to figure" | |