from chatlas import ChatOllama
from mall import LLMVec
chat = ChatOllama(model = "llama3.2")
llm = LLMVec(chat)
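The backend is a chatlas chat client, so other providers can presumably be wrapped the same way. A minimal sketch, assuming chatlas's ChatOpenAI constructor, an OPENAI_API_KEY set in the environment, and an illustrative model name:

from chatlas import ChatOpenAI
# assumption: the key is read from the OPENAI_API_KEY environment variable
chat = ChatOpenAI(model = "gpt-4o-mini")
llm = LLMVec(chat)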
LLMVec
LLMVec(backend='', model='', _cache='_mall_cache', **kwargs)
Class that adds the ability to use an LLM to run batch predictions.
Methods
Name | Description |
---|---|
classify | Classify text into specific categories. |
custom | Provide the full prompt that the LLM will process. |
extract | Pull a specific label from the text. |
sentiment | Use an LLM to run a sentiment analysis. |
summarize | Summarize the text down to a specific number of words. |
translate | Translate text into another language. |
verify | Check to see if something is true about the text. |
classify
LLMVec.classify(x, labels='', additional='')
Classify text into specific categories.
Parameters
Name | Type | Description | Default |
---|---|---|---|
x | list | A list of texts | required |
labels | list | A list or a DICT object that defines the categories to classify the text as. It will return one of the provided labels. | '' |
additional | str | Inserts this text into the prompt sent to the LLM | '' |
Examples
llm.classify(['this is important!', 'there is no rush'], ['urgent', 'not urgent'])
['urgent', None]
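The additional argument documented above inserts extra text into the prompt; the instruction wording below is illustrative, not from the package:
# 'additional' appends an extra instruction to the prompt (illustrative wording)
llm.classify(['this is important!', 'there is no rush'], ['urgent', 'not urgent'], additional='Treat anything with an exclamation mark as urgent.')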
custom
LLMVec.custom(x, prompt='', valid_resps='')
Provide the full prompt that the LLM will process.
Parameters
Name | Type | Description | Default |
---|---|---|---|
x | list | A list of texts | required |
prompt | str | The prompt to send to the LLM along with each entry of x | '' |
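Examples
An illustrative sketch only, since the prompt wording is invented and the response depends on the model; custom sends the supplied prompt together with each entry of x:
llm.custom(['I am going to the airport tomorrow'], prompt='Answer with only Yes or No: is this text about travel?')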
extract
LLMVec.extract(x, labels='', additional='')
Pull a specific label from the text.
Parameters
Name | Type | Description | Default |
---|---|---|---|
x | list | A list of texts | required |
labels | list | A list or a DICT object that tells the LLM what to look for and return | '' |
additional | str | Inserts this text into the prompt sent to the LLM | '' |
Examples
llm.extract(['bob smith, 123 3rd street'], labels=['name', 'address'])
['| bob smith | 123 3rd street |']
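Several texts can be processed in one call; the second entry below is made up for illustration:
llm.extract(['bob smith, 123 3rd street', 'jane doe, 456 main ave'], labels=['name', 'address'])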
sentiment
LLMVec.sentiment(x, options=['positive', 'negative', 'neutral'], additional='')
Use an LLM to run a sentiment analysis.
Parameters
Name | Type | Description | Default |
---|---|---|---|
x | list | A list of texts | required |
options | list or dict | A list of the sentiment options to use, or a named DICT object | ['positive', 'negative', 'neutral'] |
additional | str | Inserts this text into the prompt sent to the LLM | '' |
Examples
llm.sentiment(['I am happy', 'I am sad'])
['positive', 'negative']
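Per the parameter description, options may also be a named DICT; a sketch assuming the keys are the sentiment options and the values are what is returned for each:
# assumption: dict keys are the allowed sentiments, values are the codes returned
llm.sentiment(['I am happy', 'I am sad'], options={'positive': 1, 'negative': 0, 'neutral': 0})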
summarize
LLMVec.summarize(x, max_words=10, additional='')
Summarize the text down to a specific number of words.
Parameters
Name | Type | Description | Default |
---|---|---|---|
x | list | A list of texts | required |
max_words | int | Maximum number of words to use for the summary | 10 |
additional | str | Inserts this text into the prompt sent to the LLM | '' |
Examples
llm.summarize(['This has been the best TV Ive ever used. Great screen, and sound.'], max_words = 5)
['this tv has exceeded expectations']
translate
LLMVec.translate(x, language='', additional='')
Translate text into another language.
Parameters
Name | Type | Description | Default |
---|---|---|---|
x | list | A list of texts | required |
language | str | The target language to translate to. For example ‘French’. | '' |
additional | str | Inserts this text into the prompt sent to the LLM | '' |
Examples
llm.translate(['This has been the best TV Ive ever used. Great screen, and sound.'], language = 'spanish')
['Esto ha sido la mejor televisión que he tenido, gran pantalla y sonido.']
verify
LLMVec.verify(x, what='', yes_no=[1, 0], additional='')
Check to see if something is true about the text.
Parameters
Name | Type | Description | Default |
---|---|---|---|
x | list | A list of texts | required |
what | str | The statement or question that needs to be verified against the provided text | '' |
yes_no | list | A positional list of size 2, which contains the values to return if true and false. The first position will be used as the ‘true’ value, and the second as the ‘false’ value | [1, 0] |
additional | str | Inserts this text into the prompt sent to the LLM | '' |
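Examples
An illustrative sketch following the pattern of the other methods; the statement and the expected output are assumptions, not recorded package output:
llm.verify(['I am happy', 'I am sad'], what='the person is happy')
# with the default yes_no=[1, 0], a result like [1, 0] would be expected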