Class OllamaLanguageModelConfig.BuilderBase<BUILDER extends OllamaLanguageModelConfig.BuilderBase<BUILDER, PROTOTYPE>, PROTOTYPE extends OllamaLanguageModelConfig>
- Type Parameters:
  BUILDER - type of the builder extending this abstract builder
  PROTOTYPE - type of the prototype interface that would be built by Prototype.Builder.buildPrototype()
- All Implemented Interfaces:
  Prototype.Builder<BUILDER, PROTOTYPE>, ConfigBuilderSupport.ConfiguredBuilder<BUILDER, PROTOTYPE>
- Direct Known Subclasses:
  OllamaLanguageModelConfig.Builder
- Enclosing interface:
  OllamaLanguageModelConfig
-
Nested Class Summary
- protected static class
  Generated implementation of the prototype, can be extended by descendant prototype implementations.
-
Constructor Summary
Constructors:
- protected BuilderBase()
-
Method Summary
- addCustomHeaders(Map<String, String> customHeaders) - This method keeps existing values, then puts all new values into the map.
- addStop(List<String> stop) - The list of sequences where the API will stop generating further tokens.
- baseUrl() / baseUrl(String baseUrl) - The base URL for the Ollama API.
- clearBaseUrl(), clearFormat(), clearLogRequests(), clearLogResponses(), clearMaxRetries(), clearModelName(), clearNumPredict(), clearRepeatPenalty(), clearSeed(), clearTemperature(), clearTimeout(), clearTopK(), clearTopP() - Clear existing value of this property.
- config() - If this instance was configured, this would be the config instance used.
- config(Config config) - Update builder from configuration (node of this type).
- customHeaders() - A map containing custom headers.
- customHeaders(Map<String, String> customHeaders) - This method replaces all values with the new ones.
- enabled() / enabled(boolean enabled) - If set to false (default), Ollama model will not be available even if configured.
- format() / format(String format) - The format of the generated output.
- from(OllamaLanguageModelConfig prototype) - Update this builder from an existing prototype instance.
- from(OllamaLanguageModelConfig.BuilderBase<?, ?> builder) - Update this builder from an existing prototype builder instance.
- logRequests() / logRequests(boolean logRequests) - Whether to log API requests.
- logResponses() / logResponses(boolean logResponses) - Whether to log API responses.
- maxRetries() / maxRetries(int maxRetries) - The maximum number of retries for failed API requests.
- modelName() / modelName(String modelName) - The name of the Ollama model to use.
- numPredict() / numPredict(int numPredict) - The number of tokens to generate during text prediction.
- preBuildPrototype() (protected void) - Handles providers and decorators.
- putCustomHeader(String key, String customHeader) - This method adds a new value to the map, or replaces it if the key already exists.
- repeatPenalty() / repeatPenalty(double repeatPenalty) - The penalty applied to repeated tokens during text generation.
- seed() / seed(int seed) - The seed for the random number generator used by the model.
- stop() / stop(List<String> stop) - The list of sequences where the API will stop generating further tokens.
- temperature() / temperature(double temperature) - The sampling temperature to use, between 0 and 2.
- timeout() / timeout(Duration timeout) - The timeout setting for API requests.
- topK() / topK(int topK) - The maximum number of top-probability tokens to consider when generating text.
- topP() / topP(double topP) - The nucleus sampling value, where the model considers the results of the tokens with top_p probability mass.
- toString()
- validatePrototype() (protected void) - Validates required properties.

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface io.helidon.builder.api.Prototype.Builder:
buildPrototype, self
-
Constructor Details
-
BuilderBase
protected BuilderBase()
Protected to support extensibility.
-
-
Method Details
-
from
Update this builder from an existing prototype instance. This method disables automatic service discovery.
- Parameters:
  prototype - existing prototype to update this builder from
- Returns:
  updated builder instance
-
from
Update this builder from an existing prototype builder instance.
- Parameters:
  builder - existing builder prototype to update this builder from
- Returns:
  updated builder instance
-
config
Update builder from configuration (node of this type). If a value is present in configuration, it would override currently configured values.
- Specified by:
  config in interface ConfigBuilderSupport.ConfiguredBuilder<BUILDER, PROTOTYPE>
- Parameters:
  config - configuration instance used to obtain values to update this builder
- Returns:
  updated builder instance
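A hypothetical configuration node for this builder might look as follows. The key names below are assumptions derived from the builder's property names (Helidon config keys are conventionally kebab-case); verify them against the Helidon Ollama provider's documented keys before relying on them:

```yaml
# Hypothetical config node consumed by config(Config) - key names are assumed
enabled: true
base-url: "http://localhost:11434"
model-name: "llama-2"
temperature: 0.7
top-k: 40
top-p: 0.9
max-retries: 3
timeout: "PT30S"   # Duration.parse format
```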
-
clearTemperature
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
temperature
The sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic.
- Parameters:
  temperature - an Optional containing the sampling temperature
- Returns:
  updated builder instance
- See Also:
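The effect of temperature can be illustrated with a plain softmax over logits. This is a conceptual sketch only; this page does not describe how Ollama applies temperature internally:

```java
import java.util.Arrays;

public class TemperatureDemo {
    // Softmax with temperature: higher values flatten the distribution
    // (more random sampling), lower values sharpen it toward the top logit.
    static double[] softmax(double[] logits, double temperature) {
        double[] out = new double[logits.length];
        double sum = 0.0;
        for (int i = 0; i < logits.length; i++) {
            out[i] = Math.exp(logits[i] / temperature);
            sum += out[i];
        }
        for (int i = 0; i < out.length; i++) {
            out[i] /= sum;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 0.1};
        System.out.println(Arrays.toString(softmax(logits, 0.5))); // sharp, near-deterministic
        System.out.println(Arrays.toString(softmax(logits, 2.0))); // flatter, more random
    }
}
```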
-
clearTopK
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
topK
The maximum number of top-probability tokens to consider when generating text. Limits the token pool to the topK highest-probability tokens, controlling the balance between deterministic and diverse outputs. A smaller topK (e.g., 1) results in deterministic output, while a larger value (e.g., 50) allows for more variability and creativity.
- Parameters:
  topK - an Optional containing the topK value
- Returns:
  updated builder instance
- See Also:
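The token-pool limiting described above can be sketched with plain collections (a conceptual illustration of top-k filtering, not Ollama's actual implementation):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TopKDemo {
    // Keep only the k highest-probability tokens; sampling would then be
    // restricted to this reduced pool.
    static List<String> topKTokens(Map<String, Double> probs, int k) {
        return probs.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Double> probs = Map.of("the", 0.5, "a", 0.3, "cat", 0.15, "dog", 0.05);
        System.out.println(topKTokens(probs, 2)); // [the, a]
        System.out.println(topKTokens(probs, 1)); // [the] - deterministic pool
    }
}
```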
-
clearTopP
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
topP
The nucleus sampling value, where the model considers the results of the tokens with top_p probability mass.- Parameters:
topP
- anOptional
containing the nucleus sampling value- Returns:
- updated builder instance
- See Also:
-
clearRepeatPenalty
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
repeatPenalty
The penalty applied to repeated tokens during text generation. Higher values discourage the model from generating the same token multiple times, promoting more varied and natural output. A value of 1.0 applies no penalty (default behavior), while values greater than 1.0 reduce the likelihood of repetition. Excessively high values may overly penalize common phrases, leading to unnatural results.
- Parameters:
  repeatPenalty - an Optional containing the repeat penalty
- Returns:
  updated builder instance
- See Also:
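One common way such a penalty works (an assumed illustration, not Ollama's documented algorithm) is to scale down the score of tokens that have already appeared before sampling:

```java
public class RepeatPenaltyDemo {
    // Conceptual sketch: divide a repeated token's positive score by the
    // penalty, lowering its chance of being picked again. A penalty of 1.0
    // leaves scores unchanged, matching the "no penalty" default above.
    static double penalize(double score, boolean alreadyGenerated, double repeatPenalty) {
        return alreadyGenerated ? score / repeatPenalty : score;
    }

    public static void main(String[] args) {
        System.out.println(penalize(2.0, true, 1.1));  // repeated token: reduced score
        System.out.println(penalize(2.0, false, 1.1)); // fresh token: unchanged
        System.out.println(penalize(2.0, true, 1.0));  // penalty 1.0: no effect
    }
}
```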
-
clearSeed
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
seed
The seed for the random number generator used by the model.
- Parameters:
  seed - an Optional containing the seed
- Returns:
  updated builder instance
- See Also:
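The reproducibility a fixed seed aims to provide is the same property `java.util.Random` exhibits: identical seeds yield identical sequences (a stdlib analogy, not a statement about Ollama's generator):

```java
import java.util.Random;

public class SeedDemo {
    public static void main(String[] args) {
        // Two generators seeded identically produce the same sequence.
        Random a = new Random(42);
        Random b = new Random(42);
        System.out.println(a.nextInt(100) == b.nextInt(100)); // true
        System.out.println(a.nextInt(100) == b.nextInt(100)); // true
    }
}
```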
-
clearNumPredict
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
numPredict
The number of tokens to generate during text prediction. This parameter determines the length of the output generated by the model.
- Parameters:
  numPredict - an Optional containing the numPredict value
- Returns:
  updated builder instance
- See Also:
-
stop
The list of sequences where the API will stop generating further tokens.
- Parameters:
  stop - the list of stop sequences
- Returns:
  updated builder instance
- See Also:
-
addStop
The list of sequences where the API will stop generating further tokens.
- Parameters:
  stop - the list of stop sequences
- Returns:
  updated builder instance
- See Also:
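The behavior of stop sequences can be sketched as truncating output at the earliest match (a conceptual illustration; the actual cut-off happens server-side during generation):

```java
import java.util.List;

public class StopSequenceDemo {
    // Cut generated text at the earliest occurrence of any stop sequence.
    static String truncateAtStop(String text, List<String> stopSequences) {
        int cut = text.length();
        for (String stop : stopSequences) {
            int idx = text.indexOf(stop);
            if (idx >= 0 && idx < cut) {
                cut = idx;
            }
        }
        return text.substring(0, cut);
    }

    public static void main(String[] args) {
        String text = "Answer: 42\nUser: next question";
        System.out.println(truncateAtStop(text, List.of("\nUser:"))); // "Answer: 42"
    }
}
```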
-
clearFormat
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
format
The format of the generated output. This parameter specifies the structure or style of the text produced by the model, such as plain text, JSON, or a custom format.
Common examples:
- "plain": Generates unstructured plain text.
- "json": Produces output formatted as a JSON object.
- Custom values may be supported depending on the model's capabilities.
- Parameters:
  format - an Optional containing the response format
- Returns:
  updated builder instance
- See Also:
-
enabled
If set to false (default), Ollama model will not be available even if configured.
- Parameters:
  enabled - whether Ollama model is enabled, defaults to false
- Returns:
  updated builder instance
- See Also:
-
clearBaseUrl
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
baseUrl
The base URL for the Ollama API.
- Parameters:
  baseUrl - the base URL
- Returns:
  updated builder instance
- See Also:
-
clearLogRequests
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
logRequests
Whether to log API requests.
- Parameters:
  logRequests - an Optional containing true if requests should be logged, false otherwise
- Returns:
  updated builder instance
- See Also:
-
clearLogResponses
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
logResponses
Whether to log API responses.
- Parameters:
  logResponses - an Optional containing true if responses should be logged, false otherwise
- Returns:
  updated builder instance
- See Also:
-
customHeaders
This method replaces all values with the new ones.
- Parameters:
  customHeaders - custom headers map
- Returns:
  updated builder instance
- See Also:
-
addCustomHeaders
This method keeps existing values, then puts all new values into the map.
- Parameters:
  customHeaders - custom headers map
- Returns:
  updated builder instance
- See Also:
-
putCustomHeader
This method adds a new value to the map, or replaces it if the key already exists.
- Parameters:
  key - key to add or replace
  customHeader - new value for the key
- Returns:
  updated builder instance
- See Also:
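The three header operations above map directly onto plain `java.util.Map` semantics, which the following self-contained sketch demonstrates (header names and values are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class CustomHeadersDemo {
    // Each builder operation corresponds to a plain Map operation.
    static Map<String, String> buildHeaders() {
        Map<String, String> headers = new HashMap<>();
        // customHeaders(map): replaces all values with the new ones
        headers.clear();
        headers.putAll(Map.of("Authorization", "Bearer token"));
        // addCustomHeaders(map): keeps existing values, then puts all new values
        headers.putAll(Map.of("X-Trace", "on"));
        // putCustomHeader(key, customHeader): adds a value, or replaces it
        // if the key already exists
        headers.put("X-Trace", "off");
        return headers;
    }

    public static void main(String[] args) {
        System.out.println(buildHeaders());
    }
}
```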
-
clearTimeout
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
timeout
The timeout setting for API requests.
- Parameters:
  timeout - the timeout setting in Duration.parse(java.lang.CharSequence) format
- Returns:
  updated builder instance
- See Also:
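Since the timeout uses `Duration.parse` format (ISO-8601), a quick stdlib check shows what a value like `"PT30S"` resolves to:

```java
import java.time.Duration;

public class TimeoutDemo {
    public static void main(String[] args) {
        // "PT30S" is ISO-8601 for 30 seconds; "PT1M30S" would be 90 seconds.
        Duration timeout = Duration.parse("PT30S");
        System.out.println(timeout.toMillis()); // 30000
    }
}
```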
-
clearMaxRetries
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
maxRetries
The maximum number of retries for failed API requests.
- Parameters:
  maxRetries - an Optional containing the maximum number of retries
- Returns:
  updated builder instance
- See Also:
-
clearModelName
Clear existing value of this property.
- Returns:
  updated builder instance
- See Also:
-
modelName
The name of the Ollama model to use. This parameter determines which pre-trained model will process the input prompt and produce the output.
Examples of valid model names:
- "llama-2": Utilizes the LLaMA 2 model.
- "alpaca": Uses a fine-tuned LLaMA model for conversational tasks.
- "custom-model": A user-defined model trained for specific use cases.
- Parameters:
  modelName - an Optional containing the model name
- Returns:
  updated builder instance
- See Also:
-
temperature
The sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic.
- Returns:
  the temperature
-
topK
The maximum number of top-probability tokens to consider when generating text. Limits the token pool to the topK highest-probability tokens, controlling the balance between deterministic and diverse outputs. A smaller topK (e.g., 1) results in deterministic output, while a larger value (e.g., 50) allows for more variability and creativity.
- Returns:
  the top k
-
topP
The nucleus sampling value, where the model considers the results of the tokens with top_p probability mass.
- Returns:
  the top p
-
repeatPenalty
The penalty applied to repeated tokens during text generation. Higher values discourage the model from generating the same token multiple times, promoting more varied and natural output. A value of 1.0 applies no penalty (default behavior), while values greater than 1.0 reduce the likelihood of repetition. Excessively high values may overly penalize common phrases, leading to unnatural results.
- Returns:
  the repeat penalty
-
seed
The seed for the random number generator used by the model.
- Returns:
  the seed
-
numPredict
The number of tokens to generate during text prediction. This parameter determines the length of the output generated by the model.
- Returns:
  the num predict
-
stop
The list of sequences where the API will stop generating further tokens.
- Returns:
  the stop
-
format
The format of the generated output. This parameter specifies the structure or style of the text produced by the model, such as plain text, JSON, or a custom format.
Common examples:
- "plain": Generates unstructured plain text.
- "json": Produces output formatted as a JSON object.
- Custom values may be supported depending on the model's capabilities.
- Returns:
  the format
-
enabled
public boolean enabled()
If set to false (default), Ollama model will not be available even if configured.
- Returns:
  the enabled
-
baseUrl
The base URL for the Ollama API.
- Returns:
  the base url
-
logRequests
Whether to log API requests.
- Returns:
  the log requests
-
logResponses
Whether to log API responses.
- Returns:
  the log responses
-
customHeaders
A map containing custom headers.
- Returns:
  the custom headers
-
timeout
The timeout setting for API requests.
- Returns:
  the timeout
-
maxRetries
The maximum number of retries for failed API requests.
- Returns:
  the max retries
-
modelName
The name of the Ollama model to use. This parameter determines which pre-trained model will process the input prompt and produce the output.
Examples of valid model names:
- "llama-2": Utilizes the LLaMA 2 model.
- "alpaca": Uses a fine-tuned LLaMA model for conversational tasks.
- "custom-model": A user-defined model trained for specific use cases.
- Returns:
  the model name
-
config
If this instance was configured, this would be the config instance used.
- Returns:
  config node used to configure this builder, or empty if not configured
-
toString
-
preBuildPrototype
protected void preBuildPrototype()
Handles providers and decorators.
-
validatePrototype
protected void validatePrototype()
Validates required properties.
-