Interface OpenAiChatModelConfig
- All Superinterfaces: Prototype.Api
- All Known Implementing Classes: OpenAiChatModelConfig.BuilderBase.OpenAiChatModelConfigImpl
Configuration for the OpenAI chat model, OpenAiChatModel.
Provides methods for setting up and managing properties related to OpenAI chat API requests.
Nested Class Summary
Nested Classes
- static class OpenAiChatModelConfig.Builder - Fluent API builder for OpenAiChatModelConfig.
- static class OpenAiChatModelConfig.BuilderBase&lt;BUILDER extends OpenAiChatModelConfig.BuilderBase&lt;BUILDER, PROTOTYPE&gt;, PROTOTYPE extends OpenAiChatModelConfig&gt; - Fluent API builder base for OpenAiChatModelConfig.
-
Field Summary
Fields
- CONFIG_ROOT - Default configuration prefix.
Method Summary
- apiKey() - The API key used to authenticate requests to the OpenAI API.
- baseUrl() - The base URL for the OpenAI API.
- static builder() - Create a new fluent API builder to customize configuration.
- static builder(OpenAiChatModelConfig instance) - Create a new fluent API builder from an existing instance.
- static OpenAiChatModelConfig create() - Create a new instance with default values.
- static OpenAiChatModelConfig create(...) - Create a new instance from configuration.
- customHeaders() - A map containing custom headers.
- boolean enabled() - If set to false (default), OpenAI model will not be available even if configured.
- frequencyPenalty() - The frequency penalty, between -2.0 and 2.0.
- logitBias() - LogitBias adjusts the likelihood of specific tokens appearing in a model's response.
- logRequests() - Whether to log API requests.
- logResponses() - Whether to log API responses.
- maxCompletionTokens() - The maximum number of tokens allowed for the model's response.
- maxRetries() - The maximum number of retries for failed API requests.
- maxTokens() - The maximum number of tokens to generate in the completion.
- modelName() - The model name to use (e.g., "gpt-3.5-turbo").
- organizationId() - The ID of the organization for API requests.
- parallelToolCalls() - Whether to allow parallel calls to tools.
- presencePenalty() - The presence penalty, between -2.0 and 2.0.
- proxy() - Proxy to use.
- responseFormat() - The format in which the model should return the response.
- seed() - The seed for the random number generator used by the model.
- stop() - The list of sequences where the API will stop generating further tokens.
- strictJsonSchema() - Whether to enforce a strict JSON schema for the model's output.
- strictTools() - Whether to enforce strict validation of tools used by the model.
- temperature() - The sampling temperature to use, between 0 and 2.
- timeout() - The timeout setting for API requests.
- Optional&lt;dev.langchain4j.model.Tokenizer&gt; tokenizer() - Tokenizer to use.
- topP() - The nucleus sampling value, where the model considers the results of the tokens with top_p probability mass.
- user() - The user ID associated with the API requests.
-
Field Details
-
CONFIG_ROOT
Default configuration prefix.
-
Method Details
-
builder
Create a new fluent API builder to customize configuration.
- Returns: a new builder
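As a brief illustration, a typical use of the fluent builder might look like the sketch below. The setter names mirror the accessor names documented on this page (apiKey, modelName, temperature, maxRetries, timeout) and are assumptions about the generated builder, not verified against a specific release:

```java
// Hedged sketch: assumes the generated builder exposes setters named
// after the accessors documented on this page.
import java.time.Duration;

OpenAiChatModelConfig config = OpenAiChatModelConfig.builder()
        .apiKey(System.getenv("OPENAI_API_KEY")) // avoid hard-coding secrets
        .modelName("gpt-3.5-turbo")
        .temperature(0.2)                // lower values: more deterministic output
        .maxRetries(3)
        .timeout(Duration.ofSeconds(30))
        .build();
```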
-
builder
Create a new fluent API builder from an existing instance.
- Parameters: instance - an existing instance used as a base for the builder
- Returns: a builder based on an instance
-
create
Create a new instance from configuration.
- Parameters: config - used to configure the new instance
- Returns: a new instance configured from configuration
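As an illustration only, a configuration source feeding this method might look like the fragment below. The key names are hypothetical, derived from the accessor names on this page; the actual prefix is given by CONFIG_ROOT (see Field Details), and the real schema should be checked against the Helidon documentation:

```yaml
# Hypothetical keys; names derived from the accessors documented on this page.
open-ai:
  chat-model:
    enabled: true
    api-key: ${OPENAI_API_KEY}
    model-name: "gpt-3.5-turbo"
    temperature: 0.2
    max-retries: 3
    timeout: "PT30S"   # ISO-8601 duration, see timeout()
```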
-
create
Create a new instance with default values.
- Returns: a new instance
-
maxRetries
The maximum number of retries for failed API requests.
- Returns: an Optional containing the maximum number of retries
-
temperature
The sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic.
- Returns: an Optional containing the sampling temperature
-
topP
The nucleus sampling value, where the model considers the results of the tokens with top_p probability mass.
- Returns: an Optional containing the nucleus sampling value
-
stop
The list of sequences where the API will stop generating further tokens.
- Returns: the list of stop sequences
-
maxTokens
The maximum number of tokens to generate in the completion.
- Returns: an Optional containing the maximum number of tokens
-
maxCompletionTokens
The maximum number of tokens allowed for the model's response.
- Returns: an Optional containing the maximum number of completion tokens
-
presencePenalty
The presence penalty, between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, encouraging the model to use new words.
- Returns: an Optional containing the presence penalty
-
frequencyPenalty
The frequency penalty, between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line.
- Returns: an Optional containing the frequency penalty
-
logitBias
LogitBias adjusts the likelihood of specific tokens appearing in a model's response. A map of token IDs to bias values (-100 to 100). Positive values increase the chance of the token, while negative values reduce it, allowing fine control over token preferences in the output.
- Returns: a logitBias map
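A minimal stdlib-only sketch of the value range described above. The token IDs are placeholders, and the String-keyed map shape is an assumption (the underlying OpenAI API keys logit_bias by token ID strings); only the clamping of biases into the documented range is illustrated:

```java
import java.util.Map;

public class LogitBiasSketch {
    // Clamp a requested bias into the -100..100 range described above.
    static int clampBias(int bias) {
        return Math.max(-100, Math.min(100, bias));
    }

    public static void main(String[] args) {
        // Placeholder token IDs mapped to bias values.
        Map<String, Integer> logitBias = Map.of(
                "50256", clampBias(-100), // strongly suppress this token
                "11899", clampBias(40));  // make this token more likely
        System.out.println(logitBias.get("50256")); // prints -100
    }
}
```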
-
responseFormat
The format in which the model should return the response.
- Returns: an Optional containing the response format
-
strictJsonSchema
Whether to enforce a strict JSON schema for the model's output.
- Returns: an Optional containing true if strict JSON schema is enforced, false otherwise
-
seed
The seed for the random number generator used by the model.
- Returns: an Optional containing the seed
-
user
The user ID associated with the API requests.
- Returns: an Optional containing the user ID
-
strictTools
Whether to enforce strict validation of tools used by the model.
- Returns: an Optional containing true if strict tools are enforced, false otherwise
-
parallelToolCalls
Whether to allow parallel calls to tools.
- Returns: an Optional containing true if parallel tool calls are allowed, false otherwise
-
tokenizer
Optional&lt;dev.langchain4j.model.Tokenizer&gt; tokenizer()
Tokenizer to use.
- Returns: an Optional containing the tokenizer
-
enabled
boolean enabled()
If set to false (default), the OpenAI model will not be available even if configured.
- Returns: whether the OpenAI model is enabled; defaults to false
-
baseUrl
The base URL for the OpenAI API.
- Returns: the base URL
-
apiKey
The API key used to authenticate requests to the OpenAI API.
- Returns: an Optional containing the API key
-
logRequests
Whether to log API requests.
- Returns: an Optional containing true if requests should be logged, false otherwise
-
logResponses
Whether to log API responses.
- Returns: an Optional containing true if responses should be logged, false otherwise
-
customHeaders
A map containing custom headers.
- Returns: the custom headers map
-
timeout
The timeout setting for API requests.
- Returns: the timeout setting in Duration.parse(java.lang.CharSequence) format
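Duration.parse accepts ISO-8601 duration strings; a quick stdlib-only illustration of the format (independent of this API):

```java
import java.time.Duration;

public class TimeoutFormatExample {
    public static void main(String[] args) {
        // ISO-8601 duration strings, as accepted by Duration.parse(CharSequence).
        Duration requestTimeout = Duration.parse("PT30S");   // 30 seconds
        Duration longerTimeout = Duration.parse("PT1M30S");  // 1 minute 30 seconds
        System.out.println(requestTimeout.getSeconds()); // prints 30
        System.out.println(longerTimeout.getSeconds());  // prints 90
    }
}
```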
-
proxy
Proxy to use.
- Returns: an Optional containing the HTTP proxy to use
-
organizationId
The ID of the organization for API requests.
- Returns: the organization ID
-
modelName
The model name to use (e.g., "gpt-3.5-turbo").
- Returns: the model name
-