Interface InProcessEmbeddingModelConfig
- All Superinterfaces:
Prototype.Api
- All Known Implementing Classes:
InProcessEmbeddingModelConfig.BuilderBase.InProcessEmbeddingModelConfigImpl
Configuration blueprint for LangChain4j in-process models.
Nested Class Summary
Nested Classes
- static class InProcessEmbeddingModelConfig.Builder: Fluent API builder for InProcessEmbeddingModelConfig.
- static class InProcessEmbeddingModelConfig.BuilderBase&lt;BUILDER extends InProcessEmbeddingModelConfig.BuilderBase&lt;BUILDER, PROTOTYPE&gt;, PROTOTYPE extends InProcessEmbeddingModelConfig&gt;: Fluent API builder base for InProcessEmbeddingModelConfig.
-
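The recursive generic signature of BuilderBase is the standard Java "self-type" idiom: inherited fluent setters return the concrete builder type rather than the base class. A minimal, self-contained sketch of that pattern follows; the class and property names here (ConfigBuilderBase, ModelConfig) are hypothetical simplifications, not the actual Helidon classes.

```java
// Sketch of the recursive-generics "fluent builder base" pattern used by
// InProcessEmbeddingModelConfig.BuilderBase (simplified, hypothetical names).
abstract class ConfigBuilderBase<B extends ConfigBuilderBase<B, T>, T> {
    protected boolean enabled = true;   // mirrors enabled() defaulting to true
    protected String pathToModel;

    // Cast is safe as long as concrete builders pass themselves as B.
    @SuppressWarnings("unchecked")
    protected B self() { return (B) this; }

    public B enabled(boolean enabled) { this.enabled = enabled; return self(); }
    public B pathToModel(String path) { this.pathToModel = path; return self(); }

    public abstract T build();
}

final class ModelConfig {
    final boolean enabled;
    final String pathToModel;

    ModelConfig(boolean enabled, String pathToModel) {
        this.enabled = enabled;
        this.pathToModel = pathToModel;
    }

    static Builder builder() { return new Builder(); }

    // The concrete builder names itself as the BUILDER type parameter,
    // so enabled(...) and pathToModel(...) return Builder, not the base class.
    static final class Builder extends ConfigBuilderBase<Builder, ModelConfig> {
        @Override
        public ModelConfig build() { return new ModelConfig(enabled, pathToModel); }
    }
}

public class BuilderSketch {
    public static void main(String[] args) {
        ModelConfig cfg = ModelConfig.builder()
                .enabled(true)
                .pathToModel("/path/to/model.onnx")
                .build();
        System.out.println(cfg.pathToModel); // prints "/path/to/model.onnx"
    }
}
```

Without the self-type parameter, a setter inherited from the base class would return the base type and break method chaining on subclass-specific setters.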
Method Summary
- builder(): Create a new fluent API builder to customize configuration.
- builder(InProcessEmbeddingModelConfig instance): Create a new fluent API builder from an existing instance.
- create(io.helidon.config.Config config): Deprecated. Create a new instance from configuration.
- create(Config config): Create a new instance from configuration.
- boolean enabled(): Whether the embedding model is enabled.
- Optional&lt;ThreadPoolConfig&gt; executor(): Executor configuration used by the embedding model.
- pathToModel(): The path to the model file (e.g., "/path/to/model.onnx").
- pathToTokenizer(): The path to the tokenizer file (e.g., "/path/to/tokenizer.json").
- Optional&lt;dev.langchain4j.model.embedding.onnx.PoolingMode&gt; poolingMode(): The pooling mode to use.
- InProcessModelType type(): Which in-process ONNX model variant should be used.
-
Method Details
-
builder
Create a new fluent API builder to customize configuration.
- Returns:
- a new builder
-
builder
Create a new fluent API builder from an existing instance.
- Parameters:
instance - an existing instance used as a base for the builder
- Returns:
- a builder based on an instance
-
create
Create a new instance from configuration.
- Parameters:
config - used to configure the new instance
- Returns:
- a new instance configured from configuration
-
create
Deprecated.
Create a new instance from configuration.
- Parameters:
config - used to configure the new instance
- Returns:
- a new instance configured from configuration
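Since this prototype can be created from configuration, a config fragment can drive it. The YAML below is a hypothetical sketch: the key names are inferred from the method names using the usual kebab-case mapping convention, and the enclosing prefix depends on how the application registers the model, so neither is verified against the actual schema.

```yaml
# Hypothetical keys derived from the blueprint's property methods (kebab-case).
enabled: true
path-to-model: "/path/to/model.onnx"
path-to-tokenizer: "/path/to/tokenizer.json"
pooling-mode: "MEAN"
```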
-
enabled
boolean enabled()
Whether the embedding model is enabled. If set to false, the model will not be available even if configured.
- Returns:
- whether the embedding model is enabled, defaults to true
-
executor
Optional&lt;ThreadPoolConfig&gt; executor()
Executor configuration used by the embedding model.
- Returns:
- optional executor configuration
-
type
InProcessModelType type()
Which in-process ONNX model variant should be used.
- Returns:
- the configured in-process ONNX model type
-
pathToModel
The path to the model file (e.g., "/path/to/model.onnx").
- Returns:
- an Optional containing the configured model path, or an empty Optional if not set
-
pathToTokenizer
The path to the tokenizer file (e.g., "/path/to/tokenizer.json").
- Returns:
- an Optional containing the configured tokenizer path, or an empty Optional if not set
-
poolingMode
Optional&lt;dev.langchain4j.model.embedding.onnx.PoolingMode&gt; poolingMode()
The pooling mode to use. It can be found in the ".../1_Pooling/config.json" file of the model on Hugging Face. For example, "pooling_mode_mean_tokens": true means that PoolingMode.MEAN should be used.
- Returns:
- an Optional containing the configured PoolingMode, or an empty Optional if no pooling mode is explicitly configured
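To illustrate where the pooling mode comes from: a sentence-transformers 1_Pooling/config.json typically looks like the fragment below. The values shown are illustrative, not taken from any specific model.

```json
{
  "word_embedding_dimension": 384,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false
}
```

Here "pooling_mode_mean_tokens": true indicates that PoolingMode.MEAN should be configured.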