This post walks through an example of integrating LangChain4j with the Chroma vector database to embed and store text. It covers: 1. installing Chroma and starting it in a Docker container; 2. adding the langchain4j-chroma dependency to pom.xml; 3. sample code demonstrating text embedding and storage in the vector database.
 
docker run -d \
  --name chromadb \
  -p 8000:8000 \
  -v "$(pwd)/chroma_data:/chroma/chroma" \
  -e IS_PERSISTENT=TRUE \
  -e ANONYMIZED_TELEMETRY=TRUE \
  docker.1ms.run/chromadb/chroma:latest
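Once the container is running, you can confirm it came up correctly before wiring up the Java side (a minimal check, using the container name from the command above):

docker ps --filter "name=chromadb"
docker logs chromadb

Next, add the langchain4j-chroma dependency to pom.xml: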
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-chroma</artifactId>
            <version>1.0.0-beta1</version>
        </dependency>
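The example below also uses JlamaEmbeddingModel to compute embeddings locally, which lives in a separate module. Assuming its version is kept in line with langchain4j-chroma, the extra dependency would look like this:

        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-jlama</artifactId>
            <version>1.0.0-beta1</version>
        </dependency>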
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.jlama.JlamaEmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingMatch;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.chroma.ChromaEmbeddingStore;

import java.util.List;

public class JlamaChromaExample {
    public static void main(String[] args) {
        String chromaEndpoint = "http://localhost:8000";
        // Connect to the local Chroma instance and use (or create) the "test1_collection" collection
        EmbeddingStore<TextSegment> embeddingStore = ChromaEmbeddingStore
                .builder()
                .baseUrl(chromaEndpoint)
                .collectionName("test1_collection")
                .logRequests(true)
                .logResponses(true)
                .build();
        // Local embedding model served by Jlama (downloads intfloat/e5-small-v2 on first run)
        EmbeddingModel embeddingModel = JlamaEmbeddingModel.builder()
                .modelName("intfloat/e5-small-v2")
                .build();
        // Embed two text segments and store them in Chroma together with their vectors
        TextSegment segment1 = TextSegment.from("I like football.");
        Embedding embedding1 = embeddingModel.embed(segment1).content();
        embeddingStore.add(embedding1, segment1);
        TextSegment segment2 = TextSegment.from("The weather is good today.");
        Embedding embedding2 = embeddingModel.embed(segment2).content();
        embeddingStore.add(embedding2, segment2);
        // Embed the query and retrieve the single most similar stored segment
        Embedding queryEmbedding = embeddingModel.embed("What is your favourite sport?").content();
        List<EmbeddingMatch<TextSegment>> relevant = embeddingStore.findRelevant(queryEmbedding, 1);
        EmbeddingMatch<TextSegment> embeddingMatch = relevant.get(0);
        System.out.println(embeddingMatch.score()); // 0.8144288493114709
        System.out.println(embeddingMatch.embedded().text()); // I like football.
    }
}
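Running the example (here without jlama-native on the classpath) prints output along these lines: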
WARNING: Using incubator modules: jdk.incubator.vector
INFO  c.g.tjake.jlama.model.AbstractModel - Model type = F32, Working memory type = F32, Quantized memory type = F32
WARN  c.g.t.j.t.o.TensorOperationsProvider - Native operations not available. Consider adding 'com.github.tjake:jlama-native' to the classpath
INFO  c.g.t.j.t.o.TensorOperationsProvider - Using Panama Vector Operations (OffHeap)
0.8279024262570531
I like football.
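Note that in recent LangChain4j releases findRelevant(...) is deprecated in favour of search(...). A minimal sketch of the equivalent query, assuming EmbeddingSearchRequest and EmbeddingSearchResult from dev.langchain4j.store.embedding are available in your version:

        // Equivalent lookup via the newer search API (sketch; reuses queryEmbedding and embeddingStore from the example above)
        EmbeddingSearchRequest request = EmbeddingSearchRequest.builder()
                .queryEmbedding(queryEmbedding)
                .maxResults(1)
                .build();
        EmbeddingSearchResult<TextSegment> result = embeddingStore.search(request);
        EmbeddingMatch<TextSegment> match = result.matches().get(0);
        System.out.println(match.score());
        System.out.println(match.embedded().text());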