Choosing a deployment tool for large models doesn't have to be hard. Using the DeepSeek-R1 32B model as a running example, this article walks through how to choose between Ollama and llama.cpp. Key topics:
1. Background on Ollama and llama.cpp as LLM deployment tools and where they differ
2. The technical relationship between Ollama and llama.cpp and their underlying implementation
3. Hands-on performance testing and deployment of the DeepSeek-R1 32B model with both tools
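To serve a locally downloaded GGUF file through Ollama, you first point a Modelfile at it. A minimal sketch, assuming the bartowski Q5_K_M quantization sits under ./bartowski/ and the Modelfile is saved as deepseek-r1-32b.Modelfile: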
FROM ./bartowski/DeepSeek-R1-Distill-Qwen-32B-Q5_K_M.gguf
ollama create my-deepseek-r1-32b-gguf -f .\deepseek-r1-32b.Modelfile
ollama run my-deepseek-r1-32b-gguf:latest
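Once the model responds, ollama ps shows where the weights actually landed: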
NAME                             ID              SIZE     PROCESSOR          UNTIL
my-deepseek-r1-32b-gguf:latest   ad9f11c41b7a    25 GB    87%/13% CPU/GPU    3 minutes from now
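The 87%/13% CPU/GPU split means Ollama placed only a small fraction of the layers in VRAM and kept the bulk of the 25 GB in system RAM: the model loads and runs on a card far smaller than the full weights, just more slowly.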
For the llama.cpp side, build it from source following the official guide (the link below covers the Git Bash / MinGW64 path on Windows):
https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md#git-bash-mingw64
Once built, start an interactive session against the same GGUF file:
build/bin/Release/llama-cli -m "/path/to/DeepSeek-R1-Distill-Qwen-32B-Q5_K_M.gguf" -ngl 100 -c 16384 -t 10 -n -2 -cnv
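Here -m points at the model, -ngl 100 asks to offload up to 100 layers to the GPU, -c 16384 sets the context window, -t 10 uses 10 CPU threads, -n -2 generates until the context is filled, and -cnv enables conversation (chat) mode. On a GPU without enough VRAM for the requested offload, the Vulkan build fails outright: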
ggml_vulkan: Device memory allocation of size 1025355776 failed.
ggml_vulkan: vk::Device::allocateMemory: ErrorOutOfDeviceMemory
llama_model_load: error loading model: unable to allocate Vulkan0 buffer
llama_model_load_from_file_impl: failed to load model
common_init_from_params: failed to load model 'D:/llm/Model/bartowski/DeepSeek-R1-Distill-Qwen-32B-Q5_K_M.gguf'
main: error: unable to load model
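llama.cpp offloads exactly as many layers as you ask for: -ngl 100 requests more than this card's VRAM can hold, so the Vulkan allocator fails instead of spilling the remainder to system RAM. The remedy is to lower the offload count until the buffers fit; for example (the value 30 is illustrative, tune it to your VRAM):

build/bin/Release/llama-cli -m "/path/to/DeepSeek-R1-Distill-Qwen-32B-Q5_K_M.gguf" -ngl 30 -c 16384 -t 10 -n -2 -cnv

Ollama never hits this error because it sizes the offload itself before loading. The relevant logic is the EstimateGPULayers function in Ollama's source: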
// Given a model and one or more GPU targets, predict how many layers and bytes we can load, and the total size
// The GPUs provided must all be the same Library
func EstimateGPULayers(gpus []discover.GpuInfo, f *ggml.GGML, projectors []string, opts api.Options) MemoryEstimate {
	// Graph size for a partial offload, applies to all GPUs
	var graphPartialOffload uint64

	// Graph size when all layers are offloaded, applies to all GPUs
	var graphFullOffload uint64

	// Final graph offload once we know full or partial
	var graphOffload uint64

	...
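This is the machinery behind the 87%/13% split reported by ollama ps: Ollama estimates, per GPU, how many layers plus compute-graph overhead (the graphPartialOffload / graphFullOffload figures above) fit into free VRAM, offloads that many, and runs the rest on CPU. It trades raw speed for never failing to load, while llama.cpp leaves that trade-off entirely in your hands.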