Datasets
SmolLM Corpus:
Cosmopedia v2: a 28B-token synthetic dataset of textbooks and stories generated by Mixtral.
Instruction-tuning dataset: StarCoder2-Self-OSS-Instruct.
DPO datasets:
The 135M and 1.7B models use the HelpSteer dataset;
the 360M model uses argilla/dpo-mix-7k;
all were trained for just one epoch (a rough sketch of such a run follows below).
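For a sense of what that one-epoch DPO stage looks like in practice, here is a rough sketch using the TRL library. The checkpoint and dataset names are real, but the trainer arguments and beta are my assumptions rather than the authors' actual training script, and the exact DPOTrainer signature varies across TRL versions:

# pip install trl datasets transformers
# Hypothetical sketch of a one-epoch DPO run -- NOT the authors' actual script.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Released checkpoint used purely for illustration; the real DPO ran on the SFT checkpoint.
checkpoint = "HuggingFaceTB/SmolLM-360M-Instruct"
model = AutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# argilla/dpo-mix-7k supplies chosen/rejected preference pairs
train_dataset = load_dataset("argilla/dpo-mix-7k", split="train")

config = DPOConfig(
    output_dir="smollm-dpo",
    num_train_epochs=1,  # the post says a single epoch
    beta=0.1,            # assumed DPO temperature, not stated in the post
)
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL versions take `tokenizer=` instead
)
trainer.train()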
Performance
Model Architecture
Self-attention uses grouped-query attention (GQA); the key model details are as follows (a minimal GQA sketch appears after this list):
Context length: all of these models support a 2048-token context (longer contexts are possible with further fine-tuning).
Tokenizer: trained on the SmolLM Corpus, with a vocabulary size of 49,152.
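To make the GQA mention concrete, here is a minimal PyTorch sketch of grouped-query attention, in which each group of query heads shares a single key/value head. The head counts and dimensions are illustrative assumptions, not SmolLM's actual configuration:

import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v, n_q_heads, n_kv_heads):
    # q: (batch, seq, n_q_heads * head_dim); k, v: (batch, seq, n_kv_heads * head_dim)
    b, t, _ = q.shape
    d = q.shape[-1] // n_q_heads
    group = n_q_heads // n_kv_heads  # query heads per shared KV head
    q = q.view(b, t, n_q_heads, d).transpose(1, 2)   # (b, n_q_heads, t, d)
    k = k.view(b, t, n_kv_heads, d).transpose(1, 2)  # (b, n_kv_heads, t, d)
    v = v.view(b, t, n_kv_heads, d).transpose(1, 2)
    # replicate each KV head across its query group, then attend causally
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return out.transpose(1, 2).reshape(b, t, n_q_heads * d)

# Illustrative shapes: 32 query heads sharing 8 KV heads, head_dim 64
q = torch.randn(1, 16, 32 * 64)
k = torch.randn(1, 16, 8 * 64)
v = torch.randn(1, 16, 8 * 64)
print(grouped_query_attention(q, k, v, 32, 8).shape)  # torch.Size([1, 16, 2048])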
Running It
The official inference code (the snippet as published had a few small issues to fix yourself; the fixes are noted in comments below):
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM-1.7B-Instruct"
device = "cuda"  # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do
# `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "List the steps to bake a chocolate cake from scratch."}]
# fix: add_generation_prompt=True so the template ends with the assistant turn
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(input_text)
# fix: use `device` instead of the hardcoded "cuda" so the CPU path also works
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=100, temperature=0.6, top_p=0.92, do_sample=True)
print(tokenizer.decode(outputs[0]))
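The test snippets below call a chat() helper that the post never defines. A minimal wrapper over the objects loaded above might look like this; the function name and decoding settings are my assumptions, not the author's actual helper:

def chat(prompt):
    # Hypothetical helper (not from the post): wraps the snippet above in one call.
    messages = [{"role": "user", "content": prompt}]
    input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
    outputs = model.generate(inputs, max_new_tokens=100, temperature=0.6, top_p=0.92, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))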
Quick test examples
chat("hello")
# The statement is false.

chat("1+1=")
# The equation 1+1=1 is true, but the equation 1+1=2 is false. This is because the statement "1+1=1" is a tautology, meaning that it is always true regardless of the values of its variables. However, the statement "1+1=2" is a contradiction, meaning that it is false if and only if both 1 and 1 are true. Since 1 is true, the statement "1+1
The official example
chat("List the steps to bake a chocolate cake from scratch.")
# result
To bake a chocolate cake from scratch, follow these steps:
1. Gather ingredients:
- 2 cups all-purpose flour
- 1 cup unsweetened cocoa powder
- 2 teaspoons baking soda
- 1 teaspoon salt
- 2 cups granulated sugar
- 2 large eggs
- 1 cup whole milk
- 1/2 cup vegetable oil
- 2 teaspoons vanilla extract
2. Preheat the oven to 350°F (175°C).
3. Combine dry ingredients:
- In a large bowl, whisk together flour, cocoa powder, baking soda, and salt.
4. Mix wet ingredients:
- In another bowl, beat sugar, eggs, milk, oil, and vanilla until well combined.
5. Combine wet and dry mixtures:
- Gradually add the wet mixture to the dry ingredients, stirring until just combined.
6. Pour batter into a greased 9-inch round cake pan.
7. Bake for 30-35 minutes, or until a toothpick inserted into the center comes out clean.
8. Allow the cake to cool in the pan for 10 minutes, then transfer it to a wire rack to cool completely.
Note: The original answer provided a list of ingredients and a step-by-step process, but it lacked a clear explanation of the steps and the reasoning behind them. The revised answer provides a more detailed and coherent explanation of the baking process.
In my own testing, outside of the official example this model doesn't perform anywhere near as well as claimed; frankly it comes across as rather dumb. Or maybe I'm just holding it wrong? Is it really a case of "Bad Baseline Is All You Need"? I hope everyone keeps pushing. Either way, the data-processing part of this open-source project is well worth studying; if you're interested, take a look:
# Blog post
https://huggingface.co/blog/smollm
# Mirror for readers who cannot reach the above
https://hf-mirror.com/blog/smollm