How to Call a Local DeepSeek Model from Code: A Detailed Step-by-Step Guide (this one article is all you need; bookmark it!)

Calling the local DeepSeek model from Python:

https://pypi.org/project/ollama/

pip install ollama

from ollama import Client

# Connect to the Ollama server hosting the model (default port 11434)
client = Client(
    host='http://192.168.1.4:11434',
    headers={'x-some-header': 'some-value'}
)

# Ask a question and stream the reply chunk by chunk
stream = client.chat(
    model='deepseek-r1',
    stream=True,
    messages=[
        {
            'role': 'user',
            'content': "What's your name?",
        },
    ],
)
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
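One practical detail when consuming the stream: deepseek-r1 is a reasoning model, so its output begins with a `<think>...</think>` block containing its chain of thought. If you assemble the streamed chunks into a full answer, you may want to strip that block first. A minimal sketch (the `chunks` list below is simulated stand-in data, not real model output):

```python
import re

def strip_think(text: str) -> str:
    """Remove the <think>...</think> reasoning block that deepseek-r1 emits."""
    return re.sub(r'<think>.*?</think>', '', text, flags=re.DOTALL).strip()

# Simulated streamed chunks; in practice these come from client.chat(stream=True)
chunks = ['<think>The user asks ', 'my name.</think>', 'I am ', 'DeepSeek-R1.']
full_reply = ''.join(chunks)
print(strip_think(full_reply))  # → I am DeepSeek-R1.
```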

Calling the local DeepSeek model from Node.js:

https://www.npmjs.com/package/ollama

npm i ollama --save

import { Ollama } from 'ollama'

// Connect to the local Ollama server
const ollama = new Ollama({ host: 'http://127.0.0.1:11434' })

const prompt = "What's your name?"
const response = await ollama.chat({
  model: 'deepseek-r1:7b',
  messages: [{ role: 'user', content: prompt }],
})
console.log(response.message.content)

Calling the local DeepSeek model with curl:

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1:7b",
    "messages": [
      {
        "role": "user",
        "content": "Hello"
      }
    ]
  }'
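The endpoint above is Ollama's OpenAI-compatible API, so any HTTP client can call it, not just curl. A minimal Python sketch using only the standard library (the `build_payload` and `ask` helpers are illustrative names, not part of any library):

```python
import json
import urllib.request

def build_payload(model: str, prompt: str) -> bytes:
    # Same JSON body as the curl example above
    return json.dumps({
        'model': model,
        'messages': [{'role': 'user', 'content': prompt}],
    }).encode('utf-8')

def ask(host: str, model: str, prompt: str) -> str:
    req = urllib.request.Request(
        f'{host}/v1/chat/completions',
        data=build_payload(model, prompt),
        headers={'Content-Type': 'application/json'},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses put the reply under choices[0].message.content
    return body['choices'][0]['message']['content']

# Usage (requires a running Ollama server):
# print(ask('http://localhost:11434', 'deepseek-r1:7b', 'Hello'))
```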