
3.1 Getting Started with LangChain

Topics covered:

  • Environment setup: installing Python, LangChain, and Jupyter Notebook
  • LLM hands-on: Prompts, Tool calling, Structured output, and Reasoning with LangChain

1. Environment Setup

1.1. Software Installation

1.1.1. Installing Python

Python 3.10+

https://www.python.org/downloads/

1.1.2. Installing LangChain

https://docs.langchain.com/oss/python/langchain/install

https://github.com/langchain-ai

1.1.3. Installing Jupyter

https://jupyter.org/install
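A minimal set of install commands run from a terminal (package names follow the pages linked above; pin versions as your project requires):

pip install -U langchain langchain-openai
pip install openai dashscope   # used by the OpenAI-SDK and DashScope examples below
pip install jupyterlab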

2. Hands-On with LangChain

2.1. Calling the OpenAI API

2.1.1. Example Code

2.1.1.1. Calling a Qwen Model with the OpenAI API

import os
from openai import OpenAI


client = OpenAI(
    # If the DASHSCOPE_API_KEY environment variable is not set, replace the next line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"), 
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
completion = client.chat.completions.create(
    model="qwen-plus", # 此处以qwen-plus为例,可按需更换模型名称。模型列表:https://help.aliyun.com/zh/model-studio/getting-started/models
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': '你是谁?'}],
    max_tokens=100,  # cap the number of output tokens
)

print(completion.model_dump_json())
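Each of these examples reads the key from the DASHSCOPE_API_KEY environment variable. In a notebook it can also be set for the current process like this (the value shown is a placeholder):

import os
os.environ["DASHSCOPE_API_KEY"] = "sk-xxx"  # placeholder; use your own Model Studio API key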

2.1.1.2. Calling a Qwen Model with the LangChain OpenAI Integration

from langchain_openai import ChatOpenAI
import os

chatLLM = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model="qwen-plus",  # 此处以qwen-plus为例,您可按需更换模型名称。模型列表:https://help.aliyun.com/zh/model-studio/getting-started/models
    # other params...
)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "你是谁?"}]
response = chatLLM.invoke(messages)
print(response.to_json())
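invoke returns an AIMessage object: response.content holds the generated text, while to_json() serializes the whole message, including response metadata.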

2.1.1.3. Calling a Qwen Model with the DashScope API

import os
from dashscope import Generation

messages = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': '你是谁?'}
    ]
response = Generation.call(
    # If the DASHSCOPE_API_KEY environment variable is not set, replace the next line with your Model Studio API key: api_key = "sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"), 
    model="qwen-plus",   # model list: https://help.aliyun.com/zh/model-studio/getting-started/models
    messages=messages,
    result_format="message"
)

if response.status_code == 200:
    print(response.output.choices[0].message.content)
else:
    print(f"HTTP返回码:{response.status_code}")
    print(f"错误码:{response.code}")
    print(f"错误信息:{response.message}")
    print("请参考文档:https://help.aliyun.com/zh/model-studio/developer-reference/error-code")

2.2. Prompts

2.2.1. Zero-Shot Prompting

from langchain_openai import ChatOpenAI
from langchain.messages import HumanMessage, AIMessage, SystemMessage

import os

chatLLM = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model="qwen-plus",  # 此处以qwen-plus为例,您可按需更换模型名称。模型列表:https://help.aliyun.com/zh/model-studio/getting-started/models
    # other params...
)

systemMessage = SystemMessage(content="You are a helpful assistant.")
userMessage = HumanMessage(content="""
Classify the text into neutral, negative or positive. 
Text: I think the vacation is okay.
Sentiment:
""")

messages = [systemMessage, userMessage]
response = chatLLM.invoke(messages)
print(response.content)

2.2.2. Few-Shot Prompting

from langchain_openai import ChatOpenAI
from langchain.messages import HumanMessage, AIMessage, SystemMessage

import os

chatLLM = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model="qwen-plus",  # 此处以qwen-plus为例,您可按需更换模型名称。模型列表:https://help.aliyun.com/zh/model-studio/getting-started/models
    # other params...
)

systemMessage = SystemMessage(content="You are a helpful assistant.")
userMessage = HumanMessage(content="""
A "whatpu" is a small, furry animal native to Tanzania. An example of a sentence that uses the word whatpu is:
We were traveling in Africa and we saw these very cute whatpus.
 
To do a "farduddle" means to jump up and down really fast. An example of a sentence that uses the word farduddle is:
""")

messages = [systemMessage, userMessage]
response = chatLLM.invoke(messages)
print(response.content)

2.3. Tool Calling
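A minimal tool-calling sketch reusing the same DashScope-backed ChatOpenAI client as above. The get_current_weather tool and its canned reply are made-up placeholders, and it is assumed that the chosen model supports tool calling through the compatible-mode endpoint:

import os
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

@tool
def get_current_weather(city: str) -> str:
    """Return the current weather for a city."""
    # Placeholder implementation; a real tool would call a weather API.
    return f"It is sunny and 25°C in {city}."

chatLLM = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model="qwen-plus",
)

# bind_tools attaches the tool schema so the model can request a call.
llm_with_tools = chatLLM.bind_tools([get_current_weather])
ai_msg = llm_with_tools.invoke("What is the weather in Hangzhou?")
print(ai_msg.tool_calls)  # e.g. [{'name': 'get_current_weather', 'args': {'city': 'Hangzhou'}, ...}]

To complete the loop, the requested tool would be executed, its result sent back as a ToolMessage, and the model invoked again; see the LangChain tool-calling documentation for the full pattern.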

2.4. Structured Output
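A minimal structured-output sketch, again with the DashScope-backed ChatOpenAI client. The Sentiment schema is a made-up example, and method="function_calling" is chosen here on the assumption that it is the most portable option for OpenAI-compatible endpoints:

import os
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class Sentiment(BaseModel):
    """Sentiment classification of a piece of text."""
    label: str = Field(description="One of: neutral, negative, positive")
    confidence: float = Field(description="Confidence score between 0 and 1")

chatLLM = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model="qwen-plus",
)

# with_structured_output returns parsed Sentiment objects instead of raw text.
structured_llm = chatLLM.with_structured_output(Sentiment, method="function_calling")
result = structured_llm.invoke("I think the vacation is okay.")
print(result)  # e.g. Sentiment(label='neutral', confidence=0.7)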

2.5. Reasoning
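Reasoning can be approached in two ways: prompting a general model to show its reasoning, or calling a dedicated reasoning/thinking model that exposes its reasoning trace (see the DashScope model documentation for the exact model names and parameters). Below is only the first, a zero-shot chain-of-thought prompting sketch that reuses the same client:

import os
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

chatLLM = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model="qwen-plus",
)

# Zero-shot chain of thought: ask the model to reason step by step before answering.
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="""
I went to the market and bought 10 apples. I gave 2 apples to the neighbor and 2 to the repairman.
I then went and bought 5 more apples and ate 1. How many apples did I remain with?
Let's think step by step.
"""),
]
response = chatLLM.invoke(messages)
print(response.content)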