LLM Large Language Models (13): A detailed, step-by-step record of making ChatGLM3-6B compatible with LangChain's Function Call


    # LangChain: the original prompt

    System: Respond to the human as helpfully and accurately as possible. You have access to the following tools:

    Calculator: Useful for when you need to calculate math problems, args: {'calculation': {'description': 'calculation to perform', 'title': 'Calculation', 'type': 'string'}}

    Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).

    Valid "action" values: "Final Answer" or Calculator

    Provide only ONE action per $JSON_BLOB, as shown:

    ```
    {
        "action": $TOOL_NAME,
        "action_input": $INPUT
    }
    ```
    Follow this format:

    Question: input question to answer
    Thought: consider previous and subsequent steps
    Action:
    ```
    $JSON_BLOB
    ```
    Observation: action result
    ... (repeat Thought/Action/Observation N times)
    Thought: I know what to respond
    Action:
    ```
    {
        "action": "Final Answer",
        "action_input": "Final response to human"
    }

    Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation
    Human: 34 * 34

    (reminder to respond in a JSON blob no matter what)


    # ChatGLM: locate the tool description in the original prompt

    Calculator: Useful for when you need to calculate math problems, args: {'calculation': {'description': 'calculation to perform', 'title': 'Calculation', 'type': 'string'}}
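
    A minimal sketch of this step, assuming every tool line has the form `Name: description, args: {...}`; the helper name `extract_tools` and the regex are illustrative, not code from the actual wrapper:

    ```
    import ast
    import re

    def extract_tools(prompt: str) -> list:
        """Parse lines like 'Calculator: ..., args: {...}' into ChatGLM3 tool dicts."""
        tools = []
        pattern = r"(?P<name>[^\n:]+): (?P<desc>.+?), args: (?P<args>\{.+\})"
        for match in re.finditer(pattern, prompt):
            # The args blob is a Python-style dict literal, so literal_eval is enough.
            args = ast.literal_eval(match.group("args"))
            parameters = {
                name: {"description": spec["description"], "type": spec["type"]}
                for name, spec in args.items()
            }
            tools.append({
                "name": match.group("name").strip(),
                "description": match.group("desc").strip(),
                "parameters": parameters,
            })
        return tools
    ```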

    # ChatGLM: locate the user input in the original prompt

    Human: 34 * 34\n\n\n(reminder to respond in a JSON blob no matter what)
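
    The user turn can be recovered the same way (again a hypothetical helper, shown only to make the step concrete): everything after the last `Human:` marker becomes the user message, and the bare question becomes the query for the next chat() call.

    ```
    def extract_user_turn(prompt: str):
        """Split off the last 'Human:' section of the prompt.

        Returns (content, query): content keeps the trailing JSON-blob reminder and
        goes into self.history; query is just the question itself, e.g. '34 * 34'.
        """
        content = prompt.rsplit("Human: ", 1)[-1]
        query = content.split("(reminder", 1)[0].strip()
        return content, query
    ```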

    # ChatGLM: convert the original prompt into ChatGLM's conversation format, record it in self.history, and take the user input as the next query = 34 * 34

    [
        {
            'role': 'system',
            'content': 'Answer the following questions as best as you can. You have access to the following tools:',
            'tools': [
                {
                    'name': 'Calculator',
                    'description': 'Useful for when you need to calculate math problems',
                    'parameters': {
                        'calculation': {
                            'description': 'calculation to perform',
                            'type': 'string'
                        }
                    }
                }
            ]
        },
        {
            'role': 'user',
            'content': '34 * 34\n\n\n (reminder to respond in a JSON blob no matter what)'
        }
    ]
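
    Combining the two helpers sketched above, seeding self.history boils down to something like this (the system text is the fixed tool-calling system prompt shown in the dump above):

    ```
    def build_history(prompt: str) -> list:
        """Re-express the LangChain prompt as ChatGLM3 system + user messages."""
        content, _query = extract_user_turn(prompt)
        return [
            {
                "role": "system",
                "content": "Answer the following questions as best as you can. "
                           "You have access to the following tools:",
                "tools": extract_tools(prompt),
            },
            {"role": "user", "content": content},
        ]
    ```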

    # ChatGLM: generate from self.history and the query, and assign the result back to self.history; the new self.history is shown below

    [
        {'role': 'system', 'content': 'Answer the following questions as best as you can. You have access to the following tools:', 'tools': [{'name': 'Calculator', 'description': 'Useful for when you need to calculate math problems', 'parameters': {'calculation': {'description': 'calculation to perform', 'type': 'string'}}}]},
        {'role': 'user', 'content': '34 * 34\n\n\n (reminder to respond in a JSON blob no matter what)'},
        {'role': 'user', 'content': '34 * 34'},
        {'role': 'assistant', 'metadata': 'Calculator', 'content': " ```python\ntool_call(calculation='34*34')\n```"}
    ]

    ==Two new messages were added==

    {'role': 'user', 'content': '34 * 34'}, 
    {'role': 'assistant', 'metadata': 'Calculator', 'content': " ```python\ntool_call(calculation='34*34')\n```"}
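
    The generation itself is the standard ChatGLM3-6B chat call, roughly as below (run inside the wrapper, where self.model and self.tokenizer hold the model and tokenizer; parameter names follow the ChatGLM3-6B chat() helper):

    ```
    response, self.history = self.model.chat(
        self.tokenizer,
        query,                  # "34 * 34"
        history=self.history,   # system + user messages built above
        role="user",
    )
    # chat() appends both the query (as a user turn) and the assistant turn to the
    # returned history, which is why exactly two new messages appear.
    ```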

    # ChatGLM: parse the tool call from the LLM's latest reply and return it from _call()


    response = '\nAction: \n```\n{"action": "Calculator", "action_input": {"calculation": "34*34"}}\n```'
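
    One way to produce that string (a sketch; the post's exact parsing code is not shown here): read the tool name from `metadata`, evaluate the `tool_call(...)` snippet with a kwargs-collecting stub, and dump the result as the Action blob LangChain's structured-chat parser expects.

    ```
    import json
    import re

    def to_action_blob(message: dict) -> str:
        """Turn a ChatGLM3 tool-call message into a structured-chat Action blob."""
        tool_name = message["metadata"].strip()
        # The assistant content is a fenced python snippet calling tool_call(...).
        code = re.search(r"tool_call\(.*?\)", message["content"], re.S).group(0)

        def tool_call(**kwargs):        # stub that just collects the keyword arguments
            return kwargs

        action_input = eval(code, {"tool_call": tool_call})
        blob = json.dumps({"action": tool_name, "action_input": action_input})
        fence = "`" * 3                 # the blob must be wrapped in a triple-backtick fence
        return f"\nAction: \n{fence}\n{blob}\n{fence}"
    ```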

    # ChatGLM: update the History argument of _call() by appending a pair = (prompt, response), which is handed back to LangChain


    ==Here prompt is the original prompt==
    ==response is the tool call ChatGLM generated for the next step, i.e. exactly the kind of result the original prompt asks the LLM to return==
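
    Pulled together, the first round of `_call()` looks roughly like this sketch (a method of the custom LLM wrapper class; the helper names are the illustrative ones from the sketches above):

    ```
    def _call(self, prompt: str, history=None, stop=None) -> str:
        """One round: convert the prompt, generate with ChatGLM3, return an Action blob."""
        self.history = build_history(prompt)
        _, query = extract_user_turn(prompt)
        _, self.history = self.model.chat(
            self.tokenizer, query, history=self.history, role="user"
        )
        response = to_action_blob(self.history[-1])   # tool call rewritten for LangChain
        if history is not None:
            history.append((prompt, response))        # the new (prompt, response) pair
        return response
    ```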

    # LangChain: execute the tool call, get the tool's return value, and call the LLM again


    ==At this point the LLM has not returned a Final Answer yet, so LangChain keeps calling it==
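
    For context, the LangChain side that drives this loop can be wired up roughly as follows (a sketch: `ChatGLM3` stands for the custom LLM wrapper from the earlier posts in this series, and the Calculator tool here is a numexpr-based stand-in):

    ```
    import numexpr
    from langchain import hub
    from langchain.agents import AgentExecutor, create_structured_chat_agent
    from langchain.pydantic_v1 import BaseModel, Field
    from langchain.tools import tool

    class CalculatorInput(BaseModel):
        calculation: str = Field(description="calculation to perform")

    @tool(args_schema=CalculatorInput)
    def Calculator(calculation: str) -> str:
        """Useful for when you need to calculate math problems"""
        return str(numexpr.evaluate(calculation))

    llm = ChatGLM3()  # the custom LLM wrapper around ChatGLM3-6B
    prompt = hub.pull("hwchase17/structured-chat-agent")
    agent = create_structured_chat_agent(llm, [Calculator], prompt)
    executor = AgentExecutor(agent=agent, tools=[Calculator], verbose=True)

    print(executor.invoke({"input": "34 * 34"}))
    ```

    The dict printed at the very end of this walkthrough ({'input': '34 * 34', 'output': ...}) is what executor.invoke returns.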

    # ChatGLM: the prompt is now the original prompt plus the tool-call information from the previous step


    'System: Respond to the human as helpfully and accurately as possible. You have access to the following tools:\n\nCalculator: Useful for when you need to calculate math problems, args: {\'calculation\': {\'description\': \'calculation to perform\', \'title\': \'Calculation\', \'type\': \'string\'}}\n\nUse a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or Calculator\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{\n  "action": $TOOL_NAME,\n  "action_input": $INPUT\n}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{\n  "action": "Final Answer",\n  "action_input": "Final response to human"\n}\n\nBegin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation\nHuman: 34 * 34\n\n\n

    Action: \n```\n{"action": "Calculator", "action_input": {"calculation": "34*34"}}\n```\nObservation: 1156\nThought: \n 
    ==This part is new: it appends the execution result (Observation) of the previous Action's tool==

    (reminder to respond in a JSON blob no matter what)'

    # ChatGLM: parse the Observation from the new prompt


    The Observation value is 1156.
    A new message is appended to self.history:
    {'role': 'observation', 'content': '1156'}
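
    A sketch of that parsing step (hypothetical helper; it only needs the text between the last "Observation:" and the "Thought:" that follows it):

    ```
    import re

    def extract_observation(prompt: str):
        """Return the value of the last Observation in the prompt, or None."""
        matches = re.findall(r"Observation:\s*(.*?)\s*Thought:", prompt, re.S)
        return matches[-1].strip() if matches else None

    # When present, it becomes its own turn in the conversation:
    # self.history.append({"role": "observation", "content": extract_observation(prompt)})
    ```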

    # ChatGLM: call chat again to generate


    Arguments: this time the query is empty and history carries the entire conversation so far.
    The result appends the following two messages:
    {'role': 'user', 'content': ''}
    {'role': 'assistant', 'metadata': '', 'content': '{\n    " calculation": "34*34",\n    " result": 1156\n}'}

    # ChatGLM: when parsing for a tool call, the metadata of the last message in self.history is empty, meaning no further tool is needed; the Final Answer can be assembled, and _call() returns the following


    response = '\nAction: \n```\n{"action": "Final Answer", "action_input": "{\\n    \\" calculation\\": \\"34*34\\",\\n    \\" result\\": 1156\\n}"}\n```'
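
    The Final Answer branch mirrors the tool branch: when the last assistant message has empty `metadata`, its content is wrapped as the "Final Answer" action (a sketch, with the same caveats as above):

    ```
    import json

    def to_final_answer_blob(message: dict) -> str:
        """Wrap a plain assistant reply as the structured-chat 'Final Answer' action."""
        blob = json.dumps({"action": "Final Answer", "action_input": message["content"]})
        fence = "`" * 3
        return f"\nAction: \n{fence}\n{blob}\n{fence}"
    ```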

    # ChatGLM: _call() appends a new pair to its History argument


    0 = the new prompt (the one shown above, with the Observation appended)
    1 = response (the Final Answer blob above)

    # LangChain: the Final Answer is received, the run ends, and the final output is


    {'input': '34 * 34', 'output': '{\n    " calculation": "34*34",\n    " result": 1156\n}'}
