Document Content Extraction: LLMChainExtractor

One challenge in retrieval is that, at ingestion time, you usually don't know which specific queries your document store will face. This means the information most relevant to a query may be buried in a document full of irrelevant text.

Passing entire documents through your application can mean more expensive LLM calls and worse responses. Contextual compression is meant to fix this. The idea is simple: instead of returning retrieved documents as-is, you compress them using the context of the given query, so that only the relevant information comes back. "Compression" here refers both to compressing the contents of individual documents and to filtering out whole documents.

To use the Contextual Compression Retriever, you need:
- a base retriever
- a document compressor

The Contextual Compression Retriever passes the query to the base retriever, takes the initial documents, and runs them through the document compressor. The document compressor takes a list of documents and shortens it, either by reducing the contents of documents or by dropping documents entirely.
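To make the compressor's contract concrete, here is a toy, non-LLM stand-in (a keyword heuristic of my own invention, not LangChain's API); LLMChainExtractor replaces the heuristic with an LLM call:

from langchain_core.documents import Document

def naive_compress(query: str, docs: list[Document]) -> list[Document]:
    # Keep only sentences sharing a word with the query; drop documents
    # where nothing relevant remains -- the two senses of "compression".
    query_words = set(query.lower().split())
    kept = []
    for doc in docs:
        relevant = [s for s in doc.page_content.split(".")
                    if query_words & set(s.lower().split())]
        if relevant:
            kept.append(Document(page_content=".".join(relevant), metadata=doc.metadata))
    return kept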

Core idea: extract the key information from the recalled documents.


The LangChain solution

retriever: the retriever instantiated from the vector store

compressor: essentially a chain; you pass in a language model (e.g., Qianfan)

# LLMChainExtractor
compressor = LLMChainExtractor.from_llm(qianfan_chat)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)
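
Querying the wrapped retriever then returns the compressed documents (get_relevant_documents is the pre-LCEL retriever API used throughout this post):

compressed_docs = compression_retriever.get_relevant_documents(question)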

Key prompt

"""Given the following question and context, extract any part of the context *AS IS* that is relevant to answer the question. If none of the context is relevant return NO_OUTPUT. \n\nRemember, *DO NOT* edit the extracted parts of the context.\n\n> Question: {question}\n> Context:\n>>>\n{context}\n>>>\nExtracted relevant parts:"""

# Chinese rendering of the same prompt (used as-is with the Chinese models later):
"""在给定的问题和上下文中,原样(AS IS)提取与回答问题相关的任何部分。如果上下文中没有任何相关部分,请返回NO_OUTPUT。请记住,不要编辑提取的上下文部分。
问题:{question} 上下文:
{context}
提取的相关部分:"""
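
The extractor also accepts a custom prompt, so the Chinese version above can drive it directly; a minimal sketch, assuming the classic langchain module layout (NoOutputParser is the parser the default prompt ships with, mapping NO_OUTPUT to an empty string):

from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.retrievers.document_compressors.chain_extract import NoOutputParser
from langchain_core.prompts import PromptTemplate

zh_template = (
    "在给定的问题和上下文中,提取与回答问题相关的任何部分,原样输出。\n"
    "如果上下文中没有任何相关部分,请返回NO_OUTPUT。请记住,不要编辑提取的上下文部分。\n"
    "问题:{question}\n上下文:\n{context}\n提取的相关部分:"
)
zh_prompt = PromptTemplate(
    template=zh_template,
    input_variables=["question", "context"],
    output_parser=NoOutputParser(),  # maps "NO_OUTPUT" to an empty string
)
compressor = LLMChainExtractor.from_llm(qianfan_chat, prompt=zh_prompt)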

Call flow:

[1] The retriever searches for documents. [2] The model extracts content from each document (success). [3] 4 documents mean 4 extraction calls.

(screenshot: trace of a successful extraction run)

When extraction fails

(screenshot: an extraction attempt that failed)


Implementing the extraction chain with a local GLM3-6B-128K model

Start your GLM3-6B server first (see Day 8 for details).

# Local GLM3-6B model
LOCAL_GLM3_6B_ENDPOINT = "http://127.0.0.1:8000/v1/chat/completions"
local_glm3_chat = ChatGLM3(
    endpoint_url=LOCAL_GLM3_6B_ENDPOINT,
    max_tokens=(1024 * 32),
    prefix_messages=messages,
    top_p=0.9,
    temperature=0,
    stream=True,
)

# Fetch and parse the URL
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
docs = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)
docs = text_splitter.split_documents(docs)
# Pick an arbitrary document
doc = docs[1]

# Chinese extraction prompt, kept as-is for the Chinese model
template = """
在给定的问题和上下文中,提取与回答问题相关的任何部分。
如果上下文中没有任何相关部分,请返回NO_OUTPUT。请记住,不要编辑提取的上下文部分。
问题:{question} 上下文:
{context}
提取的相关部分:
"""

prompt = ChatPromptTemplate.from_template(template)

output_parser = StrOutputParser()

chain = prompt | local_glm3_chat | output_parser

res = chain.invoke({"question": question, "context": doc.page_content})
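
To run this local chain over every retrieved document, mirroring what LLMChainExtractor does per document, one option is a RunnableLambda wrapper, the same pattern used for filtering later in this post; a sketch, assuming the retriever and question defined in the full script below:

from langchain_core.documents import Document
from langchain_core.runnables import RunnableLambda

def glm3_extract(docs):
    kept = []
    for d in docs:
        extracted = chain.invoke({"question": question, "context": d.page_content})
        # Follow the prompt's convention: NO_OUTPUT means nothing relevant
        if "NO_OUTPUT" not in extracted:
            kept.append(Document(page_content=extracted, metadata=d.metadata))
    return kept

extract_step = retriever | RunnableLambda(glm3_extract)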

(screenshot: extraction result from the local GLM3 model)

Implementing the extraction logic with a third-party API

ZhipuAI (智谱清言)

# Pick an arbitrary document
context = docs[1].page_content

# ZhipuAI
ZHIPUAI_API_KEY = os.getenv('MY_ZHIPUAI_API_KEY')
client = ZhipuAI(api_key=ZHIPUAI_API_KEY)  # fill in your own API key

question = 'Memory can be defined as what ?'
user_msg = f"""
在给定的问题和上下文中,提取与回答问题相关的任何部分。
如果上下文中没有任何相关部分,请返回NO_OUTPUT。请记住,不要编辑提取的上下文部分。
问题:{question} 上下文:
{context}
提取的相关部分:
"""

response = client.chat.completions.create(
    model="glm-4",  # model name to call
    messages=[
        {"role": "system", "content": "你是一个文稿编辑,负责从文本中摘取内容"},
        {"role": "user", "content": user_msg}
    ],
)
print(response.choices[0].message.content)
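
The raw API hands back a plain string, so it's worth normalizing the NO_OUTPUT sentinel yourself before using the result; a small sketch:

extracted = response.choices[0].message.content
if "NO_OUTPUT" in extracted:
    extracted = ""  # the model found nothing relevant in this chunk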

Content Filtering: LLMChainFilter

LLMChainFilter is a slightly simpler but more robust compressor. It uses an LLM chain to decide which of the initially retrieved documents to filter out and which to return, without manipulating document contents.


Key prompt

template="Given the following question and context, return YES if the context is relevant to the question and NO if it isn't.\n\n> Question: {question}\n> Context:\n>>>\n{context}\n>>>\n> Relevant (YES / NO):"), 

Usage

# LLMChainFilter
doc_filter = LLMChainFilter.from_llm(qianfan_chat)
filter_retriever = ContextualCompressionRetriever(
    base_compressor=doc_filter, base_retriever=retriever
)
filtered_docs = filter_retriever.get_relevant_documents(question)
pass

[1] The retriever searches and returns documents

[2] The model judges whether each document is relevant to the question

[3] Each document gets one pass


ZhipuAI implementation

# Load the documents
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
docs = loader.load()

# ZhipuAI supports a large input window (128K tokens); split the large file coarsely
text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)
docs = text_splitter.split_documents(docs)

# Initialize ZhipuAI
ZHIPUAI_API_KEY = os.getenv('MY_ZHIPUAI_API_KEY')
client = ZhipuAI(api_key=ZHIPUAI_API_KEY)  # fill in your own API key

question = 'Memory can be defined as what ?'
context = docs[0].page_content
user_msg = f"""
根据给出的问题和上下文,如果上下文与问题相关,则返回YES,如果不相关,则返回NO。
> 问题: {question}
> 上下文:
>>>
{context}
>>>
> 相关(YES / NO):
"""

response = client.chat.completions.create(
    model="glm-4",  # model name to call
    messages=[
        {"role": "system", "content": "你是一个文稿编辑,负责判断用户问题是否和文档相关"},
        {"role": "user", "content": user_msg}
    ],
)
print(response.choices[0].message.content)

(screenshot: the model's YES/NO reply)

How do you add third-party-model filtering inside a chain?

retrieve → document list → third-party filter (extensible) → model

# Document filtering implemented with a third-party model, applied inside a chain
def local_filter(documents: Iterable[Document]) -> List[Document]:
    # Initialize ZhipuAI
    ZHIPUAI_API_KEY = os.getenv('MY_ZHIPUAI_API_KEY')
    client = ZhipuAI(api_key=ZHIPUAI_API_KEY)  # fill in your own API key

    new_docs = []
    for doc in documents:
        # `question` is defined outside this function and set before each run
        context = doc.page_content
        user_msg = f"""
根据给出的问题和上下文,如果上下文与问题相关,则返回DOC_YES,如果不相关,则返回DOC_NO。
> 问题: {question}
> 上下文:
>>>
{context}
>>>
> 相关(DOC_YES / DOC_NO):
"""
        response = client.chat.completions.create(
            model="glm-4",  # model name to call
            messages=[
                {"role": "system", "content": "你是一个文稿编辑,负责判断用户问题是否和文档相关"},
                {"role": "user", "content": user_msg}
            ],
        )
        # DOC_YES/DOC_NO markers avoid matching a bare YES elsewhere in the reply
        if 'DOC_YES' in response.choices[0].message.content:
            new_docs.append(doc)
    return new_docs

# Prompt
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

chain = (
    {"context": retriever | RunnableLambda(local_filter), "question": RunnablePassthrough()}
    | prompt
    | qianfan_chat
    | StrOutputParser()
)
res = chain.invoke(question)
pass

Code

Extraction

from uuid import uuid4
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor, LLMChainFilter, EmbeddingsFilter, \
    DocumentCompressorPipeline
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.chat_models import AzureChatOpenAI
from langchain_community.chat_models.baidu_qianfan_endpoint import QianfanChatEndpoint
from langchain_community.document_loaders.web_base import WebBaseLoader
from langchain_community.document_transformers import EmbeddingsRedundantFilter, LongContextReorder
from langchain_community.llms.chatglm3 import ChatGLM3
from langchain_community.vectorstores.elasticsearch import ElasticsearchStore
import os
from langchain_community.embeddings import QianfanEmbeddingsEndpoint, HuggingFaceEmbeddings
from langchain_core.messages import SystemMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_text_splitters import RecursiveCharacterTextSplitter
from zhipuai import ZhipuAI

# LangSmith config; comment out if unused
unique_id = uuid4().hex[0:8]
os.environ["LANGCHAIN_PROJECT"] = f"[Contextual Compression] Extraction qianfan Tracing Walkthrough - {unique_id}"
# os.environ["LANGCHAIN_TRACING_V2"] = 'true'
os.environ["LANGCHAIN_API_KEY"] = os.getenv('MY_LANGCHAIN_API_KEY')

# Local BGE embedding model
bge_en_v1p5_model_path = "D:\\LLM\\Bge_models\\bge-base-en-v1.5"

# Use the GPU
embeddings_model = HuggingFaceEmbeddings(
    model_name=bge_en_v1p5_model_path,
    model_kwargs={'device': 'cuda:0'},
    encode_kwargs={'batch_size': 32, 'normalize_embeddings': True, }
)

# Vector store
vectorstore = ElasticsearchStore(
    es_url=os.environ['ELASTIC_HOST_HTTP'],
    index_name="index_sd_1024_vectors",
    embedding=embeddings_model,
    es_user="elastic",
    vector_query_field='question_vectors',
    es_password=os.environ['ELASTIC_ACCESS_PASSWORD']
)

retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# Azure OpenAI
os.environ["AZURE_OPENAI_API_KEY"] = os.getenv('MY_AZURE_OPENAI_API_KEY')
os.environ["AZURE_OPENAI_ENDPOINT"] = os.getenv('MY_AZURE_OPENAI_ENDPOINT')
DEPLOYMENT_NAME_GPT3P5 = os.getenv('MY_DEPLOYMENT_NAME_GPT3P5')
azure_chat = AzureChatOpenAI(
    openai_api_version="2023-05-15",
    azure_deployment=DEPLOYMENT_NAME_GPT3P5,
    temperature=0
)

os.environ["QIANFAN_ACCESS_KEY"] = os.getenv('MY_QIANFAN_ACCESS_KEY')
os.environ["QIANFAN_SECRET_KEY"] = os.getenv('MY_QIANFAN_SECRET_KEY')

# Qianfan chat model
qianfan_chat = QianfanChatEndpoint(
    model="ERNIE-Bot-4"
)

messages = [
    SystemMessage(content="You are an intelligent AI assistant, named ChatGLM3."),
]
# Local GLM3-6B model
LOCAL_GLM3_6B_ENDPOINT = "http://127.0.0.1:8000/v1/chat/completions"
local_glm3_chat = ChatGLM3(
    endpoint_url=LOCAL_GLM3_6B_ENDPOINT,
    max_tokens=(1024 * 32),
    prefix_messages=messages,
    top_p=0.9,
    temperature=0,
    stream=True,
)

if __name__ == '__main__':

    question = "What is Task Decomposition ?"

    # Fetch and parse the URL
    loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
    docs = loader.load()

    text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
    docs = text_splitter.split_documents(docs)

    # Index the data; once is enough, comment out afterwards
    # vectorstore.add_documents(docs)

    # LLMChainExtractor
    compressor = LLMChainExtractor.from_llm(qianfan_chat)

    # Compress a batch of documents directly
    compressed_docs = compressor.compress_documents(docs[:1], question)
    pass

    # Or wrap it into a retriever that returns compressed results
    compression_retriever = ContextualCompressionRetriever(
        base_compressor=compressor, base_retriever=retriever
    )

    compressed_docs = compression_retriever.get_relevant_documents(question)

    # Extraction chain: local GLM3-6B 128K

    # Fetch and parse the URL
    loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
    docs = loader.load()

    text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)
    docs = text_splitter.split_documents(docs)
    # Pick an arbitrary document
    doc = docs[1]

    template = """
在给定的问题和上下文中,提取与回答问题相关的任何部分。
如果上下文中没有任何相关部分,请返回NO_OUTPUT。请记住,不要编辑提取的上下文部分。
问题:{question} 上下文:
{context}
提取的相关部分:
"""

    prompt = ChatPromptTemplate.from_template(template)

    output_parser = StrOutputParser()

    chain = prompt | local_glm3_chat | output_parser

    res = chain.invoke({"question": question, "context": doc.page_content})
    pass

    # Implementation with a third-party API

    # Fetch and parse the URL
    loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
    docs = loader.load()

    text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)
    docs = text_splitter.split_documents(docs)
    # Pick an arbitrary document
    context = docs[1].page_content

    # ZhipuAI
    ZHIPUAI_API_KEY = os.getenv('MY_ZHIPUAI_API_KEY')
    client = ZhipuAI(api_key=ZHIPUAI_API_KEY)  # fill in your own API key

    question = 'Memory can be defined as what ?'
    user_msg = f"""
在给定的问题和上下文中,提取与回答问题相关的任何部分。
如果上下文中没有任何相关部分,请返回NO_OUTPUT。请记住,不要编辑提取的上下文部分。
问题:{question} 上下文:
{context}
提取的相关部分:
"""

    response = client.chat.completions.create(
        model="glm-4",  # model name to call
        messages=[
            {"role": "system", "content": "你是一个文稿编辑,负责从文本中摘取内容"},
            {"role": "user", "content": user_msg}
        ],
    )
    print(response.choices[0].message.content)

Filtering

from uuid import uuid4
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor, LLMChainFilter, EmbeddingsFilter, \
    DocumentCompressorPipeline
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.chat_models import AzureChatOpenAI
from langchain_community.chat_models.baidu_qianfan_endpoint import QianfanChatEndpoint
from langchain_community.document_loaders.web_base import WebBaseLoader
from langchain_community.document_transformers import EmbeddingsRedundantFilter, LongContextReorder
from langchain_community.vectorstores.elasticsearch import ElasticsearchStore
import os
from langchain_community.embeddings import QianfanEmbeddingsEndpoint, HuggingFaceEmbeddings
from langchain_core.documents import Document
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough, RunnableLambda
from langchain_text_splitters import RecursiveCharacterTextSplitter
from zhipuai import ZhipuAI
from typing import (
    AbstractSet,
    Any,
    Callable,
    Collection,
    Iterable,
    List,
    Literal,
    Optional,
    Sequence,
    Type,
    TypeVar,
    Union,
)

# LangSmith config; comment out if unused
unique_id = uuid4().hex[0:8]
os.environ["LANGCHAIN_PROJECT"] = f"[Contextual Compression] Filtering qianfan Tracing Walkthrough - {unique_id}"
# os.environ["LANGCHAIN_TRACING_V2"] = 'true'
os.environ["LANGCHAIN_API_KEY"] = os.getenv('MY_LANGCHAIN_API_KEY')

# Local BGE embedding model
bge_en_v1p5_model_path = "D:\\LLM\\Bge_models\\bge-base-en-v1.5"

# Use the GPU
embeddings_model = HuggingFaceEmbeddings(
    model_name=bge_en_v1p5_model_path,
    model_kwargs={'device': 'cuda:0'},
    encode_kwargs={'batch_size': 32, 'normalize_embeddings': True, }
)

# Vector store
vectorstore = ElasticsearchStore(
    es_url=os.environ['ELASTIC_HOST_HTTP'],
    index_name="index_sd_1024_vectors",
    embedding=embeddings_model,
    es_user="elastic",
    vector_query_field='question_vectors',
    es_password=os.environ['ELASTIC_ACCESS_PASSWORD']
)

retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# Azure OpenAI
os.environ["AZURE_OPENAI_API_KEY"] = os.getenv('MY_AZURE_OPENAI_API_KEY')
os.environ["AZURE_OPENAI_ENDPOINT"] = os.getenv('MY_AZURE_OPENAI_ENDPOINT')
DEPLOYMENT_NAME_GPT3P5 = os.getenv('MY_DEPLOYMENT_NAME_GPT3P5')
azure_chat = AzureChatOpenAI(
    openai_api_version="2023-05-15",
    azure_deployment=DEPLOYMENT_NAME_GPT3P5,
    temperature=0
)

os.environ["QIANFAN_ACCESS_KEY"] = os.getenv('MY_QIANFAN_ACCESS_KEY')
os.environ["QIANFAN_SECRET_KEY"] = os.getenv('MY_QIANFAN_SECRET_KEY')

# Qianfan chat model
qianfan_chat = QianfanChatEndpoint(
    model="ERNIE-Bot-4"
)

if __name__ == '__main__':

    question = "What is Task Decomposition ?"

    # Fetch and parse the URL
    loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
    docs = loader.load()

    text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
    docs = text_splitter.split_documents(docs)

    # Index the data
    vectorstore.add_documents(docs)

    # LLMChainFilter
    doc_filter = LLMChainFilter.from_llm(qianfan_chat)
    filter_retriever = ContextualCompressionRetriever(
        base_compressor=doc_filter, base_retriever=retriever
    )
    filtered_docs = filter_retriever.get_relevant_documents(question)
    pass

    # Third-party implementation
    loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
    docs = loader.load()

    # ZhipuAI supports a large input window (128K tokens); split the large file coarsely
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)
    docs = text_splitter.split_documents(docs)

    # Initialize ZhipuAI
    ZHIPUAI_API_KEY = os.getenv('MY_ZHIPUAI_API_KEY')
    client = ZhipuAI(api_key=ZHIPUAI_API_KEY)  # fill in your own API key

    question = 'Memory can be defined as what ?'
    context = docs[0].page_content
    user_msg = f"""
根据给出的问题和上下文,如果上下文与问题相关,则返回YES,如果不相关,则返回NO。
> 问题: {question}
> 上下文:
>>>
{context}
>>>
> 相关(YES / NO):
"""

    response = client.chat.completions.create(
        model="glm-4",  # model name to call
        messages=[
            {"role": "system", "content": "你是一个文稿编辑,负责判断用户问题是否和文档相关"},
            {"role": "user", "content": user_msg}
        ],
    )
    print(response.choices[0].message.content)

    # Document filtering implemented with a third-party model, applied inside a chain
    def local_filter(documents: Iterable[Document]) -> List[Document]:
        # Initialize ZhipuAI
        ZHIPUAI_API_KEY = os.getenv('MY_ZHIPUAI_API_KEY')
        client = ZhipuAI(api_key=ZHIPUAI_API_KEY)  # fill in your own API key

        new_docs = []
        for doc in documents:
            # `question` is defined outside this function and set before each run
            context = doc.page_content
            user_msg = f"""
根据给出的问题和上下文,如果上下文与问题相关,则返回DOC_YES,如果不相关,则返回DOC_NO。
> 问题: {question}
> 上下文:
>>>
{context}
>>>
> 相关(DOC_YES / DOC_NO):
"""
            response = client.chat.completions.create(
                model="glm-4",  # model name to call
                messages=[
                    {"role": "system", "content": "你是一个文稿编辑,负责判断用户问题是否和文档相关"},
                    {"role": "user", "content": user_msg}
                ],
            )
            # DOC_YES/DOC_NO markers avoid matching a bare YES elsewhere in the reply
            if 'DOC_YES' in response.choices[0].message.content:
                new_docs.append(doc)
        return new_docs

    # Prompt
    template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
    prompt = ChatPromptTemplate.from_template(template)

    chain = (
        {"context": retriever | RunnableLambda(local_filter), "question": RunnablePassthrough()}
        | prompt
        | qianfan_chat
        | StrOutputParser()
    )
    res = chain.invoke(question)
    pass