This is the LangChain Bot version with document access for Context Augmentation. It
includes:
• dialogue memory
• dialogue logging to file
• document access in the Prompt
CAUTION: large PDF documents tend to slow down the ChatBot and cost more tokens. A
reasonable size is up to 5 pages.
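To see why large PDFs cost more, note that OpenAI-style tokenizers average roughly 4 characters per token for English text. A quick back-of-the-envelope sketch (the 4-chars-per-token ratio and the `estimate_tokens` helper are my own assumptions, not part of the notebook):

```python
# Rough token-cost estimate for a document before injecting it into the prompt.
# Assumption: ~4 characters per token, a common rule of thumb for English text.
def estimate_tokens(text: str, chars_per_token: int = 4) -> int:
    return max(1, len(text) // chars_per_token)

# Suppose a page of plain PDF text holds roughly 3000 characters.
page_text = "x" * 3000
doc_text = page_text * 5  # a 5-page document
print(estimate_tokens(doc_text))  # → 3750
```

At ~3750 tokens for five pages, the document alone already consumes a large share of a typical context window, which is why the notebook recommends staying under that size.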
1 of 5 4/28/24, 16:23
LangChainWithFiles.ipynb - Colab https://colab.research.google.com/drive/1e2RzvrZ-1S...
import os
import openai
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import pdfplumber
from langchain.prompts import PromptTemplate

# This is the LangChain template structure that allows you to use a third variable
# to include a document in your Prompt.
#
prompt = PromptTemplate(
    template="""
This is your Prompt.
You will describe various aspects of the Bot's 'personality',
of its task, and how to control the flow of the dialogue.
You will refer to the contents of the document by pointing at the contents
of this variable:
{document}
Current conversation:
{history}
User: {human_input}
Chatbot:""",
    input_variables=["document", "history", "human_input"]
)
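To see exactly what text the model receives, the three-variable substitution above can be mimicked with plain `str.format` (this standalone sketch uses made-up values for the variables; it is not the notebook's own template):

```python
# Plain-Python sketch of how the three template variables are substituted.
template = (
    "You will refer to the contents of this variable:\n"
    "{document}\n"
    "Current conversation:\n"
    "{history}\n"
    "User: {human_input}\n"
    "Chatbot:"
)

filled = template.format(
    document="Policy: refunds within 30 days.",  # made-up document text
    history="User: hi\nChatbot: hello",          # made-up prior exchange
    human_input="What is the refund window?",
)
print(filled)
```

`PromptTemplate.format(...)` performs the same substitution, which is why `input_variables` must list exactly the placeholder names that appear in the template string.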
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(openai_api_key=os.environ["OPENAI_API_KEY"])  # read the key from the environment; never hard-code it
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=3)  # window size k=3 is an assumption; the original value was cut off
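ConversationBufferWindowMemory keeps only the last k exchanges rather than the whole dialogue, which bounds token usage. The idea can be sketched in plain Python with a `deque` (the class name `WindowMemory` is my own, not LangChain's):

```python
from collections import deque

# Minimal sketch of a windowed conversation memory: keep only the last k exchanges.
class WindowMemory:
    def __init__(self, k: int = 3):
        self.turns = deque(maxlen=k)  # each item is one (user, bot) exchange

    def save(self, user: str, bot: str) -> None:
        self.turns.append((user, bot))  # oldest exchange is dropped automatically

    def history(self) -> str:
        return "\n".join(f"User: {u}\nChatbot: {b}" for u, b in self.turns)

mem = WindowMemory(k=2)
for i in range(4):
    mem.save(f"msg {i}", f"reply {i}")
print(mem.history())  # only the last 2 exchanges survive
```

The trade-off is that the bot forgets anything said more than k turns ago, so k should be tuned to how much context the dialogue actually needs.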
from langchain.chains import LLMChain

chat_llm_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    memory=memory,
    verbose=False)
import datetime

path = "/content/"
# The cell defining uniq_filename was not captured; this timestamped name is an assumption.
uniq_filename = "dialogue_" + datetime.datetime.now().strftime("%Y%m%d_%H%M%S") + ".txt"
Dfile = open(os.path.join(path, uniq_filename), "a")  # the 'a' means you are appending to the file
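With the log file open in append mode, each dialogue turn can be written as one timestamped line. A minimal sketch (the `log_turn` helper and the `[timestamp] Speaker: text` format are assumptions; `io.StringIO` stands in for the real file handle so the sketch runs anywhere):

```python
import datetime
import io

# Sketch: write one timestamped line per dialogue turn to an open file-like object.
def log_turn(f, speaker: str, text: str) -> None:
    stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    f.write(f"[{stamp}] {speaker}: {text}\n")

buf = io.StringIO()  # stands in for the Dfile handle opened above
log_turn(buf, "User", "hlo")
log_turn(buf, "Chatbot", "Hello!")
print(buf.getvalue())
```

Appending (mode `'a'`) rather than writing (mode `'w'`) matters here: re-running the cell adds to the existing transcript instead of truncating it.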
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# The cells loading the model and tokenizer were not captured; gpt2 is assumed here.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Conversation loop
while True:
    # Get user input
    user_input = input("User: ")

    # Generate response
    input_ids = tokenizer.encode(user_input, return_tensors="pt")
    with torch.no_grad():
        # Passing pad_token_id silences the "pad token id not set" warning seen below.
        output_ids = model.generate(input_ids, max_length=100, num_return_sequences=1,
                                    pad_token_id=tokenizer.eos_token_id)
    print("ChatGPT:", tokenizer.decode(output_ids[0], skip_special_tokens=True))
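As written, `while True` never terminates; a common pattern is an explicit exit word. This sketch stubs out the model with a canned `respond` function and feeds scripted inputs instead of `input()`, so it runs without `transformers` (both the stub and the "quit" convention are assumptions, not the notebook's code):

```python
# Minimal chat-loop skeleton with an exit condition; the model is stubbed out.
def respond(text: str) -> str:
    return f"You said: {text}"  # stand-in for model.generate + tokenizer.decode

def chat(inputs):
    transcript = []
    for user_input in inputs:  # stands in for repeated input("User: ") calls
        if user_input.strip().lower() == "quit":
            break
        transcript.append(("User", user_input))
        transcript.append(("Chatbot", respond(user_input)))
    return transcript

log = chat(["hlo", "how are you", "quit", "ignored"])
print(log)
```

Anything after the exit word is never processed, and the collected transcript can be appended to the dialogue log file opened earlier.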
User: hlo
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
ChatGPT: Hello! I'm ChatGPT, your friendly chatbot companion. How can I assist you today?
hlo
ChatGPT: Hello! I'm ChatGPT, your friendly chatbot companion. How can I assist you today?
hlo
ChatGPT: Hello! I'm ChatGPT, your friendly chatbot companion. How can I assist you today?
hlo