
Document summarization and question answering on documents with LLM + LangChain

It is known that large language model (LLM) based chat can answer many queries with satisfactory accuracy. But when asked about facts, in particular recently reported facts that are not covered in the training data, the answer is wrong with high probability. One method is to exploit retrieval-augmented generation (RAG) and provide the latest documents to the LLM as context. Combining the context documents with the query produces satisfactory results. The following figure shows the overall flow of RAG-based question answering.

  • PDF to text:
    • PyPDF is used to extract text from the PDF. I use it rather than the PDF reader in LangChain because I want more control over text post-processing (a minimal extraction sketch is given after this list).
  • Web document:
    • Use WebBaseLoader in LangChain to download and extract text from the URL; see the loader sketch below. (In the future, this may change to my own crawler if many webpages are downloaded.)
  • Document chunking and indexing:
    • Chunking is done by first extracting paragraphs and then grouping paragraphs until the specified chunk size is reached (see the chunking sketch below).
    • Index engine: LangChain provides many index engines, from commercial to open-source. At first, vector databases such as FAISS and Chroma were exploited, in which vector embeddings of the documents are extracted using an LLM (here Google Gemma is used). Unfortunately, after a few tries, the recall accuracy was very bad. I then changed to traditional indexing such as BM25 or TF-IDF (see the BM25 sketch below). At least when the query words exist in the document, the accuracy looks good (just OK; semantic vector matching will be investigated in the future to address the semantic gap between the query question and the indexed documents).
  • Prompt template for the question & answer task:
from langchain.prompts import PromptTemplate

qa_prompt_template_cfg = """Answer the question as precisely as possible using the provided context. If the answer is
not contained in the context, say "answer not available in context" \n\n
Context: \n {context} \n
Question: \n {question} \n
Answer:
"""
qa_prompt_template = PromptTemplate(
    template=qa_prompt_template_cfg,
    input_variables=["context", "question"],
)
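
As a minimal sketch of the PDF-to-text step, assuming the pypdf package; the function name and the post-processing are illustrative, not the project's actual code:

from pypdf import PdfReader

def pdf_to_text(pdf_path):
    # Extract raw text page by page with pypdf, keeping full control
    # over post-processing (e.g. joining hyphenated line breaks).
    reader = PdfReader(pdf_path)
    pages = [page.extract_text() or "" for page in reader.pages]
    return "\n".join(pages)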
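
A sketch of the web-document path, using WebBaseLoader from langchain_community (the URL is a placeholder):

from langchain_community.document_loaders import WebBaseLoader

# Download the page and extract its text into LangChain Documents.
loader = WebBaseLoader("https://example.com/article.html")
web_docs = loader.load()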
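
A sketch of the paragraph-grouping chunker described above; the character-based chunk_size and the blank-line paragraph split are assumptions:

def chunk_by_paragraph(text, chunk_size=1000):
    # Split into paragraphs, then greedily group consecutive
    # paragraphs until the specified chunk size is reached.
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for p in paragraphs:
        if current and len(current) + len(p) > chunk_size:
            chunks.append(current)
            current = p
        else:
            current = f"{current}\n\n{p}" if current else p
    if current:
        chunks.append(current)
    return chunks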
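
And a sketch of the BM25 indexing path; BM25Retriever lives in langchain_community and needs the rank_bm25 package, and the file path, query, and k here are placeholders:

from langchain_community.retrievers import BM25Retriever

# Index the chunks produced by the sketches above.
chunks = chunk_by_paragraph(pdf_to_text("report.pdf"))
retriever = BM25Retriever.from_texts(chunks)
retriever.k = 4  # number of chunks recalled per query
context_docs = retriever.invoke("What was the total revenue?")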
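
Putting the pieces together, a hedged end-to-end sketch of the QA call; serving Gemma through Ollama is purely an assumption here, and any LangChain-compatible LLM would work:

from langchain_community.llms import Ollama

llm = Ollama(model="gemma")  # assumption: Gemma served locally via Ollama
context = "\n\n".join(doc.page_content for doc in context_docs)
answer = llm.invoke(
    qa_prompt_template.format(context=context, question="What was the total revenue?")
)
print(answer)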

Currently, processing a large PDF, e.g. a 30-page financial report, is very slow on my 4070, and the document length is much longer than the context token limit (8192). Testing on the financial report looks good as long as the query words do not run into the semantic-gap issue. The quality of the RAG-retrieved documents strongly affects answer quality; for RAG-based chat, building a high-recall retrieval system is critical.

Human-like communication with an LLM chat agent

Imagine talking naturally with your ChatGPT rather than manually typing a prompt and reading the text response: you speak your information request, and the agent speaks the response back. The solution is to integrate multiple AI modules, frontend and backend, together, plus solve the streaming issue for a smooth user experience. The overall processing flow from user input to system response is shown in the following:

For each module in the flow above, there are many open-source tools available to exploit. You can watch prompt-to-talking-avatar on my YouTube channel to see what the system looks like (currently the prompt input is via keyboard; speech recognition is not yet integrated). A stub sketch of one conversational turn is given below.
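
As a rough illustration only, every function below is a hypothetical placeholder for a real open-source module (LLM backend, TTS, avatar renderer), not actual project code:

def llm_generate(text):
    # Placeholder for the LLM chat backend.
    return "reply to: " + text

def text_to_speech(text):
    # Placeholder for the TTS module; would return synthesized audio.
    return b"wav-bytes"

def render_avatar(audio):
    # Placeholder for the talking-avatar renderer; would return video.
    return b"mp4-bytes"

def chat_turn(user_text):
    # One turn: user input -> LLM -> TTS -> avatar video,
    # streamed to the frontend in the real system.
    reply = llm_generate(user_text)
    audio = text_to_speech(reply)
    video = render_avatar(audio)
    return reply, audio, video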

If you want to learn more, you are welcome to drop me an email.

Streamlit: how to autoplay audio and video, including setting the video size

I am currently working on the talking-avatar project and use Streamlit to build the demo. Streamlit's audio and video functions do not support autoplay or changing the video window size during playback. Searching turned up a solution for audio, i.e. encoding the audio into base64 and using a native HTML tag to control playback, but not one for video. Using the same method, encoding the video into base64 inside an HTML video tag, works well too. The following are the sample functions and the actual result:

import base64

def autoplay_audio(wav_file, is_auto=True):
    # Read the WAV file and embed it as base64 in a native HTML
    # <audio> tag, since st.audio does not support autoplay.
    with open(wav_file, "rb") as f:
        data = f.read()
    b64 = base64.b64encode(data).decode()
    # "autoplay" is a boolean HTML attribute: its presence enables it,
    # so it must be omitted entirely to disable autoplay
    # (autoplay="false" would still autoplay).
    autoplay_attr = "autoplay" if is_auto else ""
    md = f"""
    <audio controls {autoplay_attr}>
        <source src="data:audio/wav;base64,{b64}" type="audio/wav">
    </audio>
    """
    return md

def autoplay_video(video_file, is_auto=True):
    # Same base64-embedding trick with an HTML <video> tag; the
    # width/height attributes control the playback window size.
    with open(video_file, "rb") as f:
        data = f.read()
    b64 = base64.b64encode(data).decode()
    # Note: browsers may block unmuted autoplay; add "muted" if the
    # video fails to start automatically.
    autoplay_attr = "autoplay" if is_auto else ""
    md = f"""
    <video controls width="320" height="240" {autoplay_attr}>
        <source src="data:video/mp4;base64,{b64}" type="video/mp4">
    </video>
    """
    return md
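
A minimal usage sketch (the file names are placeholders): the returned HTML string is rendered with st.markdown and unsafe_allow_html=True so Streamlit outputs the raw tags.

import streamlit as st

st.markdown(autoplay_audio("reply.wav"), unsafe_allow_html=True)
st.markdown(autoplay_video("avatar.mp4"), unsafe_allow_html=True)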