Imagine talking naturally with ChatGPT rather than manually typing prompts and reading text responses. You speak your request, and the agent speaks its answer back. The solution is to integrate multiple AI modules, frontend and backend, and to solve the streaming problem for a smooth user experience. The overall processing flow from user input to system response is shown in the following:
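The flow above can be sketched as a minimal pipeline. This is only an illustrative sketch: the function names and placeholder implementations are assumptions, standing in for real components such as a speech-recognition model, an LLM API, and a TTS engine. The generator shows the streaming idea, letting playback start before the full reply is ready.

```python
# Minimal sketch of the voice-chat pipeline (all implementations are
# placeholders, not real model calls).

def speech_to_text(audio: bytes) -> str:
    # Placeholder ASR: a real system would run a speech-recognition
    # model (e.g. an open-source ASR tool) on the audio here.
    return "What is the weather today?"

def llm_respond(prompt: str) -> str:
    # Placeholder LLM call: a real system would query a chat model.
    return f"You asked: {prompt}"

def stream_tokens(text: str):
    # Yield the reply word by word to simulate token streaming, so
    # TTS and audio playback can begin before the full response exists.
    for word in text.split():
        yield word

def text_to_speech(text: str) -> bytes:
    # Placeholder TTS: a real system would synthesize audio here.
    return text.encode("utf-8")

def voice_chat_turn(audio_in: bytes) -> bytes:
    """One user turn: audio in -> transcript -> LLM reply -> audio out."""
    transcript = speech_to_text(audio_in)
    reply = " ".join(stream_tokens(llm_respond(transcript)))
    return text_to_speech(reply)
```

In a real deployment each stage would run asynchronously, with the streamed LLM tokens fed to the TTS engine in chunks rather than joined into one string.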

For each module in the flow above, there are many open-source tools available to exploit. You can watch prompt-to-talking-avatar on my YouTube channel to see what the system looks like (currently the prompt is entered via keyboard; speech recognition is not yet integrated).
If you want to learn more, feel free to drop me an email.