I often read posts shared on LinkedIn that contain amazing animated graphs, so I have been wondering: how can I draw them too? Previously, I shared a post, Draw.io, a beautiful tool to draw graph and flow. After googling, I found that Draw.io has a function that lets us set an arrow property to enable animation. Before introducing how to do it, I will first share how to set up Draw.io, focusing on Ubuntu (Windows should also be easy).
If you prefer to install a standalone app, you can visit the URL to download the Windows/Mac/Linux version and install it.
Draw.io plugin in Visual Studio Code
If you frequently use VS Code for programming and don't want to switch between VS Code and another Draw.io tool, you can install the plugin Draw.io Integration v1.6.6. Just search for draw.io; there are many plugins, and I selected the Draw.io extension developed by Henning Dieterich.
Issue:
The functions are similar to the above two, but after completing the animation setting, the exported SVG is still a static image without animation.
Thus, for animation, the web version or the app is recommended.
There is also a plugin for JupyterLab, but I find that not all functions are available. For example, I cannot find how to load and edit a drawio file, so I do not suggest it.
How to enable the animation of an edge?
There are a few differences between the Draw.io app and the web version, due to their different development versions.
In Draw.io App
Click the edge you want to animate
Then in the right menu, find Flow animation under Style. Select it, and you will see the edge flowing.
Note: saving the drawio file with the SVG extension is suggested
In Draw.io web version
Click the edge you want to animate
Then in the right menu, click to expand Properties under Style. Scroll down to find Flow animation and select it. You will see the edge flowing.
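Under the hood, both UIs toggle the same thing: the Flow animation checkbox adds flowAnimation=1 to the edge's style string in the saved XML. A minimal sketch of what the resulting edge element looks like (the ids, source/target, and the other style keys are illustrative, not from a real file):

<mxCell id="edge1" style="edgeStyle=orthogonalEdgeStyle;flowAnimation=1;" edge="1" parent="1" source="node1" target="node2">
  <mxGeometry relative="1" as="geometry" />
</mxCell>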
Although OpenAI's open-sourced multilingual Whisper model (https://github.com/openai/whisper) achieved state-of-the-art results on benchmark datasets, there are many scenarios where the pretrained models do not work well, for example, languages not covered by the pretrained model: Whisper-V3 supports 100 languages, so the model must be re-trained to support a new language. For minority languages, even when they are covered by the pretrained model, the accuracy is often worse, and more data must be collected to adapt the pretrained model in order to reduce the word error rate. In this post, the Hugging Face implementation of Whisper is used to fine-tune for Chinese (this is just for testing the training-code functionality rather than training a production model, as no computing resources are available; as soon as resources (computing and data) are ready, a SOTA model can be trained). You can refer to Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers. The following figure shows the logic flow of model training.
Get annotated training data, i.e. a set of speech-text pair samples. In the demo code, I get the data from Common Voice (Chinese)
Download the pretrained Whisper model
Transform the speech waveform into Mel-spectrogram features, which are then fed into the transformer encoder
For Whisper, speech must be sampled at 16 kHz; if not, re-sampling is needed
The number of Mel bands is 80, or 128 for Whisper large
The transformer encoder-decoder is trained to learn text-audio cross-attention and audio self-attention. The predicted next token, P(next-token | text, audio) (the probability is calculated in the decoder), is combined with the ground-truth text to calculate the cross-entropy loss. Then gradients are computed and model parameters are updated.
The training code and word-error-rate (WER) evaluation code are as follows:
""" Test codes for fine-tuning Whisper speech-to-text, i.e. speech recognition """
from datasets import load_dataset, DatasetDict, Audio from transformers import WhisperFeatureExtractor, WhisperTokenizer, WhisperProcessor, WhisperForConditionalGeneration import torch from dataclasses import dataclass from typing import Any, Dict, List, Union import evaluate from transformers import Seq2SeqTrainingArguments from transformers import Seq2SeqTrainer from evaluate import load
@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
    processor: Any

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # split inputs and labels since they have to be of different lengths
        # and need different padding methods
        # first treat the audio inputs by simply returning torch tensors
        input_features = [{"input_features": feature["input_features"]} for feature in features]
        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")

        # get the tokenized label sequences and pad them to max length
        label_features = [{"input_ids": feature["labels"]} for feature in features]
        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")

        # replace padding with -100 to ignore loss correctly
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)

        # if bos token is appended in previous tokenization step,
        # cut bos token here as it's appended later anyways
        if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
            labels = labels[:, 1:]

        batch["labels"] = labels
        return batch
metric = evaluate.load("wer")

def compute_metrics(pred):
    pred_ids = pred.predictions
    label_ids = pred.label_ids

    # replace -100 with the pad_token_id
    label_ids[label_ids == -100] = tokenizer.pad_token_id

    # we do not want to group tokens when computing the metrics
    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)

    wer = 100 * metric.compute(predictions=pred_str, references=label_str)

    return {"wer": wer}
def prepare_dataset(batch):
    # load and resample audio data from 48 to 16kHz
    audio = batch["audio"]

    # compute log-Mel input features from the input audio array
    batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]

    # encode target text to label ids
    batch["labels"] = tokenizer(batch["sentence"]).input_ids
    return batch
# Step-1: Define model structure & initialization, feature extractor, text tokenizer
model_base_default = "openai/whisper-small"
language = "zh"
save_dir = "whisper-small-zh-me"
max_steps = 500
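The snippet above stops at the configuration step; the remaining wiring follows the linked Hugging Face tutorial. A minimal sketch, assuming the definitions above (the Common Voice version and the hyperparameter values are illustrative choices, not necessarily the exact settings used in my test):

# Step-2: Load feature extractor, tokenizer, processor, and pretrained model
feature_extractor = WhisperFeatureExtractor.from_pretrained(model_base_default)
tokenizer = WhisperTokenizer.from_pretrained(model_base_default, language=language, task="transcribe")
processor = WhisperProcessor.from_pretrained(model_base_default, language=language, task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(model_base_default)

# Step-3: Load Common Voice (Chinese), resample to 16 kHz, and extract features
common_voice = DatasetDict()
common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "zh-CN", split="train")
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "zh-CN", split="test")
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))
common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"])

# Step-4: Train, evaluating WER on the test split
training_args = Seq2SeqTrainingArguments(
    output_dir=save_dir,
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    max_steps=max_steps,
    fp16=True,
    evaluation_strategy="steps",
    predict_with_generate=True,
)
trainer = Seq2SeqTrainer(
    args=training_args,
    model=model,
    train_dataset=common_voice["train"],
    eval_dataset=common_voice["test"],
    data_collator=DataCollatorSpeechSeq2SeqWithPadding(processor=processor),
    compute_metrics=compute_metrics,
    tokenizer=processor.feature_extractor,
)
trainer.train()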
It is known that large language model (LLM) based chat can answer many queries with satisfactory accuracy. But when asked about facts, in particular the latest reported facts that are not covered by the training data cutoff, the answer is wrong with high probability. One method is to exploit retrieval augmented generation (RAG) and provide the latest documents to the LLM as context. Combining the context documents with the query produces satisfactory results. The following figure shows the overall flow of RAG-based question answering.
PDF to text:
PyPDF is used to extract text from PDF. I use it rather than the PDF reader in LangChain because I want more control over text post-processing.
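A minimal sketch of this extraction step with the pypdf package (the file name is a placeholder); the post-processing after this is where the extra control matters:

from pypdf import PdfReader

reader = PdfReader("financial_report.pdf")  # placeholder file name
# extract_text() may return None for image-only pages, hence the "or ''"
text = "\n".join(page.extract_text() or "" for page in reader.pages)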
Web document:
Use WebBaseLoader in LangChain to download and extract text from a URL. (In the future, this may change to my own crawler if a lot of webpages are downloaded.)
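A minimal sketch with LangChain's WebBaseLoader (the URL is a placeholder):

from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://example.com/news-article")  # placeholder URL
web_docs = loader.load()  # a list of Documents with page_content and metadata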
Document chunking and indexing
Document chunking is done by first extracting paragraphs and then grouping paragraphs until the specified chunk size is reached (see the sketch after this list).
Index engine: LangChain provides many index engines, from commercial to open-source. First, vector databases such as FAISS and Chroma were tried, in which vector embeddings of the documents are extracted using an LLM (here, Google Gemma). Unfortunately, after a few tries, the recall accuracy was very bad. Then I changed to traditional indexing such as BM25 or TF-IDF; at least when the query words exist in the document, the accuracy looks acceptable (just OK; semantic vector matching will be investigated in the future to address the semantic gap between query questions and indexed documents). A minimal retrieval sketch also follows below.
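A sketch of the paragraph-grouping chunker described above; the function name and chunk size are my own illustrative choices, not the post's actual code:

from typing import List

def chunk_document(text: str, chunk_size: int = 1000) -> List[str]:
    """Group paragraphs until the specified chunk size is reached."""
    chunks: List[str] = []
    current = ""
    # paragraphs are assumed to be separated by blank lines
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > chunk_size:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks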
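And a minimal BM25 retrieval sketch with LangChain (the value of k is illustrative; BM25Retriever requires the rank_bm25 package):

from langchain_community.retrievers import BM25Retriever

chunks = chunk_document(text)  # chunks produced by the sketch above
retriever = BM25Retriever.from_texts(chunks)
retriever.k = 4  # number of chunks to recall per query
retrieved_docs = retriever.invoke("how many fans?")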
Prompt template for the question & answer task:
from langchain_core.prompts import PromptTemplate

qa_prompt_template_cfg = """Answer the question as precise as possible using the provided context. If the answer is not contained in the context, say "answer not available in context" \n\nContext: \n{context}?\nQuestion: \n{question} \nAnswer:"""

qa_prompt_template = PromptTemplate(
    template=qa_prompt_template_cfg,
    input_variables=["context", "question"],
)
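The retrieved chunks are then stuffed into this template and sent to the model; a sketch, where llm stands for whatever LangChain model wrapper is loaded (here assumed to wrap Gemma):

question = "how many fans?"
context = "\n\n".join(doc.page_content for doc in retrieved_docs)
prompt = qa_prompt_template.format(context=context, question=question)
answer = llm.invoke(prompt)  # llm is an assumed, already-loaded LangChain model wrapper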
First, it generates a summary of the news
Then ask a question: how many fans?
Currently, processing a large PDF, e.g. a 30-page financial report, is very slow on my 4070, and the document length is much longer than the context token limit (8192). Testing on the financial report looks good when the query words do not have semantic-gap issues. The quality of the RAG-retrieved documents strongly affects answer quality; for RAG-based chat, building a high-recall retrieval system is critical.