Category Archives: AI

Device graph – cross-device link

In modern society, many of us own multiple devices, such as a desktop PC, a mobile phone, and a tablet, and we surf the internet across all of them. Some people also use different browsers, such as Chrome, Firefox, or Safari. Cookies are unique per device and per browser. For example, I read news on CNA using Chrome and am assigned a cookie ID. The next time I read CNA news on the same device using Firefox, I am assigned another cookie ID. Judged by cookie IDs, I therefore have two different identities, i.e. I look like two different people. The question is: can we identify that the two cookie IDs refer to one person?

A simple way is to ask users to create an account and log in every time they consume content on your platform. The user account then links the different cookie IDs across devices and browsers.

But if users do not want to log in, is there still a way? Technically, yes. This is called a device graph: based on the available cookie data, link different cookie IDs to one unique identity, i.e. to one person.

A cookie records the behaviors of a surfing session, e.g. dwell time, visited URLs, clicks, IP address, device type (e.g. iPhone, Samsung S20, Huawei P40, PC, …) and browser type (e.g. Chrome, Firefox, Safari). A record looks like:

  • Cookie-id = 1234, IP: 0.0.0., visited URL: at 21:00:00 20210908, browser: Firefox, device: PC, …

For each cookie-id, we can aggregate the behaviors in its history over different windows, e.g. the last 30, 60, or 90 days, and extract a high-dimensional feature vector to characterize the cookie-id.
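As a sketch of this aggregation step, the toy function below rolls one cookie-id's raw events inside a look-back window up into count features. The event field names (`ts`, `browser`, `device`) and the chosen counts are illustrative assumptions, not a fixed schema:

```python
from collections import Counter
from datetime import datetime, timedelta

def cookie_features(events, now, window_days=30):
    """Aggregate one cookie-id's raw events inside a look-back window
    into a flat dict of count features (counts are illustrative)."""
    cutoff = now - timedelta(days=window_days)
    recent = [e for e in events if e["ts"] >= cutoff]
    feats = Counter()
    feats["n_visits"] = len(recent)
    for e in recent:
        feats[f"browser={e['browser']}"] += 1   # browser-type counts
        feats[f"device={e['device']}"] += 1     # device-type counts
        feats[f"hour={e['ts'].hour}"] += 1      # time-of-day profile
    return dict(feats)

events = [
    {"ts": datetime(2021, 9, 8, 21, 0), "browser": "firefox", "device": "pc"},
    {"ts": datetime(2021, 9, 9, 8, 30), "browser": "chrome", "device": "pc"},
]
f = cookie_features(events, now=datetime(2021, 9, 10))
```

In practice the dict would be vectorized over a fixed feature vocabulary, and windows of 30/60/90 days would be concatenated into one high-dimensional vector.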

Then we need to group similar cookie-ids into clusters. From the view of pattern recognition and machine learning, this is an unsupervised clustering problem. If some logged-in users are available, they give us golden answers about which cookies must be in the same cluster, and the task becomes a semi-supervised clustering problem.

Clustering can also be viewed through graph theory. We can calculate a similarity score between any pair of cookies to measure the probability that the pair comes from the same identity (only high-probability candidates need to be kept). A cookie graph is then built, with nodes being cookie-ids and edge weights measuring link strength. Any graph-cut algorithm can then be exploited to solve the clustering problem: graph cut identifies sub-graphs in which all cookie-ids are assigned to the same identity.
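A minimal stand-in for the graph step: keep only high-probability pairs and take connected components with union-find. A real system would use a proper graph-cut or community-detection algorithm; the pair scores and the threshold below are made up for illustration:

```python
def cluster_cookies(pair_scores, threshold=0.8):
    """Group cookie-ids into identity clusters: keep only high-probability
    pairs, then take connected components via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for (a, b), score in pair_scores.items():
        find(a); find(b)                    # register both nodes
        if score >= threshold:              # keep only strong links
            union(a, b)

    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

scores = {("c1", "c2"): 0.95, ("c2", "c3"): 0.91, ("c3", "c4"): 0.2}
clusters = cluster_cookies(scores)  # {c1, c2, c3} merged, c4 alone
```

Thresholding plus connected components is the simplest possible cut; spectral or min-cut methods would additionally split weakly connected sub-graphs.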

A device graph is a very useful technology in ads targeting and personalized recommendation.

Ads targeting

As publisher platforms, broadcast companies create content to attract users and earn money through ads operations or content subscriptions. Because a media company is business driven and does not have enough manpower to develop its own ads technology (user tracking, ads placement, ads optimization, …), it uses a third-party service such as Google DoubleClick. Based on Google's technology, the media company gets real-time data about how users react to the ads displayed to them, e.g. which ads unit was displayed to a user (an impression) and whether the user clicked the ad. From the click information, in-house data scientists can develop a machine learning model (a lookalike model) to predict how likely a user is to click an ad. Ads targeting can then be implemented, i.e. targeting ads at a precisely tailored audience, which improves click-through rate (CTR) and drives traffic.

Big internet tech companies such as Google, Facebook, and Baidu have ads targeting products. But media companies intend to build in-house technology because of data privacy, and because they do not want to depend too heavily on third-party services. In-house technology lets them easily customize the ads targeting model to support niche business requirements and respond quickly to the business.

How to build ads targeting model?

Firstly, the problem is formulated as: given an ad-user pair, predict whether the user will click the ad or not. From the machine learning point of view, it is a binary classification problem.

Secondly, collect data to prepare training samples for the ML model. Collect the ad-user pairs already displayed on the platform. If you use Google DoubleClick, you can get real-time impression log data. From these logs, you know which ads unit was shown to the audience and whether the audience clicked it. If the user clicked the ad, the impression is positive (1); otherwise, it is negative (0). Thus each ad-user log pair is tagged as 1 or 0.
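The tagging step can be sketched as below. The log field names (`ad_id`, `user_id`, `clicked`) are assumptions about the impression-log schema, and the CTR helper just shows what the resulting labels support:

```python
from collections import defaultdict

def label_impressions(log_rows):
    """Tag each ad-user impression: 1 if clicked, else 0."""
    return [(r["ad_id"], r["user_id"], int(r["clicked"])) for r in log_rows]

def ctr_per_ad(samples):
    """Click-through rate per ads unit from the labeled samples."""
    clicks, shows = defaultdict(int), defaultdict(int)
    for ad, _, y in samples:
        shows[ad] += 1
        clicks[ad] += y
    return {ad: clicks[ad] / shows[ad] for ad in shows}

logs = [
    {"ad_id": "a1", "user_id": "u1", "clicked": True},
    {"ad_id": "a1", "user_id": "u2", "clicked": False},
    {"ad_id": "a2", "user_id": "u1", "clicked": False},
]
samples = label_impressions(logs)
```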

Thirdly, represent the ad-user pair as a feature vector. A lookalike model tries to find potential audiences whose behavior is similar to audiences it already knows, so the ad information can be ignored: we only need to represent the user with a vector that characterizes the user's history of behaviors on the platform along various dimensions. For how to build such a feature representation, please refer to your-browsing-behavior-expose-your-gender-age-ethnicity.

Lastly, you can train any supervised machine learning model to do the prediction. In my case, a simple weighted linear classifier worked well: roughly the mean of the positive samples minus the mean of the negative samples, plus discriminative information from a comparison with a background model. An A/B test on some ads units showed promising results.
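One plausible reading of such a weighted linear classifier, sketched on synthetic data. The normalization by the background model's per-feature standard deviation is my assumption about what the "discriminative info" contributes:

```python
import numpy as np

def fit_mean_diff(X_pos, X_neg, X_bg):
    """Weighted linear scorer: direction = mean(positives) - mean(negatives),
    each feature re-weighted by the background model's scale."""
    mu_pos = X_pos.mean(axis=0)
    mu_neg = X_neg.mean(axis=0)
    sigma_bg = X_bg.std(axis=0) + 1e-8        # avoid divide-by-zero
    w = (mu_pos - mu_neg) / sigma_bg
    b = -0.5 * w @ (mu_pos + mu_neg)          # bias at the class midpoint
    return w, b

def score(x, w, b):
    """Higher score = user more likely to click the ad."""
    return float(x @ w + b)

rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, size=(100, 5))   # synthetic clickers
X_neg = rng.normal(-1.0, 1.0, size=(100, 5))  # synthetic non-clickers
X_bg = np.vstack([X_pos, X_neg])              # background = all users
w, b = fit_mean_diff(X_pos, X_neg, X_bg)
```

A closed-form scorer like this has no hyperparameters to tune and is trivially fast to refresh as new impression logs arrive.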

After the model is learned, we can rank audiences by the predicted probability that they will click the ad and select the top N for ads targeting.

Viewer forecasting – predicting how many users will read or watch your content

Media companies create news articles, audio, and video content to engage users on their platforms and provide ads services to make money. Content creators must understand which topics are most interesting to their audiences and how popular their content is. For popular content, i.e. content attracting a large volume of viewers in a short time, creators may plan to follow up with deeper reporting on the topic. It is thus necessary to predict the viewer volume of a published article.

For example, a CNA reporter published an article at 06:25AM on 22 Sep 2021, "COVID-19: Home recovery patients 'anxious' without clear instructions, set up Telegram group for support". The reporter wants to know how many viewers the article will attract in the next 24 or 72 hours.

Predicting the viewer count of an article is a classical time-series regression problem. Based on the article's publication date and time, calendar day, day of week, and the article's viewer history, a regression model can be learned and used for forecasting. The steps are as follows.

  • Data collection and cleaning
    • Collect historical articles together with their viewer counts, which form a number series, e.g. article A published 4 hours ago with hourly viewer numbers (0th, 0), (1st, 10), (2nd, 100), (3rd, 1000). Collect many such sequences from published articles.
  • Prepare training set to train regression model
    • Training data is a set of pairs (x, y), where x is the feature (the observed evidence) and y is the target value (the ground truth). If we only forecast the next hour's viewer count, y is just a number. For the series above, (x, y) pairs may look like ([0, 10], 100) and ([10, 100], 1000), i.e. use the past 2 hours' viewer numbers to predict the next hour. In practice it is more complicated: for example, in the project I worked on at the media company, we needed to predict the next 72 hours of viewers.
  • Feature extraction
    • Feature extraction is the most important step in forecasting. If the features are bad, the regression accuracy will be poor regardless of which state-of-the-art machine learning model is used. For news articles, the viewer count is only one feature source; other features such as publication date and time, time of day, day of week, and channel can further improve forecasting precision.
    • Because the viewer count is a non-negative integer, it is better to use it in the LOG domain, e.g. use LOG(1000) rather than 1000 as the feature; LOG non-linearly re-scales the number.
  • Machine learning model
    • Any regression model can be applied once the features and training data are ready, e.g. xgboost, decision trees, or neural networks. I finally applied a DNN with metric-oriented learning (see my previous research, Learn a metric oriented classifier: learning a NN to optimize metrics such as mean square error (MSE), adjusted R2, …).
  • Forecasting performance metrics
    • Popular metrics include mean square error, mean absolute error, and adjusted R2.
  • After the model is ready, the next step is to deploy forecasting as a service. You can try Flask to build the service; your data engineering team can then call it to do real-time forecasting.
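The data-preparation and log-feature steps above can be sketched as a sliding-window pair builder. `context` and `horizon` are illustrative parameters; the series is the article-A example from the text:

```python
import math

def make_pairs(series, context=2, horizon=1):
    """Build sliding-window (x, y) training pairs from an hourly viewer
    series: x = log-scaled counts of the past `context` hours,
    y = the raw viewer count `horizon` hours ahead."""
    pairs = []
    for t in range(context, len(series) - horizon + 1):
        x = [math.log1p(v) for v in series[t - context:t]]  # LOG-domain features
        y = series[t + horizon - 1]                         # forecast target
        pairs.append((x, y))
    return pairs

hourly = [0, 10, 100, 1000]   # article A: hourly viewer numbers
pairs = make_pairs(hourly)
# pairs[0] = ([log1p(0), log1p(10)], 100)
# pairs[1] = ([log1p(10), log1p(100)], 1000)
```

`log1p` is used instead of a plain log so the zero count in hour 0 is handled cleanly; for the 72-hour case, `horizon` would grow and y would become a vector.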

When I worked on forecasting, I covered not only news articles but also audio viewer prediction for broadcast channel programs (a little different from articles, because a broadcast program is scheduled, e.g. 1-3 program-A, 4-5 program-B) and video program prediction.

You can also apply forecasting to predict the exchange rate between two currencies, e.g. predicting the USD/SGD exchange rate over the next few days. I tried the forecasting model on USD/RMB exchange rate prediction, and it looked good.

If you find the topic interesting and want to know more, please contact me.

Music summary

A music summary extracts a short clip from a music recording to represent its content; it is used to engage consumers to buy the recording. A simple way is to use the beginning of the audio, but that may not capture the most engaging part of the music. I developed a music structure analysis and repeated-pattern identification algorithm. The repeated pattern or segment tends to reflect the most engaging content in the recording, and is used as the music summary. Refer to
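A crude sketch of the repeated-pattern idea, not the actual algorithm: score each frame by its average self-similarity to the whole recording, then pick the window of frames that scores highest. The one-hot toy features stand in for real frame features such as chroma or MFCC:

```python
import numpy as np

def summary_segment(feats, seg_len):
    """Pick the segment whose frames are on average most similar to the
    rest of the recording -- a crude proxy for the most repeated content.
    feats: (n_frames, n_dims) frame-level feature matrix."""
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = f @ f.T                                # cosine self-similarity matrix
    row_mean = sim.mean(axis=1)                  # repetition score per frame
    # mean score over every window of seg_len consecutive frames
    scores = np.convolve(row_mean, np.ones(seg_len) / seg_len, mode="valid")
    start = int(np.argmax(scores))
    return start, start + seg_len

# toy "recording": one-hot frame features; frames 5-9 repeat at 15-19
labels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
          10, 11, 12, 13, 14, 5, 6, 7, 8, 9,
          15, 16, 17, 18, 19]
feats = np.eye(20)[labels]
start, end = summary_segment(feats, seg_len=5)   # → (5, 10)
```

A real structure-analysis system would instead search for long off-diagonal stripes in the self-similarity matrix, which locate the repeats explicitly.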

Learn a metric oriented classifier

The objective function is the mathematical formulation of how classifier parameters are estimated. The classical objective function is derived from maximizing the log-likelihood of the training samples under the proposed classifier, and the parameters are estimated by optimizing it. But log-likelihood is not directly related to the performance metric: training optimizes likelihood, while the preferred evaluation metric may be F1, accuracy, or ranking. This criterion gap between training and evaluation means a classifier trained on log-likelihood is not optimal for F1, classification error, or ranking. This is the motivation of our work on MFoM-based classifier learning. After MFoM, many research papers on learning classifiers for a specified metric appeared in the research community; learning-to-rank is the most famous, and it is now a core module of modern search engines.
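A toy illustration of metric-oriented learning, not MFoM itself: replace hard 0/1 decisions with sigmoid scores so that an F1-style metric becomes differentiable, then descend on it (numerically here, for simplicity):

```python
import numpy as np

def soft_f1_loss(w, b, X, y):
    """Differentiable surrogate of (1 - F1): soft decisions in (0, 1)
    stand in for hard ones, so the metric itself can be optimized."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    tp = np.sum(p * y)
    fp = np.sum(p * (1 - y))
    fn = np.sum((1 - p) * y)
    return 1.0 - 2 * tp / (2 * tp + fp + fn + 1e-8)

def train(X, y, lr=0.5, steps=300, eps=1e-5):
    """Minimize the soft-F1 loss with central-difference gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        g_w = np.zeros_like(w)
        for i in range(len(w)):
            d = np.zeros_like(w); d[i] = eps
            g_w[i] = (soft_f1_loss(w + d, b, X, y)
                      - soft_f1_loss(w - d, b, X, y)) / (2 * eps)
        g_b = (soft_f1_loss(w, b + eps, X, y)
               - soft_f1_loss(w, b - eps, X, y)) / (2 * eps)
        w -= lr * g_w
        b -= lr * g_b
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1, 1, (50, 2)), rng.normal(-1, 1, (50, 2))])
y = np.array([1] * 50 + [0] * 50)
w, b = train(X, y)
```

The same pattern extends to other metrics: any metric expressible through soft true/false positive counts can replace the F1 surrogate.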