Category Archives: data science

Device graph – cross-device link

In modern society, many of us own multiple devices, such as a desktop PC, a mobile phone, and a tablet, and surf the internet on all of them. Some of us also use several browsers, such as Chrome, Firefox, Safari, or Edge. Cookies are unique per device and per browser. For example, I read news on CNA using Chrome and am assigned a cookie ID. Next time I read news on CNA using Firefox on the same device, and I am assigned another cookie ID. From the cookie IDs alone I have two different identities, i.e. I look like two different people. The question is: can we identify that the two cookie IDs refer to one person?

A simple way is to ask users to create an account and log in every time they consume content on your platform. The user account can then link the different cookie IDs across devices and browsers.

But if users do not want to log in, is there still a way to do it? Technically, yes. This is called a device graph: based on available cookie data, link different cookie IDs to one unique identity that refers to one person.

A cookie records behaviors during surfing, e.g. dwell time, visited URLs, clicks, IP address, device type (e.g. iPhone, Samsung S20, Huawei P40, PC, …), and browser type (e.g. Chrome, Firefox, Safari, Edge). A record looks like

  • Cookie-id = 1234, IP: 0.0.0., visited URL: wordpress.com at 21:00:00 20210908, browser: Firefox, device: PC, …

For each cookie ID, we can aggregate its historical behaviors over different windows, e.g. the last 30, 60, or 90 days, and extract a high-dimensional feature vector to characterize the cookie ID.
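As a toy sketch of this aggregation step (the log schema and the three features here are my own assumptions for illustration, not actual production features), each cookie's recent logs can be rolled up into a fixed-length vector:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical log schema: (cookie_id, timestamp, url_domain, dwell_seconds)
logs = [
    ("1234", datetime(2021, 9, 8, 21, 0), "wordpress.com", 120),
    ("1234", datetime(2021, 9, 20, 9, 30), "cna.sg", 45),
    ("5678", datetime(2021, 9, 21, 8, 0), "cna.sg", 60),
]

def aggregate_features(logs, now, window_days=30):
    """Aggregate each cookie's behavior over a trailing window into simple features."""
    cutoff = now - timedelta(days=window_days)
    stats = defaultdict(lambda: {"visits": 0, "dwell": 0, "domains": set()})
    for cookie_id, ts, domain, dwell in logs:
        if ts >= cutoff:
            s = stats[cookie_id]
            s["visits"] += 1
            s["dwell"] += dwell
            s["domains"].add(domain)
    # Fixed-length numeric vector: [visit count, total dwell time, distinct domains]
    return {cid: [s["visits"], s["dwell"], len(s["domains"])] for cid, s in stats.items()}

features = aggregate_features(logs, now=datetime(2021, 9, 22))
```

A real system would compute this per window (30/60/90 days) and concatenate many more dimensions, but the rolling-up pattern is the same.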

Then we need to group similar cookie IDs into clusters. From the view of pattern recognition and machine learning, this is a clustering problem. If some logged-in users are available, they give us golden answers about which cookies must be in the same cluster, and the task becomes a semi-supervised clustering problem.

Clustering can also be viewed through graph theory. We can calculate a similarity score between every pair of cookies to estimate the probability that the pair comes from the same identity (only high-probability candidates need to be kept). Then a cookie graph is built, with nodes being cookie IDs and edge weights measuring link strength. Any graph-cut algorithm can then be exploited to solve the clustering problem: graph cut identifies sub-graphs in which all cookie IDs belong to the same identity.
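A minimal sketch of this graph view, using union-find to take connected components over high-similarity edges (the pair scores and threshold below are invented; a production system might use min-cut or community detection rather than plain connected components):

```python
from collections import defaultdict

# Toy pairwise similarity scores between cookie IDs (made up for illustration).
pair_scores = {
    ("c1", "c2"): 0.92,  # likely the same person
    ("c2", "c3"): 0.88,
    ("c4", "c5"): 0.95,
    ("c1", "c4"): 0.10,  # weak link, will be dropped
}
THRESHOLD = 0.8

parent = {}

def find(x):
    """Union-find root lookup with path compression."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for (a, b), score in pair_scores.items():
    find(a); find(b)            # register both nodes in the graph
    if score >= THRESHOLD:      # keep only high-probability edges
        union(a, b)

# Each connected component is one inferred identity.
clusters = defaultdict(set)
for node in parent:
    clusters[find(node)].add(node)
identities = sorted(sorted(c) for c in clusters.values())
```

Here `identities` groups c1, c2, c3 into one person and c4, c5 into another; the weak c1–c4 edge is discarded by the threshold.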

A device graph is a very useful technology in ads targeting and personalized recommendation.

Ads targeting

As publisher platforms, broadcast companies create content to attract users and earn money through ads operations or content subscriptions. Because a media company is business driven and does not have enough manpower to develop its own ads technology (user tracking, ads placement, ads optimization, ……), it uses a third-party service such as Google DoubleClick. Based on Google's technology, the media company gets real-time data about how users react to the ads displayed to them, e.g. which ads unit was displayed to a user (an impression) and whether the user clicked the ad. From ads click information, in-house data scientists can develop a machine learning model (a lookalike model) to predict how likely a user is to click an ad. Ads targeting can then be implemented, i.e. targeting ads at a precisely tailored audience, which improves click-through rate (CTR) and drives traffic.

Big internet tech companies such as Google, Facebook, and Baidu offer ads targeting products. But media companies often intend to build in-house technology, because of data privacy and because they do not want to depend too heavily on third-party services. Building in-house technology also lets them easily customize the ads targeting model to support niche business requirements and respond quickly to the business.

How to build ads targeting model?

Firstly, formulate the problem: given an ads–user pair, predict whether the user will click the ad or not. From a machine learning point of view, this is a binary classification problem.

Secondly, collect data to prepare training samples for the ML model. Collect ads–user pairs already displayed on the platform. If you use Google DoubleClick, you can get the real-time impression log data, which tells you which ads unit was shown to the audience and whether the audience clicked the ad. If the user clicked, the impression is positive (1); otherwise it is negative (0). Thus each logged ads–user pair is tagged as 1 or 0.

Thirdly, represent the ads–user pair as a feature vector. A lookalike model tries to find potential audiences whose behavior is similar to audiences we already know. The ads information can therefore be ignored; we only need to represent the user with a vector that characterizes the user's historical behaviors on the platform along various dimensions. For how to represent a user with features, please refer to your-browsing-behavior-expose-your-gender-age-ethnicity.

Lastly, train any supervised machine learning model to do the prediction. In my case, a simple weighted linear classifier works well: roughly the mean of the positive samples minus the mean of the negative samples, plus discriminative information compared against a background model. An A/B test on some ads units showed promising results.

After the model is learned, we can rank audiences by how likely they are to click the ad, and select the top N for ads targeting.
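To make the centroid-style scorer and the top-N ranking concrete, here is a minimal sketch (the feature values and user IDs are invented, and the background-model term mentioned above is omitted for brevity):

```python
def mean_vec(rows):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

# Toy 3-dim behavior features for users who clicked (1) vs. did not click (0).
pos = [[0.9, 0.1, 0.8], [0.8, 0.2, 0.9]]
neg = [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1]]

# Weight vector points from the negative centroid toward the positive centroid.
w = [p - q for p, q in zip(mean_vec(pos), mean_vec(neg))]

def score(x):
    """Linear score: higher means more similar to known clickers."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Rank candidate audiences and keep the top N for targeting.
candidates = {"u1": [0.85, 0.15, 0.9], "u2": [0.2, 0.7, 0.3], "u3": [0.6, 0.4, 0.5]}
top2 = sorted(candidates, key=lambda u: score(candidates[u]), reverse=True)[:2]
```

With these toy numbers, u1 and u3 score highest and would be selected for the campaign.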

Viewer forecasting – predicting how many users will read or watch your content

Media companies create news articles, audio, and video content to engage users on their platforms, and provide ads services to make money. Content creators need to understand which topics are most interesting to audiences and how popular their content is. For popular content, i.e. content that attracts a large volume of viewers in a short time, creators may plan follow-up in-depth reports on the topic. It is therefore necessary to predict the viewer volume of a published article.

For example, a CNA reporter publishes an article on 22 Sep 2021 06:25AM, COVID-19: Home recovery patients ‘anxious’ without clear instructions, set up Telegram group for support (https://www.channelnewsasia.com/singapore/covid-19-home-recovery-quarantine-art-self-test-kit-telegram-support-group-2191691). The reporter wants to know how many viewers the article will attract in the next 24 hours or 72 hours.

Predicting the viewer number of an article is a classic time-series regression problem. Based on the article's publication date and time, calendar day, day of week, and the article's historical viewer counts, a regression model can be learned and used for forecasting. The steps are as follows.

  • Data collection and cleaning
    • Collect historical articles together with their viewer counts, which form a number series. E.g. article A was published 4 hours ago, and its hourly viewer numbers are (0th, 0), (1st, 10), (2nd, 100), (3rd, 1000). Collect many such sequences from published articles.
  • Prepare training set to train regression model
    • Training data is a set of pairs (x, y), where x is the feature (the observed evidence) and y is the target value (the golden truth). If we only forecast the next hour's viewer number, y is just a number. For the series above, the (x, y) pairs may look like ([0, 10], 100) and ([10, 100], 1000), i.e. use the past 2 hours of viewer numbers to predict the next hour. In practice it is more complicated than this simple case: for example, in the project I worked on at the media company, we needed to predict the next 72 hours of viewers.
  • Feature extraction
    • Feature extraction is the most important step in forecasting. If the features are bad, the regression accuracy will be poor regardless of which state-of-the-art machine learning model is used. For news articles, the viewer number is only one source of features; other features include publication date and time, time of day, day of week, channel, and so on. Many extra features can help improve forecasting precision.
    • Because the viewer count is an integer, it is better to use it in the LOG domain, e.g. use LOG(1000) rather than 1000 as a feature; LOG non-linearly re-scales the number.
  • Machine learning model
    • Any regression model can be applied once the features and training data are ready, e.g. XGBoost, decision trees, or neural networks. I finally applied a DNN with metric-oriented learning (from my previous research, Learn a metric oriented classifier: learning an NN to directly optimize metrics such as mean squared error (MSE), adjusted R2, …).
  • Forecasting performance metrics
    • Popular metrics include mean squared error, mean absolute error, and adjusted R2.
  • After the model is ready, the next step is to deploy forecasting as a service. You can try Flask, https://flask.palletsprojects.com/en/2.0.x/, to build the service. Your data engineering team can then call the service to do real-time forecasting.
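Putting the windowing and LOG-domain ideas from the steps above together, here is a minimal sketch of the training-pair construction and the MSE metric (the window size and series are from the toy example; real features would also include publish time, day of week, channel, and so on):

```python
import math

# Hourly viewer counts for one article, as in the example: hour 0→0, 1→10, 2→100, 3→1000.
series = [0, 10, 100, 1000]

def make_pairs(series, window=2):
    """Slide a window over the series: x is the past `window` counts in the
    LOG domain, y is the next hour's count (also in the LOG domain)."""
    pairs = []
    for i in range(len(series) - window):
        x = [math.log1p(v) for v in series[i:i + window]]  # log1p handles zero counts
        y = math.log1p(series[i + window])
        pairs.append((x, y))
    return pairs

pairs = make_pairs(series)  # two (x, y) pairs from a 4-point series

def mse(y_true, y_pred):
    """Mean squared error, one of the popular forecasting metrics."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
```

Any regressor (XGBoost, a decision tree, a DNN) can then be fitted on `pairs`; predicting 72 hours ahead just widens the target from one number to a 72-dim vector.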

When I worked on forecasting, I covered not only news articles but also audio viewer prediction for broadcast channel programs (which differs a little from articles, because broadcast programs are scheduled, e.g. 1–3 program A, 4–5 program B) and video program prediction.

You can also apply forecasting to predict the exchange rate between two currencies, e.g. predicting the USD/SGD exchange rate over the next few days. I tried the forecasting model on USD/RMB exchange rate prediction, and the results looked good.

If you find the topic interesting and want to know more, please contact me.

Cookie – Tracking user behavior & recommendation

A cookie is a short piece of code used to track user behavior when surfing the internet: reading news and articles, watching videos, listening to podcasts and audio programs. From cookie data, we can understand who clicked which content, where and when, and the dwell time. When you use Google, a Google cookie assigns a unique identity (UUID) to you and traces you, and similarly for Baidu and Bing. But the UUID differs across Google, Baidu, and Bing, because a UUID does not cross browsers. However, when you log into different browsers with the same email account, these UUIDs can be linked and identified as a single user.

Different cookies are used to track different user behaviors. For example, a cookie tracking news surfing is different from one tracking TV program watching or radio listening. Third-party cookie services are often used by media companies to support news, audio program, and video program recommendation. Many DSPs (demand-side platforms), DMPs (data management platforms), and SSPs (supply-side platforms) provide such technology services, e.g. Cxense, Lotame, ……

Media companies often require customized recommendation systems. Third-party services provide cookies and widget toolkits to satisfy customization requirements. For news recommendation, through the widget settings, the customer can configure news categories, keywords, named entities, term weighting, time period, and blacklists & whitelists. These functions satisfy basic business requirements for news recommendation. This is a traditional information retrieval application, and it cannot do personalized news recommendation of the kind widely applied by Google, Facebook, or Microsoft Bing search. An in-house data science team can exploit internal audience data to understand user interests and build machine learning models for personalized recommendation, but in practice most companies lack such capability.

For audio/podcast and video program recommendation, most of the time it is still treated as a text information retrieval problem. These programs have meta-text descriptions such as captions, short program descriptions, editor or reporter names, and program director and actor names. Using this available metadata, recommendation can fulfill most business requirements. Audio, video, and image content understanding are not widely used, not only because of limited manpower capability but also because processing audio and images is hungry for computing resources. In terms of ROI (return on investment), they may not be a good investment.