
Some Thoughts and Attempts on Trustworthy Decision Intelligence

Towards Trustworthy Decision Intelligence

Peng Cui
Tsinghua University
Decision is often more consequential than Prediction

https://www.datapine.com/blog/data-driven-decision-making-in-businesses/
Decisions are everywhere
Decision-making scenarios (in each case, the decision is a treatment, i.e., an intervention):

• Recommender systems
• Which item to recommend?

• Pricing algorithms
• What price to set?

• Healthcare
• Which medicine to take?

…
Common practice (1): decision making with a simulator

Decision quality depends on the accuracy of the simulator
Common practice (2): decision making with predictions

In the prediction space, we "draw the target around wherever the arrow lands"; prediction accuracy
depends on how well the i.i.d. assumption is satisfied

Decision quality depends on prediction accuracy
Common practice (2): decision making with predictions

Build a predictive model from historical data P(X, Y)

Given the decision variable, optimize its value
→ P(X) is changed, so the i.i.d. assumption no longer holds
Search for the optimal decision variable and optimize its value

Decision problems can trigger out-of-distribution (OOD) generalization problems
Are decision problems inherently causal?

https://multithreaded.stitchfix.com/blog/2019/12/19/good-marketing-decisions/
A framework for characterizing decision problems
π: Payoff  X₀: Decision Variable  Y: Outcome

Predictive decision problems vs. causal decision problems
Jon Kleinberg

Composite decision problems

Kleinberg, Jon, et al. "Prediction policy problems." American Economic Review 105.5 (2015): 491-95.
Two examples
X₀: Decision Variable
Should I take an umbrella? Should I pay someone to perform a shamanic ritual?
The complexity of decisions stems from social and economic factors

Prediction: best effort
Decision: ?

Algorithmic fairness, big-data price discrimination, filter bubbles

A framework for trustworthy decision intelligence

π: Payoff  X₀: Decision Variable  Y: Outcome

④ Regulatable decisions  ① Counterfactual reasoning

③ Prediction fairness  ② Complex payoffs
A framework for trustworthy decision intelligence

π: Payoff  X₀: Decision Variable  Y: Outcome

① Counterfactual reasoning
Counterfactual reasoning

• Average policy effect evaluation (Off-Policy Evaluation)

• Evaluate the overall effect of an intervention policy on the population

• Individual-level effect prediction (Counterfactual Prediction)

• Predict the effect of a policy at the level of individuals

• Policy Optimization

• Choose the intervention with the best effect for each individual
Average policy effect evaluation (Off-Policy Evaluation)
Problem overview

Given offline data $\mathcal{D}$ generated by a behavior policy $\pi_0$, evaluate the utility of a target policy $\pi$:

$\mathcal{D} = \{(\mathbf{x}_i, \mathbf{a}_i, r_i)\}_{i=1,2,\dots,n}$  (Context, Action, Reward)

More realistic and economical than A/B testing
Utility: the average outcome over the population
Existing methods

1. Outcome-prediction-based methods (Direct Method)
   → suffer from OOD prediction

2. Sample re-weighting based methods (weight the samples directly)
   → the propensity score is unknown
   → the variance can be too large
   → mainstream in policy evaluation
The role of the propensity score

• Compute the weights $w_i = \frac{1}{\pi_0(\mathbf{a}=\mathbf{a}_i \mid \mathbf{x}_i)}$ directly
• After weighting by $w_i$, the distribution of $\mathbf{X}$ within each action group matches the population distribution $P(\mathbf{X})$

[Figure: weighting each action group $j$ by $1/\pi_0(\mathbf{a}=j \mid \mathbf{X})$ aligns its context distribution $P(\mathbf{X} \mid \mathbf{a}=j)$, $j = 1, \dots, K$, with the population distribution $P(\mathbf{X})$]

Hao Zou, Kun Kuang, Boqi Chen, Peng Cui, Peixuan Chen. Focused Context Balancing for Robust Offline Policy Evaluation. KDD, 2019.
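
To make the weighting concrete, here is a minimal sketch of the inverse-propensity-scored utility estimate $\hat{U}(\pi,\mathcal{D}) = \frac{1}{n}\sum_i w_i\,\pi(\mathbf{a}_i \mid \mathbf{x}_i)\,r_i$; the function names and the callable target policy are assumptions:

```python
import numpy as np

def ips_utility(pi_target, pi0_probs, contexts, actions, rewards):
    """Inverse-propensity-scored estimate of a target policy's utility.

    pi_target(a, x): target-policy probability pi(a | x), a callable
    pi0_probs[i]:    behavior-policy probability pi_0(a_i | x_i) of the logged action
    """
    w = 1.0 / np.asarray(pi0_probs)  # w_i = 1 / pi_0(a_i | x_i)
    target = np.array([pi_target(a, x) for a, x in zip(actions, contexts)])
    return float(np.mean(w * target * np.asarray(rewards)))
```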
Focused Context Balancing (FCB) Estimator

Context balancing $W$: considers only $\pi_0$; balances the action-group context distributions under $\pi_0$ against the population distribution $P(\mathbf{X})$
Focused context balancing $W$: considers both $\pi_0$ and $\pi$; balances against the action-group distributions under the target policy $\pi$

$$\hat{U}_{CB}(\pi, \mathcal{D}) = \sum_{i=1}^{n} w_i \,\pi(\mathbf{a}_i \mid \mathbf{x}_i)\, r_i$$

Hao Zou, Kun Kuang, Boqi Chen, Peng Cui, Peixuan Chen. Focused Context Balancing for Robust Offline Policy Evaluation. KDD, 2019.
Focused Context Balancing (FCB) Estimator

For each action group $j$, solve

$$\min_{W_{\mathbf{a}=j}} \Big\| \frac{1}{n}\sum_{i=1}^{n} \pi(\mathbf{a}=j \mid \mathbf{x}_i)\,\mathbf{M}_i \;-\; \sum_{i:\,\mathbf{a}_i=j} w_i\,\pi(\mathbf{a}=j \mid \mathbf{x}_i)\,\mathbf{M}_i \Big\|_2^2 \quad \text{s.t.}\;\; \sum_{i:\,\mathbf{a}_i=j} w_i = 1,\; w_i \ge 0$$

$$\hat{U}_{FCB}(\pi, \mathcal{D}) = \sum_{i=1}^{n} w_i \,\pi(\mathbf{a}_i \mid \mathbf{x}_i)\, r_i$$

Focused component: balance toward the context distribution of action group $j$ after policy $\pi$ is applied to the population; samples with larger $\pi(\mathbf{a}=j \mid \mathbf{x}_i)$ matter more in the balancing process

Hao Zou, Kun Kuang, Boqi Chen, Peng Cui, Peixuan Chen. Focused Context Balancing for Robust Offline Policy Evaluation. KDD, 2019.
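
A minimal sketch of how the balancing weights for one action group could be obtained numerically, assuming contexts are summarized by moment features M and using a generic constrained optimizer rather than the paper's solver:

```python
import numpy as np
from scipy.optimize import minimize

def fcb_weights(M, pi_j, group_mask):
    """Balancing weights for action group j (a sketch of the FCB objective).

    M:          (n, d) context moment features M_i
    pi_j:       (n,) target-policy probabilities pi(a=j | x_i)
    group_mask: (n,) boolean mask, True where a_i == j
    """
    target = (pi_j[:, None] * M).mean(axis=0)   # (1/n) sum_i pi(a=j|x_i) M_i
    Mg, pg = M[group_mask], pi_j[group_mask]
    m = Mg.shape[0]

    def objective(w):                           # squared balancing error
        return np.sum((target - (w * pg) @ Mg) ** 2)

    res = minimize(objective, np.full(m, 1.0 / m),
                   bounds=[(0.0, None)] * m,                        # w_i >= 0
                   constraints=[{"type": "eq",
                                 "fun": lambda w: w.sum() - 1.0}])  # sum w_i = 1
    return res.x
```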

FCB: experimental validation

The FCB estimator significantly outperforms the baselines across settings with
varying sample sizes and context dimensions.

Hao Zou, Kun Kuang, Boqi Chen, Peng Cui, Peixuan Chen. Focused Context Balancing for Robust Offline Policy Evaluation. KDD, 2019.
Individual-level effect prediction (Counterfactual Prediction)

Account for individual heterogeneity and apply differentiated interventions directly at the individual level

• When $X$ contains the information of all confounders
• Unconfoundedness
• Predict the individual outcome based on $X$

[Figure: causal graph $X \to T$, $X \to Y$, $T \to Y$; for each individual, which treatment leads to which outcome?]
Limitations of direct predictive modeling
From historical observational data
$\mathcal{D} = \{(\mathbf{x}_i, \mathbf{t}_i, \mathbf{y}_i)\}_{i=1,2,\dots,n}$
train a counterfactual prediction model
$f_{\theta_p}(\mathbf{X}, \mathbf{T}) \to y$

Challenge:
In historical observational data, $\mathbf{t}_i$ and $\mathbf{x}_i$ are not independent, so a mapping from $(X, T)$ to $Y$ learned directly is inevitably affected by the dependence between $X$ and $T$. Intervening on $T$ then triggers OOD.

→ Remove the dependence between $\mathbf{t}_i$ and $\mathbf{x}_i$


Sample re-weighting to remove the dependence between X and T
Weighting-based approaches:

• Inverse propensity score weighting: $w_i = \frac{1}{p(\mathbf{t}_i \mid \mathbf{x}_i)}$ or $w_i = \frac{p(\mathbf{t}_i)}{p(\mathbf{t}_i \mid \mathbf{x}_i)}$

• Variable balancing: $\min \big\| \frac{1}{n}\sum_{i=1}^{n} \mathbf{M}_i - \sum_{i:\,\mathbf{t}_i=j} w_i \cdot \mathbf{M}_i \big\|_2^2$

These apply to simple treatment types (binary or discrete values).

What if the treatment is high-dimensional? … (treatments in a bundle)


Hao Zou, Peng Cui, Bo Li, Zheyan Shen, Jianxin Ma, Hongxia Yang, Yue He. Counterfactual Prediction for Bundle Treatments. NeurIPS, 2020.
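
A minimal sketch of inverse propensity weighting for a binary treatment, with the propensity model assumed to be a logistic regression:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_weights(X, t, stabilized=True):
    """Inverse propensity weights decorrelating a binary treatment t from X.

    Implements w_i = 1 / p(t_i | x_i) and the stabilized variant
    w_i = p(t_i) / p(t_i | x_i), with a logistic-regression propensity model.
    """
    ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]  # p(t=1 | x)
    p_t_given_x = np.where(t == 1, ps, 1.0 - ps)                # p(t_i | x_i)
    if stabilized:
        p_t = np.where(t == 1, t.mean(), 1.0 - t.mean())        # marginal p(t_i)
        return p_t / p_t_given_x
    return 1.0 / p_t_given_x
```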
The high-dimensional treatment problem
Assume the high-dimensional treatment has a low-dimensional latent structure
• e.g., a recommended product list is determined by a small number of factors
Instead of decorrelating the raw treatment from the confounder $X$,
decorrelate the latent variable $z$ from the confounder $X$

Hao Zou, Peng Cui, Bo Li, Zheyan Shen, Jianxin Ma, Hongxia Yang, Yue He. Counterfactual Prediction for Bundle Treatments. NeurIPS, 2020.

VSR
• 1. Learn the latent variable $z$
• with a variational autoencoder (VAE)
• 2. Learn the weight function $w(x, z)$
• via probability density ratio estimation based on Bayes' theorem
• 3. Obtain the sample weights $w_i$
• and train the predictive model

[Figure: the three steps of the VSR pipeline]

Hao Zou, Peng Cui, Bo Li, Zheyan Shen, Jianxin Ma, Hongxia Yang, Yue He. Counterfactual Prediction for Bundle Treatments. NeurIPS, 2020.
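
Step 2 hinges on a density ratio; one standard way to obtain it via Bayes' theorem is to train a classifier that separates observed (x, z) pairs from pairs with z shuffled. A sketch under that assumption (not necessarily the paper's exact estimator):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio_weights(X, Z):
    """Estimate w(x, z) proportional to p(x)p(z) / p(x, z) via classification.

    Label observed (x, z) pairs as 1 and pairs with z shuffled (which follow
    the product of marginals) as 0; by Bayes' theorem the classifier's odds
    recover the density ratio.
    """
    n = X.shape[0]
    Z_shuffled = Z[np.random.permutation(n)]       # breaks the X-Z dependence
    pairs = np.vstack([np.hstack([X, Z]), np.hstack([X, Z_shuffled])])
    labels = np.concatenate([np.ones(n), np.zeros(n)])
    clf = LogisticRegression().fit(pairs, labels)
    p_joint = clf.predict_proba(np.hstack([X, Z]))[:, 1]
    w = (1.0 - p_joint) / p_joint                  # odds -> p(x)p(z) / p(x, z)
    return w / w.mean()                            # normalized sample weights
```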

VSR: experimental validation
• Data generation (RecSim simulator)
• Each document is characterized by a category $c_i$ and a quality $q_i$; the confounder $X \in \mathbb{R}^d$ is the user's preference for each category
• Latent factors determine the document scores
• The documents with the highest scores are selected as the bundle treatment
• Predict the user's click probability
• Number of categories d = 4, number of selected documents s = 4, sample size n = 10000

Hao Zou, Peng Cui, Bo Li, Zheyan Shen, Jianxin Ma, Hongxia Yang, Yue He. Counterfactual Prediction for Bundle Treatments. NeurIPS, 2020.
Policy Optimization
Original policy (which collected the historical data) vs. optimized policy

[Figure: treatments and outcomes under the original and the optimized policy]
The policy optimization problem
Decision optimization based on a counterfactual prediction model: derive the policy

Given a confounder value $\mathbf{x}$, feed candidate treatments $\mathbf{t}_1, \mathbf{t}_2, \dots, \mathbf{t}_m$ through the predictive model $f$, obtain the predicted outcomes $f(\mathbf{x}, \mathbf{t}_1), \dots, f(\mathbf{x}, \mathbf{t}_m)$, and pick the treatment with the maximal predicted outcome.
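
In code, deriving the policy is an argmax over candidate treatments; f and the candidate set are placeholders:

```python
import numpy as np

def derive_policy(f, x, candidate_treatments):
    """Greedy policy induced by a counterfactual prediction model f:
    pick the treatment with the maximal predicted outcome f(x, t)."""
    scores = [f(x, t) for t in candidate_treatments]
    return candidate_treatments[int(np.argmax(scores))]
```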
Policy learning ≠ Prediction
Policy Learning Target:
• Minimize the decision loss
Counterfactual Prediction Target:
• Minimize the counterfactual prediction error

Policy Learning Target ≠ Counterfactual Prediction Target

For example:

Hao Zou, Bo Li, Jiangang Han, Shuiping Chen, Xuetao Ding, Peng Cui. Counterfactual Prediction for Outcome-oriented Treatments. ICML, 2022.
OOSR
Analysis of the decision loss (regret)
• The regret is controlled by the prediction errors in two specific regions, rather than over the entire treatment space.
• To improve decision quality, prediction should be strengthened in treatment regions with better outcomes
→ Outcome-Oriented Weighting

Training procedure of the model $f_\theta$ ($m$ rounds in total)
• Compute the weights via probability density ratio estimation
• For each round $j = 1, 2, \dots, m$:
• Optimize the model parameters under the weighted loss $\mathcal{L}^{(j-1)}$ to obtain $\theta^{(j-1)}$
• Update the weights according to the model parameters $\theta^{(j-1)}$

Hao Zou, Bo Li, Jiangang Han, Shuiping Chen, Xuetao Ding, Peng Cui. Counterfactual Prediction for Outcome-oriented Treatments. ICML, 2022.
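
A schematic of the iterative loop. The weight rule below simply up-weights samples whose treatments the current model predicts to perform well; it stands in for the paper's density-ratio-based weights, and the model interface (fit with sample_weight, predict) is hypothetical:

```python
import numpy as np

def oosr_train(model, X, T, Y, rounds=5):
    """Outcome-oriented reweighted training (a sketch).

    Alternates between fitting f_theta under the current sample weights and
    recomputing weights that emphasize treatment regions whose outcomes the
    current f_theta predicts to be good.
    """
    w = np.ones(len(Y))                        # round 0: uniform weights
    for _ in range(rounds):
        model.fit(X, T, Y, sample_weight=w)    # minimize the weighted loss
        pred = model.predict(X, T)             # predicted outcomes f_theta(x, t)
        w = np.exp(pred - pred.max())          # up-weight promising treatments
        w /= w.mean()                          # keep the weights normalized
    return model
```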

OOSR: experimental validation
• Varying the sample size (partial results):

• Varying $\alpha$ (the strength of selection bias)

Hao Zou, Bo Li, Jiangang Han, Shuiping Chen, Xuetao Ding, Peng Cui. Counterfactual Prediction for Outcome-oriented Treatments. ICML, 2022.
Summary
Causality connects three tasks:

• Evaluation: understand decisions better
• Prediction: make decisions more personalized
• Optimization: make decisions perform better
A framework for trustworthy decision intelligence

π: Payoff  X₀: Decision Variable  Y: Outcome

② Complex payoffs

Problem setting
[Figure: product list]

• Consider online retailer platforms


• The platform recommends a list of products to consumers
• Consumers purchase products according to a choice model

• Characterize consumer choice model with multiple purchases


• Each consumer views the list sequentially
• Attention span and purchase budget
• the maximal numbers of products the consumer is willing to view / purchase
• both are random and follow geometric distributions

• Target: the total revenue achieved by the online retailer

Renzhe Xu, Xingxuan Zhang, Bo Li, Yafeng Zhang, Xiaolong Chen, Peng Cui. Product Ranking for Revenue Maximization with Multiple Purchases. NeurIPS, 2022.
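
To make the choice model concrete, here is a minimal simulation sketch. It assumes the geometric attention span and purchase budget act as per-step continuation probabilities (keep viewing with probability q after each view, keep shopping with probability s after each purchase); the names and exact parameterization are assumptions, chosen to be consistent with the score $1 - q + q(1-s)\lambda_i$ used later:

```python
import numpy as np

def simulate_consumer(ranking, lam, r, q, s, rng=None):
    """Simulate one consumer browsing a ranked product list (a sketch).

    ranking: product indices in display order
    lam[i]:  purchase probability of product i when viewed
    r[i]:    revenue of product i
    q, s:    continuation probabilities standing in for the geometric
             attention span and purchase budget
    """
    rng = rng or np.random.default_rng()
    revenue = 0.0
    for i in ranking:
        if rng.random() < lam[i]:   # purchase decision for the viewed product
            revenue += r[i]
            if rng.random() > s:    # purchase budget exhausted (prob 1 - s)
                break
        if rng.random() > q:        # attention span exhausted (prob 1 - q)
            break
    return revenue
```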

General idea
• Product ranking is the core problem for revenue-maximizing online retailers
• We study the product ranking problem for platform’s long-term revenue
• Two important aspects for the long-term revenue
• Consumer choice model
• Most existing works suppose each consumer purchases at most one product
• We propose a more realistic consumer choice model to characterize consumer behaviors under
multiple-purchase settings
• Exploration vs exploitation between products
• We adopt a UCB-like algorithm to balance the exploration and exploitation
• Achieve $\tilde{O}(\sqrt{T})$ regret on the long-term total revenue

Renzhe Xu, Xingxuan Zhang, Bo Li, Yafeng Zhang, Xiaolong Chen, Peng Cui. Product Ranking for Revenue Maximization with Multiple Purchases. NeurIPS, 2022.

Proposed method --- offline setting


• Optimal ranking policy when given a consumer’s characteristics
• Sort products in descending order according to the following score
$$\frac{\lambda_i r_i}{1 - q + q(1-s)\lambda_i}$$
• 𝑟𝑖 : the revenue of product 𝑖
• 𝜆𝑖 : the purchase probability for product 𝑖
• 𝑞, 𝑠: the geometric distribution parameters w.r.t. attention span and purchase budget

• Special case --- 𝑠 = 0


• The consumer will purchase at most one product
• The result becomes the same as the ranking policy in [1], which considers the single-
purchase setting

Renzhe Xu, Xingxuan Zhang, Bo Li, Yafeng Zhang, Xiaolong Chen, Peng Cui. Product Ranking for Revenue Maximization with Multiple Purchases. NeurIPS, 2022.
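
The offline policy is then a single sort by this score; a minimal sketch (array-based, names assumed):

```python
import numpy as np

def optimal_ranking(lam, r, q, s):
    """Rank products in descending order of lambda_i * r_i / (1 - q + q*(1-s)*lambda_i),
    the offline-optimal score stated above."""
    lam, r = np.asarray(lam, dtype=float), np.asarray(r, dtype=float)
    score = lam * r / (1.0 - q + q * (1.0 - s) * lam)
    return np.argsort(-score)  # product indices, best first
```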

Proposed method --- online setting


• Online Learning of the ranking policy
• The online retailer has no prior knowledge about consumers’ characteristics
• We consider two settings
• Non-contextual setting: all consumers share the same parameters
• Contextual setting: consumers have personalized behaviors
• We develop the Multiple-Purchase-with-Budget UCB (MPB-UCB) algorithms
• Model consumers’ behaviors and maximize revenue in the meantime
• Achieve the balance between exploration and exploitation for different products
• Achieve $\tilde{O}(\sqrt{T})$ regret on the revenue

Algorithm 1: MPB-UCB (Non-contextual) Algorithm 2: MPB-UCB (Contextual)
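
The full pseudo-code is in the paper; as a rough illustration of the UCB ingredient only (a generic bonus, not the actual MPB-UCB update rules), one could keep optimistic estimates of each purchase probability λᵢ and feed them to the ranking rule above:

```python
import numpy as np

def ucb_lambda(purchases, views, t):
    """Optimistic (upper-confidence) estimates of the purchase probabilities.

    purchases[i] / views[i] is the empirical lambda_i; the bonus shrinks as
    product i accumulates views, so under-explored products get ranked higher.
    """
    mean = purchases / np.maximum(views, 1)
    bonus = np.sqrt(2.0 * np.log(t + 1) / np.maximum(views, 1))
    return np.minimum(mean + bonus, 1.0)
```

Ranking each round with `optimal_ranking(ucb_lambda(purchases, views, t), r, q, s)` then trades off exploring uncertain products against exploiting the current revenue-optimal order.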



Experiments
• Conduct experiments on both synthetic data and semi-synthetic data
• We plot the regret curves for the different settings
• MPB-UCB (Ours) achieves the best performance
• MPB-UCB beats Single Purchase and Keep Viewing
• these baselines assume different consumer choice models
• MPB-UCB beats explore-then-exploit-based methods
• we achieve a better exploration-exploitation trade-off

Figure 1: Results on the synthetic data
Figure 2: Results on the semi-synthetic data

Renzhe Xu, Xingxuan Zhang, Bo Li, Yafeng Zhang, Xiaolong Chen, Peng Cui. Product Ranking for Revenue Maximization with Multiple Purchases. NeurIPS, 2022.
A framework for trustworthy decision intelligence

π: Payoff  X₀: Decision Variable  Y: Outcome

③ Prediction fairness

Fairness in decision-making systems


• Given individual features
• 𝑆: sensitive attributes, such as gender and race
• 𝑋: features
• 𝑌: outcomes
• Example ------ college admission case
• 𝑆: gender
• 𝑋: department choices, test scores, etc.
• 𝑌: decision to admit a student
• Target: a fair $\hat{Y}$ that fits $Y$ and satisfies some fairness constraints.

Traditional metrics for group fairness


• Group fairness
• Some specific statistics should be similar across different groups.
• Demographic parity: positive rates are equal across groups.
• e.g., acceptance rates for males and females are equal.
• Equalized odds: true positive rates and false positive rates are equal across groups.
• Drawback: cannot distinguish the detailed fair and unfair parts of the problem
(see the following examples)

Traditional fairness notions and drawbacks


• The drawback of DP
• Drawback: cannot distinguish the detailed fair and unfair parts of the problem.

• Example ------ fair college admission case

• Total acceptance rates: male > female (unfair under DP fairness)
• Acceptance rates within each department: male == female!

[Figure: causal graph and toy data of the fair college admission case; in the toy data, 120 + 20 = 140 ≠ 110 = 30 + 80 overall, while the per-department rates are the same for both genders]

Traditional fairness notions and drawbacks

• The drawback of EO
• Drawback: cannot distinguish the detailed fair and unfair parts of the problem.
• Example ------ unfair college admission case
• The historical outcome $Y$ is biased with respect to gender
• A perfect predictor $\hat{Y} = Y$ satisfies the EO constraint, but it is actually not fair!

[Figure: causal graph of the unfair college admission case]

Conditional fairness
• Fair variables
• Pre-decision covariates that are irrelevant in assessing the fairness of decision-making algorithms
• Example: department choice in the college admission case
• Conditional fairness
• Outcome ⊥ sensitive attributes | fair variables
• Explanation in the college admission case:
• within each department, the acceptance rate should be equal
• More prediction-friendly

[Figures: fair and unfair college admission cases]

Renzhe Xu, Peng Cui, et al. Algorithmic Decision Making with Conditional Fairness. KDD, 2020.
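
For the admission example, the conditional criterion can be checked directly on data; a small sketch with hypothetical column names ('department' as the fair variable F, 'gender' as S, 'admitted' as the outcome):

```python
import pandas as pd

def conditional_dp_gap(df):
    """Per-department demographic-parity gap: conditional fairness asks the
    acceptance rate to be equal across genders within every department."""
    rates = df.groupby(["department", "gender"])["admitted"].mean().unstack()
    return rates.max(axis=1) - rates.min(axis=1)  # gap per department (0 = fair)
```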

Problem formulation
• Given individual features
• 𝑆: sensitive attributes, such as gender and race
• 𝑋: features
• 𝐹: fair variables
• 𝑂: other variables
• 𝑌: outcomes
• Target: learn $\hat{Y}$ that fits $Y$ and satisfies $\hat{Y} \perp S \mid F$.

Renzhe Xu, Peng Cui, et al. Algorithmic Decision Making with Conditional Fairness. KDD, 2020.

Our DCFR algorithm


• Framework
• Framework
• $g: (S, X) \to Z$, the representation function
• $k: Z \to \hat{Y}$, the prediction function
• Total loss function
• Prediction loss $L_{\text{pred}}(\hat{Y}, Y)$
• Fairness loss $L_{\text{fair}}(Z, F, S)$
• $L = L_{\text{pred}}(\hat{Y}, Y) + \lambda \cdot L_{\text{fair}}(Z, F, S)$
• Challenge
• turning the constraint $Z \perp S \mid F$ into a loss $L_{\text{fair}}(Z, F, S)$
• Solution
• Motivated by Daudin (1980), we use a regularizer to
approximate the conditional independence constraint
Renzhe Xu, Peng Cui, et al. Algorithmic Decision Making with Conditional Fairness. KDD, 2020.

Derivable Conditional Fairness Regularizer

• Explanation
• $Q(h)$: the weighted L1 loss when using $h(Z, F)$ to predict $S$.
• Theoretical guarantee
• $L_{\text{fair}}(Z, F, S) = 0 \Leftrightarrow Z \perp S \mid F$

Renzhe Xu, Peng Cui, et al. Algorithmic Decision Making with Conditional Fairness. KDD, 2020.
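
A schematic of the DCFR objective in PyTorch. The adversary h(Z, F) and the plain (unweighted) L1 term are simplifications of the paper's derivable regularizer, the architectures are placeholders, and S is assumed to be a single binary column:

```python
import torch
import torch.nn as nn
import torch.nn.functional as nnF

class DCFRSketch(nn.Module):
    """Sketch of L = L_pred(Y_hat, Y) + lambda * L_fair(Z, F, S).

    g maps (S, X) to a representation Z; k predicts Y from Z; an adversary h
    tries to predict S from (Z, F). Train h and (g, k) with alternating updates.
    """
    def __init__(self, dim_x, dim_s, dim_f, dim_z, lam=1.0):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(dim_x + dim_s, dim_z), nn.ReLU())
        self.k = nn.Linear(dim_z, 1)           # prediction head  k: Z -> Y_hat
        self.h = nn.Linear(dim_z + dim_f, 1)   # adversary        h: (Z, F) -> S_hat
        self.lam = lam

    def losses(self, x, s, f, y):
        z = self.g(torch.cat([s, x], dim=1))
        l_pred = nnF.binary_cross_entropy_with_logits(self.k(z).squeeze(1), y)
        s_hat = torch.sigmoid(self.h(torch.cat([z, f], dim=1))).squeeze(1)
        adv_err = nnF.l1_loss(s_hat, s.squeeze(1))  # adversary's L1 error on S
        # minimizing -adv_err for (g, k) pushes Z toward independence of S given F
        return l_pred + self.lam * (-adv_err), adv_err
```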

Experiments
• Results on three real-world
datasets
• Plot the accuracy-fairness trade-off curves.
• Our method (DCFR) is shown as the solid lines.

• Analysis
• Conditional Fairness task
• Our method >> baselines
• Demographic Parity and Equalized
Odds task
• Our method ≈ baselines
Renzhe Xu, Peng Cui, et al. Algorithmic Decision Making with Conditional Fairness. KDD, 2020.
A framework for trustworthy decision intelligence

π: Payoff  X₀: Decision Variable  Y: Outcome

④ Regulatable decisions

Pricing: Consumer vs. Producer

[Figure: deadweight loss]

Personalized Pricing

Regulation Instruments over personalized pricing


• Target
• To design effective policy instruments to balance benefits between consumers and
producers
• Challenge
• Improper regulatory policies may be harmful to consumers. [Dubé and Misra, 2021]
• Example --- 6 people with willingness to pay $1, 2, 3, 5, 6, 7

Market segments Optimal pricing strategy Producer surplus Consumer surplus Total surplus
{1, 2, 3, 5, 6, 7} $5 $15 $3 $18
{1}, {2, 3, 5, 6, 7} $1, $5 $16 $3 $19
{1}, {2, 3}, {5, 6, 7} $1, $2, $5 $20 $4 $24
{1}, {2}, {3}, {5}, {6}, {7} $1, $2, $3, $5, $6, $7 $24 $0 $24

Dube, Jean-Pierre H. and Misra, Sanjog, Personalized Pricing and Consumer Welfare (June 24, 2021).
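
The table's rows can be reproduced by enumerating who buys at each segment's price; a small worked check (marginal cost assumed to be 0):

```python
def surpluses(segments, prices, cost=0):
    """Producer/consumer surplus for the willingness-to-pay example above.

    segments: list of lists of willingness-to-pay values
    prices:   one price per segment; a consumer buys iff WTP >= price
    """
    buyers = [(v, p) for seg, p in zip(segments, prices) for v in seg if v >= p]
    producer = sum(p - cost for _, p in buyers)
    consumer = sum(v - p for v, p in buyers)
    return producer, consumer, producer + consumer

# Third row of the table: segments {1}, {2, 3}, {5, 6, 7} priced at $1, $2, $5
print(surpluses([[1], [2, 3], [5, 6, 7]], [1, 2, 5]))  # -> (20, 4, 24)
```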

Problem setup
• Basic setup
• A single monopolist sells a single product to various consumers, with fixed marginal cost $c$
• Willingness to pay
• 𝑉: consumers' willingness to pay, drawn from the demand distribution 𝐹
• The monopolist can precisely estimate consumers' willingness to pay and set
personalized prices accordingly.
• A consumer with willingness to pay 𝑉 buys the product ⟺ 𝑉 exceeds the charged price

Renzhe Xu, Xingxuan Zhang, Peng Cui, Bo Li, Zheyan Shen, Jiazheng Xu. Regulatory Instruments for Fair Personalized Pricing. The WebConf, 2022.

Problem setup
• Assumption on the demand distribution
• monotone hazard rate (MHR) distributions (uniform, exponential, logistic)
• strongly regular distributions (e.g., some power-law distributions)

• Explanation
• Assumption on the ‘tail’ of the demand distribution

Renzhe Xu, Xingxuan Zhang, Peng Cui, Bo Li, Zheyan Shen, Jiazheng Xu. Regulatory Instruments for Fair Personalized Pricing. The WebConf, 2022.

Overview of results
• Two regulatory policies (with $p_u$ and $p_l$ the highest and lowest personalized prices)
• $\epsilon$-difference fair: $p_u - p_l \le \epsilon$
• $\gamma$-ratio fair: $\frac{p_u - c}{p_l - c} \le \gamma$

• Theoretical analysis of the two policies


• For common demand distributions
• stricter constraints → increasing consumer surplus, decreasing producer surplus
• stricter constraints → a drop in total surplus
• the $\epsilon$-difference constraint achieves a better consumer-producer trade-off.

Renzhe Xu, Xingxuan Zhang, Peng Cui, Bo Li, Zheyan Shen, Jiazheng Xu. Regulatory Instruments for Fair Personalized Pricing. The WebConf, 2022.
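
A brute-force sketch of producer-optimal pricing under the ε-difference rule, assuming willingness-to-pay values are known: fix the band's lower end p_l, charge each consumer clip(v, p_l, p_l + ε), and grid-search p_l:

```python
import numpy as np

def constrained_pricing_eps(V, c, eps, grid=200):
    """Producer-optimal personalized pricing under p_u - p_l <= eps (a sketch).

    For a candidate band [p_l, p_l + eps], a consumer with willingness to pay
    v is charged clip(v, p_l, p_l + eps) and buys iff v >= price.
    """
    V = np.asarray(V, dtype=float)
    best_pl, best_surplus = None, -np.inf
    for p_l in np.linspace(c, V.max(), grid):
        prices = np.clip(V, p_l, p_l + eps)   # personalized but band-limited
        sold = V >= prices                    # who actually buys
        producer = float(np.sum((prices - c)[sold]))
        if producer > best_surplus:
            best_pl, best_surplus = p_l, producer
    return best_pl, best_surplus
```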

Experiments
• Simulation
• Uniform / exponential / power-law demand distributions
• Results
• Balancing consumer surplus and producer surplus
• A drop in total surplus
• The $\epsilon$-difference constraint vs. the $\gamma$-ratio constraint

Renzhe Xu, Xingxuan Zhang, Peng Cui, Bo Li, Zheyan Shen, Jiazheng Xu. Regulatory Instruments for Fair Personalized Pricing. The WebConf, 2022.

Experiments
• Real-world datasets
• Coke and cake
• Demand distribution has monotone hazard rate (MHR)
• Elective vaccine and auto loan
• The demand distribution exhibits MHR in its long-run trend, despite short-run fluctuations

Renzhe Xu, Xingxuan Zhang, Peng Cui, Bo Li, Zheyan Shen, Jiazheng Xu. Regulatory Instruments for Fair Personalized Pricing. The WebConf, 2022.
A framework for trustworthy decision intelligence

π: Payoff  X₀: Decision Variable  Y: Outcome

④ Regulatable decisions  ① Counterfactual reasoning

③ Prediction fairness  ② Complex payoffs
Acknowledgement

Hao Zou Renzhe Xu Kun Kuang Bo Li


Tsinghua U Tsinghua U Zhejiang U Tsinghua U

Reference
➢ Renzhe Xu, Xingxuan Zhang, Bo Li, Yafeng Zhang, Xiaolong Chen, Peng Cui. Product Ranking for Revenue
Maximization with Multiple Purchases. NeurIPS, 2022.
➢ Hao Zou, Bo Li, Jiangang Han, Shuiping Chen, Xuetao Ding, Peng Cui. Counterfactual Prediction for Outcome-oriented
Treatments. ICML, 2022.
➢ Renzhe Xu, Xingxuan Zhang, Peng Cui, Bo Li, Zheyan Shen, Jiazheng Xu. Regulatory Instruments for Fair Personalized
Pricing. The WebConf, 2022.
➢ Hao Zou, Peng Cui, Bo Li, Zheyan Shen, Jianxin Ma, Hongxia Yang, Yue He. Counterfactual Prediction for Bundle
Treatments. NeurIPS, 2020.
➢ Renzhe Xu, Peng Cui, Kun Kuang, Bo Li, Linjun Zhou, Zheyan Shen and Wei Cui. Algorithmic Decision Making with
Conditional Fairness. KDD, 2020.
➢ Hao Zou, Kun Kuang, Boqi Chen, Peng Cui, Peixuan Chen. Focused Context Balancing for Robust Offline Policy
Evaluation. KDD, 2019.

Thanks!

Peng Cui
cuip@tsinghua.edu.cn
http://pengcui.thumedialab.com
