
Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement Learning Framework

journal contribution
posted on 2022-01-20, 16:40 authored by Chengchun Shi, Xiaoyu Wang, Shikai Luo, Hongtu Zhu, Jieping Ye, Rui Song

A/B testing, or online experimentation, is a standard business strategy for comparing a new product with an old one in pharmaceutical, technological, and traditional industries. Major challenges arise in online experiments on two-sided marketplace platforms (e.g., Uber), where there is only one unit that receives a sequence of treatments over time. In such experiments, the treatment at a given time impacts the current outcome as well as future outcomes. The aim of this article is to introduce a reinforcement learning framework for carrying out A/B testing in these experiments while characterizing the long-term treatment effects. Our proposed testing procedure allows for sequential monitoring and online updating. It is generally applicable to a variety of treatment designs in different industries. In addition, we systematically investigate the theoretical properties (e.g., size and power) of our testing procedure. Finally, we apply our framework to both simulated data and a real-world data example obtained from a technological company to illustrate its advantage over current practice. A Python implementation of our test is available at https://github.com/callmespring/CausalRL. Supplementary materials for this article are available online.
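The carryover effect described above can be illustrated with a toy simulation. This is a hypothetical sketch, not the paper's method: a single unit whose state persists across periods, so a treatment applied now also shifts future outcomes. All model parameters (the 0.5 direct effect, the 0.8 state persistence) are invented for illustration.

```python
import random

def simulate(policy, horizon=500, seed=0):
    """Average reward of a single unit under a treatment policy.

    Hypothetical toy model: the state carries over between periods,
    so today's treatment also affects tomorrow's outcome.
    """
    rng = random.Random(seed)
    state, total = 0.0, 0.0
    for t in range(horizon):
        action = policy(t)  # 1 = new product, 0 = old product
        # Current outcome depends on the accumulated state plus a
        # direct treatment effect and noise.
        reward = state + 0.5 * action + rng.gauss(0, 1)
        total += reward
        # Treatment today nudges tomorrow's state (long-term effect).
        state = 0.8 * state + 0.2 * action
    return total / horizon

always_new = lambda t: 1
always_old = lambda t: 0

# A naive per-period comparison would miss the state carryover;
# comparing long-run average rewards captures the dynamic effect.
effect = simulate(always_new) - simulate(always_old)
print(round(effect, 2))
```

Because both runs share the same noise sequence, the difference isolates the combined direct and carryover effects; the long-run effect (about 1.5 here) exceeds the per-period direct effect of 0.5, which is exactly the gap a framework for dynamic causal effects must account for.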

Funding

Shi’s research was partially supported by the LSE’s Research Support Fund in 2021. Song’s research was partially supported by grants from NSF-DMS-1555244 and 2113637.

Journal of the American Statistical Association