Simple computing of the customer lifetime value: A fixed local-optimal policy approach

Julio B. Clempner, Alexander S. Poznyak

Journal of Systems Science and Systems Engineering, 2014, 23(4): 439-459. DOI: 10.1007/s11518-014-5260-y

Abstract

In this paper, we present a new method for finding a fixed local-optimal policy for computing the customer lifetime value. The method is developed for a class of ergodic controllable finite Markov chains. We propose an approach based on a non-converging state-value function that fluctuates (increases and decreases) between states of the dynamic process. We prove that it is possible to represent that function in a recursive format using a one-step-ahead fixed-optimal policy. Then, we provide an analytical formula for the numerical realization of the fixed local-optimal strategy. We also present a second approach, based on linear programming, for solving the same problem; it implements the c-variable method to make the problem computationally tractable. Finally, we show that these two approaches are related: after a finite number of iterations, our proposed approach converges to the same result as the linear programming method. We also present a non-traditional approach for verifying ergodicity. The validity of the proposed methods is demonstrated theoretically and by simulated credit-card marketing experiments that compute the customer lifetime value under both an optimization and a game-theoretic approach.
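The c-variable (state-action frequency) linear program mentioned above can be illustrated with a small numerical sketch. The Python snippet below is an assumed, illustrative example rather than the authors' implementation: the customer states, the two marketing actions, the transition matrices, and the per-period margins are invented for demonstration, and scipy is used only as a generic LP solver. It maximizes the long-run average reward of an ergodic controllable Markov chain over state-action frequencies c(s, a) and recovers a stationary policy from the optimal frequencies.

```python
# Minimal sketch (assumed example, not the authors' code) of the
# "c-variable" linear-programming formulation for an ergodic
# controllable finite Markov chain used in customer-lifetime-value models.
import numpy as np
from scipy.optimize import linprog

n_states, n_actions = 3, 2
# p[a] is the transition matrix under action a; r[s, a] is the one-step margin.
p = np.array([
    [[0.80, 0.15, 0.05],      # action 0: "no offer"
     [0.30, 0.50, 0.20],
     [0.05, 0.10, 0.85]],
    [[0.90, 0.08, 0.02],      # action 1: "send promotion" (better retention,
     [0.50, 0.40, 0.10],      #  but margins below are reduced by its cost)
     [0.15, 0.15, 0.70]],
])
r = np.array([[120.0, 100.0],
              [ 40.0,  45.0],
              [ -5.0, -15.0]])

# Decision variables c(s, a) >= 0, flattened with index s * n_actions + a.
n_vars = n_states * n_actions

# Balance constraints: for every state s',
#   sum_a c(s', a) = sum_{s, a} p(s' | s, a) c(s, a),
# plus the normalization sum_{s, a} c(s, a) = 1.
A_eq = np.zeros((n_states + 1, n_vars))
for s_next in range(n_states):
    for s in range(n_states):
        for a in range(n_actions):
            A_eq[s_next, s * n_actions + a] -= p[a, s, s_next]
    for a in range(n_actions):
        A_eq[s_next, s_next * n_actions + a] += 1.0
A_eq[n_states, :] = 1.0
b_eq = np.zeros(n_states + 1)
b_eq[n_states] = 1.0

# Maximize the long-run average reward sum_{s, a} c(s, a) r(s, a)
# (linprog minimizes, so negate the objective).
res = linprog(c=-r.flatten(), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n_vars, method="highs")
c_opt = res.x.reshape(n_states, n_actions)

# Recover a stationary policy: pi(a | s) proportional to c(s, a).
policy = c_opt / c_opt.sum(axis=1, keepdims=True)
print("optimal long-run gain per period:", -res.fun)
print("stationary policy (rows = states, cols = actions):\n", policy.round(3))
```

Under these illustrative assumptions, the solver returns the long-run gain per period as the objective value and a stationary (here deterministic) policy for each customer state; the paper's result is that the fixed local-optimal policy approach reaches the same solution as such an LP after a finite number of iterations.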

Keywords

Customer lifetime value / optimization / optimal policy method / linear programming / ergodic controllable Markov chains / asynchronous games

Cite this article

Julio B. Clempner, Alexander S. Poznyak. Simple computing of the customer lifetime value: A fixed local-optimal policy approach. Journal of Systems Science and Systems Engineering, 2014, 23(4): 439-459. DOI: 10.1007/s11518-014-5260-y


