INFORMS, New Orleans 2005, © 2005 INFORMS | isbn | doi 10.1287/educ.

Chapter 1

Game Theory in Supply Chain Analysis

Gérard P. Cachon, Serguei Netessine
The Wharton School, University of Pennsylvania, Philadelphia, PA 19104 {cachon@wharton.upenn.edu, netessine@wharton.upenn.edu}

Abstract: Game theory has become an essential tool in the analysis of supply chains with multiple agents, often with conflicting objectives. This chapter surveys the applications of game theory to supply chain analysis and outlines game-theoretic concepts that have potential for future application. We discuss both non-cooperative and cooperative game theory in static and dynamic settings. Careful attention is given to techniques for demonstrating the existence and uniqueness of equilibrium in non-cooperative games. A newsvendor game is employed throughout to demonstrate the application of various tools.*

Keywords: game theory, non-cooperative, cooperative, equilibrium concepts

1. Introduction

Game theory (hereafter GT) is a powerful tool for analyzing situations in which the decisions of multiple agents affect each agent's payoff. As such, GT deals with interactive optimization problems. While many economists in the past few centuries have worked on what can be considered game-theoretic models, John von Neumann and Oskar Morgenstern are formally credited as the fathers of modern game theory. Their classic book "Theory of Games and Economic Behavior", [103], summarizes the basic concepts existing at that time. GT has since enjoyed an explosion of developments, including the concept of equilibrium by Nash [69], games with imperfect information by Kuhn [52], cooperative games by Aumann [3] and Shubik [86], and auctions by Vickrey [101], to name just a few. Citing Shubik [87], "In the 50s ... game theory was looked upon as a curiosum not to be taken seriously by any behavioral scientist. By the late 1980s, game theory in the new industrial organization has taken over ... game theory has proved its success in many disciplines." This chapter has two goals.
In our experience with GT problems we have found that many of the useful theoretical tools are spread over dozens of papers and books, buried among other tools that are not as useful in supply chain management (hereafter SCM). Hence, our first goal is to construct a brief tutorial through which SCM researchers can quickly locate GT tools and apply GT concepts. Due to the need for short explanations, we omit all proofs, choosing to focus only on the intuition behind the results we discuss. Our second goal is to provide ample but by no means exhaustive references on the specific applications of various GT techniques. These references offer an in-depth understanding of an application where necessary. Finally, we intentionally do not explore the implications of GT analysis on supply chain management, but rather, we emphasize the means of conducting the analysis to keep the exposition short.

* This chapter is re-printed with modifications from G. P. Cachon and S. Netessine, "Game Theory in Supply Chain Analysis", in Handbook of Quantitative Supply Chain Analysis: Modeling in the E-Business Era, D. Simchi-Levi, S. D. Wu and M. Shen, Eds, 2004, with kind permission of Springer Science and Business Media.

1.1. Scope and relation to the literature

There are many GT concepts, but this chapter focuses on concepts that are particularly relevant to SCM and, perhaps, have already found their applications in the literature. We dedicate a considerable amount of space to the discussion of static non-cooperative, non-zero sum games, the type of game which has received the most attention in the recent SCM literature. We also discuss cooperative games, dynamic/differential games and games with asymmetric/incomplete information. We omit discussion of important GT concepts covered in [88]: auctions in Chapters 4 and 10; principal-agent models in Chapter 3; and bargaining in Chapter 11.
The material in this chapter was collected predominantly from [38], [39], [63], [67], [97] and [102]. Some previous surveys of GT models in management science include [58] (a survey of the mathematical theory of games), [36] (a survey of differential games) and [105] (a survey of static models). A recent survey [56] focuses on the application of GT tools in five specific OR/MS models.

2. Non-cooperative static games

In non-cooperative static games the players choose strategies simultaneously and are thereafter committed to their chosen strategies, i.e., these are simultaneous move, one-shot games. Non-cooperative GT seeks a rational prediction of how the game will be played in practice.¹ The solution concept for these games was formally introduced by John Nash [69], although some instances of using similar concepts date back a couple of centuries.

2.1. Game setup

To break the ground for the section, we introduce basic GT notation. A warning to the reader: to achieve brevity, we intentionally sacrifice some precision in our presentation. See texts like [38] and [39] if more precision is required. Throughout this chapter we represent games in the normal form. A game in the normal form consists of (1) players indexed by i = 1, ..., n, (2) strategies, or more generally a set of strategies, denoted by x_i, i = 1, ..., n, available to each player and (3) payoffs π_i(x_1, x_2, ..., x_n), i = 1, ..., n, received by each player. Each strategy is defined on a set X_i, x_i ∈ X_i, so we call the Cartesian product X_1 × X_2 × ... × X_n the strategy space. Each player may have a unidimensional strategy or a multi-dimensional strategy. In most SCM applications players have unidimensional strategies, so we shall either explicitly or implicitly assume unidimensional strategies throughout this chapter. Furthermore, with the exception of one example, we will work with continuous strategies, so the strategy space is R^n. A player's strategy can be thought of as the complete instruction for which actions to take in a game.
For example, a player can give his or her strategy to a person that has absolutely no knowledge of the player's payoff or preferences and that person should be able to use the instructions contained in the strategy to choose the actions the player desires. As a result, each player's set of feasible strategies must be independent of the strategies chosen by the other players, i.e., the strategy choice by one player is not allowed to limit the feasible strategies of another player. (Otherwise the game is ill defined and any analytical results obtained from the game are questionable.) In the normal form players choose strategies simultaneously. Actions are adopted after strategies are chosen and those actions correspond to the chosen strategies. As an alternative to the one-shot selection of strategies in the normal form, a game can also be designed in the extensive form. With the extensive form, actions are chosen only as needed, so sequential choices are possible. As a result, players may learn information between the selection of actions; in particular, a player may learn which actions were previously chosen or the outcome of a random event. Figure 1 provides an example of a simple extensive form game and its equivalent normal form representation: there are two players, player I chooses from {Left, Right} and player II chooses from {Up, Down}. In the extensive form player I chooses first, then player II chooses after learning player I's choice. In the normal form they choose simultaneously.

¹ Some may argue that GT should be a tool for choosing how a manager should play a game, which may involve playing against rational or semi-rational players. In some sense there is no conflict between these descriptive and normative roles for GT, but this philosophical issue surely requires more in-depth treatment than can be afforded here.
The key distinction between normal and extensive form games is that in the normal form a player is able to commit to all future decisions. We later show that this additional commitment power may influence the set of plausible equilibria.

Figure 1. Extensive vs normal form game representation. (In the extensive form, player I moves first: Left ends the game with payoffs (3,3), while after Right player II chooses between Up, with payoffs (0,0), and Down, with payoffs (5,2). In the equivalent normal form, player I picks Left or Right and player II simultaneously picks Up or Down: Left yields (3,3) against either choice of player II, while Right yields (0,0) against Up and (5,2) against Down.)

A player can choose a particular strategy or a player can choose to randomly select from among a set of strategies. In the former case the player is said to choose a pure strategy whereas in the latter case the player chooses a mixed strategy. There are situations in economics and marketing that have used mixed strategies: see, e.g., [100] for search models and [53] for promotion models. However, mixed strategies have not been applied in SCM, in part because it is not clear how a manager would actually implement a mixed strategy. For example, it seems unreasonable to suggest that a manager should "flip a coin" among various capacity levels. Fortunately, mixed strategy equilibria do not exist in games with a unique pure strategy equilibrium. Hence, in those games attention can be restricted to pure strategies without loss of generality. Therefore, in the remainder of this chapter we consider only pure strategies. In a non-cooperative game the players are unable to make binding commitments before choosing their strategies. In a cooperative game players are able to make binding commitments. Hence, in a cooperative game players can make side-payments and form coalitions. We begin our analysis with non-cooperative static games. In all sections, except the last one, we work with games of complete information, i.e., the players' strategies and payoffs are common knowledge to all players. As a practical example throughout this chapter, we utilize the classic newsvendor problem transformed into a game.
In the absence of competition each newsvendor buys Q units of a single product at the beginning of a single selling season. Demand during the season is a random variable D with distribution function F_D and density function f_D. Each unit is purchased for c and sold on the market for r > c. The newsvendor solves the following optimization problem

max_Q π = max_Q E_D [r min(D, Q) − cQ],

with the unique solution

Q* = F_D^{-1}((r − c)/r).

Goodwill penalty costs and salvage revenues can easily be incorporated into the analysis, but for our needs we normalized them out. Now consider the GT version of the newsvendor problem with two retailers competing on product availability. [76] was the first to analyze this problem, which is also one of the first articles modeling inventory management in a GT framework. It is useful to consider only the two-player version of this game because then graphical analysis and interpretations are feasible. Denote the two players by subscripts i and j, their strategies (in this case stocking quantities) by Q_i, Q_j and their payoffs by π_i, π_j. We introduce interdependence of the players' payoffs by assuming the two newsvendors sell the same product. As a result, if retailer i is out of stock, all unsatisfied customers try to buy the product at retailer j instead. Hence, retailer i's total demand is D_i + (D_j − Q_j)^+: the sum of his own demand and the demand from customers not satisfied by retailer j. Payoffs to the two players are then

π_i(Q_i, Q_j) = E_D [r_i min(D_i + (D_j − Q_j)^+, Q_i) − c_i Q_i], i, j = 1, 2.

2.2. Best response functions and the equilibrium of the game

We are ready for the first important GT concept: best response functions.

Definition 1. Given an n-player game, player i's best response (function) to the strategies x_{−i} of the other players is the strategy x_i* that maximizes player i's payoff π_i(x_i, x_{−i}):

x_i*(x_{−i}) = arg max_{x_i} π_i(x_i, x_{−i}).

(x_i*(x_{−i}) is probably better described as a correspondence rather than a function, but we shall nevertheless call it a function with the understanding that we are interpreting the term "function" liberally.) If π_i is quasi-concave in x_i, the best response is uniquely defined by the first-order conditions of the payoff functions. In the context of our competing newsvendors example, the best response functions can be found by optimizing each player's payoff function w.r.t. the player's own decision variable Q_i while taking the competitor's strategy Q_j as given. The resulting best response functions are

Q_i*(Q_j) = F^{-1}_{D_i+(D_j−Q_j)^+}((r_i − c_i)/r_i), i, j = 1, 2.

Taken together, the two best response functions form a best response mapping R^2 → R^2, or in the more general case R^n → R^n. Clearly, the best response is the best player i can hope for given the decisions of the other players. Naturally, an outcome in which all players choose their best responses is a candidate for the non-cooperative solution. Such an outcome is called a Nash equilibrium (hereafter NE) of the game.

Definition 2. An outcome (x_1*, x_2*, ..., x_n*) is a Nash equilibrium of the game if x_i* is a best response to x_{−i}* for all i = 1, 2, ..., n.

Going back to competing newsvendors, a NE is characterized by solving a system of best responses that translates into the system of first-order conditions:

Q_1*(Q_2*) = F^{-1}_{D_1+(D_2−Q_2*)^+}((r_1 − c_1)/r_1),
Q_2*(Q_1*) = F^{-1}_{D_2+(D_1−Q_1*)^+}((r_2 − c_2)/r_2).

When analyzing games with two players it is often helpful to graph the best response functions to gain intuition. Best responses are typically defined implicitly through the first-order conditions, which makes analysis difficult. Nevertheless, we can gain intuition by finding out how each player reacts to an increase in the stocking quantity by the other player (i.e., ∂Q_i*(Q_j)/∂Q_j) through employing implicit differentiation as follows:

∂Q_i*(Q_j)/∂Q_j = − (∂²π_i/∂Q_i∂Q_j) / (∂²π_i/∂Q_i²) = − [r_i f_{D_i+(D_j−Q_j)^+ | D_j>Q_j}(Q_i) Pr(D_j > Q_j)] / [r_i f_{D_i+(D_j−Q_j)^+}(Q_i)] < 0.   (1)

The expression says that the slopes of the best response functions are negative, which implies the intuitive result that each player's best response is monotonically decreasing in the other player's strategy. Figure 2 presents this result for the symmetric newsvendor game. The equilibrium is located at the intersection of the best responses and we also see that the best responses are, indeed, decreasing.

Figure 2. Best responses in the newsvendor game. (The two decreasing curves Q_1*(Q_2) and Q_2*(Q_1) intersect at the equilibrium.)

One way to think about a NE is as a fixed point of the best response mapping R^n → R^n. Indeed, according to the definition, a NE must satisfy the system of equations ∂π_i/∂x_i = 0, all i. Recall that a fixed point x of a mapping f(x), R^n → R^n, is any x such that f(x) = x. Define f_i(x_1, ..., x_n) = ∂π_i/∂x_i + x_i. By the definition of a fixed point,

f_i(x_1*, ..., x_n*) = x_i* = ∂π_i(x_1*, ..., x_n*)/∂x_i + x_i* ⟹ ∂π_i(x_1*, ..., x_n*)/∂x_i = 0, ∀i.

Hence, x* solves the first-order conditions if and only if it is a fixed point of the mapping f(x) defined above.

The concept of NE is intuitively appealing. Indeed, it is a self-fulfilling prophecy. To explain, suppose a player were to guess the strategies of the other players. A guess would be consistent with payoff maximization and therefore would be reasonable only if it presumes that strategies are chosen to maximize every player's payoff given the chosen strategies. In other words, with any set of strategies that is not a NE there exists at least one player that is choosing a non payoff maximizing strategy. Moreover, the NE has a self-enforcing property: no player wants to unilaterally deviate from it since such behavior would lead to lower payoffs.
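The best-response formula and the equilibrium system above can be illustrated numerically. The sketch below is our own, not from the chapter: it evaluates each retailer's best response Q_i*(Q_j) as an empirical critical quantile of the effective demand D_i + (D_j − Q_j)^+ using simulated demand, then iterates the two best responses until they settle. The normal demand parameters, the prices, and the helper name `best_response` are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
r, c = 10.0, 4.0                                    # illustrative price and cost, r > c
D1 = rng.normal(100, 30, 100_000).clip(min=0.0)     # retailer 1 demand sample
D2 = rng.normal(100, 30, 100_000).clip(min=0.0)     # retailer 2 demand sample

def best_response(D_own, D_other, Q_other):
    """Q_i*(Q_j): critical quantile (r-c)/r of effective demand D_i + (D_j - Q_j)^+."""
    effective = D_own + np.maximum(D_other - Q_other, 0.0)
    return np.quantile(effective, (r - c) / r)

# Iterate the best-response mapping from an arbitrary starting point.
Q1 = Q2 = 100.0
for _ in range(50):
    Q1, Q2 = best_response(D1, D2, Q2), best_response(D2, D1, Q1)

# Monopoly (no spillover) stocking quantity for comparison.
Q_mono = np.quantile(D1, (r - c) / r)
print(Q1, Q2, Q_mono)
```

Because the spillover term (D_j − Q_j)^+ is non-negative, the equilibrium stocking quantity ends up above the monopoly quantity, and the iterates converge because the best responses are decreasing with slope magnitude below one, previewing the contraction argument discussed later in the chapter.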
Hence, NE seems to be the necessary condition for the prediction of any rational behavior by players.²

² However, an argument can also be made that to predict rational behavior by players it is sufficient that players not choose dominated strategies, where a dominated strategy is one that yields a lower payoff than some other strategy (or convex combination of other strategies) for all possible strategy choices by the other players.

While attractive, numerous criticisms of the NE concept exist. Two particularly vexing problems are the non-existence of equilibrium and the multiplicity of equilibria. Without the existence of an equilibrium, little can be said regarding the likely outcome of the game. If there are multiple equilibria, then it is not clear which one will be the outcome. Indeed, it is possible the outcome is not even an equilibrium because the players may choose strategies from different equilibria. For example, consider the normal form game in Figure 1. There are two Nash equilibria in that game, {Left,Up} and {Right,Down}: each is a best response to the other player's strategy. However, because the players choose their strategies simultaneously it is possible that player I chooses Right (the 2nd equilibrium) while player II chooses Up (the 1st equilibrium), which results in {Right,Up}, the worst outcome for both players. In some situations it is possible to rationalize away some equilibria via a refinement of the NE concept: e.g., trembling hand perfect equilibrium [83], sequential equilibrium [51] and proper equilibria [67]. These refinements eliminate equilibria that are based on non-credible threats, i.e., threats of future actions that would not actually be adopted if the sequence of events in the game led to a point in the game in which those actions could be taken. The extensive form game in Figure 1 illustrates this point.
{Left,Up} is a Nash equilibrium (just as it is in the comparable normal form game) because each player is choosing a best response to the other player's strategy: Left is optimal for player I given player II plans to play Up, and player II is indifferent between Up or Down given player I chooses Left. But if player I were to choose Right, then it is unreasonable to assume player II would actually follow through with Up: Up yields a payoff of 0 while Down yields a payoff of 2. Hence, the {Left,Up} equilibrium is supported by a non-credible threat by player II to play Up. Although these refinements are viewed as extremely important in economics (Selten was awarded the Nobel prize for his work), the need for these refinements has not yet materialized in the SCM literature. But that may change as more work is done on sequential/dynamic games. An interesting feature of the NE concept is that the system optimal solution (i.e., a solution that maximizes the sum of players' payoffs) need not be a NE. Hence, decentralized decision making generally introduces inefficiency in the supply chain. There are, however, some exceptions: see [60] and [75] for situations in which competition may result in system-optimal performance. In fact, a NE may not even be on the Pareto frontier: the set of strategies such that each player can be made better off only if some other player is made worse off. A set of strategies is Pareto optimal if it is on the Pareto frontier; otherwise a set of strategies is Pareto inferior. Hence, a NE can be Pareto inferior. The Prisoner's Dilemma game (see [39]) is the classic example of this: the only Pareto optimal pair of strategies is the one in which both players "cooperate", while the unique Nash equilibrium, in which both players "defect", is Pareto inferior. A large body of the SCM literature deals with ways to align the incentives of competitors to achieve optimality. See [17] for a comprehensive survey and taxonomy.
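The multiplicity of equilibria in the Figure 1 normal form can be verified mechanically by checking the best-response property cell by cell. The sketch below is our own illustration; `pure_nash` is a hypothetical helper, and the payoff matrices encode the Figure 1 game.

```python
# Payoff matrices for the Figure 1 normal form:
# rows = player I in {Left, Right}, cols = player II in {Up, Down}.
payoff_I  = [[3, 3],   # Left  vs (Up, Down)
             [0, 5]]   # Right vs (Up, Down)
payoff_II = [[3, 3],
             [0, 2]]

def pure_nash(A, B):
    """Return all (row, col) cells where each player best-responds to the other."""
    eq = []
    for i in range(2):
        for j in range(2):
            row_best = A[i][j] >= max(A[k][j] for k in range(2))
            col_best = B[i][j] >= max(B[i][k] for k in range(2))
            if row_best and col_best:
                eq.append((i, j))
    return eq

names = {(0, 0): "{Left,Up}", (1, 1): "{Right,Down}"}
print([names.get(cell, cell) for cell in pure_nash(payoff_I, payoff_II)])
```

The check recovers exactly the two equilibria named in the text, {Left,Up} and {Right,Down}; the Pareto comparison is also visible in the matrices, since {Right,Up} gives both players strictly less than {Left,Up}.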
See [18] for a supply chain analysis that makes extensive use of the Pareto optimal concept.

2.3. Existence of equilibrium

A NE is a solution to a system of n first-order conditions, so an equilibrium may not exist. Non-existence of an equilibrium is potentially a conceptual problem since in this case it is not clear what the outcome of the game will be. However, in many games a NE does exist and there are some reasonably simple ways to show that at least one NE exists. As already mentioned, a NE is a fixed point of the best response mapping. Hence fixed point theorems can be used to establish the existence of an equilibrium. There are three key fixed point theorems, named after their creators: Brouwer, Kakutani and Tarski; see [13] for details and references. However, direct application of fixed point theorems is somewhat inconvenient and hence generally not done. For exceptions see [55] and [61] for existence proofs that are based on Brouwer's fixed point theorem. Alternative methods, derived from these fixed point theorems, have been developed. The simplest and the most widely used technique for demonstrating the existence of NE is through verifying concavity of the players' payoffs.

Theorem 1 ([30]). Suppose that for each player the strategy space is compact³ and convex and the payoff function is continuous and quasi-concave with respect to each player's own strategy. Then there exists at least one pure strategy NE in the game.

³ A strategy space is compact if it is closed and bounded.

If the game is symmetric in the sense that the players' strategies and payoffs are identical, one would imagine that a symmetric solution should exist. This is indeed the case, as the next theorem ascertains.

Theorem 2. Suppose that a game is symmetric and for each player the strategy space is compact and convex and the payoff function is continuous and quasi-concave with respect to each player's own strategy. Then there exists at least one symmetric pure strategy NE in the game.

To gain some intuition about why non-quasi-concave payoffs may lead to non-existence of NE, suppose that in a two-player game, player 2 has a bi-modal objective function with two local maxima. Furthermore, suppose that a small change in the strategy of player 1 leads to a shift of the global maximum for player 2 from one local maximum to another. To be more specific, let us say that at x_1′ the global maximum x_2*(x_1′) is on the left (Figure 3, left) and at x_1′′ the global maximum x_2*(x_1′′) is on the right (Figure 3, right). Hence, a small change in x_1 from x_1′ to x_1′′ induces a jump in the best response of player 2, x_2*.

Figure 3. Example with a bi-modal objective function. (Left panel: π_2 given x_1′, maximized at the left mode x_2*(x_1′). Right panel: π_2 given x_1′′, maximized at the right mode x_2*(x_1′′).)

The resulting best response mapping is presented in Figure 4 and there is no NE in pure strategies in this game. In other words, the best response functions do not intersect anywhere.

Figure 4. Non-existence of NE. (Player 2's best response jumps down between x_1′ and x_1′′, so the two best response curves never cross.)

As a more specific example, see [73] for an extension of the newsvendor game to the situation in which product inventory is sold at two different prices; such a game may not have a NE since both players' objectives may be bimodal. Furthermore, [20] demonstrate that a pure strategy NE may not exist in two other important settings: two retailers competing with cost functions described by the Economic Order Quantity (EOQ) or two service providers competing with service times described by the M/M/1 queuing model. The assumption of a compact strategy space may seem restrictive. For example, in the newsvendor game the strategy space R²₊ is not bounded from above. However, we could easily bound it with some large enough finite number to represent the upper bound on the demand distribution. That bound would not impact any of the choices, and therefore the transformed game behaves just as the original game with an unbounded strategy space. (However, that bound cannot depend on any player's strategy choice.) To continue with the newsvendor game analysis, it is easy to verify that the newsvendor's objective function is concave, and hence quasi-concave, w.r.t. the stocking quantity by taking the second derivative. Hence the conditions of Theorem 1 are satisfied and a NE exists. There are virtually dozens of papers employing Theorem 1. See, for example, [57] for a proof involving quasi-concavity, and [59] and [74] for proofs involving concavity. Clearly, quasi-concavity of each player's objective function only implies uniqueness of the best response but does not imply a unique NE. One can easily envision a situation where unique best response functions cross more than once so that there are multiple equilibria (see Figure 5).

Figure 5. Non-uniqueness of the equilibrium. (Unique, continuous best response functions that cross more than once, producing multiple equilibria.)

If quasi-concavity of the players' payoffs cannot be verified, there is an alternative existence proof that relies on Tarski's [94] fixed point theorem and involves the notion of supermodular games. The theory of supermodular games is a relatively recent development introduced and advanced by [97].

Definition 3. A twice continuously differentiable payoff function π_i(x_1, ..., x_n) is supermodular (submodular) iff ∂²π_i/∂x_i∂x_j ≥ 0 (≤ 0) for all x and all j ≠ i. The game is called supermodular if the players' payoffs are supermodular.

Supermodularity essentially means complementarity between any two strategies and is not linked directly to either convexity, concavity or even continuity. (This is a significant advantage when forced to work with discrete strategies, e.g., [27].)
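Definition 3 is stated for twice-differentiable payoffs, but the complementarity it captures can also be checked on a discrete grid through the cross-difference π(x′, y′) − π(x′, y) − π(x, y′) + π(x, y) ≥ 0 for all x′ > x, y′ > y, which for smooth payoffs is implied by a non-negative cross-partial. The sketch below is our own helper for two-dimensional payoffs, with arbitrary test functions.

```python
import itertools

def is_supermodular(pi, xs, ys):
    """Check the cross-difference pi(x2,y2) - pi(x2,y) - pi(x,y2) + pi(x,y) >= 0
    for every pair of grid points with x2 > x and y2 > y (grids sorted ascending)."""
    for (x, x2), (y, y2) in itertools.product(
            itertools.combinations(xs, 2), itertools.combinations(ys, 2)):
        if pi(x2, y2) - pi(x2, y) - pi(x, y2) + pi(x, y) < 0:
            return False
    return True

grid = [i / 10 for i in range(11)]
print(is_supermodular(lambda x, y: x * y, grid, grid))    # complements: supermodular
print(is_supermodular(lambda x, y: -x * y, grid, grid))   # substitutes: not supermodular
```

The product payoff x·y has cross-partial +1 everywhere and passes the check; flipping its sign makes it submodular, which is exactly the situation of the competing newsvendors discussed next.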
However, similar to concavity/convexity, supermodularity/submodularity is preserved under maximization, limits and addition, and hence under expectation/integration signs, an important feature in stochastic SCM models. While in most situations the sign of the second-order cross-partial derivative can be used to verify supermodularity (using Definition 3), sometimes it is necessary to utilize supermodularity-preserving transformations to show that payoffs are supermodular. [97] provides a variety of ways to verify that a function is supermodular and some of these results are used in [27], [22], [72] and [70]. The following theorem follows directly from Tarski's fixed point result and provides another tool to show existence of NE in non-cooperative games:

Theorem 3. In a supermodular game there exists at least one NE.

Coming back to the competitive newsvendors example, recall that the second-order cross-partial derivative was found to be

∂²π_i/∂Q_i∂Q_j = −r_i f_{D_i+(D_j−Q_j)^+ | D_j>Q_j}(Q_i) Pr(D_j > Q_j) < 0,

so that the newsvendor game is submodular and hence existence of equilibrium cannot be assured. However, a standard trick is to re-define the ordering of the players' strategies. Let y = −Q_j, so that

∂²π_i/∂Q_i∂y = r_i f_{D_i+(D_j+y)^+ | D_j>−y}(Q_i) Pr(D_j > −y) > 0,

and the game becomes supermodular in (Q_i, y), so existence of NE is assured. Notice that we do not change either the payoffs or the structure of the game; we only alter the ordering of one player's strategy space. Obviously, this trick only works in two-player games; see also [57] for the analysis of a more general version of the newsvendor game using a similar transformation. Hence, we can state that in general a NE exists in two-player games with decreasing best responses (submodular games). This argument can be generalized slightly in two ways that we mention briefly; see [102] for details. One way is to consider an n-player game where best responses are functions of the aggregate actions of all other players, that is, x_i* = x_i*(Σ_{j≠i} x_j). If best responses in such a game are decreasing, then a NE exists. Another generalization is to consider the same game with x_i* = x_i*(Σ_{j≠i} x_j) but require symmetry. In such a game, existence can be shown even with non-monotone best responses, provided that there are only jumps up; on the intervals between jumps best responses can be increasing or decreasing.

We now step back to discuss the intuition behind the supermodularity results. Roughly speaking, Tarski's fixed point theorem only requires best response mappings to be non-decreasing for the existence of equilibrium; it does not require quasi-concavity of the players' payoffs and allows for jumps in best responses. While it may be hard to believe that non-decreasing best responses is the only requirement for the existence of a NE, consider once again the simplest form of a single-dimensional equilibrium as a solution to the fixed point mapping x = f(x) on a compact set. It is easy to verify after a few attempts that if f(x) is non-decreasing, but possibly with jumps up, then it is not possible to construct a situation without an equilibrium. However, when f(x) jumps down, non-existence is possible (see Figure 6). Hence, increasing best response functions are the only major requirement for an equilibrium to exist.
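The "only jumps up are harmless" intuition can be probed with a quick numeric sketch of our own: scan a fine grid on [0, 1] for a fixed point of f, accepting either an (approximate) solution of f(x) = x or a sign change of g(x) = f(x) − x at a point where f is locally non-decreasing. The two step functions below are arbitrary examples, not from the chapter.

```python
def has_crossing(f, n=100_000):
    """Look for a fixed point of f on [0, 1]: either g(x) = f(x) - x is
    (numerically) zero at a grid point, or g changes sign from + to - across
    adjacent grid points where f is non-decreasing (a genuine crossing)."""
    xs = [i / n for i in range(n + 1)]
    g = [f(x) - x for x in xs]
    if any(abs(v) < 1e-9 for v in g):
        return True
    return any(g[i] > 0 > g[i + 1] and f(xs[i + 1]) >= f(xs[i])
               for i in range(n))

jump_up   = lambda x: 0.25 if x < 0.5 else 0.75   # non-decreasing, jumps up at 0.5
jump_down = lambda x: 0.75 if x < 0.5 else 0.25   # jumps down at 0.5

print(has_crossing(jump_up), has_crossing(jump_down))
```

The non-decreasing map with an upward jump has fixed points (at 0.25 and 0.75), while the map with the downward jump crosses the 45-degree line nowhere: g changes sign only across the jump itself, where f decreases, which is exactly the non-existence picture in Figure 6.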
Players' objectives do not have to be quasi-concave or even continuous. However, to describe an existence theorem with non-continuous payoffs requires the introduction of terms and definitions from lattice theory. As a result, we restricted ourselves to the assumption of continuous payoff functions, and in particular, to twice-differentiable payoff functions.

Figure 6. Increasing (left) and decreasing (right) mappings. (A non-decreasing f(x), even with jumps up, must cross the 45-degree line x = f(x); a mapping that jumps down can pass the line without crossing it.)

Although it is now clear why increasing best responses ensure existence of an equilibrium, it is not immediately obvious why Definition 3 provides a sufficient condition, given that it only concerns the sign of the second-order cross-partial derivative. To see this connection, consider separately the continuous and the discontinuous parts of the best response x_i*(x_j). When the best response is continuous, we can apply the Implicit Function Theorem to find its slope as follows:

∂x_i*/∂x_j = − (∂²π_i/∂x_i∂x_j) / (∂²π_i/∂x_i²).

Clearly, if x_i* is the best response, it must be the case that ∂²π_i/∂x_i² < 0, or else it would not be the best response. Hence, for the slope to be positive it is sufficient to have ∂²π_i/∂x_i∂x_j > 0, which is what Definition 3 provides. This reasoning does not, however, work at discontinuities in best responses since the Implicit Function Theorem cannot be applied. To show that only jumps up are possible if ∂²π_i/∂x_i∂x_j > 0 holds, consider a situation in which there is a jump down in the best response. As one can recall, jumps in best responses happen when the objective function is bi-modal (or, more generally, multi-modal). For example, consider a specific point x_j^# and let x_i^1(x_j^#) < x_i^2(x_j^#) be two distinct points at which the first-order conditions hold (i.e., the objective function π_i is bi-modal). Further, suppose

π_i(x_i^1(x_j^#), x_j^#) < π_i(x_i^2(x_j^#), x_j^#), but π_i(x_i^1(x_j^# + ε), x_j^# + ε) > π_i(x_i^2(x_j^# + ε), x_j^# + ε).

That is, initially x_i^2(x_j^#) is the global maximum, but as we increase x_j^# infinitesimally, there is a jump down and the smaller x_i^1(x_j^# + ε) becomes the global maximum. For this to be the case, it must be that

∂π_i(x_i^1(x_j^#), x_j^#)/∂x_j > ∂π_i(x_i^2(x_j^#), x_j^#)/∂x_j,

or, in words, the objective function rises faster at x_i^1(x_j^#) than at x_i^2(x_j^#). This, however, can only happen if ∂²π_i/∂x_i∂x_j < 0 at least somewhere on the interval [x_i^1(x_j^#), x_i^2(x_j^#)], which is a contradiction. Hence, if ∂²π_i/∂x_i∂x_j > 0 holds, then only jumps up in the best response are possible.

2.4. Uniqueness of equilibrium

From the perspective of generating qualitative insights, it is quite useful to have a game with a unique NE. If there is only one equilibrium, then one can characterize equilibrium actions without much ambiguity. Unfortunately, demonstrating uniqueness is generally much harder than demonstrating existence of equilibrium. This section provides several methods for proving uniqueness. No single method dominates; all may have to be tried to find the one that works. Furthermore, one should be careful to recognize that these methods assume existence, i.e., existence of NE must be shown separately. Finally, it is worth pointing out that uniqueness results are only available for games with continuous best response functions and hence there are no general methods to prove uniqueness of NE in supermodular games.

2.4.1. Method 1. Algebraic argument. In some rather fortunate situations one can ascertain that the solution is unique by simply looking at the optimality conditions.
For example, in a two-player game the optimality condition of one of the players may have a unique closed-form solution that does not depend on the other player’s strategy and, given the solution for one player, the optimality condition for the second player can be solved uniquely. See [44] and [71] for examples. In other cases one can assure uniqueness by analyzing geometrical properties of the best response functions and arguing that they intersect only once. Of course, this is only feasible in two-player games. See [76] for a proof of uniqueness in the two-player newsvendor game and [62] for a supply chain game with competition in reverse logistics. However, in most situations these geometrical properties are also implied by the more formal arguments stated below. Finally, it may be possible to use a contradiction argument: assume that there is more than one equilibrium and prove that such an assumption leads to a contradiction, as in [55].

2.4.2. Method 2. Contraction mapping argument Although the most restrictive among all methods, the contraction mapping argument is the most widely known and the most frequently used in the literature because it is the easiest to verify. The argument is based on showing that the best response mapping is a contraction, which then implies that the mapping has a unique fixed point. To illustrate the concept of a contraction mapping, suppose we would like to find a solution to the following fixed point equation:

x = f(x), x ∈ R¹.

To do so, a sequence of values is generated by an iterative algorithm, {x^(1), x^(2), x^(3), ...}, where x^(1) is picked arbitrarily and x^(t) = f(x^(t−1)). The hope is that this sequence converges to a unique fixed point. It does so if, roughly speaking, each step in the sequence moves closer to the fixed point. One can verify that if |f′(x)| < 1 in some vicinity of x*, then such an iterative algorithm converges to a unique x* = f(x*). Otherwise, the algorithm diverges.
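This convergence behavior is easy to sketch numerically. The two mappings below are hypothetical (they are not from the newsvendor game): the first has |f′(x)| = 0.5 < 1 everywhere and so converges to its unique fixed point from any starting value, while the second has slope 2 and so its iterates run away from the fixed point:

```python
def iterate(f, x0, n=60):
    """Generate x(t) = f(x(t-1)) starting from an arbitrary x(1) = x0."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

# Contraction: f'(x) = 0.5 < 1 everywhere; unique fixed point x* = 2.
f = lambda x: 0.5 * x + 1.0
print(iterate(f, x0=37.0))  # converges to 2.0 from any starting value

# Not a contraction: h'(x) = 2 > 1; the fixed point x* = 1 repels iterates.
h = lambda x: 2.0 * x - 1.0
print(iterate(h, x0=1.001, n=20))  # moves far away from 1
```

Starting the second iteration even slightly away from its fixed point produces rapid divergence, which is exactly why the contraction condition must be checked over the whole relevant region rather than at a single point.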
Graphically, the equilibrium point is located at the intersection of two functions: x and f(x). The iterative algorithm is presented in Figure 7; the scheme on the left is a contraction mapping, approaching the equilibrium after every iteration.

Definition 4. A mapping f(x), R^n → R^n, is a contraction iff ||f(x1) − f(x2)|| ≤ α ||x1 − x2||, ∀x1, x2, with α < 1.

In words, the application of a contraction mapping to any two points strictly reduces (i.e., α = 1 does not work) the distance between these points. The norm in the definition can be any norm, i.e., the mapping can be a contraction in one norm and not a contraction in another norm.

Figure 7. Converging (left) and diverging (right) iterations.

Theorem 4. If the best response mapping is a contraction on the entire strategy space, there is a unique NE in the game.

One can think of a contraction mapping in terms of iterative play: player 1 selects some strategy, then player 2 selects a strategy based on the decision by player 1, etc. If the best response mapping is a contraction, the NE obtained as a result of such iterative play is stable, i.e., no matter where the game starts, the final outcome is the same (but the opposite is not necessarily true). See also [63] for an extensive treatment of stable equilibria. A major restriction in Theorem 4 is that the contraction mapping condition must be satisfied everywhere. This assumption is quite restrictive because the best response mapping may be a contraction locally, say in some (not necessarily small) ε-neighborhood of the equilibrium, but not outside of it. Hence, if iterative play starts in this ε-neighborhood, then it converges to the equilibrium, but starting outside that neighborhood may not lead to the equilibrium (even if the equilibrium is unique).
Even though one may wish to argue that it is reasonable for the players to start iterative play some place close to the equilibrium, formalization of such an argument is rather difficult. Hence, we must impose the condition that the entire strategy space be considered. See [91] for an interesting discussion of stability issues in a queuing system.

While Theorem 4 is a starting point towards a method for demonstrating uniqueness, it does not actually explain how to validate that a best reply mapping is a contraction. Suppose we have a game with n players, each endowed with the strategy xi, and we have obtained the best response functions for all players, xi = fi(x−i). We can then define the following matrix of derivatives of the best response functions:

A = [ 0           ∂f1/∂x2    ...  ∂f1/∂xn ]
    [ ∂f2/∂x1    0           ...  ∂f2/∂xn ]
    [ ...         ...         ...  ...      ]
    [ ∂fn/∂x1    ∂fn/∂x2    ...  0        ].

Further, denote by ρ(A) the spectral radius of matrix A and recall that the spectral radius of a matrix is equal to its largest absolute eigenvalue, ρ(A) = {max |λ| : Ax = λx, x ≠ 0}; see [47].

Theorem 5. The mapping f(x), R^n → R^n, is a contraction if and only if ρ(A) < 1 everywhere.

Theorem 5 is simply an extension of the iterative convergence argument we used above into multiple dimensions, and the spectral radius rule is an extension of the requirement |f′(x)| < 1. Still, Theorem 5 is not as useful as we would like it to be: calculating eigenvalues of a matrix is not trivial. Instead, it is often helpful to use the fact that the largest eigenvalue, and hence the spectral radius, is bounded above by any matrix norm; see [47]. So instead of working with the spectral radius itself, it is sufficient to show that ||A|| < 1 for any one matrix norm. The most convenient matrix norms are the maximum column-sum and the maximum row-sum norms (see [47] for other matrix norms).
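The bound ρ(A) ≤ ||A|| is easy to check on a small numeric example. For a hypothetical two-player slope matrix [[0, a12], [a21, 0]] (zero diagonal, as in the matrix A above), the eigenvalues solve λ² = a12·a21, so the spectral radius can be computed directly and compared with the maximum row-sum norm:

```python
import math

# Hypothetical best-response slopes for a two-player game
# (zero diagonal, as in the matrix A defined above).
a12, a21 = 0.6, 0.4

# Eigenvalues of [[0, a12], [a21, 0]] solve lambda^2 = a12 * a21.
spectral_radius = math.sqrt(a12 * a21)

# Maximum row-sum norm: largest sum of absolute entries within a row.
row_sum_norm = max(abs(a12), abs(a21))

print(spectral_radius, row_sum_norm)
# rho(A) = sqrt(0.24) ~ 0.49 <= ||A|| = 0.6 < 1: the mapping is a contraction.
```

Verifying that the norm is below one is thus a (possibly conservative) shortcut: the norm may exceed one while the spectral radius is still below one, but never the reverse.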
To use either of these norms to verify the contraction mapping condition, it is sufficient to verify that no row sum or no column sum of matrix A exceeds one:

Σ_{i=1}^n |∂fk/∂xi| < 1, ∀k,  or  Σ_{i=1}^n |∂fi/∂xk| < 1, ∀k.

[70] used the contraction mapping argument in this most general form in the multiple-player variant of the newsvendor game described above. A challenge associated with the contraction mapping argument is finding the best response functions, because in most SC models best responses cannot be found explicitly. Fortunately, Theorem 5 only requires the derivatives of the best response functions, which can be found using the Implicit Function Theorem (from now on, IFT; see [13]). Using the IFT, Theorem 5 can be re-stated as

Σ_{i=1, i≠k}^n |∂²πk/∂xk∂xi| < |∂²πk/∂xk²|, ∀k,   (2)

i.e., the slopes of the best response functions are less than one. This condition is especially intuitive if we use the graphical illustration (Figure 2). Given that the slope of each best response function is less than one everywhere, if they cross at one point then they cannot cross at an additional point. A contraction mapping argument in this form was used by [98] and by [81]. Returning to the newsvendor game example, we have found that the slopes of the best response functions are

|∂Qi*(Qj)/∂Qj| = f_{Di+(Dj−Qj)^+ | Dj>Qj}(Qi) Pr(Dj > Qj) / f_{Di+(Dj−Qj)^+}(Qi) < 1.

Hence, the best response mapping in the newsvendor game is a contraction and the game has a unique and stable NE.

Condition (2) is also known as “diagonal dominance” because the diagonal of the matrix of second derivatives, also called the Hessian, dominates the off-diagonal entries:

H = [ ∂²π1/∂x1²      ∂²π1/∂x1∂x2   ...  ∂²π1/∂x1∂xn ]
    [ ∂²π2/∂x2∂x1   ∂²π2/∂x2²      ...  ∂²π2/∂x2∂xn ]
    [ ...             ...             ...  ...           ]
    [ ∂²πn/∂xn∂x1   ∂²πn/∂xn∂x2   ...  ∂²πn/∂xn²    ].   (3)

Contraction mapping conditions in the diagonal dominance form have been used extensively by [8, 11, 9, 7]. As has been noted by [10], many standard economic demand models satisfy this condition. In games with only two players the condition in Theorem 5 simplifies to

|∂f1/∂x2| < 1 and |∂f2/∂x1| < 1.   (4)

2.4.3. Method 3. Univalent mapping argument Another method for demonstrating uniqueness of equilibrium is based on verifying that the best response mapping is one-to-one: that is, if f(x) is an R^n → R^n mapping, then y = f(x) implies that for all x′ ≠ x, y ≠ f(x′). Clearly, if the best response mapping is one-to-one, then there can be at most one fixed point of such a mapping. To make an analogy, recall that, if the equilibrium is interior⁴, the NE is a solution to the system of first-order conditions ∂πi/∂xi = 0, ∀i, which defines the best response mapping. If this mapping is single-dimensional, R¹ → R¹, then it is quite clear that a condition sufficient for the mapping to be one-to-one is quasi-concavity of πi. Similarly, for the R^n → R^n mapping to be one-to-one we require quasi-concavity of the mapping, which translates into quasi-definiteness of the Hessian:

Theorem 6. Suppose the strategy space of the game is convex and all equilibria are interior. Then if the Hessian H is negative quasi-definite (i.e., if the matrix H + H^T is negative definite) on the players’ strategy set, there is a unique NE.

Proof of this result can be found in [41], and some further developments that deal with boundary equilibria are found in [80]. Notice that the univalent mapping argument is somewhat weaker than the contraction mapping argument. Indeed, the re-statement (2) of the contraction mapping theorem directly implies univalence, since the dominant diagonal assures us that H is negative definite.
Hence, it is negative quasi-definite. It immediately follows that the newsvendor game satisfies the univalence theorem. However, if some other matrix norm is used, the relationship between the two theorems is not that specific. In the case of just two players the univalence theorem can be written as, according to [63],

|∂²π2/∂x2∂x1 + ∂²π1/∂x1∂x2| ≤ 2 √( (∂²π1/∂x1²)(∂²π2/∂x2²) ), ∀x1, x2.

2.4.4. Method 4. Index theory approach This method is based on the Poincare-Hopf index theorem found in differential topology; see, e.g., [42]. Similarly to the univalent mapping approach, it requires a certain sign of the Hessian, but this requirement need hold only at the equilibrium point.

Theorem 7. Suppose the strategy space of the game is convex and all payoff functions are quasi-concave. Then if (−1)^n |H| is positive whenever ∂πi/∂xi = 0, all i, there is a unique NE.

Observe that the condition (−1)^n |H| > 0 is trivially satisfied if H is negative definite, which is implied by the condition (2) of contraction mapping, i.e., this method is also somewhat weaker than the contraction mapping argument. Moreover, the index theory condition need only hold at the equilibrium. This makes it the most general, but also the hardest to apply. To gain some intuition about why the index theory method works, consider the two-player game. The condition of Theorem 7 simplifies to

| ∂²π1/∂x1²     ∂²π1/∂x1∂x2 |
| ∂²π2/∂x1∂x2  ∂²π2/∂x2²    | > 0  ∀x1, x2 : ∂π1/∂x1 = 0, ∂π2/∂x2 = 0,

which can be interpreted as meaning that the product of the slopes of the best response functions should not exceed one at the equilibrium:

(∂f1/∂x2)(∂f2/∂x1) < 1 at (x1*, x2*).   (5)

⁴ An interior equilibrium is one in which first-order conditions hold for each player. The alternative is a boundary equilibrium in which at least one of the players selects a strategy on the boundary of his strategy space.
As with the contraction mapping approach, with two players the Theorem becomes easy to visualize. Suppose we have found best response functions x1* = f1(x2) and x2* = f2(x1), as in Figure 2. Find the inverse function x2 = f1⁻¹(x1) and construct an auxiliary function g(x1) = f1⁻¹(x1) − f2(x1) that measures the distance between the two best responses. It remains to show that g(x1) crosses zero only once, since this would directly imply a single crossing point of f1(x2) and f2(x1). Suppose we could show that every time g(x1) crosses zero, it does so from below. If that is the case, we are assured there is only a single crossing: it is impossible for a continuous function to cross zero more than once from below, because it would also have to cross zero from above somewhere. The function g(x1) crosses zero only from below if the slope of g(x1) at the crossing point is positive, as follows:

∂g(x1)/∂x1 = ∂f1⁻¹(x1)/∂x1 − ∂f2(x1)/∂x1 = 1/(∂f1(x2)/∂x2) − ∂f2(x1)/∂x1 > 0,

which holds if (5) holds. Hence, in a two-player game condition (5) is sufficient for the uniqueness of the NE. Note that condition (5) trivially holds in the newsvendor game, since each slope is less than one and hence the product of the slopes is less than one everywhere. Index theory has been used by [72] to show uniqueness of the NE in a retailer-wholesaler game when both parties stock inventory and sell directly to consumers, and by [21] and [24].

2.5. Multiple equilibria

Many games are just not blessed with a unique equilibrium. The next best situation is to have a few equilibria. The worst situation is either to have an infinite number of equilibria or no equilibrium at all. The obvious problem with multiple equilibria is that the players may not know which equilibrium will prevail.
Hence, it is entirely possible that a non-equilibrium outcome results because one player plays one equilibrium strategy while a second player chooses a strategy associated with another equilibrium. However, if a game is repeated, then it is possible that the players eventually find themselves in one particular equilibrium. Furthermore, that equilibrium may not be the most desirable one.

If one does not want to acknowledge the possibility of multiple outcomes due to multiple equilibria, one could argue that one equilibrium is more reasonable than the others. For example, there may exist only one symmetric equilibrium, and one may be willing to argue that a symmetric equilibrium is more focal than an asymmetric equilibrium (see [59] for an example). In addition, it is generally not too difficult to demonstrate the uniqueness of a symmetric equilibrium. If the players have unidimensional strategies, then the system of n first-order conditions reduces to a single equation and one need only show that there is a unique solution to that equation to prove that the symmetric equilibrium is unique. If the players have m-dimensional strategies, m > 1, then finding a symmetric equilibrium reduces to determining whether a system of m equations has a unique solution (easier than the original system, but still challenging).

An alternative method to rule out some equilibria is to focus only on the Pareto optimal equilibrium, of which there may be only one. For example, in supermodular games the equilibria are Pareto rankable under the additional condition that each player’s objective function is increasing in the other players’ strategies, i.e., there is a most preferred equilibrium and a least preferred equilibrium for every player (see [104] for an example). However, experimental evidence suggests that players do not necessarily gravitate to the Pareto optimal equilibrium, as is demonstrated by [19]. Hence, caution is warranted with this argument.
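The multiplicity issue can be made concrete with a small bimatrix game. The payoffs below are hypothetical (a stag-hunt-style coordination game, not from the newsvendor model); a brute-force check of unilateral deviations finds both pure-strategy equilibria, only one of which is Pareto optimal:

```python
# Hypothetical 2x2 coordination game with two pure NE.
# payoffs[i][j] = (payoff to player 1, payoff to player 2)
payoffs = [[(4, 4), (0, 3)],
           [(3, 0), (2, 2)]]

def pure_nash(payoffs):
    """Enumerate pure-strategy NE by checking unilateral deviations."""
    eq = []
    for i in range(2):
        for j in range(2):
            u1, u2 = payoffs[i][j]
            if (u1 >= max(payoffs[k][j][0] for k in range(2)) and
                    u2 >= max(payoffs[i][k][1] for k in range(2))):
                eq.append((i, j))
    return eq

equilibria = pure_nash(payoffs)
print(equilibria)  # both (0, 0) and (1, 1) are equilibria
# (0, 0) Pareto dominates (1, 1), yet nothing in the equilibrium concept
# itself forces the players to coordinate on it.
```

This is exactly the situation discussed above: if one player aims at (0, 0) while the other plays his part of (1, 1), the realized outcome is not an equilibrium at all.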
2.6. Comparative statics in games

In GT models, just as in non-competitive SCM models, many of the managerial insights and results are obtained through comparative statics, such as monotonicity of the optimal decisions w.r.t. some parameter of the game.

2.6.1. The Implicit Functions Theorem approach This approach works for both GT and single decision-maker applications, as will become evident from the statement of the next theorem.

Theorem 8. Consider the system of equations

∂πi(x1, ..., xn, a)/∂xi = 0, i = 1, ..., n,

defining x1*, ..., xn* as implicit functions of parameter a. If all derivatives are continuous functions and the Hessian (3) evaluated at x1*, ..., xn* is non-zero, then the function x*(a), R¹ → R^n, is continuous on a ball around x* and its derivatives are found as follows:

( ∂x1*/∂a, ∂x2*/∂a, ..., ∂xn*/∂a )^T = − H⁻¹ ( ∂²π1/∂x1∂a, ∂²π2/∂x2∂a, ..., ∂²πn/∂xn∂a )^T,   (6)

where H is the Hessian (3). Since the IFT is covered in detail in many non-linear programming books and its application to GT problems is essentially the same, we do not delve further into this matter. In many practical problems, if |H| ≠ 0, it is instrumental to multiply both sides of expression (6) by H; that is justified because the Hessian is assumed to have a non-zero determinant, and it avoids the cumbersome task of inverting the matrix. The resulting expression is a system of n linear equations, which has a closed-form solution. See [72] for such an application of the IFT in a two-player game and [8] in n-player games. The solution to (6) in the case of two players is

∂x1*/∂a = − ( (∂²π1/∂x1∂a)(∂²π2/∂x2²) − (∂²π1/∂x1∂x2)(∂²π2/∂x2∂a) ) / |H|,   (7)

∂x2*/∂a = − ( (∂²π1/∂x1²)(∂²π2/∂x2∂a) − (∂²π2/∂x2∂x1)(∂²π1/∂x1∂a) ) / |H|.   (8)

Using our newsvendor game as an example, suppose we would like to analyze the sensitivity of the equilibrium solution to changes in r1, so let a = r1. Notice that ∂²π2/∂Q2∂r1 = 0 and also that the determinant of the Hessian is positive. The numerator of (7) then reduces to −(∂²π1/∂Q1∂r1)(∂²π2/∂Q2²), which is positive, so that ∂Q1*/∂r1 > 0. Further, the numerator of (8) reduces to (∂²π2/∂Q2∂Q1)(∂²π1/∂Q1∂r1), which is negative, so that ∂Q2*/∂r1 < 0. Both results are intuitive.

Solving a system of n equations analytically is generally cumbersome, and one may have to use Cramer’s rule or analyze an inverse of H instead; see [8] for an example. The only way to avoid this complication is to employ supermodular games, as described below. However, the IFT method has an advantage that is not enjoyed by supermodular games: it can handle constraints of any form. That is, any constraint on the players’ strategy spaces of the form gi(xi) ≤ 0 or gi(xi) = 0 can be added to the objective function by forming a Lagrangian:

Li(x1, ..., xn, λi) = πi(x1, ..., xn) − λi gi(xi).

All analysis can then be carried through the same way as before, with the only addition being that the Lagrange multiplier λi becomes a decision variable. For example, let’s assume in the newsvendor game that the two competing firms stock inventory at a warehouse. Further, the amount of space available to each company is a function of the total warehouse capacity C, e.g., gi(Qi) ≤ C. We can construct a new game where each retailer solves the following problem:

max_{Qi ∈ {gi(Qi) ≤ C}} E_D[ ri min( Di + (Dj − Qj)^+, Qi ) ] − ci Qi, i = 1, 2.

Introduce two Lagrange multipliers, λi, i = 1, 2, and re-write the objective functions as

max_{Qi, λi} L(Qi, λi, Qj) = E_D[ ri min( Di + (Dj − Qj)^+, Qi ) ] − ci Qi − λi (gi(Qi) − C).

The resulting four optimality conditions can be analyzed using the IFT the same way as has been demonstrated previously.

2.6.2. Supermodular games approach In some situations, supermodular games provide a more convenient tool for comparative statics.

Theorem 9. Consider a collection of supermodular games on R^n parameterized by a parameter a. Further, suppose ∂²πi/∂xi∂a ≥ 0 for all i. Then the largest and the smallest equilibria are increasing in a.

Roughly speaking, a sufficient condition for monotone comparative statics is supermodularity of players’ payoffs in strategies and the parameter. Note that, if there are multiple equilibria, we cannot claim that every equilibrium is monotone in a; rather, the set of all equilibria is monotone in the sense of Theorem 9. A convenient way to think about the last Theorem is through the augmented Hessian:

[ ∂²π1/∂x1²      ∂²π1/∂x1∂x2   ...  ∂²π1/∂x1∂xn   ∂²π1/∂x1∂a ]
[ ∂²π2/∂x2∂x1   ∂²π2/∂x2²      ...  ∂²π2/∂x2∂xn   ∂²π2/∂x2∂a ]
[ ...             ...             ...  ...             ...          ]
[ ∂²πn/∂xn∂x1   ∂²πn/∂xn∂x2   ...  ∂²πn/∂xn²     ∂²πn/∂xn∂a ]
[ ∂²π1/∂x1∂a    ∂²π2/∂x2∂a    ...  ∂²πn/∂xn∂a   ·            ].

Roughly, if all off-diagonal elements of this matrix are positive, then the monotonicity result holds (the signs of the diagonal elements do not matter, and hence concavity is not required). To apply this result to competing newsvendors, we will analyze the sensitivity of the equilibrium inventories (Qi*, Qj*) to ri. First, transform the game to strategies (Qi, y), so that the game is supermodular, and find the cross-partial derivatives

∂²πi/∂Qi∂ri = Pr( Di + (Dj − Qj)^+ > Qi ) ≥ 0,

∂²πj/∂y∂ri = 0 ≥ 0,
Such simplicity has attracted much attention in SCM and has resulted in extensive applications of supermodular games. Examples include [27], [28], [72] and [72], to name just a few. There is, however, an important limitation to the use of Theorem 9: it cannot handle many constraints as IFT can. Namely, the decision space must be a lattice to apply supermodularity i.e., it must include 18Cachon and Netessine: Game Theory c INFORMSCNew Orleans 2005, ° 2005 INFORMSits coordinate-wise maximum and minimum. Hence, a constraint of the form xi ≤ b can be handled but a constraint xi + xj ≤ b cannot since points (xi , xj ) = (b, 0) and (xi , xj ) = (0, b) are within the constraint but the coordinate-wise maximum of these two points (b, b) is not. Notice that to avoid dealing with this issue in detail we stated in the theorems that the strategy space should all be Rn . Since in many SCM applications there are constraints on the players’ strategies, supermodularity must be applied with care.3. Dynamic gamesWhile many SCM models are static, including all newsvendor-based models, a signi?cant portion of the SCM literature is devoted to dynamic models in which decisions are made over time. In most cases the solution concept for these games is similar to the backwards induction used when solving dynamic programming problems. There are, however, important di?erences as will be clear from the discussion of repeated games. As with dynamic programming problems, we continue to focus on the games of complete information, i.e., at each move in the game all players know the full history of play.3.1. Sequential moves: Stackelberg equilibrium conceptThe simplest possible dynamic game was introduced by [90]. In a Stackelberg duopoly model, player 1, the Stackelberg leader, chooses a strategy ?rst and then player 2, the Stackelberg follower, observes this decision and makes his own strategy choice. 
Since in many SCM models the upstream firm, e.g., the wholesaler, possesses certain power over the typically smaller downstream firm, e.g., the retailer, the Stackelberg equilibrium concept has found many applications in the SCM literature. We do not address the issues of who should be the leader and who should be the follower; see Chapter 11 in [88]. To find an equilibrium of a Stackelberg game, often called the Stackelberg equilibrium, we need to solve a dynamic multi-period problem via backwards induction. We will focus on a two-period problem for analytical convenience. First, find the solution x2*(x1) for the second player as a response to any decision made by the first player:

x2*(x1) : ∂π2(x2, x1)/∂x2 = 0.

Next, find the solution for the first player anticipating the response by the second player:

dπ1(x1, x2*(x1))/dx1 = ∂π1(x1, x2*)/∂x1 + (∂π1(x1, x2)/∂x2)(∂x2*/∂x1) = 0.

Intuitively, the first player chooses the best possible point on the second player’s best response function. Clearly, the first player can choose a NE, so the leader is always at least as well off as he would be in the NE. Hence, if a player were allowed to choose between making moves simultaneously or being a leader in a game with complete information, he would always prefer to be the leader. However, if new information is revealed after the leader makes a play, then it is not always advantageous to be the leader. Whether the follower is better off in the Stackelberg or the simultaneous-move game depends on the specific problem setting. See [71] for examples of both situations and a comparative analysis of Stackelberg vs. NE; see also [104] for a comparison between the leader and follower roles in a decentralized assembly model. For example, consider the newsvendor game with sequential moves. The best response function for the second player remains the same as in the simultaneous-move game:

Q2*(Q1) = F⁻¹_{D2+(D1−Q1)^+}( (r2 − c2)/r2 ).

For the leader the optimality condition is

dπ1(Q1, Q2*(Q1))/dQ1 = r1 Pr( D1 + (D2 − Q2)^+ > Q1 ) − c1 − r1 (∂Q2*/∂Q1) Pr( D1 + (D2 − Q2)^+ > Q1, D2 > Q2 ) = 0,

where ∂Q2*/∂Q1 is the slope of the best response function found in (1). Existence of a Stackelberg equilibrium is easy to demonstrate given the continuous payoff functions. However, uniqueness may be considerably harder to demonstrate. A sufficient condition is quasi-concavity of the leader’s profit function, π1(x1, x2*(x1)). In the newsvendor game example, this implies the necessity of finding derivatives of the density function of the demand distribution, as is typical for many problems involving uncertainty. In stochastic models this is feasible with certain restrictions on the demand distribution. See [54] for an example with a supplier that establishes the wholesale price and a newsvendor that then chooses an order quantity, and [18] for the reverse scenario in which a retailer sets the wholesale price and buys from a newsvendor supplier. See [71] for a Stackelberg game with a wholesaler choosing a stocking quantity and the retailer deciding on promotional effort. One can further extend the Stackelberg equilibrium concept into multiple periods; see [35] and [1] for examples.
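The backward-induction recipe above can be sketched with a standard linear Cournot duopoly, a hypothetical stage game with profits πi = (1 − x1 − x2)·xi rather than the chapter's newsvendor payoffs. The follower's first-order condition gives x2*(x1) = (1 − x1)/2; substituting it into the leader's profit and optimizing (here by grid search) reproduces the textbook result that the leader earns at least his simultaneous-move NE profit:

```python
# Hypothetical linear Cournot duopoly: profit_i = (1 - x1 - x2) * xi.
def profit(xi, xj):
    return (1.0 - xi - xj) * xi

def follower_best_response(x1):
    # Follower's first-order condition: 1 - x1 - 2*x2 = 0.
    return (1.0 - x1) / 2.0

# Leader maximizes profit(x1, x2*(x1)): backwards induction via grid search.
grid = [i / 10000.0 for i in range(10001)]
x1_star = max(grid, key=lambda x1: profit(x1, follower_best_response(x1)))
x2_star = follower_best_response(x1_star)

leader_profit = profit(x1_star, x2_star)
nash_profit = profit(1 / 3, 1 / 3)  # simultaneous-move NE: x1 = x2 = 1/3
print(x1_star, x2_star)             # ~0.5 and ~0.25, the Stackelberg outcome
print(leader_profit >= nash_profit) # the leader weakly prefers to lead
```

In this example the leader commits to 1/2, the follower responds with 1/4, and the leader's profit of 1/8 exceeds the simultaneous-move profit of 1/9, illustrating the first-mover advantage under complete information.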
The strategy for each player is now a sequence of actions taken in all periods. Consider one repeated game version of the competing newsvendor game in which the newsvendor chooses a stocking quantity at the start of each period, demand is realized and then leftover inventory is salvaged. In this case, there are no links between successive periods other than the players’ memory about actions taken in all the previous periods. Although repeated games have been extensively analyzed in economics literature, it is awkward in a SCM setting to assume that nothing li typically in SCM there is some transfer of inventory and/or backorders between periods. As a result, repeated games thus far have not found many applications in the SCM literature. Exceptions are [29], [95] and [26] in which reputational e?ects are explored as means of supply chain coordination in place of the formal contracts. A fascinating feature of repeated games is that the set of equilibria is much larger than the set of equilibria in a static game and may include equilibria that are not possible in the static game. At ?rst, one may assume that the equilibrium of the repeated game would be to play the same static NE strategy in each period. This is, indeed, an equilibrium but only one of many. Since in repeated games the players are able to condition their behavior on the observed actions in the previous periods, they may employ so-called trigger strategies: the player will choose one strategy until the opponent changes his play at which point the ?rst player will change the strategy. This threat of reverting to a di?erent strategy may even induce players to achieve the best possible outcome, i.e., the centralized solution, which is called an implicit collusion. Many such threats are, however, non-credible in the sense that once a part of the game has been played, such a strategy is not an equilibrium anymore for the remainder of the game, as is the case in our example in Figure 1. 
To separate credible threats from non-credible ones, [82] introduced the notion of a subgame-perfect equilibrium. See [44] and [99] for solutions involving subgame-perfect equilibria in dynamic games.

Subgame-perfect equilibria reduce the equilibrium set somewhat. However, infinitely repeated games are still particularly troublesome in terms of the multiplicity of equilibria. The famous Folk theorem⁵ proves that any convex combination of the feasible payoffs is attainable in the infinitely repeated game as an equilibrium, implying that “virtually anything” is an equilibrium outcome⁶. See [29] for the analysis of a repeated game between a wholesaler setting the wholesale price and a newsvendor setting the stocking quantity.

In time-dependent multi-period games, players’ payoffs in each period depend on the actions in the previous as well as current periods. Typically the payoff structure does not change from period to period (so-called stationary payoffs). Clearly, such a setup closely resembles multi-period inventory models in which time periods are connected through the transfer of inventories and backlogs. Due to this similarity, time-dependent games have found applications in the SCM literature. We will only discuss one type of time-dependent multi-period game, stochastic games or Markov games, due to their wide applicability in SCM. See also [62] for the analysis of deterministic time-dependent multi-period games in reverse logistics supply chains. Stochastic games were developed by [84] and later by [89], [49] and [46]. The theory of stochastic games is also extensively covered in [37].
The setup of the stochastic game is essentially a combination of a static game and a Markov Decision Process: in addition to the set of players with strategies, which is now a vector of strategies, one for each period, and payoffs, we have a set of states and a transition mechanism p(s′ | s, x), the probability of a transition from state s to state s′ given action x. Transition probabilities are typically defined through the random demand occurring in each period. The difficulties inherent in considering non-stationary inventory models are passed over to the game-theoretic extensions of these models, so a standard simplifying a
