Here is what’s been occupying my mind to the point where I can’t sleep, eat, or work. Except that I can still sleep, eat, and work, but the rest is true. Anyway, this:
A “zero-sum game” refers to a situation where for player A to win, player B must lose. In other words, if each has 5 widgets, conditions that give A 6 widgets will give B 4 widgets.
A “non-zero-sum game” refers to a situation where both player A and player B can win. In other words, if each has 5 widgets, there are conditions that will permit each of them to go to 6 widgets.
So, my question is: Is there a term for a semi-zero-sum game? In other words, suppose there are conditions under which A can get 7 widgets, leaving B with 4 widgets? I know this can happen, but is there a term for it?
This question has been tormenting me.
Okay, now I go roast the turkey.
35 thoughts on “I NEED to Know”
My (vague) understanding is that that is an example of a competitive game, whereas a zero-sum game is a strictly competitive game.
Non-zero-sum games are any games where one player’s gain does not imply another player’s loss. That covers situations where both players can reach a “victory” condition, and situations where one winds up with fewer widgets than the other.
It all depends on how those are allocated.
If you have 10 widgets and players A and B compete for them, that is zero sum. Anything A gets is something B cannot. If A gets 6 widgets, B cannot get to 6, period. Zero sum.
If you have an undefined pool of widgets (or a defined pool that can be expanded) and players A and B compete for them independently, without one harming the other in doing so, that is non-zero-sum, regardless of the final score. However many widgets A gets doesn’t affect B’s chances.
And that is about the limit of my depth. I have two friends going for degrees in mathematics; I just absorb some stuff from them about Nash equilibria and such.
It sounds like your semi zero sum is really a zero-sum if A is restricting B’s score by his own scoring.
Thanks for the reply. You may be right, but if so, the term is sloppy. “Zero sum” ought to imply, well, a zero sum, i.e., that the total number of widgets remains the same. They need a better term. I’m going to write a pretty sharp letter about it, let me tell you.
This may be what you are looking for: http://www.beyondintractability.org/bi-essay/sum
I think it is called “crony capitalism” :)
Seriously, though, in game theory I’m pretty sure that is still a non-zero-sum game. (In a non-zero-sum game, you can also have both A and B ending up with 4 widgets.)
Maybe you want a word for where everybody always ends up better off. That’s probably called “utopia”, though a free market/voluntarist economy probably comes close.
Jason F: Yep, exactly what I was looking for. Thanks kindly.
Hm. I wonder what the name is for the game where someone can get ahead only by reducing the total for everyone. (I can get from five widgets to six only if I push you down from five to three!) I mean, given inefficient systems like bribery and the like… I suspect there’s a term for that out there somewhere too. Negative-sum games?
Positive sum: The total payoff is positive. (E.g. I buy a book from you. You’d rather have the money, I’d rather have the book. We both win.) It can also be win-lose, providing the win is bigger.
Negative sum: The total payoff is negative. (E.g. company bribes Congress so that taxpayers are out $2 billion and $1 billion is given to the company, the rest is the cost of collection.)
Zero sum: The total payoff is zero (or fixed).
Non-zero sum: The total payoff is not zero (or fixed). Positive-sum and negative-sum games are some of the non-zero-sum games, not all of them.
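The taxonomy above can be sketched in a few lines. This is a minimal illustration of classifying a single outcome by the sum of its payoffs; the function name and the example numbers are mine, and note that strictly speaking “zero-sum” describes the whole game (every outcome sums to the same constant), not just one result.

```python
def classify_outcome(payoff_a, payoff_b):
    """Label one outcome of a two-player game by the sum of its payoffs."""
    total = payoff_a + payoff_b
    if total > 0:
        return "positive-sum"
    if total < 0:
        return "negative-sum"
    return "zero-sum"

# The book trade above: both sides gain.
print(classify_outcome(+1, +1))   # positive-sum
# The bribery example, in billions: $1B gained, $2B lost.
print(classify_outcome(+1, -2))   # negative-sum
# A poker hand with no rake: my win is exactly your loss.
print(classify_outcome(+5, -5))   # zero-sum
```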
Now that the terminology question is over….
There’s a vast tedious literature of uncounted numbers of prisoner’s dilemma variations done for theses and dissertations in this area.
Experiments: People love fairness. People are often unfair, especially in private or anonymously. People love to punish unfairness and will pay for the privilege of being the punisher. People think punishment is unfair if it affects them in any way. People like to win. People hate for other people to win, even if they win too. People are willing to lose if they can be sure everyone else loses with them. Plot risk-seeking and risk-averse curves for different demographics. Replace “people” with animals of various species, from bonobos to bees, and run more experiments. Come up with some obscure trick not yet seen in the literature — suppose we tell people when they lose they’ll be shocked and then we don’t shock them (aren’t we sly) — and run through the changes again….
Seth reported the names I know. Now that you point it out, the names seem sloppy. There can be some strategies available which are zero-sum, and some which are positive-sum, in the same game. The game itself may not be zero-sum; it’s the strategy which is.
I don’t remember any special name for the situation where one player can improve without affecting the other player. If we had a name for that, we should probably have more names — a player using a strategy which improves his own position without affecting the other could have one name, and a player using a strategy which improves the other’s position without affecting his own could have another.
When the widgets are fungible, the situation doesn’t come up. Say one way we each get 5 widgets. Another way creates 8 widgets for me and 5 for you, but you don’t have to play that way; you can decide to just let me have 5. OK, I give you one, and then we get 7 and 6. If you’re really hardnosed, maybe I’ll give you 7 and keep 6. No matter how hard you bargain, I won’t put up with giving you 8 and keeping 5.
The original point was that most entertainment games are zero-sum. You win and I lose. But we can have games where we have an incentive to cooperate. That can’t happen when we only care who wins.
If I can do something that costs me nothing and benefits you, I probably will. Why not? But if you then say “Yay! I’m #1! I’m the best!” then I probably won’t do it next time.
I don’t remember a specific name for a semi-zero-sum game when I studied game theory, but we did discuss how A could end up with more than B while both feel they are coming out on top, so maybe game manipulation is what you’re getting at? Where A can convince B that the widgets aren’t worth as much as B thinks, or perhaps B places less value on the widgets than A does to start with and A simply capitalizes on this. I’m not sure. It’s hard to think in widgets. We always used cake as our examples. Delicious, delicious cake.
The book “The Compleat Strategist” [sic] by J. D. Williams describes this as either a regular non-zero-sum game or “a three-person zero-sum game, of sorts, where the third player has some of the characteristics of a millstone around the neck.” The third player, C, could correspond to Nature, the House (if you’re playing poker in a casino, every chip you use to tip your dealer after a game of poker increases C’s payout while decreasing yours) or something similar.
The “sum” part is the sum of all the players’ winnings, where a negative figure represents a loss. So zero-sum means the number of chips is static. Generally the hope is that the non-zero sum will be positive, but that can easily not be the case (there are situations where the competition itself destroys value). But even if the sum is positive, that doesn’t imply that every player will come out ahead of their original position. In a typical economy, for instance, value is created overall, but that doesn’t mean some people aren’t going to lose their shirts. The term still applies. If you wanted a term to differentiate between gains for everyone versus gains only for some, treat the former as the exceptional situation, for which the usual description is “win-win” — a subset of non-zero-sum.
I think we should call it a “some-sum game”.
The very first thing I thought of was a passage from Capital. One comment Marx makes in Capital about the relation between worker and capitalist is that the fact the capitalist reaps the surplus of the worker’s labour is “a piece of good luck for the buyer, but by no means an injustice towards the seller.” (Capital, 301) That is, because the worker is compensated for exactly his labour, the exchange has been something like 5:5 –> 6:5.
So the term you’re looking for is “good luck”. :}
Of course if the prize is a Chinese brunch, then it’s a dim-sum game.
I see any team sport game as being positive-sum, chess as being negative-sum, and the card game “War” as being zero-sum.
I imagine what you are describing as being like basketball if, instead of shots made from behind the large semi-circle being worth a straight 3 points, they awarded 2 points to the team that made the shot and took one away from the other team.
The problem with that is that as the sum becomes more positive, the negative points against the opposing team become more and more irrelevant. You may as well just award the scoring team 3 points and not bother taking any away from the other team.
However, if the number of points taken from the opposing team is made dependent on either the positive sum or the number of points that team has, then the penalty is brought back into relevance.
If I were in charge of making the change to said game, I would keep all point values in front of the half-court line the same, but shots made from behind the half-court line would add 3 points to the scoring team’s score and remove 10% (rounded up) of the opposing team’s score.
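That house rule is concrete enough to sketch. The function below is my own illustration of the proposed scoring (the name and the floor at zero are assumptions, not part of the proposal); the point is that the penalty scales with the opponent’s score, so it stays relevant however lopsided the game gets.

```python
def apply_shot(scorer_pts, opponent_pts, behind_half_court):
    """Apply one made shot under the proposed rule: shots from behind
    half court add 3 to the scoring team and remove 10% (rounded up)
    of the opposing team's score. Other shots score as usual elsewhere."""
    if behind_half_court:
        scorer_pts += 3
        penalty = -(-opponent_pts // 10)  # integer-exact ceil(opponent_pts / 10)
        opponent_pts = max(opponent_pts - penalty, 0)  # assumption: no negative scores
    return scorer_pts, opponent_pts

# With the opponent at 50, a half-court bomb costs them 5 points.
print(apply_shot(40, 50, True))   # (43, 45)
# Early in the game the same shot costs them almost nothing.
print(apply_shot(3, 1, True))     # (6, 0)
```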
It’s a long way from game theory, but if you consider living things then 5+5 does not automatically equal 10; for example, the synergistic interactions between certain antibiotics achieve an outcome where the total is more than the sum of the parts.
Indeed, if a strain of bacteria is sufficiently unpleasant doctors will use an antibiotic which is intrinsically incapable of damaging it, but enhances the activity of another antibiotic to do so, in which case 5+0=6.
Another example of the use of an antibiotic intrinsically incapable of damaging the bacteria is where it inhibits the quorum sensing abilities of the bacterial colony, and therefore the ability of the bacterial colony to do the microbiological equivalent of invading Poland.
The bacterial colony is unaware that it has the firepower to do so because the communications are screwed; it can’t count…
The type of game does not depend on the scoring, only the result. In a typical sport, there’s a winner and a loser (or a draw), so zero-sum. It doesn’t matter if the score is 1:0 in baseball or 150:144 in basketball, in the season’s record it’s still recorded as 1:0 (win, loss).
If the scoring in basketball were changed so that the home team scored as now, but the visiting team caused the home team’s score to drop whenever they scored (by the same amount), the game wouldn’t be changed at all.
Well, “win, win” specifically means positive sum with both players coming out ahead and “positive sum” is more general, so if you need highly precise speech for a discussion, you could indicate that “positive sum” rather than “win, win” means that the outcome may be highly uneven. Of course, in more relaxed settings, that would be a bit Humpty Dumpty.
Hmm. Now you’ve got me wanting such a word. Perhaps you could coin one and see if it catches on.
@Seth- For that matter, any game can be seen as positive-sum, negative-sum, or zero-sum, depending on how you look at it.
Seth and Chris are onto something.
US military strategists used to treat nuclear war as a zero-sum game. The intention was to come out ahead of the other side. They built elaborate thought-castles around that.
For example, what if the USSR nuked one US city and then said it was an accident and apologized. If we accepted their apology we were out one city and they came out ahead. If instead a nuclear war happened and both sides were utterly destroyed, we came out even worse. So they kept careful track of what went on in the USSR and the USA and they maintained a database that estimated the values of individual cities in the USA and the USSR. If some combinations of US cities were destroyed, a computer program would almost instantly provide a list of Soviet cities that would add up to about the same value. The USA would nuke those cities and then accept the Soviet apology.
They considered it a zero-sum game. If half of US cities were nuked but every USSR city was nuked, that was a win. But depending on how you looked at it, it was more like losing 50% of your nation.
If US strategists had considered it a negative sum game they might have been more ready to consider disarmament.
In economic theory a situation where the only way for one participant to gain is for another to lose is called Pareto Optimal or Pareto Efficient. Basically it means there’s no room for growth left in the system. If you can make one participant better off without making any other participant worse off it’s called a Pareto Improvement.
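The Pareto-improvement test described above is simple to state in code. This is a minimal sketch (the function name is mine): a move is a Pareto improvement when no participant is worse off and at least one is better off.

```python
def is_pareto_improvement(before, after):
    """True if moving from `before` to `after` makes at least one
    participant better off and no participant worse off.
    `before` and `after` are sequences of utilities, one per participant."""
    no_one_worse = all(b <= a for b, a in zip(before, after))
    someone_better = any(b < a for b, a in zip(before, after))
    return no_one_worse and someone_better

# The post's scenario: A goes 5 -> 7 while B goes 5 -> 4.
# Not a Pareto improvement, because B is worse off.
print(is_pareto_improvement((5, 5), (7, 4)))  # False
# Both going 5 -> 6 is a Pareto improvement.
print(is_pareto_improvement((5, 5), (6, 6)))  # True
```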
Another fun concept is games that transition from positive sum to negative sum through self-perpetuated escalation processes.
Example: multiple parties bidding on a $20 bill. The winner gets the $20 bill, everybody else pays their losing bid to the “auctioneer.”
At low levels, it’s a positive sum game, but you can quickly reach the point where people end up bidding over $20, because if the ‘winner’ bids $30 and the ‘loser’ bids $29, the winner only loses $10 while the loser loses $29.
The only safe decision in an environment without collaboration and high trust is not to play at all, which has some significant implications.
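The escalation dynamic in the $20 auction can be made concrete. A rough sketch of the payoffs under the stated rules (the function name is mine; this assumes just two bidders, with the loser paying their bid and getting nothing):

```python
def dollar_auction_payoffs(prize, winning_bid, losing_bid):
    """Payoffs in the all-pay auction described above: the high bidder
    wins the prize but pays their bid; the loser pays their bid too."""
    winner = prize - winning_bid
    loser = -losing_bid
    return winner, loser

# Early on, the game is positive-sum for the bidders:
print(dollar_auction_payoffs(20, 5, 4))    # (15, -4): sum is +11
# Past the prize value, escalation makes it negative-sum:
print(dollar_auction_payoffs(20, 30, 29))  # (-10, -29): sum is -39
```

Once both bids pass $20 the only question is who loses more, which is why each bidder keeps escalating: another dollar bid always looks cheaper than conceding.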
“It is not enough to succeed; others must fail.” I heard it attributed to Gore Vidal, but if you google it you will find it attributed to him and several others.
I was going to say “life”. But I’ll add it to your post.
There are situations in which cooperative strategies yield more aggregate output than non-cooperative strategies. There’s a lot of writing in game theory on the Prisoner’s Dilemma, some of which include repeated transactions and multiple players in which the participants can learn over time how other players behave when faced with opportunity to do better by cooperating (balanced by risk of being betrayed). Depending on the values assigned to the various combinations of answering the interrogators and keeping faith with one’s accomplice, one can create zero-sum situations or situations in which cooperation improves participants’ aggregate result, but to make Prisoner’s Dilemma interesting I usually see conditions described in which the best individual outcome is betraying a faithful accomplice but in which the best aggregate result comes from both keeping faith, and the worst aggregate result comes from both prisoners ratting out their accomplice.
Imagine two people who want to eat pie. One has sugar and butter, the other has flour and cocoa. Neither can make a pie alone. Together, they can cooperatively make something each would like to eat. It’s pretty clear that situations like this exist, in which trade puts both parties ahead in an immediate, tangible, practical sense. Or imagine a dairy farmer who has vastly more milk than he can drink, but would like things like Internet service and gasoline and new shoes and private school for his children: it’s pretty clear that trading milk away before it spoils dramatically improves his situation.
The world is full of opportunities to cooperate that are not zero-sum games from the perspective of the participants. Occasionally an economist will hypothesize that all things are fungible, assign a monetary value to all the goods in a system, and demonstrate that people can’t profit because they’re really just exchanging money, but the truth is that we’re not really exchanging money: we’re exchanging things like labor that would not exist but for trade, goods whose marginal value to the owner is low because they are surplus to need, products that will spoil if not consumed as their expiration approaches — all of which have much more value to the recipient than to the seller. So people get ahead all the time even in systems that an economist will swear result in no gain to anyone.
And then, there are systems like the pie factory in which participants can really create value by cooperating.
Hi Mr Brust
My name is Matt, and I just wanted to say how much your work has meant to me over my life; I learned to read real books by reading your works (my first chapter book was Taltos). The main point of my reply, besides thanking you, is to ask if I could trouble you to sign one of my copies. If that is okay, please let me know how I can send you my book.
Thank you very much for making all these great books for us to enjoy.
Matt: Thanks for the kind words. My email address is here on the web site; drop me an email about a signed copy.
A home poker game, with no rake, would be a good example of a zero-sum game. For one person to win money, one or more other people must lose an equal amount of money.
Most games that model real life are non-zero-sum games though. One example that could fit what you’re looking for is the Prisoner’s Dilemma. Think of two crooks (partners) who have been arrested. If both give evidence they’ll go to jail for two years each (-2,-2); if neither gives evidence then the case against them isn’t as strong, so they only go to jail for one year (-1,-1). But if one testifies and the other doesn’t, the one who testifies gets the deal and gets off while the other gets five years (+10,-5). The number values are semi-arbitrary, but it is the relative value of the actions that defines the game type.
From a game theory standpoint, not testifying is cooperating (with your partner) while testifying is being non-cooperative. Games are defined by how the relative rewards and penalties of cooperating or not play out. If you have a particular scenario in mind, I could try to look it up (although it’s been something like twenty years since I took game theory), but if you’re just looking for a generic name for all games that aren’t zero-sum, non-zero-sum is it.
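The Prisoner’s Dilemma payoffs above are easy to tabulate. A small sketch using the commenter’s (semi-arbitrary) numbers, with my own names for the moves; it shows why testifying is the dominant strategy even though mutual silence beats mutual testimony:

```python
# Payoffs from the comment above, indexed by (my move, partner's move);
# each entry is (my payoff, partner's payoff).
PAYOFFS = {
    ("stay silent", "stay silent"): (-1, -1),
    ("stay silent", "testify"):     (-5, 10),
    ("testify",     "stay silent"): (10, -5),
    ("testify",     "testify"):     (-2, -2),
}

def best_response(their_move):
    """My payoff-maximizing move, given what my partner does."""
    return max(("stay silent", "testify"),
               key=lambda my: PAYOFFS[(my, their_move)][0])

# Testifying beats silence whatever the partner does:
print(best_response("stay silent"))  # testify
print(best_response("testify"))      # testify
```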
OK – as best as I understand your question, the answer is no. The definition of equilibrium in non-cooperative game theory (the flavor of game theory that most every economist and political scientist uses) is a situation in which no actor has any incentive to deviate, given what every other actor is doing. This means that game theory has difficulty in taking account of what you might call situations of harsh coercion in which one actor ends up strictly better off and one actor ends up strictly worse off than they might otherwise be. The equilibrium is unstable – why would I be prepared to stick with a situation of five goodies, when I could get six elsewhere? This has led people like Terry Moe (a right wing political scientist/economic theorist, but one with a decent respect for coercive politics) to argue that game theory simply isn’t able to properly incorporate coercion.
There are ways in which you can bring in coercion though. Here (if we are sticking to simple one-shot games) the basic model is the mixed motive coordination game or, as it used to be called, the battle of the sexes game. Imagine a couple (to get rid of the sexism, assume non-gender-specific names) called Chris and Pat, trying to figure out what to do on a Saturday evening. Chris likes boxing more than ballet. Pat likes ballet more than boxing. Both of them, however, would prefer to go to something together, even if it is their less preferred choice, than to end up going to different events. This is your standard mixed motive coordination game. Both actors would prefer to coordinate on one of the two equilibria (boxing, boxing or ballet, ballet respectively) than on a non-equilibrium ‘mistake’ (boxing, ballet and ballet, boxing).
But here, you can bring in bargaining power. Imagine that Pat cares less than Chris about whether they both end up at the same event. In game theoretic parlance, Pat is less sensitive than Chris to breakdown – to the risk that they may end up not coordinating on going to the same event. If Pat and Chris both know this (as they do in game theory under the assumptions of common knowledge and complete information, which isn’t quite what it sounds like), then Pat is in a stronger bargaining position than Chris. Pat can more credibly threaten Chris that Pat will not turn up to a boxing match, than Chris can threaten Pat. Typically, sensitivity to breakdown will depend on outside options. Pat may be less sensitive, because Pat has another possible date. Jack Knight in a 1992 book, Institutions and Social Conflict extends this argument into a broader theory of political institutions that tries to integrate Marx and game theory. Sam Bowles and Suresh Naidu also have a really nice (if technical piece) trying to do the same kind of leftiness with evolutionary game theory available on Suresh’s website. If you want to see where Marxism and rational choice economics meet together, Sam Bowles is your man.
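The two-equilibria structure of the battle of the sexes can be checked mechanically. A sketch with illustrative payoff numbers of my own choosing (any numbers with the same ordering work): an outcome is a pure-strategy Nash equilibrium when neither player gains by deviating unilaterally.

```python
from itertools import product

# Payoffs (Chris, Pat): coordination beats miscoordination, and each
# player prefers coordinating on their own favorite event.
PAYOFFS = {
    ("boxing", "boxing"): (2, 1),
    ("boxing", "ballet"): (0, 0),
    ("ballet", "boxing"): (0, 0),
    ("ballet", "ballet"): (1, 2),
}
MOVES = ("boxing", "ballet")

def pure_nash_equilibria():
    """Outcomes where neither player gains by deviating unilaterally."""
    eqs = []
    for chris, pat in product(MOVES, MOVES):
        u_chris, u_pat = PAYOFFS[(chris, pat)]
        chris_ok = all(PAYOFFS[(c, pat)][0] <= u_chris for c in MOVES)
        pat_ok = all(PAYOFFS[(chris, p)][1] <= u_pat for p in MOVES)
        if chris_ok and pat_ok:
            eqs.append((chris, pat))
    return eqs

print(pure_nash_equilibria())  # [('boxing', 'boxing'), ('ballet', 'ballet')]
```

Which of the two equilibria the couple lands on is exactly what the bargaining-power story above is about; the payoff matrix alone doesn’t pick one.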
If you want, you can bring the kinds of unpleasant forms of coercion back into game theory by the back door. Imagine a situation of gross power disparities and no information asymmetries – say between Dragons and Tecklas. Dragons can order Tecklas to be whipped to death if they do something that displeases the Dragon. Tecklas, knowing this, will be apt to tug their forelocks and give Dragons everything they want, up to the point of indifference (i.e., in this case, up to the point at which they are certain they will die anyway). In a sense, you could think of this as a mixed motive coordination game like the one between Pat and Chris. Both will end up in an equilibrium where they are “better off” than the breakdown values – the Dragon will enjoy the delights of being on the more pleasant end of feudal power relations, while the Teckla will enjoy the more basic pleasure of keeping his skin intact. However, this is more a redescription of the problem in game theoretic terms than anything very useful. There are, of course, more subtle things that you can do with all of this. If there is asymmetric information, perhaps the Teckla can do a Good Soldier Svejk. If relations between the Dragon and Teckla are indefinitely iterated, then things can be more complex, although still dominated by power relations (technically, you might think about the battle of the sexes as a meta-game over equilibria in whatever the specific iterated game is, whether Prisoner’s Dilemma, Assurance or whatever). But this is probably a lot more than you ever wanted when you idly asked the question.
Sorry, looking at it again this morning, I misread your question as one about a situation in which one actor loses a widget, while the other gains one, whereas it seems to be a question about moving to a situation where one actor gets a widget, while the other stays in the same situation.
Situations in which one actor gains one are much less problematic (and are, from a game theoretic and political science perspective, also less interesting). As people have said above, they are a specific flavor of a positive-sum game, but one without any distributional consequences (in game theory, actors are strictly indifferent to others’ utility gains). They are in principle unproblematic, because they are Pareto superior (no actor is worse off, but some actor is better off).
However, Russell Hardin argues that if there are later moves in the game, which might have distributional consequences, actors may have an incentive even to oppose Pareto superior moves. The logic here is that if one thinks of this as occurring in a two dimensional space, where the x and y axes are the respective utilities of the two players, a move which gives one actor an extra widget will constrain the set of future Pareto superior moves that are available.
Getting rid of the jargon – if one starts from the case where both player A and player B have five widgets, then it would seem at first unproblematic for player A to move to a situation where he still has five widgets and player B has gotten two extra for a total of seven – A is no worse off than he was before. However, if one thinks about this dynamically, (i.e. there may be more moves later in the game) Pareto superiority isn’t nearly as attractive a decision model, since it means that a number of situations that used to be possible, and that might have been attractive to player A (e.g., situations where A has six widgets and player B also has six, where A has seven widgets, and player B has six etc) are now ruled out of contention. They are Pareto inferior – since they would involve hurting player B (who now has seven widgets) to benefit A, so that B would never agree to them.
If you think that this is plausible, and if actors know that there are further ‘rounds’ to be played, then you can argue that rational actors will not agree to Pareto superior moves which do not benefit them sufficiently. But if you think that this is so, then a lot of basic game theoretic assumptions look untenable, which is why perhaps people have not taken up Hardin’s argument. More generally, game theory takes a lot of crazy arguments for granted, and should always be used with some gingerness. David Kreps’ book “Game Theory and Economic Modeling” is a very nice introduction by a top flight game theorist, who doesn’t assume too much knowledge on the part of his readers, but who also explains clearly the limits, as well as the benefits, of game theoretic reasoning.
No, it was a question about a situation where one gains a widget, and the other loses half a widget, and all I wanted was the name for it. But Jason F. answered it a long time ago.
While most of us would like to always benefit from something even if we lose, it’s very nearly impossible to know the rules. The whole notion of the prisoner’s dilemma is silly. How many times does one person turn on another based on an offered deal, just to have the prosecutor yank the deal, have that person’s boss yank the deal, have the feds nullify the whole deal, or have a judge throw out the deal?
Kim Stanley Robinson put it well in a little vignette in “Red Mars.” Two people are playing chess, one being consistently the superior player. A few moves out from an inevitable checkmate, the superior player observes how life imitates chess — to the high amusement of the other player. The superior player talks about the rules of life and how mastering them can get one the win in any situation. Laughing, the inferior player demurs with something like:
“No, in life, my queen could whisper into your bishop’s ear, and the next thing you know he’s moving like a rook and you’re f***ed.”