Outline (of talk/lecture/colloquium at Stony Brook in July 1999)

Remarks on Cooperation as a Concept

It is taken as obvious, particularly when it is a matter of human relations, that there can be cooperation and that this can be "high level cooperation" or "high grade cooperation". But really there are some elements of mystery where all might seem to be simple and obvious. Instincts, culture, and tradition are factors leading to the cooperative behavior of humans or animals. Civilization is a key to the context of the cooperative possibilities for corporations. Civil law provides the means by which contracts between or among enterprises can be binding and enforceable. Also, a cooperative game has the characteristic that it is not war and a player cannot simply be conquered and deprived of the right of separate existence.

Test Example for the Approach Described Last Year at Stony Brook

I have been doing work since talking here last year to apply the theory or approach that was presented then to an example of a 3-person game. Already then (when I spoke in 1998) I had been able to treat bargaining-type games of 2 persons and to get results consistent with the solution point of Zeuthen and Nash. The topic of the previous presentations was "Reduction of Coalitions to Agencies, a Scheme for the Analysis of Cooperative Games". (The TEXT VERSION of that (without the graphically presented illustrative examples) is available on my "web page" under the topic of AGENCIES.)

The basic idea of this approach or theory is that a game can be studied in the repeated game context, where the repeating can help to facilitate cooperation in a context otherwise unfavorable to it. This is like when cooperation can be found to be quite stable in a repeated prisoner's dilemma game.

Transfiguring Coalitions into Agencies

The idea of agencies is a means for reducing a game where the idea of cooperation is quite essential to the structure of the game to a game which is, in form at least, entirely a non-cooperative game where the players simply have specified possible "moves" (or strategic actions) which they carry through in the formal context of a non-cooperative game. But we will only be studying this derived game under the presumption of indefinite repetition, so that we will not be considering it as a "one shot" non-cooperative game would be considered. Thus cooperation is expected to arise or derive, but this is expected to depend essentially on the context of repetition.

And the "agencies" are the analogue of the chairpersons of committees, as if every coalition were regarded as equivalent to a committee with such a chairperson. Indeed our agents have ABSOLUTE power, to the extent that they are like powerful chairpersons of committees. But the power of any player that he might exercise as a powerful agent is limited by the concept of the repetition of the game, by the convention that he/she must be "elected" (or chosen, or accepted) by at least one other player, and by the concept that each player will build up a reputation that will be known to all the other players.

Election of Agents or Agencies

An agent is like someone with a complete "power of attorney" given by another person or persons. In the simplest case of a game of 2 persons, where we wish to look for the ideal possibilities of cooperation but also to take into account the conflicting or competitive interests of the 2 players, it would be that either of the two players could accept the other as his agent.
And THEN, after one of the two had become thus the "general agent", the idea is that that player could then COORDINATE all the possible uses of the pure strategies a priori available to the two separate players. (And with more than 2 players a "general agent" would be one that had been given agency power to represent all of the players of the game.)

Our study of the specific very simple 3-person game made use of a specific convention on the election of agents. The various possibilities for such election rules are discussed more broadly in the earlier text on "AGENCIES" that is also available on the web site where this outline is being made available. For our model study we used a convention that if a "cycle" was elected, where a string of voters would be connected in a loop by the fact of each having voted for the next voter of the loop, then the "cycle" would be resolved by making a RANDOM removal of one of the votes, so that the cycle member whose vote was thus removed would become the player elected by all the other members of the loop to be their agent. And we also used the "chain rule", so that if A votes for B and at the same time (same election stage) B votes for C, then this elects C to be the agent for both A and B (and himself). Of course a use of this rule is implicit in the description just given of how the election of a "cycle" leads, after a random choice, to the election of one member of the "cycle" as the agent elected to represent all of the cycle's members (including himself). A fuller discussion of election rules for agents is presented in the text called "agencies".

For the specific model game (or for any game of only three players) the election rules we used achieve the effect that ONLY TWO stages of election are needed. Then either a "general agent" has been elected or the "verdict" is given that cooperation (or coalescence) has failed. The first stage of election may reduce the game to one of just two parties or may achieve at one step the election of a "general agent", one player entitled to act "in behalf" of all players. If there is a reduction to two parties then one of these will be a player that has been elected to be the agent for one other player (besides himself). Then the second stage of election follows, and this is, in form, the same as it would be if we were beginning with just a game of 2 players. And if, with two players (or 2 parties) voting, neither of them accepts the other as agent, then the "verdict" is that cooperation has failed.

Then, in a general game, the 2 parties would be expected then to act non-cooperatively, utilizing the strategic possibilities available to them for non-cooperative action. (One of the parties would be an elected agent representing himself and one other of the original trio of players. This agent would, in the non-cooperative play, have the potential of using ALL of the pure strategies of BOTH of the players represented, with full coordination of these pure strategies.) But in our model game of three players there is nothing to do in the case that full coalescence (full cooperation) is not achieved.
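To make the "chain rule" and the random resolution of a "cycle" concrete, here is a minimal sketch of one election stage (written in Python purely for illustration; the actual calculations were done with MATHEMATICA, and the function names and the representation of the votes here are illustrative choices, not part of the modeling itself):

import random

def resolve_stage(votes):
    """One stage of agency election for a small set of players.

    votes: dict mapping each player to the player whose agency he
           accepts (votes for), or None if he accepts nobody.
    Returns a dict mapping each player to the agent finally
    representing him after this stage, applying the "cycle" rule
    and the "chain" rule described above.
    """
    votes = dict(votes)

    def find_cycle(current_votes):
        # Follow the votes outward from each player; revisiting a
        # player means the voters form a closed loop.
        for start in current_votes:
            seen, p = [], start
            while current_votes.get(p) is not None:
                if p in seen:
                    return seen[seen.index(p):]
                seen.append(p)
                p = current_votes[p]
        return None

    # Cycle rule: remove one vote of the loop AT RANDOM; the member
    # whose vote was removed is thereby elected by the other members.
    cycle = find_cycle(votes)
    while cycle:
        freed = random.choice(cycle)
        votes[freed] = None
        cycle = find_cycle(votes)

    # Chain rule: if A votes for B and B votes for C, then C becomes
    # the agent for A, B, and himself; so each vote chain is followed
    # to its end.
    def final_agent(p):
        while votes.get(p) is not None:
            p = votes[p]
        return p

    return {p: final_agent(p) for p in votes}

# Example: P1 votes for P2 and P2 votes for P3, so by the chain rule
# P3 becomes the agent of all three at this one stage.
print(resolve_stage({1: 2, 2: 3, 3: None}))   # {1: 3, 2: 3, 3: 3}

The random choice in the sketch corresponds to the RANDOM removal of one vote described above; in the repeated game context this randomization is part of what determines the expected payoffs.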
In the specific case that the coalition of players P1 and P2 has been achieved but that general agency or full cooperation has not been achieved, this amounts to one or the other of P1 and P2 having become the agent of both, and our specification of the model game is simply that in this case P1 and P2 are given the smaller bonus of b to share between them (with a payment of b/2 for each, as we have set up the game's payoff function in terms of votes and "allocations"). And if coalescence fails at the first election stage, or if it fails with either the coalition of P1 and P3 or that of P2 and P3 having been formed, then the payoff rule is that all players are given zero. On the other hand, if a general agency has been elected and we can consider that the triple coalition (1,2,3) has been formed or achieved, then the rule is that that coalition is awarded the sum of +1 in payoff, to be divided among the members (all three players) with the use of "transferable utility". This means, in our context of agencies, that the elected "general agent" is given the payoff of +1 which he is able to divide, using transferable utility, as if it were a sum of money, among the three players (including himself).

Allocations

For the model game study and calculations which I had done by the time of speaking in July 1999 at Stony Brook I had SIMPLIFIED the game by restricting the strategic choices that a priori would be available to players as agents, by requiring that no matter HOW a specific player might have happened to become elected "general agent", he would choose THE SAME "allocation" in every case. In principle, a "general agent" has the right to do whatever he/she pleases, operating with all the pure strategies of all the individual players. Thus, for a game of 3 players and transferable utility, an elected general agent has the right to dispose of all the resources of v(1,2,3) (from the characteristic function). And in our model game we had v(1,2,3) = +1 and we assumed that a general agent was constrained to divide up that total amount of transferable utility in such a fashion that the amount thus "allocated" to any individual player would be non-negative and that all 3 of the amounts would sum to +1, which is the quantity v(1,2,3).

But in general, in principle, if we are looking for "stable behavior" strategies of players operating as electors of agents and as agents, then whenever a player arrives at the circumstance of having been elected to be general agent he/she CAN DISTINGUISH BETWEEN all the possible various routes by which he/she happened to become so elected. And it is like in politics, where a politician may choose to reward voters differently depending on WHEN, in the stages of election, a voter happened to vote for the politician. With a game of 2 players there would be only one or two alternative possible "routes of election" by which a player might become elected to be general agent. But with 3 players there can be many, many more, depending on the precise election rules. In the calculations that I had done before speaking at Stony Brook in Summer 1999 I had chosen to SIMPLIFY by requiring that any player must "behave" with a uniform formula of allocations (that he/she would impose whenever elected, by whatever election route, to be general agent).
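For reference, the payoff rule of the model game as specified above can be summarized in a minimal sketch (again in Python purely for illustration; the encoding of the outcomes and the names are illustrative only):

def model_game_payoffs(outcome, allocation=None, b=0.25):
    """Payoffs (to P1, P2, P3) in the simple 3-person model game.

    outcome:    "general" if a general agent was elected (full coalescence),
                "P1P2" if only the coalition of P1 and P2 was achieved,
                "other" for any other partial coalition or for no coalition.
    allocation: the general agent's division (x1, x2, x3) of v(1,2,3) = +1,
                non-negative and summing to 1; used only when outcome is "general".
    b:          the smaller bonus v(1,2) available to the coalition of P1 and P2.
    """
    if outcome == "general":
        return allocation                 # the general agent divides +1 freely
    if outcome == "P1P2":
        return (b / 2.0, b / 2.0, 0.0)    # P1 and P2 share the bonus b equally
    return (0.0, 0.0, 0.0)                # any other failure of full coalescence pays zero

# For example, with b = 1/4 the coalition of P1 and P2 alone yields (1/8, 1/8, 0).
print(model_game_payoffs("P1P2", b=0.25))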
Thus if P1 was thus the final player-agent (general agent) (like the general partner in a partnership in which all but one partner would be limited partners) then P1 would allocate according to (u11,u21,u31) with u11 + u21 + u31 = +1 (and this was the notation that we used in the calculations (using MATHEMATICA)). Thus u21, for example, means the amount of utility allocated to player 2 by player 1 whenever P1 would have become the final player-agent.

(Subsequently, since the calculations seemed to indicate that the coalition of (P1,P2) was failing to be "properly appreciated" or properly effective, and since this had the effect that the results seemed not to favor the Shapley value at all, we/I came to the opinion that it will be necessary to use a more elaborated modeling and to give the quasi-robotic players of the modeling more strategic potential so that, like politicians, they can give more rewards to supportive voters in PRIMARY ELECTIONS than they would give to supportive voters in FINAL ELECTIONS (or vice versa).)

Returning to Remarks at Stony Brook 1999

The text above contains some remarks that represent my views as they have evolved subsequent to the time of the 1999 meeting at Stony Brook. Now I continue to write more or less what was said at that time.

I remarked that I had hoped to be able to present fully worked out calculations of a model with reactive strategies for the repeated game based on the simple 3-person example that has been discussed. But as it happened, I had had difficulties with the calculations and I had not found equilibria that were really adequately persuasive that they were indicating a good analysis of the game. So I presented some examples of calculation results as they were obtained (by that time) and I went into a discussion of how such results could relate to the comparison of the Shapley value and the nucleolus, or any other game theoretic concepts that would assign a value vector to a cooperative game in the way that this is done by the Shapley value or by the nucleolus.

A few transparencies were exhibited, including one that presented the results of calculations using MATHEMATICA to find equilibria arising from the model (restricted to what are pure strategies in terms of continuous parameters). The modeling had the quantity b described above, which describes the special advantage of the coalition (P1,P2) of players P1 and P2; and it also had a quantity which was originally handwritten as "epsilon" and which became "e" for the formulae to which the machinery of MATHEMATICA would be applied. This quantity "epsilon" effectively measured the precision with which a player could "demand" that he should be getting favorable allocations of utility when another player would become general agent, as a precondition for electing to "accept" that other player as agent (in the elections which elect the agency structures leading to a choice of a general agent or to a failure of the achievement of (full) coalescence).

A calculation with epsilon = 1/200 and with b = 1/4 led to the finding of an apparent equilibrium with the property that P3 was actually getting the best payoff. Thus this equilibrium (in the model, not in "ideal" modeling, presumably) could not be reasonably interpreted, since the effect of the advantage of the coalition (1,2) SHOULD be favorable, value-wise, to players P1 and P2 rather than to player P3.
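(As an aside, and only to indicate the ROLE played by this "epsilon": one can imagine an acceptance rule of roughly the following form, sketched here in Python. This particular functional form is only an illustrative guess; it is NOT the formula that was actually used in the calculations.)

import math

def acceptance_probability(offered, demand, e):
    """Hypothetical illustration of a demand-based acceptance rule.

    offered: the utility that the voting player would be allocated if
             the candidate player became general agent.
    demand:  the voting player's strategic "demand".
    e:       the precision parameter "epsilon"; as e becomes small the
             rule approaches a sharp all-or-nothing test of whether the
             offered allocation meets the demand.

    This is only meant to show how a small epsilon makes the demand
    precise; it is not the formula of the actual modeling.
    """
    return 1.0 / (1.0 + math.exp((demand - offered) / e))

# With e = 1/200 the acceptance is nearly a step function of the
# shortfall (demand minus offered); with e = 1/30 it is much smoother.
print(acceptance_probability(offered=0.30, demand=0.33, e=1/200))  # about 0.002
print(acceptance_probability(offered=0.30, demand=0.33, e=1/30))   # about 0.29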
Similar calculations done with epsilon larger, at 1/30, gave results seeming more reasonable, and either with b = 1/4 or with b = 1/10 an advantage to players P1 and P2 was indicated. But I was surprised by the fact that the advantage of the favored players came out as rather small. But then this seemed to correspond well, at least for b < 1/3, with the fact that according to the "nucleolus value" the advantage given to P1 and P2 by v(1,2) = b is ineffectual, so that, so long as b < 1/3, the value of the game is simply (1/3,1/3,1/3).

And it was clear that, before we would have epsilon passing to a limit value of zero (so that we would consider the limiting results for the "evaluation" of the game under that asymptotic hypothesis), we should not expect precise results but rather approximate results. So it seemed that, if we COULD properly get a result for the limiting case, it might be that as epsilon tended towards zero the SMALL advantage of players P1 and P2 would entirely disappear and the "nucleolus value" would be what was derived.

This approach, with "epsilon" or "e" tending to zero, was suggested by what had proved to be successful for examples of bargaining games of just two players. And in those studies it was found that if a bargaining game was of the sort that would have the value of (1,1) by the "solution" of Nash or Zeuthen, but with an advantage to player P1 by the Kalai-Smorodinsky solution (and with the Pareto boundary happening to be smooth and convex), then as "e" (epsilon) would decrease towards zero the advantage found for P1 over P2 would decrease in parallel and the "Pareto efficiency" would increase towards 100%. With 3 players the results of calculations seemed to indicate that the first efforts at modeling were too primitive, but that indeed it WOULD also result for 3 players that as "e" tended towards zero the Pareto efficiency would tend towards 100%.

(A separate text is being prepared to describe the way in which the players' strategies were so defined in the modeling that they would choose "demands" relating to their (behavioral) "acceptance probabilities" rather than choosing those acceptance rates directly. For example, at the first stage of voting for agents ("accepting" agencies) player P1 was modeled as choosing to accept player P2 as his agent with the probability a1f2 (in a context of repeated play). And a1f2 was not a quantity that P1 would choose as a strategic choice, but RATHER player P1 would choose d1f2, a "demand" relating to his acceptance rate a1f2, and d1f2 would enter into a formula determining a1f2 which also involved u12, which was the amount of utility that player P2 would allocate to player P1 IF it developed that P2 became general agent.) (These details were not actually gone into in the limited time of my talk in July 1999.)

Conclusions in Relation to Further Model Development

As I said above, for smaller values of the parameter b at least, the calculations and the equilibria (to the extent that I could find them) seemed to suggest that the "nucleolus value" of (1/3,1/3,1/3) was better than the Shapley value of ([b+2]/6,[b+2]/6,[1-b]/3). But of course, as b increases towards its ultimate possible value of +1, the "nucleolus value" must become ([b+1]/4,[b+1]/4,[1-b]/2) when b increases above 1/3, and when b increases to +1 these two alternative value vectors must become the same, (1/2,1/2,0).
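The two evaluation formulas just quoted can be checked against the characteristic function of the model game, v(1,2) = b and v(1,2,3) = +1 with every other coalition worth zero, by a short calculation (sketched here in Python; the "nucleolus" function below simply returns the closed form quoted above rather than performing a general nucleolus computation):

from itertools import permutations

def v(S, b):
    """Characteristic function of the model game (coalitions as frozensets)."""
    if S == frozenset({1, 2, 3}):
        return 1.0
    if S == frozenset({1, 2}):
        return b
    return 0.0      # singletons, (1,3), (2,3), and the empty coalition

def shapley(b):
    """Shapley value, averaging marginal contributions over the 6 orderings."""
    players = (1, 2, 3)
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}, b) - v(coalition, b)
            coalition = coalition | {p}
    return tuple(phi[p] / 6 for p in players)

def nucleolus_closed_form(b):
    """Closed form quoted above: (1/3,1/3,1/3) for b <= 1/3, else ([b+1]/4,[b+1]/4,[1-b]/2)."""
    if b <= 1/3:
        return (1/3, 1/3, 1/3)
    return ((b + 1) / 4, (b + 1) / 4, (1 - b) / 2)

# shapley(b) reproduces ([b+2]/6, [b+2]/6, [1-b]/3), and at b = 1 the
# two value vectors coincide at (1/2, 1/2, 0).
for b in (0.10, 0.25, 2/3, 1.0):
    print(b, shapley(b), nucleolus_closed_form(b))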
But calculations failed to indicate that the modeling was adequate, for larger values of b, to fit either the Shapley or the nucleolus evaluation. In effect, the modeling of the cooperative potentialities of the players seemed to be weak with regard to its effective appreciation of the influence of the strengths of the potential alliances of the favored players, P1 and P2. So it seemed indicated that the modeling should be refined and THEN, with that done, possibly the strong benefits to players P1 and P2 that are indicated by the Shapley value might be more effectively appreciated by the outcome of calculations.

In the equilibria that could actually be apparently well calculated, with e = 1/30 and b = 1/4 or b = 1/10, it was found that, in the cases where the players (in the repeated game) would fail to achieve general coalescence (or the election of a general agent), more often they would fail to achieve any coalescence at all, and less often would a coalition of two players be formed (or an agent representing only himself and one other player be elected). Then, thinking about this, I began to realize that it would be necessary for the outcomes of incomplete coalescence to occur more frequently (compared with the frequency of total non-coalescence) in order for the special advantage of players P1 and P2 to make itself felt with more strength. The calculations had the players voting (accepting agencies) with such probabilities that once any agent had been elected at the first voting stage it was almost certain that a general agent would be elected at the second stage. So the value of the coalition (P1,P2) was effectively entering very little into the resulting payoffs, since that coalition was so rarely left unsuperseded by a general coalition with v(1,2,3) = +1. Of course, according to the nucleolus evaluation, the (1,2) coalition SHOULD actually fail to be effective unless b rises and becomes more than 1/3. But I failed to find the right sort of outcome, by calculations, for b = 2/3, for example. This led me to think that the modeling should be refined so that the players can more effectively seek to attract desirable acceptance results (from other players), particularly at the time of the first stage of voting.

Program for Further Work

So we can continue the study by calculations (using MATHEMATICA) for these simplest non-trivial 3-person bargaining games. It seems that we should have 5 different varieties of allocation strategy for each player, so that he/she, if elected to be general agent, allocates the gain to the players according to these cases (illustrated for player P1):

(1): P1 was first elected by P2 in stage 1, then by P3 in stage 2.
(2): P1 was first elected by P3 in stage 1, then by P2 in stage 2.
(3): P1 was elected in stage 1 by both of P2 and P3.
(4): P1 was elected in stage 2 by P2 as agent for (P2,P3).
(5): P1 was elected in stage 2 by P3 as agent for (P3,P2).

This, obviously, will greatly increase the number of parameters (strategic parameters) for which we will be seeking numerical solutions. Is this practical? (A rough sketch of this enlarged set of allocation parameters is given below.) Here I think there is a very fundamental principle relevant to the current status of the world's technological civilization. It is becoming more and more practical to solve problems by calculation procedures on a scale that was previously outside the range of feasible practicality. Large weather and geological calculations can be done in the area of PDE (partial differential equations) theory.
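Here, as mentioned above, is a rough sketch of the enlarged parameterization (illustrated for player P1, in the same Python sketch style, with arbitrary placeholder numbers; the representation is illustrative only):

# One allocation vector (non-negative, summing to v(1,2,3) = 1) for each
# of the five election routes by which P1 could become general agent.
# The numbers here are placeholders, not computed equilibrium values.
P1_allocations = {
    "case_1": (0.40, 0.35, 0.25),  # elected by P2 in stage 1, then by P3 in stage 2
    "case_2": (0.40, 0.25, 0.35),  # elected by P3 in stage 1, then by P2 in stage 2
    "case_3": (0.40, 0.30, 0.30),  # elected in stage 1 by both P2 and P3
    "case_4": (0.45, 0.35, 0.20),  # elected in stage 2 by P2 as agent for (P2,P3)
    "case_5": (0.45, 0.20, 0.35),  # elected in stage 2 by P3 as agent for (P3,P2)
}
assert all(abs(sum(x) - 1.0) < 1e-9 for x in P1_allocations.values())

# Each vector has 2 free parameters (3 non-negative components summing to 1),
# so the refined model carries 5 * 2 = 10 allocation parameters per player,
# 30 in all, against 2 per player (6 in all) under the earlier SIMPLIFIED
# uniform-allocation rule, and this before counting the "demand" parameters
# on the voting side of the model.

So the refinement roughly quintuples the number of allocation parameters, and this is the sense in which the question of practicality arises.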
The 4-color problem was successfully resolved 20+ years ago, but only in the era of heavy computing. The large scale calculations in "theoretical biology" are analogous; that area COULD study even more complex evolutionary issues involving game contexts relating to repeated asymmetrical prisoner's dilemma games. Etc.

And there is, with the computation resources that are or are becoming available, the possibility of "experimental mathematics" research relating to game theory. And that experimental variety or side of research is where I think that the approach in terms of agents as the means of cooperative coalescence can lead to a genuinely scientific approach to the challenge of "evaluating" cooperative games.

It is not quite obvious, a priori, why there should be a function defined for ALL cooperative games that would give a value vector for the game as a function of its description. And there are indeed very degenerate examples of NTU games where the concept of a value vector seems inappropriate. On the other hand, it IS reasonable that MOST such games might be assignable a reasonable value concept. But what is reasonable? A scientific approach hopefully can illuminate this question.