Team Reasoning and Aggregate Agents

These are the questions you would want to answer if you were going to pursue team reasoning.

1. What is team reasoning?

2. In what sense does team reasoning give rise to aggregate agents?

3. How might team reasoning be used in constructing a theory of shared agency?

‘collective intentions are the product of a distinctive mode of practical reasoning, team reasoning, in which agency is attributed to groups.’

(Gold & Sugden, 2007)

recall ...

How?

aggregate subject

previously ...

aggregate subjects constituted by self-reflection (Pettit, 2014)

observation: appears to presuppose shared intention

‘collective intentions are the product of a distinctive mode of practical reasoning, team reasoning, in which agency is attributed to groups.’

(Gold & Sugden, 2007)

To which aggregate subjects (‘groups’) is agency attributed

in team reasoning?

‘[A] team exists to the extent that its members take themselves to be members of it.

[T]o take oneself to be a member of a team is

  • to engage in [team] reasoning oneself,
  • while holding certain beliefs about the use of [team] reasoning by others’

(Sugden, 2000)

Under what conditions might you and I

take* ourselves to be members of a you-and-I team?

*in Sugden’s special sense of ‘take’

When faced with hi-lo, we might both spontaneously do this. Perhaps knowing the structure of the game could enable this.
                   Player X
                high       low
  Player Y
    high        2, 2       0, 0
    low         0, 0       1, 1

(payoffs listed as Player Y, Player X)
In fact even Prisoner’s Dilemma situations could bring this about in us.
                   Prisoner X
                resist     confess
  Prisoner Y
    resist      3, 3       0, 4
    confess     4, 0       1, 1

(payoffs listed as Prisoner Y, Prisoner X)
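The team reasoner's calculation in the two games above can be sketched in code. This is only an illustration: the payoffs are taken from the matrices, and the team's payoff is assumed, purely for simplicity, to be the sum of the individual payoffs (nothing in the theory requires this).

```python
# Team reasoning over the two games above: the team reasoner asks
# "what should *we* do?" and selects the profile that is best for the
# team. For illustration only, the team's payoff is assumed to be the
# sum of the players' payoffs.

# (Y's move, X's move) -> (Y's payoff, X's payoff)
HI_LO = {
    ("high", "high"): (2, 2), ("high", "low"): (0, 0),
    ("low", "high"): (0, 0), ("low", "low"): (1, 1),
}
PRISONERS_DILEMMA = {
    ("resist", "resist"): (3, 3), ("resist", "confess"): (0, 4),
    ("confess", "resist"): (4, 0), ("confess", "confess"): (1, 1),
}

def team_choice(game):
    """Profile maximising the (assumed, summed) team payoff."""
    return max(game, key=lambda profile: sum(game[profile]))

print(team_choice(HI_LO))              # ('high', 'high')
print(team_choice(PRISONERS_DILEMMA))  # ('resist', 'resist')
```

Individual best-response reasoning leaves both equilibria standing in Hi-Lo and selects confess/confess in the Prisoner's Dilemma; maximising the team's payoff instead picks out high/high and resist/resist.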

Why suppose that team reasoning explains how

there could be aggregate subjects?

  • we take* ourselves to be components of an aggregate agent
  • the aggregate agent has preferences (literally)
  • through team reasoning, we ensure that the aggregate agent’s choices maximise the aggregate agent’s expected utility
Team reasoning gets us aggregate subjects, I think. After all, we can explicitly identify as members of a team, explicitly agree team preferences, and explicitly reason about how to maximise expected utility for the team.
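The third bullet can be given a minimal numerical sketch. The plans, probabilities and team utilities below are invented for illustration; the point is just that the aggregate agent's choice is whatever maximises its expected utility, computed from the team's utilities.

```python
# Assumed example: each team plan yields a lottery over outcomes,
# given as (probability, team utility) pairs. The aggregate agent's
# choice maximises expected utility computed from *team* utilities.
plans = {
    "both high": [(0.9, 2), (0.1, 0)],  # small chance of miscoordination
    "both low":  [(1.0, 1)],
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

best = max(plans, key=lambda plan: expected_utility(plans[plan]))
print(best)  # 'both high' (expected utility 1.8 vs 1.0)
```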

Compare two routes to aggregate subjects.

Compare two routes to aggregate subjects: team reasoning and the reflectively-constituted-aggregate-subject idea due to Pettit, List, Helm and others which we considered earlier.
How is this different from the idea we encountered earlier, due to Pettit, List, Helm and others, of a reflectively constituted aggregate subject?

team reasoning

reflectively constituted aggregate subject

An obvious point is that team reasoning provides formal tools whereas the reflectively-constituted-aggregate-subject idea is presented informally. Can we think of them as analogs of each other, phrased in different ways?

formal

informal

need not be reflective

reflective

The reflectively constituted aggregate subject exists because we think it exists. It has beliefs, desires and intentions because we think it does. This means there is a contrast between simple actions and the actions of a reflectively constituted aggregate subject. In the case of simple actions, I need to have, and act on, beliefs, desires and intentions. But I don’t need to ascribe those states to myself. By contrast, when an aggregate subject acts, component subjects must be ascribing beliefs, desires and intentions to the aggregate subject.
Team reasoning imposes no comparable requirement. In team reasoning, we each have team-directed preferences and work out how to act on the basis of these. So team reasoning is just reasoning, except that it is based on a different set of premises.

can be short-term

Team reasoning is designed with one-off games in mind; we meet as strangers, play a single round of Hi-Lo, and never see each other again. (It doesn’t have to be one-off, of course.)

long-term

The reflectively constituted aggregate subject can only exist if it exists over a relatively long period of time. Why? I suppose that there can’t be agents with beliefs, desires and intentions that exist only for a moment; having such attitudes requires at least the prospect of persistence through time because what the attitudes explain is how a life unfolds. If so, it seems to me plausible that to attribute an attitude to an aggregate agent (or any agent) is to assume that it persists for more than a moment.

need not depend on shared agency

Proponents of team reasoning generally claim that its occurrence does not require shared intention or shared agency. If you think about the Hi-Lo game we encountered earlier, this seems fairly straightforward. While team-directed preferences of the team do require matching team-directed preferences of the team members, in this situation we might be entitled to rely on there being such preferences without having a shared intention concerning either the preferences or our actions.

depends on shared agency

For a reflectively constituted aggregate subject, shared intention is required as explained earlier.

requires preferences

But what are preferences?

does not require preferences?

Why suppose that team reasoning explains how

there could be aggregate subjects?

  • we take* ourselves to be components of an aggregate agent
  • the aggregate agent has preferences (literally)
  • through team reasoning, we ensure that the aggregate agent’s choices maximise the aggregate agent’s expected utility
If you have preferences, you satisfy the axioms.
Remember the Ellsberg Paradox: your failing to satisfy the axioms does not imply that your preferences are irrational; it implies that you do not have preferences at all.
Using Steele & Stefánsson (2020, §2.3) here.

transitivity

For any A, B, C ∈ S: if A⪯B and B⪯C then A⪯C.

(Steele & Stefánsson, 2020)

completeness

For any A, B ∈ S: either A⪯B or B⪯A.

continuity

‘Continuity implies that no outcome is so bad that you would not be willing to take some gamble that might result in you ending up with that outcome [...] provided that the chance of the bad outcome is small enough.’

Suppose A⪯B⪯C. Then there is a p∈[0,1] such that: {pA, (1 − p)C} ∼ B (Steele & Stefánsson, 2020)
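A worked instance of continuity, with the utilities invented purely for illustration (u(A)=0, u(B)=0.7, u(C)=1):

```python
# Continuity sketch with assumed utilities u(A)=0, u(B)=0.7, u(C)=1.
# The lottery {pA, (1-p)C} has expected utility p*u(A) + (1-p)*u(C);
# continuity says some p makes this indifferent to the middling B.
u = {"A": 0.0, "B": 0.7, "C": 1.0}

# Solve p*u(A) + (1-p)*u(C) = u(B) for p:
p = (u["C"] - u["B"]) / (u["C"] - u["A"])
print(p)  # ~0.3: a 0.3 chance of the worst outcome A is tolerable
```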

independence

roughly, if you prefer A to B then you should prefer A and C to B and C.

Suppose A⪯B. Then for any C, and any p∈[0,1]: {pA,(1−p)C}⪯{pB,(1−p)C}

(Steele & Stefánsson, 2020, §2.3)
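For a finite option set, transitivity and completeness can be checked mechanically. A sketch, with the options and the relation invented for illustration (A⪯B encoded as the pair (A, B) belonging to a set of pairs):

```python
from itertools import product

def is_complete(options, weakly_preferred):
    # For any A, B: either A <= B or B <= A.
    return all((a, b) in weakly_preferred or (b, a) in weakly_preferred
               for a, b in product(options, repeat=2))

def is_transitive(options, weakly_preferred):
    # If A <= B and B <= C then A <= C.
    return all((a, c) in weakly_preferred
               for a, b, c in product(options, repeat=3)
               if (a, b) in weakly_preferred and (b, c) in weakly_preferred)

# Invented example: pub <= cafe <= wine bar (plus reflexive pairs).
opts = {"pub", "cafe", "wine bar"}
R = {("pub", "cafe"), ("cafe", "wine bar"), ("pub", "wine bar"),
     ("pub", "pub"), ("cafe", "cafe"), ("wine bar", "wine bar")}
print(is_complete(opts, R), is_transitive(opts, R))  # True True
```

Dropping the pair ("pub", "wine bar") would break transitivity: pub⪯cafe and cafe⪯wine bar would then demand a pub⪯wine bar link that the relation lacks.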

autonomy

‘There is ... nothing inherently inconsistent in the possibility that every member of the group has an individual preference for y over x (say, each prefers wine bars to pubs) while the group acts on an objective that ranks x above y.’

(Sugden, 2000)

dilemma

autonomy -> team reasoning will rarely occur, because the team’s preferences must themselves satisfy the axioms

no autonomy -> no aggregate subject after all (just cooperative games)

We specified at the start that our theory concerned only games in which it was not possible to make an enforceable agreement in advance of playing.

team reasoning

reflectively constituted aggregate subject

formal

informal

need not be reflective

reflective

can be short-term

long-term

need not depend on shared agency

depends on shared agency

requires preferences

does not require preferences?

These are the questions you would want to answer if you were going to pursue team reasoning.

1. What is team reasoning?

2. In what sense does team reasoning give rise to aggregate agents?

3. How might team reasoning be used in constructing a theory of shared agency?