--------
\subsection{slide-7}
Recall ...
 
--------
\subsection{slide-20}
When faced with Hi-Lo, we might both spontaneously do this. Perhaps knowing the structure of the game could enable this.
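The contrast between individual and team reasoning in Hi-Lo can be made concrete with a small sketch. The payoff values below are illustrative assumptions, not taken from the text; the point is only that best-response reasoning leaves two equilibria, while maximising the team's total payoff singles out Hi-Hi.

```python
# Hi-Lo coordination game. Payoffs are assumed illustrative values:
# both pick Hi -> 2 each; both pick Lo -> 1 each; mismatch -> 0 each.
payoff = {
    ("Hi", "Hi"): (2, 2),
    ("Lo", "Lo"): (1, 1),
    ("Hi", "Lo"): (0, 0),
    ("Lo", "Hi"): (0, 0),
}

actions = ("Hi", "Lo")

def is_nash(profile):
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    for i in (0, 1):
        for alt in actions:
            deviation = list(profile)
            deviation[i] = alt
            if payoff[tuple(deviation)][i] > payoff[profile][i]:
                return False
    return True

# Individual reasoning: both (Hi, Hi) and (Lo, Lo) are equilibria,
# so best-response reasoning alone cannot select between them.
nash = [p for p in payoff if is_nash(p)]

# Team reasoning: maximise the team's total payoff; only (Hi, Hi) survives.
team_best = max(payoff, key=lambda p: sum(payoff[p]))
```

This is why Hi-Lo is the motivating case: individual rationality underdetermines the choice, whereas team reasoning does not.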
 
--------
\subsection{slide-21}
In fact even Prisoner’s Dilemma situations could bring this about in us.
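The same contrast is sharper in the Prisoner's Dilemma: the payoffs below are, again, assumed illustrative values. Defection strictly dominates for each individual reasoner, yet mutual cooperation maximises the team's total payoff.

```python
# Prisoner's Dilemma with assumed illustrative payoffs (row, column).
# C = cooperate, D = defect.
payoff = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Individual reasoning: D strictly dominates C for the row player
# (and, by symmetry, for the column player).
d_dominates = all(
    payoff[("D", other)][0] > payoff[("C", other)][0] for other in ("C", "D")
)

# Team reasoning: maximise the team's total payoff; (C, C) is best for the team.
team_best = max(payoff, key=lambda p: sum(payoff[p]))
```

So a team reasoner in a Prisoner's Dilemma can arrive at cooperation, which no amount of individual best-response reasoning delivers.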
 
--------
\subsection{slide-23}
Team reasoning gets us aggregate subjects, I think. After all, we can explicitly identify as members of a team, explicitly agree team preferences, and explicitly reason about how to maximise expected utility for the team.
 
--------
\subsection{slide-24}
Compare two routes to aggregate subjects: team reasoning and the reflectively-constituted-aggregate-subject idea due to Pettit, List, Helm and others which we considered earlier.
 
--------
\subsection{slide-25}
Team reasoning gets us aggregate subjects, I think. After all, we can explicitly identify as members of a team, explicitly agree team preferences, and explicitly reason about how to maximise expected utility for the team.
 
How is this different from the idea we encountered earlier, due to Pettit, List, Helm and others, of a reflectively constituted aggregate subject?
 
--------
\subsection{slide-26}
An obvious point is that team reasoning provides formal tools whereas the reflectively-constituted-aggregate-subject idea is presented informally. Can we think of them as analogs of each other, phrased in different ways?
 
--------
\subsection{slide-27}
The reflectively constituted aggregate subject exists because we think it exists. It has beliefs, desires and intentions because we think it does. This means there is a contrast between simple actions and the actions of a reflectively constituted aggregate subject. In the case of simple actions, I need to have, and act on beliefs, desires and intentions. But I don’t need to ascribe those states to myself. By contrast, when an aggregate subject acts, component subjects must be ascribing beliefs, desires and intentions to the aggregate subject.
 
Team reasoning imposes no comparable requirement. In team reasoning, we each have team-directed preferences and work out how to act on the basis of these. So team reasoning is just reasoning, except that it is based on a different set of premises.
 
--------
\subsection{slide-28}
Team reasoning is designed with one-off games in mind; we meet as strangers, play a single round of Hi-Lo, and never see each other again. (It doesn’t have to be one-off, of course.)
 
The reflectively constituted aggregate subject can only exist if it exists over a relatively long period of time. Why? I suppose that there can’t be agents with beliefs, desires and intentions that exist only for a moment; having such attitudes requires at least the prospect of persistence through time because what the attitudes explain is how a life unfolds. If so, it seems to me plausible that to attribute an attitude to an aggregate agent (or any agent) is to assume that it persists for more than a moment.
 
--------
\subsection{slide-29}
Proponents of team reasoning generally claim that its occurrence does not require shared intention or shared agency. If you think about the Hi-Lo game we encountered earlier, this seems fairly straightforward. While team-directed preferences of the team do require matching team-directed preferences of the team members, in this situation we might be entitled to rely on there being such preferences without having a shared intention concerning either the preferences or our actions.
 
For a reflectively constituted aggregate subject, shared intention is required as explained earlier.
 
--------
\subsection{slide-30}
But what are preferences?
 
 
--------
\subsection{slide-34}
If you have preferences, you satisfy the axioms.
 
Remember the Ellsberg Paradox: your failing to satisfy the axioms does not imply that your preferences are irrational; it implies that you do not have preferences at all.
 
Using Steele \& Stefánsson (2020, §2.3) here.
 
 
Suppose $A \preceq B \preceq C$. Then there is a $p \in [0,1]$ such that $\{pA, (1-p)C\} \sim B$ (Steele \& Stefánsson, 2020).

Suppose $A \preceq B$. Then for any $C$, and any $p \in [0,1]$: $\{pA, (1-p)C\} \preceq \{pB, (1-p)C\}$.
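Both axioms can be checked numerically. In the sketch below the utility values are assumed for illustration; continuity fixes the mixing probability as $p = \frac{u(C)-u(B)}{u(C)-u(A)}$, and independence says mixing both sides of a preference with the same third option preserves the ordering.

```python
from fractions import Fraction

# Assumed illustrative utilities with u(A) < u(B) < u(C).
u = {"A": Fraction(0), "B": Fraction(3), "C": Fraction(4)}

# Continuity: find p in [0,1] with {pA, (1-p)C} ~ B,
# i.e. p*u(A) + (1-p)*u(C) = u(B). Solving for p:
p = (u["C"] - u["B"]) / (u["C"] - u["A"])
assert 0 <= p <= 1
assert p * u["A"] + (1 - p) * u["C"] == u["B"]

def lottery_eu(q, x, y):
    """Expected utility of the lottery {q x, (1-q) y}."""
    return q * u[x] + (1 - q) * u[y]

# Independence: since A is weakly dispreferred to B, mixing each with C
# at the same probability q preserves the ordering, for any q.
q = Fraction(1, 2)
assert lottery_eu(q, "A", "C") <= lottery_eu(q, "B", "C")
```

Exact `Fraction` arithmetic is used so the indifference in the continuity check is a genuine equality rather than a floating-point approximation.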
 
--------
\subsection{slide-38}
We specified at the start that our theory concerned only games in which it was not possible to make an enforceable agreement in advance of playing.
 
--------
\subsection{slide-39}
 
--------
\subsection{slide-41}
These are the questions you would want to answer if you were going to pursue team reasoning.
 