
Please be cautious in using the transcripts.

They were created mechanically and have mostly not been checked or revised.

Here is how they were created:

  1. the live lecture was recorded;
  2. the recording was machine transcribed;
  3. an LLM was asked to clean up the transcript and link it to individual slides.

This is an error-prone process.


The Autonomy Dilemma (Objection to Pacherie)

insert-transcript#ffd2f4ec-f73d-465b-9406-bf8eebf2a655-here
[Roadmap diagram: problem of action / problem of joint action; 'we don't need shared intention' vs 'we do need shared intention'; Bratman's planning theory; Pacherie's team reasoning theory; decision theory; game theory and its limits (hi-lo, prisoner's dilemma); team reasoning.]
insert-transcript#4fad9a74-d61c-4ff4-959c-5d9d666943ce-here
It is completely coherent for Puppe to have preferences as long as we are consistent enough in living out the fiction.
The team-reasoning aggregate subject is an imaginary agent. It requires the work of the imagination to sustain it.
This is why the autonomy dilemma has such force: who is going to sustain it if its preferences are not mine or yours?
Why should we care about the merely imaginary agent at all?
Of course there are cases where we do (e.g. the Squadron), but these typically involve institutions and the like, so they are rare and costly.

Why suppose that team reasoning explains how there could be aggregate subjects?

  • we take* ourselves to be components of an aggregate agent
  • through team reasoning, we ensure that the aggregate agent’s choices maximise the aggregate agent’s expected utility
  • the aggregate agent has preferences (literally)
Team reasoning gets us aggregate subjects, I think. After all, we can explicitly identify as members of a team, explicitly agree team preferences, and explicitly reason about how to maximise expected utility for the team.
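To make the second bullet concrete, here is a minimal sketch, not from the lecture, of how team reasoning differs from individual best-response reasoning in the Hi-Lo game mentioned in the roadmap. The payoffs, and the use of the members' combined payoff as the aggregate agent's utility, are illustrative assumptions only.

```python
from itertools import product

ACTIONS = ["hi", "lo"]

def payoff(a1, a2):
    """Individual payoffs for the two players (assumed Hi-Lo payoffs)."""
    if a1 == a2:
        v = 2 if a1 == "hi" else 1
        return (v, v)
    return (0, 0)

def team_utility(a1, a2):
    """A simple stand-in for the aggregate agent's utility: the members' total payoff."""
    p1, p2 = payoff(a1, a2)
    return p1 + p2

# Team reasoning: rank whole action profiles by the team's utility and pick the
# best one; each member then plays their part of that profile.
best_profile = max(product(ACTIONS, repeat=2), key=lambda pr: team_utility(*pr))
print("team-reasoning choice:", best_profile)  # ('hi', 'hi')

# Individual best-response reasoning cannot single out ('hi', 'hi'):
# ('lo', 'lo') is also a Nash equilibrium, since neither player gains by
# deviating on their own.
for a1, a2 in [("hi", "hi"), ("lo", "lo")]:
    no_gain = all(payoff(alt, a2)[0] <= payoff(a1, a2)[0] for alt in ACTIONS) and \
              all(payoff(a1, alt)[1] <= payoff(a1, a2)[1] for alt in ACTIONS)
    print((a1, a2), "is a Nash equilibrium:", no_gain)
```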
insert-transcript#d9b8670c-9f36-4349-8cbc-709b1cf69452-here
If you have preferences, you satisfy the axioms.
Remember the Ellsberg Paradox: your not satisfying the axioms does not imply that your preferences are irrational; it implies that you do not have preferences at all.
Using Steele & Stefánsson (2020, §2.3) here.

transitivity

For any A, B, C ∈ S: if A⪯B and B⪯C then A⪯C.

(Steele & Stefánsson, 2020)

completeness

For any A, B ∈ S: either A⪯B or B⪯A.

continuity

‘Continuity implies that no outcome is so bad that you would not be willing to take some gamble that might result in you ending up with that outcome [...] provided that the chance of the bad outcome is small enough.’

Suppose A⪯B⪯C. Then there is a p∈[0,1] such that: {pA, (1 − p)C} ∼ B (Steele & Stefánsson, 2020)

independence

roughly, if you prefer A to B then you should prefer A and C to B and C.

Suppose A⪯B. Then for any C, and any p∈[0,1]: {pA,(1−p)C}⪯{pB,(1−p)C}

(Steele & Stefánsson, 2020, §2.3)
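As a rough illustration of what the first two axioms demand, here is a minimal sketch (not from Steele & Stefánsson; the options and utilities are invented) that checks completeness and transitivity for a finite set of options. Continuity and independence quantify over lotteries, so they are not checked here.

```python
from itertools import product

def is_complete(options, weakly_preferred):
    """Completeness: for any A, B, either A ⪯ B or B ⪯ A."""
    return all(weakly_preferred(a, b) or weakly_preferred(b, a)
               for a, b in product(options, repeat=2))

def is_transitive(options, weakly_preferred):
    """Transitivity: if A ⪯ B and B ⪯ C then A ⪯ C."""
    return all(weakly_preferred(a, c)
               for a, b, c in product(options, repeat=3)
               if weakly_preferred(a, b) and weakly_preferred(b, c))

# An agent who ranks options by a single utility number satisfies both axioms ...
options = ["pub", "wine bar", "cinema"]
utility = {"pub": 1, "wine bar": 2, "cinema": 3}
by_utility = lambda a, b: utility[a] <= utility[b]          # a ⪯ b
print(is_complete(options, by_utility), is_transitive(options, by_utility))  # True True

# ... whereas an agent who only ranks some pairs fails completeness. On the view
# in the notes above (cf. the Ellsberg paradox), such an agent does not have
# preferences over these options at all.
stated = {("pub", "wine bar"), ("wine bar", "cinema")}
partial = lambda a, b: a == b or (a, b) in stated
print(is_complete(options, partial))                        # False: pub vs cinema unranked
```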

insert-transcript#f5938743-2d77-45bb-a161-607d673786bd-here

For an aggregate agent comprising me and you to nonaccidentally satisfy these axioms

- we must coincide on what its preferences are

- we must coincide on when each of us is acting as part of the aggregate (rather than individually)
... and on which part of the aggregate we each are

- whenever any of us unilaterally attempts to influence what its preferences are by acting, we must succeed in doing so

- ...

Seems to require something like institutional structure. Head of department just says how things are.
But you will often get people going a bit off-script.
insert-transcript#c9aa510e-03c9-44d8-ab9c-015909e2900d-here

autonomy

‘There is ... nothing inherently inconsistent in the possibility that every member of the group has an individual preference for y over x (say, each prefers wine bars to pubs) while the group acts on an objective that ranks x above y.’

(Sugden, 2000)

dilemma

autonomy -> it is demanding for team reasoning to occur (because the aggregate must nonaccidentally satisfy the axioms)

TODO: Improve argument.
UPDATE 2025-11
I think this is too quick. Bacharach does not agree with Sugden about autonomy: he offers Paretianness (roughly, if something is weakly better for each member, it is better for the team) as a principle linking individual to group preferences. But on that view it might, for all I have shown, still be reasonable to think that there is an aggregate subject of preferences.
HOWEVER the larger argument still goes through. If we take the aggregate’s preferences to be a function of individual ones, the aggregate is not genuinely distinct; if we do not, then the aggregate has to satisfy the axioms!
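To see why Paretianness alone may not settle the matter, here is a hypothetical sketch (members, options, and utilities invented for illustration): the principle delivers some group preferences directly from individual ones, but is silent where members disagree, so by itself it does not give the aggregate a complete preference ordering.

```python
# Hypothetical members, options, and utilities, chosen only for illustration.
member_utilities = {
    "you": {"pub": 2, "wine bar": 1, "cinema": 3},
    "me":  {"pub": 1, "wine bar": 2, "cinema": 3},
}

def group_weakly_prefers(b, a):
    """Paretianness: the group weakly prefers b to a iff every member does."""
    return all(u[b] >= u[a] for u in member_utilities.values())

# The principle delivers some group preferences directly from individual ones ...
print(group_weakly_prefers("cinema", "pub"))    # True: weakly better for each of us

# ... but where members disagree it is silent, so the group relation fails
# completeness (neither pub ⪯ wine bar nor wine bar ⪯ pub holds):
print(group_weakly_prefers("pub", "wine bar"))  # False
print(group_weakly_prefers("wine bar", "pub"))  # False
```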

no autonomy -> no aggregate subject after all (just self-interested optimising)

We specified at the start that our theory concerned only games in which it was not possible to make an enforceable agreement in advance of playing.
insert-transcript#108865a9-6d51-4f64-a3c0-f438a5c3f7eb-here

Objection to Pacherie

1. Shared intention lite requires non-trivial aggregate agents.

2. Aggregate agents are trivial unless autonomous.

3. Achieving autonomy is demanding.

therefore:

4. We cannot explain joint action in the second and third years of life by invoking shared intention lite.

This is a bad argument—such things are difficult to specify and difficult to demonstrate. Can you help me improve the argument?
insert-transcript#9ef4d6ae-38c9-4dd2-a9b2-e2705e03a066-here
[Roadmap diagram: problem of action / problem of joint action; 'we don't need shared intention' vs 'we do need shared intention'; Bratman's planning theory; Pacherie's team reasoning theory; decision theory; game theory and its limits (hi-lo, prisoner's dilemma); team reasoning.]
BTW we also had an objection to Bratman earlier, so it looks like we are in some kind of trouble (more on this in the conclusion).