
An extension of May's Theorem to three alternatives: axiomatizing Minimax voting, Wesley H. Holliday and Eric Pacuit, under submission, 2024

May's Theorem [K. O. May, Econometrica 20 (1952) 680-684] characterizes majority voting on two alternatives as the unique preferential voting method satisfying several simple axioms. Here we show that by adding some desirable axioms to May's axioms, we can uniquely determine how to vote on three alternatives. In particular, we add two axioms stating that the voting method should mitigate spoiler effects and avoid the so-called strong no show paradox. We prove a theorem stating that any preferential voting method satisfying our enlarged set of axioms, which includes some weak homogeneity and preservation axioms, agrees with Minimax voting in all three-alternative elections, except perhaps in some improbable knife-edged elections in which ties may arise and be broken in different ways.
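The Minimax method referenced above elects the candidate(s) whose largest head-to-head loss is smallest. As a minimal sketch of the rule (not code from the paper; the function name `minimax_winners` and the margin-matrix input format are my own choices for illustration):

```python
def minimax_winners(margin):
    """Return the Minimax winners given a margin matrix, where margin[a][b]
    is the number of voters ranking a above b minus the number ranking b
    above a (so margin[a][b] == -margin[b][a])."""
    cands = list(margin)
    # A candidate's score is its worst (largest) margin of defeat.
    worst_loss = {a: max(margin[b][a] for b in cands) for a in cands}
    best = min(worst_loss.values())
    return [a for a in cands if worst_loss[a] == best]

# A three-candidate majority cycle: A beats B by 2, B beats C by 4, C beats A by 6.
margin = {
    "A": {"A": 0, "B": 2, "C": -6},
    "B": {"A": -2, "B": 0, "C": 4},
    "C": {"A": 6, "B": -4, "C": 0},
}
print(minimax_winners(margin))  # B's worst loss (margin 2) is the smallest
```

Note that in a majority cycle like this one, every candidate loses some head-to-head comparison; Minimax breaks the cycle by penalizing each candidate only for its worst loss.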

An Axiomatic Characterization of Split Cycle, Yifeng Ding, Wesley H. Holliday, and Eric Pacuit, forthcoming in Social Choice and Welfare, 2024

A number of rules for resolving majority cycles in elections have been proposed in the literature. Recently, Holliday and Pacuit (Journal of Theoretical Politics 33 (2021) 475-524) axiomatically characterized the class of rules refined by one such cycle-resolving rule, dubbed Split Cycle: in each majority cycle, discard the majority preferences with the smallest majority margin. They showed that any rule satisfying five standard axioms, plus a weakening of Arrow's Independence of Irrelevant Alternatives (IIA) called Coherent IIA, is refined by Split Cycle. In this paper, we go further and show that Split Cycle is the only rule satisfying the axioms of Holliday and Pacuit together with two additional axioms: Coherent Defeat and Positive Involvement in Defeat. Coherent Defeat states that any majority preference not occurring in a cycle is retained, while Positive Involvement in Defeat is closely related to the well-known axiom of Positive Involvement (as in J. Perez, Social Choice and Welfare 18 (2001) 601-616). We characterize Split Cycle not only as a collective choice rule but also as a social choice correspondence, over both profiles of linear ballots and profiles of ballots allowing ties.
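The cycle-resolving rule described above ("in each majority cycle, discard the majority preferences with the smallest majority margin") can be sketched by enumerating the simple cycles of the margin graph directly. A toy implementation under my own naming conventions, assuming a margin-matrix input and practical only for small candidate sets:

```python
from itertools import permutations

def split_cycle_winners(margin):
    """Split Cycle winners from a margin matrix (margin[a][b] = margin of a
    over b). In every simple majority cycle, the majority preferences with
    the smallest margin are discarded; a candidate wins iff no surviving
    majority preference defeats it."""
    cands = list(margin)
    discarded = set()
    for r in range(3, len(cands) + 1):  # majority cycles have length >= 3
        for cycle in permutations(cands, r):
            edges = list(zip(cycle, cycle[1:] + cycle[:1]))
            if all(margin[a][b] > 0 for a, b in edges):
                weakest = min(margin[a][b] for a, b in edges)
                discarded.update((a, b) for a, b in edges
                                 if margin[a][b] == weakest)
    return [b for b in cands
            if not any(margin[a][b] > 0 and (a, b) not in discarded
                       for a in cands)]
```

On the cycle A beats B by 2, B beats C by 4, C beats A by 6, the weakest edge (A over B) is discarded, leaving B undefeated and hence the unique winner.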

Learning to Manipulate under Limited Information, Wesley H. Holliday, Alexander Kristoffersen, and Eric Pacuit, presented at Workshop on Social Choice and Learning Algorithms 2024 and New Directions in Social Choice 2024, 2024

By classic results in social choice theory, any reasonable preferential voting method sometimes gives individuals an incentive to report an insincere preference. The extent to which different voting methods are more or less resistant to such strategic manipulation has become a key consideration for comparing voting methods. Here we measure resistance to manipulation by whether neural networks of varying sizes can learn to profitably manipulate a given voting method in expectation, given different types of limited information about how other voters will vote. We trained over 70,000 neural networks of 26 sizes to manipulate against 8 different voting methods, under 6 types of limited information, in committee-sized elections with 5-21 voters and 3-6 candidates. We find that some voting methods, such as Borda, are highly manipulable by networks with limited information, while others, such as Instant Runoff, are not, despite being quite profitably manipulated by an ideal manipulator with full information. For the two probability models for elections that we use, the overall least manipulable of the 8 methods we study are Condorcet methods, namely Minimax and Split Cycle.
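For readers unfamiliar with the Borda method mentioned above: with m candidates on a ballot, the candidate ranked in position i earns m - 1 - i points, and the candidate with the highest total wins. A minimal sketch (the helper name and ballot format are illustrative, not from the paper):

```python
def borda_winners(ballots):
    """Borda winners from strict ranked ballots, each given as a list ordered
    from most to least preferred; position i on a ballot of length m earns
    m - 1 - i points."""
    scores = {}
    for ballot in ballots:
        m = len(ballot)
        for i, cand in enumerate(ballot):
            scores[cand] = scores.get(cand, 0) + (m - 1 - i)
    top = max(scores.values())
    return sorted(c for c, s in scores.items() if s == top)

ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "C", "A"]]
print(borda_winners(ballots))  # A: 4 points, B: 3, C: 2
```

Because a voter can demote a dangerous rival to the bottom of the ballot and thereby strip it of points, Borda is a standard example of a highly manipulable scoring rule.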

Advocacy Games, Steven Kuhn and Eric Pacuit, in preparation, 2024

Epistemic Game Theory, Paolo Galeazzi, Eric Pacuit, and Olivier Roy, short book in preparation, 2023

Stable Voting, Wesley H. Holliday and Eric Pacuit, Constitutional Political Economy, 2023

We propose a new single-winner voting system using ranked ballots: Stable Voting. The motivating principle of Stable Voting is that if a candidate $A$ would win without another candidate $B$ in the election, and $A$ beats $B$ in a head-to-head majority comparison, then $A$ should still win in the election with $B$ included (unless there is another candidate $A'$ who has the same kind of claim to winning, in which case a tiebreaker may choose between such candidates). We call this principle Stability for Winners (with Tiebreaking). Stable Voting satisfies this principle while also having a remarkable ability to avoid tied outcomes in elections even with small numbers of voters.

Split Cycle: A New Condorcet Consistent Voting Method Independent of Clones and Immune to Spoilers, Wesley H. Holliday and Eric Pacuit, Public Choice, 2023

We introduce a new Condorcet consistent voting method, called Split Cycle. Split Cycle belongs to the small family of known voting methods satisfying independence of clones and the Pareto principle. Unlike other methods in this family, Split Cycle satisfies a new criterion we call immunity to spoilers, which concerns adding candidates to elections, as well as the known criteria of positive involvement and negative involvement, which concern adding voters to elections. Thus, relative to other clone-independent Paretian methods, Split Cycle mitigates "spoiler effects" and "strong no show paradoxes."

Choice Driven Counterfactuals, Ilaria Canavotto and Eric Pacuit, Journal of Philosophical Logic, 51, pp. 297 - 345, 2022

In this paper, we investigate the semantics and logic of *choice-driven counterfactuals*, that is, of counterfactuals whose evaluation relies on auxiliary premises about how agents are expected to act, *i.e.*, about their *default choice behavior*. To do this, we merge one of the most prominent logics of agency in the philosophical literature, namely stit logic [Belnap et al., 2001; Horty, 2001], with the well-known logic of counterfactuals due to Stalnaker [1968] and Lewis [1973]. A key component of our semantics for counterfactuals is to distinguish between *deviant* and *non-deviant* actions at a moment, where an action available to an agent at a moment is deviant when its performance does not agree with the agent's default choice behavior at that moment. After developing and axiomatizing a stit logic with action types, instants, and deviant actions, we study the philosophical implications and logical properties of two candidate semantics for choice-driven counterfactuals, one called *rewind models* inspired by Lewis [1979] and the other called *independence models* motivated by well-known counterexamples to Lewis's proposal [Slote, 1978]. In the last part of the paper we consider how to evaluate choice-driven counterfactuals at moments arrived at by some agents performing a deviant action.

Axioms for Defeat in Democratic Elections, Wesley Holliday and Eric Pacuit, Journal of Theoretical Politics, 33(4), pp. 475 - 524, 2021

We propose six axioms concerning when one candidate should defeat another in a democratic election involving two or more candidates. Five of the axioms are widely satisfied by known voting procedures. The sixth axiom is a weakening of Kenneth Arrow's famous condition of the Independence of Irrelevant Alternatives (IIA). We call this weakening Coherent IIA. We prove that the five axioms plus Coherent IIA single out a voting procedure studied in our recent work: Split Cycle. In particular, Split Cycle is the most resolute voting procedure satisfying the six axioms for democratic defeat. In addition, we analyze how Split Cycle escapes Arrow's Impossibility Theorem and related impossibility results.

Arrow's Decisive Coalitions, Wesley Holliday and Eric Pacuit, Social Choice and Welfare, 54, pp. 463–505, 2020

In his classic monograph, *Social Choice and Individual Values*, Arrow introduced the notion of a decisive coalition of voters as part of his mathematical framework for social choice theory. The subsequent literature on Arrow's Impossibility Theorem has shown the importance for social choice theory of reasoning about coalitions of voters with different grades of decisiveness. The goal of this paper is a fine-grained analysis of reasoning about decisive coalitions, formalizing how the concept of a decisive coalition gives rise to a social choice theoretic language and logic all of its own. We show that given Arrow's axioms of the Independence of Irrelevant Alternatives and Universal Domain, rationality postulates for social preference correspond to strong axioms about decisive coalitions. We demonstrate this correspondence with results of a kind familiar in economics (representation theorems) as well as results of a kind coming from mathematical logic (completeness theorems). We present a complete logic for reasoning about decisive coalitions, along with formal proofs of Arrow's and Wilson's theorems. In addition, we prove the correctness of an algorithm for calculating, given any social rationality postulate of a certain form in the language of binary preference, the corresponding axiom in the language of decisive coalitions. These results suggest new perspectives and tools from logic for social choice theory.

Action Types in Stit Semantics, John F. Horty and Eric Pacuit, Review of Symbolic Logic, 10(4), pp. 617 - 637, 2017

Stit semantics grows out of a modal tradition in the logic of action that concentrates on an operator representing the agency of an individual in seeing to it that some state of affairs holds, rather than on the actions the individual performs in doing so. The purpose of this paper is to enrich stit semantics, and especially epistemic stit semantics, by supplementing the overall framework with an explicit treatment of action types. We show how the introduction of these new action types allows us to define a modal operator capturing an epistemic sense of agency, and how this operator can be used to express an epistemic sense of ability.

Dynamics for Probabilistic Common Belief, Eric Pacuit, Studies in Logic, 8(4), pp. 32 - 53, 2015

Robert Aumann's agreeing to disagree theorem shows that if two agents have the same prior probability and update their probability of an event E with private information by conditioning, then if the posterior probabilities of E are common knowledge, then the posteriors must be the same. Dov Monderer and Dov Samet prove a generalization of Aumann's result involving a probabilistic variant of common knowledge. In this paper, I use various methods from probabilistic and dynamic-epistemic logics to explore a dynamic characterization of the Monderer and Samet result. The main goal is to develop a model that describes the evolution of the agents' knowledge and (probabilistic) beliefs as they interact with each other and the environment. I will show how the logical frameworks are rich and flexible tools that can be used to study many dynamic processes of interactive social belief change.

On the Use (and Abuse) of Logic in Game Theory, Eric Pacuit, Journal of Philosophical Logic, 44(6), pp. 741 - 753, 2015

A quick glance at the opening paragraphs in many of the classic logic textbooks reveals a common view: Logical methods highlight the reasoning patterns of a single (idealized) agent engaged in some form of mathematical thinking. However, this traditional view of the 'subject matter' of logic is expanding. There is a growing literature using phrases such as 'rational interaction' or 'information flow' to describe its subject matter while still employing traditional logical methods. The clearest examples of this can be found in the work of Johan van Benthem and others on logical dynamics; Rohit Parikh and others on social software; Samson Abramsky and others on game semantics for linear logic; and Mike Wooldridge, Valentin Goranko, and others on logics for multiagent systems. There are many issues driving this shift in thinking about what logic is about. One important reason for this shift is the close connection between logic and game theory.

For centuries, logicians have used game-theoretic notions in their analyses. In the past 20 years or so, the influence has started to run in the other direction. There is now a very active research area focused on adapting existing logical systems and developing new ones to reason about different aspects of a game situation. Full coverage of this fascinating research area would require much more space than I have in this article. So, I will not attempt a comprehensive overview of the use of logic in game theory and, instead, will focus on just a few key topics.

Changing Types: Information Dynamics for Qualitative Type Spaces, Dominik Klein and Eric Pacuit, Studia Logica, 102(2), pp. 297 - 319, 2014

Many different approaches to describing the players' knowledge and beliefs can be found in the literature on the epistemic foundations of game theory. We focus here on non-probabilistic approaches. The two most prominent are the so-called Kripke- or Aumann-structures and knowledge structures (non-probabilistic variants of Harsanyi type spaces). Much of the recent work on Kripke structures has focused on dynamic extensions and simple ways of incorporating these. We argue that many of these ideas can be applied to knowledge structures as well. Our main result characterizes precisely when one type can be transformed into another type by a specific type of information update. Our work in this paper suggests that it would be interesting to pursue a theory of "information dynamics" for knowledge structures (and eventually Harsanyi type spaces).

Evidence and Plausibility in Neighborhood Structures, Johan van Benthem, David Fernandez-Duque, and Eric Pacuit, Annals of Pure and Applied Logic, 165(1), pp. 106 - 133, 2014

- This paper supersedes Johan van Benthem, David Fernández-Duque, and Eric Pacuit, Evidence Logic: A New Look at Neighborhood Structures, in *Proceedings of Advances in Modal Logic, Volume 9*, pp. 97 - 118.

The intuitive notion of evidence has both semantic and syntactic features. In this paper, we develop an evidence logic for epistemic agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Further notions of relative plausibility between worlds and beliefs based on the latter ordering are then defined in terms of this evidence structure, yielding our intended models for evidence-based beliefs. In addition, we consider a second, more general flavor, where belief and plausibility are modeled using additional primitive relations, and we prove a representation theorem showing that each such general model is a p-morphic image of an intended one. This semantics invites a number of natural special cases, depending on how uniform we make the evidence sets, and how coherent their total structure. We give a structural study of the resulting 'uniform' and 'flat' models. Our main results are sound and complete axiomatizations for the logics of all four major model classes with respect to the modal language of evidence, belief and safe belief. We conclude with an outlook toward logics for the dynamics of changing evidence, and the resulting language extensions and connections with logics of plausibility change.

Dynamic Epistemic Logic II: Logics of Information Change, Eric Pacuit, Philosophy Compass, 8(9), pp. 815 - 833, 2013

This is the second paper in a two-part series introducing logics for reasoning about the dynamics of knowledge and beliefs. Part I introduced different logical systems that can be used to reason about the knowledge and beliefs of a group of agents. In this second paper, I show how to adapt these logical systems to reason about the knowledge and beliefs of a group of agents during the course of a social interaction or rational inquiry. Inference, communication and observation are typical examples of informative events, which have been subjected to a logical analysis. The main goal of this article is to introduce the key conceptual and technical issues that drive much of the research in this area.

Dynamic Epistemic Logic I: Modeling Knowledge and Belief, Eric Pacuit, Philosophy Compass, 8(9), pp. 798 - 814, 2013

Dynamic epistemic logic, broadly conceived, is the study of logics of *information change*. This is the first paper in a two-part series introducing this research area. In this paper, I introduce the basic logical systems for reasoning about the knowledge and beliefs of a group of agents.

Substantive Assumptions in Interaction: A Logical Perspective, Olivier Roy and Eric Pacuit, Synthese, 190(5), pp. 891 - 908, 2013

In this paper we study substantive assumptions in social interaction. By substantive assumptions we mean contingent assumptions about what the players know and believe about each other's choices and information. We first explain why substantive assumptions are fundamental for the analysis of games and, more generally, social interaction. Then we show that they can be compared formally, and that there exist contexts where no substantive assumptions are being made. Finally we show that the questions raised in this paper are related to a number of issues concerning 'large' structures in epistemic game theory.

Editorial Introduction: Selected Papers from the 2nd Workshop on Logic Rationality and Interaction (LORI-II), Eric Pacuit, Journal of Philosophical Logic, 41(4), pp. 629 - 631, 2012

- Editorial introduction for a special issue of the Journal of Philosophical Logic that I edited.

Dynamic Logics of Evidence-Based Beliefs, Johan van Benthem and Eric Pacuit, Studia Logica, 99, pp. 61 - 92, 2011

This paper adds evidence structure to standard models of belief, in the form of families of sets of worlds. We show how these more fine-grained models support natural actions of 'evidence management', ranging from update with external new information to internal rearrangement. We show how this perspective leads to new richer languages for existing neighborhood semantics for modal logic. Our main results are relative completeness theorems for the resulting dynamic logic of evidence.

Reasoning with Protocols under Imperfect Information, Eric Pacuit and Sunil Simon, The Review of Symbolic Logic, 4(3), pp. 412 - 444, 2011

We introduce and study a PDL-style logic for reasoning about protocols, or plans, under imperfect information. Our paper touches on a number of issues surrounding the relationship between an agent's abilities, available choices, and information in an interactive situation. The main question we address is: under what circumstances can the agent commit to a protocol or plan, and what can she achieve by doing so?

Toward a Theory of Play: A Logical Perspective on Games and Interaction, Johan van Benthem, Eric Pacuit, and Olivier Roy, Games, 2(1), pp. 52 - 86, 2011

Logic and game theory have had a few decades of contact by now, with the classical results of epistemic game theory as major highlights. In this paper, we emphasize a recent new perspective toward 'logical dynamics', designing logical systems that focus on the actions that change information, preference, and other driving forces of agency. We show how this dynamic turn works out for games, drawing on some recent advances in the literature. Our key examples are the long-term dynamics of information exchange, as well as the much-discussed issue of extensive game rationality. Our paper also proposes a new broader interpretation of what is happening here. The combination of logic and game theory provides a fine-grained perspective on information and interaction dynamics, and we are witnessing the birth of something new which is not just logic, nor just game theory, but rather a *Theory of Play*.

Dynamic Logics of Knowledge and Access, Tomohiro Hoshi and Eric Pacuit, Synthese, 177(1), pp. 29 - 49, 2010

A recurring issue in any formal model representing agents' (changing) informational attitudes is how to account for the fact that the agents are limited in their access to the available inference steps, possible observations and available messages. This may be because the agents are not logically omniscient and so do not have unlimited reasoning ability. But it can also be because the agents are following a predefined protocol that explicitly limits statements available for observation and/or communication. Within the broad literature on epistemic logic, there are a variety of accounts that make precise a notion of an agent's 'limited access' (for example, Awareness Logics, Justification Logics, and Inference Logics). This paper interprets the agents' access set of formulas as a constraint on the agents' information gathering process limiting which formulas can be observed.

Editorial Introduction: Logic and Intelligent Interaction, Thomas Agotnes, Johan van Benthem, and Eric Pacuit, Synthese, 169(2), pp. 219 - 221, 2009

- Introduction for a special issue of Synthese for which I was a co-editor

Merging Frameworks for Interaction, Johan van Benthem, Jelle Gerbrandy, Tomohiro Hoshi, and Eric Pacuit, Journal of Philosophical Logic, 38(5), pp. 491 - 526, 2009

A variety of logical frameworks have been developed to study rational agents interacting over time. This paper takes a closer look at one particular interface, between two systems that both address the dynamics of knowledge and information flow. The first is Epistemic Temporal Logic (ETL), which uses linear or branching time models with added epistemic structure induced by agents' different capabilities for observing events. The second framework is Dynamic Epistemic Logic (DEL), which describes interactive processes in terms of epistemic event models which may occur inside modalities of the language. This paper systematically and rigorously relates the DEL framework with the ETL framework. The precise relationship between DEL and ETL is explored via a new representation theorem characterizing the largest class of ETL models corresponding to DEL protocols in terms of notions of Perfect Recall, No Miracles, and Bisimulation Invariance. We then focus on new issues of completeness. One contribution is an axiomatization for the dynamic logic of public announcements constrained by protocols, which has been an open problem for some years, as it does not fit the usual 'reduction axiom' format of DEL. Finally, we provide a number of examples that show how DEL suggests an interesting fine-structure inside ETL.

Neighbourhood Structures: Bisimilarity and Basic Model Theory, Helle Hvid Hansen, Clemens Kupke and Eric Pacuit, Logical Methods in Computer Science, 5(2:2), pp. 1 - 38, 2009

Neighbourhood structures are the standard semantic tool used to reason about non-normal modal logics. The logic of all neighbourhood models is called classical modal logic. In coalgebraic terms, a neighbourhood frame is a coalgebra for the contravariant powerset functor composed with itself, denoted by $2^2$. We use this coalgebraic modelling to derive notions of equivalence between neighbourhood structures. $2^2$-bisimilarity and behavioural equivalence are well known coalgebraic concepts, and they are distinct, since $2^2$ does not preserve weak pullbacks. We introduce a third, intermediate notion whose witnessing relations we call precocongruences (based on pushouts). We give back-and-forth style characterisations for $2^2$-bisimulations and precocongruences, we show that on a single coalgebra, precocongruences capture behavioural equivalence, and that between neighbourhood structures, precocongruences are a better approximation of behavioural equivalence than $2^2$-bisimulations. We also introduce a notion of modal saturation for neighbourhood models, and investigate its relationship with definability and image-finiteness. We prove a Hennessy-Milner theorem for modally saturated and for image-finite neighbourhood models. Our main results are an analogue of Van Benthem's characterisation theorem and a model-theoretic proof of Craig interpolation for classical modal logic.

A General Approach to Aggregation Problems, Tijmen Daniels and Eric Pacuit, Journal of Logic and Computation, 19(3), pp. 517 - 536, 2008

We discuss a general approach to judgement aggregation based on lattice theory. Agents choose elements of a lattice, and an aggregation procedure yields a 'social choice' based on the individual choices. Settings traditionally studied in social choice theory can be thought of as implicational systems, and lattice theory provides an abstraction of such systems. In fact, traditionally studied settings correspond to certain atomistic lattices in our framework. Our aim is to systematically investigate how properties of a given lattice induce constraints on aggregation procedures that lead up to impossibility theorems. We allow for non-atomistic lattices, and this raises some subtle issues. We will discuss how well our framework fits in with the traditional approaches to social choice theory, in particular with respect to generalizations of some of the well known axioms, and go on to prove an impossibility result that highlights the role of certain lattice theoretical properties. These properties reflect some of the traditional axioms or other aspects of traditional systems.

An Abstract Approach to Reasoning about Games with Mistaken and Changing Beliefs, Benedikt Lowe and Eric Pacuit, The Australasian Journal of Logic, 6, pp. 162 - 181, 2008

We do not believe that logic is the sole answer to deep and intriguing questions about human behaviour, but we think that it might be a useful tool in simulating and understanding it to a certain degree and in specifically restricted areas of application. We do not aim to resolve the question of what rational behaviour in games with mistaken and changing beliefs is. Rather, we develop a formal and abstract framework that allows us to reason about behaviour in games with mistaken and changing beliefs leaving aside normative questions concerning whether the agents are behaving 'rationally'; we focus on what agents do in a game. In this paper, we are not concerned with the reasoning process of the (ideal) economic agent; rather, our intended application is artificial agents, e.g., autonomous agents interacting with a human user or with each other as part of a computer game or in a virtual world. We give a story of mistaken beliefs that is a typical example of the situation in which we should want our formal setting to be applied. Then we give the definitions for our formal system and how to use this setting to get a backward induction solution. We then apply our semantics to the story related earlier and give an analysis of it. Our final section contains a discussion of related work and future projects. We discuss the advantages of our approach over existing approaches and indicate how it can be connected to the existing literature.

Some Comments on History Based Structures, Eric Pacuit, Journal of Applied Logic, 5(4), pp. 613 - 624, 2007

History based models, introduced by Parikh and Ramanujam, provide a natural mathematical model of social interactive situations. These models offer a 'low level' description of a social situation---describing the situation in terms of events, sequences of events, and the agents' view of these events. A multi-agent epistemic temporal modal logic can be used to reason about these structures. A number of other models have been proposed in the literature which can be used as a semantics for such a logical language, most notably the interpreted systems discussed by Fagin et al. In this paper, we discuss the differences and similarities between these two mathematical models. In particular, it is shown that these two semantics are modally equivalent. We conclude with a discussion of a number of questions that are raised when history based models are used to reason about game-theoretic situations.

Understanding the Brandenburger-Keisler Paradox, Eric Pacuit, Studia Logica, 86(3), pp. 435 - 454, 2007

Adam Brandenburger and H. Jerome Keisler have recently discovered a two person Russell-style paradox. They show that the following configuration of beliefs is impossible: Ann believes that Bob assumes that Ann believes that Bob's assumption is wrong. In [7] a modal logic interpretation of this paradox is proposed. The idea is to introduce two modal operators intended to represent the agents' beliefs and assumptions. The goal of this paper is to take this analysis further and study this paradox from the point of view of a modal logician. In particular, we show that the paradox can be seen as a theorem of an appropriate hybrid logic.

First-Order Classical Modal Logic, Horacio Arlo-Costa and Eric Pacuit, Studia Logica, 84(2), pp. 171 - 210, 2006

- This paper supersedes Horacio Arlo-Costa and Eric Pacuit (2005), First-order classical modal logic: Applications in logics of knowledge and probability, in *Proceedings of Theoretical Aspects of Rationality and Knowledge (TARK)*, pp. 262 - 278.

The paper focuses on extending to the first order case the semantical program for modalities first introduced by Dana Scott and Richard Montague. We focus on the study of neighborhood frames with constant domains, and we offer in the first part of the paper a series of new completeness results for salient classical systems of first order modal logic. Among other results, we show that it is possible to prove strong completeness results for normal systems without the Barcan Formula (like FOL + K) in terms of neighborhood frames with constant domains. The first order models we present permit the study of many epistemic modalities recently proposed in computer science as well as the development of adequate models for monadic operators of high probability. Models of this type are either difficult or impossible to build in terms of relational Kripkean semantics [40]. We conclude by introducing general first order neighborhood frames with constant domains, and we offer a general completeness result for the entire family of classical first order modal systems in terms of them, circumventing some well-known problems of propositional and first order neighborhood semantics (mainly the fact that many classical modal logics are incomplete with respect to an unmodified version of either neighborhood or relational frames). We argue that the semantical program that thus arises offers the first complete semantic unification of the family of classical first order modal logics.

The Logic of Knowledge Based Obligation, Eric Pacuit, Rohit Parikh and Eva Cogan, Synthese, 149(2), pp. 311 - 341, 2006

Deontic Logic goes back to Ernst Mally's 1926 work, Grundgesetze des Sollens: Elemente der Logik des Willens [Mally, E.: 1926, Grundgesetze des Sollens: Elemente der Logik des Willens, Leuschner & Lubensky, Graz], where he presented axioms for the notion 'p ought to be the case'. Some difficulties were found in Mally's axioms, and the field has since developed considerably. Logic of Knowledge goes back to Hintikka's work Knowledge and Belief [Hintikka, J.: 1962, Knowledge and Belief: An Introduction to the Logic of the Two Notions, Cornell University Press], in which he proposed formal logics of knowledge and belief. This field has also developed a great deal and is now the subject of the TARK conferences. However, there has been relatively little work combining the two notions of knowledge (belief) with the notion of obligation. (See, however, [Lomuscio, A. and Sergot, M.: 2003, Studia Logica 75, 63 - 92; Moore, R. C.: 1990, In J. F. Allen, J. Hendler and A. Tate (eds.), Readings in Planning, Morgan Kaufmann Publishers, San Mateo, CA]) In this paper we point out that an agent's obligations are often dependent on what the agent knows, and indeed one cannot reasonably be expected to respond to a problem if one is not aware of its existence. For instance, a doctor cannot be expected to treat a patient unless she is aware of the fact that he is sick, and this creates a secondary obligation on the patient or someone else to inform the doctor of his situation. In other words, many obligations are situation dependent, and only apply in the presence of the relevant information. Thus a case for combining Deontic Logic with the Logic of Knowledge is clear. We introduce the notion of knowledge based obligation and offer an S5, history based Kripke semantics to express this notion, as this semantics enables us to represent how information is transmitted among agents and how knowledge changes over time as a result of communications.
We consider both the case of an absolute obligation (although dependent on information) and the (defeasible) notion of an obligation which may be overridden by more relevant information. For instance, a physician who is about to inject a patient with drug d may find out that the patient is allergic to d and that she should use d′ instead. Dealing with the second kind of case requires a resort to non-monotonic reasoning and the notion of justified belief, which is stronger than plain belief but weaker than absolute knowledge in that it can be overridden. This notion of justified belief also creates a derived notion of default obligation, where an agent has, as far as the agent knows, an obligation to do some action a. A dramatic application of this notion is our analysis of the Kitty Genovese case where, in 1964, a young woman was stabbed to death while 38 neighbours watched from their windows but did nothing. The reason was not indifference: none of the neighbours had even a default obligation to act, even though, as a group, they did have an obligation to take some action to protect Kitty.

Voting Methods, Eric Pacuit, in *The Stanford Encyclopedia of Philosophy*, Edward N. Zalta (editor), 2019

A fundamental problem faced by any group of people is how to arrive at a good group decision when there is disagreement among its members. The difficulties are most evident when there is a large number of people with diverse opinions, such as when electing leaders in a national election. But it is often not any easier with smaller groups, such as when a committee must select a candidate to hire, or when a group of friends must decide where to go for dinner. Mathematicians, philosophers, political scientists and economists have devised various voting methods that select a winner (or winners) from a set of alternatives taking into account everyone’s opinion. It is not hard to find examples in which different voting methods select different winners given the same inputs from the members of the group. What criteria should be used to compare and contrast different voting methods? Not only is this an interesting and difficult theoretical question, but it also has important practical ramifications. Given the tumultuous 2016 election cycle, many people (both researchers and politicians) have suggested that the US should use a different voting method. However, there is little agreement about which voting method should be used. This article introduces and critically examines a number of different voting methods. Deep and important results in the theory of social choice suggest that there is no single voting method that is best in all situations (see List 2013 for an overview). My objective in this article is to highlight and discuss the key results and issues that facilitate comparisons between voting methods.

Epistemic Foundations of Game Theory, Eric Pacuit and Olivier Roy, in The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), 2015

- new version with substantive revisions expected in Summer 2022

Foundational work in game theory aims at making explicit the assumptions that underlie the basic concepts of the discipline. Non-cooperative game theory is the study of individual, rational decision making in situations of strategic interaction. This entry presents the epistemic foundations of non-cooperative game theory (this area of research is called epistemic game theory).

Epistemic game theory views rational decision making in games as something not essentially different from rational decision making under uncertainty. As in Decision Theory (Peterson 2009), to choose rationally in a game is to select the 'best' action in light of one's beliefs or information. In a decision problem, the decision maker's beliefs are about a passive state of nature, which determines the consequences of her actions. In a game, the consequences of one's decision depend on the choices of the other agents involved in the situation (and possibly the state of nature). Recognizing this---i.e., that one is interacting with other agents who try to choose the best course of action in the light of their own beliefs---brings higher-order information into the picture. The players' beliefs are no longer about a passive or external environment. They concern the choices and the information of the other players. What one expects of one's opponents depends on what one thinks the others expect from her, and what the others expect from a given player depends on what they think her expectations about them are.

This entry provides an overview of the issues that arise when one takes this broadly decision-theoretic view on rational decision making in games. After some general comments about information in games, we present the formal tools developed in epistemic game theory and epistemic logic that have been used to understand the role of higher-order information in interactive decision making. We then show how these tools can be used to characterize known 'solution concepts' of games in terms of rational decision making in specific informational contexts. Along the way, we highlight a number of philosophical issues that arise in this area.

Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback, Vincent Conitzer, Rachel Freedman, Jobst Heitzig, Wesley H. Holliday, Bob M. Jacobs, Nathan Lambert, Milan Mossé, Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde, William S. Zwicker, presented at ICML 2024 and the Social Choice and Welfare Conference 2024, 2024

Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, such as helping to commit crimes or producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about 'collective' preferences or otherwise use it to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions, and we discuss ways forward for this agenda, drawing on discussions in a recent workshop on Social Choice for AI Ethics and Safety held in Berkeley, CA, USA in December 2023.

Epistemic Pseudo-Agreement and Hyper-Domain Semantics for Epistemic Modals, Yichi (Raven) Zhang and Eric Pacuit, Proceedings of the 23rd Amsterdam Colloquium, 2022

We propose a hyper-domain semantics to analyze the group reading of epistemic modals according to which the relevant quantificational base for modals is not represented by a single aggregated information state but rather by a set of information states. The resulting framework allows us to capture what we call “epistemic pseudo-agreements”. Additionally, to further investigate the corresponding logic of our hyper-domain semantics, we devise a translation of the consequence relation in our hyper-domain semantics into a new logic called “Logic of Epistemic Disagreement” interpreted over possibility models.

Measuring Violations of Positive Involvement in Voting, Wesley H. Holliday and Eric Pacuit, in J. Y. Halpern and A. Perea (Eds.): Theoretical Aspects of Rationality and Knowledge 2021 (TARK 2021), EPTCS 335, 2021, pp. 189 - 209, 2021

In the context of computational social choice, we study voting methods that assign a set of winners to each profile of voter preferences. A voting method satisfies the property of positive involvement (PI) if for any election in which a candidate x would be among the winners, adding another voter to the election who ranks x first does not cause x to lose. Surprisingly, a number of standard voting methods violate this natural property. In this paper, we investigate different ways of measuring the extent to which a voting method violates PI, using computer simulations. We consider the probability (under different probability models for preferences) of PI violations in randomly drawn profiles vs. profile-coalition pairs (involving coalitions of different sizes). We argue that in order to choose between a voting method that satisfies PI and one that does not, we should consider the probability of PI violation conditional on the voting methods choosing different winners. We should also relativize the probability of PI violation to what we call voter potency, the probability that a voter causes a candidate to lose. Although absolute frequencies of PI violations may be low, after this conditioning and relativization, we see that under certain voting methods that violate PI, much of a voter's potency is turned against them - in particular, against their desire to see their favorite candidate elected.
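The definition of positive involvement above is concrete enough to check by brute force on small elections. The sketch below is an illustration for this entry, not the paper's simulation code, and the helper names (`plurality`, `pi_violation`) are hypothetical. It exhaustively confirms the textbook fact that plurality voting satisfies PI on all three-voter, three-candidate profiles:

```python
from itertools import permutations, product

CANDS = ("a", "b", "c")

def plurality(profile):
    # profile: list of rankings, best candidate first
    scores = {c: 0 for c in CANDS}
    for r in profile:
        scores[r[0]] += 1
    top = max(scores.values())
    return {c for c in CANDS if scores[c] == top}

def pi_violation(method, profile, x):
    """True if x wins `profile` but adding some voter who ranks x
    first causes x to lose under `method` (a PI violation)."""
    if x not in method(profile):
        return False
    return any(x not in method(profile + [list(r)])
               for r in permutations(CANDS) if r[0] == x)

# Exhaustive check: plurality never violates PI on any
# three-voter, three-candidate profile.
orders = list(permutations(CANDS))
violations = sum(pi_violation(plurality, list(prof), x)
                 for prof in product(orders, repeat=3)
                 for x in CANDS)
print(violations)  # 0
```

Plurality passes because adding an x-first voter raises x's score by one while leaving the other scores unchanged; for methods like instant runoff, the same checker can turn up violations.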

Voting Theory in the Lean Theorem Prover, Wesley H. Holliday, Chase Norman, and Eric Pacuit, Proceedings of Logic, Rationality, and Interaction, 8th International Workshop, LORI 2021, Xi'an, China, October 16-18, 2021, pp. 111 - 127, 2021

There is a long tradition of fruitful interaction between logic and social choice theory. In recent years, much of this interaction has focused on computer-aided methods such as SAT solving and interactive theorem proving. In this paper, we report on the development of a framework for formalizing voting theory in the Lean theorem prover, which we have applied to verify properties of a recently proposed voting method. While previous applications of interactive theorem proving to social choice (using Isabelle/HOL and Mizar) have focused on the verification of impossibility theorems, we aim to cover a variety of results ranging from impossibility theorems to the verification of properties of specific voting methods (e.g., Condorcet consistency, independence of clones, etc.). In order to formalize voting theoretic axioms concerning adding or removing candidates and voters, we work in a variable-election setting whose formalization makes use of dependent types in Lean.

Strategic Voting Under Uncertainty About the Voting Method, Wesley H. Holliday and Eric Pacuit, in *Proceedings of TARK 2019*, EPTCS, pp. 1 - 21, doi:10.4204/EPTCS, 2019

Much of the theoretical work on strategic voting makes strong assumptions about what voters know about the voting situation. A strategizing voter is typically assumed to know how other voters will vote and to know the rules of the voting method. A growing body of literature explores strategic voting when there is uncertainty about how others will vote. In this paper, we study strategic voting when there is uncertainty about the voting method. We introduce three notions of manipulability for a set of voting methods: *sure*, *safe*, and *expected* manipulability. With the help of a computer program, we identify voting scenarios in which uncertainty about the voting method may reduce or even eliminate a voter’s incentive to misrepresent her preferences. Thus, it may be in the interest of an election designer who wishes to reduce strategic voting to leave voters uncertain about which of several reasonable voting methods will be used to determine the winners of an election.
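One plausible way to formalize the three notions named above (the paper's exact definitions may differ) is to compare a voter's utilities before and after a misreport across each method in the set: *sure* if every method yields a strictly better outcome, *safe* if no method yields a worse outcome and some yields a better one, *expected* if the average outcome improves. The sketch below uses resolute versions of plurality and Borda with alphabetical tie-breaking; all names are hypothetical.

```python
from itertools import permutations
from statistics import mean

CANDS = ("a", "b", "c")
ORDERS = list(permutations(CANDS))

def _winner(profile, score):
    s = {c: 0 for c in CANDS}
    for r in profile:
        for c in CANDS:
            s[c] += score(r, c)
    best = max(s.values())
    return min(c for c in CANDS if s[c] == best)  # alphabetical tie-break

def plurality(profile):
    return _winner(profile, lambda r, c: 1 if r[0] == c else 0)

def borda(profile):
    return _winner(profile, lambda r, c: len(CANDS) - 1 - r.index(c))

def utility(truth, winner):
    # utility = number of candidates ranked below the winner
    return len(CANDS) - 1 - truth.index(winner)

def classify(methods, profile, lie):
    """Classify voter 0's misreport `lie`: returns (sure, safe, expected)."""
    truth = profile[0]
    before = [utility(truth, m(profile)) for m in methods]
    after = [utility(truth, m([lie] + profile[1:])) for m in methods]
    gains = list(zip(after, before))
    sure = all(a > b for a, b in gains)
    safe = all(a >= b for a, b in gains) and any(a > b for a, b in gains)
    expected = mean(after) > mean(before)
    return sure, safe, expected

profile = [("b", "a", "c"), ("a", "b", "c"), ("c", "b", "a")]
print(classify((plurality, borda), profile, ("a", "b", "c")))  # (False, False, False)
```

Under these definitions, sure manipulability implies safe manipulability, which in turn implies expected manipulability, so uncertainty over methods can only shrink the set of profitable misreports.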

Deliberating between Backward and Forward Induction Reasoning: First Steps, Aleks Knoks and Eric Pacuit, in *Proceedings of Theoretical Aspects of Rationality and Knowledge (TARK 2015)*, 2015

Backward and forward induction can be viewed as two styles of reasoning in dynamic games. Since each prescribes taking a different attitude towards the past moves of the other player(s), the strategies they identify as rational are sometimes incompatible. Our goal is to study players who are able to deliberate between backward and forward induction, as well as conditions under which one is superior to the other. This extended abstract is our first step towards this goal. We present an extension of Stalnaker's game models in which the players can make "trembling hand" mistakes. This means that when a player observes an unexpected move, she has to figure out whether it is a result of a deliberate choice or a mistake, thereby committing herself to one of the two styles of reasoning.

When is an Example a Counterexample?, Eric Pacuit, Arthur Paul Pedersen, and Jan-Willem Romeijn, in *Proceedings of Theoretical Aspects of Rationality and Knowledge (TARK 2013)*, pp. 1 - 10, 2013

In this extended abstract, we carefully examine a purported counterexample to a postulate of iterated belief revision. We suggest that the example is better seen as a failure to apply the theory of belief revision in sufficient detail. The main contribution is conceptual aiming at the literature on the philosophical foundations of the AGM theory of belief revision. Our discussion is centered around the observation that it is often unclear whether a specific example is a "genuine" counterexample to an abstract theory or a misapplication of that theory to a concrete case.

DEL Planning And Some Tractable Cases, Benedikt Löwe, Eric Pacuit, and Andreas Witzel, in *Proceedings of Logic, Rationality, And Interaction (LORI-III)*, pp. 179 - 192, H. van Ditmarsch, J. Lang and Shier Ju (eds.), Springer, 2011

We describe the planning problem within the framework of dynamic epistemic logic (DEL), considering the tree of sequences of events as the underlying structure. In general, the DEL planning problem is computationally difficult to solve. On the other hand, many fruitful technical advances have led to deep insights into the way DEL works, and these can be exploited in special cases. We present a few properties that lead to considerable simplifications of the DEL planning problem and apply them in a toy example.

A Note on Assumption-Completeness in Modal Logic, Jonathan Zvesper and Eric Pacuit, in *Proceedings of Logic and the Foundations of Game and Decision Theory (LOFT 8)*, pp. 190 - 206, 2010

We study the notion of assumption-completeness, which is a property of belief models first introduced in [18]. In that paper it is considered a limitative result --- of significance for game theory --- if a given language does not have an assumption-complete belief model. We show that there are assumption-complete models for the basic modal language (Theorem 8).

Joint Revision of Belief and Intention, Thomas Icard, Eric Pacuit, and Yoav Shoham, in *Principles of Knowledge Representation and Reasoning: Proceedings of The Twelfth International Conference*, pp. 572 - 574, 2010

We present a formal semantical model to capture action, belief and intention, based on the 'database perspective' (Shoham 2009). We then provide postulates for belief and intention revision, and state a representation theorem relating our postulates to the formal model. Our belief postulates are in the spirit of the AGM theory; the intention postulates stand in rough correspondence with the belief postulates.

Identifying the Structure of a Narrative Via an Agent-Based Logic of Preferences and Beliefs: Formalizations of Episodes from CSI: Crime Scene Investigation, Benedikt Löwe, Eric Pacuit, and Sanchit Saraf, in *Proceedings of the Fifth International Workshop on Modeling of Objects, Components and Agents (MOCA'09)*, pp. 45 - 63, 2009

Finding out what makes two stories equivalent is a daunting task for a formalization of narratives. Using a high-level language of beliefs and preferences for describing stories and a simple algorithm for analyzing them, we determine the doxastic game fragment of actual narratives from the TV crime series CSI: Crime Scene Investigation, and identify a small number of basic building blocks sufficient to construct the doxastic game structure of these narratives.

Modal Logics of Negotiation and Preference, Ulle Endriss and Eric Pacuit, in *Proceedings of the 10th European Conference on Logics in Artificial Intelligence (JELIA-2006)*, pp. 138 - 150, 2006

We develop a dynamic modal logic that can be used to model scenarios where agents negotiate over the allocation of a finite number of indivisible resources. The logic includes operators to speak about both preferences of individual agents and deals regarding the reallocation of certain resources. We reconstruct a known result regarding the convergence of sequences of mutually beneficial deals to a Pareto optimal allocation of resources, and discuss the relationship between reasoning tasks in our logic and problems in negotiation. For instance, checking whether a given restricted class of deals is sufficient to guarantee convergence to a Pareto optimal allocation for a specific negotiation scenario amounts to a model checking problem; and the problem of identifying conditions on preference relations that would guarantee convergence for a restricted class of deals under all circumstances can be cast as a question in modal logic correspondence theory.

Preference-Based Belief Dynamics, Eric Pacuit and Olivier Roy, in *Proceedings of 7th Conference on Logic and the Foundations of Game and Decision (LOFT 2006)*, pp. 141 - 151, 2006

In this paper, we study an approach first put forward by Stephen Morris in [16] where an agent's (non-probabilistic) beliefs are defined from its preferences. The main idea is to apply the logic of the Savage approach and deduce logical properties of an agent's beliefs (such as positive introspection) from properties of the agent's preferences. What interests us for this paper is the relationship between preference change and belief change. Already in [16], Morris showed how a specific kind of belief change can be related to a consistency requirement on preferences in sequential decision making. Indeed a number of authors have focused on the relationship between so-called 'coherent' choice and (probabilistic) belief revision operators. In this paper we use an approach similar to Morris' in order to characterize various operations on beliefs in terms of preference changes.

The Tree of Knowledge in Action: Towards a Common Perspective, Johan van Benthem and Eric Pacuit, in *Proceedings of Advances in Modal Logic, Volume 6*, pp. 87 - 106, 2006

We survey a number of decidability and undecidability results concerning epistemic temporal logic. The goal is to provide a general picture which will facilitate the 'sharing of ideas' from a number of different areas concerned with modeling agents in interactive social situations.

A Note on Some Explicit Modal Logics, Eric Pacuit, in *Proceedings of the Fifth Panhellenic Logic Symposium*, pp. 117 - 125, 2005

Sergei Artemov introduced the Logic of Proofs (LP) as a logic of explicit proofs. There is also an epistemic reading in which t is a *justification* of P. Motivated, in part, by this epistemic reading, Fitting introduced a Kripke-style semantics for LP. In this note, we prove soundness and completeness of some axiom systems which are not covered by Fitting.

Knowledge-Theoretic Properties of Strategic Voting, Samir Chopra, Eric Pacuit, and Rohit Parikh, in *Proceedings of Logics In Artificial Intelligence: 9th European Conference (JELIA)*, pp. 18 - 30, 2004

Results in social choice theory such as the Arrow and Gibbard-Satterthwaite theorems constrain the existence of rational collective decision making procedures in groups of agents. The Gibbard-Satterthwaite theorem says that no voting procedure is *strategy-proof*. That is, there will always be situations in which it is in a voter's interest to misrepresent its true preferences, i.e., vote strategically. We present some properties of strategic voting and then examine -- via a bimodal logic utilizing epistemic and strategizing modalities -- the knowledge-theoretic properties of voting situations. We note that unless the voter *knows* that it should vote strategically, and how -- i.e., knows what the *other* voters' preferences are and *which* alternate preference $P'$ it should use -- the voter will not strategize. Our results suggest that opinion polls in election situations effectively serve as the first $n-1$ stages in an $n$ stage election.

Majority Logic, Eric Pacuit and Samer Salame, in *Proceedings of Knowledge Representation and Reasoning (KR 2004)*, pp. 598 - 605, 2004

We extend graded modal logic (GML) to a logic that captures the concept of majority. We provide an axiomatization for majority logic, MJL, and sketch soundness and completeness proofs. Along the way, we must answer the question: what is a majority of an infinite set? Majority spaces are introduced as a solution to this question.

Impossibility theorems involving weakenings of expansion consistency and resoluteness in voting, Wesley H. Holliday, Chase Norman, Eric Pacuit, and Saam Zahedian, in Michael A. Jones, David McCune, and Jennifer Wilson, editors, *Mathematical Analyses of Decisions, Voting, and Games*, Contemporary Mathematics, American Mathematical Society, forthcoming, 2024

A fundamental principle of individual rational choice is Sen's $\gamma$ axiom, also known as *expansion consistency*, stating that any alternative chosen from each of two menus must be chosen from the union of the menus. Expansion consistency can also be formulated in the setting of social choice. In voting theory, it states that any candidate chosen from two fields of candidates must be chosen from the combined field of candidates. An important special case of the axiom is *binary* expansion consistency, which states that any candidate chosen from an initial field of candidates and chosen in a head-to-head match with a new candidate must also be chosen when the new candidate is added to the field, thereby ruling out *spoiler effects*. In this paper, we study the tension between this weakening of expansion consistency and weakenings of *resoluteness*, an axiom demanding the choice of a single candidate in any election. As is well known, resoluteness is inconsistent with basic fairness conditions on social choice, namely anonymity and neutrality. Here we prove that even significant weakenings of resoluteness, which are consistent with anonymity and neutrality, are inconsistent with binary expansion consistency. The proofs make use of SAT solving, with the correctness of a SAT encoding formally verified in the Lean Theorem Prover, as well as a strategy for generalizing impossibility theorems obtained for special types of voting methods (namely majoritarian and pairwise voting methods) to impossibility theorems for arbitrary voting methods. This proof strategy may be of independent interest for its potential applicability to other impossibility theorems in social choice.
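The binary expansion consistency axiom described above lends itself to a direct computational check. The sketch below is an illustration for this entry, not code from the paper, and the helper names are hypothetical; it uses plurality voting, restricted to a field of candidates, only as a familiar example of the spoiler effects the axiom rules out:

```python
def plurality_winners(profile, field):
    """Plurality winners when ballots are restricted to `field`."""
    scores = {c: 0 for c in field}
    for r in profile:
        top = next(c for c in r if c in field)
        scores[top] += 1
    best = max(scores.values())
    return {c for c in field if scores[c] == best}

def binary_expansion_ok(method, profile, field, x, y):
    """Binary expansion consistency: if x is chosen from `field` and
    chosen head-to-head against y, x must be chosen from field ∪ {y}."""
    if x in method(profile, field) and x in method(profile, {x, y}):
        return x in method(profile, field | {y})
    return True

# A spoiler effect: b wins {a, b} and wins head-to-head against c,
# yet adding c to the field makes a the plurality winner.
profile = 3 * [("a", "b", "c")] + 2 * [("b", "a", "c")] + 2 * [("c", "b", "a")]
print(binary_expansion_ok(plurality_winners, profile, {"a", "b"}, "b", "c"))  # False
```

In this seven-voter profile, c draws first-place votes away from b, so b's win over a in the two-candidate field does not survive c's entry, which is exactly the kind of violation the axiom forbids.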

Focusing on Campaigns, Dominik Klein and Eric Pacuit, in *Rohit Parikh on Logic, Language and Society*, Volume 11 of the series Outstanding Contributions to Logic, pp. 77-89, Springer, 2017

One of the important lessons to take away from Rohit Parikh's impressive body of work is that logicians and computer scientists have much to gain by focusing their attention on the intricacies of political campaigns. Drawing on recent work developing a theory of expressive voting, we study the dynamics of voters' opinions during an election. In this paper, we develop a model in which the relative importance of the different issues that concern a voter may change either in response to candidates' statements during a campaign or due to unforeseen events. We study how changes in a voter's attention to the issues influence voting behavior under voting systems such as plurality rule and approval voting. We argue that it can be worthwhile for candidates to reshape public focus, but that doing so can be a complex and risky activity.

A Dynamic Analysis of Interactive Rationality, Eric Pacuit and Olivier Roy, in *Epistemology, Knowledge and the Impact of Interaction*, Volume 38 of the series Logic, Epistemology, and the Unity of Science, pp. 187-206, J. Redmond, O. Pombo Martins, and A. Nepomuceno Fernandez (eds.), Springer, 2016

Epistemic game theory has shown the importance of informational contexts to understand strategic interaction. We propose a general framework to analyze how such contexts may arise. The idea is to view informational contexts as the fixed points of iterated, rational responses to incoming information about the agents' possible choices. We discuss conditions under which such fixed points may exist. In the process, we generalize existing rules for information updates used in the dynamic epistemic logic literature. We then apply this framework to weak dominance. Our analysis provides a new perspective on a well known problem with the epistemic characterization of iterated removal of weakly dominated strategies.

Dependence and Independence in Social Choice Theory, Eric Pacuit and Fan Yang, in *Dependence Logic*, S. Abramsky, J. Kontinen, J. Väänänen and H. Vollmer (eds.), Springer, pp. 235-260, 2016

One of the goals of social choice theory is to study the group decision methods that satisfy two types of desiderata. The first type ensures that the group decision depends in the right way on the voters' opinions. The second type ensures that the voters are free to express any opinion, as long as it is an admissible input to the group decision method. Impossibility theorems, such as Arrow's Theorem, point to an interesting tension between these two desiderata. In this paper, we argue that dependence and independence logic offer an interesting new perspective on this aspect of social choice theory. To that end, we develop a version of independence logic that can express Arrow's properties of preference aggregation functions. We then prove that Arrow's Theorem is derivable in a natural deduction system for the first-order consequences of our logic.

Dynamic Models of Rational Deliberation in Games, Eric Pacuit, in *Models of Strategic Reasoning*, pp. 3 - 33, J. van Benthem, S. Ghosh and R. Verbrugge (eds.), Springer, 2015

There is a growing body of literature that analyzes games in terms of the 'process of deliberation' that leads the players to select their component of a rational outcome. Although the details of the various models of deliberation in games are different, they share a common line of thought: The rational outcomes of a game are arrived at through a process in which each player settles on an optimal choice given her evolving beliefs about her own choices and the choices of her opponents. The goal is to describe deliberation in terms of a sequence of belief changes about what the players are doing or what their opponents may be thinking. The central question is: What are the update mechanisms that match different game-theoretic analyses? The general conclusion is that the rational outcomes of a game depend not only on the structure of the game, but also on the players' initial beliefs, on which dynamical rule the players use to update their inclinations (in general, different players may use different rules), and on what exactly is commonly known about the process of deliberation.

Connecting Logics of Choice and Change, Johan van Benthem and Eric Pacuit, in *Nuel Belnap on Indeterminism and Free Action, Outstanding Contributions to Logic*, pp. 291 - 314, Springer, 2014

This chapter is an attempt at clarifying the current scene of sometimes competing action logics, looking for compatibilities and convergences. Current paradigms for deliberate action fall into two broad families: dynamic logics of events, and STIT logics of achieving specified effects. We compare the two frameworks, and show how they can be related technically by embedding basic STIT into a modal logic of matrix games. Among other things, this analysis shows how the attractive principle of independence of agents' actions in STIT might actually be a source of high complexity in the total action logic. Our main point, however, is the compatibility of dynamic logics with explicit events and STIT logics based on a notion that we call 'control'---and we present a new system of dynamic-epistemic logic with control that has both. Finally, we discuss how dynamic logic and STIT face similar issues when including further crucial aspects of agency such as knowledge, preference, strategic behavior, and explicit acts of choice and deliberation.

Procedural Information and the Dynamics of Belief, Eric Pacuit, in *Interdisciplinary Works in Logic, Epistemology, Psychology and Linguistics*, pp. 135 - 154, M. Rebuschi, M. Batt, G. Heinzmann, F. Lihoreau, M. Musiol, and A. Trognon (eds.), Springer, 2014

This paper surveys recent dynamic logics of knowledge and belief for a single agent. Many of the recent developments in this area have been driven by analyzing concrete examples. These range from toy examples, such as the infamous muddy children puzzle, to philosophical quandaries, such as Fitch's Paradox, to everyday examples of social interaction. Different logical systems are then judged, in part, on how well they conform to the analyst's intuitions about the relevant set of examples. But this raises an important methodological issue: Implicit assumptions about what the actors know and believe about the situation being modeled often guide the analyst's intuitions. In many cases, it is crucial to make these underlying assumptions explicit. The primary goal of this paper is to demonstrate how this 'meta-information' can be made explicit in the formal models of knowledge and belief.

Temporal Aspects of the Dynamics of Knowledge, Valentin Goranko and Eric Pacuit, in *Johan van Benthem on Logic and Information Dynamics, Outstanding Contributions to Logic*, pp. 235 - 266, A. Baltag and S. Smets (eds.), Springer, 2014

Knowledge and time are fundamental aspects of agency and their interaction is in the focus of a wide spectrum of philosophical and logical studies. This interaction is two-fold: on the one hand, knowledge evolves over time; on the other hand, in the subjective view of the agent, time only passes when her knowledge about the world changes. In this chapter we discuss models and logics reflecting the temporal aspects of the dynamics of knowledge and offer some speculations and ideas on how the interaction of temporality and knowledge can be systematically treated.

The Wisdom of Crowds: Methods of Human Judgement Aggregation, Aidan Lyon and Eric Pacuit, in *Handbook of Human Computation*, pp. 599 - 614, P. Michelucci (ed.), Springer, 2013

Although the idea is an old one, there has been a recent boom in research into the Wisdom of Crowds, and this appears to be at least partly due to the now widespread availability of the Internet, and the advent of social media and Web 2.0 applications. In this paper, we start by laying out a simple conceptual framework for thinking about the Wisdom of the Crowds. We identify six core aspects that are part of any instance of the Wisdom of the Crowds. One of these aspects, called aggregation, is the main focus of this paper. An aggregation method is the method of bringing the many contributions of a crowd together into a collective output. We discuss three different types of aggregation methods: mathematical aggregation, group deliberation and prediction markets.

Logics of Rational Interaction, Barteld Kooi and Eric Pacuit, in *Dynamic Formal Epistemology*, pp. 5 - 32, P. Girard, O. Roy, and M. Marion (eds.), Springer, 2011

There is a growing literature focused on using logical methods to reason about communities of agents engaged in some form of social interaction. Much of the work builds upon existing logical frameworks developed by philosophers and computer scientists incorporating insights and ideas from philosophy (especially epistemology and philosophy of action), game theory, decision theory and social choice theory. The result is a web of logical systems each addressing different aspects of rational agency and social interaction. Rather than providing an encyclopedic account of these different logical systems, this paper focuses on issues surrounding the modeling of informational attitudes in social interactive situations. The main objective is to introduce the two main approaches to modeling "rational interaction" and provide pointers to the current literature.

Towards a Logical Analysis of Adjusted Winner, Eric Pacuit, in *Proof, Computation And Agency*, pp. 229 - 239, J. van Benthem, A. Gupta, and R. Parikh (eds.), Springer, 2011

It is often convenient to view a computational procedure, or a program, as a relation on a set of states, where a state can be thought of as a function that assigns a value to every possible variable and a truth value to all propositions. In this paper, I show how to extend these ideas to verify the correctness of the Adjusted Winner fair division algorithm.
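For readers unfamiliar with the procedure, the following is a hedged Python sketch of the Adjusted Winner algorithm of Brams and Taylor, the fair division procedure the paper analyzes. The item names, point values, and implementation details are illustrative assumptions; the paper is concerned with verifying such a procedure logically, not with this particular implementation.

```python
def adjusted_winner(val_a, val_b):
    """Divide goods between players A and B so both end with equal points.

    `val_a` and `val_b` map each good to a positive point value (typically
    each player's values sum to 100). Returns A's (possibly fractional)
    share of each good and the common point total.
    """
    goods = list(val_a)
    # Step 1: give each good to the player who values it more.
    share_a = {g: 1.0 if val_a[g] >= val_b[g] else 0.0 for g in goods}
    score_a = sum(val_a[g] for g in goods if share_a[g] == 1.0)
    score_b = sum(val_b[g] for g in goods if share_a[g] == 0.0)

    # Step 2: make A the leader, swapping roles if needed.
    swapped = score_a < score_b
    if swapped:
        val_a, val_b = val_b, val_a
        share_a = {g: 1.0 - share_a[g] for g in goods}
        score_a, score_b = score_b, score_a

    # Step 3: move A's goods to B in order of increasing ratio v_A/v_B,
    # transferring the last good only fractionally, until scores equalize.
    for g in sorted((g for g in goods if share_a[g] == 1.0),
                    key=lambda g: val_a[g] / val_b[g]):
        x = min(1.0, (score_a - score_b) / (val_a[g] + val_b[g]))
        share_a[g] -= x
        score_a -= x * val_a[g]
        score_b += x * val_b[g]
        if abs(score_a - score_b) < 1e-9:
            break

    if swapped:
        share_a = {g: 1.0 - share_a[g] for g in goods}
    return share_a, score_a

# Illustrative valuations (each player's points sum to 100).
val_a = {"house": 60, "car": 30, "boat": 10}
val_b = {"house": 40, "car": 30, "boat": 30}
shares, points = adjusted_winner(val_a, val_b)
print(shares, points)  # A keeps the house; B gets car and boat; 60 points each
```

In this example no fractional transfer is needed: giving B the car equalizes the scores exactly, so each player ends with 60 of their own points.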

Social Interaction, Knowledge, and Social Software, Eric Pacuit and Rohit Parikh, in *Interactive Computation: The New Paradigm*, pp. 441 - 461, D. Goldin, S. A. Smolka, and P. Wegner (eds.), Springer, 2006

In [31] a theory of human computation, analogous to Turing's theory of machine computation, is discussed. The issue there is whether there might be an analogue to Church's thesis in this human domain. Examples of human algorithms discussed include the making of scrambled eggs. By comparison, Lynn Stein in this volume discusses the making of a peanut butter and jelly sandwich. Neither she nor we are concerned here with Church's thesis as such, although that might prove to be a fascinating topic for a future paper. Rather, the issue here is interaction, which occurs most naturally in multiagent algorithms, unlike the making of scrambled eggs or peanut butter sandwiches, where one agent normally suffices. Such multiagent algorithms, such as building a house or playing bridge, are what we shall call social software, after [32]. In that paper, one of us asked 'Is it possible to create a theory of how social procedures work with a view to creating better ones and ensuring the correctness of the ones we do have?' The present chapter surveys some of the logical and mathematical tools that have been developed over the years that may help address this question.

Reasoning about Communication Graphs, Eric Pacuit and Rohit Parikh, in *Interactive Logic: Selected Papers from the 7th Augustus de Morgan Workshop*, pp. 135 - 157, J. van Benthem, B. Löwe, and D. Gabbay (eds.), Amsterdam University Press, 2006

Let us assume that some agents are connected by a communication graph. In the communication graph, an edge from agent i to agent j means that agent i can directly receive information from agent j. Agent i can then refine its own information by learning information that j has, including information acquired by j from another agent, k. We introduce a multi-agent modal logic with knowledge modalities and a modality representing communication among agents. Among other properties, we show that the logic is decidable, that it completely characterizes the communication graph, and that it satisfies the basic properties of the space logic of [18].
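One way to picture the intended information flow is the following simplified sketch, written under our own assumptions rather than the paper's modal-logic formalism: the facts agent i can come to learn are exactly those held by some agent reachable from i along the directed edges of the communication graph.

```python
def reachable(graph, start):
    """All agents reachable from `start` via directed edges, including start."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def accessible_facts(graph, facts, agent):
    """Union of the facts held by every agent reachable from `agent`."""
    return set().union(*(facts[a] for a in reachable(graph, agent)))

# Agent i can hear from j, and j from k, so i can indirectly learn k's facts.
graph = {"i": ["j"], "j": ["k"], "k": []}
facts = {"i": {"p"}, "j": {"q"}, "k": {"r"}}
print(sorted(accessible_facts(graph, facts, "i")))  # ['p', 'q', 'r']
```

Note the asymmetry: k is reachable from i but not conversely, so k can learn only its own fact r, which is the kind of structural constraint the paper's logic is designed to capture.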