Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.

| Column | Type |
| --- | --- |
| id | string |
| title | string |
| url | string |
| source | string |
| source_type | string |
| text | string |
| date_published | unknown |
| authors | sequence |
| summaries | sequence |
| filename | string |
025ec6c77a59b3363162576fa55a4fd7
Modeling Agents with Probabilistic Programs
https://agentmodels.org/chapters/3-agents-as-programs.html
agentmodels
markdown
--- layout: chapter title: "Agents as probabilistic programs" description: "One-shot decision problems, expected utility, softmax choice and Monty Hall." is_section: true --- ## Introduction Our goal is to implement agents that compute rational *policies*. Policies are *plans* for achieving good outcomes in environments where: - The agent makes a *sequence* of *distinct* choices, rather than choosing once. - The environment is *stochastic* (or "random"). - Some features of the environment are initially *unknown* to the agent. (So the agent may choose to gain information in order to improve future decisions.) This section begins with agents that solve the very simplest decision problems. These are trivial *one-shot* problems, where the agent selects a single action (not a sequence of actions). We use WebPPL to solve these problems in order to illustrate the core concepts that are necessary for the more complex problems in later chapters. <a id="planning_as"></a> ## One-shot decisions in a deterministic world In a *one-shot decision problem* an agent makes a single choice between a set of *actions*, each of which has potentially distinct *consequences*. A rational agent chooses the action that is best in terms of his or her own preferences. Often, this depends not on the *action* itself being preferred, but only on its *consequences*. For example, suppose Tom is choosing between restaurants and all he cares about is eating pizza. There's an Italian restaurant and a French restaurant. Tom would choose the French restaurant if it offered pizza. Since it does *not* offer pizza, Tom will choose the Italian. Tom selects an action $$a \in A$$ from the set of all actions. The actions in this case are {"eat at Italian restaurant", "eat at French restaurant"}. The consequences of an action are represented by a transition function $$T \colon S \times A \to S$$ from state-action pairs to states. In our example, the relevant *state* is whether or not Tom eats pizza. Tom's preferences are represented by a real-valued utility function $$U \colon S \to \mathbb{R}$$, which indicates the relative goodness of each state. Tom's *decision rule* is to take action $$a$$ that maximizes utility, i.e., the action $$ {\arg \max}_{a \in A} U(T(s,a)) $$ In WebPPL, we can implement this utility-maximizing agent as a function `maxAgent` that takes a state $$s \in S$$ as input and returns an action. For Tom's choice between restaurants, we assume that the agent starts off in a state `"initialState"`, denoting whatever Tom does before going off to eat. The program directly translates the decision rule above using the higher-order function `argMax`. <!-- TODO fix argmax --> ~~~~ ///fold: argMax var argMax = function(f, ar){ return maxWith(f, ar)[0] }; /// // Choose to eat at the Italian or French restaurants var actions = ['italian', 'french']; var transition = function(state, action) { if (action === 'italian') { return 'pizza'; } else { return 'steak frites'; } }; var utility = function(state) { if (state === 'pizza') { return 10; } else { return 0; } }; var maxAgent = function(state) { return argMax( function(action) { return utility(transition(state, action)); }, actions); }; print('Choice in initial state: ' + maxAgent('initialState')); ~~~~ >**Exercise**: Which parts of the code can you change in order to make the agent choose the French restaurant? There is an alternative way to compute the optimal action for this problem. The idea is to treat choosing an action as an *inference* problem. 
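(An aside on the codebox above: the folded `argMax` relies on a `maxWith` helper that is defined outside the snippet. The following self-contained version is an added sketch, not part of the chapter; it uses only WebPPL's built-in `reduce` and assumes a non-empty array.)

~~~~
// Self-contained argMax: return the element of ar that maximizes f.
// Assumes ar is non-empty; on ties, the element already stored in
// `best` is kept.
var argMax = function(f, ar) {
  return reduce(
    function(x, best) { return f(x) > f(best) ? x : best; },
    ar[0],
    ar);
};

// Example: -(x - 3)^2 is maximized at x = 3
argMax(function(x) { return -(x - 3) * (x - 3); }, [0, 1, 2, 3, 4]);
~~~~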
The previous chapter showed how we can *infer* the probability that a coin landed Heads from the observation that two of three coins were Heads. ~~~~ var twoHeads = Infer({ model() { var a = flip(0.5); var b = flip(0.5); var c = flip(0.5); condition(a + b + c === 2); return a; } }); viz(twoHeads); ~~~~ The same inference machinery can compute the optimal action in Tom's decision problem. We sample random actions with `uniformDraw` and condition on the preferred outcome happening. Intuitively, we imagine observing the consequence we prefer (e.g. pizza) and then *infer* from this the action that caused this consequence. <!-- address evidential vs causal decision theory? --> This idea is known as "planning as inference" refp:botvinick2012planning. It also resembles the idea of "backwards chaining" in logical inference and planning. The `inferenceAgent` solves the same problem as `maxAgent`, but uses planning as inference: ~~~~ var actions = ['italian', 'french']; var transition = function(state, action) { if (action === 'italian') { return 'pizza'; } else { return 'steak frites'; } }; var inferenceAgent = function(state) { return Infer({ model() { var action = uniformDraw(actions); condition(transition(state, action) === 'pizza'); return action; } }); } viz(inferenceAgent("initialState")); ~~~~ >**Exercise**: Change the agent's goals so that they choose the French restaurant. ## One-shot decisions in a stochastic world In the previous example, the transition function from state-action pairs to states was *deterministic* and so described a deterministic world or environment. Moreover, the agent's actions were deterministic; Tom always chose the best action ("Italian"). In contrast, many examples in this tutorial will involve a *stochastic* world and a noisy "soft-max" agent. Imagine that Tom is choosing between restaurants again. This time, Tom's preferences are about the overall quality of the meal. A meal can be "bad", "good" or "spectacular" and each restaurant has good nights and bad nights. The transition function now has type signature $$ T\colon S \times A \to \Delta S $$, where $$\Delta S$$ represents a distribution over states. Tom's decision rule is now to take the action $$a \in A$$ that has the highest *average* or *expected* utility, with the expectation $$\mathbb{E}$$ taken over the probability of different successor states $$s' \sim T(s,a)$$: $$ \max_{a \in A} \mathbb{E}( U(T(s,a)) ) $$ To represent this in WebPPL, we extend `maxAgent` using the `expectation` function, which maps a distribution with finite support to its (real-valued) expectation: ~~~~ ///fold: argMax var argMax = function(f, ar){ return maxWith(f, ar)[0] }; /// var actions = ['italian', 'french']; var transition = function(state, action) { var nextStates = ['bad', 'good', 'spectacular']; var nextProbs = (action === 'italian') ? [0.2, 0.6, 0.2] : [0.05, 0.9, 0.05]; return categorical(nextProbs, nextStates); }; var utility = function(state) { var table = { bad: -10, good: 6, spectacular: 8 }; return table[state]; }; var maxEUAgent = function(state) { var expectedUtility = function(action) { return expectation(Infer({ model() { return utility(transition(state, action)); } })); }; return argMax(expectedUtility, actions); }; maxEUAgent('initialState'); ~~~~ >**Exercise**: Adjust the transition probabilities such that the agent chooses the Italian Restaurant. The `inferenceAgent`, which uses the planning-as-inference idiom, can also be extended using `expectation`. 
Previously, the agent's action was conditioned on leading to the best consequence ("pizza"). This time, Tom is not aiming to choose the action most likely to have the best outcome. Instead, he wants the action with better outcomes on average. This can be represented in `inferenceAgent` by switching from a `condition` statement to a `factor` statement. The `condition` statement expresses a "hard" constraint on actions: actions that fail the condition are completely ruled out. The `factor` statement, by contrast, expresses a "soft" condition. Technically, `factor(x)` adds `x` to the unnormalized log-probability of the program execution within which it occurs. To illustrate `factor`, consider the following variant of the `twoHeads` example above. Instead of placing a hard constraint on the total number of Heads outcomes, we give each setting of `a`, `b` and `c` a *score* based on the total number of heads. The score is highest when all three coins are Heads, but even the "all tails" outcomes is not ruled out completely. ~~~~ var softHeads = Infer({ model() { var a = flip(0.5); var b = flip(0.5); var c = flip(0.5); factor(a + b + c); return a; } }); viz(softHeads); ~~~~ As another example, consider the following short program: ~~~~ var dist = Infer({ model() { var n = uniformDraw([0, 1, 2]); factor(n * n); return n; } }); viz(dist); ~~~~ Without the `factor` statement, each value of the variable `n` has equal probability. Adding the `factor` statements adds `n*n` to the log-score of each value. To get the new probabilities induced by the `factor` statement we compute the normalizing constant given these log-scores. The resulting probability $$P(y=2)$$ is: $$ P(y=2) = \frac {e^{2 \cdot 2}} { (e^{0 \cdot 0} + e^{1 \cdot 1} + e^{2 \cdot 2}) } $$ Returning to our implementation as planning-as-inference for maximizing *expected* utility, we use a `factor` statement to implement soft conditioning: ~~~~ var actions = ['italian', 'french']; var transition = function(state, action) { var nextStates = ['bad', 'good', 'spectacular']; var nextProbs = (action === 'italian') ? [0.2, 0.6, 0.2] : [0.05, 0.9, 0.05]; return categorical(nextProbs, nextStates); }; var utility = function(state) { var table = { bad: -10, good: 6, spectacular: 8 }; return table[state]; }; var alpha = 1; var softMaxAgent = function(state) { return Infer({ model() { var action = uniformDraw(actions); var expectedUtility = function(action) { return expectation(Infer({ model() { return utility(transition(state, action)); } })); }; factor(alpha * expectedUtility(action)); return action; } }); }; viz(softMaxAgent('initialState')); ~~~~ The `softMaxAgent` differs in two ways from the `maxEUAgent` above. First, it uses the planning-as-inference idiom. Second, it does not deterministically choose the action with maximal expected utility. Instead, it implements *soft* maximization, selecting actions with a probability that depends on their expected utility. Formally, let the agent's probability of choosing an action be $$C(a;s)$$ for $$a \in A$$ when in state $$s \in S$$. Then the *softmax* decision rule is: $$ C(a; s) \propto e^{\alpha \mathbb{E}(U(T(s,a))) } $$ The noise parameter $$\alpha$$ modulates between random choice $$(\alpha=0)$$ and the perfect maximization $$(\alpha = \infty)$$ of the `maxEUAgent`. Since rational agents will *always* choose the best action, why consider softmax agents? One of the goals of this tutorial is to infer the preferences of agents (e.g. human beings) from their choices. 
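To make the role of $$\alpha$$ concrete, the following sketch (an added illustration, not part of the chapter) plugs the expected utilities implied by the restaurant example above, 3.2 for Italian and 5.3 for French, directly into the softmax rule:

~~~~
// Softmax choice for hard-coded expected utilities. The values follow
// from the transition probabilities and utility table above:
// Italian: 0.2*(-10) + 0.6*6 + 0.2*8 = 3.2
// French:  0.05*(-10) + 0.9*6 + 0.05*8 = 5.3
var softmaxChoice = function(alpha) {
  return Infer({ model() {
    var action = uniformDraw(['italian', 'french']);
    var eu = (action === 'italian') ? 3.2 : 5.3;
    factor(alpha * eu);
    return action;
  }});
};

viz(softmaxChoice(0));   // alpha = 0: utility is ignored, choice is uniform
viz(softmaxChoice(1));   // French chosen with probability ~0.89
viz(softmaxChoice(10));  // nearly deterministic maximization
~~~~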
People do not always choose the normatively rational actions. The softmax agent provides a simple, analytically tractable model of sub-optimal choice[^softmax], which has been tested empirically on human action selection refp:luce2005individual. Moreover, it has been used extensively in Inverse Reinforcement Learning as a model of human errors refp:kim2014inverse, refp:zheng2014robust. For this reason, we employ the softmax model throughout this tutorial. When modeling an agent assumed to be optimal, the noise parameter $$\alpha$$ can be set to a large value. <!-- [TODO: Alternatively, agent could output dist.MAP().val instead of dist.] --> [^softmax]: A softmax agent's choice of action is a differentiable function of their utilities. This differentiability makes possible certain techniques for inferring utilities from choices. >**Exercise**: Monty Hall. In this exercise inspired by [ProbMods](https://probmods.org/chapters/06-inference-about-inference.html), we will approach the classical statistical puzzle from the perspective of optimal decision-making. Here is a statement of the problem: > *Alice is on a game show and she’s given the choice of three doors. Behind one door is a car; behind the others, goats. She picks door 1. The host, Monty, knows what’s behind the doors and opens another door, say No. 3, revealing a goat. He then asks Alice if she wants to switch doors. Should she switch?* > Use the tools introduced above to determine the answer. Here is some code to get you started: ~~~~ // Remove each element in array ys from array xs var remove = function(xs, ys) { return _.without.apply(null, [xs].concat(ys)); }; var doors = [1, 2, 3]; // Monty chooses a door that is neither Alice's door // nor the prize door var monty = function(aliceDoor, prizeDoor) { return Infer({ model() { var door = uniformDraw(doors); // ??? return door; }}); }; var actions = ['switch', 'stay']; // If Alice switches, she randomly chooses a door that is // neither the one Monty showed nor her previous door var transition = function(state, action) { if (action === 'switch') { return { prizeDoor: state.prizeDoor, montyDoor: state.montyDoor, aliceDoor: // ??? }; } else { return state; } }; // Utility is high (say 10) if Alice's door matches the // prize door, 0 otherwise. var utility = function(state) { // ??? }; var sampleState = function() { var aliceDoor = uniformDraw(doors); var prizeDoor = uniformDraw(doors); return { aliceDoor, prizeDoor, montyDoor: sample(monty(aliceDoor, prizeDoor)) } } var agent = function() { var action = uniformDraw(actions); var expectedUtility = function(action){ return expectation(Infer({ model() { var state = sampleState(); return utility(transition(state, action)); }})); }; factor(expectedUtility(action)); return { action }; }; viz(Infer({ model: agent })); ~~~~ ### Moving to complex decision problems This chapter has introduced some of the core concepts that we will need for this tutorial, including *expected utility*, *(stochastic) transition functions*, *soft conditioning* and *softmax decision making*. These concepts would also appear in standard treatments of rational planning and reinforcement learning refp:russell1995modern. The actual decision problems in this chapter are so trivial that our notation and programs are overkill. The [next chapter](/chapters/3a-mdp.html) introduces *sequential* decisions problems. These problems are more complex and interesting, and will require the machinery we have introduced here. <br> ### Footnotes
"2018-06-21T16:25:20"
[ "Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan" ]
[]
3-agents-as-programs.md
ea441710b479ba9d9171a1b2273a2aca
Modeling Agents with Probabilistic Programs
https://agentmodels.org/chapters/5-biases-intro.html
agentmodels
markdown
--- layout: chapter title: "Cognitive biases and bounded rationality" description: Soft-max noise, limited memory, heuristics and biases, motivation from intractability of POMDPs. is_section: true --- ### Optimality and modeling human actions We've mentioned two uses for models of sequential decision making: Use (1): **Solve practical decision problems** (preferably with a fast algorithm that performs optimally) Use (2): **Learn the preferences and beliefs of humans** (e.g. to predict future behavior or to provide recommendations/advice) The table below provides more detail about these two uses[^table]. The first chapters of the book focused on Use (1) and described agent models for solving MDPs and POMDPs optimally. Chapter IV ("[Reasoning about Agents](/chapters/4-reasoning-about-agents.html)"), by contrast, was on Use (2), employing agent models as *generative models* of human behavior which are inverted to learn human preferences. The present chapter discusses the limitations of using optimal agent modes as generative models for Use (2). We argue that developing models of *biased* or *bounded* decision making can address these limitations. <a href="/assets/img/table_chapter5_intro.png"><img src="/assets/img/table_chapter5_intro.png" alt="table" style="width: 650px;"/></a> >**Table 1:** Two uses for formal models of sequential decision making. The heading "Optimality" means "Are optimal models of decision making used?". <br> [^table]: Note that there are important interactions between Use (1) and Use (2). A challenge with Use (1) is that it's often hard to write down an appropriate utility function to optimize. The ideal utility function is one that reflects actual human preferences. So by solving (2) we can solve one of the "key tasks" in (1). This is exactly the approach taken in various applications of IRL. See work on Apprenticeship Learning refp:abbeel2004apprenticeship. <!-- TABLE. TODO: find nice html/markdown rendering: Goal|Key tasks|Optimality?|Sub-fields|Fields Solve practical decision problems|1. Define appropriate utility function and decision problem. 2. Solve optimization problem|If it’s tractable|RL, Game and Decision Theory, Experimental Design|ML/Statistics, Operations Research, Economics (normative) Learn the preferences and beliefs of humans|1. Collect data by observation/experiment. 2. Infer parameters and predict future behavior|If it fits human data|IRL, Econometrics (Structural Estimation), Inverse Planning|ML, Economics (positive), Psychology, Neuroscience --> ### Random vs. Systematic Errors The agent models presented in previous chapters are models of *optimal* performance on (PO)MDPs. So if humans deviate from optimality on some (PO)MDP then these models won't predict human behavior well. It's important to recognize the flexibility of the optimal models. The agent can have any utility function and any initial belief distribution. We saw in the previous chapters that apparently irrational behavior can sometimes be explained in terms of inaccurate prior beliefs. Yet certain kinds of human behavior resist explanation in terms of false beliefs or unusual preferences. Consider the following: >**The Smoker** <br> Fred smokes cigarettes every day. He has tried to quit multiple times and still wants to quit. He is fully informed about the health effects of smoking and has learned from experience about the cravings that accompany attempts to quit. It's hard to explain such persistent smoking in terms of inaccurate beliefs[^beliefs]. 
[^beliefs]: One could argue that Fred has a temporary belief that smoking is high utility which causes him to smoke. This belief subsides after smoking a cigarette and is replaced with regret. To explain this in terms of a POMDP agent, there has to be an *observation* that triggers the belief-change via Bayesian updating. But what is this observation? Fred has *cravings*, but these cravings alter Fred's desires or wants, rather than being observational evidence about the empirical world. A common way of modeling deviations from optimal behavior is to use softmax noise refp:kim2014inverse and refp:zheng2014robust. Yet the softmax model has limited expressiveness. It's a model of *random* deviations from optimal behavior. Models of random error might be a good fit for certain motor or perceptual tasks (e.g. throwing a ball or locating the source of a distant sound). But the smoking example suggests that humans deviate from optimality *systematically*. That is, when not behaving optimally, human actions remain *predictable* and big deviations from optimality in one domain do not imply highly random behavior in all domains. Here are some examples of systematic deviations from optimal action: <br> >**Systematic deviations from optimal action** - Smoking every week (i.e. systematically) while simultaneously trying to quit (e.g. by using patches and throwing away cigarettes). - Finishing assignments just before the deadline, while always planning to finish them as early as possible. - Forgetting random strings of letters or numbers (e.g. passwords or ID numbers) -- assuming they weren't explicitly memorized[^strings]. - Making mistakes on arithmetic problems[^math] (e.g. long division). [^strings]: With effort people can memorize these strings and keep them in memory for long periods. The claim is that if people do not make an attempt to memorize a random string, they will systematically forget the string within a short duration. This can't be easily explained by a POMDP model, where the agent has perfect memory. [^math]: People learn the algorithm for long division but still make mistakes -- even when stakes are relatively high (e.g. important school exams). While humans vary in their math skill, all humans have severe limitations (compared to computers) at doing arithmetic. See refp:dehaene2011number for various robust, systematic limitations in human numerical cognition. These examples suggest that human behavior in everyday decision problems will not be easily captured by assuming softmax optimality. In the next sections, we divide these systematic deviations from optimality into *cognitive biases* and *cognitive bounds*. After explaining each category, we discuss their relevance to learning the preferences of agents. ### Human deviations from optimal action: Cognitive Bounds Humans perform sub-optimally on some MDPs and POMDPs due to basic computational constraints. Such constraints have been investigated in work on *bounded rationality* and *bounded optimality* refp:gershman2015computational. A simple example was mentioned above: people cannot quickly memorize random strings (even if the stakes are high). Similarly, consider the real-life version of our Restaurant Choice example. If you walk around a big city for the first time, you will forget the location of most of the restaurants you see on the way. If you try a few days later to find a restaurant, you are likely to take an inefficient route. This contrasts with the optimal POMDP-solving agent who never forgets anything.
Limitations in memory are hardly unique to humans. For any autonomous robot, there is some number of random bits that it cannot quickly place in permanent storage. In addition to constraints on memory, humans and machines have constraints on time. The simplest POMDPs, such as Bandit problems, are intractable: the time needed to solve them will grow exponentially (or worse) in the problem size refp:cassandra1994acting, refp:madani1999undecidability. The issue is that optimal planning requires taking into account all possible sequences of actions and states. These explode in number as the number of states, actions, and possible sequences of observations grows[^grows]. [^grows]: Dynamic programming helps but does not tame the beast. There are many POMDPs that are small enough to be easily described (i.e. they don't have a very long problem description) but which we can't solve optimally in practice. So for any agent with limited time there will be POMDPs that they cannot solve exactly. It's plausible that humans often encounter POMDPs of this kind. For example, in lab experiments humans make systematic errors in small POMDPs that are easy to solve with computers refp:zhang2013forgetful and refp:doshi2011comparison. Real-world tasks with the structure of POMDPs, such as choosing how to invest resources or deciding on a sequence of scientific experiments, are much more complex and so presumably can't be solved by humans exactly. ### Human deviations from optimal action: Cognitive Biases Cognitive bounds of time and space (for memory) mean that any realistic agent will perform sub-optimally on some problems. By contrast, the term "cognitive biases" is usually applied to errors that are idiosyncratic to humans and would not arise in AI systems[^biases]. There is a large literature on cognitive biases in psychology and behavioral economics refp:kahneman2011thinking, refp:kahneman1984choices. One relevant example is the cluster of biases summarized by *Prospect Theory* refp:kahneman1979prospect. In one-shot choices between "lotteries", people are subject to framing effects (e.g. Loss Aversion) and to erroneous computation of expected utility[^prospect]. Another important bias is *time inconsistency*. This bias has been used to explain addiction, procrastination, impulsive behavior and the use of pre-commitment devices. The next chapter describes and implements time-inconsistent agents. [^biases]: We do not presuppose a well-substantiated scientific distinction between cognitive bounds and biases. Many have argued that biases result from heuristics and that the heuristics are a fine-tuned shortcut for dealing with cognitive bounds. For our purposes, the main distinction is between intractable decision problems (such that any agent will fail on large enough instances of the problem) and decision problems that appear trivial for simple computational systems but hard for some proportion of humans. For example, time-inconsistent behavior appears easy to avoid for computational systems but hard to avoid for humans. [^prospect]: The problem descriptions are extremely simple. So this doesn't look like an issue of bounds on time or memory forcing people to use a heuristic approach. ### Learning preferences from bounded and biased agents We've asserted that humans have cognitive biases and bounds. These lead to systematic deviations from optimal performance on (PO)MDP decision problems.
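A back-of-the-envelope sketch (added here, with hypothetical utilities that are not the chapter's) shows why softmax noise alone is a poor model of such systematic deviations: under softmax, deviations are independent across choices, so making the same dispreferred choice every day for a month is vanishingly unlikely, and fitting such behavior forces the model to conclude either that the agent is extremely noisy or that the agent actually prefers the "mistake".

~~~~
// Hypothetical expected utilities: abstaining is worth 2, smoking 0.
// With alpha = 1, the softmax probability of smoking on any single
// day is 1 / (1 + e^2), roughly 0.12. If deviations were purely
// random and independent, smoking 30 days in a row would have
// probability ~0.12^30, which is astronomically small, yet the
// pattern is common.
var alpha = 1;
var euSmoke = 0;
var euAbstain = 2;
var pSmoke = Math.exp(alpha * euSmoke) /
    (Math.exp(alpha * euSmoke) + Math.exp(alpha * euAbstain));
print('P(smoke on one day): ' + pSmoke);
print('P(smoke 30 days in a row): ' + Math.pow(pSmoke, 30));
print('Even with alpha = 0 (pure random choice): ' + Math.pow(0.5, 30));
~~~~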
As a result, the softmax-optimal agent models from previous chapters will not always be good generative models for human behavior. To learn human beliefs and preferences when such deviations from optimality are present, we extend and elaborate our (PO)MDP agent models to capture these deviations. The next chapter implements time-inconsistent agents via hyperbolic discounting. The subsequent chapter implements "greedy" or "myopic" planning, which is a general strategy for reducing time- and space-complexity. In the final chapter of this section, we show (a) that assuming humans are optimal can lead to mistaken inferences in some decision problems, and (b) that our extended generative models can avoid these mistakes. Next chapter: [Time inconsistency I](/chapters/5a-time-inconsistency.html) <br> ### Footnotes
"2017-03-19T18:46:48"
[ "Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan" ]
[]
5-biases-intro.md
de7fd0b4b054937b7a2d8856f11ee694
Modeling Agents with Probabilistic Programs
https://agentmodels.org/chapters/7-multi-agent.html
agentmodels
markdown
--- layout: chapter title: Multi-agent models description: Schelling coordination games, tic-tac-toe, and a simple natural-language example. is_section: true --- In this chapter, we will look at models that involve multiple agents reasoning about each other. This chapter is based on reft:stuhlmueller2013reasoning. ## Schelling coordination games We start with a simple [Schelling coordination game](http://lesswrong.com/lw/dc7/nash_equilibria_and_schelling_points/). Alice and Bob are trying to meet up but have lost their phones and have no way to contact each other. There are two local bars: the popular bar and the unpopular one. Let's first consider how Alice would choose a bar (if she was not taking Bob into account): ~~~~ var locationPrior = function() { if (flip(.55)) { return 'popular-bar'; } else { return 'unpopular-bar'; } }; var alice = function() { return Infer({ model() { var myLocation = locationPrior(); return myLocation; }}); }; viz(alice()); ~~~~ But Alice wants to be at the same bar as Bob. We extend our model of Alice to include this: ~~~~ var locationPrior = function() { if (flip(.55)) { return 'popular-bar'; } else { return 'unpopular-bar'; } }; var alice = function() { return Infer({ model() { var myLocation = locationPrior(); var bobLocation = sample(bob()); condition(myLocation === bobLocation); return myLocation; }}); }; var bob = function() { return Infer({ model() { var myLocation = locationPrior(); return myLocation; }}); }; viz(alice()); ~~~~ Now Bob and Alice are thinking recursively about each other. We add caching (to avoid repeated computations) and a depth parameter (to avoid infinite recursion): ~~~~ var locationPrior = function() { if (flip(.55)) { return 'popular-bar'; } else { return 'unpopular-bar'; } } var alice = dp.cache(function(depth) { return Infer({ model() { var myLocation = locationPrior(); var bobLocation = sample(bob(depth - 1)); condition(myLocation === bobLocation); return myLocation; }}); }); var bob = dp.cache(function(depth) { return Infer({ model() { var myLocation = locationPrior(); if (depth === 0) { return myLocation; } else { var aliceLocation = sample(alice(depth)); condition(myLocation === aliceLocation); return myLocation; } }}); }); viz(alice(10)); ~~~~ >**Exercise**: Change the example to the setting where Bob wants to avoid Alice instead of trying to meet up with her, and Alice knows this. How do the predictions change as the reasoning depth grows? How would you model the setting where Alice doesn't know that Bob wants to avoid her? >**Exercise**: Would any of the answers to the previous exercise change if recursive reasoning could terminate not just at a fixed depth, but also at random? ## Game playing We'll look at the two-player game tic-tac-toe: <img src="/assets/img/tic-tac-toe-game-1.svg"/> >*Figure 1:* Tic-tac-toe. 
(Image source: [Wikipedia](https://en.wikipedia.org/wiki/Tic-tac-toe#/media/File:Tic-tac-toe-game-1.svg)) Let's start with a prior on moves: ~~~~ var isValidMove = function(state, move) { return state[move.x][move.y] === '?'; }; var movePrior = dp.cache(function(state){ return Infer({ model() { var move = { x: randomInteger(3), y: randomInteger(3) }; condition(isValidMove(state, move)); return move; }}); }); var startState = [ ['?', 'o', '?'], ['?', 'x', 'x'], ['?', '?', '?'] ]; viz.table(movePrior(startState)); ~~~~ Now let's add a deterministic transition function: ~~~~ ///fold: isValidMove, movePrior var isValidMove = function(state, move) { return state[move.x][move.y] === '?'; }; var movePrior = dp.cache(function(state){ return Infer({ model() { var move = { x: randomInteger(3), y: randomInteger(3) }; condition(isValidMove(state, move)); return move; }}); }); /// var assign = function(obj, k, v) { var newObj = _.clone(obj); return Object.defineProperty(newObj, k, {value: v}) }; var transition = function(state, move, player) { var newRow = assign(state[move.x], move.y, player); return assign(state, move.x, newRow); }; var startState = [ ['?', 'o', '?'], ['?', 'x', 'x'], ['?', '?', '?'] ]; transition(startState, {x: 1, y: 0}, 'o'); ~~~~ We need to be able to check if a player has won: ~~~~ ///fold: movePrior, transition var isValidMove = function(state, move) { return state[move.x][move.y] === '?'; }; var movePrior = dp.cache(function(state){ return Infer({ model() { var move = { x: randomInteger(3), y: randomInteger(3) }; condition(isValidMove(state, move)); return move; }}); }); var assign = function(obj, k, v) { var newObj = _.clone(obj); return Object.defineProperty(newObj, k, {value: v}) }; var transition = function(state, move, player) { var newRow = assign(state[move.x], move.y, player); return assign(state, move.x, newRow); }; /// var diag1 = function(state) { return mapIndexed(function(i, x) {return x[i];}, state); }; var diag2 = function(state) { var n = state.length; return mapIndexed(function(i, x) {return x[n - (i + 1)];}, state); }; var hasWon = dp.cache(function(state, player) { var check = function(xs){ return _.countBy(xs)[player] == xs.length; }; return any(check, [ state[0], state[1], state[2], // rows map(first, state), map(second, state), map(third, state), // cols diag1(state), diag2(state) // diagonals ]); }); var startState = [ ['?', 'o', '?'], ['x', 'x', 'x'], ['?', '?', '?'] ]; hasWon(startState, 'x'); ~~~~ Now let's implement an agent that chooses a single action, but can't plan ahead: ~~~~ ///fold: movePrior, transition, hasWon var isValidMove = function(state, move) { return state[move.x][move.y] === '?'; }; var movePrior = dp.cache(function(state){ return Infer({ model() { var move = { x: randomInteger(3), y: randomInteger(3) }; condition(isValidMove(state, move)); return move; }}); }); var assign = function(obj, k, v) { var newObj = _.clone(obj); return Object.defineProperty(newObj, k, {value: v}) }; var transition = function(state, move, player) { var newRow = assign(state[move.x], move.y, player); return assign(state, move.x, newRow); }; var diag1 = function(state) { return mapIndexed(function(i, x) {return x[i];}, state); }; var diag2 = function(state) { var n = state.length; return mapIndexed(function(i, x) {return x[n - (i + 1)];}, state); }; var hasWon = dp.cache(function(state, player) { var check = function(xs){ return _.countBy(xs)[player] == xs.length; }; return any(check, [ state[0], state[1], state[2], // rows map(first, state), map(second, 
state), map(third, state), // cols diag1(state), diag2(state) // diagonals ]); }); /// var isDraw = function(state) { return !hasWon(state, 'x') && !hasWon(state, 'o'); }; var utility = function(state, player) { if (hasWon(state, player)) { return 10; } else if (isDraw(state)) { return 0; } else { return -10; } }; var act = dp.cache(function(state, player) { return Infer({ model() { var move = sample(movePrior(state)); var eu = expectation(Infer({ model() { var outcome = transition(state, move, player); return utility(outcome, player); }})); factor(eu); return move; }}); }); var startState = [ ['o', 'o', '?'], ['?', 'x', 'x'], ['?', '?', '?'] ]; viz.table(act(startState, 'x')); ~~~~ And now let's include planning: ~~~~ ///fold: movePrior, transition, hasWon, utility, isTerminal var isValidMove = function(state, move) { return state[move.x][move.y] === '?'; }; var movePrior = dp.cache(function(state){ return Infer({ model() { var move = { x: randomInteger(3), y: randomInteger(3) }; condition(isValidMove(state, move)); return move; }}); }); var assign = function(obj, k, v) { var newObj = _.clone(obj); return Object.defineProperty(newObj, k, {value: v}) }; var transition = function(state, move, player) { var newRow = assign(state[move.x], move.y, player); return assign(state, move.x, newRow); }; var diag1 = function(state) { return mapIndexed(function(i, x) {return x[i];}, state); }; var diag2 = function(state) { var n = state.length; return mapIndexed(function(i, x) {return x[n - (i + 1)];}, state); }; var hasWon = dp.cache(function(state, player) { var check = function(xs){ return _.countBy(xs)[player] == xs.length; }; return any(check, [ state[0], state[1], state[2], // rows map(first, state), map(second, state), map(third, state), // cols diag1(state), diag2(state) // diagonals ]); }); var isDraw = function(state) { return !hasWon(state, 'x') && !hasWon(state, 'o'); }; var utility = function(state, player) { if (hasWon(state, player)) { return 10; } else if (isDraw(state)) { return 0; } else { return -10; } }; var isComplete = function(state) { return all( function(x){ return x != '?'; }, _.flatten(state)); } var isTerminal = function(state) { return hasWon(state, 'x') || hasWon(state, 'o') || isComplete(state); }; /// var otherPlayer = function(player) { return (player === 'x') ? 'o' : 'x'; }; var act = dp.cache(function(state, player) { return Infer({ model() { var move = sample(movePrior(state)); var eu = expectation(Infer({ model() { var outcome = simulate(state, move, player); return utility(outcome, player); }})); factor(eu); return move; }}); }); var simulate = function(state, action, player) { var nextState = transition(state, action, player); if (isTerminal(nextState)) { return nextState; } else { var nextPlayer = otherPlayer(player); var nextAction = sample(act(nextState, nextPlayer)); return simulate(nextState, nextAction, nextPlayer); } }; var startState = [ ['o', '?', '?'], ['?', '?', 'x'], ['?', '?', '?'] ]; var actDist = act(startState, 'o'); viz.table(actDist); ~~~~ ## Language understanding <!-- TODO text needs more elaboration or some links to papers or online content --> A model of pragmatic language interpretation: The speaker chooses a sentence conditioned on the listener inferring the intended state of the world when hearing this sentence; the listener chooses an interpretation conditioned on the speaker selecting the given utterance when intending this meaning. 
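The speaker and pragmatic-listener models below score one agent's choice under another agent's distribution via `.score`. As a quick reminder (an added note, not part of the chapter), `dist.score(x)` in WebPPL returns the log probability of `x` under `dist`:

~~~~
// score returns a log probability; exponentiate to recover P(x).
var d = Categorical({ vs: ['a', 'b'], ps: [0.25, 0.75] });
print(Math.exp(d.score('a')));  // 0.25
print(Math.exp(d.score('b')));  // 0.75
~~~~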
Literal interpretation: ~~~~ var statePrior = function() { return uniformDraw([0, 1, 2, 3]); }; var literalMeanings = { allSprouted: function(state) { return state === 3; }, someSprouted: function(state) { return state > 0; }, noneSprouted: function(state) { return state === 0; } }; var sentencePrior = function() { return uniformDraw(['allSprouted', 'someSprouted', 'noneSprouted']); }; var literalListener = function(sentence) { return Infer({ model() { var state = statePrior(); var meaning = literalMeanings[sentence]; condition(meaning(state)); return state; }}); }; viz(literalListener('someSprouted')); ~~~~ A pragmatic speaker, thinking about the literal listener: ~~~~ var alpha = 2; ///fold: statePrior, literalMeanings, sentencePrior var statePrior = function() { return uniformDraw([0, 1, 2, 3]); }; var literalMeanings = { allSprouted: function(state) { return state === 3; }, someSprouted: function(state) { return state > 0; }, noneSprouted: function(state) { return state === 0; } }; var sentencePrior = function() { return uniformDraw(['allSprouted', 'someSprouted', 'noneSprouted']); }; /// var literalListener = function(sentence) { return Infer({ model() { var state = statePrior(); var meaning = literalMeanings[sentence]; condition(meaning(state)); return state; }}); }; var speaker = function(state) { return Infer({ model() { var sentence = sentencePrior(); factor(alpha * literalListener(sentence).score(state)); return sentence; }}); } viz(speaker(3)); ~~~~ Pragmatic listener, thinking about speaker: ~~~~ var alpha = 2; ///fold: statePrior, literalMeanings, sentencePrior var statePrior = function() { return uniformDraw([0, 1, 2, 3]); }; var literalMeanings = { allSprouted: function(state) { return state === 3; }, someSprouted: function(state) { return state > 0; }, noneSprouted: function(state) { return state === 0; } }; var sentencePrior = function() { return uniformDraw(['allSprouted', 'someSprouted', 'noneSprouted']); }; /// var literalListener = dp.cache(function(sentence) { return Infer({ model() { var state = statePrior(); var meaning = literalMeanings[sentence]; condition(meaning(state)); return state; }}); }); var speaker = dp.cache(function(state) { return Infer({ model() { var sentence = sentencePrior(); factor(alpha * literalListener(sentence).score(state)); return sentence; }}); }); var listener = dp.cache(function(sentence) { return Infer({ model() { var state = statePrior(); factor(speaker(state).score(sentence)); return state; }}); }); viz(listener('someSprouted')); ~~~~ Next chapter: [How to use the WebPPL Agent Models library](/chapters/8-guide-library.html)
"2016-12-04T11:26:34"
[ "Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan" ]
[]
7-multi-agent.md
32ca55d04d57aa1d34e0db0fa171a0c9
Modeling Agents with Probabilistic Programs
https://agentmodels.org/chapters/5d-joint-inference.html
agentmodels
markdown
--- layout: chapter title: Joint inference of biases and preferences I description: Assuming agent optimality leads to mistakes in inference. Procrastination and Bandit Examples. --- ### Introduction Techniques for inferring human preferences and beliefs from their behavior on a task usually assume that humans solve the task (softmax) optimally. When this assumptions fails, inference often fails too. This chapter explores how incorporating time inconsistency and myopic planning into models of human behavior can improve inference. <!-- TODO shorten these two paragraphs --> <!-- Biases will only affect some of the humans some of the time. In a narrow domain, experts can learn to avoid biases and they can use specialized approximation algorithms that achieve near-optimal performance in the domain. So our approach is to do *joint inference* over preferences, beliefs and biases and cognitive bounds. If the agent's behavior is consistent with optimal (PO)MDP solving, we will infer this fact and infer preferences accordingly. On the other hand, if there's evidence of biases, this will alter inferences about preferences. We test our approach by comparing to a model that has a fixed assumption of optimality. We show that in simple, intuitive decision problems, assuming optimality leads to mistaken inferences about preferences. --> <!--As we discussed in Chapter 4, the identifiability of preferences is a ubiquitous issue in IRL. Our approach, which does inference over a broader space of agents (with different combinations of biases), makes identification from a particular decision problem less likely in general. Yet the lack of identifiability of preferences is not something that undermines our approach. For some decision problems, the best an inference system can do is rule out preferences that are inconsistent with the behavior and accurately maintain posterior uncertainty over those that are consistent. Some of the examples below provide behavior that is ambiguous about preferences in this way. Yet we also show simple examples in which biases and bounds *can* be identified. --> ### Formalization of Joint Inference <a id="formalization"></a>We formalize joint inference over beliefs, preferences and biases by extending the approach developed in Chapter IV, "[Reasoning about Agents](/chapters/4-reasoning-about-agents)", where an agent was formally <a href="/chapters/4-reasoning-about-agents.html#pomdpDefine">defined</a> by parameters $$ \left\langle U, \alpha, b_0 \right\rangle$$. To include the possibility of time-inconsistency and myopia, an agent $$\theta$$ is now characterized by a tuple of parameters as follows: $$ \theta = \left\langle U, \alpha, b_0, k, \nu, C \right\rangle $$ where: - $$U$$ is the utilty function - $$\alpha$$ is the softmax noise parameter - $$b_0$$ is the agent's belief (or prior) over the initial state - $$k \geq 0$$ is the constant for hyperbolic discounting function $$1/(1+kd)$$ - $$\nu$$ is an indicator for Naive or Sophisticated hyperbolic discounting - $$C \in [1,\infty]$$ is the integer cutoff or bound for Reward-myopic or Update-myopic Agents[^bound] As in <a href="/chapters/4-reasoning-about-agents.html#pomdpInfer">Equation (2)</a> of Chapter IV, we condition on state-action-observation triples: $$ P(\theta \vert (s,o,a)_{0:n}) \propto P( (s,o,a)_{0:n} \vert \theta)P(\theta) $$ We obtain a factorized form in exactly the same way as in <a href="/chapters/4-reasoning-about-agents.html#pomdpInfer">Equation (2)</a>, i.e. 
we generate the sequence $$b_i$$ from $$i=0$$ to $$i=n$$ of agent beliefs: $$ P(\theta \vert (s,o,a)_{0:n}) \propto P(\theta) \prod_{i=0}^n P( a_i \vert s_i, b_i, U, \alpha, k, \nu, C ) $$ The likelihood term on the right-hand side of this equation is simply the softmax probability that the agent with given parameters chooses $$a_i$$ in state $$s_i$$. This equation for inference does not make use of the *delay* indices used by time-inconsistent and Myopic agents. This is because the delays figure only in their internal simulations. In order to compute the likelihood the agent takes an action, we don't need to keep track of delay values. [^bound]: To simplify the presentation, we assume here that one does inference either about whether the agent is Update-myopic or about whether the agent is Reward-myopic (but not both). It's actually straightforward to include both kinds of agents in the hypothesis space and infer both $$C_m$$ and $$C_g$$. ## Learning from Procrastinators The <a href="/chapters/5b-time-inconsistency.html#procrastination">Procrastination Problem</a> (Figure 1 below) illustrates how agents with identical preferences can deviate *systematically* in their behavior due to time inconsistency. Suppose two agents care equally about finishing the task and assign the same cost to doing the hard work. The optimal agent will complete the task immediately. The Naive hyperbolic discounter will delay every day until the deadline, which could be thirty days away! <img src="/assets/img/procrastination_mdp.png" alt="diagram" style="width: 650px;"/> >**Figure 1:** Transition graph for Procrastination Problem. States are represented by nodes. Edges are state-transitions and are labeled with the action name and the utility of the state-action pair. Terminal nodes have a bold border and their utility is labeled below. This kind of systematic deviation between agents is also significant for inferring preferences. We consider the problem of *online* inference, where we observe the agent's behavior each day and produce an estimate of their preferences. Suppose the agent has a deadline $$T$$ days into the future and leaves the work till the last day. This is typical human behavior -- and so is a good test for a model of inference. We compare the online inferences of two models. The *Optimal Model* assumes the agent is time-consistent with softmax parameter $$\alpha$$. The *Possibly Discounting* model includes both optimal and Naive hyperbolic discounting agents in its prior. (The Possibly Discounting model includes the Optimal Model as a special case; this allows us to place a uniform prior on the models and exploit [Bayesian Model Selection](http://alumni.media.mit.edu/~tpminka/statlearn/demo/).) For each model, we compute posteriors for the agent's parameters after observing the agent's choice at each timestep. We set $$T=10$$. So the observed actions are: >`["wait", "wait", "wait", ... , "work"]` where `"work"` is the final action. We fix the utilities for doing the work (the `workCost` or $$-w$$) and for delaying the work (the `waitCost` or $$-\epsilon$$). We infer the following parameters: - The reward received after completing the work: $$R$$ or `reward` - The agent's softmax parameter: $$\alpha$$ - The agent's discount rate (for the Possibly Discounting model): $$k$$ or `discount` Note that we are not just inferring *whether* the agent is biased; we also infer how biased they are. For each parameter, we plot a time-series showing the posterior expectation of the parameter on each day. 
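Stripped of the environment setup and plotting, the inference in the codebox below follows a simple pattern: sample candidate agent parameters from a prior, build the corresponding agent, and `observe` the agent's action distribution at each observed state. Here is a schematic sketch; the parameter grids and the `makeAgent` argument are placeholders rather than the chapter's actual code:

~~~~
// Schematic joint inference over agent parameters (placeholder values).
// makeAgent(params) is assumed to return an agent whose `act` method
// maps a state to a distribution over actions.
var inferAgentParams = function(makeAgent, observedStateAction) {
  return Infer({ model() {
    var params = {
      reward: uniformDraw([0.5, 2, 4, 6, 8]),    // utility of finishing
      discount: uniformDraw([0, 0.5, 1, 2, 4]),  // hyperbolic discount rate
      alpha: uniformDraw([0.1, 1, 10, 100])      // softmax noise
    };
    var act = makeAgent(params).act;
    map(function(stateAction) {
      observe(act(stateAction[0]), stateAction[1]);
    }, observedStateAction);
    return params;
  }});
};
~~~~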
We also plot the model's posterior predictive probability that the agent does the work on the last day (assuming the agent gets to the last day without having done the work). <!--TODO: ideally we would do this as actual online inference. --> ~~~~ // infer_procrastination ///fold: makeProcrastinationMDP, makeProcrastinationUtility, displayTimeSeries, ... var makeProcrastinationMDP = function(deadlineTime) { var stateLocs = ["wait_state", "reward_state"]; var actions = ["wait", "work", "relax"]; var stateToActions = function(state) { return (state.loc === "wait_state" ? ["wait", "work"] : ["relax"]); }; var advanceTime = function (state) { var newTimeLeft = state.timeLeft - 1; var terminateAfterAction = (newTimeLeft === 1 || state.loc === "reward_state"); return extend(state, { timeLeft: newTimeLeft, terminateAfterAction: terminateAfterAction }); }; var transition = function(state, action) { assert.ok(_.includes(stateLocs, state.loc) && _.includes(actions, action), 'procrastinate transition:' + [state.loc,action]); if (state.loc === "reward_state") { return advanceTime(state); } else if (action === "wait") { var waitSteps = state.waitSteps + 1; return extend(advanceTime(state), { waitSteps }); } else { var newState = extend(state, { loc: "reward_state" }); return advanceTime(newState); } }; var feature = function(state) { return state.loc; }; var startState = { loc: "wait_state", waitSteps: 0, timeLeft: deadlineTime, terminateAfterAction: false }; return { actions, stateToActions, transition, feature, startState }; }; var makeProcrastinationUtility = function(utilityTable) { assert.ok(hasProperties(utilityTable, ['waitCost', 'workCost', 'reward']), 'makeProcrastinationUtility args'); var waitCost = utilityTable.waitCost; var workCost = utilityTable.workCost; var reward = utilityTable.reward; // NB: you receive the *workCost* when you leave the *wait_state* // You then receive the reward when leaving the *reward_state* state return function(state, action) { if (state.loc === "reward_state") { return reward + state.waitSteps * waitCost; } else if (action === "work") { return workCost; } else { return 0; } }; }; var getMarginal = function(dist, key){ return Infer({ model() { return sample(dist)[key]; }}); }; var displayTimeSeries = function(observedStateAction, getPosterior) { var features = ['reward', 'predictWorkLastMinute', 'alpha', 'discount']; // dist on {a:1, b:3, ...} -> [E('a'), E('b') ... 
] var distToMarginalExpectations = function(dist, keys) { return map(function(key) { return expectation(getMarginal(dist, key)); }, keys); }; // condition observations up to *timeIndex* and take expectations var inferUpToTimeIndex = function(timeIndex, useOptimalModel) { var observations = observedStateAction.slice(0, timeIndex); return distToMarginalExpectations(getPosterior(observations, useOptimalModel), features); }; var getTimeSeries = function(useOptimalModel) { var inferAllTimeIndexes = map(function(index) { return inferUpToTimeIndex(index, useOptimalModel); }, _.range(observedStateAction.length)); return map(function(i) { // get full time series of online inferences for each feature return map(function(infer){return infer[i];}, inferAllTimeIndexes); }, _.range(features.length)); }; var displayOptimalAndPossiblyDiscountingSeries = function(index) { print('\n\nfeature: ' + features[index]); var optimalSeries = getTimeSeries(true)[index]; var possiblyDiscountingSeries = getTimeSeries(false)[index]; var plotOptimal = map( function(pair){ return { t: pair[0], expectation: pair[1], agentModel: 'Optimal' }; }, zip(_.range(observedStateAction.length), optimalSeries)); var plotPossiblyDiscounting = map( function(pair){ return { t: pair[0], expectation: pair[1], agentModel: 'Possibly Discounting' }; }, zip(_.range(observedStateAction.length), possiblyDiscountingSeries)); viz.line(plotOptimal.concat(plotPossiblyDiscounting), { groupBy: 'agentModel' }); }; print('Posterior expectation on feature after observing ' + '"wait" for t timesteps and "work" when t=9'); map(displayOptimalAndPossiblyDiscountingSeries, _.range(features.length)); return ''; }; var procrastinationData = [[{"loc":"wait_state","waitSteps":0,"timeLeft":10,"terminateAfterAction":false},"wait"],[{"loc":"wait_state","waitSteps":1,"timeLeft":9,"terminateAfterAction":false},"wait"],[{"loc":"wait_state","waitSteps":2,"timeLeft":8,"terminateAfterAction":false},"wait"],[{"loc":"wait_state","waitSteps":3,"timeLeft":7,"terminateAfterAction":false},"wait"],[{"loc":"wait_state","waitSteps":4,"timeLeft":6,"terminateAfterAction":false},"wait"],[{"loc":"wait_state","waitSteps":5,"timeLeft":5,"terminateAfterAction":false},"wait"],[{"loc":"wait_state","waitSteps":6,"timeLeft":4,"terminateAfterAction":false},"wait"],[{"loc":"wait_state","waitSteps":7,"timeLeft":3,"terminateAfterAction":false},"wait"],[{"loc":"wait_state","waitSteps":8,"timeLeft":2,"terminateAfterAction":false},"work"],[{"loc":"reward_state","waitSteps":8,"timeLeft":1,"terminateAfterAction":true},"relax"]]; /// var getPosterior = function(observedStateAction, useOptimalModel) { var world = makeProcrastinationMDP(); var lastChanceState = secondLast(procrastinationData)[0]; return Infer({ model() { var utilityTable = { reward: uniformDraw([0.5, 2, 3, 4, 5, 6, 7, 8]), waitCost: -0.1, workCost: -1 }; var params = { utility: makeProcrastinationUtility(utilityTable), alpha: categorical([0.1, 0.2, 0.2, 0.2, 0.3], [0.1, 1, 10, 100, 1000]), discount: useOptimalModel ? 
0 : uniformDraw([0, .5, 1, 2, 4]), sophisticatedOrNaive: 'naive' }; var agent = makeMDPAgent(params, world); var act = agent.act; map(function(stateAction) { var state = stateAction[0]; var action = stateAction[1]; observe(act(state, 0), action); }, observedStateAction); return { reward: utilityTable.reward, alpha: params.alpha, discount: params.discount, predictWorkLastMinute: sample(act(lastChanceState, 0)) === 'work' }; }}); }; displayTimeSeries(procrastinationData, getPosterior); ~~~~ The optimal model makes inferences that clash with everyday intuition. Suppose someone has still not done a task with only two days left. Would you confidently rule out them doing it at the last minute? With two days left, the Optimal model has almost complete confidence that the agent doesn't care about the task enough to do the work (`reward < workCost = 1`). Hence it assigns probability $$0.005$$ to the agent doing the task at the last minute (`predictWorkLastMinute`). By contrast, the Possibly Discounting model predicts the agent will do the task with probability around $$0.2$$. The predictive probability is no higher than $$0.2$$ because the Possibly Discounting model allows the agent to be optimal (`discount==0`) and because a sub-optimal agent might be too lazy to do the work even at the last minute (i.e. `discount` is high enough to overwhelm `reward`). Suppose someone completes the task on the final day. What do you infer about them? The Optimal Model has to explain the action by massively revising its inference about `reward` and $$\alpha$$. It suddenly infers that the agent is extremely noisy and that `reward > workCost` by a big margin. The extreme noise is needed to explain why the agent would miss a good option nine out of ten times. By contrast, the Possibly Discounting Model does not change its inference about the agent's noise level very much at all (in terms of pratical significance). It infers a much higher value for `reward`, which is plausible in this context. <!--[Point that Optimal Model predicts the agent will finish early on a similar problem, while Discounting Model will predict waiting till last minute.]--> ---------- ## Learning from Reward-myopic Agents in Bandits Chapter V.2. "[Bounded Agents](/chapters/5c-myopia)" explained that Reward-myopic agents explore less than optimal agents. The Reward-myopic agent plans each action as if time runs out in $$C_g$$ steps, where $$C_g$$ is the *bound* or "look ahead". If exploration only pays off in the long-run (after the bound) then the agent won't explore[^bandit1]. This means there are two possible explanations for an agent not exploring: either the agent is greedy or the agent has a low prior on the utility of the unknown options. [^bandit1]: If there's no noise in transitions or in selection of actions, the Reward-myopic agent will *never* explore and will do poorly. We return to the deterministic bandit-style problem from earlier. At each trial, the agent chooses between two arms with the following properties: - `arm0`: yields chocolate - `arm1`: yields either champagne or no prize at all (agent's prior is $$0.7$$ for nothing) <img src="/assets/img/5c-irl-bandit-diagram.png" alt="diagram" style="width: 400px;"/> The inference problem is to infer the agent's preference over chocolate. While having only two deterministic arms may seem overly simple, the same structure is shared by realistic problems. For example, we can imagine observing people choosing between different cuisines, restaurants or menu options. 
Usually people know about some options well but are uncertain about others. When inferring their preferences, we distinguish between options chosen for exploration vs. exploitation. The same applies to people choosing media sources: someone might try out a channel to learn whether it shows their favorite genre. As with the Procrastination example above, we compare the inferences of two models. The *Optimal Model* assumes the agent solves the POMDP optimally. The *Possibly Reward-myopic Model* includes both the optimal agent and Reward-myopic agents with different values for the bound $$C_g$$. The models know the agent's utility for champagne and his prior about how likely champagne is from `arm1`. The models have a fixed prior on the agent's utility for chocolate. We vary the agent's time horizon between 2 and 10 timesteps and plot posterior expectations for the utility of chocolate. For the Possibly Reward-myopic model, we also plot the expectation for $$C_g$$. <!-- TODO fix this codebox --> <!-- infer_utility_from_no_exploration --> ~~~~ // helper function to assemble and display inferred values ///fold: var timeHorizonValues = _.range(10).slice(2); var features = ['Utility of arm 0 (chocolate)', 'Greediness bound']; var displayExpectations = function(getPosterior) { var getExpectations = function(useOptimalModel) { var inferAllTimeHorizons = map(function(horizon) { return getPosterior(horizon, useOptimalModel); }, timeHorizonValues); return map( function(i) { return map(function(infer){return infer[i];}, inferAllTimeHorizons); }, _.range(features.length)); }; var displayOptimalAndPossiblyRewardMyopicSeries = function(index) { print('\n\nfeature: ' + features[index]); var optimalSeries = getExpectations(true)[index]; var possiblyRewardMyopicSeries = getExpectations(false)[index]; var plotOptimal = map( function(pair) { return { horizon: pair[0], expectation: pair[1], agentModel: 'Optimal' }; }, zip(timeHorizonValues, optimalSeries)); var plotPossiblyRewardMyopic = map( function(pair){ return { horizon: pair[0], expectation: pair[1], agentModel: 'Possibly RewardMyopic' }; }, zip(timeHorizonValues, possiblyRewardMyopicSeries)); viz.line(plotOptimal.concat(plotPossiblyRewardMyopic), { groupBy: 'agentModel' }); }; print('Posterior expectation on feature after observing no exploration'); map(displayOptimalAndPossiblyRewardMyopicSeries, _.range(features.length)); return ''; }; var getMarginal = function(dist, key){ return Infer({ model() { return sample(dist)[key]; }}); }; /// var getPosterior = function(numberOfTrials, useOptimalModel) { var trueArmToPrizeDist = { 0: Delta({ v: 'chocolate' }), 1: Delta({ v: 'nothing' }) }; var bandit = makeBanditPOMDP({ numberOfArms: 2, armToPrizeDist: trueArmToPrizeDist, numberOfTrials: numberOfTrials }); var startState = bandit.startState; var alternativeArmToPrizeDist = extend(trueArmToPrizeDist, { 1: Delta({ v: 'champagne' }) }); var alternativeStartState = makeBanditStartState(numberOfTrials, alternativeArmToPrizeDist); var priorAgentPrior = Delta({ v: Categorical({ vs: [startState, alternativeStartState], ps: [0.7, 0.3] }) }); var priorPrizeToUtility = Infer({ model() { return { chocolate: uniformDraw(_.range(20).concat(25)), nothing: 0, champagne: 20 }; }}); var priorMyopia = ( useOptimalModel ? 
Delta({ v: { on: false, bound:0 }}) : Infer({ model() { return { bound: categorical([.4, .2, .1, .1, .1, .1], [1, 2, 3, 4, 6, 10]) }; }})); var prior = { priorAgentPrior, priorPrizeToUtility, priorMyopia }; var baseAgentParams = { alpha: 1000, sophisticatedOrNaive: 'naive', discount: 0, noDelays: useOptimalModel }; var observations = [[startState, 0]]; var outputDist = inferBandit(bandit, baseAgentParams, prior, observations, 'offPolicy', 0, 'beliefDelay'); var marginalChocolate = Infer({ model() { return sample(outputDist).prizeToUtility.chocolate; }}); return [ expectation(marginalChocolate), expectation(getMarginal(outputDist, 'myopiaBound')) ]; }; print('Prior expected utility for arm0 (chocolate): ' + listMean(_.range(20).concat(25)) ); displayExpectations(getPosterior); ~~~~ The graphs show that as the agent's time horizon increases the inferences of the two models diverge. For the Optimal agent, the longer time horizon makes exploration more valuable. So the Optimal model infers a higher utility for the known option as the time horizon increases. By contrast, the Possibly Reward-myopic model can explain away the lack of exploration by the agent being Reward-myopic. This latter model infers slightly lower values for $$C_g$$ as the horizon increases. >**Exercise**: Suppose that instead of allowing the agent to be greedy, we allowed the agent to be a hyperbolic discounter. Think about how this would affect inferences from the observations above and for other sequences of observation. Change the code above to test out your predictions. <br> Next chapter: [Joint inference of biases and preferences II](/chapters/5e-joint-inference.html) <br> ### Footnotes
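The "explaining away" pattern behind these inferences can be seen in miniature without the full `webppl-agents` machinery. The sketch below is purely illustrative (the prior, the threshold of 15, and the exploration probabilities are made-up numbers, and the crude `explores` rule stands in for full POMDP planning): conditioning on a lack of exploration shifts probability mass both towards a high utility for the known arm and towards the agent being myopic, and each explanation partially screens off the other.

~~~~
// Toy "explaining away" model for the inferences discussed above.
// All numbers and the exploration rule are hypothetical.
var model = function() {
  var chocolateUtility = uniformDraw(_.range(21)); // utility of the known arm
  var myopic = flip(0.5);                          // is the agent Reward-myopic?
  // A non-myopic agent is likely to explore unless the known arm
  // already looks very good (a crude stand-in for POMDP planning).
  var explores = !myopic && flip(chocolateUtility < 15 ? 0.9 : 0.1);
  condition(!explores); // we observe no exploration
  return { chocolateUtility, myopic };
};

var posterior = Infer({ model: model });
print('E[utility of known arm | no exploration]: ' +
      expectation(Infer({ model() { return sample(posterior).chocolateUtility; }})));
print('P(myopic | no exploration): ' +
      expectation(Infer({ model() { return sample(posterior).myopic ? 1 : 0; }})));
~~~~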
"2019-08-29T10:20:19"
[ "Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan" ]
[]
5d-joint-inference.md
a894af445e54ec10a745213ccd2d14e3
Modeling Agents with Probabilistic Programs
https://agentmodels.org/chapters/3b-mdp-gridworld.html
agentmodels
markdown
--- layout: chapter title: "MDPs and Gridworld in WebPPL" description: Noisy actions (softmax), stochastic transitions, policies, Q-values. --- This chapter explores some key features of MDPs: stochastic dynamics, stochastic policies, and value functions. ### Hiking in Gridworld We begin by introducing a new gridworld MDP: > **Hiking Problem**: >Suppose that Alice is hiking. There are two peaks nearby, denoted "West" and "East". The peaks provide different views and Alice must choose between them. South of Alice's starting position is a steep hill. Falling down the hill would result in painful (but non-fatal) injury and end the hike early. We represent Alice's hiking problem with a Gridworld similar to Bob's Restaurant Choice example. The peaks are terminal states, providing different utilities. The steep hill is represented by a row of terminal state, each with identical negative utility. Each timestep before Alice reaches a terminal state incurs a "time cost", which is negative to represent the fact that Alice prefers a shorter hike. <!-- TODO might be good to indicate on plot that the steep hills are bad --> <!-- draw_hike --> ~~~~ var H = { name: 'Hill' }; var W = { name: 'West' }; var E = { name: 'East' }; var ___ = ' '; var grid = [ [___, ___, ___, ___, ___], [___, '#', ___, ___, ___], [___, '#', W , '#', E ], [___, ___, ___, ___, ___], [ H , H , H , H , H ] ]; var start = [0, 1]; var mdp = makeGridWorldMDP({ grid, start }); viz.gridworld(mdp.world, { trajectory: [mdp.startState] }); ~~~~ We start with a *deterministic* transition function. In this case, Alice's risk of falling down the steep hill is solely due to softmax noise in her action choice (which is minimal in this case). The agent model is the same as the one at the end of [Chapter III.1](/chapters/3a-mdp.html). We place the functions `act`, `expectedUtility` in a function `makeMDPAgent`. The following codebox defines this function and we use it later on without defining it (since it's in the `webppl-agents` library). 
<!-- define_agent_simulate --> ~~~~ // Set up agent structure var makeMDPAgent = function(params, world) { var stateToActions = world.stateToActions; var transition = world.transition; var utility = params.utility; var alpha = params.alpha; var act = dp.cache( function(state) { return Infer({ model() { var action = uniformDraw(stateToActions(state)); var eu = expectedUtility(state, action); factor(alpha * eu); return action; }}); }); var expectedUtility = dp.cache( function(state, action){ var u = utility(state, action); if (state.terminateAfterAction){ return u; } else { return u + expectation(Infer({ model() { var nextState = transition(state, action); var nextAction = sample(act(nextState)); return expectedUtility(nextState, nextAction); }})); } }); return { params, expectedUtility, act }; }; var simulate = function(startState, world, agent) { var act = agent.act; var transition = world.transition; var sampleSequence = function(state) { var action = sample(act(state)); var nextState = transition(state, action); if (state.terminateAfterAction) { return [state]; } else { return [state].concat(sampleSequence(nextState)); } }; return sampleSequence(startState); }; // Set up world var makeHikeMDP = function(options) { var H = { name: 'Hill' }; var W = { name: 'West' }; var E = { name: 'East' }; var ___ = ' '; var grid = [ [___, ___, ___, ___, ___], [___, '#', ___, ___, ___], [___, '#', W , '#', E ], [___, ___, ___, ___, ___], [ H , H , H , H , H ] ]; return makeGridWorldMDP(_.assign({ grid }, options)); }; var mdp = makeHikeMDP({ start: [0, 1], totalTime: 12, transitionNoiseProbability: 0 }); var makeUtilityFunction = mdp.makeUtilityFunction; // Create parameterized agent var utility = makeUtilityFunction({ East: 10, West: 1, Hill: -10, timeCost: -.1 }); var agent = makeMDPAgent({ utility, alpha: 1000 }, mdp.world); // Run agent on world var trajectory = simulate(mdp.startState, mdp.world, agent); viz.gridworld(mdp.world, { trajectory }); ~~~~ >**Exercise**: Adjust the parameters of `utilityTable` in order to produce the following behaviors: >1. The agent goes directly to "West". >2. The agent takes the long way around to "West". >3. The agent sometimes goes to the Hill at $$[1,0]$$. Try to make this outcome as likely as possible. <!-- 3 is obtained by making timeCost positive and Hill better than alternatives --> ### Hiking with stochastic transitions Imagine that the weather is very wet and windy. As a result, Alice will sometimes intend to go one way but actually go another way (because she slips in the mud). In this case, the shorter route to the peaks might be too risky for Alice. To model bad weather, we assume that at every timestep, there is a constant independent probability `transitionNoiseProbability` of the agent moving orthogonally to their intended direction. The independence assumption is unrealistic (if a location is slippery at one timestep it is more likely slippery the next), but it is simple and satisfies the Markov assumption for MDPs. Setting `transitionNoiseProbability=0.1`, the agent's first action is now to move "up" instead of "right". 
~~~~ ///fold: makeHikeMDP var makeHikeMDP = function(options) { var H = { name: 'Hill' }; var W = { name: 'West' }; var E = { name: 'East' }; var ___ = ' '; var grid = [ [___, ___, ___, ___, ___], [___, '#', ___, ___, ___], [___, '#', W , '#', E ], [___, ___, ___, ___, ___], [ H , H , H , H , H ] ]; return makeGridWorldMDP(_.assign({ grid }, options)); }; /// // Set up world var mdp = makeHikeMDP({ start: [0, 1], totalTime: 13, transitionNoiseProbability: 0.1 // <- NEW }); // Create parameterized agent var makeUtilityFunction = mdp.makeUtilityFunction; var utility = makeUtilityFunction({ East: 10, West: 1, Hill: -10, timeCost: -.1 }); var agent = makeMDPAgent({ utility, alpha: 100 }, mdp.world); // Generate a single trajectory, draw var trajectory = simulateMDP(mdp.startState, mdp.world, agent, 'states'); viz.gridworld(mdp.world, { trajectory }); // Generate 100 trajectories, plot distribution on lengths var trajectoryDist = Infer({ model() { var trajectory = simulateMDP(mdp.startState, mdp.world, agent); return { trajectoryLength: trajectory.length } }, method: 'forward', samples: 100 }); viz(trajectoryDist); ~~~~ >**Exercise:** >1. Keeping `transitionNoiseProbability=0.1`, find settings for `utilityTable` such that the agent goes "right" instead of "up". >2. Set `transitionNoiseProbability=0.01`. Change a single parameter in `utilityTable` such that the agent goes "right" (there are multiple ways to do this). <!-- put up timeCost to -1 or so --> ### Noisy transitions vs. Noisy agents It's important to distinguish noise in the transition function from the softmax noise in the agent's selection of actions. Noise (or "stochasticity") in the transition function is a representation of randomness in the world. This is easiest to think about in games of chance[^noise]. In a game of chance (e.g. slot machines or poker) rational agents will take into account the randomness in the game. By contrast, softmax noise is a property of an agent. For example, we can vary the behavior of otherwise identical agents by varying their parameter $$\alpha$$. Unlike transition noise, softmax noise has little influence on the agent's planning for the Hiking Problem. Since it's so bad to fall down the hill, the softmax agent will rarely do so even if they take the short route. The softmax agent is like a person who takes inefficient routes when stakes are low but "pulls themself together" when stakes are high. [^noise]: An agent's world model might treat a complex set of deterministic rules as random. In this sense, agents will vary in whether they represent an MDP as stochastic or not. We won't consider that case in this tutorial. >**Exercise:** Use the codebox below to explore different levels of softmax noise. Find a setting of `utilityTable` and `alpha` such that the agent goes to West and East equally often and nearly always takes the most direct route to both East and West. Included below is code for simulating many trajectories and returning the trajectory length. You can extend this code to measure whether the route taken by the agent is direct or not. (Note that while the softmax agent here is able to "backtrack" or return to its previous location, in later Gridworld examples we disalllow backtracking as a possible action). 
~~~~ ///fold: makeHikeMDP, set up world var makeHikeMDP = function(options) { var H = { name: 'Hill' }; var W = { name: 'West' }; var E = { name: 'East' }; var ___ = ' '; var grid = [ [___, ___, ___, ___, ___], [___, '#', ___, ___, ___], [___, '#', W , '#', E ], [___, ___, ___, ___, ___], [ H , H , H , H , H ] ]; return makeGridWorldMDP(_.assign({ grid }, options)); }; var mdp = makeHikeMDP({ start: [0, 1], totalTime: 13, transitionNoiseProbability: 0.1 }); var world = mdp.world; var startState = mdp.startState; var makeUtilityFunction = mdp.makeUtilityFunction; /// // Create parameterized agent var utility = makeUtilityFunction({ East: 10, West: 1, Hill: -10, timeCost: -.1 }); var alpha = 1; // <- SOFTMAX NOISE var agent = makeMDPAgent({ utility, alpha }, world); // Generate a single trajectory, draw var trajectory = simulateMDP(startState, world, agent, 'states'); viz.gridworld(world, { trajectory }); // Generate 100 trajectories, plot distribution on lengths var trajectoryDist = Infer({ model() { var trajectory = simulateMDP(startState, world, agent); return { trajectoryLength: trajectory.length } }, method: 'forward', samples: 100 }); viz(trajectoryDist); ~~~~ ### Stochastic transitions: plans and policies We return to the case of a stochastic environment with very low softmax action noise. In a stochastic environment, the agent sometimes finds themself in a state they did not intend to reach. The functions `agent` and `expectedUtility` (inside `makeMDPAgent`) implicitly compute the expected utility of actions for every possible future state, including states that the agent will try to avoid. In the MDP literature, this function from states and remaining time to actions is called a *policy*. (For infinite-horizon MDPs, policies are functions from states to actions.) Since policies take into account every possible contingency, they are quite different from the everyday notion of a plan. Consider the example from above where the agent takes the long route because of the risk of falling down the hill. If we generate a single trajectory for the agent, they will likely take the long route. However, if we generated many trajectories, we would sometimes see the agent move "right" instead of "up" on their first move. Before taking this first action, the agent implicitly computes what they *would* do if they end up moving right. To find out what they would do, we can artificially start the agent in $$[1,1]$$ instead of $$[0,1]$$: <!-- policy --> ~~~~ ///fold: makeHikeMDP var makeHikeMDP = function(options) { var H = { name: 'Hill' }; var W = { name: 'West' }; var E = { name: 'East' }; var ___ = ' '; var grid = [ [___, ___, ___, ___, ___], [___, '#', ___, ___, ___], [___, '#', W , '#', E ], [___, ___, ___, ___, ___], [ H , H , H , H , H ] ]; return makeGridWorldMDP(_.assign({ grid }, options)); }; /// // Parameters for world var mdp = makeHikeMDP({ start: [1, 1], // Previously: [0, 1] totalTime: 11, // Previously: 12 transitionNoiseProbability: 0.1 }); var makeUtilityFunction = mdp.makeUtilityFunction; // Parameters for agent var utility = makeUtilityFunction({ East: 10, West: 1, Hill: -10, timeCost: -.1 }); var agent = makeMDPAgent({ utility, alpha: 1000 }, mdp.world); var trajectory = simulateMDP(mdp.startState, mdp.world, agent, 'states'); viz.gridworld(mdp.world, { trajectory }); ~~~~ Extending this idea, we can display the expected values of each action the agent *could have taken* during their trajectory. 
These expected values numbers are analogous to state-action Q-values in infinite-horizon MDPs. The expected values were already being computed implicitly; we now use `getExpectedUtilitiesMDP` to access them. The displayed numbers in each grid cell are the expected utilities of moving in the corresponding directions. For example, we can read off how close the agent was to taking the short route as opposed to the long route. (Note that if the difference in expected utility between two actions is small then a noisy agent will take each of them with nearly equal probability). ~~~~ ///fold: makeBigHikeMDP, getExpectedUtilitiesMDP var makeBigHikeMDP = function(options) { var H = { name: 'Hill' }; var W = { name: 'West' }; var E = { name: 'East' }; var ___ = ' '; var grid = [ [___, ___, ___, ___, ___, ___], [___, ___, ___, ___, ___, ___], [___, ___, '#', ___, ___, ___], [___, ___, '#', W , '#', E ], [___, ___, ___, ___, ___, ___], [ H , H , H , H , H , H ] ]; return makeGridWorldMDP(_.assign({ grid }, options)); }; // trajectory must consist only of states. This can be done by calling // *simulate* with an additional final argument 'states'. var getExpectedUtilitiesMDP = function(stateTrajectory, world, agent) { var eu = agent.expectedUtility; var actions = world.actions; var getAllExpectedUtilities = function(state) { var actionUtilities = map( function(action){ return eu(state, action); }, actions); return [state, actionUtilities]; }; return map(getAllExpectedUtilities, stateTrajectory); }; /// // Long route is better, agent takes long route var mdp = makeBigHikeMDP({ start: [1, 1], totalTime: 12, transitionNoiseProbability: 0.03 }); var makeUtilityFunction = mdp.makeUtilityFunction; var utility = makeUtilityFunction({ East: 10, West: 7, Hill : -40, timeCost: -0.4 }); var agent = makeMDPAgent({ utility, alpha: 100 }, mdp.world); var trajectory = simulateMDP(mdp.startState, mdp.world, agent, 'states'); var actionExpectedUtilities = getExpectedUtilitiesMDP(trajectory, mdp.world, agent); viz.gridworld(mdp.world, { trajectory, actionExpectedUtilities }); ~~~~ So far, our agents all have complete knowledge about the state of the world. In the [next chapter](/chapters/3c-pomdp.html), we will explore partially observable worlds. <br> ### Footnotes
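As a final aside on the softmax rule: the following self-contained sketch quantifies the point above, showing how the probability of the better of two actions grows with both the expected-utility gap and $$\alpha$$. The gaps and $$\alpha$$ values are arbitrary choices for illustration.

~~~~
// How an expected-utility gap translates into softmax choice probabilities.
// The gaps and alpha values below are arbitrary, for illustration only.
var softmaxChoice = function(alpha, expectedUtilities) {
  return Infer({ model() {
    var i = uniformDraw(_.range(expectedUtilities.length));
    factor(alpha * expectedUtilities[i]);
    return i;
  }});
};

var report = function(alpha, gap) {
  // action 1 is better than action 0 by `gap` units of expected utility
  var pBetter = Math.exp(softmaxChoice(alpha, [0, gap]).score(1));
  print('alpha: ' + alpha + ', EU gap: ' + gap +
        ', P(better action): ' + pBetter.toFixed(3));
};

map(function(alpha) {
  return map(function(gap) { return report(alpha, gap); }, [0.1, 1, 10]);
}, [1, 10, 100]);
~~~~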
"2016-12-13T14:21:09"
[ "Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan" ]
[]
3b-mdp-gridworld.md
255b3dc4b0cda7173da19999092818bb
Modeling Agents with Probabilistic Programs
https://agentmodels.org/chapters/5c-myopic.html
agentmodels
markdown
--- layout: chapter title: Bounded Agents-- Myopia for rewards and updates description: Heuristic POMDP algorithms that assume a short horizon. --- ### Introduction The previous chapter extended the MDP agent model to include exponential and hyperbolic discounting. The goal was to produce models of human behavior that capture a prominent *bias* (time inconsistency). As noted [earlier](/chapters/5-biases-intro), humans are not just biased but also *cognitively bounded*. This chapter extends the POMDP agent to capture heuristics for planning that are sub-optimal but fast and frugal. ## Reward-myopic Planning: the basic idea Optimal planning is difficult because the best action now depends on the entire future. The optimal POMDP agent reasons backwards from the utility of its final state, judging earlier actions on whether they lead to good final states. With an infinite time horizon, an optimal agent must consider the expected utility of being in every possible state, including states only reachable after a very long duration. Instead of explicitly optimizing for the entire future when taking an action, an agent can "myopically" optimize for near-term rewards. With a time-horizon of 1000 timesteps, a myopic agent's first action might optimize for reward up to timestep $$t=5$$. Their second action would optimize for rewards up to $$t=6$$, and so on. Whereas the optimal agent computes a complete policy before the first timestep and then follows the policy, the "reward-myopic agent" computes a new myopic policy at each timestep, thus spreading out computation over the whole time-horizon and usually doing much less computation overall[^reward]. [^reward]: If optimal planning is super-linear in the time-horizon, the Reward-myopic agent will do less computation overall. The Reward-myopic agent only considers states or belief-states that it actually enters or that it gets close to, while the Optimal approach considers every possible state or belief-state. The Reward-myopic agent succeeds when continually optimizing for the short-term produces good long-term performance. Often this fails: climbing a mountain can get progressively more exhausting and painful until the summit is finally reached. One patch for this problem is to provide the agent with fake short-term rewards that are a proxy for long-term expected utility. This is closely related to "reward shaping" in Reinforcement Learning refp:chentanez2004intrinsically. ### Reward-myopic Planning: implementation and examples The **Reward-myopic agent** takes the action that would be optimal if the time-horizon were $$C_g$$ steps into the future. The "cutoff" or "bound", $$C_g > 0$$, will typically be much smaller than the time horizon for the decision problem. Notice the similarity between Reward-myopic agents and hyperbolic discounting agents. Both agents make plans based on short-term rewards. Both revise these plans at every timestep. And the Naive Hyperbolic Discounter and Reward-myopic agents both have implicit models of their future selves that are incorrect. A major difference is that Reward-myopic planning is easy to make computationally fast. The Reward-myopic agent can be implemented using the concept of *delay* described in the previous [chapter](/chapters/5b-time-inconsistency) and the implementation is left as an exercise: >**Exercise:** Formalize POMDP and MDP versions of the Reward-myopic agent by modifying the equations for the expected utility of state-action pairs or belief-state-action pairs. 
Implement the agent by modifying the code for the POMDP and MDP agents. Verify that the agent behaves sub-optimally (but more efficiently) on Gridworld and Bandit problems. ------ The Reward-myopic agent succeeds if good short-term actions produce good long-term consequences. In Bandit problems, elaborate long-terms plans are not needed to reach particular desirable future states. It turns out that a maximally Reward-myopic agent, who only cares about the immediate reward ($$C_g = 1$$), does well on Multi-arm Bandits provided they take noisy actions refp:kuleshov2014algorithms. The next codeboxes show the performance of the Reward-myopic agent on Bandit problems. The first codebox is a two-arm Bandit problem, illustrated in Figure 1. We use a Reward-myopic agent with high softmax noise: $$C_g=1$$ and $$\alpha=10$$. The Reward-myopic agent's average reward over 100 trials is close to the expected average reward given perfect knowledge of the arms. <img src="/assets/img/5b-greedy-bandit.png" alt="diagram" style="width: 600px;"/> >**Figure 1:** Bandit problem. The curly brackets contain possible probabilities according to the agent's prior (the bolded number is the true probability). For `arm0`, the agent has a uniform prior on the values $$\{0, 0.25, 0.5, 0.75, 1\}$$ for the probability the arm yields the reward 1.5. <br> <!-- noisy_reward_myopic_regret_ratio --> ~~~~ ///fold: getUtility var getUtility = function(state, action) { var prize = state.manifestState.loc; return prize === 'start' ? 0 : prize; }; /// // Construct world: One bad arm, one good arm, 100 trials. var trueArmToPrizeDist = { 0: Categorical({ vs: [1.5, 0], ps: [0.25, 0.75] }), 1: Categorical({ vs: [1, 0], ps: [0.5, 0.5] }) }; var numberOfTrials = 100; var bandit = makeBanditPOMDP({ numberOfTrials, numberOfArms: 2, armToPrizeDist: trueArmToPrizeDist, numericalPrizes: true }); var world = bandit.world; var startState = bandit.startState; // Construct reward-myopic agent // Arm0 is a mixture of [0,1.5] and Arm1 of [0,1] var agentPrior = Infer({ model() { var prob15 = uniformDraw([0, 0.25, 0.5, 0.75, 1]); var prob1 = uniformDraw([0, 0.25, 0.5, 0.75, 1]); var armToPrizeDist = { 0: Categorical({ vs: [1.5, 0], ps: [prob15, 1 - prob15] }), 1: Categorical({ vs: [1, 0], ps: [prob1, 1 - prob1] }) }; return makeBanditStartState(numberOfTrials, armToPrizeDist); }}); var rewardMyopicBound = 1; var alpha = 10; // noise level var params = { alpha, priorBelief: agentPrior, rewardMyopic: { bound: rewardMyopicBound }, noDelays: false, discount: 0, sophisticatedOrNaive: 'naive' }; var agent = makeBanditAgent(params, bandit, 'beliefDelay'); var trajectory = simulatePOMDP(startState, world, agent, 'states'); var averageUtility = listMean(map(getUtility, trajectory)); print('Arm1 is best arm and has expected utility 0.5.\n' + 'So ideal performance gives average score of: 0.5 \n' + 'The average score over 100 trials for rewardMyopic agent: ' + averageUtility); ~~~~ The next codebox is a three-arm Bandit problem show in Figure 2. Given the agent's prior, `arm0` has the highest prior expectation. So the agent will try that before exploring other arms. We show the agent's actions and their average score over 40 trials. <img src="/assets/img/5b-greedy-bandit-2.png" alt="diagram" style="width: 400px;"/> >**Figure 2:** Bandit problem where `arm0` has highest prior expectation for the agent but where `arm2` is actually the best arm. (This may take a while to run.) 
<!-- noisy_rewardMyopic_3_arms --> ~~~~ // agent is same as above: bound=1, alpha=10 ///fold: var rewardMyopicBound = 1; var alpha = 10; // noise level var params = { alpha: 10, rewardMyopic: { bound: rewardMyopicBound }, noDelays: false, discount: 0, sophisticatedOrNaive: 'naive' }; var getUtility = function(state, action) { var prize = state.manifestState.loc; return prize === 'start' ? 0 : prize; }; /// var trueArmToPrizeDist = { 0: Categorical({ vs: [3, 0], ps: [0.1, 0.9] }), 1: Categorical({ vs: [1, 0], ps: [0.5, 0.5] }), 2: Categorical({ vs: [2, 0], ps: [0.5, 0.5] }) }; var numberOfTrials = 40; var bandit = makeBanditPOMDP({ numberOfArms: 3, armToPrizeDist: trueArmToPrizeDist, numberOfTrials, numericalPrizes: true }); var world = bandit.world; var startState = bandit.startState; var agentPrior = Infer({ model() { var prob3 = uniformDraw([0.1, 0.5, 0.9]); var prob1 = uniformDraw([0.1, 0.5, 0.9]); var prob2 = uniformDraw([0.1, 0.5, 0.9]); var armToPrizeDist = { 0: Categorical({ vs: [3, 0], ps: [prob3, 1 - prob3] }), 1: Categorical({ vs: [1, 0], ps: [prob1, 1 - prob1] }), 2: Categorical({ vs: [2, 0], ps: [prob2, 1 - prob2] }) }; return makeBanditStartState(numberOfTrials, armToPrizeDist); }}); var params = extend(params, { priorBelief: agentPrior }); var agent = makeBanditAgent(params, bandit, 'beliefDelay'); var trajectory = simulatePOMDP(startState, world, agent, 'stateAction'); print("Agent's first 20 actions (during exploration phase): \n" + map(second,trajectory.slice(0,20))); var averageUtility = listMean(map(getUtility, map(first,trajectory))); print('Arm2 is best arm and has expected utility 1.\n' + 'So ideal performance gives average score of: 1 \n' + 'The average score over 40 trials for rewardMyopic agent: ' + averageUtility); ~~~~ ------- ## Myopic Updating: the basic idea The Reward-myopic agent ignores rewards that occur after its myopic cutoff $$C_g$$. By contrast, an "Update-myopic agent", takes into account all future rewards but ignores the value of belief updates that occur after a cutoff. Concretely, the agent at time $$t=0$$ assumes they can only *explore* (i.e. update beliefs from observations) up to some cutoff point $$C_m$$ steps into the future, after which they just exploit without updating beliefs. In reality, the agent continues to update after time $$t=C_m$$. The Update-myopic agent, like the Naive hyperbolic discounter, has an incorrect model of their future self. Myopic updating is optimal for certain special cases of Bandits and has good performance on Bandits in general refp:frazier2008knowledge. It also provides a good fit to human performance in Bernoulli Bandits refp:zhang2013forgetful. ### Myopic Updating: applications and limitations Myopic Updating has been studied in Machine Learning refp:gonzalez2015glasses and Operations Research refp:ryzhov2012knowledge. In most cases, the cutoff point $$C_m$$ after which the agent assumes himself to exploit is set to $$C_m=1$$. This results in a scalable, analytically tractable optimization problem: pull the arm that maximizes the expected value of future exploitation given you pulled that arm. This "future exploitation" means that you pick the arm that is best in expectation for the rest of time. We've presented Bandit problems with a finite number of uncorrelated arms. Myopic Updating also works for generalized Bandit Problems: e.g. 
when rewards are correlated or continuous and in the setting of "Bayesian Optimization" where instead of a fixed number of arms the goal is to optimize a high-dimensional real-valued function. Myopic Updating does not work well for POMDPs in general. Suppose you are looking for a good restaurant in a foreign city. A good strategy is to walk to a busy street and then find the busiest restaurant. If reaching the busy street takes longer than the myopic cutoff $$C_m$$, then an Update-myopic agent won't see value in this plan. We present a concrete example of this problem below ("Restaurant Search"). This example highlights a way in which Bandit problems are an especially simple POMDP. In a Bandit problem, every aspect of the unknown latent state can be queried at any timestep (by pulling the appropriate arm). So even the Myopic Agent with $$C_m=1$$ is sensitive to the information value of every possible observation that the POMDP can yield[^selfmodel]. [^selfmodel]: The Update-myopic agent incorrectly models his future self, by assuming he ceases to update after cutoff point $$C_m$$. This incorrect "self-modeling" is also a property of model-free RL agents. For example, a Q-learner's estimation of expected utilities for states ignores the fact that the Q-learner will randomly explore with some probability. SARSA, on the other hand, does take its random exploration into account when computing this estimate. But it doesn't model the way in which its future exploration behavior will make certain actions useful in the present (as in the example of finding a restaurant in a foreign city). ### Myopic Updating: formal model Myopic Updating only makes sense in the context of an agent that is capable of learning from observations (i.e. in the POMDP rather than MDP setting). So our goal is to generalize our agent model for solving POMDPs to a Myopic Updating with $$C_m \in [1,\infty]$$. **Exercise:** Before reading on, modify the equations defining the [POMDP agent](/chapters/3c-pomdp) in order to generalize the agent model to include Myopic Updating. The optimal POMDP agent will be the special case when $$C_m=\infty$$. ------------ To extend the POMDP agent to the Update-myopic agent, we use the idea of *delays* from the previous chapter. These delays are not used to evaluate future rewards (as any discounting agent would use them). They are used to determine how future actions are simulated. If the future action occurs when delay $$d$$ exceeds cutoff point $$C_m$$, then the simulated future self does not do a belief update before taking the action. (This makes the Update-myopic agent analogous to the Naive agent: both simulate the future action by projecting the wrong delay value onto their future self). We retain the <a href="/chapters/3c-pomdp.html#notation">notation</a> from the definition of the POMDP agent and skip directly to the equation for the expected utility of a state, which we modify for the Update-myopic agent with cutoff point $$C_m \in [1,\infty]$$: $$ EU_{b}[s,a,d] = U(s,a) + \mathbb{E}_{s',o,a'}(EU_{b'}[s',a'_{b'},d+1]) $$ where: - $$s' \sim T(s,a)$$ and $$o \sim O(s',a)$$ - $$a'_{b'}$$ is the softmax action the agent takes given new belief $$b'$$ - the new belief state $$b'$$ is defined as: $$ b'(s') \propto I_{C_m}(s',a,o,d)\sum_{s \in S}{T(s,a,s')b(s)} $$ <!-- problem with < sign in latex math--> where $$I_{C_m}(s',a,o,d) = O(s',a,o)$$ if $$d$$ < $$C_m$$ and $$I_{C_m}(s',a,o,d) = 1$$ otherwise. The key change from POMDP agent is the definition of $$b'$$. 
The Update-myopic agent assumes his future self (after the cutoff $$C_m$$) updates only on his last action $$a$$ and not on observation $$o$$. For example, in a deterministic Gridworld the future self would keep track of his locations (as his location depends deterministically on his actions) but wouldn't update his belief about hidden states. The implementation of the Update-myopic agent in WebPPL is a direct translation of the definition provided above. >**Exercise:** Modify the code for the POMDP agent to represent an Update-myopic agent. See this <a href="/chapters/3c-pomdp.html#pomdpCode">codebox</a> or this library [script](https://github.com/agentmodels/webppl-agents/blob/master/src/agents/makePOMDPAgent.wppl). ### Myopic Updating for Bandits The Update-myopic agent performs well on a variety of Bandit problems. The following codeboxes compare the Update-myopic agent to the Optimal POMDP agent on binary, two-arm Bandits (see the specific example in Figure 3). <!--TODO: add statement about equivalent performance. --> <img src="/assets/img/5b-myopic-bandit.png" alt="diagram" style="width: 600px;"/> >**Figure 3**: Bandit problem. The agent's prior includes two hypotheses for the rewards of each arm, with the prior probability of each labeled to the left and right of the boxes. The priors on each arm are independent and so there are four hypotheses overall. Boxes with actual rewards have a bold border. <br> <!-- myopic_bandit_performance --> ~~~~ // Helper functions for Bandits: ///fold: // HELPERS FOR CONSTRUCTING AGENT var baseParams = { alpha: 1000, noDelays: false, sophisticatedOrNaive: 'naive', updateMyopic: { bound: 1 }, discount: 0 }; var getParams = function(agentPrior) { var params = extend(baseParams, { priorBelief: agentPrior }); return extend(params); }; var getAgentPrior = function(numberOfTrials, priorArm0, priorArm1) { return Infer({ model() { var armToPrizeDist = { 0: priorArm0(), 1: priorArm1() }; return makeBanditStartState(numberOfTrials, armToPrizeDist); }}); }; // HELPERS FOR CONSTRUCTING WORLD // Possible distributions for arms var probably0Dist = Categorical({ vs: [0, 1], ps: [0.6, 0.4] }); var probably1Dist = Categorical({ vs: [0, 1], ps: [0.4, 0.6] }); // Construct Bandit POMDP var getBandit = function(numberOfTrials){ return makeBanditPOMDP({ numberOfArms: 2, armToPrizeDist: { 0: probably0Dist, 1: probably1Dist }, numberOfTrials: numberOfTrials, numericalPrizes: true }); }; var getUtility = function(state, action) { var prize = state.manifestState.loc; return prize === 'start' ? 
0 : prize; }; // Get score for a single episode of bandits var score = function(out) { return listMean(map(getUtility, out)); }; /// // Agent prior on arm rewards // Possible distributions for arms var probably0Dist = Categorical({ vs: [0, 1], ps: [0.6, 0.4] }); var probably1Dist = Categorical({ vs: [0, 1], ps: [0.4, 0.6] }); // True latentState: // arm0 is probably0Dist, arm1 is probably1Dist (and so is better) // Agent prior on arms: arm1 (better arm) has higher EV var priorArm0 = function() { return categorical([0.5, 0.5], [probably1Dist, probably0Dist]); }; var priorArm1 = function(){ return categorical([0.6, 0.4], [probably1Dist, probably0Dist]); }; var runAgent = function(numberOfTrials, optimal) { // Construct world and agents var bandit = getBandit(numberOfTrials); var world = bandit.world; var startState = bandit.startState; var prior = getAgentPrior(numberOfTrials, priorArm0, priorArm1); var agentParams = getParams(prior); var agent = makeBanditAgent(agentParams, bandit, optimal ? 'belief' : 'beliefDelay'); return score(simulatePOMDP(startState, world, agent, 'states')); }; // Run each agent 10 times and take average of scores var means = map(function(optimal) { var scores = repeat(10, function(){ return runAgent(5,optimal); }); var st = optimal ? 'Optimal: ' : 'Update-Myopic: '; print(st + 'Mean scores on 10 repeats of 5-trial bandits\n' + scores); return listMean(scores); }, [true, false]); print('Overall means for [Optimal,Update-Myopic]: ' + means); ~~~~ >**Exercise**: The above codebox shows that performance for the two agents is similar. Try varying the priors and the `armToPrizeDist` and verify that performance remains similar. How would you provide stronger empirical evidence that the two algorithms are equivalent for this problem? The following codebox computes the runtime for Update-myopic and Optimal agents as a function of the number of Bandit trials. (This takes a while to run.) We see that the Update-myopic agent has better scaling even on a small number of trials. Note that neither agent has been optimized for Bandit problems. >**Exercise:** Think of ways to optimize the Update-myopic agent with $$C_m=1$$ for binary Bandit problems. <!-- myopic_bandit_scaling --> ~~~~ ///fold: Similar helper functions as above codebox // HELPERS FOR CONSTRUCTING AGENT var baseParams = { alpha: 1000, noDelays: false, sophisticatedOrNaive: 'naive', updateMyopic: { bound: 1 }, discount: 0 }; var getParams = function(agentPrior){ var params = extend(baseParams, { priorBelief: agentPrior }); return extend(params); }; var getAgentPrior = function(numberOfTrials, priorArm0, priorArm1){ return Infer({ model() { var armToPrizeDist = { 0: priorArm0(), 1: priorArm1() }; return makeBanditStartState(numberOfTrials, armToPrizeDist); }}); }; // HELPERS FOR CONSTRUCTING WORLD // Possible distributions for arms var probably1Dist = Categorical({ vs: [0, 1], ps: [0.4, 0.6] }); var probably0Dist = Categorical({ vs: [0, 1], ps: [0.6, 0.4] }); // Construct Bandit POMDP var getBandit = function(numberOfTrials) { return makeBanditPOMDP({ numberOfArms: 2, armToPrizeDist: { 0: probably0Dist, 1: probably1Dist }, numberOfTrials, numericalPrizes: true }); }; var getUtility = function(state, action) { var prize = state.manifestState.loc; return prize === 'start' ? 
0 : prize; }; // Get score for a single episode of bandits var score = function(out) { return listMean(map(getUtility, out)); }; // Agent prior on arm rewards // Possible distributions for arms var probably0Dist = Categorical({ vs: [0, 1], ps: [0.6, 0.4] }); var probably1Dist = Categorical({ vs: [0, 1], ps: [0.4, 0.6] }); // True latentState: // arm0 is probably0Dist, arm1 is probably1Dist (and so is better) // Agent prior on arms: arm1 (better arm) has higher EV var priorArm0 = function() { return categorical([0.5, 0.5], [probably1Dist, probably0Dist]); }; var priorArm1 = function(){ return categorical([0.6, 0.4], [probably1Dist, probably0Dist]); }; var runAgents = function(numberOfTrials) { // Construct world and agents var bandit = getBandit(numberOfTrials); var world = bandit.world; var startState = bandit.startState; var agentPrior = getAgentPrior(numberOfTrials, priorArm0, priorArm1); var agentParams = getParams(agentPrior); var optimalAgent = makeBanditAgent(agentParams, bandit, 'belief'); var myopicAgent = makeBanditAgent(agentParams, bandit, 'beliefDelay'); // Get average score across totalTime for both agents var runOptimal = function() { return score(simulatePOMDP(startState, world, optimalAgent, 'states')); }; var runMyopic = function() { return score(simulatePOMDP(startState, world, myopicAgent, 'states')); }; var optimalDatum = { numberOfTrials, runtime: timeit(runOptimal).runtimeInMilliseconds*0.001, agentType: 'optimal' }; var myopicDatum = { numberOfTrials, runtime: timeit(runMyopic).runtimeInMilliseconds*0.001, agentType: 'myopic' }; return [optimalDatum, myopicDatum]; }; /// // Compute runtime as # Bandit trials increases var totalTimeValues = _.range(9).slice(2); print('Runtime in s for [Optimal, Myopic] agents:'); var runtimeValues = _.flatten(map(runAgents, totalTimeValues)); viz.line(runtimeValues, { groupBy: 'agentType' }); ~~~~ ### Myopic Updating for the Restaurant Search Problem The Update-myopic agent assumes they will not update beliefs after the bound $$C_m$$ and so does not make plans that depend on learning something after the bound. We illustrate this limitation with a new problem: >**Restaurant Search:** You are looking for a good restaurant in a foreign city without the aid of a smartphone. You know the quality of some restaurants already and you are uncertain about the others. If you walk right up to a restaurant, you can tell its quality by seeing how busy it is inside. You care about the quality of the restaurant and about minimizing the time spent walking. How does the Update-myopic agent fail? Suppose that a few blocks from agent is a great restaurant next to a bad restaurant and the agent doesn't know which is which. If the agent checked inside each restaurant, they would pick out the great one. But if they are Update-myopic, they assume they'd be unable to tell between them. The codebox below depicts a toy version of this problem in Gridworld. The restaurants vary in quality between 0 and 5. The agent knows the quality of Restaurant A and is unsure about the other restaurants. One of Restaurants D and E is great and the other is bad. The Optimal POMDP agent will go right up to each restaurant and find out which is great. The Update-myopic agent, with low enough bound $$C_m$$, will either go to the known good restaurant A or investigate one of the restaurants that is closer than D and E. <!--TODO: Toy version is lame (too small). Why is the myopic version so slow? TODO: gridworld draw should take pomdp trajectories. 
they should also take POMDP as "world". --> <!-- optimal_agent_restaurant_search --> ~~~~ var pomdp = makeRestaurantSearchPOMDP(); var world = pomdp.world; var makeUtilityFunction = pomdp.makeUtilityFunction; var startState = pomdp.startState; var agentPrior = Infer({ model() { var rewardD = uniformDraw([0,5]); // D is bad or great (E is opposite) var latentState = { A: 3, B: uniformDraw(_.range(6)), C: uniformDraw(_.range(6)), D: rewardD, E: 5 - rewardD }; return { manifestState: pomdp.startState.manifestState, latentState }; }}); // Construct optimal agent var params = { utility: makeUtilityFunction(-0.01), // timeCost is -.01 alpha: 1000, priorBelief: agentPrior }; var agent = makePOMDPAgent(params, world); var trajectory = simulatePOMDP(pomdp.startState, world, agent, 'states'); var manifestStates = _.map(trajectory, _.property('manifestState')); print('Quality of restaurants: \n' + JSON.stringify(pomdp.startState.latentState)); viz.gridworld(pomdp.mdp, { trajectory: manifestStates }); ~~~~ >**Exercise:** The codebox below shows the behavior the Update-myopic agent. Try different values for the `myopicBound` parameter. For values in $$[1,2,3]$$, explain the behavior of the Update-myopic agent. <!-- myopic_agent_restaurant_search --> ~~~~ ///fold: Construct world and agent prior as above var pomdp = makeRestaurantSearchPOMDP(); var world = pomdp.world; var makeUtilityFunction = pomdp.makeUtilityFunction; var agentPrior = Infer({ model() { var rewardD = uniformDraw([0,5]); // D is bad or great (E is opposite) var latentState = { A: 3, B: uniformDraw(_.range(6)), C: uniformDraw(_.range(6)), D: rewardD, E: 5 - rewardD }; return { manifestState: pomdp.startState.manifestState, latentState }; }}); /// var myopicBound = 1; var params = { utility: makeUtilityFunction(-0.01), alpha: 1000, priorBelief: agentPrior, noDelays: false, discount: 0, sophisticatedOrNaive: 'naive', updateMyopic: { bound: myopicBound } }; var agent = makePOMDPAgent(params, world); var trajectory = simulatePOMDP(pomdp.startState, world, agent, 'states'); var manifestStates = _.map(trajectory, _.property('manifestState')); print('Rewards for each restaurant: ' + JSON.stringify(pomdp.startState.latentState)); print('Myopic bound: ' + myopicBound); viz.gridworld(pomdp.mdp, { trajectory: manifestStates }); ~~~~ Next chapter: [Joint inference of biases and preferences I](/chapters/5d-joint-inference.html) <br> ### Footnotes
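To close the chapter, here is the gated belief update from the formal model above, stripped down to a toy example outside the `webppl-agents` library. The binary hidden state, the 0.8/0.2 observation noise and the function names are all made up for illustration; the only change from a standard Bayesian update is that the observation is ignored once the simulated delay reaches the bound $$C_m$$.

~~~~
// Sketch of the Update-myopic belief update: condition on the observation
// only while delay < bound. Everything here is illustrative, not the
// library's API; a real POMDP update would also apply the transition
// function for the chosen action.
var observationModel = function(state) {
  // noisy binary signal about a binary hidden state
  return flip(state ? 0.8 : 0.2);
};

var updateBelief = function(belief, observation, delay, bound) {
  return Infer({ model() {
    var state = sample(belief);
    var predictedObservation = observationModel(state);
    // Before the myopic cutoff: a normal Bayesian update.
    // After the cutoff: the observation is ignored (condition on true).
    condition(delay < bound ? predictedObservation === observation : true);
    return state;
  }});
};

var prior = Infer({ model() { return flip(0.5); }});
var bound = 1;
print('P(state | obs, delay=0 < bound): ' +
      Math.exp(updateBelief(prior, true, 0, bound).score(true)));
print('P(state | obs ignored, delay=2 >= bound): ' +
      Math.exp(updateBelief(prior, true, 2, bound).score(true)));
~~~~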
"2017-03-19T18:54:16"
[ "Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan" ]
[]
5c-myopic.md
f9925fa4aa8c50448d99bfdb6889ffa9
Modeling Agents with Probabilistic Programs
https://agentmodels.org/chapters/3a-mdp.html
agentmodels
markdown
--- layout: chapter title: "Sequential decision problems: MDPs" description: Markov Decision Processes, efficient planning with dynamic programming. --- ## Introduction The [previous chapter](/chapters/3-agents-as-programs.html) introduced agent models for solving simple, one-shot decision problems. The next few sections introduce *sequential* problems, where an agent's choice of action *now* depends on the actions they will choose in the future. As in game theory, the decision maker must coordinate with another rational agent. But in sequential decision problems, that rational agent is their future self. As a simple illustration of a sequential decision problem, suppose that an agent, Bob, is looking for a place to eat. Bob gets out of work in a particular location (indicated below by the blue circle). He knows the streets and the restaurants nearby. His decision problem is to take a sequence of actions such that (a) he eats at a restaurant he likes and (b) he does not spend too much time walking. Here is a visualization of the street layout. The labels refer to different types of restaurants: a chain selling Donuts, a Vegetarian Salad Bar and a Noodle Shop. ~~~~ var ___ = ' '; var DN = { name: 'Donut N' }; var DS = { name: 'Donut S' }; var V = { name: 'Veg' }; var N = { name: 'Noodle' }; var grid = [ ['#', '#', '#', '#', V , '#'], ['#', '#', '#', ___, ___, ___], ['#', '#', DN , ___, '#', ___], ['#', '#', '#', ___, '#', ___], ['#', '#', '#', ___, ___, ___], ['#', '#', '#', ___, '#', N ], [___, ___, ___, ___, '#', '#'], [DS , '#', '#', ___, '#', '#'] ]; var mdp = makeGridWorldMDP({ grid, start: [3, 1] }); viz.gridworld(mdp.world, { trajectory : [mdp.startState] }); ~~~~ <a id="mdp"></a> ## Markov Decision Processes: Definition We represent Bob's decision problem as a Markov Decision Process (MDP) and, more specifically, as a discrete "Gridworld" environment. An MDP is a tuple $$ \left\langle S,A(s),T(s,a),U(s,a) \right\rangle$$, including the *states*, the *actions* in each state, the *transition function* that maps state-action pairs to successor states, and the *utility* or *reward* function. In our example, the states $$S$$ are Bob's locations on the grid. At each state, Bob selects an action $$a \in \{ \text{up}, \text{down}, \text{left}, \text{right} \} $$, which moves Bob around the grid (according to transition function $$T$$). In this example we assume that Bob's actions, as well as the transitions and utilities, are all deterministic. However, our approach generalizes to noisy actions, stochastic transitions and stochastic utilities. As with the one-shot decisions of the previous chapter, the agent in an MDP will choose actions that *maximize expected utility*. This depends on the total utility of the *sequence* of states that the agent visits. Formally, let $$EU_{s}[a]$$ be the expected (total) utility of action $$a$$ in state $$s$$. The agent's choice is a softmax function of this expected utility: $$ C(a; s) \propto e^{\alpha EU_{s}[a]} $$ The expected utility depends on both immediate utility and, recursively, on future expected utility: <a id="recursion">**Expected Utility Recursion**</a>: $$ EU_{s}[a] = U(s, a) + \mathbb{E}_{s', a'}(EU_{s'}[a']) $$ <br> with the next state $$s' \sim T(s,a)$$ and $$a' \sim C(s')$$. The decision problem ends either when a *terminal* state is reached or when the time-horizon is reached. (In the next few chapters the time-horizon will always be finite). 
The intuition to keep in mind for solving MDPs is that the expected utility propagates backwards from future states to the current action. If a high utility state can be reached by a sequence of actions starting from action $$a$$, then action $$a$$ will have high expected utility -- *provided* that the sequence of actions is taken with high probability and there are no low utility steps along the way. ## Markov Decision Processes: Implementation The recursive decision rule for MDP agents can be directly translated into WebPPL. The `act` function takes the agent's state as input, evaluates the expectation of actions in that state, and returns a softmax distribution over actions. The expected utility of actions is computed by a separate function `expectedUtility`. Since an action's expected utility depends on future actions, `expectedUtility` calls `act` in a mutual recursion, bottoming out when a terminal state is reached or when time runs out. We illustrate this "MDP agent" on a simple MDP: ### Integer Line MDP - **States**: Points on the integer line (e.g -1, 0, 1, 2). - **Actions/transitions**: Actions "left", "right" and "stay" move the agent deterministically along the line in either direction. - **Utility**: The utility is $$1$$ for the state corresponding to the integer $$3$$ and is $$0$$ otherwise. Here is a WebPPL agent that starts at the origin (`state === 0`) and that takes a first step (to the right): ~~~~ var transition = function(state, action) { return state + action; }; var utility = function(state) { if (state === 3) { return 1; } else { return 0; } }; var makeAgent = function() { var act = function(state, timeLeft) { return Infer({ model() { var action = uniformDraw([-1, 0, 1]); var eu = expectedUtility(state, action, timeLeft); factor(100 * eu); return action; }}); }; var expectedUtility = function(state, action, timeLeft){ var u = utility(state, action); var newTimeLeft = timeLeft - 1; if (newTimeLeft === 0){ return u; } else { return u + expectation(Infer({ model() { var nextState = transition(state, action); var nextAction = sample(act(nextState, newTimeLeft)); return expectedUtility(nextState, nextAction, newTimeLeft); }})); } }; return { act }; } var act = makeAgent().act; var startState = 0; var totalTime = 4; // Agent's move '-1' means 'left', '0' means 'stay', '1' means 'right' print("Agent's action: " + sample(act(startState, totalTime))); ~~~~ This code computes the agent's initial action, given that the agent will get to take four actions in total. 
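Since `act` returns a distribution rather than a single action, we can also inspect the agent's entire softmax action distribution instead of sampling from it. The following codebox is a small variation on the one above: it reuses the same definitions verbatim and only changes the final lines.

~~~~
///fold: transition, utility and makeAgent exactly as in the codebox above
var transition = function(state, action) {
  return state + action;
};

var utility = function(state) {
  if (state === 3) {
    return 1;
  } else {
    return 0;
  }
};

var makeAgent = function() {
  var act = function(state, timeLeft) {
    return Infer({ model() {
      var action = uniformDraw([-1, 0, 1]);
      var eu = expectedUtility(state, action, timeLeft);
      factor(100 * eu);
      return action;
    }});
  };

  var expectedUtility = function(state, action, timeLeft) {
    var u = utility(state, action);
    var newTimeLeft = timeLeft - 1;
    if (newTimeLeft === 0) {
      return u;
    } else {
      return u + expectation(Infer({ model() {
        var nextState = transition(state, action);
        var nextAction = sample(act(nextState, newTimeLeft));
        return expectedUtility(nextState, nextAction, newTimeLeft);
      }}));
    }
  };

  return { act };
};
///

var act = makeAgent().act;
var startState = 0;
var totalTime = 4;

// Nearly all probability mass is on moving right (action 1), since only
// repeated right moves reach the rewarding state 3 in time.
viz(act(startState, totalTime));
~~~~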
To simulate the agent's entire trajectory, we add a third function `simulate`, which updates and stores the world state in response to the agent's actions: ~~~~ var transition = function(state, action) { return state + action; }; var utility = function(state) { if (state === 3) { return 1; } else { return 0; } }; var makeAgent = function() { var act = function(state, timeLeft) { return Infer({ model() { var action = uniformDraw([-1, 0, 1]); var eu = expectedUtility(state, action, timeLeft); factor(100 * eu); return action; }}); }; var expectedUtility = function(state, action, timeLeft) { var u = utility(state, action); var newTimeLeft = timeLeft - 1; if (newTimeLeft === 0) { return u; } else { return u + expectation(Infer({ model() { var nextState = transition(state, action); var nextAction = sample(act(nextState, newTimeLeft)); return expectedUtility(nextState, nextAction, newTimeLeft); }})); } }; return { act }; } var act = makeAgent().act; var simulate = function(state, timeLeft){ if (timeLeft === 0){ return []; } else { var action = sample(act(state, timeLeft)); var nextState = transition(state, action); return [state].concat(simulate(nextState, timeLeft - 1)) } }; var startState = 0; var totalTime = 4; print("Agent's trajectory: " + simulate(startState, totalTime)); ~~~~ >**Exercise**: Change the world such that it is a loop, i.e. moving right from state `3` moves to state `0`, and moving left from state `0` moves to state `3`. How does this change the agent's sequence of actions? >**Exercise**: Change the agent's action space such that the agent can also move two steps at a time. How does this change the agent's sequence of actions? >**Exercise**: Change the agent's utility function such that the agent moves as far as possible to the right, given its available total time. The `expectedUtility` and `simulate` functions are similar. The `expectedUtility` function includes the agent's own (*subjective*) simulation of the future distribution on states. In the case of an MDP and optimal agent, the agent's simulation is identical to the world simulator. In later chapters, we describe agents whose subjective simulations differ from the world simulator. These agents either have inaccurate models of their own future choices or inaccurate models of the world. We already mentioned the mutual recursion between `act` and `expectedUtility`. What does this recursion look like if we unroll it? In this example we get a tree that expands until `timeLeft` reaches zero. The root is the starting state (`startState === 0`) and this branches into three successor states (`-1`, `0`, `1`). 
This leads to an exponential blow-up in the runtime of a single action (which depends on how long into the future the agent plans): ~~~~ ///fold: transition, utility, makeAgent, act, and simulate as above var transition = function(state, action) { return state + action; }; var utility = function(state) { if (state === 3) { return 1; } else { return 0; } }; var makeAgent = function() { var act = function(state, timeLeft) { return Infer({ model() { var action = uniformDraw([-1, 0, 1]); var eu = expectedUtility(state, action, timeLeft); factor(100 * eu); return action; }}); }; var expectedUtility = function(state, action, timeLeft) { var u = utility(state, action); var newTimeLeft = timeLeft - 1; if (newTimeLeft === 0) { return u; } else { return u + expectation(Infer({ model() { var nextState = transition(state, action); var nextAction = sample(act(nextState, newTimeLeft)); return expectedUtility(nextState, nextAction, newTimeLeft); }})); } }; return { act }; } var act = makeAgent().act; var simulate = function(state, timeLeft){ if (timeLeft === 0){ return []; } else { var action = sample(act(state, timeLeft)); var nextState = transition(state, action); return [state].concat(simulate(nextState, timeLeft - 1)) } }; /// var startState = 0; var getRuntime = function(totalTime) { return timeit(function() { return act(startState, totalTime); }).runtimeInMilliseconds.toPrecision(4); }; var numSteps = [3, 4, 5, 6, 7]; var runtimes = map(getRuntime, numSteps); print('Runtime in ms for for a given number of steps: \n') print(_.zipObject(numSteps, runtimes)); viz.bar(numSteps, runtimes); ~~~~ Most of this computation is unnecessary. If the agent starts at `state === 0`, there are three ways the agent could be at `state === 0` again after two steps: either the agent stays put twice or the agent goes one step away and then returns. The code above computes `agent(0, totalTime-2)` three times, while it only needs to be computed once. This problem can be resolved by *memoization*, which stores the results of a function call for re-use when the function is called again on the same input. This use of memoization results in a runtime that is polynomial in the number of states and the total time. <!-- We explore the efficiency of these algorithms in more detail in Section VI. --> In WebPPL, we use the higher-order function `dp.cache` to memoize the `act` and `expectedUtility` functions: ~~~~ ///fold: transition, utility and makeAgent functions as above, but... 
// ...with `act` and `expectedUtility` wrapped in `dp.cache` var transition = function(state, action) { return state + action; }; var utility = function(state) { if (state === 3) { return 1; } else { return 0; } }; var makeAgent = function() { var act = dp.cache(function(state, timeLeft) { return Infer({ model() { var action = uniformDraw([-1, 0, 1]); var eu = expectedUtility(state, action, timeLeft); factor(100 * eu); return action; }}); }); var expectedUtility = dp.cache(function(state, action, timeLeft) { var u = utility(state, action); var newTimeLeft = timeLeft - 1; if (newTimeLeft === 0) { return u; } else { return u + expectation(Infer({ model() { var nextState = transition(state, action); var nextAction = sample(act(nextState, newTimeLeft)); return expectedUtility(nextState, nextAction, newTimeLeft); }})); } }); return { act }; } var act = makeAgent().act; var simulate = function(state, timeLeft){ if (timeLeft === 0){ return []; } else { var action = sample(act(state, timeLeft)); var nextState = transition(state, action); return [state].concat(simulate(nextState, timeLeft - 1)) } }; /// var startState = 0; var getRuntime = function(totalTime) { return timeit(function() { return act(startState, totalTime); }).runtimeInMilliseconds.toPrecision(4); }; var numSteps = [3, 4, 5, 6, 7]; var runtimes = map(getRuntime, numSteps); print('WITH MEMOIZATION \n'); print('Runtime in ms for for a given number of steps: \n') print(_.zipObject(numSteps, runtimes)); viz.bar(numSteps, runtimes) ~~~~ >**Exercise**: Could we also memoize `simulate`? Why or why not? <a id='restaurant_choice'></a> ## Choosing restaurants in Gridworld The agent model above that includes memoization allows us to solve Bob's "Restaurant Choice" problem efficiently. We extend the agent model above by adding a `terminateAfterAction` to certain states to halt simulations when the agent reaches these states. For the Restaurant Choice problem, the restaurants are assumed to be terminal states. After computing the agent's trajectory, we use the [webppl-agents library](https://github.com/agentmodels/webppl-agents) to animate it. <!-- TODO try to simplify the code above or explain a bit more about how webppl-agents and gridworld stuff works --> ~~~~ ///fold: Restaurant constants, tableToUtilityFunction var ___ = ' '; var DN = { name : 'Donut N' }; var DS = { name : 'Donut S' }; var V = { name : 'Veg' }; var N = { name : 'Noodle' }; var tableToUtilityFunction = function(table, feature) { return function(state, action) { var stateFeatureName = feature(state).name; return stateFeatureName ? 
table[stateFeatureName] : table.timeCost; }; }; /// // Construct world var grid = [ ['#', '#', '#', '#', V , '#'], ['#', '#', '#', ___, ___, ___], ['#', '#', DN , ___, '#', ___], ['#', '#', '#', ___, '#', ___], ['#', '#', '#', ___, ___, ___], ['#', '#', '#', ___, '#', N ], [___, ___, ___, ___, '#', '#'], [DS , '#', '#', ___, '#', '#'] ]; var mdp = makeGridWorldMDP({ grid, start: [3, 1], totalTime: 9 }); var world = mdp.world; var transition = world.transition; var stateToActions = world.stateToActions; // Construct utility function var utilityTable = { 'Donut S': 1, 'Donut N': 1, 'Veg': 3, 'Noodle': 2, 'timeCost': -0.1 }; var utility = tableToUtilityFunction(utilityTable, world.feature); // Construct agent var makeAgent = function() { var act = dp.cache(function(state) { return Infer({ model() { var action = uniformDraw(stateToActions(state)); var eu = expectedUtility(state, action); factor(100 * eu); return action; }}); }); var expectedUtility = dp.cache(function(state, action){ var u = utility(state, action); if (state.terminateAfterAction){ return u; } else { return u + expectation(Infer({ model() { var nextState = transition(state, action); var nextAction = sample(act(nextState)); return expectedUtility(nextState, nextAction); }})); } }); return { act }; }; var act = makeAgent().act; // Generate and draw a trajectory var simulate = function(state) { var action = sample(act(state)); var nextState = transition(state, action); var out = [state, action]; if (state.terminateAfterAction) { return [out]; } else { return [out].concat(simulate(nextState)); } }; var trajectory = simulate(mdp.startState); viz.gridworld(world, { trajectory: map(first, trajectory) }); ~~~~ >**Exercise**: Change the utility table such that the agent goes to `Donut S`. What ways are there to accomplish this outcome? ### Noisy agents, stochastic environments This section looked at two MDPs that were essentially deterministic. Part of the difficulty of solving MDPs is that actions, rewards and transitions can be stochastic. The [next chapter](/chapters/3b-mdp-gridworld.html) explores both noisy agents and stochastic gridworld environments.
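As a closing aside, it may help to see what the `tableToUtilityFunction` helper from the fold above actually does, separately from Gridworld. The sketch below uses a stripped-down, hypothetical `feature` function and made-up state names; in the codebox above the real feature function comes from the `webppl-agents` world object.

~~~~
// Illustration of tableToUtilityFunction from the codebox above,
// with a fake feature function standing in for world.feature.
var tableToUtilityFunction = function(table, feature) {
  return function(state, action) {
    var stateFeatureName = feature(state).name;
    return stateFeatureName ? table[stateFeatureName] : table.timeCost;
  };
};

// Hypothetical feature function: one state is labeled as the Veg restaurant
var feature = function(state) {
  return state === 'vegCell' ? { name: 'Veg' } : {};
};

var utility = tableToUtilityFunction(
  { 'Donut S': 1, 'Donut N': 1, 'Veg': 3, 'Noodle': 2, 'timeCost': -0.1 },
  feature);

print(utility('vegCell', 'up'));    // 3: utility of the Veg restaurant
print(utility('streetCell', 'up')); // -0.1: time cost of an ordinary cell
~~~~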
"2017-06-16T23:10:13"
[ "Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan" ]
[]
3a-mdp.md
44d01d6d6c9feba874f40b861e6cc502
Modeling Agents with Probabilistic Programs
https://agentmodels.org/chapters/6a-inference-dp.html
agentmodels
markdown
--- layout: chapter title: Dynamic programming description: Exact enumeration of generative model computations + caching. status: stub is_section: false hidden: true ---
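Although this chapter is a stub, the pattern it names -- exact enumeration plus caching -- already appears throughout the tutorial. Here is a minimal sketch of that pattern; the toy action set, utility, and state name are assumptions made for illustration, not content from this chapter.

~~~~
// Illustrative sketch: the action set, utility, and state are assumed for this example.
// Infer enumerates all executions of this small discrete model exactly, and dp.cache
// (from the webppl-dp package used elsewhere in this tutorial) memoizes the resulting
// distribution so that repeated subproblems, as in backward induction, are solved once.
var utility = function(action) {
  return (action === 'a') ? 1 : 0;
};

var act = dp.cache(function(state) {
  return Infer({ model() {
    var action = uniformDraw(['a', 'b']);
    factor(100 * utility(action));
    return action;
  }});
});

print(act('anyState'));
~~~~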
"2016-03-09T21:34:03"
[ "Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan" ]
[]
6a-inference-dp.md
d5e89ed22367f9ddf80e6ecc23bc7b71
Modeling Agents with Probabilistic Programs
https://agentmodels.org/chapters/3d-reinforcement-learning.html
agentmodels
markdown
--- layout: chapter title: "Reinforcement Learning to Learn MDPs" description: RL for Bandits, Thompson Sampling for learning MDPs. --- ## Introduction Previous chapters assumed that the agent already knew the structure of the environment. In MDPs, the agent knows everything about the environment and just needs to compute a good plan. In POMDPs, the agent is ignorant of some hidden state but knows how the environment works *given* this hidden state. Reinforcement Learning (RL) methods apply when the agent doesn't know the structure of the environment. For example, suppose the agent faces an unknown MDP. Provided the agent observes the reward/utility of states, RL methods will eventually converge on the optimal policy for the MDP. That is, RL eventually learns the same policy that an agent with full knowledge of the MDP would compute. RL has been one of the key tools behind recent major breakthroughs in AI, such as defeating humans at Go refp:silver2016mastering and learning to play videogames from only pixel input refp:mnih2015human. This chapter applies RL to learning discrete MDPs. It's possible to generalize RL techniques to continuous state and action spaces and also to learning POMDPs refp:jaderberg2016reinforcement but that's beyond the scope of this tutorial. ## Reinforcement Learning for Bandits The previous chapter <a href="/chapters/3c-pomdp.html#bandits">introduced</a> the Multi-Arm Bandit problem. We computed the Bayesian optimal solution to Bandit problems by treating them as POMDPs. Here we apply Reinforcement Learning to Bandits. RL agents won't perform optimally but they often rapidly converge to the best arm, and RL techniques are highly scalable and simple to implement. (In Bandits the transition structure of the MDP is trivial and known; only the arm rewards are unknown. So Bandits do not showcase the ability of RL to learn a good policy in a complex unknown MDP. We discuss more general RL techniques below.) Outside of this chapter, we use the term "utility" (e.g. in the <a href="/chapters/3a-mdp.html#mdp">definition</a> of an MDP) rather than "reward". This chapter follows the convention in Reinforcement Learning of using "reward". ### Softmax Greedy Agent This section introduces an RL agent specialized to Bandits: a "greedy" agent with softmax action noise. The Softmax Greedy agent updates beliefs about the hidden state (the expected rewards for the arms) using Bayesian updates. Yet instead of making sequential plans that balance exploration (e.g. making informative observations) with exploitation (gaining high reward), the Greedy agent takes the action with highest *immediate* expected return[^greedy] (up to softmax noise). We measure the agent's performance on Bernoulli-distributed Bandits by computing the *cumulative regret* over time. The regret for an action is the difference in expected returns between the action and the objectively best action[^regret]. In the codebox below, the arms have parameter values ("coin-weights") of $$[0.5,0.6]$$ and there are 500 Bandit trials. [^greedy]: The standard Epsilon/Softmax Greedy agent from the Bandit literature maintains point estimates for the expected rewards of the arms. In WebPPL it's natural to use distributions instead. In a later chapter, we will implement a more general Greedy/Myopic agent by extending the POMDP agent. [^regret]: The "regret" is a standard Frequentist metric for performance. Bayesian metrics, which take into account the agent's priors, are beyond the scope of this chapter.
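As a quick worked illustration of this definition: with coin-weights $$[0.5, 0.6]$$ the objectively best action is arm 1, so pulling arm 1 incurs zero regret while pulling arm 0 incurs regret $$0.6 - 0.5 = 0.1$$. An agent that pulled arm 0 on, say, 100 of the 500 trials would therefore accumulate a cumulative regret of $$100 \times 0.1 = 10$$.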
~~~~ ///fold: var cumsum = function (xs) { var acf = function (n, acc) { return acc.concat( (acc.length > 0 ? acc[acc.length-1] : 0) + n); } return reduce(acf, [], xs.reverse()); } /// // Define Bandit problem // Pull arm0 or arm1 var actions = [0, 1]; // Given a state (a coin-weight p for each arm), sample reward var observeStateAction = function(state, action){ var armToCoinWeight = state; return sample(Bernoulli({p : armToCoinWeight[action]})) }; // Greedy agent for Bandits var makeGreedyBanditAgent = function(params) { var priorBelief = params.priorBelief; // Update belief about coin-weights from observed reward var updateBelief = function(belief, observation, action){ return Infer({ model() { var armToCoinWeight = sample(belief); condition( observation === observeStateAction(armToCoinWeight, action)) return armToCoinWeight; }}); }; // Evaluate arms by expected coin-weight var expectedReward = function(belief, action){ return expectation(Infer( { model() { var armToCoinWeight = sample(belief); return armToCoinWeight[action]; }})) } // Choose by softmax over expected reward var act = dp.cache( function(belief) { return Infer({ model() { var action = uniformDraw(actions); factor(params.alpha * expectedReward(belief, action)) return action; }}); }); return { params, act, updateBelief }; }; // Run Bandit problem var simulate = function(armToCoinWeight, totalTime, agent) { var act = agent.act; var updateBelief = agent.updateBelief; var priorBelief = agent.params.priorBelief; var sampleSequence = function(timeLeft, priorBelief, action) { var observation = (action !== 'noAction') && observeStateAction(armToCoinWeight, action); var belief = ((action === 'noAction') ? priorBelief : updateBelief(priorBelief, observation, action)); var action = sample(act(belief)); return (timeLeft === 0) ? [action] : [action].concat(sampleSequence(timeLeft-1, belief, action)); }; return sampleSequence(totalTime, priorBelief, 'noAction'); }; // Agent params var alpha = 30 var priorBelief = Infer({ model () { var p0 = uniformDraw([.1, .3, .5, .6, .7, .9]); var p1 = uniformDraw([.1, .3, .5, .6, .7, .9]); return { 0:p0, 1:p1}; } }); // Bandit params var numberTrials = 500; var armToCoinWeight = { 0: 0.5, 1: 0.6 }; var agent = makeGreedyBanditAgent({alpha, priorBelief}); var trajectory = simulate(armToCoinWeight, numberTrials, agent); // Compare to random agent var randomTrajectory = repeat( numberTrials, function(){return uniformDraw([0,1]);} ); // Compute agent performance var regret = function(arm) { var bestCoinWeight = _.max(_.values(armToCoinWeight)) return bestCoinWeight - armToCoinWeight[arm]; }; var trialToRegret = map(regret,trajectory); var trialToRegretRandom = map(regret, randomTrajectory) var ys = cumsum( trialToRegret) print('Number of trials: ' + numberTrials); print('Total regret: [GreedyAgent, RandomAgent] ' + sum(trialToRegret) + ' ' + sum(trialToRegretRandom)) print('Arms pulled: ' + trajectory); viz.line(_.range(ys.length), ys, {xLabel:'Time', yLabel:'Cumulative regret'}); ~~~~ How well does the Greedy agent do? It does best when the difference between arms is large but does well even when the arms are close. Greedy agents perform well empirically on a wide range of Bandit problems refp:kuleshov2014algorithms and if their noise decays over time they can achieve asymptotic optimality. In contrast to the optimal POMDP agent from the previous chapter, the Greedy Agent scales well in both number of arms and trials. >**Exercises**: > 1. 
Modify the code above so that it's easy to repeatedly run the same agent on the same Bandit problem. Compute the mean and standard deviation of the agent's total regret averaged over 20 episodes on the Bandit problem above. Use WebPPL's library [functions](http://docs.webppl.org/en/master/functions/arrays.html). > 2. Set the softmax noise to be low. How well does the Greedy Softmax agent do? Explain why. Keeping the noise low, modify the agent's priors to be overly "optimistic" about the expected reward of each arm (without changing the support of the prior distribution). How does this optimism change the agent's performance? Explain why. (An optimistic prior assigns a high expected reward to each arm. This idea is known as "optimism in the face of uncertainty" in the RL literature.) > 3. Modify the agent so that the softmax noise is low and the agent has a "bad" prior (i.e. one that assigns a low probability to the truth) that is not optimistic. Will the agent always learn the optimal policy (eventually)? If so, after how many trials is the agent very likely to have learned the optimal policy? (Try to answer this question without doing experiments that take a long time to run.) ### Posterior Sampling Posterior sampling (or "Thompson sampling") is the basis for another algorithm for Bandits. This algorithm generalizes to arbitrary discrete MDPs, as we show below. The Posterior-sampling agent updates beliefs using standard Bayesian updates. Before choosing an arm, it draws a sample from its posterior on the arm parameters and then chooses greedily given the sample. In Bandits, this is similar to Softmax Greedy but without the softmax parameter $$\alpha$$. >**Exercise**: > Implement Posterior Sampling for Bandits by modifying the code above. (You only need to modify the `act` function.) Compare the performance of Posterior Sampling to Softmax Greedy (using the value for $$\alpha$$ in the codebox above). You should vary the `armToCoinWeight` parameter and the number of arms. Evaluate each agent by computing the mean and standard deviation of rewards averaged over many trials. Which agent is better overall and why? <!-- TODO maybe we should include this code so casual readers can try it? --> <!-- Modified act function: var act = dp.cache( function(belief) { var armToCoinWeight = sample(belief); // sample coin-weights return Infer({ model() { var action = uniformDraw(actions); factor(1000 * armToCoinWeight[action]) // pick arm with max weight return action; }}); }); --> ----------- ## RL algorithms for MDPs The RL algorithms above are specialized to Bandits and so they aren't able to learn an arbitrary MDP. We now consider algorithms that can learn any discrete MDP. There are two kinds of RL algorithm: 1. *Model-based* algorithms learn an explicit representation of the MDP's transition and reward functions. These representations are used to compute a good policy. 2. *Model-free* algorithms do not explicitly represent or learn the transition and reward functions. Instead they explicitly represent either a value function (i.e. an estimate of the $$Q^*$$-function) or a policy. The best-known RL algorithm is [Q-learning](https://en.wikipedia.org/wiki/Q-learning), which works both for discrete MDPs and for MDPs with high-dimensional state spaces (where "function approximation" is required). Q-learning is a model-free algorithm that directly learns the expected utility/reward of each action under the optimal policy. We leave as an exercise the implementation of Q-learning in WebPPL.
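For readers attempting that exercise, the standard tabular Q-learning update is the textbook rule (stated here for reference; it is not defined elsewhere in this tutorial):

$$ Q(s,a) \leftarrow Q(s,a) + \eta \left( r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right) $$

where $$\eta$$ is a learning rate (written $$\eta$$ here to avoid clashing with the softmax parameter $$\alpha$$), $$\gamma$$ is a discount factor (which can be set to 1 for the finite-horizon problems used here), and $$(s, a, r, s')$$ is an observed transition. The agent can then act softmax-greedily with respect to its current $$Q$$ estimates, just as the Greedy agent above acts on its expected rewards.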
Due to the functional purity of WebPPL, a Bayesian version of Q-learning is more natural and in the spirit of this tutorial. See, for example, "Bayesian Q-learning" refp:dearden1998bayesian and this review of Bayesian model-free approaches refp:ghavamzadeh2015bayesian. <!-- CODEBOX: Bayesian Q-learning. Apply to gridworld where the goal is to get to the other side of the grid and maybe there are some obstacles. For small enough gridworld, POMDP agent will be quicker. --> <!-- ### Policy Gradient --> <!-- - Directly represent the policy. Stochastic function from states to actions. (Can put a prior over the params of the stochastic function. Then do variational inference (optimization) to find params that maximize score.) --> <!-- Applied to Bandits. The policy is just a multinomial probability for each arm. You run the policy. Then take a gradient step in the direction that improves the policy. (Variational approximation will be exact in this case.) Gridworld example of getting from top left to bottom right (not knowing initially where the goal state is located). You are learning a distribution over actions in these discrete locations. So you have a multinomial for each state. --> ### Posterior Sampling Reinforcement Learning (PSRL) Posterior Sampling Reinforcement Learning (PSRL) is a model-based algorithm that generalizes posterior-sampling for Bandits to discrete, finite-horizon MDPs refp:osband2016posterior. The agent is initialized with a Bayesian prior distribution on the reward function $$R$$ and transition function $$T$$. At each episode the agent proceeds as follows: > 1. Sample $$R$$ and $$T$$ (a "model") from the distribution. Compute the optimal policy for this model and follow it until the episode ends. > 2. Update the distribution on $$R$$ and $$T$$ based on observations from the episode. How does this agent efficiently balance exploration and exploitation to rapidly learn the structure of an MDP? If the agent's posterior is already concentrated on a single model, the agent will mainly "exploit". If the agent is uncertain over models, then it will sample various different models in turn. For each model, the agent will visit states with high reward on that model, and so this leads to exploration. If the states turn out not to have high reward, the agent learns this and updates its beliefs away from the model (and will rarely visit those states again). The PSRL agent is simple to implement in our framework. The Bayesian belief-updating re-uses code from the POMDP agent: $$R$$ and $$T$$ are treated as latent state, and the agent receives information about them at every state transition. Computing the optimal policy for a sampled $$R$$ and $$T$$ is equivalent to planning in an MDP and we can re-use our MDP agent code. We run the PSRL agent on Gridworld. The agent knows $$T$$ but does not know $$R$$. Reward is known to be zero everywhere but a single cell of the grid. The actual MDP is shown in Figure 1, where the time-horizon is 8 steps. The true reward function is specified by the variable `trueLatentReward` (where the order of the rows is the inverse of the displayed grid). The display shows the agent's trajectory on each episode (where the number of episodes is set to 10). <img src="/assets/img/3d-gridworld.png" alt="gridworld ground-truth" style="width: 400px;"/> **Figure 1:** True latent reward for Gridworld below. Agent receives reward 1 in the cell marked "G" and zero elsewhere.
~~~~ ///fold: // Construct Gridworld (transitions but not rewards) var ___ = ' '; var grid = [ [ ___, ___, '#', ___], [ ___, ___, ___, ___], [ '#', ___, '#', '#'], [ ___, ___, ___, ___] ]; var pomdp = makeGridWorldPOMDP({ grid, start: [0, 0], totalTime: 8, transitionNoiseProbability: .1 }); var transition = pomdp.transition var actions = ['l', 'r', 'u', 'd']; var utility = function(state, action) { var loc = state.manifestState.loc; var r = state.latentState.rewardGrid[loc[0]][loc[1]]; return r; }; // Helper function to generate agent prior var getOneHotVector = function(n, i) { if (n==0) { return []; } else { var e = 1*(i==0); return [e].concat(getOneHotVector(n-1, i-1)); } }; /// var observeState = function(state) { return utility(state); }; var makePSRLAgent = function(params, pomdp) { var utility = params.utility; // belief updating: identical to POMDP agent from Chapter 3c var updateBelief = function(belief, observation, action){ return Infer({ model() { var state = sample(belief); var predictedNextState = transition(state, action); var predictedObservation = observeState(predictedNextState); condition(_.isEqual(predictedObservation, observation)); return predictedNextState; }}); }; // this is the MDP agent from Chapter 3a var act = dp.cache( function(state) { return Infer({ model() { var action = uniformDraw(actions); var eu = expectedUtility(state, action); factor(1000 * eu); return action; }}); }); var expectedUtility = dp.cache( function(state, action) { return expectation( Infer({ model() { var u = utility(state, action); if (state.manifestState.terminateAfterAction) { return u; } else { var nextState = transition(state, action); var nextAction = sample(act(nextState)); return u + expectedUtility(nextState, nextAction); } }})); }); return { params, act, expectedUtility, updateBelief }; }; var simulatePSRL = function(startState, agent, numEpisodes) { var act = agent.act; var updateBelief = agent.updateBelief; var priorBelief = agent.params.priorBelief; var runSampledModelAndUpdate = function(state, priorBelief, numEpisodesLeft) { var sampledState = sample(priorBelief); var trajectory = simulateEpisode(state, sampledState, priorBelief, 'noAction'); var newBelief = trajectory[trajectory.length-1][2]; var newBelief2 = Infer({ model() { return extend(state, {latentState : sample(newBelief).latentState }); }}); var output = [trajectory]; if (numEpisodesLeft <= 1){ return output; } else { return output.concat(runSampledModelAndUpdate(state, newBelief2, numEpisodesLeft-1)); } }; var simulateEpisode = function(state, sampledState, priorBelief, action) { var observation = observeState(state); var belief = ((action === 'noAction') ? priorBelief : updateBelief(priorBelief, observation, action)); var believedState = extend(state, { latentState : sampledState.latentState }); var action = sample(act(believedState)); var output = [[state, action, belief]]; if (state.manifestState.terminateAfterAction){ return output; } else { var nextState = transition(state, action); return output.concat(simulateEpisode(nextState, sampledState, belief, action)); } }; return runSampledModelAndUpdate(startState, priorBelief, numEpisodes); }; // Construct agent's prior. The latent state is just the reward function. // The "manifest" state is the agent's own location. 
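// How the pieces below fit together: the prior is uniform over 16 'one-hot' reward
// grids, i.e. the agent knows that exactly one cell has reward 1 but not which one.
// Each episode, PSRL samples a reward grid from its current belief, plans optimally
// for that sampled grid (re-using the MDP-style `act` above), and then updates the
// belief from the rewards actually observed along the trajectory.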
// Combine manifest (fully observed) state with prior on latent state var getPriorBelief = function(startManifestState, latentStateSampler){ return Infer({ model() { return { manifestState: startManifestState, latentState: latentStateSampler()}; }}); }; // True reward function var trueLatentReward = { rewardGrid : [ [ 0, 0, 0, 0], [ 0, 0, 0, 0], [ 0, 0, 0, 0], [ 0, 0, 0, 1] ] }; // True start state var startState = { manifestState: { loc: [0, 0], terminateAfterAction: false, timeLeft: 8 }, latentState: trueLatentReward }; // Agent prior on reward functions (*getOneHotVector* defined above fold) var latentStateSampler = function() { var flat = getOneHotVector(16, randomInteger(16)); return { rewardGrid : [ flat.slice(0,4), flat.slice(4,8), flat.slice(8,12), flat.slice(12,16) ] }; } var priorBelief = getPriorBelief(startState.manifestState, latentStateSampler); // Build agent (using *pomdp* object defined above fold) var agent = makePSRLAgent({ utility, priorBelief, alpha: 100 }, pomdp); var numEpisodes = 10 var trajectories = simulatePSRL(startState, agent, numEpisodes); var concatAll = function(list) { var inner = function (work, i) { if (i < list.length-1) { return inner(work.concat(list[i]), i+1) } else { return work; } } return inner([], 0); } var badState = [[ { manifestState : { loc : "break" } } ]]; var trajectories = map(function(t) { return t.concat(badState);}, trajectories); viz.gridworld(pomdp, {trajectory : concatAll(trajectories)}); ~~~~ <!-- TODOS: <br> Gridworld maze example is unknown transition function. So requires a change to code below (which assumes same transitions for agent and simulate function. Clumpy reward uses same model below but has rewards be correlated. Should be easy to implement a simple version of this. Visualization should depict restaurants (which have non-zero rewards). Gridworld maze: Agent is in a maze in perfect darkness. Each square could be wall or not with even probability. Agent has to learn how to escape. Maze could be fairly big but want a fairly short way out. Model for T. Clumpy reward model. Gridworld with hot and cold regions that clump. Agent starts in a random location. If you assume clumpiness, then agent will go first to unvisited states in good clumps. Otherwise, when they start in new places they'll explore fairly randomly. Could we make a realistic example like this? (Once you find some bad spots in one region. You don't explore anywhere near there for a long time. That might be interesting to look at. Could have some really cold regions near the agent. Simple version: agent starts in the middle. Has enough time to go to a bunch of different regions. Regions are clumped in terms of reward. Could think of this a city, cells with reward are food places. There are tourist areas with lots of bad food, foodie areas with good food, and some places with not much food. Agent without clumping tries some bad regions first and keeps going back to try all the places in those regions. Agent with clumping tries them once and then avoids. [Problem is how to implement the prior. Could use enumeration but keep number of possibilities fairly small. Could use some approximate method and just do a batch update at the end of each episode. That will require some extra code for the batch update.] 
--> ---------- <!-- Table: Structure given / unknown MDP POMDP KNOWN Planning (Solve exactly DP) POMDP solve (Belief MDP) LEARNED POMDP solver (exact Bayesian), RL POMDP solve --> <!-- ### RL and Inferring Preferences Most IRL is actually inverse planning in an MDP. Assumption is that it's an MDP and human already knows R and T. Paper on IRL for POMDPs: assume agent knows POMDP structure. Much harder inference problem. We have discussion of biases that humans have: hyperbolic discounting, bounded planning. These are relevant even if human knows structure of world and is just trying to plan. But often humans don't know structure of world. Better to think of world as RL problem where MDP or POMDP also is being learned. Problem is that there are many RL algorithms, they generally involve lots of randomness or arbitrary parameters. So hard to make precise predictions. Need to coarsen. Show example of this with Thompson sampling for Bandits. Could discuss interactive RL. Multi-agent case. It's beyond scope of modeling. --> ### Footnotes
"2018-06-24T18:01:20"
[ "Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan" ]
[]
3d-reinforcement-learning.md
25ae0e08efcf1ce08fbf7423fd4fdb65
Modeling Agents with Probabilistic Programs
https://agentmodels.org/chapters/5a-time-inconsistency.html
agentmodels
markdown
--- layout: chapter title: "Time inconsistency I" description: Exponential vs. hyperbolic discounting, Naive vs. Sophisticated planning. --- ### Introduction Time inconsistency is part of everyday human experience. In the night you wish to rise early; in the morning you prefer to sleep in. There is an inconsistency between what you prefer your future self to do and what your future self prefers to do. Foreseeing this inconsistency, you take actions in the night to bind your future self to get up. These range from setting an alarm clock to arranging for someone to drag you out of bed. This pattern is not limited to attempts to rise early. People make failed resolutions to attend a gym regularly. Students procrastinate on writing papers, planning to start early but delaying until the last minute. Empirical studies have highlighted the practical import of time inconsistency both to completing online courses refp:patterson2015can and to watching highbrow movies refp:milkman2009highbrow. Time inconsistency has been used to explain not just quotidian laziness but also addiction, procrastination, and impulsive behavior, as well as an array of "pre-commitment" behaviors refp:ainslie2001breakdown. Lab experiments on time inconsistency often use simple quantitative questions such as: >**Question**: Would you prefer to get $100 after 30 days or $110 after 31 days? Most people prefer the $110. But a significant proportion of people reverse their earlier preference once the 30th day comes around and they contemplate getting $100 immediately. How can this time inconsistency be captured by a formal model? ### Time inconsistency due to hyperbolic discounting This chapter models time inconsistency as resulting from *hyperbolic discounting*. The idea is that humans prefer receiving the same rewards sooner rather than later and the *discount function* describing this quantitatively is a hyperbola. Before describing the hyperbolic model, we provide some background on time discounting and incorporate it into our previous agent models. #### Exponential discounting for optimal agents The examples of decision problems in previous chapters have a *known*, *finite* time horizon. Yet there are practical decision problems that are better modeled as having an *unbounded* or *infinite* time horizon. For example, if someone tries to travel home after a vacation, there is no obvious time limit for their task. The same holds for a person saving or investing for the long-term. Generalizing the previous agent models to the unbounded case faces a difficulty. The *infinite* summed expected utility of an action will (generally) not converge. The standard solution is to model the agent as maximizing the *discounted* expected utility, where the discount function is exponential. This makes the infinite sums converge (for bounded utilities and a per-step discount factor $$0 \leq \delta < 1$$, the discounted sum is at most $$U_{\max}/(1-\delta)$$) and results in an agent model that is analytically and computationally tractable. Aside from mathematical convenience, exponential discounting might also be an accurate model of the "time preference" of certain rational agents[^justification]. Exponential discounting represents a (consistent) preference for good things happening sooner rather than later[^exponential]. [^justification]: People care about a range of things: e.g. the food they eat daily, their careers, their families, the progress of science, the preservation of the earth's environment. Many have argued that humans have a time preference. So models that infer human preferences from behavior should be able to represent this time preference.
[^exponential]: There are arguments that exponential discounting is the uniquely rational mode of discounting for agents with time preference. The seminal paper by refp:strotz1955myopia proves that, "in the continuous time setting, the only discount function such that the optimal policy doesn't vary in time is exponential discounting". In the discrete-time setting, refp:lattimore2014general prove the same result, as well as discussing optimal strategies for sophisticated time-inconsistent agents. What are the effects of exponential discounting? We return to the deterministic Bandit problem from Chapter III.3 (see Figure 1). Suppose a person decides every year where to go on a skiing vacation. There is a fixed set of options {Tahoe, Chile, Switzerland} and a finite time horizon[^bandit]. The person discounts exponentially and so they prefer a good vacation now to an even better one in the future. This means they are less likely to *explore*, since exploration takes time to pay off. <img src="/assets/img/5a-irl-bandit.png" alt="diagram" style="width: 600px;"/> >**Figure 1**: Deterministic Bandit problem. The agent tries different arms/destinations and receives rewards. The reward for Tahoe is known but Chile and Switzerland are both unknown. The actual best option is Tahoe. <br> [^bandit]: As noted above, exponential discounting is usually combined with an *unbounded* time horizon. However, if a human makes a series of decisions over a long time scale, then it makes sense to include their time preference. For this particular example, imagine the person is looking for the best skiing or sports facilities and doesn't care about variety. There could be a known finite time horizon because at some age they are too old for adventurous skiing. <!-- exponential_discount_vs_optimal_bandits --> ~~~~ ///fold: var baseParams = { noDelays: false, discount: 0, sophisticatedOrNaive: 'naive' }; var armToPlace = function(arm){ return { 0: "Tahoe", 1: "Chile", 2: "Switzerland" }[arm]; }; var display = function(trajectory) { return map(armToPlace, most(trajectory)); }; /// // Arms are skiing destinations: // 0: "Tahoe", 1: "Chile", 2: "Switzerland" // Actual utility for each destination var trueArmToPrizeDist = { 0: Delta({ v: 1 }), 1: Delta({ v: 0 }), 2: Delta({ v: 0.5 }) }; // Constuct Bandit world var numberOfTrials = 10; var bandit = makeBanditPOMDP({ numberOfArms: 3, armToPrizeDist: trueArmToPrizeDist, numberOfTrials, numericalPrizes: true }); var world = bandit.world; var start = bandit.startState; // Agent prior for utility of each destination var priorBelief = Infer({ model() { var armToPrizeDist = { // Tahoe has known utility 1: 0: Delta({ v: 1 }), // Chile has high variance: 1: categorical([0.9, 0.1], [Delta({ v: 0 }), Delta({ v: 5 })]), // Switzerland has high expected value: 2: uniformDraw([Delta({ v: 0.5 }), Delta({ v: 1.5 })]) }; return makeBanditStartState(numberOfTrials, armToPrizeDist); }}); var discountFunction = function(delay) { return Math.pow(0.5, delay); }; var exponentialParams = extend(baseParams, { discountFunction, priorBelief }); var exponentialAgent = makeBanditAgent(exponentialParams, bandit, 'beliefDelay'); var exponentialTrajectory = simulatePOMDP(start, world, exponentialAgent, 'actions'); var optimalParams = extend(baseParams, { priorBelief }); var optimalAgent = makeBanditAgent(optimalParams, bandit, 'belief'); var optimalTrajectory = simulatePOMDP(start, world, optimalAgent, 'actions'); print('exponential discounting trajectory: ' + display(exponentialTrajectory)); 
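// Per the discussion above, the exponentially discounting agent has less incentive to
// explore the unknown destinations (exploration only pays off later), so its trajectory
// will typically stick to known options more than the optimal agent's does.
// (Both agents' trajectories are stochastic, so individual runs can differ.)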
print('\noptimal trajectory: ' + display(optimalTrajectory)); ~~~~ #### Discounting and time inconsistency Exponential discounting is typically thought of as a *relative* time preference. A fixed reward will be discounted by a factor of $$\delta^{30}$$ (for a per-day discount factor $$0<\delta<1$$) if received on Day 30 rather than Day 0. On Day 30, the same reward is discounted by $$\delta^{30}$$ if received on Day 60 and not at all if received on Day 30. This relative time preference is "inconsistent" in a superficial sense. With $$\delta=0.95$$ per day (and linear utility in money), $100 after 30 days is worth $21 and $110 after 31 days is worth $22. Yet when the 30th day arrives, they are worth $100 and about $105 respectively[^inconsistent]! The key point is that whereas these *magnitudes* have changed, the *ratios* stay fixed. Indeed, the ratio between a pair of outcomes stays fixed regardless of when the exponential discounter evaluates them. In summary: while a discounting agent evaluates two prospects in the future as worth little compared to similar near-term prospects, the agent agrees with their future self about which of the two future prospects is better. [^inconsistent]: One can think of exponential discounting in a non-relative way by choosing a fixed starting time in the past (e.g. the agent's birth) and discounting everything relative to that. This results in an agent with a preference to travel back in time to get higher rewards! Any smooth discount function other than an exponential will result in preferences that reverse over time refp:strotz1955myopia. So it's not so surprising that untutored humans should be subject to such reversals[^reversal]. Various functional forms for human discounting have been explored in the literature. We describe the *hyperbolic discounting* model refp:ainslie2001breakdown because it is simple and well-studied. Other functional forms can be substituted into our models. [^reversal]: Without computational aids, human representations of discrete and continuous quantities (including durations in time and dollar values) are systematically inaccurate. See refp:dehaene2011number. Hyperbolic and exponential discounting curves are illustrated in Figure 2. We plot the discount factor $$D$$ as a function of time $$t$$ in days, with constants $$\delta$$ and $$k$$ controlling the slope of the function. In this example, each constant is set to 2. The exponential is: $$ D=\frac{1}{\delta^t} $$ The hyperbolic function is: $$ D=\frac{1}{1+kt} $$ The crucial difference between the curves is that the hyperbola is initially steep and then becomes almost flat, while the exponential continues to be steep. This means that exponential discounting is time consistent and hyperbolic discounting is not. ~~~~ var delays = _.range(7); var expDiscount = function(delay) { return Math.pow(0.5, delay); }; var hypDiscount = function(delay) { return 1.0 / (1 + 2*delay); }; var makeExpDatum = function(delay){ return { delay, discountFactor: expDiscount(delay), discountType: 'Exponential discounting: 1/2^t' }; }; var makeHypDatum = function(delay){ return { delay, discountFactor: hypDiscount(delay), discountType: 'Hyperbolic discounting: 1/(1 + 2t)' }; }; var expData = map(makeExpDatum, delays); var hypData = map(makeHypDatum, delays); viz.line(expData.concat(hypData), { groupBy: 'discountType' }); ~~~~ >**Figure 2:** Graph comparing exponential and hyperbolic discount curves. <a id="exercise"></a> >**Exercise:** We return to our running example but with slightly different numbers.
The agent chooses between receiving $100 after 4 days or $110 after 5 days. The goal is to compute the preferences over each option for both exponential and hyperbolic discounters, using the discount curves shown in Figure 2. Compute the following: > 1. The discounted utility of the $100 and $110 rewards relative to Day 0 (i.e. how much the agent values each option when the rewards are 4 or 5 days away). >2. The discounted utility of the $100 and $110 rewards relative to Day 4 (i.e. how much each option is valued when the rewards are 0 or 1 day away). ### Time inconsistency and sequential decision problems We have shown that hyperbolic discounters have different preferences over the $100 and $110 depending on when they make the evaluation. This conflict in preferences leads to complexities in planning that don't occur in the optimal (PO)MDP agents which either discount exponentially or do not discount at all. Consider the example in the exercise <a href=#exercise>above</a> and imagine you have time inconsistent preferences. On Day 0, you write down your preference but on Day 4 you'll be free to change your mind. If you know your future self would choose the $100 immediately, you'd pay a small cost now to *pre-commit* your future self. However, if you believe your future self will share your current preferences, you won't pay this cost (and so you'll end up taking the $100). This illustrates a key distinction. Time inconsistent agents can be "Naive" or "Sophisticated": - **Naive agent**: assumes his future self shares his current time preference. For example, a Naive hyperbolic discounter assumes his far future self has a nearly flat discount curve (rather than the "steep then flat" discount curve he actually has). - **Sophisticated agent**: has the correct model of his future self's time preference. A Sophisticated hyperbolic discounter has a nearly flat discount curve for the far future but is aware that his future self does not share this discount curve. Both kinds of agents evaluate rewards differently at different times. To distinguish a hyperbolic discounter's current and future selves, we refer to the agent acting at time $$t_i$$ as the $$t_i$$-agent. A Sophisticated agent, unlike a Naive agent, has an accurate model of his future selves. The Sophisticated $$t_0$$-agent predicts the actions of the $$t$$-agents (for $$t>t_0$$) that would conflict with his preferences. To prevent these actions, the $$t_0$$-agent tries to take actions that *pre-commit* the future agents to outcomes the $$t_0$$-agent prefers[^sophisticated]. [^sophisticated]: As has been pointed out previously, there is a kind of "inter-generational" conflict between agent's future selves. If pre-commitment actions are available at time $$t_0$$, the $$t_0$$-agent does better in expectation if it is Sophisticated rather than Naive. Equivalently, the $$t_0$$-agent's future selves will do better if the agent is Naive. ### Naive and Sophisticated Agents: Gridworld Example Before describing our formal model and implementation of Naive and Sophisticated hyperbolic discounters, we illustrate their contrasting behavior using the Restaurant Choice example. We use the MDP version, where the agent has full knowledge of the locations of restaurants and of which restaurants are open. Recall the problem setup: >**Restaurant Choice**: Bob is looking for a place to eat. His decision problem is to take a sequence of actions such that (a) he eats at a restaurant he likes and (b) he does not spend too much time walking. 
The restaurant options are: the Donut Store, the Vegetarian Salad Bar, and the Noodle Shop. The Donut Store is a chain with two local branches. We assume each branch has identical utility for Bob. We abbreviate the restaurant names as "Donut South", "Donut North", "Veg" and "Noodle". The only difference from previous versions of Restaurant Choice is that restaurants now have *two* utilities. On entering a restaurant, the agent first receives the *immediate reward* (i.e. how good the food tastes) and at the next timestep receives the *delayed reward* (i.e. how good the person feels after eating it). **Exercise:** Run the codebox immediately below. Think of ways in which Naive and Sophisticated hyperbolic discounters with identical preferences (i.e. identical utilities for each restaurant) might differ for this decision problem. <!-- draw_choice --> ~~~~ ///fold: restaurant choice MDP var ___ = ' '; var DN = { name : 'Donut N' }; var DS = { name : 'Donut S' }; var V = { name : 'Veg' }; var N = { name : 'Noodle' }; var grid = [ ['#', '#', '#', '#', V , '#'], ['#', '#', '#', ___, ___, ___], ['#', '#', DN , ___, '#', ___], ['#', '#', '#', ___, '#', ___], ['#', '#', '#', ___, ___, ___], ['#', '#', '#', ___, '#', N ], [___, ___, ___, ___, '#', '#'], [DS , '#', '#', ___, '#', '#'] ]; var mdp = makeGridWorldMDP({ grid, noReverse: true, maxTimeAtRestaurant: 2, start: [3, 1], totalTime: 11 }); /// viz.gridworld(mdp.world, { trajectory: [mdp.startState] }); ~~~~ The next two codeboxes show the behavior of two hyperbolic discounters. Each agent has the same preferences and discount function. They differ only in that the first is Naive and the second is Sophisticated. <!-- draw_naive --> ~~~~ ///fold: restaurant choice MDP, naiveTrajectory var ___ = ' '; var DN = { name : 'Donut N' }; var DS = { name : 'Donut S' }; var V = { name : 'Veg' }; var N = { name : 'Noodle' }; var grid = [ ['#', '#', '#', '#', V , '#'], ['#', '#', '#', ___, ___, ___], ['#', '#', DN , ___, '#', ___], ['#', '#', '#', ___, '#', ___], ['#', '#', '#', ___, ___, ___], ['#', '#', '#', ___, '#', N ], [___, ___, ___, ___, '#', '#'], [DS , '#', '#', ___, '#', '#'] ]; var mdp = makeGridWorldMDP({ grid, noReverse: true, maxTimeAtRestaurant: 2, start: [3, 1], totalTime: 11 }); var naiveTrajectory = [ [{"loc":[3,1],"terminateAfterAction":false,"timeLeft":11},"u"], [{"loc":[3,2],"terminateAfterAction":false,"timeLeft":10,"previousLoc":[3,1]},"u"], [{"loc":[3,3],"terminateAfterAction":false,"timeLeft":9,"previousLoc":[3,2]},"u"], [{"loc":[3,4],"terminateAfterAction":false,"timeLeft":8,"previousLoc":[3,3]},"u"], [{"loc":[3,5],"terminateAfterAction":false,"timeLeft":7,"previousLoc":[3,4]},"l"], [{"loc":[2,5],"terminateAfterAction":false,"timeLeft":6,"previousLoc":[3,5],"timeAtRestaurant":0},"l"], [{"loc":[2,5],"terminateAfterAction":true,"timeLeft":6,"previousLoc":[2,5],"timeAtRestaurant":1},"l"] ]; /// viz.gridworld(mdp.world, { trajectory: naiveTrajectory }); ~~~~ <!-- draw_sophisticated --> ~~~~ ///fold: restaurant choice MDP, sophisticatedTrajectory var ___ = ' '; var DN = { name : 'Donut N' }; var DS = { name : 'Donut S' }; var V = { name : 'Veg' }; var N = { name : 'Noodle' }; var grid = [ ['#', '#', '#', '#', V , '#'], ['#', '#', '#', ___, ___, ___], ['#', '#', DN , ___, '#', ___], ['#', '#', '#', ___, '#', ___], ['#', '#', '#', ___, ___, ___], ['#', '#', '#', ___, '#', N ], [___, ___, ___, ___, '#', '#'], [DS , '#', '#', ___, '#', '#'] ]; var mdp = makeGridWorldMDP({ grid, noReverse: true, maxTimeAtRestaurant: 2, start: [3, 1], 
totalTime: 11 }); var sophisticatedTrajectory = [ [{"loc":[3,1],"terminateAfterAction":false,"timeLeft":11},"u"], [{"loc":[3,2],"terminateAfterAction":false,"timeLeft":10,"previousLoc":[3,1]},"u"], [{"loc":[3,3],"terminateAfterAction":false,"timeLeft":9,"previousLoc":[3,2]},"r"], [{"loc":[4,3],"terminateAfterAction":false,"timeLeft":8,"previousLoc":[3,3]},"r"], [{"loc":[5,3],"terminateAfterAction":false,"timeLeft":7,"previousLoc":[4,3]},"u"], [{"loc":[5,4],"terminateAfterAction":false,"timeLeft":6,"previousLoc":[5,3]},"u"], [{"loc":[5,5],"terminateAfterAction":false,"timeLeft":5,"previousLoc":[5,4]},"u"], [{"loc":[5,6],"terminateAfterAction":false,"timeLeft":4,"previousLoc":[5,5]},"l"], [{"loc":[4,6],"terminateAfterAction":false,"timeLeft":3,"previousLoc":[5,6]},"u"], [{"loc":[4,7],"terminateAfterAction":false,"timeLeft":2,"previousLoc":[4,6],"timeAtRestaurant":0},"l"], [{"loc":[4,7],"terminateAfterAction":true,"timeLeft":2,"previousLoc":[4,7],"timeAtRestaurant":1},"l"] ]; /// viz.gridworld(mdp.world, { trajectory: sophisticatedTrajectory }); ~~~~ >**Exercise:** (Try this exercise *before* reading further). Your goal is to do preference inference from the observed actions in the codeboxes above (using only a pen and paper). The discount function is the hyperbola $$D=1/(1+kt)$$, where $$t$$ is the time from the present, $$D$$ is the discount factor (to be multiplied by the utility) and $$k$$ is a positive constant. Find a single setting for the utilities and discount function that produce the behavior in both the codeboxes above. This includes utilities for the restaurants (both *immediate* and *delayed*) and for the `timeCost` (the negative utility for each additional step walked), as well as the discount constant $$k$$. Assume there is no softmax noise. ------ The Naive agent goes to Donut North, even though Donut South (which has identical utility) is closer to the agent's starting point. One possible explanation is that the Naive agent has a higher utility for Veg but gets "tempted" by Donut North on their way to Veg[^naive_path]. [^naive_path]: At the start, no restaurants can be reached quickly and so the agent's discount function is nearly flat when evaluating each one of them. This makes Veg look most attractive (given its higher overall utility). But going to Veg means getting closer to Donut North, which becomes more attractive than Veg once the agent is close to it (because of the discount function). Taking an inefficient path -- one that is dominated by another path -- is typical of time-inconsistent agents. The Sophisticated agent can accurately model what it *would* do if it ended up in location [3,5] (adjacent to Donut North). So it avoids temptation by taking the long, inefficient route to Veg. In this simple example, the Naive and Sophisticated agents each take paths that optimal time-consistent MDP agents (without softmax noise) would never take. So this is an example where a bias leads to a *systematic* deviation from optimality and behavior that is not predicted by an optimal model. In Chapter 5.3 we explore inference of preferences for time inconsistent agents. Next chapter: [Time inconsistency II](/chapters/5b-time-inconsistency.html) <br> ### Footnotes
"2019-08-24T14:52:08"
[ "Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan" ]
[]
5a-time-inconsistency.md
8adf0ba4ce94372feb4380f99a96c790
Modeling Agents with Probabilistic Programs
https://agentmodels.org/chapters/6c-inference-rl.html
agentmodels
markdown
--- layout: chapter title: Reinforcement learning techniques description: Max-margin and linear programming methods for IRL. status: stub is_section: false hidden: true --- - Could have appendix discussing Apprenticeship Learning ideas in Abbeel and Ng in more detail.
"2016-03-09T21:34:01"
[ "Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan" ]
[]
6c-inference-rl.md
End of preview.

AI Alignment Research Dataset

The AI Alignment Research Dataset is a collection of documents related to AI Alignment and Safety from various books, research papers, and alignment-related blog posts. This is a work in progress: components are still being cleaned up so that they can be updated more regularly.

Sources

Here is the list of sources along with sample contents:

Keys

All entries contain the following keys:

  • id - string of unique identifier
  • source - string of data source listed above
  • title - string of document title
  • authors - list of strings
  • text - full text of document content
  • url - string of valid link to text content
  • date_published - in UTC format

Additional keys may be available depending on the source document.

Usage

Execute the following code to download and parse the files:

from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset')

To only get the data for a specific source, pass it in as the second argument, e.g.:

from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')

Limitations and Bias

LessWrong posts are overweighted toward content on doom and existential risk, so please be careful when training or finetuning generative language models on this dataset.

Contributing

The scraper used to generate this dataset is open-sourced on GitHub and currently maintained by volunteers at StampyAI / AI Safety Info. Learn more or join us on Discord.

Rebuilding info

This README contains info about the number of rows and their features, which should be rebuilt each time the datasets change. To do so, run:

datasets-cli test ./alignment-research-dataset --save_info --all_configs

Citing the Dataset

For more information, see the paper and the LessWrong post. Please use the following citation when using the dataset:

Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2022.4338861 (2022).
