## Why models?

- Models help you think strategically: you must name the parts and identify the relationship between the parts.
- Understand the class of outcome: equilibrium, cycle, random, complex.
- Identify logical boundaries.
- Communicate more effectively.

## General model info

- Rational models assume people choose the actions that maximize a desired payoff.
- Behavioral models account for human limitations.
- Expect rationality when the stakes are high, when the behavior is repeated often enough for people to learn, when decisions are made by groups, or when the problem is easy.

- Schelling/rule-based models help us predict and simplify things.
- Decision models: only my own actions matter.
- Game models: others’ choices also matter.

## Models

### Monty Hall problem

Pick a door at random: 1/3 chance of winning. The host then reveals a door that is NOT the winning one among the two you didn't select. Should you switch to the remaining unselected door? Yes: switching wins with probability 2/3 vs. 1/3 for staying.
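A quick Monte Carlo check of the 2/3 vs. 1/3 claim (a sketch; the host rule is the standard one, and the trial count is arbitrary):

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the win rate of the switch vs. stay strategy by simulation."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)   # door hiding the prize
        pick = random.randrange(3)    # contestant's initial pick
        # Host opens a door that is neither the pick nor the prize.
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials
```

`monty_hall(True)` comes out near 0.667 and `monty_hall(False)` near 0.333.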

### Agent based models

Agents have rules or behaviors. Based on how agents individually optimize, extrapolate the general outcome.

## Studying segregation

### Schelling segregation models

*Segregation example:* people move if a certain percentage of their neighbors is not like them on some attribute (race, income, etc.)

*Calculation:* model it as a grid in which each person has 8 neighbors.

*Finding:* micro preferences translate into a bigger macro observation. If each individual household wants at least 30% of their neighbors to be like them, at the macro level you'll end up with about 70% similar neighbors at equilibrium. There will be lots of segregation.

*Terms:*

- *Exodus tip*: a person leaving causes another to move.
- *Genesis tip*: a person moving in causes another to move.

*Measuring overall segregation:* for each block of houses, look at the composition by race or economic status. Calculate each block's deviation from fully mixed (i.e., from the overall population shares in the area), add up the deviations, and divide to get the average deviation.
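One concrete version of this calculation is the standard index of dissimilarity; a sketch, assuming two groups and per-block counts:

```python
def dissimilarity_index(blocks):
    """Index of dissimilarity: 0 = fully mixed, 1 = fully segregated.
    `blocks` lists (group_a_count, group_b_count) for each block of houses."""
    total_a = sum(a for a, _ in blocks)
    total_b = sum(b for _, b in blocks)
    # Half the summed deviation between each block's share of group A
    # and its share of group B.
    return 0.5 * sum(abs(a / total_a - b / total_b) for a, b in blocks)
```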

## Studying Peer effects

### Model: Granovetter's Model

- Models the likelihood of a standing ovation. The same model can be used for uprisings, etc.
- Imagine there are:
- N individuals
- Each person j has a threshold Tj to stand
- They will also stand if enough other people stand (follow suit)

- Factors to consider
- Threshold to stand (T)
- Quality (Q)

*Rule 1:* if S > T, stand.

- S = signal received = Q + E
- Q is the quality of the performance.
- E is an "error" or "variance" term in perceived quality. Could be because of diversity!

*Rule 2:* collective influence: if more than X% of people stand, I'll stand as well.

- Insights: what would make collective action more likely?
- Higher quality
- Lower thresholds to stand (could be due to a less knowledgeable crowd)
- Larger peer effect
- More variation in the thresholds, due to diversity or error
- Celebrities: those in locations seen by more people stand first
- Big groups: higher chance of collective influence
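The two rules can be sketched as a small simulation (thresholds drawn uniformly; the parameter values are illustrative assumptions):

```python
import random

def standing_ovation(n, quality, noise, peer_share, rounds=10):
    """Rule 1: stand if perceived signal Q + E beats a personal threshold.
    Rule 2: then stand if more than peer_share of the crowd is standing."""
    thresholds = [random.uniform(0, 1) for _ in range(n)]
    standing = [quality + random.uniform(-noise, noise) > t for t in thresholds]
    for _ in range(rounds):
        share = sum(standing) / n
        standing = [s or share > peer_share for s in standing]
    return sum(standing) / n
```

With high quality the peer effect cascades to a full ovation; with low quality it never tips.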

### Game of life

*Rules:* if off, turn on if exactly 3 neighbors are on. If on, stay on if 2 or 3 neighbors are on.

- Terms
- Self-organization: patterns appear without a designer.
- Emergence: new functionalities appear.
- Logic: simple rules produce incredible phenomena.

- Insights
- Through simple rules you can create models that have all sorts of patterns: random, predictable, complex, equilibrium.
- Chaos is caused by a high lambda; substantial interdependence.
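The rules above can be sketched as one update step over a sparse grid:

```python
from collections import Counter

def life_step(live):
    """One Game of Life update; `live` is the set of (x, y) cells that are on."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Off cells turn on with exactly 3 on neighbors; on cells stay on with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

A three-cell row (the "blinker") oscillates between horizontal and vertical, one of the simple patterns the rules produce.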

### Preference aggregation

*Transitive preferences:* if A>B and B>C, then A>C.

- This holds at the individual level, but breaks down at the group level (e.g., majority voting can produce a cycle: A beats B, B beats C, and C beats A).

- Terms
- Normative: inform decisions
- Positive: explains what you see.

- Evaluating multi-criterion problems
- Qualitative: list of criteria and compare (e.g., house 1 vs. house 2)
- Quantitative: add a weight to each criterion

- Spatial choice model
- Look at the difference between your ideal point and the available choices.

## Probability basics

### Axioms

- Always between 0 and 1
- Sum of all probabilities is 1
- If A is a subset of B, then P(A) <= P(B)

### Methods

- Classical: use math to figure it out (rolling dice)
- Counting: use frequency.
- Subjective
- Model-based probabilities.

## Behavioral models

Account for biases found in WEIRD (Western, Educated, Industrialized, Rich, Democratic) populations.

- Base rate bias
- Status quo bias
- Hyperbolic discounting
- Prospect theory

When modeling these

- Start with rationality
- Account for biases
- Look at the real world to fine tune.

Some rules to consider (generally easy to model)

- Random choice
- Most direct choice
- Fixed strategy
- Tit for tat; grim trigger (nice until crossed, mean ever after)
- Gradient: slowly adjust to good outcome.
- Best response
- Mimicry

Rules generally

- Are easy to model
- Capture the main effects
- Ad-hoc
- Easy to exploit

## Types of models

- Linear
- Categorical: lump things into categories and make decisions based on those categories.
- Calculation: validate a categorization by the % reduction in variation due to the categories. This is an R-squared: compare squared differences to each category's mean against squared differences to the overall mean.
- Insight: experts are those with more categories that minimize variation by categorizing properly.
- Important: it is critical to choose the right category (e.g., Amazon is not valuable if seen as just a delivery company, but valuable if seen as an information company).
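The variance-reduction calculation can be sketched as a minimal R-squared over categories (the data shapes are assumptions):

```python
def variance_reduction(values, categories):
    """R-squared of a categorization: 1 - within-category variation / total."""
    mean = sum(values) / len(values)
    total = sum((v - mean) ** 2 for v in values)
    groups = {}
    for v, c in zip(values, categories):
        groups.setdefault(c, []).append(v)
    # Squared differences to each category's own mean.
    within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g)
                 for g in groups.values())
    return 1 - within / total
```

A categorization that perfectly separates the values scores 1; one that explains nothing scores 0.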

- Non-linear
- Big coefficients

## Tipping points

Exponential growth models are often mistaken for tipping points.

### Percolation model

- Forest fires, information percolation, mathematical activity bursts, bank and country failures.

### Tipping point classification

- Direct tip: a small action or event has a huge effect on the outcome.
- Contextual tip: change in environment
- Within class: system remains (e.g., continuously alternating, or in equilibrium)
- Between classes: system changes classes (e.g., from equilibrium to random)

### Measuring tips (Entropy and Diversity Index)

## Contagion models

### Diffusion: not a tipping point.

Formula: Wt+1 = Wt + N·c·τ·(Wt/N)·((N−Wt)/N)

- N = population size (N·c = number of meetings)
- c = contact rate
- τ = rate of sharing (chance of transmission per contact)
- Wt/N = % of people with it
- (N−Wt)/N = % of people without it

### SIS Model (causes a tip)

A person with the contagion can move back to the susceptible pile (cured, but able to catch it again).

Formula: Wt+1 = Wt + N·c·τ·(Wt/N)·((N−Wt)/N) − a·Wt, where a = rate of cure.

- If Wt is small, then (N−Wt)/N ≈ 1, so the infected grow roughly by a factor (1 + c·τ − a) each period.
- Spread only occurs if c·τ/a > 1 (the basic reproduction number R0).
- The disease spreads only if the vaccination rate is less than V = 1 − a/(c·τ).
- V = % vaccinated.
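A sketch of these dynamics, showing spread vs. die-out depending on whether c·τ/a exceeds 1 (the parameter values in the checks are illustrative):

```python
def sis(n, w0, c, tau, a, steps=200):
    """Iterate W(t+1) = W(t) + c*tau*W(t)*(N - W(t))/N - a*W(t)."""
    w = w0
    for _ in range(steps):
        w = w + c * tau * w * (n - w) / n - a * w
    return w
```

With c·τ/a = 5 the infection settles at the endemic level W = N·(1 − a/(c·τ)); with c·τ/a < 1 it dies out.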

## Country Growth models

### Solow Growth model

- A = technology
- K = Capital
- L = Labor

Capital to labor ratios and technology drive the long term growth of an economy.

Why do nations fail?

- Government removes capital from investments
- Creative destruction -> as innovation happens, it displaces labor in other industries. If the government is controlled by a few, they may prevent the long-term equilibrium from improving out of fear of creative destruction.

Markov models: given a set of states and the probabilities of moving between them, model what the process looks like and what the end result will be.

- Assumes: (1) a finite number of states, (2) the ability to get from any state to any other, (3) *fixed transition probabilities*, and (4) not a simple cycle (e.g., A->B->C->A).
- *Markov convergence:* under those assumptions, the process converges to a unique steady state.

- Calculations: use a 2-dimensional matrix. The horizontal axis will have your variables at time t, vertical is the same variables at t+1. Each cell will have the probability of staying in a state or transitioning to a new state. Probability of A(t) -> A(t+1) will be in (0,0), probability of A(t) -> B(t+1) in (0,1), etc.
- Statistical equilibrium: when you hit equilibrium things are still changing states, but it’s at the same rate. Find equilibrium by looking at when the % in a given state is the same at a given t and t+1.
- Powerful takeaways: initial state, history, interventions on the state don’t matter *in the long run*, so long as you don’t change the transition probabilities.
- Exapting (reusing the model for a new purpose): identify an anonymous writer by looking at transition probabilities from one word to the next (e.g., from "for" to "example"); compare against transition probabilities estimated from the writer's known work.
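The convergence claim can be sketched by iterating a transition matrix; the starting state doesn't change the limit (matrix values here are illustrative):

```python
def markov_steady_state(p, start=0, iters=200):
    """Iterate a row-stochastic transition matrix from a pure starting state;
    the limiting distribution is the same whatever `start` is."""
    n = len(p)
    dist = [0.0] * n
    dist[start] = 1.0
    for _ in range(iters):
        dist = [sum(dist[i] * p[i][j] for i in range(n)) for j in range(n)]
    return dist
```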

### Finding the best solution

Perspective: representation of all possible solutions. A good perspective minimizes local optima.

Heuristics: how you search within a perspective.

Maximizing group perspectives and heuristics help a group look at more solutions.

The local optima of a team are the intersection of the individuals' local optima: a point must be a local optimum for everyone before the team gets stuck.

**Growth model**

Does money make you happier? From $0-10k, it seems that yes; from $20-60k, not really.

**Economic Growth**

Consider compounding, exponential growth, rule of 72 (will ignore labor)

*Rule of 72:* divide 72 by your growth rate (in %) to estimate how many years it takes for money to double.

*Continuous compounding:* e^(rt)
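A small check of the rule of 72 against exact compounding (a sketch):

```python
import math

def doubling_time_rule72(rate_pct):
    """Rule-of-72 estimate of years to double at rate_pct % growth."""
    return 72 / rate_pct

def doubling_time_exact(rate_pct):
    """Exact doubling time under annual compounding: (1 + r)^t = 2."""
    return math.log(2) / math.log(1 + rate_pct / 100)
```

At 6% growth the rule gives 12 years vs. an exact value of about 11.9.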

### Simple Growth Model

- Workers
- Coconuts
- Picking Machines
- Machines wear out

**Assumptions**

- We will skip/simplify the labor market. In a real labor market, people decide whether to go to work based on wages, which depend on the market. We will skip that and simplify Lt to be 100.
- Depreciation = .25
- Savings rate = 0.3

If you keep running the model, you’ll find a long term equilibrium point.

- Output = 10 sqrt(M)
- Investment is .3 x 10 sqrt(M) = 3sqrt(M)
- Depreciation = .25M
- Equilibrium at 3sqrt(M) = .25M, so sqrt(M) = 12, M = 144, and output = 10*12 = 120.
- At this point, investment and depreciation are equal and output no longer grows.

Example:

- Output 5sqrt(M)
- D=0.10
- Country 1, s=.90
- Equilibrium when .9*(5sqrt(M)) = .1M, so sqrt(M) = 45
- Machines = 45^2 = 2025
- Output = 5*45 = 225

- Country 2, s=.10
- .1*(5sqrt(M)) = .1M, so sqrt(M) = 5 and Machines = 25
- Output = 5*5 = 25
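The equilibria above can be reproduced by iterating M(t+1) = M(t) + s·a·sqrt(M(t)) − d·M(t), where s is the savings rate, d depreciation, and output is a·sqrt(M) (a sketch; the starting stock is arbitrary):

```python
def long_run_output(s, d, a, m0=1.0, steps=1000):
    """Iterate M(t+1) = M(t) + s*a*sqrt(M(t)) - d*M(t) until (near)
    equilibrium and return the long-run output a*sqrt(M)."""
    m = m0
    for _ in range(steps):
        m = m + s * a * m ** 0.5 - d * m
    return a * m ** 0.5
```

Both countries and the original .3/.25 example converge to the outputs computed above.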

For sustained growth, we need new tech!

## Rent vs. Buy (Khan Academy)

| | Cons | Benefits |
| --- | --- | --- |
| Buy | Loan interest; property tax (~1%); upkeep | Tax benefit; stability; can customize the house |
| Rent | Costs are fixed | Down payment can be invested (return); flexibility; no concerns with house value |


## System Equilibrium

### Lyapunov Functions

Helps you know whether a system will go to equilibrium.

- For a function F(x):
- A1: F has a maximum value.
- A2: there is a k > 0 such that if x(t+1) != x(t), then F(x(t+1)) >= F(x(t)) + k.
- So if the system is not at a fixed point, F increases by at least k.
- You need a fixed k, because if the increase shrinks over time you may never hit the max.

- Claim: at some point x(t+1) = x(t), since F cannot keep increasing by k forever without passing its maximum.

Could do the same but with a minimum and something decreasing by at least k each step.

The tricky part – how to construct the *Lyapunov function*.

### Organization of cities

*Behavior:* people change their weekly routes to minimize the number of people they meet (avoid crowds).

*Insights:* this simple model shows how cities self-organize without a central planner telling each person where to go. Each person optimizes to avoid crowds, and the global phenomenon observed is an organized city.

### Market and exchanges

- In a fish market, total happiness goes to equilibrium. There’s a max happiness level (when all get what they want) and trades help increase happiness or they wouldn’t happen.
- However, trades between two entities may make a third entity less happy (e.g., international trade). With enough externalities counteracting the function, the system may not reach equilibrium.

Some Lyapunov functions have a max/min that is never reached, because the system settles into equilibrium before hitting it.

### Lyapunov or Markov?

| | Lyapunov | Markov |
| --- | --- | --- |
| Summary of model | Max/min value; each period the system gets closer to it by at least k | Finite states, fixed transition probabilities, can get from any state to any other, not a simple cycle |
| Starting point | History matters; the equilibrium may change with the starting point | Doesn't matter; the equilibrium is unique |
| Equilibrium vs. stop | At equilibrium the system stops: no more trades, etc. | At equilibrium the system keeps churning, but the ratios remain constant |

## Culture

- Cultures show differences between them and similarities within them, and some practices are interesting (odd, seemingly suboptimal, etc.)
- Culture helps an organization or group organize and live together.

Why does it matter? Culture -> trust, and trust is needed for transactions.

### Coordination and culture

- Culture is simply a set of things that people coordinate around in daily life. For example, language, whether to cross the street when the stop sign is up, whether to wear shoes in the house, etc.
- A pure coordination game basically goes like this: there are many coordination points. The more coordination points two people agree on, the more likely they are to interact. When they do interact, there may be a coordination point they disagree on, which they then coordinate on; typically one person (the follower) adopts the other person's value on that point.
- What this ends up showing: if you model many people, each as a vector of coordination-point values, culture emerges in regions, with big differences across regions. Agreement breeds interaction, and interaction breeds further similarity.

### Coordination and consistency

- This model adds coherence (people want consistency). Also add innovation/errors.
- For example, someone is a 1,1,4,4 -> assume that the first two values are hugging family and hugging friends. Then the person goes to college and there to assimilate they learn to hug friends. They turn into a 1,5,4,4. Then they go home and realize that if they hug friends, why not hug family? (desire for consistency). Then they become 5,5,4,4, with the first 5 being the newly found desire to hug family.
- It's similar to a Markov process with various states, but with one difference: from equilibrium you likely can't reach other states, so it violates the Markov assumption of being able to get (eventually) from any state to any other.

- With this, you may find that some people choose to be consistent in a way that’s suboptimal for a specific situation.

## Path dependence

Path dependent: outcome probabilities depend upon the sequence of past events. History matters!

**When does history matter?**

When it **changes the transition probabilities**! Otherwise, you’ve got something similar to a Markov process, with fixed transition probs.

### Example: Ann Arbor vs. Jackson

- Ann Arbor: largest public university.
- Jackson: largest four walled prison!


Choosing to build a University vs. the prison totally changed the future of the city!

### Chaos

Extreme sensitivity to initial conditions (ESTIC). Chaos deals with initial conditions, not with the path taken.

Chaos is NOT path dependence.

### Urn Models

- Class of models that help describe path dependence.
- Use colored balls to represent a problem and answer equilibrium questions. Some are path-dependent, some are not.
- Equilibrium is what happens in the long run.

### Bernoulli

- The simplest urn model; it is path independent.
- Take out a ball, note its color, put it back in; don't change anything else.
- Describes roulette, blackjack.

### Polya model

Path dependent. For example, pick a ball at random from a bag with blue and red balls. If you pick a red ball, add another red ball to the bag (and replace the one you picked). This makes probabilities change over time.

Interesting learnings

- Any probability of red balls is an equilibrium and equally likely.
- Any history of B blue and R red balls is equally likely.

### The Sway

For each ball picked, add extra weight to the colors picked in earlier periods. Decisions made earlier hold more sway/weight.

### Balancing model

The opposite of the Polya model: if you pick red, add a blue ball, and vice-versa. The balancing process converges to equal percentages of the two colors.
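The three urn processes can be sketched with one driver function (a minimal version; the draw count is arbitrary):

```python
import random

def run_urn(update, draws=2000, start=(1, 1)):
    """Repeatedly draw from an urn of (red, blue) balls; `update(drew_red)`
    returns (red_to_add, blue_to_add). Returns the final share of red."""
    red, blue = start
    for _ in range(draws):
        drew_red = random.random() < red / (red + blue)
        dr, db = update(drew_red)
        red, blue = red + dr, blue + db
    return red / (red + blue)

bernoulli = lambda drew_red: (0, 0)                          # path independent
polya = lambda drew_red: (1, 0) if drew_red else (0, 1)      # reinforcing
balancing = lambda drew_red: (0, 1) if drew_red else (1, 0)  # self-correcting
```

Bernoulli never moves, balancing hugs 1/2, and Polya can settle anywhere in (0, 1) depending on the path.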

### Independence, initial conditions, path dependence, phat

Independent: outcome doesn’t depend on starting point or what happens along the way.

Initial conditions: outcome depends on the starting state.

Path dependent: the outcome probabilities depend upon the sequence of past outcomes.

Phat dependent: outcome probabilities depend upon past outcomes, but not their order.

## Historians: path dependence

In general, order of when things happen make a big difference as to how civilization and society look.

## Increasing returns

**Is increasing returns the same thing as path dependence?**

*Model: electric vs. gas*

- Use an urn model with gas as blue balls and electric as red.
- Start with U = 5 blue, 1 red.
- If red, add 1 blue and 1 red.
- If blue, add 10 blue (due to increasing returns).
- This process is NOT path dependent, but it does have increasing returns.

**Can you get path dependence without increasing returns?**

Yes, look at Symbiots.

- U = 1 blue, 1 red, 1 green, 1 yellow
- If red, add green,
- If green, add red
- If Blue add yellow
- If yellow, add blue

- Path dependent, but not increasing returns.

## Externalities: beyond Urn

Consider that there are externalities (both positive and negative) that could create additional path dependence since doing A first may mean that doing B next has a negative (or positive) externality that promotes or discourages you from doing B.

## Path Dependence and Tipping Points

Path dependence outcomes: color of ball in a given period depends on the path.

Path dependence equilibrium: percentage of red balls in the long run depends on the path.

Tipping point: a single instance in time where something happens and BAM you get a result.

Path dependence is a gradual accumulation of steps that produces an outcome; at a tipping point there is an abrupt change.

So, at a tipping point, the diversity index of the outcome probability distribution drops a LOT within a single step.

## Networks

(Figure: map of the political blogosphere. Blue nodes represent liberal bloggers, red represent conservative bloggers; only the yellow and purple edges show discourse between the two groups.)

### Structure of networks

Degree (node): number of edges attached to a node.

Degree (network): average degree of all nodes.

- Easily calculated as 2 * (number of edges) / (number of nodes). Multiply by 2 since each edge connects two nodes!

Theorem: the average degree of neighbors of nodes will be at least as large as the average degree of the network.

- E.g., your friends on average have more friends than you do.

Path length from A to B: minimal number of edges that must be traversed to go from node A to node B.

Connectedness: you can get from any node to any other node by walking through the edges.

Disconnected: can't traverse from one set of nodes to another set of nodes.

Markov process can be represented as a network. Write down the states and add directed arrows with probabilities of transitions.

Clustering coefficient: percentage of triples of nodes that have edges between all three nodes (triangles).

- Total number of possible triples of n nodes: n!/(3!(n-3)!), i.e., "n choose 3".
- The clustering coefficient is the number of triangles seen divided by the total possible triples.
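The degree and clustering calculations can be sketched as follows (a brute-force version, fine for small networks):

```python
from itertools import combinations

def avg_degree(edges, n_nodes):
    """Average network degree: 2 * |edges| / |nodes| (each edge touches 2 nodes)."""
    return 2 * len(edges) / n_nodes

def clustering_coefficient(nodes, edges):
    """Share of node triples whose three pairs are all connected (triangles)."""
    edge_set = {frozenset(e) for e in edges}
    triples = list(combinations(nodes, 3))
    triangles = sum(all(frozenset(pair) in edge_set
                        for pair in combinations(t, 2))
                    for t in triples)
    return triangles / len(triples)
```

A triangle scores 1.0; a 4-cycle (square) has the same average degree but no triangles, so it scores 0.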

Higher clustering coefficient is correlated with:

- Redundancy
- Social Capital
- Innovation adoption

## Logic of Network Creation

### Random

N nodes, P = probability of two nodes being connected.

*Contextual Tipping Point:* for large N, the network almost always becomes connected when P > 1/(N -1)

### Small Worlds

People have some percentage of “local” or “clique” friends and some percentage of random friends.

As you have more random friends, the clustering coefficient and the avg path length both decrease.

### Preferential attachment

Node arrives, then they figure out connections.

The probability of connecting to a node is proportional to that node's existing number of connections.

You end up with some very highly connected nodes, but the majority of the nodes are only connected to a few other nodes. Skewed. Long tail distribution.

See preferential attachment model on netlogo for a visualization

The network is path dependent, but the equilibrium (degree distribution) is not.

### Network function

What does the network do?

**Six Degrees**

Clique friends and random friends.

If C = 140, R = 10

1-degree: 150

2-degree: CR, RC, RR = 1400, 1400, 100 = 2900

3-degree: RRR, RRC, RCR, CRR, CRC = 1000, 14000, 14000, 14000, 196000 = 239000
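The counting above can be sketched as follows; the skipped CC step encodes the simplifying assumption that clique friends of clique friends are already inside the same clique:

```python
def reach(c, r, degree):
    """People reachable in exactly `degree` hops when everyone has c clique
    friends and r random friends; a clique hop after a clique hop (CC)
    is skipped because it stays inside the same clique."""
    paths = {"C": c, "R": r}
    for _ in range(degree - 1):
        nxt = {}
        for seq, count in paths.items():
            for step, size in (("C", c), ("R", r)):
                if seq.endswith("C") and step == "C":
                    continue  # CC reaches no new people
                nxt[seq + step] = count * size
        paths = nxt
    return sum(paths.values())
```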

Research: new jobs, friends, partners, etc. are found through weak ties.

## Randomness

A random walk is a process in which the cumulative value of something depends on a sequence of random variables.

Randomness is defined by (1) the distribution of it and (2) the source of the randomness.

X + e. Epsilon is the error term: noise, error, uncertainty (e.g., cost of project), complexity, capriciousness.

## Model for luck

Outcome = a*Skill + (1-a)*Luck

Determining a factor: You can observe variability in outcome of a team over time. If the skill matters a lot more than luck, then projects done by the same team or teams with similar skill levels would lead to similar outcomes.

*Importance:* assess outcomes, anticipate reversion to the mean if luck, give good feedback, fair allocation of resources.

The paradox of skill: when a group of super talented individuals are competing, the outcome is more luck-based since the variability in skill is smaller.

## Binary Random Walk

You have binary +1 or -1 result in each step.

**Result 1:** After N (even #) of flips, expected earning is 0.

**Result 2:** for any number K, a random walk will pass both -K and +K an infinite number of times.

**Result 3:** for any number K, a random walk will have a streak of K heads (and K tails) an infinite number of times.

E.g., winning 16 times in a row has probability (½)^16 = 1/65,536, so about 1 in 65,000 people will experience it.

You will get **regression to the mean**. Similar to the "No Free Lunch" theorem: heuristics that worked in the past may not apply to the current situation.

## Normal Random Walk

Set X = 0, in each period change value by a normal amount.

**Efficient market hypothesis**

Assume a normal distribution, but it’s a bit more concentrated towards 0 (days where not much happens).

Prices reflect all available information, so price changes are essentially random (news is random).

## Finite Memory Random Walk

Value at time T includes only the last 5 values.

So Vt = X(t) + X(t-1) +…+ X(t-4)

“Sliding window of memory”

## Colonel Blotto Game

Troops are allocated in response to the other player. There is no best strategy; any allocation can be defeated. It's a zero-sum game.

- 2 players with T troops.
- N fronts (T >> N)
- Actions: allocation of troops around fronts
- Payoffs: # of fronts won
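The payoff rule can be sketched as follows (ties count for neither side; the allocations in the checks are illustrative and demonstrate that allocations can cycle):

```python
def blotto_payoff(a, b):
    """Net fronts won by allocation `a` against `b`; positive means `a`
    wins more fronts than it loses."""
    wins = sum(x > y for x, y in zip(a, b))
    losses = sum(x < y for x, y in zip(a, b))
    return wins - losses
```

For 100 troops on 3 fronts, (34, 33, 33) beats (1, 99, 0), which beats (0, 50, 50), which beats (34, 33, 33): no allocation is unbeatable.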

**Applications of Blotto Game**

- US electoral college: focus “troops” on the right states.
- Terrorism: terrorists can attack tons of places, government needs to decide where to deploy troops.
- Trials: lawyers (players), fronts (lines of defense), resources (time spent preparing for each line of defense).
- Hiring: applicants (players), relevant skills (fronts)
- Sports, etc.

**Blotto: Troop Advantages**

The larger the troop advantage, the more fronts are needed before the stronger player's win is no longer guaranteed. The weaker player wants to add more fronts.

Cycles occur with multiple players (e.g., A can beat B, B can beat C, and C can beat A).

## Prisoner’s Dilemma and Collective Action

### The Prisoner’s Dilemma

One player is P1, the other is P2. Each can either cooperate (C) or defect (D). If they both cooperate, they do well. If they both defect, they are both worse off. If one defects and the other one doesn’t, the one that defects does best.

Note: the BEST overall payoff would be when they both cooperate (overall gain of 8 in this example). This is the Prisoner's Dilemma.

### Formal definition of prisoner’s dilemma

**Given** payoffs T (temptation to defect), R (reward for mutual cooperation), P (punishment for mutual defection), and S (sucker's payoff):

- T > R > P > S
- 2R > T + S

### Insights from the Prisoner’s Dilemma

- The worst, most inefficient outcome is when both players defect. The overall payoff is bad, and there is an alternative in which both players could be better off, which is by both cooperating. (Which would be Pareto Efficient)
- However, each individual player's optimal strategy is to defect. From the individual's perspective, she is always better off defecting: if she assumes the other person will cooperate, the payoff for defecting is higher; if she assumes the other person will defect, she should also defect. (Nash equilibrium)
- As such, individual incentives lead to the least efficient outcome.

### Examples where the Prisoner's Dilemma is interesting

**Countries and arms control**

**Companies and Price Competition**

**Banks & ATM machines**

### Ways to reach cooperation

Assume you have one action; cooperating carries a cost and the action produces a benefit.

- Repetition: repeat the game many times, so you can adjust your strategy based on the prior game's action.
- Likelihood of repeat encounter (e.g., you’ll be nicer to people you’ll likely meet again)
- Indirect reciprocity (e.g., reputation that she’s nice, thus she gets helped by others.)
- Network reciprocity: cooperation can sustain as long as each player has k neighbors and k < b/c, where b is the benefit and c the cost.

- Group selection: within groups defectors do better, but groups where there’s a lot of cooperation are more likely to be better off than those with large % of defectors.
- Kin selection: cooperation is more likely among close genetic relatives.

Getting cooperation in human societies

- Laws
- Incentives

## Collective action and common pool resource problem

I make a choice (Xj) and it has an impact in the collective (society).

There's a cost to me (-xj), a direct benefit to me (beta), and a benefit that spills over to others.

### Collective action

E.g., carbon emissions. Need some sort of regulation or incentive to drive the change.

### Common pool resource problem

Example: Cod

Imagine that Cod available at time 1 will be that which hasn’t been fished, squared.

So if you have 25, eat 20, then at time 1 you’ll have (25-20)^2 = 25. You’re at equilibrium.

If you eat 21, then you only get 16 in the next period and you are at risk of running out.
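The cod dynamics can be sketched as follows (a minimal version of the squared-regrowth rule above):

```python
def cod_next(stock, harvest):
    """Next period's stock: whatever wasn't fished, squared."""
    return (stock - harvest) ** 2

def sustainable(stock, harvest, periods=50):
    """Can the same harvest be taken every period without collapse?"""
    for _ in range(periods):
        if stock < harvest:
            return False
        stock = cod_next(stock, harvest)
    return True
```

Harvesting 20 from a stock of 25 is sustainable forever; harvesting 21 collapses the fishery within a couple of periods.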

### Collective action or common resource problems in the real world

There are many models you’d have to draw out for the common resource problem. Each problem may be slightly different. For example:

- Cattle overgrazing: you can have a rotation scheme here.
- Lobster fishing: this is more difficult since you can’t tag and count the lobster and rotate it. The harvest each person may get could be different. Need to model population effects more so than in the grass issue before.
- Drawing out water from a stream: person upstream can do a lot more damage than the person downstream.

## Mechanism Design

Hidden actions: even if actions are hidden, you want to influence which action someone takes.

Hidden information: can’t figure out what type a person is (how risky, etc.)

### Hidden Actions

For example: you hire employees and don't know how much effort they will put in. How do you structure the contract so that they work? You can't measure/observe the action, but you can observe the result. If the action is taken, the result will be good; if it isn't, the result is good only with some probability p. The cost of effort is c.

You can pay only if the result is good. This would be **incentive compatible**.

How much should you pay them?

M – c >= pM, i.e., M >= c/(1 – p)

- c = cost of effort put in by the employee
- p = probability that the outcome is good even without effort
- M = how much you pay (paid only when the result is good)

In other words, the worker puts in the effort when the pay net of effort cost (M – c) is at least the expected pay from shirking (p·M, since the outcome may be good anyway).

### Hidden Information

Want to hire high-ability employees, but ability is unknown. Could do the following: I will hire you for pay M, but first you must work K hours.

High-ability workers have a lower cost c for the same K hours. Low-ability workers have a higher cost, so they will only take the job if M is higher.

Thus, by setting a contract cost of K, only folks with high ability will work for you for a fee M.

### Auctions

3 types

- Ascending bid: you should keep bidding up to what you really value the item at. Your net benefit is your value minus what you pay.
- Second Price: sealed bid. Highest bid wins it, but pays second highest bid as the price.
- Sealed Bid

Different types of bidders

- Rational
- Emotional/Psychological
- Rule-following

Surplus is value minus bid (V – B).

In a first-price sealed-bid auction with two bidders and uniformly distributed values, you maximize expected surplus by bidding V/2.

### Public Projects or Public Goods

Hidden information issue, how do you help people reveal their value.

Idea: pay the minimum amount you have to pay to make the project viable, others will contribute the rest.

Clarke-Groves-Vickrey pivot mechanism: ask each person how much they'd pay for the project. They have an incentive to reveal their true value to maximize the chance they benefit from the project. However, each person wants to pay as little as possible given the value of the public good to others. So if I value something at $200 that costs $500, and two other people value it at $200 each, I will claim I value it at $100 so that I pay less and the project is still funded.

You can show that it’s impossible to design a mechanism that accomplishes all of the following:

- Efficient
- People always join voluntarily (no coercion)
- Incentive compatible
- Balanced

## Replicator Dynamics

Various strategies exist, and the proportion playing each changes over time based on various factors. You can apply this to evolution: there are various phenotypes, and the prevalence of a phenotype changes based on its current share of the population and its fitness/adaptability.

### Replicator equation

Balance two processes, rational (copy what pays off) and sociological (copy what is common):

Weight(t+1) of strategy i ∝ Proportion(t, i) × Payoff(t, i); normalize the weights to get the new proportions.

Interesting idea: we have a lot of people driving SUV because of collective action. Ideally, we would drive compact cars (easier to park, fuel efficient, etc.), but if we encounter an SUV we feel unsafe and can’t see around that SUV, thus we tend to migrate to SUVs even though it’s inefficient.

### Fisher’s theorem

Higher variance leads to higher adaptability. The more variation, the faster the population adapts to the better phenotype.

Theorem: Change will be proportional to the variance.

## Prediction and the many model thinker

Crowd’s error = average error – diversity!

Crowd diversity = (sum(guess(i) – average crowd guess)^2/)n
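The diversity prediction theorem can be verified numerically (a sketch; the guesses in the check are illustrative):

```python
def diversity_prediction(guesses, truth):
    """Return (crowd error, average individual error, diversity); the theorem
    says crowd error = average error - diversity."""
    n = len(guesses)
    crowd = sum(guesses) / n
    crowd_error = (crowd - truth) ** 2
    avg_error = sum((g - truth) ** 2 for g in guesses) / n
    diversity = sum((g - crowd) ** 2 for g in guesses) / n
    return crowd_error, avg_error, diversity
```

The identity holds for any set of guesses, which is why a diverse crowd always beats its average member.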