Consider the expected revenue of a mechanism designed to sell a good to one agent whose value for the good is distributed uniformly with support $[ 0, 1 ]$. This expected revenue is known to be
$R[p] = \int_0^1 p(v) (2 v - 1)\, dv$where $p(v)$ is a function that determines the probability of selling the good to the agent when her value is $v$ and $2 v - 1$ is the virtual surplus. The above equation, $R[p]$, is a functional of $p$. That is, it maps the function $p$ to a real number. Suppose that you wanted to find the revenue maximizing mechanism $p$. Then, the problem you want to solve is
$\max_p \int_0^1 p(v) (2 v - 1)\, dv$subject to $0 \leq p(v) \leq 1$. If you look at this problem closely, you will note that the optimal function must be
$p(v) = \begin{cases} 0 &\text{ if } v < 0.5 \\ 1 &\text{ if } v > 0.5 \end{cases}$where the value at one half does not matter. This is because $p$ is multiplied by some “slope”. When the slope is positive, we want $p$ to be as large as possible. When the slope is negative, we want $p$ to be as small as possible. Indeed, the functional derivative of $R[p]$ is
$\frac{\delta R}{\delta p}(v) = 2v - 1$We will see that the functional derivative formalizes our intuition about this slope. It will allow us to solve optimization problems that are nonlinear.
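As a quick numerical sketch of this logic, we can discretize $R[p]$ and check that the bang-bang rule above beats some alternative selling rules. The grid size and the comparison mechanisms are illustrative choices of mine, not part of the original argument.

```python
# Midpoint-rule approximation of R[p] = ∫_0^1 p(v)(2v - 1) dv.
N = 10_000
vs = [(i + 0.5) / N for i in range(N)]

def revenue(p):
    return sum(p(v) * (2 * v - 1) for v in vs) / N

bang_bang = lambda v: 1.0 if v > 0.5 else 0.0  # sell iff virtual surplus > 0
always_sell = lambda v: 1.0
linear = lambda v: v

r_star = revenue(bang_bang)                 # ≈ 1/4
assert r_star > revenue(always_sell)        # ∫ (2v - 1) dv = 0
assert r_star > revenue(linear)             # ∫ v(2v - 1) dv = 1/6
```

The bang-bang rule sets $p$ to its upper bound exactly where the slope $2v - 1$ is positive, which is why no other feasible $p$ can do better.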
Optimization over function spaces is a natural extension of optimization over the reals. So, it’s important to remember exactly how optimization works on functions of real vectors. Consider the following function $g : \mathbb{R}^N \to \mathbb{R}$.
$g(X) = (x_1)^2 + (x_2)^2$where $X = (x_1, x_2)$. The directional derivative in the direction $V = (v_1, v_2)$ is
$\begin{align*} dg_V(X) &= \lim_{\epsilon \to 0} \left[ \frac{g(X + \epsilon V) - g(X)}{\epsilon} \right] \\ &= \nabla g(X) \cdot V \\ &= \begin{bmatrix} 2 x_1 \\ 2 x_2 \end{bmatrix} \cdot V \\ \end{align*}$If $X^\star$ is a minimum, there cannot be a direction, $V$, such that $dg_V(X^\star) < 0$. This would imply that there exists an $\epsilon > 0$ such that $g(X^\star + \epsilon V) < g(X^\star)$, which contradicts the assumption that $X^\star$ is a minimum. In addition, there cannot be a $V$ such that $dg_V(X^\star) > 0$ because this would imply $dg_{(-V)}(X^\star) = - dg_{V}(X^\star) < 0$.
Therefore, we know that if $X^\star$ is a minimum, then $dg_V(X^\star) = 0$ for all $V \in \mathbb{R}^2$. This is only possible if $\nabla g(X^\star) = (0, 0)$, which means $\frac{\partial g(X^\star)}{\partial x_1} = 2 x_1 = 0$ and $\frac{\partial g(X^\star)}{\partial x_2} = 2 x_2 = 0$. So, the only stationary point is at $(0,0)$. However, stationarity is a necessary but not sufficient condition. We would need to test for convexity to ensure that this is a minimum. Though, it obviously is.
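A finite-difference sketch of the argument above, using arbitrary test points of my own choosing: the directional derivative matches $\nabla g(X) \cdot V$, and it vanishes in every direction at $(0,0)$.

```python
import math

def g(x1, x2):
    return x1 ** 2 + x2 ** 2

def directional(x1, x2, v1, v2, eps=1e-6):
    # Finite-difference approximation of dg_V(X).
    return (g(x1 + eps * v1, x2 + eps * v2) - g(x1, x2)) / eps

def grad_dot(x1, x2, v1, v2):
    # The closed form: ∇g(X)·V = 2·x1·v1 + 2·x2·v2.
    return 2 * x1 * v1 + 2 * x2 * v2

X, V = (1.0, -2.0), (0.5, 0.5)
assert math.isclose(directional(*X, *V), grad_dot(*X, *V), rel_tol=1e-4)

# At the stationary point (0, 0), the slope is zero in every direction.
for V in [(1, 0), (0, 1), (1, 1), (-3, 2)]:
    assert abs(directional(0.0, 0.0, *V)) < 1e-4
```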
Functional optimization follows the same principles.
Suppose there is a known continuous function $\phi$ that we want to approximate with another continuous function using least squares on the interval $[0, 1]$. That is, we want to minimize the following functional
$L[f] = \int_0^1 (f(x) - \phi(x))^2\, dx.$Of course, the minimum is going to be at $f^* = \phi$. We will solve this by finding the Gateaux differential of $L$ with respect to some direction, $\psi$. This is defined in the same way as the directional derivative on the reals
$dL_{\psi}[f] = \lim_{\epsilon \to 0} \left[ \frac{L[f + \epsilon \psi] - L[f]}{\epsilon} \right].$So, the Gateaux differential of $L$ is
$\begin{align*} dL_{\psi}[f] &= \lim_{\epsilon \to 0} \left[ \int_0^1 \frac{ (f(x) + \epsilon \psi(x) - \phi(x))^2 - (f(x) - \phi(x))^2}{\epsilon} \, dx \right] \\ &= \lim_{\epsilon \to 0} \left[ \int_0^1 \frac{ 2f(x) - 2\phi(x) + \epsilon\psi(x)}{\epsilon} \epsilon\psi(x) \, dx \right] \\ &= \int_0^1 2 (f(x) - \phi(x)) \psi(x) \, dx\\ \end{align*}$which is the inner product of the functional derivative,
$\frac{\delta L}{\delta f}(x) = 2 (f(x) - \phi(x)),$and the direction, $\psi$. Like in calculus on the reals, we need this to be zero for all directions. It turns out that, just like before, we only need to set the directional derivative to zero. This is due to the following lemma.
Fundamental lemma of calculus of variations If $g$ is continuous on $[a,b]$ and satisfies the equality
$\int_a^b g(x) \psi(x) \, dx = 0$for all continuous functions $\psi$ such that $\psi(a)=\psi(b)=0$, then $g(x) = 0$ for all $x \in [a,b]$.
This means that $dL_{\psi}[f]=0$ if and only if $2 (f(x) - \phi(x)) = 0$ for all $x \in [0,1]$. So, $f = \phi$ as anticipated.
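The same finite-difference trick from the finite-dimensional case works here. The sketch below (with $\phi$, $f$, and $\psi$ chosen arbitrarily for illustration) checks that the Gateaux differential of $L$ matches the inner product of the functional derivative with the direction $\psi$.

```python
import math

N = 2000
xs = [(i + 0.5) / N for i in range(N)]   # midpoint grid on [0, 1]

phi = math.sin                  # target function (arbitrary choice)
f   = lambda x: x               # candidate approximation
psi = lambda x: x * (1 - x)     # direction; vanishes at the endpoints

def L(func):
    # Riemann-sum approximation of ∫_0^1 (func - φ)^2 dx.
    return sum((func(x) - phi(x)) ** 2 for x in xs) / N

def gateaux(eps=1e-6):
    # (L[f + εψ] - L[f]) / ε for small ε.
    return (L(lambda x: f(x) + eps * psi(x)) - L(f)) / eps

# ∫ 2(f - φ) ψ dx: the functional derivative paired with the direction.
inner_product = sum(2 * (f(x) - phi(x)) * psi(x) for x in xs) / N
assert math.isclose(gateaux(), inner_product, rel_tol=1e-3)
```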
Contests are models of conflict in which risk neutral players exert costly effort to win a prize. When one player is more capable than another, contests become less competitive. This lack of competitiveness decreases total effort. A contest designer can use two common tools to increase the total effort: reserve bids (Bertoletti 2016) and direct discrimination (Ewerhart 2017; Franke, Leininger, and Wasser 2018).
Reserve bids (i.e. inefficient allocation rules) allow the contest designer to require either player to exert some minimum amount of effort to qualify for the prize. A large enough reserve bid can make the lack of competition from a competitor irrelevant. For example, a professional figure skater would not need to exert much effort to beat me in a figure skating competition. However, if winning a large prize requires a high score, then the professional is not really competing against me. He is competing against this reserve bid. A large enough reserve bid can ensure any amount of individually rational effort.
Discrimination allows the contest designer to discriminate against the stronger player to reduce or remove her advantage. Handicaps in sporting events are a common example. This requires the contest designer to know which contestant is more talented. Otherwise, the designer does not know who to handicap. If this information is known, direct discrimination can be used to level the playing field of any contest.
We consider an alternative way of maximizing revenue – “fair” design. The contest designer maximizes the revenue of a contest under complete information without withholding the prize or treating contestants differently. We show that under these conditions, the designer is unable to achieve the first best and must give a positive payoff to the stronger player. However, allowing an arbitrarily small reserve bid or direct discrimination ensures that the first best revenue can be achieved.
This work takes a similar approach to Letina, Liu, and Netzer (2020). However, unlike that paper, I do not allow the principal to engage in direct discrimination or to modify/withhold the prize. This work on optimal contest design contributes to a literature on revenue dominance in symmetric efficient contests (Fang 2002; Franke, Kanzow, Leininger, and Schwartz 2014) by characterizing an efficient symmetric contest which weakly dominates all others and contributes to a much larger literature on contest design.
Suppose that the prize of the contest is worth one to both players. So, the two players have the same value. However, their scores have different costs. In particular, we will say that the score is $k > 1$ times more costly for Player 2 than for Player 1. You can interpret this as saying that Player 1 is more skilled. So, it takes less effort for her to produce a high score.
The probability of Player $i$ winning the prize in a contest when she chooses score $s_i$ and her opponent chooses score $s_{-i}$ is:
$p_{i}(s_i, s_{-i}).$The final payoffs are:
$U_1(s_1, s_2) = p_1 (s_1, s_2) - s_1$ $U_2(s_1, s_2) = p_2 (s_2, s_1) - k s_2.$We define two notions of “fairness” that are central to our analysis:
Efficiency means that someone is always a winner. Symmetry means that both players are treated the same by the designer. We will see that these conditions do not individually constrain the designer’s revenue. However, when the two conditions are imposed together, the designer is meaningfully constrained.
Note that these conditions jointly imply $p_i(x,x) = 0.5$.
Suppose that there is some contest designer who can choose $p_1, p_2$ in order to maximize the expected revenue, $E[s_1 + s_2]$ given the (possibly mixed) strategies of the players. If strategies are pure, then the revenue is deterministic. Note that incentive compatibility implies
$E[s_1 + k s_2] \leq E[p_1 (s_1, s_2) + p_2 (s_2, s_1)] \leq 1.$Therefore, expected revenue is bounded above by $1 - (k - 1) E[s_2]$ and the first best is achieved iff $s_1 = 1$ and $s_2 = 0$.^{1} We can immediately see that no contest which satisfies both symmetry and efficiency will achieve the first best. This is because both players receive a payoff of zero in the first best, but Player 1 can choose zero instead of one to receive a payoff of $p(0,0) = 0.5$.
If we allow for a reserve bid, but still enforce symmetry, it is easy for the designer to achieve the first best. For example, the following all-pay auction with a reserve bid achieves the first best:
$p(x, y) = \begin{cases} 0 &\text{if } x < \max(y,1) \\ \frac{1}{2} &\text{if } x \geq 1, x=y \\ 1 &\text{if } x \geq 1, x > y. \end{cases}$This is because Player 1 is willing to meet the reserve bid of 1 and Player 2 is not.
One might wonder if the designer can still extract the full surplus with a smaller reserve bid. The answer is yes. In fact, $p(0,0) = 0$ is sufficient for the first best to be attainable under symmetry. That is, the designer need only be able to withhold the prize when both players play zero. To see this, consider the following contest which satisfies both properties except at zero.
$p(x,y) = \begin{cases} 1 &\text{if } x-y >1 \\ x-y &\text{if } x-y \in(0,1] \\ 0 &\text{if } x = y = 0 \\ \frac{1}{2} &\text{if } x=y \neq 0 \\ 1+x-y &\text{if } x-y \in [-1,0) \\ 0 &\text{if } x-y < -1. \end{cases}$This contest has an equilibrium at the first best yet only denies the prize to players when they both exert no effort.
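As a sanity check (not part of the original text), the sketch below grid-tests that $(s_1, s_2) = (1, 0)$ is a Nash equilibrium of this contest. The cost parameter $k = 1.5$ and the deviation grid are illustrative choices; the code clamps $1 + x - y$ to zero when $x - y < -1$ so that $p$ stays a probability.

```python
def p(x, y):
    # The near-efficient symmetric contest with p(0, 0) = 0.
    if x == y == 0:
        return 0.0
    if x == y:
        return 0.5
    d = x - y
    return max(0.0, min(1.0, d if d > 0 else 1 + d))

k = 1.5                                  # Player 2's cost parameter (k > 1)
grid = [i / 100 for i in range(0, 201)]  # candidate deviations in [0, 2]

u1_star = p(1, 0) - 1        # Player 1's payoff at the first best: 0
u2_star = p(0, 1) - k * 0    # Player 2's payoff at the first best: 0
assert all(p(s, 0) - s <= u1_star + 1e-12 for s in grid)      # Player 1 IC
assert all(p(s, 1) - k * s <= u2_star + 1e-12 for s in grid)  # Player 2 IC
```

No deviation on the grid is profitable for either player, so the first best survives even though the prize is withheld only at $(0, 0)$.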
Direct discrimination is similar to a reserve bid. It is possible to mimic any reserve bid through direct discrimination by promising the prize to the weaker player whenever the reserve is not met. In particular, take either example from the previous section and set $p_1(x,y) = p(x,y)$ and $p_2(y,x) = 1 - p(x,y)$. Such a contest is efficient and will be strategically equivalent to the original contest. For example, consider the aforementioned all-pay auction with a reserve bid. With this transformation,
$p_1(x, y) = \begin{cases} 0 &\text{if } x < \max(y,1) \\ \frac{1}{2} &\text{if } x \geq 1, x=y \\ 1 &\text{if } x \geq 1, x > y \end{cases}$ $p_2(x, y) = \begin{cases} 0 &\text{if } y \geq 1, y > x \\ \frac{1}{2} &\text{if } y \geq 1, x=y \\ 1 &\text{if } y < \max(x,1). \end{cases}$This contest has an equilibrium at the first best.
We now restrict the principal to use an efficient symmetric contest. In equilibrium, each player must weakly prefer her equilibrium payoff over copying the strategy of her opponent. Therefore, the following weak incentive compatibility condition is necessary for equilibrium:
$\frac{1}{2} - E[s_2] \leq E[U_1(s_1, s_2)].$Together with $E[U_2(s_1, s_2)] \geq 0$ this implies
$\begin{aligned} \frac{1}{2} - E[s_2] &\leq E[U_1(s_1, s_2) + U_2(s_1, s_2)] \\ \frac{1}{2} - E[s_2] &\leq 1 - E[s_1 + k s_2] \end{aligned}$where the last line follows from efficiency. Rearranging this equation gives an upper bound on the revenue:
$E[s_1 + s_2] \leq \frac{1}{2} + (2 - k) E[s_2]$which can be further bounded by
$E[s_1 + s_2] \leq \begin{cases} \frac{1}{k} &\text{if } k < 2 \\ \frac{1}{2} &\text{if } k \geq 2. \end{cases}$I show by construction that this upper bound is tight. That is, there exist optimal contests which achieve these bounds.
The second bound comes from the fact that $E[s_2] \leq \frac{1}{2k}$ because Player 2 cannot win with probability more than one half. To see this, consider that the following two conditions must hold in equilibrium
$\begin{aligned} \frac{1}{2} - k E[s_1] &\leq E[p(s_2, s_1) - k s_2] \\ E[s_2] &\leq E[s_1] + (k)^{-1} \left(E[p(s_2, s_1)] - \frac{1}{2}\right) \end{aligned}$and
$\begin{aligned} \frac{1}{2} - E[s_2] &\leq E[p(s_1, s_2) - s_1] \\ E[s_2] &\geq E[s_1] + \left(E[p(s_2, s_1)] - \frac{1}{2}\right) \end{aligned}$which imply $E[p(s_2, s_1)] \leq \frac{1}{2}$.
Case 1: $k < 2$. This means that revenue cannot exceed $\frac{1}{k}$. We can reach this upper bound with an all-pay auction with a bid cap at $\frac{1}{2k}$ as in Che and Gale (1998). That is
$p(x,y) = \begin{cases} 1 &\text{if } \frac{1}{2k} \geq x > y \text{ or } y > \frac{1}{2k} \\ \frac{1}{2} &\text{if } x=y \\ 0 &\text{if } \frac{1}{2k} \geq y > x \text{ or } x > \frac{1}{2k}. \end{cases}$This has an equilibrium at $s_1 = s_2 = \frac{1}{2k}$ which achieves the upper bound for $k < 2$.
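A quick grid check (with an illustrative $k = 1.5$) that $s_1 = s_2 = \frac{1}{2k}$ is an equilibrium of this capped all-pay auction and that its revenue is $\frac{1}{k}$:

```python
k = 1.5
cap = 1 / (2 * k)

def p(x, y):
    # Capped all-pay auction: bidding over the cap loses.
    if x == y:
        return 0.5
    if (cap >= x > y) or (y > cap):
        return 1.0
    return 0.0

s = cap  # candidate equilibrium score for both players
grid = [i / 1000 for i in range(0, 1001)]

u1 = p(s, s) - s          # Player 1's equilibrium payoff
u2 = p(s, s) - k * s      # Player 2's equilibrium payoff: 0
assert all(p(x, s) - x <= u1 + 1e-12 for x in grid)      # Player 1 IC
assert all(p(x, s) - k * x <= u2 + 1e-12 for x in grid)  # Player 2 IC
assert abs((s + s) - 1 / k) < 1e-12                      # revenue = 1/k
```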
Case 2: $k \geq 2$. The above implies that revenue cannot exceed one half. Consider the following difference form contest as in Che and Gale (2000):
$p(x,y) = \begin{cases} 1 &\text{if } x-y > \frac{1}{2} \\ \frac{1}{2} + x-y &\text{if } x-y \in \left[-\frac{1}{2},\frac{1}{2}\right] \\ 0 &\text{if } x-y < - \frac{1}{2} \end{cases}$The above contest has an equilibrium at $s_1 = \frac{1}{2}$ and $s_2 = 0$ which achieves the upper bound for $k \geq 2$.
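The same kind of grid check works for the difference form contest. With an illustrative $k = 3$, the sketch below verifies that $(s_1, s_2) = (\frac{1}{2}, 0)$ is an equilibrium and that revenue hits the bound of $\frac{1}{2}$ for $k \geq 2$:

```python
k = 3.0

def p(x, y):
    # Difference-form contest: probability clamped to [0, 1].
    return max(0.0, min(1.0, 0.5 + (x - y)))

grid = [i / 1000 for i in range(0, 2001)]  # deviations in [0, 2]

u1 = p(0.5, 0.0) - 0.5        # Player 1's equilibrium payoff: 1/2
u2 = p(0.0, 0.5) - k * 0.0    # Player 2's equilibrium payoff: 0
assert all(p(x, 0.0) - x <= u1 + 1e-12 for x in grid)      # Player 1 IC
assert all(p(x, 0.5) - k * x <= u2 + 1e-12 for x in grid)  # Player 2 IC
assert abs((0.5 + 0.0) - 0.5) < 1e-12                      # revenue = 1/2
```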
Note that these strategies must be pure because no mixed strategy can have expectation zero and no individually rational mixed strategy can have expectation one. ↩
There are two breaking changes:

- `\underbar`
- `\mathnormal` instead of `\mathcal`
In order to use the new gem on your website (e.g. with Jekyll), you’ll want to update to the 0.13.3 stylesheet. For example, if you previously used the following CSS declaration
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.11.0/dist/katex.min.css" crossorigin="anonymous">
then you’ll want to replace it with
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.13.3/dist/katex.min.css" crossorigin="anonymous">
Otherwise, you might get display issues.
The solution is to set the `R_LIBS_USER` environment variable to a folder that is not synced by OneDrive. This can be done by running this script or by following these steps:

1. Open the Run dialog (Win+R).
2. Enter `rundll32 sysdm.cpl,EditEnvironmentVariables` and hit OK.
3. Add a new user variable named `R_LIBS_USER` whose value is a folder outside of OneDrive (e.g. `C:\Users\username\R`).

Now that you changed your library directory, you’ll need to reinstall your libraries. You could copy them from OneDrive, but some may be damaged.
Steps 1 and 2 can be replaced by searching for some part of “Edit environment variables for the current user” in the start menu. This is easier to remember, but this guide provides the shortest path.
Contests are models of conflict where risk neutral players exert costly effort to win a prize. A lottery contest is any contest where there is not a deterministic relationship between effort and victory. For example, scores could be measured incorrectly or there could be some randomness in the relationship between effort and the final scores. Alternatively, you can interpret lottery contests as giving each player a slice of a prize. For example, a 5% probability of receiving the prize is the same as receiving 5% of the prize with certainty.
The most popular lottery contest is the Tullock contest.^{1} We discuss ways to increase competitiveness in Tullock contests by discriminating against the stronger player. Handicaps in sporting events are a real world example of such discrimination. The purpose of a handicap is to make a match more even so that both teams/players exert more effort.
Suppose that the prize of the contest is worth one to both players. So, the two players have the same value. However, their scores have different costs. In particular, we will say that the score is $k > 1$ times more costly for Player 2 than for Player 1.^{2} You can interpret this as saying that Player 1 is more skilled. So, it takes less effort for her to produce a high score.
The probability of Player 1 winning the prize in a Tullock contest when she chooses score $s_1$ and her opponent chooses score $s_2$ is:
$p_1(s_1, s_2) = \frac{s_1^{r}}{s_1^{r} + (\delta s_2)^{r} }.$The parameter $\delta$ sets the level (and direction) of direct discrimination. If $\delta = 0$, then Player 1 always wins. If $\delta = \infty$, then Player 2 always wins. In general, $\delta > 1$ implies discrimination against Player 1. Note that the prize is symmetric when $\delta = 1$. In this case, there is no direct discrimination.
The other parameter, $r$, is more subtle. It affects the variance of the lottery in the contest. If $r \to \infty$, the prize always goes to the player with the higher score.^{3} If $r = 0$, the prize is allocated at random. This precision parameter changes the marginal returns to effort at different levels. For example, if $r < 1$, then the lower effort player gets a disproportionately high return. While such a contest is fair, in the sense that the two players are not treated differently, it is actually rigged in favor of the weaker player.
The final payoffs are:
$U_1(s_1, s_2) = \frac{s_1^{r}}{s_1^{r} + (\delta s_2)^{r} } - s_1$ $U_2(s_1, s_2) = \frac{(\delta s_2)^{r}}{s_1^{r} + (\delta s_2)^{r} } - k s_2$We consider pairs $r, \delta$ such that the following two conditions are satisfied.
The first assumption prevents excessive discrimination against Player 1 and is without loss of optimality. It is made for simplicity: without it, we would have to consider more cases.
The second assumption is more important. It is a necessary and sufficient condition for there to be an equilibrium in pure strategies. It is always satisfied if $r \leq 1$ and is never satisfied if $r > 2$. The condition is equivalent to
$r \leq \bar{r}(\delta/k) = 1 - \frac{W \left( - (\delta/k) \log{\left( \delta/k \right)} \right)}{\log{\left( \delta/k \right)}}$where $W$ is the Lambert W function. The upper bound, $\bar{r}$, is a strictly increasing function.
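The sketch below evaluates $\bar{r}(\delta/k)$ numerically with a small Newton solver for the principal branch of $W$ (stdlib only; the sample points are arbitrary), and checks that $\bar{r}$ lies in $(1, 2)$ and is increasing, consistent with the claims above.

```python
import math

def lambert_w(x, tol=1e-12):
    # Solve w·e^w = x for x >= 0 by Newton's method (principal branch).
    w = math.log(1 + x)  # decent starting guess for x >= 0
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

def r_bar(a):
    # r̄(a) = 1 - W(-a·log a) / log a for a = δ/k in (0, 1).
    return 1 - lambert_w(-a * math.log(a)) / math.log(a)

samples = [0.1, 0.3, 0.5, 0.7, 0.9]
values = [r_bar(a) for a in samples]
assert all(1 < v < 2 for v in values)                  # r̄ lies in (1, 2)
assert all(x < y for x, y in zip(values, values[1:]))  # strictly increasing
```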
It is not obvious, but this condition is also without loss of optimality.
To find the Nash equilibrium, we need to find the best response functions for each of the players. We do this by maximizing each payoff function. Player 1’s first order condition is
$\frac{r s_1^{r} (\delta s_2)^{r}}{s_1 \left(s_1^{r}+(\delta s_2)^{r}\right)^2} = 1$and Player 2’s first order condition is
$\frac{r s_1^{r } (\delta s_2)^{r}}{s_2 \left(s_1^{r}+(\delta s_2)^{r}\right)^2} = k.$This is a system of two equations with two unknowns. Dividing the second by the first gives us $k = \frac{s_1^{\star}}{s_2^{\star}}$. So the ratio of the scores does not depend on the discrimination parameters. Thus, any change that increases one also increases the other. With this, we can quickly solve the system of equations to get $s_1^{\star} = \frac{r (k \delta)^{r} }{\left( k^{r}+\delta^{r} \right)^2}$ and $s_2^{\star} = \frac{r (k \delta)^{r} }{k \left( k^{r}+\delta^{r} \right)^2}$.
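We can verify the closed forms by plugging them back into both first order conditions. The parameter values below are arbitrary but satisfy the assumptions ($\delta \leq k$ and $r \leq \bar{r}(\delta/k)$):

```python
import math

k, delta, r = 2.0, 1.5, 1.0

# Closed-form equilibrium scores derived above.
s1 = r * (k * delta) ** r / (k ** r + delta ** r) ** 2
s2 = s1 / k

# Common numerator and denominator of both first order conditions.
num = r * s1 ** r * (delta * s2) ** r
den = (s1 ** r + (delta * s2) ** r) ** 2
assert math.isclose(num / (s1 * den), 1.0, rel_tol=1e-9)  # Player 1's FOC
assert math.isclose(num / (s2 * den), k, rel_tol=1e-9)    # Player 2's FOC
assert math.isclose(s1 / s2, k, rel_tol=1e-9)             # score ratio = k
```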
In order for the first order approach to be valid, we have to check the second order and boundary conditions.
The second order conditions require the second derivative of the payoffs to be negative at the equilibrium. They are satisfied if and only if $\left( \frac{k}{\delta} \right)^{r} > \frac{r - 1}{r + 1}$. This inequality holds by Assumption 1.
The boundary conditions require that payoffs are non-negative at the proposed equilibrium. Otherwise, the player would prefer to choose a score of zero. Using $k = \frac{s_1^{\star}}{s_2^{\star}}$, we can dramatically simplify the equilibrium payoffs:
$U_1(s_1^{\star}, s_2^{\star}) = \frac{k^{r}}{k^{r} + \delta^{r}} - s_1^{\star}$ $U_2(s_1^{\star}, s_2^{\star}) = \frac{\delta^{r}}{k^{r} + \delta^{r}} - s_1^{\star}.$By Assumption 1, $U_2(s_1^{\star}, s_2^{\star}) \leq U_1(s_1^{\star}, s_2^{\star})$. So, we only need a condition such that Player 2 has a positive payoff. Substituting $s_1^{\star} = \frac{r (k \delta)^{r} }{\left( k^{r}+\delta^{r} \right)^2}$ gives our lowest payoff
$U_2(s_1^{\star}, s_2^{\star}) = \frac{\delta^{r}}{k^{r} + \delta^{r}} - \frac{r (k \delta)^{r} }{\left( k^{r}+\delta^{r} \right)^2}.$It is not obvious that Assumption 2 makes this non-negative, but verifying it only requires some factoring.
Suppose that there is some contest designer who can choose $r$ and/or $\delta$ in order to maximize the revenue, $s_1 + s_2$. Because we know that $k = \frac{s_1}{s_2}$, we are actually trying to maximize $(1 + k^{-1}) s_1$. Because $k$ is constant, maximizing $s_1$ is enough.
Suppose $r$ is fixed. For now, assume it is one. To maximize revenue, we need to maximize $s_1$. Suppose that we can directly discriminate using $\delta$. Then, our problem is
$\max_{\delta} \frac{k \delta}{( k + \delta )^2}.$This maximum is obtained at $\delta^\star = k$.
In this case, the revenue is
$\begin{aligned} s_1 + s_2 &= \left( 1 + \frac{1}{k} \right) s_1 \\ &= \left( 1 + \frac{1}{k} \right) \frac{k^2}{( 2 k )^2} \\ &= \frac{1 + k}{4 k} \end{aligned}$If we allow $r$ to take any positive value, then the optimal delta will still be $\delta^\star = k$. This result comes from the optimization problem
$\max_{\delta} \frac{r (k \delta)^{r} }{\left( k^{r}+\delta^{r} \right)^2}$which has the following first order condition:
$\frac{r^2 ( \delta k)^{r} ( k^{r} - \delta^{r} )}{ \delta ( k^{r} + \delta^{r} )^3 } = 0.$Suppose that the contest designer cannot discriminate directly. So $\delta = 1$, and the designer chooses $r$ in order to maximize the revenue of the contest. Then, our problem is
$\max_{r} \frac{r k^{r}}{( 1 + k^r )^2}.$If we take first order conditions and solve, we get
$r = \frac{1}{\log(k)} \left( 1 - \frac{2}{1 + k^{r}} \right)^{-1}$which defines the implicit function $r^\star(k)$. However, these values cannot be attained unless $k$ is sufficiently large. Recall that there is no pure strategy equilibrium if $r > \bar{r}$. So, there is no way to reach some of these values.
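The implicit equation can be solved numerically by bisection. The sketch below rewrites it as $F(r) = r \log(k)\left(1 - \frac{2}{1 + k^r}\right) - 1 = 0$; the value $k = 5$ is an arbitrary illustration, and whether the resulting $r^\star(k)$ is attainable still depends on the $r \leq \bar{r}$ constraint discussed above.

```python
import math

def F(r, k):
    # Zero of F corresponds to the implicit first order condition.
    return r * math.log(k) * (1 - 2 / (1 + k ** r)) - 1

def solve_r_star(k, lo=0.5, hi=2.0, iters=200):
    # Bisection; F changes sign on [lo, hi] for this k.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if F(mid, k) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

k = 5.0
r_star = solve_r_star(k)
assert abs(F(r_star, k)) < 1e-9   # satisfies the implicit equation
```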
If you set up a Lagrangian as in Proposition 3 of Nti (2004), then you find that the constrained optimum is the minimum of the two pictured curves. No exact representation of the intersection is known. However, it is approximately $k = 3.509$.
We know that if both forms of discrimination are allowed, then $\delta^\star = k$ because this is the optimum for any $r$. As you can see in the first plot, this means that $\bar{r} = 2$. We will show that this is also the constrained optimum. Our optimization problem is
$\max_{r \leq 2} \frac{r}{4}$which is obviously attained at $r = 2$. So, the optimal revenue is $(1 + k^{-1}) \frac{2}{4} = \frac{1 + k}{2 k}$.
Assumption 2, which guarantees that equilibria exist in pure strategies, is without loss of optimality. So, the solutions in the previous sections hold for all $r \geq 1$.
If payoffs are symmetric, Assumption 2 is violated if and only if $r > 2$. In this case, Baye et al. (1994) show that there is a symmetric mixed strategy equilibrium in which both players receive a payoff of zero.^{4} This means that when $\delta = k$,
$\begin{aligned} E[s_1] &= E \left[ \frac{s_1^{r}}{s_1^{r} + (k s_2)^{r} } \right] \\ k E[s_2] &= E \left[ \frac{(k s_2)^{r}}{s_1^{r} + (k s_2)^{r} } \right] \end{aligned}$Symmetry implies that each player’s expected probability of winning is one half. Therefore, $E[s_1] = \frac{1}{2}$, $E[s_2] = \frac{1}{2 k}$, and $E[s_1 + s_2] = \frac{1 + k}{2 k}$. This is the same maximum revenue as in the overall optimum in pure strategies. It’s not obvious that this is the maximum revenue. For example, one might imagine that the principal could ensure zero payoffs but have Player 1 win with probability greater than one half. In order to confirm that Assumption 2 is without loss of optimality, we need to answer a few questions.
Are equilibria revenue equivalent? Yes. Ewerhart (2017a) shows that the equilibrium is unique when $r \leq 2$ and Ewerhart (2017b) shows that equilibria are revenue and payoff equivalent when $r > 2$.
Is $r \in (\bar{r}( \delta / k ), 2]$ ever optimal? No. Wang (2010) shows that revenue is
$\frac{2 \delta}{r k} \left( \frac{1 + k}{2 k} \right) (r - 1)^{(r - 1)/r}$on this interval which is weakly decreasing in $r$. Therefore, $r = \bar{r}( \delta / k )$ yields weakly greater revenue.
Is $r > 2$ ever optimal? No. Alcalde and Dahm (2010) show that if $r > 2$, the Tullock contest is revenue and payoff equivalent to an all-pay auction. Revenue in this auction is $\left( \frac{\delta}{k} \right) \frac{1 + k}{2 k}$. This is the same revenue as when $r = 2$.
Is $\delta < k$ ever optimal? No. Note that the revenue in the last two points is increasing in $\delta$.
The above shows an example of how revenue depends on $r$. Note that the maximum is reached before $\bar{r}(\delta / k)$ when the players are very heterogeneous (orange) and at the upper bound when players are more homogeneous (blue).^{5}
This table summarizes the results that we have about each type of optimal contest. There is not much to summarize for covert discrimination because there aren’t any closed form expressions.
|  | $r,\delta$ | Payoffs | Revenue |
| --- | --- | --- | --- |
| No discrimination | $1, 1$ | $\frac{k^2}{(1+k)^2}, \frac{1}{(1+k)^2}$ | $\frac{1}{k+1}$ |
| Direct ($r$ fixed) | $r, k$ | $\frac{2-r}{4}, \frac{2-r}{4}$ | $\frac{1+k}{4k}$ |
| Covert ($\delta = 1$) | $r^\star(k), 1$ |  |  |
| Unrestricted | $2, k$ | $0, 0$ | $\frac{1+k}{2k}$ |
The case of covert discrimination demonstrates an important precision tradeoff. Choosing a large value of $r$ increases competitiveness by awarding the prize more frequently to the player with the higher score. However, choosing a low value allows you to discriminate against the stronger player – which also increases competitiveness. The effect that wins out depends on the size of the asymmetry between players. When $k$ is large enough, this discrimination effect wins out. However, when direct discrimination is possible, there is no advantage to covert discrimination.
Tullock contests were introduced in 1980 by Gordon Tullock. The other major lottery contest is the rank order contest (Lazear and Rosen 1981). ↩
The papers mentioned give different prize values instead of different costs. The two are equivalent. I prefer to say that one player is more skilled than to say that one player values the prize more. ↩
In this case, it is not a lottery contest. It converges to the all-pay auction – the main deterministic contest. ↩
The symmetric equilibrium is characterized in Ewerhart (2015). ↩
More figures like the above can be found in Chowdhury et al. (2020). ↩
The war of attrition is a second price all-pay auction. That is, an auction where players pay a function of the second highest bid. The story is that two players enter into a battle royal to win some reward. Each moment that they fight, they must pay some cost. Each player chooses a secret time when they will exit the fight and forfeit the prize.
Because the last player wins as soon as his opponent exits, the winner only fights until her opponent’s exit time. So, both players only face the cost of the lower exit time.
There are two players, Player 1 and Player 2. Suppose player $i$ exits at $t_i$ and the other player exits at $t_{-i}$. Then, the payoff of player $i$ is defined by:
$U_i( t_i ; t_{-i} ) = \begin{cases} \ell_i(t_i) &\text{ if } t_i < t_{-i} \\ s_i(t_i) &\text{ if } t_i = t_{-i} \\ f_i(t_{-i}) &\text{ if } t_i > t_{-i} \end{cases}$where $\ell_i$ is the payoff of losing, $s_i$ is the payoff of a tie, and $f_i$ is the payoff of the winner. Note that the two players can have entirely different payoffs. In the most common formulation, $\ell_i(t) = -t$ and $f_i(t) = V_i - t$.
We make one regularity assumption and another assumption which characterizes the war of attrition.
Assumption 1: The function $f_i(t)$ is continuous and $\ell_i(t)$ is continuously differentiable.
Assumption 2: Players like to win and war is costly. Specifically,
Without loss of generality, we will say that $\ell_i(0) = 0$. The second assumption guarantees that ties occur with probability zero in equilibrium. So, I will not mention them again. The last two points of Assumption 2 (4 and 5) prevent pathological behavior around infinity. Without these assumptions, it is possible that the players would never exit. In particular, 5 guarantees that the benefit of winning is not so large relative to the cost of the war.
If $\lim_{t \to \infty} f_i(t) < \ell_i(0)$, there are asymmetric pure strategy Nash equilibria where one player, $i$, chooses a very large exit time $t$ such that $f_{-i}(t) < 0$. The other player best responds by playing zero because fighting enough to win is not worthwhile. Because the winner’s payoff does not depend on her own score, she is indifferent between all actions except zero. So, this is a Nash equilibrium.
In addition to the pure strategy equilibria, there is one Nash equilibrium in mixed strategies. The equilibrium will have full support over the real line. This is established in several steps.
We will see from the construction of the equilibrium that no player can have an atom and that the support cannot be bounded. However, these four points are enough to get us started.
In order for a player to be willing to mix, she must be indifferent between all points in the support. So, the following indifference condition applies:
$P(t_{-i} < t) E[ f_i(t_{-i}) \vert t_{-i} < t ] + (1 - G_{-i}(t)) \ell_i(t) = u_i$where $u_i$ is the player’s (constant) payoff. This is the same as
$\int_0^t f_i(x) dG_{-i}(x) + (1 - G_{-i}(t)) \ell_i(t) = u_i$where $G_{-i}$ is the equilibrium strategy distribution of the opponent. We want to solve for $G_{-i}$. If we take derivatives of both sides with respect to $t$, we get
$f_i(t) g_{-i}(t) + (1 - G_{-i}(t)) \ell'_i(t) - g_{-i}(t) \ell_i(t) = 0$which simplifies to the following differential equation
$- \ell'_i(t) = \left(f_i(t) - \ell_i(t) \right) \frac{g_{-i}(t)}{1 - G_{-i}(t)}.$This equation ensures that the marginal cost of participating in the war of attrition is equal to the prize value times the rate that your opponent exits (hazard rate). So, we are equating marginal cost with marginal benefit.
The differential equation has a known solution:
$G_{-i}(t) = 1 - \exp \left( \int _0^t \frac{ \ell'_i(z) }{ f_i(z) - \ell_i(z) } dz \right).$One might be concerned that the above expression may not satisfy the properties of a distribution function. For example, it may be decreasing or $\lim_{t \to \infty} G_{-i}(t) \neq 1$. This is not the case. The function is strictly increasing because the term inside the integral is negative. Assumption 2.5 guarantees that the distribution approaches one. However, it never reaches one at any time. So, the support is unbounded. Another way to see this is to note that the survival function is
$S_{-i}(t) = \exp \left( \int _0^t \frac{ \ell'_i(z) }{ f_i(z) - \ell_i(z) } dz \right) > 0.$The solution is somewhat difficult to work with. So, in most applications, the prize is taken to be fixed.
If we assume $f_i(z) = V_i + \ell_i(z)$, where $V_i$ is a constant prize, then $f_i(z) - \ell_i(z) = V_i$ and the strategy reduces to $\begin{aligned} G_{-i}(t) &= 1 - \exp \left( \int _0^t \frac{ \ell'_i(z) }{ f_i(z) - \ell_i(z) } dz \right) \\ &= 1 - \exp \left( \int _0^t \frac{ \ell'_i(z) }{ V_i } dz \right) \\ &= 1 - \exp \left( \frac{ \ell_i(t) }{ V_i } \right). \end{aligned}$
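As a numerical sanity check (with an illustrative prize $V = 2$ and linear cost $\ell_i(t) = -t$), the sketch below verifies the indifference condition: exiting at any time $t$ against an opponent playing $G(t) = 1 - e^{-t/V}$ yields the same expected payoff, here zero.

```python
import math

V = 2.0
f = lambda x: V - x                     # winner's payoff at loser exit x
g = lambda x: math.exp(-x / V) / V      # equilibrium density of G
survival = lambda t: math.exp(-t / V)   # 1 - G(t)

def expected_payoff(t, n=100_000):
    # ∫_0^t f(x) g(x) dx + (1 - G(t))·ℓ(t), with ℓ(t) = -t (midpoint rule).
    h = t / n
    integral = sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n)) * h
    return integral - survival(t) * t

for t in [0.5, 1.0, 3.0, 7.0]:
    assert abs(expected_payoff(t)) < 1e-6
```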
This equation is about as simple as the equilibrium of the linear war of attrition (where $\ell_i(x) = -x$). So, it is pretty popular. You can try other assumptions in the general equation to get other expressions. For example, the equilibrium where players only pay a fraction of $\ell_i$ when they win is also relatively simple.
One long running joke on the page is that the randomized articles have zero citations and are “not forthcoming”. That is, we remind readers that the random text is not a reference for any other article and is not scheduled for publication in any journal. These facts were intended to be self-evident. So, I was surprised when a student contacted me to ask if he could use some random paragraphs in a TeX package he was working on.
Naturally, I was ecstatic that someone liked Econ Ipsum enough to make a spinoff. So, I’m happy to announce that you can now use Econ Ipsum paragraphs in TeX via the econlipsum package by Jack Coleman. So, Econ Ipsum’s citation counter – intended to remain forever at zero – has now advanced to one.
The number of API executions is the maximum number of paragraphs requested by a single person divided by 100 (and rounded up). Each day (typically) has between 3 and 30 executions. On one day, there were 184 executions – which means that one person generated 18,400 paragraphs. ↩
This mechanism has also been referred to as Liberal Radicalism or LR.
I wanted to take the time to clarify a few common misconceptions about QF and explain what it actually is and is not. Moreover, implementations of QF have modified the mechanism in ways that make it inefficient and often dominated by more common allocation approaches.
The basic problem with funding a public good is that people do not pay if they can take advantage of the good for free. So, collected payments/donations are always less than they should be. This is not a problem that QF solves. Several sources claim QF is “the mathematically optimal way to fund public goods in a democratic community” (Gitcoin). This is not entirely true. QF was not designed to solve the problem of funding public goods. It is not a mechanism for optimal fundraising. It was instead designed to solve the following problem:
How can we efficiently allocate resources to public goods, given that we have unlimited access to resources?
Note that this is not a fundraising problem. It is a voting problem. QF was not designed to have the greatest revenue of any mechanism. It doesn’t tell you how to raise money. It instead tells you where you should put it once it is raised. QF is a system where people “vote with their wallets” to allocate resources. Yes, it raises funds as part of the process, but this is not the point.
This is a classic economic problem that was technically solved by Vickrey, Clarke, and Groves in three separate papers. A quick summary of their mechanism (VCG) is essential for understanding what QF is about.
The idea behind VCG is pretty simple. Suppose we are looking at funding webpack, a popular open source project. Suppose the value for person $i$ of webpack having an annual budget of $x$ is $v_i(x)$ where $v_i$ is a concave, differentiable, and increasing function^{1} with $v_i(0) \equiv 0$. The efficient level of funding for webpack is the $x^\star$ that maximizes society’s payoff:
$\left[ \sum_{i} v_i(x) \right] - x. \tag{1}$However, people only care about their own value – not the value of others. So, they would not be willing to give as much. To fix this, we conduct a three step process.
When they add their own $v_i(x)$ to the transfer, their personal payoff is exactly the same as society’s (Equation 1). Because of this, they want $x^\star$ to be calculated correctly. So, they have no reason to lie about their value, $v_i$.
The transfers noted above are very expensive. However, they can be made much cheaper because you can subtract any constant from these transfers. In fact, you can subtract any function that does not depend on $v_i$.^{2} When the ideal function is chosen,^{3} the assumptions we made ensure the VCG mechanism raises funds from each person. The amount raised from person $i$ is
$\max_x \left( \left[ \sum_{j \neq i} v_j( x ) \right] - x \right) - \left( \left[ \sum_{j \neq i} v_j( x^\star ) \right] - x^\star \right) > 0$There is no incentive compatible efficient mechanism that raises more total funds (Krishna and Perry 1998).
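To make the transfer concrete, here is a numerical sketch of the pivot payment. It is not from the original papers: the square-root valuations and all numbers are illustrative choices of mine that satisfy the concavity assumptions above.

```javascript
// Assume each person values the budget as v(x) = a * sqrt(x), which is
// concave and increasing with v(0) = 0. Then sum_j a_j * sqrt(x) - x is
// maximized at x* = (sum_j a_j)^2 / 4.
function optimalBudget(as) {
  const s = as.reduce((acc, a) => acc + a, 0);
  return (s * s) / 4;
}

// Society's payoff (excluding transfers) at budget x.
function welfare(as, x) {
  return as.reduce((acc, a) => acc + a * Math.sqrt(x), 0) - x;
}

// Pivot payment for person i: the welfare the others could have achieved
// without i, minus the welfare the others actually get at x*.
function pivotPayment(as, i) {
  const others = as.filter((_, j) => j !== i);
  const xStar = optimalBudget(as);
  return welfare(others, optimalBudget(others)) - welfare(others, xStar);
}

// Three donors with a = [2, 2, 2]: x* = 9, and each pays a positive amount.
console.log(optimalBudget([2, 2, 2]));  // 9
console.log(pivotPayment([2, 2, 2], 0)); // 1
```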
There are three major issues with the VCG mechanism:

1. VCG requires each participant to submit their exact valuation for each level of total funding for every open source project. This imposes a preparation cost on most participants, as they likely do not know these valuations offhand.
2. Participants can cheat the mechanism through collusion.
3. VCG raises limited funds.

Issue 2 is somewhat overblown, as the ways to cheat in VCG are generally NP-hard to compute. However, there are interesting restricted cases where the optimal collusion scheme can be computed in polynomial time.
The third issue is an important one that cannot be easily solved. Remember that there is no incentive compatible efficient mechanism that collects more revenue than VCG. So, collecting more funds requires sacrificing efficiency – an issue not dealt with here.
The aforementioned paper considers QF, a different allocation mechanism which solves the first issue with VCG (i.e., that it requires too much information) – and it does so brilliantly.
The QF mechanism allocates resources according to
$x^\star = \left( \sum_i \sqrt{c_i} \right)^2 \tag{2}$where $c_i$ is an individual contribution made by each participant. So, participants no longer need to send their value for every level of funding. They just need to choose a contribution. This requires less information on the part of the contributors. It is also more private because less information is surrendered. The main point of the paper is that, in equilibrium, this $x^\star$ is the same as the one we considered in the VCG section.
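Equation 2 is simple enough to compute directly. A minimal sketch (the function name and numbers are mine):

```javascript
// QF allocation (Equation 2): the funding level is the square of the
// sum of the square roots of the individual contributions.
function qfAllocation(contributions) {
  const s = contributions.reduce((acc, c) => acc + Math.sqrt(c), 0);
  return s * s;
}

console.log(qfAllocation([1, 1, 1, 1])); // 16: four $1 contributions fund $16
console.log(qfAllocation([4]));          // 4: a lone contributor gets no subsidy
```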
The goal of QF is not to maximize $\sum_i c_i$, but to instead find a function that achieves the socially optimal resource allocation, $x^\star$.
The deficit of QF is
$D_{QF} = x^\star - \sum_i c_i = \left( \sum_i \sqrt{c_i} \right)^2 - \sum_i c_i$which is positive when there is more than one contributor by Jensen’s inequality. Therefore, outside funding is always required for QF.
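Continuing the sketch, the deficit is just the allocated amount minus what contributors actually paid (again, the numbers are illustrative choices of mine):

```javascript
// Deficit of QF: the allocated amount minus total contributions.
function qfDeficit(contributions) {
  const s = contributions.reduce((acc, c) => acc + Math.sqrt(c), 0);
  const paid = contributions.reduce((acc, c) => acc + c, 0);
  return s * s - paid;
}

console.log(qfDeficit([1, 1, 1, 1])); // 12: $16 allocated, only $4 collected
console.log(qfDeficit([9]));          // 0: no deficit with a single contributor
```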
The issues with QF are similar to VCG.
Collusion in QF is particularly easy. If you want to contribute $c_i$ and have friends who want to contribute nothing, then you can do better by contributing less and dividing your contributions amongst your friends.
The second problem is a big problem for applying both QF and VCG because the deficit can be difficult to predict. Therefore, unlimited resources are needed to guarantee that the costs are covered. This is somewhat reasonable for governments, who can tax their citizens.^{4}
It is not reasonable when the funds are raised by a foundation or philanthropist. What do you do if $D_{QF}$ is larger than your grant funding, $G$? The authors note this problem and propose an approximate solution called CQF (also known as CLR):
$x^\circ = \alpha \left( \sum_i \sqrt{c_i} \right)^2 + (1 - \alpha) \sum_i c_i$where $0 \leq \alpha \leq 1$. When $\alpha = 0$, CQF is the standard donation model with no subsidies. When $\alpha = 1$, CQF is the same as QF and thus achieves the optimal allocation. At every $\alpha$ in between, CQF is a mix of the two. The deficit of CQF is
$D_{CQF} = x^\circ - \sum_i c_i = \alpha D_{QF}.$So, CQF has a smaller deficit than QF. It results in a better^{5} allocation than private donations. However, CQF does not yield the efficient allocation unless $\alpha = 1$.
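In the same spirit, a sketch of the CQF rule (illustrative numbers):

```javascript
// CQF allocation: a convex combination of the QF allocation and the
// plain sum of contributions, controlled by alpha in [0, 1].
function cqfAllocation(contributions, alpha) {
  const s = contributions.reduce((acc, c) => acc + Math.sqrt(c), 0);
  const paid = contributions.reduce((acc, c) => acc + c, 0);
  return alpha * s * s + (1 - alpha) * paid;
}

const cs = [1, 1, 1, 1];
console.log(cqfAllocation(cs, 1));   // 16: full QF, deficit 12
console.log(cqfAllocation(cs, 0));   // 4: pure private donations, no deficit
console.log(cqfAllocation(cs, 0.5)); // 10: deficit 6 = 0.5 * 12 = alpha * D_QF
```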
Implementations of QF have not actually implemented QF or CQF. They have instead implemented the following rule which I call NQF:
$\tilde{x}^p = \left[ \sum_i c_i^p \right] + G \frac{\left( \sum_i \sqrt{c_i^p} \right)^2}{ \sum_p \left( \sum_i \sqrt{c_i^p} \right)^2}$where $G$ is the size of an outside grant and $p$ is an index for the open source project. A calculator for NQF is available with source code. This formulation guarantees that the deficit of the mechanism is exactly $D_{NQF} = G$. This is a desirable property because it ensures that the round never goes over budget and that all of $G$ is used. However, this model is not ideal. In the next section, I’ll show a better way.
NQF lacks the desirable properties of QF and CQF. It is not efficient. It is a completely different mechanism.
The first thing to note is that a private good still gets a subsidy under NQF. Under VCG, QF, and CQF, any project with one contributor gets no subsidy. You can see in the equation above that this is not true for NQF because the subsidy for each project
$G \frac{\left( \sum_i \sqrt{c_i^p} \right)^2}{ \sum_p \left( \sum_i \sqrt{c_i^p} \right)^2}$is always positive. This means that any individual who can list a project can fraudulently profit from the mechanism.
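This is easy to verify numerically. The sketch below implements the NQF rule from the formula above (the function name and numbers are mine):

```javascript
// NQF: each project p receives its own contributions plus a share of the
// grant G proportional to its QF score, (sum_i sqrt(c_i^p))^2.
function nqfAllocations(projects, G) {
  const scores = projects.map(
    (cs) => cs.reduce((acc, c) => acc + Math.sqrt(c), 0) ** 2
  );
  const totalScore = scores.reduce((a, b) => a + b, 0);
  return projects.map((cs, p) => {
    const paid = cs.reduce((a, b) => a + b, 0);
    return paid + (G * scores[p]) / totalScore;
  });
}

// Two projects: one with four $1 donors, one with a single $4 donor.
// The single-donor project still captures part of the grant.
console.log(nqfAllocations([[1, 1, 1, 1], [4]], 10)); // [ 12, 6 ]
```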
The second thing to note about NQF is that funds allocated to project $p$ are decreasing in the contributions to all other projects. For example, when you donate to webpack, you are taking funds away from all other projects. This generates inefficiency and is a major deviation from QF and CQF.
NQF does not necessarily beat private contributions. In fact, the following example shows a case where $G$ is wasted entirely.
Example: Suppose there are two contributors and two projects. Contributor 1 likes both projects equally while Contributor 2 only likes Project 1. More concretely, assume
$v_1^1(x) = v_1^2(x) = v_2^1(x) = 2 \sqrt{x}$while $v_2^2(x) = 0$. Contributor 1 has no reason to contribute to Project 1 through the mechanism. He will give only to Project 2 such that the grant is split evenly between the two projects. This is despite the fact that Project 1 is more valuable to society. The quadratic nature of this mechanism actually exacerbates the problem because even a small contribution by Player 1 to Project 1 will divert a large amount of funding away from Project 2. Allocations can be found in the table below.
| | NQF (G = 1)^{6} | Return Grant (G = 0) |
| --- | --- | --- |
| Player 1 | (0, 1/2) | (0, 1) |
| Player 2 | (1/2, 0) | (1, 0) |
| Grant | (1/2, 1/2) | (0, 0) |
| Total | (1, 1) | (1, 1) |
| Optimal | (4, 1) | (4, 1) |
The above table shows the contributions to Project 1 and Project 2 by each player and the grant. As you can see, the total contributions are the same between NQF and private contributions. So, in our example, NQF does not outperform private contributions even though a grant has been collected. Of course, both are outperformed by matching contributions or by CQF with any $\alpha > 0$.
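The NQF column of the table can be checked directly against the NQF formula, with the contributions taken from the table and $G = 1$:

```javascript
// Project 1 receives 1/2 from Player 2; Project 2 receives 1/2 from
// Player 1. Each project's QF score is (sqrt(1/2))^2 = 1/2, so the grant
// splits evenly and each project ends up with exactly 1.
const score1 = Math.sqrt(1 / 2) ** 2;
const score2 = Math.sqrt(1 / 2) ** 2;
const x1 = 1 / 2 + (1 * score1) / (score1 + score2);
const x2 = 1 / 2 + (1 * score2) / (score1 + score2);
console.log(x1.toFixed(2), x2.toFixed(2)); // 1.00 1.00
```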
This points to a general issue with this mechanism. People who like both popular and unpopular projects have an incentive to divert resources to the unpopular ones. This is a classic voting issue and is exactly the sort of behavior that quadratic voting mechanisms were designed to prevent.
NQF has the nice property that the deficit is exactly equal to your grant funding. So you can never generate more deficit than you have funding for. However, it lacks all other nice properties of QF or CQF. It does not result in an approximately efficient allocation and it can be defrauded by a single agent. However, it is possible to get the one advantage of NQF ($D = G$) without losing any of these properties. You just have to stop the mechanism when $D = G$.
That is:

1. Run CQF with some $\alpha > 0$.
2. Track the deficit as donations arrive.
3. Once the deficit reaches $G$, stop matching donations.
This last step could be achieved by cutting off donations completely or by putting a “donation matching is currently disabled” message on the screens of potential donors. The choice of $\alpha$ would determine how long you can run the match.
These assumptions are from the QF/LR papers. VCG doesn’t need all of these assumptions. ↩
You can subtract any $h_i(v_{-i})$ where $v_{-i} = (v_1, \dots, v_{i-1}, v_{i+1}, \dots, v_n)$. ↩
This ideal function is $\max_x \left[ \sum_{j \neq i} v_j( x ) \right] - x$. ↩
As the paper points out, efficiency is lost when people consider how their choices affect their taxes. However, this goes away as the number of participants becomes large. So, it’s hard to see this as an issue if the mechanism were played out on a national scale. ↩
‘Better’ in that it is closer to the efficient allocation. ↩
The assumption that $G=1$ isn’t terribly important. It does not affect total contributions. Naturally, it’s possible to get above $(1,1)$ when $G > 2$. However, contributors give nothing in this case. So, it is not terribly interesting. ↩
user@host.tld or mailto:user@host.tld in your HTML.
I used to use the Scrape Shield provided by Cloudflare. However, when I switched away from using Cloudflare a few months ago, I started to receive a lot of spam that could not be easily blocked. There are many solutions that you can find on the internet to stop people from scraping your email. The most common is to write your email as:
user [at] host [dot] tld
This is not ideal for three reasons:

1. It is awkward for human visitors to read and copy.
2. Scrapers can be programmed to translate this format back to user@host.tld. It’s just a little less common.
3. It does not produce a working mailto: link.

This third issue is the most difficult to fix, as there are many solutions that work for displaying an email if you don’t care about functional mailto: links. For example
<span>user@</span>
<span style="display:none;">HIDDEN JUNK</span>
<span>host.tld</span>
or the equivalent with CSS.
There are two solutions that I know of that fix the problem and allow mailto: links to work. The first is common and easy for scrapers to circumvent while the other is harder to circumvent, but requires javascript.
One simple and very common solution is to replace all characters in the mailto: links with HTML entities. For example, user@host.tld becomes &#117;&#115;&#101;&#114;&#64;&#104;&#111;&#115;&#116;&#46;&#116;&#108;&#100;. You can then make an email link like so
<a href="mailto:&#117;&#115;&#101;&#114;&#64;&#104;&#111;&#115;&#116;&#46;&#116;&#108;&#100;">email me</a>
which renders as email me. As you can see, the browser translates these HTML entities completely for the end user. So this obfuscation does not create any issues for human visitors. However, it is very easy to transform all HTML entities before scraping. So, this offers only minimal protection.
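Generating the entity-encoded string is a one-liner. A sketch (the function name is mine):

```javascript
// Encode every character of a string as a decimal HTML entity,
// e.g. "u" -> "&#117;".
function toEntities(str) {
  return [...str].map((ch) => `&#${ch.codePointAt(0)};`).join("");
}

console.log(toEntities("user@host.tld"));
// &#117;&#115;&#101;&#114;&#64;&#104;&#111;&#115;&#116;&#46;&#116;&#108;&#100;
```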
I have tested a very simple method in javascript for the last few months. The idea is to replace your email with the correct string using javascript. A simplified version of my implementation follows.
<a id="mlink" href="#">email me</a>
<script>
/* 1. define variables */
var me = "name";
var place = "host.tld";
/* 2. find the email link to replace (the link must appear before this script runs) */
var elink = document.getElementById("mlink");
/* 3. replace link href with variables */
elink.href = `mailto:${me}@${place}`;
</script>
So, we define the email in two parts, name and host.tld. We then combine these parts and put them into the email link. This solution is simple and resistant to scraping. The easiest way to scrape this document would be to execute the above javascript. However, scrapers do not execute javascript on arbitrary pages for several reasons. First, it is not performant to execute javascript on the thousands of websites that you scrape. Second, it opens scrapers up to various types of attacks and tracking that they would prefer to avoid.
At some point, scrapers may evolve to the point that such a simple method is not effective. However, I have found that I no longer receive spam after making this change. While I am generally against adding unnecessary javascript to pages, as it slows them down, I feel this snippet is both necessary and highly performant.
This javascript method is similar to the one used by Cloudflare’s built-in Scrape Shield, but is simpler. Cloudflare uses a hash that is decoded in javascript by using the last few characters as a “key” to unlock the hashed email.
I use the second method on my site and can confirm that it works. A few months ago, I ran an experiment to see if the spammers were really getting my email from my website. In the test, I replaced the email in the HTML of my site with a dummy address, 6adegxtzp@relay.firefox.com, which I still receive. However, I sort these into a special folder. When the page loads, my script replaces this email with my main address. Human visitors with javascript enabled do not encounter this address.
Since making this change, I no longer receive spam on my main email. However, I receive a significant amount of spam through my Firefox relay address. This is despite the fact that I have never used this address anywhere else. The change took a few months to take effect, though. So don’t expect results right away.