Martingales, Probability Theory

The star-ship Enterprise problem


The navigation system on the star-ship Enterprise is broken. There is nothing one can do, but Chief Engineer Scott is able to fix a distance to be traveled; the star-ship will then make a space-hop in a uniformly random direction. Assume that the current distance from the star-ship to the Sun is {R} and the radius of the Solar System is {r<R}. The star-ship will make consecutive space-hops in independent uniformly random directions until it reaches the Solar System. How large is the probability that the star-ship eventually reaches the Solar System after a finite number of space-hops? How can one maximize the chance of accomplishing this trip?


Assume that {\Theta_1, \Theta_2, \dots } are i.i.d. uniformly random points on the unit sphere

\displaystyle \partial B(0,1)=\{x\in \mathbb R^3: \|x\|=1\},

which are the random directions along which the star-ship will travel. Let {O=(0,0,0)} be the position of the Sun and {X_n} be the position of the star-ship after {n} space-hops. We can set {X_0=(R,0,0)} being the initial position of the star-ship. Set {\mathcal{F}_n=\sigma(X_0,X_1,\dots, X_n)}. We have

\displaystyle X_{n}=X_{n-1}+C_n \Theta_n,

where {C_n} is the size of the {n}-th space-hop, which is an {\mathcal F_{n-1}}-measurable random variable. We call the random sequence {(C_n)_{n\ge1}} a strategy.
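As an aside, a uniformly random direction on {\partial B(0,1)} can be sampled by normalizing a standard Gaussian vector in {\mathbb R^3}. A minimal Python sketch of the dynamics {X_n=X_{n-1}+C_n\Theta_n} (the hop size and initial position below are arbitrary illustrations):

```python
import math
import random

def random_direction(rng):
    """Uniform point on the unit sphere, via a normalized 3D Gaussian vector."""
    while True:
        v = (rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
        n = math.sqrt(sum(c * c for c in v))
        if n > 0:  # guard against the (probability-zero) degenerate draw
            return tuple(c / n for c in v)

def hop(position, size, rng):
    """One space-hop: X_n = X_{n-1} + C_n * Theta_n."""
    theta = random_direction(rng)
    return tuple(x + size * t for x, t in zip(position, theta))

rng = random.Random(0)
X = (2.0, 0.0, 0.0)       # initial position, illustrating R = 2
for _ in range(10):
    X = hop(X, 0.5, rng)  # constant strategy C_n = 0.5
```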

Proposition 1. Set {M_n=1/\|X_n\|}. Then {(M_n)_{n\ge0}} is a supermartingale which converges with probability 1 to some {M_{\infty}} as {n\rightarrow\infty}. It is a martingale if and only if {C_n\le \|X_{n-1}\|} for all {n\ge 1}.

Proof: We have

\displaystyle \mathbb E[M_n |\mathcal{F}_{n-1}]= \mathbb E\left[\frac{1}{\|X_{n-1}+C_n\Theta_n\|}|\mathcal{F}_{n-1}\right]=\frac{1}{4\pi}\int_{\partial B(0,1)}\frac{1}{\|X_{n-1}+C_n \theta\|} {\rm d}\sigma(\theta),

where {\sigma} is the Lebesgue surface measure on the unit sphere {\partial B(0,1)}. Applying the mean value property to the function {f(x)=1/\|x\|}, which is harmonic on {\mathbb R^3\setminus\{0\}}, we have for {\|x\|>\rho},

\displaystyle \frac{1}{4\pi}\int_{\partial B(0,1)}\frac{1}{\|x+\rho y\|}\text{d}\sigma(y)= \frac{1}{\|x\|}.

If {\|x\|\le \rho}, by Newton's shell theorem (the potential of a uniform spherical shell is constant inside the shell), we have

\displaystyle \frac{1}{4\pi}\int_{\partial B(0,1)}\frac{1}{\|x+\rho y\|}\text{d}\sigma(y)= \frac{1}{\rho}.

Thus,

\displaystyle \mathbb E[M_n |\mathcal{F}_{n-1}] =\frac{1}{\|X_{n-1}\|\vee C_n} \le \frac{1}{\|X_{n-1}\|}=M_{n-1}.

Hence {(M_n)_{n\ge0}} is a positive supermartingale. By Doob’s convergence theorem, {M_n} tends to some {M_{\infty}} with probability 1. It is also obvious that {(M_n)_{n\ge0}} is a martingale if and only if {C_n\le \|X_{n-1}\|} for all {n\ge 1}. \Box
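The averaged potential formula underlying this proof is easy to check by Monte Carlo: averaging {1/\|x+\rho y\|} over uniform directions {y} should give {1/(\|x\|\vee\rho)}. A small Python sketch (the sample sizes and test points are arbitrary choices):

```python
import math
import random

def sphere_average(x, rho, samples=100_000, seed=42):
    """Monte Carlo estimate of the average of 1/|x + rho*y| over uniform y on the unit sphere."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        v = (rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
        n = math.sqrt(sum(c * c for c in v))
        y = [c / n for c in v]
        total += 1.0 / math.sqrt(sum((xi + rho * yi) ** 2 for xi, yi in zip(x, y)))
    return total / samples

# Both regimes of the formula 1/max(|x|, rho):
outside = sphere_average((2.0, 0.0, 0.0), 1.0)  # |x| > rho: expect about 1/2
inside = sphere_average((0.5, 0.0, 0.0), 1.0)   # |x| < rho: expect about 1
```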

Denote by {\tau=\inf\{n\ge 1: \| X_n\|\le r \}} the first time that the star-ship reaches the Solar System.

Proposition 2. The probability that the star-ship ever reaches the Solar System is strictly smaller than {r/R}, i.e.

\displaystyle \mathbb P(\tau<\infty)< \frac{r}{R}.

Proof: Note that {(M_{n\wedge \tau})_{n\ge 0}} is also a non-negative supermartingale. We thus have

\displaystyle \mathbb E[ M_{n\wedge \tau}]\le \mathbb E[ M_{0}]=\frac{1}{R}.

On the event {\{\tau\le n\}}, we have {M_{\tau}={1}/{\|X_{\tau}\|}\ge1/r}. Therefore,

\displaystyle \mathbb E[ M_{n\wedge \tau}]= \mathbb E[ M_{\tau}\mathbf{1}_{\{\tau\le n\}}]+\mathbb E[ M_{n}\mathbf{1}_{\{\tau> n\}}]\ge \frac{1}{r}\mathbb E[ \mathbf{1}_{\{\tau\le n\}}]= \frac{1}{r} \mathbb P(\tau\le n).

It immediately follows that {\mathbb P(\tau\le n)\le r/R}. Since {\{\tau\le n\}} defines a sequence of increasing events, we have

\displaystyle \mathbb P(\tau<\infty)=\mathbb P\left(\bigcup_{n\ge 1}\{\tau \le n\} \right)=\lim_{n\rightarrow\infty}\mathbb P(\tau\le n)\le \frac{r}{R}.

We now show that {r/R} is actually a strict upper bound for {\mathbb{P}(\tau<\infty)}. Notice that {\mathbb P(\|X_n\|=r,\tau\ge n)=0} for all {n\ge 1}. Indeed, conditionally on {\mathcal{F}_{n-1}}, the position {X_{n}} is uniformly distributed on the sphere {\partial B(X_{n-1},C_n)}. On the event {\{\tau\ge n\}} we have {\|X_{n-1}\|>r>0}, so this sphere is distinct from the sphere {\{x\in\mathbb R^3:\|x\|=r\}}; the intersection of two distinct spheres is at most a circle, which carries zero surface measure. It follows that {\mathbb P(\|X_n\|=r|\mathcal{F}_{n-1})=0} on {\{\tau\ge n\}}, and thus {\mathbb P(\|X_{\tau}\|=r,\tau<\infty)=0}. Therefore, almost surely on the event {\{\tau<\infty\}},

\displaystyle M_{\tau}=\frac{1}{\|X_{\tau}\|}=\sum_{n=1}^{\infty}\frac{1}{\|X_n\|}\mathbf{1}_{\{\tau=n\}}>\frac{1}{r}.

It follows that { M_{\tau}\ge M_{\tau} \mathbf{1}_{\{\tau<\infty\}}> {1}/{r}\mathbf{1}_{\{\tau<\infty\}}} almost surely on {\{\tau<\infty\}}, so that the first inequality below is strict whenever {\mathbb P(\tau<\infty)>0} (when {\mathbb P(\tau<\infty)=0} the claimed strict bound holds trivially). By Fatou’s lemma, we thus have

\displaystyle \frac{1}{r}\mathbb{P}(\tau<\infty)< \mathbb E[M_{\tau}]=\mathbb E\left[\lim_{n\rightarrow\infty}M_{n\wedge\tau}\right]\le \liminf_{n\rightarrow\infty}\mathbb E[M_{n\wedge \tau}]\le \frac{1}{R}.

\Box

Let {m>R} be a fixed integer. Set {\eta_m=\inf\{n\ge 1: \|X_n\|> m\}}.

Proposition 3. Assume that {C_n=\epsilon} with {\epsilon>0} for all {n\ge 1}. Then for all {m}, {\mathbb{P}( \eta_m< \infty)=1}, i.e. with probability 1 the star-ship will reach a position at an arbitrarily large distance from the Sun after a finite number of space-hops.

Proof: Let {X_n^1} be the first coordinate of {X_n}. Note that {X_n^1-R} is a sum of {n} i.i.d. symmetric, non-degenerate random variables {\epsilon\Theta_k^1}, and such a random walk is almost surely unbounded. We thus have

\displaystyle \mathbb P(\eta_m =\infty) = \mathbb P\left(\sup_{n\ge1} \|X_n\|\le m \right)\le \mathbb P\left(\sup_{n\ge1} |X_n^1|\le m \right)=0.

\Box

Proposition 4. For each {\epsilon\in (0,r)}, there exists a strategy {(C_n^{\epsilon})_{n\ge1}} such that

\displaystyle \mathbb P(\tau<\infty)\ge \frac{r-\epsilon}{R}.

Proof: Set {C_n=\epsilon} with {\epsilon\in (0,r)} for all {n\ge 1}. Note that {(M_{n\wedge \tau })_{n\ge 0}} is a positive martingale since {\|X_{n-1}\|> r> \epsilon = C_{n}} for all {n\le \tau}. We also note that for {n\le \tau}, we have {\|X_n\|=\|X_{n-1}+C_n \Theta_n\|\ge \|X_{n-1}\|-C_n\ge r-\epsilon}. Thus for all {n\ge1},

\displaystyle M_{\tau\wedge n}= \frac{1}{\|X_{\tau\wedge n}\|}\le \frac{1}{r-\epsilon}.

Applying the optional stopping theorem to the bounded martingale {(M_{n\wedge\tau})_{n\ge0}} and the stopping time {\eta_m}, which is almost surely finite by Proposition 3, we have

\displaystyle \frac{1}{R}=\mathbb E[M_{0}]=\mathbb E[M_{\tau\wedge \eta_m}]= \mathbb E[M_{\tau} \mathbf{1}_{\{\tau\le \eta_m\}}] +\mathbb E[M_{\eta_m} \mathbf{1}_{\{\tau>\eta_m\}}]\le \frac{1}{r-\epsilon} \mathbb{P}(\tau\le\eta_m) + \frac{1}{m} \mathbb P(\tau>\eta_m).

It immediately follows that

\displaystyle \mathbb{P}(\tau\le \eta_m)\ge \frac{1/R- 1/m}{1/(r-\epsilon)-1/m}.

Taking {m\rightarrow\infty}, we obtain that {\mathbb{P}(\tau<\infty)\ge(r-\epsilon)/R}.

\Box
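Propositions 2 and 4 pin {\mathbb P(\tau<\infty)} for the constant strategy {C_n=\epsilon} between {(r-\epsilon)/R} and {r/R}. As an illustration, here is a rough Monte Carlo sketch in Python (all numerical choices are arbitrary; truncating each walk at a finite number of hops means it only estimates {\mathbb P(\tau\le n)}, which underestimates {\mathbb P(\tau<\infty)}):

```python
import math
import random

def hit_probability(R=2.0, r=1.0, eps=0.5, walks=500, max_hops=2000, seed=7):
    """Fraction of truncated epsilon-hop walks that enter the ball of radius r."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(walks):
        x, y, z = R, 0.0, 0.0
        for _ in range(max_hops):
            gx, gy, gz = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
            n = math.sqrt(gx * gx + gy * gy + gz * gz)
            x += eps * gx / n
            y += eps * gy / n
            z += eps * gz / n
            if x * x + y * y + z * z <= r * r:
                hits += 1
                break
    return hits / walks

p_hat = hit_probability()
```

With {R=2}, {r=1}, {\epsilon=0.5}, the two propositions predict a limiting value between {0.25} and {0.5}; the estimate above should land in that vicinity, up to truncation and sampling error.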


Bellman’s optimality principle


Let us consider a betting game as follows. Let {(X_n)_{n\ge1}} be a sequence of i.i.d. random variables taking values in {\{1,-1\}} such that {\mathbb{P}(X_1=1)=p>\frac12} and {\mathbb{P}(X_1=-1)=1-p=:q>0}. Suppose that at the initial time your capital is {Y_0}. We start a sequence of betting rounds. At time {n}, you either win your stake if {X_n=1}, or you lose it if {X_n=-1}. Let {Y_n} and {C_n} respectively stand for your capital and your stake at time {n}. Then {(Y_n)_{n\ge1}} can be recursively defined by

\displaystyle Y_{n}=Y_{n-1}+ C_n X_n=\left\{ \begin{matrix}Y_{n-1}+C_n& \text{if you win,}\\ Y_{n-1}-C_n & \text{if you lose}.\end{matrix}\right.

Let {\epsilon} be a fixed (small) number in (0,1). Assume that {0\le C_n\le (1-\epsilon)Y_{n-1}} and {C_n} is predictable, i.e. {C_n} is {\mathcal{F}_{n-1}}-measurable with {\mathcal{F}_n=\sigma(X_1,X_2,\dots, X_n)}. You certainly wish to find an optimal strategy of stakes to maximize your fortune. This is equivalent to the maximization of the expected interest rate given by

\displaystyle \mathbb{E}\left[\log\left(\frac{ Y_N}{Y_0}\right)\right],

where {N} is the length of the game. For {n\ge1}, set {R_n={C_n}/{Y_{n-1}}}, which is an {\mathcal{F}_{n-1}}-measurable random variable. The sequence {(R_1,R_2,\dots,R_N)} is called a strategy.

Proposition 1. Let

\displaystyle M_n=\log\left(\frac{Y_n}{Y_0}\right)-n\alpha

with {\alpha=p\log(2p)+q\log(2q)}. Then {(M_n)_{n\ge0}} is a supermartingale.

Proof: We first show that {\mathbb{E}|M_n|<\infty} for all {n}. Indeed, we notice that

\displaystyle M_n=\log\left(\frac{ Y_n}{Y_{n-1}}\right)+\log\left(\frac{Y_{n-1}}{Y_0}\right)-n\alpha=\log(1+R_nX_n)+M_{n-1}-\alpha.

We also have {\epsilon \le 1+R_n X_n\le 2-\epsilon} since {0\le R_n={C_n}/{Y_{n-1}}\le 1-\epsilon}. It immediately follows that

\displaystyle |M_n|\le |\log(1+R_nX_n)|+|M_{n-1}|+\alpha\le |\log(\epsilon )|+|\log(2-\epsilon )|+\alpha+|M_{n-1}| =\alpha+\log\left(\frac{2-\epsilon}{\epsilon}\right)+|M_{n-1}|.

Hence, by the principle of induction, we obtain

\displaystyle |M_n|\le n \left(\alpha+\log\left(\frac{2-\epsilon}{\epsilon}\right) \right),

and thus {\mathbb{E}|M_n|<\infty}. On the other hand, we have

\displaystyle \begin{aligned} \mathbb{E} [M_{n}|\mathcal F_{n-1}] &=\mathbb{E}\left[\log(1+R_nX_n)|\mathcal F_{n-1}\right]-\alpha+M_{n-1}\\ &=p\log(1+R_n)+q\log(1-R_n)-p\log(2p)-q\log(2q)+M_{n-1}\\ &=p\log\left( \frac{1+R_n}{2p}\right)+q\log\left( \frac{1-R_n}{2q}\right)+M_{n-1}. \end{aligned}

Applying Jensen’s inequality {p\log(x)+(1-p)\log(y)\le \log(p x+ (1-p)y)}, we thus have

\displaystyle \mathbb{E} [M_{n}|\mathcal F_{n-1}]\le \log \left(p\cdot\frac{1+R_n}{2p}+q\cdot\frac{1-R_n}{2q}\right)+M_{n-1}=M_{n-1}.

Hence {(M_{n})_{n\ge0}} is a supermartingale. \Box

Proposition 2. Assume that {\epsilon\le 2q}, so that the constant stake fraction {2p-1\le 1-\epsilon} is admissible. Then {(R_1,R_2,\dots,R_N)=(2p-1,\dots, 2p-1)} is an optimal strategy to maximize the expected interest rate.

Proof: Since {(M_{n})_{n\ge0}} is a supermartingale, we have {\mathbb{E}[M_N]\le M_0=0}. Therefore,

\displaystyle \mathbb{E}\left[\log\left(\frac{ Y_N}{Y_0}\right)\right]\le N\alpha.

We note that {(M_{n})_{n\ge0}} is a martingale if

\displaystyle p\log \left(\frac{1+R_n}{2p}\right)+q\log\left(\frac{1-R_n}{2q}\right)=0

for all {n}. This is equivalent to {R_n=2p-1}. Choosing {R_n=2p-1} for all {1\le n\le N}, we must have that {\mathbb{E}[M_N]=M_0=0}. In this case, we obtain

\displaystyle \mathbb{E}\left[\log\left(\frac{ Y_N}{Y_0}\right)\right]= N\alpha.

Hence, {(2p-1,\dots, 2p-1)} is an optimal strategy. \Box
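The optimal fraction {R_n=2p-1} is the classical Kelly criterion. The computation above can be double-checked numerically: the one-round expected log-growth {g(r)=p\log(1+r)+q\log(1-r)} should be maximized at {r=2p-1}, with maximum value {\alpha}. A small Python sketch (the choice {p=0.6} is purely illustrative):

```python
import math

p = 0.6
q = 1 - p

def growth(r):
    """One-round expected log-growth E[log(1 + r*X_1)] for a constant stake fraction r."""
    return p * math.log(1 + r) + q * math.log(1 - r)

kelly = 2 * p - 1                                   # the Kelly fraction
alpha = p * math.log(2 * p) + q * math.log(2 * q)   # the optimal growth rate

# The Kelly fraction beats every other fraction on a fine grid of [0, 1):
grid = [i / 1000 for i in range(0, 999)]
best = max(grid, key=growth)
```

Note that {g(2p-1)=p\log(2p)+q\log(2q)=\alpha} exactly, matching the supermartingale bound in Proposition 1.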


Maximization of discounted capital gain


A businessman has a piece of equipment for sale. Assume that he will be approached by purchasers with offers made in weeks {0, 1, 2,\dots} being {X_0, X_1, X_2,\dots}, where {(X_n)_{n\ge0}} is a sequence of i.i.d. positive random variables with finite mean and probability density function {f_X}. At the same time, the cost of storing this equipment is {c} per week. The current rate of interest per week is {\alpha>0}.

Recall that if you initially have an amount {s_0} of money in your bank account with interest rate {\alpha} per week, then after {n} weeks, your account will increase to {s_n=s_0(1+\alpha)^n.}

If the businessman sells the equipment in week {n},

\displaystyle \pi_n:=(1+\alpha)^{-n}X_n

stands for the discounted profit, that is the amount he can put in the bank in week 0 such that it will increase to {X_n} in week {n}. His discounted cost of storage is equal to

\displaystyle K_n:=\sum_{k=1}^{n}\frac{c}{(1+\alpha)^k}=\frac{c}{\alpha}\left(1-\frac{1}{(1+\alpha)^n}\right),

that is the necessary amount he can put in the bank in week 0 such that he can partially withdraw from it in week {1, 2, \dots, n} to pay for the cost of storage per week. Let

\displaystyle \mu(n):=\mathbb{E}[\pi_n-K_n]= \mathbb{E}\left[\frac{Z_n}{(1+\alpha)^{n}} \right]-\frac{c}{\alpha},

where {Z_n:=X_n+{c}/{\alpha}}. Note that {\mu(n)} is the expected discounted gain when the businessman sells the equipment in week {n}. He certainly wishes to find a strategy maximizing his expected discounted gain, i.e. to find a stopping time {T} such that {\mu(T)\rightarrow \max}.

Proposition 1. Let {\gamma> \frac{c}{\alpha}} be a positive real number such that

\displaystyle \gamma\alpha =\int_{\gamma}^{\infty}\mathbb{P}(Z_n>u){\rm d}u.

Then {\gamma} exists, is unique, and {V_n=(1+\alpha)^{-n}\max\{Z_n,\gamma\}} defines a supermartingale.

Proof: For the first part, note that the left-hand side {\gamma\alpha} is continuous and strictly increasing in {\gamma}, while the right-hand side {\int_{\gamma}^{\infty}\mathbb{P}(Z_n>u){\rm d}u} is continuous, non-increasing and tends to {0}; hence {\gamma} exists and is unique. Moreover, at {\gamma=c/\alpha} the right-hand side equals {\mathbb{E}[Z_0]-c/\alpha=\mathbb{E}[X_0]}, so {\gamma>c/\alpha} indeed holds provided {\mathbb{E}[X_0]>c}, i.e. the mean offer exceeds the weekly storage cost. Let {\mathcal{F}_n:=\sigma(X_0,X_1,\dots,X_n)=\sigma(Z_0,Z_1,\dots,Z_n)} and let {f_Z} be the common probability density function of {(Z_n)_{n\ge0}}. Since {Z_n} is independent of {\mathcal{F}_{n-1}}, we have
\displaystyle \begin{aligned} \mathbb{E}[\max\{Z_n,\gamma\}|\mathcal{F}_{n-1}] &= \mathbb{E}[\max\{Z_n,\gamma\}]\\ &= \mathbb{E}[\gamma \mathbf{1}_{\{Z_n\le \gamma\}}]+\mathbb{E}[Z_n \mathbf{1}_{\{Z_n> \gamma\}}]\\ &= \gamma \mathbb{P}(Z_n\le \gamma)+\int_{\gamma}^{\infty}u f_Z(u){\rm d} u. \end{aligned}
Integrating by parts, we get

\displaystyle \int_{\gamma}^{\infty}u f_Z(u){\rm d} u=\gamma\mathbb{P}(Z_n>\gamma)+\int_{\gamma}^{\infty}\mathbb{P}(Z_n>u){\rm d}u,

and thus

\displaystyle \begin{aligned} \mathbb{E}[V_n|\mathcal{F}_{n-1}] &= \mathbb{E}[V_n]= (1+\alpha)^{-n} \mathbb{E}[\max\{Z_n,\gamma\}]=(1+\alpha)^{-n}\left(\gamma+ \int_{\gamma}^{\infty}\mathbb{P}(Z_n>u){\rm d}u\right)\\ &=(1+\alpha)^{-n+1}\gamma \le (1+\alpha)^{-n+1}\max\{Z_{n-1},\gamma\}=V_{n-1}. \end{aligned}
We also have that {\mathbb{E}[|V_n|]=\mathbb{E}[V_n]\le \mathbb{E}[V_0]=(1+\alpha){\gamma}.} Hence, {(V_n)_{n\ge0}} is a supermartingale. \Box
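To make {\gamma} concrete, suppose (purely for illustration) that the offers are {X_n\sim \mathrm{Exp}(1)}. Then {\mathbb P(Z_0>u)=e^{-(u-c/\alpha)}} for {u\ge c/\alpha}, so the right-hand side of the defining equation is {e^{-(\gamma-c/\alpha)}}, and {\gamma} solves {\gamma\alpha=e^{-(\gamma-c/\alpha)}}. A bisection sketch in Python (the parameter values are arbitrary):

```python
import math

alpha = 0.1  # weekly interest rate (illustrative)
c = 0.1      # weekly storage cost (illustrative)

def residual(g):
    """gamma*alpha minus the integral of P(Z_0 > u) over [gamma, inf), for Exp(1) offers."""
    return g * alpha - math.exp(-(g - c / alpha))

# residual is strictly increasing, negative at c/alpha, positive for large gamma.
lo, hi = c / alpha, 100.0
for _ in range(200):
    mid = (lo + hi) / 2
    if residual(mid) < 0:
        lo = mid
    else:
        hi = mid
gamma = (lo + hi) / 2
```

Here {\mathbb E[X_0]=1>c}, so {\gamma>c/\alpha} as in the proof above.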

Proposition 2. Assume that the businessman sets a target price {\sigma} and sells the equipment the first time he is offered at least this target price. Denote this time by {T(\sigma)=\inf\{n: X_n\ge \sigma \}}. Then, over all target prices, {\mu(T(\sigma))} is maximized at {\sigma=\gamma-\frac{c}{\alpha}.}

Proof: For simplicity, we put {\tau=\sigma+\frac{c}{\alpha}}, so that {T(\sigma)=\inf\{n: Z_n> \tau \}}. We have

\displaystyle \begin{aligned} \mu(T(\sigma))+\frac{c}{\alpha} &=\mathbb{E}\left[\frac{Z_{T(\sigma)}}{(1+\alpha)^{T(\sigma)}} \right]=\sum_{n=0}^{\infty}\mathbb{E}\left[\frac{Z_n}{(1+\alpha)^n}\mathbf{1}_{\{T(\sigma)=n\}} \right]\\ &=\sum_{n=0}^{\infty}\mathbb{E}\left[\frac{Z_n}{(1+\alpha)^n}\mathbf{1}_{\{Z_n>\tau,\ Z_{m} \le \tau\ \forall m\in\{0,1,\dots, n-1\} \}} \right]\\ &= \sum_{n=0}^{\infty}(1+\alpha)^{-n}\mathbb{E} \left[Z_{n}\mathbf{1}_{\{Z_n>\tau\}}\right]\mathbb{P}(Z_{m} \le \tau\ \forall m\in\{0,1,\dots, n-1\})\\ &= \sum_{n=0}^{\infty}(1+\alpha)^{-n} \left(\tau\mathbb{P}(Z_0>\tau)+\int_{\tau}^{\infty}\mathbb{P}(Z_0>u){\rm d}u\right)\mathbb{P}(Z_0\le \tau)^{n}\\ &= \frac{1+\alpha}{1+\alpha-\mathbb{P}(Z_0\le \tau)}\left(\tau\mathbb{P}(Z_0>\tau)+\int_{\tau}^{\infty}\mathbb{P}(Z_0>u){\rm d}u\right), \end{aligned}

where the third equality uses the independence of {Z_n} and {(Z_0,\dots,Z_{n-1})}, the fourth uses the integration by parts identity from the proof of Proposition 1, and the last sums a geometric series. Let

\displaystyle \phi(\tau)=\frac{1+\alpha}{1+\alpha-\mathbb{P}(Z_0\le \tau)}\left(\tau\mathbb{P}(Z_0>\tau)+\int_{\tau}^{\infty}\mathbb{P}(Z_0>u){\rm d}u\right).
We notice that {\tau=\gamma} is the unique solution to the equation {\phi'(\tau)=0}, and one can check that {\phi} attains its maximum at {\tau=\gamma}. Therefore {\mu(T(\sigma))} is maximized at {\sigma=\gamma-{c}/{\alpha}.} Note that {\mu(T(\gamma-{c}/{\alpha}))=\phi(\gamma)-{c}/{\alpha}=\gamma(1+\alpha)-{c}/{\alpha}.} \Box
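Continuing the {\mathrm{Exp}(1)}-offer illustration (hypothetical parameters, with {\gamma} obtained by bisection from {\gamma\alpha=e^{-(\gamma-c/\alpha)}}), one can simulate the threshold strategy and check that the average discounted gain is close to {\gamma(1+\alpha)-c/\alpha}:

```python
import math
import random

alpha, c = 0.1, 0.1
# Bisection for gamma*alpha = exp(-(gamma - c/alpha)), valid for Exp(1) offers.
lo, hi = c / alpha, 100.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mid * alpha - math.exp(-(mid - c / alpha)) < 0:
        lo = mid
    else:
        hi = mid
gamma = (lo + hi) / 2
sigma_star = gamma - c / alpha  # the target price

def discounted_gain(rng):
    """Run the threshold strategy once; return the discounted gain pi_T - K_T."""
    n = 0
    while True:
        x = rng.expovariate(1.0)  # this week's offer, Exp(1)
        if x >= sigma_star:
            disc = (1 + alpha) ** (-n)
            return disc * x - (c / alpha) * (1 - disc)
        n += 1

rng = random.Random(2024)
runs = 20_000
estimate = sum(discounted_gain(rng) for _ in range(runs)) / runs
target = gamma * (1 + alpha) - c / alpha
```

The Monte Carlo average should agree with the closed-form value up to sampling error.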

Proposition 3. The strategy with respect to the target price {\sigma^*=\gamma-\frac{c}{\alpha}} is optimal, i.e. for any stopping time {S} such that {\mathbb{P}(S<\infty)=1}, we have

\displaystyle \mu(S)\le \mu(T(\sigma^*)).

Proof: Let {S} be a stopping time such that {\mathbb{P}(S<\infty)=1}. Since {V_S=\sum_{n= 0}^{\infty}V_n\mathbf{1}_{\{S=n\}}\le \sum_{n=0}^{\infty}V_n}, we note that

\displaystyle \mathbb{E}[V_{S}]\le \sum_{n=0}^{\infty}\mathbb{E}[V_n]=\sum_{n=0}^{\infty} \gamma (1+\alpha)^{-n+1}=\frac{\gamma}{\alpha}(1+\alpha)^2<\infty.

On the other hand, {(V_{S\wedge n})_{n\ge0}} is also a supermartingale. It follows that

\displaystyle \mathbb{E}[V_{S\wedge n}]\le \mathbb{E}[V_0]=(1+\alpha){\gamma}.

Taking {n\rightarrow \infty}, by Fatou’s lemma we obtain {\mathbb{E}[V_S]\le (1+\alpha)\gamma}. Since {Z_S\le \max\{Z_S,\gamma\}}, we have {\mu(S)+{c}/{\alpha}=\mathbb{E}[(1+\alpha)^{-S}Z_S]\le \mathbb{E}[V_S]}. Hence {\mu(S)\le (1+\alpha)\gamma-{c}/{\alpha}=\mu(T(\sigma^*))}. \Box