p^{\ast}$ (platikurtic). We can refine this further by considering a particular $\beta$, e.g., if $\beta=0.975$ (Basel III), then $H(\beta,p)>1$ for $p>0.23$.
%
%%%%%%%%%%%%%
\begin{figure}[htb!]
\caption{Contour plot of $H(\beta,p)$ as a function of its arguments.}%
\label{fig:H.contour.fun}
\begin{center}
\includegraphics[scale=0.9]{Plots/H-contour-col}
\end{center}
\end{figure}
%%%%%%%%%%%%%
%
\nc
\begin{remk}\label{remk-evt-are}
In EVT terminology, Danielsson and Zhou (2016) derived a result for the ARE in (\ref{are-var-cvar}) for heavy-tailed distributions with tail index $\theta>0$ in the \emph{intermediate} case where $\ka/n\rightarrow 1$ as $n\rightarrow\infty$:
\[
\are(\hvaralph,\hcvar) = \frac{1-\beta}{1-\alpha}\cdot\frac{\theta-2}{2(\theta-1)}.
\]
By comparison, recall that in the \emph{central} case, $\ka=[n\beta]$, the number of order statistics at which the summand in \eqref{var-cvar-npes} starts satisfies $\ka/n\rightarrow \beta$ as $n\rightarrow\infty$. This type of convergence is fundamentally different, and in our view more relevant for practical QRM.
\nc
\end{remk}
\nc
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Discussion}\label{sec:discuss}
We established limiting results concerning the ratio of asymptotic variances of the classical empirical estimators of location, tail median vs.~tail mean, in the context of the flexible EPD family of distributions. Most remarkably, in the limit of the right tail the asymptotic variance of the tail median is approximately 36\% larger than that of the tail mean, irrespective of the EPD shape parameter. Equating ``efficiency'' with ``reduction in asymptotic variance'', this translates equivalently into the tail mean being approximately 26\% more efficient than the tail median.
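The headline figures above, and the heavy-tailed threshold implied by the ARE expression in Remark~\ref{remk-evt-are}, reduce to simple arithmetic. The following minimal sketch (illustrative only; the function name \texttt{are\_heavy} is ours, not from any codebase) checks them numerically:

```python
import math

# Tail limit of the ARE: H = e/2, so the tail median's asymptotic variance
# exceeds the tail mean's by H - 1 (about 36%), and the tail mean achieves a
# variance reduction of 1 - 1/H (about 26%).
H_limit = math.e / 2
excess_variance = H_limit - 1        # ~0.36: tail median ~36% more variable
efficiency_gain = 1 - 1 / H_limit    # ~0.26: tail mean ~26% more efficient

# Heavy-tailed case (Danielsson and Zhou, 2016) at the Basel III ratio
# (1 - beta)/(1 - alpha) = 0.025/0.01 = 2.5: the ARE exceeds 1 exactly
# when the tail index theta exceeds 6.
def are_heavy(theta, ratio=2.5):
    return ratio * (theta - 2) / (2 * (theta - 1))
```

For instance, \texttt{are\_heavy(6)} evaluates to exactly $1$, consistent with the $\theta>6$ threshold discussed below.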
The findings also offer a generalization of the solution to the age-old statistical quandary concerning which of the sample median vs.~sample mean is the more efficient estimator of centrality. The central tenet of this paper is that for sufficiently ``light'' tails, the tail mean is a more efficient estimator than the tail median, when the population quantiles that they are estimating coincide. This appears to remain true whether one is in a light-tailed (tail index $\theta=0$) or heavy-tailed (tail index $\theta>0$) regime.
\nc
From a practical perspective, this message may have important repercussions for QRM practitioners with regard to choice of risk measure, VaR or ES, as follows:
\begin{itemize}
\item If the data on hand are believed to follow a light-tailed distribution ($\theta=0$), proceed by fitting an EPD via, e.g., maximum likelihood. Guided by efficiency considerations, the corresponding choice of risk measure in implementing Basel III (where $\alpha=0.99$ and $\beta=0.975$) would then be dictated by ascertaining whether or not the estimated value of the EPD shape parameter, $p$, exceeds $0.23$. For other risk quantile levels $\beta$, the corresponding $\al$ can be determined from Figure \ref{fig:g-fun} or equation (\ref{gb}), whence the appropriate choice can be made from Figure \ref{fig:H.contour.fun} or equation (\ref{H_2}), bearing in mind that for $p>1.4074$ ES is always more efficient.
\item If the data follow a heavy-tailed distribution ($\theta>0$), Remark \ref{remk-evt-are} can be used to determine which of VaR/ES is more efficient, given an estimate of $\theta$. However, relating $\alpha$ and $\beta$ to yield the same value of the risk measures, as we did through the function $g(\beta,p)$ for the light-tailed case of the EPD, still requires distribution-specific knowledge.
Thus, for example, setting $(1-\beta)/(1-\alpha)=2.5\approx e$ as suggested by Basel III, we see that $\are(\hvaralph,\hcvar)>1$ only for $\theta>6$ (Danielsson and Zhou, 2016), and so for really heavy tails it is VaR that is less variable than ES at these specific quantiles. A further problem with this EVT approach is the fact that comparisons are made for the intermediate rather than central quantile case, whereas we argue the latter is more realistic in practical applications, since the probability corresponding to $\text{VaR}_\al$ converges to $\al$ instead of $1$ (Remark~\ref{remk-evt-are}). Rather, the intermediate quantile case is considered merely because it leads to a general tractable solution. \nc
\end{itemize}
There is therefore room for improvement in both approaches. The light-tailed case could benefit from a more general result for $\lim_{p\rightarrow 0}H(1,p)$ that would be independent of the distributional family, of the type mentioned for the heavy-tailed case. On the other hand, the heavy-tailed situation might benefit from an analysis similar to that done for the light-tailed case, where the focus is the central rather than the intermediate quantile case. At present, both extensions would seem to offer substantial analytical challenges.
\nc
\appendix
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Overview of Proof Techniques}\label{app:prelim}
Throughout the proofs in the remaining sections, we will make liberal use of certain mathematical results.
We document the primary ones here in order to add transparency to the proofs.
\begin{itemize}
\item[(i)] ``Big $O$'' and ``little $o$'' notation. For sequences of real numbers $\{u_n\}$ and $\{v_n\}$, recall that $u_n=O(v_n)$ if and only if $u_n/v_n$ is bounded, and $u_n=o(v_n)$ if and only if $\lim_{n\rightarrow\infty}u_n/v_n=0$. Rules for manipulating these can be found in any advanced text. In particular, note that if $x_n=o(u_n)$ and $y_n=o(v_n)$, then $x_n+y_n=o(\max\{u_n,v_n\})$, and $x_ny_n=o(u_nv_n)$.
\item[(ii)] Asymptotic equivalence of (non-random) sequences. For real-valued sequences $a(x)$ and $b(x)$, we write $a(x)\approx b(x)$ if and only if $a(x)/b(x)\rightarrow 1$ as $x \rightarrow\infty$. An equivalent way of stating this definition (which points the way to arithmetic manipulations) is:
\[
a(x)\approx b(x) \qquad\Longleftrightarrow\qquad \frac{a(x)-b(x)}{a(x)}=o(1).
\]
Note however that for sequences bounded away from $0$ and $\infty$ this simplifies: $a(x)\approx b(x)\Leftrightarrow a(x)-b(x)=o(1)$.
\item[(iii)] Establishing limits of ratios. As $x\downarrow 0$, recall from the geometric series expansion that
\[
\frac{1}{1+x} = \frac{1}{1-(-x)} = 1- x + x^2 + o(x^{2}).
\]
Suppose now that $\eta(z)=1+a/z+b/z^2+o(z^{-2})$, where $z\rightarrow\infty$. A standard technique for dealing with limits of ratios is to set $x=a/z+b/z^2+o(z^{-2})$, and to then employ the representation
\be
\label{our-common-trick}
\frac{1}{\eta(z)} = \frac{1}{1+x} = 1-\left(\frac{a}{z}+\frac{b}{z^2}+o(z^{-2})\right) +\frac{a^2}{z^2}+o(z^{-2}) = 1-\frac{a}{z}+\frac{a^2-b}{z^2}+o(z^{-2}).
\ee
\end{itemize}
\nc
%\section{Four Limiting Cases of Interest}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Proof of Lemma~\ref{lemma:H-fun}}\label{app:lemma}
%This follows by carefully combining (\ref{Fn_function}), \nc (\ref{mu_beta}), (\ref{gb}), and
%(\ref{sb2}), using the notational simplification of (\ref{helper-hn}).
Define the numerators ($V_1,V_2$) and denominators ($W_1,W_2$) by writing:
\[
H(\beta,p) =\frac{\alpha(1-\alpha)}{f^2(\varalph)}\cdot \frac{1-\beta}{\sigma_{\beta}^2 + \beta(\mu_{\beta} - \var)^2}\equiv \frac{V_1}{W_1}\cdot\frac{V_2}{W_2} .
\]
Now compute each of these four terms as follows.
\begin{description}
\item[$V_1$:] From \eqref{gb} and \eqref{helper-hn} we have $\alpha = F_0(\mu_{\beta}) = F_0(h_1)$, whence noting that $A_0=1$, we obtain, in view of \eqref{FGA2}, $1-\alpha=A_0-F_0(h_1)=G_0(h_1)$. Putting these together gives: $V_1=\alpha(1-\alpha)=F_0(h_1)G_0(h_1)$.
\item[$V_2$:] By definition, $\beta=F_0(\var)$, whence the fact that $A_0=1$ and \eqref{FGA2} gives: $V_2=1-\beta=A_0-F_0(\var) =G_0(\var)$.
\item[$W_1$:] Since $\varalph=F_0^{-1}(\alpha)$ and, by \eqref{gb}, $\alpha =g(\beta,p) = F_0(h_1)$, we have $\varalph=F_0^{-1}(F_0(h_1))=h_1\geq 0$, the inequality holding because $\varalph=\cvar\geq 0$ (see Figure \ref{fig:var-cvar}). Therefore, substituting this into \eqref{expow-pdf} gives: $\sqrt{W_1}=f(\varalph)=p\exp\{-(h_1)^p\}/[2\Gamma(1/p)]$.
\item[$W_2$:] From \eqref{sb2} and \eqref{helper-hn}, $\sigma_{\beta}^2=h_2-h_1^2$, and from \eqref{mu_beta} and \eqref{helper-hn}, $\cvar-\var=h_1-\var$. These then give: $W_2=\sigma_{\beta}^2+\beta(\cvar-\var)^2=h_2 -h_1^2 + F_0(\B)(h_1 -\B)^2$.
\end{description}
\nc
%\section{Four Limiting Cases of Interest}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Proof of Theorem~\ref{theom:lim-ARE-beta}}\label{app:lims-theom-beta}
%---------------------------------------------------------
To assess the limiting behavior of $H(\beta,p)$, we take the approach of considering the limits of its individual components, separately for the cases $\beta\rightarrow 0$ (Case 1) and $\beta\rightarrow 1$ (Case 2). A key idea is to make the change of variable $\beta = F_0(\B)$ early on in each case.
This sheds light on the connections to the Gamma and incomplete Gamma functions, thus naturally allowing one to invoke the properties and asymptotics of these functions. In all cases, the basic strategy will be to ascertain the limits of each piece of $H(\beta,p)$ as given in Lemma \ref{lemma:H-fun}, which are then combined to obtain the desired result.\nc
\subsection*{Case 1: $\beta \rightarrow 0$ (for fixed $p$) }
We can rewrite (\ref{are-var-cvar}), using $\alpha = g_{\beta}$ so that $\mu_{\beta} = \xi_{\alpha}$, as $H(\beta,p) = P_1 \cdot Q_1$, where
\[
P_1 = \frac{\alpha(1-\alpha)}{f^2(\varalph)}, \ \ \ \ Q_1 = \frac{1-\beta}{\sigma_{\beta}^2 + \beta(\mu_{\beta} - \var)^2}.
\]
If we let $\beta \rightarrow 0$, then $\mu_{\beta} \rightarrow 0$, which is the average value of $F_0^{-1}$ over $(0,1)$. Hence, as $\beta \rightarrow 0$, we have $g_{\beta} \rightarrow 1/2$, which is the value of $F_0$ at $x=0$. Clearly as $\beta \rightarrow 0$, $f(\mu_{\beta}) \rightarrow f(0) = \frac{p}{2\Gamma(1/p)}$. Thus, $P_1 \rightarrow \frac{1}{4}/\left[ \frac{p}{2\Gamma(1/p)} \right]^2$ as $\beta \rightarrow 0$.
\vspace{0.1in}

\noindent Define $\B$ by $F_0(\B) = \beta$. Then, $\beta \rightarrow 0$ implies $\B = F_0^{-1}(\beta) \rightarrow -\infty$. Considering the term $\beta(\mu_{\beta} - \B)^2$ in $Q_1$, and expanding the square, the term $\beta\mu_{\beta}^2$ clearly vanishes (since $\mu_{\beta} \rightarrow 0$ as $\beta \rightarrow 0$), leaving
\ba
\label{limit_1}
\lim_{\beta \rightarrow 0} \beta(\mu_{\beta} - \B)^2 &=& \lim_{\beta \rightarrow 0} -2\mu_{\beta} \beta F_0^{-1}(\beta) + \beta[F_0^{-1}(\beta)]^2 .
\ea
Making a change of variable $\beta = F_0(\B)$ and applying l'H\^{o}pital's rule, we note, for $k=1,2$,
\[
\lim_{\beta \rightarrow 0} \beta[F_0^{-1}(\beta)]^k = \lim_{\B \rightarrow -\infty} F_0(\B)\B^k = \lim_{\B \rightarrow -\infty} \frac{F_0(\B)}{\B^{-k}} = \lim_{\B \rightarrow -\infty} \frac{\frac{p}{2\Gamma(1/p)}e^{-|\B|^p}}{-k\B^{-k-1}} = 0.
\]
Hence, the limit in (\ref{limit_1}) is $0$.
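As a purely numerical illustration of the displayed limit (not part of the proof), one can evaluate $\beta[F_0^{-1}(\beta)]^k$ along a sequence $\beta\downarrow 0$. The sketch below uses SciPy's inverse regularized upper incomplete Gamma function, together with the fact that for $x<0$ the EPD CDF satisfies $F_0(x)=\Gamma(1/p,|x|^p)/[2\Gamma(1/p)]$; the helper name \texttt{beta\_times\_Bk} is ours:

```python
from scipy.special import gammainccinv

# Illustrative check (not part of the proof) that
# beta * [F0^{-1}(beta)]^k -> 0 as beta -> 0 for the EPD.
# For x < 0 the EPD CDF is F0(x) = Gamma(1/p, |x|^p) / [2*Gamma(1/p)]
# (one half of the regularized upper incomplete gamma), so the quantile is
# B = -gammainccinv(1/p, 2*beta)**(1/p).
def beta_times_Bk(beta, p, k):
    B = -gammainccinv(1.0 / p, 2.0 * beta) ** (1.0 / p)
    return beta * B**k

p = 1.5
vals = [beta_times_Bk(b, p, k=2) for b in (1e-2, 1e-4, 1e-6, 1e-8)]
# vals decreases monotonically toward 0, as the l'Hopital argument predicts
```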
Finally, we note that as $\beta \rightarrow 0$, ${\ds \sigma_{\beta}^2 \rightarrow \sigma_0^2 = \int_0^1 [F_0^{-1}(u)]^2 du}$. Making a change of variable, $u = F_0(t)$, we can write
\bes
\sigma_0^2 = \int_{-\infty}^{\infty} \frac{p}{2\Gamma(1/p)} t^2 e^{-|t|^p} dt = \frac{p}{\Gamma(1/p)} \int_0^{\infty} t^2e^{-t^p} dt = \frac{\Gamma(3/p)}{\Gamma(1/p)}.
\ees
Thus, $Q_1 \rightarrow 1/[\Gamma(3/p)/\Gamma(1/p)]$ as $\beta \rightarrow 0$.
\vspace{0.1in}

\noindent Hence, we have as $\beta \rightarrow 0$,
\be
\label{L_1}
\lim_{\beta \rightarrow 0} H(\beta,p) = \lim_{\beta \rightarrow 0} P_1 \cdot Q_1 = \frac{1/4}{\left[\frac{p}{2\Gamma(1/p)}\right]^2}\cdot \frac{1}{\frac{\Gamma(3/p)}{\Gamma(1/p)}} = \frac{1}{p^2} \frac{[\Gamma(1/p)]^3}{\Gamma(3/p)} = \frac{3[\Gamma(1+1/p)]^3}{\Gamma(1+3/p)}.
\ee
%---------------------------------------------------------
\subsection*{Case 2: $\beta \rightarrow 1$ (for fixed $p$) }
We can rewrite (\ref{H_2}) as $H(\beta,p) = P_2 \cdot Q_2$, where
\[
P_2 = \frac{G_0(h_1)G_0(\B)} {\left(\frac{1}{2\Gamma(1/p)}\right)^2\exp\{-2\left(h_1\right)^p\}}, \ \ \ Q_2 = \frac{F_0(h_1)}{p^2[h_2 -\left(h_1\right)^2 + F_0(\B)\left(h_1 -\B \right)^2]}.
\]
Define $\B$ by $F_0(\B) = \beta$. Then, $\beta \rightarrow 1$ implies $\B \rightarrow \infty$. In particular, we have, since $\B > 0$, from (\ref{G_n+}) that
\[
G_n(\B) = \frac{\Gamma(\frac{n+1}{p},\B^p)}{2\Gamma(1/p)} , \qquad h_n = \frac{G_n(\B)}{G_0(\B)} = \frac{\Gamma(\frac{n+1}{p},\B^p)}{\Gamma(1/p,\B^p)}, \qquad\text{for }n = 1,2.
\]
Defining
\be\label{Erdelyi-s}
s(a,z) = 1 + \frac{a -1}{z} + \frac{(a-1)(a-2)}{z^2} + o\left(\frac{1}{z^2}\right),\qquad z \rightarrow \infty,
\ee
we can represent $\Gamma(a,z)$, see Erdelyi~\etal{} (1953, p.~135,~(6)), as
\be
\Gamma(a,z) = \frac{z^a e^{-z}}{z}s(a,z), \qquad z \rightarrow \infty.
\ee
Consequently, we have, using (\ref{Erdelyi-s}) in the second step,
\bas
G_n(\B) &=& \frac{1}{2\Gamma(1/p)}\frac{(\B^p)^{(n+1)/p}e^{-\B^p}}{\B^p} s\left(\frac{n+1}{p},\B^p\right) \\
&=& \frac{1}{2\Gamma(1/p)}\frac{ \B^{n+1} e^{-\B^p}}{\B^p} \left(1 + \frac{(n+1)/p-1}{\B^p} + o\left(\frac{1}{\B^{p}}\right) \right),
\eas
whence, employing the technique of (\ref{our-common-trick}) to deal with the factor $1/G_0(\B)$, we obtain
\ba
h_n &=& \frac{G_n(\B)}{G_0(\B)}= \B^{n}\;s\left(\frac{n+1}{p},\B^p\right)\bigg/ s\left(\frac{1}{p},\B^p\right) \label{my.star.eqn}\\
&=&\B^{n} \left(1 + \frac{n/p}{\B^p} + o\left(\frac{1}{\B^{p}}\right) \right).\nonumber
\ea
In particular,
\bas
G_0(\B) &=& \frac{1}{2\Gamma(1/p)}\frac{\B e^{-\B^p}}{\B^p} \left(1 + \frac{1/p-1}{\B^p} + o\left(\frac{1}{\B^{p}}\right) \right), \\
h_1 &=& \B \left(1 + \frac{1/p}{\B^p} + o\left(\frac{1}{\B^{p}}\right) \right),
\eas
which implies $G_0(\B) \rightarrow 0$, $F_0(\B) \rightarrow 1$, and $h_1 \rightarrow \infty$, as $\B \rightarrow \infty$. Furthermore, we have, once again using (\ref{our-common-trick}), that
\be
G_0(h_1) = \frac{1}{2\Gamma(1/p)}\frac{h_1 e^{-(h_1)^p}}{(h_1)^p} \left(1 + \frac{1/p-1}{h_1^p} + o\left(\frac{1}{h_1^p}\right) \right) .
\ee
Consequently, we have $G_0(h_1) \rightarrow 0$ and $F_0(h_1) \rightarrow 1$ as $\B \rightarrow \infty$.
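The expansion of Erdelyi~\etal{} (1953) used in these calculations is easy to probe numerically. The following sketch (illustrative only; it builds the unregularized $\Gamma(a,z)$ from SciPy's regularized \texttt{gammaincc}) compares $\Gamma(a,z)$ with its two-term approximation $z^{a-1}e^{-z}s(a,z)$:

```python
import math
from scipy.special import gammaincc, gamma

# Illustrative check of Gamma(a, z) ~ z^(a-1) * e^(-z) * s(a, z), where
# s(a, z) = 1 + (a-1)/z + (a-1)*(a-2)/z^2  (Erdelyi et al., 1953).
def upper_gamma(a, z):
    # unregularized upper incomplete gamma: Gamma(a, z) = Q(a, z) * Gamma(a)
    return gammaincc(a, z) * gamma(a)

def erdelyi_approx(a, z):
    s = 1 + (a - 1) / z + (a - 1) * (a - 2) / z**2
    return z ** (a - 1) * math.exp(-z) * s

a, z = 1 / 1.5, 40.0   # a = 1/p with p = 1.5; z plays the role of B^p
ratio = upper_gamma(a, z) / erdelyi_approx(a, z)
# ratio is close to 1; the neglected term is O(z**-3) relative to the leading one
```

At $z=40$ the two expressions agree to well under $0.1\%$, and the agreement improves as $z$ grows, consistent with the $o(z^{-2})$ error term.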
Thus, we have, as $\B \rightarrow \infty$,
\ba
\nonumber
P_2 = \frac{G_0(h_1)G_0(\B)}{(\frac{1}{2\Gamma(1/p)})^2\exp\{-2\left(h_1\right)^p\}} &\approx& \frac{ h_1}{e^{(h_1)^p}(h_1)^p} \frac{\B}{e^{\B^p}\B^p} e^{2(h_1)^p} =(h_1)^{1-p} \B^{1-p} e^{(h_1)^p-\B^p} \nonumber \\
&\approx& \B^{2-2p} e^{(h_1)^p-\B^p}.\label{expo}
\ea
Now, note that for the exponent of the exponential in (\ref{expo}) we have
\bas
(h_1)^p-\B^p &=& \B^p \left(1+\frac{1/p}{\B^p} + o\left(\frac{1}{\B^p}\right) \right)^p -\B^p = \B^p \left(1+p\frac{1/p}{\B^p} + o\left(\frac{1}{\B^p}\right) \right) - \B^p = 1 + o(1),
\eas
whence, $e^{(h_1)^p-\B^p} \rightarrow e$ as $\B \rightarrow \infty$. Thus, we have as $\B \rightarrow \infty$,
\be
\label{P2}
P_2 \approx {\B}^{2-2p} \cdot e .
\ee
Considering the terms in $Q_2$, we have, as $\B \rightarrow \infty$,
\bas
(h_1 - \B)^2 &=& \left[\B\left( 1 + \frac{1/p}{\B^p} + o\left(\frac{1}{\B^p}\right) \right) -\B\right]^2 =\B^2\left(\frac{1/p}{\B^p} + o\left(\frac{1}{\B^p}\right) \right)^2 \\
&=&\frac{\B^{2-2p}}{p^2} \left(1 + o\left(1\right) \right),
\eas
and finally, using (\ref{my.star.eqn}),
\be
h_2 - (h_1)^2 = \B^2\left[ \frac{s(3/p,\B^p)s(1/p,\B^p) - s(2/p,\B^p)s(2/p,\B^p)}{s(1/p,\B^p)s(1/p,\B^p)}\right] .
\ee
Since, using the second order terms in (\ref{Erdelyi-s}),
\bas
s(3/p,z)s(1/p,z) - s(2/p,z)s(2/p,z) &=& \frac{1/p^2}{z^2} + o\left(\frac{1}{z^2}\right), \\
s(1/p,z)s(1/p,z) &=& 1 + \frac{2/p-2}{z} + o\left(\frac{1}{z}\right),
\eas
and, again using (\ref{our-common-trick}) to deal with the inversion of the denominator,
\bes
\frac{s(3/p,z)s(1/p,z) - s(2/p,z)s(2/p,z)}{s(1/p,z)s(1/p,z)} = \frac{1/p^2}{z^2} + o\left(\frac{1}{z^2}\right),
\ees
we have, as $\B \rightarrow \infty$,
\bes
h_2 - (h_1)^2 \approx \frac{\B^{2-2p}}{p^2} .
\ees
%
Thus, we have as $\B \rightarrow \infty$,
\be
\label{Q2}
Q_2 \approx \frac{1}{p^2\left[\frac{{\B}^{2-2p}}{p^2} + \frac{{\B}^{2-2p}}{p^2}\right]} .
\ee
Combining (\ref{P2}) and (\ref{Q2}), we have \nc as $\B \rightarrow \infty$,
\be\label{case2-H-final}
H(\beta,p) = P_2 \cdot Q_2 \approx \frac{[1-0] \B^{2-2p}\cdot e} {p^2\left[\frac{\B^{2-2p}}{p^2} + (1-0)\frac{\B^{2-2p}}{p^2}\right]} ,
\ee
and thus ${\ds H(\beta,p) \rightarrow \frac{e}{2} }$ as $\B \rightarrow \infty$ ($\Leftrightarrow\beta \rightarrow 1$).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%---------------------------------------------------------
%---------------------------------------------------------
%\section{Case 3: $p \rightarrow 0$ (for fixed $\beta$) }
%---------------------------------------------------------
%---------------------------------------------------------
\cb
\section{Discussion of Conjecture~\ref{conj:lim-ARE-p} and Proof of Theorem~\ref{theom:lim-ARE-p}}\label{app:lims-theom-p}
%---------------------------------------------------------
For continuity with Cases 1 and 2 above, let Case 3 denote the limiting result of $p \rightarrow 0$ given by Conjecture~\ref{conj:lim-ARE-p}, and Case 4 the limiting result of $p \rightarrow \infty$ stated in Theorem~\ref{theom:lim-ARE-p}. \nc In Case 3 we are only able to show rigorously that the limiting EPD PDF is zero everywhere, whence it follows that the corresponding CDF converges to $1/2$. For the resulting convergence of $H(\beta,p)$ to zero as $p \rightarrow 0$, we have only graphical evidence. Case 4 is tackled by considering the limiting distribution that results when $p \rightarrow\infty$. The computation of $H(\beta,\infty)$ is then straightforward for the limiting $\mathcal{U}[-1,1]$.
Usage of the (well-defined) generalized quantile function to invert the CDF permits the interchange of limits and integrals, via the Lebesgue Dominated Convergence theorem, thus justifying the $\mathcal{U}[-1,1]$ computation.
\subsection*{Case 3: $p \rightarrow 0$ (for fixed $\beta$)}
Note that the EPD PDF, $f(x;p) = p[2\Gamma(1/p)]^{-1}\exp\{-|x|^p\}$, is the product of the factor $p[2\Gamma(1/p)]^{-1}$, constant in $x$, and the exponential factor $\exp\{-|x|^p\}$, \cb whence it converges (uniformly) to $0$ as $p\rightarrow 0$\nc. This follows from the fact that the exponential factor is uniformly bounded by $1$ for all $x$ and all $p>0$, whereas the constant factor converges to $0$ as $p\rightarrow 0$. Since the total area under the PDF is $1$ (for any $p$), for smaller $p$ the PDF has to spread farther out to capture this total area. Since half the area occurs for $x\in(-\infty,0)$ and half for $x\in(0,\infty)$, this in turn forces the CDF $F(x;p)$ to converge to $1/2$ for each $x$ as $p\rightarrow 0$. It is not obvious how this fact implies that $H(\beta,p) \rightarrow 0$ as $p\rightarrow 0$. A critical complicating factor for constructing an analytical proof for this case is the fact that although $\beta$ is fixed, $\B$ is not fixed (as a function of $p$), and, indeed, exhibits the following limiting behavior:
\[
\lim_{p\rightarrow 0}\B =
\begin{cases}
-\infty, & \text{if }\beta<0.5, \\
0, & \text{if }\beta=0.5, \\
+\infty, & \text{if }\beta>0.5. \\
\end{cases}
\]
Possibilities we investigated included efforts to establish asymptotic expansions for the incomplete Gamma functions appearing in the representations of $G_n(\B)$ and $\sigma^2_{\beta}$. Nevertheless, the result seems to be true, as is apparent from Figure~\ref{fig:H-fun-case3}.
%
%%%%%%%%%%%%%
\begin{figure}[htb!]
\caption{Plot of $H(\beta,p)$ as a function of $p$ for select values of $\beta$.}%
\label{fig:H-fun-case3}
\begin{center}
\includegraphics[scale=0.9]{Plots/H-fun-case3}
\end{center}
\end{figure}
%%%%%%%%%%%%%
%
\nc
%---------------------------------------------------------
\subsection*{Case 4: $p \rightarrow \infty$ (for fixed $\beta$)}
Denote by $f(x;p)$ and $F(x;p)$ the EPD PDF and CDF, respectively, for given shape parameter $p$. Formally define $f(x;\infty) = \lim_{p \rightarrow \infty} f(x;p)$ and $F(x;\infty) = \lim_{p \rightarrow \infty} F(x;p)$. Note that $f(x;\infty) = \frac{1}{2}\indicator{[-1,1]}(x)$ is the PDF of a uniform distribution on $[-1,1]$. Thus, $F(x;\infty) = \frac{1+x}{2}\indicator{[-1,1]}(x) + \indicator{(1,\infty)}(x)$, and we have from the Lebesgue Dominated Convergence theorem (Royden, 1988) that:
\be\label{F-converges-as-p-infty}
\lim_{p \rightarrow \infty}F(x;p) = F(x;\infty), \qquad\text{for }x\in\R.
\ee
\nc
Throughout, we will let $F^{-}(u;\infty)$ denote the generalized inverse of $F(x;\infty)$, also known as the generalized quantile function (Embrechts and Hofert, 2013). Thus, we have $F^{-}(u;\infty) = 2u-1$ for $0 < u \le 1$. Noting (now and for the remainder of the proof) that in the limit when $p=\infty$ we are dealing with a $\mathcal{U}[-1,1]$ distribution, \nc straightforward calculations yield
\[
\mu_{\beta}(\infty) \equiv \mu(\beta,\infty) = \frac{1}{1-\beta}\int_{\beta}^1 F^{-}(u;\infty) du = \beta, \qquad g_{\beta}(\infty) \equiv g(\beta,\infty)= F(\mu(\beta,\infty);\infty) = \frac{1+\beta}{2},
\]
\[
\sigma_{\beta}^2(\infty) \equiv \sigma^2(\beta,\infty) = \frac{1}{1-\beta}\int_{\beta}^1 [F^{-}(u;\infty) - \mu(\beta,\infty)]^2 du = \frac{(1-\beta)^2}{3},
\]
and hence,
\bas
H(\beta,\infty) &\equiv& \frac{g_{\beta}(\infty)(1-g_{\beta}(\infty))}{[f(\mu_{\beta}(\infty);\infty)]^2}\cdot \frac{1-\beta}{\sigma_{\beta}^2(\infty) + \beta[\mu_{\beta}(\infty) - F^{-}(\beta;\infty)]^2} = \frac{1+\beta}{1/3 + \beta}.
\eas
Having thus demonstrated that $H(\beta,\infty)$ exists, we will now show that indeed $\lim_{p\rightarrow\infty} H(\beta,p) = H(\beta,\infty)$, and thus verify Theorem \ref{theom:lim-ARE-p}. \nc Recall from Section \ref{sec:prelim} that
\[
\mu_{\beta} = \mu(\beta,p) =\frac{1}{1-\beta} \int_{\beta}^1 F^{-}(u;p)du, \qquad g_{\beta} = g(\beta,p) =F(\mu(\beta,p);p),
\]
\[
\sigma_{\beta}^2 = \sigma^2 (\beta,p) = \frac{1}{1-\beta} \int_{\beta}^1 [F^{-}(u;p)-\mu(\beta,p)]^2 du.
\]
For $-1 < \delta < 1$, define
\[
e_n(\delta) =\lim_{p \rightarrow \infty} \int_{\delta}^\infty t^n f(t;p) dt,
\]
and note that, since $\lim_{x\rightarrow 0}x\Gamma(x)=\lim_{x\rightarrow 0}\Gamma(1+x)=1$, we have, setting $x=1/p$, that \nc $\lim_{p \rightarrow \infty} \frac{p}{2\Gamma(1/p)} = 1/2$. Since, for $n=0,1,2$, $p \ge 2$, and $t \ge 2$, we have $t^n e^{-t^p} \le e^{-t}$, while the factor $p/[2\Gamma(1/p)]$ is bounded uniformly in $p$, we see that
\be
\label{c4}
e_n(\delta) =\lim_{p \rightarrow \infty} \int_{\delta}^\infty t^n f(t;p) dt = \frac{1}{2} \int_{\delta}^\infty t^n \indicator{[-1,1]}(t) dt = \frac{1-\delta^{n+1}}{2(n+1)}.
\ee
(This follows by noting that the integrand on the left hand side of (\ref{c4}) is bounded by an integrable function of $t$ alone (a multiple of $e^{-t}$ for $t\ge 2$, and a constant for $\delta \le t < 2$), so that, applying the Lebesgue Dominated Convergence theorem, we can bring the limit inside the integral.) \nc Now define $B = B(p)$ by $F(B;p) = \beta$. Then, making a change of variable $u = F(t;p)$ we obtain
\[
\mu_{\beta} = \mu(\beta,p) = \frac{1}{1-\beta} \int_{B(p)}^{\infty} t f(t;p) dt, \qquad g_{\beta} = g(\beta,p) =F(\mu(\beta,p);p),
\]
\[
\sigma_{\beta}^2 = \sigma^2 (\beta,p) = \frac{1}{1-\beta} \int_{B(p)}^{\infty} (t-\mu(\beta,p))^2 f(t;p) dt.
\]
%
Since $e_n(\delta)$ given by (\ref{c4}) is continuous in $\delta$ and (by an analogous argument) $F(x;p)$ is also continuous in $x$, the convergence in \eqref{F-converges-as-p-infty} (in the appropriate topology) for $x=\mu(\beta,p)$, where $0 < \beta <1$, implies that
\[
\lim_{p \rightarrow \infty} B(p) = \lim_{p \rightarrow \infty} F^{-}(\beta;p) = F^{-}(\beta;\infty) = 2\beta -1,
\]
whence, assembling these and earlier results, we have
\bas
\lim_{p \rightarrow \infty} \mu(\beta,p) &=& \lim_{p \rightarrow \infty} \frac{1}{1-\beta} e_1(B(p)) = \frac{1}{1-\beta} e_1(2\beta -1) = \beta, \\
\lim_{p \rightarrow \infty} g(\beta,p) &=& \lim_{p \rightarrow \infty} F(\mu(\beta,p); p) = F(\beta;\infty) = \frac{1+\beta}{2}, \\
\lim_{p \rightarrow \infty} \sigma^2(\beta,p) &=& \lim_{p \rightarrow \infty} \frac{1}{1-\beta} \left[ e_2(B(p)) - 2\mu(\beta,p) e_1(B(p)) + \mu^2(\beta,p) e_0(B(p)) \right] \\
&=& \frac{1}{1-\beta} \left[ e_2(2\beta -1) - 2\beta e_1(2\beta -1) + {\beta}^2 e_0(2\beta -1)\right] = \frac{(1-\beta)^2}{3}.
\eas
Thus, we obtain
\be
\label{L_4}
\lim_{p \rightarrow \infty} H(\beta,p) = H(\beta,\infty) = \frac{1+\beta}{1/3 + \beta}.
\ee
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Conflict of Interest}
The authors declare that they have no conflict of interest.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Acknowledgements}
We are indebted to detailed comments from two anonymous referees that led to vast improvements in the paper.
%---------------------------------------------------------
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\def\cprime{$'$}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
% \MRhref is called by the amsart/book/proc definition of \MR.
\providecommand{\MRhref}[2]{%
  \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{thebibliography}{10000}
\bibitem{AbramStegun1972} Abramowitz, M., Stegun, I.A., eds. (1972). \emph{Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables}, New York: Dover Publications.\nc
\bibitem{AcTa2002} Acerbi, C., and Tasche, D. (2002). ``On the coherence of expected shortfall'', \emph{Journal of Banking \& Finance}, 26, 1487--1503.
\bibitem{Art1999} Artzner P., Delbaen, F., Eber, J., and Heath, D. (1999). ``Coherent measures of risk'', \emph{Mathematical Finance}, 9, 203--228.
\bibitem{Basel2013} Basel III (2013). ``Fundamental review of the trading book: A revised market risk framework'', \emph{Technical report}, Basel Committee on Banking Supervision, October 2013.\nc
\bibitem{CSU2012} Chun, S.Y., Shapiro A., and Uryasev, S. (2012). ``Conditional Value-at-Risk and Average Value-at-Risk: Estimation and Asymptotics'', \emph{Operations Research}, 60, 739--756.
\bibitem{DanielZHou2016} Danielsson, J., and Zhou, C. (2016). ``Why Risk Is So Hard to Measure'', \emph{DNB Working Paper No. 494}, De Nederlandsche Bank NV, Amsterdam (The Netherlands).
\bibitem{DaNa2003} David, H., and Nagaraja, H. (2003). \emph{Order Statistics}, 3rd ed., Hoboken: Wiley.
\bibitem{DuSi2003} Duffie, D., Singleton, K.J. (2003). \emph{Credit Risk: Pricing, Measurement, and Management}, Princeton: Princeton University Press.
\bibitem{EH2013} Embrechts, P., and Hofert, M. (2013). ``A note on generalized inverses'', \emph{Mathematical Methods in Operations Research}, \textbf{77}, 423--432.
\bibitem{EH2014} Embrechts, P., and Hofert, M. (2014). ``Statistics and Quantitative Risk Management for Banking and Insurance'', \emph{Annual Review of Statistics and Its Application}, \textbf{1}, 493--514.
\bibitem{EH2014b} Embrechts, P., Puccetti, G., R{\"u}schendorf, L., Wang, R., and Beleraj, A. (2014). ``An academic response to Basel 3.5'', \emph{Risks}, 2, 25--48. \bibitem{EMOT1} Erdelyi, A., Magnus, W., Oberhettinger, F., and Tricomi, F.G. (1953). \emph{Higher Transcendental Functions}, Vol.~1, New York: McGraw-Hill. \bibitem{FoSc2011} Follmer, H., and Schied, A. (2011). \emph{Stochastic Finance: An Introduction in Discrete Time}, 3rd ed., Berlin: de Gruyter. \bibitem{GiTr2007} Giurcanu, M. and Trindade, A.A. (2007). ``Establishing Consistency of M-Estimators Under Concavity with an Application to Some Financial Risk Measures,'' \emph{Journal of Probability and Statistical Science}, 5, 123--136. \bibitem{GGM98} Gomez, E., Gomez-Villegas, M.A., Marin, J.M.~(1998). ``A multivariate generalization of the power exponential family of distributions'', \textit{Communications in Statistics}, A27, 589--600. \bibitem{Jor2003} Jorion, P. (2003). \emph{Financial Risk Manager Handbook}, 2nd ed., New York: Wiley. \bibitem{LandsValdez2003} Landsman, Z., and Valdez, E. (2003). ``Tail conditional expectations for elliptical distributions'', \emph{North American Actuarial Journal}, 7, 55--71. \bibitem{Maronna2011} Maronna, R. (2011). ``Robust Statistical Methods'', in \emph{International Encyclopedia of Statistical Science}, M.~Lovric (ed.), pp.~1244--1248. Berlin, Heidelberg: Springer. \nc \bibitem{McNFreyEmb2005} McNeil, A.J., Frey, R., and Embrechts, P. (2005). \emph{Quantitative Risk Management: Concepts, Techniques, Tools}, Princeton, NJ: Princeton Univ.~Press. \bibitem{MR2005} Mineo, A.M., and Ruggieri, M.~(2005). ``A software tool for the exponential power distribution: the normalp package'', \emph{Journal of Statistical Software}, 12(4), 1--23. \bibitem{Nad2005} Nadarajah, S.~(2005). ``A generalized normal distribution'', \emph{Journal of Applied Statistics}, 32, 685--694. \bibitem{PfRo2007} Pflug, G., and Romisch, W. (2007). 
\emph{Modeling, Measuring, and Managing Risk}, London: World Scientific. \bibitem{PBM} Prudnikov, A.P., Brychkov, Y.A. and Marichev, O.I. (1986). \emph{Integrals and Series (Volume Two: Special Functions)}, Amsterdam: Overseas Publishers Association. \bibitem{RoUr2000} Rockafellar, R., and Uryasev, S., (2000). ``Optimization of conditional value-at-risk'', \emph{Journal of Risk}, 2, 21--41. \bibitem{Roy1988} Royden, H.L. (1988). \emph{Real Analysis}. Englewood Cliffs: Prentice Hall.\nc \bibitem{Sherman1997} Sherman, M. (1997). ``Comparing the Sample Mean and the Sample Median: An Exploration in the Exponential Power Family'', \emph{The American Statistician}, \textbf{51}, 52--54. \bibitem{Temme} Temme, N.M. (1996). \emph{Special Functions: An Introduction to the Classical Functions of Mathematical Physics}, New York: Wiley. %\bibitem{T1} %Tricomi, F. (1950). \emph{Expansion of the hypergeometric function in series of confluent ones and application to the Jacobi polynomials}, \emph{Commentarii Mathematici Helvetici}, \textbf{25}, 196--204. \bibitem{JoBandF2007} Trindade, A.A., Uryasev, S., Shapiro, A., and Zrazhevsky, G. (2007). ``Financial Prediction with Constrained Tail Risk'', \emph{Journal of Banking and Finance}, 31, 3524--3538. \bibitem{YaYo2002} Yamai, Y., and Yoshiba, T. (2002). ``Comparative Analyses of Expected Shortfall and Value-at-Risk: Their Estimation Error, Decomposition, and Optimization'', \emph{Monetary and Economic Studies (Bank of Japan)}, 20, 87--121. %Abstract: We compare expected shortfall with value-at-risk (VaR) in three aspects: estimation errors, decomposition into risk factors, and optimization. We describe the advantages and the disadvantages of expected shortfall over VaR. We show that expected shortfall is easily decomposed and optimized while VaR is not. We also show that expected shortfall needs a larger size of sample than VaR for the same level of accuracy. \bibitem{YaYo2005} Yamai, Y., and Yoshiba, T. (2005). 
``Value-at-risk versus expected shortfall: A practical perspective'', \emph{Journal of Banking \& Finance}, 29, 997--1015.
\end{thebibliography}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}