diff --git a/disctospec.tex b/disctospec.tex index 2f5f59d4a9aa3880f8e7769887a8df9f399ef3b6..2d83974ca362ff07de86ecaf101f647b8232a26e 100644 --- a/disctospec.tex +++ b/disctospec.tex @@ -8,7 +8,7 @@ \title{Blah Blah Blah} \author{ - Shravas Rao + Jop Bri\"{e}t \and Shravas Rao } \date{\today} @@ -81,9 +81,9 @@ \title{Notes on hypergraph versions of Conlon--Zhao} \maketitle -We attempt to prove that small discrepancy implies small spectral expansion for Cayley hypergraphs over certain types of abelian groups. In the case of regular graphs, this is known to hold for Cayley graphs over any group by a result of Conlan and Zhao, but does not hold for general graphs~\cite{Conlon:2016}. +We attempt to prove that small discrepancy implies small spectral expansion for Cayley hypergraphs over certain types of abelian groups. In the case of regular graphs, this is known to hold for Cayley graphs over any group by a result of Conlon and Zhao, but does not hold for general graphs~\cite{Conlon:2016}. -We start by giving definitions. Because these are just my personal notes, I will let the definition of Cayley hypergraphs be the exact same as that in my paper with Jop, except we let $\mathbf{g} = (0, g, g^2, \ldots, g^{t-1})$ and $q = \mathbf{1}$ for now. In particular, $K = \cay^{(t)}(\Gamma, q, S)$ is a $t$-uniform Cayley hypergraph with edges induced by the set $S \subseteq \Gamma$, and we let $A_K$ be the corresponding adjacency tensor. 
Given a Cayley hypergraph $K$, we say that a sub-hypergraph of $K$, $H = \cay^{(t)}(\Gamma, q, S')$ for some $S' \subseteq S$ has spectral expansion $\lambda$ with respect to $K$ if \[\lambda \geq \|A_H-A_K\|_{\ell_t, \ldots, \ell_t}.\] We say that $H$ has discrepancy $\epsilon$ with respect to $K$ if for all $T_1, \ldots, T_t \subseteq \Gamma$, \[|A_H(1_{T_1}, \ldots, 1_{T_t})-A_K(1_{T_1}, \ldots, 1_{T_t})| \leq \epsilon |\Gamma|.\] We can lower bound discrepancy in terms of the $\ell_{\infty}, \ldots, \ell_{\infty}$-norm of $A_H-A_K$ as stated in the following Lemma (the upper bound is trivial). The proof follows almost exactly that of the same fact for matrices in Lemma 2.1 in~\cite{Conlon:2016}. +We start by giving definitions. Because these are just my personal notes, I might reuse a lot of the definitions from my paper with Jop without stating them fully. In particular, let the definition of Cayley hypergraphs be the same, but let $\mathbf{g} = (0, g, g^2, \ldots, g^{t-1})$ and $q = \mathbf{1}$ be fixed for now. Thus $K = \cay^{(t)}(\Gamma, q, S)$ is a $t$-uniform Cayley hypergraph with edges induced by the set $S \subseteq \Gamma$, and we let $A_K$ be the corresponding adjacency tensor. Given a Cayley hypergraph $K$, we say that a sub-hypergraph $H = \cay^{(t)}(\Gamma, q, S')$ of $K$, for some $S' \subseteq S$, has spectral expansion $\lambda$ with respect to $K$ if \[\lambda \geq \|A_H-A_K\|_{\ell_t, \ldots, \ell_t}.\] We say that $H$ has discrepancy $\epsilon$ with respect to $K$ if for all $T_1, \ldots, T_t \subseteq \Gamma$, \[|A_H(1_{T_1}, \ldots, 1_{T_t})-A_K(1_{T_1}, \ldots, 1_{T_t})| \leq \epsilon |\Gamma|.\] We can lower bound discrepancy in terms of the $\ell_{\infty}, \ldots, \ell_{\infty}$-norm of $A_H-A_K$ as stated in the following lemma (the upper bound is trivial). The proof follows almost exactly that of the same fact for matrices in Lemma 2.1 in~\cite{Conlon:2016}. 
\begin{lem}\label{lem:disctoinfty} If $A$ is an $n \times \cdots \times n$ $t$-linear form, there exist $T_1, \ldots, T_t \subseteq [n]$ such that \[\|A\|_{\ell_{\infty}, \ldots, \ell_{\infty}} \leq \pi^t|A(1_{T_1}, \ldots, 1_{T_t})|\] @@ -125,9 +125,9 @@ where $\Delta_{h}$ is the multiplicative derivative The following lemma relates the Gowers norm to \snote{not sure how to describe this}. We state the form given in~\cite{Bhatt:2016}. \begin{lem}[Generalized von Neumann inequality]\label{lem:genvn} -Let $(\mathcal L_1, \ldots, \mathcal L_{t+1})$ be a collection of affine-linear forms $\mathcal L_i: (\mathbb{F}_p^n)^m \rightarrow \mathbb{F}_p^n)^m$ for some prime $p$ such that no form is a multiple of another. Then for $f_1, \ldots, f_{t+1}: \mathbb{F}_p^n)^m \rightarrow \mathbb{C}$, +Let $(\mathcal L_1, \ldots, \mathcal L_{t+1})$ be a collection of affine-linear forms $\mathcal L_i: (\mathbb{F}_p^n)^m \rightarrow \mathbb{F}_p^n$ for some prime $p$ such that no form is a multiple of another. Then for $f_1, \ldots, f_{t+1}: \mathbb{F}_p^n \rightarrow \mathbb{C}$, \[ -|\mathbb{E}_{z_1, \ldots, z_m \in \mathbb{F}_p^n)^m} \prod_{i=1}^{t+1} f_i(\mathcal L_i(z_1, \ldots, z_m))| \leq \min_{1 \leq i \leq t+1} \|f_i\|_{U^t}. +|\mathbb{E}_{z_1, \ldots, z_m \in \mathbb{F}_p^n} \prod_{i=1}^{t+1} f_i(\mathcal L_i(z_1, \ldots, z_m))| \leq \min_{1 \leq i \leq t+1} \|f_i\|_{U^t}. \] \end{lem} @@ -165,38 +165,41 @@ Let $\delta > 0$ and $s \geq 0$. Then there exists an $\epsilon = \epsilon_{\de Given the structure of non-classical polynomials that take on values in $\mathbb{T}$ described above, we have the following lemma. \begin{lem}\label{lem:decomp} -Let $P: \mathbb{F}_p^n \rightarrow \mathbb{T}$ be a non-classical polynomial of degree $d < p$. Then there exist non-classical polynomials $P_0, \ldots, P_{d}$ such that +Let $P: \mathbb{F}_p^n \rightarrow \mathbb{T}$ be a non-classical polynomial of degree $d < p$. 
Then there exist non-classical polynomials $P_0, \ldots, P_{d}: \mathbb{F}_p^n \rightarrow \mathbb{T}$ of degree $d$ such that \[ -\sum_{j=0}^d P_i(x+jy) = P(y) +\sum_{j=0}^d P_j(x+jy) = P(y) \] for all vectors $x, y \in \mathbb{F}_p^n$. \end{lem} \snote{This is the lemma to change if we want to make our statement more general, I think} \begin{proof} -Decompose $P$ into a sum of linear forms as in Lemma~\ref{lem:classtonon}. For each $T_i$, we show that there exist $\alpha_0, \ldots, \alpha_{d}$ such that +Decompose $P$ into a sum of multilinear forms $T_0, \ldots, T_d$ as in Lemma~\ref{lem:classtonon}. For each $T_i$, we show that there exist $\alpha^i_0, \ldots, \alpha^i_{d}$ such that \begin{equation} -\sum_{j=0}^d \alpha_j T_i(x+jy, \ldots, x+jy) = T_i(y, \ldots, y).\label{eq:need} +\sum_{j=0}^d \alpha^i_j T_i(x+jy, \ldots, x+jy) = T_i(y, \ldots, y).\label{eq:need} \end{equation} -This is enough to construct the $P_i$ desired. +We can then construct $P_j$ satisfying the lemma by setting +\[ +P_j(x) = \sum_{i=0}^d \alpha^i_j T_i(x), +\] +since summing~\eqref{eq:need} over $i$ gives $\sum_{j=0}^d P_j(x+jy) = \sum_{i=0}^d T_i(y, \ldots, y) = P(y)$. Because $T_i$ is an $i$-linear form, we can rewrite the left-hand side of~\eqref{eq:need} as \[ -\sum_{j=0}^d \sum_{s \in \{0, 1\}^i} \alpha_j T_i((1-s_1)x+js_1y, \ldots, (1-s_i)x+js_iy) = -\sum_{j=0}^d \sum_{s \in \{0, 1\}^i} \alpha_j j^{|s|}T_i((1-s_1)x+s_1y, \ldots, (1-s_i)x+s_iy) +\sum_{j=0}^d \sum_{s \in \{0, 1\}^i} \alpha^i_j T_i((1-s_1)x+js_1y, \ldots, (1-s_i)x+js_iy) = +\sum_{j=0}^d \sum_{s \in \{0, 1\}^i} \alpha^i_j j^{|s|}T_i((1-s_1)x+s_1y, \ldots, (1-s_i)x+s_iy) \] where $|s|$ denotes the Hamming weight of $s$. Then~\eqref{eq:need} holds if for all $0 \leq k < i$, \[ -\sum_{j=0}^d \alpha_j j^{k} = 0 +\sum_{j=0}^d \alpha^i_j j^{k} = 0 \] and \[ -\sum_{j=0}^d \alpha_j j^{i} = 1. +\sum_{j=0}^d \alpha^i_j j^{i} = 1. \] -This is a system of linear equations whose coefficients come from the first $i+1$ rows of a $(d+1) \times (d+1)$ Vandermonde matrix. 
Because $d < p$, the determinant of such a matrix is non-zero and thus the matrix is invertible and $\alpha_j$ exist as desired. +This is a system of linear equations whose coefficients come from the first $i+1$ rows of a $(d+1) \times (d+1)$ Vandermonde matrix. Because $d < p$, the determinant of such a matrix is non-zero, so its rows are linearly independent and the $\alpha^i_j$ exist as desired. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:disctospec}] -We prove the contrapositive. Assume that $\cay^{(t)}(\Gamma, q, S')$ does not have spectral expansion $\lambda$, that is that there exist vectors $x_1, \ldots, x_t$ such that $\|x_i\|_{\ell_p} = 1$ for all $i$, but $(A_H-A_K)(x_1, \ldots, x_t) \geq \lambda$. Let $1_{H, K} = \left(\frac{1}{|S'|}-\frac{1}{|S|}\right)(1_{S'}-1_{S})$. Then +We prove the contrapositive. Assume that $\cay^{(t)}(\Gamma, q, S')$ does not have spectral expansion $\lambda$; that is, there exist vectors $x_1, \ldots, x_t$ such that $\|x_i\|_{\ell_t} = 1$ for all $i$, but $(A_H-A_K)(x_1, \ldots, x_t) \geq \lambda$. Let $1_{H, K} = \left(\frac{1}{|S'|}-\frac{1}{|S|}\right)(1_{S'}-1_{S})$. Then \begin{equation} (A_H-A_K)(x_1, \ldots, x_t) = \frac{1}{t!} \sum_{\sigma \in S_t} \mathbb{E}_{u, g}[x_1(ug^{\sigma(1)-1})x_2(ug^{\sigma(2)-1})\cdots x_t(ug^{\sigma(t)-1})1_{H, K}(g)]. \label{eq:unravel} \end{equation}
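As a sanity check on the Vandermonde step in the proof of Lemma~\ref{lem:decomp} (a worked example of my own, not part of the diff above): for $d = i = 2$, assuming $p > 2$ so that $1/2$ exists in $\mathbb{F}_p$, the system can be solved by hand.

```latex
% Worked example for d = i = 2 (requires p > 2 so that 1/2 exists mod p).
% The system for (\alpha^2_0, \alpha^2_1, \alpha^2_2) reads
%   k = 0:      \alpha_0 +   \alpha_1 +   \alpha_2 = 0
%   k = 1:      0\alpha_0 + 1\alpha_1 + 2\alpha_2 = 0
%   k = i = 2:  0\alpha_0 + 1\alpha_1 + 4\alpha_2 = 1
% with solution \alpha_0 = 1/2, \alpha_1 = -1, \alpha_2 = 1/2, so that
\[
\tfrac{1}{2}\,T_2(x, x) - T_2(x+y,\, x+y) + \tfrac{1}{2}\,T_2(x+2y,\, x+2y)
  = T_2(y, y),
\]
% since, expanding each T_2(x+jy, x+jy) by bilinearity, the T_2(x,x) terms
% carry coefficient 1/2 - 1 + 1/2 = 0 (the k = 0 equation), the cross terms
% T_2(x,y), T_2(y,x) carry 0(1/2) + 1(-1) + 2(1/2) = 0 (the k = 1 equation),
% and the T_2(y,y) term carries 0(1/2) + 1(-1) + 4(1/2) = 1 (the k = i one).
```

This matches the general claim: only the first $i+1 = 3$ rows of the $3 \times 3$ Vandermonde matrix with nodes $0, 1, 2$ are used, and its determinant is non-zero mod $p$ exactly when $d < p$.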