Chapter 10 Thin \(T\)-Module, II

Wednesday, February 10, 1993

Let \(\Gamma = (X, E)\) be any graph.

Fix a vertex \(x\in X\). Write \(E^*_i \equiv E^*_i(x)\), let \(T\equiv T(x)\) denote the subconstituent algebra over \(\mathbb{C}\), and let \(V = \mathbb{C}^{|X|}\) denote the standard module.

Lemma 10.1 With the above notation, let \(W\) denote a thin irreducible \(T(x)\)-module with endpoint \(r\) and diameter \(d\). Let \[\begin{align} a_i & = a_i(W) \quad (0\leq i \leq d)\\ x_i & = x_i(W) \quad (1\leq i \leq d)\\ p_i & = p_i(W) \quad (0\leq i \leq d+1) \end{align}\] be as in Lemma 9.1, and let \(m = m_W\) denote the associated measure. Then:

\((i)\) \(p_0, \ldots, p_{d+1}\) are orthogonal with respect to \(m\), i.e.,

\[\sum_{\theta\in \mathbb{R}}p_i(\theta)p_j(\theta)m(\theta) = \delta_{ij}x_1x_2\cdots x_i \quad (0\leq i,j\leq d+1) \text{ with }\; x_{d+1}=0.\]

\(\quad (ia)\) \({\displaystyle \sum_{\theta\in \mathbb{R}}p_i(\theta)^2m(\theta) = x_1\cdots x_i \quad (0\leq i\leq d)}\).
\(\quad (ib)\) \({\displaystyle \sum_{\theta\in \mathbb{R}}m(\theta) = 1}\).
\(\quad (ii)\) \({\displaystyle \sum_{\theta\in \mathbb{R}}p_i(\theta)^2\theta m(\theta) = x_1\cdots x_ia_i \quad (0\leq i\leq d)}\).

Proof. Pick \(0\neq w_0\in E^*_rW\). Set \[w_i = p_i(A)w_0 \in E^*_{r+i}W.\] Since \(E^*_iW\) and \(E^*_jW\) are orthogonal if \(i\neq j\),

\[\begin{align} \delta_{ij}\|w_i\|^2 & = \langle w_i, w_j\rangle\\ & = \langle p_i(A)w_0, p_j(A)w_0\rangle\\ & = \left\langle p_i(A)\left(\sum_{\ell=0}^R E_\ell\right)w_0, p_j(A)\left(\sum_{\ell=0}^R E_\ell\right)w_0\right\rangle\\ & = \left\langle \sum_{\ell=0}^R p_i(\theta_\ell)E_\ell w_0, \sum_{\ell=0}^R p_j(\theta_\ell)E_\ell w_0\right\rangle && (\text{as } AE_\ell = \theta_\ell E_\ell)\\ & = \sum_{\ell=0}^R p_i(\theta_\ell)\overline{p_j(\theta_\ell)}\|E_\ell w_0\|^2\\ & \qquad\qquad (\text{as } \; p_j\in \mathbb{R}[\lambda], \quad \theta_\ell\in \mathbb{R}, \quad m(\theta_\ell)\|w_0\|^2 = \|E_\ell w_0\|^2)\\ & = \sum_{\theta\in \mathbb{R}} p_i(\theta)p_j(\theta)m(\theta)\|w_0\|^2. \end{align}\] Now we are done by Lemma 9.2, as \[\|w_i\|^2 = \|w_0\|^2 x_1x_2\cdots x_i.\] For \((ia)\), set \(i = j\), and for \((ib)\), set \(i=j=0\).

\((ii)\) We have \[\begin{align} \langle w_i,Aw_i\rangle & = \langle w_i, w_{i+1} + a_iw_i + x_i w_{i-1}\rangle\\ & = \overline{a_i}\|w_i\|^2\\ & = a_i x_1\cdots x_i\|w_0\|^2, \end{align}\] as \(a_i\in \mathbb{R}\) by Lemma 9.1.

Also, \[\begin{align} \langle w_i, Aw_i\rangle & = \langle p_i(A)w_0, Ap_i(A)w_0\rangle \\ & = \left\langle p_i(A)\left(\sum_{\ell=0}^R E_\ell\right)w_0, A p_i(A)\left(\sum_{\ell=0}^R E_\ell\right)w_0\right\rangle && (\text{ as in $(i)$})\\ & = \sum_{\ell = 0}^R p_i(\theta_\ell)^2 \theta_\ell \|E_\ell w_0\|^2\\ & = \sum_{\theta\in \mathbb{R}}p_i(\theta)^2\theta m(\theta)\|w_0\|^2. \end{align}\] Comparing the two expressions for \(\langle w_i, Aw_i\rangle\) gives \((ii)\).
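As a numerical sanity check of these relations, the sketch below (Python with numpy; the parameters \(a_i\), \(x_i\) are hypothetical, not taken from any particular graph) builds the symmetrized form of the tridiagonal matrix \(L\) from the HS MEMO at the end of this chapter. It uses the standard fact about Jacobi matrices that the squared top entries of the unit eigenvectors give the orthogonality weights \(m(\theta)\).

```python
import numpy as np

# Hypothetical parameters (any real a_i and positive x_i will do).
a = np.array([2.0, 1.0, 0.5])      # a_0, ..., a_d  (so d = 2)
x = np.array([3.0, 1.5])           # x_1, ..., x_d
d = len(a) - 1

# Symmetrized form of the tridiagonal matrix L (see the memo below):
# conjugation by a diagonal matrix replaces the pairs (1, x_i) by
# (sqrt(x_i), sqrt(x_i)) without changing the spectrum.
J = np.diag(a) + np.diag(np.sqrt(x), 1) + np.diag(np.sqrt(x), -1)
theta, U = np.linalg.eigh(J)       # d + 1 distinct eigenvalues
m = U[0, :] ** 2                   # weights m(theta); sum(m) == 1 gives (ib)

# p_0, ..., p_{d+1} via the three-term recurrence of Lemma 9.1.
def p(i, t):
    if i == 0:
        return np.ones_like(t)
    if i == 1:
        return t - a[0]
    return (t - a[i - 1]) * p(i - 1, t) - x[i - 2] * p(i - 2, t)

# (i): sum over theta of p_i p_j m equals delta_ij x_1 ... x_i, where
# x_{d+1} = 0 (the i = j = d+1 case vanishes: p_{d+1} = 0 on the support).
for i in range(d + 2):
    for j in range(d + 2):
        lhs = np.sum(p(i, theta) * p(j, theta) * m)
        rhs = np.prod(x[:i]) if (i == j and i <= d) else 0.0
        assert np.isclose(lhs, rhs)

# (ii): sum over theta of p_i^2 theta m equals x_1 ... x_i a_i.
for i in range(d + 1):
    assert np.isclose(np.sum(p(i, theta) ** 2 * theta * m),
                      np.prod(x[:i]) * a[i])
```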

Lemma 10.2 With the above notation, let \(W\) be a thin irreducible \(T(x)\)-module with measure \(m\). Then \(m\) determines the diameter \(d = d(W)\) and \[\begin{align} a_i & = a_i(W) \quad (0\leq i\leq d)\\ x_i & = x_i(W) \quad (1\leq i\leq d)\\ p_i & = p_i(W) \quad (0\leq i\leq d+1). \end{align}\]

Proof. Note that \(d+1\) is the number of \(\theta\in \mathbb{R}\) such that \(m(\theta)\neq 0\): the nonzero vectors \(E_\ell w_0\) form a basis of \(W\), and \(\dim W = d+1\). Hence \(m\) determines \(d\).

Apply \((ia)\), \((ib)\), and \((ii)\) of Lemma 10.1, alternating with the recurrence of Lemma 9.1. \[\begin{align} & \sum_{\theta\in\mathbb{R}}m(\theta) = 1 && p_0 =1\\ & \sum_{\theta\in\mathbb{R}}\theta m(\theta) = a_0 && \to a_0, \quad p_1 = \lambda - a_0\\ & \sum_{\theta\in\mathbb{R}}p_1(\theta)^2 m(\theta) = x_1 && \to x_1\\ & \sum_{\theta\in\mathbb{R}}p_1(\theta)^2\theta m(\theta) = x_1a_1 && \to a_1\\ & \qquad p_2 = (\lambda - a_1)p_1 - x_1p_0\\ & \sum_{\theta\in\mathbb{R}}p_2(\theta)^2 m(\theta) = x_1x_2 && \to x_2\\ & \sum_{\theta\in\mathbb{R}}p_2(\theta)^2\theta m(\theta) = x_1x_2a_2 && \to a_2\\ & \qquad p_3 = (\lambda-a_2)p_2 - x_2p_1\\ & \qquad\qquad \vdots\\ & \sum_{\theta\in\mathbb{R}}p_d(\theta)^2 m(\theta) = x_1x_2\cdots x_d && \to x_d\\ & \sum_{\theta\in\mathbb{R}}p_d(\theta)^2\theta m(\theta) = x_1x_2\cdots x_da_d && \to a_d\\ & \qquad p_{d+1} = (\lambda-a_d)p_d - x_dp_{d-1}.\\ \end{align}\] This proves the assertions.
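To make the recursion concrete, here is a minimal sketch (Python with numpy; the function name is mine) recovering \(d\), the \(a_i\), and the \(x_i\) from the support and weights of \(m\), exactly as displayed above. Applied to the \(\theta\), \(m\) from the sketch after Lemma 10.1, it returns the original parameters.

```python
import numpy as np

def parameters_from_measure(theta, m):
    """Recover d, a_0..a_d, x_1..x_d from a measure m, as in Lemma 10.2.

    theta : the d + 1 points with m(theta) != 0
    m     : the corresponding weights (summing to 1)
    """
    theta, m = np.asarray(theta, float), np.asarray(m, float)
    d = len(theta) - 1              # d + 1 = |supp m|
    p_prev = np.zeros_like(theta)   # p_{-1} = 0
    p_cur = np.ones_like(theta)     # p_0   = 1
    a, x = [], []
    norm = 1.0                      # running product x_1 ... x_i
    for i in range(d + 1):
        # (ii):  sum p_i^2 theta m = (x_1 ... x_i) a_i   ->  a_i
        a.append(np.sum(p_cur ** 2 * theta * m) / norm)
        if i == d:
            break
        # Lemma 9.1:  p_{i+1} = (lambda - a_i) p_i - x_i p_{i-1}
        p_prev, p_cur = p_cur, ((theta - a[i]) * p_cur
                                - (x[-1] if x else 0.0) * p_prev)
        # (ia):  sum p_{i+1}^2 m = x_1 ... x_{i+1}   ->  x_{i+1}
        new_norm = np.sum(p_cur ** 2 * m)
        x.append(new_norm / norm)
        norm = new_norm
    return d, a, x
```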

Corollary 10.1 With the above notation, let \(W\), \(W'\) denote thin irreducible \(T(x)\)-modules. The following are equivalent.

\((i)\) \(W\), \(W'\) are isomorphic as \(T\)-modules.
\((ii)\) \(r(W) = r(W')\) and \(m_W = m_{W'}\).
\((iii)\) \(r(W) = r(W')\), \(d(W) = d(W')\), \(a_i(W) = a_i(W')\) \(\ (0\leq i\leq d)\), and \(x_i(W) = x_i(W')\) \(\ (1\leq i\leq d)\).

Proof. \((i)\Rightarrow (iii)\) Write \(r\equiv r(W)\), \(r' \equiv r(W')\), \(d \equiv d(W)\), \(d' \equiv d(W')\), \(a_i \equiv a_i(W)\), \(a_i' \equiv a_i(W')\), \(x_i \equiv x_i(W)\) and \(x_i' \equiv x_i(W')\).

Let \(\sigma: W\to W'\) denote an isomorphism of \(T\)-modules. (See Definition 5.1.)

For every \(i\), \[\sigma E^*_iW = E^*_i\sigma W = E^*_iW'.\] So, \(r = r'\) and \(d = d'\).

To show \(a_i = a_i'\), pick \(w\in E^*_{r+i}W \setminus \{0\}\). Then, \[E^*_{r+i}AE^*_{r+i}\sigma (w) = \sigma(E^*_{r+i}AE^*_{r+i}w) = \sigma(a_iw) = a_i\sigma(w), \] and \(\sigma w\neq 0\). So, \[\begin{align} a_i & = \text{eigenvalue of $E^*_{r+i}AE^*_{r+i}$ on $E^*_{r+i}W'$}\\ & = a_i'. \end{align}\] The proof that \(x_i = x_i'\) is similar; see the memo below.

HS MEMO

Pick \(w\in E^*_{r+i-1}W \setminus \{0\}\); then \[E^*_{r+i-1}AE^*_{r+i}AE^*_{r+i-1}\sigma(w) = \sigma(E^*_{r+i-1}AE^*_{r+i}AE^*_{r+i-1}w) = x_i\sigma(w).\] Hence \(x_i\) is also the eigenvalue of \(E^*_{r+i-1}AE^*_{r+i}AE^*_{r+i-1}\) on \(E^*_{r+i-1}W'\), so \(x_i = x_i'\).

\((iii)\Rightarrow (i)\)

Pick \(0\neq w_0\in E^*_rW\) and \(0\neq w_0'\in E^*_rW'\). Let \(p_i\) be as in Lemma 9.1; since \(a_i = a_i'\) and \(x_i = x_i'\), we have \(p_i = p_i'\). Set \[\begin{align} w_i & = p_i(A)w_0\in E^*_{r+i}W \quad (0\leq i\leq d), \\ w_i' & = p_i(A)w_0' \in E^*_{r+i}W' \quad (0\leq i\leq d). \end{align}\]

Define a linear transformation \[\sigma: W \to W', \quad w_i \mapsto w_i'.\] Since \(\{w_i\}\) and \(\{w_i'\}\) are bases and \(d = d'\), \(\sigma\) is an isomorphism of vector spaces.

We need to show \[a\sigma = \sigma a \quad (\text{for all }\; a\in T).\] Since \(T\) is generated by \(A\) and \(E^*_0, \ldots, E^*_{d(x)}\), it suffices to check this on the generators. Take \(a = E^*_j\) for some \(j\) \((0\leq j\leq d(x))\). Then for all \(i\), we have \[E^*_j \sigma w_i = E^*_jw_i' = \delta_{j,r+i}w_i',\] \[\sigma E^*_jw_i = \delta_{j,r+i}\sigma(w_i) = \delta_{j,r+i}w_i'.\] Hence \(E^*_j\sigma = \sigma E^*_j\). Next take \(a = A\), the adjacency matrix. Then, \[A\sigma w_i = Aw_i' = w_{i+1}' + a_i'w_i' + x_i'w_{i-1}' = \sigma(w_{i+1} + a_iw_i + x_iw_{i-1}) = \sigma Aw_i,\] as \(a_i = a_i'\) and \(x_i = x_i'\). Hence \(\sigma\) is an isomorphism of \(T\)-modules.

\((ii)\Rightarrow (iii)\) This is Lemma 10.2.

\((iii)\Rightarrow (ii)\) Given \(d\), the \(a_i\), and the \(x_i\), we can compute the polynomial sequence \[p_0, p_1, \ldots, p_{d+1}\] for \(W\) via the recurrence of Lemma 9.1.

We show that \(p_0, p_1, \ldots, p_{d+1}\) determine \(m = m_W\). Set \[\Delta = \{\theta\in \mathbb{R}\mid p_{d+1}(\theta) = 0\}.\]

Observe: \(|\Delta| = d+1\). See ‘An Introduction to Interlacing’.

By Lemma 10.1 \((i)\) with \(i = j = d+1\), \(\sum_{\theta}p_{d+1}(\theta)^2m(\theta) = x_1\cdots x_{d+1} = 0\); since \(m(\theta)\geq 0\), this forces \(m(\theta) = 0\) for all \(\theta\not\in\Delta\). So it suffices to find \(m(\theta)\) for \(\theta\in \Delta\).

By Lemma 10.1 \((i)\) with \(j = 0\), \[ \begin{cases} \sum_{\theta\in\Delta} m(\theta)p_0(\theta) & = 1,\\ \sum_{\theta\in\Delta} m(\theta)p_1(\theta) & = 0,\\ \qquad \vdots & \\ \sum_{\theta\in\Delta} m(\theta)p_d(\theta) & = 0. \end{cases} \] These are \(d+1\) linear equations in the \(d+1\) unknowns \(m(\theta)\) \((\theta\in \Delta)\).

But the coefficient matrix is essentially Vandermonde: since \(\deg p_i = i\), it is obtained from the Vandermonde matrix \(\bigl(\theta^i\bigr)_{0\leq i\leq d,\ \theta\in\Delta}\) by invertible row operations. Hence the system is nonsingular, and the values \(m(\theta)\) \((\theta\in \Delta)\) are uniquely determined.
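This direction can also be sketched numerically (Python with numpy; the function name is mine). The support \(\Delta\) is obtained as the spectrum of the tridiagonal matrix \(L\) from the memo below, whose characteristic polynomial is \(p_{d+1}\); the \((d+1)\times(d+1)\) system above is then solved for the weights.

```python
import numpy as np

def measure_from_parameters(a, x):
    """Recover m from a_0..a_d and x_1..x_d, as in (iii) => (ii)."""
    a, x = np.asarray(a, float), np.asarray(x, float)
    d = len(a) - 1
    # Delta = roots of p_{d+1} = eigenvalues of L; symmetrizing with
    # sqrt(x_i) preserves the spectrum (all x_i > 0).
    J = np.diag(a) + np.diag(np.sqrt(x), 1) + np.diag(np.sqrt(x), -1)
    delta = np.linalg.eigvalsh(J)
    # Coefficient matrix: row i holds p_i evaluated on Delta.
    P = np.zeros((d + 1, d + 1))
    p_prev, p_cur = np.zeros(d + 1), np.ones(d + 1)
    for i in range(d + 1):
        P[i] = p_cur
        x_i = x[i - 1] if i >= 1 else 0.0
        p_prev, p_cur = p_cur, (delta - a[i]) * p_cur - x_i * p_prev
    # Right-hand side (1, 0, ..., 0); the matrix is "essentially
    # Vandermonde", hence nonsingular.
    rhs = np.zeros(d + 1)
    rhs[0] = 1.0
    return delta, np.linalg.solve(P, rhs)
```

Composed with the sketch after Lemma 10.2, this illustrates that \((ii)\) and \((iii)\) carry the same information.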

HS MEMO

\[\begin{pmatrix} \theta-a_0 & -1 & \cdots & 0 & 0 \\ -x_1 & \theta - a_1 & \cdots & 0 & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \theta-a_{d-1} & -1\\ 0 & 0 & \cdots & -x_d & \theta - a_d \end{pmatrix} \begin{pmatrix} p_0(\theta)\\ \vdots\\ \vdots\\ \vdots\\ p_d(\theta) \end{pmatrix} = 0,\] where \(\theta\) is an eigenvalue of the diagonalizable tridiagonal matrix \[ L = \begin{pmatrix} a_0 & 1 & \cdots & 0 & 0 \\ x_1 & a_1 & \cdots & 0 & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{d-1} & 1\\ 0 & 0 & \cdots & x_d & a_d \end{pmatrix} \] with multiplicity \(\dim \mathrm{Ker}(\theta I - L) = 1\).
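The memo itself admits a quick check (Python with numpy, with the same hypothetical parameters as before): for each eigenvalue \(\theta\) of \(L\), the vector \((p_0(\theta), \ldots, p_d(\theta))^T\) is a \(\theta\)-eigenvector of \(L\).

```python
import numpy as np

# Hypothetical a_i, x_i; L is the tridiagonal matrix displayed above.
a, x = [2.0, 1.0, 0.5], [3.0, 1.5]
d = len(a) - 1
L = np.diag(a) + np.diag(np.ones(d), 1) + np.diag(x, -1)

for t in np.linalg.eigvals(L).real:
    # p_0(t), ..., p_d(t) via the recurrence of Lemma 9.1.
    v = [1.0, t - a[0]]
    for i in range(2, d + 1):
        v.append((t - a[i - 1]) * v[-1] - x[i - 2] * v[-2])
    v = np.array(v[:d + 1])
    assert np.allclose(L @ v, t * v)   # (theta I - L) p(theta) = 0
```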