
Research

Pattern-matching syntax trees

I’m doing some very fun things with pattern-matching syntax trees today.

For context: I write a computer-based assessment system called Numbas, and it’s focused on assessing mathematics. A big part of that assessment involves asking the student to enter an algebraic expression as the answer to a question. How do you decide whether to mark the student’s answer as correct or not? Historically, there have been two approaches, given a representative “correct” expression written by the teacher:

  • Put the student’s answer and the correct answer into a canonical form. If their syntax trees are exactly the same, then the student’s answer is the same as the teacher’s. This is the model used by, for example, STACK.
  • Identify the variables used in the student’s answer and the teacher’s. Pick a few random numbers as values for those variables, and evaluate both expressions. If they come out to the same number, then the student’s answer is the same as the teacher’s. This is the model used by systems in the CALM line; a rough sketch of this approach follows.
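To make the second approach concrete, here is a minimal sketch in Python, using sympy as a stand-in for Numbas’s own expression parser. The function name, the number of trials and the tolerance are my own choices; this illustrates the idea rather than how Numbas actually implements it.

    import random
    import sympy

    def numerically_equal(student, teacher, trials=5, tol=1e-8):
        """Compare two algebraic expressions by evaluating both at random
        values of every variable that appears in either one."""
        s, t = sympy.sympify(student), sympy.sympify(teacher)
        variables = sorted(s.free_symbols | t.free_symbols, key=str)
        for _ in range(trials):
            # pick values away from zero to dodge some accidental coincidences
            values = {v: random.uniform(1, 2) for v in variables}
            if abs(float(s.subs(values)) - float(t.subs(values))) > tol:
                return False
        return True

    print(numerically_equal("(x + y)**2", "x**2 + 2*x*y + y**2"))  # True
    print(numerically_equal("(x + y)**2", "x**2 + y**2"))          # False, almost surely

Of course, two genuinely different expressions can still agree at the handful of sampled points, which is part of what makes deciding “correct” tricky.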

Read more…

Talk: Computability of Bass-Serre structures in the Grzegorczyk hierarchy

My chum the inimitable David Cushing has started a postgrad pure maths seminar at Newcastle. Because there are only a few pure postgrads here, he asked me to give a talk about the stuff I was looking at for the PhD I gave up on last year.

I’ve written a few posts here in the past about the Grzegorczyk hierarchy, computable groups, and so on, but I think this might be the first time I’ve presented my work to real people (apart from an impromptu hour-long braindump when one of the real seminar speakers cancelled and I decided to test my memory).

Read more…

Gatterdam’s proof that free products with amalgamation are \(\mathcal{E}^{n+1}\)-computable

I haven’t posted any research in ages! I haven’t done any research in ages! Here’s a rewriting of a theorem from Ronald Gatterdam’s doctoral thesis, in more sensible language. I’ve proved the same theorem for fundamental groups of graphs of groups.

The main problem with Gatterdam’s proof was that he didn’t invent any new notation to make life simpler, so he wrote out pages and pages of very repetitive function applications.

I’ve made up a lot of notation to make dealing with functions in the Grzegorczyk hierarchy a lot easier, so you’ll probably need to refer to my draft paper for a few things. I keep all my research notes in a repository on github, by the way.

Read more…

Double coset normal form for FPAs

Liza Frenkel is visiting the department, and she wants me to write an algorithm to compute double coset representation normal forms for free products with amalgamation.

Here are some notes I typed up in write maths, see maths as she and Andrew talked. I’m just putting them up here so I don’t lose them.

Suppose:
\(F_{1}\), \(F_{2}\) free groups (finitely generated by \(X\) and \(Y\), respectively)
\(H_1 \leq F_1\), \(H_2 \leq F_2\)
\(\exists\) an isomorphism \(\phi: H_1 \rightarrow H_2\)
i.e. we have a finite generating set (in fact a basis) \(\{h_1, \dots, h_m \}\) for \(H_{1}\) (and \(\{h_1', \dots, h_m'\}\) for \(H_{2}\)) and generating sets \(X,Y\) for \(F_{1}\) and \(F_{2}\)

Consider a group generated by \(z_1, \dots, z_m\), with
\(z_1 = h_1(x_1, \dots, x_n)=h_1'(y_1,\dots,y_r), \dots, z_m = h_m(x_1,\dots,x_n) = h_m'(y_1,\dots,y_r)\)
\(G = F_1 \underset{H_1=H_2}{\ast} F_2\) has presentation \(\langle X,Y | h_i = h_i', i=1 \dots m\rangle\)
Choose a set \(S \subseteq F_1\) such that \(F_1 = \displaystyle{\bigcup_{s \in S} H_1sH_1}\)
(we are assuming \(S\) is infinite)
Every element \(w\) of \(F_{1}\) can be written as \(w = h_{1} s h_{2}\), for unique \(s \in S\), \(h_1,h_2 \in H_1\).
We can do the same for \(F_2 = \displaystyle{\bigcup_{t \in T} H_2tH_2}\).

Let \(g \in G\).
A word \(w\) representing \(g\) is in normal form if \(w = h_{i_1}(z)p_1h_{i_2}(z)p_2 \dots h_{i_k}(z)p_kh_{i_{k+1}}(z)\), with \(p_i \in S\), \(i = 1 \dots k\).

Suppose \(g = g_1g_2 \dots g_k\). Rewrite using double coset representatives:
\(g = h_1(X)s_1h_2(Y)t_2h_3(X) \dots\)

\(g\) is in reduced form if \(g = g_1 \dots g_k\) and:

  • if \(k=1\), then \(g \in F_1\) or \(g \in F_2\);
  • if \(k \gt 1\), then each \(g_i \in F_1 \setminus H_1\) or \(g_i \in F_2 \setminus H_2\), and \(g_{i}\), \(g_{i+1}\) belong to different factors.

Let \(\Gamma_{H_1}\) be a folded subgroup graph of \(H_{1}\). Take two copies.

For some word \(w = h_{1} sh_{2}\), the question is how to find \(s\).

Algorithm:

  • Read all possible loops round the first graph, to get \(h_{1}\).
  • Keep reading letters round the graph until there is no edge corresponding to the current letter. Call this \(s_{1}\).
  • Start at the other end, reading loops to get \(h_{2}\), and then a maximal \(s_{2}\) partway round a loop.
  • Then either there are no letters left to look at, and \(s = s_{1} s_{2}\), or there is some bit \(f_{0}\) in the middle, and \(s = s_{1} f_{0} s_{2}\).

We want \(h_1,h_2,s\) to all have maximal length.
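Here is a minimal sketch in Python of that reading procedure, under some assumptions of my own: the folded graph is stored as a dict mapping (vertex, letter) pairs to vertices, with inverse edges included (an edge labelled \(x\) from \(u\) to \(v\) comes with an edge labelled \(x^{-1}\), written as an upper-case letter, from \(v\) to \(u\)), and a word is a string of letters. Rather than literally taking a second copy of the graph, it reads the inverse of the word from the far end, which comes to the same thing. It is only meant to make the steps concrete, not to be the real algorithm.

    def invert(word):
        """The inverse of a word: reverse it and swap the case of each letter."""
        return word[::-1].swapcase()

    def longest_loop_prefix(graph, base, word):
        """Length of the longest prefix of `word` that reads as a loop at `base`."""
        vertex, best = base, 0
        for i, letter in enumerate(word, start=1):
            vertex = graph.get((vertex, letter))
            if vertex is None:
                break
            if vertex == base:
                best = i
        return best

    def read_until_stuck(graph, base, word):
        """How many letters of `word` can be read from `base` before there is
        no edge corresponding to the current letter."""
        vertex = base
        for i, letter in enumerate(word):
            if (vertex, letter) not in graph:
                return i
            vertex = graph[(vertex, letter)]
        return len(word)

    def double_coset_decomposition(graph, base, word):
        """Split `word` as h1 * s * h2, with h1 and h2 readable as loops at `base`."""
        a = longest_loop_prefix(graph, base, word)       # h1 = word[:a]
        h1, rest = word[:a], word[a:]
        b = read_until_stuck(graph, base, rest)          # s1 = rest[:b]
        back = invert(rest[b:])                          # now work from the other end
        c = longest_loop_prefix(graph, base, back)       # h2 = invert(back[:c])
        d = read_until_stuck(graph, base, back[c:])      # s2 = invert(back[c:c+d])
        h2 = invert(back[:c])
        f0 = invert(back[c + d:])                        # whatever is left in the middle
        s = rest[:b] + f0 + invert(back[c:c + d])
        return h1, s, h2

This glosses over checking that \(h_1\), \(s\) and \(h_2\) really are maximal in the sense above; it is just the reading procedure written out.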

Example

i) \[\begin{align} g &= x_1^3x_2^2y_1y_2 \\ &= (x_1^2)(x_1x_2)(x_2^{-1})^{-1} \cdot (y_1)(y_2^{-1})^{-1}\end{align}\]

Membership problem

\[\begin{align} G &= F_1 \underset{H_1=H_2}{\ast}F_2 \\ H_3 &= \langle c_1, \dots, c_l \rangle \leqslant G\end{align}\]
We can write the \(c_{i}\) in normal form.
Question: Given \(w \in G\) in normal form, is \(w \in H_3\)?

Constructing a folded graph which makes use of the normal form is tricky.

An \(E_3\) fgoagog and an \(E_4\) fgoagog

Over Christmas I typed up my Bass-Serre / Grzegorczyk stuff in quite a lot of detail and fairly rigorously. It’s far too much to put up here as blog posts, so here’s a link to the PDF.

The big theorem I prove says that the fundamental group of a graph of \(E_n\)-computable groups is \(E_{n+1}\)-computable. But are there cases when the fgoagog stays at \(E_n\), and can we pin down what conditions cause it to move up to \(E_{n+1}\)? I’ve found a couple of examples which begin to answer those questions.

Corollary 1 says that if the edge groups are finitely generated then you don’t go up a level, while Corollary 2 gives an example of a graph with infinitely generated edge groups that does go up a level.

Little Facts

  1. The \(n\)th prime \(p_n \approx n \log n\).
  2. \(\exp(x,y) := x^y \in E_3\).
  3. The Gödel encoding of a word of length \(n\) on an alphabet \(X\) is \(\leq p_n^{J(n,n)}\), an \(E_3\) function (a sketch of one such encoding is below).
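To make Fact 3 concrete, here is a sketch of one standard Gödel-style encoding: number the letters of the alphabet \(1, 2, 3, \dots\) and send a word to the product of the first \(n\) primes raised to the codes of its letters. The letter codes, the handling of inverse letters and the exact role of \(J\) in the bound are glossed over here, and the encoding in the paper may differ in detail.

    from sympy import prime   # prime(j) returns the j-th prime, p_j

    def godel_index(word, alphabet):
        """Encode a word (a list of letters from `alphabet`) as
        p_1**c_1 * p_2**c_2 * ... * p_n**c_n, where c_j is the code of the
        j-th letter and letters are coded 1, 2, 3, ..."""
        code = {letter: i for i, letter in enumerate(alphabet, start=1)}
        index = 1
        for j, letter in enumerate(word, start=1):
            index *= prime(j) ** code[letter]
        return index

    print(godel_index(["x", "y", "x"], ["x", "y"]))  # 2**1 * 3**2 * 5**1 = 90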

Theorem 6.7 (from the paper)

Suppose we have \(G = \pi_1(\mathbf{G}, \Gamma, T, v_0)\), where \(\Gamma\) is finite. Suppose all the \(G_i \in \mathbf{G}\) are finitely generated \(E_n\)-computable groups, \(n \geq 3\). Assume all the identified subgroups are \(E_n\)-decidable, and all the isomorphisms \(\phi_e\) are \(E_n\)-computable. Then \(G\) is \(E_{n+1}\)-computable.

Corollary 1

Same assumptions as Theorem 6.7, but also assume that the edge groups are finitely generated. So each edge homomorphism \(\phi_{e_{ij}}\) sends a finitely generated subgroup \(E_{ij} \leq G_i\) to another finitely generated subgroup \(E_{ji} \leq G_j\). Suppose everything is \(E_n\)-computable.

Then \(G = \pi_1(\mathbf{G}, \Gamma, T, v_0)\) is \(E_n\)-computable.

Proof

In the algorithm from Theorem 6.7, the potential for unbounded recursion comes from applying the edge homomorphisms repeatedly. We will show that the result of applying the edge homomorphisms repeatedly is bounded by an \(E_3\) function.

Let \(w\) be a word on \(G\) of length \(n\). An upper bound on the number of syllables in \(w\) is \(n\), so in the worst case there are \(n\) applications of edge homomorphisms.

A homomorphism \(\phi_{e_{ij}}\) rewrites generators \(x \in X_i\) as words \(\phi_{e_{ij}}(x) \in X_j^{\ast}\). As \(E_{ij}\) is finitely generated, there is a word \(\phi_{e_{ij}}(x)\) of maximum, finite length \(k_{ij}\). So a word \(w_i \in E_{ij}\) of length \(m\) is rewritten to a word \(w_j \in E_{ji}\) of length at most \(k_{ij}m\).

Because the graph is finite, we can find a greatest \(k \in \{k_{ij}\}\). So after reducing all the syllables (assuming that can be done for the input word), the resulting word in \(G_0\) has length at most \(k^nn = L\).
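Spelling that bound out: the input word has length at most \(n\), and each of the at most \(n\) applications of an edge homomorphism multiplies the length by at most \(k\), so

\[ n \;\to\; kn \;\to\; k^2 n \;\to\; \dots \;\to\; k^n n = L. \]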

Hence the index of the resulting word is bounded by

\[ p_{k^nn}^{J(k^nn,k^nn)},\]

an \(E_3\) function, by Facts 1–3. So recursion on the edge homomorphisms is bounded by an \(E_n\) function, and so \(G\) is \(E_n\)-computable.

\(\square\)

Corollary 2

We will construct a graph of \(E_3\)-computable groups whose fundamental group is \(E_4\)-computable.

Let \(\mathbf{G} = \{G_0,G_1\}\) be two copies of the free group on two generators, so \(G_0 = \langle x,y \rangle\) and \(G_1 = \langle a,b \rangle\). The graph has a single edge \(e\) from \(v_0\) to \(v_1\), so \(\Gamma = T = (\{v_0,v_1\},\{e,\bar{e}\})\).

Let \(E_{01} = \left \langle \{x^{-n}yx^n, n \in \mathbb{N} \} \right \rangle\), \(E_{10} = \left\langle \{ a^{-n}ba^n, n \in \mathbb{N} \} \right\rangle\). So the edge groups are infinitely generated.

Define

\[ \begin{align} \phi_{e}(x^{-2n}yx^{2n}) &= a^{-(2n)!}ba^{(2n)!}, \\ \phi_{\bar{e}}(a^{-2n-1}ba^{2n+1}) &= x^{-(2n)!-1}yx^{(2n)!+1}. \end{align} \]

Then \(G = \pi_1(\mathbf{G},\Gamma,T,v_0)\) is \(E_4\)-computable.

Proof

First of all, the homomorphisms are \(E_3\)-computable because words of the form \(a^{-n}ba^n\) can be recognised by an \(E_3\) function. Hence all the ingredients of the graph of groups are \(E_3\)-computable.

A word of the form

\[ ea^{-1}e^{-1} \dots x^{-1}ea^{-1}e^{-1}x^{-n}yx^neae^{-1}x \dots eae^{-1} \]

will be reduced to a word on \(G_0\) whose length is proportional to \(\operatorname{fact}^{(m)}(n)\), where \(m\) is the number of syllables. As \(m\) depends on the input, this length can only be computed by unbounded recursion on an \(E_3\) function, so is only \(E_4\)-computable.
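To see where the iterated factorial comes from, follow a single conjugate of \(y\) through one round trip of the edge maps (a sketch; as I read it, the extra \(x^{\pm 1}\) and \(a^{\pm 1}\) letters in the word above are there to adjust the parity of the exponent so that each map applies):

\[ x^{-2n}yx^{2n} \;\xrightarrow{\;\phi_e\;}\; a^{-(2n)!}ba^{(2n)!} \;\xrightarrow{\;a^{-1}(\cdot)a\;}\; a^{-(2n)!-1}ba^{(2n)!+1} \;\xrightarrow{\;\phi_{\bar{e}}\;}\; x^{-((2n)!)!-1}yx^{((2n)!)!+1}. \]

Each pass through the edge letters feeds the exponent through another factorial, which is where \(\operatorname{fact}^{(m)}(n)\) comes from.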

Therefore, \(G\) is \(E_4\)-computable. \(\square\)

Some notation for FPAs and HNN extensions

In order to define a free product with amalgamation \(A \ast_C B\) you need to define \(A,B,C\) and embeddings of \(C\) into \(A\) and \(B\). Alternatively, instead of \(C\) you can use a homomorphism \(\phi: A \rightarrow B\) and its inverse. A third option is to pick subgroups \(D \subset A\) and \(E \subset B\) and give an isomorphism between them.

All three of these options require you to define some groups and some morphisms. You can define a group nicely by giving a presentation, but defining maps usually takes a few lines. So, I’d like a quicker way of describing the kinds of maps we need here.

Let’s take the third option. If we have generating sets so that \(D = \langle X \rangle\) and \(E = \langle Y \rangle\), then writing each of the elements of \(X\) as a word in \(Y^*\) defines the required isomorphism completely. After you’ve done that, the generating set \(Y\) might as well just be those words.

So how about this as a presentation for an FPA?

\[ \langle a_1, a_2, \dots | R_A \rangle [ x_1 = y_1, x_2 = y_2, \dots ] \langle b_1, b_2, \dots | R_B \rangle \]

The bits in angle brackets are presentations of \(A\) and \(B\). The \(x_i\) are words from \(A\) and generate \(D\), and the \(y_i\) are words from \(B\) and generate \(E\).

Because this is all on one line and delimited adequately, you can chain things together and use brackets and all that, like this contrived example:

\[ \left ( \langle a,b \rangle [ b = c ] \langle c,d | cd = dc \rangle \right ) [ a=e ] \langle e,f \rangle \]

For HNN extensions, you just need to say what the original group is, what the extending letter is, and what the identified subgroups are. So how about something like:

\[ \langle g_1, g_2, \dots | R_G \rangle [ t | x_1 = y_1, x_2 = y_2, \dots ] \]
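To test this on a standard example (my choice of example, reading \(x_i = y_i\) as \(t x_i t^{-1} = y_i\)): the Baumslag–Solitar group \(\mathrm{BS}(1,2) = \langle a, t | tat^{-1} = a^2 \rangle\) would come out as

\[ \langle a \rangle [ t | a = a^2 ]. \]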

Apart from the different kinds of brackets being hard to distinguish visually, what have I missed?

Infinite trees are \(E_3\) (maybe)

Not completely sure about this, but here we go: I will show that an ordered, plane recursive tree is \(E_n\)-computable, \(n \geq 3\), if each vertex has its number of children bounded by an \(E_n\) function. So this rules out trees which grow sideways a lot quicker than they grow in height, but it has to be a lot quicker.

Read more…

Lists and finite trees are \(E_3\)

In order to prove the fgoagog thing we need to be able to do computations on trees at a low level of the Grzegorczyk hierarchy, so it doesn’t mess up the complexity of the whole group computation. I will give a model of finite trees that lies in \(E_3\), the same level as Cannonito/Gatterdam’s standard index for the free group. (( this post has been sitting as a draft since the beginning of May because I wanted to make some nice tree pictures, but I’d rather just publish it now. So, the trees section is copied out of my notes and might not make sense ))

Read more…

Computable categories

So I read a bit about categories after Peter Hines came up to talk to us.

Categories

A category \(C\) is a collection of objects \(obj(C)\) (or just Obj), a collection of arrows (or morphisms) \(hom(C)\) (or just Hom), and an associative arrow composition operation \(\circ\) such that there is an identity arrow for each object.

An arrow has a single source object and a single target object, and is written \(A \overset{f}{\rightarrow} B\).
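To make the definition concrete, here is a minimal sketch (my own illustration, not from the post) of a familiar example in code: finite sets as objects and functions between them as arrows, with composition and an identity arrow for each object.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Arrow:
        source: frozenset
        target: frozenset
        graph: frozenset  # the function's graph: a frozenset of (x, f(x)) pairs

        def __call__(self, x):
            return dict(self.graph)[x]

    def identity(obj):
        """The identity arrow on an object."""
        return Arrow(obj, obj, frozenset((x, x) for x in obj))

    def compose(g, f):
        """g o f: first apply f, then g; only defined when f.target == g.source."""
        assert f.target == g.source, "arrows are not composable"
        return Arrow(f.source, g.target, frozenset((x, g(f(x))) for x in f.source))

    A = frozenset({1, 2})
    B = frozenset({"a", "b", "c"})
    f = Arrow(A, B, frozenset({(1, "a"), (2, "c")}))   # an arrow A -> B

    # the identity arrows really are identities for composition
    assert compose(identity(B), f) == f == compose(f, identity(A))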

Read more…

The computability of a fgoagog (intro)

I’m going to write up a result I think I’ve proved, which follows on from Gatterdam’s thesis. He showed that if a group is \(E_n\) computable, then doing a free product with amalgamation or any number of HNN extensions to it results in a group which is at worst \(E_{n+1}\) computable. What I’m going to say is that the fundamental group of a graph of \(E_n\) groups is at most \(E_{n+1}\) computable. This isn’t unreasonable, because a fgoagog (( ‘fgoagog’ is short for ‘fundamental group of a graph of groups’, because I can’t be doing with writing that out once every three minutes )) represents a load of HNN extensions and amalgamated products.

The tactic will be to follow Gatterdam’s method for amalgamated products – if we can find an algorithm which solves the word problem of the fgoagog (decides if a word on the generators is equivalent to the identity) in \(E_{n+1}\) time, then the whole group is \(E_{n+1}\) computable as a quotient of the free group by the set of trivial words.

I will define an admissible form for words from the fgoagog, which is an alternating series of edge letters and words from the vertex groups, such that the edge letters make up a cycle starting and ending at the base vertex. Once we have a word in admissible form, we can reduce it by swapping the image of an edge group in one vertex with its counterpart in the vertex at the other end; eventually a trivial word ends up as just a word in the base vertex group, where triviality is an \(E_n\) decision.

Clearly the process of putting a word in admissible form involves knowing something about the shape of the graph, so we need to make some \(E_n\) computable functions which can represent the graph and compute paths between vertices. That’s a tiny bit interesting on its own, so I’ll spend my next post talking about that. It will need some pretty pictures of trees though, which is why I’ve put off writing it for so long.
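For what it's worth, the path-computing part on its own is elementary graph search; here is a minimal sketch (my own illustration, ignoring all the Grzegorczyk-level bookkeeping, which is the actual point) of finding the path from the base vertex to any other vertex of the spanning tree by breadth-first search.

    from collections import deque

    def tree_path(tree, base, target):
        """Return the list of vertices on the (unique) path from `base` to
        `target` in a tree, found by breadth-first search.  `tree` is an
        adjacency map: vertex -> list of neighbouring vertices."""
        parent = {base: None}
        queue = deque([base])
        while queue:
            v = queue.popleft()
            if v == target:
                # walk back up the parent pointers to recover the path
                path = []
                while v is not None:
                    path.append(v)
                    v = parent[v]
                return list(reversed(path))
            for w in tree[v]:
                if w not in parent:
                    parent[w] = v
                    queue.append(w)
        raise ValueError("target is not in the same component as base")

    # a small spanning tree on vertices 0..4
    T = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
    print(tree_path(T, 0, 4))  # [0, 1, 3, 4]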