\(\newcommand{\set}[1]{\{1,2,\dotsc,#1\,\}} \newcommand{\ints}{\mathbb{Z}} \newcommand{\posints}{\mathbb{N}} \newcommand{\rats}{\mathbb{Q}} \newcommand{\reals}{\mathbb{R}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\twospace}{\mathbb{R}^2} \newcommand{\threepace}{\mathbb{R}^3} \newcommand{\dspace}{\mathbb{R}^d} \newcommand{\nni}{\mathbb{N}_0} \newcommand{\nonnegints}{\mathbb{N}_0} \newcommand{\dom}{\operatorname{dom}} \newcommand{\ran}{\operatorname{ran}} \newcommand{\prob}{\operatorname{prob}} \newcommand{\Prob}{\operatorname{Prob}} \newcommand{\height}{\operatorname{height}} \newcommand{\width}{\operatorname{width}} \newcommand{\length}{\operatorname{length}} \newcommand{\crit}{\operatorname{crit}} \newcommand{\inc}{\operatorname{inc}} \newcommand{\HP}{\mathbf{H_P}} \newcommand{\HCP}{\mathbf{H^c_P}} \newcommand{\GP}{\mathbf{G_P}} \newcommand{\GQ}{\mathbf{G_Q}} \newcommand{\AG}{\mathbf{A_G}} \newcommand{\GCP}{\mathbf{G^c_P}} \newcommand{\PXP}{\mathbf{P}=(X,P)} \newcommand{\QYQ}{\mathbf{Q}=(Y,Q)} \newcommand{\GVE}{\mathbf{G}=(V,E)} \newcommand{\HWF}{\mathbf{H}=(W,F)} \newcommand{\bfC}{\mathbf{C}} \newcommand{\bfG}{\mathbf{G}} \newcommand{\bfH}{\mathbf{H}} \newcommand{\bfF}{\mathbf{F}} \newcommand{\bfI}{\mathbf{I}} \newcommand{\bfK}{\mathbf{K}} \newcommand{\bfP}{\mathbf{P}} \newcommand{\bfQ}{\mathbf{Q}} \newcommand{\bfR}{\mathbf{R}} \newcommand{\bfS}{\mathbf{S}} \newcommand{\bfT}{\mathbf{T}} \newcommand{\bfNP}{\mathbf{NP}} \newcommand{\bftwo}{\mathbf{2}} \newcommand{\cgA}{\mathcal{A}} \newcommand{\cgB}{\mathcal{B}} \newcommand{\cgC}{\mathcal{C}} \newcommand{\cgD}{\mathcal{D}} \newcommand{\cgE}{\mathcal{E}} \newcommand{\cgF}{\mathcal{F}} \newcommand{\cgG}{\mathcal{G}} \newcommand{\cgM}{\mathcal{M}} \newcommand{\cgN}{\mathcal{N}} \newcommand{\cgP}{\mathcal{P}} \newcommand{\cgR}{\mathcal{R}} \newcommand{\cgS}{\mathcal{S}} \newcommand{\bfn}{\mathbf{n}} \newcommand{\bfm}{\mathbf{m}} \newcommand{\bfk}{\mathbf{k}} \newcommand{\bfs}{\mathbf{s}} 
\newcommand{\bijection}{\xrightarrow[\text{onto}]{\text{$1$--$1$}}} \newcommand{\injection}{\xrightarrow[]{\text{$1$--$1$}}} \newcommand{\surjection}{\xrightarrow[\text{onto}]{}} \newcommand{\nin}{\not\in} \newcommand{\prufer}{\mbox{prüfer}} \DeclareMathOperator{\fix}{fix} \DeclareMathOperator{\stab}{stab} \DeclareMathOperator{\var}{var} \newcommand{\inv}{^{-1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section 2.1 Walks: the basics

If the edges in a graph \(\Gamma\) represent connections between different cities, it is natural to start planning longer trips that compose several of these connections. The notion of a walk formally captures this idea; the related notions of trail and path further require that we not double back on ourselves or repeat ourselves in certain precisely defined ways.

Once we've done that, we investigate what it means for a graph to be connected or disconnected.

Subsection 2.1.1 Walks and connectedness

Before we see the formal definition of a walk, it will be useful to see an example:

Figure 2.1.1 Example of a walk

In the graph shown, the vertices are labelled with letters and the edges are labelled with numbers; the walk is highlighted in red, with arrowtips drawn on its edges. Starting from vertex \(A\text{,}\) we can take edge 6 to vertex \(D\text{,}\) then edge 4 to vertex \(C\text{,}\) edge 5 to vertex \(F\text{,}\) edge 3 back to vertex \(D\text{,}\) and finally edge 8 to vertex \(E\text{.}\) Intuitively, then, a walk strings together several edges that share vertices in between. Making that formal, we have the following.

Definition 2.1.2 Walk

A \(walk\) in a graph \(\Gamma\) is a sequence

\begin{equation*} v_0, e_1, v_1,e_2, v_2,\dots, v_{n-1}, e_n, v_n \end{equation*}

where the \(v_i\) are vertices, the \(e_j\) are edges, and the edge \(e_j\) goes between vertices \(v_{j-1}\) and \(v_j\text{.}\)

We say that the walk is between vertices \(a=v_0\) and \(b=v_n\text{.}\)

With this notation, the walk shown in Figure 2.1.1 would be written \(A, 6, D, 4, C, 5, F, 3, D, 8, E\text{.}\) The visual representation of the walk on the graph is vastly more intuitive; the written one feels cumbersome in comparison.

The definition of walk above contains some redundant information. If we just know the sequence of edges, we can reconstruct what the vertices must be (assuming the walk has at least two edges). Alternatively, if the graph \(\Gamma\) does not have multiple edges, it is enough to know just the vertices \(v_i\text{;}\) but if \(\Gamma\) has multiple edges, then knowing the vertices alone does not determine the walk.
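To make the bookkeeping concrete, here is a minimal sketch (in Python, with names of our own choosing) that stores a walk as the alternating list \(v_0, e_1, v_1, \dots, e_n, v_n\) and checks the condition in Definition 2.1.2. The edge endpoints are read off Figure 2.1.1; edges of the graph that the walk does not use are omitted.

```python
# Endpoints of the labelled edges from Figure 2.1.1 that our walk uses;
# the remaining edges of the graph are omitted for brevity.
endpoints = {6: {"A", "D"}, 4: {"D", "C"}, 5: {"C", "F"},
             3: {"F", "D"}, 8: {"D", "E"}}

def is_walk(seq):
    """Check that seq = [v0, e1, v1, ..., en, vn] is a walk:
    each edge e_j must go between vertices v_{j-1} and v_j."""
    if len(seq) % 2 == 0:
        return False  # a walk starts and ends with a vertex
    for j in range(1, len(seq), 2):  # odd positions hold the edges
        if endpoints.get(seq[j]) != {seq[j - 1], seq[j + 1]}:
            return False
    return True

print(is_walk(["A", 6, "D", 4, "C", 5, "F", 3, "D", 8, "E"]))  # True
```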

Definition 2.1.3 Connected

We say a graph \(\Gamma\) is connected if for any two vertices \(v,w\text{,}\) there is a walk from \(v\) to \(w\) in \(\Gamma\text{.}\)

Definition 2.1.4 Disjoint union

Given two graphs \(\Gamma_1\) and \(\Gamma_2\text{,}\) the disjoint union \(\Gamma_1\sqcup \Gamma_2\) is obtained by taking the disjoint union of both the vertices and edges of \(\Gamma_1\) and \(\Gamma_2\text{.}\) So \(\Gamma_1\sqcup\Gamma_2\) consists of a copy of \(\Gamma_1\) and a copy of \(\Gamma_2\text{,}\) with no edges in between the two graphs.
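A minimal sketch of this construction, assuming graphs are stored as a pair (vertex set, edge set) with edges as 2-tuples; the representation and the function name are our own.

```python
def disjoint_union(g1, g2):
    """Disjoint union of two graphs: tag each vertex with 1 or 2 so
    vertices of the two graphs stay distinct, and keep every edge
    inside its own copy (no edges between the copies)."""
    (v1, e1), (v2, e2) = g1, g2
    verts = {(1, v) for v in v1} | {(2, v) for v in v2}
    edges = ({((1, a), (1, b)) for a, b in e1} |
             {((2, a), (2, b)) for a, b in e2})
    return verts, edges

# A path on three vertices, disjoint-unioned with a single edge:
g1 = ({"x", "y", "z"}, {("x", "y"), ("y", "z")})
g2 = ({"u", "w"}, {("u", "w")})
verts, edges = disjoint_union(g1, g2)
```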

Definition 2.1.5 Disconnected

A graph \(\Gamma\) is disconnected if we can write \(\Gamma=\Gamma_1\sqcup \Gamma_2\) for two proper (i.e., not all of \(\Gamma\)) subgraphs \(\Gamma_1\) and \(\Gamma_2\text{.}\)

We now have a definition for what it means for a graph to be connected, and another for what it means for a graph to be disconnected. From everyday usage of these words, we would certainly hope that a graph is disconnected if and only if it is not connected. However, it is not immediately clear from the definitions as written that this is the case.

Lemma 2.1.6

For a graph \(\Gamma\text{,}\) the following are equivalent:
  1. \(\Gamma\) is not disconnected.
  2. \(\Gamma\) is connected.

1 implies 2: Suppose that \(\Gamma\) is not disconnected, and let \(v, w\in V(\Gamma)\text{;}\) we want to show that there is a walk from \(v\) to \(w\text{.}\)

Define \(S\subset V(\Gamma)\) to be the set of all vertices \(u\in V(\Gamma)\) so that there is a walk from \(v\) to \(u\text{;}\) we want to show that \(w\in S\text{.}\)

First, observe that there are no edges from \(S\) to \(V(\Gamma)\setminus S\text{.}\) Indeed, suppose that \(e\) were an edge between \(a\in S\) and \(b\in V(\Gamma)\setminus S\text{.}\) Since \(a\in S\text{,}\) by the definition of \(S\) there is a walk \(v=v_0, v_1, \dots, v_m=a\) from \(v\) to \(a\text{.}\) Adding the edge \(e\) to the end of this walk gives a walk from \(v\) to \(b\text{,}\) so by definition \(b\in S\text{,}\) contradicting \(b\notin S\text{.}\)

Now suppose that \(w\notin S\text{.}\) Then \(S\) and \(V(\Gamma)\setminus S\) are both nonempty (we have \(v\in S\)), and by the above there are no edges between them. Hence \(\Gamma\) is the disjoint union of the subgraphs induced on \(S\) and on \(V(\Gamma)\setminus S\text{,}\) so \(\Gamma\) is disconnected, a contradiction.

To prove 2 implies 1, we prove the contrapositive: if \(\Gamma\) is disconnected, then \(\Gamma\) is not connected, i.e., there are two vertices \(v,w\in V(\Gamma)\) so that there is no walk from \(v\) to \(w\text{.}\)

So suppose that \(\Gamma=\Gamma_1\sqcup\Gamma_2\text{,}\) and pick \(v\in V(\Gamma_1)\) and \(w\in V(\Gamma_2)\text{.}\) Any walk from \(v\) to \(w\) would start in \(V(\Gamma_1)\) and end in \(V(\Gamma_2)\text{,}\) and so at some point would have to use an edge from a vertex in \(\Gamma_1\) to a vertex in \(\Gamma_2\text{.}\) But there are no such edges, so no such walk exists. \(\square\)
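The set \(S\) built in the proof — all vertices reachable from a starting vertex by a walk — can be computed by growing \(S\) one edge at a time, which is exactly a breadth-first search. A sketch, assuming graphs are given as adjacency dictionaries (the representation and the function name are ours):

```python
from collections import deque

def is_connected(adj):
    """Decide connectedness by growing the reachable set S from an
    arbitrary starting vertex, one edge at a time (BFS)."""
    verts = list(adj)
    if not verts:
        return True  # the empty graph is vacuously connected
    S, queue = {verts[0]}, deque([verts[0]])
    while queue:
        a = queue.popleft()
        for b in adj[a]:
            if b not in S:   # an edge leaving S enlarges S
                S.add(b)
                queue.append(b)
    # As in the proof: if S is not everything, there are no edges
    # from S to its complement, so the graph is disconnected.
    return len(S) == len(verts)
```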

Subsection 2.1.2 Types of Walks

Many questions in graph theory ask whether or not a walk of a certain type exists in a graph; we introduce some terminology that will be needed for these questions.

Definition 2.1.7

We say a walk is closed if it starts and ends on the same vertex; i.e., \(v_0=v_n\text{.}\) The length of a walk is \(n\text{,}\) the number of edges in it. The distance between two vertices \(v\) and \(w\) is the length of the shortest walk from \(v\) to \(w\text{,}\) if one exists, and \(\infty\) if one does not exist.
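Distances can be computed in the same breadth-first spirit: searching outward from \(v\text{,}\) each vertex is first reached along a shortest walk, and vertices never reached are at distance \(\infty\text{.}\) A sketch under the same adjacency-dictionary assumption as before:

```python
from collections import deque

def distances(adj, v):
    """Distance from v to every vertex: length of a shortest walk,
    or infinity for vertices with no walk from v at all."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        a = queue.popleft()
        for b in adj[a]:
            if b not in dist:  # first visit = shortest walk to b
                dist[b] = dist[a] + 1
                queue.append(b)
    return {w: dist.get(w, float("inf")) for w in adj}
```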

It is sometimes convenient to have terminology for walks that don't backtrack on themselves:

Definition 2.1.8
  1. If the edges \(e_i\) of the walk are all distinct, we call it a trail.
  2. If the vertices \(v_i\) of the walk are all distinct (except possibly \(v_0=v_n\)), we call the walk a path. The exception is to allow for the possibility of closed paths.

Proposition 2.1.9

Let \(v\) and \(w\) be vertices of a graph \(\Gamma\text{.}\) The following are equivalent:
  1. There is a walk from \(v\) to \(w\text{.}\)
  2. There is a trail from \(v\) to \(w\text{.}\)
  3. There is a path from \(v\) to \(w\text{.}\)

As is often the case, the formal write-up of the proof makes something that seems intuitively easy look laborious, so it is worth analysing the idea briefly for our example walk \(A-D-C-F-D-E\) from Figure 2.1.1. This walk is not a path, as it repeats the vertex \(D\text{;}\) however, we may simply remove the triangle \(D-C-F-D\) from the walk to get the path \(A-D-E\text{.}\) This idea is what works in general.

It is immediate from the definitions that 3 implies 2 which implies 1, as any path is a trail, and any trail is a walk.

That 1 implies 3 is intuitively obvious: if you repeat a vertex, then you've visited someplace twice, and weren't taking the shortest route. Let's make this argument mathematically precise.

Suppose we have a walk \(v=v_0,e_1,\dots, e_m, v_m=w\) that is not a path. Then, we must repeat some vertex, say \(v_i=v_k\text{,}\) with \(i\lt k\text{.}\) Then we can cut out all the vertices and edges between \(v_i\) and \(v_k\) to obtain a new walk

\begin{equation*} v=v_0,e_1, v_1,\dots, e_i, v_i=v_k, e_{k+1}, v_{k+1}, e_{k+2}, v_{k+2}, \dots, v_m \end{equation*}

Since \(i \lt k\text{,}\) the new walk is strictly shorter than our original walk. Since the length of a walk is a nonnegative integer, iterating this process must eventually terminate, at which point no vertex is repeated. The resulting walk therefore has all its vertices distinct, and hence is a path from \(v\) to \(w\text{.}\)
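The shortening step in the proof is completely mechanical. Here is a sketch that represents a walk by its vertex sequence alone (which, as noted earlier, determines the walk when \(\Gamma\) has no multiple edges) and splices out the segment between repeated vertices until none remain; the function name is our own.

```python
def shorten_to_path(walk):
    """Repeatedly cut out the segment between a repeated vertex
    v_i = v_k (i < k), as in the proof, until no vertex repeats."""
    walk = list(walk)
    seen = {}  # vertex -> its earliest index so far
    i = 0
    while i < len(walk):
        v = walk[i]
        if v in seen:
            # v = walk[i] repeats walk[seen[v]]: splice out the
            # vertices strictly after seen[v] and before i.
            walk = walk[:seen[v]] + walk[i:]
            i = seen[v]
            seen = {u: j for j, u in enumerate(walk[:i])}
        else:
            seen[v] = i
            i += 1
    return walk

# The example walk from Figure 2.1.1, by vertex sequence:
print(shorten_to_path(["A", "D", "C", "F", "D", "E"]))  # ['A', 'D', 'E']
```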