The Art Gallery Guardian

Lights out game on a grid


Problem 1

Let $G$ be a graph and let $A$ be the adjacency matrix of $G$. Solve the equation $(A+I)x = b$ in $\mathbb{F}_2$.

The problem is equivalent to the lights out game. Each vertex has state $0$ or $1$. Activating a vertex flips the state of itself and all its neighbors. Find a set of activations that turns every state into $0$.

Originally I thought this problem can be solved in $O(n^{\omega/2})$ time when $G$ is a planar graph on $n$ vertices by nested dissection. However, only recently I found out the matrix must be non-singular [1]. Therefore, nested dissection does not apply. In particular, even grid graphs can give a singular matrix: the classic $5\times 5$ lights out matrix is singular over $\mathbb{F}_2$.

If the graph is an $n\times n$ grid, then solving the linear system directly with Gaussian elimination takes $O(n^6)$ time. Recently, I saw an $O(n^3)$ time solution. The solution is by Zheng Wang and can be seen here (in Chinese). Here I give my interpretation of the algorithm.
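For reference, here is what the naive baseline looks like: a minimal sketch (my own illustration, with made-up names such as `solve_lights_out_naive`) that builds the lights out system $(A+I)x = b$ over $\mathbb{F}_2$ for an $n\times n$ grid and solves it with textbook Gaussian elimination.

```python
def solve_lights_out_naive(b):
    """b is an n x n list of 0/1 states; return an n x n activation grid or None."""
    n = len(b)
    N = n * n
    vid = lambda i, j: i * n + j
    # One row per vertex: N coefficient columns plus one column for b.
    rows = []
    for i in range(n):
        for j in range(n):
            row = [0] * (N + 1)
            for (x, y) in [(i, j), (i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]:
                if 0 <= x < n and 0 <= y < n:
                    row[vid(x, y)] = 1
            row[N] = b[i][j]
            rows.append(row)
    # Gaussian elimination over GF(2): O(N^3) = O(n^6) operations.
    pivot_of_col = [-1] * N
    r = 0
    for c in range(N):
        p = next((k for k in range(r, N) if rows[k][c]), None)
        if p is None:
            continue
        rows[r], rows[p] = rows[p], rows[r]
        for k in range(N):
            if k != r and rows[k][c]:
                rows[k] = [a ^ t for a, t in zip(rows[k], rows[r])]
        pivot_of_col[c] = r
        r += 1
    if any(not any(row[:N]) and row[N] for row in rows):
        return None  # inconsistent: no activation set works
    x = [0] * N
    for c in range(N):
        if pivot_of_col[c] != -1:
            x[c] = rows[pivot_of_col[c]][N]
    return [[x[vid(i, j)] for j in range(n)] for i in range(n)]
```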

1 The algorithm

We are given an $n\times n$ grid graph. Let $v_{i,j}$ be the node on the $i$th row and $j$th column. Let $b_{i,j}$ be the state of the vertex $v_{i,j}$ in the input. Each state is in $\{0,1\}$. If we activate a node, the states of the node and its neighbors change by $1$ modulo $2$. The set of activated nodes is called the activation set.

We are interested in finding an activation set $S$, such that the state of every node after activating $S$ is $0$.

Let $S$ be an activation set, and let $S_1$ be the set of activated nodes in the first row.

Theorem 2

$S_1$ uniquely determines $S$. Moreover, one can compute $S$ from $S_1$ in $O(n^2)$ time.

Proof

Indeed, suppose we have applied the activations to the nodes in $S_1$. Consider any vertex $v_{1,j}$ in row $1$. If its current state is $0$, then its remaining neighbor $v_{2,j}$ (on the second row) cannot be activated. If it is $1$, then $v_{2,j}$ has to be activated. This allows us to compute the activation pattern for the second row. Applying induction, the activation pattern of each row is determined by the rows before it.

Naively, by the above theorem, we can obtain an $O(2^n n^2)$ time algorithm by trying all $2^n$ possible first rows until we find one that works.
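The proof of Theorem 2 translates directly into code. The following sketch (with hypothetical names `chase` and `brute_force`, and boards stored as lists of 0/1 rows) implements the row-by-row propagation and the naive $O(2^n n^2)$ search over all first rows.

```python
from itertools import product

def chase(first_row, b):
    """Given the activations of the first row, compute the unique activation
    grid they force (Theorem 2), together with the final state of the board."""
    n = len(b)
    act = [list(first_row)] + [[0] * n for _ in range(n - 1)]
    state = [row[:] for row in b]

    def toggle(i, j):
        for (x, y) in [(i, j), (i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]:
            if 0 <= x < n and 0 <= y < n:
                state[x][y] ^= 1

    for j in range(n):
        if act[0][j]:
            toggle(0, j)
    for i in range(1, n):
        for j in range(n):
            # v_{i-1,j} must end in state 0, and v_{i,j} is its only
            # neighbor that can still be activated.
            act[i][j] = state[i - 1][j]
            if act[i][j]:
                toggle(i, j)
    return act, state

def brute_force(b):
    """Try all 2^n first rows: O(2^n n^2) overall."""
    n = len(b)
    for first_row in product([0, 1], repeat=n):
        act, state = chase(first_row, b)
        if all(s == 0 for s in state[-1]):  # rows above the last are 0 by construction
            return act
    return None
```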

Wang realized the proof of the theorem actually allows us to solve for $S_1$ directly. In the beginning, we do not know what $S_1$ should be, but we do know its relation with the input states. The high level idea is a three step process.

  1. Express the activation state of $v_{i,j}$ as a linear combination of the activation states of $v_{1,k}$ for all $k$. This allows us to express $S$ parametrized by $S_1$.
  2. Solve for an $S_1$ by setting up a system of linear equations using the activation states of the last row.
  3. Use $S_1$ to obtain $S$.

Formally, we create formal variables $x_1,\dots,x_n$. Here $x_j$ is an indicator variable that represents whether $v_{1,j}\in S$. Let $f_{i,j}$ be a linear combination of $x_1,\dots,x_n$ and the constant $1$, such that it evaluates to $1$ if $v_{i,j}\in S$ and to $0$ otherwise.

We compute $f_{i,j}$ through dynamic programming. First, set $f_{1,j}=x_j$ for all $j$. For each $i\geq 1$ and $j$, the state of $v_{i,j}$ must end up being $0$, which forces $f_{i+1,j} = b_{i,j} + f_{i,j} + f_{i-1,j} + f_{i,j-1} + f_{i,j+1}$, where any out-of-range term is $0$. We can compute each $f_{i,j}$ in $O(n)$ time. Computing all $f_{i,j}$ takes $O(n^3)$ time.

At this point, we have expressed whether $v_{i,j}\in S$ through a linear combination of $x_1,\dots,x_n$ and $1$. How do we find an $S_1$ that works? The idea is to consider the linear relations defined by the last row: after all activations, each node of row $n$ must also end in state $0$, that is, $b_{n,j}+f_{n,j}+f_{n-1,j}+f_{n,j-1}+f_{n,j+1}=0$ for each $j$. Collecting these $n$ equations, we can see they form a linear system $Lx=c$ over $\mathbb{F}_2$ in the variables $x_1,\dots,x_n$.

If we solve the equation $Lx=c$, we find $x$, which implies we find $S_1$. Note $L$ is just an $n\times n$ matrix, so solving the equation takes $O(n^3)$ time. Now, we use the theorem again to find the entire activation set $S$ through $S_1$ in $O(n^2)$ time.

Building the table of $f_{i,j}$ takes $O(n^3)$ time, solving $Lx=c$ also takes $O(n^3)$ time, and computing $S$ from $S_1$ takes $O(n^2)$ time. The total running time is $O(n^3)$.
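Putting the three steps together, here is a self-contained sketch of the whole $O(n^3)$ procedure as I understand it. All names are mine; each affine form $f_{i,j}$ is stored as a length $n+1$ vector over $\mathbb{F}_2$ whose last entry is the constant term, and for simplicity the sketch recovers $S$ by evaluating the stored forms instead of re-running the chase of Theorem 2, which stays within $O(n^3)$.

```python
def solve_lights_out_fast(b):
    n = len(b)
    zero = lambda: [0] * (n + 1)          # n coefficients for x_1..x_n, then a constant

    def add(f, g):
        return [a ^ c for a, c in zip(f, g)]

    # f[i][j] expresses "is v_{i,j} activated" as an affine form in x_1..x_n.
    f = [[zero() for _ in range(n)] for _ in range(n)]
    for j in range(n):
        f[0][j][j] = 1                     # f_{1,j} = x_j
    for i in range(1, n):
        for j in range(n):
            g = zero()
            g[n] = b[i - 1][j]             # the state of v_{i-1,j} must become 0
            for (x, y) in [(i - 1, j), (i - 2, j), (i - 1, j - 1), (i - 1, j + 1)]:
                if 0 <= x < n and 0 <= y < n:
                    g = add(g, f[x][y])
            f[i][j] = g

    # The last row must also end in state 0: n affine constraints in x_1..x_n.
    system = []
    for j in range(n):
        g = zero()
        g[n] = b[n - 1][j]
        for (x, y) in [(n - 1, j), (n - 2, j), (n - 1, j - 1), (n - 1, j + 1)]:
            if 0 <= x < n and 0 <= y < n:
                g = add(g, f[x][y])
        system.append(g)                   # equation: g[0..n-1] . x = g[n]

    # Gaussian elimination over GF(2) on the n x (n+1) system.
    pivot_of_col = [-1] * n
    r = 0
    for c in range(n):
        p = next((k for k in range(r, n) if system[k][c]), None)
        if p is None:
            continue
        system[r], system[p] = system[p], system[r]
        for k in range(n):
            if k != r and system[k][c]:
                system[k] = add(system[k], system[r])
        pivot_of_col[c] = r
        r += 1
    if any(not any(row[:n]) and row[n] for row in system):
        return None                        # no activation set exists
    x = [0] * n
    for c in range(n):
        if pivot_of_col[c] != -1:
            x[c] = system[pivot_of_col[c]][n]

    # Evaluate every affine form at x to recover the whole activation set S.
    evaluate = lambda g: (sum(a & v for a, v in zip(g[:n], x)) + g[n]) % 2
    return [[evaluate(f[i][j]) for j in range(n)] for i in range(n)]
```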

However, why is this algorithm correct? Wang did not provide a proof, hence leaving a gap to fill.

2 Remarks

One can see this is similar to the light chasing technique. There is no need to access a precomputed lookup table, since we compute the correct first row on the fly.

One can generalize this further. We can obtain $O(nm^2)$ running time for an $n\times m$ grid, where $m\leq n$. Also, there is no reason we have to work in $\mathbb{F}_2$; any field is fine.

Theorem 3

Let $G$ be an $n\times m$ grid with $m\leq n$ and let $M$ be a matrix whose non-zero entries are precisely at the positions of the $1$s in the adjacency matrix of $G$. Solving $Mx=b$ can be done in $O(nm^2)$ time.

How can this be generalized to other graphs?

It turns out this is related to zero forcing sets [2]. Consider a coloring of the vertices in black and white. If a black vertex has exactly one white neighbor, color that white neighbor black. If eventually all vertices become black, then we say the initial set of black vertices is a zero forcing set. The zero forcing number of $G$ is the size of the smallest zero forcing set of $G$.
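To make the definition concrete, here is a small sketch (names are mine) that checks whether a given set is a zero forcing set by repeatedly applying the color change rule.

```python
def is_zero_forcing_set(adj, black):
    """adj: adjacency lists of an undirected graph, black: an initial set of
    black vertices. Repeatedly apply the rule: a black vertex with exactly one
    white neighbor colors that neighbor black. Return True iff all vertices
    eventually turn black."""
    black = set(black)
    changed = True
    while changed:
        changed = False
        for v in list(black):
            white = [u for u in adj[v] if u not in black]
            if len(white) == 1:
                black.add(white[0])
                changed = True
    return len(black) == len(adj)

# Example: on the path 0-1-2-3, a single endpoint is already a zero forcing set.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert is_zero_forcing_set(path, {0})
```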

Theorem 4

Let $G$ be a graph with $m$ edges and let $M$ be a matrix whose non-zero entries are precisely at the positions of the $1$s in the adjacency matrix of $G$. If we are given a zero forcing set of size $k$, then solving $Mx=b$ can be done in $O(mk+k^3)$ time.
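The idea is the same substitution trick as on the grid: parametrize the unknowns on the zero forcing set, replay the forcing steps symbolically, and solve the leftover $k\times k$ system. Below is my own unoptimized sketch of this idea over $\mathbb{F}_2$, taking $M$ to be the adjacency matrix itself; it is meant as an illustration (the names are hypothetical), not the algorithm from the paper.

```python
def solve_via_zero_forcing(adj, b, zfs):
    """Sketch: solve A x = b over GF(2), where A is the adjacency matrix of the
    graph given by the adjacency lists adj (indexed 0..n-1), using a known zero
    forcing set zfs. Each form is a vector of k coefficients plus a constant.
    This plain replay is readable, not tuned to the O(mk + k^3) bound."""
    n, k = len(adj), len(zfs)
    idx = {v: i for i, v in enumerate(zfs)}
    add = lambda f, g: [a ^ c for a, c in zip(f, g)]
    form = {}
    for v in zfs:
        form[v] = [0] * (k + 1)
        form[v][idx[v]] = 1
    used = set()                      # rows consumed by forcing steps
    changed = True
    while changed:
        changed = False
        for v in [u for u in form if u not in used]:
            white = [u for u in adj[v] if u not in form]
            if len(white) == 1:
                u = white[0]
                g = [0] * k + [b[v]]  # row v: sum of x_w over neighbors w equals b_v
                for w in adj[v]:
                    if w != u:
                        g = add(g, form[w])
                form[u] = g
                used.add(v)
                changed = True
    assert len(form) == n, "zfs is not a zero forcing set"
    # The k unused rows give a k x k system in the zfs variables.
    system = []
    for v in range(n):
        if v in used:
            continue
        g = [0] * k + [b[v]]
        for w in adj[v]:
            g = add(g, form[w])
        system.append(g)              # equation: g[0..k-1] . y = g[k]
    # Gaussian elimination over GF(2).
    piv = [-1] * k
    r = 0
    for c in range(k):
        p = next((t for t in range(r, len(system)) if system[t][c]), None)
        if p is None:
            continue
        system[r], system[p] = system[p], system[r]
        for t in range(len(system)):
            if t != r and system[t][c]:
                system[t] = add(system[t], system[r])
        piv[c] = r
        r += 1
    if any(not any(row[:k]) and row[k] for row in system):
        return None
    y = [0] * k
    for c in range(k):
        if piv[c] != -1:
            y[c] = system[piv[c]][k]
    ev = lambda g: (sum(a & t for a, t in zip(g[:k], y)) + g[k]) % 2
    return [ev(form[v]) for v in range(n)]
```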

Jianbo Wang, Siyun Zhou and I wrote up a paper on this. We also further improved the running time for solving the grid lights out game.

Unfortunately, a minimum (weighted) zero forcing set is NP-hard to find [3]. Zero forcing sets can also be very large even for simple graphs: caterpillars can already have zero forcing number $\Omega(n)$.

Is this a new viable algorithm for solving systems of linear equations?

The zero forcing number is at least the pathwidth [3], which is in turn at least the treewidth, so a small zero forcing number implies the graph has good separators. This should mean nested dissection is better than this zero forcing based algorithm for non-singular matrices.

References

[1]
N. Alon and R. Yuster, Matrix sparsification and nested dissection over arbitrary fields, Journal of the ACM 60 (2013), 1–18. doi:10.1145/2508028.2505989.
[2]
AIM Minimum Rank – Special Graphs Work Group, Zero forcing sets and the minimum rank of graphs, Linear Algebra and Its Applications 428 (2008), 1628–1648. doi:10.1016/j.laa.2007.10.009.
[3]