Metro Lights Out Solver

In a previous post on creating the Metro Lights Out game, I mentioned the solver.  The solver is intended to demonstrate a solution to the current game state.  Had I just wanted to demonstrate the solution to a specific puzzle, I could have inverted the puzzle creation matrix and then executed the moves backwards.  A general-form solver is trickier: when solving a lights out puzzle you need an actual strategy, because randomly clicking pieces on the board can create circular loops that return the game to previous states, and going too far down one path can leave you worse off than when you started. For my solver, I produced an algorithm that attempts to intelligently brute force a solution from the current game state.

Moves simulation

To begin working on a solver, you need a means of simulating moves so that attempts can be recorded and played back. I took an object-oriented approach and added a LevelSolver class. The LevelSolver class is constructed from a Level and creates its own internal representation of it.  A second class, SolverState, contains the list of moves that were performed to reach the current state, along with the state of the current game (a temporary game board).  A SolverState can simulate a move, producing a new state.
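
Before going further, here is a rough sketch of the SolverState side of that design. This is a minimal outline for orientation, not the shipping code; beyond the member names used later in this post (simulateMove, isComplete, findPotentialMoves, getMoveCount, ListMoveHistory, stateToHash), the fields and types are assumptions.

        // Sketch only (not the original implementation); assumes System.Collections.Generic for List<T>
        public class Move
        {
            public int row;
            public int col;
        }

        public class SolverState
        {
            int rows;
            int cols;
            int[][] _state;                          // temporary game board: 0 = empty, non-zero = occupied
            List<Move> _moves = new List<Move>();    // moves performed to reach this state

            public SolverState(SolverState other)
            {
                // copy the board and move history so simulations don't disturb the parent state
            }

            public void simulateMove(Move m) { /* apply m to _state and record it in _moves */ }
            public bool isComplete() { /* should return true when no squares remain */ return false; }
            public List<Move> findPotentialMoves() { /* enumerate the legal moves from _state */ return new List<Move>(); }
            public int getMoveCount() { return _moves.Count; }
            public void ListMoveHistory() { /* print or log the moves in _moves */ }
            public long stateToHash() { /* shown later in this post */ return 0; }
        }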

From this point, you could try to brute force the solution by randomly clicking moves.  Solving in this manner will probably never finish, because making moves without tracking the states you have already visited just walks in circles.

Representing the problem programmatically

You can think of the board and its current state as a search space.  Viewed this way, the puzzle becomes a graph of potential moves.  The following diagram illustrates such a graph:

Graph of the moves

The deeper the graph is traversed, the more moves the solution has. That particular graph is represented as an n-child tree: each child of a node is a potential move that can be made from that game state.  At this point, we can traverse our imaginary graph to find a solution as the path that leads to the game state where there are no more blocks to eliminate.

Avoiding duplication of moves

Now that we have a way of thinking about the problem, solving it becomes a little easier.  The first thing you probably want to do is avoid making the same move twice, so that you eliminate circular paths in your graph.  Since we have a grid of 49 tiles (7×7) and each has a state of on / off, there are potentially 2^49 states – hundreds of trillions of them – but still small enough to store in a 64-bit value.  You can represent the state as a single number by summing a power of two for each occupied square. The following image illustrates this.

Grid of squares showing the summed values
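
As a small worked example, using the row-major bit ordering in the code below: if only the top-left square (bit 0) and the square at row 1, column 2 (bit 1×7+2 = 9) are occupied, the state value is 2^0 + 2^9 = 1 + 512 = 513.  Every distinct board configuration maps to a distinct 64-bit number.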

By storing the value in a hash table (a HashSet, specifically), you can mark off visited states and avoid going around in circles.  The following code shows the hash calculation:

            // Optimization (0) avoid circular logic
            //
            // Turn the board into a state represented by a long
            // note this is limited to 64 bits (squares)
            // TODO: cache the state on first run, slight perf improvement
            public long stateToHash()
            {
                long hashValue = 0;
                int count = 0;
                for (int row = 0; row < rows; row++)
                {
                    for (int col = 0; col < cols; col++)
                    {
                        hashValue += (_state[row][col] == 0) ? 0L : (1L << count);   // add 2^count for each occupied square
                        count++;
                    }
                }

                return hashValue;
            }

Incorporating a strategy

Now that we have a way of eliminating repeated states, we can traverse the graph completely and find all of the solutions.  This is great, except that the problem space (2^49) is too big to explore quickly.  So we need the solver to pick moves more intelligently. For example, the first optimization should be that the algorithm does not choose moves that don’t eliminate any squares.

The strategy that I ended up going with was:

  • Find the moves that eliminate the most squares

This works very well for simple games and reasonably well as they get more complicated.  It’s not good enough to solve all of the games that I programmed into my app, but it is a good starting point for a general-form solution.
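
That ranking is what the SolverStateComparer used in the implementation below is for. The comparer itself isn’t shown in this post, so here is a minimal sketch of what it might look like, assuming SolverState exposes a count of the squares still on the board (the getRemainingCount method here is hypothetical): states reached by moves that eliminated the most squares sort first.

        // Sketch only: ranks simulated states so the most promising ones come first.
        // Assumes a hypothetical getRemainingCount() accessor on SolverState and
        // implements System.Collections.Generic.IComparer<T> so it can be passed to List<T>.Sort.
        class SolverStateComparer : IComparer<SolverState>
        {
            public int Compare(SolverState a, SolverState b)
            {
                // fewer squares remaining means the move eliminated more, so rank it earlier
                return a.getRemainingCount().CompareTo(b.getRemainingCount());
            }
        }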

Programming the solution

The solution works as follows:

  • Test whether we’re complete and return if so
  • Mark the current state as “seen”
  • Get the available moves
  • Sort the potential moves using our strategy
  • Test the potential moves ordered by rank
  • Repeat until finished

The following code shows the implementation.

        void SolveHelper(SolverState theState)
        {

            // another branch already found a solution, so stop searching
            if (solutionsFound > 0) { return; }

            // level complete? the stack of moves has the solution
            if (theState.isComplete())
            {
                // yay!
                solutionsFound++;
                theState.ListMoveHistory();
                return;
            }

            // hash the state / test for circular states
            long stateHash = theState.stateToHash();

            // skip states we have already explored, and cap the search depth
            if (tested.Contains(stateHash) || (theState.getMoveCount() > maxdepth - 1))
            {
                return;
            }

            // Mark!
            tested.Add(stateHash);

            // Step 1:
            // Get the available moves
            List<Move> available = theState.findPotentialMoves();           

            // Step 2:
            // Simulate all promising moves, store in list
            List<SolverState> simulatedMoves = new List<SolverState>();

            foreach (Move m in available)
            {
                if (solutionsFound > 0) { return; }

                // copy the current state and apply the candidate move
                SolverState toTest = new SolverState(theState);
                toTest.simulateMove(m);

                if (toTest.isComplete())
                {
                    // yay!
                    solutionsFound++;
                    toTest.ListMoveHistory();
                    return;
                }
                else
                {
                    // not solved yet; keep this state so it can be ranked and explored
                    simulatedMoves.Add(toTest);
                }
            }

            simulatedMoves.Sort(new SolverStateComparer());

            // TODO: can we free theState?
            foreach (SolverState toTest in simulatedMoves)
            {
                if (solutionsFound > 0) { return; }

                SolveHelper(toTest);
            }
        }
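
The recursive helper also needs a small entry point that resets the bookkeeping and starts the search from the current board. The original entry point isn’t shown here; a minimal sketch under the same assumptions (tested being the HashSet<long> of visited states, and a hypothetical _initialState built when the LevelSolver was constructed from the Level) might look like this:

        // Sketch of a possible entry point (not the original code).
        // _initialState is a hypothetical SolverState representing the current board.
        public void Solve()
        {
            solutionsFound = 0;
            tested.Clear();     // forget states visited by any previous run

            // start the recursive, ranked search from a copy of the current board
            SolveHelper(new SolverState(_initialState));
        }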

Conclusions

The lights out solver was really fun to program.  The algorithm could be improved significantly.  Some potentially fun things to do with it would be:

  • Incorporate some evolutionary programming: vary the maximum search depth or evaluate other variables (such as the distance to a board edge) and iterate towards improved strategies over time.
  • Program the brute-force solution as a multithreaded app and move it to GPGPU.

Let me know what you think. Are there other cool strategies you can think of?