# Achieving superhuman performance in the board game Squadro using AlphaZero on a single computer with no GPU

AlphaZero is an algorithm created by DeepMind (a company owned by Google) that showed impressive performance in multiple games. Their work became popular through the game of Go, where they beat the #1 world champion with very unconventional plays. This feat is particularly meaningful to me, as it was one of the events that led me to choose my major in Artificial Intelligence.

To achieve such results [1] [2], AlphaZero was trained for 40 hours using 5000 first-generation TPUs and 64 second-generation TPUs. This computing power is still inaccessible to most companies today. Many researchers have worked on implementing AlphaZero in order to reproduce this feat on Go, chess or shōgi. Here are some popular implementations: ELF [3], Leela Zero [4], KataGo [5], AZFour [6]... Few implementations focus on a modest usage of computing resources, and none of them try to tackle games that are not solvable with exact algorithms.

The goal of my work was to replicate the algorithm starting from the official publications and apply it to a moderately complex board game, Squadro, using a single computer with no GPU. In this publication, I first explain how AlphaZero works, then tune its hyperparameters, discuss the different challenges I faced, and finally detail the learning process and my results.

# Methods

A Monte Carlo tree search (MCTS) is a tree where each node corresponds to a game state. The root of the tree is the current state of the board, and the goal is to search for the best action in order to win. Intuitively, an MCTS agent will estimate the probability of winning for each possible action and decide accordingly which one to play. To do that, the algorithm consists of 4 different steps: selection, expansion, simulation and backpropagation.


# Selection

Go down the tree in a best-first search fashion until you reach a leaf node. Select nodes based on their UCT score: $\frac{w_i}{n_i} + c \sqrt{\frac{\ln N}{n_i}}$. In this formula:

  • $w_i$ is the number of won games backpropagated to the successor $i$
  • $n_i$ is the number of times the successor $i$ has been visited
  • $N$ is the number of times that the node, parent of $i$, has been visited
  • $c$ is the exploration parameter

The first term of the addition is referred to as the exploitation term, while the second term is the exploration term. The exploitation term takes advantage of the moves it knows are great, while the exploration term tends to favor unvisited nodes. In theory, the exploration parameter should be equal to $\sqrt{2}$, but it is often fine-tuned to each specific application.

# Expansion

If the selected node does not correspond to an end of the game, add the successors of this node to the tree. They correspond to the legal moves from the selected game board.

# Simulation

From the selected node, play randomly until you reach a game end (win/loss/draw).

# Backpropagation

Push the outcome of the game up the tree: add it to the $w_i$'s of the winner's nodes, but only increment the visit counters $n_i$ of the loser's nodes.


Of course, you have to repeat this process several thousand times in order to get precise results. Depending on the complexity of the game, this technique may already be sufficient to obtain decent performance. The biggest downside of this algorithm is that it must do a huge number of rollouts in order to play accurately. Imagine deriving the true winning probability in the game of Go (19x19 board) with a best-first search strategy... it would simply be computationally intractable!

# Algorithm 1 - Monte Carlo tree search

import math

# The "Node" class is voluntarily omitted, it is really
# straightforward to implement and does not add information
# for your understanding of the algorithm.


class MCTSAgent:
    

    """
    Constructor for a Monte Carlo tree search agent.

    @param rollouts: number of rollouts to make at each turn
    @param c: exploration parameter
    """
    def __init__(self, rollouts, c=1.4):
        self.rollouts = rollouts
        self.c = c
        self.root = None


    """
    Play an action for the current state, given the last action played.

    @param state: current state of the game
    @param last_action: last action played
    @return: selected action
    """
    def play(self, state, last_action):
        # find root node for the search
        if self.root is None:
            self.root = Node(state.copy(), last_action)
        else:
            self.root = self.root.children[last_action]

        # apply rollouts
        for _ in range(self.rollouts):
            self._rollout(self.root)

        # play the most visited child
        max_node = max(self.root.children, key=lambda n: n.N)

        # reflect our action in the search tree
        self.root = self.root.children[max_node.last_action]

        return max_node.last_action


    """
    Apply one rollout. It consists of four different steps:
      - selection
      - expansion
      - simulation
      - backpropagation

    @param node: root node from where to apply the rollout
    """
    def _rollout(self, node):
        path = self._selection(node)
        leaf = path[-1]
        self._expansion(leaf)
        reward = self._simulation(leaf)
        self._backpropagation(path, reward)

    
    """
    Go down the tree and select the first leaf node it encounters.

    @param node: node from where to apply the selection phase
    @return: path from `node` to the selected leaf node
    """
    def _selection(self, node):
        path = []
        while True:
            path.append(node)

            if not node.is_expanded() or node.is_terminal():
                return path

            node = self._uct_select(node)
    

    """
    Select one child based on its UCT score.

    @param node: parent node
    @return: selected child node
    """
    def _uct_select(self, node):
        def uct(n):
            if n.N == 0:
                # unvisited children have the highest priority
                return float("inf")

            return n.W / n.N + self.c * math.sqrt(math.log(node.N) / n.N)

        return max(node.children, key=uct)


    """
    Expand the given `node`.

    @param node: node to expand
    """
    def _expansion(self, node):
        if node.is_expanded():
            # already expanded
            return

        node.expand()


    """
    Play the game until a terminal node is reached.

    @param node: node from where to play
    @return: outcome of the game (reward)
    """
    def _simulation(self, node):
        # The outcome must be expressed from the point of view of the player
        # at the selected node, so the reward is inverted at every ply.
        invert_reward = True
        while True:
            if node.is_terminal():
                reward = node.reward()
                return 1 - reward if invert_reward else reward

            node = node.get_random_child()
            invert_reward = not invert_reward


    """
    Backpropagate the outcome of the game through the selected path.

    @param path: selected path
    @param reward: outcome of the game
    """
    def _backpropagation(self, path, reward):
        for node in reversed(path):
            node.N += 1
            node.W += reward
            reward = 1.0 - reward

# AlphaZero

The idea behind AlphaZero is to use a modified Monte Carlo tree search coupled with a neural network that replaces the simulation phase. Indeed, the neural network will predict:

  • A value ("reward") of the given state. It corresponds to the estimated probability of winning for the current player. It takes a value in the range $[-1, 1]$. A reward of $-1$ means that we are sure that the opponent is going to win, while a reward of $+1$ gives us confidence about our win. A reward of zero means that both players have the same probability of winning.
  • A policy vector which corresponds to a prior probability over each possible action, in order to favor the selection of some over the others.

The value is directly backpropagated, while the policy is used in the selection phase, where we slightly modify the UCT score's formula: $\frac{W_i}{N_i} + c \, P_i \, \frac{\sqrt{N}}{1 + N_i}$. In this formula:

  • $W_i$ is the total reward backpropagated to the successor $i$
  • $N_i$ is the number of times the successor $i$ has been visited
  • $N$ is the number of times that the node, parent of $i$, has been visited
  • $c$ is the exploration parameter
  • $P_i$ is the prior probability of the successor $i$
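
To make the modified selection step concrete, here is a minimal sketch of a PUCT-based selection function. It reuses the node attributes of Algorithm 1 and assumes that each child additionally stores its prior probability in a `P` field; that field name is my own and is not part of the code above.

import math


def puct_select(node, c=1.4):
    # PUCT score: exploitation term W/N plus a prior-weighted exploration term.
    def puct(child):
        q = child.W / child.N if child.N > 0 else 0.0        # average backpropagated reward
        u = c * child.P * math.sqrt(node.N) / (1 + child.N)  # prior-driven exploration bonus
        return q + u

    return max(node.children, key=puct)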

AlphaZero also introduces noise into the Monte Carlo tree search. Instead of using the raw policy output of the network, we add some noise ($\eta \sim \mathrm{Dir}(\alpha)$) to each action such that $P_i = (1 - \epsilon) \, p_i + \epsilon \, \eta_i$, where $p_i$ is the output policy of the network for successor $i$, and $\eta_i$ is the noise added to the successor $i$. In the original publications, $\alpha$ is equal to $0.3$, $0.15$ or $0.03$ (for chess, shōgi and Go respectively), and $\epsilon$ is equal to $0.25$. For $\alpha < 1$, the Dirichlet noise will randomly emphasise one of the components of the policy vector:

Action of the Dirichlet noise on a policy vector

At early training stages, when the policy vector is more uniformly distributed, this ensures AlphaZero sometimes plays randomly and thus considers different types of game moves. While the noise is beneficial to avoid overfitting, it is important to understand that one should not set $\alpha$ to a value above 1. Indeed, this would instead flatten the signal without impacting which component is the strongest:

Bad usage of the Dirichlet noise on a policy vector
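
As a small illustration of this mixing step (the policy vector below is arbitrary and only serves as an example), the effect of $\alpha$ can be reproduced with NumPy:

import numpy as np


def add_dirichlet_noise(policy, alpha=0.3, epsilon=0.25):
    # P_i = (1 - epsilon) * p_i + epsilon * eta_i, with eta ~ Dir(alpha)
    noise = np.random.dirichlet([alpha] * len(policy))
    return (1 - epsilon) * np.asarray(policy) + epsilon * noise


# Example on a nearly uniform 5-action policy (one action per pawn in Squadro):
p = [0.22, 0.20, 0.18, 0.21, 0.19]
print(add_dirichlet_noise(p))              # alpha < 1: one component gets randomly emphasised
print(add_dirichlet_noise(p, alpha=10.0))  # alpha > 1: the vector is merely flattened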

Finally, AlphaZero also adds a new temperature parameter $\tau$. It is used in the choice of which action to play: instead of choosing the most visited child, we stochastically pick one of the successors proportionally to $N_i^{1/\tau}$. During training, $\tau$ is set to $1$ for the first few moves of the game (30 for the game of Go), to allow more exploration. It is then fixed to an infinitesimal value, which corresponds to choosing the most visited child as before.
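
Below is a minimal sketch of this temperature-based action choice, where `visit_counts` holds the visit counts $N_i$ of the root's children; the 30-move threshold is the value reported for Go and is only a placeholder for other games.

import numpy as np


def choose_action(visit_counts, move_number, tau=1.0, exploration_moves=30):
    counts = np.asarray(visit_counts, dtype=np.float64)
    if move_number < exploration_moves:
        # Sample proportionally to N_i^(1/tau) to keep some exploration early in the game.
        probs = counts ** (1.0 / tau)
        return np.random.choice(len(counts), p=probs / probs.sum())
    # tau -> 0 afterwards: equivalent to greedily picking the most visited child.
    return int(np.argmax(counts))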

# Neural network

AlphaZero's neural network consists of a deep residual body that then splits into two distinct heads: one for the value and the other for the policy vector.

The body consists of a single convolutional block followed by 19 residual blocks. The convolutional block is composed of:

  1. A convolution of 256 filters of kernel size 3x3 with stride 1
  2. Batch normalization
  3. A rectifier nonlinearity

Each residual block is made of:

  1. A convolution of 256 filters of kernel size 3x3 with stride 1
  2. Batch normalization
  3. A rectifier nonlinearity
  4. A convolution of 256 filters of kernel size 3x3 with stride 1
  5. Batch normalization
  6. A skip connection that adds the input to the block
  7. A rectifier nonlinearity

The output of the residual tower is passed into the two separate value and policy heads. The policy head consists of:

  1. A convolution of 256 filters of kernel size 1x1 with stride 1
  2. Batch normalization
  3. A rectifier nonlinearity
  4. A fully connected linear layer

Finally, the value head contains:

  1. A convolution of 1 filter of kernel size 1x1 with stride 1
  2. Batch normalization
  3. A rectifier nonlinearity
  4. A fully connected linear layer to a hidden layer of size 256
  5. A rectifier nonlinearity
  6. A fully connected linear layer to a scalar
  7. A tanh nonlinearity outputting a scalar in the range $[-1, 1]$

Here is the corresponding Keras code:

# Algorithm 2 - AlphaZero's neural network

from tensorflow.keras import layers, optimizers, regularizers, Input, Model

NUM_FILTERS = 256
NUM_BLOCKS = 19
CONV_SIZE = 3
REGULARIZATION = 0.0001


"""
Build a convolution layer.

@param input_data: input of the layer
@param filters: number of filters
@param conv_size: size of the convolution
@return: output of the convolution layer
"""
def conv_layer(input_data, filters, conv_size):
    conv = layers.Conv2D(
        filters=filters,
        kernel_size=conv_size,
        padding="same",
        use_bias=False,
        activation="linear",
        kernel_regularizer=regularizers.l2(REGULARIZATION)
    )(input_data)
    conv = layers.BatchNormalization(axis=-1)(conv)
    conv = layers.LeakyReLU()(conv)
    return conv


"""
Build a residual block.

@param input_data: input of the block
@param filters: number of filters for the convolutions
@param conv_size: size of the convolutions
@return: output of the residual block
"""
def res_block(input_data, filters, conv_size):
    res = conv_layer(input_data, filters, conv_size)
    res = layers.Conv2D(
        filters=filters,
        kernel_size=conv_size,
        padding="same",
        use_bias=False,
        activation="linear",
        kernel_regularizer=regularizers.l2(REGULARIZATION)
    )(res)
    res = layers.BatchNormalization(axis=-1)(res)
    res = layers.Add()([input_data, res])
    res = layers.LeakyReLU()(res)
    return res


input_data = Input(shape=(19, 19, 17))

# residual tower
x = conv_layer(input_data, NUM_FILTERS, CONV_SIZE)
for _ in range(NUM_BLOCKS):
    x = res_block(x, NUM_FILTERS, CONV_SIZE)

# value head
value = conv_layer(x, 1, 1)
value = layers.Flatten()(value)
value = layers.Dense(
    256, activation="linear", use_bias=False,
    kernel_regularizer=regularizers.l2(REGULARIZATION)
)(value)
value = layers.LeakyReLU()(value)
value = layers.Dense(
    1, activation="tanh", name="value", use_bias=False,
    kernel_regularizer=regularizers.l2(REGULARIZATION)
)(value)

# policy head
policy = conv_layer(x, NUM_FILTERS, 1)
policy = layers.Flatten()(policy)
policy = layers.Dense(
    # The board of the game of Go is 19x19, and we add a "pass" action
    19*19 + 1, activation="softmax", name="policy", use_bias=False,
    kernel_regularizer=regularizers.l2(REGULARIZATION)
)(policy)

# model
model = Model(input_data, [policy, value])

losses = {
    "policy": "categorical_crossentropy",
    "value": "mean_squared_error"
}
loss_weights = {
    "policy": 0.5,
    "value": 0.5
}
optimizer = optimizers.SGD(learning_rate=0.2, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss=losses, loss_weights=loss_weights)

The policy head uses the categorical crossentropy loss, while the mean squared error is used for the value head. AlphaZero takes advantage of the Nesterov momentum in its Stochastic Gradient Descent (SGD) optimizer, using a momentum parameter of $0.9$. One last interesting fact to note is that we also use an L2 regularization (with a coefficient of $10^{-4}$) on each layer.

# Learning process

AlphaZero learns by self-play: it plays against itself and learns from that experience to play better. It uses a policy iteration algorithm: the more it trains, the more accurate its actions near an ending state become, and thus it can learn to be precise even earlier in the game thanks to the stronger value and policy signals. Moreover, its neural network is able to learn game patterns that can also be useful in early training stages.

More concretely, the training pipeline of AlphaZero consists of three stages, executed in parallel (a schematic sketch of the full loop is given after this list):

  • Self-play: create a training set by recording games of the AlphaZero agent playing against itself. For each game state, we store: the representation of the game state, the policy vector from the Monte Carlo tree search and the winner (+1 if this player won, -1 if the other player won, 0 in case of a draw).
  • Retrain network: sample a mini-batch of 2048 game states from the last 500k games and retrain the current neural network on these inputs.
  • Evaluate network: after every 1000 training loops, test the new network to see if it is stronger. Play 400 games between the latest and the current best neural network. The latest neural network must win at least 55% of the games to be declared the new best player.
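
The sketch below summarizes how these three stages fit together; it is a schematic of the loop, not my actual implementation. The stage functions are passed in as callables because their exact implementation depends on the game and the infrastructure; the default numbers mirror the original values quoted above (mini-batches of 2048 positions, a buffer of the last 500k games, evaluation every 1000 iterations over 400 games, a 55% promotion threshold), while `games_per_iteration` is a placeholder.

import random
from collections import deque


def training_pipeline(self_play, retrain, evaluate, iterations,
                      games_per_iteration=50, buffer_games=500_000,
                      batch_size=2048, eval_interval=1000,
                      eval_games=400, win_threshold=0.55):
    # Expected callables:
    #   self_play(n) -> n games, each a list of (state, mcts_policy, winner) tuples
    #   retrain(positions) -> updates the candidate network on a mini-batch of positions
    #   evaluate(n) -> win rate of the candidate against the current best agent over n games
    replay_buffer = deque(maxlen=buffer_games)  # keeps only the most recent games

    for it in range(1, iterations + 1):
        # 1. self-play: record games of the current best agent playing against itself
        replay_buffer.extend(self_play(games_per_iteration))

        # 2. retrain: sample a mini-batch of positions from the recorded games
        positions = [pos for game in replay_buffer for pos in game]
        retrain(random.sample(positions, min(batch_size, len(positions))))

        # 3. evaluate: periodically decide whether the candidate becomes the new best agent
        if it % eval_interval == 0 and evaluate(eval_games) >= win_threshold:
            print(f"iteration {it}: candidate promoted to best agent")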

# Environment

Squadro is a two-player board game published by Gigamic [7]. The goal of the game is to move four of your five pawns across the board and back. On their turn, a player moves one of their pawns forward by the number of tiles indicated by the last checkpoint it passed. When a pawn reaches the other side of the board (a checkpoint), it stops and heads back towards the other side. If one or more enemy pawns stand in its path, the moving pawn jumps over them and stops right after; the jumped enemy pawn(s) go back to their last checkpoint.

Here is an example of Squadro game board:

Example of Squadro game board

Even if the rules are simple, Squadro is a complex game to master, even for computers. One aspect that makes it difficult is that it contains infinite loops, which seemed very counterintuitive to me. I was able to reproduce this behavior with an alpha-beta pruning agent whose goal is to maximize the opponent's distance to a goal state. Here is one example of an infinite loop:

One infinite loop in the game of Squadro

What makes Squadro interesting to work on is that it is not solvable in practice by an exact algorithm. With a random player, Squadro's average number of turns is 70. Since a player moves one of its 5 pawns per turn, the branching factor is at most 5, so an upper bound on the number of nodes in the search tree for an average game is $5^{70} \approx 10^{49}$. Obviously, we would not be able to visit that many game states in a reasonable amount of time. For this reason, one may want to create an alpha-beta pruning agent with a custom heuristic function, but crafting a smart heuristic function takes time, and it would potentially be far from optimal. This actually motivates the use of reinforcement learning, and in our case, of AlphaZero.

# Challenges

# Computing resources

My implementation of AlphaZero was trained on a single laptop with no GPU at first, and then on a dedicated server (still no GPU - Intel Xeon E3-1230v6 - 4c/8t - 3.5GHz) for convenience. In practice, one should not train a (large) neural network on a CPU. I added this constraint to prove that AlphaZero can be used by anybody, even with a cheap non-distributed setup.

# Performance

Performance plays a critical role in the success of an implementation, even more so in a non-distributed setting. I initially created my version of AlphaZero in Python before realizing memory management was a big issue. It now consists of a client (C++) and a server (Python). The client handles all the logic of the algorithm while the server performs the neural network inference (implemented using TensorFlow). They communicate over gRPC [8]. Now, the bottleneck is the inference of the neural network: my CPU takes roughly 70 ms to make a prediction for a batch size of 16 with a residual tower of 10 blocks and 128 filters. In comparison, the computation time of the Monte Carlo tree search itself is practically negligible. To improve the inference time, one could use a GPU and a quantized version of the neural network (using, for example, TensorRT [9]).
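
To give an idea of how such a figure can be measured, here is a small timing sketch. It assumes a `model` built as in Algorithm 2 (with `NUM_BLOCKS = 10` and `NUM_FILTERS = 128` to match the 10x128 configuration mentioned above) and feeds it random inputs, so the absolute numbers will of course depend on your hardware.

import time

import numpy as np

batch = np.random.rand(16, *model.input_shape[1:]).astype("float32")

model.predict(batch)  # warm-up call (graph tracing, memory allocation)
runs = 20
start = time.perf_counter()
for _ in range(runs):
    model.predict(batch)
elapsed = (time.perf_counter() - start) / runs
print(f"average inference time for a batch of 16: {elapsed * 1000:.1f} ms")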

Because I am using a single computer, the three-stage learning process is done sequentially instead of in parallel. In a more distributed setting, we may want to use a specific framework for handling the complexity of the communication between workers. Ray [10] seems to be a great candidate for that purpose.

To generate games more rapidly, the biggest improvement is to use the parallel version of the Monte Carlo tree search: build a small batch of game states to be evaluated by the neural network, but each time you add a game state to the batch, temporarily consider it to be a loss (called a "virtual loss"; this prevents the search from always going down the same path). Then, evaluate the small batch, apply backpropagation and remove the virtual losses.
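
A rough sketch of the virtual-loss bookkeeping is given below. It reuses the `N`/`W` fields and the `_selection` method of Algorithm 1; the batched network evaluation itself is left out, and the value of the virtual loss is arbitrary.

VIRTUAL_LOSS = 1.0


def collect_leaves(agent, root, batch_size=16):
    # Select `batch_size` paths for one batched evaluation, adding a temporary loss
    # along each path so that successive selections explore different branches.
    paths = []
    for _ in range(batch_size):
        path = agent._selection(root)
        for node in path:
            node.N += 1
            node.W -= VIRTUAL_LOSS
        paths.append(path)
    return paths


def remove_virtual_losses(paths):
    # Undo the temporary visits and losses once the real evaluations have been backpropagated.
    for path in paths:
        for node in path:
            node.N -= 1
            node.W += VIRTUAL_LOSS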

# Parameters tuning

The learning phase is by far the most time-consuming part. Indeed, AlphaZero contains many hyperparameters, and tuning them requires restarting the learning phase every time. With a single computer, we cannot easily sweep over all sets of parameters. For this reason, one should keep all the parameters at the defaults given in the original paper, and only tweak some of them.

# Monte Carlo tree search

  • Exploration parameter ($c$): This parameter plays an important role in the classical Monte Carlo tree search. In fact, depending on the number of rollouts, carefully picking the exploration parameter is key to obtaining the best performance. For Squadro, rather high values of $c$ led to the best pure MCTS agents. This is probably due to the fact that early decisions get you stuck in a winning/losing situation; choosing a high exploration parameter therefore helps the pure MCTS agent to search more broadly instead of more deeply. However, AlphaZero does not use as many rollouts as a classical MCTS because they are much more expensive due to the neural network inference. Thus, using a high exploration parameter led to terrible performance, while choosing the typical value ($\sqrt{2} \approx 1.4$) gave great results.

  • Number of rollouts (true × parallel): AlphaGo Zero originally used 1600 rollouts, while AlphaZero used a more reasonable value of 800. Thus, I initially trained AlphaZero using 1024 rollouts, which was extremely time-consuming. Even if that resulted in a stronger policy in early training stages, I finally decided to decrease the number of rollouts to 160, as the branching factor of Squadro is much smaller than that of Go, shōgi or even chess. I would advise not to go below 10 true rollouts, because the parallel rollouts (those batched together with a virtual loss, as described above) increase exploration; the true rollouts are therefore necessary to make a more intensive use of exploitation.

  • Amount of Dirichlet noise ($\alpha$, $\epsilon$): Some online resources advise you to pick an alpha above 1; you should not do so. As we argued earlier, $\alpha < 1$ is mandatory for the noise to have the expected behavior: randomize some of the actions in early training stages. I have not been able to find any rule of thumb to fix this parameter. As Squadro is closer to chess than it is to Go, I picked the same alpha as AlphaZero used for chess: $\alpha = 0.3$. Concerning the proportion $\epsilon$ of noise mixed with the inferred policy signal, I left it at its default value of $0.25$.

# Neural network

  • Number of residual blocks (5, 10 or 20): Increasing the number of residual blocks improves the performance of AlphaZero, as it allows it to recognize more game patterns. However, it also increases the size of the neural network, as well as the inference time. I initially started training with only 5 blocks, as it allowed me to quickly get a strong feeling about the different parameters, and then eventually increased it to 10 and 20 when I wanted stronger agents.

  • Number of convolutional filters (64, 128 or 256): Similarly to the number of residual blocks, the number of convolutional filters adds many parameters to the neural network but improves its performance. I chose 64, 128 and 256 filters for 5, 10 and 20 residual blocks respectively.

  • L2 regularization: I initially trained AlphaZero with the default regularization value of $10^{-4}$, but I then realized that the neural network was overfitting a lot. Using a higher value resolved this issue and stabilized the learning.

  • Learning rate: I initialize the learning rate to be as high as possible ($0.2$) so that the loss function does not explode, and reduce it by a factor of 10 each time the network is not able to pass a validation round anymore (see the sketch after this list).

  • Number of epochs: The number of epochs plays an important role in the learning. Set it too high and you will overfit; set it too low and the training will be very slow. A small number of epochs per iteration gave me a great compromise between the two.
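
As mentioned in the learning-rate item above, a minimal way to apply this schedule to the Keras optimizer of Algorithm 2 is sketched below; `passed_evaluation` is a placeholder for the outcome of the evaluation stage.

from tensorflow.keras import backend as K


def maybe_reduce_learning_rate(model, passed_evaluation, factor=10.0):
    # Divide the SGD learning rate by `factor` whenever an evaluation round fails.
    if not passed_evaluation:
        lr = K.get_value(model.optimizer.learning_rate)
        K.set_value(model.optimizer.learning_rate, lr / factor)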

# Learning process

  • Mini-batch size (4096): The mini-batch size was chosen experimentally. A batch size of 2048 seemed to overfit, so I increased it to 4096, which gave satisfying results. The mini-batch size of 2048 may have overfit because it was generated from 25 games only, which was not sufficient. In order to use a higher percentage of game states, I increased it to 50 games for a mini-batch size of 4096.

  • Number of games per training iteration (50): As I only use a single computer, game generation is a major problem. The goal is to generate sufficiently many games so that your mini-batch is diverse enough. I first tried 25 games, which was too few, so I increased it to 50 games, which is a better compromise.

  • Number of iterations before evaluation: The number of iterations before evaluation was chosen such that the total number of games played for training is significantly larger than the number of games used for evaluation. It should also not be too large, because failing an iteration would then be too costly in time.

  • Size of the replay buffer: This parameter makes the learning much more stable. However, the size of the replay buffer should not be too large, because we would like the newly inferred policy to be sufficiently used for training the neural network. For this reason, I set it to be 3 times as large as the number of games per training iteration.

  • Number of evaluation games: The number of evaluation games was picked similarly to the number of games per training iteration. This number ensures that we have enough diversification and can thus correctly validate whether the latest agent is better than the current best agent.

# Results

In this section, I describe the performance of three AlphaZero agents:

  • a small model: 5 residual blocks, 64 filters
  • an intermediate model: 10 residual blocks, 128 filters
  • a big model: 20 residual blocks, 256 filters

These three agents use the methods and parameters described earlier. They compete against two baseline agents:

  • a human agent
  • an alpha-beta pruning agent: its heuristic function was carefully crafted using multiple features. It is given 15 minutes for the entire game: in general, this allows it to look 13 actions ahead. It plays better than the humans I have tested it against (including myself).

Here are the results of the AlphaZero agents playing against the baselines:

| ref \ opponent (W/D/L) | human  | alpha-beta |
|------------------------|--------|------------|
| small (5x64)           | 8/0/2  | 4/4/2      |
| intermediate (10x128)  | 22/0/3 | 16/0/9     |
| big (20x256)           | 50/0/0 | 50/0/0     |

The small model is of course the easiest to defeat: its mid/late game looks invincible, but it may sometimes play inaccurately at the beginning of a game. I initially thought this was because I had trained it with too few iterations, but it was not able to pass any further evaluation rounds. It was a great agent for experimenting with the different parameters of AlphaZero, but it is obviously way too small to reach superhuman performance.

The interpretation for the intermediate model is mostly similar to the small model: its early game is sometimes still too weak. One way to mitigate this issue is to increase the number of rollouts: it takes more time to play, but the agent looks much more accurate against a human. However, moderately increasing the number of rollouts does not impact its win rate against the alpha-beta agent.

Finally, the big model meets our expectations: a 100% win rate against both baseline agents. It plays in a very aggressive way: for example, in game 2 of the Appendix, it has already finished 3 of its pawns by the 61st turn (out of 91 turns in total), while the human player has not finished any. It is generally a bad idea to finish that many pawns that early in a game, because the player with more pawns on the board has more control over the opponent's pawns, and is thus more likely to "steal" the win.

The following figure shows the confidence of AlphaZero (20x256, playing red) during game 1 of the Appendix. It also highlights two moments in the game, turns 30 and 70:

Confidence of AlphaZero (20x256) on game 1 of the Appendix

The first moment shows a fairly typical start: both agents have not made much progress and both still have their pawns with a move step of 1 under control. However, AlphaZero has already decided to finish its middle pawn, which is not something a human player would do. Its confidence at this point is still rather low, meaning that this action does not significantly improve its estimated win rate. This is even more striking because, even if this move is very unusual for humans, it seems to be a crucial strategy for AlphaZero. A similar behavior is shown in game 2, confirming our previous arguments.

The second moment is also interesting: it shows that AlphaZero has a noticeably higher confidence, even though that may not be obvious at first sight for a human. One very basic way to measure the advantage of one player over the other is to compute the equivalent of the Manhattan distance: sum the number of actions needed to win, neglecting the opponent's pawns. In this case, it is 12 for the alpha-beta agent, and only 8 for AlphaZero. If we consider that they both have the same control over each other's pawns, which seems to be the case, then AlphaZero is approximately 4 actions ahead of the alpha-beta agent.

# Conclusion

AlphaZero is a general reinforcement learning algorithm that performs well across many different games. As DeepMind has shown, it is able to play better than human professionals in complex games like Go, chess or shōgi, using substantial computing resources. In my work, I explained exactly how the algorithm works, discussed the usage of each hyperparameter in depth, and finally showcased its performance on a moderately complex board game, Squadro, using a single computer with no GPU. This highlights the fact that AlphaZero is accessible to, literally, everyone. Reinforcement learning is an amazing domain to work in and I cannot wait to see how it will evolve in the coming years!

# References

  1. AlphaGo Zero: Starting from scratch. https://deepmind.com/blog/article/alphago-zero-starting-scratch
  2. AlphaZero: Shedding new light on chess, shogi, and Go. https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go
  3. ELF: a platform for game research with AlphaGoZero/AlphaZero reimplementation. https://github.com/pytorch/ELF
  4. Leela Zero: Go engine with no human-provided knowledge, modeled after the AlphaGo Zero paper. https://github.com/leela-zero/leela-zero
  5. KataGo: GTP engine and self-play learning in Go. https://github.com/lightvector/KataGo
  6. AZFour: Connect Four powered by the AlphaZero Algorithm. https://azfour.com
  7. Squadro. https://www.gigamic.com/jeu/squadro-classic
  8. gRPC: A high-performance, open source universal RPC framework. https://grpc.io/
  9. TensorRT: TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators. https://github.com/NVIDIA/TensorRT
  10. Ray: A fast and simple framework for building and running distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library. https://github.com/ray-project/ray

# Appendix

# Examples of games

# Game 1

Player 0: alpha-beta
Player 1: AlphaZero (20x256)
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0
|_|_|_|_|_|_|<|1   |_|_|_|_|_|_|<|1   |_|_|_|_|_|<|_|1   |_|_|_|_|_|<|_|1   |_|_|_|_|<|_|_|1
|_|_|_|_|_|_|<|2   |_|_|_|_|_|_|<|2   |_|_|_|_|_|_|<|2   |_|_|_|_|_|_|<|2   |_|_|_|_|_|_|<|2
|_|_|_|_|_|_|<|3   |_|_|_|_|_|_|<|3   |_|_|_|_|_|_|<|3   |_|^|_|_|_|_|<|3   |_|^|_|_|_|_|<|3
|_|_|_|_|_|_|<|4   |_|^|_|_|_|_|<|4   |_|^|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4
|_|^|^|^|^|^|_|    |_|_|^|^|^|^|_|    |_|_|^|^|^|^|_|    |_|_|^|^|^|^|_|    |_|_|^|^|^|^|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0
|_|_|_|_|<|_|_|1   |_|_|_|<|_|_|_|1   |_|_|_|<|_|_|_|1   |_|_|<|_|_|_|_|1   |_|_|<|_|_|_|_|1
|_|^|_|_|_|_|<|2   |_|^|_|_|_|_|<|2   |_|^|_|_|_|_|<|2   |_|^|_|_|_|_|<|2   |_|^|^|_|_|_|<|2
|_|_|_|_|_|_|<|3   |_|_|_|_|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3
|_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4
|_|_|^|^|^|^|_|    |_|_|^|^|^|^|_|    |_|_|^|_|^|^|_|    |_|_|^|_|^|^|_|    |_|_|_|_|^|^|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|v|_|_|_|_|    |_|_|v|_|_|_|_| 
|_|_|_|_|_|_|<|0   |_|_|^|_|_|_|<|0   |_|_|^|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0
|_|_|<|_|_|_|_|1   |_|_|_|_|_|_|<|1   |_|_|_|_|_|_|<|1   |_|_|_|_|_|_|<|1   |_|_|_|_|_|_|<|1
|_|^|^|_|<|_|_|2   |_|^|_|_|<|_|_|2   |_|^|<|_|_|_|_|2   |_|^|<|_|_|_|_|2   |>|_|_|_|_|_|_|2
|_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3
|_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4
|_|_|_|_|^|^|_|    |_|_|_|_|^|^|_|    |_|_|_|_|^|^|_|    |_|_|_|_|^|^|_|    |_|^|_|_|^|^|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|v|_|_|_|_|    |_|_|v|_|_|_|_|    |_|_|v|_|_|_|_|    |_|_|v|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|v|_|_|_|<|0
|_|_|_|_|_|_|<|1   |_|_|_|_|_|<|_|1   |_|_|_|_|_|<|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1
|>|_|_|_|_|_|_|2   |>|_|_|_|_|_|_|2   |>|_|_|_|_|_|_|2   |>|_|_|_|_|_|_|2   |>|_|_|_|_|_|_|2
|_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|^|_|^|_|_|<|3   |_|^|_|^|_|_|<|3   |_|^|_|^|_|_|<|3
|_|^|_|_|_|_|<|4   |_|^|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4
|_|_|_|_|^|^|_|    |_|_|_|_|^|^|_|    |_|_|_|_|^|^|_|    |_|_|_|_|^|^|_|    |_|_|_|_|^|^|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|v|_|_|_|<|0   |_|_|v|_|_|_|<|0   |_|_|v|_|_|_|<|0   |_|_|v|_|_|_|<|0   |_|_|v|_|_|_|<|0
|_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1
|>|_|_|_|_|_|_|2   |>|_|_|_|_|_|_|2   |>|_|_|_|_|_|_|2   |>|_|_|_|_|_|_|2   |_|_|>|_|_|_|_|2
|_|^|_|^|_|<|_|3   |_|^|_|^|_|<|_|3   |_|^|_|^|_|<|_|3   |_|^|_|^|_|<|_|3   |_|^|_|^|_|<|_|3
|_|_|_|_|_|_|<|4   |_|_|_|_|_|^|<|4   |_|_|_|_|<|_|_|4   |_|_|_|_|<|^|_|4   |_|_|_|_|<|^|_|4
|_|_|_|_|^|^|_|    |_|_|_|_|^|_|_|    |_|_|_|_|^|^|_|    |_|_|_|_|^|_|_|    |_|_|_|_|^|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|v|_|_|_|<|0   |_|_|v|_|_|_|<|0   |_|_|v|_|_|_|<|0   |_|_|v|_|_|_|<|0   |_|_|v|_|_|_|<|0
|_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|^|_|_|<|_|_|1
|_|_|>|_|_|^|_|2   |_|_|_|_|>|^|_|2   |_|^|_|_|>|^|_|2   |_|^|_|_|_|_|>|2   |_|_|_|_|_|_|>|2
|_|^|_|^|_|_|<|3   |_|^|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3
|_|_|_|_|<|_|_|4   |_|_|_|_|<|_|_|4   |_|_|_|_|<|_|_|4   |_|_|_|_|<|_|_|4   |_|_|_|_|<|_|_|4
|_|_|_|_|^|_|_|    |_|_|_|_|^|_|_|    |_|_|_|_|^|_|_|    |_|_|_|_|^|^|_|    |_|_|_|_|^|^|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|v|<|_|_|_|0   |_|_|v|<|_|_|_|0   |_|_|v|<|_|_|_|0   |_|_|_|<|_|_|_|0   |_|_|_|<|_|_|_|0
|_|^|_|_|<|_|_|1   |_|^|_|_|<|_|_|1   |_|^|_|_|<|_|_|1   |_|^|v|_|<|_|_|1   |_|^|v|_|<|_|_|1
|_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2
|_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3
|_|_|_|_|<|_|_|4   |_|_|_|_|<|^|_|4   |_|<|_|_|_|^|_|4   |_|<|_|_|_|^|_|4   |>|_|_|_|_|^|_|4
|_|_|_|_|^|^|_|    |_|_|_|_|^|_|_|    |_|_|_|_|^|_|_|    |_|_|_|_|^|_|_|    |_|_|_|_|^|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|<|_|_|_|0   |_|_|_|<|_|_|_|0   |_|^|_|<|_|_|_|0   |>|_|_|_|_|_|_|0   |>|_|_|_|_|_|_|0
|_|^|_|_|<|_|_|1   |_|^|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1
|_|_|v|_|_|_|>|2   |_|_|v|_|_|_|>|2   |_|_|v|_|_|_|>|2   |_|_|v|_|_|_|>|2   |_|_|v|_|_|_|>|2
|_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|^|_|^|_|_|<|3
|>|_|_|_|_|^|_|4   |_|>|_|_|_|^|_|4   |_|>|_|_|_|^|_|4   |_|>|_|_|_|^|_|4   |>|_|_|_|_|^|_|4
|_|_|_|_|^|_|_|    |_|_|_|_|^|_|_|    |_|_|_|_|^|_|_|    |_|^|_|_|^|_|_|    |_|_|_|_|^|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0   |_|_|>|_|_|_|_|0   |_|_|>|_|_|_|_|0   |_|_|>|_|_|_|_|0
|_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1
|_|_|v|_|_|_|>|2   |_|^|v|_|_|_|>|2   |_|^|v|_|_|_|>|2   |_|^|v|_|_|_|>|2   |_|^|v|_|_|_|>|2
|_|^|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|^|<|3   |_|_|_|^|<|_|_|3
|>|_|_|_|_|^|_|4   |>|_|_|_|_|^|_|4   |>|_|_|_|_|^|_|4   |>|_|_|_|_|_|_|4   |>|_|_|_|_|_|_|4
|_|_|_|_|^|_|_|    |_|_|_|_|^|_|_|    |_|_|_|_|^|_|_|    |_|_|_|_|^|_|_|    |_|_|_|_|^|^|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|>|_|_|_|_|0   |_|_|>|_|_|_|_|0   |_|_|>|_|^|_|_|0   |_|_|>|_|^|_|_|0   |_|_|>|_|^|_|_|0
|_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|_|_|<|1   |_|_|_|_|_|<|_|1   |_|_|_|_|_|<|_|1
|_|^|v|_|^|_|>|2   |_|^|v|_|^|_|>|2   |_|^|v|_|_|_|>|2   |_|^|v|_|_|_|>|2   |_|^|v|_|_|_|>|2
|_|_|_|^|_|_|<|3   |_|_|_|^|_|<|_|3   |_|_|_|^|_|<|_|3   |_|_|_|^|_|<|_|3   |_|_|_|^|_|<|_|3
|>|_|_|_|_|_|_|4   |>|_|_|_|_|_|_|4   |>|_|_|_|_|_|_|4   |>|_|_|_|_|_|_|4   |>|_|_|_|_|^|_|4
|_|_|_|_|_|^|_|    |_|_|_|_|_|^|_|    |_|_|_|_|_|^|_|    |_|_|_|_|_|^|_|    |_|_|_|_|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|>|_|^|_|_|0   |_|_|>|_|^|_|_|0   |_|_|>|_|^|_|_|0   |_|_|>|_|^|_|_|0   |_|_|>|_|^|_|_|0
|_|_|_|_|_|<|_|1   |_|_|_|_|_|<|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1
|_|^|v|_|_|_|>|2   |_|^|v|_|_|_|>|2   |_|^|v|_|_|_|>|2   |_|^|v|_|_|^|>|2   |_|^|v|_|_|^|>|2
|_|_|_|^|<|_|_|3   |_|_|_|^|<|^|_|3   |_|_|_|^|<|^|_|3   |_|_|_|^|<|_|_|3   |_|_|_|^|<|_|_|3
|>|_|_|_|_|^|_|4   |>|_|_|_|_|_|_|4   |>|_|_|_|_|_|_|4   |>|_|_|_|_|_|_|4   |_|>|_|_|_|_|_|4
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|v|_|_|_|_|_| 
|_|_|>|_|^|_|_|0   |_|_|>|_|^|_|_|0   |_|^|>|_|^|_|_|0   |_|^|>|_|^|_|_|0   |_|_|>|_|^|_|_|0
|_|^|_|_|<|_|_|1   |_|^|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1
|_|_|v|_|_|^|>|2   |_|_|v|_|_|^|>|2   |_|_|v|_|_|^|>|2   |_|_|v|_|_|^|>|2   |_|_|v|_|_|^|>|2
|_|_|_|^|<|_|_|3   |_|_|<|_|_|_|_|3   |_|_|<|_|_|_|_|3   |_|<|_|_|_|_|_|3   |_|<|_|_|_|_|_|3
|_|>|_|_|_|_|_|4   |_|>|_|_|_|_|_|4   |_|>|_|_|_|_|_|4   |_|>|_|_|_|_|_|4   |_|>|_|_|_|_|_|4
|_|_|_|_|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|v|_|_|_|_|_|    |_|v|_|_|v|_|_|    |_|v|_|_|v|_|_|    |_|v|_|_|v|_|_|    |_|v|_|_|v|_|_| 
|_|_|>|_|^|_|_|0   |_|_|>|_|_|_|_|0   |_|_|>|_|_|_|_|0   |_|_|>|_|_|_|_|0   |_|_|>|_|_|_|_|0
|_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1   |_|_|_|<|_|_|_|1   |_|_|_|<|_|^|_|1   |_|_|_|<|_|^|_|1
|_|_|v|_|_|^|>|2   |_|_|v|_|_|^|>|2   |_|_|v|_|_|^|>|2   |_|_|v|_|_|_|>|2   |_|_|v|_|_|_|>|2
|>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3
|_|>|_|_|_|_|_|4   |_|>|_|_|_|_|_|4   |_|>|_|_|_|_|_|4   |_|>|_|_|_|_|_|4   |_|_|>|_|_|_|_|4
|_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|v|_|_|    |_|_|_|_|v|_|_|    |_|_|_|_|v|_|_|    |_|_|_|_|v|_|_|    |_|_|_|_|_|_|_| 
|_|_|>|_|_|_|_|0   |_|_|>|_|_|_|_|0   |_|_|>|_|_|^|_|0   |_|_|>|_|_|^|_|0   |_|_|>|_|v|^|_|0
|_|_|_|<|_|^|_|1   |_|_|<|_|_|^|_|1   |_|_|<|_|_|_|_|1   |_|<|_|_|_|_|_|1   |_|<|_|_|_|_|_|1
|_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2
|>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3
|_|_|>|_|_|_|_|4   |_|_|>|_|_|_|_|4   |_|_|>|_|_|_|_|4   |_|_|>|_|_|_|_|4   |_|_|>|_|_|_|_|4
|_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_| 
|_|_|_|>|v|^|_|0   |_|_|_|>|_|^|_|0   |_|_|_|_|>|^|_|0   |_|_|_|_|>|_|_|0   |_|_|_|_|>|_|_|0
|_|<|_|_|_|_|_|1   |_|<|_|_|v|_|_|1   |_|<|_|_|v|_|_|1   |_|<|_|_|v|_|_|1   |>|_|_|_|v|_|_|1
|_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2
|>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3
|_|_|>|_|_|_|_|4   |_|_|>|_|_|_|_|4   |_|_|>|_|_|_|_|4   |_|_|>|_|_|_|_|4   |_|_|>|_|_|_|_|4
|_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_| 
|_|_|_|_|>|_|_|0   |_|_|_|_|>|_|_|0   |_|_|_|_|>|_|_|0   |_|_|_|_|>|_|_|0   |_|_|_|_|>|_|_|0
|>|_|_|_|_|_|_|1   |_|_|_|>|_|_|_|1   |_|_|_|>|_|_|_|1   |_|_|_|>|_|_|_|1   |_|_|_|>|_|_|_|1
|_|v|v|_|v|_|>|2   |_|v|v|_|v|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2
|>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|v|_|_|3   |>|_|_|_|v|_|_|3   |>|_|_|^|v|_|_|3
|_|_|>|_|_|_|_|4   |_|_|>|_|_|_|_|4   |_|_|>|_|_|_|_|4   |_|_|_|>|_|_|_|4   |>|_|_|_|_|_|_|4
|_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|_|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|v|_|    |_|_|_|_|_|_|_|    |_|_|_|_|v|_|_|    |_|_|_|_|v|_|_|    |_|_|_|_|v|_|_| 
|_|_|_|_|_|>|_|0   |>|_|_|_|_|_|_|0   |>|_|_|_|_|_|_|0   |>|_|_|_|_|_|_|0   |>|_|_|_|_|_|_|0
|_|_|_|>|_|_|_|1   |_|_|_|>|_|v|_|1   |_|_|_|>|_|v|_|1   |_|_|_|>|_|_|_|1   |_|_|_|_|_|_|>|1
|_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2
|>|_|_|^|v|_|_|3   |>|_|_|^|v|_|_|3   |_|_|_|_|_|>|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3
|>|_|_|_|_|_|_|4   |>|_|_|_|_|_|_|4   |>|_|_|_|_|_|_|4   |>|_|_|_|_|v|_|4   |>|_|_|_|_|v|_|4
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|>|_|_|_|v|_|_|0   |_|>|_|_|v|_|_|0   |_|>|_|_|_|_|_|0   |_|_|>|_|_|_|_|0   |_|_|>|_|_|_|_|0
|_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|_|v|_|>|1   |_|_|_|_|v|_|>|1   |_|_|_|_|_|_|>|1
|_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|v|_|>|2
|>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3
|>|_|_|_|_|v|_|4   |>|_|_|_|_|v|_|4   |>|_|_|_|_|v|_|4   |>|_|_|_|_|v|_|4   |>|_|_|_|_|v|_|4
|_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|>|_|_|_|0   |_|_|_|>|_|_|_|0   |_|_|_|_|>|_|_|0   |_|_|_|_|>|_|_|0   |_|_|_|_|_|>|_|0
|_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|^|_|_|>|1   |_|_|_|^|_|_|>|1
|_|v|v|_|v|_|>|2   |_|v|v|_|v|_|>|2   |_|v|v|_|v|_|>|2   |_|v|v|_|v|_|>|2   |_|v|v|_|v|_|>|2
|>|_|_|_|_|_|_|3   |>|_|_|^|_|_|_|3   |>|_|_|^|_|_|_|3   |>|_|_|_|_|_|_|3   |>|_|_|_|_|_|_|3
|>|_|_|_|_|v|_|4   |>|_|_|_|_|v|_|4   |>|_|_|_|_|v|_|4   |>|_|_|_|_|v|_|4   |>|_|_|_|_|v|_|4
|_|_|_|^|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|_|_|>|_|0   |_|_|_|_|_|>|_|0   |_|_|_|_|_|>|_|0   |_|_|_|_|_|>|_|0   |_|_|_|_|_|>|_|0
|_|_|_|^|_|_|>|1   |_|_|_|^|_|_|>|1   |_|_|_|^|_|_|>|1   |_|_|_|^|_|_|>|1   |_|_|_|^|_|_|>|1
|_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2   |_|v|v|_|_|_|>|2
|>|_|_|_|v|_|_|3   |_|_|_|>|v|_|_|3   |_|_|_|>|_|_|_|3   |_|_|_|_|_|_|>|3   |_|_|_|_|_|_|>|3
|>|_|_|_|_|v|_|4   |>|_|_|_|_|v|_|4   |>|_|_|_|v|v|_|4   |>|_|_|_|v|v|_|4   |>|_|_|_|_|v|_|4
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|v|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _   
|_|_|_|_|_|_|_|  
|_|_|_|_|_|_|>|0 
|_|_|_|^|_|_|>|1 
|_|v|v|_|_|_|>|2 
|_|_|_|_|_|_|>|3 
|>|_|_|_|_|v|_|4 
|_|_|_|_|v|_|_|  
   0 1 2 3 4     

# Game 2

Player 0: human
Player 1: AlphaZero (20x256)
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0
|_|_|_|_|_|_|<|1   |_|_|_|_|_|_|<|1   |_|_|_|_|_|<|_|1   |_|_|_|_|_|<|_|1   |_|_|_|_|_|<|_|1
|_|_|_|_|_|_|<|2   |_|_|_|_|_|_|<|2   |_|_|_|_|_|_|<|2   |_|_|_|_|_|_|<|2   |_|_|_|_|_|_|<|2
|_|_|_|_|_|_|<|3   |_|_|_|_|_|_|<|3   |_|_|_|_|_|_|<|3   |_|_|_|_|_|_|<|3   |_|_|_|_|_|_|<|3
|_|_|_|_|_|_|<|4   |_|_|_|_|_|^|<|4   |_|_|_|_|_|^|<|4   |_|^|_|_|_|^|<|4   |_|^|_|_|<|_|_|4
|_|^|^|^|^|^|_|    |_|^|^|^|^|_|_|    |_|^|^|^|^|_|_|    |_|_|^|^|^|_|_|    |_|_|^|^|^|^|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0
|_|_|_|_|_|<|_|1   |_|_|_|_|_|<|_|1   |_|_|_|_|_|<|_|1   |_|_|_|_|<|_|_|1   |_|_|_|_|<|_|_|1
|_|_|_|_|_|_|<|2   |_|_|_|_|<|_|_|2   |_|^|_|_|<|_|_|2   |_|^|_|_|<|_|_|2   |_|^|_|_|<|_|_|2
|_|^|_|_|_|_|<|3   |_|^|_|_|_|_|<|3   |_|_|_|_|_|_|<|3   |_|_|_|_|_|_|<|3   |_|_|_|_|^|_|<|3
|_|_|_|_|<|_|_|4   |_|_|_|_|<|_|_|4   |_|_|_|_|<|_|_|4   |_|_|_|_|<|_|_|4   |_|_|_|_|_|_|<|4
|_|_|^|^|^|^|_|    |_|_|^|^|^|^|_|    |_|_|^|^|^|^|_|    |_|_|^|^|^|^|_|    |_|_|^|^|_|^|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|_|_|_|<|0   |_|_|_|_|_|_|<|0   |_|_|_|<|_|_|_|0   |_|_|_|<|_|_|_|0   |_|_|_|<|_|_|_|0
|_|_|_|<|_|_|_|1   |_|^|_|<|_|_|_|1   |_|^|_|<|_|_|_|1   |_|^|_|<|_|_|_|1   |_|^|_|<|_|_|_|1
|_|^|_|_|<|_|_|2   |_|_|_|_|<|_|_|2   |_|_|_|_|<|_|_|2   |_|_|^|_|<|_|_|2   |_|<|_|_|_|_|_|2
|_|_|_|_|^|_|<|3   |_|_|_|_|^|_|<|3   |_|_|_|_|^|_|<|3   |_|_|_|_|^|_|<|3   |_|_|_|_|^|_|<|3
|_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4
|_|_|^|^|_|^|_|    |_|_|^|^|_|^|_|    |_|_|^|^|_|^|_|    |_|_|_|^|_|^|_|    |_|_|^|^|_|^|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|<|_|_|_|0   |_|_|_|<|_|_|_|0   |_|_|_|<|_|_|_|0   |_|_|_|<|_|_|_|0   |_|_|_|<|_|_|_|0
|_|^|_|<|_|_|_|1   |_|^|_|<|_|_|_|1   |_|^|_|<|_|_|_|1   |_|^|_|<|_|_|_|1   |_|^|_|<|_|_|_|1
|_|<|^|_|_|_|_|2   |_|<|^|_|_|_|_|2   |_|<|^|_|_|_|_|2   |>|_|^|_|_|_|_|2   |>|_|^|_|_|^|_|2
|_|_|_|_|^|_|<|3   |_|_|_|_|^|<|_|3   |_|_|_|_|^|<|_|3   |_|_|_|_|^|<|_|3   |_|_|_|_|^|_|<|3
|_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|^|<|4   |_|_|_|_|_|^|<|4   |_|_|_|_|_|_|<|4
|_|_|_|^|_|^|_|    |_|_|_|^|_|^|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|<|_|_|_|0   |_|_|_|<|_|_|_|0   |_|_|_|<|_|_|_|0   |_|_|_|<|_|_|_|0   |_|_|_|<|_|_|_|0
|_|^|_|<|_|_|_|1   |_|^|_|<|_|_|_|1   |_|^|<|_|_|_|_|1   |_|^|<|_|_|^|_|1   |>|_|_|_|_|^|_|1
|_|_|_|>|_|^|_|2   |_|_|_|>|_|^|_|2   |_|_|_|>|_|^|_|2   |_|_|_|>|_|_|_|2   |_|_|_|>|_|_|_|2
|_|_|_|_|^|_|<|3   |_|_|_|^|^|_|<|3   |_|_|_|^|^|_|<|3   |_|_|_|^|^|_|<|3   |_|_|_|^|^|_|<|3
|_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4
|_|_|^|^|_|_|_|    |_|_|^|_|_|_|_|    |_|_|^|_|_|_|_|    |_|_|^|_|_|_|_|    |_|^|^|_|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|v|_| 
|_|_|_|<|^|_|_|0   |_|_|_|<|^|_|_|0   |_|_|_|<|^|^|_|0   |_|_|_|<|^|^|_|0   |_|_|_|<|^|_|_|0
|>|_|_|_|_|^|_|1   |>|_|_|_|_|^|_|1   |>|_|_|_|_|_|_|1   |>|_|_|_|_|_|_|1   |>|_|_|_|_|_|_|1
|_|_|_|>|_|_|_|2   |_|_|_|_|_|>|_|2   |_|_|_|_|_|>|_|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2
|_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3
|_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4
|_|^|^|_|_|_|_|    |_|^|^|_|_|_|_|    |_|^|^|_|_|_|_|    |_|^|^|_|_|_|_|    |_|^|^|_|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|v|v|_|    |_|_|_|_|v|v|_| 
|>|_|_|_|^|_|_|0   |>|_|_|_|^|_|_|0   |>|_|_|_|^|_|_|0   |>|_|_|_|_|_|_|0   |>|_|_|_|_|_|_|0
|>|_|_|_|_|_|_|1   |>|_|_|_|_|_|_|1   |>|_|_|_|_|_|_|1   |>|_|_|_|_|_|_|1   |>|_|_|_|_|_|_|1
|_|_|_|_|_|_|>|2   |_|_|^|_|_|_|>|2   |_|_|^|_|_|_|>|2   |_|_|^|_|_|_|>|2   |_|_|^|_|_|_|>|2
|_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|<|_|3   |_|_|_|^|_|<|_|3   |_|_|_|^|<|_|_|3
|_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4
|_|^|^|_|_|_|_|    |_|^|_|_|_|_|_|    |_|^|_|_|_|_|_|    |_|^|_|_|_|_|_|    |_|^|_|_|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|v|_|_|v|_| 
|>|_|_|_|v|_|_|0   |>|_|_|_|v|_|_|0   |>|_|_|_|v|_|_|0   |>|_|_|_|v|_|_|0   |>|_|_|_|v|_|_|0
|>|_|_|_|_|_|_|1   |>|_|_|_|_|_|_|1   |>|_|_|_|_|_|_|1   |_|_|_|>|_|_|_|1   |_|_|_|>|_|_|_|1
|_|_|^|_|_|_|>|2   |_|_|^|_|_|_|>|2   |_|_|^|_|_|_|>|2   |_|_|^|_|_|_|>|2   |_|_|_|_|_|_|>|2
|_|_|_|^|<|_|_|3   |_|_|<|_|_|_|_|3   |_|_|<|_|_|_|_|3   |_|_|<|_|_|_|_|3   |_|_|<|_|_|_|_|3
|_|_|_|_|_|_|<|4   |_|_|_|_|_|_|<|4   |_|^|_|_|_|_|<|4   |_|^|_|_|_|_|<|4   |_|^|_|_|_|_|<|4
|_|^|_|_|_|_|_|    |_|^|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|v|_|_|v|_|    |_|_|v|_|_|v|_|    |_|_|v|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_| 
|>|_|_|_|v|_|_|0   |>|_|_|_|_|_|_|0   |>|_|_|_|_|_|_|0   |>|_|v|_|_|_|_|0   |>|_|v|_|_|_|_|0
|_|_|_|_|_|_|>|1   |_|_|_|_|v|_|>|1   |_|_|_|_|v|_|>|1   |_|_|_|_|v|_|>|1   |_|_|_|_|v|_|>|1
|_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2
|_|_|<|_|_|_|_|3   |_|_|<|_|_|_|_|3   |_|_|<|_|_|_|_|3   |_|_|<|_|_|_|_|3   |_|_|<|_|_|_|_|3
|_|^|_|_|_|_|<|4   |_|^|_|_|_|_|<|4   |_|^|_|<|_|_|_|4   |_|^|_|<|_|_|_|4   |>|_|_|_|_|_|_|4
|_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|^|_|^|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_| 
|>|_|v|_|_|_|_|0   |>|_|v|_|_|_|_|0   |>|_|v|_|_|_|_|0   |_|>|v|_|_|_|_|0   |_|>|_|_|_|_|_|0
|_|_|_|_|v|_|>|1   |_|_|_|_|v|_|>|1   |_|_|_|_|v|_|>|1   |_|_|_|_|v|_|>|1   |_|_|v|_|v|_|>|1
|_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2
|_|_|<|_|_|_|_|3   |_|_|<|_|_|_|_|3   |_|_|<|_|_|_|_|3   |_|_|<|_|_|_|_|3   |_|_|<|_|_|_|_|3
|>|^|_|_|_|_|_|4   |_|_|>|_|_|_|_|4   |_|^|>|_|_|_|_|4   |_|^|>|_|_|_|_|4   |_|^|>|_|_|_|_|4
|_|_|_|^|_|_|_|    |_|^|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_|    |_|_|_|^|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_| 
|_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0
|_|_|v|_|v|_|>|1   |_|_|v|_|v|_|>|1   |_|_|v|_|v|_|>|1   |_|_|v|_|v|_|>|1   |_|_|v|_|v|_|>|1
|_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2
|_|_|<|_|_|_|_|3   |_|_|<|^|_|_|_|3   |_|_|<|^|_|_|_|3   |_|_|<|^|_|_|_|3   |_|_|<|^|_|_|_|3
|_|^|_|>|_|_|_|4   |>|^|_|_|_|_|_|4   |_|_|>|_|_|_|_|4   |_|^|>|_|_|_|_|4   |_|^|_|>|_|_|_|4
|_|_|_|^|_|_|_|    |_|_|_|_|_|_|_|    |_|^|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_| 
|_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0
|_|_|_|_|v|_|>|1   |_|_|_|_|v|_|>|1   |_|_|_|_|v|_|>|1   |_|_|_|_|v|_|>|1   |_|_|_|_|_|_|>|1
|_|_|v|_|_|_|>|2   |_|_|v|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|_|_|>|2   |_|_|_|_|v|_|>|2
|_|_|<|^|_|_|_|3   |_|_|<|^|_|_|_|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3
|_|^|_|>|_|_|_|4   |_|^|_|_|>|_|_|4   |_|^|v|_|>|_|_|4   |_|^|v|_|_|>|_|4   |_|^|v|_|_|>|_|4
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_| 
|_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0   |_|_|>|_|_|_|_|0   |_|_|>|_|_|_|_|0   |_|_|>|_|_|_|_|0
|_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|^|_|_|>|1   |_|_|_|^|_|_|>|1
|_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2
|_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|^|_|_|<|3   |_|_|_|_|_|_|<|3   |_|_|_|_|_|<|_|3
|_|^|v|_|_|_|>|4   |_|^|_|_|_|_|>|4   |_|^|_|_|_|_|>|4   |_|^|_|_|_|_|>|4   |_|^|_|_|_|_|>|4
|_|_|_|_|_|_|_|    |_|_|v|_|_|_|_|    |_|_|v|_|_|_|_|    |_|_|v|_|_|_|_|    |_|_|v|_|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|v|_|v|_|    |_|_|_|v|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_| 
|_|_|>|_|_|_|_|0   |_|_|_|>|_|_|_|0   |>|_|_|_|_|_|_|0   |_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0
|_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|v|_|_|>|1   |_|_|_|v|_|_|>|1   |_|_|_|_|_|_|>|1
|_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2
|_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|v|_|<|_|3
|_|^|_|_|_|_|>|4   |_|^|_|_|_|_|>|4   |_|^|_|_|_|_|>|4   |_|^|_|_|_|_|>|4   |_|^|_|_|_|_|>|4
|_|_|v|_|_|_|_|    |_|_|v|_|_|_|_|    |_|_|v|_|_|_|_|    |_|_|v|_|_|_|_|    |_|_|v|_|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|v|_| 
|_|_|>|_|_|_|_|0   |_|_|>|_|_|_|_|0   |_|_|_|>|_|_|_|0   |_|_|_|>|_|_|_|0   |_|_|_|_|>|_|_|0
|_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1
|_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2
|_|_|_|v|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|^|_|_|_|<|_|3   |_|^|_|_|_|<|_|3
|_|^|_|_|_|_|>|4   |_|^|_|_|_|_|>|4   |_|^|_|_|_|_|>|4   |_|_|_|_|_|_|>|4   |_|_|_|_|_|_|>|4
|_|_|v|_|_|_|_|    |_|_|v|v|_|_|_|    |_|_|v|v|_|_|_|    |_|_|v|v|_|_|_|    |_|_|v|v|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|v|_|    |_|_|_|_|_|v|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|_|>|_|_|0   |_|_|_|_|_|>|_|0   |>|_|_|_|_|_|_|0   |_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0
|_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|_|_|v|>|1   |_|_|_|_|_|v|>|1   |_|_|_|_|_|_|>|1
|_|^|_|_|v|_|>|2   |_|^|_|_|v|_|>|2   |_|^|_|_|v|_|>|2   |_|^|_|_|v|_|>|2   |_|^|_|_|v|_|>|2
|_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|_|_|_|<|3
|_|_|_|_|_|_|>|4   |_|_|_|_|_|_|>|4   |_|_|_|_|_|_|>|4   |_|_|_|_|_|_|>|4   |_|_|_|_|_|v|>|4
|_|_|v|v|_|_|_|    |_|_|v|v|_|_|_|    |_|_|v|v|_|_|_|    |_|_|v|v|_|_|_|    |_|_|v|v|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|>|_|_|_|_|_|0   |_|>|_|_|_|_|_|0   |_|_|>|_|_|_|_|0   |_|^|>|_|_|_|_|0   |_|^|_|>|_|_|_|0
|_|_|_|_|_|_|>|1   |_|^|_|_|_|_|>|1   |_|^|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1
|_|^|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2
|_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3
|_|_|_|_|_|v|>|4   |_|_|_|_|_|v|>|4   |_|_|_|_|_|v|>|4   |_|_|_|_|_|v|>|4   |_|_|_|_|_|v|>|4
|_|_|v|v|_|_|_|    |_|_|v|v|_|_|_|    |_|_|v|v|_|_|_|    |_|_|v|v|_|_|_|    |_|_|v|v|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _      _ _ _ _ _ _ _  
|_|v|_|_|_|_|_|    |_|v|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_|    |_|_|_|_|_|_|_| 
|_|_|_|>|_|_|_|0   |_|_|_|_|>|_|_|0   |_|_|_|_|>|_|_|0   |_|_|_|_|_|>|_|0   |_|_|_|_|_|>|_|0
|_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1   |_|_|_|_|_|_|>|1
|_|_|_|_|v|_|>|2   |_|_|_|_|v|_|>|2   |_|v|_|_|v|_|>|2   |_|v|_|_|v|_|>|2   |_|_|_|_|v|_|>|2
|_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3   |_|_|_|_|_|<|_|3
|_|_|_|_|_|v|>|4   |_|_|_|_|_|v|>|4   |_|_|_|_|_|v|>|4   |_|_|_|_|_|v|>|4   |_|_|_|_|_|v|>|4
|_|_|v|v|_|_|_|    |_|_|v|v|_|_|_|    |_|_|v|v|_|_|_|    |_|_|v|v|_|_|_|    |_|v|v|v|_|_|_| 
   0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4          0 1 2 3 4    
 _ _ _ _ _ _ _   
|_|_|_|_|_|_|_|  
|_|_|_|_|_|_|>|0 
|_|_|_|_|_|_|>|1 
|_|_|_|_|v|_|>|2 
|_|_|_|_|_|<|_|3 
|_|_|_|_|_|v|>|4 
|_|v|v|v|_|_|_|  
   0 1 2 3 4