Proof of Skill-Based Gameplay

Overview

In our gameplay loop, strategy, learning, and skill are the fundamental factors for success. It is not a game of chance: players can adapt and develop strategies based on AI behavior patterns, use items to craft specific builds that require thoughtful planning, and continuously improve their performance. We define a skill-based game as one in which an advanced player can reliably defeat a novice opponent 95% of the time, and in which the likelihood of winning rises clearly as a player's skill increases. We demonstrate this by training a reinforcement learning agent to play the game at an expert level.
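
To make the 95% benchmark concrete, the sketch below shows how such a win-rate curve could be measured. It is a minimal illustration only: simulate_match and the skill values are hypothetical placeholders, not the actual game or data.

    import random

    def simulate_match(skill_a: float, skill_b: float) -> bool:
        """Toy stand-in for a full match: the higher-skilled player wins more often."""
        p_a_wins = skill_a / (skill_a + skill_b)
        return random.random() < p_a_wins

    def win_rate(skill_a: float, skill_b: float, n_matches: int = 10_000) -> float:
        """Estimate how often player A beats player B over many matches."""
        wins = sum(simulate_match(skill_a, skill_b) for _ in range(n_matches))
        return wins / n_matches

    # An "advanced" player should beat a novice roughly 95% of the time,
    # and the win rate should rise steadily with the skill gap.
    for advanced_skill in (2.0, 5.0, 19.0):
        print(advanced_skill, win_rate(advanced_skill, 1.0))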

Learning AI Enemy Patterns

The AI opponents in our game follow certain patterns and behaviors, which can be studied and understood over time. Knowledge of these patterns allows players to make strategic decisions that effectively counter the AI's actions, thereby enhancing their performance. This indicates that the game relies on learning and understanding, rather than randomness or chance.
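
As an illustration only (the real opponent behaviors and action names are not shown here), the sketch below captures the principle: an enemy that follows a repeating pattern can be countered reliably once that pattern has been learned.

    from itertools import cycle

    # Hypothetical action names; the real game's move set differs.
    COUNTERS = {"slash": "block", "fireball": "dodge", "charge": "parry"}

    enemy_pattern = cycle(["slash", "slash", "fireball", "charge"])  # assumed pattern

    def counter(enemy_action: str) -> str:
        """A player who has learned the pattern picks the counter every time."""
        return COUNTERS[enemy_action]

    for _ in range(6):
        enemy_action = next(enemy_pattern)
        print(enemy_action, "->", counter(enemy_action))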

Use of Items and Strategic Builds

Using Head, Body, and Weapon NFTs as items allows players to create specific builds for their characters, enabling tailored strategies based on the items' attributes. Designing effective combinations of these items demands careful consideration and planning, hallmarks of skill-based gameplay. A player can develop a build around a particular set of actions as a strategy, then adjust it based on the enemies they encounter.
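
A minimal sketch of how such a build might be represented is shown below; the attribute names and numbers are placeholders rather than the actual NFT metadata.

    from dataclasses import dataclass

    @dataclass
    class Item:
        slot: str       # "head", "body", or "weapon"
        attack: int
        defense: int

    @dataclass
    class Build:
        head: Item
        body: Item
        weapon: Item

        @property
        def attack(self) -> int:
            return self.head.attack + self.body.attack + self.weapon.attack

        @property
        def defense(self) -> int:
            return self.head.defense + self.body.defense + self.weapon.defense

    # Two example builds: choosing between them, and adapting the choice to
    # the opponent, is where the planning happens.
    glass_cannon = Build(Item("head", 4, 1), Item("body", 3, 2), Item("weapon", 9, 0))
    tank = Build(Item("head", 1, 5), Item("body", 2, 7), Item("weapon", 4, 2))
    print(glass_cannon.attack, glass_cannon.defense)  # 16 3
    print(tank.attack, tank.defense)                  # 7 14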

AI as a Proof of Skill-Based Gameplay: Use of Reinforcement Learning (RL)

To demonstrate that our game is a test of skill rather than luck, we employed a Reinforcement Learning (RL) algorithm to play it. Reinforcement Learning, a type of Machine Learning, enables an agent to learn and improve its strategy over time. By showing that the agent improves and reaches a near-expert level, we demonstrate that the game is not one of chance but one of thoughtful skill.

Methodology

The methodology is designed to underscore the importance of strategy, decision-making, and learning in our game. Rather than having the RL algorithm assemble its own set of armor, we show the impact of armor choices on the player's score by crafting different builds and training the model on each.
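
The sketch below shows the shape of that experiment; the training routine is a placeholder, and the build names, episode count, and scores are illustrative only.

    import random

    def train_agent_for_build(build_name: str, episodes: int = 1_000) -> float:
        """Stand-in for a real training run; returns the agent's final mean score."""
        # The real pipeline would run the RL loop sketched later in this document;
        # a random number keeps this example self-contained and runnable.
        return random.uniform(0, 100)

    builds = ["aggressive", "balanced", "defensive"]   # hand-crafted armor builds
    scores = {build: train_agent_for_build(build) for build in builds}
    print(scores)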

  1. Environment and Agent: In the context of our game, the environment includes the AI opponent's behavior patterns, the player's damage, the opponent's damage, and the impact of NFT items. The agent is the RL algorithm, responsible for making decisions: selecting the action in each round. The opponent's name is also used as an observation so the agent can learn to play against specific opponent types.

  2. State, Action, Reward: The state refers to the current condition of the environment, based on which the agent decides its action. The chosen action directly influences the outcome of each round. The reward is a feedback mechanism: the score difference between the player and the AI opponent.

  3. Policy: The policy, which is the agent's strategy to decide the action from a given state, is continually refined. Initially, the AI makes random decisions (exploration). However, as it accumulates more knowledge from playing multiple rounds and understanding rewards, it starts making more informed decisions (exploitation).

  4. Learning and Improvement: After each round, the AI learns from its actions and their outcomes, updating its policy based on the rewards earned. Over time, it becomes more skilled at predicting the opponent's actions, choosing its own actions strategically, and minimizing the damage it receives (see the sketch after this list).
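
The following is a minimal tabular Q-learning sketch of the loop described above. The environment is a toy stand-in: the real observations include opponent behavior patterns, damage values, equipped NFT items, and the opponent's name, and every name and number below is illustrative rather than production training code.

    import random
    from collections import defaultdict

    ACTIONS = ["attack", "block", "dodge"]
    OPPONENTS = ["brawler", "mage"]                        # observed opponent names
    BEST_RESPONSE = {"brawler": "block", "mage": "dodge"}  # hidden structure to learn

    def play_round(opponent: str, action: str) -> float:
        """Reward = player's score minus the opponent's score for the round (toy model)."""
        return 1.0 if action == BEST_RESPONSE[opponent] else -1.0

    q = defaultdict(float)        # Q[(state, action)] -> expected reward
    epsilon, alpha = 0.2, 0.1     # exploration rate, learning rate

    for episode in range(5_000):
        state = random.choice(OPPONENTS)
        # Epsilon-greedy policy: random actions early (exploration),
        # learned actions later (exploitation).
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward = play_round(state, action)
        # One-step update toward the observed reward (no next state in this toy).
        q[(state, action)] += alpha * (reward - q[(state, action)])

    # After training, the agent should have learned the counter to each opponent.
    for opponent in OPPONENTS:
        print(opponent, max(ACTIONS, key=lambda a: q[(opponent, a)]))

Tracking the average reward per episode over such a run would show the same progression from random actions to strategic decisions described above.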

The iterative nature of this learning process, the transition from random actions to strategic decisions, and the constant improvement all reflect how our game is skill-based. The AI's growth trajectory provides tangible evidence of the value our game places on learning, strategy, and adaptability. It also emphasizes that success in our game is not a product of chance but the result of continuous learning and skill refinement.

Results

Coming soon.

Conclusion

By prioritizing the learning of enemy patterns, the strategic use of items, and the development of individual gameplay strategies, we establish our game as one of skill rather than chance. The demonstrated progression of an AI learning to play the game underscores this conclusion. Our gameplay loop promotes continuous learning, strategic thinking, and skill improvement, traits that distinguish it as a game of skill.
