template<blackboard_trait T>
plan<T> aitoolkit::goap::planner(std::vector<action_ptr<T>> actions, T initital_blackboard, T goal_blackboard, size_t max_iterations = 0)
    Create a plan.

Introduction
Goal Oriented Action Planning (GOAP) is a planning algorithm that finds a sequence of actions leading from an initial state to a goal state. It works by searching through a graph of possible actions and their effects. The algorithm is guaranteed to find a solution if one exists, but it is not guaranteed to find the optimal one.
graph LR
  start[Start] --> action1
  action1[Get axe] --> action2
  action2[Chop tree] --> goal
  goal[Goal]
  action3[Get pickaxe] --> action4
  action3 --> action5
  action4[Mine gold]
  action5[Mine stone]
  style start fill:darkred
  style goal fill:darkgreen
Usage
First, include the header file:
#include <aitoolkit/goap.hpp>
Then, create a blackboard class that will hold the state of the planner:
struct blackboard_type {
  bool has_axe{false};
  bool has_pickaxe{false};
  int wood{0};
  int gold{0};
  int stone{0};
};
NB: The blackboard needs to be comparable (a == b) and hashable. 
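For example, here is a minimal sketch of one way the blackboard above could satisfy both requirements, assuming the planner relies on operator== and std::hash<T> (adapt this to your project's conventions):
#include <cstddef>
#include <functional>

// Equality: compare every field (a C++20 defaulted operator== declared inside
// the struct works just as well).
constexpr bool operator==(const blackboard_type& a, const blackboard_type& b) {
  return a.has_axe == b.has_axe
      && a.has_pickaxe == b.has_pickaxe
      && a.wood == b.wood
      && a.gold == b.gold
      && a.stone == b.stone;
}

// Hashing: combine the hashes of every field.
template<>
struct std::hash<blackboard_type> {
  size_t operator()(const blackboard_type& bb) const noexcept {
    auto combine = [](size_t seed, size_t value) {
      return seed ^ (value + 0x9e3779b9 + (seed << 6) + (seed >> 2));
    };

    size_t seed = std::hash<bool>{}(bb.has_axe);
    seed = combine(seed, std::hash<bool>{}(bb.has_pickaxe));
    seed = combine(seed, std::hash<int>{}(bb.wood));
    seed = combine(seed, std::hash<int>{}(bb.gold));
    seed = combine(seed, std::hash<int>{}(bb.stone));
    return seed;
  }
};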
Next, create a class for each action that you want to be able to perform:
using namespace aitoolkit::goap;
 
class get_axe final : public action<blackboard_type> {
 
  public:
    virtual float cost(const blackboard_type& blackboard) const override {
      return 1.0f;
    }
 
    virtual bool check_preconditions(const blackboard_type& blackboard) const override {
      return !blackboard.has_axe;
    }
 
    virtual void apply_effects(blackboard_type& blackboard, bool dry_run) const override {
      blackboard.has_axe = true;
    }
};
 
class get_pickaxe final : public action<blackboard_type> {
 
  public:
    virtual float cost(const blackboard_type& blackboard) const override {
      return 1.0f;
    }
 
    virtual bool check_preconditions(const blackboard_type& blackboard) const override {
      return !blackboard.has_pickaxe;
    }
 
    virtual void apply_effects(blackboard_type& blackboard, bool dry_run) const override {
      blackboard.has_pickaxe = true;
    }
};
 
class chop_tree final : public action<blackboard_type> {
 
  public:
    virtual float cost(const blackboard_type& blackboard) const override {
      return 1.0f;
    }
 
    virtual bool check_preconditions(const blackboard_type& blackboard) const override {
      return blackboard.has_axe;
    }
 
    virtual void apply_effects(blackboard_type& blackboard, bool dry_run) const override {
      blackboard.wood += 1;
    }
};
 
class mine_gold final : public action<blackboard_type> {
 
  public:
    virtual float cost(const blackboard_type& blackboard) const override {
      return 1.0f;
    }
 
    virtual bool check_preconditions(const blackboard_type& blackboard) const override {
      return blackboard.has_pickaxe;
    }
 
    virtual void apply_effects(blackboard_type& blackboard, bool dry_run) const override {
      blackboard.gold += 1;
    }
};
 
class mine_stone final : public action<blackboard_type> {
 
  public:
    virtual float cost(const blackboard_type& blackboard) const override {
      return 1.0f;
    }
 
    virtual bool check_preconditions(const blackboard_type& blackboard) const override {
      return blackboard.has_pickaxe;
    }
 
    virtual void apply_effects(blackboard_type& blackboard, bool dry_run) const override {
      blackboard.stone += 1;
    }
};
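Note that apply_effects receives a dry_run flag that the examples above ignore. A hedged sketch (assuming dry_run is true while the planner is simulating actions during its search, and false when the plan is executed against the real blackboard) of keeping game-side effects out of the planning phase:
// Hypothetical variant of chop_tree: the blackboard change is always applied,
// but side effects outside the blackboard only run when dry_run is false
// (assumption: dry_run is true during the planner's search, false at execution).
class chop_tree_with_feedback final : public action<blackboard_type> {
  public:
    virtual float cost(const blackboard_type& blackboard) const override {
      return 1.0f;
    }

    virtual bool check_preconditions(const blackboard_type& blackboard) const override {
      return blackboard.has_axe;
    }

    virtual void apply_effects(blackboard_type& blackboard, bool dry_run) const override {
      blackboard.wood += 1;

      if (!dry_run) {
        // Trigger gameplay feedback here, e.g. a hypothetical play_chop_animation().
      }
    }
};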
Finally, create a plan and run it:
auto initial = blackboard_type{};
auto goal = blackboard_type{
  .has_axe = true,
  .has_pickaxe = true,
  .wood = 1,
  .gold = 1,
  .stone = 1
};
 
auto p = planner<blackboard_type>(
  action_list<blackboard_type>(
    get_axe{},
    get_pickaxe{},
    chop_tree{},
    mine_gold{},
    mine_stone{}
  ),
  initial,
  goal
);
 
auto blackboard = initial;
while (p) {
  p.run_next(blackboard); // pops the next action and applies its effects to the blackboard
}
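If a solution exists for the given actions, the blackboard should now match the goal state. A quick sanity check, as a sketch (using assert from <cassert>):
assert(blackboard.has_axe);
assert(blackboard.has_pickaxe);
assert(blackboard.wood == 1);
assert(blackboard.gold == 1);
assert(blackboard.stone == 1);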
◆ planner()
template<blackboard_trait T>
plan<T> aitoolkit::goap::planner(
  std::vector<action_ptr<T>> actions,
  T initital_blackboard,
  T goal_blackboard,
  size_t max_iterations = 0
)
 
Create a plan. 
The plan is created by providing a list of actions, an initial blackboard state, and a goal blackboard state. The plan will then find a sequence of actions that will lead to the goal state.
Parameters

- actions: The list of actions that can be performed.
- initital_blackboard: The initial state of the blackboard.
- goal_blackboard: The goal state of the blackboard.
- max_iterations: The maximum number of iterations to perform before giving up. If 0, the planner will run until it finds a solution.

Returns

A plan that can be executed.
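For instance, a hedged sketch of bounding the search with max_iterations, useful when the goal may be unreachable with the available actions:
// Cap the search at 1,000 iterations so planning cannot stall on an
// unreachable goal; past that limit the planner gives up.
auto bounded_plan = planner<blackboard_type>(
  action_list<blackboard_type>(
    get_axe{},
    chop_tree{}
  ),
  initial,
  goal,
  1'000
);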