[1] From what I can remember without doing more research, one approach is to build an AI that analyzes all possible decisions, but then limit the depth of the search: for example, let it only look 2 or 3 moves ahead. It will choose a good move based on the current situation, but it will often miss the optimal one.
[2] Another concept is to make an optimal AI, then randomize its weights by some amount. Each instance of the AI will then choose with some bias, not perfectly, and each instance's bias will differ, meaning that, if done well, the player could identify an opponent's weakness and play against that.
[3] Then, of course, you can make something like a deep-learning AI and train it on imperfect human data. The output may be good, but it wouldn't be perfect.
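The first two ideas can be sketched together: a depth-limited minimax search whose evaluation gets a per-instance random jitter. This is a minimal toy, not code from any real game; the "pick a number from a shared pool" game and all function names here are made up to stand in for a real card game's state and rules.

```python
import random

# Toy two-player zero-sum game: players alternately take a number from a
# shared pool; the final score is (AI total - opponent total). This stands
# in for whatever state/score a real card game would use.

def evaluate(ai_total, opp_total, noise=0.0):
    # Idea [2]: jitter the evaluation so each AI instance has its own bias.
    return ai_total - opp_total + random.uniform(-noise, noise)

def minimax(pool, ai_total, opp_total, depth, ai_turn, noise):
    # Idea [1]: stop searching past a fixed depth and fall back on a
    # heuristic estimate, so the AI plays "good" rather than optimal moves.
    if depth == 0 or not pool:
        return evaluate(ai_total, opp_total, noise)
    scores = []
    for m in pool:
        rest = [x for x in pool if x != m]
        if ai_turn:
            scores.append(minimax(rest, ai_total + m, opp_total,
                                  depth - 1, False, noise))
        else:
            scores.append(minimax(rest, ai_total, opp_total + m,
                                  depth - 1, True, noise))
    return max(scores) if ai_turn else min(scores)

def choose_move(pool, depth=2, noise=0.0):
    # Pick the move whose resulting position scores best for the AI.
    return max(pool, key=lambda m: minimax([x for x in pool if x != m],
                                           m, 0, depth - 1, False, noise))
```

With `noise=0` this is plain depth-limited minimax; raising `noise` makes each AI instance slightly and differently wrong, which is the "identifiable bias" effect described above. (Pool values are assumed distinct in this toy.)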
[1] Interesting approach. I was working through this conceptually, and couldn't figure out how to simulate the feeling of "running through moves in your head, and making one after ~6 seconds" which is how I often feel while playing a TCG or Poker. This is helpful. Thank you.
[2] This was (to me) the most obvious solution, but it didn't sit right with me... Although, I have considered the idea of having different levels ("difficulties") and that this may be the best way to accomplish that. Huh.
[3] This is waaay above my abilities as a programmer, but it would certainly be interesting... having AI that "gets to know you" as a regular in a virtual casino.
If anyone here wants to "break it down" for us: I'm curious as to how that would work. I'm not asking for code, as much as a "10,000 foot overview" conceptually... Well, unless you have code to share. I'm certainly not discouraging that.
@Cloaked Games - that is a good approach, and reminds me of an algorithm called alpha-beta pruning. That one doesn't look at all possible next states; instead it prunes branches that can't beat a "threshold" (a bound) already established by the search. Loosely, it ends up skipping states that most rational players wouldn't voluntarily enter into (e.g., betting all your chips every time). If the game is complex enough and you want an even deeper search, this is often preferable.
To do what either Cloaked Games or I am talking about - minimax in his case, alpha-beta in mine - you need to score each state of the card game that the player(s) can enter. That scoring function is called a heuristic. Many heuristics have a "worst score" of 0 and increase from there. If you want a more "realistic" AI, you can design it so that the other player's hand is unknown to the AI. Then we could only score the next state based on our current hand against all other possible hands, and (possibly) on the behavior of the other player (i.e., would she behave this way if she had a bad hand, etc.).
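As a minimal sketch of the above, here is alpha-beta pruning on a toy "take a number from the pool" game. The state shape, the `heuristic` function, and all names are illustrative placeholders, not anyone's actual card-game code; in a real hidden-hand game the heuristic would average over the opponent hands still consistent with play, as described above.

```python
# State: (remaining pool of distinct numbers, AI total, opponent total).

def heuristic(state):
    pool, ai_total, opp_total = state
    # Placeholder score; a real card AI with hidden information would
    # score our hand against all possible opponent hands here.
    return ai_total - opp_total

def alphabeta(state, depth, alpha, beta, ai_turn):
    pool, ai_total, opp_total = state
    if depth == 0 or not pool:
        return heuristic(state)
    if ai_turn:
        best = float("-inf")
        for m in pool:
            rest = tuple(x for x in pool if x != m)
            best = max(best, alphabeta((rest, ai_total + m, opp_total),
                                       depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: the opponent would never let play reach here
        return best
    else:
        best = float("inf")
        for m in pool:
            rest = tuple(x for x in pool if x != m)
            best = min(best, alphabeta((rest, ai_total, opp_total + m),
                                       depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break  # prune: we already have a better line elsewhere
        return best

def best_move(state, depth=3):
    pool, ai_total, opp_total = state
    return max(pool, key=lambda m: alphabeta(
        (tuple(x for x in pool if x != m), ai_total + m, opp_total),
        depth - 1, float("-inf"), float("inf"), False))
```

The root value is identical to plain minimax; the `break` lines are the whole trick, cutting off branches that provably can't change the decision, which is what allows a deeper search in the same time budget.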
This is great information to have. Sometimes the hardest thing about the "research phase" of design is knowing what to search for.
Alpha-beta pruning and minimax are things I will be looking into. Gracias.
I've read only a little about AI decision-making, but I recall the minimax algorithm is commonly used in simple decision-making situations, a good example being tic-tac-toe; on its own it doesn't scale to complex stuff like chess, though with pruning and depth limits it's the basis of classic chess engines.
Gotcha.
I played around with a few different methods of AI decision making in my game DRAW!!.
- Enemies with a fixed sequence of cards. These enemies were actually pretty fun to play against, because if you figured out the pattern you could completely destroy them. The challenge revolved around making the right decisions to play around the cards you knew were coming. Replayability wasn't great, though, because the same thing would happen every time.
- Enemies that played cards completely randomly. These were definitely the least fun, and also the most difficult. Because there was no way to predict what the enemy was going to do, the player's choices felt like they didn't matter as much.
- Enemies that played a fixed sequence of "plays." These enemies had a fixed sequence of moves, but the moves were not tied to specific cards. The first enemy in the game, for example, follows the pattern dodge, shoot, dodge, shoot, dodge, shoot.... During a "shoot" phase, the enemy would always shoot at the player instead of in a random direction (DRAW!! had different cards for shooting in different directions). These enemies struck a good balance between feeling dynamic and letting the player predict the majority of their attacks.
- Enemies that played a fixed sequence of "plays" with a random chance to make mistakes. In DRAW!! enemies never actually had their own hand of cards or a deck. Instead, I would create enemy patterns as in #3; if a pattern was too good, I would add a chance for the AI to make a mistake. This is actually very common across all sorts of games: in a lot of FPS games, for example, enemies will always miss their first shot to give the player a chance to react. I think this is my preferred method of AI, as it gives enemies predictability while also making it seem like they are making natural plays with occasional mistakes.
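The last two bullet points can be sketched in a few lines. This is not DRAW!!'s actual code; the pattern, play names, and `mistake_chance` parameter are all made up to illustrate the "fixed plays with occasional mistakes" idea.

```python
import random

# Hypothetical enemy pattern, per the "dodge, shoot, dodge, shoot" example.
PATTERN = ["dodge", "shoot", "dodge", "shoot"]

def next_play(turn, mistake_chance=0.1, rng=random):
    """Return the enemy's play for this turn number."""
    intended = PATTERN[turn % len(PATTERN)]
    # Occasionally deviate from the pattern so the enemy feels fallible.
    if rng.random() < mistake_chance:
        return rng.choice(["dodge", "shoot_left", "shoot_right"])
    if intended == "shoot":
        return "shoot_at_player"  # always aim at the player, never randomly
    return intended
```

Tuning `mistake_chance` per enemy is the balancing knob: 0 gives the fully predictable type-3 enemy, while a small positive value gives the type-4 "natural plays with occasional mistakes" feel.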
Hope this helps! If you want to play my game you will actually be able to see all these types of decision making in action.
1. Might be good for a tutorial, though.
2. This sounds awful. I agree.
3. I hadn't considered this at all. That might work for a card-game I'm meditating on. It's based on a board-game I played as a kid, but I've made something nearly-original out of it. This sounds like something I could pull off. Very duly noted.
4. That's greasy, but I can see how it would "feel" the best.
Very interesting. Thanks for sharing!
I've played DRAW!! (very fun, reminds me of a mini-game from the original Kirby) but haven't checked out Decks of Dexterity yet. I'll make sure to do so.
- - -
To contribute something to the discussion, here are a few videos from my collection of tutorials: