Tag Archives: Evolutionary Algorithms

Evolving Game-specific UCB Alternatives for General Video Game Playing

3 May

Interesting paper presented as a talk at EvoGAMES 2017 in Amsterdam (The Netherlands) by Ivan Bravi.

Co-authored by: Ahmed Khalifa, Christoffer Holmgard, Julian Togelius

ABSTRACT:

At the core of the most popular version of the Monte Carlo Tree Search (MCTS) algorithm is the UCB1 (Upper Confidence Bound) equation. This equation decides which node to explore next, and therefore shapes the behavior of the search process. If the UCB1 equation is replaced with another equation, the behavior of the MCTS algorithm changes, which might increase its performance on certain problems (and decrease it on others). In this paper, we use genetic programming to evolve replacements to the UCB1 equation targeted at playing individual games in the General Video Game AI (GVGAI) Framework. Each equation is evolved to maximize playing strength in a single game, but is then also tested on all other games in our test set. For every game included in the experiments, we found a UCB replacement that performs significantly better than standard UCB1. Additionally, evolved UCB replacements also tend to improve performance in some GVGAI games for which they are not evolved, showing that improvements generalize across games to clusters of games with similar game mechanics or algorithmic performance. Such an evolved portfolio of UCB variations could be useful for a hyper-heuristic game-playing agent, allowing it to select the most appropriate heuristics for particular games or problems in general.
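For context, the standard UCB1 rule mentioned above scores each child node by its mean reward plus an exploration bonus, and the search descends into the highest-scoring child. The following minimal Python sketch of that selection step is only an illustration; the Node class and the exploration constant C are our assumptions, not code from the paper or from the GVGAI framework.

import math

C = math.sqrt(2)  # commonly used exploration constant

class Node:
    def __init__(self):
        self.children = []       # child nodes
        self.visits = 0          # times this node has been visited
        self.total_reward = 0.0  # sum of rollout rewards backed up here

def ucb1(child, parent_visits):
    if child.visits == 0:
        return float("inf")  # always try unvisited children first
    mean = child.total_reward / child.visits
    exploration = C * math.sqrt(math.log(parent_visits) / child.visits)
    return mean + exploration

def select_child(node):
    # MCTS picks the child with the highest UCB1 score.
    return max(node.children, key=lambda ch: ucb1(ch, node.visits))

The genetic programming approach described in the paper evolves replacements for exactly this scoring function, one per target game.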

PRESENTATION:

Ivan Bravi – EvoGAMES – Evolving UCT alternatives

LINK TO THE PAPER:

https://link.springer.com/chapter/10.1007/978-3-319-55849-3_26

http://julian.togelius.com/Bravi2017Evolving.pdf

You can start writing your papers for EvoGAMES. Time is running out!

20 Sep

November 1st is the deadline to submit your high-quality contributions to EvoGAMES 2017. And to the rest of the Evo* conferences and tracks, of course.

As you know, the topics of interest mainly focus on the application of bio-inspired algorithms to games and related research lines. Namely, we are interested in:
– Computational Intelligence in video games
– Intelligent avatars and new forms of player interaction
– Player experience measurement and optimization
– Procedural content generation
– Human-like artificial adversaries and emotion modelling
– Authentic movement, believable multi-agent control
– Experimental methods for gameplay evaluation
– Evolutionary testing and debugging of games
– Adaptive and interactive narrative and cinematography
– Games related to social, economic, and financial simulations
– Adaptive educational, serious and/or social games
– General game intelligence (e.g. general purpose drop-n-play Non-Player Characters, NPCs)
– Monte-Carlo tree search (MCTS)
– Affective computing in Games

The Evo* event will be held in Amsterdam in April 2017, so you’d better do very good work to get there!

You’ll have a lot of space to describe your work, up to 16 pages.

As usual, the accepted submissions will be included in the Evo* proceedings (LNCS), but this year a selection of the best papers from EvoAPPS will be invited to submit an extended version to a special issue of the Memetic Computing journal.

See you at the Red Light District in Amsterdam! 😀

Orthogonally Evolved AI to Improve Difficulty Adjustment in Video Games

5 Apr

Paper presented as a talk at EvoGAMES 2016 in Porto (Portugal), and selected as one of the best papers of the conference.

BY:
Arend Hintze, Randal Olson, Joel Lehman

ABSTRACT:
Computer games are most engaging when their difficulty is well matched to the player’s ability, thereby providing an experience in which the player is neither overwhelmed nor bored. In games where the player interacts with computer-controlled opponents, the difficulty of the game can be adjusted not only by changing the distribution of opponents or game resources, but also through modifying the skill of the opponents. Applying evolutionary algorithms to evolve the artificial intelligence that controls opponent agents is one established method for adjusting opponent difficulty. Less-evolved agents (i.e. agents subject to fewer generations of evolution) make for easier opponents, while highly-evolved agents are more challenging to overcome. In this publication we test a new approach for difficulty adjustment in games: orthogonally evolved AI, where the player receives support from collaborating agents that are co-evolved with opponent agents (where collaborators and opponents have orthogonal incentives). The advantage is that game difficulty can be adjusted more granularly by manipulating two independent axes: by having more or less adept collaborators, and by having more or less adept opponents. Furthermore, human interaction can modulate (and be informed by) the performance and behavior of collaborating agents. In this way, orthogonally evolved AI both facilitates smoother difficulty adjustment and enables new game experiences.
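As a rough illustration of the two-axis idea (a hedged sketch, not the authors’ implementation), difficulty could be tuned by sampling the opponent and the collaborator from different generations of their respective, independently evolved populations:

import random

def pick_agents(opponent_checkpoints, collaborator_checkpoints,
                opponent_skill, collaborator_skill):
    # Each checkpoint list holds the population saved at one generation;
    # skill levels in [0, 1] map to generation indices, so less-evolved
    # agents (earlier generations) are weaker. All names are illustrative.
    opp_gen = int(opponent_skill * (len(opponent_checkpoints) - 1))
    col_gen = int(collaborator_skill * (len(collaborator_checkpoints) - 1))
    opponent = random.choice(opponent_checkpoints[opp_gen])
    collaborator = random.choice(collaborator_checkpoints[col_gen])
    return opponent, collaborator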

PRESENTATION:

https://docs.google.com/presentation/d/1AYn6KV7hfQxPIY82wCHVqYS022OQtO9gSABKq3lgX50/pub?start=false&loop=false&delayms=60000#slide=id.p

Enjoy it!

The story of their lives: Massive procedural generation of heroes’ journeys using evolved agent-based models and logical reasoning

5 Apr

Paper presented as a talk at EvoGAMES 2016 in Porto (Portugal).

BY:
Ruben H. García-Ortega, Pablo García-Sánchez, Juan J. Merelo, Aranzazu San-Ginés, Angel Fernández-Cabezas

ABSTRACT:

The procedural generation of massive subplots and backstories for the secondary characters that inhabit Open World videogames usually leads to stereotyped characters that act as a mere backdrop for the virtual world; however, many game designers claim that these stories can be very relevant to the player’s experience. For this reason, we are looking for a methodology that improves the variability of the characters’ personalities while enhancing the quality of their backstories according to artistic or literary guidelines. In previous works, we used multi-agent systems to obtain stochastic, but regulated, inter-relations that became backstories; later, we used genetic algorithms to promote the appearance of high-level behaviors within them.
Our current work continues this research line and proposes a three-layered system (Evolutionary Computation – Agent-Based Model – Logical Reasoner) applied to the promotion of the monomyth, commonly known as the hero’s journey, a social pattern that constantly appears in literature, films, and videogames. As far as we know, there is no previous attempt to model the monomyth as a logical theory, nor to use its sub-solutions for narrative purposes. Moreover, this paper presents for the first time this multi-paradigm, three-layered methodology for generating massive backstories. Different metrics have been tested in the experimental phase, from those that sum all the monomyth-related tropes to those that promote the distribution of archetypes among the characters. Results confirm that the system can make the monomyth emerge and that the metric has to take facilitator predicates into account in order to guide the evolutionary process.
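As a purely hypothetical illustration of the simplest metric family mentioned in the abstract (summing the monomyth-related tropes that emerge), a fitness function might look like the sketch below; the predicate names and data structures are invented for the example and are not the authors’ metric.

# 'derived_facts' stands for the predicates inferred by the logical-reasoner
# layer from one run of the agent-based model.
MONOMYTH_TROPES = {"call_to_adventure", "mentor", "ordeal", "return"}

def trope_sum_fitness(derived_facts):
    # Count how many distinct monomyth-related tropes appear in the backstories.
    return sum(1 for fact in derived_facts if fact in MONOMYTH_TROPES)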

PRESENTATION:

Enjoy it!

Design and Evaluation of an Extended Learning Classifier-based StarCraft Micro AI

4 Apr

Paper presented as a poster + interactive presentation at EvoGAMES 2016 in Porto (Portugal).

BY:
Stefan Rudolph, Sebastian von Mammen, Johannes Jungbluth, Jorg Hahner

ABSTRACT:

Due to the manifold challenges that arise when developing an artificial intelligence that can compete with human players, the popular real-time strategy game StarCraft (Brood War) has received attention from the computational intelligence research community. It is an ideal testbed for methods of self-adaptation at runtime designed to work in complex technical systems. In this work, we utilize the broadly used Extended Classifier System (XCS) as a basis to develop different models of BW micro AIs: the Defender, the Attacker, the Explorer and the Strategist. We evaluate these AIs with a focus on their adaptive and co-evolutionary behaviors. To this end, we stage and analyze the outcomes of a tournament among the proposed AIs, and we also test them against a non-adaptive player to provide a proper baseline for comparison and learning evolution.
Of the proposed AIs, we found the Explorer to be the best-performing design, but also that the Strategist shows an interesting behavioral evolution.
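For readers unfamiliar with XCS: each rule in such a system pairs a condition with an action and maintains a reward prediction, a prediction error and a fitness. The sketch below is a highly simplified illustration and does not reproduce the StarCraft-specific state encoding or actions used in the paper.

from dataclasses import dataclass

@dataclass
class Classifier:
    condition: str     # ternary string, e.g. "1#0#", where '#' matches anything
    action: str        # e.g. "attack" or "retreat"
    prediction: float  # expected payoff when the rule fires
    error: float       # running estimate of the prediction error
    fitness: float     # accuracy-based fitness used by the internal GA

def matches(classifier, state_bits):
    return all(c == '#' or c == s for c, s in zip(classifier.condition, state_bits))

def match_set(population, state_bits):
    # XCS builds the match set [M] and then a fitness-weighted prediction
    # array over actions to decide what the agent does next.
    return [cl for cl in population if matches(cl, state_bits)]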

PRESENTATION:

[poster] http://www.vonmammen.org/broodwars/2016-EvoStar-Broodwars.pdf
[slides] http://www.vonmammen.org/broodwars/XCS-Starcraft-5min-presentation.pdf

Enjoy it!

There can be only one: Evolving RTS Bots via joust selection

3 Apr

Paper presented as a talk at EvoGAMES 2016 in Porto (Portugal).

BY:
Antonio Fernández Ares, Pablo García-Sánchez, Antonio M. Mora García, Pedro A. Castillo, Juan J. Merelo

ABSTRACT:
This paper proposes an evolutionary algorithm for evolving game bots that eschews an explicit fitness function, using instead a match between individuals, called a joust, implemented as a selection mechanism in which only the winner survives. The algorithm has been designed as an optimization approach for generating the behavioural engine of bots for the RTS game Planet Wars using Genetic Programming, and it has two objectives: first, to deal with the noisy nature of the fitness function, and second, to obtain more general bots than those evolved against a specific opponent. In addition, avoiding the explicit evaluation step reduces the number of combats performed during evolution and thus decreases the algorithm’s running time. Results show that the approach converges, is less sensitive to noise than other methods, and yields very competitive bots when compared with other bots available in the literature.
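As a hedged sketch of the joust idea (an illustration only, not the authors’ Genetic Programming code), selection plays two randomly drawn bots against each other and lets only the winner produce offspring, so no numeric fitness value is ever computed:

import random

def joust_selection(population, play_match, make_offspring):
    # 'play_match' is a hypothetical function that runs one Planet Wars game
    # between two bot genomes and returns the winner; 'make_offspring' applies
    # the genetic operators (e.g. mutation) to the surviving individual.
    next_generation = []
    while len(next_generation) < len(population):
        a, b = random.sample(population, 2)
        winner = play_match(a, b)  # a single, noisy match decides survival
        next_generation.append(make_offspring(winner))
    return next_generation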

PRESENTATION:

 

Enjoy it!

Evolving Chess-like Games Using Relative Algorithm Performance Profiles

3 Apr

Paper presented as a talk at EvoGAMES 2016 in Porto (Portugal).

BY:
Jakub Kowalski, Marek Szykula

ABSTRACT:

We deal with the problem of automatic generation of complete rules for an arbitrary game. This requires a generic and accurate evaluation function that is used to score games. Recently, the idea that game quality can be measured using differences in the performance of various game-playing algorithms of different strengths has been proposed; this is called Relative Algorithm Performance Profiles. We formalize this method into a generally applicable algorithm for estimating game quality according to a set of model games with properties that we want to reproduce.
We applied our method to evolve chess-like board games. The results show that we can obtain playable and balanced games of high quality.
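A hedged sketch of the core idea behind Relative Algorithm Performance Profiles follows; the helper names and the exact scoring are assumptions made for illustration. A candidate game is scored by how closely the performance profile of agents of different strengths matches the profile measured on model games considered to be good:

def performance_profile(game, agents, baseline, win_rate, n_games=100):
    # One entry per algorithm: how often it beats a weak baseline player.
    # 'win_rate' is a hypothetical helper that plays n_games of 'game'
    # between two agents and returns the first agent's win rate.
    return [win_rate(agent, baseline, game, n_games) for agent in agents]

def rapp_score(candidate_game, model_profiles, agents, baseline, win_rate):
    profile = performance_profile(candidate_game, agents, baseline, win_rate)
    # Score the candidate by its distance to the average profile of the model
    # games whose properties we want to reproduce (higher is better).
    target = [sum(vals) / len(vals) for vals in zip(*model_profiles)]
    return -sum((p - t) ** 2 for p, t in zip(profile, target))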

PRESENTATION:

http://kot.rogacz.com/Science/Research/publications/EvoGAMES2016_p.pdf

Enjoy it!

EvoGAMES is running today!

31 Mar

Actually, the main session has already finished. Sorry, we forgot to post this earlier.

But as a reward for all the disappointed people, we will publish all the presentations here in a few hours.

…And you are still in time to attend the Interactive Session this afternoon at 16:15 in room 3 of the Seminário de Vilar in Porto, Portugal.

Some of the contributions to EvoGAMES will be presented there.

We’ll wait for you!

Unreal Expert Bots at IWANN 2013

20 Jun

Last week IWANN 2013 was held in Tenerife, an international conference mainly devoted to research in the field of neural networks. There, Antonio Fernández Leiva, Raúl Lara and I organized the Special Session on Artificial Intelligence and Games.

There were five works in the session, one of them being “Designing and Evolving an Unreal Tournament 2004 Expert Bot”.

It describes the design and improvement, through off-line (i.e. not during the game) evolution, of an autonomous agent (or bot) for playing the game Unreal Tournament 2004. The bot was created by means of a finite state machine that models the expert behaviour of a human player in 1 vs 1 deathmatch mode, following the rules of the international competition.

The bot was then improved by means of a Genetic Algorithm, yielding an agent that is a very hard opponent for a medium-level human player and that can (easily) beat the default bots in the game, even at the maximum difficulty level.
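As a hedged illustration of that design (the states and thresholds below are invented for the example and are not taken from the actual bot), the finite state machine encodes expert rules whose numeric parameters form the chromosome tuned off-line by the Genetic Algorithm:

def choose_state(bot, params):
    # params = (low_health, attack_range), evolved off-line by the GA;
    # 'bot' is an invented snapshot of the agent's view of the game state.
    low_health, attack_range = params
    if bot["health"] < low_health:
        return "RETREAT_AND_HEAL"
    if bot["enemy_visible"] and bot["distance_to_enemy"] < attack_range:
        return "ATTACK"
    return "COLLECT_ITEMS"

# Example usage with an invented game-state snapshot:
state = choose_state({"health": 35, "enemy_visible": True, "distance_to_enemy": 400},
                     params=(40, 600))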

The presentation can be seen at:

Moreover, you can watch one example of the evolution in the following video:

Finally, the source code of the Unreal Expert bot and the Genetic bot is available at https://github.com/franaisa/ExpertAgent

Enjoy them. 😉