https://www.theverge.com/2018/3/22/17150918/ea-dice-seed-battlefield-1-ai-shooter
EA has started training AI players in Battlefield 1
First-person shooters are a bigger challenge for AI than board games, and EA is taking on the task
By James Vincent and Vlad Savov, Mar 22, 2018, 2:30pm EDT
Image: Battlefield 1 (EA DICE)
The term “AI” has been used in video games since their inception, but it rarely means true artificial intelligence. Instead, it’s a generic term to describe a preprogrammed opponent or character that feigns intelligence but is really just following a narrow set of instructions. This is slowly changing, though — and the people who build video games are helping out.
At GDC today, EA announced that it’s been training AI agents in 2016’s WWI shooter Battlefield 1. The company says it’s the first time this sort of work has been done in a high-budget AAA title (which is disputable), but more importantly, it says the methods it’s developing will help improve future games: providing tougher, more realistic enemies for human players and giving developers new ways to debug their software.
EA’s AI agents — which, unlike bots, are expected to learn how to play instead of merely following instructions — are being trained using a combination of two standard methods: imitation learning and reinforcement learning. Both work exactly as you’d expect. The first part involves the agent watching human players and then attempting to mimic them. That constitutes roughly 2 percent of their knowledge, EA tells us, and sets them up on the right path.
After the initial learning is done, agents have to figure out the rest of the game themselves, with rewards for completing tasks (like killing enemies) helping them through a process of trial and error. That’s the reinforcement learning. EA’s agents play hundreds and hundreds of Battlefield games at an accelerated pace and thus improve over time. It’s similar to the methods DeepMind used to train its Go-playing AI, although the latter had a far greater focus on strategic thinking.
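The article doesn't say what models EA is using, but the recipe it describes, a small dose of imitation learning followed by reward-driven trial and error, can be sketched in a few lines. Everything below (the toy policy, the environment step, the update rule) is a hypothetical illustration of the general approach, not EA's implementation.

```python
import random

class ToyPolicy:
    """Toy tabular policy: a preference score for each (state, action) pair."""
    def __init__(self, actions):
        self.actions = actions
        self.prefs = {}  # (state, action) -> preference score

    def act(self, state, explore=0.1):
        # Mostly pick the highest-scoring action, occasionally explore at random.
        if random.random() < explore:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.prefs.get((state, a), 0.0))

    def nudge(self, state, action, amount):
        self.prefs[(state, action)] = self.prefs.get((state, action), 0.0) + amount

def imitation_pretrain(policy, human_demos, weight=1.0):
    """Stage 1: bias the policy toward actions seen in human demonstrations."""
    for state, action in human_demos:
        policy.nudge(state, action, weight)

def reinforcement_finetune(policy, env_step, episodes=1000):
    """Stage 2: trial and error, reinforcing whatever actions earned reward."""
    for _ in range(episodes):
        state, done = "spawn", False
        while not done:
            action = policy.act(state)
            next_state, reward, done = env_step(state, action)
            # Crude reward-weighted update; a real agent would use something
            # like Q-learning or policy gradients here.
            policy.nudge(state, action, reward)
            state = next_state
```

The imitation stage only seeds the preferences (the "roughly 2 percent" EA mentions); the bulk of the behavior comes from the reinforcement stage running over many accelerated games.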
You can see the bots in action in the video accompanying the original article.
Speaking to The Verge, Magnus Nordin, who heads machine learning at EA's Search for Extraordinary Experiences Division (SEED), says the AI agents proved to be fairly capable Battlefield players, though definitely not as good as humans. “We did play tests against internal DICE developers, who are decent but not professionals. They beat the bots easily, but it wasn’t a total blowout,” Nordin says.
During their training, the bots picked up all sorts of skills. They learned to adjust their aim to compensate for gun recoil, for example, and proved surprisingly good at dodging bullets. “They jump from side to side in order to not get hit,” says Nordin.
But some of their actions show how far there is to go before AI agents can play video games as naturally as humans can. Nordin explains that the bots developed a particular “scanning” behavior, where they would spin around looking for something to interact with. This is because they were trained using reinforcement learning, which rewards them if their score goes up or they pick up spare health and ammo. The latter reward was added because EA’s researchers thought it would encourage the agents to explore the map (as well as helping them survive). But it also had the side effect of limiting their ambition. “When they don’t see any enemies they just start scanning for something to do,” says Nordin. Think of it like a dog looking for a treat because it’s bored. It’s not unintelligent behavior, but it’s not that smart either.
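One way to picture that reward signal, purely as an illustration with made-up weights rather than anything EA has published: pay the agent for any increase in score, plus a small bonus per health or ammo pickup. When nothing is shooting back, the pickup bonus is the only term left, which is one plausible reading of why a bored agent starts scanning the map.

```python
def shaped_reward(prev, curr, score_weight=1.0, pickup_bonus=0.2):
    """Hypothetical shaped reward: score increases plus a small pickup bonus.

    `prev` and `curr` are dicts describing consecutive observations,
    e.g. {"score": 120, "pickups": 3}. The weights are invented for
    illustration and are not EA's.
    """
    reward = score_weight * (curr["score"] - prev["score"])
    reward += pickup_bonus * (curr["pickups"] - prev["pickups"])
    return reward

# With no enemies in sight the score term is zero, so the only remaining
# signal is the pickup bonus, hence the wandering, "scanning" behavior.
print(shaped_reward({"score": 0, "pickups": 0}, {"score": 0, "pickups": 1}))  # 0.2
```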
Arthur Juliani, a senior machine learning engineer at Unity, said EA’s work was similar to earlier research in this area, but solid nonetheless. “Combining these approaches and demonstrating that they can be used to train agents which can be deployed in real games is an exciting prospect,” Juliani told The Verge. He pointed out that researchers at labs like OpenAI and DeepMind are building their own bots for titles like StarCraft II and Dota 2. “The more complex the game, the better the challenge,” says Juliani, though he adds that retro games are still very useful for training new types of behavior because they’re less computationally intensive.
Nordin, acknowledging the prior AI work done in games like chess and Go, underscores the greater complexity of a first-person shooter like Battlefield. In a board game, the computer has a complete view of the board and a finite, enumerable set of legal moves each turn, which it can search deeply before picking the strongest option. In a shooter, a player presses multiple keys at once to walk, run, crouch, fire, reload, swap weapons, and so on, and the combination of all those simultaneous actions, together with having to predict what other players will do, makes for a vastly larger decision space.
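A rough back-of-the-envelope comparison makes the gap concrete. The controls and counts below are invented for illustration, not Battlefield's actual input scheme, but even a coarse discretization of an FPS's simultaneous inputs multiplies out to thousands of joint actions per frame, versus the few dozen legal moves available in a typical chess position.

```python
from itertools import product

# Illustrative controls, each of which can be held independently per frame.
movement = ["idle", "forward", "back", "left", "right"]   # 5 options
stance   = ["stand", "crouch", "prone"]                   # 3 options
fire     = [False, True]                                  # 2 options
reload   = [False, True]                                  # 2 options
switch   = [False, True]                                  # 2 options
aim_bins = 8 * 8  # coarse 8x8 grid of aim directions (64 options)

joint_actions = len(list(product(movement, stance, fire, reload, switch))) * aim_bins
print(joint_actions)  # 7680 joint actions per frame, before finer-grained aiming
```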
For EA, the aim is not necessarily to further the general field of AI, but to find out how this technology can benefit games developers. Nordin says he doesn’t think we’ll see AI players taking on humans for a while yet. “They won’t be in the next Battlefield because that’s not very far away, but probably the one after that — as a hybrid of classical AI and neural networks,” he says.
Having AI agents that can roam around a map is a great way to speed up the nitty-gritty of game making. “Battlefield is a 64-player game, so to fully test it we need 64 players filling out a level, and AI agents can do that,” says Nordin. Juliani agrees and says this is why Unity is working on its own AI tools for the community to use. “Agents can be trained to attempt to find exploits in games, and to ensure that games are properly balanced,” he says.
So before AI players start beating us at our favorite video games, they’ll at least help us make them. Not a terrible deal.