Ready to face intense challenges in Battlefield V? Today’s your lucky day, Tides of War Chapter 2: Lightning Strikes has arrived. What is the Tides of War? It’s the grand journey you and your Company of customized soldiers, weapons, and vehicles take through World War II. Each Chapter introduces additional gameplay, improvements, and battlefields – all entirely free for Battlefield V players. Want to find out more? Read on.
What You Can Play Now
Dive right into weekly events and challenges starting today. They’re a great way to rank up, skill up, and earn rewards. Hit the battlefield regularly to get XP and work towards a unique Chapter rank, then unlock weapons, skins, Company Coin (Battlefield V’s in-game currency), and more. There’s something new to do every week, so don’t miss out.
Next, we’re introducing Squad Conquest, a more intense and fast-paced version of the iconic Conquest mode. It will only be available during Chapter 2, so jump in while you can. Two squads per team (16 players total) fight for map domination – defenders must maintain control of all the flags; otherwise, they’ll lose the ability to respawn and risk having their entire team wiped out. Join the battle and discover if you can survive this frenzied experience.
What’s Coming Soon
Looking for a great way to get the hang of Battlefield V combat while your buddies watch your six? The new co-op mode, Combined Arms, lets you squad up with three friends to take on a series of Combat Strike missions across several maps. Set your difficulty level as you attempt a surgical strike on a single objective against a variety of challenging AI opponents. It’s an excellent way for new players to work on their skills and learn how to play each of the Classes, or for vets to challenge themselves on hardcore mode.
Then, the ultimate multiplayer experience expands with a historic new Grand Operation. Grand Operations let players join in massive battles across multiple modes, in-game days, and maps. In Chapter 2, it’s May of 1940, and Axis forces are rumbling toward the city of Hannut in Belgium. The might of the fierce Panzer divisions will test the strength and courage of Allied defenses as they attempt to hold their ground. In the end, only one team will be left standing.
Tank drivers may have had the upper hand so far, but Chapter 2 introduces Tank Hunter vehicles like the Mosquito, the Archer Tank, and the Sturmgeschütz. These deadly vehicles will change the balance of power on the battlefield. The hunter will soon become the prey.
And Battlefield veterans will be excited to see the return of a classic 16 vs. 16 mode, first introduced in Battlefield: Bad Company. Rush mode is back, but only for a limited time during Lightning Strikes. Attackers must arm bombs with limited respawns to accomplish their mission; meanwhile, defenders have unlimited respawns to try and stop them. It’s challenging, intense, and coming soon to a battlefield near you.
The War Goes On
Chapter 2: Lightning Strikes isn’t the end of your World War II journey, not by a long shot. Your battlefield will continue to evolve over time, introducing new gameplay, new experiences, an expanding world, and game improvements. Don’t miss a single challenge, unlock, or incredible mode – enlist today and start building your Company and leading your squad to victory.
For all the latest Battlefield V news on Xbox One, keep it tuned here to Xbox Wire.
It’s no secret that Nintendo Labo hasn’t quite managed to reach the lofty heights Nintendo may have hoped for or expected, but it certainly isn’t done just yet.
From the start, Labo has been a particularly intriguing venture for Nintendo; rather than the standard marketing process for games, Labo has been pushed as something different entirely, perhaps most prominently as a creative tool for education. Thanks to this, the typical gaming audience didn’t hoover it up at launch and it didn’t take long at all for it to disappear from the gaming charts, never to be seen again.
So what exactly is going on with Labo now, and what can we expect to see from it in the future? Well, Nintendo President Shuntaro Furukawa has touched on this very point in a recent interview with Kyoto Shimbun (translated by Nintendo Everything).
Nintendo Labo was an innovative, new set of games that incorporated aspects of engineering. How has it been going?
It hasn’t sold as well as our other hit games have, but we did have an increase in sales for Labo during the end of the year. There are many new ways to experience Labo, and we’re working on formulating new methods that convey its allure so Labo’s sales will have longer legs.
Whether “formulating new methods” for Labo means new Toy-Con packs, or simply new advertisement and marketing plans, isn’t 100% clear, although it wouldn’t be surprising to see more sets arrive at some point this year. The genius at work behind its design, and the pure creativity that can be born because of it, are clear for all to see; perhaps all it needs is a little refinement?
What do you think? Do you think Nintendo Labo has the potential to succeed further in the future? Do you own any kits yourself? Let us know in the comments below.
The field of machine learning has advanced tremendously in the past few years, and canny game makers are constantly finding new and interesting ways of applying machine learning techniques to build better games.
At the 2019 Game Developers Conference in March, you’ll have the chance to dig in and spend a full day learning from some of the best and brightest in the game industry at the brand new GDC 2019 Machine Learning Tutorial!
This is just one of many great Bootcamps and Tutorials scheduled during the first two days of GDC (Monday and Tuesday, March 18th and 19th this year), albeit one that offers an up-to-the-minute, laser-focused look at the art and business of making and running games that make smart use of machine learning techniques.
For example, in “Beating Wallhacks Using Deep Learning With Limited Resources” Nexon Korea machine learning engineer Junsik Hwang will show you how Nexon Korea has developed a real-time automated wallhack detection system using Convolutional Neural Networks with a small dataset and a single GPU.
By using Class Activation Maps, the network finds suspicious areas within a screenshot that improves the credibility of the model’s performance and makes debugging datasets much more efficient. Model Interpretability plays a crucial role in incorporating deep learning with the existing abuser control policies. As a result, the system now detects abusers in real-time and reduces manual inspection labor significantly!
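The Class Activation Map idea described above can be sketched roughly: the final convolutional layer’s feature maps are summed, weighted by the classifier’s weights for the target class, producing a heatmap that shows which regions of the screenshot drove the prediction. Here’s a minimal numpy sketch – all names and shapes are illustrative assumptions, not Nexon’s actual implementation:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Hypothetical CAM computation.
    feature_maps: (H, W, C) activations from the last conv layer.
    class_weights: (C,) final-layer weights for the target class."""
    # Weighted sum over channels: regions that drove the class score light up
    cam = np.tensordot(feature_maps, class_weights, axes=([2], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)      # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()       # normalize to [0, 1] for visualization
    return cam

# Toy usage: a 7x7 feature grid with 32 channels
fm = np.random.rand(7, 7, 32)
w = np.random.rand(32)
heatmap = class_activation_map(fm, w)  # (7, 7) heatmap over the screenshot
```

Upsampled back to screenshot resolution, a heatmap like this is what lets a human reviewer see *why* the model flagged a frame – which is exactly the interpretability benefit the talk highlights.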
And in “Simple Head Pose Estimation for Dialogue Wheels,” Remedy lead character technical artist Antti Herva will show you a machine learning project aimed at helping animators liven up dialogue wheel interactions for an upcoming Remedy game project.
Make time to catch this talk if you want an introduction to performance capture, selecting image features and machine learning models, annotating data, training a neural network, and finally evaluating the results in-game!
Plus, Electronic Arts’ Fabio Zinno will be presenting a Machine Learning Tutorial talk on “From Motion Matching to Motion Synthesis, and All the Hurdles In Between” that will give you an expert overview of state-of-the-art ML techniques (Phase-Functioned Neural Networks and Mode-Adaptive Neural Networks) that use neural networks to synthesize motion from examples. Zinno aims to explicitly call out important architecture and implementation details, and spark a discussion on how this technology can be used in a modern game development pipeline.
And you won’t want to miss “Smart Bots for Better Games: Reinforcement Learning in Production“, a presentation from Ubisoft data scientist Olivier Delalleau about various reinforcement learning algorithms and how they may help game studios create better games, more efficiently.
Besides AI development, the ability to train bots to play games during production opens up promising opportunities for automated testing and design assistance. But applying reinforcement learning to modern games brings up many challenges, illustrated through several examples, with a focus on recent experiments within Ubisoft games. Whether you want to directly learn from pixels to minimize the integration burden, or entirely rewrite your engine to make it more “reinforcement learning-friendly”, this presentation is packed with practical tips to help you reach your goal without (too many) tears.
For more details on these and all other announced talks head over to the online GDC 2019 Session Scheduler, where you can plan out your week at the show.
Bring your team to GDC! Register a group of 10 or more and save 10 percent on conference passes. Learn more here.
Gamasutra and GDC are sibling organizations under parent company Informa
It’s been a varied, albeit low-key week. We decided to check in with Fortnite, as we haven’t talked about it in a while, and we’ve updated our guide to staying competitive on mobile. We also updated a few of our other guides and reviewed some games we’d missed.
We’re on track with reviews of newer titles now though, with several already in the pipeline for the day of release.
Meanwhile, in mobile gaming…
Sheeping Around (iOS Universal) – Full review coming soon!
This one caught our eye – a multiplayer strategy card game (with deck-building, no less) where one person is the Thief and one person is the Shepherd – you’re both fighting for ‘control’ over three Sheep. You must play cards that allow for various abilities, such as luring, trapping and so forth. We haven’t had a chance to take it for a spin ourselves, but I’ve already got someone working on a review.
Two other games caught our eye, but we won’t write them up fully as we haven’t played them either and there are no plans for a review right now. Bit Ballers (iOS) looks like a Kairosoft game about basketball, except not made by Kairosoft, and Lootbox RPG (iOS) is a cheap and cheerful dungeon crawler devoid of any kind of online functionality – buy once, play forever. Or at least, until you get bored. Maybe we will review this one after all.
Also, we mustn’t forget the global launch of NetEase’s UNO. I mean, I was super excited to give this a try myself but then I read TouchArcade’s write up and… yeah. No.
The Escapists 2: Pocket Breakout (iOS Universal & Android)
Team17 did a pretty good job when they brought sandbox/simulation game The Escapists to mobile back in 2017. Now they’re looking to do the same again with The Escapists 2. It’s due out on January 31st, but you can pre-order now on the App Store (for $6.99) or pre-register on Google Play. We’ll try to have our review ready as soon as we can.
Unfortunately, there doesn’t seem to be anything really worth mentioning this week, although if you spot something do let us know in the comments. If you want a peek behind the curtain, we actually get a lot of our sales info (on iOS at least) from this website, if you want to have a look for yourself. Just make sure you’ve set it to ‘Games’, and then ‘Last 72 (or 24) Hours’.
That’s all for this week, enjoy your weekends!
Streaming mogul Netflix claims Fortnite is now a bigger competitor than other media companies like Game of Thrones and True Detective producer HBO.
In a recent earnings report, Netflix said it earns around 10 percent of television screen time in the United States, though it averages less than that on mobile screens.
Curiously, the company then talked up its ability to earn consumer screen time away from a “very broad set of competitors,” before adding that it also competes with (and loses to) Fortnite more than HBO.
It’s a notable comparison that highlights not only how incredibly popular Epic’s last-man-standing shooter has become, but also how its impact is now being felt by some of the biggest names outside of the games industry.
Indeed, it’s easy to see why Netflix considers Fortnite a main rival. The free-to-play title is available on almost every platform imaginable, from consoles and PCs to smartphones, and hit 8.3 million concurrent players before Christmas.
Last time we checked, Fortnite had over 200 million registered users, and Epic continues to keep the game relevant by introducing new skins, weapons, and locations with the start of each new ‘season.’
The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.
The Elegance of First-Time User Experience in Valve’s The Lab
Even almost three years post-release, Valve’s The Lab is still my go-to way of introducing people to virtual reality. Why? It’s polished, and funny, and fun, and there’s a ton of very different content to muck around in. But what makes it such an ideal gateway drug for room-scale VR is the total elegance of its first-time user experience.
In the last year, VR games have, by and large, gotten much better at making players fluent in their own controls and mechanics. Beat Saber is the phenomenon it is in part because it’s so easy to pick up the controllers and get cube-chopping. Google has shown continued improvement in Tilt Brush and Google Earth by expanding their tutorials to slowly introduce players to their full functionality. With Creed: Rise to Glory, Survios has finally developed an experience as accessible as it is innovative. This trend is great. But I can think of no game that introduces a new player to the full grammar of room-scale VR as quickly and invisibly as 2016’s The Lab.
So how does Valve do it? Let’s take it step-by-step:
You’ve Loaded the Game. Now What?
When The Lab finishes loading, you find yourself in a drab, barren environment. A pair of slightly animated figures. A single wall. A spotlight. A few props. A menu. Subtle ambient sound. This isn’t exciting! This isn’t Half-Life VR! What gives?
Actually, the simplicity of this scene is of great benefit to new users. Just having an HMD covering your eyes can be overwhelming for some, and most anyone who hasn’t experienced VR is going to find the realistic parallax a lot to take in. Simple magic is still magic.
So the lack of overwhelming stimulus is a comfort, especially considering how many first-time demos take place in public. Trying to negotiate a bunch of flashing lights and moving parts while still aware that there are people you can’t see looking at you… well, grandpa might find that a little intimidating! This simplicity here ensures that a new user isn’t going to have to worry they’re totally screwing up in front of friends and strangers.
But it’s also very functional. That all the detail of the room is consolidated in one direction means there is no confusion about where you should be looking…
…or what needs doing. But those buttons? They’re too far away to reach without physically walking forward. After a few abortive attempts at just pointing at them, players figure out they need to walk forward. So they walk forward, and just like that, room-scale tracking is understood.
Physically walking around a virtual space is not something that anyone was fluent in before room-scale VR. First-timers I’ve put in Beat Saber have about a 50% chance of needing to be told they can take a step. Most everybody I’ve started in Google Earth doesn’t move their body until they decide to lean down to look at a city.
The functionality of the menu ensures that you understand your controllers as ways of interacting directly with the world. There’s no feedback until you physically touch the menu, at which point the controller vibrates and the relevant controller button glows. Soon you’ll learn that the controller has all sorts of abstract functionality too–teleportation, level selection, game-specific functions–but the concept of controllers as hands is introduced first.
Okay, You’ve Touched “Play Intro.” Now What?
After selecting “Play Intro,” the iconic little dudes demonstrate what you’re going to be doing: one of them picks up something and sticks his face in it. The motion blur suggests he’s been sucked into the globe. His friend rejoices, affirming this was the correct action to take. The sequence is basically a real-time three-panel comic.
Compare this approach to, say, the level of abstraction that text-based instructions for this would require: “Grab the mysterious orb. Try, then, to eat it. Instead, get sucked into the world it contains.” Not as clear, right? Text-based or verbal instructions would also need to be localized into dozens of languages.
The goal action is then repeated by the other little dude to reinforce the lesson. Then both controllers start vibrating to call your attention to the blinking button that’ll allow you to get over there.
Pressing the “Navigate” button gives you a lot of feedback: valid teleport locations are reinforced with color, a playspace box, an animated arc, an end-point icon, and a small cylinder appearing above the valid-teleport end-point icon.
I’ve seen a lot of new users get disoriented by teleportation, but not ever in this room. It’s always very clear where you are, and also clear that there’s no rush for you to do anything (no music, no larger story, no big moving parts, etc.).
The level orbs are, to me, one of the most beautiful designs in VR. The cubemap textures contrast starkly with the more cartoony Lab-world, and the perspective-shift within the orb invites curiosity, a curiosity which encourages bringing it closer for a better look–the very action you’re supposed to take to trigger its function.
This is elegant design. Exactly what you most want to do is exactly what you’re supposed to do.
So You’ve Been Transported to a Whole New World! Now What?
When the orb-world loads, you find yourself among photo-real mountains bathing in sunlight, accompanied by the dopiest/cutest robo-dog this side of Aibo. The message here: there are surprises in VR. Good surprises.
You’re then reminded of the teleport buttons by haptic feedback, tooltips, and blinking, but as you look down to re-read the instructions, right in your line of sight are some sticks and a dog. So maybe you put two and two together…
But if you want to play fetch right this very minute, you’ll have to ignore the instructions. It’s a little subversive to do so, but it also reinforces The Lab’s goal to get you playing the way you want to play. The agency is yours.
And the world will respond. Shake the stick and the dog, like a dog will, runs over to play.
Two minutes ago a first-time user was probably feeling a little out of their depth getting strapped in to the headset, getting cut off from the world by headphones, and feeling the unfamiliar controllers in their hands.
Now they’re playing fetch on some scenic vista with a very eager companion, blissfully unaware that they’ve just gained proficiency in most everything they need to navigate immersive virtual worlds.
Takeaways For Designers
Two deeper VR design philosophies glimpsed in these first minutes are worth a closer look.
1. An Abundance of Feedback
The Lab never tells you anything just once. It reinforces what it’s explaining even as it’s explaining it. Most every action you take produces multiple forms of feedback – visual, audio, haptic – and sometimes, as we saw with the teleport, multiple reinforcements of that feedback.
This is a smart practice not because players are stupid, but because the experience of VR is so personal. I’ve put hundreds of people in VR for the first time and the #1 culprit for discomfort in VR is not simulator sickness but the fear that they are doing something incorrectly. It doesn’t help matters that there are so many different learning styles, and so many different ways we relate to our own bodies. The more information a VR experience gives, and the more ways that information is represented, the quicker a player can move from feeling like a student to being a full participant.
2. Mirrored Controller Functionality

Most people have a “dominant” hand, but, for the most part, the functionality of our actual human hands is consistent. I can grab my coffee with my left or right hand (or both!) and perform the desired action without too much of a problem. This mirroring of functionality might not be right for every VR experience, but it’s a good starting point for designing interaction.
For one, it’s immersive. I don’t have to further map and metaphorize the controllers, and their most fundamental use–physically interacting with the environment–is the same as my hands. That’s also the first use that any player learns in The Lab.
Secondly, it’s accessible. For players who have use of only one hand, only one mini-game, Longbow, is unplayable. Designing this way also avoids the awkward “I can’t see my hands but I need to switch controllers” moment that new users often have difficulty with.
Third–and the subject of a future post about how expertly this is done in The Lab–designing this way means a lot of the UI has to be diegetic. This means that even when you’re spending time navigating menus, you’re interacting with the world of the experience, rather than just a window or screen.
Thanks for nerding out with me!
If there’s interest, I’d love to spend some more time investigating how the rest of The Lab does VR so well, whether in a more linear experience-by-experience fashion or looking more deeply into the underlying design philosophies. Please share with any designers or developers you know who might be interested, and let me know if you’d like to hear about anything in particular!