Metaverse Adds New Dimensions to Games

This article is written by Steven Ma, Senior Vice President at Tencent.

The metaverse – that virtual world where people gather to socialize, play and work – is the subject of many discussions and articles in the information technology world these days. While the metaverse may be all the rage, at Tencent we are taking a more practical view of the “next big thing” and its impact on online gaming. 

For starters, this is not a new concept. We think the metaverse is how some imagine our next-generation internet. Back in the late 1990s, online games were called “persistent worlds” (PW), a term that refers to continuously and stably running online worlds. The PW was similar to today’s metaverse, except that both the development technology and the consumption model were still immature at that time. Recently, some technologies have begun to break through, allowing people to imagine the future of the internet and fueling the rise of the metaverse.

In the gaming world, we are seeing a number of changes in player behavior. Due to the pandemic, people around the world have grown accustomed to using online services to solve real-world problems and to communicate with friends and family. As they spend more time online, people have begun to imagine how our lives could be more closely integrated with the internet, and to see great potential in the metaverse.

If we break the metaverse down into its most essential elements, they amount to substantial upgrades in both user experience and content.

User Experience Upgrade

Let’s first examine the next-generation user experience, starting with next-generation input and output. Whether it’s a 5-inch mobile phone display or a 50-inch television, the screen through which people would access the metaverse was created decades ago and offers mostly one-way interaction based on visual perception. The same is true of input and output, where tools such as the gamepad, keyboard and mouse are still the most widely used. The bandwidth of a keyboard is quite low, at merely a few hundred keystrokes per minute, so input is relatively inefficient, while output consists mainly of video and audio.
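To put that bandwidth claim in rough perspective, here is a back-of-envelope estimate. The typing rate and key count below are illustrative assumptions, not figures from the article: a fast typist at around 300 keystrokes per minute, choosing among roughly 50 keys, conveys only a few dozen bits of raw selection information per second.

```python
import math

# Back-of-envelope estimate of keyboard input bandwidth.
# Assumptions (illustrative only): ~300 keystrokes/minute,
# each keystroke chosen from ~50 distinct keys.
keystrokes_per_minute = 300
distinct_keys = 50

bits_per_keystroke = math.log2(distinct_keys)            # ~5.6 bits of raw choice
bits_per_second = keystrokes_per_minute / 60 * bits_per_keystroke

print(f"{bits_per_second:.1f} bits/s")
```

Under these assumptions the result is on the order of tens of bits per second, which is tiny compared with the megabits-per-second video and audio streams a game sends back to the player.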

There have been a number of new technology applications in recent years, including virtual reality (VR), augmented reality (AR) and extended reality (XR), and we expect more exciting prospects for gamers once these applications mature. What impact will these technological breakthroughs have on the hardware and software ecosystem? Will it be an open ecosystem, like Android, or a vertically closed one, like Apple’s iOS?

From our perspective, “both” is the probable answer. An open ecosystem will facilitate the rapid development of the whole industry. Looking back at the history of the internet, there were two highly successful open ecosystems: the integration of Intel and Microsoft through the upgrade from DOS to Windows, and the Android ecosystem created by Google, HTC and other hardware manufacturers. Both emerged in a special period, and each involved a hardware manufacturer with a market share of more than 50 percent that chose to focus on hardware rather than software.

Today, most manufacturers would not choose to merely manufacture hardware products, and some – maybe half – are opting for a relatively closed vertical ecosystem with integrated hardware and software, like Apple’s. Therefore, we should actively try everything, from software to hardware, including content, systems, and tool software development kits.

Content Upgrade

More realistic game visuals and interactions, along with more immersive large-scale scenarios, are critical to the content upgrade.

The Matrix Awakens demo, created with Unreal Engine 5 (UE5), is a good example. It simulates a highly immersive 16-square-kilometer city with over 30,000 digital humans, each with different behaviors, facial expressions and outfits. This has inspired us to imagine a more realistic, mega-scale city of the future.

When we recently discussed the criteria for large-scale projects, we concluded that a large-scale scenario should at least deliver effects similar to those demonstrated in the demo. In addition, it should be generated through technical means rather than simply through manpower and materials.

In addition to player immersion, the other critical aspect of gaming realism is the massive volume of content. This is actually quite a difficult issue. There are several possible ways to generate massive amounts of content. The first is mass production, which relies on AI, machine learning, more powerful production pipelines and more efficient development tools.
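The idea behind generating content through technical means rather than manpower can be sketched very simply: derive many distinct entities from a small set of rules and a single seed, instead of authoring each one by hand. The attribute lists and function below are placeholders for illustration, not part of any real production pipeline.

```python
import random

# Minimal sketch of procedural content generation: tens of thousands of
# varied background characters produced from one seed, rather than
# hand-authored. All attribute lists here are hypothetical placeholders.
BEHAVIORS = ["walking", "talking", "sitting", "shopping"]
OUTFITS = ["casual", "formal", "sporty", "uniform"]
EXPRESSIONS = ["neutral", "smiling", "curious", "hurried"]

def generate_crowd(seed: int, count: int) -> list[dict]:
    """Deterministically generate `count` varied background characters."""
    rng = random.Random(seed)  # same seed -> same crowd, so results are reproducible
    return [
        {
            "id": i,
            "behavior": rng.choice(BEHAVIORS),
            "outfit": rng.choice(OUTFITS),
            "expression": rng.choice(EXPRESSIONS),
        }
        for i in range(count)
    ]

crowd = generate_crowd(seed=42, count=30_000)
print(len(crowd), crowd[0])
```

Real pipelines replace the random choices with learned models and artist-defined rules, but the principle is the same: the cost of content grows with the rules, not with the number of characters.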

The second is the user-generated content (UGC) ecosystem, which is also a challenge for us. Both Weixin and Roblox are doing well in UGC by adhering to their own unique philosophies over many years. That is a totally different approach from PGC, or professionally generated content. Normally, we do not reveal too much to players at the beginning of game development, so we can surprise them and exceed their expectations. But if we want to develop metaverse-level products, we need to build a better UGC ecosystem.

Another way to generate massive amounts of content is to broaden external connectivity. An important feature of the metaverse lies in its ability to go beyond entertainment and connect to more external services and content.

We don’t want to pay too much attention to external discussions of the metaverse. The metaverse may not materialize in the near future. But in the face of this trend, we need to get ready with technology, hardware, core algorithms, game content and the development of special features and functions.