Getting – and creating – the picture on image generation.
By Hank Hogan, MTI Correspondent
The goal is to fight like you train and train like you fight. So, what do you do when you have to prepare for operations all over the world against an ever-growing range of possible adversaries?
One solution widely adopted by militaries globally is to train using image generation, a technology shared with and taken from the video gaming industry. This can be teamed with an image database and a projection system to produce realistic simulations. Recent hardware and software advances have made such an approach more affordable and capable than ever before.
“Image generation will play a more pivotal and increasingly important role in training tomorrow’s warfighter as we continue to address the training needs in a rapidly changing threat space,” said Donnie Palmer, lead systems engineer at U.S. Army PEO STRI-Gaming.
He continued, “Being able to realistically represent the areas of interest virtually will definitely be dependent on image generation capabilities.”
But, Palmer noted, there are certain issues—such as the military's need for a high degree of cybersecurity—that make a direct port of commercial technology impractical. At the same time, military purchases are a very small part of the $70 billion annual video gaming market, which means that the military can move the image generation needle only slightly. Finally, image generation cannot create new core capabilities, such as small, long-duration UAVs or better targeting systems. Those take years to develop and deploy.
“While image generation can help to bridge the training gaps for operations, it isn’t a magic bullet,” Palmer said.
Creating a simulation environment that can perform training wizardry involves a balancing act, according to Vlad Argintaru, project manager at Aero Simulation. The Tampa, Fla.-based company provides products and services for commercial and military aviation training.
Argintaru said that the budget or space for a simulator may be limited, one reason tradeoffs may be made. For instance, the requirement may be that the simulator be in a containerized environment on a ship at sea. In that case, a full-fledged trainer will not work and compromises must be made. One such may be that the system does not gyrate to simulate the feel of aircraft movement but instead compensates with other training cues, such as realistic visuals.
Other aspects that have to be balanced may lie inside the trainer itself. For example, advances in different elements of a simulator can impact other components. Take the projection system. This moved from analog to digital, resulting in image stability over time and a significant increase in resolution.
“Modern projectors produce between four to eight million pixels, which are image elements, per projector, which adds significant information cues for training. But that also means that my image generator must compute four million to eight million pixels at 60 hertz, which is a challenge,” Argintaru said.
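Those figures translate into a striking raw throughput per projector channel. A back-of-envelope sketch (the 32-bit color depth is an assumption for illustration, not a figure from the article):

```python
# Rough throughput estimate for one projector channel, using the
# figures quoted above: 8 million pixels refreshed at 60 hertz.
pixels_per_frame = 8_000_000
frames_per_second = 60
bytes_per_pixel = 4  # assumed 32-bit RGBA color, for illustration

pixels_per_second = pixels_per_frame * frames_per_second
bandwidth_gb_s = pixels_per_second * bytes_per_pixel / 1e9

print(f"{pixels_per_second:,} pixels per second")            # 480,000,000
print(f"{bandwidth_gb_s:.2f} GB/s of raw framebuffer data")  # 1.92 GB/s
```

That is nearly half a billion pixel computations per second for a single projector, before blending multiple channels or driving higher frame rates.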
Fortunately, that computation requirement is now easier to meet and can be done with cheaper hardware. Years ago, image generation was done on proprietary hardware, with the cost of a system measured in millions of dollars. Now, computation can be handled by commercial off-the-shelf gear using graphics processing unit (GPU) chips produced by Nvidia and others. As a result, hardware system cost has fallen more than 10-fold.
In another illustration of the tradeoffs found in image generation, higher resolution real-world imagery means a database consumes more storage. At one time, a visual database might fit comfortably on a disk drive holding a few megabytes. Now, it may be that tens of terabytes are not enough. Beyond storage, such large files also strain network resources as all of those bits travel back and forth.
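The storage growth follows directly from the geometry: halving the ground sample distance quadruples the pixel count over the same area, so storage grows with the square of resolution. A small sketch (the coverage area and uncompressed 24-bit storage are illustrative assumptions, not figures from the article):

```python
# Illustrative only: raw imagery storage for a fixed coverage area
# as ground resolution improves. Halving the ground sample distance
# (GSD) quadruples the pixel count, so storage grows quadratically.
AREA_KM2 = 100_000    # hypothetical theater-sized coverage
BYTES_PER_PIXEL = 3   # assumed uncompressed 24-bit RGB

def imagery_bytes(gsd_m):
    """Bytes of raw imagery for AREA_KM2 at a ground sample distance (m/pixel)."""
    pixels_per_km2 = (1000 / gsd_m) ** 2
    return AREA_KM2 * pixels_per_km2 * BYTES_PER_PIXEL

for gsd in (10, 1, 0.1):
    tb = imagery_bytes(gsd) / 1e12
    print(f"{gsd:>5} m/pixel -> {tb:,.3f} TB raw")
```

Under these assumptions, 10-meter imagery fits in a few gigabytes, while 10-centimeter imagery of the same area runs to tens of terabytes, matching the trend described above.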
Processing, storing, and moving around all those pixels may not be necessary, according to Oliver Arup. He’s technical director of Bohemia Interactive Simulations of Prague, Czech Republic.
“There’s almost no evidence that higher fidelity simulations give better training,” Arup said.
He added that what is needed is a reproduction faithful enough to produce a suspension of disbelief in users. That is, the image generation, projection, and database have to be detailed enough that users forget it’s a simulation and not real. Anything beyond that may not buy a training benefit worth the cost.
What sometimes happens, Arup said, is that end users come to synthetic training and associated image generation with certain expectations driven by their experience with games. Thus, they may expect a very detailed and lifelike rendering of an armored vehicle. After all, that’s what they get with easily affordable game hardware and software.
That experience, though, is a result of an enormous expenditure of time and money. A game may cost $250 million to develop, but the cost is worth it because all of that can be earned back in a single opening weekend, according to Arup. He has more than academic insight into this because Bohemia Interactive bases its image generation engine for military training on one developed for ArmA 2, a military simulation game.
As for the future, Arup said image generation, like everything else, is headed to the cloud—at least partly. Cloud-based servers will handle back end functions, like the artificial intelligence that directs synthetic constructs in training exercises. Local machines may do rendering, with the only data flowing back and forth being what’s needed to update changed pixels.
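The "update only the changed pixels" idea is essentially delta encoding. A toy sketch of the concept (frames reduced to flat lists of pixel values for clarity; this is not any vendor's actual protocol):

```python
# Toy delta-update scheme: instead of retransmitting a whole frame,
# diff two frames and send only the (index, value) pairs that differ.
def frame_delta(prev, curr):
    """Return only the pixels that changed between two frames."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def apply_delta(prev, delta):
    """Reconstruct the new frame from the old frame plus the delta."""
    frame = list(prev)
    for i, value in delta:
        frame[i] = value
    return frame

prev = [0, 0, 0, 0]
curr = [0, 5, 0, 7]
delta = frame_delta(prev, curr)   # [(1, 5), (3, 7)]
print(f"sent {len(delta)} of {len(curr)} pixels")
```

When most of a scene is static between frames, the delta is a small fraction of the full frame, which is what makes splitting work between cloud servers and local renderers plausible.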
One outcome could be training image generation similar to what’s done for augmented reality. A warfighter may, for instance, look through a pair of binoculars and see a simulated plane flying by, with this image generation done as part of air support training. Such an approach would be possible because image generation will be less and less confined to training centers. It also helps that most of the computational burden would be borne by the cloud.
Bohemia Interactive Simulations has created VBS Blue, a planetary rendering system, Arup said. This cuts down on the amount of time it takes to create the input needed for image generation. The raw data required is readily available, thanks to the advent of LIDAR and photogrammetry capture of features with millimeter precision.
“There’s no lack of data anymore. With free sources of data nowadays, you can get good coverage of the entire planet,” Arup explained.
Solving the problem of how to feed image generators is also something that Cambridge, Mass.-based VT Mäk is addressing, according to Dan Brockway, vice president of marketing and new product innovation. The standard way to prepare terrain databases is time-consuming, complex and expensive, he said.
VT Mäk’s approach streams source data directly into the image generator and simulation systems. If the input data lacks the appropriate detail, it can be filled in via an automated process. “When newer, more accurate, or higher resolution source data is available, it can be added to the server in real time and available for the next simulation run,” Brockway said.
Thanks to this and similar methods, training can happen when and where needed. This means that rehearsals can accommodate more rapid changes in missions and objectives.
Other advances come from hardware improvements. For instance, COTS graphics processing units are now much more powerful, so much so that the conversion of terrain elevation data into renderable polygons, or tessellation, can now be put off until just before it is needed. The hardware is fast enough that this can be done without a noticeable lag. As a result, simulation systems don’t have to wait for an entire terrain database to be built before training can begin. Consequently, training scenario terrain can be active more quickly and systems can be more responsive than in the past.
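The deferred-tessellation idea can be sketched in a few lines: elevation tiles are converted to triangles only when they first come into view, then cached for reuse. This is an illustrative sketch, not any vendor's pipeline; the regular-grid triangulation and the cache size are assumptions:

```python
# Deferred tessellation sketch: elevation tiles become render-ready
# triangles only on first use, then are cached (memoized) for reuse.
from functools import lru_cache

@lru_cache(maxsize=1024)
def tessellate(tile_id, heights):
    """Convert an n x n grid of elevation posts into triangles (two per cell).

    heights must be hashable (e.g. a tuple of tuples) so results cache.
    """
    n = len(heights)
    triangles = []
    for r in range(n - 1):
        for c in range(n - 1):
            a, b = (r, c), (r, c + 1)
            d, e = (r + 1, c), (r + 1, c + 1)
            triangles.append((a, b, d))  # upper-left triangle of the cell
            triangles.append((b, e, d))  # lower-right triangle of the cell
    return triangles

def render_frame(visible_tiles, terrain):
    """Tessellate only tiles the camera can see; cached tiles cost nothing."""
    return {t: tessellate(t, terrain[t]) for t in visible_tiles}
```

Because only visible tiles are processed and repeat visits hit the cache, the simulator never pays up front for terrain the trainee may never fly over.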
It’s important to remember that image generators don’t only create visible scenes. They also work in such non-visible domains as those presented by night vision goggles, infrared, radar and other instruments or sensors, said Robert Brantley. He’s principal product line manager for image generation, synthetic environments and radar simulation in the simulations and training solutions division of Cedar Rapids, Iowa-based Rockwell Collins.
By operating in these different domains, image generation can provide awareness or immersion in the context of either training or rehearsal. Importantly, the same technology can form the basis for augmented reality, technology that presents warfighters with such information as the location of friend or foe accurately superimposed on a scene. Hence, image generation related to what’s used in training can provide an edge in combat operations.
That has certain implications for where image generation is headed in the future. In the past, both simulators and the image generation behind them required large pieces of hardware.
“Today image generation is being applied to dynamic situations and tomorrow [it] will deliver immersive training and rehearsal to soldiers wherever they are and onto any number of visualization devices from simulators, to tablets, to head wearable eye display devices. Large, multi-cabinet hardware that was the standard in past years, is being replaced by man-wearable solutions today and will be replaced by cyber-secure cloud delivery in the future,” Brantley predicted.
Key to putting hardware on such an extensive diet will be use of cloud technology. Remote servers will shoulder the computational burden of image generation, except for a small slice that will be done locally.
Another image generation change that Brantley sees happening lies in the business model. In the commercial world, image generation and rendering costs have decreased to the point that low-end solutions are almost free.
Because of this, gaming companies are evolving to make their money in other ways. One such way is tie-ins, with Amazon as an example. It gives away a game engine but requires game developers to route all data through its cloud, thereby generating revenue from data fees. Companies in the military simulation and image generation market may look to such models as a way to raise revenue in the future, Brantley said.
It isn’t only companies that deal with changing economics. So, too, do militaries around the world and that explains why image generation is of growing importance, said Brian Overy, vice president of marketing and sales at Vestal, N.Y.-based Diamond Visionics. The company supplies image generation solutions.
“The training of the military has had to adapt with real limitations of budget constraints,” Overy said. “Current Diamond Visionics GenesisRTX Image Generation allows for the troops to train in the most realistic environment available and fully prepare soldiers to confront real-time decision making instances at a fraction of the cost of using real munitions and equipment.”
The company has been working to cut processing time and increase image generation capabilities, he adds. For instance, a bottleneck in the past was that systems often waited while the processor figured out what to do next. Now Diamond Visionics software is bypassing the processor altogether, working directly with the GPU.
Another improvement has arisen from making use of parallel processing. This helps because many of the changes from frame to frame in a generated scene result from doing the same operation over and over to different groups of pixels. By breaking such chores into tasks performed on different groups of pixels simultaneously, total processing time can be reduced significantly and image generation made speedier.
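The pattern described here is classic data parallelism: one operation applied independently to many pixel groups. A minimal sketch (a thread pool stands in for the GPU, and the brightness adjustment is a hypothetical per-pixel operation chosen for illustration):

```python
# Data-parallel sketch: apply the same per-pixel operation to
# independent chunks of a frame concurrently. Real image generators
# do this on the GPU; a thread pool stands in for it here.
from concurrent.futures import ThreadPoolExecutor

def adjust_chunk(pixels, gain):
    """Apply the same brightness operation to one group of pixels."""
    return [min(255, int(p * gain)) for p in pixels]

def adjust_frame(frame, gain, workers=4):
    """Split the frame into chunks and process the chunks in parallel."""
    size = max(1, len(frame) // workers)
    chunks = [frame[i:i + size] for i in range(0, len(frame), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(adjust_chunk, chunks, [gain] * len(chunks))
    # Reassemble the chunks, in order, into the finished frame.
    return [p for chunk in results for p in chunk]
```

The key property is that no chunk depends on any other, so adding workers (or, on a GPU, thousands of shader cores) divides the wall-clock time without changing the result.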
Looking forward, Overy noted a growing demand to reduce simulator footprints in terms of maintenance, power consumption and cooling. The ability to provide greener, more environmentally-friendly solutions in the future could become an important discriminator between vendors, he says.
The increasing use of simulation for military training has created an opportunity not only in image generation but also in the data that feeds into image generation, said W. Garth Smith, president of MetaVR. The Brookline, Mass.-based company makes 3D real-time PC-based visual software systems.
Image generators, such as the company’s Virtual Reality Scene Generator, need data from which to build scenes and this data has traditionally been costly to acquire, Smith said. Now, MetaVR is taking advantage of technology advances to cut those costs.
“The development of commercial portable UAVs and the improvements in digital camera technology allow us to collect sub-inch resolution imagery with our own portable UAV and then build ultra-high resolution terrain databases using that imagery,” Smith said.
He mentioned two recent examples built from two-centimeter resolution imagery. One is virtual terrain of two target areas at the Fallon Range Training Complex in Nevada. The other is of the Prospect Square area of Yuma Proving Ground in Arizona.
With two-centimeter resolution, training students can see bullet holes in vehicles, small shrubbery and small craters left by ordnance. Of course, with higher resolution input data, MetaVR had to improve its image generator to render environments at the same sub-inch resolution.
Many of these innovations and advancements related to image generation exploit the latest in commercial technology, an approach that yields significant benefits. For instance, the military gets the results of many billions of dollars of research and development without having to spend that money itself.
However, for image generation the downside of COTS is that the military market must make do with what the commercial sector produces. One issue, for instance, is that video games go through frequent hardware iterations and revisions, rapid changes that render a system a few years old obsolete—and potentially hard to get replacement parts for.
The advancement of gaming technology is speeding up, which also presents another problem: the relatively slow government procurement process. “Without even taking into consideration development time for integration and cyber security to meet DoD standards, the military already cannot move fast enough to deploy and keep up with the current pace of changes and advancements,” said the Army’s Palmer.
Finally, synthetic training may be cost effective and getting better, but in the end it’s just that—synthetic. Even those in the image generation industry don’t see the synthetic experience ever fully replacing real training.
“You always at some point have to get into a real cockpit. For training to be useful, you always need to get out into the field, get wet and use your rifle,” said Bohemia Interactive Simulation’s Arup.
However, this doesn’t mean that advances in image generation and simulation aren’t still needed or valuable. As Arup said, “The [synthetic] training has to [be] good enough to make that live training more useful.”