
Sample images of texture synthesis using an artificial intelligence-based technique that trains a network to expand small textures into larger ones. The data-driven method uses generative adversarial networks (GANs) to expand textures from a sample patch into larger instances that best resemble the original sample. Credit: Zhen Zhu, Xiang Bai, Dani Lischinski, Daniel Cohen-Or, and Hui Huang
Researchers have created a new tool that could help designers in video games, virtual reality and animation produce more realistic virtual textures.
An international team of computer scientists is using an artificial intelligence technique called generative adversarial networks (GANs) to train a network to expand small textures into larger ones that still resemble the original sample.
“Our approach successfully deals with non-stationary textures without any high-level or semantic description of the large-scale structure,” Yang Zhou, lead author of the work and an assistant professor at Shenzhen University and Huazhong University of Science & Technology, said in a statement. “It can cope with very challenging textures, which, to our knowledge, no other existing method can handle. The results are realistic designs produced in high resolution, efficiently, and at a much larger scale.”
The new method trains a generator network to expand an arbitrary texture block cropped from the example texture, so that the expanded result is visually similar to the containing example block of the appropriate (larger) size.
A discriminator network then assesses the visual similarity between the automatically expanded block and the actual containing block. As is typical of GANs, the discriminator is trained in parallel with the generator to distinguish between real large blocks cropped from the example and those produced by the generator.
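To make that training setup concrete, here is a minimal sketch in PyTorch of the self-supervised adversarial expansion idea. It is an illustration under simplifying assumptions, not the authors' released code: the toy Expander and Critic networks, the crop sizes, the loss weights, and the centered source crop are all placeholders, and the published method uses deeper architectures and additional loss terms.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Expander(nn.Module):
    """Toy generator: doubles the spatial resolution of a texture block."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Critic(nn.Module):
    """Toy patch discriminator: scores whether a large block looks real."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def random_nested_crop(exemplar, small=64):
    """Self-supervision: a random 2*small block and the small block at its center."""
    _, _, h, w = exemplar.shape
    big = 2 * small
    y = torch.randint(0, h - big + 1, (1,)).item()
    x = torch.randint(0, w - big + 1, (1,)).item()
    big_block = exemplar[:, :, y:y + big, x:x + big]
    off = small // 2
    small_block = big_block[:, :, off:off + small, off:off + small]
    return small_block, big_block

G, D = Expander(), Critic()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

# Stand-in for a loaded exemplar image, scaled to [-1, 1].
exemplar = torch.rand(1, 3, 600, 400) * 2 - 1

for step in range(10_000):
    src, target = random_nested_crop(exemplar)
    fake = G(src)  # 64x64 block -> 128x128 expansion

    # Discriminator: tell real containing blocks from generated expansions.
    real_logits, fake_logits = D(target), D(fake.detach())
    loss_d = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator; an L1 term (weight is an illustrative
    # assumption) anchors the expansion to the actual containing block.
    gen_logits = D(fake)
    loss_g = (F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
              + 10.0 * F.l1_loss(fake, target))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In this scheme the exemplar supervises itself: every large crop provides a ground-truth answer for how its central block should be expanded, which is what lets the network learn large-scale structure without any semantic annotation.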
“Amazingly, we found that by using such a conceptually simple, self-supervised adversarial training strategy, the trained network works near-perfectly on a wide range of textures, including both stationary and highly non-stationary textures,” Zhou said.
Designers of virtual environments have found it difficult to efficiently produce believable, complex textures or patterns at large scale.
The aim of example-based texture synthesis is to generate a texture, usually larger than the input, that closely captures the visual characteristics of the sample while maintaining a realistic appearance.
Examples of non-stationary textures include textures with large-scale irregular structures, or ones that exhibit spatial variance in certain attributes such as color, local orientation, and local scale.
The researchers tested the new method on several complex examples, including peacock feathers and tree-trunk ripples, whose repetitive patterns are seemingly endless.
The researchers next plan to create a system that can extract high-level information from textures in an unsupervised fashion. They also plan to train a universal model on a large-scale texture dataset and to increase user control.