The LOOKY neural network generates images in several ways: users simply upload photos or selfies, and an AI-generated creative is produced from them. Users can also craft unique images from text descriptions or apply numerous thematic neuro filters and avatars.
DRESSCODE, a virtual fitting room by VRTech, is a groundbreaking project for the Russian market. The AI not only processes selfies and combines them with specific models but also takes the user's body parameters into account.
Multiple options are available for generation in DRESSCODE: current store collections, clothing type or style, text descriptions, or random images. In the Collections section, users can purchase their favorite items directly from the application by linking to the seller's website.
This technology is also implemented in interactive fitting panels designed for in-store installations. Visitors can take a photo, assess how a particular item will look, compile an entire wardrobe, and make purchases accordingly.
We create real estate-focused VR showrooms that collect and systematize data about visitor movements. This allows us to split all customers into several categories — on-the-spot buyers, “window-shoppers”, and mortgage borrowers.
Our system observes visitors and identifies specific behavioral scenarios for each category. For example, it might discover that on-the-spot buyers head to the bathroom first and then to the bedroom, while mortgage borrowers examine the kitchen first. These scenarios can be used to give cues to the salesperson so that they know how to better structure their negotiations in order to close the deal.
Thus, Big Data helps improve sales and lets you know more about your clients. The same technology is used in auto VR showrooms. The system is agnostic to the data it feeds on, whether it's the visitor's movements, direction of sight, response to certain stimuli, and so on.
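To illustrate the idea, here is a minimal sketch of how visitor telemetry could be clustered into behavioral segments. The feature names, sample values, and the number of clusters are assumptions made for illustration, not the production pipeline.

```python
# Sketch: cluster showroom visits into behavioral segments.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Each row is one showroom visit: seconds spent in each room plus a gaze metric.
# Columns: [bathroom_s, bedroom_s, kitchen_s, living_room_s, avg_gaze_dwell_s]
visits = np.array([
    [95, 120, 30, 60, 2.1],   # lingers in the bathroom and bedroom first
    [20,  25, 140, 80, 1.4],  # heads straight to the kitchen
    [110, 100, 40, 55, 2.3],
    [15,  30, 150, 90, 1.2],
])

features = StandardScaler().fit_transform(visits)

# Three segments, mirroring the article: on-the-spot buyers,
# window-shoppers, and mortgage borrowers.
model = KMeans(n_clusters=3, n_init=10, random_state=0)
segments = model.fit_predict(features)
print(segments)  # cluster label per visit, e.g. [0 1 0 2]
```

In practice the same pipeline can ingest any telemetry the headset produces, which is what makes the system agnostic to its data source.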
Originally, the PolygonVR platform captured a player's position using 37 passive sensors located on all body points that bend, extend, or otherwise move relative to each other. To make the VR suit more practical, we decided to switch to just 6 active sensors — two on the hands, two on the feet, one on the head and one on the gaming backpack.
Now we had to teach the computer to recognize the player’s movements based on a limited number of sensors. We achieved this using artificial neural networks and machine learning. We had people wear both active and passive sensors and asked them to play different games, move around, and make random movements for a while.
The neural network learned the relationships between the relative positions of the active and passive sensors. Then we removed the passive ones, and the network managed to reconstruct a player's skeleton from just six points.
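A minimal sketch of this step is shown below: a regression network maps the 6 active sensors (18 coordinates) to the 37 passive markers (111 coordinates) recorded while players wore both sets. The random data and the architecture here are stand-ins for illustration, not PolygonVR's actual capture sessions or network.

```python
# Sketch: learn to predict the full marker set from the 6 active sensors.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_frames = 2000
active = rng.normal(size=(n_frames, 6 * 3))    # x, y, z of the 6 active sensors
passive = rng.normal(size=(n_frames, 37 * 3))  # x, y, z of the 37 passive markers

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=200, random_state=0)
model.fit(active, passive)

# At runtime only the 6 active sensors are available; the model fills in
# the rest of the skeleton.
skeleton = model.predict(active[:1])
print(skeleton.shape)  # (1, 111)
```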
A high-profile railway company asked us to create a VR interface that would help them teach their ticket inspectors how to handle certain scenarios.
The system was heavily reliant on the player’s speech, so there were two tasks involved. The first, pretty common one, was converting speech to text. The second one, which we had to solve all on our own, was extracting the text’s meaning. The algorithm we wrote could compare the extracted semantic components with the original inputs. For example, it could match the words spoken by an inspector with the bureaucratese of a company policy prescribing their actions in crisis situations.
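As a simplified illustration of the second task, the sketch below matches an inspector's spoken phrase (already converted to text) against policy clauses. The real system extracted semantic components; here cosine similarity over TF-IDF vectors stands in for that step, and the policy texts are invented examples.

```python
# Sketch: find the policy clause an utterance most closely matches.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policy_clauses = [
    "In case of a fare violation the inspector must issue a written notice.",
    "If a passenger becomes aggressive the inspector must contact security.",
    "During an evacuation the inspector directs passengers to the nearest exit.",
]
spoken = "Please stay calm, I am calling security to assist us."

vectorizer = TfidfVectorizer().fit(policy_clauses + [spoken])
scores = cosine_similarity(
    vectorizer.transform([spoken]), vectorizer.transform(policy_clauses)
)[0]

best = scores.argmax()
print(policy_clauses[best], round(float(scores[best]), 2))
```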
In this project, we delved into the artworks of the famous Russian artists, Natalia Goncharova and Kazimir Malevich. Our goal was to let people paint still lifes in the style of these great painters, and achieving it was no walk in the park. Here’s why.
A player can use a huge variety of items to paint a still life. But how can we make it look like the work of Malevich? To make it possible, we trained two artificial neural networks, which started training each other along the way. The first network would try painting in Malevich’s style, while the second one would guess if a given picture was actually painted by Malevich or made by the first network.
With every iteration, each network learned something new: one produced ever more masterful paintings, while the other spotted imitations with ever greater precision.
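This is the classic generative adversarial setup. The sketch below shows the two-network training loop in miniature: a generator tries to produce "Malevich-like" images while a discriminator learns to tell them from real ones. The image size, architectures, and the random placeholder for the real dataset are assumptions for illustration only.

```python
# Sketch: adversarial training of a generator and a discriminator.
import torch
import torch.nn as nn

IMG = 32 * 32   # flattened grayscale "paintings"
Z = 64          # latent noise dimension

generator = nn.Sequential(nn.Linear(Z, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_paintings = torch.rand(512, IMG) * 2 - 1  # placeholder for real painting scans

for step in range(200):
    real = real_paintings[torch.randint(0, 512, (64,))]
    fake = generator(torch.randn(64, Z))

    # Discriminator step: real images labeled 1, generated ones labeled 0.
    d_loss = loss(discriminator(real), torch.ones(64, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    g_loss = loss(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```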
We use VR to predict all kinds of situations depending on people’s actions. Here’s how it works. We create a huge artificial neural network that feeds on the data from all processes of a given environment — e.g., an oil refinery.
We try to capture absolutely all processes — from hallway cleaning to tanker refueling. Then we simulate an industrial accident — some process going out of control. The neural network begins to evaluate the behavior of all systems depending on the consequences of the accident.
The thing is, the neural network can generate an endless number of emergency-scenario variations. The actions of the people involved can then be used to draw conclusions about the logic behind the processes and about ways to optimize them. The result is an incredibly effective simulator, one that you cannot get used to or adapt to.
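To give a feel for how an endless stream of scenarios could be produced, here is a minimal sketch. The parameter names, ranges, and the toy "response model" are assumptions used purely for illustration, not the refinery's real process model.

```python
# Sketch: generate randomized emergency scenarios and score a toy response.
import random

def sample_scenario():
    """Randomly perturb one process so it 'goes out of control'."""
    return {
        "failed_process": random.choice(
            ["pump_station", "distillation_unit", "tanker_refueling"]
        ),
        "severity": random.uniform(0.1, 1.0),   # fraction of nominal capacity lost
        "time_of_day": random.randint(0, 23),
        "staff_on_shift": random.randint(5, 40),
    }

def simulate_response(scenario):
    """Toy stand-in for the model evaluating how the plant's systems react."""
    load = scenario["severity"] * (40 / scenario["staff_on_shift"])
    return {"scenario": scenario, "containment_minutes": round(15 + 60 * load, 1)}

# Every run yields a different situation, so trainees cannot memorize the simulator.
for _ in range(3):
    print(simulate_response(sample_scenario()))
```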
ALL OF THIS HAPPENS IN VIRTUAL REALITY, WITH NO REAL ACCIDENTS AND NO ONE GETTING INJURED.