5 patents related to augmented reality
The invention relates to a vehicle computer system. The vehicle computer system gathers data from a safety sensor to determine whether the proper safety conditions are present for the vehicle operator to interact with the vehicle computer system. A safety controller receives safety condition data gathered from the safety sensor and instructs the display manager to disable the display of information to the vehicle operator during unsafe operating conditions. The vehicle computer system advantageously employs a transparent display screen to provide the vehicle operator with a greater field of vision than a traditional display screen could provide.
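The safety-gating described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `SafetyController` and `DisplayManager` class names, the speed threshold, and the attentiveness flag are all assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class SafetyConditionData:
    """Hypothetical payload gathered from the safety sensor."""
    vehicle_speed_mph: float
    driver_attentive: bool

class DisplayManager:
    """Toggles whether information is shown to the vehicle operator."""
    def __init__(self):
        self.display_enabled = True

    def disable_display(self):
        self.display_enabled = False

    def enable_display(self):
        self.display_enabled = True

class SafetyController:
    # Assumed threshold: interaction is treated as unsafe above this speed.
    SPEED_LIMIT_MPH = 5.0

    def __init__(self, display_manager: DisplayManager):
        self.display_manager = display_manager

    def on_sensor_data(self, data: SafetyConditionData):
        # Disable the display during unsafe operating conditions,
        # re-enable it once safe conditions are present.
        unsafe = (data.vehicle_speed_mph > self.SPEED_LIMIT_MPH
                  or not data.driver_attentive)
        if unsafe:
            self.display_manager.disable_display()
        else:
            self.display_manager.enable_display()
```

The key design point is that the display manager never inspects sensor data itself; it only obeys the safety controller, matching the separation of roles in the abstract.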
Various techniques pertaining to methods, systems, and computer program products are described for a spatial persistence process that places a virtual object relative to a physical object for an extended-reality display device based at least in part upon a persistent coordinate frame (PCF). A determination is made as to whether drift is detected for the virtual object relative to the physical object. Upon or after detection of the drift or deviation, the drift or deviation is corrected at least by updating a tracking map into an updated tracking map and further at least by updating the PCF based at least in part upon the updated tracking map, wherein the PCF comprises six degrees of freedom relative to the map coordinate system.
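The drift-detection and PCF-update loop might look like the following sketch. The drift threshold, the map and PCF representations, and the update function are illustrative assumptions; the abstract only specifies that a PCF carries six degrees of freedom (three translational, three rotational) relative to the map coordinate system.

```python
import numpy as np

# Assumed tolerance (metres) beyond which deviation counts as drift.
DRIFT_THRESHOLD = 0.05

def detect_drift(observed_pos, expected_pos):
    """Flag drift when the virtual object's observed position deviates
    from its expected position (relative to the physical object) by
    more than the threshold."""
    delta = np.asarray(observed_pos) - np.asarray(expected_pos)
    return bool(np.linalg.norm(delta) > DRIFT_THRESHOLD)

def update_pcf(tracking_map_origin, anchor_offset, anchor_rotation_rpy):
    """Recompute a PCF from an updated tracking map. The PCF has six
    degrees of freedom relative to the map coordinate system:
    3 translational + 3 rotational (roll, pitch, yaw here)."""
    return {
        "translation": np.asarray(tracking_map_origin) + np.asarray(anchor_offset),
        "rotation_rpy": np.asarray(anchor_rotation_rpy),
    }
```

On drift, a real system would first refine the tracking map from new observations and only then re-derive the PCF, so the virtual object snaps back into place relative to the physical object rather than jumping in the map frame.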
Disclosed herein are systems and methods for distributed computing and/or networking for mixed reality systems. A method may include capturing an image via a camera of a head-wearable device. Inertial data may be captured via an inertial measurement unit of the head-wearable device. A position of the head-wearable device can be estimated based on the image and the inertial data via one or more processors of the head-wearable device. The image can be transmitted to a remote server. A neural network can be trained based on the image via the remote server. A trained neural network can be transmitted to the head-wearable device.
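The on-device/remote split described above can be sketched as two cooperating objects. Every name and placeholder computation here is an assumption for illustration: the point is only the division of labor, with pose estimation kept on the head-wearable device while network training is offloaded to the server.

```python
class RemoteServer:
    """Stand-in for the remote server that trains the neural network."""
    def train(self, image):
        # Placeholder for training a neural network on the received image;
        # returns the trained network (here, a dummy weights dict).
        return {"weights": [0.1, 0.2], "trained_on_images": 1}

class HeadWearableDevice:
    def __init__(self, remote_server: RemoteServer):
        self.remote_server = remote_server
        self.model = None  # populated once a trained network is received

    def capture_frame(self):
        # Stand-ins for the camera image and the IMU reading.
        image = {"pixels": None}
        inertial = {"accel": (0.0, 0.0, 9.8), "gyro": (0.0, 0.0, 0.0)}
        return image, inertial

    def estimate_position(self, image, inertial):
        # On-device visual-inertial position estimate (placeholder:
        # a real system would fuse image features with IMU integration).
        return inertial["accel"]

    def offload_training(self, image):
        # Transmit the image to the server; receive the trained
        # network back and store it for on-device use.
        self.model = self.remote_server.train(image)
```

The asymmetry mirrors the abstract: latency-sensitive state estimation stays local, while the compute-heavy training step runs remotely and only its result travels back.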
Systems, methods, and non-transitory computer readable media including instructions for managing privacy in an extended reality environment are disclosed. The instructions include receiving image data of a physical environment from an image sensor associated with a wearable extended reality appliance; accessing data characterizing virtual objects in the physical environment, the data representing a first and a second virtual object; accessing privacy settings classifying a first physical location of the first virtual object as private, classifying a first appliance as approved for presenting private information, and classifying a second appliance as non-approved for presenting the private information; and based on the privacy settings, simultaneously enabling a presentation of an augmented viewing of the physical environment, such that the first appliance presents the first and the second virtual objects in the physical environment, and the second appliance presents only the second virtual object, in compliance with the privacy settings.
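The filtering behavior above reduces to a simple rule: approved appliances see everything, non-approved appliances see only objects outside private locations. The sketch below is hypothetical; the object names, settings layout, and filter function are not taken from the claim language.

```python
# Assumed privacy settings: the first object's location is private,
# and only appliance_1 is approved for private information.
privacy_settings = {
    "private_locations": {"desk"},
    "approved_appliances": {"appliance_1"},
}

# First (private-location) and second (public-location) virtual objects.
virtual_objects = [
    {"name": "document_panel", "location": "desk"},
    {"name": "weather_widget", "location": "wall"},
]

def objects_for(appliance_id, objects, settings):
    """Return the virtual objects an appliance may present,
    in compliance with the privacy settings."""
    if appliance_id in settings["approved_appliances"]:
        return list(objects)  # approved: present all objects
    # Non-approved: drop objects located in private physical locations.
    return [o for o in objects
            if o["location"] not in settings["private_locations"]]
```

Both appliances can render their filtered views simultaneously, which is what the abstract means by a single augmented viewing presented in two privacy-compliant variants.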
A computer-implemented method is disclosed for generating scene reconstructions from image data. The method includes: receiving image data of a scene captured by a camera; inputting the image data of the scene into a scene reconstruction model; receiving, from the scene reconstruction model, a final spatial model of the scene, wherein the scene reconstruction model generates the final spatial model by: predicting a depth map for each image of the image data, extracting a feature map for each image of the image data, generating a first spatial model based on the predicted depth maps of the images, generating a second spatial model based on the extracted feature maps of the images, and determining the final spatial model by combining the first spatial model and the second spatial model; and providing functionality on a computing device related to the scene and based on the final spatial model.
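The two-branch reconstruction above can be sketched in a few lines. The grid representations and the weighted-average fusion are assumptions for illustration; the patent does not specify how the depth-based and feature-based spatial models are built or combined.

```python
import numpy as np

def model_from_depth(depth_maps):
    """First spatial model: a stand-in that fuses the per-image
    predicted depth maps into one coarse grid by averaging."""
    return np.mean(np.stack(depth_maps), axis=0)

def model_from_features(feature_maps):
    """Second spatial model: a stand-in that reduces per-image
    feature maps (H x W x C) to a per-cell scalar estimate."""
    return np.mean(np.stack(feature_maps), axis=0).mean(axis=-1)

def combine(depth_model, feature_model, w=0.5):
    """Final spatial model: assumed weighted blend of both branches."""
    return w * depth_model + (1.0 - w) * feature_model
```

In practice the two branches are complementary: depth predictions carry metric geometry, while learned features capture structure that survives texture-poor regions, which motivates combining them rather than using either alone.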