34 patents in CPC class H04N
Devices, systems and methods are disclosed for performing image processing at the camera level. For example, a camera service may run on top of a camera hardware abstraction layer (HAL) and may be configured to perform image processing such as applying a blurring algorithm, applying a color filter and/or other video effects. An application may pass metadata to the camera service via an application programming interface (API), and the camera service may use the metadata to determine parameters for the image processing. The camera service may apply the blurring algorithm for a first period of time before transitioning to unblurred image data over a second period of time.
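The timed blur-to-unblurred transition described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names (`box_blur`, `blend_weight`, `process_frame`), the choice of a box blur as the blurring algorithm, and the linear fade are all assumptions made here for clarity.

```python
import numpy as np

def box_blur(frame, k=5):
    # A simple box blur used as a stand-in for whatever blurring
    # algorithm the camera service actually applies.
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)

def blend_weight(t, t_blur, t_fade):
    # Fully blurred during the first period [0, t_blur), then a linear
    # transition to unblurred over the second period [t_blur, t_blur + t_fade).
    if t < t_blur:
        return 1.0
    if t < t_blur + t_fade:
        return 1.0 - (t - t_blur) / t_fade
    return 0.0

def process_frame(frame, t, t_blur=1.0, t_fade=0.5):
    # Blend blurred and raw image data according to the elapsed time t.
    # In the patent, t_blur and t_fade would come from metadata the
    # application passes through the API.
    w = blend_weight(t, t_blur, t_fade)
    if w == 0.0:
        return frame.astype(float)
    return w * box_blur(frame) + (1.0 - w) * frame
```

In a real camera service the per-frame work would run on the GPU or ISP; the point here is only the parameterized transition between the two time periods.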
A method and system for processing camera images is presented. The system receives a first depth map generated based on information sensed by a first type of depth-sensing camera, and receives a second depth map generated based on information sensed by a second type of depth-sensing camera. The first depth map includes a first set of pixels that indicate a first set of respective depth values. The second depth map includes a second set of pixels that indicate a second set of respective depth values. The system identifies a third set of pixels of the first depth map that correspond to the second set of pixels of the second depth map, identifies one or more empty pixels from the third set of pixels, and updates the first depth map by assigning to each empty pixel a respective depth value based on the second depth map.
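The fill-in step of this depth-map fusion can be sketched in a few lines. The sketch assumes the second depth map has already been registered to the first map's pixel grid (so the pixel correspondence is identity) and that empty pixels are marked with a sentinel value of 0; the abstract covers more general correspondences between the two sets of pixels, and the name `merge_depth_maps` is invented here.

```python
import numpy as np

def merge_depth_maps(d1, d2, empty_value=0.0):
    # d1: first depth map (first type of depth-sensing camera).
    # d2: second depth map (second type of camera), assumed here to be
    #     resampled onto d1's pixel grid so pixels correspond one-to-one.
    merged = d1.copy()
    empty = merged == empty_value   # pixels with no depth value in d1
    merged[empty] = d2[empty]       # assign depth from the second map
    return merged
```

Pixels that are empty in both maps simply keep the sentinel value, which matches the abstract's behavior of only assigning values based on the second depth map where it has them.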
Systems and methods related to identifying locations and/or ranges to objects using airborne imaging devices are disclosed. An object tracking system may include a plurality of aerial vehicles having associated imaging devices and a control station. Information related to positions and orientations of aerial vehicles and associated imaging devices may be received. In addition, imaging data may be received and processed to identify optical rays associated with objects within the imaging data. Further, a three-dimensional mapping of the identified optical rays may be generated, and locations or ranges of the objects relative to the aerial vehicles may be determined based on any intersections of optical rays within the three-dimensional mapping.
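The core geometric step — locating an object from the intersection of optical rays cast from several aerial vehicles — can be sketched with a standard least-squares ray-intersection formula. This is a textbook technique chosen here for illustration, not necessarily the patent's method, and `triangulate_rays` is a hypothetical name; each ray's origin would come from a vehicle's reported position and its direction from the imaging device's orientation and the object's pixel location.

```python
import numpy as np

def triangulate_rays(origins, directions):
    # Find the 3D point minimizing the sum of squared distances to a set
    # of rays. For each unit direction d, (I - d d^T) projects onto the
    # plane perpendicular to d, so stacking these projectors gives the
    # normal equations A x = b for the closest point.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)
```

Given the recovered point, the range from any vehicle is just the distance from that vehicle's position to the point. With noisy or nearly parallel rays, `A` becomes ill-conditioned, which is one reason a real system would fuse rays from multiple, well-separated vehicles.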