63 patents in CPC class H04N
In some examples, a first device includes multiple fixed first cameras and a movable second camera. A processor is configured to receive, from at least one of the fixed first cameras, a plurality of first images of an airspace corresponding to an area of operation of an unmanned aerial vehicle, and to detect, based at least on the first images, a candidate object approaching or within the airspace. Based on detecting the candidate object, the processor controls the movable second camera to direct its field of view toward the candidate object. Based on one or more second images from the movable second camera captured at a first location and one or more third images from a third camera captured at a second location, the processor may determine that the candidate object is an object of interest and perform at least one action.
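The steering step above — pointing the movable camera at a candidate detected in a fixed camera's image — can be sketched as a pixel-to-angle conversion. This is a minimal illustration, not the patent's method: it assumes the fixed and movable cameras share roughly the same viewpoint and that the fixed camera's horizontal and vertical fields of view are known; the function name and parameters are hypothetical.

```python
def pan_tilt_to_target(px, py, width, height, hfov_deg, vfov_deg):
    """Convert a detection's pixel coordinates in a fixed camera's image
    into pan/tilt offsets (degrees) for steering a co-located movable camera.

    px, py        -- pixel position of the candidate object
    width, height -- image dimensions in pixels
    hfov_deg, vfov_deg -- fixed camera's horizontal/vertical field of view
    """
    # Normalized offset of the detection from the image center, in [-0.5, 0.5].
    nx = px / width - 0.5
    ny = py / height - 0.5
    # Linear small-angle approximation: map the offset onto the field of view.
    pan = nx * hfov_deg
    tilt = -ny * vfov_deg  # image y grows downward; tilt grows upward
    return pan, tilt

# A detection at the top-right corner of a 1920x1080 frame with a 60x40 deg FOV:
print(pan_tilt_to_target(1920, 0, 1920, 1080, 60.0, 40.0))  # (30.0, 20.0)
```

In practice a calibrated camera model would replace the linear mapping, but the structure — detect in the wide view, convert to angles, slew the narrow view — is the same.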
Systems and methods are provided herein for altering a start time of an event based on indicia, retrieved from location applications corresponding to each attendee, of how late the attendees will be for the event. For example, a media guidance application may determine a start time of an event, a location of the event, and attendees of the event. The media guidance application may determine location applications corresponding to the attendees and query those applications for the arrival time of each attendee. The media guidance application may calculate a plurality of differences between each of the arrival times and the start time of the event, and, based on the plurality of differences, delay the start time of the event.
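The calculation described above — differencing each predicted arrival time against the scheduled start and delaying accordingly — can be sketched as follows. This is an illustrative reading, not the claimed method; the function name, the choice to delay to the latest arrival, and the delay cap are all assumptions.

```python
from datetime import datetime, timedelta

def adjusted_start_time(start, arrival_times, max_delay=timedelta(minutes=30)):
    """Delay an event's start to cover the latest attendee arrival, capped.

    start         -- scheduled datetime of the event
    arrival_times -- predicted arrival datetimes, one per attendee
                     (e.g. as reported by each attendee's location application)
    max_delay     -- hypothetical cap so one very late attendee cannot
                     postpone the event indefinitely
    """
    # Difference between each arrival and the scheduled start;
    # a positive difference means that attendee would be late.
    lateness = [arrival - start for arrival in arrival_times]
    worst = max(lateness)
    if worst <= timedelta(0):
        return start  # nobody is late; keep the original start time
    return start + min(worst, max_delay)

start = datetime(2024, 5, 1, 9, 0)
arrivals = [datetime(2024, 5, 1, 8, 55),
            datetime(2024, 5, 1, 9, 10),
            datetime(2024, 5, 1, 9, 5)]
print(adjusted_start_time(start, arrivals))  # 2024-05-01 09:10:00
```

Other policies over the same differences (e.g. delaying only until a quorum of attendees has arrived) would fit the same skeleton.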
The present disclosure relates to methods and systems for delivering online multimedia content in a manner that provides embedded point-of-sale transaction functionality, so that a user can simultaneously view the online multimedia content and purchase products or services associated with, or featured in, that content.
An item recognition system uses a top camera and one or more peripheral cameras to identify items. The item recognition system may use image embeddings generated from the images captured by the cameras to generate a concatenated embedding that describes an item depicted in the images, and may compare the concatenated embedding to reference embeddings to identify the item. Furthermore, the item recognition system may detect when items are overlapping in an image. For example, the item recognition system may apply an overlap detection model to a top image and a pixel-wise mask for the top image to detect whether one item is overlapping another in the top image. If an overlap is detected, the item recognition system notifies a user.
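The identification step above — concatenating per-camera embeddings and matching against reference embeddings — can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patented system: the embeddings here are plain Python lists, the similarity measure is cosine similarity (the abstract does not specify one), and all function names are hypothetical.

```python
import math

def concat_embedding(top_emb, peripheral_embs):
    """Concatenate the top-camera embedding with each peripheral-camera
    embedding into a single vector describing the item."""
    merged = list(top_emb)
    for emb in peripheral_embs:
        merged.extend(emb)
    return merged

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(query_emb, reference_embs):
    """Return the reference item whose embedding is most similar to the query.

    reference_embs -- dict mapping item names to concatenated embeddings
    """
    return max(reference_embs, key=lambda name: cosine(query_emb, reference_embs[name]))

# Toy 2-D embeddings from a top camera and one peripheral camera:
query = concat_embedding([1.0, 0.0], [[0.0, 1.0]])
references = {"apple": [1.0, 0.0, 0.0, 1.0], "banana": [0.0, 1.0, 1.0, 0.0]}
print(identify(query, references))  # apple
```

A production system would use high-dimensional embeddings from a trained network and an approximate nearest-neighbor index rather than a linear scan, but the matching logic is the same.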
Provided are a vehicle monitoring method and a vehicle monitoring system. In the vehicle monitoring method, a polarization angle of polarized light in a sky image reflected by a vehicle window in a monitoring scenario is calculated; this polarized light is formed by scattered sunlight in the sky region corresponding to the sky image. A light-filtering polarization angle is then calculated from the polarization angle of the reflected light. The polarized light reflected by the vehicle window is filtered out according to the light-filtering polarization angle, and the monitoring scenario is imaged to form a monitoring image.
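The filtering relationship above can be illustrated with Malus's law: a polarizer whose transmission axis is perpendicular to the reflected light's polarization angle extinguishes that reflection. This is a textbook-optics sketch under that assumption, not the patented calculation; the function names are hypothetical.

```python
import math

def filtering_angle(reflection_polarization_deg):
    """Polarizer transmission-axis angle (degrees) that extinguishes light
    polarized at the given angle: the perpendicular orientation."""
    return (reflection_polarization_deg + 90.0) % 180.0

def transmitted_intensity(i0, light_angle_deg, filter_angle_deg):
    """Malus's law: I = I0 * cos^2(delta), where delta is the angle between
    the light's polarization and the polarizer's transmission axis."""
    delta = math.radians(light_angle_deg - filter_angle_deg)
    return i0 * math.cos(delta) ** 2

# Skylight reflected off the window at a 30-degree polarization angle:
print(filtering_angle(30.0))                                   # 120.0
print(transmitted_intensity(1.0, 30.0, filtering_angle(30.0))) # ~0.0
```

With the glare from the window suppressed, the unpolarized light from inside the vehicle still passes the polarizer at half intensity, which is what makes the cabin visible in the monitoring image.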