64 patents in CPC class H04N
An eyebox region is illuminated with a fringe illumination pattern. An event sensor is configured to generate event signals. Eye motion is determined from the event signals. Eye features are extracted from data generated by the event sensor, and a predicted gaze vector is generated from the eye features.
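The event-driven gaze pipeline above can be sketched in a few lines. The feature choice (event centroid plus event count) and the linear gaze model are illustrative assumptions, not the patent's actual method:

```python
import numpy as np

def eye_features(events):
    """events: iterable of (x, y, polarity) tuples from the event sensor.
    Returns a simple feature vector: activity centroid and event rate."""
    ev = np.asarray(events, dtype=float)
    cx, cy = ev[:, 0].mean(), ev[:, 1].mean()   # centroid of recent activity
    rate = float(len(ev))                        # event count ~ motion energy
    return np.array([cx, cy, rate])

def predict_gaze(features, weights, bias):
    """Map eye features to a 3-D gaze vector with an assumed linear model,
    then normalize to a unit-length gaze direction."""
    g = weights @ features + bias
    return g / np.linalg.norm(g)
```

In practice the `weights` and `bias` would come from a per-user calibration; here they are free parameters of the sketch.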
A head-mountable display can include a structural frame defining a viewing opening and an optical module coupled to the structural frame. The optical module can include a display screen to project light through the viewing opening. The display screen can define an inner edge, an outer edge opposite the inner edge, a lower edge extending between the inner edge and the outer edge, and an upper edge opposite the lower edge. The optical module can include a first camera disposed adjacent the inner edge and closer to the lower edge than the upper edge, and a second camera disposed adjacent the lower edge and closer to the outer edge than the inner edge.
The technology disclosed relates to identifying an object in a field of view of a camera. In particular, it relates to identifying a display in the field of view of the camera. This is achieved by monitoring a space, including acquiring a series of image frames of the space using the camera and detecting one or more light sources in the series of image frames. Further, one or more frequencies of periodic intensity or brightness variations, also referred to as the 'refresh rate', of light emitted from the light sources are measured. Based on the measured frequencies of periodic intensity variations, at least one display that includes the light sources is identified.
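The refresh-rate measurement described above can be sketched as a spectral-peak test on a per-region intensity time series taken across the image frames. The function names, the candidate refresh rates, and the tolerance are illustrative assumptions; a real implementation would also need the camera frame rate to exceed twice the refresh rate to avoid aliasing:

```python
import numpy as np

def dominant_flicker_hz(intensity_series, frame_rate_hz):
    """Return the strongest periodic-intensity frequency (Hz) in a
    time series of region brightness sampled once per frame."""
    series = np.asarray(intensity_series, dtype=float)
    series = series - series.mean()              # drop the DC component
    spectrum = np.abs(np.fft.rfft(series))
    freqs = np.fft.rfftfreq(series.size, d=1.0 / frame_rate_hz)
    return freqs[np.argmax(spectrum)]

def looks_like_display(intensity_series, frame_rate_hz,
                       candidate_rates=(50.0, 60.0), tol_hz=2.0):
    """Flag a light source as a display if its dominant flicker frequency
    matches a common refresh rate (assumed candidates, assumed tolerance)."""
    f = dominant_flicker_hz(intensity_series, frame_rate_hz)
    return any(abs(f - r) <= tol_hz for r in candidate_rates)
```

For example, a light source flickering at 60 Hz, sampled by a 240 fps camera, would yield a spectral peak at 60 Hz and be flagged as a display.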
A digital camera includes a system control unit. The system control unit detects a touch operation on a touch panel. It does not perform processing corresponding to a touch operation performed within a predetermined time period after the user's eye shifts, with respect to a finder, from an eye-separation state to an eye-proximity state, and it performs the processing corresponding to a touch operation performed after the predetermined time period has elapsed from that shift.
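The suppression logic amounts to a debounce window keyed to the eye-proximity transition. A minimal sketch, assuming a monotonic millisecond clock; the class name and the 500 ms window are illustrative, not values from the patent:

```python
class TouchController:
    """Ignore touch operations for a predetermined period after the
    user's eye shifts from eye-separation to eye-proximity."""

    def __init__(self, suppress_ms=500):
        self.suppress_ms = suppress_ms       # assumed predetermined period
        self.eye_proximity_at = None         # time of the last shift, in ms

    def on_eye_proximity(self, now_ms):
        """Record the eye-separation -> eye-proximity transition."""
        self.eye_proximity_at = now_ms

    def on_touch(self, now_ms):
        """Return True if the touch operation should be processed."""
        if (self.eye_proximity_at is not None
                and now_ms - self.eye_proximity_at < self.suppress_ms):
            return False   # touch landed inside the suppression window
        return True
```

This avoids acting on accidental touches (e.g. the user's nose or cheek) that occur while the face approaches the finder.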
A method for displaying multimedia content, an electronic device for performing the method, and a recording medium on which a program for executing the method is recorded are disclosed. In one embodiment, the method for displaying multimedia content comprises: acquiring multimedia content including video data, which is reproduced as a video, and slide data including a key scene that is matched with an event time point in the reproduction time period of the video data and is displayed in a slideshow manner; acquiring text data corresponding to the multimedia content; displaying the multimedia content in a first area according to a video mode for reproducing the video data as the video or a slideshow mode for displaying the key scene in the slideshow manner; displaying at least a portion of the text data in a second area; and adjusting the displayed text data according to the displayed multimedia content.
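The matching of key scenes to event time points can be sketched as a lookup of the latest event time at or before the current playback position. The data layout (a sorted list of `(event_time_s, key_scene_id)` pairs) and all names are assumptions for illustration:

```python
import bisect

def current_key_scene(slide_data, playback_s):
    """slide_data: list of (event_time_s, key_scene_id) pairs, sorted by time.
    Return the key scene matched to the latest event time <= playback_s,
    or None if playback has not yet reached the first event time point."""
    times = [t for t, _ in slide_data]
    i = bisect.bisect_right(times, playback_s) - 1
    return slide_data[i][1] if i >= 0 else None
```

In slideshow mode this lookup selects which key scene to display, and the same index can drive which portion of the text data is shown in the second area.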