US-10594927

Mobile terminal and operating method thereof

Published
March 17, 2020
Technical Abstract

A mobile terminal includes: a display; and a controller configured to: cause the display to display a plurality of videos captured by a 360-degree camera; generate a 360-degree video by combining or stitching the plurality of videos; and cause the display to display a stitching region corresponding to a focused photographing object when the focused photographing object included in the 360-degree video is placed in the stitching region that is a boundary region in which at least two of the plurality of videos are connected.
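
To make the geometry concrete: a 360-degree rig stitches several overlapping camera views, and the "stitching regions" are the angular bands where adjacent fields of view meet. The sketch below is purely illustrative, not from the patent; the camera count, field of view, and function names are assumptions.

```python
# Sketch: locate the angular "stitching regions" (seams) of a hypothetical
# 4-camera 360-degree rig. Each camera covers 100 degrees while cameras sit
# 90 degrees apart, so adjacent views overlap by 10 degrees; those overlap
# bands are where frames get blended together.

def seam_bands(num_cameras=4, fov_deg=100.0):
    """Return (start, end) heading of each overlap band, in degrees [0, 360)."""
    step = 360.0 / num_cameras           # heading spacing between cameras
    half_overlap = (fov_deg - step) / 2.0
    bands = []
    for i in range(num_cameras):
        center = ((i + 0.5) * step) % 360.0   # midpoint between camera i, i+1
        bands.append(((center - half_overlap) % 360.0,
                      (center + half_overlap) % 360.0))
    return bands

def in_seam(heading_deg, bands):
    """True when a subject at this heading sits inside some overlap band."""
    h = heading_deg % 360.0
    for lo, hi in bands:
        if lo <= hi:
            if lo <= h <= hi:
                return True
        else:  # band wraps past 360 degrees
            if h >= lo or h <= hi:
                return True
    return False
```

With the defaults, `seam_bands()` yields bands centered at 45°, 135°, 225°, and 315°, so a subject at heading 45° is on a seam while one at 90° is not.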

Patent Claims
20 claims

Legal claims defining the scope of protection. Each claim is shown in both the original legal language and a plain English translation.

Claim 1

Original Legal Text

1. A mobile terminal comprising: a display; and a controller configured to: acquire a plurality of videos captured by a 360-degree camera; generate a 360-degree video by combining or stitching the plurality of videos; cause the display to display a partial area of a whole region of the generated 360-degree video, wherein the whole region includes a plurality of focused photographing objects and at least one of the plurality of focused photographing objects is not included in the displayed partial area; cause the display to display a stitching region corresponding to a first focused photographing object on the displayed partial area, when the first focused photographing object included in the 360-degree video is displayed and placed in a boundary region in which at least two of the plurality of videos are connected; increase an area of the displayed stitching region as a distance between the 360-degree camera and the first focused photographing object decreases; and cause the display to display an identifier corresponding to a second focused photographing object when the second focused photographing object among the plurality of focused photographing objects is not displayed, wherein the identifier includes information indicating that the second focused photographing object is placed in the boundary region when the second focused photographing object, which is not displayed, is placed in the boundary region.

Plain English Translation

A mobile terminal has a display and a controller. The controller obtains several videos from a 360-degree camera and stitches them into a single 360-degree video, only part of which is shown on screen at a time. When a tracked (focused) subject sits on a seam, the boundary where two of the source videos join, the terminal highlights that stitching region, and the highlight grows as the subject moves closer to the camera. If another tracked subject is outside the displayed area, the terminal shows an identifier for it, and the identifier indicates when that off-screen subject is sitting on a seam.

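
The behavior claim 1 recites can be summarized as a small decision routine: an on-screen subject on a seam gets a seam overlay whose size grows as the subject approaches the camera, while an off-screen subject gets an identifier that additionally flags a seam placement. A minimal sketch, assuming hypothetical function names and an illustrative inverse-distance scaling the patent does not specify:

```python
# Sketch of the claim 1 display logic. Names and scaling constants are
# illustrative assumptions, not taken from the patent.

def overlay_width_px(distance_m, base_px=8.0, scale=20.0):
    """Seam overlay widens as the subject gets closer to the camera."""
    return base_px + scale / max(distance_m, 0.1)

def display_decision(subject, seam=False, on_screen=True, distance_m=5.0):
    """Decide what the terminal should render for one focused subject."""
    if on_screen and seam:
        # Highlight the stitching region, sized by subject distance.
        return ("seam_overlay", round(overlay_width_px(distance_m), 1))
    if not on_screen:
        # Identifier for an off-screen subject; flag it when it sits on a seam.
        return ("identifier", "on_seam" if seam else "clear")
    return ("none", None)
```

For example, a subject 1 m away on a seam yields a wider overlay than the same subject at 2 m, and an off-screen subject on a seam yields `("identifier", "on_seam")`.
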
Claim 2

Original Legal Text

2. The mobile terminal of claim 1, wherein the controller is further configured to provide a guide for the first focused photographing object to avoid the stitching region in response to an input for selecting the stitching region.

Plain English Translation

When the user selects the highlighted stitching region, the terminal provides guidance for moving the tracked subject out of the seam.

Claim 3

Original Legal Text

3. The mobile terminal of claim 2, wherein the stitching region corresponding to the first focused photographing object is no longer displayed when the 360-degree camera is turned toward a different direction or angle.

Plain English Translation

The stitching-region highlight disappears once the 360-degree camera is turned toward a different direction or angle.

Claim 4

Original Legal Text

4. The mobile terminal of claim 2, wherein the guide is provided in at least one of a text format or a voice format.

Plain English Translation

The guidance can be delivered as text, as voice, or both.

Claim 5

Original Legal Text

5. The mobile terminal of claim 1, wherein the controller is further configured to cause the display to display the stitching region as a bold dashed-line.

Plain English Translation

The stitching region is drawn as a bold dashed line.

Claim 6

Original Legal Text

6. The mobile terminal of claim 1, wherein the first focused photographing object is a human or an object.

Plain English Translation

The tracked subject can be a person or a thing.

Claim 7

Original Legal Text

7. The mobile terminal of claim 1, wherein the first focused photographing object is set by a user.

Plain English Translation

The user chooses which subject is tracked.

Claim 8

Original Legal Text

8. The mobile terminal of claim 1, wherein: the 360-degree camera comprises a plurality of cameras such that each of the plurality of cameras captures a respectively corresponding one of the plurality of videos; and the controller is further configured to cause the display to display information of each of the plurality of cameras on the respectively corresponding one of the plurality of videos.

Plain English Translation

The 360-degree camera is really a set of individual cameras, each capturing one of the source videos. The controller overlays information about each camera on the video that camera captured, so the user can tell at a glance which feed came from which camera.
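
A minimal data-model sketch of what claims 8 and 9 describe: each source video carries the metadata of the camera that captured it, so the UI can label every feed. The field names (`resolution`, `kind`, `position`) and example values are illustrative assumptions, not from the patent.

```python
# Sketch: attach each camera's metadata to its own video feed, as claims 8-9
# describe. All field names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    resolution: str   # e.g. "3840x1920"
    kind: str         # camera type, e.g. "wide"
    position: str     # mounting position on the rig

def labeled_feeds(cameras):
    """Build one on-screen label per feed, shown on the matching video."""
    return {cam.name: f"{cam.kind} @ {cam.position} ({cam.resolution})"
            for cam in cameras}

rig = [Camera("cam0", "3840x1920", "wide", "front"),
       Camera("cam1", "1920x960", "wide", "rear")]
```

Here `labeled_feeds(rig)` maps each feed name to a label such as `"wide @ front (3840x1920)"`, which would be drawn over that camera's video.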

Claim 9

Original Legal Text

9. The mobile terminal of claim 8, wherein the controller is further configured to cause the display to display information of at least one of a resolution, a type, or a position of each of the plurality of cameras on the respectively corresponding one of the plurality of videos.

Plain English Translation

Building on claim 8, the information overlaid on each video includes at least one of that camera's resolution, its type, or its position on the rig, so the user can see the technical characteristics of the camera behind each feed.

Claim 10

Original Legal Text

10. The mobile terminal of claim 8, wherein the controller is further configured to cause the display to display a central region of a view angle of each of the plurality of cameras on the respectively corresponding one of the plurality of videos.

Plain English Translation

The terminal marks the center of each camera's viewing angle on that camera's video.

Claim 11

Original Legal Text

11. The mobile terminal of claim 8, wherein: the plurality of cameras comprise a high resolution camera and a low resolution camera; and the controller is further configured to cause the display to display a first video captured by the high resolution camera and a second video captured by the low resolution camera such that the first video and the second video are distinguished from each other among the plurality of videos.

Plain English Translation

The camera set mixes a high-resolution camera and a low-resolution camera, and the terminal displays their two videos in a way that lets the user tell them apart.

Claim 12

Original Legal Text

12. The mobile terminal of claim 11, wherein the controller is further configured to cause the display to display the first video in a first region indicated by a bold dashed-line.

Plain English Translation

The high-resolution video is shown in a region outlined with a bold dashed line.

Claim 13

Original Legal Text

13. The mobile terminal of claim 12, wherein the controller is further configured to cause the display to display the second video in a second region faintly relative to the first video in the first region.

Plain English Translation

Building on claim 12, the low-resolution video is displayed faintly relative to the high-resolution video, for example at reduced opacity or brightness, so the higher-quality feed remains visually dominant while both stay visible.
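
"Faint" rendering of the kind claim 13 describes is commonly achieved with alpha blending: the secondary feed is mixed toward the background at reduced opacity while the primary feed stays fully opaque. The per-pixel sketch below is one plausible implementation under that assumption; the alpha value and function names are illustrative, not from the patent.

```python
# Sketch: render the low-resolution feed "faintly" by alpha-blending each of
# its pixels toward the background, while the high-resolution feed stays
# fully opaque. Channels are 0-255 integers; alpha ranges 0.0-1.0.

def blend_pixel(src, bg, alpha):
    """Standard alpha blend: result = alpha * src + (1 - alpha) * bg."""
    return tuple(round(alpha * s + (1.0 - alpha) * b) for s, b in zip(src, bg))

def render(primary_px, secondary_px, bg=(0, 0, 0), faint_alpha=0.35):
    """Primary region fully opaque; secondary region faint relative to it."""
    return {"first_region": blend_pixel(primary_px, bg, 1.0),
            "second_region": blend_pixel(secondary_px, bg, faint_alpha)}
```

With a black background, every channel of the faint region comes out strictly darker than the same pixel rendered opaquely, which is exactly the "faint relative to the first video" effect.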

Claim 14

Original Legal Text

14. The mobile terminal of claim 1, wherein: the second focused photographing object included in the whole region is not included in the displayed partial area; and the information is not included in the identifier when the second focused photographing object is not placed in the boundary region.

Plain English Translation

The second tracked subject lies outside the displayed area, and its identifier omits the seam indication whenever that subject is not actually on a seam.

Claim 15

Original Legal Text

15. The mobile terminal of claim 14, wherein the controller is further configured to cause the display to display the second focused photographing object in response to selection of the identifier.

Plain English Translation

Selecting the identifier brings the off-screen subject into view on the display.

Claim 16

Original Legal Text

16. A method of operating a mobile terminal, the method comprising: acquiring a plurality of videos captured by a 360-degree camera; generating a 360-degree video by combining or stitching the plurality of videos; displaying a partial area of a whole region of the generated 360-degree video, wherein the whole region includes a plurality of focused photographing objects and at least one of the plurality of focused photographing objects is not included in the displayed partial area; displaying a stitching region corresponding to a first focused photographing object on the displayed partial area, when the first focused photographing object included in the 360-degree video is displayed and placed in a boundary region in which at least two of the plurality of videos are connected; increasing an area of the displayed stitching region as a distance between the 360-degree camera and the first focused photographing object decreases; and displaying an identifier corresponding to a second focused photographing object when the second focused photographing object among the plurality of focused photographing objects is not displayed, wherein the identifier includes information indicating that the second focused photographing object is placed in the boundary region when the second focused photographing object, which is not displayed, is placed in the boundary region.

Plain English Translation

The method counterpart of claim 1: acquire the videos, stitch them into a 360-degree video, display part of it, highlight the seam under an on-screen tracked subject (with a larger highlight the closer the subject is to the camera), and show a seam-aware identifier for any tracked subject that is off screen.

Claim 17

Original Legal Text

17. The method of claim 16, further comprising providing a guide for the first focused photographing object to avoid the stitching region in response to an input for selecting the stitching region.

Plain English Translation

The method counterpart of claim 2: when the stitching region is selected, guidance is provided for moving the tracked subject out of the seam.

Claim 18

Original Legal Text

18. The method of claim 17, wherein the stitching region corresponding to the first focused photographing object is no longer displayed when the 360-degree camera is turned toward a different direction or angle.

Plain English Translation

The method counterpart of claim 3: the highlight disappears when the camera is turned toward a different direction or angle.

Claim 19

Original Legal Text

19. The method of claim 16, wherein the first focused photographing object is a human or an object.

Plain English Translation

The method counterpart of claim 6: the tracked subject can be a person or a thing.

Claim 20

Original Legal Text

20. The method of claim 16, wherein: the 360-degree camera comprises a plurality of cameras such that each of the plurality of cameras captures a respectively corresponding one of the plurality of videos; and information of each of the plurality of cameras is displayed on the respectively corresponding one of the plurality of videos.

Plain English Translation

The method counterpart of claim 8: each camera in the set captures one of the videos, and each camera's information is displayed on its own video.

Classification Codes (CPC)

Cooperative Patent Classification codes for this invention.

H04N
G06F
Patent Metadata

Filing Date

March 10, 2017

Publication Date

March 17, 2020
