
Pixel Camera

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

Pixel Camera is a camera phone application developed by Google for the Android operating system on Google Pixel devices. Development of the application began in 2011 at the Google X research incubator, led by Marc Levoy, which was developing image fusion technology for Google Glass. It was publicly released for Android 4.4+ on Google Play on April 16, 2014. The app was initially released as Google Camera and supported on all devices running Android 4.4 KitKat and higher. However, in October 2023, coinciding with the release of the Pixel 8 series, it was renamed Pixel Camera and became officially supported only on Google Pixel devices.


Google Camera contains a number of features that can be activated either on the Settings page or on the row of icons at the top of the app. Starting with Pixel devices, the camera app has been aided by hardware accelerators, hidden image processing chips, to perform its image processing. The first generation of Pixel phones used Qualcomm's Hexagon DSPs and Adreno GPUs to accelerate image processing. The Pixel 2 and Pixel 3 (but not

A GPU, fixed-function units implemented on field-programmable gate arrays (FPGAs), and fixed-function units implemented on application-specific integrated circuits (ASICs). Hardware acceleration is advantageous for performance, and practical when the functions are fixed, so updates are not as needed as in software solutions. With the advent of reprogrammable logic devices such as FPGAs, the restriction of hardware acceleration to fully fixed algorithms has eased since 2010, allowing hardware acceleration to be applied to problem domains requiring modification to algorithms and processing control flow. The disadvantage, however,

A register file). Hardware accelerators improve the execution of a specific algorithm by allowing greater concurrency, having specific datapaths for their temporary variables, and reducing the overhead of instruction control in the fetch-decode-execute cycle. Modern processors are multi-core and often feature parallel "single-instruction; multiple data" (SIMD) units. Even so, hardware acceleration still yields benefits. Hardware acceleration

A 4-minute exposure to significantly reduce shot noise. By dividing the shot into several shorter exposures, the camera achieves the light capture of a long exposure without having to deal with star trails, which would otherwise require moving the phone very precisely during the exposure to compensate for the Earth's rotation. Astrophotography mode also includes improved algorithms to remove hot pixels and warm pixels caused by dark current, and a convolutional neural network to detect skies for sky-specific noise reduction. Astrophotography mode
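
To make the averaging idea concrete, here is a minimal sketch (not Google's code; NumPy with synthetic data) of stacking several short exposures of a static sky. Shot noise in the mean of N frames falls roughly as 1/sqrt(N), which is how fifteen 16-second frames can stand in for one 4-minute exposure without star trails:

    import numpy as np

    def stack_exposures(frames):
        # Average N aligned short exposures; noise in the mean falls ~1/sqrt(N).
        return np.mean([f.astype(np.float64) for f in frames], axis=0)

    rng = np.random.default_rng(0)
    truth = rng.uniform(0.0, 0.05, size=(8, 8))               # faint static scene
    frames = [rng.poisson(truth * 1000) / 1000 for _ in range(15)]
    print(np.abs(frames[0] - truth).mean())                   # single-frame error
    print(np.abs(stack_exposures(frames) - truth).mean())     # stacked error (smaller)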

A 6-second exposure. The motion metering and tile-based processing of the image make it possible to reduce, if not cancel, camera shake, resulting in a clear and properly exposed shot. Google claims it can handle up to ~8% displacement frame to frame, with each frame broken into around 12,000 tiles. It also introduced a learning-based AWB algorithm for more accurate white balance in low light. Night Sight also works well in daylight, improving white balance, detail and sharpness. Like HDR+ enhanced, Night Sight features positive-shutter-lag (PSL). Night Sight also supports
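
A rough sketch of the tile-based idea described above (illustrative only; the real pipeline uses far more sophisticated motion metering and robust merging). Each tile of an alternate frame is aligned to the reference by brute-force search and merged only when it matches well:

    import numpy as np

    def merge_tiled(ref, alt, tile=16, search=2):
        # Tile-wise merge of an alternate frame into a reference frame.
        # Well-aligned tiles are averaged in; badly matching tiles (e.g. due
        # to subject motion) fall back to the reference alone.
        out = ref.astype(np.float64).copy()
        h, w = ref.shape
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                r = ref[y:y+tile, x:x+tile].astype(np.float64)
                best, best_err = None, np.inf
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        ys, xs = y + dy, x + dx
                        if ys < 0 or xs < 0 or ys + tile > h or xs + tile > w:
                            continue
                        a = alt[ys:ys+tile, xs:xs+tile].astype(np.float64)
                        err = np.abs(r - a).mean()
                        if err < best_err:
                            best, best_err = a, err
                if best is not None and best_err < 0.1 * r.mean() + 1e-9:  # crude "good match" test
                    out[y:y+tile, x:x+tile] = (r + best) / 2.0
        return out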

A delay-timer as well as an assisted selector for the focus featuring three options (far, close and auto-focus). Night Sight was introduced with the Pixel 3; all older Pixel phones were updated with support. Astrophotography mode activates automatically when Night Sight mode is enabled and the phone detects it is on a stable support such as a tripod. In this mode, the camera averages up to fifteen 16-second exposures to create

A feature that, using Google's new ARCore platform, allowed the user to superimpose augmented reality animated objects on their photos and videos. With the release of the Pixel 3, AR Stickers was rebranded as Playground. The camera offers a functionality powered by Google Lens, which allows the camera to copy text it sees, identify products, books and movies and search for similar ones, identify animals and plants, and scan barcodes and QR codes, among other things. The Photobooth mode allows

A single FPGA or ASIC. Similarly, specialized functional units can be composed in parallel, as in digital signal processing, without being embedded in a processor IP core. Therefore, hardware acceleration is often employed for repetitive, fixed tasks involving little conditional branching, especially on large amounts of data. This is how Nvidia's CUDA line of GPUs is implemented. As device mobility has increased, new metrics have been developed that measure
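
The point about branch-free bulk work is visible even from Python: a vectorized NumPy expression (one data-parallel operation, loosely analogous to what SIMD units and GPUs do in hardware) against an interpreted per-element loop. This is only an analogy, not GPU code:

    import time
    import numpy as np

    x = np.random.rand(1_000_000).astype(np.float32)

    t0 = time.perf_counter()
    y_loop = [3.0 * v + 1.0 for v in x]        # one element at a time
    t1 = time.perf_counter()
    y_vec = 3.0 * x + 1.0                      # one data-parallel operation
    t2 = time.perf_counter()
    print(f"loop: {t1 - t0:.3f}s  vectorized: {t2 - t1:.4f}s")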

A single instant in time.) This produces predictable distortions of fast-moving objects or rapid flashes of light. This is in contrast with "global shutter", in which the entire frame is captured at the same instant. The rolling shutter can be either mechanical or electronic. The advantage of the electronic rolling shutter is that the image sensor can continue to gather photons during

A snapshot representing a "relative" single instant in time and therefore do not suffer from the motion artifacts caused by rolling shutters. Rolling shutters can cause effects such as wobble, skew, smear, and partial exposure. The effects of a rolling shutter can prove difficult for visual effects filming. The process of matchmoving establishes perspective in a scene based on a single point in time, but this is difficult with a rolling shutter that provides multiple points in time within

A time from a row of icons at the top of the screen. Google Camera allows the user to create a 'Photo Sphere', a 360-degree panorama photo, originally added in Android 4.2 in 2012. These photos can then be embedded in a web page with custom HTML code or uploaded to various Google services. The Pixel 8 was released without the feature, making it the first Pixel phone not to have it, leading many to believe that



A video (in a video camera) is captured not by taking a snapshot of the entire scene at a single instant in time but rather by scanning across the scene rapidly, vertically, horizontally or rotationally. In other words, not all parts of the image of the scene are recorded at exactly the same instant. (Though, during playback, the entire image of the scene is displayed at once, as if it represents
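
A toy simulation of that scanning process (a hypothetical NumPy sketch, not any camera's actual readout logic): each output row is sampled at a later instant, so a vertical bar moving horizontally comes out slanted:

    import numpy as np

    def rolling_shutter(frames):
        # Simulate a rolling shutter from a stack of global-shutter frames.
        # frames: T x H x W array, one full frame per instant. Row r of the
        # output is taken from the frame captured when row r was read out.
        t, h, w = frames.shape
        rows = np.arange(h) * (t - 1) // max(h - 1, 1)   # readout time per row
        return frames[rows, np.arange(h), :]

    # A vertical bar moving right: each row samples a later instant,
    # so the bar comes out slanted instead of straight.
    t, h, w = 8, 8, 16
    frames = np.zeros((t, h, w), dtype=int)
    for i in range(t):
        frames[i, :, i] = 1
    print(rolling_shutter(frames))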

Is a multi-frame super-resolution technique introduced with the Pixel 3 that shifts the image sensor to achieve higher resolution, which Google claims is equivalent to 2-3x optical zoom. It is similar to drizzle image processing. Super Res Zoom can also be used with a telephoto lens; for example, Google claims the Pixel 4 can capture 8x zoom at near-optical quality. When Motion Photos is enabled, Top Shot analyzes up to 90 additional frames from 1.5 seconds before and after
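
A naive drizzle-style sketch of the multi-frame super-resolution idea (illustrative only; Google's pipeline is far more elaborate): sub-pixel shifts between frames, whether from hand shake or deliberate sensor shifts, let their pixels be deposited onto a finer grid:

    import numpy as np

    def drizzle(frames, shifts, scale=2):
        # frames: list of HxW arrays; shifts: per-frame (dy, dx) in
        # low-res pixels. Deposit each frame's pixels onto a grid that is
        # `scale` times finer, then average the contributions per cell.
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        for f, (dy, dx) in zip(frames, shifts):
            ys = (np.arange(h)[:, None] * scale + round(dy * scale)) % (h * scale)
            xs = (np.arange(w)[None, :] * scale + round(dx * scale)) % (w * scale)
            acc[ys, xs] += f
            cnt[ys, xs] += 1
        return acc / np.maximum(cnt, 1)   # cells never hit stay zero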

Is also adjustable to defined presets. In October 2019, Photobooth was removed as a standalone mode, becoming an "Auto" option in the shutter options, and later being removed altogether. Night Sight is based on a similar principle to exposure stacking, used in astrophotography. Night Sight uses modified HDR+ or Super Res Zoom algorithms. Once the user presses the trigger, multiple long-exposure shots are taken, up to 15 exposures of 1/15 second or 6 exposures of 1 second, to create up to
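
The arithmetic of those frame budgets, as a tiny illustrative helper (the numbers come from the paragraph above; the function itself is hypothetical): 15 frames of 1/15 s gather about one second of light handheld, while 6 frames of 1 s gather six seconds on a tripod:

    def night_sight_budget(handheld):
        # Frame schedule from the figures quoted above.
        frames, per_frame = (15, 1 / 15) if handheld else (6, 1.0)
        return frames, per_frame, frames * per_frame

    print(night_sight_budget(True))    # (15, 0.0666..., ~1 s of total light)
    print(night_sight_budget(False))   # (6, 1.0, 6 s of total light)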

Is done by processing Boolean functions on the binary input, and then outputting the results for storage or further processing by other devices. Because all Turing machines can run any computable function, it is always possible to design custom hardware that performs the same function as a given piece of software. Conversely, software can always be used to emulate the function of a given piece of hardware. Custom hardware may offer higher performance per watt for
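
For instance, a one-bit full adder is just a Boolean function: the same truth table can be etched into silicon as a handful of gates or, as in this minimal Python sketch, emulated in software:

    def full_adder(a, b, cin):
        # One-bit full adder as pure Boolean logic.
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return s, cout

    def ripple_add(x, y, bits=8):
        # Software emulation of an n-bit ripple-carry adder built
        # from the full adder above.
        carry, out = 0, 0
        for i in range(bits):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            out |= s << i
        return out

    assert ripple_add(100, 55) == 155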

Is limited in parallel processing capability only by the area and logic blocks available on the integrated circuit die. Therefore, hardware is much freer to offer massive parallelism than software on general-purpose processors, offering a possibility of implementing the parallel random-access machine (PRAM) model. It is common to build multicore and manycore processing units out of microprocessor IP core schematics on

Is overhead in decoding instruction opcodes and multiplexing available execution units on a microprocessor or microcontroller, leading to low circuit utilization. Modern processors that provide simultaneous multithreading exploit the under-utilization of available processor functional units and instruction-level parallelism between different hardware threads. Hardware execution units do not in general rely on

Is suitable for any computation-intensive algorithm which is executed frequently in a task or program. Depending upon the granularity, hardware acceleration can vary from a small functional unit to a large functional block (like motion estimation in MPEG-2). Rolling shutter is a method of image capture in which a still picture (in a still camera) or each frame of

Is that in many open source projects, it requires proprietary libraries that not all vendors are keen to distribute or expose, making it difficult to integrate into such projects. Integrated circuits are designed to handle various operations on both analog and digital signals. In computing, digital signals are the most common and are typically represented as binary numbers. Computer hardware and software use this binary representation to perform computations. This

Is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both. To perform computing tasks more efficiently, generally one can invest time and money in improving

The Pixel 3a) include the Pixel Visual Core to aid with image processing. The Pixel 4 introduced the Pixel Neural Core. Note that the Visual Core's main purpose is to bring the HDR+ image processing that is emblematic of the Pixel camera to any other app that uses the relevant Google APIs. The Pixel Visual Core is built to do heavy image processing while conserving energy, saving battery. Unlike earlier versions of high-dynamic-range (HDR) imaging, HDR+, also known as HDR+ on, uses computational photography techniques to achieve higher dynamic range. HDR+ takes continuous burst shots with short exposures. When



The instruction cycle), to execute the instructions constituting the software program. Relying on a common cache for code and data leads to the "von Neumann bottleneck", a fundamental limitation on the throughput of software on processors implementing the von Neumann architecture. Even in the modified Harvard architecture, where instructions and data have separate caches in the memory hierarchy, there

The server industry, intended to prevent regular expression denial of service (ReDoS) attacks. The hardware that performs the acceleration may be part of a general-purpose CPU or a separate unit called a hardware accelerator, though they are usually referred to with a more specific term, such as 3D accelerator or cryptographic accelerator. Traditionally, processors were sequential (instructions were executed one by one) and were designed to run general-purpose algorithms controlled by instruction fetch (for example, moving temporary results to and from

The Nexus 5X and Nexus 6P. In mid-2017, a modified version of Google Camera was created for any smartphone equipped with a Snapdragon 820, 821 or 835 processor. In 2018, developers released modified versions enabling Night Sight on non-Pixel phones. In August 2020, a new way of accessing extra cameras was introduced, removing the need to use root on phones that don't expose all cameras to third-party apps. Hardware acceleration

The Pixel 2. Motion Photo is disabled in HDR+ enhanced mode. Fused Video Stabilization, a technique that combines optical image stabilization and electronic/digital image stabilization, can be enabled for significantly smoother video. This technique also corrects rolling shutter distortion and focus breathing, amongst various other problems. Fused Video Stabilization was introduced on the Pixel 2. Super Res Zoom

The Pixel 4a (5G) and 5. With Bracketing is supported in Night Sight for the Pixel 4 and 4a. Google Camera's Motion Photo mode is similar to HTC's Zoe and iOS' Live Photo. When enabled, a short, silent video clip of relatively low resolution is paired with the original photo. If RAW is enabled, only a 0.8 MP DNG file is created, not the non-motion 12.2 MP DNG. Motion Photos was introduced on

The acquisition process, thus effectively increasing sensitivity. It is found on many digital still and video cameras using CMOS sensors. The effect is most noticeable when imaging extreme conditions of motion or the fast flashing of light. While some CMOS sensors use a global shutter, the majority found in the consumer market use a rolling shutter. CCDs (charge-coupled devices) are alternatives to CMOS sensors, and are generally more sensitive and more expensive. CCD-based cameras often use global shutters, which take

The app, Ultra HDR was backported to the Pixel 7 and 6. Many developers have released unofficial ports that allow for their use on non-Google phones, or implement its premium features on older Google phones. These unofficial apps often work around the lack of certain hardware features present in Google's top-tier devices, and sometimes even go as far as enabling features not exposed by the official version of

The app. There are numerous different versions, targeted at different Android phones. Although many of the features are available in the ported versions, it is not unusual for some features not to be available, or not to work properly, on phones without proper API support or with incompatible hardware. Google Play Services, or a replacement like microG, is also required for the app to run. In 2016, a modified version brought HDR+ with Zero Shutter Lag (ZSL) back to

The application of machine learning to identify what should be kept in focus and what should be blurred out. Portrait mode was introduced on the Pixel 2. Additionally, a "face retouching" feature can be activated which cleans up blemishes and other imperfections from the subject's skin. The Pixel 4 featured an improved Portrait mode; the machine learning algorithm uses parallax information from

The cost of overhead to compute general operations. Advantages of focusing on hardware may include speedup, reduced power consumption, lower latency, increased parallelism and bandwidth, and better utilization of area and functional components available on an integrated circuit, at the cost of lower ability to update designs once etched onto silicon and higher costs of functional verification, time to market, and


The cost of general-purpose utility. Greater RTL customization of hardware designs allows emerging architectures such as in-memory computing, transport triggered architectures (TTA) and networks-on-chip (NoC) to further benefit from increased locality of data to execution context, thereby reducing computing and communication latency between modules and functional units. Custom hardware

The default mode or Night Sight mode, it is automatically applied if there is a person or people in the shot. Portrait Light was a collaboration between the Google Research, Google Daydream, Google Pixel, and Google Photos teams. With the launch of the Pixel 8, Google announced that the Pixel Camera would receive support for Ultra HDR. Ultra HDR is a format that stores an additional set of data alongside the JPG, with additional luminosity information to produce an HDR photo. Shortly after, with version 9.2 of

The dynamic range compared to HDR+ on. HDR+ enhanced on the Pixel 3 uses the learning-based AWB algorithm from Night Sight. Starting with the Pixel 4, Live HDR+ replaced HDR+ on, featuring a WYSIWYG viewfinder with a real-time preview of HDR+. Live HDR+ uses the learning-based AWB algorithm from Night Sight and averages up to nine underexposed pictures. 'Live HDR+' mode uses Dual Exposure Controls, with separate sliders for brightness (capture exposure) and for shadows (tone mapping). This feature
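
A hypothetical sketch of what separate brightness and shadow controls could look like: one gain applied to the linear capture signal, then a shadow-lifting tone curve applied afterwards. The curve below is invented for illustration and is not Google's tone mapper:

    import numpy as np

    def dual_exposure(raw, brightness=1.0, shadows=0.0):
        # raw: linear image data in [0, 1].
        linear = np.clip(raw * brightness, 0.0, 1.0)   # "capture exposure" slider
        gamma = 1.0 / (1.0 + shadows)                  # "shadows" slider
        return np.clip(linear ** gamma, 0.0, 1.0)      # tone mapping (gamma < 1 lifts shadows)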

The feature has been discontinued. Portrait mode (called Lens Blur prior to the release of the Pixel line) offers an easy way for users to take 'selfies' or portraits with a bokeh effect, in which the subject of the photo is in focus and the background is slightly blurred. This effect is achieved via the parallax information from dual-pixel sensors when available (such as on the Pixel 2 and Pixel 3), and

The need for more parts. In the hierarchy of digital computing systems ranging from general-purpose processors to fully customized hardware, there is a tradeoff between flexibility and efficiency, with efficiency increasing by orders of magnitude when any given application is implemented higher up that hierarchy. This hierarchy includes general-purpose processors such as CPUs, more specialized processors such as programmable shaders in

The relative performance of specific acceleration protocols, considering characteristics such as physical hardware dimensions, power consumption, and operations throughput. These can be summarized into three categories: task efficiency, implementation efficiency, and flexibility. Appropriate metrics consider the area of the hardware along with both the corresponding operations throughput and energy consumed. Examples of hardware acceleration include bit blit acceleration functionality in graphics processing units (GPUs), use of memristors for accelerating neural networks, and regular expression hardware acceleration for spam control in
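
Those figures of merit are straightforward ratios. A toy helper with made-up numbers (both devices hypothetical) shows why a fixed-function block can win on operations per joule and per mm² while losing on flexibility:

    def accelerator_metrics(ops_per_s, watts, area_mm2):
        # Area and energy efficiency of the kind described above.
        return {"ops_per_joule": ops_per_s / watts,
                "ops_per_s_per_mm2": ops_per_s / area_mm2}

    print(accelerator_metrics(5e10, 65.0, 150.0))   # hypothetical general-purpose CPU
    print(accelerator_metrics(2e12, 3.0, 10.0))     # hypothetical dedicated accelerator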

The same frame. Final results depend on the readout speed of the sensor and the nature of the scene being filmed; as a rule of thumb, higher-end cinema cameras have faster readout speeds and therefore milder rolling shutter artifacts than low-end cameras. Images and video that suffer from rolling shutter distortion can be improved by algorithms that perform rolling shutter rectification, or rolling shutter compensation. How to do this
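
A crude sketch of one rectification strategy, assuming the camera panned horizontally at a constant speed during readout (real rectification algorithms estimate per-row motion rather than assuming it):

    import numpy as np

    def rectify_rows(img, velocity_px_per_row):
        # Undo a constant horizontal motion by shifting each row back in
        # proportion to its readout time (row index).
        out = np.empty_like(img)
        for r in range(img.shape[0]):
            out[r] = np.roll(img[r], -round(r * velocity_px_per_row))
        return out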

The same functions that can be specified in software. Hardware description languages (HDLs) such as Verilog and VHDL can model the same semantics as software and synthesize the design into a netlist that can be programmed to an FPGA or composed into the logic gates of an ASIC. The vast majority of software-based computing occurs on machines implementing the von Neumann architecture, collectively known as stored-program computers. Computer programs are stored as data and executed by processors. Such processors must fetch and decode instructions, as well as load data operands from memory (as part of

The shutter is pressed, the last 5–15 frames are analysed to pick the sharpest shots (using lucky imaging), which are selectively aligned and combined with image averaging. HDR+ also uses semantic segmentation to detect faces, brightening them with synthetic fill flash, and to darken and denoise skies. HDR+ also reduces shot noise and improves colors, while avoiding blown-out highlights and motion blur. HDR+
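
A simplified sketch of that merge (illustrative only; real HDR+ also aligns tiles and weights the merge per pixel): score each frame of the burst with a Laplacian sharpness measure, keep the sharpest few, and average them:

    import numpy as np

    def sharpness(frame):
        # Simple Laplacian-energy sharpness score for lucky imaging.
        lap = (-4 * frame
               + np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
               + np.roll(frame, 1, 1) + np.roll(frame, -1, 1))
        return float((lap ** 2).mean())

    def hdr_plus_merge(burst, keep=5):
        # Rank frames by sharpness, keep the best, average the survivors.
        ranked = sorted(burst, key=sharpness, reverse=True)
        return np.mean([f.astype(np.float64) for f in ranked[:keep]], axis=0)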

The shutter is pressed. The Pixel Visual Core is used to accelerate the analysis using computer vision techniques, ranking the frames based on object motion, motion blur, auto exposure, auto focus, and auto white balance. About ten additional photos are saved, including an additional HDR+ photo of up to 3 MP. Top Shot was introduced on the Pixel 3. Like most camera applications, Google Camera offers different usage modes allowing
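
A toy version of that ranking step, with invented weights and per-frame scores (the real system derives these signals with computer vision models on the Pixel Visual Core):

    def top_shot_rank(frames_meta, top_k=10):
        # Combine per-frame quality signals into one score and keep the best.
        def score(m):
            return (1.5 * m["sharpness"] - 1.0 * m["motion_blur"]
                    + 0.5 * m["exposure_ok"] + 0.5 * m["focus_ok"])
        return sorted(frames_meta, key=score, reverse=True)[:top_k]

    candidates = [{"id": i, "sharpness": s, "motion_blur": b,
                   "exposure_ok": 1, "focus_ok": 1}
                  for i, (s, b) in enumerate([(0.9, 0.1), (0.4, 0.6), (0.8, 0.2)])]
    print([m["id"] for m in top_shot_rank(candidates, top_k=2)])   # [0, 2]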


The software, improving the hardware, or both. There are various approaches with advantages and disadvantages in terms of decreased latency, increased throughput, and reduced energy consumption. Typical advantages of focusing on software may include greater versatility, more rapid development, lower non-recurring engineering costs, heightened portability, and ease of updating features or patching bugs, at

The telephoto camera and the dual pixels, and the difference between the telephoto camera and the wide camera, to create more accurate depth maps. For the front-facing camera, it uses the parallax information from the front-facing camera and the IR cameras. The blur effect is applied at the raw stage, before the tone-mapping stage, for a more realistic SLR-like bokeh effect. In late 2017, with the debut of the Pixel 2 and Pixel 2 XL, Google introduced AR Stickers,
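
A minimal sketch of depth-dependent background blur applied on linear (pre-tone-mapping) data, with a box filter standing in for a real lens kernel and all parameters invented for illustration:

    import numpy as np

    def box_blur(img, k):
        # Separable box blur of odd width k; edges handled by wrap-around.
        out = img.astype(np.float64)
        for axis in (0, 1):
            acc = np.zeros_like(out)
            for d in range(-(k // 2), k // 2 + 1):
                acc += np.roll(out, d, axis=axis)
            out = acc / k
        return out

    def portrait_blur(raw, depth, focus_depth, strength=4.0):
        # Blur radius grows with distance from the in-focus plane; pixels
        # near the focus plane (radius rounding to 0) stay sharp.
        out = raw.astype(np.float64).copy()
        radius = np.clip(strength * np.abs(depth - focus_depth), 0, 8)
        for r in range(1, 9):
            mask = (np.round(radius) == r)
            if not mask.any():
                continue
            out[mask] = box_blur(raw, 2 * r + 1)[mask]
        return out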

The user to automate the capture of selfies. The AI is able to detect the user's smile or funny faces and shoot the picture at the best time without any action from the user, similar to Google Clips. This mode also features two-level AI processing of the subject's face that can be enabled or disabled in order to soften their skin. Motion Photos functionality is also available in this mode. The white balance

The user to take different types of photo or video. Slow-motion video can be captured in Google Camera at either 120 or, on supported devices, 240 frames per second. Panoramic photography is also possible with Google Camera. Four types of panoramic photo are supported: horizontal, vertical, wide-angle and fisheye. Once the Panorama function is selected, one of these four modes can be selected at

The von Neumann or modified Harvard architectures and do not need to perform the instruction fetch and decode steps of an instruction cycle and incur those stages' overhead. If needed calculations are specified in a register transfer level (RTL) hardware design, the time and circuit-area costs that would be incurred by instruction fetch and decoding stages can be reclaimed and put to other uses. This reclamation saves time, power, and circuit area in computation. The reclaimed resources can be used for increased parallel computation, other functions, communication, or memory, as well as increased input/output capabilities. This comes at

Was also redesigned to decide per pixel whether or not to merge (like Super Res Zoom) and updated to handle long exposures (clipped highlights, more motion blur and different noise characteristics). With Bracketing enables further reduced read noise, improved details/texture and more natural colors. With Bracketing is automatically enabled depending on the dynamic range and motion. With Bracketing is supported in all modes for

Was introduced on the Nexus 6 and brought back to the Nexus 5. Unlike HDR+/HDR+ on, 'HDR+ enhanced' mode does not use Zero Shutter Lag (ZSL). Like Night Sight, HDR+ enhanced features positive-shutter-lag (PSL): it captures images after the shutter is pressed. HDR+ enhanced is similar to HDR+ from the Nexus 5, Nexus 6, Nexus 5X and Nexus 6P. It is believed to use underexposed and overexposed frames like Smart HDR from Apple. HDR+ enhanced captures increase

Was introduced with the Pixel 4, and backported to the Pixel 3 and Pixel 3a. Portrait Light is a post-processing feature that allows adding a light source to portraits. It simulates the directionality and intensity of the added light to complement the original photograph's lighting using machine learning models. Portrait Light was introduced with the Pixel 5, and backported to the Pixel 4, Pixel 4a and Pixel 4a 5G. When using

Was made available for the Pixel 4, and has not been retrofitted to older Pixel devices due to hardware limitations. In April 2021, Google Camera v8.2 introduced HDR+ with Bracketing, Night Sight with Bracketing and Portrait Mode with Bracketing. Google updated their exposure bracketing algorithm for HDR+ to include an additional long-exposure frame, and Night Sight to include 3 long-exposure frames. The spatial merge algorithm
