In photography and videography, multi-exposure HDR capture is a technique that creates high dynamic range (HDR) images (or extended dynamic range images) by taking and combining multiple photographs of the same subject matter, each at a different exposure. Combining multiple images in this way results in an image with a greater dynamic range than is possible with a single exposure. The technique can also be used to capture video by taking and combining multiple exposures for each frame. The term "HDR" is frequently used to refer to the process of creating HDR images from multiple exposures. Many smartphones have an automated HDR feature that relies on computational imaging techniques to capture and combine multiple exposures.
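The core of the technique can be illustrated with a minimal merge of bracketed exposures into a linear radiance estimate. The following Python sketch is illustrative only: it assumes the frames are already aligned, gamma-decoded to linear values in [0, 1], and paired with their exposure times, and the weighting function and names are not taken from any particular implementation.

    import numpy as np

    def merge_exposures(frames, exposure_times):
        """Merge aligned, linear LDR frames into one HDR radiance estimate.

        frames          -- list of float arrays in [0, 1], same shape
        exposure_times  -- matching list of exposure times in seconds
        """
        numerator = np.zeros_like(frames[0], dtype=np.float64)
        denominator = np.zeros_like(frames[0], dtype=np.float64)
        for img, t in zip(frames, exposure_times):
            # Trust mid-tones most; near-black and near-white pixels carry
            # little reliable information in that particular exposure.
            weight = 1.0 - np.abs(2.0 * img - 1.0)
            numerator += weight * img / t      # rescale toward scene radiance
            denominator += weight
        return numerator / np.maximum(denominator, 1e-6)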
Google Tensor is a series of ARM64-based system-on-chip (SoC) processors designed by Google for its Pixel devices. It was originally conceptualized in 2016, following the introduction of the first Pixel smartphone, though actual developmental work did not enter full swing until 2020. The first-generation Tensor chip debuted on the Pixel 6 smartphone series in 2021, and was succeeded by
a big.LITTLE configuration; but it will run only in AArch32 mode. ARMv8-A includes VFPv3/v4 and Advanced SIMD (Neon) as standard features in both AArch32 and AArch64. It also adds cryptography instructions supporting AES, SHA-1/SHA-256 and finite field arithmetic. An ARMv8-A processor can support one or both of AArch32 and AArch64; it may support AArch32 and AArch64 at lower Exception levels and only AArch64 at higher Exception levels. For example,
201-553: A "standout feature", though his colleague David Lumb described the chip's performance as "strong but not class-leading". ARM64 AArch64 or ARM64 is the 64-bit Execution state of the ARM architecture family . It was first introduced with the Armv8-A architecture, and has had many extension updates. Extension: Data gathering hint (ARMv8.0-DGH). AArch64 was introduced in ARMv8-A and
a 64-bit hypervisor. ARM announced their Cortex-A53 and Cortex-A57 cores on 30 October 2012. Apple was the first to release an ARMv8-A compatible core (Cyclone) in a consumer product (iPhone 5S). AppliedMicro, using an FPGA, was the first to demo ARMv8-A. The first ARMv8-A SoC from Samsung is the Exynos 5433 used in the Galaxy Note 4, which features two clusters of four Cortex-A57 and Cortex-A53 cores in
an HDR+ mode for the Nexus 5 and Nexus 6 smartphones in 2014, which automatically captures a series of images and combines them into a single still image, as detailed by Marc Levoy. Unlike traditional HDR, Levoy's implementation of HDR+ uses multiple images deliberately underexposed with a short shutter speed, which are then aligned and averaged per pixel, improving dynamic range and reducing noise. By selecting
a JPEG file. The Canon PowerShot G12, Canon PowerShot S95, and Canon PowerShot S100 offer similar features in a smaller format. Nikon's approach is called 'Active D-Lighting', which applies exposure compensation and tone mapping to the image as it comes from the sensor, with the emphasis on creating a realistic effect. Some smartphones provide HDR modes for their cameras, and most mobile platforms have apps that provide multi-exposure HDR picture taking. Google released
a Tensor-powered successor to the Pixelbook laptop with a planned 2023 release had been canceled due to cost-cutting measures. "Tensor" is a reference to Google's TensorFlow and Tensor Processing Unit technologies, and the chip is developed by the Google Silicon team housed within the company's hardware division, led by vice president and general manager Phil Carmack alongside senior director Monika Gupta, in conjunction with
a human sees when looking at the subject. This technique can be applied to produce images that preserve local contrast for a natural rendering, or exaggerate local contrast for artistic effect. HDR is useful for recording many real-world scenes containing a wider range of brightness than can be captured directly, typically both bright, direct sunlight and deep shadows. Due to the limitations of printing and display contrast,
a limited exposure range (low dynamic range, LDR) may lose detail in highlights or shadows. Modern CMOS image sensors have improved dynamic range and can often capture a wider range of tones in a single exposure, reducing the need to perform multi-exposure HDR. Color film negatives and slides consist of multiple film layers that respond to light differently. Original film (especially negatives versus transparencies or slides) features
a mobile SoC able to perform computational multi-exposure HDR video capture in 4K and to record it in a format compatible with HDR displays. In 2021, the Xiaomi Mi 11 Ultra smartphone became able to perform computational multi-exposure HDR for video capture. HDR capture can also be implemented on surveillance cameras, even inexpensive models; this is usually termed a wide dynamic range (WDR) function. Examples include the CarCam Tiny, Prestige DVR-390, and DVR-478. The idea of using several exposures to adequately reproduce
a much wider dynamic range (multiple channels). For that purpose, they do not use integer values to represent the single color channels (e.g., 0–255 in an 8-bit-per-pixel interval for red, green and blue) but instead use a floating point representation. Common values are 16-bit (half precision) or 32-bit floating-point numbers to represent HDR pixels. However, when the appropriate transfer function
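As a rough illustration of why integer channels clip while floating-point channels do not, the following Python sketch stores a bright scene value in an 8-bit integer channel and in a 16-bit half-precision channel; the values are chosen only for demonstration and assume diffuse white is normalized to 1.0.

    import numpy as np

    # A linear scene luminance 8x brighter than diffuse white (1.0).
    luminance = 8.0

    # 8-bit integer channel: anything above 1.0 must be clipped before scaling.
    as_uint8 = np.uint8(np.clip(luminance, 0.0, 1.0) * 255)   # -> 255, highlight detail lost

    # 16-bit half-precision float channel: the value survives merging and grading.
    as_half = np.float16(luminance)                            # -> 8.0

    print(as_uint8, as_half)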
a patent on this concept in 1991, and several related patents in 1992 and 1993. In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera, which combined two images captured successively by a sensor or simultaneously by two sensors of the camera. This process, known as bracketing, was used for a video stream. In 1991, the first commercial video camera performing real-time capture of multiple images with different exposures and producing an HDR video image was introduced by Hymatom, licensee of Georges Cornuéjols. Also in 1991, Georges Cornuéjols introduced
a reasonable HDR image manually in software by rearranging the image layers to merge in order of their actual luminosity. Because of the nonlinearity of some sensors, image artifacts can be common. Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and color calibration affect the resulting high-dynamic-range images. High-dynamic-range photographs are generally composites of multiple standard dynamic range images, often captured using exposure bracketing. Afterwards, photo manipulation software merges
a staggered-blur strobe effect due to the merged images not being identical. Unless the subject is static and the camera is mounted on a tripod, there may be a tradeoff between extended dynamic range and sharpness. Sudden changes in the lighting conditions (such as strobed LED light) can also interfere with the desired results by producing one or more HDR layers that do not have the luminosity expected by an automated HDR system, though one might still be able to produce
a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky and another one with a longer exposure for the sea, and combined the two into one picture in positive. Manual tone mapping
a very high dynamic range (on the order of 8 for negatives and 4 to 4.5 for positive transparencies). Multi-exposure HDR is used in photography and also in extreme dynamic range applications such as welding or automotive work. In security cameras, the term "wide dynamic range" is used instead of HDR. A fast-moving subject, or camera movement between the multiple exposures, will generate a "ghost" effect or
a video in order to increase the dynamic range captured by the camera. This can be done via multiple methods: some cameras designed for use in security applications can automatically provide two or more images for each frame, with changing exposure. For example, a sensor for 30 fps video will give out 60 fps, with the odd frames at a short exposure time and the even frames at a longer exposure time. In 2020, Qualcomm announced the Snapdragon 888,
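As a sketch of how such an alternating-exposure stream might be turned back into HDR frames, the following Python fragment pairs each short-exposure frame with the following long-exposure frame and merges the pair with a simple mid-tone-weighted average. It is a simplified illustration under assumed names and conventions, not the scheme used by any particular sensor.

    import numpy as np

    def merge_pair(short_frame, long_frame, t_short, t_long):
        """Merge one short/long exposure pair (aligned, linear, values in [0, 1])
        into a single higher-dynamic-range frame."""
        weight_short = 1.0 - np.abs(2.0 * short_frame - 1.0)   # distrust clipped pixels
        weight_long = 1.0 - np.abs(2.0 * long_frame - 1.0)
        radiance = (weight_short * short_frame / t_short +
                    weight_long * long_frame / t_long)
        return radiance / np.maximum(weight_short + weight_long, 1e-6)

    def merge_alternating_stream(frames, t_short, t_long):
        """frames alternates short, long, short, long, ...; returns ~half as many HDR frames."""
        return [merge_pair(s, l, t_short, t_long)
                for s, l in zip(frames[0::2], frames[1::2])]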
is a digital sensor or film. Outside this range, tonal information is lost and no features are visible; tones that exceed the range are "burned out" and appear pure white in the brighter areas, while tones that fall below the range are "crushed" and appear pure black in the darker areas. The ratio between the maximum and the minimum tonal values that can be captured in a single image is known as the dynamic range. In photography, dynamic range
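Expressed in stops, this ratio is simply its base-2 logarithm. A back-of-the-envelope Python example, using a made-up sensor value, follows.

    import math

    # Hypothetical sensor: the brightest recordable tone is 4096x the darkest
    # tone it can distinguish from noise.
    contrast_ratio = 4096
    dynamic_range_stops = math.log2(contrast_ratio)   # 12.0 stops (EV)
    print(dynamic_range_stops)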
is an example of a scene with a very wide dynamic range. Several software applications are available on the PC, Mac, and Linux platforms for producing HDR files and tone mapped images. Several camera manufacturers offer built-in multi-exposure HDR features. For example, the Pentax K-7 DSLR has an HDR mode that makes 3 or 5 exposures and outputs (only) a tone-mapped HDR image in
is complementary to, and does not replace, the NEON extensions. A 512-bit SVE variant has already been implemented on the Fugaku supercomputer using the Fujitsu A64FX ARM processor; this computer was the fastest supercomputer in the world for two years, from June 2020 to May 2022. A more flexible version, 2x256 SVE, was implemented by the AWS Graviton3 ARM processor. SVE is supported by
is difficult to quantify using synthetic benchmarks, but should instead be characterized by the many ML capabilities it enables, such as advanced speech recognition, real-time language translation, the ability to unblur photographs, and HDR-like frame-by-frame processing for videos. 5x Cortex-A725 (performance cores) at 2.85 GHz; 2x Cortex-A520 (efficiency cores) at 2.40 GHz. The first-generation Tensor chip debuted on
is included in subsequent versions of ARMv8-A. It was also introduced in ARMv8-R as an option, after its introduction in ARMv8-A; it is not included in ARMv8-M. The main opcode for selecting which group an A64 instruction belongs to is at bits 25–28. Announced in October 2011, ARMv8-A represents a fundamental change to the ARM architecture. It adds an optional 64-bit Execution state, named "AArch64", and
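To make the bit positions concrete, a small Python sketch that pulls this top-level group field out of a 32-bit instruction word might look as follows; the example instruction value is given purely for illustration.

    def a64_group_bits(insn_word: int) -> int:
        """Return bits 25-28 of a 32-bit A64 instruction word,
        the field used to select the top-level instruction group."""
        return (insn_word >> 25) & 0xF

    # Example: 0x91000420 encodes ADD X0, X1, #1 in A64.
    # Prints 0b1000, which falls in the data-processing (immediate) group.
    print(bin(a64_group_bits(0x91000420)))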
is measured in exposure value (EV) differences, also known as stops. The human eye's response to light is non-linear: halving the light level does not halve the perceived brightness of a space; it makes it look only slightly dimmer. For most illumination levels, the response is approximately logarithmic. Human eyes adapt fairly rapidly to changes in light levels. HDR can thus produce images that look more like what
is often applied to HDR files by the same software package. Tone mapping is often needed because the dynamic range that can be displayed is often lower than the dynamic range of the captured or processed image. HDR displays can receive a higher dynamic range signal than SDR displays, reducing the need for tone mapping. HDR can be done via several methods. This is an example of four standard dynamic range images that are combined to produce three resulting tone-mapped images. This
is used, HDR pixels for some applications can be represented with a color depth that has as few as 10 to 12 bits (1024 to 4096 values) for luminance and 8 bits (256 values) for chrominance without introducing any visible quantization artifacts. Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping
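One of the simplest global tone-mapping operators, given here only as an illustrative Python sketch (this is the classic Reinhard curve, not a method attributed to any software named in this article), compresses scene luminance L into [0, 1) as L / (1 + L).

    import numpy as np

    def reinhard_tonemap(hdr_luminance: np.ndarray) -> np.ndarray:
        """Compress linear HDR luminance into [0, 1) for display.

        Very bright values approach 1.0 smoothly instead of clipping,
        while dark and mid-tone values are left nearly unchanged."""
        return hdr_luminance / (1.0 + hdr_luminance)

    # Example: scene luminances spanning roughly 13 stops.
    scene = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
    print(reinhard_tonemap(scene))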
the GCC compiler, with GCC 8 supporting automatic vectorization and GCC 10 supporting C intrinsics. As of July 2020, LLVM and Clang support C and IR intrinsics. ARM's own fork of LLVM supports auto-vectorization. In October 2016, ARMv8.3-A was announced. Its enhancements fell into six categories. The ARMv8.3-A architecture is now supported by (at least) the GCC 7 compiler. In November 2017, ARMv8.4-A
the Pixel Neural Core on the Pixel 4 series. By April 2020, the company had made "significant progress" toward a custom ARM-based processor for its Pixel and Chromebook devices, codenamed "Whitechapel". At Google parent company Alphabet Inc.'s quarterly earnings investor call that October, Pichai expressed excitement at the company's "deeper investments" in hardware, which some interpreted as an allusion to Whitechapel. The Neural Core
the space shuttle at night that were digitally composited with additional digital graphic elements. The image was first exhibited at NASA Headquarters Great Hall, Washington DC, in 1999 and then published in Hasselblad Forum. The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented
the ARM Cortex-A32 supports only AArch32, the ARM Cortex-A34 supports only AArch64, and the ARM Cortex-A72 supports both AArch64 and AArch32. An ARMv9-A processor must support AArch64 at all Exception levels, and may support AArch32 at EL0. In December 2014, ARMv8.1-A, an update with "incremental benefits over v8.0", was announced. The enhancements fell into two categories: changes to the instruction set, and changes to
the Google Research division. Tensor's microarchitecture consists of two large cores, two medium cores, and four small cores; this arrangement is unusual for octa-core SoCs, which typically have only one large core. Carmack explained that this was so Tensor could remain efficient at intense workloads by running both large cores simultaneously at a low frequency to manage the various co-processors. Osterloh has stated that Tensor's performance
the HDR operator's judgment, experience, and training, but usually fusion is performed automatically by software. Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for
the HDR+ image principle by non-linear accumulation of images to increase the sensitivity of the camera: for low-light environments, several successive images are accumulated, thus increasing the signal-to-noise ratio. In 1993, another commercial medical camera producing an HDR video image was introduced by the Technion. Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across
the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took five days to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow. Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in
the Pixel 6 and Pixel 6 Pro, which were officially announced in October 2021 at the Pixel Fall Launch event. It was later reused for the Pixel 6a, a mid-range variant of the Pixel 6 series, which was announced in July 2022. Despite being marketed as developed by Google, close-up examinations revealed that the chip shares numerous similarities with Samsung's Exynos series. A second-generation Tensor chip
the Tensor G2 chip in 2022, G3 in 2023, and G4 in 2024. Tensor has been generally well received by critics. Development on a Google-designed system-on-chip (SoC) first began in April 2016, after the introduction of the company's first Pixel smartphone, although Google CEO Sundar Pichai and hardware chief Rick Osterloh agreed it would likely take an extended period of time before the product
the amount of light applied to the light-sensitive detector, whether film or a digital sensor such as a CCD. An increase or decrease of one stop is defined as a doubling or halving of the amount of light captured. Revealing detail in the darkest of shadows requires an increased EV, while preserving detail in very bright situations requires very low EVs. EV is controlled using one of two photographic controls: varying either
the associated new "A64" instruction set, in addition to a 32-bit Execution state, "AArch32", supporting the 32-bit "A32" (original 32-bit Arm) and "T32" (Thumb/Thumb-2) instruction sets. The latter instruction sets provide user-space compatibility with the existing 32-bit ARMv7-A architecture. ARMv8-A allows 32-bit applications to be executed in a 64-bit OS, and a 32-bit OS to be under the control of
the chip, named Tensor, in August, as part of a preview of its Pixel 6 and Pixel 6 Pro smartphones. Previous Pixel smartphones had used Qualcomm Snapdragon chips, with 2021's Pixel 5a being the final Pixel phone to do so. Pichai later obliquely noted that the development of Tensor and the Pixel 6 resulted in more off-the-shelf solutions for Pixel phones released in 2020 and early 2021. In September 2022, The Verge reported that
the correct scene exposure), while similar tonal information from highlight areas can be recovered from images that are deliberately underexposed (negative EV). The process of selecting and extracting shadow and highlight information from these over/underexposed images and then combining them with image(s) that are exposed correctly for the overall scene is known as exposure fusion. Exposure fusion can be performed manually, relying on
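In software, exposure fusion is typically driven by per-pixel quality weights. The Python sketch below uses only a "well-exposedness" weight (how close each pixel is to mid-grey) and blends the bracketed frames directly; it deliberately omits the contrast and saturation terms and the multi-scale blending used by full implementations, so it is an illustration of the idea rather than a complete algorithm.

    import numpy as np

    def fuse_exposures(frames, sigma=0.2):
        """Blend differently exposed, aligned frames (values in [0, 1]).

        Pixels near mid-grey (0.5) in a given frame are trusted most;
        blown highlights and crushed shadows get weights near zero."""
        weights = [np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2)) for img in frames]
        total = np.maximum(sum(weights), 1e-6)
        return sum(w * img for w, img in zip(weights, frames)) / total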
the cover of Life magazine in the mid-1950s. Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of the HDR video image in 1986, by interposing a matricial LCD screen in front of the camera's image sensor, increasing the sensor's dynamic range by five stops. The concept of neighborhood tone mapping was applied to video cameras in 1988 by a group from the Technion in Israel, led by Oliver Hilsenrath and Yehoshua Y. Zeevi. Technion researchers filed for
the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which prominently features dodging and burning, in the context of his Zone System. With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods. Color film capable of directly recording high-dynamic-range images
the dynamic range of the capturing medium. With a limited dynamic range, tonal differences can be captured only within a certain range of brightness. Outside of this range, no details can be distinguished: when the tone being captured exceeds the range in bright areas, these tones appear as pure white, and when the tone being captured does not meet the minimum threshold, these tones appear as pure black. Images captured with non-HDR cameras that have
the entire image), and then tone mapping the result. Global HDR was first introduced in 1993, resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard. On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (high dynamic range + graphic) images of STS-95 on the launch pad at NASA's Kennedy Space Center. It consisted of four film images of
the exception model and memory translation. Instruction set enhancements included the following. Enhancements for the exception model and memory translation system included the following. In January 2016, ARMv8.2-A was announced. Its enhancements fell into four categories. The Scalable Vector Extension (SVE) is "an optional extension to the ARMv8.2-A architecture and newer" developed specifically for vectorization of high-performance computing scientific workloads. The specification allows for variable vector lengths to be implemented from 128 to 2048 bits. The extension
the extended dynamic range of HDR images must be compressed to the range that can be displayed. The method of rendering a high dynamic range image to a standard monitor or printing device is called tone mapping; it reduces the overall contrast of an HDR image to permit display on devices or prints with lower dynamic range. One aim of HDR is to present a similar range of luminance to that experienced through
the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision and other image processing operations. In February 2001, the Dynamic Ranger technique was demonstrated, using multiple photos with different exposure levels to accomplish high dynamic range similar to
the global-HDR method for producing digital images having extended dynamic range at the MIT Media Lab. Mann's method involved a two-step procedure: first, generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); second, convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by
the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed using mathematical functions such as power laws or logarithms, or stored as floating point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges. HDR images often do not use fixed ranges per color channel, unlike traditional images, to represent many more colors over
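For a concrete sense of what a power-law (gamma) encoding does, the short Python sketch below encodes linear light with the commonly used exponent 1/2.2 and decodes it back; the exponent is a widespread approximation chosen for illustration, not a value taken from this article.

    def gamma_encode(linear: float, gamma: float = 2.2) -> float:
        """Map linear light in [0, 1] to a perceptually more uniform code value."""
        return linear ** (1.0 / gamma)

    def gamma_decode(encoded: float, gamma: float = 2.2) -> float:
        """Invert the encoding back to linear light."""
        return encoded ** gamma

    # 18% grey (roughly mid-scene reflectance) lands near the middle of the code range.
    print(gamma_encode(0.18))   # ~0.46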
the human visual system. The human eye, through non-linear response, adaptation of the iris, and other methods, adjusts constantly to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions. Most cameras are limited to a much narrower range of exposure values within a single image, due to
the input files into a single HDR image, which is then also tone mapped in accordance with the limitations of the planned output or display. Any camera that allows manual exposure control can perform multi-exposure HDR image capture, although one equipped with automatic exposure bracketing (AEB) facilitates the process. Some cameras have an AEB feature that spans a far greater dynamic range than others, from ±0.6 EV in simpler cameras to ±18 EV in top professional cameras, as of 2020. The exposure value (EV) refers to
the introduction of optional AArch64 support in the Armv8-R profile, the real-time capabilities have been further enhanced. The Cortex-R82 is the first processor to implement this extended support, bringing several new features and improvements to the real-time domain. Multi-exposure HDR capture: A single image captured by a camera provides a finite range of luminosity inherent to the medium, whether it
the most fluid and fastest performance" on a smartphone, though Android Authority's Jimmy Westenberg was ambivalent. Ryne Hager of Android Police thought the chip's performance was acceptable to the everyday user, but was disappointed that Google did not offer more years of Android updates given it was no longer bound by Qualcomm's contractual terms. TechRadar reviewer James Peckham commended Tensor as
the naked eye. In the early 2000s, several scholarly research efforts used consumer-grade sensors and cameras. A few companies such as RED and Arri have been developing digital sensors capable of a higher dynamic range. RED EPIC-X can capture time-sequential HDRx images with a user-selectable 1–3 stops of additional highlight latitude in the "x" channel. The "x" channel can be merged with
the normal channel in post-production software. The Arri Alexa camera uses a dual-gain architecture to generate an HDR image from two exposures captured at the same time. With the advent of low-cost consumer digital cameras, many amateurs began posting tone-mapped HDR time-lapse videos on the Internet, essentially a sequence of still photographs in quick succession. In 2010, the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using
the photographer must capture three or more images to obtain the desired luminance range, taking such a full set of images takes extra time. Photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is advised to minimize framing differences between exposures. Tonal information and details from shadow areas can be recovered from images that are deliberately overexposed (i.e., with positive EV compared to
the sharpest image as the baseline for alignment, the effect of camera shake is reduced. Some of the sensors on modern phones and cameras may combine two images on-chip so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing. Although not as established as for still photography capture, it is also possible to capture and combine multiple images for each frame of
the size of the aperture or the exposure time. A set of images with multiple EVs intended for HDR processing should be captured only by altering the exposure time; altering the aperture size would also affect the depth of field, and so the resultant multiple images would be quite different, preventing their final combination into a single HDR image. Multi-exposure HDR photography is generally limited to still scenes, because any movement between successive images will impede or prevent success in combining them afterward. Also, because
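A quick way to see why shutter time alone is varied is the standard exposure value relation EV = log2(N^2 / t), for f-number N and shutter time t in seconds at base ISO: the aperture term can stay fixed (preserving depth of field) while t alone moves the exposure in whole stops. The Python lines below work through a hypothetical three-shot bracket at f/8 with idealized shutter times.

    import math

    def exposure_value(aperture_n: float, shutter_s: float) -> float:
        """EV at base ISO for f-number N and shutter time t: EV = log2(N^2 / t)."""
        return math.log2(aperture_n ** 2 / shutter_s)

    # Hypothetical 3-shot bracket at f/8: only the shutter time changes,
    # so depth of field is identical while exposure moves in 2-stop steps.
    for t in (1 / 512, 1 / 128, 1 / 32):
        print(f"1/{round(1 / t)} s  ->  EV {exposure_value(8, t):.1f}")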
was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at
was announced. In October 2024, ARMv9.6-A was announced. The ARM-R architecture, specifically the Armv8-R profile, is designed to address the needs of real-time applications, where predictable and deterministic behavior is essential. This profile focuses on delivering high performance, reliability, and efficiency in embedded systems where real-time constraints are critical. With
was announced. Its enhancements fell into these categories. In September 2018, ARMv8.5-A was announced. Its enhancements fell into these categories. On 2 August 2019, Google announced Android would adopt the Memory Tagging Extension (MTE). In March 2021, ARMv9-A was announced. ARMv9-A's baseline is all the features from ARMv8.5. ARMv9-A also adds further features. In September 2019, ARMv8.6-A
was announced. Its enhancements fell into these categories: for example, fine-grained traps, Wait-for-Event (WFE) instructions, EnhancedPAC2 and FPAC. The bfloat16 extensions for SVE and Neon are mainly for deep learning use. In September 2020, ARMv8.7-A was announced. Its enhancements fell into these categories. In September 2021, ARMv8.8-A and ARMv9.3-A were announced. Their enhancements fell into these categories. LLVM 15 supports ARMv8.8-A and ARMv9.3-A. In September 2022, ARMv8.9-A and ARMv9.4-A were announced. In October 2023, ARMv9.5-A
was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force". This XR film had three emulsion layers: an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color. The dynamic range of this extended range film has been estimated as 1:10^8. It has been used to photograph nuclear explosions, for astronomical photography, for spectrographic research, and for medical imaging. Wyckoff's detailed pictures of nuclear explosions appeared on
was in development by October 2021, codenamed "Cloudripper". At the annual Google I/O keynote in May 2022, Google announced that the chip would debut on the Pixel 7 and Pixel 7 Pro smartphones, which were officially announced on October 6 at the annual Made by Google event. The chip is marketed as "Google Tensor G2". The chip was also used to power the Pixel 7a, the Pixel Fold foldable smartphone, and the Pixel Tablet, which
was not included on the Pixel 5, which was released in 2020; Google explained that the phone's Snapdragon 765G SoC already achieved the camera performance the company had been aiming for. In April 2021, 9to5Google reported that Whitechapel would power Google's next Pixel smartphones. Google was also in talks to acquire Nuvia prior to its acquisition by Qualcomm in 2021. Google officially unveiled
was ready. The next year, the company's hardware division assembled a team of 76 semiconductor researchers specializing in artificial intelligence (AI) and machine learning (ML), which has since increased in size, to work on the chip. Beginning in 2017, Google included custom-designed co-processors in its Pixel smartphones, namely the Pixel Visual Core on the Pixel 2 and Pixel 3 series and
was unveiled in May 2023 during the annual I/O keynote. Samsung had begun testing the Tensor G3, codenamed "Zuma", by August 2022. Announced in October 2023, the chip was used to power the Pixel 8, Pixel 8 Pro, and later the Pixel 8a. The Information reported in July 2023 that Google had initiated development on Tensor G5, codenamed "Laguna", which was to be designed fully in-house, manufactured by TSMC instead of Samsung, and built on TSMC's 3 nm process. At launch, Tensor
was well received. Philip Michaels of Tom's Guide praised the Pixel 6 and Pixel 6 Pro's Tensor-powered features and video enhancements, as did Marques Brownlee and Wired's Julian Chokkattu. Chokkattu's colleague Lily Hay Newman also highlighted the chip's security capabilities, declaring them Tensor's strongest selling point. Jacob Krol of CNN Underscored wrote that Tensor delivered "some of