
LAME

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.

In computational complexity theory, a computational resource is a resource used by some computational models in the solution of computational problems.


LAME is a software encoder that converts digital audio into the MP3 audio coding format. LAME is a free software project that was first released in 1998 and has incorporated many improvements since then, including an improved psychoacoustic model. The LAME encoder outperforms early encoders like L3enc and possibly the "gold standard encoder" MP3enc, both marketed by Fraunhofer.

A form of LPC called adaptive predictive coding (APC), a perceptual coding algorithm that exploited the masking properties of the human ear, was followed in the early 1980s by the code-excited linear prediction (CELP) algorithm, which achieved a significant compression ratio for its time. Perceptual coding is used by modern audio compression formats such as MP3 and AAC. Discrete cosine transform (DCT), developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, provided

In a further refinement of the direct use of probabilistic modelling, statistical estimates can be coupled to an algorithm called arithmetic coding. Arithmetic coding is a more modern coding technique that uses the mathematical calculations of a finite-state machine to produce a string of encoded bits from a series of input data symbols. It can achieve superior compression compared to other techniques such as

A lossily compressed file for some purpose usually produces a final result inferior to the creation of the same compressed file from an uncompressed original. In addition to sound editing or mixing, lossless audio compression is often used for archival storage, or as master copies. Lossy audio compression is used in a wide range of applications. In addition to standalone audio-only applications of file playback in MP3 players or computers, digitally compressed audio streams are used in most video DVDs, digital television, streaming media on

A lossy format and a lossless correction; this allows stripping the correction to easily obtain a lossy file. Such formats include MPEG-4 SLS (Scalable to Lossless), WavPack, and OptimFROG DualStream. When audio files are to be processed, either by further compression or for editing, it is desirable to work from an unchanged original (uncompressed or losslessly compressed). Processing of

A new psychoacoustic model he developed. A number of key improvements have been made since LAME 3.x. Like all MP3 encoders, LAME implemented techniques covered by patents owned by the Fraunhofer Society and others. The developers of LAME did not license the technology described by these patents. Distributing compiled binaries of LAME, its libraries, or programs that derive from LAME in countries where those patents have been granted may have constituted infringement, but since 23 April 2017, all of these patents have expired. The LAME developers stated that, since their code

A number of companies, because the inventor refused to obtain patents for his work, preferring instead to declare it in the public domain and publish it.

Computational resource

The simplest computational resources are computation time, the number of steps necessary to solve a problem, and memory space, the amount of storage needed while solving the problem, but many more complicated resources have been defined. A computational problem

A representation of digital data that can be decoded to an exact digital duplicate of the original. Compression ratios are around 50–60% of the original size, which is similar to those for generic lossless data compression. Lossless codecs use curve fitting or linear prediction as a basis for estimating the signal. Parameters describing the estimation and the difference between the estimation and

As a result, speech can be encoded at high quality using a relatively low bit rate. This is accomplished, in general, by some combination of two approaches. The earliest algorithms used in speech encoding (and audio data compression in general) were the A-law algorithm and the μ-law algorithm. Early audio research was conducted at Bell Labs. There, in 1950, C. Chapin Cutler filed the patent on differential pulse-code modulation (DPCM). In 1973, Adaptive DPCM (ADPCM)
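The μ-law algorithm is a companding curve: it spaces quantization levels logarithmically, so quiet passages retain more resolution than loud ones. Below is a minimal numpy sketch of the curve and its inverse (the function names are ours; a real codec additionally quantizes the companded value, typically to 8 bits):

```python
import numpy as np

MU = 255  # parameter used by the North American 8-bit mu-law standard

def mu_law_compress(x, mu=MU):
    """Compand samples in [-1, 1]: small amplitudes get a larger share of the range."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=MU):
    """Invert the companding curve."""
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

x = np.linspace(-1.0, 1.0, 9)
assert np.allclose(mu_law_expand(mu_law_compress(x)), x)  # companding is invertible
```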

A set of modifications against the 8Hz-MP3 encoder source code. After some quality concerns were raised by others, he decided to start again from scratch based on the dist10 MPEG reference software sources. His goal was only to speed up the dist10 sources, and leave its quality untouched. That branch (a patch against the reference sources) became LAME 2.0. The project quickly became a team project. Mike Cheng eventually left leadership and started working on tooLAME (an MP2 encoder). Mark Taylor then started pursuing increased quality in addition to better speed, and released version 3.0 featuring gpsycho,

A special case of data differencing. Data differencing consists of producing a difference given a source and a target, with patching reproducing the target given a source and a difference. Since there is no separate source and target in data compression, one can consider data compression as data differencing with empty source data, the compressed file corresponding to a difference from nothing. This


A zip file's compressed size includes both the zip file and the unzipping software, since you cannot unzip it without both, but there may be an even smaller combined form. Examples of AI-powered audio/video compression software include NVIDIA Maxine and AIVC. Examples of software that can perform AI-powered image compression include OpenCV, TensorFlow, MATLAB's Image Processing Toolbox (IPT) and High-Fidelity Generative Image Compression. In unsupervised machine learning, k-means clustering can be utilized to compress data by grouping similar data points into clusters. This technique simplifies handling extensive datasets that lack predefined labels and finds widespread use in fields such as image compression. Data compression aims to reduce
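As a sketch of that idea, k-means can act as a vector quantizer: each pixel is replaced by the index of its nearest centroid, so an image is stored as a small palette plus one index per pixel. A minimal numpy version (the function name and random test data are ours):

```python
import numpy as np

def kmeans_quantize(points, k, iters=20, seed=0):
    """Cluster points and return (labels, centroids); storing labels plus
    centroids instead of the raw points is the compression step."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

pixels = np.random.default_rng(1).random((10_000, 3))   # 10,000 RGB values
labels, palette = kmeans_quantize(pixels, k=16)          # 16-colour palette + indices
```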

Is distinguished as a separate discipline from general-purpose audio compression. Speech coding is used in internet telephony, for example, while audio compression is used for CD ripping and is decoded by the audio players. Lossy compression can cause generation loss. The theoretical basis for compression is provided by information theory and, more specifically, Shannon's source coding theorem; domain-specific theories include algorithmic information theory for lossless compression and rate–distortion theory for lossy compression. These areas of study were essentially created by Claude Shannon, who published fundamental papers on

Is generally defined in terms of its action on any valid input. Examples of problems might be "given an integer n, determine whether n is prime", or "given two numbers x and y, calculate the product x * y". As the inputs get bigger, the amount of computational resources needed to solve a problem will increase. Thus, the resources needed to solve a problem are described in terms of asymptotic analysis, by identifying
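To make "number of steps" concrete, here is an illustrative instrumented trial-division primality test (our own toy example): its step count grows roughly with the square root of the input, which is exactly the kind of statement asymptotic analysis formalizes.

```python
def is_prime(n):
    """Trial division with an explicit step counter."""
    steps = 0
    d = 2
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return False, steps
        d += 1
    return True, steps

for n in (101, 10_007, 1_000_003):
    print(n, *is_prime(n))   # step counts 9, 99, 999 -- scaling like sqrt(n)
```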

Is on the order of 23 ms. Speech encoding is an important category of audio data compression. The perceptual models used to estimate what aspects of speech a human ear can hear are generally somewhat different from those used for music. The range of frequencies needed to convey the sounds of a human voice is normally far narrower than that needed for music, and the sound is normally less complex.

Is perceptually irrelevant, most lossy compression algorithms use transforms such as the modified discrete cosine transform (MDCT) to convert time domain sampled waveforms into a transform domain, typically the frequency domain. Once transformed, component frequencies can be prioritized according to how audible they are. Audibility of spectral components is assessed using the absolute threshold of hearing and
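A minimal numpy sketch of the forward MDCT for one block, written straight from its definition (real codecs additionally apply a window and overlap consecutive blocks by 50% so the lapped transform can be inverted without artifacts):

```python
import numpy as np

def mdct(x):
    """Forward MDCT: 2N time samples in, N frequency coefficients out.
    X_k = sum_n x_n * cos[(pi/N) (n + 1/2 + N/2) (k + 1/2)]"""
    N = len(x) // 2
    n = np.arange(2 * N)
    k = np.arange(N)[:, None]
    return np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5)) @ x

block = np.sin(2 * np.pi * 5 * np.arange(64) / 64)  # 64 samples -> 32 coefficients
coeffs = mdct(block)
```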

Is processed. In the minimum case, latency is zero samples (e.g., if the coder/decoder simply reduces the number of bits used to quantize the signal). Time domain algorithms such as LPC also often have low latencies, hence their popularity in speech coding for telephony. In algorithms such as MP3, however, a large number of samples have to be analyzed to implement a psychoacoustic model in the frequency domain, and latency
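The latency figure follows directly from the frame size: a codec cannot emit a frame until it has buffered that many samples. A small illustration (the 1,152-sample figure is the MP3 frame size; 48 kHz is one common sample rate):

```python
def coding_latency_ms(frame_samples, sample_rate_hz):
    """Milliseconds of audio that must be buffered before one frame can be coded."""
    return 1000.0 * frame_samples / sample_rate_hz

print(coding_latency_ms(1152, 48_000))  # 24.0 ms, the order of the ~23 ms cited earlier
```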

Is reduced, using methods such as coding, quantization, DCT and linear prediction to reduce the amount of information used to represent the uncompressed data. Lossy audio compression algorithms provide higher compression and are used in numerous audio applications including Vorbis and MP3. These algorithms almost all rely on psychoacoustics to eliminate or reduce fidelity of less audible sounds, thereby reducing

Is the same as considering absolute entropy (corresponding to data compression) as a special case of relative entropy (corresponding to data differencing) with no initial data. The term differential compression is used to emphasize the data differencing connection. Entropy coding originated in the 1940s with the introduction of Shannon–Fano coding, the basis for Huffman coding which
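A compact sketch of Huffman's construction: repeatedly merge the two least frequent subtrees, so frequent symbols end up with short codes. The tie-breaking counter exists only so the heap never has to compare dictionaries (names and structure are our own illustration):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return a prefix-free code mapping each symbol to a bit string."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
bits = "".join(codes[ch] for ch in "abracadabra")  # 'a' gets the shortest code
```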

Is used in digital cameras to increase storage capacities. Similarly, DVDs, Blu-ray and streaming video use lossy video coding formats. Lossy compression is extensively used in video. In lossy audio compression, methods of psychoacoustics are used to remove non-audible (or less audible) components of the audio signal. Compression of human speech is often performed with even more specialized techniques; speech coding

Is used in the GIF format, introduced in 1987. DEFLATE, a lossless compression algorithm specified in 1996, is used in the Portable Network Graphics (PNG) format. Wavelet compression, the use of wavelets in image compression, began after the development of DCT coding. The JPEG 2000 standard was introduced in 2000. In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms. JPEG 2000 technology, which includes
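DEFLATE (LZ77 plus Huffman coding) is available in Python's standard zlib module, which makes the redundancy-removal effect easy to see on repetitive input:

```python
import zlib

data = b"red pixel, " * 1000                 # highly redundant input
packed = zlib.compress(data, level=9)        # DEFLATE, the algorithm inside PNG and zip
assert zlib.decompress(packed) == data       # lossless: the round trip is exact
print(len(data), "->", len(packed))          # 11000 -> a few dozen bytes
```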


The Internet, satellite and cable radio, and increasingly in terrestrial radio broadcasts. Lossy compression typically achieves far greater compression than lossless compression, by discarding less-critical data based on psychoacoustic optimizations. Psychoacoustics recognizes that not all data in an audio stream can be perceived by the human auditory system. Most lossy compression reduces redundancy by first identifying perceptually irrelevant sounds, that is, sounds that are very hard to hear. Typical examples include high frequencies or sounds that occur at

The Lempel–Ziv–Welch (LZW) algorithm rapidly became the method of choice for most general-purpose compression systems. LZW is used in GIF images, programs such as PKZIP, and hardware devices such as modems. LZ methods use a table-based compression model where table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in
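A minimal LZW encoder showing the table mechanics just described: the table starts with all single bytes and grows as input is read, so a decoder can rebuild it from the output without the table ever being transmitted (a sketch, not a production codec):

```python
def lzw_encode(data: bytes) -> list[int]:
    """Replace repeated strings with indices into a dynamically grown table."""
    table = {bytes([i]): i for i in range(256)}   # start with all single bytes
    w, out = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc                                # keep extending the current match
        else:
            out.append(table[w])                  # emit the longest known string
            table[wc] = len(table)                # learn the new string
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

codes = lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT")  # repeats compress to single indices
```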

The Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004. Audio data compression, not to be confused with dynamic range compression, has the potential to reduce the transmission bandwidth and storage requirements of audio data. Audio compression algorithms are implemented in software as audio codecs. In both lossy and lossless compression, information redundancy

The University of Buenos Aires. In 1983, using the psychoacoustic principle of the masking of critical bands first published in 1967, he started developing a practical application based on the recently developed IBM PC computer, and the broadcast automation system was launched in 1987 under the name Audicom. 35 years later, almost all the radio stations in the world were using this technology manufactured by

The discrete cosine transform (DCT). It was first proposed in 1972 by Nasir Ahmed, who then developed a working algorithm with T. Natarajan and K. R. Rao in 1973, before introducing it in January 1974. DCT is the most widely used lossy compression method, and is used in multimedia formats for images (such as JPEG and HEIF), video (such as MPEG, AVC and HEVC) and audio (such as MP3, AAC and Vorbis). Lossy image compression
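The type-II DCT at the heart of these formats can be written in a few lines of numpy; the compression itself then comes from quantizing the resulting coefficients, most of which are near zero for smooth signals (our own sketch, unnormalized):

```python
import numpy as np

def dct2(x):
    """Type-II DCT: X_k = sum_n x_n * cos[(pi/N) (n + 1/2) k]."""
    N = len(x)
    n = np.arange(N)
    k = np.arange(N)[:, None]
    return np.cos(np.pi / N * (n + 0.5) * k) @ x

signal = np.cos(2 * np.pi * 3 * np.arange(8) / 8)
coeffs = dct2(signal)   # energy concentrates in a few coefficients
```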

The linear predictive coding (LPC) used with speech, are source-based coders. LPC uses a model of the human vocal tract to analyze speech sounds and infer the parameters used by the model to produce them moment to moment. These changing parameters are transmitted or stored and used to drive another model in the decoder which reproduces the sound. Lossy formats are often used for the distribution of streaming audio or interactive communication (such as in cell phone networks). In such applications,
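At its core, LPC fits coefficients so that each sample is predicted from the previous few; only the coefficients and the (small) prediction error need to be stored or sent. A bare-bones numpy sketch using the autocorrelation normal equations (real coders use the numerically cheaper Levinson–Durbin recursion; names and test signal are ours):

```python
import numpy as np

def lpc_coefficients(s, order):
    """Fit a[i] so that s[n] ~= sum_i a[i] * s[n-1-i]."""
    r = np.array([s[: len(s) - lag] @ s[lag:] for lag in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])   # normal equations R a = r

rng = np.random.default_rng(0)
t = np.arange(400)
voiced = np.sin(2 * np.pi * t / 40) + 0.01 * rng.standard_normal(t.size)
a = lpc_coefficients(voiced, order=8)
```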

The probability distribution of the input data. An early example of the use of arithmetic coding was in an optional (but not widely used) feature of the JPEG image coding standard. It has since been applied in various other designs including H.263, H.264/MPEG-4 AVC and HEVC for video coding. Archive software typically has the ability to adjust the "dictionary size", where a larger size demands more random-access memory during compression and decompression, but achieves stronger compression, especially on repeating patterns in files' content.

The actual signal are coded separately. A number of lossless audio compression formats exist. See list of lossless codecs for a listing. Some formats are associated with a distinct system, such as Direct Stream Transfer, used in Super Audio CD and Meridian Lossless Packing, used in DVD-Audio, Dolby TrueHD, Blu-ray and HD DVD. Some audio file formats feature a combination of

The amount of data required to represent an image at the cost of a relatively small reduction in image quality and has become the most widely used image file format. Its highly efficient DCT-based compression algorithm was largely responsible for the wide proliferation of digital images and digital photos. Lempel–Ziv–Welch (LZW) is a lossless compression algorithm developed in 1984. It

The basis for the modified discrete cosine transform (MDCT) used by modern audio compression formats such as MP3, Dolby Digital, and AAC. MDCT was proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, following earlier work by Princen and Bradley in 1986. The world's first commercial broadcast automation audio compression system was developed by Oscar Bonello, an engineering professor at


The better-known Huffman algorithm. It uses an internal memory state to avoid the need to perform a one-to-one mapping of individual input symbols to distinct representations that use an integer number of bits, and it clears out the internal memory only after encoding the entire string of data symbols. Arithmetic coding applies especially well to adaptive data compression tasks where the statistics vary and are context-dependent, as it can be easily coupled with an adaptive model of
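A toy float-based sketch of the interval-narrowing idea behind arithmetic coding (production coders work with integer ranges and renormalization so they can emit bits incrementally and avoid precision loss; all names here are our own):

```python
from collections import Counter

def arith_encode(text):
    """Narrow [0, 1) once per symbol; any number in the final interval encodes text."""
    total = len(text)
    ranges, cum = {}, 0.0
    for sym, f in sorted(Counter(text).items()):
        ranges[sym] = (cum, cum + f / total)
        cum += f / total
    low, high = 0.0, 1.0
    for sym in text:
        width = high - low
        s_lo, s_hi = ranges[sym]
        low, high = low + width * s_lo, low + width * s_hi
    return (low + high) / 2, ranges, total

def arith_decode(value, ranges, total):
    out = []
    for _ in range(total):
        for sym, (s_lo, s_hi) in ranges.items():
            if s_lo <= value < s_hi:
                out.append(sym)
                value = (value - s_lo) / (s_hi - s_lo)  # rescale and repeat
                break
    return "".join(out)

value, ranges, n = arith_encode("ABBCAB")
assert arith_decode(value, ranges, n) == "ABBCAB"
```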

The coding algorithm can be critical; for example, when there is a two-way transmission of data, such as with a telephone conversation, significant delays may seriously degrade the perceived quality. In contrast to the speed of compression, which is proportional to the number of operations required by the algorithm, here latency refers to the number of samples that must be analyzed before a block of audio

The computational problems that can be solved using a certain amount of a certain computational resource is a complexity class, and relationships between different complexity classes are one of the most important topics in complexity theory. The term "computational resource" is commonly used to describe accessible computing equipment and software. See Utility computing. There has been some effort to formally quantify computing capability. A bounded Turing machine has been used to model specific computations using

The computational resources or time required to compress and decompress the data. Lossless data compression algorithms usually exploit statistical redundancy to represent data without losing any information, so that the process is reversible. Lossless compression is possible because most real-world data exhibits statistical redundancy. For example, an image may have areas of color that do not change over several pixels; instead of coding "red pixel, red pixel, ..."

The core information of the original data while significantly decreasing the required storage space. Large language models (LLMs) are also capable of lossless data compression, as demonstrated by DeepMind's research with the Chinchilla 70B model. Developed by DeepMind, Chinchilla 70B effectively compressed data, outperforming conventional methods such as Portable Network Graphics (PNG) for images and Free Lossless Audio Codec (FLAC) for audio. It achieved compression of image and audio data to 43.4% and 16.4% of their original sizes, respectively. Data compression can be viewed as

The course of the 2005 Sony BMG copy protection rootkit scandal, there were reports that the Extended Copy Protection rootkit included on some Sony compact discs had portions of the LAME library without complying with the terms of the LGPL.

Data compression

In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than

The data in question. For example, the human eye is more sensitive to subtle variations in luminance than it is to the variations in color. JPEG image compression works in part by rounding off nonessential bits of information. A number of popular compression formats exploit these perceptual differences, including psychoacoustics for sound, and psychovisuals for images and video. Most forms of lossy compression are based on transform coding, especially

The data may be encoded as "279 red pixels". This is a basic example of run-length encoding; there are many schemes to reduce file size by eliminating redundancy. The Lempel–Ziv (LZ) compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression can be slow. In the mid-1980s, following work by Terry Welch,
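The "279 red pixels" example is exactly run-length encoding; a few lines make the scheme and its inverse concrete:

```python
from itertools import groupby

def rle_encode(pixels):
    """Collapse each run of identical values into a (value, count) pair."""
    return [(value, sum(1 for _ in run)) for value, run in groupby(pixels)]

def rle_decode(runs):
    return [value for value, count in runs for _ in range(count)]

row = ["red"] * 279 + ["blue"] * 3
assert rle_decode(rle_encode(row)) == row
print(rle_encode(row))   # [('red', 279), ('blue', 3)]
```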

The data must be decompressed as the data flows, rather than after the entire data stream has been transmitted. Not all audio codecs can be used for streaming applications. Latency is introduced by the methods used to encode and decode the data. Some codecs will analyze a longer segment, called a frame, of the data to optimize efficiency, and then code it in a manner that requires a larger segment of data at one time to decode. The inherent latency of

The file size is reduced to 5–20% of the original size and a megabyte can store about a minute's worth of music at adequate quality. Several proprietary lossy compression algorithms have been developed that provide higher quality audio performance by using a combination of lossless and lossy algorithms with adaptive bit rates and lower compression ratios. Examples include aptX, LDAC, LHDC, MQA and SCL6. To determine what information in an audio signal


The input. The table itself is often Huffman encoded. Grammar-based codes like this can compress highly repetitive input extremely effectively, for instance, a biological data collection of the same or closely related species, a huge versioned document collection, internet archival, etc. The basic task of grammar-based codes is constructing a context-free grammar deriving a single string. Other practical grammar compression algorithms include Sequitur and Re-Pair; a simplified sketch of the latter follows. The strongest modern lossless compressors use probabilistic models, such as prediction by partial matching. The Burrows–Wheeler transform can also be viewed as an indirect form of statistical modelling.
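A sketch in the spirit of Re-Pair (greatly simplified; the real algorithm maintains priority queues to run in linear time): repeatedly replace the most frequent adjacent pair with a fresh grammar rule, so highly repetitive input collapses to a short string plus a small grammar.

```python
from collections import Counter

def repair_compress(seq, min_count=2):
    """Greedy pair replacement; returns (compressed sequence, grammar rules)."""
    seq, grammar, fresh = list(seq), {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < min_count:
            break
        rule = f"R{fresh}"
        fresh += 1
        grammar[rule] = pair
        out, i = [], 0
        while i < len(seq):               # rewrite left to right, non-overlapping
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(rule)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, grammar

compressed, rules = repair_compress("abababab")
# compressed == ['R1', 'R1']; rules == {'R0': ('a', 'b'), 'R1': ('R0', 'R0')}
```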

In the late 1980s, digital images became more common, and standards for lossless image compression emerged. In the early 1990s, lossy compression methods began to be widely used. In these schemes, some loss of information is accepted as dropping nonessential detail can save storage space. There is a corresponding trade-off between preserving information and reducing size. Lossy data compression schemes are designed by research on how people perceive

The means for mapping data onto a signal. Data compression algorithms present a space-time complexity trade-off between the bytes needed to store or transmit information, and the computational resources needed to perform the encoding and decoding. The design of data compression schemes involves balancing the degree of compression, the amount of distortion introduced (when using lossy data compression), and

The original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs

The principles of simultaneous masking—the phenomenon wherein a signal is masked by another signal separated by frequency—and, in some cases, temporal masking—where a signal is masked by another signal separated by time. Equal-loudness contours may also be used to weigh the perceptual importance of components. Models of the human ear-brain combination incorporating such effects are often called psychoacoustic models. Other types of lossy compressors, such as

The resources as a function of the length or size of the input. Resource usage is often partially quantified using Big O notation. Computational resources are useful because we can study which problems can be computed in a certain amount of each computational resource. In this way, we can determine whether algorithms for solving the problem are optimal and we can make statements about an algorithm's efficiency. The set of all of

The reversal of the process (decompression) as a decoder. The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding: encoding is done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding, for error detection and correction, or line coding,

The same time as louder sounds. Those irrelevant sounds are coded with decreased accuracy or not at all. Due to the nature of lossy algorithms, audio quality suffers a digital generation loss when a file is decompressed and recompressed. This makes lossy compression unsuitable for storing the intermediate results in professional audio engineering applications, such as sound editing and multitrack recording. However, lossy formats such as MP3 are very popular with end-users as

The size of data files, enhancing storage efficiency and speeding up data transmission. K-means clustering, an unsupervised machine learning algorithm, is employed to partition a dataset into a specified number of clusters, k, each represented by the centroid of its points. This process condenses extensive datasets into a more compact set of representative points. Particularly beneficial in image and signal processing, k-means clustering aids in data reduction by replacing groups of data points with their centroids, thereby preserving

The space required to store or transmit them. The acceptable trade-off between loss of audio quality and transmission or storage size depends upon the application. For example, one 640 MB compact disc (CD) holds approximately one hour of uncompressed high fidelity music, less than 2 hours of music compressed losslessly, or 7 hours of music compressed in the MP3 format at a medium bit rate. A digital sound recorder can typically store around 200 hours of clearly intelligible speech in 640 MB. Lossless audio compression produces
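Those disc-capacity figures fall out of simple bit-rate arithmetic; the 192 kbit/s "medium" MP3 rate below is our assumption, chosen because it reproduces the roughly 7-hour figure:

```python
cd_bits_per_sec = 2 * 16 * 44_100                 # stereo, 16-bit, 44.1 kHz CD audio
cd_mb_per_min = cd_bits_per_sec * 60 / 8 / 1e6    # ~10.6 MB per minute
print(640 / cd_mb_per_min)                        # ~60 minutes uncompressed on 640 MB

mp3_mb_per_min = 192_000 * 60 / 8 / 1e6           # assumed medium bit rate: 192 kbit/s
print(640 / mp3_mb_per_min / 60)                  # ~7.4 hours on the same disc
```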


The symbol that compresses best, given the previous history). This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence". An alternative view shows that compression algorithms implicitly map strings into feature space vectors, and compression-based similarity measures compute similarity within these feature spaces. For each compressor C(.) we define an associated vector space ℵ, such that C(.) maps an input string x, corresponding to
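One concrete compression-based similarity measure of this kind is the normalized compression distance, which needs nothing but an off-the-shelf compressor (the measure is named here for illustration; the text above does not name it):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for similar inputs, near 1 for unrelated."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"lorem ipsum dolor sit amet, consectetur adipiscing " * 20
print(ncd(a, a))   # ~0: identical inputs share all structure
print(ncd(a, b))   # closer to 1: little shared structure
```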

The topic in the late 1940s and early 1950s. Other topics associated with compression include coding theory and statistical inference. There is a close connection between machine learning and compression. A system that predicts the posterior probabilities of a sequence given its entire history can be used for optimal data compression (by using arithmetic coding on the output distribution). Conversely, an optimal compressor can be used for prediction (by finding

The vector norm ||x||. An exhaustive examination of the feature spaces underlying all compression algorithms is precluded by space; instead, three representative lossless compression methods, LZW, LZ77, and PPM, are examined. According to AIXI theory, a connection more directly explained in Hutter Prize, the best possible compression of x is the smallest possible software that generates x. For example, in that model,

Was developed in 1950. Transform coding dates back to the late 1960s, with the introduction of fast Fourier transform (FFT) coding in 1968 and the Hadamard transform in 1969. An important image compression technique is the discrete cosine transform (DCT), a technique developed in the early 1970s. DCT is the basis for JPEG, a lossy compression format which was introduced by the Joint Photographic Experts Group (JPEG) in 1992. JPEG greatly reduces

Was introduced by P. Cummiskey, Nikil S. Jayant and James L. Flanagan. Perceptual coding was first used for speech coding compression, with linear predictive coding (LPC). Initial concepts for LPC date back to the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966. During the 1970s, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs developed

Was only released in source code form, it should only be considered as an educational description of an MP3 encoder, and thus did not infringe any patent in itself. They also advised users to obtain relevant patent licenses before including a compiled version of the encoder in a product. Some software was released using this strategy: companies used the LAME library, but obtained patent licenses. In

Was required by some programs released as free software in which LAME was linked for MP3 support. This avoided including LAME itself, which used patented techniques, and so required patent licenses in some countries. All relevant patents have since expired, and LAME is now bundled with Audacity. The name LAME is a recursive acronym for "LAME Ain't an MP3 Encoder". Around mid-1998, Mike Cheng created LAME 1.0 as
