
Lossy Compression

Lossy compression is a data compression technique that reduces the size of a file by permanently discarding information deemed less essential. The resulting file is smaller and therefore cheaper to store and transmit, but the discarded data cannot be recovered, which may reduce the quality of the reconstructed file.

Written by Perlego with AI assistance

8 Key excerpts on "Lossy Compression"

Index pages curate the most relevant extracts from our library of academic textbooks. They have been created using an in-house natural language model (NLM), with each page adding context and meaning to a key research topic.
  • How Video Works

    From Broadcast to the Cloud

    • Diana Weynand, Vance Piccin (Authors)
    • 2015 (Publication Date)
    • Routledge (Publisher)

    ...14 Compression

    Compression is the process of reducing data in a digital signal by eliminating redundant information. This process reduces the amount of bandwidth required to transmit the data and the amount of storage space required to store it. Any type of digital data can be compressed. Reducing the required bandwidth permits more data to be transmitted at one time. Compression can be divided into two categories: lossless and lossy. In lossless compression, the restored image is an exact duplicate of the original with no loss of data. In lossy compression, the restored image is an approximation, not an exact duplicate, of the original (Figure 14.1).

    Lossless Compression

    In lossless compression, the original data can be perfectly reconstructed from the compressed data. Compressing a document is a form of lossless compression in that the restored document must be exactly the same as the original. It cannot be an approximation. In the visual world, lossless compression lends itself to images that contain large quantities of repeated information, such as an image that contains a large area of one color, perhaps a blue sky. Computer-generated images or flat colored areas that do not contain much detail—e.g., cartoons, graphics, and 3D animation—also lend themselves to lossless compression.

    Figure 14.1 Lossless vs Lossy Compression

    One type of lossless compression commonly used in graphics and computer-generated images (CGI) is run-length encoding. These images tend to have large portions using the same colors or repeated patterns. Every pixel in a digital image is composed of the three component colors—red, green, and blue—and every pixel has a specific value for each color...
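
    To make the run-length idea concrete, here is a minimal Python sketch (illustrative only, not from the book; the function names are invented). It collapses runs of identical pixel values into (value, count) pairs and restores them exactly, which is what makes the scheme lossless:

    ```python
    from itertools import groupby

    def rle_encode(pixels):
        """Collapse each run of identical values into a (value, count) pair."""
        return [(value, len(list(run))) for value, run in groupby(pixels)]

    def rle_decode(pairs):
        """Expand the pairs back; reconstruction is exact, so no data is lost."""
        return [value for value, count in pairs for _ in range(count)]

    # A row from a flat blue-sky area: one (R, G, B) value repeated 1000 times.
    row = [(0, 0, 255)] * 1000
    encoded = rle_encode(row)
    assert rle_decode(encoded) == row   # perfect reconstruction
    print(len(encoded))                 # 1 pair instead of 1000 pixel values
    ```

    On flat or repetitive imagery the pair list is tiny; on noisy natural images, where neighbouring pixels rarely match exactly, plain RLE can even expand the data, which is why it suits graphics and CGI rather than photographs.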

  • Understanding Digital Cinema

    A Professional Handbook

    • Charles S. Swartz (Author)
    • 2004 (Publication Date)
    • Routledge (Publisher)

    ...In one respect this is obviously the ideal form of compression in that (assuming error-free transmission) there can be no possibility of degradation. This is lossless compression, and it does have practical applications. Well-known computer programs such as PK-Zip and Stuffit are lossless compression systems. They can take a computer file, make it more compact for storage or transmission, and then restore a perfect copy of the original. Unfortunately, lossless systems generally do not provide sufficient compression for large-scale imagery applications such as Digital Cinema distribution. Typically, lossless systems can compress image data by factors in the range of two or three to one; a useful degree of compression, certainly, but not enough to make Digital Cinema practical. Recently there have been claims that new techniques can provide much higher compression ratios but—at the time of writing—no independent tests have verified these claims. So the majority of this chapter will be devoted to the characteristics and design of lossy compression systems; systems that are likely to meet the practical needs of Digital Cinema distribution. However, lossless compression does still play an important role. These techniques may be used with almost any source of data, including the output data of a lossy compression system. So practical compression systems usually consist of a lossy front end followed by a lossless section (known as the entropy coder) to reduce the bit rate even further.

    Lossy Compression

    For the foreseeable future, Digital Cinema will require the use of compression systems that are not lossless: systems that discard or distort some of the information in the original image data, or lossy compression...
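
    The lossy-front-end-plus-entropy-coder pipeline described above can be sketched in a few lines of Python. This is a toy illustration, not a Digital Cinema codec: coarse quantization stands in for the lossy front end, zlib's DEFLATE stands in for the entropy coder, and the step size and test data are arbitrary.

    ```python
    import zlib
    import numpy as np

    def lossy_front_end(samples, step=16):
        """Irreversibly discard information by coarse quantization."""
        return (samples // step).astype(np.uint8)

    def entropy_back_end(quantized):
        """Losslessly squeeze the quantized symbols further."""
        return zlib.compress(quantized.tobytes(), 9)

    # A smoothly varying 8-bit "scanline" as stand-in image data.
    scanline = np.linspace(0, 255, 65536).astype(np.uint8)
    q = lossy_front_end(scanline)       # lossy stage: fewer distinct symbols
    bitstream = entropy_back_end(q)     # lossless stage: entropy coding
    print(len(scanline.tobytes()), "->", len(bitstream))
    ```

    Quantizing first is what gives the entropy coder something to work with: fewer, more predictable symbols compress far below the two- or three-to-one ratio that lossless coding alone achieves.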

  • The Manual of Photography
    • Elizabeth Allen, Sophie Triantaphillidou (Authors)
    • 2012 (Publication Date)
    • Routledge (Publisher)

    ...The aim is that the decompressed version of the dataset is as close to the original as possible. However, it is important to note that compression may be lossless or lossy. Lossless compression methods, as the name suggests, compress data without removing any information, meaning that after decompression the reconstruction will be identical to the original. However, the amount of compression achieved will be limited. Certain types of information require perfect reconstruction, and therefore only lossless methods are applicable. Lossy compression methods remove redundancy in both data and information, incurring some losses in the reconstructed version. Lossy compression is possible in cases where there is some tolerance for loss, and depends on the type of information being represented. An example of such a situation is one where some of the information is beyond the capabilities of the receiver. This process is sometimes described as the removal of irrelevancies. In lossy methods there is always a trade-off between the level of compression achieved and the degree of quality loss in the reconstructed signal.

    Types of redundancy

    Mathematically, the process of compression may be seen as the removal of correlation within the image. There are a number of different areas of redundancy commonly present in typical digital images:

    • Spatial redundancy (see Figure 29.2). This type of redundancy refers to correlation between neighbouring pixels and therefore inherent redundancy in the pixel values (also known as interpixel redundancy). The correlation may consist of several consecutive pixels of the same value, in an area where there is a block of colour, for example. More commonly in natural images, however, neighbouring pixels will not be identical, but will have similar values with very small differences. In images where there are repeating patterns, there may be correlation between groups of pixels...
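
    Spatial (interpixel) redundancy is easy to demonstrate: storing the small differences between neighbouring pixels instead of the pixels themselves makes a generic lossless coder noticeably more effective. A minimal Python sketch, using zlib purely as a convenient lossless back end (the gradient data is made up for illustration):

    ```python
    import zlib
    import numpy as np

    # A smooth gradient row: neighbouring pixels differ by at most 1.
    row = np.linspace(0, 255, 4096).astype(np.uint8)

    direct = zlib.compress(row.tobytes(), 9)

    # Decorrelate: keep only pixel-to-pixel differences (first pixel vs zero).
    deltas = np.diff(row, prepend=np.uint8(0))
    predictive = zlib.compress(deltas.tobytes(), 9)

    # The transform is reversible, so nothing is lost (sums wrap modulo 256).
    restored = (np.cumsum(deltas.astype(np.int64)) % 256).astype(np.uint8)
    assert np.array_equal(restored, row)

    print(len(direct), len(predictive))   # the delta stream compresses better
    ```

    This is the prediction step that image coders build on: natural images are rarely exact runs, but the neighbour-to-neighbour differences stay small and cheap to code.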

  • Compression for Great Video and Audio

    Master Tips and Common Sense

    • Ben Waggoner (Author)
    • 2013 (Publication Date)
    • Routledge (Publisher)

    ...We see the same thing as with the text compression example.

    Small Increases in Compression Require Large Increases in Compression Time

    There is a fundamental limit to how small a given file can be compressed, called the Shannon limit. For random data, the limit is the same as the size of the source file. For highly redundant data, the limit can be tiny. A file that consists of the pattern “01010101” repeated a few million times can be compressed down to a tiny percentage of the original data. However, real-world applications don’t get all the way to the Shannon limit, since it requires an enormous amount of computer horsepower, especially as the files get larger. Most compression applications have a controlling tradeoff between encoding speed and compression efficiency. In essence, these controls expand the amount of the file that is being examined at any given moment, and the size of the codebook that is searched for matches. However, doubling compression time doesn’t cut file size in half! Doubling compression time might only get you a few percentage points closer to the Shannon limit for the file. Getting a file 10 percent smaller might take more than 10 times the processing time, or be flat-out impossible.

    Lossy and Lossless Compression

    Lossless compression codecs preserve all of the information contained within the original file. Lossy codecs, on the other hand, discard some data contained in the original file during compression. Some codecs, like PNG, are always lossless. Others like VC-1 are always lossy. Others still may or may not be lossy depending on how you set their quality and data rate options. Lossless algorithms, by definition, might not be able to compress the file any smaller than it started. Lossy codecs generally let you specify a target data rate, and discard enough information to hit that data rate target...
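
    Both points above are easy to check with a general-purpose compressor. A short Python sketch (zlib is just a convenient stand-in; the sizes and timings it prints will vary by machine) compresses a highly redundant "01" pattern and random bytes at increasing effort levels:

    ```python
    import os
    import time
    import zlib

    pattern = b"01" * 2_000_000      # highly redundant: Shannon limit is tiny
    noise = os.urandom(4_000_000)    # random: essentially incompressible

    for name, data in (("pattern", pattern), ("noise", noise)):
        for level in (1, 6, 9):      # the speed-vs-efficiency knob
            start = time.perf_counter()
            out = zlib.compress(data, level)
            elapsed = time.perf_counter() - start
            print(f"{name} level {level}: {len(out):>8} bytes, {elapsed:.3f} s")
    ```

    Typical runs show the pattern shrinking to a fraction of a percent of its original size while the random data barely budges, and the higher levels costing noticeably more time for only marginally smaller output: diminishing returns, exactly as described.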

  • Introduction to Digital Audio
    • John Watkinson (Author)
    • 2013 (Publication Date)
    • Routledge (Publisher)

    ...Clearly with computer programs the corruption of a single bit can be catastrophic. Lossless coding is generally restricted to compression factors of around 2:1. It is important to appreciate that a lossless coder cannot guarantee a particular compression factor and the communications link or recorder used with it must be able to handle the variable output data rate. Audio material which results in poor compression factors on a given codec is described as difficult. It should be pointed out that the difficulty is often a function of the codec. In other words audio which one codec finds difficult may not be found difficult by another. Lossless codecs can be included in bit-error-rate testing schemes. It is also possible to cascade or concatenate lossless codecs without any special precautions.

    In lossy coding, data from the decoder are not identical bit-for-bit with the source data and as a result comparing the input with the output is bound to reveal differences. Clearly lossy codecs are not suitable for computer data, but are used in many audio coders, MPEG included, as they allow greater compression factors than lossless codecs. The most successful lossy codecs are those in which the errors are arranged so that the listener finds them subjectively difficult to detect. Thus lossy codecs must be based on an understanding of psychoacoustic perception and are often called perceptive codes.

    Perceptive coding relies on the principle of auditory masking, which was considered in Chapter 2. Masking causes the ear/brain combination to be less sensitive to sound at one frequency in the presence of another at a nearby frequency. If a first tone is present in the input, then it will mask signals of lower level at nearby frequencies. The quantizing of the first tone and of further tones at those frequencies can be made coarser. Fewer bits are needed and a coding gain results...
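
    The masking-driven coarser quantization can be caricatured in a few lines of Python. This is a deliberately crude sketch, not MPEG: the band layout, step sizes, and the "within two bands of the masker" rule are all invented for illustration, and a real perceptive coder derives its thresholds from a full psychoacoustic model.

    ```python
    import numpy as np

    def perceptual_quantize(band_levels, masker_band,
                            fine_step=0.5, coarse_step=8.0, spread=2):
        """Quantize spectral band levels, using a coarser step (fewer bits)
        in the bands masked by a loud nearby tone."""
        n = len(band_levels)
        steps = np.full(n, fine_step)
        masked = np.abs(np.arange(n) - masker_band) <= spread
        steps[masked] = coarse_step    # extra quantizing error hides under the mask
        return np.round(band_levels / steps) * steps, steps

    # 32 band levels with one strong tone in band 10 (all values made up).
    levels = np.random.uniform(0.0, 4.0, 32)
    levels[10] = 60.0
    coded, steps = perceptual_quantize(levels, masker_band=10)
    ```

    The coding gain comes from the coarse bands needing fewer bits per value; the added quantizing distortion sits at the frequencies where the masker makes the ear least able to hear it.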

  • The Technology of Video and Audio Streaming
    • David Austerberry (Author)
    • 2013 (Publication Date)
    • Routledge (Publisher)

    ...To reduce the rate even further, some form of image compression has to be employed.

    Compression

    Compression removes information that is perceptually redundant; that is, information that does not add to the perception of a scene. Compression is a tradeoff between the level of artifacts that it causes and the saving in bandwidth. These trade-offs sometimes can be seen on satellite television. If too many channels are squeezed into one transponder, fast-moving objects within a scene can become blocky and soft. Like scaling, compression of video splits into spatial (intraframe) compression and temporal (interframe) compression.

    Intraframe compression

    Single frames can be compressed with spatial, or intraframe, compression. This can be a simple system like run-length encoding, or a lossy system where the original data cannot wholly be reconstructed. A typical example of a lossy system is JPEG, a popular codec for continuous-tone still images.

    Interframe compression

    The next method to compress video is to remove information that does not change from one frame to the next, and to transmit information only in the areas where the picture has changed. This is referred to as temporal or interframe compression. This technique is one of those used by the MPEG-1, MPEG-2, and MPEG-4 standards.

    Compression classes

    The different algorithms are classified into families:

    • Lossless
    • Lossy
    • Naturally lossy
    • Unnaturally lossy

    If all the original information is preserved the codec is called lossless. A typical example for basic file compression is PKZIP. To achieve the high levels of compression demanded by streaming codecs, the luxury of the lossless codecs is not possible – the data reduction is insufficient. The goal with compression is to avoid artifacts that are perceived as unnatural. The fine detail in an image can be degraded gently without losing understanding of the objects in a scene...
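
    The interframe idea of transmitting only where the picture has changed reduces to a block comparison against the previous frame. A toy Python sketch (the 8x8 block size and the threshold are arbitrary choices, and real codecs add motion compensation on top of this):

    ```python
    import numpy as np

    def changed_blocks(prev, curr, block=8, threshold=0):
        """Return only the blocks of the current frame that differ from the
        previous one, with their positions; static areas cost nothing."""
        updates = []
        height, width = curr.shape
        for y in range(0, height, block):
            for x in range(0, width, block):
                a = prev[y:y + block, x:x + block].astype(int)
                b = curr[y:y + block, x:x + block].astype(int)
                if np.abs(b - a).max() > threshold:
                    updates.append((y, x, curr[y:y + block, x:x + block].copy()))
        return updates

    prev = np.zeros((64, 64), dtype=np.uint8)     # static frame
    curr = prev.copy()
    curr[8:16, 24:32] = 200                       # one small moving object
    print(len(changed_blocks(prev, curr)))        # 1 of 64 blocks transmitted
    ```

    Decoding is just pasting the received blocks over a copy of the previous frame, which is why a lost reference frame corrupts everything that follows until the next full, intraframe-coded picture arrives.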

  • Art of Digital Audio
    • John Watkinson (Author)
    • 2013 (Publication Date)
    • Routledge (Publisher)

    ...Audio material which results in poor compression factors on a given codec is described as difficult. It should be pointed out that the difficulty is often a function of the codec. In other words audio which one codec finds difficult may not be found difficult by another. Lossless codecs can be included in bit-error-rate testing schemes. It is also possible to cascade or concatenate lossless codecs without any special precautions.

    In lossy coding, data from the decoder are not identical bit-for-bit with the source data and as a result comparing the input with the output is bound to reveal differences. Clearly lossy codecs are not suitable for computer data, but are used in many audio coders, MPEG included, as they allow greater compression factors than lossless codecs. The most successful lossy codecs are those in which the errors are arranged so that the listener finds them subjectively difficult to detect. Thus lossy codecs must be based on an understanding of psychoacoustic perception and are often called perceptive codes.

    Perceptive coding relies on the principle of auditory masking, which was considered in Chapter 2. Masking causes the ear/brain combination to be less sensitive to sound at one frequency in the presence of another at a nearby frequency. If a first tone is present in the input, then it will mask signals of lower level at nearby frequencies. The quantizing of the first tone and of further tones at those frequencies can be made coarser. Fewer bits are needed and a coding gain results. The increased quantizing distortion is allowable if it is masked by the presence of the first tone. In perceptive coding, the greater the compression factor required, the more accurately must the human senses be modelled. Perceptive coders can be forced to operate at a fixed compression factor. This is convenient for practical transmission applications where a fixed data rate is easier to handle than a variable rate...
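
    Forcing a fixed compression factor amounts to a rate-control loop: keep coarsening the quantizer until the coded output fits the budget. A toy Python sketch of that loop (quantization plus zlib stand in for a real coder, and the budget, step sizes, and test signal are invented):

    ```python
    import zlib
    import numpy as np

    def encode_at_fixed_rate(samples, budget_bytes, max_step=2 ** 14):
        """Double the quantizer step until (quantize + lossless code) fits
        the fixed byte budget, trading quality for a guaranteed rate."""
        step = 1
        while step <= max_step:
            quantized = (samples // step).astype(np.int16)
            payload = zlib.compress(quantized.tobytes(), 9)
            if len(payload) <= budget_bytes:
                return step, payload
            step *= 2
        return step, payload   # budget unreachable; return the coarsest attempt

    # One second of noisy 16-bit "audio" at 48 kHz (synthetic test data).
    audio = np.random.randint(-8000, 8000, 48_000).astype(np.int16)
    step, payload = encode_at_fixed_rate(audio, budget_bytes=24_000)
    print(step, len(payload))   # a coarser step buys a guaranteed 4:1 rate
    ```

    A real perceptive coder spends its bit budget where the psychoacoustic model says errors will be masked; doubling the step everywhere is simply the bluntest way to show the fixed-rate trade-off.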

  • Encyclopedia of Image Processing
    • Phillip A. Laplante (Author)
    • 2018 (Publication Date)
    • CRC Press (Publisher)

    ...The large increase in the amount of digital information clearly provides enormous application potential, but there is also an important handling problem and a growing need for compression formats. The storage, access, and transmission of these large volumes of information could be problematic, and compression techniques could be a part of the solution. Compression techniques are methods which reduce the number of symbols used to represent source information.[2] Thus, they imply reducing the amount of space needed to store information or the amount of time necessary to transmit it. There are different types of data compression methods; the main methods are detailed in Section “Image Compression Methods” and some of them are used for remotely sensed images. Most of these methods can be classified as lossless (completely reversible) or lossy (with some loss of information) compressions (see Section “Image Compression Methods” for definitions).

    Lossless compression is usually incapable of achieving the high compression requirements of many storage and distribution applications,[3] which can only be achieved by lossy compression algorithms. As lossy compression introduces irreversible alterations in the original remotely sensed information, rigorous studies are needed to understand the effects and consequences of this manipulation. To this end, in the present work, we review papers from different fields analyzing the effect of compression when using RS images for multiple objectives: spectral analysis, digital classification, texture analysis, stereoscopy, geostatistics, and multivariate regression, among others. A brief review of image compression methods is presented in Section “Image Compression Methods” (mainly focused on lossy techniques). Section “Remotely Sensed Images” explains the main characteristics of RS images that are relevant for compression purposes in order to understand some interesting issues that will be elaborated in the following sections...