Photo
Mikhail Shatilov, 2014-01-23 18:59:03

Is it possible to compress photos using the following method?

All of the following is just a small discussion.
It is known that photos compress poorly because they contain few repeated patterns. But what if, given 100 of them, you glued them into one file and compressed that? The chance of encountering a larger number of identical regions would increase. What do you say?


4 answer(s)
@ntkt, 2014-01-23

You will not get any noticeable compression. Two raw photographs of a static landscape on a sunny day, taken from the same spot with the same camera 0.1 seconds apart, differ greatly at the binary level because of high-frequency noise from the sensor, even if the eye sees no difference. That is why image compression is based on frequency transforms (such as the discrete cosine transform in JPEG).
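A minimal sketch of that idea, assuming numpy and scipy are available and using a synthetic 8x8 block: the 2D DCT concentrates a smooth block's energy in a few low-frequency coefficients, which is exactly the property JPEG exploits when it quantizes and stores them.

```python
# Minimal sketch: 2D DCT of an 8x8 block, the transform JPEG is built on.
# The "block" here is synthetic (smooth signal plus sensor-like noise).
import numpy as np
from scipy.fft import dctn, idctn

x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 128 + 20 * np.sin(x / 3.0) + np.random.normal(0, 2, (8, 8))

# Forward 2D DCT (type-II, orthonormal), as applied per 8x8 block in JPEG.
coeffs = dctn(block, norm="ortho")

# Most of the energy sits in the low-frequency (top-left) coefficients.
energy = coeffs ** 2
low_freq_share = energy[:4, :4].sum() / energy.sum()
print(f"share of energy in the 4x4 low-frequency corner: {low_freq_share:.3f}")

# Crude "compression": zero out the high-frequency coefficients and invert.
kept = np.zeros_like(coeffs)
kept[:4, :4] = coeffs[:4, :4]
approx = idctn(kept, norm="ortho")
print("max reconstruction error:", np.abs(approx - block).max())
```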

Spetros, 2014-01-23
@Spetros

What you are asking about is called the interframe difference, and it is used in video compression.
As for photographs: if you "take 100 of them, glue them into one and compress", what you get is, oddly enough, a video.
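A minimal sketch of interframe (delta) coding under a few assumptions: the frames are synthetic numpy arrays, zlib stands in for a real entropy coder, and motion compensation (which real video codecs also do) is skipped. The point is only that near-identical frames produce near-zero deltas that compress far better than the frames themselves.

```python
# Minimal sketch of interframe (delta) coding: store the first frame as-is
# and each later frame only as its difference from the previous one.
import zlib
import numpy as np

rng = np.random.default_rng(0)
base = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)

# Two "shots" of the same scene: identical except for a small changed patch.
frame1 = base.copy()
frame2 = base.copy()
frame2[100:110, 200:210] += 5

def compressed_size(arr: np.ndarray) -> int:
    return len(zlib.compress(arr.tobytes(), 6))

# Storing both frames independently vs. frame1 + (frame2 - frame1).
independent = compressed_size(frame1) + compressed_size(frame2)
delta = compressed_size(frame1) + compressed_size(frame2 - frame1)
print("independent:", independent, "bytes")
print("delta-coded:", delta, "bytes")
```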

Andrew, 2014-01-23
@OLS

Photos that seem similar to a person (!) (for example, 100 photos of blue sky and a white beach) do share some common information, but it sits in the low-frequency part of the spectrum and makes up a very small share of the total amount of information; I would estimate somewhere between 0.5 and 2%.
Everything else is the mid- and high-frequency spectrum (the position and curvature of the coastline, stones, waves, clouds, etc.): it accounts for the vast majority of the information during compression, and it is different in every photo.
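A rough sketch of how one might check this, using zlib-compressed size as a crude proxy for information content; the image is synthetic and the exact percentage depends heavily on the content and the compressor, so this only illustrates the order of the effect.

```python
# Rough sketch: how small is the shared, low-frequency part of "similar"
# photos as a share of the compressed size?
import zlib
import numpy as np

rng = np.random.default_rng(1)
h, w = 256, 256
y, x = np.mgrid[0:h, 0:w]

# Shared low-frequency structure (the "blue sky + white beach" part)...
shared = 100 + 80 * (y / h)
# ...plus per-photo mid/high-frequency detail (coastline, waves, noise).
photo = shared + rng.normal(0, 25, size=(h, w))

def size(arr):
    data = np.clip(np.round(arr), 0, 255).astype(np.uint8).tobytes()
    return len(zlib.compress(data, 9))

# Keep only a small disc of low-frequency FFT coefficients and invert.
spec = np.fft.fftshift(np.fft.fft2(photo))
yy, xx = np.ogrid[0:h, 0:w]
mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= 8 ** 2
low_pass = np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))

print("compressed full photo:   ", size(photo), "bytes")
print("compressed low-pass part:", size(low_pass), "bytes")
```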

Deerenaros, 2014-06-29
@Deerenaros

Actually, such an approach exists and is widely used: the H.264 codec, for example, exploits similar areas in adjacent frames, storing only a reference frame and the differences from it.
By the way, there is even a compression method that looks for similar areas within a single photo, and a very promising one: fractal compression. In short, the image is partitioned into sub-images; among them, range blocks are singled out, and for each one a domain block and an affine transformation are sought such that applying the transformation to the domain block reproduces the range block. It sounds complicated; put even more simply, we look for sub-images within the image that resemble each other and store one of them together with the transformation.
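A minimal sketch of that range/domain matching (PIFS) idea, with simplifying assumptions: a synthetic grayscale image, fixed 8x8 range and 16x16 domain blocks, a coarse search grid, and only contrast and brightness in the affine map (no rotations or flips); a real fractal coder would also quantize the stored parameters.

```python
# Minimal fractal (PIFS) sketch: for every 8x8 "range" block, find a 16x16
# "domain" block elsewhere in the same image whose downscaled,
# contrast/brightness-adjusted copy approximates it.
import numpy as np

rng = np.random.default_rng(2)
N, R, D, STEP = 64, 8, 16, 8          # image size, range size, domain size, stride
y, x = np.mgrid[0:N, 0:N]
img = 100 + 60 * np.sin(x / 9.0) * np.cos(y / 7.0) + rng.normal(0, 3, (N, N))

def shrink(block):
    """Average 2x2 cells to map a 16x16 domain block onto an 8x8 range block."""
    return block.reshape(R, 2, R, 2).mean(axis=(1, 3))

# Encode: for each range block, store (domain position, contrast s, brightness o).
code = {}
for ry in range(0, N, R):
    for rx in range(0, N, R):
        r_blk = img[ry:ry + R, rx:rx + R]
        best = None
        for dy in range(0, N - D + 1, STEP):
            for dx in range(0, N - D + 1, STEP):
                dom = shrink(img[dy:dy + D, dx:dx + D])
                # Least-squares fit of r_blk ~= s * dom + o (the affine map).
                s = np.polyfit(dom.ravel(), r_blk.ravel(), 1)[0]
                s = float(np.clip(s, -0.9, 0.9))          # keep the map contractive
                o = r_blk.mean() - s * dom.mean()          # best offset for this s
                err = np.sum((s * dom + o - r_blk) ** 2)
                if best is None or err < best[0]:
                    best = (err, dy, dx, s, o)
        code[(ry, rx)] = best[1:]

# Decode: iterate the stored maps starting from an arbitrary image; the
# contractive maps pull it toward an approximation of the original.
out = np.full((N, N), 128.0)
for _ in range(8):
    nxt = np.empty_like(out)
    for (ry, rx), (dy, dx, s, o) in code.items():
        nxt[ry:ry + R, rx:rx + R] = s * shrink(out[dy:dy + D, dx:dx + D]) + o
    out = nxt

print("mean absolute error after 8 iterations:", np.abs(out - img).mean())
```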
The most promising of them achieved compression ratios on the order of a million to one under laboratory conditions. And not by chance: an image is extremely redundant; the amount of information in lena.bmp barely exceeds a couple of bits per kilobyte. The problem is that all existing algorithms share a fatal flaw: they are highly specialized (JPEG for photos, because of its artifacts; vector formats for animation and graphics, because they are hard to decode; PNG is still heavy, and so on). The problem is aggravated by the fact that an image usually has low information content across the entire spectrum: low, mid and high frequencies all carry high redundancy, but it is redundancy in the signal, while the raw data usually does not look much different from random; and at the same time no single scheme compresses the whole spectrum effectively.
P.S. It is not true, as I have just explained, that photos compress poorly. The only question is how to do it effectively.
