- Newsgroups: comp.compression.research
- Path: sparky!uunet!walter!att-out!pacbell.com!ames!purdue!decwrl!adobe!wtyler
- From: wtyler@adobe.com (William Tyler)
- Subject: Re: C source for Fractal compression, huh !
- Message-ID: <1992Nov18.083335.18739@adobe.com>
- Followup-To: comp.compression.research
- Summary: Back to Basics
- Sender: Bill Tyler
- Organization: Adobe Systems Inc., Mountain View, CA
- References: <1992Nov16.184754.3170@maths.tcd.ie> <Bxu712.LvA@metaflow.com> <1992Nov18.024912.24072@maths.tcd.ie>
- Date: Wed, 18 Nov 1992 08:33:35 GMT
- Lines: 25
-
- In article <1992Nov18.024912.24072@maths.tcd.ie> tim@maths.tcd.ie (Timothy Murphy) writes:
-
- >Chaitin/Kolmogorov Algorithmic Information Theory
- >does set an absolute limit to the degree to which
- >any given data can be compressed:
- >the string s cannot be compressed
- >beyond its entropy (or informational content) H(s).
- >H(s) is precisely defined,
-
- Ah, there's the rub. H(s) is only precisely defined with respect to
- some particular model of the information source s, expressed in the
- form of probabilities. The better your model of the source, the more
- you can compress the information. Usually our source models are far
- from ideal, due both to a lack of complete understanding of the source
- and to the practical difficulty of constructing and using complex models.
- As an example, consider a source where the probability of a given
- symbol being emitted is a function of the previous 10^10 symbols.
- You aren't likely even to try to tabulate the probabilities, let alone
- make optimal use of them :-)
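
The model-dependence Bill describes can be seen in a small sketch (modern Python, hypothetical function names): the same string has a different measured entropy depending on how much context the model conditions on. An order-0 (memoryless) model sees "abab..." as a fair coin at 1 bit per symbol, while an order-1 model, conditioning on one previous symbol, finds it perfectly predictable.

```python
import math
from collections import Counter, defaultdict

def order0_entropy(s):
    """Bits per symbol under a memoryless (order-0) model:
    each symbol's probability is just its overall frequency."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def order1_entropy(s):
    """Bits per symbol under an order-1 model: each symbol's
    probability is conditioned on the single preceding symbol."""
    contexts = defaultdict(Counter)
    for prev, cur in zip(s, s[1:]):
        contexts[prev][cur] += 1
    n = len(s) - 1
    h = 0.0
    for counts in contexts.values():
        total = sum(counts.values())
        for c in counts.values():
            h += (c / n) * -math.log2(c / total)  # weight by pair frequency
    return h

s = "ab" * 500
print(order0_entropy(s))  # 1.0 bit/symbol: order-0 sees a fair coin
print(order1_entropy(s))  # 0.0 bits: given one symbol of context,
                          # the next symbol is certain
```

The string itself never changes; only the model does, and with it the "entropy" and hence the compression limit we can hope to approach.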
-
- Bill
-
-
- --
- Bill Tyler wtyler@adobe.com
-