wirelessguru1 Wrote:
-------------------------------------------------------
> > If you had read my post, you would have seen
> > that Perakh has shown that Dembski,
> > the great ID theoretician, has information
> > theory exactly backwards and it does
> > NOT support ID at all!!!!
>
> I read it and, of course, disagree. There isn't
> "greater" information content in random data!
> Otherwise, we couldn't even communicate over the
> Internet...
>
> Specific protocols and layered software stacks
> and structures are always required for proper
> communications. So, for example, your statement
> about "The entropy of that meaningless random
> string is large, and so is the information carried
> by that string" is nothing more than reverse
> twisted logic at its best!
>
> Somehow you are trying to suggest that the
> information content of a random string is larger
> than any organized string! This way we could never
> communicate to start with!!! LOL
>
> -wirelessguru1
I know you live in a world of your own, but you are not entitled to make up your own facts: information theory has a clear set of definitions. Instead of offering your own unsupported opinion, please provide us with QUOTES with FOOTNOTES to contradict the following:
pp. 64- There exists a well-developed science named “information theory”... a seminal concept of the theory in question is information... Perhaps a more appropriate name for it would be “communication theory” (indeed, Shannon’s classical paper of 1948 was entitled “A Mathematical Theory of Communication”). In fact, what information theory studies is the communication process, viewed as the transmission of information, regardless of the presence or absence of a meaningful message in that information. This choice of the definition of information was justified for at least two reasons. First, the process of information’s transmission is not affected by that information’s semantic content. Second, the originators of information theory did not possess a method for measuring the semantic content. Therefore, for the purposes of communication theory, which is the essence of information theory, the definition of information in that theory was not only adequate but also logical and convenient. However, it becomes inadequate if we wish to use the term information in connection with its semantic content.
Information is neither a substance nor a property of some substance. Essentially, information is a measure of a system’s randomness, as we will discuss in more detail later. Therefore, these assertions that information can exist in some abstract way independent of a material medium or that it is conserved (like energy in physics) seem to be dubious propositions. Information can be unearthed, identified, sent, transmitted, or received, and generally handled in whichever way, only if it is recorded in the structure of a material medium (including electromagnetic waves)...
Information has no relation either to the semantic contents of the message or to the particular appearance of the symbols used to record it. It is essential for our discussion to note that the more random the transmitted text, the larger the amount of information it carries...
p. 66. [message will always mean the meaningful contents of information -- different from information as defined in information theory]. One of the measures of information, according to information theory, is a quantity named entropy. To all intents and purposes, it behaves like its namesake in thermodynamics. The entropy of a text quantitatively characterizes the level of disorder in that text. The total entropy of a text as a whole is proportional to the text’s length and is therefore an extensive quantity. A more interesting quantity is the specific entropy, which is the entropy of a unit of text, and therefore is an intensive quantity. Usually it is expressed as entropy per character and measured in bits per character. In the following discussion, unless indicated otherwise, the term entropy will mean the specific entropy. There exists a hierarchy of texts in regard to their entropy. For example, consider a string of the same letter (like A) repeated, say, a million times: AAAAAAA... etc. This meaningless text is perfectly ordered. The entropy of the text is practically zero. Now consider a text obtained, for example, by what we call the urn technique [i.e., completely random]... Let a text in an “urn language” be, say, a million letters long. This string is almost always gibberish (there is some extremely small probability that a string of an urn language happens to be a piece of a meaningful message). If, as is overwhelmingly the case, this string is gibberish, in an overwhelming majority of situations there is no or very little order in that string. We call it a random string. The entropy of that meaningless random string is large, and so is the information carried by that string. Meaningful texts are located somewhere in the middle of the entropy scale, their entropy being much larger than in perfectly ordered texts of very low entropy (like AAAAA...) but much smaller than in the meaningless random texts. Here are some typical numbers.
The entropy of a normal meaningful text in English (as was estimated already by Shannon) is about 1 bit per character. On the other hand, the entropy of a text written in urn language, that is the entropy of a randomized sequence of 27 symbols (26 letters plus space), may be as high as 4.76 bits per character.
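Perakh's numbers are easy to check for yourself. Here is a minimal sketch (my own, not from Perakh's book) that computes the first-order Shannon entropy, in bits per character, of the two extremes he describes: a perfectly ordered string and a random "urn language" string drawn uniformly from 27 symbols.

```python
import math
import random
import string
from collections import Counter

def entropy_per_char(text: str) -> float:
    """First-order Shannon entropy of a string, in bits per character."""
    n = len(text)
    return sum((c / n) * math.log2(n / c) for c in Counter(text).values())

# Perfectly ordered text: one symbol repeated, entropy is 0 bits/char.
ordered = "A" * 1_000_000

# "Urn language": uniform random draws from 26 letters plus space (27 symbols).
alphabet = string.ascii_uppercase + " "
rng = random.Random(0)  # seeded so the result is reproducible
urn = "".join(rng.choices(alphabet, k=1_000_000))

print(entropy_per_char(ordered))  # → 0.0
print(entropy_per_char(urn))      # ≈ 4.75, near the maximum log2(27) ≈ 4.755
```

One caveat: a simple single-character frequency count like this gives ordinary English roughly 4 bits per character, not 1. Shannon's figure of about 1 bit per character also accounts for correlations between characters (spelling, grammar, context), which this first-order estimate ignores; the ordering Perakh describes (ordered < meaningful < random) still holds.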
Bernard