This morning’s CNET articles include one that describes a diatribe by the hardware industry against the software industry.[Link] Evidently the explosion was partly induced by Megahard’s announcement, covered in another blog post, that the next edition or iteration of WINDOWS would be vastly different from VISTA, being based fundamentally on multicore processors.
The hardware industry, in the form of an Intel fellow, lambastes the software industry for not also being responsive to Moore’s Law. At this point I should perhaps mention something about the taxonomy of laws. Laws of the first kind, sometimes called primary laws, are those that describe fundamental processes of reality, tempered only by our limited perceptions and cognitions. The laws of physics are examples of these. A common misconception is that these laws are immutable, and in a sense they are, but since our understanding of the actuality that these laws represent is limited and hopefully improving, the statement and interpretation of these laws are mutable.
Laws of the second kind, sometimes called secondary laws, are those that describe derivative processes. These include most of the “laws” postulated by social scientists. Often these laws are empirical in nature and are valid over a limited range of some sort. Laws of the third kind, sometimes called tertiary laws, are those invented by humans without any strong connection with reality. The laws passed by legislatures fall into this category.
“Violations of the laws of the first kind will kill you, of the second kind either enrich or pauperize you, and of the third kind enslave you.”
Moore’s Law is a law of the second kind. It states that the number of fundamental operations per second that a state-of-the-engineering processor can perform doubles every 1.5 years. The reason this is a secondary law is that it may eventually run up against some primary laws. That is, at some point the way the universe is physically may prevent engineers from developing new processors that are faster than their predecessors. Interestingly, Intel seems to have admitted, somewhat stealthily, that it has hit a speed barrier and is only maintaining advances by multicoring. Of course, this may be a misunderstanding on my part due to poor reportage or misreading.
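The doubling rule as stated above can be written down directly. A toy sketch (the 1.5-year period and the function name are mine, taken only from the statement in this post):

```python
# Projected processor speedup under the doubling rule stated above:
# speed doubles once every 1.5 years, so after t years the speedup is
# 2 raised to the number of elapsed doubling periods.

def speedup_after(years: float, doubling_period: float = 1.5) -> float:
    """Multiplicative speedup after `years`, assuming one doubling per period."""
    return 2 ** (years / doubling_period)

print(speedup_after(1.5))  # one doubling period -> 2.0
print(speedup_after(15))   # ten doubling periods in 15 years -> 1024.0
```

Ten doublings in fifteen years is a factor of about a thousand, which is why the hardware side is so touchy about anyone squandering it.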
There are a couple of things that strike me as interesting here. The social one is an indication that the hardware side of the information world sees the software side as damping the market. Since Megahard portrays VISTA as primarily an effort to strengthen security, an aspect almost orthogonally alien to the hardware industry, this perception may have some justification, but not necessarily any benefit for the user/consumer.
On a technical basis, Moore’s Law is actually a limiting factor computationally. If we are interested in solving problems with computers, the way in which the problems are mapped into the discrete representation of computers is crucial. Let us suppose that a representation of N discrete things is necessary to adequately state the problem. Then, if the number of calculations to enumerate the problem is proportional to N, we may double the size of N, and thus the accuracy of the representation, every 1.5 years.
If however, and this is usually the case, the number of calculations to enumerate the problem is N^m, then the time to be able to double the size of N is m * 1.5 years, since doubling N multiplies the calculation count by 2^m, and that takes m doublings of processor speed. Many basic problems require N^2 calculations (a simple sort, for example), so the doubling time for the problem is really 2 * 1.5 years = 3.0 years.
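The arithmetic of the last two paragraphs is compact enough to sketch. Doubling N multiplies the work by 2^m, which takes m doublings of processor speed; the function name and the 1.5-year constant are my own shorthand for what the text states:

```python
# How long until the affordable problem size N can double, if the problem
# costs ~ N^m calculations and processor speed doubles every 1.5 years?
# A 2x larger N costs 2^m times the work, which takes m speed doublings.

MOORE_DOUBLING_YEARS = 1.5  # doubling period assumed in the text

def problem_doubling_years(m: float) -> float:
    """Years until problem size N can double, for cost proportional to N^m."""
    return m * MOORE_DOUBLING_YEARS

print(problem_doubling_years(1))  # linear cost: 1.5 years
print(problem_doubling_years(2))  # quadratic cost (simple sort): 3.0 years
print(problem_doubling_years(3))  # cubic cost: 4.5 years
```

So the higher the exponent, the more Moore's Law gets eaten by the problem itself rather than delivered to the user.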
I should comment that this situation doesn’t apply to most people/users/consumers. Primarily it only impacts folks doing technical research, folks with rapidly growing databases that need to be searched and sorted, and computer gamers. The latter are impacted by the resolution of the visual representation and thereby the ease of suspension of disbelief, and hence, presumably, the believability and enjoyment. Most folks, however, fall into two categories of users.
The first of these are those who have some programs they run periodically. The characteristic of these programs is how often they have to be run. If it’s once a day or month, then processor speed isn’t a primary concern; that only occurs if the time to run the program gets close to (or bigger than) the time between individual runs. So the impact of processor speed here is how much time they have to spend making n runs a day.
The second are folks who use several different programs and would like to have them all running at a good speed simultaneously. Since this is often limited by the speed of the processor, these folks are primarily interested in how many programs they can practically run at once.
Hence, both of these types of folks are more interested in having multiple processors available, since that translates into multiple programs or multiple instances of the same program running simultaneously. This multitasking is not as directly compatible with multicore as one might think, and it is still a major challenge for the software folks, despite all of Megahard’s oft-coughed claims.
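The distinction above is worth a small sketch: independent programs (the situation of both user categories) spread across cores naturally, whereas speeding up one program requires explicitly dividing its work. This is a minimal illustration, with all names my own, and it stands in for what the software folks have to wrestle with:

```python
# Independent jobs parallelize trivially across cores: each process gets one.
# Nothing inside busy_work had to change to benefit from multicore -- which is
# exactly why multiple-program multitasking is the easy case.
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    # Stand-in for one user's independent, CPU-bound program.
    return sum(i * i for i in range(n))

def run_sequentially(jobs):
    """One core: programs take turns."""
    return [busy_work(n) for n in jobs]

def run_on_multiple_cores(jobs):
    """Multicore: each independent program lands on its own core."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(busy_work, jobs))

if __name__ == "__main__":
    jobs = [200_000] * 4
    # Same answers either way; multicore just overlaps the work in time.
    assert run_sequentially(jobs) == run_on_multiple_cores(jobs)
```

The hard case, which this sketch deliberately avoids, is making a *single* program like busy_work faster: that means restructuring its internals to split work across cores, and that restructuring is the challenge the software side actually faces.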
And this, I suspect, is the real gap that the software folks need to fix.