
Maximum PC

 Post subject: Hardware vs Software/Algorithms
PostPosted: Thu Sep 09, 2010 12:25 am 
Bitchin' Fast 3D Z8000*

Joined: Tue Jun 29, 2004 11:32 pm
Posts: 2555
Location: Somewhere between compilation and linking
I forget where I heard the question, but it got me to thinking...

Would you rather have a 1985 computer running 2010 software/algorithms or a 2010 computer running 1975 software/algorithms?

I'm going to hold off posting some of my thoughts for a bit. I will say that, if it hasn't been coined already, Gadget's Law describes the relative increase in the performance of algorithms over a given period of time. ;)

So what do you guys think?


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Thu Sep 09, 2010 5:54 am 
SON OF A GUN

Joined: Mon Nov 01, 2004 5:41 am
Posts: 11605
Hardware. Faster hardware is almost always better.

However, now that we are really getting into true multi-core/multi-threaded, 64-bit environments, this question won't be so easy to answer.


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Thu Sep 09, 2010 6:11 am 
Bitchin' Fast 3D Z8000

Joined: Mon Jun 14, 2004 4:04 pm
Posts: 987
Location: Earth
Gadget wrote:
Would you rather have a 1985 computer running 2010 software/algorithms or a 2010 computer running 1975 software/algorithms?

So what do you guys think?


I'll take a stab at it and say, none of the above.

I'll just assume that, with the amount of code bloat present in today's software, trying to run it on a 1985 PC would be tantamount to suicide. On the other side of the coin, running 1975 software/algorithms on multicore systems will hit a point of diminishing returns, because 1975 algorithms aren't exactly optimized for multicore environments.


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Thu Sep 09, 2010 8:03 am 
Java Junkie

Joined: Mon Jun 14, 2004 10:23 am
Posts: 24238
Location: Granite Heaven
Well, a 1985 computer can't run 2010 software. Windows 7 barely runs on a 2002 computer.

Algorithms? I guess it depends on the application. I assume that 8 logical cores running at 4GHz, with a gig of RAM each, could search a database more quickly using brute force than a C64 with a 1MHz CPU and 64KB of RAM using fancy-pants algos ... and I know that there were much better solutions than brute force in existence in '85.

Thus .. I chose new hardware and old software / algos.
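
As a toy back-of-the-envelope sketch of that bet (made-up numbers for illustration only: a 5000x hardware gap, a brute-force linear scan on the new machine vs. an idealized index doing roughly log2(n) lookups on the old one), the crossover comes surprisingly early:

Code:
import math

SPEEDUP = 5000   # assumed 1985 -> 2010 hardware factor (made up for this sketch)

def compare(n):
    brute_new = n / SPEEDUP    # linear scan of n records on the fast machine
    smart_old = math.log2(n)   # indexed / binary-search lookup on the slow machine
    return brute_new, smart_old

for n in (10_000, 100_000, 1_000_000, 10_000_000):
    b, s = compare(n)
    winner = "new hardware" if b < s else "old hardware + algorithm"
    print(f"n = {n:>10,}:  brute = {b:9.1f}  indexed = {s:5.1f}  ->  {winner}")

In those (hypothetical) units of 1985 machine time, brute force on the fast box wins for small tables, but somewhere around 100,000 records the slow machine with an index pulls ahead and never looks back.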


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Thu Sep 09, 2010 8:42 am 
Bitchin' Fast 3D Z8000*

Joined: Tue Jun 29, 2004 11:32 pm
Posts: 2555
Location: Somewhere between compilation and linking
CrashTECH wrote:
Hardware. Faster hardware is almost always better.

Because? It is certainly easier... but that doesn't mean there is an easy answer to the question. =)

CrashTECH wrote:
However, now that we are really getting into the true multi-core/multi-thread and 64bit environment, this question won't be so easy to answer.

Let's pretend that CPUs have doubled in speed every two years since 1985, RAM is plentiful, hard drives are enormous, etc.


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Thu Sep 09, 2010 8:50 am 
Java Junkie

Joined: Mon Jun 14, 2004 10:23 am
Posts: 24238
Location: Granite Heaven
Uh oh .. Gadget has numbers .. that means that he has something to prove. :)


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Thu Sep 09, 2010 9:38 am 
Bitchin' Fast 3D Z8000

Joined: Mon Jun 14, 2004 4:04 pm
Posts: 987
Location: Earth
Jipstyle wrote:
Uh oh .. Gadget has numbers .. that means that he has something to prove. :)


and far too many things to read. :P


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Thu Sep 09, 2010 9:59 am 
Bitchin' Fast 3D Z8000*

Joined: Tue Jun 29, 2004 11:32 pm
Posts: 2555
Location: Somewhere between compilation and linking
DJSPIN80 wrote:
Gadget wrote:
Would you rather have a 1985 computer running 2010 software/algorithms or a 2010 computer running 1975 software/algorithms?

So what do you guys think?


I'll take a stab at it and say, none of the above.

I'll just assume that with the amount of code bloat present in today's software, trying to run it on a 1985 PC would be tantamount to suicide. However, on the other side of the coin, running 1975 software/algorithms on multicore systems will hit a point of diminishing returns because 1975 algorithms aren't exactly optimized to run on multicore environments.

Err... I just realized that I mistyped the date. I meant to put down 1985 software/algorithms.

Concerning bloat -- if your computer was frozen in time at 1985 levels, you wouldn't have developed the bloat to begin with, right? It just wouldn't fit. Also, there is nothing preventing you from using a lean-and-mean operating system either. Just because much of our current software is considered bloated doesn't mean all of it is that way, or that it has to be. We can always run an RTOS or even just bootstrap single programs on the older hardware. I really hadn't given the operating system much thought, but I didn't mean to imply that you had to try and run Windows 7 on an Apple IIe either! =)

In 1985, the consumer basically had one of two choices: CP/M or Apple DOS. A very strict interpretation of the rules would probably be a win for the software/algorithms... the 2+ gigs of RAM in your computer aren't going to do you much good if you're limited to a 16-bit address space! Perhaps there were some 16- or 32-bit versions of VMS, Unix, or a LISP flavor to save the day. Either way, we should probably pretend that the OS is magic and allows you to use the full capabilities of your hardware.

Here is more of what I had in mind...

1) Who will win in a game of computer chess?
2) You have a vast collection of documents on your computer; which system does IR (information retrieval) better?
3) You have a relational database (not an RDBMS); which system can make the best use of the data?

Obviously, there are things for which optimal, or nearly optimal, algorithms were already known in 1985, and those would clearly be better performed on modern hardware. However, there are quite a few things that are far less certain.


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Thu Sep 09, 2010 11:02 am 
Bitchin' Fast 3D Z8000*

Joined: Tue Jun 29, 2004 11:32 pm
Posts: 2555
Location: Somewhere between compilation and linking
Jipstyle wrote:
Algorithms? I guess it depends on the application. I assume that 8 logical cores running at 4GHz each with a gig of RAM each could search a database more quickly using brute force than a C64 with a 1MHz CPU and 64KB of RAM using fancy-pants algos ... and I know that there were much better solutions that brute force in existence in '85.

I tend to agree with the idea that it depends. Let's use a hardware speed-up factor of 5000x for this example (2^12.5 is 5792.6... and 2^12 is 4096). We need to drill holes in a circuit board. And just to make things difficult for you, we don't need to find the "optimal" circuit -- just one that is within twice the optimal distance. This is a classic example of TSP approximation. Since I wasn't being nice earlier, we'll pretend there are just 16 holes in the board.

Using a naive brute-force solution, you'd have to check 16! permutations:
20,922,789,888,000

Yeah, that's a pretty big number, but on the plus side, your fancy hardware reduces it by 5000x to approximately 4 billion units of 1985 computer time! Plus, you actually have the optimal answer. At least until one of the hardware guys realizes they forgot a diode or something! =)

Using the Euclidean TSP approximation algorithm requires O(n log n) time, which comes out to 64 units of 1985 computer processing power.

Granted, this is a very contrived example:
For starters, this approximation algorithm was discovered in 1976, not 1985. =)
You could use pruning to eliminate a large percentage of the search space.
OTOH, there aren't any circuits with only 16 holes in them anymore, either!
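
For anyone curious what a 2-approximation like that looks like, here is a bare-bones sketch (my own illustration, not the exact 1976 algorithm): build a minimum spanning tree over the holes, then visit them in preorder. By the triangle inequality, the resulting tour is at most twice the optimal length. This naive version uses an O(n^2) Prim's MST; the O(n log n) bound needs fancier geometric machinery.

Code:
import math

def tsp_2_approx(points):
    """MST-doubling 2-approximation for Euclidean TSP.
    points: list of (x, y) hole coordinates; returns a tour as a list of indices."""
    n = len(points)
    dist = lambda a, b: math.hypot(points[a][0] - points[b][0],
                                   points[a][1] - points[b][1])
    # Prim's algorithm, O(n^2) -- fine for a 16-hole board.
    in_tree = [False] * n
    best = [math.inf] * n
    parent = [0] * n
    children = [[] for _ in range(n)]
    best[0] = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u
    # A preorder walk of the MST visits every hole; the triangle inequality
    # bounds the tour at twice the MST weight, hence at most twice optimal.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

print(tsp_2_approx([(0, 0), (0, 3), (4, 0), (4, 3), (2, 1)]))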

Jipstyle wrote:
Thus .. I chose new hardware and old software / algos.

For shame!!! =)

edit: Some day, I will learn English. I swear.


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Thu Sep 09, 2010 12:38 pm 
Java Junkie

Joined: Mon Jun 14, 2004 10:23 am
Posts: 24238
Location: Granite Heaven
Gadget wrote:
For shame!!! =)


But if the algorithm you mentioned was developed in '76, I'm correct. Running the same algorithm on old or new hardware is obviously going to be faster on the new hardware. :P

(Smartassedness aside, this is an interesting discussion)


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Thu Sep 09, 2010 2:17 pm 
Bitchin' Fast 3D Z8000*

Joined: Tue Jun 29, 2004 11:32 pm
Posts: 2555
Location: Somewhere between compilation and linking
Jipstyle wrote:
Gadget wrote:
For shame!!! =)


But if the algorithm you mentioned was developed in '76, I'm correct. Running the same algorithm on old or new hardware is obviously going to be faster on the new hardware. :P

(Smartassedness aside, this is an interesting discussion)

I have paused and thought about it more than a few times during the past week. =)

Another one of the things that people tend to take for granted is the maturity of the compilers that we have today. It would take a fairly significant team of programmers to perform the types of optimizations that a compiler performs in seconds these days. I'm not sure how to quantify the difference, but I suspect that for some applications/algorithms it is very significant. A related area would be query plan optimization in a DBMS, another thing we kind of take for granted that has improved significantly.

So how about it... who thinks that a modern computer running a top chess program circa '85 can beat an Apple IIe or LISP/M (or whatever) running a modern chess program?

Haha.... I just thought of something funny. CrashTECH said that he'd prefer to have a new computer and old software, which I'm going to take as a sign that he is a closet LISP programmer just looking for an excuse to come out! =)


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Thu Sep 09, 2010 10:11 pm 
Team Member Top 100

Joined: Fri Sep 17, 2004 5:35 pm
Posts: 1176
Just as a thought, many modern algorithms take advantage of a space/time tradeoff rather than being asymptotically faster (and those that are asymptotically faster are often O(n log n log log n) versus O(n log^2 n), which is a nice improvement but not epic). Having 8GB of very fast RAM versus 1MB of very slow RAM may be necessary to get the huge performance advantage.
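
A tiny illustration of that kind of space/time tradeoff (my own made-up example, not any particular modern algorithm): spend 64 KB of memory on a lookup table and popcount becomes two table lookups instead of a loop. Great if you have gigabytes to burn; a non-starter if the whole machine has 1 MB.

Code:
# Space/time tradeoff: a 64 KB table makes popcount (number of set bits)
# a couple of lookups instead of an up-to-32-iteration loop.
POPCOUNT_16 = bytes(bin(i).count("1") for i in range(1 << 16))  # 64 KB table

def popcount32_table(x):
    # two lookups -- fast, but only if you can afford the table
    return POPCOUNT_16[x & 0xFFFF] + POPCOUNT_16[(x >> 16) & 0xFFFF]

def popcount32_loop(x):
    # no extra memory -- pay with time instead
    count = 0
    while x:
        x &= x - 1   # clear the lowest set bit
        count += 1
    return count

assert popcount32_table(0xDEADBEEF) == popcount32_loop(0xDEADBEEF) == 24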

That said, I think I'd rather have modern hardware (i.e. lithography). A modern FPGA, let alone an ASIC or full-custom design, would dominate older hardware (even with the same algorithms). Then again, without modern CAD algorithms, tech mapping and placement/routing would be entirely infeasible.

I guess the answer is that hardware and software go hand-in-hand. Codesign is essential. One without the other leaves you unable to fully exploit the resources you have, or possibly unable to use them.


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Fri Sep 10, 2010 1:06 am 
Team Member Top 100

Joined: Sun Feb 12, 2006 2:52 pm
Posts: 1994
Location: Cryptogram City!
I've been a LONG-time computer chess fan - actually, that's why I wanted to learn to program in C in the first place.

Quote:
So how about it... who thinks that a modern computer running a top chess program circa '85 can beat an Apple IIe or LISP/M (or whatever) running a modern chess program?


The 1985 program, on modern hardware, would destroy the best programs today (Rybka, Shredder, etc.), running on Apple IIe's.

Search depth is "everything" in chess programming. Chess programs have not improved nearly as much since 1985 as the hardware has.

Big win for the hardware team! 8)


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Fri Sep 10, 2010 5:34 am 
Bitchin' Fast 3D Z8000

Joined: Mon Jun 14, 2004 4:04 pm
Posts: 987
Location: Earth
Gadget wrote:
edit: Some day, I will learn English. I swear.


And I teach English to Polish students in the summers. Just a hint, since you made mention of it. :P


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Fri Sep 10, 2010 5:56 am 
Bitchin' Fast 3D Z8000

Joined: Mon Jun 14, 2004 4:04 pm
Posts: 987
Location: Earth
Quote:
Another one of the things that people tend to take for granted is the maturity of the compilers that we have today. It would take a fairly significant team of programmers to perform the types of optimizations that a compiler performs in seconds these days. I'm not sure how to quantify the difference, but I suspect that for some applications/algorithms it is very significant. A related area would be query plan optimization in a DBMS. This is another area that we kind of take for granted that has improved significantly.


I find it very difficult to quantify the difference between teams of programmers vs. a compiler. However, there's a qualitative aspect I had thought about regarding the significance of compiler optimizations. I think compiler writers (and language nuts like Gosling and Hejlsberg (sp.)) have benefited from years of observation. I tend to break programming into two camps. The first is the specialized camp, which tends to focus on squeezing out every drop of performance; this could mean fine-tuning the IL, the assembly, et al. The second is the 'everyday/everything else' camp, which tends to be the business developers and non-specialized app teams. Most compilers, IMO, tend to be written for the common denominator (think camp 2). Computing has largely been a linear exercise where we tend to solve different permutations of the same problems (e.g., blogs, CRMs, data warehouses, video games). I think that compilers have largely been optimized for this general-purpose use, and there are very few 'new things' that crop up. I mean, let's face it, how much compiler optimization could one do to improve the performance of Microsoft Word or that web app you wrote? Most compilers make do with the behaviors and patterns of everyday usage.

The specialized camp is a little bit more, well, specialized. One area I can think of is computational finance. I've met a lot of these folks and their job requirements are insane. They do a LOT of fine-tuning at the IL level (some even write IL code straight up), and this is an area where both hardware performance and software algorithms come to mind. One of my college buddies in computational finance pretty much said that they live and die by algorithms. Simply brute-forcing a quantitative analysis is unacceptable; they crunch LARGE quantities of data, and this is something a compiler may not be as efficient at, so they fine-tune their apps (which tend to be very small in nature) to run as fast as possible on hardware that would make your desktop cry uncle.


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Fri Sep 10, 2010 9:51 am 
Bitchin' Fast 3D Z8000*

Joined: Tue Jun 29, 2004 11:32 pm
Posts: 2555
Location: Somewhere between compilation and linking
Kybo_Ren wrote:
Just as a thought, many modern algorithms take advantage of a space/time tradeoff rather than being asymptotically faster (and those that are asymptotically faster are often O(n log n log log n) versus O(n log^2 n), which is a nice improvement but not epic).

While there certainly are cases where only a small improvement (if any) has been made to an algorithm's time complexity, there are also plenty of examples of larger improvements during the past twenty or thirty years. Often what is more significant than a small time-complexity improvement is a better approximation bound, probabilistic error bound, or some other algorithmic feature. In 2002, the AKS primality test, while a bit slower than some known randomized/probabilistic primality tests, made a lot of noise for being the first general primality test in P (i.e. deterministic polynomial time). And PRIMES is one of those cases where a small improvement in time complexity is VERY significant!
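
For contrast, here is a sketch of the kind of randomized test AKS was competing with -- a generic textbook Miller-Rabin, not anyone's production implementation. It is fast, but the guarantee is probabilistic: each round lets a composite slip through with probability at most 1/4.

Code:
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin randomized primality test. 'rounds' independent witnesses
    drive the error probability below 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # 'a' witnesses that n is composite
    return True                   # probably prime

print(is_probable_prime(2**61 - 1))   # a known Mersenne prime -> True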

Kybo_Ren wrote:
Having 8GB of very fast RAM versus 1MB of very slow RAM may be necessary to get the huge performance advantage.

Well, the "slow ram" can be accounted for with an additional constant factor like in the Euclidean TSP example. However, you do bring up a good point -- if the time complexity difference between two algorithms is log(n) and one computer is slower by a factor of say 1024, then we would have to increase the input size n by an amount that probably won't fit into the main memory of either system. There has been quite a bit of work on very large datasets and databases during the past twenty years, which I'm less familiar with than either computing theory or AI, so it isn't clear whether to me which would be the better system.

Kybo_Ren wrote:
That said, I think I'd rather have modern hardware (i.e. lithography). A modern FPGA, let alone an ASIC or full-custom, would dominate older hardware (even with the same algorithms). Then again, without modern CAD algorithms tech mapping and placement/routing would be entirely infeasible.

I was wondering if someone would notice the catch-22 in this hypothetical: automated software is a critical factor in the design of computing hardware. Regarding ASIC boards, which I believe came around during the late 70's, would this change the analysis?


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Fri Sep 10, 2010 10:30 am 
Bitchin' Fast 3D Z8000*

Joined: Tue Jun 29, 2004 11:32 pm
Posts: 2555
Location: Somewhere between compilation and linking
Adak wrote:
The 1985 program, on modern hardware, would destroy the best programs today (Rybka, Shredder, etc.), running on Apple IIe's.

Search depth is "everything" in chess programming. Chess programs have not improved nearly as much since 1985, as the hardware has.

I won't disagree with you that computer hardware has (in general) improved more than chess programs over the past twenty years. After all, computer hardware is a bazillion dollar industry whereas chess programming teams are quite a novelty.

I take deep offense to the notion that "search depth" is everything in chess programming and the notion that you need faster hardware to do it! =)

I've read Hsu's book on Deep Blue, Jonathan Schaeffer's book on Chinook (checkers) and a couple of writings by Ray Kurzweil on the subject (which I can't seem to locate at the moment). Let's take the ultimate chess machine, Deep Blue, which lost to Kasparov in '96 and beat him in '97. It had 480 VLSI "chess chips" and was able to evaluate 200 million positions per second. This is pretty common knowledge, but did you know that "Hsu received his Ph.D. in Computer Science from Carnegie Mellon in 1989 for architectural work on chess machines and research on parallel alpha-beta search algorithms"? That's right -- research on parallel alpha-beta search algorithms! Those parallel alpha-beta search algorithms are what allowed Deep Blue to search "interesting lines of play" to such fantastic depths (IIRC >40 plies in some cases). Using just a pure brute-force search, this monster of a machine would have only been able to search to a depth of 5 or 6 plies!

Consider that modern computer systems are able to perform at the level of Deep Blue with a fraction of the computational power! How are they able to do this? Better algorithms... of course!!! Anyways, there is plenty more to say on the subject, but I need to run for now. We'll pick this up later.
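
For anyone who hasn't seen it, here is a bare-bones sketch of plain (serial) alpha-beta pruning -- just the textbook idea, not Deep Blue's parallel variant. With good move ordering it roughly doubles the depth you can reach for the same node budget, which is exactly why algorithmic work and raw speed multiply rather than compete. The evaluate/legal_moves/apply_move callbacks are hypothetical placeholders for whatever game representation you use.

Code:
def alphabeta(state, depth, alpha, beta, evaluate, legal_moves, apply_move):
    """Negamax-style alpha-beta search (textbook serial version).
    evaluate/legal_moves/apply_move are hypothetical game-specific callbacks;
    evaluate scores the position from the side to move's point of view."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    best = float("-inf")
    for m in moves:
        # opponent's score, negated, searched with the window flipped
        score = -alphabeta(apply_move(state, m), depth - 1,
                           -beta, -alpha, evaluate, legal_moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break   # cutoff: the opponent already has a better option elsewhere
    return best

Ordering the move list so the likely-best moves come first is what makes the cutoffs bite, and that ordering heuristic is just as much "algorithm" as the search itself.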

Adak wrote:
Big win for the hardware team! 8)

Bah! Maybe we can set up an experiment sometime. My cellphone is going to whoop your giga-whatever computer's ass! :P


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Fri Sep 10, 2010 7:51 pm 
Bitchin' Fast 3D Z8000*

Joined: Tue Jun 29, 2004 11:32 pm
Posts: 2555
Location: Somewhere between compilation and linking
I found the Ray Kurzweil link.

Deep Fritz Draws: Are Humans Getting Smarter, or Are Computers Getting Stupider?
October 20, 2002 by Ray Kurzweil


 Post subject: Re: Hardware vs Software/Algorithms
PostPosted: Fri Sep 10, 2010 8:33 pm 
Bitchin' Fast 3D Z8000*

Joined: Tue Jun 29, 2004 11:32 pm
Posts: 2555
Location: Somewhere between compilation and linking
DJSPIN80 wrote:
I find it very difficult to quantify the difference between teams of programmers vs. a compiler.

What I was getting at by saying that a compiler could replace a team of programmers is that a modern compiler is the distillation of thousands of labor-years of knowledge & research, implementation, and testing. You would need a large and very good team of programmers to replicate the work of a state-of-the-art compiler. By maturity, I also meant that older compilers often didn't compile code correctly. We're not talking about inefficient object code -- just plain wrong code! We tend to take for granted that modern compilers work right out of the box, but think back to 2002-3 when everyone was dealing with goofed-up C++ compilers. Dr. Dobb's ran an article on the problem. I HATED Visual Studio during this time period.

DJSPIN80 wrote:
Most compilers, IMO, tend to be written for the common denominator (think camp 2). Computing has largely been a linear exercise where we tend to problem-solve for problems of a different permutation on the same problem (e.g., blogs, CRM's, data warehouses, video games). I think that compilers have largely been optimized for this general purpose use, and there are very few 'new things' that crop up. I mean, let's face it, how much compiler optimization could one do to improve the performance of Microsoft Word or that web app you wrote? Most compilers make do with the behaviors and patterns of everyday usage.

While there are specialized compilers (e.g. for embedded or real-time systems), in general, a compiler doesn't know whether the app it is compiling is MS Word or some specialized app. All of the optimizations that improve a specialized app would improve Word -- you would just be less likely to notice them in an event-driven application like Word.

There are a large number of very complex optimizations that a compiler can perform:
http://en.wikipedia.org/wiki/Compiler_optimization

It is more or less a general optimization problem (minimize the number of instructions, number of fetches, cache lines, etc.), and like many optimization problems, some of the optimizations performed by compilers are just too complex to be done by hand. Register allocation and instruction scheduling, for example, both contain NP-hard subproblems. If a problem is NP-hard, then you don't have too many options: approximation, randomization, heuristic search, etc.
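
As a flavor of the kind of heuristic a compiler falls back on, here is a toy Chaitin-style graph-coloring register allocator (hugely simplified; the variable names and interference graph are made up for the example). Variables that are live at the same time "interfere" and need different registers; anything that can't be colored with k registers gets spilled to memory.

Code:
def color_registers(interference, k):
    """Greedy graph-coloring register allocation (toy Chaitin-style sketch).
    interference: dict mapping each variable to the set of variables live at
    the same time. k: number of machine registers. Returns var -> register,
    with None meaning 'spill to memory'."""
    # Simplify phase: repeatedly remove a node with fewer than k neighbours.
    graph = {v: set(ns) for v, ns in interference.items()}
    stack = []
    while graph:
        node = next((v for v in graph if len(graph[v]) < k), None)
        if node is None:                                    # everything has >= k neighbours
            node = max(graph, key=lambda v: len(graph[v]))  # potential spill candidate
        stack.append(node)
        for other in graph[node]:
            graph[other].discard(node)
        del graph[node]
    # Select phase: pop nodes back and give each the lowest free register.
    assignment = {}
    for node in reversed(stack):
        used = {assignment[n] for n in interference[node] if n in assignment}
        free = [r for r in range(k) if r not in used]
        assignment[node] = free[0] if free else None        # None = spill
    return assignment

# Toy example: a, b, c are all live together; d only overlaps c. With k=2,
# one of the triangle's variables gets spilled.
print(color_registers({"a": {"b", "c"}, "b": {"a", "c"},
                       "c": {"a", "b", "d"}, "d": {"c"}}, k=2))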

