Fast Forward: Another x86 Growth Spurt

Comments

Coldrage

Hypothetically speaking, wouldn't performance increase dramatically if they ditched x86 and used a newer architecture designed from scratch?

LatiosXT

Technically, there is no such thing as a native x86 processor these days. Intel took cues from the AMD Athlon line and built a native RISC core with a CISC translation front end when they did Core 2. So we did see a dramatic per-clock performance increase across the x86 architecture as a whole when AMD released the Athlon, but that was about it.

Aside from that, as others have said: x86 is so embedded in our lives that it is the de facto architecture for computers that don't have strict power requirements.

DJSPIN80

Yes and no; it's hard to say.  For example, Itanium's IA-64 architecture is technically superior to x86, but x86 runs circles around it in most computing cases.  On the other hand, ARM's Cortex-A9 is technically superior to Intel's x86-based Atom, since its RISC design delivers better performance per watt on mobile platforms than the Atom does.

It's a tough call.  I do advocate creating a new architecture, simply because x86 is running out of air to breathe.  The problem is the software ecosystem: creating a new ISA isn't as simple as 1-2-3, since you have to account for the vast body of x86 software in existence right now.

Nimrod

I recently invented x87, and it's much, much better and faster and what not.

Black Widow

Don't know computer history, do you? Intel created the 8087 (math coprocessor) in 1980:

http://en.wikipedia.org/wiki/Intel_8087

Biceps

Tom, great article!  May I suggest an idea for a MaxPC white paper?  Would love to learn how the 'True Random Number Generator' works - sounds very cool!

I Jedi

I look forward to what Intel is going to put out on the market over the next few years. Anyone else planning to try to get an Ivy Bridge setup this February? At any rate, great article, Tom.

Coldrage

Why has x86 stayed for so long?

praetor_alpha

Compatibility, decent performance, and low cost.

Also recessions...

routine

Because it would be very expensive to rewrite all the apps that are currently written to run on x86.

Pretty much every application that runs on a PC or Mac today.

praetor_alpha

Apps would not have to be entirely rewritten, if at all. Any program in a language that runs on a virtual machine (Java, .NET, Python, etc.) would get away untouched. Anything else would likely just need a recompile, but hardly anyone would want to fork over the support costs for it. Most open-source apps are already running on multiple architectures.

damicatz

First off, the virtual machine itself has to be ported.  And even with virtual machines, your programs are not 100% immune to platform-specific issues.  After 5 years of programming in Java, I can safely say that "write once, run anywhere" is a myth, thanks to platform-specific bugs and idiosyncrasies and the tendency to use things like SWT that call native APIs through JNI.

Second, unless the program was written from the ground up to be architecture-agnostic (something that takes more man-hours and often cannot be justified from a cost-benefit standpoint), it's more than just a recompile to get it working.

I've ported stuff from x86-32 to x64 before, and it can be a real PITA when dealing with programs that were written with the assumption of a 32-bit architecture.  For example, a programmer will often cast a pointer (memory address) to an int rather than a uintptr_t.  Ints and pointers on 32-bit systems are both 32 bits, but on a 64-bit system pointers are 64 bits while ints are still 32 bits.  Suddenly your program is pulling data from some other area of memory than the one it should, and this can manifest very subtly (such as a function returning incorrect calculations because it's reading data from the wrong area of memory).
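To make that concrete, here's a minimal C sketch of the failure mode (hypothetical code, assuming a typical 64-bit LP64 system; not from any real program):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int value = 42;
        int *p = &value;

        /* The 32-bit assumption: squeeze the pointer through an int.
           On a 32-bit build this round-trips cleanly; on a 64-bit
           build the cast throws away the upper half of the address. */
        int truncated = (int)(intptr_t)p;
        int *bad = (int *)(intptr_t)truncated;

        /* The portable version: uintptr_t is defined to be wide
           enough to hold any pointer on the target platform. */
        uintptr_t stored = (uintptr_t)p;
        int *good = (int *)stored;

        printf("round-trip via int:       %s\n", bad == p ? "ok" : "corrupted");
        printf("round-trip via uintptr_t: %s\n", good == p ? "ok" : "corrupted");
        return 0;
    }

On a 32-bit build both lines print "ok"; on a typical 64-bit build the int round-trip prints "corrupted", because stack addresses no longer fit in 32 bits.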

It's even harder when you are talking about a non-x86 architecture.  A lot of programs use inline assembly for optimization.  x64 assembly is mostly a superset of x86 (with a lot of legacy features that people haven't used for years removed, such as segmented memory), and it's relatively easy to port x86-32 assembly to x64 assembly compared to, say, porting x86 assembly to ARM assembly (which basically necessitates a complete rewrite of the code).  There is also the matter of optimization; RISC processors like ARM are far more sensitive to how programs are written, and programs that aren't optimized for the architecture suffer much greater performance hits than they do on x86.
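As a rough illustration of how inline assembly pins code to one instruction set, here's a sketch in C using GCC's extended-asm syntax (the popcnt fast path is a made-up example, and it assumes a CPU new enough to have the POPCNT instruction):

    #include <stdint.h>
    #include <stdio.h>

    /* Count the set bits in x.  The x86 fast path is inline assembly,
       which means nothing to an ARM compiler; porting it means
       rewriting it against a completely different instruction set,
       so every other architecture takes the plain-C fallback. */
    static uint32_t count_bits(uint32_t x)
    {
    #if defined(__x86_64__) || defined(__i386__)
        uint32_t result;
        /* Assumes POPCNT support (Nehalem-era and later); older x86
           chips would fault on this instruction. */
        __asm__ ("popcnt %1, %0" : "=r" (result) : "r" (x));
        return result;
    #else
        /* Portable fallback: clear the lowest set bit until none
           remain. */
        uint32_t count = 0;
        while (x != 0) {
            x &= x - 1;
            count++;
        }
        return count;
    #endif
    }

    int main(void)
    {
        printf("%u\n", count_bits(0xF0F0u)); /* prints 8 */
        return 0;
    }

Multiply that across the hand-tuned loops in a real codebase and the cost of an ARM port adds up fast.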

As for open-source software, its open nature allows anyone with free time and some esoteric hardware platform to spend that time porting the program.  Commercial software companies, on the other hand, are not going to spend man-hours and money porting something to a platform no one uses.
