Exclusive: We Build the First Nehalem System. Don't Tell Intel!

27 Comments


n0t_a_n000b

Please tell me this is one of the last socket changes in a while.  Why can't Intel learn from AMD?

N0t a n00b


darkliquids

To respond to one of you:

Nehalem needs (aka is designed for) 3 channels of memory (I'm sure it can be run with a single stick of RAM, or with two, but this is Maximum PC). The specific mobo used in the article is strange: it has 3 channels but 4 RAM slots/banks. So what Gordon says, in a nutshell, is that you may need to run it with, for example, 1GB, 1GB, and 512+512 in the final channel, for the sake of equal RAM amounts per channel.
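A quick sketch of that balancing arithmetic (sizes in MB, using the hypothetical 1GB/1GB/512+512 population above, which is just the example, not a confirmed shipping config):

    # hypothetical DIMM population: 3 channels, 4 slots (sizes in MB)
    channels = {"A": [1024], "B": [1024], "C": [512, 512]}
    per_channel = {ch: sum(dimms) for ch, dimms in channels.items()}
    print(per_channel)  # {'A': 1024, 'B': 1024, 'C': 1024}
    # all three channels match, so interleaving stays balanced
    print(len(set(per_channel.values())) == 1)  # True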

Gordon was definitely apprehensive, and I tend to agree with AgEngineer: Gordon did not want to slam Intel, but basically triple channel is probably terrible for performance.


Keith E. Whisman

Yes, sadly Chef is dead. Disco will never be the same. Isaac Hayes believed in Scientology, so he is with the aliens now if his religion is true.

I wonder if he had any relationship to the Hayes modem?

As for the article, this is like porn for me, only without naked ladies. I love the article.

BTW, I want to get one thing clear: in a triple-channel memory setup, are there six RAM slots, two per channel? Does this mean you have to install two sticks of RAM per channel, requiring six sticks for triple channel? Or would that be six-channel? The podcast confused me, as Gordon Mah Ung stated that there were two slots per channel, and the conversation that ensued confused the heck out of me even further. It would seem that tri-channel would require just three matched sticks of RAM. I'm referring to podcast number 76, BTW.


Volund

You may want to remove the Isaac Hayes comment in the first paragraph, as he died on August 10th.


Very good information though :P


AgEngineer

Don't take my word on it; just look at the performance delta on AMD processors when you have two mismatched DIMM sizes on a dual DRAM controller product. Answer: memory bandwidth drops because... you guessed it, the imbalanced load causes this config to behave like a single DRAM channel.


AgEngineer

If he knew anything about DRAM controllers, or why dual channel is better than single channel, he would never have made the statement that 2M, 1M, 1M would operate at full speed. This configuration will load one controller more than the other two and will reduce performance, not increase it.


As for 3 DIMMs not helping much over 2 DIMMs, this is most likely also not going to change with a BIOS update. Load balancing across any non-power-of-2 address space is a tricky problem to solve in hardware. It can be done, but it requires a lot of logic and is never extremely effective. The best load balancing is done when you can evenly split the address space between all channels such that linear accesses alternate between channels. How do you make that work when you have an odd number of channels?
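Here's a minimal Python sketch of the power-of-2 point (the 64-byte interleave granularity is just an assumed figure for illustration): with 2 or 4 channels the channel select is simply the low bits of the line address, while 3 channels needs a mod-3, which has no cheap bit-mask equivalent in silicon.

    LINE = 64  # bytes per interleave unit (assumed granularity)

    def channel_pow2(addr, n=2):
        # power-of-2 channels: channel = low bits of the line address,
        # which in hardware is just wiring, no arithmetic at all
        return (addr // LINE) & (n - 1)

    def channel_mod3(addr):
        # three channels: channel = line address mod 3; there is no
        # bit-mask shortcut, so real divider/modulo logic is needed
        return (addr // LINE) % 3

    for line in range(6):
        addr = line * LINE
        print(line, channel_pow2(addr), channel_mod3(addr))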



Keith E. Whisman

Hey, this is technology that comes from continuous reverse engineering of technology from the original UFO crash in Roswell, NM in 1947.

Since it's alien technology, don't ding it until you've tried it. I'm sure this technology was used to control the toilets on the UFO.


Hunt3r.j2

If they used computers to control the toilets, a bunch of bad things would happen.


For example: whenever the toilet got bumped while flushing, all of the waste would be dispensed out of the toilet seat and flood the bathroom. And if you attempted to unclog the toilet, you'd have to flush while erasing the toilet, completely redo the plumbing, install a new toilet, and then reattach every piece of the toilet while keeping a bunch of people who ate something with laxatives away from it.


virtualrain

If you want to understand how the clock domains and overclocking will work on Nehalem, check this out...

http://www.nehalemnews.com/2008/05/editorial-nehalem-clock-power-domains.html 



pellier

Can we get a price point?


Yoda1366

It is interesting to note that for all the hype and high prices, the Intel X38, X48, and P45 chipsets have shown only marginal performance gains over the aging P35. There are three possible reasons for this: (1) unimaginative platform design, (2) marketing manipulation, or (3) the core technology has truly reached its limits. Likewise, pushing quad-core and octo-core is a joke when almost no software uses more than two threads. Why burn 135W when a 65W E8400 processor costs only a fraction, has the same real-world performance, halves the heat dissipation, and lowers the electric bill? Similar problems afflict DDR3. High cost, high latency, and for what? The theoretical bandwidth produces almost no real-world result except an empty pocketbook and a sluggish-feeling desktop. It's like starting up a locomotive to move one kid across the playground. Sometimes a bicycle just makes more sense.
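For scale, a rough back-of-the-envelope on that electric bill point (the 8 hours a day and $0.12/kWh are my assumptions, not figures from the article):

    delta_w = 135 - 65                     # wattage gap cited above
    hours_per_day, usd_per_kwh = 8, 0.12   # assumed usage and rate
    yearly = delta_w / 1000 * hours_per_day * 365 * usd_per_kwh
    print(f"${yearly:.2f}/year saved")     # roughly $24.53/year

Under those assumptions the saving is on the order of a couple dollars a month, before counting the reduced cooling load.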


Now, along comes Nehalem. Will putting an integrated memory controller onboard the CPU die revolutionize computing? Not likely. It didn't do much for AMD. But something else might: OPENING IT UP! The brilliant discoveries of overclocking have not come from Intel's marketing department, but from the OCer community at large. Despite a decade of Intel's foot-dragging and disapproval, the OCers have persisted and experimented and discovered the hidden treasures of clock timings, voltage manipulation, and even graphite pencil traces on the mobo. Intel and other hardware manufacturers now sell hundreds of thousands of "Enthusiast" units based on their ability to be "tweaked" and "pushed" and "hacked".


So here's the bottom line, Intel. Keep your new architecture and BIOS as open and flexible as possible. Unlock those multipliers. Expose every setting. The "Nehalem System Interconnect" and "Simultaneous Multi-threading" are only the weapons of the war. What you need are the warriors. To get them, you need to add something else: a connector and a jumper. It's that simple. Put a connector on the mobo that provides a readout of every temperature sensor, clock cycle, millivolt, and fan speed so OCers can put dashboard gauges right on the front of the case. The second thing needed is a jumper: one simple jumper on the mobo with two settings, (1) Locked and (2) Open. "Locked" is for the everyday consumer and the corporate enterprise. "Open" is for the OCer. No restrictions, no guarantees, no warranty; set it at your own risk. Flip that jumper and every setting in the CPU, chipset, and RAM is exposed in all its glorious nakedness. Wake up, Intel. One connector and one jumper and you will have hired thousands of developers around the world for free! Tap into that vast reservoir of OC geniuses and let them do your R&D. With one connector and one jumper, Intel will be the enabler of an evolutionary new technology, and no longer the prison warden of the old. Imagine the innovation, the benchmarks, the forums, the web communities. Imagine the sales!


Keith E. Whisman

Um, moving the memory controller to the die revolutionized the AMD CPU. It made a processor with a lower transistor count and lower clock speed beat out high-performance Intel CPUs with high clock speeds and higher transistor counts for years. History was made with the Athlon 64. And I'm not a fanboy; in fact, I'm an Intel user.

So AMD moving the memory controller from the FSB to the die did make drastic improvements in performance, and I think you're the only one who believes otherwise.

Do you also believe that Bush ordered the 9/11 attacks on NYC? That we never landed on the moon? That the Nazis never really tried to wipe out the Jews?


Wildebeast

I would like to see AMD/ATI back on the cutting edge, too. 

If there were actually some useful optimizations in BTX, they probably should find a way to bring it back, under a new spec.

The way the cost of energy has been going up, I wish the article said what PSU wattage they were using and what the actual expected power usage is... It would be nice to see some added efficiency all the way across the board, instead of just in the 1GHz-or-so super-small range.

(I've been using my 90W laptop instead of my 700W desktop + 20" CRT, and not running the AC unless the heat index is over 90.) :)


Caboose

With possible poor SLI support, this could be a big help to AMD in the GPU department. OR it could spell disaster for Intel's X58 chipset in that fewer manufacturers will use it and instead opt for the nForce 200.


-= I don't want to be dead, I want to be alive! Or... a cowboy! =-


Pixelated

It will not replace Intel's X58 chipset; rather, it's simply Nvidia's way of making money off of every SLI "Enabled" motherboard. The nForce 200 is Nvidia's way of sticking their dick in the middle of something they have no business being in. That is, if there is even any room left on what is sure to be a 6-8 layer mobo as it is. Adding an nForce 200 is sure to make it a 10-layer mobo with way too many traces. It all spells more payouts and/or lawsuits for Nvidia over motherboards that are unstable and ridiculously expensive. Why would you want a Bloomfield SLI rig anyway, when ATI's cards are a better fit and will work without jury-rigging two 1000W PSUs into your case for a GTX 280 SLI system?


FrancesTheMute

I'm probably wrong about this, but I get the impression that the nForce 200 will be an addition to the board, not a replacement for the X58.  I envision it to be akin to putting a separate SATA RAID controller on the board but still leaving the native one that goes with the chipset.  So the board will have both the X58 and an nForce 200. 


Skiplives

The nForce 200 is a chip that connects to the northbridge and delivers another two x16 links of PCI-E 2.0.

One may ask why you need the chip at all if the new northbridge is pretty much just a PCI-E controller (the southbridge handles every other type of connection). The reason is that SLI only works on Nvidia chipsets, so the SLIed cards need the nForce 200 to operate.

____________________ 

That's my story, and I'm sticking to it.


Bender2000

New quad-core CPU, GTX 280 OC... the DM2008 is already obsolete. You may need to build new benchmark systems again.


Cache

Any word from any of the mainstream aftermarket cooler makers as far as support for the new socket? Don't get me wrong, I know the stock cooler will work just fine, but it's hardly going to be as good as it could be, or as aesthetically pleasing in my case.


On a personal note, I like the new layout because it does buy the socket plenty of real estate around it. But who assigned the SATA ports? Bad karma points, Intel. On the whole, the new board looks good, but I can't say that I have any use for one PCI-E x1 slot, let alone two. I see no PATA interface, and can I be the first to say "THANK YOU!" for that? Now if only they'd give me a couple more USB headers in there, and move the SATA ports to someplace, ANYPLACE, that might actually be useful for people.


FrancesTheMute

I'm curious why you don't like the SATA port placement. Personally I think it looks fine: far to the right, nothing blocking it, and not those stupid horizontally placed ones they show on the Penryn board. I have those on my board now and I hate them.


Cache

Really?  What's wrong with the horizontal ones, if you don't mind my asking?

I personally hate having all the SATA ports spread out; I don't mind them in a straight line (it does help with zip-tying them for neatness). What bothers me is that, at least from the pic shown, the bottom PCI-E x16 slot would block off the SATA ports if a full-length card were used. Sure, you still have 4 to work with, but I'm currently at 5 on my current system. Heck, for that matter, why do we only get 6?


ghot

At least SOMEONE besides ASUS put the RAM and vid card slots facing the same direction... 'bout f***ing time :/

Looks like Intel is still playing the "if you want our new chip, you're going to have to buy all new hardware and software" game... that's just sad... three yachts isn't enough, or am I behind in the count? :/


Personally, I hope AMD knocks the stuffing out of Intel... at least they try to remain backwards compatible.

I still think they should just make CPU cards, like... well, a video card, and dispense with forcing people to toss out perfectly good hardware every time they release a new proc.


xentreos

Don't be stupid; LGA775 has been around since 2004. In the meantime, AMD's Socket 939 (also around since 2004) was replaced by AM2 in 2006, which is being replaced by AM2+. Yes, AM2+ is compatible with AM2, but the point still stands that AMD's sockets change far more often. Remember 754 before 939? That lasted about a year.


Yeah, I'd most likely prefer AMD over Intel myself if they came out with a chip that could compete. But you don't have to rail on Intel for upgrading their socket.


killerxx7

Look at the low FSB!



Strongbad536

So are we overclocking by multiplier now? And look at how low the core voltage is! Plus, the horizontally facing RAM will help increase airflow from left to right. And CPU-Z recognized the 8 threads. Good stuff, guys.



n0b0dykn0ws

I can't wait until later this year to see full benchmarks.

On-chip memory controller, tri-channel DDR3, and CrossFire or SLI, pretty as you please.

n0b0dykn0ws
