Really? I haven't followed hardware closely for quite some time now. Are there plans for incorporating FPGAs into mainstream hardware? That would raise all kinds of interesting questions/issues in a multi-programming environment.
There have been plans for this for years, but none have borne much fruit. People have tried everything from FPGAs on PCMCIA cards to FPGAs directly in processor sockets to hard PowerPC cores embedded directly in FPGA logic.
I didn't think about it before, but I guess the physics boards that started appearing a few years back are one example. I'm assuming they're ASICs, but I'm not sure.
I'm not so convinced this is really the way forward. This only exacerbates the problems VLIWs present. VLIWs haven't been adopted on a large scale, but it's not really because of the compiler's obligation to statically schedule everything; Intel's IA-64 compiler is quite good at this. The most serious problem is that the notion of binary-compatible software goes right out the window, even among CPUs that share a common instruction set! NISC has the exact same problem here, only worse: now you don't even have a common instruction set!
Prior to this moment, I haven't given this much thought... so be gentle if I really screw this one up. =)
Binary compatibility is a nice feature, but it does have at least one drawback. The [probably not so] obvious drawback is that commercial software may not be compiled or written to take advantage of many of the features of your particular processor [remember, be gentle. =)]. I'm sure there are reams of interesting academic and industry papers dealing with this issue, and granted, most applications will not benefit greatly, but there are special cases in which having some extra muscle is really nice (e.g. media encoding, matrix calculations, etc.). Assuming this is true, we need to ask whether binary compatibility is really all that beneficial in this day and age. Why shouldn't a decent installation program be able to determine the type of machine and then install an appropriate binary?
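As a rough sketch of what I have in mind (Linux-specific, the binary names are made up, and the flag parsing is deliberately simplistic), an installer could pick a build at install time roughly like this:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical installer snippet: scan /proc/cpuinfo for a feature
     * flag and pick which prebuilt binary to install.  The file names
     * are invented; a real installer would go on to copy the file.    */
    int main(void)
    {
        const char *binary = "myapp-generic";   /* safe fallback */
        char line[1024];
        FILE *f = fopen("/proc/cpuinfo", "r");

        if (f) {
            while (fgets(line, sizeof line, f)) {
                /* the "flags" line lists the CPU's supported features */
                if (strncmp(line, "flags", 5) == 0 && strstr(line, " sse2")) {
                    binary = "myapp-sse2";      /* vectorized media/matrix paths */
                    break;
                }
            }
            fclose(f);
        }

        printf("installing %s\n", binary);
        return 0;
    }

The same idea extends to installing whole per-microarchitecture builds, which is basically what I'm asking about: is the convenience of one universal binary still worth what it costs?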
It's not really a lie. We're used to an ISA as a specification for interacting with the CPU's control FSM, which in turn manages the microarchitectural details necessary to actually effect your command. In this case, there isn't much need for a decoder and control logic: nearly all of the microarchitectural details are handled by the compiler, which encodes its commands directly as control words.
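A toy way to picture the difference (this has nothing to do with real NISC tooling; the field names and widths are invented):

    #include <stdint.h>
    #include <stdio.h>

    /* Invented control-word layout, just to show where the decoding work lives. */
    typedef struct {
        unsigned alu_op    : 4;   /* which ALU operation to perform */
        unsigned src_a     : 5;   /* register-file read port A      */
        unsigned src_b     : 5;   /* register-file read port B      */
        unsigned dest      : 5;   /* write-back register            */
        unsigned mem_write : 1;   /* memory write enable            */
    } ControlWord;

    /* Conventional machine: a decoder (hardware FSM/ROM in reality)
     * expands a compact opcode into the control signals.            */
    ControlWord decode(uint16_t opcode)
    {
        ControlWord cw = {0};
        switch (opcode & 0xF) {
            case 0x1: cw.alu_op = 1; break;   /* e.g. ADD */
            case 0x2: cw.alu_op = 2; break;   /* e.g. SUB */
        }
        cw.src_a = (opcode >> 4) & 0x1F;
        cw.src_b = (opcode >> 9) & 0x1F;
        return cw;
    }

    /* NISC-style: no decode step -- the compiler already emitted the
     * control word, so the "program" is just a stream of these.      */
    int main(void)
    {
        ControlWord from_compiler = { .alu_op = 1, .src_a = 3, .src_b = 4, .dest = 5 };
        ControlWord from_decoder  = decode(0x0091);
        printf("%u %u\n", (unsigned)from_compiler.alu_op, (unsigned)from_decoder.alu_op);
        return 0;
    }

Same datapath either way; the only question is whether a decoder fills in the control word at run time or the compiler spells it out ahead of time.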
I was suggesting that it could be considered a white lie because an ISA serves as an interface between the hardware and software. Even if the hardware is programmable (which I don't believe is the case for NISC), there has to be an explicit interface somewhere; otherwise, how are the compiler folks going to know what to do? Sure, the decoder and instruction set may no longer exist, but the hardware-interfacing role still exists, which is why I called it a white lie. In a NISC architecture, do the compilers still generate "object code" or some other intermediate code?
From a theoretical CS perspective, I can't imagine why this architecture isn't reducible to a standard Turing machine. From a computer architecture perspective, are there compelling reasons for adopting it? Does eliminating the decode step translate into a potentially large gain in performance? Will compilers be able to customize branch prediction to the individual application to such a degree that it makes sense to adopt this architecture? I read on Wikipedia that designing an ISA is very costly, but I wonder whether that is due to the actual complexity of designing a decoder. Does most of the cost saving come from simply shifting hardware development to software (whose cost can also be substantial)?