Have we gone past the limit of how fast we can design a CPU for our computers, given that overheating is a problem?

Please note: by "fast" I mean the speed a CPU can operate at, for instance in GHz, not how quickly we can build one from the planning stage.

In the design of CPUs, whether they are made by Intel, AMD, or whoever else (e.g. Motorola), have we gone past the limit of how much heat the materials we use to build them can withstand, given that we need to invest in add-on CPU cooling systems?

Argument:

The need to build faster CPUs is a good idea, but the materials and methods we use to build them have their limits. Take, for instance, the metal interconnects inside a CPU: they are too small to be seen and are embedded in the CPU's plastic moulding, where they can overheat. If a metal channel that small can overheat because we are pushing past the limit of how many electrons can travel through it, then we need to invent a better material to handle faster speeds.

For instance, we could invest in the idea of a photon-based CPU, which uses light to carry the same binary signals through fiber optic channels to devices on the motherboard, but it would have the bottleneck problem of converting light back into an electron-based signal. On the issue of heating, I don't think photons emit much heat in single fiber optic strands compared to the electrons that traverse the microscopic electrical channels in current CPU designs. I do not know if anyone has tried to make a CPU that uses photons. However, if anyone has developed a photon CPU, the next problem is: would it be feasible as a CPU for a common PC? This question comes up because of the size and space required to build a photon CPU that works the same way as our electron-based designs. We have shrunk computers to very small sizes as technology has evolved, but a photon CPU may be a problem because of the bottleneck issue.

A photon CPU also opens the door to developing a programming language that uses fuzzy logic, where values between 0 and 1 are assigned to color recognition.
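
To show what I mean by fuzzy values, here is a rough sketch in plain Python (nothing photonic about it, and the color names, wavelengths, and thresholds are entirely made up for illustration): instead of a hard yes/no, each color category gets a membership value between 0 and 1.

```python
# A minimal sketch of the fuzzy-logic idea: instead of a hard 0 or 1,
# each category gets a membership value between 0 and 1.
# The wavelengths and thresholds below are made up purely for illustration.

def redness(wavelength_nm: float) -> float:
    """Return a membership value in [0, 1] for 'how red' a wavelength is."""
    # Treat ~700 nm as fully red, fading to 0 by 600 nm (arbitrary choice).
    return max(0.0, min(1.0, (wavelength_nm - 600.0) / 100.0))

def greenness(wavelength_nm: float) -> float:
    """Return a membership value in [0, 1] for 'how green' a wavelength is."""
    # Peak at ~550 nm, falling off linearly over +/- 60 nm (arbitrary choice).
    return max(0.0, 1.0 - abs(wavelength_nm - 550.0) / 60.0)

for wl in (530, 575, 620, 690):
    print(f"{wl} nm -> red {redness(wl):.2f}, green {greenness(wl):.2f}")
```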

So I am back to the idea that maybe we have gone past the limit of how much heat the materials we use to build CPUs can withstand.

What do you think?
 

Evanski

Raccoon Lord
Forum Staff
Moderator
I'm no expert on CPUs other than that they do a whole lot of math really quickly and get hot (they also have stuff to do with timings and electrical inputs; I'm a novice, not completely daft).

Point being, computers will only ever grow until they become our slaves, realize that's kinda lame, fight for their rights, lose, and kill us all.

Yes, current materials for CPUs are probably reaching their limit, but humans will always find better materials so their toaster can still make toast, connect to the internet, have a screen, and offer a Bluetooth connection for the toast app.
 

woodsmoke

Member
Yeah, we are slowly reaching the limit of miniaturization, which is causing electrons to leak out of their lanes. But there are probably many more things we haven't thought of yet. Currently there is stacking and adding more cores. Software needs to catch up to make use of something like 64 cores, though.

By the way, do GM2 games still only use one core like GM1?
 

Roa

Member
What was said above is pretty much the crux of the situation.

CPUs get faster by adding more, but optimized, instructions. The smaller the lanes within a die, the more instruction paths you can fit, which means more speed per cycle.

The problem is that continually finding ways to make things smaller is proving to be an issue, as paths often short-circuit due to too much energy, poor resistance, or simply being too close together.

Single-threaded speed is hitting a technology limitation at this point, so all you can really do is dump more energy into the clock rate, producing more heat on these tiny paths, or opt for more cores to do work simultaneously.

There is also a trade-off between simplicity and clock speed: the simpler the CPU chip, the faster it can go without losing excessive energy to thermals.
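
For anyone wondering why pushing the clock hurts so much: the usual rough rule of thumb (the standard dynamic-power approximation, not anything specific to this thread) is P ≈ C·V²·f, and higher clocks generally need higher voltage too, so the heat grows much faster than the speed does. A back-of-the-envelope sketch with made-up numbers:

```python
# Back-of-the-envelope: dynamic power scales roughly as C * V^2 * f.
# The capacitance, voltages, and frequencies below are invented for
# illustration only, not real chip figures.
def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    return capacitance_f * voltage_v ** 2 * frequency_hz

base = dynamic_power(1e-9, 1.0, 3.0e9)   # 3 GHz at 1.0 V
oc   = dynamic_power(1e-9, 1.2, 4.0e9)   # 4 GHz, assuming it needs ~1.2 V

print(f"baseline: {base:.1f} W, overclocked: {oc:.1f} W "
      f"({oc / base:.2f}x the heat for {4.0 / 3.0:.2f}x the speed)")
```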
 

RollyBug

Member
Single-threaded speed is hitting a technology limitation at this point, so all you can really do is dump more energy into the clock rate, producing more heat on these tiny paths, or opt for more cores to do work simultaneously.
I think this could be a really good thing. On your last point specifically, we are adding more cores and threads to do work in parallel and, in turn, changing our design process and thinking. Barring some extremely advantageous breakthrough in technology, like breaking into the quantum realm and molding that to our current computational needs (which I doubt will happen soon, if at all), we are perhaps forced into a situation that requires us to branch out and take different avenues of approach.

It's common knowledge that we've moved out of an era where the fastest clock rate reigns supreme and into one where productivity is fueled by some mix of clock speed and the number of cores and threads you can throw at a job. And, of course, not all jobs are designed to be handled like this, but many more can be.

To folks like you and me, software development is quite easy on a smaller scale in the way that our hardware can handle so much design abuse. Write some sloppy code that increases the computation amount by an order of magnitude or more? No problem, your system can handle it, no sweat. Will this last, though? Shifting our focus onto utilizing these extra cores, seeing it as a necessity, should I think drive invention and further change our design thinking. As both a consumer and a creator, I'm excited to see how things change in the near future.
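
To make the "use the extra cores" point concrete, here is a minimal sketch in Python (my own toy example, nothing from the thread; the job size and chunk count are arbitrary) that splits one job across worker processes instead of asking a single core to grind through it:

```python
# Split a big sum-of-squares job across worker processes instead of
# relying on one fast core. Job size and chunk count are arbitrary;
# this illustrates the shape of the approach, not a benchmark.
from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    chunks = 8                       # roughly one chunk per core
    step = n // chunks
    ranges = [(i * step, (i + 1) * step) for i in range(chunks)]

    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(sum_of_squares, ranges))
    print(total)
```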
 