Quote:
Originally Posted by cody
Here's the thing. A slow processor (or "dog") would prevent you from watching a 1080p movie because it would be pegged at 100% for the whole movie, resulting in a choppy video. A slow processor could prevent you from successfully burning a CD while watching YouTube, because it'd peg the processor. In both situations, you'd see a pegged processor and the computer would be slow as a result. Now try those same tasks with a fast processor and the CPU won't be pegged as often and as a result, the computer will function better. I mean that's the case, right?
And yes, if you tell somebody it's funny that they think something, it is condescending, regardless of tone.
You're talking about real-time applications.
Playing a movie means reading data, decoding it, and drawing it to the screen approximately 30 times per second. Each iteration of that process has X instructions that need to be executed, so that's 30X instructions per second.
If you have a fast processor, it can do, let's say, 1000X instructions per second, which means it spends the vast majority of the time playing that movie doing nothing... each frame is rendered well before it has to be done in order to meet the timing requirement of the video. Lots of time is spent just waiting until it's time to draw the next frame.
But on a slow processor that can only do around 30X instructions per second, by the time a frame is done being rendered, it's already time to start the next frame. So there is no waiting period between the frames of the movie where the processor gets to just sit around idle. This processor will show up as 100% utilized. And if the processor ends up being unable to complete all the instructions in time to draw a frame on schedule, it has to skip ahead in order for the movie to play on time, which results in dropped frames and jumpiness.
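To make that concrete, here's a minimal sketch (in Python, with made-up frame counts and rates, not any real player's code) of what a playback loop's timing looks like. If a frame finishes early, the loop just sleeps until the next deadline (idle time); if it finishes late, there's nothing to sleep and frames get dropped:

```python
import time

FRAME_INTERVAL = 1.0 / 30        # 30 frames per second -> ~33 ms per frame

def decode_and_draw_frame(frame_number):
    # Stand-in for the real work: read, decode, and draw one frame.
    pass

next_deadline = time.monotonic()
frame = 0
while frame < 300:               # play ~10 seconds of video
    decode_and_draw_frame(frame)
    frame += 1
    next_deadline += FRAME_INTERVAL
    slack = next_deadline - time.monotonic()
    if slack > 0:
        # Fast CPU: finished early, so it sits idle until the next frame is due.
        time.sleep(slack)
    else:
        # Slow CPU: already past the deadline, so there's no idle time (100% busy),
        # and if it's far enough behind it has to skip frames to stay on schedule.
        frames_behind = int(-slack / FRAME_INTERVAL)
        frame += frames_behind
        next_deadline += frames_behind * FRAME_INTERVAL
```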
Writing a CD or DVD is similarly a real-time operation, in that the computer has to be able to send data to the disc drive fast enough that the write head has data to write as the disc spins past it. If you run out of data because the CPU isn't fast enough to meet the input schedule of the burner, you get a coaster out of the disc drive.
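Burning is the same idea, roughly: the drive drains a buffer at a fixed rate and the CPU has to keep refilling it. Here's a toy model (made-up rates, nothing like a real burner's firmware) of what happens when the CPU can't keep up:

```python
import collections

buffer = collections.deque(range(8))   # data buffered before the burn starts
WRITE_RATE = 4                         # chunks the drive consumes per tick
cpu_rate = 3                           # chunks a too-slow CPU can prepare per tick

for tick in range(100):
    buffer.extend(range(cpu_rate))     # CPU produces data
    underrun = False
    for _ in range(WRITE_RATE):        # drive consumes data
        if not buffer:
            underrun = True            # nothing left for the write head
            break
        buffer.popleft()
    if underrun:
        print("Buffer underrun at tick", tick, "-> coaster")
        break
```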
In general, slower processors will show a higher CPU % on real-time operations because they're simply not getting done with as much time to spare as faster processors. CPU percentage is basically calculated by measuring how much of a sampling window the processor spends idle, waiting for the next instruction to run, and reporting the remaining (busy) fraction of that window as a percentage.
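As a toy calculation of that idea (not how any particular OS actually implements its counters), it's just busy time divided by the window length:

```python
def cpu_utilization(busy_seconds, window_seconds):
    # Fraction of the sampling window spent running instructions;
    # everything else in the window was idle time.
    return 100.0 * busy_seconds / window_seconds

# Fast CPU: a frame takes 1 ms of work out of each ~33 ms frame interval.
print(cpu_utilization(0.001, 0.033))   # ~3% -- mostly idle
# Slow CPU: each frame takes the full 33 ms, so there's no idle time left.
print(cpu_utilization(0.033, 0.033))   # 100% -- pegged
```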
On non-real-time operations (which are the vast majority of operations), the CPU will be at 100% for both a fast and a slow processor. A non-real-time operation is run as fast as possible, meaning there's no timing schedule for how often a piece of the operation must be done; you just want it done ASAP. For an operation like that, there are X instructions, and while those instructions are running the CPU is at 100%. Most operations are very short (clicking a button, dragging a window, etc.), so you can't really see the difference between a fast and a slow processor. But try something like encoding an MPEG and you'll see a massive difference.
Let's say an encoding job takes 10,000X instructions... on the fast 1000X-per-second processor, you'll be "stuck" at 100% for 10 seconds... but on the slow 30X-per-second processor, you're waiting around 5.5 minutes for the job to finish. Both CPUs are pegged at 100% the whole time. *Any* CPU doing this job would be at 100%... it's just a matter of how long it's at 100%.
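The arithmetic behind those numbers, in case it helps (the X just cancels out):

```python
job = 10_000        # job size: 10,000X instructions, measured in units of X
fast = 1_000        # fast CPU: 1000X instructions per second
slow = 30           # slow CPU: 30X instructions per second

print(job / fast)         # 10.0 seconds at 100%
print(job / slow / 60)    # ~5.5 minutes at 100%
```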
This is the reason why CPU benchmarks are done on long-running, intensive operations. You never see a comparison of real-time operations, like decoding a movie. They tend to do stuff like MP3 encoding, or launching 100 copies of Office, or other similar stuff. A half-assed way to evaluate performance is to check frame rates in 3D games... those games are real-time operations like watching a movie, but they can use the spare time left over on fast processors to render additional frames each second, making the game smoother. More frames per second implies a faster processor.
Anyway, the core of my point is that CPU performance shouldn't be evaluated based on CPU %, since that number is just a contrived average of how much time the CPU was at 100% vs. 0% over some sampling period. To really sort out how fast a processor is, you need to know how many operations per second it can run and what sort of parallelism it's got. Not that the PC world bothers to evaluate chips this way, but the unit for that is FLOPS (floating-point operations per second), usually quoted as TFLOPS, trillions of floating-point operations per second.
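A back-of-the-envelope peak-throughput estimate looks something like this; the numbers are invented for illustration, not the specs of any real chip:

```python
cores = 4                 # degree of parallelism
clock_hz = 2.5e9          # cycles per second per core
flops_per_cycle = 8       # floating-point ops each core can retire per cycle

peak = cores * clock_hz * flops_per_cycle
print(peak / 1e12, "TFLOPS")   # 0.08 TFLOPS for these made-up numbers
```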
CNET can be right in calling that processor "a dog" while you personally have very little experience with pegging out the CPU. It just means you don't use that processor for any long-running non-real-time operations. Someone who does a lot of multi-tasking, MPEG encoding, or video editing may absolutely hate life with it, while you're totally happy, because you only do low-end real-time processing (like watching YouTube) or short non-real-time operations (like browsing the web). It's just that processor speed is traditionally evaluated on the applications that are CPU intensive, so you're simply not bothered by a slow processor. Consequently, since more and more people are using computers less and less intensively, we're actually seeing slower processors come out... like the Atom for netbooks... that are cheap and just fast enough to do basic real-time stuff like watching an HD movie, and that's it. They would totally suck for DVD ripping... but that's not what they're for. But they're still technically "dog slow", even if they work great for what they're designed for.
Ugh.. how did this turn into a dissertation? Hopefully no tone is being inferred.