Introduction
Intel first introduced Hyper-Threading in 2002, on the Xeon MP "Foster" and the 3.06 GHz Pentium 4 "Northwood". The main purpose of this proprietary technology was to improve processor utilization through increased parallelization. With the latest Core i7 980X and its six physical cores, Hyper-Threading reaches 12 logical cores in a single PC. But all of this raises legitimate questions: can software really take advantage of eight or more parallel threads? Is Hyper-Threading a problem or a boon for efficiency? Wouldn't it be better to stick with six physical cores, rather than risk performance drops from poorly optimized applications that spread workloads across logical cores for no reason? The Gulftown core integrates Intel's Hyper-Threading technology to provide 12 virtual cores, but only a few specific applications show better performance from it.
History of Hyper-Threading
There was a real need for Hyper-Threading. Since the Pentium 4 had a rather long instruction pipeline, it was imperative to push frequencies as high as possible and keep that pipeline busy. Intel therefore duplicated the units that store the architectural state, so that a core with Hyper-Threading appears to the operating system as two logical processors. The scheduler could dispatch two threads or processes simultaneously, and as long as Intel's branch prediction unit worked properly, instructions could be loaded and executed efficiently. For the Pentium 4, the benefits were mainly improved responsiveness on single-core systems and small performance gains in applications, at least in the desktop segment. In servers, where parallel processing is key, Hyper-Threading had a greater impact. Applications written for desktop users were not optimized to take advantage of parallel processing, because the necessary hardware was not there. Initially, Hyper-Threading earned a bad reputation because it failed to improve performance in titles that ran in a single thread.
With the arrival of Core 2, Hyper-Threading disappeared. Intel, however, decided to bring it back with the Nehalem microarchitecture, which underpins all Core i3, i5, and i7 CPUs available today, including the six-core Core i7 980X. The situation today is very different from Hyper-Threading's debut. For a start, software developers are much more in tune with the hardware ecosystem, so it is rare to find a popular title that cannot benefit from parallelism or is not optimized for threading. In addition, AMD currently cannot pressure Intel in the high-performance segment, so Hyper-Threading has become a value-added, differentiating feature of the lineup rather than a key innovation. With six physical cores, does Hyper-Threading really make sense?
How Hyper-Threading works
The Pentium III had a 10-stage instruction pipeline; the Pentium 4 lengthened it to 20 stages with the Willamette (180 nm) and Northwood (130 nm) cores. The Prescott core (90 nm) had a 31-stage pipeline, and the last of its line, the Cedar Mill core (65 nm), kept that structure. The basic idea behind a pipeline is to split instruction execution into distinct steps and line them up (hence "pipeline") in order to obtain higher throughput, especially at high frequencies. However, if the pipeline is empty, or contains the wrong instructions, performance falls. This is why program branching matters: it describes a program's ability to "branch", and the CPU handles it with a component called the branch prediction unit, whose job is to figure out which instruction is next in line for execution. The 31-stage pipeline of the Prescott and Cedar Mill cores depended particularly heavily on accurate prediction. That is why Intel invented and added a "replay unit", which lets the processor intercept operations dispatched by mistake and re-issue them once the correct conditions for execution can be guaranteed. A side effect of the replay system was a slowdown in some applications with Hyper-Threading enabled, because it consumed execution resources that were therefore taken away from the secondary thread. At the time, the value of Hyper-Threading had to be questioned, since it was sometimes a benefit and sometimes a liability.
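The trade-off described above can be sketched with some back-of-the-envelope arithmetic. The model below is a deliberately simplified illustration (it ignores superscalar issue, hazards, and partial flushes): an ideal k-stage pipeline retires N instructions in k + N - 1 cycles instead of N * k, but a mispredicted branch costs roughly a full refill, so the penalty grows with depth.

```python
def pipelined_cycles(n_instructions, n_stages):
    """Cycles to retire n_instructions through an ideal pipeline:
    the first instruction takes n_stages cycles to drain through,
    then one instruction retires per cycle."""
    return n_stages + n_instructions - 1

def unpipelined_cycles(n_instructions, n_stages):
    """Without pipelining, each instruction occupies all stages serially."""
    return n_instructions * n_stages

def mispredict_penalty(n_stages):
    """Toy model: a mispredicted branch drains and refills the whole
    pipeline, so the penalty scales with its depth -- the reason a
    31-stage Prescott leans so hard on accurate branch prediction."""
    return n_stages

n = 1000
# Pentium III (10), Willamette/Northwood (20), Prescott/Cedar Mill (31)
for stages in (10, 20, 31):
    print(f"{stages:2d} stages: {pipelined_cycles(n, stages)} cycles "
          f"(vs {unpipelined_cycles(n, stages)} unpipelined), "
          f"flush penalty ~{mispredict_penalty(stages)} cycles")
```

The numbers show why deeper pipelines enabled higher clocks yet punished mispredictions harder: throughput barely changes with depth, but every flush costs three times as much on Prescott as on the Pentium III.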
The current implementation of Hyper-Threading is similar to the one just discussed, at least in the sense that each physical core appears to the operating system as two logical processors. If execution resources are not being used by the current operation, the processor's scheduler can run something else to increase efficiency and avoid stalls caused by incorrect branch predictions, cache misses, or other data dependencies. On the hardware side, all that is needed to support Hyper-Threading, beyond a suitable CPU, is a platform with a supporting BIOS and an operating system compatible with Windows NT or later. In the past, Hyper-Threading has delivered extra performance, but it has also increased power consumption (although, according to Intel, it adds only a small amount of die area). Heavily threaded applications and workloads generally gain more efficiency from additional cores and threads than the average software, which is less optimized for multiple threads.
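The latency-hiding idea can be illustrated, by analogy, with ordinary software threads. This is only a sketch of the principle, not of the hardware: here the "stall" is a sleep rather than a cache miss, and the overlap happens in the OS scheduler rather than inside a core, but the payoff is the same shape: while one thread is blocked, another makes progress, so two stalls overlap instead of adding up. Note that `os.cpu_count()` reports logical processors, which is how a Hyper-Threaded CPU presents itself to software.

```python
import os
import threading
import time

def stalled_then_work(results, idx):
    """Simulate a thread that stalls before doing useful work.
    While it is blocked, the other thread can run -- the same idea
    SMT applies to a core's idle execution resources."""
    time.sleep(0.2)                      # the "stall" (stands in for a memory wait)
    results[idx] = sum(range(100_000))   # the "work"

results = [None, None]
start = time.perf_counter()
threads = [threading.Thread(target=stalled_then_work, args=(results, i))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The two 0.2 s stalls overlap, so total time stays close to 0.2 s
# rather than the 0.4 s a serial run would need.
print(f"logical processors reported: {os.cpu_count()}")
print(f"elapsed: {elapsed:.2f}s")
```

The same overlap is exactly what a second hardware thread buys when the first one is waiting on memory; and, as with real Hyper-Threading, if neither thread ever stalled there would be little to gain.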