
Thread: Fermi - Dual screen idle temperatures fix

  1. #1
    Join Date
    Nov 2005
    Posts
    1,203

    Fermi - Dual screen idle temperatures fix

    This is a guide to setting up a dual-monitor configuration on a GTX 400 series card, with some background on Nvidia's Fermi architecture. I noticed that attaching a second monitor, especially on a system with an overclocked processor, heats things up considerably: my GPU was idling at 74 degrees. Fermi's design is largely to blame for this. Make sure your case has good ventilation; if it does not, you are likely to run into regular heating problems.

    A dual-monitor setup means using two physical displays, driven by a single computer, to increase the available desktop space. Dual and multi-monitor configurations are now supported by Microsoft Windows out of the box. Setting up two monitors is easy, but it requires either adding a second video card or installing one card that supports two separate physical outputs (a dual-head card). The following instructions are for those with only one video card installed.

    It is easier and cheaper to simply use a regular video card with two video outputs. Most modern graphics cards come with two outputs anyway, except perhaps for low-profile models intended for compact cases. Cards in the higher price range are usually fitted with two digital DVI outputs, designed for connecting LCD monitors, while more mainstream cards carry one DVI and one analog VGA output (a D-Sub connector). Cards with DVI invariably ship with DVI-to-D-Sub adapters, so any monitor can be connected. As a rule, you simply attach the second monitor to the free output and the card recognizes it before the operating system even loads.

    The first important step is to check whether the video card supports the resolutions that are required. This applies especially to displays of 1280 x 1024 and higher: two 1280 x 1024 panels side by side add up to a 2560 x 1024 desktop. Driving that many pixels is demanding for many graphics cards, and even harder when 3D acceleration is involved. The second important step is to decide what kind of multi-monitor configuration you want to achieve. The third is to gather the full specifications of your graphics cards and monitors.
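    If you want to verify what Windows actually reports for each attached display, here is a small sketch of mine using the stock Win32 API (nothing vendor-specific; the API calls are standard, the program itself is just an illustration):

    Code:
    // list_monitors.cpp - enumerate attached displays and their desktop rectangles.
    // Build: cl list_monitors.cpp user32.lib   (or g++ list_monitors.cpp -luser32)
    #include <windows.h>
    #include <cstdio>

    BOOL CALLBACK OnMonitor(HMONITOR mon, HDC, LPRECT rc, LPARAM) {
        MONITORINFOEXA mi;
        mi.cbSize = sizeof(mi);
        GetMonitorInfoA(mon, (LPMONITORINFO)&mi);
        printf("%s: %ld x %ld at (%ld, %ld)%s\n",
               mi.szDevice,
               rc->right - rc->left, rc->bottom - rc->top,   // resolution
               rc->left, rc->top,                            // position in the combined desktop
               (mi.dwFlags & MONITORINFOF_PRIMARY) ? "  [primary]" : "");
        return TRUE;                                         // keep enumerating
    }

    int main() {
        // Walks every monitor in the virtual desktop; two 1280 x 1024 panels
        // side by side show up as rectangles at x = 0 and x = 1280.
        EnumDisplayMonitors(NULL, NULL, OnMonitor, 0);
        return 0;
    }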

    Installing multiple cards makes it possible to connect more than two monitors to one computer. The problem is that the vast majority of motherboards have only one AGP or PCI Express x16 slot. Until recently the only workaround was to buy an additional PCI card, but that bus is noticeably slower than AGP. With the arrival of the nVidia nForce 4 SLI chipset it became possible to buy a motherboard with two PCI Express x16 slots, which means a computer built on such a board can drive four monitors. Keep in mind, though, that the GPU heats up further when overclocked, so make sure your case is well ventilated and your overclocking procedure is sensible.


    Overclocking the GTX 470 :

    Overclocking a component means it will work harder and consume more power, and consequently produce more heat. Before touching any operating frequencies, check that your case can evacuate that heat and that the power supply can handle the increased draw. An excessive overclock can damage your computer, so proceed carefully; neither the author of this article nor anyone referenced in it can be held responsible for any damage you cause to your machine.

    This popularity has led manufacturers to use overclocking headroom as an advertising hook, but do not forget that the practice is considered misuse of the hardware and will most likely void the warranty. As with a CPU, the manufacturer specifies a voltage at which the memory is guaranteed to work properly, and overclocking is often accompanied by a voltage increase to keep the system stable: these memory modules normally run at 2.6 V (the standard is 2.5 V), and some need more than 3 V to express their full potential. The other parameters that matter most are the latencies. Memory chips are built from capacitors that need periodic refresh cycles, and the timings commonly referred to as latencies govern how quickly the chips respond: the lower the values, the higher the performance.

    The two tools best suited for this are :
    • MSI Afterburner 1.6
    • NVIDIA Inspector 1.87


    Download and install the software, then run the tool and check your card's options. Keep an eye on the Show overclocking option: you will notice some settings that differ from the GPU BIOS defaults. After that, unlock the Min limit and you can move the memory slider to overclock your card. Start at a low level first. My card is a GeForce GTX 470, and do not forget to double-check your system's settings.

    You can also try GPUTool for this. Open GPUTool and go to the Clocks tab, where you will see three values: memory clock, shader clock and GPU clock. To start overclocking, click the Find Max button for a given slider rather than increasing it manually. If the card can take a higher memory clock you also gain bandwidth, and raising the GPU clock speeds up other stages such as pixel filtering. Then open GPU-Z and note the values; compare them against the readings you took before the overclock to see the difference.
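    If you would rather log those clocks and temperatures over time than eyeball GPU-Z, here is a tiny sketch using Nvidia's NVML library. This is my own illustration, not part of the tools above, and it assumes a recent driver with the NVML headers installed:

    Code:
    // gpu_log.c - minimal clock/temperature readout via NVML.
    // Build (Linux): gcc gpu_log.c -o gpu_log -lnvidia-ml
    #include <stdio.h>
    #include <nvml.h>

    int main(void) {
        nvmlDevice_t dev;
        unsigned int core, mem, temp;

        if (nvmlInit() != NVML_SUCCESS) { fprintf(stderr, "NVML init failed\n"); return 1; }
        nvmlDeviceGetHandleByIndex(0, &dev);               // first GPU in the system

        nvmlDeviceGetClockInfo(dev, NVML_CLOCK_GRAPHICS, &core);
        nvmlDeviceGetClockInfo(dev, NVML_CLOCK_MEM, &mem);
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
        printf("core %u MHz, memory %u MHz, %u C\n", core, mem, temp);

        nvmlShutdown();
        return 0;
    }

    Run it in a loop from a shell script and you have the before/after comparison the paragraph above describes, in a form you can save.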


    I have a 9400 GT and I greatly increased my performance in games: in GTA IV I used to get 15 fps and now I get 25 to 30, and in NFS Undercover I went from 20-30 fps to 50-60. Gigabyte has been putting real effort into its Super Overclock edition graphics cards, often shipping products with the highest factory overclocks available. Today we have the opportunity to test their new GV-N470SO-13I Super Overclock Edition. The raw specifications suggest this is the fastest factory-overclocked GeForce GTX 470 released to date, and we look forward to weighing the additional performance against the premium charged for high-end graphics.

    This factory-overclocked GeForce GTX 470 ships with a core clock of 700 MHz, almost 100 MHz (about 15%) above the stock 607 MHz specification. The shaders run at 1400 MHz, roughly 200 MHz above Nvidia's 1215 MHz reference. Gigabyte states that this overclock offers 12% better performance than a standard GeForce GTX 470; we will see whether our benchmarks support that claim. Unfortunately, the memory is not overclocked: it runs at the same 837 MHz GDDR5 as Nvidia's reference card. That should at least give us an idea of whether the GF100 GPU is bandwidth-constrained or not.

    A quick glance is all it takes to see that Gigabyte chose to fit its card with a cooler more powerful than the Nvidia reference fan. The cooling system is a triple-fan design, and the company claims that the triple pulse-width-modulation (PWM) fan configuration, equipped with anti-turbulence inclined fins and copper heat pipes, can move 27 CFM of air while staying at 22 dB at idle and 38 dB under load. Beyond the performance-oriented features, the card's outputs are standard GeForce GTX 470 fare: two dual-link DVI outputs complemented by a single mini-HDMI output. As with the reference GeForce GTX 470, remember that you can only use two of the three digital outputs at once.

    Another useful program is UltraMon. It lets you assign applications to specific screens, set separate wallpapers and screensavers for each monitor or stretch one wallpaper across all displays, and mirror the image of the primary display onto the other monitors. Finally, UltraMon adds a taskbar to every display. The program is shareware and the full version costs $40, but that is unlikely to stop anyone who needs support for many monitors, which only a program like this provides.

  2. #2
    Join Date
    Nov 2005
    Posts
    1,203

    Re: Fermi - Dual screen idle temperatures fix

    Overclocking with RivaTuner :


    After installation, RivaTuner makes a backup of your registry and then starts. On the software's first tab (Main) you can see the brand and model of your graphics card, the monitor it drives, the GPU revision, the width of the memory bus and the amount of video memory. Below that you can also see the driver version installed for your graphics card. Next to the driver name there is a small button labelled Customize; click it, then click the first icon to open the low-level system settings.
    A window opens. The first thing to do is tick Enable driver-level hardware overclocking, which gives us access to the overclocking controls. Two sliders then become available:

    • Core clock: frequency of your GPU
    • Memory clock: frequency of your memory

    You can now apply new frequencies to your GPU and your memory (raise them proportionally; see the summary). It is advisable to increase the settings step by step: some cards tolerate a substantial overclock, others do not, so go gently. After each new frequency, test with 3DMark (you will see the performance both before and after the overclock). RivaTuner will also show the limits of your hardware as warning marks on the GPU and memory sliders. These limits are often well calculated, but in some cases they can be exceeded without problems; it is up to you to think and test.

    One last detail: do not tick the box Apply overclocking at Windows startup until you are sure your overclock is stable, because otherwise it could crash the machine every time Windows starts. As with a CPU, you will eventually reach the limits of your graphics card, and certain symptoms will tell you the overclock is too high. If your monitor freezes (the screen locks up and nothing responds), you have pushed the GPU too far, and the only option is a reset. If you see artifacts on screen (black lines, missing textures), your memory is overclocked too far; simply lower the frequency until the artifacts no longer appear.

    Overclocking with PowerStrip :



    Installing this software is very simple; just follow the instructions, and an icon should appear among the shortcuts in the taskbar at the bottom right. This comprehensive tool has many features and lets you create profiles with different settings (colors, frequencies, refresh rates). Some features, such as a shortcut for brightness, can be very useful in games like Counter-Strike, but that is not the purpose of this article.

    To change the graphics card's frequencies, right-click the PowerStrip icon and choose Performance profiles / Configure. You can then easily adjust the card's frequencies with the two green arrows, changing the memory frequency (Memory clock) and the GPU frequency (Engine clock). For Nvidia cards it is also possible to download a small tweak that unlocks a clock-frequency page in the driver's display properties; once activated, it lets you raise the memory and GPU frequencies in the same way as PowerStrip does.

    Now that we know how to change a graphics card's frequencies, one question remains: which frequencies give the best performance? The first rule is never to raise the memory and/or GPU frequency abruptly; the risk of crashing the computer, or even damaging the graphics card, is not negligible. Fortunately, the software usually falls back to the default settings if the requested frequency is too high and the display fails. To overclock the card properly, test the GPU and the memory separately, raising the frequency in steps of about 10 MHz. After almost every change, test in a 3D game or, better, with benchmark software such as 3DMark (3DMark2003 is one of the most popular).
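    If you would rather script the stability check than eyeball a game, here is a minimal sketch in CUDA C. It is not one of the tools named above, just an illustrative consistency test of my own: it runs the same arithmetic-heavy kernel twice and compares the results, since an unstable overclock tends to produce mismatches (the numeric analogue of on-screen artifacts). It assumes a CUDA-capable card and the CUDA toolkit:

    Code:
    // stability_check.cu - run after each frequency bump; compile with: nvcc stability_check.cu
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void burn(float *out, int n, int iters) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float v = i * 0.001f;
        for (int k = 0; k < iters; ++k)      // long dependent FP chain keeps the shaders busy
            v = v * 1.000001f + 0.5f;
        out[i] = v;
    }

    int main() {
        const int n = 1 << 20, iters = 20000;
        float *d, *h1 = new float[n], *h2 = new float[n];
        cudaMalloc(&d, n * sizeof(float));

        // Run the identical workload twice; stable hardware must produce
        // bit-identical results, an unstable overclock often will not.
        burn<<<(n + 255) / 256, 256>>>(d, n, iters);
        cudaMemcpy(h1, d, n * sizeof(float), cudaMemcpyDeviceToHost);
        burn<<<(n + 255) / 256, 256>>>(d, n, iters);
        cudaMemcpy(h2, d, n * sizeof(float), cudaMemcpyDeviceToHost);

        int bad = 0;
        for (int i = 0; i < n; ++i) if (h1[i] != h2[i]) ++bad;
        printf(bad ? "UNSTABLE: %d mismatches\n" : "stable\n", bad);
        return bad != 0;
    }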

    Once the maximum frequencies have been determined, it is advisable to run a complete benchmark to validate the overclock and measure the performance gain. To go higher still, you can consider hardware modifications to improve cooling. A case fan can bring in some fresh air, but for a significant improvement there is no alternative to replacing the heatsink and/or the graphics card's fan (if it has one).

    Risks of Overclocking :

    • Overclocking is, of course, done at the risk of whoever practices it, and the author of this article cannot be blamed for any damage your hardware suffers.
    • To answer the question: yes, overclocking a graphics card is risky. But as with any extreme sport, overclocking by the rules (e.g. following this guide) keeps the risk to a minimum. The author has practiced this sport for many years with no premature component deaths to report.
    • First of all, overclocking a graphics card of course voids the manufacturer's warranty, even though it is admittedly difficult for the manufacturer to know how the card has been used.
    • The main risk of overclocking is cooking the graphics card through overheating. In practice graphics cards are fairly hardy and the chance of damage is low, provided of course that you ensure continuous cooling of the card's RAM and GPU.

    Dual Monitor Setup :

    Dual Monitor Tools is a collection of utilities for managing multiple monitors. The application is completely free (and open source) and, because it is modular, you only need to pick the pieces that handle your particular tasks. Windows XP has a problem here: with two monitors, if one of them (in my case a TV) is switched off, any windows located on that now-dark monitor simply stay there, and you have to drag them back by one rather perverse means or another. Windows 7 solves this, although, as is typical for Microsoft, not in an obvious way: pressing Win+P opens a quick two-monitor switcher, and if you turn the second monitor off through it, the windows that were there come back automatically.

    Software support for an advanced multi-monitor mode first appeared in the world of PC-compatible computers with the Windows 95 operating system. At the time, however, computer monitors were still very expensive and few people could afford an additional display. Today even LCD monitors have become affordable to most users, so a feature conceived a decade ago is finally becoming genuinely marketable.

  3. #3
    Join Date
    Nov 2005
    Posts
    1,203

    Re: Fermi - Dual screen idle temperatures fix

    When a window is selected (i.e. active), pressing Win+Shift+Left/Right Arrow moves it to the left or right monitor respectively. On two monitors you can work with two programs simultaneously, each open full screen. On NVidia graphics cards this mode is called DualView: the two monitors together represent one desktop, but by default an application runs on only one of them. If you wish, and if the monitor bezels are narrow, a single application can be stretched across both. If an application runs full screen, the second monitor stays free, and you can use it to watch the state of the system, play video, and so on.

    Virtually all modern LCD monitors have two inputs: an analog one (D-Sub), kept for compatibility with older graphics cards, and a digital one (DVI-D), aimed at newer cards. Some monitors automatically detect which input the video card is connected to, but most have a dedicated input selector, usually a button on the front panel. Pressing it switches from the analog input to the digital one and back. So what if you take two computers, put them side by side and connect both to one monitor, the first on the analog input and the second on the digital one? The mouse and keyboard are harder to deal with; in fact, two keyboards fit easily enough on a desk, and since both mouse and keyboard are PS/2 devices they can be swapped without much fear of burning anything.

    The advantage of this approach is extremely high image quality (after all, each computer drives the monitor through its native interface); the disadvantages are the limited cable length (only a few meters) and the impossibility of connecting a third computer to the same monitor. So if we have four computers, we still cannot get by with fewer than two monitors. But you must agree: two monitors is still not four.

    Moreover, choosing one method does not preclude using another. Suppose we have two main computers connected to one monitor via its analog and digital inputs. Additional computers can be attached either through switches, to the same or another monitor, accepting the lower image quality, or be given another monitor with a direct connection if the drop in quality is unacceptable. Sometimes we have to solve the opposite problem: connecting two monitors to one computer. Technically the easiest, safest and cheapest option is to get a video card with two outputs. They come in different combinations: either both analog, or one analog and one digital.

    Another option is to place two monitors side by side and split the desktop in half, thereby doubling the horizontal resolution. No additional software is required (the drivers and the graphics card are enough), but working with such a miracle of technology without a shudder is impossible, because the seam between the bezels runs right through the middle of the desktop (widescreen movies, on the other hand, are quite pleasant to watch on it, preferably from a distance of at least one meter). Option number three is cloning: whatever is shown on one monitor is shown on the other. Except, perhaps, video, because for performance reasons most video players use the so-called overlay mode, in which the video stream travels through the card straight to the monitor, bypassing the video memory and the other units involved in cloning.

    Fermi Support :

    Nvidia has decided to present the computing side of the architecture first, without going into the graphics details, although the base is the same and already gives a good idea of what to expect. Nvidia is attacking GPU computing much more forcefully than AMD, for example, because it is a strategic point for the company's future. Unlike Intel and AMD, Nvidia has no CPU, so it has a vested interest in nibbling away at their market share quickly, whereas its competitors may be reluctant to create too much competition for their own core products. It is also a way of securing its future, since confining itself to 3D while Intel and AMD can each offer a complete platform would obviously be risky.

    For its recent generations of GPUs, Nvidia uses the names of famous physicists as architecture code names. The GT200 was known as Tesla, after Nikola Tesla; that name has also become Nvidia's brand for its massively parallel computing products, so do not confuse the code name, which changes with each GPU generation, with the brand, which remains Tesla. With the G80 and the GeForce 8, Nvidia opened the era of GPU architectures designed with computing in mind. GPU computing existed before, but it went through the 3D rendering pipeline; Nvidia introduced direct compute access to its GPU alongside the classic rendering mode, and added a shared memory that allows a level of communication between threads. It took the Radeon HD 4800, two years later, for AMD to make a similar move.

    GPU computing is designed to accelerate massively parallel algorithms by taking advantage of the many threads that exist within a GPU. These algorithms must decompose into a multitude of small threads that run in parallel, in groups, on the GPU; we are talking about thousands of threads, in contrast to the handful of threads on a CPU. It is a way of thinking and programming so different that Nvidia designed a dedicated language for it: C for CUDA. It was written by Ian Buck, who was previously behind BrookGPU, a language that targeted the 3D rendering pipeline rather than the computing heart of the GPU directly, which made it more rigid and less performant.
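    To make the model concrete, here is a minimal C for CUDA sketch (my own illustration, not from Nvidia's material): a vector addition in which each of a million elements is handled by its own thread, launched in groups (blocks) of 256.

    Code:
    // vector_add.cu - the "hello world" of C for CUDA: one thread per element.
    // Compile with: nvcc vector_add.cu
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void add(const float *a, const float *b, float *c, int n) {
        // Each thread derives a unique global index from its block and position.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                      // a million elements, one thread each
        size_t bytes = n * sizeof(float);
        float *ha = new float[n], *hb = new float[n], *hc = new float[n];
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        add<<<(n + 255) / 256, 256>>>(da, db, dc, n);   // 4096 groups of 256 threads
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %.1f (expect 3.0)\n", hc[0]);
        cudaFree(da); cudaFree(db); cudaFree(dc);
        delete[] ha; delete[] hb; delete[] hc;
        return 0;
    }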

    Since the G80, Nvidia has gradually improved the compute capabilities of its GPUs. First, atomic operations appeared with the derivatives of the G80. With the GT200, Nvidia went a little further by doubling the number of general-purpose registers, raising the maximum number of threads per group from 768 to 1024, and adding support, albeit limited, for double-precision computation. Fermi extends this in several directions (see the sketch after this list):

    • More computing power in general
    • Native double-precision (FP64) support
    • A more efficient and more secure memory subsystem
    • C++ compatibility
    • Better performance with small kernels
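    As a taste of those atomic operations, here is a minimal sketch of my own (assuming the CUDA toolkit; compile with nvcc): a 16-bin histogram in which a million threads bump shared counters with atomicAdd, which without hardware support would be a data race.

    Code:
    // histogram.cu - many threads safely incrementing the same counters.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void histogram(const unsigned char *data, int n, unsigned int *bins) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            atomicAdd(&bins[data[i] % 16], 1u);   // read-modify-write as one atomic step
    }

    int main() {
        const int n = 1 << 20;
        unsigned char *d_data; unsigned int *d_bins;
        cudaMalloc(&d_data, n);
        cudaMalloc(&d_bins, 16 * sizeof(unsigned int));
        cudaMemset(d_data, 7, n);                  // every byte is 7, so all counts land in bin 7
        cudaMemset(d_bins, 0, 16 * sizeof(unsigned int));

        histogram<<<(n + 255) / 256, 256>>>(d_data, n, d_bins);

        unsigned int bins[16];
        cudaMemcpy(bins, d_bins, sizeof(bins), cudaMemcpyDeviceToHost);
        printf("bin[7] = %u (expect %d)\n", bins[7], n);
        return 0;
    }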

  4. #4
    Join Date
    Nov 2005
    Posts
    1,203

    Re: Fermi - Dual screen idle temperatures fix

    Nvidia wanted to keep the CUDA programming model, both for compatibility with existing CUDA code and to avoid imposing too different a model on developers, but significant improvements have been made to the ISA. While PTX 1.x code remains compatible with Fermi, the new PTX 2.0 is specific to this architecture and is not compatible with current GPUs. The improvements concern both the hardware and the software, given the interactions at this level. First, Nvidia wanted to bring full C++ support to CUDA, which was not trivial because several limitations of the previous architectures stood in the way. So Nvidia unified the memory space addressable by Fermi: it is still 40 bits wide, as on the GT200, but it now covers local, shared and global memory in a single address space. This unified space allows the general handling of pointers and references that a high-level language such as C++ requires.
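    Here is a small sketch of what that unified addressing buys you (again my own illustration, assuming a Fermi-class card and the CUDA toolkit): the same __device__ function can be handed a pointer without caring whether it points into global or shared memory.

    Code:
    // generic_pointer.cu - one function, two memory spaces, thanks to generic addressing.
    #include <cuda_runtime.h>
    #include <cstdio>

    __device__ float sum4(const float *p) {      // works on shared or global pointers alike
        return p[0] + p[1] + p[2] + p[3];
    }

    __global__ void demo(const float *global_in, float *out) {
        __shared__ float tile[4];
        if (threadIdx.x < 4) tile[threadIdx.x] = 10.0f * (threadIdx.x + 1);
        __syncthreads();
        if (threadIdx.x == 0) {
            out[0] = sum4(global_in);   // generic pointer into global memory
            out[1] = sum4(tile);        // the very same function, shared memory
        }
    }

    int main() {
        float h_in[4] = {1, 2, 3, 4}, h_out[2];
        float *d_in, *d_out;
        cudaMalloc(&d_in, sizeof(h_in));
        cudaMalloc(&d_out, sizeof(h_out));
        cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);
        demo<<<1, 32>>>(d_in, d_out);
        cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
        printf("global: %.0f  shared: %.0f\n", h_out[0], h_out[1]);  // expect 10 and 100
        return 0;
    }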

    That is not all: Nvidia has made branch handling more flexible in order to support virtual functions and recursive calls. Predication is also on the menu for all instructions, to avoid dynamic branching wherever possible. The Fermi ISA supports system calls and exceptions, with primitives to back try and catch. Also, while current GPUs can handle several types of kernels at once in graphics mode (pixel shaders, vertex shaders, etc.), this is not the case in computing mode: there they can run only one kernel at a time, and the next cannot start before the first has been fully processed. The problem is that if a kernel is small, that is, if it does not fill the entire GPU, computing power is wasted. With Fermi, Nvidia proposes to run up to 16 kernels concurrently (perhaps a little fewer in practice), provided of course that there are no dependencies between them. That should be enough to boost future GPU performance and finally make some algorithms viable on it.
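    On the software side, that concurrency is expressed through CUDA streams. A minimal sketch of mine (whether the kernels actually overlap depends on the hardware and driver; on pre-Fermi GPUs they simply serialize):

    Code:
    // concurrent_kernels.cu - several small, independent kernels in separate streams.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void small_kernel(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] = sqrtf(data[i]) + 1.0f;   // deliberately tiny workload
    }

    int main() {
        const int kernels = 8, n = 4096;              // each kernel far too small to fill the GPU
        cudaStream_t streams[kernels];
        float *bufs[kernels];

        for (int k = 0; k < kernels; ++k) {
            cudaStreamCreate(&streams[k]);
            cudaMalloc(&bufs[k], n * sizeof(float));
            cudaMemset(bufs[k], 0, n * sizeof(float));
            // Independent work in independent streams: Fermi may execute these
            // kernels concurrently, since there are no dependencies between them.
            small_kernel<<<(n + 255) / 256, 256, 0, streams[k]>>>(bufs[k], n);
        }
        cudaDeviceSynchronize();                      // wait for all streams to finish
        for (int k = 0; k < kernels; ++k) {
            cudaStreamDestroy(streams[k]);
            cudaFree(bufs[k]);
        }
        printf("launched %d kernels in %d streams\n", kernels, kernels);
        return 0;
    }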

    Switching from one context to another will also be much faster: Nvidia talks about a 10x gain over the GT200, which will allow more effective interaction between 3D rendering and, for example, physics computation. As with its previous high-end GPUs, Nvidia has not held back and has developed a huge chip: from 1.4 billion transistors in the GT200 we jump to 3 billion with Fermi, enough to make Cypress, whose 2.15 billion transistors impressed us only a few days ago, look like a little guy! Nvidia would not disclose the size of the chip, but it will obviously be enormous despite the 40-nanometer process. The memory subsystem has also been completely revised. The GT200 and previous Nvidia GPUs had no real L1 and L2 caches; there was only a dedicated, read-only texture cache, and load/store operations had no cache at all, contrary to what AMD has offered since the RV770.

    The new Quadro 4000, 5000 and 6000 series all use the young Fermi architecture already introduced in Nvidia's consumer graphics cards. Nvidia says that, compared with the equivalent models of the previous generation, the new cards provide up to five times the performance in 3D applications and up to eight times in simulation computation. The most powerful model can process up to 1.3 billion triangles per second.

    Nvidia has also introduced the Quadro Plex 7000, which integrates multiple GPUs in a single system, and the Quadro 5000M mobile workstation GPU.

    The new Quadro product family consists of the following products:
    • Quadro Plex 7000, with 12 GB of total memory and 896 stream processors;
    • Quadro 6000, with 6 GB of GDDR5 memory and 448 stream processors;
    • Quadro 5000, with 2.5 GB of GDDR5 memory and 352 stream processors;
    • Quadro 4000, with 2 GB of GDDR5 memory and 256 stream processors;
    • Quadro 5000M, with 2 GB of GDDR5 memory and 320 stream processors.

    Conclusion :

    Virtually all modern desktop computers offer an extremely useful but rarely requested feature: the ability to work with multiple monitors simultaneously. Unfortunately, few people have tried multi-monitor work, but almost everyone who has tried it appreciates the convenience of such a setup and has no intention of giving it up. Moreover, according to ergonomics experts, enlarging the working area increases productivity. This applies first of all to everyone who works at a computer professionally: programmers, designers and even office workers. Modern windowed operating systems let you spread a set of documents across two or more screens at once; you no longer need to click buttons and tabs or drag the mouse to move from one document to another. All you need is adequate graphics support.

    Two 17-inch monitors show more information than a single 21-inch one, and the solution costs much less. Simple arithmetic shows that two 17-inch monitors at 1280 x 1024 give a combined workspace of 2560 x 1024, and three monitors give 3840 x 1024. Multi-monitor support is standard in Microsoft Flight Simulator, and there are other games that can use multiple monitors, although many of them only support Matrox DualHead graphics cards, which are not particularly fast at 3D graphics.

    On the Fermi side, the L2 cache, combined with a larger number of dedicated units, speeds up atomic operations, which lock a memory location while a thread reads it, modifies it and writes the result back. With the L2 cache, successive atomic operations on the same memory area can now be handled entirely in the cache, without round trips to the video memory. Nvidia talks about gains ranging from 5x to 20x, but without specifying under what conditions they are obtained.

    Finally, Nvidia has added something long awaited in the professional world: ECC. This support can detect errors in the various memories and, where possible, correct them. We do not know the exact implementation, since Nvidia has chosen not to go into those details; we do know, however, that the registers and the L1 and L2 caches are protected in the same way as the GDDR5 or DDR3 video memory. GDDR5 does not yet exist in an ECC version, but it remains a choice for Fermi because, in addition to offering significantly more bandwidth than DDR3 (with which Fermi could be cramped), it also secures its data transfers. Note that we are talking about DDR3, not GDDR3, the former being the only one compatible with ECC. In Fermi, each of the 16 multiprocessors has a dual scheduler that always runs at the low frequency and four execution blocks that operate at the high frequency.

    Note the presence of two PCI Express power connectors on the card, one 6-pin and one 8-pin. Remember also that the widely photographed model is a mechanical sample: no graphics chip is soldered on, no memory chips are fitted, and the card's fan does not even spin. As a bonus, the PCB is incomplete, with part of it cut away at one end. The Tesla model does not require a DVI output, a choice that obviously will not be found on the GeForce cards.
