
Thread: Hyper-V in Windows Server 2008

  1. #1
    Join Date
    May 2008
    Posts
    420

    Hyper-V in Windows Server 2008

    There is a lot of talk about virtualization these days, and most of that debate revolves around server virtualization. It is one of the most interesting trends in the industry, one with the potential, over the next few years, to change the paradigm of how IT systems are deployed. But server virtualization will not only change how IT administrators and architects think about servers and system utilization; it will also affect the processes and tools used to manage what will certainly become an increasingly dynamic environment. Virtualization did not arrive yesterday, but the technology is still evolving. Even the word itself still means different things to different people. In the broadest sense, virtualization is the abstraction of one layer of a technology stack from the next, such as storage from servers or the operating system from applications. Abstracting the different layers, in turn, enables consolidation and better manageability.

    As a concept, virtualization applies to storage, networks, servers, applications and access. With storage and network virtualization, the goal is to take a whole set of different devices and present them as a common pool of resources that looks and behaves like a single entity. For example, you can configure a 40 TB storage solution instead of a set of twenty separate 2 TB stores. With the other components, virtualization works in the opposite direction, helping a single system appear as though it were several systems. The most common example of this is server virtualization, where multiple operating system instances and environments run on a single server.

    Microsoft approaches virtualization at several different levels, ranging from the desktop to the datacenter, with solutions for server, application, presentation and desktop virtualization. The common thread across all of these is management with Microsoft System Center. In this article, I focus on the server virtualization component, and specifically on the role that Hyper-V, a key feature of Windows Server 2008, plays in the dynamic datacenter.

  2. #2
    Join Date
    May 2008
    Posts
    420

    Re: Hyper-V in Windows Server 2008

    Server Virtualization Market

    To start, I think it is worth looking at the current environment and where the market is heading overall. Depending on which research you read, some analysts estimate that the share of physical servers currently hosting virtualization is around 5.9 percent of the total. That leaves a large untapped segment of the market, given that more than nine million physical servers ship every year. One thing is certain: there is plenty of business opportunity ahead as more customers become familiar with virtualization and want to apply it.

    It is also important to look at where virtualization has been adopted. Enterprise customers have clearly led the way, testing it and adopting it first. However, virtualization is now being deployed by small and medium-sized businesses as well. Adoption also cuts across workload types, from business and management applications to networking and e-mail.

    So why is virtualization getting all the buzz now? Several factors are at work, not least of which is timing. Several key industry trends have come together at the same time, helping to drive increased adoption of virtualization. These include the shift to 64-bit computing, multicore processors and even the push toward environmentally friendly computing and the refresh of aging systems.

    Systems are becoming more powerful and need technologies like virtualization to make full use of their capabilities. But while it is true that core hardware advances (and Moore's Law) reliably deliver more computing power than most systems can consume, we are also now acutely aware of environmental impact, power requirements and cooling costs.

    All of these factors, plus how easy it is to demonstrate a return on investment, should together accelerate the adoption of virtualization by companies large and small. And we, as IT professionals, can expect all the major market players to keep investing in this technology over the next few years, improving its components and functionality.

  3. #3
    Join Date
    May 2008
    Posts
    420

    Re: Hyper-V in Windows Server 2008

    How server virtualization works

    Server virtualization, generally speaking, lets you take a single physical device and install (and run simultaneously) two or more operating environments that are potentially different and have different identities, application stacks, and so on. Hyper-V is a next-generation, 64-bit, hypervisor-based virtualization technology that offers a reliable and scalable platform. Together with System Center, it provides a single set of integrated management tools for both physical and virtual resources.

    All of this is aimed at reducing costs, improving utilization, optimizing infrastructure and giving companies the ability to provision new servers quickly. To help you better understand the architecture of Hyper-V, I would first like to look at the different types of virtualization solutions.

  4. #4
    Join Date
    May 2008
    Posts
    420

    Re: Hyper-V in Windows Server 2008

    Types of virtualization solutions

    There are, in fact, three main architectures used for server virtualization. The fundamental difference between them lies in the relationship between the virtualization layer and the physical hardware. By virtualization layer I mean the layer of software called the virtual machine monitor (VMM, not to be confused with Virtual Machine Manager). It is this layer that makes it possible to create multiple isolated instances on the same hardware.

    An example of the Type-2 VMM architecture is the Java Virtual Machine. Here the goal of virtualization is to create a runtime environment within which a process can execute a set of instructions without relying on the host system. In this case, the isolation is intended for different processes and allows a single application to run on different operating systems without worrying about OS dependencies. Server virtualization does not fall into this category.

    Type-1 VMMs and hybrid VMMs are the approaches you are most likely to encounter today. A hybrid VMM is a configuration in which the VMM runs alongside the host operating system and helps create virtual machines on top of it. Examples of hybrid VMMs are Microsoft Virtual Server, Microsoft Virtual PC, VMware Workstation and VMware Player. It should be noted that while these solutions are perfect for client scenarios where the VMs are only run part of the time, hybrid VMMs add significant overhead and are therefore not suited to resource-intensive workloads.

    In a Type-1 VMM architecture, the VMM layer runs directly on the hardware. This is often called the hypervisor layer. The architecture was originally developed by IBM in the 1960s for mainframes and has recently become available on x86/x64 platforms in a number of solutions, including Windows Server 2008 Hyper-V.

    There are also solutions in which the hypervisor is embedded in the firmware. This, however, is simply a delivery option and has no effect on the technology itself.

    Looking more closely at Type-1 VMMs, there are essentially two main hypervisor architectures: microkernel and monolithic. Both are true Type-1 VMMs, with the hypervisor installed directly on the physical hardware.

    The monolithic approach hosts the hypervisor/VMM in a single layer that also includes most of the required components, such as the kernel, device drivers and the I/O stack. This is the approach used by solutions such as VMware ESX and traditional mainframe systems.

    The microkernel approach uses a very thin, specialized hypervisor that performs only the core tasks of partition isolation and memory management. This layer does not include the I/O stack or device drivers. This is the approach used by Hyper-V. In this architecture, the virtualization stack and the device-specific drivers live in a special partition called the parent partition.

  5. #5
    Join Date
    May 2008
    Posts
    420

    Re: Hyper-V in Windows Server 2008

    Windows hypervisor

    The Windows hypervisor enforces strict separation between the operating systems running on it by creating virtual processors, memory, timers and interrupt controllers. The operating systems use these virtual resources exactly as they would use their physical counterparts.

    The Windows hypervisor, part of Hyper-V, performs the following tasks:

    * Creates logical partitions.
    * Manages the allocation of memory and CPU resources for guest operating systems.
    * Provides the mechanisms for I/O virtualization and communication between partitions.
    * Enforces memory access rules.
    * Enforces CPU usage policies.
    * Exposes a simple programming interface known as hypercalls.

    Because it uses the microkernel approach, the Windows hypervisor is quite small, less than 1 MB in size. This minimal footprint helps improve the overall security of the system.

    One of the requirements for running Hyper-V is an x64 system with Intel VT or AMD-V. x64 technology allows access to a larger address space and supports systems with more memory, which in turn allows more virtual machines on a single host system. Intel VT and AMD-V are hardware-assisted virtualization solutions that provide an additional privilege level beneath the ring architecture, helping keep the hypervisor's execution environment separate from the rest of the system. They also let Hyper-V run an unmodified guest OS without incurring significant emulation performance penalties.
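
    Before enabling the Hyper-V role, it can be handy to do a rough pre-flight check of the host. The minimal Python sketch below (assuming the third-party "wmi" package, installed with pip install wmi, on the Windows host) reports the processor address width and physical memory through the standard Win32_Processor and Win32_ComputerSystem WMI classes; note that Intel VT / AMD-V support still has to be verified in the BIOS or with the CPU vendor's detection tools, which this sketch does not attempt.

    Code:

    # Rough pre-flight check before enabling Hyper-V: confirm the host is x64
    # and see how much physical memory it has. Assumes the third-party "wmi"
    # package (pip install wmi); Intel VT / AMD-V still needs to be checked
    # in the BIOS or with the CPU vendor's tools.
    import wmi

    def preflight_report():
        conn = wmi.WMI()  # default namespace root\cimv2

        for cpu in conn.Win32_Processor():
            # AddressWidth is 64 on an x64 processor running a 64-bit OS.
            print(f"CPU: {cpu.Name.strip()} (address width: {cpu.AddressWidth}-bit)")

        for system in conn.Win32_ComputerSystem():
            total_gb = int(system.TotalPhysicalMemory) / (1024 ** 3)
            print(f"Physical memory: {total_gb:.1f} GB")

    if __name__ == "__main__":
        preflight_report()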

  6. #6
    Join Date
    May 2008
    Posts
    420

    Re: Hyper-V in Windows Server 2008

    Parent Partition

    Hyper-V has a single parent partition, which is essentially a virtual machine with special or privileged access. It is the only virtual machine with direct access to hardware resources. All the other virtual machines, known as guest (child) partitions, go through the parent partition for their device access.

    The existence of the parent partition is fairly transparent. To install Hyper-V, you first install Windows Server 2008 x64 Edition on the physical system. You then open Server Manager, enable the Hyper-V role and restart the system. After the restart, the Windows hypervisor loads first, and the rest of the stack is then converted into the parent partition.

    The parent partition owns the keyboard, mouse, video display and other devices attached to the host server. It does not have direct control over the timers and interrupt controllers, which the hypervisor uses.

    The parent partition contains a Windows Management Instrumentation (WMI) provider that enables management of all aspects of the virtualized environment, as well as the virtualization stack, which performs hardware-related tasks on behalf of the child partitions. In addition, any independent hardware vendor (IHV) drivers needed by the host system reside in the parent partition, and drivers written for 64-bit editions of Windows Server 2008 also run in the parent partition.
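
    To give a feel for that WMI provider, here is a minimal sketch (again assuming the third-party "wmi" Python package) that enumerates the virtual machines it exposes through the root\virtualization namespace, the v1 namespace used by Hyper-V on Windows Server 2008. It should be run in the parent partition with administrative rights; filtering on Caption is the usual way to separate guest partitions from the host, which also appears as an Msvm_ComputerSystem instance.

    Code:

    # Minimal sketch: list the VMs exposed by the Hyper-V WMI provider that
    # lives in the parent partition. Uses the third-party "wmi" package and
    # the root\virtualization (v1) namespace of Windows Server 2008.
    import wmi

    def list_vms():
        virt = wmi.WMI(namespace=r"root\virtualization")

        for system in virt.Msvm_ComputerSystem():
            # The host itself also shows up as an Msvm_ComputerSystem instance;
            # guest partitions are reported with Caption "Virtual Machine".
            if system.Caption == "Virtual Machine":
                print(f"{system.ElementName}  (EnabledState={system.EnabledState})")

    if __name__ == "__main__":
        list_vms()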

  7. #7
    Join Date
    May 2008
    Posts
    420

    Re: Hyper-V in Windows Server 2008

    Device-sharing architecture

    One of the innovative architectural components of Hyper-V is its new device-sharing architecture, which supports emulated and synthetic devices in each guest OS. Device emulation is quite useful for supporting older operating systems whose device drivers were written for earlier generations of hardware. For example, Hyper-V includes an emulation of the Intel 21140 network adapter, which was known as the DEC 21140 network adapter back when many of those older operating systems shipped.

    Typically, emulated devices are slow, hard to extend and do not scale well. But emulation is still important because it lets you run most x86 operating systems on Hyper-V. And since virtualization has moved from being a niche technology intended primarily for testing and development to an important technology for production environments, users demand better performance so they can run larger workloads. Emulated devices no longer meet these growing demands.

    The alternative is to use Hyper-V synthetic devices. Synthetic devices are virtual devices that are mapped directly onto physical devices. Unlike emulated devices, synthetic devices do not emulate legacy hardware. With the Hyper-V hardware-sharing model, guest operating systems can interact directly with synthetic devices that may have no physical counterpart. These operating systems use virtualization service clients (VSCs), which act as device drivers inside the guest OS.

    Instead of accessing physical hardware directly, VSCs use the VMBus, a high-speed, in-memory bus, to reach virtualization service providers (VSPs) in the parent partition. The VSPs in the parent partition then manage access to the underlying physical hardware. A key benefit of synthetic devices is that their performance over the VMBus is much closer to that of non-virtualized physical devices.
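
    As a small illustration of the two device models, the sketch below (assuming the third-party "wmi" Python package, run in the parent partition with administrative rights) counts how many synthetic versus emulated (legacy) network adapters are configured across the VMs on a host, using the Msvm_SyntheticEthernetPortSettingData and Msvm_EmulatedEthernetPortSettingData classes of the root\virtualization (v1) namespace.

    Code:

    # Count synthetic (VMBus) versus emulated (legacy) network adapters
    # configured on this Hyper-V host. Assumes the third-party "wmi" package
    # and the root\virtualization (v1) namespace of Windows Server 2008.
    import wmi

    def adapter_summary():
        virt = wmi.WMI(namespace=r"root\virtualization")

        synthetic = virt.Msvm_SyntheticEthernetPortSettingData()
        emulated = virt.Msvm_EmulatedEthernetPortSettingData()

        print(f"Synthetic (VMBus) network adapters configured: {len(synthetic)}")
        print(f"Emulated (legacy) network adapters configured: {len(emulated)}")

    if __name__ == "__main__":
        adapter_summary()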

  8. #8
    Join Date
    May 2008
    Posts
    420

    Re: Hyper-V in Windows Server 2008

    The key qualities of Hyper-V

    Needless to say, the closer a virtualization platform comes to the experience of working on a physical server, the easier it is for organizations to deploy virtual workloads and rely on them. In my view, there are four key areas against which a virtualization platform can be evaluated.

    Today, most hypervisor-based virtualization solutions are fairly close to one another in terms of features and capabilities. Going forward, the main differentiators will be total cost of ownership and ease of management. Continued investment in management solutions will keep moving us toward the concept of a dynamic IT environment, in which the infrastructure is flexible enough to adapt to the needs of the business, and models and policies help drive ever-greater management and automation.

  9. #9
    Join Date
    May 2008
    Posts
    420

    Re: Hyper-V in Windows Server 2008

    Hyper-V Scalability

    With its microkernel hypervisor architecture, Hyper-V consumes very little CPU, leaving plenty of room for the virtualized workloads. By letting virtual machines take advantage of powerful features and hardware, such as multicore technology, improved disk access and larger amounts of memory, Hyper-V improves the scalability and performance of the virtualization platform.

    Combined with the rest of Windows Server 2008, Hyper-V lets you consolidate most workloads, both 32-bit and 64-bit, onto a single system. It can also help you balance adopting 64-bit technology with continued support for the 32-bit workloads already in use in your environment.

    The fact that Hyper-V requires a 64-bit system with hardware-assisted virtualization helps ensure that the host has access to a large pool of memory resources. Hyper-V supports up to 1 TB of memory on the host and up to 64 GB of memory per virtual machine. This is a key factor for anyone planning to virtualize memory-hungry workloads such as Exchange Server and SQL Server.
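
    A quick back-of-the-envelope check like the Python sketch below can help when planning memory allocation against those limits. The 2 GB reserved here for the parent partition is an illustrative planning margin, not an official figure.

    Code:

    # Sanity-check a planned VM memory layout against the Hyper-V limits
    # mentioned above: up to 64 GB per VM and up to 1 TB per host.
    PER_VM_LIMIT_GB = 64
    HOST_LIMIT_GB = 1024
    PARENT_RESERVE_GB = 2  # assumed headroom for the parent partition

    def check_plan(host_memory_gb, planned_vms_gb):
        """planned_vms_gb: list of memory sizes (GB) to assign to the VMs."""
        for size in planned_vms_gb:
            if size > PER_VM_LIMIT_GB:
                print(f"A {size} GB VM exceeds the {PER_VM_LIMIT_GB} GB per-VM limit")

        usable = min(host_memory_gb, HOST_LIMIT_GB) - PARENT_RESERVE_GB
        total = sum(planned_vms_gb)
        verdict = "fits" if total <= usable else "does not fit"
        print(f"Requested {total} GB of {usable} GB usable on this host: {verdict}")

    # Example: a 64 GB host running Exchange and SQL Server guests plus smaller VMs.
    check_plan(host_memory_gb=64, planned_vms_gb=[16, 16, 8, 8, 4, 4])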

    Hyper-V also supports up to 16 logical processors on the host, which makes it applicable to the vast majority of commodity two-socket and four-socket multicore systems on the market. You can also create virtual machines with up to four virtual processors to support workloads that can take advantage of multiple processors.

    Consolidating servers on Hyper-V also lets them take advantage of robust networking support, including VLANs, Network Address Translation (NAT) and Network Access Protection (NAP) quarantine policies. And, as a Windows Server 2008 component, Hyper-V works well with other Windows Server features, such as BitLocker and Windows PowerShell.

  10. #10
    Join Date
    May 2008
    Posts
    420

    Re: Hyper-V in Windows Server 2008

    Hyper-V High Availability

    High availability is a scenario in which Hyper-V and the host clustering capabilities work together to support business continuity and disaster recovery needs. Business continuity is the ability to minimize both planned and unplanned downtime. That includes time lost to routine operations such as maintenance and backup, as well as unplanned outages.

    Disaster recovery is an important component of business continuity. Natural disasters, malicious attacks and even simple configuration problems such as software conflicts can take services and applications down until the administrator resolves the problem and restores the data. A solid business continuity and disaster recovery strategy should offer minimal data loss and powerful remote management capabilities.

    When considering high availability, there are three categories to evaluate: planned downtime, unplanned downtime and backups. Protection against planned downtime is usually needed so you can move VMs off a host in order to perform hardware maintenance or apply patches to the host or the virtualization platform (which could potentially require a reboot).

    Most organizations schedule maintenance windows, and what is really needed is to minimize or eliminate the period during which the virtual machines are unavailable while their host system is being serviced. With Quick Migration, you can migrate running virtual machines from one node to another in a matter of seconds. This keeps the virtual machines available for use while you perform maintenance on the original host. Once the maintenance is complete, you can use Quick Migration to move the virtual machines back to the original computer.

    Unplanned downtime is downtime that cannot be predicted. It can be caused by a disaster or by something as simple as someone accidentally pulling a server's power cord out of the socket. Unlikely as that may sound, over the years I have met quite a few administrators at TechEd, VMworld and other conferences with stories about one server or another being accidentally switched off by a colleague.

    With Hyper-V, you can set up a host cluster across different systems and configure all the virtual machines as cluster resources, so that they automatically fail over to another system if one of the hosts fails. At the same time, the multi-site clustering capabilities of Windows Server 2008 let you build a geographically dispersed cluster, so that if the primary datacenter fails, the virtual machines can be recovered at a remote datacenter.

    This is also handy for protecting branch offices. One advantage of Hyper-V's support for unplanned downtime is that it does not depend on the specific guest OS, which means its high availability benefits extend to Linux virtual machines and to older versions of Windows Server, protecting and recovering those systems in the same way.

    Looking at unplanned downtime, it is important to note that recovery is equivalent to powering the system off and back on, which means all state information is lost. That may or may not be a problem, depending on the workload running in the virtual machine. This is why it is important to consider backup in the context of high availability.

    Hyper-V lets you take backups of each virtual machine, or use the Volume Shadow Copy Service (VSS) to take consistent backups of all VSS-aware virtual machines while they are still running. With VSS, you can schedule backups at regular intervals without affecting the availability of the workloads, while maintaining a continuous backup plan that makes it easier to recover from unplanned downtime.

    The microkernel architecture is also designed to minimize the attack surface and improve security, especially when Hyper-V runs with the Server Core installation option of Windows Server 2008. The hypervisor contains no device drivers or third-party code, providing a more stable, thin and secure foundation for running virtual machines. Hyper-V also provides strong role-based security through Active Directory integration. In addition, Hyper-V lets virtual machines take advantage of hardware-level security features, such as the no-execute (NX) bit, to further improve their security.

    Hyper-V went through the Security Development Lifecycle (SDL) like every other Windows Server component, and extensive threat modeling and analysis were carried out to make Hyper-V a highly secure virtualization platform. When deploying Hyper-V, be sure to follow the deployment best practices for Windows Server 2008 as well as those for Hyper-V. Include Active Directory as well as antivirus and anti-malware solutions in your plan. And use the delegated administration capabilities to ensure that administrative access privileges on Hyper-V hosts are used appropriately.

  11. #11
    Join Date
    May 2008
    Posts
    420

    Re: Hyper-V in Windows Server 2008

    Hyper-V Manageability

    It is easy to slide from a minor case of server sprawl into a massive, uncontrolled sprawl of virtual machines. That risk comes from the very ease of deploying virtual machines. And the increased mobility of virtual machines adds the need to know where particular virtual machines are running, to track their security contexts, and so on.

    Fortunately, with Hyper-V you do not need to build a separate management infrastructure for your virtual environment. It integrates with Microsoft management tools, such as System Center Virtual Machine Manager and Microsoft System Center Operations Manager, as well as with management tools from third-party vendors. This lets you manage physical and virtual resources from a single console. At the same time, support for Windows PowerShell makes it easy to automate tasks.
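
    For readers who prefer scripting against the WMI provider directly, here is a small automation sketch in the same spirit (assuming the third-party "wmi" Python package and the root\virtualization v1 namespace): it starts a VM by name with Msvm_ComputerSystem.RequestStateChange, where state code 2 ("Enabled") is the documented value for starting a VM in that namespace. The VM name used at the end is hypothetical, and in practice you would more likely automate this through System Center Virtual Machine Manager or Windows PowerShell.

    Code:

    # Start a Hyper-V VM by name through the WMI provider in the parent
    # partition. Assumes the third-party "wmi" package and the v1 namespace
    # (root\virtualization) used by Windows Server 2008.
    import wmi

    STATE_RUNNING = 2  # "Enabled" in the v1 Msvm_ComputerSystem schema

    def start_vm(vm_name):
        virt = wmi.WMI(namespace=r"root\virtualization")
        matches = virt.Msvm_ComputerSystem(ElementName=vm_name,
                                           Caption="Virtual Machine")
        if not matches:
            raise LookupError(f"No virtual machine named {vm_name!r} on this host")

        # RequestStateChange is asynchronous: it returns a job reference and a
        # return code rather than blocking until the VM has fully started.
        result = matches[0].RequestStateChange(RequestedState=STATE_RUNNING)
        print(f"Start requested for {vm_name}: {result}")

    start_vm("test-vm-01")  # hypothetical VM name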

    Hyper-V also gives virtual machines unprecedented access to the available hardware. Because any driver certified by the Windows Hardware Quality Labs (WHQL) can run in the parent partition, Hyper-V provides broad compatibility for drivers and devices, simplifying the management of the different drivers running in the environment.


    Conclusion

    As I mentioned earlier, management will be a key area of development and differentiation. We are likely to see considerable activity in this area over the next few years. All in all, this is a genuinely interesting time, as virtualization moves into a more mainstream role.

Similar Threads

  1. Facing blue screen on windows 2008 R2 with hyper v server
    By MacNamara in forum Windows Server Help
    Replies: 6
    Last Post: 27-02-2012, 11:25 PM
  2. Windows 2008 R2 Server on Hyper-V
    By RedZot in forum Guides & Tutorials
    Replies: 3
    Last Post: 23-01-2011, 04:14 AM
  3. Prepare Your Windows Server 2008 R2 for Hyper-V Role
    By mauricio in forum Guides & Tutorials
    Replies: 3
    Last Post: 17-09-2010, 04:00 AM
  4. Windows Server 2008 - about Hyper-V and Terminal Service
    By satimis in forum Operating Systems
    Replies: 2
    Last Post: 08-07-2010, 12:03 AM
  5. Windows server 2008 enterprise hyper-v licensing
    By Patio in forum Windows Software
    Replies: 3
    Last Post: 01-07-2009, 07:35 PM
