Sunday, 25 August 2013

What desktop virtualization really brings

Depending on whom you talk to, desktop virtualization is either the hottest trend in IT or an expensive notion with limited appeal

Desktop virtualization harks back to the good old mainframe days of centralized computing while upholding the fine desktop tradition of user empowerment. Each user retains his or her own instance of desktop operating system and applications, but that stack runs in a virtual machine on a server -- which users can access through a low-cost thin client similar to an old-fashioned terminal.

The argument in favor of desktop virtualization is powerful: What burns through more hands-on resources or incurs more risk than desktop computers? Even with remote desktop management, admins must invade cubicles and shoo away employees when it's time to upgrade or troubleshoot. And each desktop or laptop provides a fat target for hackers and an opportunity to steal data. But if you run desktops as virtual machines on a server, you can manage and secure all those desktop user environments in one central location. Patches and other security measures, along with hardware or software upgrades, demand much less overhead. And the risk that users will make mischief or mistakes that breach security drops dramatically.

The argument against desktop virtualization is almost as strong. Overhead costs conserved through central management get cancelled out by the need for powerful servers, virtualization software licenses, and additional network bandwidth. Plus, the cost of client hardware and Microsoft software licenses stays roughly the same, while the user experience -- at least today -- seldom lives up to user expectations. And then the kicker: How are users supposed to compute when they're disconnected from the network?

Decisions about whether or in what form to adopt desktop virtualization become a whole lot easier when you understand the basic variants and technologies. Here's what you need to know:

1. Desktop virtualization really is virtualization

Just like server virtualization, desktop virtualization relies on a thin layer of software known as a hypervisor, which runs on the server hardware and provides a platform on which administrators deploy and manage virtual machines. With desktop virtualization, each user gets a virtual machine that contains a separate instance of the desktop operating system (almost always Windows) and whatever applications have been installed. To the desktop OS, the applications, and the user, the VM does a pretty good job of impersonating a real desktop machine.

2. Traditional thin client solutions are not desktop virtualization

By far the most popular form of server-based, thin client computing relies on Microsoft Terminal Services (recently renamed Remote Desktop Services), which lets multiple users share the same instance of Windows. Terminal Services is often paired with Citrix XenApp (formerly known as Presentation Server and, before that, MetaFrame), which adds management features and improves performance -- no hypervisors or VMs here. The main drawbacks: Some applications run poorly or not at all in this shared environment, and individuals can't customize their user experience the way they can with virtual machines or real desktops. Nonetheless, people often refer to traditional thin client solutions as desktop virtualization because the basic goal is the same: to consolidate desktop computing at the server.

3. Desktop virtualization and VDI mean pretty much the same thing

VMware was first to promote the VDI (virtual desktop infrastructure) terminology, but Microsoft and Citrix have followed suit, offering VDI solutions of their own based on the Hyper-V and XenServer hypervisors, respectively. Think of it this way: VDI refers to the basic architecture for desktop virtualization, where a VM for each user runs on the server.

4. Don't confuse desktop virtualization with ... desktop virtualization

The desktop virtualization we're talking about refers to server-based computing. But "desktop virtualization" also refers to running virtual machines on desktop systems, using such desktop virtualization solutions as Microsoft Virtual PC, VMware Fusion, or Parallels Desktop. Probably the most common use of this sort of desktop virtualization is running Windows in a Parallels or Fusion VM on the Mac. In other words, this has nothing to do with server-based computing.

5. No server-based computing solution supports the same range of hardware as a desktop

The Windows folks in Redmond spend half their lives ensuring compatibility with every printer, graphics card, sound card, scanner, and quirky USB device. With thin clients, your support for hardware is going to be pretty generic, and some items won't work at all. Other limitations are introduced by the fact that users interact with their VMs over the network. Multimedia, videos, and Flash apps can be problematic.

6. VDI solutions cost more (and deliver more) than traditional thin client solutions

Think about it: With VDI, each virtual machine needs its own slice of memory, storage, and processing power to run a user's desktop environment, while in the old-fashioned Terminal Services model, users share almost everything except data files. VDI also means a separate Windows license for each user, while Terminal Services-style setups give you a break with Microsoft Client Access Licenses. Plus, VDI incurs greater network traffic, which may add a network upgrade to the purchase order for beefy server hardware.
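To make that resource math concrete, here is a back-of-the-envelope sizing sketch in Python. Every figure is hypothetical -- real per-VM allocations depend on workload and vendor guidance -- but the shape of the calculation is what matters: per-user slices add up fast.

```python
# Hypothetical VDI server sizing: each user's VM gets its own slice
# of memory and storage, unlike Terminal Services where almost
# everything is shared.
USERS = 100
RAM_PER_VM_GB = 2          # assumed desktop VM memory allocation
STORAGE_PER_VM_GB = 40     # assumed persistent disk image size
HYPERVISOR_OVERHEAD_GB = 8 # assumed host OS / hypervisor reserve

total_ram = USERS * RAM_PER_VM_GB + HYPERVISOR_OVERHEAD_GB
total_storage = USERS * STORAGE_PER_VM_GB

print(f"RAM needed:     {total_ram} GB")      # 208 GB
print(f"Storage needed: {total_storage} GB")  # 4000 GB
```

Even with modest assumptions, a hundred users translates into server-class memory and storage, which is exactly the "beefy server hardware" line item on the purchase order.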

In return for that extra cost, along with a better user experience, VDI delivers greater manageability and availability. As with server virtualization, you can migrate virtual machines among servers without bringing down those VMs, perform VM snapshots for quick recovery, run automated load balancing, and more. And if a virtual machine crashes, that doesn't affect other VMs; with Terminal Services, that single instance of Windows is going to bring down every connected user when it barfs.

7. Dynamic VDI solutions improve efficiency

In a standard VDI installation, each user's virtual machine persists from session to session; as the number of users grows, so do storage and administration requirements. In a dynamic VDI architecture, when users log in, virtual desktops assemble themselves on the fly by combining a clone of a master image with user profiles. Users still get a personalized desktop, while administrators have fewer operating system and application instances to store, update, and patch.
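As a rough illustration of how a dynamic VDI broker composes a desktop at login, here is a minimal Python sketch. The master image and profile contents are invented for illustration; real brokers layer disk images and profile stores, not dictionaries.

```python
# Hypothetical sketch of dynamic VDI assembly: one shared master image,
# plus a per-user profile overlay applied at login.
MASTER_IMAGE = {"os": "Windows 7", "apps": ["Office", "Browser"]}

USER_PROFILES = {
    "alice": {"wallpaper": "beach.png", "apps": ["Photoshop"]},
    "bob":   {"wallpaper": "city.png",  "apps": []},
}

def assemble_desktop(user):
    """Clone the master image, then layer the user's profile on top."""
    profile = USER_PROFILES[user]
    return {
        "os": MASTER_IMAGE["os"],
        "apps": MASTER_IMAGE["apps"] + profile["apps"],
        "wallpaper": profile["wallpaper"],
    }

print(assemble_desktop("alice"))
```

The point of the design shows up in the data: administrators patch one `MASTER_IMAGE`, yet every user still logs in to a personalized desktop.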

8. Application virtualization eases VDI requirements even more

When an application is virtualized, it's "packaged" with all the little operating system files and registry entries necessary for execution, so it can run without having to be installed (that is, no changes need be made to the host operating system). In a dynamic VDI scenario, admins can set up virtualized applications to be delivered to virtual machines at runtime, rather than adding those apps to the master image cloned by VMs. This reduces the footprint of desktop virtual machines and simplifies application management. If you add application streaming technology, virtualized applications appear to start up faster, as if they were installed in the VM all along.
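A minimal sketch of that runtime-delivery idea follows; the package names and contents are hypothetical, and real application virtualization products manage file systems and registry state far more elaborately than this.

```python
# Hypothetical sketch: virtualized app packages are attached to a
# session at runtime instead of being baked into the master image.
MASTER_IMAGE_APPS = ["Browser"]

APP_PACKAGES = {  # each package carries its own files and registry entries
    "Office": {"files": ["office.pkg"], "registry": ["HKCU\\Office"]},
    "CAD":    {"files": ["cad.pkg"],    "registry": ["HKCU\\CAD"]},
}

def attach_at_login(entitlements):
    """Return the app set a session sees: master image plus packages."""
    return MASTER_IMAGE_APPS + [a for a in entitlements if a in APP_PACKAGES]

print(attach_at_login(["Office"]))  # ['Browser', 'Office']
```

Because nothing is installed into the guest OS, revoking an app is just dropping it from the user's entitlement list.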

9. Client hypervisors will let you run virtual machines offline

A client hypervisor installs on an ordinary desktop or laptop so that you can run a "business VM" containing your OS, apps, and personal configuration settings. Talk about full circle: Why would you want all that in a virtual machine instead of installed on the desktop itself? Two reasons: One, it's completely secure and separate from whatever else may be running on that desktop (such as a Trojan some clueless user accidentally downloaded) and two, you get all the virtualization management advantages, including VM snapshots, portability, easy recovery, and so on. Client hypervisors also make VDI more practical. You can run off with your business virtual machine on a laptop and compute without a connection; then when you connect to the network again, the client VM syncs with the server VM.

Client hypervisors point to a future where we bring our own computers to work and download or sync our business virtual machines to start the day. Actually, you could use any computer with a compatible client hypervisor, anywhere. The operative word is "future" -- although Citrix has released a "test kit" version of its client hypervisor, and VMware is expected to release its own early version soon, shipping versions will not arrive before 2011.

The long march to the server side

Meanwhile, a completely different form of server-based computing continues to gain traction: the variant of cloud computing known as SaaS (software as a service), where service providers maintain applications and user data and deliver everything through the browser. A prime example is Google's campaign for Google Docs, encouraging users to forget about upgrading to Office 2010 and adopt Google's suite of productivity apps instead. Plus, Google's Chrome OS promises to create entire desktop environments in the cloud that retain user personalization.

Very likely, no big winner will emerge in server-based computing. Old-style Terminal Services setups will continue to crank along for offices harboring users with narrow, simple needs. True desktop virtualization on the VDI model will make sense where security and manageability are paramount, such as widely distributed organizations that use lots of contractors. And where far-flung collaboration is key, SaaS will flourish, because anyone with a Web browser can join the party. Conventional desktops may never disappear, but one way or another, the old centralized model of computing is making a comeback.



Article submitted by : SUNDE, global provider of innovative Terminal Services and Virtual Desktop Infrastructure (VDI) solutions paired with zero clients to help customers dramatically reduce the cost and complexity of desktop computing.

Sunday, 18 August 2013

Zero Client and Thin Client Technology: History, Use and Critical Comparison

Introduction

The debate over the strengths and weaknesses of thin clients versus fat clients in a distributed computing environment has gone on for many years. Thin clients have been highlighted as a preferred method for information publishing across the enterprise and as a key tool in the ongoing struggle to reduce ownership costs for information technology. In late 2003, unsatisfied with its strategic direction and relieved of the most severe anti-trust threats, Microsoft began to reverse the technology pendulum back toward fat-client architectures as it announced strategic plans to embed more functionality within its Windows client operating system. Overlooked in many discussions of industry trends is the "Zero Client", a technology which offers the benefits of fat clients while delivering equivalent cost of ownership reductions and faster performance than the fastest thin clients.

Definition and Description

The zero client ("station") is a set of components (monitor, keyboard, mouse), none of which have independently programmable intelligence, that relies on a centralized CPU ("Host PC") for all program execution and information processing. The connection between the zero client and the Host PC is a direct, point-to-point connection that operates at bus speed, requiring no network protocol. Zero clients are typically implemented in clusters, using a "star-like" configuration around the Host PC. Each cluster can function either as a network component of a distributed computing system or as a self-contained, small-group system. When combined with the high performance of the bus-speed delivery system, zero client technology offers an unequalled platform for small-group, transaction-based systems accessing a shared database.

Since a zero client uses low-cost component hardware, with no local intelligence or processing, its cost per seat is similar to that of network computers. Likewise, zero clients offer a single point location - the Host PC - for upgrade, maintenance and support, thus drastically reducing licensing and lifetime system costs.

History of Zero Client Technology

Zero client technology has its earliest roots in mini/mainframe computing, where computing tasks and program execution were centralized and information was sent and displayed to multiple users through terminal devices that lacked programmable intelligence, ergo, "dumb terminals" (later renamed "mainframe interactive terminals").

Character-based terminals such as the initial 3270, 5250 and VT52/VT100 stations provided the user interface on a variety of systems. These terminals were typically connected to the host via low bandwidth serial links (i.e. less than 9.6 Kbps). Output from an application program was passed by the operating system through the serial link to the terminal firmware to be displayed on the user’s screen.

When personal computers were introduced, their computing architecture was a radical change for the industry. In the PC, applications could be executed locally on the user’s desktop, eliminating the requirement that the operating system transmit the output to a slow, external display device. Some of the earliest PC applications were terminal emulators so that a single PC could displace the dumb terminal on the desktop.

The impact of this change in architecture was dramatic and rapid. Applications began to change as developers embraced the assumption of "one user, one PC". Using this dedicated-user assumption, PC applications began leveraging direct access to the hardware for maximum performance. For example, the user interface was optimized by bypassing the operating system entirely and directly addressing the display device.

Then, in the mid-1990s, coincident with the improved performance in newer Intel x86 chipsets, the PC user interface shifted from character-based to graphical. Windows and OS/2 became the predominant operating systems for Intel-based personal computers. In these advanced environments, the operating system took more control of access to and use of the PC hardware. In display management, the operating system was reinserted between the application and the display adapter. As a consequence of the relentlessly-increasing operating system functionality and more complex applications, it became more difficult and more expensive to provide support for the PC environments.

During this same period, zero client technology (still using a serial connection) delivered less and less perceived performance as a direct result of the increased amount of information being passed over that connection for increasingly graphical and "user friendly" applications. Comparing the roughly 2,000 bytes of a character-based screen to the more than 300,000 bytes required to represent the graphic pixels of the smallest Windows display, it became evident that the serial connection no longer provided a viable solution.
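The arithmetic behind that comparison, assuming an 80x25 text mode at one byte per character cell and a 640x480 display at 8-bit color (one byte per pixel) -- reasonable figures for the displays of the period:

```python
# Screen payload: character terminal vs. smallest common Windows display.
char_screen = 80 * 25        # 80x25 text cells, 1 byte each
vga_pixels = 640 * 480       # smallest typical Windows resolution
vga_bytes = vga_pixels * 1   # 8-bit color: 1 byte per pixel

print(char_screen)                 # 2000 bytes
print(vga_bytes)                   # 307200 bytes, "just over 300,000"
print(vga_bytes // char_screen)    # roughly 150x more data per screen
```

Pushing 150 times more data per screen update through the same serial link is why perceived performance collapsed.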

A new type of connectivity hardware was introduced in the late 1980s, generically referred to as a multi-display adapter ("MDA"). These add-on boards contained multiple VGA chipsets and used a variety of cabling options (fiber optic, coaxial, etc.). When adapted to an operating system environment, they all delivered the display data directly to multiple VGA displays at bus speed. During the early to mid 1990s, these multi-display adapters were implemented for use on a variety of flavors of Unix (including SCO), other proprietary operating systems (including PC-MOS, VM386, THEOS) and enhanced DOS operating systems (Concurrent and Multiuser).

These MDAs were the early predecessors of the hardware used by zero client technology today. Today, multi-display hardware uses SVGA/XGA chipsets, supports 1600x1200 resolution in full color and directly delivers the video streams to multiple displays via a variety of high speed transmission media.

The use of this hardware with its bus transfer speed for additional video displays provided the hardware foundation necessary to deliver efficient zero client technology for Windows operating systems.

Definition of Zero Client

A traditional PC has a single display adapter, a single mouse port and a single keyboard controller. A zero client PC Host has multiple display adapters, multiple mouse ports and multiple keyboard controllers. Through system software that resides on the PC Host, multiple virtual machines or sessions are created, each associated with a display adapter, a mouse and a keyboard. Input for the session is read directly from its mouse and keyboard; output is written directly to its display adapter. As in any computing architecture, there are both hardware and software components involved to deliver this advanced functionality.
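The session-to-device binding described above can be sketched as a simple lookup table. The device names below are hypothetical placeholders; the real binding lives in the Host PC's system software and hardware.

```python
# Minimal sketch of zero client sessions: each session owns exactly
# one display adapter, one keyboard and one mouse on the Host PC.
sessions = [
    {"id": 0, "display": "display0", "keyboard": "kbd0", "mouse": "mouse0"},
    {"id": 1, "display": "display1", "keyboard": "kbd1", "mouse": "mouse1"},
]

def route_input(event_device, sessions):
    """Input is read directly from a session's own devices -- no sharing,
    no network protocol between station and session."""
    for s in sessions:
        if event_device in (s["keyboard"], s["mouse"]):
            return s["id"]
    raise LookupError(f"{event_device} is not bound to any session")

print(route_input("mouse1", sessions))  # session 1
```

Because the mapping is fixed and direct, there is no routing or protocol decoding between a station's keystroke and its session.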

Key Components of Zero Client Technology
Host PC - Standard PC with multiple display adapters
Local Station - Standard (or USB) input and output devices (e.g. monitor, mouse, keyboard, audio, Touch Screen, serial, etc.)
High Speed Delivery System - Direct connection or extension
Software Component - Multiuser or Virtualization Software
Hardware Architecture

The Host PC has more than one display adapter (or possibly a display adapter with multiple SVGA chipsets) for support of zero client technology. Each of those SVGA chipsets is associated with a direct connection to a local station, typically from 5 to 500 feet away. One example of a local station consists of a connector box of some kind, into which a monitor, mouse and keyboard are plugged. If local peripherals are used, the connector box will also include signal-decoding circuitry, which multiplexes and demultiplexes the combined video, serial data and parallel data, feeding each signal to and receiving it from the appropriate component.

A vital element of the zero client solution is the ability to transmit a true graphical signal directly to the station’s display, perhaps as much as several hundred feet away. By extending the VGA signal, as opposed to packetizing the video with software and network protocols, the Host PC is not burdened with CPU overhead and the responsiveness of the station’s display is as fast as a standalone PC system.

Software

As indicated earlier, zero client technology has existed in various flavors for many years. However, until the introduction of the application software in late 1996, zero client technology had never been implemented on a Windows 95/98 platform.

Windows 95/98 included preemptive multitasking capabilities and was the first Windows-based platform in which zero client technology could be effectively implemented. In prior versions of Windows (3.x), multiple applications could be open in their own windows, but only one application was active at any given point in time. For example, a user could have Word and Excel windows displayed, but, after beginning a long recalculation in Excel, the user couldn’t switch to Word until the Excel computation was complete.

Using the above example with the preemptive multitasking in Windows 95/98, a user could have both Word and Excel open, start a long recalculation in Excel and then immediately switch to Word to edit a document while the recalculation finished in the background.

At a lower level, a Host PC using zero client technology has system software enhancements that support multiple virtual machines or sessions. Each of these sessions is associated with a display adapter, a mouse, a keyboard and optional audio. As previously mentioned, the system software directly passes input for each virtual machine from its corresponding mouse and keyboard; similarly, output is written directly to the corresponding display adapter.

One of the obvious benefits of the zero client design is very high video performance. The physical presence of a video chipset for each of the virtual machines eliminates the overhead of emulation, packetizing and transmission of graphical orders or video. The degree to which this benefits performance is directly tied to the extent that color and graphics are used by application(s) being executed in that virtual machine.

In addition, performance is improved because all display data is transferred at bus transfer speeds rather than through a network connection. A network connection requires the transmission of data in packets and using some protocol. The effective throughput of a network at any point in time is determined by multiple factors, including the bandwidth and amount of active traffic on the network at that time. The point-to-point transfer of display data directly from a memory structure to a video display can occur in a small fraction of the time required to pass the same data over the network.
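To see the scale of the difference, here is a rough timing sketch, assuming a classic ~133 MB/s PCI bus and a shared 10 Mbit/s Ethernet LAN of that era. Both rates are idealized best cases and ignore protocol and contention overhead, which only widens the gap in practice.

```python
# Rough, hypothetical comparison: one 640x480 8-bit frame (307,200 bytes)
# moved over the local bus vs. over a shared LAN.
frame_bytes = 640 * 480            # 1 byte per pixel at 8-bit color
pci_bytes_per_s = 133_000_000      # ~133 MB/s classic PCI bus (assumed)
lan_bytes_per_s = 10_000_000 // 8  # 10 Mbit/s LAN -> 1.25 MB/s (assumed)

bus_ms = frame_bytes / pci_bytes_per_s * 1000
lan_ms = frame_bytes / lan_bytes_per_s * 1000

print(f"bus transfer:     {bus_ms:.1f} ms")  # about 2.3 ms
print(f"network transfer: {lan_ms:.1f} ms")  # about 245.8 ms
```

Even before packetization and protocol overhead, the bus moves the frame two orders of magnitude faster than the network link.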

Zero client technology also offers simplified installation, configuration and support, by virtue of the use of a single Host PC and multiple stations (each consisting of a monitor, mouse and keyboard) rather than multiple PCs individually configured and combined into a small network.

Thin Client Compared to Zero Client

Thin client technology has received a lot of attention in recent years. In support of an industry focus upon expense reduction and improved manageability of desktop computing, the computer industry has drawn from the experience of the mini/mainframe model of host and terminal. With the thin client architecture, the application moved back to a multi-user host, which transmitted the display information to an intelligent device for presentation to the user.

However, the thin client model ignored a crucial change that occurred in the application domain with adoption of Windows as a standard platform: the move from a character-based to a graphical user interface. The client station must now do substantially more processing than the old "dumb" terminal. Higher bandwidth links are also required for the graphical information. When multimedia is added to the application equation, the effectiveness of thin client technology is severely reduced.

Zero client technology differs from thin client technology in client hardware requirements, display data processing and the data delivery system. A comparison of the processing of graphical commands points to some key differences.

As a baseline, on a standalone PC, Windows passes graphical commands directly to a display driver that interprets them and updates the display.

PC Architecture

Within a thin client host (terminal server), a protocol layer is introduced. Here, Windows passes graphical commands to a protocol layer, usually either Citrix's ICA or Microsoft's RDP. This protocol layer encodes the commands into packets and transmits them over the network to the intelligent client device. At the client end, the protocol layer decodes the commands and passes them to a display driver that interprets them and updates the display. Sun offers a similar capability with its Sun Ray line of products for the Solaris operating system.

Thin Client Architecture

On a zero client system, the process is almost identical to that occurring in the standalone PC, with the single exception that the driver updates the display that corresponds to each virtual machine or session. Within the zero client Host PC, the protocol layer and the transmission of the data in packets are avoided. Therefore, zero client architecture conserves processing power within the Host PC and eliminates client processing entirely.

Zero Client Architecture

The zero client architecture combines the best attributes of the thin client and the standard personal computer architectures. As in the thin client, applications execute on a shared Host PC. This minimizes cost, delivers the highest performance and improves manageability.

Summary

The zero client architecture combines key aspects of the thin client, the NC and the personal computer. As in the thin client model, Windows applications (including browsers) execute on a shared Host PC. This reduces cost and improves control and manageability. As in the NC model, the zero client stations are lowest cost, secure and environmentally efficient. As in the personal computer model, the display adapter resides in the same computer as the application. This preserves performance because it eliminates the need for a network transmission protocol that degrades CPU processing and injects delays due to network overhead. When all the strengths and weaknesses of each desktop configuration alternative are considered, the zero client technology offers flexible and valuable options to users seeking minimized costs of ownership and improved control.



Article submitted by: http://www.sundenc.com



Sunday, 11 August 2013

VDI hardware comparison: Thin vs. thick vs. zero clients

When it comes to virtual desktop infrastructure, administrators have a lot of choices. You may have wondered about the differences between VDI software options, remote display protocols or all the licenses out there. In this series, we tackle some of the biggest head-scratchers facing VDI admins to help you get things straight.

When you deploy VDI, you need to figure out what hardware your virtual desktops will run on.

To access virtual desktops, you have a lot of choices: thin clients, zero clients and smart clients -- not to mention tablets and mobile devices. Thin clients and other slimmed-down devices rely on a network connection to a central server for full computing and don't do much processing on the hardware itself. They differ from thick clients -- basically traditional PCs -- which handle all the functionality of a server on the desktop itself.

Understanding the benefits, challenges and cost implications of all these VDI hardware options will help you make the right choice. Let's get this straight:



Thick clients

It's possible to use thick clients for desktop virtualization, but many organizations don't, because doing so doesn't cut down on overall hardware costs and still requires locally installed software. If you use traditional PCs to connect to virtual desktops, you don't get many of the benefits of VDI, such as reduced power consumption, central management and increased security.

How thick clients compare to thin

Since a thick client is basically a PC running thin client software, it is usually more costly than a thin client device. Plus, thick clients have hard drives and media ports, making them less secure than thin clients. Finally, thin clients tend to require less maintenance than thick ones, although thin client hardware problems can sometimes lead to having to replace the entire device.

Thin clients

With thin client hardware, virtual desktops are hosted in the data center and the thin client simply serves as a terminal to the back-end server. Thin clients are generally easy to install, make application access simpler, improve security and reduce hardware needs by allowing admins to repurpose old PCs.

What to look for in thin client devices

Thin clients are meant to be small and simple, so the more advanced features you add, the more expensive they get. As you choose thin client devices, consider whether you need capabilities such as video conferencing and multi-monitor support. You should also take into account your remote display protocol and how much display processing your back end can supply.

Aside from being cheap and uncomplicated, thin clients should also offer centralized management. For instance, you can automatically apply profile policies to groups of thin clients with similar configurations. That tends to be easier than individual manual management. Plus, you want your VDI hardware to be simple enough for nonveteran IT staff or those at remote branch offices to be able to deploy.

Zero clients

Zero clients are gaining ground in the VDI market because they're even slimmer and more cost-effective than thin clients. These are client devices that require no configuration and have nothing stored on them. Vendors including Dell Wyse, Fujitsu, and SUNDE offer zero client hardware.

Pros and cons of zero clients

So what are the benefits of this kind of VDI hardware? First off, zero clients can be less expensive than thick and thin clients. Plus, they use less power and can simplify client device licensing.

Still, there's a catch: Vendors often market zero clients as requiring no management or maintenance, which isn't always true. Some products do require software updates or consume memory and other resources. In addition, zero clients tend to be proprietary, so organizations can run into vendor lock-in.



Contact us: http://www.sundenc.com

Article submitted by: SUNDE VDI delivers an extremely high performance virtual desktop for users including rich multi-media, full screen 1080P streaming video and Flash, dynamic graphics, and seamless responsiveness.

Sunday, 4 August 2013

The Most Affordable VDI-- SUNDE's VDI

VDI, or virtual desktop infrastructure, is quite a popular term today. Simply put, VDI is a system that runs a desktop operating system in a virtual infrastructure, with the entire system running on a centralized server. In short, VDI is a high-end server-based computing system that simplifies work. It is the long list of VDI's benefits that is making people take an interest in it.


As soon as an organization installs a VDI device, a template is created. It features a virtual workstation that integrates an operating system, applications and security settings. This virtual workstation can be launched anytime and anywhere a user wants, which saves time because there is no need to set up the OS and install software every time before starting work.


• Most users find it difficult to keep track of software updates. While some applications have auto-update options, others need manual management. With VDI, patch management is executed right on time, regardless of whether the software updates automatically or requires manual management.


• It is necessary to update the security system of every computer as frequently as possible, to protect sensitive data from newly emerging cyber threats. But attending to every standalone PC is time consuming. With VDI, a user can update the security of the entire virtual workstation in one go!


In spite of these advantages, VDI still has surprisingly few users. The reasons behind this are several:

• Cost is the major factor that keeps people away from VDI services. Big players charge exorbitant fees for management software, and separate fees are often imposed for smooth performance over the remote desktop display protocol.

• Deploying and maintaining the system is a complex affair and requires expertise.

• The performance of virtual desktop technology is often not up to the mark; clients feel it fails to provide seamless results.

Users, especially schools and SMEs, are on the lookout for VDI services with the following features:


• The performance of the virtual workstation should be similar to the experience of working on a high-quality standalone PC.

• An untrained person should be able to deploy and manage the VDI.

• Since one of the main reasons users switch to VDI is to minimize infrastructure costs, they look for solutions that are cost effective and do not add to their expenses.


The good news is that service providers like SUNDE offer flawless performance at a highly economical rate. Users make a one-time payment for SUNDE's VDI, and the service provider includes complimentary back-end software with every purchased VDI device. Moreover, there is no extra charge or hidden cost for future updates of software, device firmware or security patches.


Most companies do not provide actual zero client solutions featuring integrated OS, memory and CPU capacity. Providers like SUNDE, on the other hand, offer actual zero client solutions at costs far more economical than many other VDI providers'. Performance-wise, SUNDE compares favorably with Citrix's ICA with HDX and VMware's PCoIP. You will appreciate that SUNDE's Diana Device system enables users to play multiple videos on the virtual desktop without any compromise in quality.


The quality of service of the newcomer SUNDE is as excellent as that of the big players, yet installing and managing SUNDE's VDI is simple and fast. Users therefore do not need any additional training, nor do they need to hire skilled IT staff to manage it.


Article submitted by: SUNDE offers actual zero client solutions at costs far more economical than many other VDI providers.