Sunday, 29 September 2013

The Right Display Protocol For The VDI Solution

[Image: SUNDE Diana zero client and mouse]

This write-up assumes that the reader is familiar with the basics of VDI architecture. It delves further into the following questions:

• Why is a good remote display protocol needed?
• What should be expected of an efficient display protocol?
• Who are the popular remote display protocol vendors today?
• What makes the SUNDE-VDI Protocol an obvious choice for VDI solutions?


For a moment, let us keep the rest of the components of the VDI architecture backstage and pick up the single component that establishes communication between the server and the endpoint device: the connection broker. Every connection broker follows a specific set of rules to bridge this connection. In other words, the connection broker, which is a software program, uses a remote display protocol to carry the output from the server to the endpoint device.

A good Remote Display Protocol is expected to be equipped with the following properties:

1. It should ensure reliable data delivery. Transmission Control Protocol (TCP) is a well-known protocol that sends the generated output in small packets of data. It keeps the network connection open until all the data is transferred and retransmits any packets lost along the way. It is a highly reliable protocol.

2. The data delivery must be fast, especially for media-rich applications: audio and video should stay synchronized with minimal delay. User Datagram Protocol (UDP), unlike TCP, does not sequence or retransmit data packets, and is therefore able to deliver data faster (see the sketch after this list).

3. There must be no complex hardware extension involved. Many of the display protocols in use today were originally designed for Terminal Servers and needed several hardware extensions before they could be used for VDI solutions.
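
To make the trade-off in points 1 and 2 concrete, here is a minimal Python sketch using only the standard library's socket module; the host and port are placeholders. TCP gives an ordered, retransmitted byte stream, while UDP fires independent datagrams with no delivery guarantee.

```python
import socket

def send_reliable(host: str, port: int, payload: bytes) -> None:
    """TCP: connection-oriented and ordered; the kernel retransmits
    lost segments, so delivery is reliable but can add latency."""
    with socket.create_connection((host, port)) as s:
        s.sendall(payload)

def send_fast(host: str, port: int, payload: bytes) -> None:
    """UDP: connectionless; no ordering or retransmission, so a
    datagram may be lost, but nothing stalls waiting for it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (host, port))
```

A display protocol that cares about both reliability and speed typically sends control traffic the first way and media traffic the second, which is exactly the combination discussed later in this article.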

The Popular Display Protocols:

Choosing the right protocol depends mainly on the multimedia requirements of the endpoint device. Under the VDI architecture, three display protocols are popular today, each with its own limitations:

1. Microsoft’s Remote Desktop Protocol (RDP) / RemoteFX:
RDP has gone through continuous upgrades to keep pace with demanding multimedia requirements. RemoteFX is the improved protocol version of RDP for later versions of Windows Server. However, it is not considered the best option for continually growing multimedia requirements; without the required upgrades, the user experience falls short.

2. VMware’s PC-over-IP (PCoIP):
With low bandwidth consumption over LAN and WAN, PCoIP is preferred over other display protocols. Its performance is also better because it uses UDP, unlike most other display protocols, which rely on TCP. However, a major limitation of VMware’s PCoIP is that it cannot be used with Windows Server 2012, which is widely used at present.

3. Citrix’s HDX:
Citrix’s remote display protocol, initially called Independent Computing Architecture (ICA), performed well and was relaunched as HDX as part of the company’s 2009 suite release. It offered a very good end-user experience at low bandwidth, with multimedia redirection, and later versions of HDX grew even richer in multimedia support. Its biggest limitation, however, was its incompatibility with later versions of Windows Server, which confined Citrix to a smaller user base.

While these companies are still researching how to offer a PC-like multimedia experience under the VDI architecture, SUNDE has grown by leaps and bounds in this area:

1. The SUNDE-VDI Protocol uses both TCP and UDP, retaining both the reliability and the speed of the data stream transfer (a sketch of one such hybrid arrangement follows this list).

2. The graphics acceleration in the SUNDE-VDI protocol is efficient enough to handle rich multimedia, including graphics and animation.

3. This is a server-rendering protocol, which means that it depends upon the host for its operations. This eliminates the need for a CPU and large memory at the endpoint device, making it a good option for zero clients as well.

4. The SUNDE-VDI Protocol comes as part of the vPointServer software, the connection broker, which uses the VirtualBox platform as its hypervisor. Another part of this package is the Diana Zero Client, the endpoint device from SUNDE. Delivered as a single package, it takes care of compatibility issues between the components and makes it a perfect choice as a display protocol.
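
SUNDE does not publish the wire format of its protocol, so the following is only an illustrative sketch of how the TCP-plus-UDP combination in point 1 is commonly arranged: a reliable TCP channel for session control and input events, and a loss-tolerant UDP channel for the media stream. Every name, port and framing detail here is invented for the sketch.

```python
import socket

class HybridChannel:
    """Toy hybrid transport: TCP for what must not be lost,
    UDP for what must arrive quickly or not at all."""

    def __init__(self, host: str, control_port: int, media_port: int):
        # Reliable channel: session setup, keyboard and mouse events.
        self.control = socket.create_connection((host, control_port))
        # Fast channel: screen updates and audio, tolerant of loss.
        self.media = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.media_addr = (host, media_port)

    def send_input_event(self, event: bytes) -> None:
        self.control.sendall(event)          # retransmitted if lost

    def send_frame_chunk(self, chunk: bytes) -> None:
        self.media.sendto(chunk, self.media_addr)  # a late frame is a dropped frame
```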




For more details, please visit: http://www.sundenc.com/support/knowledge/The%20Right%20Display%20Protocol%20For%20The%20VDI%20Solution.html


Tuesday, 17 September 2013

What Is Zero Client & What Is A True Zero Client?

Zero client, also known as ultra thin client, is a server-based computing model in which the end user's computing device has no local storage. A zero client can be contrasted with a thin client, which retains the operating system and each device's specific configuration settings in flash memory.

A typical zero client product is a small box that serves to connect a keyboard, mouse, monitor and Ethernet connection to a remote server. The server, which hosts the client's operating system (OS) and software applications, can be accessed wirelessly or with cable. Zero clients are often used in a virtual desktop infrastructure (VDI) environment.

Benefits of zero client computing:
-- Power usage can be as low as 1/50th of fat client requirements.
-- Devices are much less expensive than PCs or thin clients.
-- Efficient and secure means of delivering applications to end users.
-- No software at the client means that there is no vulnerability to malware.
-- Easy administration.
-- In a VDI environment, administrators can reduce the number of physical PCs or blades and run multiple virtual PCs on server class hardware.

The term zero client is often misapplied in thin client vendor marketing materials. True zero client endpoints do no local processing and have no client operating systems, drivers, software, storage, or even any configuration settings. They are completely stateless and management-free. Zero clients mean zero endpoint management – absolutely zero.

Some thin client vendors have even tried to make their endpoints look “zero” by keeping the client operating system image on the hard disk of a separate “streaming” appliance, requiring that users wait while it is downloaded to the endpoint’s hard disk or flash storage before use. Unfortunately, this only makes the entire VDI architecture from these vendors even more complex and fragile.

To see if vendor claims of “zero-ness” are valid, apply these tests (a compact restatement follows the list):
1. Does the endpoint include a CPU of any kind? Any RAM or Flash Memory? Any storage devices or moving parts at all?
2. Are you forced to configure the endpoint in any way before use?
3. Do you need to reconfigure the endpoints before you are able to swap them between users?
4. Does the endpoint need to download an operating system image or any software before you can use it?
5. Are you unable to use the native Windows drivers that Microsoft or the manufacturer supply to connect a new peripheral?
6. Does the endpoint require you to use an embedded management tool?
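
Read together, the six tests reduce to a single rule: if the answer to any of them is yes, the endpoint is not a true zero client. Here is that rule restated as a small Python predicate; the parameter names are simply paraphrases of the questions above.

```python
def is_true_zero_client(has_cpu_ram_or_storage: bool,
                        needs_configuration_before_use: bool,
                        needs_reconfiguration_to_swap_users: bool,
                        downloads_os_or_software: bool,
                        blocks_native_windows_drivers: bool,
                        requires_embedded_management_tool: bool) -> bool:
    # A single "yes" anywhere disqualifies the device.
    return not any([has_cpu_ram_or_storage,
                    needs_configuration_before_use,
                    needs_reconfiguration_to_swap_users,
                    downloads_os_or_software,
                    blocks_native_windows_drivers,
                    requires_embedded_management_tool])
```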


Article submitted by: http://www.sundenc.com/support/knowledge/What%20Is%20Zero%20Client.html


Sunday, 1 September 2013

What’s the difference between virtualization and cloud computing?

Virtualization is a computing technology that enables a single user to access multiple physical devices. Another way to look at it is a single computer controlling multiple machines, or one operating system utilizing multiple computers to analyze a database. Virtualization may also be used for running multiple applications on each server rather than just one; this in turn reduces the number of servers companies need to purchase and manage. It enables you to consolidate your servers and do more with less hardware. It also lets you support more users per piece of hardware, deliver applications, and run applications faster.

Cloud computing offers scalable infrastructure and software off site, saving labor, hardware, and power costs. Financially, the cloud’s virtual resources are typically cheaper than dedicated physical resources connected to a personal computer or network. With cloud computing, the software programs you use aren’t run from your personal computer, but rather are stored on servers housed elsewhere and accessed via the Internet. If your computer crashes, the software is still available for others to use. Simply, the cloud is a collection of computers and servers that are publicly accessible via the Internet.

One way to look at it is that virtualization is basically one physical computer pretending to be many computing environments whereas cloud computing is many different computers pretending to be the one computing environment (hence user scaling). Virtualization provides flexibility that is a great match for cloud computing. Moreover, cloud computing can be defined based on the virtual machine containers created with virtualization. Virtualization is not always necessary in cloud computing; however, you can use it as the basis. Cloud computing is an approach for the delivery of services while virtualization is one possible service that could be delivered. Large corporations with little downtime tolerance and airtight security requirements may find that virtualization fits them best. Smaller businesses are more likely to profit more with cloud computing, allowing them to focus on their mission while leaving IT chores to those who can do more for less.

Plainly, virtualization provides more servers on the same hardware, and cloud computing provides metered resources that you pay for as you use them. While it is not uncommon to hear people discuss them interchangeably, they are very different approaches to solving the problem of maximizing the use of available resources. They differ in many ways, which leads to some important considerations when selecting between the two.


Article submitted by: http://www.sundenc.com

Sunday, 25 August 2013

What desktop virtualization really brings

Depending on whom you talk to, desktop virtualization is either the hottest trend in IT or an expensive notion with limited appeal

Desktop virtualization harks back to the good old mainframe days of centralized computing while upholding the fine desktop tradition of user empowerment. Each user retains his or her own instance of desktop operating system and applications, but that stack runs in a virtual machine on a server -- which users can access through a low-cost thin client similar to an old-fashioned terminal.

The argument in favor of desktop virtualization is powerful: What burns through more hands-on resources or incurs more risk than desktop computers? Even with remote desktop management, admins must invade cubicles and shoo away employees when it's time to upgrade or troubleshoot. And each desktop or laptop provides a fat target for hackers and an opportunity to steal data. But if you run desktops as virtual machines on a server, you can manage and secure all those desktop user environments in one central location. Patches and other security measures, along with hardware or software upgrades, demand much less overhead. And the risk that users will make mischief or mistakes that breach security drops dramatically.

The argument against desktop virtualization is almost as strong. Overhead costs conserved through central management get cancelled out by the need for powerful servers, virtualization software licenses, and additional network bandwidth. Plus, the cost of client hardware and Microsoft software licenses stays roughly the same, while the user experience -- at least today -- seldom lives up to user expectations. And then the kicker: How are users supposed to compute when they're disconnected from the network?

Decisions about whether or in what form to adopt desktop virtualization become a whole lot easier when you understand the basic variants and technologies. Here's what you need to know:

1. Desktop virtualization really is virtualization

Just like server virtualization, desktop virtualization relies on a thin layer of software known as a hypervisor, which runs on the server hardware and provides a platform on which administrators deploy and manage virtual machines. With desktop virtualization, each user gets a virtual machine that contains a separate instance of the desktop operating system (almost always Windows) and whatever applications have been installed. To the desktop OS, the applications, and the user, the VM does a pretty good job of impersonating a real desktop machine.

2. Traditional thin client solutions are not desktop virtualization

By far the most popular form of server-based, thin client computing relies on Microsoft Terminal Services (recently renamed Remote Desktop Services), which lets multiple users share the same instance of Windows. Terminal Services is often paired with Citrix XenApp (formerly known as Presentation Server and, before that, MetaFrame), which adds management features and improves performance -- no hypervisors or VMs here. The main drawbacks: Some applications run poorly or not at all in this shared environment, and individuals can't customize their user experience the way they can with virtual machines or real desktops. Nonetheless, people often refer to traditional thin client solutions as desktop virtualization because the basic goal is the same: to consolidate desktop computing at the server.

3. Desktop virtualization and VDI mean pretty much the same thing

VMware was first to promote the VDI (virtual desktop infrastructure) terminology, but Microsoft and Citrix have followed suit, offering VDI solutions of their own based on the Hyper-V and XenServer hypervisors, respectively. Think of it this way: VDI refers to the basic architecture for desktop virtualization, where a VM for each user runs on the server.

4. Don't confuse desktop virtualization with ... desktop virtualization

The desktop virtualization we're talking about refers to server-based computing. But "desktop virtualization" also refers to running virtual machines on desktop systems, using such desktop virtualization solutions as Microsoft Virtual PC, VMware Fusion, or Parallels Desktop. Probably the most common use of this sort of desktop virtualization is running Windows in a Parallels or Fusion VM on the Mac. In other words, this has nothing to do with server-based computing.

5. No server-based computing solution supports the same range of hardware as a desktop

The Windows folks in Redmond spend half their lives ensuring compatibility with every printer, graphics card, sound card, scanner, and quirky USB device. With thin clients, your support for hardware is going to be pretty generic, and some items won't work at all. Other limitations are introduced by the fact that users interact with their VMs over the network. Multimedia, videos, and Flash apps can be problematic.

6. VDI solutions cost more (and deliver more) than traditional thin client solutions

Think about it: With VDI, each virtual machine needs its own slice of memory, storage, and processing power to run a user's desktop environment, while in the old-fashioned Terminal Services model, users share almost everything except data files. VDI also means a separate Windows license for each user, while Terminal Services-style setups give you a break with Microsoft Client Access Licenses. Plus, VDI incurs greater network traffic, which may add a network upgrade to the purchase order for beefy server hardware.
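
A hypothetical back-of-the-envelope calculation makes the memory point visible; every figure below is invented for illustration, not a sizing recommendation.

```python
# All numbers are hypothetical, chosen only to show why VDI needs
# beefier servers than a shared Terminal Services setup.
server_ram_gb = 96
ram_per_vdi_vm_gb = 2          # each user's VM gets a private slice
ram_per_ts_session_gb = 0.5    # sessions share one Windows instance

vdi_desktops = server_ram_gb // ram_per_vdi_vm_gb          # 48
ts_sessions = int(server_ram_gb / ram_per_ts_session_gb)   # 192
print(vdi_desktops, ts_sessions)
```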

In return for that extra cost, along with a better user experience, VDI delivers greater manageability and availability. As with server virtualization, you can migrate virtual machines among servers without bringing down those VMs, perform VM snapshots for quick recovery, run automated load balancing, and more. And if a virtual machine crashes, that doesn't affect other VMs; with Terminal Services, that single instance of Windows is going to bring down every connected user when it barfs.

7. Dynamic VDI solutions improve efficiency

In a standard VDI installation, each user's virtual machine persists from session to session; as the number of users grows, so do storage and administration requirements. In a dynamic VDI architecture, when users log in, virtual desktops assemble themselves on the fly by combining a clone of a master image with user profiles. Users still get a personalized desktop, while administrators have fewer operating system and application instances to store, update, and patch.
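
One way to picture the "clone of a master image" step is with copy-on-write disk images. The sketch below uses qemu-img backing files purely as an illustration; commercial VDI brokers use their own provisioning machinery, and every path in this snippet is hypothetical.

```python
import subprocess

MASTER = "/images/master-desktop.qcow2"   # patched once, shared by all users

def provision_desktop(user: str) -> str:
    """Create a copy-on-write clone of the master image for one login.
    The clone stores only the blocks that diverge from the master."""
    clone = f"/images/sessions/{user}.qcow2"
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-b", MASTER, "-F", "qcow2", clone],
        check=True,
    )
    # A profile-management layer would now overlay the user's settings.
    return clone
```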

8. Application virtualization eases VDI requirements even more

When an application is virtualized, it's "packaged" with all the little operating system files and registry entries necessary for execution, so it can run without having to be installed (that is, no changes need be made to the host operating system). In a dynamic VDI scenario, admins can set up virtualized applications to be delivered to virtual machines at runtime, rather than adding those apps to the master image cloned by VMs. This reduces the footprint of desktop virtual machines and simplifies application management. If you add application streaming technology, virtualized applications appear to start up faster, as if they were installed in the VM all along.

9. Client hypervisors will let you run virtual machines offline

A client hypervisor installs on an ordinary desktop or laptop so that you can run a "business VM" containing your OS, apps, and personal configuration settings. Talk about full circle: Why would you want all that in a virtual machine instead of installed on the desktop itself? Two reasons: One, it's completely secure and separate from whatever else may be running on that desktop (such as a Trojan some clueless user accidentally downloaded) and two, you get all the virtualization management advantages, including VM snapshots, portability, easy recovery, and so on. Client hypervisors also make VDI more practical. You can run off with your business virtual machine on a laptop and compute without a connection; then when you connect to the network again, the client VM syncs with the server VM.

Client hypervisors point to a future where we bring our own computers to work and download or sync our business virtual machines to start the day. Actually, you could use any computer with a compatible client hypervisor, anywhere. The operative word is "future" -- although Citrix has released a "test kit" version of its client hypervisor, and VMware is expected to release its own early version soon, shipping versions will not arrive before 2011.

The long march to the server side

Meanwhile, a completely different form of server-based computing continues to gain traction: the variant of cloud computing known as SaaS (software as a service), where service providers maintain applications and user data and deliver everything through the browser. A prime example is Google's campaign for Google Docs, encouraging users to forget about upgrading to Office 2010 and adopt Google's suite of productivity apps instead. Plus, Google's Chrome OS promises to create entire desktop environments in the cloud that retain user personalization.

Very likely, no big winner will emerge in server-based computing. Old-style Terminal Services setups will continue to crank along for offices harboring users with narrow, simple needs. True desktop virtualization on the VDI model will make sense where security and manageability are paramount, such as widely distributed organizations that use lots of contractors. And where far-flung collaboration is key, SaaS will flourish, because anyone with a Web browser can join the party. Conventional desktops may never disappear, but one way or another, the old centralized model of computing is making a comeback.



Article submitted by: SUNDE, global provider of innovative Terminal Services and Virtual Desktop Infrastructure (VDI) solutions paired with zero clients to help customers dramatically reduce the cost and complexity of desktop computing.

Sunday, 18 August 2013

Zero Client and Thin Client Technology: History, Use and Critical Comparison

Introduction

The debate over the strengths and weaknesses of thin clients versus fat clients in a distributed computing environment has gone on for many years. Thin clients have been highlighted as a preferred method for information publishing across the enterprise and as a key tool in the ongoing struggle to reduce ownership costs for information technology. In late 2003, unsatisfied with its strategic direction and relieved of the most severe anti-trust threats, Microsoft began to reverse the technology pendulum back toward fat-client architectures as it announced strategic plans to embed more functionality within its Windows client operating system. Overlooked in many discussions of industry trends is the "Zero Client", a technology which offers the benefits of fat clients while delivering equivalent cost of ownership reductions and faster performance than the fastest thin clients.

Definition and Description

The zero client ("station") is a set of components (monitor, keyboard, mouse), none of which has independently programmable intelligence, that relies on a centralized CPU ("Host PC") for all program execution and information processing. The connection between the zero client and the Host PC is a direct, point-to-point connection that operates at bus speed, requiring no network protocol. Zero clients are typically implemented in clusters, using a "star-like" configuration around the Host PC. Each cluster can function either as a network component of a distributed computing system or as a self-contained, small-group system. When combined with the high performance of the bus-speed delivery system, zero client technology offers an unequalled platform for small-group, transaction-based systems accessing a shared database.

Since a zero client uses low-cost component hardware, with no local intelligence or processing, its cost per seat is similar to that of network computers. Likewise, zero clients offer a single point location - the Host PC - for upgrade, maintenance and support, thus drastically reducing licensing and lifetime system costs.

History of Zero Client Technology

Zero client technology has its earliest roots in mini/mainframe computing, where computing tasks and program execution were centralized and information was sent and displayed to multiple users through terminal devices that lacked programmable intelligence, ergo, "dumb terminals" (later renamed "mainframe interactive terminals").

Character-based terminals such as the initial 3270, 5250 and VT52/VT100 stations provided the user interface on a variety of systems. These terminals were typically connected to the host via low bandwidth serial links (i.e. less than 9.6 Kbps). Output from an application program was passed by the operating system through the serial link to the terminal firmware to be displayed on the user’s screen.

When personal computers were introduced, their computing architecture was a radical change for the industry. In the PC, applications could be executed locally on the user’s desktop, eliminating the requirement that the operating system transmit the output to a slow, external display device. Some of the earliest PC applications were terminal emulators so that a single PC could displace the dumb terminal on the desktop.

The impact of this change in architecture was dramatic and rapid. Applications began to change as developers embraced the assumption of "one user, one PC". Using this dedicated-user assumption, PC applications began leveraging direct access to the hardware for maximum performance. For example, the user interface was optimized by bypassing the operating system entirely and directly addressing the display device.

Then, in the mid-1990s, coincident with the improved performance in newer Intel x86 chipsets, the PC user interface shifted from character-based to graphical. Windows and OS/2 became the predominant operating systems for Intel-based personal computers. In these advanced environments, the operating system took more control of access to and use of the PC hardware. In display management, the operating system was reinserted between the application and the display adapter. As a consequence of the relentlessly-increasing operating system functionality and more complex applications, it became more difficult and more expensive to provide support for the PC environments.

During this same period, zero client technology (still using a serial connection) delivered less and less perceived performance, a direct result of the increased amount of information being passed over that connection for increasingly graphical and "user friendly" applications. Comparing the fewer than 2,000 bytes of a character-based screen to the just over 300,000 bytes needed to represent the graphic pixels in the smallest Windows display, it became evident that the serial connection no longer provided a viable solution.
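
The arithmetic behind those two figures is worth spelling out, assuming the usual readings of the terms: an 80x25 character screen at one byte per cell, and a 640x480 display at 8 bits per pixel.

```python
text_bytes = 80 * 25             # 2,000 bytes per character screenful
vga_bytes = 640 * 480 * 1        # 307,200 bytes at 8 bits per pixel
print(vga_bytes // text_bytes)   # roughly a 150x jump per screen refresh
```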

A new type of connectivity hardware was introduced in the late 1980s, generically referred to as a multi-display adaptor ("MDA"). These add-on boards contained multiple VGA chipsets and used a variety of cabling options (fiber optic, coaxial, etc.). When adapted to an operating system environment, they all delivered the display data directly to multiple VGA displays at bus speed. During the early to mid 1990s, these multi-display adapters were implemented for use on a variety of flavors of Unix (including SCO), other proprietary operating systems (including PC-MOS, VM386, THEOS) and enhanced DOS operating systems (Concurrent and Multiuser).

These MDAs were the early predecessors of the hardware used by zero client technology today. Today, multi-display hardware uses SVGA/XGA chipsets, supports 1600x1200 resolution in full color and directly delivers the video streams to multiple displays via a variety of high speed transmission media.

The use of this hardware with its bus transfer speed for additional video displays provided the hardware foundation necessary to deliver efficient zero client technology for Windows operating systems.

Definition of Zero Client

A traditional PC has a single display adapter, a single mouse port and a single keyboard controller. A zero client PC Host has multiple display adapters, multiple mouse ports and multiple keyboard controllers. Through system software that resides on the PC Host, multiple virtual machines or sessions are created, each associated with a display adapter, a mouse and a keyboard. Input for the session is read directly from its mouse and keyboard; output is written directly to its display adapter. As in any computing architecture, there are both hardware and software components involved to deliver this advanced functionality.
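
A toy model of that one-to-one binding between sessions and physical devices might look like the following; the device names are invented, and the real system software does this at the driver level rather than in Python.

```python
from dataclasses import dataclass

@dataclass
class Station:
    display_adapter: str   # e.g. one SVGA chipset on the Host PC
    keyboard: str
    mouse: str

# Each session is bound to its own adapter, keyboard and mouse.
sessions = {
    1: Station("vga0", "kbd0", "mouse0"),
    2: Station("vga1", "kbd1", "mouse1"),
}

def route_input(session_id: int, event: str) -> None:
    st = sessions[session_id]
    print(f"session {session_id}: {event} read from {st.keyboard}")
```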

Key Components of Zero Client Technology
Host PC - Standard PC with multiple display adapters
Local Station - Standard (or USB) input and output devices (e.g. monitor, mouse, keyboard, audio, Touch Screen, serial, etc.)
High Speed Delivery System - Direct connection or extension
Software Component - Multiuser or Virtualization Software

Hardware Architecture

The Host PC has more than one display adapter (or possibly a display adapter with multiple SVGA chipsets) to support zero client technology. Each of those SVGA chipsets is associated with a direct connection to a local station, typically from 5 to 500 feet away. One example of a local station consists of a connector box of some kind, into which a monitor, mouse and keyboard are plugged. If local peripherals are used, the connector box will also include signal decoding to multiplex and demultiplex the combined video, serial and parallel data, feeding each signal to and receiving it from the appropriate component.

A vital element of the zero client solution is the ability to transmit a true graphical signal directly to the station’s display, perhaps as much as several hundred feet away. By extending the VGA signal, as opposed to packetizing the video with software and network protocols, the Host PC is not burdened with CPU overhead and the responsiveness of the station’s display is as fast as a standalone PC system.

Software

As indicated earlier, zero client technology has existed in various flavors for many years. However, until the introduction of the Application software in late 1996, zero client technology had never been implemented on a Windows 95/98 platform.

Windows 95/98 included preemptive multitasking capabilities and was the first Windows-based platform in which zero client technology could be effectively implemented. In prior versions of Windows (3.x), multiple applications could be open in their own windows, but only one application was active at any given point in time. For example, a user could have Word and Excel windows displayed, but, after beginning a long recalculation in Excel, the user couldn’t switch to Word until the Excel computation was complete.

Using the above example with the preemptive multitasking in Windows 95/98, a user could have both Word and Excel open, start a long recalculation in Excel and then immediately switch to Word to edit a document while the recalculation finished in the background.

At a lower level, a Host PC using zero client technology has system software enhancements that support multiple virtual machines or sessions. Each of these sessions is associated with a display adapter, a mouse, a keyboard and optional audio. As previously mentioned, the system software directly passes input for each virtual machine from its corresponding mouse and keyboard; similarly, output is written directly to the corresponding display adapter.

One of the obvious benefits of the zero client design is very high video performance. The physical presence of a video chipset for each of the virtual machines eliminates the overhead of emulation, packetizing and transmission of graphical orders or video. The degree to which this benefits performance is directly tied to the extent that color and graphics are used by application(s) being executed in that virtual machine.

In addition, performance is improved because all display data is transferred at bus transfer speeds rather than through a network connection. A network connection requires the transmission of data in packets and using some protocol. The effective throughput of a network at any point in time is determined by multiple factors, including the bandwidth and amount of active traffic on the network at that time. The point-to-point transfer of display data directly from a memory structure to a video display can occur in a small fraction of the time required to pass the same data over the network.

Zero client technology also offers simplified installation, configuration and support, by virtue of the use of a single Host PC and multiple stations (each consisting of a monitor, mouse and keyboard) rather than multiple PCs individually configured and combined into a small network.

Thin Client Compared to Zero Client

Thin client technology has received a lot of attention in recent years. In support of an industry focus upon expense reduction and improved manageability of desktop computing, the computer industry has drawn from the experience of the mini/mainframe model of host and terminal. With the thin client architecture, the application moved back to a multi-user host, which transmitted the display information to an intelligent device for presentation to the user.

However, the thin client model ignored a crucial change that occurred in the application domain with adoption of Windows as a standard platform: the move from a character-based to a graphical user interface. The client station must now do substantially more processing than the old "dumb" terminal. Higher bandwidth links are also required for the graphical information. When multimedia is added to the application equation, the effectiveness of thin client technology is severely reduced.

Zero client technology differs from thin client technology in client hardware requirements, display data processing and the data delivery system. A comparison of the processing of graphical commands points to some key differences.

As a baseline, on a standalone PC, Windows passes graphical commands directly to a display driver that interprets them and updates the display.

PC Architecture

Within a thin client host (terminal server), a protocol layer is introduced. Here, Windows passes graphical commands to a protocol layer, usually either Citrix's ICA or Microsoft's RDP. This protocol layer encodes the commands into packets and transmits them over the network to the intelligent client device. At the client end, the protocol layer decodes the commands and passes them to a display driver that interprets them and updates the display. Sun offers a similar capability with its Sun Ray line of products for the Solaris operating system.

Thin Client Architecture

On a zero client system, the process is almost identical to that occurring in the standalone PC, with the single exception that the driver updates the display that corresponds to each virtual machine or session. Within the zero client Host PC, the protocol layer and the transmission of the data in packets are avoided. Therefore, zero client architecture conserves processing power within the Host PC and eliminates client processing entirely.
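
The contrast between the two display paths can be compressed into a few lines of toy Python; every helper below is a stand-in, but the shape of the two call chains mirrors the architectures just described.

```python
def encode(cmd: str) -> bytes: return cmd.encode()    # ICA/RDP-style encoding
def transmit(pkt: bytes) -> bytes: return pkt         # the network hop
def decode(pkt: bytes) -> str: return pkt.decode()    # client-side CPU work

def draw_on_screen(cmd: str) -> None:
    print("display updated:", cmd)

def thin_client_draw(cmd: str) -> None:
    # Host encodes, network carries, client decodes, then draws.
    draw_on_screen(decode(transmit(encode(cmd))))

def zero_client_draw(cmd: str) -> None:
    # No protocol layer, no packets: a direct write to the session's
    # dedicated display adapter at bus speed.
    draw_on_screen(cmd)
```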

Zero Client Architecture

The zero client architecture combines the best attributes of the thin client and standard personal computer architectures. As in the thin client, applications execute on a shared Host PC. This minimizes cost, delivers the highest performance and improves manageability.

Summary

The zero client architecture combines key aspects of the thin client, the NC and the personal computer. As in the thin client model, Windows applications (including browsers) execute on a shared Host PC. This reduces cost and improves control and manageability. As in the NC model, the zero client stations are low in cost, secure and environmentally efficient. As in the personal computer model, the display adapter resides in the same computer as the application. This preserves performance because it eliminates the need for a network transmission protocol that degrades CPU processing and injects delays due to network overhead. When all the strengths and weaknesses of each desktop configuration alternative are considered, zero client technology offers flexible and valuable options to users seeking minimized cost of ownership and improved control.



Article submitted by: http://www.sundenc.com



Sunday, 11 August 2013

VDI hardware comparison: Thin vs. thick vs. zero clients

When it comes to virtual desktop infrastructure, administrators have a lot of choices. You may have wondered about the differences between VDI software options, remote display protocols or all the licenses out there. In this series, we tackle some of the biggest head-scratchers facing VDI admins to help you get things straight.

When you deploy VDI, you need to figure out what hardware your virtual desktops will run on.

To host virtual desktops, you have a lot of choices: thin clients, zero clients and smart clients -- not to mention tablets and mobile devices. Thin clients and other slimmed-down devices rely on a network connection to a central server for full computing and don't do much processing on the hardware itself. Those differ from thick clients -- basically traditional PCs -- that handle all the functionality of a server on the desktop itself.

Understanding the benefits, challenges and cost implications of all these VDI hardware options will help you make the right choice. Let's get this straight:



Thick clients

It's possible to use thick clients for desktop virtualization, but many organizations don't because it doesn't cut down on overall hardware and requires all local software. If you use traditional PCs to connect to virtual desktops, you don't get many of the benefits of VDI, such as reduced power consumption, central management and increased security.

How thick clients compare to thin

Since a thick client is basically a PC running thin client software, it is usually more costly than a thin client device. Plus, thick clients have hard drives and media ports, making them less secure than thin clients. Finally, thin clients tend to require less maintenance than thick ones, although thin client hardware problems can sometimes lead to having to replace the entire device.

Thin clients

With thin client hardware, virtual desktops are hosted in the data center and the thin client simply serves as a terminal to the back-end server. Thin clients are generally easy to install, make application access simpler, improve security and reduce hardware needs by allowing admins to repurpose old PCs.

What to look for in thin client devices

Thin clients are meant to be small and simple, so the more advanced features you add, the more expensive they get. As you choose thin client devices, consider whether you need capabilities such as video conferencing and multi-monitor support. You should also take into account your remote display protocol and how much display processing your back end can supply.

Aside from being cheap and uncomplicated, thin clients should also offer centralized management. For instance, you can automatically apply profile policies to groups of thin clients with similar configurations. That tends to be easier than individual manual management. Plus, you want your VDI hardware to be simple enough for nonveteran IT staff or those at remote branch offices to be able to deploy.

Zero clients

Zero clients are gaining ground in the VDI market because they're even slimmer and more cost-effective than thin clients. These are client devices that require no configuration and have nothing stored on them. Vendors including Dell Wyse, Fujitsu, and SUNDE offer zero client hardware.

Pros and cons of zero clients

So what are the benefits of this kind of VDI hardware? First off, zero clients can be less expensive than thick and thin clients. Plus, they use less power and can simplify client device licensing.

Still, there's a catch: Vendors often market zero clients as requiring no management or maintenance, which isn't always true. Some products do require software or memory and other resources. In addition, zero clients tend to be proprietary, so organizations could run into vendor lock-in.



Contact us: http://www.sundenc.com

Article submitted by: SUNDE VDI delivers an extremely high performance virtual desktop for users including rich multi-media, full screen 1080P streaming video and Flash, dynamic graphics, and seamless responsiveness.