Sunday, 29 September 2013

The Right Display Protocol For The VDI Solution

Diana and mouse

This write-up assumes that the reader is aware of the VDI architectural basics. It delves further into the following questions:

• Why is a good Remote Display Protocol needed?
• What should be expected out of an efficient display protocol?
• Who are the popular makers of Remote Display Protocols currently?
• What makes SUNDE-VDI Protocol an obvious choice for the VDI solutions?


For a moment, let us keep the rest of the components of the VDI architecture in the background and pick up the single component that establishes the communication between the server and the endpoint device: the connection broker. Every connection broker follows a specific set of rules to bridge this connection. In other words, the connection broker, which is a software program, uses a Remote Display Protocol to communicate the output from the server to the endpoint device.

A good Remote Display Protocol is expected to be equipped with the following properties:

1. It should ensure reliable data delivery. Transmission Control Protocol (TCP) is a well-known protocol that sends the generated output in small packets of data. The protocol retains the network connection till all the data is transferred, and resends the output if the connection is interrupted, making it a highly reliable protocol.

2. The data delivery must be fast, especially when transferring media-rich applications. The audio and the video should be synchronized with minimal delay. User Datagram Protocol (UDP), unlike TCP, does not sequence the data packets, and is thus more efficient at sending data fast.

3. There must be no complex hardware extension involved. Most of the display protocols used today were designed for Terminal Servers, and needed several hardware extensions before they could be used for VDI solutions.
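The reliability-versus-latency trade-off between TCP and UDP described in points 1 and 2 can be sketched with Python's standard socket API. This is a minimal localhost echo over each transport; the port choice (0, meaning "any free port") and payload names are arbitrary:

```python
import socket
import threading

def tcp_echo(server_sock):
    # Accept one connection and echo whatever arrives
    conn, _ = server_sock.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

def udp_echo(server_sock):
    # Echo one datagram back to its sender
    data, addr = server_sock.recvfrom(1024)
    server_sock.sendto(data, addr)

# TCP: connection-oriented; the stack retransmits lost packets and
# preserves ordering, at the cost of extra latency.
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
tcp_srv.listen(1)
threading.Thread(target=tcp_echo, args=(tcp_srv,), daemon=True).start()

tcp_cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_cli.connect(tcp_srv.getsockname())
tcp_cli.sendall(b"screen-update")
reply_tcp = tcp_cli.recv(1024)

# UDP: connectionless; no ordering or retransmission, so datagrams
# arrive with minimal delay -- the property that suits audio/video.
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind(("127.0.0.1", 0))
threading.Thread(target=udp_echo, args=(udp_srv,), daemon=True).start()

udp_cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_cli.sendto(b"audio-frame", udp_srv.getsockname())
reply_udp, _ = udp_cli.recvfrom(1024)
```

Note that the API itself encodes the difference: the TCP client must `connect()` before sending, while the UDP client simply fires a datagram at an address.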

The Popular Display Protocols:

Choosing the right protocol depends mainly on the multimedia requirements of the endpoint device. Under the VDI architecture, three display protocols are popular today, each with its own limitations:

1. Microsoft’s Remote Desktop Protocol (RDP)/RemoteFX:
RDP has been going through continuous upgrades to keep up with demanding multimedia requirements. RemoteFX is the improved version of RDP for later versions of Windows Server. However, it is not considered the best option for continually increasing multimedia requirements: without the required upgrades, the user experience falls short.

2. VMware’s PC-over-IP (PCoIP):
With low bandwidth consumption over LAN and WAN, PCoIP is preferred over many other display protocols. Its performance also benefits from using UDP, unlike most other display protocols, which rely on TCP. However, a major limitation of VMware’s PCoIP is that it cannot be used with Windows Server 2012, which is widely used at present.

3. Citrix’s HDX:
Citrix’s remote display protocol, initially called Independent Computing Architecture (ICA), delivered appreciable performance and was relaunched as HDX as part of the 2009 suite release. It offered a very good end-user experience with low bandwidth use and multimedia redirection, and later versions of HDX grew richer in multimedia support. Its biggest limitation, however, was its incompatibility with later versions of Windows Server, which confined Citrix to a smaller user base.

While these companies are still researching how to offer a PC-like multimedia experience under the VDI architecture, SUNDE has grown by leaps and bounds in this area:

1. The SUNDE-VDI Protocol uses both TCP and UDP, retaining both the reliability and the speed of the data stream transfer.

2. The graphics acceleration in SUNDE-VDI protocol is efficient enough to handle the rich multimedia, including graphics and animation.

3. This is a server-rendering protocol, which means that it depends upon the host for its operations. This eliminates the need for a powerful CPU and large memory at the endpoint device, making it a good option for zero clients as well.

4. The SUNDE-VDI Protocol comes as part of the vPointServer software, the connection broker that uses the VirtualBox platform as its hypervisor. Another part of this package is the Diana Zero Client, the endpoint device from SUNDE. Delivered as a single package, it takes care of compatibility issues between the components, making the protocol a strong choice for VDI.
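As a rough illustration of point 1, a protocol that keeps both transports might route traffic by class: reliable TCP for input and session control, and low-latency UDP for media streams. This routing function is purely hypothetical and is not SUNDE's actual implementation:

```python
# Hypothetical traffic-class router: NOT SUNDE's real code, just an
# illustration of why a display protocol might use both transports.
def pick_transport(traffic_class):
    low_latency = {"video", "audio", "animation"}
    reliable = {"keyboard", "mouse", "clipboard", "session-control"}
    if traffic_class in low_latency:
        return "UDP"   # favor speed; an occasional lost frame is tolerable
    if traffic_class in reliable:
        return "TCP"   # every keystroke must arrive, in order
    return "TCP"       # default to reliability for unknown traffic

print(pick_transport("video"), pick_transport("keyboard"))
```

The design choice here mirrors the protocol properties listed earlier: losing one video frame is invisible to the user, but losing one keystroke is not.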




For more details, please visit: http://www.sundenc.com/support/knowledge/The%20Right%20Display%20Protocol%20For%20The%20VDI%20Solution.html


Tuesday, 17 September 2013

What Is Zero Client & What Is A True Zero Client?

Zero client, also known as ultra thin client, is a server-based computing model in which the end user's computing device has no local storage. A zero client can be contrasted with a thin client, which retains the operating system and each device's specific configuration settings in flash memory.

A typical zero client product is a small box that serves to connect a keyboard, mouse, monitor and Ethernet connection to a remote server. The server, which hosts the client's operating system (OS) and software applications, can be accessed wirelessly or with cable. Zero clients are often used in a virtual desktop infrastructure (VDI) environment.

Benefits of zero client computing:
-- Power usage can be as low as 1/50th of fat client requirements.
-- Devices are much less expensive than PCs or thin clients.
-- Efficient and secure means of delivering applications to end users.
-- No software at the client means that there is no vulnerability to malware.
-- Easy administration.
-- In a VDI environment, administrators can reduce the number of physical PCs or blades and run multiple virtual PCs on server class hardware.

The term zero client is often misapplied in thin client vendor marketing materials. True zero client endpoints do no local processing and have no client operating systems, drivers, software, storage, or even any configuration settings. They are completely stateless and management-free. Zero clients mean zero endpoint management – absolutely zero.

Some thin client vendors have even tried to make their endpoints look “zero” by keeping the client operating system image on the hard disk of a separate “streaming” appliance, requiring that users wait while it is downloaded to the endpoint’s hard disk or flash storage before use. Unfortunately, this only makes the entire VDI architecture from these vendors even more complex and fragile.

To see if vendor claims of “zero-ness” are valid, apply these tests:
1. Does the endpoint include a CPU of any kind? Any RAM or Flash Memory? Any storage devices or moving parts at all?
2. Are you forced to configure the endpoint in any way before use?
3. Do you need to reconfigure the endpoints before you are able to swap them between users?
4. Does the endpoint need to download an operating system image or any software before you can use it?
5. Are you unable to use the native Windows drivers that Microsoft or the manufacturer supplies to connect to a new peripheral?
6. Does the endpoint require you to use an embedded management tool?


Article submitted by : http://www.sundenc.com/support/knowledge/What%20Is%20Zero%20Client.html

Sunday, 8 September 2013

VDI hardware comparison: Thin vs. thick vs. zero clients

When it comes to virtual desktop infrastructure, administrators have a lot of choices. You may have wondered about the differences between VDI software options, remote display protocols or all the licenses out there. In this series, we tackle some of the biggest head-scratchers facing VDI admins to help you get things straight.

When you deploy VDI, you need to figure out what hardware your users will access their virtual desktops from. You have a lot of choices: thin clients, zero clients and smart clients -- not to mention tablets and mobile devices. Thin clients and other slimmed-down devices rely on a network connection to a central server for full computing and don't do much processing on the hardware itself. Those differ from thick clients -- basically traditional PCs -- that handle all the functionality of a server on the desktop itself.

Understanding the benefits, challenges and cost implications of all these VDI hardware options will help you make the right choice. Let's get this straight:

Thick clients

It's possible to use thick clients for desktop virtualization, but many organizations don't because it doesn't cut down on overall hardware and requires all local software. If you use traditional PCs to connect to virtual desktops, you don't get many of the benefits of VDI, such as reduced power consumption, central management and increased security.

How thick clients compare to thin

Since a thick client is basically a PC running thin client software, it is usually more costly than a thin client device. Plus, thick clients have hard drives and media ports, making them less secure than thin clients. Finally, thin clients tend to require less maintenance than thick ones, although thin client hardware problems can sometimes lead to having to replace the entire device.

Thin clients

With thin client hardware, virtual desktops are hosted in the data center and the thin client simply serves as a terminal to the back-end server. Thin clients are generally easy to install, make application access simpler, improve security and reduce hardware needs by allowing admins to repurpose old PCs.

What to look for in thin client devices

Thin clients are meant to be small and simple, so the more advanced features you add, the more expensive they get. As you choose thin client devices, consider whether you need capabilities such as video conferencing and multi-monitor support. You should also take into account your remote display protocol and how much display processing your back end can supply. Aside from being cheap and uncomplicated, thin clients should also offer centralized management. For instance, you can automatically apply profile policies to groups of thin clients with similar configurations. That tends to be easier than individual manual management. Plus, you want your VDI hardware to be simple enough for nonveteran IT staff or those at remote branch offices to be able to deploy.
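The group-based policy idea above can be sketched as follows. The data model is hypothetical; real thin client management consoles differ, but the principle of applying one profile to many devices is the same:

```python
# Hypothetical sketch: apply one profile policy to a whole group of thin
# clients instead of configuring each device by hand.
def apply_profile(clients, profile):
    """Merge shared profile settings into every client's configuration."""
    for client in clients:
        client.setdefault("settings", {}).update(profile)
    return clients

# A branch office's devices, managed as one group with one policy
branch_office = [{"id": "tc-01"}, {"id": "tc-02"}, {"id": "tc-03"}]
policy = {"display_protocol": "RDP", "resolution": "1920x1080"}

apply_profile(branch_office, policy)
```

One policy change now propagates to every device in the group, which is the manageability win over per-device manual configuration.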

Zero clients

Zero clients are gaining ground in the VDI market because they're even slimmer and more cost-effective than thin clients. These are client devices that require no configuration and have nothing stored on them. Vendors including Dell Wyse, Fujitsu, and SUNDE offer zero client hardware.

Pros and cons of zero clients

So what are the benefits of this kind of VDI hardware? First off, zero clients can be less expensive than thick and thin clients. Plus, they use less power and can simplify client device licensing.

Still, there's a catch: Vendors often market zero clients as requiring no management or maintenance, which isn't always true. Some products do require software or memory and other resources. In addition, zero clients tend to be proprietary, so organizations could run into vendor lock-in.


Contact us: http://www.sundenc.com

Article submitted by SUNDE. SUNDE VDI delivers an extremely high-performance virtual desktop for users, including rich multimedia, dynamic graphics, and seamless responsiveness.

Sunday, 1 September 2013

What’s the difference between virtualization and cloud computing?

Virtualization is a computing technology that enables a single user to access multiple physical devices. Another way to look at it is a single computer controlling multiple machines, or one operating system utilizing multiple computers to analyze a database. Virtualization may also be used for running multiple applications on each server rather than just one; this in turn reduces the number of servers companies need to purchase and manage. It enables you to consolidate your servers and do more with less hardware. It also lets you support more users per piece of hardware, deliver applications, and run applications faster.

Cloud computing offers scalable infrastructure and software off site, saving labor, hardware, and power costs. Financially, the cloud’s virtual resources are typically cheaper than dedicated physical resources connected to a personal computer or network. With cloud computing, the software programs you use aren’t run from your personal computer, but rather are stored on servers housed elsewhere and accessed via the Internet. If your computer crashes, the software is still available for others to use. Simply, the cloud is a collection of computers and servers that are publicly accessible via the Internet.

One way to look at it is that virtualization is basically one physical computer pretending to be many computing environments whereas cloud computing is many different computers pretending to be the one computing environment (hence user scaling). Virtualization provides flexibility that is a great match for cloud computing. Moreover, cloud computing can be defined based on the virtual machine containers created with virtualization. Virtualization is not always necessary in cloud computing; however, you can use it as the basis. Cloud computing is an approach for the delivery of services while virtualization is one possible service that could be delivered. Large corporations with little downtime tolerance and airtight security requirements may find that virtualization fits them best. Smaller businesses are more likely to profit more with cloud computing, allowing them to focus on their mission while leaving IT chores to those who can do more for less.

Plainly, virtualization provides more servers on the same hardware and cloud computing provides measured resources while paying for what you use. While it is not uncommon to hear people discuss them interchangeably, they are very different approaches to solving the problem of maximizing the use of available resources. They differ in many ways and that also leads to some important considerations when selecting between the two.


Article submitted by: http://www.sundenc.com

Sunday, 25 August 2013

What desktop virtualization really brings

Depending on whom you talk to, desktop virtualization is either the hottest trend in IT or an expensive notion with limited appeal.

Desktop virtualization harks back to the good old mainframe days of centralized computing while upholding the fine desktop tradition of user empowerment. Each user retains his or her own instance of desktop operating system and applications, but that stack runs in a virtual machine on a server -- which users can access through a low-cost thin client similar to an old-fashioned terminal.

The argument in favor of desktop virtualization is powerful: What burns through more hands-on resources or incurs more risk than desktop computers? Even with remote desktop management, admins must invade cubicles and shoo away employees when it's time to upgrade or troubleshoot. And each desktop or laptop provides a fat target for hackers and an opportunity to steal data. But if you run desktops as virtual machines on a server, you can manage and secure all those desktop user environments in one central location. Patches and other security measures, along with hardware or software upgrades, demand much less overhead. And the risk that users will make mischief or mistakes that breach security drops dramatically.

The argument against desktop virtualization is almost as strong. Overhead costs conserved through central management get cancelled out by the need for powerful servers, virtualization software licenses, and additional network bandwidth. Plus, the cost of client hardware and Microsoft software licenses stays roughly the same, while the user experience -- at least today -- seldom lives up to user expectations. And then the kicker: How are users supposed to compute when they're disconnected from the network?

Decisions about whether or in what form to adopt desktop virtualization become a whole lot easier when you understand the basic variants and technologies. Here's what you need to know:

1. Desktop virtualization really is virtualization

Just like server virtualization, desktop virtualization relies on a thin layer of software known as a hypervisor, which runs on the server hardware and provides a platform on which administrators deploy and manage virtual machines. With desktop virtualization, each user gets a virtual machine that contains a separate instance of the desktop operating system (almost always Windows) and whatever applications have been installed. To the desktop OS, the applications, and the user, the VM does a pretty good job of impersonating a real desktop machine.

2. Traditional thin client solutions are not desktop virtualization

By far the most popular form of server-based, thin client computing relies on Microsoft Terminal Services (recently renamed Remote Desktop Services), which lets multiple users share the same instance of Windows. Terminal Services is often paired with Citrix XenApp (formerly known as Presentation Server and, before that, MetaFrame), which adds management features and improves performance -- no hypervisors or VMs here. The main drawbacks: Some applications run poorly or not at all in this shared environment, and individuals can't customize their user experience the way they can with virtual machines or real desktops. Nonetheless, people often refer to traditional thin client solutions as desktop virtualization because the basic goal is the same: to consolidate desktop computing at the server.

3. Desktop virtualization and VDI mean pretty much the same thing

VMware was first to promote the VDI (virtual desktop infrastructure) terminology, but Microsoft and Citrix have followed suit, offering VDI solutions of their own based on the Hyper-V and XenServer hypervisors, respectively. Think of it this way: VDI refers to the basic architecture for desktop virtualization, where a VM for each user runs on the server.

4. Don't confuse desktop virtualization with ... desktop virtualization

The desktop virtualization we're talking about refers to server-based computing. But "desktop virtualization" also refers to running virtual machines on desktop systems, using such desktop virtualization solutions as Microsoft Virtual PC, VMware Fusion, or Parallels Desktop. Probably the most common use of this sort of desktop virtualization is running Windows in a Parallels or Fusion VM on the Mac. In other words, this has nothing to do with server-based computing.

5. No server-based computing solution supports the same range of hardware as a desktop

The Windows folks in Redmond spend half their lives ensuring compatibility with every printer, graphics card, sound card, scanner, and quirky USB device. With thin clients, your support for hardware is going to be pretty generic, and some items won't work at all. Other limitations are introduced by the fact that users interact with their VMs over the network. Multimedia, videos, and Flash apps can be problematic.

6. VDI solutions cost more (and deliver more) than traditional thin client solutions

Think about it: With VDI, each virtual machine needs its own slice of memory, storage, and processing power to run a user's desktop environment, while in the old-fashioned Terminal Services model, users share almost everything except data files. VDI also means a separate Windows license for each user, while Terminal Services-style setups give you a break with Microsoft Client Access Licenses. Plus, VDI incurs greater network traffic, which may add a network upgrade to the purchase order for beefy server hardware.
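The resource math above can be made concrete with a back-of-the-envelope capacity sketch. All of the numbers below are assumptions chosen for illustration, not vendor sizing guidance:

```python
# Back-of-the-envelope sizing sketch. Every figure here is an assumption
# for illustration only, not a vendor recommendation.
server_ram_gb = 256
hypervisor_overhead_gb = 16
usable_gb = server_ram_gb - hypervisor_overhead_gb      # 240 GB for users

# VDI: each user runs a full Windows VM with its own slice of memory
ram_per_vdi_vm_gb = 4
vdi_users = usable_gb // ram_per_vdi_vm_gb              # users per server

# Terminal Services: sessions share one OS instance, so the per-user
# memory cost is far lower
ram_per_ts_session_mb = 512
ts_users = (usable_gb * 1024) // ram_per_ts_session_mb  # users per server

print(vdi_users, ts_users)
```

Under these assumed figures the same server hosts roughly eight times as many Terminal Services sessions as VDI desktops, which is exactly the cost gap the paragraph describes.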

In return for that extra cost, along with a better user experience, VDI delivers greater manageability and availability. As with server virtualization, you can migrate virtual machines among servers without bringing down those VMs, perform VM snapshots for quick recovery, run automated load balancing, and more. And if a virtual machine crashes, that doesn't affect other VMs; with Terminal Services, that single instance of Windows is going to bring down every connected user when it barfs.

7. Dynamic VDI solutions improve efficiency

In a standard VDI installation, each user's virtual machine persists from session to session; as the number of users grows, so do storage and administration requirements. In a dynamic VDI architecture, when users log in, virtual desktops assemble themselves on the fly by combining a clone of a master image with user profiles. Users still get a personalized desktop, while administrators have fewer operating system and application instances to store, update, and patch.
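The clone-plus-profile assembly described above can be sketched conceptually. A dictionary merge stands in for cloning a golden image and overlaying user state; all names here are illustrative, not any vendor's API:

```python
# Conceptual sketch of dynamic VDI assembly: a clone of the shared master
# image is overlaid with a per-user profile at login. Names are illustrative.
def assemble_desktop(master_image, user_profile):
    desktop = dict(master_image)   # stand-in for cloning the golden image
    desktop.update(user_profile)   # overlay the user's settings and data
    return desktop

master = {"os": "Windows", "apps": ("Office", "Browser"), "wallpaper": "default"}
alice = {"user": "alice", "wallpaper": "blue"}

desktop = assemble_desktop(master, alice)
# Admins patch only `master`; every login still yields a personalized desktop.
```

The key property is that the master image is never modified per user, so there is exactly one OS instance to store, update, and patch.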

8. Application virtualization eases VDI requirements even more

When an application is virtualized, it's "packaged" with all the little operating system files and registry entries necessary for execution, so it can run without having to be installed (that is, no changes need be made to the host operating system). In a dynamic VDI scenario, admins can set up virtualized applications to be delivered to virtual machines at runtime, rather than adding those apps to the master image cloned by VMs. This reduces the footprint of desktop virtual machines and simplifies application management. If you add application streaming technology, virtualized applications appear to start up faster, as if they were installed in the VM all along.

9. Client hypervisors will let you run virtual machines offline

A client hypervisor installs on an ordinary desktop or laptop so that you can run a "business VM" containing your OS, apps, and personal configuration settings. Talk about full circle: Why would you want all that in a virtual machine instead of installed on the desktop itself? Two reasons: One, it's completely secure and separate from whatever else may be running on that desktop (such as a Trojan some clueless user accidentally downloaded) and two, you get all the virtualization management advantages, including VM snapshots, portability, easy recovery, and so on. Client hypervisors also make VDI more practical. You can run off with your business virtual machine on a laptop and compute without a connection; then when you connect to the network again, the client VM syncs with the server VM.

Client hypervisors point to a future where we bring our own computers to work and download or sync our business virtual machines to start the day. Actually, you could use any computer with a compatible client hypervisor, anywhere. The operative word is "future" -- although Citrix has released a "test kit" version of its client hypervisor, and VMware is expected to release its own early version soon, shipping versions will not arrive before 2011.

The long march to the server side

Meanwhile, a completely different form of server-based computing continues to gain traction: the variant of cloud computing known as SaaS (software as a service), where service providers maintain applications and user data and deliver everything through the browser. A prime example is Google's campaign for Google Docs, encouraging users to forget about upgrading to Office 2010 and adopt Google's suite of productivity apps instead. Plus, Google's Chrome OS promises to create entire desktop environments in the cloud that retain user personalization.

Very likely, no big winner will emerge in server-based computing. Old-style Terminal Services setups will continue to crank along for offices harboring users with narrow, simple needs. True desktop virtualization on the VDI model will make sense where security and manageability are paramount, such as widely distributed organizations that use lots of contractors. And where far-flung collaboration is key, SaaS will flourish, because anyone with a Web browser can join the party. Conventional desktops may never disappear, but one way or another, the old centralized model of computing is making a comeback.



Article submitted by : SUNDE, global provider of innovative Terminal Services and Virtual Desktop Infrastructure (VDI) solutions paired with zero clients to help customers dramatically reduce the cost and complexity of desktop computing.

Sunday, 18 August 2013

Zero Client and Thin Client Technology: History, Use and Critical Comparison

Introduction

The debate over the strengths and weaknesses of thin clients versus fat clients in a distributed computing environment has gone on for many years. Thin clients have been highlighted as a preferred method for information publishing across the enterprise and as a key tool in the ongoing struggle to reduce ownership costs for information technology. In late 2003, unsatisfied with its strategic direction and relieved of the most severe anti-trust threats, Microsoft began to reverse the technology pendulum back toward fat-client architectures as it announced strategic plans to embed more functionality within its Windows client operating system. Overlooked in many discussions of industry trends is the "Zero Client", a technology which offers the benefits of fat clients while delivering equivalent cost of ownership reductions and faster performance than the fastest thin clients.

Definition and Description

The zero client ("station") is a set of components (monitor, keyboard, mouse), none of which has independently programmable intelligence, that relies on a centralized CPU ("Host PC") for all program execution and information processing. The connection between the zero client and the Host PC is a direct, point-to-point connection that operates at bus speed, requiring no network protocol. Zero clients are typically implemented in clusters, using a "star-like" configuration around the Host PC. Each cluster can function either as a network component of a distributed computing system or as a self-contained, small-group system. When combined with the high performance of the bus-speed delivery system, zero client technology offers an unequalled platform for small-group, transaction-based systems accessing a shared database.

Since a zero client uses low-cost component hardware, with no local intelligence or processing, its cost per seat is similar to that of network computers. Likewise, zero clients offer a single point location - the Host PC - for upgrade, maintenance and support, thus drastically reducing licensing and lifetime system costs.

History of Zero Client Technology

Zero client technology has its earliest roots in mini/mainframe computing, where computing tasks and program execution were centralized and information was sent and displayed to multiple users through terminal devices that lacked programmable intelligence, ergo, "dumb terminals" (later renamed "mainframe interactive terminals").

Character-based terminals such as the initial 3270, 5250 and VT52/VT100 stations provided the user interface on a variety of systems. These terminals were typically connected to the host via low bandwidth serial links (i.e. less than 9.6 Kbps). Output from an application program was passed by the operating system through the serial link to the terminal firmware to be displayed on the user’s screen.

When personal computers were introduced, their computing architecture was a radical change for the industry. In the PC, applications could be executed locally on the user’s desktop, eliminating the requirement that the operating system transmit the output to a slow, external display device. Some of the earliest PC applications were terminal emulators so that a single PC could displace the dumb terminal on the desktop.

The impact of this change in architecture was dramatic and rapid. Applications began to change as developers embraced the assumption of "one user, one PC". Using this dedicated-user assumption, PC applications began leveraging direct access to the hardware for maximum performance. For example, the user interface was optimized by bypassing the operating system entirely and directly addressing the display device.

Then, in the mid-1990s, coincident with the improved performance in newer Intel x86 chipsets, the PC user interface shifted from character-based to graphical. Windows and OS/2 became the predominant operating systems for Intel-based personal computers. In these advanced environments, the operating system took more control of access to and use of the PC hardware. In display management, the operating system was reinserted between the application and the display adapter. As a consequence of the relentlessly-increasing operating system functionality and more complex applications, it became more difficult and more expensive to provide support for the PC environments.

During this same period, the zero client technology (still using a serial connection) delivered less and less perceived performance as a direct result of the increased amount of information being passed over that connection for increasingly graphical and "user friendly" applications. In comparing the roughly 2,000 bytes in a character-based screen to the just over 300,000 bytes needed to represent the graphic pixels in the smallest Windows display, it became evident that the serial connection no longer provided a viable solution.
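The arithmetic behind that comparison is straightforward, assuming a standard 80x25 text mode and the smallest common Windows graphical mode of 640x480 at 8 bits per pixel:

```python
# Arithmetic behind the character-mode vs. graphical-mode comparison.
# Assumes 80x25 text (1 byte per cell) and 640x480 at 8 bits/pixel.
char_screen_bytes = 80 * 25        # 2,000 bytes per character screen
vga_screen_bytes = 640 * 480 * 1   # 307,200 bytes per graphical screen

growth = vga_screen_bytes / char_screen_bytes   # data per screen refresh
print(char_screen_bytes, vga_screen_bytes, round(growth))
```

Each screen refresh thus carries on the order of 150 times more data, which is why a sub-9.6 Kbps serial link that was adequate for terminals became hopeless for graphical displays.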

A new type of connectivity hardware was introduced in the late 1980s, generically referred to as a multi-display adaptor ("MDA"). These add-on boards contained multiple VGA chipsets and used a variety of cabling options (fiber optic, coaxial, etc.). When adapted to an operating system environment, they all delivered the display data directly to multiple VGA displays at bus speed. During the early to mid 1990s, these multi-display adapters were implemented for use on a variety of flavors of Unix (including SCO), other proprietary operating systems (including PC-MOS, VM386, THEOS) and enhanced DOS operating systems (Concurrent and Multiuser).

These MDAs were the early predecessors of the hardware used by zero client technology today. Today, multi-display hardware uses SVGA/XGA chipsets, supports 1600x1200 resolution in full color and directly delivers the video streams to multiple displays via a variety of high speed transmission media.

The use of this hardware with its bus transfer speed for additional video displays provided the hardware foundation necessary to deliver efficient zero client technology for Windows operating systems.

Definition of Zero Client

A traditional PC has a single display adapter, a single mouse port and a single keyboard controller. A zero client PC Host has multiple display adapters, multiple mouse ports and multiple keyboard controllers. Through system software that resides on the PC Host, multiple virtual machines or sessions are created, each associated with a display adapter, a mouse and a keyboard. Input for the session is read directly from its mouse and keyboard; output is written directly to its display adapter. As in any computing architecture, there are both hardware and software components involved to deliver this advanced functionality.
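The one-session-per-adapter binding described above can be sketched as a simple data model. This is an illustration of the routing idea only, not a real driver interface:

```python
# Illustrative data model (not a real driver interface): each session on
# the Host PC owns one display adapter, one keyboard port and one mouse port.
sessions = [
    {"id": 1, "display": "vga0", "keyboard": "kbd0", "mouse": "ms0"},
    {"id": 2, "display": "vga1", "keyboard": "kbd1", "mouse": "ms1"},
]

def route_keystroke(sessions, keyboard_port, key):
    """Deliver input straight to the session that owns this keyboard port."""
    for s in sessions:
        if s["keyboard"] == keyboard_port:
            return (s["id"], s["display"], key)
    return None   # no session owns this port

routed = route_keystroke(sessions, "kbd1", "a")
```

Because each peripheral maps directly to exactly one session, no network stack or protocol translation sits between the user's input and the virtual machine handling it.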

Key Components of Zero Client Technology
Host PC - Standard PC with multiple display adapters
Local Station - Standard (or USB) input and output devices (e.g. monitor, mouse, keyboard, audio, Touch Screen, serial, etc.)
High Speed Delivery System - Direct connection or extension
Software Component - Multiuser or Virtualization Software

Hardware Architecture

The Host PC has more than one display adapter (or possibly a display adapter with multiple SVGA chipsets) for support of zero client technology. Each of those SVGA chipsets is associated with a direct connection to a local station, typically from 5 to 500 feet away. One example of a local station consists of a connector box of some kind, into which a monitor, mouse and keyboard are plugged. If local peripherals are used, the connector box will also include signal decoding, which will multiplex the combined video, serial data and parallel data and feed it to and receive it from the appropriate component.

A vital element of the zero client solution is the ability to transmit a true graphical signal directly to the station’s display, perhaps as much as several hundred feet away. By extending the VGA signal, as opposed to packetizing the video with software and network protocols, the Host PC is not burdened with CPU overhead and the responsiveness of the station’s display is as fast as a standalone PC system.

Software

As indicated earlier, zero client technology has existed in various flavors for many years. However, until the introduction of the application software in late 1996, zero client technology had never been implemented on a Windows 95/98 platform.

Windows 95/98 included preemptive multitasking capabilities and was the first Windows-based platform in which zero client technology could be effectively implemented. In prior versions of Windows (3.x), multiple applications could be open in their own windows, but only one application was active at any given point in time. For example, a user could have Word and Excel windows displayed, but, after beginning a long recalculation in Excel, the user couldn’t switch to Word until the Excel computation was complete.

Using the above example with the preemptive multitasking in Windows 95/98, a user could have both Word and Excel open, start a long recalculation in Excel and then immediately switch to Word to edit a document while the recalculation finished in the background.

At a lower level, a Host PC using zero client technology has system software enhancements that support multiple virtual machines or sessions. Each of these sessions is associated with a display adapter, a mouse, a keyboard and optional audio. As previously mentioned, the system software directly passes input for each virtual machine from its corresponding mouse and keyboard; similarly, output is written directly to the corresponding display adapter.

One of the obvious benefits of the zero client design is very high video performance. The physical presence of a video chipset for each of the virtual machines eliminates the overhead of emulation, packetizing and transmission of graphical orders or video. The degree to which this benefits performance is directly tied to the extent that color and graphics are used by application(s) being executed in that virtual machine.

In addition, performance is improved because all display data is transferred at bus transfer speeds rather than through a network connection. A network connection requires the transmission of data in packets and using some protocol. The effective throughput of a network at any point in time is determined by multiple factors, including the bandwidth and amount of active traffic on the network at that time. The point-to-point transfer of display data directly from a memory structure to a video display can occur in a small fraction of the time required to pass the same data over the network.
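A rough back-of-envelope comparison illustrates the point, moving one 1600x1200 full-color frame over a local bus versus a 100 Mbit/s network (the rates below are illustrative assumptions, not measured figures):

```python
# 24-bit color frame at 1600x1200, the resolution cited above.
frame_bytes = 1600 * 1200 * 3           # ~5.76 MB per frame

bus_rate = 133e6                        # assumed ~133 MB/s classic PCI burst rate
net_rate = 100e6 / 8                    # 100 Mbit/s link = 12.5 MB/s

bus_ms = frame_bytes / bus_rate * 1000  # time on the local bus
net_ms = frame_bytes / net_rate * 1000  # time over the network, ignoring
                                        # packet and protocol overhead
print(f"bus: {bus_ms:.1f} ms, network: {net_ms:.1f} ms")
```

Even before counting packetization overhead or competing traffic, the raw transfer is roughly an order of magnitude slower over the network link.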

Zero client technology also offers simplified installation, configuration and support, by virtue of the use of a single Host PC and multiple stations (each consisting of a monitor, mouse and keyboard) rather than multiple PCs individually configured and combined into a small network.

Thin Client Compared to Zero Client

Thin client technology has received a lot of attention in recent years. In support of an industry focus upon expense reduction and improved manageability of desktop computing, the computer industry has drawn from the experience of the mini/mainframe model of host and terminal. With the thin client architecture, the application moved back to a multi-user host, which transmitted the display information to an intelligent device for presentation to the user.

However, the thin client model ignored a crucial change that occurred in the application domain with adoption of Windows as a standard platform: the move from a character-based to a graphical user interface. The client station must now do substantially more processing than the old "dumb" terminal. Higher bandwidth links are also required for the graphical information. When multimedia is added to the application equation, the effectiveness of thin client technology is severely reduced.

Zero client technology differs from thin client technology in client hardware requirements, display data processing and the data delivery system. A comparison of the processing of graphical commands points to some key differences.

As a baseline, on a standalone PC, Windows passes graphical commands directly to a display driver that interprets them and updates the display.

PC Architecture

Within a thin client host (terminal server), a protocol layer is introduced. Here, Windows passes graphical commands to a protocol layer, usually either Citrix ICA or Microsoft RDP. This protocol layer encodes the commands into packets and transmits them over the network to the intelligent client device. At the client end, the protocol layer decodes the commands and passes them to a display driver that interprets them and updates the display. Sun offers a similar capability with its Sun Ray line of products for the Solaris operating system.

Thin Client Architecture
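The encode-transmit-decode path described above can be sketched as a toy round trip (illustrative Python only; real ICA and RDP use binary wire formats, not JSON):

```python
import json

def host_encode(commands, mtu=64):
    # Host side: serialize drawing commands and split them into
    # MTU-sized packets for transmission over the network.
    blob = json.dumps(commands).encode()
    return [blob[i:i + mtu] for i in range(0, len(blob), mtu)]

def client_decode(packets):
    # Client side: reassemble the packets and recover the commands
    # for the local display driver to interpret.
    return json.loads(b"".join(packets))

cmds = [{"op": "rect", "x": 0, "y": 0, "w": 100, "h": 50},
        {"op": "text", "x": 10, "y": 10, "s": "hello"}]
packets = host_encode(cmds)
assert client_decode(packets) == cmds
```

Every step of this round trip is work that a zero client host simply skips, since its driver writes to a local display adapter.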

On a zero client system, the process is almost identical to that occurring in the standalone PC, with the single exception that the driver updates the display that corresponds to each virtual machine or session. Within the zero client Host PC, the protocol layer and the transmission of the data in packets are avoided. Therefore, zero client architecture conserves processing power within the Host PC and eliminates client processing entirely.

Zero Client Architecture

The zero client architecture combines the best attributes of the thin client and standard personal computer architectures. As in the thin client, applications execute on a shared Host PC. This minimizes cost, delivers the highest performance and improves manageability.

Summary

The zero client architecture combines key aspects of the thin client, the NC (network computer) and the personal computer. As in the thin client model, Windows applications (including browsers) execute on a shared Host PC. This reduces cost and improves control and manageability. As in the NC model, the zero client stations are lowest in cost, secure and environmentally efficient. As in the personal computer model, the display adapter resides in the same computer as the application. This preserves performance because it eliminates the need for a network transmission protocol that degrades CPU processing and injects delays due to network overhead. When all the strengths and weaknesses of each desktop configuration alternative are considered, zero client technology offers flexible and valuable options to users seeking minimized cost of ownership and improved control.



Article submitted by: http://www.sundenc.com



Sunday, 11 August 2013

VDI hardware comparison: Thin vs. thick vs. zero clients

When it comes to virtual desktop infrastructure, administrators have a lot of choices. You may have wondered about the differences between VDI software options, remote display protocols or all the licenses out there. In this series, we tackle some of the biggest head-scratchers facing VDI admins to help you get things straight.

When you deploy VDI, you need to figure out what hardware your virtual desktops will run on.

To host virtual desktops, you have a lot of choices: thin clients, zero clients and smart clients -- not to mention tablets and mobile devices. Thin clients and other slimmed-down devices rely on a network connection to a central server for full computing and don't do much processing on the hardware itself. Those differ from thick clients -- basically traditional PCs -- that handle all the functionality of a server on the desktop itself.

Understanding the benefits, challenges and cost implications of all these VDI hardware options will help you make the right choice. Let's get this straight:



Thick clients

It's possible to use thick clients for desktop virtualization, but many organizations don't, because it doesn't cut down on overall hardware and still requires locally installed software. If you use traditional PCs to connect to virtual desktops, you don't get many of the benefits of VDI, such as reduced power consumption, central management and increased security.

How thick clients compare to thin

Since a thick client is basically a PC running thin client software, it is usually more costly than a thin client device. Plus, thick clients have hard drives and media ports, making them less secure than thin clients. Finally, thin clients tend to require less maintenance than thick ones, although thin client hardware problems can sometimes lead to having to replace the entire device.

Thin clients

With thin client hardware, virtual desktops are hosted in the data center and the thin client simply serves as a terminal to the back-end server. Thin clients are generally easy to install, make application access simpler, improve security and reduce hardware needs by allowing admins to repurpose old PCs.

What to look for in thin client devices

Thin clients are meant to be small and simple, so the more advanced features you add, the more expensive they get. As you choose thin client devices, consider whether you need capabilities such as video conferencing and multi-monitor support. You should also take into account your remote display protocol and how much display processing your back end can supply.

Aside from being cheap and uncomplicated, thin clients should also offer centralized management. For instance, you can automatically apply profile policies to groups of thin clients with similar configurations. That tends to be easier than individual manual management. Plus, you want your VDI hardware to be simple enough for nonveteran IT staff or those at remote branch offices to be able to deploy.

Zero clients

Zero clients are gaining ground in the VDI market because they're even slimmer and more cost-effective than thin clients. These are client devices that require no configuration and have nothing stored on them. Vendors including Dell Wyse, Fujitsu, and SUNDE offer zero client hardware.

Pros and cons of zero clients

So what are the benefits of this kind of VDI hardware? First off, zero clients can be less expensive than thick and thin clients. Plus, they use less power and can simplify client device licensing.

Still, there's a catch: Vendors often market zero clients as requiring no management or maintenance, which isn't always true. Some products do require software updates, memory or other resources. In addition, zero clients tend to be proprietary, so organizations could run into vendor lock-in.



Contact us: http://www.sundenc.com

Article submitted by SUNDE. SUNDE VDI delivers an extremely high-performance virtual desktop for users, including rich multimedia, full-screen 1080P streaming video and Flash, dynamic graphics, and seamless responsiveness.

Sunday, 4 August 2013

The Most Affordable VDI-- SUNDE's VDI

VDI, or Virtual Desktop Infrastructure, is quite a popular term today. VDI is a system that launches an OS in a virtual infrastructure, with the entire system running on a centralized server. In short, VDI is a high-end server-based computing system that simplifies work, and it is the long list of VDI's benefits that is drawing people's interest.


As soon as an organization installs a VDI device, a template is created. It features a virtual workstation that integrates an operating system, applications and security codes. This virtual workstation can be established anytime and anywhere a user wants. It saves time, as there is no need to set up the OS and install software every time before starting work.


• Most users find it difficult to keep track of software updates. While some have auto-update options, others need manual management. But with VDI, patch management is executed right on time, irrespective of whether the software needs manual management or updates automatically.


• It is necessary to update the security system of every computer as frequently as possible; this protects sensitive data from newly emerging cyber threats. But attending to every standalone PC is time consuming. With VDI, a user can update the security system of the entire virtual workstation in one go!


In spite of these advantages of VDI, there is still a dearth of users. The reasons behind this unpleasant scenario are many:

• Cost is the major factor that keeps people away from VDI services. Big players charge exorbitant fees for management software, and separate fees are even imposed for smooth performance in the remote desktop display protocol.

• Deploying and maintaining the system is a complex affair and requires expertise.

• The performance of virtual desktop technology is not always up to the mark; clients feel that it fails to provide seamless results. Users, especially schools and SMEs, are on the lookout for VDI services with the following features:


• The performance of the virtual workstation should be similar to the experience of working on a high-quality standalone PC.

• An untrained person should be able to deploy and handle the VDI.

• Since one of the reasons users wish to switch over to VDI is to minimize infrastructure cost, they look for solutions that are cost-effective and do not add to their expenses.


However, the good news is that service providers like SUNDE offer flawless performance at a highly economical rate! Users need to make only a one-time payment for SUNDE’s VDI. The service provider also gives complimentary backend software with every purchased VDI device. Moreover, there is no extra charge or hidden cost for future updates of software, device firmware or security patches.


Most companies do not provide true zero client solutions that feature integrated OS, memory and CPU capacity. On the other hand, providers like SUNDE offer true zero client solutions at costs much more economical than many other VDI providers. Performance-wise, SUNDE offers better results than Citrix’s ICA with HDX or PCoIP from VMware. You will appreciate that SUNDE’s Diana device enables users to play multiple videos on the virtual desktop without any compromise in quality.


The quality of service of this newcomer SUNDE is as excellent as that of the big players, yet the installation and management of SUNDE’s VDI is very simple and fast. Therefore, users do not need any additional training or skilled IT staff to manage it.


Article submitted by SUNDE, which offers true zero client solutions at costs much more economical than many other VDI providers.

Sunday, 28 July 2013

Virtual Desktop Infrastructure -- A Necessity Today

Virtualization has been the talk of the computing world, for it has modified and transformed many facets of the IT field. In computing, virtualization refers to creating a virtual version of a storage device, hardware, operating system or network. Desktop virtualization is a virtualization technology that separates a computer desktop environment from the physical computer: you can interact with a virtual desktop in the same way you would use a physical desktop. This virtualization is considered a type of client-server computing model, as the virtualized desktop is stored on a centralized or remote server and not on the physical machine. Many companies have adopted this powerful solution in their business to reduce administrative and management workloads and to protect against potential dangers.

Virtual Desktop Infrastructure (VDI) is a desktop virtualization technique that lets users run desktop operating systems and applications inside virtual machines that reside on a server in the data center. Desktop operating systems inside virtual machines are referred to as virtual desktops. With these virtual desktops, the availability and efficiency of resources and applications can be improved. It is reported that about 70% of a typical IT budget in a non-virtualized data center goes to maintenance, with little or none left for innovation. Virtual desktops bring a transition from the ‘one server, one application’ model to running multiple virtual machines on a single physical machine, thereby reducing maintenance costs and safeguarding resources.

VDI has become a major necessity because the rapid evolution of technology, agility in business processes and data storage requirements make it difficult for businesses to stay competitive with existing resources. In enterprises, managing a great number of desktops is a hard task. Using virtual desktops instead of laptops can save huge sums on infrastructure maintenance. It also promotes multi-tasking, with several applications running on a single physical server. VDI reduces management tasks, provides network security, makes data easy to back up and minimizes power consumption. Different architectures are available for virtual desktops, but the most popular is a configuration with a client connecting to a server running the virtual desktop. VDI simulates a copy of the desktop, with its OS, software applications, documents and other data stored and run entirely from the server. Users can access their desktop remotely from an endpoint device, just as they would a physical one. A few components are necessary for desktop virtualization.


-Physical PC(s)/ server(s) is a physical environment in which all data are executed and stored.
-Hypervisor (virtual machine manager) is software capable of creating and hosting multiple virtual machines.
-Virtual desktop agent is a connection broker to manage the desktop and for connection to the user’s client device through a remote session protocol. It usually consists of management console and remote desktop display protocol.
-Client machines / endpoint device is a physical device used to see and control the user’s virtual desktop.

The PC/servers use the hypervisor to create a virtual machine that simulates the same capabilities of the physical desktop computers. Virtual machines connect over LAN to specialized endpoint devices at the user’s location that are in turn connected to peripherals to make a complete system.

SUNDE is a global provider of zero client endpoints for VDI with its own free proprietary protocol, SUNDE-VDI, for communication, and uses VirtualBox as the hypervisor for creating virtual machines. SUNDE-VDI provides better performance: video playback is fast, smooth, seamless and of high quality in a LAN environment. It requires sufficient bandwidth and a stable network environment.

Benefits:

--Setting up a new VDI workstation takes less than fifteen minutes, far less than a traditional one, and user training is not required. A library of VDI images can be created within the data centre, forming a common pool that can be monitored, backed up, upgraded and utilized thanks to centralized control and scalable management.

--More cost-effective, since purchase, replacement and warranty costs can be saved as VDI hardware lasts longer (typically 7-10 years). Lower power consumption and reduced carbon emissions save huge expenses. With no local PC storage, data loss due to hardware failures or theft of laptops can be eliminated.

--Security management, monitoring and control over clients and the network ensure data security. Troubleshooting problems and modifying configurations of desktop resources happen rapidly without travelling to the user’s location. Virtual desktops are flexible and can be customized with users’ specific settings. Multiple operating systems can co-exist on the same server, reducing conflicts.

--Desktop virtualization is an emerging state-of-the-art computing technology that has become a pressing need for every IT company. Companies should adopt VDI because of its key drivers: it provides a native Windows desktop and decreases total cost of ownership (TCO). It also minimizes dependency and provides business agility and network security.



Contact us : http://www.sundenc.com

Article submitted by: SUNDE is a global provider of virtual desktop solutions with zero clients for Terminal Services and Virtual Desktop Infrastructures (VDI).

Sunday, 21 July 2013

Difference Between VDI and Cloud Computing

Interested in a hassle-free business environment with fewer management and security chores? Present innovations in technology, cloud computing and virtualization, bring that desire within reach. These terms are a little unclear to many users and newcomers to the tech world, and it is important for a business firm to understand their uniqueness and differences before adopting and utilizing their services.

Virtualization, closely related to cloud computing, narrows down your business space and management through virtual machines. It can be performed with the Virtual Desktop Infrastructure (VDI) technique, using a centralized data center from which data can be accessed through a single server by many virtual desktops. Through VDI, one can run multiple operating systems and applications on numerous virtual desktops, eliminating the ‘one server-one application’ model. VDI consists of a physical server/PC, a hypervisor to manage the virtual machines, a virtual remote desktop agent to manage and control the desktops (using a management console and a software protocol), and endpoint devices. VDI reduces the purchase of hardware and other equipment, enhances power efficiency and produces less e-waste.

Cloud computing is another technology buzzword that offers many benefits to businesses. In simple words, it refers to saving and accessing data and other programs over the internet instead of using the local hard drive; ‘cloud’ is a metaphor for the internet. Today resources are limited while user demands grow exceedingly, and cloud computing helps bridge that gap. Cloud-based services provide highly scalable data and resources to drive your business forward. You can scale your work and storage up or down depending on the situation. They provide process agility thanks to shared infrastructure and help increase your market revenue by reducing infrastructure costs.

VDI denotes the centralization of desktop computing whereas cloud computing denotes consolidation of servers into one single resource pool or cloud. Cloud computing constructs and controls a server-side resource pool (cloud) whereas VDI acquires and controls a client-side resource pool (virtual desktop). With VDI, you can prevent breaching of data and data loss in a firm due to hardware failure or data theft because all the data and applications are consolidated and stored in a single server within the centralized data center. Though cloud computing is secure, personalized information can sometimes be breached through hacking (cyber attack) of the internet because the applications and data are accessed over the internet. The virtual resources connected in cloud are cheaper than the physical resources connected to a personal PC or network. You can use the apps that suit your business from the cloud service provider and pay for what you use only. But VDI requires the purchase of all the four components from vendors to suit your business.

Finally, VDI suits large enterprises that toil through security and management routines, whereas cloud computing can be utilized by small to mid-sized firms for enhanced performance. It is wise for a firm to assess its needs and then choose the technology that best fits them.


Article submitted by: http://www.sundenc.com

Sunday, 14 July 2013

Can SUNDE Co-work with other Platforms?

Virtualization offers quality solutions to business through easy management and administration of IT chores, minimized hardware costs and a secured network. SUNDE delivers such quality desktop virtualization results by creating virtual machines in a firm, thereby promoting its growth. SUNDE technology provides true zero clients for desktop computing, which include H4 zero clients for terminal services and Diana zero clients for VDI. While SUNDE offers significant desktop computing benefits to customers, many consumers raise queries about SUNDE zero clients’ support for Citrix, VMware or Microsoft Hyper-V. Some may have been their customers, but when they want to switch over, compatibility issues arise.

The H4 zero clients enable multiple users to share untapped resources within a single PC/server and do not virtualize. They use Microsoft’s standard RDP for communication and NetPoint software to achieve cost-effective access to applications. This RDP can access any number of desktops supporting RDP, including the virtual machines created by the hypervisors of VMware, Citrix or Microsoft Hyper-V.

The Diana zero clients are more advanced: they create virtual desktops managed by VM hypervisors, support HD video playback and deliver the full capability of a native Windows desktop. Two considerations apply when assessing whether SUNDE Diana for VDI can co-work with other platforms.

Citrix users are offered Citrix XenServer, a server virtualization product that helps manage and consolidate data centers dynamically. The first consideration: when customers use platforms that virtualize physical servers, transforming them to run multiple application servers and enhancing server utilization, SUNDE zero clients can be used to build the desktop environment, with virtual desktops accessing those application servers. This boosts the functionality and efficiency of the system.

The second consideration: when a customer uses Citrix XenDesktop, a desktop virtualization product that delivers a complete Windows desktop experience with remote access and integrated security, SUNDE zero clients cannot be adopted, since the user has already moved to desktop virtualization and SUNDE will not be compatible. The same considerations apply to the VMware and Hyper-V offerings.

Be it H4 for terminal servers or Diana for VDI, SUNDE technology can co-exist and work with other platforms thus enabling users to attain stream-lined data integrity and high productivity in the IT market.

SUNDE is working with you to bring new ideas to life in the IT sector. Get in touch with SUNDE by email at info@hy-elect.com or by phone at 0086-20-3229381. For more information, please visit www.sundenc.com.

Sunday, 7 July 2013

VDI Architectures and Endpoints

Virtual Desktop Infrastructure, or VDI, is a desktop computing architecture that centralizes the desktop operating system and applications on Virtual Machines, or VMs, running on a hypervisor on a shared physical server in the data center. VDI promises significant benefits in containing and reducing the management and support burden of delivering desktop computing. VDI, unlike earlier end-user virtualization approaches like terminal services and application virtualization, is intended to deliver the full capabilities of a native Windows desktop to users.

All of the many technological and architectural approaches to VDI share the common goal of freeing the user’s desktop computing environment (and in turn the supporting IT staff) from the constraints and problems associated with deploying, maintaining, securing, and running Windows on physically distributed personal computer hardware.

There is a wide range of VDI architecture choices: what level of centralization; which hypervisors, management tools, and connection brokers to use; whether virtual desktops are only server-based or also client-based; and so on. Possibly the most critical choice is the endpoint type or architecture. This choice will often drive many, if not all, of your other VDI architecture, technology, and vendor choices.

The four main types of VDI endpoints are blade PCs, software clients, thin clients and zero clients. Because they have captured the bulk of the current VDI market, this whitepaper looks in depth at thin clients and zero clients.

Five Key Factors for Choosing a VDI Endpoint VDI endpoints help deliver many of the benefits of deploying VDI. Five key factors in making a VDI endpoint choice are:

1. Improve Productivity – Stateless and management-free VDI endpoints can eliminate the need for IT staff to travel to users in order to resolve problems or perform maintenance. Deploying or replacing an endpoint should never require more than connecting wires and turning it on.

2. Simplify Adoption – Endpoints, and their supporting VDI software, should provide essentially the same user experience as native Windows running on the desktop PC they replaced. This not only saves time retraining users and support staff, but also simplifies supporting the large number of peripherals that users rely on.

3. Conserve Energy – Efficient VDI endpoints use just a few percent of the electricity consumed by desktop PCs, cutting substantially the electricity used to power and cool the devices. This savings alone could potentially pay for the VDI deployment in just a few years.

4. Strengthen Security – By not storing any data (even temporarily) on the endpoint, the risk to confidential data from malware, hardware failures, or endpoint theft can be eliminated. VDI endpoints should also not present any new security holes that malware could attack.

5. Slash TCO – The key overall driver for selecting VDI endpoints is the promise of radically lowering the Total Cost of Ownership (TCO) while still delivering a reliable Windows-based desktop computing infrastructure. In addition to savings from higher IT productivity and energy savings, VDI endpoints should deliver further TCO savings by limiting costs from endpoint hardware and software, systems integration, and user or IT staff retraining.
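The energy claim in point 3 can be sanity-checked with a quick, purely illustrative calculation (all wattage and price figures below are assumptions, not numbers from this article):

```python
pc_watts, endpoint_watts = 120, 5       # assumed draw: desktop PC vs VDI endpoint
hours_per_year = 8 * 250                # 8-hour days, 250 working days
rate = 0.12                             # assumed electricity price, $/kWh

def annual_cost(watts):
    # watts x hours -> watt-hours -> kilowatt-hours -> dollars
    return watts * hours_per_year / 1000 * rate

saving = annual_cost(pc_watts) - annual_cost(endpoint_watts)
print(f"per-seat saving: ${saving:.2f}/year")  # → per-seat saving: $27.60/year
```

Multiplied across hundreds of seats, and before counting reduced cooling load, savings of this order are what make the payback argument plausible.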

These five benefits are the key drivers for the return on investments you can expect to realize from deploying VDI in place of traditional PCs. In order to achieve optimal results, it is critical to make careful choices in both the technical architectures and the products and vendors included in your VDI deployment plans.

As leading experts in VDI, SUNDE is working with you to bring new ideas to life in your IT field. Get in touch with SUNDE by email at info@hy-elect.com or by phone at 0086-20-3229381. For more information, please visit www.sundenc.com.

Sunday, 23 June 2013

SUNDE Changes the Face of VDI for Education

Is it feasible to provide computing labs and learning at lower costs? Will students be able to attain the utility of the latest technology within smaller budgets? These questions pave the way for schools utilizing and implementing Virtual Desktop Infrastructure solutions.

Virtual Desktop Infrastructure (VDI) provides a virtual set-up, helping one access the available physical resources across the entire infrastructure at full scale. VDI reduces IT support thanks to central management of user profiles and resources. It delivers increased data security and patch management, as data is stored centrally. VDI also minimizes the total cost of ownership and provides a better end-user experience and scalable access to applications. VDI has a lower energy footprint and provides greener computing using stateless VDI devices. A VDI setup lasts longer than traditional PCs, thus extending or delaying the refresh cycle.

Blending VDI technology with the education system is a desirable venture for every school and teaching institution today, but it is not always viable.

Most notably, VDI is harder to install than a PC or terminal server setup, especially in large deployments. One might assume that a VDI deployment would be similar to a terminal server setup, but it usually is more complicated. For most deployments of VDI, customers must take on the integration of the endpoint device with management tools, connection brokers, and VDI protocols from multiple vendors, which can significantly raise the complexity and increase the risks and fragility of a VDI deployment. And high skill levels for IT staff are typically required for deployment and maintenance.

Cost is another factor that deserves close scrutiny. Users are often misled into expecting a direct financial boost from implementing VDI. In general, some cost savings are possible if cheaper thin client or zero client devices replace full-blown PCs on desks. However, back-end costs for VDI server infrastructure, management tools, and the remote desktop protocol capabilities needed for essential VDI functions and rich multimedia display can outweigh these desktop device savings.

SUNDE introduces a simple, cost-saving, yet high-performance VDI solution that truly meets these challenges in settings ranging from K-12 to colleges, universities, and technical schools. SUNDE VDI simplifies the deployment, cost, and management of VDI by pairing the Diana endpoint with the free vPointServer software installed on the server, avoiding expensive and complex software and license deployments. The Diana zero client endpoint is small, with no moving parts, CPU, storage, OS, or software, so it consumes very little energy. It is tamper-resistant and stores no data locally, preventing data loss. Significantly, it requires no endpoint management software, no patch management, no firmware upgrades, and no local OS licensing fees or updates, reducing virtual desktop deployment costs and improving ongoing IT productivity. SUNDE's innovative free SUNDE-VDI protocol provides the communication link between the vPointServer software and the Diana zero client devices, delivering a full PC experience, including rich multimedia, full-screen 1080p streaming video and Flash, and seamless responsiveness, without requiring any complex or expensive protocol extensions or high-end GPU configurations.

Key features of SUNDE solutions for education and academic organizations include: 

-- Uses only 5 watts when active and 0.2 watts in sleep mode, saving more than 97% of the energy consumed by PCs.

-- Longer desktop refresh cycles from obsolescence-free SUNDE zero clients.

-- Produces less e-waste & noise and saves space in classrooms and labs.

-- No local storage or moving parts at the endpoint, eliminating endpoint data loss and hardware maintenance.

-- Centralized management reduces support overhead; desktop-related issues can be fixed in minutes from the data center without visiting the machines.

-- Rapid refresh of computer labs, allowing an entire lab to be reset to a standard configuration in just minutes. 

-- Provides seamless high end performance including smooth multiple and simultaneous high quality video play.

-- More cost-effective than other vendors, as SUNDE zero clients are affordably priced and the software is provided free of charge.

-- Great simplicity: even an untrained person can set up and manage the solution.
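
The "more than 97%" energy figure above can be sanity-checked with simple arithmetic. The desktop PC draw used below (~170 W) is an illustrative assumption, not a measured or vendor-supplied figure; actual PC power draw varies widely with hardware and workload.

```python
# Back-of-envelope check of the energy-savings claim.
# PC_WATTS is an assumed typical desktop draw, for illustration only.

PC_WATTS = 170          # assumed typical desktop PC active draw (watts)
ZERO_CLIENT_WATTS = 5   # zero client active draw quoted above (watts)

savings = 1 - ZERO_CLIENT_WATTS / PC_WATTS
print(f"Energy savings: {savings:.1%}")
```

Under this assumption the savings come out at roughly 97%, consistent with the claim; a lower assumed PC draw would reduce the figure somewhat.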

Education is being asked to do more with less: increase access to computing resources, lower costs, and reduce the energy footprint, all within smaller budgets. Shrinking budgets, aging hardware, and reductions in IT administration make VDI the best move education organizations can make. But there are many factors to consider when schools adopt the new technology. Selecting the right technologies, and determining who is and is not a good candidate for VDI, decides whether the new venture is cost-effective.

SUNDE is working with you to bring new ideas to life in your education establishment. Get in touch with SUNDE by email at info@hy-elect.com or by phone at 0086-20-3229381. For more information, please visit www.sundenc.com.