Virtualization is a concept that has evolved from what many first recognized as a niche technology into one that drives many mainstream networks. Evidence of virtualization exists in nearly all aspects of information technology today: you can see it in sales, education, testing, and demonstration labs, and even driving production network servers.
Virtualization is the latest in a long line of technical innovations designed to increase the level of system abstraction and enable IT users to harness ever-increasing levels of computer performance. At its simplest, virtualization allows you, virtually and cost-effectively, to have two or more computers, running two or more completely different environments, on one piece of hardware. For example, with virtualization you can have both a Linux machine and a Windows machine on one system; alternatively, you could host a Windows 95 desktop and a Windows XP desktop on one workstation. In slightly more technical terms, virtualization essentially decouples users and applications from the specific hardware characteristics of the systems they use to perform computational tasks.

This technology promises to usher in an entirely new wave of hardware and software innovation. For example, among other benefits, virtualization is designed to simplify system upgrades (and in some cases may eliminate the need for them) by allowing users to capture the state of a virtual machine (VM) and then transport that state in its entirety from an old host system to a new one. Virtualization is also designed to enable a generation of more energy-efficient computing: processor, memory, and storage resources that today must be delivered in fixed amounts determined by real hardware configurations will be delivered with finer granularity via dynamically tuned VMs. The term "virtualization" was coined in the 1960s to refer to a virtual machine (sometimes called a pseudo-machine), a term which itself dates from the experimental IBM M44/44X system.
A virtual machine can be controlled and inspected from outside more easily than a physical one, and its configuration is more flexible. This is very useful in kernel development and for teaching operating system courses. A new virtual machine can be provisioned as needed, without an up-front hardware purchase, and a virtual machine can easily be relocated from one physical machine to another. For example, a salesperson visiting a customer can copy a virtual machine with the demonstration software to a laptop, without having to transport a physical computer. At the same time, an error inside a virtual machine does not harm the host system, so there is no risk of corrupting the OS on the laptop. Because of this easy relocation, virtual machines can also be used in disaster recovery scenarios.
The creation and management of virtual machines has more recently been called platform virtualization, or server virtualization. In fact, the first use of virtualization was at the University of Manchester with the Atlas computer, the first machine to use virtual memory (to manage the 576 KB of its four drum stores) and paging. Platform virtualization is performed on a given hardware platform by host software (a control program), which creates a simulated computer environment, a virtual machine, for its guest software. The guest software, which is often itself a complete operating system, runs just as if it were installed on a stand-alone hardware platform. Typically, many such virtual machines are simulated on a single physical machine, their number limited by the host's hardware resources, and there is usually no requirement for a guest OS to be the same as the host's. The guest system often requires access to specific peripheral devices to function, so the simulation must support the guest's interfaces to those devices; trivial examples of such devices are a hard disk drive or a network interface card.
To keep it simple, consider virtualization to be the act of abstracting the physical boundaries of a technology. Physical abstraction is now occurring in several ways, with many of these methods illustrated in Figure 1. For example, workstations and servers no longer need dedicated physical hardware, such as a CPU or motherboard, in order to run as independent entities; instead, they can run inside a virtual machine (VM). When running as a virtual machine, a computer's hardware is emulated and presented to an operating system as if the hardware truly existed. This technology removes the traditional dependence of every operating system on hardware: because the hardware is emulated, a virtual machine can essentially run on any x86-class host system, regardless of hardware makeup. Furthermore, you can run multiple VMs with different operating systems on the same system at the same time!

Virtualization is More than Virtual Machine Software

It is clear to those who have explored the topic of virtualization technology that it encompasses far more than just virtual machine software, such as VMware ESX Server, XenSource XenEnterprise or Microsoft Virtual Server. Over the last 30 years,  virtualization technology has been developed to enhance how individuals access computing solutions, how applications are developed and deployed, how they are processed, where and how they are stored, how systems communicate with one another, and, of course, how an extended system environment can be made both secure and manageable. This broad view is very important if an organization hopes to make optimal use of this technology.
Somewhere along the way, many in the industry have come to believe that virtualization is merely the use of virtual machine software. This rather narrow view of virtualization is based upon the view that the whole purpose of virtualization is to encapsulate an operating system and a whole stack of software enabling an application or Web service to run. Virtual machine software then makes it possible for one or more of these "capsules" to run simultaneously on a single machine.
While this viewpoint is useful if the goals are only consolidating an existing application portfolio onto a smaller number of systems, reducing or avoiding costs, or making it easier to deploy systems for new tasks, it is less useful if the organization is seeking higher performance, greater scalability, greater agility, high levels of reliability and availability, or the ability to manage its physical and virtual resources in a uniform way. Virtual machine software, after all, is only one of five virtual processing functions, and virtual processing is only one of seven layers of virtualization technology. Decision makers must work with a broader view of virtualization technology.
The Kusnetzky Group believes this state of affairs can be attributed to the marketing prowess of a small number of suppliers of virtual machine software rather than virtualization really being such a limited concept. This paper will examine a useful model of virtualization technology and present what each type of virtualization can do for an organization. Future papers in this series will focus on other aspects of virtualization.
What is virtualization?
Virtualization is a way to abstract applications and their underlying components away from the hardware supporting them and to present a logical view of these resources. This logical view may be strikingly different from the physical view. The goal is usually one of the following: higher levels of performance, scalability, reliability/availability, or agility, or the creation of a unified security and management domain.

WEB 2.0



Abstract

In 2004, we realized that the Web was on the cusp of a new era, one that would finally let loose the power of network effects, setting off a surge of innovation and opportunity. To help usher in this new era, O’Reilly Media and CMP launched a conference that showcased the innovators who were driving it. When O’Reilly’s Dale Dougherty came up with the term “Web 2.0” during a brainstorming session, we knew we had the name for the conference. What we didn’t know was that the industry would embrace the Web 2.0 meme and that it would come to represent the new Web.

At the end of 2006, Time magazine’s Person of the Year was ‘You’. On the cover of the magazine, underneath the title of the award, was a picture of a PC with a mirror in place of the screen, reflecting not only the face of the reader, but also the general feeling that 2006 was the year of the Web - a new, improved, 'second version', 'user generated' Web. But how accurate is our perception of so-called 'Web 2.0'? Is there real substance behind the hyperbole? Is it a publishing revolution or is it a social revolution? Is it actually a revolution at all? And what will it mean for education, a sector that is already feeling the effects of the demands of Internet-related change?

Media coverage of Web 2.0 concentrates on common applications and services such as blogs, video sharing, social networking and podcasting: a more socially connected Web in which people can contribute as much as they consume. In chapter two I provide a brief introduction to some of these services, many of them built on technologies and open standards that have been around since the earliest days of the Web, and show how they have been refined, and in some cases concatenated, to provide a technological foundation for delivering services to the user through the browser window (based on the key idea of the Web, rather than the desktop, as the technology platform). But is this Web 2.0? It can be argued that these applications and services are really just early manifestations of ongoing Web technology development. If we look at Web 2.0 as it was originally articulated, we can see that it is in fact an umbrella term for the framework of ideas used to understand these newer Web services in the context of the technologies that have produced them. These ideas, though, need technology in order to be realized as the functioning Web-based services and applications that we are using.
Web 2.0 is much more than just pasting a new user interface onto an old application. It’s a way of thinking, a new perspective on the entire business of software— from concept through delivery, from marketing through support. Web 2.0 thrives on network effects: databases that get richer the more people interact with them, applications that are smarter the more people use them, marketing that is driven by user stories and experiences, and applications that interact with each other to form a broader computing platform.

The trend toward networked applications is accelerating. While Web 2.0 has initially taken hold in consumer-facing applications, the infrastructure required to build these applications, and the scale at which they are operating, means that, much as PCs took over from mainframes in a classic demonstration of Clayton Christensen’s “innovator’s dilemma” hypothesis, web applications can and will move into the enterprise space.

Two years ago we launched the Web 2.0 Conference to evangelize Web 2.0 and to get the industry to take notice of the seismic shift we were experiencing. This report is for those who are ready to respond to that shift. It digs beneath the hype and buzzwords, and teaches the underlying rules of Web 2.0—what they are, how successful Web 2.0 companies are applying them, and how to apply them to your own business. It’s a practical resource that provides essential tools for competing and thriving in today’s emerging business world. I hope it inspires you to embrace the Web 2.0 opportunity.

Education and educational institutions will have their own special issues with regard to Web 2.0 services and technologies and in section five I look at some of these issues. By special request, particular attention has been given to libraries and preservation and the issues that present themselves for those tasked with preserving some of the material produced by these services and applications. Finally, I look to the future. What are the technologies that will affect the next phase of the Web’s development: what one might call, rather reluctantly, Web 3.0?
'Web 2.0' or 'Web 1.0': a tale of two Tims

Web 2.0 is a slippery character to pin down. Is it a revolution in the way we use the Web? Is it another technology 'bubble'? It rather depends on whom you ask. A Web technologist will give quite a different answer from a marketing student or an economics professor.

The short answer, for many people, is to make a reference to a group of technologies which have become deeply associated with the term: blogs, wikis, podcasts, and RSS feeds etc., which facilitate a more socially connected Web where everyone is able to add to and edit the information space. The longer answer is rather more complicated and pulls in economics, technology and new ideas about the connected society. To some, though, it is simply a time to invest in technology again—a time of renewed exuberance after the dot-com bust.

For the inventor of the Web, Sir Tim Berners-Lee, there is a tremendous sense of déjà vu about all this. When asked in an interview for a podcast, published on IBM’s website, whether Web 2.0 was different to what might be called Web 1.0 because the former is all about connecting people, he replied:

"Totally not. Web 1.0 was all about connecting people. It was an interactive space, and I think Web 2.0 is of course a piece of jargon, nobody even knows what it means. If Web 2.0 for you is blogs and wikis, then that is people to people. But that was what the Web was supposed to be all along. And in fact, you know, this 'Web 2.0', it means using the standards which have been produced by all these people working on Web 1.0."
                          Laningham (ed.), developerWorks Interviews.

This distinction is key to understanding where the boundaries lie between ‘the Web’, as a set of technologies, and ‘Web 2.0’, the attempt to conceptualize the significance of a set of outcomes that are enabled by those Web technologies. Understanding this distinction helps us to think more clearly about the issues thrown up by both the technologies and their results, and so to see why something might be classed as ‘Web 2.0’ or not. To discuss and address the Web 2.0 issues that face higher education, we need these conceptual tools in order to identify why something might be significant and whether or not we should act on it.

For example, Tim O'Reilly, in his original article, identifies what he considers to be features of successful ‘Web 1.0’ companies and the ‘most interesting’ of the new applications. He does this in order to develop a set of concepts by which to benchmark whether a company is Web 1.0 or Web 2.0. This is important to him because he is concerned that ‘the Web 2.0 meme has become so widespread that companies are now pasting it on as a marketing buzzword, with no real understanding of just what it means’.

Web 1.0                          Web 2.0
DoubleClick                 -->  Google AdSense
Britannica Online           -->  Wikipedia
personal websites           -->  blogging
evite                       -->  upcoming.org and EVDB
domain name speculation     -->  search engine optimization
page views                  -->  cost per click
screen scraping             -->  web services
content management systems  -->  wikis
directories (taxonomy)      -->  tagging ("folksonomy")

Web 2.0

The term "Web 2.0" refers to a perceived second generation of web development and design that aims to facilitate communication, secure information sharing, interoperability, and collaboration on the World Wide Web. Web 2.0 concepts have led to the development and evolution of web-based communities, hosted services, and applications such as social-networking sites, video-sharing sites, wikis, blogs, and folksonomies.
The term first became notable after the Web 2.0 conference in 2004. Although the term suggests a new version of the World Wide Web, it does not refer to an update to any technical specifications, but rather to changes in the ways software developers and end-users utilize the Web.
Web 2.0 is a set of economic, social, and technology trends that collectively form the basis for the next generation of the Internet: a more mature, distinctive medium characterized by user participation, openness, and network effects.





 Introduction to XML

XML was designed to transport and store data.
HTML was designed to display data.

What You Should Already Know
Before you continue you should have a basic understanding of the following:
  • HTML
  • JavaScript

What is XML?

  • XML stands for EXtensible Markup Language
  • XML is a markup language much like HTML
  • XML was designed to carry data, not to display data
  • XML tags are not predefined. You must define your own tags
  • XML is designed to be self-descriptive
  • XML is a W3C Recommendation

The Difference Between XML and HTML
XML is not a replacement for HTML.
XML and HTML were designed with different goals:
  • XML was designed to transport and store data, with focus on what data is.
  • HTML was designed to display data, with focus on how data looks.
HTML is about displaying information, while XML is about carrying information.

XML Does not DO Anything
Maybe it is a little hard to understand, but XML does not DO anything. XML was created to structure, store, and transport information.
The following example is a note to Tove from Jani, stored as XML:

<note>
  <to>Tove</to>
  <from>Jani</from>
  <heading>Reminder</heading>
  <body>Don't forget me this weekend!</body>
</note>
The note above is quite self-descriptive: it has sender and receiver information, and it also has a heading and a message body.
But still, this XML document does not DO anything. It is just pure information wrapped in tags. Someone must write a piece of software to send, receive or display it.
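As a sketch of such a piece of software, here is how the note could be read with Python's standard-library XML parser (the tag names <note>, <to>, <from>, <heading> and <body> follow the conventional form of this example):

```python
import xml.etree.ElementTree as ET

# The note document: pure information wrapped in tags.
doc = """<note>
  <to>Tove</to>
  <from>Jani</from>
  <heading>Reminder</heading>
  <body>Don't forget me this weekend!</body>
</note>"""

# Parse the text into a tree and read each field by tag name.
note = ET.fromstring(doc)
receiver = note.findtext("to")
sender = note.findtext("from")
body = note.findtext("body")
print(sender, "->", receiver, ":", body)
```

Nothing "happens" until code like this interprets the tags; the same document could just as easily be displayed, stored, or sent over a network.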

XML is Just Plain Text
XML is nothing special. It is just plain text. Software that can handle plain text can also handle XML.
However, XML-aware applications can handle the XML tags specially. The functional meaning of the tags depends on the nature of the application.

With XML You Invent Your Own Tags
The tags in the example above (like <to> and <from>) are not defined in any XML standard. These tags are "invented" by the author of the XML document.
That is because the XML language has no predefined tags.
The tags used in HTML (and the structure of HTML) are predefined. HTML documents can only use tags defined in the HTML standard (like <p>, <h1>, etc.).

XML allows the author to define his own tags and his own document structure.

XML is Not a Replacement for HTML

XML is a complement to HTML.
It is important to understand that XML is not a replacement for HTML. In most web applications, XML is used to transport data, while HTML is used to format and display the data.
My best description of XML is this:
XML is a software and hardware independent tool for carrying information.

XML is Everywhere
We have been participating in XML development since its creation. It has been amazing to see how quickly the XML standard has developed, and how quickly a large number of software vendors have adopted the standard.
XML is now as important for the Web as HTML was to the foundation of the Web.
XML is everywhere. It is the most common tool for data transmissions between all sorts of applications, and is becoming more and more popular in the area of storing and describing information.

Using XML in ASP.NET

XML is a cross-platform, hardware- and software-independent, text-based markup language which enables you to store data in a structured format by using meaningful tags. XML stores structured data in XML documents that are similar to databases. Notice that, unlike databases, XML documents store data in the form of plain text, which can be used across platforms.
The World Wide Web (WWW) is the biggest infrastructure for information publishing and exchange. In this context, XML as a standard format for documents becomes more and more important. In most cases XML documents are generated automatically from databases. To exploit the content of any XML-structured file, it is important to find an efficient representation. One common representation is the Document Object Model (DOM) (W3C, 1998), which has the advantage that every XML document can be represented. On the other hand, accessing single elements is very inefficient, because the DOM has no knowledge of the XML document's specific structure; thus an XPath query selecting child elements with a certain name can only be evaluated by testing all siblings, which results in linear access time. Since most XML documents are structured according to a given XML language description, such as XML Schema (W3C, 2001) or a DTD (W3C, 2004), it is more efficient to use so-called class generators to produce classes which can represent XML documents of a specific XML language much more efficiently. In this paper we will introduce a new XML Class Generator for Java (XCG), which offers access to sub-elements in constant time, provides a simple class structure, and moreover guarantees universality. In this context, universality means that every correct DTD or XML Schema can be mapped onto (Java) classes.
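To illustrate the class-generator idea in miniature (a Python sketch rather than the paper's Java implementation; the Note class below is a hypothetical example of what generator output might look like, not XCG itself):

```python
import xml.etree.ElementTree as ET

class Note:
    """Sketch of a generated class for a schema that declares a <note>
    element with <to>, <from> and <body> children. Each child is read
    once at construction time and stored in its own field, so later
    access is a constant-time attribute lookup rather than a DOM-style
    linear scan over sibling nodes."""
    def __init__(self, element):
        self.to = element.findtext("to")
        self.sender = element.findtext("from")   # 'from' is a keyword
        self.body = element.findtext("body")

doc = ET.fromstring(
    "<note><to>Tove</to><from>Jani</from><body>Hi!</body></note>")
note = Note(doc)
print(note.sender)  # no tree traversal needed at this point
```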
Xml Code Generator (XCG)

What is XCG?

XCG is a technology that allows you to create classes using an xml syntax. You can declaratively specify a class' members and their initial values, as well as a number of additional things that make certain common programming tasks easier.
Above all else, XCG is designed to be flexible. This means that everything it does is customisable. XCG stands as a framework around which you can build your own code-generation system. Basically, this means that if XCG does not do what you want, you can make it do what you want. Virtually any part of programming you find tedious can be automated by XCG, and instead of having to build your own code generation system from scratch, you can hook into XCG, which also has the added benefit that your extensions can cooperate with XCG's intrinsic functionality as well as with other extensions written by you or others. Of course you don't have to learn how to write extensions if you just want to use XCG; you can use XCG and any existing extensions without even knowing how the extension system works.
Although XCG and similar systems are ideal for situations where you are designing a graphical UI, it should be remarked that XCG is not limited to this. It is a general purpose method for hooking up .Net objects.

An XCG primer

Rather than trying to tell you what XCG is, it is far simpler to give examples of what it can do. All code examples, unless otherwise specified, use C# and Visual Basic .NET syntax. Even if you're not familiar with C# per se, this should make the samples reasonably straightforward to follow if you're familiar with C++ or Java.





The cost of ZigBee devices is extremely competitive, with full nodes available for a fraction of the cost of a Bluetooth node.
ZigBee devices are deliberately limited to a throughput of 250 kbit/s, compared with Bluetooth's much larger pipeline of 1 Mbit/s, and operate on the 2.4 GHz ISM band, which is available throughout most of the world.
ZigBee has been developed to meet the growing demand for capable wireless networking between numerous low-power devices. In industry, ZigBee is being used for next-generation automated manufacturing, with small transmitters in every device on the floor, allowing communication between devices and a central computer. This new level of communication permits finely tuned remote monitoring and manipulation. In the consumer market, ZigBee is being explored for everything from linking low-power household devices, such as smoke alarms, to a central housing control unit, to centralized light controls.
The specified maximum range of operation for ZigBee devices is 250 feet (76 m), substantially further than that of Bluetooth-capable devices, although the security concerns raised over remotely "sniping" Bluetooth devices may prove to hold true for ZigBee devices as well.
Due to their low power output, ZigBee devices can sustain themselves on a small battery for many months, or even years, making them ideal for install-and-forget purposes, such as most small household systems. Predictions of future ZigBee installation, most based on the explosive use of ZigBee in automated household tasks in China, look to a near future when upwards of sixty ZigBee devices may be found in an average American home, all communicating with one another freely and regulating common tasks seamlessly.

ZigBee is a low-cost, low-power, wireless mesh networking standard. The low cost allows the technology to be widely deployed in wireless control and monitoring applications, the low power-usage allows longer life with smaller batteries, and the mesh networking provides high reliability and larger range.
The ZigBee Alliance, the standards body which defines ZigBee,[1] also publishes application profiles that allow multiple OEM vendors to create interoperable products. The current list of application profiles, either published or in the works, is:
  • Home Automation
  • ZigBee Smart Energy
  • Telecommunication Applications
  • Personal, Home and Hospital Care
The relationship between IEEE 802.15.4 and ZigBee is similar to that between IEEE 802.11 and the Wi-Fi Alliance. The ZigBee 1.0 specification was ratified on 14 December 2004 and is available to members of the ZigBee Alliance. Most recently, the ZigBee 2007 specification was posted on 30 October 2007. The first ZigBee Application Profile, Home Automation, was announced 2 November 2007.
For non-commercial purposes, the ZigBee specification is available free to the general public.[2] An entry level membership in the ZigBee Alliance, called Adopter, costs US$ 3500 annually and provides access to the as-yet unpublished specifications and permission to create products for market using the specifications.
ZigBee operates in the industrial, scientific and medical (ISM) radio bands; 868 MHz in Europe, 915 MHz in countries such as USA and Australia, and 2.4 GHz in most jurisdictions worldwide. The technology is intended to be simpler and less expensive than other WPANs such as Bluetooth. ZigBee chip vendors typically sell integrated radios and microcontrollers with between 60K and 128K flash memory, such as the Freescale MC13213, the Ember EM250 and the Texas Instruments CC2430. Radios are also available stand-alone to be used with any processor or microcontroller. Generally, the chip vendors also offer the ZigBee software stack, although independent ones are also available.
"As of 2006, the retail price of a Zigbee-compliant transceiver is approaching $1, and the price for one radio, processor, and memory package is about $3."[3] Comparatively, the price of consumer-grade Bluetooth chips is now under $3.[4]
The first stack release is now called Zigbee 2004. The second stack release is called Zigbee 2006, and mainly replaces the MSG/KVP structure used in 2004 with a "cluster library". The 2004 stack is now more or less obsolete.
ZigBee 2007, now the current stack release, contains two stack profiles: stack profile 1 (simply called ZigBee), for home and light commercial use, and stack profile 2 (called ZigBee Pro). ZigBee Pro offers more features, such as multi-casting, many-to-one routing and high security with Symmetric-Key Key Exchange (SKKE), while ZigBee (stack profile 1) offers a smaller footprint in RAM and flash. Both offer full mesh networking and work with all ZigBee application profiles.
ZigBee 2007 is fully backward compatible with ZigBee 2006 devices: a ZigBee 2007 device may join and operate on a ZigBee 2006 network and vice versa. Due to differences in routing options, ZigBee Pro devices must become non-routing ZigBee End-Devices (ZEDs) on a ZigBee 2006 or ZigBee 2007 network, the same as ZigBee 2006 or ZigBee 2007 devices must become ZEDs on a ZigBee Pro network. The applications running on those devices work the same regardless of the stack profile beneath them.


ZigBee protocols are intended for use in embedded applications requiring low data rates and low power consumption. ZigBee's current focus is to define a general-purpose, inexpensive, self-organizing mesh network that can be used for industrial control, embedded sensing, medical data collection, smoke and intruder warning, building automation, home automation, etc. The resulting network will use very small amounts of power -- individual devices must have a battery life of at least two years to pass ZigBee certification[5].
Typical application areas include
  • Home Entertainment and Control — Smart lighting, advanced temperature control, safety and security, movies and music
  • Home Awareness — Water sensors, power sensors, smoke and fire detectors, smart appliances and access sensors
  • Mobile Services — m-payment, m-monitoring and control, m-security and access control, m-healthcare and tele-assist
  • Commercial Building — Energy monitoring, HVAC, lighting, access control
  • Industrial Plant — Process control, asset management, environmental management, energy management, industrial device control

Device types

There are three different types of ZigBee devices:
  • ZigBee coordinator(ZC): The most capable device, the coordinator forms the root of the network tree and might bridge to other networks. There is exactly one ZigBee coordinator in each network since it is the device that started the network originally. It is able to store information about the network, including acting as the Trust Centre & repository for security keys.
  • ZigBee Router (ZR): As well as running an application function, a router can act as an intermediate router, passing on data from other devices.
  • ZigBee End Device (ZED): Contains just enough functionality to talk to the parent node (either the coordinator or a router); it cannot relay data from other devices. This relationship allows the node to be asleep a significant amount of the time thereby giving long battery life. A ZED requires the least amount of memory, and therefore can be less expensive to manufacture than a ZR or ZC.


The protocols build on recent algorithmic research (Ad-hoc On-demand Distance Vector, neuRFon) to automatically construct a low-speed ad-hoc network of nodes. In most large network instances, the network will be a cluster of clusters. It can also form a mesh or a single cluster. The current profiles derived from the ZigBee protocols support beacon and non-beacon enabled networks.
In non-beacon-enabled networks (those whose beacon order is 15), an unslotted CSMA/CA channel access mechanism is used. In this type of network, ZigBee Routers typically have their receivers continuously active, requiring a more robust power supply. However, this allows for heterogeneous networks in which some devices receive continuously, while others only transmit when an external stimulus is detected. The typical example of a heterogeneous network is a wireless light switch: the ZigBee node at the lamp may receive constantly, since it is connected to the mains supply, while a battery-powered light switch would remain asleep until the switch is thrown. The switch then wakes up, sends a command to the lamp, receives an acknowledgment, and returns to sleep. In such a network the lamp node will be at least a ZigBee Router, if not the ZigBee Coordinator; the switch node is typically a ZigBee End Device.
In beacon-enabled networks, the special network nodes called ZigBee Routers transmit periodic beacons to confirm their presence to other network nodes. Nodes may sleep between beacons, thus lowering their duty cycle and extending their battery life. Beacon intervals may range from 15.36 milliseconds to 15.36 ms * 2^14 = 251.65824 seconds at 250 kbit/s, from 24 milliseconds to 24 ms * 2^14 = 393.216 seconds at 40 kbit/s, and from 48 milliseconds to 48 ms * 2^14 = 786.432 seconds at 20 kbit/s. However, low duty cycle operation with long beacon intervals requires precise timing, which can conflict with the need for low product cost.
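These figures follow a single rule: the beacon interval is the base superframe duration multiplied by 2^BO, where the beacon order BO runs from 0 to 14 (an order of 15 denotes a non-beacon-enabled network). A quick sketch to verify the arithmetic (the dictionary of base durations is derived from the minimum intervals quoted above):

```python
# Base superframe duration in seconds for each raw bit rate,
# taken from the minimum beacon intervals quoted above.
BASE_SUPERFRAME_S = {250_000: 0.01536, 40_000: 0.024, 20_000: 0.048}

def beacon_interval_s(bit_rate, beacon_order):
    """Beacon interval in seconds: base duration * 2**BO, BO in 0..14."""
    if not 0 <= beacon_order <= 14:
        raise ValueError("beacon order 15 denotes a non-beacon network")
    return BASE_SUPERFRAME_S[bit_rate] * 2 ** beacon_order

# Maximum intervals at each bit rate (BO = 14):
for rate in (250_000, 40_000, 20_000):
    print(rate, "bit/s ->", beacon_interval_s(rate, 14), "s")
```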
In general, the ZigBee protocols minimize the time the radio is on so as to reduce power use. In beaconing networks, nodes only need to be active while a beacon is being transmitted. In non-beacon-enabled networks, power consumption is decidedly asymmetrical: some devices are always active, while others spend most of their time sleeping.
ZigBee devices are required to conform to the IEEE 802.15.4-2003 Low-Rate Wireless Personal Area Network (WPAN) standard. The standard specifies the lower protocol layers—the physical layer (PHY), and the medium access control (MAC) portion of the data link layer (DLL). This standard specifies operation in the unlicensed 2.4 GHz, 915 MHz and 868 MHz ISM bands. In the 2.4 GHz band there are 16 ZigBee channels, spaced 5 MHz apart. The center frequency for each channel can be calculated as FC = (2405 + 5 * (ch - 11)) MHz, where ch = 11, 12, ..., 26.
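As a quick sketch, the 2.4 GHz channel map above can be computed directly (the function name is this example's own):

```python
def center_frequency_mhz(ch):
    """Center frequency of a 2.4 GHz ZigBee channel, per FC = 2405 + 5*(ch-11)."""
    if not 11 <= ch <= 26:
        raise ValueError("2.4 GHz ZigBee channels are numbered 11..26")
    return 2405 + 5 * (ch - 11)

# First, middle, and last channels of the band:
print([center_frequency_mhz(ch) for ch in (11, 18, 26)])  # [2405, 2440, 2480]
```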
The radios use direct-sequence spread spectrum coding, with the chip sequence driven by the digital bit stream fed into the modulator. Binary phase-shift keying (BPSK) is used in the 868 and 915 MHz bands, and offset quadrature phase-shift keying (O-QPSK), which transmits two bits per symbol, is used in the 2.4 GHz band. The raw, over-the-air data rate is 250 kbit/s per channel in the 2.4 GHz band, 40 kbit/s per channel in the 915 MHz band, and 20 kbit/s in the 868 MHz band. Transmission range is between 10 and 75 meters (33 and 246 feet), and up to 1,500 meters for ZigBee PRO, although it is heavily dependent on the particular environment. The maximum output power of the radios is generally 0 dBm (1 mW).
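The per-band PHY parameters above can be collected into a single lookup table. The channel counts for the sub-GHz bands (1 channel at 868 MHz, 10 at 915 MHz) come from IEEE 802.15.4-2003; the field names are this example's own:

```python
# Per-band PHY summary (modulation, raw data rate, channel count).
PHY_BANDS = {
    "868 MHz": {"modulation": "BPSK",   "rate_kbps": 20,  "channels": 1},
    "915 MHz": {"modulation": "BPSK",   "rate_kbps": 40,  "channels": 10},
    "2.4 GHz": {"modulation": "O-QPSK", "rate_kbps": 250, "channels": 16},
}

# All 27 channels of 802.15.4-2003 across the three bands:
print(sum(band["channels"] for band in PHY_BANDS.values()))  # 27
```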
The basic channel access mode is "carrier sense, multiple access/collision avoidance" (CSMA/CA). That is, the nodes talk in the same way that people converse; they briefly check to see that no one is talking before they start. There are three notable exceptions to the use of CSMA. Beacons are sent on a fixed timing schedule, and do not use CSMA. Message acknowledgments also do not use CSMA. Finally, devices in beacon-enabled networks that have low-latency real-time requirements may also use Guaranteed Time Slots (GTS), which by definition do not use CSMA.
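The "check before talking" behavior can be sketched as the unslotted CSMA/CA algorithm, shown here with the IEEE 802.15.4 default MAC parameters (macMinBE=3, macMaxBE=5, macMaxCSMABackoffs=4); `channel_clear` stands in for a real clear-channel assessment (CCA):

```python
import random

UNIT_BACKOFF_SYMBOLS = 20  # one backoff period, measured in symbol times

def csma_ca_unslotted(channel_clear, min_be=3, max_be=5, max_backoffs=4):
    """Return True if channel access succeeds, False on failure."""
    be = min_be
    for _ in range(max_backoffs + 1):
        # Back off a random number of unit periods in [0, 2**be - 1];
        # a real MAC would pause here for `delay` symbol times.
        delay = random.randrange(2 ** be) * UNIT_BACKOFF_SYMBOLS
        if channel_clear():          # CCA found the channel idle: transmit
            return True
        be = min(be + 1, max_be)     # busy: widen the backoff window, retry
    return False                     # channel access failure

print(csma_ca_unslotted(lambda: True))   # True  (idle channel)
print(csma_ca_unslotted(lambda: False))  # False (always-busy channel)
```

The widening backoff window spreads contending nodes out in time, which is what makes collisions unlikely without any central scheduler.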

Software and hardware

The software is designed to be easy to develop on small, inexpensive microprocessors. The radio design used by ZigBee has been carefully optimized for low cost in large scale production. It has few analog stages and uses digital circuits wherever possible.
Even though the radios themselves are inexpensive, the ZigBee Qualification Process involves a full validation of the requirements of the physical layer. This level of attention to the physical layer has multiple benefits: all radios derived from the same semiconductor mask set enjoy the same RF characteristics, whereas an uncertified physical layer that malfunctions could cripple the battery lifespan of other devices on a ZigBee network. Where other protocols can mask poor sensitivity or other esoteric problems in a fade compensation response, ZigBee radios have very tight engineering constraints: they are both power and bandwidth constrained. Thus, radios are tested to the ISO 17025 standard, with guidance given by Clause 6 of the 802.15.4-2006 standard. Most vendors plan to integrate the radio and microcontroller onto a single chip.


A white paper published by a European manufacturing group (associated with the development of a competing standard, Z-Wave) claims that wireless technologies such as ZigBee, which operate in the 2.4 GHz RF band, are subject to significant interference, enough to make them unusable.[6] It attributes this to the presence of other wireless technologies, such as wireless LAN, in the same RF band. The ZigBee Alliance released a white paper refuting these claims.[7] After a technical analysis, that paper concludes that ZigBee devices continue to communicate effectively and robustly even in the presence of large amounts of interference.