Your Smartphone Will Replace Your Car Keys by 2015


Your smartphone has the potential to replace nearly everything else in your pockets, so why not your car keys? 
Hyundai is working to do just that, with an embedded NFC tag that allows you to open your car, start the engine and link up to the touchscreen with a simple swipe.
Because the system can recognize different smartphones, it can tailor the in-car experience to each driver, recalling his or her seat, mirror and infotainment settings.
Once the phone is in the console, it links up with the 7-inch touchscreen mounted in the dash, and Hyundai is employing the Car Connectivity Consortium’s MirrorLink standard to automatically import contacts, navigation destinations, streaming audio and apps.
Although the Car Connectivity Consortium has forged dozens of automaker partnerships, MirrorLink hasn't caught on with many manufacturers yet. That's mainly due to concerns about driver distraction and about how certain apps would be ported to the integrated screen, since the user interface must be modified to suit a more driver-focused experience. But deployments like Hyundai's suggest MirrorLink is finally gaining momentum.
Hyundai and its connectivity partners at Broadcom are working to bring this NFC- and MirrorLink-driven technology to market in the next generation of products, with the automaker saying it expects to have many of these systems in place by 2015.

Google will start charging for Google Apps



Google is ending availability of a free version of its Google Apps online application suite for small businesses, saying it wants to provide a stronger and more uniform experience to users.
The Internet giant said Thursday in a blog post that even small businesses with ten or fewer users, a group that until now could use the platform for free, will have to pay. All businesses will now be charged US$50 per user, per year, for the service.
Google Apps will remain free for individual users, as well as existing business customers that currently use the free version.
A blog post Thursday by Clay Bavor, director of product management for Google Apps, said the service is used by "millions of businesses." 
Since launching a paid version of the product in 2007, Google has gradually scaled down the size of business that qualifies for free use. In 2009, the limit was set at 50 users; in 2011 it was lowered to ten. Now the free tier for new business customers is being eliminated entirely.

Secret Gmail Feature...Must try it out today

There's a Gmail feature everyone should seriously be using.
Copying an entire chain of messages below your reply makes little sense when people can simply scroll down to see the whole thread, chained one message after another. What makes sense is quoting only the snippet you are actually replying to. Here's how:


1. Select the text you are replying to in Gmail.

2. Hit the reply button.

3. Boom! Only the selected text will be quoted. Reply at will.

This feature is common in desktop mail programs, but most people don't know Gmail has it too.
So start using it, please. Pass it around and enjoy the love of your correspondents, who will be forever grateful for your neat replies.

Google Glass & Microsoft's Glasses





Google is working on computerized glasses. They're called Google Glass.
It turns out Microsoft is working on something similar.
There's a big difference between what Microsoft is working on and Google Glass.
The most recent word out of Google is that Google Glass isn't going to use "augmented reality" – where data and illustrations overlay the actual world around you.
Google Glass is actually just a tiny screen you have to look up and to the left to see. 

Microsoft's glasses seem to utilize augmented reality. In the patent illustration embedded below, you can see that the glasses put data on top of a live-action concert and a ballgame. What Microsoft is working on could end up replacing the smartphone as the dominant way people access the Internet and connect to each other.

Both gadget concepts are very interesting. 

Computers have been getting smaller and closer to our faces since their very beginning.
First they were in big rooms, then they sat on desktops, then they sat on our laps, and now they're in our palms. Next they'll be on our faces.
(Eventually they'll be in our brains.)
If Microsoft and Google are working on computerized glasses, Apple probably is too.
Here's the patent illustration from Microsoft:
[Image: Microsoft patent illustration]

And here's what Google Glass looks like:

[Image: DVF Google Glasses]



Comparing Web-Based Word Processors


The next major jump in computing is cloud computing, in which users access web-based applications via their web browsers. Today's most popular web-based application suite is Google Docs, but there are many alternatives, which are compared below. Google Docs is a web-based office suite composed of three key components: the Google Docs word processing program, the Google Spreadsheets spreadsheet program, and the Google Presentations presentation program. Anyone can access Google Docs using a web browser on any computer with an Internet connection, and it's totally free.
Google Docs is the most popular web-based word processor available. It works well and is easy to use. Basic formatting is very simple, storage space for documents is generous, and sharing/collaboration version control is a snap. But Google Docs doesn't include all the functionality you find in Microsoft Word. It lacks sophisticated page formatting (no two-column documents, for example), mail merge, macros, and the like, some of which can be found in competing web-based word processing programs. If you're a Microsoft Word power user, Google Docs may disappoint.
Zoho Writer easily moves ahead of Google Docs in the web-based word processor race. You get all the standard editing and formatting features, as well as page numbering, headers and footers, footnotes and endnotes, table of contents, and other advanced features not found in many other web-based word processors. Zoho Writer also offers robust sharing and collaboration features, just as you find with Google Docs.
ThinkFree Write is a Java-based online word processor whose interface is similar to Microsoft Word 2003's. Each new document opens in its own window, with a Word-style pull-down menu and toolbar. The editing and formatting functions are also quite Word-like, complete with styles, editing marks, fields, an autocorrect function, and so forth.

Peepel WebWriter is part of a multi-application web-based office suite. The Peepel interface is a little unusual: The document you're editing appears in its own window, on top of the larger home window that holds the toolbar and tabs that you use to edit and format the document. Peepel offers some interesting features, like the ability to edit documents offline if you don't have an Internet connection.


iNetWord is a web-based word processor application. iNetWord features a tabbed interface, with each open document appearing on its own tab. There is support for page backgrounds, borders, page numbering, tables, images, and so on. It even includes predesigned templates for common documents.

Glide Write  is part of the Glide Business suite of web-based applications. Glide Write is an elegant word processor that integrates seamlessly with other Glide applications, including email and chat. In addition, Glide documents can be viewed on a number of smart phones, including the iPhone, T-Mobile Sidekick, and a handful of Treo and BlackBerry models.

Docly is an interesting application, designed especially for professional writers. Docly offers a minimalist interface approach to editing and formatting. Its focus is on copyright management, including the ability to assign a document a Creative Commons license or a traditional "all rights reserved" license. This means that not only can you share and publish your Docly documents, you can also offer them for sale.

Buzzword is Adobe's entry into the web-based word processor marketplace. Unlike Google Docs, Buzzword runs in Flash, which might be problematic for users with older PCs or those with slow Internet connections. Flash implementation gives Buzzword a snazzy interface and some advanced editing and formatting features.

Buzzword's interface is head-and-shoulders above that of Google Docs. Buzzword provides full text and paragraph formatting, headers and footers, page numbering, endnotes, and keyboard shortcuts, none of which are currently available with Google Docs. You also get a running word count, inline spell-checking as you type, the ability to insert comments, and a history of revisions made to a file.




C# - Inheritance


Inheritance is one of the main features of an object-oriented language. C# and the .NET class library are heavily based on inheritance. The telltale sign of this is that all .NET common library classes are derived from the object class, discussed at the beginning of this article. As pointed out, C# doesn't support multiple inheritance; it supports single inheritance only, and all C# objects are implicitly derived, directly or indirectly, from the object class.
Implementing inheritance in C# is similar to implementing it in C++. You use a colon (:) in the definition of a class to derive it from another class.
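As a minimal sketch of that syntax (the Vehicle and Car class names are illustrative, not from the article):

```csharp
using System;

// Base class
class Vehicle
{
    public void Start()
    {
        Console.WriteLine("Vehicle started");
    }
}

// The colon (:) derives Car from Vehicle
class Car : Vehicle
{
}

class Program
{
    static void Main()
    {
        Car car = new Car();
        car.Start(); // Start() is inherited from Vehicle
    }
}
```

Because every class ultimately derives from object, members such as car.ToString() and car.GetType() are also available with no extra code.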
 

C# - Events & Indexers


In C#, events are a special type of delegate member. An event member of a class provides notifications, typically of user or machine input. A class defines an event by providing an event declaration, which is of a delegate type.
The following line shows the definition of an event handler:

public delegate void EventHandler(object sender, System.EventArgs e);

The EventHandler takes two arguments: one of type object and the other of type System.EventArgs. A class declares an event member using the event keyword.
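A minimal sketch of declaring and raising such an event (the Button class and its members are hypothetical, invented for illustration):

```csharp
using System;

class Button
{
    // The event keyword declares an event member of the delegate type EventHandler
    public event EventHandler Clicked;

    public void SimulateClick()
    {
        // Raise the event only if at least one handler has subscribed
        if (Clicked != null)
            Clicked(this, EventArgs.Empty);
    }
}

class Program
{
    static void OnClicked(object sender, EventArgs e)
    {
        Console.WriteLine("Button was clicked");
    }

    static void Main()
    {
        Button button = new Button();
        button.Clicked += new EventHandler(OnClicked); // subscribe
        button.SimulateClick();                        // prints "Button was clicked"
    }
}
```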


C# - Classes


You saw a class structure in the “Hello, C# World!” sample. In C#, you define a class by using the class keyword, just as you do in C++. The class keyword is followed by the class name and curly brackets ({. . .}), as shown here:
class Hello
{
    static void Main()
    {
        Console.WriteLine("Hello, C# World!");
    }
}
Note: Unlike C++ classes, C# classes don't end with a semicolon (;).
Once a class is defined, you can add class members to it. Class members can include constants, fields, methods, properties, indexers, events, operators, instance constructors, static constructors, destructors, and nested type declarations. Each class member has an associated accessibility, which controls the scope of the member and defines whether these members are accessible outside the class. 
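To illustrate a few of those member kinds and their accessibility (the Counter class is a made-up example):

```csharp
using System;

class Counter
{
    private int count;            // field: private, so invisible outside the class

    public int Count              // property: public read-only access to the field
    {
        get { return count; }
    }

    public void Increment()       // method
    {
        count++;
    }
}

class Program
{
    static void Main()
    {
        Counter c = new Counter();
        c.Increment();
        Console.WriteLine(c.Count); // prints 1
    }
}
```

Here private hides the raw field while the public property and method define the scope through which other code may use the class.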

C# - Control Statements


Control flow and program logic are among the most important parts of a programming language's dynamic behavior. In this section, I'll cover control flow in C#. Most of the conditional and looping statements in C# come from C and C++. Those who are familiar with Java will recognize most of them as well.

The if . . .else Statement 
The if . . .else statement is inherited from C and C++. The if . . .else statement is also known as a conditional statement. For example:
if (condition)
    statement;
else
    statement;

The if statement or statement block is executed when the condition is true; if it's false, control goes to the else statement or statement block. You can nest if . . . else statements, with one or more else blocks.
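A small, self-contained example of the pattern described above:

```csharp
using System;

class Program
{
    static void Main()
    {
        int number = 7;

        if (number % 2 == 0)
            Console.WriteLine("even");
        else
            Console.WriteLine("odd"); // prints "odd", since the condition is false
    }
}
```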

C# - Expressions and Operators

An expression is a sequence of operators and operands that specify some sort of computation. The operators indicate an operation to be applied to one or two operands. For example, the operators + and - indicate adding and subtracting operands.


The listing below is a simple example of the relationship between operators and operands:

using System;
class Test
{
    static void Main()
    {
        Console.WriteLine(3 + 4 - 2); // the + and - operators applied to integer operands
    }
}

C# - Attributes & Variables

Attributes enable the programmer to give certain declarative information to the elements in their class. These elements include the class itself, the methods, the fields, and the properties. You can choose to use some of the useful built-in attributes provided with the .NET platform, or you can create your own. Attributes are specified in square brackets ( [. . .] ) before the class element upon which they’re implemented. 
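For example, the built-in Obsolete attribute attached to a method (the class and method names here are illustrative):

```csharp
using System;

class Reports
{
    // The attribute goes in square brackets before the element it applies to;
    // the compiler then warns whenever OldPrint is called.
    [Obsolete("Use Print instead")]
    public static void OldPrint()
    {
    }

    public static void Print()
    {
        Console.WriteLine("printing");
    }
}
```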

Variables
A variable represents a storage location. Each variable has a type that determines what values can be stored in it. A variable must be definitely assigned before its value can be obtained. In C#, you declare a variable in this format: type name;
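A sketch of a few declarations in that style (the variable names are arbitrary):

```csharp
using System;

class Program
{
    static void Main()
    {
        int count;            // declaration: type first, then the variable name
        count = 10;           // must be definitely assigned before it is read
        string name = "C#";   // declaration combined with an initializer
        Console.WriteLine(name + " " + count);
    }
}
```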


C# - Types


As mentioned earlier in the article, C# supports value types and reference types. Value types include simple data types such as int, char, and bool. Reference types include object, class, interface, and delegate. A value type contains the actual value of the object; that is, the actual data is stored in a variable of a value type, whereas a reference-type variable contains a reference to the actual data.

Value Types
Value types are declared by using their default constructors. The default constructor of one of these types returns a zero-initialized instance of the variable. Value types can be further categorized into many subcategories, described in the following sections.
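A short sketch of the behavioral difference (the Box class is hypothetical, invented for illustration):

```csharp
using System;

class Box { public int Value; } // a reference type

class Program
{
    static void Main()
    {
        int a = 1;
        int b = a;              // value type: b receives a copy of the data
        b = 2;
        Console.WriteLine(a);   // prints 1; a is unaffected

        Box x = new Box();
        x.Value = 1;
        Box y = x;              // reference type: y refers to the same object
        y.Value = 2;
        Console.WriteLine(x.Value); // prints 2; both variables reference one object
    }
}
```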

Simple Types
Simple types include basic data types such as int, char, and bool. Each of these types has a reserved keyword corresponding to a CLS type defined in the System namespace.

C# Components

Namespace and Assemblies

The first line of the “Hello, C# World!” program was this:

using System;

This line adds a reference to the System namespace to the program. After adding a reference to a namespace, you can access any member of that namespace. As the .NET library reference documentation shows, each class belongs to a namespace. But what exactly is a namespace?

To organize .NET classes into categories so they'd be easy to recognize, Microsoft used the C++ class-packaging concept known as namespaces. A namespace is simply a grouping of related classes. The root of all namespaces is the System namespace. Throughout the .NET library, each class is defined within a group of similar classes. For example, the System.Data namespace contains only data-related classes, and System.Threading contains only threading classes.
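To sketch how a namespace groups classes (MyCompany.Utilities and Greeter are invented names):

```csharp
using System;

namespace MyCompany.Utilities
{
    class Greeter
    {
        public static void SayHello()
        {
            Console.WriteLine("Hello from a namespace");
        }
    }
}

class Program
{
    static void Main()
    {
        // Use the fully qualified name, or add "using MyCompany.Utilities;" at the top
        MyCompany.Utilities.Greeter.SayHello();
    }
}
```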

C# Editors and IDEs & Your First Program

Before starting your first C# application, you should take a look at the C# editors available for creating applications. Visual Studio .NET (VS.NET) Integrated Development Environment (IDE) is currently the best tool for developing C# applications. Installing VS .NET also installs the C# command-line compiler that comes with the .NET Software Development Kit (SDK).
If you don’t have VS.NET, you can install the C# command-line compiler by installing the .NET SDK. After installing the .NET SDK, you can use any C# editor. Visual Studio 2005 Express is a lighter version of Visual Studio that is free to download, and you can also download Visual C# 2005 Express for free. To download these Express versions, go to the MSDN website, select the Downloads tab, and then select the Visual Studio-related link.
Tip: There are many C# editors available; some are even free. Many of the editors that use the C# command-line compiler are provided with the .NET SDK.

C# - Introduction & Features


Microsoft developed C#, a new programming language based on the C and C++ languages. Microsoft describes C# this way: ”C# is a simple, modern, object–oriented, and type-safe programming language derived from C and C++. C# (pronounced ‘C sharp’) is firmly planted in the C and C++ family tree of languages and will immediately be familiar to C and C++ programmers. C# aims to combine the high productivity of Visual Basic and the raw power of C++.”

Anders Hejlsberg, the principal architect of C#, is known for his work with Borland on Turbo Pascal and Delphi (based on object–oriented Pascal). After leaving Borland, Hejlsberg worked at Microsoft on Visual J++.
Some aspects of C# will be familiar to those who have programmed in C, C++, or Java. C# also incorporates the Smalltalk concept that everything is an object.

Microprocessor

A microprocessor is an electronic device capable of manipulating data to produce desired results. The functions of a digital computer are performed using the microprocessor’s arithmetic, logic and control circuitry. It essentially consists of several hundred thousand, or perhaps even billions, of tiny transistors on a single integrated circuit.
Every microprocessor depends on an ‘instruction set’, which is designed to program it to perform specialized functions.

Main parts of microprocessors

Microprocessors consist of several different parts:

1. The arithmetic and logic unit (ALU), which performs arithmetic calculations and logical operations.
2. Registers, in which temporary data is stored.
3. The control unit, which decodes the program instructions fed into the processor.
4. The address, data and control buses, which carry information to and from the various parts of the microprocessor system.

More advanced microprocessor series include an additional component called cache memory, which speeds up memory access and processing.
A crystal oscillator in a computer system provides a clock signal to govern the functioning of the microprocessor, helping it carry out billions of instructions per second.

Types and uses
Microprocessors are popularly classed according to the number of bits that they can manipulate. For instance, a microprocessor with an arithmetic and logic unit which can manipulate data 4-bits wide is referred to as a 4-bit microprocessor. This form of classification does not take into account the number of address bus lines (the channel which sends out addresses of memory locations or ports) or data bus lines (the channel which sends data to/from memory or ports).
Another way of classing microprocessors is as embedded controllers, also referred to as dedicated controllers or microcontrollers. These pre-programmed devices consist of not just a basic microprocessor, but also random-access memory (RAM), read-only memory (ROM) and input/output capabilities all integrated onto one and the same chip. These are used to control ‘smart machines’ such as programmable washing machines and microwave ovens.
One or more microprocessors typically make up a central processing unit (CPU) in a particular application using a computer system. In this way, scientific and business tasks can be effectively handled. Microprocessors are needed for a wide variety of applications from simple calculators to the largest mainframe computers and hand-helds.

History and development
The earliest microprocessors appeared in the 1970s with the development of Large Scale Integration (LSI) in integrated circuit technology, which made it possible to accommodate several thousand transistors, resistors and diodes onto a single silicon chip. With the advent of Very Large Scale Integration (VLSI) in the 1980s it became possible to fit several hundred thousand components onto chips not larger than 5mm square in size.
Some of the earliest microprocessors were Intel’s 4004 and Texas Instruments’ TMS 1000, both 4-bit microprocessors. Later, the 8-bit Intel 8008 was made in 1972. The more advanced 8080 had a larger instruction set than its predecessors. It used NMOS transistors, and was referred to as a second generation microprocessor. Around the same time, Motorola came up with its MC6800, also an 8-bit microprocessor.

Embedded controllers
Microprocessors have evolved in three different directions. The first is that of the embedded controllers. Examples are Intel’s 8051 series and Atmel’s 89C51/2.

Bit-slice processors
A second direction has been that of the bit-slice processors. Bit-slice processors have components that can work in parallel to manipulate 8-bit, 16-bit or 32-bit words. AMD’s 2900 family of processors is an example of this category.

The Internet - How it began.....


The Internet was the result of some visionary thinking by people in the early 1960s who saw great potential value in allowing computers to share information on research and development in scientific and military fields. J.C.R. Licklider of MIT, first proposed a global network of computers in 1962, and moved over to the Defense Advanced Research Projects Agency (DARPA) in late 1962 to head the work to develop it. Leonard Kleinrock of MIT and later UCLA developed the theory of packet switching, which was to form the basis of Internet connections. Lawrence Roberts of MIT connected a Massachusetts computer with a California computer in 1965 over dial-up telephone lines. It showed the feasibility of wide area networking, but also showed that the telephone line's circuit switching was inadequate. Kleinrock's packet switching theory was confirmed. Roberts moved over to DARPA in 1966 and developed his plan for ARPANET. These visionaries and many more left unnamed here are the real founders of the Internet.
The Internet, then known as ARPANET, was brought online in 1969 under a contract let by the renamed Advanced Research Projects Agency (ARPA) which initially connected four major computers at universities in the southwestern US (UCLA, Stanford Research Institute, UCSB, and the University of Utah). The contract was carried out by BBN of Cambridge, MA under Bob Kahn and went online in December 1969. By June 1970, MIT, Harvard, BBN, and Systems Development Corp (SDC) in Santa Monica, Cal. were added. By January 1971, Stanford, MIT's Lincoln Labs, Carnegie-Mellon, and Case-Western Reserve U were added. In months to come, NASA/Ames, Mitre, Burroughs, RAND, and the U of Illinois plugged in. After that, there were far too many to keep listing here.
The Internet was designed in part to provide a communications network that would work even if some of the sites were destroyed by nuclear attack. If the most direct route was not available, routers would direct traffic around the network via alternate routes.
The early Internet was used by computer experts, engineers, scientists, and librarians. There was nothing friendly about it. There were no home or office personal computers in those days, and anyone who used it, whether a computer professional or an engineer or scientist or librarian, had to learn to use a very complex system.
E-mail was adapted for ARPANET by Ray Tomlinson of BBN in 1972. He picked the @ symbol from the available symbols on his teletype to link the username and address. The telnet protocol, enabling logging on to a remote computer, was published as a Request for Comments (RFC) in 1972. RFC's are a means of sharing developmental work throughout community. The ftp protocol, enabling file transfers between Internet sites, was published as an RFC in 1973, and from then on RFC's were available electronically to anyone who had use of the ftp protocol.
The Internet matured in the 70's as a result of the TCP/IP architecture first proposed by Bob Kahn at BBN and further developed by Kahn and Vint Cerf at Stanford and others throughout the 70's. It was adopted by the Defense Department in 1980 replacing the earlier Network Control Protocol (NCP) and universally adopted by 1983.
The Unix to Unix Copy Protocol (UUCP) was invented in 1978 at Bell Labs. Usenet was started in 1979 based on UUCP. Newsgroups, which are discussion groups focusing on a topic, followed, providing a means of exchanging information throughout the world. While Usenet is not considered part of the Internet, since it does not share the use of TCP/IP, it linked Unix systems around the world, and many Internet sites took advantage of the availability of newsgroups. It was a significant part of the community building that took place on the networks.
Similarly, BITNET (Because It's Time Network) connected IBM mainframes around the educational community and the world to provide mail services beginning in 1981. Gateways were developed to connect BITNET with the Internet and allowed exchange of e-mail, particularly for e-mail discussion lists. 
In 1986, the National Science Foundation funded NSFNet as a cross country 56 Kbps backbone for the Internet. They maintained their sponsorship for nearly a decade, setting rules for its non-commercial government and research uses.
As the commands for e-mail, FTP, and telnet were standardized, it became a lot easier for non-technical people to learn to use the nets. It was not easy by today's standards by any means, but it did open up use of the Internet to many more people in universities in particular. Other departments besides the libraries, computer, physics, and engineering departments found ways to make good use of the nets--to communicate with colleagues around the world and to share files and resources.
In 1991, the first really friendly interface to the Internet was developed at the University of Minnesota. The University wanted to develop a simple menu system to access files and information on campus through their local network. A debate followed between mainframe adherents and those who believed in smaller systems with client-server architecture. The mainframe adherents "won" the debate initially, but since the client-server advocates said they could put up a prototype very quickly, they were given the go-ahead to do a demonstration system. The demonstration system was called a gopher after the U of Minnesota mascot--the golden gopher. The gopher proved to be very prolific, and within a few years there were over 10,000 gophers around the world. A gopher took no knowledge of Unix or computer architecture to use: you simply typed or clicked on a number to select the menu item you wanted.
In 1989 another significant event took place in making the nets easier to use. Tim Berners-Lee and others at the European Laboratory for Particle Physics, more popularly known as CERN, proposed a new protocol for information distribution. This protocol, which became the World Wide Web in 1991, was based on hypertext--a system of embedding links in text to link to other text, which you have been using every time you selected a text link while reading these pages. Although started before gopher, it was slower to develop.

The development in 1993 of the graphical browser Mosaic by Marc Andreessen and his team at the National Center for Supercomputing Applications (NCSA) gave the protocol its big boost. Andreessen later moved on to become the brains behind Netscape Corp., which produced the most successful graphical browser and server until Microsoft declared war and developed its Microsoft Internet Explorer.

Since the Internet was initially funded by the government, it was originally limited to research, education, and government uses. Commercial uses were prohibited unless they directly served the goals of research and education. This policy continued until the early 90's, when independent commercial networks began to grow. It then became possible to route traffic across the country from one commercial site to another without passing through the government funded NSFNet Internet backbone.
Delphi was the first national commercial online service to offer Internet access to its subscribers. It opened up an email connection in July 1992 and full Internet service in November 1992. All pretenses of limitations on commercial use disappeared in May 1995 when the National Science Foundation ended its sponsorship of the Internet backbone, and all traffic relied on commercial networks. AOL, Prodigy, and CompuServe came online. Since commercial usage was so widespread by this time and educational institutions had been paying their own way for some time, the loss of NSF funding had no appreciable effect on costs.
Today, NSF funding has moved beyond supporting the backbone and higher educational institutions to building the K-12 and local public library accesses on the one hand, and the research on the massive high volume connections on the other.
Microsoft's full scale entry into the browser, server, and Internet Service Provider market completed the major shift over to a commercially based Internet. The release of Windows 98 in June 1998 with the Microsoft browser well integrated into the desktop shows Bill Gates' determination to capitalize on the enormous growth of the Internet. Microsoft's success over the past few years has brought court challenges to their dominance. We'll leave it up to you whether you think these battles should be played out in the courts or the marketplace.

A current trend with major implications for the future is the growth of high speed connections. 56K modems and the providers who support them are spreading widely, but this is just a small step compared to what will follow. 56K is not fast enough to carry multimedia, such as sound and video except in low quality. But new technologies many times faster, such as cablemodems, digital subscriber lines (DSL), and satellite broadcast are available in limited locations now, and will become widely available in the next few years. These technologies present problems, not just in the user's connection, but in maintaining high speed data flow reliably from source to the user. Those problems are being worked on, too.
During this period of enormous growth, businesses entering the Internet arena scrambled to find economic models that work. Free services supported by advertising shifted some of the direct costs away from the consumer--temporarily. Services such as Delphi offered free web pages, chat rooms, and message boards for community building. Online sales have grown rapidly for such products as books, music CDs, and computers, but the profit margins are slim when price comparisons are so easy, and public trust in online security is still shaky. Business models that have worked well are portal sites, which try to provide everything for everybody, and live auctions. AOL's acquisition of Time-Warner was the largest merger in history when it took place and shows the enormous growth of Internet business! The stock market has had a rocky ride, swooping up and down as the new technology companies, the dot-coms, encountered good news and bad. The decline in advertising income spelled doom for many dot-coms, and a major shakeout and search for better business models is underway among the survivors.
It is becoming more and more clear that many free services will not survive. While many users still expect a free ride, there are fewer and fewer providers who can find a way to provide it. The value of the Internet and the Web is undeniable, but there is a lot of shaking out to do and management of costs and expectations before it can regain its rapid growth.

Cloud Computing

Cloud computing comes into focus only when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT's existing capabilities.
Cloud computing is at an early stage, with a motley crew of providers large and small delivering a slew of cloud-based services, from full-blown applications to storage services to spam filtering. Yes, utility-style infrastructure providers are part of the mix, but so are SaaS (software as a service) providers such as Salesforce.com. Today, for the most part, IT must plug into cloud-based services individually, but cloud computing aggregators and integrators are already emerging.
Here's a rough breakdown of what cloud computing is all about:

1. SaaS
This type of cloud computing delivers a single application through the browser to thousands of customers using a multitenant architecture. On the customer side, it means no upfront investment in servers or software licensing; on the provider side, with just one app to maintain, costs are low compared to conventional hosting. Salesforce.com is by far the best-known example among enterprise applications, but SaaS is also common for HR apps and has even worked its way up the food chain to ERP, with players such as Workday. And who could have predicted the sudden rise of SaaS "desktop" applications, such as Google Apps and Zoho Office?

2. Utility computing
The idea is not new, but this form of cloud computing is getting new life from Amazon.com, Sun, IBM, and others who now offer storage and virtual servers that IT can access on demand. Early enterprise adopters mainly use utility computing for supplemental, non-mission-critical needs, but one day, they may replace parts of the datacenter. Other providers offer solutions that help IT create virtual datacenters from commodity servers, such as 3Tera's AppLogic and Cohesive Flexible Technologies' Elastic Server on Demand. Liquid Computing's LiquidQ offers similar capabilities, enabling IT to stitch together memory, I/O, storage, and computational capacity as a virtualized resource pool available over the network.

3. Web services in the cloud
Closely related to SaaS, Web service providers offer APIs that enable developers to exploit functionality over the Internet, rather than delivering full-blown applications. They range from providers offering discrete business services -- such as Strike Iron and Xignite -- to the full range of APIs offered by Google Maps, ADP payroll processing, the U.S. Postal Service, Bloomberg, and even conventional credit card processing services.

4. Platform as a service
Another SaaS variation, this form of cloud computing delivers development environments as a service. You build your own applications that run on the provider's infrastructure and are delivered to your users via the Internet from the provider's servers. Like Legos, these services are constrained by the vendor's design and capabilities, so you don't get complete freedom, but you do get predictability and pre-integration. Prime examples include Salesforce.com's Force.com, Coghead and the new Google App Engine. For extremely lightweight development, cloud-based mashup platforms abound, such as Yahoo Pipes or Dapper.net.

5. MSP (managed service providers)
One of the oldest forms of cloud computing, a managed service is basically an application exposed to IT rather than to end-users, such as a virus scanning service for e-mail or an application monitoring service (which Mercury, among others, provides). Managed security services delivered by SecureWorks, IBM, and Verizon fall into this category, as do such cloud-based anti-spam services as Postini, recently acquired by Google. Other offerings include desktop management services, such as those offered by CenterBeam or Everdream.

6. Service commerce platforms
A hybrid of SaaS and MSP, this cloud computing service offers a service hub that users interact with. They're most common in trading environments, such as expense management systems that allow users to order travel or secretarial services from a common platform that then coordinates the service delivery and pricing within the specifications set by the user. Think of it as an automated service bureau. Well-known examples include Rearden Commerce and Ariba.

7. Internet integration
The integration of cloud-based services is in its early days. OpSource, which mainly concerns itself with serving SaaS providers, recently introduced the OpSource Services Bus, which employs in-the-cloud integration technology from a little startup called Boomi. SaaS provider Workday recently acquired another player in this space, CapeClear, an ESB (enterprise service bus) provider that was edging toward b-to-b integration. Way ahead of its time, Grand Central -- which wanted to be a universal "bus in the cloud" to connect SaaS providers and provide integrated solutions to customers -- flamed out in 2005.

Today, with such cloud-based interconnection seldom in evidence, cloud computing might be more accurately described as "sky computing," with many isolated clouds of services which IT customers must plug into individually. On the other hand, as virtualization and SOA permeate the enterprise, the idea of loosely coupled services running on an agile, scalable infrastructure should eventually make every enterprise a node in the cloud. It's a long-running trend with a far-out horizon. But among big metatrends, cloud computing is the hardest one to argue with in the long term.

This article was originally published at InfoWorld.com

Windows 8 - A Single Platform

Microsoft’s new operating system is aimed at working across the three device platforms most people across the world use today—mobile phones, tablets and computers. Visually and functionally, Microsoft’s new offering will be a departure from what it has so far provided in computing.
Microsoft’s thinking is inspired by an obvious fact: in the age of Internet-driven mobility and cloud computing, users expect a consistent experience across all their devices.
Windows 8 sports a new tile-based user interface, which works well on touch surfaces (tablets and touch-screen phones) and should integrate well on computers. Microsoft has put a common kernel in Windows 8 that will be part of the PC, tablet and phone versions.
Windows’ earlier software platform wasn’t compatible with the others; only 10 per cent of applications developed for Apple and Android were available for Windows. The new version is written in a language similar to what both Apple and Google use. This will open up an entire universe of applications for Windows users.

Computer Networks

Networks can be classified into three major types depending upon their geographic spread. They are:
1. LAN (Local Area Network)
2. MAN (Metropolitan Area Network)
3. WAN (Wide Area Network)

LAN
A LAN or a Local Area Network is a network that is restricted to a small physical area. A small LAN might connect a few computers in an office or in a home; a large LAN could extend over an office park or university campus, connecting computers and other devices in a number of buildings. The major benefit of a local area network is that it can help to reduce cost by allowing people and microcomputers to share expensive resources.
There are three types of LANs. They are discussed below:
1. Dedicated Server LANs: Dedicated server LANs account for more than 70 percent of all installed LANs. A dedicated server LAN can connect with almost any other network, can handle very large databases, and has a dedicated network server.
2. Peer-to-Peer LANs: This is a local area network that allows all users access to data on all workstations. In such networks, any computer on the network shares its resources, such as a hard disk and printer, with any other computer on the same network.
3. Zero-slot LANs: This LAN operates like a peer-to-peer LAN but offers only simple abilities such as sharing files and printers, transferring files and transmitting e-mail. It is inexpensive and does not require a network interface card; its adapter plug can be plugged into a serial or parallel port. This network can usually handle up to 30 computers.

MAN
This is larger than a LAN and stands for Metropolitan Area Network. A MAN usually spans a geographical area that encompasses a city or county area. It interconnects various buildings or other facilities within this area. For example, linkages can be established between two commercial buildings. MAN technology has been developing rapidly in the area of cellular phone systems.

WAN
A wide area network (WAN) is one that operates over a vast distance (e.g., nationwide). Its nodes may span cities, states, or national boundaries. This network interconnects computers, LANs, MANs and other data transmission facilities. A WAN will employ communications circuits such as long distance telephone wires, microwaves, and satellites. Nationwide automated teller machines used in banking represent a common application of a wide area network.

Communication Media : Satellite

The problem with microwave communication is line of sight. Because of the curvature of the earth, mountains and other high structures often block the line of sight. So several repeater stations are required for long-distance transmission, which increases the cost of data transmission. This problem is overcome by using satellites.
A communication satellite is an electronic device positioned in an orbit around the earth. It can be thought of as a big microwave repeater in the sky. It contains one or more 'transponders', each of which listens to some portion of the frequency spectrum, amplifies the incoming signal and then rebroadcasts it at another frequency. Different frequencies are used for 'uplinking' and 'downlinking' to avoid any interference between signals. Uplink refers to data flow from the earth to the satellite; here the earth station works as a transmitter and the satellite transponder as a receiver. Downlink refers to data flow from the satellite to the earth; here the satellite works as a transmitter and the earth station as a receiver.

Advantages
There is no line of sight restriction, so transmission and reception is possible between any two randomly chosen places.
Disadvantages
Launching a satellite into an orbit costs a lot.
A signal sent to a satellite is broadcast to all receivers within the satellite's range, so measures are required to prevent unauthorized tampering with the information.

Communication Media : Microwave Link

Microwave radiation is also a popular medium of transmission. It does not require the laying of expensive cables. Microwave links use very high frequency radio waves to transmit data through space, with repeaters placed at intervals of about 25 to 30 km between the transmitting and receiving stations.

Parabolic antennas are mounted on towers to send a beam to another antenna which could be tens of kilometers away, but should be in the line of sight. The higher the tower the greater is the range. Microwave radio transmission is widely used for long-distance communication. It overcomes the problem of weak signals.
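The relation between tower height and range can be made concrete. A minimal sketch, assuming smooth-earth geometry and ignoring atmospheric refraction, of the distance-to-horizon formula d = sqrt(2*R*h), where R is the Earth's radius and h the tower height (the function names are illustrative):

```python
import math

EARTH_RADIUS_KM = 6371.0

def line_of_sight_km(tower_height_m: float) -> float:
    """Distance to the radio horizon for one tower: d = sqrt(2*R*h)."""
    return math.sqrt(2.0 * EARTH_RADIUS_KM * (tower_height_m / 1000.0))

def max_link_km(h1_m: float, h2_m: float) -> float:
    """Two towers can see each other over the sum of their horizons."""
    return line_of_sight_km(h1_m) + line_of_sight_km(h2_m)

# A pair of 50 m towers gives a link of roughly 50 km, which is consistent
# with spacing repeaters about 25 to 30 km apart.
print(round(max_link_km(50, 50), 1))
```

This also explains why "the higher the tower the greater is the range": the horizon distance grows with the square root of the height.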

Advantages
  • Building two towers is cheaper than digging a 100 km trench and laying cables in it.
  • It can permit transmission rates of about 16 gigabits (1 gigabit = 10^9 bits) per second.
Disadvantages
  • Repeaters, if used along the way, are to be maintained regularly.
  • Physical vibration will show up as signal noise.

Communication Media : Radio Frequency Propagation


Data transmission through air (and not through any guided channel) is called unguided transmission. Data is carried over electromagnetic radiation in the form of radio waves. Such propagation is classified by the type of wave used. There are three types of RF (radio frequency) propagation.
  1. Ground wave
  2. Ionospheric
  3. Line of Sight (LOS)
Ground Wave Propagation
Ground wave propagation follows the curvature of the earth. Ground waves have carrier frequencies up to 2 MHz. AM radio is an example of ground wave propagation.
Ionospheric Propagation
Ionospheric propagation bounces off the Earth's ionospheric layer in the upper atmosphere. It is sometimes called double-hop propagation. It operates in the frequency range of 30 - 85 MHz. Because it depends on the Earth's ionosphere, it changes with the weather and the time of day. The signal sent from a radio tower bounces off the ionosphere and comes back to earth at a receiving station.

Line of Sight Propagation
Line of sight propagation transmits exactly in the line of sight: the receiving station must be in view of the transmitting station. It is sometimes called space wave or tropospheric propagation. It is limited by the curvature of the Earth for ground-based stations (about 100 km from horizon to horizon). Reflected waves can cause problems. Examples of line of sight propagation are FM radio and satellite microwave.

Communication Media : Optical Fibre

An optical fibre is a piece of hair-thin glass whose core and surrounding cladding have different refractive indices. Such a fibre is capable of transmitting data at the speed of light with no significant loss of intensity over long distances.
Fibre-optic links are based on the principle of 'total internal reflection'. When an electromagnetic wave travelling in a medium with a high refractive index falls on the boundary of a surrounding medium of lower refractive index, a special phenomenon takes place. Up to a certain angle of incidence, light passes through the boundary and enters the medium that has the lower refractive index. But if the angle of incidence is more than the critical angle, the light is reflected from the boundary and comes back into the first medium. This is called 'total internal reflection'.
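The critical angle follows from Snell's law: sin(theta_c) = n2/n1, where n1 is the refractive index of the denser medium. A small sketch; the index values 1.48 and 1.46 are typical figures for a silica fibre's core and cladding, used here only for illustration:

```python
import math

def critical_angle_deg(n_core: float, n_cladding: float) -> float:
    """Critical angle from Snell's law: sin(theta_c) = n_cladding / n_core.
    Light striking the boundary at more than this angle (measured from the
    normal) is totally internally reflected."""
    return math.degrees(math.asin(n_cladding / n_core))

# Typical silica fibre: core n ~ 1.48, cladding n ~ 1.46.
print(round(critical_angle_deg(1.48, 1.46), 1))
```

Note that the closer the two indices are, the larger the critical angle, so light must graze the boundary to stay inside the fibre.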
An optical transmission system based on fibre optics has three components: the transmission medium, the light source and the detector. The transmission medium is the ultra-thin glass fibre. The light source is either an LED (light emitting diode) or a laser diode, which emits light pulses when electric current is applied. The detector is a photodiode, which generates electric pulses when light falls on it.
By attaching the LED or laser diode to one end of an optical fibre and a photodiode at the other end, we can have a unidirectional data transmission system that accepts an electrical signal, converts and transmits it as light pulses, and then reconverts the output to an electrical signal at the receiving end.

Advantages
It has very high rate of transmission of data.
It has better noise immunity.
It can transmit data over long distances.
Data is transmitted with high security.
The fibres are small in size.

Disadvantages
Limited physical arc of cable. Bend it too much and it will break!
Difficult to splice.


The choice of optical fibre is a trade-off between capacity and cost. At higher transmission capacities it is cheaper than copper; at lower transmission capacities it is more expensive.

Communication Media : Coaxial Cable

Coaxial cables are a group of specially wrapped and insulated wires capable of transmitting data at a very high rate. They consist of a central copper wire (inner conductor) surrounded by PVC (polyvinyl chloride) insulation, over which a sleeve of copper mesh (second conductor) is placed.

The metal sleeve is again shielded by an outer shield of thick PVC material. The signal is transmitted by the inner copper wire and is electrically shielded by the outer metal sleeve. Coaxial cables are extensively used in long-distance telephone lines and in closed-circuit TV.




Advantages
  • They are capable of transmitting digital signals at a very high rate of approximately 10 megabits per second.
  • They have a higher noise immunity.
Disadvantages
  • These are comparatively costly.
  • Such cables can easily be tapped, posing security problems.

Communication Media : Twisted Pair Cable

A twisted pair cable is the oldest and most common medium of transmission. It is generally used in telephone systems. A twisted pair consists of two insulated copper wires, typically 1 mm thick, twisted together just like a DNA molecule. The wires are twisted so as to reduce the electrical interference from and to adjacent copper pairs. (Two conducting wires that run parallel to each other may cause electrical interference.)
When many twisted pairs run in parallel over a long distance, they are bundled together and enclosed in a protective sheath, so as not to interfere with each other. Twisted pair is also used in LANs (Local Area Networks).

Advantages
It can be used for analog as well as digital transmission.
It is one of the cheapest media of transmission and has adequate performance.
Disadvantages
It is more prone to pick up noise signals.
A twisted pair can transmit data only up to a certain distance.

Serial and Parallel Transmission

Serial Transmission
In serial transmission, data is transmitted one bit at a time in a continuous stream along the communication channel. For each direction of data flow, only one wire is used.
This pattern is analogous to the flow of traffic down a one-lane residential street. Most data transmitted over telephone lines uses a serial pattern. Serial transmission is typically slower than parallel transmission, because data is sent sequentially in a bit-by-bit fashion.

Parallel Transmission
In parallel data transmission, several bits of data are transmitted concurrently through separate communication lines. This resembles the flow of traffic on a multi-lane highway. Internal transfer of binary data in a computer uses a parallel mode. If the computer uses a 32-bit internal structure, all 32 bits of data are transferred simultaneously over 32-line connections. Parallel data transmission is commonly used for the interaction between a computer and its printer.
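The two patterns can be sketched in a few lines of Python; the 8-bit width and least-significant-bit-first ordering are illustrative assumptions, not part of any particular standard:

```python
def serial_bits(value: int, width: int = 8):
    """Serial transmission: one bit at a time over a single line (LSB first)."""
    for i in range(width):
        yield (value >> i) & 1

def parallel_bits(value: int, width: int = 8):
    """Parallel transmission: all bits presented at once on `width` lines."""
    return [(value >> i) & 1 for i in range(width)]

# The same byte, sent two ways:
print(list(serial_bits(0b10100001)))   # eight clock ticks on one wire
print(parallel_bits(0b10100001))       # one clock tick on eight wires
```

Both carry identical bits; the difference is whether they arrive over time on one wire or at once on many.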

Digital and Analogue Transmission

Data is propagated from one point to another by means of electrical signals, which may be in digital or analog form. As shown in the figure below, analog signals are continuous in nature. They have continuous amplitude levels. Since they are continuous, it is very difficult to physically remove any noise and distortion added during transmission or otherwise. The telephone lines used for data communication in computer networks are usually meant for carrying analog signals.

A digital signal is a sequence of voltage pulses represented in binary form. These signals are well-defined, with discrete amplitude levels. Computer-generated data is digital in form.

Data Communication

Message and Structured Data
The term data communication describes the transmission of computer related records, which have a structured format, from source to destination over transmission media like telephone line, optical fibre, microwave link, etc.
Other forms of digital transmission, such as digitized voice, facsimile or video, do not use structured data, except in the case of telemetry, which can use fixed data fields and formats.
Each record of a structured data type is identified by a transaction code; the data is organized into predefined fields, each of which contains a specified maximum number of characters.

Bits Streams and Symbols
The characters which make up a message are transmitted one after another as a bit stream, where a bit is the fundamental data unit represented as 1 or 0, mark or space in digital systems.
A symbol is the basic transmission element, which is used to transmit a group of bits.

Data Rate
Data rate is measured in bits per second. When a group of k bits is combined to form a transmission symbol that has a transmit duration T, then
Data Rate = k/T

Baud Rate
Baud rate is the number of symbols transmitted per second over a communication channel. It equals the bit rate only when each symbol carries a single bit: a 4800-baud channel sending one bit per symbol transfers 4,800 bits/second, while the same channel sending 4-bit symbols transfers 19,200 bits/second.
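The two relationships above can be checked with a short sketch (the function names are illustrative):

```python
def data_rate(k: int, t: float) -> float:
    """Data Rate = k / T: k bits per symbol, symbol duration T seconds."""
    return k / t

def bit_rate_from_baud(baud: float, bits_per_symbol: int) -> float:
    """Bit rate = baud (symbols per second) times the bits each symbol carries."""
    return baud * bits_per_symbol

# A symbol lasting 1/4800 s and carrying 1 bit gives a 4800 bit/s channel,
print(data_rate(1, 1 / 4800))
# while the same 4800-baud channel with 4-bit symbols carries 19200 bit/s.
print(bit_rate_from_baud(4800, 4))
```

This is why baud rate and bit rate diverge on modern channels: multi-level modulation packs several bits into each symbol.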

Networking

Networking is the form of computer communication generally used for exchanging data and information. Two or more computers are connected together, or networked, by some type of communication media (wire or cable) to form a data path, so as to exchange program and data files between them.
Networking allows access to shared output devices and data storage connected to the network. Examples of shared devices are printers, plotters and hard disks.

Need
Networking satisfies a broad range of purposes and meets various requirements. Networking has become essential for the following reasons:
1. File sharing
2. Resource sharing
3. Communication between unlike equipment
4. Improved speed and accuracy
5. Low cost
6. Instant availability of data
File Sharing
Networking enables a user to share files between all the connected computers. For example, the users of an organization with offices at distant places in a city, physically separated but connected on a network, can share their files and data without physically moving to each and every office.
Resource Sharing
Laser printers and hard disks can be expensive. Networking enables users to share such resources by networking several computers together. For example, a company with say 20 users, each requiring only limited hard disk space and occasional printing, might save money by purchasing fewer resources and sharing them.
Communication Between Unlike Equipment
Through networking it is also possible to share resources and send messages between computers built from different brands of equipment and using different operating systems such as DOS, UNIX, MacOS, etc.
Improved Speed and Accuracy
Sending messages through a network is instantaneous compared to an ordinary letter, which takes several days to deliver. Computers can send data at very high speed through satellite or microwave links. There is also less chance of data being lost on a network, so accuracy is maintained.
Low Cost
The cost of transferring data between computers connected on a network is lower than that of other conventional means of transferring documents.
Instant Availability
Since the time taken in transferring data is quite small, data is available almost instantly at the other end. This feature is useful for communicating price fluctuations in foreign exchange, share/equity trading, etc.

Terminology of Boolean Expressions

A Boolean expression is made up of the following terms:

Literals
A literal is a single Boolean variable or its complement.
Constant
A constant is a quantity which has a fixed (unchanging) value. In real-number (conventional) algebra, constants include all integers and fractions. In Boolean algebra, there are only two possible constants, 1 and 0. These two constants are used to represent true and false, YES and NO, etc.
Variable
A variable is a quantity which can change its value by taking on the value of any constant. At any one time the variable has only one particular constant value. There are only two constants in the Boolean system, so a variable in Boolean algebra can only be either 0 or 1. Variables are denoted by letters.
Term
A term is a literal or a collection of literals.
Product Term
A product (logical AND) of several different literals is called a product term. For example:
A.B' is a product term.
Sum Term
The logical OR of literals is called a sum term. For example:
A+B'+C
is an example of a sum term.
Sum of Products
It is the sum (logical OR) of product terms. For example,
A.B' + B.C + A.B.C
is the sum of three different terms, i.e. A.B', B.C and A.B.C.
Product of Sum
It is the product (logical AND) of sum terms. For example:
(A+B).(A+B'+C).(A+C)
is the product of three different sum terms, i.e. (A+B), (A+B'+C) and (A+C).
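Such expressions can be evaluated directly, which is a handy way to check Boolean identities. A minimal sketch using the two example expressions above:

```python
def sop(a: int, b: int, c: int) -> int:
    """Sum of products: A.B' + B.C + A.B.C"""
    return int((a and not b) or (b and c) or (a and b and c))

def pos(a: int, b: int, c: int) -> int:
    """Product of sums: (A+B).(A+B'+C).(A+C)"""
    return int((a or b) and (a or not b or c) and (a or c))

# Tabulate both functions over all input combinations.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", sop(a, b, c), pos(a, b, c))
```

Complementation (') maps to `not`, the product (.) to `and`, and the sum (+) to `or`.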
Minterm
A minterm is an AND (product) term that includes each variable exactly once, in either its normal or its complemented form. A sum of minterms is also known as the standard sum of products. For example:
With two variables, A and B, there are eight possible product terms:
A, B, A', B', A'B', A'B, AB' and AB,
of which the four that contain both variables (A'B', A'B, AB' and AB) are minterms. Similarly, for three variables there are 26 possible terms.
Maxterm
A maxterm is a logical OR function that includes each variable once in its normal or complemented form. This is known as standard product of sums.
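The counts quoted above (8 terms for two variables, 26 for three) come from each variable being absent, normal, or complemented in a term, giving 3^n - 1 non-empty combinations. A sketch that enumerates the terms and picks out the minterms:

```python
from itertools import product

def all_terms(variables):
    """Every product term: each variable is absent, normal, or complemented
    (3**n combinations), minus the empty term."""
    terms = []
    for choice in product(("", "normal", "complemented"), repeat=len(variables)):
        term = [v if c == "normal" else v + "'" for v, c in zip(variables, choice) if c]
        if term:
            terms.append(".".join(term))
    return terms

def minterms(variables):
    """Minterms: the terms that use every variable exactly once."""
    return [t for t in all_terms(variables) if all(v in t for v in variables)]

print(len(all_terms("AB")))    # 8, as in the text
print(len(all_terms("ABC")))   # 26
print(len(minterms("AB")))     # 4 minterms: A'.B', A'.B, A.B' and A.B
```

The same enumeration with OR in place of AND yields the maxterms.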

Basic Logic Gates

A logic gate is a simple electronic circuit which operates on one or more input signals to produce a standard output signal. Logic gates are the basic building blocks of all the electronic circuits in a computer: all circuits in a computer are made from combinations of logic gates, and every operation within a computer is carried out by signals passing through them. A few gates are discussed below:

There are seven basic logic gates: AND, OR, XOR, NOT, NAND, NOR, and XNOR.

The basic operations are described below with the aid of truth tables.

AND gate

A B | A.B
0 0 |  0
0 1 |  0
1 0 |  0
1 1 |  1

The AND gate is an electronic circuit that gives a high output (1) only if all its inputs are high. A dot (.) is used to show the AND operation, i.e. A.B; bear in mind that this dot is sometimes omitted, i.e. AB.
 
OR gate

A B | A+B
0 0 |  0
0 1 |  1
1 0 |  1
1 1 |  1

The OR gate is an electronic circuit that gives a high output (1) if one or more of its inputs are high.  A plus (+) is used to show the OR operation.

NOT gate

A | A'
0 | 1
1 | 0

The NOT gate is an electronic circuit that produces an inverted version of the input at its output. It is also known as an inverter. If the input variable is A, the inverted output is known as NOT A. This is also shown as A', or A with a bar over the top. The diagrams below show two ways that the NAND logic gate can be configured to produce a NOT gate; it can also be done using NOR logic gates in the same way.

NAND gate

A B | (A.B)'
0 0 |   1
0 1 |   1
1 0 |   1
1 1 |   0

This is a NOT-AND gate, which is equal to an AND gate followed by a NOT gate. The output of a NAND gate is high if any of its inputs are low. The symbol is an AND gate with a small circle on the output; the small circle represents inversion.

NOR gate

A B | (A+B)'
0 0 |   1
0 1 |   0
1 0 |   0
1 1 |   0

This is a NOT-OR gate, which is equal to an OR gate followed by a NOT gate. The output of a NOR gate is low if any of its inputs are high. The symbol is an OR gate with a small circle on the output; the small circle represents inversion.

EXOR gate

A B | A⊕B
0 0 |  0
0 1 |  1
1 0 |  1
1 1 |  0

The 'Exclusive-OR' gate is a circuit which will give a high output if either, but not both, of its two inputs is high. An encircled plus sign (⊕) is used to show the EXOR operation.

EXNOR gate

A B | (A⊕B)'
0 0 |    1
0 1 |    0
1 0 |    0
1 1 |    1

The 'Exclusive-NOR' gate circuit does the opposite of the EXOR gate. It will give a low output if either, but not both, of its two inputs is high. The symbol is an EXOR gate with a small circle on the output; the small circle represents inversion.


The NAND and NOR gates are called universal gates, since either one alone can be used to generate the AND, OR and NOT functions.

Note:

A function in sum of products form can be implemented using NAND gates by replacing all AND and OR gates by NAND gates.
A function in product of sums form can be implemented using NOR gates by replacing all AND and OR gates by NOR gates.
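The universality claim is easy to verify in code. A minimal sketch building NOT, AND and OR purely out of a two-input NAND gate and checking them against Python's own Boolean operators:

```python
def nand(a: int, b: int) -> int:
    """Two-input NAND: low only when both inputs are high."""
    return int(not (a and b))

# NOT, AND and OR built purely from NAND gates:
def not_(a):    return nand(a, a)                    # tie both inputs together
def and_(a, b): return nand(nand(a, b), nand(a, b))  # NAND followed by NOT
def or_(a, b):  return nand(nand(a, a), nand(b, b))  # De Morgan: (A'.B')' = A+B

# Exhaustively check all input combinations.
for a in (0, 1):
    assert not_(a) == int(not a)
    for b in (0, 1):
        assert and_(a, b) == int(a and b)
        assert or_(a, b) == int(a or b)
print("NAND is universal: NOT, AND and OR all reproduced")
```

The same construction works with NOR by exchanging the roles of AND and OR, which is exactly why NAND-only and NOR-only implementations of sum-of-products and product-of-sums functions are possible.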