Processor Management

Modern computers can do several things at the same time. This is achieved by processor management, which involves two major issues:
   • Ensuring that each process and application receives enough of the processor's time to function properly
   • Using as many processor cycles as possible for real work

The basic unit of software that the operating system deals with in scheduling the work done by the processor is either a process or a thread, depending on the operating system.
The application you see (word processor, spreadsheet or game) is, indeed, a process, but that application may cause several other processes to begin, for tasks like communications with other devices or other computers. There are also numerous processes that run without giving you direct evidence that they ever exist. For example, Windows XP and UNIX can have dozens of background processes running to handle the network, memory management, disk management, virus checks and so on.
A process, then, is software that performs some action and can be controlled -- by a user, by other applications or by the operating system.
It is processes, rather than applications, that the operating system controls and schedules for execution by the CPU. In a single-tasking system, the schedule is straightforward. The operating system allows the application to begin running, suspending the execution only long enough to deal with interrupts and user input.
Interrupts are special signals sent by hardware or software to the CPU. It's as if some part of the computer suddenly raised its hand to ask for the CPU's attention in a lively meeting. Sometimes the operating system will schedule the priority of processes so that interrupts are masked -- that is, the operating system will ignore the interrupts from some sources so that a particular job can be finished as quickly as possible. There are some interrupts (such as those from error conditions or problems with memory) that are so important that they can't be ignored. These non-maskable interrupts (NMIs) must be dealt with immediately, regardless of the other tasks at hand.
While interrupts add some complication to the execution of processes in a single-tasking system, the job of the operating system becomes much more complicated in a multi-tasking system. Now, the operating system must arrange the execution of applications so that you believe that there are several things happening at once. This is complicated because the CPU can only do one thing at a time. Today's multi-core processors and multi-processor machines can handle more work, but each processor core is still capable of managing one task at a time.
To give the appearance of many things happening at once, the operating system switches between processes thousands of times each second. The switch works roughly as follows (a code sketch appears after the list):
1. A process occupies a certain amount of RAM. It also makes use of registers, stacks and queues within the CPU and operating-system memory space.
2. When two processes are multi-tasking, the operating system allots a certain number of CPU execution cycles to one program.
3. After that number of cycles, the operating system makes copies of all the registers, stacks and queues used by the process, and notes the point at which it paused in its execution.
4. It then loads all the registers, stacks and queues used by the second process and allows it a certain number of CPU cycles.
5. When those are complete, it makes copies of all the registers, stacks and queues used by the second program, and loads the first program.
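A minimal sketch of this round-robin switching in Python; the "saved context" here is a hypothetical dictionary standing in for the real registers, stacks and queues, and the process names and time slice are invented for the example:

    from collections import deque

    # Each "process" is reduced to its saved context: a name and a resume point.
    # A real operating system would save the actual CPU registers and stack here.
    ready_queue = deque([
        {"name": "word_processor", "resume_at": 0},
        {"name": "spreadsheet",    "resume_at": 0},
    ])

    TIME_SLICE = 3  # CPU cycles granted before the process is switched out

    def run(process, cycles):
        # Pretend to execute: advance the saved resume point.
        process["resume_at"] += cycles
        print(f"{process['name']} paused at instruction {process['resume_at']}")

    for _ in range(4):                    # four scheduling rounds
        current = ready_queue.popleft()   # load the saved context of the next process
        run(current, TIME_SLICE)          # let it use the CPU for its time slice
        ready_queue.append(current)       # save its context; it waits for its next turn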

Operating System as a Resource Manager

The operating system of a computer works as an interface between the user and the hardware devices. Application software is designed to achieve specific tasks, whereas the hardware helps in achieving them. The computer system provides all this with the use of available resources: the hard disk, primary memory and the processor can all be thought of as resources.
The operating system provides an ordered and controlled allocation of the processor, memory and I/O devices among the various programs.
Take, for example, a computer system with a printer and two application programs running. If both programs wish to print a document at the same time, will both documents be printed, or will there be contention? Here the operating system comes into play: it queues the outputs to the printer. In short, it is the task of the operating system to manage a computer's resources. Some of the resources that the operating system manages are:
  1. Processor
  2. Memory
  3. Devices
  4. Information
To explain simply:

  • It manages the hardware and software resources of the system. In a desktop computer, these resources include such things as the processor, memory, disk space and more (On a cell phone, they include the keypad, the screen, the address book, the phone dialer, the battery and the network connection).
  • It provides a stable, consistent way for applications to deal with the hardware without having to know all the details of the hardware.
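To make the printer example above concrete, here is a minimal sketch in Python of a spool queue that serializes two programs' print jobs; the job names and timings are invented for the illustration:

    import queue
    import threading
    import time

    print_queue = queue.Queue()   # the operating system's spool queue

    def printer_daemon():
        # A single consumer: jobs leave the queue one at a time, in arrival
        # order, so the two programs never fight over the physical printer.
        while True:
            job = print_queue.get()
            print("printing:", job)
            time.sleep(0.1)       # stand-in for the slow physical device
            print_queue.task_done()

    threading.Thread(target=printer_daemon, daemon=True).start()

    # Two applications "print" at the same time; neither talks to the printer directly.
    print_queue.put("report.doc from program A")
    print_queue.put("invoice.xls from program B")
    print_queue.join()            # wait until both jobs have been printed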

Types of Operating Systems
There are generally four types, categorized based on the types of computers they control and the sort of applications they support. The categories are:
1. Real-time operating system (RTOS) - Real-time operating systems are used to control machinery, scientific instruments and industrial systems. An RTOS typically has very little user-interface capability, and no end-user utilities, since the system will be a "sealed box" when delivered for use. A very important part of an RTOS is managing the resources of the computer so that a particular operation executes in precisely the same amount of time, every time it occurs.
2. Single-user, single task - As the name implies, this operating system is designed to manage the computer so that one user can effectively do one thing at a time. The Palm OS for Palm handheld computers is a good example of a modern single-user, single-task operating system.
3. Single-user, multi-tasking - This is the type of operating system most people use on their desktop and laptop computers today. Microsoft's Windows and Apple's MacOS platforms are both examples of operating systems that will let a single user have several programs in operation at the same time.
4. Multi-user - A multi-user operating system allows many different users to take advantage of the computer's resources simultaneously. Unix, VMS and mainframe operating systems, such as MVS, are examples of multi-user operating systems.

Computer Software

Electronic devices like memory, input-output devices, and the CPU are made of plastic, silicon, and metal. This is the hardware of a computer. In a computer, it is the software that helps the machine store, process and retrieve information and perform other functions. Software refers to programs which incorporate instructions that make a computer work.
Computer Software can be divided in two categories:
1. System Software
2. Application Software

System Software
System software is computer software designed to operate the computer hardware and to provide a platform for running application software. It manages the operation of the computer itself.
The most basic types of system software are:
i) The computer BIOS and device firmware, which provide basic functionality to operate and control the hardware connected to or built into the computer.
ii) The operating system (prominent examples being Microsoft Windows, Mac OS X and Linux), which allows the parts of a computer to work together by performing tasks like transferring data between memory and disks or rendering output onto a display device. It also provides a platform to run high-level system software and application software.
iii) Utility software, which helps to analyze, configure, optimize and maintain the computer

Application Software
Application software is a subclass of computer software that applies the capabilities of a computer directly to a task that the user wishes to perform.
Application software, also known as an application or an "app", is computer software designed to help the user to perform singular or multiple related specific tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software, or may be published separately. Some users are satisfied with the bundled apps and need never install one.
Application software applies the power of a particular computing platform or system software to a particular purpose. Some apps such as Microsoft Office are available in versions for several different platforms; others have narrower requirements and are thus called, for example, a Geography application for Windows or an Android application for education or Linux gaming.

Machine Language and Assembly Language Programs

A program written in the form of zeros and ones, i.e., in machine-readable form, is known as a machine language program. Machine code is composed of binary numbers. The disadvantage of a machine language is that it is very difficult to write, understand and debug such a program; writing it is also very slow.
Instead of writing a program in zeros and ones, one can easily write a program in meaningful, easily memorable alphanumeric symbols. These symbols are known as mnemonics. Programs written in mnemonics are known as assembly language programs, which are easier and faster to write.
Programs written in languages other than machine language are known as source programs. Such programs have to be converted into machine language to make them understandable by computers. A fully compiled or assembled program that is ready to be loaded into the computer is known as an object program. So the translation of an assembly language program into machine language is a must. The program that translates an assembly language program into a machine language program is known as an assembler.
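As a sketch of what an assembler does, the following Python fragment translates mnemonics into binary machine code. The mnemonics and opcodes are invented for this illustration; they do not belong to any real processor:

    # A made-up instruction set: each mnemonic maps to a 4-bit opcode.
    OPCODES = {
        "LOAD":  "0001",
        "ADD":   "0010",
        "STORE": "0011",
        "HALT":  "1111",
    }

    def assemble(source_lines):
        # Translate mnemonic source (the assembly program) into machine code.
        machine_code = []
        for line in source_lines:
            mnemonic, *operand = line.split()
            operand_bits = format(int(operand[0]), "04b") if operand else "0000"
            machine_code.append(OPCODES[mnemonic] + operand_bits)
        return machine_code

    program = ["LOAD 5", "ADD 3", "STORE 9", "HALT"]
    print(assemble(program))   # ['00010101', '00100011', '00111001', '11110000']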

Architecture of a Hypothetical Computer

A computer has three main components:
1) Central Processing Unit (CPU) or Central Processor
2) Memory
3) I/O Devices

The CPU in turn has three parts:
1) Arithmetic and Logic Unit (ALU)
2) Registers
3) Control Unit (CU)

Arithmetic and Logic Unit (ALU)
The ALU performs the following operations: addition, subtraction, logical AND, logical OR, logical XOR, complement, increment, decrement, left shift and clear.

Registers
This is a small memory unit. Registers are used by the processor for temporary storage and manipulation of data and instructions. A register is a set of flip-flops; a flip-flop is an electronic circuit which, at any point of time, stores either 0 or 1, corresponding to the two states of a switch, ON and OFF.
Registers come in different sizes and capacities: 8-bit, 16-bit, 32-bit, etc. Each register has a specific function in the CPU.

Given below are a few commonly known registers:
Accumulator (AC)
The ALU requires temporary registers or memory locations for all its operations. An accumulator is one of the main registers of the ALU, used to store data and perform arithmetic and logic operations. The results of the operations are stored automatically in this register.
Program Counter (PC)
The PC is used as a memory pointer. It stores the address of the next instruction to be executed. This register is used to sequence the execution of instructions.
Instruction Register (IR)
An IR holds the instruction until it is decoded.
Stack Pointer (SP)
The address of the stack top is held in the stack pointer. A stack is a sequence of memory locations used to save the contents of registers during the execution of a program; the topmost occupied location is known as the stack top.
Given below are some of the registers of a basic computer and their functions:
Symbol   Name                   Function
DR       Data Register          Holds memory operand
AR       Address Register       Holds address for memory
AC       Accumulator            Processor register
IR       Instruction Register   Holds instruction code
PC       Program Counter        Holds address of next instruction
TR       Temporary Register     Holds temporary data
INPR     Input Register         Holds input character
OUTR     Output Register        Holds output character
Control Unit (CU)
This circuit is responsible for the entire gamut of the ALU's functions. It receives instructions from memory and executes them after decoding them. Timing and control signals are generated by this circuit and sent to other circuits for the execution of any program. It also transfers data between memory and the I/O devices.

Let us discuss how the CPU functions while executing a program. A program is a set of instructions stored in a proper sequence in memory.
The CPU has to perform two main steps:
  1) Fetching the next instruction
  2) Executing the instruction
The total time taken for the execution of an instruction is known as the Instruction Cycle (IC). The Fetch Cycle (FC) is the time taken to fetch the machine code of the instruction from memory; it is of fixed duration.
An Instruction Cycle consists of the Fetch Cycle followed by the Execution Cycle. The Execution Cycle is of variable duration, depending upon the instruction to be executed; this time is known as the Machine Cycle.
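A minimal fetch-decode-execute loop in Python for a hypothetical machine with the PC, IR and AC registers described above; the two usable instructions are invented for the sketch:

    # Memory holds (opcode, operand) pairs for a made-up machine.
    memory = [("LOAD", 7), ("ADD", 5), ("HALT", 0)]

    PC = 0   # Program Counter: address of the next instruction
    AC = 0   # Accumulator: holds the running result

    while True:
        IR = memory[PC]        # Fetch Cycle: copy the instruction into the IR
        PC += 1                # the PC now points to the next instruction
        opcode, operand = IR   # decode the instruction
        if opcode == "LOAD":   # Execution Cycle: act on the instruction
            AC = operand
        elif opcode == "ADD":
            AC += operand
        elif opcode == "HALT":
            break

    print("AC =", AC)          # AC = 12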

Optical Disk

The optical disk is a storage medium from which data is read, and to which it is written, by lasers. Optical disks can store up to 6 gigabytes (6 billion bytes), a figure much higher than that for portable magnetic media such as floppies. There are four basic types of optical disks:

CD-ROM (Compact Disc-Read Only Memory)
This is a type of optical disk capable of storing large amounts of data, up to 1 GB, although the most common size is 650 MB. A single CD-ROM can hold as much data as several hundred floppy disks.
The vendor stamps CD-ROMs with data; once stamped, they cannot be erased and filled with new data. To read a CD, you need a CD-ROM player. All CD-ROMs are made to a standard size and format, so one can load any type of CD-ROM into any CD-ROM player. The data on a CD-ROM is permanent and can be read any number of times, but cannot be modified.
WORM (Write Once Read Many)
WORM drives contain disks on which you can record only once; whatever you record remains there permanently. After that, the WORM disk behaves just like a CD-ROM. These drives are ideal for archiving large amounts of data and are frequently used by banks and accounting firms. To write data onto a WORM disk, a laser beam of modest intensity is employed. IBM developed a 20 GB WORM drive for its PS/2 systems; it has a large storage capacity, a longer life and greater reliability, the only drawback being the longer access time.
EO (Erasable Optical)
These are optical disks that can be erased and loaded with new data, just like magnetic disks. Both lasers and electromagnets are used to record information on a cartridge, the surface of which contains tiny embedded magnets.
DVD (Digital Video Disk)
The DVD is a member of the optical disk family. It has the same dimensions as a CD but a significantly higher storage capacity. DVDs can be double-sided, unlike CDs, which are single-sided; DVD drives can also read most CD media. Many types of DVDs are available:
1) DVD-ROM is just like a large CD-ROM, holding data as well as audio and video. It requires a DVD-ROM drive installed in the computer. DVD-ROM drives can play DVD-Video movies.
2) DVD-R is a write-once version for creating masters.
3) DVD-RAM is the rewritable DVD.
4) DVD-RW is also a rewritable version, but is more an extension of DVD-R than a competitor to DVD-RAM.
5) DVD-Video is a read-only DVD disk used for full-length movies.

Secondary Storage Devices

Primary memory is volatile, costly and of limited capacity. A second type of memory, on which data can be stored for a long time and also modified, is needed. This would behave like the audio cassette of a tape recorder, where an already-recorded cassette can be played any number of times. Secondary storage devices can therefore read and write (R/W) data any number of times. Such devices are discussed below:
Magnetic Tape
A magnetic tape is one of the most popular storage media for storing a large volume of data that is to be accessed and processed sequentially. It is a plastic ribbon, usually half an inch wide, coated on one side with iron oxide, which can be magnetized. The tape ribbon itself is stored on reels of 50 to 2,400 feet, or in small cartridges or cassettes. It is similar to the tape used in tape recorders. Below are a few important terms related to magnetic tape.
IBG: There is a gap between two consecutive blocks (groups) of records stored on a magnetic tape. This gap is known as the IBG (Inter-Block Gap).
Blocking factor: The total number of records in a single block is the blocking factor.
IRG: If the system stores only one record per block, the storage is known as unblocked and the IBG is termed the IRG (Inter-Record Gap).
The following formulae can be used to determine tape-processing speeds:
     1. Transfer rate = tape speed x recording density
     2. Length of a block = block size / recording density
     3. Time spent per block of data = block size / transfer rate
     4. Time spent in an IBG = IBG size / tape speed
     5. Maximum number of blocks on a tape = length of tape / (length of block + length of IBG)
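A worked example of these formulae in Python; the tape speed, recording density and block sizes below are hypothetical figures chosen only to show the arithmetic:

    # Hypothetical tape parameters
    tape_speed  = 100          # inches per second
    density     = 1600         # bytes per inch (recording density)
    block_size  = 4000         # bytes per block
    ibg_size    = 0.5          # inches per inter-block gap
    tape_length = 2400 * 12    # a 2,400-foot reel, in inches

    transfer_rate  = tape_speed * density         # formula 1: bytes per second
    block_length   = block_size / density         # formula 2: inches per block
    time_per_block = block_size / transfer_rate   # formula 3: seconds per block
    time_per_ibg   = ibg_size / tape_speed        # formula 4: seconds per gap
    max_blocks     = tape_length / (block_length + ibg_size)   # formula 5

    print(transfer_rate)     # 160000 bytes per second
    print(block_length)      # 2.5 inches
    print(int(max_blocks))   # 9600 blocks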

Limitations of magnetic tapes:
     1. Lack of direct access to records: records must be accessed sequentially.
     2. Environmental problems: the tape unit is susceptible to dust, humidity and high temperatures.
 
Magnetic Disk ( Direct access storage device)
This is also a secondary storage device, providing large storage capacity. A magnetic disk is a circular disk coated on both sides with gamma ferrite. This coating has the property of being magnetized locally, i.e., it allows the recording of data in the form of magnetized spots. Both sides of the disk are coated, for independent recordings.
Data is stored on the disk in a number of concentric circles called tracks. A disk can have 40 to 400 tracks per inch of surface. These tracks begin at the outer edge of the disk and continue towards the centre, and each track has a designated number. Tracks are divided into sectors, and the number of sectors per track varies from computer to computer. A motor rotates the disk at a very fast but constant speed. Data is recorded on, and read from, the tracks of the spinning disk surface in the form of tiny magnetic spots by the disk drive's read/write head.
The tracks can be divided into sectors by a software-controlled formatting operation; this process is known as soft sectoring. If the sectors are permanently marked on the tracks by the manufacturer of the disk, the process is known as hard sectoring. The disks are accordingly known as soft-sectored disks (floppies) and hard-sectored disks (hard disks).
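A short sketch in Python of how track and sector counts determine a disk's capacity; the geometry figures below are assumptions, not those of any particular disk:

    # Hypothetical geometry for a single double-sided platter
    surfaces           = 2      # both sides of the platter are coated
    tracks_per_surface = 400
    sectors_per_track  = 63
    bytes_per_sector   = 512

    capacity = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
    print(capacity)                    # 25804800 bytes
    print(round(capacity / 2**20, 1))  # about 24.6 MB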
Floppy Disks
The floppy disk is a medium used to store valuable data and programs. The floppy disk rotates inside the drive, and the drive's read/write head moves over it to perform read and write operations. Floppy disks are made of Mylar or vinyl plastic with a magnetic coating on one or both sides. These coated plastic disks are permanently encapsulated in a plastic jacket to protect them against dust and scratches. Floppy disks come in two sizes:
      5.25 inches with a storage capacity of 1.2 MB
      3.5 inches with a storage capacity of 1.44 MB
Each floppy has a write-protect notch or slide, which can be moved either to protect the disk against being written to or to enable writing.
Hard Disk
A hard disk is a device used for mass storage of data that can be accessed directly. A hard disk is capable of storing a large quantity of data. It is rigid, stable and hermetically sealed in a dust-free enclosure. A hard disk contains more than one platter.
A platter is a round disk coated with magnetic recording material. Platters are of different sizes in different disks. Data can be read from and written onto both sides of a platter with the help of R/W heads, and a platter can have more than one R/W head. The platter is hard, so the R/W head never touches the surface. All the platters are mounted on a single spindle, which rotates at a very fast pace; the R/W heads move back and forth across the platters to access the data to be read or written.
Winchester disks:
These are the hard disks that IBM first introduced in its personal computers. Before that, hard disks were used only in mainframes and minicomputers, where they were generally known as disk drives. Old disk drives were very large in size.

Primary Memory : RAM / ROM

Memory is the storage place where data and instructions are stored; they can be retrieved from memory whenever required. Every computer comes with a certain amount of physical memory, usually referred to as main memory or RAM. You can think of main memory as an array of cells, each cell holding a single byte of information. This means a computer with 1 MB of memory can hold about 1 million bytes of information.
RAM (Random Access Memory)
It is a read/write (R/W) memory which is volatile: when power is turned off, all its contents are destroyed. This memory can be accessed randomly; that is, any byte of memory can be accessed without touching the preceding bytes. RAM is synonymous with main memory, the memory available to programs. RAM is the most common type of memory found in computers and other devices such as printers. There are two basic types of RAM: Dynamic RAM (DRAM) and Static RAM (SRAM).
DRAM (Dynamic RAM): Dynamic RAM is the more common type. Dynamic RAM needs to be refreshed thousands of times per second. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a memory cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers.
SRAM (Static RAM): Static RAM does not need to be refreshed, which makes it faster, but it is more expensive than dynamic RAM. In static RAM, a bit of data is stored using the state of a flip-flop. This form of RAM is more expensive to produce, but is generally faster and requires less power than DRAM and, in modern computers, is often used as cache memory for the CPU.
ROM (Read Only Memory)
ROM is non-volatile, which means it retains the stored information even when power is turned off. This memory is used to store the programs that boot the computer and perform diagnostics; ROM can therefore be thought of as read-only RAM.
ROM is of four types:
Masked ROM: In this ROM, a bit pattern is permanently recorded by a masking and metallization process, which is an expensive and specialized one. Memory manufacturers are generally equipped to undertake this process.
PROM (Programmable ROM): A PROM is a memory chip onto which data can be written only once. Once a program is written onto a PROM chip, it remains there forever. Unlike RAM, a PROM retains its contents when the computer is turned off. The difference between a PROM and a ROM is that a PROM is manufactured as blank memory and programmed later with a special device called a PROM programmer or PROM burner, whereas a ROM is programmed during the manufacturing process. The process of programming a PROM is sometimes called burning the PROM.
EPROM (Erasable Programmable ROM): An EPROM is a special type of PROM that can be erased by exposing it to ultraviolet light; once erased, it can be reprogrammed.
EEPROM (Electrically Erasable Programmable ROM): An EEPROM is a special type of PROM that can be erased by exposing it to an electrical charge. Like other types of PROM, an EEPROM retains its contents even when the power is turned off; also, like other types of ROM, it is not as fast as RAM. EEPROM is similar to flash memory (sometimes called flash EEPROM). The principal difference is that an EEPROM requires data to be written or erased one byte at a time, whereas flash memory allows data to be written or erased in blocks.
Cache Memory:
The speed of the CPU is extremely high compared to the access time of main memory, and the slowness of main memory inhibits the performance of the CPU. To reduce this mismatch in operating speed, a small memory chip whose access time is close to the processing speed of the CPU is placed between the CPU and the main memory. It is called cache memory. Cache memory is accessed more quickly than conventional RAM. It is used to store programs or data currently being executed, or temporary data frequently used by the CPU.
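The following Python toy illustrates the idea: recently used addresses are answered from a small, fast store instead of "slow" main memory. The four-entry size and the least-recently-used replacement policy are assumptions made for the sketch:

    from collections import OrderedDict

    CACHE_SIZE = 4
    cache = OrderedDict()    # small and fast: holds the most recently used words
    main_memory = {addr: addr * 10 for addr in range(100)}   # large and slow

    def read(addr):
        if addr in cache:                # cache hit: served at CPU speed
            cache.move_to_end(addr)
            return cache[addr], "hit"
        value = main_memory[addr]        # cache miss: go out to slow main memory
        cache[addr] = value
        if len(cache) > CACHE_SIZE:      # evict the least recently used word
            cache.popitem(last=False)
        return value, "miss"

    for addr in [1, 2, 1, 1, 3, 4, 5, 1]:
        _, outcome = read(addr)
        print(addr, outcome)   # repeated addresses are hits after their first access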


Output Device : Plotters

Plotters are used to produce precise, good-quality graphics and drawings. They use ink pens or inkjets, either single-coloured or multicoloured; the pens are driven by a motor. Pen plotters are slow. Drawings can be prepared on paper, vellum or Mylar (polyester film), and colour transparencies can also be prepared with pen plotters. There are different types of pen plotters:
1. Drum plotters
A drum plotter contains a long cylinder and a pen carriage. The drum rotates clockwise or anticlockwise under the control of plotting instructions sent by the computer. The pen is mounted horizontally on the carriage and moves left to right and right to left across the paper, under the computer's control, to produce a drawing. Pens with inks of different colours can be mounted on the carriage to produce multicoloured drawings.
2. Micro-grip Plotter:
This type of plotter does not use a drum. The paper or other medium is held at both edges by pinch wheels, which move the paper back and forth.
3. Flatbed Plotter:
A flatbed plotter consists of a horizontal flat surface on which paper or another medium is fixed. The pen is mounted on a carriage which can move along both the horizontal and vertical axes.
4. Inkjet-Plotter:
These plotters are capable of producing large multicoloured drawings, using inkjets in place of ink pens. The paper is placed on a drum, and inkjets with different coloured inks are mounted on a carriage.

Output Device : Printers

A printer is a peripheral which produces text and/or graphics from documents stored in electronic form, usually on physical print media such as paper or transparencies. Many printers are primarily used as local peripherals, and are attached by a printer cable or, in most newer printers, a USB cable to a computer which serves as a document source.
Some printers, commonly known as network printers, have built-in network interfaces, typically wireless and/or Ethernet based, and can serve as a hard copy device for any user on the network. Individual printers are often designed to support both local and network connected users at the same time.
In addition, a few modern printers can directly interface to electronic media such as memory cards, or to image capture devices such as digital cameras and scanners; some printers are combined with a scanner and/or fax machine in a single unit, and can function as photocopiers. Printers that include non-printing features are sometimes called multifunction printers (MFP), multi-function devices (MFD), or all-in-one (AIO) printers. Most MFPs include printing, scanning, and copying among their many features.
Impact Printers:
These printers use an electromechanical mechanism: a typeface is pressed against an inked ribbon, which marks the paper and prints the characters. Examples: the dot-matrix printer (DMP), and drum and daisy-wheel printers. Impact printers can further be divided into two types:
   a) Line printers, which print one line at a time (e.g., drum and chain printers), and
   b) Character printers, which print one character at a time.
Drum Printer: It has a rapidly rotating cylindrical drum on which characters are embossed; in a standard 132-column printer there are 132 bands, each containing every possible character.
There are 132 hammers, one for each print position, behind the paper and ribbon. Logic circuitry in the printer makes a hammer (which is electromagnetically operated) strike against the drum as the required character passes. The sequence continues in this way until a line is printed on the paper.
Chain Printers: The printing mechanism is slightly different from that of a drum printer. In a drum printer the drum rotates vertically and there is a set of characters for every hammer; in a chain printer, the chain is the unit on which the letters are embossed, and it moves horizontally in a circular motion past the hammers. The mechanism of the hammers striking is much the same. A chain consists of five or six sets of characters. Whenever a character that is required to be printed passes in front of a hammer position, the hammer strikes and the character impression gets printed on the paper. The paper and ribbon both move between the chain and the hammers.
Character Printers:
They print one character at a time. Examples are the dot-matrix printer and the daisy-wheel printer.
Dot-Matrix Printer:
Unlike the drum and chain printers, which have embossed letters with hammers striking against them, dot-matrix printers generally have seven or nine impact pins. These pins fire, or strike, under the control of the printer logic through electromagnetic action. The complete unit of impact pins and coils is called the printer head.
The paper moves up or down depending on the model of the printer. The printer head moves left to right or vice versa, producing a 5x7-matrix character with a 7-pin head and a 7x9-matrix character with a 9-pin head. The speed of dot-matrix printers is quoted in CPS (characters per second). These printers come in widths of 80 and 132 columns.
Daisy-wheel Printers:
This is a character printer with a daisy wheel on which the characters are embossed, and a hammer positioned against the wheel. The paper movement is similar to that of the dot-matrix printer. With only one hammer, a daisy-wheel printer prints one character at a time.

Non-Impact Printers
These printers do not use an electromechanical printing head striking against ribbon and paper. They use thermal, chemical, electrostatic, laser-beam or inkjet technology for printing. Examples are thermal, inkjet and laser printers. These printers are faster than impact printers.

Inkjet Printers:
They print characters by spraying small drops of ink onto paper. A special ink with a high iron content is used. The droplets of ink are charged electrically after being fed through a nozzle, and are then guided to their proper positions on the paper by electrically charged deflection plates. These are high-quality printers and produce better output than impact printers.
Laser Printers
Laser printers use light beams to form images on paper, using toner ink as the medium. A laser beam scans the drum surface; the exposed areas of the drum become electrically charged and attract the toner particles. The toner particles are then transferred to the paper and permanently fixed using heat or pressure. Laser printers are quiet workers and produce very high-quality output, both textual and graphical.
Thermal Printers
These printers use heat to make marks on heat-sensitive paper. The print head contains needles which are pressed against the paper; applying heat to selected pins makes the paper change colour, forming a pattern of dots.

Output Devices : VDU (Visual Display Unit)

The job of an output device is to bring the results of computation to the outside world. Output devices accept data in binary form from the computer, convert the coded data into human-readable form, and display the converted result as output.

All computers are connected to a television-like screen called the monitor. The monitor works together with the graphics card (or video adapter), an expansion card that sends electrical signals to the monitor; the monitor is connected to the video card by a cable. A device driver, working through the operating system, controls the video card so that it sends the right signals to the monitor.
Cathode Ray Tube (CRT)
The first computer monitors used cathode ray tubes (CRTs), which remained the dominant technology until they were replaced by LCD monitors in the 21st century.
Until the early 1980s, these monitors were known as video display terminals and were physically attached to the computer and keyboard. The monitors were monochrome, flickered, and the image quality was poor. In 1981, IBM introduced the Color Graphics Adapter, which could display four colors at a resolution of 320 by 200 pixels. It introduced the Enhanced Graphics Adapter in 1984, which was capable of producing 16 colors at a resolution of 640 by 350.
CRTs remained the standard for computer monitors through the 1990s and into the new millennium, partly because they were cheaper to produce and offered viewing angles close to 180 degrees.
Liquid Crystal Display (LCD) Screens:
LCD screens are flat and thin; an example is the TFT screen. An LCD contains no cathode ray tube and produces a sharp, high-resolution image.
There are multiple technologies that have been used to implement liquid crystal displays (LCDs). Throughout the 1990s, the primary use of LCD technology in computer monitors was in laptops, where the lower power consumption, lighter weight, and smaller physical size of LCDs justified the higher price versus a CRT. Commonly, the same laptop would be offered with an assortment of display options at increasing price points: active or passive monochrome, passive color, or active-matrix color (TFT). As volume and manufacturing capability improved, the monochrome and passive color technologies were dropped from most product lines.
TFT is a variant of liquid crystal display (LCD) which is now the dominant technology used for computer monitors.

Input Device: Mouse

In computing, a mouse is a pointing device that functions by detecting two-dimensional motion relative to its supporting surface. Physically, a mouse consists of an object held under one of the user's hands, with one or more buttons. It sometimes features other elements, such as "wheels", which allow the user to perform various system-dependent operations, or extra buttons or features that can add more control or dimensional input. The mouse's motion typically translates into the motion of a cursor on a display, which allows for fine control of a graphical user interface.

1. Mechanical mouse : It is a hand-held pointing device. A mechanical mouse has a rotating ball on its base and is rolled over a flat surface or mouse pad. The cursor on the screen moves in the direction of the mouse's movement. Two rotating wheels, placed at right angles to each other inside the mouse, detect the direction of movement. Each wheel is connected to a shaft encoder, which emits electrical pulses for every incremental movement of the wheel. The pulses transmitted by the mouse determine the distance moved.
There may be two or three buttons on a mouse. The button on the left is for selecting items on the screen, and the button on the right is normally used for displaying and selecting pop-up menus.
2. Optical Mouse : The optical mouse uses a light beam instead of a rotating ball to detect movement; early models required a specially patterned mouse pad. Optical mice use one or more light-emitting diodes (LEDs) and an imaging array of photodiodes to track movement relative to the underlying surface, rather than internal moving parts as in a mechanical mouse. A laser mouse is an optical mouse that uses coherent (laser) light.
3. Inertial and gyroscopic mouse : Often called "air mice" since they do not require a surface to operate, inertial mice use a tuning fork or other accelerometer to detect rotary movement for every axis supported. The most common models (manufactured by Logitech and Gyration) work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user requires only small wrist rotations to move the cursor, reducing user fatigue or "gorilla arm". Usually cordless, they often have a switch to deactivate the movement circuitry between use, allowing the user freedom of movement without affecting the cursor position.
4. 3D mouse : Also known as bats, flying mice, or wands, these devices generally function through ultrasound and provide at least three degrees of freedom. Probably the best known example would be 3DConnexion/Logitech's SpaceMouse from the early 1990s.
In the late 1990s Kantek introduced the 3D RingMouse. This wireless mouse was worn on a ring around a finger, which enabled the thumb to access three buttons. The mouse was tracked in three dimensions by a base station. Despite a certain appeal, it was finally discontinued because it did not provide sufficient resolution.

Input Device : Scanner

In computing, an image scanner—often abbreviated to just scanner— is a device that optically scans images, printed text, handwriting, or an object, and converts it to a digital image. Common examples found in offices are variations of the desktop (or flatbed) scanner where the document is placed on a glass window for scanning. Hand-held scanners, where the device is moved by hand, have evolved from text scanning "wands" to 3D scanners used for industrial design, reverse engineering, test and measurement, gaming and other applications. Mechanically driven scanners that move the document are typically used for large-format documents, where a flatbed design would be impractical.
Modern scanners typically use a charge-coupled device (CCD) or a Contact Image Sensor (CIS) as the image sensor, whereas older drum scanners use a photomultiplier tube as the image sensor. A rotary scanner, used for high-speed document scanning, is another type of drum scanner, using a CCD array instead of a photomultiplier. Other types of scanners are planetary scanners, which take photographs of books and documents, and 3D scanners, for producing three-dimensional models of objects.
Another category of scanner is digital camera scanners, which are based on the concept of reprographic cameras. Due to increasing resolution and new features such as anti-shake, digital cameras have become an attractive alternative to regular scanners. While still having disadvantages compared to traditional scanners (such as distortion, reflections, shadows, low contrast), digital cameras offer advantages such as speed, portability and gentle digitizing of thick documents without damaging the book spine. New scanning technologies are combining 3D scanners with digital cameras to create full-color, photo-realistic 3D models of objects.
In the biomedical research area, detection devices for DNA microarrays are called scanners as well. These scanners are high-resolution systems (up to 1 µm/pixel), similar to microscopes. The detection is done via CCD or a photomultiplier tube (PMT).
Scanners are input devices capable of recognizing marks or characters. They can enter information directly into the computer without the user keying it in, and they are fast and accurate. The major types of character scanners are OCR (Optical Character Reader), OMR (Optical Mark Reader) and MICR (Magnetic Ink Character Reader).
OCR (Optical Character Reader)
These scanners are capable of detecting alphabetic and numeric characters. If the characters are handwritten, they should be of standard size, with no stylish loops in the letters and lines, and properly connected; typed characters should be in a special font called OCR. OCR devices examine each character as if it were made up of a collection of minute spots.
OMR (Optical Mark Reader)
These scanners are capable of recognizing a pen or pencil mark made on paper; an OMR scanner can sense the presence or absence of a mark. In objective-type tests and examinations where you mark your answers by filling in a square or circular shape with a pencil, the answer sheets are fed directly into the computer and scanned by an OMR.
MICR (Magnetic Ink Character Reader)
MICR was developed to assist the banking industry in processing the large number of cheques handled every day. The bank identification code and the customer's account number are pre-printed with a special ink on every cheque; the ink contains magnetizable particles of iron oxide. A magnetic ink character reader reads these characters by examining their shapes with the help of a matrix, and sends this information to the system.

Input Device: Digitizer

A graphics tablet (or digitizer, digitizing tablet, graphics pad, drawing tablet) is a computer input device that allows one to hand-draw images and graphics, similar to the way one draws images with a pencil and paper. These tablets may also be used to capture data or handwritten signatures. It can also be used to trace an image from a piece of paper which is taped or otherwise secured to the surface. Capturing data in this way, either by tracing or entering the corners of linear poly-lines or shapes is called digitizing.
A graphics tablet (also called a pen pad or digitizer) consists of a flat surface upon which the user may "draw" or trace an image using an attached stylus, a pen-like drawing apparatus. The image generally does not appear on the tablet itself but, rather, is displayed on the computer monitor. Some tablets, however, function as a secondary computer screen with which you can interact directly using the stylus.

A digitizer, or tablet, is a surface over which a stylus (similar to a pencil) or hand cursor is moved; the location of the stylus or hand cursor is reported to the computer system. The size of the tablet, a square block, varies from about 10 inches across to 5 sq. ft., depending on the application. The stylus senses position through a transducer (a pressure-sensitive switch), so that the movement of the stylus over the tablet produces a corresponding line on the CRT screen.


Input Device : Joystick

The joystick is an input device used for playing games. It is a stick which can be moved left, right, forward and backward; the movements are sensed by a potentiometer. As the stick moves, the movements are translated into binary instructions with the help of electrical contacts in its base.
A popular variation of the joystick used on modern video game consoles is the analog stick.
The joystick has been the principal flight control in the cockpit of many aircraft, particularly military fast jets, either as a center stick or side-stick.
Joysticks are also used for controlling machines such as cranes, trucks, underwater unmanned vehicles, wheelchairs, surveillance cameras and zero turning radius lawn mowers. Miniature finger-operated joysticks have been adopted as input devices for smaller electronic equipment such as mobile phones.
The name "joystick" is thought to originate with early 20th century French pilot Robert Esnault-Pelterie. There are also competing claims on behalf of fellow pilots Robert Loraine, James Henry Joyce and A. E. George. Loraine is credited with entering the term "joystick" in his diary in 1909 when he went to Pau to learn to fly at Bleriot's school. George was a pioneer aviator who with his colleague Jobling built and flew a biplane at Newcastle in England in 1910. He is alleged to have invented the "George Stick" which became more popularly known as the joystick.

Input Device : Light Pen

A light pen is a computer input device in the form of a light-sensitive wand used in conjunction with a computer's CRT TV set or monitor. It allows the user to point to displayed objects, or draw on the screen, in a similar way to a touch screen but with greater positional accuracy.
A light pen is fairly simple to implement. Just like a light gun, a light pen works by sensing the sudden small change in brightness of a point on the screen when the electron gun refreshes that spot. By noting exactly where the scanning has reached at that moment, the X,Y position of the pen can be resolved. This is usually achieved by the light pen causing an interrupt, at which point the scan position can be read from a special register, or computed from a counter or timer. The pen position is updated on every refresh of the screen.
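A sketch of that position arithmetic in Python, assuming a hypothetical 640 x 480 display and a free-running pixel counter; the real values depend on the video mode:

    PIXELS_PER_LINE = 640    # assumed horizontal resolution
    LINES_PER_FRAME = 480    # assumed vertical resolution

    def pen_position(pixel_counter):
        # The counter counts pixels drawn since the top of the frame; the moment
        # the pen's photocell fires, the counter value fixes the beam's position.
        y = pixel_counter // PIXELS_PER_LINE
        x = pixel_counter % PIXELS_PER_LINE
        return x, y

    # The pen saw the beam when the counter read 1600:
    print(pen_position(1600))   # (320, 2): column 320 of scan line 2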
The light pen is a photosensitive pen. It is also a pointing device, capable of sensing a position on the screen. When a light pen's tip is moved over the screen surface, its photocell sensing element detects the light coming from the screen; corresponding signals are sent to the processor, which identifies the point (pixel) on the screen. A light pen can be used to draw images on screen.


Input Device : Keyboard

The following functions are carried out by input devices:
1. They accept data from the outside world.
2. They convert this data into binary form acceptable to the machine.
3. They send data in binary form to the computers for further processing.

A keyboard looks like a typewriter. It enables one to enter data into a computer. Computer keyboards are similar to electronic-typewriter keyboards but contain additional keys. The keys on computer keyboards are often classified as follows:
  1. Alphanumeric keys - Letters and numbers
  2. Punctuation keys - comma, period, semicolon and so on.
  3. Special keys - function keys, control keys, arrow keys, caps lock key, etc.

In normal usage, the keyboard is used to type text and numbers into a word processor, text editor or other program. In a modern computer, the interpretation of key presses is generally left to the software. A computer keyboard distinguishes each physical key from every other and reports all key presses to the controlling software.
Keyboards are also used for computer gaming, either with regular keyboards or by using keyboards with special gaming features, which can expedite frequently used keystroke combinations. A keyboard is also used to give commands to the operating system of a computer, such as Windows' Control-Alt-Delete combination, which brings up a task window or shuts down the machine. It is the only way to enter commands on a command-line interface.

Solved Examples for Algorithms

1. Write an algorithm to assign the value of a variable to another variable.

The algorithm can be written as:

1. Declare X and Y.
2. Read a value for X.
3. Assign the value of X to Y.
4. Print the values of X and Y.
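The same algorithm translated into Python (in Python, step 1's declaration happens implicitly at first assignment):

    X = int(input("Enter a value for X: "))   # step 2: read a value for X
    Y = X                                     # step 3: assign the value of X to Y
    print("X =", X, "Y =", Y)                 # step 4: print the values of X and Y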


2. Write an algorithm that prints the smaller of any two given numbers.

Solution:

Steps : 1. Read two numbers X and Y.
           2. Compare X and Y.
           3. If X is smaller than Y, print X; else print Y.
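The same steps in Python:

    X = int(input("Enter X: "))   # step 1: read two numbers X and Y
    Y = int(input("Enter Y: "))
    if X < Y:                     # step 2: compare X and Y
        print(X)                  # step 3: X is smaller, so print X
    else:
        print(Y)                  # otherwise print Y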

Algorithms

Computers were invented to solve problems which could not be solved manually with ease. In a computer, problems can be divided into small modules and then solved step-by-step.
The set of rules that defines how a particular problem can be solved in a finite number of well-defined steps is known as an algorithm. Each of the steps may require one or more operations.

The basic features of an algorithm are:
1. Each step of an algorithm is simple and definite.
2. Its logic is clear and unambiguous.
3. The logic of an algorithm is effective and has a unique solution for a problem.
4. It has a finite number of steps.

The success of a program depends largely upon the clarity and efficiency of the algorithm. If the algorithm is not properly designed, it makes the program prone to errors.

Characteristics of an algorithm:
1. Input: This part of an algorithm reads the data required for processing by accepting the input of the given problem.
2. Process: It performs the required computations in easy steps.
3. Finiteness: An algorithm should come to an end gracefully after a finite number of steps.
4. Effectiveness: Every step of an algorithm must be "simple"; the order of the steps should be unambiguous and the algorithm should execute within a definite period of time on a machine.
5. Output: It must give the desired output.

After analysing the problem correctly, the programmer has to understand the problem in order to develop a logic or method that can solve the problem. A computer program is a sequence of instructions outlining the steps to be performed by a computer. Translating an algorithm into a programming language is known as coding.

Fifth-Generation Computers ( 1990 and Beyond)

Scientists are now at work on the fifth generation of computers. This is still not a reality, but recent engineering advances have made it possible for computers to accept spoken words (voice recognition) and imitate human reasoning; they are therefore said to have artificial intelligence. The ability to translate foreign languages is also moderately possible with fifth-generation computers.
The term fifth generation was intended to convey the system as being a leap beyond existing machines. Computers using vacuum tubes were called the first generation; transistors and diodes, the second; integrated circuits, the third; and those using microprocessors, the fourth. Whereas previous computer generations had focused on increasing the number of logic elements in a single CPU, the fifth generation, it was widely believed at the time, would instead turn to massive numbers of CPUs for added performance. The project was to create the computer over a ten-year period, after which it was considered ended and investment in a new, sixth-generation project began. Opinions about its outcome are divided: either it was a failure, or it was ahead of its time.

Fourth-Generation Computers ( 1971 onwards)

Fourth-generation computers used Very Large Scale Integration (VLSI) technology. After the introduction of integrated circuits, computers could only get smaller in size, since hundreds of components could fit onto one chip. By the 1980s, VLSI technology had squeezed hundreds of thousands of components onto a single chip, and Ultra Large Scale Integration (ULSI) increased that number to millions. This helped decrease the price of computers and increased their power, efficiency and reliability. Examples of such computers are the IBM PC, Apple Macintosh and SUN SPARCstation.
The advantages of fourth-generation computers over third-generation computers are:
1. They were cheaper.
2. They had larger memories and higher functional speeds.
3. They consumed less power.
4. They generated a negligible amount of heat.

Third-Generation Computers (1964 - 1971)

In the third generation of computers, integrated circuits (ICs) began to be used. These ICs were called chips. An IC is more compact than a transistor: a single IC has many transistors, resistors and capacitors placed on a single thin slice of silicon. Computers built of such components therefore became smaller. Some of the computers developed during this period were:

IBM-360: Developed by IBM in 1964
PDP-8: Developed by DEC in 1965
PDP-11: Developed by DEC in 1970
CRAY-1: Developed by Cray Research in 1974
VAX: Developed by DEC in 1978.

High-level languages such as BASIC (Beginner's All-purpose Symbolic Instruction Code) were developed during this period.

The advantages that the third-generation computers had over the second-generation computers were:

1. They were smaller in size as compared to the second-generation computers.
2. They generated less heat.
3. They reduced computational time.
4. They involved low maintenance cost.
5. They were easily portable.
6. They required less power to keep them going.
7. They were comparatively cheaper.

Second-Generation Computers

In this generation of computers, transistors were used in place of vacuum tubes. Transistors are more compact than vacuum tubes, as they are made of semiconductors, and they are also more durable. Programming languages such as COBOL and FORTRAN were developed during this period. Some of the computers of the second generation are:

IBM 1620: Smaller than the first-generation computers, it was used mostly for scientific purposes.
IBM 1401: It was used for business applications.
CDC 3600: It was used for scientific purposes.

The advantages that the second-generation computers had over the first-generation computers are:

1. They were smaller as compared to the first-generation computers.
2. They generated less heat.
3. They took comparatively less computational time.
4. They were less prone to failure.

The disadvantages that second-generation computers had as compared to the first-generation computers are:

1. They required air conditioning.
2. Frequent maintenance was required.
3. They were difficult to produce and were quite expensive.

First Generation Computers (1945 - 1956)

First-generation computers used vacuum tubes and valves as their basic electronic components. They were extremely large in size and not reliable. The language used for storing and processing data was machine language. Some of the first-generation computers are:
ENIAC (Electronic Numerical Integrator and Calculator): It was built in 1946 at the University of Pennsylvania (USA) by J. Presper Eckert and John Mauchly.
EDVAC (Electronic Discrete Variable Automatic Computer): It was developed in 1950.
EDSAC (Electronic Delay Storage Automatic Computer): It was developed by M. V. Wilkes at Cambridge University in 1949.
UNIVAC-1: The Universal Automatic Computer was USA's first commercially available computer system. It was delivered in 1951 by the Eckert-Mauchly Computer Corp.

Disadvantages
The shortcomings of the first-generation computers were as follows:
1. They were too bulky.
2. They emitted large amounts of heat because they used many vacuum tubes.
3. Air conditioning was required.
4. They were prone to frequent failure, and were therefore unreliable.
5. They were not portable.

Characteristics of Computers

The useful characteristics of computers are:
1. Speed:
Modern computers operate at speeds measured in nanoseconds and picoseconds. The units used to describe computer speeds across the generations are as follows:

1 millisecond = 1 x 10^-3 second
1 microsecond = 1 x 10^-6 second
1 nanosecond = 1 x 10^-9 second
1 picosecond = 1 x 10^-12 second
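As a quick arithmetic illustration in Python, consider a hypothetical processor that completes one instruction every nanosecond:

    nanosecond = 1e-9                # 1 x 10^-9 second
    instructions_per_second = 1 / nanosecond
    print(instructions_per_second)   # 1000000000.0, i.e. one billion per second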
2. Accuracy: The accuracy of computers is quite high; they are reliable and robust and hardly ever make a mistake. Most errors occur due to the user rather than the computer. There may be occasional hardware faults, but with the advanced techniques at hand these are overcome.
Example: High-precision robots are used to perform operations on patients, since human hands are not steady enough for such operations.

3. Diligence: Unlike human beings, computers are persistent and are not afflicted by tiredness, monotony or lack of concentration.
If a large number of executions has to be made, each and every execution is carried out in the same time period. Computers can perform their assigned tasks without taking any rest.

Example: computers used for controlling satellites.
4. Reliability: Computers produce reliable and precise results; humans cannot work with such precision. Computers are also automatic: once assigned a task, they can execute the process without any intervention from the user. Once the data or instructions are fetched from secondary devices such as optical disks or hard disks, they are stored in RAM (primary memory) and then executed sequentially.

5. Versatility: Computers can work with different types of data, such as text, sound and graphics. Computers have become a part of our day-to-day life and, with their great flexibility, are used all over the world: as personal computers, for home use, for business-oriented tasks, weather forecasting, space exploration, teaching, railways, banking, medicine and so on. Modern computers can perform different kinds of tasks simultaneously.

6. Memory: Computers can store large amounts of data for years, until a hardware failure occurs.
Secondary storage devices are the key to long-term data storage; they hold the data that the user may want to retrieve for future use. Examples of secondary storage devices are floppy disks, optical disks (CD and DVD), Zip drives, thumb drives, etc. Smaller pieces of data can easily be fetched from them and copied into primary memory (RAM).

Example: the data warehousing systems built by IBM.

Computers today have the following limitations:
1. No IQ: Computers possess no intelligence of their own. They do what they are instructed to do; they do not have brains and hence cannot think.
2. No feelings: Since computers cannot think, they also do not have any feelings.

The Evolution of Computers

Computers have evolved through different stages before reaching their present state of development. The history of the computer dates back to 3000 BC. Different stages of its development are as follows:

3000 BC - Invention of the abacus.
AD 1500 - Leonardo da Vinci sketches a mechanical calculator.
AD 1621 - The slide rule is invented.
AD 1642 - Blaise Pascal invents the Arithmetic Machine.
AD 1801 - Jacquard introduces punched cards for storing data.
AD 1822 - Charles Babbage invents the Difference Engine.
AD 1830 - Charles Babbage designs the Analytical Engine.
AD 1857 - Sir Charles Wheatstone uses paper tape to store data.
AD 1926 - The first patent for a semiconductor transistor is filed.
AD 1936 - The Dvorak keyboard is created.
AD 1937 - Alan Turing's paper defining the Turing machine is published.
AD 1941 - Konrad Zuse completes the Z3, the world's first fully functioning electro-mechanical computer.
AD 1943 - Construction begins on ENIAC, the first electronic general-purpose computer.

Types of Computers

Depending upon the way it represents and processes data, a computer can be classified into one of the following types:
1. Digital
2. Analog
3. Hybrid

Digital Computers
A digital computer works with digits and discrete numbers. For example, to calculate the distance travelled by a car, you might take the diameter of the tyre (to calculate the circumference), the number of revolutions of the wheel per minute and the time taken in minutes, and then multiply them all together to get the distance moved, as in the sketch below. This is known as digital calculation.
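As a rough worked version of that calculation, here is a Python sketch; all figures are hypothetical:

    import math

    diameter_m = 0.6                              # tyre diameter in metres (hypothetical)
    rpm = 500                                     # wheel revolutions per minute (hypothetical)
    minutes = 10                                  # travel time in minutes (hypothetical)

    circumference = math.pi * diameter_m          # distance covered per revolution
    distance_m = circumference * rpm * minutes    # total distance moved
    print(round(distance_m), "metres")            # about 9425 metres
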
Digital computers can be classified, on the basis of size and capability, into:
Super Computers
Mainframe Computers
Mini Computers
Micro Computers
Personal Computers

Analog Computers
An analog computer works on the principle of continuous measurement of physical phenomena, such as rotation and electrical effects. Take the example of the milometer (odometer) in a car: when the wheels of the car rotate, they move a set of gears. This movement is transmitted to the meter by a flexible shaft. The meter itself contains gears and wheels marked with numbers and is calibrated to show the exact distance travelled in metres or kilometres. In an analog computer, the input and output are continuously varying quantities, such as voltage, instead of the discrete digits of digital computers.

Hybrid Computers
There are some computers which employ both digital and analog quantities; these are known as hybrid computers. For example, a digital thermometer employs a mechanism that converts the observed temperature into digital form using analog-to-digital conversion. Hybrid computers are generally used in process-control environments, where an analog (continuously varying) input is given to a computer that processes it digitally and presents the output in analog or digital form, as required.

Input and Output Unit

Input Unit
Data and instructions must be entered into the computer in order for computations to be carried out. This task (data entry) is carried out by input devices. The data read by input devices take different forms depending upon the type of input, but regardless of their type, all input devices deliver data to memory in binary form (as 0s and 1s). Input devices, therefore, perform the following basic functions:
1. Accept data from the outside world.
2. Convert the data into binary form so that the computer can understand it.
3. Send the data, in binary form, to the computer for further processing.
The keyboard, joystick, light pen, scanner, touch screen, mouse, magnetic disk and floppy disk are some popular input devices.
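As a small illustration of item 2 above, here is how a single character could be expressed in the binary form that input devices deliver (a Python sketch):

    ch = "A"
    print(ord(ch))                   # 65 -- the character's ASCII code
    print(format(ord(ch), "08b"))    # 01000001 -- the same value as eight bits
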
Output Unit
This unit of the computer system provides the results of computation to the outside world; its job is the opposite of the Input Unit's. Since a computer works with binary code, the results it produces are also in binary form and must be converted into a human-acceptable form before being supplied to the outside world. This task is carried out by the output units. An output unit, therefore, performs the following functions:
1. Accepts results produced by the computer in binary coded form.
2. Converts the coded data into a human-acceptable form.
3. Supplies the converted results to the outside world.
A few examples of output devices are monitors (visual display units), printers, plotters, speech synthesizers and microfilm or microfiche.

Arithmetic Logic Unit (ALU)

It is one of the main components of the Central Processing Unit, and it is where all arithmetic and logic operations are performed. Data received from an input device is stored in primary memory before being passed on to the ALU for processing.
Memory
Memory is the location where data and instructions are stored; they can be retrieved from memory whenever required. Memory is used to:
1. Temporarily hold the data received from an input device and keep it ready for processing.
2. Hold data that has been processed and the intermediate results generated within.
3. Hold the finished results of processed data, until released to output devices.
4. Hold the system software and application software in use.

Data is stored in memory as bytes. A byte is made up of eight bits, and a bit is the smallest unit of memory. Larger units of memory are Kilobytes (KB), Megabytes (MB), Gigabytes (GB) and Terabytes (TB), related as follows (checked in the short sketch after the list):

1 KB = 1024 bytes
1 MB = 1024 KB = 1024 * 1024 bytes
1 GB = 1024 MB
1 TB = 1024 GB
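These relationships can be verified directly; a minimal Python sketch:

    KB = 1024                # bytes
    MB = 1024 * KB
    GB = 1024 * MB
    TB = 1024 * GB
    print(MB)                # 1048576 (1024 * 1024 bytes)
    print(TB)                # 1099511627776 bytes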

Memory can be classified into two groups:
1. Main Memory
2. Secondary Memory

Main Memory
Every computer comes with a certain amount of physical memory, usually referred to as the main memory or RAM (Random Access Memory).
There are different kinds of main memory:
RAM (read/write or R/W): This type of memory is also known as random access memory. It is volatile, which means that when the power is turned off, all its contents are destroyed. RAM is again of two types: static RAM and dynamic RAM.
ROM (Read Only Memory): ROM is non-volatile, which means it retains the stored information even when the power is turned off. ROM is again of four types: masked ROM, PROM (Programmable ROM), EPROM (Erasable Programmable ROM) and EEPROM (Electrically Erasable Programmable ROM).
Secondary Memory:
Secondary memory includes storage devices like cassette tape, magnetic tape, floppy disks, hard disks, Winchester disks, etc.

Control Unit

How many times have you kept your book open with your eyes fixed on it and not read a single word? This happens because the Control Unit of your brain does not allow your eyes to provide an input at a given point of time.
A Control Unit is that part of the computer which makes the ALU and memory work in synchronization with the data. It is the part of the central processing unit that directs the sequence of operations, interprets coded instructions and sees to the execution of program instructions. In order to process instructions sequentially, the CU goes through the following steps, sketched in code after the list:

1. Retrieves an instruction from memory.
2. Determines the action to be taken.
3. Directs the CPU to perform the operation.
4. Determines whether the operation was carried out properly.
5. Displays an error message to the user, through an output device, if the operation was not carried out properly.
6. Stores the result in memory for further processing, if error-free.
7. Determines the location of the next instruction.
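A toy Python sketch of this fetch-decode-execute cycle; the three-instruction "machine" below is entirely hypothetical:

    # A tiny program for a hypothetical three-instruction machine.
    memory = [("LOAD", 7), ("ADD", 5), ("HALT", None)]
    acc, pc, running = 0, 0, True

    while running:
        op, arg = memory[pc]      # step 1: retrieve the instruction
        pc += 1                   # step 7: location of the next instruction
        if op == "LOAD":          # step 2: determine the action to be taken
            acc = arg             # step 3: perform the operation
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            running = False
        else:                     # steps 4-5: report an improper operation
            print("error: unknown instruction", op)
            running = False

    print(acc)                    # 12 -- the stored result (step 6)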

Central Processing Unit

In the human body, all major decisions are taken by the brain, and the other parts function as directed by it. Similarly, the CPU is the brain of the computer system, where all major decisions are taken. All calculations and comparisons are made by the CPU, and it is responsible for activating and controlling the operations of the other units of the computer system. The CPU consists of three main components:
1. Control Unit
2. Arithmetic Logic Unit(ALU)
3. Memory

Components of a Computer

The internal architectural design of computers may differ from one to another, but there is a basic organization seen in all computers.
A block diagram of this basic computer organization is shown below:

[Block diagram: Input Unit -> CPU (Control Unit, ALU and Memory) -> Output Unit]

The diagram shows that a computer consists of the Central Processing Unit (CPU), memory and the input/output unit.


Functions of a Computer

Any computer system can perform five basic functions.

Input
A computer can accept input data for the purpose of processing. This is called inputting.

Storing
Input data can be saved so that it is available for initial or additional processing as and when required. This is called storing.

Processing
Performing basic arithmetic or logical operations on data, in order to convert the input data into required, useful information, is known as processing.

Output
Output is the process of producing useful information or results for a person or device, such as a printed report or a visual display. The output can also be the input to a control system.

Controlling
Directing the manner and sequence in which all of the above operations are performed is known as controlling.

Computer Overview

Introduction
The word 'computer' comes from the word 'compute', which means to calculate. So, a computer is normally considered to be a calculating device that can perform arithmetic and logical operations at high speed.
The original objective for inventing the computer was to create a fast calculating machine, though a major part of the work done by computers nowadays is non-mathematical. Therefore, defining a computer only as a calculating device is not justified.
The computer is an electronic device designed to accept and store input data, manipulate it and output results under the direction of detailed, step-by-step stored programs and instructions.

Data
Data denotes raw facts and figures, such as numbers, words, amounts and quantities, that can be processed, manipulated or produced by a computer. For example: Rita, 18, XI B. This is raw data.

Information
Information is a meaningful, arranged form of data. Raw data does not make any sense on its own, so it has to be arranged in a manner that gives it meaning. For example, 'Rita, aged 18, is in class XI B' is information that makes sense.
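As a tiny illustration of the difference, arranging the raw data above into a sentence (a Python sketch):

    record = ("Rita", 18, "XI B")     # raw data
    name, age, klass = record         # 'klass' avoids Python's reserved word
    print(f"{name}, aged {age}, is in class {klass}.")   # information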

Hardware and Software
A computer consists of two fundamental components: one is called hardware, the other software. Hardware refers to the physical components or blocks, for example, the CPU, memory, and input and output devices. Software is the package of instructions designed to operate the hardware concerned, for example, MS-DOS, Microsoft Office, etc. In fact, we can divide software into two broad categories:
1) System Software: It manages the basic functioning of a computer system. It consists of operating systems, compilers, translators, etc.
2) Application Software: The basic aim of making and running a computer is to get work done from it, so programs which are developed to serve a particular application are known as application software. For example, Microsoft Office, Tally, etc.

Lossless compression algorithms

  1. run-length encoding (also known as RLE)
  2. dictionary coders :
    • LZ77 & LZ78
    • LZW
  3. Burrows-Wheeler transform (also known as BWT)
  4. prediction by partial matching (also known as PPM)
  5. context mixing (also known as CM)
  6. entropy encoding :
    • Huffman coding (simple entropy coding; commonly used as the final stage of compression)
    • Adaptive Huffman coding
    • Shannon-Fano coding
    • arithmetic coding (more advanced)
      • range encoding (same as arithmetic coding, but looked at in a slightly different way)
Run-length encoding
Run-length encoding (RLE) is a very simple form of data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most useful on data that contains many such runs: for example, simple graphic images such as icons and line drawings.
For example, consider a screen containing plain black text on a solid white background. There will be many long runs of white pixels in the blank space, and many short runs of black pixels within the text. Let us take a hypothetical single scan line, with B representing a black pixel and W representing white:
WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWB
If we apply a simple run-length code to the above hypothetical scan line, we get the following:
12WB12W3B24WB
Interpret this as twelve W's, one B, twelve W's, three B's, etc.
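A minimal sketch of this scheme in Python (the function names are illustrative, and the encoding assumes the data itself contains no digit characters):

    from itertools import groupby

    def rle_encode(data):
        # Emit "<count><char>" for runs longer than one, and a bare
        # character for single occurrences, as in "12WB12W3B24WB".
        out = []
        for char, run in groupby(data):
            n = len(list(run))
            out.append(f"{n}{char}" if n > 1 else char)
        return "".join(out)

    def rle_decode(encoded):
        out, count = [], ""
        for ch in encoded:
            if ch.isdigit():
                count += ch          # accumulate a multi-digit run length
            else:
                out.append(ch * (int(count) if count else 1))
                count = ""
        return "".join(out)

    line = "WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWB"
    encoded = rle_encode(line)
    print(encoded)                        # 12WB12W3B24WB
    print(rle_decode(encoded) == line)    # True
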
The run-length code represents the original 53 characters in only 13. Of course, the actual format used for the storage of images is generally binary rather than ASCII characters like this, but the principle remains the same. Even binary data files can be compressed with this method; file format specifications often dictate repeated bytes in files as padding space. However, newer compression methods such as deflation often use LZ77-based algorithms, a generalization of run-length encoding that can take advantage of runs of strings of characters (such as BWWBWWBWWBWW).
Run-length encoding performs lossless data compression and is well suited to palette-based iconic images. It does not work well at all on continuous-tone images such as photographs, although JPEG uses it quite effectively on the coefficients that remain after transforming and quantizing image blocks. RLE is used in fax machines (combined with other techniques into Modified Huffman coding). It is relatively efficient because most faxed documents are mostly white space, with occasional interruptions of black.
Data that have long sequential runs of bytes (such as lower-quality sound samples) can be RLE compressed after applying a predictive filter such as delta encoding.

Dictionary coder
A dictionary coder, also sometimes known as a substitution coder, is any of a number of lossless data compression algorithms which operate by searching for matches between the text to be compressed and a set of strings contained in a data structure (called the 'dictionary') maintained by the encoder. When the encoder finds such a match, it substitutes a reference to the string's position in the data structure.
Some dictionary coders use a 'static dictionary', one whose full set of strings is determined before coding begins and does not change during the coding process. This approach is most often used when the message or set of messages to be encoded is fixed and large; for instance, the many software packages that store the contents of the Bible in the limited storage space of a PDA generally build a static dictionary from a concordance of the text and then use that dictionary to compress the verses.
More common are methods where the dictionary starts in some predetermined state but the contents change during the encoding process, based on the data that has already been encoded. Both the LZ77 and LZ78 algorithms work on this principle. In LZ77, a data structure called the "sliding window" is used to hold the last N bytes of data processed; this window serves as the dictionary, effectively storing every substring that has appeared in the past N bytes as dictionary entries. Instead of a single index identifying a dictionary entry, two values are needed: the length, indicating the length of the matched text, and the offset (also called the distance), indicating that the match is found in the sliding window starting offset bytes before the current text.
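A toy Python sketch of the LZ77 idea, emitting (offset, length, next character) tokens against a sliding window; this is a simplified illustration, not the exact token format of any particular implementation:

    def lz77_encode(data, window=255):
        # Greedily search the sliding window for the longest match,
        # emitting (offset, length, next_char) tokens.
        i, tokens = 0, []
        while i < len(data):
            best_off, best_len = 0, 0
            for j in range(max(0, i - window), i):
                length = 0
                while (i + length < len(data) - 1
                       and data[j + length] == data[i + length]):
                    length += 1
                if length > best_len:
                    best_off, best_len = i - j, length
            tokens.append((best_off, best_len, data[i + best_len]))
            i += best_len + 1
        return tokens

    def lz77_decode(tokens):
        out = []
        for off, length, ch in tokens:
            for _ in range(length):       # copy one character at a time so
                out.append(out[-off])     # that overlapping matches work
            out.append(ch)
        return "".join(out)

    msg = "BWWBWWBWWBWW"
    tokens = lz77_encode(msg)
    print(tokens)                         # [(0, 0, 'B'), (0, 0, 'W'), (1, 1, 'B'), (3, 7, 'W')]
    print(lz77_decode(tokens) == msg)     # True
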

PPM compression algorithm
PPM is an adaptive statistical data compression technique based on context modeling and prediction. The name stands for Prediction by Partial Matching. PPM models use a set of previous symbols in the uncompressed symbol stream to predict the next symbol in the stream.
Predictions are usually reduced to symbol rankings. The number of previous symbols, n, determines the order of the PPM model which is denoted as PPM(n). Unbounded variants where the context has no length limitations also exist and are denoted as PPM*. If no prediction can be made based on all n context symbols a prediction is attempted with just n-1 symbols. This process is repeated until a match is found or no more symbols remain in context. At that point a fixed prediction is made. This process is the inverse of that followed by DMC compression algorithms (Dynamic Markov Chain) which build up from a zero-order model.
Much of the work in optimizing a PPM model is handling inputs that have not already occurred in the input stream. The obvious way to handle them is to create a "never-seen" symbol which triggers the escape sequence. But what probability should be assigned to a symbol that has never been seen? This is called the zero-frequency problem. One variant assigns the "never-seen" symbol a fixed pseudo-hit count of one. A variant called PPM-D increments the pseudo-hit count of the "never-seen" symbol every time the "never-seen" symbol is used. (In other words, PPM-D estimates the probability of a new symbol as the ratio of the number of unique symbols to the total number of symbols observed).
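The following toy Python sketch illustrates the context-fallback mechanism with a simplified escape estimate, in which the escape mass is proportional to the number of distinct symbols seen (roughly the idea described above); the class and its parameters are hypothetical:

    from collections import defaultdict

    class ToyPPM:
        def __init__(self, order=2):
            self.order = order
            # counts[context][symbol] = times `symbol` has followed `context`
            self.counts = defaultdict(lambda: defaultdict(int))

        def update(self, history, symbol):
            # Credit the symbol to every context length from 0 up to `order`.
            for n in range(min(self.order, len(history)) + 1):
                ctx = history[len(history) - n:] if n else ""
                self.counts[ctx][symbol] += 1

        def predict(self, history, symbol):
            # Try the longest context first; each time the symbol is unseen,
            # multiply in an escape probability and drop to a shorter context.
            prob = 1.0
            for n in range(min(self.order, len(history)), -1, -1):
                ctx = history[len(history) - n:] if n else ""
                seen = self.counts.get(ctx)
                if not seen:
                    continue                   # context itself never observed
                total, uniq = sum(seen.values()), len(seen)
                if symbol in seen:
                    return prob * seen[symbol] / (total + uniq)
                prob *= uniq / (total + uniq)  # escape towards shorter contexts
            return prob / 256                  # fixed fallback over a byte alphabet

    model = ToyPPM(order=2)
    history = ""
    for ch in "abracadabra":
        model.update(history, ch)
        history += ch
    print(model.predict(history, "c"))   # P(next symbol is 'c'), from context "ra"
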
PPM compression implementations vary greatly in other details. The actual symbol selection is usually recorded using arithmetic coding, though it is also possible to use Huffman encoding or even some type of dictionary coding technique. The underlying model used in most PPM algorithms can also be extended to predict multiple symbols. It is also possible to use non-Markov modeling to either replace or supplement Markov modeling. The symbol size is usually static, typically a single byte, which makes generic handling of any file format easy.
Published research on this family of algorithms can be found as far back as the mid-1980s. Software implementations were not popular until the early 1990s because PPM algorithms require a significant amount of RAM. Recent PPM implementations are among the best-performing lossless compression programs for natural language text.

Context mixing
Context mixing is a type of data compression algorithm in which the next-symbol predictions of two or more statistical models are combined to yield a prediction that is often more accurate than any of the individual predictions. For example, one simple method (not necessarily the best) is to average the probabilities assigned by each model. Combining models is an active area of research in machine learning.
The PAQ series of data compression programs use context mixing to assign probabilities to individual bits of the input.
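A minimal sketch of the averaging method mentioned above, with two hypothetical models each supplying a probability that the next bit is 1:

    def mix(predictions, weights=None):
        # Simple (optionally weighted) average of per-model probabilities.
        if weights is None:
            weights = [1.0] * len(predictions)
        total = sum(weights)
        return sum(p * w for p, w in zip(predictions, weights)) / total

    p_context = 0.80     # e.g. an order-2 context model's P(next bit = 1)
    p_match = 0.95       # e.g. a match model's P(next bit = 1)
    print(mix([p_context, p_match]))              # 0.875
    print(mix([p_context, p_match], [1.0, 3.0]))  # 0.9125, trusting the match model more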

Entropy encoding
An entropy encoding is a coding scheme that assigns codes to symbols so as to match code lengths with the probabilities of the symbols. Typically, entropy encoders are used to compress data by replacing symbols represented by equal-length codes with symbols represented by codes where the length of each codeword is proportional to the negative logarithm of the probability. Therefore, the most common symbols use the shortest codes.
According to Shannon's source coding theorem, the optimal code length for a symbol is -log_b(P), where b is the number of symbols used to make the output codes and P is the probability of the input symbol.
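For example, with b = 2 (binary output codes), the optimal lengths can be computed directly; the distribution below is hypothetical:

    import math

    probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}   # hypothetical distribution

    for sym, p in probs.items():
        print(sym, -math.log2(p))       # optimal lengths: a=1, b=2, c=3, d=3 bits

    entropy = -sum(p * math.log2(p) for p in probs.values())
    print(entropy)                      # 1.75 bits per symbol on average
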
Two of the most common entropy encoding techniques are Huffman coding and arithmetic coding. If the approximate entropy characteristics of a data stream are known in advance (especially for signal compression), a simpler static code such as unary coding, Elias gamma coding, Fibonacci coding, Golomb coding, or Rice coding may be useful.

Huffman coding
In computer science and information theory, Huffman coding is an entropy encoding algorithm used for lossless data compression. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file), where the variable-length code table has been derived in a particular way based on the estimated probability of occurrence for each possible value of the source symbol. It was developed by David A. Huffman, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes".
Huffman coding uses a specific method for choosing the representation for each symbol, resulting in a prefix-free code (that is, the bit string representing some particular symbol is never a prefix of the bit string representing any other symbol) that expresses the most common characters using shorter strings of bits than are used for less common source symbols. Huffman was able to design the most efficient compression method of this type: no other mapping of individual source symbols to unique strings of bits will produce a smaller average output size when the actual symbol frequencies agree with those used to create the code. A method was later found to do this in linear time if input probabilities (also known as weights) are sorted.
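A compact Python sketch of the algorithm, merging the two least-frequent subtrees with a heap until one tree remains (a simplified illustration; real coders also transmit the tree or its code lengths):

    import heapq
    from collections import Counter

    def huffman_codes(text):
        freq = Counter(text)
        # Heap entries: (frequency, tiebreaker, {symbol: code-so-far}).
        heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)     # two least-frequent subtrees
            f2, _, right = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in left.items()}
            merged.update({s: "1" + c for s, c in right.items()})
            heapq.heappush(heap, (f1 + f2, count, merged))
            count += 1
        return heap[0][2]

    codes = huffman_codes("abracadabra")
    print(codes)        # the most frequent symbol gets the shortest code: 'a' -> '0'
    bits = "".join(codes[ch] for ch in "abracadabra")
    print(bits, len(bits), "bits")      # 23 bits instead of 88 with 8-bit ASCII
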
For a set of symbols with a uniform probability distribution and a number of members which is a power of two, Huffman coding is equivalent to simple binary block encoding, e.g., ASCII coding. Huffman coding is such a widespread method for creating prefix-free codes that the term "Huffman code" is widely used as a synonym for "prefix-free code" even when such a code is not produced by Huffman's algorithm.
Although Huffman coding is optimal for symbol-by-symbol coding with a known input probability distribution, its optimality can sometimes be overstated. For example, arithmetic coding and LZW coding often achieve better compression. Both of these methods can combine an arbitrary number of symbols for more efficient coding, and generally adapt to the actual input statistics; the latter is useful when input probabilities are not precisely known.

Adaptive Huffman coding
Adaptive Huffman coding is an adaptive technique based on Huffman coding. It builds the code as the symbols are being transmitted, with no initial knowledge of the source distribution, which allows one-pass encoding and adaptation to changing conditions in the data. The benefit of the one-pass procedure is that the source can be encoded in real time, though the method becomes more sensitive to transmission errors, since a single loss can ruin the whole code.

Arithmetic coding
Arithmetic coding is a method for lossless data compression. It is a form of entropy encoding, but where other entropy encoding techniques separate the input message into its component symbols and replace each symbol with a code word, arithmetic coding encodes the entire message into a single number, a fraction n where 0.0 ≤ n < 1.0.
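A toy float-based sketch of the idea: each symbol narrows an interval [low, high), and any number inside the final interval identifies the whole message. The symbol ranges below are hypothetical, and floating-point precision limits this to short messages; real coders use integer arithmetic with renormalization:

    # Hypothetical fixed model: each symbol owns a sub-interval of [0, 1).
    ranges = {"A": (0.0, 0.6), "B": (0.6, 0.8), "!": (0.8, 1.0)}

    def encode(message):
        low, high = 0.0, 1.0
        for sym in message:
            span = high - low
            s_lo, s_hi = ranges[sym]
            low, high = low + span * s_lo, low + span * s_hi
        return (low + high) / 2           # any value in [low, high) would do

    def decode(x, length):
        out = []
        for _ in range(length):
            for sym, (s_lo, s_hi) in ranges.items():
                if s_lo <= x < s_hi:
                    out.append(sym)
                    x = (x - s_lo) / (s_hi - s_lo)   # rescale and continue
                    break
        return "".join(out)

    code = encode("ABA!")
    print(code)                   # about 0.4248; one fraction encodes the message
    print(decode(code, 4))        # ABA!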

Burrows-Wheeler transform
The Burrows-Wheeler transform (BWT, also called block-sorting compression) is an algorithm used in data compression techniques such as bzip2. It was invented by Michael Burrows and David Wheeler.
When a character string is transformed by the BWT, none of its characters change value. The transformation rearranges the order of the characters. If the original string had several substrings that occurred often, then the transformed string will have several places where a single character is repeated multiple times in a row. This is useful for compression, since it tends to be easy to compress a string that has runs of repeated characters by techniques such as move-to-front transform and run-length encoding.
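A naive Python sketch of the transform and its inverse, sorting all rotations of the input with a unique end-of-string marker (real implementations use suffix arrays rather than materializing every rotation):

    def bwt(s, eos="\0"):
        # Sort all rotations of s + sentinel; the transform is the last column.
        s += eos                              # unique end marker, assumed absent from s
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(rot[-1] for rot in rotations)

    def inverse_bwt(last, eos="\0"):
        # Repeatedly prepend the transformed column and re-sort to rebuild
        # the rotation table, then pick the row ending with the sentinel.
        table = [""] * len(last)
        for _ in range(len(last)):
            table = sorted(last[i] + table[i] for i in range(len(last)))
        row = next(r for r in table if r.endswith(eos))
        return row.rstrip(eos)

    text = "banana"
    transformed = bwt(text)
    print(repr(transformed))                  # 'annb\x00aa' -- like letters grouped together
    print(inverse_bwt(transformed))           # banana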