Sunday, January 11, 2009

hard disk

Bernabe Adio Bit-41

How does a Hard Disk Work?

On a hard disk, data is stored in the magnetic coating of the disk's platters. The platter is a flat disk of either alloy or glass, with a spindle at the centre. Modern platters generally have a diameter of 3.5" in desktops or 2.5" in laptops, although smaller 1.8" drives are available for devices that require a micro-drive.

Hard Disk Components

The spindle is rotated by an electric motor, and this causes the platter to spin. The speed at which the platter spins is measured in RPM (Revolutions Per Minute), and a higher speed usually indicates a higher-performance disk in terms of reading and writing data. Typical speeds are 5,400 and 7,200 RPM. Hard drives running at 10,000 or 15,000 RPM normally use SCSI (Small Computer System Interface). SCSI interfaces provide faster data transfer rates (up to 80 megabytes per second) than standard Serial ATA (SATA) drives, and you can attach many devices to a single SCSI port.

The magnetic media holds binary data, just as tapes and floppy disks do. The data is read from the surface of the platter by a set of 'heads', which are fixed so that they can only move between the centre of the platter and the outside edge. The heads are held just above the magnetic media by actuator arms that move them across the platter surface. The heads are not designed to touch the platter surface, as physical contact can damage the magnetic media. Each platter has a top side and an underside, and there is usually a head for each, so a hard disk with 5 platters would have 10 heads.

Data Recovery Technical Guide: How Hard Drives Work?

When the disk is not in use, the heads are 'parked', which means they spring back to the outside edge of the platter until they are called into action by another read or write. Data in the magnetic media is organised into cylinders - concentric tracks on the media that are further divided into sectors. A sector is the smallest allocatable logical unit on a drive and is usually, but not always, 512 bytes in size.

When an Operating System such as Windows sends data to the hard drive to be recorded, the drive first processes the data using a complex mathematical formula that adds extra bits to the data. When the data is retrieved, the extra bits allow the drive to detect and correct random errors caused by variations in the drive's magnetic fields. Next, the drive moves the heads over the appropriate track on a platter. The time it takes to move the heads is called the seek time. Once over the correct track, the drive waits while the platters rotate until the desired sector is under the head. The amount of time that takes is called the drive's latency, which is measured in milliseconds. The shorter the seek time and latency, the faster the drive can do its work. The average seek time for most hard disk drives is around 10 ms.

When the drive electronics determine that a head is over the correct sector to write the data, the drive sends electrical pulses to that head. The pulses produce a magnetic field that alters the magnetic surface of the platter. These magnetic variations recorded on the surface of the platter are the "data". Compare these magnetic variations with the grooves on a vinyl record, which are read by the record player's needle arm to reproduce the music recorded on the spinning record. Reading data complements the recording process.
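To make the seek-time and latency figures above concrete, here is a small Python sketch (not from the original article) that computes average rotational latency from spindle speed, the platter edge speed for an assumed 3.5" platter, and a capacity estimate from a made-up cylinders/heads/sectors geometry; all the numeric inputs are illustrative assumptions.

import math

def avg_rotational_latency_ms(rpm):
    # One revolution takes 60/rpm seconds; on average the drive waits
    # half a revolution for the wanted sector to come under the head.
    return (60.0 / rpm) / 2 * 1000

def edge_speed_kmh(rpm, platter_diameter_m=0.095):
    # Linear speed of the platter's outer edge (0.095 m is an assumed
    # usable diameter for a 3.5" drive).
    return math.pi * platter_diameter_m * (rpm / 60.0) * 3.6

def chs_capacity_bytes(cylinders, heads, sectors_per_track, sector_size=512):
    # Classic cylinders x heads x sectors geometry with 512-byte sectors.
    return cylinders * heads * sectors_per_track * sector_size

for rpm in (5400, 7200, 10000, 15000):
    print(f"{rpm} RPM: ~{avg_rotational_latency_ms(rpm):.2f} ms average rotational latency, "
          f"edge speed ~{edge_speed_kmh(rpm):.0f} km/h")

print(f"Hypothetical geometry: {chs_capacity_bytes(16383, 16, 63) / 10**9:.1f} GB")

Running it shows, for example, that a 7,200 RPM drive has an average rotational latency of roughly 4 ms and an outer-edge speed on the order of 120-130 km/h, in line with the figures quoted in this article.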
The drive positions the read portion of the head over the correct track, and then waits for the correct sector to rotate around. When the particular magnetic variations that represent your data pass under the head, the drive's electronics detect the small magnetic changes and convert them back into bits. Once the drive checks the bits for errors and fixes any it finds, it sends the data back to the operating system.

What is a board swap?

Inside any Hard Disk Drive (HDD) there is a Printed Circuit Board (PCB) that contains the electronics managing the HDD's activities. Like any other PCB, it contains chips and other components that the manufacturer has designed to allow the HDD to function effectively. Each HDD manufacturer has its own proprietary firmware. Firmware is the program code held in chips on the PCB, and it is highly specific to each manufacturer and HDD model. Firmware is continually updated, and as a result a given HDD may go through many firmware revisions as the manufacturer attempts to get better and better performance from the models it sells. It is not unusual for an HDD to go through dozens of firmware revisions during a model's lifecycle. If a PCB becomes damaged, or a component burns out, it is possible either to mend or to replace the PCB. Repairing the PCB is easier than replacing it, because a replacement would need to be found with identical firmware. With an HDD that is more than six months old, procuring an identical revision can be time-consuming and difficult.

What is a Head Crash?

A 'head crash' occurs when the heads of a hard disk drive touch the rotating platter surface. The head normally rides on a thin film of moving air entrapped at the surface of the platter. A shock to a working hard disk, or even a tiny particle of dirt or other debris, can cause the head to bounce against the disk, destroying the thin magnetic coating on the disk. AND THIS MEANS LOSS OF DATA! Since most new drives spin at rates between 7,200 and 15,000 RPM, the damage caused to the magnetic coating can be extensive. At 7,200 RPM the edge of the platter is travelling at over 74 miles per hour (120 km/h), and as the crashed heads drag over the platter surface they generally overheat due to friction, making the drive, or at least parts of it, unusable until the heads cool. Following a head crash, particles of material scraped free of the drive surface greatly increase the chances of further head crashes or damage to the platters. Data stored in media that is scraped off the platter is of course unrecoverable, and because data is stored randomly across the disk surface, this may mean whole files or parts of many files. The most severe head crashes are the ones where the entire stack of heads crashes onto each of the platters in the stack; a violent movement, or shock, to a working hard disk drive usually causes this. The chance of a good recovery in these circumstances is often remote and is generally limited to partial files.

Levels of Complexity in Data Recovery

Logical corruption: This means that the computer is unable to make sense of the data that is stored across the disk. The HDD loses its logical format and does not show up in the system; disk utilities can see the drive but show it as unallocated space. This is usually caused by the computer's index system being damaged or corrupted. The data is still there, but the computer is unable to recognise it for what it is, and is therefore unable to reconstitute it into a readable document or file.
Where this kind of logical corruption is the cause of the data loss, your chances of getting all the data back are extremely good. With the use of advanced tools, professional software and disk editing methods, Data Recovery Doctors can return the hard drive to a state that is understood by the computer. The files are most often undamaged after recovery.

Electronics failure: This means that the external electrical circuitry of the hard drive has failed. Recovery from a hard disk in these cases is possible, as long as a replacement circuit board can be located or the circuitry can be repaired by Data Recovery technicians. This is not as simple as it sounds, as each hard disk may go through many revisions during its life-cycle, and a revision-specific printed circuit board (PCB) must be located in our parts inventory or ordered from our suppliers.

Mechanical failure: This means that the internal mechanics of the hard disk drive have failed, either through internal factors such as age or minor manufacturing defects, or as a result of external factors such as shock, heat or water. This is more serious than an electronics failure, as the internal mechanics of a modern hard disk are very delicate and built to extremely small tolerances. Again, revision-specific parts are required, and the internal mechanics will need to be mended or replaced for the hard drive to be able to read the data again. The hard disk needs to be disassembled in a class 100 clean environment to prevent damage to the disk platters on which the magnetic media stores the data.

Media damage: This means that the magnetic media on the surface of the hard disk platters has become damaged or corrupted. This is mainly caused by what is known as a 'head crash', where the heads that read the data from the disk surface actually crash into the spinning platter and begin to scrape the media away. Once magnetic media that contains your data is scraped away and turned into dust, data recovery becomes extremely difficult and expensive. Because a computer stores data randomly across a set of platters in a hard disk drive, a relatively minor head crash can damage many files. Whole files, and sometimes parts of files, can be recovered, but the quality of the recovery is likely to be lower than with other types of hardware failure. On many occasions the media damage is so severe that little valuable data can be retrieved.

Undelete Files: When a user deletes a file, whether accidentally or intentionally, the actual data is not destroyed; the computer system simply regards that data as no longer required. The data stored on a hard disk drive consists of pieces of the file scattered across random areas of the magnetic media on the hard disk surface. The drive does this to speed up the time taken to 'write' the data: wherever the heads happen to be when the save command is received, they write data to the magnetic media. Because bits and pieces of the file are stored in different areas, the computer system requires an index or map to put the pieces back together again, in the right order, to reconstitute the file. This index is stored in a FAT, or 'File Allocation Table'. When one deletes a file, the entry in the table is removed, telling the computer that the areas that previously contained parts of the file are no longer required and are available for new data to be stored. The computer does NOT go and overwrite the original data, so it remains in place until another set of data happens to be stored there.
As long as the 'deleted' data has not been overwritten by new data, it can be found, reconstituted and recovered. Once deleted data has been overwritten by new data, it is virtually impossible to recover. If you re-install Windows and realise that your data is missing, stop installing applications: the more data you write to the disk, the lower the chances of data recovery become.
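To illustrate why undeletion works at all, here is a deliberately simplified Python sketch of the idea described above: "deleting" only removes the index entry, while the sectors keep their contents until new data lands on them. It is a toy model, not the real FAT on-disk format, and all names and sector numbers are made up.

disk = {}          # sector number -> bytes actually stored on the platter
file_table = {}    # file name -> list of sectors used (the "index")

def write_file(name, chunks, sectors):
    # Record the file in the index and write its pieces to the sectors.
    file_table[name] = sectors
    for sector, chunk in zip(sectors, chunks):
        disk[sector] = chunk

def delete_file(name):
    # Only the index entry disappears; the sectors are untouched.
    file_table.pop(name, None)

def undelete(sectors):
    # A recovery tool reads the raw sectors back, if nothing overwrote them.
    return [disk.get(s) for s in sectors]

write_file("report.doc", ["part-1", "part-2"], sectors=[10, 42])
delete_file("report.doc")
print(undelete([10, 42]))      # ['part-1', 'part-2'] - still recoverable

write_file("new.txt", ["fresh"], sectors=[10])
print(undelete([10, 42]))      # ['fresh', 'part-2'] - partially overwritten

The second print shows exactly the situation the article warns about: once new data is written over a previously used sector, that part of the old file is gone for good.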

Friday, November 21, 2008

meaning of: data bus, address bus, control bus

What is a data bus? In computer architecture, a bus is a subsystem that transfers data between computer components inside a computer or between computers. Unlike a point-to-point connection, a bus can logically connect several peripherals over the same set of wires. Each bus defines its set of connectors to physically plug devices, cards or cables together. Early computer buses were literally parallel electrical buses with multiple connections, but the term is now used for any physical arrangement that provides the same logical functionality as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop (electrically parallel) or daisy-chain topology, or connected by switched hubs, as in the case of USB.

Address Bus

An address bus is a computer bus, controlled by CPUs or DMA-capable peripherals, for specifying the physical addresses of the computer memory elements that the requesting unit wants to access (read or write). The width of an address bus, along with the size of the addressable memory elements, generally determines how much memory can be directly accessed. For example, a 16-bit wide address bus (commonly used in the 8-bit processors of the 1970s and early 1980s) reaches 2^16 (65,536) memory locations, whereas a 32-bit address bus (common in PC processors as of 2004) can address 2^32 (4,294,967,296) locations. Some microprocessors, such as the Digital/Compaq Alpha 21264 and Alpha 21364, have an address bus that is narrower than the amount of memory they can address; in these designs the address is transferred over the narrower bus in more than one step, with the bus clocked faster than the system or memory bus so that a complete address can still be delivered quickly. In most microcomputers such addressable "locations" are 8-bit bytes, conceptually at least. In that case the above examples translate to 64 kilobytes (KB) and 4 gigabytes (GB) respectively. However, accessing an individual byte frequently requires reading or writing the full bus width (a word) at once. In these cases the least significant bits of the address bus may not even be implemented; it is instead the responsibility of the controlling device to isolate the required byte from the complete word transmitted. This is the case, for instance, with the VESA Local Bus, which lacks the two least significant bits, limiting it to aligned 32-bit transfers. Historically, there were also some computers which could only address larger words, such as 36 or 48 bits long.

Control Bus

A control bus is (part of) a computer bus, used by CPUs for communicating with other devices within the computer. While the address bus carries the information on which device the CPU is communicating with and the data bus carries the actual data being processed, the control bus carries commands from the CPU and returns status signals from the devices; for example, if data is being read from or written to a device, the appropriate line (read or write) will be active (often active-low, i.e. logic zero).
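A short Python sketch of the address-width arithmetic above (assuming byte-addressable memory, as in most microcomputers):

def addressable_locations(width_bits):
    # An N-bit address bus can select 2**N distinct locations.
    return 2 ** width_bits

print(addressable_locations(16))   # 65536      -> 64 KB of byte-addressable memory
print(addressable_locations(32))   # 4294967296 -> 4 GB of byte-addressable memory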

Internal parts of the computer, motherboard and meaning of ZIP

INTERNAL PARTS OF THE COMPUTER
1. Motherboard
2. CPU
3. Cards: PCI, AGP, RAM and LAN card
4. IDE cables
5. Hard drives
6. Floppy disk drive
7. Optical disk drive
8. PSU
9. CMOS - complementary metal oxide semiconductor (settings kept alive by stored electricity from a small battery)
10. Cooling devices

The motherboard is the central printed circuit board (PCB) in some complex electronic systems, such as modern personal computers. The motherboard is sometimes alternatively known as the mainboard, system board, or, on Apple computers, the logic board. It is also sometimes casually shortened to mobo.

The ZIP file format is a data compression and archival format. A ZIP file contains one or more files that have been compressed, to reduce their file size, or stored as-is. A number of compression algorithms are permitted in ZIP files, but as of 2008 only DEFLATE is widely used and supported. The format was originally developed by Phil Katz for PKZIP from the previous ARC compression format by Thom Henderson. However, many software utilities other than PKZIP itself are now available to create, modify, or open (unzip, decompress) ZIP files, notably WinZip, BOMArchiveHelper, StuffIt, KGB Archiver, PicoZip, Info-ZIP, WinRAR, IZArc, 7-Zip, ALZip, TUGZip, PeaZip, Universal Extractor and Zip Genius. Microsoft has included built-in ZIP support (under the name "compressed folders") in later versions of its Windows operating system. Apple has included built-in ZIP support in Mac OS X 10.3 and Mac OS X 10.4 via the BOMArchiveHelper utility, now called Archive Utility in Mac OS X 10.5. The zip, zipcloak, zipnote and zipsplit tools are widely used on Unix-like systems. ZIP files generally use the file extensions ".zip" or ".ZIP" and the MIME media type application/zip.

Some software uses the ZIP file format as a wrapper for a large number of small items in a specific structure. Generally when this is done, a different file extension is used. Examples of this usage are Java JAR files, Python .egg files, id Software .pk3/.pk4 files, package files for StepMania and Winamp/Windows Media Player skins, XPInstall, as well as the OpenDocument and Office Open XML office formats. Both OpenDocument and Office Open XML use the ZIP format internally, so their files can easily be uncompressed and compressed using tools for ZIP files. Google Earth makes use of KMZ files, which are just KML files in ZIP format. Mozilla Firefox add-ons are ZIP files with the extension "xpi". Nokia and Sony Ericsson mobile phone themes are zipped with the extensions "nth" and "thm", respectively.
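As a quick, hands-on illustration of the format, here is a small Python sketch using the standard-library zipfile module to create an archive with the DEFLATE method mentioned above and read it back; the file names are made up for the example.

import zipfile

# Create an archive; ZIP_DEFLATED selects the DEFLATE compression method
# (ZIP_STORED would store the file uncompressed, i.e. "as-is").
with zipfile.ZipFile("example.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("notes.txt", "Some text to be compressed inside the archive.")

# Read the archive back and list its contents.
with zipfile.ZipFile("example.zip") as zf:
    for info in zf.infolist():
        print(info.filename, info.file_size, "->", info.compress_size, "bytes")
    print(zf.read("notes.txt").decode())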

external parts of the computer

cpu, processor, memory

A Central Processing Unit (CPU) is a machine that can execute computer programs. This broad definition can easily be applied to many early computers that existed long before the term "CPU" ever came into widespread usage. The term itself and its initialism have been in use in the computer industry at least since the early 1960s (Weik 1961). The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation has remained much the same. Early CPUs were custom-designed as part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors suited for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones to children's toys.

Processor

A word processor (more formally known as a document preparation system) is a computer application used for the production (including composition, editing, formatting, and possibly printing) of any sort of printable material. Word processor may also refer to an obsolete type of stand-alone office machine, popular in the 1970s and 80s, combining the keyboard text-entry and printing functions of an electric typewriter with a dedicated computer for the editing of text. Although features and design varied between manufacturers and models, with new features added as technology advanced, word processors for several years usually featured a monochrome display and the ability to save documents on memory cards or diskettes. Later models introduced innovations such as spell-checking programs, increased formatting options, and dot-matrix printing. As the more versatile combination of a personal computer and separate printer became commonplace, the dedicated word processor disappeared.

Word processors are descended from early text formatting tools (sometimes called text justification tools, from their only real capability). Word processing was one of the earliest applications for the personal computer in office productivity. Although early word processors used tag-based markup for document formatting, most modern word processors take advantage of a graphical user interface. Most are powerful systems consisting of one or more programs that can produce any arbitrary combination of images, graphics and text, the latter handled with type-setting capability. Microsoft Word is the most widely used computer word processing system; Microsoft estimates that over five hundred million people use the Office suite, which includes Word. There are also many other commercial word processing applications, such as WordPerfect, which dominated the market from the mid-1980s to early 1990s, particularly for machines running Microsoft's MS-DOS operating system. Open-source applications such as OpenOffice.org Writer and KWord are rapidly gaining in popularity. Online word processors such as Google Docs are a relatively new category.
MEMORY

Different RAM Types and their uses

The type of RAM doesn't matter nearly as much as how much of it you've got, but using plain old SDRAM memory today will slow you down. There are three main types of RAM: SDRAM, DDR and Rambus DRAM.

SDRAM (Synchronous DRAM)

Almost all systems used to ship with 3.3 volt, 168-pin SDRAM DIMMs. SDRAM is not an extension of older EDO DRAM but a new type of DRAM altogether. SDRAM started out running at 66 MHz, while older fast page mode DRAM and EDO max out at 50 MHz. SDRAM is able to scale to 133 MHz (PC133) officially, and unofficially up to 180 MHz or higher. As processors get faster, new generations of memory such as DDR and RDRAM are required to get proper performance.

DDR (Double Data Rate SDRAM)

DDR basically doubles the rate of data transfer of standard SDRAM by transferring data on both the rising and falling edge of the clock cycle. DDR memory operating at 333 MHz actually operates at 166 MHz x 2 (aka PC333 / PC2700) or 133 MHz x 2 (PC266 / PC2100). DDR is a 2.5 volt technology that uses 184 pins in its DIMMs. It is physically incompatible with SDRAM, but uses a similar parallel bus, making it easier to implement than RDRAM, which is a different technology.

Rambus DRAM (RDRAM)

Despite its higher price, Intel has given RDRAM its blessing for the consumer market, and it will be the sole choice of memory for Intel's Pentium 4. RDRAM is a serial memory technology that arrived in three flavors: PC600, PC700 and PC800. PC800 RDRAM has double the maximum throughput of old PC100 SDRAM, but a higher latency. RDRAM designs with multiple channels, such as those in Pentium 4 motherboards, are currently at the top of the heap in memory throughput, especially when paired with PC1066 RDRAM memory.

DIMMs vs. RIMMs

DRAM comes in two major form factors: DIMMs and RIMMs. DIMMs are 64-bit components, but if used in a motherboard with a dual-channel configuration (like with an Nvidia nForce chipset) you must pair them to get maximum performance. So far there aren't many DDR chipsets that use dual channels. Typically, if you want to add 512 MB of DIMM memory to your machine, you just pop in a 512 MB DIMM if you've got an available slot. DIMMs for SDRAM and DDR are different, and not physically compatible: SDRAM DIMMs have 168 pins and run at 3.3 volts, while DDR DIMMs have 184 pins and run at 2.5 volts. RIMMs use only a 16-bit interface but run at higher speeds than DDR. To get maximum performance, Intel RDRAM chipsets require the use of RIMMs in pairs over a dual-channel 32-bit interface. You have to plan more when upgrading and purchasing RDRAM.

[Image: from the top - SIMM, DIMM and SODIMM memory modules]

Memory Speed

SDRAM initially shipped at a speed of 66 MHz. As memory buses got faster, it was pumped up to 100 MHz, and then 133 MHz. The speed grades are referred to as PC66 (unofficially), PC100 and PC133 SDRAM respectively. Some manufacturers are shipping a PC150 speed grade. However, this is an unofficial speed rating, and of little use unless you plan to overclock your system. DDR comes in PC1600, PC2100, PC2700 and PC3200 DIMMs. A PC1600 DIMM is made up of PC200 DDR chips, while a PC2100 DIMM is made up of PC266 chips. PC2700 uses PC333 DDR chips, and PC3200 uses PC400 chips that haven't gained widespread support. Go for PC2700 DDR: it costs about the same as PC2100 memory and will give you better performance. RDRAM comes in PC600, PC700, PC800 and PC1066 speeds. Go for PC1066 RDRAM if you can find it; if you can't, PC800 RDRAM is widely available.
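The PC-rating arithmetic behind those DDR module names can be sketched in a few lines of Python; the 8-bytes-per-transfer figure assumes the standard 64-bit DIMM data path, and the clock/name pairs are the ones mentioned in the text.

def ddr_peak_mb_per_s(bus_clock_mhz, bus_width_bytes=8):
    # Peak bandwidth of a DDR module: bus clock x 2 transfers per cycle
    # x bytes per transfer (8 bytes for a 64-bit DIMM).
    return bus_clock_mhz * 2 * bus_width_bytes

for clock, name in [(100, "PC1600"), (133, "PC2100"), (166, "PC2700"), (200, "PC3200")]:
    print(f"{name}: {clock} MHz bus -> ~{ddr_peak_mb_per_s(clock)} MB/s peak")

The printed figures (1600, ~2100, ~2700 and 3200 MB/s) are where the PC1600/PC2100/PC2700/PC3200 labels come from.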
CAS Latency

SDRAM comes with latency ratings, or "CAS (Column Address Strobe) latency" ratings. Standard PC100 / PC133 SDRAM comes in CAS 2 or CAS 3 speed ratings. The lower latency of CAS 2 memory will give you more performance. It also costs a bit more, but it's worth it. DDR memory comes in CAS 2 and CAS 2.5 ratings, with CAS 2 costing more and performing better. RDRAM has no CAS latency ratings, but may eventually come in 32-bank and 4-bank forms, with 32-bank RDRAM costing more and performing better. For now, it's all 32-bank RDRAM.

Understanding Cache

Cache memory is fast memory that serves as a buffer between the processor and main memory. The cache holds data that was recently used by the processor and saves a trip all the way back to slower main memory. The memory structure of PCs is often thought of as just main memory, but it's really a five- or six-level structure. The first two levels of memory are contained in the processor itself: the processor's small internal memory, or registers, and the L1 cache, which is the first level of cache, usually contained in the processor. The third level of memory is the L2 cache, usually contained on the motherboard. However, the Celeron chip from Intel actually contains 128 KB of L2 cache within the form factor of the chip. More and more chip makers are planning to put this cache on board the processor itself. The benefit is that it will then run at the same speed as the processor, and cost less to put on the chip than to set up a bus and logic externally from the processor. The fourth level is being referred to as L3 cache. This cache used to be the L2 cache on the motherboard, but now that some processors include L1 and L2 cache on the chip, it becomes L3 cache. Usually, it runs slower than the processor, but faster than main memory. The fifth level (or fourth if you have no L3 cache) of memory is the main memory itself. The sixth level is a piece of the hard disk used by the operating system, usually called virtual memory. Most operating systems use this when they run out of main memory, but some use it in other ways as well. This six-tiered structure is designed to speed data efficiently to the processor when it needs it, and also to allow the operating system to function when main memory is low. You might ask, "Why is all this necessary?" The answer is cost. If there were one type of super-fast, super-cheap memory, it could theoretically satisfy the needs of this entire memory architecture. This will probably never happen, since you don't need very much cache memory to drastically improve performance, and there will always be a faster, more expensive alternative to the current form of main memory.

Memory Redundancy

One important aspect to consider in memory is what level of redundancy you want. There are a few different levels of redundancy available in memory, and depending on your motherboard, it may support all or only some of them. The cheapest and most prevalent level of redundancy is non-parity memory. When you have non-parity memory in your machine and it encounters a memory error, the operating system has no way of knowing and will most likely crash, but it could also corrupt data with no way of telling the OS. This is the most common type of memory, and unless specified otherwise, that's what you're getting. It works fine for most applications, but I wouldn't run life support systems on it. The second level of redundancy is parity memory (also called true parity).
Parity memory has extra chips that act as parity chips. Thus, the module will be able to detect when a memory error has occurred and signal the operating system. You'll probably still crash, but at least you'll know why. The third level of redundancy is ECC (Error Checking and Correcting) memory. This requires even more logic and is usually more expensive. Not only does it detect memory errors, but it also corrects 1-bit errors. If you have a 2-bit error, you will still have problems. Some motherboards allow you to use ECC memory.

Older memory types

Fast Page Mode DRAM: Fast Page Mode DRAM is plain old DRAM as we once knew it. The problem with standard DRAM was that it maxes out at about 50 MHz.

EDO DRAM: EDO DRAM gave people up to a 5% system performance increase over DRAM. EDO DRAM is like FPM DRAM with some cache built into the chip. Like FPM DRAM, EDO DRAM maxes out at about 50 MHz. Early on, some system makers claimed that if you used EDO DRAM you didn't need L2 cache in your computer to get decent performance. They were wrong. It turns out that EDO DRAM works along with L2 cache to make things even faster, but if you lose the L2 cache, you lose a lot of speed.
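As a side note on the parity memory described above, here is a minimal Python sketch of how a single (even) parity bit can flag a one-bit error without being able to locate or correct it; this illustrates the principle only, not how memory controllers are actually implemented.

def even_parity_bit(byte):
    # Parity bit chosen so the total number of 1 bits (data + parity) is even.
    return bin(byte).count("1") % 2

def check(byte, stored_parity):
    # Recompute the parity and compare it with the stored bit.
    return even_parity_bit(byte) == stored_parity

data = 0b10110010
parity = even_parity_bit(data)

print(check(data, parity))        # True  - no error detected
corrupted = data ^ 0b00000100     # flip one bit, as a memory error might
print(check(corrupted, parity))   # False - error detected, but not correctable

ECC memory extends this idea with enough extra check bits to pinpoint, and therefore correct, a single flipped bit, which is why it needs more logic and costs more.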

Sunday, November 16, 2008

History and development of the Computer

Computers: History and Development

Nothing epitomizes modern life better than the computer. For better or worse, computers have infiltrated every aspect of our society. Today computers do much more than simply compute: supermarket scanners calculate our grocery bill while keeping store inventory; computerized telephone switching centers play traffic cop to millions of calls and keep lines of communication untangled; and automatic teller machines let us conduct banking transactions from virtually anywhere in the world. But where did all this technology come from, and where is it heading? To fully understand and appreciate the impact computers have on our lives and the promises they hold for the future, it is important to understand their evolution.

Early Computing Machines and Inventors

The abacus, which emerged about 5,000 years ago in Asia Minor and is still in use today, may be considered the first computer. This device allows users to make computations using a system of sliding beads arranged on a rack. Early merchants used the abacus to keep track of trading transactions. But as the use of paper and pencil spread, particularly in Europe, the abacus lost its importance. It took nearly 12 centuries, however, for the next significant advance in computing devices to emerge. In 1642, Blaise Pascal (1623-1662), the 18-year-old son of a French tax collector, invented what he called a numerical wheel calculator to help his father with his duties. This brass rectangular box, also called a Pascaline, used eight movable dials to add sums up to eight figures long. Pascal's device used a base of ten to accomplish this. For example, as one dial moved ten notches, or one complete revolution, it moved the next dial - which represented the tens column - one place. When the tens dial moved one revolution, the dial representing the hundreds place moved one notch, and so on. The drawback to the Pascaline, of course, was its limitation to addition.

In 1694, a German mathematician and philosopher, Gottfried Wilhelm von Leibniz (1646-1716), improved the Pascaline by creating a machine that could also multiply. Like its predecessor, Leibniz's mechanical multiplier worked by a system of gears and dials. Partly by studying Pascal's original notes and drawings, Leibniz was able to refine his machine. The centerpiece of the machine was its stepped-drum gear design, which offered an elongated version of the simple flat gear. It wasn't until 1820, however, that mechanical calculators gained widespread use. Charles Xavier Thomas de Colmar, a Frenchman, invented a machine that could perform the four basic arithmetic functions. Colmar's mechanical calculator, the arithmometer, presented a more practical approach to computing because it could add, subtract, multiply and divide. With its enhanced versatility, the arithmometer was widely used up until the First World War. Although later inventors refined Colmar's calculator, he, together with fellow inventors Pascal and Leibniz, helped define the age of mechanical computation.

The real beginnings of computers as we know them today, however, lay with an English mathematics professor, Charles Babbage (1791-1871). Frustrated at the many errors he found while examining calculations for the Royal Astronomical Society, Babbage declared, "I wish to God these calculations had been performed by steam!" With those words, the automation of computers had begun.
By 1812, Babbage had noticed a natural harmony between machines and mathematics: machines were best at performing tasks repeatedly without mistake, while mathematics, particularly the production of mathematical tables, often required the simple repetition of steps. The problem centered on applying the ability of machines to the needs of mathematics. Babbage's first attempt at solving this problem came in 1822, when he proposed a machine to perform differential equations, called a Difference Engine. Powered by steam and as large as a locomotive, the machine would have a stored program and could perform calculations and print the results automatically. After working on the Difference Engine for 10 years, Babbage was suddenly inspired to begin work on the first general-purpose computer, which he called the Analytical Engine.

Babbage's assistant, Augusta Ada King, Countess of Lovelace (1815-1852) and daughter of English poet Lord Byron, was instrumental in the machine's design. One of the few people who understood the Engine's design as well as Babbage, she helped revise plans, secure funding from the British government, and communicate the specifics of the Analytical Engine to the public. Lady Lovelace's fine understanding of the machine also allowed her to create the instruction routines to be fed into the computer, making her the first female computer programmer. In the 1980s, the U.S. Defense Department named a programming language Ada in her honor. Babbage's steam-powered Engine, although ultimately never constructed, may seem primitive by today's standards. However, it outlined the basic elements of a modern general-purpose computer and was a breakthrough concept. Consisting of over 50,000 components, the basic design of the Analytical Engine included input devices in the form of perforated cards containing operating instructions, and a "store" for memory of 1,000 numbers of up to 50 decimal digits each. It also contained a "mill" with a control unit that allowed processing instructions in any sequence, and output devices to produce printed results. Babbage borrowed the idea of punch cards to encode the machine's instructions from the Jacquard loom. The loom, produced in 1820 and named after its inventor, Joseph-Marie Jacquard, used punched boards that controlled the patterns to be woven.

In 1889, an American inventor, Herman Hollerith (1860-1929), also applied the Jacquard loom concept to computing. His first task was to find a faster way to compute the U.S. census. The previous census in 1880 had taken nearly seven years to count, and with an expanding population the bureau feared it would take 10 years to count the latest census. Unlike Babbage's idea of using perforated cards to instruct the machine, Hollerith's method used cards to store data, which he fed into a machine that compiled the results mechanically. Each punch on a card represented one number, and combinations of two punches represented one letter. As many as 80 variables could be stored on a single card. Instead of ten years, census takers compiled their results in just six weeks with Hollerith's machine. In addition to their speed, the punch cards served as a storage method for data and helped reduce computational errors. Hollerith brought his punch card reader into the business world, founding the Tabulating Machine Company in 1896, which later became International Business Machines (IBM) in 1924 after a series of mergers.
Other companies, such as Remington Rand and Burroughs, also manufactured punch readers for business use. Both business and government used punch cards for data processing until the 1960s. In the ensuing years, several engineers made other significant advances. Vannevar Bush (1890-1974) developed a calculator for solving differential equations in 1931. The machine could solve complex differential equations that had long left scientists and mathematicians baffled. The machine was cumbersome because hundreds of gears and shafts were required to represent numbers and their various relationships to each other. To eliminate this bulkiness, John V. Atanasoff (b. 1903), a professor at Iowa State College (now called Iowa State University), and his graduate student, Clifford Berry, envisioned an all-electronic computer that applied Boolean algebra to computer circuitry. This approach was based on the mid-19th-century work of George Boole (1815-1864), who clarified the binary system of algebra, which stated that any mathematical equation could be stated simply as either true or false. By extending this concept to electronic circuits in the form of on or off, Atanasoff and Berry had developed the first all-electronic computer by 1940. Their project, however, lost its funding, and their work was overshadowed by similar developments by other scientists.

Five Generations of Modern Computers

First Generation (1945-1956)

With the onset of the Second World War, governments sought to develop computers to exploit their potential strategic importance. This increased funding for computer development projects and hastened technical progress. By 1941 German engineer Konrad Zuse had developed a computer, the Z3, to design airplanes and missiles. The Allied forces, however, made greater strides in developing powerful computers. In 1943, the British completed a secret code-breaking computer called Colossus to decode German messages. Colossus's impact on the development of the computer industry was rather limited for two important reasons. First, Colossus was not a general-purpose computer; it was only designed to decode secret messages. Second, the existence of the machine was kept secret until decades after the war.

American efforts produced a broader achievement. Howard H. Aiken (1900-1973), a Harvard engineer working with IBM, succeeded in producing a large-scale automatic calculator by 1944. The purpose of the computer was to create ballistic charts for the U.S. Navy. It was about half as long as a football field and contained about 500 miles of wiring. The Harvard-IBM Automatic Sequence Controlled Calculator, or Mark I for short, was an electromechanical relay computer: it used electromagnetic signals to move mechanical parts. The machine was slow (taking 3-5 seconds per calculation) and inflexible (in that sequences of calculations could not change), but it could perform basic arithmetic as well as more complex equations.

Another computer development spurred by the war was the Electronic Numerical Integrator and Computer (ENIAC), produced by a partnership between the U.S. government and the University of Pennsylvania. Consisting of 18,000 vacuum tubes, 70,000 resistors and 5 million soldered joints, the computer was such a massive piece of machinery that it consumed 160 kilowatts of electrical power, enough energy to dim the lights in an entire section of Philadelphia. Developed by John Presper Eckert (1919-1995) and John W.
Mauchly (1907-1980), ENIAC, unlike the Colossus and Mark I, was a general-purpose computer that computed at speeds 1,000 times faster than Mark I.

In the mid-1940s John von Neumann (1903-1957) joined the University of Pennsylvania team, initiating concepts in computer design that remained central to computer engineering for the next 40 years. Von Neumann designed the Electronic Discrete Variable Automatic Computer (EDVAC) in 1945, with a memory to hold both a stored program and data. This "stored memory" technique, as well as the "conditional control transfer" that allowed the computer to be stopped at any point and then resumed, allowed for greater versatility in computer programming. The key element of the von Neumann architecture was the central processing unit, which allowed all computer functions to be coordinated through a single source. In 1951, the UNIVAC I (Universal Automatic Computer), built by Remington Rand, became one of the first commercially available computers to take advantage of these advances. Both the U.S. Census Bureau and General Electric owned UNIVACs. One of UNIVAC's impressive early achievements was predicting the winner of the 1952 presidential election, Dwight D. Eisenhower.

First generation computers were characterized by the fact that operating instructions were made to order for the specific task for which the computer was to be used. Each computer had a different binary-coded program, called a machine language, that told it how to operate. This made the computer difficult to program and limited its versatility and speed. Other distinctive features of first generation computers were the use of vacuum tubes (responsible for their breathtaking size) and magnetic drums for data storage.

Second Generation Computers (1956-1963)

By 1948, the invention of the transistor had greatly changed the computer's development. The transistor replaced the large, cumbersome vacuum tube in televisions, radios and computers. As a result, the size of electronic machinery has been shrinking ever since. The transistor was at work in the computer by 1956. Coupled with early advances in magnetic-core memory, transistors led to second generation computers that were smaller, faster, more reliable and more energy-efficient than their predecessors. The first large-scale machines to take advantage of this transistor technology were early supercomputers, Stretch by IBM and LARC by Sperry-Rand. These computers, both developed for atomic energy laboratories, could handle an enormous amount of data, a capability much in demand by atomic scientists. The machines were costly, however, and tended to be too powerful for the business sector's computing needs, thereby limiting their attractiveness. Only two LARCs were ever installed: one in the Lawrence Radiation Labs in Livermore, California, for which the computer was named (Livermore Atomic Research Computer), and the other at the U.S. Navy Research and Development Center in Washington, D.C. Second generation computers replaced machine language with assembly language, allowing abbreviated programming codes to replace long, difficult binary codes.

Throughout the early 1960s, there were a number of commercially successful second generation computers used in business, universities, and government, from companies such as Burroughs, Control Data, Honeywell, IBM, Sperry-Rand, and others. These second generation computers were also of solid-state design, and contained transistors in place of vacuum tubes.
They also contained all the components we associate with the modern-day computer: printers, tape storage, disk storage, memory, operating systems, and stored programs. One important example was the IBM 1401, which was universally accepted throughout industry and is considered by many to be the Model T of the computer industry. By 1965, most large businesses routinely processed financial information using second generation computers. It was the stored program and programming language that gave computers the flexibility to finally be cost-effective and productive for business use. The stored program concept meant that the instructions to run a computer for a specific function (known as a program) were held inside the computer's memory, and could quickly be replaced by a different set of instructions for a different function. A computer could print customer invoices and minutes later design products or calculate paychecks. More sophisticated high-level languages such as COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator) came into common use during this time, and remain in use to the current day. These languages replaced cryptic binary machine code with words, sentences, and mathematical formulas, making it much easier to program a computer. New types of careers (programmer, analyst, and computer systems expert) and the entire software industry began with second generation computers.

Third Generation Computers (1964-1971)

Though transistors were clearly an improvement over the vacuum tube, they still generated a great deal of heat, which damaged the computer's sensitive internal parts. The quartz rock eliminated this problem. Jack Kilby, an engineer with Texas Instruments, developed the integrated circuit (IC) in 1958. The IC combined three electronic components onto a small silicon disc, which was made from quartz. Scientists later managed to fit even more components on a single chip, called a semiconductor. As a result, computers became ever smaller as more components were squeezed onto the chip. Another third-generation development was the use of an operating system that allowed machines to run many different programs at once, with a central program that monitored and coordinated the computer's memory.

Fourth Generation (1971-Present)

After the integrated circuit, the only place to go was down - in size, that is. Large-scale integration (LSI) could fit hundreds of components onto one chip. By the 1980s, very large-scale integration (VLSI) squeezed hundreds of thousands of components onto a chip. Ultra-large-scale integration (ULSI) increased that number into the millions. The ability to fit so much onto an area about half the size of a U.S. dime helped diminish the size and price of computers. It also increased their power, efficiency and reliability. The Intel 4004 chip, developed in 1971, took the integrated circuit one step further by locating all the components of a computer (central processing unit, memory, and input and output controls) on a minuscule chip. Whereas previously the integrated circuit had had to be manufactured to fit a special purpose, now one microprocessor could be manufactured and then programmed to meet any number of demands. Soon everyday household items such as microwave ovens, television sets and automobiles with electronic fuel injection incorporated microprocessors. Such condensed power allowed everyday people to harness a computer's power; computers were no longer developed exclusively for large business or government contracts.
By the mid-1970s, computer manufacturers sought to bring computers to general consumers. These minicomputers came complete with user-friendly software packages that offered even non-technical users an array of applications, most popularly word processing and spreadsheet programs. Pioneers in this field were Commodore, Radio Shack and Apple Computers. In the early 1980s, arcade video games such as Pac-Man and home video game systems such as the Atari 2600 ignited consumer interest in more sophisticated, programmable home computers. In 1981, IBM introduced its personal computer (PC) for use in the home, office and schools. The 1980s saw an expansion in computer use in all three arenas as clones of the IBM PC made the personal computer even more affordable. The number of personal computers in use more than doubled from 2 million in 1981 to 5.5 million in 1982. Ten years later, 65 million PCs were in use. Computers continued their trend toward smaller sizes, working their way down from the desktop to laptop computers (which could fit inside a briefcase) to palmtops (able to fit inside a breast pocket). In direct competition with IBM's PC was Apple's Macintosh line, introduced in 1984. Notable for its user-friendly design, the Macintosh offered an operating system that allowed users to move screen icons instead of typing instructions. Users controlled the screen cursor using a mouse, a device that mimicked the movement of one's hand on the computer screen.

As computers became more widespread in the workplace, new ways to harness their potential developed. As smaller computers became more powerful, they could be linked together, or networked, to share memory space, software and information, and to communicate with each other. As opposed to a mainframe computer, which was one powerful computer that shared time with many terminals for many applications, networked computers allowed individual computers to form electronic co-ops. Using either direct wiring, called a Local Area Network (LAN), or telephone lines, these networks could reach enormous proportions. A global web of computer circuitry, the Internet, for example, links computers worldwide into a single network of information. During the 1992 U.S. presidential election, vice-presidential candidate Al Gore promised to make the development of this so-called "information superhighway" an administrative priority. Though the possibilities envisioned by Gore and others for such a large network are often years (if not decades) away from realization, the most popular use today for computer networks such as the Internet is electronic mail, or e-mail, which allows users to type in a computer address and send messages through networked terminals across the office or across the world.

Fifth Generation (Present and Beyond)

Defining the fifth generation of computers is somewhat difficult because the field is still in its infancy. The most famous example of a fifth generation computer is the fictional HAL 9000 from Arthur C. Clarke's novel 2001: A Space Odyssey. HAL performed all of the functions currently envisioned for real-life fifth generation computers. With artificial intelligence, HAL could reason well enough to hold conversations with its human operators, use visual input, and learn from its own experiences. (Unfortunately, HAL was a little too human and had a psychotic breakdown, commandeering a spaceship and killing most of the humans on board.) Though the wayward HAL 9000 may be far from the reach of real-life computer designers, many of its functions are not.
Using recent engineering advances, computers may be able to accept spoken-word instructions and imitate human reasoning. The ability to translate a foreign language is also a major goal of fifth generation computers. This feat seemed a simple objective at first, but appeared much more difficult once programmers realized that human understanding relies as much on context and meaning as it does on the simple translation of words. Many advances in the science of computer design and technology are coming together to enable the creation of fifth-generation computers. One such engineering advance is parallel processing, which replaces von Neumann's single central processing unit design with a system harnessing the power of many CPUs working as one. Another is superconductor technology, which allows the flow of electricity with little or no resistance, greatly improving the speed of information flow. Computers today have some attributes of fifth generation computers. For example, expert systems assist doctors in making diagnoses by applying the problem-solving steps a doctor might use in assessing a patient's needs. It will take several more years of development before expert systems are in widespread use.
