
Rules When Typing
Ergonomics is the study of your work environment and how you adapt to it. It takes into consideration your comfort level at your workstation. Invariably, individuals twist or strain to reach the keyboard, or sit in odd contortions that result in neck, back, or wrist pain. Some simple workstation modifications, posture awareness, and attention to your chair can prevent these problems.
  • Lower the height of the chair so that your back touches the back of the chair and you are comfortable.
  • Your feet should rest firmly on the floor, slightly in front of you.
  • Center your keyboard in front of your monitor. Your eyes should be at the same level as the toolbar.
  • Keep the keyboard and mouse close to the edge of the desk.
  • Position the keyboard and mouse so your arms fall naturally at your sides, with wrists straight out in front while typing or mousing.
  • Support your wrists and forearms with a gel pad or wrist support.
  • Avoid repetitive gripping of the mouse.
  • Keep frequently used items close; avoid reaching for anything!
  • Do wrist, finger, and hand exercises.
Sitting in a chair can place as much as 400 pounds of pressure on your lower back. If your back cannot support your body, the resulting strain affects other areas of your body as well, including your hands, arms, and wrists.
There are a number of steps you can take to reduce strain as you work. First, consider your desk posture. Be sure to sit with your back low against the back of your chair. You may need to roll up a towel or buy a lumbar roll to maintain the natural curve of your spine. Be sure the back of the head is lifted, the breastbone is lifted, and the lower back is supported. Your back should be angled backward a few degrees to widen the angle between the torso and the thighs: this increases blood flow plus reduces the compression of the spine.
Your arms should be relaxed and loose at your sides, with your forearms and hands parallel to the floor. The correct wrist and hand position should create a 90-degree angle and the wrists should not be flexed or extended, but rather should be in a neutral position. Keep your thighs at a right angle to your torso, and your knees at a right angle to your thighs.
Be sure to change your position frequently, and avoid using excessive force while typing at the keyboard. Over time, a heavy typing style can aggravate hand, wrist, or finger pain symptoms by placing joints and tissues under continual stress. Lastly, consider the use of ergonomic devices such as back supports, mouse wristpads, and keyboard gel wristpads (see illustrations at left). More detailed coverage of this topic is available in the video Say Goodbye to Wrist Pain, which is available through this web site.


For the proper positioning of the fingers when typing, go to:
http://www.customtyping.com/tutorials/kb/Correct_fingering.htm


PARTS OF A SYSTEM UNIT




Motherboard

The motherboard, or system board, is the main printed circuit board in an electronic device such as a microcomputer. The board contains expansion slots (sockets) that accept additional boards (expansion cards). In a microcomputer, the motherboard contains the microprocessor, the primary storage chips (or main memory cards), the buses, and all the chips used for controlling the peripherals.
Microprocessor
A microprocessor is a processor whose elements are miniaturized into one or a few integrated circuits contained in a single silicon chip. It executes instructions. In a microcomputer, the entire central processing unit (CPU) is held on a single microprocessor. To function as a processor, it requires a system clock, primary storage, and a power supply.
Several important lines of microcomputers are built around particular families of microprocessor chips. Intel and Motorola are the major companies producing the microprocessors used in IBM-compatible and Macintosh computers, respectively.
Microprocessor Capacity



The capacity of a microprocessor chip is expressed in word size. A word size is the number of bits (e.g., 8, 16, or 32) that the CPU can process at a time.
The more bits in a word, the more powerful and faster the CPU. For example, a 16-bit-word computer can access 2 bytes (1 byte = 8 bits) at a time, while a 32-bit-word computer can access 4 bytes at a time. Therefore, the 32-bit computer is faster than the 16-bit computer.
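This arithmetic can be sketched in a few lines of Python (purely illustrative; it simply restates the 1 byte = 8 bits relationship above):

    # Word size in bits -> how many bytes the CPU can access at a time.
    for word_bits in (8, 16, 32):
        print(f"{word_bits}-bit word = {word_bits // 8} byte(s) per access")
    # 8-bit word = 1 byte, 16-bit word = 2 bytes, 32-bit word = 4 bytes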
CISC and RISC Chips

Microprocessors are also commonly grouped into CISC (complex instruction set computer) designs, such as the Intel chips listed below, which provide large and varied instruction sets, and RISC (reduced instruction set computer) designs, which use a smaller set of simpler instructions that can execute faster.
Central Processing Unit (CPU)
The central processing unit (CPU) is the computing part of the computer that interprets and executes program instructions. It is also known as the processor. In a microcomputer, the CPU is contained on a single microprocessor chip within the system unit. The CPU has two parts: the control unit and the arithmetic-logic unit. Additional storage units called registers, located within the control unit and ALU, help make processing more efficient.


  • Control Unit: A control unit is the circuitry that locates, retrieves, interprets and executes each instruction in the central processing unit. The control unit directs electronic signals between primary storage and the ALU, and between the CPU and input/output devices.
  • Arithmetic-Logic Unit (ALU): The ALU is a high-speed circuit in the CPU. It performs arithmetic (math) operations, logic (comparison) operations, and related operations. The ALU retrieves alphanumeric data from memory, does the actual calculating and comparing, and sends the results of the operation back to memory. (A toy sketch of these operations follows this list.)
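To make the division of labour concrete, here is a minimal, purely illustrative Python model of the kinds of operations an ALU performs; the operation names (ADD, SUB, CMP, GT) are invented for this sketch and do not correspond to any particular chip:

    # Toy ALU: takes an operation code and two operands, returns the result
    # (which a real ALU would hand back to registers or memory).
    def alu(op, a, b):
        operations = {
            "ADD": lambda x, y: x + y,   # arithmetic operation
            "SUB": lambda x, y: x - y,   # arithmetic operation
            "CMP": lambda x, y: x == y,  # logic (comparison) operation
            "GT":  lambda x, y: x > y,   # logic (comparison) operation
        }
        return operations[op](a, b)

    print(alu("ADD", 2, 3))  # 5
    print(alu("GT", 2, 3))   # False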



CPU Models



IBM and IBM Compatibles
CPU No. (Word Size in Bits) | CPU Speed (MHz) | Bus Size (Bits)
8088 (16)                   | 5-10            | 8
8086 (16)                   | 6-12            | 16
80286 (16)                  | 6-16            | 16
80386DX (32)                | 16-40           | 32
80386SX (32)                | 16-25           | 16
80486DX (32)                | 25-66           | 32
80486SX (32)                | 16-25           | 32
Pentium (32)                | 60-166          | 32-64
Pentium Pro (32)            | 150-231         | 32-64

Macintosh
CPU No. (Word Size in Bits) | CPU Speed (MHz) | Bus Size (Bits)
68000 (32)                  | 8               | 16
68020 (32)                  | 16              | 32
68030 (32)                  | 16-40           | 32
68040 (32)                  | 132             |
601 ( )                     | 11              |
604 ( )                     | 11              |

Memory Chips



A memory chip is a chip that holds programs and data either temporarily or permanently. The major categories of memory chips are RAMs and ROMs.

RAM Chips


RAM stands for random-access memory. Random-access memory holds the data and instructions that the CPU is currently processing. Primary storage is this type of memory: a collection of RAM chips makes up primary storage.
Whenever the CPU writes data or instructions to RAM, it wipes out the previous contents of that location; when the CPU reads data or instructions from RAM, the contents are left intact.
ROM Chips
ROM stands for read-only memory. A ROM chip is a memory chip that stores instructions and data permanently. Its contents are placed into the ROM chip at the time of manufacture and cannot be modified by the user. A CPU can read and retrieve the instructions and data from the ROM chip, but it cannot change the contents in ROM.
ROM chips usually contain special instructions for computer operations such as ROM BIOS. The variations on the ROM chip are the following:

  • PROM (Programmable Read-Only Memory): A permanent storage device that becomes a read-only memory after it is written once by the customer rather than by the chip manufacturer. For example, a software producer can write instructions onto the PROM using special equipment. (A toy model of this write-once behaviour follows this list.)
  • EPROM (Erasable Programmable Read-Only Memory): EPROM is a reusable PROM chip that can be erased by a special ultraviolet light. EPROM holds its contents until erased, after which new instructions can be written to it.
  • EEPROM (Electrically Erasable Programmable Read-Only Memory): An EEPROM chip can be erased, either within a computer or externally, by electric power. The process usually requires more voltage than the common +5 volts used in logic circuits.
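The write-once behaviour of a PROM can be sketched in software; the class below is purely an analogy (nothing here models real chip hardware), with each cell programmable exactly once and readable forever after:

    # Illustrative model of a PROM: every cell may be written exactly once.
    class PROM:
        def __init__(self, size):
            self._cells = [None] * size   # unprogrammed cells

        def write(self, address, value):
            if self._cells[address] is not None:
                raise PermissionError("cell already programmed; PROM is now read-only")
            self._cells[address] = value

        def read(self, address):
            return self._cells[address]   # reads never change the contents

    rom = PROM(4)
    rom.write(0, 0xFF)    # the one-time programming step
    print(rom.read(0))    # 255 -- reads always succeed
    # rom.write(0, 0x00)  # a second write would raise PermissionError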
Primary Storage (Memory)
Primary storage (internal storage, main memory, or memory) is the computer's working storage space that holds data, instructions for processing, and processed data (information) waiting to be sent to secondary storage. Physically, primary storage is a collection of RAM chips.
The contents are held in primary storage only temporarily. Capacity varies with different computers. Data or instructions are stored in primary storage locations called addresses.
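The notion of addresses, together with the read/write behaviour described under RAM chips above, can be pictured with a simple array standing in for primary storage (illustrative only):

    # Primary storage modelled as a list; the list indexes play the role of addresses.
    memory = [0] * 8          # eight storage locations, addresses 0 through 7

    memory[3] = 42            # a write wipes out the previous contents of address 3
    value = memory[3]         # a read leaves the contents intact
    print(value, memory[3])   # 42 42 -- the data is still there after the read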
System Clock
The clock is a device that generates periodic, accurately spaced signals used for several purposes such as regulation of the operations of a processor or generation of interrupts. The clock circuit uses the fixed vibrations generated from a quartz crystal to deliver a steady stream of pulses to the processor. The system clock controls the speed of all the operations within a computer.
The clock speed is the internal speed of a computer and is expressed in megahertz (MHz); 33 MHz means 33 million cycles per second. The higher the clock speed, the faster the processor. For example, a 100-MHz processor is four times as fast internally as the same processor running at 25 MHz.
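The relationship is plain arithmetic; this small sketch just restates the figures from the paragraph above:

    # One clock cycle lasts 1/frequency seconds, so a higher clock speed
    # means proportionally more cycles (and more work) per second.
    for mhz in (25, 33, 100):
        print(f"{mhz} MHz -> {mhz * 1_000_000:,} cycles per second")
    # 100 MHz yields four times the cycles per second of 25 MHz.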
Expansion Slots/Boards
Open/Closed architectures
  • Open Architecture: This architecture is a system whose specifications are made public to encourage third-party vendors to develop add-on products for it. Most microcomputers adopt open architecture. They allow users to expand their systems using optional expansion boards.
  • Closed Architecture: This is a system whose technical specifications are not made public. With a machine that has closed architecture, users cannot easily add new peripherals.
Expansion Slots
Expansion slots are receptacles inside a system unit that printed circuit boards (expansion boards) are plugged into. Computer buyers need to look at the number of expansion slots when they buy a computer, because the number of expansion slots determines how far the system can be expanded later. In microcomputers, the expansion slots are directly connected to the bus.
Expansion Boards
Expansion boards are also called expansion cards, controller cards, plug-in boards, adapter cards, or interface cards. Expansion boards are printed circuit boards that carry many electronic components, including chips. They are plugged into expansion slots.
Expansion boards are connected to peripherals through ports located on the edge of the boards. Expansion boards include memory expansion cards (e.g., SIMMs), I/O controller cards (e.g., SCSI cards), video display cards, sound cards, communications cards, etc.
Ports
A port is an external connecting socket on the outside of the computer. It is a pathway into and out of the computer. A port lets users plug in outside peripherals, such as monitors, scanners, and printers.

Serial Ports


Serial ports are external I/O connectors used to attach modems, scanners or other serial interface devices to the computer. The typical serial ports use a 9-pin DB-9 or a 25-pin DB-25 connector. Serial ports transmit bits one after another on a single communications line. Serial lines frequently are used to link equipment that is not located close by.
Parallel Ports
Parallel ports are external I/O connectors on a computer used to hook up printers or other parallel interface devices. The parallel port uses a DB-25 connector. This port transmits several bits simultaneously. Parallel lines move information faster than serial lines do.
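A toy simulation may make the contrast clearer: serial transfer sends the bits of a byte one after another on a single line, while parallel transfer sends them simultaneously on several lines. This is only an illustration of the idea; real ports also involve voltage levels, timing, and handshaking:

    byte = 0b01001011  # one byte (8 bits) to transmit

    # Serial: a single line carries one bit per clock tick -- eight ticks per byte.
    serial_line = [(byte >> i) & 1 for i in range(8)]
    print("serial   (1 line,  8 ticks):", serial_line)

    # Parallel: eight lines carry all eight bits in a single tick.
    parallel_lines = tuple((byte >> i) & 1 for i in range(8))
    print("parallel (8 lines, 1 tick): ", parallel_lines)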
Buses
A bus is a data pathway between several hardware components inside or outside a computer. It not only connects the parts of the CPU to each other, but also links the CPU with other important hardware, including memory, a disk control unit, a terminal control unit, a printer control unit, and a communications control unit. The capacity of a bus is expressed in bits. A larger-capacity bus transfers data faster. For example, a 32-bit bus is faster than an 8-bit bus.

Three Main Bus Architectures

  • ISA (Industry Standard Architecture): ISA is pronounced i-suh. This is the original PC bus architecture. It includes the 8-bit (PC, XT) and 16-bit (AT) buses in the IBM personal computer series and compatibles. Now it refers specifically to the 16-bit AT bus.
  • MCA (Micro Channel Architecture): A 32-bit bus used in the IBM PS/2 series and other IBM models. This architecture supports multiprocessing, allowing several processors to work simultaneously. Micro Channel architecture is not compatible with PC bus architecture.
  • EISA (Extended Industry Standard Architecture): EISA is pronounced eesa. This is a bus standard for PCs that extends the AT bus (the ISA bus) architecture to a 32-bit bus. The architecture also allows more than one CPU to share the bus. The purpose of EISA is to extend and amend the old ISA standard so that all existing AT expansion boards can work in an EISA slot.
Local Buses
The performance of a microcomputer is often restrained by the relatively slow video cards and other peripherals, which cannot keep up with today's fast CPUs. A local bus reduces the performance gap between the high-speed microprocessors and slower hard disks, video boards and other peripherals.
There are two local-bus systems available today, each aiming to boost microcomputer performance for I/O-intensive tasks: the VL-Bus and the PCI local bus.

  • VL-Bus (VESA Local Bus): The VL-Bus specification was introduced by VESA (the Video Electronics Standards Association). The VL-Bus added peripheral components and connectors to the existing 486 motherboard local bus and reached the market first. Performance of the VL-Bus architecture declines sharply when supporting more than two devices, and the specification is limited to a 32-bit data path and 33-MHz operation. This design is vanishing.
  • PCI (Peripheral Component Interconnect): A PCI chip set adds a 64-bit-wide bus between the microprocessor and peripherals, offering a 64-bit data path and supporting speeds of 66 MHz. PCI can transfer data either 32 or 64 bits at a time. The architecture was developed by Intel, Compaq, DEC, IBM, and NCR. PCI technology incorporates a managing layer to route and manage data for efficient handling of high-speed data transfers between the microprocessor and peripherals.
    Its design goals are to produce a low-cost, high-performance interface and support future generations of peripherals. PCI provides excellent compatibility, higher throughput and automatic configuration of peripheral cards. PCI also has features such as expandability and plug-and-play flexibility.
  • Comparison: Both technologies employ the microprocessor's local bus instead of the system input/output bus to rapidly exchange data between the processor and peripherals. The VESA design reached the market first and is less expensive than PCI, but PCI is technically superior. A VL-Bus usually supports only two or three local-bus peripherals, while PCI can support up to 10. PCI uses fewer bus lines than VL-Bus, which should eventually make it cheaper to manufacture. PCI now dominates the market. (A back-of-the-envelope bandwidth comparison follows this list.)
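Using the widths and clock rates quoted above, theoretical peak transfer rates can be estimated as bus width in bytes times clock frequency. These are ceilings, not measured throughput, and the sketch is purely illustrative:

    # Peak rate (MB/s) = (bus width in bytes) x (clock in MHz), a theoretical ceiling.
    def peak_mb_per_s(width_bits, clock_mhz):
        return (width_bits // 8) * clock_mhz

    print("VL-Bus, 32-bit @ 33 MHz:", peak_mb_per_s(32, 33), "MB/s")  # 132 MB/s
    print("PCI,    64-bit @ 66 MHz:", peak_mb_per_s(64, 66), "MB/s")  # 528 MB/s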


...ALL ABOUT SOFTWARE...


SOFTWARE is a collection of computer programs and related data that provide the instructions telling a computer what to do and how to do it. In other words, software is a conceptual entity: a set of computer programs, procedures, and associated documentation concerned with the operation of a data processing system. We can also say software refers to one or more computer programs and data held in the storage of the computer for some purpose.


Major Types of Software

      Programming Software: This is one of the most commonly known and widely used forms of computer software. It comes in the form of tools that assist a programmer in writing computer programs, which are sets of logical instructions that make a computer system perform certain tasks. The tools that help programmers instruct a computer system include text editors, compilers, and interpreters. (A tiny example program follows this paragraph.)
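For instance, the short Python program below is a set of logical instructions of exactly this kind; it would be written in a text editor and run by an interpreter (the task it performs is arbitrary, chosen only for illustration):

    # A complete, if tiny, computer program: instructions followed in order.
    numbers = [3, 1, 4, 1, 5]
    total = sum(numbers)          # perform a task: add the values up
    print("The total is", total)  # report the result to the user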

       System Software: It helps in running the computer hardware and the computer system itself. System software includes operating systems, device drivers, servers, windowing systems, and utilities. System software lets an application programmer abstract away from the hardware, memory, and other internal complexities of a computer.

      Application Software: It enables end users to accomplish certain specific tasks. Business software, databases, and educational software are some forms of application software. Word processors and other programs dedicated to specialized user tasks are further examples of application software.

   ...Apart from these three basic types of software, there are some other well-known forms of computer software like inventory management software, ERP, utility software, accounting software and others. Take a look at some of them...


      Inventory Management Software: This type of software helps an organization in tracking its goods and materials on the basis of quality as well as quantity. Warehouse inventory management functions encompass the internal warehouse movements and storage. Inventory software helps a company in organizing inventory and optimizing the flow of goods in the organization, thus leading to an improved customer service.

      Utility Software: Also known as service routine, utility software helps in the management of computer hardware and application software. It performs a small range of tasks. Disk defragmenters, systems utilities and virus scanners are some of the typical examples of utility software.

      Data Backup and Recovery Software: Ideal data backup and recovery software provides functionality beyond simple copying of data files. It typically lets the user specify what is to be backed up and when, preserves the original organization of files, and allows easy retrieval of the backed-up data.
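As a loose illustration of those ideas (choosing what to back up, keeping the original organization), the sketch below copies a chosen folder tree into a timestamped backup directory. The paths in the usage comment are made up for the example:

    import shutil
    from datetime import datetime
    from pathlib import Path

    def back_up(source_dir, backup_root):
        """Copy source_dir into a new timestamped folder, keeping its layout."""
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        target = Path(backup_root) / f"backup-{stamp}"
        shutil.copytree(source_dir, target)  # preserves the directory structure
        return target

    # Hypothetical usage: back_up("my_documents", "backups")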


Importance of software...
                     Software lets us tell the computer what we want it to do and enhances the way we use computers. Without software we could not use the computer at all, and it could not perform any task. Nor would the computer be enjoyable to use without application software. Every kind of software has a job to do, and together they make the computer useful.





What is a Blog?

             Curiosity about the word "blog" is nothing to worry about. Many people of today's generation commonly use the internet. Instead of using old books, they prefer to find answers to their questions on the internet.
           Did you know that the information you get from the internet is often a blog made by people? Some people make blogs because they earn from them. Some may just want to help people learn about the things they are looking for. Or maybe making blogs is just one of their hobbies.

           Bloggers are the people who make blogs. Blogging may benefit them, but it can also put them in danger, so they should be careful about what they post. Blogs also have disadvantages, especially for readers: not all blogs created by bloggers are correct. Blogs are only people's suggestions. Unlike books, blogs and other information on the internet are not all approved by a body that checks published information.

  




COMPUTER HISTORY

First Generation (1940-1956) Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate and in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions.
First generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts.
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau in 1951.
Second Generation (1956-1963) Transistors
Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology.
The first computers of this generation were developed for the atomic energy industry.
Third Generation (1964-1971) Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
Fourth Generation (1971-Present) Microprocessors
The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer—from the central processing unit and memory to input/output controls—on a single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse, and handheld devices.
Fifth Generation (Present and Beyond) Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.

Early electronic digital computation


Friden paper tape punch. Punched tape programs would be much longer than the short fragment of yellow paper tape shown.
The era of modern computing began with a flurry of development before and during World War II, as electronic circuit elements replaced mechanical equivalents, and digital calculations replaced analog calculations. Machines such as the Z3, the Atanasoff–Berry Computer, the Colossus computers, and the ENIAC were built by hand using circuits containing relays or valves (vacuum tubes), and often used punched cards or punched paper tape for input and as the main (non-volatile) storage medium. Defining a single point in the series as the "first computer" misses many subtleties (see the table "Defining characteristics of some early digital computers of the 1940s" below).
Alan Turing's 1936 paper proved enormously influential in computing and computer science in two ways. Its main purpose was to prove that there were problems (namely the halting problem) that could not be solved by any sequential process. In doing so, Turing provided a definition of a universal computer which executes a program stored on tape. This construct came to be called a Turing machine. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.

Nine-track magnetic tape
For a computing machine to be a practical general-purpose computer, there must be some convenient read-write mechanism, punched tape, for example. With knowledge of Alan Turing's theoretical 'universal computing machine' John von Neumann defined an architecture which uses the same memory both to store programs and data: virtually all contemporary computers use this architecture (or some variant). While it is theoretically possible to implement a full computer entirely mechanically (as Babbage's design showed), electronics made possible the speed and later the miniaturization that characterize modern computers.
There were three parallel streams of computer development in the World War II era; the first stream largely ignored, and the second stream deliberately kept secret. The first was the German work of Konrad Zuse. The second was the secret development of the Colossus computers in the UK. Neither of these had much influence on the various computing projects in the United States. The third stream of computer development, Eckert and Mauchly's ENIAC and EDVAC, was widely publicized.
George Stibitz is internationally recognized as one of the fathers of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator that he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to calculate using binary form.


Zuse


A reproduction of Zuse's Z1 computer
Working in isolation in Germany, Konrad Zuse started construction in 1936 of his first Z-series calculators featuring memory and (initially limited) programmability. Zuse's purely mechanical, but already binary Z1, finished in 1938, never worked reliably due to problems with the precision of parts.
Zuse's later machine, the Z3, was finished in 1941. It was based on telephone relays and did work satisfactorily. The Z3 thus became the first functional program-controlled, all-purpose, digital computer. In many ways it was quite similar to modern machines, pioneering numerous advances, such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time.
Programs were fed into Z3 on punched films. Conditional jumps were missing, but since the 1990s it has been proved theoretically that Z3 was still a universal computer (as always, ignoring physical storage limitations). In two 1936 patent applications, Konrad Zuse also anticipated that machine instructions could be stored in the same storage used for data—the key insight of what became known as the von Neumann architecture, first implemented in the British SSEM of 1948. Zuse also claimed to have designed the first higher-level programming language, which he named Plankalkül, in 1945 (published in 1948) although it was implemented for the first time in 2000 by a team around Raúl Rojas at the Free University of Berlin—five years after Zuse died.
Zuse suffered setbacks during World War II when some of his machines were destroyed in the course of Allied bombing campaigns. Apparently his work remained largely unknown to engineers in the UK and US until much later, although at least IBM was aware of it as it financed his post-war startup company in 1946 in return for an option on Zuse's patents.

Colossus


Colossus was used to break German ciphers during World War II.
During World War II, the British at Bletchley Park (40 miles north of London) achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was attacked with the help of electro-mechanical machines called bombes. The bombe, designed by Alan Turing and Gordon Welchman, after the Polish cryptographic bomba by Marian Rejewski (1938), came into productive use in 1941. They ruled out possible Enigma settings by performing chains of logical deductions implemented electrically. Most possibilities led to a contradiction, and the few remaining could be tested by hand.
The Germans also developed a series of teleprinter encryption systems, quite different from Enigma. The Lorenz SZ 40/42 machine was used for high-level Army communications, termed "Tunny" by the British. The first intercepts of Lorenz messages began in 1941. As part of an attack on Tunny, Professor Max Newman and his colleagues helped specify the Colossus. The Mk I Colossus was built between March and December 1943 by Tommy Flowers and his colleagues at the Post Office Research Station at Dollis Hill in London and then shipped to Bletchley Park in January 1944.
Colossus was the world's first electronic programmable computing device. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). Details of their existence, design, and use were kept secret well into the 1970s. Winston Churchill personally issued an order for their destruction into pieces no larger than a man's hand, to keep secret that the British were capable of cracking Lorenz during the oncoming cold war. As a result the machines were not included in many histories of computing. A reconstructed copy of one of the Colossus machines is now on display at Bletchley Park.


American developments

In 1937, Claude Shannon showed there is a one-to-one correspondence between the concepts of Boolean logic and certain electrical circuits, now called logic gates, which are now ubiquitous in digital computers. In his master's thesis at MIT, for the first time in history, Shannon showed that electronic relays and switches can realize the expressions of Boolean algebra. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design. George Stibitz completed a relay-based computer he dubbed the "Model K" at Bell Labs in November 1937. Bell Labs authorized a full research program in late 1938 with Stibitz at the helm. Their Complex Number Calculator,[48] completed January 8, 1940, was able to calculate complex numbers. In a demonstration to the American Mathematical Society conference at Dartmouth College on September 11, 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by a teletype. It was the first computing machine ever used remotely, in this case over a phone line. Some participants in the conference who witnessed the demonstration were John von Neumann, John Mauchly, and Norbert Wiener, who wrote about it in their memoirs.

Atanasoff–Berry Computer replica at 1st floor of Durham Center, Iowa State University
In 1939, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed the Atanasoff–Berry Computer (ABC), the world's first electronic digital computer. The design used over 300 vacuum tubes and employed capacitors fixed in a mechanically rotating drum for memory. Though the ABC machine was not programmable, it was the first to use electronic tubes in an adder. ENIAC co-inventor John Mauchly examined the ABC in June 1941, and its influence on the design of the later ENIAC machine is a matter of contention among computer historians. The ABC was largely forgotten until it became the focus of the lawsuit Honeywell v. Sperry Rand, the ruling of which invalidated the ENIAC patent (and several others) as, among many reasons, having been anticipated by Atanasoff's work.
In 1939, development began at IBM's Endicott laboratories on the Harvard Mark I. Known officially as the Automatic Sequence Controlled Calculator, the Mark I was a general purpose electro-mechanical computer built with IBM financing and with assistance from IBM personnel, under the direction of Harvard mathematician Howard Aiken. Its design was influenced by Babbage's Analytical Engine, using decimal arithmetic and storage wheels and rotary switches in addition to electromagnetic relays. It was programmable via punched paper tape, and contained several calculation units working in parallel. Later versions contained several paper tape readers and the machine could switch between readers based on a condition. Nevertheless, the machine was not quite Turing-complete. The Mark I was moved to Harvard University and began operation in May 1944.


ENIAC


ENIAC performed ballistics trajectory calculations with 160 kW of power
The US-built ENIAC (Electronic Numerical Integrator and Computer) was the first electronic general-purpose computer. It combined, for the first time, the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine (Colossus could not add). It also had modules to multiply, divide, and take square roots. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, and contained over 18,000 vacuum tubes. One of the major engineering feats was to minimize tube burnout, which was a common problem at that time. The machine was in almost constant use for the next ten years.
ENIAC was unambiguously a Turing-complete device. It could compute any problem (that would fit in memory). A "program" on the ENIAC, however, was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that evolved from it. Once a program was written, it had to be mechanically set into the machine. Six women did most of the programming of ENIAC. (Improvements completed in 1948 made it possible to execute stored programs set in function table memory, which made programming less a "one-off" effort, and more systematic).
