Sunday, December 30, 2007
Core
When an enterprise is widely distributed with no central location(s), the function of core routing may be subsumed by the WAN service to which the enterprise subscribes, and the distribution routers become the highest tier.
Distribution
Distribution routers may also provide connectivity to groups of servers or to external networks. In the latter application, the router's functionality must be carefully considered as part of the overall security architecture. A firewall or VPN concentrator may be separate from the router, or the router may include these and other security functions.
When an enterprise is primarily on one campus, there may not be a distinct distribution tier, other than perhaps off-campus access. In such cases, the access routers, connected to LANs, interconnect via core routers.
Access
Enterprise Routers
A three-layer model is in common use, though not all tiers need be present in smaller networks.
Small and Home Office (SOHO) connectivity
While functionally similar to routers, residential gateways use network address translation (NAT) instead of routing. Rather than connecting local computers to the remote network directly, a residential gateway makes them appear to be a single computer.
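To make the idea concrete, here is a minimal Python sketch of the port-mapping table behind NAT. The addresses, port range, and class name are invented for illustration; a real residential gateway performs this rewriting in firmware for every packet.

```python
# Minimal sketch of NAT port mapping: many private hosts share one public IP.
# The addresses and starting port are illustrative, not a real implementation.
class NatTable:
    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.outgoing = {}   # (private_ip, private_port) -> public_port
        self.incoming = {}   # public_port -> (private_ip, private_port)

    def outbound(self, private_ip, private_port):
        """Rewrite an outgoing connection to the shared public address."""
        key = (private_ip, private_port)
        if key not in self.outgoing:
            self.outgoing[key] = self.next_port
            self.incoming[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.outgoing[key]

    def inbound(self, public_port):
        """Send a reply arriving on the public port back to the right host."""
        return self.incoming.get(public_port)

nat = NatTable("203.0.113.5")
print(nat.outbound("192.168.1.10", 51000))  # ('203.0.113.5', 40000)
print(nat.inbound(40000))                   # ('192.168.1.10', 51000)
```

From the outside, every connection appears to originate from the one public address, which is exactly the "single computer" illusion described above.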
Routers for Internet connectivity and internal use
Provider Edge Router: Placed at the edge of an ISP network, it speaks external BGP (eBGP) to a BGP speaker in another provider or large enterprise Autonomous System (AS).
Subscriber Edge Router: Located at the edge of the subscriber's network, it speaks eBGP to its provider's AS(s). It belongs to an end user (enterprise) organization.
Inter-provider Border Router: Interconnecting ISPs, this is a BGP speaking router that maintains BGP sessions with other BGP speaking routers in other providers' ASes.
Core router: A router that resides within the middle or backbone of the network rather than at its periphery.
Within an ISP: Internal to the provider's AS, such a router speaks internal BGP (iBGP) to that provider's edge routers, other intra-provider core routers, or the provider's inter-provider border routers.
"Internet backbone:" The Internet does not have a clearly identifiable backbone, as did its predecessors. See default-free zone (DFZ). Nevertheless, it is the major ISPs' routers that make up what many would consider the core. These ISPs operate all four types of the BGP-speaking routers described here. In ISP usage, a "core" router is internal to an ISP, and used to interconnect its edge and border routers. Core routers may also have specialized functions in virtual private networks based on a combination of BGP and Multi-Protocol Label Switching (MPLS).
Types of routers
Disadvantages of network bridges
1- Does not scale to extremely large networks
2- Buffering introduces store-and-forward delays; on average, traffic destined for the bridge grows with the number of stations on the rest of the LAN
3- Bridging of different MAC protocols introduces errors
4- Because bridges do more than repeaters by examining MAC addresses, the extra processing makes them slower than repeaters
5- Bridges are more expensive than repeaters
source:wikipedia
Advantages of network bridges
1- Self-configuring
2- Primitive bridges are often inexpensive
3- Reduce the size of the collision domain by microsegmentation in non-switched networks
4- Transparent to protocols above the MAC layer
5- Allow the introduction of management features such as performance information and access control
6- The interconnected LANs remain separate, so physical constraints such as number of stations, repeaters, and segment length don't apply
source:wikipedia
Bridging versus routing
When designing a network, you can choose to put multiple segments into one bridged network or to divide it into different networks interconnected by routers. If a host is physically moved from one network area to another in a routed network, it has to get a new IP address; if the same host is moved within a bridged network, it doesn't have to reconfigure anything.
Transparent bridging and Source route bridging
Source route bridging – With source route bridging, two frame types are used to find the route to the destination network segment. Single-Route (SR) frames make up most of the network traffic and have set destinations, while All-Route (AR) frames are used to find routes. Bridges send AR frames by broadcasting on all network branches; each step of the followed route is registered by the bridge performing it. Each frame has a maximum hop count, which is set to be greater than the diameter of the network graph and is decremented by each bridge. Frames are dropped when this hop count reaches zero, to avoid indefinite looping of AR frames. The first AR frame to reach its destination is considered to have followed the best route, and that route can be used for subsequent SR frames; the other AR frames are discarded.

This method of locating a destination network allows for indirect load balancing among multiple bridges connecting two networks. The more loaded a bridge is, the less likely it is to take part in the route-finding process for a new destination, since it will be slow to forward packets; a new AR packet will find a different route over a less busy path if one exists. This is very different from transparent bridging, where redundant bridges are inactivated; however, more overhead is introduced to find routes, and space is wasted storing them in frames. A switch with a faster backplane can be just as good for performance, if not for fault tolerance.
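The route-discovery flooding lends itself to a toy simulation. The bridge graph, hop limit, and breadth-first ordering below are assumptions for illustration only; in a real network the AR frames propagate in parallel, so the first arrival reflects actual link and bridge load rather than queue order.

```python
from collections import deque

def discover_route(graph, src, dst, max_hops):
    """Flood all-route (AR) frames; the first frame to reach dst wins."""
    frames = deque([[src]])              # each entry: the route a frame took
    while frames:
        route = frames.popleft()
        node = route[-1]
        if node == dst:
            return route                 # first arrival: the "best" route
        if len(route) - 1 >= max_hops:
            continue                     # hop count exhausted: drop the frame
        for neighbour in graph[node]:
            if neighbour not in route:   # never revisit a segment: no loops
                frames.append(route + [neighbour])
    return None

# A made-up topology of four segments with two redundant paths from A to D.
segments = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(discover_route(segments, "A", "D", max_hops=4))  # ['A', 'B', 'D']
```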
Network Bridge
Since bridging takes place at the data link layer of the OSI model, a bridge processes the information from each frame of data it receives. In an Ethernet frame, the frame header provides the MAC addresses of the frame's source and destination.
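A minimal sketch of what the bridge does with those two MAC addresses might look like the following; the port numbers and shortened address strings are invented for illustration.

```python
table = {}   # MAC address -> port it was last seen on (the forwarding table)

def handle_frame(in_port, src_mac, dst_mac, ports):
    table[src_mac] = in_port                    # learn where the source lives
    out = table.get(dst_mac)
    if out is not None and out != in_port:
        return [out]                            # known destination: one port
    return [p for p in ports if p != in_port]   # unknown: flood the rest

print(handle_frame(1, "aa:aa", "bb:bb", ports=[1, 2, 3]))  # flood: [2, 3]
print(handle_frame(2, "bb:bb", "aa:aa", ports=[1, 2, 3]))  # learned: [1]
```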
Router
A router may create or maintain a table of the available routes and their conditions and use this information, along with distance and cost algorithms, to determine the best route for a given packet. Typically, a packet travels through a number of network points, each with a router, before arriving at its destination. Routing is a function associated with the Network layer (layer 3) in the standard model of network programming, the Open Systems Interconnection (OSI) model. A layer-3 switch is a switch that can perform routing functions.
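The heart of that table lookup is longest-prefix matching: the most specific route that contains the destination wins. Here is a minimal sketch using Python's standard ipaddress module; the routes and next-hop names are invented, and a real router would also weigh the distance and cost metrics mentioned above.

```python
import ipaddress

routes = {   # destination prefix -> next hop (an invented routing table)
    ipaddress.ip_network("0.0.0.0/0"): "isp-uplink",      # default route
    ipaddress.ip_network("10.0.0.0/8"): "core-1",
    ipaddress.ip_network("10.1.0.0/16"): "distribution-3",
}

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)    # most specific wins
    return routes[best]

print(next_hop("10.1.2.3"))    # distribution-3 (the /16 beats the /8)
print(next_hop("192.0.2.9"))   # isp-uplink (only the default route matches)
```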
Thursday, December 27, 2007
Switch VS Hub
When a hub receives a packet (chunk) of data (a frame in Ethernet lingo) at one of its ports from a PC on the network, it transmits (repeats) the packet to all of its ports and, thus, to all of the other PCs on the network. If two or more PCs on the network try to send packets at the same time, a collision is said to occur. When that happens, all of the PCs have to go through a routine to resolve the conflict. The process is prescribed in the Ethernet Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol. Each Ethernet adapter has both a receiver and a transmitter. If the adapters didn't have to listen with their receivers for collisions, they would be able to send data at the same time they are receiving it (full duplex). Because they have to operate at half duplex (data flows one way at a time) and a hub retransmits data from one PC to all of the PCs, the maximum bandwidth is 100 Mbps, and that bandwidth is shared by all of the PCs connected to the hub. The result is that when a person using a computer on a hub downloads a large file or group of files from another computer, the network becomes congested. In a 10 Mbps 10BASE-T network the effect is to slow the network to nearly a crawl. The effect on a small, 100 Mbps (million bits per second), 5-port network is not as significant.
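The conflict-resolving routine CSMA/CD prescribes is truncated binary exponential backoff: after each successive collision, an adapter waits a random number of slot times drawn from a doubling range. A small sketch of that calculation, using the classic 10 Mbps figures:

```python
import random

SLOT_TIME_US = 51.2   # one slot time (512 bit times) on 10 Mbps Ethernet

def backoff_delay(collision_count):
    """After the nth collision, wait a random number of slot times in
    [0, 2**min(n, 10) - 1]; after 16 collisions the frame is dropped."""
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    slots = random.randrange(2 ** min(collision_count, 10))
    return slots * SLOT_TIME_US

for attempt in range(1, 4):
    print(f"collision {attempt}: wait {backoff_delay(attempt):.1f} us")
```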
Two computers can be connected directly together in an Ethernet with a crossover cable. A crossover cable doesn't have a collision problem. It hardwires the Ethernet transmitter on one computer to the receiver on the other. Most 100BASE-TX Ethernet adapters can detect when listening for collisions is not required, with a process known as auto-negotiation, and will operate in full-duplex mode when it is permitted. The result is that a crossover cable doesn't have delays caused by collisions, data can be sent in both directions simultaneously, the maximum available bandwidth is 200 Mbps (100 Mbps each way), and there are no other PCs with which the bandwidth must be shared.
An Ethernet switch automatically divides the network into multiple segments, acts as a high-speed, selective bridge between the segments, and supports simultaneous connections of multiple pairs of computers which don't compete with other pairs of computers for network bandwidth. It accomplishes this by maintaining a table of each destination address and its port. When the switch receives a packet, it reads the destination address from the header information in the packet, establishes a temporary connection between the source and destination ports, sends the packet on its way, and then terminates the connection.
Picture a switch as making multiple temporary crossover cable connections between pairs of computers (the cables are actually straight-thru cables; the crossover function is done inside the switch). High-speed electronics in the switch automatically connect the end of one cable (source port) from a sending computer to the end of another cable (destination port) going to the receiving computer on a per-packet basis. Multiple connections like this can occur simultaneously. It's as simple as that. And like a crossover cable between two PCs, PCs on an Ethernet switch do not share the transmission media, do not experience collisions or have to listen for them, can operate in full-duplex mode, have bandwidth as high as 200 Mbps (100 Mbps each way), and do not share this bandwidth with other PCs on the switch. In short, a switch is "more better."
Network Hub
Hubs often come with a BNC and/or AUI connector to allow connection to legacy 10BASE2 or 10BASE5 network segments. The availability of low-priced network switches has largely rendered hubs obsolete, but they are still seen in older installations and more specialized applications.
Network Switch
A network switch is a computer networking device that connects network segments.
Low-end network switches appear nearly identical to network hubs, but a switch contains more "intelligence" (and comes with a correspondingly slightly higher price tag) than a network hub. Network switches are capable of inspecting data packets as they are received, determining the source and destination device of that packet, and forwarding it appropriately. By delivering each message only to the connected device it was intended for, a network switch conserves network bandwidth and offers generally better performance than a hub.
Network Interface Card (NIC)
Wednesday, December 12, 2007
HYBRID TOPOLOGY
o Star-Bus
o Star-Ring
Star-Bus
A star-bus topology is multiple star networks connected to each other via a bus connection. All of the computers in each star network connect to a central device, either a hub or a switch, and the bus cabling connects those hubs or switches so that the star topologies can reach each other.
One of the advantages of this type of hybrid topology is that if one computer fails, the rest of the computers in its star won't be affected. This network is very easy to set up, and it is very easy to add more networks to it. Adding on other networks can also cause problems: if the bus cabling is just Ethernet cable and a lot of computers are sharing information, the network can become slow and files can become corrupt or even be lost. Another problem arises if the central hub or switch fails: that star network will no longer be able to communicate.
Star-Ring
A star-ring topology is multiple star networks wired to a ring connection. Again, each node within a star in this network is connected to either a hub or a switch. If a computer goes down, the network stays alive, whereas in a normal ring topology the whole network would go down. Because token passing is used in a ring topology, connecting more computers does not slow the network down, which allows greater traffic to be sent around the network at any one time.
RING TOPOLOGY
BUS TOPOLOGY
All devices are connected to a central cable, called the bus or backbone.
source: webopedia
STAR TOPOLOGY
All devices are connected to a central hub. Nodes communicate across the network by passing data through the hub.
source:webopedia
Network topologies
The physical layout of a network is called its topology. The topology also determines how the network devices communicate with each other.
Examples are: Star topology, Bus topology, Ring topology, Hybrid topology, etc.
Tuesday, December 11, 2007
WAN (Wide Area Network)
MAN (Metropolitan Area Network)
This network type is designed for a town or city. If we connect all the LANs in a city or town to each other, the resulting network is called a MAN (Metropolitan Area Network).
LAN (Local Area Network)
A network within a single building or office is called a LAN. In fact, the computers are geographically close together. Examples: a school or college lab network, a single office network, etc.
Types of computer Networks
1- LAN (Local Area Network)
2- MAN (Metropolitan Area Network)
3- WAN (Wide Area Network)
Computer Network
Monday, December 10, 2007
External security
At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and run an insecure service, such as Telnet or FTP, without being threatened by a security breach, because the firewall denies all traffic trying to connect to the service on that port.
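The per-packet decision such a firewall makes boils down to matching a rule list. A minimal sketch, with an invented rule set and a default-deny policy:

```python
RULES = [                              # first matching rule wins
    {"port": 23, "action": "deny"},    # Telnet: block the insecure service
    {"port": 21, "action": "deny"},    # FTP control channel
    {"port": 443, "action": "allow"},  # HTTPS
]
DEFAULT_ACTION = "deny"                # anything unlisted is refused

def filter_packet(dst_port):
    for rule in RULES:
        if rule["port"] == dst_port:
            return rule["action"]
    return DEFAULT_ACTION

print(filter_packet(23))    # deny  -> Telnet can run but is unreachable
print(filter_packet(443))   # allow
```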
Internal security
An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to either emulate a processor or provide a host for a p-code-based system such as Java.
Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, inclusive of bypassing auditing.
Security
The operating system provides access to a number of resources, directly or indirectly, such as files on a local disk, privileged system calls, personal information about users, and the services offered by the programs running on the system.
The operating system is capable of distinguishing between some requesters of these resources who are authorized (allowed) to access the resource, and others who are not authorized (forbidden). While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. Requesters, in turn, divide into two categories:
Internal security: an already running program. On some systems, a program has no limitations once it is running, but commonly it has an identity, which it keeps and which is used to check all of its requests for resources.
External security: a new request from outside the computer, such as a login at a connected console or some kind of network connection. To establish identity there may be a process of authentication. Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all.
In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?").
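A toy sketch of the allow/deny check paired with such an audit trail follows; the access-control list and file names are invented, and a real operating system keeps these records in kernel data structures rather than Python objects.

```python
acl = {"payroll.txt": {"alice"}}   # file -> the set of authorized user names
audit_log = []                     # every request is recorded, granted or not

def read_file(user, filename):
    allowed = user in acl.get(filename, set())
    audit_log.append((user, filename, "granted" if allowed else "refused"))
    if not allowed:
        raise PermissionError(f"{user} may not read {filename}")
    return f"contents of {filename}"

read_file("alice", "payroll.txt")
try:
    read_file("mallory", "payroll.txt")
except PermissionError:
    pass
print(audit_log)   # answers "who has been reading this file?"
```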
Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select computer systems being considered for the processing, storage and retrieval of sensitive or classified information.
Networking
Many operating systems also support one or more vendor-specific legacy networking protocols, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols on Windows. Specific protocols for specific tasks may also be supported, such as NFS for file access.
Disk and file systems
Unix demarcates its path components with a slash (/), a convention followed by operating systems that emulated it or at least its concept of hierarchical directories, such as Linux, Amiga OS and Mac OS X. MS-DOS also emulated this feature, but had already also adopted the CP/M convention of using slashes for additional options to commands, so instead used the backslash (\) as its component separator. Microsoft Windows continues with this convention; Japanese editions of Windows use ¥, and Korean editions use ₩.[1] Versions of Mac OS prior to OS X use a colon (:) for a path separator. RISC OS uses a period (.).
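A quick illustration of the same hierarchical path split under each separator convention (the directory names are arbitrary):

```python
# One logical path, four separator conventions from the paragraph above.
paths = {
    "Unix / Linux / Mac OS X": ("/", "usr/local/bin"),
    "MS-DOS / Windows":        ("\\", "usr\\local\\bin"),
    "Classic Mac OS":          (":", "usr:local:bin"),
    "RISC OS":                 (".", "usr.local.bin"),
}
for system, (sep, path) in paths.items():
    print(f"{system:24} separator {sep!r:6} components {path.split(sep)}")
```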
Unix and Unix-like operating systems allow for any character in file names other than the slash and NUL characters (including line feed (LF) and other control characters). Unix file names are case sensitive, which allows multiple files to be created with names that differ only in case. By contrast, Microsoft Windows file names are not case sensitive by default. Windows also has a larger set of punctuation characters that are not allowed in file names.
File systems may provide journaling, which provides safe recovery in the event of a system crash. A journaled file system writes information twice: first to the journal, which is a log of file system operations, then to its proper place in the ordinary file system. In the event of a crash, the system can recover to a consistent state by replaying a portion of the journal. In contrast, non-journaled file systems typically need to be examined in their entirety by a utility such as fsck or chkdsk. Soft updates is an alternative to journaling that avoids the redundant writes by carefully ordering the update operations. Log-structured file systems and ZFS also differ from traditional journaled file systems in that they avoid inconsistencies by always writing new copies of the data, eschewing in-place updates.
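A toy model shows why replaying the journal restores consistency; the "disk", record format, and simulated crash point are invented for illustration.

```python
journal = []   # the log of file system operations, written first
disk = {}      # the "proper place" in the ordinary file system

def write_block(block, data):
    journal.append((block, data))   # step 1: log the intended operation
    disk[block] = data              # step 2: update the real location

def recover():
    """After a crash, replay the journal so every logged write reaches disk."""
    for block, data in journal:
        disk[block] = data

write_block("inode-7", "new size")
journal.append(("inode-8", "new name"))   # crash between step 1 and step 2
recover()                                 # replay completes the lost update
print(disk)   # both writes present: a consistent state, no full fsck needed
```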
Many Linux distributions support some or all of ext2, ext3, ReiserFS, Reiser4, GFS, GFS2, OCFS, OCFS2, and NILFS. Linux also has full support for XFS and JFS, along with the FAT file systems, and NTFS.
Microsoft Windows includes support for FAT12, FAT16, FAT32, and NTFS. The NTFS file system is the most efficient and reliable of the four Windows file systems, and as of Windows Vista, is the only file system which the operating system can be installed on. Windows Embedded CE 6.0 introduced exFAT, a file system suitable for flash drives.
Mac OS X supports HFS+ as its primary file system, and it supports several other file systems as well, including FAT16, FAT32, NTFS and ZFS.
Common to all these (and other) operating systems is support for file systems typically found on removable media. FAT12 is the file system most commonly found on floppy disks. ISO 9660 and Universal Disk Format are two common formats that target Compact Discs and DVDs, respectively. Mount Rainier is a newer extension to UDF supported by Linux 2.6 kernels and Windows Vista that facilitates rewriting to DVDs in the same fashion as what has been possible with floppy disks.
Memory management
Another important part of memory management is managing virtual addresses. If multiple processes are in memory at once, they must be prevented from interfering with each other's memory (unless there is an explicit request to utilise shared memory). This is achieved by having separate address spaces. Each process sees the whole virtual address space, typically from address 0 up to the maximum size of virtual memory, as uniquely assigned to it. The operating system maintains a page table that maps virtual addresses to physical addresses. These memory allocations are tracked so that when a process terminates, all memory used by that process can be made available for other processes.
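A minimal sketch of one such translation, assuming 4 KiB pages and invented frame numbers:

```python
PAGE_SIZE = 4096   # 4 KiB pages, a common choice

page_table = {0: 7, 1: 3, 2: 9}   # virtual page number -> physical frame

def translate(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError("page fault")   # the OS would handle this, e.g. by paging in
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))   # virtual page 1 -> frame 3: 0x3abc
```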
The operating system can also write inactive memory pages to secondary storage. This process is called "paging" or "swapping" – the terminology varies between operating systems.
It is also typical for operating systems to employ otherwise unused physical memory as a page cache; requests for data from a slower device can be retained in memory to improve performance. The operating system can also pre-load the in-memory cache with data that may be requested by the user in the near future; SuperFetch is an example of this.
Process management
Most operating systems enable concurrent execution of many processes and programs at once via multitasking, even with one CPU. This mechanism has been used in mainframes since the early 1960s, but it became available in personal computers only in the 1990s. Process management is an operating system's way of dealing with running those multiple processes. On the most fundamental of computers (those containing one processor with one core), multitasking is done by simply switching processes quickly. Depending on the operating system, as more processes run, either each time slice will become smaller or there will be a longer delay before each process is given a chance to run. Process management involves computing and distributing CPU time as well as other resources. Most operating systems allow a process to be assigned a priority which affects its allocation of CPU time. Interactive operating systems also employ some level of feedback in which the task with which the user is working receives higher priority. Interrupt-driven processes will normally run at a very high priority. In many systems there is a background process, such as the System Idle Process in Windows, which will run when no other process is waiting for the CPU.
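A sketch of that time slicing with a simple priority weighting; the process list, slice length, and weighting rule are invented for illustration.

```python
from collections import deque

def schedule(processes, base_slice=2):
    ready = deque(processes)   # each entry: (name, work_left, priority)
    while ready:
        name, work, priority = ready.popleft()
        slice_len = min(work, base_slice * priority)   # higher priority, more CPU
        work -= slice_len
        print(f"{name} ran for {slice_len} units, {work} left")
        if work > 0:
            ready.append((name, work, priority))       # preempted: requeue

schedule([("editor", 4, 2), ("backup", 5, 1)])
```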
Services of an Operating System
1- Process management
2- Memory management
3- Disk and file systems
4- Networking
5- Security
6- Internal security
7- External security
source:wikipedia
Definition of an Operating System (OS)
An operating system (OS) is any software that can boot (start up) a computer and manage its functions. Operating systems come in various forms to meet different requirements. An operating system is an interface between user and hardware.
source:wikipedia
Operating System (OS)
The most commonly used contemporary desktop OS is Microsoft Windows, with Mac OS X also being well-known. Linux, GNU, and the BSDs are popular Unix-like systems.
Types Of Computer Softwares
System software helps run the computer hardware and computer system. It includes operating systems, device drivers, diagnostic tools, servers, windowing systems, utilities and more. The purpose of systems software is to insulate the applications programmer as much as possible from the details of the particular computer being used, especially memory and other hardware features, and accessory devices such as communications, printers, readers, displays, and keyboards.
Programming software usually provides tools to assist a programmer in writing computer programs and software using different programming languages in a more convenient way. The tools include text editors, compilers, interpreters, linkers, debuggers, and so on. An integrated development environment (IDE) merges those tools into a software bundle, and a programmer may not need to type multiple commands for compiling, interpreting, debugging, and tracing, because the IDE usually has an advanced graphical user interface, or GUI.
Application software allows end users to accomplish one or more specific (non-computer related) tasks. Typical applications include industrial automation, business software, educational software, medical software, databases, and computer games. Businesses are probably the biggest users of application software, but almost every field of human activity now uses some form of application software.
Computer Software
Saturday, December 8, 2007
Micro Processor
A microprocessor is a programmable digital electronic component that incorporates the functions of a central processing unit (CPU) on a single semiconductor integrated circuit (IC). The microprocessor was born by reducing the word size of the CPU from 32 bits to 4 bits, so that the transistors of its logic circuits would fit onto a single part. One or more microprocessors typically serve as the CPU in a computer system, embedded system, or handheld device. Microprocessors made possible the advent of the microcomputer in the mid-1970s. Before this period, electronic CPUs were typically made from bulky discrete switching devices (and later small-scale integrated circuits) containing the equivalent of only a few transistors. By integrating the processor onto one or a very few large-scale integrated circuit packages (containing the equivalent of thousands or millions of discrete transistors), the cost of processing capacity was greatly reduced. Since the advent of the microprocessor in the mid-1970s, it has become the most prevalent implementation of the CPU, nearly completely replacing all other forms. See History of computing hardware for pre-electronic and early electronic computers.
Since the early 1970s, the increase in processing capacity of evolving microprocessors has been known to generally follow Moore's Law, which suggests that the complexity of an integrated circuit, with respect to minimum component cost, doubles every 18 months. In the early 1990s, microprocessor heat generation (TDP), due to current leakage, emerged as a leading developmental constraint[1]. From their humble beginnings as the drivers for calculators, the continued increase in processing capacity has led to the dominance of microprocessors over every other form of computer; every system from the largest mainframes to the smallest handheld computers now uses a microprocessor at its core.
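The arithmetic behind that claim is easy to check: one doubling every 18 months means four doublings, a 16-fold jump, every six years. Starting from the roughly 2,300 transistors of the Intel 4004 (1971), the idealized trend runs as follows; the later figures trace the 18-month rule, not actual products.

```python
count = 2300                         # Intel 4004 transistor count, 1971
for year in range(1971, 2008, 6):
    print(f"{year}: ~{count:,} transistors")
    count *= 16                      # 2**4 doublings per six-year step
```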
Friday, December 7, 2007
Input/Output
I/O (input/output), pronounced "eye-oh," describes any operation, program, or device that transfers data to or from a computer. Typical I/O devices are printers, hard disks, keyboards, and mice. In fact, some devices are basically input-only devices (keyboards and mice); others are primarily output-only devices (printers); and others provide both input and output of data (hard disks, diskettes, writable CD-ROMs).
source: techtarget
Organization of Computing System
1- Computer Hardware (H/W)
2- Computer Software (S/W)
3- Operating System (O.S)
4- Computer Networks (N/W)
---------------------------------------------------------------------------------------
1- Computer Hardware: All physical components of a computer are called its hardware. We can say that all physical devices are computer hardware: keyboard, mouse, printer, scanner, hard disk, CD-ROM, etc.
There are three main units of Computer Hardware.
i- Input Unit
ii- Processing Unit
iii- Output Unit
i- Input Unit: The devices that are used to give data and/or instructions to the computer system belong to the input unit, like the keyboard, mouse, etc.
ii- Processing Unit: This unit is responsible for all data processing in the computer system. The device for this purpose is called the CPU (Central Processing Unit) or microprocessor.
iii- Output Unit: The devices that are used to show the results of processed data belong to the output unit, like the monitor, printer, etc.
We will see these topics in detail later on.
Wednesday, December 5, 2007
Generations of Computer Developments
The Five Generations of Computers: The history of computer development is often referred to in reference to the different generations of computing devices. Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful and more efficient and reliable devices. Read about each generation and the developments that led to the current devices that we use today.
Second Generation - 1956-1963: Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 50s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output. Second-generation computers moved from cryptic binary machine language to symbolic, or assembly languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology. The first computers of this generation were developed for the atomic energy industry.
Source: http://www.webopedia.com/
Introduction to Computer
The abacus is a calculator. Its first recorded use was in 500 B.C. The Chinese used it to add, subtract, multiply, and divide.
Analytical Engine (A Pre-Electronic Computer)
The first mechanical computer was the analytical engine, conceived and partially constructed by Charles Babbage in London, England, between 1822 and 1871. It was designed to receive instructions from punched cards, make calculations with the aid of a memory bank, and print out solutions to math problems. Although Babbage lavished the equivalent of $6,000 of his own money—and $17,000 of the British government's money—on this extraordinarily advanced machine, the precise work needed to engineer its thousands of moving parts was beyond the ability of the technology of the day to produce in large volume. It is doubtful whether Babbage's brilliant concept could have been realized using the available resources of his own century. If it had been, however, it seems likely that the analytical engine could have performed the same functions as many early electronic computers.
The first computer designed expressly for data processing was patented on January 8, 1889, by Dr. Herman Hollerith of New York. The prototype model of this electrically operated tabulator was built for the U.S. Census Bureau to compute results of the 1890 census.
Using punched cards containing information submitted by respondents to the census questionnaire, the Hollerith machine made instant tabulations from electrical impulses actuated by each hole. It then printed out the processed data on tape. Dr. Hollerith left the Census Bureau in 1896 to establish the Tabulating Machine Company to manufacture and sell his equipment. The company eventually became IBM, and the 80-column punched card used by the company, shown in Figure 1.2, is still known as the Hollerith card.
The Digital Electronic Computer
The first modern digital computer, the ABC (Atanasoff–Berry Computer), was built in a basement on the Iowa State University campus in Ames, Iowa, between 1939 and 1942. The development team was led by John Atanasoff, a professor of physics and mathematics, and Clifford Berry, a graduate student. This machine utilized concepts still in use today: binary arithmetic, parallel processing, regenerative memory, and the separation of memory and computing functions. When completed, it weighed 750 pounds and could store 3,000 bits (0.4 KB) of data.
The technology developed for the ABC machine was passed from Atanasoff to John W. Mauchly, who, together with engineer John Presper Eckert, developed the first large-scale digital computer, ENIAC (Electronic Numerical Integrator and Computer). It was built at the University of Pennsylvania's Moore School of Electrical Engineering. Begun as a classified military project, ENIAC was designed to prepare firing and bombing tables for the U.S. Army and Navy. When finally assembled in 1945, ENIAC consisted of 30 separate units, plus a power supply and forced-air cooling. It weighed 30 tons, and used 19,000 vacuum tubes, 1500 relays, and hundreds of thousands of resistors, capacitors, and inductors. It required 200 kilowatts of electrical power to operate.
Another computer history milestone is the Colossus I, an early digital computer built at a secret British government research establishment at Bletchley Park, Buckinghamshire, England, under the direction of Professor Max Newman. Colossus I was designed for a single purpose: cryptanalysis, or code breaking. Using punched paper tape input, it scanned and analyzed 5000 characters per second. Colossus became operational in December 1943 and proved to be an important technological aid to the Allied victory in World War II. It enabled the British to break the otherwise impenetrable German "Enigma" codes.
The 1960s and 1970s marked the golden era of the mainframe computer. Using the technology pioneered with ABC, ENIAC, and Colossus, large computers that served many users (with accompanying large-scale support) came to dominate the industry.
As these highlights show, the concept of the computer has indeed been with us for quite a while. The following table provides an overview of the evolution of modern computers—it is a timeline of important events.
Professional Course Outline
Main Modules
1- Introduction to Computer and Components
2- Operating System (MS Windows XP)
3- Microsoft Office
4- Internet and Electronic Mail
5- Typing Tutor
6- Hardware Maintenance and Troubleshooting
7- Electronic Commerce
8- Web Development and Maintenance
9- CorelDRAW
10- Photoshop
11- Basics of Computer Networks
12- Impact of IT on Jobs and Organizations
Courses for this Site
Information Technology Institute Bhimber
The AJK Information Technology Board offers several programmes in cooperation with the private sector, including computer literacy, electronic governance, and ultimately software development programmes for unemployed IT graduates.