Sunday, December 30, 2007

Core

In enterprises, a core router may provide a "collapsed backbone" interconnecting the distribution tier routers from multiple buildings of a campus, or from large enterprise locations. Core routers tend to be optimized for high bandwidth.
When an enterprise is widely distributed with no central location(s), the function of core routing may be subsumed by the WAN service to which the enterprise subscribes, and the distribution routers become the highest tier.
source:wikipedia

Distribution

Distribution routers aggregate traffic from multiple access routers, either at the same site, or to collect the data streams from multiple sites to a major enterprise location. Distribution routers often are responsible for enforcing quality of service across a WAN, so they may have considerable memory, multiple WAN interfaces, and substantial processing intelligence.
They may also provide connectivity to groups of servers or to external networks. In the latter application, the router's functionality must be carefully considered as part of the overall security architecture. Separate from the router may be a Firewall or VPN concentrator, or the router may include these and other security functions.
When an enterprise is primarily on one campus, there may not be a distinct distribution tier, other than perhaps off-campus access. In such cases, the access routers, connected to LANs, interconnect via core routers.
source:wikipedia

Access

Access routers, including SOHO, are located at customer sites such as branch offices that do not need hierarchical routing of their own. Typically, they are optimized for low cost.
source:wikipedia

Enterprise Routers

All sizes of routers may be found inside enterprises. The most powerful routers tend to be found in ISPs, but academic and research facilities, as well as large businesses, may also need large routers.
A three-layer model (core, distribution, and access) is in common use, although not all of the layers need be present in smaller networks.
source:wikipedia

Small and Home Office (SOHO) connectivity

Residential gateways (often called routers) are frequently used in homes to connect to a broadband service, such as IP over cable or DSL. A home router may allow connectivity to an enterprise via a secure Virtual Private Network.
While functionally similar to routers, residential gateways use
network address translation instead of routing. Instead of connecting local computers to the remote network directly, a residential gateway must make local computers appear to be a single computer.
source:wikipedia
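To make the translation idea concrete, here is a minimal Python sketch of the kind of mapping table a residential gateway might keep. All of the addresses, ports, and the NatGateway class itself are invented for illustration; a real gateway performs this work in its networking stack or in dedicated hardware.

```python
# Minimal sketch of network address translation (NAT).
# Every address, port, and name below is an invented illustration value.

class NatGateway:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.outbound_map = {}    # (local_ip, local_port) -> public port
        self.inbound_map = {}     # public port -> (local_ip, local_port)
        self.next_port = 40000    # arbitrary starting point for mapped ports

    def translate_out(self, local_ip, local_port):
        """Rewrite an outgoing packet so it appears to come from the gateway."""
        key = (local_ip, local_port)
        if key not in self.outbound_map:
            self.outbound_map[key] = self.next_port
            self.inbound_map[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.outbound_map[key]

    def translate_in(self, public_port):
        """Map a reply arriving at the public address back to the local host."""
        return self.inbound_map.get(public_port)   # None if no mapping exists


gw = NatGateway("203.0.113.7")
print(gw.translate_out("192.168.1.10", 52344))   # ('203.0.113.7', 40000)
print(gw.translate_out("192.168.1.11", 52344))   # ('203.0.113.7', 40001)
print(gw.translate_in(40001))                    # ('192.168.1.11', 52344)
```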

Routers for Internet connectivity and internal use

Routers intended for ISP and major enterprise connectivity will almost invariably exchange routing information with the Border Gateway Protocol. RFC 4098 defines several types of BGP-speaking routers:
Provider Edge Router: Placed at the edge of an ISP network, it speaks external BGP (eBGP) to a BGP speaker in another provider or large enterprise Autonomous System (AS).
Subscriber Edge Router: Located at the edge of the subscriber's network, it speaks eBGP to its provider's AS(s). It belongs to an end user (enterprise) organization.
Inter-provider Border Router: Interconnecting ISPs, this is a BGP speaking router that maintains BGP sessions with other BGP speaking routers in other providers' ASes.
Core router: A router that resides within the middle or backbone of the network rather than at its periphery.
Within an ISP: Internal to the provider's AS, such a router speaks internal BGP (iBGP) to that provider's edge routers, other intra-provider core routers, or the provider's inter-provider border routers.
"Internet backbone:" The Internet does not have a clearly identifiable backbone, as did its predecessors. See default-free zone (DFZ). Nevertheless, it is the major ISPs' routers that make up what many would consider the core. These ISPs operate all four types of the BGP-speaking routers described here. In ISP usage, a "core" router is internal to an ISP, and used to interconnect its edge and border routers. Core routers may also have specialized functions in virtual private networks based on a combination of BGP and Multi-Protocol Label Switching (MPLS).
source:wikipedia

Types of routers

Routers may provide connectivity inside enterprises, between enterprises and the Internet, and inside Internet Service Providers (ISPs). The largest routers (for example the Cisco CRS-1 or Juniper T1600) interconnect ISPs, are used inside ISPs, or may be used in very large enterprise networks. An example of an enterprise router would be the Cisco 7600. The smallest routers provide connectivity for small and home offices (for example the Linksys BEFSR41).
source:wikipedia

Disadvantages of network bridges

1- Does not limit the scope of broadcasts
2- Does not scale to extremely large networks
3- Buffering introduces store-and-forward delays; on average, the traffic destined for the bridge grows with the number of stations on the rest of the LAN
4- Bridging of different MAC protocols introduces errors
5- Because bridges do more than repeaters by examining MAC addresses, the extra processing makes them slower than repeaters
6- Bridges are more expensive than repeaters
source:wikipedia

Advantages of network bridges


1- Self configuring
2- Primitive bridges are often inexpensive
3- Reduce size of collision domain by microsegmentation in non switched networks
4- Transparent to protocols above the MAC layer
5- Allows the introduction of management - performance information and access control
6- The interconnected LANs remain separate, so physical constraints such as the number of stations, repeaters and segment length don't apply across the bridge

source:wikipedia

Bridging versus routing

Bridging and Routing are both ways of performing data control, but work through different methods. Bridging takes place at OSI Model Layer 2 (Data-Link Layer) while Routing takes place at the OSI Model Layer 3 (Network Layer). This difference means that a bridge directs frames according to hardware assigned MAC addresses while a router makes its decisions according to arbitrarily assigned IP Addresses. As a result of this, bridges are not concerned with and are unable to distinguish networks while routers can.
When designing a network, you can choose to put multiple segments into one bridged network or to divide it into different networks interconnected by routers. If a host is physically moved from one network area to another in a routed network, it has to get a new IP address; if this system is moved within a bridged network, it doesn't have to reconfigure anything.
source:wikipedia

Transparent bridging and Source route bridging

Bridges use two methods to resolve the network segment that a MAC address belongs to.
Transparent bridging – This method uses a forwarding database to send frames across network segments. The forwarding database is initially empty, and entries are built as the bridge receives frames. If an address entry is not found in the forwarding database, the frame is rebroadcast to all ports of the bridge, forwarding the frame to all segments except the one on which it was received. By means of these broadcast frames, the destination network will respond and a route will be created. Along with recording the network segment to which a particular frame is to be sent, bridges may also record a bandwidth metric to avoid looping when multiple paths are available. Devices that have this transparent bridging functionality are also known as adaptive bridges. (A small sketch of this learning behaviour appears at the end of this post.)
Source route bridging – With source route bridging two frame types are used in order to find the route to the destination network segment. Single-Route (SR) frames comprise most of the network traffic and have set destinations, while All-Route(AR) frames are used to find routes. Bridges send AR frames by broadcasting on all network branches; each step of the followed route is registered by the bridge performing it. Each frame has a maximum hop count, which is determined to be greater than the diameter of the network graph, and is decremented by each bridge. Frames are dropped when this hop count reaches zero, to avoid indefinite looping of AR frames. The first AR frame which reaches its destination is considered to have followed the best route, and the route can be used for subsequent SR frames; the other AR frames are discarded. This method of locating a destination network can allow for indirect load balancing among multiple bridges connecting two networks. The more a bridge is loaded, the less likely it is to take part in the route finding process for a new destination as it will be slow to forward packets. A new AR packet will find a different route over a less busy path if one exists. This method is very different from transparent bridge usage, where redundant bridges will be inactivated; however, more overhead is introduced to find routes, and space is wasted to store them in frames. A switch with a faster backplane can be just as good for performance, if not for fault tolerance.
source:wikipedia
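To illustrate the transparent bridging method described above, here is a small Python sketch of a learning bridge and its forwarding database. The port numbers and MAC addresses are invented, and a real bridge would also age out entries and run a spanning tree protocol to handle loops.

```python
# Sketch of a transparent (learning) bridge. MAC addresses and ports are invented.

class LearningBridge:
    def __init__(self, ports):
        self.ports = ports
        self.fdb = {}  # forwarding database: MAC address -> port it was seen on

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: remember which port the source address lives behind.
        self.fdb[src_mac] = in_port
        # Forward: a known destination goes out one port; an unknown one is flooded.
        if dst_mac in self.fdb:
            return [self.fdb[dst_mac]]
        return [p for p in self.ports if p != in_port]


bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive(1, "aa:aa", "bb:bb"))  # unknown destination -> flood to [2, 3]
print(bridge.receive(2, "bb:bb", "aa:aa"))  # aa:aa was learned earlier -> [1]
print(bridge.receive(1, "aa:aa", "bb:bb"))  # bb:bb is now known -> [2]
```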

Network Bridge

A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model, and the term layer 2 switch is often used interchangeably with bridge. Bridges are similar to repeaters or network hubs, devices that connect network segments at the physical layer; however, a bridge works by using bridging, in which traffic from one network is managed rather than simply rebroadcast to adjacent network segments. In Ethernet networks, the term "bridge" formally means a device that behaves according to the IEEE 802.1D standard - this is most often referred to as a network switch in marketing literature.
Since bridging takes place at the data link layer of the OSI model, a bridge processes the information from each frame of data it receives. In an Ethernet frame, this provides the MAC address of the frame's source and destination.
source:wikipedia

Router

In packet-switched networks such as the Internet, a router is a device or, in some cases, software in a computer, that determines the next network point to which a packet should be forwarded toward its destination. The router is connected to at least two networks and decides which way to send each information packet based on its current understanding of the state of the networks it is connected to. A router is located at any gateway (where one network meets another), including each point-of-presence on the Internet. A router is often included as part of a network switch.
A router may create or maintain a table of the available routes and their conditions and use this information along with distance and cost algorithms to determine the best route for a given packet. Typically, a packet may travel through a number of network points with routers before arriving at its destination. Routing is a function associated with the Network layer (layer 3) in the standard model of network programming, the Open Systems Interconnection (OSI) model. A layer-3 switch is a switch that can perform routing functions.
source:searchnetworking.techtarget.com
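As a rough illustration of how such a routing table is consulted, the Python sketch below picks the most specific matching prefix and prefers the lowest cost among equally specific routes. The prefixes, next-hop names, and costs are invented; real routers build their tables from routing protocols and perform the lookup in specialized hardware.

```python
# Illustrative routing table: longest-prefix match, lowest cost on ties.
# Prefixes, next hops and metrics are made-up example values.
import ipaddress

routes = [
    (ipaddress.ip_network("0.0.0.0/0"),   "isp-uplink", 10),  # default route
    (ipaddress.ip_network("10.0.0.0/8"),  "core-1",      5),
    (ipaddress.ip_network("10.1.0.0/16"), "branch-a",    5),
]

def best_route(destination):
    dest = ipaddress.ip_address(destination)
    candidates = [(net, hop, cost) for net, hop, cost in routes if dest in net]
    # Prefer the most specific prefix, then the cheapest path.
    return max(candidates, key=lambda r: (r[0].prefixlen, -r[2]))

print(best_route("10.1.2.3"))   # matches 10.1.0.0/16 -> branch-a
print(best_route("10.9.9.9"))   # matches 10.0.0.0/8  -> core-1
print(best_route("8.8.8.8"))    # only the default route matches -> isp-uplink
```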

Thursday, December 27, 2007

Switch VS Hub

Although hubs and switches both glue the PCs in a network together, a switch is more expensive and a network built with switches is generally considered faster than one built with hubs. Why?
When a hub receives a packet (chunk) of data (a frame in Ethernet lingo) at one of its ports from a PC on the network, it transmits (repeats) the packet to all of its ports and, thus, to all of the other PCs on the network. If two or more PCs on the network try to send packets at the same time, a collision is said to occur. When that happens, all of the PCs have to go through a routine to resolve the conflict. The process is prescribed in the Ethernet Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol. Each Ethernet adapter has both a receiver and a transmitter. If the adapters didn't have to listen with their receivers for collisions, they would be able to send data at the same time they are receiving it (full duplex). Because they have to operate at half duplex (data flows one way at a time) and a hub retransmits data from one PC to all of the PCs, the maximum bandwidth is 100 Mbps, and that bandwidth is shared by all of the PCs connected to the hub. The result is that when a person using a computer on a hub downloads a large file or group of files from another computer, the network becomes congested. In a 10 Mbps 10BASE-T network the effect is to slow the network to nearly a crawl. The effect on a small, 100 Mbps (million bits per second), 5-port network is not as significant.
Two computers can be connected directly together in an Ethernet network with a crossover cable. A crossover cable doesn't have a collision problem. It hardwires the Ethernet transmitter on one computer to the receiver on the other. Most 100BASE-TX Ethernet adapters can detect when listening for collisions is not required, using a process known as auto-negotiation, and will operate in full-duplex mode when it is permitted. The result is that a crossover cable doesn't have delays caused by collisions, data can be sent in both directions simultaneously, the maximum available bandwidth is 200 Mbps (100 Mbps each way), and there are no other PCs with which the bandwidth must be shared.
An Ethernet switch automatically divides the network into multiple segments, acts as a high-speed, selective bridge between the segments, and supports simultaneous connections of multiple pairs of computers which don't compete with other pairs of computers for network bandwidth. It accomplishes this by maintaining a table of each destination address and its port. When the switch receives a packet, it reads the destination address from the header information in the packet, establishes a temporary connection between the source and destination ports, sends the packet on its way, and then terminates the connection.
Picture a switch as making multiple temporary crossover cable connections between pairs of computers (the cables are actually straight-thru cables; the crossover function is done inside the switch). High-speed electronics in the switch automatically connect the end of one cable (source port) from a sending computer to the end of another cable (destination port) going to the receiving computer on a per packet basis. Multiple connections like this can occur simultaneously. It's as simple as that. And like a crossover cable between two PCs, PCs on an Ethernet switch do not share the transmission media, do not experience collisions or have to listen for them, can operate in a full-duplex mode, have bandwidth as high as 200 Mbps (100 Mbps each way), and do not share this bandwidth with other PCs on the switch. In short, a switch is "more better."
source:duxcw
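The contrast described above can be reduced to a toy Python comparison: a hub repeats a frame out of every other port, while a switch looks the destination up in its address table and uses a single port, falling back to flooding only when the destination is unknown. The ports and addresses below are invented; this is only a sketch of the idea, not how real hardware is built.

```python
# Toy comparison of hub and switch forwarding. Invented addresses and ports.

def hub_forward(in_port, all_ports):
    """A hub repeats the frame out of every port except the one it came in on."""
    return [p for p in all_ports if p != in_port]

def switch_forward(in_port, dst_mac, mac_table, all_ports):
    """A switch sends the frame only to the port where the destination lives,
    flooding only when the destination has not been learned yet."""
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]
    return [p for p in all_ports if p != in_port]

ports = [1, 2, 3, 4, 5]
table = {"pc-b": 3}

print(hub_forward(1, ports))                    # [2, 3, 4, 5] - everyone hears it
print(switch_forward(1, "pc-b", table, ports))  # [3]          - only the recipient
```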

Network Hub

A network hub or concentrator is a device for connecting multiple twisted pair or fiber optic Ethernet devices together, making them act as a single network segment. Hubs work at the physical layer (layer 1) of the OSI model, and the term layer 1 switch is often used interchangeably with hub. The device is thus a form of multiport repeater. Network hubs are also responsible for forwarding a jam signal to all ports if they detect a collision.
Hubs also often come with a BNC and/or AUI connector to allow connection to legacy 10BASE2 or 10BASE5 network segments. The availability of low-priced network switches has largely rendered hubs obsolete but they are still seen in older installations and more specialized applications.
source:wikipedia

Network Switch

A network switch is a computer networking device that connects network segments.

Low-end network switches appear nearly identical to network hubs, but a switch contains more "intelligence" (and comes with a correspondingly slightly higher price tag) than a network hub. Network switches are capable of inspecting data packets as they are received, determining the source and destination device of that packet, and forwarding it appropriately. By delivering each message only to the connected device it was intended for, a network switch conserves network bandwidth and offers generally better performance than a hub.
source:wikipedia

Network Interface Card ( NIC )

A network card, network adapter, LAN Adapter or NIC (network interface card) is a piece of computer hardware designed to allow computers to communicate over a computer network. It is both an OSI layer 1 (physical layer) and layer 2 (data link layer) device, as it provides physical access to a networking medium and provides a low-level addressing system through the use of MAC addresses. It allows users to connect to each other either by using cables or wirelessly.
source:wikipedia

Network Devices

1- NIC (Network Interface Card)
2- Switch/Hub
3- Router/Bridge

Wednesday, December 12, 2007

Network Topologies

1- Star topology
2- Ring Topology
3- Bus Topology
4- Hybrid Topology

HYBRID TOPOLOGY

A hybrid topology is two or more topologies connected to each other to form a complete network. There are many different combinations that can make up a hybrid; some of them are:
o Star – Bus
o Star – Ring
Star – Bus
A star-bus topology is multiple star networks connected to each other via a bus connection. Each star network has a centre piece, either a hub or a switch, and the bus cabling connects these hubs or switches to each other, joining each star topology to the rest.
One of the advantages of this type of hybrid topology is that if a single computer fails, the rest of the computers in its star are not affected. This network is very easy to set up, and it is also easy to add further networks. Adding other networks can cause problems, though: if the bus cabling is just Ethernet cable and a lot of computers are sharing information, the network can become slow and files can become corrupt or even lost. Another problem is that if the central hub or switch fails, that star network will not be able to communicate.
Star-Ring
A star-ring topology is multiple star networks wired to a ring connection. Again, each node within a star in this network is connected to either a hub or a switch. If a computer goes down, the network stays alive, whereas in a plain ring topology the whole network would go down. Because token passing is used in a ring topology, having more computers connected will not slow down the network, which allows greater traffic to be sent around the network at any one time.
source:wikipedia

RING TOPOLOGY

Ring Topology
All devices are connected to one another in the shape of a closed loop, so that each device is connected directly to two other devices, one on either side of it.



source:webopedia


BUS TOPOLOGY

Bus Topology
All devices are connected to a central cable, called the bus or backbone.




source: webopedia









STAR TOPOLOGY

All devices are connected to a central hub. Nodes communicate across the network by passing data through the hub.





source:webopedia


Network topologies

The physical layout of a network is called its topology. The topology also determines how the network devices communicate with each other.
Examples are: star topology, bus topology, ring topology, hybrid topology, etc.

Tuesday, December 11, 2007

WAN ( Wide Area Network )

A Wide Area Network (WAN) is a network that covers a broad area (i.e., any network whose communications links cross metropolitan, regional, or national boundaries), or, less formally, a network that uses routers and public communications links.
source:wikipedia

MAN ( Metropolitan Area Network )

This network type is designed for a town or city. If we connect all the LANs in a city or town to each other, the resulting network is called a MAN (Metropolitan Area Network).

LAN ( Local Area Network )

A network confined to a single building or office is called a LAN; in fact, the computers are geographically close together. Examples: a school or college lab network, a single office network, etc.

Types of computer Networks

1- LAN ( Local Area Network )
2- MAN ( Metropolitan Area Network )
3- WAN ( Wide Area Network )

Computer Network

A group of two or more computers linked together is called a computer network. In a network, computers can communicate with each other and can share data, information and resources.

Monday, December 10, 2007

External security

Typically an operating system offers (or hosts) various services to other network computers and users. These services are usually provided through ports or numbered access points beyond the operating systems network address. Services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security.
At the front line of security are hardware devices known as
firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and be running an insecure service, such as Telnet or FTP, and not be threatened by a security breach, because the firewall would deny all traffic trying to connect to the service on that port.
source:wikipedia
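As a toy illustration of the allow/deny behaviour described above, the Python sketch below checks each connection attempt against a small rule list before it would ever reach a service. The ports, rules, and default action are invented; a real firewall inspects far more than the destination port.

```python
# Sketch of port-based packet filtering. Rules and ports are illustrative only.

rules = [
    {"port": 80, "action": "allow"},   # web server reachable from outside
    {"port": 22, "action": "allow"},   # remote administration
    {"port": 23, "action": "deny"},    # telnet blocked even if the service runs
]
default_action = "deny"                # anything not listed is refused

def filter_packet(dst_port):
    for rule in rules:
        if rule["port"] == dst_port:
            return rule["action"]
    return default_action

print(filter_packet(80))    # allow
print(filter_packet(23))    # deny - the insecure service is shielded
print(filter_packet(3306))  # deny - no rule, so the default applies
```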

Internal security

Internal security can be thought of as protecting the computer's resources from the programs concurrently running on the system. Most operating systems set programs running natively on the computer's processor, so the problem arises of how to stop these programs doing the same task and having the same privileges as the operating system (which is after all just a program too). Processors used for general purpose operating systems generally have a hardware concept of privilege. Generally less privileged programs are automatically blocked from using certain hardware instructions, such as those to read or write from external devices like disks. Instead, they have to ask the privileged program (operating system kernel) to read or write. The operating system therefore gets the chance to check the program's identity and allow or refuse the request.
An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to either emulate a processor or provide a host for a p-code based system such as Java.
Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, including bypassing auditing.
source:wikipedia

Security

Many operating systems include some level of security. Security is based on the two ideas that:
The operating system provides access to a number of resources, directly or indirectly, such as files on a local disk, privileged system calls, personal information about users, and the services offered by the programs running on the system;
The operating system is capable of distinguishing between some requesters of these resources who are authorized (allowed) to access the resource, and others who are not authorized (forbidden). While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. Requesters, in turn, divide into two categories:
Internal security: an already running program. On some systems, once a program is running it has no limitations, but commonly the program has an identity, which it keeps and which is used to check all of its requests for resources.
External security: a new request from outside the computer, such as a login at a connected console or some kind of network connection. To establish identity there may be a process of authentication. Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all.
In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?").
Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The
United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select computer systems being considered for the processing, storage and retrieval of sensitive or classified information.
source:wikipedia
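The allow/disallow model combined with auditing can be sketched in a few lines of Python: a table of permissions is consulted on every request, and every decision is logged so that questions such as "who has been reading this file?" can be answered afterwards. The user names, file names, and rights below are invented for illustration.

```python
# Sketch of the allow/deny + auditing idea. Users, files and rights are invented.

permissions = {
    ("alice", "payroll.xls"): {"read", "write"},
    ("bob",   "payroll.xls"): {"read"},
}
audit_log = []

def request(user, resource, right):
    allowed = right in permissions.get((user, resource), set())
    audit_log.append((user, resource, right, "granted" if allowed else "refused"))
    return allowed

request("bob", "payroll.xls", "read")    # granted
request("bob", "payroll.xls", "write")   # refused
request("eve", "payroll.xls", "read")    # refused - unknown requester

# "Who has been reading this file?"
print([user for user, res, right, result in audit_log
       if res == "payroll.xls" and right == "read" and result == "granted"])
```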

Networking

Most current operating systems are capable of using the TCP/IP networking protocols. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections.
Many operating systems also support one or more vendor-specific legacy networking protocols as well, for example,
SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols on Windows. Specific protocols for specific tasks may also be supported such as NFS for file access.
source:wikipedia

Disk and file systems

All operating systems include support for a variety of file systems. Modern file systems comprise a hierarchy of directories. While the idea is conceptually similar across all general-purpose file systems, some differences in implementation exist. Two noticeable examples of this are the character used to separate directories, and case sensitivity.
Unix demarcates its
path components with a slash (/), a convention followed by operating systems that emulated it or at least its concept of hierarchical directories, such as Linux, Amiga OS and Mac OS X. MS-DOS also emulated this feature, but had already also adopted the CP/M convention of using slashes for additional options to commands, so instead used the backslash (\) as its component separator. Microsoft Windows continues with this convention; Japanese editions of Windows use ¥, and Korean editions use ₩.[1] Versions of Mac OS prior to OS X use a colon (:) for a path separator. RISC OS uses a period (.).
Unix and
Unix-like operating systems allow for any character in file names other than the slash and NUL characters (including line feed (LF) and other control characters). Unix file names are case sensitive, which allows multiple files to be created with names that differ only in case. By contrast, Microsoft Windows file names are not case sensitive by default. Windows also has a larger set of punctuation characters that are not allowed in file names.
File systems may provide
journaling, which provides safe recovery in the event of a system crash. A journaled file system writes information twice: first to the journal, which is a log of file system operations, then to its proper place in the ordinary file system. In the event of a crash, the system can recover to a consistent state by replaying a portion of the journal. In contrast, non-journaled file systems typically need to be examined in their entirety by a utility such as fsck or chkdsk. Soft updates is an alternative to journalling that avoids the redundant writes by carefully ordering the update operations. Log-structured file systems and ZFS also differ from traditional journaled file systems in that they avoid inconsistencies by always writing new copies of the data, eschewing in-place updates.
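The write-twice behaviour of a journaled file system can be modelled with a toy Python sketch: each operation is appended to a journal first, then applied to the ordinary data, and replaying the journal after a simulated crash restores a consistent state. This is only a conceptual model, not how any particular file system is implemented.

```python
# Toy model of journaling: record the operation first, apply it second,
# and replay the journal after a simulated crash.

journal = []       # log of intended operations
filesystem = {}    # the "ordinary" file system contents

def journaled_write(name, data, crash_before_apply=False):
    journal.append(("write", name, data))   # 1. write the operation to the journal
    if crash_before_apply:
        return                              # simulated crash: data not yet in place
    filesystem[name] = data                 # 2. write the data to its proper place

def replay_journal():
    """After a crash, re-apply logged operations to reach a consistent state."""
    for op, name, data in journal:
        if op == "write":
            filesystem[name] = data

journaled_write("a.txt", "hello")
journaled_write("b.txt", "world", crash_before_apply=True)   # simulated crash
print(filesystem)   # {'a.txt': 'hello'} - b.txt is missing after the crash
replay_journal()
print(filesystem)   # {'a.txt': 'hello', 'b.txt': 'world'} - consistent again
```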
Many Linux distributions support some or all of
ext2, ext3, ReiserFS, Reiser4, GFS, GFS2, OCFS, OCFS2, and NILFS. Linux also has full support for XFS and JFS, along with the FAT file systems, and NTFS.
Microsoft Windows includes support for FAT12, FAT16, FAT32, and NTFS. The NTFS file system is the most efficient and reliable of the four Windows file systems, and as of
Windows Vista, is the only file system which the operating system can be installed on. Windows Embedded CE 6.0 introduced ExFAT, a file system suitable for flash drives.
Mac OS X supports HFS+ as its primary file system, and it supports several other file systems as well, including FAT16, FAT32, NTFS and ZFS.
Common to all these (and other) operating systems is support for file systems typically found on removable media. FAT12 is the file system most commonly found on
floppy discs. ISO 9660 and Universal Disk Format are two common formats that target Compact Discs and DVDs, respectively. Mount Rainier is a newer extension to UDF supported by Linux 2.6 kernels and Windows Vista that facilitates rewriting to DVDs in the same fashion as what has been possible with floppy disks.
source:wikipedia

Memory management

Current computer architectures arrange the computer's memory in a hierarchical manner, starting from the fastest registers, CPU cache, random access memory and disk storage. An operating system's memory manager coordinates the use of these various types of memory by tracking which one is available, which is to be allocated or deallocated and how to move data between them. This activity, usually referred to as virtual memory management, increases the amount of memory available for each process by making the disk storage seem like main memory. There is a speed penalty associated with using disks or other slower storage as memory – if running processes require significantly more RAM than is available, the system may start thrashing. This can happen either because one process requires a large amount of RAM or because two or more processes compete for a larger amount of memory than is available. This then leads to constant transfer of each process's data to slower storage.
Another important part of memory management is managing virtual addresses. If multiple processes are in memory at once, they must be prevented from interfering with each other's memory (unless there is an explicit request to utilise shared memory). This is achieved by having separate address spaces. Each process sees the whole virtual address space, typically from address 0 up to the maximum size of virtual memory, as uniquely assigned to it. The operating system maintains a page table that matches virtual addresses to physical addresses. These memory allocations are tracked so that when a process terminates, all memory used by that process can be made available for other processes.
The operating system can also write inactive memory pages to secondary storage. This process is called "paging" or "swapping" – the terminology varies between operating systems.
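The page table and paging behaviour described above can be sketched as a simple mapping from virtual page numbers to physical frames, with a "page fault" handled when a page has been swapped out. The page size, table contents, and the load_from_disk stand-in are all invented; real systems use multi-level tables walked by the memory management unit.

```python
# Toy page table: virtual page -> physical frame, or None if paged out to disk.
PAGE_SIZE = 4096

page_table = {0: 7, 1: None, 2: 3}   # page 1 has been swapped out

def load_from_disk(page):
    """Stand-in for swapping a page back into a free physical frame."""
    return 9   # pretend frame 9 happened to be free

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        # Page fault: the OS would read the page back in from disk here.
        frame = load_from_disk(page)
        page_table[page] = frame
    return frame * PAGE_SIZE + offset

print(hex(translate(0x0010)))   # page 0 -> frame 7 -> 0x7010
print(hex(translate(0x1010)))   # page 1 -> page fault, then frame 9 -> 0x9010
```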
It is also typical for operating systems to employ otherwise unused physical memory as a
page cache; requests for data from a slower device can be retained in memory to improve performance. The operating system can also pre-load the in-memory cache with data that may be requested by the user in the near future; SuperFetch is an example of this.
source: wikipedia

Process management

Every program running on a computer, be it a service or an application, is a process. As long as a von Neumann architecture is used to build computers, only one process per CPU can be run at a time. Older microcomputer OSes such as MS-DOS did not attempt to bypass this limit, with the exception of interrupt processing, and only one process could be run under them (although DOS itself featured TSR programs as a very partial and not very easy to use workaround).
Most operating systems enable concurrent execution of many processes and programs at once via
multitasking, even with one CPU. The mechanism has been used in mainframes since the early 1960s, but in personal computers it became widely available in the 1990s. Process management is an operating system's way of dealing with running those multiple processes. On the most fundamental of computers (those containing one processor with one core) multitasking is done by simply switching processes quickly. Depending on the operating system, as more processes run, either each time slice will become smaller or there will be a longer delay before each process is given a chance to run. Process management involves computing and distributing CPU time as well as other resources. Most operating systems allow a process to be assigned a priority which affects its allocation of CPU time. Interactive operating systems also employ some level of feedback in which the task with which the user is working receives higher priority. Interrupt driven processes will normally run at a very high priority. In many systems there is a background process, such as the System Idle Process in Windows, which will run when no other process is waiting for the CPU.
source:wikipedia
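A crude Python sketch of the time-slicing idea follows: the scheduler repeatedly hands the next runnable process one time slice, so the more processes there are, the less often each one runs. The process names and workloads are invented, and real schedulers add priorities, preemption, and I/O waiting on top of this.

```python
# Toy round-robin scheduler: each process gets one time slice per pass.
from collections import deque

# (name, remaining slices of work) - invented workloads
ready_queue = deque([("editor", 2), ("browser", 3), ("backup", 1)])

while ready_queue:
    name, remaining = ready_queue.popleft()
    print(f"running {name} for one time slice")
    remaining -= 1
    if remaining > 0:
        ready_queue.append((name, remaining))   # not finished: back of the queue
    else:
        print(f"{name} finished")
```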

Services of an Operating System

1- Process management
2- Memory management
3- Disk and file systems
4- Networking
5- Security
6- Internal security
7- External security

source:wikipedia

Definition of an Operating System (OS)

An operating system (OS) is the software that is loaded when a computer boots (starts up) and that manages the computer's functions. Operating systems come in various forms to meet different requirements. An operating system acts as an interface between the user and the hardware.

source:wikipedia

Operating System (OS)


An operating system (OS) is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system. At the foundation of all system software, an operating system performs basic tasks such as controlling and allocating memory, prioritizing system requests, controlling input and output devices, facilitating networking and managing file systems. Most operating systems come with an application that provides a user interface for managing the operating system, such as a command line interpreter or graphical user interface. The operating system forms a platform for other system software and for application software.


The most commonly used contemporary desktop OS is Microsoft Windows, with Mac OS X also being well known. Linux, GNU and the BSDs are popular Unix-like systems.


source:wikipedia, webopedia

Types Of Computer Softwares

Practical computer systems divide software systems into three major classes: system software, programming software and application software, although the distinction is arbitrary, and often blurred.

System software helps run the computer hardware and computer system. It includes operating systems, device drivers, diagnostic tools, servers, windowing systems, utilities and more. The purpose of system software is to insulate the applications programmer as much as possible from the details of the particular computer complex being used, especially memory and other hardware features, and accessory devices such as communications equipment, printers, readers, displays, keyboards, etc.

Programming software usually provides tools to assist a programmer in writing computer programs and software using different programming languages in a more convenient way. The tools include text editors, compilers, interpreters, linkers, debuggers, and so on. An Integrated Development Environment (IDE) merges those tools into a single software bundle, so a programmer may not need to type multiple commands for compiling, interpreting, debugging, and tracing, because the IDE usually has an advanced graphical user interface, or GUI.

Application software allows end users to accomplish one or more specific (non-computer related) tasks. Typical applications include industrial automation, business software, educational software, medical software, databases, and computer games. Businesses are probably the biggest users of application software, but almost every field of human activity now uses some form of application software.
source:wikipedia

Computer Software

Computer software is a general term used to describe a collection of computer programs, procedures and documentation that perform some task on a computer system. The term includes application software, such as word processors, which performs productive tasks for users, and system software, such as operating systems, which interfaces with hardware to provide the necessary services for application software.
source:wikipedia

Saturday, December 8, 2007

Micro Processor

The Microprocessor (CPU)

A microprocessor is a programmable digital electronic component that incorporates the functions of a central processing unit (CPU) on a single semiconducting integrated circuit (IC). The microprocessor was born by reducing the word size of the CPU from 32 bits to 4 bits, so that the transistors of its logic circuits would fit onto a single part. One or more microprocessors typically serve as the CPU in a computer system, embedded system, or handheld device. Microprocessors made possible the advent of the microcomputer in the mid-1970s. Before this period, electronic CPUs were typically made from bulky discrete switching devices (and later small-scale integrated circuits) containing the equivalent of only a few transistors. By integrating the processor onto one or a very few large-scale integrated circuit packages (containing the equivalent of thousands or millions of discrete transistors), the cost of processing capacity was greatly reduced. Since the advent of the microprocessor in the early 1970s, it has become the most prevalent implementation of the CPU, nearly completely replacing all other forms. See History of computing hardware for pre-electronic and early electronic computers.
Since the early 1970s, the increase in processing capacity of evolving microprocessors has been known to generally follow Moore's Law, which suggests that the complexity of an integrated circuit, with respect to minimum component cost, doubles roughly every 18 months. In the early 1990s, microprocessor heat generation (thermal design power, TDP), due in part to current leakage, emerged as a leading developmental constraint. From their humble beginnings as the drivers for calculators, the continued increase in processing capacity has led to the dominance of microprocessors over every other form of computer; every system from the largest mainframes to the smallest handheld computers now uses a microprocessor at its core.
source:wikipedia

Friday, December 7, 2007

Input/Output

I/O (input/output), pronounced "eye-oh," describes any operation, program, or device that transfers data to or from a computer. Typical I/O devices are printers, hard disks, keyboards, and mice. In fact, some devices are basically input-only devices (keyboards and mice); others are primarily output-only devices (printers); and others provide both input and output of data (hard disks, diskettes, writable CD-ROMs).


source: techtarget

Organization of Computing System

1- Computer Hardware (H/W)
2- Computer Software (S/W)
3- Operating System (O.S)
4- Computer Networks (N/W)
---------------------------------------------------------------------------------------
1- Computer Hardware:- All the physical components of a computer are called its hardware; in other words, all physical devices are computer hardware: keyboard, mouse, printer, scanner, hard disk, CD-ROM, etc.
There are three main units of Computer Hardware.
i- Input Unit
ii- Processing Unit
iii- Output Unit
i- Input Unit: The devices that are used to give data and/or instructions to the computer system belong to the input unit, e.g. keyboard, mouse.
ii- Processing Unit: This unit is responsible for all data processing in the computer system. The device for this purpose is called the CPU (Central Processing Unit) or microprocessor.
iii- Output Unit: The devices that are used to show the results of processed data belong to the output unit, e.g. monitor, printer.
We will see these topics in detail later on.

Wednesday, December 5, 2007

Generations of Computer Developments

Generations of Computer

The Five Generations of Computers: The history of computer development is often discussed in terms of the different generations of computing devices. Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, and more efficient and reliable devices. Read about each generation and the developments that led to the devices we use today.
First Generation - 1940-1956: Vacuum Tubes. The first computers used vacuum tubes for circuitry. They were very expensive to operate and in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions. First generation computers relied on machine Language to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts. The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau in 1951.

Second Generation - 1956-1963: Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 50s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output. Second-generation computers moved from cryptic binary machine language to symbolic, or assembly languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology. The first computers of this generation were developed for the atomic energy industry.
Third Generation - 1964-1971: The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers. Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
Fourth Generation – 1971: The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer - from the central processing unit and memory to input/output controls - on a single chip. In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors. As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices.
Fifth Generation - Present and Beyond: Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.

Source: http://www.webopedia.com/

Introduction to Computer

Development History
The Abacus
The abacus is a calculator. Its first recorded use was in 500 B.C. The Chinese used it to add, subtract, multiply, and divide.

Analytical Engine (A Pre-Electronic Computer)
The first mechanical computer was the analytical engine, conceived and partially constructed by Charles Babbage in London, England, between 1822 and 1871. It was designed to receive instructions from punched cards, make calculations with the aid of a memory bank, and print out solutions to math problems. Although Babbage lavished the equivalent of $6,000 of his own money—and $17,000 of the British government's money—on this extraordinarily advanced machine, the precise work needed to engineer its thousands of moving parts was beyond the ability of the technology of the day to produce in large volume. It is doubtful whether Babbage's brilliant concept could have been realized using the available resources of his own century. If it had been, however, it seems likely that the analytical engine could have performed the same functions as many early electronic computers.
The First Electrically Driven Computer
The first computer designed expressly for data processing was patented on January 8, 1889, by Dr. Herman Hollerith of New York. The prototype model of this electrically operated tabulator was built for the U.S. Census Bureau to compute results of the 1890 census.
Using punched cards containing information submitted by respondents to the census questionnaire, the Hollerith machine made instant tabulations from electrical impulses actuated by each hole. It then printed out the processed data on tape. Dr. Hollerith left the Census Bureau in 1896 to establish the Tabulating Machine Company to manufacture and sell his equipment. The company eventually became IBM, and the 80-column punched card used by the company is still known as the Hollerith card.

The Digital Electronic Computer
The first modern digital computer, the ABC (Atanasoff–Berry Computer), was built in a basement on the Iowa State University campus in Ames, Iowa, between 1939 and 1942. The development team was led by John Atanasoff, a professor of physics and mathematics, and Clifford Berry, a graduate student. This machine utilized concepts still in use today: binary arithmetic, parallel processing, regenerative memory, separate memory, and computer functions. When completed, it weighed 750 pounds and could store 3000 bits (.4 KB) of data.
The technology developed for the ABC machine was passed from Atanasoff to John W. Mauchly, who, together with engineer John Presper Eckert, developed the first large-scale digital computer, ENIAC (Electronic Numerical Integrator and Computer). It was built at the University of Pennsylvania's Moore School of Electrical Engineering. Begun as a classified military project, ENIAC was designed to prepare firing and bombing tables for the U.S. Army and Navy. When finally assembled in 1945, ENIAC consisted of 30 separate units, plus a power supply and forced-air cooling. It weighed 30 tons, and used 19,000 vacuum tubes, 1500 relays, and hundreds of thousands of resistors, capacitors, and inductors. It required 200 kilowatts of electrical power to operate.
Another computer history milestone is the Colossus I, an early digital computer built at a secret British government research establishment at Bletchley Park, Buckinghamshire, England, under the direction of Professor Max Newman. Colossus I was designed for a single purpose: cryptanalysis, or code breaking. Using punched paper tape input, it scanned and analyzed 5000 characters per second. Colossus became operational in December 1943 and proved to be an important technological aid to the Allied victory in World War II. It enabled the British to break the otherwise impenetrable German "Enigma" codes.
The 1960s and 1970s marked the golden era of the mainframe computer. Using the technology pioneered with ABC, ENIAC, and Colossus, large computers that served many users (with accompanying large-scale support) came to dominate the industry.
As these highlights show, the concept of the computer has indeed been with us for quite a while.

Professional Course Outline

Main Modules

1- Introduction to Computer and Components
2- Operating System (MS Windows XP)
3- Microsoft Office
4- Internet and Electronic Mail
5- Typing Tutor
6- Hardware Maintenance and Trouble Shooting
7- Electronic Commerce
8- Web Development and Maintenance
9- Corel Draw
10- Photo Shop
11- Basics of Computer Networks
12- Impact of I T on Jobs and Organization

Courses for this Site

1- Professional Course
2- Elementary Course
You can learn a lot about these courses from this site without any fee. I will describe all the key concepts related to the above-mentioned courses completely free. Your comments and recommendations will be treated as important.

Information Technology Institute Bhimber

The AJK Information Technology Board offers several programmes in cooperation with the private sector, which include computer literacy, electronic governance and, ultimately, software development programmes for unemployed IT graduates.
