TRANSMISSION CONTROL PROTOCOL / INTERNET PROTOCOL (TCP/IP): E-COMMERCE CONCEPTS
The realization that stand-alone computers made little sense made the network possible. Once there were many networks, people realized that stand-alone networks made little sense either: they, too, needed to talk to one another. This was the problem confronting the US Government and the academic community in the late 60s. Everything they had was heterogeneous: computers, networks, operating systems and networking software. Connecting these networks was either impossible or done using expensive proprietary network devices. Something had to be done. Rather than surrender to the monopoly of vendors, the US Department of Defence (DOD) initiated work on a project with a simple objective: develop a set of standard rules (protocols) which could be used by all machines and networks to communicate.
The solution had to be vendor-neutral, independent of the hardware, the operating system, and even the geographical location. The solution they found was TCP/IP. It became so successful that both the Internet and the World Wide Web adopted it as their protocol. TCP and IP were developed to connect a number of different networks designed by different vendors into a network of networks (the "Internet"). The suite was initially successful because it delivered a few basic services that everyone needs (file transfer, electronic mail, remote logon) across a very large number of client and server systems. Several computers in a small department can use TCP/IP (along with other protocols) on a single LAN. The IP component provides routing from the department to the enterprise network, then to regional networks, and finally to the global Internet.
On the battlefield a communications network will sustain damage, so the DOD designed TCP/IP to be robust and to recover automatically from any node or phone line failure. This design allows the construction of very large networks with less central management. However, because of the automatic recovery, network problems can go undiagnosed and uncorrected for long periods of time.
Internet Protocols:
A protocol is a set of rules that determines how two computers communicate with one another over a network. The protocols around which the Internet was designed embody a series of design principles.
- Interoperable - the system supports computers and software from different vendors. For e-commerce (EC), this means that customers and businesses are not required to buy specific systems in order to conduct business.
- Layered - the collection of Internet protocols works in layers, with each layer building on the layers below it. This layered architecture is shown in the accompanying figure.
- Simple - each of the layers in the architecture provides only a few functions or operations. This means that application programmers are shielded from the complexities of the underlying hardware.
- End-to-end - the Internet is based on end-to-end protocols. This means that the interpretation of the data happens at the application layer (i.e., at the sending and the receiving sides), not at the intermediate network layers. It is much like the post office: the job of the post office is to deliver the mail; only the sender and the receiver are concerned with its contents.

What is TCP/IP?
TCP/IP is a set of protocols developed to allow cooperating computers to share resources across a network. It was developed by a community of researchers centered around the ARPAnet, which remains the best-known TCP/IP network. The most accurate name for the set of protocols we are describing is the "Internet protocol suite". TCP and IP are two of the protocols in this suite. Because they are the best known of the protocols, it has become common to use the term TCP/IP (or IP/TCP) to refer to the whole family.
TCP/IP is a family of protocols. A few provide "low-level" functions needed for many applications. These include IP, TCP, and UDP.
- IP - is responsible for moving packets of data from node to node. IP forwards each packet based on a four-byte destination address (the IP number). The Internet authorities assign ranges of numbers to different organizations. The organizations assign groups of their numbers to departments. IP operates on gateway machines that move data from department to organization to region and then around the world.
- TCP - is responsible for verifying the correct delivery of data from client to server. Data can be lost in the intermediate network. TCP adds support to detect errors or lost data and to trigger retransmission until the data is correctly and completely received.
- UDP (User Datagram Protocol) - a simple transport-layer protocol. It does not provide the same delivery guarantees as TCP, and is thus considered "unreliable". Although this makes it unsuitable for some applications, it is a better fit than the more reliable and robust TCP for many others.
One of the things that makes UDP attractive is its simplicity. Because it does not need to keep track of the sequence of packets, or whether they ever reached their destination, it has lower overhead than TCP. This is another reason why it is well suited to streaming-data applications: there is far less bookkeeping involved in making sure all the packets arrive, and arrive in the right order.
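UDP's fire-and-forget style can be sketched with Python's standard socket module. This is a minimal illustration on the loopback interface; the message and port choice are arbitrary.

```python
import socket

# UDP is connectionless: the sender transmits a datagram with no handshake,
# and the receiver either gets the whole datagram or nothing at all.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))               # OS picks a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))  # no connection setup, no ack

data, addr = receiver.recvfrom(1024)          # one complete datagram
print(data)  # b'hello'
sender.close()
receiver.close()
```

Note that nothing in this exchange confirms delivery; on a real network the datagram could simply be lost, and UDP would not notice.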
Other protocols in the family perform specific tasks, e.g. transferring files between computers, sending mail, or finding out who is logged in on another computer.
TCP/IP Services
Initially TCP/IP was used mostly between minicomputers or mainframes. These machines had their own disks, and generally were self-contained. Thus the most important "traditional" TCP/IP services are:
- File transfer. The file transfer protocol (FTP) allows a user on any computer to get files from another computer, or to send files to another computer. Security is handled by requiring the user to specify a user name and password for the other computer.
- Remote login. The network terminal protocol (TELNET) allows a user to log in on any other computer on the network. You start a remote session by specifying a computer to connect to. From that time until you finish the session, anything you type is sent to the other computer. Note that you are really still talking to your own computer, but the telnet program effectively makes it invisible while it is running: every character you type is sent directly to the other system. Generally, the connection to the remote computer behaves much like a dialup connection. That is, the remote system will ask you to log in and give a password, in whatever manner it would normally ask a user who had just dialed it up.
- Computer mail. This allows you to send messages to users on other computers. Originally, people tended to use only one or two specific computers. They would maintain “mail files” on those machines. The computer mail system is simply a way for you to add a message to another user’s mail file. There are some problems with this in an environment where microcomputers are used. The most serious is that a micro is not well suited to receive computer mail. When you send mail, the mail software expects to be able to open a connection to the addressee’s computer, in order to send the mail. If this is a microcomputer, it may be turned off, or it may be running an application other than the mail system. For this reason, mail is normally handled by a larger system, where it is practical to have a mail server running all the time. Microcomputer mail software then becomes a user interface that retrieves mail from the mail server.
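The mail model above can be sketched by composing a message with Python's standard email library. Actually sending it would require a mail server that is always running (via smtplib), so this sketch only builds and inspects the message; the addresses are hypothetical.

```python
from email.message import EmailMessage

# Compose a message as an SMTP-era mail client would. Delivery would then be
# handed to a mail server that is always on, so that the recipient's machine
# need not be running when the mail arrives.
msg = EmailMessage()
msg["From"] = "alice@example.com"     # hypothetical sender
msg["To"] = "bob@example.com"         # hypothetical recipient
msg["Subject"] = "TCP/IP services"
msg.set_content("Mail is delivered to a server that is always running.")

print(msg["To"])       # bob@example.com
print(msg["Subject"])  # TCP/IP services
```

A microcomputer's mail program would later retrieve the stored message from the server (e.g. over POP), exactly as the paragraph above describes.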
Features of TCP/IP
A protocol is a set of rules that two or more machines must follow to talk to one another. These rules are independent of the applications at the two ends of the communication channel, which need not know how the data is carried. The goals of TCP/IP were set by the US Department of Defence, and today they are its inherent features:
- Independence of vendor, type of machine and network - this was necessary to finally break the monopoly of vendors who claimed that their product alone would save the world.
- Failure recovery - Being originally meant for the defence network, it should be able to divert data immediately through other routes if one or more parts of the network went down.
- Facility to connect new subnetworks without significant disruption of services.
- High error rate handling - The transmission, irrespective of the distance travelled, must be 100% reliable, with facilities for full error control.
- Enable reliable transmission of files, remote login and remote execution of commands.
TCP/IP originally began with the development of a collection of programs (the DARPA set) that enabled computers to talk among themselves. Later, Berkeley developed an entire suite of tools that are today known as the r-utilities because all their command names are prefixed with an "r". Some of the most important applications available in the TCP/IP family are:
- ftp and rcp for file transfer
- telnet and rlogin for logging in to remote machines
- rsh (rcmd in SCO UNIX) for executing a command in a remote machine without logging in
- The Network File System (NFS) which lets one machine treat the file system of a remote machine as its own
- The electronic mail service using the Simple Mail Transfer Protocol (SMTP), the Post Office Protocol (POP) and the mail, pine and elm mailers
- Remote printing which allows people to access printers on remote computers as if they were connected locally
- The Hypertext Transfer Protocol (HTTP) of the World Wide Web, which browsers like Netscape use to fetch HTML documents
- The Point-to-Point Protocol (PPP), which makes all these facilities available through a telephone line
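As an illustration of HTTP from the list above, the sketch below runs a one-request web server on the loopback interface and fetches a single document from it with Python's standard library. The page content and path are arbitrary.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal HTTP exchange: the server returns one HTML document, the client
# (standing in for a browser) requests it with a GET.
class Page(BaseHTTPRequestHandler):
    def do_GET(self):
        content = b"<html>hello</html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(content)))
        self.end_headers()
        self.wfile.write(content)

    def log_message(self, *args):  # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Page)          # OS picks a free port
threading.Thread(target=server.handle_request, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/index.html")
resp = conn.getresponse()
body = resp.read()
print(resp.status, body)  # 200 b'<html>hello</html>'
conn.close()
server.server_close()
```

A browser does exactly this request/response dance, then renders the returned HTML instead of printing it.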
TCP/IP Terminology
The Internet standards use a specific set of terms when referring to network elements and concepts related to TCP/IP networking. These terms provide a foundation for subsequent chapters and for describing the components of an IP network.

Common terms and concepts in TCP/IP are defined as follows:
- Node Any device, including routers and hosts, that runs an implementation of IP.
- Router A node that can forward IP packets not explicitly addressed to itself. On an IPv6 network, a router also typically advertises its presence and host configuration information.
- Host A node that cannot forward IP packets not explicitly addressed to itself (a non-router). A host is typically the source and the destination of IP traffic. A host silently discards traffic that it receives but that is not explicitly addressed to itself.
- Upper-layer protocol A protocol above IP that uses IP as its transport. Examples include Internet layer protocols such as the Internet Control Message Protocol (ICMP) and Transport layer protocols such as the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).
- LAN segment A portion of a subnet consisting of a single medium that is bounded by bridges.
- Subnet One or more LAN segments that are bounded by routers and use the same IP address prefix. Other terms for subnet are network segment and link.
- Network Two or more subnets connected by routers. Another term for network is internetwork.
- Neighbor A node connected to the same subnet as another node.
- Interface The representation of a physical or logical attachment of a node to a subnet. An example of a physical interface is a network adapter.
- Address An identifier that can be used as the source or destination of IP packets and that is assigned at the Internet layer to an interface or set of interfaces.
- Packet The protocol data unit (PDU) that exists at the Internet layer and comprises an IP header and payload.
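The last definition (a packet is an IP header plus a payload) can be illustrated by hand-assembling a minimal 20-byte IPv4 header with Python's struct module. The field values below are illustrative, the destination address is hypothetical, and the checksum is left unset.

```python
import struct

src = bytes([192, 168, 45, 67])      # example address used in this section
dst = bytes([10, 0, 0, 1])           # hypothetical destination
header = struct.pack("!BBHHHBBH4s4s",
                     (4 << 4) | 5,   # version 4, header length 5 * 4 bytes
                     0,              # type of service
                     20 + 4,         # total length: header + 4-byte payload
                     0, 0,           # identification, flags/fragment offset
                     64,             # time to live
                     17,             # protocol 17 = UDP
                     0,              # checksum (left at 0 in this sketch)
                     src, dst)
packet = header + b"data"            # packet = IP header + payload

version = packet[0] >> 4
total_len = struct.unpack("!H", packet[2:4])[0]
print(version, total_len)  # 4 24
```

Routers inspect exactly these header fields (in particular the destination address) to decide where to forward each packet; they never look inside the payload.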
In a network, a computer is known as a host, sometimes a node, and every such host has a hostname. This name is unique throughout the network. Each machine is fitted with a network interface card that is connected by wire to the corresponding cards in other machines. All communication between hosts normally takes place through these network interfaces only.
Every TCP/IP network has an address that is used by external networks to direct their messages. Every host in the network has an address as well, and the combination of these two addresses forms the complete network address of the host. For instance, 192.168 (or strictly speaking, 192.168.0.0) could be the address of a network, and a host within the network could have the host address of 45.67. In that case, 192.168.45.67 represents the complete network address of the host. This address has to be unique not only within the network, but also across all connected networks. And, if the network is hooked up to the Internet, it has to be unique throughout the world.
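The network/host split just described can be sketched with Python's ipaddress module, assuming (as the example implies) that the 192.168 network uses a 16-bit network prefix.

```python
import ipaddress

# Split the example address 192.168.45.67 into its network and host parts,
# assuming a /16 (16-bit) network prefix.
host = ipaddress.ip_interface("192.168.45.67/16")
print(host.network)               # 192.168.0.0/16  (the network address)
print(host.ip)                    # 192.168.45.67   (the complete address)
print(int(host.ip) & 0x0000FFFF)  # 11587 = 45 * 256 + 67 (the host part)
```

External networks route on the network part (192.168.0.0/16); only routers inside that network care about the host part (45.67).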
When two or more networks are connected together and use the TCP/IP protocol for communication, we have an internet - a network of networks. A local internet, popularly known as an intranet, may easily be connected to the Internet, which also uses the same protocol. Many installations now have several kinds of computers, including microcomputers, workstations, minicomputers, and mainframes. These computers are likely to be configured to perform specialized tasks. Although people are still likely to work with one specific computer, that computer will call on other systems on the net for specialized services. This has led to the "server/client" model of network services.
A server is a system that provides a specific service for the rest of the network. A client is another system that uses that service. (Note that the server and client need not be on different computers. They could be different programs running on the same computer.)
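The server/client model can be sketched with a TCP echo service. As the parenthetical note says, server and client need not be on different computers; in this sketch they are two threads of one process, talking over the loopback interface.

```python
import socket
import threading

# The server: accepts one connection and echoes back whatever it receives.
def serve(listener):
    conn, _ = listener.accept()        # wait for a client to connect
    with conn:
        conn.sendall(conn.recv(1024))  # provide the "echo" service

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # OS picks a free port
server.listen(1)
threading.Thread(target=serve, args=(server,), daemon=True).start()

# The client: connects to the server and uses its service.
client = socket.create_connection(("127.0.0.1", server.getsockname()[1]))
client.sendall(b"ping")
reply = client.recv(1024)
print(reply)  # b'ping'
client.close()
server.close()
```

Unlike the UDP sketch earlier, TCP establishes a connection first, so delivery and ordering are handled by the protocol rather than by the application.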
Here are the kinds of servers typically present in a modern computer setup. Note that these computer services can all be provided within the framework of TCP/IP.
- network file systems. A network file system provides the illusion that disks or other devices from one system are directly connected to other systems. There is no need to use a special network utility to access a file on another system. Your computer simply thinks it has some extra disk drives. These extra “virtual” drives refer to the other system’s disks. This capability is useful for several different purposes. It lets you put large disks on a few computers, but still give others access to the disk space. Aside from the obvious economic benefits, this allows people working on several computers to share common files. It makes system maintenance and backup easier, because you don’t have to worry about updating and backing up copies on lots of different machines. A number of vendors now offer high-performance diskless computers. These computers have no disk drives at all. They are entirely dependent upon disks attached to common “file servers”.
- remote printing. This allows you to access printers on other computers as if they were directly attached to yours. (The most commonly used protocol is the remote lineprinter protocol from Berkeley Unix)
- remote execution. This is useful when you can do most of your work on a small computer, but a few tasks require the resources of a larger system. There are a number of different kinds of remote execution. Some operate on a command by command basis. That is, you request that a specific command or set of commands should run on some specific computer. However there are also “remote procedure call” systems that allow a program to call a subroutine that will run on another computer.
- name servers. In large installations, there are a number of different collections of names that have to be managed. This includes users and their passwords, names and network addresses for computers, and accounts. It becomes very tedious to keep this data up to date on all of the computers. Thus the databases are kept on a small number of systems. Other systems access the data over the network.
- terminal servers. Many installations no longer connect terminals directly to computers. Instead they connect them to terminal servers. A terminal server is simply a small computer that only knows how to run telnet (or some other protocol to do remote login). If your terminal is connected to one of these, you simply type the name of a computer, and you are connected to it. Generally it is possible to have active connections to more than one computer at the same time. The terminal server will have provisions to switch between connections rapidly, and to notify you when output is waiting for another connection.
- network-oriented window systems. Until recently, high-performance graphics programs had to execute on a computer that had a bit-mapped graphics screen directly attached to it. Network window systems allow a program to use a display on a different computer. Full-scale network window systems provide an interface that lets you distribute jobs to the systems that are best suited to handle them, but still give you a single graphically-based user interface.
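The name-server idea above can be shown in miniature with Python's resolver interface: the resolver maps a hostname to a network address so that users and programs never have to remember numeric addresses.

```python
import socket

# Ask the system's resolver (which may consult a name server) for the
# IPv4 address of a hostname. "localhost" resolves locally on any system.
address = socket.gethostbyname("localhost")
print(address)  # 127.0.0.1 on most systems
```

For hosts on the real Internet, the same call consults the Domain Name System, one of the distributed name databases this section describes.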
TCP/IP - Connectionless Technology:
TCP/IP is built on “connectionless” technology. Information is transferred as a sequence of “datagrams”. A datagram is a collection of data that is sent as a single message. Each of these datagrams is sent through the network individually. There are provisions to open connections (i.e. to start a conversation that will continue for some time). However at some level, information from those connections is broken up into datagrams, and those datagrams are treated by the network as completely separate. For example, suppose you want to transfer a 15000 octet file. Most networks can’t handle a 15000 octet datagram. So the protocols will break this up into something like 30 500-octet datagrams. Each of these datagrams will be sent to the other end. At that point, they will be put back together into the 15000-octet file. However while those datagrams are in transit, the network doesn’t know that there is any connection between them. It is perfectly possible that datagram 14 will actually arrive before datagram 13. It is also possible that somewhere in the network, an error will occur, and some datagram won’t get through at all. In that case, that datagram has to be sent again.
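The fragmentation arithmetic above (a 15000-octet file split into 30 datagrams of 500 octets each, which may arrive out of order) can be sketched as follows; sequence numbers are what let the receiver reassemble the file.

```python
import random

FILE_SIZE = 15000
DATAGRAM_SIZE = 500
payload = bytes(i % 256 for i in range(FILE_SIZE))  # the "file" being sent

# Split into numbered datagrams: 15000 / 500 = 30 of them.
datagrams = [(seq, payload[seq * DATAGRAM_SIZE:(seq + 1) * DATAGRAM_SIZE])
             for seq in range(FILE_SIZE // DATAGRAM_SIZE)]
print(len(datagrams))  # 30

random.shuffle(datagrams)  # the network may deliver datagram 14 before 13
reassembled = b"".join(part for _, part in sorted(datagrams))
print(reassembled == payload)  # True
```

Handling the other failure mode in the paragraph, a datagram that never arrives at all, is the retransmission job that TCP adds on top of this.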