Virtual Private Networks


Background
Virtual private networks (VPNs) are a fairly nebulous subject; there is no single defining product, nor
even much of a consensus among VPN vendors as to what constitutes a VPN. Consequently, although everyone
seems to know what a VPN is, establishing a single definition can be remarkably difficult. Some definitions
are broad enough to let one claim that Frame Relay qualifies as a VPN when, in fact, it is
an overlay network. Although an overlay network secures transmissions through a public network, it
does so passively via logical separation of the data streams.
VPNs provide a more active form of security by either encrypting or encapsulating data for transmission
through an unsecured network. These two types of security—encryption and encapsulation—form the
foundation of virtual private networking. However, both encryption and encapsulation are generic terms
that describe a function that can be performed by a myriad of specific technologies. To add to the
confusion, these two sets of technologies can be combined in different implementation topologies. Thus,
VPNs can vary widely from vendor to vendor.
This chapter provides an overview of building VPNs using the Layer 2 Tunneling Protocol (L2TP), and
it explores the possible implementation topologies.

Layer 2 Tunneling Protocol
The Internet Engineering Task Force (IETF) was faced with competing proposals from Microsoft and
Cisco Systems for a protocol specification that would secure the transmission of IP datagrams through
uncontrolled and untrusted network domains. Microsoft’s proposal was an attempt to standardize the
Point-to-Point Tunneling Protocol (PPTP), which it had championed. Cisco, too, had a protocol designed
to perform a similar function. The IETF combined the best elements of each proposal and specified the
open standard L2TP.
The simplest description of L2TP’s functionality is that it carries the Point-to-Point Protocol (PPP)
through networks that aren’t point-to-point. PPP has become the most popular communications protocol
for remote access using circuit-switched transmission facilities such as POTS lines or ISDN to create a
temporary point-to-point connection between the calling device and its destination.
L2TP simulates a point-to-point connection by encapsulating PPP datagrams for transportation through
routed networks or internetworks. Upon arrival at their intended destination, the encapsulation is
removed, and the PPP datagrams are restored to their original format. Thus, a point-to-point
communications session can be supported through disparate networks. This technique is known as
tunneling.
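To make the encapsulation concrete, the following Python sketch uses a deliberately simplified, hypothetical header layout rather than the bit-exact RFC 2661 format; it shows a PPP frame being wrapped for transit through a routed network and then restored unchanged at the far end of the tunnel.

def l2tp_encapsulate(ppp_frame: bytes, tunnel_id: int, session_id: int) -> bytes:
    """Wrap a PPP frame in a minimal, simplified L2TP data header."""
    l2tp_header = tunnel_id.to_bytes(2, "big") + session_id.to_bytes(2, "big")
    return l2tp_header + ppp_frame        # this payload would ride inside UDP/IP

def l2tp_decapsulate(l2tp_payload: bytes) -> bytes:
    """Strip the simplified header and recover the original PPP frame."""
    return l2tp_payload[4:]               # skip the 2-byte tunnel and session IDs

ppp_frame = b"\xff\x03\x00\x21" + b"IP datagram..."    # PPP header plus payload
on_the_wire = l2tp_encapsulate(ppp_frame, tunnel_id=7, session_id=42)
assert l2tp_decapsulate(on_the_wire) == ppp_frame      # restored to original format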

Operational Mechanics
In a traditional remote access scenario, a remote user (or client) accesses a network by connecting
directly to a network access server (NAS). Generally, the NAS provides several distinct functions: It
terminates the point-to-point communications session of the remote user, validates the identity of that
user, and then serves that user with access to the network. Although most remote access technologies
bundle these functions into a single device, L2TP separates them into two physically separate devices:
the L2TP Access Concentrator (LAC) and the L2TP Network Server (LNS).
As their names imply, the LAC handles access and authentication for the remote user. Upon successful
authentication, the remote user’s session is forwarded to the LNS, which admits that user to the network.
This separation enables greater implementation flexibility than other remote access technologies offer.

Implementation Topologies
L2TP can be implemented in two distinct topologies:
• Client-aware tunneling
• Client-transparent tunneling
The distinction between these two topologies is whether the client machine that is using L2TP to access
a remote network is aware that its connection is being tunneled.

Client-Aware Tunneling
The first implementation topology is known as client-aware tunneling. This name is derived from the
remote client initiating (hence, being “aware” of) the tunnel. In this scenario, the client establishes a
logical connection within a physical connection to the LAC. The client remains aware of the tunneled
connection all the way through to the LNS, and it can even determine which of its traffic goes through
the tunnel.

Client-Transparent Tunneling
Client-transparent tunneling features L2TP access concentrators (LACs) distributed geographically
close to the remote users. Such geographic dispersion is intended to reduce the long-distance telephone
charges that would otherwise be incurred by remote users dialing into a centrally located LAC.
The remote users need not support L2TP directly; they merely establish a point-to-point communication
session with the LAC using PPP. Ostensibly, the user will be encapsulating IP datagrams in PPP frames.
The LAC exchanges PPP messages with the remote user and establishes an L2TP tunnel with the LNS,
through which the remote user’s PPP messages are passed.
The LNS is the remote user’s gateway to its home network. It is the terminus of the tunnel; it strips off
all L2TP encapsulation and serves up network access for the remote user.

Adding More Security
As useful as L2TP is, it is important to recognize that it is not a panacea. It enables flexibility in
delivering remote access, but it does not afford a high degree of security for data in transit. This is due
in large part to the relatively nonsecure nature of PPP. In fairness, PPP was designed explicitly for
point-to-point communications, so securing the connection did not need to be a high priority.
An additional cause for concern stems from the fact that L2TP’s tunnels are not cryptographic. Their
data payloads are transmitted in the clear, wrapped only by L2TP and PPP framing. However, additional
security may be afforded by implementing the IPSec protocols in conjunction with L2TP. The IPSec
protocols support strong authentication technologies as well as encryption.
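One way to picture the combination is by the order in which the protocols nest on the wire: IPSec's ESP encrypts the UDP datagram that carries L2TP, which in turn carries the PPP session and the user's original datagram. The short sketch below prints that nesting order; it is a conceptual aid, not a packet dissection.

# Conceptual nesting of an L2TP-over-IPSec packet, outermost layer first.
layers = [
    "IP (outer)",       # delivery between the tunnel endpoints
    "ESP",              # IPSec encryption and authentication of everything below
    "UDP (port 1701)",  # the transport that carries L2TP
    "L2TP",             # tunnel and session header
    "PPP",              # the remote user's point-to-point session
    "IP (inner)",       # the user's original datagram
]
for depth, layer in enumerate(layers):
    print("  " * depth + layer)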

Summary
VPNs offer a compelling vision of connectivity through foreign networks at greatly reduced operating
costs. However, the reduced costs are accompanied by increased risk. L2TP offers an open standard
approach for supporting a remote access VPN. When augmented by IPSec protocols, L2TP enables the
realization of the promise of a VPN: an open standard technology for securing remote access in a
virtually private network.

Fiber Distributed Data Interface


Introduction
The Fiber Distributed Data Interface (FDDI) specifies a 100-Mbps token-passing, dual-ring LAN using
fiber-optic cable. FDDI is frequently used as high-speed backbone technology because of its support for
high bandwidth and greater distances than copper. It should be noted that relatively recently, a related
copper specification, called Copper Distributed Data Interface (CDDI), has emerged to provide
100-Mbps service over copper. CDDI is the implementation
of FDDI protocols over twisted-pair copper wire. This chapter focuses mainly on FDDI specifications
and operations, but it also provides a high-level overview of CDDI.
FDDI uses dual-ring architecture with traffic on each ring flowing in opposite directions (called
counter-rotating). The dual rings consist of a primary and a secondary ring. During normal operation,
the primary ring is used for data transmission, and the secondary ring remains idle. As will be discussed
in detail later in this chapter, the primary purpose of the dual rings is to provide superior reliability and
robustness. Figure 8-1 shows the counter-rotating primary and secondary FDDI rings.

Standards
FDDI was developed by the American National Standards Institute (ANSI) X3T9.5 standards committee
in the mid-1980s. At the time, high-speed engineering workstations were beginning to tax the bandwidth
of existing local-area networks (LANs) based on Ethernet and Token Ring. A new LAN medium was
needed that could easily support these workstations and their new distributed applications. At the same
time, network reliability had become an increasingly important issue as system managers migrated
mission-critical applications from large computers to networks. FDDI was developed to fill these needs.
After completing the FDDI specification, ANSI submitted FDDI to the International Organization for
Standardization (ISO), which created an international version of FDDI that is completely compatible
with the ANSI standard version.

FDDI Transmission Media
FDDI uses optical fiber as the primary transmission medium, but it also can run over copper cabling. As
mentioned earlier, FDDI over copper is referred to as Copper-Distributed Data Interface (CDDI).
Optical fiber has several advantages over copper media. In particular, security, reliability, and
performance all are enhanced with optical fiber media because fiber does not emit electrical signals. A
physical medium that does emit electrical signals (copper) can be tapped and therefore would permit
unauthorized access to the data that is transiting the medium. In addition, fiber is immune to electrical
interference from radio frequency interference (RFI) and electromagnetic interference (EMI). Fiber
historically has supported much higher bandwidth (throughput potential) than copper, although recent
technological advances have made copper capable of transmitting at 100 Mbps. Finally, FDDI allows 2
km between stations using multimode fiber, and even longer distances using single-mode fiber.
FDDI defines two types of optical fiber: single-mode and multimode. A mode is a ray of light that enters
the fiber at a particular angle. Multimode fiber uses an LED as the light-generating device, while
single-mode fiber generally uses lasers.
Multimode fiber allows multiple modes of light to propagate through the fiber. Because these modes of
light enter the fiber at different angles, they will arrive at the end of the fiber at different times. This
characteristic is known as modal dispersion. Modal dispersion limits the bandwidth and distances that
can be accomplished using multimode fibers. For this reason, multimode fiber is generally used for
connectivity within a building or a relatively geographically contained environment.
Single-mode fiber allows only one mode of light to propagate through the fiber. Because only a single
mode of light is used, modal dispersion is not present with single-mode fiber. Therefore, single-mode
fiber is capable of delivering considerably higher performance connectivity over much larger distances,
which is why it generally is used for connectivity between buildings and within environments that are
more geographically dispersed.
Figure 8-2 depicts single-mode fiber using a laser light source and multimode fiber using a light emitting
diode (LED) light source.

FDDI Specifications
FDDI specifies the physical and media-access portions of the OSI reference model. FDDI is not actually
a single specification, but it is a collection of four separate specifications, each with a specific function.
Combined, these specifications have the capability to provide high-speed connectivity between
upper-layer protocols such as TCP/IP and IPX, and media such as fiber-optic cabling.
FDDI’s four specifications are the Media Access Control (MAC), Physical Layer
Protocol (PHY), Physical-Medium Dependent (PMD), and Station Management (SMT) specifications.
The MAC specification defines how the medium is accessed, including frame format, token handling,
addressing, algorithms for calculating the cyclic redundancy check (CRC) value, and error-recovery
mechanisms. The PHY specification defines data encoding/decoding procedures, clocking requirements,
and framing, among other functions. The PMD specification defines the characteristics of the
transmission medium, including fiber-optic links, power levels, bit-error rates, optical components, and
connectors. The SMT specification defines FDDI station configuration, ring configuration, and ring
control features, including station insertion and removal, initialization, fault isolation and recovery,
scheduling, and statistics collection.
FDDI is similar to IEEE 802.3 Ethernet and IEEE 802.5 Token Ring in its relationship with the OSI
model. Its primary purpose is to provide connectivity between upper OSI layers of common protocols
and the media used to connect network devices. Figure 8-3 illustrates the four FDDI specifications and
their relationship to each other and to the IEEE-defined Logical Link Control (LLC) sublayer. The LLC
sublayer is a component of Layer 2, the data link layer, of the OSI reference model.

FDDI Station-Attachment Types
One of the unique characteristics of FDDI is that multiple ways actually exist by which to connect FDDI
devices. FDDI defines four types of devices: single-attachment station (SAS), dual-attachment station
(DAS), single-attached concentrator (SAC), and dual-attached concentrator (DAC).
An SAS attaches to only one ring (the primary) through a concentrator. One of the primary advantages
of connecting devices with SAS attachments is that the devices will not have any effect on the FDDI ring
if they are disconnected or powered off. Concentrators will be covered in more detail in the following
discussion.
Each FDDI DAS has two ports, designated A and B. These ports connect the DAS to the dual FDDI ring.
Therefore, each port provides a connection for both the primary and the secondary rings. As you will see
in the next section, devices using DAS connections will affect the rings if they are disconnected or
powered off. Figure 8-4 shows FDDI DAS A and B ports with attachments to the primary and secondary
rings.
An FDDI concentrator (also called a dual-attachment concentrator [DAC]) is the building block of an
FDDI network. It attaches directly to both the primary and secondary rings and ensures that the failure
or power-down of any SAS does not bring down the ring. This is particularly useful when PCs, or similar
devices that are frequently powered on and off, connect to the ring. Figure 8-5 shows the ring
attachments of an FDDI SAS, DAS, and concentrator.

FDDI Fault Tolerance
FDDI provides a number of fault-tolerant features. In particular, FDDI’s dual-ring environment, the
implementation of the optical bypass switch, and dual-homing support make FDDI a resilient media
technology.

Dual Ring
FDDI’s primary fault-tolerant feature is the dual ring. If a station on the dual ring fails or is powered
down, or if the cable is damaged, the dual ring is automatically wrapped (doubled back onto itself) into
a single ring. When the ring is wrapped, the dual-ring topology becomes a single-ring topology. Data
continues to be transmitted on the FDDI ring without performance impact during the wrap condition.
When a single station fails, as shown in Figure 8-6, devices on either side of the failed (or
powered-down) station wrap, forming a single ring. Network operation continues for the remaining
stations on the ring. When a cable failure occurs, as shown in Figure 8-7, devices on either side of the
cable fault wrap. Network operation continues for all stations.
It should be noted that FDDI truly provides fault tolerance against a single failure only. When two or
more failures occur, the FDDI ring segments into two or more independent rings that are incapable of
communicating with each other.
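The wrap behavior can be modeled with a short sketch. The list-of-stations model below is hypothetical and greatly simplified; it only illustrates the point made above: a single failure leaves one ring of survivors, whereas two failures split the survivors into isolated segments.

def surviving_rings(ring, failed):
    """Cut the circular station list at each failed station and return the
    groups of stations that can still reach one another."""
    n = len(ring)
    if not failed:
        return [list(ring)]
    segments, current = [], []
    # Start the walk just after the first failed station so segments line up.
    start = next(i for i, s in enumerate(ring) if s in failed)
    for k in range(1, n + 1):
        station = ring[(start + k) % n]
        if station in failed:
            if current:
                segments.append(current)
                current = []
        else:
            current.append(station)
    if current:
        segments.append(current)
    return segments

stations = ["A", "B", "C", "D", "E", "F"]
print(surviving_rings(stations, {"C"}))        # [['D', 'E', 'F', 'A', 'B']] -- one wrapped ring
print(surviving_rings(stations, {"C", "F"}))   # [['D', 'E'], ['A', 'B']] -- two isolated rings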

Optical Bypass Switch
An optical bypass switch provides continuous dual-ring operation if a device on the dual ring fails. This
is used both to prevent ring segmentation and to eliminate failed stations from the ring. The optical
bypass switch performs this function using optical mirrors that pass light from the ring directly to the
DAS device during normal operation. If a failure of the DAS device occurs, such as a power-off, the
optical bypass switch will pass the light through itself by using internal mirrors and thereby will
maintain the ring’s integrity.
The benefit of this capability is that the ring will not enter a wrapped condition in case of a device failure.
Figure 8-8 shows the functionality of an optical bypass switch in an FDDI network. When the optical
bypass is in use, however, you may notice significant degradation of network performance as packets are
sent through the bypass unit.

Dual Homing
Critical devices, such as routers or mainframe hosts, can use a fault-tolerant technique called dual
homing to provide additional redundancy and to help guarantee operation. In dual-homing situations, the
critical device is attached to two concentrators. Figure 8-9 shows a dual-homed configuration for devices
such as file servers and routers.
One pair of concentrator links is declared the active link; the other pair is declared passive. The passive
link stays in backup mode until the primary link (or the concentrator to which it is attached) is
determined to have failed. When this occurs, the passive link automatically activates.

FDDI Frame Format
The FDDI frame format is similar to the format of a Token Ring frame. This is one of the areas in which
FDDI borrows heavily from earlier LAN technologies, such as Token Ring. FDDI frames can be as large
as 4,500 bytes. Figure 8-10 shows the frame format of an FDDI data frame and token; a simplified parsing
sketch in Python follows the field list.
• Preamble—Gives a unique sequence that prepares each station for an upcoming frame.
• Start delimiter—Indicates the beginning of a frame by employing a signaling pattern that
differentiates it from the rest of the frame.
• Frame control—Indicates the size of the address fields and whether the frame contains
asynchronous or synchronous data, among other control information.
• Destination address—Contains a unicast (singular), multicast (group), or broadcast (every station)
address. As with Ethernet and Token Ring addresses, FDDI destination addresses are 6 bytes long.
• Source address—Identifies the single station that sent the frame. As with Ethernet and Token Ring
addresses, FDDI source addresses are 6 bytes long.
• Data—Contains either information destined for an upper-layer protocol or control information.
• Frame check sequence (FCS)—Is filled in by the source station with a calculated cyclic redundancy
check value dependent on frame contents (as with Token Ring and Ethernet). The destination
station recalculates the value to determine whether the frame was damaged in transit. If so, the
frame is discarded.
• End delimiter—Contains unique symbols (which cannot be data symbols) that indicate the end of the
frame.
• Frame status—Allows the source station to determine whether an error occurred; identifies whether
the frame was recognized and copied by a receiving station.
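The sketch below parses a simplified, hypothetical byte layout for an FDDI data frame (1-byte frame control, 6-byte destination and source addresses, variable data, 4-byte FCS). Real frames are transmitted as 4B/5B symbols with preamble, delimiters, and frame status, all omitted here; the CRC shown uses the standard 32-bit polynomial that FDDI shares with Ethernet.

import zlib

# Simplified frame layout for illustration: FC | DA(6) | SA(6) | DATA | FCS(4).
def parse_fddi_frame(frame: bytes) -> dict:
    frame_control = frame[0]
    destination = frame[1:7]
    source = frame[7:13]
    data = frame[13:-4]
    fcs = int.from_bytes(frame[-4:], "big")
    # The receiving station recomputes the CRC over FC+DA+SA+DATA and
    # discards the frame if the value does not match.
    return {
        "frame_control": frame_control,
        "destination": destination.hex(":"),
        "source": source.hex(":"),
        "data": data,
        "fcs_ok": zlib.crc32(frame[:-4]) == fcs,
    }

body = bytes([0x50]) + bytes.fromhex("01005e000001") + bytes.fromhex("00000c123456") + b"upper-layer data"
frame = body + zlib.crc32(body).to_bytes(4, "big")
print(parse_fddi_frame(frame))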

Copper Distributed Data Interface
Copper Distributed Data Interface (CDDI) is the implementation of FDDI protocols over twisted-pair
copper wire. Like FDDI, CDDI provides data rates of 100 Mbps and uses dual-ring architecture to
provide redundancy. CDDI supports distances of about 100 meters from desktop to concentrator.
CDDI is defined by the ANSI X3T9.5 Committee. The CDDI standard is officially named the
Twisted-Pair Physical Medium-Dependent (TP-PMD) standard. It is also referred to as the Twisted-Pair
Distributed Data Interface (TP-DDI), consistent with the term Fiber Distributed Data Interface (FDDI).
CDDI is consistent with the physical and media-access control layers defined by the ANSI standard.
The ANSI standard recognizes only two types of cables for CDDI: shielded twisted pair (STP) and
unshielded twisted pair (UTP). STP cabling has 150-ohm impedance and adheres to EIA/TIA 568 (IBM
Type 1) specifications. UTP is data-grade cabling (Category 5) consisting of four unshielded pairs using
tight-pair twists and specially developed insulating polymers in plastic jackets adhering to EIA/TIA
568B specifications.

Introduction to WAN Technologies


What Is a WAN?
A WAN is a data communications network that covers a relatively broad geographic area and that often
uses transmission facilities provided by common carriers, such as telephone companies. WAN
technologies generally function at the lower three layers of the OSI reference model: the physical layer,
the data link layer, and the network layer. Figure 3-1 illustrates the relationship between the common
WAN technologies and the OSI model.


Point-to-Point Links
A point-to-point link provides a single, pre-established WAN communications path from the customer
premises through a carrier network, such as a telephone company, to a remote network. Point-to-point
lines are usually leased from a carrier and thus are often called leased lines. For a point-to-point line, the
carrier allocates pairs of wire and facility hardware to your line only. These circuits are generally priced
based on bandwidth required and distance between the two connected points. Point-to-point links are
generally more expensive than shared services such as Frame Relay. Figure 3-2 illustrates a typical
point-to-point link through a WAN.


Circuit Switching
Switched circuits allow data connections that can be initiated when needed and terminated when
communication is complete. This works much like a normal telephone line works for voice
communication. Integrated Services Digital Network (ISDN) is a good example of circuit switching.
When a router has data for a remote site, the switched circuit is initiated with the circuit number of the
remote network. In the case of ISDN circuits, the device actually places a call to the telephone number
of the remote ISDN circuit. When the
two networks are connected and authenticated, they can transfer data. When the data transmission is
complete, the call can be terminated. Figure 3-3 illustrates an example of this type of circuit.


Packet Switching
Packet switching is a WAN technology in which users share common carrier resources. Because this
allows the carrier to make more efficient use of its infrastructure, the cost to the customer is generally
much lower than with point-to-point lines. In a packet switching setup, networks have connections into
the carrier’s network, and many customers share the carrier’s network. The carrier can then create virtual
circuits between customers’ sites by which packets of data are delivered from one to the other through
the network. The section of the carrier’s network that is shared is often referred to as a cloud.
Some examples of packet-switching networks include Asynchronous Transfer Mode (ATM), Frame
Relay, Switched Multimegabit Data Services (SMDS), and X.25.


WAN Virtual Circuits
A virtual circuit is a logical circuit created within a shared network between two network devices. Two
types of virtual circuits exist: switched virtual circuits (SVCs) and permanent virtual circuits (PVCs).
SVCs are virtual circuits that are dynamically established on demand and terminated when transmission
is complete. Communication over an SVC consists of three phases: circuit establishment, data transfer,
and circuit termination. The establishment phase involves creating the virtual circuit between the source
and destination devices. Data transfer involves transmitting data between the devices over the virtual
circuit, and the circuit termination phase involves tearing down the virtual circuit between the source and
destination devices. SVCs are used in situations in which data transmission between devices is sporadic,
largely because SVCs increase bandwidth used due to the circuit establishment and termination phases,
but they decrease the cost associated with constant virtual circuit availability.
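A toy state machine makes the three phases explicit. The class below is purely illustrative; its events and checks are not any particular carrier's signaling protocol.

from enum import Enum, auto

class SVCState(Enum):
    IDLE = auto()
    ESTABLISHED = auto()

class SwitchedVirtualCircuit:
    """Illustrative model of an SVC's three phases."""
    def __init__(self):
        self.state = SVCState.IDLE

    def establish(self):                 # phase 1: circuit establishment
        self.state = SVCState.ESTABLISHED

    def transfer(self, data: bytes):     # phase 2: data transfer
        if self.state is not SVCState.ESTABLISHED:
            raise RuntimeError("circuit not established")
        # ... data is transmitted across the virtual circuit here ...

    def terminate(self):                 # phase 3: circuit termination
        self.state = SVCState.IDLE

svc = SwitchedVirtualCircuit()
svc.establish()
svc.transfer(b"payload")
svc.terminate()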
A PVC is a permanently established virtual circuit that consists of one mode: data transfer. PVCs are used
in situations in which data transfer between devices is constant. PVCs decrease the bandwidth use
associated with the establishment and termination of virtual circuits, but they increase costs due to
constant virtual circuit availability. PVCs are generally configured by the service provider when an order
is placed for service.


WAN Dialup Services
Dialup services offer cost-effective methods for connectivity across WANs. Two popular dialup
implementations are dial-on-demand routing (DDR) and dial backup.
DDR is a technique whereby a router can dynamically initiate a call on a switched circuit when it needs
to send data. In a DDR setup, the router is configured to initiate the call when certain criteria are met,
such as a particular type of network traffic needing to be transmitted. When the connection is made,
traffic passes over the line. The router configuration specifies an idle timer that tells the router to drop
the connection when the circuit has remained idle for a certain period.
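The sketch below models that behavior: interesting traffic brings the circuit up, and an idle timer tears it down. The trigger flag, timer value, and class are hypothetical stand-ins, not router configuration commands.

import time

IDLE_TIMEOUT = 120.0      # hypothetical: seconds of silence before the call is dropped

class DialOnDemandLink:
    def __init__(self):
        self.connected = False
        self.last_traffic = 0.0

    def send(self, packet: bytes, interesting: bool) -> None:
        # Only traffic matching the configured criteria may bring the circuit up.
        if not self.connected and interesting:
            self.connected = True                 # place the switched call
        if self.connected:
            self.last_traffic = time.monotonic()
            # ... packet is transmitted over the switched circuit here ...

    def poll_idle_timer(self) -> None:
        # Drop the call once the circuit has been idle long enough.
        if self.connected and time.monotonic() - self.last_traffic > IDLE_TIMEOUT:
            self.connected = False                # terminate the switched call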
Dial backup is another way of configuring DDR. However, in dial backup, the switched circuit is used
to provide backup service for another type of circuit, such as point-to-point or packet switching. The
router is configured so that when a failure is detected on the primary circuit, the dial backup line is
initiated. The dial backup line then supports the WAN connection until the primary circuit is restored.
When this occurs, the dial backup connection is terminated.


WAN Devices
WANs use numerous types of devices that are specific to WAN environments. WAN switches, access
servers, modems, CSU/DSUs, and ISDN terminal adapters are discussed in the following sections. Other
devices found in WAN environments that are used in WAN implementations include routers, ATM
switches, and multiplexers.


WAN Switch
A WAN switch is a multiport internetworking device used in carrier networks. These devices typically
switch such traffic as Frame Relay, X.25, and SMDS, and operate at the data link layer of the OSI
reference model. Figure 3-5 illustrates two routers at remote ends of a WAN that are connected by WAN
switches.


Access Server
An access server acts as a concentration point for dial-in and dial-out connections.


Modem
A modem is a device that interprets digital and analog signals, enabling data to be transmitted over
voice-grade telephone lines. At the source, digital signals are converted to a form suitable for
transmission over analog communication facilities. At the destination, these analog signals are returned
to their digital form. Figure 3-7 illustrates a simple modem-to-modem connection through a WAN.


CSU/DSU
A channel service unit/digital service unit (CSU/DSU) is a digital-interface device used to connect a
router to a digital circuit like a T1. The CSU/DSU also provides signal timing for communication
between these devices. Figure 3-8 illustrates the placement of the CSU/DSU in a WAN implementation.


ISDN Terminal Adapter
An ISDN terminal adapter is a device used to connect ISDN Basic Rate Interface (BRI) connections to
other interfaces, such as EIA/TIA-232 on a router. A terminal adapter is essentially an ISDN modem,
although it is called a terminal adapter because it does not actually convert analog to digital signals.

Introduction to LAN Protocols


What Is a LAN?
A LAN is a high-speed data network that covers a relatively small geographic area. It typically connects
workstations, personal computers, printers, servers, and other devices. LANs offer computer users many
advantages, including shared access to devices and applications, file exchange between connected users,
and communication between users via electronic mail and other applications.


LAN Protocols and the OSI Reference Model
LAN protocols function at the lowest two layers of the OSI reference model, the physical layer and the
data link layer, as discussed in Chapter 1, “Internetworking Basics.” Figure 2-2 illustrates how
several popular LAN protocols map to the OSI reference model.


LAN Media-Access Methods
Media contention occurs when two or more network devices have data to send at the same time. Because
multiple devices cannot talk on the network simultaneously, some type of method must be used to allow
one device access to the network media at a time. This is done in two main ways: carrier sense multiple
access collision detect (CSMA/CD) and token passing.
In networks using CSMA/CD technology such as Ethernet, network devices contend for the network
media. When a device has data to send, it first listens to see if any other device is currently using the
network. If not, it starts sending its data. After finishing its transmission, it listens again to see if a
collision occurred. A collision occurs when two devices send data simultaneously. When a collision
happens, each device waits a random length of time before resending its data. In most cases, a collision
will not occur again between the two devices. Because of this type of network contention, the busier a
network becomes, the more collisions occur. This is why performance of Ethernet degrades rapidly as
the number of devices on a single network increases.
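That relationship between load and collisions can be seen in a toy simulation such as the one below; the slot timing and the random backoff are greatly simplified compared with the actual IEEE 802.3 binary exponential backoff.

import random

def simulate(stations: int, slots: int):
    """Count successful transmissions and collision slots for contending stations."""
    ready_at = [0] * stations            # slot at which each station next tries to send
    sent = collisions = 0
    for slot in range(slots):
        transmitters = [s for s in range(stations) if ready_at[s] <= slot]
        if len(transmitters) == 1:
            sent += 1
            ready_at[transmitters[0]] = slot + 1                 # station always has more data
        elif len(transmitters) > 1:
            collisions += 1
            for s in transmitters:                               # wait a random length of time
                ready_at[s] = slot + 1 + random.randint(1, 8)
    return sent, collisions

for n in (2, 10, 30):
    sent, coll = simulate(n, slots=1000)
    print(f"{n:2d} stations: {sent} frames sent, {coll} collision slots")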
In token-passing networks such as Token Ring and FDDI, a special network frame called a token is
passed around the network from device to device. When a device has data to send, it must wait until it
has the token and then sends its data. When the data transmission is complete, the token is released so
that other devices may use the network media. The main advantage of token-passing networks is that
they are deterministic. In other words, it is easy to calculate the maximum time that will pass before a
device has the opportunity to send data. This explains the popularity of token-passing networks in some
real-time environments such as factories, where machinery must be capable of communicating at a
determinable interval.
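The determinism can be illustrated with back-of-the-envelope arithmetic: in the worst case a station waits for every other station to hold the token once. The per-station holding time below is a hypothetical figure; real Token Ring and FDDI timing is governed by negotiated token-holding and target token rotation timers.

stations = 50
max_token_hold_ms = 10        # hypothetical per-station token-holding time

# Worst case: every other station transmits for its full holding time first.
worst_case_wait_ms = (stations - 1) * max_token_hold_ms
print(f"Worst-case wait before transmitting: {worst_case_wait_ms} ms")   # 490 ms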
For CSMA/CD networks, switches segment the network into multiple collision domains. This reduces
the number of devices per network segment that must contend for the media. By creating smaller
collision domains, the performance of a network can be increased significantly without requiring
addressing changes.

Normally CSMA/CD networks are half-duplex, meaning that while a device sends information, it cannot
receive at the same time. While that device is talking, it is incapable of also listening for other traffic. This is
much like a walkie-talkie. When one person wants to talk, he presses the transmit button and begins
speaking. While he is talking, no one else on the same frequency can talk. When the sending person is
finished, he releases the transmit button and the frequency is available to others.
When switches are introduced, full-duplex operation is possible. Full-duplex works much like a
telephone—you can listen as well as talk at the same time. When a network device is attached directly
to the port of a network switch, the two devices may be capable of operating in full-duplex mode. In
full-duplex mode, performance can be increased, but
not quite as much as some like to claim. A 100-Mbps Ethernet segment is capable of transmitting 200
Mbps of data, but only 100 Mbps can travel in one direction at a time. Because most data connections
are asymmetric (with more data traveling in one direction than the other), the gain is not as great as many
claim. However, full-duplex operation does increase the throughput of most applications because the
network media is no longer shared. Two devices on a full-duplex connection can send data as soon as it
is ready.
Token-passing networks such as Token Ring can also benefit from network switches. In large networks,
the delay between turns to transmit may be significant because the token is passed around the network.


LAN Transmission Methods
LAN data transmissions fall into three classifications: unicast, multicast, and broadcast.
In each type of transmission, a single packet is sent to one or more nodes.
In a unicast transmission, a single packet is sent from the source to a destination on a network. First, the
source node addresses the packet by using the address of the destination node. The packet is then sent
onto the network, and finally, the network passes the packet to its destination.
A multicast transmission consists of a single data packet that is copied and sent to a specific subset of
nodes on the network. First, the source node addresses the packet by using a multicast address. The
packet is then sent into the network, which makes copies of the packet and sends a copy to each node
that is part of the multicast address.
A broadcast transmission consists of a single data packet that is copied and sent to all nodes on the
network. In these types of transmissions, the source node addresses the packet by using the broadcast
address. The packet is then sent on to the network, which makes copies of the packet and sends a copy
to every node on the network.
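For a concrete (IP-specific) illustration, the sketch below sends UDP datagrams from Python's standard library socket module to a unicast address, a multicast group, and the broadcast address. The addresses and port are placeholders.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Unicast: addressed to a single destination host.
sock.sendto(b"to one host", ("192.0.2.10", 5000))

# Multicast: addressed to a group; only members of the group receive it.
sock.sendto(b"to group members", ("239.1.1.1", 5000))

# Broadcast: addressed to every host on the local network.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(b"to every host", ("255.255.255.255", 5000))

sock.close()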


LAN Topologies
LAN topologies define the manner in which network devices are organized. Four common LAN
topologies exist: bus, ring, star, and tree. These topologies are logical architectures, but the actual
devices need not be physically organized in these configurations. Logical bus and ring topologies, for
example, are commonly organized physically as a star. A bus topology is a linear LAN architecture in
which transmissions from network stations propagate the length of the medium and are received by all
other stations. Of the three
most widely used LAN implementations, Ethernet/IEEE 802.3 networks—including
100BaseT—implement a bus topology.

A ring topology is a LAN architecture that consists of a series of devices connected to one another by
unidirectional transmission links to form a single closed loop. Both Token Ring/IEEE 802.5 and FDDI
networks implement a ring topology.

A star topology is a LAN architecture in which the endpoints on a network are connected to a common
central hub, or switch, by dedicated links. Logical bus and ring topologies are often implemented
physically in a star topology.

A tree topology is a LAN architecture that is identical to the bus topology, except that branches with
multiple nodes are possible in this case.


LAN Devices
Devices commonly used in LANs include repeaters, hubs, LAN extenders, bridges, LAN switches, and
routers.
A repeater is a physical layer device used to interconnect the media segments of an extended network.
A repeater essentially enables a series of cable segments to be treated as a single cable. Repeaters receive
signals from one network segment and amplify, retime, and retransmit those signals to another network
segment. These actions prevent signal deterioration caused by long cable lengths and large numbers of
connected devices. Repeaters are incapable of performing complex filtering and other traffic processing.
In addition, all electrical signals, including electrical disturbances and other errors, are repeated and
amplified. The total number of repeaters and network segments that can be connected is limited due to
timing and other issues. Figure 2-6 illustrates a repeater connecting two network segments.

A hub is a physical layer device that connects multiple user stations, each via a dedicated cable.
Electrical interconnections are established inside the hub. Hubs are used to create a physical star network
while maintaining the logical bus or ring configuration of the LAN. In some respects, a hub functions as
a multiport repeater.

A LAN extender is a remote-access multilayer switch that connects to a host router. LAN extenders
forward traffic from all the standard network layer protocols (such as IP, IPX, and AppleTalk) and filter
traffic based on the MAC address or network layer protocol type. LAN extenders scale well because the
host router filters out unwanted broadcasts and multicasts. However, LAN extenders are not capable of
segmenting traffic or creating security firewalls. Figure 2-7 illustrates multiple LAN extenders
connected to the host router through a WAN.

Internetworking Basics


What Is an Internetwork?

An internetwork is a collection of individual networks, connected by intermediate networking devices,
that functions as a single large network. Internetworking refers to the industry, products, and procedures
that meet the challenge of creating and administering internetworks. Figure 1-1 illustrates some different
kinds of network technologies that can be interconnected by routers and other networking devices to
create an internetwork.

History of Internetworking

The first networks were time-sharing networks that used mainframes and attached terminals. Such
environments were implemented by both IBM’s Systems Network Architecture (SNA) and Digital’s
network architecture.
Local-area networks (LANs) evolved around the PC revolution. LANs enabled multiple users in a
relatively small geographical area to exchange files and messages, as well as access shared resources
such as file servers and printers.
Wide-area networks (WANs) interconnect LANs with geographically dispersed users to create
connectivity. Some of the technologies used for connecting LANs include T1, T3, ATM, ISDN, ADSL,
Frame Relay, radio links, and others. New methods of connecting dispersed LANs are appearing
every day.
Today, high-speed LANs and switched internetworks are becoming widely used, largely because they
operate at very high speeds and support such high-bandwidth applications as multimedia and
videoconferencing.
Internetworking evolved as a solution to three key problems: isolated LANs, duplication
of resources, and a lack of network management. Isolated LANs made electronic communication
between different offices or departments impossible. Duplication of resources meant that the same
hardware and software had to be supplied to each office or department, as did separate support staff. This
lack of network management meant that no centralized method of managing and troubleshooting
networks existed.

Open System Interconnection Reference Model

The Open System Interconnection (OSI) reference model describes how information from a software
application in one computer moves through a network medium to a software application in another
computer. The OSI reference model is a conceptual model composed of seven layers, each specifying
particular network functions. The model was developed by the International Organization for
Standardization (ISO) in 1984, and it is now considered the primary architectural model for
intercomputer communications. The OSI model divides the tasks involved with moving information
between networked computers into seven smaller, more manageable task groups. A task or group of tasks
is then assigned to each of the seven OSI layers. Each layer is reasonably self-contained so that the tasks
assigned to each layer can be implemented independently. This enables the solutions offered by one layer
to be updated without adversely affecting the other layers. The following list details the seven layers of
the Open System Interconnection (OSI) reference model:
• Layer 7—Application
• Layer 6—Presentation
• Layer 5—Session
• Layer 4—Transport
• Layer 3—Network
• Layer 2—Data link
• Layer 1—Physical


Characteristics of the OSI Layers

The seven layers of the OSI reference model can be divided into two categories: upper layers and lower
layers.
The upper layers of the OSI model deal with application issues and generally are implemented only in
software. The highest layer, the application layer, is closest to the end user. Both users and application
layer processes interact with software applications that contain a communications component. The term
upper layer is sometimes used to refer to any layer above another layer in the OSI model.
The lower layers of the OSI model handle data transport issues. The physical layer and the data link layer
are implemented in hardware and software. The lowest layer, the physical layer, is closest to the physical
network medium (the network cabling, for example) and is responsible for actually placing information
on the medium.

Protocols

The OSI model provides a conceptual framework for communication between computers, but the model
itself is not a method of communication. Actual communication is made possible by using
communication protocols. In the context of data networking, a protocol is a formal set of rules and
conventions that governs how computers exchange information over a network medium. A protocol
implements the functions of one or more of the OSI layers.
A wide variety of communication protocols exist. Some of these protocols include LAN protocols, WAN
protocols, network protocols, and routing protocols. LAN protocols operate at the physical and data link
layers of the OSI model and define communication over the various LAN media. WAN protocols operate
at the lowest three layers of the OSI model and define communication over the various wide-area media.
Routing protocols are network layer protocols that are responsible for exchanging information between
routers so that the routers can select the proper path for network traffic. Finally, network protocols are
the various upper-layer protocols that exist in a given protocol suite. Many protocols rely on others for
operation. For example, many routing protocols use network protocols to exchange information between
routers. This concept of building upon the layers already in existence is the foundation of the OSI model.


OSI Model and Communication Between Systems

Information being transferred from a software application in one computer system to a software
application in another must pass through the OSI layers. For example, if a software application in System
A has information to transmit to a software application in System B, the application program in System
A will pass its information to the application layer (Layer 7) of System A. The application layer then
passes the information to the presentation layer (Layer 6), which relays the data to the session layer
(Layer 5), and so on down to the physical layer (Layer 1). At the physical layer, the information is placed
on the physical network medium and is sent across the medium to System B. The physical layer of
System B removes the information from the physical medium and then passes the
information up to the data link layer (Layer 2), which passes it to the network layer (Layer 3), and so on,
until it reaches the application layer (Layer 7) of System B. Finally, the application layer of System B
passes the information to the recipient application program to complete the communication process.

Interaction Between OSI Model Layers

A given layer in the OSI model generally communicates with three other OSI layers: the layer directly
above it, the layer directly below it, and its peer layer in other networked computer systems. The data
link layer in System A, for example, communicates with the network layer of System A, the physical
layer of System A, and the data link layer in System B.


OSI Layer Services

One OSI layer communicates with another layer to make use of the services provided by the second
layer. The services provided by adjacent layers help a given OSI layer communicate with its peer layer
in other computer systems. Three basic elements are involved in layer services: the service user, the
service provider, and the service access point (SAP).
In this context, the service user is the OSI layer that requests services from an adjacent OSI layer. The
service provider is the OSI layer that provides services to service users. OSI layers can provide services
to multiple service users. The SAP is a conceptual location at which one OSI layer can request the
services of another OSI layer.

OSI Model Layers and Information Exchange

The seven OSI layers use various forms of control information to communicate with their peer layers in
other computer systems. This control information consists of specific requests and instructions that are
exchanged between peer OSI layers.
Control information typically takes one of two forms: headers and trailers. Headers are prepended to
data that has been passed down from upper layers. Trailers are appended to data that has been passed
down from upper layers. An OSI layer is not required to attach a header or a trailer to data from upper
layers.
Headers, trailers, and data are relative concepts, depending on the layer that analyzes the information
unit. At the network layer, for example, an information unit consists of a Layer 3 header and data. At the
data link layer, however, all the information passed down by the network layer (the Layer 3 header and
the data) is treated as data.
In other words, the data portion of an information unit at a given OSI layer potentially
can contain headers, trailers, and data from all the higher layers. This is known as encapsulation. Figure
1-6 shows how the header and data from one layer are encapsulated in the data portion of the next lowest
layer.
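A minimal sketch of encapsulation, using made-up header and trailer strings in place of real protocol control information, is shown below: each lower layer treats everything handed down to it as data and adds its own wrapper, and the receiver strips the wrappers in the opposite order.

layers_down = [
    ("transport", "TCPHDR|", ""),
    ("network",   "IPHDR|",  ""),
    ("data link", "FRAMEHDR|", "|FCS"),
]

unit = "application data"
for name, header, trailer in layers_down:            # sending side: wrap on the way down
    unit = header + unit + trailer
    print(f"{name:9s}: {unit}")

for name, header, trailer in reversed(layers_down):  # receiving side: unwrap on the way up
    end = len(unit) - len(trailer) if trailer else len(unit)
    unit = unit[len(header):end]
print("delivered :", unit)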

OSI Model Physical Layer

The physical layer defines the electrical, mechanical, procedural, and functional specifications for
activating, maintaining, and deactivating the physical link between communicating network systems.
Physical layer specifications define characteristics such as voltage levels, timing of voltage changes,
physical data rates, maximum transmission distances, and physical connectors. Physical layer
implementations can be categorized as either LAN or WAN specifications. Figure 1-7 illustrates some
common LAN and WAN physical layer implementations.

OSI Model Data Link Layer
The data link layer provides reliable transit of data across a physical network link. Different data link
layer specifications define different network and protocol characteristics, including physical addressing,
network topology, error notification, sequencing of frames, and flow control. Physical addressing (as
opposed to network addressing) defines how devices are addressed at the data link layer. Network
topology consists of the data link layer specifications that often define how devices are to be physically
connected, such as in a bus or a ring topology. Error notification alerts upper-layer protocols that a
transmission error has occurred, and the sequencing of data frames reorders frames that are transmitted
out of sequence. Finally, flow control moderates the transmission of data so that the receiving device is
not overwhelmed with more traffic than it can handle at one time.
The Institute of Electrical and Electronics Engineers (IEEE) has subdivided the data link layer into two
sublayers: Logical Link Control (LLC) and Media Access Control (MAC). Figure 1-8 illustrates the
IEEE sublayers of the data link layer.
The Logical Link Control (LLC) sublayer of the data link layer manages communications between
devices over a single link of a network. LLC is defined in the IEEE 802.2 specification and supports
both connectionless and connection-oriented services used by higher-layer protocols. IEEE 802.2
defines a number of fields in data link layer frames that enable multiple higher-layer protocols to share
a single physical data link. The Media Access Control (MAC) sublayer of the data link layer manages
protocol access to the physical network medium. The IEEE MAC specification defines MAC addresses,
which enable multiple devices to uniquely identify one another at the data link layer.


OSI Model Network Layer
The network layer defines the network address, which differs from the MAC address. Some network
layer implementations, such as the Internet Protocol (IP), define network addresses in such a way that
route selection can be determined systematically by comparing the source network address with the
destination network address and applying the subnet mask. Because this layer defines the logical
network layout, routers can use this layer to determine how to forward packets. Because of this, much
of the design and configuration work for internetworks happens at Layer 3, the network layer.
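That systematic comparison can be shown with Python's standard ipaddress module: applying the mask tells a host whether the destination is on its own network or must be handed to a router. The addresses used are documentation examples (RFC 5737), not real hosts.

import ipaddress

local_network = ipaddress.ip_network("192.0.2.0/24")    # source network and mask
source = ipaddress.ip_address("192.0.2.15")

for destination in ("192.0.2.200", "198.51.100.7"):
    on_link = ipaddress.ip_address(destination) in local_network
    action = "deliver directly" if on_link else "forward to next-hop router"
    print(f"{source} -> {destination}: {action}")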
OSI Model Transport Layer
The transport layer accepts data from the session layer and segments the data for transport across the
network.Generally,thetransportlayerisresponsibleformakingsurethatthedataisdeliverederror-free
and in the proper sequence. Flow control generally occurs at the transport layer.
Flow control manages data transmission between devices so that the transmitting device does not send
more data than the receiving device can process. Multiplexing enables data from several applications to
be transmitted onto a single physical link. Virtual circuits are established, maintained, and terminated
by the transport layer. Error checking involves creating various mechanisms for detecting transmission
errors, while error recovery involves taking action, such as requesting that data be retransmitted, to resolve any
errors that occur.
The transport protocols used on the Internet are TCP and UDP.

OSI Model Session Layer
The session layer establishes, manages, and terminates communication sessions. Communication
sessions consist of service requests and service responses that occur between applications located in
different network devices. These requests and responses are coordinated by protocols implemented at
the session layer. Some examples of session-layer implementations include Zone Information Protocol
(ZIP), the AppleTalk protocol that coordinates the name binding process; and Session Control Protocol
(SCP), the DECnet Phase IV session layer protocol.

OSI Model Presentation Layer
The presentation layer provides a variety of coding and conversion functions that are applied to
application layer data. These functions ensure that information sent from the application layer of one
system will be readable by the application layer of another system. Some examples of presentation
layer coding and conversion schemes include common data representation formats, conversion of
character representation formats, common data compression schemes, and common data encryption
schemes.
Common data representation formats, or the use of standard image, sound, and video formats, enable the
interchange of application data between different types of computer systems. Conversion schemes are
used to exchange information with systems that use different text and data representations, such as
EBCDIC and ASCII. Standard data compression schemes enable data that is compressed at the source
device to be properly decompressed at the destination. Standard data encryption schemes enable data
encrypted at the source device to be properly deciphered at the destination.
Presentation layer implementations are not typically associated with a particular protocol stack. Some
well-known standards for video include QuickTime and Motion Picture Experts Group (MPEG).
QuickTime is an Apple Computer specification for video and audio, and MPEG is a standard for video
compression and coding.
Among the well-known graphic image formats are Graphics Interchange Format (GIF), Joint
Photographic Experts Group (JPEG), and Tagged Image File Format (TIFF). GIF is a standard for
compressing and coding graphic images. JPEG is another compression and coding standard for graphic
images, and TIFF is a standard coding format for graphic images.

OSI Model Application Layer
The application layer is the OSI layer closest to the end user, which means that both the OSI application
layer and the user interact directly with the software application.
This layer interacts with software applications that implement a communicating component. Such
application programs fall outside the scope of the OSI model. Application layer functions typically
include identifying communication partners, determining resource availability, and synchronizing
communication.
When identifying communication partners, the application layer determines the identity and availability
of communication partners for an application with data to transmit.
When determining resource availability, the application layer must decide whether sufficient network
resources for the requested communication exist. In synchronizing communication, all communication
between applications requires cooperation that is managed by the application layer.
Some examples of application layer implementations include Telnet, File Transfer Protocol (FTP), and
Simple Mail Transfer Protocol (SMTP).

ISO Hierarchy of Networks
Large networks typically are organized as hierarchies. A hierarchical organization provides such
advantages as ease of management, flexibility, and a reduction in unnecessary traffic. Thus, the
International Organization for Standardization (ISO) has adopted a number of terminology conventions
for addressing network entities. Key terms defined in this section include end system (ES), intermediate
system (IS), area, and autonomous system (AS).
An ES is a network device that does not perform routing or other traffic forwarding functions. Typical
ESs include such devices as terminals, personal computers, and printers. An IS is a network device that
performs routing or other traffic-forwarding functions. Typical ISs include such devices as routers,
switches, and bridges. Two types of IS networks exist: intradomain IS and interdomain IS. An
intradomain IS communicates within a single autonomous system, while an interdomain IS
communicates within and between autonomous systems. An area is a logical group of network segments
and their attached devices. Areas are subdivisions of autonomous systems (ASs). An AS is a collection
of networks under a common administration that share a common routing strategy. Autonomous systems
are subdivided into areas, and an AS is sometimes called a domain.

Internetwork Addressing
Internetwork addresses identify devices separately or as members of a group. Addressing schemes vary
depending on the protocol family and the OSI layer. Three types of internetwork addresses are
commonly used: data link layer addresses, Media Access Control (MAC) addresses, and network layer
addresses.

Data Link Layer Addresses
A data link layer address uniquely identifies each physical network connection of a network device.
Data-link addresses sometimes are referred to as physical or hardware addresses. Data-link addresses
usually exist within a flat address space and have a pre-established and typically fixed relationship to a
specific device.
End systems generally have only one physical network connection and thus have only one data-link
address. Routers and other internetworking devices typically have multiple physical network
connections and therefore have multiple data-link addresses.

Standards Organizations
A wide variety of organizations contribute to internetworking standards by providing forums for
discussion, turning informal discussion into formal specifications, and proliferating specifications after
they are standardized.
Most standards organizations create formal standards by using specific processes: organizing ideas,
discussing the approach, developing draft standards, voting on all or certain aspects of the standards, and
then formally releasing the completed standard to the public.
Some of the best-known standards organizations that contribute to internetworking standards include
these:
• International Organization for Standardization (ISO)—ISO is an international standards
organization responsible for a wide range of standards, including many that are relevant to
networking. Its best-known contribution is the development of the OSI reference model and the OSI
protocol suite.
• American National Standards Institute (ANSI)—ANSI, which is also a member of
the ISO, is the coordinating body for voluntary standards groups within the United States. ANSI
developed the Fiber Distributed Data Interface (FDDI) and other communications standards.
• Electronic Industries Association (EIA)—EIA specifies electrical transmission standards,
including those used in networking. The EIA developed the widely used EIA/TIA-232 standard
(formerly known as RS-232).
• Institute of Electrical and Electronic Engineers (IEEE)—IEEE is a professional organization
that defines networking and other standards. The IEEE developed the widely used LAN standards
IEEE 802.3 and IEEE 802.5.
• International Telecommunication Union Telecommunication Standardization Sector
(ITU-T)—Formerly called the Committee for International Telegraph and Telephone (CCITT),
ITU-T is now an international organization that develops communication standards. The ITU-T
developed X.25 and other communications standards.
• Internet Activities Board (IAB)—IAB is a group of internetwork researchers who discuss issues
pertinent to the Internet and set Internet policies through decisions and task forces. The IAB
designates some Request For Comments (RFC) documents as Internet standards, including
Transmission Control Protocol/Internet Protocol (TCP/IP) and the Simple Network Management
Protocol (SNMP).

Virtual Private Networks


Background
Virtual private networks (VPNs) are a fairly quixotic subject; there is no single defining product, nor
even much of a consensus among VPN vendors as to what comprises a VPN. Consequently, everyone
knows what a VPN is, but establishing a single definition can be remarkably difficult. Some definitions
are sufficiently broad as to enable one to claim that Frame Relay qualifies as a VPN when, in fact, it is
an overlay network. Although an overlay network secures transmissions through a public network, it
does so passively via logical separation of the data streams.
VPNsprovideamoreactiveformofsecuritybyeitherencryptingorencapsulatingdatafortransmission
through an unsecured network. These two types of security—encryption and encapsulation—form the
foundationofvirtualprivatenetworking.However,bothencryptionandencapsulationaregenericterms
that describe a function that can be performed by a myriad of specific technologies. To add to the
confusion,thesetwosetsoftechnologiescanbecombinedindifferentimplementationtopologies.Thus,
VPNs can vary widely from vendor to vendor.
This chapter provides an overview of building VPNs using the Layer 2 Tunneling Protocol (L2TP), and
it explores the possible implementation topologies.


Layer 2 Tunneling Protocol
The Internet Engineering Task Force (IETF) was faced with competing proposals from Microsoft and
Cisco Systems for a protocol specification that would secure the transmission of IP datagrams through
uncontrolled and untrusted network domains. Microsoft’s proposal was an attempt to standardize the
Point-to-Point Tunneling Protocol (PPTP), which it had championed. Cisco, too, had a protocol designed
to perform a similar function. The IETF combined the best elements of each proposal and specified the
open standard L2TP.

The simplest description of L2TP’s functionality is that it carries the Point-to-Point Protocol (PPP)
through networks that aren’t point-to-point. PPP has become the most popular communications protocol
for remote access using circuit-switched transmission facilities such as POTS lines or ISDN to create a
temporary point-to-point connection between the calling device and its destination.
L2TP simulates a point-to-point connection by encapsulating PPP datagrams for transportation through
routed networks or internetworks. Upon arrival at their intended destination, the encapsulation is
removed, and the PPP datagrams are restored to their original format. Thus, a point-to-point
communications session can be supported through disparate networks. This technique is known as
tunneling.


Operational Mechanics
In a traditional remote access scenario, a remote user (or client) accesses a network by directly
connecting to a network access server (NAS). Generally, the NAS provides several distinct functions: It
terminates the point-to-point communications session of the remote user, validates the identity of that
user, and then serves that user with access to the network. Although most remote access technologies
bundle these functions into a single device, L2TP separates them into two physically separate devices:
the L2TP Access Server (LAS) and the L2TP Network Server (LNS).
As its name implies, the L2TP Access Server supports authentication and ingress. Upon successful
authentication, the remote user’s session is forwarded to the LNS, which lets that user into the network.
Separating these two functions affords greater implementation flexibility than other remote access technologies offer.


Implementation Topologies
L2TP can be implemented in two distinct topologies:
• Client-aware tunneling
• Client-transparent tunneling
The distinction between these two topologies is whether the client machine that is using L2TP to access
a remote network is aware that its connection is being tunneled.


Client-Aware Tunneling
The first implementation topology is known as client-aware tunneling. This name is derived from the
remote client initiating (hence, being “aware” of) the tunnel. In this scenario, the client establishes a
logical connection within a physical connection to the LAS. The client remains aware of the tunneled
connection all the way through to the LNS, and it can even determine which of its traffic goes through
the tunnel.


Client-Transparent Tunneling
Client-transparent tunneling features L2TP access concentrators (LACs) distributed geographically
close to the remote users. Such geographic dispersion is intended to reduce the long-distance telephone
charges that would otherwise be incurred by remote users dialing into a centrally located LAC.
The remote users need not support L2TP directly; they merely establish a point-to-point communication
session with the LAC using PPP. Ostensibly, the user will be encapsulating IP datagrams in PPP frames.
The LAC exchanges PPP messages with the remote user and establishes an L2TP tunnel with the LNS
through which the remote user’s PPP messages are passed.
The LNS is the remote user’s gateway to its home network. It is the terminus of the tunnel; it strips off
all L2TP encapsulation and serves up network access for the remote user.


Adding More Security
As useful as L2TP is, it is important to recognize that it is not a panacea. It enables flexibility in
delivering remote access, but it does not afford a high degree of security for data in transit. This is due
in large part to the relatively nonsecure nature of PPP. In fairness, PPP was designed explicitly for
point-to-point communications, so securing the connection should not have been a high priority.
An additional cause for concern stems from the fact that L2TP’s tunnels are not cryptographic. Their
data payloads are transmitted in the clear, wrapped only by L2TP and PPP framing. However, additional
security may be afforded by implementing the IPSec protocols in conjunction with L2TP. The IPSec
protocols support strong authentication technologies as well as encryption.


Summary
VPNs offer a compelling vision of connectivity through foreign networks at greatly reduced operating
costs. However, the reduced costs are accompanied by increased risk. L2TP offers an open standard
approach for supporting a remote access VPN. When augmented by IPSec protocols, L2TP enables the
realization of the promise of a VPN: an open standard technology for securing remote access in a
virtually private network.

Simple Network Management Protocol


Background
The Simple Network Management Protocol (SNMP) is an application layer protocol that facilitates the
exchange of management information between network devices. It is part of the Transmission Control
Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP enables network administrators to manage
network performance, find and solve network problems, and plan for network growth.
Two versions of SNMP exist: SNMP version 1 (SNMPv1) and SNMP version 2 (SNMPv2). Both
versions have a number of features in common, but SNMPv2 offers enhancements, such as additional
protocol operations. Standardization of yet another version of SNMP—SNMP Version 3 (SNMPv3)—is
pending. This chapter provides descriptions of the SNMPv1 and SNMPv2 protocol operations.


SNMP Basic Components
An SNMP-managed network consists of three key components: managed devices, agents, and
network-management systems (NMSs).
A managed device is a network node that contains an SNMP agent and that resides on a managed
network. Managed devices collect and store management information and make this information
available to NMSs using SNMP. Managed devices, sometimes called network elements, can be routers
and access servers, switches and bridges, hubs, computer hosts, or printers.
An agent is a network-management software module that resides in a managed device. An agent has
local knowledge of management information and translates that information into a form compatible with
SNMP.
An NMS executes applications that monitor and control managed devices. NMSs provide the bulk of the
processing and memory resources required for network management. One or more NMSs must exist on
any managed network.


SNMP Basic Commands
Managed devices are monitored and controlled using four basic SNMP commands: read, write, trap,
and traversal operations.
The read command is used by an NMS to monitor managed devices. The NMS examines different
variables that are maintained by managed devices.
The write command is used by an NMS to control managed devices. The NMS changes the values of
variables stored within managed devices.
The trap command is used by managed devices to asynchronously report events to the NMS. When
certain types of events occur, a managed device sends a trap to the NMS.
Traversal operations are used by the NMS to determine which variables a managed device supports and
to sequentially gather information in variable tables, such as a routing table.


SNMP Management Information Base
A Management Information Base (MIB) is a collection of information that is organized hierarchically.
MIBs are accessed using a network-management protocol such as SNMP. They are comprised of
managed objects and are identified by object identifiers.
A managed object (sometimes called a MIB object, an object, or a MIB) is one of any number of specific
characteristics of a managed device. Managed objects are comprised of one or more object instances,
which are essentially variables.
Two types of managed objects exist: scalar and tabular. Scalar objects define a single object instance.
Tabular objects define multiple related object instances that are grouped in MIB tables.
An example of a managed object is atInput, which is a scalar object that contains a single object instance,
the integer value that indicates the total number of input AppleTalk packets on a router interface.
An object identifier (or object ID) uniquely identifies a managed object in the MIB hierarchy. The MIB
hierarchy can be depicted as a tree with a nameless root, the levels of which are assigned by different
organizations. Figure 56-3 illustrates the MIB tree.
The top-level MIB object IDs belong to different standards organizations, while lower-level object IDs
are allocated by associated organizations.
Vendors can define private branches that include managed objects for their own products. MIBs that have
not been standardized typically are positioned in the experimental branch.
The managed object atInput can be uniquely identified either by the object
name—iso.identified-organization.dod.internet.private.enterprise.cisco.temporary
variables.AppleTalk.atInput—or by the equivalent numeric object identifier, 1.3.6.1.4.1.9.3.3.1.
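To make the name-to-number relationship concrete, the following Python sketch joins the labels and arc
numbers of the atInput path quoted above into the human-readable object name and the equivalent
dotted-decimal object identifier. The function names and the list layout are illustrative only.

    # Minimal sketch: the MIB path for atInput, as quoted in the text above.
    MIB_PATH = [
        ("iso", 1), ("identified-organization", 3), ("dod", 6),
        ("internet", 1), ("private", 4), ("enterprise", 1), ("cisco", 9),
        ("temporary variables", 3), ("AppleTalk", 3), ("atInput", 1),
    ]

    def object_name(path):
        # Join the per-level labels into the human-readable object name.
        return ".".join(label for label, _ in path)

    def object_id(path):
        # Join the per-level arc numbers into the dotted object identifier.
        return ".".join(str(arc) for _, arc in path)

    print(object_name(MIB_PATH))  # iso.identified-organization. ... .atInput
    print(object_id(MIB_PATH))    # 1.3.6.1.4.1.9.3.3.1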


SNMP and Data Representation
SNMP must account for and adjust to incompatibilities between managed devices. Different computers
use different data representation techniques, which can compromise the capability of SNMP to exchange
information between managed devices. SNMP uses a subset of Abstract Syntax Notation One (ASN.1)
to accommodate communication between diverse systems.


SNMP Version 1
SNMP version 1 (SNMPv1) is the initial implementation of the SNMP protocol. It is described in Request
For Comments (RFC) 1157 and functions within the specifications of the Structure of Management
Information (SMI). SNMPv1 operates over protocols such as User Datagram Protocol (UDP), Internet
Protocol (IP), OSI Connectionless Network Service (CLNS), AppleTalk Datagram-Delivery Protocol
(DDP), and Novell Internet Packet Exchange (IPX). SNMPv1 is widely used and is the de facto
network-management protocol in the Internet community.


SNMPv1 and Structure of Management Information
The Structure of Management Information (SMI) defines the rules for describing management
information, using Abstract Syntax Notation One (ASN.1). The SNMPv1 SMI is defined in RFC 1155.
The SMI makes three key specifications: ASN.1 data types, SMI-specific data types, and SNMP MIB
tables.


SNMPv1 and ASN.1 Data Types
The SNMPv1 SMI specifies that all managed objects have a certain subset of Abstract Syntax Notation
One (ASN.1) data types associated with them. Three ASN.1 data types are required: name, syntax, and
encoding. The name serves as the object identifier (object ID). The syntax defines the data type of the
object (for example, integer or string). The SMI uses a subset of the ASN.1 syntax definitions. The
encoding data describes how information associated with a managed object is formatted as a series of
data items for transmission over the network.


SNMPv1 and SMI-Specific Data Types
The SNMPv1 SMI specifies the use of a number of SMI-specific data types, which are divided into two
categories: simple data types and application-wide data types.
Three simple data types are defined in the SNMPv1 SMI, all of which are unique values: integers, octet
strings, and object IDs. The integer data type is a signed integer in the range of –2,147,483,648 to
2,147,483,647. Octet strings are ordered sequences of 0 to 65,535 octets. Object IDs come from the set
of all object identifiers allocated according to the rules specified in ASN.1.
Seven application-wide data types exist in the SNMPv1 SMI: network addresses, counters, gauges, time
ticks, opaques, integers, and unsigned integers. Network addresses represent an address from a particular
protocol family. SNMPv1 supports only 32-bit IP addresses. Counters are non-negative integers that
increase until they reach a maximum value and then return to zero. In SNMPv1, a 32-bit counter size is
specified. Gauges are non-negative integers that can increase or decrease but that retain the maximum
value reached. A time tick represents a hundredth of a second since some event. An opaque represents
an arbitrary encoding that is used to pass arbitrary information strings that do not conform to the strict
data typing used by the SMI. An integer represents signed integer-valued information. This data type
redefines the integer data type, which has arbitrary precision in ASN.1 but bounded precision in the SMI.
An unsigned integer represents unsigned integer-valued information and is useful when values are
always non-negative. This data type redefines the integer data type, which has arbitrary precision in
ASN.1 but bounded precision in the SMI.
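The contrasting behaviors of counters and gauges can be modeled with a short illustrative sketch. The
class and method names below are not from any SNMP library; the sketch simply assumes the 32-bit
maximum that SNMPv1 specifies, wrapping counters past it and never letting gauges exceed it.

    MAX32 = 2**32 - 1  # 32-bit maximum assumed for SNMPv1 counters and gauges

    class Counter32:
        """Increases until the maximum value, then wraps back to zero."""
        def __init__(self, value=0):
            self.value = value
        def increment(self, amount=1):
            self.value = (self.value + amount) % (MAX32 + 1)

    class Gauge32:
        """Can increase or decrease, but never exceeds the maximum value."""
        def __init__(self, value=0):
            self.value = value
        def adjust(self, delta):
            self.value = max(0, min(MAX32, self.value + delta))

    c = Counter32(MAX32); c.increment()      # wraps to 0
    g = Gauge32(MAX32 - 5); g.adjust(100)    # stops at MAX32
    print(c.value, g.value)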


SNMP MIB Tables
The SNMPv1 SMI defines highly structured tables that are used to group the instances of a tabular object
(that is, an object that contains multiple variables). Tables are composed of zero or more rows, which
are indexed in a way that allows SNMP to retrieve or alter an entire row with a single Get, GetNext, or
Set command.


SNMPv1 Protocol Operations
SNMP is a simple request/response protocol. The network-management system issues a request, and
managed devices return responses. This behavior is implemented by using one of four protocol
operations: Get, GetNext, Set, and Trap. The Get operation is used by the NMS to retrieve the value of
one or more object instances from an agent. If the agent responding to the Get operation cannot provide
values for all the object instances in a list, it does not provide any values. The GetNext operation is used
by the NMS to retrieve the value of the next object instance in a table or a list within an agent. The Set
operation is used by the NMS to set the values of object instances within an agent. The Trap operation
is used by agents to asynchronously inform the NMS of a significant event.
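The agent-side semantics just described can be illustrated with a small sketch in which an in-memory
table stands in for a real MIB; the OIDs and values are examples only. Get is all-or-nothing in SNMPv1,
and GetNext returns the instance that follows the supplied OID.

    # Illustrative in-memory "MIB"; the OIDs and values are examples only.
    MIB = {
        "1.3.6.1.2.1.1.3.0": 123456,
        "1.3.6.1.2.1.1.5.0": "router1",
    }

    def oid_key(oid):
        return tuple(int(part) for part in oid.split("."))

    def get(oids):
        # SNMPv1 Get: if any requested instance is missing, provide no values.
        if any(oid not in MIB for oid in oids):
            return None  # stands in for an error response
        return {oid: MIB[oid] for oid in oids}

    def get_next(oid):
        # Return the first instance that sorts after the supplied OID.
        later = sorted((k for k in MIB if oid_key(k) > oid_key(oid)), key=oid_key)
        return (later[0], MIB[later[0]]) if later else None

    print(get(["1.3.6.1.2.1.1.3.0", "1.3.6.1.2.1.1.9.0"]))  # None: one OID missing
    print(get_next("1.3.6.1.2.1.1.3.0"))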


SNMP Version 2
SNMP version 2 (SNMPv2) is an evolution of the initial version, SNMPv1. Originally, SNMPv2 was
published as a set of proposed Internet standards in 1993; currently, it is a draft standard. As with
SNMPv1, SNMPv2 functions within the specifications of the Structure of Management Information
(SMI). In theory, SNMPv2 offers a number of improvements to SNMPv1, including additional protocol
operations.


SNMPv2 and Structure of Management Information
The Structure of Management Information (SMI) defines the rules for describing management
information, using ASN.1.
The SNMPv2 SMI is described in RFC 1902. It makes certain additions and enhancements to the
SNMPv1 SMI-specific data types, including bit strings, network addresses, and counters. Bit
strings are defined only in SNMPv2 and comprise zero or more named bits that specify a value. Network
addresses represent an address from a particular protocol family. SNMPv1 supports only 32-bit IP
addresses, but SNMPv2 can support other types of addresses as well. Counters are non-negative integers
that increase until they reach a maximum value and then return to zero. In SNMPv1, a 32-bit counter
size is specified. In SNMPv2, 32-bit and 64-bit counters are defined.


SMI Information Modules
The SNMPv2 SMI also specifies information modules, which specify a group of related definitions.
Three types of SMI information modules exist: MIB modules, compliance statements, and capability
statements. MIB modules contain definitions of interrelated managed objects. Compliance statements
provide a systematic way to describe a group of managed objects that must be implemented for
conformance to a standard. Capability statements are used to indicate the precise level of support that an
agent claims with respect to a MIB group. An NMS can adjust its behavior toward agents according to
the capabilities statements associated with each agent.


SNMPv2 Protocol Operations
The Get, GetNext, and Set operations used in SNMPv1 are exactly the same as those used in SNMPv2.
However, SNMPv2 adds and enhances some protocol operations. The SNMPv2 Trap operation, for
example, serves the same function as that used in SNMPv1, but it uses a different message format and
is designed to replace the SNMPv1 Trap.
SNMPv2 also defines two new protocol operations: GetBulk and Inform. The GetBulk operation is used
by the NMS to efficiently retrieve large blocks of data, such as multiple rows in a table. GetBulk fills a
response message with as much of the requested data as will fit. The Inform operation allows one NMS
to send trap information to another NMS and to then receive a response. In SNMPv2, if the agent
responding to GetBulk operations cannot provide values for all the variables in a list, it provides partial
results.
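A small sketch of the GetBulk behavior follows; the table contents and the request size are illustrative.
A single GetBulk returns as many of the requested rows as are available, where SNMPv1 would need one
GetNext operation per row.

    # Illustrative table of instances keyed by OID (an interface-style table).
    MIB_TABLE = {f"1.3.6.1.2.1.2.2.1.2.{i}": f"interface-{i}" for i in range(1, 9)}

    def oid_key(oid):
        return tuple(int(part) for part in oid.split("."))

    def get_bulk(start_oid, max_values):
        """Return up to max_values (oid, value) pairs following start_oid."""
        rows = sorted((k for k in MIB_TABLE if oid_key(k) > oid_key(start_oid)),
                      key=oid_key)
        return [(oid, MIB_TABLE[oid]) for oid in rows[:max_values]]

    # Fetch up to five rows in one request instead of five GetNext operations.
    print(get_bulk("1.3.6.1.2.1.2.2.1.2.0", 5))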


SNMP Management
SNMP is a distributed-management protocol. A system can operate exclusively as either an NMS or an
agent, or it can perform the functions of both. When a system operates as both an NMS and an agent,
another NMS might require that the system query managed devices and provide a summary of the
information learned, or that it report locally stored management information.


SNMP Security
SNMP lacks any authentication capabilities, which results in vulnerability to a variety of security
threats. These include masquerading occurrences, modification of information, message sequence and
timing modifications, and disclosure. Masquerading consists of an unauthorized entity attempting to
perform management operations by assuming the identity of an authorized management entity.
Modification of information involves an unauthorized entity attempting to alter a message generated by
an authorized entity so that the message results in unauthorized accounting management or configuration
management operations. Message sequence and timing modifications occur when an unauthorized entity
reorders, delays, or copies and later replays a message generated by an authorized entity. Disclosure
results when an unauthorized entity extracts values stored in managed objects, or learns of notifiable
events by monitoring exchanges between managers and agents. Because SNMP does not implement
authentication, many vendors do not implement Set operations, thereby reducing SNMP to a monitoring
facility.


SNMP Interoperability
As presently specified, SNMPv2 is incompatible with SNMPv1 in two key areas: message formats and
protocol operations. SNMPv2 messages use different header and protocol data unit (PDU) formats than
SNMPv1 messages. SNMPv2 also uses two protocol operations that are not specified in SNMPv1.
Furthermore, RFC 1908 defines two possible SNMPv1/v2 coexistence strategies: proxy agents and
bilingual network-management systems.


Proxy Agents
An SNMPv2 agent can act as a proxy agent on behalf of SNMPv1 managed devices, as follows:
• An SNMPv2 NMS issues a command intended for an SNMPv1 agent.
• The NMS sends the SNMP message to the SNMPv2 proxy agent.
• The proxy agent forwards Get, GetNext, and Set messages to the SNMPv1 agent unchanged.
• GetBulk messages are converted by the proxy agent to GetNext messages and then are forwarded to
the SNMPv1 agent.
The proxy agent maps SNMPv1 trap messages to SNMPv2 trap messages and then forwards them to the
NMS.
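The downstream forwarding rules above can be summarized in a few lines of illustrative Python. The
message names are plain strings and the downstream agent is a stand-in function, not a real SNMP
implementation.

    def proxy_forward(message_type, payload, send_to_v1_agent):
        # Get, GetNext, and Set pass through unchanged; GetBulk is downgraded
        # to GetNext because SNMPv1 agents do not understand it.
        if message_type == "GetBulk":
            return send_to_v1_agent("GetNext", payload)
        return send_to_v1_agent(message_type, payload)

    echo_agent = lambda msg_type, payload: (msg_type, payload)  # stand-in agent
    print(proxy_forward("GetBulk", ["1.3.6.1.2.1.1.5.0"], echo_agent))
    print(proxy_forward("Set", [("1.3.6.1.2.1.1.5.0", "router2")], echo_agent))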


Bilingual Network-Management System
Bilingual SNMPv2 network-management systems support both SNMPv1 and SNMPv2. To support this
dual-management environment, a management application in the bilingual NMS must contact an agent.
The NMS then examines information stored in a local database to determine whether the agent supports
SNMPv1 or SNMPv2. Based on the information in the database, the NMS communicates with the agent
using the appropriate version of SNMP.


SNMP Reference: SNMPv1 Message Formats
SNMPv1 messages contain two parts: a message header and a protocol data unit (PDU). Figure 56-4
illustrates the basic format of an SNMPv1 message.


SNMPv1 Message Header
SNMPv1 message headers contain two fields: Version Number and Community Name.
The following descriptions summarize these fields:
• Version number—Specifies the version of SNMP used.
• Community name—Defines an access environment for a group of NMSs. NMSs within the
community are said to exist within the same administrative domain. Community names serve as a
weak form of authentication because devices that do not know the proper community name are
precluded from SNMP operations.


SNMPv1 Protocol Data Unit
SNMPv1 PDUs contain a specific command (Get, Set, and so on) and operands that indicate the object
instances involved in the transaction. SNMPv1 PDU fields are variable in length, as prescribed by
ASN.1.

Point-to-Point Protocol


Introduction
The Point-to-Point Protocol (PPP) originally emerged as an encapsulation protocol for transporting IP
traffic over point-to-point links. PPP also established a standard for the assignment and management of
IP addresses, asynchronous (start/stop) and bit-oriented synchronous encapsulation, network protocol
multiplexing, link configuration, link quality testing, error detection, and option negotiation for such
capabilities as network layer address negotiation and data-compression negotiation. PPP supports these
functions by providing an extensible Link Control Protocol (LCP) and a family of Network Control
Protocols (NCPs) to negotiate optional configuration parameters and facilities. In addition to IP, PPP
supports other protocols, including Novell's Internetwork Packet Exchange (IPX) and DECnet.


PPP Components
PPP provides a method for transmitting datagrams over serial point-to-point links. PPP contains three
main components:
• A method for encapsulating datagrams over serial links. PPP uses the High-Level Data Link Control
(HDLC) protocol as a basis for encapsulating datagrams over point-to-point links. (See Chapter 16,
“Synchronous Data Link Control and Derivatives,” for more information on HDLC.)
• An extensible LCP to establish, configure, and test the data link connection.
• A family of NCPs for establishing and configuring different network layer protocols. PPP is
designed to allow the simultaneous use of multiple network layer protocols.


General Operation
To establish communications over a point-to-point link, the originating PPP first sends LCP frames to
configure and (optionally) test the data link. After the link has been established and optional facilities
have been negotiated as needed by the LCP, the originating PPP sends NCP frames to choose and
configure one or more network layer protocols. When each of the chosen network layer protocols has
been configured, packets from each network layer protocol can be sent over the link. The link will remain
configured for communications until explicit LCP or NCP frames close the link, or until some external
event occurs (for example, an inactivity timer expires or a user intervenes).


Physical Layer Requirements
PPP is capable of operating across any DTE/DCE interface. Examples include EIA/TIA-232-C (formerly
RS-232-C), EIA/TIA-422 (formerly RS-422), EIA/TIA-423 (formerly RS-423), and International
Telecommunication Union Telecommunication Standardization Sector (ITU-T) (formerly CCITT) V.35.
The only absolute requirement imposed by PPP is the provision of a duplex circuit, either dedicated or
switched, that can operate in either an asynchronous or synchronous bit-serial mode, transparent to PPP
link layer frames. PPP does not impose any restrictions regarding transmission rate other than those
imposed by the particular DTE/DCE interface in use.



PPP Link Layer
PPP uses the principles, terminology, and frame structure of the International Organization for
Standardization (ISO) HDLC procedures (ISO 3309-1979), as modified by ISO 3309:1984/PDAD1
“Addendum 1: Start/Stop Transmission.” ISO 3309-1979 specifies the HDLC frame structure for use in
synchronous environments. ISO 3309:1984/PDAD1 specifies proposed modifications to ISO 3309-1979
to allow its use in asynchronous environments. The PPP control procedures use the definitions and
control field encodings standardized in ISO 4335-1979 and ISO 4335-1979/Addendum 1-1979.

The following descriptions summarize the PPP frame fields:
• Flag—A single byte that indicates the beginning or end of a frame. The flag field consists of the
binary sequence 01111110.
• Address—A single byte that contains the binary sequence 11111111, the standard broadcast
address. PPP does not assign individual station addresses.
• Control—A single byte that contains the binary sequence 00000011, which calls for transmission
of user data in an unsequenced frame. A connectionless link service similar to that of Logical Link
Control (LLC) Type 1 is provided. (For more information about LLC types and frame types, refer
to Chapter 16.)
• Protocol—Two bytes that identify the protocol encapsulated in the information field of the frame.
The most up-to-date values of the protocol field are specified in the most recent Assigned Numbers
Request For Comments (RFC).
• Data—Zero or more bytes that contain the datagram for the protocol specified in the protocol field.
The end of the information field is found by locating the closing flag sequence and allowing 2 bytes
for the FCS field. The default maximum length of the information field is 1,500 bytes. By prior
agreement, consenting PPP implementations can use other values for the maximum information
field length.
• Frame check sequence (FCS)—Normally 16 bits (2 bytes). By prior agreement, consenting PPP
implementations can use a 32-bit (4-byte) FCS for improved error detection.
The LCP can negotiate modifications to the standard PPP frame structure. Modified frames, however,
always will be clearly distinguishable from standard frames.
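As an illustration of the frame layout summarized above, the following Python sketch pulls apart a
hand-built frame. It is a teaching aid only: the payload and FCS values are placeholders, and byte or bit
stuffing of the flag pattern is ignored.

    import struct

    def parse_ppp_frame(frame: bytes):
        if frame[0] != 0x7E or frame[-1] != 0x7E:
            raise ValueError("missing opening or closing flag (01111110)")
        address, control = frame[1], frame[2]
        (protocol,) = struct.unpack("!H", frame[3:5])
        data = frame[5:-3]                         # everything before the FCS
        (fcs,) = struct.unpack("!H", frame[-3:-1])
        return {"address": address, "control": control,
                "protocol": protocol, "data": data, "fcs": fcs}

    # 0x0021 is the protocol value commonly used for IP; the FCS here is a
    # placeholder rather than a valid checksum.
    sample = (bytes([0x7E, 0xFF, 0x03]) + struct.pack("!H", 0x0021)
              + b"payload" + struct.pack("!H", 0x0000) + bytes([0x7E]))
    print(parse_ppp_frame(sample))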


PPP Link-Control Protocol
The PPP LCP provides a method of establishing, configuring, maintaining, and terminating the
point-to-point connection. LCP goes through four distinct phases.
First, link establishment and configuration negotiation occur. Before any network layer datagrams (for
example, IP) can be exchanged, LCP first must open the connection and negotiate configuration
parameters. This phase is complete when a configuration-acknowledgment frame has been both sent and
received.
This is followed by link quality determination. LCP allows an optional link quality determination phase
following the link-establishment and configuration-negotiation phase. In this phase, the link is tested to
determine whether the link quality is sufficient to bring up network layer protocols. LCP can delay
transmission of network layer protocol information until this phase is complete.
At this point, network layer protocol configuration negotiation occurs. After LCP has finished the link
quality determination phase, network layer protocols can be configured separately by the appropriate
NCP and can be brought up and taken down at any time. If LCP closes the link, it informs the network
layer protocols so that they can take appropriate action.
Finally, link termination occurs. LCP can terminate the link at any time. This usually is done at the
request of a user but can happen because of a physical event, such as the loss of carrier or the expiration
of an idle-period timer.
Three classes of LCP frames exist. Link-establishment frames are used to establish and configure a link.
Link-termination frames are used to terminate a link, and link-maintenance frames are used to manage
and debug a link.
These frames are used to accomplish the work of each of the LCP phases.


Summary
The Point-to-Point Protocol (PPP) originally emerged as an encapsulation protocol for transporting IP
traffic over point-to-point links. PPP also established a standard for assigning and managing IP
addresses, asynchronous and bit-oriented synchronous encapsulation, network protocol multiplexing,
link configuration, link quality testing, error detection, and option negotiation for added networking
capabilities.
PPP provides a method for transmitting datagrams over serial point-to-point links and comprises the
following three components:
• A method for encapsulating datagrams over serial links
• An extensible LCP to establish, configure, and test the connection
• A family of NCPs for establishing and configuring different network layer protocols
PPP is capable of operating across any DTE/DCE interface. PPP does not impose any restriction
regarding transmission rate other than those imposed by the particular DTE/DCE interface in use.
Six fields make up the PPP frame. The PPP LCP provides a method of establishing, configuring,
maintaining, and terminating the point-to-point connection.

Internet Protocols


Background
The Internet protocols are the world’s most popular open-system (nonproprietary) protocol suite
because they can be used to communicate across any set of interconnected networks and are equally
well suited for LAN and WAN communications. The Internet protocols consist of a suite of
communication protocols, of which the two best known are the Transmission Control Protocol
(TCP) and the Internet Protocol (IP). The Internet protocol suite not only includes lower-layer
protocols (such as TCP and IP), but it also specifies common applications such as electronic mail,
terminal emulation, and file transfer. This chapter provides a broad introduction to specifications that
comprise the Internet protocols. Discussions include IP addressing and key upper-layer protocols
used in the Internet. Specific routing protocols are addressed individually later in this document.
Internet protocols were first developed in the mid-1970s, when the Defense Advanced Research
Projects Agency (DARPA) became interested in establishing a packet-switched network that would
facilitate communication between dissimilar computer systems at research institutions. With the
goal of heterogeneous connectivity in mind, DARPA funded research by Stanford University and
Bolt, Beranek, and Newman (BBN). The result of this development effort was the Internet protocol
suite, completed in the late 1970s.
TCP/IP later was included with Berkeley Software Distribution (BSD) UNIX and has since become
the foundation on which the Internet and the World Wide Web (WWW) are based.
Documentation of the Internet protocols (including new or revised protocols) and policies are
specified in technical reports called Request For Comments (RFCs), which are published and then
reviewed and analyzed by the Internet community. Protocol refinements are published in the new
RFCs. To illustrate the scope of the Internet protocols, Figure 30-1 maps many of the protocols of
the Internet protocol suite and their corresponding OSI layers. This chapter addresses the basic
elements and operations of these and other key Internet protocols.


Internet Protocol (IP)
The Internet Protocol (IP) is a network-layer (Layer 3) protocol that contains addressing information
and some control information that enables packets to be routed. IP is documented in RFC 791 and
is the primary network-layer protocol in the Internet protocol suite. Along with the Transmission
Control Protocol (TCP), IP represents the heart of the Internet protocols. IP has two primary
responsibilities: providing connectionless, best-effort delivery of datagrams through an
internetwork; and providing fragmentation and reassembly of datagrams to support data links with
different maximum-transmission unit (MTU) sizes.


IP Packet Format
An IP packet contains several types of information. The following discussion describes the IP packet fields:
• Version—Indicates the version of IP currently used.
• IP Header Length (IHL)—Indicates the datagram header length in 32-bit words.
• Type-of-Service—Specifies how an upper-layer protocol would like a current datagram to be
handled, and assigns datagrams various levels of importance.
• Total Length—Specifies the length, in bytes, of the entire IP packet, including the data and
header.
• Identification—Contains an integer that identifies the current datagram. This field is used to help
piece together datagram fragments.
• Flags—Consists of a 3-bit field of which the two low-order (least-significant) bits control
fragmentation. The low-order bit specifies whether the packet is the last fragment in a series of
fragmented packets (the more-fragments indication). The middle bit specifies whether the packet
may be fragmented. The third or high-order bit is not used.
• Fragment Offset—Indicates the position of the fragment’s data relative to the beginning of the
data in the original datagram, which allows the destination IP process to properly reconstruct the
original datagram.
• Time-to-Live—Maintains a counter that gradually decrements down to zero, at which point the
datagram is discarded. This keeps packets from looping endlessly.
• Protocol—Indicates which upper-layer protocol receives incoming packets after IP processing is
complete.
• Header Checksum—Helps ensure IP header integrity.
• Source Address—Specifies the sending node.
• Destination Address—Specifies the receiving node.
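The field list above maps directly onto the fixed 20-byte IPv4 header. The following sketch decodes a
hand-built sample header; the addresses are documentation-style examples of the kind used elsewhere in
this chapter, and the checksum is left as a placeholder rather than being computed.

    import socket
    import struct

    def parse_ipv4_header(raw: bytes):
        (ver_ihl, tos, total_len, ident, flags_frag, ttl, proto,
         checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
        return {
            "version": ver_ihl >> 4,
            "ihl_words": ver_ihl & 0x0F,             # header length in 32-bit words
            "type_of_service": tos,
            "total_length": total_len,
            "identification": ident,
            "flags": flags_frag >> 13,               # 3-bit flags field
            "fragment_offset": flags_frag & 0x1FFF,  # low 13 bits
            "time_to_live": ttl,
            "protocol": proto,                       # for example, 6 = TCP, 17 = UDP
            "header_checksum": checksum,
            "source": socket.inet_ntoa(src),
            "destination": socket.inet_ntoa(dst),
        }

    sample = struct.pack("!BBHHHBBH4s4s", (4 << 4) | 5, 0, 40, 1, 0, 64, 6, 0,
                         socket.inet_aton("172.16.1.10"),
                         socket.inet_aton("192.168.2.20"))
    print(parse_ipv4_header(sample))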



IP Subnet Addressing
IP networks can be divided into smaller networks called subnetworks (or subnets). Subnetting
provides the network administrator with several benefits, including extra flexibility, more efficient
use of network addresses, and the capability to contain broadcast traffic (a broadcast will not cross
a router).
Subnets are under local administration. As such, the outside world sees an organization as a single
network and has no detailed knowledge of the organization’s internal structure.
A given network address can be broken up into many subnetworks. For example, 172.16.1.0,
172.16.2.0, 172.16.3.0, and 172.16.4.0 are all subnets within network 172.16.0.0. (All 0s in the host
portion of an address specify the entire network.)



IP Subnet Mask
A subnet address is created by “borrowing” bits from the host field and designating them as the
subnet field. The number of borrowed bits varies and is specified by the subnet mask. Figure 30-6
shows how bits are borrowed from the host address field to create the subnet address field.
Subnet masks use the same format and representation technique as IP addresses. The subnet mask,
however, has binary 1s in all bits specifying the network and subnetwork fields, and binary 0s in all
bits specifying the host field. Figure 30-7 illustrates a sample subnet mask.

Subnet mask bits should come from the high-order (left-most) bits of the host field. Details of Class B
and C subnet mask types follow. Class A addresses are not discussed in this chapter because they
generally are subnetted on an 8-bit boundary.

The default subnet mask for a Class B address that has no subnetting is 255.255.0.0, while the subnet
mask for a Class B address 172.16.0.0 that specifies eight bits of subnetting is 255.255.255.0. The
reason for this is that eight bits of subnetting yield 2^8 - 2 = 254 possible subnets (subtracting 1 for the
all-zeros subnet and 1 for the all-ones subnet), with 2^8 - 2 = 254 hosts per subnet.
The subnet mask for a Class C address 192.168.2.0 that specifies five bits of subnetting is
255.255.255.248. With five bits available for subnetting, 2^5 - 2 = 30 subnets are possible, with
2^3 - 2 = 6 hosts per subnet.
The reference charts shown in Table 30-2 and Table 30-3 can be used when planning Class B and C
networks to determine the required number of subnets and hosts, and the appropriate subnet mask.
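The arithmetic above can be checked with a few lines of Python. The sketch is illustrative only; the
subtraction of 2 follows the convention used in the text of excluding the all-zeros and all-ones values.

    import ipaddress

    def subnet_plan(default_prefix, subnet_bits):
        """Mask plus usable subnet and host counts for the borrowed subnet bits."""
        prefix = default_prefix + subnet_bits
        host_bits = 32 - prefix
        mask = ipaddress.ip_network(f"0.0.0.0/{prefix}").netmask
        return str(mask), 2**subnet_bits - 2, 2**host_bits - 2

    print(subnet_plan(16, 8))  # Class B, 8 subnet bits -> ('255.255.255.0', 254, 254)
    print(subnet_plan(24, 5))  # Class C, 5 subnet bits -> ('255.255.255.248', 30, 6)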


How Subnet Masks are Used to Determine the Network Number
The router performs a set process to determine the network (or more specifically, the subnetwork)
address. First, the router extracts the IP destination address from the incoming packet and retrieves
the internal subnet mask. It then performs a logical AND operation to obtain the network number.
This causes the host portion of the IP destination address to be removed, while the destination
network number remains. The router then looks up the destination network number and matches it
with an outgoing interface. Finally, it forwards the frame to the destination IP address. Specifics
regarding the logical AND operation are discussed in the following section.

Logical AND Operation
Three basic rules govern logically “ANDing” two binary numbers. First, 1 “ANDed” with 1 yields
1. Second, 1 “ANDed” with 0 yields 0. Finally, 0 “ANDed” with 0 yields 0. The truth table provided
in Table 30-4 illustrates the rules for logical AND operations.

Two simple guidelines exist for remembering logical AND operations: Logically “ANDing” a 1 with
a 1 yields the original value, and logically “ANDing” a 0 with any number yields 0.
Figure 30-9 illustrates that when a logical AND of the destination IP address and the subnet mask is
performed, the subnetwork number remains, which the router uses to forward the packet.
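In code, the operation reduces to a single bitwise AND, as the following sketch shows. The host address
is an illustrative value on the 172.16.0.0 example network used earlier in the chapter.

    import ipaddress

    def network_number(destination, mask):
        dest = int(ipaddress.IPv4Address(destination))
        m = int(ipaddress.IPv4Address(mask))
        return str(ipaddress.IPv4Address(dest & m))  # host bits are cleared

    print(network_number("172.16.2.120", "255.255.255.0"))  # -> 172.16.2.0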


Address Resolution Protocol (ARP) Overview
For two machines on a given network to communicate, each must know the other machine’s physical
(or MAC) address. By broadcasting Address Resolution Protocol (ARP) requests, a host can dynamically
discover the MAC-layer address corresponding to a particular IP network-layer address.
After receiving a MAC-layer address, IP devices create an ARP cache to store the recently acquired
IP-to-MAC address mapping, thus avoiding having to broadcast ARP requests when they want to recontact
a device. If the device does not respond within a specified time frame, the cache entry is flushed.
In addition, the Reverse Address Resolution Protocol (RARP) is used to map MAC-layer addresses
to IP addresses. RARP, which is the logical inverse of ARP, might be used by diskless workstations
that do not know their IP addresses when they boot. RARP relies on the presence of a RARP server
with table entries of MAC-layer-to-IP address mappings.

Internet Routing
Internet routing devices traditionally have been called gateways. In today’s terminology, however,
the term gateway refers specifically to a device that performs application-layer protocol translation
between devices. Interior gateways refer to devices that perform these protocol functions between
machines or networks under the same administrative control or authority, such as a corporation’s
internal network. These are known as autonomous systems. Exterior gateways perform protocol
functions between independent networks.
Routers within the Internet are organized hierarchically. Routers used for information exchange
within autonomous systems are called interior routers, which use a variety of Interior Gateway
Protocols (IGPs) to accomplish this purpose. The Routing Information Protocol (RIP) is an example
of an IGP.
Routers that move information between autonomous systems are called exterior routers. These
routers use an exterior gateway protocol to exchange information between autonomous systems. The
Border Gateway Protocol (BGP) is an example of an exterior gateway protocol.


IP Routing
IP routing protocols are dynamic. Dynamic routing calls for routes to be calculated automatically at
regular intervals by software in routing devices. This contrasts with static routing, where routes are
established by the network administrator and do not change until the network administrator changes
them.
An IP routing table, which consists of destination address/next hop pairs, is used to enable dynamic
routing. An entry in this table, for example, would be interpreted as follows: to get to network
172.31.0.0, send the packet out Ethernet interface 0 (E0).
IP routing specifies that IP datagrams travel through internetworks one hop at a time. The entire route
is not known at the onset of the journey, however. Instead, at each stop, the next destination is
calculated by matching the destination address within the datagram with an entry in the current
node’s routing table.
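The per-hop lookup just described can be sketched as follows. Apart from the 172.31.0.0/Ethernet 0
pair quoted above, the table entries and prefix lengths are illustrative.

    import ipaddress

    ROUTING_TABLE = [
        (ipaddress.ip_network("172.31.0.0/16"), "Ethernet0"),
        (ipaddress.ip_network("192.168.2.0/24"), "Serial0"),
    ]

    def next_hop(destination):
        addr = ipaddress.ip_address(destination)
        for network, interface in ROUTING_TABLE:
            if addr in network:
                return interface
        return None  # no matching entry; the packet cannot be forwarded

    print(next_hop("172.31.4.7"))  # -> Ethernet0
    print(next_hop("10.0.0.1"))    # -> None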
Each node’s involvement in the routing process is limited to forwarding packets based on internal
information. The nodes do not monitor whether the packets get to their final destination, nor does IP
provide for error reporting back to the source when routing anomalies occur. This task is left to
another Internet protocol, the Internet Control Message Protocol (ICMP), which is discussed in the
following section.


Internet Control Message Protocol (ICMP)
The Internet Control Message Protocol (ICMP) is a network-layer Internet protocol that provides
message packets to report errors and other information regarding IP packet processing back to the
source. ICMP is documented in RFC 792.


ICMP Messages
ICMP generates several kinds of useful messages, including Destination Unreachable, Echo Request
and Reply, Redirect, Time Exceeded, and Router Advertisement and Router Solicitation. If an ICMP
message cannot be delivered, no second one is generated. This is to avoid an endless flood of ICMP
messages.
When an ICMP destination-unreachable message is sent by a router, it means that the router is unable
to send the packet to its final destination. The router then discards the original packet. Two reasons
exist for why a destination might be unreachable. Most commonly, the source host has specified a
nonexistent address. Less frequently, the router does not have a route to the destination.
Destination-unreachable messages include four basic types: network unreachable, host unreachable,
protocol unreachable, and port unreachable. Network-unreachable messages usually mean that a
failure has occurred in the routing or addressing of a packet. Host-unreachable messages usually
indicate a delivery failure, such as a wrong subnet mask. Protocol-unreachable messages generally
mean that the destination does not support the upper-layer protocol specified in the packet.
Port-unreachable messages imply that the TCP socket or port is not available.
An ICMP echo-request message, which is generated by the ping command, is sent by any host to test
node reachability across an internetwork. The ICMP echo-reply message indicates that the node can
be successfully reached.
An ICMP Redirect message is sent by the router to the source host to stimulate more efficient
routing. The router still forwards the original packet to the destination. ICMP redirects allow host
routing tables to remain small because it is necessary to know the address of only one router, even if
that router does not provide the best path. Even after receiving an ICMP Redirect message, some
devices might continue using the less-efficient route.
An ICMP Time-exceeded message is sent by the router if an IP packet’s Time-to-Live field
(expressed in hops or seconds) reaches zero. The Time-to-Live field prevents packets from
continuously circulating the internetwork if the internetwork contains a routing loop. The router then
discards the original packet.


ICMP Router-Discovery Protocol (IRDP)
IRDP uses Router-Advertisement and Router-Solicitation messages to discover the addresses of
routers on directly attached subnets. Each router periodically multicasts Router-Advertisement
messages from each of its interfaces. Hosts then discover addresses of routers on directly attached
subnets by listening for these messages. Hosts can use Router-Solicitation messages to request
immediate advertisements rather than waiting for unsolicited messages.
IRDP offers several advantages over other methods of discovering addresses of neighboring routers.
Primarily, it does not require hosts to recognize routing protocols, nor does it require manual
configuration by an administrator.
Router-Advertisement messages enable hosts to discover the existence of neighboring routers, but
not which router is best to reach a particular destination. If a host uses a poor first-hop router to reach
a particular destination, it receives a Redirect message identifying a better choice.


Transmission Control Protocol (TCP)
The TCP provides reliable transmission of data in an IP environment. TCP corresponds to the
transport layer (Layer 4) of the OSI reference model. Among the services TCP provides are stream
data transfer, reliability, efficient flow control, full-duplex operation, and multiplexing.
With stream data transfer, TCP delivers an unstructured stream of bytes identified by sequence
numbers. This service benefits applications because they do not have to chop data into blocks before
handing it off to TCP. Instead, TCP groups bytes into segments and passes them to IP for delivery.
TCP offers reliability by providing connection-oriented, end-to-end reliable packet delivery through
an internetwork. It does this by sequencing bytes with a forwarding acknowledgment number that
indicates to the destination the next byte the source expects to receive. Bytes not acknowledged
within a specified time period are retransmitted. The reliability mechanism of TCP allows devices
to deal with lost, delayed, duplicate, or misread packets. A time-out mechanism allows devices to
detect lost packets and request retransmission.
TCP offers efficient flow control, which means that, when sending acknowledgments back to the
source, the receiving TCP process indicates the highest sequence number it can receive without
overflowing its internal buffers.
Full-duplex operation means that TCP processes can both send and receive at the same time.
Finally, TCP’s multiplexing means that numerous simultaneous upper-layer conversations can be
multiplexed over a single connection.


TCP Connection Establishment
To use reliable transport services, TCP hosts must establish a connection-oriented session with one
another. Connection establishment is performed by using a “three-way handshake” mechanism.
A three-way handshake synchronizes both ends of a connection by allowing both sides to agree upon
initial sequence numbers. This mechanism also guarantees that both sides are ready to transmit data
and know that the other side is ready to transmit as well. This is necessary so that packets are not
transmitted or retransmitted during session establishment or after session termination.
Each host randomly chooses a sequence number used to track bytes within the stream it is sending
and receiving. Then, the three-way handshake proceeds in the following manner:
The first host (Host A) initiates a connection by sending a packet with the initial sequence number
(X) and SYN bit set to indicate a connection request. The second host (Host B) receives the SYN,
records the sequence number X, and replies by acknowledging the SYN (with an ACK = X + 1).
Host B includes its own initial sequence number (SEQ = Y). An ACK = 20 means the host has
received bytes 0 through 19 and expects byte 20 next. This technique is called forward
acknowledgment. Host A then acknowledges all bytes Host B sent with a forward acknowledgment
indicating the next byte Host A expects to receive (ACK = Y + 1). Data transfer then can begin.
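The exchange can be traced with a short sketch that uses small, fixed initial sequence numbers in place
of the random values real hosts choose; the values and the field names in the dictionaries are
illustrative.

    def three_way_handshake(isn_a, isn_b):
        steps = [("A -> B", {"SYN": 1, "SEQ": isn_a})]
        # B acknowledges A's SYN (ACK = X + 1) and supplies its own SEQ = Y.
        steps.append(("B -> A", {"SYN": 1, "ACK": isn_a + 1, "SEQ": isn_b}))
        # A acknowledges B's sequence number (ACK = Y + 1); data can now flow.
        steps.append(("A -> B", {"ACK": isn_b + 1}))
        return steps

    for direction, segment in three_way_handshake(isn_a=100, isn_b=300):
        print(direction, segment)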


Positive Acknowledgment and Retransmission (PAR)
A simple transport protocol might implement a reliability-and-flow-control technique where the
source sends one packet, starts a timer, and waits for an acknowledgment before sending a new
packet. If the acknowledgment is not received before the timer expires, the source retransmits the
packet. Such a technique is called positive acknowledgment and retransmission (PAR).
By assigning each packet a sequence number, PAR enables hosts to track lost or duplicate packets
caused by network delays that result in premature retransmission. The sequence numbers are sent
back in the acknowledgments so that the acknowledgments can be tracked.
PAR is an inefficient use of bandwidth, however, because a host must wait for an acknowledgment
before sending a new packet, and only one packet can be sent at a time.


TCP Sliding Window
A TCP sliding window provides more efficient use of network bandwidth than PAR because it
enables hosts to send multiple bytes or packets before waiting for an acknowledgment.
In TCP, the receiver specifies the current window size in every packet. Because TCP provides a
byte-stream connection, window sizes are expressed in bytes. This means that a window is the
number of data bytes that the sender is allowed to send before waiting for an acknowledgment. Initial
window sizes are indicated at connection setup, but might vary throughout the data transfer to
provide flow control. A window size of zero, for instance, means “Send no data.”
In a TCP sliding-window operation, for example, the sender might have a sequence of bytes to send
(numbered 1 to 10) to a receiver who has a window size of five. The sender then would place a
window around the first five bytes and transmit them together. It would then wait for an
acknowledgment.
The receiver would respond with an ACK = 6, indicating that it has received bytes 1 to 5 and is
expecting byte 6 next. In the same packet, the receiver would indicate that its window size is 5. The
sender then would move the sliding window five bytes to the right and transmit bytes 6 to 10. The
receiver would respond with an ACK = 11, indicating that it is expecting sequenced byte 11 next. In
this packet, the receiver might indicate that its window size is 0 (because, for example, its internal
buffers are full). At this point, the sender cannot send any more bytes until the receiver sends another
packet with a window size greater than 0.
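The walkthrough above can be replayed with a small sketch in which the receiver's replies are scripted
to match the example (ACK 6 with a window of 5, then ACK 11 with a window of 0); the function name
and structure are illustrative only.

    def run_sender(data, receiver_replies):
        next_byte, window = 1, 5          # initial window advertised at setup
        for ack, new_window in receiver_replies:
            burst = list(range(next_byte, min(next_byte + window, len(data) + 1)))
            print("send bytes", burst)
            print("receive ACK =", ack, "window =", new_window)
            next_byte, window = ack, new_window
            if window == 0:
                print("window closed; waiting for a non-zero window")
                break

    run_sender(data=list(range(1, 11)), receiver_replies=[(6, 5), (11, 0)])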


TCP Packet Field Descriptions
The following descriptions summarize the TCP packet fields illustrated in Figure 30-10:
• Source Port and Destination Port—Identifies points at which upper-layer source and destination
processes receive TCP services.
• Sequence Number—Usually specifies the number assigned to the first byte of data in the current
message. In the connection-establishment phase, this field also can be used to identify an initial
sequence number to be used in an upcoming transmission.
• Acknowledgment Number—Contains the sequence number of the next byte of data the sender of
the packet expects to receive.
• Data Offset—Indicates the number of 32-bit words in the TCP header.
• Reserved—Remains reserved for future use.
• Flags—Carries a variety of control information, including the SYN and ACK bits used for
connection establishment, and the FIN bit used for connection termination.
• Window—Specifies the size of the sender’s receive window (that is, the buffer space available for
incoming data).
• Checksum—Indicates whether the header was damaged in transit.
• Urgent Pointer—Points to the first urgent data byte in the packet.
• Options—Specifies various TCP options.
• Data—Contains upper-layer information.


User Datagram Protocol (UDP)
The User Datagram Protocol (UDP) is a connectionless transport-layer protocol (Layer 4) that
belongs to the Internet protocol family. UDP is basically an interface between IP and upper-layer
processes. UDP protocol ports distinguish multiple applications running on a single device from one
another.
Unlike TCP, UDP adds no reliability, flow-control, or error-recovery functions to IP. Because of
UDP’s simplicity, UDP headers contain fewer bytes and consume less network overhead than TCP.
UDP is useful in situations where the reliability mechanisms of TCP are not necessary, such as in
cases where a higher-layer protocol might provide error and flow control.
UDP is the transport protocol for several well-known application-layer protocols, including Network
File System (NFS), Simple Network Management Protocol (SNMP), Domain Name System (DNS),
and Trivial File Transfer Protocol (TFTP).
The UDP packet format contains four fields, as shown in Figure 30-11. These include source and
destination ports, length, and checksum fields.
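Because the UDP header is a fixed eight bytes, the four fields can be unpacked directly, as in the
following sketch. The sample values are illustrative, and the checksum is left as a placeholder rather
than being computed.

    import struct

    def parse_udp_header(raw: bytes):
        source_port, dest_port, length, checksum = struct.unpack("!HHHH", raw[:8])
        return {"source_port": source_port, "destination_port": dest_port,
                "length": length, "checksum": checksum}

    sample = struct.pack("!HHHH", 5000, 53, 8 + 12, 0)  # e.g., a query to port 53
    print(parse_udp_header(sample))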


Internet Protocols Application-Layer Protocols
The Internet protocol suite includes many application-layer protocols that represent a wide variety
of applications, including the following:
• File Transfer Protocol (FTP)—Moves files between devices
• Simple Network-Management Protocol (SNMP)—Primarily reports anomalous network
conditions and sets network threshold values
• Telnet—Serves as a terminal emulation protocol
• X Windows—Serves as a distributed windowing and graphics system used for communication
between X terminals and UNIX workstations
• Network File System (NFS), External Data Representation (XDR), and Remote Procedure Call
(RPC)—Work together to enable transparent access to remote network resources
• Simple Mail Transfer Protocol (SMTP)—Provides electronic mail services
• Domain Name System (DNS)—Translates the names of network nodes into network addresses