September 16, 2013

LAN Switching and VLANs


A LAN switch is a device that provides much higher port density at a lower cost than traditional bridges.
For this reason, LAN switches can accommodate network designs featuring fewer users per segment, thereby increasing the average available bandwidth per user.

This article provides a summary of general LAN switch operation and maps LAN switching to the OSI reference model.


The trend toward fewer users per segment is known as microsegmentation. Microsegmentation allows the creation of private, or dedicated, segments: one user per segment.
Each user receives instant access to the full bandwidth and does not have to contend for available bandwidth with other users.
As a result, collisions (a normal phenomenon in shared-medium networks employing hubs) do not occur, as long as the equipment operates in full-duplex mode.
A LAN switch forwards frames based on either the frame's Layer 2 address (Layer 2 LAN switch) or, in some cases, the frame's Layer 3 address (multilayer LAN switch).
A LAN switch is also called a frame switch because it forwards Layer 2 frames, whereas an ATM switch forwards cells.

Figure: A LAN Switch Is a Data Link Layer Device

LAN Switch Operation:
LAN switches are similar to transparent bridges in functions such as learning the topology, forwarding, and filtering. These switches also support several new and unique features, such as dedicated communication between devices through full-duplex operations, multiple simultaneous conversations, and media-rate adaptation.
Full-duplex communication between network devices increases file-transfer throughput.
Multiple simultaneous conversations can occur by forwarding, or switching, several frames at the same time, thereby increasing network capacity by the number of conversations supported.
Full-duplex communication effectively doubles the throughput, while with media-rate adaptation, the LAN switch can translate between 10 and 100 Mbps, allowing bandwidth to be allocated as needed.
Deploying LAN switches requires no change to existing hubs, network interface cards (NICs), or cabling.
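As a rough illustration of the learning, forwarding, and filtering behavior described above, here is a minimal Python sketch. The class and the port model are invented for illustration only, not any vendor's implementation:

```python
# Hypothetical sketch of transparent-bridge-style learning and forwarding.
class LanSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}          # MAC address -> port (the "switching table")

    def receive(self, in_port, src_mac, dst_mac):
        # Learning: remember which port the source address was seen on.
        self.mac_table[src_mac] = in_port
        # Forwarding: a known destination goes out exactly one port ...
        if dst_mac in self.mac_table:
            out = self.mac_table[dst_mac]
            return [] if out == in_port else [out]   # filtering: never send back
        # ... while an unknown destination is flooded to all other ports.
        return [p for p in self.ports if p != in_port]

sw = LanSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa:aa", "bb:bb"))  # unknown dst: flood -> [2, 3, 4]
print(sw.receive(2, "bb:bb", "aa:aa"))  # aa:aa was learned on port 1 -> [1]
```

Because each port can carry its own conversation, several such lookups can happen in parallel in real hardware, which is where the "multiple simultaneous conversations" capacity gain comes from.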

VLANs Defined:
A VLAN is defined as a broadcast domain within a switched network. Broadcast domains describe the extent that a network propagates a broadcast frame generated by a station.
A switch may be configured to support a single VLAN or multiple VLANs. Whenever a switch supports multiple VLANs, broadcasts within one VLAN never appear in another VLAN.
Switch ports configured as members of one VLAN belong to a different broadcast domain than switch ports configured as members of another VLAN.
Creating VLANs enables administrators to build broadcast domains with fewer users in each broadcast domain.
This increases the bandwidth available to users because fewer users will contend for the bandwidth.
Routers also maintain broadcast domain isolation by blocking broadcast frames.
Therefore, traffic can pass from one VLAN to another only through a router.
Normally, each subnet belongs to a different VLAN. Therefore, a network with many subnets will probably have many VLANs. Switches and VLANs enable a network administrator to assign users to broadcast domains based upon the user's job need.
This provides a high level of deployment flexibility for a network administrator.
Advantages of VLANs include the following:
  • Segmentation of broadcast domains to create more bandwidth
  • Additional security by isolating users with bridge technologies
  • Deployment flexibility based upon job function rather than physical placement

Switch Port Modes:
Switch ports run in either access or trunk mode. In access mode, the interface belongs to one and only one VLAN. Normally a switch port in access mode attaches to an end user device or a server.
The frames transmitted on an access link look like any other Ethernet frame.
Trunks, on the other hand, multiplex traffic for multiple VLANs over the same physical link. Trunk links usually interconnect switches, as shown in Figure: Switches Interconnected with Trunk Links.
However, they may also attach end devices such as servers that have special adapter cards that participate in the multiplexing protocol.

Figure: Switches Interconnected with Trunk Links


Note that some of the devices attach to their switch using access links, while the connections between the switches utilize trunk links.
To multiplex VLAN traffic, special protocols exist that encapsulate or tag (mark) the frames so that the receiving device knows to which VLAN the frame belongs.
Trunk protocols are either proprietary or based upon IEEE 802.1Q.
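As a sketch of how such a tagging protocol marks frames, the following Python snippet inserts an IEEE 802.1Q tag (the TPID value 0x8100 followed by a 16-bit tag control field) after the source MAC of an Ethernet frame. The helper names are made up for illustration:

```python
import struct

TPID = 0x8100   # EtherType value that marks an 802.1Q-tagged frame

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert the 4-byte 802.1Q tag after the destination and source MACs."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)   # PCP (3 bits), DEI (1), VID (12)
    return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

def vlan_of(frame: bytes):
    """Return the VLAN ID if the frame carries a tag, else None."""
    if struct.unpack("!H", frame[12:14])[0] == TPID:
        return struct.unpack("!H", frame[14:16])[0] & 0x0FFF
    return None
```

The receiving switch reads the 12-bit VID to decide which VLAN's broadcast domain the frame belongs to, then strips the tag before delivering the frame on an access port.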

LAN Switching Forwarding:
LAN switches can be characterized by the forwarding method that they support. In the store-and-forward switching method, error checking is performed and erroneous frames are discarded. With the cut-through switching method, latency is reduced by eliminating error checking.
With the store-and-forward switching method, the LAN switch copies the entire frame into its onboard buffers and computes the cyclic redundancy check (CRC).
The frame is discarded if it contains a CRC error or if it is a runt (less than 64 bytes, including the CRC) or a giant (more than 1518 bytes, including the CRC).
If the frame does not contain any errors, the LAN switch looks up the destination address in its forwarding, or switching, table and determines the outgoing interface. 
It then forwards the frame toward its destination.
With the cut-through switching method, the LAN switch copies only the destination address (the first 6 bytes following the preamble) into its onboard buffers. It then looks up the destination address in its switching table, determines the outgoing interface, and forwards the frame toward its destination.
A cut-through switch provides reduced latency because it begins to forward the frame as soon as it reads the destination address and determines the outgoing interface.
Some switches can be configured to perform cut-through switching on a per-port basis until a user-defined error threshold is reached, at which point they automatically change to store-and-forward mode.
When the error rate falls below the threshold, the port automatically reverts to cut-through mode.
LAN switches must use store-and-forward techniques to support multilayer switching.
The switch must receive the entire frame before it performs any protocol-layer operations.
For this reason, advanced switches that perform Layer 3 switching are store-and-forward devices.
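The store-and-forward checks described above can be sketched in Python. This assumes the frame already includes its 4-byte FCS; zlib's CRC-32 uses the same polynomial as Ethernet's FCS, though a real switch performs these checks in hardware:

```python
import struct
import zlib

def store_and_forward_ok(frame: bytes) -> bool:
    """Buffer the whole frame, then check size bounds and the CRC
    (sizes include the 4-byte FCS, as described above)."""
    if len(frame) < 64:       # runt
        return False
    if len(frame) > 1518:     # giant
        return False
    data, fcs = frame[:-4], frame[-4:]
    # Ethernet's FCS is the same CRC-32 that zlib computes;
    # on the wire the FCS is sent least-significant byte first.
    return struct.unpack("<I", fcs)[0] == zlib.crc32(data)
```

A cut-through switch, by contrast, would already be transmitting the frame before any of these checks could complete, which is exactly why it cannot drop runts, giants, or CRC errors.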

LAN Switching Bandwidth:
LAN switches also can be characterized according to the proportion of bandwidth allocated to each port.

Symmetric switching provides evenly distributed bandwidth to each port, while asymmetric switching provides unlike, or unequal, bandwidth between some ports.

An asymmetric LAN switch provides switched connections between ports of unlike bandwidths, such as a combination of 10BaseT and 100BaseT.
This type of switching is also called 10/100 switching. Asymmetric switching is optimized for client/server traffic flows in which multiple clients simultaneously communicate with a server, requiring more bandwidth dedicated to the server port to prevent a bottleneck at that port.
A symmetric switch provides switched connections between ports with the same bandwidth, such as all 10BaseT or all 100BaseT.
Symmetric switching is optimized for a reasonably distributed traffic load, such as in a peer-to-peer desktop environment.
A network manager must evaluate the needed amount of bandwidth for connections between devices to accommodate the data flow of network-based applications when deciding to select an asymmetric or symmetric switch.


LAN Switch and the OSI Model: 
LAN switches can be categorized according to the OSI layer at which they filter and forward, or switch, frames. These categories are: Layer 2, Layer 2 with Layer 3 features, or multilayer.
A Layer 2 LAN switch is operationally similar to a multiport bridge but has a much higher capacity and supports many new features, such as full-duplex operation. A Layer 2 LAN switch performs switching and filtering based on the OSI data link layer (Layer 2) MAC address. As with bridges, it is completely transparent to network protocols and user applications.
A Layer 2 LAN switch with Layer 3 features can make switching decisions based on more information than just the Layer 2 MAC address. Such a switch might incorporate some Layer 3 traffic-control features, such as broadcast and multicast traffic management, security through access lists, and IP fragmentation.
A multilayer switch makes switching and filtering decisions based on OSI data link layer (Layer 2) and OSI network layer (Layer 3) addresses. This type of switch dynamically decides whether to switch (Layer 2) or route (Layer 3) incoming traffic. 
A multilayer LAN switch switches within a workgroup and routes between different workgroups.
Layer 3 switching allows data flows to bypass routers. The first frame passes through the router as normal to ensure that all security policies are observed. 
The switches watch the way that the router treats the frame and then replicate the process for subsequent frames.
For example, if a series of FTP frames flows from 10.0.0.1 to 192.168.1.1, the frames normally pass through a router. Multilayer switching observes how the router changes the Layer 2 and Layer 3 headers and imitates the router for the rest of the frames.
This reduces the load on the router and the latency through the network.
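A toy sketch of this flow-caching idea, with invented names and a stand-in for the router's header rewrite:

```python
# Hypothetical flow cache: the first packet of a flow goes through the
# router; the rewrite the router performed is cached and reused.
flow_cache = {}   # (src_ip, dst_ip) -> the header rewrite the router performed

def forward(src_ip, dst_ip, route_via_router):
    key = (src_ip, dst_ip)
    if key not in flow_cache:
        # First frame of the flow: pass it through the router and
        # record how the router rewrote the Layer 2/Layer 3 headers.
        flow_cache[key] = route_via_router(src_ip, dst_ip)
    # Subsequent frames: apply the cached rewrite in the switch itself.
    return flow_cache[key]
```

The cache key and rewrite representation here are simplified; the point is only that the expensive routing decision happens once per flow rather than once per frame.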

September 15, 2013

Fiber Distributed Data Interface (FDDI)

The Fiber Distributed Data Interface (FDDI) specifies a 100-Mbps token-passing, dual-ring LAN using fiber-optic cable.
FDDI is frequently used as high-speed backbone technology because of its support for high bandwidth and greater distances than copper.
Relatively recently, a related copper specification, called Copper Distributed Data Interface (CDDI), emerged to provide 100-Mbps service over copper. CDDI is the implementation of FDDI protocols over twisted-pair copper wire.

FDDI uses dual-ring architecture with traffic on each ring flowing in opposite directions (called counter-rotating). The dual rings consist of a primary and a secondary ring. During normal operation, the primary ring is used for data transmission, and the secondary ring remains idle. As will be discussed in detail later in this article, the primary purpose of the dual rings is to provide superior reliability and robustness.
Figure: FDDI Uses Counter-Rotating Primary and Secondary Rings

FDDI Transmission Media:
FDDI uses optical fiber as the primary transmission medium, but it also can run over copper cabling.
As mentioned earlier, FDDI over copper is referred to as Copper Distributed Data Interface (CDDI).
Optical fiber has several advantages over copper media. In particular, security, reliability, and performance all are enhanced with optical fiber media because fiber does not emit electrical signals.
A physical medium that does emit electrical signals (copper) can be tapped and therefore would permit unauthorized access to the data that is transiting the medium.
In addition, fiber is immune to electrical interference from radio frequency interference (RFI) and electromagnetic interference (EMI).
Fiber historically has supported much higher bandwidth (throughput potential) than copper, although recent technological advances have made copper capable of transmitting at 100 Mbps.
Finally, FDDI allows 2 km between stations using multimode fiber, and even longer distances using single-mode fiber.
FDDI defines two types of optical fiber: single-mode and multimode. A mode is a ray of light that enters the fiber at a particular angle. Multimode fiber uses LEDs as the light-generating devices, while single-mode fiber generally uses lasers.
Multimode fiber allows multiple modes of light to propagate through the fiber. Because these modes of light enter the fiber at different angles, they will arrive at the end of the fiber at different times.
This characteristic is known as modal dispersion. Modal dispersion limits the bandwidth and distances that can be accomplished using multimode fibers. For this reason, multimode fiber is generally used for connectivity within a building or a relatively geographically contained environment.
Single-mode fiber allows only one mode of light to propagate through the fiber. Because only a single mode of light is used, modal dispersion is not present with single-mode fiber. Therefore, single-mode fiber is capable of delivering considerably higher performance connectivity over much larger distances, which is why it generally is used for connectivity between buildings and within environments that are more geographically dispersed.

Figure: Light Sources Differ for Single-Mode and Multimode Fibers

FDDI Specifications: 

FDDI specifies the physical and media-access portions of the OSI reference model. FDDI is not actually a single specification, but rather a collection of four separate specifications, each with a specific function.

Combined, these specifications have the capability to provide high-speed connectivity between upper-layer protocols such as TCP/IP and IPX, and media such as fiber-optic cabling.
FDDI's four specifications are:
  • Media Access Control (MAC)
  • Physical Layer Protocol (PHY)
  • Physical-Medium Dependent (PMD)
  • Station Management (SMT)

The MAC specification defines how the medium is accessed, including frame format, token handling, addressing, algorithms for calculating the cyclic redundancy check (CRC) value, and error-recovery mechanisms.
The PHY specification defines data encoding/decoding procedures, clocking requirements, and framing, among other functions.
The PMD specification defines the characteristics of the transmission medium, including fiber-optic links, power levels, bit-error rates, optical components, and connectors. The SMT specification defines FDDI station configuration, ring configuration, and ring control features, including station insertion and removal, initialization, fault isolation and recovery, scheduling, and statistics collection.
FDDI is similar to IEEE 802.3 Ethernet and IEEE 802.5 Token Ring in its relationship with the OSI model. Its primary purpose is to provide connectivity between upper OSI layers of common protocols and the media used to connect network devices.


Figure: FDDI Specifications Map to the OSI Hierarchical Model

FDDI Station-Attachment Types
One of the unique characteristics of FDDI is that multiple ways actually exist by which to connect FDDI devices. FDDI defines four types of devices: single-attachment station (SAS), dual-attachment station (DAS), single-attached concentrator (SAC), and dual-attached concentrator (DAC).
An SAS attaches to only one ring (the primary) through a concentrator. One of the primary advantages of connecting devices with SAS attachments is that the devices will not have any effect on the FDDI ring if they are disconnected or powered off. Concentrators will be covered in more detail in the following discussion.

Each FDDI DAS has two ports, designated A and B. These ports connect the DAS to the dual FDDI ring. Therefore, each port provides a connection for both the primary and the secondary rings. As you will see in the next section, devices using DAS connections will affect the rings if they are disconnected or powered off.

Figure: FDDI DAS Ports Attach to the Primary and Secondary Rings

FDDI Frame Format

The FDDI frame format is similar to the format of a Token Ring frame. This is one of the areas in which FDDI borrows heavily from earlier LAN technologies, such as Token Ring. FDDI frames can be as large as 4,500 bytes.

Figure: The FDDI Frame Is Similar to That of a Token Ring 



The following descriptions summarize the FDDI data frame and token fields illustrated in the above figure.

Preamble - Gives a unique sequence that prepares each station for an upcoming frame.

Start delimiter - Indicates the beginning of a frame by employing a signaling pattern that differentiates it from the rest of the frame.

Frame control - Indicates the size of the address fields and whether the frame contains asynchronous or synchronous data, among other control information.

Destination address - Contains a unicast (singular), multicast (group), or broadcast (every station) address. As with Ethernet and Token Ring addresses, FDDI destination addresses are 6 bytes long.

Source address - Identifies the single station that sent the frame. As with Ethernet and Token Ring addresses, FDDI source addresses are 6 bytes long.

Data - Contains either information destined for an upper-layer protocol or control information.
Frame check sequence (FCS) - Is filled in by the source station with a calculated cyclic redundancy check value dependent on frame contents (as with Token Ring and Ethernet).
The destination station recalculates the value to determine whether the frame was damaged in transit. If so, the frame is discarded.

End delimiter - Contains unique symbols (which cannot be data symbols) that indicate the end of the frame.


Frame status - Allows the source station to determine whether an error occurred; identifies whether the frame was recognized and copied by a receiving station.
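The field order above (though not the 4B/5B symbol encoding; the preamble and delimiters are signaling patterns below the byte level) can be illustrated with a simple parser. This is a byte-level illustration only, not a real FDDI implementation:

```python
from collections import namedtuple

FddiFrame = namedtuple("FddiFrame", "fc dst src data fcs")

def parse_fddi(frame: bytes) -> FddiFrame:
    """Split the byte-representable FDDI data-frame fields described above.
    Preamble, delimiters, and frame status are signaling symbols, so this
    simplified sketch omits them."""
    fc = frame[0]                  # frame control byte
    dst = frame[1:7]               # 6-byte destination address
    src = frame[7:13]              # 6-byte source address
    data, fcs = frame[13:-4], frame[-4:]   # payload, then 4-byte FCS
    return FddiFrame(fc, dst, src, data, fcs)
```

Note that aside from the frame control byte and the 4,500-byte maximum size, this layout is intentionally close to Token Ring and Ethernet, which is what the paragraph above means by FDDI borrowing from earlier LAN technologies.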

Virtual Private Networks (VPN)


VPNs provide a more active form of security by either encrypting or encapsulating data for transmission through an unsecured network.
These two types of security, encryption and encapsulation, form the foundation of virtual private networking.
However, both encryption and encapsulation are generic terms that describe a function that can be performed by a myriad of specific technologies.
To add to the confusion, these two sets of technologies can be combined in different implementation topologies. Thus, VPNs can vary widely from vendor to vendor.

Layer 2 Tunneling Protocol:

The Internet Engineering Task Force (IETF) was faced with competing proposals from Microsoft and Cisco Systems for a protocol specification that would secure the transmission of IP datagrams through uncontrolled and untrusted network domains.
Microsoft's proposal was an attempt to standardize the Point-to-Point Tunneling Protocol (PPTP), which it had championed.
Cisco, too, had a protocol designed to perform a similar function. The IETF combined the best elements of each proposal and specified the open standard L2TP.

The simplest description of L2TP's functionality is that it carries the Point-to-Point Protocol (PPP) through networks that aren't point-to-point.
PPP has become the most popular communications protocol for remote access using circuit-switched transmission facilities such as POTS lines or ISDN to create a temporary point-to-point connection between the calling device and its destination.
L2TP simulates a point-to-point connection by encapsulating PPP datagrams for transportation through routed networks or internetworks. Upon arrival at their intended destination, the encapsulation is removed, and the PPP datagrams are restored to their original format.
Thus, a point-to-point communications session can be supported through disparate networks. This technique is known as tunneling.

Operational Mechanics:
In a traditional remote access scenario, a remote user (or client) accesses a network by directly connecting to a network access server (NAS).
Generally, the NAS provides several distinct functions: It terminates the point-to-point communications session of the remote user, validates the identity of that user, and then serves that user with access to the network.
Although most remote access technologies bundle these functions into a single device, L2TP separates them into two physically separate devices: the L2TP Access Concentrator (LAC) and the L2TP Network Server (LNS).

As its name implies, the LAC supports authentication and ingress. Upon successful authentication, the remote user's session is forwarded to the LNS, which admits that user into the network. This separation of functions enables greater implementation flexibility than other remote access technologies offer.

Implementation Topologies:
L2TP can be implemented in two distinct topologies:
  • Client-aware tunneling
  • Client-transparent tunneling

The distinction between these two topologies is whether the client machine that is using L2TP to access a remote network is aware that its connection is being tunneled.

Client-Aware Tunneling:
The first implementation topology is known as client-aware tunneling. This name is derived from the remote client initiating (hence, being "aware" of) the tunnel. In this scenario, the client establishes a logical connection within a physical connection to the LAC. The client remains aware of the tunneled connection all the way through to the LNS, and it can even determine which of its traffic goes through the tunnel.

Client-Transparent Tunneling:
Client-transparent tunneling features L2TP access concentrators (LACs) distributed geographically close to the remote users. Such geographic dispersion is intended to reduce the long-distance telephone charges that would otherwise be incurred by remote users dialing into a centrally located LAC.

The remote users need not support L2TP directly; they merely establish a point-to-point communication session with the LAC using PPP.
Ostensibly, the user will be encapsulating IP datagrams in PPP frames.
The LAC exchanges PPP messages with the remote user and establishes an L2TP tunnel with the LNS through which the remote user's PPP messages are passed.

The LNS is the remote user's gateway to its home network. It is the terminus of the tunnel; it strips off all L2TP encapsulation and serves up network access for the remote user.

Point-to-Point Protocol

The Point-to-Point Protocol (PPP) originally emerged as an encapsulation protocol for transporting IP traffic over point-to-point links.

PPP also established a standard for the assignment and management of IP addresses, asynchronous (start/stop) and bit-oriented synchronous encapsulation, network protocol multiplexing, link configuration, link quality testing, error detection, and option negotiation for such capabilities as network layer address negotiation and data-compression negotiation.

PPP supports these functions by providing an extensible Link Control Protocol (LCP) and a family of Network Control Protocols (NCPs) to negotiate optional configuration parameters and facilities.

In addition to IP, PPP supports other protocols, including Novell's Internetwork Packet Exchange (IPX).

PPP Components:
PPP provides a method for transmitting datagrams over serial point-to-point links. PPP contains three main components:
  • A method for encapsulating datagrams over serial links. PPP uses the High-Level Data Link Control (HDLC) protocol as a basis for encapsulating datagrams over point-to-point links.
  • An extensible Link Control Protocol (LCP) for establishing, configuring, and testing the data link connection.
  • A family of Network Control Protocols (NCPs) for establishing and configuring different network layer protocols. PPP is designed to allow the simultaneous use of multiple network layer protocols.

Physical Layer Requirements:

PPP is capable of operating across any DTE/DCE interface. Examples include EIA/TIA-232-C (formerly RS-232-C), EIA/TIA-422 (formerly RS-422), EIA/TIA-423 (formerly RS-423), and International Telecommunication Union Telecommunication Standardization Sector (ITU-T) (formerly CCITT) V.35.

The only absolute requirement imposed by PPP is the provision of a duplex circuit, either dedicated or switched, that can operate in either an asynchronous or synchronous bit-serial mode, transparent to PPP link layer frames.
PPP does not impose any restrictions regarding transmission rate other than those imposed by the particular DTE/DCE interface in use.




PPP Link Layer:

PPP uses the principles, terminology, and frame structure of the International Organization for Standardization (ISO) HDLC procedures (ISO 3309-1979), as modified by ISO 3309:1984/PDAD1 "Addendum 1: Start/Stop Transmission." ISO 3309-1979 specifies the HDLC frame structure for use in synchronous environments.
ISO 3309:1984/PDAD1 specifies proposed modifications to ISO 3309-1979 to allow its use in asynchronous environments.

The PPP control procedures use the definitions and control field encodings standardized in ISO 4335-1979 and ISO 4335-1979/Addendum 1-1979.
The PPP frame format appears in Figure: Six Fields Make Up the PPP Frame.

Figure: Six Fields Make Up the PPP Frame

Flag - A single byte that indicates the beginning or end of a frame.
The flag field consists of the binary sequence 01111110.

Address - A single byte that contains the binary sequence 11111111, the standard broadcast address. PPP does not assign individual station addresses.

Control - A single byte that contains the binary sequence 00000011, which calls for transmission of user data in an unsequenced frame.
A connectionless link service similar to that of Logical Link Control (LLC) Type 1 is provided.

Protocol - Two bytes that identify the protocol encapsulated in the information field of the frame.
The most up-to-date values of the protocol field are specified in the most recent Assigned Numbers Request For Comments (RFC).

Data - Zero or more bytes that contain the datagram for the protocol specified in the protocol field. The end of the information field is found by locating the closing flag sequence and allowing 2 bytes for the FCS field. The default maximum length of the information field is 1,500 bytes. By prior agreement, consenting PPP implementations can use other values for the maximum information field length.

Frame check sequence (FCS) - Normally 16 bits (2 bytes). By prior agreement, consenting PPP implementations can use a 32-bit (4-byte) FCS for improved error detection.
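Putting the six fields together, a minimal parser might look like this (a sketch that assumes the default 2-byte FCS and no byte stuffing or field compression; 0x0021 is the PPP protocol number for IPv4):

```python
import struct

def parse_ppp(frame: bytes):
    """Split a PPP frame into the fields described above: flag, address,
    control, protocol, data, FCS, closing flag. Assumes the default 2-byte
    FCS and no address/control or protocol field compression."""
    assert frame[0] == 0x7E and frame[-1] == 0x7E   # flag = binary 01111110
    assert frame[1] == 0xFF                          # address = binary 11111111
    assert frame[2] == 0x03                          # control = binary 00000011
    protocol = struct.unpack("!H", frame[3:5])[0]    # 2-byte protocol field
    data, fcs = frame[5:-3], frame[-3:-1]            # payload, then 2-byte FCS
    return protocol, data, fcs
```

As the data-field description notes, the end of the information field is found exactly this way: locate the closing flag, then back off the FCS length.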




PPP Link-Control Protocol:

The PPP LCP provides a method of establishing, configuring, maintaining, and terminating the point-to-point connection.
LCP goes through four distinct phases.
First, link establishment and configuration negotiation occur. Before any network layer datagrams (for example, IP) can be exchanged, LCP first must open the connection and negotiate configuration parameters.
This phase is complete when a configuration-acknowledgment frame has been both sent and received.
This is followed by link quality determination.
LCP allows an optional link quality determination phase following the link-establishment and configuration-negotiation phase.
In this phase, the link is tested to determine whether the link quality is sufficient to bring up network layer protocols.
This phase is optional. LCP can delay transmission of network layer protocol information until this phase is complete.
At this point, network layer protocol configuration negotiation occurs.
After LCP has finished the link quality determination phase, network layer protocols can be configured separately by the appropriate NCP and can be brought up and taken down at any time.

If LCP closes the link, it informs the network layer protocols so that they can take appropriate action.

Finally, link termination occurs. LCP can terminate the link at any time. This usually is done at the request of a user but can happen because of a physical event, such as the loss of carrier or the expiration of an idle-period timer.

Three classes of LCP frames exist. Link-establishment frames are used to establish and configure a link. Link-termination frames are used to terminate a link, and link-maintenance frames are used to manage and debug a link.

September 12, 2013

CSMA/CD:



Carrier Sense Multiple Access/Collision Detection

CSMA/CD is a Media Access Control (MAC) protocol that defines how network devices respond when two devices attempt to use a data channel simultaneously and encounter a data collision. 
The CSMA/CD rules define how long the device should wait if a collision occurs. The medium is often used by multiple data nodes, so each data node receives transmissions from each of the other nodes on the medium.

There are several CSMA access modes: 
1-persistent, P-persistent and O-persistent.

  • 1-persistent is used in CSMA/CD systems, like Ethernet. This mode waits for the medium to be idle, then transmits data.

  • P-persistent is used in CSMA/CA systems, like Wi-Fi. This mode waits for the medium to be idle, then transmits data with a probability p. If the data node does not transmit (a probability of 1 - p), the sender waits for the medium to be idle again and transmits with the same probability p.

  • O-persistent is used by CobraNet, LonWorks, and the controller area network. This mode assigns a transmission order to each data node. When the medium becomes idle, the data node next in line transmits its data; the others wait their turn. After each transmission, the order is updated to reflect which data nodes have already transmitted, moving each data node through the queue.
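The p-persistent rule can be sketched in a few lines of Python (a slotted toy model, not a full simulator; the function name and slot limit are invented):

```python
import random

def p_persistent_send(medium_idle, p, max_slots=1000):
    """Toy model of the p-persistent rule: sense the medium, and when it
    is idle, transmit with probability p or defer one slot with 1 - p."""
    for slot in range(max_slots):
        if not medium_idle():
            continue                 # keep sensing until the channel is idle
        if random.random() < p:
            return slot              # transmit in this slot (probability p)
        # with probability 1 - p: defer one slot and try again
    return None                      # never transmitted within the slot budget
```

Setting p = 1 recovers the 1-persistent behavior: transmit in the first idle slot.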

September 11, 2013

CIDR (Classless Interdomain Routing)

Classless Interdomain Routing (CIDR) was introduced to improve both address space utilization and routing scalability in the Internet.
It was needed because of the rapid growth of the Internet and growth of the IP routing tables held in the Internet routers.

CIDR moves away from the traditional IP classes (Class A, Class B, Class C, and so on). In CIDR, an IP network is represented by a prefix, which is an IP address and an indication of the length of the mask.
Length means the number of left-most contiguous mask bits that are set to one. So network 172.16.0.0/255.255.0.0 can be represented as 172.16.0.0/16. CIDR also depicts a more hierarchical Internet architecture, where each domain takes its IP addresses from a higher level. This allows the summarization of the domains to be done at the higher level.

For example, if an ISP owns network 172.16.0.0/16, then the ISP can offer 172.16.1.0/24, 172.16.2.0/24, and so on to customers. Yet, when advertising to other providers, the ISP only needs to advertise 172.16.0.0/16.

Sample Config

Routers A and B are connected via serial interface.

Router A

  hostname routera
  !
  ip routing
  !
  int e 0
   ip address 172.16.50.1 255.255.255.0
  ! (subnet 50)
  int e 1
   ip address 172.16.55.1 255.255.255.0
  ! (subnet 55)
  int t 0
   ip address 172.16.60.1 255.255.255.0
  ! (subnet 60)
  int s 0
   ip address 172.16.65.1 255.255.255.0
  ! (subnet 65; s 0 connects to router B)
  !
  router rip
   network 172.16.0.0

Router B

  hostname routerb
  !
  ip routing
  !
  int e 0
   ip address 192.1.10.200 255.255.255.240
  ! (subnet 192)
  int e 1
   ip address 192.1.10.66 255.255.255.240
  ! (subnet 64)
  int s 0
   ip address 172.16.65.2 255.255.255.0
  ! (same subnet as router A's s 0; s 0 connects to router A)
  !
  router rip
   network 192.1.10.0
   network 172.16.0.0

Host/Subnet Quantities Table:

Class B                   Effective  Effective
# bits        Mask         Subnets     Hosts
-------  ---------------  ---------  ---------
  1      255.255.128.0           2      32766
  2      255.255.192.0           4      16382
  3      255.255.224.0           8       8190
  4      255.255.240.0          16       4094
  5      255.255.248.0          32       2046
  6      255.255.252.0          64       1022
  7      255.255.254.0         128        510
  8      255.255.255.0         256        254
  9      255.255.255.128       512        126
  10     255.255.255.192      1024         62
  11     255.255.255.224      2048         30
  12     255.255.255.240      4096         14
  13     255.255.255.248      8192          6
  14     255.255.255.252     16384          2


Class C                   Effective  Effective
# bits        Mask         Subnets     Hosts
-------  ---------------  ---------  ---------
  1      255.255.255.128      2        126
  2      255.255.255.192      4         62
  3      255.255.255.224      8         30
  4      255.255.255.240     16         14
  5      255.255.255.248     32          6
  6      255.255.255.252     64          2

 
*Subnet all zeroes and all ones included. These
 might not be supported on some legacy systems.
*Host all zeroes and all ones excluded.
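The table entries follow directly from the bit counts: with n subnet bits there are 2^n subnets (all-zeros and all-ones subnets included), and the remaining h host bits give 2^h - 2 usable hosts. A short Python sketch (not from the original text; the helper name is an assumption) that reproduces the rows:

```python
def subnet_table(host_bits_total, subnet_bits):
    """Effective subnets (zeros/ones subnets included) and usable
    hosts per subnet, given the total host bits of the class."""
    subnets = 2 ** subnet_bits
    hosts = 2 ** (host_bits_total - subnet_bits) - 2  # exclude all-0s/all-1s host
    return subnets, hosts

# Class B leaves 16 host bits; Class C leaves 8.
print(subnet_table(16, 3))   # Class B, 3 subnet bits -> 8 subnets, 8190 hosts
print(subnet_table(8, 4))    # Class C, 4 subnet bits -> 16 subnets, 14 hosts
```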

Network Masks

Network Masks:
A network mask helps you know which portion of the address identifies the network and which portion of the address identifies the node.
Class A, B, and C networks have default masks, also known as natural masks, as shown here:

Class A: 255.0.0.0
Class B: 255.255.0.0
Class C: 255.255.255.0

An IP address on a Class A network that has not been subnetted would have an address/mask pair similar to 8.20.15.1 255.0.0.0.
To see how the mask helps you identify the network and node parts of the address, convert the address and mask to binary numbers.

8.20.15.1 = 00001000.00010100.00001111.00000001
255.0.0.0 = 11111111.00000000.00000000.00000000


Once you have the address and the mask represented in binary, identifying the network and host IDs is easier. Any address bits that have corresponding mask bits set to 1 represent the network ID.
Any address bits that have corresponding mask bits set to 0 represent the node ID.


8.20.15.1 = 00001000.00010100.00001111.00000001
255.0.0.0 = 11111111.00000000.00000000.00000000
            -----------------------------------
             net id |      host id            

netid =  00001000 = 8
hostid = 00010100.00001111.00000001 = 20.15.1
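The same bitwise separation can be sketched with Python's standard `ipaddress` module; this is an illustration, and the host ID is printed with a leading zero octet:

```python
import ipaddress

addr = int(ipaddress.ip_address("8.20.15.1"))
mask = int(ipaddress.ip_address("255.0.0.0"))

net_id = addr & mask                   # address bits under mask bits set to 1
host_id = addr & (~mask & 0xFFFFFFFF)  # address bits under mask bits set to 0

print(ipaddress.ip_address(net_id))    # 8.0.0.0   (network ID)
print(ipaddress.ip_address(host_id))   # 0.20.15.1 (host ID = 20.15.1)
```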


Understanding Subnetting:

Subnetting allows you to create multiple logical networks within a single Class A, B, or C network. If you do not subnet, you can only use one network from your Class A, B, or C network, which is unrealistic for most organizations.

Each data link on a network must have a unique network ID, with every node on that link being a member of the same network.
If you break a major network (Class A, B, or C) into smaller subnetworks, you can create a network of interconnecting subnetworks. Each data link on this network would then have a unique network/subnetwork ID.

Any device, or gateway, connecting n networks/subnetworks has n distinct IP addresses, one for each network / subnetwork that it interconnects.

In order to subnet a network, extend the natural mask using some of the bits from the host ID portion of the address to create a subnetwork ID. 
For example, given a Class C network of 204.17.5.0 which has a natural mask of 255.255.255.0, you can create subnets in this manner:

204.17.5.0  =   11001100.00010001.00000101.00000000
255.255.255.224 = 11111111.11111111.11111111.11100000
                  --------------------------|sub|----


By extending the mask to be 255.255.255.224, you have taken three bits (indicated by "sub") from the original host portion of the address and used them to make subnets.

With these three bits, it is possible to create eight subnets. With the remaining five host ID bits, each subnet can have up to 32 host addresses, 30 of which can actually be assigned to a device, since host IDs of all zeros or all ones are not allowed (it is very important to remember this).
So, with this in mind, these subnets have been created.
  
204.17.5.0 255.255.255.224     host address range 1 to 30
204.17.5.32 255.255.255.224    host address range 33 to 62
204.17.5.64 255.255.255.224    host address range 65 to 94
204.17.5.96 255.255.255.224    host address range 97 to 126
204.17.5.128 255.255.255.224   host address range 129 to 158
204.17.5.160 255.255.255.224   host address range 161 to 190
204.17.5.192 255.255.255.224   host address range 193 to 222
204.17.5.224 255.255.255.224   host address range 225 to 254
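These eight subnets and their usable host ranges can be enumerated with the standard `ipaddress` module; this sketch is illustrative, not part of the original text:

```python
import ipaddress

network = ipaddress.ip_network("204.17.5.0/24")

# Borrow 3 host bits: prefix length 24 -> 27, i.e. mask 255.255.255.224.
for subnet in network.subnets(prefixlen_diff=3):
    hosts = list(subnet.hosts())  # usable addresses; all-0s/all-1s excluded
    print(subnet, "->", hosts[0], "to", hosts[-1])
```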



Examples:

Sample Exercise 1

Now that you have an understanding of subnetting, put this knowledge to use. In this example, you are given two address / mask combinations, written with the prefix/length notation, which have been assigned to two devices. Your task is to determine if these devices are on the same subnet or different subnets. You can do this by using the address and mask of each device to determine to which subnet each address belongs.

DeviceA: 172.16.17.30/20
DeviceB: 172.16.28.15/20


Determining the Subnet for DeviceA:

172.16.17.30  -   10101100.00010000.00010001.00011110
255.255.240.0 -   11111111.11111111.11110000.00000000
                  -----------------| sub|------------
subnet      =  10101100.00010000.00010000.00000000 = 172.16.16.0

Looking at the address bits that have a corresponding mask bit set to one, and setting all the other address bits to zero (this is equivalent to performing a logical "AND" between the mask and address), shows you to which subnet this address belongs. In this case, DeviceA belongs to subnet 172.16.16.0.

Determining the Subnet for DeviceB:

172.16.28.15  -   10101100.00010000.00011100.00001111
255.255.240.0 -   11111111.11111111.11110000.00000000
                  -----------------| sub|------------
subnet =          10101100.00010000.00010000.00000000 = 172.16.16.0

From these determinations, DeviceA and DeviceB have addresses that are part of the same subnet.
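The same comparison can be done programmatically by applying the /20 mask to both addresses; this is a sketch using the standard `ipaddress` module:

```python
import ipaddress

device_a = ipaddress.ip_interface("172.16.17.30/20")
device_b = ipaddress.ip_interface("172.16.28.15/20")

# .network performs the logical AND of address and mask described above.
print(device_a.network)                      # 172.16.16.0/20
print(device_b.network)                      # 172.16.16.0/20
print(device_a.network == device_b.network)  # True -> same subnet
```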

Understanding IP Addresses

Understanding IP Addresses:

An IP address is an address used in order to uniquely identify a device on an IP network. 
The address is made up of 32 binary bits, which can be divided into a network portion and a host portion with the help of a subnet mask.
The 32 binary bits are broken into four octets (1 octet = 8 bits). Each octet is converted to decimal and separated by a period (dot).

For this reason, an IP address is said to be expressed in dotted decimal format (for example, 172.16.81.100).

The value in each octet ranges from 0 to 255 decimal, or 00000000 - 11111111 binary.

Here is how binary octets convert to decimal: The rightmost bit, or least significant bit, of an octet holds a value of 2^0.
The bit just to the left of that holds a value of 2^1. This continues until the leftmost bit, or most significant bit, which holds a value of 2^7. So if all binary bits are ones, the decimal equivalent is 255, as shown here:

    1  1  1  1 1 1 1 1
  128 64 32 16 8 4 2 1 (128+64+32+16+8+4+2+1=255)

Here is a sample octet conversion when not all of the bits are set to 1.
  0  1 0 0 0 0 0 1
  0 64 0 0 0 0 0 1 (0+64+0+0+0+0+0+1=65)
And this sample shows an IP address represented in both binary and decimal.

        10.       1.      23.      19 (decimal)
  00001010.00000001.00010111.00010011 (binary)
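The octet-by-octet conversion can be sketched in a few lines of Python (the helper name is illustrative):

```python
def ip_to_binary(dotted):
    """Render a dotted-decimal IP address as its four binary octets."""
    return ".".join(format(int(octet), "08b") for octet in dotted.split("."))

print(ip_to_binary("10.1.23.19"))  # 00001010.00000001.00010111.00010011
print(int("01000001", 2))          # 65, matching the sample octet above
```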


These octets are broken down to provide an addressing scheme that can accommodate large and small networks. There are five different classes of networks, A to E.



Given an IP address, its class can be determined from the three high-order bits. Figure 1 shows the significance of the three high-order bits and the range of addresses that fall into each class. For informational purposes, Class D and Class E addresses are also shown.
FIGURE 1

  • In a Class A address, the first octet is the network portion, so the Class A example in Figure 1 has a major network address range of 1.0.0.0 - 127.255.255.255.
  • Octets 2, 3, and 4 (the next 24 bits) are for the network manager to divide into subnets and hosts as needed.
  • Class A addresses are used for networks that have more than 65,536 hosts (actually, up to 16,777,214 hosts!).


  • In a Class B address, the first two octets are the network portion, so the Class B example in Figure 1 has a major network address range of 128.0.0.0 - 191.255.255.255.
  • Octets 3 and 4 (16 bits) are for local subnets and hosts.
  • Class B addresses are used for networks that have between 256 and 65,534 hosts.

  • In a Class C address, the first three octets are the network portion, so the Class C example in Figure 1 has a major network address range of 192.0.0.0 - 223.255.255.255.
  • Octet 4 (8 bits) is for local subnets and hosts - perfect for networks with up to 254 hosts.
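The class test on the high-order bits reduces to first-octet ranges; a minimal sketch (the function name is an assumption, and the boundaries follow the ranges listed above):

```python
def address_class(dotted):
    """Classify an IP address by its high-order bits (first octet)."""
    first = int(dotted.split(".")[0])
    if first < 128:
        return "A"   # high-order bit  0
    if first < 192:
        return "B"   # high-order bits 10
    if first < 224:
        return "C"   # high-order bits 110
    if first < 240:
        return "D"   # high-order bits 1110 (multicast)
    return "E"       # high-order bits 1111 (experimental)

print(address_class("10.1.23.19"))    # A
print(address_class("172.16.17.30"))  # B
print(address_class("204.17.5.10"))   # C
```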

Introduction to OSPF Part 2