Title: CS 2200 Lecture 25: TCP/IP
1. CS 2200 Lecture 25: TCP/IP
- (Lectures based on the work of Jay Brockman,
Sharon Hu, Randy Katz, Peter Kogge, Bill Leahy,
Ken MacKenzie, Richard Murphy, and Michael
Niemier)
2. A disk, pictorially
- When accessing data, we read or write a sector
- All sectors are the same size; outer tracks are just less dense
- To read or write, a moveable arm with a read/write head moves over each surface
- Cylinder: all tracks under the arms at a given point, on all surfaces
- To read or write:
  - The disk controller moves the arm over the proper track: a seek
  - The time to move is called the seek time
  - When the sector is found, data is transferred
3. The speed of light? No.
- The time required for a requested sector to rotate under the read/write head is called the rotational latency or rotational delay
- This involves mechanical components on the order of milliseconds
  - i.e. we're no longer moving at the speed of light like in our CPU!
- The time required to actually read or write the data is called the transfer time
  - (a function of block size, rotation speed, recording density on a track, and speed of the electronics connecting the disk to the computer)
4. Disk odds 'n' ends
- Often the transfer time is a very small portion of a full access
- It's possible to use techniques (discussed with caches) to help reduce disk overhead. Any thoughts?
- To help reduce complexity, there's usually additional HW called a disk controller
  - The disk controller helps manage disk accesses
  - but also adds more overhead: controller time
- (Can also have a queuing delay)
  - (Time spent waiting for a disk to become free if it's already in use for another access)
5. Example: average disk access time
- What is the average time to read or write a 512-byte sector for a typical disk?
  - The average seek time is given to be 9 ms
  - The transfer rate is 4 MB per second
  - The disk rotates at 7200 RPM
  - The controller overhead is 1 ms
  - The disk is currently idle before any requests are made (so there is no queuing delay)
- Average disk access time = average seek time + average rotational delay + transfer time + controller overhead
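Plugging the numbers above into the formula (a sketch; it assumes 4 MB/s means 4×10⁶ bytes/s, and that the average rotational delay is half a revolution):

```python
# Average disk access time = seek + rotational delay + transfer + controller
seek_ms = 9.0
rpm = 7200
controller_ms = 1.0
sector_bytes = 512
transfer_rate = 4e6  # assumed: 4 MB/s = 4 * 10^6 bytes/s

# Average rotational delay: half a revolution on average
rotation_ms = 0.5 * (60_000 / rpm)            # 60,000 ms per minute
transfer_ms = sector_bytes / transfer_rate * 1000

total_ms = seek_ms + rotation_ms + transfer_ms + controller_ms
print(round(rotation_ms, 3))  # 4.167
print(round(total_ms, 3))     # 14.295
```

Note that the mechanical parts (seek + rotation) dominate: the actual transfer of 512 bytes takes only about 0.13 ms.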
6. (Recall) Interrupts/Exceptions/Traps: protection
[Figure: PC (memory address) plotted against time — a loop running in user space, a system call entering kernel space, and an interrupt entering I/O (kernel) space]
7. (Recall) Process States
[Figure: process state diagram with states New, Ready, Running, Waiting, Terminated]
- A longer example using these states later on in lecture
8. DMA
- Processor tells the controller to make a DMA transfer. Assume disk to memory. (Includes N, the number of bytes)
9. Arbitration
- DMA implies multiple owners of the bus
  - must decide who owns the bus from cycle to cycle
- Arbitration schemes:
  - Daisy chain
  - Centralized parallel arbitration
  - Distributed arbitration by self-selection
  - Distributed arbitration by collision detection
- (see board for detailed examples and pictures)
10. Physical Intuition Check
- If we read surface X, track Y, sector Z, what's the easiest next thing to read?
  - Remember, the disk is actually rotating
- If we want to read all the sectors on the whole disk from 0,0,0 to the end, what order is fastest to read them?
11. Disk as one big array
- Interleave sectors, then surfaces, then tracks
- To store files, we need to allocate space out of this big array.
- Would like to have fast:
  - sequential access
  - random access
  - allocation/deallocation time
- Also:
  - nice to be able to grow files
  - nice not to waste space
- Data structures?
[Figure: the disk drawn as one big array, indexed from 0 up to 1.6GB]
12. Allocation Strategies
- 1. Fixed contiguous regions
  - Regions (a.k.a. possible choices):
    - One track
    - One cylinder
    - Contiguous range of cylinders
  - Characteristics:
    - Data structures (on the disk at a well-known place):
      - Free list: bit map of free cylinders
      - Directory: mapping table (file name to cylinder address)
    - SA and RA quick; allocation is quick as well
    - Cannot grow file size above allocation :-(
13. Allocation Strategies
- 2. Contiguous regions with overflow areas
  - Secondary area for growth (also contiguous)
  - Characteristics:
    - RA requires some computation
    - SA as fast as 1.
14. Allocation Strategies
- 3. Linked allocation
  - File divided into (sector-sized) blocks
  - Each block points to the next one: the disk becomes a giant linked list!
  - Characteristics:
    - Data structures (on the disk at a well-known place):
      - free list (a linked list); mapping table (file name to starting disk block)
    - SA slow due to seek time; RA requires pointer chasing
    - Growth is easy; allocation is expensive
      - (We have to find free blocks)
15. Allocation Strategies
- 4. File Allocation Table (FAT): MS-DOS
  - At the beginning of each partition, a table that contains one entry for each disk block in that partition (0 = free)
  - A file occupies a number of entries in the FAT
  - Each entry points to the next entry in the FAT for SA (-1 = the last block)
  - Characteristics:
    - The FAT is the data structure: efficient for allocation, less chance of screwing up as in (3)
    - SA requires a FAT lookup (can be alleviated by caching the FAT), i.e. think multiple disk accesses
    - RA requires some computation
    - Limit on the size of the partition (this is BAD!); growth is easy
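The FAT scheme above can be sketched as an array with one entry per block; sequential access is just chasing entries. (The block numbers below are made up for illustration.)

```python
# A toy FAT, as described on this slide: one table entry per disk block.
# 0 = free, -1 = last block of a file; any other value is the index of
# the file's next block.
FREE, EOF = 0, -1
fat = [FREE] * 16
# A hypothetical file starting at block 3, occupying blocks 3 -> 7 -> 2:
fat[3], fat[7], fat[2] = 7, 2, EOF

def file_blocks(start):
    """Follow the FAT chain from a file's starting block (sequential access)."""
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = fat[b]
    return blocks

print(file_blocks(3))  # [3, 7, 2]
```

Note the downside the slide mentions: reaching block k of a file requires k lookups, which means multiple disk accesses unless the FAT is cached in memory.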
16. Allocation Strategies
- 5. Indexed Allocation
  - File represented by an index block (on the disk), a table of the disk blocks for the file
  - Characteristics:
    - Data structures:
      - Mapping table (file name to index block)
      - Free list (can be a bit map of available disk blocks)
    - Limit on the size of the file
    - Wasted space
- See board
17. Allocation Strategies
- 6. Multilevel Indexed Allocation
  - File represented by an index block (on the disk), an indirection table of disk blocks for that file
  - One-level indirection
  - Two-level indirection
  - Triple indirection
- See next slide/board for picture
18. Multilevel Indexed Allocation
19. Allocation Strategies
- 7. Hybrid (BSD Unix)
  - Combination of (5) and (6)
  - Each file represented by an i-node (index node)
  - Index to first n disk blocks, plus
  - A single indirect index, a double indirect index, a triple indirect index
  - Characteristics:
    - SA requires an i-node reference
      - overhead reduced by an in-memory cache
    - RA requires some computation, but is much quicker than (3)
    - Growth is easy
    - Allocation overhead same as (3)
20. Unix Inode
- (see board talk) Q: why is this a good strategy?
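One way to see why the hybrid i-node layout is a good strategy is a worked size calculation. The parameters below are assumptions for illustration (they are not from the slide): 4 KB blocks, 4-byte block pointers, and 12 direct pointers plus single/double/triple indirect.

```python
# Maximum file size under a hybrid (direct + 1/2/3-level indirect) i-node.
# Assumed parameters: 4 KB blocks, 4-byte pointers, 12 direct pointers.
block = 4096
ptrs = block // 4          # 1024 pointers fit in one indirect block
direct = 12

max_blocks = direct + ptrs + ptrs**2 + ptrs**3
max_bytes = max_blocks * block
print(max_blocks)          # 1074791436 blocks
print(max_bytes // 2**40)  # about 4 TiB
```

The payoff: small files (here, up to 12 × 4 KB = 48 KB) need no indirection at all, while the triple-indirect level still allows multi-terabyte files.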
21. Disk Scheduling
- Algorithm objective:
  - Minimize seek time
- Assumptions (for this example):
  - Single disk module
  - Requests for equal-sized blocks
  - Random distribution of locations on disk
  - One movable arm
  - Seek time proportional to tracks crossed
  - No delay due to controller
  - Read/write times are equal
22. FCFS
23. SSTF
24. SCAN
25. C-SCAN
26. C-LOOK
27. Policies
- N-Step SCAN
  - Two queues:
    - active (N requests or fewer)
    - latent
  - Service the active queue
  - When no more in active, transfer N requests from latent to active
  - Leads to lower variance compared to SCAN
  - Worse than SCAN for mean waiting time
28. Algorithm Selection
- SSTF (Shortest Seek Time First) is common. Better than FCFS.
- If load is heavy, SCAN and C-SCAN are best because they are less likely to have starvation problems
- We could calculate an optimum for any series of requests, but it's costly
- Depends on the number and type of requests
  - e.g. imagine we only have one request pending
- Also depends on file layout
- Recommend a modular scheduling algorithm that can be changed.
29. Typical Question
- Suppose a disk drive has 5000 cylinders, numbered from 0 to 4999. The head is currently at cylinder 143, and the previous request was at cylinder 125. The queue (in FIFO order) is:
  - 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130
- Starting from the current position, what is the total distance (in cylinders) the disk head moves to satisfy all requests?
- Using FCFS, SSTF, SCAN, LOOK, C-SCAN
30. FCFS
- 143 -> 86: 57
- 86 -> 1470: 1384
- 1470 -> 913: 557
- 913 -> 1774: 861
- 1774 -> 948: 826
- 948 -> 1509: 561
- 1509 -> 1022: 487
- 1022 -> 1750: 728
- 1750 -> 130: 1620
- Total: 7081 cylinders <-- Answer
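The arithmetic above can be checked mechanically, and the same queue can be run under SSTF for comparison:

```python
# Total head movement for the queue in the Typical Question,
# starting at cylinder 143.
queue = [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]
start = 143

def fcfs(start, reqs):
    """Serve requests in arrival order, summing cylinders crossed."""
    total, pos = 0, start
    for r in reqs:
        total += abs(pos - r)
        pos = r
    return total

def sstf(start, reqs):
    """Shortest Seek Time First: always serve the closest pending request."""
    pending, total, pos = list(reqs), 0, start
    while pending:
        nxt = min(pending, key=lambda r: abs(pos - r))
        total += abs(pos - nxt)
        pos = nxt
        pending.remove(nxt)
    return total

print(fcfs(start, queue))  # 7081, matching the slide
print(sstf(start, queue))  # 1745
```

SSTF cuts the total movement by roughly a factor of four here, which is why it is common despite its starvation risk.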
31. Flynn's taxonomy
- Single instruction stream, single data stream (SISD)
  - Essentially, this is a uniprocessor
- Single instruction stream, multiple data streams (SIMD)
  - Same instruction executed by multiple processors with different data streams
  - Each processor has its own data memory, but there is 1 instruction memory and a control processor to fetch/dispatch instructions
32. Flynn's Taxonomy
- Multiple instruction streams, single data stream (MISD)
  - Can anyone think of a good application for this machine?
- Multiple instruction streams, multiple data streams (MIMD)
  - Each processor fetches its own instructions and operates on its own data
33. Sharing Data (another view)
- Uniform Memory Access (UMA)
- Symmetric Multiprocessor (SMP)
[Figure: processors sharing a single memory]
34. Sharing Data (another view)
- Non-Uniform Memory Access (NUMA)
35. Communicating between nodes
- One way to communicate between processors treats the physically separate memories as 1 big memory
  - (i.e. 1 big logically shared address space)
  - Any processor can make a memory reference to any memory location, even if it's at a different node
  - These machines are called distributed shared memory (DSM)
  - The same physical address on two processors refers to the same location in memory
- Another method involves private address spaces
  - The memories are logically disjoint and cannot be addressed by a remote processor
  - The same physical address on two processors refers to two different locations in memory
  - These are multicomputers
36. Multicomputer
[Figure: two nodes (Proc + Cache A with its memory, Proc + Cache B with its memory) joined by an interconnect]
37. But both can have a cache coherency problem
[Figure: Cache A reads X (getting 0 from memory), then writes X = 1; Cache B reads X and still sees the stale value X = 0. Oops!]
38. Cache coherence protocols
- Directory based
  - Whether or not some physical memory location is shared is recorded in 1 central location
  - Called the directory
- Snooping
  - Every cache w/entries from centralized main memory also has a particular block's sharing status
  - No centralized state is kept
  - Caches are connected to a shared memory bus
  - If there is bus traffic, caches check (or "snoop") to see if they have the block being transferred on the bus
  - Main focus of upcoming discussion
39. Side note: Snoopy Cache
- CPU references check cache tags (as usual)
- Cache misses are filled from memory (as usual)
- Other reads/writes on the bus must check tags too, and possibly invalidate
[Figure: CPU above a cache (State | Tag | Data) attached to the bus]
40. Write invalidate example
- Assumes neither cache had value/location X in it at first
- When the 2nd miss by B occurs, CPU A responds with the value, canceling the response from memory.
- Update B's cache; the memory contents of X are updated
- Typical and simple
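A minimal sketch of the write-invalidate idea (not any specific protocol): each cache holds one location X, and a write by one CPU invalidates every other copy over the "bus". The classes and write-through behavior here are simplifications for illustration.

```python
# Toy write-invalidate snooping: two caches sharing one location "X".
class Cache:
    def __init__(self, bus):
        self.valid, self.value, self.bus = False, None, bus
        bus.append(self)

    def read(self, memory):
        if not self.valid:                 # miss: fill from memory
            self.value, self.valid = memory["X"], True
        return self.value

    def write(self, memory, v):
        for other in self.bus:             # snoop: invalidate other copies
            if other is not self:
                other.valid = False
        self.value, self.valid = v, True
        memory["X"] = v                    # write-through, for simplicity

bus, memory = [], {"X": 0}
a, b = Cache(bus), Cache(bus)
print(a.read(memory))   # 0
a.write(memory, 1)      # invalidates any other cached copy of X
print(b.read(memory))   # 1 -- B misses and fetches the new value, no stale 0
```

Because B's copy was invalidated, its next read misses and picks up the new value, avoiding the stale-read "Oops!" from the earlier slide.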
41. Maintaining the cache coherency requirement
- Alternative to write invalidate: update all cached copies of a data item when the item is written
  - Called a write update/broadcast protocol
- One problem: bandwidth issues could quickly get out of hand
- Solution: track whether or not a word in the cache is shared (i.e. contained in another cache)
  - If the word is not shared, there's no need to broadcast on a write
42. Write update example
- (Shaded parts are different than before)
- Assumes neither cache had value/location X in it at first
- CPU and memory contents show the value after processor and bus activity have both completed
- When CPU A broadcasts the write, the cache in CPU B and memory location X are updated
43. Advantages and disadvantages
- Shared memory: good
  - Compatibility with the well-understood mechanisms in use in centralized multiprocessors, which used shared memory
  - It's easy to program
    - Especially if communication patterns are complex
    - Easier just to do a load/store operation and not worry about where the data might be (i.e. on another node, with DSM)
    - But you also take a big performance hit
  - Smaller messages are more efficient w/shared memory
    - Might communicate via memory mapping instead of going through the OS
    - (like we'd have to do for a remote procedure call)
44. Advantages and disadvantages
- Shared memory: good (continued)
  - Caching can be controlled by the hardware
    - Reduces the frequency of remote communication by supporting automatic caching of all data
- Message passing: good
  - The HW is lots simpler
    - Especially by comparison with a scalable shared-memory implementation that supports coherent caching of data
  - Communication is explicit
    - Forces programmers/compiler writers to think about it and make it efficient
    - This could be a bad thing too, FYI
45. Specifics of snooping
- Normal cache tags can be used
- The existing valid bit makes it easy to invalidate
- What about read misses?
  - Easy to handle too: rely on the snooping capability
- What about writes?
  - We'd like to know if any other copies of the block are cached
    - If they're NOT, we can save bus bandwidth
  - Can add an extra bit of state to solve this problem: a "shared" state bit
    - Tells us if the block is shared, i.e. if we must generate an invalidate
  - When a write to a block in the shared state happens, the cache generates an invalidation and marks the block as private
    - No other invalidations are sent by that processor for that block
46. Specifics of snooping
- When an invalidation is sent, the state of the owner's (the processor with the sole copy of the cache block) cache block is changed from shared to unshared (or exclusive)
- If another processor later requests the cache block, the state must be made shared again
  - The snooping cache also sees any misses
  - It knows when an exclusive cache block has been requested by another processor and the state should be made shared
47. Specifics of snooping
- More overhead:
  - Every bus transaction would have to check the cache-address tags
  - Could easily overwhelm normal CPU cache accesses
- Solutions:
  - Duplicate the tags: snooping and CPU accesses can go on in parallel
  - Employ a multi-level cache with inclusion
    - Everything in the L1 cache is also in L2; snooping checks L2, the CPU checks L1
48. Threads
- Recall from board: code, data, and files are shared; no process context switching
- Can be context switched more easily
  - Registers and PC
  - Not memory management
- Can run on different processors concurrently in an SMP
- Share the CPU in a uniprocessor
- May (will) require concurrency-control programming like mutex locks.
- This is why we talked about critical sections, etc. first
49. Process vs. Thread
[Figure: processes P1 and P2 in user space, each with a PCB in the kernel (kernel code and data)]
- Two single-threaded applications on one machine
50. Process vs. Thread
[Figure: the same picture, but P1 now contains multiple threads]
- P1 is multithreaded; P2 is single-threaded
- Computational state (PC, regs, ...) for each thread
- How is this different from process state?
51. Things to know
- 1. The reason threads are around?
- 2. Benefits of increased concurrency?
- 3. Why do we need software-controlled "locks" (mutexes) of shared data?
- 4. How can we avoid potential deadlocks/race conditions?
- 5. What is meant by producer/consumer thread synchronization/communication using pthreads?
- 6. Why use a "while" loop around a pthread_cond_wait() call?
- 7. Why should we minimize lock scope (minimize the extent of code within a lock/unlock block)?
- 8. Do you have any control over thread scheduling?
52. See handwritten notes for more on mutex, condition variables, and threads
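Question 6 above (the "while" loop around a condition wait) can be illustrated with Python's `threading.Condition`, which mirrors the pthread mutex/condition-variable pattern: a waiter may wake spuriously or find another thread already consumed the state change, so it must re-check the predicate before proceeding.

```python
# Producer/consumer with a condition variable; the consumer re-checks
# its predicate in a while loop, as with pthread_cond_wait.
import threading

cond = threading.Condition()
items = []

def consumer(out):
    with cond:                      # acquire the lock (mutex)
        while not items:            # re-check the predicate after waking
            cond.wait()             # releases the lock while blocked
        out.append(items.pop(0))

def producer(value):
    with cond:
        items.append(value)
        cond.notify()               # wake one waiter

result = []
t = threading.Thread(target=consumer, args=(result,))
t.start()
producer(42)
t.join()
print(result)  # [42]
```

If the wait were guarded by an `if` instead of a `while`, a spurious or raced wakeup could let the consumer pop from an empty list.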
53. 3 kinds of networks
- Massively Parallel Processor (MPP) network
  - Typically connects 1000s of nodes over a short distance
  - Often banks of computers
  - Used for high-performance/scientific computing
- Local Area Network (LAN)
  - Connects 100s of computers, usually over a few kms
  - Most traffic is 1-to-1 (between client and server)
    - While MPP traffic is over all nodes
  - Used to connect workstations together (like in Fitz)
- Wide Area Network (WAN)
  - Connects computers distributed throughout the world
  - Used by the telecommunications industry
54. Performance parameters (see board)
- Bandwidth
  - Maximum rate at which the interconnection network can propagate data once a message is in the network
  - Usually headers and overhead bits are included in the calculation
  - Units are usually megabits/second, not megabytes
  - Sometimes see "throughput": the network bandwidth delivered to an application
- Time of flight
  - Time for the 1st bit of a message to arrive at the receiver
  - Includes delays of repeaters/switches: length / (m × speed of light), where m is determined by the transmission material
- Transmission time
  - Time required for the message to pass through the network
  - Size of the message divided by the bandwidth
55. Performance parameters (see board)
- Transport latency
  - Time of flight + transmission time
  - Time the message spends in the interconnection network
  - But not the overhead of pulling it out of or pushing it into the network
- Sender overhead
  - Time for the processor to inject a message into the interconnection network, including both HW and SW components
- Receiver overhead
  - Time for the processor to pull a message out of the interconnection network, including both HW and SW components
- So, the total latency of a message is:
  - Total latency = Sender overhead + Time of flight + (Message size / Bandwidth) + Receiver overhead
56. Some more odds and ends
- Note from the example (with regard to longer distance):
  - Time of flight dominates the total latency
  - Repeater delays would factor significantly into the equation
  - Message transmission failure rates rise significantly
- It's possible to send other messages without responses from previous ones
  - If you have control of the network
  - Can help increase network use by overlapping overheads and transport latencies
- Can simplify the total latency equation to:
  - Total latency = Overhead + (Message size / Bandwidth)
- Leads to:
  - Effective bandwidth = Message size / Total latency
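Plugging made-up numbers into the simplified equations above shows why overhead matters so much for small messages (the 1 ms overhead and 10 Mbit/s link are assumptions for illustration):

```python
# Effective bandwidth = message size / total latency, where
# total latency = overhead + message size / bandwidth.
overhead_s = 1e-3             # assumed: 1 ms combined overhead
bandwidth_bps = 10e6          # assumed: 10 Mbit/s link

def effective_bandwidth(msg_bits):
    total_latency = overhead_s + msg_bits / bandwidth_bps
    return msg_bits / total_latency

# Small messages are dominated by overhead; large ones approach the link rate.
print(round(effective_bandwidth(1_000) / 1e6, 3))      # 0.909 Mbit/s
print(round(effective_bandwidth(1_000_000) / 1e6, 3))  # 9.901 Mbit/s
```

A 1000-bit message achieves under 10% of the link's bandwidth, while a 1,000,000-bit message gets about 99% of it.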
57. Switched vs. shared
[Figure: nodes attached to a shared medium (Ethernet) vs. nodes attached through a switch (switched media, e.g. ATM)]
- (A.K.A. data switching interchanges, multistage interconnection networks, interface message processors)
58. Switch topology
- "Switch topology" is really just a fancy term for describing how different nodes of a network can be connected together
- Many topologies have been proposed, researched, etc.; only a few are actually used
- MPP designers are usually the most creative
  - Have used regular topologies to simplify packaging and scalability
- LANs and WANs are more random
  - Often a function of what equipment is around, distances, etc.
- Two common switching organizations:
  - Crossbar
    - Allows any node to communicate with any other node with 1 pass through an interconnection
  - Omega
    - Uses less HW ((n/2)·log2(n) vs. n^2 switches) but has more contention
59. Connection-Based vs. Connectionless
- Telephone: an operator sets up the connection between the caller and the receiver
  - Once the connection is established, the conversation can continue for hours
- Share transmission lines over long distances by using switches to multiplex several conversations on the same lines
  - Time-division multiplexing: divide the bandwidth of a transmission line into a fixed number of slots, with each slot assigned to a conversation
- Problem: lines are busy based on the number of conversations, not the amount of information sent
- Advantage: reserved bandwidth
- (see board for ex.)
60. Routing Messages
- Shared media
  - Broadcast to everyone!
- Switched media needs real routing. Options:
  - Source-based routing: the message specifies the path to the destination (changes of direction)
  - Virtual circuit: a circuit is established from source to destination, and the message picks the circuit to follow
  - Destination-based routing: the message specifies the destination; the switch must pick the path
    - deterministic: always follow the same path
    - adaptive: pick different paths to avoid congestion or failures
    - randomized routing: pick between several good paths to balance network load
61. Store-and-Forward vs. Cut-Through
- Store-and-forward policy: each switch waits for the full packet to arrive before sending it to the next switch (good for WANs)
- Cut-through routing (or wormhole routing): the switch examines the header, decides where to send the message, and then starts forwarding it immediately
  - In wormhole routing, when the head of the message is blocked, the message stays strung out over the network, potentially blocking other messages (the switch needs only buffer the piece of the packet that is sent between switches).
  - Cut-through routing lets the tail continue when the head is blocked, accordioning the whole message into a single switch. (This requires a buffer large enough to hold the largest packet.)
- See board
62. Ethernet Evolution
- X_Base_Y
  - X stands for the available media bandwidth
  - "Base" stands for baseband signaling on the medium
  - Y stands for the maximum distance a station can be from the vampire tap (i.e. the length of the Attachment Unit Interface)
63. More detail
- In 100 Mbps Ethernet (known as Fast Ethernet), there are three types of physical wiring that can carry signals:
  - 100BASE-T4 (four pairs of telephone twisted-pair wire)
  - 100BASE-TX (two pairs of data-grade twisted-pair wire)
  - 100BASE-FX (a two-strand optical fiber cable)
- This designation is an Institute of Electrical and Electronics Engineers (IEEE) shorthand identifier.
  - The "100" in the media type designation refers to the transmission speed of 100 Mbps.
  - The "BASE" refers to baseband signaling
    - i.e. only Ethernet signals are carried on the medium
  - The "T4," "TX," and "FX" refer to the physical medium that carries the signal.
  - (Through repeaters, media segments of different physical types can be used in the same system.)
- Source: http://www.ggrego.com/glossary/glossary_num.htm
64. A short summary
- This designation is an Institute of Electrical and Electronics Engineers (IEEE) shorthand identifier (i.e. X BASE Y)
- The "10" in the media type designation refers to the transmission speed of 10 Mbps.
- The "BASE" refers to baseband signaling, which means that only Ethernet signals are carried on the medium.
- The "T" represents twisted pair; the "F" represents fiber-optic cable
- The "2", "5", and "36" refer to the coaxial cable segment length
  - (the 185-meter length has been rounded up to "2" for 200).
65. Broadband vs. Baseband
- A baseband network has a single channel that is used for communication between stations. Ethernet specifications which use BASE in the name refer to baseband networks.
  - BASE refers to baseband signaling: only Ethernet signals are carried on the medium
- A broadband network is much like cable television, where different services communicate across different frequencies on the same cable.
  - Broadband communications would allow an Ethernet network to share the same physical cable as voice or video services. 10BROAD36 is an example of broadband networking.
66. Ethernet
- The various Ethernet specifications include a maximum distance
- What do we do if we want to go further?
- Repeater:
  - Hardware device used to extend a LAN
  - Amplifies all signals on one segment of a LAN and transmits them to another
  - Passes on whatever it receives (GIGO)
  - Knows nothing of packets or addresses
- Any limit?
67. Repeaters
[Figure: LAN segments joined by repeaters R1, R2, R3]
68. Bridges
- We want to improve performance over that provided by a simple repeater
- Add functionality (i.e. more hardware)
- A bridge can detect if a frame is valid and then (and only then) pass it to the next segment
- A bridge does not forward interference or other problems
- Computers connected over a bridged LAN don't know that they are communicating over a bridge
69. Bridges
- A typical bridge consists of a conventional CPU, memory, and two NICs.
- Does more than just pass information from one segment to another
- A bridge can be constructed to:
  - Only pass a frame if valid and necessary
  - Learn what is connected to the network "on the fly"
70. Ethernet vs. Ethernet w/bridges
[Figure: a single Ethernet with many nodes — 1 packet at a time — vs. multiple Ethernet segments joined by bridges — multiple packets at a time]
71. Network Interface Card
- NIC
  - Sits on the host station
  - Allows a host to connect to a hub or a bridge
- A hub merely extends multiple segments into a single LAN; it does not help with performance, since only 1 message can transmit at a time
- If connected to a hub, the NIC has to use half-duplex communication (i.e. it can only send or receive at one time)
- If connected to a bridge, the NIC (if it is smart) can use either half- or full-duplex mode
- Bridges learn the Media Access Control (MAC) address and the speed of the NIC they are talking to.
72. Routers
- Routers:
  - Devices that connect LANs to WANs or WANs to WANs
  - Resolve incompatible addresses (generally slower than bridges)
  - Divide interconnection networks into smaller subnets, which simplifies manageability and security
- Work much like bridges
  - But pay attention to the upper network-layer protocols (OSI layer 3) rather than physical-layer (OSI layer 1) protocols.
    - (This will make sense later)
  - Decide whether to forward a packet by looking at the protocol-level addresses (for instance, TCP/IP addresses) rather than the MAC address.
    - (This will make sense later)
73. Why we need protocols/layers
- Enable sharing the hardware network links
- Overcome sources of unreliability in the network:
  - Lost packets
    - Temporary failure of an intervening routing node
  - Mangled packets
    - Reflections on the media, soft errors in communication hardware buffers, etc.
  - Out-of-order delivery
    - Packets of the same message routed via different intervening nodes, leading to different latencies
74. Recall
- A protocol is the set of rules used to describe all of the hardware and (mostly) software operations used to send messages from Processor A to Processor B
- Common practice is to attach headers/trailers to the actual payload, forming a packet or frame.
75. Protocol Family Concept
[Figure: a message passed down through the protocol layers, each layer wrapping it before handing it to the next]
76. Layering Advantages
- Layering allows functionally partitioning the responsibilities (similar to having procedures for modularity in writing programs)
- Allows easily integrating (plug-and-play) new modules at a particular layer without any changes to the other layers
  - See the board
- Rigidity is only at the level of the interfaces between the layers, not in the implementation of these interfaces
- By specifying the interfaces judiciously, inefficiencies can be avoided
77. ISO Model
- 7. Application: interact with user, e.g. mail, telnet, ftp
- 6. Presentation: char conversion, echoing, format differences, endian-ness
- 5. Session: process-to-process communication, e.g. Unix sockets
- 4. Transport: packetizing, sequence numbers, retransmission, e.g. TCP, UDP
- 3. Network: routing, routing tables, e.g. IP
- 2. Data Link: interface to physical media, error recovery, e.g. retransmit on collision in Ethernet
- 1. Physical: electrical and mechanical characteristics of physical media, e.g. Ethernet
78. ISO Model Examples
- 7. Application: user program
- 6. Presentation
- 5. Session: sockets open/close/read/write interface
- (layers 5-4: kernel software)
- 4. Transport: TCP, reliable infinite-length stream
- 3. Network: IP, unreliable datagrams anywhere in the world
- 2. Data Link: Ethernet, unreliable datagrams on the local segment
- (layers 2-1: hardware)
- 1. Physical: 10BaseT Ethernet spec, twisted pair w/RJ45s
79. Layering Summary
- Key to protocol families is that communication occurs logically at the same level of the protocol (called peer-to-peer),
  - but is implemented via services at the next lower level
- Encapsulation: carry higher-level information within a lower-level envelope
- Fragmentation: break a packet into multiple smaller packets and reassemble
- Danger: each level increases latency if implemented as a hierarchy (e.g., multiple checksums)
80. Layering Summary: TCP atop IP atop Ethernet
- Application sends a message
- TCP breaks it into 64KB segments, adds a 20B header
- IP adds a 20B header, sends to the network
- If Ethernet, broken into 1500B packets with headers and trailers (24B)
- All headers and trailers have a length field, destination, ...
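The per-layer numbers above can be turned into a rough count of how many Ethernet frames a message needs. This is a simplified sketch using only the sizes on the slide; real MTU/MSS accounting (e.g. an IP header repeated in every fragment) is more involved.

```python
# Rough per-layer accounting: TCP 64KB segments with a 20B header,
# a 20B IP header per segment, then 1500B Ethernet payloads.
import math

TCP_SEG, TCP_HDR = 64 * 1024, 20
IP_HDR, ETH_PAYLOAD = 20, 1500

def frames_for(message_bytes):
    frames = 0
    for _ in range(math.ceil(message_bytes / TCP_SEG)):
        seg = min(message_bytes, TCP_SEG)
        message_bytes -= seg
        ip_packet = seg + TCP_HDR + IP_HDR              # one IP packet per segment
        frames += math.ceil(ip_packet / ETH_PAYLOAD)    # split across Ethernet frames
    return frames

print(frames_for(100_000))  # 68 frames for a 100 KB message
```

Each of those frames also carries the 24B of Ethernet header/trailer overhead mentioned on the slide, on top of the TCP and IP headers.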
81. Techniques Protocols Use
- Sequencing for out-of-order delivery
  - Attach a sequence number to each packet; maintain a counter; if an arriving packet is the next in sequence, pass it on; if not, save it until the correct point
- Sequencing to eliminate duplicate packets
  - Use sequence numbers to detect and discard duplicates
- Retransmitting lost packets
- Avoiding replay caused by excessive delay
  - Protocols mark each session with a unique session ID
  - An incorrect session ID causes the packet to be discarded
- Flow control to prevent data overrun
  - Use a sliding window
- Mechanisms to avoid network congestion
  - Switches detect congestion; use packet loss to adjust the packet rate
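The first two techniques above (in-order delivery and duplicate elimination) can be sketched as a small receiver that buffers early arrivals and drops anything already delivered:

```python
# A receiver that passes packets up in order, buffering early arrivals
# and discarding duplicates, using per-packet sequence numbers.
def receiver():
    expected, buffered, delivered = 0, {}, []

    def on_packet(seq, data):
        nonlocal expected
        if seq < expected:            # already delivered: duplicate, discard
            return
        buffered[seq] = data          # save until it's this packet's turn
        while expected in buffered:   # deliver any now-contiguous run
            delivered.append(buffered.pop(expected))
            expected += 1

    return on_packet, delivered

on_packet, delivered = receiver()
for seq, data in [(0, "a"), (2, "c"), (1, "b"), (1, "b")]:  # out of order + dup
    on_packet(seq, data)
print(delivered)  # ['a', 'b', 'c']
```

Packet 2 arrives early and sits in the buffer until packet 1 fills the gap; the second copy of packet 1 is recognized as a duplicate and dropped.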
82. Naming and Name Resolution
- Within a system, each process has an ID
- Across a network, processes have no knowledge of one another
- Generally processes are identified by:
  - <host name, identifier>
- Must have a system to resolve names
  - 1. Every system can have a file with a complete listing of all other hosts; the Internet originally used this!
  - 2. Distribute name information across the network and have an appropriate distribution and retrieval protocol
    - This is the Domain Name Server system in use today
83. Domain Name Servers
- Imagine that a system wants to locate:
  - gaia.cc.gatech.edu
- The kernel will issue a request to a name server for the edu domain. This name server will be at a known address.
- The edu name server will issue the address where the gatech.edu name server is located.
- This name server is queried, and it returns the address of cc.gatech.edu, which when queried will return the Internet address of gaia.cc.gatech.edu (130.207.9.18)
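The iterative walk above can be sketched as a toy resolver: each zone's server either knows the final address or refers us to the next server down. The names and final address are from the slide; the server tables themselves are invented for illustration.

```python
# Toy iterative DNS resolution: follow referrals from the edu zone
# down to the server that knows the host's address.
zone_servers = {
    "edu": {"gatech.edu": "server:gatech.edu"},
    "gatech.edu": {"cc.gatech.edu": "server:cc.gatech.edu"},
    "cc.gatech.edu": {"gaia.cc.gatech.edu": "130.207.9.18"},
}

def resolve(name):
    zone = name.rsplit(".", 1)[-1]        # start at the top-level domain
    while True:
        # find the entry in this zone that matches a suffix of the name
        entry = next(v for k, v in zone_servers[zone].items()
                     if name.endswith(k))
        if not entry.startswith("server:"):
            return entry                  # a real address: done
        zone = entry.split(":", 1)[1]     # otherwise, query the next server

print(resolve("gaia.cc.gatech.edu"))  # 130.207.9.18
```

Each loop iteration corresponds to one of the queries described on the slide: edu, then gatech.edu, then cc.gatech.edu.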
84. Next-Hop Forwarding
- This table is for switch 2. (table on board/figure)
85. Step one: define a universal packet format
86. Step two: encapsulate the universal packets in (any) local network frame format
- The universal packet is used to send a message from 1 network to another (or within the same), but we want a uniform standard.
- [Figure: Frame Header | Frame Data]
- The frame is used to communicate within 1 network
87. Physical Network Connection
[Figure: individual networks (clouds) joined by a router]
- A router facilitates communication between networks
- Each cloud represents arbitrary network technology: LAN, WAN, Ethernet, token ring, ATM, etc.
88. Routers
- A router is a special-purpose computer dedicated to the task of interconnecting networks.
- A router can interconnect networks that use different technologies
  - (including different media, physical addressing schemes, or frame formats)
89. Router operation
- Unpack the IP packet from the frame format of the source network
- Perform the routing decision
- Re-pack the IP packet in the frame format of the destination network
- (see board for demo: packing, unpacking, repacking)
90. Reassembly
- Fragments are never reassembled until the final destination
- Why?
  - Reduces the amount of state information in routers. When packets arrive at a router, they can simply be forwarded
  - Allows routes to change dynamically. Intermediate reassembly would be problematic if all fragments didn't arrive.
91. Example
[Figure: Source Host -> (Net 1, header 1) -> Router 1 -> (Net 2, header 2) -> Router 2 -> (Net 3, header 3) -> Destination Host]
92. TCP/IP
- A number of different protocols have been developed to permit internetworking
- TCP/IP (actually a suite of protocols) was the first developed.
- Work began in 1970 (the same time as LANs were developed)
- Most of the development of TCP/IP was funded by the US government (ARPA)
93. Layer upon layer upon layer...
- Layer 1: Physical
  - Basic network hardware (same as ISO model Layer 1)
- Layer 2: Network Interface
  - How to organize data into frames and how to transmit over the network (similar to ISO model Layer 2)
- Layer 3: Internet
  - Specifies the format of packets sent across the internet, as well as the forwarding mechanisms used by routers
- Layer 4: Transport
  - Like ISO Layer 4, specifies how to ensure reliable transfer
- Layer 5: Application
  - Corresponds to ISO Layers 6 and 7. Each Layer 5 protocol specifies how one application uses an internet
94. IP Address Hierarchy
- Addresses are broken into a prefix and a suffix for routing efficiency
- The prefix is uniquely assigned to an individual network.
- The suffix is uniquely assigned to a host within a given network
[Figure: two networks, Network 1 and Network 2, with numbered hosts on each]
95. Five Classes of IP Address
- Primary classes
96. Computing the Class
- (take a quiz) (then see the board)
97. Classes and Dotted Decimal
- Range of values of the first octet:
  - Class A: 0 through 127
  - Class B: 128 through 191
  - Class C: 192 through 223
  - Class D: 224 through 239
  - Class E: 240 through 255
- Does this mean there are 64 Class B networks?
- Does this mean there are 32 Class C networks?
- (on the board)
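The class can be computed directly from the first octet, matching the ranges above (which follow from the leading bits of the address):

```python
# Determine the primary class of an IPv4 address from its first octet.
def ip_class(first_octet):
    if first_octet < 128: return "A"   # leading bit  0
    if first_octet < 192: return "B"   # leading bits 10
    if first_octet < 224: return "C"   # leading bits 110
    if first_octet < 240: return "D"   # leading bits 1110
    return "E"                         # leading bits 1111

print(ip_class(130))  # B  (e.g. 130.207.9.18, gaia.cc.gatech.edu)
print(ip_class(223))  # C
```

Note the answer to the slide's questions is no: the first-octet range tells you the class, not the count of networks; the network counts come from the prefix bit widths on the next slide.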
98. Division of the Address Space

Address Class | Bits in Prefix | Max Networks | Bits in Suffix | Max Hosts per Network
A             | 7              | 128          | 24             | 16777216
B             | 14             | 16384        | 16             | 65536
C             | 21             | 2097152      | 8              | 256

(on the board)
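The counts in the table follow directly from the bit widths: networks = 2^(prefix bits) and hosts = 2^(suffix bits):

```python
# Reproduce the address-space table from the prefix/suffix bit widths.
classes = {"A": (7, 24), "B": (14, 16), "C": (21, 8)}

for name, (prefix_bits, suffix_bits) in classes.items():
    print(name, 2**prefix_bits, 2**suffix_bits)
# A 128 16777216
# B 16384 65536
# C 2097152 256
```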
99. Special IP Address Summary

Prefix   | Suffix   | Type of Address    | Purpose
All-0's  | All-0's  | This computer      | Used during bootstrap
Network  | All-0's  | Network            | Identifies a network
Network  | All-1's  | Directed broadcast | Broadcast on specified net
All-1's  | All-1's  | Limited broadcast  | Broadcast on local net
127      | Any      | Loopback           | Testing
Network  | All-0's  | Directed broadcast | Berkeley broadcast
100. Routers and IP Addressing
- Each host has an address
- Each router has two (or more) addresses!
- Why?
  - A router has connections to multiple physical networks
  - Each IP address contains a prefix that specifies a physical network
  - An IP address does not really identify a specific computer, but rather a connection between a computer and a network.
  - A computer with multiple network connections (e.g. a router) must be assigned an IP address for each connection
101. Example
[Figure: Ethernet 131.108.0.0, Token Ring 223.240.129.0, and WAN 78.0.0.0, joined by routers whose interface addresses are 131.108.99.5, 223.240.129.2, 223.240.129.17, and 78.0.0.17. Note!]
- (on the board)
102. Address Resolution Protocol
- IP addresses are virtual
  - LAN/WAN hardware doesn't understand IP addresses
- A frame transmitted across a network must have the hardware address of the destination (in that network)
- Three basic mechanisms for resolving addresses:
  - 1. Address translation table (used primarily in WANs)
  - 2. Translation by mathematical function
  - 3. Distributed computation across the network
103. Resolving Addresses
- 1. Address translation table
  - Used primarily in WANs
- 2. Translation by mathematical function
- 3. Distributed computation across the network
- Protocol addresses are abstractions
  - Physical hardware does not know how to locate a computer from its protocol address
  - The protocol address of the next hop must be translated to a hardware address
104. Address Resolution Protocol
- TCP/IP can use any of the three methods
  - Table lookup: usually used in a WAN
  - Closed-form computation: used with configurable networks
  - Message exchange: used in LANs with static addressing
- To ensure that all computers agree, TCP/IP includes an Address Resolution Protocol
- Two types of messages are supported:
  - Request: asks for a hardware address, given a protocol address
  - Reply: contains both the IP address and the hardware address
105. IP Addresses and Routing Table Entries
[Figure: routers R1, R2, R3 and R2's routing table]
- Assume a message with IP address 192.4.10.3 arrives at router R2
- for each entry in table: if (Mask & Addr) == Dest, forward to NextHop
- (see board)
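The forwarding loop above can be written out directly: mask the destination address and compare against each entry. The table entries below are invented for illustration (the slide's actual table for R2 is on the board).

```python
# (Mask & Addr) == Dest forwarding, with a hypothetical table for R2.
import ipaddress

# (Dest, Mask, NextHop) entries; more-specific routes listed first:
table = [
    ("192.4.10.0", "255.255.255.0", "R3"),
    ("192.4.0.0",  "255.255.0.0",   "R1"),
]

def next_hop(addr):
    a = int(ipaddress.IPv4Address(addr))
    for dest, mask, hop in table:
        if a & int(ipaddress.IPv4Address(mask)) == int(ipaddress.IPv4Address(dest)):
            return hop     # first matching entry wins
    return None            # no match: drop (or use a default route)

print(next_hop("192.4.10.3"))  # R3
```

192.4.10.3 masked with 255.255.255.0 gives 192.4.10.0, matching the first entry, so the packet goes to R3 rather than the less specific R1 route.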
106. Routing Table Computation
- Routing tables are computed automatically
- Two basic approaches are used:
  - Static routing
    - Program runs when the packet switch boots
    - Advantages: simple, with low network overhead
    - Disadvantage: inflexible
  - Dynamic routing
    - Program builds the routing table at boot and then adjusts the table as conditions change
    - Advantage: allows the network to handle problems automatically