Title: Advanced Topics in Computer Networks

1. Advanced Topics in Computer Networks
Lecture 12: MPLS
- University of Tehran
- Dept. of EE and Computer Engineering
- By Dr. Nasser Yazdani
2. Outline
- Different options for packet switching
- Label switching
- MPLS
3. Packet Switching
- Switching at layer 2 (bridges) has two major problems in scaling:
  - Broadcast storms on the net.
  - The size of the lookup table.
- Using existing switching technology to send IP (IP over ATM) is very complicated!
- Switching packets directly in the TCP/IP protocol:
  - Does not scale to higher speeds or to supporting services such as DiffServ.
- Identify each flow with an ID.
4. Label Switching
- The idea is to identify a flow with an ID.
- Combines label swapping with routing.
- Identifies and separates the different functional components of layer 3.
- The routing functional components:
  - Forwarding:
    - Forwarding engine
    - Forwarding tables
  - Control:
    - Routing protocols
    - Label maintenance
5. Basic Concepts
- Forwarding Equivalence Class (FEC): the set of all packets that a router forwards and treats in the same way, disjoint from other such sets; for instance, packets sent to the same next hop or given the same QoS.
- Routing and forwarding information must be consistent across the network. This is done by the forwarding components.
6. Forwarding Components
- Label: a short, fixed-length entity with no internal structure.
- Forwarding tables: separate tables for unicast and multicast forwarding.

Incoming Tag | Outgoing Tag | Outgoing Interface | Outgoing link-level information

If the switch finds an entry for the incoming tag, it replaces the tag in the packet with the outgoing tag and the link information, and forwards the packet to the outgoing interface.
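The swap step can be sketched as a table lookup (a minimal illustration; the table contents and field names are invented for the example):

```python
# Minimal sketch of tag swapping: the forwarding table maps an
# incoming tag to (outgoing tag, outgoing interface, link-level info).
forwarding_table = {
    17: (42, "if1", "mac:00:11:22:33:44:55"),
    99: (7,  "if3", "mac:66:77:88:99:aa:bb"),
}

def forward(packet_tag):
    """Swap the tag and pick the outgoing interface, or drop."""
    entry = forwarding_table.get(packet_tag)
    if entry is None:
        return None  # no entry for the incoming tag: drop / punt to control
    out_tag, out_if, link_info = entry
    return {"tag": out_tag, "interface": out_if, "link": link_info}
```

Because the match is an exact lookup on a short fixed-length tag, no longest-prefix match is needed, which is the point of label switching.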
7. Forwarding Components (cont.)
- Labels are carried in packets: in ATM, in the VPI/VCI fields; in IP, in the link layer.
- Labels come inside MPLS headers, which are inserted between the layer-3 and layer-2 headers. Headers of this type are called shim headers. The shim header is 4 bytes, and the frame layout is:

Link-layer header | Shim header | Layer-3 header | Data

The shim header is identified by the frame type in Ethernet and PPP. (See the parser for details.)
9. Control Components (Binding)
- How labels are bound to FECs:
  - Topology driven: binding is done by the routing protocols, such as OSPF or BGP. Each route or prefix is associated with a label. We implement this approach; it is simple and provides some degree of label aggregation and merging.
  - Request (control) driven: binding is done by signaling in an upper layer, such as RSVP.
  - Data driven: receiving a data flow triggers the binding process. The most complicated case.
10. Control Components (Binding)
- Local versus remote binding: in the first, a label is assigned to a FEC locally; in the second, the router receives the label from another router.
- Upstream versus downstream: in downstream binding, incoming labels are bound locally and outgoing labels remotely; upstream is the reverse.
- Free labels: a pool of free labels available for assignment.
11. Basic Concepts (Binding)
- MPLS is connection oriented and uses label switching to forward packets. Thus a Label Switched Path (LSP) must be set up before any packet is forwarded.
- Path setup is done in the upper layer, i.e., the control plane.
- Labels are allocated and distributed by the Label Distribution Protocol (LDP), which runs in the upper layer.
- Labels are bound to Forwarding Equivalence Classes (FECs). A FEC is a set of packets that are treated the same way for forwarding.
- An MTU (Maximum Transmission Unit) discovery mechanism is implemented in the upper layer. If the packet size exceeds this value, the packet is discarded (this should be handled in the parser).
12. Proposed Architectures
- IP Switching (Ipsilon)
- Tag Switching (Cisco)
- CSR (Toshiba)
- ARIS (Aggregate Route-based IP Switching, IBM)
- Multiprotocol Label Switching (MPLS)
13. Why MPLS?
- To utilize ATM switching techniques in IP routers, improving the price/performance of the forwarding process.
- To simplify the forwarding and routing process: MPLS labels are used for forwarding instead of the IP header.
- To improve scalability by defining nested domains.
- To extend functionality and flexibility, bringing new services such as traffic shaping and QoS in an easy and manageable manner.
- And more...
14. What Is MPLS?
- Based on label-switching technology.
- Uses link-layer services from L2.
- A single short, fixed-length label.
- Fast exact label matching.
- Runs over multiple data-link media.
- Connection-oriented switching.
- Can support multiple layer-3 protocols.
16. Example of an MPLS Backbone
[Figure: IP traffic enters and leaves the domain through MPLS edge routers; MPLS switch/routers form the core between them.]
17. Core MPLS Components
- Routing: uses standard L3 routing.
- Labels: a local matter, but decided based on global knowledge.
- Encapsulation: may use the following fields: label, TTL, CoS, stack indicator, next-header-type indicator, checksum.
18. MPLS Link Layers
- MPLS is intended to run over multiple link layers.
- Specifications currently exist for the following link layers:
  - ATM: label carried in the VCI/VPI fields.
  - Frame Relay: label carried in the DLCI field.
  - PPP/LAN: uses a shim header inserted between the L2 and L3 headers.
19. Basic Processing
- Build the forwarding table using OSPF, IS-IS, or EIGRP.
- Distribute label information via TCP.
- The edge router forwards packets across the MPLS network:
  - It analyzes the network-layer header.
20. LSP Establishment
- Independent: each LSR makes its decision independently.
  - Fast label-binding convergence.
- Ordered: used in ARIS; the LSP is established from one end to the other.
  - Provides loop prevention.
  - FEC selection at the end point.
  - Tight control over packet forwarding.
  - Increased LSP setup time.
21. Dealing with Loops
- Loop mitigation: minimize loop effects.
  - TTL field (but what to do in ATM?).
  - Limit the buffer space for a specific VC.
- Loop prevention.
- Loop detection: path vector (ARIS).
- Colored threads: each LSP gets a unique color; if the thread comes back, the node detects the loop.
  - How to make the color unique: use the IP address together with a unique number.
  - The color is initiated at the end point.
22. Encapsulation
- Packet flows are identified and forwarded by labels instead of the TCP/IP header.
- Labels come inside MPLS headers, which are inserted between the layer-3 and layer-2 headers. Headers of this type are called shim headers. The shim header is 4 bytes, and its format is:

Label (20) | Exp (3) | S (1) | TTL (8)

The shim header is identified by the frame type in Ethernet and PPP. (See the parser for details.)
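The 4-byte layout can be packed and parsed with Python's struct module (a sketch; the bit layout follows the format above):

```python
import struct

def pack_shim(label, exp, s, ttl):
    """Pack a 4-byte MPLS shim header: Label(20) | Exp(3) | S(1) | TTL(8)."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    word = (label << 12) | (exp << 9) | (s << 8) | ttl
    return struct.pack("!I", word)  # network byte order

def unpack_shim(data):
    """Split the 32-bit word back into its four fields."""
    (word,) = struct.unpack("!I", data)
    return {
        "label": word >> 12,
        "exp": (word >> 9) & 0x7,
        "s": (word >> 8) & 0x1,
        "ttl": word & 0xFF,
    }
```

For example, `pack_shim(1000, 3, 1, 64)` produces the 4-byte header for label 1000 at the bottom of the stack with a TTL of 64.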
23. Basic Concepts (Headers)
- Labels: the total label space is 2^20. Some labels are reserved:
  - Label 0: IPv4 Explicit NULL. The label stack must be popped and the packet forwarded based on the IPv4 header.
  - Label 1: Router Alert. The packet is delivered to the upper-layer application. If the packet then needs to be forwarded, forwarding is done based on the label beneath it in the stack, and the Router Alert label is pushed back onto the stack before the packet is sent. The IP packet and the shim header are sent together.
  - Label 2: the same as 0, but for IPv6.
  - Label 3: the same as 1, but for IPv6.
  - Labels 4-15: reserved on Ethernet links, but usable in PPP.
24. Basic Concepts (Headers)
- The total number of labels needed is 128K, or 2^17 (discussed later), so 17 bits are enough to represent labels.
- The 3 extra bits are used to identify the network-layer protocol and multicasting:
  - 0xx: 2 bits for protocol identification:
    - 00 for the IP protocol.
    - 01 for the IPX protocol.
  - 11 for a multicast group.
- The protocol number is valid only in the bottom-of-stack header.
25. Basic Concepts (Headers)
- Exp: experimental bits. We use these bits to support DiffServ in MPLS. We can have 7 classes; 000 is reserved for the best-effort (default) case. The 64 DiffServ classes are mapped to the 7 classes in the shim header.
- S (stack) bit: marks the bottom of the stack; 1 indicates the bottom of the stack.
- TTL: Time To Live, the same as the TTL in the IP header. It prevents packets from circulating permanently in a loop.
32. Scalability of MPLS over ATM
- Non-VC merging: each source-destination pair is mapped to a unique VC value: O(n^2).
- VP merging: streams to the same destination share a VP; each stream keeps a unique VC within it: O(n).
- VC merging: incoming VC labels for the same destination are mapped to the same outgoing VC label.
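To make the O(n^2) versus O(n) difference concrete, a toy count of connection identifiers for n edge routers all talking to each other (illustrative only; the function names are ours):

```python
def vc_values_non_merging(n):
    """Without merging: one unique VC per source-destination pair, O(n^2)."""
    return n * (n - 1)

def vp_values_vp_merging(n):
    """With VP merging: one VP per destination; streams are told apart
    by their VC inside the VP, so the shared space grows as O(n)."""
    return n
```

With 100 edge routers, non-merging consumes 9,900 VC values while VP merging needs only 100 VPs, which is why merging matters for label-space scalability.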
33. A General View
[Block diagram: Parser, Classifier, IP Lookup, MPLS Engine, Multicasting Engine, VMI Interface, CPU Interface, and Config blocks. The Parser passes the packet ID and header to the Classifier, IP Lookup, and MPLS Engine; the MPLS Engine produces the shim header and hands the packet toward the VMI interface.]
- This is a general view; only the connections from the Parser and the MPLS Engine to the other components are shown.
34. Design Principles
- Binding is topology driven, done by the routing protocols.
- QoS is accomplished with DiffServ and CR-LDP. We do not support RSVP in this version.
- Different hierarchies of MPLS domains are supported.
- Multicasting is supported.
- No support for VPNs.
- Forwarding is done completely in layer 2.
- The MPLS engine works only with IP packets. Thus, if the ingress link is ATM, packets must be reassembled first; on egress, if the link is ATM, packets are segmented after going through the MPLS engine.
35. Tables
- Routing table: the traditional routing table, containing labels associated with prefixes or network addresses.
- This table is consulted in the LER at the ingress and egress of the MPLS domain for forwarding packets.
- It is merged with the prefix-matching index structure.
- The table holds 128K entries, which is the number of supported prefixes.

FEC (33) | Label (20) | Port (8) | QoS
36. Tables (cont.)
- The forwarding table, or Label Information Base (LIB), is the main table for packet forwarding.
- Its size is the same as the routing table: 128K entries.
- Out Label can be 17 bits; the other 3 bits depend on the network-layer protocol and can be set from the router (LSR) configuration.
- Port is the output port for unicast forwarding.
- TTL (Time To Live) is a value subtracted from the shim-header TTL. This is helpful when MPLS packets are routed through an ATM domain. The value is precomputed by LDP; the default is 1.

In Label (20) | Out Label (20) | Port (8) | TTL (8) | QoS | Action (3)
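The LIB row can be modeled as a simple record (a sketch; the field names are ours, and the widths follow the table above):

```python
from dataclasses import dataclass

@dataclass
class LibEntry:
    """One row of the Label Information Base (field widths from the slide)."""
    in_label: int   # 20 bits
    out_label: int  # 20 bits (17 used; 3 set from LSR configuration)
    port: int       # 8 bits, output port for unicast forwarding
    ttl: int        # 8 bits, subtracted from the shim-header TTL (default 1)
    qos: int        # CR-LDP traffic parameters, abstracted as one value here
    action: int     # 3 bits

# The LIB is indexed by the incoming label.
lib = {e.in_label: e for e in [LibEntry(100, 200, 3, 1, 0, 0)]}

def new_ttl(shim_ttl, entry):
    """TTL remaining after crossing this hop (or an ATM domain)."""
    return shim_ttl - entry.ttl
```

A TTL decrement greater than 1 models the case where a single LIB hop actually traverses several ATM switches.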
37. Tables (cont.)
- QoS: quality-of-service parameters, used in CR-LSRs (constraint-based label switching routers). The CR-LDP specification defines the following traffic parameters:
  - Peak Data Rate (PDR)
  - Peak Burst Size (PBS)
  - Committed Burst Size (CBS)
  - Excess Burst Size (EBS)
  - Frequency and Weight
- Action is the action MPLS should take with this header when forwarding the packet. It is three bits, and the valid values are:
  - Default (000): swap the label and forward based on the top-of-stack header values.
38. Tables (cont.)
- POP (001): pop the stack and forward based on the values beneath.
- POPFRWD (010): pop the stack and forward based on the IP header values. Usually done in the LER at the egress of the MPLS domain.
- PUSH (011): forward based on the label and push a label onto the stack.
- FRWDPUSH (100): forward based on the IP header values and push a label onto the stack. Usually done in the LER at the ingress of the MPLS domain.
Note:
- The code numbers of these actions may change depending on the LDP implementation.
- We may need other codes depending on the implementation and future use.
39. Forwarding Algorithm
- We differentiate between the LER at the ingress of an MPLS domain and LSRs inside the domain.
- We adopt a few conventions for the algorithms: P, S, and T followed by a dot "." denote the main data element, and the name after the dot denotes the field:
  - P.header: packet header.
  - S.header: shim (MPLS) header.
  - T.TTL: TTL value in the table.
40. Forwarding Algorithm (cont.)
- For the LER at the ingress, on entry to the MPLS domain:
  - If P.TTL = 0, discard the packet.
  - Else, get OutLabel and the port number from the routing table:
    - S.TTL <- P.TTL - 1
    - S.Exp <- mapping of the DSCP field of the IP header
    - /* This is the first domain. */
    - S.S <- 1
    - S.Label <- T.OutLabel
    - Push the shim header onto the stack.
    - Forward the packet to the port.
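The ingress steps above can be sketched in a few lines of Python (a minimal sketch; the routing-table layout and the DSCP-to-Exp mapping are passed in as simplified assumptions):

```python
def ingress_ler(ip_ttl, dscp_to_exp, route):
    """Sketch of the ingress-LER algorithm above.
    route: (out_label, port) from the routing table;
    dscp_to_exp: the Exp bits already mapped from the IP DSCP field.
    Returns (shim-header fields, output port), or None to discard."""
    if ip_ttl == 0:
        return None  # If P.TTL = 0, discard the packet
    out_label, port = route
    shim = {
        "ttl": ip_ttl - 1,    # S.TTL <- P.TTL - 1
        "exp": dscp_to_exp,   # S.Exp <- mapped DSCP
        "s": 1,               # first (bottom-of-stack) entry in this domain
        "label": out_label,   # S.Label <- T.OutLabel
    }
    return shim, port
```

The shim dict stands in for the 4-byte header that would actually be pushed in front of the IP packet.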
41. Forwarding Algorithm (cont.)
- For the general case, inside the MPLS domain:
  - Loop: get the entry for the incoming label from the MPLS table.
  - If S.TTL = 0 or S.TTL - T.TTL <= 0: /* discard the packet */
    - Send the RemPacket signal to the PSSB.
    - Get the packet's IP header.
    - Send the DiscardPacket signal with the IP header to the CPU.
    - /* The CPU generates an ICMP packet-discarded message. */
  - Else: /* the packet must be forwarded */
    - If S.Label = 0: /* forward based on the IP header */
      - Send the packet ID to Lookup and Classifier.
      - Set the IpRoute signal.
    - If S.Label = 1: /* forward the packet to the upper layer */
      - Send the packet to the CPU.
42. Forwarding Algorithm (cont.)
- If S.Label = 2 or 3: /* IPv6 packet, discard */
  - Send the RemPacket signal to the PSSB.
  - Get the packet's IP header.
  - Send the DiscardPacket signal with the IP header to the CPU.
- switch (T.Action):
  - POP: /* pop and forward based on the next header */
    - If S.S = 1: /* the last stack entry; we may not need this */
      - Send the packet ID to Lookup and Classifier.
      - Set the IpRoute signal.
      - P.TTL <- S.TTL
    - Else: /* there is another stack entry */
      - Remove the current entry and get the next one.
      - Go to Loop. /* repeat the forwarding algorithm */
43. Forwarding Algorithm (cont.)
- POPFRWD: /* pop and forward based on the IP header */
  - Send the packet ID to Lookup and Classifier.
  - Set the IpRoute signal.
  - If S.S = 1: /* the last stack entry */
    - P.TTL <- S.TTL - 1
    - Set the NoShimHeader signal.
  - End if.
  - Remove the stack entry.
- PUSH: /* should be checked when it happens */
- PUSHFRWD: /* a new MPLS ingress; forwarding is done based on the IP header. We assume this is not the first stack entry. */
  - S.S <- 0 /* creating a new stack entry */
  - S.Exp <- the current stack value
  - S.OutLabel <- T.OutLabel /* from the routing table */
  - S.TTL <- the current TTL - 1
  - Push the shim header onto the stack.
  - Forward the packet to the port from the routing table.
44. Forwarding Algorithm (cont.)
- DEFAULT: /* just swap the label */
  - S.OutLabel <- T.OutLabel /* from the MPLS table */
  - S.TTL <- the current TTL - 1
  - Forward the packet to the port from the MPLS table.
- End.
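The per-action handling above can be sketched compactly in software. This toy version covers only the TTL check and the swap, POP, and POPFRWD branches, with our own verdict strings standing in for the hardware signals (RemPacket, IpRoute, etc.):

```python
# Action codes from the tables above; the slides note these codes
# may change with the LDP implementation.
DEFAULT, POP, POPFRWD = 0b000, 0b001, 0b010

def lsr_forward(stack, entry):
    """Sketch of the per-label actions.
    stack: list of shim-header dicts, top of stack first;
    entry: the matching LIB row as a dict.
    Returns (new_stack, verdict)."""
    top = stack[0]
    # TTL check: discard if expired after subtracting the table TTL.
    if top["ttl"] == 0 or top["ttl"] - entry["ttl"] <= 0:
        return stack, "discard"            # CPU would send an ICMP message
    if entry["action"] == DEFAULT:
        # Swap the label and decrement the TTL.
        new_top = dict(top, label=entry["out_label"], ttl=top["ttl"] - 1)
        return [new_top] + stack[1:], "forward"
    if entry["action"] == POP:
        # Pop; if this was the bottom of stack, route on the IP header.
        return stack[1:], "ip-route" if top["s"] == 1 else "forward"
    if entry["action"] == POPFRWD:
        # Pop and forward based on the IP header.
        return stack[1:], "ip-route"
    return stack, "unhandled"
```

The list-of-dicts stack mirrors the label-stack semantics: DEFAULT touches only the top entry, while the POP variants expose the entry (or IP header) beneath it.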
45. Multicasting
- There is no well-defined standard or Internet draft for multicasting in MPLS.
- To support multicasting in MPLS, we need to define:
  - The multicasting protocol in detail.
  - The labeling mechanism and addressing in the link layer.
46. Multicasting
- Label: all labels starting with 11 are reserved for multicasting. A multicast MPLS packet can be identified from the link-layer frame type:
  - For Ethernet, the EtherType is 0x8848.
  - For PPP, the protocol field is 0x0283.
- DLL address: an Ethernet address starting with 01-00-5E is a multicast address. The 20 bits of the MPLS label are mapped directly to the low 20 bits of the Ethernet address; bits 21-24 are always 0.

0000 0001 | 0000 0000 | 0101 1110 | 0000 | MPLS multicast label (20 bits)
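Under this mapping, constructing the Ethernet multicast address from a label can be sketched as (a small illustration of the slide's scheme; the function name is ours):

```python
def mpls_multicast_mac(label):
    """Map a 20-bit MPLS multicast label to an Ethernet multicast address:
    the 01-00-5E prefix, 4 zero bits, then the 20 label bits."""
    assert 0 <= label < 2**20
    addr = (0x01005E << 24) | label   # the 4 bits after the prefix stay 0
    return ":".join(f"{(addr >> s) & 0xFF:02x}" for s in range(40, -1, -8))
```

For example, label 0xABCDE yields 01:00:5e:0a:bc:de; since only 20 bits of the 24 low address bits are used, each label gets a distinct MAC address.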
47. Multicasting
- Protocol: a few Internet drafts have been published recently, but none of them specifies a detailed mechanism for MPLS multicasting. In general, we can take two approaches:
  - Leave the forwarding process to IP multicasting. This method has problems when an MPLS packet must traverse an ATM domain.
  - Design a new protocol based on the general specifications in the Internet drafts.
48. Multicasting
- In designing an MPLS multicast protocol, we must deal with two problems:
  - Creating the multicast tree.
  - Allocating and distributing labels.
- LDP must certainly handle the second problem. Creating the multicast tree, however, can be:
  - Topology driven: map the L3 tree from the routing protocol into L2.
  - Request driven: by signaling.
  - Traffic driven: map the L3 tree from the routing protocol into L2 when data arrives.
49. Multicasting
- We implement the topology-driven approach:
  - The IP multicast protocol takes care of creating the multicast tree.
  - Since the multicast tree (table) is also in layer 2, the mapping can be done mostly in layer 2.
  - LDP is modified only to allocate multicast labels and distribute them along the path.
  - This appears to be the minimum modification needed to support MPLS multicasting.
50. Multicasting
[Block diagram: the CPU Interface connects to LDP and Config; control signals flow between LDP, IP Multicast, and MPLS Multicast; data flows between IP Multicast and MPLS Multicast.]
- A copy of the IP multicast tree is built in the MPLS multicast part.
- Whenever an IP multicast group is created, the MPLS part is informed so it can update its tree.
- MPLS sends the new IP group number to LDP, which binds a label and distributes it to the other LSRs in the domain.
- The multicast parts are implemented in layer 2; LDP is in the upper layers and implemented in software.
51. Multicasting
- Required functionality in the IP multicast part:
  - If a new multicast group is created, send the group number, ports, and related information to MPLS.
  - If a multicast group is deleted, send the delete signal with the group number to MPLS.
  - If a multicast group is updated, send the update signal, group number, and update information to MPLS.
52. Multicasting
- Required functionality for MPLS multicasting:
  - On a new-group signal:
    - Send the group number to LDP to bind a label.
    - Create an entry in the routing table.
    - Enter the label received from LDP in the routing table.
  - If a multicast group is deleted:
    - Send the delete signal with the group number to LDP.
    - Delete the label entry in the routing table.
  - If a multicast group is updated:
    - Find the corresponding label for the group.
    - Update the corresponding data in the table.
53. Multicasting
- Label distribution: the label-distribution policy can have a big impact on the design of the layer-2 multicast engine. In general, there are two policies:
  - Downstream: the receiver allocates the labels and sends them to the sender.
  - Upstream: the sender allocates the labels and distributes them to the receivers.
- We support the downstream policy because:
  - We support only a limited label space for multicasting (1K).
  - It guarantees that incoming labels are unique on each interface.
  - As a result, a multicast flow may carry different labels on egress.
54. Multicasting
- Forwarding table: the incoming multicast label is unique, so it can be used directly as an index into the forwarding table.

Next | Out Label (20) | Port (8) | TTL (8) | QoS | Action (3)

The table consists of a 1K main table plus extra space.
- Next is the address of the next entry for this incoming label in the table.
- Entries for an incoming label are chained together as a linked list.
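A software sketch of the chained table, with Python lists standing in for the memory arrays (the `add`/`lookup` names and entry tuples are ours):

```python
class McastTable:
    """1K main table indexed by the incoming label, plus an extra space
    whose entries are linked through the Next field (0 = end of list)."""
    def __init__(self, main_size=1024, extra_size=13 * 1024):
        self.slots = [None] * (main_size + extra_size)
        self.free = main_size  # next free slot in the extra space

    def add(self, label, entry):
        """Append one replication entry for this incoming label."""
        if self.slots[label] is None:
            self.slots[label] = [entry, 0]      # first entry: Next = 0
        else:
            self.slots[self.free] = [entry, 0]  # place copy in extra space
            node = self.slots[label]
            while node[1] != 0:                 # walk the chain to its tail
                node = self.slots[node[1]]
            node[1] = self.free                 # link the tail to the new slot
            self.free += 1

    def lookup(self, label):
        """Return every entry replicated for this incoming label."""
        out, idx = [], label
        while True:
            node = self.slots[idx]
            if node is None:
                return out
            out.append(node[0])
            if node[1] == 0:
                return out
            idx = node[1]
```

Each `lookup` walks the chain once, yielding one (out label, port, ...) entry per output branch of the multicast tree.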
55. Multicasting (Table)
- A Next value of 0 means the end of the list, i.e., the last entry for that label.
- The width of Next depends on the size of the extra space, which in turn depends on the number of output ports and whether the system is designed for the worst case or the average case.
- Assume 64 output ports, with each multicast flow covering about 20% of them on average; then:
  - Extra space size: 1K x 64 x 20% ≈ 13K entries.
  - Next width: 14 bits.
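Reading the average fan-out as roughly 20% of the 64 ports makes the 13K figure and the 14-bit Next width work out (an interpretation of the slide's arithmetic, not a statement from the original):

```python
labels = 1024            # 1K multicast labels
ports = 64               # number of output ports
avg_fraction = 0.20      # assumed: each flow covers ~20% of the ports

# Extra chained entries needed on average, and the bits required to
# address any slot in main table + extra space.
extra_entries = int(labels * ports * avg_fraction)   # ~13K entries
next_width = (labels + extra_entries).bit_length()   # bits for the Next field
```

With these numbers the table needs about 13K overflow slots, and 14 bits suffice to address all ~14K slots.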
56. Multicasting (Table)
- If more than one line is multiplexed onto the input interface, the incoming label space and the main table space are divided among them. For instance, if 4 ports are multiplexed, each of them uses 256 labels: the first line maps to the first quarter of the main table, the second line to the second quarter, and so on.
- The extra space needs to be managed, so we will need a memory-management scheme.
57. Conclusion
- This report considers only the requirements and mechanisms for MPLS in layer 2. Other issues, such as path establishment and label distribution, are outside its scope.
- Some issues, such as mapping DiffServ parameters into MPLS, remain for further study.
- Implementing this draft needs more discussion and clarification of some low-level mechanisms, such as memory management.
- Integrating the MPLS engine with the rest of the IP switching and forwarding scheme will raise new problems to be solved.