Traveling Salesman Problem (TSP)

Transcript and Presenter's Notes
1
  • Traveling Salesman Problem (TSP)

2
History of the TSP
  • Mathematical problems related to TSP were treated
    in the 1800s by the Irish mathematician Sir
    William Rowan Hamilton and by the British
    mathematician Thomas Penyngton Kirkman.
  • In the 1930s, the general form of the TSP was first
    studied by Karl Menger in Vienna and at Harvard.

http://www.tsp.gatech.edu/
3
  • TSP is closely related to the Hamiltonian-cycle
    problem.
  • Problem Statement
  • In TSP, a salesman must visit n cities.
  • The salesman wishes to make a tour or Hamiltonian
    cycle.
  • He must visit each city exactly once and finish
    at the city he starts from.
  • There is a cost c(i, j) to travel from city i to
    city j.
  • For the symmetric TSP, c(i, j) = c(j, i).
  • For the asymmetric TSP, c(i, j) ≠ c(j, i) in general.
  • In the Euclidean TSP, costs satisfy the triangle
    inequality: c(i, j) ≤ c(i, k) + c(k, j).

4
  • The salesman wishes to make the tour with minimum
    total cost.
  • For transportation problems, a vehicle has
    unlimited capacity.
  • TSP is NP-complete.
  • There are n!/(2n) = (n − 1)!/2 possible tours in the
    symmetric case.
  • The problem can be modeled as a complete graph
    with n vertices.
  • A graph consists of vertices (nodes) and arcs
    (edges).

(Figure: an example complete graph with edge costs.)
5
Year Research Team Size of Instance
1954 G. Dantzig, R. Fulkerson, and S. Johnson 49 cities
1971 M. Held and R.M. Karp 64 cities
1975 P.M. Camerini, L. Fratta, and F. Maffioli 67 cities
1977 M. Grötschel 120 cities
1980 H. Crowder and M.W. Padberg 318 cities
1987 M. Padberg and G. Rinaldi 532 cities
1987 M. Grötschel and O. Holland 666 cities
1987 M. Padberg and G. Rinaldi 2,392 cities
1994 D. Applegate, R. Bixby, V. Chvátal, and W. Cook 7,397 cities
1998 D. Applegate, R. Bixby, V. Chvátal, and W. Cook 13,509 cities
2001 D. Applegate, R. Bixby, V. Chvátal, and W. Cook 15,112 cities
2004 D. Applegate, R. Bixby, V. Chvátal, W. Cook, and K. Helsgaun 24,978 cities
6
  • Mathematical formulation as an integer program.
  • (The objective function and the constraints following
    "Subject to" are not included in this transcript.)
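For reference, one standard way to state the TSP as an integer program is the Miller–Tucker–Zemlin (MTZ) formulation sketched below. This is a common textbook formulation, not necessarily the one shown on the slide; x_ij = 1 if the tour travels directly from city i to city j, and the u_i are auxiliary ordering variables.

    % One common TSP integer program (MTZ formulation);
    % not necessarily the formulation shown on the original slide.
    \begin{align*}
    \min\; & \sum_{i=1}^{n} \sum_{j \neq i} c_{ij} x_{ij} \\
    \text{s.t.}\;
      & \sum_{j \neq i} x_{ij} = 1, && i = 1,\dots,n && \text{(leave each city exactly once)} \\
      & \sum_{i \neq j} x_{ij} = 1, && j = 1,\dots,n && \text{(enter each city exactly once)} \\
      & u_i - u_j + n\,x_{ij} \le n - 1, && 2 \le i \neq j \le n && \text{(subtour elimination)} \\
      & 1 \le u_i \le n - 1, && i = 2,\dots,n \\
      & x_{ij} \in \{0, 1\}.
    \end{align*}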
7
LINGO Program for TSP (the program listing is not included in this transcript)
8
Solution Approaches
  • 1. Heuristic Approaches
  • Provide a good-enough solution.
  • Require little computational time.
  • Usually followed by local search methods (r-opt
    exchange heuristics).
  • Examples: Nearest Neighbor Heuristic, Insertion
    Heuristics, etc.
  • 2. Meta-heuristics
  • Usually more effective than simple heuristics.
  • Examples: Genetic Algorithms, GRASP, Ant Colony
    Optimization, etc.
  • 3. Exact Algorithms
  • Provide an optimal solution.
  • Require more computational time.
  • Examples: Branch and Bound, Branch and Cut, etc.

9
Traveling Salesman Problem (TSP)
Assumptions
1. Arcs are undirected and symmetric.
2. All possible arcs between nodes exist.
3. Each node is visited exactly once.
4. For transportation problems, a vehicle has unlimited
   capacity.
10
TSP
Heuristic Approaches
Basic Nearest Neighbor Heuristic
Randomly select an initial node i as the partial tour;
then repeatedly extend the tour from the current node to
its nearest unvisited node, and finally return to the
starting node.
(Figure: step-by-step construction of a nearest neighbor
tour on a 9-node example, nodes 0–8.)
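A minimal Python sketch of the basic nearest neighbor heuristic; the function and variable names are illustrative, not taken from the slides.

    def nearest_neighbor_tour(cost, start=0):
        # cost[i][j] = travel cost from city i to city j.
        # Returns a closed tour starting and ending at `start`.
        n = len(cost)
        unvisited = set(range(n)) - {start}
        tour = [start]
        while unvisited:
            current = tour[-1]
            # Greedily move to the closest city not yet visited.
            nxt = min(unvisited, key=lambda j: cost[current][j])
            tour.append(nxt)
            unvisited.remove(nxt)
        tour.append(start)  # return to the starting city to close the cycle
        return tour

    def tour_cost(cost, tour):
        # Total cost of a closed tour given as a list of city indices.
        return sum(cost[a][b] for a, b in zip(tour, tour[1:]))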
11
TSP
Modified Nearest Neighbor Heuristic
For every node i as the starting point of the tour,
construct a tour using the basic nearest neighbor
heuristic, and keep the cheapest of the resulting tours.
(Figure: nearest neighbor tours built from each possible
starting node on the 9-node example, with total tour
lengths between 50 and 56; the cheapest tour is kept.)
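A minimal sketch of the modified heuristic, reusing the illustrative nearest_neighbor_tour and tour_cost helpers defined above:

    def modified_nearest_neighbor_tour(cost):
        # Run the basic heuristic from every starting node and keep the best tour.
        tours = (nearest_neighbor_tour(cost, start=s) for s in range(len(cost)))
        return min(tours, key=lambda t: tour_cost(cost, t))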
12
TSP
Arbitrary Insertion Heuristic
Select a starting subtour with k nodes (k ≥ 1); then
repeatedly pick an arbitrary city not yet in the subtour
and insert it where it increases the tour cost the least,
until all cities are included.
(Figure: insertion steps on the 9-node example, nodes 0–8.)
13
Nearest Insertion Heuristic
  • 1. Start with a subgraph consisting of city i only.
  • 2. Find city k such that c(i, k) is minimal and form
    the subtour (i, k).
  • 3. Find city k not in the subtour and city l in the
    current subtour such that c(l, k) = min c(i, j),
    where j denotes a city not in the current subtour and
    i denotes a city in the current subtour.
  • 4. Find the edge (i, j) in the subtour which minimizes
    c(i, k) + c(k, j) − c(i, j). Insert k between i and j.
  • 5. Go to step 3 unless we have a Hamiltonian cycle.
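A minimal Python sketch of this insertion procedure (identifiers are my own). The select argument controls the selection rule in step 3: min gives nearest insertion as described here, while max gives the farthest insertion variant on the next slide.

    def insertion_tour(cost, start=0, select=min):
        n = len(cost)
        outside = set(range(n)) - {start}
        # Step 2: form the initial subtour (i, k).
        k = select(outside, key=lambda j: cost[start][j])
        tour = [start, k]
        outside.remove(k)
        while outside:
            # Step 3: choose the city outside the subtour that is
            # nearest to it (or farthest, when select=max).
            k = select(outside, key=lambda j: min(cost[i][j] for i in tour))
            # Step 4: insert k between the pair of consecutive tour
            # cities where it increases the tour length the least.
            best_pos = min(
                range(len(tour)),
                key=lambda p: cost[tour[p]][k]
                              + cost[k][tour[(p + 1) % len(tour)]]
                              - cost[tour[p]][tour[(p + 1) % len(tour)]])
            tour.insert(best_pos + 1, k)
            outside.remove(k)
        return tour + [tour[0]]  # close the Hamiltonian cycle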

14
Farthest Insertion Heuristic
  • 1. Start with a subgraph consisting of city i only.
  • 2. Find city k such that c(i, k) is maximal and form
    the subtour (i, k).
  • 3. Find city k not in the subtour and city l in the
    current subtour such that c(l, k) = max c(i, j),
    where j denotes a city not in the current subtour and
    i denotes a city in the current subtour.
  • 4. Find the edge (i, j) in the subtour which minimizes
    c(i, k) + c(k, j) − c(i, j). Insert k between i and j.
  • 5. Go to step 3 unless we have a Hamiltonian cycle.
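In the illustrative insertion_tour sketch above, this variant is obtained simply by switching the selection rule, e.g. insertion_tour(cost, select=max); the insertion step itself is unchanged.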

15
Convex Hull Insertion Heuristic
  • 1. Form the convex hull of the set of cities. The
    hull gives an initial subtour.
  • 2. For each city k not yet contained in the subtour,
    decide between which two cities i and j on the subtour
    to insert city k such that c(i, k) + c(k, j) − c(i, j)
    is minimal.
  • 3. From all (i, k, j) found in step 2, determine the
    (i, k, j) such that [c(i, k) + c(k, j)] / c(i, j) is
    minimal.
  • 4. Insert city k in the subtour between cities i and j.
  • 5. Repeat steps 2 through 4 until a Hamiltonian cycle
    is obtained.
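A minimal Python sketch of this heuristic, under the assumption that cities are given as 2-D Euclidean coordinates (so a convex hull exists) and that SciPy is available; all identifiers are my own.

    import numpy as np
    from scipy.spatial import ConvexHull

    def convex_hull_insertion_tour(points):
        # points: (n, 2) array of city coordinates; returns a closed tour of indices.
        points = np.asarray(points, dtype=float)
        dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        # Step 1: the convex hull vertices (in counterclockwise order) form the initial subtour.
        tour = list(ConvexHull(points).vertices)
        outside = set(range(len(points))) - set(tour)
        while outside:
            best = None  # (ratio, city, insertion position)
            for k in outside:
                # Step 2: cheapest insertion position for city k.
                p = min(range(len(tour)),
                        key=lambda p: dist[tour[p], k]
                                      + dist[k, tour[(p + 1) % len(tour)]]
                                      - dist[tour[p], tour[(p + 1) % len(tour)]])
                i, j = tour[p], tour[(p + 1) % len(tour)]
                # Step 3: among all candidate cities, pick the smallest cost ratio.
                ratio = (dist[i, k] + dist[k, j]) / dist[i, j]
                if best is None or ratio < best[0]:
                    best = (ratio, k, p)
            _, k, p = best
            tour.insert(p + 1, k)  # Step 4
            outside.remove(k)
        return tour + [tour[0]]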

16
Minimum Spanning Tree Based Algorithm
  • 1. Construct an MST of the graph corresponding to
    an instance of TSP.
  • 2. Starting at an arbitrary vertex, perform a
    depth-first traversal of the MST and add each
    vertex as visited to a list.
  • 3. Iterate through the list of vertices, marking
    each vertex encountered as visited. When a
    previously visited vertex is encountered, remove
    this vertex from the list unless it is the
    starting vertex.

17
MST Algorithm
  • 1. Begin with any node i and join node i to the node
    j in the network that is closest to node i. Arc (i, j)
    now forms the initial spanning tree.
  • 2. Choose a node in the network (not yet in the
    spanning tree) that is closest to a node in the
    spanning tree.
  • 3. Connect the node chosen in step 2 to the spanning
    tree (to its closest node in the spanning tree).
  • 4. Repeat steps 2 and 3 until a minimum spanning tree
    is found.
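A minimal Python sketch combining the MST construction just described (Prim's method) with the depth-first shortcut walk from the previous slide; identifiers are my own.

    from collections import defaultdict

    def mst_prim(cost):
        # Grow a minimum spanning tree from node 0 (steps 1-4 above);
        # returns the tree as an adjacency map.
        n = len(cost)
        in_tree = {0}
        adj = defaultdict(list)
        while len(in_tree) < n:
            # Cheapest arc joining a tree node to a node outside the tree.
            i, j = min(((i, j) for i in in_tree
                        for j in range(n) if j not in in_tree),
                       key=lambda e: cost[e[0]][e[1]])
            adj[i].append(j)
            adj[j].append(i)
            in_tree.add(j)
        return adj

    def mst_tour(cost, start=0):
        # Depth-first walk of the MST; already-visited vertices are skipped,
        # which is the "remove from the list" step of the MST-based algorithm.
        adj = mst_prim(cost)
        visited, tour, stack = set(), [], [start]
        while stack:
            v = stack.pop()
            if v in visited:
                continue
            visited.add(v)
            tour.append(v)
            stack.extend(adj[v])
        return tour + [start]  # return to the starting vertex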

18
19
Neighborhood Search
2-opt
(Figure: a tour on nodes 1–5 plus a depot, before and
after exchanging two edges.)
3-opt
(Figure: the same tour before and after exchanging three
edges.)
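A minimal Python sketch of a 2-opt improvement pass for a symmetric cost matrix (identifiers are my own): it repeatedly reverses the segment of the tour between two positions whenever doing so shortens the tour.

    def two_opt(cost, tour):
        # tour is a closed tour (first element == last element).
        best = tour[:]
        improved = True
        while improved:
            improved = False
            for a in range(1, len(best) - 2):
                for b in range(a + 1, len(best) - 1):
                    # Replacing edges (a-1, a) and (b, b+1) with (a-1, b) and
                    # (a, b+1) is equivalent to reversing the segment best[a..b].
                    delta = (cost[best[a - 1]][best[b]]
                             + cost[best[a]][best[b + 1]]
                             - cost[best[a - 1]][best[a]]
                             - cost[best[b]][best[b + 1]])
                    if delta < 0:
                        best[a:b + 1] = reversed(best[a:b + 1])
                        improved = True
        return best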
20
21
  • Problem
  • A beer distributor has received orders from seven
    customers for delivery the next day. The number of
    cases required by each customer and the travel times
    between each pair of customers are as follows.
  • Customer:  1   2   3   4   5   6   7
  • Cases:    46  55  33  30  24  75  30
  • Assume the delivery truck has unlimited capacity.
  • Construct a vehicle route using any of the heuristics.

Travel times between nodes 0–7 (node 0 is the distributor),
reading the flattened values of the original triangular
table row by row:

      1   2   3   4   5   6   7
  0  20  57  51  51  10  50  50
  1      55  20  50  10  25  30
  2          11  50  15  30  10
  3              60  60  20  90
  4                  53  47  38
  5                      10  90
  6                          12
22
Greedy Randomized Adaptive Search
Procedure(GRASP)
  • GRASP is an iterative process.
  • In each iteration, a solution is obtained.
  • Each iteration consists of two phases:
  • a construction phase, and
  • a local search phase.
  • The best overall solution is kept as the result.
  • The process is terminated when some termination
    criterion is met.

23
  • Procedure grasp( )
  • 1. InputInstance( )
  • 2. for GRASP stopping criterion not satisfied
  • 3.     ConstructGreedyRandomizedSolution(Solution)
  • 4.     LocalSearch(Solution)
  • 5.     UpdateSolution(Solution, BestSolutionFound)
  • 6. rof
  • 7. return (BestSolutionFound)
  • end grasp
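A minimal Python sketch of this loop applied to the TSP, using a randomized nearest-neighbor construction (sketched after the construction-phase pseudocode below) and the 2-opt local search sketched earlier; the iteration count and all identifiers are illustrative assumptions, not from the slides.

    def grasp_tsp(cost, iterations=100, rcl_size=3):
        # Repeat construction + local search; keep the best tour found.
        best = None
        for _ in range(iterations):
            tour = randomized_greedy_tour(cost, rcl_size)  # construction phase
            tour = two_opt(cost, tour)                     # local search phase
            if best is None or tour_cost(cost, tour) < tour_cost(cost, best):
                best = tour                                # UpdateSolution
        return best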

24
Construction Phase
  • A feasible solution is iteratively constructed,
    one element at a time.
  • In each construction iteration, a candidate list
    of elements is created by ordering the elements
    with respect to a greedy function.
  • One of the best candidates in the list is
    randomly chosen.
  • The benefits associated with each element are
    updated at each construction iteration.
  • The list of the best candidates is called the
    restricted candidate list (RCL).
  • The solutions obtained in the construction phase are
    not guaranteed to be locally optimal.

25
  • Procedure ConstructGreedyRandomizedSolution
    (Solution)
  • 1. Solution ← { }
  • 2. for Solution construction not done
  • 3.     MakeRCL(RCL)
  • 4.     s ← SelectElementAtRandom(RCL)
  • 5.     Solution ← Solution ∪ {s}
  • 6.     AdaptGreedyFunction(s)
  • 7. rof
  • end ConstructGreedyRandomizedSolution
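A minimal Python sketch of this construction phase for the TSP: the greedy function is the nearest-neighbor distance, and the restricted candidate list (RCL) holds the rcl_size cheapest candidates. All names and the RCL rule are illustrative assumptions.

    import random

    def randomized_greedy_tour(cost, rcl_size=3, start=0):
        n = len(cost)
        unvisited = set(range(n)) - {start}
        tour = [start]
        while unvisited:
            current = tour[-1]
            # Order the candidate elements by the greedy function (distance).
            candidates = sorted(unvisited, key=lambda j: cost[current][j])
            rcl = candidates[:rcl_size]        # MakeRCL
            nxt = random.choice(rcl)           # SelectElementAtRandom
            tour.append(nxt)                   # Solution <- Solution + {s}
            unvisited.remove(nxt)              # update the remaining candidates
        return tour + [start]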

26
Local Search Phase
  • Each constructed solution is improved by applying
    a local search.
  • In the local search algorithm, the current
    solution is successively replaced by a better
    solution in the neighborhood of the current
    solution.
  • The local search is terminated when no improved
    solution is found in the neighborhood.

27
  • Procedure local(P, N(P), s)
  • 1. for s not locally optimal
  • 2.     Find a better solution t ∈ N(s)
  • 3.     Let s ← t
  • 4. rof
  • 5. return (s as a local optimum for P)
  • end local