Explained: What is Traveling Salesman Problem (TSP)

Rakesh Patel

  • Last Updated: January 24, 2023


The traveling salesman problem is not new for delivery-based businesses. The recent expansion of delivery operations has pushed industry experts to find optimal solutions that keep those operations running smoothly.

The major challenge is to find the most efficient routes for performing multi-stop deliveries. Without the shortest routes, your delivery agent will take more time to reach the final destination. Problems also arise when you have multiple route options but fail to recognize the most efficient one.

Eventually, the traveling salesman problem costs you time and results in late deliveries. So, before it becomes an irreparable issue for your delivery company, let us understand the traveling salesman problem and explore optimal solutions in this blog.

Table of Contents

  • What is the Traveling Salesman Problem (TSP)?
  • Common challenges of the Traveling Salesman Problem (TSP)
  • What are some popular solutions to the Traveling Salesman Problem?
  • What are other optimal solutions to the Traveling Salesman Problem?
  • What are some real-life applications of the Traveling Salesman Problem?

What is Traveling Salesman Problem (TSP)?

The Traveling Salesman Problem (TSP) is a combinatorial optimization problem that deals with finding the shortest and most efficient route to follow for reaching a list of specific destinations.

It is a common algorithmic problem in the field of delivery operations that can hamper multi-stop delivery processes and result in financial loss. The TSP arises when you have multiple routes available but choosing the minimum-cost path is genuinely hard for you or the traveling person.

How difficult is it to solve?

It is quite difficult to solve the TSP because it is NP-hard, which means there is no known polynomial-time algorithm that solves it for larger numbers of addresses. So, as the number of addresses grows, the complexity of solving the TSP increases exponentially.

Beyond a handful of stops, it is impractical to find TSP solutions manually, and even sophisticated mathematical algorithms running on the fastest computers struggle to solve large instances exactly.

However, the problem can be tamed by finding an optimized, efficient path using approximation algorithms or automated route-planning processes.

Common Challenges of Traveling Salesman Problem (TSP)

Being a salesman is not easy, as you need to face various unavoidable challenges in your everyday schedules.

These are the major challenges of the Traveling Salesman Problem (TSP): you must create the shortest-distance route out of hundreds or thousands of possible permutations and combinations, keep fuel consumption low, fulfill on-time delivery to customers, and stay ready to modify routes when last-minute changes arrive.

What are Some Popular Solutions to the Traveling Salesman Problem?

Below are some of the popular near-optimal approaches for finding the shortest route in this combinatorial optimization problem.

1. Nearest Neighbor (NN)


The Nearest Neighbor Method is probably the most basic TSP heuristic. The method states that the driver starts by visiting the nearest destination or closest city. Once all the cities in the loop are covered, the driver heads back to the starting point.

Solving the TSP with this method requires the user to choose a city at random and then move on to the closest unvisited city, and so on. Once all the cities on the map are covered, you return to the city you started from.
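To make the idea concrete, here is a minimal Python sketch of the nearest-neighbor heuristic (an illustration only, not Upper's algorithm; the 5-stop distance matrix is made up for the example):

```python
# Hypothetical example: symmetric distance matrix for 5 stops (index 0 is the depot).
DIST = [
    [0, 12, 19, 8, 15],
    [12, 0, 7, 11, 9],
    [19, 7, 0, 14, 6],
    [8, 11, 14, 0, 10],
    [15, 9, 6, 10, 0],
]

def nearest_neighbor_tour(dist, start=0):
    """Greedily visit the closest unvisited stop, then return to the start."""
    unvisited = set(range(len(dist))) - {start}
    tour = [start]
    while unvisited:
        current = tour[-1]
        nxt = min(unvisited, key=lambda stop: dist[current][stop])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)  # head back to the starting point
    return tour

def tour_length(dist, tour):
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

tour = nearest_neighbor_tour(DIST)
print(tour, tour_length(DIST, tour))
```

Because the method is greedy, the tour it returns is usually good but not guaranteed to be the shortest one.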

2. The Branch and Bound Algorithm


The Branch & Bound method breaks one problem into several smaller sub-problems, so it solves a series of problems. Each of these sub-problems may have multiple solutions, and the solution chosen for one sub-problem may affect the solutions of subsequent sub-problems.

3. The Brute Force Algorithm


The Brute Force Approach considers every possible permutation of routes and picks the one with the minimum cost. First, calculate the total number of routes. Then draw and list all the possible routes that you get from the calculation. The distance of each route must be calculated, and the shortest route will be the most optimal solution.

Other Optimal Solutions to the Traveling Salesman Problem


Most businesses see a rise in the Traveling Salesman Problem (TSP) due to last mile delivery challenges. Last mile delivery is the process of delivering goods from the warehouse (or a depot) to the customer's preferred location. Within supply chain management, it is last mile deliveries that cost you the most.

At the same time, you may need to absorb a financial loss in order to maintain your position in the market. Suppose last mile delivery costs you $11 but the customer only pays $8; you suffer a loss, because pre-defined market norms favor the customer paying a lower amount.

Real-Life Applications of Traveling Salesman Problem

This hefty last mile delivery cost is the result of a lack of Vehicle Routing Problem (VRP) software. VRP software finds you the most efficient routes so that operational costs do not increase. So, by using the right VRP software, you would not have to bother about the TSP.

Such delivery management software uses an automated process that doesn’t need manual intervention or calculations to pick the best routes. Hence, it is the easiest way to get rid of the traveling Salesman Problem (TSP).

How Do TSP and VRP Together Pile Up Challenges?

The Traveling Salesman Problem is an optimization problem studied in graph theory and the field of operations research. In this optimization problem, the nodes or cities on the graph are all connected by direct edges or routes. The weight of each edge indicates the distance covered on the route between two cities.

The problem is about finding an optimal route that visits each city exactly once and then returns to the starting point.

The TSP is often studied in a generalized version, the Vehicle Routing Problem (VRP), one of the most broadly studied problems in mathematical optimization. VRP deals with finding or creating a set of routes that reduces time, fuel, and delivery costs.

Is there any real world solution to TSP and VRP?

Many solutions for the TSP and VRP are academic, which means they are not so practical in everyday life. The reason is that many of them aim for a perfectly optimal answer and take far too long to compute for day-to-day operations. So, if businesses really want to get rid of these problems, they need a TSP solver integrated with route optimization software.

The right TSP solver will help you overcome such modern challenges. It offers built-in route planning and optimization in such a way that your tradesman doesn't get stranded while delivering parcels. It is also equipped with an efficient algorithm that provides near-optimal solutions to the TSP. As a result, the delivery manager can create a route plan hassle-free in a few minutes.

Can a Route Planner Resolve the Traveling Salesman Problem (TSP)?

In the general case, the Traveling Salesman Problem (TSP) involves finding the shortest possible optimized route that includes a set of stops and returns to the starting point. The number of possible routes increases exponentially as the number of locations increases, and finding the best solution becomes computationally difficult even for moderately sized problems.

But Upper Route Planner, a route optimization software, is built differently. Upper has all the solutions you need when talking about the TSP.

For example, if you are in charge of planning delivery routes with more than 500 stops in them, all you need to do is import an Excel or CSV file with multiple addresses into Upper, review, allot delivery drivers, optimize, and dispatch with a single click. This delivery route planning solution saves you hours of time spent on planning delivery routes and optimizing them.

Also, once the delivery is completed, Upper lets you collect proof of delivery. This is how the Upper Route Planner is a simple solution to the Traveling Salesman Problem.


TSP stands for the Traveling Salesman Problem, while VRP stands for the Vehicle Routing Problem. In the delivery industry, both are widely known by their abbreviations.

Yes, you can prevent TSP by using the right route planner. The online route planner helps you get the optimized path so that your delivery agents don’t have to deal with such challenges. In addition, they don’t struggle with multiple routes. Instead, they can progress on the shortest route.

Newer methods have made it possible to find round trips that are provably close to the optimal one. The best-known example is the Christofides algorithm, a popular algorithm in theoretical computer science. It builds a tour from a minimum spanning tree plus a minimum-weight perfect matching, and the round trip it produces, while still not perfectly efficient, is guaranteed to be at most 1.5 times longer than the optimal tour.

The Vehicle Routing Problem (VRP) reduces transportation costs as well as drivers' expenses. It helps you serve more customers with fewer vehicles and drivers, and it makes travel times far more predictable.

Create Optimized Routes Using Upper and Bid Goodbye to Traveling Salesman Problem

As a business owner, if you are dealing with the TSP and want to get rid of it, we recommend using a TSP solver like Upper Route Planner. This online route planner is capable of picking out the most efficient routes no matter how big your TSP is. It has a built-in, sophisticated algorithm that gets you an optimized path in a matter of seconds.

Therefore, you won't fall prey to such real-world problems and can perform deliveries in minimum time. Upper's delivery route planner offers a dedicated driver app that makes sure your tradesman doesn't get wrongfooted and quickly wraps up pending deliveries. Don't just take our word for it; book a demo with Upper and disperse the TSP once and for all.

Rakesh Patel

Rakesh Patel is the founder and CEO of Upper Route Planner. A subject matter expert in building simple solutions for day-to-day problems, Rakesh has been involved in technology for 30+ years. Looking to help delivery businesses eliminate on-field delivery challenges, Rakesh started Upper Route Planner with the ultimate goal of simplistic operations in mind.




Solving Business Challenges

Helping businesses grow, the traveling salesman problem solution for delivery and service businesses.

May 2, 2022


Calculating the most efficient route between a number of cities is a major challenge. There are hundreds of possible options which change based on hundreds of different variables.

For instance, in just 15 cities there are over 87 billion possible round trips. 

This is the traveling salesman problem.  

And it’s an incredibly costly one for any delivery, service, or trucking business.

To solve the traveling salesman problem, you need robust algorithms and some serious computational power.

If you don’t want to invest money in an internal team of expert mathematicians and engineers, you need a third-party solution.

In this article, we’ll explore what makes the traveling salesman problem (TSP) so hard and how business owners can use smart software solutions to solve it, reduce mileage, and improve fleet efficiency.


What Is the Traveling Salesman Problem?

The traveling salesman problem (TSP) is a problem that asks, with a list of stops and the distances between each of them, what is the shortest path/possible route that visits each location and returns to the origin?


For decades the TSP has been a challenge for many businesses that rely on route planning — for example, field service, shipping, and delivery companies.

With just a starting point and a few stops, planning a route can seem complicated enough.

But once you start planning routes throughout the country, it reaches an almost unthinkable level of complexity. With endless permutations of potential routes between different stops (typically called nodes), identifying the shortest and most efficient route is a huge challenge.

This is what a route would look like if planned to every city in the US with over 500 inhabitants:


Just imagine the number of possible routes you could take when a single possible solution is this complex.

Why the Traveling Salesman Problem Is Still Hard to Solve in 2021

Mathematician Karl Menger discovered the TSP in 1930, over 90 years ago.

Since then, there have been many suggested solutions and algorithms. But they often struggle with the sheer scale, since the number of possible sales routes increases exponentially with each new stop.

And the truth is, real route planning in 2021 for real businesses is more complex than just a list of stops — making things even more complicated.

In the real world, it’s not as easy as simply finding the shortest route.

It’s not just about destinations and distance: there are many other factors in play

It’s not as easy as just taking a list of addresses and creating a route. 

A field sales rep, delivery driver, or technician also has to consider a lot of other factors, like time windows, vehicle capacities, and more.

Delivery time windows and planned sales meetings

What if your delivery drivers or salesperson have planned delivery times or sales meetings? Then it’s not enough to just consider the distance, you also have to factor in when they should be at each location.

This adds a whole new level of complexity, and is a challenge that most route planners simply can’t handle.

This particular instance of the problem is also known as the Vehicle Routing Problem with Time Windows (VRPTW). 

Required vehicle capacities or technician qualifications

For deliveries, each package that you plan for can have specific requirements in terms of shipping and handling, from refrigeration to unloading dimensions.

You also have to consider the overall loading capacity of each vehicle.

But even for maintenance and field service businesses, you have to consider the unique qualifications of each technician, and whether they match a job.

This is also known as the Capacitated Vehicle Routing Problem (CVRP) .

Planning efficient routes that include both pickups and deliveries

If your business delivers any sort of reusable packaging or recyclable materials to your customers, chances are that reverse logistics are a huge priority.

To maximize the efficiency of your business, you want to integrate both pickup and delivery into all planned routes.

This adds another dimension and a level of complexity that most route optimization tools can’t handle.

This is also referred to as the Pick-up and Delivery Vehicle Routing Problem (PDVRP).

Considering priority of leads, existing clients, and deliveries

To run a business efficiently and keep VIP customers satisfied, you also need to make sure your staff prioritize things the right way.

If your routes don’t consider priority, every customer will be treated the same. So you risk alienating loyal customers if you suddenly switch to an automated solution that can’t handle that.

Balancing workloads between multiple drivers

Of course, when planning routes for multiple drivers, you also have to consider their workloads.

Drivers are only human, and have a maximum shift length of 14 hours and mandatory breaks.

What good is an “efficient route” if it’s not possible for a driver to handle it within the allotted time?

Good workload balancing tools will help you reduce overtime and cut driver costs, and make sure all drivers are HOS-compliant at all times.

How The Little Posy Co. Plans Routes for 1,000 Destinations in Minutes With OptimoRoute


The Little Posy Co. is an Australian flower delivery business, with hundreds of unique flowers and posies (bouquets) on offer every single day.

Like any flower delivery business or florist, they face huge fluctuations in demand in relation to different holidays.

On Valentine’s Day in 2019, the order volume exploded to over 1,000 orders in a day, the same volume as a regular week.

Normally, this would be an impossible task for Posy’s small team, but OptimoRoute’s optimization algorithm helped streamline the entire process.

“OptimoRoute has changed the impossible to the absolutely possible on Valentine’s Day. It takes the pressure off – I could not fathom how the workflow could function so smoothly without this piece of software.” – Gabby Maynard, The Little Posy Co.


Bulk Import of Orders and Fast Automatic Route Planning

With OptimoRoute, there’s no need to individually add stops or addresses, or plan routes for a single driver at a time.

You can simply import your order list in an Excel spreadsheet or a CSV file.


From there, OptimoRoute plans routes automatically, adjusting for real-world restrictions like delivery times, driver service areas, driver schedules, and more.

“I couldn’t even imagine having to dispatch 1,000 orders without OptimoRoute because it’s literally a few clicks of a mouse and the routes are perfectly mapped.” Gabby Maynard, The Little Posy Co.

Historical Data and Analytics Help Forecast Demand

The historical route data and detailed analytics helped Maynard forecast demand and scale the fleet accordingly.

“We rely heavily on historical data – knowing how many drivers I need to recruit for the day, how many orders each driver can take, how long those deliveries would take, and how many drivers were dispatched.” Gabby Maynard, The Little Posy Co.

Final Thoughts

As a business owner, the last thing you want to do is waste unnecessary time sitting in Google Maps trying to plan routes and not getting optimal solutions (Google Maps route optimization isn’t for business).

The traveling salesman problem affects businesses because planning routes manually requires so much work, ballooning the man hours and total costs of your logistics.

This can often mean oversized dispatching and scheduling departments, and a fleet that is slow to respond to cancellations and last-minute orders.

Thankfully, you don’t need to have a degree in computer science, create an approximation algorithm, or be a mathematician to come up with the best solution for the TSP.

OptimoRoute delivers a problem-free route planning experience, with smart features that help dispatchers plan routes in minutes. It also makes your fleet more flexible, helping dispatchers adjust to real-time changes.

To see how OptimoRoute can help your company overcome the TSP, start your 30-day free trial today .

FAQs about the traveling salesman problem

How do you solve the traveling salesman problem?

Mathematicians and computer scientists have proposed a number of potential heuristics or approximate solutions to solve the traveling salesman problem. One common solution, called the branch and bound method, involves dividing the problem into a series of smaller subproblems that are easier to solve mathematically than attempting to solve it all as a single problem.

What is an example of the traveling salesman problem?

Besides the obvious (well, a traveling salesman), let’s take the example of a delivery service that is delivering to a number of different locations in a city. The dispatcher has to determine the order at which they stop (outside of the starting location). You might think: just stop at the nearest location to the starting point, and then the nearest to that one, etc. 

But what if the nearest stop to the starting location takes you further away from all the other stops? You have to consider how far the next locations are from the first stop you choose. So, as you can see, the number of variables quickly multiplies, and finding the shortest route becomes hugely complex as you add more and more variables.

Why is the traveling salesman problem considered NP-hard?

The traveling salesman problem is considered an NP-hard problem because there isn’t a simple or straightforward solution, and the difficulty of calculating the optimal route with the shortest running time grows as more destinations are added.


Traveling Salesman Problem (TSP) Implementation

Travelling Salesman Problem (TSP): Given a set of cities and distances between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point.

Note the difference between the Hamiltonian Cycle problem and the TSP. The Hamiltonian cycle problem is to find whether there exists a tour that visits every city exactly once. Here we know that a Hamiltonian tour exists (because the graph is complete), and in fact many such tours exist; the problem is to find a minimum-weight Hamiltonian cycle.

For example, consider the graph shown in the figure on the right side. A TSP tour in the graph is 1-2-4-3-1. The cost of the tour is 10+25+30+15, which is 80. The problem is a famous NP-hard problem; there is no known polynomial-time solution for it.


In this post, the implementation of a simple solution is discussed.

Below is the implementation of the above idea 
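The original code listing is not reproduced on this page. As a stand-in, here is a hedged Python sketch of the same idea; the C++ original relies on next_permutation, and itertools.permutations plays the same role here. The matrix is the 4-city example from the text, so the expected answer is the tour 1-2-4-3-1 with a cost of 80:

```python
from itertools import permutations

def tsp_brute_force(graph, start=0):
    """Try every permutation of the remaining vertices and keep the cheapest tour."""
    n = len(graph)
    others = [v for v in range(n) if v != start]
    best_cost, best_tour = float("inf"), None
    for perm in permutations(others):
        cost = 0
        current = start
        for v in perm:
            cost += graph[current][v]
            current = v
        cost += graph[current][start]  # return to the starting point
        if cost < best_cost:
            best_cost, best_tour = cost, (start, *perm, start)
    return best_cost, best_tour

# Example graph from the article (cities 1-4 written 0-indexed here).
graph = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(tsp_brute_force(graph))  # (80, (0, 1, 3, 2, 0)), i.e. the tour 1-2-4-3-1
```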

Time complexity:  O(n!) where n is the number of vertices in the graph. This is because the algorithm uses the next_permutation function which generates all the possible permutations of the vertex set.  Auxiliary Space: O(n) as we are using a vector to store all the vertices.


Solving The Traveling Salesman Problem For Deliveries


The Traveling Salesman Problem (TSP) is the challenge of finding the shortest, most efficient route for a person to take, given a list of specific destinations. It is a well-known algorithmic problem in the fields of computer science and operations research, with important real-world applications for logistics and delivery businesses.

There are obviously a lot of different routes to choose from, but finding the best one—the one that will require the least distance or cost—is what mathematicians and computer scientists have spent decades trying to solve.

It’s much more than just an academic problem. Finding more efficient routes, or route optimization , increases profitability for delivery businesses, and reduces greenhouse gas emissions because it means less distance traveled.

In computing, the TSP has commanded so much attention because it’s so easy to describe yet so difficult to solve. In fact, its decision version belongs to the class of combinatorial problems known as NP-complete, and the optimization problem itself is NP-hard: there is no known “quick” solution, and the complexity of calculating the best route increases as you add more destinations.


The problem can be solved by analyzing every round-trip route to determine the shortest one. However, as the number of destinations increases, the corresponding number of roundtrips surpasses the capabilities of even the fastest computers. With 10 destinations, there can be more than 300,000 roundtrip permutations and combinations. With 15 destinations, the number of possible routes could exceed 87 billion.
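To see where these figures come from: with a fixed starting point, n destinations can be visited in (n - 1)! different orders, which is what the numbers above count. A quick check in Python:

```python
import math

for n in (10, 15):
    print(f"{n} destinations -> {math.factorial(n - 1):,} possible round trips")
# 10 destinations -> 362,880 possible round trips (over 300,000)
# 15 destinations -> 87,178,291,200 possible round trips (over 87 billion)
```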


The three most popular TSP solutions

Here are some of the most popular solutions to the Travelling Salesman Problem:

1. The brute-force approach

The Brute Force approach, also known as the Naive Approach, calculates and compares all possible permutations of routes or paths to determine the shortest one. To solve the TSP using the Brute-Force approach, you must calculate the total number of routes and then draw and list all of them. Calculate the distance of each route and then choose the shortest one; this is the optimal solution.

2. The branch and bound method

This method breaks a problem to be solved into several sub-problems. It’s a system for solving a series of sub-problems, each of which may have several possible solutions, and where the solution selected for one sub-problem may affect the possible solutions of subsequent sub-problems. To solve the TSP using the Branch and Bound method, you must choose a start node and set an upper bound to a very large value (let’s say infinity). At each step, select the cheapest arc from the current node to an unvisited node and add its distance to the current distance. Keep extending the route while the current distance stays below the bound; if it reaches or exceeds the bound, abandon (prune) that branch. Whenever a complete tour is found, update the bound to its total distance. Repeat this process until all branches have been explored or pruned.
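To make the pruning idea concrete, here is a minimal, illustrative Python sketch of a depth-first branch and bound for the TSP. It is a simplified stand-in, not the exact procedure above and not any particular product's solver; the 4-city distance matrix is just an example:

```python
import math

def tsp_branch_and_bound(dist, start=0):
    """Depth-first branch and bound: extend partial tours and prune any branch
    whose optimistic bound already meets or exceeds the best complete tour found."""
    n = len(dist)
    best_cost, best_tour = math.inf, None
    # Cheapest edge leaving each city, used to build a simple optimistic lower bound.
    min_out = [min(d for j, d in enumerate(row) if j != i) for i, row in enumerate(dist)]

    def search(current, visited, cost, path):
        nonlocal best_cost, best_tour
        if len(path) == n:  # every city visited, close the loop back to the start
            total = cost + dist[current][start]
            if total < best_cost:
                best_cost, best_tour = total, path + [start]
            return
        # Lower bound: cost so far + cheapest way out of every city not yet visited.
        bound = cost + sum(min_out[c] for c in range(n) if c not in visited)
        if bound >= best_cost:
            return  # prune this branch
        for nxt in range(n):
            if nxt not in visited:
                search(nxt, visited | {nxt}, cost + dist[current][nxt], path + [nxt])

    search(start, {start}, 0, [start])
    return best_cost, best_tour

dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(tsp_branch_and_bound(dist))  # (80, [0, 1, 3, 2, 0])
```

The bound never overestimates the true remaining cost, so pruning cannot discard the optimal tour; real solvers use much tighter bounds (for example, from an assignment-problem or LP relaxation).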

3. The nearest neighbor method

This is perhaps the simplest TSP heuristic. The key to this method is to always visit the nearest destination and then go back to the first city when all other cities are visited. To solve the TSP using this method, choose a random city and then look for the closest unvisited city and go there. Once you have visited all cities, you must return to the first city.


Academic TSP Solutions

Academics have spent years trying to find the best solution to the Travelling Salesman Problem. The following solutions were published in recent years:

Real-world TSP applications

Despite the complexity of solving the Travelling Salesman Problem, it still finds applications in all verticals. 

For example, TSP solutions can help the logistics sector improve efficiency in the last mile. Last mile delivery is the final link in a supply chain, when goods move from a transportation hub, like a depot or a warehouse, to the end customer. Last mile delivery is also the leading cost driver in the supply chain. It costs an average of $10.10, but the customer only pays an average of $8.08 because companies absorb some of the cost to stay competitive. So bringing that cost down has a direct effect on business profitability.


Minimizing costs in last mile delivery is essentially a Vehicle Routing Problem (VRP). VRP, a generalized version of the TSP, is one of the most widely studied problems in mathematical optimization. Instead of one best path, it deals with finding the most efficient set of routes or paths. The problem may involve multiple depots, hundreds of delivery locations, and several vehicles. As with the TSP, determining the best solution to the VRP is NP-hard.

Real-life TSP and VRP solvers

While academic solutions to TSP and VRP aim to provide the optimal solution to these NP-hard problems, many of them aren’t practical when solving real world problems, especially when it comes to solving last mile logistical challenges.

That’s because academic solvers strive for perfection and thus take a long time to compute the optimal solutions – hours, days, and sometimes years. If a delivery business needs to plan daily routes, they need a route solution within a matter of minutes. Their business depends on those delivery route planning solutions so they can get their drivers and their goods out the door as soon as possible.

Real-life TSP and VRP solvers use route optimization algorithms that find near-optimal solutions in a fraction of the time, giving delivery businesses the ability to plan routes quickly and efficiently.

If you want to know more about real-life TSP and VRP solvers, check out the resources below 👇

Route Optimization API - TSP Solver

Route Optimization API - VRP Solver


A 'branch and bound' algorithm is presented for solving the traveling salesman problem. The set of all tours (feasible solutions) is broken up into increasingly small subsets by a procedure called branching. For each subset a lower bound on the length of the tours therein is calculated. Eventually, a subset is found that contains a single tour whose length is less than or equal to some lower bound for every tour. The motivation of the branching and the calculation of the lower bounds are based on ideas frequently used in solving assignment problems. Computationally, the algorithm extends the size of problem that can reasonably be solved without using methods special to the particular problem.



Guru99

Travelling Salesman Problem: Python, C++ Algorithm

What is the Travelling Salesman Problem (TSP)?

Travelling Salesman Problem (TSP) is a classic combinatorics problem of theoretical computer science. The problem asks to find the shortest path in a graph with the condition of visiting all the nodes only one time and returning to the origin city.

The problem statement gives a list of cities along with the distances between each city.

Objective: To start from the origin city, visit other cities only once, and return to the original city again. Our target is to find the shortest possible path to complete the round-trip route.

Example of TSP

Here a graph is given where 1, 2, 3, and 4 represent the cities, and the weight associated with every edge represents the distance between those cities.


The goal is to find the shortest possible path for the tour that starts from the origin city, traverses the graph while only visiting the other cities or nodes once, and returns to the origin city.

For the above graph, the optimal route is to follow the minimum cost path: 1-2-4-3-1. And this shortest route would cost 10+25+30+15 = 80.

Different Solutions to Travelling Salesman Problem


The Travelling Salesman Problem (TSP) is classified as an NP-hard problem because it has no known polynomial-time algorithm. The complexity increases exponentially with the number of cities.

There are multiple ways to solve the traveling salesman problem (tsp). Some popular solutions are:

The brute force approach is the naive method for solving traveling salesman problems. In this approach, we first calculate all possible paths and then compare them. The number of paths in a graph consisting of n cities is n! It is computationally very expensive to solve the traveling salesman problem in this brute force approach.

The branch-and-bound method: The problem is broken down into sub-problems in this approach. The solution of those individual sub-problems would provide an optimal solution.

This tutorial will demonstrate a dynamic programming approach, a recursive exact method, to solve the traveling salesman problem.

Dynamic programming is a method for seeking the optimal solution by analyzing all possible routes. It is one of the exact solution methods, and it solves the traveling salesman problem at a relatively higher computational cost than greedy methods, which only provide a near-optimal solution.

The computational complexity of this approach is O(N^2 * 2^N) which is discussed later in this article.

The nearest neighbor method is a heuristic-based greedy approach where we choose the nearest neighbor node. This approach is computationally less expensive than the dynamic approach. But it does not provide the guarantee of an optimal solution. This method is used for near-optimal solutions.

Algorithm for Traveling Salesman Problem

We will use the dynamic programming approach to solve the Travelling Salesman Problem (TSP).

Before starting the algorithm, let’s get acquainted with some terminologies:

Let’s assume S is a subset of the cities {1, 2, 3, …, n}, where 1, 2, 3, …, n are the cities and i, j are two cities. Now cost(i, S, j) is defined as the length of the shortest path that starts at i, visits every node in S exactly once, and ends at j.

For example, cost(1, {2, 3, 4}, 1) denotes the length of the shortest path where the path starts and ends at city 1 and visits cities 2, 3, and 4 exactly once in between.

The dynamic programming recurrence would be:

cost(i, S, j) = min over k ∈ S of [ dist(i, k) + cost(k, S − {k}, j) ]

For the given figure, the adjacency matrix (with the distances used in the calculation below) would be the following:

        1    2    3    4
   1    0   10   15   20
   2   10    0   35   25
   3   15   35    0   30
   4   20   25   30    0

Let’s see how our algorithm works:

Step 1) We are considering a journey that starts at city 1, visits the other cities once, and returns to city 1.

Step 2) S is the subset of cities. According to our algorithm, for all |S| > 1, we will set the distance cost(i, S, 1) = ∞. Here cost(i, S, j) means we are starting at city i, visiting the cities of S once, and now we are at city j. We set this path cost as infinity because we do not know the distance yet. So the values will be the following:

Cost (2, {3, 4}, 1) = ∞; the notation denotes we are starting at city 2, going through cities 3 and 4, and reaching 1. And the path cost is infinity. Similarly:

cost(3, {2, 4}, 1) = ∞

cost(4, {2, 3}, 1) = ∞

Step 3) Now, for all subsets of S, we need to find the following:

cost(i, S, j) = min over k ∈ S of [ dist(i, k) + cost(k, S − {k}, j) ]

That means the minimum cost path for starting at i, going through the subset of cities once, and returning to city j. Considering that the journey starts at city 1, the optimal path cost would be cost(1, {other cities}, 1).

Let’s find out how we could achieve that:

Now S = {1, 2, 3, 4}. There are four elements. Hence the number of subsets will be 2^4 or 16. Those subsets are-

1) |S| = Null:

2) |S| = 1:

{{1}, {2}, {3}, {4}}

3) |S| = 2:

{{1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}, {3, 4}}

4) |S| = 3:

{{1, 2, 3}, {1, 2, 4}, {2, 3, 4}, {1, 3, 4}}

5) |S| = 4:

{{1, 2, 3, 4}}

As we are starting at 1, we could discard the subsets containing city 1.

The algorithm calculation:

1) |S| = Φ:

cost (2, Φ, 1) = dist(2, 1) = 10

cost (3, Φ, 1) = dist(3, 1) = 15

cost (4, Φ, 1) = dist(4, 1) = 20

cost (2, {3}, 1) = dist(2, 3) + cost (3, Φ, 1) = 35+15 = 50

cost (2, {4}, 1) = dist(2, 4) + cost (4, Φ, 1) = 25+20 = 45

cost (3, {2}, 1) = dist(3, 2) + cost (2, Φ, 1) = 35+10 = 45

cost (3, {4}, 1) = dist(3, 4) + cost (4, Φ, 1) = 30+20 = 50

cost (4, {2}, 1) = dist(4, 2) + cost (2, Φ, 1) = 25+10 = 35

cost (4, {3}, 1) = dist(4, 3) + cost (3, Φ, 1) = 30+15 = 45

cost (2, {3, 4}, 1) = min [ dist[2,3]+Cost(3,{4},1) = 35+50 = 85,

dist[2,4]+Cost(4,{3},1) = 25+45 = 70 ] = 70

cost (3, {2, 4}, 1) = min [ dist[3,2]+Cost(2,{4},1) = 35+45 = 80,

dist[3,4]+Cost(4,{2},1) = 30+35 = 65 ] = 65

cost (4, {2, 3}, 1) = min [ dist[4,2]+Cost(2,{3},1) = 25+50 = 75

dist[4,3]+Cost(3,{2},1) = 30+45 = 75 ] = 75

cost (1, {2, 3, 4}, 1) = min [ dist[1,2]+Cost(2,{3,4},1) = 10+70 = 80

dist[1,3]+Cost(3,{2,4},1) = 15+65 = 80

dist[1,4]+Cost(4,{2,3},1) = 20+75 = 95 ] = 80

So the optimal solution would be 1-2-4-3-1


Pseudo-code

Implementation in C/C++

Here’s the implementation in C++:

Implementation in Python:
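The tutorial's original C++ and Python listings are not reproduced on this page. As a stand-in, here is a hedged Python sketch of the Held-Karp dynamic programming approach described above, using a bitmask to represent the subset of visited cities and memoization for the overlapping sub-problems. It uses the same 4-city adjacency matrix, so it should report the optimal cost of 80 (the tour 1-2-4-3-1):

```python
from functools import lru_cache

# Adjacency matrix of the example graph (cities 1..4 are indices 0..3 here).
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
n = len(dist)
FULL = (1 << n) - 1  # bitmask meaning "every city has been visited"

@lru_cache(maxsize=None)
def cost(current, visited):
    """Cheapest way to finish the tour: we are at `current`, `visited` is a
    bitmask of cities already visited, and the tour must end back at city 0."""
    if visited == FULL:
        return dist[current][0]
    best = float("inf")
    for nxt in range(n):
        if not visited & (1 << nxt):
            best = min(best, dist[current][nxt] + cost(nxt, visited | (1 << nxt)))
    return best

print("optimal tour cost:", cost(0, 1))  # start at city 0 with only city 0 visited -> 80
```

There are N * 2^N distinct (current, visited) states and each takes O(N) work, which matches the O(N^2 * 2^N) complexity discussed below.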

Academic Solutions to TSP

Computer scientists have spent years searching for an improved polynomial-time algorithm for the Travelling Salesman Problem, but so far none has been found and the problem remains NP-hard.

However, some solutions published in recent years have reduced the complexity to a certain degree:

Application of Traveling Salesman Problem

Travelling Salesman Problem (TSP) is applied in the real world in both its purest and modified forms. Some of those are:

Complexity Analysis of TSP

So the total time complexity for an optimal solution would be the number of nodes × the number of subproblems × the time to solve each sub-problem. The time complexity can be defined as O(N^2 * 2^N).


Towards Data Science

Pedram Ataee, PhD

Jun 14, 2020


Notes From Industry

How to Solve the Traveling Salesman Problem: A Comparative Analysis of Three Optimization Methods (Dynamic Programming, Simulated Annealing, and 2-opt)

I am sure you already heard about the traveling salesman problem or TSP. There are many applications for this problem and also many solutions with different performances. Here, I want to share my recent experience to solve the traveling salesman problem, especially with the 2-opt method which is one of the easiest, yet most effective, methods to solve this problem.

If you just want to read about the 2-opt, you can jump directly to the end of this article. You can also use the Python package that I developed below to solve a TSP.

In optimization, 2-opt is a simple local search algorithm with a special swapping mechanism that suits well to solve the…

In this article, I want to share my experience in solving a TSP with 120 cities to visit. The problem had to be solved in less than 5 minutes to be used in practice. I aimed to solve this problem with the following methods:

First, let me explain TSP in brief.


Traveling Salesman Problem

The traveling salesman problem is a classic problem in combinatorial optimization. This problem is to find the shortest path that a salesman should take to traverse through a list of cities and return to the origin city. The list of cities and the distance between each pair are provided.

TSP is useful in various applications in real life such as planning or logistics. For example, a concert tour manager who wants to schedule a series of performances for the band must determine the shortest path for the tour to ensure reducing traveling costs and not make the band unnecessarily exhausted.

This is an NP-hard problem. In simple words, it means you can not guarantee to find the shortest path within a reasonable time limit. This is not unique to TSP though. In real-world optimization problems, you frequently encounter problems for which you must find sub-optimal solutions instead of optimal ones.

I. Dynamic Programming

The dynamic programming or DP method guarantees finding the best answer to TSP. However, its time complexity would exponentially increase with the number of cities. The time complexity with the DP method asymptotically equals N² × 2^N where N is the number of cities.

To give you a hint of how this time complexity increases, let me share my experiments. TSP with 10 cities can be solved by a DP method in almost 0.2 seconds using intel core i7. This number increases to almost 13 seconds (~60 times greater) with 15 cities. That is, the time complexity significantly increases even with a small increment in the number of cities.

To optimize the DP method, I could have used the memoization technique. However, with a large number of cities the memoization table grows on the order of N × 2^N entries, which cannot be easily handled in memory.

Suggestion: If you want to solve the traveling salesman problem with a large number of cities, the dynamic programming method is not the best choice. The DP method can guarantee the global optimum, but it needs so much time and computational power that we mostly cannot afford it in real-world problems.

II. Simulated Annealing

Simulated Annealing or SA is a heuristic search algorithm that is inspired by the annealing mechanism in the metallurgy industry. Annealing refers to a controlled cooling mechanism that leads to the desired state of the material. But, how does this map to an optimization problem?

The lesson that optimization experts learned from the annealing mechanism is to enforce more control over the search process contrary to the gradient descent algorithm with fixed rules. The SA method has two major rules explained below.

In the SA method, the search process must be continued until finding a good-enough solution or until reaching the stopping criteria. Plus, this method is sensitive to how its parameters including the search step and moving direction are formulated and tuned. SA method is a heuristic search algorithm and, therefore, it is sensitive to its initial point in the search space.

I implemented the SA method and tested it several times. In the end, I could not obtain a better result compared to the 2-opt method given the time constraints to execute. The 2-opt method was executed very fast.

Suggestion: The outcome of the simulated annealing method is sensitive to its parameters and its stopping criteria. Simulated annealing is a powerful tool, but if you want to work with it, make sure you are aware of its flaws.
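For readers who want to see the mechanics, here is a minimal, illustrative Python sketch of simulated annealing for the TSP: the move is a random swap of two cities, and a geometric cooling schedule slowly reduces the chance of accepting a worse tour. The parameters and the random test instance are made up for illustration and are not the author's actual setup:

```python
import math
import random

def tour_length(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing_tsp(dist, temp=100.0, cooling=0.995, steps=20000, seed=0):
    """Toy simulated annealing: propose a random swap of two cities and accept a
    worse tour with probability exp(-delta / temperature), cooling as we go."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(dist, tour)
    best, best_len = tour[:], cur_len
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        candidate = tour[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        cand_len = tour_length(dist, candidate)
        delta = cand_len - cur_len
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            tour, cur_len = candidate, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        temp *= cooling  # geometric cooling schedule
    return best, best_len

# Tiny random instance just to exercise the function.
rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(20)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
print(round(simulated_annealing_tsp(dist)[1], 3))
```

How well this performs depends heavily on the temperature, cooling rate, and number of steps, which is exactly the parameter sensitivity the article warns about.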

The 2-opt algorithm is a simple local search method with a special swapping mechanism that works as its heuristic. The main idea behind the 2-opt method is to remove path crossings in each neighborhood of cities. The 2-opt method can be implemented easily and executed fast. For example, TSP with 120 cities can be solved in less than 5 seconds on an Intel Core i7 using this method. Here, “solved” means the algorithm converges to a good-enough solution that is sub-optimal. The 2-opt method converges fast since it is deterministic, in contrast to the SA method.

This method, similar to other heuristic search algorithms, does not guarantee finding the global optimum. The 2-opt method can easily get trapped in local optima since it has no mechanism to jump out of them. Knowing all of these flaws, this method still works very well for the TSP because its heuristic is very relevant, and so very effective, for this problem.

pdrm83/py2opt

The 2-opt method, similar to other heuristic search algorithms, is sensitive to its initial point in the search space. That means the final outcome changes with different initial points. I used a simple trick to work around this issue and improve the results of the 2-opt method: I ran the method from 10 different randomized initial points and selected the best result among them. With this approach, I reduced the sensitivity to the starting point to some extent.
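Here is a minimal Python sketch of that idea: a plain 2-opt local search wrapped in a few random restarts. It is only an illustration of the technique, not the author's py2opt package; the random instance at the bottom is made up:

```python
import math
import random

def tour_length(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(dist, tour):
    """Keep reversing the segment between two cities whenever that shortens the
    tour (i.e. removes a path crossing), until no improving swap remains."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % n]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

def two_opt_with_restarts(dist, restarts=10, seed=0):
    """Run 2-opt from several random starting tours and keep the best result,
    which reduces the sensitivity to the initial point described above."""
    rng = random.Random(seed)
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for _ in range(restarts):
        tour = list(range(n))
        rng.shuffle(tour)
        tour = two_opt(dist, tour)
        length = tour_length(dist, tour)
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Tiny random instance to exercise the functions.
rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(30)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
print(round(two_opt_with_restarts(dist)[1], 3))
```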

Suggestion: The 2-opt method can be implemented easily and executed fast. Plus, it works much better than expected, especially when you decrease its sensitivity to the initial point in the search space. I highly suggest using this method to solve the TSP unless a good-enough (rather than optimal) result is not acceptable for you.

IV. Summary

The video below is a good summary of this article. You will enjoy watching it.

There are many methods to solve TSP. However, the above methods are the most common ones. I highly suggest the 2-opt method since it is implemented simply and executes fast. However, the simulated annealing method is very powerful if you can properly tune it and you do not have a time constraint to find the final result.

The last words: When you want to find a solution for any problem, including the TSP, always think about how a simple technique such as the 2-opt method can work well. Why? Because its heuristic is very well-suited to the problem.

Thanks for Reading!




How to Solve Travelling Salesman Problems

1) Consider city 1 as the starting and ending point. 2) Generate all (n-1)! permutations of cities. 3) Calculate the cost of every permutation and keep track of the minimum-cost permutation. 4) Return the permutation with the minimum cost.

Travelling Salesman Problem using Dynamic Programming

Start from cost(1, {2, 3, 4}, 1); we get the minimum value for d[1, 2]. When s = 3, select the path from 1 to 2 (cost is 10), then go backwards. When s = 2, we …

Traveling salesman problem

This web page is a duplicate of https://optimization.mccormick.northwestern.edu/index.php/Traveling_salesman_problems

Author: Jessica Yu (ChE 345 Spring 2014)

Steward: Dajun Yue, Fengqi You

The traveling salesman problem (TSP) is a widely studied combinatorial optimization problem, which, given a set of cities and a cost to travel from one city to another, seeks to identify the tour that will allow a salesman to visit each city only once, starting and ending in the same city, at the minimum cost. 1


The origins of the traveling salesman problem are obscure; it is mentioned in an 1832 manual for traveling salesmen, which included example tours of 45 German cities but gave no mathematical consideration. 2 W. R. Hamilton and Thomas Kirkman devised mathematical formulations of the problem in the 1800s. 2

It is believed that the general form was first studied by Karl Menger in Vienna and Harvard in the 1930s. 2,3

Hassler Whitney, who was working on his Ph.D. research at Harvard when Menger was a visiting lecturer, is believed to have posed the problem of finding the shortest route between the 48 states of the United States during either his 1931-1932 or 1934 seminar talks. 2 There is also uncertainty surrounding the individual who coined the name “traveling salesman problem” for Whitney’s problem. 2

The problem became increasingly popular in the 1950s and 1960s. Notably, George Dantzig, Delbert R. Fulkerson, and Selmer M. Johnson at the RAND Corporation in Santa Monica, California solved the 48-state problem by formulating it as a linear programming problem. 2 The methods described in the paper set the foundation for future work in combinatorial optimization, especially highlighting the importance of cutting planes. 2,4

In the early 1970s, the concept of P vs. NP problems created buzz in the theoretical computer science community. In 1972, Richard Karp demonstrated that the Hamiltonian cycle problem was NP-complete, implying that the traveling salesman problem was NP-hard. 4

Increasingly sophisticated codes led to rapid increases in the sizes of the traveling salesman problems solved. Dantzig, Fulkerson, and Johnson had solved a 48-city instance of the problem in 1954. 5 Martin Grötschel more than doubled this 23 years later, solving a 120-city instance in 1977. 5 Enoch Crowder and Manfred W. Padberg again more than doubled this in just 3 years, with a 318-city solution. 5

In 1987, rapid improvements were made, culminating in a 2,392-city solution by Padberg and Giovanni Rinaldi. In the following two decades, David L. Applegate, Robert E. Bixby, Vasek Chvátal, and William J. Cook led the cutting edge, solving a 7,397-city instance in 1994 up to the current largest solved problem of 24,978 cities in 2004. 5

Description

Graph theory

The TSP can be modeled on a graph G = (V, E). In the context of the traveling salesman problem, the vertices correspond to cities and the edges correspond to the paths between those cities. When modeled as a complete graph, paths that do not exist between cities can be modeled as edges of very large cost without loss of generality. 6 Minimizing the sum of the costs of a Hamiltonian cycle is equivalent to identifying the shortest path in which each city is visited only once.

Classifications of the TSP

The TSP can be divided into two classes depending on the nature of the cost matrix C. 3,6 In the symmetric TSP (sTSP), c_{ij} = c_{ji} for all i, j, while in the asymmetric TSP (aTSP) there exist i, j with c_{ij} ≠ c_{ji}. An aTSP can be formulated as an sTSP by doubling the number of nodes. 6

Variations of the TSP


Formulation

Suppose there are n cities, let c_{ij} be the cost of traveling from city i to city j, and let y_{ij} be a binary decision variable equal to 1 if the tour travels directly from city i to city j and 0 otherwise. The objective function is then given by

\min \sum_i \sum_j c_{ij} y_{ij}

To ensure that the result is a valid tour, several constraints must be added. 1,3 For example, each city must be departed exactly once:

\sum_j y_{ij} = 1, \quad \forall i = 0, 1, \ldots, n-1

There are several other formulations for the subtour elimination constraint, including circuit packing constraints, MTZ constraints, and network flow constraints.

aTSP ILP Formulation

The integer linear programming formulation for an aTSP is given by

\begin{aligned}
\text{min} \quad & \sum_i \sum_j c_{ij} y_{ij} \\
\text{s.t.} \quad & \sum_j y_{ij} = 1, && i = 0, 1, \ldots, n-1 \\
& \sum_i y_{ij} = 1, && j = 0, 1, \ldots, n-1 \\
& \sum_{i \in S} \sum_{j \in S} y_{ij} \leq |S| - 1, && S \subset V,\ 2 \leq |S| \leq n-2 \\
& y_{ij} \in \{0, 1\}, && \forall i, j \in E
\end{aligned}

sTSP ILP Formulation

The symmetric case is a special case of the asymmetric case and the above formulation is valid. 3, 6 The integer linear programming formulation for an sTSP is given by

\begin{aligned}
\text{min} \quad & \sum_i \sum_j c_{ij} y_{ij} \\
\text{s.t.} \quad & \sum_{i<k} y_{ik} + \sum_{j>k} y_{kj} = 2, && k \in V \\
& \sum_{i \in S} \sum_{j \in S} y_{ij} \leq |S| - 1, && S \subset V,\ 3 \leq |S| \leq n-3 \\
& y_{ij} \in \{0, 1\}, && \forall i, j \in E
\end{aligned}
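As an illustration of how such a formulation can be handed to an off-the-shelf solver, here is a hedged Python sketch using the PuLP modeling library with its bundled CBC solver (an assumption that PuLP is available; any ILP modeling tool would do). It uses the compact Miller-Tucker-Zemlin (MTZ) subtour-elimination constraints mentioned above rather than the exponential family of subtour constraints:

```python
import pulp

# Small example: 4 cities with a symmetric cost matrix.
c = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
n = len(c)
cities = range(n)

prob = pulp.LpProblem("tsp", pulp.LpMinimize)
# y[(i, j)] = 1 if the tour travels directly from city i to city j.
y = pulp.LpVariable.dicts("y", [(i, j) for i in cities for j in cities if i != j], cat="Binary")
# MTZ ordering variables used to rule out subtours.
u = pulp.LpVariable.dicts("u", cities, lowBound=0, upBound=n - 1, cat="Continuous")

prob += pulp.lpSum(c[i][j] * y[(i, j)] for i in cities for j in cities if i != j)
for i in cities:
    prob += pulp.lpSum(y[(i, j)] for j in cities if j != i) == 1  # leave each city once
    prob += pulp.lpSum(y[(j, i)] for j in cities if j != i) == 1  # enter each city once
for i in cities:
    for j in cities:
        if i != j and i != 0 and j != 0:
            # Miller-Tucker-Zemlin subtour elimination
            prob += u[i] - u[j] + n * y[(i, j)] <= n - 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
tour_edges = [(i, j) for i in cities for j in cities if i != j and pulp.value(y[(i, j)]) > 0.5]
print(pulp.value(prob.objective), tour_edges)
```

For larger instances the exponential subtour constraints are usually added lazily (a cutting-plane / branch-and-cut approach) rather than written out in full.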

Exact algorithms

The most direct exact approach is to enumerate all possible tours, which requires O(n!) time and quickly becomes impractical as the number of cities grows.

Branch-and-bound algorithms are commonly used to find solutions for TSPs. 7 The ILP is first relaxed and solved as an LP using the Simplex method, then feasibility is regained by enumeration of the integer variables. 7

Other exact solution methods include the cutting plane method and branch-and-cut. 8

Heuristic algorithms

Given that the TSP is an NP-hard problem, heuristic algorithms are commonly used to give approximate solutions that are good, though not necessarily optimal. These algorithms do not guarantee an optimal solution, but they give near-optimal solutions in reasonable computational time. 3 The Held-Karp lower bound can be calculated and used to judge the performance of a heuristic algorithm. 3

There are two general heuristic classifications 7 : tour construction procedures and tour improvement procedures.

The best methods tend to be composite algorithms that combine these features. 7

Tour construction procedures


Tour improvement procedures


Applications

The importance of the traveling salesman problem is twofold. First is its ubiquity as a platform for the study of general methods that can then be applied to a variety of other discrete optimization problems. 5 Second is its diverse range of applications, in fields including mathematics, computer science, genetics, and engineering. 5,6


Suppose a Northwestern student, who lives in Foster-Walker, has to accomplish the following tasks:

Distances between buildings can be found using Google Maps. Note that there is particularly strong western wind and walking east takes 1.5 times as long.

It is the middle of winter and the student wants to spend the least possible time walking. Determine the path the student should take in order to minimize walking time, starting and ending at Foster-Walker.

Start with the cost matrix (with altered distances taken into account):

Method 1: Complete Enumeration

All possible paths are considered and the path of least cost is the optimal solution. Note that this method is only feasible given the small size of the problem.

From inspection, we see that Path 4 is the shortest. So, the student should walk 2.28 miles in the following order: Foster-Walker → Annenberg → SPAC → Tech → Foster-Walker
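The building names and wind-adjusted distances come from a cost table that is not reproduced here, so the short Python sketch below uses placeholder values; the point is the enumeration logic, which tries every ordering of the three intermediate stops and keeps the cheapest closed walk.

```python
from itertools import permutations

buildings = ["Foster-Walker", "Annenberg", "SPAC", "Tech"]

# Hypothetical walking distances in miles; the real (wind-adjusted) values
# come from the cost matrix referenced above and will differ.
pair_dist = {
    ("Foster-Walker", "Annenberg"): 0.52, ("Foster-Walker", "SPAC"): 0.84,
    ("Foster-Walker", "Tech"): 0.58, ("Annenberg", "SPAC"): 0.40,
    ("Annenberg", "Tech"): 0.44, ("SPAC", "Tech"): 0.63,
}
cost = {}
for (a, b), d in pair_dist.items():          # symmetric placeholder matrix
    cost[a, b] = cost[b, a] = d

best = min(
    (sum(cost[leg] for leg in zip(("Foster-Walker",) + p, p + ("Foster-Walker",))), p)
    for p in permutations(buildings[1:])
)
print(best)   # (total miles, order of the intermediate stops)
```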

Method 2: Nearest neighbor

Starting from Foster-Walker, the next building is simply the closest building that has not yet been visited. With only four nodes, this can be done by inspection:

So, the student would walk 2.54 miles in the following order: Foster-Walker → Annenberg → Tech → SPAC → Foster-Walker
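A generic nearest-neighbor construction looks like the sketch below; it works with any cost dictionary keyed by ordered node pairs (for instance the placeholder cost built above), so the distances involved are assumptions rather than the wiki's actual table.

```python
def nearest_neighbor_tour(start, nodes, cost):
    """Nearest-neighbor construction: repeatedly walk to the closest
    unvisited node, then return to the start."""
    tour = [start]
    unvisited = set(nodes) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda v: cost[tour[-1], v])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                     # close the loop
    return tour, sum(cost[a, b] for a, b in zip(tour, tour[1:]))
```

Calling nearest_neighbor_tour("Foster-Walker", buildings, cost) on the placeholder data returns the constructed loop and its length.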

Method 3: Greedy

With this method, the shortest paths that do not create a subtour are selected until a complete tour is created.

So, the student would walk 2.40 miles in the following order: Foster-Walker → SPAC → Annenberg → Tech → Foster-Walker
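The greedy method can be sketched as follows: sort all edges by length, then accept an edge only if neither endpoint already has two accepted edges and it does not close a subtour prematurely (checked here with a small union-find). This is a generic sketch rather than the wiki's worked numbers.

```python
def greedy_tour(nodes, cost):
    """Greedy edge selection: take the shortest edges that keep every node's
    degree at most 2 and do not close a cycle early. `cost` maps ordered node
    pairs (both directions) to distances."""
    nodes = list(nodes)
    n = len(nodes)
    parent = {v: v for v in nodes}             # union-find for subtour detection

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    degree = {v: 0 for v in nodes}
    chosen = []
    edges = sorted((cost[a, b], a, b)
                   for i, a in enumerate(nodes) for b in nodes[i + 1:])
    for d, a, b in edges:
        if len(chosen) == n:                   # tour is complete
            break
        closes_early = find(a) == find(b) and len(chosen) < n - 1
        if degree[a] < 2 and degree[b] < 2 and not closes_early:
            chosen.append((a, b))
            degree[a] += 1
            degree[b] += 1
            parent[find(a)] = find(b)          # only the final edge may close the cycle
    return chosen, sum(cost[a, b] for a, b in chosen)
```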


As we can see in the figure to the right, the heuristic methods did not give the optimal solution. That is not to say that heuristics can never give the optimal solution, just that it is not guaranteed.

Both the optimal and the nearest neighbor algorithms suggest that Annenberg is the optimal first building to visit. However, the optimal solution then goes to SPAC, while both heuristic methods suggest Tech. This is in part due to the large cost of SPAC → Foster-Walker. The heuristic algorithms cannot take this future cost into account, and therefore fall into that local optimum.

We note that the nearest neighbor and greedy algorithms give solutions that are 11.4% and 5.3%, respectively, above the optimal solution. In the scale of this problem, this corresponds to fractions of a mile. We also note that neither heuristic gave the worst case result, Foster-Walker → SPAC → Tech → Annenberg → Foster-Walker.

Only tour building heuristics were used. Combined with a tour improvement algorithm (such as 2-opt or simulated annealing), we imagine that we may be able to locate solutions that are closer to the optimum.
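For example, a minimal 2-opt pass such as the sketch below takes any constructed tour and keeps reversing a segment whenever doing so shortens the loop (symmetric costs are assumed, as in the walking example).

```python
def two_opt(tour, cost):
    """2-opt improvement: reverse tour[i..j] whenever that shortens the closed
    tour, until no improving reversal remains. `tour` is a closed list (the
    start node repeated at the end); `cost` maps node pairs and is symmetric."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 2):
            for j in range(i + 1, len(best) - 1):
                old = cost[best[i - 1], best[i]] + cost[best[j], best[j + 1]]
                new = cost[best[i - 1], best[j]] + cost[best[i], best[j + 1]]
                if new < old:                  # reversing the segment helps
                    best[i:j + 1] = reversed(best[i:j + 1])
                    improved = True
    return best
```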

The exact algorithm used was complete enumeration, but we note that this is impractical even for 7 nodes (6! or 720 different possibilities). Commonly, the problem would be formulated and solved as an ILP to obtain exact solutions.


Scientists Solve Most Complex Traveling Salesman Problem Ever

See how they cracked the deceptively simple—but brutally tough—problem for 22 "cities."


Scientists in Japan have solved a more complex traveling salesman problem than ever before. The previous standard for instant solving was 16 “cities,” and these scientists have used a new kind of processor to solve 22 cities. They say it would have taken a traditional von Neumann CPU 1,200 years to do the same task.

The traveling salesman problem is centuries old, and it asks a deceptively simple question: For a salesman with a map of, say, 10 cities with given distances apart and roads connecting them, what’s the shortest path for the salesman to travel to every city and return home? The “deceptive” part is that the math to support the problem quickly grows overwhelmingly complex.

Each leg of the journey has a different length. Not all the cities connect to each other to form an ideal full star topology. And segments that connect, say, five cities have to somehow be compared to segments that connect three or eight. It’s just too variable, in both the proverbial and literal senses. A computer processor must calculate one full route at a time while storing the order of the points touched and the length of each leg of the journey.


The problem combines graph theory (the “map” of points with weighted legs between them) with combinatorics (the different ways you can count through a graph, in this case) and optimization (choosing the best, shortest route from a given graph). Because of this robust subject appeal, it’s been a favorite exercise in math and computer science classes of all kinds for many, many years.

The secret to these researchers’ success with the traveling salesman problem is in the special circuit they designed to calculate the routes. In a traditional computer processor, the routes must be arranged and then calculated and all passed through the processor one point at a time—the system is linear.

In most computing applications, we don’t notice that processing happens this way because the calculations are so rapid that they basically appear simultaneous. But the traveling salesman problem clogs the works because the number of calculations required is so huge. Adding more points on the map only increases the complexity. (Honestly, this news itself underlines how complicated the problem is: It’s major news to be able to solve just 22 cities instead of just 16!)

The researchers took a traditional circuit architecture and changed one critical thing: They rearranged special “spin cells” representing all the graph points so that the spin cells could all communicate with each other, not just the immediate surroundings connected by lines on the graph. This new arrangement allows routes to be made using multiple points at a time instead of just one, with fewer bottlenecks in the computational process.

The potential applications of a more powerful salesman solver are myriad. The abstract problem is infamous because it’s so widely studied and difficult, but its roots are still as an abstraction of a real person’s dilemma: How do I do my job the most efficient way? Every day, taxi and Uber drivers must consider the best route to find the most passengers. Delivery drivers must arrange their addresses in an efficient way. And these applications don’t just involve minimizing distance—fresh food or the value of a fare add even more complexity.

Suddenly, the 22nd address on the route is going to receive a much hotter pizza.



Towards Dev

Okan Yenigün

Traveling Salesman Problem: Branch and Bound Solution

A branch and bound solution to the TSP in Python.

The traveling salesman problem is a well-known problem in mathematics and optimization. A salesperson must visit several different cities and return to the starting point.

The problem involves determining the sequence in which the cities should be visited by a salesperson so that the resulting trip covers the shortest possible distance and each city is visited exactly once.

It applies to a lot of other interesting problems with real-world impacts, such as delivery routes, tour planning, circuit board design, etc. For example, a delivery company needs to determine the most efficient route for delivering packages to multiple destinations, with the goal of minimizing the distance traveled to save cost and fuel.

Unfortunately, finding a solution to this problem is not always straightforward. It is considered an NP-Hard problem. It means that finding an exact solution for large instances becomes computationally infeasible.

NP stands for non-deterministic polynomial time, which includes problems that can be solved in polynomial time by a non-deterministic Turing machine. However, the class NP-hard includes problems that may not be in NP, but are at least as hard as the hardest problems in NP. In other words, if an NP-hard problem can be solved in polynomial time, then all problems in NP can also be solved in polynomial time.

The Travelling Salesman Problem becomes increasingly difficult as the size of the problem grows because of the combinatorial explosion of possible solutions. If there are n cities to visit, the number of possible routes is n!; with 50 cities, for example, enumerating every possible route would take an impractically long time.
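A quick check of this growth rate, using nothing beyond Python's standard library, makes the explosion obvious:

```python
import math

# Factorial growth of the route count: even modest instance sizes explode.
for n in (5, 10, 20, 50):
    print(f"{n} cities -> {math.factorial(n):.3e} possible routes")
```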

There are several ways to solve this problem.

In this blog post, we will examine one of the Exact Algorithms, the Branch and Bound method. The other well-known method of Exact Algorithms is called Dynamic Programming which will be mentioned in another post.

Branch and Bound

Although it sounds like a western movie title, Branch and Bound is a problem-solving strategy for optimization problems (stated here as minimization). It uses a state-space tree to organize the search. The algorithm works by recursively partitioning the search space into smaller subsets and pruning the subsets that cannot contain an optimal solution.

State Space Tree

A tree of state space is a representation of every conceivable state of a problem, starting from the root node as an initial state and ending at the leaf node as a final state.

Let’s assume that we have four items and a sequencing problem: {C1, C2, C3, C4}, and a candidate solution is a subset of this, say {C1, C2}.

Let’s draw the state space tree for this problem. We follow a breadth-first search when we are exploring the solutions in the Branch and Bound method.

In the beginning, we consider all options.

After the first level is completed, we start again from node 2. From node 2 we can visit C2, C3, or C4. At this level, node 8 will have no further options, because there is nowhere left to go from that node.

The above way of doing B&B is called FIFO Branch and Bound because we use Queue for this implementation. There is also another solution that uses Stacks, it is called LIFO Branch and Bound.

Ok, what if we have also a cost for each path? Then the solution is called the Least Cost Branch and Bound (LC-BB).

This time, we calculate the cost (whatever the cost function is for the problem) for every path of visit.

Ok, let’s assume that the costs for the first level are like the below.

We should explore the node with the least cost (node 3).

We reached the solution at node 7.

Traveling Salesman Problem Solution

Imagine we have 4 cities and the salesman starts from one.

The state space tree of the problem would be like below. Remember that in Branch and Bound, we generate nodes in level order. While generating nodes, we calculate the costs. And if the cost is greater than some bounds, we kill the node. Thus, we aren’t generating all nodes, we only generate promising nodes.

Ok. Let’s define a new problem. We have 5 cities to visit. And on each path, the cost is displayed. And as a constraint, it is stated that if we choose any two intermediate vertices ( b and c ), only b can precede c .

Now, we need to calculate lower bounds. For each city i, 1 ≤ i ≤ n, we find the sum s_i of the distances from city i to the two nearest cities; then we compute the sum s of these n numbers, divide the result by 2, and round up to the nearest integer.

For example, for city a, the two nearest cities are c and b, with costs 1 and 3, so add 1+3. For b, the two nearest cities are a and c: add 3+6, and so on…

lb = [(1+3) + (3+6) + (1+2) + (3+4) + (2+3)] / 2= 14
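The same bound is easy to express as a small Python helper; since the example graph's cost matrix is not reproduced here, the function below is written for any n x n distance matrix.

```python
import math

def two_nearest_lower_bound(dist):
    """Lower bound described above: for every city, add its two cheapest
    incident edges, halve the grand total, and round up."""
    n = len(dist)
    total = 0
    for i in range(n):
        incident = sorted(dist[i][j] for j in range(n) if j != i)
        total += incident[0] + incident[1]
    return math.ceil(total / 2)
```

Applied to the example graph, the per-city sums 4 + 9 + 3 + 7 + 5 give 28, and 28 / 2 = 14, matching the lb computed above.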

The state space tree of the problem will be as follows. Now let’s take a closer look at the solution.

The root node is a and we have already calculated the lower bound there as 14. From a, we can go to b, d, or e. We cannot go to c because of the constraint stated previously in the problem (c cannot precede b).

lb (a,d) = [(1+5) + (3+6) + (1+2) + (3+5) + (2+3)] /2 = 16

We have to select a to d , so, the first term is (1+5). Remember we select the smallest two adjacent values, but one of them must be 5 since we select a to d .

lb (a,e) = [(1+8) + (3+6)+(1+2)+(3+4)+(8+2)] /2 = 19

After the first level, we choose the lowest cost (node 1) and branch out from there.

lb (a,b,c) = [(1+3)+(3+6)+(1+6)+(3+4)+(2+3)]/2 = 16

lb (a,b,d) = [(1+3)+(3+7)+(1+2)+(3+7)+(2+3)]/2 =16

lb (a,b,e) = [(1+3)+(3+9)+(1+2)+(3+4)+(2+9)]/2=19

We can continue with both 5 and 6. In the last level, we calculate the length of each tour.

length (node 8) = 3 + 6 + 4 + 3 + 8 = 24

length (node 11) = 3 + 7 + 3 + 2 + 1 = 16

When we compare all these lengths, we have an optimal solution at node 11. The path a -> b -> d -> e -> c -> a is the optimal solution.

Python Example

This was a class book explanation. Let’s implement it in Python.

Let’s solve this problem for the biggest 5 cities in Germany: Berlin, Hamburg, Munich, Cologne, and Frankfurt.

First, let’s calculate the distances:
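The original post derives the distances programmatically; to keep the example self-contained, the matrix below uses approximate road distances in kilometers (rounded, illustrative values rather than the post's figures).

```python
cities = ["Berlin", "Hamburg", "Munich", "Cologne", "Frankfurt"]

# Approximate pairwise road distances in km (illustrative only).
distances = [
    #  Berlin  Hamburg  Munich  Cologne  Frankfurt
    [     0,     290,     585,     575,     545],   # Berlin
    [   290,       0,     775,     425,     490],   # Hamburg
    [   585,     775,       0,     575,     390],   # Munich
    [   575,     425,     575,       0,     190],   # Cologne
    [   545,     490,     390,     190,       0],   # Frankfurt
]
```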

Now let’s write a function that finds Branch and Bound solution.
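The post's original implementation is not reproduced here, so the following is a sketch of a depth-first branch and bound solver. For brevity it prunes with a simpler admissible bound than the two-cheapest-edges bound used in the hand calculation above (the cost so far plus the cheapest way each remaining city can still be left); the pruning rule differs, but the result is still the exact optimum.

```python
import math

def tsp_branch_and_bound(dist, start=0):
    """Exact TSP solver by depth-first branch and bound over partial paths."""
    n = len(dist)
    best_cost, best_tour = math.inf, None

    def bound(cost_so_far, current, unvisited):
        # Admissible estimate: every remaining departure costs at least the
        # cheapest edge toward a still-allowed destination.
        estimate = cost_so_far + min(dist[current][j] for j in unvisited)
        for v in unvisited:
            targets = (unvisited - {v}) | {start}
            estimate += min(dist[v][j] for j in targets)
        return estimate

    def search(path, cost_so_far, unvisited):
        nonlocal best_cost, best_tour
        current = path[-1]
        if not unvisited:                          # complete tour: close the cycle
            total = cost_so_far + dist[current][start]
            if total < best_cost:
                best_cost, best_tour = total, path + [start]
            return
        if bound(cost_so_far, current, unvisited) >= best_cost:
            return                                 # prune: cannot beat the incumbent
        # Try the cheapest extensions first to find good incumbents early.
        for nxt in sorted(unvisited, key=lambda j: dist[current][j]):
            search(path + [nxt], cost_so_far + dist[current][nxt],
                   unvisited - {nxt})

    search([start], 0, set(range(n)) - {start})
    return best_cost, best_tour

cost, tour = tsp_branch_and_bound(distances)
print(cost, [cities[i] for i in tour])
# With the illustrative matrix above this finds a 1880 km tour:
# Berlin -> Hamburg -> Cologne -> Frankfurt -> Munich -> Berlin.
```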

Running the function on the distance matrix returns the minimum-cost tour of the five cities together with its total length, and the resulting route can then be drawn on a map.

The branch and bound algorithm is particularly useful when the number of cities to be visited is small to moderate in size, and an exact solution is required. In other cases, heuristics or approximation algorithms may be more appropriate. Overall, the branch and bound algorithm is an important technique for solving the traveling salesman problem, and its implementation in Python can be applied to many different scenarios where optimal route planning is required.





Traveling Salesman Problem

The Traveling Salesman Problem, or TSP for short, is one of the most intensively studied problems in computational mathematics. These pages are devoted to the history, applications, and current research of this challenge of finding the shortest route visiting each member of a collection of locations and returning to your starting point.


The work described here is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Department of Combinatorics and Optimization at the University of Waterloo .


How to Solve the Traveling Salesman Problem

Submitted: September 23rd, 2020 Reviewed: January 20th, 2021 Published: February 9th, 2021

DOI: 10.5772/intechopen.96129


From the Edited Volume

Theory of Complexity

Edited by Ricardo López-Ruiz


The Traveling Salesman Problem (TSP) is believed to be an intractable problem and to have no practically efficient algorithm to solve it. The intrinsic difficulty of the TSP is associated with the combinatorial explosion of potential solutions in the solution space. When a TSP instance is large, the number of possible solutions in the solution space is so large as to forbid an exhaustive search for the optimal solutions. The seemingly “limitless” increase of computational power will not resolve its genuine intractability. Do we need to explore all the possibilities in the solution space to find the optimal solutions? This chapter offers a novel perspective trying to overcome the combinatorial complexity of the TSP. When we design an algorithm to solve an optimization problem, we usually ask the critical question: “How can we find all exact optimal solutions and how do we know that they are optimal in the solution space?” This chapter introduces the Attractor-Based Search System (ABSS) that is specifically designed for the TSP. This chapter explains how the ABSS answers this critical question. The computing complexity of the ABSS is also discussed.


1. Introduction

The TSP is one of the most intensively investigated optimization problems and often treated as the prototypical combinatorial optimization problem that has provided much motivation for design of new search algorithms, development of complexity theory, and analysis of solution space and search space [1, 2]. The TSP is defined on a complete graph Q = (V, E, C), where V = {v_i : i = 1, 2, …, n} is a set of n nodes, E = {e_ij}_{n×n} is an edge matrix containing the set of edges that connect the n nodes, and C = {c_ij}_{n×n} is a cost matrix holding the traveling costs associated with the edges. The solution space S contains the finite set of all feasible tours that a salesman may traverse. A tour s ∈ S is a closed route that visits every node exactly once and returns to the starting node at the end. Like many real-world optimization problems, the TSP is inherently multimodal; that is, it may contain multiple optimal tours in its solution space. We assume that a TSP instance Q contains h ≥ 1 optimal tours in S. We denote f(s) as the objective function, s* = arg min_{s∈S} f(s) as an optimal tour, and S* as the set of h optimal tours. The objective of the TSP is to find all h optimal tours in the solution space, that is, the set S* ⊂ S given by S* = {s ∈ S : f(s) ≤ f(s′) for all s′ ∈ S}.

Under this definition, the salesman wants to know what all best alternative tours are available. Finding all optimal solutions is the essential requirement for an optimization search algorithm. In practice, knowledge of multiple optimal solutions is extremely helpful, providing the decision-maker with multiple options, especially when the sensitivity of the objective function to small changes in its variables may be different at the alternative optimal points. Obviously, this TSP definition is elegantly simple but full of challenge to the optimization researchers and practitioners.

Optimization has been a fundamental tool in all scientific and engineering areas. The goal of optimization is to find the best set of the admissible conditions to achieve our objective in our decision-making process. Therefore, the fundamental requirement for an optimization search algorithm is to find all optimal solutions within a reasonable amount of computing time . The focus of computational complexity theory is to analyze the intrinsic difficulty of an optimization problem and the asymptotic property of a search algorithm to solve it. The complexity theory attempts to address this question: “How efficient is a search algorithm for a particular optimization problem, as the number of variables gets large?”

The TSP is known to be NP-hard [2, 3]. Problems in the NP-hard class are said to be intractable because these problems have no asymptotically efficient algorithm; even the seemingly “limitless” increase of computational power will not resolve their genuine intractability. The intrinsic difficulty of the TSP is that the solution space increases exponentially as the problem size increases, which makes the exhaustive search infeasible. When a TSP instance is large, the number of possible tours in the solution space is so large as to forbid an exhaustive search for the optimal tours. A feasible search algorithm for the TSP is one that comes with a guarantee to find all best tours in time at most proportional to n^k for some power k.

Do we need to explore all the possibilities in the solution space to find the optimal solutions? Imagine that searching for the optimal solution in the solution space is like treasure hunting. We are trying to hunt for a hidden treasure in the whole world. If we are “blindfolded” without any guidance, it is a silly idea to search every single square inch of the extremely large space. We may have to perform a random search process, which is usually not effective. However, if we are able to use various clues to locate the small village where the treasure was placed, we will then directly go to that village and search every corner of the village to find the hidden treasure. The philosophy behind this treasure-hunting case for optimization is that: if we do not know where the optimal point is in the solution space, we can try to identify the small region that contains the optimal point and then search that small region thoroughly to find that optimal point.

Optimization researchers have developed many optimization algorithms to solve the TSP. Deterministic approaches such as exhaustive enumeration and branch-and-bound can find exact optimal solutions, but they are very expensive from the computational point of view. Stochastic optimization algorithms, such as simple heuristic local search, Evolutionary Algorithms, Particle Swarm Optimization and many other metaheuristics, can find hopefully a good solution to the TSP [ 1 , 4 , 5 , 6 , 7 ]. The stochastic search algorithms trade in guaranteed correctness of the optimal solution for a shorter computing time. In practice, most stochastic search algorithms are based on the heuristic local search technique [ 8 ]. Heuristics are functions that help us decide which one of a set of possible solutions is to be selected next [ 9 ]. A local search algorithm iteratively explores the neighborhoods of solutions trying to improve the current solution by a local change. However, the scope of local search is limited by the neighborhood definition. Therefore, heuristic local search algorithms are locally convergent. The final solution may deviate from the optimal solution. Such a final solution is called a locally optimal solution , denoted as s ′ in this chapter. To distinguish from locally optimal solutions, the optimal solution s ∗ in the solution space is usually called the globally optimal solution .

This chapter studies the TSP from a novel perspective and presents a new search algorithm for the TSP. This chapter is organized in the following sections. Section 2 presents the ABSS algorithm for the TSP. Section 3 describes the important data structure that is a critical player in solving the TSP. Section 4 discusses the nature of heuristic local search algorithm and introduces the concept of solution attractor. Section 5 describes the global optimization features of the ABSS. Section 6 discusses the computational complexity of the ABSS. Section 7 concludes this chapter.

2. The attractor-based search system for the TSP

Figure 1 presents the Attractor-Based Search System (ABSS) for the TSP. In this algorithm, Q is a TSP instance with the edge matrix E and cost matrix C . At beginning of search, the matrix E is initialized by assigning zeros to all elements of E . The function InitialTour() constructs an initial tour s i using any tour-construction technique. The function LocalSearch() takes s i as an input, performs local search using any type of local search technique, and returns a locally optimal tour s j . The function UpdateE() updates the matrix E by recording the edge configuration of tour s j into the matrix. K is the number of search trajectories. After the edge configurations of K locally optimal tours are stored in the matrix E , the function ExhaustedSearch() searches E completely using the depth-first tree search technique, which is a simple recursive search method that traverses a directed graph starting from a node and then searches adjacent nodes recursively. Finally, the ABSS outputs a set of all best tours S ∗ found in the edge configuration of E . The search strategy in the ABSS is straightforward: generating K locally optimal tours, storing their edge configurations in the matrix E , and then identifying the best tours by evaluating all tours represented by the edge configuration of E . The ABSS is a simple and efficient computer program that can solve the TSP effectively. This search algorithm shows strong features of effectiveness, flexibility, adaptability, scalability and efficiency. The computational model of the ABSS is inherently parallel, facilitating implementation on concurrent processors. It can be implemented in many different ways: series, parallel, distributed, or hybrid.


The ABSS algorithm for the TSP.
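The chapter's own program is not listed here, but the control flow it describes can be sketched compactly in Python: generate K = 6n random initial tours, run an independent 2-opt trajectory from each, record the edges of the resulting locally optimal tours in the edge matrix E, and then enumerate (depth-first) every tour that can be built from the recorded edges. All function names, parameters, and the simple first-improvement 2-opt are illustrative assumptions, and the sketch is only practical for small symmetric instances.

```python
import random

def tour_cost(tour, cost):
    return sum(cost[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, cost, max_no_improve=200):
    """First-improvement 2-opt: stop after max_no_improve non-improving moves."""
    n = len(tour)
    no_improve = 0
    while no_improve < max_no_improve:
        i, j = sorted(random.sample(range(1, n), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        if tour_cost(candidate, cost) < tour_cost(tour, cost):
            tour, no_improve = candidate, 0
        else:
            no_improve += 1
    return tour

def abss(cost, k_factor=6, seed=0):
    """Two-phase sketch of the ABSS idea described above."""
    random.seed(seed)
    n = len(cost)
    hit = [[False] * n for _ in range(n)]          # the edge matrix E
    for _ in range(k_factor * n):                  # local search phase
        tour = list(range(n))
        random.shuffle(tour)
        tour = two_opt(tour, cost)
        for a, b in zip(tour, tour[1:] + tour[:1]):
            hit[a][b] = hit[b][a] = True           # symmetric instance assumed
    # Exhaustive (depth-first) phase, restricted to the hit edges.
    best_cost, best_tours = float("inf"), []
    def dfs(path, visited, length):
        nonlocal best_cost, best_tours
        u = path[-1]
        if len(path) == n:
            if hit[u][path[0]]:
                total = length + cost[u][path[0]]
                if total < best_cost:
                    best_cost, best_tours = total, [path[:]]
                elif total == best_cost:           # keep ties (both orientations appear)
                    best_tours.append(path[:])
            return
        for v in range(n):
            if not visited[v] and hit[u][v]:
                visited[v] = True
                dfs(path + [v], visited, length + cost[u][v])
                visited[v] = False
    visited = [False] * n
    visited[0] = True
    dfs([0], visited, 0)
    return best_cost, best_tours
```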

Figure 2 uses a 10-node instance as an example to illustrate how the ABSS works. We randomly generate K = 6n = 60 initial tours, whose edge configurations hit all elements of the matrix E (marked in black), as shown in Figure 2(a). It means that these 60 random tours hit all 45 edges that represent all 181440 tours in the solution space. We let each of the search trajectories run 5000 iterations and obtain 60 locally optimal tours. However, due to the small size of the instance, most locally optimal tours have identical edge configurations. Among the 60 locally optimal tours, we find only four distinct locally optimal tours as shown in Figure 2(b). Figure 2(c) shows the union of the edge configurations of the 60 locally optimal tours, in which 18 edges are hit. Then we use the depth-first tree search, as illustrated in Figure 2(d), to identify all five tours in the edge configuration of E, which are listed in Figure 2(e). In fact, one of the five tours is the globally optimal tour. This simple example indicates that (1) local search trajectories converge to a small set of edges, and (2) the union of the edge configurations of K locally optimal tours is not just a countable union of the edge configurations of these tours, but also includes the edge configurations of other locally optimal tours. The ABSS consists of two search phases: a local search phase and an exhaustive search phase. The task of the local search phase is to identify the region where the globally optimal tour is located (i.e. the village hiding the treasure), and the task of the exhaustive search phase is to find the globally optimal tour (i.e. find the hidden treasure). The remaining sections will briefly explain the features of the ABSS.


A simple example of the ABSS algorithm. (a) Union of the edge configurations of 60 random initial tours, (b) four distinct locally optimal tours, (c) union of the edge configurations of the 60 locally optimal tours, (d) the depth-first tree search on the edge configuration of E , and (e) five tours found in E .

In all experiments mentioned in the chapter, we generate symmetric TSP instances with n nodes. The element c i j of the cost matrix C is assigned a random integer independently drawn from a uniform distribution of the range [1, 1000]. The triangle inequality c i j + c j k ≥ c i k is not assumed in the instances. Although this type of problem instances is application-free, it is mathematically significant. A TSP instance without triangle inequality cannot be approximated within any constant factor. A heuristic local search algorithm usually performs much worse for this type of TSP instances, which offers a strikingly challenge to solving them [ 2 , 3 , 6 , 10 , 11 ]. We use the 2-opt local search technique in the local search phase. The 2-opt neighborhood can be characterized as the neighborhood that induces the greatest correlation between function values of neighboring tours, because neighboring tours differ in the minimum possible four edges. Along the same reasoning line, the 2-opt may have the smallest expected number of locally optimal points [ 12 ]. The local search process randomly selects a solution in the neighborhood of the current solution. A move that gives the first improvement is chosen. The great advantage of the first-improvement pivoting rule is to produce randomized locally optimal points. The software program written for the experiments use several different programming languages and are run in PCs with different versions of Window operating system.

3. The edge matrix E

Usually the edge matrix E is not necessary to be included in the TSP definition because the TSP is a complete graph. However, the edge matrix E is an effective data structure that can help us understand the search behavior of a local search system. General local search algorithm may not require much problem-specific knowledge in order to generate good solutions. However, it may be unreasonable to expect a search algorithm to be able to solve any problem without taking into account the data structure and properties of the problem at hand.

To solve a problem, the first step is to create a manipulatable description of the problem itself. For many problems, the choice of data structure for representing a solution plays a critical role in the analysis of search behavior and design of new search algorithm. For the TSP, a tour can be represented by an ordered list of nodes or an edge configuration of a tour in the edge matrix E , as illustrated in Figure 3 . The improvement of the current tour represents the change in the order of the nodes or the edge configuration of a tour.


Two representations of a tour: an ordered list of nodes and an edge configuration of a tour.

Observing the behavior of search trajectories in a local search system can be quite challenging. The edge matrix E is a natural data structure that can help us trace the search trajectories and understand the dynamics of a local search system. An edge e i j is the most basic element of a tour, but contains a piece of information about each of n − 2 ! tours that go through it. Essentially, the nature of local search for the TSP is an edge-selection process: preservation of good edges and rejection of bad edges according to the objective function f s . Each edge has an implicit probability to be selected by a locally optimal tour. A better edge has higher probability to be included in a locally optimal tour. Therefore, the edges in E can be divided into three groups: globally superior edges, G -edges, and bad edges. A globally superior edge is the edge that occurs in many or all locally optimal tours. Although each of these locally optimal tours selects this edge based on its own search trajectory, the edge is globally superior since the edge is selected by these individual tours from different search trajectories going through different search regions. The globally superior edges have higher probability to be selected by a locally optimal tour. A G -edge is the edge that is included in a globally optimal tour. All G -edges are globally superior edges and can be treated as a special subset of the globally superior edges. The edges that are discarded by all search trajectories or selected by only few locally optimal tours are bad edges. A bad edge is impossible to be included in a globally optimal tour. A locally optimal tour usually consists of some G -edges, some globally superior edges and a few bad edges.

The changes of the edge configuration of the matrix E represent the transformations of the search trajectories in a local search system. When all search trajectories reach their end points, the final edge configuration of E represents the final state of the local search system. For a tour s_k, we define an element e_ij of E as

e_ij(s_k) = 1 if the edge e_ij is in the tour s_k, and e_ij(s_k) = 0 otherwise.

Then the hit-frequency value e_ij in the element e_ij is defined as the number of occurrences of the element in the K tours, that is,

e_ij = Σ_{k=1}^{K} e_ij(s_k).

When K search trajectories reach their end points, the value (e_ij + e_ji)/K can represent the probability of the edge e_ij being hit by a locally optimal tour. We can use a graphical technique to observe the convergent behavior of the search trajectories through the matrix E. The hit-frequency value e_ij can be easily converted into a unit of half-tone information in a computer, a value that we interpret as a number H_ij somewhere between 0 and 1. The value 1 corresponds to black, 0 to white, and any value in between to a gray level. Let K be the number of search trajectories; then the half-tone information H_ij on a computer screen can be represented by the hit-frequency e_ij in the element e_ij of E:

H_ij = e_ij / K.
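In code, this bookkeeping is a few lines; the helper below is an illustrative sketch (not the chapter's program) that accumulates H_ij from a list of K tours given as node sequences.

```python
def hit_frequency_matrix(tours, n):
    """H[i][j] is the fraction of the K given tours that traverse edge (i, j)
    in that direction; H[i][j] + H[j][i] is the hit probability of the edge."""
    K = len(tours)
    H = [[0.0] * n for _ in range(n)]
    for tour in tours:
        for a, b in zip(tour, tour[1:] + tour[:1]):
            H[a][b] += 1.0 / K
    return H
```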

Figure 4 illustrates a simple example of visualization showing the convergent behavior of 100 search trajectories for a 50-node instance. Figure 4(a) shows the image of the edge configurations of 100 random initial tours. Since each element of E has equal chance to be hit by these initial tours, almost all elements are hit by these initial tours, and all elements have very low H ij values, ranging from 0.00 to 0.02. When the local search system starts searching, the search trajectories constantly change their edge configurations, and therefore the colors in the elements of E are changed accordingly. As the search continues, more and more elements become white (i.e. they are discarded by all search trajectories) and other elements become darker (i.e. they are selected by more search trajectories). When all search trajectories reach their end points, the colored elements represent the final edge configuration of the search system. Figure 4(b) and (c) show the images of edge configuration of E when all search trajectories completed 2000 iterations and 5000 iterations, respectively. At 5000th iteration, the range of H ij values in the elements of E is from 0.00 to 0.42. The value 0.42 means that 42% of the search trajectories select this element. Majority of the elements of E become white color.


Visualization of the convergent dynamics of local search system. (a) the image of the edge configurations of 100 initial tours, (b) and (c) the images of edge configurations when the search trajectories are at 2000th and 5000th iteration, respectively.

This simple example has great explanatory power about the global dynamics of the local search system for the TSP. As search trajectories continue searching, the number of edges hit by them becomes smaller and smaller, and better edges are hit by more and more search trajectories. This edge-convergence phenomenon means that all search trajectories are moving closer and closer to each other, and their edge configurations become increasingly similar. This phenomenon describes the globally asymptotic behavior of the local search system.

It is easily verified that, under certain conditions, a local search system is able to find the set of the globally optimal tours S* when the number of search trajectories is unlimited.

However, the required search effort may be huge – equivalent to enumerating all tours in the solution space. Now one question for the ABSS is “How many search trajectories in the search system do we need to find all globally optimal tours?” The matrix E consists of n(n − 1) elements (excluding the diagonal elements). When we randomly construct a tour and record its edge configuration in E, n elements of E will be hit by this tour. If we construct more random tours and record their edge configurations in E, more elements will be hit. We define K as the number of randomly constructed initial tours whose edge configurations together will hit all elements of E. We know that all elements of E represent all combinatorial possibilities in the solution space. Therefore, K is the number of search trajectories such that the union of the edge configurations of their initial tours covers the entire solution space. In our experiments, we found that the edge configurations of at most 6n randomly constructed tours can guarantee to hit all elements of E. From the tour perspective, K = 6n random tours represent only a small set of the tours in the solution space. However, from the view of edge configuration, the union of the edge configurations of 6n random tours represents the edge configurations of all tours in the solution space. It reveals an amazing fact: the union of the edge configurations of only 6n random tours contains the edge configurations of all (n − 1)!/2 tours in the solution space. It reflects the combinatorial nature of the TSP: the tours in the solution space are formed by different combinations of the edges. The union of the edge configurations of a set of tours contains information about many other tours because one tour shares its edges with many other tours. One fundamental theory that can help us explain this phenomenon is information theory [13]. According to information theory, each solution point contains some information about its neighboring solutions that can be modeled as a function, called the information function or influence function. The influence function of the i-th solution point in the solution space S is defined as a function Ω_i : S → R, such that Ω_i is a decreasing function of the distance from a solution point to the i-th solution point. The notion of influence function has been extensively used in data mining, data clustering, and pattern recognition.

4. The nature of heuristic local search

Heuristic local search is based on the concept of neighborhood search. A neighborhood of a solution s i , denoted as N s i , is a set of solutions that are in some sense close to s i . For the TSP, a neighborhood of a tour s i is defined as a set of tours that can be reached from s i in one single transition. From edge-configuration perspective, all tours in N s i are very similar because they share significant number of edges with s i . The basic operation of local search is iterative improvement, which starts with an initial tour and searches the neighborhood of the current tour for a better tour. If such a tour is found, it replaces the current tour and the search continues until no improvement can be made. The local search algorithm returns a locally optimal tour.

The behavior of a local search trajectory can be understood as a process of iterating a search function g s . We denote s 0 as an initial point of search and g t s as the t th iteration of the search function g s . A search trajectory s 0 , g s 0 , g 2 s 0 , … , g t s 0 , … converges to a locally optimal point s ′ as its limit, that is,

Therefore, a search trajectory will reach an end point (a locally optimal point) and will stays at this point forever.

In a heuristic local search algorithm, there is a great variety of ways to construct an initial tour, choose candidate moves, and define criteria for accepting candidate moves. Most heuristic local search algorithms are based on randomization. In this sense, a heuristic local search algorithm is a randomized system. There are no two search trajectories that are exactly alike in such a search system. Different search trajectories explore different regions of the solution space and stop at different final points. Therefore, local optimality depends on the initial points, the neighborhood function, randomness in the search process, and time spent on the search process. On the other hand, however, a local search algorithm essentially is deterministic and not random in nature. If we observe the motion of all search trajectories, we will see that the search trajectories go in the same direction, move closer to each other, and eventually converge into a small region in the solution space.

Heuristic local search algorithms are essentially in the domain of dynamical systems. A heuristic local search algorithm is a discrete dynamical system, which has a solution space S (the state space), a set of times T (search iterations), and a search function g : S × T → S that gives the consequents to a solution s ∈ S in the form of s t + 1 = g s t . A search trajectory is the sequence of states of a single search process at successive time-steps, which represents the part of the solution space searched by this search trajectory. The questions about the behavior of a local search system over time are actually the questions about its search trajectories. The most basic question about the search trajectories is “Where do they go in the solution space and what do they do when they get there?”

The attractor theory of dynamical systems is a natural paradigm that can be used to describe the search behavior of a heuristic local search system. The theory of dynamical systems is an extremely broad area of study. A dynamical system is a model for describing the temporal evolution of a system in its state space. The goal of dynamical system analysis is to capture the distinctive properties of certain points or regions in the state space of a given dynamical system. The theory of dynamical systems has discovered that many dynamical systems exhibit attracting behavior in the state space [14, 15, 16, 17, 18, 19, 20, 21, 22]. In such a system, all initial states tend to evolve towards a single final point or a set of points. The term attractor is used to describe this single point or the set of points in the state space. The attractor theory of dynamical systems describes the asymptotic behavior of typical trajectories in the dynamical system. Therefore, the attractor theory provides the theoretical foundation to study the search behavior of a heuristic local search system.

In a local search system for the TSP, no matter where we start a search trajectory in the solution space, all search trajectories will converge to a small region in the solution space for a unimodal TSP instance, or to h small regions for an h-modal TSP. We call this small region a solution attractor of the local search system for a given TSP instance, denoted as A. Therefore, the solution attractor of a local search system for the TSP can be defined as an invariant set A ⊂ S consisting of all locally optimal tours and the globally optimal tours. A single search trajectory typically converges to one of the points in the solution attractor. A search trajectory that is in the solution attractor will remain within the solution attractor forward in time. Because a globally optimal tour s* is a special case of locally optimal tours, it is undoubtedly embodied in the solution attractor, that is, s* ∈ A. For an h-modal TSP instance, a local search system will generate h solution attractors A_1, A_2, …, A_h that attract all search trajectories. Each of the solution attractors has its own set of locally optimal tours, surrounding a globally optimal tour s_i* (i = 1, 2, …, h). A particular search trajectory will converge into one of the h solution attractors. All locally optimal tours will be distributed to these solution attractors. According to dynamical systems theory [20], the closure of an arbitrary union of attractors is still an attractor. Therefore, the solution attractor A of a local search system for an h-modal TSP is a complete collection of the h solution attractors, A = A_1 ∪ A_2 ∪ … ∪ A_h.

The solution attractor A has the following properties:

Convexity, i.e. ∀ s_i ∈ S, g^t(s_i) ∈ A for sufficiently long t;

Centrality, i.e. the globally optimal tour s_i* is located centrally with respect to the other locally optimal tours in A_i (i = 1, 2, …, h);

Invariance, i.e. ∀ s′ ∈ A, g^t(s′) = s′ and g^t(A) = A for all time t;

Irreducibility, i.e. the solution attractor A contains a limited number of invariant locally optimal tours.


Illustration of the concepts of search trajectories and solution attractors in a local search system for a multimodal optimization problem.

A search trajectory in a local search system changes its edge configuration during the search according to the objective function f s and its neighborhood structure. The matrix E can follow the “footprints” of search trajectories to capture the dynamics of the local search system. When all search trajectories reach their end points – the locally optimal tours, the edge configuration of the matrix E will become fixed, which is the edge configuration of the solution attractor A . This fixed edge configuration contains two groups of edges: the edges that are not hit by any of locally optimal tours (non-hit edges) and the edges that are hit by at least one of the locally optimal tours (hit edges). Figure 6 shows the edge grouping in the edge configuration of E when all search trajectories stop at their final points.


The grouping of the edges in E when all search trajectories reach their end points.

In the ABSS, we use K search trajectories in the local search phase. Different sets of K search trajectories will generate different final edge configurations of E. Suppose that we start the local search from a set of K initial points and obtain an edge configuration M_a in E when the local search phase is terminated. Then we start the local search process again from a different set of K initial points and obtain a slightly different edge configuration M_b in E. Which edge configuration truly describes the edge configuration of the real solution attractor? Actually, M_a and M_b are structurally equivalent because they differ only in the set of bad edges; thus M_a precisely replicates the dynamical properties of M_b. The final edge configuration of the constructed solution attractor generated from K search trajectories is not sensitive to the selection of the K search trajectories. This property indicates that a heuristic local search system actually is a deterministic system: although a single search trajectory appears stochastic, all search trajectories from different initial points will always be trapped into the same small region in the solution space and the final edge configuration of E will always converge to the same set of the globally optimal edges.

The convergence of the search trajectories can be measured by the change in the edge configuration of the matrix E . In the local search process, search trajectories collect all available topology information about the quality of the edges from their search experience and record such information in the matrix E . The changes in the edge configuration of E fully reflects the real search evolution of the search system. A state of convergence is achieved once no any more local search trajectory can change the edge configuration of E . For a set of search trajectories to be converging, they must be getting closer and closer to each other, that is, their edge configurations become increasingly similar. As a result, the edge configurations of the search trajectories converge to a small set of edges that contains all globally superior edges and some bad edges. Let W denote total number of edges in E , α t the number of the edges that are hit by all search trajectories at time t , β t the number of the edges that are hit by one or some of the search trajectories, and γ t the number of edges that have no hit at all, then at any time t , we have

For a given TSP instance, W is a constant value, W = n(n − 1)/2 for a symmetric instance or W = n(n − 1) for an asymmetric instance. During the local search process, the values of α(t) and γ(t) will increase and the value of β(t) will decrease. However, these values cannot increase or decrease forever. At a certain point in time, they will become constant values, that is, lim_{t→∞} α(t) = α, lim_{t→∞} β(t) = β, and lim_{t→∞} γ(t) = γ.

Our experiments confirmed this inference about α(t), β(t) and γ(t). Figure 7 illustrates the patterns of the α(t), β(t) and γ(t) curves generated in our experiments. Our experiments also found that, for unimodal TSP instances, the ratio γ(t)/W could approach 0.70 quickly for different sizes of TSP instances. For multimodal TSP instances, this ratio depends on the number of the globally optimal points. However, the set of hit edges is still very small.


The α(t), β(t) and γ(t) curves over the search iterations.

It contains all locally optimal tours;

It contains a complete collection of solution attractors, i.e. A = A 1 ∪ A 2 ∪ … ∪ A h ;

It contains a complete collection of G -edges, i.e. G = G 1 ∪ G 2 ∪ … ∪ G h .

From this analysis, we can see that the edge matrix E is an extremely useful data structure that not only collects the information about search trajectories, but also converts the local search behavior of individual search trajectories into the global search behavior of the search system. The global convergence and deterministic property of the search trajectories make the local search system always converge to the same solution attractors and the edge configurations of the search trajectories always converge to the same set of globally superior edges. The matrix E shows us clearly where the search trajectories go and where all locally optimal points are located. We found the village! However, it is still difficult to identify all G-edges among the globally superior edges. The ABSS uses the exhaustive search phase to find all tours in the solution attractor. Since the local search phase has significantly reduced the size of the search space for the exhaustive search phase, the complete search in the solution attractor becomes feasible.

5. Global optimization feature of the ABSS

The search system should be globally convergent.

The search system should be deterministic and have a rigorous guarantee for finding all globally optimal solutions without excessive computational burden.

The optimality criterion in the system must be based on information on the global behavior of the search system.

The ABSS combines beautifully two crucial aspects in search: exploration and exploitation. In the local search phase, K search trajectories explore the full solution space to identify the globally superior edges, which form the edge configuration of the solution attractor. These K search trajectories are independently and invidually executed, and therefore they create and maintain diversity from beginning to the end. The local search phase is a randomized process due to randomization in the local search function g s . In this sense, the K search trajectories actually perform the Monte Carlo simulation to sample locally optimal tours. The essential idea of Monte Carlo method is using randomness to solve problems that might be deterministic in principle [ 26 ]. In the ABSS, K search trajectories start a sample of initial points from a uniform distribution over the solution space S , and, through the randomized local search process, generate a sample of locally optimal points uniformly distributed in the solution attractor A . The edge configuration of E is actually constructed through this Monte Carlo sampling process.

Each of the K search trajectories passes through many neighborhoods on its way to the final point. For any tour s i , the size of N s i is greater than n 2 ! [ 12 ]. Let N s i ′ denote the neighborhood of the final point s i ′ of the i th search trajectory and Ω N s tran i as the union of the neighborhoods of all transition points of the search trajectory, then we can believe that the search space covered by K search trajectories is

That is, the solution attractor A is formed through the entire solution space S . The solution attractor A contains h unique minimal “convex” sets A i i = 1 2 … h . Each A i has a unique best tour s i ∗ surrounded by a set of locally optimal tours. The tour s i ∗ in A i satisfies f s i ∗ < f s for all s ∈ A i and f s 1 ∗ = f s 2 ∗ = … = f s h ∗ .

We see that the matrix E plays a critical role to transform local search process of the individual search trajectories into a collective global search process of the system. Each time when a local search trajectory finds a better tour and updates the edge configuraton of E , the conditional distribution on the edges are updated. More values are attached to the globally superior edges, and bad edges are discarded. Let W be the complete set of the edges in E and W A the set of edges in the edge configuration of the solution attractor A such that g W is contained in the interior of W . Then the intersection W A of the nested sequence of sets is

and lim t → ∞ g t W A = W A . As a result, the edge configurations of K search trajectories converge to a small set of edges.

∀ s ∈ A_i, f(s_i*) < f(s)

f(s_1*) = f(s_2*) = … = f(s_h*)

min_{s∈A} f(s) = min_{s∈S} f(s)

Therefore, the global convergence and deterministic property of the search trajectories in the local search phase make the ABSS always find the same set of globally optimal tours. We conducted several experiments to confirm this argument empirically. In our experiments, for a given TSP instance, the ABSS performed the same search process on the instance several times, each time using a different set of K search trajectories. The ABSS output the same set of the best tours in all trials.

Table 1 shows the results of two experiments. One experiment generated n = 1000 instance Q 1000 , the other generated n = 10000 instance Q 10000 . We conducted 10 trials on each of the instances respectively. In each trial, the ABSS used K = 6 n search trajectories. Each search trajectory stopped when no improvement was made during 10 n iterations. The matrix E stored the edge configurations of the K final tours and then was searched completely using the depth-first tree search process. Table 1 lists the number of tours found in the constructed solution attractor A , the cost range of these tours, and the number of the best tours found in the constructed solution attractor. For instance, in trial 1 for Q 1000 , the ABSS found 6475824 tours with the cost range [3241, 4136] in the constructed solution attractor. There was a single best tour in the solution attractor. The ABSS found the same best tour in all 10 trials. For the instance Q 10000 , the ABSS found the same set of four best tours in all 10 trials. These four best tours have the same cost value, but with different edge configurations. If any trial had generated a different set of the best tours, we could immediately make a conclusion that the best tours in the constructed solution attractor may not be the globally optimal tours. From practical perspective, the fact that the same set of the best tours was detected in all trials provides an empirical evidence of the global optimality of these tours. The fact also indicates that the ABSS converges in solution. Convergence in solution means that the search system can identify all optimal solutions repeatedly. Always finding the same set of optimal solutions actually is the fundamental requirement for global optimization systems.

Table 1. Tours in the constructed solution attractor A for $Q_{1000}$ and $Q_{10000}$.
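To connect this experimental description to runnable code, the sketch below mirrors the reported local search phase under stated assumptions: a random 2-opt move is used as a stand-in improvement move (the text does not commit to a particular move here), $K = 6n$ trajectories are run, and each trajectory stops after $10n$ consecutive non-improving iterations. All names are illustrative.

```python
import random

def tour_cost(tour, dist):
    """Total cost of a closed tour under a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def try_2opt_move(tour, dist):
    """Attempt one random 2-opt segment reversal; return the improved
    tour, or None if the sampled move does not reduce the cost."""
    n = len(tour)
    i, j = sorted(random.sample(range(n), 2))
    if j - i < 2:
        return None
    candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
    return candidate if tour_cost(candidate, dist) < tour_cost(tour, dist) else None

def local_search_phase(dist, trajectories=None, patience_factor=10):
    """Run K search trajectories; each stops when no improvement is seen
    for patience_factor * n consecutive iterations (10n in the trials above).
    Returns the K final tours whose edges would be recorded in E."""
    n = len(dist)
    K = trajectories or 6 * n           # K = 6n, as in the experiments
    patience = patience_factor * n
    finals = []
    for _ in range(K):
        tour = list(range(n))
        random.shuffle(tour)
        stall = 0
        while stall < patience:
            improved = try_2opt_move(tour, dist)
            if improved is None:
                stall += 1
            else:
                tour, stall = improved, 0
        finals.append(tour)
    return finals
```

The K final tours would then be written into the edge matrix E (for example, with the EdgeMatrix sketch above) before the exhaustive search phase begins.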

6. Computational complexity of the ABSS

With current search technology, the TSP is an infeasible problem because it is not solvable in a reasonable amount of time, and faster computers will not help. A feasible search algorithm for the TSP is one that comes with a guarantee to find all best tours in time at most proportional to $n^k$ for some fixed power $k$. The ABSS guarantees to find all globally optimal tours for the TSP. The question now is: how efficient is it?

The core idea of the ABSS is that, if we have to use exhaustive search to confirm the globally optimal points, we should first find a way to quickly reduce the effective search space for that exhaustive search. When a local search trajectory finds a better tour, we can say that it finds some better edges (an inclusive view), or we can say that it discards some bad edges (an exclusive view). The ABSS uses the exclusive strategy to conquer the TSP. The local search phase in the ABSS quickly prunes out a large number of edges that cannot possibly be included in any globally optimal tour, thus excluding a large, useless area of the solution space. When the first edge is discarded by all K search trajectories, the $(n-2)!$ tours that go through that edge are removed from the search space for the exhaustive search phase, and each time another edge is removed, a large number of additional tours are removed with it. Although the complexity of finding a truly locally optimal tour is still open, and we do not even know any nontrivial upper bounds on the number of iterations that may be needed to reach local optimality [27, 28], decades of empirical evidence and practical research have found that heuristic local search converges quickly, within low-order polynomial time [1, 8, 27, 29]. In practice, we are rarely able to find a perfect locally optimal tour because we simply do not allow the local search process to run long enough. Usually we let a local search process run a predefined number of iterations, accept whatever tour it generates, and treat it as a locally optimal tour. Therefore, the size of the constructed solution attractor depends not only on the problem structure and the neighborhood function, but also on the amount of search time invested in the local search process. As we increase local search time, we construct a smaller and stronger solution attractor. The local search phase in the ABSS can significantly reduce the search space for the exhaustive search phase by excluding a large number of edges; usually it can remove about 60% of the edges of the matrix E in $O(n^2)$ time.
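To get a feel for the scale of this exclusion effect, the short sketch below evaluates the $(n-2)!$ figure quoted above for a few instance sizes; it is illustrative arithmetic only.

```python
from math import factorial

def tours_removed_per_edge(n: int) -> int:
    """Tours eliminated when one edge is discarded by all trajectories,
    using the (n - 2)! count stated in the text."""
    return factorial(n - 2)

for n in (10, 20, 100):
    removed = tours_removed_per_edge(n)
    print(f"n = {n}: discarding one edge removes about 10^{len(str(removed)) - 1} tours")
```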

Now an essential question naturally arises: what is the relationship between the size of the constructed solution attractor and the size of the problem instance? Unfortunately, there is no theoretical analysis tool available in the literature that can be used to answer this question, so we have to depend on empirical results to lend some insight. We conducted several experiments to observe the relationship between the size of the constructed solution attractor and the TSP instance size. Figures 8-10 show the results of one of our experiments; all other similar experiments reveal the same pattern. In this experiment, we generated 10 unimodal TSP instances with sizes from 1000 to 10000 nodes in 1000-node increments. For each instance, the ABSS generated $K = 6n$ search trajectories. We first let each search trajectory stop when no tour improvement was made during 10000 iterations, regardless of the size of the instance (named "fixed search time"). Then we repeated the same search procedure on these instances, this time making each search trajectory stop when no improvement was made during $10n$ iterations (named "varied search time 1") and $100n$ iterations (named "varied search time 2"), respectively. Figure 8 shows the number of edges that were discarded at the end of the local search phase, Figure 9 shows the number of tours in the constructed solution attractor for each instance, and Figure 10 shows the effective branching factors in the exhaustive search phase.

Figure 8. The number of discarded edges at the end of the local search phase.

Figure 9. Relationship between the size of the constructed solution attractor and the instance size.

Figure 10. The $b^*$ values for different instance sizes $n$ in our experiment.

In Figure 8, we can see that the search trajectories quickly converge to a small set of edges. In the fixed-search-time case, about 60% of the edges were discarded by the search trajectories for the 1000-node instance, but this percentage decreases as the instance size increases; for the 10000-node instance, only about 46% of the edges are discarded. However, if we increase the local search time linearly with the instance size, we can keep the same percentage of discarded edges for all instance sizes. In the varied-search-time-1 case, about 60% of the edges are abandoned for all instance sizes; in the varied-search-time-2 case, this percentage increases to about 68% for all instances. A higher percentage of abandoned edges means that the majority of the branches are removed from the search tree.

Figure 9 shows the number of tours that exist in the constructed solution attractor for these instances. All curves in the chart suggest a linear relationship between the size of the constructed solution attractor and the size of the problem instance, and the varied-search-time curves have a much flatter slope because longer local search time yields a smaller constructed solution attractor. Figures 8 and 9 indicate that the search trajectories in the local search phase can effectively and efficiently reduce the search space for the exhaustive search, and that the size of the solution attractor increases linearly as the size of the problem instance increases. Therefore, the local search phase in the ABSS is an efficient, asymptotic search process that produces an extremely small search space for the subsequent exhaustive search.

Completely searching the constructed solution attractor is delegated to the exhaustive search phase. This phase may still need to examine tens or hundreds of millions of tours, but that is nothing a computer processor cannot handle, as opposed to the huge number of total possibilities in the solution space. The exhaustive search phase can find the exact globally optimal tours for the problem instance after a limited number of search steps.
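As a concrete (and deliberately tiny) illustration of this phase, the following sketch enumerates every closed tour that uses only the edges surviving in E, via depth-first search. The adjacency dictionary and the 4-city example are hypothetical; a real edge configuration would of course be far larger.

```python
def enumerate_tours(allowed):
    """Depth-first enumeration of all closed tours that use only allowed edges.

    `allowed` maps each node to the set of nodes it may be followed by
    (the surviving edge configuration of E). The search starts and ends
    at node 0, mirroring the tree search described in the text.
    """
    n = len(allowed)
    tours = []

    def dfs(path, visited):
        node = path[-1]
        if len(path) == n:
            if 0 in allowed[node]:          # close the tour back to node 0
                tours.append(path + [0])
            return
        for nxt in allowed[node]:
            if nxt not in visited:
                visited.add(nxt)
                dfs(path + [nxt], visited)
                visited.remove(nxt)

    dfs([0], {0})
    return tours

# Example: a tiny 4-city edge configuration (hypothetical numbers)
allowed = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(enumerate_tours(allowed))   # e.g. [[0, 1, 2, 3, 0], [0, 3, 2, 1, 0]]
```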

The exhaustive search phase can use any enumerative technique. However, the edge configuration of E can be searched easily by a depth-first tree search algorithm. One advantage of depth-first tree search is its low memory requirement, since only the nodes on the current path are stored. When using a tree-search algorithm, we usually use the branching factor, the average branching factor, or the effective branching factor to measure the computing complexity of the algorithm [30, 31, 32, 33]. In the data structure of a search tree, the branching factor is the number of successors generated by a given node; if this value is not uniform, an average branching factor can be calculated. An effective branching factor $b^*$ is the number of successors generated by a typical node for a given tree-search problem. We use the following definition to calculate the effective branching factor $b^*$ for the exhaustive search phase:

$$N = b^* + (b^*)^2 + \cdots + (b^*)^n,$$
where $n$ is the size of the TSP instance, representing the depth of the tree, and $N$ is the total number of nodes generated in the tree from the origin node. In our experiments, the tree-search process always starts from node 1 (the first row of E). $N$ is the total number of nodes that are processed to construct all valid tours and all incomplete (and therefore abandoned) tours in E; it does not count node 1 as the origin node, but it does include node 1 when it appears as the end node of a valid tour. Using Figure 2(d) as an example, the depth-first search process searches the edge configuration of E and generates $N = 58$ nodes; therefore $b^* \approx 1.3080$, that is, $58 \approx 1.3080 + 1.3080^2 + \cdots + 1.3080^{10}$. Figure 10 shows the effective branching factor $b^*$ in our experiment. The low values of $b^*$ indicate that the edge configuration of the solution attractor represents a tree with extremely sparse branches, and the degree of sparseness does not change as the problem size increases, provided we linearly increase the local search time for larger instances. The search time in the exhaustive search phase is probably in $O(n^2)$, since the size of the constructed solution attractor appears to increase linearly with the problem size $n$ and the number of edges in E increases polynomially with the problem size. Our experiments show that the ABSS can significantly reduce the computational complexity of the TSP and solve the TSP efficiently with a global optimality guarantee.
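Given $N$ and the depth $n$, the effective branching factor can be recovered numerically. The following sketch solves $N = b^* + (b^*)^2 + \cdots + (b^*)^n$ by bisection and reproduces the worked example above ($N = 58$, $n = 10$ gives $b^* \approx 1.308$); the function name is illustrative.

```python
def effective_branching_factor(N: float, depth: int, tol: float = 1e-9) -> float:
    """Solve N = b + b^2 + ... + b^depth for b by bisection
    (assumes N >= depth, so the root lies in [1, N])."""
    def total(b: float) -> float:
        return sum(b ** k for k in range(1, depth + 1))

    lo, hi = 1.0, float(N)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < N:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(effective_branching_factor(58, 10), 4))   # approximately 1.308
```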

Therefore, the ABSS is a simple algorithm whose computational difficulty increases polynomially with the size of the TSP. In the ABSS, the objective pursued by the local search phase is to quickly eliminate as much unnecessary search space as possible; it answers the question "In which small region of the solution space is the optimal solution located?" in $O(n^2)$ time. The objective of the exhaustive search phase is to identify the best tour in the remaining search space; it answers the question "Which is the best tour in this small region?" in $O(n^2)$ time. Together, the ABSS can answer the question "Is this tour the best tour in the solution space?" in $O(n^2)$ time. Therefore, the ABSS probably has computing complexity of $O(n^2)$ and a memory requirement of $O(n^2)$. This suggests that the TSP might not be as complex as we might have expected.

7. Conclusion

Advances in computational techniques for determining the global optimum of an optimization problem can have a great impact on many scientific and engineering fields. Although both the TSP and heuristic local search algorithms have a huge literature, a variety of problems remain open. Numerous experts have made huge advances in TSP research, but two fundamental questions remain essentially open: "How can we find the optimal tours in the solution space, and how do we know they are optimal?"

The P-vs-NP problem is about how fast we can search through a huge number of solutions in the solution space [34]. Do we ever need to explore all the possibilities of the problem to find the optimal one? Actually, the P-vs-NP problem asks whether, in general, we can find a method that completely searches only the region where the optimal points are located [34, 35, 36]. Most people believe P ≠ NP because we have made little fundamental progress in the area of exhaustive search. Modern computers can greatly speed up the search, but the extremely large solution space would still require geologic search time to find the exact optimal solution on the fastest machines imaginable. A new point of view is needed to improve our capacity to tackle these difficult problems. This paper describes a new idea: using an efficient local search process to effectively reduce the search space for exhaustive search. The concept of a solution attractor in heuristic local search systems may change the way we think about both local search and exhaustive search. Heuristic local search is an efficient search system, while exhaustive search is an effective search system; the key is how we combine these two systems into one to conquer the fundamental issues of hard optimization problems. In the TSP case, the edge matrix E, a problem-specific data structure, plays the critical role of reducing the search space and transforming local search into global search.

The ABSS is designed for the TSP. However, the concepts and formulation behind the search algorithm can be used for any combinatorial optimization problem requiring the search for a node permutation in a graph.

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


In this article, we will learn about the travelling salesman problem and show that the travelling salesman problem is NP-complete. Submitted by Shivangi Jain, on July 29, 2018.

Travelling Salesman Problem

This problem can be stated as: "Given n cities, a travelling salesman has to visit each city. We have to find the shortest tour so that the travelling salesman can visit each and every city exactly once."

The travelling salesman problem is one of the classic examples of an NP-complete problem.

In the travelling salesman problem, we are given a complete undirected graph G = (V, E) with a non-negative integer cost c(u, v) associated with each edge (u, v) ∈ E, and we must find a tour of G with minimum cost.

Let C(A) denote the total cost of the edges in a subset A ⊆ E:

$$C(A) = \sum_{(u, v) \in A} c(u, v).$$

Practically, it is always cheapest to go directly from a place u to a place w; going by way of any intermediate stop v cannot be less expensive. In other words, cutting out an intermediate stop never increases the cost. This can be formalized by saying that the cost function c satisfies the triangle inequality if, for all vertices u, v, w ∈ V,

$$c(u, w) \le c(u, v) + c(v, w).$$

The triangle inequality is a natural one, and in many applications it is automatically satisfied. In this problem, our tour starts from an initial state and completes after returning to the original state, passing through all intermediate states.
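A quick way to verify the triangle inequality for a given cost matrix is to check all triples directly; the 4-city matrix below is made up for illustration.

```python
def satisfies_triangle_inequality(c) -> bool:
    """Return True if c[u][w] <= c[u][v] + c[v][w] for all vertices u, v, w."""
    n = len(c)
    return all(c[u][w] <= c[u][v] + c[v][w]
               for u in range(n) for v in range(n) for w in range(n))

# Hypothetical symmetric 4-city cost matrix
c = [[0, 2, 3, 4],
     [2, 0, 3, 4],
     [3, 3, 0, 4],
     [4, 4, 4, 0]]
print(satisfies_triangle_inequality(c))   # True
```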

If the graph has n vertices, i.e., |V| = n, then the solution space S is given by S = { (1, π, 1) : π is a permutation of (2, 3, ..., n) }.

Then |S| = (n-1)!

The size of S can be reduced by restricting S so that $(1, i_1, i_2, \ldots, i_{n-1}, 1) \in S$ if and only if $(i_j, i_{j+1}) \in E$ for $0 \le j \le n-1$, where $i_0 = i_n = 1$.
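Because |S| = (n − 1)!, a brute-force solver can simply fix the starting city and try every permutation of the remaining cities. The sketch below (0-indexed cities, a symmetric cost matrix, illustrative names) makes the factorial growth explicit and reuses the small matrix from the previous example.

```python
from itertools import permutations

def brute_force_tsp(c):
    """Fix city 0 and examine every permutation of the remaining cities,
    i.e. exactly (n - 1)! candidate tours."""
    n = len(c)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(c[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

c = [[0, 2, 3, 4],
     [2, 0, 3, 4],
     [3, 3, 0, 4],
     [4, 4, 4, 0]]
print(brute_force_tsp(c))   # ((0, 1, 2, 3, 0), 13)
```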

Figure: the state space tree for this problem, for n = 4 with initial and final state 1.

