The post How to discover sink nodes in a graph appeared first on Dateme Tubotamuno - Semantic Geek.

In a social network, a sink in a friendship graph will be poor at passing important information on to the group. On the other hand, such a node might be great at keeping secrets and avoiding letting things slip. A sink can be either local or global. A local sink, also referred to as a terminal, is a node in a directed graph with no departing edge. A global sink, usually called a universal sink, is a node that can be reached from every other node in a directed graph and itself has no outgoing edges.

**Simple illustration of a sink node**

Sink nodes have no edge emanating from them to other nodes in the graph. It will be helpful to express local and global sinks with two diagrams to drive home the point.

**Local sink:** In a local sink situation, all nodes are connected and lead to the sink. As stated earlier, the sink node exists because no edges emanate from the orange-coloured vertex. In a friendship network, imagine a friend is the source of exciting news of employment. This information is shared from one blue node to the next with an instruction to keep the exciting update within the network. From the illustration below, it is clear that the orange node receives this great news from other friends in the same social network or community, but is unable to share it with any other node, due both to the command to keep the news within the group and to the fact that every friend in the network is already fully acquainted with it. In this case, the network is considered local and the orange node is a sink due to the absence of an outgoing edge.

**Universal Sink**: The illustration below clearly indicates that the shaded node is the sink in this graph: all nodes point towards it and there are no outgoing edges from it. A key differentiator between this structure and that of the local sink is the limited connection between the non-sink nodes. In a local sink graph, by contrast, the nodes are mostly connected to one another as well as pointed towards the designated sink.
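A universal sink can be detected efficiently when the graph is stored as an adjacency matrix. The sketch below is a minimal illustration of the classic candidate-elimination approach; the function name and the example matrix are illustrative assumptions, not part of the original post.

```
# Detect a universal sink in a directed graph given as an adjacency
# matrix: adj[u][v] == 1 means there is an edge from u to v.
def find_universal_sink(adj):
    n = len(adj)
    candidate = 0
    # Elimination pass: if the candidate has an edge to v, the
    # candidate cannot be a sink, so v becomes the new candidate.
    for v in range(1, n):
        if adj[candidate][v] == 1:
            candidate = v
    # Verify: a universal sink has no outgoing edges and an incoming
    # edge from every other node.
    for v in range(n):
        if v != candidate:
            if adj[candidate][v] == 1 or adj[v][candidate] == 0:
                return -1
    return candidate

# Every node points at node 3, which has no outgoing edges.
adj = [[0, 1, 0, 1],
       [0, 0, 1, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
print(find_universal_sink(adj))  # 3
```

The elimination pass works because every comparison rules out one node, so only one candidate survives and a single verification pass confirms or rejects it.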

Here is a simple code implementation in Python, inspired by PranchalK:

```
# Returns the number of sink nodes.
def presentSink(a, m, edgeFrom, edgeTo):
    # Array for marking the non-sink nodes.
    mark = [0] * (a + 1)

    # Mark every node with an outgoing edge (a non-sink node).
    for i in range(m):
        mark[edgeFrom[i]] = 1

    # Count the unmarked nodes: these are the sinks.
    count = 0
    for i in range(1, a + 1):
        if not mark[i]:
            count += 1
    return count

# Graph connection
if __name__ == '__main__':
    a = 4  # number of nodes
    m = 2  # number of edges
    edgeFrom = [2, 4]
    edgeTo = [3, 3]
    print(presentSink(a, m, edgeFrom, edgeTo))  # nodes 1 and 3 are sinks: 2
# This code is inspired by PranchalK
```


The post The impact of bridges in a graph appeared first on Dateme Tubotamuno - Semantic Geek.

**Bridge in an undirected connected graph:**

The graph below is undirected and connected, with 9 nodes and 10 edges. One edge is drawn in red and the rest in grey/black. The red edge is the bridge: its removal disconnects node [L] from the rest of the graph. Assume these nodes are train stations and the edges are London tube lines. Rail track maintenance work, a shortage of control room staff or a broken-down train on the track are some of the reasons the line connecting node [H] to node [L] might become disconnected or be temporarily removed from the train network or graph. When this occurs, people living close to station [L] become stranded or short of transport options. This is still a connected graph because the disconnected unit is not a sub-graph of its own but a single node left stranded in the network.

**Bridge in an undirected disconnected graph:**

From the example graph below, one can identify two disconnected subgraphs. There is no bridge in the first graph, which has 12 nodes and 15 edges. The second graph contains two bridges, [Q, P] and [P, R]. If this were a transport network, a disruption to the train line that runs between [Q, P, R] could disconnect two or all three of these nodes. This is an example of a bridge in an undirected disconnected graph.

**Core attributes of bridges in a graph:**

**The removal of a bridge increases the number of connected components of the graph**: This is one of the core properties of a bridge. When an edge that is a bridge is removed, the number of clusters of connected nodes, or components, increases by one. In the example graph below there is only one connected component or cluster of nodes; removing the bridge edge leaves two separate components.

- **A bridge does not belong to a cycle**: A cycle in graph theory is a path in which the only repeated vertices are the first and last ones. The nodes are connected in a closed chain. Chains can be found with the networkx Python library using the **networkx.chain_decomposition()** function.

In practice, a graph with n nodes can contain at most n-1 bridges. If an extra edge is added to such a graph it creates a cycle, and when every edge in a graph is a bridge the graph is referred to as a forest.
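The component-count property above can be checked directly, if naively: remove each edge in turn and see whether the number of connected components grows. This is a minimal sketch with hypothetical function names and a small example graph, not the efficient DFS method shown below.

```
# Naive bridge check: an edge is a bridge exactly when removing it
# increases the number of connected components by one.
def count_components(n, edges):
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = set()
    components = 0
    for start in range(n):
        if start not in seen:
            components += 1
            stack = [start]
            while stack:
                node = stack.pop()
                if node not in seen:
                    seen.add(node)
                    stack.extend(adj[node])
    return components

def find_bridges(n, edges):
    base = count_components(n, edges)
    return [e for e in edges
            if count_components(n, [x for x in edges if x != e]) > base]

# A triangle (0, 1, 2) with a pendant node 3: only (2, 3) is a bridge.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(find_bridges(4, edges))  # [(2, 3)]
```

Note that the triangle edges are not bridges because each belongs to a cycle, matching the property above.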

Here is a code implementation of bridge identification in Java, inspired by Aakash Asija:

```
// Finding bridges in an undirected graph
import java.io.*;
import java.util.*;
import java.util.LinkedList;

// This class expresses an undirected graph using adjacency list
// representation
class Graph
{
    private int V; // No. of vertices

    // Array of lists for adjacency list representation
    private LinkedList<Integer> adj[];
    int time = 0;
    static final int NIL = -1;

    // Class constructor
    Graph(int v)
    {
        V = v;
        adj = new LinkedList[v];
        for (int i = 0; i < v; ++i)
            adj[i] = new LinkedList();
    }

    // Function to add an edge into the graph
    void addEdge(int v, int w)
    {
        adj[v].add(w); // Add w to v's list
        adj[w].add(v); // Add v to w's list
    }

    // A recursive function that finds and prints bridges
    // using DFS traversal
    void bridgeUtil(int u, boolean visited[], int disc[],
                    int low[], int parent[])
    {
        // Mark the current node as visited
        visited[u] = true;

        // Initialise discovery time and low value
        disc[u] = low[u] = ++time;

        // Visit all vertices adjacent to this one
        Iterator<Integer> i = adj[u].iterator();
        while (i.hasNext())
        {
            int v = i.next(); // v is the current adjacent of u

            // If v is not visited yet, make it a child of u
            // in the DFS tree and recur for it
            if (!visited[v])
            {
                parent[v] = u;
                bridgeUtil(v, visited, disc, low, parent);

                // Check if the subtree rooted with v has a
                // connection to one of the ancestors of u
                low[u] = Math.min(low[u], low[v]);

                // If the lowest vertex reachable from the subtree
                // under v is below u in the DFS tree, then u-v is
                // a bridge
                if (low[v] > disc[u])
                    System.out.println(u + " " + v);
            }
            // Update low value of u for parent function calls
            else if (v != parent[u])
                low[u] = Math.min(low[u], disc[v]);
        }
    }

    // A DFS-based function to find all bridges. It uses the
    // recursive function bridgeUtil()
    void bridge()
    {
        // Mark all the vertices as not visited
        boolean visited[] = new boolean[V];
        int disc[] = new int[V];
        int low[] = new int[V];
        int parent[] = new int[V];

        // Initialise the parent and visited arrays
        for (int i = 0; i < V; i++)
        {
            parent[i] = NIL;
            visited[i] = false;
        }

        // Call the recursive helper function to find bridges
        // in the DFS tree rooted with vertex 'i'
        for (int i = 0; i < V; i++)
            if (visited[i] == false)
                bridgeUtil(i, visited, disc, low, parent);
    }

    public static void main(String args[])
    {
        // The creation of the graphs
        System.out.println("Highlighting the bridges in first graph ");
        Graph g1 = new Graph(7);
        g1.addEdge(1, 0);
        g1.addEdge(0, 2);
        g1.addEdge(2, 1);
        g1.addEdge(0, 3);
        g1.addEdge(3, 4);
        g1.addEdge(4, 5);
        g1.addEdge(5, 6);
        g1.bridge();
        System.out.println();

        System.out.println("Introducing the bridges in Second graph");
        Graph g2 = new Graph(6);
        g2.addEdge(0, 1);
        g2.addEdge(1, 2);
        g2.addEdge(2, 3);
        g2.addEdge(3, 4);
        g2.addEdge(4, 5);
        g2.bridge();
        System.out.println();

        System.out.println("Generating the bridges in Third graph ");
        Graph g3 = new Graph(9);
        g3.addEdge(0, 1);
        g3.addEdge(1, 2);
        g3.addEdge(2, 0);
        g3.addEdge(1, 3);
        g3.addEdge(1, 4);
        g3.addEdge(1, 6);
        g3.addEdge(3, 5);
        g3.addEdge(4, 5);
        g3.addEdge(6, 1);
        g3.addEdge(7, 6);
        g3.bridge();
    }
}

// Here are the outputs:
// Highlighting the bridges in first graph
// 5 6
// 4 5
// 3 4
// 0 3
// Introducing the bridges in Second graph
// 4 5
// 3 4
// 2 3
// 1 2
// 0 1
// Generating the bridges in Third graph
// 6 7
// 1 6
```


The post Implementing Bellman-Ford’s algorithm on negative edge weights of a graph appeared first on Dateme Tubotamuno - Semantic Geek.

**Negative weights of a graph:**

In an earlier article, I touched on the different types of weights in a graph. We’ve got flow, capacity, frequency, distance, time, resistance and cost weights. Negative edges and cycles are not common in graphs. Their existence determines the most suitable algorithm to either detect or print the negative cycle. Before looking into the best algorithmic implementation for negative cycles in a graph, it will be helpful to explore some scenarios or examples of negative edges.

Logistics example: Let’s assume you are a delivery driver contracting for an eCommerce company, and the weight W(u, v) of an edge (u, v) is the cost of petrol or diesel from the depot to the customer destination. Nodes x, y, z form a negative cycle, as these are last-minute delivery destinations that the contracting driver has just accepted from a different supplier.

Let’s assume the delivery driver is contracting for Amazon, node [a] is the Amazon depot, and the contractor is expected to drop off parcels to customers at nodes [b] and [c] respectively. An hour before leaving, the delivery driver receives a notification from a retailer he contracts for to deliver an item to two customer destinations. Let’s assume this retailer is known as Flywing. Due to the last-minute nature of this request, Flywing will reimburse the delivery driver for the cost of petrol in addition to a competitive delivery pay. The driver is expected to pick up the additional items from node [x] and deliver them to customers at nodes [y] and [z].

The weight of the above graph is the cost of petrol to the delivery driver. The positive weights express the petrol cost attributed to the delivery driver, while the negative weights are those that are to be reimbursed by the retailer. In simple terms, the delivery driver bears the cost when the weight is positive but the retailer is responsible for the cost when weights are negative. In this example, the cost of petrol will be 45p per mile for the first 100 miles and 25p per mile afterwards.
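The tiered petrol pricing above can be sketched as a small helper that also assigns the sign convention for edge weights. The function names and the `reimbursed` flag are illustrative assumptions; costs are kept in pence to avoid floating-point rounding.

```
# Hypothetical helper for the example above: petrol costs 45p per mile
# for the first 100 miles and 25p per mile afterwards. A positive edge
# weight is a cost borne by the delivery driver, while a negative
# weight is the same cost reimbursed by the retailer.
def petrol_cost_pence(miles):
    return 45 * min(miles, 100) + 25 * max(miles - 100, 0)

def edge_weight(miles, reimbursed):
    cost = petrol_cost_pence(miles)
    return -cost if reimbursed else cost

print(edge_weight(80, reimbursed=False))  # 3600 pence: the driver pays
print(edge_weight(120, reimbursed=True))  # -5000 pence: the retailer reimburses
```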

Negative weights are an interesting mathematical problem that can seamlessly be extended to other applications that are confronted with shortest-path problems. We also aim to present an algorithm that is well suited to the negative-edge-weight shortest path problem. When negative edge weights are introduced to a network, they change its complete dynamics and outlook. Sedgewick reminds us that when negative low-weight edges are present in a graph, the shortest path is likely to consist of more edges than the higher-weight path. This can be clearly seen from our graph above. The shortest path from **a** to **c** will consist of more edges due to the presence of negative weights. In this case, the shortest path from a to c consists of six nodes, [a, x, y, z, b, c], with a shortest-path weight of 8. The shortest-path traversal on a graph with solely positive weights prompts shortcuts, while a network with negative weights results in detours.

Revisiting the earlier example, the delivery driver is avoiding the shortcut [a, b, c] and embracing the detour [a, x, y, z, b, c] in order to drop off the last-minute items for the retailer.

**Bellman-Ford’s algorithm on negative weights and Python implementation**

By way of reminder, a negative cycle is one in which the overall sum of the edge weights around the cycle is negative. There is a negative cycle in our example above, as the nodes from the retailer [x, y, z] form one. The Bellman-Ford algorithm is effective at ascertaining whether a network has a negative cycle and printing it where one exists.

We will run a Python implementation that prints the negative cycle in the graph below. The code is inspired by Pratham76.

```
# Structure to express a weighted
# edge in a graph
class Edge:
    def __init__(self):
        self.src = 0
        self.dest = 0
        self.weight = 0

# Structure to express a directed
# and weighted graph
class Graph:
    def __init__(self):
        # V: number of vertices,
        # E: number of edges
        self.V = 0
        self.E = 0
        # Graph is represented as
        # an array of edges.
        self.edge = []

# Generates a new graph with V vertices
# and E edges
def createGraph(V, E):
    graph = Graph()
    graph.V = V
    graph.E = E
    graph.edge = [Edge() for i in range(graph.E)]
    return graph

# Runs the Bellman-Ford algorithm and
# prints a negative cycle (if present)
def NegCycleBellmanFord(graph, src):
    V = graph.V
    E = graph.E
    dist = [1000000 for i in range(V)]
    parent = [-1 for i in range(V)]
    dist[src] = 0

    # Relax all edges |V| - 1 times.
    for i in range(1, V):
        for j in range(E):
            u = graph.edge[j].src
            v = graph.edge[j].dest
            weight = graph.edge[j].weight
            if (dist[u] != 1000000 and
                    dist[u] + weight < dist[v]):
                dist[v] = dist[u] + weight
                parent[v] = u

    # Check for negative-weight cycles
    C = -1
    for i in range(E):
        u = graph.edge[i].src
        v = graph.edge[i].dest
        weight = graph.edge[i].weight
        if (dist[u] != 1000000 and
                dist[u] + weight < dist[v]):
            # Store one vertex of the
            # negative-weight cycle
            C = v
            break

    if C != -1:
        # Walk back V steps so that C is
        # guaranteed to lie on the cycle
        for i in range(V):
            C = parent[C]

        # Collect the cycle vertices
        cycle = []
        v = C
        while True:
            cycle.append(v)
            if v == C and len(cycle) > 1:
                break
            v = parent[v]

        # Reverse cycle[] and print it
        cycle.reverse()
        for v in cycle:
            print(v, end=" ")
        print()
    else:
        print(-1)

# Driver code
if __name__ == '__main__':
    # Number of vertices and edges in the graph
    V = 5
    E = 5
    graph = createGraph(V, E)

    # Given graph
    graph.edge[0].src = 0
    graph.edge[0].dest = 1
    graph.edge[0].weight = 2
    graph.edge[1].src = 1
    graph.edge[1].dest = 2
    graph.edge[1].weight = 2
    graph.edge[2].src = 2
    graph.edge[2].dest = 3
    graph.edge[2].weight = 4
    graph.edge[3].src = 3
    graph.edge[3].dest = 4
    graph.edge[3].weight = -4
    graph.edge[4].src = 4
    graph.edge[4].dest = 1
    graph.edge[4].weight = -4

    # Function call
    NegCycleBellmanFord(graph, 0)
# This code is contributed by Pratham76
# Output: 1 2 3 4 1
```


The post A simplified analysis of the 0-1 knapsack problem appeared first on Dateme Tubotamuno - Semantic Geek.

**Diving into the Knapsack Problem**

Let’s now take a deeper look into the logic of the knapsack problem. The decision version of the knapsack problem is considered NP-complete. Firstly, we will briefly examine the problem statement: given a set of objects with respective values and weights (Vi, Wi), what is the maximum value that can be attained by selecting a subset of these objects whose total weight is within the knapsack capacity? More simply, using the image below, the maximum capacity of the knapsack is 40kg and we have a variety of items with their respective values and weights. The idea is to pick items whose total weight is at most 40kg while maximising their total value. It is similar to going shopping when there is no carrier bag and all you have is a laptop backpack: the number of items you can purchase is limited to the capacity of your bag.

After selecting the items that fit into your knapsack, you can check your receipt to determine their overall value. This grocery shopping experience is quite similar to the knapsack problem. The data entered in the matrix of the knapsack problem is the actual value (i.e. the price from the image above); the weights of the individual items are used to determine feasibility during the compute stage, and the respective value is entered for the relevant items. We will look at a matrix representation of a knapsack problem scenario.

**Matrix representation of the knapsack problem**

We will represent the matrix of a knapsack problem below, using a different knapsack example with a maximum capacity of 8kg.

Each row of the above matrix represents an object or item with its relevant value and weight. The columns symbolise the sequential capacity of the knapsack up to a maximum of 8kg. The idea is to input the relevant values at each intersection between the columns and the rows. As stated earlier, the weight and the relevant value of an object determine its suitability for the knapsack. It is a combinatorial optimisation problem geared at discovering the optimal choice of objects.
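To make the matrix concrete, here is a small sketch that builds and prints the full table for the 8kg example. The item weights and values here are hypothetical, chosen only to illustrate the row-by-row, column-by-column filling described above.

```
# A sketch of the knapsack matrix: rows are items (plus a zero row),
# columns are capacities 0..W. Hypothetical items: weights [3, 4, 5]
# and values [4, 5, 6] with an 8kg knapsack.
def knapsack_table(W, wt, val):
    n = len(wt)
    K = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if wt[i - 1] <= w:
                # Either take item i-1 or leave it, whichever is better.
                K[i][w] = max(val[i - 1] + K[i - 1][w - wt[i - 1]],
                              K[i - 1][w])
            else:
                K[i][w] = K[i - 1][w]
    return K

table = knapsack_table(8, [3, 4, 5], [4, 5, 6])
for row in table:
    print(row)
# The bottom-right cell holds the optimal value:
print(table[-1][-1])  # 10 (items of weight 3 and 5, values 4 + 6)
```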

**The implementation of the knapsack problem in Python**

Here is a Python implementation of the knapsack problem, inspired by this code.

```
# A Dynamic Programming implementation
# of the 0-1 Knapsack problem.
# Returns the maximum value that can
# be included in a knapsack of capacity W
def knapSack(W, wt, val, n):
    K = [[0 for x in range(W + 1)] for x in range(n + 1)]

    # Build table K[][] in bottom-up manner
    for i in range(n + 1):
        for w in range(W + 1):
            if i == 0 or w == 0:
                K[i][w] = 0
            elif wt[i-1] <= w:
                K[i][w] = max(val[i-1]
                              + K[i-1][w-wt[i-1]],
                              K[i-1][w])
            else:
                K[i][w] = K[i-1][w]
    return K[n][W]

# The driver code
val = [3, 13, 9, 7, 6]
wt = [14, 11, 10, 9, 5]
W = 40
n = len(val)
print(knapSack(W, wt, val, n))
# Output: 35
```


The post Exploring the Traveling Salesman Problem (TSP) appeared first on Dateme Tubotamuno - Semantic Geek.

**Combinatorial Optimisation Problem**

TSP belongs to a large class known as combinatorial optimisation problems. Combinatorial optimisation is aimed at discovering the best object (or city, in a map situation) from a finite set of objects (or list of cities). The choice of route between objects or cities is discrete, or can at least be reduced to a discrete set. The problem becomes NP-hard, as the ultimate goal of most combinatorial optimisation problems is to find an efficient way of allocating resources such as time or money, depending on the scenario.
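One way to see why the finite set of routes still makes TSP hard is to count the candidate tours: with a fixed start city there are (n - 1)! orderings of the remaining cities, halved when a route and its reverse cost the same. The function name below is an illustrative assumption; the counting formula is standard.

```
# The number of distinct routes grows factorially with the number of
# cities: (n - 1)! orderings from a fixed start, or (n - 1)!/2 when a
# route and its reverse have the same cost (a symmetric TSP).
from math import factorial

def tour_count(n, symmetric=True):
    count = factorial(n - 1)
    return count // 2 if symmetric else count

for n in [5, 10, 15]:
    print(n, tour_count(n))
# 5 12
# 10 181440
# 15 43589145600
```

Even fifteen cities already give tens of billions of candidate tours, which is why brute-force enumeration only works for tiny instances.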

**The difficulty of the Traveling Salesman problem in Artificial Intelligence**

Solving TSP is considered computationally challenging even in modern times. It becomes quite challenging when a salesman desires to find the shortest route through several cities before safely returning home. Regardless of the challenges, some algorithms and methods have been modified or designed to solve this problem. The popular Depth First Search (DFS) and Breadth-First Search (BFS) algorithms are two possible ways of tackling TSP.

**Common applications of the Travelling Salesman Problem**

**Minimisation of travel cost**: TSP can be useful in reducing the cost involved in travelling across several service stations. In this example, TSP is utilised to ascertain the most optimal route for the service workers of ABC appliances limited. These service workers are expected to visit customers once a month to work on their air conditioners. The goal is to find the best possible means of visiting all the customer locations, taking the cost and distance between each pair of locations into consideration. This ensures the most efficient route is chosen, considering the head office is both the departure and arrival point.

TSP as an optimisation method is believed to be relevant in applications such as logistics, planning and the manufacturing of microchips.

**Code implementation of TSP**

We will be using the air conditioning servicing example above in implementing TSP in Python below. Let’s assume node 1 is the headquarters of ABC appliances limited and the service workers are scheduled to visit all the customers in locations 2, 3 and 4 before returning to the origin. What is the most cost-effective route to take? We will ascertain this after the code implementation.

**Python implementation of the Travelling Salesman Problem (TSP)**

The code implementation is inspired by this source.

```
from sys import maxsize
from itertools import permutations

V = 4

# Executing the Travelling Salesman Problem (TSP)
def travellingSalesmanProblem(graph, z):
    # Store all vertices excluding the source node
    vertex = []
    for i in range(V):
        if i != z:
            vertex.append(i)

    min_path = maxsize
    next_permutation = permutations(vertex)
    for i in next_permutation:
        # The current path weight (cost)
        current_pathweight = 0

        # Compute the current path weight
        k = z
        for j in i:
            current_pathweight += graph[k][j]
            k = j
        current_pathweight += graph[k][z]

        # Update the minimum
        min_path = min(min_path, current_pathweight)
    return min_path

# The weights of the edges in a matrix
if __name__ == "__main__":
    graph = [[0, 12, 30, 14], [12, 0, 17, 25],
             [14, 25, 0, 11], [30, 17, 11, 0]]
    z = 0
    print(travellingSalesmanProblem(graph, z))
```

The TSP path weight is 62, making it the most cost-efficient route for the after-sales team.


The post Understanding Articulation Points in a graph appeared first on Dateme Tubotamuno - Semantic Geek.

**A simple illustration of articulation points**

The undirected graph below contains seven nodes and two articulation, or critical, points. Node B is very important to the network as it directly connects with five nodes. Removing node B breaks this graph into three disconnected components: (A), (C and D) and (E, F and G). The second articulation point on this graph is node C; removing node C leads to two disconnected components, nodes (A, B, E, F, G) and (D). This clearly shows that nodes B and C are the two articulation points, with B being slightly more critical: its removal leaves three disconnected components, whereas removing vertex C splits the graph into only two.

**Some steps in discovering articulation points**

A standard way of finding the articulation points in a given graph is by removing each node one after the other. The goal is to determine whether the elimination of a specific node leads to the disconnection of the graph. Two common techniques utilised for finding articulation points and checking the connectivity of a graph are the DFS (Depth First Search) and BFS (Breadth First Search) approaches.

Step 1: Remove node v from the graph

Step 2: Determine the connectivity of the graph using BFS or DFS

Step 3: Restore node v to the graph and move on to the next node.
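The three steps above can be sketched naively in Python: remove each node in turn and use a BFS connectivity check on the remaining graph. The function names and the small example graph are illustrative assumptions, not the efficient DFS method shown later.

```
# Naive articulation-point search: remove each node (Step 1), check
# connectivity of the rest with BFS (Step 2), then restore it (Step 3).
from collections import deque

def is_connected(nodes, adj):
    if not nodes:
        return True
    seen = {next(iter(nodes))}
    queue = deque(seen)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in nodes and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == nodes

def articulation_points(adj):
    points = []
    for removed in adj:
        remaining = set(adj) - {removed}      # Step 1: remove node v
        if not is_connected(remaining, adj):  # Step 2: BFS connectivity
            points.append(removed)
        # Step 3: the node is implicitly restored on the next iteration
    return points

# A path A - B - C - D: the inner nodes B and C are articulation points.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(articulation_points(adj))  # ['B', 'C']
```

This removal-and-check approach is quadratic or worse, which is why the DFS-based method below is preferred in practice.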

**DFS and BFS in articulation points**

I’ve written an article on DFS and BFS, but in this section we will briefly look at how these two techniques are relevant to discovering articulation points. DFS uses an edge-based technique to traverse a graph via a LIFO (Last In, First Out) approach: visited nodes are added to a stack, and the most recent vertex is popped to continue the traversal. By contrast, BFS adopts a node-based technique for finding the shortest path. When a vertex is visited, it is marked and added to a queue, which follows a FIFO (First In, First Out) strategy. Usually, nodes at the same horizontal level are all visited before progressing to the next tier. For this task, BFS is often considered slower than DFS in implementation.

**DFS as a better option for Articulation points:**

DFS can be viewed as the better traversal for avoiding the disconnection of a graph. Compared to BFS, DFS pays better recognition to parent nodes, which are sometimes the articulation points in a given graph.

The output of a BFS traversal for the above graph will be: R, A, B, C, D, E. With a queue-based system, the source node R is removed from the node list first, and that results in the disconnection of the graph, as node R is an articulation point.

On the other hand, with DFS the source or origin node remains in the stack due to the LIFO method. The outcome in this case is R, A, E, B, C, D. The network remains connected as the source node R is maintained, unlike in BFS, where the queue-based system implies the early removal of the articulation node R. In the above example, there are two articulation vertices, R and E.

From a business perspective, articulation points could be applicable to a telephone network where a critical connection between other units is disconnected. Or, a group of friends who are usually brought together by the same person in the group. Assuming the link between these friends goes travelling or is out of town, the social connection between these folks will go cold.

```
// A Java program to find articulation points in an undirected graph
import java.io.*;
import java.util.*;
import java.util.LinkedList;

// Class represents an undirected graph using adjacency list
class Graph
{
    private int V; // No. of vertices

    // Array of lists for adjacency list representation
    private LinkedList<Integer> adj[];
    int time = 0;
    static final int NIL = -1;

    // Constructor
    Graph(int v)
    {
        V = v;
        adj = new LinkedList[v];
        for (int i = 0; i < v; ++i)
            adj[i] = new LinkedList();
    }

    // Function to add an edge into the graph
    void addEdge(int v, int w)
    {
        adj[v].add(w); // Add w to v's list
        adj[w].add(v); // Add v to w's list
    }

    // A recursive function that finds articulation points using DFS
    // u --> The vertex to be visited next
    // visited[] --> keeps track of visited vertices
    // disc[] --> Stores discovery times of visited vertices
    // parent[] --> Stores parent vertices in DFS tree
    // ap[] --> Stores articulation points
    void APUtil(int u, boolean visited[], int disc[],
                int low[], int parent[], boolean ap[])
    {
        // Count of children in the DFS tree
        int children = 0;

        // Mark the current node as visited
        visited[u] = true;

        // Initialise discovery time and low value
        disc[u] = low[u] = ++time;

        // Run through all nodes adjacent to this one
        Iterator<Integer> i = adj[u].iterator();
        while (i.hasNext())
        {
            int v = i.next(); // v is the current adjacent of u

            // When v is not visited yet, make it a child of u
            // in the DFS tree and recur for it as appropriate
            if (!visited[v])
            {
                children++;
                parent[v] = u;
                APUtil(v, visited, disc, low, parent, ap);
                low[u] = Math.min(low[u], low[v]);

                // u is an articulation point in the following cases:
                // (1) u is the root of the DFS tree and has more than
                //     one child, or
                // (2) u is not the root and the low value of one of
                //     its children is at least disc[u]
                if (parent[u] == NIL && children > 1)
                    ap[u] = true;
                if (parent[u] != NIL && low[v] >= disc[u])
                    ap[u] = true;
            }
            // Update low value of u for parent function calls.
            else if (v != parent[u])
                low[u] = Math.min(low[u], disc[v]);
        }
    }

    // The function to do DFS traversal. It uses the recursive
    // function APUtil()
    void AP()
    {
        // Mark all the vertices as not visited
        boolean visited[] = new boolean[V];
        int disc[] = new int[V];
        int low[] = new int[V];
        int parent[] = new int[V];
        boolean ap[] = new boolean[V]; // To store articulation points

        // Initialise the parent, visited and ap (articulation point)
        // arrays
        for (int i = 0; i < V; i++)
        {
            parent[i] = NIL;
            visited[i] = false;
            ap[i] = false;
        }

        // Use the recursive helper function to find articulation
        // points in the DFS tree rooted with vertex 'i'
        for (int i = 0; i < V; i++)
            if (visited[i] == false)
                APUtil(i, visited, disc, low, parent, ap);

        // Now ap[] contains articulation points, print them
        for (int i = 0; i < V; i++)
            if (ap[i] == true)
                System.out.print(i + " ");
    }

    // The driver method
    public static void main(String args[])
    {
        // Alphabetical graph labels are replaced by numbers, i.e. A is 0.
        System.out.println("Articulation points in graph ");
        Graph g1 = new Graph(7);
        g1.addEdge(0, 1);
        g1.addEdge(1, 6);
        g1.addEdge(0, 2);
        g1.addEdge(0, 3);
        g1.addEdge(0, 4);
        g1.addEdge(4, 6);
        g1.AP();
        System.out.println();
    }
}

// Output:
// Articulation points in graph
// 0
```


The post A practical understanding of topological sorting and ordering appeared first on Dateme Tubotamuno - Semantic Geek.

There are different algorithms designed to address the shortest-path problem; we’ve covered one such in the Dijkstra algorithm. Topological sorting is an important ordering algorithm, useful in shortest-path computations, that can be applied to projects requiring the completion of a prior task before a new one can be commenced. The algorithm operates on the nodes of a Directed Acyclic Graph (DAG): a directed graph that contains no cycles. The arrangement of nodes it produces is referred to as a topological ordering. It is a linear, not necessarily unique, ordering of the nodes that stipulates that for an edge from x to y, x appears before y in the topological sort order.

The source node is quite important in this algorithm; it is usually one with an in-degree of 0, i.e. no incoming edges. Topological sort also works best when a graph consists of positive weights.

**Topological sorting on a simple directed acyclic graph**

As we’ve explained above, a DAG (Directed Acyclic Graph) contains no cycle or loop. From the graph below, it is quite clear that the edge connections end at vertex A. A topological ordering should naturally commence from node D or E, because neither node has an incoming edge.

There can be two topological sortings of the above graph, each initiated from one of the two source nodes, E and D. Are you wondering why the sort should commence with node D or E? It is because neither node has any incoming edges (an in-degree of 0). With that in mind, in no particular order, the first sort is D, E, C, F, B, A and the second is E, D, C, F, B, A.

**The practical application and theories of topological sorting**

A practical application of topological sorting is on tasks that involve a sequence of jobs. In a nutshell, when dependencies must be completed before a given job, topological sorting may be required. Kahn’s algorithm and the parallel algorithm are two common types of topological sorting algorithms.
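Kahn's algorithm, mentioned above, can be sketched directly from the source-node idea: repeatedly emit a node with in-degree 0 and decrement the in-degrees of its neighbours. This is a minimal sketch with an assumed function name, using the D/E-source DAG from the example; note that several valid orderings exist.

```
# A minimal sketch of Kahn's algorithm: repeatedly remove nodes with an
# in-degree of 0 (no incoming edges), the source-node condition above.
from collections import deque

def kahn_topological_sort(nodes, edges):
    indegree = {v: 0 for v in nodes}
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1
    queue = deque(v for v in nodes if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(nodes):
        raise ValueError("Graph contains a cycle")
    return order

# The DAG from the example: D and E are the in-degree-0 source nodes.
nodes = ["A", "B", "C", "D", "E", "F"]
edges = [("D", "B"), ("D", "A"), ("E", "A"),
         ("E", "C"), ("C", "F"), ("F", "B")]
print(kahn_topological_sort(nodes, edges))
# ['D', 'E', 'A', 'C', 'F', 'B'] (one of several valid orderings)
```

A useful by-product of this approach is cycle detection: if the queue empties before every node is emitted, the graph cannot be a DAG.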

Below is a Python implementation of the Topological sorting algorithm highlighted in the diagram above. The code is inspired by Neelam Yardev from Geek for Geeks.

```
from collections import defaultdict

class Graph:
    def __init__(self, vertices):
        self.graph = defaultdict(list)
        self.V = vertices  # the list of vertex labels

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def topologicalSortUtil(self, v, visited, stack):
        # Mark the current vertex as visited.
        visited[v] = True
        for i in self.graph[v]:
            if visited[i] == False:
                self.topologicalSortUtil(i, visited, stack)
        stack.append(v)

    def topologicalSort(self):
        # Mark all the nodes as not visited
        visited = {v: False for v in self.V}
        stack = []
        for v in self.V:
            if visited[v] == False:
                self.topologicalSortUtil(v, visited, stack)
        print(stack[::-1])  # the reversed stack is the topological order

# Source nodes and edges for the topological sorting
g = Graph(["A", "B", "C", "D", "E", "F"])
g.addEdge("D", "B")
g.addEdge("D", "A")
g.addEdge("E", "A")
g.addEdge("E", "C")
g.addEdge("C", "F")
g.addEdge("F", "B")
print("This is a Topological Sort of the graph")
g.topologicalSort()  # ['E', 'D', 'C', 'F', 'B', 'A']
```

The post A practical understanding of topological sorting and ordering appeared first on Dateme Tubotamuno - Semantic Geek.


**Exploring Breadth First Search or Breadth First Traversal**

BFS is an algorithm designed to search a graph or tree data structure. It travels in a breadthward motion and utilises a queue to identify the next vertex at which to continue the traversal. If a roadblock is encountered or no adjacent unvisited node is found, the tree root or source node is removed from the queue. The traversal of the graph usually begins with a ‘search key’ or initialising node. Imagine a hotel with many floors, and rooms as nodes: a breadth-first traversal algorithm is like cleaning staff who clean the rooms floor by floor. All neighbouring nodes at the current depth (the current floor, in this example) are visited before moving to the vertices, or rooms, on the next floor. No node is expected to be revisited, just as one would not expect hotel staff to clean the same room twice in the same period. Once a room is cleaned it is ticked off on a sheet as visited, while with BFS the neighbouring traversed node is enqueued and marked as visited.

Rules are important to ensure procedures are followed and order is maintained. There are three relevant rules for a BFS algorithm.

**Rule Number One**: From the tree root or ‘search key’, visit an adjacent unvisited vertex. In some cases, if there are several neighbouring nodes, an alphabetic or serial logic can be applied to choose the next node. Once visited, the node is displayed and enqueued.

**Rule Number Two:** In the event a neighbouring or adjacent unvisited node is not available, the tree root or initial node is removed from the queue and a previously visited node plays the role of the search key (tree root or source node).

**Rule Number Three:** The final rule is simply a repetition of the above two rules until all nodes are visited.

Below is a graphical example of how a breadth-first search unfolds:

**Step 1:** The queue is initialised, with node R chosen as the tree root.

Queue: [ ]

**Step 2**: Graph traversal commences with the starting node R. The root node is now marked as visited.

Queue: [ ]

**Step 3:** The neighbouring node A is next in line and is subsequently marked as visited and enqueued.

Queue: [ A ]

**Step 4:** There are a couple of neighbouring nodes of R that can still be enqueued and marked. On this occasion, alphabetically, node B is the next logical neighbouring node to be added to the queue and marked accordingly, and that is what has been implemented in this case.

Queue: [B, A ]

**Step 5**: Node C is next in line and is visited and marked as well.

Queue: [C, B, A ]

**Step 6:** Node D is the next adjacent node and alphabetically ticks the box; it is visited, enqueued and marked.

Queue: [D, C, B, A ]

**Step 7**: All adjacent nodes of R have now been visited, so R's part is done. Node A is dequeued next and, acting as the new search key, leads to the enqueuing and marking of its unvisited neighbour E.

Queue: [ E, D, C, B]
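The walkthrough above can be reproduced with a short queue-based sketch. The adjacency list below is an assumption reconstructed from the steps: R's neighbours are A, B, C and D (taken alphabetically), and E is reached through A.

```
from collections import deque

# Adjacency assumed from the walkthrough above
graph = {"R": ["A", "B", "C", "D"], "A": ["E"],
         "B": [], "C": [], "D": [], "E": []}

def bfs(graph, root):
    visited = {root}
    queue = deque([root])     # the root initialises the queue
    order = []
    while queue:
        node = queue.popleft()            # dequeue the oldest entry
        order.append(node)
        for neighbour in graph[node]:     # enqueue unvisited neighbours
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

print(bfs(graph, "R"))  # ['R', 'A', 'B', 'C', 'D', 'E']
```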

Here is a Python code implementation of BFS, with nodes assigned numerical labels as opposed to the strings in the diagram representation above.

```
from collections import defaultdict

# This class represents a directed graph
# using adjacency list representation
class Graph:
    # Establishing the constructor
    def __init__(self):
        self.graph = defaultdict(list)

    def addEdge(self, n, r):
        self.graph[n].append(r)

    def BFS(self, t):
        # Mark all the vertices as not visited
        visited = [False] * (max(self.graph) + 1)
        queue = []
        queue.append(t)
        visited[t] = True
        while queue:
            # Dequeue a vertex from the queue and print it
            t = queue.pop(0)
            print(t, end=" ")
            # Enqueue all adjacent unvisited vertices
            for i in self.graph[t]:
                if visited[i] == False:
                    queue.append(i)
                    visited[i] = True

# Create the graph given in the above diagram
g = Graph()
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(1, 2)
g.addEdge(1, 0)
g.addEdge(2, 0)
g.addEdge(2, 3)
g.addEdge(3, 3)
print("Executing BFS (initialising from node 0)")
g.BFS(0)

# Output:
# Executing BFS (initialising from node 0)
# 0 1 2 3
```

**Examining Depth First Search or Depth First Traversal**

Depth-first search traversal differs from breadth-first in the direction of the search carried out. With depth-first search, the movement from a node to an adjacent one is depthward in nature. Depth-first search uses a stack to recall which neighbouring unvisited vertex to move to, while breadth-first uses a queue. A stack is the opposite of a queue: in a stack, all insertions and removals are performed at the same end, whereas a queue has two working ends, one used to insert nodes and the other to remove them. The queue is usually said to follow the FIFO rule (first-in, first-out) and the stack the LIFO rule (last-in, first-out).
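The two access disciplines can be demonstrated in a few lines of Python using the standard library's `deque` and a plain list:

```
from collections import deque

# FIFO queue: insert at one end, remove from the other
queue = deque()
for node in ["R", "A", "B"]:
    queue.append(node)       # enqueue at the right
print(queue.popleft())       # "R" — first in, first out

# LIFO stack: insert and remove at the same end
stack = []
for node in ["R", "A", "B"]:
    stack.append(node)       # push on top
print(stack.pop())           # "B" — last in, first out
```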

The graph below expresses how a depth-first traversal works. It does not search based on alphabetical order. The next node in a depthward motion is marked as visited and added to the stack. Node S is the tree root or starting node, and the traversal moves deep by visiting and adding nodes E, H and K to the stack. The edge labels indicate the order of the traversal, with node G becoming the last node visited.

To implement the above traversal, certain rules are pertinent. You could refer to them as the rules of depth-first search.

Below are the rules that should guide the implementation of DFS

**Rule Number One**: The adjacent unvisited node should be visited. It needs to be marked as visited and added to the DFS stack. The traversal highlights a depthward motion.

**Rule Number Two**: When all adjacent vertices from a given node are visited, the most recently added node is deleted from the stack to make way for a new insertion.

**Rule Number Three**: Rules one and two are repeated until all nodes have been visited and added to the stack.

**Sequential implementation of DFS**

**Step 1: **This is a network with an empty stack. Vertex R is the root node and the stack is being initialised.

**Step 2:** Node R is visited and added to the stack. It is now the top node in the stack and the adjacent node from a depthward motion will be executed in the next step.

**Step 3:** Node A is adjacent to R and is visited next in that order. It is added to the stack and becomes the top node.

**Step 4:** Node E is visited and also added to the stack. In a breadth-first search, node B would have been the next vertex visited. In this case, vertex E becomes the top of the stack as the most recently visited node.

**Step 5:** Node B is unvisited and adjacent to E. In this instance, B is visited and added to the stack as the top node.

**Step 6:** There are no unvisited adjacent nodes to B. The vertex B will now be removed from the stack and E assumes the role of the top node to aid a smooth continuation of the traversal process.

**Step 7:** Node C is now visited and added to the stack to become the top vertex.

**Step 8:** There are no unvisited neighbouring nodes of C. Node C is now removed from the stack and the previously visited E becomes the top node, allowing a clear traversal to an unvisited vertex.

**Step 9:** Node D was the only node not visited in the previous steps. It is now visited and added to the stack as the top node.
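The stack mechanics described in the steps above can be sketched with an explicit stack. The adjacency list is an assumption reconstructed from the walkthrough: R leads to A, A leads to E, and E leads to B, C and D.

```
# Adjacency assumed from the step-by-step walkthrough above
graph = {"R": ["A"], "A": ["E"], "E": ["B", "C", "D"],
         "B": [], "C": [], "D": []}

def dfs(graph, root):
    visited = {root}
    stack = [root]            # the root initialises the stack
    order = [root]
    while stack:
        node = stack[-1]      # the top of the stack is the current node
        unvisited = [n for n in graph[node] if n not in visited]
        if unvisited:
            nxt = unvisited[0]
            visited.add(nxt)  # visit the next node depthward
            order.append(nxt)
            stack.append(nxt) # it becomes the new top of the stack
        else:
            stack.pop()       # no unvisited neighbours: remove the top
    return order

print(dfs(graph, "R"))  # ['R', 'A', 'E', 'B', 'C', 'D']
```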

Here is the Python code implementation, using the node labels above to print the DFS order.

```
from collections import defaultdict

class Graph:
    def __init__(self):
        # The default dictionary for the graph
        self.graph = defaultdict(list)

    # Function to add an edge to the graph
    def addEdge(self, u, v):
        self.graph[u].append(v)

    # A recursive DFS helper function
    def DFSUtil(self, v, visited):
        # Mark the current vertex as visited and print it
        visited.add(v)
        print(v, end=' ')
        # Recur for all the vertices adjacent to this vertex
        for neighbour in self.graph[v]:
            if neighbour not in visited:
                self.DFSUtil(neighbour, visited)

    # DFS traversal built on the recursive DFSUtil()
    def DFS(self, v):
        # Create a set to store visited vertices
        visited = set()
        # Call the recursive helper function
        # to print the DFS traversal of the graph
        self.DFSUtil(v, visited)

# Creating a graph and connecting the nodes
g = Graph()
g.addEdge('S', 'E')
g.addEdge('E', 'H')
g.addEdge('H', 'K')
g.addEdge('K', 'I')
g.addEdge('I', 'F')
g.addEdge('K', 'J')
g.addEdge('J', 'G')
print("Executing DFS (initialising from node S)")
g.DFS('S')
```

The post Exploring Breadth First Search and Depth First Search in a graph appeared first on Dateme Tubotamuno - Semantic Geek.


In mathematics, a weight is used to express a set of multiplicative constants placed in front of terms. In graph terms, for example, the weight of a tree at a point n is the maximum number of edges in any branch at n. The weights of a graph are computed and analysed in light of the problem being investigated. Weighted graphs appear in many different graph scenarios, such as shortest path, centrality and travelling salesman problems.

The weight of a graph assigns a value to either a node or the relationship between nodes. These values can determine the shortest path or the most central node in a given network. The nature or type of weight that is most relevant depends on the type of network. For example, a flow weight may be most applicable to traffic in a computer network, fluids in a water pipe, currents in an electrical circuit or the movement of demand. We will look into the flow network as it pertains to weight in the subsequent section.

**Different types of graph weights**

Weighted graphs are labelled graphs with numbers (usually positive) as labels. Each type of weight can be more relevant to a certain kind of network than others.

**Flow Weights:** Graphs containing this type of weight are usually referred to as flow networks and are directed in nature. The amount of flow that travels through an edge should never exceed the capacity of that edge. It is also expected that the amount of flow into a node equals the amount of flow out of it; the only nodes for which this does not hold are the source and the sink. A source node only demonstrates an outgoing flow, which is important in understanding the roles and importance of nodes in this type of labelled graph. On the other hand, a node designed only to receive flow is referred to as a sink. Earlier, I mentioned different scenarios where a flow network can be most relevant. A good example is a tube or train network: each station can be viewed as a node with a respective capacity for trains or passengers to travel through. In this example, no station is a source or sink, as they all experience both incoming and outgoing flows of trains and passengers. A mathematical representation of this weight starts from a graph *G* = (*V*, *E*), with *V* being the set of nodes and *E* the set of edges connecting them. The capacity function assigns a non-negative number to each pair of nodes, *c*: *V* × *V* → ℝ≥0, and any pair of nodes that is not connected by an edge has zero capacity: if (*v*, *u*) ∉ *E*, then *c*(*v*, *u*) = 0. A flow network with a source and a sink is represented as (*G*, *c*, *s*, *t*), where *G* is the graph, *c* the capacity, *s* the source and *t* the sink or target.

**Capacity Weight:** Flow and capacity tend mostly to work together in a weighted directed graph model. The capacity weight is quite applicable to a traffic network. A good example is a transportation network where the nodes are interchanges or junctions and the edges are the respective highway segments. The edge weights are capacities related to the maximum flow of traffic a highway segment can carry. In other words, the capacity weights of these edges determine how many cars can travel through.
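The capacity and conservation constraints described above can be checked with a short sketch. The network below, with source "s" and sink "t", is a hypothetical example of my own rather than one from the diagrams in this post.

```
# A hypothetical flow network: each edge carries a capacity and a
# candidate flow, with source "s" and sink "t"
capacity = {("s", "a"): 3, ("s", "b"): 2, ("a", "t"): 2, ("b", "t"): 3}
flow     = {("s", "a"): 2, ("s", "b"): 2, ("a", "t"): 2, ("b", "t"): 2}

def is_feasible(capacity, flow, source, sink):
    # Capacity constraint: flow on an edge never exceeds its capacity
    if any(flow[e] > capacity[e] for e in capacity):
        return False
    # Conservation: inflow equals outflow at every node except s and t
    nodes = {u for edge in capacity for u in edge} - {source, sink}
    for n in nodes:
        inflow = sum(f for (u, v), f in flow.items() if v == n)
        outflow = sum(f for (u, v), f in flow.items() if u == n)
        if inflow != outflow:
            return False
    return True

print(is_feasible(capacity, flow, "s", "t"))  # True
```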

The capacity-weighted graph below indicates the flow through each edge. The node *s* is the source and *t* is the sink. The largest capacity is on edge [*c, b*] while the smallest is on edge [*c, a*].

**Frequency weights:** Traditionally, frequency weighting has been applied in communications engineering for sound level measurement, more specifically in measuring how loud a sound (low or high) appears to the human ear. Its association with spectral analysis has led to the conversion of time series into ranges of frequencies. In recent times, frequency weights are also utilised in social network or human interaction settings. In this scenario, weights are applied between two friends, contacts or persons in line with their interpersonal communication. When two individuals, or nodes in a graph sense, communicate regularly, the frequency weight of the edge that connects them will be stronger, and that edge may form part of the minimal path.

**Distance weights:** In normal terms, the distance between two nodes in a graph is the number of edges in the shortest path between them. Distance as a form of weight in a weighted graph looks at the shortest weighted walk. For a given graph G(V, E), w(a, b) represents the weight of the edge e = (a, b). Assuming G(V, E, w) is an undirected weighted graph, it is expected that w(a, b) = w(b, a) for every (a, b) ∈ E. This type of weight can be applied to places or a transport network, where the distance between two nodes (a, b) is depicted by the weight function w. For an edge e = (a, b) ∈ E, the given weight of the edge determines the distance from a to b. In a directed setting, the effort required to travel from a to b may not necessarily be the same as from b to a, resulting in a representation of w(a, b) ≠ w(b, a). Below are two graphs where we can estimate the distance between nodes a and b. The first network is a disconnected graph, which means d(a, b) = ∞ (infinity): with no connecting edge, there is no finite distance between the two nodes. The second graph is connected, but the cycle (x, y, z, x) shows there is no minimum-weight path from a to b, due to the presence of a negative weight between x and z that makes the cycle's total weight negative.
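A common way to compute weighted shortest distances, assuming non-negative edge weights, is Dijkstra's algorithm. The graph below is a hypothetical example of my own, and the second call illustrates the disconnected case where d(a, b) = ∞.

```
import heapq

def dijkstra(graph, start, target):
    # Shortest weighted distance, assuming non-negative edge weights
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbour, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return float("inf")   # target unreachable: d(a, b) = infinity

# Hypothetical weighted graph; "b" is reachable via "x" or "y"
graph = {"a": [("x", 2), ("y", 5)], "x": [("b", 1)], "y": [("b", 1)]}
print(dijkstra(graph, "a", "b"))               # 3 (a -> x -> b)
print(dijkstra({"a": [], "b": []}, "a", "b"))  # inf
```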

Distance as a weight can work closely with time. This will be touched on in the next section.

**Time weights:** The time required to travel between two nodes in a weighted graph can determine the shortest path. As earlier stated, for an edge e = (a, b) ∈ E, the time required to travel from a to b may not necessarily be the same as from b to a. Both directions can have the same edge distance but varying time weights. For example, in transport networks, one travel path may require more time due to bad road traffic, an accident or too many truck drivers because of a factory nearby. These are some of the factors that can determine the time weight between two nodes in a connected graph.

**Resistance weights:** In a connected graph the resistance matrix is represented by R, and it is conveyed as R = (r_ij), where r_ij is the resistance distance between nodes i and j of G. For a friendship or social network, resistance in weight form could be a profile that consistently ignores replying to messages from a given user or, in the case of LinkedIn, ignores a connection request. The path of least resistance between two nodes could be the shortest path in a connected graph.

**Cost weights:** This looks at the amount of effort required to travel from one place to another in a connected network. It could be financial, material or human capital in nature. One could compare the cost of travelling from London to New York from a cost perspective: the different airlines can be different edges, while the nodes are the source, transit and destination airports.

For example, in the graph below, London can be represented by node (0) and New York by node (4). There are two airline routes, (0, 8, 4) and (0, 3, 4). Nodes (8) and (3) are the transit airports en route to New York. This example reveals that the most cost-effective route is (0, 3, 4), with a combined cost of 3.

The above are seven examples of how weights could be represented in a connected weighted network.

The post Exploring the different types of weights in a connected graph appeared first on Dateme Tubotamuno - Semantic Geek.


**What is the meaning of the mother vertex in a given graph?**

In a given graph G = (V, E), a mother vertex v is a vertex from which there is a path to every other vertex in the graph. In other words, all other vertices in the graph can be reached from the mother vertex. A mother vertex is most commonly discussed for directed graphs but is also applicable to undirected networks. We will briefly explore mother vertices in different network examples.

**Directed graph:** Directed graphs, or DiGraphs, hold directed edges. These edges can be unidirectional or bidirectional. For DiGraphs, self-loops are permissible but parallel edges are not. Mother vertices exist in directed graphs, and there can be multiple of them, as shown below. Based on the directed graph below, nodes [4] and [6] are the mother vertices.

**Undirected connected graph:** A mother vertex can be present in undirected connected graphs. In undirected graphs, self-loops are allowed while parallel edges are not permitted. The undirected connected graph below displays nodes 11 to 16 connected by seven non-directional edges. In this type of undirected connected graph, every node is considered to be a mother vertex.

**Undirected disconnected graph:** There is no mother vertex in the example graph below, because no edge connects node [12] to node [A]. It is safe to say that undirected disconnected graphs do not have a mother vertex.
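This reachability test can be sketched directly. The disconnected graph below is a hypothetical stand-in for the diagram, with each undirected edge stored in both directions.

```
from collections import deque

def is_mother_vertex(graph, v):
    # Breadth-first reachability: collect every node reachable from v
    visited = {v}
    queue = deque([v])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    # v is a mother vertex only if it reaches every node in the graph
    return visited == set(graph)

# Undirected edges are stored in both directions. The two components
# below never touch, so no node can reach the whole graph.
disconnected = {"12": ["13"], "13": ["12"], "A": ["B"], "B": ["A"]}
print(any(is_mother_vertex(disconnected, v) for v in disconnected))  # False
```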

**Python implementation of finding a mother vertex**

We will run through a mother vertex implementation in Python. The first code example finds no mother vertex in its graph.

```
from collections import defaultdict

# A class of a directed graph using an adjacency list
class Graph:
    def __init__(self, vertices):
        self.V = vertices  # No. of vertices
        self.graph = defaultdict(list)  # default dictionary

    # DFS traversal from v
    def DFSUtil(self, v, visited):
        # Mark the current node as visited
        visited[v] = True
        # Recur for all the vertices adjacent to this vertex
        for i in self.graph[v]:
            if visited[i] == False:
                self.DFSUtil(i, visited)

    # Add w to the adjacency list of v
    def addEdge(self, v, w):
        self.graph[v].append(w)

    # If a mother vertex exists, return it. Otherwise return "Not Present"
    def findMother(self):
        visited = [False] * self.V
        # Do a DFS traversal and find the last finished vertex
        v = 0
        for i in range(self.V):
            if visited[i] == False:
                self.DFSUtil(i, visited)
                v = i
        # If a mother vertex (or vertices) exists in the given graph,
        # then v must be one of them. Check whether v is actually a
        # mother vertex by testing if every vertex is reachable from
        # it: reset all values in visited[] to False and run DFS
        # again, beginning from v.
        visited = [False] * self.V
        self.DFSUtil(v, visited)
        if any(i == False for i in visited):
            return "Not Present"
        else:
            return v

# Create the graph given in the above diagram
# (13 vertices, labelled 0 to 12)
g = Graph(13)
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(1, 3)
g.addEdge(4, 1)
g.addEdge(6, 4)
g.addEdge(5, 6)
g.addEdge(6, 7)
g.addEdge(8, 7)
g.addEdge(9, 10)
g.addEdge(9, 11)
g.addEdge(11, 2)
g.addEdge(12, 11)
print("A mother vertex is " + str(g.findMother()))

# Output:
# A mother vertex is Not Present
```

On the other hand, the example below identifies node [0] as the mother vertex in the graph it defines.

```
from collections import defaultdict

# A class of a directed graph using an adjacency list
class Graph:
    def __init__(self, vertices):
        self.V = vertices  # No. of vertices
        self.graph = defaultdict(list)  # default dictionary

    # DFS traversal from v
    def DFSUtil(self, v, visited):
        # Mark the current node as visited
        visited[v] = True
        # Recur for all the vertices adjacent to this vertex
        for i in self.graph[v]:
            if visited[i] == False:
                self.DFSUtil(i, visited)

    # Add w to the adjacency list of v
    def addEdge(self, v, w):
        self.graph[v].append(w)

    # If a mother vertex exists, return it. Otherwise return "Not Present"
    def findMother(self):
        visited = [False] * self.V
        # Do a DFS traversal and find the last finished vertex
        v = 0
        for i in range(self.V):
            if visited[i] == False:
                self.DFSUtil(i, visited)
                v = i
        # Check whether v is actually a mother vertex by testing
        # if every vertex is reachable from it
        visited = [False] * self.V
        self.DFSUtil(v, visited)
        if any(i == False for i in visited):
            return "Not Present"
        else:
            return v

# Create the graph given in the above diagram
g = Graph(12)
g.addEdge(0, 1)
g.addEdge(0, 11)
g.addEdge(1, 2)
g.addEdge(2, 3)
g.addEdge(3, 4)
g.addEdge(4, 5)
g.addEdge(5, 6)
g.addEdge(6, 7)
g.addEdge(7, 8)
g.addEdge(8, 9)
g.addEdge(9, 10)
g.addEdge(10, 11)
print("A mother vertex is " + str(g.findMother()))

# Output:
# A mother vertex is 0
```

The code implementation was inspired by this article.

The post Finding the mother vertex in a graph appeared first on Dateme Tubotamuno - Semantic Geek.
