Graphs, the basics
Graphs are incredible.
They are dynamic, recursive data structures that are extremely powerful and flexible. They’re one of the most active areas of research right now and are used every second of every day to solve countless real-world problems.
They’re easily one of the best contemporary methods for modeling real-life phenomena. As such, they can be wildly complicated, but despite that, they can be structured just enough to be navigated in a practical way.
To kick things off, let’s get a taste of just how powerful a graph can be.
This is a graph modeling a section of Facebook. If you were ever curious about how big these systems can get, what they’re capable of, or how you can start playing a role in how they affect the world, learning about graphs is a great place to start.
If you master this data structure and become comfortable with the logic involved in making it work the way you want, the possibilities and stakes can get pretty immense.
We still haven’t found the limits of what graphs can be used for. Thousands of people are working every single day to push them even further.
Interestingly, despite the potential power and complexity of a graph, its implementation and parts are quite simple.
At its core, a graph is just a combination of vertices and edges. As the graph grows it can take on additional properties or forms. Graphs of different shapes have different names and use cases.
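At its most basic, that combination of vertices and edges can be written down as two plain sets. Here’s a minimal sketch in Python (the vertex names and the `degree` helper are illustrative, not from the post):

```python
# A tiny undirected graph: just a set of vertices and a set of edges.
vertices = {"A", "B", "C", "D"}
edges = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")}

def degree(v):
    """Count how many edges touch vertex v."""
    return sum(1 for a, b in edges if v in (a, b))
```

Everything else a graph does is built on top of this simple core.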
Here is some basic anatomy:
While a graph can be implemented in a basic way, we usually need a little more infrastructure to make it practical and usable.
For algorithmic or modeling purposes, most graphs use adjacency lists or matrices to keep track of their basic structure. This data can be used to learn about the graph itself, or to figure out how to traverse it algorithmically.
Adjacency matrices and lists look like this:
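To make the two representations concrete, here is the same small four-vertex graph written both ways (a hypothetical example, assuming an undirected graph):

```python
# Adjacency list: each vertex maps to the vertices it connects to.
adj_list = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}

# Adjacency matrix: cell [i][j] is 1 if an edge joins vertex i and vertex j.
index = {"A": 0, "B": 1, "C": 2, "D": 3}
adj_matrix = [
    # A  B  C  D
    [0, 1, 1, 0],  # A
    [1, 0, 1, 0],  # B
    [1, 1, 0, 1],  # C
    [0, 0, 1, 0],  # D
]
```

Both structures encode exactly the same edges; they just trade off lookup speed against storage, as discussed below.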
One of the most common use cases for adjacency lists and matrices is pathing. Essentially, an algorithm first reads the adjacency list (the cost depends on the size and shape of the graph), then uses that adjacency data to plan a depth-first pathway to whatever data it is trying to find.
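The depth-first pathing idea above can be sketched in a few lines. This is a minimal, assumption-laden version: the graph, its vertex names, and the `depth_first_path` function are all illustrative, not from the post:

```python
def depth_first_path(adj, start, goal):
    """Find one path from start to goal with a depth-first search.

    `adj` is an adjacency list (dict of vertex -> neighbor list).
    Returns the path as a list of vertices, or None if unreachable.
    """
    stack = [(start, [start])]
    seen = {start}
    while stack:
        vertex, path = stack.pop()
        if vertex == goal:
            return path
        for neighbor in adj.get(vertex, []):
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append((neighbor, path + [neighbor]))
    return None

# Example graph (hypothetical): two routes from A to D.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Depth-first search dives down one branch as far as it can before backing up, which is why it pairs naturally with a stack.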
Adjacency matrices are used in a similar fashion. An algorithm can read an adjacency matrix to find a depth-first pathway (among a thousand other things). Somebody could write a hundred blogs on matrices alone, but I’m going to keep this short and sweet.
Adjacency matrices are very fast for this purpose: checking whether an edge exists between two given vertices is O(1), since it is a single indexed lookup. Scanning all of a vertex’s neighbors is slower, taking O(V), because you have to walk an entire row of the matrix.
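That constant-time edge check is the matrix’s big selling point. Here’s a small illustrative sketch (the vertex names and `has_edge` helper are my own, assuming an undirected graph):

```python
# Edge lookup in an adjacency matrix is a single indexed read:
# constant time no matter how large the graph grows.
index = {"A": 0, "B": 1, "C": 2}
matrix = [
    [0, 1, 0],  # A
    [1, 0, 1],  # B
    [0, 1, 0],  # C
]

def has_edge(u, v):
    """O(1) check: is there an edge between u and v?"""
    return matrix[index[u]][index[v]] == 1
```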
The only major drawback of an adjacency matrix is that its space complexity is always O(V*V), even for a sparse graph. This is where adjacency lists come in.
An adjacency list takes up less space, O(V + E), since it only stores the adjacencies that actually exist, as seen in the figure above. Iterating over a vertex’s neighbors is also faster on sparse graphs, though checking for one specific edge can be slower than a matrix lookup.
Another method for traversing a graph is to build what I’ll call an adjacency web algorithm: as it travels through the graph, it checks each node’s adjacent nodes, then consults a visited attribute so it never processes the same node twice. While it is technically possible to do a depth-first search with an adjacency web, it is harder to pull off; adjacency webs are ideal for breadth-first searches.
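The visited-attribute idea above is the heart of a breadth-first traversal. Here is a minimal sketch, using a `visited` set in place of a per-node attribute (the graph and function names are illustrative, not from the post):

```python
from collections import deque

def breadth_first_order(adj, start):
    """Visit vertices level by level, marking each as visited
    so no vertex is processed twice.

    `adj` is an adjacency list (dict of vertex -> neighbor list).
    Returns the vertices in the order they were visited.
    """
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        order.append(vertex)
        for neighbor in adj.get(vertex, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

# Example graph (hypothetical): D is reachable through both B and C,
# but the visited set ensures it is only processed once.
web = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Swapping the queue for a stack turns this into a depth-first traversal, which hints at why the two searches are so closely related.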
Conclusion
That’s only a handful of things related to the basics of graphs. If I had more time, I would move into implementation and then continue discussing edge cases of this incredibly robust data structure, but that will have to wait until a later blog.
Until next time.