Thoughts on computer science (and all of innovation)
I’ve come to a realization about computer science after learning about various data structures and libraries in Ruby.
Most new things in computer science (and all of engineering), whether a new data structure or a new library, are basically just the next logical step from one of their predecessors.
If you consider the very basics of what a computer is, how we communicate with it, and how it has to process things (most of the time), you’ll realize that what happened (to create this thing we use every day) is that we solved a bunch of little problems one step at a time, then we put all of those little steps into a bigger step, and all those bigger steps into an even bigger step, and so on. Each bigger step is yet another collection of (relatively) littler steps.
There are endless parallel examples I could use, but let’s look at a house.
A house is not just a house. It’s also a concept that is the result of a lot of tools and activity. It only exists at all because there was a ton of stuff that came before it. If we combine the tools, components, and concepts that have come before it we can get a house.
For example, we found out how to make some tools out of wood and stone (and later, metal). Then we combined those tools (like hammers, axes, levers…) with work and wood and made a house.
Then we combined a bunch of houses and made some villages.
Then we combined some villages and made some cities.
Then we combined houses more intimately and made some very large buildings.
I think you’re getting the idea. In essence, all of the colossal, fantastical, innovative creations that exist at the top of society are mostly just a result of a series of logical steps.
In computer science, some of these things are even more closely related than you probably realize. The difference between a BST and a B-tree is not much, and when you study them in chronological order it becomes very clear how such things were invented.
What happened was some person (or group of persons), somewhere between 1959 and 1962, figured out that a range search could be accomplished if you structured the data in a particular way: the binary search tree.
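As a rough illustration of that idea (class and method names here are mine, not from any canonical implementation), a minimal BST in Ruby can answer a range query by pruning whole subtrees that can’t contain keys in the range:

```ruby
# Minimal binary search tree sketch with range search.
class BSTNode
  attr_accessor :key, :left, :right

  def initialize(key)
    @key = key
  end

  def insert(key)
    if key < @key
      @left ? @left.insert(key) : (@left = BSTNode.new(key))
    elsif key > @key
      @right ? @right.insert(key) : (@right = BSTNode.new(key))
    end
  end

  # Collect every key in [lo, hi], skipping subtrees that
  # cannot possibly contain keys in that range.
  def range_search(lo, hi, out = [])
    @left.range_search(lo, hi, out) if @left && lo < @key
    out << @key if (lo..hi).cover?(@key)
    @right.range_search(lo, hi, out) if @right && hi > @key
    out
  end
end

root = BSTNode.new(8)
[3, 10, 1, 6, 14].each { |k| root.insert(k) }
root.range_search(4, 11) # => [6, 8, 10]
```

The ordering invariant (smaller keys left, larger keys right) is the whole trick: it’s what lets the search skip branches instead of visiting every node.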
Around the same time, in 1962, AVL trees were invented, because a balanced tree solves some time-complexity problems.
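The core AVL “next step” is small enough to sketch: measure how lopsided a node is, and rotate when it tips too far. This is only the single-rotation case, with illustrative names of my own choosing, not a full AVL implementation:

```ruby
# Sketch of the AVL idea: detect imbalance, fix it with a rotation.
AVLNode = Struct.new(:key, :left, :right) do
  def height
    1 + [left&.height || 0, right&.height || 0].max
  end

  # Positive means left-heavy; AVL allows only -1, 0, or 1.
  def balance_factor
    (left&.height || 0) - (right&.height || 0)
  end
end

# Right rotation: fixes a left-heavy node by promoting its left child.
def rotate_right(node)
  pivot = node.left
  node.left = pivot.right
  pivot.right = node
  pivot # new subtree root
end

# A left-leaning chain 3 -> 2 -> 1 is out of balance...
chain = AVLNode.new(3, AVLNode.new(2, AVLNode.new(1)))
chain.balance_factor         # => 2 (too left-heavy)
rotate_right(chain).key      # => 2 (now the balanced root)
```

That one local fix, applied after every insert, is the entire difference between a plain BST and an AVL tree.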
Then almost TEN years later, in 1970, Rudolf Bayer and Edward McCreight looked at a BST for a very long time and determined that if you made some slight modifications, using math that was not that complicated, you could solve some… more time-complexity problems. Thus the B-tree was formally introduced to the world.
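And the “slight modification” really is slight: instead of one key per node, a B-tree node holds a sorted array of keys and one more child than it has keys, so each node visited narrows the search much further. A toy sketch (again with my own illustrative names, and without the insertion/splitting logic of the real structure):

```ruby
# Toy B-tree node: many sorted keys per node, children.length == keys.length + 1.
BTreeNode = Struct.new(:keys, :children) do
  def search(key)
    # Find the first key that is >= the target.
    i = keys.index { |k| key <= k } || keys.length
    return true if i < keys.length && keys[i] == key
    return false if children.nil? || children.empty? # leaf: not found
    children[i].search(key) # descend into the one relevant child
  end
end

leaves = [BTreeNode.new([1, 3], []),
          BTreeNode.new([7, 9], []),
          BTreeNode.new([15, 20], [])]
root = BTreeNode.new([5, 12], leaves)
root.search(9)  # => true
root.search(4)  # => false
```

The search logic is recognizably the BST’s, just generalized from “two ways down” to “many ways down,” which is exactly what makes it suit disk pages.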
Then a few years later the same logic was applied to create the B+ tree, which again is almost the same as a B-tree, except that it has some additional logic applied (such as linking the leaves together) that makes it capable of solving some more problems.
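That extra step is easy to picture: B+ tree leaves point at their siblings, so once a range scan finds its starting leaf it just walks sideways instead of re-descending the tree. A tiny sketch of linked leaves (names are illustrative, and this assumes non-empty leaves):

```ruby
# Sketch of the B+ tree's linked leaves: a range scan walks sideways.
BPlusLeaf = Struct.new(:keys, :next_leaf) do
  # Starting at this leaf, collect keys up to hi, following sibling links.
  def scan_to(hi, out = [])
    keys.each { |k| out << k if k <= hi }
    next_leaf.scan_to(hi, out) if next_leaf && keys.last < hi
    out
  end
end

last   = BPlusLeaf.new([15, 20], nil)
middle = BPlusLeaf.new([7, 9], last)
first  = BPlusLeaf.new([1, 3], middle)
first.scan_to(9) # => [1, 3, 7, 9]
```

Compare this with the BST range search above: same question, but answered by a sequential walk instead of tree traversal, which is much friendlier to disks reading data in blocks.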
I mean, when I read about all of these leaps of innovation and model the tools that were used to create them, I start to think it might even be possible to make somewhat accurate near-future predictions about what the “next step” is going to be in my own time.