Crossing the Rubicon

There is a big change that computing is going to undergo in the next 5–10 years, one that will most likely fundamentally transform our world as we know it. Nature shows us that rapid change can be dangerous for a species as well as for existing ecosystems, so I hope we are fully cognizant of the full implications of some of the groundbreaking research we are conducting. I've always felt that in research the most important question isn't whether we can do something, but whether we should. How strange that these questions, usually reserved for doctors and biologists, today confront computer scientists.

The next decade is going to see huge advancements in new kinds of machines and algorithms, with immediate and far-reaching impact. A quantum computer doesn't feel too far off, yet it has computing potential many orders of magnitude greater than any supercomputer we have today. Why does this matter? Well, for example, one of the most widely used security algorithms, RSA, relies on the fact that prime factorization of large numbers is (still) essentially intractable today. A public key built from the product of two large primes, with the primes themselves kept as the private key, is therefore relatively secure. However, a sudden jump in computing power could have far-reaching implications for RSA and many other cryptographic algorithms.
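To make that concrete, here is a toy sketch of the RSA idea with deliberately tiny primes. The specific numbers are a hypothetical textbook example, not anything from a real key; real keys use primes hundreds of digits long, which is exactly what makes the factoring step infeasible today.

```python
# Toy RSA sketch with tiny primes -- illustrative only. With numbers
# this small, anyone could factor n and recover the private key; the
# scheme's security rests entirely on factoring being hard at scale.

def toy_rsa():
    p, q = 61, 53            # two (tiny) primes: the private secret
    n = p * q                # 3233: the public modulus, p * q
    phi = (p - 1) * (q - 1)  # 3120: used to derive the exponents
    e = 17                   # public exponent, coprime with phi
    d = pow(e, -1, phi)      # private exponent: modular inverse of e
    m = 42                   # a message, encoded as a number < n
    c = pow(m, e, n)         # encrypt with the public key (e, n)
    assert pow(c, d, n) == m # decrypt with the private key (d, n)
    return n, e, d

n, e, d = toy_rsa()
```

Anyone who can factor `n` back into `p` and `q` can recompute `d` and read every message, which is why a machine that factors large numbers quickly would break this scheme outright.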

In addition to the machines themselves are new and emerging algorithms that don't just exist as a fixed recipe for computing a result, but dynamically adjust ("learn") over time in response to varied inputs and input environments. Machine learning methods such as neural networks have an inherent ability to, in effect, learn and evolve based on exposure to different experiences. Neural networks specifically mimic the biology of a brain, with nodes exchanging information much like the neurons in our own heads. The applications are innumerable, from speech recognition to image analysis.
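The smallest version of that idea is a single artificial neuron. The sketch below, a hypothetical illustration rather than anything from a real library, trains one neuron with the classic perceptron rule: show it examples, and nudge its weights whenever it gets one wrong.

```python
# A minimal single-neuron ("perceptron") sketch, stdlib only. The
# neuron is never told the rule for AND; it adjusts its weights from
# experience until its outputs match the examples.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for one neuron from (inputs, target) pairs."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # nudge each weight in the direction that reduces the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron logical AND purely from examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Real networks stack many such nodes in layers, but the core mechanism is the same: behavior emerges from repeated weight adjustments, not from an explicit recipe.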

So what is the "crossing of the Rubicon" in this context? It is a fundamental adjustment to how we think about and approach problem solving. The impetus for this came from a discussion about process scheduling in an OS: how a preemptive scheduler switches contexts (processes) every quantum of time. To aid responsiveness, processes that interact with the user are prioritized in a series of queues. As a process ages in a priority queue, its priority weight may be adjusted to push it further up the queue until it is executed. However, two fundamental assumptions are made: 1) we can't know the burst time of a process waiting in the queue, and 2) our quantum length is static. Additionally, while the scheduler does "learn" on a very elementary level about the weight of different processes, this information, and the process table itself, is volatile. All of it is lost on every power cycle.
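The pick-and-age loop described above can be sketched in a few lines. This is a hypothetical, simplified model of priority aging, not code from any real scheduler; the PIDs, priorities, and quantum are made-up numbers for illustration.

```python
# Sketch of priority aging in a preemptive scheduler's ready queue:
# each time a process is chosen to run for one quantum, everything
# still waiting gains priority so that nothing starves.

QUANTUM_MS = 10  # the fixed ("static") time slice

def pick_next(ready):
    """Choose the highest-priority process and age the rest."""
    ready.sort(key=lambda p: p["priority"], reverse=True)
    chosen = ready.pop(0)
    for p in ready:
        p["priority"] += 1   # aging: push waiters up the queue
    return chosen

ready = [
    {"pid": 1, "priority": 5},  # interactive -> starts high
    {"pid": 2, "priority": 1},  # batch job  -> starts low
    {"pid": 3, "priority": 1},
]
first = pick_next(ready)  # pid 1 runs for one quantum
# pids 2 and 3 have each aged from priority 1 to 2 while waiting
```

Note what's missing: the scheduler never predicts how long a process will run, the quantum never changes, and every priority it has accumulated vanishes at shutdown.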

When I was looking at this material, the immediate thought that came to my head was: why can't we make this better with machine learning? Why can't we create a scheduler that learns the nature of a process over time (e.g., its burst time in ms) in order to appropriately prioritize the queue for optimal completion time? The static quantum length is a trade-off: you shrink the quantum only until the context-switch overhead stops being reasonable. If we know the processes and have a model for how they behave given environmental variables (CPU load, memory usage, etc.), why couldn't we dynamically adjust the quantum length to optimize completion time even further? That would be something incredible, as the scheduler in your computer's OS would literally learn your usage habits and be personalized and optimized for you. Truly a 'personal computer.'
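A very simple form of this "learning" already exists in textbook scheduling: exponential averaging of CPU bursts, the prediction trick behind shortest-job-first. The sketch below uses it as a stand-in for the richer learned model imagined above; the burst values and initial guess are made-up.

```python
# "Learning" a process's next CPU burst with exponential averaging:
# tau is the running estimate, and alpha controls how much weight the
# most recent observed burst gets versus the accumulated history.

def predict_next_burst(tau, observed, alpha=0.5):
    """Blend the last observed burst with the running estimate."""
    return alpha * observed + (1 - alpha) * tau

tau = 10.0                  # initial guess for this process (ms)
for burst in [6, 4, 6, 4]:  # bursts actually observed over time
    tau = predict_next_burst(tau, burst)
# tau has drifted from 10 ms toward this process's real ~5 ms behavior
```

A scheduler imagined in this post would go further: persist estimates like `tau` across power cycles, condition them on CPU load and memory pressure, and feed them back into both queue priority and quantum length.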

While this specific example is the impetus for this post, it illustrates a broader way of thinking that I believe will be crucial not only to pushing the field forward, but also to preparing us for the coming transition. Plenty of people much smarter than I am are deeply divided over the implications of AI and similar technology in society, but approaching problem solving from the context of these new capabilities is crucial to understanding the potential (good and bad) of these emerging technologies. I'm still new to the machine learning train, but I've already felt its possibilities intertwine with how I read and digest material in courses as well as how I solve problems. With that, I encourage you to cross the Rubicon as well, so you can be on the bleeding edge of potentially the largest computing revolution since the transistor.
