I haven’t written a blog post about MINIX3 in some time (sorry, Professor), so I figured it would be wise to wrap up what I managed to complete this semester, now that I’ve gotten past all the stress I was feeling.
About halfway through the semester I started meeting weekly with my professor to discuss ways to approach my project. He knew I was pretty darn stressed about it, but he was also pleased with how much I had learned about MINIX3 and operating systems in general. When we started meeting, I hadn’t done any actual programming in the scheduler, and I was still fuzzy on how the whole scheduler worked as one unit. I also hadn’t realized it at the time, but the scheduling algorithm in the official MINIX3 book is an older design; it doesn’t reflect the design found in this report, which describes the current scheduling algorithm (found in versions 3.3.0 and newer, I believe). That added to my confusion about how to approach the system, because I was pulling information about the scheduler’s implementation from both sources.
I’ve talked about this a bit in my previous blogs, but I figured I’d recap it here now that I have a better understanding of the system. In the official MINIX3 book, the scheduler is still built into the kernel. That meant any scheduling changes involved editing kernel code, which could get pretty complex when errors cropped up. What the scheduling report outlines is the move of the scheduler into the user-mode layer of MINIX3’s layered design. There is still a small default scheduler in the kernel, but the main scheduling policy is implemented in the user-mode scheduler.
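To make that split concrete, here’s a rough sketch of the shape a user-mode scheduler’s main loop takes: the kernel notifies the scheduler about events (a process started, a process ran out of its time slice), and the scheduler answers with a queue and quantum via a kernel call. The message types and helper functions below are simplified stand-ins I made up for illustration, not the actual MINIX3 sched server API.

```c
/* Hypothetical sketch of a user-mode scheduler's event loop.
 * The message types and helpers are invented stand-ins, not MINIX3's
 * real sched server interface. */
#include <stdio.h>

enum sched_event { PROC_STARTED, PROC_OUT_OF_QUANTUM };

struct sched_msg {
    enum sched_event event;
    int proc;                       /* process the kernel is asking about */
};

/* Stand-in for the kernel call that installs the scheduling decision. */
static void tell_kernel(int proc, int queue, int quantum)
{
    printf("proc %d -> queue %d, quantum %d ticks\n", proc, queue, quantum);
}

/* Stand-in for blocking on IPC from the kernel; here it just fakes
 * two messages so the demo terminates. */
static int receive_from_kernel(struct sched_msg *m)
{
    static int n = 0;
    if (n >= 2) return 0;
    m->event = (n == 0) ? PROC_STARTED : PROC_OUT_OF_QUANTUM;
    m->proc = 42;
    n++;
    return 1;
}

int main(void)
{
    struct sched_msg m;
    while (receive_from_kernel(&m)) {
        switch (m.event) {          /* the policy lives out here, not in the kernel */
        case PROC_STARTED:
            tell_kernel(m.proc, 7, 200);   /* default queue for a new process */
            break;
        case PROC_OUT_OF_QUANTUM:
            tell_kernel(m.proc, 8, 200);   /* e.g. round robin: demote one queue */
            break;
        }
    }
    return 0;
}
```

The point of the design is exactly what this shape suggests: the policy decisions happen in an ordinary user-mode process, and the kernel only enforces whatever queue and quantum it is told.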
Let’s talk scheduling policies. Both the kernel-level and the user-level schedulers implement a Round Robin scheduling algorithm by default. As outlined in the report, the easiest policies to implement with the user-mode scheduler are round robin, priority, staircase, and some other queue-based policies. Whilst I was googling information about implementing priority-based algorithms, I found this wonderful project done by a GitHub user named Akshita Mittel. It includes a few test files and the changes that need to be made to some of MINIX3’s files in order to implement different algorithms. It is very similar to the project I had intended to do, so whilst I did not copy it directly (licensing and copyright issues and whatnot), I did use it as a resource for implementing my own project.
Since I wanted my project to deviate slightly from the resources I found online, instead of just implementing more than one scheduling policy I elected to compare the default scheduling policy with a priority scheduling algorithm, and to vary the number of queues by ±25%. This left me with three variations of the default Round Robin policy: one with 12 queues, one with the standard 16 queues, and one with 20 queues. It also left me with three variants of a scheduler using a priority algorithm, again with 12, 16, and 20 queues. My goal was to run the same test across each of these six variations and see whether there was a notable difference in runtime.
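If memory serves, the queue count is controlled by a handful of defines in MINIX3’s kernel headers (NR_SCHED_QUEUES and friends), so the 12- and 20-queue builds were mostly a matter of adjusting those constants and rebuilding. Roughly, the 20-queue variant looked something like the snippet below; the exact names, values, and file locations may differ between MINIX3 versions, so treat this as a sketch rather than a patch.

```c
/* Sketch of the queue-count change for the 20-queue variant, loosely based
 * on the scheduling defines in MINIX3's kernel headers. Exact names and
 * values may differ between versions. */
#define NR_SCHED_QUEUES   20    /* was 16; the other variant used 12        */
#define USER_Q             7    /* default queue for user processes         */
#define MIN_USER_Q        18    /* was 14; lowest queue for user processes  */
#define IDLE_Q            19    /* was 15; lowest queue, idle process only  */
```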
Before I go into the test, I figured I’d explain the difference between Round Robin and priority-style scheduling. Round Robin policies assign each process within the priority queues a time slice, called a quantum. As processes run out of quantum, they’re moved to a higher-numbered queue (the higher the number, the lower the priority). In MINIX3, after a certain number of ticks, the queues are rebalanced, which means that every process whose priority was changed gets that priority reset. The difference between round robin and priority scheduling is fairly simple: round robin aims to be fair, which is why it implements this queue feedback. A priority policy is designed without fairness in mind; you might want some processes to always have the highest priority, so it simply disables the feedback and processes don’t move around between queues. A diagram of the structure of the priority queues is shown below.
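In code terms, the difference comes down to what happens when a process burns through its quantum. The sketch below uses made-up helper names (set_priority() and so on) rather than the real MINIX3 functions, but it captures the two behaviors I compared: feedback (demote, then periodically reset) versus no feedback (stay put).

```c
/* Hypothetical sketch of the "out of quantum" handling in the two policies.
 * set_priority() stands in for whatever call pushes the new queue and
 * quantum back to the kernel; these are not the real MINIX3 functions. */

#define MIN_USER_Q 14               /* lowest user queue in a 16-queue setup */

struct proc {
    int queue;                      /* current queue (0 = highest priority) */
    int base_queue;                 /* queue the process was spawned in     */
    int quantum;                    /* time slice, in ticks                 */
};

static void set_priority(struct proc *p) { (void)p; /* kernel call goes here */ }

/* Round robin with feedback: demote the process one queue each time. */
void out_of_quantum_rr(struct proc *p)
{
    if (p->queue < MIN_USER_Q)
        p->queue++;                 /* bigger number = lower priority */
    set_priority(p);
}

/* Priority policy: no feedback, the process keeps the queue it spawned in. */
void out_of_quantum_priority(struct proc *p)
{
    p->queue = p->base_queue;       /* just hand it another quantum */
    set_priority(p);
}

/* Periodic rebalancing (round robin only): undo the demotions. */
void balance_queues(struct proc *procs, int n)
{
    for (int i = 0; i < n; i++) {
        if (procs[i].queue > procs[i].base_queue) {
            procs[i].queue = procs[i].base_queue;
            set_priority(&procs[i]);
        }
    }
}
```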
The test I used was based on Akshita’s test that I found on GitHub. The test program logs the system time, does some work in a process, logs the system time again, and takes the difference between the start and end times. I found the average running time over 100 runs for each of the six variations.
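For reference, each run looked roughly like the following; this is my own simplified reconstruction of the idea, not Akshita’s actual test code, and the amount of busy work is arbitrary.

```c
/* Simplified reconstruction of the timing test: record the time, burn some
 * CPU, record the time again, and print the difference. Not the original
 * test code from the GitHub project, just the same general idea. */
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;
    volatile long sink = 0;

    gettimeofday(&start, NULL);

    for (long i = 0; i < 50000000L; i++)    /* CPU-bound busy work */
        sink += i % 7;

    gettimeofday(&end, NULL);

    long usec = (end.tv_sec - start.tv_sec) * 1000000L
              + (end.tv_usec - start.tv_usec);
    printf("elapsed: %ld us\n", usec);
    return 0;
}
```

The results, averaged over the 100 runs per variation, are displayed below.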
As you can see, there wasn’t much consistency across the tests. Honestly, even before running them, I could have reasonably concluded that my particular approach wouldn’t produce much of a difference in run times. The test I ran was mostly CPU-bound as opposed to I/O-bound, which very well could have made a difference, and a CPU-bound test might have shown a bigger impact if I had implemented a different policy than priority. Beyond designing a better test for what I was doing in particular, I think that, theoretically, the number of queues in a round robin scheduler wouldn’t make much of a difference unless processes lived in them for a very long time. Likewise, the number of queues in a priority scheduler wouldn’t make a significant difference either, since processes tend to stay in the queue where they were spawned. Aside from crowding of the queues, I don’t see it having a large impact on the overall runtime. So, all things considered, I think the results I got make sense.
I’m not upset that my small “research project” didn’t yield great results. The point of this independent study was for me to better understand operating systems, as I felt I was seriously lacking in that area. While I know there’s still a tremendous amount of knowledge to be gained about operating systems, I’m happy with the work I did this semester and feel far more confident discussing OS concepts in general. It was a very stressful project at first, but now that I’ve wrapped it up, I think it was absolutely worth it. Hands-on operating system practice is a great way to learn the subject. In fact, this independent study has sparked quite a bit of interest in systems programming for me, and I’m looking forward to continuing to learn about it in the future!
(Side note: this is my last school-related blog post. At first, I felt extremely uncomfortable writing these things. Now, I think it’s something I’d like to do once a week, or at least regularly. I appreciate my professor requiring us to start these blogs — writing is something I never really saw myself enjoying, and after doing it for a couple of semesters now, I get it. Who would’ve guessed that recording what you learn is both enjoyable and rewarding? Thanks, Professor Wurst!)
From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.