Author Archives: jwblash

Sprint 2 Retrospective

During the first sprint, our goal was to get everyone set up with the development environment so that we could all build the AMPATH project and get to work as soon as we were given user stories or tasks. I believe that as of the last blog post, we still had to finish getting one or two of my teammates set up, but that was resolved rather quickly. After that, it was learning testing in Angular while playing the waiting game for AMPATH’s team to get in touch.

And well, wait we did. That was no fault of AMPATH’s, though. They’re a rather small development team with a big project. They’ve got their work cut out for them and they’re seemingly very busy.

Unfortunately, with how busy they were, we didn’t get any new tasks to work on at all this past sprint. What was nice was that we got plenty of time to familiarize ourselves with Angular and testing. Reviewing testing was important because, for our upcoming tasks, we’re trying to figure out whether we’re going to use mocking or a real database with random values for our “back-end” data. If we go with mocking, as AMPATH has suggested, then the time we spent learning testing with Angular this sprint will be put to good use. I’m also pretty thankful for the Software Quality Assurance and Testing course I took last semester, because it went in depth on a lot of these same topics.

Overall, Sprint 2 was fairly uneventful. However, I feel as though my team got more accustomed to working with one another and a bit more comfortable with each other, too. So for those reasons, I think it was a positive sprint. I’m excited for the next few weeks of Sprint 3 so we can start working on some coding!

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.

MINIX3: Scheduler Research II

I’ve had to change my approach over the past week or so with this independent study. When confronted with the source code, I began to feel overwhelmed by the number of concepts I had to dive into. As a result, I attempted to take it from square one and relearn many systems concepts while also working on understanding the scheduler. As it turned out, this was a bit stressful for me. So I have decided to instead look at the relevant source code and, line by line, take notes and learn things as they come. Perhaps this is the traditional way to handle diving into a large system, but since I don’t have much experience with large-scale work, this is a learning process for me. It seems that this new direction is working a bit better for me.

Let’s start with /minix/servers/sched/main.c. The main() function is the primary function of the scheduler. When a message is passed into the scheduler, the main() function defines variables for the message, system call number, the caller’s number, and the result of the system call. Then, it enters an infinite loop. This loop saves the message’s info in the variables that were defined for it, and then checks for special situations such as system notifications, etc. (I’m not totally positive on the function of that, but it says that the balance_queues() function is called in this event.) Then, based on the call_nr (the system call number), a switch statement determines what call from /sched/schedule.c should be executed next, with functions like do_noquantum() (which executes when a process is out of quantum) and do_start_scheduling() (which seems to start the scheduling of the process). So long as the process is executed correctly, a reply is sent back that communicates success, and the loop continues on to the next kernel message.
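To make that receive-dispatch-reply flow a bit more concrete, here’s a toy model of the dispatch step. This is a Python sketch, not the actual C from /minix/servers/sched/main.c; the call numbers, handler bodies, and names below are simplified stand-ins I made up, not the real MINIX constants:

```python
# Toy model of the sched server's dispatch: pick a handler by call number,
# run it, and report a status back. Constants and handlers are illustrative.

OK = 0
EINVAL = -1  # stand-in for "unknown request" error reply

SCHEDULING_NO_QUANTUM = 1   # stand-in call number: process ran out of quantum
SCHEDULING_START = 2        # stand-in call number: start scheduling a process

def do_noquantum(msg):
    # The real handler re-enqueues the out-of-quantum process,
    # typically at a lower priority with a fresh quantum.
    return OK

def do_start_scheduling(msg):
    # The real handler sets up initial priority and quantum for a new process.
    return OK

HANDLERS = {
    SCHEDULING_NO_QUANTUM: do_noquantum,
    SCHEDULING_START: do_start_scheduling,
}

def dispatch(call_nr, msg):
    """One pass through the body of the main() loop's switch statement."""
    handler = HANDLERS.get(call_nr)
    if handler is None:
        return EINVAL  # real code replies with an error for unknown calls
    return handler(msg)
```

The real main() wraps this dispatch in an infinite receive loop and sends the result back as a reply message; this sketch only captures the switch-on-call_nr idea.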

So hopefully soon, I’m actually going to start tinkering with the scheduling policy and see what I can come up with. In the scheduling report I discussed in my previous post, it was suggested that with the current interface a Round Robin, Priority Scheduling, Staircase Scheduler, or Rotating Staircase Deadline algorithm can be easily implemented, so I’ll learn one of those and aim for that. I’m sure it’s going to take me longer than just a week to fully implement, but we’ll see what kind of magic I can work. I’m also sure I’m going to have to write a couple more of these research-related blog posts before I fully understand the workings and can proceed forward, but having changed my plan of attack, I’d like to finish those this week. Ready to move forward once again!

Apprenticeship Patterns: Learn How You Fail

Similar to last week, I continued to read through Chapter 5: Perpetual Learning. This time, the pattern I am writing about is Learn How You Fail, and I’m finding it extremely relevant to my independent study at the moment. The pattern itself discusses that the path to success isn’t just about learning and knowledge acquisition; it is equally important to pay attention to how your learning progresses and analyze why it stalls when it does — because it will stall. Sometimes we have behavioral patterns that negatively influence our ability to learn and perform. Once we become conscious of these behavioral patterns, we’re faced with a choice. You either accept that you will not change and collide with the issue forever, or you work to fix the problem. In the world of software development, this pattern may come up where there are gaps in your knowledge of things you have failed to learn before. When you come across this, try sitting back, examining the trajectory you were on when you originally attempted to learn it, and recognizing what caused the interference. Try to iterate on the mistakes you made originally, and intentionally spend time on those issues.

This pattern was particularly good for me to read at the moment because I’m attempting to tinker with the MINIX 3 operating system for my independent study and feel as though I’m struggling to make progress. I’m a fairly reflective person and I try to recognize any mistakes I’m making so I can work on them, but sometimes when something causes a lot of stress and feels overwhelming, it is very easy to get sucked in and forget the bigger picture. Taking time to step back, create a new plan of attack, and go at things from a different angle is key to overcoming obstacles. There is always an angle that will work; sometimes it just takes finding it.

This, perhaps above any other patterns so far, may be the most important lesson to take from this book. It can be applied to every aspect of life and is absolutely critical for success in a field. Not only for success, but for staying in the right mindset (and for staying humble) in your advancement. Recognize that everyone, including yourself, has made and will make mistakes. It is precisely how you proceed forward from those mistakes that makes the difference in the long run.

Apprenticeship Patterns: Record What You Learn

This week I read Record What You Learn in Chapter 5, Perpetual Learning. I read this partly because I found irony in recording what I’ve learned about recording what you learn, but also largely because I’ve heard it given as advice in the past, and it is something I’d really like to put in place for myself. The pattern itself reflects on the situation where you learn something for whatever reason, be it for work, school, or personal development. However, without dedicating time toward fully understanding the concept, it doesn’t solidify and you fail to retain the information. The proposed solution is to write. Write as much as you possibly can about each and every part of your journey. Make time for it, because by simply putting concepts into your own words and jotting them down on paper (or on a blog, like this), you etch that knowledge into your head far more easily than you would by simply trying to absorb the information.

I’m sure we’ve all been in the position where we’ve learned something and then completely forgotten it some time later when we’re meant to recall it. Perhaps on tests, for work, or in conversation. We know that we’re familiar with it, but we seem to fail to remember it fully unless we seek to make it our own, and really feel it out. Take mathematics for example. In order to solidify a concept, you need to learn it and practice its implementation in several ways to really grasp the meaning behind it. Simply trying to absorb the information in one lecture or introduction won’t work for nearly anyone. So why, then, would it work for a theoretical concept in any field, or even something like a life lesson? You need to practice thinking it through in different ways. Writing about things does exactly that.

Part of the motivation behind starting this blog during my Software Development concentration was to have a portfolio to point to that shows what we’ve worked on during our classes. Not only that, but it was a means of connecting with the greater software development community. However, I’m finding (through being more consistent with my blogging) that perhaps the most important thing isn’t any of these, and that the majority of the benefit has come through the act of journaling about what I’m learning. Everything I have learned and written about, I feel like I know well. Coming from someone who felt extremely awkward about blogging when I started, this was an interesting change of heart — and a welcome one at that.

Operating Systems Recap I

I’m currently working through an independent study this spring semester where my goal is to tinker with the MINIX 3 operating system. It’s almost supposed to be like a hands-on, more advanced systems programming class. However, one issue I have come across is that for one reason or another, I’m fairly shaky with my fundamental operating systems concepts. Thankfully the MINIX 3 textbook has a really fantastic overview of Operating Systems concepts, so I figured instead of just taking personal notes, I’d put some review up here as well. Also, as a heads up, much if not all of this information is coming from the official MINIX 3 book.

Let’s start with the absolute basics. Some brief definitions.

System Calls. What are they, exactly? Well, loosely, they are the means by which user programs interface with the operating system. They are extended instructions that the OS provides — I suppose you could think of them as an API, in a way. In MINIX 3, system calls generally fall into two categories: those dealing with processes and those dealing with file systems.
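As a quick illustration of those two categories, here’s a generic POSIX-flavored sketch in Python (nothing MINIX-specific; Python’s os functions are thin wrappers over the underlying system calls):

```python
import os
import tempfile

def syscall_demo(path=None):
    """Exercise one file-system-group call and one process-group call."""
    if path is None:
        path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")

    # File-system group: os.open/os.write/os.close wrap the open(2),
    # write(2), and close(2) system calls.
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
    os.write(fd, b"hello via a system call\n")
    os.close(fd)

    # Process group: os.getpid() wraps getpid(2), asking the OS
    # for this process's identifier.
    return path, os.getpid()
```

Calling `syscall_demo()` creates the file through the file-system calls and returns this process’s ID through a process call.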

Processes. A process is essentially just a program that is currently being executed. Each process has an address space and a set of registers. Within the address space is a series of memory regions that the process can read from and write to; these contain the program’s instructions, data, and its stack. The set of registers holds things such as the program counter, stack pointer, and other info needed to run the program.
Note here that there are clear differences between processes and programs. Often these terms are used interchangeably, but they are not one and the same.

Files. Since GUIs represent files so effectively visually, it’s pretty easy to understand how they work. MINIX, like other operating systems, has directories. Directories can contain files and other directories, which gives rise to the file system hierarchy. The file system hierarchy is organized like a tree, and the topmost directory is the root.
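That tree structure is easy to see in code. Here’s a tiny generic Python sketch that builds a miniature hierarchy under a throwaway root and walks it, root first (the directory names are made up for illustration):

```python
import os
import tempfile

def build_and_walk():
    """Create a tiny directory tree and list it, starting from its root."""
    root = tempfile.mkdtemp()  # throwaway "root" of our mini hierarchy
    os.makedirs(os.path.join(root, "home", "user"))
    with open(os.path.join(root, "home", "user", "notes.txt"), "w") as f:
        f.write("files live at the leaves\n")

    seen = []
    # os.walk visits the root directory first, then descends the tree.
    for dirpath, dirnames, filenames in os.walk(root):
        seen.append((os.path.relpath(dirpath, root), sorted(filenames)))
    return seen
```

The first entry returned is always "." (the root itself), with the subdirectories following — exactly the tree shape described above.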

The Shell. While not actually a part of the operating system, the shell is the primary means by which a user interacts with the operating system (unless they’re using a GUI). The shell is a process that gets started when a user logs in. It uses the terminal window as its standard input and output. When the shell is started and running in the terminal window, it prints the prompt, which is some symbol (commonly a $) to show that it is ready to receive a command. When the user inputs a command, the shell spawns it as a child process and then waits for that process to terminate. When the child process terminates, the shell resumes and again prints the prompt on the screen.
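That spawn-and-wait cycle is the classic fork/exec/wait pattern, which can be sketched like so (a POSIX-style model in Python, not MINIX’s actual shell code; the 127 exit status is the usual "command not found" convention):

```python
import os

def run_command(argv):
    """What a shell does for each command line: fork a child, exec the
    program in the child, and wait in the parent until the child exits."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the requested program.
        try:
            os.execvp(argv[0], argv)
        except OSError:
            os._exit(127)  # conventional "command not found" status
    # Parent (the shell): block until the child terminates.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status) if os.WIFEXITED(status) else -1
```

A real shell would loop, printing the prompt and calling something like `run_command` once per line of input; this sketch shows just the one iteration described above.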

Each one of these concepts has a lot to them, and I’ll delve further with upcoming posts. Lots of self studying to do this semester, I’m kind of enjoying it! Also, don’t tell anyone, but I’m actually starting to enjoy this whole blogging thing, too.

Sprint 1 Retrospective

I think the most important lesson I learned this past week was that front end development can be extremely annoyi– ahem. Particular. Yeah, that’s the word.

In all seriousness, for our sprint this past week we were given 5 tasks. We needed to create an organization for our team in capstone, fork and clone the ng2-amrs repository from AMPATH’s GitHub account, read through its file, set up the development environment, and then learn about testing with Angular. As it turns out, this was a whole lot more frustrating than we originally anticipated. When setting up the development environment using the ng2-amrs repo, the entire class got errors. People had issues with Sass, issues with CSS packages, issues with TypeScript, and issues with JavaScript memory management. Thankfully, having Slack was extremely helpful, because not only could we offer catered help in our individual team channels, but the entire class was working together to get things up and running. As it turns out, there are a lot of ways an Angular/Node project can encounter errors purely based on the environment you’re running it in.

By FAR the most difficult thing was getting the development environment set up for everyone on our team. I was running on Mac and everyone else was running on Windows. The common issue among us was with the styles.css or styles.scss file in the /src folder of the project. For the Windows users, the specific error they got was “Node Sass does not yet support your current environment: Windows 64-bit with Unsupported runtime (57).” Professor Wurst found that it was a problem with Sass, which required them to run npm rebuild node-sass. After this, that issue seemed to be fixed. Since I was the only one on a Mac, that was not an error I got. My error included “Can’t resolve ‘ion-rangeslider/css/……’”. After a bit of digging, it seemed as though there might be a package I was missing regarding the ion-rangeslider, and so I found the tool npm-check, which lets you check which packages you have installed in your project and what the most recent versions are. From there, I discovered that I didn’t have the ng2-ion-range-slider package, and after I installed it, my styles.css issue was resolved.

Once past the styles.css issue, it seemed like everyone encountered an issue when trying to resolve a .ts file. My teammate Harry emailed AMPATH directly about this, and they told him he needed to run ng build --prod to get the production build of the server, as opposed to just running ng build. This worked perfectly and solved the TypeScript issue for everyone who encountered it. However, the very last error most people got was the JavaScript heap running out of memory while building the server. With a little bit of googling, Harry also found the solution to that issue in a Stack Overflow post. Finally, the server built and ran as intended — but not without much trial and error.

Along the way, there were constant issues with Angular and npm versions that needed to be resolved. I can’t even tell you how many times I ran npm cache clean --force, rm -rf node_modules, rm -rf package-lock.json, and npm install with pinned @x.x.x versions just trying to figure out whether the issues were a version or installation problem. I actually do feel like I learned a lot about the environment Angular and Node need in order to run, just from digging through files and seeing whether all that was necessary was a small change to a .json file. There was a lot of reading, exploring, and self-teaching involved in finding fixes for the issues, especially since we haven’t had a tremendous amount of front-end development experience yet.

Currently, we’re all working on understanding testing in Angular a bit better, and we still have one group member encountering technical difficulties. His seem to be partially related to his physical laptop, which is hindering his progress, so that will likely get sorted out with time. Overall, I think this sprint was a really great way to kick off the project. I enjoy my team and think we all work well together. Excited to continue onwards!

MINIX3: Scheduler Research I

Before I can edit the scheduling algorithm for my independent study, I need to understand the intricacies of how it works. That’s what I tasked myself with this past week, and as it turns out, there’s a lot to know.

MINIX is structured in four layers. From top to bottom, they are the User Processes layer, the System Server Process layer, the Device Drivers layer, and finally the Kernel. Originally, the scheduler was built into the Kernel (as you would find on a traditional operating system, I would assume); however, an independent project by Björn Patrick Swift moved the vast majority of the scheduling handling into user space — specifically the System Server Process layer. This change allowed for a much more flexible scheduler that can be easily modified without worrying about large, impactful (and potentially damaging) changes to the rest of the Kernel. On top of this, it allows multiple user mode scheduling policies to be in place at the same time. The current user mode implementation is an event-driven scheduler, which largely sits idle and waits for a request from the Kernel. In order for this to happen, though, the Kernel needs a scheduling implementation as well.

The Kernel scheduling implementation runs in a round-robin style, and chooses the highest priority process from a series of n priority queues. When a process in a priority queue runs out of quantum (or time left for it to be executed, which is predefined), it triggers the process’ RTS_NO_QUANTUM flag. This is what dequeues the process and marks it so it may no longer be run. The two system library routines exposed to user mode from the Kernel are sys_schedctl and sys_schedule. sys_schedctl is called by a user mode scheduler in order to take over the scheduling for a process. sys_schedule is then used by a scheduler to move the process to different priority queues or give it different quantum, based on its policy. It can also be used on currently running processes for the same functionality.
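Here’s a toy model of that kernel-side picking behavior: n priority queues, round-robin within a queue, and a process dequeued the moment its quantum runs out (the RTS_NO_QUANTUM moment, at which point a user mode scheduler would decide where it goes next). This is a Python sketch of my own, not MINIX’s C; the queue count and numbers are illustrative only:

```python
from collections import deque

class ToyScheduler:
    """Toy multilevel round-robin picker. Lower queue index = higher
    priority, as in MINIX. Each queue entry is [process_name, quantum]."""

    def __init__(self, nr_queues=16):
        self.queues = [deque() for _ in range(nr_queues)]

    def enqueue(self, proc, priority, quantum):
        self.queues[priority].append([proc, quantum])

    def pick(self):
        """Run one tick: return the process at the head of the highest-
        priority non-empty queue, dequeuing it when its quantum hits zero."""
        for q in self.queues:
            if q:
                proc = q[0][0]
                q[0][1] -= 1
                if q[0][1] == 0:
                    # Out of quantum: stop running it. In MINIX this is
                    # where the process would be handed back to the user
                    # mode scheduler for a new priority/quantum.
                    q.popleft()
                return proc
        return None
```

In the real system the "what happens next" decision for an out-of-quantum process belongs to the user mode scheduler via sys_schedule; this model just drops the process to keep the sketch small.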

Having a minimal Kernel scheduler is important because without it some processes would run out of quantum during system startup, before the user mode scheduler has even started. Not only this, but in the event that the user mode scheduler crashes, it could be possible for the Kernel to take over scheduling until the user mode scheduler is restarted (I’m unsure whether this is currently implemented, though). There are plenty of benefits to having a simple (if inefficient) Kernel implementation while placing the majority of the workload on user mode schedulers.

The report I linked previously that details all of the workings of the scheduler is fairly long (20 pages), so I’m going to continue reading it and taking notes on it. I don’t want each one of these blog posts to get too long and wordy, so I’ll make a few posts (all of them this coming week) on this topic, each a continuation of the last.

Apprenticeship Patterns: Kindred Spirits

This week I read through Kindred Spirits from Chapter 2. This pattern reflects on the stage of life of an apprentice who feels like they might not fit in entirely well with a company’s culture, perhaps because they have different interests or a different level of enthusiasm. Perhaps your organization is not very tight-knit and you’d prefer it to be. The solution is to make sure you reach out to those who are like you, and that you actively stay involved in what those people are doing. Read books, remember names, and attend meet-ups. Even getting coffee from time to time and discussing ideas can help immensely when your work life isn’t feeling satisfactory. You may even find yourself offered a position working alongside those kindred spirits who are just like you.

I found this pattern pretty interesting because, with graduation coming up, I personally have been pretty concerned about finding a work environment that I really enjoy — as I’m sure everyone is from time to time. I know that I’m someone who seeks to get involved in the places I spend my time, and that if I join a company it is important to me that I’m not just coming into work, getting things done, and then leaving. I want to find coworkers I connect with, work I can get invested in, and people who are enthusiastic and encouraging. From what I’ve heard and from what the chapter suggests, organizations that are truly encouraging are often few and far between. It may not even be the entire company; it could just be that those you associate with on a day-to-day basis are not quite as enthusiastic about your work as you are. As a result, I really enjoyed the tips this pattern suggested: remember those who are similar to you and who have shared goals in mind, and keep those people as close as possible.

It’s important to keep track of who you meet, who inspires you, who knows what you know and is passionate about what you’re passionate about. Make lists of communities and try to get involved in them. One thing I’d like to do personally is attend some software conferences — I know there are plenty, and many are not outrageously expensive, so they’d be a really great way to meet people and make connections. It’s important to keep those you mesh well with close to you, as those people will provide you with opportunity and friendship like no others.

“Why Doctors Hate Their Computers”

For our capstone class, we were asked to read “Why Doctors Hate Their Computers,” an article written by Atul Gawande. It was a wonderfully written article that gave insight into the period when hospitals adopted the new Epic software system, and how they suffered through the process. It gave different outlooks on the computerization of enterprises, the pros and cons, and the contrast between human-centric (adaptable) work environments and computer-centric ones. Adopting this technological overhaul was a major step in healthcare that hospitals across the nation decided to undertake. It came with training for nearly all employees, the hiring of technical staff, policy and routine changes for care providers, and a tremendous amount of headache.

The article dove into statistics on the career satisfaction that healthcare providers in different specializations report. On average, doctors spent about two hours on their computer for every patient-facing hour they spent in the clinic. A correlation was found between this and job dissatisfaction, and especially between time spent on the computer and burnout rate, as reported using the Maslach Burnout Inventory. Doctors in specialties such as Emergency Medicine, who spend large amounts of their time logging information into computers, reported significantly higher rates of burnout and job dissatisfaction than specialists like neurosurgeons, even though they spent significantly less time at work. With the overhaul of Epic, doctors had to spend even more time on their computers logging patient information, often after hours and at home. Patients were booked for double the time slots they had previously been booked for, effectively halving the number of patients seen per day. Not only this, but a study done by the University of Wisconsin found that the average workday for doctors increased to around 11.5 hours. While reading, it was pretty concerning to think about how these people who are trying to take care of others are doing so on such extreme schedules.

It wasn’t just learning curves that seemed to make the job harder on doctors. The senior vice-president at Epic, Sumit Rana, called one issue “the Revenge of the Ancillaries.” Since Epic was a system that would be used in every position in the hierarchy of the hospital, there were often disagreements regarding what administrative staff wanted the software to do and what doctors wanted the software to do. Since doctors had been used to calling the shots (for the most part) regarding patient healthcare, having restrictions placed upon them was a difficult thing. Not only was there the tension of having to learn and implement a new software system into their daily routine, but there was also tension “politically” in their work environment due to administration.

So who was the real customer for the system? I think it is clear to say it was hospitals, and from a sales perspective the technology was aimed at ease of use for doctors and administrative staff. However, Gregg Meyer, the chief clinical officer at Partners HealthCare who oversaw the introduction of Epic, put it really well in my opinion. “‘…we think of this as a system for us, and it’s not’, he said. ‘It is for the patients.’” Meyer is convinced that it will improve over time. The Epic system is new and absolutely massive. In 10 or 15 years, national EMRs will be far better than they are even today. There will be many of the same issues, but there is a tremendous amount of benefit that comes with them. According to the article, “In the first year of the study, deaths actually increased 0.11 per cent for every new function added—an apparent cost of the digital learning curve. But after that deaths dropped 0.21 per cent a year for every function added.” A system that does that seems worth the headache in my eyes. I hadn’t at all considered this viewpoint before reading this article, and it made great sense to me.

I think there is a lot that can be learned from this article far beyond the domain of hospitals, EMRs, or even software in healthcare. The article referenced “The Mythical Man-Month” by Frederick Brooks, which I found pretty interesting. It dives into the human aspects of software engineering and working in a computerized workspace. The actual development environment and technology used within the workspace have changed, but things like the scaling of collaboration and teamwork in a project-based environment have remained largely the same. Within the book, Brooks coined Brooks’ Law, which states that “adding human resources to a late software project makes it later.” Clearly even back in 1975, they knew a thing or two about the complexity of introducing and launching software products on a large scale — and the implementation of Epic was another example of exactly that.

Apprenticeship Patterns: Be the Worst

Consider yourself in the situation where you’ve joined a company, spent some time there, and learned quite a bit. You’ve climbed the craftsmanship ladder and others recognize your work as that of high quality, maybe even some of the best amongst the developers you work alongside. Your hard work has paid off and you’re a legitimately good developer. This is exactly what the pattern Be the Worst from Chapter 4 focuses on.

The problem with reaching this status is that you may well find yourself at the pinnacle of growth in your work environment. Everyone turns to you for learning opportunities, and you’ve definitively established yourself as a leader in your workspace. However, you’ve found yourself unable to absorb information from your professional environment like you used to. Your rate of learning has stalled, and you’re no longer continually developing and expanding your skillset.

The solution is something we’ve all heard before — surround yourself with those more skilled than you. Do so constantly. When your learning feels stalled, it may be time to locate another team whose skills outmatch your own (or at least who work in some field you have less experience with). Not only this, but keeping yourself in the ‘bottom tier’ of your coworkers will “unlock” other patterns, which will help you keep the apprenticeship mindset. This is key because it ensures that, if you’re motivated, you’ll have continuous growth. Now, don’t get this pattern confused — your goal is not to stay the worst on your team. It is to climb the rungs of the ladder until you’re an absolute coding machine, equipped with a host of skills you’ve picked up in the time you’ve spent chasing craftsmanship. Then, once you’re at the top of the ladder, find another to climb all over again. Eventually, you’ll find yourself a developer skilled enough to guide others through the same process.

What action can you take to push yourself towards this goal? I may have been implying that it’s wise to up and leave every group that is slowing you down, but that’s not entirely what I meant. There’s no reason to forgo professional relationships in the search of career development if you’re happy where you work. Instead, search for elite teams across the world (via the internet) who you can be a part of. Get involved in more communities, different projects, etc. Staying in communities seeking growth will encourage you to do the same.
