Record What You Learn

The section "Record What You Learn", found in chapter five of Apprenticeship Patterns by Dave Hoover and Adewale Oshineye, opens with the common saying "Those who do not learn from history are doomed to repeat it". This section focuses on the practice of keeping some sort of diary to record what you learn throughout your career. Doing so creates a useful resource that can be referred back to at any time. It also has the added advantage of helping you find like-minded individuals through common ground in what you have both learned. Both authors have seen these benefits because they keep records of what they've learned themselves. Oshineye uses two wikis, one public and one private: the public one to share lessons he's learned with others, and the private one to give honest feedback on his own progress. Meanwhile, Hoover kept a text file of references and quotes that eventually grew so large that he decided to publish it online for others to use. What separates this pattern from one such as "Share What You Learn" is that "Record What You Learn" focuses on preserving the road you took toward mastery so that you or others might learn something new from it.

While I haven't reached mastery yet, I do generally try to write down what I learn. Admittedly, I do it partly so I won't fall asleep during lectures, and I often don't go back to older notes. But I still notice some of the benefits: more of the material tends to stick when I take notes, since writing it down forces me to pay closer attention. And most of my older notes still exist, either digitally on my hard drive or physically in a stack of notebooks I keep inside a box in the corner of my room. They're still around and I can access them at any time, which is one of the main benefits of "Record What You Learn". So, when I'm talking to someone and they bring up a concept that I only somewhat remember, I can open up my notes and refamiliarize myself.

From the blog CS@Worcester – Rainiery's Blog by rainiery and used with permission of the author. All other rights reserved by the author.

Nurture Your Passion

This week I decided to talk about the Nurture Your Passion pattern. I think this pattern applies to a lot of people in different ways. Our success comes not so much from what we do, but from how well we do it. The pattern also illustrates that regardless of your job or your position on the company ladder, you can be successful if you have passion for your work. Whatever your current job, bringing passion to your work can lay a foundation for success, not just in that job but for every rung you want to climb up the company ladder. You may hate your job now, but the attitude you take towards it can play a pivotal role in your career.

The book describes a situation where you work in an environment that stifles your passion for the craft, and what solutions you can follow. It is hard for your passion to grow when exposed to such hostile conditions, but there are some basic actions you can take to sustain it. Find something at work that interests you, identify it as something you enjoy, and pour yourself into it. Join a local user group that focuses on something you want to learn more about. Immersing yourself in some of the great literature of our field can carry you through the rough spots when your passion is in jeopardy. Moving into an organization that offers career paths congruent with your own can also protect your passion.

This is all good advice to follow. It is hard to find everything in one place, but at the very least, if you do something you like, you are going to be happier. Remember that passion is an emotion, a state of mind, so the first thing you have to do is motivate yourself. Turn to another emotion to find the motivation that you need. Once you have the motivation, you can apply the passion. Remember, it is not about how you feel about your job; it is about putting passion into your work. Maybe you need to learn new skills, or maybe you just need to fully engage yourself in your work.

References:

Apprenticeship Patterns by Dave Hoover and Adewale Oshineye

From the blog CS@Worcester – Tech, Guaranteed by mshkurti and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

Summary

In this sprint, I mainly did the following things:

What Worked Well

This sprint, I worked a lot with Marcos on the backend. Overall, I think that we worked well together. This was the first time I spent most of my time working with someone else, and it went well. We didn't really run into any issues because we made sure to be careful with git. I now see why it's recommended to add new features on a new branch. We were able to split up the work and stay in communication.

I also had a decent understanding of OpenAPI and Node during the work. It felt like I knew what I was doing in a sense.

What Didn’t Work Well

I think the team overall just had a lot to do outside of this class. I was able to get done what I needed to during class, but if I had had to work on things outside of class as well, I wouldn't have had both the time and the motivation. I think that overall, as a team, we didn't get as much done as I expected because people were busy with other classes.

We also still sometimes feel lost and don’t really know what questions to ask. This applies mostly to the details of the design or implementation. However, it was a lot better than Sprint 1.

What Changes Could Be Made to Improve as a Team

We need to get used to typing out more info for the cards. I've also noticed people aren't always moving the cards they're working on into the "doing task" column. Besides that, I think as a team we're doing pretty well. We work together when it's necessary or convenient and manage our own work otherwise.

What Changes Could Be Made to Improve as an Individual

I could dedicate more time to this class overall and I could ask teammates if they need help rather than always waiting for them to ask for it.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

Software Quality Assurance and Testing Blog Post #4 (Program Graphs and DD Path Graphs)

The latest topic in my Software Quality Assurance and Testing class has been learning how to read and write program graphs and DD path graphs. We have also been learning how to make DD path graph tables that correspond with the DD path graphs and the program graphs. Our most recent assignment was all about these topics, and with the amount of work we have put into learning them, the assignment proved not to be too difficult for me.

It is generally easy to read program graphs because pretty much every node corresponds to a single line of code in a program. Where it gets tricky is when the program has loops and if statements. Converting from the program graph to a DD path graph is also not too hard if you know what you are doing: many nodes from the program graph can be clumped together into single nodes (like all the instance variables, for example). Lastly, creating the DD path graph table takes into account both the program graph and the DD path graph, but again is not too difficult to read or fill out.

The amount of time we have spent on these topics in class while doing in-class activities has helped me tremendously, because looking at the graphs without knowing what you are looking at can be daunting and confusing. As much as I enjoy being able to understand the graphs and make them if I want, it is even more enjoyable knowing that these topics can all be used in real life. It is always very reassuring when I research topics from class and they turn out to be prominently used in the real world, with lots of examples and helpful sources to teach me about them. In this case, these topics all seem to be used a lot and also seem to be very helpful for those who have used them to study how a program is working (or how it is failing).
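
As a rough illustration of the idea (the method below and its node numbering are my own, not taken from the assignment), consider a small Java method. Each executable statement becomes a node in the program graph, while the loop and the if statement become decision nodes with two outgoing edges.

// Hypothetical example: mapping a small method onto a program graph.
public class GraphExample {

    public static int sumOfPositives(int[] values) {
        int sum = 0;                               // node 1: straight-line code
        for (int i = 0; i < values.length; i++) {  // node 2: decision (iterate or exit loop)
            if (values[i] > 0) {                   // node 3: decision (true or false branch)
                sum += values[i];                  // node 4: executed only on the true branch
            }
        }                                          // edges from nodes 3 and 4 loop back to node 2
        return sum;                                // node 5: reached once the loop exits
    }

    public static void main(String[] args) {
        System.out.println(sumOfPositives(new int[] {3, -1, 4})); // prints 7
    }
}

When converting to a DD path graph, chains of nodes that always execute one after another collapse into a single DD path node, while the loop and if decisions keep nodes of their own; the DD path graph table then lists, for each DD path, which program graph nodes it contains.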

From the blog CS@Worcester – Tim Drevitch CS Blog by timdrevitch and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective Blog Post

This second sprint has been a bit better in terms of actionable steps; this was aided greatly by the mock front and back ends. The mock allowed us to test configuring a realm to a container locally, which was the floor for proof of concept. Unfortunately, the lack of architecture made further progress confusing, as I didn't understand why all images couldn't be on the same container network. After it was determined that the architecture called for further subdivision, it became unclear which microservices should be on which image and/or network. For instance, it was more or less determined that Keycloak should be on its own separate container (presumably with WildFly). At the beginning of the sprint I wrongly believed, based on previous internet searches, that NGINX should be on the same image as Keycloak, but this was later discovered to be unnecessary, as the mock front end already contains NGINX within our system.

On that same note, the task breakdown and its codification into issues for the respective boards suffer from an incongruity where ignorance of the matter begets ignorance in the approach, and then future work is inaccurately reflected in the process. This isn't necessarily an issue with the approach as a whole, but it seems as though, for the past two sprints, this unfamiliarity with the microservices and their implementation into our architecture forced me into a series of micro pivots which quickly evanesced away from the originally declared issues. If the goal of Scrum is not to be constantly inundated with creating issue cards, updating them, and closing the now-obsolete ones, then perhaps it would behoove us to reconsider the amount of research time necessary, or to have a much more closely guided approach to issue construction. If this rapid pivoting away from cards is acceptable (and our grades reflect this), then that's not particularly bad news for students. However, we have to be honest with ourselves and have the hard conversation that Scrum in this setting does not accurately map to what Behr et al. would refer to as WIP (work in progress).

The low-hanging fruit of constructive team feedback would be the necessity of more frequent external meetings. It certainly was not for lack of trying; the most charitable reading of the situation is that there was no strong consensus on when the team schedules would align. We also would have benefited from more frequent contact internally. Barring all the real-world, unforeseen obligations that took team members away from us, the scattering of the team into groups caused our reviews to be staggered, resulting in the loss of two full class periods of collaboration. My prescription would be that those who come after us should drill down on establishing a running prototype before splitting off into the other groups.

My individual criticisms of my workflow still largely touch on the externalities I've lamented in a prior blog post, and, if I must touch on them briefly, they've stayed mostly the same. It's quite apparent that certain teaching staff at the university have struggled to pivot to remote teaching and, if I'm to attribute it to ignorance and not malice, they seem to have bequeathed those struggles to their students. To those teachers I would like to say that Monday holidays are not a surprise and certainly not an invitation to delegate away your teaching responsibilities because you're lagging behind curriculum milestones. If these teachers cannot grade in a timely manner, then my deepest sympathies, but please do not complain that the bulk of your students keep getting concepts incorrect and you have to re-grade their assignments and/or assign extra credit to help inflate grades. I'm a commuter student with one semester until graduation, so I will stay here at Worcester State; if I were a commuter student with two semesters left, I would have transferred out.

https://gitlab.com/nanuchi/techworld-js-docker-demo-app
When paired with their "Docker Tutorial For Beginners", this repository has been an invaluable resource for teaching myself Docker, and one I continue to use as a reference.

From the blog CS@Worcester – Cameron Boyle's Computer Science Blog by cboylecsblog and used with permission of the author. All other rights reserved by the author.

Sprint #2 Retrospective Blog Post

The second sprint of the semester started a bit rough for me with my surgery, but I was able to get most (if not all) of my work done throughout the weeks we worked. I was able to help a lot with creating the cards and giving them descriptions at the start, and then for the first week or so of the sprint I was very busy. After this initial period, I was able to catch up on my work and get a lot done.

I was primarily in charge of creating an updated example of our front end based on an old one provided to us from previous years. For this, I used Google Forms, but in the end I had to take screenshots of all my work and upload those in case the forms document I created got deleted or lost. This will be helpful for our team and for any students working on this project in the future. There is still a bit of work I need to do on this example to make it exactly what we want, but that has been added as a small task for me to complete in the next sprint.

Overall, our entire team did very well and got a lot done, and although I was not assigned the backend coding, I was able to lend my insight, some advice, and some code to my teammates. This sprint was a lot different from our previous sprint, primarily because it had less to do with learning how to use cards and work in a sprint and more to do with getting real work done to further our progress on this project. I presume that the next sprint will be even more important, and we will have a lot of hard work to do to try to complete all or most of our cards.

Some things that worked well this sprint were our team's strong work ethic for getting everything done and our ability to communicate well and let each other know what needs to get done and what we need help with. For this sprint, we actually added a member because the IAmSystem team split up after the first sprint. This was not a problem at all and in fact was helpful, because we were able to get even more cards completed over the couple of weeks we had to work.

If I were to change anything or expect anything more from the next sprint, it would be that I personally will be able to get a lot more work done earlier in the sprint without having a surgery as a distraction. I am excited to see how far we can get on this project by the conclusion of the semester, and hopefully we will end up with a nice-looking, working product. This whole experience has been great for me so far for learning how to work in groups during sprints, and I know that after college I will run into many projects where I have to work like this.

From the blog CS@Worcester – Tim Drevitch CS Blog by timdrevitch and used with permission of the author. All other rights reserved by the author.

Erockwood Post #1

In my operating systems class, we are learning about all of the important roles the operating system has in a computer. The operating system coordinates many managers to keep everything running as smoothly as possible. One thing I am most interested in when thinking about operating systems is concurrency and multithreading. Most tasks are done serially, which is to say one at a time. This worked well back in the day when most computers had only one or two cores. Nowadays, however, CPUs usually have four or more cores, and if all the tasks an operating system runs are serial, then three quarters of the CPU's resources are not being utilized. This is where concurrency, parallelism, and multithreading come into play. With them, we allow programs and tasks to run simultaneously, using more than one CPU core, to do more work in less time.

The only problem with this is writing the programs and tasks so that they actually take advantage of more cores. I know that when I first tried to create and run a simple C program that would use more than one core, it was very confusing trying to get each piece of work handled by a separate thread. Eventually, after some trial and error, I got it working, and how to write this type of program slowly started to make sense. For very simple calculations like the ones in my example program, there is not much performance to be gained from multithreading, but in much bigger programs with bigger calculations to be done, there is definitely time that can be saved by multithreading the process.
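
The program I wrote was in C, but the same idea can be sketched in a few lines of Java (the class and numbers below are just an illustration, not my original program): two threads each sum half of an array, then the partial results are combined.

// A rough Java sketch of the same idea (not the original C program):
// two threads each sum half of an array, then the totals are combined.
public class ParallelSum {

    public static void main(String[] args) throws InterruptedException {
        long[] numbers = new long[1_000_000];
        for (int i = 0; i < numbers.length; i++) {
            numbers[i] = i + 1;                    // fill with 1..1,000,000
        }

        final int mid = numbers.length / 2;
        long[] partial = new long[2];              // one slot per thread, no shared counter

        Thread first = new Thread(() -> {
            for (int i = 0; i < mid; i++) partial[0] += numbers[i];
        });
        Thread second = new Thread(() -> {
            for (int i = mid; i < numbers.length; i++) partial[1] += numbers[i];
        });

        first.start();                             // both halves run at the same time
        second.start();
        first.join();                              // wait for both threads to finish
        second.join();

        System.out.println("Total: " + (partial[0] + partial[1]));
    }
}

For a tiny sum like this, the thread overhead outweighs any gain, which matches the point above; the payoff only shows up when each thread has a substantial amount of work to do.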

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.

Learn How You Fail

Failing is unavoidable, and it happens to everyone at some point. In reality, anyone who has never failed at anything has either stopped challenging themselves or learned to ignore their own mistakes.

Your skills are progressing, but you are still failing and still have some weaknesses. So instead of drowning in a sea of self-pity over previous failures, develop some self-awareness of your own patterns, habits, behaviors, and the other factors that play a role in causing you to fail. When you are aware of yourself and the things that mess you up, you give yourself the option of either trying to solve the issues or cutting your losses. You cannot succeed at everything, and recognizing your shortcomings is critical because it allows you to actively recognize distractions and concentrate on your goals.

Accept that there will be certain stuff you aren’t very good at.

Once you learn to accept your failures you can set realistic limitations on your goals.

Oftentimes I get irritated by the fact that I keep failing, but if I accept it, my failures become learning opportunities. I can then seek out guidance and learn new skills from someone who knows more about code than I do. It is hard, honest work to admit your shortcomings, and it will take a lot of failures, but I am confident that eventually it will all be worth the frustration.

From the blog cs@worcester – Coding_Kitchen by jsimolaris and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/community/-/issues/6

(Read over/review UpdateGuestInfo for frontend): In the comments, there is detail on how the frontend will function for a guest info update. In the case of a new or returning guest, the ID will be entered first, and then the guest will be accessed if they exist.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/community/-/issues/35

(Implement components/schemas to be used in backend endpoints): Updates were made to the OpenAPI.yaml file. These included new responses, corrections to schemas, and corrections to endpoints.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/backend/-/issues/2

(Implement API for backend endpoints in Node): The title is self-explanatory here; I implemented the GET route while David implemented the PUT route, as well as updating all the example files to fit GuestInfoSystem rather than the items from InventorySystem. My changes occurred in endpoints.js and guest.js.

As far as our timing within the sprint goes, we did much better this time than last time. We made use of due dates and stuck to them, even if refactoring was necessary afterwards. Our goal was getting working backend code up and shared with the group, and from there, everyone was able to get on the same page in a timely manner. We also made better use of communication within GitLab, mainly in the cards and via merge requests. This way, even when Alex, Tim, or Cam were not working on the backend, they would still be able to see the code and understand how it functioned.

We still have some significant improvements to make as a team, though. Even though everything was made more public through cards and merge requests during this sprint, getting everyone on the same page about how the project as a whole functions seemed to be lacking at times. This definitely slows down progress, as there was separation between the backend development (what I was mainly contributing to, with a great, great deal of contributions from David) and the EventSystem and Keycloak development. I'm still not sure everyone grasps how deployment of the system works with Docker. We still need to communicate and collaborate more than we currently do, as well as ask more questions if everyone is not in sync. This leads to some ambiguity in the division of labor as well. We tend to work linearly (with some people waiting to work on their part) instead of in parallel. This is where communication, as well as planning, needs to be better.

As a team, we should be communicating better during our time in class. I think I had assumed that everyone was on the same page when no questions were being asked, which I don't think was necessarily the case. Better communication would certainly help the pace at which we work. There is little progress to be made without a thorough understanding of the system, and coming up on our last sprint, there is no time to waste. I would hope no one feels uncomfortable asking questions, though; as Apprenticeship Patterns states, expose your ignorance. And perhaps it's just a lack of articulation in our cards. We have been better about using our cards, but I'm sure they could still benefit from more detail.

I think I should do better at organizing with members of other teams, especially for the frontend. I wasn't sure how the backend would work, so I put off frontend development in favor of the backend. I definitely feel comfortable with the backend now, but I also feel a bit behind on the frontend. I expect this upcoming sprint to be exactly that: a somewhat hectic sprint. We have our work cut out for us, so focus is the name of the game. Organizing with members of other teams should be greatly beneficial, especially given my beginner's status with Vue.js. Three minds are greater than one!

From the blog CS@Worcester – Marcos Felipe's CS Blog by mfelipe98 and used with permission of the author. All other rights reserved by the author.

Software Testing: JaCoCo

For this week's blog post I chose to focus on a topic that I had to research for an honors project: JaCoCo. In case you are not familiar with it, JaCoCo is a software tool used to measure code coverage, which in short means checking how much of a system's code is exercised by its tests. This may sound redundant if you, like me, have only really written tests for small-scale projects, but as a program's complexity increases, so does the complexity of the tests required. Additionally, you will have to write more and more tests, making it difficult to determine exactly what has been covered. Before anything else, however, I am going to break down exactly how code coverage is checked.


Having not known about JaCoCo at all, I found an article on Baeldung that was extremely useful in understanding how it works. As I stated, the essential idea is to check code coverage, which consists of line and branch coverage. Line coverage means JaCoCo checks how many lines of code are executed during tests to determine which are actually being tested. Branch coverage is similar, but focuses on the logical splits in the program, such as conditionals like if statements. The total number of branches present is then compared to the number that were covered to give the results. There is one more aspect of a program JaCoCo reports: cyclomatic complexity. This is essentially a measure of the logical complexity of the program, or how many branches the program has. Now that we know what JaCoCo is, we can discuss why you might want to use it.
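
To make the line and branch distinction concrete, here is a small hypothetical example (the class, test, and numbers are mine, not from the Baeldung article). The if statement is a single decision, so it contributes two branches to the report.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical production code: one decision point means two branches.
class Shipping {
    static int shippingCost(int weightKg) {
        if (weightKg > 10) {       // decision: heavy parcel or standard parcel
            return 20;             // executed only when a test uses a heavy parcel
        }
        return 10;                 // executed only when a test uses a standard parcel
    }
}

// With only this test, a coverage tool would mark the "return 10" line as missed
// and report the if statement as 1 of 2 branches covered; adding a second test
// with a weight of, say, 5 would bring both line and branch coverage to 100%.
class ShippingTest {
    @Test
    void heavyParcelCostsMore() {
        assertEquals(20, Shipping.shippingCost(12));
    }
}

The cyclomatic complexity of shippingCost would be 2, since there is exactly one decision to navigate.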

This testing tool certainly has some major benefits for anyone working on a large system. If you are writing tests for a system, it may begin as something straightforward, but as the system grows it becomes increasingly hard to track exactly which components are being tested. This is where JaCoCo comes in, allowing you to see exactly which lines, or just which classes, are being tested so you can ensure there is sufficient coverage. Additionally, it can show you the degree of logical complexity present in the system, which can then be adjusted accordingly. One point to keep in mind, which is mentioned in the article, is that you should not get tunnel vision when trying to increase code coverage. High code coverage does not necessarily mean the tests being run are adequately testing the system. It is more important to focus on writing a few functional tests for each component than to have a plethora of tests that do not really do much of value. The article discusses JaCoCo in more detail than I can, so if this interests you I recommend checking it out!

Source:

https://www.baeldung.com/jacoco

From the blog CS@Worcester – My Bizarre Coding Adventures by Michael Mendes and used with permission of the author. All other rights reserved by the author.