Category Archives: Week 7

Practice, Practice, Practice

While it is a likely contender for the most self-evident of the Apprenticeship patterns, I have found “Practice, Practice, Practice” to be the most inspiring thus far. The context this particular pattern describes is a situation I found myself in quite frequently in my previous undergrad: there is a need to get better at your craft and develop your skills, but it feels as though you’re constantly in performance mode. While there may be some merit to having a mirror by my side checking my posture and a metronome in my ears pushing me to increase my words per minute, it’s far more likely that the deliberate practice the authors allude to has more to do with firing the synapses dedicated to problem solving than with strengthening the myelin sheaths governing my ability to put a stick to the head of a drum.

I enjoyed the reference to K. Anders Ericsson’s research on the matter but was predictably bummed out when the authors pointed out that reality is not in fact perfect and that we, as apprentices, do not have a readily available pool of benevolent masters who will take us under their wing. As someone who was raised primarily to throw a baseball and complete the schoolwork assignments put in front of me, the idea of mastery never crossed my mind; I believed talent was something I was born without, and that my success in life would have more to do with my ability to follow orders and complete necessary tasks linearly than anything else. As a latecomer to the world of musical craftsmanship, I lacked not only the pedigree but also the prior mentorships that allowed my peers to sail through and seek out additional opportunities. I recognize in hindsight the benefits of a mentorship, but I also know that with mindful practice I was able to catch up to my peers and even succeed where some of them failed.

My ability to achieve competency came not from strong mentorships (which I never received) but from slow, concentrated practice, much like the katas of the dojos mentioned. My favorite prescription in this article is in the “Action” section: carve out some additional time for mindful practice, because practice makes permanent. I have several exercises in mind, and once I finish a few more homework assignments I look forward to practicing!

From the blog CS@Worcester – Cameron Boyle's Computer Science Blog by cboylecsblog and used with permission of the author. All other rights reserved by the author.

Best Practices for Using Docker

For the past few weeks, we have been working a lot with Docker and learning how to use it. Coincidentally, I came across an article describing some best practices for using Docker. Some of the things in the article were even things we used in class.

The article gives many things to keep in mind. First is size: images should not waste space, especially since bloat can be a security concern. Using Alpine Linux is a great way to save space, though it may not always be appropriate. It is also important to keep your Dockerfile current. Using the latest tag is one way, but the author also says to check regularly to make sure you really are building from the latest version. The author also mentions a tool called docker scan for finding known vulnerabilities in your Docker images. There is a lot of emphasis on Docker containers being simple and easy to create and tear down; the system should be designed to do so without adversely affecting your app, as that is the whole point of Docker. At the end of the article there are additional links to more articles about Docker by the same author.
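To make a couple of these tips concrete, here is a rough Dockerfile sketch of my own (the pinned tag, package, and entrypoint.sh are hypothetical placeholders, not taken from the article):

# Pin a specific Alpine release rather than relying on "latest",
# and check periodically that you are still building from a current version.
FROM alpine:3.18

# Install only what the application actually needs, without a package cache.
RUN apk add --no-cache curl

# Copy in just the files the container needs to run.
COPY entrypoint.sh /entrypoint.sh
CMD ["/entrypoint.sh"]

After building, the image could then be checked for known vulnerabilities with something like docker scan my-image:1.0.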

I selected this article because it is relevant to the current course material. We have been using Docker in all our classes for the last few weeks. As I was reading, I recognized the section where the author says to “Use Alpine Builds” because we do use those in class. The article even explains to use “FROM alpine”, which is how we start our Dockerfiles in class.

Because the tips in the article were kept very general, I was able to understand most of it. For more technical articles I often get confused or lost, especially since I have no experience working in the software development industry. Reading this article made me realize how far I have come since I started the CS major. Going in I had no programming experience and little to no aptitude when it came to computers. Seeing the things we learn in class actually be relevant in the real world is a very validating thing. It proves to me that I am actually learning, something that is only more difficult with remote learning.  

I hope to be able to use this article going through the course and on the final project. These tips should help me get the most out of Docker.

From the blog CS@Worcester – Half-Cooked Coding by alexmle1999 and used with permission of the author. All other rights reserved by the author.

Law of Demeter

Today’s blog is about the Law of Demeter, a.k.a. the Principle of Least Knowledge. This information is based on the article “Object Oriented Tricks: #2 Law of Demeter” by Arun Sasidharan on the website Hackernoon. The law goes hand in hand with the Tell, Don’t Ask principle and states that we may only call methods of objects that are: passed as arguments, created locally, instance variables, or globals. In other words, “each unit should only have limited knowledge about other units,” and only about units closely related to the current one. Basically, the Law of Demeter (LoD) says it is a bad idea for a single function to know the entire navigation structure of the system, or to reach through a chain of method calls, also known as a “Train Wreck.” The article states, “We want to tell our neighboring objects what we need to have done and depend on them to propagate that message outwards to the appropriate destination,” and that is how we solve the problem of “Train Wrecks.”
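To make that concrete, here is a small Python sketch of my own (the Wallet and Customer classes are hypothetical, not taken from the article) contrasting a train wreck with the Tell, Don’t Ask style:

class Wallet:
    def __init__(self, balance):
        self.balance = balance

    def debit(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class Customer:
    def __init__(self, wallet):
        self._wallet = wallet

    def pay(self, amount):
        # The customer knows how to pay; callers never reach into the wallet.
        self._wallet.debit(amount)

customer = Customer(Wallet(100))

# Train wreck: the caller navigates through its neighbor's internals.
# customer._wallet.debit(25)

# Law of Demeter / Tell, Don't Ask: tell the neighbor what you need done.
customer.pay(25)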

I chose this article because it was one of the first to come up when searching for “Law of Demeter”. When I looked into the article, it seemed like a reliable source. I also looked up reviews of the website before diving too far into the article; many people recommended the site, so I concluded that it was a trustworthy source. The article has paragraphs discussing the law/principle, code snippets to show the before and after of using the Law of Demeter, and then a final summary at the end to give an overview of everything discussed. I have found that articles with this structure help me a lot when learning a new subject.

Reading about the Law of Demeter brought me back to a few times when I personally broke the law on projects. There have also been a few times when I have seen people break the Law of Demeter in tutorials, such as the one I watched for the Decorator Pattern. With this article, I learned how to write clean, reliable functions and to use them in a reliable manner. I also learned that it is very difficult to accomplish; as the article states, it is more of a suggestion than a law for that exact reason. Unlike some of the other principles, this is not something we can become proficient in in a single day. It is something to set as a longer-term goal to improve on. I hope, through practice, to be able to utilize this principle in an efficient manner on future projects.

Source: https://hackernoon.com/object-oriented-tricks-2-law-of-demeter-4ecc9becad85

From the blog CS@Worcester – Austins CS Site by Austin Engel and used with permission of the author. All other rights reserved by the author.

Don’t Spend So Much Time Coding Cleanly!

There comes a point in every developer’s career where they transition from “make it work” to “oh my God, I need to make this look decent”. Initially, our goal is to create a functional Hello World program. However, as we develop, we slowly begin to learn proper code styles and naming conventions. After reading enough Stack Overflow forums, we start to become self-aware of how our code looks and we begin to focus on code aesthetics. While there’s merit in that, we often focus on the wrong things.

This video does a great job of explaining just that: there isn’t really any such thing as clean code. Now, that isn’t to say that there aren’t wrong ways to code. I’m sure we can all name plenty. That being said, the main point is that we shouldn’t spend our time trying to code perfectly the first time.

The best way to code is to do so as well as we can without becoming overly focused on getting everything right the first time. If you know you have the opportunity to do something right or cut corners, do it right. However, if you’ve spent 10 minutes trying to name a variable, give it a placeholder name and worry about it later. The first priority is functionality. Later on, you can review and refactor your code.

The main goal of refactoring is readability. In a compiled language especially, there is only so much efficiency the programmer can add. Compilers do a remarkable job of optimizing code by themselves, so your main goal should be readability. Make sure that other developers, as well as your future self, can read and understand your code. Add comments where necessary, but aim for self-commenting code. If your code is its own comment, that helps save space.
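As a tiny illustration (my own example, not from the video), renaming alone can turn a comment-dependent snippet into self-commenting code:

# Before: the name tells you nothing, so a comment has to do the work.
def f(xs):  # keep only the even numbers
    return [x for x in xs if x % 2 == 0]

# After: the names carry the meaning, and the comment becomes unnecessary.
def even_numbers(values):
    return [value for value in values if value % 2 == 0]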

In conclusion, focus on simple and readable code. Don’t waste your time doing something that can simply be refactored later, within reason. With that being said, try to do things right the first time if it’s obvious how to do it right.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

TECHNICAL DEBT

As I was preparing for my exams, I came across the term technical debt under the review topics; the term also appeared in one of our class activities. I decided to research the topic and write a blog post to broaden my understanding for the exams. I came across a blog post that talks about technical debt and chose it because the concept has been broken down into topics explaining the term in depth.

The blog describes the term technical debt as well as the good and bad reasons for taking it on. It also explains and gives examples of the various types of technical debt, such as planned technical debt, unintentional technical debt, unavoidable technical debt, and software entropy. It also explains how to identify technical debt in a project and various ways it can be avoided and managed. A YouTube video explaining the term is provided as well.

From this blog I learned that although the term may sound financial, technical debt is defined as the deviation of an application from its nonfunctional requirements. Simply put, it is code you will have to rework tomorrow because you took a shortcut to deliver the software today. Like financial debt, technical debt accrues interest, which shows up as the added difficulty of implementing changes later, and it needs to be addressed in order to prevent software entropy and risks to the source code. Teams may deliberately delay better code even when they understand that certain pieces will impede future development, but the interest accumulates if those pieces, mediocre code, and known bugs are left unresolved.
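As a toy illustration of the kind of shortcut the blog is describing (my own example, not from the post), consider a value hard-coded to meet a deadline:

# Shortcut taken to ship today: the exchange rate is hard-coded.
# TODO (technical debt): fetch the current rate from the pricing service.
USD_TO_EUR = 0.92

def price_in_eur(price_in_usd):
    return price_in_usd * USD_TO_EUR

The feature works today, but the longer the hard-coded rate sits there, the more “interest” accumulates for whoever eventually has to fix it.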

Technical debt can be identified through code smells, which are subtler than outright logic errors, and through rising complexity where overlapping technologies pile on top of one another. Other warning signs include product bugs that can crash an entire system and problems with coding style.

I learned that with planned technical debt, an organization makes an informed decision to generate the debt, knowing and understanding the risks and costs, while unintentional technical debt occurs as a result of poor practices, inexperience with new coding techniques, or rollout issues. Unavoidable technical debt is created when changes in the business or in technology present better solutions, making old code obsolete.

One way of managing technical debt is to assess it: acknowledge that the debt exists, share that discovery with stakeholders, and explain the importance of paying it off early. Technical debt can also be managed by using Agile methodologies that make sure teams are regularly working on it.

I hope others find this blog educative as well.

https://www.bmc.com/blogs/technical-debt-explained-the-complete-guide-to-understanding-and-dealing-with-technical-debt/

From the blog CS@Worcester – GreenApple by afua3254 and used with permission of the author. All other rights reserved by the author.

RESTful API from another Perspective

A topic that I chose to learn more about this week is ‘REST API Design’, as listed in our course syllabus. The source I used to supplement my learning was a conference talk called Never RESTing – RESTful API Best Practices using ASP.NET Web API by Spencer Schneidenbach. This source will be linked at the bottom of this post; however, I would like to mention that I enjoyed this conference talk, as I found it both informative and insightful.

In the talk, Schneidenbach goes over his experiences and gives insight into building and designing RESTful APIs. He breaks the work down into four activities: design, implementation, documentation, and maintenance. Although he mentions that in practice several of these activities happen at once when working on an API, he tries to keep the four separate in his talk. Throughout the talk he also brings up other topics, such as explaining what REST and APIs are; however, I would rather discuss the segments I found informative or insightful.

One of the things he brings up about design is that ‘RESTful API != good API’: an API that merely follows the constraints imposed by REST is not automatically a good API. Rather, he argues that when building an API we should do what makes sense and ‘throw out the rest’. I really resonated with this statement, as it made the design process feel more practical: when building an API we must be flexible and logical with our decisions, and rigid theoretical constraints are not always what works in real environments.

In conjunction with this statement, he also mentioned two prime rules he keeps in mind: KISS and be consistent. KISS means keep it simple, but the element of KISS I found insightful was asking who we are keeping things simple for. He argues that when designing we should always keep in mind the user experience of the consumer who will be using the API. I thought this was an interesting perspective: thinking about how to keep things simple for the consumer while designing. In addition to this rule, he also said not to be creative and to provide just what is necessary, no more and no less, which I agreed with. The consistency principle is a straightforward strategy of keeping things in line with accepted best practices; however, I also enjoyed his second comment, ‘be consistent, but be flexible when it makes sense’. This comment reiterated a theme in design: there is no silver bullet for designing an API, and although there are solid principles we should follow, we need to have the sense to find a balance with what our consumers require and want.
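The talk itself uses ASP.NET Web API, but as a rough sketch of the consistency idea in a stack I am more familiar with, here is a minimal, hypothetical Flask example with plural, predictable resource routes and no more surface area than necessary:

from flask import Flask, jsonify

app = Flask(__name__)

ORDERS = {1: {"id": 1, "status": "shipped"}}

# Consistent, boring naming: plural nouns, no verbs in the path,
# and the single-item route mirrors the collection route.
@app.route("/api/orders", methods=["GET"])
def list_orders():
    return jsonify(list(ORDERS.values()))

@app.route("/api/orders/<int:order_id>", methods=["GET"])
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(order)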

From the blog CS@Worcester – Will K Chan by celticcelery and used with permission of the author. All other rights reserved by the author.

Creating Chords from Sine Waves in Python

This week as part of my independent study I worked on feature extraction in Python. I have been using Python for Engineers as a reference, and it describes basic digital signal processing. As an exercise, I expanded on the code found in that chapter.

It’s a non-trivial task to create a sine wave in code (although compared to the most complex aspects of DSP, it’s a cakewalk). A sine wave will create the purest tone possible, as it creates a constant oscillation. This oscillation is what you perceive as a pitch. The equation for a sine wave is given as:

y(t) = A sin(2πft + φ)

where A is amplitude, f is the frequency, t is time, and φ is the phase in radians. We can ignore phase because this simply indicates where the wave starts at t=0, and this doesn’t matter to our ear. We hear the same oscillation regardless of where it starts.

Take a look at the code for creating a sine wave. Some of the details aren’t as important, but you can see the book for a description. The line that actually creates the sine wave values is:

sine_wave = [np.sin(2 * np.pi * frequency * x/sampling_rate) for x in range(num_samples)]

If you aren’t familiar with list comprehension in Python, this is just using the sine wave equation above, substituting time, t, as a specified number of samples divided by the sampling rate. The result is a list of values representing a sine wave. In reality, this is all an audio file is, with some additional encoding (and usually more interesting oscillations than a sine wave).

So what if we wanted to do this for a chord of multiple sine waves? Maybe using more list comprehension? Sure.

# Note Frequencies
a4 = 440
c5 = 523.25
e5 = 659.25
chord = [a4, c5, e5]

sine_waves = [[np.sin(2 * np.pi * freq * x/sampling_rate) for x in range(num_samples)] for freq in chord]

This is doing the same as above, only for each frequency in a list of frequencies. But that’s the easy part. The original code just multiplies the samples by the amplitude, then packs them as 16-bit integers and writes them to the file:

for s in sine_wave:
    wav_file.writeframes(struct.pack('h', int(s*amplitude)))

But we can’t simply do that for each sine wave in succession, or we’d get different sine waves playing one after another. That’s an arpeggio, not a chord!

So we have to get a little creative. But not too creative. If you think of each sine wave playing from a separate speaker, what you hear is the sum of the air pressure from each speaker. A single speaker is the same story: it’s just playing the sum of the three sine waves. So then, iterating through each index, we can add the values of the individual sine waves at that sample. I also had to scale the result down enough to store it as a short int, by dividing by two.

# Only write samples to the end of the shortest sine wave
shortest_sample_len = min([len(j) for j in sine_waves]) 
for i in range(shortest_sample_len): 
    current_frequencies = [wave[i] for wave in sine_waves]
    value = sum(current_frequencies) / 2
    wav_file.writeframes(struct.pack('h', int(value*amplitude)))
[Figure: Output of the sums of the sine waves before multiplying by amplitude]

Python for Engineers goes on to describe how to use a Fast Fourier Transform to get the frequencies from the wave file with a single sine wave. But the code works just as well for the sine wave chord! This is because the FFT is an array that treats each index as a frequency, and the value at that index is the frequency’s amplitude. This means that regardless of the number of tones in a sample, the FFT can be plotted and will reveal outliers: those indices with a much higher amplitude. These are the frequencies in the audio file.

One last thing to keep in mind is that since the indices are used to represent the frequencies, they will be whole numbers. For example, c5 = 523.25 Hz will show up as a spike at indices 523 and 524, with 523 having the larger value of the two.

[Figure: Output of finding the frequencies of the chord created above]
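For reference, here is a minimal sketch of my own (assuming the sine_waves, sampling_rate, and num_samples variables defined above) for pulling the chord’s frequencies back out with NumPy’s FFT:

import numpy as np

# Sum the three sine waves into one chord signal, scaled as above.
samples = np.sum(sine_waves, axis=0) / 2

# Magnitude of each frequency bin; each index spans sampling_rate/num_samples Hz.
spectrum = np.abs(np.fft.rfft(samples))
bin_width = sampling_rate / num_samples

# The loudest bins approximate the notes in the chord (~440, ~523, ~659 Hz).
loudest_bins = np.argsort(spectrum)[-3:]
print(sorted(loudest_bins * bin_width))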

Full code for creating a chord in Python is posted here.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Sweep the Floor

In the craft tradition, newcomers start as apprentices to a master craftsman. They start by contributing to the simpler tasks, and as they learn and become more skilled, they slowly graduate to larger, more complex tasks.

—Pete McBreen

The Sweep the Floor pattern shows that an apprentice can work very closely with experienced professionals in any industry, but they cannot be their equal from the first day they start working at the company. An apprentice’s first contributions are usually small and simple tasks, because the team you are part of doesn’t know where you fit best, and neither do you. As bad as this may sound, “sweeping the floor” is a very good start, because you show your superiors that you can do small things with high quality, and in the future you can do greater things with the same quality.

What got my attention is Paul’s experience in a software company. He was aware that he couldn’t yet write code at that time, so he accepted to contribute to the company differently, even by sweeping the floor for real. He was only 17 when he first started, but slowly he began taking on more technical assignments, like updating the content of the website or working on backups. These tasks helped him a lot in gaining the trust of the team. Because he was able to do the small tasks right, Paul would later bring that same passion to doing something huge in the company. Let’s not forget that it doesn’t matter how small or big your assignment is, as long as you do it with passion and get the best out of it. As the pattern says: if no one sweeps the floor, then the glamorous work can’t be done because the team is hip-deep in the dirt. Every role is important, and no one should be ashamed of their job!

In my opinion, being an apprentice is very useful after you graduate. The courses you have taken during college will not be everything you need when you become an apprentice. Being able to start from the dirt will help you understand, step by step, how things work in the company. Not everybody is ready to start from the dirt, though, especially people who already have some experience in the field. However, depending on your commitment to learning, in the future you might be able to do great things on your own. In conclusion, take the opportunity to Sweep the Floor, and make sure you prove to yourself and to the team that you’re worthy of doing something greater every day.

From the blog CS@Worcester – Gloris's Blog by Gloris Pina and used with permission of the author. All other rights reserved by the author.

Practice, Practice, Practice: No, not that Dave Thomas

While this pattern is excellent advice, as the title suggests, I could not help but get distracted by the quotes and advice offered by Dave Thomas. Of course, it was nothing about the substance of what he was saying; I just could not get the Wendy’s founder, also named Dave Thomas, out of my head. As for the actual substance of this particular section, it falls into the category of things I am trying to do more consistently in my academic life. Despite being kind of nerdy, romanticized, and potentially misapplied karate jargon, it is effective enough in conveying the idea of lessons, or more accurately patterns, that should be practiced often. Obviously, the linchpin of these lessons is that they are done in private, where you can fail gracefully, a common concept in many of these patterns.

Regarding how I apply these patterns, I use a site called Codewars, where these exercises are also called kata but are done in a web IDE and can be checked for functionality within seconds. All the solutions submitted by users can also be searched, so one can “compare notes” with other users and see what, often insanely truncated, solutions others reached. This fulfills the dojo aspect of these coding exercises as well. If ever you were looking to apply a pattern most directly, this is it, and I hope other students will use this resource. While I don’t believe the site allows users to create their own kata the way the pattern suggests, they do have a suggestion section in the forums, and after long hours of honing your skills you could dream something up to really test yourself and others.

This pattern is related to a favorite of mine, Breakable Toys, and not unreasonably so: many of these exercises are incredibly difficult, and I have found myself dropping them into my own IDE to finish off at my own pace apart from the web IDE. As mentioned previously, the school and work schedule I maintain makes it a little difficult to keep up with these exercises, or to complete them consistently enough. However, having read this pattern, being set to graduate, and barreling toward my subsequent career, I do not want to neglect any practice I can get.

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

Be The Worst

I love this apprenticeship pattern. It immediately caught my attention, and although I knew exactly what it was getting at by reading the title, it’s a great reminder.

We’ve probably all heard the phrase “if you’re the smartest person in the room, you’re in the wrong room”. That’s what this pattern is describing. As the worst on a team, you will have an occasion to rise to, riding on the coattails of your teammates and becoming better for it. This means harder work. It requires the ability to apply many other patterns described in the book to succeed efficiently.

The above quote is a bit harsh, though, and isn’t mentioned in the book. It’s not that one should be openly criticizing their coworkers or consider themselves the smartest person in the room, despite every programmer’s periodic God complex. Besides, it’s difficult to ascertain who exactly is the smartest, or best developer, or best employee. There are many metrics and a team is full of people with different strengths and weaknesses. The individuals are less important than the group: this pattern is probably best implemented by looking at teams as a whole. Are your skills lower than those of the group as a whole? This could be caused by a lot of great developers not working well together. This, too, will slow your progress down. This, too, is a reason to change.

I have mixed feelings about this sentiment. It’s patently true that working with people significantly above your skill level will help you improve if you’re willing to put the work in. But it would be a real shame to leave a close, efficient team just because you’re no longer worse than all of them.

But that is an important part of growth: knowing when to move on. Finally ending my college career makes me consider all the things I could have done, or could do if I finished a second concentration or major, or what I missed in general education credits, extracurricular activities, and social experiences by cramming the entire degree into three years and not taking my time to enjoy it. It’s nearing the time to move on, though. Despite a connection to classmates and professors, it would not be wise to stay longer. This doesn’t mean professional or academic relationships have to end, but the majority of my time will soon be spent improving in my chosen career, with new challenges, on a new team.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.