Category Archives: Week 7

Apprenticeship Pattern: Expose Your Ignorance

The pattern I decided to read is titled “Expose Your Ignorance,” from chapter 2 of Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman by Dave Hoover and Adewale Oshineye. This pattern starts by posing a hypothetical problem in which the reader is working for an employer that expects the reader to deliver a product, but the reader is unfamiliar with the required technologies. The solution to this problem is to give an honest response and to build a reputation on your ability to learn rather than on what you already know. This pattern also mentions that asking questions is a great way to expose your ignorance, because you show your teammates what you are learning. The pattern ends by discussing how exposing your ignorance is a trait of walking the long road, because it means you are open to learning unfamiliar technologies and new domains, whereas experts tend to narrow their scope of learning, practice, and projects by specializing in specific things.

There were two instantly helpful pieces of information I gathered from reading this pattern. The first was that, as I work on becoming a software craftsman, it is more useful to be known for my ability to learn than for the specific technologies I am already familiar with. This advice immediately affects how I think and how I will speak about future projects.

The second thought-provoking piece I read was that becoming an expert is a by-product of the long road I am on. This absolutely blew my mind, but it also humbled me, because I should have known this already: I use a similar perspective for exercising in the gym. My goal of living a long and healthy life means that strong muscles come as a result of going to the gym regularly. I think one of the reasons this was so striking to me is that I feel I have such a long way to go before I am confident enough to call myself an expert in something. Hearing that becoming an expert is a by-product of the long road showed me just how long this road really is for me.

Lastly, there isn’t anything significant in this pattern that I disagree with. It was an enjoyable pattern to read and one where I learned some valuable information.

From the blog Sensinci's Blog by Sensinci's Blog and used with permission of the author. All other rights reserved by the author.

Practice, Practice, Practice

Malcolm Gladwell published a book analyzing how practice has helped successful people succeed, popularizing the 10,000-hour rule. One of the sections in Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman makes a similar point: practice is an essential key to becoming a better software engineer.
What I found interesting is the connection between some of the points in this section and Malcolm Gladwell’s in Outliers: Gladwell shows that practice was among the factors that led to the success of both the Beatles and Bill Gates. These people managed to practice day and night, to the point that they were comfortable with their material and tools no matter what, or where.
The Apprenticeship Patterns guidance is to take the time to practice your craft without interruptions, to the point where we become comfortable with our mistakes. I found it very interesting how these two authors share the same perspective on spending more time practicing and confronting problems. Ultimately, at a critical time we might run into those problems again, and being able to solve them would save us time and resources.
To be honest, I’m not a fan of lectures, so sitting through a lecture in college is somewhat of an achievement for me; I’m more of a hands-on person. Fortunately, that is one of the good ways to learn as a software engineer. As the book notes, a beginner should learn by doing rather than by reading and sitting through lectures. Besides that, doing and repeating exercises helps us sharpen our skills and train our minds and bodies to react to the discipline. Another excellent point in this section that I found helpful is that writing a lot, and writing code often, helps us develop good habits of mind.
Being open to feedback is introduced in this section as well, which I completely agree with. Getting feedback helps us determine whether we are on the right track. But part of me finds this odd, because not everyone can find a good mentor, and not everyone can give you good feedback. This is also often hard for people with a small circle of relationships and connections. Relying on someone else’s feedback without knowing our own issues is not a wise choice.
What I learned from this is that finding an exercise and sticking with it until I can solve the problem on my own, then observing the solutions I have produced over time as a programmer, would have a measurable impact on my ability to solve any problem later in my career.

From the blog CS@Worcester – Hung Nguyen by hpnguyen27 and used with permission of the author. All other rights reserved by the author.

Your First Language

For the next few years, this could be your principal language. If you’re asked to solve a problem and it dictates a programming language, let that drive your choice. If you’re pursuing a job that calls for a specific language, build a toy application using that language, ideally an open source project, so that it is easy to get help; your fluency can be the difference between a problem taking minutes or days of your time. This helps to ground your learning in reality. An essential way to enhance this learning is to seek out opportunities to create feedback loops. Some languages have better tools for feedback than others, but no matter the language, you can take a few steps to set up a learning sandbox to experiment in. Many languages offer similar possibilities; one programmer keeps an empty Java class open in his IDE for whenever he wishes to play around with an unfamiliar API or language feature. Once you’ve learned enough to actually start writing code, try test-driven development techniques; given the popularity of test-driven development, you’ll be hard-pressed to find a language without a testing framework, so don’t hesitate to write simple tests to check your understanding of the language, or just to familiarize yourself with the testing framework. Eventually, you’ll move from just writing learning tests to writing tests that examine your real code, rather than your understanding of language constructs and APIs.

Over time, you’ll discover that there are numerous other techniques beyond simple unit testing that use the computer to verify your work. The pattern goes on to discuss learning to think differently by learning a brand-new language, and notes that the best way to absorb a language is to work with an expert in it.
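A “learning test” of the kind described above might look like the following Python sketch: tiny unit tests that probe the language and its standard library rather than any application code (the behaviors being checked here are standard Python facts):

```python
import unittest

class StringLearningTests(unittest.TestCase):
    """Tests of the language itself, not of application code."""

    def test_split_defaults_to_any_whitespace(self):
        # Checking my understanding: split() with no argument
        # splits on runs of whitespace and drops empty strings.
        self.assertEqual("a  b\tc".split(), ["a", "b", "c"])

    def test_sorted_is_stable(self):
        # Equal keys keep their original relative order.
        pairs = [("b", 1), ("a", 2), ("b", 0)]
        self.assertEqual(sorted(pairs, key=lambda p: p[0]),
                         [("a", 2), ("b", 1), ("b", 0)])
```

Running these with `python -m unittest` gives immediate feedback on whether your mental model of the language matches reality.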

I fully agree with the Your First Language pattern, especially with creating feedback loops to help gauge your progress. I know for myself personally, I like to look back at code that I may have changed to see where I improved and where I still need to improve. Reaching out to a team member who may be stronger in a given language is also a good resource to utilize. However, as stated in the text, I agree that as a programmer you do not want to make it a habit. Although seeking help is a great tool for learning, being dependent on that resource will hinder your ability to grow as a programmer and understand the material at hand.

From the blog CS@Worcester – The Dive by gonzalezwsu22 and used with permission of the author. All other rights reserved by the author.

No Stranger To Shame

I think it’s a fair statement to say that anyone who has ever seen me fail, knows that I have often failed spectacularly. Failures of mine come with embarrassment, enigmatic errors and explosions as far as the eye can see. Long story short: I’m no stranger to shame.

Due to my experience with failure (and my tolerance towards the ensuing embarrassment), I often fail without thought or fear. This is an essential personality for learning exactly how I fail, and how to improve upon this type of failure. Reading Apprenticeship Patterns’ reflection on the topic has also helped me come to terms with “failure analysis”.

According to the book, everyone has experienced failure at some point; people who haven’t either never left their comfort zone or are denying a past failure. It also touches on selecting which failures are worth improving upon to salvage sub-par skills. Finally, it admits that some skills may have to be left to rot, as an example of embracing our imperfect nature.

I agree with the idea that no one is perfect; extending on the topic, I also agree that “ditching” some forms of failure are necessary to becoming a specialized worker. As the old saying goes: “a jack of all trades is a master of none”; I admit myself that trying to work on various topics only results in a mess of files, each one being less impressive than a single, focused file of well-written software.

However, I run into conflict with the pattern early on: its solution mentions “working on problems that are worth fixing”. In my opinion, as contrary as it may be to my above statements, ALL problems are worth fixing. The priority should be chronological, not a binary form of “assert/ignore”. Yes, figuring out our failures and becoming specialized is great, but even a specialist should be somewhat familiar with as much material as possible. A good team member benefits from a great crew, but can also proceed on their own (even if the progress slows down).

Going forward, I plan on using this pattern in my Capstone adventures (as well as my professional career) by trying to figure out what my specialty is. Which failures are the most extravagant? Which ones are fixable? Which ones are worth the effort despite the challenge? A big problem about my stance in computer science is that my stance is too vague (this was the original reason why I declared this major).

Hopefully, I will be able to cut down on this “vagueness”; despite computer programming being applicable in many fields, I will find the one that I am meant to work in. By finding my failures, the search will increment ever more closely to a successful result.

From the blog CS@Worcester – mpekim.code by Mike Morley (mpekim) and used with permission of the author. All other rights reserved by the author.

Software Development and Videogames

In our modern society, videogames have become a ubiquitous piece of software that many use as a form of entertainment. They run on everything from dedicated home consoles and PCs to handheld devices and, most prominently, smartphones. As software, videogames range from extremely simple text-based games to complex three-dimensional environments with intelligent AI systems controlling events throughout the game.

Game development can be a great exercise for any upcoming software engineer to practice their skills. A single person is capable of creating a game, as was originally the case with the popular game Minecraft. Minecraft, as many may already know, is an open-world game famous for its randomly generated terrain and the complete freedom to destroy and build in the world around you. In this simple game there is AI controlling the in-game monsters and physics that dictate how you jump, fall, and swim; a dataset of different types of blocks that make up the game; and a world generator that uses different rules to create a landscape that is random yet cohesive and believable.

In contrast to the random generation and world-state tracking done in Minecraft, other games such as Portal specialize in believable physics as well as their signature reality-bending portals. The game allows you to move seamlessly through portals, even standing halfway through one, and it also simulates items and liquids dropping and moving in a believable way. Physics is an important part of the game and can prove difficult in itself to design from a software perspective.

Aside from these, however, games can be as simple as a two-dimensional platformer that only needs to consider, at a basic level, the acts of moving left or right and jumping, or even a turn-based game that only needs to consider the strict choices given to the player to interact with the game world. Game development, just like any good software, has a development cycle and will go through development, alpha, beta, and release phases in which the software takes shape and becomes a finished product over time. As the game becomes more complex, bugs and other problems may arise that must be dealt with, and code optimization and organization become important. It can involve multiple people and repository management as projects grow too complex for just one person to handle. Often the gamification of a task makes it more enjoyable and rewarding to practice, so why not apply this to software? If it interests you, it can be a great way to practice your skills and even discover new practices that can help in other programs you create.
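As a toy sketch of that basic 2D platformer logic, here is one possible per-frame update in Python; all of the constants and names are invented for illustration:

```python
# One frame of a minimal platformer: gravity plus a ground-only jump.
GRAVITY = -1.0        # downward acceleration per frame (made-up units)
JUMP_VELOCITY = 10.0  # initial upward speed when a jump starts
GROUND_Y = 0.0        # height of the floor

def step(y: float, vy: float, jump_pressed: bool):
    """Advance the player one frame; returns (new_y, new_vy)."""
    on_ground = y <= GROUND_Y
    if jump_pressed and on_ground:
        vy = JUMP_VELOCITY      # a jump can only start from the ground
    vy += GRAVITY               # gravity pulls the player down each frame
    y = max(GROUND_Y, y + vy)   # move, but never fall through the floor
    if y == GROUND_Y and vy < 0:
        vy = 0.0                # landed: cancel downward velocity
    return y, vy

# One jump frame: the player leaves the ground.
y, vy = step(0.0, 0.0, jump_pressed=True)
```

Even this tiny loop shows the kind of state tracking (position, velocity, ground contact) that a real engine manages for every object, every frame.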

From the blog CS@Worcester – George Chyoghly CS-343 by gchyoghly and used with permission of the author. All other rights reserved by the author.

JSON

I’ve been seeing so many JSON files while working with Docker and can’t help but wonder: what is JSON? What does it do, and why do we need it along with JavaScript? In this blog, I want to cover this topic to help myself and others learn more about JSON.

JSON stands for JavaScript Object Notation and is a way to store information in an organized, easy-to-access manner. Basically, JSON gives a human-readable collection of data that can be accessed in a logical manner. There are many ways to structure JSON data, but arrays and nested objects are the most popular ones. However, I will not go into the details of those two methods, but focus more on the definition of JSON.

Why does JSON matter?

JSON has become more and more important for sites to be able to load data quickly and seamlessly, or in the background without delaying page rendering. It also helps with switching the contents of a certain element within our layouts without refreshing the page. This is convenient not only for users but also for developers in general. Because of its popularity, many big sites rely on content provided by services such as Twitter, Flickr, and others. These sites provide RSS feeds to minimize the effort of importing and using them server-side, but when using them with AJAX-powered sites we run into the problem that we can only load an RSS feed if we’re requesting it from the same domain it’s hosted on. JSON allows us to overcome this cross-domain issue, because using a callback function (an approach known as JSONP) sends the JSON data back to our domain. This capability makes JSON so useful, as it solves many problems that were previously difficult to work around.

JSON structure

JSON is a string whose format very much resembles the JavaScript object literal format. We can include the same basic data types inside JSON as in a standard JavaScript object: strings, numbers, arrays, booleans, and other object literals. This allows us to construct a data hierarchy. JSON is also purely a string in a specified data format, which means it contains only properties and no methods.
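As a quick illustration of that hierarchy, here is a small Python sketch that parses and re-serializes a nested JSON string; the field names are made up for the example:

```python
import json

# A nested JSON document: strings, numbers, booleans, an array of objects.
text = '''
{
  "squadName": "Demo Squad",
  "active": true,
  "members": [
    {"name": "Ada", "age": 36},
    {"name": "Grace", "age": 45}
  ]
}
'''

data = json.loads(text)            # parse the JSON string into Python objects
print(data["members"][0]["name"])  # walk the hierarchy: prints Ada

serialized = json.dumps(data)      # turn the structure back into a JSON string
```

The round trip through `loads` and `dumps` is the same pattern a browser uses with `JSON.parse` and `JSON.stringify` in JavaScript.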

REST vs SOAP: The JSON connection

Originally, this kind of data was transferred in XML format using a protocol called SOAP. However, XML was bulky and difficult to manage in JavaScript. JavaScript already has objects, which are the language’s own way to express data, so Douglas Crockford (the creator of JSON, as well as JSLint and JSMin) took a subset of that expression as the specification for a new data interchange format and named it JSON.

REST began to overtake SOAP in transferring data. One of the biggest advantages of programming with REST APIs is that we can use multiple data formats: not only XML, but also HTML and JSON. Since developers prefer JSON over XML, they have come to favor REST over SOAP.

Today, JSON is the standard for exchanging data between web and mobile clients and back-end services. As I go deeper into the software development cycle, I feel that JSON is essential. Setting aside all its advantages, there are disadvantages too, but the importance of JSON is undebatable.

Source: https://www.infoworld.com/article/3222851/what-is-json-a-better-format-for-data-exchange.html

From the blog CS@Worcester – Nin by hpnguyen27 and used with permission of the author. All other rights reserved by the author.

SOLID Principles of Object-Oriented Programming

Thinking about design patterns, I decided to research the SOLID principles that reinforce the need for design patterns.

A wonderful source explaining the SOLID principles is from https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/.

The SOLID acronym was formed from:

  • Single Responsibility Principle
  • Open-Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

The Single Responsibility Principle states that a class has only one responsibility and therefore should only have one reason that it changes.

The Open-Closed Principle states that extensions to classes can be added for new functionality, but the code of the class should not be changed.

The Liskov Substitution Principle states that objects of a subclass should be able to work with methods that expect the object of the superclass and the method should not give irregular output.

The Interface Segregation Principle relates to separating interfaces. There should be multiple specific interfaces versus creating one general interface in which there are many overridden methods that have no use.

The Dependency Inversion Principle states that classes should depend on interfaces or abstract classes rather than concrete classes and functions. This would help classes be open to extensions.
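A minimal sketch of this principle in Python (the class names here are invented for illustration): the high-level Notifier depends on an abstract MessageSender rather than on a concrete sender class.

```python
from abc import ABC, abstractmethod

class MessageSender(ABC):
    """Abstraction that high-level code depends on."""
    @abstractmethod
    def send(self, text: str) -> str: ...

class EmailSender(MessageSender):
    # A concrete detail; it can be swapped out without touching Notifier.
    def send(self, text: str) -> str:
        return f"email: {text}"

class Notifier:
    # Depends on the MessageSender abstraction, not on EmailSender directly,
    # so Notifier is open to extension with new sender types.
    def __init__(self, sender: MessageSender):
        self.sender = sender

    def notify(self, text: str) -> str:
        return self.sender.send(text)
```

Swapping in an `SMSSender` would require no change to `Notifier` at all, which is exactly the openness to extension the principle aims for.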

I researched SOLID from this source because it had in-depth examples of code violating the examples, and how it could be fixed. For the Single Responsibility Principle example, it showed how class Invoice had too many responsibilities and separated its methods and created class InvoicePrinter and InvoicePersistence so that each class has one responsibility for the application.
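That Invoice refactoring might be sketched like this in Python (the article’s examples are in Java; the method bodies here are simplified stand-ins):

```python
class Invoice:
    """After the split, Invoice keeps only the business calculation."""
    def __init__(self, item: str, quantity: int, price: float):
        self.item = item
        self.quantity = quantity
        self.price = price

    def calculate_total(self) -> float:
        return self.quantity * self.price

class InvoicePrinter:
    """Formatting/printing is its own responsibility."""
    def print_invoice(self, invoice: Invoice) -> str:
        return f"{invoice.item} x{invoice.quantity}: {invoice.calculate_total()}"

class InvoicePersistence:
    """Storage concerns are isolated here (sketch only)."""
    def save_to_file(self, invoice: Invoice, filename: str) -> None:
        with open(filename, "w") as f:
            f.write(InvoicePrinter().print_invoice(invoice))
```

Each class now has exactly one reason to change: a new price rule touches only Invoice, a new layout only InvoicePrinter, a new storage format only InvoicePersistence.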

The Liskov Substitution Principle example involves a Rectangle superclass and Square subclass (because a Square is a Rectangle). The setters for the Square class are overridden because of the property of squares to have the same height and width. However, this violates the principle because of what happens when the getArea function from the Rectangle class is tested. The test class puts in a value of 10 for the height and expects the area to be width * 10, but because of the override for the Square class dimensions, the width is also changed to 10. So if the width was originally 4, the getArea test expects 40, but the Square class override makes it 100. I thought this was a great example because I would expect the function to pass the test but did not remember how assigning the height in the test would also change the width.
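The Rectangle/Square example can be sketched in Python as follows (a translation of the article’s Java example, with simplified names):

```python
class Rectangle:
    def __init__(self, width: int, height: int):
        self.width = width
        self.height = height

    def set_width(self, width: int):
        self.width = width

    def set_height(self, height: int):
        self.height = height

    def get_area(self) -> int:
        return self.width * self.height

class Square(Rectangle):
    # A Square must keep width == height, so each setter changes both,
    # which silently breaks the Rectangle contract.
    def set_width(self, width: int):
        self.width = width
        self.height = width

    def set_height(self, height: int):
        self.width = height
        self.height = height

def area_after_height_10(r: Rectangle) -> int:
    # A caller expecting Rectangle behavior: only the height should change.
    r.set_height(10)
    return r.get_area()

assert area_after_height_10(Rectangle(4, 5)) == 40   # width 4 preserved
# Substituting the subclass changes observable behavior: 100, not 40.
assert area_after_height_10(Square(4, 4)) == 100
```

The subclass passes the type checker but violates the caller’s expectations, which is precisely what the Liskov Substitution Principle forbids.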

The examples provided were very helpful in understanding how it can be seen in code, and the diagrams were a bonus for visual representation of the code. Going forward, I will know that I have this information to reference, if need be, when dealing with class designs. It has shown me principles to help make coherent classes with detailed explanations.

From the blog CS@Worcester – CS With Sarah by Sarah T and used with permission of the author. All other rights reserved by the author.

Monolithic vs. Microservice Architectures

Several of our classes have discussed two different forms of software architecture. Due to my intent to graduate with both Software Development and Big Data Analytics concentrations, this topic especially interested me. On one hand, I need to know software architecture designs well enough to plan an implementation, and on the other hand, I need to know how to deal with massive data sets. These two factors combine to pique my interest at an intersection point of both fields.

A server-side application has several components, namely presentation, business logic, database access, and application integration. The Monolithic Architecture packages these components into a single unit, resulting in ease of development, testing, deployment, and scalability. However, its design flaws include limitations in size and complexity, difficulty in understanding, and application-wide redeployment.1

The Microservice Architecture is split into a set of smaller and connected services, each with its own architecture. Each service performs a given task(s) which is then, in one way or another, linked back to the other services. Microservice Architecture is characterized as faster to develop, independent deployment, and faster and independent scalability. However, its disadvantages include added complexity, increased difficulty in testing, and increased inter-service change difficulty.1

With these things in mind, questions such as “should I implement a Microservice Architecture or a Monolithic Architecture” and “should I always use one or the other” arise. What I’ve learned is that, despite the fairly detailed descriptions of both architectures, there isn’t a uniform consensus on when to apply them. Martin Fowler says a Monolithic Architecture should always be implemented first, because the Microservice Architecture implies a complexity that may not be needed; when the prospect of maintaining or developing a monolithic application becomes too cumbersome, one should begin transitioning to a Microservice Architecture.2 Stefan Tilkov disagrees, however, stating that building a new system is exactly when one should think about partitioning the application into pieces. He further states that cutting a monolithic architecture into a microservice architecture is unrealistic.3

Would I start using Monolithic Architecture, or would I start using Microservices Architecture for software? While thinking about this problem, I know that different parts of the application are doing different things. One deals with users placing orders, which in turn requires the user to have their own UI, API, Database, and servers; and the other requires the approvers to have their own UI, API, Database, and servers. This is convincing in favor of not beginning development with a monolithic architecture design, since planning an application’s design should be thought of immediately. Therefore, I agree with Stefan Tilkov more on this issue, based on the knowledge that I have as a student.

Links:

  1. https://articles.microservices.com/monolithic-vs-microservices-architecture-5c4848858f59
  2. https://martinfowler.com/bliki/MonolithFirst.html
  3. https://martinfowler.com/articles/dont-start-monolith.html

From the blog CS@Worcester – Chris's CS Blog by Chris and used with permission of the author. All other rights reserved by the author.

REST APIs : First Look

Last week we went over Docker and some general information about it. This week we have started to go over REST APIs in our activities, so I decided to dedicate this week’s blog post to them. The source I have chosen is a fairly recent article written by Zell Liew; it is lengthy, gives general insight into what REST APIs are, and should be a good read for anyone looking to learn.

A REST API is, first of all, an application programming interface (API), which allows programs to communicate with each other, and this API follows REST. REST stands for Representational State Transfer, which is basically a set of rules/constraints that the API has to follow when being created. With REST APIs, we are able to make requests and get data back as a response. Zell goes over the four parts of a request. First is the endpoint, the URL we request: it is made up of the root-endpoint, which looks something like https://api.website.com, the path, which specifies what we are requesting, and the query parameters. Second is the method, the kind of request we actually send, which comes in five types: GET, PUT, POST, PATCH, and DELETE, used to determine the action to take. Third is the header, which has a number of uses but in general provides extra information about the request. Lastly, there is the data/body, which contains the information we are sending to the server. Zell also covers authentication, HTTP status codes, and more.
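The four request parts Zell describes can be sketched with Python’s standard library; the host, path, and parameters below are made-up examples, not a real API:

```python
from urllib.parse import urlencode, urlparse

# 1. Endpoint = root-endpoint + path + query parameters
root_endpoint = "https://api.example.com"             # root-endpoint (example)
path = "/users/octocat/repos"                         # path (example)
query = urlencode({"sort": "pushed", "per_page": 5})  # query parameters

endpoint = f"{root_endpoint}{path}?{query}"

# 2. Method: one of GET, PUT, POST, PATCH, DELETE
method = "GET"

# 3. Headers provide information about the request
headers = {"Accept": "application/json"}

# 4. Data/body: GET requests usually send none
body = None

parsed = urlparse(endpoint)
print(method, parsed.netloc, parsed.path, parsed.query)
```

Assembling the pieces this way, before ever sending anything, makes it clear which part of a failing request to inspect first.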

Zell’s explanation of REST APIs is a good start in helping me understand what APIs are and their actual use in computer science. This view of how websites work is also quite interesting. It will help in class as we work with APIs further in our activities, and make the process a bit more fluid. The prevalence of REST APIs is very apparent, with them being used by big company sites like Google and Amazon, and the rise of the cloud over the years has only increased their usage. This knowledge of REST APIs is a must if I am to continue to improve and potentially work in a career in this area, as it looks like REST APIs will continue to be used for a while longer; people are comfortable with them, so I should be too.

Source: https://www.smashingmagazine.com/2018/01/understanding-using-rest-api/

From the blog CS@Worcester – kbcoding by kennybui986 and used with permission of the author. All other rights reserved by the author.

Docker Compose

In last week’s blog, I introduced the power of container technology with Docker. Despite the advantage of easy implementation, however, typing the same commands again and again in a terminal is not a good long-term approach. Docker Compose is a tool for defining and running multi-container Docker applications: you put all of the application’s commands into a YAML file and have it executed when needed.

According to the Overview of Docker Compose, using Compose is a three-step process:

1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.

2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.

3. Run docker compose up and the Docker compose command starts and runs your entire app. You can alternatively run docker-compose up using the docker-compose binary.

The Dockerfile is another concept; in this blog I will introduce how to work with Docker Compose. Let’s take this command as an example, where we want to create an nginx server:


docker run -d --rm \
  --name nginx-server \
  -p 8080:80 \
  -v /some/content:/usr/share/nginx/html:ro \
  nginx


This command mounts an HTML directory into an nginx server running at port 8080 locally. Here nginx-server is the name of the container; -p 8080:80 maps the local port (the first parameter) to the container’s default port (the second parameter), separated by a colon; and -v /some/content:/usr/share/nginx/html:ro mounts the local directory /some/content as the container’s HTML root, read-only. Translating this command into Docker Compose:


version: "3.9"

services:
  nginx-server:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - /some/content:/usr/share/nginx/html:ro


Version is the version of the Docker Compose file format we want to use; all of the necessary commands are then integrated and placed under the services section. The name of the container is placed first, and all of its entries are written after a colon, one indentation level in on the next line. The image tag is the application we want to create, ports is equivalent to -p for mapping ports, and volumes is the same as -v, used for mounting. Under this format, we can define as many images as we want at once and just rerun them with the command docker-compose up in the directory containing the docker-compose.yml file.


version: "3.9"

services:
  nginx-server:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - /some/content:/usr/share/nginx/html:ro
  redis-container:
    image: redis


Overall, the convenience of Docker is taken to a higher level thanks to the Compose concept. This video is another source that I watched to understand more about docker-compose. I suggest using the main Docker Compose page from Docker’s site, along with that video, as references if you need them in the future.

From the blog CS@Worcester – Vien's Blog by Vien Hua and used with permission of the author. All other rights reserved by the author.