Category Archives: Week 10

Working in containers

Today we will be focusing on containers and why they have become the future of DevOps. For this we will be looking at a blog by Rajeev Gandhi and Peter Szmrecsanyi which highlights the benefits of containerization and what it means for developers like us.

Containers are isolated units of software running on top of an operating system, i.e., only the applications and their dependencies. Containers do not need to run a full operating system; through Docker we have been using the Linux kernel along with the capabilities of our local hardware and host OS (Windows or macOS). When we remotely accessed SPSS and LogicWorks through a virtual machine, the VM came loaded with a full operating system and its associated libraries, which is why VMs are larger and much slower compared to containers, which are smaller and faster. Containers can also run anywhere, since the Docker container engine supports almost all underlying operating systems, and they work consistently across local machines and the cloud, making containers highly portable.

We have been building containers for our DevOps work by building and publishing container (Docker) images. We have been working on files like the API and backend in development containers preloaded with libraries and extensions like Swagger preview. We then make direct changes to the code and push them into containers, which can lead to potential functionality and security risks. Therefore, we can change the Docker image itself: instead of making code changes on the backend, we build an image with working backend code and then do our coding on the frontend. This helps us avoid accidental changes to a working backend, but we must rebuild the container whenever we make changes to the container image.

Containers are also well suited to deploying and scaling microservices, which are applications broken into small, independent components. When we work on the LibreFoodPantry microservices architecture, we will have five or six teams working independently on different components of the microservices in different containers, giving us more development freedom. After an image is created, we can deploy a container in a matter of seconds and replicate containers, giving developers more freedom to experiment. We can try out minor bug fixes, new features, and even major API changes without the fear of permanently damaging the original code. Moreover, we can also destroy a container in a matter of seconds. This results in a faster development process, which leads to quicker releases and upgrades to fix minor bugs.

Source:

https://www.ibm.com/cloud/blog/the-benefits-of-containerization-and-what-it-means-for-you

https://www.aquasec.com/cloud-native-academy/docker-container/container-devops/

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

The Bug

Much more recently, it has become common to treat "external bug finding," looking for defects in other people's software, as an activity worth pursuing on its own. External bug finders are motivated altruistically (they want to make the targeted software more reliable by getting lots of bugs fixed) but also selfishly (a bug finding tool or technique is demonstrably powerful if it can find a lot of previously unknown defects). They would like to find and report as many bugs as possible, and with the right technology it can end up being cheap to find hundreds or thousands of bugs in a large, soft target. A major advantage of external bug finding is that since the people performing the testing are presumably submitting a lot of bug reports, they can do a really good job at it. The OSS-Fuzz project is a good example of a successful external bug finding effort; its web page mentions that it "has found over 20,000 bugs in 300 open-source projects."

External bug finding has real costs, though. Every bug report requires significant attention and effort, usually far more effort than was required to simply find the bug. As an external bug finder, it can also be hard to tell which kind of bug you have discovered, and the numbers are not on your side: the large majority of bugs are not that important. Many deserve a priority of "a valid bug that should be fixed, but not something that will ever be at the top of the list unless something unforeseen happens." I've found FindBugs' "Mostly Harmless" label useful for this and have lobbied to include it in every bug tracker I've used since, though it's more of a severity than a priority. Occasionally, your bug finding technique will come up with a trigger for a known bug that is considerably smaller or simpler than what is currently in the issue tracker. An all-too-common attitude (especially among victims of bug metrics) is that bug reporters are the enemy, while those making pull requests are contributors. And if project developers actively don't trust you, for example because you've flooded their bug tracker with corner-case issues, then they're not likely to ever listen to you again, and you probably need to move on to do bug finding somewhere else.

Internal bug finding, where project developers find bugs themselves, can be superior to external bug finding because developers are more in tune with their own project's needs, and they can schedule bug finding efforts so that they are most effective, for example after landing a major new feature. Thus, an alternative to external bug finding campaigns is to create good bug finding tools and make them available to developers, preferably as open-source software.

From the blog CS@Worcester – The Dive by gonzalezwsu22 and used with permission of the author. All other rights reserved by the author.

Security in API’s

As we continue to work with APIs, I have decided to dedicate this blog post to security with APIs, as we will eventually have to take this into consideration when we go more in depth with them in the future. Security will always be a priority, so I think it is helpful to look at this area now. I have chosen a blog post that gives us some good practices we can look at to help better secure our APIs.

In summary, the author first goes over TLS, which stands for Transport Layer Security, a cryptographic protocol that helps prevent tampering and eavesdropping by encrypting messages sent to and from the server. The absence of TLS means that third parties can easily access private information from users. You can tell TLS is in use when the website URL starts with https rather than just http, as on the very website you are reading now. The author then goes over OAuth2, a general authorization framework often used with single sign-on providers. It helps manage third-party applications' access to and usage of data on behalf of the user, for example when granting another application access to your photos. They go in depth on authorization codes and access tokens in OAuth2. Then they cover API keys, advising that we set proper permissions on them and not mismanage them. They say at the end to just use good, reliable libraries that take on most of the work and responsibility, so that we can minimize the mistakes we make.
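To make these ideas concrete, here is a minimal TypeScript sketch, with a made-up endpoint URL and placeholder token (not a real API or credential), of two of these practices: refusing to send credentials over a non-TLS connection, and attaching an OAuth2 bearer token or API key to a request:

```typescript
// Build the headers for an authenticated API request.
// The token and key values are placeholders for illustration only.
function buildAuthHeaders(bearerToken?: string, apiKey?: string): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (bearerToken) headers["Authorization"] = `Bearer ${bearerToken}`;
  if (apiKey) headers["X-API-Key"] = apiKey;
  return headers;
}

// Refuse to send credentials over an unencrypted (non-https) connection.
function requireHttps(url: string): string {
  if (!url.startsWith("https://")) {
    throw new Error(`refusing to send credentials over non-TLS URL: ${url}`);
  }
  return url;
}

const url = requireHttps("https://api.example.com/v1/orders");
const headers = buildAuthHeaders("example-token");
console.log(url, headers["Authorization"]); // https://api.example.com/v1/orders Bearer example-token
```

A real client would pass these headers to an HTTP call, but the point is that the TLS check and the credential handling live in one place instead of being repeated at every call site.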

These practices will bolster my knowledge of the security built into the APIs we are working with. This blog post has helped me learn more about the general framework of security measures in the API landscape. TLS looks to be a standard protocol used on the vast majority of websites, but it also makes me wonder about all of the websites I have visited that did not have TLS; you should check too, and make sure that you have no private information at risk on those sites. This also makes me wonder how TLS is implemented in projects like the LibrePantry API that is being worked on, and whether it is being used there (hopefully). Then perhaps when we work further with APIs, we will get to see these security practices implemented.

Source: https://stackoverflow.blog/2021/10/06/best-practices-for-authentication-and-authorization-for-rest-apis/

From the blog CS@Worcester – kbcoding by kennybui986 and used with permission of the author. All other rights reserved by the author.

Software Framework

A framework is similar to an application programming interface (API), though technically a framework includes an API. As the name suggests, a framework serves as a foundation for programming, while an API provides access to the elements supported by the framework. A framework may also include code libraries, a compiler, and other programs used in the software development process. There are two types of frameworks: front end and back end.

The front end is "client-side" programming, while the back end is "server-side" programming. The front end is the part of the application that interacts with users; it is made up of elements like dropdown menus and sliders, built from a combination of HTML, CSS, and JavaScript and controlled by the browser. The back-end framework is all about the functionality that happens behind the scenes and is reflected on a website, such as when a user logs into an account or makes a purchase from an online store.
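As a toy sketch of this split (the function names and product data are invented for illustration), the back end produces the data while the front end turns it into markup for the browser:

```typescript
// Back end ("server-side"): produces the data.
function getProducts(): { name: string; price: number }[] {
  return [{ name: "Coffee", price: 3 }, { name: "Tea", price: 2 }];
}

// Front end ("client-side"): renders the data as HTML for the browser.
function renderProductList(products: { name: string; price: number }[]): string {
  const items = products.map((p) => `<li>${p.name}: $${p.price}</li>`).join("");
  return `<ul>${items}</ul>`;
}

console.log(renderProductList(getProducts()));
// <ul><li>Coffee: $3</li><li>Tea: $2</li></ul>
```

In a real application the two halves would live in separate programs and talk over HTTP, with a framework on each side handling the plumbing.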

Why do we need a software development framework?

Software development frameworks provide tools and libraries that help software developers build and manage web applications faster and more easily. Most frameworks share one goal: to facilitate easy and fast development.

Let’s see why these frameworks are needed:

  1. Frameworks help a small team of developers build application programs in a consistent, efficient, and accurate manner.
  2. An active and popular framework provides developers with robust tools, a large community, and rich resources to leverage during development.
  3. Flexible and scalable frameworks help meet the time constraints and complexity of a software project.

Those are some of the reasons frameworks are important. Now let's see the advantages of using a software framework:

  1. Secure code
  2. Duplicate and redundant code can be avoided
  3. Helps in developing consistent code with fewer bugs
  4. Makes it easier to work on sophisticated technologies
  5. Applications are reliable, as several code segments and functionalities are pre-built and pre-tested
  6. Testing and debugging the code is easier and can be done even by developers who do not own the code
  7. Less time is required to develop an application

I chose this topic because I have heard about software frameworks many times and was intrigued to learn more about them: what they are, how they work, and their importance in the software development field. Frameworks, like programming languages, are important because we need them to create and develop applications.

Software Development Frameworks For Your Next Product Idea (classicinformatics.com)

Framework Definition (techterms.com)

From the blog CS@Worcester – Software Intellect by rkitenge91 and used with permission of the author. All other rights reserved by the author.

Blog Post 5 – SOLID Principles

When developing software, creating understandable, readable, and testable code is not just a nice thing to do; it is a necessity, because clean code that can be reviewed and worked on by other developers is an essential part of the development process. When it comes to object-oriented programming languages, there are a few design principles that help you avoid design smells and messy code. These principles are known as the SOLID principles, and they were originally introduced by Robert C. Martin back in 2000. SOLID is an acronym for five object-oriented design principles:

  1. Single Responsibility Principle – A class should have one and only one reason to change, meaning that a class should have only one job. This principle helps keep code consistent and makes version control easier.
  2. Open-Closed Principle – Objects or entities should be open for extension but closed for modification. This means that we should only add new functionality to the code, not modify existing code, and it is usually achieved through abstraction. This principle helps avoid introducing bugs into working code.
  3. Liskov Substitution Principle – Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S, where S is a subtype of T. This means that subclasses can substitute for their base class. This is expected because subclasses inherit everything from their parent class; they extend the parent class, they never narrow it down. This principle also helps us avoid bugs.
  4. Interface Segregation Principle – A client should never be forced to implement an interface that it doesn't use, or to depend on methods it does not use. This principle helps keep the code flexible and extendable.
  5. Dependency Inversion Principle – Entities must depend on abstractions, not on concretions. High-level modules must not depend on low-level modules; both should depend on abstractions. This means that dependencies should be reorganized to point at abstract classes or interfaces rather than concrete classes, which keeps our classes open for extension. This principle helps us stay organized and supports the Open-Closed Principle.
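A minimal TypeScript sketch (the shape classes are invented for illustration) of how the Open-Closed and Dependency Inversion principles play together: the high-level `totalArea` function depends only on the `Shape` abstraction, so new shapes extend the system without modifying existing code:

```typescript
// The abstraction that high-level code depends on.
interface Shape {
  area(): number;
}

class Rectangle implements Shape {
  constructor(private width: number, private height: number) {}
  area(): number { return this.width * this.height; }
}

class Circle implements Shape {
  constructor(private radius: number) {}
  area(): number { return Math.PI * this.radius ** 2; }
}

// Closed for modification, open for extension: adding a Triangle class
// later requires no change to this function.
function totalArea(shapes: Shape[]): number {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}

console.log(totalArea([new Rectangle(2, 3), new Circle(1)])); // 6 + pi, about 9.14
```

Each concrete class also keeps a single responsibility (computing its own area), so one small example touches three of the five principles.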

These design principles act as a framework that helps developers write cleaner, more legible code that allows for easier maintenance and easier collaboration. The SOLID principles should always be followed because they are best practices, and they help developers avoid design smells in their code, which will in turn help avoid technical debt.

https://www.digitalocean.com/community/conceptual_articles/s-o-l-i-d-the-first-five-principles-of-object-oriented-design#single-responsibility-principle

https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/

From the blog CS@Worcester – Fadi Akram by Fadi Akram and used with permission of the author. All other rights reserved by the author.

Anestiblog #4

This week I read a blog post that I thought really related to the class about why software development is important. The blog is a deep dive into the career of a software developer. It starts off describing software developers as the masterminds behind computer programs. The blog then covers what different types of software developers do, like how applications software developers are responsible for designing computer or phone apps, while systems software developers are responsible for operating-system-level software. Afterwards, the skills needed for software developers are shown; some of them are problem-solving skills, teamwork, motivation, and analytical strategy. The blog ends with the salary of software developers (about $110,000) and a message to motivate the reader for the future. I selected this blog post because software development is my dream job, and I thought it would be interesting to read about what I should expect from the future job. This blog has a good, in-depth description of how software development works, which I think every CS major should read. I think this blog was a great read that I recommend for many reasons. One reason is how deep it goes into the job of a software developer: the blog goes over what to expect, the skills needed, and the pay, and does it all at a high level. Another reason is that a lot of the jobs we need CS-343 for will be similar to software development, so even if you do not want to become a software developer, you can still learn something. The last reason is that it could get people who don't yet like software development interested in the area by showing them what to expect from the job. Knowing what to expect could really help open the doors for others to be interested in this field.
I learned that I already have many of the skills needed by software developers, like Java, and I also learned about skills I have not picked up yet, like DevOps. The material affected me heavily because it showed me what skills to learn and what to expect if I want my dream of becoming a software developer to come true. I will take all the knowledge this blog gave me into the future by preparing myself better for this job. Now that I know what to expect from software development, I will try to build on it for the future.

https://www.rasmussen.edu/degrees/technology/blog/what-does-software-developer-do/

From the blog CS@Worcester – Anesti Blog's by Anesti Lara and used with permission of the author. All other rights reserved by the author.

Concurrency and Parallelism

In our most recent class, one of our in-class activities involved an asynchronous JavaScript function, which required us to use the "await" keyword when fetching data. I thus thought of doing a post on concurrency, but in my experience using concurrency, I have noticed myself and others struggling with the difference between concurrency and parallelism. Hence, in order to clearly understand concurrency, I also wish to describe parallelism.

An application that is neither parallel nor concurrent is one that is processed sequentially, typically one "where it only has a single job that is too small to make sense to parallelize" [1].

Parallelism is simply the separation of a task into multiple subtasks. An application is parallel but not concurrent if it works on only one task at a time, splitting that task into subtasks that are executed simultaneously. If the application is also making progress on two or more tasks at the same time, it is implementing both parallelism and concurrency.

Concurrency is the act of an application making progress on two or more tasks at the same time, though not necessarily at the same instant. One thread could be running through an array while another is doing a separate calculation. If a concurrent application is running without implementing parallelism, it makes progress on each task by interleaving work on them, for example on a single CPU.

An application that is both parallel and concurrent works on multiple tasks at the same time and also partitions each task into subtasks that execute in parallel. It may also divide a subtask into further subtasks and run those concurrently too.
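A small sketch of these ideas in the JavaScript/TypeScript style our class activity used (the task names and delays are invented): awaiting `Promise.all` lets two tasks make progress during the same window of time, so the total wait is roughly the longest task rather than the sum of both:

```typescript
// Simulate an asynchronous task that completes after `ms` milliseconds.
function task(name: string, ms: number): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve(name), ms));
}

async function main(): Promise<void> {
  // Concurrent: both tasks are in flight at once, so the total wait
  // is roughly max(100, 150) ms rather than 100 + 150 ms.
  const start = Date.now();
  const results = await Promise.all([task("read-array", 100), task("calculate", 150)]);
  console.log(results, `took ~${Date.now() - start}ms`);
}

main();
```

Awaiting each task one after the other would instead force sequential execution, which is the "neither parallel nor concurrent" case from above.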

I picked this particular source because it articulates the difference between concurrency and parallelism, rather than just one or the other. It also has a section that shows the permutations of implementing concurrency, parallelism, both, or neither. Even though I know about parallelism and concurrency, this article in particular helped me visualize them. I did not know that one could implement concurrency without parallelism. 

I feel as though this article will help me understand some elements of Homework #5, because it will likely include functions that are asynchronous and therefore require us to acknowledge that by properly implementing the "await" keyword. While this article was oriented towards locally run applications, it will also help with applications that communicate with servers, where the topics of parallelism and concurrency remain useful.

I have also created a program that attempts concurrent execution across the multiple subtasks produced by parallelism, which this article helped me understand. Specifically, I generate a vector of integers, then split it into subtasks that are run concurrently. Each subtask computes the average of its portion of the vector.
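A rough TypeScript analogue of that experiment (the data and chunk count are invented; note that plain JavaScript promises provide concurrency, not the multi-core parallelism of the C++ version) splits an array into subtasks and averages each one:

```typescript
// Average one chunk of the data as its own asynchronous subtask.
async function chunkAverage(chunk: number[]): Promise<number> {
  return chunk.reduce((a, b) => a + b, 0) / chunk.length;
}

// Split the data into `chunks` parts and compute all averages concurrently.
async function averages(data: number[], chunks: number): Promise<number[]> {
  const size = Math.ceil(data.length / chunks);
  const parts: number[][] = [];
  for (let i = 0; i < data.length; i += size) parts.push(data.slice(i, i + size));
  return Promise.all(parts.map((part) => chunkAverage(part)));
}

averages([1, 2, 3, 4, 5, 6], 2).then(console.log); // [2, 5]
```

The structure mirrors the vector-splitting idea: parallelism is the partitioning into chunks, and concurrency is all the chunk computations being in flight at once.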

Links:

  1. http://tutorials.jenkov.com/java-concurrency/concurrency-vs-parallelism.html (the proper article sourced for the information)
  2. https://github.com/Chris-Archive/Vector-Parallelism-Concurrency (my GitHub repository of the example I used above in C++)

From the blog CS@Worcester – Chris's CS Blog by Chris and used with permission of the author. All other rights reserved by the author.

Design Smells

Hello everyone and welcome to my blog! This week I decided to write about design smells. In programming, we write a lot of code, and even a small mistake tends to break the whole program and bring down the quality of the code. Design smells are mistakes in the way the code is designed and written: structures in the design that indicate a violation of fundamental design principles and negatively impact design quality. If a programmer focuses too little on code design, the program later tends to break easily even when only a small feature is added to it. In the article, Mr. Fowler says a code smell is "a surface indication that usually corresponds to a deeper problem in the system." Design smells are easy to spot if we know what they are; some of the common design smells are:

  • Rigidity: a single change to the code forces many other changes, so you have to go back and make several more changes just to keep the program working.
  • Immobility: parts of the code that could be reused in other systems cannot be moved because of the high risk of breaking the original code.
  • Fragility: a change in the code can cause errors in different, unrelated parts of the program, making it difficult to change even a small section of the code.
  • Viscosity: making changes to the program in the right way is harder than making them in a hacky way, so the code is easy to break.
  • Needless Complexity: the code contains structures that are not useful and make it hard to understand.
  • Needless Repetition: repeated functions in the code that could be removed by refactoring the program.
  • Opacity: the functionality of a system or feature is unclear, or the code is unclear and very difficult to understand.
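As a tiny illustration of needless repetition (the function names are invented), duplicated logic can be refactored into one shared helper so that a future change happens in only one place:

```typescript
// Before: the discount rule is duplicated, so changing it means
// editing (and possibly missing) several copies.
function bookPrice(price: number): number {
  return price - price * 0.1;
}
function dvdPrice(price: number): number {
  return price - price * 0.1;
}

// After refactoring: one shared helper removes the repetition.
function discounted(price: number, rate = 0.1): number {
  return price - price * rate;
}

console.log(bookPrice(100), dvdPrice(100), discounted(100)); // 90 90 90
```

The duplicated version also shows how repetition feeds rigidity: a change to the discount rule ripples through every copy.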

I chose this article because, as programmers, the things above are something we do not want in our code. By learning about design smells, one can know where to find them in the code, and once you find them, following best practices such as refactoring can help resolve them. As a programmer, knowing about these design smells will help me later in my everyday tasks at my job.

Links used:

https://martinfowler.com/bliki/CodeSmell.html

From the blog CS@Worcester – Mausam Mishra's Blog by mousammishra21 and used with permission of the author. All other rights reserved by the author.

REST APIs

My experience with APIs outside of classwork is very limited, so I wanted to take this opportunity to further familiarize myself with them. After a discussion in class, I am especially eager to learn more about REST APIs and how they might differ from other APIs. Jamie Juviler’s “REST APIs: How They Work and What You Need to Know” is a very good start.

Introduced in 2000 by Roy Fielding, REST APIs are application programming interfaces that follow REST guidelines. REST, which stands for representational state transfer, enables software to communicate over the internet in a way that is scalable and easily integrated. According to Juviler, "when a client requests a resource using a REST API, the server transfers back the current state of the resource in a standardized representation." REST APIs are a popular solution for many web apps: they can handle a wide variety of requests and data, are easy to scale, and are simple to build, as they utilize web technologies that already exist.

REST guidelines offer APIs increased functionality. Juviler states that in order for an API to properly take advantage of REST's functionality, it must abide by a set of rules: client-server separation, uniform interface, stateless, layered system, cacheable, and code on demand. Client-server separation addresses the way a client communicates with a server: a client sends the server a request, and the server sends that client a response. Communication cannot happen in the other direction; a server cannot send a request, nor can a client send a response. Uniform interface addresses the formatting of requests and responses; this standardizes the formatting and makes it easier for servers and software to talk to each other. Stateless requires calls made with a REST API to be stateless: each request to the server is dealt with completely independently of other requests, which eases the load on the server's memory. Layered system states that, regardless of the possible existence of intermediate servers, messages between the client and the main server should have the same format and method of processing. Cacheable requires that a server's response to a client include information about whether, and for how long, the response can be cached. Code on demand is an optional rule; it allows a server to send code as part of a response so that the client can run it.
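As a rough sketch of some of these rules (the resource and routes are made up for illustration), a stateless, uniform handler maps each request, taken entirely on its own, to a standardized JSON representation of a resource:

```typescript
// A tiny, in-memory sketch of a REST-style resource.
type ApiResponse = { status: number; body: string };

const users: Record<string, { id: string; name: string }> = {
  "1": { id: "1", name: "Ada" },
};

// Stateless: the handler uses only the request itself (method + path),
// never any memory of previous calls, and always returns a uniform
// JSON representation of the resource's current state.
function handle(method: string, path: string): ApiResponse {
  const match = path.match(/^\/users\/(\w+)$/);
  if (method === "GET" && match) {
    const user = users[match[1]];
    return user
      ? { status: 200, body: JSON.stringify(user) }
      : { status: 404, body: JSON.stringify({ error: "not found" }) };
  }
  return { status: 405, body: JSON.stringify({ error: "method not allowed" }) };
}

console.log(handle("GET", "/users/1")); // status 200, body {"id":"1","name":"Ada"}
```

Because every call is independent, any number of identical servers (or intermediate layers) could answer it, which is what makes the stateless and layered-system rules good for scaling.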

I picked this source because I thought the information was valuable. Again, I have very limited experience with APIs, even less with REST APIs, and I struggled somewhat to understand them. Jamie Juviler provides a very thorough yet easily understood overview of REST APIs. I was not aware that what sets a REST API apart from a regular API is its adherence to a set of rules. This article helped me better understand REST APIs, and I am eager to put this knowledge to use in future coursework and projects.

From the blog CS@Worcester – Ciampa's Computer Science Blog by robiciampa and used with permission of the author. All other rights reserved by the author.

YAGNI

This is a topic that I have been wanting to write about for a while but never really got around to until now. For this week, I am going to review a post made by Martin Fowler. In the first part of the blog post, Mr. Fowler explains what the acronym stands for and where the term comes from. Mr. Fowler then walks the reader through an example of when we could apply YAGNI. The example he uses is a company having two teams work on two different components of a program (one for sales and the other for pricing). The pricing team predicts that in six months' time they will need software that handles pricing for piracy risks, and Mr. Fowler argues that building it now violates the principle of YAGNI. He argues that because the team is making assumptions based on something that has not happened yet, the feature they build might be wrong, or worse, not needed or used at all. In that case, the team will have wasted a lot of time, energy, and effort analyzing, programming, and testing something that may or may not be useful. In the time the team spent developing the piracy-pricing feature, they could have been working on a different, actually required feature of the program. Furthermore, the feature they build could handle the problem incorrectly, which means they might have to refactor it later down the line. It also adds an extra layer of complexity to the program and more items for the team to repair and maintain. In addition, Mr. Fowler argues this is a bad habit to get into: building precautions in advance, because it is impossible to predict all possible outcomes, and, as he put it, even if we tried, we would still end up getting "blindsided."

I wanted to write about this topic now because, when I was doing the intermediate assignment for Homework 4, I was thinking about how my approach violated the single-responsibility principle, since one of my methods was doing three different things. Realizing this, I started thinking about my other bad programming habits. One of them was that I tended to think ahead and code things that I thought the program would need, which violates YAGNI. So, I decided to read and review a post on YAGNI because I wanted to learn more about why you shouldn't code things in advance, so that the next time I code, I can avoid those mental pitfalls. I think reading this blog post will help me a lot because sometimes I have a hard time making these realizations on my own unless someone brings them to my attention in discussion.

https://martinfowler.com/bliki/Yagni.html

From the blog CS@Worcester – Just a Guy Passing By by Eric Nguyen and used with permission of the author. All other rights reserved by the author.