Category Archives: Week 7

Software Development and Videogames

In our modern society, videogames have become a ubiquitous piece of software that many use as a form of entertainment. They run on everything from dedicated home consoles and PCs to handheld devices and, most prominently, smartphones. As software, videogames range from extremely simple text-based games to complex three-dimensional environments with intelligent AI systems controlling events throughout the game.

Game development can be a great exercise for any upcoming software engineer to practice their skills. A single person is capable of creating a game, as was originally the case with the popular game Minecraft. Minecraft, as many may already know, is an open-world game famous for its randomly generated terrain and the complete freedom it gives you to destroy and build in the world around you. Even in this simple game there is AI controlling the in-game monsters, physics that dictates how you jump, fall, and swim, a dataset of the different types of blocks that make up the game, and a world generator that uses a set of rules to create terrain that is random yet cohesive and believable.

In contrast to the random generation and world-state tracking done in Minecraft, other games such as Portal specialize in believable physics as well as their signature reality-bending portals. The game allows you to move seamlessly through portals, and even stand halfway between them, and it also simulates items and liquids dropping and flowing in a believable way. Physics is an important part of the game and can be difficult in itself to design from a software perspective.

Aside from these, however, games can be as simple as a two-dimensional platformer that only needs to consider, at a basic level, the acts of moving left or right and jumping, or a turn-based game that only needs to consider the strict choices the player is given to interact with the game world. Game development, just like any good software, has a development cycle and will go through development, alpha, beta, and release phases in which the software takes shape and becomes a finished product over time. As the game becomes more complex, bugs and other problems arise that must be dealt with, and code optimization and organization become important. Projects can involve multiple people and repository management as they grow more and more complex, eventually too complex for just one person to handle. Often the gamification of a task makes it more enjoyable and rewarding to practice, so why not apply this to software? If it interests you, it can be a great way to practice your skills and even discover new practices that can help in other programs you create.

From the blog CS@Worcester – George Chyoghly CS-343 by gchyoghly and used with permission of the author. All other rights reserved by the author.

JSON

I’ve been seeing so many JSON files while working with Docker and can’t help but wonder: what is JSON? What do these files do, and why do we need them along with JavaScript? In this blog, I want to cover this topic to help myself and others learn more about JSON.

JSON stands for JavaScript Object Notation, and it is a way to store information in an organized, easy-to-access manner. Basically, JSON gives us a human-readable collection of data that can be accessed in a logical way. There are many ways to structure JSON data, with arrays and nested objects being the most popular, but I will not go into the details of those two methods here and will focus more on what JSON is.

Why does JSON matter?

JSON has become more and more important for sites that need to load data quickly and seamlessly, or in the background without delaying page rendering. It also lets us swap out the contents of an element within our layouts without refreshing the whole page. This is convenient not only for users but also for developers in general. Because of this, many big sites rely on content provided by sites such as Twitter, Flickr, and others. These sites provide RSS feeds to minimize the effort of importing and using them server side, but when using them with AJAX-powered sites we run into a problem: we can only load an RSS feed if we’re requesting it from the same domain it’s hosted on. Using JSON with a callback function (the technique known as JSONP) lets us work around the cross-domain issue, because the callback sends the JSON data back to our domain. This capability makes JSON so useful, as it solves problems that were otherwise difficult to work around.

JSON structure

JSON is a string whose format very much resembles the JavaScript object literal format. We can include the same basic data types inside JSON as in a standard JavaScript object: strings, numbers, arrays, booleans, and other object literals. This allows us to construct a data hierarchy. JSON is also purely a string in a specified data format, which means it contains only properties and has no methods.
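As a quick illustration (my own, not from the article), here is a small TypeScript sketch showing a JSON string being parsed into an object whose structure mirrors a JavaScript object literal:

// A JSON document is just a string until we parse it.
const raw: string = `{
  "name": "Worcester State",
  "founded": 1874,
  "isPublic": true,
  "departments": ["Computer Science", "Mathematics"],
  "address": { "city": "Worcester", "state": "MA" }
}`;

// JSON.parse turns the string into a plain object with properties only, no methods.
const school = JSON.parse(raw);
console.log(school.departments[0]); // "Computer Science"

// JSON.stringify goes the other way, back to a string for sending over the network.
console.log(JSON.stringify(school));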

REST vs SOAP: The JSON connection

Originally, this kind of data was transferred in XML format using a protocol called SOAP. However, XML was verbose and difficult to manage in JavaScript. JavaScript already has objects, which are the way data is expressed within the language, so Douglas Crockford (the creator of JSON, and also of JSLint and JSMin) took a subset of that object-literal notation as the specification for a new data interchange format and named it JSON.

REST then began to overtake SOAP for transferring data. The biggest advantage of programming with REST APIs is that they can use multiple data formats, meaning you can work with not only XML but also HTML and JSON. Since developers prefer JSON over XML, they have also come to favor REST over SOAP.

Today, JSON is the standard for exchanging data between web and mobile clients and back-end services. As I go deeper into the software development cycle, I feel that the need for JSON is essential. Leaving aside all of its advantages, there are disadvantages too, but the importance of JSON is undebatable.

Source: https://www.infoworld.com/article/3222851/what-is-json-a-better-format-for-data-exchange.html

From the blog CS@Worcester – Nin by hpnguyen27 and used with permission of the author. All other rights reserved by the author.

SOLID Principles of Object-Oriented Programming

Thinking about design patterns, I decided to research the SOLID principles that reinforce the need for design patterns.

A wonderful source explaining the SOLID principles is from https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/.

The SOLID acronym was formed from:

  • Single Responsibility Principle
  • Open-Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

The Single Responsibility Principle states that a class should have only one responsibility and therefore should have only one reason to change.

The Open-Closed Principle states that a class can be extended with new functionality, but the existing code of the class should not be changed.

The Liskov Substitution Principle states that objects of a subclass should work with methods that expect objects of the superclass, and those methods should not produce irregular output as a result.

The Interface Segregation Principle relates to separating interfaces. There should be multiple specific interfaces rather than one general interface that forces classes to implement many overridden methods they have no use for.

The Dependency Inversion Principle states that classes should depend on interfaces or abstract classes rather than concrete classes and functions. This would help classes be open to extensions.
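To make that last principle concrete, here is a minimal sketch of my own (the class names are made up, not from the article) where a high-level class depends on an abstraction instead of a concrete class:

// The abstraction that high-level code depends on.
interface DataSource {
  fetchRecords(): string[];
}

// Low-level details implement the abstraction.
class SqlDatabase implements DataSource {
  fetchRecords(): string[] {
    return ["record from SQL"];
  }
}

class CsvFile implements DataSource {
  fetchRecords(): string[] {
    return ["record from CSV"];
  }
}

// ReportGenerator depends only on DataSource, so either implementation can be swapped in.
class ReportGenerator {
  constructor(private source: DataSource) {}

  print(): void {
    this.source.fetchRecords().forEach((r) => console.log(r));
  }
}

new ReportGenerator(new SqlDatabase()).print();
new ReportGenerator(new CsvFile()).print();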

I researched SOLID from this source because it had in-depth examples of code violating the principles and showed how the code could be fixed. For the Single Responsibility Principle example, it showed how the class Invoice had too many responsibilities, so its methods were separated out into new classes InvoicePrinter and InvoicePersistence so that each class has one responsibility in the application.
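A rough sketch of that separation might look like the following (my own TypeScript rendering of the article's Java example, so the method bodies are only placeholders):

// Invoice only knows how to compute its own total.
class Invoice {
  constructor(public items: number[], public customer: string) {}

  total(): number {
    return this.items.reduce((sum, price) => sum + price, 0);
  }
}

// Printing is a separate responsibility...
class InvoicePrinter {
  print(invoice: Invoice): void {
    console.log(`Invoice for ${invoice.customer}: $${invoice.total()}`);
  }
}

// ...and so is saving to storage, so each class has one reason to change.
class InvoicePersistence {
  saveToFile(invoice: Invoice, path: string): void {
    // placeholder: write the invoice data to the given path
    console.log(`Saving invoice for ${invoice.customer} to ${path}`);
  }
}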

The Liskov Substitution Principle example involves a Rectangle superclass and Square subclass (because a Square is a Rectangle). The setters for the Square class are overridden because of the property of squares to have the same height and width. However, this violates the principle because of what happens when the getArea function from the Rectangle class is tested. The test class puts in a value of 10 for the height and expects the area to be width * 10, but because of the override for the Square class dimensions, the width is also changed to 10. So if the width was originally 4, the getArea test expects 40, but the Square class override makes it 100. I thought this was a great example because I would expect the function to pass the test but did not remember how assigning the height in the test would also change the width.
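Here is a minimal sketch of that violation (my own TypeScript adaptation of the article's Java example):

class Rectangle {
  constructor(protected width: number, protected height: number) {}

  setWidth(w: number): void { this.width = w; }
  setHeight(h: number): void { this.height = h; }

  getArea(): number { return this.width * this.height; }
}

// Square overrides the setters to keep width and height equal.
class Square extends Rectangle {
  setWidth(w: number): void { this.width = w; this.height = w; }
  setHeight(h: number): void { this.width = h; this.height = h; }
}

// A test written against Rectangle sets the width to 4 and the height to 10,
// so it expects width * 10 = 40.
function checkArea(r: Rectangle): void {
  r.setWidth(4);
  r.setHeight(10);
  console.log(r.getArea()); // Rectangle: 40, Square: 100 -- the substitution breaks the expectation
}

checkArea(new Rectangle(0, 0));
checkArea(new Square(0, 0));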

The examples provided were very helpful in understanding how it can be seen in code, and the diagrams were a bonus for visual representation of the code. Going forward, I will know that I have this information to reference, if need be, when dealing with class designs. It has shown me principles to help make coherent classes with detailed explanations.

From the blog CS@Worcester – CS With Sarah by Sarah T and used with permission of the author. All other rights reserved by the author.

Monolithic vs. Microservice Architectures

Several of our classes have discussed two different forms of software architecture. Due to my intent to graduate with both the Software Development and Big Data Analytics concentrations, this topic especially interested me. On one hand, I need to know software architecture designs well enough to plan an implementation around them, and on the other hand, I need to know how to deal with massive data sets. These two factors combined pique my interest, as this is an intersection point of both fields.

A server-side application has several components, namely presentation, business logic, database access, and application integration. The Monolithic Architecture packages these components together as a single unit, resulting in ease of development, testing, deployment, and scalability. However, its design flaws include limitations in size and complexity, difficulty in understanding, and the need for application-wide redeployment.1

The Microservice Architecture is split into a set of smaller, connected services, each with its own architecture. Each service performs a given task (or tasks) which is then, in one way or another, linked back to the other services. The Microservice Architecture is characterized by faster development, independent deployment, and faster, independent scalability. However, its disadvantages include added complexity, increased difficulty in testing, and increased difficulty with changes that span multiple services.1

With these things in mind, questions such as “should I implement a Microservice Architecture or a Monolithic Architecture?” and “should I always use one or the other?” arise. What I’ve learned is that, despite the fairly detailed descriptions of both of these architectures, there isn’t a uniform consensus on when to apply them. Martin Fowler says a Monolithic Architecture should always be implemented first, because the Microservice Architecture implies a complexity that may not be needed; when the prospect of maintaining or developing the monolithic application becomes too cumbersome, one should begin transitioning to a Microservice Architecture.2 Stefan Tilkov disagrees, however, stating that building a new system is exactly when one should think about partitioning the application into pieces. He further states that cutting a monolithic architecture into a microservice architecture later is unrealistic.3

Would I start with a Monolithic Architecture, or would I start with a Microservice Architecture? While thinking about this problem, I know that different parts of the application do different things. One part deals with users placing orders, which requires those users to have their own UI, API, database, and servers; the other requires the approvers to have their own UI, API, database, and servers. This convinces me not to begin development with a monolithic design, since an application's partitioning should be thought about immediately. Therefore, I agree more with Stefan Tilkov on this issue, based on the knowledge that I have as a student.

Links:

  1. https://articles.microservices.com/monolithic-vs-microservices-architecture-5c4848858f59
  2. https://martinfowler.com/bliki/MonolithFirst.html
  3. https://martinfowler.com/articles/dont-start-monolith.html

From the blog CS@Worcester – Chris's CS Blog by Chris and used with permission of the author. All other rights reserved by the author.

REST APIs : First Look

Last week we went over Docker and some general information about it. This week we have started to go over REST APIs in our activities, so I decided to dedicate this week's blog post to them. The source I have chosen is a fairly recent article written by Zell Liew; it is lengthy, gives some general insight into what REST APIs are, and should be a good read for anyone looking to learn.

A REST API is, first of all, an application programming interface (API), which allows programs to communicate with each other, and this particular kind of API follows REST. REST stands for Representational State Transfer, which is basically a set of rules/constraints that the API has to follow when it is created. With REST APIs, we are able to send a request and get data back as a response. Zell goes over the four parts that make up a request. The first is the endpoint, the URL we request; the endpoint itself is made of the root-endpoint, which looks something like https://api.website.com, then the path, which narrows down what we are requesting, and finally any query parameters. The second part is the method, the kind of request we actually send; there are five types, get, post, put, patch, and delete, which determine the action to take. The third part is the headers, which have a number of uses but in general provide extra information in one way or another. Lastly, there is the data (or body), which contains the information we are sending to the server. Zell also goes over more topics such as authentication, HTTP status codes, and so on.
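As a simple illustration of those four parts, here is a small TypeScript sketch of my own (the path, query parameters, and token are made up; only the root-endpoint style comes from the article):

// Endpoint = root-endpoint + path (+ optional query parameters).
const endpoint = "https://api.website.com/users/123?fields=name,email";

async function getUser(): Promise<void> {
  const response = await fetch(endpoint, {
    method: "GET",                                   // the method: get, post, put, patch, or delete
    headers: {
      "Accept": "application/json",                  // headers carry extra information,
      "Authorization": "Bearer <token-placeholder>", // e.g. authentication
    },
    // a GET request has no body; POST/PUT/PATCH would send data here
  });

  console.log(response.status);       // the HTTP status code, e.g. 200
  const data = await response.json(); // the response body, usually JSON
  console.log(data);
}

getUser();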

Zell's explanation of REST APIs is a good start toward helping me understand what APIs are and how they are actually used in computer science. This view of how websites work is also quite interesting. It will help in class as we potentially work with APIs further in our activities, making the process a bit more fluid. The prevalence of REST APIs is very apparent, with big company sites like Google and Amazon using them, and the rise of the cloud over the years has only increased their usage. Knowing REST APIs is a must if I am to continue to improve and potentially work in a career in this area, and it looks like REST APIs will continue to be used for a while longer since people are comfortable with them, so I should be too.

Source: https://www.smashingmagazine.com/2018/01/understanding-using-rest-api/

From the blog CS@Worcester – kbcoding by kennybui986 and used with permission of the author. All other rights reserved by the author.

Docker Compose

In last week's blog, I introduced the power of the container technology Docker. Despite the advantage of easy implementation, however, typing the same commands again and again in a terminal is not a good long-term idea. Docker Compose is a tool for defining and running multi-container Docker applications: you write all the application's commands into a YAML file and execute that file whenever needed.

According to the Overview of Docker Compose, using Compose is a three-step process:

1. Define your app's environment with a Dockerfile so it can be reproduced anywhere.

2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.

3. Run docker compose up and the Docker Compose command starts and runs your entire app. You can alternatively run docker-compose up using the docker-compose binary.

A Dockerfile is another concept on its own; in this blog I will focus on how to work with Docker Compose. Let's take this command as an example, used when we want to create an nginx server:


docker run -d --rm \
  --name nginx-server \
  -p 8080:80 \
  -v /some/content:/usr/share/nginx/html:ro \
  nginx


This command mounts an HTML directory into an nginx server reachable at port 8080 locally. Here nginx-server is the name of the container; -p 8080:80 maps the local port (first parameter) to the container's default port (second parameter), separated by a colon; and -v /some/content:/usr/share/nginx/html:ro mounts the local directory /some/content into the container's HTML directory as read-only, so that content is what nginx serves on that port. Translating this command into Docker Compose:


version: "3.9"
services:
  nginx-server:
    image: nginx
    ports:
      - 8080:80
    volumes:
      - /some/content:/usr/share/nginx/html:ro


version is the version of the Compose file format we want to use, and all the necessary commands are integrated and placed under the services section. The name of the service comes first, and all of its settings are written after the colon, indented one level on the following lines. The image tag is the image we want to run, ports is equivalent to -p for mapping ports, and volumes is the same as -v, used for mounting. Under this format we can define as many services as we want at once and just rerun them with the command docker-compose up in the directory containing that docker-compose.yml file.


version: "3.9"
services:
  nginx-server:
    image: nginx
    ports:
      - 8080:80
    volumes:
      - /some/content:/usr/share/nginx/html:ro
  redis-container:
    image: redis


Overall, the convenience of Docker is taken to a higher level thanks to Compose. This video is another source that I watched to understand more about docker-compose, so I suggest using the main Docker Compose page from Docker's site, along with that video, as references if you need them in the future.

From the blog CS@Worcester – Vien's Blog by Vien Hua and used with permission of the author. All other rights reserved by the author.

Anestiblog #1

This week I read a blog post that I thought really related to the class, about why software design is important. The blog starts off explaining what software design is and its different types. The blogger describes software design as a layout for structuring the code of your software application, and the types are listed as Architectural Design, High-level Design, and Detailed Design, each with its own paragraph of description to teach you a bit about it. Then it gets into the heart of the post, the importance of software design, separated into four different points, followed by a chart of the process. The blog ends with a conclusion about how their company can help you with design.

I selected this blog post because I have always been very interested in software design and do not know much about that area. This blog has a good, in-depth description of why software design matters that I think every CS major should read. Many students just take a class to get the grade and never care about the material again, and I am against that; every area of computer science has something to teach you outside of school.

I think this blog was a great read that I recommend for many reasons. I love how the blog separates the different points and types into their own sections, so that everything has its own explanation and you do not need to go look up another site to understand the information. The chart showing the steps of the software design flow is really well made, and it shows the right way to design, from understanding the requirements to deploying the aspects of development into the design itself. It is a perfect introduction for somebody interested in working in this field and shows exactly how good designs are made. Another reason I recommend this blog is that every part of an application hinges on the design: no matter how hard you worked on your part, if the design is bad, the whole application will fail.

I learned how modularity makes software simpler by making future changes convenient, what privileges good software design gives you, and how bad software design can destroy an entire application. The material affected me heavily because it showed that software design is the backbone of an entire application, so it should never be taken lightly and should always be given the highest priority, because it can make or break the application. I will take all the knowledge given to me through this blog into the future by looking even further into software design, because it will be a big part of achieving my software development dreams. Now that I know why design is important, I will dig further into what makes a design great or poor, and how a design is improved.

Link : https://www.mindbowser.com/why-software-design-is-important/

From the blog CS@Worcester – Anesti Blog's by Anesti Lara and used with permission of the author. All other rights reserved by the author.

UML Diagram Designing

When it comes to programming and building a program, we sometimes need to first create a diagram to give others an overview of that program. I chose this blog because it contains many articles on various types of diagrams, including detailed information about UML diagrams. I picked this topic because, as a software developer working on team projects, a diagram helps specify the different parts of the system so the team can divide up the work, see the connections, and make sure the structure is built appropriately.

Why Use UML?

UML stands for “Unified Modeling Language”. It can be thought of as the blueprint of the software world, designed to make software architecture more visual and to help other software developers understand how everything is implemented. There are two kinds of diagrams, structural UML diagrams and behavioral UML diagrams, and they are useful for all sorts of occasions, such as modeling databases. In the example below we have a simple class diagram. Each main box is a class; each class contains attributes and methods with their assigned visibility, such as public, private, or protected, and displays the data types associated with them. Relationships are the connections objects make with other classes, and there are different types, such as association, inheritance, aggregation, and implementation. Animal is the superclass and the specific animals are the subclasses; the arrows show the inheritance relationship from the superclass.

(Diagram source: UML Class Diagrams Tutorial, Step by Step | by Salma | Medium)
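To connect a diagram like that back to code, here is a rough TypeScript sketch of my own of what the Animal superclass and one subclass might translate into (the attribute and method names are my own guesses, since the original image is not reproduced here):

// Superclass with a protected attribute and public methods,
// matching the visibility markers a class diagram would show.
class Animal {
  protected name: string;

  constructor(name: string) {
    this.name = name;
  }

  makeSound(): string {
    return "...";
  }
}

// The inheritance arrow in the diagram becomes "extends" in code.
class Dog extends Animal {
  makeSound(): string {
    return `${this.name} says woof`;
  }
}

console.log(new Dog("Rex").makeSound());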

I already had a small amount of experience with UML diagram design thanks to taking the database course, but the further we progress in this course, the clearer a perspective I have on how to go about it and how to make a diagram when one is needed, which makes programming much simpler. UML could have been useful during my intro to programming class: drawing the classes, methods, objects, data types, and so on on a piece of paper would have made the program and its connections much easier to see. This will help all developers communicate effectively by examining the appropriate connections, even if there may be some conflicts during the code-writing phase that require readjusting the diagram accordingly. I have provided links below to give more insight into how to go about using and making UML diagrams and why they are a crucial part of the software development process.

Links:

http://blog.genmymodel.com/what-you-need-to-know-about-uml-diagrams-structure-diagrams-1.html

https://www.genmymodel.com

https://www.geeksforgeeks.org/unified-modeling-language-uml-introduction/

From the blog cs@worcester – Dahwal Dev by Dahwal Charles and used with permission of the author. All other rights reserved by the author.

week-7

Hello. I wanted to write this blog after looking over some class activities again to see if there were any questions to review, and something caught my attention. I read the word “microservices” in some class-work exercises, got interested, and looked it up again. I found two links that helped me understand what microservices are, along with examples from Amazon.

 

What are microservices? 

Microservices (the microservice architecture) is an architectural style that structures an application as a collection of services that are:

  • Highly maintainable and testable
  • Loosely coupled 
  • Individually deployable
  • Organized around business capabilities
  • Owned by a small team

The microservice architecture makes applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features, even for large, complex applications. It also allows an organization to evolve its technology stack.

The pattern language guide 

The microservice architecture isn’t perfect; it has several drawbacks, and when using this architecture there are many issues that must be addressed.

The microservice architecture pattern language is a set of patterns for applying the microservice architecture. It has two goals:

  • The pattern language helps you decide whether microservices are a good fit for your application.
  • The pattern language helps you apply the microservice architecture successfully.

Characteristics of Microservices

  • Autonomous – Each component service in a microservices architecture can be developed, deployed, operated, and scaled without affecting the functioning of the other services. Services don’t need to share any code or implementation with other services, and any communication between individual components happens through APIs. 
  • Specialized – Each service is designed for a collection of capabilities and focuses on solving a specific problem. 

Benefits of Microservices

  • Agility – Microservices promote an organization of small, independent teams that take ownership of their services. Teams work in a small, well-understood context and are free to work independently and quickly, which shortens development cycle times and increases the overall throughput of the organization.
  • Flexible Scaling – Each service can be independently scaled to meet the demand for the application feature it supports. This lets teams size their infrastructure to the need, measure the cost of a feature precisely, and maintain availability if a service experiences a spike in demand.
  • Easy Deployment – Microservices enable continuous integration and delivery, making it easy to try out new ideas and roll back if something doesn’t work. The low cost of failure encourages experimentation, makes it easier to update code, and accelerates time-to-market for new features.
  • Technological Freedom – Microservices don’t follow a “one size fits all” approach; teams are free to choose the best tool to solve their specific problems.
  • Reusable Code – Dividing software into small modules enables teams to reuse functions for multiple purposes.
  • Resilience – Service independence increases an application’s resistance to failure. With microservices, applications handle total service failure by degrading functionality rather than crashing the entire application.

From the blog Andrew Lam’s little blog by and used with permission of the author. All other rights reserved by the author.

Blog post 2 – Design Smells

In programming we often make a lot of mistakes; some break the code and some do not. The ones that do not tend to bring down the efficiency of our code and make it very difficult to work with. Some of these mistakes have to do with the way we write the code, and they tend to hint at a bigger problem in the design. These mistakes are called design smells. I think Martin Fowler defines code smells best: he defines them as a “surface indication that usually corresponds to a deeper problem in the system.” Design smells come in all different forms, but they usually stem from developers not following best practices. The end result is that the code becomes too bloated or too inefficient, or breaks easily. Luckily, design smells are rather easy to spot once you know what they are. Some of the more common smells are:

Rigidity – The code becomes difficult to change. A simple change can cause a cascade of subsequent changes in dependent modules.

Immobility – The code contains parts that could be useful in other systems, but the effort and risk involved in separating those parts from the original system are too great.

Opacity – This smell occurs when the code is difficult to understand and follow.

Fragility – The program breaks in many places when a single change is made to the code.

Viscosity –  When making changes to the code, it is easy to do the wrong thing, but hard to do the right thing.

Needless complexity –  There are elements in the code that are not useful. Having them in the code is simply not necessary and it makes the code more complex than it needs to be.

Needless repetitiveness –  There are too many repeating elements in the code that could be removed by using abstraction or by refactoring the program.
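As a small illustration of that last smell (my own TypeScript example, not from Fowler's articles), the duplicated logic below can be removed by extracting a single function:

// Needlessly repetitive: the same discount logic is copy-pasted.
function priceForStudent(base: number): number {
  return base - base * 0.1;
}
function priceForSenior(base: number): number {
  return base - base * 0.1;
}

// Refactored: one abstraction removes the repetition.
function discountedPrice(base: number, rate: number = 0.1): number {
  return base - base * rate;
}

console.log(discountedPrice(100)); // 90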

These are things that we do not want in the code; in fact, they are considered technical debt. Technical debt is a term that describes the effects of mistakes or bad practices in code. As we program, we are going to make mistakes and sometimes not follow best practices, and these shortcomings are things we will have to revisit later, spending time, resources, and effort to fix and modify the code to make it work better. In this sense it is similar to ordinary debt. Once you know the smells, it becomes a lot easier to find them in your code, and if you do spot a design smell, it is best to remove it and solve the underlying problem.

https://martinfowler.com/bliki/CodeSmell.html

https://martinfowler.com/bliki/TechnicalDebt.html

From the blog CS@Worcester – Fadi Akram by Fadi Akram and used with permission of the author. All other rights reserved by the author.