Category Archives: CS-343

Git and Game Development

     A subject that has always been very near and dear to my heart is video games. Throughout my life I have been deeply enamored with games and the process of their creation, from the intricacies of 3D modeling to the various game engines in use. Despite that, I wouldn’t say I am an expert in modern game development by any means. As my classes have progressed, however, I’ve begun to understand more about the inner workings of software development and how teams are managed. This led me to look into how game developers use these tools to manage projects and keep everything orderly. After some research, I found that prominent game engines like Unreal Engine have their source code up on GitHub. Not only that, but Godot, a free and open-source engine, uses the MIT license and is entirely up on GitHub. Of course, even if an engine doesn’t have its code on GitHub or GitLab, that doesn’t mean you can’t host your own code in an online repository. Thanks to what I’ve learned this semester, I now know how to create game projects and host them, as well as keep a neat record of commits.
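For example, putting a new game project under version control and pushing it to a host takes only a few commands (the remote URL below is a placeholder, not a real repository):

```sh
# minimal sketch; replace the URL with your own repository
git init
git add .
git commit -m "Initial commit of game project"
git remote add origin https://github.com/your-username/my-game.git
git push -u origin main
```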

 In terms of project frameworks, agile methodologies have seen widespread use within the video game industry. Scrum is the most prominent of these methodologies and has been adopted by various companies. Due to the nature of video game development, there is a greater need for cross-discipline teams composed of developers versed in a variety of skills. Game development can be effectively broken up into tasks that fit nicely into each increment. I hope to one day make use of Scrum and help create a game of my own.

https://www.unrealengine.com/en-US/ue-on-github

https://www.gamedeveloper.com/production/agile-game-development-with-scrum-teams

https://starloopstudios.com/best-agile-practices-in-game-development/

From the blog CS@Worcester Alejandro Professional Blog by amontesdeoca and used with permission of the author. All other rights reserved by the author.

Kubernetes Clusters: What they are, and why it may become the most in-demand skill in the 2020s

To say that the use of containers has revolutionized how applications are designed and deployed would be an understatement. Gone are the days of applications being run directly on physical servers; because of the resource-management issues that approach creates, the alternative of using virtual machines to run multiple applications on a single CPU gives developers vastly more flexibility. One downside of virtualization, however, is that these virtual machines are considered rather “heavy.” Each VM is a fully functional machine, running a full OS in addition to whatever virtualized hardware is added on. In environments where each server’s CPU may run multiple virtual machines, the same issue occurs: high resource usage.

To solve this problem, the use of containerized software has become common. Containers share the same OS instance as the host machine, as opposed to a VM, which has an entirely separate OS. This leads to a multitude of benefits, such as reduced image storage size: while a VM uses images multiple gigabytes large, container images are much smaller, often measured in megabytes. Additionally, containers are entirely self-contained, meaning they are much more easily portable, which leads to faster and easier deployment.

One major drawback to containers is the upkeep. Containers must be allocated specific amounts of resources, such as memory, and, like anything else, they can fail and need repair.

Enter Kubernetes: an open-source platform designed to automate these maintenance tasks. Kubernetes uses clusters as the basis for its infrastructure, each containing nodes which run and manage the application. Control plane nodes manage scheduling, the API server, and other services. Worker nodes are where applications actually run, with larger applications using more worker nodes than smaller ones.
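As a rough illustration (this manifest is a generic sketch of my own, not drawn from the cited articles), a Deployment tells the control plane how many copies of a container the worker nodes should keep running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical application name
spec:
  replicas: 3               # the control plane keeps three copies alive
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25 # any container image could go here
```

If a container crashes or a worker node goes down, Kubernetes notices the replica count has dropped and schedules a replacement automatically.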

If such a platform is so valuable, why are so many positions left unfilled? Because jobs working with Kubernetes are hard.

On the one hand, developing and maintaining applications with Kubernetes requires experienced engineers, and time. The nature of the environment simply demands developers have the knowledge and experience to implement it.

On the other hand, because Kubernetes is such a new technology, the field is rapidly evolving, requiring developers to evolve along with it. Each change requires testing and optimization, as well as programmers needing to continue to broaden their expertise.

Why is this problematic? In addition to creating a work environment prone to causing burnout among employees, engineers working in this field can outgrow their positions quickly. As they gain more experience and widen their skillset, many move on to positions that offer higher pay. According to Forbes, Kubernetes engineers spend an average of just 18 months in their positions before moving on.

This creates a cycle; engineers are hired to work on Kubernetes platforms, hold their positions for a short while, and either due to the intense workload, higher paying positions, or a mix of both, move on. This leaves an opening in their previous position, which must be filled by a new hire. Rinse and repeat.

Works Cited:

Budhani, Haseeb. “Council Post: Addressing the Kubernetes Skills Gap.” Forbes, 10 May 2023, http://www.forbes.com/sites/forbestechcouncil/2023/05/10/addressing-the-kubernetes-skills-gap/. Accessed 20 Nov. 2023.

“Overview.” Kubernetes, kubernetes.io/docs/concepts/overview/. Accessed 19 Nov. 2023.

Poulton, Nigel. “What Is Kubernetes, and Why Should I Learn It?” Pluralsight, 2 Jan. 2023, http://www.pluralsight.com/blog/cloud/what-is-kubernetes. Accessed 20 Nov. 2023.

From the blog Butler Software Construction, Design, and Architecture by Griffin Butler and used with permission of the author. All other rights reserved by the author.

Understanding SOLID Principles: A Reflection on Best Practices in Software Engineering (Week-10)

Overview

In the dynamic field of software engineering, the SOLID principles stand as critical guidelines for designing maintainable, scalable, and robust systems. Introduced by Robert C. Martin, these principles encompass the Single Responsibility Principle (SRP), the Open-Closed Principle (OCP), the Liskov Substitution Principle (LSP), the Interface Segregation Principle (ISP), and the Dependency Inversion Principle (DIP). This post explores the essence and practical application of each principle in modern software development.

Single Responsibility Principle (SRP)

SRP champions the concept that a class should have a single responsibility, thereby promoting modularity. This approach simplifies software, making it easier to comprehend, debug, and update. Personally, adhering to SRP has streamlined my coding, enhancing readability and maintainability.
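As a small illustration (the report classes below are my own invention, not from any particular source), each class has exactly one reason to change:

```java
// Hypothetical sketch of SRP: formatting and persistence are separate.
class Report {
    final String title;
    final String body;
    Report(String title, String body) { this.title = title; this.body = body; }
}

// Formats report text; changes to storage never touch this class.
class ReportFormatter {
    String format(Report r) {
        return r.title.toUpperCase() + "\n" + r.body;
    }
}

// Persists reports; changes to formatting never touch this class.
class ReportSaver {
    void save(Report r) {
        System.out.println("Saving: " + r.title); // stand-in for real I/O
    }
}
```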

Open-Closed Principle (OCP)

OCP dictates that software components should be open for extension but closed for modification. This principle underlines the importance of designing flexible systems that can adapt over time without needing modifications to the existing code. In my practice, OCP has been crucial for developing systems that are both flexible and robust.

Liskov Substitution Principle (LSP)

LSP asserts that objects of a superclass should be seamlessly replaceable with objects of its subclasses without affecting program correctness. This principle emphasizes the significance of robust class hierarchies. Adhering to LSP has helped me maintain consistent and reliable class structures in my software.

Interface Segregation Principle (ISP)

ISP advocates for the creation of specific interfaces over general-purpose ones, ensuring that clients are not forced to depend on unused methods. This principle aids in organizing code more efficiently and mitigating the impact of changes. Implementing ISP has allowed me to develop more focused and efficient interfaces in my projects.

Dependency Inversion Principle (DIP)

DIP emphasizes that high-level modules should not depend on low-level modules, but rather on abstractions. This principle fosters a loosely coupled architecture that is easier to test and maintain. Applying DIP has been instrumental in my development work, enhancing system flexibility and testability.
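A minimal Java sketch of this arrangement (all names here are hypothetical): the high-level service depends only on an abstraction, and the low-level detail plugs into it.

```java
// Abstraction that both levels depend on.
interface MessageSender {
    void send(String message);
}

// Low-level detail: depends on the abstraction by implementing it.
class EmailSender implements MessageSender {
    public void send(String message) {
        System.out.println("Emailing: " + message);
    }
}

// High-level policy: depends only on the abstraction, so the detail
// can be swapped (e.g., for a test double) without changing this class.
class NotificationService {
    private final MessageSender sender;
    NotificationService(MessageSender sender) { this.sender = sender; }
    void notifyUser(String message) { sender.send(message); }
}
```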

Personal Reflection and Application

My experience with the SOLID principles has been transformative, sharpening my coding skills and altering my approach to software design. These principles have enabled me to develop software that is not only theoretically sound but also practically effective, impacting the quality of my work significantly.

Conclusion

The SOLID principles are not merely best practices; they represent a mindset shift towards better software development. Understanding and implementing these principles can dramatically improve the quality, maintainability, and scalability of software. As I progress in my career, these principles will continue to shape my approach to software engineering.

Resources

  • “Principles Of OOD” at butunclebob.com
  • “SOLID: The First Five Principles of Object Oriented Design” at butunclebob.com

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

CS-343 Week 10

GRASP (General Responsibility Assignment Software Patterns) assigns responsibilities to different modules of code in object-oriented software development. There are seven types of roles assigned to classes and objects to easily organize the responsibilities. These roles include Controller, Information Expert, Creator, High Cohesion, Low Coupling, Polymorphism, and Protected Variations. It is important to note that GRASP is occasionally paired with SOLID principles, making for the combination SOLID GRASP. Design patterns like these help keep code clean and organized, making it more comprehensible and reusable.

Controller suggests that the entity responsible for handling a system operation should be its own class, acting as an intermediary between the user interface and the logic of the program. This helps separate concerns and improves the flexibility of the whole system. Information Expert focuses on assigning responsibilities to the classes that hold the most information required to fulfill them. By following this principle, we can design systems where responsibilities are distributed efficiently among classes, which in turn reduces dependencies.

Creator guides the allocation of responsibility for creating objects: a class should be responsible for creating objects of another class if the first class aggregates or has a composition relationship with the second. Following this leads to less coupling between classes and ensures better maintainability. High Cohesion advocates designing classes with a clear and focused purpose. Each class should have a single responsibility and capture related behaviors and data, which makes the code easier to comprehend and the test cases simpler to write.

Low Coupling relies on designing classes with minimal dependencies on other classes. Fewer interconnections between classes make for a more modular system and improve maintainability, since changes to one class are less likely to affect other classes. Polymorphism enables objects of different classes to be treated the same through a common interface in object-oriented design. Leveraging this principle allows systems to be extensible and adaptable to new requirements and promotes loosely coupled systems.
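As a small illustration of Information Expert (the class names below are my own invention), the object that holds the data is the one given the job of computing with it:

```java
import java.util.List;

// Each line item is the expert on its own price and quantity.
class OrderLine {
    final double price;
    final int quantity;
    OrderLine(double price, int quantity) { this.price = price; this.quantity = quantity; }
    double subtotal() { return price * quantity; }
}

// The order holds the lines, so the order owns the total.
class Order {
    private final List<OrderLine> lines;
    Order(List<OrderLine> lines) { this.lines = lines; }
    double total() {
        return lines.stream().mapToDouble(OrderLine::subtotal).sum();
    }
}
```

Since Order aggregates its lines, the Creator pattern would likewise make Order responsible for constructing OrderLine objects.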

Applying GRASP principles ensures a clear distribution of responsibilities and promotes low coupling and high cohesion among classes. These principles help create an understandable design architecture while also providing the ability to adapt to future changes. They also help team members collaborate, since they facilitate communication among developers and establish a common understanding of each component’s role and interactions.

What is GRASP (General Responsibility Assignment Software Patterns)? | Definition from TechTarget

GRASP Principles: Object-Oriented Design Patterns | by Patrick Karsh | Medium

From the blog CS@Worcester – Jason Lee Computer Science Blog by jlee3811 and used with permission of the author. All other rights reserved by the author.

Should we still use SOLID Principles?

While I was browsing for an article to write about this week, I found this interesting blog post posing the question, “Why SOLID principles are still the foundation for modern software architecture.” The post is written by Daniel Orner, a software engineer at Flipp who formerly worked at IBM.

Daniel starts by explaining what exactly “SOLID” is. He attributes the meaning of SOLID to “the writings of Robert C. Martin in the early 2000s,” which provided a framework for creating quality object-oriented programs.

Daniel then covers what has changed since the 2000s, but also what hasn’t. Among the changes: the popularity of scripting and dynamic languages has risen greatly, some facets of object-oriented programming have become meshed with functional programming, open-source software has become extremely popular, and microservices have seen massive growth in use.

However, things that haven’t changed are that programs are still written and modified by people, there is still some form of organization/separation of code regardless of the language used, and code can be used for either internal or external purposes. I believe that these facets of programming may never change, no matter how much time passes. Even with the advancement of AI, I still believe that the human factor in programming will always be important. When faced with potential moral or ethical problems, a human element would always result in a better outcome than an AI could or would.

Daniel then goes through each of the five principles and alters them to become more general and applicable to modern programming trends. He starts by altering the single responsibility principle to “Each module should do one thing and do it well.”, the open-closed principle to “You should be able to use and add to a module without rewriting it.”, the Liskov substitution principle to “You should be able to substitute one thing for another if those things are declared to behave the same way.”, the interface segregation principle to “Don’t show your clients more than they need to see.”, and finally he left the dependency inversion principle as is!

Daniel then wraps up his post by concluding that modern SOLID means ensuring that your code is not surprising or confusing to anyone who will use or work with it, keeping things reasonable, and properly organizing said code, reiterating that SOLID is still key to creating and maintaining well-written code.

Overall, reading Daniel’s post widened my view and understanding of the SOLID principles. Comparing his new definitions to the ones crafted by Robert Martin showed how well thought out the SOLID principles were, to have already lasted 20+ years while remaining relevant for plenty more to come. The SOLID principles offer a framework not only for object-oriented programming but for programming in general, and Daniel showed this impeccably just by changing a couple of words in the definitions while keeping the same meanings given to the principles back in the 2000s.

Article Link: https://stackoverflow.blog/2021/11/01/why-solid-principles-are-still-the-foundation-for-modern-software-architecture/

From the blog CS@Worcester – Eli's Corner of the Internet by Eli and used with permission of the author. All other rights reserved by the author.

11/18 – Keeping the ball rolling.

So, since the end of the semester is coming up, I should be posting weekly for the near future, so stay tuned for a lot from me! With this week of work, I have two topics I want to discuss, one from each of the classes I’m taking with Professor Wurst.

Firstly, on the Software Process Management side, I wanted to discuss how we’re using Scrum, and the way it’s set up. We’re using GitLab’s issue boards, dividing issues by where they currently are in the sprint, with columns like Issue Backlog, In Process, Needs Review, Finished, and so on and so forth.

This setup reminded me a hell of a lot of Trello, a website/service I’ve been using for years now! And what’s funny is that I even mentioned this in class once, and the Professor said he used to use it for Scrum as well. My setup is quite different compared to the setup we have on GitLab; however, it still uses a similar Scrum/Kanban-esque layout.

Here are the two boards I mainly use: one for school, and one for commission work, as I freelance in art. My Comm board is a bit simpler, using tags within lists as opposed to a list per tag. As for my schoolwork board, it simply has a list for each course I’m taking and the items of work I need to finish. It’s a very good tool, and I highly suggest it to anyone looking to utilize Scrum, or anyone who just needs a good tool to organize things.

Check it out here! https://trello.com/

As for Software Design, I was curious what kind of file the .json files we were using actually are. We have been using them to store data about students and members in the modified version of LibreFoodPantry’s backend. I’ve seen them used many times before when modding games, where they usually store data for configuration files.

So, to learn more, I found this blog from HubSpot: https://blog.hubspot.com/website/json-files, and honestly I think I get it a bit more now. They’re simply data-storage files, able to store comma-separated values, objects, and arrays. They also support multiple data types, like integers, booleans, and strings.

So, putting that into the context of configuration files, like for settings in games, it makes sense why they are used, thanks to those integer and boolean values. If an option in a game has an on/off choice, a boolean value makes sense for holding that information, as “true” would be on and “false” would be off. As for how integers would be used, say you want to store the volume the user wants while playing the game; that can be stored as an integer from 0-100.
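For instance, a made-up settings.json for a game along those lines might look like this (the keys and values are just hypothetical examples):

```json
{
  "musicEnabled": true,
  "volume": 80,
  "difficulty": "normal"
}
```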

All in all, it’s really interesting to learn more about file types I don’t know too much about, and I should look into more of them and how they are written, like .obj, .html, and .ini.

From the blog CS@Worcester – You're Telling Me A Shrimp Wrote This Code?! by tempurashrimple and used with permission of the author. All other rights reserved by the author.

Navigating Code with the Principle of Least Knowledge

I recently delved into the fascinating world of software design principles. Among these, the Principle of Least Knowledge, also known as the Law of Demeter, caught my attention and proved to be a valuable guide in crafting efficient and maintainable code. In this blog post, I’ll share my insights into this principle, drawing from various resources to shed light on its significance.

The Principle of Least Knowledge

The Principle of Least Knowledge advocates limiting the interactions between classes or components in a system, promoting a design with low coupling and high cohesion. In simpler terms, it encourages a module to communicate only with its immediate neighbors, reducing dependencies and enhancing the system’s flexibility.
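A brief Java sketch (the classes here are hypothetical, not from the article) contrasts reaching through a neighbor’s internals with asking the neighbor directly:

```java
class Wallet {
    private double balance = 50.0;
    void deduct(double amount) { balance -= amount; }
}

class Customer {
    private final Wallet wallet = new Wallet();
    Wallet getWallet() { return wallet; }              // invites violations
    void pay(double amount) { wallet.deduct(amount); } // Demeter-friendly
}

class Checkout {
    void charge(Customer customer, double amount) {
        // customer.getWallet().deduct(amount); // reaches through a neighbor
        customer.pay(amount);                   // talks only to the neighbor
    }
}
```

The second form means Checkout never learns that Wallet exists, so Wallet can change freely without rippling into Checkout.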

Selected Resource

I stumbled upon an insightful article titled “Law of Demeter in Java”, which provided a comprehensive exploration of the Law of Demeter and its practical applications in Java programming. This resource not only clarified the concept but also demonstrated real-world scenarios where adhering to the principle can significantly improve code quality.

Why This Resource?

Choosing this resource was a no-brainer for me. The Law of Demeter seemed like an abstract concept at first, and I needed a practical guide to bridge the gap between theory and implementation. The Baeldung article not only provided a clear explanation but also offered concrete examples, making it an invaluable tool for someone eager to enhance their coding practices.

Insights and Reflections

After absorbing the content, I realized that applying the Principle of Least Knowledge fosters modular and maintainable code. By minimizing direct dependencies between classes, code becomes more resilient to changes and easier to comprehend. This is especially crucial in larger projects where codebases can quickly become unwieldy.

The real strength of the Law of Demeter lies in its ability to enhance collaboration within a development team. When each module interacts only with its immediate neighbors, team members can work on different parts of the system without constantly interfering with each other’s code. This not only boosts productivity but also minimizes the chances of unintended side effects during the development process.

Future Application

As I move forward in my studies and eventually into the professional realm, I anticipate applying the Law of Demeter as a guiding principle in my coding practices. I foresee it becoming a cornerstone in my approach to software design, ensuring that the systems I contribute to are not only functional but also maintainable in the long run.

In conclusion, my exploration of the Principle of Least Knowledge has been enlightening, thanks to the practical insights provided by the Baeldung article. Understanding and implementing this principle is undoubtedly a step towards becoming a more proficient and conscientious developer.

Reference:

  1. Law of Demeter in Java – Baeldung

From the blog CS-343 – Hieu Tran Blog by Trung Hiếu and used with permission of the author. All other rights reserved by the author.

REST APIs

Diving deeper into APIs, I found a great blog post on RESTful APIs called “REST APIs: How They Work and What You Need to Know” (https://blog.hubspot.com/website/what-is-rest-api) by Jamie Juviler. Juviler does a great job explaining what makes an API RESTful, why they are useful, and how to use them, and provides a few examples from popular websites like Twitter, Instagram, and Spotify.

Application Programming Interfaces (APIs) allow two software applications to communicate and send data between them. They define how requests and responses will be formatted.

Representational State Transfer (REST) involves a client sending a request for a resource to a server, and the server responding with the current state of that resource. This means responses will vary based on the current state of the resource.

Juviler states there are 5 guidelines an API must follow to be considered RESTful, plus one optional guideline (code on demand).

  1. Client-Server Separation:
    • Communication in a REST architecture is only ever initiated by a client. A request is only ever sent from a client to a server, followed by the response being sent from the server back to the client.
  2. Uniform Interface:
    • Every request and every response must follow a common format, achieved through the use of HTTP. HTTP has become the standard for REST APIs and, with it, the use of endpoints. All requests are formatted to contain two pieces: the HTTP method and the endpoint. Endpoints are used to access specific resources on a server (see the request sketch after this list).
  3. Stateless:
    • All communications with a server are independent from each other. A request needs to contain everything required to complete the interaction. There is no memory on the server to store or access previous requests. 
  4. Layered System:
    • Additional systems, like layered servers, may be added for security, but should not alter the format of messages between client and server. Requests and responses should always follow the REST architecture regardless of backend code.
  5. Cacheable:
    • REST APIs allow for caching responses. This means APIs can have larger resources saved on the client for faster access.
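To make the method-plus-endpoint format concrete, here is a minimal sketch using Java 11’s built-in HttpClient; the endpoint URL is a made-up example, not a real service:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestClientDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // The HTTP method (GET) plus the endpoint identify the resource;
        // the request carries everything the server needs (statelessness).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users/42"))
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // e.g., 200
        System.out.println(response.body());       // current state of the resource
    }
}
```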

APIs are a necessity in the software development world, and exploring RESTful APIs in the future will allow me to develop software that can communicate and interact with software around the world. Understanding the REST architecture will greatly improve my ability to create functional, organized, and scalable APIs.

From the blog CS@Worcester – CS Learning by kbourassa18 and used with permission of the author. All other rights reserved by the author.

CS343 Blog Post for Week of November 13

This week I wanted to write a post about the last SOLID design principle, Dependency Inversion. This principle deals with the relationship between high-level modules and low-level modules in a system.

Composed of two parts, the Dependency Inversion Principle as stated by Robert C. Martin is:

  1. High-level modules should not depend on low-level modules. Both should depend on abstractions.
  2. Abstractions should not depend on details. Details should depend on abstractions.

While the name “Dependency Inversion” might imply inverting the dependency relationship between modules, applying this principle actually involves building both high-level and low-level modules from the same abstraction. Constructing your code to abide by the previous SOLID principles, particularly the Open/Closed Principle and the Liskov Substitution Principle, should implicitly also result in code that abides by the Dependency Inversion Principle. The Open/Closed Principle states that a software module should be open for extension, but closed for modification, and the Liskov Substitution Principle states that an interface should be able to be replaced by a separate implementation of the same parent interface without compromising the functioning of the software.

The author includes another coffee-machine-themed example, detailing how to apply the SOLID design principles to two separate coffee machine classes, BasicCoffeeMachine and PremiumCoffeeMachine. The classes are very similar, the only differences being an extra class variable representing a coffee bean grinder and a method to brew espresso in the PremiumCoffeeMachine class. The author describes their process for defining suitable abstractions for a piece of software representing coffee machines, including their decision to split the methods for brewing filter coffee and espresso into two interfaces. A coffee machine class representing a machine with the capability to brew either filter coffee or espresso can easily be built by implementing both of these interfaces and overriding the methods within the concrete class.
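A sketch of those abstractions might look like the following; the interface names are my guesses at the article’s design rather than its exact code:

```java
// Hypothetical abstractions, one per brewing capability.
interface FilterCoffeeMaker {
    String brewFilterCoffee();
}

interface EspressoMaker {
    String brewEspresso();
}

class BasicCoffeeMachine implements FilterCoffeeMaker {
    public String brewFilterCoffee() { return "filter coffee"; }
}

// The premium machine extends the design by implementing both
// abstractions, without modifying any existing interface.
class PremiumCoffeeMachine implements FilterCoffeeMaker, EspressoMaker {
    public String brewFilterCoffee() { return "filter coffee"; }
    public String brewEspresso() { return "espresso"; }
}
```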

Refactoring the code this way enables independence between interfaces and implementations and makes expanding on the code base easier and safer. With fewer dependencies between modules, the effects of any changes to existing code won’t ripple out to other parts of your software.

I wanted to review this principle specifically because I wanted to refactor the interfaces from some old projects of mine. I wrote some Java classes meant to represent characters in a fantasy role-playing game, complete with character levels and statistics. As I left it though, the code is only capable of creating characters meant for the user, and not any non-playable characters or any other sort of entities. I want to go back and redesign my code so that I have fewer methods that only function with specific classes, and more functionality to create and edit characters. I remember I had a lot of trouble trying to increment a character’s level by 1 and increasing their stats accordingly, and I get the feeling that refactoring my code with the SOLID principles in mind will help me get it to a place I am happier with.

From the blog CS@Worcester – Michael's Programming Blog by mikesprogrammingblog and used with permission of the author. All other rights reserved by the author.

BLOG #3

Hello everyone, my name is Abdullah Farouk, and this is my third blog so far this semester (hoping I can get to 6 before the semester ends). I am going to be talking about comments in this blog because I saw this interesting article about them, and I just wanted to learn more and be better educated on the topic. When I started out with coding, I always skipped writing comments in my code because I didn’t really see the point of them, other than wasting my time. But as things got more advanced, I kept getting errors in my code; I knew what the error was, but I just didn’t know where the relevant methods were in my code or what their function was. That’s when I started thinking about comments: I told myself comments would have been helpful here, instead of reading every line of a method to see what it does. In the article that I tagged down below, the author explains the importance of software technical documentation and how it should be used, with some examples to show you.

I have learned a lot about software technical documentation from this article, including some examples of unnecessary comments. You don’t need to write an English translation of your software code; you just need a high-level overview of your method or an explanation of complex logic. Back when I started coding, I used to comment out code that I didn’t need anymore and just keep it there, but I quickly learned that it wasn’t a good habit because it just made my code look messy. Don’t put comments like “fix this bug”; instead, put something more useful, like what specifically needs to be fixed. I also learned that there are a lot of redundant and excessive comments out there. For example, you don’t need to write a comment explaining what i++ does, because that code is self-explanatory. Add comments that add value to your code, not ones that just clutter it.
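Here is a small, made-up Java example contrasting a redundant comment with one that adds value:

```java
// Hypothetical sketch: the useful comment records intent, not mechanics.
class CommentDemo {
    void printRows(String[] rows) {
        // Redundant: "i++ increments i" explains nothing the code doesn't.
        // Useful: explain why we start at index 1 — we skip the header
        // row that the export tool always includes.
        for (int i = 1; i < rows.length; i++) {
            System.out.println(rows[i]);
        }
    }
}
```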

Some people say that good code should not need comments to explain it, but I disagree, because I like reading information about a method before I start reading each line of it. If you don’t like putting comments in your code, then at least make a schema for your code so others can better understand the relationships between its parts. The author shows plenty of example diagrams of what a code schema looks like.

Reference article : https://medium.com/@VincentOliveira/how-to-write-good-software-technical-documentation-41880a0e7814

From the blog CS@Worcester – Farouk's blog by afarouk1 and used with permission of the author. All other rights reserved by the author.