Using GitHub Codespaces in Class

GitHub in the classroom

While I have used GitHub before in previous projects, this semester I have begun to dig deeper into its uses. Codespaces is one such use, and is an important tool to learn and use properly. While we have gone over GitHub Codespaces and Git command-line use in class, sometimes looking around online can help streamline the experience. GitHub posts its own blogs about the site's resources, and Codespaces is no exception.

10 things you didn’t know you could do with GitHub Codespaces

This GitHub blog post details the various uses of Codespaces and offers good suggestions, background, and a description of both what it is and why you should be using it. Mostly, the post focuses on features that are useful but not often utilized, and explains why these features are helpful tools.

I chose to talk about this blog post (with my blog post, haha) because this is a tool that I personally have used but didn't really know much about. I was told to use it in class for x, y, and z, but never really thought about how valuable it actually is. I only have experience with VS Code and other Microsoft IDEs, so it's cool to know that Codespaces supports much more than just that one IDE. On top of that, I think it is extremely convenient to have access to a workspace without setting up everything locally, and I am excited to learn how to better utilize this feature, because I realized it has directly impacted my class experience: the professor only needed to give us a repository, and we were ready to work on our assignment with no setup or install hassle!
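
For anyone curious how that zero-setup experience works, the repository usually carries a dev container configuration that Codespaces reads when it builds the workspace. Here is a minimal sketch of what a .devcontainer/devcontainer.json could look like; the image and extension I picked are hypothetical examples, not what our class repository actually uses:

```jsonc
{
  // Name shown in the Codespaces UI
  "name": "Class Assignment Environment",
  // Hypothetical base image; a course repository could pin whatever it needs
  "image": "mcr.microsoft.com/devcontainers/java:17",
  "customizations": {
    "vscode": {
      // Example extension preinstalled for everyone who opens the codespace
      "extensions": ["vscjava.vscode-java-pack"]
    }
  }
}
```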

Another feature that I am now excited to try is live sharing. Live sharing allows you and other people to work in the same code file at the same time, similar to having multiple people in one Google Doc. It even shows where their cursors are and everything! I think this feature could significantly improve the group project experience, because in previous years I've been frustrated with how annoying it is to work on the same code together with my group.

Lastly, Codespaces allows you to easily manage all of your working areas and makes file management easier to, well, manage! In my previous classes, I've had the most difficulty with the storage and file layout of my projects, which affects how, or whether, they run at all on my local device. Being able to manage this and keep everything more separated with Codespaces should, in my opinion, help with these troubles.

This concludes my mandatory blog post of Quarter 1 for the semester.

Till the next one,

— Will Crosby

From the blog CS@Worcester – ELITE Computer Science by William Crosby and used with permission of the author. All other rights reserved by the author.

Blog 1 – How to manage and improve your software process

Improving your software process is crucial, especially when working on a group project, managing time pressure, and leading a team, among other responsibilities. Here is a great resource that I found invaluable for beginners and interns: https://axify.io/blog/software-process-improvement

Pierre Gilbert, a software delivery expert, highlights seven steps to implement software process improvement, or SPI. I will break down those steps to get a better understanding.

Step 1: Make the problem visible – Use historical data to show where delays, defects, and process inefficiencies are happening.

Step 2: Get the team's buy-in – Don't just impose changes. Use data to show why improvements are needed so your team members see the value.

Step 3: Track essential metrics – Use DORA metrics + value stream mapping to find bottlenecks

However, this step is still confusing to me, so feel free to check out the link to get a better understanding.

Step 4: See where improvements would be most effective – Prioritize high-impact areas rather than trying to change everything at once.

Step 5: Make a plan – Clear responsibilities; tools; define which existing processes are targeted; pilot projects before roll-out; ensure feedback loops.

Step 6: Implement the plan – Execute carefully; monitor; allow for adjustment; don’t force changes that slow things down without justification; use continuous feedback.

Step 7: Adjust as needed – SPI is never “done” – measure progress via KPIs, adapt if cultural or resource issues arise, keep refining.

After reading those steps, I realize the software engineering environment is not as simple as I thought. Understanding the steps could help me prepare for what's coming next.

Before starting SPI, we need to understand the common challenges people usually face when working on a project.

Time pressure – in high-pressure environments, it’s easy to prioritize delivery over process improvement.

Poor management or lack of ownership – improvements can be fragmented without clear responsibility.

Team maturity – less mature teams may struggle with discipline & consistent adoption.

Overall, reading this could help you get ahead of what’s upcoming in the software engineering environment. For further information, check out the link above.

#CS-348, #SPI

From the blog CS@Worcester – Nguyen Technique by Nguyen Vuong and used with permission of the author. All other rights reserved by the author.

GitHub workflow vulnerabilities

https://github.blog/security/vulnerability-research/how-to-catch-github-actions-workflow-injections-before-attackers-do/
https://www.legitsecurity.com/blog/-how-we-found-another-github-action-environment-injection-vulnerability-in-a-google-project

Today I want to talk about a blog post I found on GitHub's own blog site that details the proper measures to protect repositories against malicious action injections, so we can properly enforce standards that safeguard our intellectual property and information.

But first I want to go into detail about what an action injection is, how the attack works, and what its consequences are. The main goal of the attack is for the attacker to have a command run through one of the workflows in the repository. The attacker can do this by creating a branch or an issue whose content ends up being executed through a run portion of the workflow. So if you have an automation that triggers when someone creates an issue, a bad actor can put a piece of malicious code in the title; because that title is passed into the workflow and executed, it can give the bad actor permissions they shouldn't have under normal circumstances. This can even get to the point of them approving their own pull requests.

So the question remains: how do we stop this? The answer is environment variables, which let inputted data like the issue title be treated as untrusted text and prevent the run step in the workflow from executing the malicious code. Using proper standards like environment variables for API keys and other sensitive information is crucial as well to maintain proper change control. If these standards aren't followed, the integrity of the repo itself can be compromised, since a malicious change that goes unnoticed early in development can lead to later branches, and other contributors' repos and branches, becoming compromised as well.
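
To make that fix concrete, here is a minimal sketch (my own example, not code taken from the blog post) of the vulnerable pattern and the environment-variable version in a workflow that reacts to new issues; the workflow and step names are made up:

```yaml
name: issue-triage          # hypothetical workflow name
on:
  issues:
    types: [opened]

jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      # Vulnerable pattern: the issue title is interpolated straight into the
      # shell script, so a crafted title can run arbitrary commands.
      # - run: echo "New issue: ${{ github.event.issue.title }}"

      # Safer pattern: pass the untrusted title in through an environment
      # variable, so the shell treats it as plain data, not script text.
      - name: Announce new issue
        run: echo "New issue: $ISSUE_TITLE"
        env:
          ISSUE_TITLE: ${{ github.event.issue.title }}
```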

I chose this blog post because my own internship mainly uses automated workflows, and we have our own GitHub workflow that documents change control requests so we can see when a push that might compromise information or other systems is about to be made. We can also reverse these changes, and we have workflows that try to detect similar attacks, where a suspicious title is flagged through machine learning. This is also important to understand in class when working with public repositories, where these attacks might be automated by bots, so we are more educated on when to properly test branches and on the proper use of environment variables in a project to further protect the repo.

From the blog CS@Worcester – Aidan's Cybersection by Aidan Novia and used with permission of the author. All other rights reserved by the author.

Understanding GitHub

Quarter One Blog

The video I watched is a basic tutorial for GitHub. Corbin, the creator, explains that GitHub is essential for version control and organization, and how important it becomes as your code gets more complex. He starts off by explaining what a repository is and how to make one in GitHub. Then he shows one of his repositories for a project with a working program. He says that if he wants to start making changes and adding to the code, he would create a branch to prevent the code from breaking and to save a lot of time if he needed to backtrack and fix it. He shows how to make a branch as well. He covers the basics of what we went over in class and in the GitKit chapters, and explains how to do it in a slightly different way than what was said in class. I chose this video because I find it useful to have multiple ways of explaining a topic so I can understand it better.

The process for creating a repository is not something that we went in depth on in class, so having Corbin explain how to go through that process was helpful. He touches on the difference between public and private repositories. In class and in FOSS applications, they would all be public repositories, but if you were just using GitHub to put your private code in the cloud and for your own version control, a private repository would be more useful.

Corbin provides a visual and an explanation of what branches and forks do, but it is not as clear as what was shown in class. Having both explanations is helpful for a deeper understanding. He goes over having a completed branch, how to merge it back into main, and explains what the version-control deletions and additions look like.
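
For reference, the branch-and-merge flow Corbin demonstrates roughly maps to commands like these on the command line; the branch name is just an example, not one from the video:

```bash
# Create and switch to a new branch so main stays stable
git checkout -b feature/new-change

# ... edit files, then record the work ...
git add .
git commit -m "Describe the change"

# Merge the finished branch back into main
git checkout main
git merge feature/new-change

# Optionally delete the merged branch
git branch -d feature/new-change
```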

Outside of the Software Process Management class, understanding GitHub is super useful in the real world. Other classes, like the Software Development Capstone, will require the understanding of GitHub to complete the final project. A lot of companies use Git for version control and collaboration, and look for those skills on applicants’ resumes. I feel confident and comfortable that I can navigate and use GitHub in my life going forward. 

Last year, I took Software Testing before understanding Git. While I was able to figure it out and get through the class, with the knowledge I have now of the actual process and the reasoning behind it, I feel like I would have spent less time troubleshooting silly issues.

From the blog CS@Worcester – ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

DESIGN PATTERNS

Software Design Patterns 101: A Beginner's Guide, a Medium essay, introduces readers to software design patterns and discusses how they help create systems that are effective, scalable, and manageable. According to the article, design patterns are reusable templates that help programmers produce cleaner, more structured code by resolving common programming problems. The three primary types of design patterns, creational, structural, and behavioral, are explained. Object creation is the main emphasis of creational patterns like Factory Method and Singleton. Adapter and Composite are two examples of structural patterns, which describe how classes and objects can be combined to create larger, more flexible systems. Observer and Strategy are two behavioral patterns, which emphasize communication between objects and the delegation of responsibility. Teams can create a common language by employing these patterns, which enhances cooperation and reduces misunderstandings.

I chose this article because it clearly relates to the concepts of object-oriented design and code organization that we have been studying in class. In a recent exercise, for instance, we examined a Duck class hierarchy dilemma in which some subclasses, such as RubberDuck and DecoyDuck, were compelled to inherit unnecessary methods, such as quack() and fly(). Because of this design, we had to override methods with "do nothing" implementations, since these ducks either didn't fly or didn't quack. The article deepened my understanding of why this is a typical example of a design issue that can be fixed using patterns like the Strategy Pattern, which is under the behavioral category covered in the reading.

Through the article and our class discussion, I came to understand how inheritance, although helpful, may become problematic when it compels subclasses to adopt behavior that isn't appropriate for them. Depending on the type of duck, the quack() and fly() methods in the Duck superclass in our example have different actions. By extracting these behaviors into separate classes, such as FlyBehavior and QuackBehavior, we can dynamically assign them to various ducks at runtime. This approach reduced superfluous overrides and increased the design's adaptability. Here, the Strategy Pattern was essential, since it let us alter a duck's behavior without explicitly changing its class. The way that design patterns like Strategy prioritize composition over inheritance resonated with me. By mixing smaller, reusable components instead of depending only on strict class hierarchies, this idea promotes flexible system design. In the Duck example, we can simply construct new behavior classes and mix them in as needed, rather than adding extra subclasses each time a new type of duck is created. I intend to incorporate these ideas into my upcoming work. For instance, I could use behavior classes for activities like jumping, running, and attacking if I were creating a game with many character types. This would eliminate the need to rewrite significant portions of code in order to add new characters or change current ones.
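
To make this concrete, here is a minimal sketch of the Strategy Pattern applied to the Duck example; the class and method names are my own simplified take on the design we discussed, not code from the article:

```java
// Behavior interfaces that can be swapped at runtime
interface FlyBehavior { void fly(); }
interface QuackBehavior { void quack(); }

class FlyWithWings implements FlyBehavior {
    public void fly() { System.out.println("Flying with wings"); }
}
class FlyNoWay implements FlyBehavior {
    public void fly() { System.out.println("Can't fly"); }
}
class LoudQuack implements QuackBehavior {
    public void quack() { System.out.println("Quack!"); }
}
class MuteQuack implements QuackBehavior {
    public void quack() { System.out.println("..."); }
}

// The Duck composes behaviors instead of forcing subclasses to override them
class Duck {
    private FlyBehavior flyBehavior;
    private QuackBehavior quackBehavior;

    Duck(FlyBehavior fb, QuackBehavior qb) {
        this.flyBehavior = fb;
        this.quackBehavior = qb;
    }

    void performFly() { flyBehavior.fly(); }
    void performQuack() { quackBehavior.quack(); }

    // Behavior can even be changed at runtime
    void setFlyBehavior(FlyBehavior fb) { this.flyBehavior = fb; }
}

public class Demo {
    public static void main(String[] args) {
        Duck rubberDuck = new Duck(new FlyNoWay(), new MuteQuack());
        rubberDuck.performFly();    // "Can't fly" -- no "do nothing" override needed
        rubberDuck.performQuack();  // "..."
    }
}
```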

From the blog CS@Worcester – A Bostonians Blogs by Abdulhafeedh Sotunbo and used with permission of the author. All other rights reserved by the author.

Understanding Git Collaboration: Communities, Upstreaming, and Merge Conflicts

Hello everyone! Welcome back to my blog posts. Today I will be delivering my first quarterly blog post.

For this week’s blog, I decided to read “Git Forks and Upstreams: How-to and a cool tip” from Atlassian Git Tutorials. I picked this article because it connects directly with what we’ve been practicing in class—working locally, pushing changes upstream, staying synchronized, and handling merge conflicts. I also wanted a guide that explained the actual Git commands rather than just high-level concepts, since I’ve been moving away from relying only on graphical interfaces.

Summary of the Resource

The article explains the difference between origin (your fork) and upstream (the original repository you forked from). It walks through how to set up your fork so you can keep it synchronized with the upstream repo, which is especially important when multiple people are contributing. Commands like git remote add upstream <url>, git fetch upstream, and git merge upstream/main are introduced step by step. The tutorial also shares a useful tip for checking how many commits your branch is ahead or behind the upstream, which makes it easier to stay in sync.
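
Putting the commands from the article together, a typical fork-synchronization session looks roughly like this; the URL is a placeholder for whatever repository you forked:

```bash
# Point a remote named "upstream" at the original repository (placeholder URL)
git remote add upstream https://github.com/ORIGINAL_OWNER/ORIGINAL_REPO.git

# Download the latest upstream history without touching your local branches
git fetch upstream

# Merge upstream's main branch into your local main branch
git checkout main
git merge upstream/main

# Push the synchronized branch back to your fork (origin)
git push origin main
```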

Why I Chose This Resource

I chose this article because it fills a gap in my own Git knowledge. Until recently, I mainly used the graphical interface on the side to commit, push, and sync my changes. That worked for basic assignments, but I often felt like I didn’t really understand what was happening behind the scenes. This tutorial helped me connect the dots by showing me the exact commands and explaining why they matter, especially in collaborative projects.

Reflection and Takeaways

This resource helped me see Git as more than just a tool for saving code; it's really about teamwork. Understanding how to add and pull from upstream makes me feel much more prepared to collaborate on group projects or open-source contributions. I no longer see merge conflicts as something to fear, but as a natural part of multiple people working on the same code.

One big realization for me was how important it is to stay synchronized with upstream. In one project I did before, I once ignored updates for too long, and the merge that followed was messy and stressful. Now I understand that frequent git fetch upstream and git merge calls prevent bigger problems down the road.

Another personal shift was moving away from the GUI. While the interface made Git feel easier at first, I see now that the terminal gives me more power and clarity. Running git status, git log, or checking how far ahead or behind my branch is compared to upstream makes me feel more in control. It's like going from driving an automatic car to learning manual: I finally understand how things actually work under the hood.

Looking ahead, I know these lessons will help me not only in this class but also in internships and my future career. Whether I’m working on an open-source project or contributing to a company’s codebase, being comfortable with upstream workflows and conflict resolution will make me a stronger and more reliable teammate.


Citation / Link

From the blog CS@Worcester – Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

Understanding Git: The Key to Safe Team Collaboration

In class we are learning about Git. Git is a version control system that allows multiple developers to make changes to a program while still keeping control over the commits. The reason I think this matters is that there will be times when developers make a mistake and cause a whole list of issues, but with Git they can go back to the previous commit and fix things there. This allows developers to work on major projects without causing the main program to break.
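
As a small example of what going back to a previous commit can look like in practice (the commit hashes here are made up):

```bash
# See recent history to find the commit that introduced the problem
git log --oneline
# e3f1a2b Break the build (bad change)
# 9c8d7e6 Last known good commit

# Option 1: create a new commit that undoes the bad one (safe on shared branches)
git revert e3f1a2b

# Option 2: move the current branch back to the good commit (rewrites local history)
git reset --hard 9c8d7e6
```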

Another important aspect to consider is that it allows larger teams to work on the same program at once, which can save the company time and money. We also need to consider how developers from all over the world can help with open-source projects by working on a fork of the repository, so that they can make changes locally and not impact the upstream of the project.

From my perspective, Git divides work into the upstream, clones, local repositories, the online repo, and branches so developers are able to make these changes safely, and so there can be people to double-check their work. That is why organizations have reviewers and maintainers reviewing changes so that the upstream does not break. In addition, it allows developers to know when changes are made and what the scale of a change is. When a company's upstream goes down, it can cause the company to lose a lot of money and reputation with clients and the public.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

What are Code Smells

Currently, in class we are going over code smells. Code smells are signs we perceive in the code warning that a change could cause a problem. First let me list the code smells: Rigidity, Fragility, Immobility, Viscosity, Needless Complexity, Needless Repetition, and Opacity. These code smells are what we should try to avoid when making a program so that we do not have problems in the future when we try to change it.

Coding standards and quality will change over time; regardless of that, programmers need to be aware of the risks of having a program with these issues. Rigidity refers to a section of the program being depended on too heavily throughout the program, so that when we try to make changes to that section it requires a lot more code to fix. Fragility is when a change we made causes other parts of the code to not work as intended. Immobility is when a program cannot be transferred from one system to another. For example, certain engines or IDEs might manipulate certain values in a program, so it will behave differently between the two engines or IDEs.

Viscosity is when a program has a lot of improper coding methods that slow down its efficiency. Needless Complexity occurs when a program is far more complex than the demands it is used for. Needless Repetition is when code repeats throughout the program, creating multiple unneeded copies. Lastly, Opacity is about how clearly other programmers can see the purpose of a feature and how clearly it is defined.
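
To show what Needless Repetition looks like, here is a small made-up Java example of the smell and one way to remove it:

```java
// Smelly version: the same discount arithmetic is copy-pasted in two places,
// so a change to the rule has to be made (and remembered) twice.
class OrderSmelly {
    double checkoutOnline(double price)  { return price - (price * 0.10) + 2.99; }
    double checkoutInStore(double price) { return price - (price * 0.10); }
}

// Cleaner version: the shared rule lives in one helper method.
class Order {
    private static final double DISCOUNT_RATE = 0.10;
    private static final double SHIPPING_FEE = 2.99;

    private double applyDiscount(double price) {
        return price - (price * DISCOUNT_RATE);
    }

    double checkoutOnline(double price)  { return applyDiscount(price) + SHIPPING_FEE; }
    double checkoutInStore(double price) { return applyDiscount(price); }
}

public class RepetitionDemo {
    public static void main(String[] args) {
        Order order = new Order();
        System.out.println(order.checkoutOnline(20.00));  // 20.99
        System.out.println(order.checkoutInStore(20.00)); // 18.0
    }
}
```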

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Improving Design Communication with PlantUML

I guess when it comes to software engineering, code is usually the arbiter of truth, although it isn't always the best avenue for communicating a design to your average person or layman. Developers and computers can parse through lines of code like it's nothing, but trying to understand the higher-level structure or interactions within a system can be somewhat challenging without some kind of abstraction. That is why UML diagrams have a great deal of utility: they strip away the low-level implementation details that the layperson doesn't have to deal with, allowing us to focus on the essential interactions.

Why I chose this?

So for this week's professional development blog, I decided to go with this resource regarding UML diagrams (https://miro.com/diagramming/what-is-plantuml/#what-is-plantuml), considering we've been working on this in class for the better part of a few class days now. While syntax is important, I wanted a resource that went beyond the basics and emphasized best practices, specifically how to make diagrams more readable and effective as communication tools. This ties back to what we were doing for the classwork activity: the learning objectives of the activity we did in class together include identifying parts of UML diagrams, being able to connect them to a Java implementation, and even being able to draw the diagrams using Markdown with PlantUML.

What did I learn?
The article did help me solidify my understanding of the lifelines, the messages, and the activation bars within UML sequence diagrams, but more generally I learned how PlantUML treats diagrams as code: by writing simple text scripts, you can generate UML diagrams consistently and efficiently. This can help in collaborative environments, where diagrams have to evolve along with the codebase. The section on best practices is the one I find the most interesting, since the article highlights that diagrams should focus on clarity over completeness.

For example, a UML sequence diagram should emphasize the key messages between objects rather than every small detail. The guide also pointed out how to use colors, notes, and layout to improve readability, things that you don't really pay too much attention to (at least I don't), so getting pointers on how to use them is good in case we want to make diagrams look more neat. I do appreciate the explanation of how PlantUML integrates with version control systems, although it's not something I found particularly significant. Since diagrams are stored as text, they can be tracked and managed in Git just like source code. This makes them much easier to update collaboratively, compared to traditional tools where diagrams are static images.
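
To show what diagrams-as-code looks like in practice, here is a small made-up PlantUML sequence diagram (the participants and messages are hypothetical, not from the article) that uses lifelines, activation bars, and a note:

```plantuml
@startuml
actor User
participant "Web App" as App
participant "Database" as DB

User -> App : submitOrder()
activate App
App -> DB : saveOrder()
activate DB
DB --> App : orderId
deactivate DB
note right of App : Only the key messages are shown,\nnot every internal call
App --> User : confirmation
deactivate App
@enduml
```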

Reflection and Application?

If I'm being honest, at first I thought this might be something I'd forget about in a month or two, but I do think the article helped reinforce the core concept that UML designs aren't just the academic exercises we were doing in class; they can be a practical tool for teams collaborating with each other, compared to traditional tools where the diagrams are just static images. It isn't just about checking a box, but about making sure everyone understands, both the code and the people. For any future projects that come to mind, I'll apply what I've learned by keeping my diagrams somewhat simple and trying to make them with an audience in mind, since there will be people I'll interact with and get feedback from on my PlantUML code. I also wouldn't mind experimenting with Markdown and Git so that the diagrams can evolve with the codebase, becoming almost like living documents as opposed to static artifacts.

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

First Post

From the blog cs@worcester – Software Development by Kenneth Onyema and used with permission of the author. All other rights reserved by the author.