Category Archives: CS@Worcester

To Scrum or Not to Scrum

This semester, we spent plenty of time learning about and discussing Agile and Scrum. We learned about the framework and how it can be implemented in teams, as well as the performance advantages of teams that use it. However, just as not every tool is right for every job, it is just as important to […]

From the blog CS@Worcester – CurrentlyCompiling by currentlycompiling and used with permission of the author. All other rights reserved by the author.

Backend Development

Backend development is the backbone of any software application. It handles the functionality behind the scenes, supporting interactions between the users and the application or website. The backend manages databases and server-side logic so modern applications remain efficient, scalable, and reliable. Understanding the principles of backend development is essential for developers to create effective software.

For this week’s post, I found a resource discussing key elements of backend development called “Mastering Backend Development” by Dan for Roadmap.sh. Dan introduces a roadmap for becoming an effective backend developer, and in this post, he discusses some of the steps on this roadmap in detail.

Backend development has been a key component of our class. I chose this resource because it is connected to the course, and the roadmap seems accessible to new developers. Having a path for your research and actionable steps to take can help with any knowledge gaps or roadblocks in understanding backend development.

This resource outlines 19 important steps, or areas of knowledge, necessary for backend development. I have limited my summary to the ones I found most interesting or instructive.

Caching Strategies: Improving performance by storing copies of frequently requested data, reducing database load, and speeding up responses.

Authentication and Authorization: Implementing security measures so that only verified users can access the system, and only the resources they are permitted to use.

Architectural Patterns: Picking the correct pattern, like Monolithic or Microservices, helps build scalable and maintainable systems.

Observability and Monitoring: Provide tools to monitor the system’s health, optimize performance, and diagnose issues.

Continuous Learning: Staying up-to-date with technologies, frameworks, and best practices. This ensures developers can adapt to the rapid evolution of backend development.

Some steps seem self-explanatory, but they can provide a good foundation for someone new to backend development. I can see how new developers might prioritize the software’s functionality while forgetting about its efficiency, security, and diagnostics. The post also offered plenty of tools to help with each step, like Redis and Memcached for caching strategies and OAuth and JWT for authentication and authorization.
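
To make the caching strategies step more concrete, here is a minimal sketch of the pattern that tools like Redis and Memcached support, written in plain Python. The in-memory dictionary, TTL value, and get_user_profile function are all made up for illustration and stand in for a real cache client and database query:

import time

# Tiny in-memory cache: maps a key to (value, expiration timestamp).
_cache = {}
TTL_SECONDS = 60  # how long a cached entry stays fresh (arbitrary choice)

def fetch_user_from_db(user_id):
    # Placeholder for a slow database query.
    time.sleep(0.5)
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user_profile(user_id):
    key = f"user:{user_id}"
    entry = _cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                       # cache hit: skip the database
    value = fetch_user_from_db(user_id)       # cache miss: do the slow work
    _cache[key] = (value, time.time() + TTL_SECONDS)
    return value

A real backend would swap the dictionary for a shared cache such as Redis so every server instance sees the same entries, but the check-the-cache-then-store flow stays the same.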

I felt Dan gave an excellent synopsis of the research a new developer needs to do. Plenty of the steps were not readily apparent to me, like caching or monitoring your system’s health. These steps would likely present themselves during development anyway, but for someone with little experience, the roadmap gives a good direction for research. This mindset aligns with the last step, continuous learning, and the importance of staying proactive about new challenges and technologies. In the future, I will keep researching these backend concepts and stay up to date so I can produce and maintain better backend software.

Resource: https://dev.to/roadmapsh/mastering-backend-development-mpb

From the blog CS@Worcester – KindlCoding by jkindl and used with permission of the author. All other rights reserved by the author.

Refactoring Tech Debt

While studying for my most recent exam in CS-343, I came across a phrase I was familiar with but did not really know the meaning of: technical debt. I had a vague sense that refactoring’s purpose was to repay technical debt, but I was not certain. Then I found an article on increment.com about reframing the definition of technical debt into technical wealth: “Reframing Tech Debt” by Leemay Nassery.

The article discusses tech debt, the accumulation of compromises made during product development that hinder long-term system efficiency, and proposes reframing it as tech wealth. It begins by explaining that tech debt arises from quick fixes and shortcuts taken to meet product goals, and that these shortcuts lead to system inefficiencies that accumulate over time. Tech wealth reframes the same work as an investment in building scalable systems that improve developer productivity, system stability, and overall team happiness. A key example is an automated deployment system, which makes all development afterwards faster. Tech debt is often seen as a nuisance or a purely negative aspect of engineering, but the tech wealth framing means deliberately allocating time to improve the code’s architecture. The article argues that addressing tech debt (or building tech wealth) should be prioritized, as it ultimately saves time and resources in the long term. It gives some example strategies to build tech wealth:
1. Allocate Time in Planning Cycles: Teams should allocate a portion of their engineering capacity to work on tech wealth alongside feature development. For example, 20% of the time can be dedicated to tech wealth activities such as automating processes or improving system architecture.
2. Quarterly Focus: Teams can also dedicate a few cycles per quarter to focus entirely on tech wealth, using this time to clean up past code and improve the system’s foundation.

The article closes with a conclusion that suggests rethinking tech debt as tech wealth and incorporating this mindset into planning cycles. This shift not only benefits engineers but also leads to improved product outcomes for users, even if the benefits aren’t always immediately visible.

This article was essential to rounding out my understanding of technical debt and why it matters. Through it, I was able to take away concrete strategies for tackling technical debt, and a sense of the power to be found in conquering it. I found the automated deployment example the most tangible, as I have struggled to develop in environments without that feature. Tackling these kinds of issues is an interest of mine, and I am excited to read more.

https://increment.com/planning/reframing-tech-debt/

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

Git Learning

Git is a strong version control system that enables collaboration, monitors changes, and ensures project integrity. The Atlassian Git course explains key techniques like branching, merging, and maintaining a clean commit history, which are critical for effective development workflows. It describes how teams can work independently on feature development using branches without interfering with the main codebase, and how merging techniques guarantee that changes are seamlessly integrated. Advanced techniques like rebasing and resolving merge conflicts are also discussed, showing how Git can manage intricate projects with several contributors. These ideas are supported by the official Git documentation, which offers comprehensive instructions on topics including marking releases, stashing changes, and fixing errors.

I selected these resources because they offer a comprehensive grasp of Git’s features, which are essential for contemporary software development. They provide useful insights into how Git might improve team operations and avoid common development problems, building on the version control lessons we covered in class. The focus on crafting meaningful commit messages, for instance, relates to our lessons on preserving traceability and clarity in software projects. Another factor in selecting these resources was their emphasis on bridging the gap between complex workflows, like conflict resolution and cherry-picking changes, and beginner-friendly techniques, like basic commits and branching. These guides have demonstrated to me that Git is about more than just code storage; it’s also about fostering teamwork, accountability, and transparency.

One idea that struck a deep chord was the concept of “atomic commits.” Atomic commits highlight how crucial it is to organize changes into logical chunks so that each commit embodies a single, coherent concept. This makes debugging, tracking project history, and undoing particular changes without unexpected consequences all simpler. That shift in perspective has had a big impact on how I handle version control. For instance, I now make an extra effort to commit often and to make sure my commit messages explain the context and goal of the changes. To eliminate extraneous noise in the project history, I have also begun using Git’s interactive staging to include only the relevant changes in each commit. Going forward, I want to apply these Git best practices to every project I work on. For example, I’ll employ branching more methodically, making sure that experiments, bug fixes, and features are separated for simpler testing and evaluation. I also plan to make it a habit to routinely review the commit history to find areas for improvement and keep the repository clear and understandable, and I will encourage team members to adhere to the same practices for consistency and cooperation. By doing this, I hope to enhance both individual productivity and teamwork by producing high-quality code and establishing a clear, stable, and future-proof project history.
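
As a rough illustration of what atomic commits and interactive staging look like on the command line (the branch name and commit messages below are placeholders I made up, not anything from the Atlassian course), building two atomic commits from one working directory might go something like this:

git checkout -b fix/order-total        # separate branch for one logical change
git add -p                             # interactively stage only the hunks for the fix
git commit -m "Fix rounding error in order total calculation"
git add -p                             # stage the remaining, related test changes
git commit -m "Add regression test for order total rounding"

Each commit now tells one story, so either of them can be reviewed, reverted, or cherry-picked on its own.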

From the blog CS@Worcester – A Bostonians Blogs by Abdulhafeedh Sotunbo and used with permission of the author. All other rights reserved by the author.

Agileeeee

Agile methods emphasize flexibility, collaboration, and iterative development to deliver high-quality software efficiently. The Scrum Guide explores core practices like sprint planning, daily stand-ups, and retrospective meetings, which help teams adapt to changing requirements and ensure continuous improvement. It highlights the importance of clear roles, such as the Product Owner, Scrum Master, and Development Team, to maintain focus and accountability. Similarly, the Agile Manifesto underscores values like prioritizing individuals and interactions, working software, and customer collaboration over rigid processes. Together, these resources demonstrate how Agile methodologies foster an environment of transparency and adaptability while driving innovation and customer satisfaction. 

I selected these resources because they offer a thorough understanding of Agile principles, which are critical to contemporary software development. They enhance our classroom instruction on iterative processes and team dynamics, which has made it easier for me to understand how Agile promotes productivity in practical settings. These resources were chosen in part because they emphasize useful strategies like prioritizing client input and dividing work into digestible chunks. These practices are particularly helpful for keeping projects moving forward and avoiding the problems associated with strict, long-term planning. For instance, I had trouble with scope creep and late deliverables in earlier projects. After studying these materials, I came to understand how Agile frameworks such as Scrum and Kanban can lessen these problems by encouraging gradual development and frequent review cycles.

One concept that resonated deeply was the principle of “embracing change.” Rather than viewing changing requirements as a hindrance, Agile promotes adapting to them as a competitive advantage. This mindset has changed how I approach project management: I now see flexibility as an integral part of the development process rather than a disruption.

Moving forward, I plan to integrate Agile practices like iterative development, regular retrospectives, and active stakeholder involvement into my work. For example, I will use user stories and sprint reviews to guarantee that all stakeholders are included and that feedback is consistently incorporated. To increase focus and productivity, I also plan to adopt the timeboxing concept. By doing this, I hope to develop processes that are not only effective and flexible but also in line with changing objectives and client needs. In addition, I want to create a cooperative atmosphere where candid communication and mutual responsibility propel the project ahead. By using these techniques, I will be able to deliver not only functional software but also long-lasting solutions that genuinely satisfy user needs.

From the blog CS@Worcester – A Bostonians Blogs by Abdulhafeedh Sotunbo and used with permission of the author. All other rights reserved by the author.

The new partnership between GitLab and Amazon Q

A brand new partnership was just announced between GitLab and Amazon Q, and it is changing the traditional flow of software development with new AI capabilities. With the help of Amazon Q’s AI-driven assistant, developers can now get help with complex tasks including feature development, code reviews, and even codebase upgrades. Developers can use Amazon Q with simple commands in GitLab, which can streamline their workflow and boost productivity. The partnership offers capabilities like automated code generation, assisted code reviews, and legacy code upgrades, all inside of GitLab. This lets developers focus their time on other tasks required for development rather than routine coding, enhancing their productivity.

I chose this specific resource because we used GitLab throughout the entire semester, so it connects to the majority of what we learned in class. AI is also one of the biggest topics and technologies in the world right now, and I believe this new partnership with GitLab will make many aspects of development easier and more efficient; efficiency in development is another topic we covered a lot in class through Scrum and Agile principles. I also believed this would be important not only for me to learn but for others as well, since this new program helps with many of the slowest parts of coding, and it is a tool that could serve everyone majoring in computer science very well. This news has also changed the way I think about software development: it shows that AI is not just a popular, interesting, and controversial media topic, but a useful tool that is being included in the daily process of developers. It is clear that anyone who wants to one day be a software developer must not only code well, but also know how to use AI in the most efficient way possible.

I plan on using this in the future by leaning on AI more to assist me in generating code, while staying mindful of a good balance between AI and my own ideas, so that my coding is as efficient as possible and the quality of my code stays as high as I can make it.

Link: https://aws.amazon.com/blogs/aws/introducing-gitlab-duo-with-amazon-q/

From the blog CS@Worcester – Thanas' CS Blog by tlara1f9a6bfb54 and used with permission of the author. All other rights reserved by the author.

The cost of software development

In this class we have talked about the process of building software many times, but we never really looked into how much that software might cost to create, or the different factors that make a difference in the price. I personally found this the part of software development I kept wondering about, so I found a blog post by Kacper Rafalski to learn more. If we are spending so much time on the creation of software, it is useful to know the different aspects of its pricing, including project size, complexity, team composition, and technology choices.

First he explains how cost depends on the software type. A web development project can range from affordable to fairly expensive, while a mobile app becomes far more complicated and costly because of the need to support multiple operating systems across many types of phones. Custom software development, which involves creating tailored solutions case by case, also varies in price based on the features required, and specialized fields like cloud computing and embedded systems are often seen as more flexible and capable but make the program much more expensive because they are more complex. He then explains that a project’s size, including its features and the diversity of its functions, has a big impact on its cost. A small company that wants basic software might have to pay around $100,000, while a bigger company that wants more complex software is looking at a starting range of around $600,000.

Pricing also depends on the developers: developers with much more experience command much higher rates because they work more quickly and can handle more complex projects. There are also hidden costs, like ongoing maintenance and system integration. The post then covers common pricing models in software development, like the Fixed Price, Time-and-Materials, and Dedicated Team models; each model has its advantages and is suited to different project types and client needs. Overall, the post recommends carefully planning your projects and budgeting well, using different strategies to remain cost effective.

Overall, I believe I learned a lot from this post, and from now on I plan to approach projects with a more realistic and cost-conscious mindset. I also learned that creating projects isn’t just about how well you can code, but also about planning strategically and managing resources and budget as well as I can in order to be a better team player.

Link: https://www.netguru.com/blog/software-development-cost

From the blog CS@Worcester – Thanas' CS Blog by tlara1f9a6bfb54 and used with permission of the author. All other rights reserved by the author.

RESTful API: Tips and Tricks

REST stands for Representational State Transfer, and API stands for application programming interface. Simply put, when there are multiple systems working together, a RESTful API allows a client to query or edit information in the system. I recently read a javaguides post on how to design properly scalable APIs and wanted to share a short summary.

Tip 1: Domain Model-Driven Design
This means the structure of your endpoints should mirror the real-life pieces of your domain. So if, for example, you want the items of an order, the endpoint should look like “/orders/{id}/items”. Also, don’t nest too deeply; it adds needless complexity.
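
As a sketch of what that kind of nesting can look like in practice, here is a minimal route written in Python with Flask purely as an example framework; the ORDERS dictionary is a made-up stand-in for a real database:

from flask import Flask, jsonify, abort

app = Flask(__name__)
ORDERS = {1: {"items": [{"sku": "A1", "qty": 2}]}}  # stand-in for real data

@app.route("/orders/<int:order_id>/items", methods=["GET"])
def get_order_items(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        abort(404)                      # the parent order does not exist
    return jsonify(order["items"])      # only the items that belong to this order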

Tip 2: Choose HTTP methods appropriately
The 4 main methods and their use cases are: GET for retrieving data, POST for creating a new object in the database, PUT for replacing an existing object with a new one, and DELETE for deleting objects in the database. There is also PATCH, which can be used to partially update an existing object. If you are using GET to modify an object, you are adding unnecessary complexity. I personally will be adjusting how I use PUT now that I understand the difference from PATCH.
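
To illustrate, here is a hedged sketch of how those methods might map to routes, again using Python and Flask as an example framework with an invented in-memory USERS store:

from flask import Flask, jsonify, request, abort

app = Flask(__name__)
USERS = {}  # made-up in-memory store keyed by user id

@app.route("/v1/users", methods=["POST"])
def create_user():
    data = request.get_json()
    user_id = len(USERS) + 1                     # naive id generation, fine for a sketch
    USERS[user_id] = data
    return jsonify({"id": user_id, **data}), 201

@app.route("/v1/users/<int:user_id>", methods=["GET"])
def get_user(user_id):
    if user_id not in USERS:
        abort(404)
    return jsonify(USERS[user_id])               # GET only reads, never modifies

@app.route("/v1/users/<int:user_id>", methods=["PUT"])
def replace_user(user_id):
    USERS[user_id] = request.get_json()          # PUT replaces the whole object
    return jsonify(USERS[user_id])

@app.route("/v1/users/<int:user_id>", methods=["PATCH"])
def patch_user(user_id):
    if user_id not in USERS:
        abort(404)
    USERS[user_id].update(request.get_json())    # PATCH changes only the given fields
    return jsonify(USERS[user_id])

@app.route("/v1/users/<int:user_id>", methods=["DELETE"])
def delete_user(user_id):
    USERS.pop(user_id, None)                     # deleting twice has the same effect
    return "", 204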

Tip 3: Implement Idempotence Properly
A method is idempotent if, no matter how many times it is called after the first, there is no behavior change from the first call. If you call DELETE multiple times on the same object, then only that object should be deleted. GET is already idempotent; it will always return the object’s data regardless of whether you already called GET. PUT and PATCH should be implemented similarly to DELETE. The hiccup is with POST: because duplicate calls are possible (two requests can create new objects with the same identifier), logic to handle these calls needs to be added.
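
One common way to handle that POST hiccup is to let the client send its own request identifier and have the server remember it. The sketch below assumes an Idempotency-Key header and uses made-up in-memory stores; it is one possible approach, not something the javaguides post prescribes:

from flask import Flask, jsonify, request

app = Flask(__name__)
USERS = {}          # created users, keyed by server-assigned id
SEEN_KEYS = {}      # idempotency key -> response already produced for it

@app.route("/v1/users", methods=["POST"])
def create_user():
    key = request.headers.get("Idempotency-Key")
    if key and key in SEEN_KEYS:
        return jsonify(SEEN_KEYS[key]), 200   # repeat call: return the first result
    user_id = len(USERS) + 1
    USERS[user_id] = request.get_json()
    body = {"id": user_id, **USERS[user_id]}
    if key:
        SEEN_KEYS[key] = body                 # remember it so retries do not duplicate
    return jsonify(body), 201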

Tip 4: Choose the correct HTTP status codes
When a call is made, there are codes for the result: 200 for a valid request, 201 for an object being created, 400 for an incorrectly formatted request, 401 if the requester is not authenticated, 403 if the request is forbidden for that user, 404 if the requested object is not found, and 500 for a server-side error. I am going to research more about how permissions are set up when someone makes a request and the logic behind blocking certain requests.
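
On the permissions question, the usual split is that 401 means the caller presented no valid credentials at all, while 403 means the caller is known but not allowed to do this particular thing. A minimal sketch of that check, with an invented token table and Flask used only for illustration:

from flask import Flask, request, abort

app = Flask(__name__)
TOKENS = {"secret-admin-token": "admin", "secret-viewer-token": "viewer"}  # made up

@app.route("/v1/users/<int:user_id>", methods=["DELETE"])
def delete_user(user_id):
    token = request.headers.get("Authorization")
    role = TOKENS.get(token)
    if role is None:
        abort(401)          # no valid credentials: who are you?
    if role != "admin":
        abort(403)          # valid user, but not allowed to delete
    return "", 204          # pretend the delete happened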

Tip 5: Versioning
Include which version of the API the request is using, for example by putting it in the path: “/v1/users”. They also list alternatives such as a query parameter (e.g. “/users?version=v1”) or adding a header to the request that includes the version number. I never thought about adding versioning to requests, but it makes sense to keep allowing requests against previous versions.
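
With path versioning, a framework like Flask can group routes under a version prefix using a blueprint; a minimal sketch (the blueprint and route names here are arbitrary):

from flask import Flask, Blueprint, jsonify

v1 = Blueprint("v1", __name__, url_prefix="/v1")

@v1.route("/users")
def list_users_v1():
    return jsonify([{"name": "example"}])   # old response shape stays available

app = Flask(__name__)
app.register_blueprint(v1)                  # /v1/users keeps working as v2 evolves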

Tip 6: Use Semantic Paths
Singular resources use singular nouns, and paths should be built from the resources themselves rather than from verbs; actions belong under the resource they act on.
DON’T: POST /v1/loginUser
DO: POST /v1/users/login

Tip 7: Support Batch Processing
Something like:
POST /v1/users/batch
to allow for multiple users to be made in one POST.
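
A rough sketch of what such a batch endpoint could accept, assuming the request body is a JSON array of user objects (that shape is my assumption, not something specified in the post):

from flask import Flask, jsonify, request

app = Flask(__name__)
USERS = {}

@app.route("/v1/users/batch", methods=["POST"])
def create_users_batch():
    created = []
    for data in request.get_json():          # expects a JSON array of user objects
        user_id = len(USERS) + 1
        USERS[user_id] = data
        created.append({"id": user_id, **data})
    return jsonify(created), 201              # one round trip instead of many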

Tip 8: Use query parameters for flexibility
You can add the ability to sort the data requested, or filter it, using queries. So:
/v1/users?age=gt:20
and
/v1/users?sort=name:asc
This is some awesome functionality and I cannot wait to implement it in my own system.
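
Here is a rough sketch of how those two query parameters could be parsed server-side, following the gt: and name:asc formats from the examples above; the sample data and Flask usage are just for illustration:

from flask import Flask, jsonify, request

app = Flask(__name__)
USERS = [{"name": "Ana", "age": 34}, {"name": "Bo", "age": 19}]  # sample data

@app.route("/v1/users")
def list_users():
    results = USERS
    age = request.args.get("age")              # e.g. age=gt:20
    if age and age.startswith("gt:"):
        results = [u for u in results if u["age"] > int(age[3:])]
    sort = request.args.get("sort")            # e.g. sort=name:asc
    if sort:
        field, _, direction = sort.partition(":")
        results = sorted(results, key=lambda u: u[field],
                         reverse=(direction == "desc"))
    return jsonify(results)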

Link:
https://www.javaguides.net/2024/12/top-8-tips-for-restful-api-design.html

From the blog CS@Worcester – Coder's First Steps by amoulton2 and used with permission of the author. All other rights reserved by the author.

Software Licenses

Role of Software Licenses in Protecting Your Code

Software licenses are essential tools for developers, companies, and organizations that create digital products. These licenses set the boundaries under which software can be used, edited, or shared, ensuring that the rights and responsibilities of creators and users are clearly outlined. By setting these boundaries, software licenses protect intellectual property, foster trust, and encourage innovation. They also encourage more people to put their work out into the world so it can be used by everyone. Whether you’re an independent developer or part of a large organization, understanding and implementing the right license can make or break the success of your software.

Licenses are not just legal jargon—they’re critical to the security and success of your software. Without a license, your work may be misused, copied without acknowledgment, or exploited without your consent. By clearly defining permissions and restrictions, licenses empower creators to protect their investments while allowing others to contribute in ways that respect the owners’ visions for their software.

Real-World Example: SatixFy’s Landmark Licensing Agreement with MDA Space

link: https://news.satnews.com/2024/10/23/satixfy-signs-million-software-development-license-agreement-with-mda-space/

On October 23, 2024, SatixFy, a leader in satellite communication technology, announced a multi-million-dollar software development license agreement with MDA Space. This deal grants MDA Space access to SatixFy’s advanced software solutions for use in their satellite systems, setting the stage for groundbreaking collaboration. The agreement reflects the importance of licensing in facilitating partnerships that push technological boundaries, especially in fields as innovative and demanding as space exploration. It encourages new discoveries and developments in space exploration, showing the impact that a licensing agreement can have.

The SatixFy-MDA agreement exemplifies how software licenses can drive collaboration and innovation. By granting controlled access to its proprietary software, SatixFy ensures its intellectual property is protected while enabling MDA Space to use cutting-edge solutions to make out-of-this-world collaborations. This approach benefits both parties, fostering trust and creating opportunities to develop next-generation satellite systems. Licensing agreements like this pave the way for advancements that could redefine how we approach space communication and exploration.

The success of software can often depend heavily on how it’s licensed. A well-chosen license can protect against the product being used or redistributed in ways that misconstrue its purpose, while at the same time enabling growth and fostering trust among users and collaborators. Open-source licenses may help establish a community around your software, while proprietary licenses can ensure you retain control over commercial applications. On the other hand, poor or unclear licensing can lead to legal disputes, misuse, or a loss of control over your work. Overall, licenses are a big help in allowing software to succeed, as well as setting the path for further innovation and collaboration.

From the blog CS@Worcester – coding.upcoming by Simran Kaur and used with permission of the author. All other rights reserved by the author.

Literature in computer science

To really boost your understanding, it’s better to dive into academic computer science papers instead of just watching tutorial videos. This approach helps build a solid foundation and explore future trends in the field. The article shares the journey of the “Papers We Love” team—Zeeshan Lakhani, Darren Newton, and David Ashby—who, even without formal training in computer science, explored key papers to expand their knowledge. Their experience shows how academic papers can shed light on the development of programming ideas and spark new ways to tackle problems.

The blog also recommends four key papers for anyone curious about computer science research:

  • “Communicating Sequential Processes” by Tony Hoare
  • “Dynamo: Amazon’s Highly Available Key-value Store”
  • “A Unified Theory of Garbage Collection”
  • “Out of the Tar Pit”

Getting into literature can really help programmers grasp the theory behind their tools and methods, which can lead to smarter and more efficient software development.

I picked this blog because I think computer science is more than just watching tutorial videos. A lot of folks get stuck in “tutorial hell,” just binge-watching without really getting the deep understanding they need. This blog points out that diving into academic computer science papers can help break that cycle. By engaging with the core literature, you can expand your knowledge and discover insights that tutorials might miss. Checking out research papers allows programmers to really grasp concepts better and come up with more creative and informed solutions.

Reading academic papers in computer science is, in my view, an essential practice for individuals aiming to enhance their knowledge of the discipline. Such papers frequently lay the groundwork for subsequent innovations and offer perspectives that are not typically addressed in conventional resources or tutorials. Although the terminology may occasionally be complex, the endeavor to comprehend these documents is rewarding, as it cultivates a more thorough understanding of emerging technologies and theoretical progressions. By immersing themselves in academic literature, both developers and researchers can remain at the forefront of trends, improve their analytical abilities, and make significant contributions to the wider technology community.

Blog: https://stackoverflow.blog/2022/12/30/you-should-be-reading-academic-computer-science-papers/

From the blog CS@Worcester – Matchaman10 by tam nguyen and used with permission of the author. All other rights reserved by the author.