Category Archives: Week 12

How Version Control Applies in Cooperative Work vs. Student Life

Organizations across the modern world invest thousands of dollars in Agile development. Agile offers a lot of advantages, but a business only succeeds when it builds sound practices around it, and version control is one of them. For computer science students, becoming proficient in Agile not only increases your hands-on expertise but also helps you master your surrounding team environment.

Version control and Agile methodology give you more control over work that changes frequently. They encourage quick adaptation and mastery of changing technology, so a team manager with the right mindset can steadily improve the team's performance. Changes in everyday situations also shape our tasks as a team: Agile is simply one technique, and a team's performance depends on more than a single mindset, especially when the environment's inevitable glitches and bugs appear. Quick adaptation plays an equally crucial role in a student's academic career. How fast you pick things up, and how well you apply them in the right place with a good understanding, pays off in your career goals. Today's world is not steady; everyday life keeps changing, and new things come and go, so adapting to change plays a key role in every sector. Students want to know how an Agile workflow fits their thinking in different scenarios. Version control allows students to recreate real-world scenarios in which multiple team members work on various parts of a project at the same time.
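As a rough sketch of what that looks like in practice, here is a minimal Git workflow two teammates might follow; the branch names and commit message are made up for illustration:

```bash
# Teammate A, working in their own clone of the project:
git switch -c feature/search        # start a personal branch for one part of the project
# ...edit files for the search feature...
git add .
git commit -m "Add search page"
git push -u origin feature/search   # share the branch with the team

# Teammate B does the same on a feature/profile branch.
# When a feature is finished and reviewed, it is merged back into main:
git switch main
git pull
git merge feature/search
```

Each person works independently, yet the shared history keeps every change recorded and reversible.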

Key Benefits for Students

Enhanced Collaboration – Version control enables students to work on multiple group activities at the same time, review each other's work, and give meaningful feedback on problem-solving, so they can learn multiple skills at once and apply them in the job market.

Timing Ability – Version control implements a tracking system that helps students keep their project's evolution on schedule and documented over time. This not only helps with understanding development timing but also cultivates a sense of accountability for program improvements.

Developing the Ability to Take Risks – Because earlier versions are never lost, students can work new ideas into their plans without fear, which creates a more advanced process. This encourages creativity, helps in solving errors, and strengthens risk-taking skills.

In conclusion, version control in Agile is powerful in both the cooperative and student worlds. These benefits go a long way toward your future career, so applying them with proper thinking sharpens how you deliver work and boosts your prospects. These are powerful tools for students to carry into the real world, so learning and understanding the whole process makes you a stronger developer.

November 29, 2024

From the blog CS@Worcester – Pre-Learner —> A Blog Introduction by Aksh Patel and used with permission of the author. All other rights reserved by the author.

GitHub Utilization

GitHub has probably been the most useful tool I’ve ever used when collaborating with other team members or co-workers on projects. Even though I have used it for quite some time now, there are many resources out there on the basics that still help me in figuring out the tool. One of many that I found was actually on the GitHub blog itself, under the developer skills section, which has guides to a good amount of the basics themselves. Here is a link to just one of the sections, relating to pull requests: https://github.blog/developer-skills/github/beginners-guide-to-github-creating-a-pull-request/. For anyone new to using GitHub, these guides seem incredibly useful; they include videos, pictures, and recaps of everything discussed.

Just recently we went through GitHub in our CS-348 course, and I did end up referring back to this source to get a refresher and understand the content better. These guides go through a step-by-step process explaining how each of these functions, such as pull requests and merging, works. Each subsection of explanations also includes pictures and videos to help the reader better understand what they should be seeing when following along with the process. One of the most recent posts in the beginners section also covers setting up and securing the user’s profile, as seen here: https://github.blog/developer-skills/github/beginners-guide-to-github-setting-up-and-securing-your-profile/. This is one of the most basic of the basics that pretty much everyone who uses the internet should know about, but you would be surprised at how many actually do not take this part seriously. Thankfully, in this post the author describes the use of 2FA and how to set it up; 2FA is one of the easiest ways to make your account far more secure than a password alone. At the end of each of these lessons / posts, the author includes a “Your next steps” section that either gives the reader a repository to test with, if the lesson called for one, or points you toward the next direction you should take.
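For a sense of what those guides walk you through, here is a rough sketch of the command-line side of opening a pull request; the branch name and commit message are placeholders, and the final step can also be done entirely on github.com:

```bash
git switch -c docs/update-readme            # do the work on its own branch
git add README.md
git commit -m "Clarify setup instructions"
git push -u origin docs/update-readme       # publish the branch to GitHub

# Open the pull request on github.com, or with the optional GitHub CLI:
gh pr create --fill                         # uses the branch's commits for title and body
```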

Overall, this blog is probably one of the most effective ways for someone who is just learning GitHub to get used to a bunch of the basics. I only found it recently, but I much prefer it over a YouTube video since it guides you through each step with a good amount of detail, and it is very easy to go back any time I need to and re-read specific steps or details. Since not all of the basics have been published yet, I will most likely keep my eye on this blog, because reinforcing the basics when you are not sure about something is always helpful.

From the blog CS@Worcester – CS Blog by Mike and used with permission of the author. All other rights reserved by the author.

Blog #2: Anti-Patterns – Explained

Anti-patterns are best described as behaviors or approaches to problems that conceptually may help solve the problem, but in practice are a detriment to the process of doing so. In software development, this can come in many forms, whether ‘cutting corners’ by reusing old code or trying to condense behaviors into one class/object. Ultimately, these decisions we make as developers come from a place of genuine concern. When these patterns remain unchecked, they begin to rot in our code and cause many problems, some of which contradict the reasons they were originally incorporated.

In the article Anti Patterns in Software Development, the author Christoph Nißle describes several anti-patterns that occur in software development and the consequences of each. Three anti-patterns resonated most with me, as I could see how someone could accidentally implement one of them. The first is what Nißle calls the Boat Anchor. It represents code that *could* be used eventually but, for the time being, has no relevance to the current program. By keeping this code, the developer is contributing to visual bloat. Not only does this make finding specific lines harder, but once other developers join the project they may have questions about how this code will be used. To counter this anti-pattern, it’s good practice to only keep code that is relevant to the program’s functionality AND is currently being used by the program. The second anti-pattern I found interesting was Cut-and-Paste Programming. As the title suggests, it occurs when programmers reuse code from external sources without properly adapting it to their current project; the code can also come from elsewhere in the same program. In both cases this code will cause errors, as it’s not a ‘one size fits all’ solution, and the code being pasted could carry errors of its own. These can be remedied by “creating multiple unique fixes for the same problem in multiple places” (Nißle), but each unique fix takes time, and that time could have been spent writing code for the specific problem rather than reusing code. Lastly, the Blob pattern is one that I have personally fallen victim to several times. This pattern has the developer trying to make objects/classes as dense with functionality as possible, but this complexity acts against the single responsibility principle. Classes (and objects) should be solely responsible for one behavior; if we include too many, the purpose of that specific class becomes unclear. The Blob pattern can easily be fixed by dissolving the blob class into several single-responsibility classes. It’s best to catch poor practices such as the Blob early in development to minimize the amount of refactoring that’s needed to fix the code.
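To make the Blob fix concrete, here is a small JavaScript sketch of my own (the class names are hypothetical, not taken from Nißle's article): first a "blob" class doing everything, then the same behavior dissolved into single-responsibility classes.

```javascript
// Blob: one class validates, stores, and notifies all by itself.
class UserManager {
  constructor() { this.users = []; }
  register(user) {
    if (!user.email.includes("@")) throw new Error("Invalid email"); // validation
    this.users.push(user);                                           // persistence
    console.log(`Welcome email sent to ${user.email}`);              // notification
  }
}

// Refactored: each class has exactly one reason to change.
class UserValidator {
  validate(user) {
    if (!user.email.includes("@")) throw new Error("Invalid email");
  }
}

class UserRepository {
  constructor() { this.users = []; }
  save(user) { this.users.push(user); }
}

class WelcomeMailer {
  send(user) { console.log(`Welcome email sent to ${user.email}`); }
}

class RegistrationService {
  constructor(validator, repository, mailer) {
    this.validator = validator;
    this.repository = repository;
    this.mailer = mailer;
  }
  register(user) {
    this.validator.validate(user);
    this.repository.save(user);
    this.mailer.send(user);
  }
}

const service = new RegistrationService(new UserValidator(), new UserRepository(), new WelcomeMailer());
service.register({ email: "student@example.com" });
```

Each small class can now be read, tested, and changed on its own, which is exactly what the single responsibility principle asks for.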

As mentioned before, I’ve fallen victim to these anti-patterns because, conceptually, they save time in the development process. However, the time saved is often eclipsed by the time required to fix errors later in development. Properly following design principles will make development take more time, but it should reduce the number of errors that would appear if anti-patterns were used in their place.

Link to Article:

https://medium.com/@christophnissle/anti-patterns-in-software-development-c51957867f27

-AG

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Blog #1: Introduction to APIs

In our work with REST APIs, namely through the HFOSS project Thea’s Pantry, we have added new functionality to the database by updating the API specifications and creating new endpoints. During this whole process I did not have a concrete idea of what an API was, nor did I understand what made REST APIs any different from their alternatives.

In the article What Is a RESTful API?, the authors Stephen Bigelow and Alexander Gillis define what an API is, what components make an API RESTful, and how these APIs can be used. APIs are defined as “code that lets two software programs communicate with one another” (Bigelow & Gillis). This can be seen in our work on Thea’s Pantry, where the specification.yaml file describes the commands that communicate between the backend and the database. In the general flow of control, the user interacts with a piece of software, that software interacts with the API, and the API then shifts control to the external software. From this point the user can directly act on the external software (with methods such as delete and put), or fetch information from it to be returned to their client-side software. REST stands for representational state transfer; this is a type of software architecture that makes communication between two programs more accessible and easier to implement (Bigelow & Gillis). Users can interact with another program’s resources using HTTP requests composed of a method, an endpoint, headers, and sometimes a body. RESTful commands, similar to database operations (get, update, delete, etc.), can be given unique functionality by the developers of the API. This modularity of command functions is one of the benefits of using RESTful APIs. An alternative to RESTful APIs is SOAP. Both achieve similar functionality, but the methods of doing so are different. For example, SOAP is a communication protocol, whereas REST is an architectural style. SOAP only works with XML, while REST can use XML in addition to other formats. It is worth noting that REST and SOAP are not one-to-one alternatives and can be used together.
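To illustrate that method/endpoint/headers/body structure, here is a minimal JavaScript sketch of a REST call; the URL, token, and field names are invented for the example rather than taken from Thea's Pantry:

```javascript
// A REST request is a method + endpoint + headers (+ an optional body).
async function updateGuest() {
  const response = await fetch("https://api.example.com/guests/12345", {
    method: "PUT",                                 // which operation to perform
    headers: {
      "Content-Type": "application/json",          // format of the body being sent
      "Authorization": "Bearer <token>",           // many APIs require credentials
    },
    body: JSON.stringify({ householdSize: 4 }),    // the representation being sent
  });

  if (response.ok) {
    const guest = await response.json();           // the resource returned by the server
    console.log(guest);
  }
}

updateGuest();
```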

APIs allow developers to extend the functionality of their programs by communicating with other programs. This can be achieved through HTTP requests (in the case of RESTful APIs) or nodes (in the case of SOAP APIs). REST APIs favor flexibility and modularity; SOAP APIs, on the other hand, are more rigid and require precise specifications. Due to that accessibility, RESTful APIs are the better fit for projects such as Thea’s Pantry. I cannot see SOAP being implemented in Thea’s Pantry, given its rigidity and the limited file types it supports. REST is much preferred here, as we can use JavaScript files to define the HTTP requests that the API will use.

Link to Article:

https://www.techtarget.com/searchapparchitecture/definition/RESTful-API

-AG

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

How unpredictable bad code can be…

URLs:
Article on SOLID: https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/
Video mentioned: https://www.youtube.com/watch?v=j-6N3bLgYyQ

One video has always caught my attention because it clearly illustrates why SOLID principles are so important. I will reference two sources in this post for better understanding, in case you want to explore the topic further. However, I kindly ask you to watch the video linked at the beginning for my comments to make sense.

I chose an article to complement the video because it offers a more approachable explanation of SOLID principles and, as stated, explains them in plain English. The video features a dad following instructions from his kids to make a peanut butter and jelly sandwich. The issue here is that such a task may have become so automatic for us that we no longer think about every single detail involved.

How is that related to programming and SOLID principles? Well, it’s quite similar. At its core, code consists of lines and lines of instructions written for a machine to execute. To achieve the intended goal, these instructions must be precise and correct; otherwise, we can encounter numerous issues. As the video demonstrates, the lack of precision in the kids’ instructions led to some funny outcomes: first, the dad stacked two slices of bread on top of each other without doing anything else. In another instance, he ended up with a piece of bread with a “bit” of peanut butter on it, a whole bottle of jelly, and another slice of bread on top.

Did I just make a typo by saying a bottle of jelly was between two slices of bread? No, that did happen. This highlights what occurs when you assign certain instructions—or, in programming, functions (Single Responsibility Principle)—more than one purpose or intent. While a peanut butter and jelly sandwich recipe might not fully embody all five SOLID principles, the Single Responsibility Principle (S in SOLID) alone is enough to demonstrate the importance of clear and focused design in coding.
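To translate the sandwich problem into code, here is a tiny JavaScript sketch of my own, not from the video or the article: one function quietly doing three jobs, then the same work split so each function has a single, clear purpose.

```javascript
// Violates the Single Responsibility Principle:
// one function validates, calculates, and formats all at once.
function buildReport(entries) {
  const valid = entries.filter(e => e.amount > 0);              // validation
  const total = valid.reduce((sum, e) => sum + e.amount, 0);    // calculation
  return `Report: ${valid.length} entries, total ${total}`;     // presentation
}

// Single responsibility: each step is its own clearly named function.
const validEntries = entries => entries.filter(e => e.amount > 0);
const totalAmount  = entries => entries.reduce((sum, e) => sum + e.amount, 0);
const formatReport = entries =>
  `Report: ${entries.length} entries, total ${totalAmount(entries)}`;

console.log(formatReport(validEntries([{ amount: 5 }, { amount: -2 }])));
```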

The lack of clarity and precision in the sandwich instructions led to various unwanted results. Although the instructions made sense to the kids, they didn’t work well for anyone else. Similarly, when writing code—whether for homework, work, or even personal projects—these principles should never be overlooked. I believe following such practices is part of a developer’s ethical responsibility.

At the end of the day, even if I’m the one reviewing my code a year later, I might struggle to understand it without proper adherence to these principles. You might wonder, “How is it possible not to understand what you wrote yourself?” Well, that’s exactly what happened to me yesterday while refactoring some old code. I encountered several parts that I couldn’t make sense of, so I had to revise and apply these principles to make them comprehensible.

From the blog CS@Worcester – CS Today by Guilherme Salazar Almeida Nazareth and used with permission of the author. All other rights reserved by the author.

Takes on how to become an effective team

URL: https://www.youtube.com/watch?v=7zDX8VqvBa0

I came across another interesting podcast episode from Beyond Coding. This time, the episode I watched focused on Effective Product Teams, featuring Anne Kooijman, currently a Product Owner at Coolblue. The conversation between her and the host covered various topics related to team management and ways to build an effective product team.

The reason I chose this resource is that it provides real-world solutions to straightforward questions. The host, Patrick Akil, mentioned that he recently took on an assignment as a Project Manager. He asked many interesting questions, some of which I had myself.

A couple of specific points caught my attention, and I’d like to share them with you. The first was Anne’s perspective on what is required for a team to deliver quality work. She said, “Give them the necessary tools and the theoretical background.” I found this fascinating because you shouldn’t give developers half-baked solutions or dictate how to solve a problem. Doing so might make it harder for them to translate someone else’s idea into code. Instead, provide them with the necessary knowledge and tools to figure out solutions on their own and let them do it.

Another topic Anne discussed was how companies sometimes deviate from the core principles of Scrum and the potential outcomes of those deviations. She pointed out that there’s no issue with straying from what the “constitution” of Scrum dictates if it leads to improvements. This is intriguing because Scrum is meant to provide a framework, not a rulebook. Different teams consist of different people who may respond differently to certain changes. Personally, I imagine that I wouldn’t react well to constantly changing sprint durations.

They also discussed goal-driven teams and how having goals is essential for team effectiveness. This resonates with me, as it aligns with a practice I adopted this semester. This isn’t meant to criticize how others manage their responsibilities but to connect the podcast’s ideas with my own experience. For the first time this semester, I decided to only consume entertainment during my free hours once all my tasks were completed. It sounds simple and cliché, but it works—just like Scrum. Teams need a singular goal, and the focus should remain on that goal.

This brings us to the final topic: timelines, and how even flexible and inconsistent timelines can be better than having none. This concept challenged something I’ve always believed—that if you’re going to do something, do it right and to the best of your abilities, or don’t do it at all. However, I realized that some flexibility in timelines is necessary to allow for adaptation and growth.

From the blog CS@Worcester – CS Today by Guilherme Salazar Almeida Nazareth and used with permission of the author. All other rights reserved by the author.

Software testing

Software testing is verifying that a software product performs the way it is designed to. The benefit of software testing is preventing errors or bugs from making it into a program. It is best to test at the start of a build, during the build, and after the deployment or release of a program. Through software testing, developers and companies can ensure there are no defects or issues before releasing or promoting a product. If there are bugs or defects in a company’s products, it is going to lose customers and money. Some of the different types of software testing include acceptance testing, unit testing, and security testing. Acceptance testing checks that the program as a whole runs and behaves as required, without errors. Unit testing is testing individual parts of a program instead of testing the whole program. In my Unix systems programming class, I was exposed to unit testing in homework projects, where I had to write parts of a program and receive a grade for each part, and couldn’t move forward until receiving full credit on a part of a program. Unit testing can lead to fewer errors than acceptance testing alone; it is better practice to test your code in small pieces than to only run the whole program. Security testing ensures that software programs are safe from attacks that could deny access to your software or make it work incorrectly.
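As a minimal illustration of the difference in scope, here is a small JavaScript unit test written with Node's built-in assert module; the add function is just a placeholder example:

```javascript
const assert = require("node:assert");

// The unit under test: one small piece of the program.
function add(a, b) {
  return a + b;
}

// A unit test checks this piece in isolation, not the whole program.
assert.strictEqual(add(2, 3), 5);
assert.strictEqual(add(-1, 1), 0);
console.log("add() unit tests passed");
```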

Software testing was first developed after the end of World War II. Computer scientist Tom Kilburn wrote the first piece of software, which performed mathematical calculations, and debugging was the main testing strategy at the time. By the 1980s there were other strategies for testing software applications beyond just fixing bugs and errors. The process of software testing includes determining a testing method to use, developing test cases or setting up requirements for the test, writing scripts or parts of a program, analyzing test results, and writing a report. For large software programs, there are tools used to complete tasks and run tests for different parts of the work, with instant feedback on what does or does not work in a program. Through reporting and analytics, teams can present their results in a dashboard, which allows everyone to see the overall result of a project. Reporting the results shows how testing the product led to the final outcome.

Blog url: https://www.ibm.com/topics/software-testing

From the blog CS@Worcester – jonathan's computer journey by Jonathan Mujjumbi and used with permission of the author. All other rights reserved by the author.

AI Is Not A Software Engineer

In this blog, the author discusses how much times have changed for new CS graduates, reminiscing about how little they knew when they started and how easily they got a job, and then noting how much more prerequisite knowledge is needed today to even sniff a job. The topic of the article is that, now more than ever, it is easy to get code that works. Thanks to AI, code is more plentiful than it ever was before. However, not all code is good code. This leads to the point that, despite how much code there is these days, people capable of understanding and building software are still very necessary.

Although AI can now code for us, the coding wasn’t the hard part in the first place. The hard part was building software, and making good software. It’s easy to throw a bunch of code snippets together that accomplish something, but it is something entirely different to build specialized software that fills certain functions and meets certain criteria. AI cannot replace people, even though it may take away some jobs. At its heart, AI cannot build unique software; teams of capable developers are still needed. The nature of how people code is changing, and it’s becoming more important to be able to harness AI while still overseeing and building functional software.

I chose this article because I think it relates to team building. Like the article said, you need people who can understand code, not just write it. Writing code is easier than ever, but finding people who understand how to build software is harder than ever. When using these tools, it’s important not to rely on them too much, and discerning who can actually code is probably one of the most important skills for employers these days. I think it’s important for me and everyone else to keep in mind that AI is a tool. Tools don’t make up for a lack of knowledge; they are used best by people who know how to use them and can maximize their value. One tool can’t solve every single problem. At the end of the day, knowledge is the most important part of being a software developer.

Citations

https://stackoverflow.blog/2024/06/10/generative-ai-is-not-going-to-build-your-engineering-team-for-you/

By Charity Majors

From the blog CS@Worcester – Code Craft by Kyle Tucker and used with permission of the author. All other rights reserved by the author.

How AI Tools Separate Us From Information

It is no secret that ChatGPT has blown up recently. It is used not just by CS people but by everyone from all walks of life, and it has become a common tool for helping people with a wide range of problems, offering a quick way to get answers without needing to search for them yourself. However, these AI tools are not a catch-all solution for every problem. A blog post from Stack Overflow called “Knowledge-as-a-service: The Future of Community Business Models” discusses how these recent developments have affected how we access information.

In just the last twenty years, the way we search for knowledge has changed, going from books to search engines to cloud technology that allows for a far wider reach. More recently we have seen the rise of AI tools that help guide us to the answers we seek. These AI tools, however, create a separation between knowledge and the people who make it: AI does the searching and synthesizing for us. Although convenient, this raises the question of whether that is the best way for people to learn.

One common concern is that ChatGPT offers answers but often does not provide context as to why a solution works, and what works in one dev environment might not work in another. AI is also reliant on humans for new knowledge to consume; if humans are not creating new knowledge, AI cannot create new information. The credibility of these tools often comes under scrutiny as well, with many developers mentioning how much variance there is in the answers. Although these are certainly drawbacks, developers are learning that community-created content is needed more than ever.

I chose this topic because I believe most students use ChatGPT or some other tool to help them. I myself use it often for pretty much every single class I take, but I definitely rely on it the most for CS, asking how something works or what the best course of action is. I think it is a common concern for many employers because many applicants don’t know how to actually code; many people just copy and paste without learning. I am guilty of this myself, but I have been working on actually understanding every bit of code, and on learning where and when to apply the code snippets I use. I believe it is still very important to learn from sources outside of ChatGPT, like classes or other websites composed of trustworthy information. It’s good to learn how to do things yourself without relying on outside sources.

Citations

https://stackoverflow.blog/2024/09/30/knowledge-as-a-service-the-future-of-community-business-models/

By Ryan Polk and Ellen Bradenberger

From the blog CS@Worcester – Code Craft by Kyle Tucker and used with permission of the author. All other rights reserved by the author.

Blog Post Week 12

This week, as we’ve been learning a lot more about Git and its different features, I decided to find an article that talks about commands we may not have used and what they do. The article I found, “Modern Git Commands and Features You Should Be Using” by Martin Heinz, explains some newer(ish) commands in Git that people still may not know about or just hardly ever use.

He opens with the switch and restore commands, but these are commands we’ve already learned about and used, so I’m going to skip over them.

The first one he mentions that I had not heard of is “sparse-checkout”. If you have a large repo with many different directories, certain commands, such as the normal “checkout” command or the “status” command, can run extremely slowly. With sparse-checkout, you can configure Git to only check out files in specific directories; you then use sparse-checkout set to download or check out just those directories. As you can see, this would be extremely useful in scenarios where you have a massive repo with a large number of directories. Being able to select only the directories you want, rather than having to deal with all of them in more generalized Git commands, can be a huge time saver, which is certainly something many programmers value highly.
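Here is roughly what that looks like on the command line; the repository URL and directory names are placeholders:

```bash
# Clone without checking out every file, then pick only the directories you need
git clone --filter=blob:none --sparse https://github.com/example/big-repo.git
cd big-repo
git sparse-checkout set src/frontend docs

# Only src/frontend and docs now appear in the working tree,
# so commands like status and checkout stay fast.
```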

Another command he mentions, which I find extremely cool and probably one of the most useful commands I’ve seen, is “bisect”. Essentially, you run a “git bisect start” command and give it a commit that does not work as well as the last known working commit. Bisect checks out the halfway point between those two commits, and you mark it “good” or “bad” depending on whether the commit it selects works or not. From there, it keeps halving the range until it finds the exact commit where the errors that stopped the code from working were introduced. This seems to be an extremely useful and honestly just cool command, as it makes the process of finding the issues within a given program a million times easier. It is a command I will certainly be using in the future, probably a lot, and I’m very glad someone was keen enough to actually make it work.
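A quick sketch of what a bisect session looks like, with placeholder commit references:

```bash
git bisect start
git bisect bad                 # the commit you are on is broken
git bisect good v1.2.0         # the last commit/tag known to work

# Git checks out a commit halfway between the two; test it, then mark it:
git bisect good                # or: git bisect bad

# Repeat until Git reports the first bad commit, then clean up:
git bisect reset
```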

Overall, the two commands I spoke about seem extremely useful, especially bisect, and I will certainly hold onto them for future reference in Git. Heinz also mentions the “worktree” command, but while it also seems quite useful, I found the other two to be much cooler as well as easier to understand. The article also opened my eyes to the fact that there are many other Git commands and features that could be utilized, and I’m definitely going to look into the rest of them, as I am sure I will find a few more very useful commands.

Source: https://martinheinz.dev/blog/109?utm_source=tldrwebdev

From the blog CS@Worcester – RBradleyBlog by Ryan Bradley and used with permission of the author. All other rights reserved by the author.