Monthly Archives: September 2023

Differentiating between trunk-based and feature-based development

Image depicting trunk-based vs feature-based development. Source: https://ruby0x1.github.io/machinery_blog_archive/post/step-by-step-programming-incrementally/index.html

Jacob Schmitt’s “Trunk-based vs feature-based development”

Link to article at the end

For this week, I am focusing on Jacob Schmitt’s blog post “Trunk-based vs feature-based development”. It is a short yet effective article explaining the differences between the two types of software development processes. Schmitt discusses the use cases of each process, where and when they might be utilized, who is most likely to use them, and their advantages and disadvantages. I chose this blog because, as I learn more about forks, feature branches, commits, and pull requests, it is important to also understand the overall development processes of companies and why they might use them. The development processes mentioned in the blog intertwine perfectly with git commands and usage.

Trunk-Based Development

According to Schmitt, trunk-based development allows developers to push changes straight to the main branch. However, if a new feature will take longer than typical changes, developers can “check out a branch from main, move the changes into it, then merge it back in when the development is complete” (Schmitt). A code review is then held by other developers to ensure that the new changes do not break the main production branch. Although this development approach is quite popular, it is more common among experienced developers than newcomers.
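The flow Schmitt describes can be sketched with ordinary git commands. This is a minimal, self-contained demo in a throwaway directory; the branch and file names are invented for illustration:

```shell
# Trunk-based flow: small changes go straight to main, while a larger
# change gets a short-lived branch that is merged back promptly.
cd "$(mktemp -d)"
git init -q
git checkout -q -b main
git config user.name "Demo" && git config user.email "demo@example.com"

# Small change: commit directly to the trunk (main).
echo "version 1" > app.txt
git add app.txt && git commit -qm "Small change, straight to main"

# Larger change: check out a branch from main, work, merge back in.
git checkout -q -b bigger-feature
echo "version 2" >> app.txt
git commit -qam "Work on the bigger feature"
git checkout -q main
git merge -q bigger-feature      # merged back when development is complete
git branch -q -d bigger-feature  # the branch is short-lived by design
```

In practice the merge into main would happen only after the code review Schmitt mentions, often through a short-lived pull request.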

Feature-Based Development

As for feature-based development, many developers can work on many different changes or branches at the same time. Each developer creates their own feature branch and then eventually requests to merge it into the main branch. One of the most important differences between feature-based and trunk-based development is that in feature-based development, developers never push straight to the main branch. This approach is much friendlier to beginner developers like myself, as there is no worry of breaking production with new code, since changes live on a different branch.

Advantages & Disadvantages

An advantage that trunk-based development has over feature-based is that code changes are more likely to be merged faster. In feature-based development, pull requests can pile up over time, which leads to a longer gap between requesting and merging. Despite this, feature-based development is much safer for larger projects and groups, where there would otherwise be too many concurrent changes on main to keep track of at once. As of right now, I prefer feature-based development because I would not be very comfortable working directly with the main branch as a new developer.

Thoughts

Although it might be simple, I found this article to be quite helpful in preparing me for what a future job or internship workflow might look like. When one has little to no experience with development in a professional environment, articles such as this one are very beneficial to learning more about team-based development. It gives a bit more background as to why we do this type of development, rather than just stating that we do it. I hope to use this knowledge in future projects with internships or jobs.

Link to article: Trunk-based vs feature-based development | Jacob Schmitt

From the blog CS@Worcester – Josh's Coding Journey by joshuafife and used with permission of the author. All other rights reserved by the author.

YAGNI and the need to say “No” or “Not Yet”

The article I chose for this week’s blog post is “YAGNI (You Aren’t Gonna Need It) Principle Helps Devs Stay Efficient” by Tatum Hunter. The article discusses what YAGNI is, why it is difficult but necessary to implement in development teams, how it can be hard to know when to apply it, and how it extends to business decisions such as making contracts. In this blog post I will focus on how being able to say “no” or “not yet” to a customer or employer is vital to a YAGNI system of software development. I chose this topic because we learned in class how useful YAGNI is to software development: it helps save time and money and helps morale, and communicating with customers or employers about what might not be good to implement is important.

YAGNI can be an incredible tool for making sure your team isn’t wasting time and energy on code that will not be used in the final product. This is incredibly useful because, as stated in the article, working on something for months thinking it’s going to be a great success, just for it to be thrown aside, can be devastating for team morale. “‘After months of hard work, it [a new component management system] just went by the wayside,’ she told Built In in 2020. ‘It might have been the right business decision at the time, but the team’s morale was really impacted.’” When YAGNI is implemented properly, it should lead to a faster development cycle, meaning this project could have been completed before it was put aside, saving the company money and the team a lot of heartache.

Learning how to communicate YAGNI to customers and employers is one of the most important parts of implementing it, especially if a requested feature may not be necessary. “‘To do that, we had to say ‘no’ or ‘not yet’ to about 30 features,’ Dingess said. And, true to YAGNI, that was the right call. After the product launched and the team could collect real user feedback, those 30 features ended up being irrelevant anyway. Instead, the team pivoted to build out the product based on that feedback.” A customer may want you to work on many extra features alongside your main task, only to later realize that many of those extra requests were not needed. If you can communicate to the customer that it would be best to implement those features at a later point in development, you save a lot of time and effort for yourself, and money for your customer.

What I found most interesting about the article is how it relates the YAGNI principles to working with customers, making sure development teams don’t waste time and effort on implementations that don’t end up being used, or on whole projects that get scrapped altogether.

Link to the article: https://builtin.com/software-engineering-perspectives/yagni

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.

Week 2

This week we learnt about different git commands and their purposes. We worked on using branches and commits, and on using pull requests to upstream our changes. One of the important things I learnt is how branches work. Basically, you get to work in a separate environment, and once you’re finished, you make a pull request asking to combine your work with another person’s work. If they approve, you’re able to merge your branches. Forking is another important concept. When a developer sees your repository and has an idea to add something to it, this is where forking comes into play. They can fork, or make their own copy of, your repository and add their own new features to it. They then submit a pull request to the owner, and if approved, the changes are added. I think it’s crucial to know that anyone can fork a public repository, but it’s up to the repository owners to accept or reject pull requests.

For the blog I chose, I wanted to research what GitHub is, what it’s used for, and why it is one of the main platforms that developers use. I chose this blog because I wanted to read more about the basics of GitHub and why it was created. I think it’s important to know why it is one of the platforms most used by developers. GitHub is one of the most popular resources for developers to share code and work on projects together. It is used for storing, tracking, and collaborating on projects. It is also a social networking site where developers work openly and pitch their work. The blog talks about the biggest selling point of GitHub, which is its set of project collaboration features, including version control and access control. One of the benefits of GitHub is its cloud-based infrastructure, which makes it more accessible: a user can access their repository from any location on any device, download the repository, and push their changes.

Based on my resource, I do like it because it has given me deeper insight into GitHub and how it works. It resonates with me because the material from week 2 is similar to my blog, and I now better understand what I am doing in class and why I am doing it. I think knowing the different commands used when working in GitHub is a huge part of successfully understanding how to use the platform.

Links.

https://blog.hubspot.com/website/what-is-github-used-for#what-github

From the blog CS@Worcester – Site Title by lynnnsubuga and used with permission of the author. All other rights reserved by the author.

What is Concurrency?

This week I wanted to learn more about concurrency, because when I first heard the term I thought it had to do with money, but I found out it has a different meaning than I thought. So what is concurrency and why is it important? Concurrency is the execution of multiple instruction sequences at the same time. It happens when an operating system has multiple threads running in parallel, and those running threads communicate through shared memory. Sharing those resources is what causes problems like deadlocks, so concurrency involves coordinating the execution of processes and scheduling them to maximize throughput. There are two arrangements that allow concurrent execution: logical resource sharing, such as a shared file, and physical resource sharing, where a hardware resource is shared. In concurrency, there are two types of processes executing in the operating system: independent and cooperating processes. An independent process shares no state with another process, meaning that the end result depends only on the input and will always be the same for the same input. A cooperating process is the opposite of an independent process: its state can be shared with another process, and the end result will not always be the same for the same input. If you were to terminate a cooperating process, it could affect other processes as well. Most systems support at least two operations on processes: process creation and process deletion. For example, in process deletion a parent may end the execution of one of its child processes if the task assigned to the child is no longer needed. In process creation, a parent and child process can execute concurrently and share all common resources. Interleaved and overlapped processes are examples of concurrent processes, and their relative speed of execution cannot be predicted.
Concurrency all depends on the activities of other processes and on the operating system’s scheduling. With concurrency, a system has an easier time running multiple applications, with better response time and better overall performance. The source I used goes into detail about concurrency and explains its pros and cons very well. The reason I picked this topic is that I thought it was interesting how concurrency lets resources that aren’t being used by one application be used by another application instead. It’s also interesting how, without concurrency, everything would take longer to run to completion, because the first application would have to finish before another one could start.
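The shared-memory coordination described above can be seen in a short sketch (my own example, not from the source): two threads increment one shared counter, and a lock makes each increment atomic so the result is predictable.

```python
import threading

# Two threads share one variable (logical resource sharing).
# Without coordination, interleaved read-modify-write steps can lose
# updates; the lock ensures only one thread touches the counter at a time.
counter = 0
lock = threading.Lock()

def worker(increments: int) -> None:
    global counter
    for _ in range(increments):
        with lock:  # acquire/release around the shared update
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for both cooperating threads to finish

print(counter)  # 200000: every increment survived
```

Removing the `with lock:` line turns this into a race condition, a small-scale version of the coordination problems (like deadlock) that operating systems have to manage.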

Source:

https://www.geeksforgeeks.org/concurrency-in-operating-system/

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.

Patience is Key

Over the weekend, I spoke with a retired electrical engineer, Bob. While we were chatting, the topic of software somehow came up, along with the difference between today’s programming and its past. We discussed how much things have changed from the 1960s to the present day. Bob had gone to WPI in the mid-to-late ’60s and was an engineering major. He enjoyed math and naturally gravitated toward the engineering field, but one day he realized how programs could help him compute highly complex math problems. Like everyone, he had to start his journey somewhere, and the language best suited for him at the time was Fortran. Fortran is a very old language, and I honestly didn’t know much about it beyond that it was created in the era of punch cards and operators who compiled your programs for you. Bob would place his punch cards into a mailing box marked with the last three digits of his Social Security number. He said if he was lucky, the code was run the next afternoon; normally, it would take about 2-3 days before getting your results back. Once returned, if there was, say, a period instead of a comma, a message would say “Program Terminated” or something along those lines. This is when the debugging process began, by carefully examining the code above the “terminate” message. Once the issue was found, he would fix it and then start the waiting process all over again. Today, we can run code in seconds and debug and fix code in minutes, then re-run it. I’ve spent 6 hours in the past incrementally fixing and building a project for class, and looking back I have a newfound appreciation for the tools and languages that aid us in programming today. But this made me think: what is Fortran and how was it used? To my surprise, while the language seems to be in limbo, there is still a strong community surrounding it. I found an article on Medium.com detailing a year-long journey to revitalize Fortran and attract new programmers to it.
Over that one year, work was done to implement an improved standard library (stdlib), a lot of focus and progress went into creating a Fortran package manager (fpm), and a website was used to bring the community together and to help retain new learners instead of letting them struggle alone. While the modernization of the language still has some ways to go, the patience and commitment of the contributors to the stdlib, fpm, and website show how patience is key to creating the best possible end product. This reminds me of the saying “Slow is smooth and smooth is fast,” which resonates with software developers, since the moment you rush yourself is when things end up half-baked and many issues arise. I should take my time more; that way I could catch small mistakes that can potentially snowball into more complex issues.

Article Link: https://medium.com/modern-fortran/first-year-of-fortran-lang-d8796bfa0067

From the blog CS@Worcester – Eli's Corner of the Internet by Eli and used with permission of the author. All other rights reserved by the author.

code review, what it is and why it matters

For my first blog post for CS-348 and in general (in terms of actual content), I wanted to look into code review. I already had an inkling as to what it could entail, but I wanted to know what sorts of techniques and tools are used in looking over our peers’ (and our own) code.

For this blog post, I consulted a post on SmartBear to get a better understanding of it all. The post explains why we need code review: to reduce the excess workload and costs caused by unreviewed code being pushed through. It also gives us four common approaches to code review in use today (which, it notes, are much improved over methods used in the past). These approaches are email threads, pair programming, over-the-shoulder code review, and tool-assisted reviews.

An email thread provides advantages in versatility but sacrifices the ease of communicating that you get in person. Pair programming is the practice of working on the same code at the same time, which is great for mentoring and reviewing at the same time as coding, but doesn’t give the same objectivity as other methods. Of course, over-the-shoulder reviews are simply having a colleague look over your code at your desk, which while fruitful, doesn’t provide as much documentation as other methods. Lastly, tool-assisted reviews are also straightforward, utilizing software to assist with code review.

The SmartBear post goes on to say that tracking code reviews and gathering metrics helps improve the process overall and should not be skimped on. Some empirical findings from Cisco’s code review process in 2005 are given as well. According to an analysis of it, code reviews should cover 200 lines of code or fewer, and reviews should take an hour or less for optimal results. Some other points are given as well if you visit the post.

Considering most of my ‘career’ has been independent coding (that is, coding as the sole contributor), this was rather interesting to me. I’ve done code reviews for my peers, helping them with assignments and the like, while I’ve only really utilized tools and software to assist myself. It’s interesting to see how something as simple as looking over someone’s code on their computer is such an important step in the software development process, but it certainly makes sense. I also wonder how much the code review process has changed since the popularization of AI companions such as ChatGPT and GitHub’s Copilot. Perhaps these tools have made code review with our peers less important, but I wonder if it’s more important to have our peers second-guess the AI’s suggestions in case of mistakes. Nonetheless, having a solid grounding in the actual ramifications of code review will prove very useful during my programming career, I am sure.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

My JavaScript Journey: Building a Simple To-Do List

Hey fellow tech enthusiasts! Today, I want to share my journey of diving into the world of JavaScript. As a computer science major in college, I’ve always been fascinated by programming languages, and JavaScript seemed like the next logical step in my coding adventure.

The JavaScript Bug Bites

It all started when I realized the immense power JavaScript holds in the realm of web development. From interactive websites to web applications, JavaScript seemed to be the backbone of modern front-end development. So, armed with my trusty coding setup and a burning curiosity, I embarked on this journey.

The Learning Curve

JavaScript was not my first programming language, but I quickly realized it had its unique quirks and challenges. The asynchronous nature of JavaScript and the various frameworks and libraries can be overwhelming at first. But hey, what’s a journey without a few bumps in the road, right?

I decided to start with the basics. I found some fantastic resources online that provided structured lessons and hands-on coding exercises. These resources made it easier for me to grasp the fundamentals, from variables and data types to loops and conditional statements.

My First Project: The To-Do List

To put my newfound knowledge to the test, I decided to create a simple yet practical project: a to-do list web application. It seemed like a fun way to apply what I’d learned and build something useful.

Here are the key features I implemented in my to-do list:

  1. Adding Tasks: Users can add new tasks to the list with a title and description.
  2. Marking as Complete: Tasks can be marked as complete with a single click.
  3. Deleting Tasks: Completed or unnecessary tasks can be removed from the list.
  4. Local Storage: I used JavaScript’s local storage to store the to-do list data, so it persists even after refreshing the page.
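A stripped-down sketch of that logic might look like the following (the function and key names are my own, and a tiny in-memory stub stands in for `localStorage` so the code also runs outside a browser):

```javascript
// Fall back to an in-memory store when localStorage is unavailable
// (e.g. when running under Node rather than in a browser).
const storage = typeof localStorage !== "undefined" ? localStorage : (() => {
  const data = {};
  return {
    getItem: (k) => (k in data ? data[k] : null),
    setItem: (k, v) => { data[k] = String(v); },
  };
})();

// The whole list is stored as one JSON string under a single key.
function loadTasks() {
  return JSON.parse(storage.getItem("tasks") || "[]");
}

function saveTasks(tasks) {
  storage.setItem("tasks", JSON.stringify(tasks));
}

// 1. Adding tasks: a title and description, initially not complete.
function addTask(title, description) {
  const tasks = loadTasks();
  tasks.push({ title, description, done: false });
  saveTasks(tasks);
}

// 2. Marking a task as complete.
function completeTask(index) {
  const tasks = loadTasks();
  tasks[index].done = true;
  saveTasks(tasks);
}

// 3. Deleting a task from the list.
function deleteTask(index) {
  const tasks = loadTasks();
  tasks.splice(index, 1);
  saveTasks(tasks);
}
```

In the browser, that same `loadTasks`/`saveTasks` pair is what makes the list persist across page refreshes, since `localStorage` survives reloads.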

The Challenges and Triumphs

Building the to-do list wasn’t without its challenges. I encountered a fair share of bugs and quirks along the way. For instance, handling user input validation and ensuring smooth data storage required some debugging and problem-solving. But every bug fixed was a lesson learned.

One of the most satisfying moments was when I saw my to-do list project come to life in the browser. It was incredible to witness how a few lines of code could create a functional web application.

The Future of My JavaScript Journey

My journey with JavaScript is far from over. I’m eager to explore more advanced topics like asynchronous programming, working with APIs, and perhaps even diving into front-end frameworks like React or Vue.js. There’s always something new to learn in the ever-evolving world of web development.

So, if you’re a fellow student or aspiring developer, don’t be afraid to take the plunge into JavaScript. Embrace the challenges, celebrate the victories, and keep coding. Who knows? Your next project might just be the next big thing on the web!

Happy coding, everyone!

From the blog CS-343 – Hieu Tran Blog by Trung Hiếu and used with permission of the author. All other rights reserved by the author.

Learning PlantUML

This week, I found myself grappling with the fundamentals of PlantUML, a versatile language that lets the user quickly create code diagrams from plain text. My encounter with this tool was prompted by a task that required me to construct a diagram for a Java program. To help myself get ready for this project, I decided to delve into some reading on the subject, and that’s where I found this week’s blog.

“The .NET Tools Blog,” more specifically its comprehensive entry on PlantUML diagrams, available at https://blog.jetbrains.com/dotnet/2020/10/06/create-uml-diagrams-using-plantuml/, was a great resource in my learning of PlantUML. This blog post, tailored to beginners like myself, served as an excellent starting point for comprehending the intricacies of PlantUML and its best practices. While it isn’t an exhaustive guide, it offers valuable insights and practical code examples that helped me feel much more comfortable using this tool. This relates directly to the course’s content, as our current homework assignment revolves entirely around understanding PlantUML. I will definitely keep this on hand to refer back to until I get more comfortable with the basic syntax of PlantUML.

I wholeheartedly recommend this resource to anyone embarking on their PlantUML journey. The provided class diagram examples not only facilitate a smooth onboarding process but also serve as a foundation for crafting more intricate and detailed diagrams. The blog post also delves into PlantUML “use cases,” a facet I had yet to explore. These use cases appear to be an effective means of illustrating the interactions between users and the software, potentially serving as a valuable tool for communicating a program’s functionality to clients or customers who may not be well-versed in deciphering traditional blueprint-style diagrams.

As for practical applications of this information, as I was saying before, these diagrams can be helpful for explaining the overall structure and function of code to someone who might not understand a more complex explanation. This is not the only application, however. You could also apply PlantUML to help plan a complex program before you start programming. Even experienced developers would benefit from a tool that quickly makes a diagram letting them see the structure of a program without any actual coding or debugging.
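As a small taste of the syntax the blog post teaches, here is a minimal PlantUML class diagram (the classes and members are invented for illustration):

```plantuml
@startuml
' A hypothetical library program, described in plain text.
class Book {
  - title : String
  - available : boolean
  + checkOut() : void
}
class Library {
  - books : List<Book>
  + addBook(b : Book) : void
  + findBook(title : String) : Book
}
Library "1" o-- "0..*" Book : holds
@enduml
```

Feeding this text to the PlantUML tool (or an IDE plugin) renders the corresponding boxes-and-arrows diagram, which is what makes it so quick for planning a program before writing any Java.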

In conclusion, “The .NET Tools Blog” has proved to be an invaluable resource for beginners seeking to grasp the essentials of PlantUML. As I continue my journey with this language, I anticipate returning to this resource for further guidance on creating readable and informative code diagrams.

From the blog CS@Worcester – Site Title by Abraham Passmore and used with permission of the author. All other rights reserved by the author.

Week of September 18, 2023

This week, I wanted to make a post showcasing some examples of documentation for free, open source software. Comprehensive documentation is essential for any software project, so I want to see what useful documentation looks like. I was inspired to make this post when I found myself in need of a new podcast app for my Android device. The one I had been using was no longer refreshing my subscribed podcasts when I opened the app, and I wasn’t able to load the episode lists of any shows. I needed a new podcast app, but I didn’t immediately want to download the Google Podcasts app that was at the top of the search results on their Play Store. I understand Google collects user telemetry and data from their apps, and I didn’t want Google to connect advertising data to my account from the ads many podcasts read from their sponsors. Ideally, I wanted a free and open source app I could use so I could feel more secure in my data privacy.

From Opensource.com, the definition of open source software is “software with source code that anyone can inspect, modify, and enhance.” Many open source projects are supported by communities of volunteers connected over the Internet. The benefits of open source software include stability over time, meaning that because the project can be maintained indefinitely by anyone, the software may remain compatible with modern systems for a longer time than closed source software. Open source software also promotes security for end users. Since the software’s source code is openly accessible, there is a greater chance that undesirable code is deleted or corrected once discovered.

Large-scale projects that require collaboration are supported by extensive documentation for end users. The app I ended up choosing, AntennaPod, has a simple-to-navigate documentation page that begins with the basic act of subscribing to a podcast and ends with instructions for podcast creators on how to have their podcast listed in the app through existing podcast directories. One interesting section I found was an explanation of centralized versus distributed podcast apps. Centralized apps are always in communication with a central server, and content is delivered from that server to your device. In contrast, distributed apps send requests to the podcast publishers directly and do not contact a central server. This approach allows the developers of the app to devote more resources to maintaining and iterating on the app, instead of maintaining a server. Distributed apps also protect user privacy, as there is no interaction with any central server that could collect user data. The app developers don’t have access to information like which users are subscribed to which shows, either. This decentralized, distributed approach also helps protect against censorship, because there are multiple sources to download shows from instead of one central server owned by one entity. Likewise, the app will continue to function even if development ceases, whereas a centralized app will stop functioning if the central server shuts down.

Sources:

From the blog CS@Worcester – Michael's Programming Blog by mikesprogrammingblog and used with permission of the author. All other rights reserved by the author.

Learning Git

Mastering Git: Navigating the World of Version Control

As someone deeply immersed in the world of coding and development, I often find myself seeking the latest insights and innovations to stay ahead in this dynamic field. Recently, I had the opportunity to explore the GitHub Blog (https://github.blog/), a treasure trove of information that offers a glimpse into the exciting developments in code collaboration. GitHub, a cornerstone of the coding community, continuously adapts to cater to the ever-changing needs of developers worldwide.

The GitHub Blog, in this context, serves as my primary source of knowledge, delivering timely updates, tutorials, and in-depth insights into the platform’s newest tools and features. These innovations significantly boost productivity and code quality, making my coding journey smoother and more efficient. What truly stands out, though, is how GitHub is fostering a culture of open-source contributions and building inclusive communities.

I encourage you to delve into this blog and embark on your own journey of discovery in the ever-evolving realm of code collaboration. I chose the GitHub Blog as my go-to resource for several compelling reasons. Firstly, it consistently offers a pulse on the rapidly evolving landscape of code collaboration. This blog not only keeps me informed but also inspires me to be part of a larger, positive change within the coding community.

Having explored the GitHub Blog, I’d like to share my thoughts on the content and how it has impacted my perspective as a student in the field of computer science. This resource has been an invaluable source of knowledge for me, primarily due to its dynamic and up-to-date nature.

As a student, staying informed about the latest developments in technology is crucial to my education and future career. The GitHub Blog has proven to be a goldmine of information in this regard. I’ve learned about cutting-edge tools like GitHub Actions and GitHub Copilot, which have the potential to revolutionize the way code is written and managed. This exposure has not only broadened my understanding of these tools but also inspired me to explore them further in my coursework and personal projects.

Beyond the technical aspects, the blog’s emphasis on open-source contributions and inclusivity within the coding community has left a lasting impact on me. It has reinforced the importance of collaboration, diversity, and community engagement in the tech world. Studying for CS, I now see the significance of not just acquiring technical skills but also contributing to the greater good by participating in open-source projects and fostering an inclusive environment for all.

In my future practice as a computer scientist, I plan to apply what I’ve learned from the GitHub Blog in multiple ways. Firstly, I’ll integrate the knowledge of GitHub Actions and GitHub Copilot into my coding projects to boost efficiency and code quality. Additionally, I’m motivated to actively engage in open-source initiatives, leveraging the collaborative spirit and sense of community that the blog has instilled in me. I believe that by embracing these principles and tools, I can become a more effective and socially responsible developer.

From the blog CS@Worcester – Site Title by asejdi and used with permission of the author. All other rights reserved by the author.