Category Archives: CS-348

Version Control in Cooperative Work vs. Student Life

Organizations in the modern world invest thousands of dollars in Agile development. Agile offers many advantages, but a business only succeeds when it pairs the methodology with solid supporting practices, and version control is one of them. For computer science students, becoming proficient in Agile not only increases your hands-on expertise but also helps you work effectively within your team environment.

Version control and Agile methodology give you more control over things that change frequently. Together they support quick adaptation to changing technology, so a team manager with the right mindset can steadily improve the team's performance. Everyday circumstances also shape our work as a team: Agile is just one technique, and a team's performance depends on more than a single mindset. Version control helps the team cope with the inevitable glitches and bugs in its environment, and it reinforces good habits through team-led practices. Quick adaptation also plays a crucial role in a student's academic career. How fast you pick things up and apply them in the right place, with good understanding, pays off in your career goals. Today's world is not steady; everyday life changes, and new things come and go, so adapting to change plays a key role in every sector. Students want to know how Agile can shape their thinking in different scenarios. Version control lets students recreate real-world scenarios in which multiple team members work on various parts of a project at the same time.
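
As a concrete sketch of that kind of parallel work, here is a minimal feature-branch workflow in Git; the repository URL and branch name are hypothetical, invented purely for illustration:

    # each student clones the shared repository (URL is hypothetical)
    git clone https://github.com/example/team-project.git
    cd team-project

    # each member works on their own piece in a separate branch
    git checkout -b feature/login-page
    git add .
    git commit -m "Add login page skeleton"

    # publish the branch so teammates can see and review it
    git push -u origin feature/login-page

    # later, pull in work that teammates have merged
    git checkout main
    git pull origin main

Each teammate repeats the same cycle on their own branch, which is exactly the simultaneous, multi-part collaboration described above.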

Key Benefits for Students

Enhanced Collaboration – Version control enables students to work on multiple group activities at the same time, check each other's work, and give meaningful feedback on problems, so students can learn several skills at once and apply them in the job market.

Timing Ability – Tracking a project's evolution over time, with each step completed properly, builds in a tracking system and accountability. This not only helps with understanding development timelines but also cultivates a sense of accountability for program improvements.

Developing the Ability to Take Risks – Trying new ideas in your plans without fear creates a more advanced process. Because version control makes it easy to revert changes, it encourages experimentation, helps you recover from errors, and strengthens your risk-taking skills.

In conclusion, version control in Agile is powerful in both the cooperative and student worlds. These benefits help a lot in your future career, so applying them with proper thinking improves how you deliver work and boosts your career. Students can carry these advanced tools into the real world, so learning and understanding the whole process makes you more capable.

November 29, 2024

From the blog CS@Worcester – Pre-Learner —> A Blog Introduction by Aksh Patel and used with permission of the author. All other rights reserved by the author.

GitHub Utilization

GitHub has probably been the most useful tool I've ever used when collaborating with other team members or co-workers on projects. Even though I have used it for quite some time now, there are many resources out there on the basics that still help me figure out the tool. One of many that I found was actually on the GitHub blog itself, under the developer skills section, which has guides covering a good number of the basics. Here is a link to just one of the sections, on pull requests: https://github.blog/developer-skills/github/beginners-guide-to-github-creating-a-pull-request/. For anyone new to GitHub these guides seem incredibly useful; they include videos, pictures, and recaps for everything discussed.

Just recently we went through GitHub in our CS-348 course, and I ended up referring back to this source for a refresher and to understand the content better. These guides go through a step-by-step process explaining how each function, such as pull requests and merging, works. Each subsection also includes pictures and videos to help the reader understand what they should be seeing when following along. One of the most recent posts in the beginners section also covers setting up and securing your profile, as seen here: https://github.blog/developer-skills/github/beginners-guide-to-github-setting-up-and-securing-your-profile/. This is the most basic of the basics, something pretty much everyone who uses the internet should know about, but you would be surprised how many people do not take it seriously. Thankfully, in this post the author describes the use of 2FA and how to set it up; 2FA is one of the easiest ways to secure your account far better than a password alone. At the end of each of these lessons/posts, the author includes a "Your next steps" section that gives the reader a repository to practice with if the lesson called for one, or points you toward the next direction you should take.
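
For a sense of what the pull-request guide walks through, here is a minimal sketch of the typical flow from the command line. The branch name and messages are hypothetical, and the final step uses the optional GitHub CLI; a pull request can just as easily be opened from the website:

    # work on a branch rather than on main
    git checkout -b fix-typo-in-readme
    git add README.md
    git commit -m "Fix typo in README"

    # push the branch to your copy of the repo on GitHub
    git push -u origin fix-typo-in-readme

    # open the pull request (GitHub CLI; the web UI works too)
    gh pr create --title "Fix typo in README" --body "Small documentation fix"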

Overall, this blog is probably one of the most effective ways for someone who is just learning GitHub to get used to the basics. I only found it recently, but I much prefer it over a YouTube video, since it guides you through each step with a good amount of detail and it is very easy to go back any time I need to and re-read specific steps or details. Since not all of the basics have been published yet, I will most likely keep my eye on this blog, because reinforcing the basics when you are not sure about something is always helpful.

From the blog CS@Worcester – CS Blog by Mike and used with permission of the author. All other rights reserved by the author.

Takes on how to become an effective team

URL: https://www.youtube.com/watch?v=7zDX8VqvBa0

I came across another interesting podcast episode from Beyond Coding. This time, the episode I watched focused on Effective Product Teams, featuring Anne Kooijman, currently a Product Owner at Coolblue. The conversation between her and the host covered various topics related to team management and ways to build an effective product team.

The reason I chose this resource is that it provides real-world solutions to straightforward questions. The host, Patrick Akil, mentioned that he recently took on an assignment as a Project Manager. He asked many interesting questions, some of which I had myself.

A couple of specific points caught my attention, and I’d like to share them with you. The first was Anne’s perspective on what is required for a team to deliver quality work. She said, “Give them the necessary tools and the theoretical background.” I found this fascinating because you shouldn’t give developers half-baked solutions or dictate how to solve a problem. Doing so might make it harder for them to translate someone else’s idea into code. Instead, provide them with the necessary knowledge and tools to figure out solutions on their own and let them do it.

Another topic Anne discussed was how companies sometimes deviate from the core principles of Scrum and the potential outcomes of those deviations. She pointed out that there’s no issue with straying from what the “constitution” of Scrum dictates if it leads to improvements. This is intriguing because Scrum is meant to provide a framework, not a rulebook. Different teams consist of different people who may respond differently to certain changes. Personally, I imagine that I wouldn’t react well to constantly changing sprint durations.

They also discussed goal-driven teams and how having goals is essential for team effectiveness. This resonates with me, as it aligns with a practice I adopted this semester. This isn't meant to criticize how others manage their responsibilities but to connect the podcast's ideas with my own experience. For the first time this semester, I decided to only consume entertainment during my free hours once all my tasks were completed. It sounds simple and cliché, but it works, just like Scrum. Teams need a singular goal, and the focus should remain on that goal.

This brings us to the final topic: timelines, and how even flexible and inconsistent timelines can be better than having none. This concept challenged something I’ve always believed—that if you’re going to do something, do it right and to the best of your abilities, or don’t do it at all. However, I realized that some flexibility in timelines is necessary to allow for adaptation and growth.

From the blog CS@Worcester – CS Today by Guilherme Salazar Almeida Nazareth and used with permission of the author. All other rights reserved by the author.

Software testing

Software testing is verifying that a software product performs the way it is designed to. The benefit of software testing is preventing errors or bugs from reaching a program's users. It is best to test at the start of a build, during the build, and after the deployment or release of a program. Through software testing, developers and companies can ensure there are no defects or issues before releasing or promoting a product; if there are bugs or defects in a company's products, the company will lose customers and money. Some of the different types of software testing include acceptance testing, unit testing, and security testing. Acceptance testing verifies that a program meets its requirements and runs without errors. Unit testing tests individual parts of a program instead of the whole program at once. In my Unix systems programming class, I was exposed to unit testing in homework projects, where I had to write parts of a program, receive a grade for each part, and couldn't move forward until receiving full credit on a part. Unit testing can catch errors earlier than acceptance testing; it is better practice to test your code in small pieces than to only run the whole program. Security testing ensures that software is safe from attackers who could deny you access to your software or make it work incorrectly.
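
To make the unit-testing idea concrete, here is a minimal sketch in Java using JUnit 5. The Calculator class and its add method are hypothetical, invented just for this example:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // a tiny piece of the program under test (hypothetical example)
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    // a unit test exercises this one part in isolation,
    // rather than running the whole program
    class CalculatorTest {
        @Test
        void addReturnsSumOfTwoNumbers() {
            Calculator calc = new Calculator();
            assertEquals(5, calc.add(2, 3));
        }
    }

A grading setup like the one described above can run a suite of such tests against each part of a program and award credit per passing test.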

Software testing first developed after the end of World War II. Computer scientist Tom Kilburn wrote the first piece of software, which performed mathematical calculations. Debugging was the main testing strategy at the time, and by the 1980s there were other strategies for testing software applications beyond fixing bugs and errors. The process of software testing includes choosing a testing method, developing test cases or setting up requirements for the test, writing test scripts, analyzing test results, and writing a report. For large software programs, there are tools that automate these tasks and run tests continuously, with instant feedback on what works and what doesn't. Through reporting and analytics, teams can present their results in a dashboard, letting everyone see the overall state of a project, and the reported results show how testing the product led to the final outcome.

Blog url: https://www.ibm.com/topics/software-testing

From the blog CS@Worcester – jonathan's computer journey by Jonathan Mujjumbi and used with permission of the author. All other rights reserved by the author.

AI Is Not A Software Engineer

In this blog, the author discusses how much times have changed for new CS graduates, reminiscing about how little they knew and how easily they got a job, then noting how much more prerequisite knowledge is needed now to even sniff a job. The topic of the article is that it is now easier than ever to get code that works. Thanks to AI, code is more plentiful than it ever was before; however, not all code is good code. This leads to the author's point that, despite how much code there is these days, people capable of understanding it and of building software are still very necessary.

Although AI can now code for us, the coding wasn't the hard part in the first place. The hard part was building software, and building good software. It's easy to throw a bunch of code snippets together that accomplish something, but it is something entirely different to build specialized software that fills certain functions and meets certain criteria. AI cannot replace people, even though it may take away some jobs. At its heart, AI cannot build unique software; teams of capable developers are still needed. The nature of how people code is changing: it's becoming more important to be able to harness AI while still overseeing and building functional software.

I chose this article because I think it relates to team building. Like the article said, you need people who can understand code, not just write it. Writing code is easier than ever, but finding people who understand how to build software is harder than ever. When using these tools it's important not to rely on them too much, and discerning who can actually code is probably one of the most important skills for employers these days. I think it's important for me and everyone to keep in mind that AI is a tool. Tools don't make up for a lack of knowledge; they are used best by people who know how to maximize their use, and one tool can't solve every single problem. At the end of the day, knowledge is the most important part of being a software developer.

Citations

https://stackoverflow.blog/2024/06/10/generative-ai-is-not-going-to-build-your-engineering-team-for-you/

By Charity Majors

From the blog CS@Worcester – Code Craft by Kyle Tucker and used with permission of the author. All other rights reserved by the author.

Blog Post Week 12

This week, as we’ve been learning a lot more about Git and different features of it, I decided to find an article that talks about different commands that we may have not used and what they do. The article I found titled “Modern Git Commands and Features You Should Be Using” by Martin Heinz, explains some newer(ish) commands in Git that people still may not know about or just hardly ever use.

He opens with the switch and restore commands, but we've already learned about and used these, so I'm going to skip over them.

The first one he mentions that I had not heard of is "sparse-checkout". If you have a large repo with many individual directories, certain commands, such as the normal "checkout" command or the "status" command, can run extremely slowly. With sparse-checkout, you can configure Git to only check out files in specific directories; you then use "sparse-checkout set" to download or check out those directories. As you can see, this is extremely useful when you have a massive repo with a large number of directories. Being able to select only the directories you want, rather than dragging all of them through more general Git commands, can be a huge time saver, which is certainly something many programmers value highly.
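
Here is a minimal sketch of how that might look on the command line; the repository URL and directory names are hypothetical:

    # clone without populating the working tree (URL is hypothetical)
    git clone --no-checkout https://github.com/example/huge-monorepo.git
    cd huge-monorepo

    # enable sparse-checkout in cone mode, then pick only what you need
    git sparse-checkout init --cone
    git sparse-checkout set services/api docs

    # the working tree now contains just those directories,
    # so commands like status and checkout stay fast
    git checkout main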

Another command he mentions, which I find to be extremely cool and probably one of the most useful commands I've seen, is "bisect". Essentially, you run a "git bisect start" command, marking a commit that does not work as well as the last known working commit. Bisect checks out the halfway point between these two commits, and you say "good" or "bad" depending on whether the commit it selects works. From there, it keeps halving the range until it finds the exact commit where the errors that broke the code started. This seems to be an extremely useful and honestly just cool command, as it makes the process of finding issues within a given program a million times easier. It is a command I will certainly be using in the future, probably a lot, and I'm very glad someone was keen enough to make it work.
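
A minimal sketch of a bisect session, with a hypothetical known-good commit hash:

    # start a session by marking a broken and a working commit
    git bisect start
    git bisect bad HEAD          # the current commit is broken
    git bisect good a1b2c3d      # hypothetical last-known-good commit

    # Git checks out the midpoint; test it, then report the result
    git bisect good              # or: git bisect bad
    # ...repeat until Git names the first bad commit...

    # return the repo to where you started
    git bisect reset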

Overall, the two commands I spoke about seem extremely useful, especially bisect, and I will certainly hold onto them for future reference in Git. Heinz also mentions the "worktree" command, but while it also seems quite useful, I found the other two to be much cooler and easier to understand. The article also opened my eyes to the fact that there are many other Git commands and features that could be utilized, and I'm definitely going to look into the rest of them, as I am sure I will find a few more very useful ones.

Source: https://martinheinz.dev/blog/109?utm_source=tldrwebdev

From the blog CS@Worcester – RBradleyBlog by Ryan Bradley and used with permission of the author. All other rights reserved by the author.

Git in a Visual Explanation

As an expressly visual and hands-on learner, I try to find resources that have decent practical visuals and explanations. In class we already do this, and I've already had the chance to practice with Git and remote repository sites like GitHub, but I wanted a more succinct summary of Git and how to use it. This video was perfect for that: it runs through what Git is, how it's used and what it's used for, and then goes over some complicated problems that can eventually show up when using Git. I mostly chose this specific video because, as I mentioned, I was looking for something more visual to sink my teeth into and extract information from, since using Git is still somewhat complicated to me.

Watching this video was actually pleasant, as it had lots of appealing, easy-to-understand visuals with plenty of examples of everything discussed or mentioned. I very much enjoyed the experience, even though it was still a mostly basic, foundational resource designed to give a nice outline of what Git can do, which I can really appreciate as someone still relatively new to this. Seeing that Git is more flexible than I initially thought was also nice to learn; I hadn't realized it works with repository hosts other than GitHub, since that's the only one I've had to use.

But seeing the different applications of Git and the different issues that can arise with it, I imagine it will be a headache I'll have to contend with fairly often, especially when it comes to merge problems. Hopefully this will not be the case and every project I work on will go perfectly. Realistically, though, I can foresee that Git will be something I interface with on a daily basis, pulling, committing, fetching, merging, and pushing all the time, especially when collaboration is involved. So it would only make sense for me to really practice and understand the depths and complexities of what Git can do, and for the near future I'll probably be looking for something to take me into those depths.
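
That day-to-day cycle might look something like this minimal sketch, with a hypothetical branch name and commit message:

    # start the day by syncing with the remote
    git fetch origin
    git pull origin main

    # do some work on a branch, then record and share it
    git checkout -b feature/validation
    git add src/
    git commit -m "Refactor input validation"
    git push -u origin feature/validation

    # fold finished work back into main when it's ready
    git checkout main
    git merge feature/validation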

Here’s the video:

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.

Learning about Git

CS-348, CS@Worcester

In class we are going over how to use Git without causing conflicts with upstream. First we learned how to create a copy of the upstream repository in the cloud by forking it. Then we cloned the fork onto our local machines so we could start working with the code. After that, we learned how to make branches, which allowed us to start making changes.
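
That fork, clone, branch sequence looks roughly like this from the command line; the account and repository names are hypothetical, and the fork itself is created through the GitHub web interface:

    # after forking example/project to your own account:
    git clone https://github.com/your-username/project.git
    cd project

    # keep a link back to the original upstream repository
    git remote add upstream https://github.com/example/project.git

    # make changes on a branch instead of committing to main
    git checkout -b update-readme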

In my off time I started to learn more about why branches are so important for group projects. For example, if someone makes a change on the main branch and sends it to the upstream, there might be no conflict at first; however, once someone else commits changes to the upstream, conflicts happen. I was reading the class textbook and some online articles, and they stated that it is better for the group if people push commits to the fork first. In my opinion, this practice helps streamline the process. Let me explain further: what if your coworkers want to see what changes you made before they get committed to the upstream? They can look at the fork's copy of the code.

This lets the team determine whether the changes you made are actually good, or whether they need to be changed again. That saves the team time, helps them complete the project or product, and ensures it will be ready for the public.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

The Most Useful Tool In a Developer's Toolkit: Development Environments

Intro

Choosing a development environment is a decision that can be made on feeling, or by taking the time to think through each choice and analyze which best fits your needs. Either way, the environment a developer uses is supremely important: it's where all the code in any project is written, making it the tool every developer spends the most time using. It's a personal choice, and this blog by Matthew LeRay goes over everything you need to know about development environments.

Summary of Source

This blog covers all you need to know about development environments, including their purpose, their importance, and what IDEs are, with some examples. The main sections are:

  1. Definition and Purpose: A structured setup of tools and processes that enhances software creation by automating tasks, supporting debugging, and ensuring consistency with production.
  2. Types of Development Environments: Explains the purpose and distinct roles of development, testing, staging, and production environments.
  3. Integrated Development Environment (IDE): The evolution of IDEs and what they offer, such as speed/efficiency and customizable features.
  4. Setting up a Development Environment: Goes through the steps of configuring your environment, from choosing the IDE to configuring it and using tools like build automation (a minimal configuration sketch follows this list).
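
To give a flavor of that configuration step, here is a minimal, illustrative VS Code workspace settings file for a Java project. These are real settings keys (the last one comes from the Java extension), but which ones you actually need depends entirely on your setup:

    // .vscode/settings.json: per-project VS Code configuration
    {
        // format code automatically on every save
        "editor.formatOnSave": true,

        // use 4 spaces per indentation level
        "editor.tabSize": 4,

        // save files automatically after a short delay
        "files.autoSave": "afterDelay",

        // Java extension: keep the build configuration in sync
        "java.configuration.updateBuildConfiguration": "automatic"
    }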

The Reason I Chose This Source

For any new programmer, looking for an IDE can be confusing because of the lack of knowledge about what one even is, combined with the daunting task of choosing one to learn and use. I chose this blog because it bridges that gap, taking a new programmer from having no idea what a development environment is to choosing and setting up an IDE. It's a very reader-friendly resource that even some experienced developers could learn from.

A Reflection of IDE’s

I personally use Visual Studio Code for the majority of what I program, but I have used IntelliJ as well. I chose my IDE based more on appearance and general word of mouth, which is why I gravitated towards VS Code, as it's arguably the most popular and user-friendly IDE. I do like IntelliJ, as it feels good to use, and although a drawback for others might be that it is primarily a Java IDE, I only use Java, so that isn't a problem for me. VS Code also has a great variety of personalization options because of its extensions tab. Extensions are great not only for appearance but also for functional improvements; I think they are a big reason VS Code is so popular, along with its ability to support many languages rather than being focused on one the way IntelliJ is. An IDE encompasses a ton of different tools a developer uses, so picking one that fits your needs is important. Becoming comfortable and familiar with the IDE you use matters more than switching to the "best" IDE based on some abstract metrics that others believe are the most important things to have in an IDE.

My Future IDE Plans

I think I will continue to use VS Code for now, but I can see myself trying out more technical, less user-friendly editors like Vim in the future. There really isn't a need to switch if what I have is working, and honestly I don't think it should be switched often. I will also probably use IntelliJ more, as I do think it's the best IDE for Java, which is the language I use most often.

Citation

Understanding Modern Development Environments: A Complete Guide by Matthew LeRay

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

Transparency and Autonomy: Better Together

In continuing my research on team management strategies, I delved deeper and more specifically into the software development side of team management. In doing so I discovered the scrum.org blog, which has many articles about understanding Scrum. Two of the most important principles in Scrum are transparency and autonomy, and I wanted to understand better how to achieve them in a team setting. The article I found, Transparency and Autonomy: Two Sides of the Same Coin by Sanjay Saini, explains how the two feed into each other.

The article begins by explaining Agile's fast-paced style of producing working code, and how autonomy and independence can be essential for fast results. It explains that in Agile, teams seek this autonomy so they can make decisions and deliver value without excessive oversight, and that transparency is essential for fostering that autonomy: by making work visible, tracking progress, and openly addressing challenges, a team earns more autonomy and trust. The article highlights five key points that help build this trust and efficiency:

  1. Visibility Creates Trust: Sharing progress and challenges during Scrum events like Daily Scrums and Sprint Reviews shows that the team is accountable and can be trusted to be autonomous.
  2. Transparency in Challenges Leads to Solutions: Being open about struggles encourages collaboration and problem-solving, proving the team can manage setbacks and seek out help when they need it independently.
  3. Data-Driven Transparency Builds Confidence: Using metrics like velocity and burndown charts shows consistent results, building leadership confidence in the team’s capability.
  4. Transparency Enables Better Decision-Making: When a team has full visibility into goals, priorities, and feedback, it can make informed decisions independently. Information must be freely shared for autonomy, and good decisions, to follow.
  5. Open Communication Builds Long-Term Autonomy: Regular, open communication about decision-making processes helps cultivate trust and secure more autonomy over time, as the team can continue to build trust through constant demonstration of these values.

The article concludes by saying that transparency creates a culture of trust and accountability, enabling Scrum teams to earn the autonomy needed to make decisions and drive value.

This article helped me understand how important these values are to a Scrum team's operation. This is a key step in understanding Scrum's role in how a team runs, since transparency can make the work environment smoother for everyone by enabling autonomy. Next in my blog, I will look into articles on development environments such as Docker or GitPod and their importance for maintaining a productive team.

Source:

https://www.scrum.org/resources/blog/transparency-and-autonomy-two-sides-same-coin

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.