Category Archives: CS-348

An Interview with QA Engineer Augustine Uzokwe about Software Testing

Hi class,

For this blog post I decided to choose the topic of software testing. Software testing is one of the topics we went over in this course, and I hope you can take away some good insights about how software testing is implemented out in the real world.

Before going into the summary, let me provide you with the resource I am getting this information from: https://www.youtube.com/watch?v=PqCVDGhvEFs&t=1988s This is the link to the podcast I listened to, called “Beyond Coding.” Here the host Patrick Akil, a software engineer looking to gain and share interest in coding, sits down with quality assurance (QA) engineer Augustine Uzokwe to discuss what a QA engineer does in software engineering. (I really do enjoy listening to podcasts, so that’s why I have chosen a podcast format for my source.)

Patrick begins the podcast by asking Augustine what QA, quality assurance, is. Augustine replies that it varies depending on the work and/or field the QA engineer is in, but for this interview he will answer from the perspective of software engineering.

Augustine then goes on to talk about QA in software engineering. He states that yes, there is a lot of unit testing, but it is more about adapting, quick thinking, and team collaboration. He takes the approach of using broad practices, such as being pragmatic and staying flexible. Furthermore, he states that if there are two teams, Team A and Team B, what might work for Team A might not work for Team B.

He stresses this is totally okay and acknowledges that this happens, but as a QA engineer you must find a solution fairly quickly. Not only quickly, but also a smart one. When a unit test doesn’t produce the correct output, rather than going back and trying to fix everything before running an end-to-end test, he states there is a smarter approach that is better and, in the long run, quicker.

This approach is team collaboration. In the instance above, he would call in all the engineers participating in the code and simply talk through and review the code with them. From there, each engineer would have an assigned part to work on, and then they would disperse to work on their assignments. During this time, Augustine stresses, the QA engineer needs to take the lead and create an environment of openness among the engineers. If one engineer has a question, they shouldn’t go searching for the answer online for three hours, but rather ask a peer.

Once a developer has their assignment done, it should be unit tested separately (if applicable). This is very beneficial because the bugs can be found in that moment and fixed, rather than surfacing in an end-to-end test. During this time, Augustine would be spreading his knowledge of QA to the engineer, so that the engineer slowly learns how to conduct testing and what to look for when testing software on their own.
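To make this concrete, here is a minimal sketch of what unit testing one engineer’s piece separately might look like, using Python’s built-in unittest module. The shipping_cost function and its pricing rules are hypothetical stand-ins for one developer’s finished assignment, not anything from the podcast:

```python
import unittest

# Hypothetical function standing in for one engineer's finished assignment.
def shipping_cost(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    # Flat rate up to 1 kg, then a per-kg charge beyond that.
    return 5.00 if weight_kg <= 1 else 5.00 + (weight_kg - 1) * 2.50

class TestShippingCost(unittest.TestCase):
    def test_flat_rate_at_or_below_one_kg(self):
        self.assertEqual(shipping_cost(0.5), 5.00)
        self.assertEqual(shipping_cost(1), 5.00)

    def test_per_kg_charge_above_one_kg(self):
        self.assertAlmostEqual(shipping_cost(3), 10.00)

    def test_rejects_non_positive_weight(self):
        with self.assertRaises(ValueError):
            shipping_cost(0)

if __name__ == "__main__":
    unittest.main()
```

Catching a wrong rate at this level takes seconds; catching it in an end-to-end test means tracing through every other component first, which is exactly the waste Augustine wants to remove.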

Augustine notes how fast-paced today’s world of engineering is, but says it is more about “removing the waste” and improving as you go with unit tests and peer testing. He also states that team maturity is very important and key. Furthermore, bringing a lot of engineers together doesn’t necessarily mean a quicker delivery. Engineers need to have a process in place and need to know how to properly test software, which is where the QA engineer comes in to ensure quality over speed.

My personal comment is that I really like the way Augustine thinks: quality over speed. Throughout the podcast he also says “win the time.” Before listening to this podcast, if I had heard that statement in regard to programming, I would have thought to write out the program as quickly as possible and then test at the end, when I am “done” programming. After listening to the podcast, I would now think: let me talk with my team, write out a plan to implement the code, code, ask questions, and unit test the code along the way. Overall, this was a very interesting and insightful listen that could be useful to programmers of all levels, or even to people wanting to implement a better structure for working on a team project.

From the blog CS@Worcester – Programming with Santiago by Santiago Donadio and used with permission of the author. All other rights reserved by the author.

By Any Other Metric

Estimation is a huge part of software development. It exists in many aspects of our work, but the area where it is most important is estimating development time. Many software methodologies rely on breaking down work into smaller parts and creating workflows, like the famous sprint from Agile. A common theme among all of these is estimating the time it will take to complete these tasks, and on a larger scale, how long it will take to finish a feature or product.

In this blog, Yaakov Ellis discusses the importance of estimation and, moreover, the skills needed to make a good estimate. He first discusses the usefulness of estimation: how good estimates increase efficiency and reduce crunch time. One of the most important points of the article is the prerequisites needed to make good estimates, such as good, detailed specifications and an understanding of the functionality. A point that stuck out to me is that the people who do the work should be the ones making the estimates. It seems like something obvious, but it needed to be brought up nonetheless. There were many other points, like not giving in to outside pressure to give an estimate that is unrealistic.

I chose this blog because a lot of what we talked about in class was teams deciding how much time to spend on certain things: not only sprint lengths, but every part of Agile, from estimating the length of a whole project down to how long even a simple bug fix could take. I think it is a skill that is often overlooked. As someone who is bad at giving estimates and gives in to pressure to please people, this is something I will think about more when making these decisions. One last point that I liked was the idea that estimates should be public, so everyone on the team can see them. I think that would decrease the risk of bad deadlines that force crunch on developers.

Citation

https://stackoverflow.blog/2019/10/23/why-devs-should-like-estimates/ 

From the blog CS@Worcester – Code Craft by Kyle Tucker and used with permission of the author. All other rights reserved by the author.

Is Open Source Best?

https://medium.com/chiselstrike/why-open-source-is-a-terrible-way-to-build-products-yet-most-great-products-use-open-source-c3bf9e201648

This post by Glauber Costa, an entrepreneur and former open source developer, discusses why open source is a terrible way to develop products, yet the best way to develop the technology those products require. He differentiates technology from product: a product is something that uses technology to achieve some end market goal. I’ve always thought that open source was the best option for the vast majority of developers, with little exception outside of larger businesses where proprietary licenses would be best. This post challenged that notion.

When I look around the modern tech industry, I constantly see open source. I’ve known more people than I can remember who’ve used Godot, Shotcut, Krita, and Audacity; I’m writing this post on WordPress. However, the blog makes a great point: none of these products are better than their proprietary counterparts for general users. I always saw the community of open source developers and the ability to customize the software yourself as the greatest strengths of the licensing; I never for a moment considered how this lack of centralization in development is also open source’s biggest weakness.

In light of this post, when I look around the open source world, I see very few products that my non-developer friends would ever use. For example, while I think Linux is great, I think all of them would prefer to stick with their Windows machines even if they had experience with Linux. I never thought to separate the impressive feats of community development, and the amazing innovations they’ve brought to technology, from the products that are released with those innovations, but on reflection, it makes perfect sense.

Open source development allows people to work on what they want when they want, and it gives individuals the chance to use their passion as an opportunity to solve problems they deeply care about. Proprietary development, on the other hand, gives people explicit goals aimed at creating the best end-user experience for everyone, typically with a strong centralization that ensures the product is polished, easy to use, and accessible to a wider audience. When you put it like this, it’s only logical that open source would produce the most innovative pieces of technology, created by people with the drive to do so, while proprietary development would produce the best products, since it has an explicit focus on creating what is best for the general user, even at the cost of niche features for small groups of users.

From the blog CS@Worcester – CS ZStomski by Zachary Stomski and used with permission of the author. All other rights reserved by the author.

The Importance of Licensing Code

Licensing may seem like an obscure legal detail, but it plays a critical role in scientific software development. In “The Whys and Hows of Licensing Scientific Code”, Jake VanderPlas breaks down why picking the right license is key to sharing and advancing research.

Summary of the Article

VanderPlas emphasizes three key takeaways:

Always license your code. Without a license, code is effectively closed; it’s basically off-limits for anyone else to use, which limits its reuse.

Use a GPL-compatible license. This makes it easier for your code to work with other open-source projects.

Prefer permissive licenses like BSD or MIT. These licenses lower barriers for collaboration; they are the most flexible and let people from both academia and industry collaborate freely.

Licensing is crucial for scientific reproducibility and collaboration. Even if you post your code publicly, without a license, it’s still “all rights reserved,” meaning others can’t legally use it. VanderPlas recommends permissive licenses because they encourage more people to adopt and improve the code. On the other hand, copyleft licenses (like GPL) keep the code open but might scare off companies from getting involved.
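As a concrete illustration (my own sketch, not something from VanderPlas’s article): licensing in practice usually means putting the full license text in a LICENSE file at the root of the repository, and optionally marking each source file with a machine-readable SPDX identifier, as in this hypothetical Python module:

```python
# SPDX-License-Identifier: BSD-3-Clause
# Copyright (c) 2024 Jane Researcher
#
# The full license text lives in the LICENSE file at the repository root;
# this one-line SPDX tag lets tools identify the license automatically.

def mean(data):
    """Hypothetical scientific routine that others are now free to reuse."""
    return sum(data) / len(data)
```

Without that LICENSE file, the code defaults to “all rights reserved” no matter how publicly it is posted.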

Personal Reflection

While reading this article, I found VanderPlas’s insights particularly relevant and important. I appreciate how licensing can help bridge the gap between innovation and real-world impact. The idea of using BSD or MIT licenses makes sense because they’re simple and open the door for more people to get involved.

This also made me think about how intentional we have to be with our work. Just like we carefully document research methods, licensing makes it clear how others can use and improve our code and/or tools. It’s a good reminder that open science isn’t just about sharing, it’s about having solid guidelines that make collaboration easier and that push science forward.

Citation

VanderPlas, J. (2014, March 10). The Whys and Hows of Licensing Scientific Code. Pythonic Perambulations.

Link to the article: https://www.astrobetter.com/blog/2014/03/10/the-whys-and-hows-of-licensing-scientific-code/

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

How Much Formatting is Too Much?

https://www.steveonstuff.com/2022/02/09/nitpicky-code-reviews-are-are-drag

This post from Steve Barnegren discusses his issues with the current state of team code review. Specifically, the blog takes time to point out the problems with being overly obsessive about nice formatting. It is one thing to point out flaws in logic and potential failures the system may have, giving feedback on how it might be improved or fixed, and another to, for example, say a ternary operator should be used instead of an if-else. The argument is made that the majority of ‘formatting issues’ of this variety do not provide enough value to justify the time they take from maintainers and developers.
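To illustrate the kind of nitpick in question, here is my own hypothetical example (not one from the post) of two equivalent Python snippets a reviewer might argue over:

```python
from dataclasses import dataclass

@dataclass
class User:
    is_admin: bool

user = User(is_admin=True)

# Version under review: an explicit if-else.
if user.is_admin:
    label = "Administrator"
else:
    label = "Member"

# The reviewer's suggested "fix": a conditional (ternary) expression.
label = "Administrator" if user.is_admin else "Member"

print(label)  # both versions produce the same result
```

Both behave identically; the review comment changes style, not correctness, which is exactly the trade-off the post questions.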

Personally speaking, I like to have my code in a very consistent format. If just one thing is in a different format, it seriously bothers me. It wasn’t until recently, when I started working on a project as a member of WSU’s Computer Science Club, that I had to work in a team larger than two people, and in doing so I found out pretty quickly that many people do not hold their code to the same standards.

One team member very specifically does not care at all about how the code is formatted, focusing solely on efficiency and output. At the beginning of the project, I pushed back on this, believing that the code he was writing would be very poor if we wanted to maintain standards, but I was assured it wasn’t a big deal. All of us were working on different parts of the project and were generally disconnected from one another until it came time to connect things. I specifically was on my own creating the GUI of the project. The issue finally came when it was time for me to start working on the backend. What I found was a disaster, not necessarily in terms of functionality; there were definitely errors in output, but that wasn’t my concern. The real disaster was the cleanliness of the code. Trying to figure out what was going on, how things were calculated, and what and where things were stored took a lot of tracing.

By the time I finally understood what was going on, it took me very little time to do what was asked of me, but the process of getting there should not have taken that long. The person who originally wrote the backend is now creating extensive documentation so that people don’t have to go through that process again. If there had been consistency in the formatting of the code, clear demonstration of how things functioned, and precautions taken to make sure things did not get out of hand, I feel it wouldn’t have been nearly as bad.

Although I hear this blog’s thoughts, I also hear them echoed in the person who originally said it wasn’t a big deal. In my opinion, the precondition for a team to run code reviews the way Steve recommends is that all team members have already agreed on, and shown the capability to write, code that is clean and makes sense; otherwise you get the horror I had to go through. Generally, I think code reviews should be unique to every team, because the same rules don’t work for everybody, and nitpicks do have their place on some teams.

From the blog CS@Worcester – CS ZStomski by Zachary Stomski and used with permission of the author. All other rights reserved by the author.

Software Testing Circumstances

Software testing is a crucial phase of the software development cycle: it is where the results of numerous decisions, and the errors made along the way, come together. Even after we finish the software testing phase, issues still arise despite extensive critical thinking and methodology. The effectiveness and efficiency of software testing are significantly influenced by the circumstances in which it is conducted. The term “software testing circumstances” refers to the conditions and environments in which testing occurs. These include a number of elements: financial constraints, time limits, team experience, technological infrastructure, and the development methodology adopted. Testing has to be planned and executed in accordance with these conditions.

Key Challenges in Software Testing Circumstances:

1. Time Constraints

Tight deadlines can ruin some tasks, though tools can help you complete tasks more quickly. Ultimately, how you do your work under intense pressure depends on how you handle time limitations.

2. Limited Resources

Insufficient resources, such as skilled personnel, testing environments, or financial backing, can restrict testing scope. When resources are limited, work is impeded, and problems cannot be resolved quickly enough for testing to continue on schedule.

These two are the key problems we see in nearly every testing effort.

Adapting to Testing Circumstances:

1. Prioritization with Risk-Based Testing

Teams can allocate resources efficiently by identifying high-risk areas and focusing on important capabilities. This guarantees that, despite limitations, crucial functions are adequately tested (see the sketch after this list).

2. Early Involvement of Testing Teams

Engaging highly skilled testers from the beginning of the work gives reliable, accurate results and balances the whole testing cycle.

3. Cloud-Based Testing Environments

Without requiring a significant upfront infrastructure investment, cloud testing methods provide scalable and wide-ranging testing environments. By simulating actual circumstances, these technologies increase coverage.
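As a rough sketch of the risk-based prioritization mentioned in point 1 (the test cases and scores below are hypothetical), a team might rank test cases by a simple risk score, such as likelihood of failure times impact, and run the riskiest tests first when time or resources run short:

```python
# Hypothetical test inventory: (name, likelihood of failure 1-5, impact 1-5).
test_cases = [
    ("checkout payment flow",  4, 5),
    ("profile avatar upload",  2, 1),
    ("login authentication",   3, 5),
    ("newsletter signup form", 1, 2),
]

# Risk score = likelihood x impact; sort so the riskiest tests run first.
prioritized = sorted(test_cases, key=lambda tc: tc[1] * tc[2], reverse=True)

for name, likelihood, impact in prioritized:
    print(f"risk={likelihood * impact:2d}  {name}")
```

Under a tight deadline, a team might run only the top of this list, accepting the leftover risk consciously rather than by accident.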

These are fundamental practices that teams master to get deeper and faster results within the time available for the essentials.

Most testing encounters small errors that can be resolved with minor adjustments, which lowers the testing error rate over time. AI-driven technologies can also assist with performance testing, helping teams track their testing error cycle without requiring a large expenditure.

In conclusion, software testing circumstances can cause difficulties, but these can be successfully managed with preemptive measures and modern tools. Understanding and adapting to the nuances of each testing scenario is key to maintaining reliability and user satisfaction.

Citations:

1. Myers, G. J., Sandler, C., & Badgett, T. (2011). The Art of Software Testing. Wiley.
2. ISTQB Foundation Level Syllabus. (n.d.). https://www.istqb.org
3. Atlassian Continuous Testing Guide. (n.d.). https://www.atlassian.com/continuous-testing
4. IEEE Software Testing Standards. (n.d.). https://www.ieee.org

From the blog CS@Worcester – Pre-Learner —> A Blog Introduction by Aksh Patel and used with permission of the author. All other rights reserved by the author.

Git: Merge conflicts

Week-13: 12/2/2024

This week in class we worked on merge conflicts and how to resolve them, and I did an activity on solving a merge conflict. The experience was not as frustrating as I thought it was going to be. While doing the activity, Professor Wurst highlighted how important it is to understand version control systems like Git and to develop effective strategies for resolving conflicts collaboratively.

On this week’s blog hunt, I came across a helpful blog post by Sid Puri titled “Git Merge Conflicts,” which provided a clear explanation of what merge conflicts are, why they occur, and how to resolve them using Git tools. It broke down the process into manageable steps and even offered advice on how to prevent these conflicts from happening in the first place.

One of the key takeaways for me was understanding the root cause of merge conflicts: multiple developers making changes to the same file at the same time. Because Git can’t automatically figure out which changes to keep, it flags these conflicts and requires manual intervention. The post explained how to use Git’s notification system to identify conflicts and then how to manually merge code using conflict markers – those weird symbols like <<<<<<<, =======, and >>>>>>> that used to make my head spin.
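For anyone who has not seen them in context, here is a hypothetical conflicted file: Git places your branch’s version between <<<<<<< HEAD and =======, and the incoming branch’s version between ======= and >>>>>>>:

```
<<<<<<< HEAD
greeting = "Hello from the main branch"
=======
greeting = "Hello from the feature branch"
>>>>>>> feature
```

Resolving the conflict means editing the file to keep the version you want (or a blend of both) and deleting all three marker lines.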

The post also emphasized the importance of communication in preventing merge conflicts. This really resonated with me because our team conflict stemmed from two of us accidentally modifying the same section of code. If we had just communicated about our tasks beforehand, we could have avoided the whole issue. Moving forward, I’m definitely going to advocate for more frequent team check-ins and a more organized approach to task allocation.

What I really appreciated about the blog post was its practical approach to conflict resolution. It explained how to use built-in Git tools like git status and git diff to navigate conflicts with confidence. Mastering these tools will definitely save me time and frustration in future projects. Plus, learning how to handle and resolve conflicts collaboratively is a transferable skill that will be valuable in any team setting, not just software development.
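As a quick reference (a minimal sketch of the workflow; the file name is hypothetical), those commands fit into conflict resolution like this:

```
git status          # lists conflicted files under "Unmerged paths"
git diff            # shows the conflicting hunks from both branches
# ...edit greeting.py by hand to remove the markers, then:
git add greeting.py # mark the conflict as resolved
git commit          # complete the merge
```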

Overall, this blog post was a great resource that directly complemented our coursework on team-based software development. It reinforced the idea that understanding and resolving merge conflicts isn’t just a technical skill; it’s an essential component of effective teamwork in software engineering. I feel much more prepared to tackle these challenges in the future and to contribute more effectively to my team projects.

Blog link: https://medium.com/version-control-system/git-merge-conflicts-4a18073dcc96

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Understanding Clean Code in Reagent with the Clean Coders

This week, I explored the blog post “Mastering Reagent: Finding the Balance Between Readability and Performance” on Clean Coders. This post delves into the challenges of balancing readability and performance when using Reagent, a ClojureScript library for building user interfaces. It highlights techniques to maintain code clarity while optimizing performance, especially in interactive web applications.

The author discusses common pitfalls, such as overusing lifecycle methods, creating unnecessary computations, and mishandling state updates, all of which can lead to unresponsive or hard-to-maintain code. The blog suggests best practices, including leveraging idiomatic ClojureScript constructs and modularizing components to enhance both readability and runtime efficiency. Practical examples include minimizing expensive computations by using reagent.core/track and ensuring that components don’t re-render unnecessarily via React’s shouldComponentUpdate lifecycle method. The author also emphasizes that while performance is crucial, readability often has a long-term payoff, especially in collaborative environments where maintainable code saves time.

I chose this blog post because our course has emphasized the importance of clean, maintainable code in software development and maintenance. These lessons have led us to balance the trade-offs between performance and code clarity, and understanding how to achieve that in application is very important. Additionally, we’ve explored a lot of similar clean code attributes in class that are expanded on within this post and tied to the context of UI frameworks.

What stood out to me most was the emphasis on maintaining readability without compromising performance, a principle applicable across programming domains. For instance, I’ve sometimes been tempted to optimize prematurely, leading to messy code that became hard to debug or modify later. This post reinforced the idea that readable code doesn’t just benefit the developer in the moment but also improves team productivity and ensures the application can be scaled or updated more easily in the future.

One of the key takeaways for me was the use of tools like track to manage expensive computations efficiently. I had not considered how reactivity frameworks like Reagent allow for targeted optimizations without sacrificing clarity. Moving forward, I plan to apply this principle in my projects by carefully identifying bottlenecks and ensuring that optimizations are implemented only where they provide tangible benefits.

This material has given me a new perspective on how to approach UI development. While I’ve worked primarily with simpler frameworks, I now see how the same balance between readability and performance applies universally. Whether I’m working with Reagent, React, or any other tool, the insights from this post will help guide my decision-making and ensure I focus on long-term maintainability as well as immediate efficiency.

Overall, this blog post offers practical advice for developers working with Reagent or similar UI libraries. I highly recommend reading it for anyone interested in crafting user interfaces that are both efficient and easy to maintain. The post can be found here: “Mastering Reagent: Finding the Balance Between Readability and Performance.”

From the blog CS@Worcester – CS Journal by Alivia Glynn and used with permission of the author. All other rights reserved by the author.

Scrum

Hello, and today’s blog post is about Scrum. For those of you who do not know, Scrum is basically a way to work as a team efficiently and effectively. I am glad that Professor Wurst chose to expose us to Scrum in his curriculum this semester because this method of teamwork is very interesting, and it actually seems like it could be useful to me in the future. The thing I really like about Scrum is that it is INCREDIBLY organized. Each day is planned out to the hour, and I think that having a schedule that is organized and efficient is very important when you actually want to get work done. Planning out your schedule/agenda ahead of time ensures that everyone on the team stays on track with minimal distractions. This method is also all about improvement: there are reflections/retrospectives at the end of every sprint (a sprint being a short, fixed-length chunk of the project). We went over this in detail during our class time, which ingrained a lot of the Scrum knowledge in our heads. We also took official quizzes which helped us become “certified.” I think taking these Scrum tests was beneficial because we had to reach a certain score, and if we didn’t get it, we would have to re-do the tests until we had a much better understanding of the concepts.

Scrum uses the Agile framework, which I talked more about in one of my previous blogs. Scrum is based on three pillars: transparency, inspection, and adaptation. The official Scrum website describes the process of working with Scrum as “working through small experiments, learning from that work and adapting both what you are doing and how you are doing it as needed.” Scrum also has important values: courage, focus, commitment, respect, and openness. These values are each crucial to making this process work.

The article I chose to help me explain Scrum for this blog is linked here: An Introduction to Scrum | Lucidspark

This article describes Scrum as “an iterative, adaptable approach to software development.” It talks about how most software development methodologies are very linear, meaning there is not much room for improvement. Scrum, on the other hand, as I mentioned before, follows the Agile approach rather than the Waterfall methodology, which makes for great adaptability. The article also reiterates how Scrum is all about teamwork and collaboration. It goes into detail about each role on the Scrum team and what all the events are in the sprints. I think this article is worth a read, and it is supposed to be just an 8-minute read as well.

Overall, Scrum is definitely one of the best software development methodologies in sync with Agile ideologies, so I am glad that we learned about it!

From the blog cs@worcester – Akshay&#039;s Blog by Akshay Ganesh and used with permission of the author. All other rights reserved by the author.

The potential of owning servers

CS-348, CS@Worcester

While researching server companies like data centers, I realized that many companies rent servers to host their applications and to store large amounts of data, which is very important for any business nowadays. I read an article from David Heinemeier Hansson about how his company moved 7 cloud applications from AWS servers onto their own servers.

When David posted his article, many people thought it was a ridiculous plan to move from renting servers to owning them. In recent years, companies have been renting servers to save money: they aim to reduce costs for employees, management, and the legal department, and they save on the parts and electricity needed to run the servers. In his article, David explains how both approaches to hosting servers have their benefits.

The move from the cloud onto their own servers has its share of problems. For example, it takes a lot of time to move data from the cloud to another server. Also, the cost of hardware depends on the amount of data it needs to hold, and the performance the company wants can increase the expense significantly. Another problem is having the employees and management to maintain the server, which can increase costs for companies that do this.

David goes into detail about how his company was able to save a lot of money doing this. They already had the employees and management who maintained some of their other servers, so that problem was already solved. David also explains how his company was able to pay for the hardware needed to store 18 petabytes, which is crazy; I want you to think about just how much data 18 petabytes really is.

When I was researching this topic, I found myself hoping that his idea works, and I hope he explains in detail over the next few years how it went. If companies can figure out how to manage their own servers, it creates options to make personal servers cheaper, for example through access to more parts and to talent that knows how to handle large data and applications. If you want a detailed understanding of how his company is managing this issue, read his article below.

Work Cited

David Heinemeier Hansson, “Our cloud-exit savings will now top ten million over five years,” world.hey.com, https://world.hey.com/dhh/our-cloud-exit-savings-will-now-top-ten-million-over-five-years-c7d9b5bd, published October 17, 2024, accessed December 1, 2024.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.