Category Archives: CS@Worcester

Sprint 2 Blog Post

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-weight-based/inventorybackend/-/merge_requests/64
Create a local instance of the database so that it persists across runs of the backend during development.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-weight-based/inventorybackend/-/merge_requests/63

Create validation to prevent negative quantities in the inventory.

I also worked directly with one peer to help him resolve some merge conflicts in his Nodemon implementation issue.

During Sprint 2, I had some difficulty getting work done the right way. What I mean by this is that I would write code for the system without first thinking about how it would affect certain areas, like the physical limits of the system. That led to one of my merge requests this Sprint: the merge request to avoid negative values in the inventory existed purely because I had been developing without thinking first. It pushed me to develop a habit of thinking first and developing second, which helped a lot during the rest of Sprint 2, as I would have a complete and definite idea of what to code even before I sat down and typed it.

What I think did not work so well for me this past Sprint, and I believe was the reason why I produced much less than the first one, was the lack of a due date to deliver something. I have realized during this past month that, in order for me to produce anything myself, I need a due date. If I do not have any due date set to deliver something, I will most likely procrastinate. This is not related to the amount of work I had to do or the length of the Sprint at all. This is something personal, where I should have set due dates for myself in order to produce more and better. This correlates to something I spoke about in my Sprint Blog Post for Sprint 1—the enthusiasm and anxiety of delivering work. This is something that I need to get balanced out, with the use of due dates and time management.

As a team, we reached a really nice spot where we all became close, so working with each other is not an issue at all. During some classes I would even get worried myself, because sometimes we would be the only group laughing or having some kind of friendly conversation. That is great, but we need to be careful that it doesn’t undermine our work, and I believe this could be part of what is not working so well. Even though it does not happen all the time, some days the chit-chat has slowed us down.

The pattern I chose is called Retreat into Competence. It shows us that sometimes, when we find ourselves with no idea where to go, or find ourselves behind everybody else, or simply lost, we should take a step back, go back to what we know and are comfortable with, and finally launch ourselves forward just like a catapult. Sometimes, in order to take three steps forward, you need to take one back.

Retreat into Competence is a natural sequel to what I wrote about in the first Sprint. I dove deep, so deep that sometimes I found myself somewhere with no idea where to go or how to proceed. I would feel behind compared to my peers. Even without knowing this pattern, it matches something I learned from my first programming professor: sometimes all you have to do is retreat, leave the code aside, or go do something else related to it. And honestly, as magical as it may sound, the solution will just come to you. Brainstorming can sometimes happen in a quiet place. If I had read this pattern before, I would have applied it more often; even though I was familiar with the practice, I would still find myself lost at times.

From the blog CS@Worcester – CS Today by Guilherme Salazar Almeida Nazareth and used with permission of the author. All other rights reserved by the author.

Code Reviews

Source: https://about.gitlab.com/topics/version-control/what-is-code-review/

A code review is a peer review of code that helps developers validate the code’s quality before it is merged and shipped to production. Code reviews are done to identify bugs, increase the overall quality of the code, and ensure that the developers of the product understand the source code. They allow for a “second opinion” on the functionality of code before it is actually implemented in the system, which prevents non-functional code from being merged into the product and potentially causing issues or performance bottlenecks. Ensuring that code is always reviewed before merging encourages developers to think more critically about their own code and allows reviewers to gain more domain knowledge about the product’s systems. Code reviews also prevent unstable code from reaching customers, which would hurt credibility and act as a detriment to the business. The benefits of code reviews are as follows: knowledge is shared among developers, bugs are discovered earlier, a shared development style and environment is established, security is enhanced, collaboration increases, and, most importantly, code quality improves. As with everything, there are still disadvantages: code reviews lead to longer shipping times, they pull focus and manpower from other parts of the process, and larger code reviews mean longer review times. But the benefits far outweigh these disadvantages.

Code reviews can be implemented in multiple ways, through pair programming, over-the-shoulder reviews, tool-assisted reviews, or even email pass-around. Gitlab offers an interesting feature where developers can require approval from reviewers before their code can be merged. I chose this article because I use this feature frequently in my capstone class. My teammates and I review each other’s changes in the codebase through this Gitlab feature and, if needed, go over these changes in class whether it be through pair programming or over-the-shoulder reviews.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

CS443: A Wishlist for Automation and Productivity

You ever think about how being a software engineer is kind of like working in a factory?

Mill & Main in Maynard, where I did a summer fellowship a few years ago. Fun fact: this building and the rest of the town feature prominently in Knives Out (2019). True story!

I mean that quite literally. Especially here in Massachusetts, where primo office space quite frequently gets hollowed out of old textile mills. (The old David Clark building by the intermodal port, and a slew of defense contractors in Cambridge-Braintree, my old workplace included, come to mind.)

In some ways, the comparison isn’t unmerited. I don’t think it’s far-fetched to say that the focus of industry is to deliver product.

Okay, but how?

Last week, I wrote about the failure of the Spotify model — specifically, their implementation of large-scale Agile-based DevOps. You can read more about that here.

The impetus for this week’s blog is ‘what-if’; if, instead of Spotify’s focus on large-scale Agile integration, we approached DevOps (in a general sense) from the bottom-up, with a clear emphasis on software tools and freeform, ad-hoc team structure. What can we use, what can we do to effect a stable and logical working environment?

Just one quick disclaimer: this is bound to be biased, especially in terms of what I’ve seen work in industry. Small, tight-knit teams and relatively flat hierarchies. This won’t work for every situation or circumstance — and by sidestepping the issue of Agile at scale, I feel like I’m ignoring the issues endemic to Spotify’s structure.

Still, I figure it’s worth a shot.

Issue Hub: Atlassian Jira

The first thing we’ll need is an issue tracker. Atlassian doesn’t do a very good job at marketing its products to the non-corporate world, but it’s very likely that almost everyone reading this post has used an Atlassian product at some point or another: Trello, Bitbucket, and, best of them all, Jira. Think of it as a team whiteboard, where we can report on bugs, update our wikis, and view the overall health of our build, all within one web server.

Version Control: Subversion

Subversion is going to be our version control software. Although it doesn’t have all of the downstream merging capability of Git, its centralized nature actually works to our benefit; the specific combination of Jenkins, Jira, and SVN forms a tightly-knit build ecosystem, as we will see.

CI Automation: Jenkins

Jenkins is a continuous integration (CI) and build automation utility which will run health checks on downstream builds before they’re committed to the nightly build, and then to the master build overnight. We’ll implement all of our tests and sanity checks within it, to ensure that no one pushes bad code. If, by some miracle, something does get through, we can revert those changes—another handy feature.

How does this work?

SVN repo → Jenkins (throughout-day staging, then end-of-day nightly build, then overnight master) → Jira (for reports and long-term progress tracking).

Does this all work?

In a word, hopefully. The social contract between you and a team of four or five people is much simpler to fulfill than that of you and the Tribe in the Spotify model. (You only have to track the work of several people, as opposed to almost everyone on-campus with the Tribal model).

There are commitments and onboarding requirements to a system like this, too, as there was with the Tribal model, but they’re not as pronounced, especially since we aren’t scaling our structure beyond this one team.

I think what is especially true of the workplace is that no two teams are alike, and it’s kind of crazy to assume that they are, which is exactly what Spotify did. How is it worthwhile to tell people who they should be working with, instead of letting them figure that out on their own?

Rather, by placing constraints on how the work is done (which is what we’re doing here—the emphasis on software as opposed to structure) we can get better results by letting people figure out how to get from Point A to Point B, assuming we properly define both A and B.

Between last week and now: a lot of thoughts to digest.

Kevin N.

From the blog CS-443 – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.

What I Learned About QA: A Computer Science Student’s Take on Real-World Testing Practices

I recently read the article “Streamlining the QA Process: Best Practices for Software Quality Assurance Testing” published by KMS Technology. As a college student studying computer science and still learning the ins and outs of software testing, I found this article especially helpful. It gave me a clearer understanding of what quality assurance (QA) really looks like in real-world software projects.

I chose this article because I’ve been trying to get a better grasp on how testing fits into the bigger picture of software development. A lot of what we learn in class focuses on writing code, but not always on making sure that code actually works the way it’s supposed to. This article breaks down what can go wrong in the testing process and how to avoid those issues, which is something I know I’ll need as I continue learning and working on team projects.

The article talks about a few key challenges that QA teams run into:

Unclear Requirements – This one really stood out to me. The article explains that if the project requirements aren’t clearly defined, testing becomes almost impossible. How can you verify if something works if you’re not even sure what it’s supposed to do? It made me realize how important it is to ask questions early on and make sure everyone’s on the same page before writing code.

Lack of Communication – The article also highlights how communication gaps can mess up testing. If developers and testers aren’t talking regularly, bugs can slip through the cracks. As someone who’s worked on class group projects where communication wasn’t great, I totally see how this could happen on a larger scale.

Skipping or Rushing Testing – The article warns against rushing through testing or treating it like an afterthought. I’ve definitely been guilty of this in my own assignments—leaving testing until the last minute, which usually results in missing bugs. The article suggests integrating testing throughout development, not just at the end, and that’s something I want to start practicing more.

Reading this article made me reflect on my own experience so far. In one of my programming classes, our final project had a vague prompt and my group didn’t ask enough questions. We ended up spending extra time rewriting parts of our code because the requirements kept changing. After reading this article, I see how important it is to define everything early and communicate often.

I also plan to be more intentional about testing as I continue to build projects. Instead of waiting until the code is “done,” I want to get into the habit of testing as I go and making sure I understand the expected behavior before writing a single line.

Overall, this article helped me understand why QA is such a critical part of software development—not just something to tack on at the end. If you’re also a student learning about testing, I recommend giving it a read: Streamlining the QA Process: Best Practices for Software Quality Assurance Testing.

From the blog CS@Worcester – Zacharys Computer Science Blog by Zachary Kimball and used with permission of the author. All other rights reserved by the author.

JUnit Testing

Hello everyone,

For this week’s blog topic I will talk about JUnit: what it is, why it is used, why it matters, the features it offers, and more. First things first, what even is JUnit? JUnit is an open-source testing framework for Java that allows programmers to write and then run automated tests. It is very useful for catching bugs early in development, when they are the least expensive to fix. Among its key features are its simple annotations, which make writing tests easier; it is intuitive, and with a bit of practice anyone can get the hang of it. Much like the happy path tests we learned about in behavioral testing, JUnit encourages testing normal operations first, and it also supports negative cases and boundary tests.

The blog I read was really useful because it not only explained what JUnit is but also recommended some good practices for new programmers. For example, the author advises testing one behavior at a time; this is important because you want to test a single aspect of the code before moving on to other parts of it. You should also use descriptive test names, since a clear name explains directly what you are testing for, eliminating confusion and the chance of writing the same test twice. Another good piece of advice from the author is to write tests that are independent, meaning different tests should not depend on each other’s results in order to run correctly. Lastly, you should always try to test the edge cases: the boundary conditions of the code and unexpected inputs. Your project should be ready for anything; even if an input does not make sense, the code should handle it correctly and guide the user down the right path.

The blog also gives a detailed tutorial on installing JUnit, with step-by-step instructions and examples, and shows how to perform automated testing, even in the cloud. At the end it offers a FAQ section, clearing up any bit of confusion readers might have. This is a great blog that I recommend everyone read; it is useful for programmers of all levels, from beginners to more experienced ones.
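To make this concrete, here is a minimal sketch of what a JUnit 5 test class might look like, following the advice above about descriptive names, one behavior per test, and edge cases. The Calculator class is hypothetical and only included so the example is self-contained; it is not from the blog.

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

// Hypothetical class under test, shown only so the tests are self-contained.
class Calculator {
    int divide(int dividend, int divisor) {
        return dividend / divisor;
    }
}

class CalculatorTest {

    @Test
    void divideReturnsQuotientForTypicalInput() {
        // Happy path: one behavior, named after what it checks.
        assertEquals(5, new Calculator().divide(10, 2));
    }

    @Test
    void divideByZeroThrowsArithmeticException() {
        // Edge case: an unexpected input should fail in a predictable way.
        assertThrows(ArithmeticException.class,
                () -> new Calculator().divide(10, 0));
    }
}
```

Each test stands on its own and does not depend on the other test’s result, which is exactly the independence the author recommends.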

In conclusion, JUnit testing is a fundamental skill to learn if you want to become a great Java developer. It helps you verify how your code behaves and helps you catch and fix any bugs that come up at any point during development. Mastering JUnit will not only improve your code quality but also give you a boost of confidence when you make changes, knowing that JUnit will be there to catch any bugs.

Source:
https://testgrid.io/blog/junit-testing/

From the blog Elio's Blog by Elio Ngjelo and used with permission of the author. All other rights reserved by the author.

Static Testing

Article: https://www.browserstack.com/guide/static-software-testing-tools

This blog will focus on static testing. Static testing is the inspection of a program’s code without executing it. It happens at an early stage, while a program is still being developed and its code can still be adjusted before the final product. Having a program’s files reviewed before release saves a company money, because the program does not have to be reworked. Review analysis and static analysis are two different methods of static testing: an informal review is a type of review analysis where team members provide feedback on the code, while static code analysis uses software tools to detect coding errors. Static testing is used multiple times while coding a program. When a project is first assigned, whether in a professional or academic setting, programmers need to understand the requirements of the project. Usually, after the instructions have been reviewed, coding would be the next step, but static testing adds an extra step of checking whether the program has the documents needed for coding. Throughout the development of a program, a common practice is running the program, whether with unit tests or by running the whole program, so the programmer knows whether it is error free. Static testing at the coding stage can either be feedback from team members or different software tools such as Soot and Checkstyle. The BrowserStack Code Quality tool is one such tool for static testing. In my programming experience, I am used to having to fix my errors manually. This past week, I was introduced to new Visual Studio Code software tools for catching coding errors. BrowserStack Code Quality is one tool for automated static testing, where the analysis is done through software tools.

BrowserStack Code Quality has an assistant that recommends how large classes in a program can be split into smaller classes. It can be installed in Android Studio, VS Code, or IntelliJ, and it provides a quick scan of a program with feedback. Another software tool is Checkstyle, which only works with Java. Developers using Checkstyle learn about errors while they are writing code, rather than after the program has run. Developers can also define their own coding rules, and the program is checked against those rules. Recently, I learned how to use PMD in Visual Studio Code. PMD detects logical errors in code such as uninitialized variables and unused code, and it has a copy-paste detector that identifies duplicated code. PMD supports more than 10 different programming languages.
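To give a rough sense of what these tools report, here is a small, hypothetical Java snippet containing the kinds of issues a static analyzer such as PMD or Checkstyle would typically flag without ever running the program.

```java
public class ReportGenerator {

    // An unused private field is a classic static-analysis finding.
    private int unusedCounter;

    public String buildReport(String title) {
        String draft = "draft";       // local variable assigned but never used
        if (title == null) {
            return "";                // null handling and empty-return style rules
        }                             // may also be reported, depending on the
        return "Report: " + title;    // rule sets that are enabled
    }
}
```

Exactly which of these get reported depends on the rules a team enables, which is part of why tools like Checkstyle let developers define their own coding conditions.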

From the blog jonathan's computer journey by Jonathan Mujjumbi and used with permission of the author. All other rights reserved by the author.

Comprehending Program Logic with Control Flow Graphs

This week I am discussing a blog post titled “Control Flow Graph In Software Testing” by Medium user Amaralisa. When I first read through this post, it immediately clicked with what we have been studying in class about different path testing types, which capture a program’s logic in a similar way. The comparison between a CFG and a map used to explore the world or get from point A to point B is incredibly useful, since it explains the need for a guide to the program’s many execution paths. The writer made the topic easy to understand while still including the technical information required to apply these techniques moving forward.

This post helped me see the bigger picture in terms of the flow of a program and how the logic is truly working behind the code we write. It tied directly into what we’ve covered about testing strategies, especially white-box testing, which focuses on knowing the internal logic of the code. The connection between the CFG and how it helps test different code paths felt like a practical application of what we’ve been reading about in our course.

It also made me think about how often bugs or unexpected behavior aren’t because the output is flat-out wrong, but because a certain path the code takes wasn’t anticipated. Seeing how a Control Flow Graph can lay out those paths visually gives me a better sense of how to test and even write code more deliberately. It’s one thing to read through lines of code and think you understand what’s going on, but when you actually map it out, you might catch paths or branches you hadn’t considered before. I could definitely see this helping with debugging too—like, instead of blindly poking around trying to find what’s breaking, I can trace through the flow and pinpoint where things start to fall apart.
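As a rough illustration of that idea (my own example, not one from the blog), here is a tiny Java method and the distinct execution paths that a control flow graph of it would expose.

```java
public class Discount {

    // A CFG for this method has two decision nodes, which yields
    // three distinct execution paths worth covering in tests:
    //   1) total < 100                    -> no discount
    //   2) total >= 100 and not a member  -> 5% discount
    //   3) total >= 100 and a member      -> 15% discount
    public static double applyDiscount(double total, boolean isMember) {
        if (total < 100) {
            return total;
        }
        if (isMember) {
            return total * 0.85;
        }
        return total * 0.95;
    }
}
```

Even for a method this small, drawing out the graph makes it harder to forget one of the branches the way you might when only reading the code.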

I also really liked that the blog didn’t try to overcomplicate anything. It stuck to the fundamentals but still gave enough technical depth that I felt like I could walk away and try it on my own. It gave me the confidence to try using CFGs as a tool not just during testing but also during planning, especially for more complex logic where things can easily go off track.

Moving forward, I am going to spend time practicing using CFGs as a part of my development process to ensure that I am taking advantage of tools that are designed to help. Whether it’s for assignments, personal projects, or even during team collaboration, I think having this extra layer of structure will help catch mistakes early and improve the quality of the final product. It feels like one of those concepts that seems small at first, but it shifts the way you approach programming altogether when applied properly.

From the blog cameronbaron.wordpress.com by cameronbaron and used with permission of the author. All other rights reserved by the author.

Spies and Their Role in Software Testing

As I was doing some at-home research on stubs and mocking for one of my courses, I came across the idea of spies. Unlike stubs and mocks, which let the program and tests run by giving canned answers or standing in for unfinished code, spies fill a much needed role that those tools leave open.

Spies are used to ensure a function was called. It’s of course more in-depth than this, but that’s its basic function.

On a deeper level, a spy can tell not only whether a call to a function was made, but also how many calls were made, what arguments were passed, and whether a specific argument was passed to the function.

Abby Campbell has great examples of these in her blog, “Spies, Stubs, and Mocks: An Introduction to Testing Strategies” where she displays easy to understand code. I would definitely recommend taking a look at them, her blog also goes in depth on stubs and mocking.

When writing test cases, the value of adding a spy to ensure a thorough case can’t be overstated. Imagine a simple test case that uses a stub: without a spy, you can’t be sure the correct function was called unless every function returns a different value, which would be inefficient to set up. By using a spy, the function called is checked, the arguments passed are checked, and the output can be checked as well, leaving little to no room for error in the test case aside from human error.
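Since Abby Campbell’s post covers examples in depth, here is only a minimal sketch of what a spy might look like with JUnit and Mockito; the shopping-list scenario is hypothetical and just shows verifying calls and arguments.

```java
import static org.mockito.Mockito.*;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class SpyExampleTest {

    @Test
    void spyRecordsCallsAndArguments() {
        // Wrap a real object in a spy: its behavior is unchanged,
        // but every call to it is recorded for later verification.
        List<String> spyList = spy(new ArrayList<String>());

        spyList.add("bread");
        spyList.add("milk");

        // Verify that add() was called, how many times, and with which arguments.
        verify(spyList, times(2)).add(anyString());
        verify(spyList).add("bread");
        verify(spyList).add("milk");
    }
}
```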

With the addition of spies to our arsenal of software testing tools, we check off the need for a reliable way to ensure correct function calls and arguments. I plan on carrying this new tool with me throughout the rest of my career, since it allows for much more efficient, effective, and sound testing.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.

Behavioral Testing

Source: https://keploy.io/blog/community/understanding-different-types-of-behavioral-unit-tests

Behavioral unit tests validate how code units operate under certain conditions, allowing developers to ensure that the software/application is working as it should be. Behavioral unit tests focus on specific pieces of code. They help developers find bugs early and work based on real scenarios. They lead to improved code quality because this unit testing ensures that the software is up to the expectations of the user, and allows for easier refactoring. The key types of behavioral unit tests include happy path tests, negative tests, boundary tests, error handling tests, state transition tests, performance driven tests, and integration-friendly tests. The one that caught my attention was the performance-driven test. These tests validate performance under specified constraints, such as handling 10,000 queries. The test is run to ensure that performance remains acceptable under various loads. This test caught my attention because in my cloud computing class, I was loading files with millions of data entries, and the performance suffered, which highlights the importance of unit testing under conditions such as these.
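As a small illustration of a few of these types (my own sketch, not taken from the article), here is what happy path, negative, and boundary tests might look like in JUnit for a hypothetical weight-validation function.

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

class WeightValidatorTest {

    // Hypothetical unit under test: weights from 0 up to 100 kg are valid.
    static boolean isValidWeight(double kg) {
        return kg >= 0 && kg <= 100;
    }

    @Test
    void happyPathTypicalWeightIsAccepted() {
        assertTrue(isValidWeight(25.0));   // normal, expected usage
    }

    @Test
    void negativeTestNegativeWeightIsRejected() {
        assertFalse(isValidWeight(-1.0));  // invalid input should be rejected
    }

    @Test
    void boundaryTestLimitsAreStillAccepted() {
        assertTrue(isValidWeight(0.0));    // lower boundary
        assertTrue(isValidWeight(100.0));  // upper boundary
    }
}
```

A performance-driven test of the same function would instead call it under a heavy load and assert that the total time stays within an acceptable limit.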

The difference between functional and behavioral unit tests is that functional tests validate the system’s function overall, whereas behavioral tests focus on specific pieces of code to make sure that they behave as expected under various conditions. Behavioral unit tests should be run every time code is built to make sure changes in the code don’t cause problems. Tools that can be used for this kind of testing include JUnit/Mockito for Java, pytest for Python, and Jest for JavaScript. I chose this article because we use JUnit/Mockito in class and thought it’d be wise to expand my knowledge on other unit tests. It’s good to get reassurance that unit testing is important from all of the sources I’ve been learning from, because it is very apparent that many different scenarios can cause many different problems in regard to the production of code/software/applications.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Automation Tools

This week in class, we did activities based on static testing, analyzing code with Gradle. Gradle is not a new tool we’re just hearing about, as we’ve worked with it throughout the semester. Since it is an automation tool with a lot of cool features, I took a further look into automation in software development. I wanted to know what the best features were, as well as the potential drawbacks of using automation. I ended up finding a blog called “Automation in Software Development: Pros, Cons, and Tools.”

What Else can be Automated?

We’ve learned by now that software testing can be automated. But is that it? Absolutely not. Some other important software processes can be automated as well. One of them is CI/CD (Continuous Integration/Continuous Deployment). Automating continuous integration allows code changes by multiple developers to be continuously integrated into a common version control repository throughout the day, after which tests run automatically, making sure that newly written code does not interfere with existing code. Automating continuous deployment means that integrated and tested code is released to production automatically. Releases are quicker because of the automated deployment, and better because every new line of code is tested before it is even integrated.

Automation can also be used to monitor and maintain code. There are automation tools that help analyze data, identify issues, and also provide notifications of a deployed software product. With automation, issues can even be resolved automatically. This is really helpful because it drastically reduces time and resources spent trying to correct errors.

Pros

Three of the largest benefits that come with automation are a reduction in manual workload, lower development costs, and an increase in software quality. When tasks are automated, developers can use that newly freed time to find ways to improve the software. This way, there is a better chance of the software having more advanced features and of customers being satisfied with the product. Many errors and defects in a deployed product come from human errors made during development, and this is where automated testing comes in. Testing tools such as Gradle, JUnit, and Selenium were created for this purpose. Automated testing tools provide feedback on code in a snap compared to how long manual testing might take, which, as said before, leads to less time and money being spent rectifying errors. Reduced time and cost are two of the key automation benefits that persuade businesses to adopt automation.

Cons

The challenges most often faced when implementing automation tend to be the complexity of the tools, financial constraints, and human resistance. Automation tools can be tough for a corporation to set up, and some require skills that a corporation’s employees might not have, which means they have to be trained to use them, costing more time and money. Though the last paragraph mentioned how automation lowers costs, it can be quite expensive when first implemented. From purchasing the required motion control equipment to paying subscription and renewal fees, automation on a large scale seems to be a realistic option only for large companies, and the return on investment might not be immediate either. There is also a concern that automation will soon replace human employees. This can create uncertainty and division in the workplace because employees might not know whether they are at risk of being let go, so they might object to using automation.

Reference

https://www.orientsoftware.com/blog/automation-in-software-development/

From the blog CS@Worcester – Blog del William by William Cordor and used with permission of the author. All other rights reserved by the author.