Category Archives: CS@Worcester

Software Testing Circumstances

Software testing is a crucial phase of the software development cycle. However, the effectiveness and efficiency of software testing are significantly influenced by the circumstances in which it is conducted: even after extensive critical thinking and careful methodology, issues can still arise once the testing phase is finished. The term “software testing circumstances” refers to the conditions and environments in which testing occurs. These conditions include a number of elements: financial constraints, time limits, team experience, technological infrastructure, and the development methodology adopted. Testing has to be scheduled and executed with these critical constraints in mind.

Key Challenges in Software Testing Circumstances:

1. Time Constraints

Tight deadlines can ruin some tasks, while the right tools can help you complete tasks more quickly. Ultimately, how you do your work under intense pressure depends on how you handle time limitations.

2. Limited Resources

Insufficient resources, such as skilled personnel, testing environments, or financial backing, can restrict testing scope. While extra resources offer help with the task at hand, limited resources impede the work and leave problems unresolved, blocking further testing.

These two are the key problems we see in nearly every testing effort.

Adapting to Testing Circumstances:

1. Prioritization with Risk-Based Testing

Teams can allocate resources efficiently by focusing on important capabilities and identifying high-risk areas. This guarantees that, despite limitations, crucial functions are adequately tested.
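As a toy sketch of what this prioritization might look like in practice (the features, likelihoods, and impact scores below are invented, not from any source):

```python
# Toy sketch of risk-based test prioritization.
# Feature names, likelihoods, and impact scores are invented for illustration.
features = [
    {"name": "checkout",     "likelihood": 0.8, "impact": 9},
    {"name": "search",       "likelihood": 0.4, "impact": 5},
    {"name": "profile page", "likelihood": 0.2, "impact": 2},
]

# Risk = likelihood of failure x business impact; test the riskiest areas first.
for f in sorted(features, key=lambda f: f["likelihood"] * f["impact"], reverse=True):
    print(f"{f['name']}: risk score {f['likelihood'] * f['impact']:.1f}")
```

Sorting by likelihood times impact puts the riskiest functionality at the top of the test plan, so the most crucial functions get tested even when time runs out.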

2. Early Involvement of Testing Teams

Engaging highly skilled testers from the beginning of a project yields more reliable and accurate results and balances the workload across the whole testing cycle.

3. Cloud-Based Testing Environments

Cloud testing platforms provide scalable and wide-ranging testing environments without requiring a significant upfront infrastructure investment. By simulating actual circumstances, these technologies increase coverage.

These are fundamental abilities to master so that a team gets deeper, faster results while reserving time for the essentials.

Most of our testing encounters small errors that can be resolved with minor adjustments, steadily lowering the testing error curve. AI-driven technologies assist with tracking this performance, allowing us to chart our testing error trend without requiring a large expenditure.

In conclusion, problems involving software testing can cause difficulties, but these can be successfully avoided with preemptive measures and modern tools. Understanding and adapting to the nuances of each testing scenario is key to maintaining reliability and user satisfaction.

Citations:

1. Myers, G. J., Sandler, C., & Badgett, T. (2011). The Art of Software Testing. Wiley.
2. ISTQB Foundation Level Syllabus. (n.d.). https://www.istqb.org
3. Atlassian Continuous Testing Guide. (n.d.). https://www.atlassian.com/continuous-testing
4. IEEE Software Testing Standards. (n.d.). https://www.ieee.org

From the blog CS@Worcester – Pre-Learner —> A Blog Introduction by Aksh Patel and used with permission of the author. All other rights reserved by the author.

Git: Merge conflicts

Week-13: 12/2/2024

This week in class we worked on merge conflicts and how to resolve them. I completed an activity on resolving a merge conflict, and the experience was not as frustrating as I thought it would be. During the activity, Professor Wurst highlighted how important it is to understand version control systems like Git and to develop effective strategies for resolving conflicts collaboratively.

On this week’s blog hunt, I came across a helpful blog post by Sid Puri titled “Git Merge Conflicts,” which provided a clear explanation of what merge conflicts are, why they occur, and how to resolve them using Git tools. It broke down the process into manageable steps and even offered advice on how to prevent these conflicts from happening in the first place.

One of the key takeaways for me was understanding the root cause of merge conflicts: multiple developers making changes to the same file at the same time. Because Git can’t automatically figure out which changes to keep, it flags these conflicts and requires manual intervention. The post explained how to use Git’s notification system to identify conflicts and then how to manually merge code using conflict markers – those weird symbols like <<<<<<<, =======, and >>>>>>> that used to make my head spin.
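To make those markers concrete, here is a hypothetical conflicted file, invented purely for illustration, showing what Git leaves behind when two branches change the same Python function:

```python
# calculator.py -- as it looks after `git merge feature-branch` reports a conflict
def add_tax(price):
<<<<<<< HEAD
    # current branch: flat 6.25% sales tax
    return price * 1.0625
=======
    # incoming branch: tax rate configured elsewhere
    return price * (1 + TAX_RATE)
>>>>>>> feature-branch
```

Everything between <<<<<<< HEAD and ======= is the current branch's version; everything from ======= down to >>>>>>> is the incoming change. Resolving the conflict means editing the file to keep the right version (or a blend of both), deleting the three marker lines, and then staging and committing the result with git add and git commit.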

The post also emphasized the importance of communication in preventing merge conflicts. This really resonated with me because our team conflict stemmed from two of us accidentally modifying the same section of code. If we had just communicated about our tasks beforehand, we could have avoided the whole issue. Moving forward, I’m definitely going to advocate for more frequent team check-ins and a more organized approach to task allocation.

What I really appreciated about the blog post was its practical approach to conflict resolution. It explained how to use built-in Git tools like git status and git diff to navigate conflicts with confidence. Mastering these tools will definitely save me time and frustration in future projects. Plus, learning how to handle and resolve conflicts collaboratively is a transferable skill that will be valuable in any team setting, not just software development.

Overall, this blog post was a great resource that directly complemented our coursework on team-based software development. It reinforced the idea that understanding and resolving merge conflicts isn’t just a technical skill; it’s an essential component of effective teamwork in software engineering. I feel much more prepared to tackle these challenges in the future and to contribute more effectively to my team projects.

Blog link: https://medium.com/version-control-system/git-merge-conflicts-4a18073dcc96

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Implementing New REST API Calls

APIs are integral to building scalable and flexible applications. They allow communication between clients and servers using HTTP methods such as GET, POST, PATCH, and DELETE. Adding new endpoints to your REST API can improve functionality, meet user requirements, and support new features. As an example, suppose you need a GET /inventory endpoint to retrieve current stock levels and a PATCH /inventory endpoint to update the stock when items are added or removed. Implementing a new endpoint involves a few steps:

1. Plan the endpoint. Define the endpoint’s purpose, required input, and expected output. For example, the endpoint GET /inventory has the purpose of retrieving total inventory in pounds and responds with a JSON object containing the current inventory count.

2. Test the endpoint. Write test cases to verify that the endpoint works as expected.

3. Document the endpoint. Use tools like Swagger to document your API, including details such as inputs and response codes.

Another important topic in REST API design is error handling. It is essential for any API to provide meaningful feedback to users and developers while maintaining a secure system. A best practice is to use HTTP status codes effectively: 400 Bad Request for issues like missing parameters or invalid input, 404 Not Found for requests to nonexistent resources, and 500 Internal Server Error for unexpected server issues.
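The steps above aren’t tied to a specific framework, so purely as an illustrative sketch, here are the two inventory endpoints written in Python with Flask; the in-memory store and the "pounds" field are my own assumptions:

```python
# Minimal sketch of the GET /inventory and PATCH /inventory endpoints.
# The in-memory store and the "pounds" field are assumptions for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)
inventory = {"pounds": 120}  # pretend current stock level

@app.route("/inventory", methods=["GET"])
def get_inventory():
    # Retrieve total inventory in pounds as a JSON object
    return jsonify(inventory), 200

@app.route("/inventory", methods=["PATCH"])
def update_inventory():
    body = request.get_json(silent=True)
    if body is None or "pounds" not in body:
        # 400 Bad Request: missing or invalid input
        return jsonify({"error": "request body must include 'pounds'"}), 400
    # A positive value adds stock; a negative value removes it
    inventory["pounds"] += body["pounds"]
    return jsonify(inventory), 200

if __name__ == "__main__":
    app.run()
```

A client could then send PATCH /inventory with a body like {"pounds": -5} to remove five pounds of stock, and the 400 branch shows the status-code practice described above.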

When building REST APIs, adhering to strong design principles is important for creating scalable, maintainable, and user-friendly interfaces. Three core principles are resource-oriented design, HTTP method semantics, and consistency.

In resource-oriented design, REST APIs treat resources as the primary entities representing objects in the application domain: /users represents user data, /orders represents order records, and /products represents items available for purchase.

The second principle is HTTP method semantics. Each HTTP method has a specific purpose, and using them correctly is critical. GET fetches data: GET /users retrieves a list of users, and GET /users/{id} gets the details of a specific user. POST creates a new resource: POST /users creates a new user. PUT updates an entire resource: PUT /users/{id} replaces all data for the user with the specified ID. DELETE removes data: DELETE /users/{id} deletes the specified user. All this is to say that using HTTP methods correctly simplifies the developer experience by creating a predictable pattern.

I chose this resource because it treats APIs as a whole: it lists the pros and cons, explains the principles of REST APIs, and shows examples of how they work.
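To see those method semantics from the client’s side, here is a hypothetical sketch using Python’s requests library; the base URL and user data are invented for illustration:

```python
# Hypothetical client calls mirroring the method semantics described above.
# The base URL and the user payloads are invented for illustration.
import requests

BASE = "https://api.example.com"

requests.get(f"{BASE}/users")                                    # GET: list all users
requests.get(f"{BASE}/users/42")                                 # GET: one user's details
requests.post(f"{BASE}/users", json={"name": "Ada"})             # POST: create a user
requests.put(f"{BASE}/users/42", json={"name": "Ada Lovelace"})  # PUT: replace the user
requests.delete(f"{BASE}/users/42")                              # DELETE: remove the user
```

Because every resource follows the same pattern, a developer who has used /users can predict how /orders or /products will behave.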

References:

https://www.smashingmagazine.com/2018/01/understanding-using-rest-api/

https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods

From the blog CS@Worcester – Site Title by lynnnsubuga and used with permission of the author. All other rights reserved by the author.

Software Architecture Patterns

Week-13: 12/2/2024

Understanding software architectural patterns is critical in the software development industry for creating strong, scalable, and maintainable products.

A recent Turing blog post, “Software Architecture Patterns and Types,” has been useful in solidifying my understanding of this important concept. The article provides a comprehensive overview of various patterns, including monolithic, microservices, event-driven, layered, and serverless architectures, and gives clear explanations of each pattern’s design principles, advantages, and limitations.

For instance, while monolithic architectures offer simplicity, they often struggle with scalability. On the other hand, microservices excel in scalability and allow for independent deployment but can introduce complexity in maintenance and debugging. The article also explores emerging trends like serverless architecture, emphasizing their importance in modern cloud-based systems.

The practical examples and concise explanations in the article made it extremely relevant to what I learned in my classes, particularly my software construction, design, & architecture class. The discussion on system scalability and maintainability directly aligns with the topics we’re covering.

One of the most valuable takeaways for me was the emphasis on aligning architectural decisions with business objectives. The article effectively illustrates that a microservices architecture, while attractive for its scalability, might be overkill for a small-scale project. This resonated strongly with my recent experience in a group project where we debated between microservices and a layered design. Reflecting on the deployment and dependency management challenges we faced, the article validated our decision to opt for a simpler layered design as a better fit for our project’s scope.

Furthermore, the article’s discussion of serverless architecture was truly eye-opening. I had previously held a somewhat simplistic view of serverless as a universal scaling solution. However, the article shed light on its potential drawbacks, such as vendor lock-in and latency issues. This more nuanced perspective will undoubtedly inform my approach to future projects, encouraging me to critically evaluate new trends before jumping on the bandwagon.

Moving forward, I intend to apply this knowledge by diligently assessing the specific needs and constraints of each project before selecting an architectural pattern. For instance, when tackling a high-traffic e-commerce site, I would now consider employing an event-driven architecture to effectively handle asynchronous data flow. Alternatively, for smaller projects, I would advocate for a monolithic or layered approach to minimize overhead.

By understanding the trade-offs inherent in different architectural patterns, I feel better prepared to design and build software systems that are not only functional but also scalable, maintainable, and aligned with business goals.

Blog link: https://www.turing.com/blog/software-architecture-patterns-types

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Understanding Clean Code in Reagent with the Clean Coders

This week, I explored the blog post “Mastering Reagent: Finding the Balance Between Readability and Performance” on Clean Coders. This post delves into the challenges of balancing readability and performance when using Reagent, a ClojureScript library for building user interfaces. It highlights techniques to maintain code clarity while optimizing performance, especially in interactive web applications.

The author discusses common pitfalls, such as overusing lifecycle methods, creating unnecessary computations, and mishandling state updates, all of which can lead to unresponsive or hard-to-maintain code. The blog suggests best practices, including leveraging idiomatic ClojureScript constructs and modularizing components to enhance both readability and runtime efficiency. Practical examples include minimizing expensive computations by using reagent.core/track and ensuring that components don’t re-render unnecessarily with reagent.core/shouldComponentUpdate. The author also emphasizes that while performance is crucial, readability often has a long-term payoff, especially in collaborative environments where maintainable code saves time.

I chose this blog post because our course has emphasized the importance of clean, maintainable code in software development and maintenance. These lessons have led us to balance the trade-offs between performance and code clarity, and understanding how to achieve that in application is very important. Additionally, we’ve explored a lot of similar clean code attributes in class that are expanded on within this post and tied to the context of UI frameworks.

What stood out to me most was the emphasis on maintaining readability without compromising performance, a principle applicable across programming domains. For instance, I’ve sometimes been tempted to optimize prematurely, leading to messy code that became hard to debug or modify later. This post reinforced the idea that readable code doesn’t just benefit the developer in the moment but also improves team productivity and ensures the application can be scaled or updated more easily in the future.

One of the key takeaways for me was the use of tools like track to manage expensive computations efficiently. I had not considered how reactivity frameworks like Reagent allow for targeted optimizations without sacrificing clarity. Moving forward, I plan to apply this principle in my projects by carefully identifying bottlenecks and ensuring that optimizations are implemented only where they provide tangible benefits.

This material has given me a new perspective on how to approach UI development. While I’ve worked primarily with simpler frameworks, I now see how the same balance between readability and performance applies universally. Whether I’m working with Reagent, React, or any other tool, the insights from this post will help guide my decision-making and ensure I focus on long-term maintainability as well as immediate efficiency.

Overall, this blog post offers practical advice for developers working with Reagent or similar UI libraries. I highly recommend reading it for anyone interested in crafting user interfaces that are both efficient and easy to maintain. The post can be found here: “Mastering Reagent: Finding the Balance Between Readability and Performance.”

From the blog CS@Worcester – CS Journal by Alivia Glynn and used with permission of the author. All other rights reserved by the author.

Scrum

Hello, and today’s blog post is about Scrum. For those of you who do not know, Scrum is basically a way to work as a team efficiently and effectively. I am glad that Professor Wurst chose to expose us to Scrum in his curriculum this semester because this method of teamwork is very interesting, and it actually seems like it could be useful to me in the future. The thing I really like about Scrum is that it is INCREDIBLY organized. Each day is planned out to the hour, and I think that having a schedule that is organized and efficient is very important when you actually want to get work done. Planning out your schedule/agenda ahead of time ensures that everyone on the team stays on track with minimal distractions. This method is also all about improvement. There are reflections/retrospectives at the end of every sprint (a short, fixed-length cycle of work). We went over this in detail during our class time, which ingrained a lot of the Scrum knowledge in our heads. We also took official quizzes which helped us become “certified.” I think taking these Scrum tests was beneficial because we had to reach a certain score, and if we didn’t get it, we would have to re-do the tests until we had a much better understanding of the concepts.

Scrum uses the Agile framework, which I talked more about in one of my previous blogs. Scrum is based on three pillars: transparency, inspection, and adaptation. The official Scrum website describes the process of working with Scrum as “working through small experiments, learning from that work and adapting both what you are doing and how you are doing it as needed.” Scrum also has important values: courage, focus, commitment, respect, and openness. These values are each crucial to making this process work.

The article I chose to help me explain Scrum for this blog is linked here: An Introduction to Scrum | Lucidspark

This article describes Scrum as “an iterative, adaptable approach to software development.” It talks about how most software development methodologies are very linear, leaving little room for adjustment along the way. Scrum, on the other hand, as I mentioned before, follows the Agile approach rather than the Waterfall methodology, which makes for great adaptability. The article also reiterates how Scrum is all about teamwork and collaboration. It goes into detail about each role on the Scrum team and what all the events are in a sprint. I think this article is worth a read, and it is supposed to be just an 8-minute read as well.

Overall, Scrum is definitely one of the best software development methodologies that are in sync with Agile ideologies, so I am glad that we learnt about it!

From the blog cs@worcester – Akshay’s Blog by Akshay Ganesh and used with permission of the author. All other rights reserved by the author.

A potential of owning servers

CS-348, CS@Worcester

While researching server companies and data centers, I realized that many companies rent servers to host their applications and to store large amounts of data, both of which are very important for any business nowadays. I read an article by David Heinemeier Hansson about how his company moved seven cloud applications from AWS servers onto its own servers.

When David posted his article, many people thought it was a ridiculous plan to move from renting servers to owning them. In recent years, companies have rented servers to save money: they reduce costs on employees, management, and legal work, and they avoid paying for parts and the electricity needed to run the hardware. In his article, David explains that both approaches to hosting have their benefits.

The move from the cloud onto your own servers brings its own problems. For example, it takes a lot of time to move data out of the cloud onto another server. The cost of hardware depends on the amount of data it needs to hold, and the performance a company wants can increase the expense significantly. Another problem is needing employees and management to maintain the server, which can drive up costs for companies that take this route.

David goes into detail about how his company was able to save a lot of money doing this. They already had the employees and management who maintained some of their other servers, so that problem was already solved. David also explains how his company was able to pay for the hardware needed to store 18 petabytes, which is crazy. I want you to think about just how much data 18 petabytes really is.

When I was researching this topic, I found myself hoping that his idea works, and that he explains in detail over the next few years how it went. If companies can figure out how to manage their own servers, it creates options that make personal servers cheaper, for example through access to more parts and to talent that knows how to handle large data and applications. If you want a detailed understanding of how his company is managing this issue, read his article below.

Work Cited

David Heinemeier Hansson, “Our cloud-exit savings will now top ten million over five years,” world.hey.com, https://world.hey.com/dhh/our-cloud-exit-savings-will-now-top-ten-million-over-five-years-c7d9b5bd. Published October 17, 2024. Accessed December 1, 2024.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Week-13 Post

This week’s post will cover REST APIs: Representational State Transfer Application Programming Interfaces. One of the key principles of RESTful APIs is the separation between the frontend UI the user interacts with and the backend server. Postman’s blog highlights this as, “The client’s domain concerns UI and request-gathering, while the server’s domain concerns focus on data access, workload management, and security”. The primary purpose of REST APIs is to allow different software systems to interact and exchange data over the web. REST mainly focuses on stateless communication, where each request from a client contains all the information needed for the server to process it.

REST APIs use HTTP methods and standard URL structures to enable communication between clients and servers. HTTP methods play an essential role in REST APIs. These methods correspond to CRUD (Create, Read, Update, Delete) operations in software. The POST method is used to create, while GET retrieves data from the server. PUT and PATCH are used to update existing data, with PUT replacing the entire resource and PATCH modifying specific parts. DELETE removes data. In addition, REST APIs use status codes to indicate the outcome of an operation. For example, a 200 status code indicates a successful operation, 201 signifies resource creation, 404 means a resource was not found, and 500 represents a server error. Including appropriate status codes in API responses helps clients understand the results of their requests and handle errors effectively.
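As a small sketch of how a client might act on those status codes, here is a Python example using the requests library; the API URL and the /orders resource are invented for illustration:

```python
# Hypothetical client-side handling of common REST status codes.
# The API URL and the /orders resource are invented for illustration.
import requests

resp = requests.get("https://api.example.com/orders/123")
if resp.status_code == 200:
    print(resp.json())                 # success: the resource was returned
elif resp.status_code == 404:
    print("order not found")           # the resource does not exist
elif resp.status_code >= 500:
    print("server error, try again")   # unexpected failure on the server
```

Checking the status code before touching the response body is what lets clients handle errors effectively, as described above.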

The blog post I researched by Postman highlights how REST is widely used across various industries. For example, e-commerce platforms use REST APIs to manage product information and process orders. Social media applications utilize REST APIs to handle user profiles and posts. Cloud services often provide REST APIs to allow developers to interact with their resources programmatically. The blog also mentions another type of API called SOAP, which stands for Simple Object Access Protocol. SOAP is considered a protocol, while REST is considered a set of guidelines. Unlike REST, which typically uses JSON, URLs, and standard HTTP methods, SOAP uses XML for sending data. One of the main reasons why SOAP might be preferred over the more popular REST is that SOAP supports WS-Security, which provides a framework for securing messages, including encryption, digital signatures, and authentication. This makes SOAP more suitable for applications handling sensitive data; corporations like banks and hospitals dealing with sensitive user information could utilize it to prevent information breaches.

These APIs provide a consistent way for systems to interact and exchange data while adhering to a set of well-defined principles. By understanding HTTP methods, status codes, and data formats, developers can create APIs that users can understand and use.

Blog: https://blog.postman.com/rest-api-examples/

From the blog CS@Worcester – Computer Science Through a Senior by Winston Luu and used with permission of the author. All other rights reserved by the author.

AMD’s rumor trying to enter Smartphone SoC’s market

CS-343, CS@Worcester

In recent news, AMD is rumored to be entering the smartphone processor market. Many people see the Samsung and AMD partnership as a sign of this. I have read articles saying that AMD is attempting to incorporate RDNA 2 into smartphone SoCs for Samsung phones.

First, let me explain what RDNA 2 is. It is the GPU architecture behind AMD’s RX 6000 series of GPUs. RDNA 2 was designed mainly for gaming: it handles the extreme demands of rendering video game frames while trying to be as efficient with power as possible.

Let me give some more details on why these rumors are plausible. In recent years, AMD has incorporated RDNA 2 into handheld gaming devices and consoles like the PS5. To make the chip work in small devices, AMD had to make it smaller and ensure it did not demand as much power. Traditionally, chip makers built their GPU chips only for desktops and laptops; they made desktop GPUs work in power-limited laptops by lowering the GPU’s power draw and accepting a percentage drop in performance.

AMD figured out how to make RDNA 2 chips work within the restrictions of laptops, and despite a few difficulties, managed to make them suitable for even smaller devices. But phones pose a harder problem. Unlike handhelds, a phone has only a very limited amount of space for components. Consumers also want a battery that lasts more than 24 hours, a phone that is not bulky to use, and high-quality devices: they expect better performance, good photos, and video recorded at even higher quality than last year’s models.

If AMD can solve these constraints of making smartphone SoCs, I hope it creates more change in that market. Imagine not just Apple, Samsung, and Qualcomm but even more companies competing in that highly competitive market. If we want the best products, we need a lot of competition: it constantly improves products and protects consumers from companies selling inferior ones.

Work Cited:

Thitu, Naftary. “AMD Rumored to Make Entry in the Smartphone Market.” Techweez, 22 Nov. 2024, techweez.com/2024/11/22/amd-rumored-to-make-entry-in-the-smartphone-market/.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Agile vs Waterfall

Week 12 – 11/30/2024

Agile Versus Waterfall: Picking the Right Tool for the Job

As a senior in college, I’m starting to think more seriously about what it will actually be like to work on projects in the real world. I recently read a blog post that compared and contrasted Agile and Waterfall project management methodologies, and it really helped me understand the importance of choosing the right approach for different types of projects. This aligns perfectly with what we’ve been discussing in my class about the need for both strategic planning and adaptability.

The blog post “Agile vs. Waterfall: Understanding the Differences” by Mike Sweeney explained that Waterfall is a very linear, sequential approach, where each phase of the project must be completed before moving on to the next. It’s kind of like building a house – you need to lay the foundation before you can put up the walls. This makes Waterfall a good choice for projects where the requirements are well-defined and unlikely to change, like in construction or manufacturing. In these industries, making changes mid-project can be super costly and impractical, so having a clear plan from the outset is essential.

Agile, on the other hand, is all about flexibility and iteration. The project is broken down into short cycles called sprints, and the team continuously reevaluates priorities and adjusts its approach based on feedback and new information. This makes Agile a great fit for projects where the requirements are likely to evolve over time, such as software development. In software development, client needs and market trends can change rapidly, so being able to adapt is crucial.

One of the biggest takeaways for me was the realization that choosing the right methodology is crucial for project success. I used to think that being flexible was always the best approach, but now I understand that structure and predictability can be equally important in certain situations. The key is to carefully assess the project requirements and choose the methodology that best aligns with those needs.

As I prepare to enter the professional workspace, I feel much more confident in my ability to approach projects strategically. Thanks to this blog post, I now have a better understanding of when to use Agile versus Waterfall. For instance, if I’m working on a software project that involves a lot of client interaction, I’d probably lean towards Agile. But if I’m managing a marketing campaign with well-defined objectives, Waterfall might be a more appropriate choice.

The real-world examples provided in the blog post were super helpful in illustrating how these methodologies are applied in different industries. This practical insight will definitely be valuable as I transition from the academic world to the professional world.

Blog link: https://clearcode.cc/blog/agile-vs-waterfall-method/

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.