Types of Software Testing: Ensuring Quality in Development

Software testing is a major step within the Software Development Lifecycle, and it takes a significant amount of time to do well. The purpose of software testing is to confirm that the program has no problems and meets the requirements set by the customer. Software testing is divided into two activities: verification and validation. Verification is the step where programmers check that the software is built correctly and behaves according to its specification, while validation is the step where programmers test whether the finished software actually meets the customer’s requirements. For example, if a website wants to add a new feature to handle an increase in daily users, verification confirms the feature works as specified, and validation confirms it actually solves the customer’s problem.

Moving on, there are multiple types of testing that programmers use to verify and validate that the program works as intended: automation testing and manual testing. These then divide into smaller, more specific tests that focus on certain aspects of the program. As mentioned earlier, automation testing is when programmers write test scripts, with or without dedicated tools, that can be run repeatedly without manual effort. Manual testing, by contrast, has a tester step through different sections of the program by hand and check the results directly.

Within software testing there are different levels of testing: unit testing, integration testing, system testing, and acceptance testing.

Unit Testing

  • Checks each individual component or unit of the software in isolation (a small example is sketched below)
  • Makes sure each piece the programmers build works correctly on its own before it is combined with anything else
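
To make that concrete, here is a minimal sketch of a unit test using Python’s built-in unittest framework. The apply_discount function and its values are invented purely for this example; they are not from the original post.

```python
import unittest

# Hypothetical unit under test, invented for this illustration.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # The unit is exercised in isolation, with no other components involved.
        self.assertEqual(apply_discount(50.00, 10), 45.00)

    def test_invalid_percent_rejected(self):
        # Unit tests also pin down how the unit handles bad input.
        with self.assertRaises(ValueError):
            apply_discount(50.00, 150)

if __name__ == "__main__":
    unittest.main()
```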

Integration Testing 

  • Checks that two or more modules, each already tested on its own, work correctly once they are combined into the program or hardware
  • Makes sure the components and the interfaces between them work as intended

System Testing

  • Verifies the complete, integrated software as a whole
  • Checks the system elements working together end to end
  • Confirms that the program meets the requirements of the overall system

Acceptance Testing  

  • Validates that the program meets the customer’s expectations
  • Confirms that the software works correctly on the end user’s device and in the customer’s environment

These different types of testing are used to prevent issues down the line: they help programmers find potential bugs early, improve the quality of the program, improve the user experience, check scalability, and save time and money. The savings matter because solving problems in an already-shipped program ties up employees who could otherwise be building new products, and a business wants a program that keeps working and keeps improving over time.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

The REST API

 

The article “What Is a REST API? Examples, Uses, and Challenges”, published on the Postman blog, provides an accessible overview of REST APIs, which work as a backbone of modern web communication. It explains that a REST (Representational State Transfer) API is a standardized way for systems to exchange data over the web using common HTTP methods such as GET, POST, PUT, and DELETE. It briefly covers the history of REST, comparing it to the Simple Object Access Protocol (SOAP), which required strict message formatting and heavier data exchange, whereas REST emerged as a lighter and more flexible architectural style built on standard HTTP methods. The piece also outlines best practices for REST APIs, such as using correct HTTP status codes, providing helpful error messages, and enforcing security through HTTPS and tokens. In the end, the article shows real-world examples from well-known platforms like Twitter, Instagram, Amazon S3, and Plaid, showing how REST APIs are used at scale in everyday applications.
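
To make those HTTP methods concrete, here is a minimal sketch using Python’s requests library. The host and paths are placeholders invented for this post, not endpoints from the article.

```python
import requests  # third-party HTTP client: pip install requests

BASE = "https://api.example.com"  # placeholder host, not a real service

# GET: read a resource identified by its URL.
resp = requests.get(f"{BASE}/users/42")
print(resp.status_code)  # e.g. 200 on success, 404 if the user does not exist
print(resp.json())       # REST APIs commonly respond with JSON

# POST: create a new resource under a collection.
requests.post(f"{BASE}/users", json={"name": "Ada"})

# PUT: replace an existing resource; DELETE: remove it.
requests.put(f"{BASE}/users/42", json={"name": "Ada Lovelace"})
requests.delete(f"{BASE}/users/42")
```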

As a software developer trying to become a full-stack developer, API calls have been one of my weakest points; I have struggled to use them effectively even though I have tried to apply them in many areas. I wanted to make sure I have a strong, up-to-date conceptual foundation of what REST APIs are and how they are designed and used in practice. Since Postman is one of the leading platforms in the API ecosystem, its educational resources provide an excellent reference point for gaining both a theoretical and a practical understanding of how REST APIs work in real-world development.

Even though the article was long, included YouTube videos, and took effort to fully digest, it helped me understand the history of the REST API and its real-world usage much better than before. I learned that REST is not a technology or a tool but an architectural style that enforces a clear separation between client and server, promotes scalability, and ensures that each request carries all necessary information. I plan to use this knowledge as I create my own REST API endpoints that provide access to datasets and user submissions, this time documenting things well. Ultimately, my goal is to become comfortable enough with APIs that I can architect entire applications around them.

article link: https://blog.postman.com/rest-api-examples/ 

From the blog Sung Jin's CS Devlopemnt Blog by Unknown and used with permission of the author. All other rights reserved by the author.

Scrum On

Software Development Methods

Wow, look at us, huh? Almost a month later, another post appears! This month (I guess I’ve moved to monthly blogging… We’ll tighten it up, I promise) has been mostly about the different ways to go about developing something. Neat, huh?

So far, we’ve talked about waterfall, agile, and the latest, scrum!

SCRUM

Scrum builds upon previous development methodologies and is a natural extension of Agile. It is not, however, ‘better’ than Agile, but simply different. No single methodology is the best; some just perform better than others in certain circumstances. Scrum is the latest methodology my class has been learning about, and as such I decided to take some time to look into it further.

Scrum in 20 minutes is a video I came upon while looking for examples. It explains the process of Scrum, why it is used, and how it differs from other methodologies. The video also contains a few examples of how using Scrum has been beneficial.

One of the reasons I decided to watch this video was that it simply looked professional. It was well polished and felt like a finished product. A lot of the time, when I go looking for academic supplemental material, I’m presented with a sea of the same animated characters, slideshows, and whiteboard-style videos, so this one was a VERY nice change of pace. More of these, please! (I have some words for whoever invented PowToon.)

This video really helped me see how Scrum is applied in a real-life example. I also appreciated the refresher on the process as a whole, but having real-life examples of a full sprint, of planning, and of what each team member’s role contributes was especially helpful in better understanding Scrum.

Something I realized while watching the video was that Scrum does not have to be specific to software development. I play a lot of optimization games, and something like Scrum just feels extremely organized; it is worth applying, at least conceptually, to more of my life than just this one small facet. Organization and goal setting are important in almost everything we do, and Scrum is just one way to achieve them, but it has been refined over years of trial and error.

I am excited to apply scrum to future projects, and look forward to the increased organization that a solid planning methodology will bring to the table.

This concludes my mandatory blog post of Quarter 2 for the semester.

— Will Crosby

From the blog CS@Worcester – ELITE Computer Science by William Crosby and used with permission of the author. All other rights reserved by the author.

Understanding RESTful API design principles – Upsun Blog

This quarter I decided to read “Understanding RESTful API design principles” from the Upsun blog, which covers many aspects of RESTful APIs. I picked this because it directly relates to what we’ve been learning recently in class, and to a system I will have to work with next semester for my Capstone. I figured it would be a good idea to get familiar with REST APIs early.

The article starts off explaining what REST (REpresentational State Transfer) actually is: a “set of architectural principles that define how to design and implement these interfaces”, and it discusses the HTTP methods involved. Since REST was the only API design style I knew of, I had no frame of reference for its advantages compared to other options, but the article provides a chart comparing RESTful APIs to GraphQL (good for more efficient, specific data fetching) and gRPC (good for microservices and streaming).

The RESTful architecture typically contains a Client, Server, and Database, but it can include additional elements such as a load balancer or a cache. The Client, Server, and Database are fairly straightforward, being the consumer, the server hosting and processing the requests, and the database storing the data. The load balancer is also intuitive from its name, but for a smaller-scale system like the Food Pantry, which is unlikely to experience a large server load at any time, it’s probably unnecessary.

The article goes into six design principles of REST APIs. The first one, Client-server architecture, states that the client and server should operate independently to improve long-term maintainability. The second is Stateless communication, stating that each request from the client should contain all the information needed to process it. Nothing is stored on the server between requests, so each request is self-contained and can be handled independently, simplifying the process. The third is Cacheable Responses: if an API response is cacheable, it should include caching headers such as “Cache-Control” or “Expires”, so that the data can actually be cached and the server doesn’t need to be queried every time. The fourth principle is Layered System. In many ways this seems similar to the principle of abstraction: the API should operate across multiple layers without the client being aware of them, reducing complexity and sparing the client from interacting directly with multiple backend servers. The fifth is Uniform Interface, stating that the way resources are interacted with (i.e., HTTP methods, URL patterns, etc.) should be standardized and consistent. The last one is Code on Demand, which the article marks as optional: servers can send executable code such as JavaScript to extend client functionality.
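
As a rough illustration of the statelessness and cacheability principles, here is a minimal sketch of an endpoint in Python with Flask. The /items resource and its data are invented for this post, not taken from the article.

```python
from flask import Flask, jsonify  # lightweight web framework, used only for illustration

app = Flask(__name__)

# Hypothetical resource data, invented for this sketch.
ITEMS = {"1": {"id": "1", "name": "Canned beans"}}

@app.get("/items/<item_id>")
def get_item(item_id):
    # Stateless: everything needed to serve the request is in the request itself;
    # nothing about the client is remembered between calls.
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404

    response = jsonify(item)
    # Cacheable: this header tells clients they may reuse the response
    # for up to five minutes without querying the server again.
    response.headers["Cache-Control"] = "public, max-age=300"
    return response

if __name__ == "__main__":
    app.run()
```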

Many of these principles were fairly intuitive, but it’s good to have them explicitly stated anyway. Some were also directly connected to our class discussions. Things like caching and code on demand are topics we didn’t cover, but I’ll keep them in mind as we do more with REST APIs.

From the blog CS@Worcester – Site Title by Justin Lam and used with permission of the author. All other rights reserved by the author.

Semantic Versioning

For this blog, I read two articles that gave me more insight into why semantic versioning is important for developers. The article “Semantic versioning: Why You Should Be Using it” by Kitty Giraudel, from SitePoint, discusses how semantic versioning can bring clarity and predictability to software projects. The other article, “Using semantic versioning tags” by Marc Campbell on Medium, explains how these principles apply to container environments. Both emphasize that versioning isn’t just a label for updates; it’s a way of providing stability, communication, and trust between developers and their systems.

Semantic versioning, or SemVer, follows a three-part scheme written as major.minor.patch. A major change is something that might break existing code; compatibility is not guaranteed. A minor change may add new features but remains backward compatible. A patch contains bug fixes and small tweaks and also remains backward compatible. This lets teams understand the scale of an update at a glance.
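
As a quick sketch (my own illustration, not taken from either article), the rule can be expressed in a few lines of Python:

```python
def parse_version(version):
    """Split a SemVer string like '2.4.1' into (major, minor, patch)."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def is_backward_compatible(current, candidate):
    """Treat an upgrade as backward compatible if the major version is unchanged."""
    return parse_version(candidate)[0] == parse_version(current)[0]

print(is_backward_compatible("1.4.2", "1.5.0"))  # True  (minor bump, safe to take)
print(is_backward_compatible("1.4.2", "2.0.0"))  # False (major bump, may break callers)
```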

This builds predictability and confidence. Without semantic versioning, people might avoid updating at all out of fear that any update could break their system. With it, developers can plan for upgrades and maintain a stable codebase.

This concept maps directly onto Docker images. Container images carry multiple tags (e.g., 1, 1.0, 1.0.1, or latest). The latest tag always points to the most recent version, while more specific tags like 1.1.0 or 1.0.0 represent a snapshot in time of that release. This gives developers added flexibility: pinning to a broader tag like 1.0 lets you pick up the latest patch releases without the risk of pulling in a breaking change.

Semantic versioning is a method of communication between developers. It’s a simple but important concept like UML diagrams. This is about maintaining a level of clarity within complex systems. The more organized the system versioning is, the better off we are down the line.

When everyone follows the same structure and the same system, developers can predict the impact of their updates without having to wade through documentation or code. It allows for more transparency, which helps the team work more efficiently on larger projects, and it scales whether you are versioning an API or a Docker image. Semantic versioning may seem minuscule, but getting a better understanding of it will definitely help me in the long run. It reinforces good habits that reduce confusion, help collaboration, and let projects evolve with fewer issues over time.

https://medium.com/@mccode/using-semantic-versioning-for-docker-image-tags-dfde8be06699

https://www.sitepoint.com/semantic-versioning-why-you-should-using/

From the blog CS@Worcester – Aaron Nanos Software Blog by Aaron Nano and used with permission of the author. All other rights reserved by the author.

Testing Smarter, Not Harder: What I Learned About Software Testing

by: Queenstar Kyere Gyamfi

For my second self-directed professional development blog, I read an article from freeCodeCamp titled What is Software Testing? A Beginner’s Guide. The post explains what software testing really is, why it’s essential in the development process, and breaks down the different types of testing that developers use to make sure software works as intended.

The article starts with a simple but powerful definition: testing is the process of making sure your software works the way it should. It then describes several types of testing, like unit, integration, system, and acceptance testing, and explains how each one focuses on a different level of a program. It also introduces core testing principles such as “testing shows the presence of defects, not their absence” and “exhaustive testing is impossible.” Those ideas really stood out to me because they show that testing isn’t about proving perfection; it’s about discovering what still needs to be improved.

I chose this article because, as a computer science student and IT/helpdesk worker, I deal with troubleshooting and debugging almost daily. I’ve always seen testing as something that happens after coding, but this article completely changed that mindset. It made me realize that testing is an ongoing part of development, not a one-time task before deployment. It’s a process that ensures software is not only functional but also reliable for real users.

What I found most interesting was how the author connected testing to collaboration and communication. Writing good test cases is like writing good documentation: it helps other developers understand what the software should do. The idea of “testing early and often” also makes a lot of sense. By catching issues early in the process, developers can save time, reduce costs, and prevent bigger headaches later on.

Reading this made me reflect on my own coding habits. I’ve had moments in class where my code worked “most of the time,” but I didn’t always test for edge cases or unexpected inputs. Moving forward, I plan to write more tests for my own projects, even simple ones. Whether it’s a class assignment, a group project, or a personal program, I now see testing as a chance to build confidence in my work and improve how I think about quality.
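
For instance, a couple of the edge-case checks I have in mind might look like the following in Python. The average helper is just a stand-in I made up for this post.

```python
import unittest

# Hypothetical helper: the average of a list of scores.
def average(scores):
    if not scores:
        raise ValueError("scores must not be empty")
    return sum(scores) / len(scores)

class AverageEdgeCaseTest(unittest.TestCase):
    def test_single_value(self):
        # The smallest "normal" input still has to work.
        self.assertEqual(average([80]), 80)

    def test_empty_input_is_rejected(self):
        # The edge case I used to skip: what happens when there is no data at all?
        with self.assertRaises(ValueError):
            average([])

if __name__ == "__main__":
    unittest.main()
```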

Overall, this article helped me understand that software testing isn’t just about finding bugs; it’s about building better software. It’s a mindset that values curiosity, patience, and teamwork. By applying these lessons, I’ll be better prepared not only to write code that works but to deliver software that lasts.

***The link to the article is in the first paragraph***

From the blog CS@Worcester – Circuit Star | Tech & Business Insights by Queenstar Kyere Gyamfi and used with permission of the author. All other rights reserved by the author.

Why Software Maintenance Matters for Business Success

Recently, AWS servers went down, and the outage took roughly one third of the internet with it. It made me think about the kind of maintenance companies like Amazon have to do to make sure clients’ websites and programs do not get shut down. Then I found an article about software maintenance that explains, from a software engineer’s perspective, what happens with a product that has already been built but still has to be maintained.

It starts by explaining that software maintenance spans bug fixes, adding new features, and adapting to new hardware or software environments. At the end of the day the software has to work, be secure and efficient, and meet the users’ demands. As mentioned earlier, there are multiple types of software maintenance: bug fixes, patches, adaptive maintenance, optimizations, and preventive maintenance.

As programs get used and the number of users increases, companies have to expect that users will run into bugs. These bugs can hurt the user experience, cost the company potential sales, and even lead to bad reviews. They can range from small coding errors to defects that seriously affect how the program behaves.

There is also always the risk of needing a patch to fix something very quickly. For example, many companies get hacked and have to undo the damage the attackers did to their program. It does not have to be an extreme case like getting hacked, though; a patch may simply be the fastest way to push out a major update the software needs.

Moving on to adaptive maintenance: it covers situations where customers want a new feature added to the software, or where the environment, hardware, business needs, or regulations that apply to the company change. For example, in the past the U.S. government passed a law to protect the privacy of online users and prevent websites from collecting data on users without consent.

Another type of maintenance is perfective maintenance, which I think of as optimization. It covers the consistency of a program’s performance and its reliability, and it also keeps the software flexible when new features or changes are added.

The last type is preventive maintenance. This includes work that makes the program more secure, keeps it frequently tested, maintains up-to-date documentation, and provides backups. The goal of this maintenance is to make sure potential problems can be caught, tested, and fixed as quickly as possible so that they do not impact the user experience.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Understanding CPU Cache: L1, L2, and L3 Differences

In my spare time over the past couple of weeks I was looking into why CPU manufacturers advertise the amount of cache in their CPUs. It made me start to wonder: for video games people want as much L3 cache as possible, but for workstations they might want a mixture of L1 and L2 cache instead of L3. After reading a few articles, I learned that CPU cache is made of SRAM (Static Random Access Memory), a high-speed, volatile memory located very close to the CPU. The purpose of cache is to let the CPU access data quickly enough to keep completing operations without waiting on main memory.

That made me start to think about the differences between the three cache levels. Let me start off with L1 cache.

L1 Cache 

  • Built into each CPU core
  • The closest to the core and the smallest of the three levels
  • Sized around 16 KB – 128 KB, depending on the CPU model
  • Mapped directly onto main memory blocks to speed up data lookups
  • Its main purpose is to store the data and instructions the CPU uses most often
  • Accesses data within about 1 – 3 clock cycles

L2 Cache 

  • Either specific to one core or shared between cores
  • Size ranges between 256 KB – 2 MB
  • Accesses data within about 3 – 10 clock cycles
  • Its purpose is very similar to L1; an LRU (least recently used) policy decides whether data lives in L1 or L2
  • Can be direct-mapped, set-associative, or fully associative, depending on the CPU
  • It also helps L1 by storing additional data when needed

L3 Cache 

  • Shared among multiple cores
  • The farthest of the three from the CPU cores
  • Has the largest capacity of the group, ranging between 2 MB – 64 MB or more
  • Serves the same purpose as L2, with the LRU policy managing which data is sent to L3 or kept in L2
  • The slowest level: it takes 10 – 20 or more clock cycles to access data
  • Data in L3 can be modified by one core and the new data is visible to the others, which helps maintain consistency between the cores that share it
  • L3 associativity is typically 4-way or higher
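
To get a feel for why proximity to the CPU matters, here is a small experiment sketched in Python with NumPy (my own illustration, not from any of the articles I read). Both functions add up the same numbers, but the first walks memory in the order it is laid out, so cache lines pulled into L1/L2/L3 are fully reused, while the second strides across the array and misses cache far more often; on most machines the second version is noticeably slower.

```python
import time
import numpy as np

# A 4096 x 4096 array of 8-byte floats (~128 MB), far larger than any cache level.
a = np.zeros((4096, 4096))

def sum_row_major(arr):
    # Sequential access: each cache line fetched from RAM is fully used.
    total = 0.0
    for row in arr:
        total += row.sum()
    return total

def sum_column_major(arr):
    # Strided access: consecutive reads are 4096 elements apart in memory.
    total = 0.0
    for col in arr.T:
        total += col.sum()
    return total

for fn in (sum_row_major, sum_column_major):
    start = time.perf_counter()
    fn(a)
    print(fn.__name__, round(time.perf_counter() - start, 3), "seconds")
```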

Even though CPU manufacturers keep pushing the limits of cache, how much it matters still depends on what the user needs from it.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Building Secure and Scalable APIs

For my second blog, I read “Best Practices for Building a Secure and Scalable API” from MuleSoft’s API University. The article goes over how developers can build APIs that don’t crash when traffic grows and that keep user data safe at the same time. It focuses on two things every developer worries about: scalability and security, and how good design decisions affect both.

It explains that scalability is all about how well your API handles growth. Vertical scaling means giving one server more power, while horizontal scaling spreads the load across multiple servers. The author also talks about caching, async processing, and rate limiting to help APIs run smoothly when there’s a lot of traffic. That part made me think about our Microservices Activities in class, where we created REST endpoints and thought about how requests would move through the system. The same design choices we made, like organizing resources clearly and using the right HTTP methods, are what help APIs scale better in the real world.
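
To picture what rate limiting looks like in code, here is a minimal fixed-window sketch in Python. The limits, names, and storage are all invented for this post; the article itself does not prescribe an implementation.

```python
import time
from collections import defaultdict

LIMIT = 100   # allowed requests per client per window (made-up number)
WINDOW = 60   # window length in seconds

_counts = defaultdict(lambda: [0.0, 0])  # client_id -> [window_start, request_count]

def allow_request(client_id):
    """Return True if this client may make another request right now."""
    window_start, count = _counts[client_id]
    now = time.time()
    if now - window_start >= WINDOW:
        # Start a fresh window for this client.
        _counts[client_id] = [now, 1]
        return True
    if count < LIMIT:
        _counts[client_id][1] = count + 1
        return True
    # Over the limit: the API would typically answer 429 Too Many Requests.
    return False
```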

Then it shifts to security. It breaks down the basics: use HTTPS for encryption, OAuth 2.0 for authentication, and proper logging to track activity. What stuck with me most was how it said security should be built into the design from the start, not added on later. That lined up with what we’ve been learning about maintainable design in CS-343. If you think about security and structure early, your system stays reliable long term.

I picked this article because it directly connects to what we’re doing in class. We’ve been designing REST APIs and talking about microservice architecture, and this blog felt like a real world version of that. It also ties into the design principles we’ve covered, keeping systems modular and loosely coupled so updates and security changes don’t break everything else.

After reading it, I realized scalability and security go hand in hand. You can’t have one without the other. A system that handles tons of traffic but isn’t secure is a problem, and the same goes for one that’s super secure but slow or unreliable. My main takeaway is that good API design is about balance, thinking ahead so what you build keeps working and stays safe as it grows.

Link: https://www.mulesoft.com/api-university/best-practices-building-secure-and-scalable-api

From the blog CS@Worcester – Harley Philippe's Tech Journal by Harley Philippe and used with permission of the author. All other rights reserved by the author.

REST API Design Practices

Link: https://stackoverflow.blog/2020/03/02/best-practices-for-rest-api-design/

In the blog post “Best practices for REST API design,” the authors go over important guidelines for designing any type of web service in the REST architectural style. They first go over the definition of a REST API: an interface that follows stateless communication, cacheability, and other constraints. They then present a set of recommended practices, such as using nouns rather than verbs in endpoint paths, accepting and responding with JSON, logically nesting hierarchical resources, handling errors with standard HTTP status codes, and many more. The post emphasizes that keeping names consistent and adhering to web standards ultimately makes APIs easier for clients to use and for maintainers to support, and makes it much easier for systems to scale properly.
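
As a small illustration of a few of those recommendations, here is a sketch in Python with Flask. The articles/comments resources are invented for this post and are not from the Stack Overflow article.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Invented sample data: articles and their comments, to show nested resources.
ARTICLES = {"7": {"id": "7", "title": "REST basics"}}
COMMENTS = {"7": [{"id": "1", "body": "Nice overview"}]}

# Nouns, not verbs, in the path: /articles/<id>/comments rather than /getComments.
@app.get("/articles/<article_id>/comments")
def list_comments(article_id):
    if article_id not in ARTICLES:
        # Errors use standard HTTP status codes and a JSON body.
        return jsonify({"error": "article not found"}), 404
    return jsonify(COMMENTS.get(article_id, [])), 200

if __name__ == "__main__":
    app.run()
```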

I selected this blog post because we are currently covering this topic, and we have a homework assignment on it, so I thought it would be helpful to gain a deeper understanding. Also, in past courses such as Unix Systems Programming we focused on architecture and backend systems, but I never gave them a deeper look. This blog post gives very clear guidelines for REST API design, which can be very valuable in my career if I end up working in software or have to build or interact with APIs in the future. I plan on applying these principles if I ultimately have to design or integrate REST services, whether by setting up a microservice in the cloud or connecting to one as a consumer. I will definitely look to follow the guidelines listed in the blog post, as I believe they provide a good baseline for my work. These API design patterns can also help me with projects, as I can ensure that my interface layers are clean and intuitive.

Overall, I thought this blog post was very helpful and informative, as it provided a lot of clarity on the guidelines for REST API design. Their advice about using nouns in endpoint paths was especially good, since it is better for the HTTP verbs to be doing the action. While it is a small naming decision, it can end up making a big difference for readability and maintainability. In the end, this blog post improved my understanding of REST API design practices and showed me that good design isn’t just about the efficiency of algorithms and resources; it is also about the service and the client experience.

From the blog CS@Worcester – Coding Canvas by Sean Wang and used with permission of the author. All other rights reserved by the author.