Category Archives: CS-343

The REST API

 

The article “What Is a REST API?
Examples, Uses, and Challenges”, published on the Postman blog, provides an
accessible overview of REST APIs, which serve as a backbone of modern web
communication. It explains that a REST (Representational State Transfer) API is
a standardized way for systems to exchange data over the web using common HTTP
methods such as GET, POST, PUT, and DELETE. It briefly covers the history of
REST and compares it to the Simple Object Access Protocol (SOAP), which
required strict message formatting and heavier data exchange, while REST
emerged as a lighter, more flexible architectural style built on standard
HTTP methods. The piece also describes best practices for REST APIs, such as
using correct HTTP status codes, providing helpful error messages, and
enforcing security through HTTPS and tokens. In the end, the article gives
real-world examples from well-known platforms like Twitter, Instagram, Amazon
S3, and Plaid, showing how REST APIs are used at scale in everyday applications.

As a software developer aiming to become a
full-stack developer, API calls were one of my weakest points: I struggled to
use them effectively even though I had tried to in many areas. I wanted to make
sure I had a strong, up-to-date conceptual foundation of what REST APIs are and
how they are designed and used in practice. Since Postman is one of the leading
platforms in the API ecosystem, its educational resources provide an excellent
reference point for gaining both theoretical and practical understanding of how
REST APIs work in real-world development.

Even though the article was long, with embedded YouTube
videos, and took time to work through, it helped me better understand the
history of REST, how it is used, and its real-world applications compared to
what I knew before. I learned that REST is not a technology or tool but an
architectural style that enforces clear separation between client and server,
promotes scalability, and ensures that each request carries all necessary
information. I plan to use this knowledge to work with APIs more carefully as I
create my own REST API endpoints that provide access to datasets and user
submissions, while documenting things well, unlike before. Ultimately, my goal
is to become comfortable enough with APIs that I can architect entire applications.
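The request/response style described above can be sketched as a tiny in-memory resource handler that maps HTTP-style methods to actions and status codes. The `notes` resource and every name here are my own illustration, not from the article:

```python
# Minimal sketch of REST-style semantics over an in-memory store.
# The "notes" resource and handler names are hypothetical, for illustration only.

class NotesResource:
    def __init__(self):
        self._store = {}
        self._next_id = 1

    def post(self, body):                      # create -> 201 Created
        note_id = self._next_id
        self._next_id += 1
        self._store[note_id] = body
        return 201, {"id": note_id, **body}

    def get(self, note_id):                    # read -> 200 OK or 404 Not Found
        if note_id not in self._store:
            return 404, {"error": "not found"}
        return 200, self._store[note_id]

    def put(self, note_id, body):              # replace -> 200 OK
        self._store[note_id] = body
        return 200, body

    def delete(self, note_id):                 # delete -> 204 No Content
        self._store.pop(note_id, None)
        return 204, None

api = NotesResource()
status, created = api.post({"text": "learn REST"})
print(status, created)        # 201 {'id': 1, 'text': 'learn REST'}
print(api.delete(1))          # (204, None)
print(api.get(1))             # (404, {'error': 'not found'})
```

Each call carries everything the handler needs (statelessness), and the status codes communicate the outcome, which is the best practice the article highlights.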

article link: https://blog.postman.com/rest-api-examples/ 

From the blog Sung Jin's CS Devlopemnt Blog by Unknown and used with permission of the author. All other rights reserved by the author.

Semantic Versioning

For this blog, I read two articles that gave me more insight into why semantic versioning is important for developers. The article, “Semantic versioning: Why You Should Be Using it,” by Kitty Giraudel, from SitePoint, discusses how semantic versioning can bring clarity and predictability to software projects. The other article, “Using semantic versioning tags,” by Marc Campbell from Medium, explained how these principles apply to container environments. These articles emphasize that versioning isn’t just a label for updates, it’s a method of stability, communication, and trust between developers and their systems.

Semantic versioning, or SemVer, follows a three-part system: major.minor.patch. A major change is something that might break existing code; compatibility is not guaranteed. A minor change may add new features but remains backward compatible. A patch covers bug fixes and small tweaks and also remains backward compatible. This allows teams to understand the scale of an update easily.
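The major.minor.patch rule can be sketched as a small parser plus a compatibility check. This is my own illustration, not from either article:

```python
# Sketch: parse "major.minor.patch" and decide if an upgrade is backward compatible.

def parse(version):
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def is_backward_compatible(current, candidate):
    # Same major version => minor/patch bumps should not break callers.
    return parse(candidate)[0] == parse(current)[0] and parse(candidate) >= parse(current)

print(is_backward_compatible("1.4.2", "1.5.0"))  # True: minor bump, safe to take
print(is_backward_compatible("1.4.2", "2.0.0"))  # False: major bump may break
```

Tuple comparison makes the ordering work for free, since `(1, 5, 0) > (1, 4, 2)` compares component by component.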

This builds predictability and confidence. Without semantic versioning, people wouldn’t even update out of fear that any update may break their system. This allows devs to plan for upgrades and maintain a stable codebase.

This concept parallels Docker images directly. Container images have multiple tags (e.g., 1, 1.0, 1.0.1, or latest). The latest tag always points to the most recent version, while more specific tags like 1.1.0 or 1.0.0 represent a snapshot in time of that release. This gives devs added flexibility: using the broader 1.0 tag lets you pick up the latest patch updates without the risk of breaking anything.

Semantic versioning is a method of communication between developers. It’s a simple but important concept like UML diagrams. This is about maintaining a level of clarity within complex systems. The more organized the system versioning is, the better off we are down the line.

When everyone follows the same structure and the same system, developers can predict the impact of their updates without having to slog through documentation or code. It allows for more transparency, which helps the team work more efficiently on larger projects. This scales whether you are using an API or Docker. Semantic versioning may seem minuscule, but getting a better understanding of it will definitely help me in the long run. It reinforces good habits that reduce confusion, help collaboration, and help projects evolve with fewer issues over time.

https://medium.com/@mccode/using-semantic-versioning-for-docker-image-tags-dfde8be06699

https://www.sitepoint.com/semantic-versioning-why-you-should-using/

From the blog CS@Worcester – Aaron Nanos Software Blog by Aaron Nano and used with permission of the author. All other rights reserved by the author.

Testing Smarter, Not Harder: What I Learned About Software Testing

by: Queenstar Kyere Gyamfi

For my second self-directed professional development blog, I read an article from freeCodeCamp titled What is Software Testing? A Beginner’s Guide. The post explains what software testing really is, why it’s essential in the development process, and breaks down the different types of testing that developers use to make sure software works as intended.

The article starts with a simple but powerful definition: testing is the process of making sure your software works the way it should. It then describes several types of testing, like unit, integration, system, and acceptance testing, and explains how each one focuses on a different level of a program. It also introduces core testing principles such as “testing shows the presence of defects, not their absence” and “exhaustive testing is impossible.” Those ideas really stood out to me because they show that testing isn’t about proving perfection; it’s about discovering what still needs to be improved.

I chose this article because, as a computer science student and IT/helpdesk worker, I deal with troubleshooting and debugging almost daily. I’ve always seen testing as something that happens after coding, but this article completely changed that mindset. It made me realize that testing is an ongoing part of development, not a one-time task before deployment. It’s a process that ensures software is not only functional but also reliable for real users.

What I found most interesting was how the author connected testing to collaboration and communication. Writing good test cases is like writing good documentation: it helps other developers understand what the software should do. The idea of “testing early and often” also makes a lot of sense. By catching issues early in the process, developers can save time, reduce costs, and prevent bigger headaches later on.

Reading this made me reflect on my own coding habits. I’ve had moments in class where my code worked “most of the time,” but I didn’t always test for edge cases or unexpected inputs. Moving forward, I plan to write more tests for my own projects, even simple ones. Whether it’s a class assignment, a group project, or a personal program, I now see testing as a chance to build confidence in my work and improve how I think about quality.
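Testing for edge cases can start as small as a few lines with Python's built-in unittest module. The `divide` function here is a made-up example of mine, not from the article:

```python
# Sketch: a tiny unit test covering both the normal path and an edge case.
import unittest

def divide(a, b):
    if b == 0:
        raise ValueError("cannot divide by zero")
    return a / b

class TestDivide(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(divide(10, 2), 5)

    def test_edge_case_zero_divisor(self):
        # The edge case I used to skip: what happens on bad input?
        with self.assertRaises(ValueError):
            divide(1, 0)

# Run with: python -m unittest <filename>
```

Writing the edge-case test first forces you to decide how the function should fail, which is exactly the "testing early" habit the article encourages.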

Overall, this article helped me understand that software testing isn’t just about finding bugs it’s about building better software. It’s a mindset that values curiosity, patience, and teamwork. By applying these lessons, I’ll be better prepared not only to write code that works but to deliver software that lasts.

***The link to the article is in the first paragraph***

From the blog CS@Worcester – Circuit Star | Tech & Business Insights by Queenstar Kyere Gyamfi and used with permission of the author. All other rights reserved by the author.

Understanding CPU Cache: L1, L2, and L3 Differences

In my spare time over the past couple of weeks, I was looking into why CPU manufacturers advertise the amount of cache in their CPUs. It made me start to wonder: in video games, people want as much L3 cache as possible, but for workstations they might want a mixture of L1 and L2 cache instead of L3. After reading a few articles, I learned that CPU cache is made of SRAM (Static Random Access Memory), high-speed volatile memory located very close to the CPU. The purpose of cache is to let the CPU access data very quickly to complete operations.

Then it made me start to think about the differences between the three different caches. Let me start off with L1 cache.

L1 Cache 

  • Is built within the CPU core 
  • L1 is the closest to the core and the smallest of the three 
  • L1 size is around 16KB – 128KB depending on the CPU model 
  • L1 is directly mapped to main memory blocks to improve data access speed 
  • The main purpose of L1 is to store the data and instructions that the CPU uses the most
  • L1 accesses data within 1 – 3 clock cycles

L2 Cache 

  • Is either core-specific or shared between cores 
  • The size ranges between 256KB – 2MB 
  • L2 accesses data within 3 – 10 clock cycles
  • The purpose of L2 is very similar to L1; an LRU (least recently used) policy decides whether data lives in L1 or L2 
  • L2 cache can be direct-mapped, set-associative, or fully associative, depending on the CPU 
  • It also helps L1 store additional data if needed 

L3 Cache 

  • L3 cache is shared among multiple cores 
  • It is the furthest from the CPU cores
  • Has the most capacity of the group, ranging between 2MB – 64MB or more  
  • It has the same purpose as L2 cache, but the LRU policy manages which data is sent to L3 or L2 
  • Slowest of the three: it takes 10 – 20 or more clock cycles to access data
  • When L3 cache data is modified, other cores can see the new data, which helps maintain consistency between shared cores 
  • L3 cache associativity is typically 4-way or higher 
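The LRU policy that decides which data stays in a cache level can be sketched in Python. This is a simplified software model of the eviction idea only; real hardware uses set-associative structures, not a plain ordered map:

```python
# Sketch: a fixed-capacity cache that evicts the least recently used entry
# when full, loosely modeling the LRU replacement policy described above.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()   # insertion order tracks recency

    def access(self, address, data=None):
        if address in self.lines:                # cache hit
            self.lines.move_to_end(address)      # mark as most recently used
            return self.lines[address]
        if data is not None:                     # miss: fill the line
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)   # evict least recently used
            self.lines[address] = data
        return data

cache = LRUCache(capacity=2)
cache.access(0x10, "a")
cache.access(0x20, "b")
cache.access(0x10)            # touch 0x10 so 0x20 becomes least recently used
cache.access(0x30, "c")       # full: evicts 0x20
print(0x20 in cache.lines)    # False
print(0x10 in cache.lines)    # True
```

The key behavior is the `move_to_end` on a hit: recently used lines survive, and the line that hasn't been touched the longest is the one evicted.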

Even though CPU manufacturers keep trying to push the limits of cache, its value still depends on each user’s needs. 

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Building Secure and Scalable APIs

For my second blog, I read “Best Practices for Building a Secure and Scalable API” from MuleSoft’s API University. The article goes over how developers can build APIs that don’t crash when traffic grows and that keep user data safe at the same time. It focuses on two things every developer worries about: scalability and security, and how good design decisions affect both.

It explains that scalability is all about how well your API handles growth. Vertical scaling means giving one server more power, while horizontal scaling spreads the load across multiple servers. The author also talks about caching, async processing, and rate limiting to help APIs run smoothly when there’s a lot of traffic. That part made me think about our Microservices Activities in class, where we created REST endpoints and thought about how requests would move through the system. The same design choices we made like organizing resources clearly and using the right HTTP methods are what help APIs scale better in the real world.
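Of the techniques the article lists, rate limiting is easy to sketch as a token bucket. This is my own illustration of one common approach; MuleSoft's article does not prescribe a specific implementation:

```python
# Sketch: token-bucket rate limiter. Each request consumes one token;
# tokens refill at a fixed rate, capping sustained request throughput.
import time

class TokenBucket:
    def __init__(self, capacity, refill_per_second):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_second = refill_per_second
        self.last_refill = time.monotonic()

    def allow(self):
        # Refill tokens based on time elapsed since the last check.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_second)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True      # request allowed
        return False         # request rejected (e.g., HTTP 429)

bucket = TokenBucket(capacity=3, refill_per_second=1)
results = [bucket.allow() for _ in range(5)]
print(results)   # [True, True, True, False, False]
```

Bursts up to `capacity` are allowed, and after that requests are rejected until tokens refill, which protects the backend when traffic spikes.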

Then it shifts to security. It breaks down the basics: use HTTPS for encryption, OAuth 2.0 for authentication, and proper logging to track activity. What stuck with me most was how it said security should be built into the design from the start, not added on later. That lined up with what we’ve been learning about maintainable design in CS-343. If you think about security and structure early, your system stays reliable long term.

I picked this article because it directly connects to what we’re doing in class. We’ve been designing REST APIs and talking about microservice architecture, and this blog felt like a real world version of that. It also ties into the design principles we’ve covered, keeping systems modular and loosely coupled so updates and security changes don’t break everything else.

After reading it, I realized scalability and security go hand in hand. You can’t have one without the other. A system that handles tons of traffic but isn’t secure is a problem, and the same goes for one that’s super secure but slow or unreliable. My main takeaway is that good API design is about balance, thinking ahead so what you build keeps working and stays safe as it grows.

Link: https://www.mulesoft.com/api-university/best-practices-building-secure-and-scalable-api

From the blog CS@Worcester – Harley Philippe's Tech Journal by Harley Philippe and used with permission of the author. All other rights reserved by the author.

API Types Explained: REST, SOAP, and RPC

After class I spent some time researching the different types of REST APIs, and then I found more information about the different types of API architecture out there currently. There are three main types of API architecture: REST, SOAP, and RPC, each used differently depending on the task, data, and specifications needed. 

First, let me explain the REST API architecture. REST uses front-end and back-end servers that communicate to complete tasks, which makes this architecture very flexible in how you scale it or meet other specifications. Another benefit of REST is that it does not store any data or state between requests. In addition, it is resource-oriented, using URIs (uniform resource identifiers) to identify specific resources within the server.

The data format can be JSON, XML, HTML, or plain text. Using JSON as the data format takes less storage and is much faster to parse when resolving requests. To communicate with the server, REST uses HTTP methods so that we can manipulate resources on the server. Lastly, errors are handled at the application level: how the application handles errors and exceptions is defined by the individual or company. 

Another architecture is the SOAP API. SOAP uses strict protocols based on XML and WSDL rather than plain HTTP methods. In addition, it is message-oriented rather than resource-oriented. To explain further, a program communicates with the server using messages to transfer or receive data through a message queue. The data format is XML only, which increases the size of the data being transferred over a connection. SOAP can be either stateless or stateful; stateful operation allows users to maintain session information on the server and still receive information from it.

To protect information, SOAP uses WS-Security to encrypt messages and maintain message integrity. To clarify, SOAP’s use of XML makes it a lot slower due to the larger data sizes being transferred and the more complex XML needed to make sure everything works as intended. The XML includes built-in error handling, defined by the user, to catch bad requests or other errors. One benefit of SOAP is that the architecture can scale, since you can add new features and define a stricter contract. 

Lastly, let’s talk about the RPC (remote procedure call) architecture. RPC has a client and server connection to send information back and forth, and messages are written in JSON, XML, or Protocol Buffers formats. The RPC communication layer uses TCP/IP to send data between the server and the user. In addition, it has error handling to manage network and server problems, and the user can add features to authenticate actions and encrypt messages. Lastly, RPC uses an IDL (interface definition language) to define the return types and parameters of remote calls.
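The RPC request/response flow can be sketched as a JSON-RPC-style dispatcher. This is purely illustrative; real RPC frameworks generate the client and server stubs from an IDL rather than hand-writing them:

```python
# Sketch: a minimal JSON-RPC-style dispatch. The client sends a JSON request
# naming a method and its parameters; the server returns a JSON response.
import json

def add(a, b):
    return a + b

METHODS = {"add": add}   # the "procedures" exposed for remote calls

def handle(request_json):
    request = json.loads(request_json)
    method = METHODS.get(request["method"])
    if method is None:
        return json.dumps({"id": request["id"], "error": "method not found"})
    result = method(*request["params"])
    return json.dumps({"id": request["id"], "result": result})

response = handle(json.dumps({"id": 1, "method": "add", "params": [2, 3]}))
print(response)   # {"id": 1, "result": 5}
```

To the caller, invoking `add` feels like a local function call, even though the request and response travel as serialized messages; that illusion is the core idea of RPC.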

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Refactoring “Bloaters”

I decided to read Pavan Kumar S’s blog, “Refactoring 101: Code Smells – Bloaters.” This post explains a specific code smell called “bloaters,” which starts when parts of your code grow too large and become difficult to maintain. The topic of this blog connects to our course work and topics on design principles, going over code smells and refactoring just like we’ve done in our lectures and homework.

Pavan Kumar describes bloaters as a sign that a program has become overcomplicated and cluttered. He picks out examples such as long methods, large classes, and long parameter lists. Each of these shows a situation where code tries to handle too many responsibilities at once, hurting readability and maintainability. Kumar suggests several refactoring techniques to combat these problems, including Extract Method, Extract Class, and Introduce Parameter Object. These techniques simplify your code by breaking large structures into smaller, specialized units. The blog explains that the goal of refactoring is not to change the external behavior or output of the program but to improve its internal structure so that it remains easier to understand, test, and extend in the future.
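The Extract Method technique Kumar recommends can be sketched in a few lines of Python. The order-receipt example and every name in it are made up by me, not taken from the blog:

```python
# Sketch of Extract Method: a formerly bloated function that computed a
# subtotal, applied tax, and formatted output in one place, split into
# small functions with one responsibility each.

TAX_RATE = 0.0625  # example rate, purely illustrative

def subtotal(items):
    # items: list of (price, quantity) pairs
    return sum(price * qty for price, qty in items)

def with_tax(amount):
    return round(amount * (1 + TAX_RATE), 2)

def format_receipt(items):
    # The top-level function now reads like a summary of the steps.
    total = with_tax(subtotal(items))
    return f"Total due: ${total:.2f}"

print(format_receipt([(10.00, 2), (5.00, 1)]))  # Total due: $26.56
```

The external behavior is unchanged, which is the whole point: each extracted function can now be tested and reused on its own.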

This blog appealed to me because it provides a clear explanation of a problem I often run into in my own programming projects. When writing assignments or personal code, I tend to write long methods that handle multiple tasks without modularity or specialized methods in mind. I’m aware of this bad habit; I mostly fall into it because I usually work with smaller projects that don’t demand too much. Although the code works initially, it quickly becomes messy and difficult to modify or expand. Reading this blog was helpful since I recognized my bad habit when writing code.

What stood out most to me was how important refactoring is to design principles. This was showcased in the duck homework, where we had to refactor bad code, which reinforced the idea and helped me better digest the concept of refactoring. A bloated method or class commonly goes against these principles by combining too many responsibilities in one place. If you refactor and divide functionality into smaller pieces, each part of the code is more manageable and much cleaner. This reinforced the idea that refactoring is not simply about cleaning up code; it’s a good design habit.

From this reading, I picked up on the idea that refactoring does not have to be a massive rewrite or a stressful task. Small improvements like breaking large classes into manageable ones can prevent the slow build up that often happens in growing projects. Hopefully, I can incorporate regularly reviewing my code for signs of bloat and make refactoring part of my skillset.

https://medium.com/testvagrant/refactoring-101-code-smells-bloaters-f80984859340

From the blog CS@Worcester – Coding with Tai by Tai Nguyen and used with permission of the author. All other rights reserved by the author.

My Experience Learning REST API

When I first started learning about REST APIs, I thought it was just about how apps talk to each other over the internet. But after reading the REST API Tutorial, I realized that REST is much more than just sending HTTP requests. It’s an entire way of thinking about how systems communicate efficiently and reliably. … Read more

From the blog CS@Worcester – BforBuild by Johnson K and used with permission of the author. All other rights reserved by the author.

Keep the Code Simple.

In software development, simplicity is not an afterthought in coding; it is the foundation of how software genuinely works. Every line of code we write, from short programs to large systems, teaches us one major lesson: keeping code simple leads to stronger, cleaner, and more reliable software. When code is simple, it becomes easier to understand how each part of a program interacts, how data flows through the system, and how changes can be made without breaking other components.

Throughout our computer science class, we learn that writing code is not just about making something run, it’s about designing how it runs. Concepts like encapsulation, abstraction, and modularity form the heart of this process. Encapsulation keeps data protected and managed within defined boundaries. Abstraction allows us to hide unnecessary details so that only the important parts are visible to the developer. Modularity divides complex programs into smaller, independent sections that can be developed, tested, and improved separately. These principles not only make programs easier to understand but also reflect how big software plans are built in real-world projects.
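The ideas above can be sketched in a few lines of Python. The counter class is a hypothetical example of mine to show the concepts, not something from a specific project:

```python
# Sketch: encapsulation and abstraction in one small class.
# The count is kept as an internal detail (encapsulation); callers use a
# simple interface without knowing how the value is stored (abstraction).

class Counter:
    def __init__(self):
        self._count = 0          # leading underscore: internal, not for callers

    def increment(self):
        self._count += 1

    def value(self):
        return self._count

c = Counter()
c.increment()
c.increment()
print(c.value())   # 2
```

Because callers only touch `increment` and `value`, the storage could later change (say, to a database-backed count) without breaking any other module, which is the modularity payoff.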

Simplicity also plays a pivotal role in collaboration. When multiple developers work on the same clear and consistent code, everyone can read, modify, and contribute without confusion. In our group projects, clear code helps everyone understand and contribute easily. Even when we continue working from home, the project feels simple and organized, not stressful. This clarity saves time, reduces errors, and keeps teamwork smooth.

Another reason simplicity matters is maintenance. As we study different software development models, we see that maintaining complex code can become difficult and frustrating. Clean, readable code allows group members to trace logic quickly and identify issues before they cause system failures. This is why experienced programmers often say, “Code is read more often than it is written.” The goal is not to write the most advanced or impressive code, but the most understandable and dependable one.

Keeping code simple also encourages better thinking. It forces us to focus on problem-solving rather than decoration. By keeping things simple, we learn to study the requirements carefully, organize the program’s structure wisely, and create efficient code that works effectively without unnecessary lines. Simplicity shows real understanding of how software functions at its core.

As I continue learning software design, I realize that simplicity is not the opposite of intelligence; it is the best outcome of experience and clarity. Good coders don’t make code complex; they make it clear. True creativity in programming lies not in how much we write, but in how simple we can make it, and that always begins with keeping the code simple.

From the blog CS@Worcester – Pre-Learner —> A Blog Introduction by Aksh Patel and used with permission of the author. All other rights reserved by the author.

Week 2-CS 343 blog: SOAP API

During class, we went over the topic of REST APIs, and it was amazing to understand and analyze how they work. However, there are many types of API, including REST APIs, GraphQL APIs, RPC APIs, and SOAP APIs. Today, I would like to go through the one that catches my attention the most: SOAP APIs. I read an article explaining how SOAP APIs work, when to use them, and how they compare with modern REST APIs, and I would love to share it with you. The article is linked below; feel free to check it out for further information (all the information here is according to the article). 

https://blog.postman.com/soap-api-definition/

WHAT is SOAP? 

In the late 1990s, SOAP, a messaging protocol, was developed to enable different systems running on multiple operating systems, languages, and networks to exchange structured information reliably. 

Unlike REST, SOAP focuses on messages rather than resources. 

SOAP messages are formatted in XML and usually contain: 

  • Envelope – defines the start and end of the message 
  • Header – contains optional information like authentication or transaction details 
  • Body – holds the actual request or response data 
  • Fault – handles errors 
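The envelope/header/body structure can be sketched by building a SOAP message with Python's standard XML library. This is a generic illustration of mine; the `GetBalance` operation is hypothetical and not from Postman's article:

```python
# Sketch: constructing a minimal SOAP envelope with xml.etree.ElementTree.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")          # optional metadata
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
request = ET.SubElement(body, "GetBalance")              # hypothetical operation
ET.SubElement(request, "accountId").text = "12345"

message = ET.tostring(envelope, encoding="unicode")
print(message)
```

The printed XML shows the nesting the bullets describe: the Envelope wraps an optional Header and a Body that holds the actual request payload.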

How SOAP works?  

A SOAP client sends an XML request to a server, which responds with another XML message. These communications often occur over HTTP (but sometimes also SMTP, TCP, or UDP). 

  • SOAP messages are defined by the Web Services Description Language (WSDL) to describe the operations, parameters, and return types

Advantages of SOAP

  • Platform independent – works across languages and OS. 
  • Protocol flexible – not limited to HTTPS 
  • Strong security – supports encryption and authentication 
  • Error handling – provides clear fault messages 
  • Ideal for enterprise settings where strict contracts and dependability are essential 

Disadvantages of SOAP 

  • More complex and verbose than REST 
  • Harder to evolve or version 
  • Slower performance due to XML parsing 
  • Less appropriate for web applications because of its limited caching support 

Common Use Cases 

SOAP API is commonly used in banking, telecommunications, transportation, and enterprise systems – these cases require reliability, structured data, and strict security. 

Examples: 

  • In banking, SOAP APIs are used for bank transfers 
  • In transportation, flight booking systems 
  • Shipping and logistics services 

Conclusion 

SOAP API is a great protocol, robust and standardized, but compared to REST, it’s less flexible. REST is better for modern web applications and faster iteration. SOAP API is always needed and valuable for many applications that prioritize security, reliability, and formal structure.

From the blog CS@Worcester – Nguyen Technique by Nguyen Vuong and used with permission of the author. All other rights reserved by the author.