Author Archives: Ben Santos

Types of Software Testing: Ensuring Quality in Development

Software testing is a major step in the Software Development Lifecycle, and it takes a significant amount of time and effort. Its purpose is to confirm that the program has no problems and meets the requirements set by the customer. Software testing is divided into two activities: verification and validation. Verification is the step where programmers check that the software is built correctly according to its specifications, while validation is the step where programmers check that the software actually meets the customer's requirements. For example, if a website wants to add a new feature to handle an increase in daily users, verification confirms the feature works as designed, and validation confirms it actually handles that demand for the customer.

There are two broad methods programmers use to verify and validate that a program works as intended: automation testing and manual testing. These divide into smaller, more specific tests that focus on certain aspects of the program. Automation testing is when programmers write test scripts, with or without the help of testing tools, that can be run repeatedly. Manual testing is when a tester exercises different sections of the program by hand, without scripts.

Within software testing there are also different levels of testing: unit testing, integration testing, system testing, and acceptance testing.

Unit Testing

  • Checks each individual component or unit of the software in isolation 
  • Makes sure each piece works on its own before programmers build on top of it (a minimal sketch follows below) 
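
To make this concrete, here is a minimal unit-test sketch in Python using the standard unittest module. The cart_total function is a hypothetical example invented for illustration, not from any real project.

    # A minimal unit-test sketch using Python's built-in unittest module.
    # The cart_total function is a hypothetical example for illustration.
    import unittest

    def cart_total(prices, tax_rate=0.0625):
        """Sum the item prices and apply sales tax."""
        subtotal = sum(prices)
        return round(subtotal * (1 + tax_rate), 2)

    class TestCartTotal(unittest.TestCase):
        def test_empty_cart_is_free(self):
            self.assertEqual(cart_total([]), 0.0)

        def test_tax_is_applied(self):
            # 100.00 at 6.25% tax should come to 106.25
            self.assertEqual(cart_total([60.0, 40.0]), 106.25)

    if __name__ == "__main__":
        unittest.main()

Each test checks one unit (here, a single function) in isolation, which is exactly what separates this level from the ones below.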

Integration Testing 

  • Checks two or more modules together after each has passed its own unit tests 
  • Makes sure the components and the interfaces between them work as intended when combined 

System Testing

  • Verifies the complete, integrated software 
  • Also checks the elements of the system as a whole 
  • Confirms the program meets the requirements of the entire system 

Acceptance Testing  

  • Validates whether the program meets the customer's expectations 
  • Confirms the software works correctly on an end user's device 

These different types of testing are used to prevent issues down the line: they help programmers find potential bugs early, improve the quality of the program, improve the user experience, and check scalability. They also save time and money, because the employees who would otherwise be tied up fixing a broken program could instead be building a new product. A business wants a program that works and can be improved over time.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Why Software Maintenance Matters for Business Success

Recently an AWS outage took down roughly a third of the internet. It made me think about the maintenance that companies like Amazon have to perform to make sure clients' websites and programs do not get shut down. Then I found an article about software maintenance that explains, from a software engineer's perspective, what happens with a product that has already shipped but still has to be maintained.

The article explains that software maintenance spans bug fixes, adding new features, and adapting to new hardware or software environments. At the end of the day the software has to work, be secure, run efficiently, and meet the users' demands. It breaks maintenance into several types: bug fixes, patches, adaptive maintenance, optimizations, and preventive maintenance.

As programs get used and the number of users grows, companies have to expect that users will run into bugs. These bugs range from small coding errors to defects that seriously affect the user experience, and they can cost the company a potential sale and even earn it bad reviews.

There is also always the risk of needing a patch to fix something very quickly. For example, many companies get hacked and have to undo the damage attackers cause to their program. Getting hacked is an extreme case, though; a patch might just as well be a major update to the software that cannot wait.

Adaptive maintenance is about changes coming from outside the code itself: customers wanting a new feature, a change in the environment or hardware, shifting business needs, or new regulations that apply to the company. For example, the U.S. government has added laws to protect the privacy of online users and prevent websites from collecting data without consent, and software had to adapt to comply.

Another type is perfective maintenance, which I think of as optimization. It targets the consistency of a program's performance and its reliability, and it also makes the software more flexible when new features or changes are added later.

The last type is preventive maintenance. This covers work that makes the program more secure, keeps it frequently tested, keeps documentation up to date, and maintains backups. The goal is to catch potential problems early and fix them as quickly as possible, before they impact the user experience.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Understanding CPU Cache: L1, L2, and L3 Differences

In my spare time over the past couple of weeks I was looking into why CPU manufacturers advertise the amount of cache in their CPUs. It made me wonder: for video games people want as much L3 cache as possible, while for workstations they might prefer a mix of L1 and L2 cache over L3. After reading a few articles, I learned that CPU cache is made of SRAM (Static Random Access Memory), a type of high-speed, volatile memory located very close to the CPU. The purpose of cache is to let the CPU access data very quickly to complete an operation.

That made me think about the differences between the three cache levels. Let me start with L1 cache.

L1 Cache 

  • Is built into each CPU core 
  • Is the closest to the core and the smallest of the three 
  • Its size is around 16 KB – 128 KB depending on the CPU model 
  • Is often direct-mapped to main memory blocks to keep lookups fast 
  • Its main purpose is to store the data and instructions the CPU uses the most
  • Can be accessed within 1 – 3 clock cycles

L2 Cache 

  • Is either core-specific or shared between cores 
  • Its size ranges between 256 KB – 2 MB 
  • Can be accessed within 3 – 10 clock cycles
  • Serves much the same purpose as L1; an LRU (least recently used) policy decides whether data lives in L1 or L2 
  • Can be direct-mapped, set-associative, or fully associative, depending on the CPU 
  • Also helps L1 by storing additional data when needed 

L3 Cache 

  • Is shared among multiple cores 
  • Is the furthest from the CPU cores
  • Has the most capacity of the group, ranging from 2 MB to 64 MB or more  
  • Serves the same purpose as L2, with the LRU policy managing which data moves between L2 and L3 
  • Is the slowest of the three, taking 10 – 20 or more clock cycles to access
  • When data in L3 is modified, the other cores can see the new data, which helps maintain consistency between the shared cores 
  • Its associativity is typically 4-way or higher 
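
The practical effect of these levels shows up in how code walks through memory. Below is a rough timing sketch in Python with NumPy; the matrix size is an illustrative choice, and interpreter overhead blunts the effect compared to a language like C, but sequential sweeps generally beat strided ones because each fetched cache line gets fully reused.

    # A rough sketch of cache-friendly vs cache-unfriendly access patterns.
    # Row sweeps touch memory in layout order, reusing each cache line;
    # column sweeps jump 4096 * 8 bytes per element, so most accesses miss
    # the small L1/L2 caches. The sizes here are illustrative.
    import timeit
    import numpy as np

    matrix = np.random.rand(4096, 4096)  # ~128 MB, far larger than any L3

    def sum_by_rows():
        # Sequential, cache-friendly traversal.
        return sum(row.sum() for row in matrix)

    def sum_by_columns():
        # Each column is a strided view: cache-hostile traversal.
        return sum(col.sum() for col in matrix.T)

    print("rows:   ", timeit.timeit(sum_by_rows, number=3))
    print("columns:", timeit.timeit(sum_by_columns, number=3))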

Even as CPU manufacturers push the limits of cache, how much it matters still depends on what the user needs it for.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

API Types Explained: REST, SOAP, and RPC

After class I spent some time researching the different types of REST APIs, which led me to more information about API architectures in general. There are three common API architectures: REST, SOAP, and RPC, each used differently depending on the task, the data, and the specifications needed.

First let me explain the REST architecture. In REST, front-end clients and back-end servers communicate to complete tasks, which makes the architecture very flexible in how you scale it or meet other specifications. Another benefit of REST is that it is stateless: it does not store any data or status between requests. In addition, it is resource-oriented, using URIs (uniform resource identifiers) to identify specific resources within the server.

REST exchanges data as JSON, XML, HTML, or plain text. Using JSON keeps payloads small, so requests take less storage and resolve a lot faster. To communicate with the server, REST uses HTTP methods (GET, POST, PUT, DELETE) to manipulate resources within the server. Lastly, the architecture handles errors at the application level: each individual or company defines how their application deals with errors and exceptions.
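
As a minimal sketch of this stateless, resource-oriented style, here is how a client might use those HTTP methods with the Python requests library. The base URL and the resource fields are hypothetical placeholders, not a real API.

    # A minimal REST client sketch using the requests library.
    # The base URL and the resource fields are hypothetical placeholders.
    import requests

    BASE = "https://api.example.com"  # hypothetical server

    # GET reads the resource identified by its URI; no session state is kept.
    user = requests.get(f"{BASE}/users/42").json()

    # POST creates a new resource under the collection URI, sent as JSON.
    created = requests.post(f"{BASE}/users", json={"name": "Ada"})
    print(created.status_code)  # a REST server typically answers 201 Created

    # DELETE removes the resource; every request stands entirely on its own.
    requests.delete(f"{BASE}/users/42")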

Another architecture is SOAP. SOAP follows strict protocols built around XML and WSDL rather than bare HTTP methods. In addition, it is message-oriented instead of resource-oriented: a program communicates with the server by exchanging messages to transfer or receive data, sometimes through a message queue. The data format is XML only, which increases the size of the data transferred over a connection. SOAP can be stateless or stateful; stateful operation lets users maintain session information with the server while still receiving information from it.

To protect information, SOAP uses WS-Security to encrypt messages and preserve their integrity. To clarify the trade-off: using XML as the data format makes SOAP a lot slower, because more data is transferred and the XML has to be more complex to make sure everything works as intended. The XML also carries built-in error handling, defined by the user, to catch bad requests and other errors. One benefit of SOAP is that the architecture scales well, since new features can be added under a strict, well-defined contract.
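
For comparison with REST's bare HTTP calls, a SOAP request is a self-describing XML envelope. Here is a minimal sketch of the message shape; the GetUser operation and its field are hypothetical, standing in for whatever the service's WSDL contract defines.

    <?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
      <soap:Header>
        <!-- WS-Security tokens and session data travel here -->
      </soap:Header>
      <soap:Body>
        <!-- A hypothetical operation defined by the service's WSDL -->
        <GetUser>
          <UserId>42</UserId>
        </GetUser>
      </soap:Body>
    </soap:Envelope>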

Lastly, let's talk about the RPC (remote procedure call) architecture. RPC uses a client and server connection to send information back and forth, with data written in formats such as JSON, XML, or Protocol Buffers. The communication layer uses TCP/IP to carry the data between the server and the user. In addition, it has error handling to manage network and server problems, and the user can add features to authenticate actions and encrypt messages. Finally, RPC uses an IDL (interface definition language) to define the return types and parameters of the procedures that can be called remotely.
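
To make the RPC style concrete, here is a tiny sketch using Python's standard-library xmlrpc modules, in which the client calls a remote function as if it were local. The procedure name and port number are illustrative choices.

    # A tiny RPC sketch using Python's built-in XML-RPC support.
    # The procedure name and port number are illustrative choices.
    from xmlrpc.server import SimpleXMLRPCServer

    def add(x, y):
        """A procedure that remote clients can invoke as if it were local."""
        return x + y

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_function(add, "add")
    # server.serve_forever()  # uncomment to actually accept requests

    # A client elsewhere would call the procedure through a proxy:
    #   from xmlrpc.client import ServerProxy
    #   proxy = ServerProxy("http://localhost:8000")
    #   print(proxy.add(2, 3))  # prints 5, marshalled over XML underneath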

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Understanding Git: The Key to Safe Team Collaboration

In class we are learning about Git. Git is a version control system that allows multiple developers to make changes to a program while keeping control over the commit history. This matters because there will be times when a developer's mistake causes a whole list of issues; with Git, they can go back to a previous commit and fix it from there. It lets developers work on major projects without breaking the main program.

Another important aspect is that Git allows larger teams to work on the same program at once, which can save a company time and money. It also lets developers from all over the world help with open source projects by working on a fork of the repository, so they can make changes locally without impacting the upstream of the project.

From my perspective, Git separates the upstream repository, online forks, local clones, and branches so that developers can make changes safely and other people can double-check their work. That is why organizations have reviewers and maintainers who review changes before they reach the upstream, so it does not break. In addition, Git lets developers know when changes are made and how large each change is. When a company's upstream goes down, it can cost the company a lot of money and reputation with clients and the public.
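
A rough sketch of that fork-and-branch workflow, with placeholder repository URLs and branch names, might look like this:

    # Fork-and-branch workflow sketch; URLs and names are placeholders.
    git clone https://gitlab.com/yourname/project.git    # copy your fork locally
    cd project
    git remote add upstream https://gitlab.com/original/project.git  # track the original
    git checkout -b fix-login-bug      # isolate the work on its own branch
    # ...edit files, then record the change locally...
    git add .
    git commit -m "Fix login validation"
    git push origin fix-login-bug      # publish to your fork and request review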

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

What Are Code Smells

Currently, in class we are going over code smells. Code smells are signs in how code is written that warn us a change could cause a problem. The smells are: Rigidity, Fragility, Immobility, Viscosity, Needless Complexity, Needless Repetition, and Opacity. These are the traits we should try to avoid when making a program, so that we do not create problems for our future selves to solve.

Coding standards and quality will change over time; regardless, programmers need to be aware of the risks of having a program with these issues. Rigidity refers to a section of the program being depended on so heavily that trying to change it requires far more code changes than it should. Fragility is when a change we make causes other parts of the code to stop working as intended. Immobility is when a program cannot be transferred from one system to another; for example, certain engines or IDEs may manipulate values differently, so the same program behaves differently between the two.

Viscosity is when the improper, hacky way of making a change is easier than the clean way, so the codebase slowly fills with improper methods that drag down efficiency. Needless Complexity occurs when a program is far more complex than the demand it is used for. Needless Repetition is when code repeats throughout the program, creating multiple unneeded copies. Lastly, Opacity is about how clearly a feature is defined and how easily other programmers can see what it is for. A small sketch of the repetition smell follows below.
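
As a small illustration (my own hypothetical example, not from the course material), here is Needless Repetition in Python and the obvious fix. With the rule duplicated, any future change has to be made twice; with a shared helper, it is made once.

    # Smelly: the same discount rule pasted into two places.
    def checkout_price(price):
        if price > 100:
            return price * 0.9
        return price

    def invoice_price(price):
        if price > 100:
            return price * 0.9
        return price

    # Refactored: one shared helper, so a future rule change happens once.
    def apply_discount(price):
        """Apply the 10% bulk discount to totals over 100."""
        return price * 0.9 if price > 100 else price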

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

The Next Steps

Hello, this is Benjamin. I want to explain the steps I took to try to keep learning over the summer. I wrote down a list of four steps to do within each week. Step 1: read a couple of chapters a week about a subject, from a textbook or notes such as blogs. Step 2: watch tutorials and practice coding fundamentals on multiple coding websites. Step 3: solve getcracked.io and LeetCode problems. Step 4: make projects for my portfolio. Over the summer I could not do any coding projects because I was taking three classes, so I just did more coding problems on getcracked.io.

The reason I am learning this way is that it is better to understand why people choose one coding method over another option. I want to be helpful within a company and still be able to learn independently, so that I have as many opportunities available as possible. I would also watch podcasts featuring developers or professors on certain topics, since they can help explain certain concepts better.

I understand that a lot of changes are happening in this field, which opens more opportunity for so many people and industries. Regardless of what some people say about the current state of computer science, I will still try to learn new concepts and how to implement them. I want to use this year to enjoy a few concepts and coding projects that I am passionate about, so that I can enjoy this process.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Summer Reflections

Hello, this is Benjamin Santos Patrocinio. With this blog I want to reflect on what I did over this summer vacation. Primarily, I took three summer classes to finish my general electives in school. During those classes I looked at other blogs and podcasts to learn about new topics in programming, and for a couple of weeks I started to learn more about algorithm analysis and memory. I wanted to learn more about these topics because I want to improve as a programmer.

Even though programming knowledge is available from blogs, textbooks, videos, and courses, this is a field where programmers have to continuously learn. Being able to learn new topics, and actually use those skills, is really important. Furthermore, I want to use this time to make a portfolio that can help me stand out and apply all the skills I have learned. What I did first was find textbooks and documents I wanted to learn from, in order to build a plan for learning in my off time. To avoid burnout, I would alternate between reading textbooks and watching YouTube videos about certain topics. Also, a few times a week I would try to solve a few getcracked.io questions (just an alternative LeetCode platform), so that I can get better at interview questions and find out which topics I did not really know.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Learning Git Part 1

CS-348, CS@Worcester

During the semester, it took me a while to understand Git. I had to break down the concepts and start from there. I learn this way because my hobbies are games, including fighting games, MOBAs, and card games. In these games, you learn how actions have trade-offs, and you come to understand the actions you take or need to take.

While I was learning Git, I wanted to learn it differently: as a small experiment, I aimed to approach it the way I would learn a fighting game.

In my personal time, I would try to learn the purpose of Git and understand why it is necessary for programmers. In my interpretation, Git is used by programmers to share and change code. Git can also be used locally on your computer; this way, you can manipulate your version of the code without altering the main repository.

Then I would spend time trying to understand the core tools of Git. I went from learning the concept of Git to its primary commands, since understanding the building blocks is easier when you know what each one does. The handful of basics I focused on are sketched below.
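
A quick sketch of those basics; the file name, branch, and remote names are placeholders:

    git init                     # start tracking a project locally
    git status                   # see what has changed since the last commit
    git add notes.txt            # stage a change you want to record
    git commit -m "Add notes"    # record the staged change in the history
    git log                      # review the history of commits
    git pull origin main         # bring down commits others have shared
    git push origin main         # share your own commits with the remote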

I try to learn from my mistakes when I study coding or fighting games, analyzing why each one was a mistake. In my opinion, learning through mistakes matters especially in coding, because coding mistakes happen constantly; if you know why something went wrong, it will be easier to fix that mistake in the future. It is just like fighting games: people constantly make mistakes, and once they understand why they made them, they can focus on what to do differently and avoid making the same mistake again.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

The Problem with Old GPU Supply

CS-343, CS@Worcester

As we all know, graphics cards have been getting more expensive over the years because of tariffs and production costs, and GPU manufacturers raise prices to break even on those costs. We also have to consider that many graphics card chips originate from TSMC, a semiconductor manufacturer that supplies Nvidia's and AMD's GPU chips. These chips are made in Taiwan and finished in other factories around the world, so the tariffs on Chinese products exported to the U.S. end up applying to these products as well.

The next issue with these products is the increased prices: it just becomes not worth it for average consumers to buy them. Currently, Nvidia, AMD, and Intel want to sell their newer products, which are better than the current graphics cards on the market, offering a performance increase in the range of 10%-20%. With living expenses high, many consumers have not been able to buy the current GPUs on the market. During the holidays, the manufacturers aimed to cut prices significantly, with the goal of selling as many units as possible before launching the new products.

According to the rumors, AMD is going to try to release its new GPUs across different quarters of 2025, while Nvidia plans to stagger its releases, leaving a gap of time before each next lower-tier product. I believe they will not change this approach; it has worked by keeping stock in the market low, so consumers either have to wait for the product they want or buy the best one at launch.

AMD's new GPUs are apparently going to improve FSR by significant margins compared to RDNA 3. Let me explain further: FSR stands for FidelityFX Super Resolution, software that renders games at higher-quality frames while boosting the effective performance of the graphics card. AMD uses software like this to compete with Nvidia's DLSS (Deep Learning Super Sampling), which does the same thing for Nvidia graphics cards but performs far better, in some cases achieving around 30% better performance and image quality. The gap exists because Nvidia created a GPU with DLSS first and has been able to refine it more and more since the RTX 2000 series cards.

Currently, Nvidia GPUs are more expensive because DLSS is the main feature used to sell them. Nvidia does not want to sell raw performance increases; if it did, producing enough GPUs to meet market demand would cost too much. Instead, it increases card performance by about 10% to save on production costs while improving DLSS by 20%-30%, which keeps consumers from dwelling on the small base performance difference from the previous generation's GPUs.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.