Author Archives: Ben Santos

Understanding Linters: Enhancing Code Consistency

Recently in class we started to learn more about linting tools. We used a linter to check all of the documents in a project for redundant or misspelled words. After looking into some articles online, I realized that linters can be used for much more than that. For example, a company might want its developers to write code in a certain format so that other developers in the company can easily read it. Linting is not just about formatting, either: it can catch coding errors, bugs, security vulnerabilities, and stylistic inconsistencies. In addition, linters exist for many different programming languages, which lets organizations or teams set a standard for everyone coding on a project.

A linter is able to do all of this by dividing the code into units such as variables, types, and functions. It turns these units into tokens and compares them against the rules built into the linter. If a token or pattern does not match what the rules expect, the linter flags it, and what gets flagged depends on how the linter is configured for that project.
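
To make this concrete, here is a tiny toy linter I sketched in TypeScript (my own illustration, not a real tool): it splits each line into identifier tokens and flags anything that breaks two made-up rules, one about naming and one about line length.

```typescript
// Toy linter: tokenize a source string and flag tokens that break two rules:
// identifiers must be camelCase, and lines must stay under 80 characters.

type Finding = { line: number; message: string };

const IDENTIFIER = /[A-Za-z_][A-Za-z0-9_]*/g;
const CAMEL_CASE = /^[a-z][a-zA-Z0-9]*$/;
const KEYWORDS = new Set(["const", "let", "function", "return", "if", "else"]);

function lint(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    const line = i + 1;
    if (text.length > 80) {
      findings.push({ line, message: "line exceeds 80 characters" });
    }
    // Tokenize the line and compare each identifier token against the naming rule.
    for (const token of text.match(IDENTIFIER) ?? []) {
      if (!KEYWORDS.has(token) && !CAMEL_CASE.test(token)) {
        findings.push({ line, message: `identifier '${token}' is not camelCase` });
      }
    }
  });
  return findings;
}

console.log(lint("const user_name = getName();\nconst userAge = 20;"));
// -> [ { line: 1, message: "identifier 'user_name' is not camelCase" } ]
```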

There are a lot of reasons why projects use linters. Here are a few: they decrease errors, make code more consistent, improve code quality and security, save money by being time efficient, and set coding expectations for the team. Let me explain further: no project or company wants errors in their program to make it to the end user. Linters save money by catching issues early, before those issues cost more time and money to fix.

Many companies want to save as much money as possible across the different steps of development: design, development, testing, production, and maintenance. Each of these steps requires a lot of people to verify that the code works and that it is what the customer wants. A linter means developers spend less time finding and fixing errors themselves. That is why linters can save companies a lot of money that can go toward the next project or toward maintaining operations.

I do have one personal problem with linters: when I am focused on a task and then have to stop to deal with the linter, I lose focus and it creates friction getting back to the task at hand. This is a minor problem, though, because once I get used to working with a linter there will be less friction between staying focused and fixing the errors it catches.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

The Role of Compilers in Programming Explained

In one of my classes this semester we started to learn about the purpose of a compiler in programming. After learning a few things about how a compiler works, I wanted to spend some free time learning more about the subject. Even though there are many different compilers out there, they all use the same general steps to turn a high-level language into machine code: lexical analysis, syntax analysis, semantic analysis, optimization, and code generation.

  • Lexical Analysis: In this step the high-level code goes through the compiler's lexer, which turns pieces of the code such as operators and identifiers into the units that make up tokens (a minimal sketch of this step appears after this list).
  • Syntax Analysis: During this step the compiler looks at the stream of tokens and checks the code for syntax errors and other violations of that language's grammar.
  • Semantic Analysis: Once the syntax is checked, the compiler uses semantic analysis to determine the meaning of the code. It also catches certain logical errors, for example a type mismatch in an arithmetic expression.
  • Optimization: The optimizations a compiler applies are not always the same; it depends on what is being optimized for. For example, it might make the code run as quickly as possible or reduce how much power it consumes.
  • Code Generation: In this final step the code is converted into assembly or machine code so that the computer can read the instructions needed to run the program.
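
To show what lexical analysis actually produces, here is a minimal lexer sketch in TypeScript (my own toy example, not taken from any real compiler). It only recognizes numbers, identifiers, and a few operators, and it turns a line of source text into a list of tokens.

```typescript
// Minimal lexer: turn a source string into tokens, the output of lexical analysis.

type Token = { kind: "number" | "identifier" | "operator"; text: string };

const PATTERNS: Array<[Token["kind"], RegExp]> = [
  ["number", /^\d+/],
  ["identifier", /^[A-Za-z_][A-Za-z0-9_]*/],
  ["operator", /^[=+\-*/]/],
];

function tokenize(source: string): Token[] {
  const tokens: Token[] = [];
  let rest = source.trim();
  while (rest.length > 0) {
    let matched = false;
    for (const [kind, pattern] of PATTERNS) {
      const match = rest.match(pattern);
      if (match) {
        tokens.push({ kind, text: match[0] });
        rest = rest.slice(match[0].length).trimStart();
        matched = true;
        break;
      }
    }
    // Anything the lexer cannot classify is reported, like a real compiler would.
    if (!matched) throw new Error(`Unexpected character: ${rest[0]}`);
  }
  return tokens;
}

console.log(tokenize("total = price * 3"));
// -> identifier "total", operator "=", identifier "price", operator "*", number "3"
```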

The convenience of using a compiler is that it allows programmers to write in a high-level language, which is a lot more readable than assembly. It also lets programmers learn any high-level language without worrying about the steps needed to convert the code into assembly, since the compiler does that for them. Having the compiler check for multiple types of errors also helps with quality assurance. Another factor to consider is that certain hardware can only run specific machine code, but a compiler lets programmers write in whichever language they prefer and still target that hardware.

Compilers also reduce repeated work: the code only needs to be compiled once and can then be executed repeatedly from then on. Lastly, compilers can catch errors we might not think about, for example memory leaks or potential security issues in the code.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Frontend Development Problems and Rules

I was just curious about frontend development. After reading a couple of articles, I learned that frontend development is about how the customer interacts with the website or program. The key aspects of frontend development are user experience, visual feedback, optimization, responsiveness across devices, and integrating the backend APIs with the frontend. First let me explain user experience, which means the website is accessible, usable, and has a good visual design. Next is visual feedback: the frontend reacts to user input and animations appear on time. Moving on to optimization, the goal is to reduce loading time when moving from one page to another or when responding to the user.

Another aspect we need to consider is whether the website or program works on multiple devices like a phone or a desktop. Finally, there is integrating the backend APIs so that data can be sent to the user or from the user to the backend.
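
Here is a minimal sketch of that last aspect in TypeScript, with a made-up endpoint (/api/profile) and element id, showing the frontend calling a backend API and giving the user visual feedback while the request is in flight.

```typescript
// Sketch of frontend/backend integration: call a (hypothetical) backend API
// and give the user visual feedback while the request is in flight.

async function loadProfile(): Promise<void> {
  const status = document.getElementById("status"); // assumed to exist in the page
  if (!status) return;

  status.textContent = "Loading...";                 // visual feedback: request started
  try {
    const response = await fetch("/api/profile");    // hypothetical backend endpoint
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    const profile = await response.json();
    status.textContent = `Welcome back, ${profile.name}`;
  } catch {
    status.textContent = "Something went wrong, please try again."; // feedback on failure
  }
}

loadProfile();
```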

These five goals are meant to keep the user from feeling any friction between the frontend and the backend. Users want a program or website they can use the way they want, without it taking too much time. For example, companies like YouTube want users to stay on the platform as long as possible so they can sell more advertising. Many other platforms are trying to add more features so that users stay on the platform for everything.

To keep users on the platform, no one should have to wait a long time to move to the next page or to get the response they want. Another issue frontend developers face is a website that is not consistent in its responses or animations. Even though these problems are maintenance related, it is important to get the website or program functional again as quickly as possible so that users do not notice it went offline. Another issue users do not notice at first is whether the program or website works across multiple operating systems and browsers. Each browser and operating system will react to the program or website differently depending on multiple factors.

In addition, frontend developers have to consider how the website looks in different browsers. If I open the same website on a MacBook and on a desktop and see visual differences between the two browsers, it would make me not want to use the platform at all. If a website looks the same across platforms, there is less friction for users and they know where everything is.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

The Importance of User Experience in Game Testing

While looking at internships I saw a couple of job postings for quality assurance at video game studios, along with the qualifications and skills they require, so I started to look into the job. From a couple of resources I learned that game QA follows a few guidelines that help a video game deliver a great experience for players. The important concepts are functional evaluations, regression assessments, and user experience analysis. Companies also use Agile methodology so that QA teams can handle issues as they come up throughout the game's lifespan.

This job requires employees who can fix technical problems and think critically to solve them. Let me explain the guidelines of game QA and why they matter. The first guideline is functional evaluation: a series of tests that make sure the game and its features work as intended.

Functional evaluations are divided into: 

  • Gameplay Mechanics: Do player characters interact with objects correctly? Can players use game mechanics correctly (for example, special universal abilities)? Do characters scale correctly?
  • User Interface: Can player controls activate buttons like pause and settings? Can players see features like health bars or ability cooldowns?
  • Missions and Objectives: If a player completes a mission, do they get the reward (see the small sketch after this list)? Is it possible…
  • Multiplayer Features: Can players join the server correctly, is the connection encrypted, and so on?
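
As a small illustration of the missions-and-objectives check, here is a toy TypeScript test with a made-up completeMission function and reward values; a real studio would run this kind of check inside its own test framework.

```typescript
// Sketch of a functional evaluation for missions and objectives.
// The game objects and reward values here are made up for illustration.

interface Player { gold: number; completedMissions: string[] }

function completeMission(player: Player, missionId: string, reward: number): Player {
  return {
    gold: player.gold + reward,
    completedMissions: [...player.completedMissions, missionId],
  };
}

// Functional check: completing a mission must grant the reward and record the mission.
const before: Player = { gold: 100, completedMissions: [] };
const after = completeMission(before, "tutorial-01", 50);

console.assert(after.gold === 150, "reward was not granted correctly");
console.assert(after.completedMissions.includes("tutorial-01"), "mission not recorded");
console.log("mission reward check passed");
```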

Moving on to regression assessments: the idea is to retest key features in the game after patches have been implemented. The purpose of these tests is to:

  • Identify vulnerabilities
  • Allocate resources
  • Enhance reporting accuracy

I mention these purposes because QA teams need to consider multiple factors in order to maintain customers' enjoyment of the game. In addition, QA has to consider whether a change could make the code more complex, hurt performance, add user friction, or increase costs.

The last point is user experience analysis, which can make or break a game's success. When players face friction, like a game that is not optimized for their hardware or constant disconnects from the server, more of them will return the game or stop playing it entirely. I also noticed that some game companies do not know how to filter good suggestions from bad ones when deciding what to fix in the next patch. Regardless, it takes time for a company to set a clear roadmap for how they want to build their game.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Types of Software Testing: Ensuring Quality in Development

Within the software development lifecycle, one major step that takes a decent amount of time is software testing. The purpose of software testing is to make sure the program has no problems and meets the requirements set by the customer. Software testing is divided into two different activities: verification and validation. Verification has programmers checking that the software is built correctly according to its specification, while validation checks that the software actually meets the customer's requirements. For example, a customer might want a website to add a new feature to handle an increase in daily users on the platform.

Moving on, there are multiple types of testing programmers use to verify and validate that the program works as intended: automation testing and manual testing. These then divide into smaller, more specific tests that focus on certain aspects of the program. Automation testing is when programmers write test scripts, usually with the help of testing tools, and run those tests repeatedly, while manual testing has a person work through the tests by hand, checking different sections of the program.

Within software testing there are also different levels of testing: unit testing, integration testing, system testing, and acceptance testing.

Unit Testing

  • Checks each individual component of the software on its own
  • Makes sure the hardware or software pieces the programmers rely on work before anything is built on top of them

Integration Testing 

  • Takes two or more modules that have already been tested and combines them in the program or hardware
  • Makes sure the components and the interfaces between them work as intended

System Testing

  • Verifies the complete software as a whole
  • Also checks the system elements it runs with
  • Confirms the program meets the requirements of the system

Acceptance Testing  

  • Validates that the program meets the customer's expectations
  • Checks that the program works correctly on a user's device
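
To show what the lowest level looks like in practice, here is a minimal sketch of a unit test in TypeScript, using a made-up applyDiscount function and plain assertions instead of a real test framework.

```typescript
// Unit under test: a single, isolated function (made up for this example).
function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new Error("invalid discount");
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

// Unit tests: check this one component on its own, independent of the rest of the program.
console.assert(applyDiscount(100, 20) === 80, "20% off 100 should be 80");
console.assert(applyDiscount(19.99, 0) === 19.99, "0% discount should change nothing");

let threw = false;
try {
  applyDiscount(50, 150);
} catch {
  threw = true;
}
console.assert(threw, "discounts above 100% should be rejected");

console.log("unit tests passed");
```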

These different types of testing are used to prevent issues down the line: they help programmers find potential bugs, improve the quality of the program, improve the user experience, test for scalability, and save time and money. They save money and time because solving problems in a released program ties up a lot of employees who could have been building a new product. A business wants a program that works and keeps improving over time.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Why Software Maintenance Matters for Business Success

Recently AWS servers went down, due to an issue that took roughly a third of the internet down with them. It made me start to think about what kind of maintenance companies like Amazon need so that clients' websites and programs do not get shut down. Then I found an article about software maintenance that explains, from a software engineer's perspective, what happens with a product that is already built but still has to be maintained.

The article explains that software maintenance covers bug fixes, adding new features, and adapting to new hardware or software environments. At the end of the day the software has to work, be secure, be efficient, and meet the users' demands. There are multiple types of software maintenance: bug fixes, patches, adaptive maintenance, perfective maintenance (optimizations), and preventive maintenance.

As programs get used and the number of users increases, companies have to expect that users will run into bugs. These bugs can hurt the user experience, cost the company a potential sale, and even lead to bad reviews. They can range from small coding errors to errors that seriously affect the user experience.

There is always the risk of needing a patch to fix something very quickly. For example, many companies get hacked and have to undo the damage the hackers did to their program. Even if the case is not as extreme as getting hacked, it could be a major update to the software that is needed on short notice.

Moving on to adaptive maintenance: this covers changes that customers want made to the software as its surroundings change, whether that involves the environment, the hardware, business needs, or new regulations that apply to the company. For example, in the past the U.S. government passed privacy rules to prevent websites from collecting data on users without consent.

Another type is perfective maintenance, which I think of as optimization. This covers the consistency of a program's performance and its reliability, and it also makes the software more flexible when new features or changes are added.

The last type is preventive maintenance. This includes work that makes the program more secure, keeps it frequently tested, keeps documentation up to date, and maintains backups. The goal is to make sure potential problems can be caught and fixed as quickly as possible so they do not impact the user experience.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Understanding CPU Cache: L1, L2, and L3 Differences

In my spare time over the past couple of weeks I was looking into why CPU manufacturers advertise the amount of cache in their CPUs. It made me start to wonder: for video games people want as much L3 cache as possible, but for workstations they might care more about a mix of L1 and L2 cache instead. After reading a few articles, I learned that CPU cache is made of SRAM (static random access memory), a high-speed volatile memory located very close to the CPU. The purpose of cache is to let the CPU access data very quickly to complete an operation.

That made me think about the differences between the three cache levels. Let me start with L1 cache.

L1 Cache 

  • Is built into each CPU core
  • Is the closest to the core and the smallest of the three
  • Is around 16KB – 128KB depending on the CPU model
  • Is mapped directly to main memory blocks to improve the speed of data lookups
  • Its main purpose is to store the data and instructions the CPU uses the most
  • Accesses data within about 1 – 3 clock cycles

L2 Cache 

  • Is either specific to one core or shared between cores
  • Ranges in size between 256KB – 2MB
  • Accesses data within about 3 – 10 clock cycles
  • Serves a purpose very similar to L1; an LRU (least recently used) policy decides whether data stays in L1 or moves to L2 (a small LRU sketch appears after these lists)
  • Can be direct-mapped, set-associative, or fully associative depending on the CPU
  • Also gives L1 somewhere to put additional data if needed

L3 Cache 

  • Is shared by multiple CPU cores
  • Is the furthest from the CPU cores
  • Has the most capacity of the group, ranging from 2MB – 64MB or more
  • Serves the same purpose as L2, with the LRU policy managing which data moves between L2 and L3
  • Is the slowest, taking 10 – 20 or more clock cycles to access data
  • When one core changes data in L3, the other cores can see the new data
  • This shared visibility helps maintain consistency between the cores
  • Is typically 4-way associative or higher
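
To make the LRU idea mentioned in the L2 and L3 lists concrete, here is a small software sketch of a least recently used policy in TypeScript. Real CPU caches implement this in hardware per cache set, so this is only an analogy, not how the silicon actually works.

```typescript
// Toy LRU (least recently used) policy: the idea a cache uses to decide which data
// stays in a small, fast level and which gets pushed down to a slower one.

class LruCache<K, V> {
  private entries = new Map<K, V>(); // Map preserves insertion order: oldest first

  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.entries.get(key);
    if (value === undefined) return undefined; // cache miss
    // Cache hit: move the entry to the back so it counts as most recently used.
    this.entries.delete(key);
    this.entries.set(key, value);
    return value;
  }

  put(key: K, value: V): void {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, value);
    if (this.entries.size > this.capacity) {
      // Evict the least recently used entry (the oldest one in the Map).
      const oldest = this.entries.keys().next().value as K;
      this.entries.delete(oldest);
    }
  }
}

const l1 = new LruCache<string, number>(2); // pretend this level only holds two entries
l1.put("a", 1);
l1.put("b", 2);
l1.get("a");      // "a" is now the most recently used entry
l1.put("c", 3);   // capacity exceeded: "b" gets evicted, not "a"
console.log(l1.get("b")); // -> undefined (miss), it would be fetched from a lower level
```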

Even though CPU manufacturers keep pushing the limits of cache, how much it matters still depends on what the user needs it for.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

API Types Explained: REST, SOAP, and RPC

After class I spent some time researching the different types of REST APIs, and ended up finding more information about the different styles of API architecture out there. Three common ones are REST, SOAP, and RPC, each used differently depending on the task, the data, and the specifications needed.

First let me explain the REST architecture. A REST API lets the frontend and the backend server communicate to complete tasks, which makes the architecture very flexible in how you scale it or meet other specifications. Another characteristic of REST is that it is stateless: the server does not store any data or status between requests. It is also resource oriented, using URIs (uniform resource identifiers) to identify specific resources on the server.

REST can use JSON, XML, HTML, or plain text as its data format. Using JSON keeps payloads small and makes requests faster to resolve. To communicate with the server, REST uses HTTP methods such as GET, POST, PUT, and DELETE to manipulate resources on the server. Lastly, error handling happens at the application level: it is up to the individual or company building the application to decide how errors and exceptions are handled.
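
Here is a small sketch of what that looks like from the client side in TypeScript, with a made-up /api/users resource: the URI names the resource, the HTTP method says what to do with it, the body is JSON, and errors are handled by the application.

```typescript
// Sketch of talking to a REST API over HTTP; the /api/users endpoint is made up.

// GET: read a resource identified by its URI.
async function getUser(id: number) {
  const response = await fetch(`/api/users/${id}`);            // HTTP method: GET
  if (!response.ok) throw new Error(`HTTP ${response.status}`); // application-level error handling
  return response.json();                                       // JSON body
}

// POST: create a new resource under the collection URI.
async function createUser(name: string, email: string) {
  const response = await fetch("/api/users", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, email }),
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json();
}

// Usage: getUser(42).then(console.log);
// Usage: createUser("Ada", "ada@example.com").then(console.log);
```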

Another architecture is SOAP. SOAP follows a strict protocol built on XML and WSDL rather than relying on HTTP methods the way REST does. It is also message oriented instead of resource oriented: a program communicates with the server by exchanging messages, sometimes through a message queue, to send or receive data. The data format is XML only, which increases the size of the data being transferred over a connection. SOAP can be either stateless or stateful; a stateful service lets users maintain session information on the server while still receiving information from it.

To protect information, SOAP uses WS-Security to encrypt and sign messages. To clarify, using XML as the data format makes SOAP a lot slower, both because more data is transferred and because the XML has to be more complex to make sure everything works as intended. SOAP also has built-in error handling through fault elements in the XML, which the user defines to catch bad requests or other errors. One benefit of SOAP is that the architecture can scale, since new features can be added while the stricter contract keeps everything well defined.
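
For comparison with the REST sketch above, here is roughly what a SOAP call looks like from a client in TypeScript. The service URL, namespace, and GetUser operation are all made up, but it shows the whole request being a single XML message.

```typescript
// Sketch of sending a SOAP request; the service URL, namespace, and GetUser
// operation are hypothetical and only meant to show the shape of the call.

const envelope = `<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetUser xmlns="http://example.com/users">
      <Id>42</Id>
    </GetUser>
  </soap:Body>
</soap:Envelope>`;

async function callSoapService(): Promise<string> {
  const response = await fetch("https://example.com/UserService", {
    method: "POST",                       // SOAP messages are usually POSTed over HTTP
    headers: {
      "Content-Type": "text/xml; charset=utf-8",
      SOAPAction: "http://example.com/users/GetUser",
    },
    body: envelope,                       // the entire request is one XML message
  });
  return response.text();                 // the reply is another XML envelope (or a fault)
}

// Usage: callSoapService().then(console.log);
```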

Lastly, let's talk about RPC (remote procedure call) architecture. RPC has a client and a server connected so they can send information back and forth, and its messages can be written in JSON, XML, or Protocol Buffers formats. The RPC communication layer uses TCP/IP to send the data between the server and the user, and it has error handling to manage network and server problems. The user can also add features for authentication and message encryption. Finally, RPC uses an IDL (interface definition language) to define the return types and parameters of the procedures that can be called remotely.
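
The RPC style is easiest to see with JSON-RPC, where calling a remote procedure means posting a small JSON message describing the method and its parameters; the /rpc endpoint and getUser method below are hypothetical.

```typescript
// Sketch of an RPC-style call using the JSON-RPC 2.0 message shape;
// the /rpc endpoint and the "getUser" method are made up for illustration.

interface JsonRpcRequest {
  jsonrpc: "2.0";
  method: string;       // the remote procedure to call
  params: unknown;      // its parameters, as declared in the service's IDL/contract
  id: number;           // lets the client match the response to the request
}

async function callRemote(method: string, params: unknown) {
  const request: JsonRpcRequest = { jsonrpc: "2.0", method, params, id: 1 };

  const response = await fetch("/rpc", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request),
  });

  const reply = await response.json();
  if (reply.error) throw new Error(reply.error.message); // server-reported error handling
  return reply.result;
}

// Usage: feels like calling a local function, but it runs on the server.
// callRemote("getUser", { id: 42 }).then(console.log);
```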

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Understanding Git: The Key to Safe Team Collaboration

In class we are learning about Git. Git is a version control system that allows multiple developers to make changes to a program while still keeping control over the commits. The reason I think this matters is that there will be times when developers make a mistake that causes a whole list of issues, but with Git they can go back to a previous commit and fix things from there. This lets developers work on major projects without breaking the main program.

Another important aspect is that Git allows larger teams to work on the same program at once, which saves the company time and money. It also lets developers from all over the world help with open source projects by working on a fork of the repository, so they can make changes locally without impacting the project's upstream.

From my perspective, Git separates the upstream repository, the online fork, the local clone, and branches so that developers can make changes safely and other people can double-check their work. That is why organizations have reviewers and maintainers look over changes before they land, so the upstream does not break. It also lets developers know when changes are made and how big each change is. When a company's upstream goes down, it can cost the company a lot of money and damage its reputation with clients and the public.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

What Are Code Smells

Currently in class we are going over code smells. A code smell is a sign in the code that a change could cause a problem later. Let me list the code smells first: rigidity, fragility, immobility, viscosity, needless complexity, needless repetition, and opacity. These are the things we should try to avoid when making a program so that we do not create future problems for ourselves.

Coding standards and quality will change over time, but regardless, programmers need to be aware of the risks of a program having these issues. Rigidity refers to a section of the program being depended on so heavily that trying to change it requires changing a lot of other code as well. Fragility is when a change we make causes other parts of the code to stop working as intended. Immobility is when code cannot be reused or moved from one system to another; for example, certain engines or IDEs might handle values in a program differently, so the same code behaves differently between them.

Viscosity is when doing things the proper way is harder than taking shortcuts, so improper coding methods pile up and slow the project down. Needless complexity occurs when a program is far more complex than what it is actually used for demands. Needless repetition is when the same code repeats throughout the program instead of being written once. Lastly, opacity is how hard it is for other programmers to see what a piece of code is for and how clearly it is defined.
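
To make one of these concrete, here is a small made-up TypeScript example of needless repetition, and how pulling the duplicated logic into one function removes the smell.

```typescript
// Sketch of the "needless repetition" smell and a fix; the pricing logic is made up.

// Smelly version: the same tax-and-rounding logic is copy-pasted in two places,
// so a change to the tax rate has to be made twice (and is easy to miss once).
function invoiceTotalSmelly(prices: number[]): number {
  let total = 0;
  for (const p of prices) total += Math.round(p * 1.0625 * 100) / 100;
  return total;
}
function cartPreviewSmelly(price: number): number {
  return Math.round(price * 1.0625 * 100) / 100;
}

// Refactored version: the repeated logic lives in exactly one place.
const TAX_RATE = 0.0625;

function withTax(price: number): number {
  return Math.round(price * (1 + TAX_RATE) * 100) / 100;
}

function invoiceTotal(prices: number[]): number {
  return prices.reduce((total, p) => total + withTax(p), 0);
}

function cartPreview(price: number): number {
  return withTax(price);
}

// The refactor does not change behavior, it just removes the duplication:
console.log(invoiceTotalSmelly([10, 20]), invoiceTotal([10, 20])); // -> 31.88 31.88
console.log(cartPreviewSmelly(10), cartPreview(10));               // -> 10.63 10.63
```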

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.