Author Archives: Shamarah Ramirez

Creating a Smooth Web Experience: Frontend Best Practices

A key aspect of frontend development is creating websites that perform well and provide a seamless user experience. However, due to time constraints in class, we didn’t have many opportunities to dive deeply into frontend implementation techniques. To fill this gap, I explored the blog “Frontend Development Best Practices: Boost Your Website’s Performance.” Its clear explanations, organized structure, and bold highlights made it an excellent resource for deepening my understanding of this crucial topic.

The blog provides a detailed guide on optimizing website performance using effective frontend development techniques. Key recommendations include using appropriate image formats like JPEG for photos, compressing files with tools like TinyPNG, and using lazy loading to improve speed and save bandwidth. It stresses reducing HTTP requests by combining CSS and JavaScript files and using CSS sprites to streamline server interactions and boost loading speed. Another important strategy is enabling browser caching, which allows browsers to store static assets locally, reducing redundant data transfers and improving load times. The blog also suggests optimizing CSS and JavaScript by minifying files, loading non-essential scripts only when needed, and using critical CSS to improve initial rendering speed. Additional practices include leveraging content delivery networks (CDNs) to serve files from servers closer to users and employing responsive design principles, such as flexible layouts and mobile-first approaches, to create adaptable websites.
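
To make the lazy-loading idea concrete, here is a minimal sketch of my own (not taken from the blog) using the browser's IntersectionObserver API; the data-src attribute and margin value are illustrative choices:

```typescript
// Lazy-loading sketch: images start with only a data-src attribute and get a
// real src once they approach the viewport, saving bandwidth on initial load.
const lazyImages = document.querySelectorAll<HTMLImageElement>("img[data-src]");

const observer = new IntersectionObserver(
  (entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target as HTMLImageElement;
      img.src = img.dataset.src ?? "";  // swap in the real image URL
      obs.unobserve(img);               // stop watching once it has loaded
    }
  },
  { rootMargin: "200px" }               // begin loading slightly before visibility
);

lazyImages.forEach((img) => observer.observe(img));
```

Modern browsers also support the loading="lazy" attribute on img tags, which achieves the same goal with no script at all.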

I chose this blog because it addresses frontend implementation topics that were not deeply explored in our course. Its organized layout, with bold headings and step-by-step instructions, makes the content accessible and actionable. As someone who plans to build a website in the future, I found its advice easy to understand.

Reading this blog was incredibly insightful. I learned how even small adjustments—such as choosing the right image format or enabling lazy loading—can significantly improve website performance. For example, understanding browser caching taught me how to make websites load faster and enhance the experience for returning users. The section on responsive web design stood out, emphasizing the importance of creating layouts that work seamlessly across different devices. The blog’s focus on performance monitoring and continuous optimization also aligned with best practices for maintaining high-performing websites. Tools like Google PageSpeed Insights and A/B testing offer valuable feedback to help keep websites efficient and user-focused over time.

In my future web development projects, I plan to implement the best practices outlined in the blog. This includes using image compression tools and lazy loading to improve loading times, combining and minifying CSS and JavaScript files to reduce HTTP requests, and utilizing CDNs alongside browser caching for faster delivery of static assets. I will also adopt a mobile-first approach to ensure websites function smoothly across all devices.

This blog has provided invaluable insights into frontend development, equipping me with practical strategies to optimize website performance. By applying these techniques, I aim to create websites that not only look appealing but also deliver an exceptional user experience.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

Code Smells

In software development, creating high-quality applications requires keeping code clean and effective. The blog “What is Code Smell & How to Avoid It” provides a thorough review of code smells: signs of deeper problems in a codebase that can hinder readability, performance, and maintainability even though they are not errors in themselves. The blog carefully explains what code smells are, their most prevalent kinds, and practical solutions. Its structured approach makes it a great resource for learning how to maintain high-quality code.

Code smells are indicators of design flaws in a program, which can lead to technical debt and vulnerabilities down the road. They are frequently caused by tight timelines, inadequate reviews, or insufficient experience. Typical examples include excessive parameter lists, overly long functions, inconsistent naming conventions, and duplicate code, all of which make a codebase challenging to maintain. Along with highlighting smells like middle-man classes, data clumps, and primitive obsession, the blog provides practical guidance on how to spot and address each one. To prevent code smells, it highlights techniques such as using automated code review tools, adopting continuous integration/deployment (CI/CD) workflows, and refactoring regularly.
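
As a quick illustration of one of those smells, here is a small TypeScript sketch of my own (not from the blog) showing “primitive obsession” and a simple refactoring:

```typescript
// Smell: "primitive obsession" – a raw string is passed around and every
// call site has to re-validate it.
function sendReceiptSmelly(email: string, amount: number): void {
  if (!email.includes("@")) throw new Error("invalid email");
  console.log(`Sending $${amount} receipt to ${email}`);
}

// Refactor: wrap the primitive in a small value type that validates once.
class EmailAddress {
  private constructor(readonly value: string) {}
  static parse(raw: string): EmailAddress {
    if (!raw.includes("@")) throw new Error(`invalid email: ${raw}`);
    return new EmailAddress(raw);
  }
}

function sendReceipt(email: EmailAddress, amount: number): void {
  // email is already known to be valid here
  console.log(`Sending $${amount} receipt to ${email.value}`);
}

sendReceipt(EmailAddress.parse("user@example.com"), 25);
```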

I chose this blog because it breaks the concept of code smells down into digestible pieces, discussing each type in depth while linking it to a concrete solution. The topic also ties closely to our course, Software Construction, Design, and Architecture, which stresses developing maintainable, scalable systems. Understanding code smells directly supports that objective and reinforces the fundamental ideas covered in the syllabus.

As I thought back on the blog and what we’ve learned in class, I realized how crucial it is to proactively find and fix code smells throughout the development process. The emphasis on techniques like refactoring caught my attention, since it can be easy to overlook their wider importance beyond tackling immediate issues. Recognizing specific smells like “feature envy” and “primitive obsession” brought to light how small problems can turn into big ones, affecting collaboration and a project’s ability to grow. For instance, I intend to use the strategy of condensing lengthy parameter lists into objects by grouping related data together. This will make the code in my upcoming projects cleaner, easier to maintain, and easier to read.
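
A rough sketch of that strategy (my own example, with made-up names) looks like this:

```typescript
// Smell: a long parameter list in which several values always travel together.
function createUserSmelly(
  first: string, last: string,
  street: string, city: string, state: string, zip: string
): void { /* ... */ }

// Refactor: group the related data into small objects and pass those instead.
interface Name { first: string; last: string; }
interface Address { street: string; city: string; state: string; zip: string; }

function createUser(name: Name, address: Address): void {
  console.log(`Creating ${name.first} ${name.last} in ${address.city}`);
}

createUser(
  { first: "Ada", last: "Lovelace" },
  { street: "1 Analytical Way", city: "Worcester", state: "MA", zip: "01602" }
);
```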

This blog improved my knowledge of software architecture and design by relating theory to real-world development issues and helped cement the content we went over in class. By putting the knowledge I’ve received to use, I hope to create software that is more reliable and maintainable, laying a solid basis for my future work in software engineering.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

REST API Design

Modern web applications are built on top of REST APIs, which provide the essential connection between the client and the server. This week, I discovered a blog called “Best Practices for REST API Design” by John Au-Yeung and Ryan Donovan, which outlines important techniques for developing secure, performant, and user-friendly APIs. Because JSON is widely supported and lightweight, the blog recommends it as the standard data format. Logical endpoint structures that rely on nouns rather than verbs are crucial for clarity, while HTTP methods like GET, POST, PUT, and DELETE express the action. Nesting related resources can improve readability, but it should be kept shallow to avoid unnecessary complexity.
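
To visualize the nouns-plus-HTTP-methods idea, here is a small sketch of my own using Express-style routing (Express is my assumption for illustration, not something this summary depends on):

```typescript
import express from "express";

const app = express();
app.use(express.json());  // accept and return JSON by default

// Nouns in the path; the HTTP method expresses the action.
app.get("/articles", (_req, res) => { res.json([{ id: 1, title: "Hello" }]); });
app.post("/articles", (req, res) => { res.status(201).json({ id: 2, ...req.body }); });
app.put("/articles/:id", (req, res) => { res.json({ id: req.params.id, ...req.body }); });
app.delete("/articles/:id", (_req, res) => { res.status(204).send(); });

app.listen(3000);
```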

The blog discusses handling problems gracefully by returning easily comprehensible error messages to facilitate debugging and using relevant HTTP status codes (e.g., 400 for Bad Request, 404 for Not Found). It also highlights the value of query parameters for pagination, sorting, and filtering when handling big datasets. For protecting APIs, security measures including role-based access control, SSL/TLS encryption, and the principle of least privilege are essential. Caching is emphasized as a way to improve performance, though developers should make sure it doesn’t serve stale data. Lastly, the blog advises versioning APIs, often with prefixes like /v1/, to guarantee backward compatibility and permit incremental enhancements.
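
Here is a hypothetical handler of my own that combines several of those points: status codes, query-parameter pagination, and a /v1/ prefix (the data and route names are placeholders):

```typescript
import express from "express";

const app = express();
const articles = [{ id: 1, title: "REST basics" }, { id: 2, title: "Caching" }];

// Pagination and filtering via query parameters; versioning via the path prefix.
app.get("/v1/articles", (req, res) => {
  const page = Number(req.query.page ?? 1);
  const limit = Number(req.query.limit ?? 20);
  if (!Number.isInteger(page) || page < 1) {
    return res.status(400).json({ error: "page must be a positive integer" }); // Bad Request
  }
  res.json({ page, limit, results: articles.slice((page - 1) * limit, page * limit) });
});

// Clear status codes and readable error messages make failures easy to debug.
app.get("/v1/articles/:id", (req, res) => {
  const article = articles.find((a) => a.id === Number(req.params.id));
  if (!article) {
    return res.status(404).json({ error: "article not found" }); // Not Found
  }
  res.json(article);
});

app.listen(3000);
```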

Since we’ve been learning about REST API design in class, I chose to read a blog about it to gain a deeper understanding of best practices. I selected this blog because, in addition to explaining each essential aspect of REST API design, such as JSON usage, appropriate endpoint naming, error handling, security, caching, and versioning, it provides code blocks as examples, which helped me understand and visualize the concepts more clearly.

What caught my attention the most was the part about using logical nesting for endpoints. It described how grouping related endpoints makes an API easier to use and comprehend. It also made the point that endpoints shouldn’t replicate the database’s structure, which improves security by shielding private data from attackers. Reading this made me more aware of how endpoint design affects both security and usability, and it demonstrated the importance of properly planning endpoint architectures.
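
For example (these are my own hypothetical routes, not the article’s), comments that belong to an article can be nested one level under it, without the path mirroring how the tables are laid out in the database:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Logical nesting: comments live "under" their article, so the relationship
// is obvious from the URL alone.
app.get("/articles/:articleId/comments", (req, res) => {
  res.json([{ id: 1, articleId: req.params.articleId, body: "Nice post!" }]);
});
app.post("/articles/:articleId/comments", (req, res) => {
  res.status(201).json({ articleId: req.params.articleId, ...req.body });
});

// Nesting is kept to one level; deeper chains like
// /users/1/articles/2/comments/3/likes quickly become hard to read and maintain.
app.listen(3000);
```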

This article impacted my perspective on API design by emphasizing the necessity of striking a balance between usability and simplicity. I want to apply these ideas in future projects by making solid security procedures, efficient error handling, and well-defined endpoint structures top priorities. By using the strategies covered in this blog, I intend to create APIs that are effective and simple for developers to use, ensuring they remain maintainable and offer a satisfying experience over time.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

Software Design Principles

For this week, I decided to find a blog about software design principles to refresh my memory on the topic. I found a blog called “Main Software Design Principles Every Developer Should Know” by Amr Saafan. I decided to discuss this blog specifically because it breaks each section down into short, simple bullet points.

The first section covers Object-Oriented Programming (OOP) principles. It explains principles like encapsulation, which ensures that objects securely manage their data and behavior while concealing their complexity and granting only limited access. Next is abstraction: by concealing details and concentrating on key elements, abstraction makes problem-solving easier and allows for solutions that are extendable and reusable. Inheritance encourages code reuse by enabling classes to share properties and functions, whereas polymorphism enables flexible behavior depending on object types, increasing flexibility and minimizing code duplication. I remembered these principles the most, possibly because I’ve had plenty of discussions about them; even so, I found the explanation of encapsulation very helpful.
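
Here is a short encapsulation sketch of my own in TypeScript, along the lines the blog describes: the internal state is hidden and can only change through methods that enforce the object’s rules.

```typescript
// Encapsulation: the balance is private, so callers can only change it
// through methods that enforce the account's invariants.
class BankAccount {
  private balance = 0;

  deposit(amount: number): void {
    if (amount <= 0) throw new Error("deposit must be positive");
    this.balance += amount;
  }

  getBalance(): number {      // read-only view of the internal state
    return this.balance;
  }
}

const account = new BankAccount();
account.deposit(50);
console.log(account.getBalance());  // 50
// account.balance = 1_000_000;     // compile-time error: 'balance' is private
```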

Software design is further improved by the SOLID principles. To improve testability and maintenance, the Single Responsibility Principle (SRP) promotes modular classes with distinct responsibilities. To maintain stability, the Open/Closed Principle (OCP) advises adding functionality through new code rather than changing existing code. The Liskov Substitution Principle (LSP) preserves compatibility between base and derived classes, while the Interface Segregation Principle (ISP) promotes short, targeted interfaces that minimize needless dependencies. The Dependency Inversion Principle (DIP), which prioritizes abstractions over concrete implementations, encourages loose coupling. I needed a refresher on LSP and ISP the most out of this section, and I found the explanations helpful.
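
As one example of these principles in code, here is a minimal Dependency Inversion sketch (my own, not from the blog): the service depends on an abstraction, so the concrete logger can be swapped without touching the service.

```typescript
// DIP: OrderService depends on the Logger abstraction, not on a concrete class.
interface Logger {
  log(message: string): void;
}

class ConsoleLogger implements Logger {
  log(message: string): void {
    console.log(message);
  }
}

class OrderService {
  constructor(private readonly logger: Logger) {}  // injected abstraction

  placeOrder(item: string): void {
    this.logger.log(`order placed: ${item}`);
  }
}

// Swapping in a file logger or a test fake requires no change to OrderService.
new OrderService(new ConsoleLogger()).placeOrder("coffee");
```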

The next section, Don’t Repeat Yourself (DRY), explains how eliminating redundancy promotes simpler, maintainable, and error-resistant code through modularity and reuse. Keep It Simple, Stupid (KISS) promotes straightforward, effective solutions that are easier to comprehend and maintain, while discouraging overengineering. The You Aren’t Gonna Need It (YAGNI) principle suggests avoiding unneeded features in advance, simplifying things, and concentrating on immediate needs. Those principles felt self-explanatory to me, but they were explained well. The Law of Demeter (LoD) improves modularity and decreases coupling by restricting an object’s interactions to its immediate dependents. I definitely needed the explanation of this principle, and the list of ways to apply it, which included items like Limit Direct Dependencies, Use Interfaces, and Avoid Chain Calls, aided my understanding.
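
A tiny Law of Demeter sketch, using hypothetical classes of my own: the caller talks only to its immediate collaborator instead of chaining through its internals.

```typescript
class Address {
  constructor(readonly zip: string) {}
}

class Customer {
  constructor(private readonly address: Address) {}
  shippingZip(): string {
    return this.address.zip;            // delegate to my own field
  }
}

class Order {
  constructor(private readonly customer: Customer) {}
  shippingZip(): string {
    return this.customer.shippingZip(); // one hop, no chain
  }
}

// The caller never writes order.customer.address.zip; it asks Order directly.
const order = new Order(new Customer(new Address("01602")));
console.log(order.shippingZip());
```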

Next, the Composition Over Inheritance principle suggests building systems with reusable components rather than deep inheritance hierarchies, improving flexibility and modularity. Finally, the Principle of Least Astonishment (POLA) ensures software behaves as expected, minimizing confusion by being clear, consistent, and intuitive.
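
A quick composition-over-inheritance sketch (again my own illustration): behaviors are small reusable parts that a class assembles, rather than levels in a deep class hierarchy.

```typescript
interface Flyer { fly(): string; }
interface Swimmer { swim(): string; }

const wings: Flyer = { fly: () => "flying" };
const fins: Swimmer = { swim: () => "swimming" };

// Duck composes the behaviors it needs instead of inheriting from
// something like Bird -> FlyingBird -> SwimmingFlyingBird.
class Duck {
  constructor(
    private readonly flyer: Flyer,
    private readonly swimmer: Swimmer
  ) {}

  move(): string {
    return `${this.flyer.fly()} and ${this.swimmer.swim()}`;
  }
}

console.log(new Duck(wings, fins).move());  // "flying and swimming"
```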

As a software developer, I will use these design principles to help design scalable, flexible, and user-friendly systems. KISS and YAGNI simplify designs to avoid unnecessary complexity, while composition and encapsulation add flexibility. Following POLA makes a system easier to understand and use for both developers and users.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

First blog for CS-343

This is my first blog for CS-343. Feel free to follow my journey.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

Sprint Retrospective #3

Overall, I believe the third sprint went well. Like the previous sprints, we had meetings in person or over Discord. All in all, I believe we did a fantastic job of keeping each other updated and asking each other questions when we got stuck. I noticed that once we finished one issue, nobody hesitated to start another, which really helped us move the project along. During this sprint, we worked on several of the issues as a group. We kept up the open-mindedness, accountability, honesty, and respect that were originally described as the culture we hoped to establish in the working agreement. As in the previous sprints, we determined the maximum amount of work each person should attempt to finish in order to split the issues fairly and evenly, and we mostly adhered to it.

I worked on multiple issues that involved verifying that the pantry projects had the correct extensions, linters, and pipeline stages. Like the last sprint, I looked over each project’s file types, created a list of required linters based on the files, added any new linters, verified that the new linters passed, checked which stages were required, and adjusted the stages as necessary. For this type of issue, I worked on “Verifying that all Thea’s Pantry projects have the correct extensions, linters, and pipeline stages – InventorySystem General” by myself, and then worked on “Verifying that InventoryAPI has the correct extensions, linters, and pipeline stages” and “Verifying that all Thea’s Pantry projects have the correct extensions, linters, and pipeline stages – Inventory Backend” with the group. When working on the issue for InventorySystem General, I realized that the build and test scripts weren’t needed, so I also removed those files. I also worked on “Determine what needs to be done on GuestInfoFrontend” with the group. For this issue, we reviewed comments left in the code, the wireframe, and the documentation, and created new issues for any work worth noting. We created these issues (“Moving “Other Assistance” attribute” and “Moving Receiving Unemployment Attribute to Assistance”) and linked them to the initial issue.

I think we did really well as a team this sprint. We did not get around to establishing a method that guarantees that a few individuals are not reviewing the majority of issues, as we had originally intended when drafting our working agreement. For this sprint, we did not discuss how we would stay on top of issues that needed review, because many of the issues involved us working together and we kept up with most of them anyway. The time frame between finishing an issue, reviewing it, and merging was much shorter than in the last sprint. We made sure to communicate with each other constantly, and the first person available to review an issue took the task. As a group, I can see that we improved a lot in that regard, but I still think we could establish a method to make sure that reviewing code is not done mainly by a few individuals. As an individual, I showed a lot of improvement by checking on what needed to be done compared to the previous sprints, but I think I can still improve when it comes to merging my issues as quickly as possible after they’ve been reviewed.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

Security Testing

For this week’s blog, I decided to research security testing because we didn’t have the chance to go over it in class. While researching, I found a blog called “Security Testing: Types, Tools, and Best Practices” by Oliver Moradov. The article is split into several main sections: “What is Security Testing?”, “Types Of Security Testing”, “Security Test Cases and Scenarios”, “Security Testing Approaches”, “What Is DevSecOps?”, “Data Security Testing”, “Security Testing Tools”, and “Security Testing Best Practices”.

The first section, “What is Security Testing?”, explains the definition of security testing and then splits into two subsections that use bullet points to lay out the main goals and key principles of security testing. Security testing determines whether software is vulnerable to cyber attacks and evaluates the effect of malicious or unexpected inputs on its operation. It demonstrates that systems and information are safe and dependable and that they do not accept unauthorized inputs. It is a type of non-functional testing that focuses on whether the software is configured and designed correctly. The main goals of this kind of testing are to identify assets, risks, threats, and vulnerabilities; it gives actionable instructions for resolving detected vulnerabilities and can verify that they have been effectively fixed. The key principles of security testing are confidentiality, integrity, authentication, authorization, availability, and non-repudiation.

The next section delves deeper into the many types of security testing. The examples provided are penetration testing (ethical hacking), application security testing (AST), web application security testing, API security testing, vulnerability management, configuration scanning, security audits, risk assessment, and security posture assessment. I already knew about a few of these, but it was interesting to learn about API security testing and security posture assessments. For example, it explained how attackers can use APIs to gain access to sensitive data or as an entry point into internal systems, and it covered the basics of what a security posture entails.

The blog then discusses some important test scenarios, like authentication, input validation, and application and business logic, and provides other tests in a bulleted list. It then discusses the types of approaches (white box, black box, and grey box testing) and a few useful tools.
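
To make the input-validation scenario more concrete, here is a small sketch of my own (not from the blog): a hypothetical validator plus a few scenario-style checks, including an injection-shaped input.

```typescript
import assert from "node:assert";

// Hypothetical validator: usernames may only be 3-20 letters, digits, or underscores.
function isValidUsername(input: string): boolean {
  return /^[A-Za-z0-9_]{3,20}$/.test(input);
}

// Scenario-style checks: valid input passes, malformed or hostile input is rejected
// before it can reach business logic or the database.
assert.strictEqual(isValidUsername("shamarah_r"), true);
assert.strictEqual(isValidUsername("ab"), false);                          // too short
assert.strictEqual(isValidUsername("x".repeat(50)), false);                // too long
assert.strictEqual(isValidUsername("admin'; DROP TABLE users;--"), false); // injection-style input
console.log("all input-validation checks passed");
```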

The next section that I found very important was the one about best practices. The best practices mentioned were “Shift Security Testing Left”, “Test Internal Interfaces, not Just APIs and UIs”, “Automate and Test Often”, “Third-Party Components and Open Source Security”, and “Using the OWASP Web Security Testing Guide”. I knew about some of the practices, like automating and testing often, but I did not know about the Web Security Testing Guide (WSTG). I like that the author provided a link to that resource as well. I think this blog is a great resource for those who want to learn about security testing. It is well organized and made me feel a bit better prepared to enhance security in future projects.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

Performance Testing

For this week’s blog, I decided to research performance testing because it is listed as a topic for class. While doing research, I found a blog called “How To Do Performance Testing: Tips And Best Practices” by Volodymyr Klymenko. In it, he discusses the primary purpose of this kind of software testing, the types of performance testing, its role in software development, common problems it reveals, tools, steps, best practices, and common mistakes to avoid. The blog was well organized and clearly simplified each topic.

The purpose of performance testing is to find potential performance bottlenecks and identify possible errors and failures, ensuring that the software meets its performance requirements before release. The types covered include load testing (response under many simultaneous requests), stress testing (response under extreme load conditions or resource constraints), spike testing (response to a significant and rapid increase in workload that exceeds normal expectations), soak testing (a gradual increase in end users over time to test the system’s long-term stability), scalability testing (how the system scales as the volume of data or users increases), capacity testing (which tests the traffic load based on the number of users but differs in scope), and volume testing (the response to processing a large amount of data).
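
As a rough illustration of what load or spike testing measures (this is my own sketch, not a tool the blog names), the script below fires batches of concurrent requests and reports latency; the URL and request counts are placeholders.

```typescript
// Time a single request against the target URL.
async function timedFetch(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url);
  return performance.now() - start;
}

// Fire `concurrentUsers` requests at once and report average and worst latency.
async function runLoadTest(url: string, concurrentUsers: number): Promise<void> {
  const timings = await Promise.all(
    Array.from({ length: concurrentUsers }, () => timedFetch(url))
  );
  const avg = timings.reduce((sum, t) => sum + t, 0) / timings.length;
  const max = Math.max(...timings);
  console.log(`${concurrentUsers} requests: avg ${avg.toFixed(1)} ms, max ${max.toFixed(1)} ms`);
}

// A tiny "spike": a baseline of 1 request, then 50 simultaneous requests.
async function main(): Promise<void> {
  await runLoadTest("http://localhost:3000/health", 1);
  await runLoadTest("http://localhost:3000/health", 50);
}
main();
```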

The next section discusses the role of performance testing. I found some of the bullet points to be a bit obvious, like “improving the system’s overall functioning”, “monitoring stability and performance”, and “assessing system scalability”. The other points mentioned were “failure recovery testing”, “architectural impact assessment”, “resource usage assessment”, and “code monitoring and profiling”.

After covering the role of performance testing, the author discusses the common issues it reveals: speed problems, poor scalability, software configuration problems, and insufficient hardware resources. This section is broken down into many bullet points that explain each issue, which I found very helpful in simplifying the ideas presented.

The last three sections cover the steps for performance testing, best practices, and common mistakes to avoid. The steps presented in the article are: 1. Defining the Test Environment, 2. Determination of Performance Indicators, 3. Planning and Designing Performance Tests, 4. Setting up the Test Environment, 5. Development of Test Scenarios, 6. Conducting Performance Tests, and 7. Analyze, Report and Retest. The section after the steps includes some best practices like “Early Testing And Regular Inspections” and “Testing Of Individual Blocks Or Modules”. Some of the common mistakes to avoid that I found noteworthy were “Absence of Quality Control System” and “Lack of a Troubleshooting Plan”. While working on the Hack.Diversity Tech.Dive project, I had to do some performance testing and even used a tool mentioned in the article called Postman. In the future, I will definitely be more prepared to do performance testing now that I can reference this article.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

The Long Road

For this week’s blog, I decided to write about the pattern called “The Long Road” from Chapter 3. My previous blogs touched a bit on the contents of this pattern, so I decided to discuss what the authors suggest for it. The situation presented in this pattern relates to how your aspirations to become a master software developer conflict with the expectations of others, and how “Conventional wisdom tells you to take the highest-paying job and the first promotion you can get your hands on, to stop programming and get onto more important work rather than slowly building up your skills”.

The solution presented is to first accept the judgment you may receive from others for going against the status quo, and then to stay focused on the long term. The journey is a long one in which you will gain a great deal of skill and knowledge, so you need to prepare for it by drawing your own map. The text mentions “you should keep in mind the expectation that you will be a working software developer even when you are middle-aged. Let this influence the jobs you take and the scope of your ambitions. If you’re still going to be working in 20 years’ time, then you can do anything”. Over the decades you will work, you will be able to catch up to anyone in terms of skill, and you may even pass them. Before reading this pattern, I never really thought about just how long I will be working in the field. As I have discussed in many of my blogs for this class, I have been working on developing myself professionally and learning from other developers; as I have progressed, I have thought about the possibility of no longer being the weakest on the team, so I agree that it is realistic to surpass people during your long journey.

The pattern really stresses the idea that instead of counting the days to retirement, a craftsman willingly works into their final decades, thoughtfully developing their craft over many years. Taking the long road, along with self-assessment, is the foundation of a successful apprenticeship. I couldn’t find anything in this section to disagree with. It definitely made me think about what I want to get out of this long journey, because I hadn’t taken the time to think about it in depth before.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

Sprint Retrospective #2

Overall, I think the second sprint went well. Like the last sprint, I think we all did a good job of keeping each other updated and asking each other questions if we got stuck. For this sprint, we worked on a lot of the issues as a group. We were also open enough to communicate with each other when we realized that issues may not have been merged for a long period after being reviewed. Unfortunately, there were a couple of times when we didn’t keep up with issues in the “Needs Review” column, which resulted in a lot of merge conflicts that needed to be resolved. We continued to display aspects of the culture we originally described in the working agreement: open-mindedness, honesty, respect, and accountability. Before the sprint started, we decided how much weight everyone should try to complete so the work would be divided evenly and fairly, and we kept to it for the most part.

I worked on multiple issues that involved verifying that the pantry projects had the correct extensions, linters, and pipeline stages. For each of these issues, we examined the file types in the project, made a list of linters that were needed based on the files, added any missing linters, made sure the new linters passed, checked which stages were needed, and fixed the stages accordingly. I worked on verification for GuestInfoBackend, GuestInfoAPI, and GuestInfoSystem/General: the first two with the group, and GuestInfoSystem/General by myself. We also had an issue for talking to group 3 about InventoryFrontend and InventoryBackend because they decided to work on some issues for them. Towards the end of the sprint, I realized they hadn’t reached out, so our group initiated a conversation to confirm they were all set. Once that conversation finished, I moved the issue to the done column. I also reviewed the issues for getting the InventoryBackend test working and getting the InventorySystem General test and build working. The InventoryBackend issue had been left in the “Needs Review” column for an extended amount of time and needed extensive work to resolve merge conflicts.

As a team, I think we did fairly well, but we needed to keep up with the issues that needed review more. We also did not rotate reviewing issues as we originally planned when creating our working agreement, nor did we set up a system to ensure that the majority of issues are not reviewed by the same few people. Because a lot of the issues involved us working together, we didn’t address how we would stay on top of issues that needed review. As an individual, I need to make sure I review issues as soon as possible to prevent a teammate from having to work through multiple merge conflicts. I think as a group we could benefit from a specific plan for keeping up with completed issues. For the next sprint, I plan on picking a schedule for checking the “Needs Review” column.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.