Author Archives: Winston Luu

Sprint 1 Retrospective

Created a separate branch from the frontend to solve the CORS/SSL issue

Created epics for larger tasks and broke them down

Organized tasks into issue boards

Prepared for Sprint 2 by adding new tasks

Documented important information regarding issues and how they were solved

In sprint 1, we were able to mostly complete everything we planned out, needing only some revisions in certain areas. As a team, we worked well with each other and communicated effectively to ensure all tasks were progressing in a timely manner. Additionally, we looked to each other for help when we were stuck and didn’t know how to progress further, preventing teammates from falling behind in their tasks. What didn’t work well occurred near the end of the sprint, when a teammate didn’t fully complete their task as planned, requiring another person to finish it in sprint 2. This problem could have been avoided if the team had incorporated weekly code reviews to ensure that all areas of a task are looked over and nothing is forgotten. As the scrum master for the team, I plan to push for this practice in future meetings.

Another improvement the team will implement in sprint 2 is better meeting time management. Since we only meet twice a week as a team, we should use this time to share the progress we have made with the whole team. This could be any code written or any interesting articles that are useful to know. The primary goal of the capstone is to simulate a real development environment and solve tasks effectively. Equally important, however, is the opportunity to absorb and learn as much as possible. Something your teammates discover may prove valuable to you in your future career.

To improve as an individual, I want to get involved in other people’s tasks and try to offer my input in any way possible. Instead of being preoccupied with my own assigned tasks, widening my scope of work to include my teammates’ tasks will allow me to grow as a developer and expose me to a wider variety of work.

One pattern from Apprenticeship Patterns that aligns with this sprint is “Confront Your Ignorance”. The pattern emphasizes the importance of recognizing and addressing gaps in your knowledge. Instead of avoiding unfamiliar topics, you should actively seek out challenges that push you to learn. This involves identifying areas where you lack expertise, setting goals to improve, and dedicating time to research, practice, and experiment. By taking on tasks outside your comfort zone, you develop problem-solving skills, adaptability, and a deeper understanding of new concepts. It pushes you to adopt a continuous growth mindset and helps you become a more effective and well-rounded developer. Going into capstone, I knew that I wanted to push myself to learn as much as possible, which is why for the first sprint I chose a task that I had zero prior knowledge about. In order to work on my issue, I had to research CORS and SSL certificates, both of which are crucial concepts in web security and backend development. This process involved reading documentation, troubleshooting errors, and experimenting with different configurations to understand how they functioned. Choosing this issue has also exposed me to the use of proxies, which route frontend requests to the backend. Utilizing a proxy will ideally allow the HTTP backend URL to connect to the HTTPS frontend URL, as sketched below. In sprint 2, I plan to experiment more with proxies.
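
To give a concrete picture of that idea, here is a minimal sketch of a development proxy built with Express and the http-proxy-middleware package; the ports, route prefix, and URLs are hypothetical placeholders rather than our actual project configuration.

```javascript
const express = require("express");
const { createProxyMiddleware } = require("http-proxy-middleware");

const app = express();

// Forward any /api/* request from the HTTPS frontend dev server
// to the plain-HTTP backend, avoiding cross-origin requests entirely.
app.use(
  "/api",
  createProxyMiddleware({
    target: "http://localhost:3000", // hypothetical backend URL
    changeOrigin: true, // rewrite the Host header to match the target
  })
);

app.listen(8080, () => console.log("Dev proxy listening on port 8080"));
```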

From the blog CS@Worcester – Computer Science Through a Senior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns Introduction

One aspect of the Apprenticeship Patterns reading that stuck with me was the requirements a person needs in order to be a successful software developer. A “growth mindset”, as it’s referred to in the text, is the idea that no one is born with inherent talent in a skill; it’s through dedication and failure that one reaches success. The growth mindset welcomes failure, because failure teaches you to tackle the problem from a different angle. It’s the belief that something can be improved if you’re willing to put in the work despite failing. From failing there is a lesson to be learned; from success you don’t learn anything. In a field that is constantly changing and improving, it’s crucial for a developer to have the ability to adapt to new features and languages. As a developer, you’re continuously learning how to do things more effectively than the last time, whether it be utilizing data structures or pushing out a feature. This aspect goes hand-in-hand with encouraging experimentation. Through experimenting with better solutions and failing, you learn why something doesn’t work or why one solution is preferable over others. On the other hand, through experimenting you might find solutions that actually improve efficiency, giving you solutions you can reference when solving similar problems.

Another essential aspect is being pragmatic over dogmatic. It’s easier to first make a practical solution and polish it later than to try to create a perfect solution the first time. It trains developers to adapt when faced with changing conditions and favors optimizing for efficiency. Additionally, I think this tackles the common problem of procrastination, because it allows developers to initially focus on solving the problem while disregarding how the code looks. It doesn’t matter how efficient your solution is if it doesn’t solve the initial problem in the first place.

Both of these aspects improve a developer’s efficiency and promote a continuous learning routine. They allow us as developers to improve our skills and better prepare us for the problems we will face in the future. The rapid pace of technological change today doesn’t guarantee a developer’s future in software, which is why we as developers must keep up with the pace and polish our skills.

From the blog CS@Worcester – Computer Science Through a Senior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Set-up Task #5

One useful aspect I found on the LibreFoodPantry website was the Organization page, which details the layout and relationships between development teams and the Coordinating Committee. The diagram shows how teams receive guidance from their shop manager, in addition to the committee. Before working on a FOSS project, it’s crucial to know who will handle project direction and issues, as well as how these decisions are handled at the top. Additionally, LFP links to the Agile Development Manifesto, which highlights the twelve Agile Development Principles. Ensuring that every developer who works on the open source project reads the manifesto will encourage team collaboration and smooth development progress. Having Agile as the backbone will also ensure that teams receive balanced input from both the customer and the shop manager.

The User-Stories documentation page will be crucial to development teams because it provides a rough sketch of how the food pantry is designed to operate. Having this information readily available will make decision-making and discussion easier, because each team member can ensure that everyone is on the same page in regards to an “end product”. The documentation also describes how the program will operate differently depending on whether or not the student/guest has accessed the food pantry before.

From the blog CS@Worcester – Computer Science Through a Senior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week 15 Post

This week’s post will focus on the powerful backend tool MongoDB and its data modeling features. MongoDB is a NoSQL database management system that’s mainly used to handle large datasets through a method called sharding. Sharding is a process that distributes data across multiple servers, which improves scalability. The tool offers drivers for various languages like C, C++, C#, Go, Java, Python, Ruby, and Swift, making it applicable in numerous types of projects. Additionally, MongoDB is well-suited for use with Node.js, Express.js, and other modern web frameworks, making it an integral part of the MEAN and MERN stacks.

MongoDB is different from other backend data management tools because it uses NoSQL: instead of tables and rows of data, it uses collections of documents made up of key-value pairs. These documents use Binary JSON, or BSON, which accommodates more data types than plain JSON, making it versatile across formats. Gillis adds, “Documents will also incorporate a primary key as a unique identifier. A document’s structure is changed by adding or deleting new or existing fields”, which allows for fast indexing and queries.
Unlike traditional relational databases with fixed schemas, “MongoDB doesn’t require predefined schemas. It stores any type of data. This gives users the flexibility to create any number of fields in a document, making it easier to scale MongoDB databases compared to relational databases”. This feature means developers don’t have to worry about migrating schemas in the future as product expectations change.
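
As a rough illustration of this flexibility, here is a small sketch using the official Node.js MongoDB driver; the connection string, database, collection, and field names are all hypothetical.

```javascript
const { MongoClient } = require("mongodb");

async function main() {
  // Hypothetical local connection string and database/collection names.
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const items = client.db("pantry").collection("items");

  // Two documents with different fields in the same collection;
  // each insert receives an auto-generated _id primary key.
  await items.insertOne({ name: "canned soup", quantity: 12 });
  await items.insertOne({ name: "rice", quantity: 3, expires: new Date("2026-01-01") });

  // Query on a field that only some documents contain.
  const expiring = await items.find({ expires: { $exists: true } }).toArray();
  console.log(expiring);

  await client.close();
}

main().catch(console.error);
```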

According to Alexander Gillis from TechTarget, “Organizations also use MongoDB for its ad-hoc queries, indexing, load balancing, aggregation, server-side JavaScript execution and other features”. Uber uses MongoDB to optimize ride-hailing logistics, ensuring low latency even during peak traffic. Adobe uses it to handle billions of document transactions efficiently, supporting user productivity on platforms like Adobe Creative Cloud.

When deciding to use MongoDB, it’s important to account for its shortcomings. For example, its non-relational nature can lead to duplicated data, which may require careful management. Additionally, it puts a lot of stress on memory resources. Another concern is security: because user authentication is not enabled by default, hackers have learned to target unsecured MongoDB databases. Despite this, MongoDB is an example of how modern backend tools cater to the evolving needs of developers. Its flexibility and scalability make it a dependable tool when dealing with the unpredictable demands in software.

I specifically chose this topic to research because MongoDB will be a crucial tool to have in my arsenal when working on full-stack projects. Additionally, it is the main backend tool for Thea’s Pantry, making this information vital for my capstone project. I hope to utilize this information next semester.

Blog: https://www.techtarget.com/searchdatamanagement/definition/MongoDB

From the blog CS@Worcester – Computer Science Through a Senior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week 14 Post

This week’s post will cover a powerful and popular server-side tool, Node.js, which enables developers to execute JavaScript code outside the browser. By allowing JavaScript to run on the server side, Node.js has transformed the way developers approach web development, enabling full-stack JavaScript applications. The blog by Matthew Clark highlights the common uses for Node.js; for example, Node.js is often used for building real-time applications, such as chat apps and collaborative tools. Additionally, Node.js is a popular choice for developing REST APIs and microservices, enabling efficient handling of API requests and data exchanges between client and server. Another common use case is single-page applications (SPAs). These applications rely on seamless interactions between the frontend and backend, and Node.js allows developers to use JavaScript across the entire stack.

The V8 JavaScript engine compiles JavaScript into machine code, ensuring fast execution and making speed one of the primary advantages of Node. The speed of a platform is crucial to creating an enjoyable user experience. No one enjoys a slow program. Additionally, Node can handle large volumes of simultaneous connections efficiently. Arguably the most important advantage is scalability. Node.js is designed to scale horizontally, allowing applications to handle increasing workloads by adding additional resources without significant architectural changes. Developers can use JavaScript on both the client and server sides, reducing context switching and improving productivity. This consistency simplifies debugging and maintenance.
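
Here is a minimal sketch of the non-blocking style that makes this possible, using only Node’s built-in http module; the port and the simulated delay are arbitrary.

```javascript
// Minimal Node.js server: the single-threaded event loop handles each
// request asynchronously, so many connections can be served at once
// without spawning a thread per client.
const http = require("http");

const server = http.createServer((req, res) => {
  // Simulate non-blocking work (e.g., a database call) with a timer.
  setTimeout(() => {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ message: "hello from Node.js" }));
  }, 100);
});

server.listen(3000, () => console.log("Listening on http://localhost:3000"));
```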

Node is used by major companies like Netflix and PayPal. Netflix in particular uses Node.js to improve application performance, specifically for the server-side rendering of their video streaming platform. Its lightweight nature has helped Netflix handle millions of user requests with reduced startup time and increased efficiency. PayPal made the switch to Node to unify their frontend and backend development. This shift resulted in a significant reduction in development time and improved application response times.

One drawback to this tool is its asynchronous programming model, which relies heavily on callbacks and promises. While this enhances performance, it can increase the complexity of debugging and code maintenance.
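
To illustrate, here is a small hypothetical sketch contrasting callback-style code with the flatter promise/async-await equivalent; the file names are placeholders.

```javascript
const fs = require("fs");

// Callback style: nesting grows with each dependent step ("callback hell").
fs.readFile("config.json", "utf8", (err, data) => {
  if (err) return console.error(err);
  fs.writeFile("backup.json", data, (err) => {
    if (err) return console.error(err);
    console.log("backup written");
  });
});

// Promise/async-await style: the same flow reads top to bottom.
const fsp = require("fs/promises");

async function backup() {
  const data = await fsp.readFile("config.json", "utf8");
  await fsp.writeFile("backup.json", data);
  console.log("backup written");
}

backup().catch(console.error);
```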

I chose this topic to research because Node.js is one of the most popular tools for connecting frontend and backend development, and for good reason. One other tool that I came across during my research was Deno, a modern runtime for JavaScript and TypeScript created by Ryan Dahl, the original developer of Node. Unlike Node, Deno is secure out of the box: it runs scripts in a sandboxed environment, requiring explicit permission to access files, networks, or the environment. Additionally, Deno has native TypeScript support and doesn’t require additional package managers.

Blog Post: https://dev.to/mattclark1025/why-node-js-for-web-development-in-2020-2ebc

From the blog CS@Worcester – Computer Science Through a Senior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week-13 Post

This week’s post will cover REST APIs: Representational State Transfer Application Programming Interfaces. One of the key principles of RESTful APIs is the separation between the frontend UI the user interacts with and the backend server. Postman’s blog highlights this as, “The client’s domain concerns UI and request-gathering, while the server’s domain concerns focus on data access, workload management, and security”. The primary purpose of REST APIs is to allow different software systems to interact and exchange data over the web. REST mainly focuses on stateless communication, where each request from a client contains all the information needed for the server to process it.

REST APIs use HTTP methods and standard URL structures to enable communication between clients and servers. HTTP methods play an essential role in REST APIs. These methods correspond to CRUD (Create, Read, Update, Delete) operations in software. The POST method is used to create, while GET retrieves data from the server. PUT and PATCH are used to update existing data, with PUT replacing the entire resource and PATCH modifying specific parts. DELETE removes data. In addition, REST APIs use status codes to indicate the outcome of an operation. For example, a 200 status code indicates a successful operation, 201 signifies resource creation, 404 means a resource was not found, and 500 represents a server error. Including appropriate status codes in API responses helps clients understand the results of their requests and handle errors effectively.
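
As a concrete sketch, the following hypothetical Express routes pair each HTTP method with an appropriate status code; the resource name and the in-memory data store are made up for illustration.

```javascript
const express = require("express");
const app = express();
app.use(express.json());

const items = new Map(); // hypothetical in-memory data store
let nextId = 1;

// GET retrieves data; 200 signals success, 404 signals "not found".
app.get("/items/:id", (req, res) => {
  const item = items.get(Number(req.params.id));
  if (!item) return res.status(404).json({ error: "not found" });
  res.status(200).json(item);
});

// POST creates a resource; 201 signals creation.
app.post("/items", (req, res) => {
  const id = nextId++;
  items.set(id, { id, ...req.body });
  res.status(201).json(items.get(id));
});

// DELETE removes data; 204 is the standard "success, no body" code.
app.delete("/items/:id", (req, res) => {
  items.delete(Number(req.params.id));
  res.status(204).end();
});

app.listen(3000);
```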

The blog post I researched by Postman highlights how REST is widely used across various industries. For example, e-commerce platforms use REST APIs to manage product information and process orders. Social media applications utilize REST APIs to handle user profiles and posts. Cloud services often provide REST APIs to allow developers to interact with their resources programmatically. The blog also mentions another type of API called SOAP, standing for Simple Object Access Protocol. SOAP is considered a protocol, while REST is considered a set of guidelines. Unlike REST, which typically exchanges JSON over HTTP using standard URLs, SOAP uses XML for sending data. One of the main reasons why SOAP might be preferred over the more popular REST is that SOAP supports WS-Security, which provides a framework for securing messages, including encryption, digital signatures, and authentication. This makes SOAP more suitable for applications handling sensitive data. Corporations like banks and hospitals dealing with sensitive user information could utilize SOAP to prevent information breaches.

These APIs provide a consistent way for systems to interact and exchange data while adhering to a set of well-defined principles. By understanding HTTP methods, status codes, and data formats, developers can create APIs that users can understand and use.

Blog: https://blog.postman.com/rest-api-examples/

From the blog CS@Worcester – Computer Science Through a Senior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Introduction Post for CS343

Hello, this is my introductory post for CS343. Feel free to check back every now and then to see what I blog about in this course.

From the blog CS@Worcester – Computer Science Through a Senior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week-18 Post

My second post this week will cover three techniques used in unit testing: Boundary Value Testing, Equivalence Class Testing, and Decision Table-Based Testing. Each plays a crucial role in validating software behavior and functionality. Boundary Value Testing focuses on the edges of input ranges. This technique identifies defects at the boundaries of input domains, where errors are most likely to occur. By testing the minimum, maximum, and edge values, testers can catch issues that might arise from off-by-one errors or other boundary-related bugs. This method is particularly effective because boundary conditions are common sources of defects in software applications. To utilize boundary value testing, first determine the minimum and maximum values for each input field, and second create test cases that include the boundary values (e.g., minimum, maximum, just inside, and just outside the boundaries).
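
For example, here is a small hypothetical sketch in JavaScript: a function that accepts ages 18 through 65, tested at and just outside each boundary (the function and its limits are made up for illustration).

```javascript
const assert = require("assert");

// Hypothetical function under test: accepts ages in [18, 65].
function isEligible(age) {
  return age >= 18 && age <= 65;
}

// Boundary value tests: minimum, maximum, just inside, just outside.
assert.strictEqual(isEligible(17), false); // just below minimum
assert.strictEqual(isEligible(18), true);  // minimum
assert.strictEqual(isEligible(19), true);  // just above minimum
assert.strictEqual(isEligible(64), true);  // just below maximum
assert.strictEqual(isEligible(65), true);  // maximum
assert.strictEqual(isEligible(66), false); // just above maximum

console.log("all boundary tests passed");
```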

Equivalence Class Testing divides input data into equivalent partitions, or classes, where test cases are expected to produce similar results. Instead of testing every possible input, testers select representative values from each class, significantly reducing the number of test cases needed. This method ensures that different inputs within the same class are treated equally by the software, helping identify any inconsistencies or unexpected behaviors across various input ranges. To utilize equivalence class testing, first group input values that are treated similarly by the system into classes, and second choose one representative value from each class for testing.
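
Continuing the hypothetical age example from above, the inputs split into three equivalence classes (below the valid range, within it, and above it), so one representative value per class suffices.

```javascript
const assert = require("assert");

// Same hypothetical function: accepts ages in [18, 65].
function isEligible(age) {
  return age >= 18 && age <= 65;
}

// One representative value per equivalence class.
assert.strictEqual(isEligible(10), false); // class: below valid range
assert.strictEqual(isEligible(40), true);  // class: within valid range
assert.strictEqual(isEligible(90), false); // class: above valid range

console.log("all equivalence class tests passed");
```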

Decision Table-Based Testing involves creating a table that maps different input conditions to their corresponding actions or outputs. This technique is especially useful for testing complex business logic and decision-making processes. By systematically covering all possible combinations of inputs and their respective outcomes, decision tables help ensure that all scenarios are accounted for and validated. This method enhances the thoroughness of testing by providing a clear and structured approach to handling diverse input conditions. To utilize decision table-based testing, first list all possible conditions (inputs) and actions (outputs), and second create a table with all possible combinations of conditions and their corresponding actions.
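
Here is a tiny hypothetical sketch of a decision table encoded as tests: two input conditions (membership and order size) map to a discount action, and the tests cover all four combinations.

```javascript
const assert = require("assert");

// Hypothetical business rule: discount depends on two conditions.
function discount(isMember, orderTotal) {
  const bigOrder = orderTotal >= 100;
  if (isMember && bigOrder) return 0.15;
  if (isMember && !bigOrder) return 0.05;
  if (!isMember && bigOrder) return 0.10;
  return 0.0;
}

// Decision table: every combination of conditions gets a test.
assert.strictEqual(discount(true, 150), 0.15);  // member, big order
assert.strictEqual(discount(true, 50), 0.05);   // member, small order
assert.strictEqual(discount(false, 150), 0.10); // non-member, big order
assert.strictEqual(discount(false, 50), 0.0);   // non-member, small order

console.log("all decision table tests passed");
```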

Boundary Value Testing, Equivalence Class Testing, and Decision Table-Based Testing are powerful techniques that enhance the effectiveness and efficiency of software testing. These testing techniques help ensure that software applications are robust, reliable, and capable of handling various input scenarios effectively. By incorporating these methods into your testing strategy, you can enhance test coverage, identify potential issues early, and deliver high-quality software that meets user expectations and business requirements.

Blog Post: https://celestialsys.com/blogs/software-testing-boundary-value-analysis-equivalence-partitioning/

From the blog CS@Worcester – Computer Science Through a Junior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week 18 Post

In this post I will cover integration testing and why we use it today. Integration testing is a critical phase in the software development lifecycle, focusing on the integration of individual components into a cohesive system. It ensures that various modules or subsystems work together as intended. One of the primary challenges in integration testing is ensuring comprehensive coverage of interactions between different components. Identifying the right integration points and scenarios to test can be complex, especially in large-scale projects with numerous dependencies.

Selecting appropriate test cases to validate integration points is crucial. It requires understanding how components interact and designing tests to simulate these interactions effectively. Failure to cover all integration scenarios may lead to undetected defects, impacting the reliability and functionality of the software.

Moreover, integration testing often involves testing across different environments and platforms, adding to the complexity. Ensuring compatibility and consistency across various configurations is essential for delivering a robust product. One of the primary hurdles is achieving comprehensive test coverage across all integration points. Prioritizing critical integration points and designing effective test scenarios are essential to address this challenge.

Another challenge is managing the dependencies and external services during integration testing. Mocking or simulating external dependencies may be necessary to isolate various parts for testing, but it can also introduce its own set of challenges, such as maintaining realistic testing environments.
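
As a rough sketch of what mocking an external dependency can look like, here is a hypothetical example that swaps a real payment service for a stub during a test; all of the names are made up.

```javascript
const assert = require("assert");

// Hypothetical module under test: depends on an external payment service.
function checkout(cart, paymentService) {
  const total = cart.reduce((sum, item) => sum + item.price, 0);
  const receipt = paymentService.charge(total);
  return { total, receiptId: receipt.id };
}

// Stub that simulates the external service without a network call,
// keeping the test fast and deterministic.
const fakePaymentService = {
  charge(amount) {
    assert.ok(amount > 0, "charge amount must be positive");
    return { id: "fake-receipt-1", amount };
  },
};

const result = checkout([{ price: 5 }, { price: 7 }], fakePaymentService);
assert.strictEqual(result.total, 12);
assert.strictEqual(result.receiptId, "fake-receipt-1");
console.log("checkout integration test passed");
```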

Furthermore, integration testing requires coordination among development teams working on different modules or services. Synchronizing changes and ensuring compatibility between components can be challenging, particularly in agile or distributed development environments.

Frameworks like Selenium are helpful for automating web browser interactions to test integrations between web components. For broader integration testing needs, companies might choose tools like Katalon Studio, which offers a comprehensive suite for web, mobile, desktop, and API testing. Additionally, some companies leverage enterprise-grade solutions like IBM Rational Integration Tester that provide robust features for complex integrations and compliance requirements. Ultimately, the choice of tool depends on the specific needs of the project and the company’s development environment.

Integration testing verifies the interactions between software modules, ensuring they function seamlessly as a unified system. Unlike unit testing, which examines individual components in isolation, integration testing focuses on how these components integrate and communicate with each other. It plays a crucial role in detecting issues arising from the integration of diverse elements, such as incompatible interfaces or conflicting behaviors. By identifying and addressing these issues early in the development process, integration testing helps prevent costly errors from surfacing in production. It’s a step towards delivering reliable, high-quality software that meets user expectations and business requirements.

Blog Post: https://www.opkey.com/blog/integration-testing-a-comprehensive-guide-with-best-practices and https://www.testlearning.net/en/posts/integration-testing

From the blog CS@Worcester – Computer Science Through a Junior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week 16 Post

This week’s blog post will cover System Testing and its main benefits. System Testing, as the name suggests, revolves around evaluating the entire system as a whole. It’s not just about scrutinizing individual components; it’s about ensuring that all parts seamlessly integrate and function as intended. This phase of testing comes after the completion of unit and integration testing, aiming to validate the system against its specified requirements. It involves subjecting the system to a barrage of tests to assess its compliance with functional and non-functional requirements. From testing the user interface to examining performance metrics, System Testing leaves no stone unturned in the quest for a robust and reliable software product. This method is most effective before launching your product, to ensure total coverage.

Security vulnerabilities can be a project’s nightmare. System Testing acts as a guardian, identifying security loopholes and ensuring the system is robust against potential attacks. One of the key tenets of System Testing is its focus on real-world scenarios. Instead of merely verifying technical functionalities, System Testing endeavors to simulate user interactions and workflows. By replicating typical usage scenarios, testers can unearth potential bottlenecks, usability issues, and even security vulnerabilities lurking within the system. Through testing and analysis, it offers valuable insights into the system’s readiness for deployment. Moreover, System Testing serves as a safeguard against post-release hurdles by preemptively identifying and preventing potential pitfalls.
System Testing does have its challenges, however. One crucial step is creating a comprehensive test plan, which ensures all bases are covered and no blind spots remain.

Like most of the testing techniques we have covered in class, tools play a pivotal role in streamlining the testing workflow. From test automation frameworks like Selenium and Cypress to performance testing tools like JMeter and Gatling, there’s a plethora of tools available to expedite the testing process. Leveraging these tools not only enhances efficiency but also empowers testers to uncover hidden defects more effectively.
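
For a flavor of what an automated system-level test looks like, here is a short hypothetical Cypress test that walks through a user workflow end to end; the URL and selectors are placeholders.

```javascript
// Hypothetical Cypress system test: exercises the whole stack the way
// a real user would, from page load to visible result.
describe("login workflow", () => {
  it("lets a user log in and see the dashboard", () => {
    cy.visit("https://example.com/login");  // placeholder URL
    cy.get("#username").type("testuser");   // placeholder selectors
    cy.get("#password").type("correct-horse");
    cy.get("button[type=submit]").click();
    cy.contains("Dashboard");               // assert the result is visible
  });
});
```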

System Testing stands as a cornerstone of software quality assurance, offering a panoramic view of the system’s functionality and performance. While it may pose its fair share of challenges, the insights gleaned from System Testing are invaluable in ensuring the delivery of a high-quality, robust software solution. By embracing System Testing, you’re essentially investing in the quality and reliability of your software. It’s the final hurdle before launch, guaranteeing a smooth user experience and a successful project.

Blog Post: https://blog.qasource.com/what-is-system-testing-an-ultimate-beginners-guide

From the blog CS@Worcester – Computer Science Through a Junior by Winston Luu and used with permission of the author. All other rights reserved by the author.