Monthly Archives: May 2024

Chapter 6 of “Apprenticeship Patterns” introduces the concept of “Sweep the Floor”.

In Chapter 6 of “Apprenticeship Patterns,” the concept of “Sweep the Floor” is introduced, highlighting the significance of humility and a readiness to undertake even the simplest tasks as a way of learning and contributing effectively to a software development team.

The pattern starts by addressing the mistaken belief that novice software developers should concentrate solely on coding or tackling intricate challenges. Instead, the authors contend that there’s considerable merit in taking on routine or seemingly unimportant tasks like documentation, testing, or tidying up code. Engaging in these activities offers chances to gain knowledge, grasp the project’s scope, and establish trust among team members.

The pattern advises apprentices to embrace humility and adopt a mindset where they are eager to contribute in any capacity, regardless of a task’s complexity or appeal. By undertaking modest, low-stakes duties, apprentices can incrementally bolster their self-assurance, broaden their exposure to various facets of software development, and exhibit their dedication to the team’s prosperity.


An intriguing facet of this pattern lies in its stress on the significance of context and comprehending the overarching objectives of the project or organization. Through involvement in activities such as documentation or testing, apprentices can acquire a deeper understanding of the project’s needs, limitations, and obstacles, which can serve as valuable insights for their subsequent contributions and decision-making processes.

Additionally, the pattern underscores the importance of collaboration and teamwork within software development. By engaging wholeheartedly in all project facets, irrespective of their level of experience or expertise, apprentices can position themselves as indispensable team players and forge robust connections with their peers.

This pattern is useful because it provides a practical approach to learning and professional development for apprentices in software development. By promoting humility and a readiness to tackle even the simplest duties, it aids apprentices in constructing a strong groundwork of competencies, understanding, and connections within their teams. Furthermore, by actively contributing to the team’s achievements in diverse roles, apprentices can hasten their learning curve, amass invaluable experience, and eventually evolve into more proficient and esteemed members of the software development realm.

Ultimately, this pattern encourages a mindset that values practical proficiency, self-directed learning, and a commitment to continuous skill development. It can inspire me to approach my intended profession with a greater sense of agency, confidence, and readiness to tackle real-world challenges.

From the blog CS@Worcester – THE SOLID by isaacstephencs and used with permission of the author. All other rights reserved by the author.

Sprint 3 Retrospective

Docker Build Pipeline:

  • Issue: Path not correct for copying files into Docker Compose container.
  • Solution: Adjust copy statements in Dockerfile to specify correct paths.
  • Implementation: Modify paths relative to Docker directory, e.g., bin/ if in src/.
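
A hedged sketch of the fix, assuming the Compose file builds from the project root and the Dockerfile lives in a docker/ directory (both layout details are assumptions, not the project’s actual files). The key point is that COPY paths resolve against the build context, not the Dockerfile’s own location:

    # docker-compose.yml (sketch)
    services:
      backend:
        build:
          context: .                      # project root is the build context
          dockerfile: docker/Dockerfile

    # docker/Dockerfile (sketch) -- COPY paths are relative to the context above
    COPY src/bin/ /app/bin/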

Development Docker:

  • New scripts added for installing and running nodemon with the package.
  • Name the shell script noderundev for consistency.
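
As a sketch, the package.json scripts and the noderundev wrapper might look like this; the script names and entry path are assumptions based on the notes above:

    {
      "scripts": {
        "dev": "nodemon src/index.js"
      },
      "devDependencies": {
        "nodemon": "^3.1.0"
      }
    }

    #!/bin/bash
    # noderundev -- wrapper so everyone installs and starts the
    # hot-reload server the same way
    npm install && npm run dev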

Reconfigure Pipeline:

  • Existing issue persists for team OL1.
  • Keep test.sh temporarily until linters are added to config.
  • Reporting system pipeline configuration fails due to a missing ESLint configuration file.
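
For reference, a minimal ESLint configuration (e.g., an .eslintrc.json) that would satisfy the pipeline might be as small as this; the environment and rule set are assumptions, not the project’s actual settings:

    {
      "env": { "node": true, "es2022": true },
      "extends": "eslint:recommended"
    }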

Linters Integration:

  • Linters not in repo for pipeline.
  • Documentation required for pipeline operation.
  • Branches cleanup ongoing.

Shell Scripts for Convenience:

  • Provide shell scripts in directories for ease of use.
  • Example: build/backend.dev for npm run dev.

Backend Development:

  • Start nodemon instead of npm for backend changes.
  • Ensure nodemon reloads backend on code changes.

Scripts Management:

  • Separate scripts for dev and prod environments.
  • Create backenddevup.sh for backend development.
  • Later adjust for production environment.
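
A sketch of what backenddevup.sh might contain, assuming a separate Compose file for development (the file and service names are assumptions):

    #!/bin/bash
    # backenddevup.sh -- start the backend for development with hot reload
    docker compose -f docker-compose.dev.yml up backend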

Startup Procedures:

  • Determine startup procedure for MongoDB and RabbitMQ.
  • Currently, Docker runs MongoDB and RabbitMQ for production.
  • Scripts needed to start these services.
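
A possible starting point for such a script, assuming the two services are defined in the Compose file under these names:

    #!/bin/bash
    # Start the supporting services in the background
    docker compose up -d mongodb rabbitmq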

Research:

  • Update the ESLint Docker image for better functionality.
  • Transfer existing config to new format for better linting.
  • Investigate linting for syntax and code errors.
  • Configure Gitpod workspace for linting and formatting.
  • Set up .gitpod.yml for npm install and linting.
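
As a sketch, the Gitpod configuration could be as small as this (the lint script name is an assumption):

    # .gitpod.yml
    tasks:
      - init: npm install
        command: npm run lint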

Code Formatting:

  • Utilize Prettier in pipeline for automatic code formatting.
  • Ensure neat documentation of all procedures.
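
One way this could look as a GitLab CI job; the stage layout and Node image are assumptions, while prettier --check and eslint are the standard CLI invocations:

    # .gitlab-ci.yml (sketch)
    lint:
      stage: test
      image: node:20
      script:
        - npm ci
        - npx prettier --check .
        - npx eslint .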

From the blog CS@Worcester – THE SOLID by isaacstephencs and used with permission of the author. All other rights reserved by the author.

The Basics of Security Testing

Security testing is a very important aspect of software development, aimed at verifying that software systems are free from design or configuration flaws that could compromise their security. It involves evaluating systems throughout the software development lifecycle to ensure that services and information remain available to authorized users and protected from unauthorized access or tampering.

The main goals of security testing include identifying digital assets, classifying security vulnerabilities, assessing the potential consequences of exploitation, reporting findings for remediation, and providing recommendations for addressing vulnerabilities. Basically, the primary goal of security testing is to determine the security status of an information system.

Security testing ensures that software complies with security standards, which enhances software quality and promotes user trust. Continuous security testing is essential because of the constantly evolving threat landscape and the potentially devastating costs of cyberattacks.

When data is not securely protected, its vulnerabilities can be exploited, resulting in data breaches. A case study involving Marriott International shows the significance of security testing in safeguarding such sensitive data to prevent costly security breaches. Marriott experienced two major data breaches, in 2014 and 2020, exposing the personal information of millions of guests. Furthermore, statistics show that the average cost of a data breach reached a record $4.45 million in 2023. Such a financial blow could mean the end of many companies (Chavarria).

The key principles of security testing include comprehensiveness, realistic tests, continuity, and collaboration between development, operations, and security teams. In other words, security testing needs to be rigorous, but also practical enough that it can be adapted and applied across the different operations of the program system.

To conduct security testing effectively, the security of a software should be a planned activity in every software development project. Developers should be proactive in addressing vulnerabilities and implement solutions as soon as possible. Automated testing should be integrated into continuous integration and delivery pipelines to ensure that all code complies with security policies.

Security testing is something that I have not learned much about, but this was a good introduction to why it is important and the principles by which it is implemented. In the world of business and competition, good code is not just clean, effective, and efficient code; it must also be secure code. As I start to work more with things that deal with logins and user information, I will need to pay more attention to how my code keeps this data secure, so as not to leave the data vulnerable to breaches.

Overall, security testing is important for identifying and mitigating security risks throughout the software development process, which ultimately enhances the security of software systems and protects valuable digital assets.

Source: Security Testing Fundamentals by Jason Chavarria

From the blog Stories by Namson Nguyen on Medium by Namson Nguyen and used with permission of the author. All other rights reserved by the author.

Behavior-Driven Development

Behavior-driven development is a shift in software development practices, aiming to shorten feedback loops and improve efficiency. This article explores BDD and the move from traditional waterfall models to feedback-driven methods. It emphasizes the connections between analysis, testing, coding, and design within a loop of continuous feedback, leading to more effective software development. As a student, understanding cutting-edge methods like BDD is crucial. I chose this resource to go deeper into the details of behavior-driven development: its principles, implementation strategies, and the benefits it offers in terms of efficiency and quality assurance. Behavior-driven development focuses on behavior, collaboration, and continuous improvement, which aligns with my class’s ideas about developing great working software solutions. The source’s discussion of behavior-driven development’s misconceptions, especially regarding its association with UI testing, was interesting. Looking ahead, I aim to use behavior-driven development principles in my development workflow. By adopting a test-driven analysis approach, I want to gain a deeper understanding of system behavior, prioritize features effectively, and deliver value-driven software solutions.

Behavior-driven development offers strong communication, a shorter learning curve, and high visibility. With a shared language, it’s easier for everyone to understand the project’s development, and BDD can reach a bigger audience. Because BDD is expressed in simple language, learning it is shorter and easier. This source has sparked a curiosity to explore behavior-driven development frameworks like Cucumber and Gherkin to articulate behavior-driven tests effectively (see the sketch below). There are many rules guiding BDD’s principles, and it can be a little tough at first, but with plenty of practice it is a skill that can be mastered. The journey through BDD’s principles, misconceptions, and real-world applications has been very interesting. I enjoyed reading about behavior-driven development and how it works in software development. It has given me a deeper understanding of iterative development, collaboration, and user-centric design. Using a behavior-driven development approach to software development, I look forward to harnessing its power to drive efficiency, quality, and customer satisfaction in my future projects. BDD isn’t just a method; it’s a mindset that focuses on continuous learning, improvement, and innovation in software development.
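
To give a flavor of the shared language, here is a small Gherkin scenario; the feature and steps are invented for illustration, not taken from the article:

    Feature: Guest greeting
      Scenario: Returning guest sees a personalized greeting
        Given a guest named "Ada" has visited before
        When she signs in
        Then she sees the message "Welcome back, Ada"

Each line reads like plain English, which is exactly what makes BDD accessible to non-developers.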

https://semaphoreci.com/community/tutorials/behavior-driven-development

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.

AI in Software Testing: A Look at the Future

This blog post explores the growing influence of Artificial Intelligence (AI) in software testing, drawing inspiration from the podcast “AB Testing: All We Talk About is AI” (Episode 187: All We Talk About is AI).

The Rise of AI in Software Development

AI is transforming various aspects of software development, and testing is no exception. AI-powered tools are being utilized in several ways, including:

  • Automating Repetitive Tasks: AI can automate repetitive testing tasks, such as regression testing, freeing up human testers to focus on more complex scenarios and exploratory testing.
  • Generating Test Cases: AI can analyze user behavior and system data to automatically generate comprehensive test cases, ensuring thorough test coverage.
  • Defect Detection: Machine learning algorithms can be trained to identify bugs and defects in code with greater accuracy and efficiency than traditional methods.
  • Performance Optimization: AI can analyze performance data and suggest improvements to optimize software speed and responsiveness.

Impact on QA Professionals

While AI might seem like a potential replacement for human testers, it’s more likely to become a valuable tool in the QA toolbox. Here’s how:

  • Increased Efficiency: Automation of repetitive tasks allows QA testers to focus on higher-level testing strategies and leverage their expertise for more critical thinking and problem-solving.
  • Improved Accuracy: AI-powered tools can assist in catching bugs and defects that might be missed with manual testing alone, leading to higher quality software releases.
  • Faster Time to Market: By automating repetitive tasks and enhancing testing efficiency, AI can contribute to faster software release cycles.

The Future of QA with AI

The future of software testing is likely to see a deeper integration of AI, potentially leading to:

  • Self-Learning Testing: Imagine AI that can learn from its testing experiences and continually improve its strategies over time.
  • Context-Aware Testing: AI could analyze the context of a software application, such as its target audience or intended use, and tailor its testing approach accordingly.
  • Proactive Bug Prevention: AI might be able to predict potential issues before they even occur, allowing developers to address them early in the development cycle.

Challenges and Considerations

While AI offers significant benefits, it’s important to acknowledge the challenges as well:

  • Over-reliance on Automation: Overdependence on AI for all testing aspects should be avoided. Human expertise remains crucial for strategic thinking and creative test case design.
  • Explainability and Bias: AI algorithms can be complex, making it challenging to understand how they arrive at their conclusions. It’s vital to be aware of potential biases in AI models to ensure fair and unbiased testing practices.
  • The Human Element: The human touch will always be essential in QA. AI cannot replace the critical thinking, communication, and collaboration skills that are vital for successful software testing.

Conclusion

The rise of AI presents both challenges and opportunities for software testing professionals. By embracing AI as a valuable tool and continuously developing our skill sets, QA professionals can ensure they remain a critical function in the ever-evolving world of technology.

Take a look at the podcast: https://podcasters.spotify.com/pod/show/abtesting/episodes/Episode-187-All-we-talk-about-is-AI-e2a1sk4/a-aae4uv8

From the blog CS@Worcester – Site Title by Iman Kondakciu and used with permission of the author. All other rights reserved by the author.

Path Testing: Your Guide to Unveiling the Hidden Bugs in Software

Welcome back, fellow coders! Today, I’m revisiting a technique called path testing.

Why is Path Testing Important?

Software development thrives on creating programs that function flawlessly, regardless of user interaction. Traditional testing methods might miss certain sections of code depending on user choices. Path testing, however, takes a different approach. It systematically executes every possible path a program can take, significantly increasing the likelihood of encountering and eliminating potential errors.

Here’s how path testing elevates your software development game:

  • Enhanced Bug Detection: Think of bugs like sneaky goblins hiding in the castle’s shadows. Path testing, by meticulously traversing every path, shines a light on these goblins, exposing them before they can cause problems for users.
  • Improved Software Quality: Just like a well-maintained castle provides a secure and comfortable environment, path testing leads to the creation of high-quality software. Identifying and rectifying errors early on ensures a more robust and reliable program.
  • Increased Confidence in Functionality: Having meticulously explored every potential path within the program, testers gain a heightened sense of assurance. They know, with greater confidence, that the program will perform as intended, leading to a more predictable and stable user experience.

Exploring the Different Levels of Path Testing

Path testing isn’t a one-size-fits-all approach. There are various levels of coverage, each focusing on a specific aspect of the program’s execution paths:

  • Statement Coverage: This foundational level resembles meticulously walking across every single floorboard within the castle. It ensures that every single line of code within the program is executed at least once during testing.
  • Decision Coverage: Taking things a step further, decision coverage is like exploring every hallway and doorway, ensuring you’ve taken both the left and right turns at every intersection. It guarantees that each decision point within the program (such as if statements and loops) is evaluated with both possible outcomes – true and false.
  • Condition Coverage: This is the most rigorous level, akin to meticulously checking every wall and secret passage within the castle. It ensures that each individual condition within a decision (e.g., the expression in an if statement) is evaluated to be both true and false at least once.
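
To make the difference between these levels concrete, here is a small hypothetical TypeScript sketch; the function is invented purely for illustration:

    // One decision (the if) containing two conditions.
    function canEnter(age: number, hasTicket: boolean): boolean {
      if (age >= 18 && hasTicket) {
        return true;
      }
      return false;
    }

    // Statement coverage: canEnter(30, true) and canEnter(10, true)
    // together execute every line (both return statements).
    // Decision coverage: the same two tests also make the if evaluate
    // to both true and false overall.
    // Condition coverage: additionally vary each condition on its own,
    // e.g. canEnter(30, false), so that age >= 18 and hasTicket are
    // each evaluated to true and false at least once.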

The Path to High-Quality Software

By incorporating path testing into the software development lifecycle, developers gain a valuable tool for creating exceptional applications. This structured approach ensures comprehensive coverage of potential execution paths, leading to the identification and rectification of errors before they manifest as real-world problems.

Inspired by: Path Testing: The Coverage

From the blog CS@Worcester – Site Title by Iman Kondakciu and used with permission of the author. All other rights reserved by the author.

Equivalence Partitioning and Boundary Value Analysis – Effective Techniques for Test Case Design

This week, I am revisiting some fundamental test case design techniques: equivalence partitioning and boundary value analysis. While these terms might sound complex, they offer a structured and efficient approach to software testing, particularly for numerical inputs or situations with defined input ranges.

Equivalence Partitioning: Dividing the Input Landscape Strategically

Imagine a program that validates user age for login purposes. Traditionally, one might be tempted to test every single possible age value from 0 to 120 (or whatever the defined limit may be). This brute-force approach, however, quickly becomes impractical and inefficient as the number of possible inputs grows. Equivalence partitioning offers a more strategic solution.

Equivalence partitioning involves dividing the entire set of possible input values (the input domain) into distinct classes where the program is expected to behave similarly for all values within a class. These classes are called equivalence partitions. In the age validation example, we could define the following equivalence partitions:

  • Valid Ages: This partition encompasses all ages that fall within the expected range for a user (e.g., 0 to 120).
  • Invalid Ages: This partition includes all values outside the valid range (e.g., negative numbers or ages greater than 120).
  • Empty or Null Values: This partition considers scenarios where the user leaves the age field blank or enters an invalid value that evaluates to null.

By identifying these partitions, we can significantly reduce the number of test cases needed for comprehensive testing. Instead of testing every single age within the valid range, we can select representative test cases from each partition. For example, we could test valid ages with values at the beginning, middle, and end of the range (e.g., 0, 30, and 120). Similarly, we could test invalid ages with a negative number and a value exceeding the limit.
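
Here is a minimal TypeScript sketch of this idea; the validateAge implementation is hypothetical and merely stands in for the program under test:

    // Hypothetical validator: accepts whole-number ages from 0 to 120.
    function validateAge(input: string): boolean {
      if (input.trim() === "") return false;       // empty/null partition
      const age = Number(input);
      return Number.isInteger(age) && age >= 0 && age <= 120;
    }

    // One representative value per partition instead of every possible age:
    const partitionCases: Array<[string, boolean]> = [
      ["0", true], ["30", true], ["120", true],    // valid ages
      ["-5", false], ["200", false],               // invalid ages
      ["", false],                                 // empty input
    ];
    for (const [input, expected] of partitionCases) {
      console.assert(validateAge(input) === expected, `failed for "${input}"`);
    }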

Boundary Value Analysis: Sharpening Our Focus on Critical Areas

Equivalence partitioning provides a solid foundation for test case design. However, it’s important to pay close attention to the boundaries or edges of each partition. This is where boundary value analysis comes into play. Boundary value analysis focuses on testing the specific values that lie at the borders of each equivalence partition. This includes:

  • Minimum and Maximum Valid Values: In the age validation example, this would involve testing the program’s behavior with values at the beginning (0) and end (120) of the valid age range.
  • Values Just Above and Below the Valid Range: This involves testing one value above the maximum valid age (e.g., 121) and one value below the minimum valid age (e.g., -1).

The rationale behind testing these boundary values is that programs are often more susceptible to errors at the edges of their input domains. By testing these specific values, we can identify potential issues that might be missed by random testing within the valid range.
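
Continuing the same sketch, the boundary-value cases target the exact edges of the hypothetical 0–120 range (reusing validateAge as defined above):

    // Boundary values: the edges of the valid partition and their neighbors.
    const boundaryCases: Array<[string, boolean]> = [
      ["-1", false],  // just below the minimum
      ["0", true],    // minimum valid value
      ["120", true],  // maximum valid value
      ["121", false], // just above the maximum
    ];
    for (const [input, expected] of boundaryCases) {
      console.assert(validateAge(input) === expected, `failed for "${input}"`);
    }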

Conclusion

Equivalence partitioning and boundary value analysis are valuable tools for software testers. They promote efficient test case design, improve test coverage, and ultimately contribute to the development of high-quality software.

From the blog CS@Worcester – Site Title by Iman Kondakciu and used with permission of the author. All other rights reserved by the author.

Retrospective – Sprint #3

During this sprint, I contributed to 3 issues:

  1. Determine what needs to be done on GuestInfoFrontend – GuestInfoFrontend/issues/88
    (As a whole team, we explored the GuestInfoFrontend for any improvements, and created tickets for teams next semester)
  2. Verifying that InventoryAPI has the correct extensions, linters, and pipeline stages – InventoryAPI/issues/25
    (As a whole team, we reviewed and made changes to files in the InventoryAPI to ensure extensions, linters, and pipeline were all set for next semester)
  3. Verifying that all Thea’s Pantry projects have the correct extensions, linters, and pipeline stages – Inventory Backend – InventoryBackend/issues/101
    (3 of us reviewed and made changes to files in the InventoryBackend to ensure extensions, linters, and pipeline are all set for next semester)

Again, this sprint I also assisted in reviewing some issues for the team.

This sprint was a breeze; all of us were comfortable working with most systems at this point, and we were able to complete almost every one of the issues we set out to do. We did have a little hiccup with the GuestInfoFrontend, as we were unsure of the process to start a backend and frontend hot-reload instance after moving from Docker to Gitpod, but after some direction from Professor Wurst we were able to continue with our “Determine what needs to be done on GuestInfoFrontend” issue. Other than that, we didn’t run into any problems over the duration of the sprint, and we completed more than 75% of our work very quickly this time around too.

I don’t think there is much, if anything, for us to improve on from this sprint. Since most of the issues we worked on were team-collaborated, we spent less time on each issue than in prior sprints, as we were able to collaborate and fix any bugs with ease. Any issues that team members did take on themselves were done without much, if any, assistance beyond peer review, and overall the team worked like a “well-oiled machine” this sprint.

As a team, we killed it this sprint. We improved on our weak areas from our last two sprints and further solidified what we did well. I feel like communication can always be improved; we had made strides from our first sprint, but it plateaued at an acceptable level for sprints #2 and #3. Other than that, I can’t say there was anything we should have done differently, as we were on point this time around. Our in-person communication was a lot better than our Discord communication, but in the end neither was bad at all; we just could have done better in some areas.

As an individual, I felt I could have done more to spread the load of work, since compared to our other sprints I took on the least amount of work this time around. Looking back to sprint #1, though, I feel it balanced out over the whole year, since I took on a large share of the work in that first sprint. I also could have been better at communicating when I would be available for calls; with school and life being busy, I sometimes joined Discord later than our usual time without explicitly saying I would be doing so.

In the end, I feel this sprint was our best yet, since we completed almost all of our issues (one wasn’t completed), spread our burden of work, and communicated the best we had the entire semester. While there is always going to be room for some improvement, I feel we found our rhythm as a team this time around and operated at our maximum efficacy.

From the blog CS@Worcester – Eli's Corner of the Internet by Eli and used with permission of the author. All other rights reserved by the author.

Crafting Mastery: Deliberate Practice in Software Development

Summary:

The “Practice, Practice, Practice” pattern emphasizes the importance of deliberate practice and continuous improvement in mastering any skill, particularly in the realm of software development. Drawing from George Leonard’s concept of mastery and K. Anders Ericsson’s research on deliberate practice, the pattern highlights the necessity of carving out dedicated time for practice separate from daily professional responsibilities. While an ideal scenario involves structured exercises and mentorship, the reality often requires individuals to take initiative in their own skill development.

Reaction:

This pattern strikes a chord with me as it underscores the essence of lifelong learning and skill refinement. The notion that mastery is not merely a destination but a journey fueled by deliberate practice resonates deeply. It challenges the notion of perfection in favor of embracing imperfection as a catalyst for growth. Furthermore, the emphasis on creating a stress-free and playful environment for practice aligns with my belief in the importance of experimentation and exploration in learning.

Interest and Utility:

What I find intriguing about this pattern is its application of martial arts principles to software development. The concept of code katas, akin to choreographed movements in martial arts, offers a structured framework for practicing fundamental coding skills. Moreover, the emphasis on short feedback loops and the integration of public performance within a community of craftsmen underscores the collaborative nature of skill development in the tech industry.
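
As a tiny illustration of what a kata exercise can look like (my own example, not one from the book), consider the classic FizzBuzz warm-up:

    // FizzBuzz: a classic kata for practicing fundamentals.
    function fizzBuzz(n: number): string {
      if (n % 15 === 0) return "FizzBuzz";
      if (n % 3 === 0) return "Fizz";
      if (n % 5 === 0) return "Buzz";
      return String(n);
    }
    for (let i = 1; i <= 15; i++) console.log(fizzBuzz(i));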

Impact on Professional Outlook:

As someone aspiring to excel in software development, this pattern has prompted me to reconsider my approach to skill acquisition. Instead of viewing mistakes as setbacks, I now perceive them as invaluable learning opportunities. By committing to regular practice sessions and seeking feedback from peers, I aim to cultivate a growth mindset and continuously refine my coding abilities. Additionally, the suggestion to explore timeless resources like “Programming Pearls” for practice exercises has inspired me to delve deeper into the fundamentals of computer science.

Disagreement:

While I agree with the overarching principles of the “Practice, Practice, Practice” pattern, I believe there’s a need for acknowledgment of individual learning preferences and constraints. Not everyone thrives in a structured dojo environment, and some may prefer solitary practice or alternative forms of skill development. Therefore, while code katas and group sessions offer valuable avenues for improvement, they may not be universally applicable or accessible to all aspiring developers.

Conclusion:

In conclusion, the “Practice, Practice, Practice” pattern serves as a poignant reminder of the importance of intentional practice in achieving mastery in software development. By embracing the principles of deliberate practice, seeking feedback, and exploring diverse learning resources, individuals can embark on a journey of continuous growth and skill refinement. As I incorporate these insights into my own professional development, I’m excited to see how regular practice and reflection will shape my journey toward mastery in software engineering.

From the blog CS@Worcester – Site Title by rkaranja1002 and used with permission of the author. All other rights reserved by the author.

Sprint 3 Retrospective

Introduction

  • In this sprint, our primary focus was on rigorously testing the frontend developed during sprint 2, applying the insights and frameworks we had discussed with team 2. This sprint appeared significantly shorter than the extensive sprint 2, partly due to the lighter workload with a target of only 16 points. This more manageable workload allowed us some capacity to address and rectify lingering issues from the previous sprint.
  • The brevity of this sprint highlighted the importance of continuous integration and testing, which enabled us to quickly identify and resolve issues. Our collaborative efforts with team 2 proved invaluable, as their feedback directly influenced our troubleshooting and refining processes. Moving forward, maintaining this synergy and applying these practices consistently will be crucial for smoothing out any future bumps in our development process and enhancing the overall quality of our project.

Links to Activity on GitLab

Reflections on the Sprint

What Worked Well:

The standout success of this sprint was our group communication. Facing challenges as a team, rather than individually, significantly eased our problem-solving process. Our review procedures were effective, facilitating a focused approach towards achieving our objectives.

Areas for Improvement:

The primary challenge we encountered was time management, particularly as progress on the front end depended on having a working template. This dependency delayed our efforts, resulting in a hectic sprint conclusion. Better planning or earlier template availability might mitigate similar issues in future sprints.


Improvements for Team Performance

The team’s collaborative communication and problem-solving were key strengths this sprint, continuing a positive trend from the previous sprint. It’s crucial to sustain this momentum into the next sprint, incorporating some strategic improvements:

Improvements for the Next Sprint:

  1. Consistent Scheduling: To avoid the congestion experienced towards the end of the last sprint, establishing a more consistent schedule for meetings could help in better time management and distribute tasks more evenly throughout the sprint.
  2. Balanced Division of Labor: We should continue to monitor and adjust the workload among team members to ensure tasks are evenly distributed, preventing any team member from being overwhelmed while others have less to do.
  3. Streamlined Communication Channels: Building on our previous success, maintaining all critical communications in a centralized, organized, and easily accessible system will enhance clarity and continuity, aiding in more effective decision-making and problem-solving.

Personal Improvements

Reflecting on my personal challenges during this sprint, specifically around managing merge requests correctly, I am committing to the following:

  1. Proactive Communication: To prevent and swiftly address any uncertainties or errors in my work, I commit to being more proactive in seeking feedback and clarifications from team members.
  2. Frequent Check-Ins: In realizing the significance of team alignment, I commit to checking in more frequently with my team members. By maintaining regular communication and seeking feedback, I aim to ensure that our efforts remain aligned towards our common objectives throughout each sprint.

From the blog CS@Worcester – CS: Start to Finish by mrjfatal and used with permission of the author. All other rights reserved by the author.