Author Archives: Arber Kadriu

Exploring Continuous Integration: A Pillar of Modern Software Quality Assurance (Week-14)

In the rapidly evolving world of software development, Continuous Integration (CI) has emerged as a key practice that significantly enhances the quality and efficiency of products. This week, we explore the critical role of CI in modern software development, highlighting its benefits, best practices, and essential tools.

What is Continuous Integration?

Continuous Integration is a development strategy where developers frequently merge code changes into a central repository, followed by automatic builds and tests. The main aim is to provide quick feedback so that if a problem arises, it can be addressed at the earliest opportunity.

Benefits of Continuous Integration

1. Early Bug Detection: Frequent integration tests help identify defects early, reducing the costs and efforts needed for later fixes.

2. Reduced Integration Issues: Regular merging prevents the ‘integration hell’ typically associated with a ‘big bang’ merge at the end of a project.

3. Enhanced Quality Assurance: Continuous testing ensures that quality is assessed and maintained throughout the development process.

4. Faster Releases: CI allows more frequent releases, making it easier to respond to market conditions and customer needs promptly.

Implementing Continuous Integration

1. Version Control: A robust system like Git is essential for handling changes and facilitating seamless integrations.

2. Build Automation: Tools such as Jenkins, Travis CI, and CircleCI automate building, testing, and deployment tasks.

3. Quality Metrics: Code coverage and static analysis help maintain high standards of code quality.

4. Automated Testing: A suite of tests, including unit, integration, and functional tests, is crucial for immediate feedback on the system’s health.

5. Infrastructure as Code: Defining environments in code, with containerization and orchestration tools like Docker and Kubernetes, ensures consistent environments from development to production.

Best Practices for Continuous Integration

1. Maintain a Single Source Repository: Centralizing code in one repository ensures consistency and simplicity.

2. Automate Everything: From compiling and testing to packaging, automation speeds up the development process and reduces human error.

3. Ensure Builds Are Self-Testing: Builds should include a comprehensive test suite, and a build should succeed only if every test passes (see the sketch after this list).

4. Prioritize Fixing Broken Builds: Addressing failures in the build/test process immediately keeps the system stable and functional.

5. Optimize Build Speed: A quick build process promotes more frequent code integration and feedback.
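
To make the third practice concrete, below is a minimal sketch of the kind of JUnit test a self-testing build might run on every integration. The `PriceCalculator` class and its discount rule are hypothetical, invented purely for illustration:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical production class, invented for illustration.
class PriceCalculator {
    // Assumed business rule: orders of $100 or more get a 10% discount.
    double totalWithDiscount(double subtotal) {
        return subtotal >= 100.0 ? subtotal * 0.9 : subtotal;
    }
}

// A self-testing build runs tests like these automatically after every
// merge and marks the build as failed if any assertion fails.
class PriceCalculatorTest {
    @Test
    void appliesDiscountAtOneHundredDollars() {
        assertEquals(90.0, new PriceCalculator().totalWithDiscount(100.0), 0.001);
    }

    @Test
    void leavesSmallOrdersUnchanged() {
        assertEquals(50.0, new PriceCalculator().totalWithDiscount(50.0), 0.001);
    }
}
```

In a CI setup, the build tool (Maven, Gradle, or similar) runs this suite on every push, so a broken build is noticed within minutes of the offending commit.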

Tools for Continuous Integration

  • Jenkins: An open-source automation server that manages and automates build, test, and deployment processes.
  • Travis CI: A hosted CI service for building and testing projects hosted on GitHub.
  • CircleCI: A CI/CD platform that integrates with several cloud environments.

Conclusion

Continuous Integration is essential, not just as a technical practice but as part of the culture within high-performing teams. It fosters a disciplined development environment conducive to producing high-quality software efficiently and effectively. For those seeking to delve deeper, “Continuous Integration: Improving Software Quality and Reducing Risk” by Paul M. Duvall et al. is an excellent resource.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective Blog

As we wrap up our second sprint, it’s time to pause, reflect, and glean insights from the journey we’ve undertaken together. Sprint 2 has been a period of significant learning, characterized by both triumphs and obstacles that have shaped our team dynamics and individual growth. In this retrospective, we’ll delve into the challenges we faced, the lessons we learned, and the strategies we’ll implement moving forward.

One of the pivotal challenges we encountered during this sprint was the coding of a UI component. Initially perceived as a straightforward task, it quickly evolved into a complex endeavor due to unforeseen changes in the underlying source code. These alterations necessitated extensive adjustments to our initial plans, leading to delays and frustration. However, amidst the challenges, we discovered an opportunity for growth. By grappling with the intricacies of the component and adapting to evolving requirements, we honed our problem-solving skills and gained a deeper understanding of UI development. This experience underscored the importance of flexibility and adaptability in the face of unforeseen circumstances, emphasizing the need to approach tasks with an open mind and a willingness to iterate and refine our approach.

Another significant aspect that influenced our sprint was the weight of certain child issues. What initially appeared as minor tasks turned out to have a more substantial impact on our workflow than anticipated. This discrepancy highlighted the importance of thorough planning and assessment when breaking down tasks and allocating resources. Moving forward, we recognize the need for a more nuanced approach to issue prioritization, ensuring that each task receives the appropriate level of attention and resources commensurate with its importance. By fostering a culture of careful planning and strategic resource allocation, we aim to mitigate the risk of unexpected bottlenecks and delays in future sprints.

Despite the challenges encountered, sprint 2 has been instrumental in fostering our team’s growth and development. We’ve had the opportunity to enhance our problem-solving skills, adapt to changing circumstances, and refine our communication and collaboration strategies. Effective communication emerged as a cornerstone of our approach, enabling us to navigate challenges and coordinate efforts seamlessly. Whether through online discussions or face-to-face meetings, we remained committed to fostering open dialogue and transparent communication channels, ensuring that everyone remained aligned and informed throughout the sprint.

Looking ahead, we recognize the importance of carrying forward the lessons learned from sprint 2. We are committed to prioritizing effective communication, proactive problem-solving, and meticulous planning to ensure the success of future endeavors. Additionally, we plan to leverage our experiences from this sprint to inform our approach in subsequent iterations. By documenting our challenges, solutions, and key takeaways in a “lessons learned” repository, we aim to create a knowledge base that will serve as a valuable resource for future sprints, enabling us to anticipate and mitigate potential obstacles more effectively.

In conclusion, sprint 2 has been a journey of growth, resilience, and collaboration. While we encountered our fair share of challenges along the way, each obstacle served as an opportunity for learning and development. Armed with newfound insights and a renewed sense of determination, we look forward to tackling the challenges that lie ahead and achieving our goals as a team. With a shared commitment to continuous improvement and a supportive team environment, we are confident in our ability to overcome any obstacle and emerge stronger than before.

Links to issues covered:

Review GUI mockup
https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem/checkoutguestfrontend/-/issues/45

Coding new UI and fixing gitpod implementation
https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem/checkoutguestfrontend/-/issues/43

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

Embracing Test-Driven Development for Quality Software (Week-13)

In the evolving world of software development, methodologies that enhance code quality and project efficiency are highly prized. Test-Driven Development (TDD) stands out as a pivotal methodology, advocating for tests to be written before the actual code. This approach is detailed compellingly in the Semaphore article “Test-Driven Development: A Time-Tested Recipe for Quality Software,” offering valuable insights into the TDD process and its benefits for developers and projects alike.

TDD is anchored in a cyclic process known as “red-green-refactor.” Initially, a developer writes a test for a new function, which fails (red stage) because the function isn’t implemented yet. Then, minimal code is written to pass the test (green stage), focusing on functionality before perfection. The final step involves refining the code without altering its behavior (refactor stage), enhancing its structure, readability, and performance.
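
A minimal sketch of one such cycle in Java with JUnit may help; the `LeapYear` class is a hypothetical example chosen only to show the stages:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// RED: these tests are written first and fail, because LeapYear
// does not exist (or is not implemented) yet.
class LeapYearTest {
    @Test
    void yearsDivisibleByFourAreLeapYears() {
        assertTrue(LeapYear.isLeap(2024));
    }

    @Test
    void centuryYearsAreLeapYearsOnlyIfDivisibleBy400() {
        assertFalse(LeapYear.isLeap(1900));
        assertTrue(LeapYear.isLeap(2000));
    }
}

// GREEN: the simplest code that makes the tests pass.
// REFACTOR: with the tests green, the expression can now be reshaped
// for clarity, re-running the suite after every change.
class LeapYear {
    static boolean isLeap(int year) {
        return year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
    }
}
```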

The significance of TDD in software development cannot be overstated. It promotes incremental development, allowing for manageable progression and easier troubleshooting. The necessity to write tests first compels developers to deliberate on code design and interface upfront, leading to cleaner, more maintainable code. Additionally, TDD provides a safety net, enabling developers to modify or add new features with the confidence that existing functionality remains intact, as verified by the tests.

Despite its advantages, TDD is sometimes viewed skeptically, with critics pointing to its perceived complexity and the upfront time investment in test writing. However, the Semaphore article counters these arguments by highlighting the long-term benefits: reduced debugging and maintenance time, deeper codebase understanding, and ultimately, a more streamlined development process. The initial efforts pay off by preventing future headaches, underscoring TDD’s efficacy in building robust, high-quality software.

TDD marks a paradigm shift in software development philosophy. It prioritizes a meticulous, forward-thinking approach to code creation, where quality, efficiency, and thoughtful design are paramount. The practice of writing tests before code not only ensures functionality but also fosters an environment of continuous improvement and refinement.

The Semaphore article serves as a robust endorsement of TDD, encouraging developers to adopt this methodology for its substantial benefits. TDD is more than a set of procedures; it’s a mindset that values thorough preparation and continuous enhancement. For those seeking to elevate their development practices, embracing TDD could be the transformative step needed, leading to projects that are not just successful but also sustainable and adaptable in the long run.

Delving into Test-Driven Development can significantly impact the quality and sustainability of software projects. For those interested in a deeper exploration of TDD, the Semaphore article provides an excellent starting point with comprehensive insights and practical examples.

Read the full Semaphore article on Test-Driven Development for an in-depth understanding of how TDD can transform your software development process.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

Unraveling the Efficiency of Pairwise and Combinatorial Testing in Software Development (Week-11)

In the dynamic landscape of software development, ensuring the reliability and functionality of applications is paramount. Traditional testing methods, while thorough, often require substantial time and resources, prompting the industry to lean towards more efficient strategies. Among these, Pairwise and Combinatorial Testing have emerged as pivotal techniques, providing a balance between testing depth and resource allocation.

Pairwise Testing: A Closer Look

Pairwise testing, a subset of combinatorial testing, operates on a principle that most defects in software are triggered by interactions between pairs of input parameters. This method systematically generates test cases to cover all possible pairs of inputs, significantly reducing the number of tests needed to identify potential bugs. According to Capers Jones, a luminary in software engineering, pairwise testing can detect up to 75% of defects, presenting a compelling case for its adoption (Jones, 2008). This efficiency stems from the recognition that not all input combinations are necessary to uncover the majority of the issues, thus optimizing the testing process.
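
A small worked example makes the savings visible (the parameters are hypothetical, chosen for illustration). Suppose a feature takes three two-valued inputs: Browser {Chrome, Firefox}, OS {Windows, macOS}, and Network {WiFi, Ethernet}. Exhaustive testing requires 2 × 2 × 2 = 8 cases, yet the following four cases already cover every possible pair of values:

  • Chrome, Windows, WiFi
  • Chrome, macOS, Ethernet
  • Firefox, Windows, Ethernet
  • Firefox, macOS, WiFi

With more parameters the reduction grows dramatically, which is precisely the bookkeeping that pairwise tools automate.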

Combinatorial Testing: Expanding the Horizon

Combinatorial testing extends the concept of pairwise testing by considering interactions among three or more parameters. This technique is particularly beneficial in complex systems where interactions extend beyond simple pairs. While it requires more test cases than pairwise testing, it’s still far less than the exhaustive testing of all possible inputs. The National Institute of Standards and Technology (NIST) has highlighted combinatorial testing’s effectiveness, noting its capability to uncover intricate bugs that pairwise might miss, making it an indispensable tool for ensuring software robustness (Kuhn et al., 2013).

Integrating Pairwise and Combinatorial Testing into Development

The integration of these testing methodologies into the software development lifecycle can significantly enhance the quality assurance process. By identifying the most impactful combinations of parameters, developers can preemptively address issues, leading to more stable releases. Tools such as PICT (Pairwise Independent Combinatorial Testing tool) from Microsoft and Hexawise facilitate the implementation of these strategies, enabling teams to automate test case generation and focus on critical test scenarios.

Conclusion

Pairwise and combinatorial testing represent a paradigm shift in software testing, moving away from exhaustive and resource-intensive methods towards a more strategic approach. By focusing on the interactions that most likely contribute to defects, these methodologies offer a practical pathway to improving software quality without the overhead of traditional testing techniques. As software systems grow in complexity, the adoption of pairwise and combinatorial testing is not just advisable but essential for developers aiming to deliver flawless applications efficiently.

The practicality and effectiveness of these testing strategies underscore a broader trend in software development towards optimization and efficiency. As we continue to push the boundaries of what software can achieve, the methodologies we employ to ensure their reliability must evolve accordingly. Pairwise and combinatorial testing stand at the forefront of modern software quality assurance.

References:

  • Jones, C. (2008). Applied Software Measurement. McGraw-Hill Education.
  • Kuhn, D. R., Kacker, R. N., & Lei, Y. (2013). Introduction to Combinatorial Testing. CRC Press.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

Unveiling Behavior-Driven Development: A Fresh Perspective (Week-10)

In the fast-paced realm of software development, Behavior-Driven Development (BDD) emerges as a transformative approach, promoting communication and precision in building software that aligns with user expectations and business objectives. Unlike traditional development methodologies, BDD emphasizes the behaviors of software under various conditions, prioritizing user needs and ensuring a shared understanding among all stakeholders.

At its core, BDD focuses on defining expected software behaviors through readable and understandable scenarios. This collaborative process involves all stakeholders, including developers, testers, and non-technical personnel, facilitating a common language and shared vision for the project. These scenarios, structured in a Given-When-Then format, not only guide the development and testing processes but also serve as living documentation and a source of truth throughout the project lifecycle.
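
In tools such as Cucumber, these scenarios live in plain-text feature files; the same Given-When-Then structure can also be expressed directly in a unit test. The sketch below is hypothetical, with the scenario and class names invented for illustration:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Minimal supporting class, invented for illustration.
class Cart {
    private double total = 0.0;
    void add(double price) { total += price; }
    double checkout() { return total; }
}

// Hypothetical scenario: "Given a cart with two items, when the guest
// checks out, then the total equals the sum of the item prices."
class CheckoutScenarioTest {
    @Test
    void guestChecksOutACartWithTwoItems() {
        // Given: a cart containing two items
        Cart cart = new Cart();
        cart.add(19.99);
        cart.add(5.01);

        // When: the guest checks out
        double total = cart.checkout();

        // Then: the total equals the sum of the item prices
        assertEquals(25.00, total, 0.001);
    }
}
```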

The integration of BDD into a development process begins with the identification of user stories, which describe the needs and goals of the end user. These stories are further broken down into specific scenarios, outlining how the software should behave in different situations. This meticulous approach ensures that every feature developed is directly tied to user requirements, reducing unnecessary work and focusing efforts on delivering value.

BDD’s strength lies in its ability to bridge the communication gap between technical and non-technical team members. By translating technical specifications into a language accessible to all, BDD ensures that everyone has a clear understanding of what is being built and why. This alignment leads to more accurate product outcomes and a smoother development process.

Furthermore, BDD enhances the quality of the final product. Automated tests derived from behavior scenarios ensure that all functionalities meet the predefined criteria, reducing the likelihood of bugs and regressions. This continuous validation not only boosts product reliability but also instills confidence among the team and stakeholders.

However, the transition to BDD requires a cultural shift within the organization. It demands active participation from all parties involved and a commitment to ongoing collaboration and communication. While this can be challenging, the long-term benefits of improved clarity, better product quality, and increased stakeholder satisfaction are invaluable.

In conclusion, Behavior-Driven Development offers a systematic and collaborative approach to software development, centered around clear communication and a deep understanding of user needs. By adopting BDD, teams can build software that not only meets but exceeds user expectations, fostering a more efficient and effective development process. As the digital landscape continues to evolve, methodologies like BDD will play a crucial role in shaping the future of software development, ensuring products are not only functional but also truly aligned with the needs and goals of the users they serve.

Additionally, Lambdatest offers insights into the intricacies of BDD, discussing its limitations and providing best practices. This resource likens BDD to planning a perfect party, emphasizing the importance of teamwork, early error detection, and ensuring every part of the product is right.
https://www.lambdatest.com/learning-hub/behavior-driven-development

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

Mastering Object-Oriented Testing: Ensuring Robust OO Software Design (Week-10)

Object-oriented testing (OOT) plays a critical role in verifying the functionality and robustness of object-oriented software systems. This form of testing addresses the unique challenges introduced by OO principles such as encapsulation, inheritance, and polymorphism. In this guide, we delve into the essential phases of OOT, including class testing, integration testing, system testing, and the utilization of Unified Modeling Language (UML) diagrams, to ensure the development of reliable OO applications.

Class Testing: The Cornerstone of OOT

Class testing is the first step in the OOT process, akin to unit testing in procedural programming. This stage focuses on the smallest units of OO design – the classes. It involves:

  1. State-based Testing: Assessing objects in different states to ensure they behave correctly under various scenarios.
  2. Method Testing: Verifying the functionality of each method within a class, considering diverse input combinations and their effects.
  3. Attribute Testing: Ensuring that class attributes maintain valid states throughout the object’s lifecycle.

Thorough class testing is fundamental, as it lays a solid foundation for more complex stages of testing, facilitating early detection and correction of defects.
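
As a hedged sketch of state-based testing, consider a hypothetical `Account` class (invented for illustration) that can be open or frozen; the tests drive the object into each state and check the same method’s behavior in both:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

// Hypothetical class under test, invented for illustration.
class Account {
    private boolean frozen = false;
    private int balance = 0;

    void freeze() { frozen = true; }

    void deposit(int amount) {
        if (frozen) throw new IllegalStateException("account is frozen");
        balance += amount;
    }

    int balance() { return balance; }
}

// State-based testing: the same operation is exercised in two states.
class AccountStateTest {
    @Test
    void depositsAreAcceptedWhileOpen() {
        Account account = new Account();
        account.deposit(100);
        assertEquals(100, account.balance());
    }

    @Test
    void depositsAreRejectedOnceFrozen() {
        Account account = new Account();
        account.freeze();
        assertThrows(IllegalStateException.class, () -> account.deposit(100));
    }
}
```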

Integration Testing: Bridging the Classes

Integration testing in OOT checks class interactions, vital for managing OO systems’ complex dependencies. Key approaches include:

  1. Collaboration Testing: Checks the data exchange and cooperation between objects to confirm correct combined operations.
  2. Sequence Testing: Focuses on the order of method calls and message passing among objects, ensuring they align with the desired workflows.
  3. Subsystem Testing: Involves testing groups of related classes, or subsystems, to identify any integration mishaps early on.

Effective integration testing ensures that individual classes operate harmoniously within the larger system context.
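
A minimal sketch of collaboration testing with a hand-rolled stub (all names hypothetical): the test verifies that two objects cooperate correctly without involving a real external system.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical collaborators, invented for illustration.
interface InventoryService {
    int stockFor(String item);
}

class OrderHandler {
    private final InventoryService inventory;
    OrderHandler(InventoryService inventory) { this.inventory = inventory; }

    // The collaboration under test: OrderHandler consults InventoryService.
    boolean canFulfill(String item, int quantity) {
        return inventory.stockFor(item) >= quantity;
    }
}

class OrderHandlerCollaborationTest {
    @Test
    void consultsInventoryBeforeAcceptingAnOrder() {
        // Stub collaborator: a fixed answer stands in for a real warehouse.
        InventoryService stub = item -> 3;
        OrderHandler handler = new OrderHandler(stub);

        assertTrue(handler.canFulfill("beans", 2));
        assertFalse(handler.canFulfill("beans", 5));
    }
}
```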

System Testing: Validating the Entire Application

System testing evaluates the complete OO software against specified requirements to ensure it meets its intended purposes. This encompasses:

  1. Use Case Testing: Derives test cases from use cases, ensuring the system fulfills user expectations and business needs.
  2. Scenario Testing: Simulates real-world scenarios to uncover any unexpected system behaviors or failures.
  3. State Transition Testing: Assesses the system’s responses during various state changes, guaranteeing consistency and reliability.

This holistic approach verifies the system’s overall functionality and user readiness.
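
State transition testing in particular can be sketched at small scale; the hypothetical order workflow below (invented for illustration) shows in miniature the idea that system tests apply across entire applications:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

// Hypothetical workflow states, invented for illustration.
enum OrderState { NEW, PAID, SHIPPED }

class Order {
    private OrderState state = OrderState.NEW;

    void pay() {
        if (state != OrderState.NEW) throw new IllegalStateException("already paid");
        state = OrderState.PAID;
    }

    void ship() {
        if (state != OrderState.PAID) throw new IllegalStateException("not paid yet");
        state = OrderState.SHIPPED;
    }

    OrderState state() { return state; }
}

class OrderStateTransitionTest {
    @Test
    void theValidTransitionSequenceSucceeds() {
        Order order = new Order();
        order.pay();
        order.ship();
        assertEquals(OrderState.SHIPPED, order.state());
    }

    @Test
    void shippingAnUnpaidOrderIsRejected() {
        assertThrows(IllegalStateException.class, () -> new Order().ship());
    }
}
```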

Utilizing UML Diagrams for Insightful Testing

UML diagrams are invaluable in OOT for visualizing the structure and dynamics of OO systems. They aid testers and developers by providing a clear representation of classes, interactions, and system states, facilitating the creation of targeted and effective test cases.

Conclusion: Elevating Software Quality with Object-Oriented Testing

Object-oriented testing is indispensable for crafting high-quality OO software. By systematically conducting class testing, integration testing, and system testing, and leveraging UML diagrams for enhanced insight, developers can address the complexities of OO systems effectively. A recommended resource for those seeking to deepen their understanding of OOT practices is “Testing Object-Oriented Systems: Models, Patterns, and Tools” by Robert V. Binder. Implementing these strategies ensures the delivery of robust, user-centric OO applications that stand the test of time.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

Mastering Unit Testing: A Developer’s Guide to Enhancing Code Quality (Week – 9)

Unit testing is an integral part of software development, crucial for validating individual code segments’ functionality. This systematic approach helps in identifying defects early, enhancing code reliability, and simplifying modifications. Let’s delve into the nuanced strategies of unit testing, including specification-based and code-based techniques, and explore the role of code coverage and the utility of JUnit in ensuring robust software solutions.

Specification-Based Testing Techniques

In the realm of specification-based or black-box testing, the focus is on assessing the software’s external behavior rather than its internal structure:

  1. Boundary Value Testing: This method targets the extreme ends of input ranges, where most errors tend to occur. By testing these boundary values, developers can identify potential edge case issues that might not emerge under normal test conditions.
  2. Equivalence Class Testing: This strategy simplifies testing by grouping inputs into classes that elicit similar behaviors. Testing one sample from each class can reduce the number of tests while maintaining effectiveness, ensuring that different scenarios are adequately represented.
  3. Decision Table-Based Testing: For functions governed by complex rules, decision table-based testing offers a structured approach. It maps different input combinations to their expected outcomes, ensuring all logical branches are explored and validated.
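
To ground the first two techniques, here is a hedged sketch: a hypothetical grading function (the class and its passing threshold of 60 are invented for illustration) tested at the boundaries of its ranges and with one representative per equivalence class:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical function under test: scores of 60-100 pass, 0-59 fail,
// and anything outside 0-100 is invalid.
class Grades {
    static boolean passes(int score) {
        if (score < 0 || score > 100) {
            throw new IllegalArgumentException("score out of range");
        }
        return score >= 60;
    }
}

class GradesTest {
    // Boundary value testing: probe the edges of each range, where
    // off-by-one defects are most likely to hide.
    @Test
    void boundariesAroundThePassingThreshold() {
        assertFalse(Grades.passes(59));
        assertTrue(Grades.passes(60));
    }

    @Test
    void boundariesOfTheValidRange() {
        assertFalse(Grades.passes(0));
        assertTrue(Grades.passes(100));
        assertThrows(IllegalArgumentException.class, () -> Grades.passes(-1));
        assertThrows(IllegalArgumentException.class, () -> Grades.passes(101));
    }

    // Equivalence class testing: one representative per class suffices.
    @Test
    void oneRepresentativePerEquivalenceClass() {
        assertFalse(Grades.passes(30)); // failing class
        assertTrue(Grades.passes(85));  // passing class
    }
}
```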

Code-Based Testing Strategies

Code-based or white-box testing requires an understanding of the software’s internal workings:

  1. Path Testing: Essential for ensuring every executable path is tested at least once, path testing uncovers sections of code that could be prone to errors, enhancing the overall robustness of the application.
  2. Data Flow Testing: Focusing on the lifecycle of data, this method tracks the creation, manipulation, and usage of variables. It’s particularly effective in identifying issues related to improper data handling and scope errors.
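
Path testing can be sketched with a small hypothetical function (invented for illustration): two independent branches yield four paths through the code, and each test pins down one of them.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical function: the base fee is waived for large orders,
// and express delivery adds a surcharge.
class Shipping {
    static double fee(double orderTotal, boolean express) {
        double fee = orderTotal >= 50.0 ? 0.0 : 5.0; // branch 1
        if (express) {                               // branch 2
            fee += 10.0;
        }
        return fee;
    }
}

// Path testing: two independent branches give four execution paths.
class ShippingPathTest {
    @Test void smallOrderStandard() { assertEquals(5.0,  Shipping.fee(20.0, false), 0.001); }
    @Test void smallOrderExpress()  { assertEquals(15.0, Shipping.fee(20.0, true),  0.001); }
    @Test void largeOrderStandard() { assertEquals(0.0,  Shipping.fee(80.0, false), 0.001); }
    @Test void largeOrderExpress()  { assertEquals(10.0, Shipping.fee(80.0, true),  0.001); }
}
```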

Emphasizing Code Coverage

Code coverage metrics are crucial for gauging the extent of tested code. While high code coverage does not eliminate all software bugs, it indicates thorough testing and contributes to higher quality code. Achieving substantial code coverage helps in maintaining and updating code with confidence.

Leveraging JUnit for Java Testing

JUnit, a cornerstone in the Java programming ecosystem, streamlines the creation, execution, and documentation of unit tests. It supports annotations for defining test cases and employs assertions to verify code behavior, aligning with Test-Driven Development (TDD) practices. JUnit’s simplicity aids in regular test implementation, encouraging developers to maintain code quality continuously.
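
A minimal illustration of those annotations and assertions, using a standard JDK class so the example stays self-contained:

```java
import java.util.ArrayDeque;
import java.util.NoSuchElementException;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

class StackTest {
    private ArrayDeque<String> stack;

    // @BeforeEach runs before every test, giving each one a fresh fixture.
    @BeforeEach
    void setUp() {
        stack = new ArrayDeque<>();
    }

    // @Test marks a test case; assertions verify the expected behavior.
    @Test
    void pushThenPopReturnsTheSameElement() {
        stack.push("item");
        assertEquals("item", stack.pop());
        assertTrue(stack.isEmpty());
    }

    @Test
    void poppingAnEmptyStackThrows() {
        assertThrows(NoSuchElementException.class, () -> stack.pop());
    }
}
```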

In conclusion, unit testing is not just a task but a discipline that significantly impacts software quality. By integrating specification-based and code-based testing methods and striving for extensive code coverage, developers can craft more reliable, maintainable software. JUnit further simplifies the testing process, embedding quality into the development lifecycle. For a comprehensive guide to unit testing with JUnit, refer to “JUnit in Action, Third Edition” by Catalin Tudose, a resource that offers deep insights and practical examples for effective Java testing.

By embracing these practices, developers can ensure that their code not only functions as intended but also adapts gracefully to future changes and requirements.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

Embracing Vulnerability: “Expose Your Ignorance” Week-7

Acknowledging the Unknown:

“Expose Your Ignorance,” a compelling pattern from “Apprenticeship Patterns” by Dave Hoover and Adewale Oshineye, addresses a crucial aspect of growth in software development: openly acknowledging what you don’t know. This pattern encourages embracing the gaps in your knowledge as opportunities for learning, rather than as weaknesses. It’s about admitting your ignorance on certain topics, and actively seeking to fill those gaps, thereby transforming vulnerability into strength.

A Resonating Approach:

Though I haven’t yet started a career in software development, the principle of “Expose Your Ignorance” resonates with me. It aligns with my understanding of learning as an iterative and transparent process. This pattern challenges the often-held notion that admitting ignorance is a sign of weakness, especially in a field as complex as technology, where no one can know everything.

The Power of Honest Inquiry:

What I find most intriguing about this pattern is the empowerment that comes from honest self-assessment and inquiry. By exposing our ignorance, we open doors to new knowledge and show a willingness to grow. This approach not only accelerates learning but also fosters an environment of openness and collaboration.

Shaping Future Learning Attitudes:

While I have not yet had the chance to apply this in a professional setting, “Expose Your Ignorance” shapes my perspective on how I intend to approach learning in my future career. It instilled in me the value of being forthright about what I don’t know and using that as a catalyst for continuous improvement and skill acquisition.

A Balance of Vulnerability and Confidence:

While embracing ignorance is a powerful learning tool, I also recognize the importance of balancing this vulnerability with confidence in what I do know. It’s crucial to avoid underestimating one’s existing skills and knowledge while being open about areas for growth.

In conclusion, “Expose Your Ignorance” is an essential pattern for anyone aspiring to succeed in the ever-evolving world of software development. It encourages a mindset where admitting to not knowing something is not a drawback but a starting point for learning and growth. This pattern is a reminder that in the journey to becoming a skilled professional, vulnerability and openness are not just accepted; they are necessary. By willingly exposing our ignorance, we set ourselves on a path of continual learning and development, a path that is essential in the dynamic field of software development.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

Sprint 1 Retrospective Blog

Our first sprint ever went very well, and it provided significant learning opportunities.

Our team exhibited strong collaboration throughout this initial phase. Effective communication was a cornerstone of our approach, enabling us to schedule both online and face-to-face meetings efficiently. A testament to our collaborative spirit was achieving our target of completing more than 75% of the tasks by the sprint’s end. This success largely stemmed from our commitment to weekly in-person meetings, fostering a united team environment crucial for addressing new challenges and workflows.

Despite these positive aspects, our journey was not devoid of hurdles. We encountered several setbacks, primarily due to our collective inexperience with GitLab and its workflow processes. Initially, we struggled with navigation, issue postings, and branch creations, leading to confusion and delays. Our approach to merge requests and the subsequent review process also proved problematic, culminating in merge conflicts and pipeline failures due to our wait-until-the-end strategy.

In light of these challenges, we aim to refine our approach in the upcoming sprint. We plan to create a “Workflow tips” document, compiling our experiences and solutions from this sprint to circumvent similar obstacles in the future. We intend to adopt a more proactive review process for issues and streamline our approach to merge requests, ensuring they align with workflow requirements.

Reflecting on the obstacles I encountered this sprint, there are several areas for personal improvement:

Enhancing GitLab Skills: I recognize the importance of becoming more proficient with GitLab. I intend to invest effort into understanding its intricacies, especially concerning branch management, handling merge requests, and connecting issues. I plan to utilize online resources, seek guidance from tutorials, and lean on my peers for support to enhance my competency.

Strengthening Communication: Acknowledging the need for improvement, I aim to enhance my approach to communication. I will take initiative to seek out feedback more actively and clarify doubts promptly with team members and mentors, aiming to resolve issues before they escalate.

Boosting Organizational Capabilities: I understand that better organization is key to avoiding past mistakes. Therefore, I am committed to honing my organizational skills, particularly in keeping track of my tasks, associated merge requests, and issues. Employing project management tools or maintaining a personal task tracker will be instrumental in keeping me in sync with the team’s goals and deadlines.

Links to the issues covered in this sprint:

Create Integration and Pipeline

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/integration/-/issues/1

Settings and extensions previously located in the dev container should now be transferred to .vscode/settings.json and .vscode/extensions.json within the Gitpod environment, as outlined in the .gitpod.yml documentation. Furthermore, developer commands should be moved from the commands directory to bin to align with standard Linux conventions, necessitating updates in script paths and .gitlab-ci.yaml environment variables. Additionally, integrate the AlexJS linter into each project’s pipeline and the bin/lint.sh script, ensuring all documentation is checked and updated accordingly.

Familiarize ourselves with guestInfoFrontend to understand what goes into CheckoutGuestFrontend

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem/checkoutguestfrontend/-/issues/37

This activity served as an introduction to the primary objectives of sprint 2. During this phase, we collectively reviewed the current wireframe for the checkout guest front end to familiarize ourselves with the anticipated design layout.

Refactor commands folder to bin

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/gitlab-profile/-/issues/69

This process entailed establishing a bin directory within the project and transferring three scripts from the template project into this new folder. Following this, I conducted tests on each of the three scripts to verify their functionality.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

Building Skills with “Breakable Toys” Week-6

Cultivating Skills in a Safe Space:

“Breakable Toys,” a pattern from “Apprenticeship Patterns” by Dave Hoover and Adewale Oshineye, advocates for the creation of personal projects or ‘toys’ that can endure failures. These projects offer a sandbox for experimentation, where learning and mistakes occur without the high stakes of a professional environment. This pattern underlines the significance of having a personal space to apply and test new skills and knowledge in a tangible, yet forgiving, setting.

A Concept That Inspires:

While I have yet to embark on a professional career in software development, the idea of “Breakable Toys” strikes a chord with me. It appeals to the part of me that believes in the power of hands-on experience and learning through doing. The notion of constructing a personal project where the risk of failure is not only permissible but encouraged is both liberating and exciting, especially for someone preparing to enter the tech industry.

The Freedom to Experiment:

What I find most intriguing about this pattern is the emphasis on the freedom to experiment, innovate, and yes, even fail. In the realm of these personal projects, the usual barriers and fears associated with failure are reduced, paving the way for creativity and exploration. This approach makes “Breakable Toys” not just a learning exercise, but a crucible for innovation and self-discovery.

Anticipating Its Impact on Learning:

The concept of “Breakable Toys” has already begun to shape how I envisage my approach to learning and development in software engineering. It reinforces my belief in the importance of engaging in personal projects as a fundamental part of my learning journey. This hands-on practice will be key to transforming theoretical knowledge into practical expertise.

A Balance Between Play and Purpose:

While I am enthusiastic about the potential of “Breakable Toys,” I also recognize the importance of balancing these personal explorations with goal-oriented learning. It’s essential that these projects are not just about exploration but also about advancing specific learning objectives or developing particular skills.

In conclusion, the “Breakable Toys” pattern presents a compelling approach for anyone aspiring to grow in software development. It highlights that mastering this field involves not just structured learning but also unstructured, creative experimentation. By building and experimenting with personal projects, one can cultivate a deeper understanding and a more versatile skill set, all within a context where failure becomes a stepping stone to innovation and mastery. This pattern celebrates the idea that sometimes, the most valuable learning experiences come from the freedom to break things and learn in the process.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.