Week 11

Mocking is a technique used in unit testing when the unit being tested has external dependencies. The purpose of mocking is to isolate and focus on the code being tested rather than on the behavior or state of its external dependencies. These dependencies, such as databases, external services, or third-party libraries, may be difficult to control or reproduce in a testing environment. By creating mock objects that mimic the behavior of these dependencies, we can test individual components of our code in a controlled and predictable manner. Mocking offers several benefits. Isolation: mocking separates the unit of code being tested from its external dependencies, ensuring that tests focus solely on the logic within the unit itself. Control: mock objects provide precise control over the behavior and responses of dependencies, enabling us to simulate various scenarios and edge cases during testing. Debugging: mock objects record how a method is being called and what values are being passed to it, which provides detailed information when tracking down problems. Collaboration: mock objects can be shared among developers, making it easier to collaborate on testing and ensure that code is tested consistently across different environments.
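The idea can be sketched with Python's built-in unittest.mock module; the weather_report function and its API client are hypothetical stand-ins for a unit with an external dependency:

```python
from unittest.mock import Mock

# Hypothetical unit under test: formats a report using an external API client.
def weather_report(api_client, city):
    data = api_client.fetch(city)  # external call we want to avoid in tests
    return f"{city}: {data['temp']}C"

# Replace the real client with a mock so the test is isolated and predictable.
mock_client = Mock()
mock_client.fetch.return_value = {"temp": 21}

assert weather_report(mock_client, "Boston") == "Boston: 21C"
# The mock also records how it was called, which helps with debugging.
mock_client.fetch.assert_called_once_with("Boston")
```

Because the mock controls exactly what the dependency returns, the same test passes regardless of network availability or what the real service would report.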

Mock testing also brings some challenges. One is maintenance: mock objects can be difficult to maintain, particularly as systems evolve and change over time. When new features are added, mocks may need to be updated to accurately reflect the behavior of the system. Complexity is another problem: as the complexity of a system increases, the complexity of the mock objects used to test it may also increase, which can make tests difficult to understand and modify, particularly for developers who are not familiar with the system. Some mock testing best practices include using mocks sparingly, because relying on them too heavily makes tests hard to maintain. Keeping mock objects simple is also important: they should be easy to understand, with clear and concise code. Finally, writing test cases before writing code can help ensure that code is designed with testing in mind and can be easily tested using mock objects. I chose this article because it covers the pros and cons of mock testing, as well as strategies for avoiding its common pitfalls. Mock testing becomes complicated when it is overused, so it is not advisable to rely on it all the time.

References.

From the blog CS@Worcester – Site Title by lynnnsubuga and used with permission of the author. All other rights reserved by the author.

Week 8

Path Testing

Path testing is an approach to testing where you ensure that every path through a program has been executed at least once. However, exercising all paths does not mean that you will find all bugs in a program. Path coverage testing involves several steps. Step one is code interpretation: it is important to carefully understand the code you want to test. The next step is constructing a control flow graph, with nodes representing code blocks and edges representing the movement of control between them. The third step is determining the paths. This entails following the flow of control from its point of entry to its point of exit while considering all potential branch outcomes, including loops, nested conditions, and recursive calls. It is important to list every route, giving each path a name or label so you can keep track of which paths have been tested. The next step is test case design: create a test for each path that has been determined, choosing inputs that will make the program take each path in turn, and make sure the test cases are thorough and cover all potential paths. Finally, examine the test results to confirm that all possible paths have been taken and that the code responds as anticipated.
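The steps above can be sketched in Python with a hypothetical function containing two decisions, which gives four execution paths, and one test case designed per path:

```python
# Hypothetical function with two decisions, giving four execution paths.
def discount_percent(age, member):
    discount = 0
    if age >= 65:          # decision 1: senior discount
        discount += 10
    if member:             # decision 2: membership discount
        discount += 5
    return discount

# One test case per path, with inputs chosen so every path runs at least once.
assert discount_percent(70, True) == 15   # path: both branches taken
assert discount_percent(70, False) == 10  # path: first branch only
assert discount_percent(30, True) == 5    # path: second branch only
assert discount_percent(30, False) == 0   # path: neither branch
```

Labeling each case with the path it exercises, as in the comments, is one way to keep track of which paths have been tested.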

Some advantages of path testing are that it helps reduce redundant tests, it focuses on the logic of the program, and it is useful in test case design. Some drawbacks are that the number of test cases grows as code complexity increases, it is difficult to enumerate test paths when an application's code is highly complex, and some test paths may skip some of the conditions in the code. There are three path testing techniques. Control Flow Graph (CFG): the program is converted into a flow graph by representing the code as nodes, regions, and edges. Decision-to-Decision path (D-D): the CFG can be broken into various decision-to-decision paths and then collapsed into individual nodes. Independent (basis) paths: an independent path is a path through a DD-path graph that cannot be reproduced by combining other paths. I chose these two resources because they go more in depth about path testing and explain it well. One of the sources covers the pros and cons of path testing and the types of path testing, which I didn't know about before this.
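The number of independent (basis) paths can be computed from the control flow graph using McCabe's cyclomatic complexity, V(G) = E - N + 2P, where E is the edge count, N the node count, and P the number of connected components. A small sketch, using a hypothetical CFG for a single if/else:

```python
# Cyclomatic complexity V(G) = E - N + 2P counts the linearly
# independent paths through a control flow graph.
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# Hypothetical CFG for one if/else: 5 nodes (entry, decision, then-block,
# else-block, exit) and 5 edges connecting them.
assert cyclomatic_complexity(edges=5, nodes=5) == 2  # two basis paths
```

The result, two basis paths, matches the intuition that an if/else can be covered by one test through each branch.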

References.

https://www.geeksforgeeks.org/path-testing-in-software-engineering

https://www.tutorialspoint.com/software_testing_dictionary/path_testing.htm

Week 6

In week 6, we talked about equivalence class testing in class. It is a black box testing technique that allows testers to group input data into sets or classes, making it possible to reduce the number of test cases while achieving comprehensive coverage. This technique is useful when dealing with a large range of input values. The classes reflect the specified requirements and the common behavior or attributes of the inputs. Test cases are designed based on each class's attributes, and one element or input from each class is used during test execution to validate that the software is functioning. There are some important features of equivalence class testing to note. Aside from being a black box technique, it restricts testers to examining the software product externally. It is also used to form groups of test inputs of similar behavior or nature. And finally, test cases are based on classes, which reduces the time and effort required to build a larger number of test cases.
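The technique can be sketched in Python; the age-validation rule below is a hypothetical requirement, used only to show how one representative per class stands in for the whole class:

```python
# Hypothetical spec: valid ages are 18 through 60; anything else is rejected.
def is_valid_age(age):
    return 18 <= age <= 60

# Partition the input domain into equivalence classes and test one
# representative value from each class instead of every possible input.
classes = {
    "below range": 10,   # invalid class: age < 18
    "in range": 35,      # valid class: 18 <= age <= 60
    "above range": 75,   # invalid class: age > 60
}
assert is_valid_age(classes["in range"]) is True
assert is_valid_age(classes["below range"]) is False
assert is_valid_age(classes["above range"]) is False
```

Three representative tests cover the same behavior classes that testing every age from 0 to 120 would, which is the time saving the technique promises.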

Some types of equivalence class testing are weak normal, strong normal, weak robust, and strong robust equivalence class testing. Some pros of equivalence class testing are that it reduces the number of test cases without compromising test coverage, it reduces overall test execution time by minimizing the set of test data, it enables testers to focus on smaller data sets, which increases the probability of uncovering more defects in the software product, and it is useful in cases where exhaustive testing is impractical. Some cons are that it does not consider boundary value conditions, identifying equivalence classes relies heavily on the expertise of testers, and testers might assume the output for every input in a class is correct, which isn't always the case.

The main difference between equivalence class testing and boundary value analysis is that equivalence class testing partitions the input domain into classes and tests one representative from each, while boundary value analysis tests the values at the edges of those partitions. I chose this article because it discusses equivalence class testing in depth. In my opinion, when we were doing both equivalence class testing and boundary value testing, I found boundary value testing easier to use and understand. However, after reading more on equivalence class testing, I have a better understanding of how it works and why it is used so much. I know I will now be able to do assignments that require equivalence class testing.

References.

https://www.professionalqa.com/equivalence-class-testing

https://testsigma.com/blog/equivalence-partitioning

Final week

This week we talked about clean code. It is a reader-focused development style that produces software that's easy to write, read, and maintain. Knowing how to produce clean code is an essential skill for software developers; writing it is part of what it takes to call yourself a professional. Clean code is clear, understandable, and maintainable. When you write clean code, you keep in mind the other people who will read and need to interpret it. Some characteristics of clean code are meaningful names, so the reader can easily understand the code and avoid confusion, and easy-to-read functions, since functions are the building blocks of programs and readable ones make programs easier to understand and modify. It is also important for programs to have comments, because they help explain your code to other people. Formatting matters too when writing clean code, like making sure you use whitespace well in the program.

There are three principles of clean code: choose the right tool for the job, optimize the signal-to-noise ratio, and strive to write self-documenting code. The ten steps to writing clean code start with following conventions, such as using names that keep things clear and let you know what you're working with. Say what you mean is another step: it's frustrating to see code with misleading variable names. Whitespace is incredibly powerful for keeping code readable. Remember the power of "i", since it's always clear that "i" is your iterator variable. Keep it functional: if a function is doing more than its name suggests, the excess functionality should be split into its own function. Keep it classy can mean keeping code tidy, clear, and consistent, or, if you have a functionality problem, creating a class to handle that functionality. I chose this blog post because it covers some of the essentials of writing clean code as a good programmer, and it explains in detail each step that matters, which helps me better understand what good programmers do to produce efficient code. I am an aspiring developer, and going through all these steps has given me more insight into what I must do and more knowledge. I liked this resource because it helped me further understand some of the principles and steps of writing clean code.
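A short before/after sketch of two of these steps, "say what you mean" and "keep it functional"; the order-processing code is hypothetical:

```python
# Before: a one-letter name, and one function doing two jobs at once.
def p(d):
    t = sum(i["price"] * i["qty"] for i in d)
    return t * 0.9 if t > 100 else t

# After: names say what they mean, and each function does one thing.
def order_subtotal(items):
    return sum(item["price"] * item["qty"] for item in items)

def apply_bulk_discount(subtotal):
    return subtotal * 0.9 if subtotal > 100 else subtotal

# Both versions compute the same result, but the second reads like a sentence.
items = [{"price": 60, "qty": 2}]
assert apply_bulk_discount(order_subtotal(items)) == p(items)
```

The behavior is unchanged; only the readability improves, which is exactly the point of these steps.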

References

https://www.pluralsight.com/blog/software-development/10-steps-to-clean-code

Week 12

This week we talked about development environments and what they are used for. A development environment is the space where developers can work, experiment, and test without worrying that they'll interfere with the experience of real users. One important tool commonly used by developers is Visual Studio Code. We looked at some of the extensions needed in VS Code to run a program successfully. Visual Studio Code also supports dev containers. Containerization tools like Docker let developers install a particular operating system and dependencies with particular version numbers, which solves the problem of developers running different operating systems and having different versions of dependencies installed. Another development environment we looked at is Gitpod. It is a cloud development environment that lets you develop in pre-built development containers running on its cloud infrastructure and use a variety of IDEs in your browser. Therefore, you don't need to install Docker or any IDE, and your development doesn't depend on your local machine's resources.

Gitpod enables developers to immediately start coding, debugging, and testing their code. We also looked at command scripts, mainly build scripts and lint scripts. Build scripts are used to create artifacts or packages for later deployment. They do this by getting code from a source code repository like Git and then running tools like MSBuild or Gradle to compile and test the code. Build scripts are useful for projects that are complicated to build, because we don't have to remember all the details of running multiple commands. A lint script runs a tool that scans your code with the goal of finding issues that can lead to bugs or inconsistencies. The blog I chose talks about what development environments are and how to get started with them. Development environments allow software developers to create, run, and test their application code in a way that's adequately realistic and safe. It is important for development environments to offer isolation to developers, meaning they can do whatever they need in their environment without concern that they are breaking someone else's work.
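A minimal sketch of what a lint script does, scanning source lines for simple style issues; real linters do far more, and the two checks here are just illustrative:

```python
# Minimal lint-style check: flag lines that are too long or that end in
# trailing whitespace, reporting (line number, problem) pairs.
def lint(source, max_len=79):
    issues = []
    for num, line in enumerate(source.splitlines(), start=1):
        if len(line) > max_len:
            issues.append((num, "line too long"))
        if line != line.rstrip():
            issues.append((num, "trailing whitespace"))
    return issues

code = "x = 1   \ny = 2\n"
assert lint(code) == [(1, "trailing whitespace")]
```

In practice a lint script would run a tool like this over every file in the repository and fail the build if any issues are reported.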

I chose this resource because it goes more into detail about development environments and why they are important to developers in the real world. These different environments help developers be able to do their work without interfering with the user experience. Developers also must keep the environments as close to each other as possible and containers are a great technology for enabling that.

References.

https://www.plutora.com/blog/what-development-environment-how-get-started-now

Week 7

In this blog, we are going to learn about the Scrum development framework and why it's important. Scrum is a lightweight framework that helps people, teams, and organizations generate value through adaptive solutions to complex problems. The Scrum team consists of one Scrum Master, one Product Owner, and developers, with no sub-teams or hierarchies. The good thing about a Scrum team is that it is cross-functional, meaning the members together have all the skills necessary to create value each sprint. Usually, the Scrum team has 10 or fewer people, which helps the team communicate better and be more productive. The Scrum team is responsible for all product-related activities, from stakeholder collaboration to verification, maintenance, and more.

The blog I chose talks about getting started as a Scrum Master and some of the steps involved. One step is getting to know your new team, which is important for understanding who's on the team and building healthy relationships. It is also important to understand your new team's purpose and goals, because sometimes action is confused with progress, and it's crucial to know what's driving them. Mapping out the stakeholders matters because, as a Scrum Master, you're responsible for ensuring the team can run as efficiently as possible; when you've identified a tricky stakeholder, you can work with them to ensure they interface with the Product Owner rather than the team. Another important step is asking your team whether an Agile framework like Scrum is working for them, to make sure the team is aligned on things like sprint planning. Looking after yourself and your own development as a Scrum Master is one of the most important steps when starting with a new team. The role requires you to think differently a lot of the time, which can be tiring, so you need other people to get advice from or to bounce your ideas off.

I think this blog post gives a more detailed insight into the role of a Scrum Master. It explains some important aspects of being a Scrum Master and what the role looks like. The Scrum team also consists of other members, like the developers who build and maintain the product and the Product Owner, who is responsible for communicating the product goal and what the product should do. This goes to show that it isn't a one-person job, and several people are required to complete a task successfully.

References.

Week 5

In week 5, we talked about the steps involved in developing software. We mainly looked at six steps, which in order are requirements analysis, design, implementation, verification, deployment, and maintenance. Requirements analysis could mean determining the languages a project is developed in, any hardware requirements, and what the project will do. Design includes planning how features are going to be created; implementation is the creation of the project; verification is the process of making sure the project works as intended and is what the customer wants; deployment is releasing the completed project to the consumer or user; and finally, maintenance could involve working on bug fixes and regularly updating the software as needed. The blog post goes deeper into explaining the different steps involved in software development. As a result of following these steps, your team shares a common goal, targets are set and tracked by the developers, and everyone is able to cooperate effectively.

The blog describes a process called the SDLC (Software Development Life Cycle). The SDLC is a structured process used for designing, developing, testing, and maintaining software. It provides an organized methodology for the software development process and supports the development team in identifying risks, establishing quality standards, and monitoring performance. Knowing the requirements for the software development process is important for developers because it helps them better understand the user and hence provide the best quality software for them. Some questions to ask the client before the launch of any new project are: Who are you? What software do you want? What is this software for? What is your budget? And more. I chose this particular resource because it covers the same steps we discussed during class but goes more into depth. I think knowing the steps involved in developing software is important because, as a future software developer, it is good to have an idea of what work life as a developer is like. I want to be a developer after graduation, and week 5's topic gave me a little insight into what developers do and the cycles in which they work.

Reference.

https://www.keypup.io/blog/software-development-process-an-easy-breakdown-keypup#:~:text=The%20seven%20phases%20include%3A%20requirements,development%20process%20are%20often%20skipped.

Week 3

In week 3, we talked about staying synchronized with the upstream by using pull requests that are merged into the main branch by the maintainer. Synchronizing with the upstream ensures that your local and origin copies of the main branch have the same commits as the upstream main branch. One key thing I took from this week is that when pull requests are merged into the upstream main, the main branches in the local and origin repos get out of sync with the upstream. Before pulling from the upstream, the git remote -v command is used to check which remote repositories your local repo is connected to; it lists the names and URLs of all the remotes the local repo knows about. Another important git command used when synchronizing is git pull. To pull changes from the upstream, the main branch needs to be the active branch in the local repo. After that, git pull upstream main will pull the commits from the main branch of the upstream repo and add them to the local main branch. After merging with the upstream, there's no need to keep the feature branch, so it can be deleted. Usually, developers delete old feature branches to avoid cluttering their repos.

The blog I chose for this week explains how to sync with the upstream and some of the git commands used. To start, you need to have active origin and upstream repos. The command git remote -v helps verify that you have already set up a remote for the upstream repository. When you want to share some work with the upstream maintainers, you branch off main and create a feature branch. When you're done, you push it to your remote repository and can delete the feature branch afterward. Another important command is git status, because it shows how many commits you are ahead of or behind the synced remote branch. I chose this blog because it broadens my knowledge of the git commands normally used when synchronizing with the upstream. I think it's important to do research on the material you're covering in class in order to build on the knowledge you already have. This blog uses a lot of the commands I used in git homework 3, and it helped me become more familiar with them. I really liked reading this blog and seeing how some of the things I'm studying in class are commonly used by developers in their day-to-day jobs.

Reference.

https://www.atlassian.com/git/tutorials/git-forks-and-upstreams

Week 2

This week we learnt how to use different git commands and their purposes. We worked on using branches and commits and on using pull requests to upstream our changes. One of the important things I learnt is how branches work. Basically, you get to work in a separate environment, and once you're finished, you make a pull request asking to combine your work with another person's work. If they approve, you're able to merge your branches. Forking is another important git concept. A developer can see your repository and have an idea to add something to it; this is where forking comes into play. They can fork, or make their own copy of, your repository and add their own new features to it. They then submit a pull request to the owner, and if it is approved, the changes are added. I think it's crucial to know that anyone can fork a public repository, but it's up to the repository owners to accept or reject pull requests.

For the blog I chose, I wanted to research what GitHub is, what it's used for, and why it is one of the main platforms developers use. I chose this blog because I wanted to read more about the basics of GitHub and why it was created. I think it's important to know why it is one of the most-used platforms among developers. GitHub is one of the most popular resources for developers to share code and work on projects together. It is used for storing, tracking, and collaborating on projects, and it is also a social networking site where developers work openly and pitch their work. The blog talks about GitHub's biggest selling point, which is its set of project collaboration features, including version control and access control. One of the benefits is its cloud-based infrastructure, which makes it more accessible: a user can access their repository from any location on any device, download the repository, and push their changes.

Based on my resource, I do like it because it has given me a deeper insight into GitHub and how it works. It resonates with me because the material from week 2 aligns with this blog, and I now better understand what I'm doing in class and why I'm doing it. I think knowing the different commands used when working in GitHub is a huge part of successfully understanding how to use the platform.

Links.

https://blog.hubspot.com/website/what-is-github-used-for#what-github

Tech Perspectives

Welcome to my blog. I will be sharing all things tech related and this can range from industry insights, to the latest gadgets and more. Join me on this tech adventure and let’s learn together.
