Author Archives: Maria Delia

Understanding Smoke Testing in Software Development

In software development, a build must be stable before more comprehensive testing can proceed if a project is to succeed. One way of guaranteeing this is smoke testing, also known as Build Verification Testing or Build Acceptance Testing. Smoke testing is an early checkpoint to verify that the major features of the software are functioning as desired before other, more comprehensive testing is done.

What is Smoke Testing?
Smoke testing is a form of software testing that involves executing a quick, surface-level check of the most crucial features of an application to determine whether the build is stable enough for further testing. It is a minimal set of tests created to verify that the core features of the application are functioning. Smoke tests are generally executed once a new build is promoted to a quality assurance environment, and they act as an early warning system, indicating whether the application is ready for further testing or requires immediate correction.

Important Features of Smoke Testing

- Level of Testing: Smoke tests focus on the most important and basic features of the software, without exploring every functionality.
- Automation: Smoke tests are commonly automated, especially under time constraints, so they can be run quickly and repeatedly (see the sketch after this list).
- Frequency: Smoke testing is normally run after every build or significant code change to allow early identification of major issues.
- Time Management: Smoke tests are quick to run, making them a valuable time-saver by catching critical issues early.
- Environment: Smoke testing is typically performed in an environment that mimics production so that test results are as realistic as possible.
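As an illustrative sketch of an automated smoke test (assuming a hypothetical service with a /health endpoint at a made-up QA URL, not anything from the source article), a single JUnit 5 check like the following verifies that the build is alive before deeper testing begins:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class SmokeTest {
    // Hypothetical base URL of the freshly deployed build in the QA environment.
    private static final String BASE_URL = "http://qa.example.com";

    @Test
    void healthEndpointResponds() throws Exception {
        // A smoke test only checks that the most critical path works at all.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + "/health")).build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode(), "Build is not stable enough for further testing");
    }
}
```

If this single check fails, the build is rejected before any deeper test suites are run.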

Goal of Smoke Testing

The primary objectives of smoke testing are:

- Resource Optimization: Avoid wasting resources and time on testing when core functionalities are broken.
- Early Detection of Issues: Identify significant issues early so that they can be fixed quickly.
- Refined Decision-Making: Provide a clear basis for deciding whether the build is ready to move on to thorough, detailed testing.
- Continuous Integration: Ensure every new build meets basic quality standards before it is added to the master codebase.
- Pragmatic Communication: Give rapid feedback to development teams, allowing them to communicate clearly about build stability.

Types of Smoke Testing
There are several types of smoke testing, depending on the methodology chosen and the setting in which it is applied:
- Manual Testing: Test cases are written and executed manually for each build by testers.
- Automated Testing: Automation tools run the tests without manual intervention, which is especially useful in projects with tight deadlines.
- Hybrid Testing: Combines automated and manual tests to capitalize on the strengths of each approach.
- Daily Smoke Testing: Conducted on a daily basis, especially in projects with frequent builds and continuous integration.
- Acceptance Smoke Testing: Specifically focused on verifying whether the build meets the key acceptance criteria defined by stakeholders.
- UI Smoke Testing: Tests only the user interface features of an application to verify whether basic interactions are working.

Applying Smoke Testing at Various Levels
Smoke testing can be applied at various levels of software testing:
- Acceptance Testing Level: Ensures that the build meets minimum acceptance criteria established by the stakeholders or client.
- System Testing Level: Ensures that the system as a whole behaves as expected when all modules work together.
- Integration Testing Level: Ensures that integrated modules work and communicate as expected when combined.

Advantages of Smoke Testing
Smoke testing offers several advantages, including:
- Quick Execution: It is easy and quick to run, making it ideal for frequent builds.
- Early Detection: It helps detect defects at an early stage, preventing money from being wasted on faulty builds.
- Improved Software Quality: By detecting issues early, smoke testing leads to better software quality.
- Reduced Risk of Failure: Detecting core faults in earlier phases minimizes the risk of failure in subsequent testing phases.
- Time and Effort Savings: Time and effort are conserved because futile testing of unstable builds is avoided.

Disadvantages of Smoke Testing
Although smoke testing is useful in many respects, it has some disadvantages too:
- Limited Coverage: It checks only the most critical functions and doesn’t cover other potential issues.
- Manual Testing Drawbacks: When performed manually, it can be time-consuming, especially for larger projects.
- Inadequate for Negative Tests: Smoke testing typically doesn’t involve negative testing or invalid input scenarios.
- Minimal Test Cases: Since it only checks basic functionality, it may fail to identify all possible issues.

Conclusion
In conclusion, smoke testing is an important practice in the early stages of software development. It determines whether a build is stable enough to proceed to further testing, saving time and resources. By identifying major issues early, it facilitates an efficient and productive software testing process. However, smoke testing is not exhaustive and must be supplemented by other forms of testing to ensure complete quality assurance.

Personal Reflection

Looking at the concept of smoke testing, I see the importance of catching issues early in the software development process.
It’s easy to get swept up in the excitement of rolling out new features and fully testing them, but if the foundation is unstable, all the subsequent tests and optimizations can be pointless. Smoke testing, in this sense, serves as a safety net, confirming that the critical functions are running before delving into more rigorous tests. I think the idea of early defect detection resonates with my own working style.

As I like to fix small issues as they arise rather than letting them escalate into big problems, smoke testing similarly allows development teams to solve “show-stoppers” early on, preventing wasted time, effort, and resources in the future. Though it does not pick up everything, its simplicity and speed can save developers from time wasted testing a defective product, resulting in a smooth and efficient workflow. Especially in scenarios where new builds are rolled out frequently, the process seems imperative for maintaining a rock-solid, healthy product.
The benefits of early problem detection not only make software better but also stimulate a positive feedback loop of constant improvement within the development team.

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Understanding Mocking in Software Testing

Software testing is crucial to ensuring that a system behaves as expected. However, when dealing with complex systems, it can be challenging to test components in isolation, since they rely on external systems like databases, APIs, or services. This is where mocking comes in: a technique that employs test doubles. Mocking allows developers to simulate the behavior of real objects within a test environment, isolating the components they want to test. This blog post explains what mocking is, how it is applied in unit testing, the categories of test doubles, best practices, and more.

What is Mocking?
Mocking is the practice of creating test doubles: objects that stand in for real objects or services and behave in pre-defined ways. These doubles allow developers to isolate chunks of code and to simulate edge cases, errors, or specific situations that would be hard to replicate in the real world. For instance, instead of talking to a database, a developer can use a mock object that mimics database responses. This offers greater control over the testing environment, increases testing speed, and allows issues to be found early.

Understanding Test Doubles
To fully comprehend mocking, it is important to understand test doubles. Test doubles are stand-in objects that replace actual components of the system for the purpose of testing. A test double shares the same interface as the real object but behaves in a controlled fashion. There are several types of test doubles:

Mocks: Mocks are pre-programmed objects that carry expectations about how they will be called. Mocks are used to verify that particular interactions, such as function calls with specified arguments, occur while the test runs. If the interactions do not match expectations, the test fails.

Stubs: Stubs do not care about interactions. They simply provide pre-defined responses to method calls so the test can proceed without depending on the real component’s behavior.

Fakes: Fakes are more fully developed test doubles with simplified working implementations of real components. For example, an in-memory database simulating a live database can be used as a fake to speed up testing without relying on external systems.

Spies: Spies are similar to mocks but are used to record interactions with an object. You can inspect the spy after a test to ensure that the expected methods were invoked with the correct parameters. Unlike mocks, spies will not make the test fail if the interactions are unexpected.
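To make these categories concrete, here is a minimal hand-rolled sketch in Java (the PaymentGateway interface and the class names are hypothetical, invented purely for illustration):

```java
// A hypothetical dependency we want to replace in tests.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

// Stub: returns a canned answer and ignores how it is called.
class AlwaysApprovedStub implements PaymentGateway {
    public boolean charge(String account, double amount) {
        return true; // pre-defined response
    }
}

// Spy: records interactions so the test can inspect them afterward.
class PaymentGatewaySpy implements PaymentGateway {
    int chargeCalls = 0;
    double lastAmount;

    public boolean charge(String account, double amount) {
        chargeCalls++;        // log the interaction
        lastAmount = amount;
        return true;
    }
}
```

A test using the spy would run the code under test and then assert on chargeCalls and lastAmount, whereas the stub simply keeps the test moving without any such bookkeeping.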

The Role of Mocking in Unit Testing
Unit testing is testing individual pieces, such as functions or methods, in isolation. But most pieces rely on external services, such as databases or APIs. These dependencies add complexity, unpredictability, and outside factors that can get in the way of testing.

Mocking enables developers to test the unit under test in isolation by substituting external dependencies with controlled, fake objects. This ensures that any problems encountered during the test are a result of the code being tested, not the external systems it depends on.

Mocking also makes it easy to test edge cases and error conditions. For example, you can use a mock object to throw an exception or return a given value, so you can see how your code handles these situations. In addition, mocking makes tests faster because it avoids the overhead of invoking real systems like databases or APIs.

Mocking Frameworks: Mockito and Beyond
Developers use various mocking libraries to create and control mocks for unit testing. One of the most widely used libraries in the Java community is Mockito. Mockito makes it easy to create mock objects, specify their behavior, and verify interactions in an easy-to-read manner.

Highlights of Mockito include:

Behavior Verification: One can assert that certain methods were called with the right arguments.
Stubbing: Mockito allows you to define return values for mock methods so that various scenarios can be tested.
Argument Matchers: It provides flexible argument matchers for verifying method calls with a range of values.
Other than Mockito, Java libraries like JMock and EasyMock can also be used, typically alongside a test framework such as JUnit 5. For Python developers, the unittest.mock module is available. In the .NET ecosystem, libraries like Moq and NSubstitute are commonly used. For JavaScript, Sinon.js is the go-to library for mocking, stubbing, and spying.
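As a rough sketch of how these features look in practice (the UserRepository and UserService types below are hypothetical stand-ins, not from any real codebase), a JUnit 5 test using Mockito’s core API might read:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

class UserServiceTest {
    // Hypothetical collaborator and unit under test.
    interface UserRepository { String findNameById(int id); }

    static class UserService {
        private final UserRepository repo;
        UserService(UserRepository repo) { this.repo = repo; }
        String greet(int id) { return "Hello, " + repo.findNameById(id); }
    }

    @Test
    void greetsUserByName() {
        UserRepository repo = mock(UserRepository.class);  // create the mock
        when(repo.findNameById(42)).thenReturn("Maria");   // stub a return value
        UserService service = new UserService(repo);

        assertEquals("Hello, Maria", service.greet(42));   // assert behavior
        verify(repo).findNameById(42);                     // verify the interaction
    }
}
```

The when/thenReturn line is the stubbing highlight, the verify call is the behavior verification highlight, and argument matchers such as anyInt() could replace the literal 42 when the exact value does not matter.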

Best Practices in Mocking
As useful as mocking is, it should be done in a disciplined way to keep tests meaningful and sustainable. Here are a few rules of thumb to bear in mind:

Mock Only What You Own: Mock only entities you own, such as classes or methods that you have created. Mocking third-party APIs or external dependencies leads to brittle tests that break when those outside dependencies change.

Keep Mocks Simple: Don’t overcomplicate mocks with too many configurations or behaviors. Simple mocks are more maintainable and understandable.

Avoid Over-Mocking: Over-mocking your tests can make them too implementation-focused. Mock only what’s required for the test, and use real objects when possible.

Assert Behavior, Not Implementation: Tests must assert the system’s behavior is right, not how the system implements the behavior. Focus on asserting the right methods with the right arguments are called, rather than making assertions about how the system works internally.

Use Mocks to Isolate Tests: Use mocks to isolate tests from slow or flaky external dependencies like databases or networks. This results in faster and more deterministic tests.

Clear Setup and Teardown: Ensure that mocks are created before each test and cleaned up afterward, as the sketch below shows. This keeps tests repeatable and free of side effects.
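A common pattern for this (a sketch assuming Mockito with JUnit 5; the UserRepository interface is a hypothetical placeholder) is to open the annotated mocks before each test and close them afterward:

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

class ServiceTest {
    // Hypothetical dependency to be mocked afresh in every test.
    interface UserRepository { String findNameById(int id); }

    @Mock
    UserRepository repo;

    private AutoCloseable mocks;

    @BeforeEach
    void setUp() {
        mocks = MockitoAnnotations.openMocks(this); // fresh mocks for every test
    }

    @AfterEach
    void tearDown() throws Exception {
        mocks.close(); // release the mocks so tests stay independent
    }
}
```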

Conclusion
Mocking is an immensely valuable software testing strategy that lets developers isolate and test individual components apart from outside dependencies. Through the use of test doubles like mocks, stubs, fakes, and spies, programmers can simulate real conditions, exercise edge cases, and make their tests faster and more reliable. Good practices must be followed: mock only what you own, keep mocks as simple as possible, and assert behavior rather than implementation. Applied in the right way, mocking is a great ally in creating robust, stable, high-quality software.

Personal Reflection

I find mocking to be an interesting approach that enables targeted and effective testing. In this class, Quality Assurance & Testing, I’ve gained insight into how crucial it is to isolate the units being tested in order to check their functionality as it would occur in real-world settings. In particular, I’ve come to understand how beneficial mocking can be in unit testing, enabling the isolation of specific interactions and edge cases.

I also believe that, as developers, we tend to over-test or rely too heavily on mocks, especially when working with complex systems. Reflecting on my own experience, I will keep in mind that getting the balance right, mocking only when strictly required and testing behavior rather than implementation, is the key to writing meaningful and sustainable tests. This approach helps us ensure that the code is useful and also adaptable when it encounters future changes, which is, after all, what any well-designed testing system aims for.

Reference:

GeeksforGeeks. “What is Mocking? An Introduction to Test Doubles.” Available at: https://www.geeksforgeeks.org/mocking-an-introduction-to-test-doubles/.

From the blog Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Basis Path Testing in Software Testing

Software testing is a significant part of confirming the functionality, reliability, and performance of software products. Among the diverse testing techniques, Basis Path Testing is an essential one for verifying the control flow of a program. In this blog, we explore the concept of Basis Path Testing, its importance, and how it is applied in software testing.

What is Basis Path Testing?
Basis Path Testing is a white-box testing method that focuses on a program’s control flow. It was formulated by Thomas J. McCabe as part of the Cyclomatic Complexity metric, which counts the number of linearly independent paths in a program’s control flow. The approach tests the software by executing every independent path through the program at least once, providing thorough coverage of the code.

The goal of Basis Path Testing is to locate all the potential paths within a program and ensure that each of them is tested for possible issues. It helps in determining logical defects that are not obvious through other testing techniques, such as functional or integration testing.

Key Elements of Basis Path Testing
Control Flow Graph: The first step in Basis Path Testing is to design a control flow graph (CFG) for the program. This graph represents the control structure of the program, including decision points, loops, and function calls.

Cyclomatic Complexity: The second step is to compute the cyclomatic complexity of the program, which is the number of independent paths. The metric is calculated as:
V(G) = E - N + 2P

where E is the number of edges, N is the number of nodes, and P is the number of connected components in the control flow graph.
The cyclomatic complexity gives the minimum number of test cases required to exercise all the independent paths.

Independent Paths: After calculating the cyclomatic complexity, the independent paths in the control flow graph must be determined. These are paths that each introduce at least one edge not traversed by any of the other paths.

Test Case Design: Once independent paths are identified, test cases are created to execute each path such that all aspects of the program’s logic are exercised.
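As a small worked example (hypothetical code, purely for illustration), consider this Java method with two decision points:

```java
// Classifies an integer; the method contains two decision points.
static String classify(int x) {
    if (x < 0) {        // decision 1
        return "negative";
    }
    if (x == 0) {       // decision 2
        return "zero";
    }
    return "positive";
}
```

Its control flow graph has 8 edges, 7 nodes, and 1 connected component, so V(G) = 8 - 7 + 2 = 3 (equivalently, two decision points plus one). Three test cases, such as x = -5, x = 0, and x = 7, are enough to exercise the three independent paths.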

Importance of Basis Path Testing
Basis Path Testing is particularly useful in revealing intricate logical errors that can result from complex control flow. By executing all independent paths, it ensures that no part of the program’s logic is left untested, which reduces the chances of undiscovered defects.

The approach is used widely in unit testing and integration testing, especially for programs with intricate decision structures and loops. It is also a good approach for regression testing, where changes to the codebase may introduce flaws into previously tested paths.

Conclusion
Basis Path Testing is a highly valuable method for thorough testing of software using independent paths through the control flow of a program. By understanding and applying this method, software developers are able to improve the quality of applications, reduce errors, and deliver improved software to end-users.

Personal Reflection
Having studied Basis Path Testing, I can see how this approach is essential to checking the strength of software systems. As a computer science major, what I have learned from my studies is that testing is not just about checking whether the code runs; more importantly, it is about verifying the logic and correctness of how it runs. Basis Path Testing’s focus on cyclomatic complexity provides a clear, mathematical way to ensure that all possible execution paths are considered.

In my experience, applying this technique detects logical flaws in programs that would otherwise not be easily seen through normal debugging or functional testing.

Citation:
“Basis Path Testing in Software Testing.” GeeksforGeeks, https://www.geeksforgeeks.org/basis-path-testing-in-software-testing/.

From the blog Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Learning Boundary Value Analysis in Software Testing

One of the most significant ways of ensuring that an application is reliable and efficient before deployment is software testing. Boundary Value Analysis (BVA) is a powerful functional testing technique that focuses on testing the boundary cases of a system, finding potential defects that are apt to show themselves at the boundaries of input partitions.

What is Boundary Value Analysis?

Boundary Value Analysis is a black-box testing method that tests the boundary values of valid and invalid partitions. Instead of testing all possible values, testers focus on minimum, maximum, and edge-case values, as these are the most error-prone. Defects tend to occur at the extremities of input ranges rather than at arbitrary points within the range.

For example, if a system accepts values between 18 and 56, instead of testing all the values, testers would test the following values:

Valid boundary values: 18, 19, 37, 55, 56

Invalid boundary values: 17 (below minimum) and 57 (above maximum)

By running these primary test cases, the testers can easily determine boundary-related faults without unnecessary repetition of in-between value testing.
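A minimal sketch of how these cases might look as a JUnit 5 parameterized test (the isEligibleAge validator below is a hypothetical example, not from the source article):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class AgeValidatorTest {
    // Hypothetical validator: accepts ages in the range 18..56 inclusive.
    static boolean isEligibleAge(int age) {
        return age >= 18 && age <= 56;
    }

    @ParameterizedTest
    @CsvSource({
        "17, false",  // just below the minimum
        "18, true",   // minimum boundary
        "19, true",   // just above the minimum
        "37, true",   // nominal value
        "55, true",   // just below the maximum
        "56, true",   // maximum boundary
        "57, false"   // just above the maximum
    })
    void boundaryValues(int age, boolean expected) {
        assertEquals(expected, isEligibleAge(age));
    }
}
```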

Implementing BVA: A Real-World Example

To represent BVA through an example, let us take a system processing dates under the following constraints:

Day: 1 to 31

Month: 1 to 12

Year: 1900 to 2000

Under the single fault assumption, where one variable at a time is tested at its boundary values while the others are held at nominal values, test cases like the following can be written:

Boundary value checking for years (e.g., 1900, 1960, 2000)

Boundary value checking for days (e.g., 1, 31, invalid cases like 32)

Checking boundary values for months (e.g., 1, 12)

By limiting test cases to boundary values, we achieve strong test coverage with minimal test effort.

Equivalence Partitioning and BVA together

Another helpful technique is combining BVA with Equivalence Partitioning (EP). EP divides input data into partitions in which every value of an equivalence class is expected to behave the same way. By using these techniques together, testers can reduce the number of test cases while still maintaining broad coverage.

For instance, if a system accepts only passwords that are 6 to 10 characters long, the test partitions could be:

0-5 characters: Not accepted

6-10 characters: Accepted

11-14 characters: Not accepted

This combination makes testing more efficient, especially when more than one variable is involved.
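A brief sketch of the two techniques combined for the password rule (the isValidPassword check is a hypothetical example): one test per partition boundary plus one representative value inside the valid partition.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class PasswordLengthTest {
    // Hypothetical rule: passwords must be 6 to 10 characters long.
    static boolean isValidPassword(String password) {
        return password.length() >= 6 && password.length() <= 10;
    }

    @ParameterizedTest
    @CsvSource({
        "5, false",   // upper boundary of the invalid short partition
        "6, true",    // lower boundary of the valid partition
        "8, true",    // representative value inside the valid partition
        "10, true",   // upper boundary of the valid partition
        "11, false"   // lower boundary of the invalid long partition
    })
    void lengthPartitions(int length, boolean expected) {
        assertEquals(expected, isValidPassword("x".repeat(length)));
    }
}
```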

Limitations of BVA

Although BVA is powerful, it does face some limitations:

It works best when the system has well-defined numeric input ranges.

It does not take into account functional dependencies between variables.

It may be less effective for free-form languages like COBOL, which have more flexible input processing.

Conclusion

Boundary Value Analysis is a very important testing method that helps testers target the most probable fault sites in a system. Merged with Equivalence Partitioning, it achieves high test effectiveness while eliminating redundant test cases and preserving coverage. Although BVA isn’t a “catch-all,” it represents an essential technique for delivering quality, dependable software.

Personal Reflection

Learning about Boundary Value Analysis has deepened my understanding of software testing and how it makes software reliable. It has shown me that by focusing on boundary values, defects can be detected more efficiently without generating surplus test cases. It is a very practical approach for real-world scenarios, such as form validation and numeric input testing, where boundary-related errors are likely to be found. In the future, I will include BVA in my testing approach to achieve better test coverage in the software projects I undertake.

Citation

GeeksforGeeks. (n.d.). Software Testing – Boundary Value Analysis. Retrieved from https://www.geeksforgeeks.org/software-testing-boundary-value-analysis/

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Maria Delia’s Blog

First Blog for CS-443

Hello, I am Maria Delia, and this is my first blog post for Software Quality Assurance & Testing.

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Understanding Software Development Methodologies

The selection of an appropriate development methodology is important in every software project. The article “Top 4 Software Development Methodologies” by Mike McGuire provides a brief overview of the principles of Agile, Waterfall, RAD, and DevOps. Each methodology has unique strengths and challenges, making them suitable for different scenarios.

Summary of the Article

The article discusses four of the most commonly adopted methodologies in software development.

Agile Development Methodology: Agile emphasizes iterative development, releasing functional software increments frequently for greater efficiency and customer satisfaction. While Agile is great at adapting to changing requirements, it requires high commitment and communication.

Waterfall Development Methodology: Waterfall is a traditional and linear approach that fits projects with clearly defined objectives and stable requirements. It’s easy to understand but can be slow and inflexible.

Rapid Application Development (RAD): This approach emphasizes rapid iterations with minimal overhead, making it ideal for small to medium projects with well-defined objectives. RAD does, however, depend on experienced teams and well-defined user requirements.

DevOps Methodology: DevOps is more a concept and culture than a methodology; it fosters collaboration across development, QA, and operations. It emphasizes automation and reliability, but challenges include adapting to continuous updates and regulatory constraints.

The article also introduces DevSecOps, which is a newer iteration of DevOps that integrates security in every step of development. This ensures that speed and safety go hand in hand in the production of software.

Reflection and Learning

Reading this article made it quite clear how much the choice of development methodology depends on the project’s aims and the team’s working style. For example, Agile’s adaptability makes it well suited to rapidly changing environments, while Waterfall’s structured approach fits projects with fixed requirements. This understanding helps me anticipate which methodology might be most effective in various situations.

What really caught my attention was DevSecOps, the integration of security into the development pipeline. This again highlighted the importance of embedding security practices early, as emphasized in some course works on secure software design. The move to collaboration and automation in DevOps and DevSecOps reflects changing demands in the software industry.

I have faced difficulties in projects where methodologies were not clearly defined, which resulted in wasted effort and miscommunication. This article shed light on how methodologies such as Agile and DevOps can minimize such problems. In the future, I will adopt Agile practices in team projects to be more adaptive and productive. I also want to study DevSecOps in detail, as it aligns with my interest in developing secure and reliable software systems.

Application to Future Practice

Understanding these methodologies empowers me to make informed decisions when managing or taking part in software projects. Whether selecting Agile for its flexibility or Waterfall for its clarity, I now understand how methodologies drive a project’s outcomes. The emphasis on security in DevSecOps inspires me to give top priority to secure coding practices at a time when technology has taken center stage in our lives.

Citation

McGuire, M. (2024, March 24). Top 4 Software Development Methodologies. Link: https://example.com/top-4-software-development-methodologies

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

The Importance of Clean Code: Striking the Right Balance

Clean code is an important aspect of a software developer’s skill set. It means writing code that works well and is easy to read, understand, and maintain. Three key principles of writing clean code are conciseness, reusability, and a clear flow of execution. Each principle contributes in its own way to the smoothness of the software development process and the reliability of the end product.

1. Striking the Right Balance Between Conciseness and Clarity:

Finding the right balance between writing concise and clear code is very important. Concise code can make the codebase easier to read and reduces the amount of time spent writing it. However, being too concise can make the code difficult to understand. The goal is to keep the code brief yet easy to read and understand without losing its purpose or logic.

2. Reusability

Another important principle of clean code is reusability. Writing code that can be reused in different parts of an application or across different projects saves time and reduces redundancy. Reusable code leads to a more modular structure, which makes the codebase easier to maintain and enhances its flexibility. It not only speeds up development but also makes it easier to fix bugs and make future updates.

3. Explicit Flow of Execution

A clear flow of execution is one of the critical characteristics of code that guarantees its readability and ease of maintenance. Poorly structured code makes a project difficult to maintain. A logical, straightforward flow is easy for a developer to follow, which is necessary for supporting the code throughout its whole life cycle.

4. The Single Responsibility Principle (SRP)

Every module or class should have only one responsibility. This decreases complexity, thereby keeping the codebase manageable. Testing and debugging, along with development and future maintenance, are all greatly simplified, making the software easier to use and much more understandable. The sketch below illustrates the idea.
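As a small illustrative sketch (hypothetical classes, not taken from the article), a class that both formats and saves reports can be split into two classes with one responsibility each:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Responsible only for turning report data into text.
class ReportFormatter {
    String format(String title, String body) {
        return "# " + title + "\n\n" + body;
    }
}

// Responsible only for persisting text to disk.
class ReportWriter {
    void write(String path, String content) throws IOException {
        Files.writeString(Path.of(path), content);
    }
}
```

Before the split, a single hypothetical Report class would have to change whenever either the formatting rules or the storage mechanism changed; after the split, each class has exactly one reason to change.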

Conclusion

Balancing conciseness, reusability, and a clear flow of execution, while following the Single Responsibility Principle, allows developers to write clean code that benefits the whole software development process. Clean code is not simply about avoiding errors; it also improves collaboration within the team, makes life easier for new developers, and significantly increases the speed of code reviews. The result is efficient, maintainable software that adapts easily to change. Keeping these principles in mind allows me and my peers to create software that is functional, high quality, and user-friendly. Even though we covered clean code in the Software Process Management class, reading this article serves as a reminder to write code as well as possible. The article also comes with examples, which I did not include here to keep this post from getting too lengthy, but I suggest everyone go and read it.

Citations:

https://www.freecodecamp.org/news/how-to-write-clean-code/

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

The Importance of Licensing Code

Licensing may seem like an obscure legal detail, but it plays a critical role in scientific software development. In “The Whys and Hows of Licensing Scientific Code”, Jake VanderPlas breaks down why picking the right license is key to sharing and advancing research.

Summary of the Article

VanderPlas emphasizes three key takeaways:

Always license your code. Without a license, code is effectively closed: it is off-limits for anyone else to legally use, which severely limits its reuse.

Use a GPL-compatible license. This makes it easier for your code to work with other open-source projects.

Prefer permissive licenses like BSD or MIT. These licenses lower barriers to collaboration; they are the most flexible and let people from both academia and industry collaborate freely.

Licensing is crucial for scientific reproducibility and collaboration. Even if you post your code publicly, without a license, it’s still “all rights reserved,” meaning others can’t legally use it. VanderPlas recommends permissive licenses because they encourage more people to adopt and improve the code. On the other hand, copyleft licenses (like GPL) keep the code open but might scare off companies from getting involved.

Personal Reflection

While reading this article, I found VanderPlas’s insights particularly relevant and important. I appreciate how licensing can help bridge the gap between innovation and real-world impact. The idea of using BSD or MIT licenses makes sense because they’re simple and open the door for more people to get involved.

This also made me think about how intentional we have to be with our work. Just like we carefully document research methods, licensing makes it clear how others can use and improve our code and/or tools. It’s a good reminder that open science isn’t just about sharing, it’s about having solid guidelines that make collaboration easier and that push science forward.

Citation

VanderPlas, J. (2014, March 10). The Whys and Hows of Licensing Scientific Code. Pythonic Perambulations.

Link of the article: https://www.astrobetter.com/blog/2014/03/10/the-whys-and-hows-of-licensing-scientific-code/

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

How Agile Management is Changing Industries

The article “What to Expect From Agile” by Julian Birkinshaw talks about how agile management, which started in software development, is now being used in other industries. It uses ING Bank in the Netherlands as an example to show how agile principles can change a whole organization. ING started its transformation in 2015 to deal with problems like too much bureaucracy and separate departments that didn’t work well together.

Summary

Agile is a method that focuses on being flexible, working together, and improving things little by little. ING decided to use it to make their operations more efficient, inspired by companies like Spotify and Google. They made big changes, like reorganizing their teams into smaller groups called “squads” that handle tasks from start to finish and larger groups called “tribes” that focus on similar goals.

Here are the key things ING learned during this change:

  1. Shifting Power: Managers gave up some control to allow teams closer to customers to make decisions. This led to a big culture shift, and some senior managers left because they didn’t fit this new way of working.
  2. Keeping Stakeholders Involved: ING worked with regulators and other important groups to show that agile could work while still keeping important rules in place.
  3. Focusing on Customers: Teams were organized based on what customers needed and could adapt their focus as those needs changed.
  4. Balancing Freedom and Structure: ING used quarterly business reviews to set goals and stay on track while letting teams decide how to get things done.
  5. Helping Employees Grow: Agile gave employees more opportunities to learn new skills and take on exciting challenges.

The results were positive: employees felt more engaged, customers were happier, and the bank saved money.

Reflection

What stands out to me the most is how agile balances giving teams freedom while still making sure they’re working toward the company’s goals. It’s not easy to find that balance, especially for industries like banking, where following rules is really important. It also impressed me how ING’s leaders had to let go of control and trust their teams to make decisions. That takes a lot of courage!

As a computer science and business administration student, I see how this case connects both of my fields. Agile started as a software development idea, but it’s now shaping how businesses are managed. If I were in a workplace like this, I’d like having the freedom agile offers, but I’d also want clear support systems to stay on track and make sure we’re meeting goals.

Citation

Birkinshaw, Julian. “What to Expect From Agile.” MIT Sloan Management Review, December 11, 2017.

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Introductory blog post

Hello, this is Maria Delia. Welcome to my blog page! CS@Worcester @CS-348

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.