API

Hello dear reader. As I was staring at my computer trying to find a good subject for today's blog, a bell rang in my mind: APIs.
I never came across the name API until I started my job. I remember everyone used to say 'The API is not working' or 'We need to call the API to make this process happen', and I was super confused, so I went to Google and typed 'what is an API'. Google told me 'Application Programming Interface': even more fancy words. I started to watch videos about it, and we also began using APIs at school. Now I am the one using the term API.

Application – think of an application like a credit card. You expect the credit card to help you purchase items and goods.

Programming – the API is how the credit card contacts your bank to make sure you haven't exceeded the limit on your card and that the purchase is okay to go through.

Interface – is the way we interact with an application.

Put simply: APIs define rules that developers must follow to interact with a programming language, software library, web interface or any other software tool. Everyone uses an API every day in some way. A simple comparable example would be you accessing a webpage in your browser: you make a request by entering the webpage URL, and the view you see after you press Enter is the response. An API follows the same request/response process; the difference is that API requests return data in their response.
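To make the request/response idea concrete, here is a minimal sketch of my own in Java (the endpoint URL is hypothetical, purely for illustration):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiExample {
    public static void main(String[] args) throws Exception {
        // Build a client and a GET request, just like typing a URL in the browser.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users/42")) // hypothetical endpoint
                .header("Accept", "application/json")
                .GET()
                .build();

        // The response carries data (e.g. JSON) instead of a rendered page.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
        System.out.println("Body: " + response.body());
    }
}
```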

But why do we even use APIs? Many APIs are made with the intention of allowing third-party developers to build applications using company data. Since the APIs simply provide data, there are limits on how a company can then go on to use that data. APIs act just like a door and keys: only the people holding the key can open the door and enter the room.

https://www.youtube.com/watch?v=InoAIgBZIEA This video is a great example of how to use an API (how to call it and get the information you need).

From the blog CS@Worcester – Danja's Blog by danja9 and used with permission of the author. All other rights reserved by the author.

UML

Hello dear reader. Another concept that I came across this semester in my Software Design and Construction class was UML diagrams and I wanted to write about UML diagrams in one of my posts.

For all of you who are new to UML: UML is not a programming language. UML stands for Unified Modeling Language. UML is a standard modeling language made up of an integrated set of diagrams. UML was developed to help system and software developers specify, construct, visualize, and document the code of software systems. It is a very important part of developing object-oriented software and of the software development process. UML uses mostly graphical notations to express the design of software. Using UML helps teams communicate and validate the architectural design of the software. We use it to portray the behavior and the structure of a system/project.

A question that I asked myself when I started to learn about UML was: do we really need UML? The more I learned about it, the more I understood how important UML is, for several reasons: there are a lot of complex applications and systems that need planning from many different teams, and a clear explanation needs to go to each and every team working on the same project; and most of our users might never know what code is, yet they are a very important part of our project, and that's where UML kicks in by translating this 'foreign language' called code.

UML diagrams can be classified into two types: Structural and Behavior Diagrams. Structural Diagrams capture the static aspect, or structure, of a system; they include Object Diagrams, Deployment Diagrams, Class Diagrams and Component Diagrams. Behavior Diagrams, on the other hand, capture the dynamic aspect, or behavior, of the system; they include Interaction Diagrams, State Diagrams, Use Case Diagrams and Activity Diagrams.

Besides school, I have come across UML diagrams at my job too. The diagram I have seen and that we use a lot is the Deployment Diagram, since each of us should be aware of the architecture of the system as deployed.

I like the way Noel explains UML diagrams and where/how to use them. He provides great graphic examples of the diagrams.
https://tallyfy.com/uml-diagram/

From the blog CS@Worcester – Danja's Blog by danja9 and used with permission of the author. All other rights reserved by the author.

Refactoring – Documentation – Software Framework

Hello dear readers. In this blog post I would like to write about refactoring, documentation and software frameworks. While explaining what they are, I will try to define why they are needed.

Refactoring is a widely used term in software development and has played a major role in the maintenance of software. Refactoring is one of the most self-evident processes, but it is surprisingly difficult to perform properly. In most cases, we deviate from strict refactoring and execute an approximation of the process; sometimes things work out and we are left with cleaner code, but other times we get snared, wondering where we went wrong. In either case, it is important to fully understand the importance and simplicity of barebones refactoring. Refactoring is a controlled technique for improving the design of an existing code base. Its essence is applying a series of small behavior-preserving transformations. The cumulative effect of these transformations is quite significant. By doing them in small steps you reduce the risk of introducing errors. You also avoid having the system broken while you are carrying out the restructuring, which allows you to gradually refactor a system over an extended period of time.
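To make this concrete, here is a small illustration of my own (a hypothetical invoice example, not from the sources above) of one classic behavior-preserving transformation, Extract Method, in Java:

```java
// Before: one method mixes the tax calculation with the formatting.
class InvoicePrinter {
    String print(double amount, double taxRate) {
        double total = amount + amount * taxRate;
        return "Total due: $" + String.format("%.2f", total);
    }
}

// After: the calculation is extracted into its own well-named method.
// The behavior is identical, but each piece is easier to read and test,
// and the next small refactoring step can build on it safely.
class RefactoredInvoicePrinter {
    String print(double amount, double taxRate) {
        return "Total due: $" + String.format("%.2f", totalWithTax(amount, taxRate));
    }

    private double totalWithTax(double amount, double taxRate) {
        return amount + amount * taxRate;
    }
}
```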

I also want to talk about documentation in this blog post. For a programmer, reliable documentation is always needed. The presence of documentation helps keep track of all aspects of an application, and it improves the quality of a software product. Its main focuses are maintenance, development, and knowledge transfer to other programmers. Successful documentation will make information easily accessible, simplify the product, help new users learn quickly, provide a limited number of user entry points and help cut support costs. Documentation is usually focused on the following components that make up an application: business rules, troubleshooting, server environments, application installation, database/files and code deployment.
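At the code level, Java's Javadoc comments are one common way to keep documentation next to the code it describes. A minimal sketch of my own (the method is hypothetical, for illustration only):

```java
/**
 * Calculates the total price of an order, including tax.
 *
 * @param subtotal the pre-tax order amount in dollars; must be non-negative
 * @param taxRate  the tax rate as a fraction, e.g. 0.0625 for 6.25%
 * @return the total amount due, including tax
 * @throws IllegalArgumentException if subtotal or taxRate is negative
 */
static double orderTotal(double subtotal, double taxRate) {
    if (subtotal < 0 || taxRate < 0) {
        throw new IllegalArgumentException("subtotal and taxRate must be non-negative");
    }
    return subtotal * (1 + taxRate);
}
```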

A software framework is a concrete platform where common code with general functionality can be specialized or overridden by developers or users. Frameworks take the form of libraries, where a well-defined API is reusable anywhere within the software under development. A few features distinguish a framework from an ordinary library: default behavior, inversion of control, extensibility, and non-modifiable framework code.
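Inversion of control is the key difference: the framework calls your code rather than the other way around. Here is a hedged sketch (hypothetical class names, my own example) of how a tiny framework might provide default behavior and extension points:

```java
// A tiny "framework" base class: it owns the control flow (run),
// supplies default behavior, and exposes extension points to users.
abstract class JobFramework {
    // Non-modifiable framework code: the flow is fixed, the steps are not.
    public final void run() {
        setUp();
        execute();
        tearDown();
    }

    protected void setUp() { System.out.println("default setup"); }       // default behavior
    protected void tearDown() { System.out.println("default teardown"); } // default behavior

    protected abstract void execute(); // extensibility: users fill this in
}

// User code specializes the framework instead of calling a library.
class ReportJob extends JobFramework {
    @Override
    protected void execute() {
        System.out.println("generating report...");
    }
}

public class FrameworkDemo {
    public static void main(String[] args) {
        new ReportJob().run(); // inversion of control: the framework calls our code back
    }
}
```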

I am attaching the following links that will help you have a better understanding of these concepts as these blog posts also provide examples.
https://refactoring.com/
https://medium.freecodecamp.org/why-documentation-matters-and-why-you-should-include-it-in-your-code-41ef62dd5c2f
https://www.quora.com/What-is-a-framework-in-programming

From the blog CS@Worcester – Danja's Blog by danja9 and used with permission of the author. All other rights reserved by the author.

Object Oriented Design Principles

Hello dear readers. Welcome back to my blog. As we know by now, most programming languages support and encourage object-oriented programming, and in this blog post we are going to talk about the principles of Object-Oriented Programming. The key design principles of Object-Oriented Programming are:

Abstraction – is the idea of simplifying a concept to its bare essentials in some context. It allows you to better understand the concept by stripping it down to a simplified version. Your abstraction should be intuitive.

Encapsulation – can be thought of as putting something inside a capsule. In software, restricting access to inner objects and properties helps with data integrity. Encapsulation makes your classes easier to manage, because you know what part is used by other systems and what isn't. This means that you can easily rework the inner logic while retaining the public parts and be sure that you have not broken anything. Another advantage is that working with the encapsulated functionality from the outside becomes simpler, as you have fewer things to think about.

Decomposition – is the action of splitting an object into multiple separate smaller parts, as the split parts are easier to understand, maintain and program. There are three types of decomposition relationships: 1. Association, which defines a loose relationship between two components; they don't depend on each other but work together. 2. Aggregation, which defines a weak 'has-a' relationship between a whole and its parts; the parts can exist without the whole. 3. Composition, which defines a strong 'has-a' relationship where the whole and the part can't exist without each other.
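A rough Java illustration of the three relationships (my own hypothetical classes; in code the distinction mostly shows up as ownership and lifetime):

```java
import java.util.ArrayList;
import java.util.List;

class Driver { }

// Association: Car and Driver collaborate, but neither owns the other.
class Car {
    void drive(Driver driver) { /* loose, temporary collaboration */ }
}

// Aggregation: a Team "has" Players, but Players exist independently of it.
class Player { }
class Team {
    private final List<Player> roster = new ArrayList<>();
    void add(Player player) { roster.add(player); } // players are created elsewhere
}

// Composition: a House creates and owns its Rooms; they cannot outlive it.
class House {
    private final List<Room> rooms = new ArrayList<>();
    House() { rooms.add(new Room()); } // rooms exist only as part of the house
    private static class Room { }
}
```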

Polymorphism – is the ability of data to be processed in more than one form. It allows the same task to be performed in various ways, and it includes method overloading and method overriding.

Inheritance – is the ability of one class to inherit the properties of another class, called the parent class. So, when we create a class, we do not need to write all the properties and functions again and again, as these can be inherited from another class that possesses them. Inheritance allows the user to reuse code whenever needed.
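To tie several of these principles together, here is a small sketch of my own (hypothetical Shape classes) showing abstraction, encapsulation, inheritance and polymorphism at once:

```java
// Abstraction: Shape captures only the essentials of "a thing with an area".
abstract class Shape {
    public abstract double area();
}

// Inheritance: Circle inherits from Shape.
// Encapsulation: radius is private; outside code sees only the public interface.
class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override
    public double area() { return Math.PI * radius * radius; }
}

class Rectangle extends Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    @Override
    public double area() { return width * height; }
}

public class ShapesDemo {
    public static void main(String[] args) {
        // Polymorphism: the same call, shape.area(), runs different code per subclass.
        Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
        for (Shape shape : shapes) {
            System.out.println(shape.area());
        }
    }
}
```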

Design principles are rules in software design that have proven themselves valuable over the years. Just like in any other 'game', even when we code we need to know and follow the rules. This is the main reason why I chose to write about the principles of Object-Oriented Programming.

From the blog CS@Worcester – Danja's Blog by danja9 and used with permission of the author. All other rights reserved by the author.

SOLID..

Hello dear readers. Today we are going to talk about SOLID, the first five principles of object-oriented design.

S.O.L.I.D is an acronym for the first five object-oriented design principles. When these principles are combined, they make it easy for a programmer to develop software that is easy to maintain and extend. They also make it easier for developers to refactor code safely, and they are part of agile or adaptive software development. SOLID stands for:
S – Single responsibility principle
O – Open/Closed principle
L – Liskov substitution principle
I – Interface segregation principle
D – Dependency Inversion principle

Single Responsibility Principle states that a class should have only one job, only one reason to change. The reason why we should use SRP is that it makes your software easier to implement and prevents unexpected side-effects of future changes. Another benefit of this principle is that classes and software components that have only one responsibility are much easier to understand, explain and implement than the ones that provide a solution for everything.
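A tiny hypothetical Java illustration (my own, not from a specific source): a class that both generates and persists a report has two reasons to change, so we split it.

```java
// Violates SRP: generating the report and saving it are two separate jobs,
// so this class changes when either the format or the storage changes.
class ReportService {
    String generate() { return "report body"; }
    void saveToFile(String report) { /* file I/O here */ }
}

// After the split, each class has exactly one reason to change.
class ReportGenerator {
    String generate() { return "report body"; }
}

class ReportSaver {
    void saveToFile(String report) { /* file I/O here */ }
}
```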

Open/Closed Principle states that objects or entities should be open for extension but closed for modification. Using this principle prevents situations in which a change to one of your classes also requires you to adapt all dependent classes.
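For example (a hypothetical sketch of my own): instead of editing a price calculator every time a new discount rule appears, each rule is added as a new class.

```java
// Open for extension, closed for modification: new discounts are new classes,
// and PriceCalculator itself never needs to change.
interface Discount {
    double apply(double price);
}

class HolidayDiscount implements Discount {
    public double apply(double price) { return price * 0.90; }
}

class StudentDiscount implements Discount {
    public double apply(double price) { return price * 0.85; }
}

class PriceCalculator {
    double finalPrice(double price, Discount discount) {
        return discount.apply(price);
    }
}
```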

Liskov substitution principle states that every subclass/derived class should be substitutable for its base/parent class. To achieve that, your subclasses need to follow these rules: 1. Don't implement any stricter validation rules on input parameters than the parent class implements. 2. Apply at least the same rules to all output parameters as the parent class applies.
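A classic illustration (hypothetical classes of my own): a subclass that tightens an input rule breaks substitution.

```java
class Repository {
    // Contract: accepts any non-null id.
    String findById(String id) {
        if (id == null) throw new IllegalArgumentException("id required");
        return "record-" + id;
    }
}

// Violates LSP: stricter input validation than the parent, so code written
// against Repository can suddenly fail when handed a CachedRepository.
class CachedRepository extends Repository {
    @Override
    String findById(String id) {
        if (id == null || id.length() < 5) { // stricter precondition than the parent
            throw new IllegalArgumentException("id too short");
        }
        return super.findById(id);
    }
}
```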

Interface segregation principle states that a client should not be forced to implement an interface that it doesn't use; in other words, clients shouldn't be forced to depend on methods they do not use. By following this principle, you will be able to prevent bloated interfaces that define methods for multiple responsibilities. You should avoid classes and interfaces with multiple responsibilities because they change often and make your software hard to maintain.
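A small sketch of my own (hypothetical interfaces) of the before and after:

```java
// Bloated: forces every implementer to provide all three methods,
// even when a client only ever needs one of them.
interface Worker {
    void code();
    void design();
    void test();
}

// Segregated: clients depend only on the methods they actually use.
interface Coder { void code(); }
interface Tester { void test(); }

class Developer implements Coder {
    public void code() { System.out.println("writing code"); }
}

class QaEngineer implements Tester {
    public void test() { System.out.println("running tests"); }
}
```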

Dependency Inversion principle states that entities must depend on abstractions, not on concretions. High-level and low-level modules should both depend on the abstraction. This design principle does not just change the direction of the dependency; it also splits the dependency between the two levels by introducing an abstraction between them.
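A hedged Java sketch of my own (hypothetical names) of that introduced abstraction:

```java
// The abstraction both levels depend on.
interface MessageSender {
    void send(String message);
}

// Low-level module: depends on (implements) the abstraction.
class EmailSender implements MessageSender {
    public void send(String message) { System.out.println("email: " + message); }
}

// High-level module: depends on the abstraction, not on EmailSender,
// so the concrete sender can be swapped without touching this class.
class NotificationService {
    private final MessageSender sender;
    NotificationService(MessageSender sender) { this.sender = sender; }
    void notifyUser(String message) { sender.send(message); }
}
```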


From the blog CS@Worcester – Danja's Blog by danja9 and used with permission of the author. All other rights reserved by the author.

Test-Driven Development

For this week's blog, I am choosing 'Test driven development: what it is, and what it is not' by Andrea Koutifaris.

Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle. First, the developer writes a failing test case, then produces the minimum amount of code to pass that test, and finally refactors it to acceptable standards.

TDD is often described as a cycle of three phases: Red/Green/Refactor.

Red phase

In the red phase, you write a test that uses a piece of code as if it were already implemented. Without thinking about the implementation or the production code, this phase is where you concentrate on writing a clean interface for future users and where you design how your code will be used by clients. The red phase is the most important phase, and its rule is what makes TDD different from regular testing: you write tests so that you can write production code, not to test your code after the fact.
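As a hypothetical sketch (using JUnit 5; the StringCalculator class is my own example, not from the article), a red-phase test calls an API that does not exist yet:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class StringCalculatorTest {
    @Test
    void sumsCommaSeparatedNumbers() {
        // We call the code as if it were already implemented; until the
        // production class exists and works, this test stays red.
        StringCalculator calc = new StringCalculator();
        assertEquals(6, calc.add("1,2,3"));
    }
}
```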

Green Phase

For developers, the green phase is probably the easiest part of TDD. This is the part where you write code, but not the whole implementation: just enough for the test to pass, making all the alarming red on the test report turn green. In this phase, you do not need to worry about violating best practices, since we will do that in the refactor phase. This phase exists to make your task simpler and less prone to error.
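Continuing the same hypothetical sketch, a deliberately minimal green-phase implementation; elegance waits for the refactor phase:

```java
// Just enough code to turn the test green, and no more.
class StringCalculator {
    int add(String numbers) {
        int sum = 0;
        for (String part : numbers.split(",")) {
            sum += Integer.parseInt(part);
        }
        return sum;
    }
}
```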

Refactor Phase

In this phase, you are allowed to change the code while keeping the tests green. But one thing is mandatory: you have to remove code duplication. In this phase, you can worry about algorithms and such to make the program better. This is the phase where you can show off your skills to your users.


I really liked reading this blog and I kind of liked the idea of Test-Driven Development. I think doing it this way creates more code coverage and thus fewer bugs later on. This kind of development is a bit unique to me, since I have only done behavior-driven development so far. Test-driven development changed my mind in a way. I think that doing test-driven development is more beneficial, kind of like reverse engineering: we start from something that we know is going to work and then proceed from there.


From the blog CS@Worcester – Computer Science by csrenz and used with permission of the author. All other rights reserved by the author.

Fuzz Testing

Source: https://www.guru99.com/fuzz-testing.html

This week's reading is on fuzz testing. It is defined as an automated technique that is used to uncover errors that would most likely be missed by manual inputs. As it is a black-box testing technique, it deals with the execution itself rather than going through the source code. Because of the way the technique works, it is able to find serious defects and security loopholes in the software being tested. It is also stated that the technique is one of the most common methods hackers use to find vulnerabilities in systems. The general steps of fuzz testing are: identifying inputs, generating fuzzed data, executing tests using that data against the system, and logging all findings to be reviewed. Typical bugs detected by fuzz testing are assertion failures, memory leaks, invalid input handling, and correctness bugs. It is very simple, but it will improve the quality of the software and improve security overall.
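A bare-bones sketch of the idea (my own example, not from the article): generate random strings, feed them to a hypothetical parser, and log anything that fails in an unexpected way.

```java
import java.util.Random;

public class MiniFuzzer {
    // The function under test: a hypothetical parser we want to probe.
    static int parseQuantity(String input) {
        return Integer.parseInt(input.trim());
    }

    public static void main(String[] args) {
        Random random = new Random(42); // fixed seed keeps failures reproducible
        for (int i = 0; i < 10_000; i++) {
            String fuzzed = randomString(random);
            try {
                parseQuantity(fuzzed);
            } catch (NumberFormatException expected) {
                // input rejected cleanly: not a finding
            } catch (Exception crash) {
                // any other failure is a finding worth logging and reviewing
                System.out.println("CRASH on input [" + fuzzed + "]: " + crash);
            }
        }
    }

    static String randomString(Random random) {
        int length = random.nextInt(10);
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < length; i++) {
            sb.append((char) (random.nextInt(95) + 32)); // printable ASCII
        }
        return sb.toString();
    }
}
```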

I found it interesting that the article mentions that fuzzing is a common method for hackers to gain access to a system. However, I am not surprised, considering its overarching objective: knowing that the technique is used by testers to find vulnerabilities in software, it doesn't hurt for hackers to check whether the common vulnerabilities have been accounted for through testing. Another interesting tidbit is correctness bugs, as I do not remember being able to test for corrupted databases, poor search results, and the like using the other techniques available. I also agree that fuzz testing alone will not solve all security issues, as it only accounts for invalid inputs, correctness bugs, memory leaks, and assertion failures; there are other methods specialized towards handling complex security threats. In other words, fuzz testing will only help identify common vulnerabilities and sometimes help against major ones. Using it in conjunction with other effective methods, including white-box techniques, will create a product that is high quality, secure, and cost-effective. It is similar to mutation testing in that both help ensure the software is robust. In conclusion, fuzz testing is useful for showing the presence of bugs in an application but will not guarantee full coverage.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Mutation Testing

Source: https://www.guru99.com/mutation-testing.html

This week's reading is on mutation testing. It is stated to be a technique that changes certain statements in the source code to see if the test cases are able to find the errors. The overarching objective of mutation testing is assessing the quality, or robustness, of the test cases. These mutations generally fall into several categories: operand replacement, expression modification, and statement modification. For operand replacement, an example would be simply replacing a variable in an if-statement with a constant. For expression modification, an example would be replacing a less-than-or-equal-to operator with a greater-than-or-equal-to operator. Lastly, for statement modification, an example would be simply deleting lines of code, adding code, or modifying data types in the program. This type of testing allows the testers to uncover errors that would have otherwise remained undetected. By comprehensively testing the tests, it provides a large amount of coverage of the source code. It is a very useful white-box testing technique.
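For example (a hypothetical hand-made mutant of my own; in Java, tools such as PIT generate these automatically): an expression-modification mutant flips a boundary operator, and a good boundary test kills it.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class Grader {
    static boolean passes(int score) {
        return score >= 60; // a mutant would change this to: score > 60
    }
}

class GraderTest {
    @Test
    void boundaryScorePasses() {
        // Kills the ">" mutant: the original returns true at exactly 60,
        // the mutant returns false, so the test suite detects the mutation.
        assertTrue(Grader.passes(60));
    }
}
```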

I found that the article provides great depth on mutation testing: steps on how the technique works, advantages, disadvantages, and examples. This serves as a great refresher for the activity done in class. Before that, I would never have thought about a software technique that provides such coverage of the tests already created. I can agree with the article that it is a powerful tool that brings an adequate amount of error detection. It will improve code quality, and early bug detection will save costs later in development if bugs are caught early using mutation testing. What I found most interesting about mutation testing in general is the mutation score. It is defined as the percentage of killed mutants out of the total number of mutants (mutation score = killed mutants / total mutants × 100). By observing the percentage of killed mutants, we can see whether the test cases are effective against the mutations. Unlike other white-box testing techniques, it is a very unique way of effectively testing the tests themselves. This is similar to the black-box technique of fuzzing, which creates unexpected inputs for software to uncover bugs that would have otherwise been missed. In conclusion, this exhaustive technique is very useful for comprehensively testing a program and is very effective.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Automated Software Testing the Future?

When we think of AI, we usually think of robots performing actions that can replace basic human actions; sometimes these actions aren't basic at all. Development timelines have also changed drastically. This Forbes article talks about how AI can potentially take over testing phases due to the shorter time-frames for gold-standard releases. With more and more companies releasing software, there will naturally be more competition. It is not only competition driving this push for AI software testing, but also efficiency. Another driver is cost structure: many companies are based on certain cost structures which can heavily impact the quality of a product, and with the amount of testing needed for a product to be in working condition, you may need a lot of people per team to test software. AI software testing sets out to resolve, or at least assist with, this issue. But the main reason for AI in software testing is the narrow constraint of time. Since deadlines are being set shorter and shorter while standards are being set higher and higher, the logical solution seems to be finding the easiest and most effective way to combat these constraints.

This is an interesting topic because it's something that you wouldn't really think would make much sense. If you develop an AI to test software, then this could still potentially leave the software vulnerable to bugs, due to the added variables and 'middlemen' introduced into the equation by automated testing. Not to say that AI testing is useless, as it is probably extremely efficient at testing a lot of smaller things at once, but it seems like it could add overhead: new roles would have to be assigned for teams just to test the AI that is testing the software. In concept, AI testing seems like a great idea. However, there are bound to be bugs, and whether it is worth the risk and cost to implement AI software testing is ultimately up to the companies doing it. It is a very unique topic, and it will be interesting to see how companies implement this in the future.

Article: https://www.forbes.com/sites/forbestechcouncil/2018/12/17/ai-in-software-testing-will-a-bot-steal-your-spot/#63c50ce36710

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Importance of API Lock-down

In recent times, it seems as though a few companies are letting major security flaws slip through their developer tools. A few days ago, Facebook had a massive 8+ million user exposure due to a flaw in their Photos API. This bug allowed users of certain apps built on this API to have their photo data leaked and pulled without their knowledge. Facebook didn't realize this until 12 days after it had occurred. Now, Google is in a similar boat. A few days ago, on Monday, Google revealed that its Google Plus social network (which was already in the process of shutting down) also had an API bug that, again, exposed the information of users, this time on a much larger scale: Google's API exposure hit over 51 million users. Interestingly enough, this happened just two months after Google had discovered another bug that exposed the data of 500,000 users, which was the initial reason for Google Plus to shut down. Although Google reported that there was no evidence of this data being 'misused', the information is still out there, and it will surely attract the eyes of many who would misuse it.

So why is locking down an API so important? An API is a predefined set of tools that developers use to perform various actions in a program. Many of these are protocols that either pull certain information or perform various tasks. The API is the middle ground through which software talks to other software. When there is an API bug, information that is not supposed to be pullable can get exposed to parties that should never see it. This was the case with Facebook's Photos API, and now with Google's Google Plus API. What we can learn from this is: whenever you are modifying or testing APIs, always make sure that boundary tests are in place to verify that certain data cannot be pulled by any alternative means. Most of the time, only authorized developers are permitted access to certain APIs, nullifying outside attacks. However, if an API does not have the correct stops and security in place, user data is at massive risk, as shown by these two examples of mass user data exposure.
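As a hedged sketch of what such a 'stop' and its boundary test might look like (hypothetical names of my own; this is not Facebook's or Google's actual API):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertThrows;

interface AuthService {
    boolean canAccess(String callerToken, String resourceOwnerId);
}

// A simplified endpoint guard: data is returned only when the caller is
// authorized for that specific resource, never by default.
class PhotoApi {
    private final AuthService auth;

    PhotoApi(AuthService auth) { this.auth = auth; }

    byte[] getPhoto(String callerToken, String photoOwnerId) {
        if (!auth.canAccess(callerToken, photoOwnerId)) {
            throw new SecurityException("caller not authorized for this user's photos");
        }
        return new byte[0]; // stub standing in for the actual photo bytes
    }
}

// The boundary test: an unauthorized caller must never get data back.
class PhotoApiTest {
    @Test
    void unauthorizedCallerIsRejected() {
        PhotoApi api = new PhotoApi((token, owner) -> false); // deny-all stub
        assertThrows(SecurityException.class, () -> api.getPhoto("bad-token", "user42"));
    }
}
```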


Article: https://www.softwaretestingnews.co.uk/17470-2-google-plus-closure-api-bug/

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.