Category Archives: CS@Worcester

Time for some more REST

This week I have chosen to deepen my understanding of REST API. The article, “RESTful API Design Tips from Experience” by Peter Boyer, goes over good RESTful design practices. The article covers many topics, including some I do not have experience with but are still useful for a greater understanding overall.

The article proposes that the API should state its version
in the URLs. The author suggests adding a prefix to all URLs to specify API
version. This makes the eventuality of version updates easier to handle.
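
As a sketch of what that might look like (the route and framework choice here are my own, not the article's), an Express app in TypeScript could mount every route under a version prefix:

```typescript
import express from "express";

const app = express();
const v1 = express.Router();

// A hypothetical resource route; it only knows about "/orders"
v1.get("/orders", (_req, res) => {
  res.json([]); // placeholder: would return the list of orders
});

// Every v1 route is served under the /v1 prefix, so a future /v2 router
// can be added later without breaking existing clients.
app.use("/v1", v1);
app.listen(3000);
```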

A simple tip mentioned in the article is the use of plurals.
When designing endpoints, plurals should be used instead of the singular form.
This is to avoid possible misunderstandings of what the endpoint is used for.

Example: /orders/customers vs. /order/customer

Another tip in the article is to use nesting for
relationship filtering. From what I understand, it is better to use multiple
path variables for filtering than to rely on query strings. The example in the
article illustrates this tip clearly, and following the advice will make your
endpoints easier to read.
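
For instance (a hypothetical sketch of my own, not code from the article), the nested path reads more clearly than the query-string version:

```typescript
// Compare a nested, path-variable style with a query-string style for the
// same relationship. The endpoints here are hypothetical.
async function listCustomerOrders(customerId: string) {
  // Nested path: the customer/order relationship reads directly from the URL
  const nested = await fetch(`/customers/${customerId}/orders`);

  // Query-string alternative the article argues against for relationship filtering
  const viaQuery = await fetch(`/orders?customer=${customerId}`);

  return { nested, viaQuery };
}
```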

The author advises keeping your API as flat as possible. He
suggests that creating and fetching data should have a longer resource path
than updating and deleting data. I do not really understand why this is the
case; my best guess is that the shorter path reduces unnecessary processing.
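
My reading of the tip, sketched with hypothetical endpoints of my own: the nested path is used to create or list orders under a customer, but once an order exists and has its own id, it is updated or deleted at a short, flat path.

```typescript
async function orderLifecycle(customerId: string, orderId: string) {
  // Longer, nested path for creating (and fetching) orders under a customer
  await fetch(`/customers/${customerId}/orders`, { method: "POST", body: "{}" });

  // Short, flat paths once the order can be identified on its own
  await fetch(`/orders/${orderId}`, { method: "PATCH", body: "{}" });
  await fetch(`/orders/${orderId}`, { method: "DELETE" });
}
```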

The next tip I found interesting was the important, if somewhat
obvious, idea of paginating results. When working with larger databases, requests
can have thousands of results. Since it would be impractical to return all of that
data at once, it makes sense to partition the results into pages and return one page at
a time. Pagination can also significantly reduce the expense of a request.
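
A minimal sketch of what that could look like for a client, assuming hypothetical page and limit query parameters and a hypothetical response shape:

```typescript
// The server returns one page of items at a time instead of thousands of rows.
interface Page<T> {
  items: T[];
  page: number;
  totalPages: number;
}

async function getOrdersPage(page: number, limit = 50): Promise<Page<unknown>> {
  const res = await fetch(`/orders?page=${page}&limit=${limit}`);
  return res.json();
}
```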

The last tip I will mention from the article is the usage of
status codes. HTTP status codes should be used consistently and properly. This
means each response should use the appropriate code and provide additional
details when necessary, such as an error code accompanied by a message
explaining why the error occurred.
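
A small sketch of what consistent status codes buy the client (the error body shape here is an assumption of mine, not something the article specifies):

```typescript
async function getOrder(orderId: string) {
  const res = await fetch(`/orders/${orderId}`);
  if (!res.ok) {
    // e.g. a 404 with a body like { "error": "order not found" }
    const details = await res.json();
    throw new Error(`Request failed with status ${res.status}: ${details.error}`);
  }
  return res.json();
}
```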

Overall, I found Boyer’s article to be very informative. I
only mentioned a few of the topics raised in his article, but all of them are
interesting and useful. Reading through this article gave me a better
understanding of the nuances of RESTful API design and has helped me comprehend
my current coursework (CS343) better. I suggest that anybody who is new to REST
read this article.

Article Referenced:
https://medium.com/studioarmix/learn-restful-api-design-ideals-c5ec915a430f

From the blog CS@Worcester – D’s Comp Sci Blog by dlivengood and used with permission of the author. All other rights reserved by the author.

Adventures in TypeScript

This week in CS-343, I was introduced to TypeScript, a
scripting language that uses typed variables to prevent coding errors caused by
mismatching data types. My introduction to TypeScript was my first time working
with a scripting language, and I am really looking forward to learning more
about it and using it to write new kinds of programs. However, the tutorial I
did in class was rather simple, as it only showed how to set up a basic project
in TypeScript. To learn more about this new language, I decided to research
TypeScript’s other features. I eventually came across a useful blog post by
Nwose Lotanna titled “New Features in TypeScript You Didn’t Know Exist.”

Link to the blog:

https://blog.bitsrc.io/new-features-in-typescript-you-didnt-know-exist-54b7ab8d0b4f

As the title suggests, this blog post discusses several
features of TypeScript that were introduced between versions 3.0 and 3.4. Each
feature has a short description that explains why it is useful, and many are
also demonstrated using example code. The post seems to be targeted at developers
who are experienced with TypeScript, which made it difficult for me to
understand every feature it discusses. However, there are a few features listed
in the blog that I could definitely see myself and other newcomers to TypeScript
making use of even in simple programs. I chose to discuss this blog to help spread
information on these useful but often overlooked features.

The first feature discussed in this blog is project
references. As of TypeScript 3.0, it is possible for one TypeScript project to
reference another by referencing its tsconfig.json files. This feature is
heavily reminiscent of the import feature in Java or the #include
feature in C, which have proved invaluable in my experience. Having the ability
to reference code in other files helps keep projects simple and organized, which
will undoubtedly be just as important in TypeScript as it is in other
languages.
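
As a minimal sketch (the project layout and paths here are hypothetical), the referencing project lists the other project in its tsconfig.json:

```jsonc
// tsconfig.json of the project doing the referencing (paths are hypothetical)
{
  "compilerOptions": {
    "outDir": "./dist"
  },
  "references": [
    { "path": "../shared-lib" }
  ]
}
```

The referenced project's own tsconfig.json has to set "composite": true, and running tsc --build then compiles the referenced project first whenever it is out of date.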

Another feature discussed in the blog that caught my
attention is TypeScript’s BigInt object. BigInt was introduced in
TypeScript 3.2 and allows programs to use numbers greater than 2^53.
While I can’t recall a time where numbers that large were necessary, it is nice
to know that the option exists in TypeScript. There are certainly potential
uses for such large numbers, such as the Fibonacci function that Lotanna uses
to demonstrate the feature. Should I ever need to write a program that uses
numbers of this magnitude, I would certainly take advantage of TypeScript’s BigInt
feature.
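
A quick sketch of the feature (assuming a tsconfig target of ES2020 or later, which BigInt requires): bigint literals get an n suffix, and they stay exact past the point where ordinary numbers lose precision.

```typescript
const overLimit: bigint = 2n ** 53n + 1n; // exact, unlike a plain number at this size

// A Fibonacci function in the spirit of Lotanna's example (this version is my own)
function fibonacci(n: number): bigint {
  let a = 0n;
  let b = 1n;
  for (let i = 0; i < n; i++) {
    [a, b] = [b, a + b];
  }
  return a;
}

console.log(fibonacci(100)); // 354224848179261915075n
```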

A third TypeScript feature that I expect to be useful is the
const assertion. Since TypeScript 3.4, it has been possible to declare
arrays and objects as read-only constants using the syntax demonstrated by the
example code in the blog post. Though the syntax is different, this feature
seems to function just like C’s const and Java’s final keywords.
Having used these features frequently, I expect TypeScript’s const
feature to be useful in many programs to ensure that the values of certain variables
cannot be changed.
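
A minimal sketch of a const assertion (TypeScript 3.4+); the names here are my own, not the blog's:

```typescript
// "as const" turns the array into a readonly tuple of literal types
const sizes = ["small", "medium", "large"] as const;

// sizes.push("extra-large"); // compile error: push does not exist on a readonly tuple
// sizes[0] = "tiny";         // compile error: the index is readonly

// A handy side effect: a union type derived from the readonly values
type Size = (typeof sizes)[number]; // "small" | "medium" | "large"
```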

From the blog CS@Worcester – Computer Science with Kyle Q by kylequad and used with permission of the author. All other rights reserved by the author.

TypeScript/JavaScript

Ever wanted to build a website and add interactive elements to the page?
With JavaScript this can all be accomplished.
JavaScript is a programming language used with websites to
make them sleeker, better designed, and better overall. JavaScript can
also be used to create games and apps. These games are browser
based and let the creator take advantage of the useful tools
that JavaScript provides. Mobile apps are starting to come around to JavaScript
as well. However, JavaScript is not always the easiest language to learn or pick up. That's
where TypeScript comes in handy.

With TypeScript, users are able to code with a far more interactive
interface. Once the TypeScript is written, the TypeScript compiler (tsc)
compiles the code and converts it into JavaScript.
This allows for a much easier path to JavaScript because of the ease of TypeScript.
TypeScript is relatively new, first appearing in 2012 and designed by
Anders Hejlsberg. TypeScript has many advantages over JavaScript.
For starters, TypeScript, as mentioned before, is far easier to code in, and the
user does not have to worry about converting it into usable JavaScript by hand. TypeScript
is also open-source, meaning that the software for its development can be used
by anyone, which allows for constant improvements to be made to it.
TypeScript also comes with development tools built in. With these tools,
debugging becomes far easier and different types of testing become possible.
Another great feature of TypeScript is its support for rest
parameters, which let a function accept a variable number of arguments
(despite the name, these are unrelated to the REST software design style I have
mentioned previously in this blog). Another benefit of the TypeScript language is that it is an OOP
language, or object-oriented programming language. This helps TypeScript code
stay easy to maintain and keep clean.
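
As a small illustration of my own (not from the Wikipedia article), here is a typed function using a rest parameter; running tsc on this file emits plain JavaScript that any browser can run.

```typescript
// The rest parameter collects any number of arguments into a typed array
function sum(...values: number[]): number {
  return values.reduce((total, value) => total + value, 0);
}

console.log(sum(1, 2, 3)); // 6
// console.log(sum(1, "2")); // compile error: "2" is not a number
```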

TypeScript is essentially the next phase of JavaScript
because TypeScript is a superset of JavaScript. This means that
every feature JavaScript normally has is also available in TypeScript. The only
difference is that TypeScript, as mentioned before, has far more additional
benefits. When it comes down to choosing between JavaScript
and TypeScript, it's a personal choice. However, since TypeScript is newer,
it has not had as much time to spread. Still, I believe the obvious choice
is to switch over to TypeScript, because otherwise you are missing out on many great
features.

https://en.wikipedia.org/wiki/TypeScript

From the blog CS@Worcester – Journey Through Technology by krothermich and used with permission of the author. All other rights reserved by the author.

Path Testing

Testing comes in many forms, and all are important in their own regard. Path testing is a type of testing used to determine that each path a program can take will produce the correct output. By doing this you are able to determine whether there is a fault in the code and where that fault is produced. With path testing you can create decision-to-decision paths, or DD paths, and use them to build a program graph. A DD path is a path of nodes in a program that satisfies certain conditions, such as having one in-degree and one out-degree, meaning there is only one flow of execution that enters and then leaves it. Once these nodes are created, they can be assembled into a program graph. These graphs show the flow of the program and help determine how to test your code with path testing.
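
To make that concrete, here is a toy function of my own with its program-graph nodes marked in comments; path testing would exercise each path through the graph.

```typescript
function classify(n: number): string {
  let label: string;        // node 1: entry
  if (n < 0) {              // node 2: decision
    label = "negative";     // node 3: true branch
  } else {
    label = "non-negative"; // node 4: false branch
  }
  return label;             // node 5: the branches rejoin and the function exits
}

// The two paths to test are 1-2-3-5 and 1-2-4-5; a fault on either branch
// shows up as a wrong output for the inputs that drive that path.
```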

From the blog CS@Worcester – Journey Through Technology by krothermich and used with permission of the author. All other rights reserved by the author.

Clarifying Edge Testing


I happened upon a very
interesting blog recently while searching for more examples of edge case testing,
wherein the author provides a set of common edge cases. While they
use the same or similar names as those we've seen lately in class, some
have new meanings here. In addition, a name may suggest an actual value tied to it
– such as zero, one, and two – but its meaning is defined differently.

In order, they are as follows:

  • Zero “represents any form of null input”, including the actual value zero but also null, an empty List or Array, et cetera. This, of course, is meant to test the capability of a program to deal with input that isn’t properly usable by it.
  • One represents what we have referred to as nominal up to this point, meaning a valid, normal input that should test proper functionality of a program under ideal conditions.
  • Two does not correspond to the value two, but rather refers to testing the same code twice, usually in sequence, to see how repeated executions affect a system.
  • Two to Max-1 is most like One, in that it also represents a nominal value, but unlike One it should not be the absolute simplest input needed to function; it should be an average, possibly complicated, use case.
  • Max is fairly self-explanatory, used to test the upper limit accepted by a program, and can sometimes be an extreme value. As such, it can test the limits of the program under incredible load.
  • Max + 1 is used to ensure that limits placed on an application are working and that anything outside the valid range is rejected in a reasonable manner.

These most closely correspond to Normal Boundary Value Testing as we have covered it in class, but they are each less abstract than those counterparts. They provide insight into what these values look like in actual QA testing, as well as the expectations around using them. Two, for example, does not have an actual value associated with it, but rather refers to a testing orthodoxy outlined above. One example of a test in this vein is found in another blog, in which the author proposes the edge case of the same user trying to log in from two different computers. I believe that, between these two blogs, these more concrete examples of the testing procedure give a clearer picture of the concepts we have covered.
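
As a rough illustration of my own (the function and values are hypothetical, not from either blog), here is how a few of those categories might translate into inputs for a simple average() function:

```typescript
function average(values: number[]): number {
  if (values.length === 0) {
    throw new Error("no values to average"); // reject null-like input cleanly
  }
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// Zero: a null-like input, here an empty array, should be rejected gracefully
// average([]); // throws

// One: a single nominal input under ideal conditions
console.log(average([5])); // 5

// Two: run the same call twice to confirm repeated executions behave identically
console.log(average([5]) === average([5])); // true

// Max-style input: a very large array to probe behavior under heavy load
console.log(average(new Array(1_000_000).fill(1))); // 1
```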

Sources

A Beginner’s Guide to Testing: Error Handling Edge Cases
Build Strong Edge Test Cases

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

Template Design Pattern

For my blog this week I chose to write about the template design pattern. I picked it because I was recently learning about what design patterns are, so I was googling and came across a list of different ones. I liked the template design pattern because it seemed useful and fairly simple, so I picked it.

The article first talks about how the template design pattern is used for the base of an algorithm whose steps always happen in the same order, and where some (or none) of the steps are the same most of the time. Then they give an example of an algorithm to build a house, where the steps are always to build the foundation first, then the pillars, next the walls, and last the windows. They also said that these have to be built in the same order every time. Then they described the first class, an abstract template whose first method is a method to build the house, which itself calls all of the other steps in the process, and those steps are abstract methods. They also said that since the method to build the house is the algorithm itself, it should be final so it can't be changed. They also said you can leave the step methods unimplemented, or give them default implementations if they are mostly going to be the same. Then, for each type of house, you create a new class that extends the template class and overrides the unimplemented methods or any methods you need to change.
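
The article's own example is written in Java, but the idea sketches out the same way in TypeScript (TypeScript has no final keyword, so the fixed ordering is only by convention here):

```typescript
abstract class HouseTemplate {
  // The template method: it fixes the order of the steps.
  buildHouse(): void {
    this.buildFoundation();
    this.buildPillars();
    this.buildWalls();
    this.buildWindows();
  }

  // Steps that are usually the same get a default implementation...
  protected buildFoundation(): void {
    console.log("Building the foundation");
  }
  protected buildWindows(): void {
    console.log("Installing windows");
  }

  // ...while steps that vary are left abstract for subclasses to fill in.
  protected abstract buildPillars(): void;
  protected abstract buildWalls(): void;
}

class WoodenHouse extends HouseTemplate {
  protected buildPillars(): void {
    console.log("Building wooden pillars");
  }
  protected buildWalls(): void {
    console.log("Building wooden walls");
  }
}

new WoodenHouse().buildHouse(); // always runs the four steps in the same order
```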

This design pattern helps with the repetition of having to type out the algorithm over and over for each subsequent class you make. It also lets you reuse some of the steps if they never change. It does have some disadvantages, though, such as the fact that you can't change the order of the main algorithm and that you often still have to override most of the methods. So even if several of the implementations use the same step, you may still have to rewrite it anyway. I learned a lot from this article. First of all, I learned a completely new type of design pattern, and I think this pattern is very useful, especially with algorithms, as they said. I also learned, just by looking at the list on the website, that there are far more types of design patterns than I originally thought.

Template Method Design Pattern in Java

From the blog CS@Worcester – Tim’s Blog by therbsty and used with permission of the author. All other rights reserved by the author.

Getting a solid grasp of SOLID

For this week’s blog, I have decided to go over the SOLID set of design principles. The blog article “SOLID design principles: Building stable and flexible systems” by Anna Monus describes SOLID and gives solid examples of each design principle with code and UML diagrams.

Single responsibility

Each class should only have a
single responsibility, such as handling data or executing logic. While this
increases the number of classes in a program, it allows easier
modification of the code and improves code clarity.
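
A small sketch of the idea with hypothetical class names: one class only stores data, the other only presents it, so each has a single reason to change.

```typescript
class OrderRepository {
  private orders: string[] = [];

  save(order: string): void {
    this.orders.push(order);
  }

  all(): string[] {
    return this.orders;
  }
}

class OrderReport {
  print(repository: OrderRepository): void {
    for (const order of repository.all()) {
      console.log(order);
    }
  }
}
```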

Open/Closed

Classes should be open to extension
and closed to modification. This means that new features should be introduced
through extension and not through modifying existing code. The open/closed
principle allows programs to expand with new features without breaking the old.
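
A minimal sketch (not the article's example): new shapes are added by extending the abstraction, and the code that sums areas never has to be edited.

```typescript
abstract class Shape {
  abstract area(): number;
}

class Circle extends Shape {
  constructor(private radius: number) { super(); }
  area(): number { return Math.PI * this.radius ** 2; }
}

class Square extends Shape {
  constructor(private side: number) { super(); }
  area(): number { return this.side * this.side; }
}

// Closed to modification: adding a Triangle class later requires no change here.
function totalArea(shapes: Shape[]): number {
  return shapes.reduce((sum, shape) => sum + shape.area(), 0);
}
```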

Liskov substitution

Named for its creator Barbara
Liskov, the Liskov substitution principle states that a subclass should be able
to replace its superclass without breaking the properties of the program. This
means that a subclass shouldn’t change its superclass’ characteristics, such as
return types. Following this principle will allow anything that depends on the
parent class to use the subclass as well.
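
A tiny sketch of my own: the subclass keeps the superclass's signature and expectations, so anything written against Bird also works with Duck.

```typescript
class Bird {
  makeSound(): string {
    return "chirp";
  }
}

class Duck extends Bird {
  // Same signature and same kind of result, so no caller is surprised
  makeSound(): string {
    return "quack";
  }
}

function describe(bird: Bird): string {
  return `This bird says ${bird.makeSound()}`;
}

console.log(describe(new Bird())); // "This bird says chirp"
console.log(describe(new Duck())); // "This bird says quack"
```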

Interface segregation

Interface segregation states that
classes should not have to implement unused methods from an interface. The
solution is to divide the interface so that classes can implement only the
desired methods. I found this principle easier to understand from a visual
example, and the article's UML diagram for interface segregation was useful
for this.
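
Sketched with hypothetical interfaces: splitting one bulky interface into two small ones means a simple printer is never forced to stub out a scan() it cannot support.

```typescript
interface Printable {
  print(): void;
}

interface Scannable {
  scan(): void;
}

class SimplePrinter implements Printable {
  print(): void {
    console.log("printing");
  }
  // No empty scan() method is forced onto this class.
}

class MultiFunctionDevice implements Printable, Scannable {
  print(): void {
    console.log("printing");
  }
  scan(): void {
    console.log("scanning");
  }
}
```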

Dependency inversion

High and low-level modules should
depend on abstractions. It is common to think that high-level classes should
depend on low-level classes, but this makes expanding the code difficult so
it’s best to have a level of abstraction between the two levels. The article’s
UML example for this principle shows how the abstraction level allows for easy
expansion.
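
A minimal sketch with names of my own: the high-level Report depends only on the DataSource abstraction, so a new low-level storage class can be swapped in without touching Report.

```typescript
interface DataSource {
  read(): string[];
}

class FileSource implements DataSource {
  read(): string[] {
    return ["record loaded from a file"];
  }
}

class DatabaseSource implements DataSource {
  read(): string[] {
    return ["record loaded from a database"];
  }
}

class Report {
  constructor(private source: DataSource) {}

  render(): void {
    this.source.read().forEach((record) => console.log(record));
  }
}

new Report(new FileSource()).render();
new Report(new DatabaseSource()).render();
```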

The SOLID principles are important for code architecture, as
they make code expansion simple and easy to understand. I have found myself
applying SOLID principles to a project I have been playing with, a simple GUI
animation. Originally, I had drawn objects handling their own draw and movement
methods, but by using the single responsibility principle I separated the
movement-based code into its own class and used composition between classes. This
allowed me to reuse the movement code (which contains position,
velocity, and acceleration values and methods) for all the different objects
that I make. I also made use of the O, L, and D of SOLID to handle the drawn
object hierarchy, allowing my frame class to depend on an abstraction of all my
drawn objects. I use a loop to cycle through all drawn objects in a linked list
whose element type is an abstraction of all drawn objects. I can tell that the
structure of the code has made adding new functionality easy and intuitive.

Article Link: https://raygun.com/blog/solid-design-principles/

From the blog CS@Worcester – D’s Comp Sci Blog by dlivengood and used with permission of the author. All other rights reserved by the author.

Take a REST

What is the RESTful architecture style and what is it good
for? Well, I am here to break that down for you. REST stands for representational
state transfer, and it is an architectural style that can be implemented in order to help
with abstracting resources. In order for a system to be considered RESTful,
it must follow several guiding principles. The first principle is client-server,
which deals with the separation of the user interface and data storage. The next principle
is that it must be stateless, meaning each request must carry all the information
needed to handle it, and client state cannot be stored on the server. The next principle of REST
is that data must be labeled either cacheable or non-cacheable. If the
information is cacheable, it can be stored in a client cache. The fourth principle
deals with having a uniform interface. The next principle is the layered system,
which allows a hierarchy of layers to implement constraints.

Now that the principles have been laid out, the next main
part of REST is the information and data it deals with. A resource in REST is
an abstraction of information. Resources can be anything that can be named:
documents, pictures, and so on. REST then uses resource identifiers to
find the resource that is needed. Resources have resource representations,
which are snapshots in time of the resource's state containing the data, metadata, and hypermedia
links. REST also has resource methods, which are used for working with the data.
Many people associate REST with the HTTP methods GET/PUT/POST/DELETE; however,
since REST has a uniform interface, the user is able to decide which
resource methods to use. That said, with REST you are able to utilize these HTTP
methods to work with resources. The most common implementation has GET
to retrieve resources, PUT to change or update a resource, and POST to create a
resource. DELETE, of course, is used to delete resources.
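
A quick sketch of that common mapping against a hypothetical /orders resource (the endpoints and bodies here are placeholders of my own):

```typescript
async function commonMapping() {
  await fetch("/orders");                                    // GET: retrieve resources
  await fetch("/orders", { method: "POST", body: "{}" });    // POST: create a resource
  await fetch("/orders/42", { method: "PUT", body: "{}" });  // PUT: change or update a resource
  await fetch("/orders/42", { method: "DELETE" });           // DELETE: delete a resource
}
```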

RESTful APIs are very beneficial when working with cloud
computing and the web. Because REST is stateless and does not store any client
information between requests, it scales well. This also
means that if anything fails, it is easy to rework, since nothing was
stored on the server. This makes it particularly useful for websites as well, because
a user can freely interact with the website without the server storing session
information within it. If you want to read more on REST and RESTful APIs, look
into these two websites.

https://restfulapi.net/

https://searchapparchitecture.techtarget.com/definition/RESTful-API

From the blog CS@Worcester – Journey Through Technology by krothermich and used with permission of the author. All other rights reserved by the author.

Test Driven Development: Formal Trial-and-Error


Test Driven Development (TDD), like many concepts in
Computer Science, is very familiar to even newer programming students, but they
lack the vocabulary to formally describe it. In this instance, however, they
could probably informally name it: trial-and-error. Yes, very much like the
social sciences, computer science academics love giving existing concepts fancy
names. If we were to humor them, they would describe it in five-ish steps:

  1. Add a test
  2. Run tests, check for failures
  3. Change code to address failures / add another test
  4. Run tests again, refactor code
  5. Repeat
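
As a toy pass through those steps (my own example, using only Node's built-in assert so no test framework is needed): the test below is written first and fails because add() does not exist yet, then the simplest code is added to make it pass, and refactoring can follow once the tests stay green.

```typescript
import assert from "node:assert";

// Step 3: the simplest code that addresses the failing test
function add(a: number, b: number): number {
  return a + b;
}

// Steps 1, 2 and 4: the tests, run before and after the change
assert.strictEqual(add(2, 3), 5);
assert.strictEqual(add(-1, 1), 0);
console.log("all tests passed");
```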

The TDD process comes
with some assumptions as well, one being that you are not building the system
under test while writing the tests; the tests describe what a functionally complete
project should do. As well, this technique is used to verify that code achieves some
valid outcome outlined for it, with a newly written test that fails being part of the
normal workflow, rather than a “successful” test being one that reveals a defect, as in
traditional testing. Related as well to our most recent classwork, TDD should achieve complete
coverage by testing every single line of code – which in the parlance of said
classwork would be complete node and edge coverage.

Additionally, TDD has
different levels – two, to be more precise: Acceptance TDD and Developer TDD. The
first, ATDD, involves creating a test to fulfill the specifications of the
program and correcting the program as necessary to allow it to pass this test.
This form of testing is also known as Behavior Driven Development. The latter, DTDD, is
usually referred to as just TDD and involves writing tests and then the code to
pass them in order, as mentioned before, to test the functionality of all aspects of a
program.

As it relates to our coursework, the second assignment involved writing tests to check functionality based on the project specifications. While we modified the given program code very little, if at all, we used the iterative process of writing and re-writing tests in order to verify the correct functioning of whatever method or feature we were hoping to test. In this way, the concept is very simple, though it remains to be seen whether it stays that way given different code to test.

Sources:

Guru99 – Test-Driven Development

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

Follow the Yellow Brick Road

Path testing piqued my interest when it was discussed in my CS-443 Software Testing class, so I decided to dig deeper into the topic and see what others said about the testing method. I found an article on GeeksforGeeks that focused on path testing. This type of testing focuses on the paths through the code itself, calculating complexity with McCabe's cyclomatic complexity formula, V(G) = E - N + 2P, where E is the number of edges in the control flow graph, N is the number of vertices (nodes), and P is the program factor, the number of connected components. The advantages of path testing are that it reduces redundant tests and focuses on the logic of the program.
Path testing focuses on the specific program under test and creates the most appropriate test cases for that program, which in turn allows the best possible tests to be performed. Understanding code as a node graph lets the tester accurately understand the program, what needs to be tested, and what can be tested individually or as a group. I really like the way that path testing views code because it is easy to understand and follow. Path testing, to me, is a directed form of testing that most people already do on a much simpler scale without realizing it, and because of its complexity calculations, it has more concrete evidence to support the style of testing.
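
As a small worked example of my own (not from the GeeksforGeeks article), consider a function with one loop; its control flow graph has N = 4 nodes and E = 4 edges, and with P = 1 the formula gives E - N + 2P = 4 - 4 + 2 = 2, matching its two basis paths (skip the loop, or take it at least once).

```typescript
function countDown(n: number): number {
  let steps = 0;    // node 1: entry
  while (n > 0) {   // node 2: loop test
    n--;            // node 3: loop body (edge back to node 2)
    steps++;
  }
  return steps;     // node 4: exit
}
// Edges: 1→2, 2→3, 3→2, 2→4, so E = 4, N = 4, P = 1 and V(G) = 2.
```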

Link to Article Referenced: https://www.geeksforgeeks.org/path-testing/

From the blog CS@Worcester – Tyler Quist’s CS Blog by Tyler Quist and used with permission of the author. All other rights reserved by the author.