Limits of the Saga Pattern: Uwe Friedrichsen

CS@Worcester
 

CS-343
 

Week-15

Why I chose this post

This post intrigued me because it talked about microservices, and since we had worked with the microservices that make up Thea’s Pantry, it was a great opportunity to learn something new about them. The saga pattern was also a concept that I had not heard of or implemented before. Based on earlier posts of Friedrichsen’s that I had read, I was sure he would give good and relevant examples for understanding the topic.

Summary of the post

Friedrichsen begins by introducing the saga pattern. He describes it as: “if you have a transaction that spans multiple services and their databases, do not use distributed transactions. Use the Saga pattern instead: Call the affected services in a row – typically by using events for activity propagation.”

The author then explains how the saga pattern can run into limitations within a microservice system due to technical errors. Although the saga pattern can handle rollbacks, Friedrichsen points out that if a rollback itself fails, it can lead to larger errors which cannot be resolved. The main technique for handling technical errors is to keep a copy of the changes that were to be applied, go back to the last checkpoint by loading the last snapshot, and then retry applying the changes while avoiding the condition that initially led to the technical error.

The saga pattern’s strength is in recovering from business errors rather than technical errors. Friedrichsen states that recovering from technical errors involves a two-layer approach, where a lower technical layer is hidden from the upper business layer. Friedrichsen also cautions that if the saga pattern is used too often, there might be an existing design smell within the system.
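To make the rollback idea concrete, here is a minimal sketch (my own example, not from Friedrichsen’s post; the step names are hypothetical) of a saga that calls steps in a row and runs compensating actions in reverse order when one step hits a business error:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal saga sketch: each step has an action and a compensating action.
// If a step fails with a business error, the completed steps are
// compensated in reverse order. Step names are hypothetical.
public class OrderSaga {
    interface Step {
        boolean execute();   // false signals a business error
        void compensate();   // undoes this step's effect
    }

    static boolean run(Step... steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            if (step.execute()) {
                completed.push(step);
            } else {
                // Business error: compensate completed steps in reverse order.
                while (!completed.isEmpty()) completed.pop().compensate();
                return false;
            }
        }
        return true;
    }

    // A tiny demo: reserving stock succeeds, charging the card fails,
    // so the reservation is compensated.
    static String demo() {
        StringBuilder log = new StringBuilder();
        Step reserveStock = new Step() {
            public boolean execute() { log.append("reserve;"); return true; }
            public void compensate() { log.append("release;"); }
        };
        Step chargeCard = new Step() {
            public boolean execute() { log.append("charge-fail;"); return false; }
            public void compensate() { log.append("refund;"); }
        };
        return run(reserveStock, chargeCard) + " " + log;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // false reserve;charge-fail;release;
    }
}
```

Note that this sketch only covers the business-error path; as the post explains, a failure of the compensating action itself is exactly the technical-error case the saga pattern handles poorly.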

Reflection and Application

I have learned how important it is to consider both the limitations and the advantages of using a given design pattern. In this case, the strength of the saga pattern is in handling business errors, while its downside is in handling technical errors. Frequent application of a particular design can also be a sign of an underlying problem that requires diagnosis. I also learned how the business and technical layers can interact and work together, and how to hide the functionality of the technical layer from the business layer.

I will apply the concepts I learned from this post as I design systems that involve microservices. I will be careful to examine the pros and cons of different technologies before implementing them and relying on them to carry out roles within the entire system.

From the blog De Arrow's Webpage by Samuel Njuguna and used with permission of the author. All other rights reserved by the author.

Blog Week 14 (Token)- Abstraction and Composition

Two fundamental aspects of coding, abstraction and composition, are discussed thoroughly in this blog, along with the overall impact these processes can have on code as a whole. We discussed both toward the beginning of our classes, including the roles they play not only in writing better code but also in understanding and laying out its structure.

At first I didn’t really understand how reducing a problem to its most basic form could help when I need code to perform very specific actions and work a certain way. However, after using these processes to simplify a problem and then building up from those basic models, I found I could use basic code to solve my more advanced problems. This opened up my thought process when it came to writing code: I could think of all the previous projects where I had to create multiple objects and set attributes for each specific one, and now I could do the same thing seamlessly on a larger scale by reusing other basic code.

Abstraction is the process of reducing code to all but its most important details and leaving the extra out. It is important because it allows the most basic process to be worked on first, and subsequent work can then be delegated to more specialized versions of the problem. For example, rather than making multiple functions for different things, we can make a basic function that can be implemented repeatedly and reused. In the duck project, we looked at different models of classes and noticed that certain ducks needed specific flying or squeaking actions. Rather than making multiple classes for the many different types of ducks, we made a basic duck class and created specializations of it in order to improve the overall structure and reduce clutter. Then, using composition, you can make connections between the different objects in order to share information. In the duck project, we made different types of ducks connected to the main Duck class that held all the shared parameters, and then made connections for the squeak and fly behaviors.
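As a rough sketch of the duck example described above (the exact class and method names from the project may differ), the base Duck class can be composed with interchangeable fly and squeak behaviors, so new duck types reuse the basics instead of duplicating them:

```java
// Composition sketch of the duck example: a Duck holds interchangeable
// fly and squeak behaviors instead of hard-coding them per subclass.
// Names are illustrative, not the project's exact identifiers.
interface FlyBehavior { String fly(); }
interface SqueakBehavior { String squeak(); }

class Duck {
    private final FlyBehavior flyBehavior;
    private final SqueakBehavior squeakBehavior;

    Duck(FlyBehavior f, SqueakBehavior s) {
        this.flyBehavior = f;
        this.squeakBehavior = s;
    }

    String performFly() { return flyBehavior.fly(); }
    String performSqueak() { return squeakBehavior.squeak(); }
}

public class DuckDemo {
    public static void main(String[] args) {
        // A mallard flies with wings; a rubber duck cannot fly but squeaks.
        Duck mallard = new Duck(() -> "flies with wings", () -> "quack");
        Duck rubberDuck = new Duck(() -> "cannot fly", () -> "squeak");
        System.out.println(mallard.performFly());       // flies with wings
        System.out.println(rubberDuck.performSqueak()); // squeak
    }
}
```

The design choice here is exactly the one the blog describes: one basic Duck class, with the variation pushed into small reusable behavior objects rather than a new subclass per duck.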

The writer focuses on some key traits of good abstractions: they should be simple, concise, and reusable. These are the things to look for when you want to simplify the work you do.

Elliott, Eric. “Abstraction & Composition.” Medium, JavaScript Scene, 28 May 2020, https://medium.com/javascript-scene/abstraction-composition-cb2849d5bdd6.

From the blog cs@worcester – Marels Blog by mbeqo and used with permission of the author. All other rights reserved by the author.

Concurrency

This week I learned about concurrency in software. I read “Concurrent Programming Introduction” by Gowthamy Vaseekaran. Vaseekaran explains what concurrency is in programming as well as its positives and negatives. Overall it was an interesting post to read, and I think it gave me a better understanding of how computers work.

Vaseekaran starts by explaining that concurrency is the ability to run several programs or parts of a program in parallel. This can be a huge benefit for performing a time-consuming task that can be run asynchronously or in parallel. Vaseekaran then goes on to explain the reasons that led to the development of operating systems that allowed multiple programs to execute at the same time.

These factors are resource utilization, fairness, and convenience. Resource utilization matters because when some programs have to wait for external operations, there is downtime that could be used to let another program run. Fairness means multiple users and programs have an equal claim on the computer’s resources; it is more beneficial to let them share the computer through finer-grained time slicing than to let one program run until it is done and then start the next one.

The next thing Vaseekaran brings up is threads. Threads are lightweight sequences of executed statements, each with its own program counter, stack, and local variables. Threads are used to run background or asynchronous tasks; they increase the responsiveness of GUI applications and take advantage of multiprocessor systems. Java uses at least one thread when running. Threads help Java run more smoothly, but there are risks. The main risk arises when there are shared variables or resources. Deadlocks can also happen when multiple threads get stuck waiting for each other.
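As a small illustration of the shared-variable risk mentioned above (my own example, not Vaseekaran’s), two Java threads can safely increment a shared counter when the increment is made atomic with `synchronized`; without that keyword, increments could be lost to a race:

```java
// Two threads increment a shared counter. The synchronized methods make
// each increment atomic, so the final count is deterministic; removing
// synchronization would make lost updates possible.
public class CounterDemo {
    private int count = 0;

    synchronized void increment() { count++; }
    synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        CounterDemo counter = new CounterDemo();
        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) counter.increment();
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();   // wait for both threads to finish
        System.out.println(counter.get()); // 20000
    }
}
```

Each thread runs 10,000 increments, and because every increment is synchronized, the total is reliably 20,000; this is the kind of shared-resource coordination the post says Java developers have to manage.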

This was a good amount of information to learn, and I think Vaseekaran did a great job explaining what concurrency is along with its ups and downs. Starting with the reasons we use it and then explaining how it is useful for a programming language like Java was a good way to make it easy to understand how it is used today in software development. I think it would be interesting to learn more about how threads can be used in Java: Vaseekaran’s post was useful for understanding concurrency and what threads are, but how exactly a Java developer implements them was covered only briefly. I would like to know more about how that works, but this was a good introduction to the topic and an easy read. I would definitely recommend Vaseekaran’s post to anyone trying to learn more about how software is run and how to make it efficient.

Link: https://gowthamy.medium.com/concurrent-programming-introduction-1b6eac31aa66

From the blog CS@Worcester – Ryan Klenk's Blog by Ryan Klenk and used with permission of the author. All other rights reserved by the author.

API eXperience (AX) and Concept Spill: Uwe Friedrichsen

CS@Worcester
 

CS-343
 

Week-15

## Why I chose this post
This post caught my attention as I was scrolling through Uwe Friedrichsen’s list of blogs. It had API in its subtitle, and since we had covered RESTful APIs in class, I thought this would be a great opportunity to see his approach to API design. After scanning through the blog I saw the term concept spill, which I had not heard before, and I thought it would be interesting to learn something new about APIs through the lens of an experienced IT specialist.

## Summary of the post
The author begins by talking about how taking technologies and fitting them into the entire system might work easily enough, but we end up missing the important aspect of fully fulfilling users’ needs. He introduces the concept of AX, which refers to API eXperience: how well a programmer is able to work with an API design to fulfill the requirements of its users. Similar to UX in application design, good AX can breed good results on the operational and functional side of a system or system software. He then introduces concept spill, one of the factors that can reduce a programmer’s AX.

He defines concept spill as a situation where, “in order to use another service to solve your actual problem, you first need to understand its internal concepts.” He gives a good example: you have to understand a city’s transport system – zones, bus and train stops, and so on – when you want to move from one point to another in a new city. You are forced to understand the city’s layout before you can solve your initial problem of getting to a particular place within the city; that is concept spill.

Concept spill pours over into the IT world in API design, where a programmer is forced to learn how the creators of an API came up with its structure and inner workings on top of the issue he is trying to solve. How do we solve the problem of concept spill? At the root of avoiding it is understanding the users’ needs and designing your API to cater to them. An API designed without the user’s context or story in mind will lead to poor operation of the API and thus further complications. Understand the common uses of the API and base your design on those uses.
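A hypothetical sketch of this advice, reusing the post’s transport metaphor (all names are invented for illustration): the first interface forces callers to learn internal concepts such as zones and stop IDs, while the second is designed around the question the user actually has:

```java
// Leaky API: the caller must first understand zones and stop identifiers
// (internal concepts) before they can answer their real question.
interface TransportInternals {
    int zoneOf(String stopName);
    String cheapestTicket(int fromZone, int toZone);
}

// User-centered API: it answers the user's actual question directly.
interface TripPlanner {
    String ticketFor(String from, String to);
}

public class ConceptSpillDemo {
    public static void main(String[] args) {
        TransportInternals internals = new TransportInternals() {
            public int zoneOf(String stopName) { return stopName.startsWith("C") ? 1 : 2; }
            public String cheapestTicket(int a, int b) { return a == b ? "single-zone" : "multi-zone"; }
        };
        // The user-centered API wraps the internal concepts, so callers
        // never need to learn about zones at all.
        TripPlanner planner = (from, to) ->
            internals.cheapestTicket(internals.zoneOf(from), internals.zoneOf(to));
        System.out.println(planner.ticketFor("Central", "Airport")); // multi-zone
    }
}
```

The point of the sketch is that the internal concepts do not disappear; they are simply kept behind an interface shaped by the user’s story, which is what reduces concept spill.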

## Reflection and Application
The blog post offered good advice on how to make an API more user-based, which makes for good design and acceptance by developers. As I look forward to working on my software capstone next semester, I think this will be great to keep in mind as we work on improving Thea’s Pantry.

From the blog De Arrow's Webpage by Samuel Njuguna and used with permission of the author. All other rights reserved by the author.

Thoughts on Front-End Development

I often find it hard to relate back to other courses and easily forget about what I have previously learned. Luckily, my memory is not as bad as I think it is and it eventually comes back to me, especially in the case where I have to come back to a skill/technique regularly. I find that with Computer Science the evolution of knowledge is one that is clear yet has a depth of knowledge that intertwines one course with the next.

On one hand, you can have a course that is so specific to a niche of CS that it may be hard to see its relevance in another CS course; on the other hand, you may have a course that is seemingly so broad that it is hard to pinpoint how it may carry over.

I think that with any specific area of interest, as one continues with their education, the degree to which prior knowledge is necessary and relevant to learning a new topic only increases the further you progress.

All this is to say that last semester I took a cloud computing course and I remember that course being broad in its application of cloud computing. I wanted to look into the use of cloud computing in the context of software design and architecture. Secondly, after only getting a taste of front-end development in this course I wanted more and I wanted to solidify my understanding of the back-end and front-end in an attempt to satisfy a goal of mine described in a previous blog post.

Overall, I’m not seeing any major differences between implementing software through the cloud versus other options, aside from the vast benefits that cloud computing can offer. Those benefits range from storage, servers, and databases to software, networking, intelligence, and analytics. The blog begins by describing what cloud computing is, then goes into detail about what front-end and back-end cloud architecture are, along with cloud-based delivery.

I was then led to another blog, specifically about front-end development, as I was not satisfied with what the previous blog provided. This blog hooked me with its first line: “Front-end developers need to design sites that are engaging enough to nudge the target audience toward a conversion.” I find this idea very interesting because it starts to dive into the purpose of front-end development. In my next post I will discuss where I may see myself in the future and what role I might want to play in the tech industry. There is a point in the blog where the duties of a front-end developer are laid out, which made me wonder whether this is a niche in CS where I may see myself. In front-end development, I feel I might have a role in which I can use a variety of skills and techniques to develop myself.

https://www.clariontech.com/blog/cloud-computing-architecture-what-is-front-end-and-back-end

https://webflow.com/blog/front-end-development

From the blog CS WSU – Sovibol's Glass Case by Sovibol Keo and used with permission of the author. All other rights reserved by the author.

Resilience, the new paradigm of the 21st century: Uwe Friedrichsen

CS@Worcester
 

CS-343
 

Week-14

Why I chose this post

I chose this post because the author touches on an important topic, resilience, a term I have heard before in relation to IT fields but have never really taken the time to understand or learn how it is applied. Once again Uwe Friedrichsen gives relevant examples as he covers the topic, both within the IT sphere and outside it, making resilience easier to understand as a concept.

Summary of post

The author begins by defining resilience: “resilience means that a system can ideally withstand adverse external influences completely or at least recover from them quickly.” Adverse external influences can range from illness to overload situations, a pandemic, extreme weather, and so on.

Friedrichsen gives an example of how the COVID-19 pandemic affected German car manufacturers. Since they had found an efficient way of procuring car parts, they became dependent on chip makers in Taiwan. As a figure of speech he says that “they have put all their money on a single horse and expect that this horse will always be the first to cross the finish line. And their calculation already includes the prize money for the horse.” This shows how dependent car manufacturers had become on Taiwanese chip makers. When COVID hit, the supply chain was greatly affected.

The author then shows the delicate balance that exists between efficiency and resilience: a highly efficient system is also highly rigid. A balance must be struck by trading away some efficiency in order to maximize the resilience of a system. In the post-industrial world, the author says he has witnessed a lot of uncertainty, and full control of variance has become hard. We should therefore, as early as we can, embrace the fact that there might be adverse situations down the road.

Reflections and application

The blog offers a lot of important advice for the age we live in. Due to the inconsistent nature of systems and resources in today’s world, it is foolish to be single-minded; despite the security you may enjoy for a brief period, when disaster strikes it will be hard to recover.

In my personal life and even in my career, this will be an important principle to live by.

From the blog De Arrow's Webpage by Samuel Njuguna and used with permission of the author. All other rights reserved by the author.

The Non-Existence of ACID Consistency 2: Uwe Friedrichsen

CS@Worcester
 

CS-343
 

Week-14

Why I chose this post

I chose this post because I find that Uwe Friedrichsen’s insights are usually revealing and an accurate description of what I will encounter in my working life. This topic, ACID consistency, is also closely related to software architecture, as most systems nowadays work in unison due to their microservice structure. Above all, he gives real-life examples that support his claims in a logical and meaningful manner. His habit of developing an argument across more than one blog post is also great, as it slowly builds up claims with good evidence and examples.

Summary of post

Friedrichsen begins by introducing the concept of eventual consistency and compares it to strong consistency. He gives an example of a money transfer between two people and how it takes time for the money to reflect in the recipient’s account. Strong consistency would mean that the moment the money is transferred, it appears on the recipient’s end; eventual consistency, which is what we actually have, means the money reflects only after a period of time.
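The money-transfer example can be sketched as a toy simulation (my own simplification, not Friedrichsen’s code): the transfer is queued as an event, and the recipient’s balance only catches up once the event is processed:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Toy eventual-consistency sketch: the sender's balance changes
// immediately, but the recipient only sees the money once the queued
// transfer event is processed (e.g. by a background job).
public class EventualTransfer {
    static class Transfer {
        final String from, to; final int amount;
        Transfer(String from, String to, int amount) {
            this.from = from; this.to = to; this.amount = amount;
        }
    }

    final Map<String, Integer> balances = new HashMap<>();
    final Queue<Transfer> pending = new ArrayDeque<>();

    void send(String from, String to, int amount) {
        balances.merge(from, -amount, Integer::sum); // immediate on sender side
        pending.add(new Transfer(from, to, amount)); // recipient updated later
    }

    void settle() {
        while (!pending.isEmpty()) {
            Transfer t = pending.poll();
            balances.merge(t.to, t.amount, Integer::sum);
        }
    }

    int balance(String who) { return balances.getOrDefault(who, 0); }

    public static void main(String[] args) {
        EventualTransfer bank = new EventualTransfer();
        bank.balances.put("alice", 100);
        bank.send("alice", "bob", 40);
        System.out.println(bank.balance("bob")); // 0: not yet consistent
        bank.settle();
        System.out.println(bank.balance("bob")); // 40: eventually consistent
    }
}
```

Between `send` and `settle`, the system is in exactly the in-between state the post describes: the operation has happened, but not every view of the data agrees yet.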

The author gives another example of how strong consistency is non-existent. In this example, a clerk gets called by a customer who needs some information; the clerk retrieves the data needed to answer the question and gets it displayed on the screen. As he reads through the information, and before he answers the customer, the data changes, and the author poses a question: “Will the answer of the clerk be based on the current data? Or will the answer be based on the old data that was valid until some seconds ago but is invalid now?” This again shows the inconsistent nature of data, and therefore the non-existence of a strongly consistent system.

Reflections and application

The author makes a valid argument about the non-existence of strong consistency. I now understand the difference between strong and eventual consistency, and I realized that most systems today run on eventual consistency. We do not have a guarantee of an immediate outcome; it is only after some time that we eventually reach consistency.

This concept reminds me that we can never achieve perfection, and that progress matters more than perfection, not only in our careers but in our daily lives as well. So as I work on projects involving systems, I will focus first on having the individual parts communicate and then gradually improve their performance as time progresses.

From the blog De Arrow's Webpage by Samuel Njuguna and used with permission of the author. All other rights reserved by the author.

Overcoming Anti-Patterns

This week I encountered a blog about anti-patterns. As we have learned, design patterns are reusable solutions to common problems and give us developers a way to solve problems in a proven way, rather than reinventing the wheel every time a problem arises. Anti-patterns, on the other hand, are unhelpful or ineffective approaches to problem solving that can negatively impact the efficiency and effectiveness of our work.

Some common examples of anti-patterns include:

  1. The Golden Hammer occurs when a specific tool or approach is overused or applied to every problem. I can personally say that I’ve fallen into this trap: I would always use the same programming language or framework to write code and would come to a standstill, not knowing what to do next. Little did I know there were more suitable options that could’ve made my job easier and the end product more efficient.
  2. The God Class antipattern occurs when a single class in a software system becomes excessively large and complex, with too many responsibilities. I believe all developers, myself included, have at one point or another created a class with too many responsibilities and wondered why we had issues in our code. This also violates the Single Responsibility Principle, since each class should have only one key responsibility.
  3. The Big Ball of Mud antipattern is when a solution lacks a clear and flexible architecture. As a developer, I’ve encountered this antipattern, and it can be a major source of frustration and inefficiency. Working with a system that has become a “big ball of mud” can be extremely difficult, as it can be nearly impossible to understand how the different parts of the system fit together and what each component is responsible for. This makes it difficult to change our code, as it is unclear how those changes will impact other parts of it.
  4. The Copy and Paste Programming antipattern is where code is copied and pasted from other sources without proper understanding or modification. I believe every programmer has at one point found code they believed they could reuse from another program and placed it into their new one. The program may work, but it causes many bugs and makes it difficult to make changes later.
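As a small sketch of escaping the God Class antipattern (the class names are illustrative, not from the blog), each responsibility of an oversized class can be moved into its own small class, in line with the Single Responsibility Principle:

```java
// Before (condensed): one class that validates, persists, and notifies.
class OrderGodClass {
    boolean validate(String item) { return !item.isEmpty(); }
    String save(String item) { return "saved:" + item; }
    String email(String item) { return "emailed:" + item; }
}

// After: one key responsibility per class.
class OrderValidator {
    boolean validate(String item) { return !item.isEmpty(); }
}
class OrderRepository {
    String save(String item) { return "saved:" + item; }
}
class OrderNotifier {
    String email(String item) { return "emailed:" + item; }
}

public class SrpDemo {
    public static void main(String[] args) {
        OrderValidator v = new OrderValidator();
        OrderRepository r = new OrderRepository();
        OrderNotifier n = new OrderNotifier();
        String item = "book";
        if (v.validate(item)) {
            System.out.println(r.save(item));  // saved:book
            System.out.println(n.email(item)); // emailed:book
        }
    }
}
```

Each of the small classes can now change, be tested, or be replaced independently, which is exactly what the God Class version prevents.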

Overall, as important as design patterns are to follow, sometimes we will still fall into an antipattern. In my own experience, I have used anti-patterns in my code without realizing it. Now that I know how to recognize them, I’ll be able to avoid antipatterns going forward and leverage design patterns that help create more effective and efficient code. By doing so, anyone can better achieve their coding goals and improve the quality of their work.

https://medium.com/geekculture/anti-patterns-in-software-development-that-you-should-avoid-780841ce4f1d

From the blog CS@Worcester – Conner Moniz Blog by connermoniz1 and used with permission of the author. All other rights reserved by the author.

Blog Week 14- Good Software Technical Writing

One of the most relevant and important aspects of programming that I neglected for a while is commenting and proper technical writing. When I first started out, I figured I would just remember all the changes I made to my code and didn’t need the small notes between methods. Later on I began to understand their importance, when I was working with many different files that needed to work in tandem and couldn’t remember what each method I wrote did or how it worked in the system as a whole.

In this blog, the author goes over many different aspects of technical writing, from commenting on each method to adding context to the code overall. The biggest takeaway I got from it is that code without comments is worthless: by reading the documentation, you should be able to understand why the previous engineers made changes or added functionality to the code. This allows other developers to come in, quickly understand what is going on, and delete or insert sections of code in order to continue the development cycle.
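As an invented example of the kind of method-level documentation the blog argues for, the comment below explains why the method exists and what callers can expect, not just what the code literally does (the scenario and names are my own, not from the blog):

```java
public class InventoryDoc {
    /**
     * Returns how many units of an item can still be promised to customers.
     * Reserved units are excluded because they are already committed to
     * open orders; the result is clamped so callers never see a negative
     * availability.
     *
     * @param onHand   units physically in the warehouse
     * @param reserved units already promised to open orders
     * @return available units, never negative
     */
    static int availableToPromise(int onHand, int reserved) {
        return Math.max(0, onHand - reserved);
    }

    public static void main(String[] args) {
        System.out.println(availableToPromise(10, 3)); // 7
        System.out.println(availableToPromise(2, 5));  // 0, never negative
    }
}
```

A later developer reading only the Javadoc knows the intent (availability excludes committed stock, never negative) without reverse-engineering the arithmetic, which is the point the blog makes about documentation carrying the “why.”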

The writer goes on to show many different examples, one being a sequence diagram that gives a step-by-step explanation of the sequence of the systems in play, much like the different design architectures we discussed in a previous class, where it shows the link between the user and the database. The importance of this kind of writing is that it conveys the way the system is supposed to work together, so if another developer were to come along and look over the schema, they would understand the process and be able to work from it.

Oliveira, Vincent. “HOW TO WRITE Good Software Technical Documentation.” Medium, Medium, 15 June 2022, https://medium.com/@VincentOliveira/how-to-write-good-software-technical-documentation-41880a0e7814.

From the blog cs@worcester – Marels Blog by mbeqo and used with permission of the author. All other rights reserved by the author.

Some APInions on REST and GraphQL

Whenever you’re new to a thing, a comparative look at different tools can help you understand the problem by learning how each tool approaches a solution. As someone new to consuming and designing APIs for the web, I’m interested in understanding APIs by looking at the difference in approaches of the REST specification and the GraphQL query language. This post is based on Viktor Lukashov’s GraphQL vs. REST blog post, which explores some GraphQL basics from the perspective of a REST API user.

Priority: server-defined endpoints or client-defined queries

The largest difference mentioned by most sources is that a well-built REST API relies on extensive backend definitions of endpoints, while GraphQL puts the onus on the consumer to carefully query the correct data.

In REST, accessing multiple entities requires visiting an endpoint for each entity. These endpoints expose data through predefined parameters and responses. Meanwhile, GraphQL exposes a single endpoint while only returning data that corresponds to the consumer-defined query. This approach requires higher effort from the user, but allows them to construct tailored queries without the need for forethought from the API designer.
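A toy illustration of this difference (plain Java maps, not real REST or GraphQL libraries; the user record is invented): a REST-style endpoint returns a fixed response shape decided by the server, while a GraphQL-style query lets the client name exactly the fields it wants from a single entry point:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Contrast of the two styles over one in-memory "user" record.
public class QueryStyles {
    static final Map<String, String> USER = Map.of(
        "id", "42", "name", "Ada", "email", "ada@example.com");

    // REST-style: the server predefines the shape of GET /users/{id}.
    static Map<String, String> restGetUser() {
        return USER; // always every field
    }

    // GraphQL-style: the client names the fields it needs.
    static Map<String, String> graphqlQuery(String... fields) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String f : fields) {
            if (USER.containsKey(f)) result.put(f, USER.get(f));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(restGetUser());        // all fields, fixed shape
        System.out.println(graphqlQuery("name")); // only what was asked for
    }
}
```

The sketch deliberately strips away HTTP: the essential contrast is simply who decides the response shape, the server (REST) or the query writer (GraphQL).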

As a fan of query languages, I think this comparison is very favorable to GraphQL. For any interesting or useful dataset, a user exploring the data should have more ideas about how to observe it than its maintainers and gatekeepers will. Providing flexibility for query writers lets your interface be used in ways you can’t predict.

Implication: caching and performance

One upside of REST’s focus on a planned set of endpoints and parameters is that frequently expected responses can use HTTP caching to improve performance. Responses to GET requests will be cached automatically, reducing server requirements and potentially improving response speed for future identical requests.

In GraphQL, the query writer is responsible for using a unique identifier to generate useful cache data. That said, the consumer may be able to use single queries across multiple tables that would require more than one REST API call to access.

Relying on architecture over following best practices is probably the better way to make performance accessible, which is a point in favor of REST.

Consequence: rules and security

Another difference Viktor mentions is how developers implement security rules. REST’s reliance on endpoint–HTTP-method combinations means rules are typically set on that basis. In GraphQL, with its single endpoint, rules are instead set per field. This may mean a steeper learning curve, but it also provides more flexibility for making some attributes of an entity more available than others.

Conclusion: rigid or demanding

One recurring theme of this comparison is that REST APIs are built to be rigid, and another is that GraphQL requires higher effort from the client. This is how I would decide between the tools. If writing something that I expect to be querying frequently myself in new ways, I’d want the query freedom offered by GraphQL. If I wanted a small and fixed feature set, REST seems like the spec to follow.

From the blog CS@Worcester – Tasteful Glues by tastefulglues and used with permission of the author. All other rights reserved by the author.