Category Archives: Computer Science

Apprenticeship Patterns

Dave’s story introduces both motivation and fear: a fear that the next big thing might make all our efforts worthless, but also motivation to work hard at learning every new skill and language. We see many versions of this idea from researchers, like “Failure is merely an incentive to try a different approach” from Carol Dweck, or the call to “recognize the inadequacies in what you do and to seek out solutions” from Atul Gawande.

However, Etienne Wenger said it best with ‘situated learning’: the “best way to learn is to be in the same room with people who are trying to achieve some goal using the skills you wish to learn”. Situated learning is also the key to apprenticeship and to achieving mastery. A craftsman who is always working to find a better, smarter, faster way to accomplish his goals, using the connections between practitioners and the communication channels within and outside the team, is one who will enhance the skills of his apprentices and journeymen.

That said, situated learning is no small feat. It is a long and hard journey for an apprentice to become a journeyman and then a master. Learning new skills throughout one’s career is vital, but perfecting those skills requires a concrete way of learning how to learn, which is why the ‘Perpetual Learning’ pattern must be applied as early as possible in our career path.

The biggest mistake I have heard of graduates making is overestimating themselves. Coming out of university with a bachelor’s degree in computer science doesn’t mean we have four years of experience; it means we have just started, which is why the author writes in the very first paragraph that “this book is for people at the beginning of the journey.” To avoid this mistake, one must have accurate self-assessment skills. We must recognize how limited our knowledge is compared to the limitless treasure of knowledge available, and strive to acquire it by learning about other teams, organizations, journeymen, and master craftsmen. We can also acquire a great amount of knowledge and skill through the internet via text, audio, and video; however, we must recognize that the vast wisdom captured in the books of experienced software craftsmen cannot be replaced by blog posts on the world wide web.

“Walking the Long Road” teaches us to practice patience and hard work: we cannot learn every language by working hard for only a month or a year. We need to keep practicing and polishing our skills, as we learn in ‘Perpetual Learning’, including skills like self-assessment (‘Accurate Self-Assessment’), and to learn not only through the internet but also through the wisdom and experience of the software craftsmen who came before us. However, I do not agree with ‘Emptying Our Cup’. We should not forgo our existing skills and knowledge but build on them. We should never be arrogant or foolish enough to think we have learned everything we possibly can.

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Why Vue

There are several frontend frameworks to pick from, so why do we use Vue? To research Vue and learn about its benefits, I decided to read blogs from Vue Mastery, specifically one written by Lauren Ramirez.

  • Vue does not use up much memory. It allows us to import only the pieces of the library that we need, which means whatever we don’t use is removed for us via tree-shaking.
  • Vue’s virtual DOM (Document Object Model) uses compiler-based optimizations, resulting in faster rendering times.
  • To start working with Vue, we did not have to first master HTML, CSS, and JavaScript. It was surprising how easily we were able to learn as we went.
  • Vue has many libraries that can be added as needed, some of which are:
    • Vue Router (client-side routing)
    • Vuex (state management)
    • Vue Test Utils (unit testing)
    • vue-devtools (debugging browser extension)
    • Vue CLI (for rapid project scaffolding and plugin management)
  • One of Vue’s best features is the Composition API (see the sketch after this list):
    • We can group each feature’s logic into a composition function and call it in setup(), instead of having a large, unreadable, unmaintainable block of code directly in setup().
    • We can export these functions and reuse features across components, so we don’t keep re-writing code and we avoid useless repetition.
  • Vue has enhanced support for TypeScript users as well.
  • In Vue 3, we are able to use multi-root components. In most frontend frameworks, a component template must contain exactly one root element, because sibling root elements aren’t allowed. The traditional workaround was functional components: components that receive no reactive data, which means they do not watch for data changes and do not update themselves when something in the parent component changes. However, they are instance-less, you cannot refer to them, and everything must be passed via context. With Vue 3’s multi-root component support, there are no such restrictions and we can use any number of tags inside the template section.
  • Vue 3 gives us the Teleport component, which allows us to specify template HTML that we can send to another part of the DOM. Sometimes a piece of a component’s template belongs there logically, but it would be preferable to render it somewhere else. This is useful for things like modals, which may need to be placed outside of the body tag or outside the Vue app.
  • Most importantly, Vue is open source. Vue has complete freedom to be community-driven, and its bottom line is the satisfaction of its end users. It doesn’t have to answer to company-specific feature demands or corporate bureaucracy.
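Here is a minimal sketch of the Composition API and a multi-root template (the useCounter composable is my own made-up example, not from Lauren Ramirez’s post):

<script>
import { ref, computed } from 'vue'

// Composition function: groups one feature's state and logic
// so setup() stays small, and so the feature can be reused.
function useCounter() {
  const count = ref(0)
  const double = computed(() => count.value * 2)
  const increment = () => { count.value++ }
  return { count, double, increment }
}

export default {
  setup() {
    // Call the composable instead of inlining all of its logic here.
    return { ...useCounter() }
  }
}
</script>

<template>
  <!-- Vue 3 allows multiple root elements in one template -->
  <p>Count: {{ count }} (doubled: {{ double }})</p>
  <button @click="increment">Increment</button>
</template>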

Source: https://www.vuemastery.com/blog/why-vue-is-the-best-framework-for-2021-and-beyond/

From the blog CS-WSU – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

UML Diagrams – PlantUML

We learned PlantUML at the beginning of the semester and even did an entire homework assignment creating UML diagrams of a project. The reason I am writing about UML diagrams now is that, as I was going through previous activities, I realized UML diagrams can be used not only for code built on OOP but also to create a simple, easy-to-read diagram of a microservices architecture.

As we all know, we are going to be using a microservices architecture to build LibreFoodPantry, so I decided to look for a resource that makes PlantUML simpler to use and understand. I found a great blog by solutions architect Alex Sarafian. Below are a few features that I found very useful:

  • We can color-code arrows for the multiple flows of a diagram and add a legend to specify which color represents which flow.

Format example:

A -> B #Blue : text
legend
    | Color | Flow |
    |<#Red>| Flow 1 |
    |<#Blue>| Flow 2 |
endlegend

For instance, we could color-code the arrows inside PlaceOrderService and ApproveOrderService to make the two services easier to understand.

  • The autonumber feature automatically adds a number alongside the text of every event, giving a linear sequence of the events that take place: for example, 1st the UI placing the order with the API, then 2nd the API writing to the database, and so on.

Format example 1: numbers in front of the event text

autonumber
Bob -> Alice : Authentication Request
Bob <- Alice : Authentication Response

Format example 2: numbers are 2-digit padded and highlighted

autonumber "<b>[00]"
Bob -> Alice : Authentication Request
Bob <- Alice : Authentication Response

  • PlantUML limits image width and height to 4096 pixels. When a diagram exceeds that limit, the image is cut off, taking away the advantage of an efficient flow diagram. To fix this we can pass a command-line parameter to raise the limit (see the example below). Another way is the ‘skinparam dpi X’ parameter; the downside to using skinparam is that we must find the value of ‘X’ by experimenting to see when the diagram fits best.
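For example, raising the limit when rendering from the command line (a sketch, assuming the standard plantuml.jar command-line install):

java -DPLANTUML_LIMIT_SIZE=8192 -jar plantuml.jar diagram.puml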
  • We can display the text for a response event below the arrow, for a cleaner UML diagram, using a skin parameter. For example:

skinparam responseMessageBelowArrow true
autonumber "<b>[00]"
Bob -> Alice : Authentication Request
Bob <- Alice : Authentication Response

  • PlantUML supports a lot of different colors. A neat trick is to use the ‘colors’ command to render a picture showing all of them.
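If I understand the trick correctly, a diagram as small as this should render the full palette:

@startuml
colors
@enduml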

Source: https://www.codit.eu/blog/plantuml-tips-and-tricks/?country_sel=be

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

REST API

We have been using a REST API for most of the semester now, but without really reading about or knowing much about REST itself. We have been reading a lot of MongoDB documentation — its operators, commands, methods, and collections — but nothing about REST, even though we will be using REST APIs again in a future semester, i.e., for the capstone when we are working on LibreFoodPantry. Therefore, I wanted to research REST, and I found a very interesting blog by Adam DuVander.

REST stands for REpresentational State Transfer. REST APIs are a form of web service used to run websites (like the LibreFoodPantry example we have built), mobile applications, and most enterprise integrations.

An important thing to know about REST is that it is not a standard; it is built on top of the HTTP standard. The information can be exchanged in several formats: JSON, XML, HTML, or even plain text. JSON is the one we have been using and probably will keep using, because it is easy for both people and machines to read.

Developers use HTTP methods, or HTTP verbs, to define the request being made. GET, PUT, POST, and DELETE are the ones we have used so far. PATCH is another commonly used HTTP method, used to update a subset of existing data.

A REST resource is the data that is accessed or modified using these HTTP methods. For example, when we worked on the backend, we defined a path for accessing or modifying data with the GET, PUT, or POST methods. An example of a request would be:

POST /order/{id}/items

{id} is an identifier used to find the order with that specific id value. Identifiers can be integers or hashes.
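A full request with a JSON body might look like this (the item fields here are made up for illustration, not taken from our actual backend):

POST /order/42/items
Content-Type: application/json

{
  "name": "canned beans",
  "quantity": 3
}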

The REST architecture is made up of these resources and requests, with requests made using HTTP methods. REST also states that no client state should be stored on the server after a request is executed, meaning every request is independent of the others. Resources should still be accessible and modifiable by the user, so a uniform interface between components is needed: the resource identified in a request is kept separate from the representation sent back to the user, and that representation contains the information the user needs to further access and modify the resource. REST architecture also calls for caching of interactions between user and server, and for a layered system that organizes the different servers used to process requested information into hierarchies.

Built on these principles, REST is very versatile, able to work in a large variety of environments and with multiple data types, making REST APIs fast and lightweight.

Source: https://blog.stoplight.io/rest-api-standards-do-they-even-exist

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Visual Code: Docker Extension

We have used Visual Studio Code and Docker extensively while learning software design. Even though it wasn’t required for us to get the Docker extension, I have been using it for a while now. Since Docker is one of the biggest open-source container platforms, I wanted to explore further what benefits an extension brings to Visual Studio Code. For this I am focusing on a Microsoft blog post by Mike Morton.

Using the extension, we can easily add Docker files through the command palette with the Docker: Add Docker Files to Workspace command. This generates ‘Dockerfile’ and ‘.dockerignore’ files and adds them directly to our workspace, with an option to add Docker Compose files as well. The extension can generate Dockerfiles for more than ten popular development languages, and then set up one-click debugging of Node.js, Python, and .NET Core inside a container.

The extension puts Docker commands for managing images, networks, volumes, image registries, and Docker Compose right into the command palette. So we no longer have to go to the terminal and meticulously type docker system prune -a, or search for the IDs of the specific containers we want to stop, start, remove, and so on.
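For comparison, this is the kind of manual terminal workflow the extension replaces (a sketch):

docker ps -a                # list containers to find the one we want
docker stop <container-id>  # stop it by ID
docker system prune -a      # clean up unused containers and images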

Moreover, the extension lets us customize many of the commands. For example, when we run an image, we can have the extension put the resulting container on a specific network.

Docker Explorer, another feature of the extension, lets us examine and manage Docker containers, images, volumes, networks, and container registries, and we can use the context menu to hide or show them in the Explorer panel.

The best feature is the extension’s ability to select multiple containers or images and execute commands on all of the selected items. For example, we can select the ‘nginx’ and ‘mongodb’ containers and stop or start them at the same time, without affecting other containers and without having to run the start or stop command twice. Similarly, we can run or remove multiple images of our choice. Moreover, when running the start command through the command palette, we see a list of all the containers that can be started, with a checkbox next to each.

When we are working on, say, the LibreFoodPantry microservices and have multiple development containers running, commands through the command palette will be quick and concise, the Explorer will give us a simple and organized way to manage Docker assets, and executing run/stop commands on multiple containers at the same time will be an extreme time saver. Combined, these features are going to increase development productivity dramatically.

Source: https://devblogs.microsoft.com/visualstudio/visual-studio-code-docker-extension-1-0-better-than-ever/

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Working in containers

Today we will focus on containers and why they have become the future of DevOps. For this we will look at a blog by Rajeev Gandhi and Peter Szmrecsanyi, which highlights the benefits of containerization and what they mean for developers like us.

Containers are isolated units of software running on top of the host OS, containing only the applications and their dependencies. Containers do not need to run a full operating system; through Docker we have been using a Linux kernel while relying on the hardware and OS (Windows or macOS) of our local systems. When we remotely accessed SPSS and LogicWorks through a virtual machine, the VM came loaded with a full operating system and its associated libraries, which is why VMs are larger and much slower compared to containers, which are smaller and faster. Containers can also run almost anywhere, since the Docker (container) engine supports nearly all underlying operating systems, and they behave consistently across local machines and the cloud, which makes them highly portable.

We have been building containers for our DevOps work by building and publishing container (Docker) images. We have been working on projects like the API and backend in development containers preloaded with libraries and extensions such as Swagger preview. Making direct changes to the code and pushing them into containers can lead to potential functionality and security risks, so instead we can change the Docker image itself: rather than making code changes on the backend, we build an image containing the working backend code and then do our coding on the frontend. This helps us avoid accidental changes to a working backend, though we must rebuild the container whenever we change the container image, as sketched below.
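For illustration, a minimal backend image might be built from a Dockerfile like this (a sketch assuming a hypothetical Node.js backend; the names and port are made up):

FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

Building and publishing it would then be:

docker build -t example/backend:1.0 .
docker push example/backend:1.0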

Containers are also highly suitable for deploying and scaling microservices, which are applications broken into small, independent components. When we work on the LibreFoodPantry microservices architecture, we will have five or six teams working independently on different components of the system in different containers, giving us more development freedom. Once an image is created, we can deploy a container in a matter of seconds and replicate containers easily, giving developers more freedom to experiment. We can try out minor bug fixes, new features, and even major API changes without fear of permanent damage to the original code. Moreover, we can destroy a container in a matter of seconds, as the compose sketch below illustrates. This results in a faster development process, which leads to quicker releases and upgrades that fix minor bugs.
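As a sketch, a Docker Compose file for a small multi-container setup might look like this (the service and image names are hypothetical, not from LibreFoodPantry):

version: "3.8"
services:
  backend:
    image: example/backend:1.0
    ports:
      - "3000:3000"
  frontend:
    image: example/frontend:1.0
    ports:
      - "8080:80"
  mongodb:
    image: mongo:5

docker-compose up -d brings the whole set up in seconds, and docker-compose down tears it back down just as fast.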

Source:

https://www.ibm.com/cloud/blog/the-benefits-of-containerization-and-what-it-means-for-you

https://www.aquasec.com/cloud-native-academy/docker-container/container-devops/

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Microservices Architecture: Uses and Limitations

Using information from Narcisa Zysman and Claudia Söhlemann, we take a closer look at microservices architecture and learn why it is widely used and what its limitations are.

Uses:

Better fault isolation: If one microservice fails, others will likely continue to work.

Optimized scaling decisions: Scaling decisions can be made at a more granular level, allowing more efficient system optimization and organization.

Localized complexity: Owners of a service need to understand the complexity of only what is within their service, not the whole system.

Increased business agility: Failure of a microservice affects only that service, not the whole application, so enterprises can afford to experiment with new processes, algorithms, and business logic.

Increased developer productivity: It’s easier to understand a small, isolated piece of functionality than an entire monolithic application.

Better alignment of developers with business users: Because microservice architectures are organized around business capabilities, developers can more easily understand the user perspective and create microservices that are well aligned with the business.

Future-proofed applications: Microservice architectures make it easier to replace or upgrade individual services without impacting the whole application.

Smaller and more agile development teams: Teams involve fewer people, and they are more focused on the part of the microservices they work on.

Limitations:

Can be complex: While individual microservices may be easier to understand and manage, the application as a whole may have significantly more components involved, with more interconnections. These interdependencies increase the application’s overall complexity.

Requires careful planning: Because all the microservices in an application must work together, developers and software architects must carefully plan out how to break down all the functionality and dependencies. There can be data challenges when starting an application from scratch or modifying a legacy monolithic application, and multiple iterations may be required until it works.

Proper sizing is critical and difficult: If microservices are too big, they might have all the drawbacks of monoliths. If they are too small, the complexity of the individual services moves into the dependency maps, which makes the application harder to understand and manage at scale.

Third-party microservices: Third-party services can change their APIs (or dependencies) at any time and in ways that may break your application.

Downstream dependencies: The application must be able to survive failures of individual microservices, yet downstream problems often happen. Building fault tolerance into an application built with microservices can be more complex than in a monolithic system.

Security risks: As microservices grow in popularity, applications’ vulnerability to hackers and cybercriminals may increase. Because microservice architectures allow the use of multiple operating systems and languages when building an application, there are potentially more targets for malicious intrusion. We are also unaware of the vulnerabilities of the third-party services being used.

So when complexity increases, we make sure it is warranted and well understood, and we regularly examine the interconnected set of microservices so that the application does not crash. Learning about these limitations helps us come up with solutions that we can apply in our future work, including on the LibreFoodPantry system.

Source:

https://www.castsoftware.com/blog/microservices-architecture-a-good-or-bad-approach

https://kruschecompany.com/microservice-architecture-for-future-ready-products/#Microservices_cons

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Object Oriented Programming

OOP is used to structure a software program into simple, reusable blueprints of code (usually called classes), which are used to create individual instances of objects.

Building blocks of OOP:

  • Classes are where we create a blueprint for the structure of methods and attributes; individual objects are instantiated from this blueprint. For example, we can look at the Duck class covered in our class activity (see the sketch after this list).
  • Objects are instances of classes created with specific data; for example, a rubber duck is an instance of the Duck class. It is crucial to remember that a class is a template for modeling (a duck, in our example), while an object is instantiated from the class to represent an individual real-world thing (a rubber duck, in our example).
  • Methods perform actions; they might return information about an object or update an object’s data. A method’s code is defined in the class definition. In simple terms, methods represent behavior. In our Duck example, ducks had methods like fly() and quack(), and different ducks had different behavior: the rubber duck did not fly and did not quack, but squeaked. These behaviors are specified in methods.
  • When objects are instantiated, the individual objects contain data stored in their attributes. The state of an object depends on the data in its attributes; for example, a rubber duck is handled differently than a mallard duck based on the information in its attributes.
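A minimal TypeScript sketch of these building blocks (the duck example comes from class, but this exact code is my own illustration):

class Duck {
  // attribute: data stored on each instance; the object's state
  constructor(public name: string) {}

  // method: behavior defined once in the class blueprint
  quack(): void {
    console.log(`${this.name} says quack`);
  }
}

// object: one instance created from the class with specific data
const mallard = new Duck("Mallard");
mallard.quack(); // prints "Mallard says quack"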

The four principles of OOP:

  • Inheritance allows classes to inherit features of other classes; child classes inherit data and behaviors from their parent class. In our example, the rubber duck inherited display() from the Duck class.
  • Encapsulation means containing information in an object and exposing only selected information to other classes. Private methods and properties are accessible only by other methods of the same class, while public methods and properties are accessible by methods of other classes too.
  • Abstraction means using simple classes to represent complexity, so the user interacts with only selected attributes and methods of an object. Abstraction is used in interfaces: in FlyBehavior we had an abstract fly(), which was defined in the concrete classes as flyWithWings or flyNoWay (see the sketch after this list).
  • Polymorphism builds on inheritance: objects can override shared parent behaviors with specific child behaviors. In method overriding, a child class provides a different implementation than its parent class; in method overloading, methods may have the same name but a different number of parameters passed into the method call.
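Putting the four principles together in TypeScript (again a sketch of the class example, with the details of my own invention):

// Abstraction: callers only know that a FlyBehavior can fly()
interface FlyBehavior {
  fly(): void;
}

class FlyWithWings implements FlyBehavior {
  fly(): void { console.log("flying with wings"); }
}

class FlyNoWay implements FlyBehavior {
  fly(): void { console.log("cannot fly"); }
}

class Duck {
  // Encapsulation: the behavior is private; only performFly() exposes it
  constructor(private flyBehavior: FlyBehavior) {}
  performFly(): void { this.flyBehavior.fly(); }
  quack(): void { console.log("quack"); }
}

// Inheritance: RubberDuck gets performFly() from Duck
class RubberDuck extends Duck {
  constructor() { super(new FlyNoWay()); }
  // Polymorphism: overriding the shared parent behavior
  quack(): void { console.log("squeak"); }
}

const ducks: Duck[] = [new Duck(new FlyWithWings()), new RubberDuck()];
ducks.forEach((d) => { d.quack(); d.performFly(); });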

Abstraction reduces complexity and the constant overriding of methods. Inheritance gives reusable structure across a program. Polymorphism allows class-specific behavior and lets objects of different types be passed through the same interface. Encapsulation helps us prevent unwarranted changes to important data by other developers, and also guards against greater security risks, like the phishing attacks we face today. The advantages of OOP will never die out, which is why I have written this blog explaining it. I hope it has been useful.

Source

https://codecoda.com/en/blog/entry/object-oriented-programming

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

About me…

Hi! I am Murtaza. I am a tech nerd. I love to learn and read about new technologies. This is my blog where I hope to share my take on tech today. Thank you!

From the blog CS-WSU – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

WSU x AMPATH || Sprint Retrospective 6

Hey guys, it’s a weird feeling to be wrapping up the last semester of my undergraduate career in computer science (and sociology). For this final installment of my AMPATH sprint series, I will go over a general overview of what went on for my team.

Most of our updates went into our PowerPoint presentation, which will be presented on May 15th, 2019. My team and I are looking forward to presenting our group’s overall learning and working process, the many lessons we learned, our advice for future students, the work we tried to implement, and various other technical aspects of our project.

We decided that, on top of the search bar work, it is also a priority to restructure the Git repository so it is better organized and is an open work environment for other classes. I think this is important especially when new people come in and cannot efficiently locate and access the files they need. As someone who was once new to an organization that had a lot of different projects and files to sort through, I believe it is very considerate of the team to take this on.

We worked around the limitations my team had to face to determine how we should move forward. The end result is essentially a search bar attached to the toolbar. It cannot currently be live-tested, since there is no backend, but that can definitely be done in the future under the right circumstances.

It was interesting to observe how much planning you can start with and still end up having to take detours, start down new paths completely, or sometimes even make U-turns.

I thought it would be important to pull some of the advice for future students from our PowerPoint and include them in this wrap-up:

  • Point out and address problems with technology right away because others around you might have the same problem(s) so you can solve them collectively
  • Do all team implementations in a separate component based on what you will be working on
  • Merge your work constantly to the master branch so each team can have the updated changes

A pattern I am noticing in a lot of teams and group projects is that not everything is going to work out the way you expected or hoped, but you learn to move as a shifting team to make progress and continue growing.

Overall, I’d say I learned an important life lesson from this: if I am to contribute extra time on top of my technology career in the future to work on side projects, allocating that time will be a challenge if it is a group initiative. I also learned that even when we try to communicate everything, there is still room for miscommunication, so there may be no such thing as over-communicating. I am happy to say that we always tried our best to move forward in all ways!


From the blog CS@Worcester by samanthatran and used with permission of the author. All other rights reserved by the author.