THE SOLID

SOLID

SOLID is an acronym for five key object-oriented design principles: the Single Responsibility Principle, the Open-Closed Principle, the Liskov Substitution Principle, the Interface Segregation Principle, and the Dependency Inversion Principle. Software engineers apply all five to write code that is easier to maintain, extend, and test.

Principles of SOLID:

  • SRP – Single Responsibility Principle

SRP states that “a class should have one, and only one, reason to change.” Following this principle means that each class or module is responsible for exactly one part of the program’s functionality; put simply, each class should solve only one problem.

The Single Responsibility Principle is a basic principle that many developers already follow when writing code. It can be applied to classes, software components, and microservices.

Applying this principle makes the code easier to test and maintain, simplifies implementation, and helps prevent unintended consequences of future changes.
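
As a small, hypothetical Java sketch of SRP (the Report and ReportSaver class names are invented for this example, not taken from the referenced article), the report’s content and its persistence live in separate classes so that each has only one reason to change:

```java
// Handles only the report's content: one reason to change.
class Report {
    private final String title;
    private final String body;

    Report(String title, String body) {
        this.title = title;
        this.body = body;
    }

    String asText() {
        return title + "\n" + body;
    }
}

// Handles only persistence: a separate reason to change.
class ReportSaver {
    void saveToFile(Report report, java.nio.file.Path path) throws java.io.IOException {
        java.nio.file.Files.writeString(path, report.asText());
    }
}
```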

  • OCP – Open/Closed Principle

OCP states that software entities (classes, modules, methods, etc.) should be open for extension, but closed for modification.

Practically, this means creating software entities whose behavior can be extended without editing and recompiling the existing code. New functionality is added by writing new code that builds on stable abstractions rather than by modifying code that already works.
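
A minimal, hypothetical Java sketch of OCP: new shapes can be added by writing new classes (extension) without touching the code that computes the total area (no modification). The shape classes here are invented for illustration:

```java
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Rectangle implements Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    public double area() { return width * height; }
}

// Closed for modification: adding a Triangle later requires no change here.
class AreaCalculator {
    double totalArea(java.util.List<Shape> shapes) {
        return shapes.stream().mapToDouble(Shape::area).sum();
    }
}
```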

  • LSP – Liskov Substitution Principle

LSP states that “any subclass object should be substitutable for the superclass object from which it is derived”.

While this may be a difficult principle to put in place, in many ways it is an extension of the Open/Closed Principle, as it is a way to ensure that derived classes extend the base class without changing its behavior.
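
A classic, hypothetical Java example of what LSP guards against: a Square that extends Rectangle but silently changes the behavior callers rely on, so it cannot safely substitute its base class:

```java
class Rectangle {
    protected int width, height;
    void setWidth(int w)  { this.width = w; }
    void setHeight(int h) { this.height = h; }
    int area()            { return width * height; }
}

// Violates LSP: the setters change both dimensions, breaking code
// that was written against Rectangle's expected behavior.
class Square extends Rectangle {
    @Override void setWidth(int w)  { this.width = w; this.height = w; }
    @Override void setHeight(int h) { this.width = h; this.height = h; }
}

class LspDemo {
    // A caller written for Rectangle expects area 5 * 4 = 20 here,
    // but passing a Square yields 16, so Square is not substitutable.
    static int resize(Rectangle r) {
        r.setWidth(5);
        r.setHeight(4);
        return r.area();
    }
}
```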

  • ISP – Interface Segregation Principle

ISP states that “clients should not be forced to implement interfaces they do not use.”

Developers should not simply start with an existing interface and keep adding new methods. Instead, start by creating a new, focused interface and let your class implement as many of these interfaces as needed. Smaller interfaces also mean that developers should prefer composition over inheritance and decoupling over coupling. According to this principle, engineers should aim for small, client-specific interfaces, avoiding the temptation to build one large, general-purpose interface.
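
A small, hypothetical Java sketch of ISP: instead of one large, general-purpose interface, clients implement only the narrow interfaces they actually need. The Printer and Scanner interfaces are made up for this example:

```java
// Small, client-specific interfaces instead of one general-purpose one.
interface Printer {
    void print(String document);
}

interface Scanner {
    String scan();
}

// A simple device implements only what it supports...
class BasicPrinter implements Printer {
    public void print(String document) {
        System.out.println("Printing: " + document);
    }
}

// ...while a multifunction device composes both capabilities.
class MultiFunctionDevice implements Printer, Scanner {
    public void print(String document) {
        System.out.println("Printing: " + document);
    }
    public String scan() {
        return "scanned contents";
    }
}
```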

  • DIP – Dependency Inversion Principle

DIP states that “high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details. Details should depend on abstractions.”

This principle offers a way to decouple software modules. Simply put, dependency inversion principle means that developers should “depend on abstractions, not on concretions.”
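
A brief, hypothetical Java sketch of DIP: the high-level NotificationService depends on the MessageSender abstraction, and the concrete EmailSender is supplied from outside, so both the high-level module and the low-level detail depend on the abstraction:

```java
// Abstraction that both high- and low-level modules depend on.
interface MessageSender {
    void send(String recipient, String message);
}

// Low-level detail: depends on the abstraction by implementing it.
class EmailSender implements MessageSender {
    public void send(String recipient, String message) {
        System.out.println("Emailing " + recipient + ": " + message);
    }
}

// High-level module: knows nothing about email, SMS, or any other detail.
class NotificationService {
    private final MessageSender sender;

    NotificationService(MessageSender sender) {
        this.sender = sender;
    }

    void notifyUser(String user, String message) {
        sender.send(user, message);
    }
}

// Usage: the concrete sender is injected from outside.
// new NotificationService(new EmailSender()).notifyUser("sam", "Order shipped");
```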

Benefits of the SOLID principles

  • Accessibility
  • Ease of refactoring
  • Extensibility
  • Debugging
  • Readability

REFERENCES:

  1. https://www.bmc.com/blogs/solid-design-principles/

From the blog CS@Worcester – THE SOLID by isaacstephencs and used with permission of the author. All other rights reserved by the author.

Working in containers

Today we will be focusing on containers and why they have become the future of DevOps. For this we will be looking at a blog by Rajeev Gandhi and Peter Szmrecsanyi which highlights the benefits of containerization and what it means for developers like us.

Containers are isolated units of software that run on top of the host OS and contain only the application and its dependencies. Containers do not need to run a full operating system; through Docker we have been using the Linux kernel together with the capabilities of our local hardware and OS (Windows or macOS). When we remotely accessed SPSS and LogicWorks through a virtual machine, the VM came loaded with a full operating system and its associated libraries, which is why VMs are larger in size and much slower compared to containers, which are smaller and faster. Containers can also run anywhere, since the Docker (container) engine supports almost all underlying operating systems, and they work consistently across local machines and the cloud, making containers highly portable.

We have been building containers for our DevOps work by building and publishing container (Docker) images. We have been working on files like the API and backend in development containers preloaded with libraries and extensions such as the Swagger preview. We then make direct changes in the code and push them into containers, which can lead to potential functionality and security risks. Therefore, we can change the Docker image itself: instead of making code changes on the backend, we build an image with a working backend and then code on the frontend. This helps us avoid accidental changes to a working backend, but we must rebuild the container whenever we change the container image.

Containers are also well suited to deploying and scaling microservices, which are applications broken into small, independent components. When we work on the LibreFoodPantry microservices architecture, we will have five to six teams working independently on different components of the microservices in different containers, giving us more development freedom. After an image is created, we can deploy a container in a matter of seconds and replicate containers, giving developers more freedom to experiment. We can try out minor bug fixes, new features, and even major API changes without the fear of permanent damage to the original code. Moreover, we can also destroy a container in a matter of seconds. This results in a faster development process, which leads to quicker releases and upgrades to fix minor bugs.

Source:

https://www.ibm.com/cloud/blog/the-benefits-of-containerization-and-what-it-means-for-you

https://www.aquasec.com/cloud-native-academy/docker-container/container-devops/

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Data Science who?

What Is Data Science?
First of all, data science is an interdisciplinary field. In this article, we’ll cover the key aspects that you can expect to encounter in a data scientist role. Data science is, in fact, a very broad discipline, which continues to expand with new data-related needs.

  • Type of data used
  • Activities performed on the job
  • Time allocation
  • Key skills
  • Frequently used data science methods

What Types of Data Do Data Scientists Use in Their Analysis?
The answer is: they use both structured and unstructured data. Structured data comes in the form of Excel spreadsheets and CSV files; examples include client tables and spreadsheets with transaction information. Unstructured data, on the other hand, is everything else: images, video, audio files, and all other types of data. Reportedly, unstructured data represents more than 80% of all enterprise data, so every data scientist worth their salt should be able to take advantage of it.

What Are the Main Data Scientist Responsibilities?
It depends mostly on company size. In larger enterprises, there will be a higher degree of specialization as the company is able to afford more resources. The main activities that a data scientist can perform – but not necessarily does – in a business environment are:

Data collection and storage
Data preprocessing (also referred to as data cleaning)
Data organization
Data visualization and the creation of KPI dashboards
Experimentation and A/B testing
Statistical inference
Building ML models
Evaluating ML models
Deploying ML models
Monitoring how ML models perform

Which Data Science Tasks Take Up the Most Time?
Ask anyone in the industry and you will hear the same answer: they spend about 80% of their time forming hypotheses, finding the necessary data, and cleaning it. Only 20% of their working hours are dedicated to performing analysis and interpreting the findings.

What are the Key Data Scientist Skills?
There are many abilities that you should have in order to become a skilled data scientist. Some of the most frequently used data science techniques are:

Statistical inference.
Linear regression.
Logistic regression.
Machine Learning techniques such as decision trees, support vector machines, clustering, dimensionality reduction.
Deep Learning methods – supervised, unsupervised, and reinforcement learning.
Regardless of the method, a data scientist’s end goal would be to make a meaningful contribution to the business – to create value for the company.

How Does Data Science Make a Meaningful Contribution to the Business?
We can distinguish between two main ways to do that. The first is to help a company make better decisions about its customers and employees. We hope you enjoyed this article and learned something new. Now that you have a basic understanding of the field of data science, you might be wondering where to start your learning journey. Our Introduction to Data Science course offers a beginner-friendly overview of the entire field of data science and all its complexities.

https://365datascience.com/trending/data-science-explained-in-5-minutes/

From the blog CS@Worcester – The Dive by gonzalezwsu22 and used with permission of the author. All other rights reserved by the author.

The Bug

Internal bug finding, where project developers find bugs themselves, can be superior to external bug finding because developers are more in tune with their own project’s needs and can schedule bug-finding efforts so that they are most effective, for example after landing a major new feature. Thus, an alternative to external bug-finding campaigns is to create good bug-finding tools and make them available to developers, preferably as open-source software. Bug finders are motivated altruistically (they want to make the targeted software more reliable by getting lots of bugs fixed) but also selfishly (a bug-finding tool or technique is demonstrably powerful if it can find a lot of previously unknown defects). External bug finders would like to find and report as many bugs as possible, and with the right technology it can end up being cheap to find hundreds or thousands of bugs in a large, soft target. A major advantage of external bug finding is that, since the people performing the testing are presumably submitting a lot of bug reports, they can get really good at it.

Much more recently, it has become common to treat “external bug finding,” looking for defects in other people’s software, as an activity worth pursuing on its own. The OSS-Fuzz project is a good example of a successful external bug-finding effort; its web page mentions that it “has found over 20,000 bugs in 300 open-source projects.” As an external bug finder, it can be hard to tell which kind of bug you have discovered, and the numbers are not on your side: the large majority of bugs are not that important. Many deserve a priority along the lines of “a valid bug that should be fixed, but not something that will ever be at the top of the list unless something unforeseen happens.” I’ve found FindBugs’ “Mostly Harmless” label useful for this and have lobbied to include it in every bug tracker I’ve used since, though it’s more of a severity than a priority. Every bug report requires significant attention and effort, usually far more effort than was required to simply find the bug, and an all-too-common attitude (especially among victims of bug metrics) is that bug reporters are the enemy, while those making pull requests are contributors. If maintainers actively don’t trust you, for example because you’ve flooded their bug tracker with corner-case issues, they’re not likely to ever listen to you again, and you probably need to move on to do bug finding somewhere else. Occasionally, your bug-finding technique will come up with a trigger for a known bug that is considerably smaller or simpler than what is currently in the issue tracker.

From the blog CS@Worcester – The Dive by gonzalezwsu22 and used with permission of the author. All other rights reserved by the author.

Security in APIs

As we continue to work with APIs, I have decided to dedicate this blog post to API security, since we will eventually have to take it into consideration as we go more in depth with APIs. Security will always be a priority, so I think it is helpful to look at this area now. I have chosen a blog post that gives some good practices we can use to improve our APIs.

In summary, the author first goes over TLS (Transport Layer Security), a cryptographic protocol that helps prevent tampering and eavesdropping by encrypting messages sent to and from the server. Without TLS, third parties can easily access private information from users. TLS is in use when a website URL begins with https rather than just http, as on the very website you are reading now. The author then covers OAuth2, a general framework often used with single sign-on providers. It manages third-party applications’ access to and use of data on behalf of the user, for example granting another application access to your photos; the post goes in depth on authorization codes and access tokens in OAuth2. Next come API keys, with advice to set their permissions carefully and not mismanage them. The post closes by recommending good, reliable libraries that take on most of the work and responsibility, so that we can minimize the mistakes we make.
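
To make the TLS and OAuth2 ideas concrete, here is a minimal sketch in Java using the standard java.net.http client (available since Java 11). It sends a request to a hypothetical HTTPS endpoint with an OAuth2 bearer token; the URL and the environment variable name are assumptions made for illustration, not details from the blog post:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SecureApiCall {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; the https:// scheme gives us TLS.
        String apiUrl = "https://api.example.com/v1/orders";
        // Read the token from the environment instead of hard-coding a secret.
        String accessToken = System.getenv("API_ACCESS_TOKEN");

        HttpClient client = HttpClient.newHttpClient();

        // The Authorization header carries the OAuth2 bearer token
        // obtained from the identity provider.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiUrl))
                .header("Authorization", "Bearer " + accessToken)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```

The encrypted connection and the externally supplied token illustrate the TLS and key-management practices the post recommends.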

These practices will help bolster my knowledge of the security used in the APIs we are working with. This blog post helped me learn more about the security measures common across the general API landscape. TLS appears to be a standard protocol used on nearly every website you visit, but it also makes me wonder about all the websites I have browsed that did not use TLS; you should check too, and make sure you have no private information at risk on those sites. It also makes me wonder how TLS is implemented in the LibreFoodPantry API being worked on, and whether it is (hopefully) used there. Perhaps as we work further with the APIs, we will get to see these security practices implemented.

Source: https://stackoverflow.blog/2021/10/06/best-practices-for-authentication-and-authorization-for-rest-apis/

From the blog CS@Worcester – kbcoding by kennybui986 and used with permission of the author. All other rights reserved by the author.

Docker Images

Resource Link: https://jfrog.com/knowledge-base/a-beginners-guide-to-understanding-and-building-docker-images/

For this post I wanted to explore and delve into what exactly docker images are, how they’re composed, and how they’re interpreted within docker. I wanted to investigate this topic because while I understand at a high level how docker images work and what they’re used for, I don’t have a deeper understanding of their exact functioning and inner workings. My goal is to attain a better understanding of docker images so that I may understand the full scope of their functionality and uses. The purpose of this is so that I can better make use of docker images, because while right now I can use docker images just fine, I can’t utilize the more advanced functions and uses of them.

Basically, a Docker image is a template that contains the instructions for creating a Docker container. This makes it an easy way of creating readily deployable applications or servers that can be built instantly. Their pre-packaged nature makes them easy to share among other Docker users, for example on a dev team, or to capture a specific version of an application or server that is easy to return to or redistribute. Docker Hub makes it easy for users to find images that other users have made, so a user can simply search for an image that fits their needs and make use of it with little to no configuration.

A Docker image isn’t one file; it is composed of many files, such as installations, application code, and the dependencies the application requires. This plays into the pre-packaged nature of a Docker image: all the code, installations, and dependencies are included within the image, so a user doesn’t need to search elsewhere for the dependencies required to configure it. This also creates a level of abstraction, as all the user needs to know is which image they want to use and how to configure it; they can have little knowledge of the code, installations, or dependencies involved, because the image handles it all automatically.

A Docker image can be built either through interactive steps or through a Dockerfile. The interactive steps can be run through the command line or collected into a single script that executes the commands. With a Dockerfile, all the instructions to build the image are written in a file that follows a similar format for every image. There are advantages and disadvantages to each method, but the more common and standard way is the Dockerfile. With a Dockerfile, the steps to build an image are repeatable and consistent from image to image, which makes the image easier to maintain over its lifecycle and allows for easier integration into continuous integration and continuous delivery processes. For these reasons it is preferable for a large-scale application, often being worked on by multiple people. The advantage of building an image through interactive steps is that it is the quickest and simplest way to get an image up and running, and it allows more fine-tuning of the process, which can help with troubleshooting. Overall, creating a Docker image with a Dockerfile is the better solution for long-term sustainability, given that you invest the setup time required to properly configure the Dockerfile.
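
As a rough illustration of the Dockerfile approach (not from the referenced article), here is a minimal sketch for a hypothetical Node.js application; the base image, file names, and port are assumptions made for this example:

```dockerfile
# Start from a base image (chosen only for illustration).
FROM node:18-alpine

# Set the working directory inside the image.
WORKDIR /app

# Copy dependency manifests first so this layer is cached
# when only the application code changes.
COPY package*.json ./
RUN npm install

# Copy the rest of the application code into the image.
COPY . .

# Document the port the application listens on.
EXPOSE 3000

# Command run when a container is started from this image.
CMD ["node", "server.js"]
```

Each instruction adds a layer to the image, which is one reason an image is composed of many files rather than a single one.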

From the blog CS@Worcester – Alex's Blog by anelson42 and used with permission of the author. All other rights reserved by the author.

Software Framework

A framework is similar to an application programming interface (API), though technically a framework includes an API. As the name suggests, a framework serves as a foundation for programming, while an API provides access to the elements supported by the framework. A framework may also include code libraries, a compiler, and other programs used in the software development process. There are two types of frameworks: front-end and back-end.

The front end is the “client-side” part of the application, while the back end is referred to as “server-side” programming. The front end is the part of the application that interacts with users: dropdown menus, sliders, and the like, built from a combination of HTML, CSS, and JavaScript and controlled by the computer’s browser. The back-end framework covers everything that happens behind the scenes and is reflected on a website, such as when a user logs into an account or makes a purchase from an online store.

Why do we need a software development framework?

Software development frameworks provide tools and libraries that help software developers build and manage web applications faster and more easily. Most frameworks have one goal in common: to facilitate easy and fast development.

Let’s see why these frameworks are needed:

  1. Frameworks help a small team of developers build application programs in a consistent, efficient, and accurate manner.
  2. An active and popular framework provides developers with robust tools, a large community, and rich resources to leverage during development.
  3. Flexible and scalable frameworks help teams meet the time constraints and complexity of a software project.

Those are some of the reasons frameworks matter. Now let’s look at the advantages of using a software framework:

  1. Secure code
  2. Duplicate and redundant code can be avoided
  3. Helps develop consistent code with fewer bugs
  4. Makes it easier to work on sophisticated technologies
  5. Applications are reliable, as several code segments and functionalities are pre-built and pre-tested
  6. Testing and debugging the code is easier and can be done even by developers who do not own the code
  7. The time required to develop an application is less

I chose this topic because I have heard many times about software frameworks and was intrigued by learning more about them, what they are, how they work, and what their importance is in the software development field. Frameworks or programming languages are important because we need them to create and develop applications.

Software Development Frameworks For Your Next Product Idea (classicinformatics.com)

Framework Definition (techterms.com)

From the blog CS@Worcester – Software Intellect by rkitenge91 and used with permission of the author. All other rights reserved by the author.

Blog Post 5 – SOLID Principles

When developing software, creating understandable, readable, and testable code is not just a nice thing to do; it is a necessity, because clean code that can be reviewed and worked on by other developers is an essential part of the development process. When it comes to object-oriented programming languages, there are a few design principles that help you avoid design smells and messy code. These principles are known as the SOLID principles. They were originally introduced by Robert C. Martin back in 2000. SOLID is an acronym for five object-oriented design principles. These principles are:

  1. Single Responsibility Principle – A class should have one and only one reason to change, meaning that a class should have only one job. This principle helps keep code consistent and it makes version control easier.
  2. Open Closed Principle – Objects or entities should be open for extension but closed for modification. This means that we should only add new functionality to the code, but not modify existing code. This is usually done through abstraction. This principle helps avoid creating bugs in the code.
  3. Liskov Substitution Principle – Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S, where S is a subtype of T. This means that subclasses can substitute for their base class. This is expected because subclasses inherit everything from their parent class; they extend the parent class, they never narrow it down. This principle also helps us avoid bugs.
  4. Interface Segregation Principle – A client should never be forced to implement an interface that it doesn’t use, and clients shouldn’t be forced to depend on methods they do not use. This principle helps keep the code flexible and extendable.
  5. Dependency Inversion Principle – Entities must depend on abstractions, not on concretions. It states that the high-level module must not depend on the low-level module, but they should depend on abstractions. This means that dependencies should be reorganized to depend on abstract classes rather than concrete classes. Doing so would help keep our class open for extension. This principle helps us stay organized as well as help implement the Open Closed Principle.

These design principles act as a framework that helps developers write cleaner, more legible code that allows for easier maintenance and easier collaboration. The SOLID principles should always be followed because they are best practices, and they help developers avoid design smells in their code, which will in turn help avoid technical debt.

https://www.digitalocean.com/community/conceptual_articles/s-o-l-i-d-the-first-five-principles-of-object-oriented-design#single-responsibility-principle

https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/

From the blog CS@Worcester – Fadi Akram by Fadi Akram and used with permission of the author. All other rights reserved by the author.

Best Docker Tutorial IMHO

While in the process of trying to learn Docker as part of my Software Architecture class, I watched five YouTube videos. YouTube has been a real benefit to me for learning college-level science courses. I have had good luck with Biology, Organic Chemistry, and even Statistics tutorials, but have not had much luck so far with Computer Science videos. There are some good ones out there, though, and “Docker Tutorial for Beginners – A Full DevOps Course on How to Run Applications in Containers” [1] is certainly one of the best I have seen.

This two-hour course by Mumshad Mannambeth covers all the bases in a clear and interesting manner and is enhanced with well-thought-out structure and graphics. The lectures are accompanied by direct access to a full-fledged Docker environment through labs [2] on an accompanying website.

The course is broken up into sections exploring what containers, images, and Docker itself are, why you need Docker, and what you can do with it. He then explains the basic commands needed to create a Docker image, how to build and run it in a container, and the basic concepts of Docker Compose, YAML, and the Dockerfile.

He leaves the technical details of installing Docker Desktop for Windows and macOS until later in the video, giving more time up front to clearly describe why you want to use Docker, what it accomplishes for the software development community, and how containerization is the future of enterprise-level software. These installation sections are also well done but are not relevant for those who already have Docker installed, or for those without the time to build their own environments. The tutorial and the accompanying interactive quizzes on the right side of the site are resources I will come back to in the future because of their depth and clarity.

He then spends about 40 minutes going into Docker concepts and commands in depth and follows up with a history and description of the importance of continuous integration using container orchestration tools like Swarm and Kubernetes. He clearly lays out the architecture of a system that is complex, distributed, fault-tolerant, and easy to implement. He details the importance of DevOps, where the design, development, testing, and operations teams are seamlessly connected and have a symbiotic relationship. This makes everyone’s job easier, cuts down on departmental finger-pointing when things go wrong, and brings products to market much more quickly and with fewer bugs shipped.

He also covers the following areas:

1. Layered architecture

2. Docker registry

3. Controlling volumes

4. Port forwarding

5. Viewing logs

6. The advantages of container architectures over virtual machines and hypervisors

7. kubectl

I was pleasantly surprised to have found this. Maybe I should give the computer science YouTube community more credit.

References:

[1]  YouTube link:

[2]  Tutorial links:

 www.freecodecamp.org

 www.kodekloud.com

Exhibits:

(1) Why do you need Docker?

(2) Container orchestration tools

(3) Layered architecture

(4) Hands-on lab

From the blog cs@worcester – (Twinstar Blogland) by Joe Barry and used with permission of the author. All other rights reserved by the author.

Artificial Intelligence and its future

Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals, including humans. Leading AI textbooks define the field as the study of “intelligent agents”: any system that perceives its environment and takes actions that maximize its chance of achieving its goals. Some popular accounts use the term “artificial intelligence” to describe machines that mimic “cognitive” functions that humans associate with the human mind, such as “learning” and “problem solving”; however, this definition is rejected by major AI researchers.

AI in healthcare:

AI in healthcare has gained significant traction and taken on a much bigger role during the pandemic. The White House partnered with AI research institutions to mine scientific literature to better understand Covid-19. Biotech companies and big tech players leveraged AI to understand the structure of the novel coronavirus and expedite drug discovery. Social distancing and lockdown measures forced medical labs to accelerate their digital pathology capabilities. Amid economic uncertainty, healthcare AI companies raised record funding in Q3 2020.

AI will change healthcare by 2030:

This article is part of the World Economic Forum Annual Meeting. By 2030, AI will access multiple sources of data to reveal patterns in disease and aid treatment and care. Healthcare systems will be able to predict an individual’s risk of certain diseases and suggest preventive measures. AI will help reduce waiting times for patients and improve efficiency in hospitals and health systems. The first big consequence of this in 2030 is that health systems are able to deliver truly proactive, predictive healthcare.

AI and the Future of Work:

A recent study from Redwood Software and Sapio Research underscores this view. Participants in the 2017 study said they believe that 60 percent of businesses can be automated in the next five years. On the other hand, Gartner predicts that by 2020 AI will produce more jobs than it displaces. Dennis Mortensen, CEO and founder of x.ai, maker of AI-based virtual assistant Amy, agreed. “I look at our firm and two-thirds of the jobs here didn’t exist a few years ago,” said Mortensen.

In addition to creating new jobs, AI will also help people do their jobs better — a lot better. At the World Economic Forum in Davos, Paul Daugherty, Accenture’s Chief Technology and Innovation Officer summed this idea up as, “Human plus machine equals superpowers.” 

Potential to transform businesses and contribute to economic growth:

These technologies are already generating value in various products and services, and companies across sectors use them in an array of processes to personalize product recommendations, find anomalies in production, identify fraudulent transactions, and more. The latest generation of AI advances, including techniques that address classification, estimation, and clustering problems, promises significantly more value still. An analysis we conducted of several hundred AI use cases found that the most advanced deep learning techniques deploying artificial neural networks could account for as much as $3.5 trillion to $5.8 trillion in annual value, or 40 percent of the value created by all analytics techniques. 

Deployment of AI and automation technologies can do much to lift the global economy and increase global prosperity, at a time when aging and falling birth rates are acting as a drag on growth. Labor productivity growth, a key driver of economic growth, has slowed in many economies, dropping to an average of 0.5 percent in 2010–2014 from 2.4 percent a decade earlier in the United States and major European economies, in the aftermath of the 2008 financial crisis after a previous productivity boom had waned. AI and automation have the potential to reverse that decline: productivity growth could potentially reach 2 percent annually over the next decade, with 60 percent of this increase from digital opportunities.

Work Cited:

Artificial intelligence – Wikipedia

The Role of Artificial Intelligence in the Future of Work | Blogs | CDC

AI, automation, and the future of work: Ten things to solve for (Tech4Good) | McKinsey

From the blog CS@Worcester blog – Syed Raza by syedraza089 and used with permission of the author. All other rights reserved by the author.