InfoSeCon Day 2 Keynote

This post is about the keynote speaker for day two of Raleigh ISSA’s 2021 Information Security Conference. In my previous post about the Day 1 keynote speaker, I detailed my attachment to the Association and my appreciation for the resource of monthly meetings that I used to attend when I lived in Raleigh. The presentation for Day 2 is a little different in that it’s not themed around Halloween or any comedic points.

The speaker, Armando Seay, is a Co-Founder & Board Member at the Maryland Innovation and Security Institute, an organization that assists clients in finding security solutions and applying them to their IT infrastructure. He begins the presentation by discussing the importance of understanding security standards and practices around the world and how young professionals in the IT field get their start in the US, Ukraine, Israel, and various other countries. He describes his organization as “geographically boundless” because it assists clients all over the world.

The importance of being able to work with people around the world is something that I’m finding more and more important as I travel around the US from one cultural hub to another. I grew up in Columbia, South Carolina, joined the Coast Guard after high school, and have now moved to Worcester. Each of these places is full of different demographics, and being successful in each of them required working with people from different backgrounds and finding common ground to build upon. I have friends who were born in Haiti, China, West Africa, Europe, and Canada, and they all come from backgrounds that are exceptionally different from my own. Engaging with people from different backgrounds broadens your perspective and helps you find solutions you wouldn’t normally find on your own.

Armando goes on to talk about the Academic Partners and what they call their “Partner Ecosystem.” I believe that POGIL team structures work as a kind of microcosm of this: when groups are switched around, we form partnerships that we can maintain and use to better understand our assignments and accomplish goals.

One thing that I found particularly interesting was his discussion of maritime attacks, that is to say, attacks on the software that ships use to catalogue their cargo, handle their navigation, or even control their movement. It’s important to be aware of software vulnerabilities and best practices so that we aren’t exposing others to cybersecurity vulnerabilities through our negligence. Armando closes his presentation by discussing Zero-Trust policies and how critical it is to verify any process or user in your network, and everything you do, before you entrust any security clearance to them.

From the blog CS@Worcester – Jeremy Studley's CS Blog by jstudley95 and used with permission of the author. All other rights reserved by the author.

.JS File

This week, I had a chance to work with the JavaScript files of a backend in my homework-5. However, there was a lot of syntax I had never seen before, so I felt lost while writing code. Thus, I looked for some resources to satisfy my curiosity and get a better understanding of JavaScript.

JavaScript Tutorial is one of the resources that I find useful because it provides all the information related to JavaScript. First of all, from the website, I learned that JavaScript is one of the most popular programming languages of the web. It is used to program the behavior of a web page. Next, on the website I can look up and learn more about new syntax and new definitions of JavaScript relating to my homework-5. For example, the keywords “let” and “const”, array iteration, and async/await appearing in my homework made me wonder what they are for and why they are there in the .js files.

Based on the information from the website, I could answer those questions. It says that “let” is used to define a variable, whereas “const” is used to define a constant variable which cannot be reassigned. Variables declared with “let” or “const” cannot be redeclared and have block scope. For array iteration, JavaScript has many methods that operate on every item of an array, such as map(), filter(), reduce(), and every(), and especially forEach(), which I encountered in my homework. forEach() applies the function inside the parentheses to each element of an array. For async/await, the website says that “async” is used to make a function return a promise, and the “await” keyword is only used inside an “async” function; “await” makes the function wait for a promise. So, what is a promise in JavaScript? A promise is a JavaScript object that links producing code and consuming code. The website also gives a picture example to explain clearly what producing code and consuming code are.
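To make these ideas concrete, here is a small sketch of the features mentioned above; the variable names and the URL are made up for illustration, not taken from homework-5.

    // const: cannot be reassigned; let: block-scoped and reassignable
    const apiUrl = "https://example.com/api/items";
    let total = 0;

    const prices = [3, 5, 7];
    prices.forEach((price) => {   // forEach applies this function to every element
      total += price;
    });
    console.log(total); // 15

    // fetch() is the "producing code" that returns a promise;
    // await is the "consuming code" that waits for the promise to settle.
    async function getItems() {
      const response = await fetch(apiUrl);
      return response.json();
    }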

In conclusion, in my opinion, this website is a good resource to help me learn more about JavaScript. It includes all the information that I wanted to know, and it also explains every new definition clearly with easily understandable examples. Thanks to this website, I am getting more familiar with JavaScript and getting a better understanding of the code given in my homework-5. So, I believe that this website will give me a good foundation to work with my homework-5 or with any .js files.

From the blog CS@Worcester – T's CSblog by tyahhhh and used with permission of the author. All other rights reserved by the author.

Thinking about software testing

blog.ndepend.com/10-reasons-why-you-should-write-tests/

For as long as I have been aware of it, I have been skeptical of the value of software testing. It has always struck me as unnecessary busywork, mostly because that is how writing them for my classes feels (granted, that’s generally how writing code for any class feels in my experience). Either the program works or it doesn’t, right? Why bother writing a test when you could use that time to tighten the ratchet on the code you’d otherwise be testing, or even move on to something else?

In an attempt to broaden my horizons, I sought out some arguments in favor of testing. One idea I found, from Avaneesh Dubey (which he discusses in the above article), is probably the one I personally find the most compelling. He argues that the hallmark of a poorly constructed test case is essentially that it is too narrow in its scope or in the functionality that it covers. Proper tests, he argues, must reflect “usage patterns and flows.”

Jumping off of this, I would articulate it slightly differently. I think that proper testing methodology would necessarily force developers to be aware of the boundaries they want to encapsulate between. For example, it would be kind of absurd to write tests to make sure a factory class works correctly, because whether or not you’re even using the factory paradigm is almost certainly too technical for non-technical product managers to care about. My understanding of testing is that it’s primarily a way for this kind of person to make judgments about the development process independently of the actual developers.

When you write software tests, you are, or at least should be, asking yourself questions about the high-level flow of the program – what it’s actually doing in physical reality rather than very tiny implementation details – and that is ultimately where your head should be at all times during the development process, in my opinion.
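To illustrate the difference, here is a hypothetical sketch using Jest-style tests; the ShoppingCart class and its methods are invented for this example rather than taken from the article.

    const { ShoppingCart } = require("./cart");

    // Too narrow: pins down one tiny implementation detail.
    test("cart starts with an empty items array", () => {
      expect(new ShoppingCart().items).toEqual([]);
    });

    // Closer to a "usage pattern and flow": it exercises what a user actually does.
    test("adding two items and checking out produces the right total", () => {
      const cart = new ShoppingCart();
      cart.add({ name: "pen", price: 2 });
      cart.add({ name: "notebook", price: 5 });
      expect(cart.checkout().total).toBe(7);
    });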

Though test writing is obviously an important skill for actual work in the industry, I had no intention of ever writing any tests for my personal projects. Now, I’m not sure that I’m sold on it for personal use, and I’m still a little skeptical about the efficacy of test-driven development in single-person projects, but I think it might be of some use to me. In particular, I hope it can help me make some sense of the WebGL code I’m planning to write for a project in the near future, which is certain to contain many fine technical details that can quickly become a headache if not managed properly.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

API

For this week’s blog, I looked more into APIs, since we covered them in class, to learn more about them. An application programming interface (API) is a connection between computers or between computer programs. It is a type of software interface, offering a service to other pieces of software. APIs enable companies to open their applications’ data and functionality to external third-party developers, business partners, and internal departments within their companies. This allows services and applications to communicate with each other and leverage each other’s data and functionality through a documented interface. Developers do not need to know how the API is implemented; they simply use the interface to communicate with an application or web service.

How an API works: an API sits between an application and the web server, acting as an intermediary layer that processes data transfer between systems. APIs are sometimes thought of as contracts, with documentation that represents an agreement between two parties: if one party sends a remote request structured a particular way, this is how the second party’s software will respond. For example, imagine a medical equipment distribution company. The distributor could give its customers a cloud app that lets hospital employees check the availability of a given piece of equipment with the distributor. The app could be expensive to build, limited by platform, and require long development times and ongoing maintenance. Exposing the data through an API instead has several benefits: it lets customers check information about the inventory in a single place, saving time; the distributor can make changes to its internal systems without impacting customers; and making the API publicly available could result in higher sales for the business.
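As a hypothetical sketch of what that might look like from the hospital’s side, a small script could call the distributor’s API over HTTP; the endpoint, fields, and key below are invented for illustration only.

    async function getEquipmentAvailability(equipmentId) {
      const response = await fetch(
        `https://api.distributor-example.com/inventory/${equipmentId}`,
        { headers: { Authorization: "Bearer <api-key>" } }
      );
      if (!response.ok) {
        throw new Error(`Request failed: ${response.status}`);
      }
      return response.json(); // e.g. { id: "pump-07", available: 12 }
    }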

In short, APIs let you open up access to your resources while maintaining security and control. How you open access, and to whom, is up to you. There are four main types of APIs: Open APIs, Partner APIs, Internal APIs, and Composite APIs. Open APIs are publicly available application programming interfaces you can access with the HTTP protocol. Partner APIs are application programming interfaces exposed to or by strategic business partners. Internal APIs are application programming interfaces that remain hidden from external users. Composite APIs combine multiple data or service APIs. I used this article because it explains APIs in an easy way and illustrates them with an example, which makes them clearer. In today’s world, where technology advances day by day, innovation is very important, so knowing about APIs will be helpful in the future because they enable fast innovation.

From the blog CS@Worcester – Mausam Mishra's Blog by mousammishra21 and used with permission of the author. All other rights reserved by the author.

Understanding RESTful APIs and what makes an API “RESTful”

This week, I wanted to solidify my understanding of the concept of REST or RESTful APIs, what they are, and how they work. There is a plethora of information on this topic, but I’ll be referring to a blog post by Jamie Juviler and a website on REST APIs that I found helpful.

Before we get into what REST is, let’s first go over what an API is. API stands for application programming interface, and APIs provide a way for two applications to communicate. Some key terms include client, resource, and server. A client is a person or program that sends requests to the API to retrieve information, a resource is any piece of information that can be returned to the client, and a server is used by the application to receive requests and maintain the resources that the client is asking for.

REST is an acronym for Representational State Transfer, and RESTful is a term used to describe an API conforming to the principles of REST. REST APIs work by receiving requests and returning all relevant information in a format that is easily interpretable by the client. Clients are also able to modify or add items to the server through a REST API.

Now that we have an understanding of how REST APIs work, let’s go over what makes an API a RESTful API.

There are six guiding principles of REST that an API must follow:

  1. Client-Server Separation

The REST architecture only allows the client and server to communicate in a single manner: all interactions are initiated by the client. The client sends a request to the server and the server sends a response back, but not vice versa.

  2. Uniform Interface

This principle states that all requests and responses must follow a uniform protocol. The most common language for REST APIs is HTTP. The client uses HTTP to send requests to a target resource. The four basic HTTP requests that can be made are GET, POST, PUT, and DELETE (see the sketch after this list).

  3. Stateless

All calls with a REST API are independent from each other. The server will not remember past requests, meaning that each request must include all of the information required.

  4. Layered System

There can be additional servers, or layers, between the client and server that can provide security and handle traffic.

  5. Cacheable

REST APIs are created with cacheability in mind: when a client revisits a site, cached data is loaded from local storage instead of being fetched from the server.

  6. Code on Demand

In some cases, an API can send executable code as a response to a client. This principle is optional.
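Here is a small sketch of a client using the uniform HTTP interface from principle 2; the resource URL and fields are made up for illustration.

    const baseUrl = "https://api.example.com/books";

    async function demo() {
      // GET: retrieve a resource
      const book = await fetch(`${baseUrl}/1`).then((res) => res.json());

      // POST: create a new resource; the request carries everything the server
      // needs, since the server keeps no state between calls (principle 3)
      await fetch(baseUrl, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ title: "REST in Practice" }),
      });

      // DELETE: remove a resource (PUT would similarly replace one)
      await fetch(`${baseUrl}/1`, { method: "DELETE" });
    }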

If an API follows these principles, then it is considered RESTful. These rules still leave room for developers to modify the functionality of their API, which is why REST APIs are often preferred for their flexibility. RESTful APIs offer a lot of benefits that I can hopefully cover in my next blog post.

What is REST

What is an API?

REST APIs: How They Work and What You Need to Know

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

Training a Machine to Learn

Machine learning: it’s a buzzword that many people like to throw around when discussing a myriad of tech-related subjects, oftentimes as a way to convey that this product or that service is “state of the art” and “cutting edge” because it uses machine learning. While in some instances that can absolutely be the case, in most modern applications it’s a bit of an oversell. Regardless, teaching a computer to do a task, and then improve at it as more data is given, is no small feat and takes a lot of know-how to do properly and efficiently. In this blog post I hope to share the things I have learned regarding machine learning and how it works, to hopefully help others who find themselves in a similar spot to me: they somewhat understand how it works, but don’t exactly know the specifics behind machine learning.

So how does machine learning work? In its essence, machine learning is a set of techniques that give computers the ability to learn and improve at a task without being specifically programmed to do so. This is what most people familiar with the term understand it to be. But as with many things, we can go deeper. The underlying science of actually having the machine learn its task is pretty complex, and I won’t even pretend that I somehow taught myself the math and logic behind it in the time I spent researching for this blog post. Simply put, to start the process of machine learning you need to make a computer run through its set task over and over again while introducing more data points and allowing it to sort them out on its own.

This is not as hands-off as many people may think. Just because you wrote the initial algorithm doesn’t mean you can just run it on a dataset and call it a day. Not only do you have to procure and categorize the initial training datasets, you also need to guide the machine in its initial learning process. And I’m not talking about a dataset of tens or even a few hundred data points. For any machine learning algorithm to be successful you realistically need thousands of data points in each set. The datasets also need to vary in their content in order to give the algorithm the best chance at learning its task.

Take, for example, a common use of machine learning: image identification. Let’s say you want to teach a computer to identify photos that contain a monarch butterfly. You write your algorithm and now you need to procure a dataset to teach it. What kinds of photos should you train the algorithm on? The simple answer is obviously photos containing monarch butterflies. And while this approach is sound, it leaves the door open for false positives and negatives. Realistically you need a balance of photos containing monarch butterflies, photos not containing monarch butterflies, photos containing things that look like but aren’t monarch butterflies, photos containing other species of butterflies, and so on. Now keep in mind you realistically need thousands of data points to teach a machine to do something, and each data point in the training dataset needs to initially be categorized by a human. So now take each of those individual data variants and multiply them by at least a thousand, and you can see where teaching a machine to do something can become very difficult. This isn’t to say that you can’t teach a machine to do something with much less data; it just won’t be as accurate and reliable as a machine that was taught using a much, much larger dataset.

Now, with all the data that has been given to the monarch-butterfly-identifying machine, it can begin to chew through it, identify data patterns between the images, and begin to form its own method of categorizing the initial training images. There is an entire postgraduate field of study dedicated to what actually happens INSIDE the algorithm, so I will definitely not be covering the ins and outs of that here. Simply put, the algorithm comes up with its own way to categorize these images, and applies that method to any new data that comes in, continuously adjusting its own model with each additional data point. This, in its absolute simplest form, is how many machine learning algorithms work.
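To make the “continuously adjusting its own model” idea a bit more concrete, here is a toy sketch (not how a real image classifier works) of a perceptron-style learner in JavaScript; the features, numbers, and dataset are invented for illustration.

    // Each example: numeric features plus a human-assigned label (1 = monarch, 0 = not).
    const trainingData = [
      { features: [0.9, 0.8, 0.1], label: 1 }, // orange wings, black veins
      { features: [0.2, 0.1, 0.7], label: 0 }, // looks nothing like a monarch
      { features: [0.8, 0.7, 0.2], label: 1 },
      { features: [0.7, 0.1, 0.3], label: 0 }, // orange, but a look-alike
    ];

    let weights = [0, 0, 0];
    let bias = 0;
    const learningRate = 0.1;

    const predict = (features) =>
      features.reduce((sum, x, i) => sum + x * weights[i], bias) > 0 ? 1 : 0;

    // Run through the task over and over, nudging the model after every mistake.
    for (let epoch = 0; epoch < 100; epoch++) {
      for (const { features, label } of trainingData) {
        const error = label - predict(features);
        weights = weights.map((w, i) => w + learningRate * error * features[i]);
        bias += learningRate * error;
      }
    }

    console.log(predict([0.85, 0.75, 0.15])); // hopefully 1: looks like a monarch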

From the blog CS@Worcester – Sebastian's CS Blog by sserafin1 and used with permission of the author. All other rights reserved by the author.

Creating Docker Images With Dockerfiles

Christian Shadis

Docker, a main focus in my Software Construction, Architecture, and Design course, is an essential tool that lets the modern developer completely separate the application being built from the developer’s local machine. Essentially, this allows all the application’s dependencies and configurations to be packaged with the application, independent of the machine the application is being run on. A Docker container is built from an image, which is basically the ‘blueprint’ of a Docker container. Images are easily reusable, and many images can be found for use on the Docker Hub. Images can also be created from scratch using Dockerfiles, which I previously did not understand. In order to improve my Docker skills and gain the ability to create my own Docker images, I chose to research and write about the structure and use of Dockerfiles.

There are situations in which a developer would want to create their own Docker image: maybe they need a specific version of the Ubuntu operating system, maybe specific modifications need to be made to that operating system before the application can run, or maybe many slightly modified versions of the same container must be deployed for the application to run. The developer can address these scenarios by creating a Dockerfile, which specifies everything to be included in the image. Any time the image needs to be used, a container can be created from that image in a single command, removing the need to import dependencies or change the machine’s configuration repeatedly.

In Docker Basics: How to Use Dockerfiles, Jack Wallen first describes the necessity and use cases of Dockerfiles. He then supplies a list of keywords that can be used in a Dockerfile, such as CMD, which sets the default command the container runs when it starts, ENV, which is used to set environment variables, and EXPOSE, which is used to expose networking ports in the Docker image. From there, Wallen proceeds to demonstrate the process of creating a Dockerfile from scratch in a text editor. Finally, Wallen outlines the process for building the usable image from the Dockerfile. The article concludes with a short section including a second worked example, this time building a Dockerfile for a different operating system.
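For reference, here is a minimal sketch of what such a Dockerfile and build might look like; the base image, the extra keywords (FROM, RUN, COPY, WORKDIR), the port, and the commands are my own assumptions for illustration, not taken from Wallen’s article.

    # FROM picks the base image the new image is built on
    FROM ubuntu:20.04

    # ENV sets environment variables available inside the image
    ENV NODE_ENV=production

    # RUN executes commands while the image is being built
    RUN apt-get update && apt-get install -y nodejs npm

    # COPY brings the application code into the image
    COPY . /app
    WORKDIR /app
    RUN npm install

    # EXPOSE documents the networking port the container will use
    EXPOSE 3000

    # CMD sets the default command the container runs when it starts
    CMD ["node", "server.js"]

Building the image and starting a container from it would then be something like: docker build -t my-app-image . followed by docker run -p 3000:3000 my-app-image.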

Knowing how to create containers using pre-built images from Docker Hub is an important first step for a developer getting started with Docker, but the true power of Docker images is realized when the developer has the capability to create a new image with the exact configurations needed. Creating custom images is a skill I expect to use often in my development career, since a large portion of applications in development today use Docker containers.

Reference:

https://thenewstack.io/docker-basics-how-to-use-dockerfiles/

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

JavaScript

In this post I decided I needed to catch up on learning JavaScript for my homework assignments. Having no real prior experience, I needed more than just the simple basics; I needed the things I must know to succeed and not be confused. That led me to the article below.

https://dev.to/deen_john/resources-to-master-javascript-3d0p

I have honestly not done any web development aside from a brief exposure to HTML in high school, and while I did get some experience with JavaScript, I didn’t get enough to retain any memory of it. The article above not only has the fundamentals in the form of multiple links and tutorials, it also contains a plethora of actual content on each subject that is brought up. Often, when searching for articles on JavaScript, I would find just a repository of links, but this one is structured with links followed by an example and an explanation.

Right off the bat, the type system was interesting to learn about; the types themselves are not that different, but the syntax is quite different, and the fact that coercion is even a thing is interesting: you can multiply a string and a number and get a number back due to implicit conversion.

Another interesting thing I learned is that every value that isn’t a primitive value is an object. So var variable1 = 500; holds a primitive value, simply a number. If a variable doesn’t contain a primitive value of any kind, it can be defined as an empty object with {} after the name declaration. There is no declared type like int or short or double; there is just var name = (primitive value) or {empty object}.
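Here is a quick sketch of those points; the variable names are made up for illustration.

    var count = 500;              // primitive number value
    var emptyThing = {};          // not a primitive, so it is an object
    var person = { name: "Sam" }; // objects hold key/value pairs

    console.log(typeof count);      // "number"
    console.log(typeof emptyThing); // "object"

    // Implicit coercion: multiplying a string by a number gives a number.
    console.log("6" * 7);  // 42
    console.log(6 + "7");  // "67" (with +, the number is converted to a string instead)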

Another interesting thing I didn’t expect is that null is treated as an object, while undefined is generally there to show that a variable has been declared but not assigned a value.

I also learned about truthy and falsy values, and in particular I was surprised to learn that the OR operator doesn’t quite work like I expected. Generally, when using OR, there are three conditions I would expect and one I wouldn’t worry about: the first operand is true, the second operand is true, neither operand is true, and lastly both operands are true, with the first, second, and fourth resulting in a TRUE. I was surprised to learn that JavaScript only cares whether the first operand is “truthy”; if it is, it returns that operand, and otherwise it returns the second.
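A quick sketch of that behavior:

    console.log(true || false);   // true  (first operand is truthy, so it is returned)
    console.log("hello" || 0);    // "hello" (not a boolean, just the first truthy value)
    console.log(0 || "fallback"); // "fallback" (first operand is falsy, so the second is returned)
    console.log(0 || "");         // ""  (neither is truthy; you still get the second operand)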

This article helped me a lot and will at least give me a head start so that I’m not confused when I use OR expecting a positive result and get the second operand instead. It seems unintuitive in some regards, but it may just seem that way because I have been so used to other languages’ operators for such a long time.

From the blog CS@Worcester – A Boolean Not An Or by Julion DeVincentis and used with permission of the author. All other rights reserved by the author.

Blog # 4 Design Patterns (Accidental and Intentional)

During my career as a software engineer, design patterns were heavily used by some clients and not used at all by others. I was exposed to them early on by a developer at UNISYS while developing a Windows-based control system for the NTSB. At that time, there were class library extensions in the marketplace which were commonly used to enhance UI and database access development. We had the opportunity to code our own classes using their framework, as well as modify their code. In order to use their code in a proper way, you had to really understand it, which involved studying the extended class library code and documentation. While doing so, I became familiar with four design patterns they had heavily used: the Singleton, Observer, Iterator, and Builder patterns.

My architecture class has shown me three new patterns I am very impressed with: the Memento, Strategy, and Facade patterns. The class focused primarily on the Strategy pattern, with some concentration on the Singleton, Memento, and Façade patterns.

I will focus on the benefits of the Strategy pattern for this blog entry because I found it to be eye-opening. It is very powerful and useful, yet I hadn’t known of it specifically prior to the class. It turns out I had used this pattern many times without knowing it.

Many projects I have worked on had object-oriented class libraries, where a solid grasp of OOP programming skills was necessary in the mid-to-late 90s and early aughts. Interface-based technology grew into the class libraries and class library extensions, where interfaces became a really useful tool. There were times I coded directly from an interface (when trying to implement a specific API, or to a specification requiring a specific interface), and other times I would purely use the object hierarchy. There were times I had probably used the Strategy pattern BY ACCIDENT! By knowingly combining interfaces and OOP with the use of the Strategy pattern, a much better design can be constructed where you get the best use of both technologies.

I find the best way to concisely describe the power of this pattern is to quote from the “Better Programming” website [1]:

Advantages of the Strategy pattern

1. It’s easy to switch between different algorithms (strategies) in run-time because you are using polymorphism in the interfaces.

2. Clean code results, because you avoid using conditional-infested code (not complex)

3. More clean code, because you separate the concerns into classes (a class to each strategy).

I wish I had had a clearer understanding of this pattern years ago. The example in the homework using Duck classes was particularly easy to follow, and really showed the symbiosis of interface-oriented and object-oriented methodologies in a clear way, stressing the benefits of coding for separation of concerns in a well-structured project.
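As a rough sketch of the idea (written in JavaScript here rather than the language used in the homework, with illustrative names), the duck delegates its flying behavior to an interchangeable strategy object:

    const flyWithWings = { fly: () => console.log("I'm flying with wings!") };
    const flyNoWay     = { fly: () => console.log("I can't fly.") };

    class Duck {
      constructor(name, flyBehavior) {
        this.name = name;
        this.flyBehavior = flyBehavior; // the interchangeable strategy
      }
      performFly() {
        this.flyBehavior.fly(); // delegate to the strategy; no conditionals needed
      }
      setFlyBehavior(flyBehavior) {
        this.flyBehavior = flyBehavior; // swap algorithms at run time
      }
    }

    const mallard = new Duck("Mallard", flyWithWings);
    mallard.performFly();             // I'm flying with wings!
    mallard.setFlyBehavior(flyNoWay);
    mallard.performFly();             // I can't fly.

Because each behavior lives in its own object, switching strategies at run time and keeping conditional logic out of the Duck class follow directly, which is exactly what the three advantages above describe.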

Design Patterns are broken up into Creational, Structural and Behavioral classifications as a way to direct the designer to the appropriate top-level category.

References:

1. http://www.betterprogramming.pub – Carlos Cabarello.

From the blog cs@worcester – (Twinstar Blogland) by Joe Barry and used with permission of the author. All other rights reserved by the author.

Microservices vs Monolithic

Microservices seem to be a very popular concept in software design, and it is one I have trouble fully understanding. Because of this, I wanted to spend some time developing my understanding. Chris Richardson’s “Introduction to Microservices” is the first article in a series of seven discussing microservices; it compares microservices architecture to monolithic architecture, and it is where I began looking.

Monolithic architecture typically involves one application at its core built to handle a multitude of tasks. This application may branch out into APIs, adapters, or UIs that allow the application to access objects outside of its scope. This application, along with its more modular pieces, is packaged and deployed as a single monolith.

This approach to software design has some inherent problems. Applications tend to grow in size, scope, and lines of code over time. In a monolithic application, code can easily become too large and too complex to efficiently deal with. The resulting applications are often too complicated for a single developer to understand, which also complicates further development. Larger applications also suffer from longer start-up times. Continuous deployment becomes difficult since the entire project needs to be redeployed. It’s difficult to scale monolithic applications as well as employ new frameworks or languages.

Many of the issues monolithic architecture is prone to can be solved by instead adopting microservices architecture. Microservices architecture involves splitting an application into connected but smaller services. These services are typically dedicated to a single feature or functionality and are usually each run in an individual Docker container or VM.

Using microservices architecture comes with a number of benefits. Splitting a complicated, monolithic application into smaller pieces makes that application significantly less complicated. It becomes easier for individual developers to understand the services, thus making development faster. This also allows a team of developers to focus on a single service. Teams can become more familiar with the service they are working on, and maintenance becomes more manageable. Splitting an application into microservices also makes that application easier to scale.

Despite its benefits, microservices architecture is not without its drawbacks. Splitting an application into microservices can introduce a new complexity. These services need to talk to each other, so a mechanism to send and receive these messages must be put in place. It is also more complicated to test an application that uses microservices. A test class for a monolithic application would only need to start that application, but a test class for a microservices application would have to know which services it needs to work.
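As a minimal sketch (not taken from Richardson’s article) of how two such services might talk to each other, here are two tiny Node.js programs using Express; the service names, route, and port are made up for illustration.

    // inventory-service.js
    const express = require("express");
    const app = express();

    app.get("/items/:id", (req, res) => {
      // A real service would query its own data store here.
      res.json({ id: req.params.id, inStock: 42 });
    });

    app.listen(3001);

    // order-service.js (a separate process, deployed independently)
    // It has to know how to reach the inventory service, e.g. by URL.
    async function checkStock(itemId) {
      const response = await fetch(`http://localhost:3001/items/${itemId}`);
      const item = await response.json();
      return item.inStock > 0;
    }

Even this toy example shows the new complexity the article warns about: the order service now depends on the inventory service’s address and availability, and testing it means either running that service or faking its responses.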

I chose this article because I appreciated how thorough the information was. I plan on reading the remaining six articles in Richardson’s series. I had not thought about how connecting microservices together might complicate a project. I will be thinking about the pros and cons of microservices as I continue into my career.

From the blog CS@Worcester – Ciampa's Computer Science Blog by robiciampa and used with permission of the author. All other rights reserved by the author.