
Agile Software Development

This week on my CS Journey, I want to focus on Agile software development and its methodologies. Agile methodology is a type of project management process, mainly used for software development, that evolves through the collaborative effort of cross-functional teams and their customers. Scrum and Kanban are two of the most widely used Agile methodologies. Today I want to focus mainly on Scrum. Recently I saw that employers are looking for candidates who have experience in Scrum and Agile development, so it is important that we learn more about it.

Scrum is a framework that allows for more effective collaboration among teams working on complex projects. It is a management system that relies on step-by-step development. Each cycle consists of two- to four-week sprints, where each sprint's goal is to build the most important features first and come out with a potentially deliverable product. Agile Scrum methodology has several benefits: for example, it encourages products to be built faster, since each set of goals must be completed within each sprint's time frame.

Now let's look at the three core roles that Scrum consists of: the Scrum Master, the Product Owner, and the Scrum Team. The Scrum Master is the facilitator of the Scrum. In addition to holding daily meetings with the Scrum Team, the Scrum Master makes certain that Scrum rules are being enforced and applied; other responsibilities include motivating the team and ensuring that the team has the best possible conditions to meet its goals and produce deliverable products. Secondly, the Product Owner represents the stakeholders, who are typically customers. To ensure the Scrum Team is always delivering value to stakeholders and the business, the Product Owner determines product expectations, records changes to the product, and administers the Scrum backlog, which is a detailed and continuously updated to-do list for the Scrum project. The Product Owner is also responsible for prioritizing goals for each sprint based on their value to stakeholders. Lastly, the Scrum Team is a self-organized group of three to nine developers who have the business, design, analytical, and development skills to carry out the actual work, solve problems, and produce deliverable products. Members of the Scrum Team self-assign tasks and are jointly responsible for meeting each sprint's goals.

Below I have provided a diagram that shows the structure of the sprint cycles. I think understanding the Agile methodologies is helpful because they help teams and individuals at most major companies effectively prioritize work and features. I highly recommend visiting the websites below; they provide detailed explanations of how a Scrum cycle works.

 

Sources: https://zenkit.com/en/blog/agile-methodology-an-overview

https://www.businessnewsdaily.com/4987-what-is-agile-scrum-methodology.html

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

Data Persistence Layer (12/1/2020)

When it comes to management and figuring out how to run a data tool, think of it as a warehouse or a retail storefront. That, in a sense, is how a data persistence layer (DPL) works. A DPL involves six different layers that data must pass through to reach its final destination: semantic, data warehouse, persistent, transform, staging, and source. To be more specific, a persistent layer contains a set of tables that are in charge of gathering and recording information. This is done so we understand the full history of changes to the data of a table or query. In fact, when there are multiple source tables or query files, they can be staged in a different table or in a different view in order to meet the transform layer's requirements.

Something crucial to learn about a DPL is that in many cases you can consider its tables bi-temporal. You might be wondering what a bi-temporal table is. It is a table that tracks data along two timelines at once, recording both when a fact was true in the business and when the warehouse learned about it. This all comes down to how useful a DPL is. With a DPL you can include and store all of the history you need, all of the time, instead of how many other tools do it, where the history breaks up exactly when it is needed the most. This will also save you a tremendous amount of time, due to the flexibility to drop and rebuild your data set with the same amount of data history at any point in time.

A DPL also offers ETL performance benefits: you are allowed to make changes to the processed data, and the DPL is aware of what has been altered and when it was altered, in order to make the appropriate changes to the data set. The main reasoning behind switching to a data persistence layer is to allow you to analyze data in a timely manner while still staying accurate. It also allows you to have add-ons, like churn analysis or other third-party software, to boost the persistence layering, unlike other data processors. Another feature is the ability to provide evidence: for example, the persistent layer will give you information that points directly to, and explains, why something has changed. It also gives you the power of auditing, where you are able to organize, maintain, and assemble an accurate picture of past events to understand what truly happened in the data.
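To make the bi-temporal idea concrete, here is a minimal sketch in Python. All names, keys, and dates are made up for illustration, and a real DPL would implement this in the warehouse's SQL layer; the point is just that each row carries two timelines, one for when the fact was true in the business and one for when the warehouse recorded it.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class BiTemporalRow:
    """One row in a persistent (bi-temporal) layer table."""
    key: str                         # business key, e.g. a customer id
    value: str                       # the attribute being tracked
    valid_from: datetime             # when the fact became true in the business
    valid_to: Optional[datetime]     # when it stopped being true (None = still true)
    recorded_from: datetime          # when the warehouse learned about it
    recorded_to: Optional[datetime]  # when the record was superseded (None = current)

# Instead of updating in place, a change closes the old row's valid period and
# appends a new row, so the full history can be replayed at any time.
history = [
    BiTemporalRow("cust-1", "Boston", datetime(2019, 1, 1), datetime(2020, 6, 1),
                  datetime(2019, 1, 2), None),
    BiTemporalRow("cust-1", "Worcester", datetime(2020, 6, 1), None,
                  datetime(2020, 6, 2), None),
]

def as_of(rows, valid_at, recorded_at):
    """What did we believe, as of `recorded_at`, about the value at `valid_at`?"""
    for r in rows:
        if (r.valid_from <= valid_at and (r.valid_to is None or valid_at < r.valid_to)
                and r.recorded_from <= recorded_at
                and (r.recorded_to is None or recorded_at < r.recorded_to)):
            return r.value
    return None

print(as_of(history, datetime(2020, 1, 1), datetime(2021, 1, 1)))  # -> Boston
```

Because a change appends a new row instead of overwriting the old one, dropping and rebuilding a downstream data set never loses history, which is exactly the flexibility described above.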

From the blog Nicholas Dudo by and used with permission of the author. All other rights reserved by the author.

Software Development for Tuning (12/1/2020)

In today's modern economy, everything we touch, unless it's a pull or push door, runs on software and hardware. Modern-day vehicles are no different; we have come a long way from the typical car having an engine crank. Instead, we now have a push-to-start feature where, within seconds, your car will start up, and this is all run off of computers. These computers take crank and camshaft signals from sensors in order to start the car. In layman's terms, they need computer hardware and software in order to make this happen. What we will be diving into is taking the stock parameters that the vehicle was given from the factory and altering them to either (a) increase fuel economy, (b) increase power, or (c) increase the longevity of the vehicle itself. The possibilities are endless when it comes to vehicles and the ability to make little or big changes to see a maximum gain.

Many might wonder how tuning works and how it is even possible. Tuning is not easy by any stretch of the imagination; you must know precisely what you're doing or you will fry the computer in your vehicle. In a sense, think of it as changing parameters in your OS. It is not like Linux, which is open source and lets you make as many changes as you please without needing keys to access the OS. A vehicle's operating system is locked down more in the sense of a Windows or Mac OS system, where you need certain keys or software to unlock the computer, or even to read it when servicing the vehicle. This is where a tuner comes in: in other words, a software developer who goes into the parameters of the vehicle and makes changes.

For example, consider ignition timing, which is the amount of time before the piston reaches top dead center (TDC), the point when the top of the piston reaches the top of the cylinder. What a tuner can do is go into the computer and advance this, so instead of firing at 0 degrees before TDC we can change it to 12 degrees before TDC. This optimizes how the fuel gets burnt in the combustion chambers, letting us improve the performance of the engine. With tuning, you're also allowed to go into different modules, like the transmission module, which lets you write code so the shift solenoids open up much later, holding the RPM band higher, giving you more efficiency, and allowing better cooling opportunities. Another parameter you're allowed to access is the body module, which controls everything inside the interior of the vehicle. Now, you must be careful with this, and it is one of the main reasons why these computers are closed source: it makes them more secure and much harder to hack.
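To picture what "changing parameters" means in software terms, here is a purely illustrative Python sketch. Nothing here reflects any real ECU's interface; the table, the numbers, and the safety limit are all hypothetical.

```python
# Hypothetical stock ignition advance (degrees before top dead center),
# keyed by engine RPM, as a tune file might represent it.
stock_timing = {1000: 8.0, 2000: 12.0, 3000: 16.0, 4000: 20.0, 5000: 22.0}

MAX_SAFE_ADVANCE = 30.0  # made-up limit standing in for a knock-safety bound

def advance_timing(table, extra_degrees):
    """Return a new timing table advanced by extra_degrees, clamped to the limit."""
    return {rpm: min(deg + extra_degrees, MAX_SAFE_ADVANCE)
            for rpm, deg in table.items()}

tuned_timing = advance_timing(stock_timing, 4.0)
print(tuned_timing)  # {1000: 12.0, 2000: 16.0, 3000: 20.0, 4000: 24.0, 5000: 26.0}
```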

From the blog Nicholas Dudo by and used with permission of the author. All other rights reserved by the author.

REST API Design

This week on my CS Journey I want to focus on REST API design. In my last blog, I talked about how an API request works and how to read API documentation and use it effectively. In this blog, I will briefly emphasize the key constraints of REST API design. There are six important constraints: Client-Server, Stateless, Cache, Uniform Interface, Layered System, and Code on Demand. Together, these make up the theory of REST.


Starting with the client-server constraint: this is the concept that the client and the server should be separate from each other and allowed to evolve individually and independently. In other words, a developer should be able to make changes to an application, whether on the data-structure or database-design side, without impacting the client side. Next, REST APIs are stateless, meaning that calls can be made independently, and each call contains all the data necessary to complete itself successfully. The next constraint is Cache: since a stateless API can increase the number of requests and must handle large loads of calls, a REST API should be designed to encourage the storage of cacheable data. That means that when data is cacheable, the response should indicate that the data can be stored for up to a certain time.
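As a small illustration of the stateless and cache constraints, here is a sketch in Python using the requests library against a hypothetical endpoint: every call carries its own credentials, so the server keeps no session state, and the response can advertise how long it may be cached.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint and token, used only for illustration.
url = "https://api.example.com/v1/orders/42"

# Stateless: this one call contains everything the server needs (including
# the credentials), so nothing has to be remembered between calls.
response = requests.get(url, headers={"Authorization": "Bearer <token>"})

# Cacheable: the response can tell the client how long the data may be
# stored, e.g. "max-age=3600" for one hour.
print(response.status_code)
print(response.headers.get("Cache-Control"))
```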


The next constraint is Uniform Interface: having a uniform interface allows the client to talk to the server in a single language. This interface should provide standardized communication between the client and the server, such as using HTTP verbs to perform CRUD (Create, Read, Update, Delete) operations on resources. Another constraint is the Layered System. As the name implies, a layered system is comprised of layers, with each layer having a specific functionality and responsibility. In REST API design, the same principle holds, with different layers of the architecture working together to build a hierarchy that helps create an application. A layered system also helps increase a system's flexibility and longevity, and it allows you to stop attacks within other layers, preventing them from getting to your actual server architecture. Finally, the least known of the six constraints is Code on Demand, which allows code to be transmitted via the API for use within the application. Together, these constraints make up a design that operates similarly to how we access pages in our browsers on the World Wide Web.


Overall, I learned the most important aspects of REST API design, and the blog was certainly helpful for understanding the key constraints. I have only mentioned the most important parts here, so I highly recommend everyone take a look at the source below.

Source: https://www.mulesoft.com/resources/api/what-is-rest-api-design

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

Windows vs. Linux (Week 1 makeup)

As many of us know, there are two main operating systems: the more commonly known Windows operating system, and the second most commonly known, Mac OS. With the two giants out of the way, there is still one more OS that is overlooked by many to this day. Most may know this particular OS as Linux. Linux is an open-source operating system derived from the Unix platform, which is also the foundation of the Mac operating system. Many of you might be wondering what open source means, and it is quite simple: open source stands for a system where anyone can be a developer for it and make their own appropriate changes to the program in order to help or strengthen the operating system. Windows, as many might know by now, is a licensed operating system: in order to install it, you must buy the rights to download and have it on your system, if it is not already installed. Windows is very straightforward and simple to use, whereas Linux will take you a little reverse engineering to fully understand and grasp its full potential.

Both of these operating systems offer pros and cons. Starting with some of Linux's pros: first and foremost, Linux is a free, open-source OS. Linux uses a more traditional monolithic kernel, which is modular in its design, allowing most of its drivers to dynamically load and unload at runtime. One of the cons of the Linux system, though, is that it is still regarded by many as a bare-bones operating system with few features out of the box.

Now, when it comes to actual usage, there is no comparison between the two: Windows beats Linux tenfold, especially since every prebuilt laptop or computer you buy from the store, other than a Mac, will be running Windows at any price point. With Windows you will also usually see many kinds of users, ranging from businesses to developers to kids. Something else to consider with Windows, which could be a double-edged sword, is the fact that Windows is a closed-source system that often has issues running open-source software. With that being said, it is still possible, by downloading third-party apps or software add-ons for Windows, to run open-source software on a Windows machine much as you would on a Linux machine, which allows Windows to be versatile in many ways.

From the blog Nicholas Dudo by and used with permission of the author. All other rights reserved by the author.

Tools of the Trade for a Big Data Analyst (Nov 27th)

When it comes to big data, we must use a specific set of tools in order to complete the job properly; most of the tools of the trade are software and hardware. One of the biggest software tools for analyzing data is Tableau. Tableau is the best at visualizing data: it allows us to explore without interrupting the flow of analysis. Another benefit of Tableau for any data analyst is its use of AI, which predicts outcomes much faster through Tableau CRM. Tableau CRM, a branch of Tableau, allows many sales associates to make the correct decision, which lets us work more efficiently. With this efficiency, we can then spot trends faster and predict outcomes sooner. The AI feature usually gives us the guidance we desperately need so we do not make mistakes, because as humans we are not perfect and can make mistakes, and dumb ones at that. In my research, I decided to go down the rabbit hole of information on Tableau CRM: it allows many data scientists to unify multiple platforms into one, focus on outcomes, automate discovery, and build a different database with just the push of a button.

Like many software programs, they all have their pros and cons. A similar software program is SAS Visual Analytics. This kind of software is most helpful for sharing and analyzing data, as well as presenting it in a clean and formal manner. The main function of this specific software is to serve companies in need of powerful software that can tie pieces of data together into one file in order to showcase it at a large event. In many ways, I would consider this software to be like Microsoft Excel and Word in one program, where you are able to show different analyses in one meeting to make sure different departments understand how they all work together like a set of gears. With that being said, there are multiple pros and cons to SAS, one pro being that it allows a large number of users under one simulation: we could have multiple presentations all linked into one, where SAS takes the data you have selected and puts it into one easy-to-read module, instead of your having to sift through many different simulations. The second pro of SAS is that it allows you to have a customizable portfolio that splits a business up into different aspects. For example, take a car dealership with sales, parts, and service departments: to see how much the dealership makes in revenue, we need to divide it into sections, while when presenting to the CEO or sales team we need software that merges everything into one, which is where SAS comes into play.

From the blog Nicholas Dudo by and used with permission of the author. All other rights reserved by the author.

REST APIs

This week on my CS Journey, I want to look closely at the topic of REST API design. I know we have been doing several activities on the topic in class, and the homework assignment is associated with it; however, I wanted to be very knowledgeable on the topic, so I decided to do more research. REST is an acronym for Representational State Transfer. A REST API is a way for two computer systems to communicate over HTTP, in a similar way to how web browsers and servers do. Let's start by looking at what an API is. An API is an Application Programming Interface: a set of rules that allow programs to talk to each other. The developer generally creates the API on the server and allows the client to talk to it, and REST determines what that API should look like.

Now let's look at the anatomy of a request. An API request has four main parts: the endpoint, the method, the headers, and the data (or body). When an API interacts with another system, the touchpoints of that communication are considered endpoints. Each endpoint is the location from which the API can access the resources it needs to carry out its function. APIs work using "requests" and "responses": each call to a URL is a request, while the data sent back to you is a response.

Generally, there are five types of methods: GET, POST, PUT, PATCH, and DELETE. These methods provide meaning for the request you're making, and they are used to perform four possible actions: Create, Read, Update, and Delete, also known as CRUD. Next, the headers are used to provide information to both the client and the server. They can be used for many purposes, such as authentication and describing the body content. Lastly, the body, or the data, contains the information you want to send to the server. This option is only used with POST, PUT, PATCH, or DELETE requests.
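Here is a short Python sketch using the requests library that shows all four parts at once; the endpoint, token, and fields are hypothetical.

```python
import requests  # pip install requests

endpoint = "https://api.example.com/v1/users"  # 1. the endpoint
headers = {
    "Authorization": "Bearer <token>",         # 3. headers: authentication
    "Content-Type": "application/json",        #    and body format
}
data = {"name": "Derin", "role": "student"}    # 4. the data / body

# 2. the method: POST, because we are creating a resource.
response = requests.post(endpoint, headers=headers, json=data)

print(response.status_code)  # e.g. 201 if the resource was created
print(response.json())       # the response body sent back by the server
```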

Overall, I learned a lot from this blog. The sources I used explain the topic very well, and I highly recommend everyone check them out, because they have a variety of examples and documentation that you need in order to read API documentation and use REST APIs effectively. They also go deep into the methods and what each kind of request means. I think it is very important to understand these concepts, because companies all over the world are using APIs to transfer vital information, processes, transactions, and more.

 

Sources: https://www.smashingmagazine.com/2018/01/understanding-using-rest-api/

https://www.sitepoint.com/developers-rest-api/

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

What is a REST API?

 

API is short for Application Programming Interface, which describes a class library's characteristics or how to use it. Your personal library may contain "API documentation" of its available functionality.

A REST API in API Gateway is a collection of resources and methods integrated with back-end HTTP endpoints, Lambda functions, or other AWS services. You can use API Gateway features to help you with all aspects of the API lifecycle, from creation through monitoring your production APIs.
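As a rough sketch of what that looks like in code, the AWS SDK for Python (boto3) can create an API Gateway REST API. The name below is made up, and this assumes AWS credentials are already configured locally.

```python
import boto3  # AWS SDK for Python: pip install boto3

# Create an empty REST API in API Gateway; you would then wire it up to
# back-end integrations such as Lambda functions or HTTP endpoints.
client = boto3.client("apigateway", region_name="us-east-1")

api = client.create_rest_api(
    name="demo-api",  # hypothetical name
    description="Example REST API created from code",
)
print(api["id"])  # the new API's identifier
```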

API Gateway REST APIs use a request/response model, where a client sends a request to a service and the service responds synchronously. This kind of model suits many different kinds of applications that depend on synchronous communication.

When many people refer to API documentation these days, they often mean an HTTP API that shares your application's data over the web. For example, Twitter provides an API that allows users to request tweets in a specific format, to easily import them into their own applications. This is where the HTTP API is potent: it can mix and match data from multiple applications into a combined application, or create an application that enhances the experience of using other people's applications.

In effect, it is an interface that allows us to view, create, edit, and delete resources.

REST is shorthand for Representational State Transfer, which was proposed by Roy Fielding to describe a standard way of creating an HTTP API. He found that the four common behaviors (view, create, edit, and delete) could all be mapped directly to implementations in HTTP.

The different HTTP methods:

GET

POST

PUT

DELETE

OPTIONS

HEAD

TRACE

CONNECT

Most of the time, when you're browsing pages in your browser, you're using the HTTP GET method. The GET method is only used when you request resources from the Internet. When you submit a form, you often use the POST method to send data back to the site. As for the other methods, some browsers may not fully implement them at all. But that's fine for our use: we have many HTTP methods to choose from to help describe these four behaviors, and we will use client libraries that already know how to use the different HTTP methods.
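A client library makes that mapping very direct. Here is a minimal Python sketch with the requests library against a hypothetical resource URL:

```python
import requests  # pip install requests

base = "https://api.example.com/articles"  # hypothetical resource URL

# The four behaviors map directly onto HTTP methods:
created = requests.post(base, json={"title": "Hello"})      # create
fetched = requests.get(f"{base}/1")                         # view / read
updated = requests.put(f"{base}/1", json={"title": "Hi"})   # edit / update
deleted = requests.delete(f"{base}/1")                      # delete

for r in (created, fetched, updated, deleted):
    print(r.request.method, r.status_code)
```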

Rest API benefits:

Resource-oriented and easy to understand.

To read something, you use GET (GET is safe and does not modify the server's resources); to create something, you use POST (POST is unsafe); to update something, you use PUT (PUT is idempotent); and to delete something, you use DELETE (DELETE is idempotent).

Traditional CRUD would require four different interfaces, but a REST API requires only one, distinguishing the different requests by their HTTP method.

Source:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-from-example.html

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.

What’s the difference between JavaScript and Java?

What is JavaScript?

It's a scripting language that runs in a browser, and JavaScript can do almost anything on a Web page:

HTML can be manipulated, providing a tool to change HTML at runtime. Events can be attached and executed, in line with the idea of event-oriented programming. Data-validation functions can verify the validity of form data when a form is submitted. It can operate the client's browser: going forward, going backward, refreshing, jumping to a page, opening a new window, printing, and so on. Cookies can be created and used.

What is Java?

Java is a programming language introduced by Sun. It is an interpreted language with syntax rules similar to C++. Java is also a cross-platform programming language. A program written in Java for the web is called an "Applet" (a small application program). After the compiler compiles it into class files, it is stored on a WWW page and marked up in the HTML file accordingly. As long as Java client software is installed, the client can run an "Applet" directly over the network. Java is well suited to enterprise networks and Internet environments and is now one of the most popular and influential programming languages on the Internet. Java has many remarkable advantages, such as simplicity, object orientation, distribution, interpretability, reliability, security, structural neutrality, portability, high performance, multithreading, and dynamism. Java dispenses with C++ features that do more harm than good, along with many that are rarely used. Java can run on any microprocessor, and programs developed in Java can be transmitted over the network and run on any client.

Different data types

Java has eight primitive data types: byte, short, int, long, float, double, char, and boolean, while JavaScript has three basic data types: number, string, and boolean.

In addition, there are differences in Java and Javascript variables.

They are positioned differently

Unlike Java, which is a fully object-oriented programming language that requires you to design objects before you can write code, JavaScript is an object-oriented scripting language that provides developers with built-in objects, making it easier and less time-consuming to use.

Link in different ways

Unlike Java, which uses static linking, where object references must be resolved at compile time and the compiler needs to implement strong type checking, JavaScript uses dynamic linking, where object references can be checked at run time.

Different uses

The most essential difference between them is in their usage. At present, Java is widely used on PCs, mobile devices, the Internet, data centers, and so on, while JavaScript is mainly used to embed scripts into HTML pages, read and write HTML elements, control cookies, and so on.

Java and JavaScript have different strengths and different specialties. The Java arena is programming, while JavaScript’s heart is in Web pages, where it can do almost anything.

Sources:

https://www.upwork.com/resources/java-vs-javascript-what-is-the-difference

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.

How Does Docker Assist a Big Data Analyst?

As we dive deeper into the semester, we start to learn more tools that will assist us with open-source work. For this particular week, I looked into how Docker can assist someone in a big data concentration like myself. While doing my research, I found out that Docker allows us to put data or large files into containers, which lets us deliver to, and respond to, the customer much quicker. This enables not only the customer but also the data scientist to have a more organized platform for transferring information. This is valuable because, instead of having to go through multiple platforms to transfer information, with Docker you are able to use one platform to put the information into a container, which will then transfer it to the client. The power of Docker allows big data scientists to be self-sufficient and build effective models, which can be tested and modified on multiple occasions without having to change the main structure of the data. Docker also allows the development systems and the production environment to be consistent and monitored under the same platform, whereas prior to Docker we were unable to do such a thing because the environments would always switch up and not be uniform. Finally, more systems should incorporate Docker into their platforms, because it allows us to package applications and their dependencies to be deployed as one single unit.
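As a small, hedged example of that packaging idea, the Docker SDK for Python can launch a job inside a pinned environment; the image and the one-line "job" below are just stand-ins for a real data-analysis step.

```python
import docker  # Docker SDK for Python: pip install docker

# Assumes the Docker daemon is running locally. Running the job inside a
# container means the Python version and dependencies travel with the job
# instead of depending on whatever is installed on the host machine.
client = docker.from_env()

output = client.containers.run(
    "python:3.9-slim",                          # image pins the environment
    ["python", "-c", "print(sum(range(10)))"],  # stand-in for a real job
    remove=True,                                # clean up the container afterward
)
print(output.decode())  # -> 45
```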

From the blog Nicholas Dudo by and used with permission of the author. All other rights reserved by the author.