Author Archives: Jared Moore

Your First Language

No matter the area, everyone must start from somewhere. In Apprenticeship Patterns, the author addresses this concern for learning to program in his pattern “Your First Language.” Your first language will shape your career direction, so it is essential that your initial language aligns with your desired career path. For instance, an individual who aspires to be a game developer should learn C++, whereas someone interested in data science should learn Python as their first language. For someone who wants to be a software developer more generally, the first language is flexible; it should simply be one that can solve as many of the problems their work demands as possible. The author also suggests using tests frequently while learning your first language to check your understanding of the language over time.

While reading this pattern, I reflected on how I learned my first language, JavaScript, using NodeJS, and I personally related to the pattern’s recommendations. This pattern is beneficial for individuals who have yet to learn a programming language or are still on the fence about which one to dive into more deeply. I found it interesting that the pattern recommends focusing on learning testing libraries to check your knowledge of the language while you are also learning to program in it.

Doing this would be challenging at first but would prove beneficial in the long term because of how necessary testing is in the software world. I have created software in the past while completely excluding testing from the project, both from a lack of knowledge of how to test and from not understanding the long-term importance of testing. If I had known about this pattern earlier, I would have incorporated testing into the process of learning my first language.
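As a small illustration of what that could have looked like, here is a minimal sketch of the kind of self-check a beginner could write while learning JavaScript with Node.js, using only the built-in assert module (the add function is just a hypothetical example):

```javascript
// A tiny self-check while learning the language, using Node's built-in assert module.
const assert = require("assert");

// A first function written while learning JavaScript.
function add(a, b) {
  return a + b;
}

// Small tests confirm my understanding of how the function (and the language) behaves.
assert.strictEqual(add(2, 3), 5);
assert.strictEqual(add("2", 3), "23"); // string concatenation, a JavaScript quirk worth learning early
console.log("All checks passed");
```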

I slightly disagree with the pattern’s suggestion that you choose your first language based on whether you have someone in your network who can act as a teacher to guide you while learning it. That would be beneficial in some situations, but if someone cannot find a person who knows the language they want to learn, I do not think this should be a limiting factor. There are many online resources and forums that can stand in place of a single person acting as a teacher or guide.

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns Reflection

The author frames software development as a craft rather than strictly a science. A craft is an art or skill where tools, methods, and knowledge are required to become an expert. He gives various examples that show how craftsmanship is needed to create well-designed software, and I agree with this point of view. I believe software development should be approached as more of a craft, especially when it comes to how programming is learned through apprenticeship.

Rather than through traditional school-based education, programmers would learn from experienced people, not from books and assignments where one’s ability is determined by an accumulation of grades.

The goal should not be an education system that creates a class of ‘programmers’ with general knowledge of computer science and the basics of programming, but an environment that helps each student master the craft of software development. We shouldn’t focus only on code; we should also focus on improving how we approach programming.

An area where I see this approach put into practice is coding boot camps. These are usually fast-paced programming courses that last less than a year. The classes are taught by individuals who have deep knowledge of the field and years of experience as professional programmers, instructors who have worked in industry directly with the tools and technologies they teach. It is also an environment where learning programming is more hands-on: creating projects is the centerpiece of a coding bootcamp. The curriculum emphasizes practical knowledge far more than what is taught in the traditional college environment, relating directly to what students will work on in a future job. I find less use in the existing four-year college degree model of education, and I am not the only one, as rates of college enrollment are dropping year after year. Ultimately, a college degree acts as a form of accreditation, but the value of a degree is declining with time.

To me, the most relevant chapter was Chapter 4, “Accurate Self-Assessment.” It is common to learn just enough to accomplish what needs to be done and not extend your knowledge any further. Learning something new takes effort, and it is much easier to remain complacent with the technologies we already understand. The author also suggests comparing yourself to who you were yesterday and avoiding comparisons to someone else. The benefit of this approach is that you identify your growth over time, and once you realize the distance between who you were yesterday and who you are today, you can make the changes needed to reach the level of improvement you want.

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

Thea’s Pantry User Stories

Thea’s Pantry is a custom pantry management system for Worcester State University. For developers to correctly design an application that meets the needs of the end user, user stories are created. These stories are a generalized overview of the application workflow from the perspective of a type of user. Staff and Administrator are the two identity roles, where Administrator has all permissions, including those of the Staff role. The Staff role is responsible for handling interactions with guests visiting the pantry and updating inventory when donations come in. Administrators are responsible for monitoring inventory levels and generating monthly reports for the Worcester County Food Bank. I chose to examine the user stories of the Thea’s Pantry project because I am interested in how the application is operated by its users and the features it provides.

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

All About LibreFoodPantry

The mission of LibreFoodPantry is to offer free and open-source software to manage the logistics of operating a food pantry. The project is developed using agile values and adheres to FOSSisms, a collection of maxims about open-source software development. The LibreFoodPantry code of conduct does not tolerate discrimination of any form, and the project operates with a strict equity policy. Contributed code must follow the GPLv3, attributions must follow CC BY-SA 4.0, and each sign-off follows the Developer Certificate of Origin 1.1 schema. The LibreFoodPantry acknowledgments detail the sponsors, software, clients, people, and institutions that make the project possible; a major sponsor is AWS, which provides free hosting for the project. I chose to write about these details to familiarize myself with the project’s development process.

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

Vue.js Breakdown

Traditional web applications function by having individual pages that need to be refreshed whenever data has to be updated or submitted. You can likely recall using a website where, each time a form was submitted or a link was clicked, you had to wait for the data to process before a new page appeared a few seconds later. This process feels cumbersome and discourages use of the service.

Modern web applications are single-page, meaning data is dynamically added to and removed from the HTML without the page needing to be redownloaded. To do this, a virtual DOM is created through JavaScript and rendered as HTML on the page. When the virtual DOM changes, the actual DOM is updated only where needed, without reloading everything. This change in design improves speed, reduces the amount of data to load, and improves the user experience. Most services used online today are single-page applications, and when they are not, they feel old and outdated.

Three main web frameworks are used to create single-page applications: React.js, Angular, and Vue.js. Of these, Vue has the fastest-growing developer userbase. A main benefit of Vue is that it is an easy-to-learn framework. In Vue, the virtual DOM supports two-way data binding. To dynamically control the webpage, Vue uses user-created components that are linked together in a hierarchy. Each component is abstracted and can have data passed to it as props so that it displays specific data. This feature allows custom HTML components to be reused, reducing repeated code and unmanageably long HTML files. Another benefit of Vue components is that they can be easily unit tested, as only a small portion of the web application is tested at a time.

Each component can contain state variables, which are like JavaScript variables tied to an element on the webpage. When a state variable changes, its value is updated on the webpage. The process of watching a variable for a state change, updating the virtual DOM, and then rendering the update to the webpage is called data binding. This process happens nearly instantaneously and is what makes Vue-built applications feel seamless and snappy.
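To make these ideas concrete, here is a minimal sketch of a Vue single-file component (the component and field names are hypothetical, not from any particular project); it accepts a prop from its parent, keeps a state variable, and uses v-model for two-way data binding:

```vue
<!-- GreetingCard.vue: a hypothetical component sketch -->
<template>
  <div>
    <!-- the prop controls what this particular instance displays -->
    <h2>{{ title }}</h2>
    <!-- v-model creates two-way binding between the input and the state variable -->
    <input v-model="userName" placeholder="Enter your name" />
    <!-- when userName changes, only this text is re-rendered, not the whole page -->
    <p>Hello, {{ userName }}!</p>
  </div>
</template>

<script>
export default {
  name: "GreetingCard",
  props: {
    title: { type: String, required: true }, // data passed in by the parent component
  },
  data() {
    return {
      userName: "", // state variable watched by Vue's reactivity system
    };
  },
};
</script>
```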

I have used React.js in the past, and I am familiar with the structure of combining HTML and JavaScript together. I am interested in learning more about Vue and what benefits this framework has to offer. Vue separates the style, structure, and logic of each component, which forces developers to keep their code cleaner than in React.


Source: https://www.altexsoft.com/blog/engineering/pros-and-cons-of-vue-js

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

‘NoSQL’ Comparison

SQL databases use tables and relations to store data, which makes them rigid in how data is managed. Developers are forced to store data in a predefined way according to the table and database specifications. This strictness makes working with the data easier in the future because the data is highly structured: given the table definition, a developer knows the properties of each row. The downside of this rigidity is that making changes and adding features to an existing codebase becomes difficult. To add a field to a single record, the entire table must be altered, and the new field is added to all records in the table. In PostgreSQL, there can be JSON columns where unenforced, semi-structured data can be stored for each record, a workaround for the highly structured nature of SQL databases. However, this approach is not ideal for all situations, and querying data within a JSON field is slower than querying a regular column. SQL databases use less storage on average than NoSQL databases because the data is standardized and can be compressed using optimizations. However, when a SQL database grows, it usually must be scaled vertically, meaning the server running the database must have its specifications upgraded rather than spreading the load across more instances.

NoSQL databases use key-value pairs and nested objects to store data, making them much more flexible than SQL databases. One of the most popular NoSQL databases is MongoDB. In these databases, tables are replaced with collections, and each entry is its own document rather than a row in a table. Document-based storage allows each record to have its own fields and properties, which allows code changes to be made quickly. The downside of having no enforced structure is that required fields can be omitted and code may expect data that is not present on the object. MongoDB addresses the lack of enforcement with schemas, which outline objects with key names and the data types associated with them, ensuring each object in a collection follows the same format. NoSQL databases also scale horizontally easily, easing resource demands by distributing the workload across multiple servers.
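As a sketch of what such a schema can look like, here is a minimal example using the Mongoose library for Node.js (one common way to define schemas for MongoDB; the collection, field names, and connection string are hypothetical):

```javascript
// Requires: npm install mongoose
const mongoose = require("mongoose");

// Outline the shape of each document: key names plus their data types.
const guestSchema = new mongoose.Schema({
  name: { type: String, required: true }, // required fields can no longer be omitted
  visits: { type: Number, default: 0 },
  lastVisit: Date,
});

// Documents created through this model are validated against the schema.
const Guest = mongoose.model("Guest", guestSchema);

async function addGuest() {
  await mongoose.connect("mongodb://localhost:27017/pantry"); // hypothetical connection string
  await Guest.create({ name: "Alice" });                      // would fail if "name" were missing
  await mongoose.disconnect();
}

addGuest().catch(console.error);
```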

I selected this topic to learn more about the different use cases between SQL and NoSQL databases such as MongoDB and PostgreSQL. I will use what I learned on future projects to ensure I select the right database technology for the project I am working on.

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

Server Side and Node.js

The internet could not exist without servers to handle the exchange of data between devices. Given the importance of servers, the software and systems that run on them are equally important. Programming these applications is called server-side development, and it is a large field of computer science.

Most server-side applications are designed to interact with client-side applications and handle the transfer of data. A common form of server-side development is supporting web pages and web applications. A popular emerging platform for web application backends is Node.js, a server-side runtime based on JavaScript.

A benefit of Node.js being based on JavaScript is that the same language can be used on the front end and the back end. Because JavaScript handles JSON data natively, working with data on the server side becomes much easier than in some other languages. As the name suggests, JavaScript is a scripting language, so code for Node.js does not need to be compiled before running. Node.js uses the V8 engine developed by Google to efficiently convert JavaScript into bytecode at runtime. This can speed up development, especially when running smaller files frequently during testing.

Node.js also comes bundled with a command-line utility called Node Package Manager. Abbreviated npm, it manages open-source libraries for Node.js projects and easily installs them into the project directory. The npm package repository is comparable to the Maven repository for Java; however, according to modulecounts.com, npm is over three times larger than Maven, with nearly 1.8 million packages compared to fewer than 500 thousand. Each Node.js project has a package.json file where settings for the project are defined, such as run scripts, the version number, required npm packages, and author information.
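For reference, a package.json file might look roughly like the following sketch (the package name, scripts, and dependency version are only illustrative):

```json
{
  "name": "example-api",
  "version": "1.0.0",
  "author": "Jane Developer",
  "scripts": {
    "start": "node index.js",
    "test": "node test.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}
```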

The majority of Node.js applications share common packages that have become standard frameworks throughout the community. An example is Express.js, a backend web framework that handles API routing and data transfer in Node.js.
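As a small sketch of what Express routing looks like (the route path and port are hypothetical), a minimal server might be:

```javascript
// Requires: npm install express
const express = require("express");
const app = express();

app.use(express.json()); // parse JSON request bodies

// A hypothetical API route: respond to GET /api/status with JSON data
app.get("/api/status", (req, res) => {
  res.json({ status: "ok", uptime: process.uptime() });
});

app.listen(3000, () => {
  console.log("Server listening on http://localhost:3000");
});
```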

At the core of Node.js is the event loop, which is responsible for checking for the next operation to be done. This allows code to be executed out of order, without waiting for an unrelated operation to finish before starting the next one. The asynchronous-by-default nature of Node.js is ideal for web servers, where many users continuously require different tasks with differing execution speeds. When people visit different API routes, they expect the server to respond as quickly as possible and not be hung up on a previous request.
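A small sketch shows this nonblocking behavior (the file name is hypothetical); the second log statement runs before the file read finishes because the read does not block the event loop:

```javascript
const fs = require("fs/promises");

// Kick off a slow file read without blocking the event loop.
fs.readFile("large-file.txt", "utf8")
  .then((contents) => console.log(`Finished reading ${contents.length} characters`))
  .catch((err) => console.error(err));

// This line runs immediately, before the file read completes.
console.log("Still responsive while the file is being read");
```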

I have used Node.js before, but now that we have begun to use it in class, I wanted to learn more about its benefits in server-side development. For me, the most important takeaway is the need to take advantage of Node.js’s nonblocking ability when developing a program. Doing so will improve the speed of the application and increase usability.


Source: https://www.educative.io/blog/what-is-nodejs, http://www.modulecounts.com

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

What is Front-End Development?

During the early days of the web, front-end development was less complex, focusing mostly on static documents, linked pages, and basic design. As time went on, more and more functionality was given to the web browser, making front-end development a major field. Front-end development is the client-side programming of a web application. This type of development focuses on the design and functionality of the webpage and its interaction with the backend components.

The structural components of front-end development are written in HTML, which stands for HyperText Markup Language. HTML defines the layout of the webpage’s elements along with their content and properties. Each HTML element has its own tag name, which separates it from others based on the functionality it provides. For example, a video element provides different functionality than an image element or a header element.

By default, an HTML page is very bland. CSS, or Cascading Style Sheets, is used to style an HTML page, giving each element or group of elements custom properties. If the background color of a website needed to be black and its text white, this would be done using CSS.

Other than some simple animations with CSS, all front-end functionality of a web page comes from JavaScript, a once simple scripting language designed for small HTML modifications that evolved into a large front-end and back-end programming language. One use of JavaScript is to dynamically control HTML elements on the webpage; it also handles making requests to and receiving responses from web servers.
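As a brief sketch of both uses (the element id and URL are hypothetical), JavaScript can update an element on the page and request data from a server like this:

```javascript
// Update an HTML element and request data from a backend.
const heading = document.getElementById("status-message");
heading.textContent = "Loading...";

fetch("/api/items")                     // ask the backend for data
  .then((response) => response.json())  // parse the JSON body
  .then((items) => {
    heading.textContent = `Loaded ${items.length} items`;
  })
  .catch(() => {
    heading.textContent = "Request failed";
  });
```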

Originally, when data was submitted in a web application, the entire page needed to be refreshed to pull an updated HTML document, which was common with PHP. Today, a common design architecture is the single-page web application, where data is synchronized to the webpage without a reload.

Due to the complexity of making a single-page web application, many frameworks exist to make development easier, including React, Vue, and Angular. These frameworks handle linking data to components and binding the data between the server and the client. This way, only the element with updated data needs to be refreshed rather than the entire page. Handling updates in this way makes web applications faster and improves the user experience.

I selected this article to learn a bit more about front-end development and its current features. We will be working on client-side development in this class, and I thought it would be a good topic to look into. I find the history of how the web has developed over the years interesting, and I will use this guide and its details to help me structure my projects in the future.


Resource: https://info340.github.io/client-side-development.html

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

Why Use Gradle?

Gradle is an automation tool that allows developers to quickly run tasks to prepare, compile, and test software. There are a handful of similar automation tools, but Gradle is one of the most popular and best-supported build automation tools. Without a build tool, a developer would have a series of commands that must be manually typed into the terminal before code can be built and run. Doing this is very tedious and does not scale well, especially for larger projects.

The Gradle build process is divided into three phases. The first is initialization, which sets up the environment for the build. Part of preparing a project is fetching its dependencies: a developer declares which libraries their code requires, and Gradle downloads them for the project.

The second phase of the build process is configuration, where Gradle determines what needs to be executed for the software to build. In Gradle, each step is called a task. The order of the tasks is determined during this phase, based on what is defined in the Gradle build file. Each task can depend on another, allowing multistep builds to execute without user intervention. Gradle can also run tests automatically during the build sequence, which lets a developer identify testing issues during development because the build will fail if a test does not pass. The purpose of running tests with Gradle is to identify run-time issues that the Java compiler will not pick up.
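To give a rough idea of where tasks, dependencies, and tests are declared, here is a minimal sketch of a build.gradle file for a Java project (the plugin and dependency coordinates are only examples):

```groovy
plugins {
    id 'java' // adds standard tasks such as compileJava, test, and build
}

repositories {
    mavenCentral() // where declared dependencies are downloaded from
}

dependencies {
    implementation 'com.google.code.gson:gson:2.10.1'           // an example library the code requires
    testImplementation 'org.junit.jupiter:junit-jupiter:5.9.2'  // an example test dependency
}

test {
    useJUnitPlatform() // the build fails if any of these tests fail
}
```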

The final phase is execution, where each task is run by Gradle. At the end of this phase, the user sees the status of their build. A successful build means the tasks ran without error and the software can be run. An unsuccessful build can occur for a number of reasons, but it is usually due to a failed test, an unmet dependency, or a compilation error.

I selected this blog because I wanted to learn more about the Gradle build tool, especially because we will be using it for projects in this class. I am familiar with the build tool Webpack for NodeJS, but I have never used a build tool for Java. There are similarities between them, but the configuration for each varies greatly.

One difficulty I have had with Java is dealing with dependencies. In other languages such as NodeJS or Python, there are command-line tools such as npm and pip that make installing libraries easy. I will definitely use Gradle in my next project for managing dependencies, as until now I had yet to find a comparable solution for Java.


Reference https://docs.gradle.org/current/userguide/what_is_gradle.html

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

Docker and Its Components

I chose this article because it explains in detail the components of Docker and how each is applied in different phases of software development. I believe this will help unfamiliar students learn more about the components of Docker beyond a general understanding. Given that we are already using Docker in this course, I imagine we will be using it for future projects as well, so a deeper understanding of the platform would be highly beneficial.

The Docker platform is made up of several components. A container is like a stripped-down virtual machine in which programs can run in isolation. A Docker image defines what will make up a container and contains the directions for what will run and how. For example, a MySQL Docker image would contain the instructions for running an instance of MySQL server and exposing the necessary ports. The Docker Engine is what runs images and is responsible for running containers on the host machine. The last major component is docker-compose, which allows one to configure storage, networking, and interaction between containers. The purpose of Docker is to allow pre-configured containers to run exactly the same wherever Docker is installed. Because all containers are identical to the image they are created from, multiple containers can be run to quickly scale to meet performance demands.

I have used Docker in the past on multiple occasions, most recently for my database design final project, a full-stack web application. To see how Docker is used in a full-stack application, you can review the source code at gitlab.com/mjared94/todo-app inside the server directory. In the app, we used Docker to streamline team development and simplify the backend services in use. The application required a database and a way to manage the database during development. Rather than installing the software individually on each of our computers, we used Docker to quickly launch the services identically on each machine. This can be done using docker-compose, which, as mentioned above, allows you to outline how you want your containers to interact with each other.
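To illustrate the idea (not our project’s exact file), a docker-compose.yml for a database plus a management tool might look roughly like this, with MySQL and Adminer used only as example images:

```yaml
version: "3.8"

services:
  db:
    image: mysql:8              # example database container
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: todo
    volumes:
      - db-data:/var/lib/mysql  # persist data between container restarts

  adminer:
    image: adminer              # example web UI for managing the database
    ports:
      - "8080:8080"             # reachable at http://localhost:8080
    depends_on:
      - db

volumes:
  db-data:
```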

Our web server was run directly from the terminal, relying on the Docker services. In production, we could have also containerized the web server to run alongside the other containers. Doing so would have enabled anyone with Docker to run our todo application, along with the required backend services, without any prior configuration. Containerizing the application would also make deployment seamless because our pre-configured container could be launched on virtually any web host. After reading this article and understanding the Docker components in more detail, I will be better able to use them in the future.


Blog resource link: https://www.infoworld.com/article/3204171/what-is-docker-the-spark-for-the-container-revolution.html

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.