Category Archives: CS-343

To-may-to : To-mah-to; Po-tay-to : Po-tah-to ; Framework : Library?

At the end of my previous blog post, I incorrectly referred to React.js as a framework. It is actually a JavaScript library. Although the two can be used to achieve a common goal, the terms are not interchangeable. Allow me to use the following analogy to explain my understanding of the two.

The main difference is that when you’re using a library, you call on it to assist you with completing your code. A framework, on the other hand, can also be used to write your code, but it requires you to “relinquish ownership” and abide by its rules. To distinguish the two, let’s look at the code to be written in terms of sharing information with another person.

Scenario A.

You’re browsing StackOverflow and you come across a user who is asking a question about how to use various functions/methods in a particular programming language. You, being a well-seasoned programmer and active user in the StackOverflow community, wish to give this user a bit of assistance. So you decide to do some research on said programming language and functions/methods. Once you’ve gotten a firm understanding of the concepts, you give a friendly and in-depth response to the user, which helps to solve their problem. 

Scenario B. 

You’ve been assigned to write a paper explaining how to use various functions/methods in a particular programming language by your professor. They require the paper to be written in an accepted formatting style (MLA/APA) of your choosing. You, being a top student of your class, do some research to produce a high quality paper that reflects your standing. As you write your paper, to adhere with formatting guidelines, you use in-text citations. Once your paper has been completed you also cite your sources on your works cited page. Your professor gives you perfect marks on your paper due to the accuracy and proper formatting of your paper. 

In both of these scenarios, we were able to relay the information (write our code) in different ways. While the method of finding the information was roughly the same, the end product is what differed.

In scenario A the user was able to answer the question in any manner that suited their needs, with no restriction. In scenario B, however, the student was not granted the same leisure and was required to structure their response according to a specific set of guidelines. Scenario A represents the usage of a library in your code, while scenario B represents the use of a framework. While the tools used were essentially the same, the control over the end product was not. It is this control over the code that highlights one of the main differences between how libraries and frameworks operate.
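To make the “who calls whom” difference concrete, here’s a toy JavaScript sketch (the names are invented, and this is not any real library or framework’s API): with a library, your code stays in charge and calls helpers when it wants; with a framework, you hand your code over and the framework decides when to run it.

```javascript
// Library style: your code calls the helper and stays in control.
function formatDate(date) {
  // Imagine this helper comes from a date library you imported.
  return date.toISOString().slice(0, 10);
}

function myApp() {
  // You decide when and how the library is used.
  return "Today is " + formatDate(new Date("2021-12-01"));
}

// Framework style: the "framework" calls *your* code. You hand it a
// component (here just a function) and it decides when to run it.
function miniFramework(component) {
  // The framework owns the control flow; your component fills a slot.
  return "<html><body>" + component() + "</body></html>";
}

const page = miniFramework(() => "Hello from my component");
```

This inversion of who holds the control flow is exactly the “relinquish ownership” idea from the analogy above.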

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.

Object Oriented Programming vs Functional Programming

Almost every computer science student I have encountered relies heavily on Java. Every one of my programming classes has been taught in Java, with the exception of a free elective where I took a Matlab class. For most students, if they want experience working with a functional programming language, they need to take an elective course or teach themselves in their free time. C is a common procedural language and SQL a common declarative one, while Java, C++, and Ruby are examples of object-oriented programming languages. For a while, I didn’t quite understand the differences between object-oriented and functional programming languages. I wanted to take some time this week to revisit that idea and learn the pros and cons of each programming style and when each one is preferred.

I found this video to be a great basis for answering these questions. Starting with functional programming, there are three base rules that need to be followed: functions only, no side effects, and fixed control flow. Functional programming generally has fewer temporary variables and is thus less memory intensive, and the methods are shorter and to the point. A strict control flow means that as you write your code, you are mapping a clear flow from input to output. The basis of object-oriented programming is classes containing methods and attributes, and creating objects from them. Object-oriented programming is a great option when there is going to be a lot of similar code, or when many working parts of your code operate on the same basis and then branch outward with variations. With functional programming, if you have similar pieces of code being reused and rewritten, bugs can be harder to narrow down when they are embedded throughout your program. With object-oriented programming, fixing a bug in the base class fixes it in all extended classes and implementations of those methods.
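As a rough illustration of the two styles side by side (a sketch in JavaScript rather than Java, with made-up names):

```javascript
// Functional style: a pure function, no side effects, data in -> data out.
const applyDiscount = (prices, rate) => prices.map((p) => p * (1 - rate));

// Object-oriented style: data and behavior bundled in a class; variations
// extend the base class instead of duplicating code.
class Cart {
  constructor(prices) { this.prices = prices; }
  total() { return this.prices.reduce((a, b) => a + b, 0); }
}

class DiscountedCart extends Cart {
  constructor(prices, rate) { super(prices); this.rate = rate; }
  // Fixing a bug in Cart.total() would fix every subclass that uses it.
  total() { return super.total() * (1 - this.rate); }
}
```

The pure function can be tested in isolation with no setup, while the class hierarchy shows how a fix in the base ripples out to the variations.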

From the blog CS@Worcester – Site Title by lenagviaz and used with permission of the author. All other rights reserved by the author.

Week 13: 9 Best Practices for Optimizing Frontend Performance

https://blog.bitsrc.io/9-best-practices-for-optimizing-frontend-loading-time-763211621061

So last week I wrote a blog post on good practices for REST API design, so naturally I had to follow it up with a blog post on good practices for optimizing frontend performance, since we’re working with frontend stuff this week.

This week’s blog post is about a blog post from bitsrc.io from Nethmi Wijesinghe titled “9 Best Practices for Optimizing Frontend Performance.” In this blog post, she writes about 9 best practices that will be useful to the reader to optimize frontend data loading. The 9 best practices she lists are: minify resources, reduce the number of server calls, remove unnecessary custom fonts, compress files, optimize images, apply lazy loading, caching, enable prefetching, and use a content delivery network.

To summarize the 9 practices:

  1. Minify Resources: remove unnecessary, redundant data from your HTML, CSS, and JavaScript that is not required to load, such as code comments and formatting.
  2. Reduce the Number of Server Calls: the more calls to the server, the more time to load. Three ways to reduce server calls are using CSS sprites, reducing third-party plugins, and preventing broken links.
  3. Remove Unnecessary Custom Fonts: self-explanatory, but you can also take three actions when using fonts on your website: convert fonts to the most efficient format, subset fonts to remove unused characters, and preload only the fonts that are explicitly required.
  4. Compress Files: compress files to save loading time, since larger files mean longer loading times.
  5. Optimize the Images: images improve user engagement, but make sure they are optimized to save loading time.
  6. Apply Lazy Loading: lazy loading lets a web page load the required content first and the remaining content later, if needed.
  7. Caching: allow caching so browsers can store files from your website in their local cache and avoid reloading the same assets when revisiting.
  8. Enable Prefetching: prefetching loads a resource in anticipation of its need, reducing the waiting time for that resource.
  9. Use a Content Delivery Network: a CDN (a group of servers distributed across several geographical locations that store a cached version of the content to deliver it quickly to the end user) also lets you optimize loading time.
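To make practice 7 concrete, here’s a minimal in-memory cache sketched in JavaScript. The fetch function is faked so the example is self-contained, and all the names are my own; a real page would pass the browser’s fetch and likely also respect HTTP cache headers.

```javascript
// A tiny in-memory cache: repeated requests for the same URL are served
// from the cache instead of going back to the network.
function makeCachedFetcher(fetchFn) {
  const cache = new Map();
  return async function cachedFetch(url) {
    if (cache.has(url)) return cache.get(url); // cache hit: no network call
    const result = await fetchFn(url);         // cache miss: fetch and store
    cache.set(url, result);
    return result;
  };
}

// A fake fetch so the sketch runs anywhere; it counts "network" calls.
let networkCalls = 0;
const fakeFetch = async (url) => {
  networkCalls++;
  return "body of " + url;
};

const cachedFetch = makeCachedFetcher(fakeFetch);
```

Calling `cachedFetch("/a")` twice only hits the (fake) network once, which is the whole point of caching for revisits.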

I picked out this blog post because we were using frontend in class during Activities 15-16 and I wanted to follow up with practices to optimize your frontend. This one I found interesting, but it also introduced new terminology to me. Things like lazy loading, prefetching, and CDNs I didn’t know about until reading this post. Also, I haven’t had any web dev experience prior to this class, and doing the in-class activities made me more interested because it was something new to me that I had never touched. I knew web dev things existed; I just never knew how complex it can actually get, and it’s actually made me more interested in possibly pursuing a web dev position.

From the blog CS@Worcester – Brendan Lai by Brendan Lai and used with permission of the author. All other rights reserved by the author.

Refactoring

Hello everyone, I hope you are all doing well. As this is the final week, I would like to wish you all good luck with your finals. In this week’s blog, I would like to talk about refactoring, as it is one of the important parts of programming. Refactoring is the process of restructuring code while making sure it does not change the original functionality of the code. The main goal of refactoring is to improve internal code by making small changes without altering the code’s external behavior.

Computer programmers and software developers refactor code to improve the design, structure, and implementation of software. Refactoring improves code readability and reduces complexity. It also helps software developers find bugs or vulnerabilities hidden in their software. Among the purposes of refactoring: it makes code more efficient by addressing dependencies and complexities, makes it more maintainable and reusable, and makes it cleaner, which in turn makes the whole codebase easier to read and understand.

Refactoring can be performed after a product has been released, before adding updates and new features to existing code, or as a part of day-to-day programming. There are several benefits to refactoring: it makes the code easier to read, encourages a more in-depth understanding of the code, and improves maintainability. It also comes with various challenges: the process will take extra time if the development team is in a rush to finish the product and refactoring was not planned for; without clear objectives, refactoring can lead to delays and extra work; and refactoring cannot address software flaws by itself, as it is meant to clean code and make it less complex. There are various techniques for refactoring code, including moving features between objects, extracting, refactoring by abstraction, and composing. Best practices for refactoring include planning for refactoring when a project starts, refactoring before adding updates, refactoring in small steps, testing often while refactoring, fixing software defects separately, understanding the code before refactoring, focusing on code deduplication, and using automation tools that make refactoring easier and faster.
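As a tiny, hypothetical JavaScript illustration of refactoring in small steps, here a calculation is extracted into well-named functions without changing the external behavior, which is the defining constraint of refactoring:

```javascript
// Before: one function doing all the work inline.
function invoiceBefore(items) {
  let total = 0;
  for (const item of items) total += item.price * item.qty;
  return "Total: $" + total.toFixed(2);
}

// After: the calculation is extracted into small, named functions.
// External behavior is unchanged; only the internal structure improved.
function lineTotal(item) { return item.price * item.qty; }
function subtotal(items) { return items.reduce((sum, i) => sum + lineTotal(i), 0); }
function invoiceAfter(items) { return "Total: $" + subtotal(items).toFixed(2); }
```

A test asserting that both versions return identical output for the same input is exactly the “test often while refactoring” practice in action.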

I chose this article because it has a clear definition of refactoring and its purpose, lists its benefits, explains different techniques for performing code refactoring, and gives best practices to follow. Since all of this is very helpful for software engineers, these practices will help me do my job more easily and effectively in the future.

Article: https://searchapparchitecture.techtarget.com/definition/refactoring

From the blog CS@Worcester – Mausam Mishra's Blog by mousammishra21 and used with permission of the author. All other rights reserved by the author.

Solo Project Management Using Government-Grade Advice

https://derisking-guide.18f.gov/state-field-guide/basic-principles/

Something I struggle with quite a bit is directing my personal software projects. Staying committed, composing large pieces of code with each other correctly, and even figuring out what I want to do in the first place are all things that I just sort of play by ear, with limited success. In an attempt to gain some insight, I found this article published by a technology and design consultancy for the US government, directed at non-technical project managers. The gist of it is that these managers must understand six concepts: user-centered design, Agile software development, DevOps, building with loosely coupled parts, modular contracting, and product ownership. I won’t go into all of these, since some are still more technical than others, but I want to highlight a few.

One of my favorite points in the article is something that I’ve believed for a long time, which is that all development should be centered on the needs of the end user, rather than stakeholders. Project risk is reduced by ensuring the software is solving actual problems for actual people, and the problems are identified via research tactics like interviews and testing for usability. It would be kind of silly to interview myself, but I think this is a good mindset to have. It kind of sounds meaningless when stated so directly, but if you want a product you have to focus on creating the product, rather than daydreaming about every possible feature it could have.

Another point I liked was the discussion of Agile software development. Without getting too into the weeds on details, the basic problem it seeks to solve is that detailed, long term plans for major custom software projects generally become incorrect as the project proceeds and new technical considerations are discovered. To combat this, agile developers plan in broad strokes, filling in details only as necessary. In a way, it kind of reminds me of an analogy I heard once to describe Turing machines – an artist at their desk, first drawing a broad outline and then focusing on specific sections and filling in details (it may or may not be obvious how this is related to Turing machines but that’s not relevant). The primary metric is how fast documented and tested functionality is delivered.

I found two other somewhat related points useful as well, both of which deal with modularity. The first is the idea of “building with loosely coupled parts”, which essentially boils down to the idea that if one agile team is in over their head, they should split the work into two or more distinct products, each with its own dedicated team. Modular contracting is just applying this concept before even beginning a project. Together, I think this is a helpful way of possibly connecting all the small fleeting app ideas I have – rather than one unfocused monolith, I could work on a small ecosystem with a shared API that I add and remove things from as needed.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

API calls

We know what APIs are, but how are they called?

The Uniform Resource Identifier (URI) of the server or external software whose data you desire is the first thing you need to know when making an API request.
This is essentially a digital version of a street address.
You won’t know where to send your request if you don’t have this. For example, the base URI of the HubSpot API is https://api.hubapi.com. It’s worth noting that most APIs have several endpoints, each with its own set of end routes. Consider the case where you want to stream public tweets in real time. Then you could utilize Twitter’s filtered stream endpoint. The base path, https://api.twitter.com, is shared by all endpoints.
/2/tweets/search/stream is the filtered stream endpoint. You can either add that to the end of the base path, or just list the endpoint in your request.

Add an http verb

Once you’ve got the URI, you’ll need to figure out how to phrase the request. The first thing you must include is a verb that expresses the request. The four most fundamental request verbs are:

  • GET – retrieve a resource.
  • POST – create a new resource.
  • PUT – alter or update an existing resource.
  • DELETE – delete a resource.

Let’s say you want to see a list of the nearest alternative fuel stations in Denver, Colorado, using NREL’s Alternative Fuel Station API. Then you’d make a GET request that looks something like this:

GET https://developer.nrel.gov/api/alt-fuel-stations/v1/nearest.json?api_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

This instructs the server to look in the database for a list of alternative fuel stations near Denver. If that list exists, the server will return an XML or JSON copy of the resource along with the 200 HTTP response code (OK). If it doesn’t, the server will send back the HTTP response code 404 (Not Found).
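A sketch in JavaScript of how such a request URL is assembled before sending. I’m assuming NREL’s api_key query parameter and a location search parameter here, and the key value is a placeholder, not a real credential:

```javascript
// Building the request: base path + endpoint + query parameters.
const base = "https://developer.nrel.gov";
const endpoint = "/api/alt-fuel-stations/v1/nearest.json";

// URLSearchParams handles the encoding of the query string for us.
const params = new URLSearchParams({
  api_key: "YOUR_API_KEY", // placeholder, never commit a real key
  location: "Denver, CO",
});

const url = base + endpoint + "?" + params.toString();

// With the URL assembled, the call itself is one line in most HTTP
// clients, e.g. fetch(url) in the browser, which defaults to GET.
```

Separating base path, endpoint, and parameters like this mirrors how the article describes the anatomy of an API call.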

Test API calls

Making API calls to various endpoints, receiving responses, and validating the status codes, response times, and data in those responses are all part of API testing. ReqBin, for example, is a web service that does this type of testing. Although the steps are similar, they will differ depending on the tool or service you use. To test an API with ReqBin: enter the URL of the API endpoint, select the appropriate HTTP method (GET, POST, etc.), enter your credentials in the Authorization tab, and click Send to submit your API request.

Why this topic?

I chose this topic since it was the subject of a school assignment.
I followed the directions in the readme file you provided, but I wanted to learn more about why we use the GET method and in which situations we use the POST method instead.
Overall, this topic piqued my interest, and I believe it is critical knowledge for all students aspiring to be software developers.

Link: https://blog.hubspot.com/website/api-calls

From the blog cs@worcester – Dream to Reality by tamusandesh99 and used with permission of the author. All other rights reserved by the author.

Encapsulate What Varies

When we write code, we try to think ahead to what possible changes we may need to implement in the future. There are many ways we can implement these changes, ranging from slapping together a quick patch, to methodically going through the code and changing all the affected parts, to writing the code in such a way that anticipated changes can be added with just one or two small adjustments. This last method is what “encapsulate what varies” means. Writing code will often cause us to think about what future changes we will need, and by isolating those parts of the code we can save ourselves time later. I found an article that does a good job explaining this concept, and while reading through it I was reminded of a recent project where using encapsulation ended up saving me a lot of time and headaches.

The specific event that the article caused me to remember occurred during my most recent internship. One of the projects I worked on was a script that would automatically assemble 3D CAD models of any of the systems the company was working on at the time. This script needed to read the system specifications from a database and then organize that data, identify key parts of the system, and figure out how it is assembled, so that it could then send those instructions to the CAD software and create the 3D model. It was a big project, and I and the other intern working on it were daunted by the amount of ever-changing data that would need to be accounted for. Many systems were of a unique design, and as such we couldn’t use the exact same code for all systems. The engineers we were attached to for this internship introduced us to something called Python dataclasses. These essentially allowed us to structure the parts of our code that we knew were going to be subject to change in such a way that adding or removing certain data points from the database doesn’t break the overall program. If any changes arise, we only need to alter the related dataclasses for the rest of the code to work with the new change. Without these we would have had to create new methods/classes for each unique change every time it came up, which is not something anyone wanted. I am glad I found a way of “encapsulating what varies”, since I can now write better and more future-proof code by isolating the parts that I believe will change most often.
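The project above used Python dataclasses; as a rough JavaScript analogue of the same idea (the field names here are hypothetical, not from the actual project), the varying record shape can be isolated in one factory so the rest of the code never touches raw database columns:

```javascript
// The parts that change often (the fields of a record) live in one place.
// Adding or removing a field means editing only this factory.
function makeSystemSpec({ name, partCount = 0, material = "steel" }) {
  // Freezing the object keeps the rest of the code from mutating it.
  return Object.freeze({ name, partCount, material });
}

// Code elsewhere depends only on the spec's interface, not on how many
// raw database columns exist or what they are named.
function describe(spec) {
  return `${spec.name}: ${spec.partCount} parts, ${spec.material}`;
}

const spec = makeSystemSpec({ name: "Pump Assembly", partCount: 12 });
```

If the database gains a new column, only `makeSystemSpec` changes; `describe` and every other consumer keep working, which is the payoff of encapsulating what varies.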

https://alexkondov.com/encapsulate-what-varies/

From the blog CS@Worcester – Sebastian's CS Blog by sserafin1 and used with permission of the author. All other rights reserved by the author.

GRASP

What is GRASP?

GRASP, standing for “General Responsibility Assignment Software Patterns”, is a set of design principles in object-oriented software development used to assign responsibilities to different modules of code.

The different patterns and principles used in GRASP are controller, creator, indirection, information expert, low coupling, high cohesion, polymorphism, protected variations, and pure fabrication. GRASP helps us in deciding which responsibility should be assigned to which object/class.

The following are the main design principles:

  1. Creator
  • Who creates an object? Or, who should create a new instance of some class?
  • A “container” object creates the “contained” objects.
  • Decide who the creator can be based on the objects’ associations and their interactions.

2. Expert

  • Given an object obj, which responsibilities can be assigned to obj?
  • The Expert principle says to assign those responsibilities to obj for which obj has the information to fulfill that responsibility.

3. Low Coupling

  • How strongly are the objects connected to each other?
  • Coupling – an object depending on another object.
  • Low coupling – how can we reduce the impact of a change in depended-upon elements on dependent elements?
  • Two elements are coupled if:
    • One element has an aggregation/composition or association with another element.
    • One element implements/extends another element.

4. High Cohesion

  • How are the operations of any element functionally related?
  • Keep related responsibilities in one manageable unit.
  • Prefer high cohesion.
  • Benefits
    • Easily understandable and maintainable.
    • Code reuse.
    • Low coupling.

5. Controller

  • Deals with how to delegate requests from UI-layer objects to domain-layer objects.
  • It delegates the work to other classes and coordinates the overall activity.
  • We can make an object a controller if:
    • The object represents the overall system (facade controller).
    • The object represents a use case, handling a sequence of operations.

6. Polymorphism

  • How to handle related but varying elements based on element type?
  • Polymorphism guides us in deciding which object is responsible for handling those varying elements.
  • Benefits: handling new variations will become easy.

7. Pure Fabrication

  • A fabricated (artificial) class – assign it a set of related responsibilities that doesn’t represent any domain object.
  • Provides a highly cohesive set of activities.
  • Behavioral decomposition – implements some algorithm.
  • Benefits: high cohesion, low coupling, and the class can be reused.

8. Indirection

  • How can we avoid direct coupling between two or more elements?
  • Indirection introduces an intermediate unit to communicate between the other units, so that the other units are not directly coupled.
  • Benefits: low coupling; e.g., Facade, Adapter, Observer.

9. Protected variation

  • How do we avoid the impact of variations in some elements on other elements?
  • It provides a well-defined interface so that there will be no effect on other units.
  • Provides flexibility and protection from variations.
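Here is a small JavaScript sketch, with invented names, of two of these principles working together: Information Expert (the object with the data computes the answer) and Polymorphism (each variation handles itself, so no type-switching is needed).

```javascript
// Information Expert: the Order holds the line items, so the Order is the
// one that computes the total -- the responsibility sits with the object
// that has the information.
class Order {
  constructor(lines) { this.lines = lines; }
  total() { return this.lines.reduce((sum, l) => sum + l.subtotal(), 0); }
}

// Polymorphism: each kind of line knows its own pricing rule, so Order
// never has to switch on a type field.
class Line {
  constructor(price, qty) { this.price = price; this.qty = qty; }
  subtotal() { return this.price * this.qty; }
}

class DiscountLine extends Line {
  subtotal() { return super.subtotal() * 0.5; } // the variation lives here
}

const order = new Order([new Line(10, 2), new DiscountLine(10, 2)]);
```

Adding a new kind of line means adding a subclass, not editing Order, which is exactly the “handling new variations will become easy” benefit listed above.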

I chose to talk about GRASP because, as a computer science major interested in software development, I was curious to learn more about how GRASP is used to assign responsibilities to different modules of code and how it provides a means to solve organizational problems.

rao.pdf (colorado.edu)

GRASP (object-oriented design) – CodeDocs

From the blog CS@Worcester – Gracia's Blog (Computer Science Major) by gkitenge and used with permission of the author. All other rights reserved by the author.

‘NoSQL’ Comparison

SQL databases use tables and relations to store data, which makes them rigid in the way data is managed. Developers are forced to store data in a predefined way according to the table and database specifications. This strictness makes working with the data easier in the future because the data is highly structured. Given the table properties, a developer will know the properties on each row of the table. The downside of this rigidity is that making changes and adding features to an existing codebase becomes difficult. In order to add a field to a single record, the entire table must be updated, and the new field is added to all records in the table. In PostgreSQL, there can be JSON columns where unenforced structured data can be stored for each record, which serves as a workaround for the highly structured nature of SQL databases. However, this approach is not ideal for all situations, and querying data within a JSON field is slower than querying a table column. SQL databases use less storage on average than NoSQL databases because the data is standardized and can be compressed using optimizations. However, when a SQL database grows, it usually must be scaled vertically, meaning the server running the database must be given upgraded specifications rather than spreading the resources across more instances.

NoSQL databases use key-value pairs and nested objects to store data, making them much more flexible than SQL databases. One of the most popular NoSQL databases is MongoDB. In these databases, tables are replaced with collections, and each entry is its own object rather than a row in a table. The document-based storage allows each record to have its own fields and properties, which allows code changes to be made quickly. The downside of having no enforced structure is that required fields can be omitted, and code may expect data that is not present on the object. MongoDB addresses the lack of enforcement with a feature called schemas. Schemas are a way to outline objects with key names and the data types associated with them, ensuring each object in a collection follows the same format. NoSQL databases are easily scaled horizontally, easing the load by distributing the workload across multiple servers.
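To illustrate what a schema buys you, here is a hand-rolled JavaScript sketch of schema validation. This is only the idea, not any real database API; MongoDB projects would typically use a library such as Mongoose rather than writing this by hand.

```javascript
// A toy schema: each key maps to the expected typeof for that field.
const userSchema = { name: "string", age: "number" };

// Check a document against the schema before "inserting" it: every
// schema key must be present on the document with the right type.
function validate(schema, doc) {
  return Object.entries(schema).every(
    ([key, type]) => typeof doc[key] === type
  );
}

const good = validate(userSchema, { name: "Ada", age: 36 });
const bad = validate(userSchema, { name: "Ada" }); // age is missing
```

Rejecting the second document before it reaches the collection is exactly the enforcement that schemas add on top of an otherwise schemaless document store.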

I selected this topic to learn more about the different use cases between SQL and NoSQL databases such as MongoDB and PostgreSQL. I will use what I learned on future projects to ensure I select the right database technology for the project I am working on.

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

JavaScript Best Practices

Continuing the trend from my previous posts, I am still working with JavaScript, and I am still learning more and more about it. In this article I learned some best practices for JavaScript itself: some new things, and some things common to any programming language.

https://www.w3.org/wiki/JavaScript_best_practices

There is not much to summarize from this article, as it is simply a set of best-practice coding techniques for proper JavaScript. As mentioned earlier, it does include some things that by now I should already know and practice: comments should be used as needed but not in excess; naming conventions should be simple and understandable; Big O notation matters, so optimize your loops and avoid nesting them; and keep to the clean-code style of one function, one purpose, instead of packing extra purposes into a function that might be refactored out later or be nonsensical to someone reviewing the code. But there were also some more JavaScript-specific practices related to web development.

Progressive enhancement is a concept that I get on a basic level: providing a service to someone means working through the barriers of your platform to make sure they have access to it, like Microsoft Office products working on a Mac. The article mentions that when scripting, or perhaps JavaScript itself, is not available on a platform, you need to structure your code in a way that will still work on that platform. To me that seems easier said than done, but it does make sense: if the interface can be served to the user by something else before scripting runs, then you achieve your goal of growing your user base and opening your code up.

Another practice I learned about concerns data security: at any time, any data being passed through my code should be checked first. I have heard examples of specific businesses being hacked due to a very specific fault in the design itself that left open vulnerabilities, which led to personal information being stolen. Most cases I have heard of simply involve the human aspect of security, where a hacker just calls to get access to a password for an account that can then reach that data. But the examples given in the article are specifically about making sure that the data passed to you does not cause errors, using methods that let you discern one data type from another to avoid further conflicts, and not relying on validation on the user’s end, to prevent users from messing with your website’s code.
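A minimal JavaScript sketch of that kind of defensive check; the function name and the rules are hypothetical, but the idea is never to trust incoming data and to verify its type and range before using it:

```javascript
// Never trust incoming data: check type and range before using it.
// A hypothetical handler that only accepts an integer quantity 1..100.
function parseQuantity(input) {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 1 || n > 100) {
    return null; // reject anything non-numeric or out of range
  }
  return n;
}

const ok = parseQuantity("5");
const rejected = parseQuantity("5; DROP TABLE users"); // refused, not executed
```

Because the check happens on the server side of the boundary, a user bypassing any client-side validation still cannot push malformed data through.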

From the blog CS@Worcester – A Boolean Not An Or by Julion DeVincentis and used with permission of the author. All other rights reserved by the author.