Vue.js and How it Works

 

This article, An Introduction to Vue.js: Understanding the Framework and Getting Started, is a good starting point for people who want to use Vue.js, a JavaScript framework used to build user interfaces. It highlights the major features of Vue.js (components, directives, the instance, and the router) and how they work. It then goes into deeper topics: the Vue.js lifecycle, from beforeCreate to destroyed, and Vuex, the state management library. It explains the structure of Vuex (state, mutations, actions, context) and demonstrates how Vuex can be integrated with Local Storage and Session Storage to persist application state across page reloads or browser sessions. It continues on to reactivity and event handlers, showing how computed values react automatically to data changes and how event listeners allow components to respond to user interaction. Finally, it explains server-side rendering, what it does, and how it is used to improve performance and SEO. The article concludes by stating that Vue.js is a powerful and popular framework for web development, offering a versatile and intuitive approach to building interactive and dynamic user interfaces; it simplifies the whole process of building and encourages continued learning through documentation and community resources.
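The computed-value behavior the article describes can be imitated in plain JavaScript. This is only a minimal sketch of the effect, not Vue's actual mechanism (Vue uses proxies and dependency tracking); the object and property names here are my own:

```javascript
// Minimal sketch of a "computed" value: a getter recomputes the
// derived value from the underlying data every time it is read,
// which is the observable behavior Vue's computed properties give you.
const state = {
  firstName: "Ada",
  lastName: "Lovelace",
  // behaves like a Vue computed property: always up to date
  get fullName() {
    return `${this.firstName} ${this.lastName}`;
  },
};

console.log(state.fullName); // "Ada Lovelace"
state.firstName = "Grace";   // change the underlying data...
console.log(state.fullName); // ...and the derived value reflects it
```

In real Vue, the framework also caches the computed result and re-renders the component when a dependency changes; this sketch only shows the "always in sync" part of the idea.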

    I chose this article because I usually did my front-end work in plain HTML, CSS, and JavaScript while learning how to use React (TypeScript). Vue.js seems to cut out a lot of the work of wiring things up, especially since it helps with performance and the SEO optimization that must be taken into consideration when making a website. Also, Vue.js is more lightweight than other frameworks like Angular and React, and I have heard a lot about its ease of use, so I thought I needed to give it a try, and this article was the best introduction to how Vue.js works. It ends with an interesting state management library whose coding style differs from other approaches. With the knowledge I have gained, I expect to apply it by organizing my projects around reusable components, using lifecycle hooks more intentionally for initialization and cleanup, and relying on Vuex when managing complex state across multiple parts of an application. Beyond that, I plan on building a website using just the things I learned about Vue.js in this article and comparing it, over the long run, with a plain HTML/CSS/JavaScript version that brings in libraries, and with another one built in React (TypeScript) (a basic one that I have built as a test), based on performance, lightness, SEO, and so on.

 

https://medium.com/@phamtuanchip/an-introduction-to-vue-js-understanding-the-framework-and-getting-started-d0ad0f3a6c01

 

From the blog Sung Jin's CS Devlopemnt Blog by Unknown and used with permission of the author. All other rights reserved by the author.

The public and software

Link to article: https://github.blog/open-source/social-impact/software-as-a-public-good/

This article goes into detail on the relationship between public-good services and open-source software, and why it matters. Recently, many open-source software proponents went to the United Nations office in New York City to discuss the importance of open-source software to the world at large, for non-profits and the private sector alike, as a way to further society as a whole. One of the most important points made is the underfunding of open-source projects, whether because the technology is not being updated or because vulnerabilities found in the software cause further security issues for public-good repositories. The article also stresses that the private and public sectors must work together, as 80% of public-good software lives on GitHub, whether it is properly funded or not. One of the projects the article touches on as a public good is the PRISM project, which helps gather data about geological disasters and events to better inform local governments about risks and how best to deal with a situation. The largest takeaway from the end of the article is that proper maintenance, and contributing financially or through additions to public repositories, is extremely important for both the public and private sectors, as both rely on help from governments and individuals to sustain these projects over long periods of time.

I chose this article because I support open-source software as a public service, because I use many tools and helpful resources that come from GitHub and open-source repositories, and because I believe government should further support the trend of open source with more funding for better projects and better legal protections. This blog post also taught me the importance of open-source software for the public good: it benefits all members of society through its ease of access and its suitability to be built upon, and the private sector also benefits from the funding of open-source projects, since a benefit to the public good is often a benefit to private interests as well.

Based on this blog post, I will take further action to support legislation and political figures that share this support for open-source funding of public-good systems, and I will try to contribute more to publicly available public-good repositories to help with the overall lack of funding for these projects, which provide an overall positive impact on society as a whole.

From the blog CS@Worcester – Aidan's Cybersection by Aidan Novia and used with permission of the author. All other rights reserved by the author.

Designing Front End

The article I chose shows how strongly front-end design is linked to the structural principles I learn in Software Construction, Design, and Architecture. This is important to know because front-end design is often seen as the “visual” or “creative” side of development. I wanted to look into how these ideas apply to the front-end world because our class focuses on making systems that are scalable and easy to manage through patterns, abstractions, and good architectural choices. To show that UI design and software architecture are more connected than many developers think, this piece built a strong link between the two.

The article says that front-end architecture is a planned way to arrange code, components, styles, and interactions so that the app stays consistent and can grow over time. The article discusses practices like using clear naming standards, modular components, reusable patterns, and separation of concerns. These are all very similar to ideas covered in the course, such as the Stratified Architecture Model, the SOLID principles, and design patterns like Strategy, Factory, and MVC.

The idea that front-end design isn’t just about how things look but also how they are put together is one of the most important things I learned from the resource. In the same way that developers plan backend classes, the author says that UI components should be planned with clarity, purpose, and future change in mind. This made it easy for me to see how front-end design and software systems are connected. A well-structured front end meets the same goals we have when using architectural principles to build software: fewer bugs, better teamwork, and easy addition of new features.

When I thought about this, I noticed how often developers rush through UI design without thinking about how to keep it maintainable over time. This article changed how I build interfaces. I used to only care about “making it look right,” but now I know how important it is to make parts that can be reused, are predictable, and fit in with the general architecture of the system. This fits with what I’ve learned in class about cutting down on duplicate code, making things more cohesive, and keeping the lines between layers clear.

In the future, I’m going to use what I’ve learned by making a library of components early on in a project, writing down UI rules, and making sure that the front end and back end can talk to each other clearly through well-structured APIs. In addition to making development go more smoothly, this method will also make the product easier to expand as more features are added. In general, this resource helped me learn more about how software building principles affect more than just backend logic and have a big effect on the quality of front-end systems as well. It also told me that software architecture and user experience are linked—a well-designed system helps a well-designed interface work.

From the blog Site Title by Roland Nimako and used with permission of the author. All other rights reserved by the author.

CS343: What is OpenAPI?

This week I read a blog post from Postman titled “OpenAPI vs. Swagger”. You can find that here.

What is OpenAPI?

OpenAPI 3 is an API specification, written in YAML, that outlines a RESTful API, similar to Swagger. We’ve used both Swagger and OA3 in class, and we’ve seen that they are so similar that files made using either can be displayed using the Swagger Viewer. One further note: OA3 is considered a “newer” version of Swagger 2. Being newer, it is substantially different from its predecessor.

What are the key similarities between OpenAPI 3.0 and Swagger 2?

  • Both can display JSON and YAML data.
  • Both utilize the JSON schema (though different kinds of the schema)
  • Info block in both specifications display the same information about the API.
  • Both organize their paths and HTTP methods the same way.
    • OpenAPI 3.0 additionally supports describing all data with the JSON Schema

What are the key differences between OpenAPI 3.0 and Swagger 2?

  • Reusable components
    • Swagger 2.0: reusable components must be defined under specific fields: ‘definitions’, ‘parameters’, ‘responses’, ‘securityDefinitions’
    • OpenAPI 3.0: reusable components are now in the ‘components’ block to avoid ambiguity with actual security elements
      • Also new types of reusable components, such as ‘examples’ and ‘headers’.
  • Data models per media types
    • Swagger only allows one schema for the request and response bodies. However, in OpenAPI 3.0, you can define one schema for each (as we saw in class while working on the pantry).
  • Callbacks and webhooks
    • OpenAPI 3.0 supports asynchronous communication, such as the ‘await’ keyword, so the API does not have to provide data tied to such a keyword itself.
  • Improved security
    • OpenAPI 3.0 adds a new universal ‘security’ field and ‘securitySchemes’ to define security procedures and security modes.
    • OpenAPI 3.0 removed Basic Auth and fixed the OAuth 2.0 support that was missing from Swagger 2.0.
  • More summaries
    • OpenAPI 3.0 adds new ‘summary’ and ‘description’ fields at the path level, whereas Swagger 2.0 only had a short ‘description’ field at the info level.
      • The summary allows for a short description of the API, whereas the description field allows more detailed explanations of individual elements of the API.
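To make these differences concrete, here is a small, hypothetical OpenAPI 3.0 fragment (the API name, path, and schema are invented for illustration) showing the ‘components’ block, a per-path ‘summary’ and ‘description’, and the universal ‘security’ field:

```yaml
openapi: 3.0.3
info:
  title: Pantry API          # hypothetical example API
  version: 1.0.0
paths:
  /items:
    get:
      summary: List pantry items
      description: Returns every item currently in the pantry.
      responses:
        "200":
          description: A list of items
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: "#/components/schemas/Item"
components:                  # reusable pieces live here in OpenAPI 3.0
  schemas:
    Item:
      type: object
      properties:
        name:
          type: string
  securitySchemes:
    apiKey:
      type: apiKey
      in: header
      name: X-API-Key
security:
  - apiKey: []
```

This is only a sketch; a real specification would describe every path, response, and schema the API exposes.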

Why did I choose this blog?

During this semester, I was given an introduction into many of the tools that front-end/back-end developers will be using when working on product, such as JSON, REST, OpenAPI, and YAML. As we approach the capstone next semester, I figure we’ll be using these frameworks even more, so it would be to my benefit to get more acclimated to them now.

That’s all for this week.

Kevin N.

From the blog CS@Worcester – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.

Cohesion and Coupling

In this blog post, we will highlight some important distinctions between cohesion and coupling. The definitions of the two concepts can be difficult to understand at first, but it helps to picture how elements are interconnected, which leads to further understanding, as cohesion and coupling are correlated with each other.

In the blog, the author first clarifies the complicated definitions of these two concepts. Cohesion is about how elements belong together, but it does not depend on the number of connections between elements, which is what coupling is all about. The blog post also shows pictures that help us imagine how loose or tight the connections between elements are, which is useful when figuring out the correct way to organize a module. In other words, the goal should be high cohesion and low coupling. Finally, the author sums up why high cohesion not only makes the system easy to understand and change, but also reduces the level of coupling.

I personally chose this post because it helps me understand how to apply these two concepts carefully. In object-oriented programming, for example when making a class in Java, we should notice when we declare so many responsibilities that the class ultimately “does everything.” For example, consider a bookstore class that handles all of the following: adding books, removing books, sales, and receipts. As we pile up responsibilities like this, we notice that some of them should not live there, such as sales and receipts, and should instead be broken down into individual classes. That is what affects cohesion: breaking each responsibility out into its own class promotes higher cohesion and lower coupling. We then call each of those classes a cohesive class.
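The blog's example names Java, but the idea is language-independent. Here is a hedged sketch in JavaScript (the class and method names are my own invention, not from the blog) of splitting a "does everything" bookstore into cohesive classes:

```javascript
// Low cohesion: one class owns inventory, sales, and receipts, so any
// change to receipt formatting risks touching inventory code too.
class BookstoreDoesEverything {
  constructor() { this.books = []; this.sales = []; }
  add(book) { this.books.push(book); }
  remove(title) { this.books = this.books.filter(b => b.title !== title); }
  sell(title, price) { this.sales.push({ title, price }); }
  receipt() { return this.sales.map(s => `${s.title}: $${s.price}`).join("\n"); }
}

// Higher cohesion: each class has one reason to change, and the
// classes are coupled only through small, explicit method calls.
class Inventory {
  constructor() { this.books = []; }
  add(book) { this.books.push(book); }
  remove(title) { this.books = this.books.filter(b => b.title !== title); }
}

class SalesLedger {
  constructor() { this.sales = []; }
  record(title, price) { this.sales.push({ title, price }); }
}

class ReceiptPrinter {
  print(ledger) {
    return ledger.sales.map(s => `${s.title}: $${s.price}`).join("\n");
  }
}

const inventory = new Inventory();
inventory.add({ title: "Clean Code" });
const ledger = new SalesLedger();
ledger.record("Clean Code", 30);
console.log(new ReceiptPrinter().print(ledger)); // "Clean Code: $30"
```

Notice that `ReceiptPrinter` only touches `SalesLedger`, never `Inventory`: each class is cohesive internally, and the coupling between them is narrow and explicit.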

This concept of cohesion and coupling also makes me think about how to organize the members of a class so it is easier to maintain. In my experience, a class I declare is not always a cohesive class, as I cannot always determine whether some related members of that class should be broken out into new classes. That makes larger projects difficult to maintain and complicated at a glance. What I learned about these concepts helps me visualize how I should organize classes and their members by applying the definitions of cohesion and coupling, since high or low levels of each affect future maintenance and the overall functionality of the project.

Source: https://blog.ttulka.com/how-cohesion-and-coupling-correlate/

From the blog CS@Worcester – Hello from Kiet by Kiet Vuong and used with permission of the author. All other rights reserved by the author.

CS343: YAML vs JSON

This week I read a blog post from Postman titled “What is YAML?” You can find that here.

What is YAML?

YAML stands for “YAML Ain’t Markup Language” (it originally stood for “Yet Another Markup Language”). YAML is actually a superset of JSON, which I didn’t know and didn’t intuit from the name; the blog goes into depth about the differences between JSON and YAML, in addition to other things.

What are some of the key ways in which YAML differs from JSON?

  • Syntax
    • YAML: indentation (spaces) indicates nesting; colons for key-value pairs (KVPs) and dashes for list items
    • JSON: syntax is explicit, e.g. curly braces to define objects and square brackets to define arrays
  • Readability
    • YAML is considered more readable due to its less-stringent syntax rules, while JSON is considered less readable due to its highly structured syntax.
  • Data types
    • YAML supports more data types (such as dates and binary data) than JSON, which supports only strings, numbers, booleans, null, objects, and arrays.
  • Usage
    • YAML is the language of choice for config files and other areas where readability is key.
    • JSON is natively supported by many programming languages, meaning that it fits well into microservices and for data interchange between APIs and web services.
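To see the syntax differences side by side, here is the same small, made-up configuration in both formats. YAML infers nesting from indentation and marks list items with dashes:

```yaml
server:
  host: localhost
  port: 8080
features:
  - logging
  - metrics
```

JSON expresses the identical data with explicit braces and brackets:

```json
{
  "server": { "host": "localhost", "port": 8080 },
  "features": ["logging", "metrics"]
}
```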

What is YAML used for?

  • Writing config files
  • Data interchange for API/web services
  • Storing metadata
  • Data serialization (including converting between YAML and JSON)

What is essential to know when working in YAML?

  • YAML has no hard requirements for syntax, but developing an internal style guide to ensure that files written across the organization are equally readable is an ideal best practice.
  • YAML files, especially large ones, can be broken up into individual modules in order to improve maintainability/readability.
  • YAML may present a potential security risk when configuration or other files are autogenerated and expose system/data information that otherwise should not be shown. Be careful about what data is stored in such files.

Why did I choose this blog?

During this semester, I was given an introduction into many of the tools that front-end/back-end developers will be using when working on product, such as JSON, REST, OpenAPI, and YAML. As we approach the capstone next semester, I figure we’ll be using these frameworks even more, so it would be to my benefit to get more acclimated to them now.

That’s all for this week.

Kevin N.

From the blog CS@Worcester – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.

CS343-01: Fourth Quarter

Interface, Not Implementation Programming

According to what I recently learned, programming to an interface and not an implementation is one of modern software development’s “most powerful design principles.” And this is a key to building some flexible, maintainable, and professional software systems. In this blog, I’ll go over the key characteristics of this idea.

To understand this idea, the source says that programming to an interface and not an implementation is about separating what the program does and how it accomplishes the tasks it’s performing, or will be performing. The primary characteristics include abstraction over specificity, decoupling components, enhanced flexibility and extensibility, ease of maintenance and testing, and finally, polymorphism and reusability.

Abstraction over specificity means that when programming to abstractions (like interfaces), the behaviors should be without detailing the exact implementations taking place. “This approach allows for multiple implementations that fulfill the same purpose but differ in internal workings.” 

As a note: coupling in software design measures the interdependence between modules or components, indicating how much they rely on each other; low coupling is desirable. The second characteristic is decoupling components. This means that components become less dependent on specific implementations, which produces loose coupling. “When components are decoupled, they can be easily replaced or modified without impacting other parts of the system.”

Enhanced flexibility and extensibility means that developers can introduce new functionality without significantly changing existing code, and they can also swap out different implementations that “adhere to the same interface.”

Ease of maintenance and testing is the fourth principle or characteristic of “Programming to an interface, not an implementation.” This is where the testing can become simpler since the interfaces now allow for creations of mock or stub implementations. “This isolation helps developers verify each component’s behavior independently, supporting faster maintenance cycles.”

And finally, the last characteristic is polymorphism and reusability. This is where objects become more interchangeable, which in turn enables polymorphism when we focus on interfaces. It leads to reusable code, which helps a lot, since the same interface can support different implementations for the various scenarios it could encounter.

This is important to know since these principles are applied to different programming paradigms. Programming paradigms are fundamental styles or approaches that are used for structuring and writing computer programs. They’re like blueprints when building a building except with problem-solving. And “programming to an interface and not an implementation” is often associated with Object-Oriented Programming, OOP, one of the different types of programming paradigms. Some of the other paradigms that this principle can be applied to are functional programming or FP and procedural programming.
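JavaScript has no `interface` keyword, but the principle from the article still applies through a shared method shape. This sketch and its names are my own invention, not from the source: the calling code depends only on the shape `{ notify(message) }`, never on a concrete class:

```javascript
// "Programming to an interface": sendAlert depends only on the
// shape { notify(message) }, not on any particular implementation.
class EmailNotifier {
  notify(message) { return `email: ${message}`; }
}

class SmsNotifier {
  notify(message) { return `sms: ${message}`; }
}

// Implementations can be swapped freely without changing this code,
// which is the loose coupling the article describes.
function sendAlert(notifier, message) {
  return notifier.notify(message);
}

console.log(sendAlert(new EmailNotifier(), "server down")); // "email: server down"
console.log(sendAlert(new SmsNotifier(), "server down"));   // "sms: server down"
```

In testing, the same idea lets you pass a mock object with a `notify` method to verify `sendAlert` in isolation, which is exactly the "ease of maintenance and testing" benefit described above.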

Source: https://medium.com/@Masoncoding/programming-to-an-interface-not-an-implementation-024d01815070 

From the blog CS@Worcester – The Progress of Allana R by Allana Richardson and used with permission of the author. All other rights reserved by the author.

CS348-02: Quarter Four

The Vulnerabilities of Git

We use Git for a lot of things from collaborative projects to personal ones. So security threats are a topic that must be taken into account on any given day, especially for important projects. And in the article posted in July 2025, there are seven distinct security vulnerabilities that had been added, all that affect prior versions of Git.

The first of these vulnerabilities of Git is CVE-2025-48384. While reading a configuration value, Git will take off, or strip as the article says, two kinds of trailing characters: carriage return (CR) and line feed (LF) characters. “When writing a configuration value, however, Git does not quote trailing CR characters, causing them to be lost when they are read later on.” The article mentions that if something called a symlink exists between the stripped path and the submodule’s hooks directory, then an attacker has an opportunity to “execute arbitrary code through the submodule’s post-checkout hook.”

A symlink, upon further research, is also known as a symbolic link or a soft link. It is a special file that refers to another file or a directory by storing a path to it. This creates an alternative access path without duplicating the content of the target. These links can break if the target is moved or deleted.
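A quick shell illustration of the symlink behavior described above (run in a scratch directory; the filenames are my own):

```shell
# A symlink stores a path to its target rather than copying content.
echo "hello" > target.txt
ln -s target.txt link.txt      # create the symbolic link
cat link.txt                   # follows the link, prints "hello"
readlink link.txt              # prints the stored path: "target.txt"
rm target.txt                  # delete the target...
cat link.txt 2>/dev/null || echo "broken link"   # ...and the link dangles
```

The attack surface in CVE-2025-48384 comes from exactly this indirection: the link itself stays valid-looking even when what it points at has changed underneath it.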

The second vulnerability is CVE-2025-48385, which happens when a repository is cloned and (optionally) Git fetches a bundle. This allows the server to offload a portion of the clone to a CDN, or Content Delivery Network. The Git client in this situation does not properly validate the advertised bundle or bundles, which allows “the remote side to perform protocol injection. When a specially crafted bundle is advertised, the remote end can cause the client to write the bundle to an arbitrary location, which may lead to code execution similar to the previous CVE.”

There is also a Windows-only vulnerability, CVE-2025-48386. This is where Git uses a credential helper to authenticate the request when an authenticated remote is cloned. One of these credential helpers is Wincred, which uses Windows Credential Manager to store credentials; it uses the content of a static buffer as a “unique key to store and retrieve credentials. However, it does not properly bounds check the remaining space in the buffer, leading to potential buffer overflows.”

There are also vulnerabilities in Git GUI and Gitk, some of which are specific. CVE-2025-27613 and CVE-2025-27614 are for Gitk. CVE-2025-27613 is when running Gitk in a specifically crafted repository, Gitk can write and/or truncate arbitrary writable files when running Gitk without additional command-line arguments. CVE-2025-27614 is when the user is tricked into running a gitk filename where the filename has a very specific structure and they may run arbitrary scripts that are provided by the attacker.

Overall, always upgrade to the latest version.

Source: https://github.blog/open-source/git/git-security-vulnerabilities-announced-6/ 

From the blog CS@Worcester – The Progress of Allana R by Allana Richardson and used with permission of the author. All other rights reserved by the author.

The Power of Linters

For my final self-directed blog of the semester, I decided to dive deeper into linters and their function. In class, we had briefly gone over linters, specifically their use to correct non-inclusive or problematic language. This is useful when creating any form of documentation because we want it to be as neutral and non-problematic as possible. I found a blog post from Codacy regarding information about linters, their benefits and drawbacks, and some popular linters for different programming languages.

The article starts by detailing the history of linters: the first was created by computer scientist Stephen C. Johnson in 1978 as a tool to find errors in written code. The name was a reference to the lint trap in a dryer, which was designed to catch unwanted lint in the machine during the drying process. The linter is a useful tool for static code analysis, which is the process of examining code for errors before executing it. According to the article, linters can help find “coding errors, stylistic inconsistencies, bugs, violations of coding standards, and potential security vulnerabilities.” They do this by checking your code against a predefined set of rules.

The benefits of linting are that it will reduce the number of errors written into code, it creates a consistent standard for coding practices, and can help improve the objectivity of code. Some argue that the downsides of linting include too many false positives and can negatively affect the performance of programmers in the early stages of development. However, it is generally accepted that linting is a useful tool and is adopted by many development teams.

Prior to this course and the activity we completed in class, I was not aware of linters or what they could do. Most of my code errors over the years would get cleaned up by the debugger in my IDE or by any error messages that came up upon execution. I was not aware of this as a tool that I could use in my development. Though I do not program often, or plan on pursuing it as a career path, I enjoy learning about all aspects of the field to become a more well-rounded academic and professional. The next time I have to do any kind of programming project, I will be adding a linter to my IDE so I can have a more consistent program with fewer errors.

Source Article: https://blog.codacy.com/what-is-a-linter

From the blog CS@Worcester – zach goddard by Zach Goddard and used with permission of the author. All other rights reserved by the author.