Category Archives: CS-343

An Intro to TestProject as an Automated Testing Tool

Classes restarted last week, and during the first week I found myself reading and writing a lot of introductory blog posts about upcoming semester course topics. As part of my setup tasks for CS-443 Software QA and Testing, I read about Postman, one of the most popular manual API testing tools, and saw a reference to TestProject, an automated testing tool from Tricentis. So I decided to do some further reading into TestProject, since I understood the basics of manual testing but had not yet seen automated testing in action. To do so, I found a blog post on the Tricentis site containing an overview tutorial of API testing and of how platforms like Postman and TestProject can be valuable.

The post begins by discussing the value of API testing and mentions the utility of Postman, but it also brings up some commonly encountered limitations around test automation, scheduling, and end-to-end test reporting. This is where TestProject comes in: the platform offers solutions for end-to-end API testing by providing an environment not only to test APIs manually, but also to automate API-based test flows, schedule and run them periodically, and generate execution reports without the need for third-party tools or writing any code.

The blog also contains a six-chapter tutorial on using TestProject, so I delved into Chapter 1 – Basic API Test Automation. This chapter emphasizes the platform’s ability to handle a wide variety of input sources like HTML, databases, and more. It then shows the application GUI and walks the reader through adding ‘test steps’ in TestProject to set up automated testing for a GET HTTP request using NASA’s public APIs, including search parameters and a discussion of dynamic endpoint URLs (and how to implement them), among other topics. Chapter 2 offers another brief tutorial discussing the value of scheduling automated tests, with an image-by-image guide to doing so, including interactions with Android and iOS systems.
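
TestProject builds these test steps through its GUI rather than code, but as a rough sketch of what the underlying call looks like, here is a plain JavaScript (Node 18+) version of a GET request against NASA’s public image-search API, with a query parameter and a simple status check acting as the “test step.” The endpoint, query, and assertion here are my own illustrative choices, not taken from the tutorial.

```javascript
// Sketch of the kind of GET request the tutorial automates in TestProject's GUI.
const endpoint = "https://images-api.nasa.gov/search";
const params = new URLSearchParams({ q: "apollo 11" });

async function runGetTest() {
  const response = await fetch(`${endpoint}?${params}`);

  // A basic "test step": validate the HTTP status before using the body.
  if (response.status !== 200) {
    throw new Error(`Expected 200 OK, got ${response.status}`);
  }

  const body = await response.json();
  console.log(`Returned ${body.collection?.items?.length ?? 0} items`);
}

runGetTest().catch(console.error);
```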

From these introductory chapters, I was able to get a basic idea of how to use TestProject to design calls, execute tests, and access result reports. The other chapters in this TestProject tutorial cover more advanced API testing and validation flows, shell commands, scheduling API automation, and more. As an introduction to quality assurance testing and the course in general, this reading was intriguing and valuable for getting a sense of what an automated software testing tool looks like and how to use it in a basic way. Stay tuned to read more about the other chapters, software quality assurance and testing, and other exciting computer science topics!

Sources:
General TP Post: https://blog.testproject.io/2020/11/10/automating-end-to-end-api-testing-flows/

Chapter 1: https://blog.testproject.io/2020/11/10/basic-api-test-automation/
Chapter 2: https://blog.testproject.io/2020/11/10/api-test-automation-flows-combined-with-mobile-functional-test/

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

Software Architecture Patterns (Token Blog)

While I do not understand software architecture as well as I understand programming the backend of that same software, I still appreciate the look of its design. When I came across “The top 5 software architecture patterns: How to make the right choice,” written by freelance writer Peter Wayner, one of the things that caught my eye was how the software architecture patterns were described throughout the blog. While the microservices architecture was one of the only architectures I had already familiarized myself with, as I read through the others I became more interested in the strengths of the space-based architecture and the benefits it poses. The article gave a very clear understanding of all the architectures by showing their strengths and their weaknesses, making it easier to see where their limits lie.

I chose this article because I searched far and wide for a topic I understood the least but would still be inspired to read about through to the end. The article does a really great job of grabbing the reader, since the hardest part of trying to understand a software architecture is avoiding breaking your own code by choosing an architecture that poses a great risk of causing your program to crash unexpectedly. For the reader’s benefit, Wayner gives either a telling example or a visualization for each pattern, offering a simple explanation of what the structure is designed to do for your program.

My biggest takeaway after reading this article was that each architecture shapes your program in a way that helps you better understand what to do with it next. However, software architecture is still only the beginning of what you need to do to fully design your software. A piece of software may still need changes that an architecture alone cannot fix, but what an architecture should do is help you see the full picture of the work you have done. In the end, a software architecture is only a frame, and the program you create is best displayed within a structure suited to it.

https://techbeacon.com/app-dev-testing/top-5-software-architecture-patterns-how-make-right-choice

From the blog CS@Worcester – Elias' Blog by Elias Boone and used with permission of the author. All other rights reserved by the author.

REST API (Token Blog)

When I was first introduced to the concept of an API, short for Application Programming Interface, I never thought of it being used far outside the scope of computer science. However, as I revisited some software I used in the past and observed the software I have worked with recently, I was more than surprised by the fact that APIs are used universally across the Internet. For this blog, I explored a post called “10 Most Popular Frameworks For Building RESTful APIs,” written by Preet Kaur, a Developer Advocate at Moesif. In it, the author explained that when deciding on an API framework, you need to choose one built on a programming language you are familiar with, and one you can still use in spite of its shortcomings. Further down the post, she gave examples of popular API frameworks used by many developers; one of those frameworks was Express JS, a framework I am familiar with.
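
Since Express JS came up as the framework I know best, here is a minimal sketch of what a RESTful endpoint looks like in it. The resource, routes, and port are illustrative choices of mine, not examples from Kaur’s post.

```javascript
// Minimal Express REST API sketch: one resource with GET and POST routes.
const express = require("express");
const app = express();
app.use(express.json()); // parse JSON request bodies

const books = []; // in-memory store, for illustration only

// GET /books returns the whole collection
app.get("/books", (req, res) => res.json(books));

// POST /books adds a new entry and returns it with 201 Created
app.post("/books", (req, res) => {
  const book = { id: books.length + 1, title: req.body.title };
  books.push(book);
  res.status(201).json(book);
});

app.listen(3000, () => console.log("Listening on port 3000"));
```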

One of the biggest factors that led me to use this article is not only the different frameworks Kaur gives examples of, but also where you can find and learn about them. Reading the post got me hooked on the many types of API frameworks I could explore to better understand the specifics of REST APIs as I work toward a career in software engineering. Even though some of the frameworks in question are written in programming languages I am not as proficient in, I still believe this post helped me understand what building an API for applications, and beyond, really involves.

My biggest takeaway from this post is that the API framework you use to build an application or other software depends not only on your skill at programming or engineering, but also on what your goal is in choosing that framework for your creation. While this post was more general than the previous ones I read, I still hold the view that the API framework I use will only be as helpful as the skills I bring to it on my path toward creating the perfect software.

Reference: https://www.moesif.com/blog/api-product-management/api-analytics/10-Most-Popular-Frameworks-For-Building-RESTful-APIs/

From the blog CS@Worcester – Elias' Blog by Elias Boone and used with permission of the author. All other rights reserved by the author.

Architecting for Agility

Hey everyone, it’s Andi, and this time I’m presenting an article on decoupled software architectures. Essentially, decoupled architecture aims to break down monolithic applications into modular, isolated components that can operate and evolve independently. I wanted to talk about this concept because decoupling enables more agile, resilient systems that can better adapt to changing customer needs and scale more cleanly. The article was written by Arunkumar Ganapathy, a solutions architect focused on the design and development of software systems, with over 17 years of hands-on experience and strong skills in Java, Node JS, and the AWS technical stack. So, to start off, what is decoupled architecture? It’s a system design approach that tries to eliminate as many dependencies between application components as possible by enabling them to operate autonomously. This provides advantages like independent scalability and avoids the regression issues typical of tightly coupled architectures. As organizations move toward rapid digital transformation, an architecture of focused, isolated services becomes critical for adaptability and resilience. With users spread across many interfaces and microservice designs everywhere, building modular applications whose components can be maintained and updated independently is essential for handling change. By allowing developers to construct and use discrete, self-contained services, decoupled architecture transforms how modern agile software is envisioned and created.

Decoupled software architecture represents an important paradigm shift that computer science students should understand as they prepare to design and build the systems of the future. More modular system designs will be critical as applications scale up to handle the massive data and traffic volumes driven by AI, IoT, and cloud adoption. Understanding loose coupling principles provides valuable insight into creating codebases that avoid failures and unintended side effects when changes are introduced. Exposure to concepts like event streaming and publish-subscribe architectures gives examples of techniques students may employ in decoupled designs (see the sketch below). As tightly integrated monoliths become ever harder to maintain, transitioning to more flexible microservices-based approaches will be a reality for many computer science graduates. Getting early experience reasoning through decoupled architectures as students will pay dividends when building adaptable and maintainable production systems down the road. The skill of decomposing complex processes into small, cooperating components has broad applicability across software domains and will serve CS students well as the landscape continues to move in this direction.
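
As a small illustration of the publish-subscribe idea mentioned above, here is a toy sketch using Node’s built-in EventEmitter. The event name and handlers are my own invented examples; the point is that the publisher never references its subscribers directly.

```javascript
// Publish-subscribe sketch with Node's built-in EventEmitter.
const { EventEmitter } = require("events");
const bus = new EventEmitter();

// Subscribers register interest in an event; the publisher never knows them.
bus.on("orderPlaced", (order) => console.log(`Billing: invoice for ${order.id}`));
bus.on("orderPlaced", (order) => console.log(`Shipping: schedule ${order.id}`));

// The publisher only emits an event; billing and shipping stay decoupled
// and can be added, removed, or changed independently.
function placeOrder(id) {
  bus.emit("orderPlaced", { id });
}

placeOrder("A-123");
```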

As a student exploring different architectural patterns for connecting user-facing experiences to back-end logic, I found that this article on decoupled architectures opened my eyes to the immense benefits of modular system design. I gained new insight into concepts like architectures that enable faster responsiveness by propagating changes across isolated services. The examples of decoupled architectures powering complex platforms like Uber showed real use cases I could relate to from a front-end perspective.
Seeing how separated application components can evolve discretely, avoiding the regression issues that plague legacy systems, gave me a different perspective on the importance of loose coupling principles in sustainable software development. My biggest takeaway was understanding how standardized APIs serve as a stable contract facilitating coordination between discrete front-end and back-end elements, while still allowing for independent scaling, troubleshooting, and innovation.
As I go on to construct my own full-stack applications, I will apply what I learned about planning sturdy interfaces between autonomous services and handling new versions when dependencies change. Overall, this reflection captures specific learnings about technical concepts and design principles, as well as the expected impact on my future architectural thinking and implementation priorities. It includes both big takeaways and practical lessons relevant to keeping the front end and back end coherent in decoupled systems.

December 20, 2023
andicuni
cs-343, CS@Worcester

https://www.computer.org/publications/tech-news/community-voices/decoupled-architecture

From the blog CS@Worcester – A Day in the Life as a CS Blogger by andicuni and used with permission of the author. All other rights reserved by the author.

Building Trust in Open Source Software

Hey everybody! I chose the article “Boosting faith in the authenticity of open source software” to talk about how it affects present-day software and how we can build off it. The article starts off by presenting Speranza, a new system that allows users to verify the authenticity of open source software packages while preserving the anonymity of developers. It builds on an existing signing system called Sigstore, but uses a novel “identity co-commitments” cryptographic technique to prove that a trusted developer signed the software without revealing the developer’s identity. This approach addresses vulnerabilities in the software supply chain by ensuring the origin of downloaded packages while improving usability and privacy compared to traditional signing keys. The article shows how Speranza enables trust in open source software through automatic verification that packages are authentic and come from their maintainers.

Speranza presents an interesting case study for computer science students because it sits at the intersection of cryptography, real-world security issues, core computer science challenges, and the open source ecosystem so ubiquitous in the field. Specifically, the novel “identity co-commitments” technique aims to address the practical threat of vulnerabilities being introduced into software supply chains, whether through malice or mistake. By protecting developer identity, it aligns with both usability and privacy goals in computer science. The cryptographic research here verifies authenticity, speaking also to the broader challenge of establishing trust between producers and consumers of open source software. Commentary highlights the potential for Speranza’s automated integrity checks to transform that trust relationship. For CS students deeply embedded in open source as users and future contributors, understanding the latest developments that aim to strengthen security and privacy can inform both perspectives. More broadly, this research touches on the trade-offs around security, privacy, and usability that cut across domains in computer science.
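
To make the idea of a cryptographic commitment a little more concrete, here is a toy hash-based commit-and-reveal sketch in Node. This shows only the basic primitive with identities and values I invented for illustration; Speranza’s actual identity co-commitments use more sophisticated zero-knowledge-style techniques so the identity never has to be revealed at all.

```javascript
// Toy commit-and-reveal scheme with SHA-256 (illustrative only).
const crypto = require("crypto");

// Commit: hash the identity together with a random nonce, so the
// commitment reveals nothing about the identity on its own.
function commit(identity) {
  const nonce = crypto.randomBytes(16).toString("hex");
  const digest = crypto
    .createHash("sha256")
    .update(identity + nonce)
    .digest("hex");
  return { digest, nonce };
}

// Verify: recompute the hash from a revealed identity and nonce.
function verify(identity, nonce, digest) {
  const check = crypto
    .createHash("sha256")
    .update(identity + nonce)
    .digest("hex");
  return check === digest;
}

const { digest, nonce } = commit("maintainer@example.org");
console.log(verify("maintainer@example.org", nonce, digest)); // true
```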

As a student learning to build both front-end and back-end software systems, I found this article on the Speranza software authentication project well worth knowing about. I have relied heavily on open source code in assignments, often importing libraries without much thought about their integrity. This resource impressed upon me that you can never fully trust third-party code, even from established open source developers. Without an assurance like Speranza’s cryptographic identity commitments, any component in my software supply chain could be compromised.
I now understand the privacy preservation behind Speranza’s design. By avoiding revealing developer identities in the authentication process, the system overcomes adoption barriers and aligns the motives of creators and users of open source work. That commitment to privacy likely stems from an open source mindset that resists identity disclosure.

Reading about Speranza’s novel technical approach using zero-knowledge proofs was also clarifying from a software architecture perspective. Thinking through how I might include third-party open source functionality in future applications, I need to consider adding checks on the integrity and origin of imported code. Though Speranza itself deals with authentication before download, it inspired me to reflect on similar proofs of authenticity within a completed software project. If I could confirm the signed identities of any imported dependencies, I would be more confident that my project doesn’t have vulnerabilities introduced through my software supply chain. I still have much more to learn, but Speranza made me aware of a side of trustworthy software development I had never considered.

December 20, 2023

andicuni

cs-343, CS@Worcester

https://techxplore.com/news/2023-12-boosting-faith-authenticity-source-software.html

From the blog CS@Worcester – A Day in the Life as a CS Blogger by andicuni and used with permission of the author. All other rights reserved by the author.

CS-343 Wrap Up

APIs, or Application Programming Interfaces, serve as software intermediaries facilitating communication between diverse applications or devices. They are vital for interaction with web browsers, mobile devices, and third-party software. APIs operate through protocols, primarily over the internet using HTTP, and are likened to a mediator facilitating communication between a user and a backend system.

REST APIs, a user-friendly style of API architecture, use HTTP methods like GET, PUT, POST, and DELETE. REST API management involves four key components: API Design, API Gateway, API Store/Developer Portal, and API Analytics/Dashboard.

The principles of writing clean code emphasize using comments judiciously, breaking down methods into focused units, incorporating unit testing, and eliminating code duplication. Clean code enhances readability, efficiency, and scalability, and reduces the likelihood of introducing bugs.

Documentation is crucial for code maintenance and collaboration, encompassing low-level or inline documentation, high-level documentation, and walkthrough documentation. Conversely, anti-patterns like Spaghetti Code, Golden Hammer, and Boat Anchor hinder code maintainability and development.

Dev Containers, or development containers, offer a complete and isolated development environment accessed through SSH in preferred IDEs. They address setup configuration issues, standardize project build instructions, ensure isolation of development environments, enable consistency across teams, and simplify onboarding and training processes. Their adoption is beneficial for rapid development, prototyping, and addressing challenges associated with diverse local environments.

This is just a quick summary of what I have been through in CS-343. It was a very impactful semester that taught me many new things. I believe what I enjoyed most was working collaboratively in teams. I find I always work better in a structured group and retain information much better. Towards the end of the semester, when we started to work with APIs, specifically the backend and frontend, I noticed heavy usage of JavaScript. The homework became a little more challenging because of this, and I am going to use my break to get a crash course in JS. I am also working on a personal project where I will soon be implementing a frontend GUI, so the timing works out great.

I also cannot wait for the capstone class next semester, as the whole idea behind the class just sounds perfect. I am a very hands-on guy, so finally getting to work on an actual project for a semester with a team, implementing what we have learned, will be quite the learning experience. It will also be a great way to prepare for internships and job opportunities in the future.

Sources:

https://www.freecodecamp.org/news/standardize-development-environment-with-devcontainers/

https://medium.com/@krunalchauhan_/article-worth-reading-on-what-is-an-api-what-is-a-rest-api-and-deep-diving-into-rest-api-fea074dacaed

https://www.linkedin.com/pulse/benefits-clean-code-your-application-development-daniel-donald/

https://swimm.io/learn/code-documentation/code-documentation-benefits-challenges-and-tips-for-success

https://www.atlassian.com/agile/project-management/scrum-values

From the blog CS@Worcester – CS: Start to Finish by mrjfatal and used with permission of the author. All other rights reserved by the author.

Dev Container Basics

What is a Dev Container?

A Dev Container, or development container, encapsulates a complete development environment accessible through Secure Shell (SSH) in your preferred Integrated Development Environment (IDE). It overcomes workflow impediments such as low performance and limited bandwidth by providing an isolated environment with a standardized configuration stored in a .devcontainer.json file. This file, written in the JSON with Comments (jsonc) format, allows customization for specific needs, such as adding tools or extensions.
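
As a minimal sketch, a devcontainer.json might look something like the following. The image, extension, port, and command here are illustrative choices of mine, not requirements of the format.

```jsonc
// .devcontainer.json — a minimal, illustrative configuration
{
  "name": "my-project",
  // Container image the environment is built from
  "image": "mcr.microsoft.com/devcontainers/javascript-node:20",
  // Editor customizations, e.g. preinstalled VS Code extensions
  "customizations": {
    "vscode": {
      "extensions": ["dbaeumer.vscode-eslint"]
    }
  },
  // Ports to forward and a command to run after the container is created
  "forwardPorts": [3000],
  "postCreateCommand": "npm install"
}
```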

Why Use It?

Addressing Setup Configuration Issues: Maintaining and managing local environments involves the use of various tools and configurations, leading to a cumbersome process. Standardizing this process with a unified approach can significantly save time and streamline setup configurations.

Standardizing Build Instructions of the Project: Documenting dependency upgrades and changes can be challenging. Utilizing code rather than extensive documentation simplifies the process, enabling anyone to ship without being hindered by the “it works on my machine” dilemma.

Ensuring Isolation of Development Environments: Developers often work on multiple projects simultaneously, each with its own complexities. Isolating environments prevents conflicts with other software on the host system, creating a clean, controlled space for development tasks.

Enabling Consistency Across Development Teams: Achieving portability across diverse teams is complicated by varying technologies and configurations. Implementing a standardized development environment ensures uniform configurations among team members, minimizing inconsistencies from individual machine differences.

Simplifying Onboarding and Training Processes: Quickly launching environments in isolation facilitates learning new languages or frameworks. This approach is particularly beneficial for onboarding and training processes, keeping machines clean and allowing for smooth presentations and workshops, where everyone can follow along without interruptions caused by missing tools or confusion mid-step.

Dev Containers in a Real-World Environment

  1. Standardized Development Environments:
    • Dev Containers provide a standardized and reproducible development environment, ensuring that all team members work with the same configuration. This minimizes the “it works on my machine” issue and streamlines collaboration.
  2. Setup Configuration Management:
    • Addressing setup configuration issues is simplified with Dev Containers. They help in managing dependencies, tools, and configurations uniformly, reducing the time and effort required for setting up development environments.
  3. Version Control Integration:
    • Dev Container configurations are often stored in version control systems (e.g., Git), ensuring that the development environment is versioned along with the code. This enhances collaboration and makes it easier for team members to switch between branches or versions seamlessly.

Personal Experience

Dev Containers are something I have only recently learned about, and I feel like the need for them is understated. The overall idea behind such a tool is to create a simple and consistent environment for a team to work in. Anytime I work on a project, I stress the need for one, as it eliminates many common problems teams face in the early stages of development. I also believe that when I start my professional journey, I will find these containers to be standard within the teams I work with.

Sources:

https://www.freecodecamp.org/news/standardize-development-environment-with-devcontainers/

From the blog CS@Worcester – CS: Start to Finish by mrjfatal and used with permission of the author. All other rights reserved by the author.

Software Framework

After reflecting on the previous blog I wrote about coding, I pondered what would happen if I were to work with software, and I immediately wanted to consider something about how software itself is made. For this blog, I decided to focus on software frameworks, which many programmers and organizations make great use of in the workforce. I took great interest in an article written by Tiago Monteiro, fittingly titled “What is a Software Framework?” It starts by giving the reader a simple analogy in which a man uses an axe to cut down trees and then cuts the wood for his family’s fireplace. When the chainsaw is introduced into this analogy, it relates well to programming: you either start with fresh new code or make use of an existing framework built by someone else. Reading further into the article, I found that there are advantages and disadvantages to using a software framework. While a framework is how anyone might start forming their own software, you do not have much freedom to alter the framework itself, which limits its use to the functionality it was designed for.

I chose this article for my blog because I believe it might help me understand a bit more about software engineering. Although the author is an experienced computer engineer, his article could still give me an idea of what to visualize when I start to learn more about designing software. Software needs a framework to maintain its functionality so it does not fall apart, just as a computer will stop working if one of its most important parts burns out or starts to malfunction at the end of its life.

My biggest takeaway from this article is that there are many software frameworks with many functionalities, each useful in different ways. Just as we create frameworks to integrate smart machines into our appliances and everyday tools, we do the same when creating frameworks in our software. We give life to our software the same way we create new computers: with a spark of new and innovative programs for the best possible user and technological experience.

Reference: https://www.freecodecamp.org/news/what-is-a-software-framework/

From the blog CS@Worcester – Elias' Blog by Elias Boone and used with permission of the author. All other rights reserved by the author.

some design principles

We’ve covered a great many design principles during the course of this semester in Software Construction, some of which I’ve even covered in these blog posts (the Law of Demeter comes to mind). For the end of the semester, I wanted to do a little review of some of the principles that I don’t recall all too well.

Starting off with one that isn’t too complicated, I wanted to briefly refresh on the YAGNI (or You Ain’t Gonna Need It) principle. According to a blog post by Tatum Hunter of Built In, the practice entails building features only when they are needed. I found this post fairly insightful in the way it goes over how customers might want a large-scale feature now, so you may have to talk them down to a more realistic goal to avoid adding functionality that won’t be necessary, or to say no.

The principle of “striving for loosely coupled designs between objects that interact” is essentially implementing the observer design pattern. I believe I went over this in a previous blog post, but I did find another post by Harold Serrano that provides a brief summary as well. Serrano states that the principle means that objects should be able to interact with each other, but shouldn’t know much about each other.
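
As a quick sketch of what Serrano describes, here is a minimal observer setup in JavaScript (my own toy example, not Serrano’s code): the subject broadcasts to registered observers without knowing anything about them beyond their update method.

```javascript
// Minimal observer pattern sketch: the subject only knows that observers
// implement update(); it knows nothing else about them.
class Subject {
  constructor() {
    this.observers = [];
  }
  subscribe(observer) {
    this.observers.push(observer);
  }
  notify(data) {
    this.observers.forEach((o) => o.update(data));
  }
}

const weatherStation = new Subject();
weatherStation.subscribe({ update: (t) => console.log(`Display: ${t}°F`) });
weatherStation.subscribe({ update: (t) => console.log(`Logger: recorded ${t}`) });
weatherStation.notify(72); // both observers react; neither is known to the subject
```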

For the principle of “encapsulating what varies,” a simple blog post from Alex Kondov explains what this means and why we do it. Essentially, we want to encapsulate the parts of the code we write that are prone to change, so that we don’t have to change a whole block of code for something that should be a one-line fix. This makes our code adaptable and leads to cleaner code.
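
Here’s a tiny sketch of the idea in JavaScript, with names invented for illustration: the pricing rule that’s likely to change is pulled out behind one function, so swapping it is a one-line change instead of edits scattered through the checkout logic.

```javascript
// Encapsulate what varies: the discount policy is the part prone to change,
// so it lives behind one function instead of being inlined everywhere.
const discountPolicies = {
  none: (total) => total,
  holiday: (total) => total * 0.9,
};

function checkout(items, policy = discountPolicies.none) {
  const total = items.reduce((sum, item) => sum + item.price, 0);
  return policy(total); // swapping policies never touches this logic
}

const cart = [{ price: 40 }, { price: 10 }];
console.log(checkout(cart)); // 50
console.log(checkout(cart, discountPolicies.holiday)); // 45
```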

Inversion of control is used for abstraction simplicity. Kent C. Dodds explains that we want our abstraction to have less responsibility while the user has more. He uses an example of a filter method that uses inversion of control, and one that doesn’t. The difference is that when the control is passed into the method rather than handled in the method, there is a lot less going on within the method, which increases simplicity. I found this really interesting because I was thinking about doing this for our GuestInfoBackend homework, before I kind of lost interest because I’m not too well-versed with JavaScript, and I didn’t have much time to do it, hah. Needless to say, it’s a really interesting tool in my opinion and I find it very practical.
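
In the spirit of Dodds’ filter example (my own simplified reconstruction, not his exact code), compare a filter that hard-codes its options with one that inverts control by accepting a predicate from the caller:

```javascript
// Without inversion of control: the function owns every filtering rule,
// and each new rule means another option and another branch.
function filterArray(array, { removeNull = false, removeZero = false } = {}) {
  return array.filter((el) => {
    if (removeNull && el === null) return false;
    if (removeZero && el === 0) return false;
    return true;
  });
}

// With inversion of control: the caller supplies the rule, so the
// function stays tiny and never needs new options.
function filterWith(array, keep) {
  return array.filter(keep);
}

const data = [0, 1, null, 2];
console.log(filterArray(data, { removeNull: true })); // [0, 1, 2]
console.log(filterWith(data, (el) => el !== null && el !== 0)); // [1, 2]
```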

This semester, we’ve gone over a great many ways to ensure proper software design, and these practices and skills are a great way to streamline your thought process. I think the important thing, as I’ve said before, is to not treat these as concrete rules, but to consider them in higher priority before writing code. Sometimes, you can’t get the perfect solution that fits all the proper software design principles, and that’s okay. It’s a matter of making sure the code is solid, pun intended.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

YAGNI

“You Aren’t Gonna Need It” is an agile design principle from Extreme Programming that argues against building code for future features before they are needed. Martin Fowler explains in his blog post “YAGNI” (https://martinfowler.com/bliki/Yagni.html) how building these features early typically adds more time than it saves. Fowler breaks down the individual costs of these features and where they add up, along with how those costs vary between three different scenarios. The best scenario is that the presumptive code is still required and becomes part of the project when it is finally needed. The next scenario is that the feature is scrapped from the project entirely. Arguably the worst scenario is that the feature is eventually added, but the code that was built in the past was built incorrectly and needs to be updated. Each of these scenarios carries different costs, but even in the best one, time is likely lost between deciding to build the feature and the moment it is finally needed. Assuming that time can be saved by adding a future feature early, just because it relates to something you are currently doing, will only add more work when it comes to debugging, refactoring, and reading the code.

  • Cost of Building – All effort and time programming, analyzing, and testing the feature.
  • Cost of Delay – The time spent on the presumptive feature could have been spent on a more relevant and required feature, delaying the release and creating real monetary and time-based costs.
  • Cost of Carry – Adding unnecessary code to a project only makes it harder to read and understand, slowing down further production.
  • Cost of Repair – Assuming the feature is eventually added to the project, the way it was built in the past may not be right anymore and needs to be repaired or redone.
  • Cost of Removal – In the event that the feature is scrapped and removed from the project, the code will also need to be removed, or it will only continue to accumulate the cost of carry.

Following the principle of YAGNI means time spent building code should be spent on code that is relevant to the current state of the build. This does not mean that refactoring violates YAGNI, since that time is spent making the software easier to modify.

As projects become larger and more complicated it is important to keep my code clean, organized, and on schedule. The best way to do this is to ensure features are added only when they are needed to not clutter the space with unused code.

From the blog CS@Worcester – CS Learning by kbourassa18 and used with permission of the author. All other rights reserved by the author.