Category Archives: Week-2

Software testing

In the fast-evolving world of software development, the significance of rigorous testing cannot be overstated. A recent blog post from The Code Camp titled “Software Testing and Why It’s Important” delves into this critical aspect, shedding light on its indispensability in the development process. This article serves as a comprehensive guide, explaining the necessity of testing, its types, and methodologies, thereby making it an invaluable resource for developers and testers alike.

The Essence of Software Testing

Software testing stands as a cornerstone of development, ensuring that applications perform as intended, are secure, reliable, and user-friendly. The article articulates how testing not only identifies bugs but also secures software against potential cyber threats, a growing concern in today’s digital age. By involving real users, testing guarantees that the software offers a seamless user experience, an aspect critical to the success of any application.

Why This Article?

I chose this resource because it offers a profound understanding of testing’s role in the development lifecycle, a topic directly related to our coursework. The article’s clear explanation of various testing types, such as unit, integration, system, and acceptance testing, complements our learning, providing practical insights into their applications.
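
To connect this to the coursework, here is a minimal sketch of what a unit test looks like in Java, assuming JUnit 5 is available; the Calculator class is a made-up example rather than anything from the article. A unit test exercises one small piece of code in isolation, while integration, system, and acceptance tests exercise progressively larger slices of the application.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical class under test
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {
    // A unit test checks one small unit of behavior in isolation
    @Test
    public void addReturnsSumOfTwoNumbers() {
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3)); // fails and flags a bug if add() is wrong
    }
}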

Personal Reflection

Reflecting on the content, I was particularly struck by the emphasis on security and user-experience testing. In an era where digital threats are rampant and user expectations are high, these aspects of testing assume paramount importance. This article reinforced my understanding of the multifaceted nature of testing, extending beyond mere bug detection to encompass a holistic approach to creating robust, secure, and user-centric software.

Application in Future Practice

Moving forward, I plan to integrate these insights into my testing strategies, particularly the early involvement of real users through acceptance testing and the rigorous assessment of security vulnerabilities. Emphasizing these areas will not only enhance the quality and security of the software I contribute to but also ensure a superior user experience. Security is something I have wanted to learn more about for a long time, and I think testing is a good way to move toward that.

Conclusion

The insights gained from “Software Testing and Why It’s Important” are instrumental for anyone involved in software development. It underscores the critical role of testing in delivering high-quality, secure, and user-friendly software, aligning perfectly with the principles we’re learning in our course. For those interested in exploring this topic further, the full article is available at The Code Camp, offering a deeper dive into the vital world of software testing.

From the blog CS@Worcester – Abe's Programming Blog by Abraham Passmore and used with permission of the author. All other rights reserved by the author.

Visual Studio Code (VS Code): A Powerful Tool for Developers

Considering Visual Studio Code is a staple of our class, I wanted to see if there was anything more to this application that serves as the fundamental building block of our coursework. From what I've learned, it's a highly versatile and widely embraced code editor in the realm of software development. Developed by Microsoft, it is free to use and built on an open-source codebase, and it caters to developers on Windows, macOS, and Linux by providing a unified environment for writing, editing, and troubleshooting code.

Some of VS Code's key features include the aforementioned cross-platform support (Windows, Linux, and macOS), broad language support out of the box, and extensions that add support for languages it doesn't handle natively. A robust community contributes to the vast selection of extensions, and users can develop and publish their own, making the editor a dynamic ecosystem that benefits everyone.
VS Code also ships with practical built-in features, ranging from an integrated terminal to Git support and a debugger. Beyond those, its applications are broad: VS Code finds use among data experts, machine learning practitioners, systems and cloud developers, and those working with various cloud providers.
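
As a small illustration of how that unified environment can be tailored to a project, here is a hedged example of a workspace settings file (.vscode/settings.json); the specific values are just assumptions for demonstration, not recommendations from the article.

{
    // Format files automatically when saving
    "editor.formatOnSave": true,
    // Use four spaces per indentation level
    "editor.tabSize": 4,
    // Save files automatically after a short delay
    "files.autoSave": "afterDelay",
    // Trim trailing whitespace on save to keep Git diffs clean
    "files.trimTrailingWhitespace": true
}

VS Code reads this file per workspace, so a team can commit it alongside the code to keep editor behavior consistent across contributors.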

I chose this text because it provides valuable insight into the utility of Visual Studio Code, a program we use daily in our class. In software process management, having a variety of development tools at your disposal is critical, and understanding the capabilities and advantages of a tool like Visual Studio Code will prove useful in ensuring effective software development on most projects. Performance and versatility are key considerations when choosing development tools, and these factors are closely tied to software process management's goals of efficiency and effectiveness.

Taking this into account, I see these development tools not just as a means to write and edit code but as integral components that can streamline the software development process. This aligns with practices in software process management that place emphasis on version control, debugging, and code quality assurance. This knowledge reaffirms that selecting tools like VS Code is not merely a technical decision but one with broader implications for team productivity, code quality, and project success.

https://shiftmag.dev/vs-code-171/

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

What is Concurrency?

This week I wanted to learn more about concurrency, because when I first heard the term I thought it had to do with money, and it turns out it means something quite different. So what is concurrency and why is it important? Concurrency is the execution of multiple instruction sequences at the same time. It happens when an operating system has multiple threads running in parallel, and those threads communicate through shared memory. Sharing those resources is also what causes problems like deadlocks. Concurrency raises challenges such as coordinating the execution process and scheduling to maximize throughput.

There are two kinds of resource sharing that make concurrent execution possible: logical resource sharing, such as a shared file, and physical resource sharing, such as shared hardware. There are also two types of processes executing in an operating system: independent and cooperating processes. An independent process shares no state with any other process, so its result depends only on its input and will always be the same for the same input. A cooperating process is the opposite: it shares state with other processes, its result may not be the same for the same input, and terminating it can affect other processes as well.

Most systems support at least two operations on processes: process creation and process deletion. For example, in process deletion a parent may terminate the execution of one of its child processes if the task assigned to the child is no longer needed; in process creation, a parent and child can execute concurrently and share common resources. Interleaved and overlapped processes are both examples of concurrent processes, and their relative speed of execution cannot be predicted; it depends on the activities of other processes and on how the operating system schedules them. Concurrency makes it possible to run multiple applications at once, with better response time and better overall performance.

The source I used goes into more detail about concurrency and explains its pros and cons very well. I picked this topic because I thought it was interesting how concurrency lets resources that one application isn't using be used by another application instead. It's also interesting that, without concurrency, everything would take longer to run to completion, because the first application would have to finish before another one could start.
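
To make this concrete, here is a minimal sketch in Java (my own example, not from the GeeksforGeeks article) of two threads sharing memory: both increment the same counter concurrently, and the synchronized keyword is what coordinates their access to the shared resource so updates aren't lost.

public class SharedCounter {
    private int count = 0;

    // synchronized coordinates access so two threads can't update count at the same time
    public synchronized void increment() {
        count++;
    }

    public int getCount() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter counter = new SharedCounter();

        // Both threads run the same task and share the same counter (shared memory)
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.increment();
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join(); // wait for both threads to finish before reading the result
        t2.join();

        System.out.println("Final count: " + counter.getCount()); // 200000 with synchronization
    }
}

Removing synchronized from increment() typically makes the final count come out below 200,000, which is exactly the kind of coordination problem concurrent processes have to manage.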

Source:

https://www.geeksforgeeks.org/concurrency-in-operating-system/

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.

Patience is Key

Over the weekend, I spoke with a retired electrical engineer, Bob. While we were chatting, the topic of software somehow came up, along with the difference between today's programming and its past. We discussed how much things have changed from the 1960s to the present day. Bob had gone to WPI in the mid-to-late '60s as an engineering major. He enjoyed math and naturally gravitated toward the engineering field, but one day he realized how programs could help him compute highly complex math problems. Like everyone, he had to start his journey somewhere, and the language best suited for him at the time was Fortran.

Fortran is a very old language, and I honestly didn't know much about it beyond that it was created in the era of punch cards and operators who compiled your programs for you. Bob would place his punch cards into a mailing box marked with the last three digits of his Social Security number. He said that if he was lucky, the code was run the next afternoon; normally it took about two to three days to get results back. Once the cards came back, if there was, say, a period instead of a comma, a message would read "Program Terminated" or something along those lines. That is when the debugging process began, by carefully examining the code above the "terminate" message. Once the issue was found, he would fix it and start the waiting process all over again.

Today, we can run code in seconds and debug, fix, and re-run it in minutes. I've spent six hours in the past incrementally fixing and building a project for class, and looking back I have a newfound appreciation for the tools and languages that aid us in programming today. But this made me think: what is Fortran, and how is it used now? To my surprise, while the language seems to be in limbo, there is still a strong community surrounding it. I found an article on Medium.com detailing a year-long effort to revitalize Fortran and attract new programmers to it. Over that one year, work was done on an improved standard library (stdlib), a lot of focus and progress went into creating a Fortran Package Manager (fpm), and a website was used to bring the community together and help retain new learners instead of letting them struggle alone.

While the modernization of the language still has some ways to go, the patience and commitment of the contributors to the stdlib, fpm, and website show how patience is key to creating the best possible end product. This reminds me of the saying "Slow is smooth and smooth is fast," which resonates with software developers, since the moment you rush yourself is when things end up half-baked and many issues arise. I should take my time more; that way I can catch small mistakes before they snowball into more complex issues.

Article Link: https://medium.com/modern-fortran/first-year-of-fortran-lang-d8796bfa0067

From the blog CS@Worcester – Eli's Corner of the Internet by Eli and used with permission of the author. All other rights reserved by the author.

code review, what it is and why it matters

For my first blog post for CS-348 and in general (in terms of actual content), I wanted to look into code review. I already had an inkling as to what it could entail, but I wanted to know what sorts of techniques and tools are used in looking over our peers’ (and our own) code.

For this blog post, I consulted a post on SmartBear to get a better understanding of it all. The post explains why we need code review: to reduce the excess workload and costs that unreviewed code can cause when it gets pushed through. It also describes four common approaches to code review used today (which, it notes, are much improved over methods used in the past): email threads, pair programming, over-the-shoulder code review, and tool-assisted reviews.

An email thread offers versatility but sacrifices the ease of communication you get in person. Pair programming is the practice of two people working on the same code at the same time, which is great for mentoring and for reviewing while coding, but it doesn't give the same objectivity as other methods. Over-the-shoulder reviews simply involve having a colleague look over your code at your desk, which, while fruitful, doesn't produce as much documentation as other methods. Lastly, tool-assisted reviews are also straightforward, using software to assist with the review process.

The SmartBear post goes on to say that tracking code reviews and gathering metrics helps improve the process overall, and should not be skimped out on. Some empirical facts from Cisco’s code review process in 2005 are given as well. According to an analysis of it, code reviews should cover 200 lines of code or less, and reviews should take an hour or less for optimal results. Some other points are given as well if you visit the post.

Considering most of my ‘career’ has been independent coding (that is, coding as the sole contributor), this was rather interesting to me. I’ve done code reviews for my peers, helping them with assignments and the like, while I’ve only really utilized tools and software to assist myself. It’s interesting to see how something as simple as looking over someone’s code on their computer is such an important step in the software development process, but it certainly makes sense. I also wonder how much the code review process has changed since the popularization of AI companions such as ChatGPT and Github’s Co-Pilot. Perhaps these tools have made code review with our peers less important, but I wonder if it’s more important to have our peers second-guess the AI’s suggestions in case of mistakes. Nonetheless, having a solid grounding of the actual ramifications of code review will prove very useful during my programming career, I am sure.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

Week of September 18, 2023

This week, I wanted to make a post showcasing some examples of documentation for free, open source software. Comprehensive documentation is essential for any software project, so I want to see what useful documentation looks like. I was inspired to make this post when I found myself in need of a new podcast app for my Android device. The one I had been using was no longer refreshing my subscribed podcasts when I opened the app, and I wasn’t able to load the episode lists of any shows. I needed a new podcast app, but I didn’t immediately want to download the Google Podcasts app that was at the top of the search results on their Play Store. I understand Google collects user telemetry and data from their apps, and I didn’t want Google to connect advertising data to my account from the ads many podcasts read from their sponsors. Ideally, I wanted a free and open source app I could use so I could feel more secure in my data privacy.

From Opensource.com, the definition of open source software is “software with source code that anyone can inspect, modify, and enhance.” Many open source projects are supported by communities of volunteers connected over the Internet. The benefits of open source software include stability over time, meaning that because the project can be maintained indefinitely by anyone, the software may remain compatible with modern systems for a longer time than closed source software. Open source software also promotes security for end users. Since the software’s source code is openly accessible, there is a greater chance that undesirable code is deleted or corrected once discovered.

Large-scale projects that require collaboration are supported by extensive documentation for end users. The podcast app I ended up choosing, AntennaPod, has a simple-to-navigate documentation page that begins with the basic act of subscribing to a podcast and ends with instructions for podcast creators on how to have their show listed in the app through existing podcast directories. One interesting section I found was an explanation of centralized versus distributed podcast apps. Centralized apps are always in communication with a central server, and content is delivered from that server to your device. In contrast, distributed apps send requests to the podcast publishers directly and do not contact a central server. This approach allows the developers of the app to devote more resources to maintaining and iterating on the app instead of maintaining a server. Distributed apps also protect user privacy, since there is no interaction with a central server that could provide an opportunity to collect user data; the app developers don't have access to information like which users are subscribed to which shows. This decentralized, distributed approach also helps protect against censorship, because there are multiple sources to download shows from instead of one central server owned by one entity. Likewise, the app will continue to function even if development ceases, whereas a centralized app will stop functioning if the central server shuts down.
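
To make the distributed model concrete, here is a minimal sketch in Java (my own illustration, not code from AntennaPod) of what "contacting the publisher directly" amounts to: the app downloads the show's RSS feed straight from the publisher's own URL, with no central server in between. The feed URL below is a placeholder.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DirectFeedFetch {
    public static void main(String[] args) throws Exception {
        // Placeholder feed URL: in a real app this comes from the user's subscription list
        URI feedUrl = URI.create("https://example.com/podcast/feed.xml");

        // The request goes straight to the publisher's server -- no central middleman
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(feedUrl).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The RSS XML lists the episodes; a podcast app would parse this locally
        System.out.println(response.body());
    }
}

A real app would then parse the returned XML locally to build the episode list, which is why no central service ever needs to know which shows a user follows.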

Sources:

From the blog CS@Worcester – Michael's Programming Blog by mikesprogrammingblog and used with permission of the author. All other rights reserved by the author.

AHK as a Developer Tool

AutoHotKey Scripting Software

AutoHotkey (AHK) is a powerful, versatile, free and open-source scripting language for Windows operating systems exclusively. It provides an environment where users can create custom scripts to automate repetitive or menial tasks, build custom GUIs, manipulate windows, files, and applications, and create custom hotkeys, macros, and key rebinds. Users can quickly create and tailor scripts toward the specific tasks they face to significantly improve their overall efficiency and minimize errors. In essence, AHK can be used to automate and otherwise enhance a user's ability to carry out software development and other processes from start to finish.

Upon downloading AHK, users will find that the main dashboard GUI has several useful features to help them get started and to support script creation in general, including a compiler, a link to the help files, settings, and a Window Spy tool for extracting application window data. Using Window Spy, users can easily identify a window's "ahk_id" to hook into that window and send input directly to it using AHK functions such as ControlClick, which sends mouse input to a given client window at specified x-y coordinates. Conveniently, the Window Spy tool also makes it easy to identify the on-screen/client x-y coordinates of the current mouse position. There are similar functions for sending strings directly (rather than clicks).

One of the most intriguing features of AHK which led me to pursue learning it on a deeper level is its capacity to directly interact with application client windows and any separate application GUI’s (including mouse and key inputs, minimizing/maximizing, setting window focus, and more) without focusing on them and interrupting other active processes. This is extremely beneficial for testing, data collection/recording and more – for example testing may be run in the background with the results recorded to an output file whilst the user is actively working on other tasks independently. Furthermore, AHK scripts can easily be set to repeat at a specified time period; the previously mentioned background testing/recording can be called upon to execute every X minutes with zero input from the user or interference on their tasks. Increased convenience and efficiency with less human error or interaction!

Some other functions in AHK which I have read or seen videos about and hope to start implementing soon include PixelSearch and ImageSearch. PixelSearch is a versatile tool of the form: 

PixelSearch(&OutputVarX, &OutputVarY, X1, Y1, X2, Y2, ColorID, Variation)

It searches for a pixel of the specified hexadecimal ColorID within the specified screen region and returns output variables containing the coordinates of the first matching pixel found. Those outputs can then easily be used in other functions, and this method synergizes well with other tools that mark certain occurrences with a color highlight or other marker. The ImageSearch function works similarly, but takes an image input (easily generated with the Window Spy tool) and searches the given coordinate range for an occurrence of that image. An interesting component of both PixelSearch and ImageSearch is the Variation parameter, visible near the end of the PixelSearch skeleton above. It allows some variance in the color or image being searched for, increasing the likelihood of finding the target even when minor screen or image shifts have occurred, which can be common depending on the application or instance.

There are countless other features and functions in AHK which I have not yet gotten a chance to learn about. If you or a friend/colleague have experience with AHK functions, feel free to reach out with questions/discussion/advice to my email at jelbirt@worcester.edu !

Information/Source Links:

Post discussing use of AHK for coding: https://python.plainenglish.io/coding-faster-with-autohotkey-453a2face9af
AHK Home site: https://www.autohotkey.com/
AHK Method Documentation –
ControlSend: https://www.autohotkey.com/docs/v2/lib/ControlSend.htm
ControlClick: https://www.autohotkey.com/docs/v2/lib/ControlClick.htm
PixelSearch: https://www.autohotkey.com/docs/v2/lib/PixelSearch.htm
ImageSearch: https://www.autohotkey.com/docs/v2/lib/ImageSearch.htm

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

CS-348, CS@Worcester Week 2

The topics I have been learning in week 2 are Git, GitHub, and FOSS communities. First, we focused on how FOSS communities use Git and GitHub together to share their work. We also focused on working in a local repository using branches and commits and then upstreaming changes with a pull request. Finally, we learned how to keep the local and origin repositories synchronized with the project's upstream repo.
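
As a rough sketch of what that synchronization looks like on the command line (assuming the project's default branch is called main, your fork is the origin remote, and the upstream URL below is just a placeholder), the typical Git commands are:

git remote add upstream https://github.com/UPSTREAM-OWNER/PROJECT.git   # register the upstream repo (one time)
git fetch upstream        # download the latest commits from upstream
git checkout main         # switch to your local main branch
git merge upstream/main   # bring local main up to date with upstream
git push origin main      # push the updated main to your fork (origin)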

I found a blog that gives a general definition of upstream, which relates to the content I learned in my CS-348 class this week.

URL:
https://www.redhat.com/en/blog/what-open-source-upstream

In this blog, they talk about what an upstream is, how it relates to enterprise open-source products, and why upstreams matter to your organization.

What is an upstream?

Upstream refers to the flow of data within information technology, particularly in open-source projects. An upstream project serves as the precursor to other projects and products, with contributions flowing from upstream to downstream, and users may receive releases or code directly from the upstream. So, why are upstreams important? They are important because that's where the source contribution comes from. Each upstream is unique, but generally the upstream is where decisions are made, where contribution happens, and where the community for a project comes together to collaborate for the benefit of all parties. Work done at the upstream might flow out to many other open-source projects. The upstream is the focal point where collaborators do the work, and it is so much better when all the contributors work together.

From the blog CS@Worcester – Hong Huynh-CS348-WSU by hhuynh3 and used with permission of the author. All other rights reserved by the author.

The White Belt

I have a very strong belief that there is always something new to learn, or at least something more to learn. This is how I can keep myself busy for hours at a time. I haven't hit a point in my life where my ability to improve has fallen short because I can't acquire new skills; even if I know everything about a certain subject, I will always try to perfect what I know or dive into something else that hasn't been explored yet. For example, when I was 17, I had played a certain videogame for almost 4,000 hours. I had been playing it since I was 10, and some people considered me good at the game because I had put in the time to learn everything about it. But while playing with people who had done the same, I realized that my game was flawed and I still needed to work on some things. This is what I mean: being 'good' at programming isn't good enough.

In this pattern, the author puts the reader in the position of someone who is quite good at their job. You are the go-to person for any solution or situation that comes up, and you are so good that you find it difficult to improve beyond what you already know.

What I find interesting about this pattern is that the author explains the problem as not really being a problem. As I described in my first paragraph, it's hard to know everything about a certain subject, because there is always something to learn. The author goes on to explain that the mindset of not knowing enough should be the objective. Believing you already know everything, especially at such an early stage in your career, is only going to halt your ability to improve and adapt. I'm not at this point in my programming journey yet, but I know that at some point I'll get there, so I feel that once I reach that point in my career, it will be best to come back to this pattern as a reminder of what I should do next in my journey.

Sources:

Hoover, Dave H., and Adewale Oshineye. Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman. O’Reilly, 2010.

From the blog CS@Worcester – FindKelvin by Kelvin Nina and used with permission of the author. All other rights reserved by the author.

week-2 from the book

Hello, here is a blog for the second week. I'm starting to read chapter 2 of Apprenticeship Patterns. While reading, I found one of the most helpful patterns to be "Record What You Learn." It is nice to look back at your progress from start to finish, including the struggles of learning something new every day; keeping track of things helps you improve and work more efficiently. There is another pattern I would somewhat disagree with, "Reflect as You Work," because it makes me have doubts like "why didn't I do this earlier?" or "how could I not have thought of that?" Regardless, it is nice to see what you have learned and done in order for this to work.

Has the practice caused you to change how you think about your intended profession or how you think you will work?

For developers, using the patterns from this book can have several benefits. One benefit is advancing their understanding and proficiency in software engineering; lifelong learning helps them keep up to date on software engineering trends and technologies. The book even helps them gain more self-assurance in their skills by offering a collection of best practices and productivity-boosting strategies.

Those who want to become experts in software development must be prepared to devote themselves to continuous learning and practice, because the industry is challenging and undergoing rapid change. In this process, focused practice and reflection are both essential steps. Reflection entails taking time to pause and think about one's knowledge and practice, evaluating what one already knows and what one might do to improve. It is a crucial stage in acquiring mastery, since it enables a person to assess their abilities and pinpoint areas for development.

Purposeful practice and reflection lay the groundwork for mastery in software development, since they both emphasize enhancing knowledge and skills over time. Deliberate practice, in turn, means setting aside time to practice and hone abilities, gradually building them up over time. To attain mastery, one must engage in this kind of exercise, since it forces a person to pay attention to their areas of weakness and invest the time to strengthen them.

From the blog Andrew Lam’s little blog by Andrew Lam and used with permission of the author. All other rights reserved by the author.