Category Archives: Week 12

Software testing

Software testing is verifying that a software product performs the way it is designed to. The main benefit of software testing is preventing errors, or bugs, from making it into a program. When doing software testing, it is best to test at the start of a build, during the build, and after the deployment or release of a program. Through software testing, developers and companies can ensure there are no defects or issues before releasing or promoting a product. If there are bugs or defects in a company’s products, it is going to lose customers and money. Some of the different types of software testing include acceptance testing, unit testing, and security testing. Acceptance testing ensures that a whole program runs as required and without errors. Unit testing tests individual parts of a program instead of the whole program. In my Unix systems programming class, I was exposed to unit testing in homework projects, where I had to write parts of a program, receive a grade for each part, and couldn’t move forward until receiving full credit on a part. Unit testing can lead to fewer errors than acceptance testing alone; it is better practice to test your code in small pieces at a time than to only run the whole program. Security testing ensures that software is safe from hackers, who could otherwise deny you access to your software or make it work incorrectly.
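To picture what a tiny unit test looks like, here is a minimal sketch in TypeScript using Node’s built-in assert module; the add() function is just an example I made up, not from the article.

    import { strict as assert } from "node:assert";

    // A small "unit": one function, not a whole program.
    function add(a: number, b: number): number {
      return a + b;
    }

    // A unit test exercises just this piece in isolation.
    assert.equal(add(2, 3), 5);
    assert.equal(add(-1, 1), 0);
    console.log("all add() tests passed");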

Software testing was first developed after the end of World War II. Computer scientist Tom Kilburn wrote the first piece of software, a program that performed mathematical calculations. Debugging was the main testing strategy at the time. By the 1980s there were other strategies for testing software applications beyond fixing bugs and errors. The process of software testing includes choosing a testing method, developing test cases or setting up requirements for the test, writing scripts or parts of a program, analyzing test results, and writing a report. For large software programs, there are tools that complete tasks and run tests for different parts, with instant feedback on what does or does not work in a program. Through reporting and analytics, teams can present their results in a dashboard, which allows everyone to see the overall state of a project. Reporting the results shows how testing the product led to the final outcome.

Blog URL: https://www.ibm.com/topics/software-testing

From the blog CS@Worcester – jonathan's computer journey by Jonathan Mujjumbi and used with permission of the author. All other rights reserved by the author.

AI Is Not A Software Engineer

In this blog, the author discusses how much times have changed for new CS graduates, reminiscing about how little they knew and how easily they got a job, then noting how much more prerequisite knowledge is needed now to even sniff a job. The topic of the article is how it is easier now than ever to get code that works. Thanks to AI, code is more plentiful than it has ever been. However, not all code is good code. This leads the author to discuss how, despite the sheer amount of code there is these days, people capable of understanding and building software are still very necessary.

Although AI can now code for us, the coding wasn’t the hard part in the first place. The hard part was building software, and making good software. It’s easy to throw a bunch of code snippets together that accomplish something. But it is something entirely different to build specialized software that fills certain functions and meets certain criteria. AI cannot replace people, even though it may take away some jobs. At its heart, AI cannot build unique software. Teams of capable developers are still needed. The nature of how people code is changing. It’s becoming more important to be able to harness AI, but still oversee and build functional software.

I chose this article because I think it relates to team building. Like the article said, you need people who can understand code, not just write it. Writing code is easier than ever, but finding people who understand how to build software is harder than ever. When using these tools, it’s important not to rely on them too much. Discerning who can actually code is probably one of the most important skills for employers these days. I think it’s important for me and everyone to keep in mind that AI is a tool. Tools don’t make up for a lack of knowledge. Tools are used best by people who know how to use them and maximize their use. One tool can’t solve every single problem. At the end of the day, knowledge is the most important part of being a software developer.

Citations

https://stackoverflow.blog/2024/06/10/generative-ai-is-not-going-to-build-your-engineering-team-for-you/

By Charity Majors

From the blog CS@Worcester – Code Craft by Kyle Tucker and used with permission of the author. All other rights reserved by the author.

How AI Tools Separate Us From Information

It is no secret that ChatGPT has blown up recently. It is not just used by CS people, but by everyone from all walks of life. It has become a common tool to help people with a wide range of problems, offering a quick way to get answers without needing to look for them yourself. However, these AI tools are not a catch-all solution for every problem. This blog post from Stack Overflow, “Knowledge-as-a-service: The Future of Community Business Models,” discusses how these recent developments have affected how we access information.

In just the last twenty years alone, the way we search for knowledge has changed, going from books to search engines, with cloud technology allowing for farther reach. In recent times we have seen the rise of AI tools that help guide us to the answers we seek. These AI tools, however, create a separation between knowledge and the people who make it. AI does the searching and synthesizing for us. Although convenient, it raises the question of whether that is the best way for people to learn.

One common concern is that ChatGPT offers answers but often does not provide context as to why a solution works; what works in one dev environment might not work in another. AI is also reliant on humans to create new knowledge for it to consume. If humans are not creating new knowledge, AI cannot create new information. The credibility of these tools often comes under scrutiny as well; many developers mention how much variance there is in its answers. Although these are certainly drawbacks, developers are learning that community-created content is needed more than ever.

I chose this topic because I believe that most students use ChatGPT or some other tool for help. I myself use it often in pretty much every single class I take, but I definitely rely on it the most for CS. I ask how something works or what the best course of action is. I think it is a common concern for many employers because many applicants don’t know how to actually code. Many people just copy and paste without learning. I am guilty of this myself, but I have been working on actually understanding every bit of code, and on learning where and when to apply the code snippets I use. I believe it is still very important to learn from sources outside of ChatGPT, like classes or other websites composed of trustworthy information. It’s good to learn how to do things yourself without relying on outside sources.

Citations

https://stackoverflow.blog/2024/09/30/knowledge-as-a-service-the-future-of-community-business-models/

By Ryan Polk and Ellen Bradenberger

From the blog CS@Worcester – Code Craft by Kyle Tucker and used with permission of the author. All other rights reserved by the author.

Blog Post Week 12

This week, as we’ve been learning a lot more about Git and its different features, I decided to find an article that talks about commands we may not have used and what they do. The article I found, “Modern Git Commands and Features You Should Be Using” by Martin Heinz, explains some newer(ish) commands in Git that people still may not know about or just hardly ever use.

He opens with the switch and restore commands, but these are commands we’ve already learned about and used, so I’m going to skip over them.

The first one he mentions that I had not heard of is “sparse-checkout”. If you have a large repo with many individual directories, certain commands can run extremely slowly, such as the normal “checkout” command or the “status” command. With sparse-checkout, you can configure Git to only check out files in specific directories; you then use “sparse-checkout set” to download or check out those directories. As you can see, this would be extremely useful in scenarios where you have a massive repo with a large number of directories. Being able to select only the directories you want, rather than having to deal with all of them in more generalized Git commands, can be a huge time saver, which is certainly something many programmers value highly.
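Here is a rough sketch of that workflow; the repo URL and directory names are made up for illustration.

    git clone --filter=blob:none --sparse https://example.com/huge-repo.git
    cd huge-repo
    # Only the directories you name get checked out into the working tree:
    git sparse-checkout set docs/ services/inventory/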

Another command he mentions, which I find to be extremely cool and probably one of the most useful commands I’ve seen, is “bisect”. Essentially, you run a “git bisect start” command, marking a commit that does not work as bad and the last known working commit as good. Bisect will find the halfway point between these two commits, and you say “good” or “bad” depending on whether or not the commit it selects works. From there, it keeps going halfway until it finds the exact commit where the errors that stopped the code from working started. This seems to be an extremely useful and honestly just cool command, as it makes the process of finding the issues within a given program a million times easier. It is a command I will certainly be using in the future, probably a lot, and I’m very glad someone was keen enough to actually make this a working command.
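A rough sketch of what a bisect session looks like; the commit references here are placeholders, not from the article.

    git bisect start
    git bisect bad HEAD          # the current commit is broken
    git bisect good v1.2.0       # the last commit known to work
    # Git checks out a commit halfway between the two; test it, then mark it:
    git bisect good              # or "git bisect bad", depending on the result
    # ...repeat until Git prints the first bad commit, then clean up:
    git bisect reset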

Overall, the two commands I spoke about seem to be extremely useful, especially bisect, and I will certainly hold onto them for future reference in Git. Heinz also mentions the “worktree” command, but while it also seems quite useful, I found the other two to be much cooler and more understandable to use. The article also opens my eyes to the fact that there are many other Git commands and features that could be utilized, and I’m definitely going to look into the rest of them, as I am sure I will find a few more very useful commands.

Source: https://martinheinz.dev/blog/109?utm_source=tldrwebdev

From the blog CS@Worcester – RBradleyBlog by Ryan Bradley and used with permission of the author. All other rights reserved by the author.

Blog Week 12

This week, I decided to find another Reddit post, as I felt the last Reddit post I covered had a lot to take from it, being a community of people giving their real thoughts and feelings on a certain aspect of computer science. Today, I found a post on the importance of object-oriented design and people discussing the different values it holds.

One of the top comments on the post is about how understanding object-oriented design is a great foundation for writing clean code, which is something we discuss in CS-348. It isn’t necessarily something you can just learn on the go while working, but rather something you should try to learn as much as you can about while in school and then apply in the workplace. I’m actually very appreciative of reading this, as I typically like to think most stuff gets easier to apply and learn as you’re working; but if a lot of people agree that you should understand as much as you can about object-oriented design BEFORE actually starting a job, then it’s certainly something I’m going to want to have down. The general consensus wasn’t that you won’t get better at applying these principles as you progress in your career, but that you should have a strong foundation of knowledge going in, as it is crucial to know certain aspects of it, like when one object ends and another begins, or how to model object relations.

OOP is so important that many of the Redditors on this post also seem to agree you’ll find it very hard to even get a job if you don’t have at least a base understanding of the concepts. It isn’t necessarily hard to understand a lot of these concepts, as they’re pretty fundamental, but you should be able to answer questions directly or somewhat related to OOP in job interviews; if the interviewer begins to think you may not know what you’re talking about, you may quickly lose your opportunity at that job. Some people did comment that it depends on the context of the job too, such as the primary language you’ll be programming in or the exact role, but the general consensus still seems to be that you want at least a strong fundamental understanding of object-oriented design and the ability to apply its principles in your coding.

There are over 100 comments on this post that all make great points on why understanding these principles is very beneficial to you; even just a fundamental understanding of them can help you go a long way. It seemed to me that people placed varying levels of importance on understanding all of these principles, but they all seemed to agree that knowing the basics, being able to apply them to some degree in your coding, and being able to understand and talk about them (in interviews especially) is certainly the most important and beneficial thing you could do.

Source: https://www.reddit.com/r/learnprogramming/comments/z2fcyb/how_important_is_object_oriented_design/

From the blog CS@Worcester – RBradleyBlog by Ryan Bradley and used with permission of the author. All other rights reserved by the author.

The Observer Pattern

Back again with another design pattern, and this time with one I have absolutely no idea about. I chose this particular video because I like this guy’s videos; I think he does an excellent job of explaining things in a way I can understand, and I wanted to know some more design patterns. From another video by Christopher Okhravi, I am learning about the Observer Pattern. Mr. Okhravi goes over this pattern with a lot of visuals, which I very much appreciate, and also goes into great depth with it. But what does this pattern do? It involves one object that acts as the “Observable,” which has a relationship with many “Observer” objects: if there is a change inside the Observable, it pushes that change out to all the Observer objects it’s connected to.

Looking at this pattern, it’s kind of hard for me to wrap my head around an example of what it could be used for, but I did understand how the system itself would work; it just feels somewhat more complicated than I can really handle at this moment. Otherwise, I did feel like I learned quite a bit about this pattern, like how different languages have different variations and limits on what an observer pattern can do. The somewhat odd nature of the pattern does confuse me, even though it looks so simple, like how it’s kind of cyclical, where we are passing observer objects to the observable and then back again. This really confused me, but I think I’ll need to watch the video again to grasp it fully. The example Okhravi uses of a “weather station” helped elucidate what I was confused about: we have the physical components displaying the data, and then the actual data being monitored by the weather station and watched by those observer components.
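To make the weather station idea concrete, here is a minimal TypeScript sketch of the pattern; the class and method names are my own, not necessarily the ones Okhravi uses in the video.

    // The Observable keeps a list of Observers and pushes changes to them.
    interface Observer {
      update(temperature: number): void;
    }

    class WeatherStation {
      private observers: Observer[] = [];
      private temperature = 0;

      register(observer: Observer): void {
        this.observers.push(observer);
      }

      setTemperature(value: number): void {
        this.temperature = value;
        // The "cyclical" part: the station calls back into every observer.
        for (const observer of this.observers) {
          observer.update(this.temperature);
        }
      }
    }

    // A display component that watches the station's data.
    class PhoneDisplay implements Observer {
      update(temperature: number): void {
        console.log(`Phone display now shows ${temperature}°F`);
      }
    }

    const station = new WeatherStation();
    station.register(new PhoneDisplay());
    station.setTemperature(72); // every registered display is notified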

I’m not really sure how often I’ll be using this pattern in the future, but I can foresee some use cases for it, as it might be very useful when I need constant monitoring of something. Even if I can’t come up with any ideas now, I definitely think I’ll be making use of this pattern in the future. I also need to learn about even more patterns so that I can apply them where I need them, and maybe go back and relearn those initial patterns I learned about.

Here’s the video:

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.

Git in a Visual Explanation

As an expressly visual and hands-on learner, I try to find resources that have decent practical visuals and explanations. In class we already do this, and I’ve already had some practice with Git and various remote repo websites like GitHub. But I wanted a more succinct, shorter summarization of Git and how to use it. This video was actually perfect for that, as it essentially runs through what Git is, how it’s used, and what it’s used for, and then goes over some complicated problems that can eventually show up when using Git. I mostly chose this specific video because, as I mentioned previously, I was looking for something a bit more visual to sink my teeth into and extract information from, as using Git is still somewhat complicated to me.

Watching this video was actually pleasant, as it had lots of appealing and easy-to-understand visuals, with plenty of examples of everything discussed or mentioned in it. I very much enjoyed the experience it provided. It was still a mostly basic, foundational resource designed to give a nice outline of what Git can do, and I can really appreciate that, as I’m still relatively new to something like this. Seeing how Git is more flexible than I initially thought was also nice, as I hadn’t really connected that it works with remote repo websites other than GitHub, since that is the only setting I’ve had to use it in.

But seeing the different applications of Git and the different issues that can arise with it, I imagine it will most likely be a headache I’ll have to contend with often, especially when it comes to merging problems. Hopefully this will not be the case and every project I work on will go perfectly. Evidently, though, I can foresee that Git will just be something I have to interface with on a daily basis, where I’ll be pulling, committing, fetching, merging, and pushing all the time, especially if there’s any collaboration to be had. So it would only make sense for me to really practice and understand the depths and complexities of what Git can do, and in the near future I’ll probably be looking for something to take me into those depths.
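That daily loop might look something like this; the file name and commit message are just placeholders.

    git pull                        # bring down teammates' changes first
    # ...edit some files...
    git add notes.md                # stage what changed
    git commit -m "Describe the change"
    git push                        # share it on the remote (GitHub or otherwise)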

Here’s the video:

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.

REST API

Growing up, sometimes when I would Google search things, the page would not load and instead give me a code, typically a 404. I never understood what it was or what it meant until recently. The 404 is a REST API response code, one of the codes a server returns when a web page or URL is requested. There are a bunch of codes, covering everything from successful requests to malformed URLs to unstable connections to the servers. But there is more to REST APIs than just their response codes.

In this blog post, the Postman Team talks about everything REST API related, including its history, how REST APIs work, their benefits, some challenges, and some examples. A REST API uses resources, which can be a number of things, such as a document, an image, or a collection of them. REST uses an identifier to determine the type of resource being used in interactions. A REST API also uses methods, which are the types of requests being sent to the server. These methods are GET, PUT, POST, DELETE, and PATCH. Each does something different, allowing the user and the server to perform a multitude of actions. GET does what the name suggests: it asks the server to find the data you asked for and then sends it back to you. DELETE deletes the specified data entry. PUT updates the specified entry, and PATCH does a similar thing with a partial update. POST adds a new entry. These methods return codes describing what happened with the request: 200 is a successful response, 201 is a successful creation, and so on. There are a number of codes, going from 100 to 599, each with a different meaning. REST APIs are flexible, allowing you to do more with them. They are used mainly on the web, but can also be used in cloud services and applications. The benefits of using a REST API include scalability, flexibility and portability, independence, and being lightweight. The challenges, though, are endpoint consensus, versioning, and authentication. The blog post goes into detail about all of this.
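To illustrate those methods and codes, here is a small TypeScript sketch using the standard fetch API; the endpoint and fields are hypothetical, not from the Postman post.

    // Hypothetical JSON API; any REST-style service behaves similarly.
    const base = "https://api.example.com/items";

    async function main(): Promise<void> {
      // GET: ask the server to find a resource and send it back.
      const res = await fetch(`${base}/42`);
      if (res.status === 200) {
        console.log(await res.json()); // 200: successful response
      } else if (res.status === 404) {
        console.log("Not found");      // 404: the resource does not exist
      }

      // POST: add a new entry; a 201 response means "successfully created".
      const created = await fetch(base, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ name: "example item" }),
      });
      console.log(created.status); // expect 201 on success
    }

    main();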

I chose this blog post because it did a good job of explaining everything about REST APIs. It even has a YouTube video in the post, which also explains what is in the blog post. APIs are used everywhere, so it is interesting to learn about something that is essentially a part of all computer-related things. Although this post is REST API related, there are a number of API styles, each offering something different.

From the blog CS@Worcester – Cao's Thoughts by antcao and used with permission of the author. All other rights reserved by the author.

CS 448-01 Team 3: Sprint 2 Retrospective (4/4)

With the second sprint, we had a lot of trouble until near its end. To elaborate on what went wrong, I would like to start out with what we were planning from the very start, as this will be very important for what we will be doing in the next sprint.

While in our last sprint we split between meeting remotely and meeting in person, we finally decided that it would be better for us to meet in person. We also came up with a wireframe that we decided to use as our template to create our framework for AddInventoryFrontend (https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/documentation/-/blob/main/Developer/Wireframes.md). Since we already had AddInventoryBackend working as intended, with the proper testing IDs being used to test our backend code, we just needed to create AddInventoryFrontend so that we could put a frame over all the work that was done with the backend last year. At the very least, we knew exactly how we wanted to build our frontend.

Contrary to how we finally had a plan for our frontend, I was having lots of trouble trying to build it. Since I struggled with some of the issues we took on, I instead decided to focus on redoing some of the issues we had from last sprint (https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem/addinventoryfrontend/-/issues/36). That way, I could at least contribute a little bit to our sprint, knowing the tasks that we were unable to completely finish.

What we as a team learned from sprint 2 was how to use Vue, a JavaScript framework that we would use to help build our frontend. While we were not able to get the entire page running, we added the functionality for a button on our frontend, just as we intended when following our wireframe example from earlier. Once we had explored our options for how we would build our frontend, we decided to use a new wireframe that my teammate would create for our team to follow along with.
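A button like that, in a Vue single-file component, might look roughly like this; the handler is a placeholder for illustration, not our actual project code.

    <template>
      <button @click="addItem">Add Item</button>
    </template>

    <script setup lang="ts">
    function addItem(): void {
      // The real frontend would send the new item to AddInventoryBackend here.
      console.log("Add Item clicked");
    }
    </script>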

The thing I could improve on as an individual is that I need to speak up more with my team about the issues I may have, whether related to work or anything else. I had trouble with this sprint because I was not great at programming with HTML and JavaScript, and I felt like that was really hindering my performance as a team member. I did my best to get help with working on the sprint, and when that was not working out well for me, I consulted my search engines instead. As someone who was much better with AddInventoryBackend, working with the frontend was not my strength, as shown in this sprint. I was confused about which wireframe we were using until the end of the sprint, when we had a semi-functioning frontend that we were going to tweak next sprint. For the next sprint, I am hoping to take on tasks that are less technical than directly running the frontend, and I hope our team will be able to get a working frontend by the end of it.

From the blog CS@Worcester – Elias' Blog by Elias Boone and used with permission of the author. All other rights reserved by the author.

Rubbing Elbows: Shortcutting the Path to Software Mastery

The “Rubbing Elbows” pattern advocates for software developers to actively seek out opportunities to work hands-on, side by side with other skilled programmers. The core premise is that certain techniques and micro-habits can only be absorbed through close collaboration on shared coding tasks. Practices like pair programming formalize this kind of knowledge transfer, but the pattern applies to any endeavor that allows you to observe the workflow and decision-making of an experienced developer.
I found this pattern incredibly insightful and motivating. As a software engineering student, I’ve already experienced how much more I can learn by watching lecturers code and explain their thought process in real time, versus just reading examples. The “Rubbing Elbows” pattern highlights how that same kind of accelerated learning can continue well beyond the classroom.
The authors make a good point that there are a “thousand little everyday moves” that skilled developers have refined through years of experience. These small refinements may seem minor on their own, but they add up to real improvements in productivity, code quality, and problem-solving prowess. However, these micro-habits are nearly impossible to fully convey through written documentation or formal teaching. Rubbing elbows allows an apprentice to organically absorb them through repeated, intimate observation.
I’m reminded of when I pair-programmed on a school project with a talented classmate. While it was daunting at first to have my code exposed, I soon realized I was gaining insight into his mental models, his techniques for juggling complexity, and the little shortcuts that markedly improved his coding flow. Rubbing physical and mental elbows enabled knowledge transfer that couldn’t have occurred through solo learning.
This pattern has inspired me to be a go-getter about joining open source projects, participating in local meetups, and seeking out internships that enable close collaboration with experienced mentors. Identifying and creating these “rubbing elbows” opportunities will be important for going beyond my current peak and speeding up my progression as a capable, well-rounded software crafter.
While the unusual feeling of being the “newbie” amongst experts is unavoidable, I’m excited by the authors’ advice: embrace feeling lost at times, ask questions, rotate pair partners if stuck, and record learnings to cement them. Absorbing the tacit knowledge of those further along the path is key to rapidly elevating my own skills.

andicuni
April 28, 2024

From the blog CS@Worcester – A Day in the Life as a CS Blogger by andicuni and used with permission of the author. All other rights reserved by the author.