
Craftsmanship Log #1 – My First Language

In the first Craftsmanship post I made, I mentioned that I am reading through Hoover and Oshineye’s book “Apprenticeship Patterns”, which, rather than providing technical solutions, outlines patterns that inexperienced software developers may adopt to overcome potential hurdles in their personal learning experience. In fact, in that very post I mentioned (though not by name) two such patterns, namely “Craft over Art” (introduced in chapter 3) and “Record What You Learn” (introduced in chapter 5). Though I may expand on these two patterns in a later post, I first want to address my reactions to one of the very first patterns introduced in the book, namely “Your First Language”.

Briefly put, “Your First Language” is a pattern meant to address the issues that may arise when a developer knows multiple languages but lacks sufficient fluency in any one of them. Such a lack of fluency can put the developer in a difficult position when they are asked to work on a project that must be written in a specific programming language. In this case, the book suggests picking one programming language to learn and master, preferably one that is used by any experts one might know. It is important to specify that “learning a language” is not achieved simply by reading some resources related to it, but by using that language to solve problems and actually apply what has been learned. Through continuous application, a software developer hones their problem-solving skills, which in turn helps them learn other languages as well.

While I personally found this pattern particularly helpful when I first started studying Software Development, my experience has since changed the way I approach learning a programming language. Though becoming fluent in a language is important, I believe it is just as important to be proficient in the learning process itself. In my case, while learning my first language, I also made sure to internalize the concepts and structures that language was composed of, so that when I need to switch to a different language, I already have expectations about which concepts I will encounter and only need to worry about syntax while learning the new language.

From the blog CS@Worcester – CompSci Log by sohoda and used with permission of the author. All other rights reserved by the author.

Craftsmanship Log #0 – Apprenticeship Patterns Ch. 1, Ch. 2-6 Introductions

In the earlier years of my studying Software Development, there were times when I began to question how I approach the process of learning and software development. During such times, my approach to learning software development was essentially the same as my approach to learning Mathematics: when presented with a problem, understand what needs to be addressed and utilize the tools that may help in creating a solution. While at first this may seem fine by freshman student standards, there is a danger of complacency if such an approach persists as I transition from being a student who codes for a grade to being a professional developer whose career is defined by coding.

As I read through Hoover and Oshineye’s book “Apprenticeship Patterns”, I am oftentimes taken aback by certain points and approaches to learning that the authors bring up, as such points clash with my own idea of what it means to develop software, notably in chapter 3. Personally, though I was interested in the scientific approach to software development, I always believed that there was some artistic merit to programming. That belief is, in a way, contested in the reading by a pattern that emphasizes understanding and working towards making something useful rather than focusing on delivering art. Granted, my original belief stemmed from my own definition of “art” in the sciences; even so, this pattern made me reconsider my current approach to problem solving through developing software. Though I am still, by no means, “just a software developer”, I need to embrace the fact that I am learning to create things that function.

One point brought up around chapter 5, with which I find myself in agreement, is that learning never truly ends, especially not in software development. However, things may not always stick, and it is easy to forget certain techniques, conventions, and what have you. It is important to write down and archive, as well as practice many times, what you learn, because this knowledge may well be important in the future, regardless of what one’s pride may say. Such an approach to learning is not exclusive to the field of Computer Science. In fact, I can say from experience that writing down and practicing the points I know I am weak at has been a better learning experience overall than simply reading and memorizing new knowledge that will inevitably be long forgotten.

Though I have often agreed with several other points brought up across the chapters, there have been many points where I had to face certain convictions of mine and consider whether they may be detrimental to my learning. Though it is never too late to reconsider my approaches, it would be good to get a head start in changing some of my ways.


Software Construction Log #7 – Utilizing Docker-Compose YAML files

Although I dedicated my previous post to introducing REST and RESTful API writing, I feel it is important to briefly return to Docker, given how much I have come to use Docker containers for a significant part of my studies recently. In this blog, I have talked about containerization as a concept overall, not exclusive to Docker, as well as some of the things that can be done when using Docker. Namely, I discussed data management through the use of volumes and mounts, and container networking through port mapping. Such features can be rather straightforward and easy to use individually, with enough practice, during application development. However, as development becomes more complex and demanding, having to repeatedly run commands to enable further functionality can be cumbersome for individual developers and overwhelming for the development team overall. This is not exclusive to Docker; any utilities or commands that are used especially frequently can overcomplicate the development process. Thus, a certain degree of automation or simplification is often needed.

In my experience, I have used scriptwriting to speed up parts of my development process. In Docker, where one may have to manage multi-container applications, manually defining ports and volumes for each service individually can be extremely inconvenient for complex application development. Therefore, docker-compose YAML files are often utilized to manage multiple services, volumes, and networks at once and in a more organized manner. Though a docker-compose file is not exactly a script file when it comes to defining container services, it can still help condense the entire process of creating a service into one file. Moreover, a docker-compose YAML file alone may not be enough to utilize a Docker container for application development, as a Dockerfile may also be needed to build the container’s image. However, for simplicity, I will only focus on the compose file.

As I mentioned, a compose file is not necessarily a script file, at least not from my experience of writing script files. Despite this, there is a certain structure that needs to be followed when writing docker-compose files in order to ensure the proper functionality of the application. One resource that I found in my research regarding writing docker-compose files is an article named Introduction to Docker Compose posted on Baeldung.Com by Andrea Ligios. In this article, Ligios begins by briefly introducing the theory behind YAML configuration files and why using such a configuration file is preferable for development. They then illustrate the basic structure of a docker-compose file by highlighting the services, volumes, and networks to be used for a multi-container application, before going into further detail on how to set up each of those sections, with examples.
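To make that structure concrete, below is a minimal sketch of what such a compose file might look like. The service names, images, ports, and volume here are illustrative assumptions of mine, not taken from Ligios’ article:

```yaml
# Hypothetical docker-compose.yml; all names and values are illustrative.
version: "3.8"
services:
  web:
    image: nginx:alpine              # prebuilt image for the web service
    ports:
      - "8080:80"                    # host port 8080 -> container port 80
    networks:
      - app-net
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example     # for illustration only
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume for data persistence
    networks:
      - app-net
volumes:
  db-data:
networks:
  app-net:
```

With such a file in place, a single `docker-compose up -d` starts both services and creates the volume and network along with the port mappings, condensing what would otherwise be several individual `docker run` commands.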

While it is important to understand how each part of a docker-compose configuration works individually, when it comes to larger-scale development and deployment, simplifying the process through use of docker-compose files can be extremely helpful.

Direct link to the resource referenced in the post: https://www.baeldung.com/ops/docker-compose

Recommended materials/resources reviewed related to docker-compose:
1) https://www.tutorialspoint.com/docker/docker_compose.htm
2) https://docs.docker.com/compose/
3) https://dockerlabs.collabnix.com/beginners/difference-compose-dockerfile.html
4) https://phoenixnap.com/kb/docker-compose
5) https://runnable.com/docker/docker-compose-networking


Software Construction Log #6 – Introducing APIs and Representational State Transfer APIs

When the topic of interfaces is brought up, the concept of User Interfaces tends to come to mind more often than not, given how we utilize interfaces to exchange information between an end user and a computer system by providing an input request and receiving data as an output result. Simply put, we often think of interfaces as the (often visual) medium of interaction between a user and an application. While this is true, such interactions are not limited to end users and applications; it is possible for applications to interact with other applications by sending or receiving information to carry out a certain function. One such interface is the Application Programming Interface (API), a set of functions used by systems or applications to allow access to certain features of other systems or applications. One example is the use of social media accounts to log in to one’s Gitlab account: Gitlab’s API will first check whether the user is logged into the specified social media account with a valid connection before allowing access to the Gitlab account. For this blog post, I want to focus mostly on web APIs.

There are, however, three different architectural styles, or protocols, used for writing APIs. Therefore, there is no one single way of writing the API that an application will use, and different advantages and trade-offs need to be considered when choosing a specific API protocol as the standard for an application. The styles used for writing APIs are the following:
1. Simple Object Access Protocol (SOAP)
2. Remote Procedural Call (RPC)
3. Representational State Transfer (REST)

Among the above protocols, REST seems to be the most widely used style for writing APIs. REST provides the standards and constraints utilized for interactions and operations between internet-based computer systems. The APIs of applications that utilize REST are referred to as “RESTful APIs” and tend to utilize HTTP methods, XML for encoding, and JSON to store and transmit data used in an application. Although writing RESTful APIs cannot exactly be considered programming in the same way writing an application in Java is, such APIs still involve some level of scriptwriting, and creating endpoints for an API still requires specific syntax when specifying parameters and the values they must contain. One article that I came across when researching tutorials, titled A Beginner’s Tutorial for Understanding RESTful API on MLSDev.Com, uses an example to show how RESTful architecture design works for RESTful APIs. In this example, the author, Vasyl Redka, shows an example of a response to a request, which HTTP methods and response codes are utilized, and how Swagger documentation is used when writing APIs.
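As a hedged illustration of what such a request/response exchange looks like on the wire (the endpoint, host, and JSON fields below are hypothetical; no real service is being described):

```shell
# Hypothetical RESTful request/response pair; nothing is sent over the network.
request='GET /api/v1/users/42 HTTP/1.1
Host: api.example.com
Accept: application/json'

response='HTTP/1.1 200 OK
Content-Type: application/json

{"id": 42, "name": "Alice"}'

# Print the exchange for inspection
printf '%s\n\n%s\n' "$request" "$response"
```

The HTTP method and response code shown here are the kinds of details that Redka’s example walks through.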

Though RESTful APIs may be somewhat confusing at first, given how the approach to writing APIs differs from the approach used for writing code, being able to effectively write APIs for web-based applications can be a rather significant skill for web-based application development.

Direct link to the resource referenced in the post: https://mlsdev.com/blog/81-a-beginner-s-tutorial-for-understanding-restful-api

Recommended materials/resources reviewed related to REST APIs:
1) https://www.redhat.com/en/topics/api/what-is-a-rest-api
2) https://www.ibm.com/cloud/learn/rest-apis
3) https://www.tutorialspoint.com/restful/index.htm
4) https://spring.io/guides/tutorials/rest/
5) https://searchapparchitecture.techtarget.com/definition/REST-REpresentational-State-Transfer
6) https://www.developer.com/web-services/intro-representational-state-transfer-rest/
7) https://www.techopedia.com/definition/1312/representational-state-transfer-rest
8) https://searchapparchitecture.techtarget.com/tip/What-are-the-types-of-APIs-and-their-differences
9) https://www.plesk.com/blog/various/rest-representational-state-transfer/


Software Construction Log #5 – Understanding Port Mapping in Docker

In a previous post regarding containerization, I briefly mentioned the uses of containers in application development and deployment, prior to specifically learning how to utilize Docker and its containers. I went on to explain how Docker supports application development through the use of images that contain the dependencies needed for development, as well as how Docker manages data retention between the local host system and the container file system through the use of volumes and bind mounts. My learning of Docker started with a focus on local application development and deployment before moving on to utilizing ports to access running applications over the network via a port number.

By default, ports in a Docker container are not published and thus cannot be accessed from the host, even if the container itself is up and running. This means that if one needs to access an application on port 5000 of the local host, this would not be possible if that port has not been published in the first place. Therefore, an important part of deploying web-based applications through Docker involves utilizing the container’s ports. This concept is referred to as “port mapping”, in which ports of the localhost are mapped to specific ports in the container. This is useful because, during development, multiple running containers may need to use the same port; different localhost ports can then be mapped to that specific container port, directing traffic appropriately and avoiding potential port conflicts between containers.
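As a hedged sketch of the difference (the image name `my-web-app` is a placeholder of mine, not from any referenced article):

```shell
# Illustrative commands; "my-web-app" is a placeholder image name.
docker run -d my-web-app                # no ports published: unreachable from the host
docker run -d -p 5000:5000 my-web-app   # host port 5000 mapped to container port 5000
# The application can now be reached at http://localhost:5000
```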

As I was researching resources specifically about port mapping and port forwarding in Docker containers, I came across an article named Understanding Docker Port Mapping to Bind Container Ports on LearnItGuide.Net. In this article, the author explains how to map Docker ports by demonstrating the process on the command line, using a basic application and multiple containers as an example. Moreover, they show the multiple ways a port value can be expressed: passing the port number directly (thus accessing the application through localhost:port_number), passing the IP of the host along with the specified port number to be mapped, or using automapping and thus mapping a random host port to the Docker container. However, it is important to note that port mapping can also be applied using docker-compose YAML files. In an article named Understanding Docker Port Mappings on Dev-Diaries.Com, the author explains how port mapping works when using docker-compose files, with examples of the ways host ports can be mapped to Docker ports.
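The three forms described above might look as follows as commands (the image and container names are placeholders); in a docker-compose file, the same mapping would instead appear under a service’s `ports:` key, e.g. `- "8080:5000"`:

```shell
# Illustrative only; "my-web-app" and "my-container" are placeholder names.
docker run -d -p 8080:5000 my-web-app             # explicit host:container mapping
docker run -d -p 127.0.0.1:8080:5000 my-web-app   # bind to a specific host IP
docker run -d -P my-web-app                       # automap: random host ports
docker port my-container                          # inspect a container's mappings
```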

          Although there is more to port handling, such as exposing ports to only be accessible to linked services, it is still important to know how ports work in Docker containers in order to properly utilize them in the process of development and deployment.

Direct links to the resources referenced in the post: https://www.learnitguide.net/2018/09/understanding-docker-port-mapping.html and https://www.dev-diaries.com/social-posts/docker-port-mappings/

Recommended materials/resources reviewed related to Docker port publishing:
1) https://www.dev-diaries.com/social-posts/docker-port-mappings/
2) https://betterprogramming.pub/how-does-docker-port-binding-work-b089f23ca4c8
3) https://www.tutorialspoint.com/docker/docker_managing_ports.htm
4) https://www.ctl.io/developers/blog/post/docker-networking-rules
5) https://nickjanetakis.com/blog/docker-tip-59-difference-between-exposing-and-publishing-ports
6) https://riptutorial.com/docker/example/2266/binding-a-container-port-to-the-host
7) https://www.whitesourcesoftware.com/free-developer-tools/blog/docker-expose-port/
8) https://digitalthoughtdisruption.com/2020/09/28/docker-port-mappings/


Software Construction Log #4: Understanding Semantic Versioning

Software releases are not always one-and-done affairs; more often than not, the software we use is actively worked on and maintained for an extended period well after its initial release. During its support lifetime, we can expect software to undergo several types of changes, including the implementation of new features and various vulnerability fixes. Such changes are as important to document properly as the technical aspects of the software, such as its use and structure as conceived during development. This documentation of changes is often referred to as “Software Versioning”, and it involves applying a version scheme in order to track the changes that have been implemented in a project. While developer teams may devise their own versioning scheme, many prefer to use Semantic Versioning (https://semver.org/) as a means of keeping track of changes.

Semantic Versioning is a versioning scheme that applies a numerical label to a project, separated into three parts (X.Y.Z), each of which is incremented depending on the type of change that has been implemented. These parts are referred to in the documentation as MAJOR.MINOR.PATCH and defined as:

1. MAJOR version when you make incompatible API changes,
2. MINOR version when you add functionality in a backwards compatible manner, and
3. PATCH version when you make backwards compatible bug fixes.

https://semver.org/

The way semantic versioning works is that incrementing one part resets the parts to its right to zero: if a major change is implemented, the minor and patch numbers are reset to zero, and likewise, when a minor change is implemented, the patch number is reset to zero. While this scheme is relatively straightforward in and of itself, the naming of the numerical labels (specifically “major” and “minor”) may confuse some due to its ambiguity. However, there is another naming convention for semantic versioning, which defines the numerical version label as (breaking change).(feature).(fix).
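As a concrete, hypothetical illustration of the increment-and-reset behaviour described above (the starting version number is made up):

```shell
# Hypothetical version history illustrating SemVer increments and resets.
version="2.3.7"
echo "current release:         $version"
echo "after a bug fix:         2.3.8  (PATCH increments)"
echo "after a new feature:     2.4.0  (MINOR increments, PATCH resets)"
echo "after a breaking change: 3.0.0  (MAJOR increments, MINOR and PATCH reset)"
```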

Though both naming conventions are used, I find the latter far more straightforward to understand and utilize, as the names give a better idea of the importance of a newly implemented update. As I was researching more resources regarding Semantic Versioning, along with the official documentation, I came across an archived article on Neighbourhood.ie titled Introduction to SemVer. In this article, Irina goes into further detail regarding semantic versioning by explaining the naming of each component, as well as noting the difference between the two naming conventions.

Although they go into further detail on semantic release in another article, this article sufficiently covers the fundamentals of semantic versioning. While this versioning scheme is not the only one used in software development, it is still an important tool that can help document a project’s history during its support lifetime and outline important changes clearly and efficiently.

Direct link to the resource referenced in the post: https://neighbourhood.ie/blog/2019/04/30/introduction-to-semver/

Recommended materials/resources reviewed related to semantic versioning:
1) https://www.geeksforgeeks.org/introduction-semantic-versioning/
2) https://devopedia.org/semantic-versioning
3) https://www.wearediagram.com/blog/semantic-versioning-putting-meaning-behind-version-numbers
4) https://developerexperience.io/practices/semantic-versioning
5) https://gomakethings.com/semantic-versioning/
6) https://neighbourhood.ie/blog/2019/04/30/introduction-to-semantic-release/


Software Construction Log #3 – Understanding Docker Volumes, Mounts, and data management

Though most people may not consider this an issue during the early stages of learning how to use Docker for deployment, data retention and persistence is one of the problems one needs to consider when utilizing Docker containers. Previously, I wrote about virtualization using either Virtual Machines or Docker, though I mostly focused on how both work on an operating system and what system resources they require. What I did not mention, however, was how either Virtual Machines or Docker operate when it comes to data persistence. We know that the host systems we use retain the data we create between sessions; if I power my computer off and then on after submitting this blog post, the data created in the previous session will still be available in the next session. In the case of virtual machines, data persistence is likewise not much of an issue. As we begin to work with Docker, however, data persistence becomes a much greater issue to consider.

Data persistence in Docker containers, however, works differently. The Docker documentation states that:

The data doesn’t persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.

https://docs.docker.com/storage/

When a container is removed and a new one is started in its place, any files that were created and used in the old container are gone; the new container essentially runs on a “clean state”. However, there is a way to guarantee data persistence in a Docker container: binding mounts or volumes to the container. Though Docker offers other storage options, such as tmpfs mounts and named pipes, I will mostly be focusing on bind mounts and volumes for the remainder of this post as ways of maintaining data persistence between a host machine and a Docker container.
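As a hedged sketch of the two approaches (the image, volume, and path names below are placeholders of mine):

```shell
# Illustrative only; "my-app", "app-data", and the paths are placeholders.
# Named volume: Docker manages where the data lives on the host.
docker volume create app-data
docker run -d -v app-data:/var/lib/app my-app
# Bind mount: a specific host directory is mapped into the container.
docker run -d -v /home/user/app-config:/etc/app my-app
# The same bind mount with the more explicit --mount syntax:
docker run -d --mount type=bind,source=/home/user/app-config,target=/etc/app my-app
```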

While researching the differences between bind mounts and volumes, I came across two articles: a tutorial titled Docker Volumes – Tutorial on buildVirtual.Net and an article titled Guide to Docker Volumes on Baeldung.Com by Ashley Frieze. In the Baeldung article, Frieze showcases how the Docker file system works and, in turn, how data retention is affected in a Docker container, before explaining the differences between using volumes and bind mounts. Likewise, the buildVirtual tutorial outlines the above differences, as well as showing how to utilize and delete volumes through Docker commands.

Although both bind mounts and volumes can be used for data persistence, it is important to know which method to utilize, depending on where we want the data to be stored on the host system and how other Docker or non-Docker processes may need to interact with it.

Direct link to the resources referenced in the post: https://www.baeldung.com/ops/docker-volumes and https://buildvirtual.net/amp/docker-volumes-tutorial/

Recommended materials/resources reviewed related to Docker mount and volumes:
1) https://4sysops.com/archives/introduction-to-docker-bind-mounts-and-volumes/
2) https://medium.com/@BeNitinAgarwal/docker-containers-filesystem-demystified-b6ed8112a04a
3) https://www.baeldung.com/ops/docker-container-filesystem
4) https://digitalvarys.com/docker-volume-vs-bind-mounts-vs-tmpfs-mount/
5) https://medium.com/devops-dudes/docker-volumes-and-bind-mounts-2fb4bd9df09d
6) https://docs.microsoft.com/en-us/visualstudio/docker/tutorials/use-bind-mounts
7) https://blog.logrocket.com/docker-volumes-vs-bind-mounts/


Software Construction Log #2 – Learning about Containerization and Virtualization

In my experience, the concept of virtualization is practically synonymous with the creation of virtual machines that are used to emulate hardware and operating systems that, for one reason or another, are not readily available during software development. For example, during my studies I have needed to use programs that were not available on Windows operating systems, or to study an operating system for which a physical machine was not readily available. Whatever the case may be, virtualization is not a new concept, and it is widely utilized in software development. It is important to note that virtualization is not exclusive to virtual machines; it has a broader scope that includes any abstraction of a system’s physical components into virtual components.

Among others, one concept of virtualization is containerization, with which most of us are familiar through Docker. Containerization refers to the packaging of an application, its dependencies, and its required operating-system components into a single unit, called a container (hence the name), that can be deployed and used on any operating system. By design, containers are meant to be a portable and lightweight way of testing and deploying applications, at least when compared to virtual machines. However, it is important to note that individual container instances are not meant to be modified, whereas it is possible to customize and modify virtual machines. Despite the caveats and benefits of each, both virtual machines and containers are important during development.
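As a hedged sketch of what such a “package” looks like in practice, a container image is typically described by a Dockerfile along these lines (the base image, files, and command are illustrative assumptions of mine):

```dockerfile
# Illustrative Dockerfile; base image, files, and command are assumptions.
FROM python:3.10-slim                  # operating-system layer plus language runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # the application's dependencies
COPY . .
CMD ["python", "app.py"]               # the application itself
```

Building this with `docker build` produces a portable image that runs the same way on any system with Docker installed.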

As I mentioned before, I have used virtual machines during my studies to create and use servers that I had no immediate physical access to, so the concept of virtual machines is not entirely foreign to me. However, I have comparatively little experience with Docker and using containers for software development, so I believe it is important to understand their differences in order to utilize them properly. As I was researching the concepts of virtualization and containerization, I came across the post titled What’s the Diff: VMs vs Containers on BackBlaze.Com, in which Roderick Bauer defines virtual machines and containers in detail, explains how they differ in their structure on a server, and lists their benefits and best uses. Though Bauer does not directly state their caveats, by looking at the differences between virtualization and containerization I can better understand when either approach is more suitable, depending on the needs of development.

Moreover, this post also helped me understand that neither option is mutually exclusive; it is possible (and sometimes even preferable) to utilize both virtualization and containerization during development, rather than being limited to either option, so long as doing so contributes to improving development.

Direct link to the resource referenced in the post: https://www.backblaze.com/blog/vm-vs-containers/

Recommended materials/resources reviewed related to virtualization, virtual machines, and Docker/containerization:
1) https://www.oracle.com/cloud-native/container-registry/what-is-docker/
2) https://www.infoworld.com/article/3204171/what-is-docker-the-spark-for-the-container-revolution.html
3) https://www.docker.com/resources/what-container
4) https://devopscon.io/blog/docker/docker-vs-virtual-machine-where-are-the-differences/
5) https://www.airpair.com/docker/posts/8-proven-real-world-ways-to-use-docker
6) https://opensource.com/resources/virtualization
7) https://en.wikipedia.org/wiki/Virtualization (Definition of Virtualization)
8) https://www.ibm.com/cloud/learn/containerization


Software Construction Log #1 – Visualizing Software Design and Modeling through Class Diagrams

So far in my Software Development courses, the programming assignments I have been tasked to work on have been either partially coded or straightforward enough for myself and other students to complete without much need for extensive planning beforehand. However, this is not the case when a project’s scope expands and the functionality it is expected to provide increases in complexity. At that point, diving head-first into programming features without at least a basic understanding of how those features should interact with one another will do more harm to the project in the long run than it saves time in the short run.

Before beginning development, it is important to create schematics for the project in order to properly convey how its components interact with one another, possibly optimize those interactions prior to development, and have solid, understandable documentation for the project after development. Recently, I was introduced to the Unified Modeling Language (UML) and the class diagrams that can be created with it for object-oriented development. For this post, I want to focus specifically on the concept of class diagrams overall, rather than on how they can be created with UML-based tools.

As I was researching resources on the topic of class diagrams, I came across the materials listed at the end of this post, though I will mostly focus on a tutorial post titled UML Class Diagram Tutorial: Abstract Class with Examples on Guru99.Com. Although extensive documentation exists regarding UML modeling and class diagrams, this article by Alyssa Walker introduces and defines the basic concepts of UML class diagrams and the types of relations between classes, such as Dependency, Generalization, and Association, and outlines best practices for designing class diagrams. Though I was introduced to class diagrams conceptually while enrolled in a Databases course, I can now understand how the concepts used in such diagrams, specifically Association and Multiplicity, transfer from database design to software design.
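As a rough sketch of my own (not taken from Walker’s article), an association with multiplicity between two hypothetical classes can be drawn like this:

```
+------------------+  1        0..*  +------------------+
|     Customer     |-----------------|      Order       |
+------------------+     places      +------------------+
| - name: String   |                 | - total: double  |
+------------------+                 +------------------+
| + getName()      |                 | + getTotal()     |
+------------------+                 +------------------+
```

Here the “places” association reads as: one Customer is associated with zero or more Orders.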

Moreover, by examining the class diagrams provided in the article, I can better understand why it is important to visualize the interactions between classes, especially when developing software in object-oriented languages: seeing how the implemented classes will interact with each other shows how the features I wish to implement will add complexity to the project overall. Any factor that increases a project’s complexity can then be addressed well in advance without sacrificing additional development time in the long run, while also providing important documentation that can contribute to the maintenance of the project after development.

Direct link to the resource referenced in the post: https://www.guru99.com/uml-class-diagram.html

Recommended materials/resources reviewed related to class diagrams:
1) https://developer.ibm.com/articles/the-class-diagram/
2) https://www.microtool.de/en/knowledge-base/what-is-a-class-diagram/
3) https://www.ibm.com/docs/en/rsm/7.5.0?topic=structure-class-diagrams
4) https://www.bluescape.com/blog/uml-diagrams-everything-you-need-to-know-to-improve-team-collaboration/


CS-343 Introduction – Glad to be back

I believe it is high time I resume using this blog after a relatively peaceful summer break. Starting this September, I will be using this blog to catalogue my findings regarding Computer Science, as well as any learning material that I find to be particularly helpful and useful.

To the reader who has come across this blog for the first time, hello! My name is Sofia and, as of the writing of this post, I will be a senior student majoring in Computer Science with a focus on Software Development. I began using this blog as part of my CS-443 course and now this blog has returned as part of my CS-343 course as well. Just like I did last Spring semester, I will continue to record my findings regarding the courses that I am taking, as well as discuss any potential projects I may be working on at any point during the remainder of my academic career.

As always, I will do my best to make this blog interesting and helpful to any readers who come across it. Let’s get along this semester as well!
