Category Archives: Week 12

When It’s Easier to Just Do Everything [More] Manually

Sometimes doing things the hard way is a lot easier. The more tools you use and the more complicated those tools are, the more complexity you have to deal with. So while it may be nice to call a few simple methods and have a framework do everything for you behind the scenes, you’ll have to learn how the framework works, and you may realize down the line that it can’t do everything you want it to do. There may even be incompatibilities with other parts of your program.

This week in my independent study, I tried to figure out how I could run a machine learning model on Android. I had some success, but quickly discovered some complications. Android has the option of using TensorFlow Lite, which seems great. However, I built my model using Keras, so I needed to convert it. That was relatively straightforward, but before I started calling the model, I realized that I also needed to extract audio features on Android, which would require running Python code there, particularly Librosa and NumPy. This led me to look into other potential frameworks just to get that code to run.
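The conversion itself only takes a few lines. A rough sketch (assuming TensorFlow 2.x; the file names are placeholders, not my exact project code):

    import tensorflow as tf

    model = tf.keras.models.load_model("model.h5")              # trained Keras model
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()

    with open("model.tflite", "wb") as f:                       # file the Android app would load
        f.write(tflite_model)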

Bundling Python frameworks into the Android app would lead to a bloated app, so I looked into Google Cloud services and thought about running the code server-side there. I had already set up a way to upload and download files with Google Firebase, so this seemed reasonable. But it is a paid service and would be even more work to make functional.

I already have all the code running on my personal machine, so what if I just set up a server with a REST API to upload and download files and run the necessary Python code locally? If I could get that working, it would be trivial to call the code I’m already running.

Getting the server to handle file uploads and downloads is what I did this week. I used Flask, which makes it very easy to get a basic server up and running. For the time being, data will only be transmitted over Wi-Fi, since uncompressed audio files will be sent back and forth.
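A minimal sketch of what that server looks like (the route names and storage folder are placeholders, not my exact code):

    import os
    from flask import Flask, request, send_from_directory

    UPLOAD_DIR = "uploads"
    os.makedirs(UPLOAD_DIR, exist_ok=True)

    app = Flask(__name__)

    @app.route("/upload", methods=["POST"])
    def upload():
        # The Android client sends the audio file as multipart/form-data.
        audio = request.files["file"]
        audio.save(os.path.join(UPLOAD_DIR, audio.filename))
        return "OK", 200

    @app.route("/download/<name>")
    def download(name):
        # Send an uploaded (or processed) file back to the client.
        return send_from_directory(UPLOAD_DIR, name)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)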

While it took some additional work to figure out HTTP requests on Android, knowing the basic building blocks gives me much more flexibility. But with great flexibility comes great responsibility, and proper error checking will be an important part of development moving forward. Security measures are also very important to consider before deploying an app to production.

The next iteration will involve running the machine learning code with a REST API call and getting back both the results of the model’s prediction and any data I will need to plot within the app.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Final Project Progress

For my final project in CS 343, I have chosen to create a Pokédex SPA that uses a public database, PokeAPI, through its RESTful API. So my focus has been on page layout and on how to search for data with the limited methods available, since the backend is all set.

I went through several different ways of trying to set up
the page layout, but eventually I settled on CSS grids. I found these grids to
be intuitive and easy to manipulate. It was not long until I was able to
successfully create a basic layout to work with. I used the grid-template-areas
CSS property to set a dynamically resizable layout.

CSS:

    grid-template-areas:
        "header header"
        "menu   content"
        "footer footer";

[Screenshot: current page progress]

I used CSS fractional units to determine the widths of the columns (1:4) and static sizes for the heights of the header and footer, with the content in between filling the page. Now that I have a basic layout to work with, I can focus on adding functionality.
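Put together, the layout rule looks roughly like this (the pixel heights are placeholders, not my exact stylesheet):

    .page {
      display: grid;
      grid-template-columns: 1fr 4fr;          /* menu : content = 1 : 4 */
      grid-template-rows: 80px 1fr 60px;       /* static header/footer heights */
      grid-template-areas:
        "header header"
        "menu   content"
        "footer footer";
      min-height: 100vh;                       /* content fills the page */
    }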

A function I have currently implemented is a search for Pokémon by id number or name. For now, the page simply displays the name, image, and id, but the API provides much more data that I haven’t included. The evolution tree function is still a WIP. Connecting the evolution chains to the specified Pokémon posed a small issue: the API does not provide a way to search for an evolution chain by Pokémon. I eventually settled on creating a map at page load by looping through all available chains and pairing them with their respective Pokémon. The plan is to use the chain to render a pop-up that displays the entire evolutionary tree. I also have a moves search that works similarly to the Pokémon search.
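The search call itself is simple, since PokeAPI accepts either a name or an id at the same endpoint. A rough sketch (not my actual component code):

    // PokeAPI returns the same Pokémon for a name ("pikachu") or an id ("25").
    async function fetchPokemon(nameOrId: string) {
      const response = await fetch(
        `https://pokeapi.co/api/v2/pokemon/${nameOrId.toLowerCase().trim()}`
      );
      if (!response.ok) {
        throw new Error(`No Pokémon found for "${nameOrId}"`);
      }
      const data = await response.json();
      return {
        id: data.id,
        name: data.name,
        sprite: data.sprites.front_default,   // image displayed on the page
      };
    }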

I am now trying to think of interesting ways to use the PokeAPI. I will probably add some more search options to the menu, as well as more options for linking relevant data. Even though my project is still fairly new, I have learned a great deal about HTML and CSS so far.

From the blog CS@Worcester – D’s Comp Sci Blog by dlivengood and used with permission of the author. All other rights reserved by the author.

Using UML for a project

I have been working on a big term project lately for all my classes. When my group and I were planning what to do, I decided to try using UML to do the planning, like I had learned about in my class a few months ago. It was a big help in figuring out how to structure the program and how to make it work. It also helped a lot with communicating with my group members, so we could all work on it at the same time and it would still be compatible.

From the blog CS@Worcester – Tim’s Blog by therbsty and used with permission of the author. All other rights reserved by the author.

CS-343 Final Project – Part 1

As there are now just a few weeks left in the semester, it
is time to start working on my final project for CS-343. This project is to
develop a Single Page Application in TypeScript using the Angular framework,
which we have been learning in class over the past month. From now until the
end of the semester, I will be making weekly posts documenting my progress with
this project and what I learn while working on it.

My final project began with a proposal, for which I was to
create a conceptual design for a Single Page Application using a wireframe. This
helped teach me how to design a layout for an application’s components before
programming it. My idea was to design a layout for a customizable puzzle game. When
it comes to software development, my main interest is in making games. For this
reason, I thought that using this project to make a basic game while also
learning about creating Single Page Applications in TypeScript would be
something I’d enjoy.

My current concept involves some kind of grid-based puzzle game,
such as minesweeper. The user would be able to interact with a variety of components
in an options menu to change the size of the grid as well as other aspects of
the game, like the difficulty and time limit. Changes made to these options
would update the main play area in real time without the need to reload the
page. My proposal also included a help menu that would contain instructions and potentially a hint button for extra interactivity.
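As a rough sketch of how the real-time idea could work (hypothetical names, not code from my proposal), an Angular component can rebuild the grid the moment an option changes, with no page reload:

    import { Component } from '@angular/core';

    // Changing the grid-size option rebuilds the board immediately; Angular
    // re-renders the template without reloading the page. (*ngFor relies on the
    // module imports a default Angular CLI app already has.)
    @Component({
      selector: 'app-puzzle-board',
      template: `
        <label>
          Grid size:
          <input type="number" [value]="size" (input)="resize($event)" />
        </label>
        <div *ngFor="let row of cells">
          <span *ngFor="let cell of row">{{ cell }}</span>
        </div>
      `,
    })
    export class PuzzleBoardComponent {
      size = 8;                                    // placeholder default
      cells: string[][] = this.build(this.size);

      resize(event: Event): void {
        this.size = Number((event.target as HTMLInputElement).value) || 1;
        this.cells = this.build(this.size);        // play area updates in real time
      }

      private build(n: number): string[][] {
        return Array.from({ length: n }, () => new Array<string>(n).fill(''));
      }
    }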

I drew my wireframe layout for this application concept on
paper. You can take a look at it right here:

I am still not certain that this is the idea I want to go with for my project. I think it is a rather simple idea due to its lack of communication with a back-end server. I also have yet to decide on the details of the puzzle game itself, and I don’t know if such a game is even possible to make with Angular components. I will have to do more research on Angular and TypeScript to help solidify my plan. Despite my doubts, I am looking forward to learning more about writing applications in TypeScript, and I will definitely get development started during Thanksgiving break.

From the blog CS@Worcester – Computer Science with Kyle Q by kylequad and used with permission of the author. All other rights reserved by the author.

Final Project Update: Taking a different shape than planned

The final project for this class is taking on a different form than I initially planned. I had planned to use the Google Sheets API in my project, and while I am not ruling out that possibility, I am running into some roadblocks. The API requires authentication that I can’t get to work quite right. I may end up using a different solution for a backend, but we will see.

I am also thinking that my front-end will end up looking not much like my wireframe. At this point I’m not sure I care. Figuring out HTML and CSS implementations for something I’ve had no experience in is very difficult, and I think I (like most people) will end up modeling my project on something that already exists. As I posted last week, the Tour of Heroes tutorial is very appealing and I am learning a lot by deep-diving into it. I would highly suggest incorporating it into the course. It covers a lot of concepts that are useful in Angular and in the project: buttons, pages, loops, CSS styling, and more.

The other roadblock I am running into is that the work this semester sure has been back-loaded. 5 classes, 3 projects and 4 exams to prepare for is a tough ask. As an adult student with a job and a mortgage, it sure gets stressful. I will definitely not pretend to be the most overworked student ever, and people have definitely overcome tougher obstacles. Yet even still, my despair is immeasurable.

From the blog CS@Worcester – Alan Birdgulch's Blog by cjsteinbrecher and used with permission of the author. All other rights reserved by the author.

Mock Testing

Recently in CS-443 I was introduced to testing using mocking. Mock testing makes use of a mocking framework (we used Mockito in class) to create mocks, which take the place of regular objects. A mock stands in for an object of its associated class or interface: its methods can be called, but they return a default value (such as 0, false, or null) instead of actually running the behavior specified in the real class. It is also possible to tell the mock to return specific values other than the default, so that different methods return whatever results a test needs. It was interesting to learn about implementing Mockito and working with mocks in my projects, but there was one question that I kept asking myself: What is the point? Why go through the trouble of setting up mocks when you could just finish writing the code and test its actual behavior? I decided I would search for an answer to these questions on my own, and in doing so I came across an article by Michael Minella titled “The Concept of Mocking.”

The article can be found here:

https://dzone.com/articles/the-concept-mocking

Unlike the example in class, this article teaches mocking and its purpose clearly and simply. This purpose, as the article explains, is to test functions without executing the other functions that they depend on. The article demonstrates this with a simple example in which a doLookup method calls a lookupByKey method. By using mocks, it is possible to test doLookup without needing to make sure lookupByKey is also working correctly. This extremely simple example has helped make the point of mocking much clearer to me. It still seems better to me to write tests against the actual code of a project, but I can see mocking being useful in situations where the code a project depends on is not all accessible. I think the example in class may have been too complex an introduction to mocking, and the trouble I had getting the example code to work made it harder for me to understand the basic concepts behind mocking. The simplicity of this article enabled me to see the purpose of mocking, which I think will make it easier for me to apply what I learned from the class activity.
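A minimal sketch of that idea in Mockito (the doLookup and lookupByKey names come from the article; the DataSource and CustomerService types here are hypothetical stand-ins, not the article’s exact code):

    import static org.mockito.Mockito.*;
    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class CustomerServiceTest {

        @Test
        public void doLookupWorksWithoutARealLookupByKey() {
            // Mock the dependency instead of using a real implementation.
            DataSource dataSource = mock(DataSource.class);
            when(dataSource.lookupByKey("42")).thenReturn("Ada Lovelace");

            CustomerService service = new CustomerService(dataSource);

            // doLookup is tested even though lookupByKey has no finished code behind it.
            assertEquals("Ada Lovelace", service.doLookup("42"));
            verify(dataSource).lookupByKey("42");
        }
    }

    // Hypothetical types, only for this illustration:
    interface DataSource {
        String lookupByKey(String key);
    }

    class CustomerService {
        private final DataSource dataSource;

        CustomerService(DataSource dataSource) { this.dataSource = dataSource; }

        String doLookup(String key) {
            return dataSource.lookupByKey(key);
        }
    }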

From the blog CS@Worcester – Computer Science with Kyle Q by kylequad and used with permission of the author. All other rights reserved by the author.

FPL&S 3: Component Interaction, Custom Services, and Fear of Commit

After completing the file upload portion of the project last week, I found myself in fear of committing my changes. I have always had this problem: I want each commit to be concise, change as little as possible, and even have perfect whitespace. This minimizes changes and makes it easier to track down commits where bugs are introduced, but moving forward like this will slow me down, as it has in the past.

But once I made that first commit, things started progressing more quickly. It was time to start adding more Components. Once I could interact with the files I was uploading, the next step was being able to delete them. With more functionality and more interaction with the cloud storage, it was best to remove this behavior from the Components and move it into a Service. This custom Service class handles all interaction with the cloud storage service, so there aren’t some components adding files and others deleting files. Everything is nicely encapsulated, and user interaction only needs to be forwarded to the Service.

I do wonder if there are downsides to this design, and whether Angular might have features I’m missing that would be better suited. However, I definitely believe that UI Components in any framework should delegate user interaction to another class that contains the logic. Even so, is importing a Service class the proper avenue? I have also considered passing commands from each Component up to the main app Component, which would hold a single reference to the custom Service I created. This would quickly complicate the application architecture, however, because changes in the cloud storage would require updates in the child Components. For example, the list of files would have to update, and a file shouldn’t be removed from the UI unless the delete action succeeds, requiring a callback to be passed down to the children.
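A minimal sketch of the arrangement I mean (the class and method names here are illustrative, not my actual code; the storage calls are stubbed out rather than real Firebase SDK calls):

    import { Component, Injectable } from '@angular/core';

    // All cloud-storage interaction lives in one Service.
    @Injectable({ providedIn: 'root' })
    export class FileStorageService {
      async upload(file: File): Promise<void> {
        // the cloud storage SDK's upload call would go here
      }

      async delete(fileName: string): Promise<void> {
        // the cloud storage SDK's delete call would go here
      }
    }

    // Components only forward user actions to the Service.
    @Component({
      selector: 'app-file-list',
      template: `<button (click)="onDelete('example.wav')">Delete</button>`,
    })
    export class FileListComponent {
      constructor(private storage: FileStorageService) {}

      onDelete(fileName: string): void {
        this.storage.delete(fileName).then(() => {
          // Only remove the file from the UI after the delete succeeds.
        });
      }
    }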

The software engineering practices I’ve studied and written about this past semester are at the front of my mind. My current solution is rather simple, and although I am anticipating possible problems, I am not over-engineering and adding unnecessary complexity. I am following Clean Code principles from Robert C. Martin: making code easy to read, making it work, and refactoring when necessary.

The overall design of my final project is beginning to solidify. I naively thought I would download all of the file metadata from the cloud storage when loading the web page and then maintain it from there, but I’ve realized that all of this data is going to have to be stored in another database, which I will access through a REST API. This database will store all the file data and URLs to the files themselves, and it will be a good chance to practice data synchronization between databases.

Over the next week I will implement the database. Once I have it synced client-side, I can add Components to search and filter the files and to load the files selected by the user.

From the blog CS@Worcester – Inquiries and Queries by ausausdauer and used with permission of the author. All other rights reserved by the author.

Final Project-Week 1

I will be completing a final project for my CS-343 class. I plan to implement a workout planner using Angular and WebStorm. The general idea of the website starts with the homepage. On the home page there will be any workouts that the user has created. Here it will show the workout types, how many reps/sets there are, and what each exercise is. Along the side bar there will be a list of muscle groups. The user will be able to click on these to display a list of different exercises to choose from. When the user clicks on an exercise, a tab will open up to display a picture or video of the exercise. From here the user will also be able to set the reps and sets they want to add to the workout. To get started on this process, I will be designing each of the slides for each workout. Once these slides are created, I will host them together in a folder. They will then be used and manipulated by a new component called workouts. This will contain an HTML file for designing the basic template of the slides and their layout, and a CSS file, which will allow me to make more stylistic changes to the pages. The last two files in the workout component will be a .spec file and a TypeScript file. The TypeScript file is where all the code that lets the user manipulate and change pages will live. It will also include the code that lets the user add their reps and sets to the workout. For the first week I have been working on putting together each of the slides that will be used for the site.

The tentative plan for these slides is to make them images and incorporate the other elements on top of the image. Each image will have a unique ID, which will allow the code to distinguish which workout the user is currently looking at. Once this large section has been completed, I plan on moving on to the homepage design, where the workouts will be displayed. On the homepage the user will be able to remove workouts that they no longer want, or remove their whole workout plan entirely and start from scratch. Due to the setup of the app, the workouts will not be saved between sessions, because that would require saving the data somewhere while the app is offline.
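A rough sketch of how the plan data could be modeled (hypothetical names, not my actual code):

    // Each exercise gets a unique id so the code can tell which one the user is
    // currently looking at; the plan lives only in memory (nothing is saved).
    export interface Exercise {
      id: number;
      name: string;
      imageUrl: string;
      sets: number;
      reps: number;
    }

    export class WorkoutPlan {
      exercises: Exercise[] = [];

      add(exercise: Exercise): void {
        this.exercises.push(exercise);
      }

      remove(id: number): void {
        // Remove a single workout the user no longer wants.
        this.exercises = this.exercises.filter(e => e.id !== id);
      }

      clear(): void {
        // "Start from scratch" - wipe the whole plan.
        this.exercises = [];
      }
    }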

From the blog CS@Worcester – Journey Through Technology by krothermich and used with permission of the author. All other rights reserved by the author.

10 Software Testing Trends

Hello again everyone, and welcome to my fourth entry for the semester on this blog. Today we are going to talk about some software testing trends. As the title of this post suggests, we will be talking about ten of them. The article was written by Ulf Eriksson (really cool name). I started by skimming it, and it seems to be very short and concise, which means it’ll be easier for me to write about. I will only be writing about the five trends I found the most interesting.

So obviously, this article is about trends that everyone should be seeing in 2019. Ulf leads off by mentioning the “evolution of new testing approaches” (Eriksson) due to new developments with Agile and DevOps. He then begins his list by discussing Agile. He says that Agile is being used in more and more companies. He then talks about what Agile is and how it works, but if you’re reading this you probably know what Agile is, so I won’t bore you with that. The next part caught my eye because it has to do with machine testing. I don’t know much about machine testing, but it still has my interest. Ulf describes how it is used as follows: test suite optimization (redundancy), predictive analytics (key parameters), log analytics (automatic execution), traceability (test coverage), and defect analytics (identifying high-risk areas). The next trend is the adoption of DevOps. This part was very short, and it talks about continuous integration and continuous delivery. Another trend was shortening the delivery cycle. This section talks about how new technologies are being used to speed up deliveries. This is interesting because it will always be a trend: new technologies are coming out every day, so it is impossible for this trend to die down. Ulf also discusses big data testing as a trend, and I chose to write about this because it isn’t my concentration, so it is interesting to read about something I am not studying. Basically, this kind of testing makes sure that large amounts of data are being verified correctly. In other words, it tests both the quantity and the quality of data.

I would have loved to write about every trend on the list, but this blog would be way too long and I would lose all my reader(s) about halfway through. However, Ulf Eriksson did a great job with this article. He didn’t go into much detail about every trend, because some of them should already be well known, but the lesser-known trends were explained very well. This article was a very interesting read because I’m taking quality assurance testing now, and it is nice seeing topics I learned in class show up in articles. I would recommend this to any testers out there.

https://dzone.com/articles/10-software-testing-trends-to-watch-out-for-in-201

From the blog CS@Worcester – My Life in Comp Sci by Tyler Rego and used with permission of the author. All other rights reserved by the author.

Creating Workflow Diagrams & Beginning the GitLab Migration

Last week started on Monday with trying to resolve why the DCO bot would not show up as a status check for a GitHub repository. I tried enabling it for the test organization instead of an individual repository, and that still wasn’t working. Eventually, by searching for the error message that appeared in the box, I found out through an article about enabling CI that in GitHub you need to manually trigger the check before you can enable it as a required status check. This solved the issue, and after creating a branch, committing to it, and merging it with master, the DCO bot ran for the first time and then showed up as a status check in the branch restrictions menu. I updated the pull request with a comment about this, since it resolved a discussion that had been ongoing with Dr. Jackson. After that, I commented on the workflow diagrams issue to see which sections should have diagrams created for them. I looked over the contributing document and thought that Getting ready to work, Work, and Getting your work reviewed and merged all seemed like they could use some diagrams. Finally, I figured out what I would be doing for the rest of the week and planned questions for the next day’s meeting.

Tuesday started with a research meeting that was quickly joined by Dr. Jackson over the phone. We discussed various things, including the ongoing project board issue that needed to be resolved so we could update the documents with which board structure to use and how to set it up. We also talked about changing to a different license for some of the project files. I helped resolve a Discord issue with server joining messages. Most of what I would be working on for the rest of the week was beginning the migration from GitHub to GitLab, as it was at this point a pretty sure thing that we were going with this platform. I would start by deleting the previously imported repositories and re-importing them.

Wednesday I started the GitLab migration and deleted all of the previous repositories in the LibreFoodPantry GitLab group. I then imported all of the projects from GitHub to my account and transferred them to the LFP group. One issue that I discovered when importing the projects was that issues created by GitHub users without a GitLab account were not linked, and I was made the owner of these unlinked issues. After that, I read over the license that Dr. Jackson suggested we start using. After reading it, I agree with Dr. Wurst that it seems a little too complicated. After getting a response from Dr. Jackson on which sections to create diagrams for, I started working on this by creating a new feature branch. I started with the Getting ready to work section. I looked at the links he suggested for how commit diagrams were designed and later borrowed some of their styling when creating my own commit diagrams. As I was creating the diagrams, I ran through the workflow myself to check that it worked properly. I ended up adding some git commands to the contributing document that I thought would be helpful. I also created a new issue that I discovered when testing this workflow, about how shops update their master branch when changes are made in LFP’s upstream. I suggested using GitLab’s auto repository mirroring function, which takes care of this automatically. I also tested pushing an empty commit to a new branch and creating a work-in-progress merge request back to the LFP project. I thought it was cool that when you push to a branch after doing this, the commits also go to the WIP merge request so others can see your progress. By the end of the day I had created diagrams for the Getting ready to work and Work sections.

Thursday was exciting. A decision had finally been reached and we would be using GitLab and Discord! After seeing this, I posted my migration issue and also replied about which document was best to follow for importing repositories. It was decided that shop managers would re-import the projects so that they would be the owners of any issues that didn’t link properly in the import. I then went back to the diagrams and tested out how to update a feature branch from the master branch. I realized that I had forgotten to add the developer’s computer to the diagrams and went back to the previously created diagrams to add it. I also wanted to figure out where upstream was pointing, and after some searching realized that it was the LFP project’s master branch. I then had to figure out how to set the upstream master for a repository cloned from the shop fork; this was aided by a past exercise from CS-348. I think this command should probably be added to the contributing document, since it is not mentioned and not something that happens automatically. I also discovered some other issues that should be touched up in the contributing document, but I wanted to finish the diagrams before migrating to GitLab so my commits would transfer over properly. The hardest part of the diagrams was creating the commit graphs. I figured this out by looking at the commits that were on the different branches through the Web UI, and after that I discovered that GitLab has a commit graph for repositories. That helped greatly with creating diagrams for the merge commits part of the workflow. I then finished the Getting your work reviewed and merged diagram and pushed it to the branch.
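For reference, the commands I have in mind look roughly like this (the remote URL is a placeholder, not the actual project path):

    git remote add upstream https://gitlab.com/LibreFoodPantry/<project>.git
    git fetch upstream
    git checkout master
    git merge upstream/master    # bring the fork's master up to date with LFP
    git push origin master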

On Friday there were a bunch of notifications, since a lot of new issues had been created on GitLab after we started migrating to the platform. I started looking at some of the new ones and also replied to Dr. Wurst’s question about why commits on his projects seemed to have transferred correctly. I discovered that the linking issue is related more to issues than to commits, and that GitLab seems to make dummy accounts for commits if it can’t link them to an actual account. I then enabled the DCO push rule for Dr. Wurst’s imported projects. I then created a pull request for my diagrams before the ProjectTemplate repository was imported to GitLab. I also assigned myself some new issues to work on. Later that night, Dr. Jackson discovered when importing the ProjectTemplate repository that issues I had created on GitHub weren’t being linked to my GitLab account. We tried to resolve this in various ways by enabling different options. I tried signing in to GitLab with my GitHub account and set the email address on both platforms to the same one, making it publicly viewable. Sadly, this did not resolve the issue, and we would try to fix it the next day.

Saturday, Dr. Jackson decided that I should import the ProjectTemplate repository myself so that it would automatically link all of my work to my GitLab account. This worked fine except for a couple of merge commits I had done with a private GitHub email address through the Web UI. We decided this was fine and I left it as-is. Finally, I enabled DCO checks for the BEAR-Necessities-Market project and updated the issue with this information.

From the blog CS@Worcester – Chris' Computer Science Blog by cradkowski and used with permission of the author. All other rights reserved by the author.