The LibreFoodPantry FOSSisms

For this first (technically second) post of the semester, I decided to talk about the FOSSisms section of the LibreFoodPantry main page. This section is located under the About section. It leads off with a short introduction to what FOSSisms are, which was very useful considering I had never seen the word before. To put it simply, FOSSisms are maxims that Heidi Ellis developed from open source culture. I won’t talk about all 16 of them because I would be here all day, but I will talk about some of my favorites. I really enjoyed the “productively lost” section because it explains that students should get lost, but that they should use this sense of confusion to learn more about whatever is confusing them. The last section I’ll talk about is avoiding uncommunicated work. This section was interesting because it basically says that all work should be communicated to the other members of the group, and I think that is extremely important for the project we are about to take on.

From the blog CS@Worcester – My Life in Comp Sci by Tyler Rego and used with permission of the author. All other rights reserved by the author.

One Month Later, We’re Back

Hello again everyone, I know it has only been a month, but I am back to writing posts bi-weekly for my software development capstone. In this blog, I will mainly be discussing topics covered in the book for CS-448. This is my last semester at Worcester State University, and I’m looking forward to writing more blogs for all of you.

From the blog CS@Worcester – My Life in Comp Sci by Tyler Rego and used with permission of the author. All other rights reserved by the author.

Return of the Blog

Today, I began my Spring 2020 semester at Worcester State. In my first class of the day, I discovered that I once again have a requirement to write blog posts. Hooray.

This time, my posts will be for CS-448 – Software Development Capstone. In this class, I will finally get to experience being a part of a major software development project – LibreFoodPantry. As much as I have disliked writing these blog posts, I think the readings for this class and my experiences working on the project will teach me many interesting new ideas about software development that I look forward to sharing.

From the blog CS@Worcester – Computer Science with Kyle Q by kylequad and used with permission of the author. All other rights reserved by the author.

Software Development Capstone Introduction

Finally I have made it. Today marks the first day of my final semester in my college career. I am going to be writing posts on this blog related to my Software Development Capstone class, CS-448. This blog post is just a test in order to make sure that this blog links correctly to the aggregate blog for this class.

From the blog CS@Worcester – Your Friendly Neighborhood Programming Blog by John Pacheco and used with permission of the author. All other rights reserved by the author.

Angular / GoogleSheets / CRUD

We need to access a Google Sheet for basic CRUD operations from Angular. There are several great node modules available that will allow us to perform CRUD operations on Google Sheets, but most require OAuth2, and we want to keep this fairly straightforward by using a Google Service Account and a JWT (JSON Web Token).

The best fit module for this is:
[google-spreadsheet](https://www.npmjs.com/package/google-spreadsheet)
This handy little module takes care of the authentication as well as providing the hooks to the Google Sheet. But since this is a Node.js module we are trying to run in a browser, we have to create a workaround. Here’s a quick overview of what we need to do to get this up and running. First we’ll enable the Google Sheets API and create a service account. Next we’ll extend Webpack in Angular to help set up our workaround. Then we’ll configure our components and, for this example, write a method that writes to the Google Sheet.

This is going to be a little lengthy so I won’t go over creating an Angular project. All right let’s get started!

First enable the Google Sheets API and create the Service Account:

Go to the Google Developers Console and navigate to the API section. You should see a dashboard.

Click on “Enable APIs” or “Library” which should take you to the library of services that you can connect to. Search and enable the Google Sheets API.

Go to Credentials and select “Create credentials”.

Select “Service Account” and proceed forward by creating this service account. Name it whatever you want. I used SheetBot.

Under “Role”, select Project > Owner or Editor, depending on what level of
access you want to grant.

Select JSON as the Key Type and click “Create”. This should automatically
download a JSON file with your credentials.

Rename this credentials file credentials.json, create a sheet-api directory in the src/assets directory of your project, and place the file there (the code later requires it from assets/sheet-api).

The last super important step here is to take the “client email” that is in your credentials file and grant access to that email in the sheet that you’re working in. If you do not do this, you will get an error when trying to access the sheet.
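If you are not sure which address to share the sheet with, the field to look for in the key file is client_email. Here is a quick, hypothetical sketch of pulling it out (the helper name and the email value are mine, not from the post):

```javascript
// Hypothetical helper: read the service account email out of the key file's
// JSON text. "client_email" is the field name Google uses in service account
// key files; the example value below is a placeholder.
function clientEmailFrom(jsonText) {
  return JSON.parse(jsonText).client_email;
}

const fakeCreds = JSON.stringify({
  client_email: 'sheetbot@example.iam.gserviceaccount.com'
});
console.log(clientEmailFrom(fakeCreds)); // → sheetbot@example.iam.gserviceaccount.com
```

Share the sheet with that exact address, just as you would with any collaborator.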

Configure Angular:

Now let’s start configuring the Angular project to play nice with the Node.js packages we’ll be installing.

Edit tsconfig.app.json, add "node" to the "types": [] section, and paste this right below it: "typeRoots": [ "../node_modules/@types" ]

The two should look like this:

```
"types": [ "node" ],
"typeRoots": [ "../node_modules/@types" ]
```

***Don’t forget your commas***

Since we’ll be mocking Node.js, we need to add the Node typings to the Angular project. Install them by running this from the terminal:

```
npm install @types/node --save
```

Now let’s extend Webpack. We’ll be following some of the steps that Vojtech Masek provides in this [article](https://medium.com/angular-in-depth/google-apis-with-angular-214fadb8fbc5?).

Install the Angular custom webpack builder:
```
npm i -D @angular-builders/custom-webpack
```
Now we have to tell Angular to use the correct builders for the custom webpack. Open up angular.json and replace the builder in architect with:

```
"builder": "@angular-builders/custom-webpack:browser",
```
Then paste this into the options section right below:
```
"customWebpackConfig": {
  "path": "./extra-webpack.config.js"
},
```

It should look like this:
```
"architect": {
  "build": {
    "builder": "@angular-builders/custom-webpack:browser",
    "options": {
      "customWebpackConfig": {
        "path": "./extra-webpack.config.js"
      },
```

and under serve replace the builder with:
```
"builder": "@angular-builders/custom-webpack:dev-server",
```
It should look like this:
```
"serve": {
  "builder": "@angular-builders/custom-webpack:dev-server",
```
More details about using Custom Webpacks can be found in Angular’s builder
[docs](https://github.com/just-jeb/angular-builders/tree/master/packages/custom-webpack#Custom-webpack-dev-server).

In your project’s root directory create a new JavaScript file and name it:

extra-webpack.config.js

Paste this code into it:
```
const path = require('path');

module.exports = {
  resolve: {
    extensions: ['.js'],
    alias: {
      fs: path.resolve(__dirname, 'src/mocks/fs.mock.js'),
      child_process: path.resolve(
        __dirname,
        'src/mocks/child_process.mock.js'
      ),
      'https-proxy-agent': path.resolve(
        __dirname,
        'src/mocks/https-proxy-agent.mock.js'
      ),
    },
  },
};
```

So what does all of that do? We are telling Webpack to use the mock JavaScript files instead of trying to call additional Node.js modules. Let’s create the mocks. First, create a folder in the project’s src folder and name it mocks. In the mocks folder create three JavaScript files:

child_process.mock.js
fs.mock.js
https-proxy-agent.mock.js

Paste this code into the mocks for child_process and fs:

```
module.exports = {
  readFileSync() {},
  readFile() {},
};
```

For the https-proxy-agent mock use this code:
```
module.exports = {};
```

These mock methods let the Node.js modules think they are running correctly when in reality they do nothing. Now we need to provide the process and Buffer globals, since the browser does not expose Node’s global variables. To do so, install these two packages:
```
npm i -D process buffer
```
Now add these to the Application imports in polyfills.ts:
```
import * as process from 'process';
(window as any).process = process;

import { Buffer } from 'buffer';
(window as any).Buffer = Buffer;
```

and add this to the head section of index.html:
```
<!-- snippet not preserved in the archived post -->
```

OK, almost there! At this point we have Angular configured with the mocks and are ready to install a few more modules. First let’s install google-spreadsheet:
```
npm i google-spreadsheet --save
```
Depending on the platform you are on, you may receive warnings indicating that the optional fsevents module was not installed. Since it’s listed as an optional module, I ignored it. I’m working on a Windows 10 device and had to install these modules to make the compiler happy:

eslint
fs
child_process
net
tls

Wait, didn’t we just mock fs and child_process? Yes, but the compiler still sees them listed as dependencies and wants them installed. Now that we have everything installed and configured, let’s try it out.

Wrapping up

I added a contact component and created a contact form with an onSubmit function. The onSubmit function passes the jsonObject to the addRow method for the Google Sheet. Here’s what my contact.component.html looks like:

```
<!-- "Sheet Crud Example" form markup not preserved in the archived post -->
```

Not very elegant, and yes I need to add formReset to it, but it gets the job done for now. For the contact.component.ts I added these two imports first:
```
import { FormGroup, FormBuilder, FormControl } from '@angular/forms';
import { HttpClient } from '@angular/common/http';
```
The Form imports are to build the form while the HttpClient will be used by the google-spreadsheet module for sending the JWT for authenticating and for our CRUD operations.

I then added the Node-related consts:
```
const GoogleSpreadsheet = require('google-spreadsheet');
const creds = require('../../assets/sheet-api/credentials.json');
```
If you forgot to install the node types (npm i @types/node) you will get an error because TypeScript doesn’t recognize require. If you get a message telling you to convert the require to an import, just ignore it.

Next I configured my constructor:
```
constructor(private formBuilder: FormBuilder, private http: HttpClient) {
  this.contactForm = new FormGroup({
    fullName: new FormControl(),
    email: new FormControl(),
    message: new FormControl()
  });
}
```

Then I set up the onSubmit method:

```
contactForm: FormGroup;

onSubmit() {
  const jsonObject = this.contactForm.value;
  console.log('Your form data : ', this.contactForm.value);
  const doc = new GoogleSpreadsheet('***YOUR GOOGLE SHEETID***');
  doc.useServiceAccountAuth(creds, function(err) {
    doc.addRow(1, jsonObject, function(err) {
      if (err) {
        console.log(err);
      }
    });
  });
}
```
So what exactly are we doing here? Well, we take the current contactForm values and assign them to jsonObject. Then we write them out to the log and create a new GoogleSpreadsheet. It’s important that you replace ***YOUR GOOGLE SHEETID*** with the actual ID of the Google Sheet you are trying to work with. You can find it by opening the Google Sheet in your browser. The ID is that really long hash string between the /d/ and the /edit:

https://docs.google.com/spreadsheets/d/***2he8jihW5d6HGHd3Ts87WdRKqwUeH-R_Us8F3xZQiR***/edit#gid=0
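If you want to grab the ID programmatically, a small regular expression does it. This is a sketch of my own, not part of the original post, and the URL below is a made-up example:

```javascript
// Extract the spreadsheet ID: the path segment between /d/ and the next slash
// in a Google Sheets URL. Returns null if the URL doesn't match that shape.
function sheetIdFromUrl(url) {
  const match = url.match(/\/d\/([^/]+)\//);
  return match ? match[1] : null;
}

const url = 'https://docs.google.com/spreadsheets/d/1aBcD_eFgHiJ/edit#gid=0';
console.log(sheetIdFromUrl(url)); // → '1aBcD_eFgHiJ'
```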

doc then calls the useServiceAccountAuth method in google-spreadsheet and passes credentials.json, wrapping the call to addRow. This authenticates the session and lets us add a row to the existing sheet. If you have the browser console open you will see a couple of warnings. The first warning (which looks intimidating but is not) is a compiler warning:

Critical dependency: the request of a dependency is an expression

You may also see a console log that says:

Error: incorrect header check

This is because of the function(err) in the useServiceAccountAuth method. The function(err) is a Node error callback that Angular cannot process correctly and that we haven’t coded around.

…and that wraps it up. Why am I writing a message like this to a Google Sheet instead of using an email component? I’m using Google Sheets as the backend for a web app, and I use a script in Google Sheets to forward the message as an email and for logging purposes.

Check out the google-spreadsheet [repository](https://github.com/theoephraim/node-google-spreadsheet)
for additional details on the calls you can make and how to use the module.

From the blog Michael Duquette by Michael Duquette and used with permission of the author. All other rights reserved by the author.

Mutation Testing Blog

Hello everyone! Today I wanted to write a blog about someone else’s blog post that I found online last week while reviewing for one of my final exams. It’s by James White ( https://blog.scottlogic.com/2017/09/25/mutation-testing.html ), and in his blog he talks about mutation testing, which is one of the topics we’ve covered in class recently. Mutation testing is a way to test the code that you have written by altering it with small changes that are designed with the intent of being detected by the tests you have written for your code. Basically, it mutates your code so your tests can find the mutation; if that happens, then you know your tests worked correctly. If the mutation survives in your code, then you know the tests didn’t work.

What I like about James’ blog is that he makes the idea easy to understand and incorporates plenty of different examples to walk readers through how mutation testing works. I specifically like his example of why mutation testing is needed: in it, mutation testing was able to point out a flaw in his code that otherwise would have gone unnoticed, since you could have changed the line of code or even deleted it altogether and, without mutation testing, it would have looked correct even though it wasn’t.

The other thing I liked about James’ blog is the different examples he gives of what these mutations look like. How it can change (input > 0) to (input <= 0) or how it could mutate code to check true instead of false. I also liked how he addressed the common question of how long it would take mutation testing to run, since that is a question I had myself once we started learning about mutation testing. I also thought it was pretty cool that he mentioned how mutation testing can help point out instances of redundant code, and some other neat things about it too. Overall I enjoyed reading his blog, and it helped me better understand the material we learned about in class.
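To make the boundary-mutation idea concrete, here is a tiny hand-rolled sketch of my own (not code from James’ post): a ">" to ">=" mutant of the kind a mutation testing tool generates, and the input that kills it.

```javascript
// Original function and a hand-made mutant: ">" changed to ">=",
// the sort of small change a mutation testing tool applies automatically.
function isPositive(input) {
  return input > 0;
}

function isPositiveMutant(input) {
  return input >= 0;
}

// A test case at the boundary (input = 0) kills the mutant:
// the original and the mutant disagree, so the mutation is detected.
console.log(isPositive(0));        // false
console.log(isPositiveMutant(0));  // true
```

If no test exercises the boundary, both versions agree on every input tested and the mutant survives, which is exactly the signal that the test suite has a gap.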

From the blog CS@Worcester – Nick’s Blog by nramsdell1 and used with permission of the author. All other rights reserved by the author.

CS-343 Final Project – Part 4

In my last week of working on my CS-343 final project, I was focused on implementing the remaining features that I had planned to add in my original wireframe. This included creating a working puzzle game that the player could customize and developing my layout further to make it more appealing. While I was able to create a puzzle game and clean up my layout a little, I was unable to get everything to work the way I intended before I presented the project on Friday. My final version of my project, and the one that I presented, looks like this:

As you can see, I was able to create a simple grid-based puzzle game, which was my plan from the beginning. While working on my layout and data service over the past week, I finally came up with an idea for a basic game I would be able to implement in time for the presentation. I decided to use a puzzle I have seen before, in which the player interacts with tiles on a grid to change their color as well as the color of all adjacent tiles. The goal of the puzzle is to make all tiles the same color within the time limit. I was able to create a new Angular component to represent my tiles that alternate between gray and white when clicked. However, my attempts to make them change the colors of adjacent tiles led to many issues. I had the puzzle working this way temporarily, but it would stop behaving correctly after the player reset the puzzle for the first time. I think this had something to do with how I was recreating my array of tiles in TypeScript and how that interacted with the HTML files. Clearly, I am still not an expert in TypeScript or HTML, even after a month of working on this project. Since I couldn’t get the puzzle to work the way I wanted, I ended up settling for a game where only the tile that was clicked would change colors. This makes the puzzle extremely easy, but at least it works consistently.
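The adjacent-tile behavior described above can be sketched in plain JavaScript (a hypothetical reimplementation, assuming a 2D boolean grid; the function names are mine, not from the project):

```javascript
// Toggle a tile and its four orthogonal neighbors in a grid of booleans
// (false = gray, true = white). Rebuilding the grid with makeGrid on every
// reset avoids stale references to the old tile array.
function makeGrid(rows, cols) {
  return Array.from({ length: rows }, () => new Array(cols).fill(false));
}

function toggleWithNeighbors(grid, r, c) {
  const deltas = [[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]];
  for (const [dr, dc] of deltas) {
    const nr = r + dr, nc = c + dc;
    if (nr >= 0 && nr < grid.length && nc >= 0 && nc < grid[0].length) {
      grid[nr][nc] = !grid[nr][nc];
    }
  }
}

const grid = makeGrid(3, 3);
toggleWithNeighbors(grid, 1, 1); // flips the center tile plus its 4 neighbors
console.log(grid[1][1], grid[0][1], grid[0][0]); // true true false
```

An Angular component could hold the grid in a field and call toggleWithNeighbors from the tile’s click handler, then replace the field with makeGrid(...) on reset.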

Aside from my troubles with the game mechanics, the rest of the development went pretty well. I was able to change the behavior of my width and height forms to alter the dimensions of the puzzle in tiles instead of the dimensions of a rectangle in pixels. I was also able to implement a timer that could have its time limit customized as well. If this timer runs out, a message displays saying that the game is lost. If all tiles are made white within the time limit, a victory message displays and the timer stops. You can see the application in action in the following screenshots:

Winning the Game – All tiles are made white within the time limit!
Losing the Game – Some tiles are still gray when the timer hits 0.

Overall, I found this project to be an enjoyable experience that taught me a lot about creating web applications using the Angular framework. It not only taught me how to work with TypeScript, HTML, and CSS, but it also helped me figure out how to create GUI layouts in HTML using <mat-grid-list> and how to pass data between components using a data service. While my difficulties getting the game to work correctly prevented me from implementing everything I wanted to, such as a more appealing GUI or a back-end server to save player scores, I think the experience will definitely help me write better single-page applications in the future.

From the blog CS@Worcester – Computer Science with Kyle Q by kylequad and used with permission of the author. All other rights reserved by the author.

Sights Set On the Future

This project was a fun one and helped me grow significantly as a computer scientist and as a coder. All the tools needed to succeed were given to me, and the freedom to make something that interested me and was fun fueled my passion and made me happy to do this final project. Angular is a powerful tool, and through the presentations of my fellow classmates I saw many ways that I can work on and improve my own project. One thing that another classmate used was Ionic, which is a framework built on top of Angular that allows for easy app porting to Android, iOS, and other devices and does the creation and scaling for you. Applying this to my project would help me make it even more stylish and user friendly, since at the moment it is not fully mobile friendly and some of the styling relies on pixel sizes, which will not scale correctly on mobile. I also saw a lot of my classmates use a nav bar which would lock in place as you scroll down the screen and have drop-down menus and searches, which I thought my site and app would benefit from.
This project helped me learn how far the skills learned in software construction can go, and that looking at other people’s creations and seeing how they did something can benefit your own site or application as well. I also felt great whenever I learned a new thing in either Angular or HTML/CSS, and seeing it work on my website made me want to do more; I was sad that I didn’t have more time to work on this project. The design patterns and tools learned in this Software Construction and Design class will aid me in any future career I have in software development, and I have a newfound respect and interest in front-end coding as well, since it was one of the best parts of this project in my opinion. Online tutorials and websites that gave examples of how to do certain things helped my project immensely and made the process easier and less stressful. I might return to this project in the near future and further develop the site to become more interactive and user friendly, as well as support more devices. One thing I really wanted to do but didn’t have the time for was both a keyword search to find articles that contained a word or words and an autofill function for searches of sources.

From the blog CS@Worcester – Tyler Quist’s CS Blog by Tyler Quist and used with permission of the author. All other rights reserved by the author.

The Finished Project

So, we finished the project, and it remained mostly the same through the process. The biggest change was that at first one of the processors was going to be running shortest-time-remaining process scheduling rather than both of them being round robin. Very late in the process, after everything but the process manager was written, we decided to remove the second processor, so we only had one processor running with round robin. Neither one of us could figure out how to make the algorithm stop both processors from working on the same instruction. Other than that, everything else was the same.

There were a few parts of the project that were difficult. First of all, just figuring out the overall design was fairly difficult, although doing a UML diagram was a big help with that. We ended up deciding that the tables would each be a collection of another class. The next part that was difficult was figuring out the data structure for the job and block lists. At first we thought we would use an ArrayList as the structure, but then to find anything the managers would have to run through the entire list. Next we considered a map, but regular maps are not ordered, so the order the jobs went in would not be preserved. We then tried a LinkedHashMap, but this didn’t work because there is no way to get the next or previous element in a LinkedHashMap. The one that worked was a TreeMap, since TreeMaps are sorted and have methods to get the next and previous elements in the map. The hardest part by far, however, was coding the method to run the processors; it went through four or five rewrites, but in the end we managed to get it working.
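The TreeMap navigation described above can be sketched like this (a generic illustration with made-up job names, not the project’s actual job-table code):

```java
import java.util.TreeMap;

// A TreeMap keeps its keys sorted and supports ordered navigation,
// unlike HashMap (unordered) or LinkedHashMap (no next/previous lookup).
public class JobTableSketch {
    // Returns the key of the next job after the given key, or null if none.
    static Integer nextJob(TreeMap<Integer, String> jobs, int key) {
        return jobs.higherKey(key);
    }

    public static void main(String[] args) {
        TreeMap<Integer, String> jobs = new TreeMap<>();
        jobs.put(3, "job C");
        jobs.put(1, "job A");
        jobs.put(2, "job B");

        System.out.println(jobs.firstKey());   // 1 (sorted order, not insertion order)
        System.out.println(nextJob(jobs, 1));  // 2
        System.out.println(jobs.lowerKey(3));  // 2 (previous key before 3)
    }
}
```

The higherKey/lowerKey pair is what makes walking forward and backward through the table possible, which is exactly what a LinkedHashMap cannot do.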

There were a few important things we learned doing this project. First of all, we learned how the different tables interact with the managers and how complicated that interaction is. We also learned how big a help making a plan for a project is in figuring out how to do it; in this case we did it using a UML diagram. We also learned a lot about the different steps each manager has to perform. Overall we both really enjoyed the project and learned a lot from it.

There were several things we would do differently or add if we had time. First of all, we would figure out how to get the two processors working. In addition to that, we would have liked to add the job table to the display. The most complicated thing we would have liked to add is an option to select a different operating system configuration, such as dynamic memory allocation or a paging system. On the processor side, we would have liked to add an option to select a number of processors and be able to pick what type of scheduling each processor did. Those are the only things we can think of, though.

This is a picture of the full UML

This is a picture of the Application

From the blog CS@Worcester – Tim&#039;s WebSite by therbsty and used with permission of the author. All other rights reserved by the author.