Category Archives: cs-wsu

CS 401 – Software Development Process

This is my first blog post ever. Huzzah.

[What do I expect?]

Entering my last semester at WSU, I expect my capstone class to be heavily interwoven with the knowledge gained from the other CS courses up to this point. Drawing on all the software design and analysis skills I've learned, plus my average level of coding, I believe it is going to be a hard but valuable 16 weeks. I think it's taken too long for us to learn about GitHub, something that should be taught very early on so that we can build mastery over the years. It's unfortunate that I am only just beginning to learn one of the biggest platforms for version control when I head out into the real world in a couple of months. The same goes for IRC. Having heard about and generally known what IRC is for years, I've never had a reason to use it, so I never bothered with it. Not having the class time to go over it was a shame, but I think I'll be able to figure it out.

[Readings]

After reading through the articles, they don't add much to what I already understood about the ideas behind free and open source software. The quote from the Cathedral and the Bazaar article, "Too often software developers spend their days grinding away for pay at programs they neither need nor love," pretty much describes how I feel about a future in software development.

Free software, as defined by the other articles, is pretty strict about what does not count as free, but I guess it has to be in order to promote the truest form of open source software. I think overall it's a nice premise and a cause worth promoting, but in the end it just results in things like where we are with Linux (dozens of distros, for example). Open source projects should be used more for promoting concepts and ideas and as a way to teach people to code better. Grabbing source code to investigate what certain things do and see the inside of a program is a good learning tool, but having everyone develop and tweak each aspect of it and release it to the public convolutes everything.

[Git activity]

The entire GitHub activity was confusing and too "hardcore" from what I experienced. The whole manual command-line process took only a few minutes with the GUI version. I think in this day and age, saying that you "aren't a real programmer if you use a GUI/mouse" is too harsh a constraint for people in this field to abide by. It's 2014. There are UIs to make people's lives easier and actions simpler, and we should use them. I was able to get up and running using the GUI GitHub install (not the third-party site download), and I think it represents the process pretty well.
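
For reference, the command-line flow the GUI is wrapping is really only a handful of commands (the repository URL below is just a placeholder):

    git clone https://github.com/someuser/some-project.git   # copy the remote repository locally
    cd some-project
    # ...edit files...
    git add README.md                    # stage the changed file
    git commit -m "Describe the change"  # record the change locally
    git push origin master               # send the commit back to GitHub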

[IRC activity]

I have not done anything other than download an IRC client (HydraIRC: http://www.hydrairc.com/content/downloads), so I am not sure how much I can comment on this. Hopefully we can use it more in depth in the coming class meetings.

From the blog slykrysis » cs-wsu by slykrysis and used with permission of the author. All other rights reserved by the author.

First Week, CS-401

Well then, OpenMRS is certainly a very large and interesting project, to say the least.  With such a large and ongoing effort to provide an open source EMR database, one is immediately struck with the question: what could (what I consider) a novice programmer possibly contribute to such a large ongoing effort?  The upcoming few months will certainly answer that question.  For the time being, I am very excited to soon be working on and contributing to the OpenMRS community.

My expectations of this course are already set quite high, with the hope of simply becoming a much more savvy programmer.  My only regret is not being introduced to resources like GitHub and other open source projects sooner, but that cannot be helped at this point in time. On that topic, reading Eric S. Raymond's article The Cathedral and the Bazaar has also opened my eyes to the raw potential of open source communities and how powerful they can be.  Maybe my current inexperience will still be of good use to the OpenMRS project after all.

From the blog aboulais » CS WSU by aboulais and used with permission of the author. All other rights reserved by the author.

Rafter Maker: an Android and iOS Project

This will be the first of many blogs this semester explaining the progress and discoveries I have made during the development of my app.  My task, simply put, is to develop an application for Android which can be used to create roof rafter templates and generate specific instruction sets at run time.  I then intend to convert the Android version to iOS. I will be covering the most basic of roof applications for the first version.  It will include shed, gable, hip, and gambrel rafters as well as gable and shed dormers.  Fascia and soffit will be handled as well, but they will be integrated into each template due to their dependence upon the roof angle, or pitch. The templates will be the main function of the app, but I will also work to include detailed instruction sets regarding roof construction.
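
Just to give a flavor of the geometry each template boils down to, here is the textbook common-rafter relationship as a quick shell check; the run and pitch values are made up, and this is not necessarily the algorithm the app will use:

    # back-of-envelope common rafter length from run and pitch
    run_ft=12      # horizontal run in feet
    pitch=6        # rise in inches per 12 inches of run
    awk -v run="$run_ft" -v pitch="$pitch" 'BEGIN {
        rise = run * pitch / 12                       # total rise in feet
        printf "rafter length: %.2f ft\n", sqrt(run*run + rise*rise)
    }'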

 Once that is accomplished, I will publish the app to the Google Play Marketplace as a free beta for a minimum of thirty days.  Hopefully, my users will find any bugs that have eluded me.  I plan to test all of the template algorithms with JUnit for Android and OCUnit for iOS, so there should be no bugs within those methods.  When the app is published I will begin working to convert it to iOS.  I have done some preliminary research on developing for multiple platforms, and despite there being software like PhoneGap and Titanium, I have decided to code the Android and iOS apps separately in native code.  My thinking is that I will not need to put as much effort into the iOS version because the app content will already be complete, and by coding natively the app will have an OS-specific look and feel.  In theory, I will just have to rewrite the code in Objective-C, though I am sure there will be plenty of problems along the way due to the major differences between Android and iOS app structure.

 Upon completion of the iOS app, I will publish it to the Apple App Store as a free beta for a thirty-day minimum.  Once the apps come out of beta, I will make them available for a fee in both marketplaces.  I had considered embedding advertising into the apps and offering them for free, but the market for construction apps is so narrow that I have decided against that.  However, if I were marketing to children, who seem to love clicking anything and everything, I would most certainly have made it free and embedded advertising.

 With so much work to do, I decided to begin early, before the start of the semester.  I actually will be starting with a half-finished Android application.  I have completed the shed rafter template and embedded the fascia and soffit calculations into its process.  I also created a separate UI for tablets so the extra screen real estate will be properly utilized. The app will support Android version 2.2 and newer, but there may be some limitations on small screen sizes.

 I will be blogging later in the week about some of the more difficult challenges I have encountered thus far.  There will be some screenshots of the phone interface vs. the tablet as well.  Once the Android app is complete and I begin coding in iOS, I will focus some of my blogs on the differences between developing for the two platforms.

 Till next time.

 Jason Hintlian

From the blog jasonhintlian » cs-wsu by jasonhintlian and used with permission of the author. All other rights reserved by the author.

Working as a Team but on Individual Projects

Although this is an independent study, at times I worked with Dillon Murphy. For the most part we worked on separate tasks, but there were a few we worked together on.

We did work together on all of the virtual server installs, also working with Dr. Wurst. Dillon and I were pretty familiar with server installs, but since the new CS server was being used for the wiki (which I was focusing on) and the blogs (which Dillon focused on), it made sense to plan out what we needed together. As I said in a previous post, MediaWiki relies on an Apache/PHP/MySQL setup, and luckily for Dillon, WordPress relies on that as well.

During our individual projects, we would shoot ideas off of each other if we ran into issues. For example, for moving over WordPress, Dillon wasn’t too sure how to approach it but since I was familiar with PHP/MySQL applications I suggested that it was likely all he really needed to do was dump the database and restore it on the new server and then move over the code. There were a few issues with configuration files pointing to old URLs, but other than that everything worked out.
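
A minimal sketch of that dump-and-restore approach, assuming a stock MySQL setup on both machines (the database name, paths, and the newserver hostname are all placeholders):

    # on the old server: dump the database and copy it plus the code over
    mysqldump -u root -p wordpress > wordpress.sql
    rsync -av /var/www/html/blogs/ newserver:/var/www/html/blogs/
    scp wordpress.sql newserver:

    # on the new server: recreate the database and load the dump
    mysql -u root -p -e "CREATE DATABASE wordpress"
    mysql -u root -p wordpress < wordpress.sql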

We worked together on getting the new CS server up, which worked out thanks to some guesswork and brainstorming to figure out the best way to do it.

Getting the cluster set up with fresh installs of CentOS was another collaborative effort, seeing as we planned to use the machines for our next projects, Dillon focusing on Hadoop and myself focusing on Eucalyptus. We had a few ideas on how to utilize the cluster for both platforms. One suggestion involved installing a virtual machine hypervisor on each machine and having two virtual machines on each, one for Hadoop and one for Eucalyptus. Another suggestion was splitting the machines between the two projects. The suggestion we went with, after Dillon said it looked like it would work out, was having one CentOS install on each machine and installing both platforms side by side.

Once we figured out what we wanted to do with the network and resolved the issues with it, Dillon focused on getting the servers up. We finally got that up and running, so next is focusing on the final solo projects.

While Dillon focused on that, and a bit before, I focused on getting GitLab working to a usable state. Dillon and Dr. Wurst helped test and ensure that everything was working as it should.

It's really helpful to work in a team, even if each member is focusing on a separate individual project. Being able to bounce ideas off each other helps resolve things more quickly. Working on your own can be productive for its own reasons, but it never hurts to have a helping hand available.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

GitLab Part 2

I missed a few things I wanted to go over with GitLab, and I have an update on trying out CACerts.

I didn’t really go over GitLab or git itself and why it’s important.

To understand GitLab, first you have to understand Git. Git is revision control software for plain text files. What this means is you edit the code, commit your changes, edit it again, and your changes are tracked. Linus Torvalds, the creator of the Linux kernel, created it when he wasn't happy with the other software that was available. Git is one of my personal favourite tools to use, and has helped me with development a ton. I've definitely made poor choices in improving or deleting code I didn't like and wanted to undo; with proper use of Git this is trivial. One really advantageous feature of Git is that it's decentralized: when you clone a project, all its contents are self-contained on your computer, and if you want to sync with a remote server you can pull the changes from the server or push your changes to it. You can even have multiple remote repositories to push and pull from, though in practice you won't use that feature too much. Another useful feature is that you can set the remote repository to be another location on your local machine; this can be useful if you're away from the internet and worried about accidentally ruining your working directory.
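
As a small sketch of the multiple-remotes idea (all the URLs and paths here are invented):

    git clone git@example.com:someuser/project.git
    cd project
    # add a second remote that lives on the local disk, e.g. a bare repo on a USB drive
    git init --bare /mnt/usb/project-backup.git
    git remote add backup /mnt/usb/project-backup.git
    git push backup master        # push commits to the local backup remote
    git pull origin master        # pull updates from the original server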

GitLab is probably the best free software alternative to GitHub. GitHub is a web front-end for remote Git repositories, bringing a lot of additional features such as editing your code from the web, forking other projects and pushing changes back to the main project (useful for feature development), easy account and group management, and of course the advantage of keeping your code stored on a remote server in case of issues with your local machine. For an example of a GitHub project, you can check out the CS-401 project from Spring 2013 that I was a member of. GitLab offers pretty much all of GitHub's main features and a familiar interface. The big advantage of GitLab over GitHub is that we can store the data on WSU's own hardware, and we get private repositories, which are a paid feature on GitHub.

So as far as our GitLab install goes, last night I looked into using a certificate from CAcert. It turns out that to apply for a CAcert certificate you need to own the domain, so that was quickly scrapped. I don't think we can get HTTPS working at this point. I tried a few things with the configuration, but it seems like the issue is the self-signed certificate, and I don't think obtaining a legitimate certificate is possible. This isn't a huge issue, though with SSH you have to install keys on the server, so it requires a bit more documentation.
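
The extra documentation for SSH basically amounts to generating a key pair and registering the public half with GitLab; the email and hostname below are placeholders:

    ssh-keygen -t rsa -C "you@worcester.edu"      # creates ~/.ssh/id_rsa and id_rsa.pub
    cat ~/.ssh/id_rsa.pub
    # paste the printed public key into your GitLab profile under "SSH Keys", then:
    git clone git@gitlab.example.edu:group/project.git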

What needs to be done with the server from here? I think all we need to do is resize the partition and the virtual hard drive the server is on. I believe we initially set it up as a 16 GB install, but if students are going to use it for many assignments in their classes, I feel like that would fill up pretty quickly.
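
If we go that route, and assuming the guest is a stock CentOS install using LVM on ext4 (the volume group and logical volume names below are guesses; check lsblk and vgdisplay first), the resize would look roughly like:

    # after growing the virtual disk in VMware, hand the new space to LVM
    fdisk /dev/sda                       # create a new partition (type 8e) on the free space
    partprobe /dev/sda
    pvcreate /dev/sda3
    vgextend vg_gitlab /dev/sda3
    lvextend -l +100%FREE /dev/vg_gitlab/lv_root
    resize2fs /dev/vg_gitlab/lv_root     # grow the ext4 filesystem online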

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

GitLab

Where do I start with GitLab?

We had to install three servers to finally get this up and running, and I'm still not completely satisfied with it. It's frustrating to set up. It's great software once you have it running, there's no doubt about that, but it sure is a pain to get there.

Before I had done any work on setting up a GitLab server for WSU's CS department, I had a bit of experience with it. I had tried to set up GitLab on some distro of Linux at home, following the install guide, but didn't get anything working. We also had a GitLab server running at my work at one point using the BitNami installer, but there were a few issues with it, so we scrapped it for something else.

The first attempt was a GitLab server install with the BitNami installer. We set up a CentOS server, I ran the BitNami installer, and I got something up and running pretty easily. However, it had the most bizarre issue I had seen: for some strange reason pages would just show up as blank in Firefox unless you refreshed. I couldn't find anything covering the same issue at all. I didn't think something with an issue like that was worth keeping around, so we scrapped that and set up another CentOS server.

Here I decided to find a guide for installing GitLab on CentOS instead of relying on an installer or anything like that. In hindsight this probably wasn't the best idea; the official install guide recommends Ubuntu, and we probably would have had a much easier time just going with Ubuntu Server. However, we had CentOS on hand, and I much prefer using CentOS for a server, as it has some pretty good default configuration files for a lot of applications.

I didn't have any issues setting GitLab up initially using this guide, but afterwards a few problems became apparent. First off, neither SSH nor HTTPS cloning worked. After a bit of hassle we got SSH working, were fine with that, and decided that things were good to go. We sent in the request to UTS and got a public IP and subdomain for the server. However, we quickly discovered that sending emails didn't work. It turned out that it was trying to send emails from gitlab@<localip>, which got rejected. Unfortunately, I couldn't find anywhere to change the address it was sending emails from. I changed configuration files all over the place but had no success at all in fixing that. It got to the point where I just settled on doing a complete reinstall with the subdomain in mind, which would hopefully fix all the issues. I eventually decided that uninstalling GitLab would be too much of a hassle on its own, so I would have to make a completely new server install. After setting my mind on doing that, I spent a bit more time trying to fix the current one, totally broke it, and didn't care enough to repair it.

So finally, the last install, the one we have up and running. Using the same guide as before, but documenting any oddities I found so we'd have a reasonable installation guide, I got it running and working to a pretty reasonable state, but with a few bumps along the way.

First off, the guide recommends running a test to make sure everything's working before you set up the webserver. The test gives an error if the webserver isn't up and running, but I had no indication the error was caused by the webserver not being set up. I decided to just install the webserver and hope it fixed things, and thankfully it did.

Secondly, I tried Apache as the webserver, as the guide had the option of using nginx or Apache. nginx seems to be what GitLab was developed for, so I used it for the previous installs, but this time I decided to go with what I was familiar with. I didn't get anywhere with Apache, so I just went back to nginx and got that working pretty easily.

Thirdly, the SSH issue we had before came back. This issue had two parts. First, the SELinux permissions aren't set correctly for the SSH keys file. I'm not sure of the exact cause, but I think it's because GitLab creates the file itself if one doesn't exist, so the correct permissions never get applied. It's an easy fix once you figure out that's the issue, but there's really no clear indication of it, so I was stumped for a while until I decided to stop looking for GitLab-related issues with SSH and instead look for CentOS-related issues with SSH. The fix was a really simple command: restorecon -R -v /home/git/.ssh. We didn't have this issue with the other server installs, but I think in those cases I disabled SELinux altogether, while this time I was trying to get it to work with SELinux enabled. The other part was that CentOS by default locks accounts without passwords. The GitLab docs recommend editing the /etc/shadow file manually, but I found a command that accomplishes the same thing without editing important configuration files: usermod -U git. Email was working fine too.
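
For anyone hitting the same SSH problem, the two fixes boil down to:

    restorecon -R -v /home/git/.ssh   # restore the SELinux context on the git user's key file
    usermod -U git                    # unlock the passwordless git account (no /etc/shadow editing)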

I still need to get HTTPS cloning working. I believe it's an issue with using a self-signed certificate, so I'm going to try out a CAcert certificate and see if I can get it working from there.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

MediaWiki + Going Live with the new CS server

In my first post of the semester, we set up a CentOS server that we planned to make the new CS server. The old one was running a dated version of Red Hat, and with the success of the server that was set up for CS401, the latest CentOS seemed like the way to go.

The tasks were split up between myself and Dillon Murphy. Dillon worked on moving over the blogs and I focused on getting a new wiki up.

The old wiki had been plagued by spambots and used different software, so Dr. Wurst figured that just setting up MediaWiki and moving over what he could would be a good direction to go in.

MediaWiki is PHP/MySQL wiki software built and designed for Wikipedia. Although it was built for Wikipedia, it's a free software application and as such can be freely used by anyone. I'm not sure if it's the most popular wiki software, but it's definitely the one I've seen used the most. Aside from Wikipedia, one notable site that uses MediaWiki is Wikia, where users can easily create MediaWiki installs for arbitrary uses. It's also the wiki software of choice at my work, where we mainly use it for documenting important information such as networking info, what's running on servers, troubleshooting tips, and that sort of content.

A PHP/MySQL application relies on PHP for the code and MySQL for the database. These applications are usually pretty trivial to install, so long as you have PHP, MySQL, and a web server running. I've only ever really worked with Apache as a web server, so we have the common Apache/PHP/MySQL setup on this server. This is also the setup one would use for WordPress, so it worked well for our multi-purpose server.
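
On CentOS 6 that stack is a couple of yum installs away; a minimal sketch using the stock package names:

    yum install httpd php php-mysql mysql-server
    service httpd start && chkconfig httpd on     # start Apache now and on boot
    service mysqld start && chkconfig mysqld on   # same for MySQL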

I'm pretty familiar with MediaWiki in a general sense; I remember installing it and playing around with it years ago because I found wikis interesting. I remember that years ago it wasn't as easy to install as it is now, with its main focus being to make Wikipedia better, but nowadays it's really trivial to set up and get running. (It could also be that I wasn't as used to installing web applications as I am now, haha.)

So there shouldn’t be any issues setting this up right?

Luckily there weren't any issues getting this up and running. I had also recently set up Wikipedia-styled URLs for pages at work, so I was able to get that working with no problem either. After setting everything up, I created a user for Dr. Wurst and a user for Dillon to test that everything was working. Everything seemed to be running with no issues, so mission success!

The next step was to figure out how we were going to handle user authentication. Ideally, we don't want it to be possible for spambots to sign up at all, given what happened to the older wiki. Initially I set it up so users cannot register at all until we figured things out. We decided against using LDAP authentication since we didn't want to bother UTS too much. Originally we were thinking of editing the code to only accept registrations from the @worcester.edu domain. That probably wouldn't be too difficult to implement, but I found that the Google Apps Authentication plugin accomplished pretty much the same thing, restricting logins to Google Apps users from a specific domain using OpenID authentication. Users would be created automatically on login if they didn't exist, so it was an ideal solution. Unfortunately I found that this authentication method wasn't enabled for the worcester.edu domain, so we sent in a request to UTS to see if they could enable it.
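
Turning off open registration is a one-line change in MediaWiki's LocalSettings.php (the install path below is an assumption):

    # forbid anonymous visitors from creating accounts
    echo '$wgGroupPermissions["*"]["createaccount"] = false;' >> /var/www/html/wiki/LocalSettings.php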

While we waited for word on whether it was a good idea to enable and whether it had been enabled, Dr. Wurst asked about setting up subpages. Subpages are a feature not enabled by default; they let you make pages titled PAGE/Subpage, which are pages named "Subpage" that link back to the parent page "PAGE" right at the top for easy navigation. This was pretty trivial to set up, and it's a pretty nice feature to have.
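
Enabling subpages is a similar one-liner in LocalSettings.php (again, the path is an assumption); this turns them on for the main namespace:

    echo '$wgNamespacesWithSubpages[NS_MAIN] = true;' >> /var/www/html/wiki/LocalSettings.php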

We also moved the server from testing to production, as Dillon had finished moving over the blogs. All we really needed to do for this was take the old server off its current local IP and move the new server to that IP, and the domain and public IP would point to it. We weren't 100% sure it would work out this way, but it seemed to me like that was the way it would work, and we could always undo the changes and send in a request to UTS if there were problems. Luckily everything worked out and the server went live.

Earlier today I got word that the OpenID authentication feature was enabled for the worcester.edu domain, so I enabled the plugin again and everything worked as it should. This led to two issues, though. First, it wouldn't let me edit permissions of users with an @ in the name, as it treats editing those users as editing users from another wiki. I found a pretty easy fix for that by changing the delimiter used for that purpose from @ to #, allowing the @worcester.edu users to be edited as normal. The second issue was that the old accounts couldn't be used any more, so I just redirected their user pages to the new accounts. Not the cleanest fix; I looked into reassigning each old contribution to the new users, but it seemed like a lot of work for a small task. It does make sense for MediaWiki to make that hard, since letting people "steal" contributions on a big site like Wikipedia wouldn't be good for anyone.

As far as the wiki goes, I'm pretty satisfied with how it turned out.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

Working With UTS

This is my first time working with a team that manages the networking and related infrastructure. At work I was able to plan out the networking with a few others, and if we ever needed something done it was usually something we could handle ourselves. You don't really have that kind of freedom and ease when you have to send in requests for a lot of different tasks.

I'm not really complaining; it's just a different experience. The most important takeaway is to plan out what you need in advance and send out requests as soon as possible. Some things will take more time than others, as some tasks need to be verified for network security, and they definitely have other work to deal with besides ours.

One big holdup we had was the networking for the cluster. It took Dillon and me a bit to figure out exactly what kind of networking we could do, as well as what we wanted to do. The original cluster had one IP on WSU's LAN, and then one machine acted as a router and handled a separate LAN just for the cluster. We figured that wasn't really necessary and it would be easy enough to just give each machine an IP on WSU's LAN instead. Initially we decided to just use the next 10 IP addresses from the block we were given for virtual servers. This setup didn't work at all, and we figured those IP addresses were on a separate VLAN or something like that. Dillon later attempted to use the IP that was assigned to the server before, but that didn't work either. We could acquire IP addresses from DHCP, but we wanted static IP addresses so we wouldn't have to worry about conflicts with the DHCP pool or anything like that. We then had to ask UTS about getting some IP addresses, and while we waited we just installed the servers using the DHCP addresses. When UTS got back to us they told us that the whole block of IP addresses given for the cluster before should be available. We had already tested that and it didn't work, but we tested it again just in case, and it still didn't work. So we had to troubleshoot with them, and eventually we found out that the VLAN assigned to that port was wrong and got everything sorted out. It was kind of a hassle overall, but we definitely could have avoided it if we had tested the IPs and figured them out earlier in the semester.
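
For the record, pinning a static address on one of the CentOS nodes is just an interface config file plus a network restart; all of the addresses below are placeholders. /etc/sysconfig/network-scripts/ifcfg-eth0 ends up looking something like this, followed by a service network restart:

    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=10.100.0.50
    NETMASK=255.255.255.0
    GATEWAY=10.100.0.1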

One thing I was working on individually was Google Apps authentication for the wiki. I'll go into it more in a post about the wiki, but after setting it up I found that the feature wasn't enabled for WSU accounts. So we had to see if UTS thought that was a good feature to enable; I'll give an update once we find out.

We had the IP addresses available to us for the virtual machines in advance, Dr. Wurst had asked about them earlier in the year for the server for the CS401 project so we were all set regarding those.

In one particular case we avoided UTS altogether. When we were moving the new CS server to production, we found that the old server wasn't assigned the public IP address directly; it was assigned a local IP address. This sounded like a 1:1 NAT setup, which is essentially where devices on your network have local IP addresses and public IP addresses are routed to specific local ones. So to move the new CS server to production, we figured that by taking the old server off its local IP address and giving that address to the new server, we'd be able to put it in production with no issues. We were able to do this and everything worked out well.

Overall I’d say working with UTS was a good thing. It definitely caused a bit of hassle as it wasn’t really what I was used to, but overall a team like that is definitely a good thing to have.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

Start of the Independent Study

This semester I'm working on an independent study involving migrating the old Computer Science server (cs.worcester.edu) to a more updated platform (with a focus on installing new wiki software and moving over articles from the old software), looking into and setting up a Git server for the CS department, and setting up a multi-node Eucalyptus cluster. For some of this I'll be collaborating with Dillon Murphy: for the Eucalyptus cluster I'll be sharing hardware with his Hadoop cluster, for the server migration I'll focus on migrating the wiki while he focuses on the blogs, and we'll work together on the Git server.

For the first step of the project, we installed a CentOS 6.4 virtual server on the VMware platform. We had a bit of trouble getting the VMware console to load; the web extension that allows it to work didn't seem to want to run on anything, and we ended up installing the wrong VMware clients and not having them work, so eventually Dr. Wurst had to open up his Windows virtual machine and run the VMware client he had installed on that. Once we got that running, everything went smoothly and we were able to get everything up and running without any issues. We used a minimal install of CentOS, allowing us to focus on installing only the applications we need.

We set up user accounts for ourselves on the new server and the old server, and on the new server I installed some essential applications as well as the EPEL (Extra Packages for Enterprise Linux) repository. If you don't know, most distros of GNU+Linux use a package manager to handle the installation of applications, and they use repositories as the sources they get the applications from. CentOS relies on the YUM package manager, and while the repositories it comes with by default are good for most cases, there are a few odd applications you'll want to get from EPEL.

Here’s what we installed and why:

  • vim – text editor
  • emacs – another text editor (can’t have too many of these)
  • tmux – terminal multiplexer, similar to GNU Screen. This allows you to run multiple terminal windows in the same session, with a few extra features.
  • mosh – Mobile Shell (http://mosh.mit.edu/); a replacement for SSH with some cool ideas such as client IP roaming and lag compensation.
  • httpd – Apache web server
  • MySQL – MySQL database
  • htop – displays running processes and CPU and memory statistics; I think it's much easier to use than the standard top application

This should be good enough for now; if any of our projects need any more dependencies, we can install those, but this is a pretty reasonable set-up for a web server.
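
The actual install went roughly like this (the EPEL release RPM below is whichever one is current; vim ships as the vim-enhanced package, and tmux, mosh, and htop are the ones that actually come from EPEL):

    rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
    yum install vim-enhanced emacs tmux mosh htop httpd mysql-server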

My next focus will be the installation of MediaWiki, probably to a /wiki directory on the server. I've set one up before at my job with nice Wikipedia-styled links; it's a bit of work but shouldn't be too difficult. My main worry is getting a reasonable set of plug-ins to use with it to provide all the functionality we could need (as well as documenting the process). I also want to look into using Worcester State's LDAP server for the MediaWiki users, as MediaWiki has an LDAP plugin. If that doesn't work or we can't get the LDAP details, it seems like the Google Apps Authentication plug-in will serve the same purpose.

The next step after MediaWiki would be to set up a GitLab server. GitLab is a free software knock-off of GitHub. While it would be nice to have a GitHub server up and running, I'm pretty sure they charge a ton of money for their enterprise self-hosted solution, and GitLab provides enough functionality for what we want that paying for GitHub isn't really worth it. I have a bit of experience with GitLab, and from what I've found, installing it from scratch doesn't really work out too well, but BitNami has an easy installer, as well as a VMware appliance for GitLab. I've set up a server using the easy installer before and it's extremely simple, although it installs to a pretty non-typical location, so it behaves a bit differently from a natively installed application. For that reason I think setting up the VMware appliance would be the best option for us, since as far as I know that's a natively installed GitLab on a pre-configured server. But that's something we'll just have to find out. We want something that's easy to maintain (or doesn't have to be maintained much at all) and easy to upgrade if we need to.

As far as the Eucalyptus platform goes, I haven't looked too much into it yet. For the hardware, I'm sharing 10 or 11 servers with Dillon Murphy as he works on his Hadoop cluster. We're not sure if we'll split the servers between us, attempt to install both platforms on each server, or set up a virtual machine host and install a VM for each platform on each server. I think that ideally installing them side by side on each server would be the best option, but if they have conflicting dependencies there would be problems with that, so it's something we'll have to figure out when we get to it.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

A Quick Review of the JILOA Course

First of all, I would say this course was my first experience of working in a real working environment; it helped me form a proper picture of how everyone works together to get a big project done. It is true that in programming, we may not know what we are building until all the code is integrated into a product. A big project is broken down into several different parts, and each part is handled by a group of programmers. Every group then has its own task to do, and some parts of the project do not need to know how the other parts are going. Teamwork, brainstorming, and setting up tasks for each group are all important initial steps of the process that help the work go efficiently.

A project leader or advisor is also very important for keeping the project on the right track and making it move as quickly as possible.

Besides that, a real working environment involves clients. We need to stay informed and keep them updated. Although clients do not always know what they are talking about when it comes to programming, we have to focus on how they would like the product to function so that we can both come to a happy end of the contract.

Communication is very important to getting the project done efficiently. Besides the email list, Git and GitHub were new to me, and they were great tools. They are also great tools for our teams to manage, share, and contribute to our project. I would like to learn more about them.

Since our big project was broken into smaller parts and each group had a different part to work on, besides my team's work I did not know much about the other teams' work or how they finished their tasks. There were many things that I would have liked to work on but did not, such as the database, the server, the user interface, and testing the code. However, the primary goal of the course was to learn how to work together in a real working environment, to have the ability to process and analyze, to cooperate, and to get the job done; and I feel like I already achieved those things in this course.

For this course, I wish I had known more about Git and the language used for the project beforehand.

We were asked to post weekly, but with my limited knowledge of the programming languages and being stuck on how to solve problems, some weeks I was still working on the same thing. Rather than just post a sentence saying so, I waited until I got something done before posting. That is why I have fewer posts.

 

From the blog daunguyen10's Blog » CS-WSU by daunguyen10 and used with permission of the author. All other rights reserved by the author.