Category Archives: cs-wsu

GitLab

Where do I start with GitLab?

We had to install three servers to finally get this up and running, and I’m still not completely satisfied with it. It’s great software once it’s running, there’s no doubt about that, but it sure is a pain to set up.

Before doing any work on setting up a GitLab server for WSU’s CS department, I had a bit of experience with it. I had tried to install GitLab on some distro of Linux at home, following the install guide, but never got it working. We also had a GitLab server running at my work at one point using the BitNami installer, but there were a few issues with it so we scrapped it for something else.

The first attempt was a GitLab install with the BitNami installer. We set up a CentOS server, ran the BitNami installer, and got something up and running pretty easily. However, it had the most bizarre issue I had seen: for some strange reason, pages would show up blank in Firefox unless you refreshed. I couldn’t find anything covering the same issue at all. I didn’t think something with an issue like that was worth keeping around, so we scrapped it and set up another CentOS server.

Here I decided to find a guide for installing GitLab on CentOS instead of relying on an installer. In hindsight this probably wasn’t the best idea; the official install guide recommends Ubuntu, and we probably would have had a much easier time going with Ubuntu Server. However, we had CentOS on hand, and I much prefer CentOS for a server, as it ships with pretty good default configuration files for a lot of applications.

I didn’t have any issues setting GitLab up initially using this guide, but afterwards a few problems became apparent. First off, neither SSH nor HTTPS cloning worked. After a bit of hassle we got SSH working, were fine with that, and decided things were good to go. We sent in the request to UTS and got a public IP and subdomain for the server. However, we quickly discovered that sending emails didn’t work: it was trying to send them from gitlab@&lt;localip&gt;, which got rejected. Unfortunately, I couldn’t find anywhere to change the sender address. I changed configuration files all over the place but had no success in fixing it. It got to the point where I settled on doing a complete reinstall with the subdomain in mind, which would hopefully fix all the issues. I eventually decided that uninstalling GitLab would be too much of a hassle on its own, so I would have to do a completely new server install. After setting my mind on that, I spent a bit more time tinkering with the current install, totally broke it, and didn’t care enough to fix it.

So finally, the last install, the one we have up and running. Using the same guide as before, but documenting any oddities I found so we’d have a reasonable installation guide, I got it running and working to a pretty reasonable state, with a few bumps along the way.

First off, the guide recommends running a test to make sure everything’s working before you set up the webserver. The test gives an error if the webserver isn’t up, but there was no indication that the missing webserver was the cause. I decided to just install the webserver and hope it fixed things, and thankfully it did.

Secondly, I tried Apache as the webserver, as the guide offered a choice between nginx and Apache. nginx seems to be what GitLab was developed for, so I had used it for the previous installs, but this time I decided to go with what I was familiar with. I didn’t get anywhere with Apache, went back to nginx, and got that working pretty easily.

Thirdly, the SSH issue we had before came back. This issue had two parts. First, the SELinux permissions weren’t set correctly on the SSH keys file. I’m not sure of the exact cause, but I think it’s because GitLab creates the file itself if one doesn’t exist, so it never gets the correct security context. It’s an easy fix once you figure out that’s the issue, but there’s really no clear indication of it, so I was stumped for a while until I stopped looking for GitLab-related SSH issues and instead looked for CentOS-related ones. The fix was a really simple command: restorecon -R -v /home/git/.ssh. We didn’t have this issue with the other server installs, but I think in those cases I had disabled SELinux altogether, and this time I was trying to get GitLab working with it. The other part was that CentOS by default locks accounts without passwords. The GitLab docs recommend editing the /etc/shadow file manually, but I found a command that accomplishes the same thing without editing important configuration files: usermod -U git. Email was working fine too.
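For reference, the two fixes boil down to a couple of commands (the path assumes the git user’s home directory from the install guide):

```shell
# Restore the SELinux context on the git user's SSH directory, so sshd
# is allowed to read the authorized_keys file that GitLab created itself
restorecon -R -v /home/git/.ssh

# Unlock the passwordless git account that CentOS locks by default
# (equivalent to the manual /etc/shadow edit the GitLab docs suggest)
usermod -U git
```

Both need to be run as root on the GitLab host.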

I still need to get HTTPS cloning working. I believe it’s an issue with using a self-signed certificate, so I’m going to try a CACert certificate and see if I can get it working from there.
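In the meantime, a client-side stopgap (not great security-wise, but handy for testing) is to tell git to skip certificate verification for a single clone. The URL here is a hypothetical placeholder, not our actual server address:

```shell
# Disable TLS verification for this one command only; only reasonable
# as a temporary workaround while the server uses a self-signed cert
git -c http.sslVerify=false clone https://gitlab.example.edu/group/project.git
```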

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

MediaWiki + Going Live with the new CS server

In my first post of the semester, we set up a CentOS server that we planned to make the new CS server. The old one was running a dated version of Red Hat, and with the success of the server set up for CS401, the latest CentOS seemed like the way to go.

The tasks were split up between myself and Dillon Murphy. Dillon worked on moving over the blogs and I focused on getting a new wiki up.

The old wiki had been plagued by spambots and ran different software, so Dr. Wurst figured that setting up MediaWiki and moving over what he could would be a good direction to go in.

MediaWiki is a PHP/MySQL wiki application built and designed for Wikipedia. Although it was built for Wikipedia, it’s free software and as such can be used by anyone. I’m not sure if it’s the most popular wiki software, but it’s definitely the one I’ve seen around the most. Aside from Wikipedia, one notable site that uses MediaWiki is Wikia, where users can easily create MediaWiki installs for arbitrary uses. It’s also the wiki software of choice at my work, where we mainly use it for documenting important information such as networking info, what’s running on which servers, troubleshooting tips, and that sort of content.

A PHP/MySQL application relies on PHP for the code and MySQL for the database. These applications are usually pretty trivial to install, so long as you have PHP, MySQL, and a webserver running. I’ve only ever really worked with Apache as a webserver, so we have the common Apache/PHP/MySQL set-up on this server. This is also the set-up one would use for WordPress, so it worked well for our multi-purpose server.

I’m pretty familiar with MediaWiki in a general sense; I remember installing it and playing around with it years ago because I found wikis interesting. Back then it wasn’t as easy to install as it is now, since its main focus was making Wikipedia better, but nowadays it’s really trivial to set up and get running. (It could also be that I wasn’t as used to installing web applications as I am now, haha.)

So there shouldn’t be any issues setting this up right?

Luckily there weren’t any issues getting this up and running. I had also recently set up Wikipedia-style URLs for pages at work, so I was able to get that working with no problem either. After setting everything up, I created a user for Dr. Wurst and a user for Dillon to test that everything was working. Everything seemed to be running with no issues, so mission success!

The next step was to figure out how we were going to handle user authentication. Ideally, we didn’t want it to be possible for spam bots to sign up at all, given what happened to the old wiki. Initially I set it up so users cannot register at all until we figured things out. We decided against LDAP authentication since we didn’t want to bother UTS too much. Originally we were thinking of editing the code to only accept registrations from the @worcester.edu domain. This probably wouldn’t have been too difficult to implement, but I found that the Google Apps Authentication plugin accomplished pretty much the same thing, restricting logins to Google Apps users from a specific domain using OpenID authentication. Users would be created automatically on first login if they didn’t exist, so it was an ideal solution. Unfortunately, I found that this authentication method wasn’t enabled for the worcester.edu domain, so we sent in a request to UTS to see if they could enable it.
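The interim lockdown is just a LocalSettings.php tweak; something along these lines, using MediaWiki’s standard permission settings (a sketch, not our exact config):

```php
# In LocalSettings.php: stop anonymous visitors from creating accounts,
# so spam bots can't register while authentication is being sorted out
$wgGroupPermissions['*']['createaccount'] = false;

# Optionally also require login before editing any page
$wgGroupPermissions['*']['edit'] = false;
```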

While we waited to hear whether it was a good idea to enable it, and whether it had been enabled, Dr. Wurst asked about setting up subpages. Subpages are a feature that isn’t enabled by default; it lets you make pages titled PAGE/Subpage, which are pages named “Subpage” that link back to the original page “PAGE” right at the top for easy navigation. This was pretty trivial to set up, and it’s a pretty nice feature to have.
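Enabling subpages is a one-liner per namespace in LocalSettings.php (shown here for the main namespace; other namespaces can be enabled the same way):

```php
# Enable the PAGE/Subpage syntax in the main namespace, which adds the
# breadcrumb link back to PAGE at the top of each subpage
$wgNamespacesWithSubpages[NS_MAIN] = true;
```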

We also moved the server from testing to production, as Dillon had finished moving over the blogs. All we really needed to do was take the old server off its current local IP and move the new server to that IP, and the domain and public IP would point to it. We weren’t 100% sure it would work out this way, but it seemed like it should, and we could always undo the changes and send in a request to UTS if there were problems. Luckily everything worked out and the server went live.

Earlier today I got word that the OpenID authentication feature was enabled for the worcester.edu domain, so I enabled the plugin again and everything worked as it should. This led to two issues, though. First, it wouldn’t let me edit permissions of users with an @ in the name, as it treats editing those users as editing users from another wiki. I found a pretty easy fix by changing the delimiter used for that from @ to #, allowing the @worcester.edu users to be edited as normal. The second issue was that the old accounts couldn’t be used any more, so I just redirected their user pages to the new accounts. Not the cleanest fix; I looked into reassigning each old contribution to the new accounts, but it seemed like a lot of work for a small gain. It makes sense for MediaWiki to make that hard, though, since letting people “steal” contributions on a big site like Wikipedia wouldn’t be good for anyone.
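The delimiter fix is also a single LocalSettings.php setting; MediaWiki normally treats the @ in a username on Special:UserRights as an interwiki separator, so switching the delimiter lets the @worcester.edu accounts be edited normally:

```php
# Use '#' instead of the default '@' as the interwiki delimiter on
# Special:UserRights, so usernames like user@worcester.edu aren't
# mistaken for users on a remote wiki
$wgUserrightsInterwikiDelimiter = '#';
```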

As far as the wiki goes, I’m pretty satisfied with how it turned out.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

Working With UTS

This is my first time working with a team that manages networking and related services. At work I was able to plan out the networking with a few others, and if we ever needed something done it was usually something we could handle ourselves. You don’t really have that kind of freedom and ease when you have to send in requests for a lot of different tasks.

I’m not really complaining; it’s just a different experience. The most important takeaway is to plan out what you need in advance and send out requests as soon as possible. Some things will take more time than others, since some tasks need to be verified for network security, and UTS definitely has other tasks to deal with besides ours.

One big holdup we had was the networking for the cluster. It took Dillon and me a bit to figure out exactly what kind of networking we could do, as well as what we wanted to do. The original cluster had one IP on WSU’s LAN, with one machine acting as a router and handling a separate LAN just for the cluster. We figured that wasn’t really necessary and that it would be easy enough to just give each machine an IP on WSU’s LAN instead. Initially we decided to use the next 10 IP addresses from the block we were given for virtual servers. This set-up didn’t work at all, and we figured those addresses were on a separate VLAN or something like that. Dillon later tried the IP that had been assigned to the server before, but that didn’t work either. We could acquire addresses from DHCP, but we wanted static addresses so we wouldn’t have to worry about conflicts with the DHCP pool. We then had to ask UTS about getting some IP addresses, and while we waited we installed the servers using the DHCP addresses. When UTS got back to us, they told us the whole block given for the cluster before should be available. We had already tested it without success, but we tested it again just in case, and it still didn’t work. So we had to troubleshoot with them, and eventually we found out that the VLAN assigned to that port was wrong and got everything sorted out. It was kind of a hassle overall, but we definitely could have avoided it by testing IPs and figuring them out earlier in the semester.

One thing I was working on individually was Google Apps authentication for the wiki. I’ll go into it more in a post about the wiki, but after setting it up I found that the feature wasn’t enabled for WSU accounts. So we had to see if UTS thought it was a good feature to enable; I’ll give an update once we hear back one way or the other.

We had the IP addresses for the virtual machines available to us in advance; Dr. Wurst had asked about them earlier in the year for the CS401 project server, so we were all set regarding those.

In one particular case we avoided UTS altogether. When we were moving the new CS server to production, we found that the old server wasn’t assigned the public IP address directly; it was assigned a local IP address. This sounded like a 1:1 NAT set-up, which is essentially where devices on your network have local IP addresses and the router maps each public IP address to a specific local one. So to move the new CS server to production, we figured that taking the old server off its local IP address and giving that address to the new server would put it into production with no issues. We were able to do this and everything worked out well.

Overall I’d say working with UTS was a good thing. It definitely caused a bit of hassle as it wasn’t really what I was used to, but overall a team like that is definitely a good thing to have.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

Start of the Independent Study

This semester I’m working on an independent study involving migrating the old Computer Science server (cs.worcester.edu) to a more updated platform (with a focus on installing new wiki software and moving over articles from the old software), looking into and setting up a Git server for the CS department, and setting up a multi-node Eucalyptus cluster. For some of this I’ll be collaborating with Dillon Murphy: for the Eucalyptus cluster I’ll be sharing hardware with his Hadoop cluster, for the server migration I’ll focus on the wiki while he focuses on the blogs, and we’ll work together on the Git server.

For the first step of the project, we installed a CentOS 6.4 virtual server on the VMware platform. We had a bit of trouble getting the VMware console to load: the web extension that allows it to work didn’t seem to want to run on anything, and we ended up installing the wrong VMware clients and not having them work, so eventually Dr. Wurst had to open up his Windows virtual machine and run the VMware client he had installed on that. Once we got that running, everything went smoothly and we were able to get everything up without any issues. We used a minimal install of CentOS, allowing us to focus on installing only the applications we need.

We set up user accounts for ourselves on the new server and the old server, and on the new server I installed some essential applications as well as the EPEL (Extra Packages for Enterprise Linux) repository. If you don’t know, most distros of GNU+Linux use a package manager to handle installing applications, drawing from repositories as the sources they get the applications from; CentOS relies on the YUM package manager, and while its default repositories are good for most cases, there are a few odd applications you’ll want to get from EPEL.

Here’s what we installed and why:

  • vim – text editor
  • emacs – another text editor (can’t have too many of these)
  • tmux – terminal multiplexer, similar to GNU Screen. It allows you to run multiple terminal windows in the same session, with a few extra features.
  • mosh – Mobile Shell (http://mosh.mit.edu/); a replacement for SSH with some cool ideas such as client IP roaming and lag compensation.
  • httpd – Apache web server
  • MySQL – MySQL database
  • htop – displays running processes and CPU and memory statistics; I think it’s much easier to use than the standard top application
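On CentOS, the whole set above comes down to enabling EPEL and one yum command; a sketch, assuming the package names current on CentOS 6 (mosh in particular comes from EPEL):

```shell
# Enable the EPEL repository, then install the tool set listed above
yum install -y epel-release
yum install -y vim emacs tmux mosh httpd mysql-server htop
```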

This should be good enough for now; if any of our projects need any more dependencies, we can install those, but this is a pretty reasonable set-up for a web server.

My next focus will be installing MediaWiki, probably in a /wiki directory on the server. I’ve set one up before at my job with nice Wikipedia-style links; it’s a bit of work but shouldn’t be too difficult. My main worry is picking a reasonable set of plug-ins to provide all the functionality we could need (as well as documenting the process). I also want to look into using Worcester State’s LDAP server for the MediaWiki users, as MediaWiki has an LDAP plugin. If that doesn’t work or we can’t get the LDAP details, it seems like the Google Apps Authentication plug-in will serve the same purpose.
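The Wikipedia-style links are typically a rewrite rule plus two LocalSettings.php lines. A minimal sketch for an Apache set-up, following the usual MediaWiki advice of keeping the script path and the article path distinct (the /w and /wiki paths here are the conventional example, not necessarily what we’ll use):

```php
# Apache config: rewrite /wiki/Page_title to the MediaWiki entry point
#   RewriteEngine On
#   RewriteRule ^/?wiki(/.*)?$ /w/index.php [L]

# LocalSettings.php: where the code lives, and what article URLs look like
$wgScriptPath  = "/w";
$wgArticlePath = "/wiki/$1";
```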

The next step after MediaWiki will be to set up a GitLab server. GitLab is a free-software knock-off of GitHub. While it would be nice to have a GitHub server up and running, I’m pretty sure they charge a ton of money for their enterprise self-hosted solution, and GitLab provides enough functionality for what we want that GitHub isn’t really an option. I have a bit of experience with GitLab, and from what I’ve found, installing it from scratch doesn’t really work out too well, but BitNami has an easy installer as well as a VMware appliance for GitLab. I’ve set up a server using the easy installer before and it’s extremely simple, although it installs to a pretty non-typical location, so it behaves a bit differently from a natively installed application. For that reason I think the VMware appliance would be the best option for us, as, as far as I know, that’s a natively installed GitLab on a pre-configured server. But that’s something we’ll just have to find out. We want something that’s easy to maintain (or doesn’t have to be maintained much at all) and easy to upgrade if we need to.

As far as the Eucalyptus platform goes, I haven’t looked too much into it yet. For the hardware, I’m sharing 10 or 11 servers with Dillon Murphy as he works on his Hadoop cluster. We’re not sure if we’ll split the servers between us, attempt to install both platforms on each server, or set up a virtual machine host and install a VM for each platform on each server. Ideally, installing them side by side on each server would be the best option, but if they have conflicting dependencies there would be problems, so it’s something we’ll have to figure out when we get there.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

A Quick Review of the JILOA Course

First of all, I would say this course was my first experience of working in a real working environment, and it helped me form a proper picture of how everyone works together to get a big project done. It is true that in programming, we may not know what we are building until all the code is integrated and forms a product. A big project is broken down into several different parts, and each part is handled by a group of programmers. Then every group has its own task to do. Some parts of the project do not need to know how the other parts are going. Teamwork, brainstorming, and setting up tasks for each group are all important initial steps of the process. That helps the work go efficiently.

A project leader or advisor is also very important to help keep the project’s process on the right track and make it move as quickly as possible.

Besides those, a real working environment involves clients. We need to stay informed and keep them updated. Although clients do not always know what they are talking about when it comes to programming, we have to focus on how they would like the product to function so that we can both come to a happy ending of the contract.

Communication is very important to get the project done efficiently. Besides the email list, Git and GitHub were new to me, and they were great tools. They are also great tools for our teams to manage, share, and contribute to our project. I would like to learn more about them.

Since our big project was broken into smaller parts and each group had a different part to work on, I did not know much about the other teams’ work beyond my own team’s, or how they finished their tasks. There were many things that I would have liked to work on but did not, such as the database, the server, the user interface, and testing the code. However, the main goal of the course was to learn how to work together in a real working environment, to have the ability to process, analyze, cooperate, and get the job done; and I feel like I already achieved those things in this course.

For this course, I wish I had known more about Git and the language used for the project beforehand.

For the weekly posts, we were asked to post weekly, but with my lack of programming-language knowledge and being stuck on how to get problems solved, some weeks I was working on the same thing. Rather than just posting a sentence to say that, I waited until I got something done to post. That is why some of my posts are missing.


From the blog daunguyen10&#039;s Blog » CS-WSU by daunguyen10 and used with permission of the author. All other rights reserved by the author.

Final Screensaver

The final decision was to go with the previous screensaver for better compatibility with the iPad. Also, Tim wanted us to modify the introPage format a little bit by putting the navigation button back in the middle of the bottom edge. And that will be it: the final version of the screensaver.

From the blog daunguyen10&#039;s Blog » CS-WSU by daunguyen10 and used with permission of the author. All other rights reserved by the author.

Two possible solutions for slideshow?

This post won’t make much sense if you have not read my last.

By snooping around other people’s apps some more, I came across the puzzle group’s JavaScript folder. In there I found what was allowing the puzzle pieces to be moved around. It’s done by an extension to jQuery called Touch Punch, which adds support for drag on the iPad. Touch Punch has been added to the slideshow, as well as jQuery, which I made the decision to move away from earlier… Well, now I’m back.

In theory I should just be able to add the “draggable” function to the image element. So far, no luck. Maybe it conflicts with the other drag function; I hadn’t thought of this until now. The problem is when to call/allow the draggable function to be called. Each image is contained in a separate div, so when is the appropriate time? Something I’ll have to think about.
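In theory the jQuery UI route looks like this; the element ID is hypothetical, and it assumes jQuery, jQuery UI, and Touch Punch are all loaded, as they are in the puzzle group’s app:

```javascript
// Touch Punch translates iPad touch events into the mouse events that
// jQuery UI's draggable widget expects, so this alone should enable
// touch dragging once the document is ready.
$(function () {
  $('#slide-image').draggable(); // element id is a placeholder
});
```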

The other solution would still be what I talked about before: updating the drag function as it stands now in MooTools More and adding touchmove events.

And… now that I’ve brought back jQuery support, maybe I should change the current swipe ‘tech’ in the app to jQuery instead of the current iteration.

A lot to think about.

This is probably the last weekend I’ll be able to work on it. I hope I have a big breakthrough. Again, the latest version I’m working on is on the test server.

From the blog Sean » cs-wsu by shorton1 and used with permission of the author. All other rights reserved by the author.

CS401: (additional post) what I found at the last Career Fair

So for my last post (probably EVER)… I would like to share what I found at the last career fair I went to, the Boston Startup Job Fair held at the Microsoft NERD Center last month.

I think there were about 44 companies there (can’t remember the exact number), all of them startups, ranging in size from a two-person company to about 100 people. And every company was looking for either UI and web (both front- and back-end) developers or mobile app developers.

I definitely plan to go to another career fair after the summer, to see what the bigger companies and corporations look for in developers. But if you want a job at a startup, definitely start learning (if you don’t already know) either web development or mobile app development.

Now, there are pros and cons to working for a startup. Startups are small (obviously), so the employee benefits may not be that great (maybe no dental?… just kidding), they probably will not pay for your training or certifications (as startups, they might not be able to afford it), and the salary is possibly lower than at bigger companies. On the other hand, some of the benefits are listed here: http://theyec.org/14-top-benefits-of-working-for-a-startup/

But I think I personally might be interested in working for a startup (at the start of my career), because you can actually grow with the company as you keep working, and you won’t be “just” another employee like you would at a big company.

Anyway, good luck to all my graduating classmates; I hope you get the jobs you want!! And for the ones I’ll be seeing again next semester, have a nice summer vacation!!!

From the blog ssuksawat » cs-wsu by ssuksawat and used with permission of the author. All other rights reserved by the author.
