Author Archives: Chad Wade Day, Jr.

Working as a Team but on Individual Projects

Although this is an independent study, at times I worked with Dillon Murphy. For the most part we worked on separate tasks, but there were a few we worked together on.

We did work together on all of the virtual server installs, along with Dr. Wurst. Dillon and I were pretty familiar with server installs, but since the new CS server was going to host the Wiki (which I was focusing on) and the blogs (which Dillon focused on), it made sense to plan out together what we needed. As I said in a previous post, MediaWiki relies on an Apache/PHP/MySQL setup, and luckily for Dillon, WordPress relies on that as well.

During our individual projects, we would shoot ideas off of each other if we ran into issues. For example, when moving over WordPress, Dillon wasn't too sure how to approach it, but since I was familiar with PHP/MySQL applications I suggested that all he likely needed to do was dump the database, restore it on the new server, and then move over the code. There were a few issues with configuration files pointing to old URLs, but other than that everything worked out.
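
For anyone curious, that dump-and-restore approach is only a couple of commands. Here's a rough sketch; the database name and hostname are just placeholders, and the WordPress files themselves get copied over separately:

mysqldump -u root -p wordpress > wordpress.sql    # on the old server
scp wordpress.sql newserver:                      # copy the dump to the new server
mysql -u root -p wordpress < wordpress.sql        # on the new server, after creating an empty database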

We worked together on getting the new CS server up, which worked out thanks to some guesswork and brainstorming to figure out the best way to do it.

Getting the cluster set up with fresh installs of CentOS was another collaborative effort, seeing as we planned to use it for our next projects: Dillon focusing on Hadoop and me focusing on Eucalyptus. We had a few ideas on how to utilize the cluster for both platforms. One suggestion involved installing a virtual machine hypervisor on each machine and having two virtual machines on each, one for Hadoop and one for Eucalyptus. Another suggestion was splitting the machines between the two projects. The suggestion we went with, after Dillon said it looked like it would work out, was having one CentOS install on each machine and installing both platforms side by side.

Once we figured out what we wanted to do with the network and resolved the issues with it, Dillon focused on getting the servers up. We finally got everything up and running, so next up are the final solo projects.

While Dillon focused on that (and a bit before then), I focused on getting GitLab into usable condition. Dillon and Dr. Wurst helped test and made sure everything was working as it should.

It’s really helpful to work in a team, even if each member is focusing on a separate individual project. Being able to bounce ideas off of each other helps resolve things more quickly. Working on your own can be productive for its own reasons, but it never hurts to have a helping hand available.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

GitLab Part 2

I missed a few things I wanted to go over with GitLab, and I have an update on trying out CACerts.

I didn’t really go over GitLab or git itself and why it’s important.

To understand GitLab, first you have to understand git. Git is revision control software for plain text files. What this means is you edit your code, commit your changes, edit it again, and every change is tracked. Linus Torvalds, the creator of the Linux kernel, created it when he wasn’t happy with the other software that was available. Git is one of my personal favourite tools to use, and it has helped me with development a ton. I’ve definitely made poor choices while improving code, or deleted code I later wanted back, and with proper use of git undoing that is trivial. One really advantageous feature of git is that it’s decentralized: when you clone a project with git, all of its contents are self-contained on your computer, and if you want to sync with a remote server you can pull changes from the server or push your changes to it. You can even have multiple remote repositories to push and pull from, though in practice you won’t use that feature too much. Another useful feature is that you can set the remote repository to be another location on your local machine, which can be useful if you’re away from the internet and worried about accidentally ruining your working directory.
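
For anyone who hasn't used it, here's a rough sketch of the everyday cycle. The names and paths are just made-up examples, including using a plain local directory as the "remote" like I mentioned above:

git init --bare ~/backup/notes.git        # a bare repo in a local folder, acting as the "remote"
git init notes && cd notes                # a new working repository
echo "first draft" > notes.txt
git add notes.txt
git commit -m "Add first draft of notes"  # the change is now tracked locally
git remote add origin ~/backup/notes.git
git push origin master                    # sync your commits to the remote
git log --oneline                         # the full history lives right on your machine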

GitLab is probably the best free software alternative to Github. Github is a web front-end for remote git repositories, bringing a lot of additional features to git such as editing your code from the web, forking other projects and pushing changes back to the main project (useful for feature development), easy account and group management, and of course the advantage of keeping your code stored on a remote server in case of issues with your local machine. For an example of a Github project, you can check out the CS-401 project from Spring 2013 that I was a member of. GitLab offers pretty much all of Github’s main features and a familiar interface. The big advantages of GitLab over Github are that we can store the data on WSU’s own hardware and that we get private repositories, which are a paid feature on Github.

So as far as our GitLab install goes, last night I looked into using a certificate from CACert. It turns out that to apply for a CACert certificate you need to own the domain, so that idea was quickly scrapped. I don’t think we can get HTTPS working at this point; I tried a few things with the configuration, but it *seems* like the issue is the self-signed certificate, and obtaining a legitimate certificate doesn’t look possible. This isn’t a huge issue, though with SSH you have to install keys on the server, so it requires a bit more documentation.
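
For reference, that extra SSH documentation mostly boils down to each user generating a key and registering it with GitLab. A rough sketch (the hostname is a placeholder, and the exact menu wording depends on the GitLab version):

ssh-keygen -t rsa -C "you@worcester.edu"   # generate a key pair if you don't already have one
cat ~/.ssh/id_rsa.pub                      # paste this output into your GitLab profile's SSH Keys page
ssh -T git@gitlab.example.edu              # quick connection test; git is the service account on the server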

What needs to be done with the server from here? I think all we need to do is resize the partition and the virtual hard drive the server is on. I believe we initially set it up as a 16 GB install, but if students are going to use it for assignments in many of their classes, I feel like that would fill up pretty quickly.
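
I haven't done the resize yet, so this is only a rough sketch of what I expect it to involve, assuming the install uses LVM (the device and volume names are guesses). The virtual disk gets grown from the VMware side first, and the partition itself has to be extended (or a new one added to the volume group) before the rest:

fdisk -l                                    # confirm the disk now shows the new size
pvresize /dev/sda2                          # grow the LVM physical volume once the partition is bigger
lvextend -l +100%FREE /dev/vg_cs/lv_root    # give the root logical volume the new space
resize2fs /dev/vg_cs/lv_root                # grow the ext4 filesystem to match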

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

GitLab

Where do I start with GitLab?

We had to install three servers to finally get this up and running, and I’m still not completely satisfied with it. It’s great software once it’s running, there’s no doubt about that, but it sure is a pain to set up.

Before I had done any work on setting up a GitLab server for WSU’s CS department, I had a bit of experience with it. I had tried to set up GitLab at home on some distro of Linux, following the install guide, but never got it working. We had a GitLab server running at my work at one point using the BitNami installer, but there were a few issues with it so we scrapped it for something else.

The first attempt was a GitLab server install with the BitNami installer. We set up a CentOS server, and I ran the BitNami installer and got something up and running pretty easily. However, there was the most bizarre issue I had ever seen: for some strange reason pages would just show up as blank in Firefox unless you refreshed. I couldn’t find anything covering the same issue at all. I didn’t think something with an issue like that was worth keeping around, so we scrapped it and set up another CentOS server.

Here I decided to find a guide for installing GitLab on CentOS instead of relying on an installer or anything like that. In hindsight this probably wasn’t the best idea; the official install guide recommends Ubuntu, and we probably would have had a much easier time just going with Ubuntu Server. However, we had CentOS on hand, and I much prefer using CentOS for a server as it has some pretty good default configuration files for a lot of applications.

I didn’t have any issues setting GitLab up initially using this guide, but afterwards a few problems became apparent. First off, neither SSH nor HTTPS cloning worked. After a bit of hassle we got SSH working, were fine with that, and decided that things were good to go. We sent in the request to UTS and got a public IP and subdomain for the server. However, we quickly discovered that sending emails didn’t work. It turned out that it was trying to send emails from gitlab@<localip>, which got rejected. Unfortunately, I couldn’t find anywhere to change the address it was sending emails from. I changed configuration files all over the place but had no success at all in fixing that. It got to the point where I just settled on doing a complete reinstall with the subdomain in mind, which would hopefully fix all the issues. I decided that uninstalling GitLab on its own would be too much of a hassle, so I would have to make a completely new server install. After setting my mind on that, I spent a bit more time poking at the current install, totally broke it, and didn’t care enough to fix it.

So finally, the last install, the one we have up and running. Using the same guide as before, but documenting any oddities I found so we’d have a reasonable installation guide, I got it running and working to a pretty reasonable state, with a few bumps along the way.

First off, the guide recommends running a test to make sure everything’s running before you set up the webserver. It gives an error if the webserver isn’t up yet, but there was no indication that the error was caused by the webserver not being set up. I decided to just install the webserver and hope that fixed it, and thankfully it did.

Secondly, I tried Apache as the webserver, since the guide gave the option of using either nginx or Apache. nginx seems to be what GitLab was developed for, so I had used it for the previous installs, but this time I decided to go with what I was familiar with. I didn’t get anywhere with it, so I went back to nginx and got that working pretty easily.

Thirdly, the SSH issue we had before came back. This issue had two parts. First, the SELinux permissions aren’t set correctly for the SSH keys file. I’m not sure of the exact cause, but I think it’s because GitLab creates the file itself if one doesn’t exist, so the correct SELinux context never gets applied. It’s an easy fix if you can figure out that’s the issue, but there’s really no clear indication of it, so I was stumped for a while until I stopped looking for GitLab-related issues with SSH and instead looked for CentOS-related issues with SSH. It came down to a really simple command:

restorecon -R -v /home/git/.ssh

We didn’t have this issue with the other server installs, but I think in those cases I disabled SELinux altogether, and this time I was trying to get GitLab to work with it. The other part was that CentOS by default locks accounts without passwords. The GitLab docs recommend editing the /etc/shadow file manually, but I found a command that accomplishes the same thing without hand-editing important configuration files:

usermod -U git

Email was working fine too.

I still need to get HTTPS cloning working. I believe it’s an issue with the self-signed certificate, so I’m going to try out a CACert certificate and see if I can get it working from there.
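
In the meantime, if someone needs HTTPS cloning before this is sorted out, git itself can be told about (or told to ignore) a self-signed certificate on the client side. These are standard git options rather than anything specific to our setup, and the paths and hostname here are placeholders:

git config --global http.sslCAInfo /path/to/gitlab-server.crt                    # trust the server's certificate
GIT_SSL_NO_VERIFY=true git clone https://gitlab.example.edu/group/project.git    # or skip verification for one clone (not ideal)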

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

MediaWiki + Going Live with the new CS server

In my first post of the semester, we set up a CentOS server that we planned to become the new CS server. The old one was running a dated version of Red Hat, and with the success of the server that was set up for CS401, the latest CentOS seemed like the way to go.

The tasks were split up between myself and Dillon Murphy. Dillon worked on moving over the blogs and I focused on getting a new wiki up.

The old wiki had been plagued by spambots and used a different software so Dr. Wurst figured just setting up MediaWiki and moving over what he could would be a good direction to go in.

MediaWiki is a PHP/MySQL wiki application built and designed for Wikipedia. Although it was built for Wikipedia, it’s free software and as such can be freely used by anyone. I’m not sure if it’s the most popular wiki software, but it’s definitely the one I’ve seen used around the most. Aside from Wikipedia, one notable site that uses MediaWiki is Wikia, where users can easily create MediaWiki installs for arbitrary uses. It’s also the wiki software we use at my work, mainly for documenting important information such as networking info, what’s running on servers, troubleshooting tips, and that sort of content.

A PHP/MySQL application relies on PHP for the code and MySQL for the database. These applications are usually pretty trivial to install, so long as you have PHP, MySQL, and a webserver running. I’ve only ever really worked with Apache as a webserver, so we have the common Apache/PHP/MySQL set-up on this server. This is also the set-up one would use for WordPress, so it worked well for our multi-purpose server.
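
On CentOS that whole stack comes straight from the base repositories. A minimal set-up looks something like this; it's the general recipe rather than a record of the exact commands we ran:

yum install httpd php php-mysql mysql-server
service httpd start && chkconfig httpd on       # start Apache now and on boot
service mysqld start && chkconfig mysqld on     # same for MySQL
mysql_secure_installation                       # set a root password and clean up the defaults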

I’m pretty familiar with MediaWiki in a general sense; I remember installing it and playing around with it years ago because I found wikis interesting. Back then it wasn’t as easy to install as it is now, since its main focus was making Wikipedia better, but nowadays it’s really trivial to set up and get running. (It could also be that I wasn’t as used to installing web applications as I am now, haha.)
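
The install really is just "unpack it under the webroot and run the web installer." Something like the following, with the version number and paths only as examples from around that time:

cd /var/www/html
wget https://releases.wikimedia.org/mediawiki/1.21/mediawiki-1.21.1.tar.gz   # example version
tar xzf mediawiki-1.21.1.tar.gz
mv mediawiki-1.21.1 wiki
# then browse to http://<server>/wiki/ and the web installer writes LocalSettings.php for you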

So there shouldn’t be any issues setting this up, right?

Luckily there weren’t any issues getting this up and running. I had also recently set up Wikipedia-styled URLs for pages at work, so I was able to get that working with no problem either. After setting everything up, I created a user for Dr. Wurst and a user for Dillon to test that everything was working. Everything seemed to be running with no issues, so mission success!
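
The "Wikipedia style" URL set-up is mostly two small pieces of configuration. Here's a rough sketch of the usual recipe; the layout below (code in /w, articles under /wiki) is the one the MediaWiki docs suggest, so our actual paths may have differed:

# Apache: point /wiki/Whatever at MediaWiki's entry point
cat >> /etc/httpd/conf.d/wiki.conf << 'EOF'
Alias /wiki /var/www/html/w/index.php
EOF

# LocalSettings.php: have MediaWiki generate links the same way
cat >> /var/www/html/w/LocalSettings.php << 'EOF'
$wgScriptPath = "/w";
$wgArticlePath = "/wiki/$1";
EOF

service httpd restart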

The next step was to figure out how we were going to handle user authentication. Ideally, we don’t want it to be possible for spam bots to sign up at all, given what happened to the old wiki. Initially I set it up so users cannot register at all until we figured things out. We decided against using LDAP authentication since we didn’t want to bother UTS too much. Originally we were thinking of editing the code to only accept registrations from the @worcester.edu domain. That probably wouldn’t be too difficult to implement, but I found that the Google Apps Authentication plugin accomplished pretty much the same thing, restricting logins to Google Apps users from a specific domain using OpenID authentication. Users would be created automatically on login if they didn’t exist, so it was an ideal solution. Unfortunately I found that this authentication method wasn’t enabled for the worcester.edu domain, so we sent in a request to UTS to see if they could enable it.
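
The "no registration at all" stopgap is just a couple of lines appended to LocalSettings.php (the wiki's path here is a guess):

cat >> /var/www/html/wiki/LocalSettings.php << 'EOF'
# only existing accounts can log in or edit
$wgGroupPermissions['*']['createaccount'] = false;
$wgGroupPermissions['*']['edit'] = false;
EOF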

While we waited for an update on whether it was a good idea to enable and whether it had been enabled, Dr. Wurst asked about setting up subpages. Subpages are a feature that isn’t enabled by default; they let you make pages with titles like PAGE/Subpage, which are pages named “Subpage” that link back to the original page “PAGE” right at the top for easy navigation. This was pretty trivial to set up, and it’s a pretty nice feature to have.
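
Enabling subpages for regular pages is a one-line change in LocalSettings.php (again, the path is a guess):

cat >> /var/www/html/wiki/LocalSettings.php << 'EOF'
$wgNamespacesWithSubpages[NS_MAIN] = true;   # allow PAGE/Subpage titles in the main namespace
EOF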

We also moved the server from testing to production, as Dillon had finished moving over the blogs. All we really needed to do for this was take the old server off its current local IP and move the new server to that IP, and the domain and public IP would then point to it. We weren’t 100% sure it would work out this way, but it seemed to me like it would, and we could always undo the changes and send in a request to UTS to figure things out if there were problems. Luckily everything worked out and the server was live.

Earlier today I got word that the OpenID authentication feature was enabled for the worcester.edu domain, so I enabled the plugin again and everything worked as it should. This led to two issues, though. First, it wouldn’t let me edit the permissions of users with an @ in the name, as it treats editing those users as editing users from another wiki for some reason. I found a pretty easy fix for that: changing the delimiter used for that from @ to #, which allows editing the @worcester.edu users as normal. The second issue was that the old accounts couldn’t be used any more, so I just redirected their user pages to the new accounts. Not the cleanest fix; I looked into reassigning each old account’s contributions to the new one, but it seemed like a lot of work for a simple task. It makes sense that MediaWiki makes this hard, since letting people “steal” contributions on a big site like Wikipedia wouldn’t be good for anyone.
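
For reference, the setting that controls that delimiter in MediaWiki is the one below; the wiki's path is still a guess:

cat >> /var/www/html/wiki/LocalSettings.php << 'EOF'
$wgUserrightsInterwikiDelimiter = '#';   # so usernames containing @ can be edited on Special:UserRights
EOF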

As far as the Wiki goes, I’m pretty satisfied in how it turned out.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

Working With UTS

This is my first time working with a team that manages the networking and other infrastructure. At work I was able to plan out the networking with a few others, and if we ever needed something done it was usually something we could handle ourselves. You don’t really have that kind of freedom and ease when you have to send in requests for a lot of different tasks.

I’m not really complaining, it’s just a different experience. The most important takeaway is to plan out what you need in advance and send out requests as soon as possible. Some things will take more time than others, as some tasks need to be verified to ensure network security, and UTS definitely has other work to deal with besides ours.

One big holdup we had was the networking for the cluster. It took Dillon and me a bit to figure out exactly what kind of networking we could do, as well as what we wanted to do. The original cluster had one IP on WSU’s LAN, and then one machine acted as a router and handled a separate LAN just for the cluster. We figured that wasn’t really necessary and it would be easier to just give each machine an IP on WSU’s LAN instead.

Initially we decided to just use the next 10 IP addresses from the block we were given for the virtual servers. This set-up didn’t work at all, and we figured those IP addresses were on a separate VLAN or something like that. Dillon later attempted to use the IP that had been assigned to the server before, but that didn’t work either. We could acquire IP addresses from DHCP, but we wanted static IP addresses so we wouldn’t have to worry about conflicts with the DHCP pool or anything like that. So we asked UTS about getting some IP addresses, and while we waited we installed the servers using the DHCP addresses.

When UTS got back to us, they told us that the whole block around the IP address given to the cluster before should be available. We had already tested it and it didn’t work, but we tested it again just in case and it still didn’t work. So we had to troubleshoot with them, and eventually we found out that the VLAN assigned to that port was wrong; once that was fixed, everything got sorted out. It was kind of a hassle overall, but we definitely could have avoided it if we had tested IPs and figured them out earlier in the semester.
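
For the record, setting one of those static addresses on a CentOS machine is just an interface file plus a network restart. The interface name and numbers below are made up:

cat > /etc/sysconfig/network-scripts/ifcfg-eth0 << 'EOF'
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.10.10.21
NETMASK=255.255.255.0
GATEWAY=10.10.10.1
EOF
service network restart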

One thing I was working on individually was Google Apps authentication for the wiki. I’ll go into it more in a post about the wiki, but after setting it up I found that the feature wasn’t enabled for WSU accounts. So we had to see if UTS thought it was a good feature to enable; I’ll give an update once we hear back either way.

We had the IP addresses available to us for the virtual machines in advance, Dr. Wurst had asked about them earlier in the year for the server for the CS401 project so we were all set regarding those.

In one particular case we avoided UTS altogether. When we were moving the new CS server to production, we found that the old server wasn’t assigned the public IP address directly; it was assigned a local IP address. This sounded like a 1:1 NAT set-up, which is essentially where devices on your network have local IP addresses and the router maps each public IP address to a specific local one. So to move the new CS server to production, we figured that if we took the old server off its local IP address and gave that address to the new server, it would be in production with no issues. We did exactly that and everything worked out well.

Overall I’d say working with UTS was a good thing. It definitely caused a bit of hassle as it wasn’t really what I was used to, but overall a team like that is definitely a good thing to have.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

Start of the Independent Study

This semester I’m working on an independent study involving migrating the old Computer Science server (cs.worcester.edu) to a more updated platform (with a focus on installing new Wiki software and moving over articles from the old software), looking into and setting up a Git server for the CS department, and setting up a multi-node Eucalyptus cluster. For some of this I’ll be collaborating with Dillon Murphy: for the Eucalyptus cluster I’ll be sharing hardware with his Hadoop cluster, for the server migration I’ll focus on migrating the Wiki while he focuses on the blogs, and we’ll work together on the git server.

For the first step of the project, we installed a CentOS 6.4 virtual server on the VMWare platform. We had a bit of trouble getting the VMWare console to load: the web extension that allows it to work didn’t seem to want to work on anything, and we ended up installing the wrong VMWare clients and not having them work, so eventually Dr. Wurst had to open up his Windows virtual machine and run the VMWare client he had installed on that. Once we got that running, everything ran smoothly and we were able to get everything up without any issues. We used a minimal install of CentOS, allowing us to focus on installing only the applications we need.

We set up some user accounts for ourselves on the new server and the old server, and on the new server I installed some essential applications as well as the EPEL (Extra Packages for Enterprise Linux) repository. If you don’t know, most distros of GNU+Linux use a package manager to manage the installation of applications, and they use repositories as the sources they get the applications from; CentOS relies on the YUM package manager, and while the repositories it comes with by default are good for most cases, there are a few odd applications you’ll want to get from the EPEL.

Here’s what we installed and why:

  • vim – text editor
  • emacs – another text editor (can’t have too many of these)
  • tmux – terminal multiplexer, similar to GNU Screen. This allows you to run multiple terminal windows in the same session, with a few extra features.
  • mosh – Mobile Shell (http://mosh.mit.edu/); a replacement for ssh with some cool ideas such as client IP roaming and lag compensation.
  • httpd – Apache web server
  • MySQL – MySQL database
  • htop – displays running processes and CPU and memory statistics; I think it’s much easier to use than the standard top application
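
For reference, once EPEL is wired up the whole set boils down to a couple of yum commands. This is the general idea rather than a record of exactly what we ran; on CentOS 6 the epel-release package may need to come from the EPEL site itself if the base repos don't carry it:

yum install epel-release                                          # adds the EPEL repository
yum install vim-enhanced emacs tmux mosh httpd mysql-server htop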

This should be good enough for now; if any of our projects need any more dependencies, we can install those, but this is a pretty reasonable set-up for a web server.

My next focus will be the installation of MediaWiki, probably to a /wiki directory on the server. I’ve set up one before at my job with nice Wikipedia-styled links; it’s a bit of work but shouldn’t be too difficult. My main worry is getting a reasonable set of plug-ins to use with it to provide all the functionality we could need (as well as documenting the process). I also want to look into using Worcester State’s LDAP server for the MediaWiki users, as MediaWiki has an LDAP plugin. If that doesn’t work or we can’t get the LDAP details, it seems like the Google Apps Authentication plug-in will serve the exact same purpose.

The next step after MediaWiki would be to set up a GitLab server. GitLab is a free software knock-off of GitHub. While it would be nice to have a GitHub server up and running, I’m pretty sure they charge a ton of money for their self-hosted enterprise solution, so that’s not really an option, and GitLab provides enough functionality for what we want. I have a bit of experience with GitLab, and from what I’ve found installing it from scratch doesn’t really work out too well, but BitNami has an easy installer, as well as a VMWare appliance for GitLab. I’ve set up a server using the easy installer before and it’s extremely simple, although it installs to a pretty non-typical location, so it behaves a bit differently from a natively installed application. For that reason I think setting up the VMWare appliance would be the best option for us, since as far as I know that’s a natively installed GitLab on a pre-configured server. But that’s something we’ll just have to find out. We want something that’s easy to maintain (or doesn’t have to be maintained much at all) and easy to upgrade if we need to upgrade it.

As far as the Eucalyptus platform goes, I haven’t looked too much into that yet. For the hardware I’m sharing 10 or 11 servers with Dillon Murphy as he works on his Hadoop cluster. We’re not sure if we’ll just split the servers between us, attempt to install both platforms on each server, or set up a Virtual Machine host and install a VM for each platform on each server. I think that ideally installing them side-by-side on each server would be the best option but if they have conflicting dependencies there would be problems with that, so it’s something we’ll have to figure out when we get to that.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

File Management on the Server with Git

After trying out a few different options, I finally came up with an easy way to move files to the server using git. This isn’t using GitHub at all, but an actual Git repository on our server. It’s nice because you never have to use FTP or SCP, just some shell commands for the initial setup! The setup might seem a bit daunting, but it’s not that hard and you’ll have it running in no time. (It may ask for your password a few times during these commands; keep in mind that the characters won’t show up on screen as you type your password. This is for security purposes, of course.)

SSH into the server, Windows users can use puTTY, OS X and GNU/Linux users can use the ssh command like this:

ssh user@server.internet.com

(obviously replacing user with your username and server.internet.com with the server)

Then do the following commands: (replace web with what you want to call the repo if you want to call it anything else. web is perfectly ok though.)

mkdir web
cd web
git init --bare
cd hooks
touch post-receive
nano post-receive

This will open a text file to edit. You can use a text editor of your choice instead of nano, but I think the server only has nano and vim. Vim is what I’d recommend using if you know how, but if not nano is perfectly O.K. Here’s what you need to fill the file with:

#!/bin/bash
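# after each push, check the latest commit out into ~/html so the webserver serves the new files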
GIT_WORK_TREE=~/html/ git checkout -f
echo Updated Successfully

After that save the file and exit (I think it’s CTRL+X with Nano, and :wq on Vim). Then you will be all set with the server. After this you need to clone the repo. I’m not sure how to do this properly on Windows, but for the OS X and GNU/Linux users you do the following command:

git clone user@server.internet.com:web

(where user is your username and server.internet.com is our server. web is the repo)

This will clone the repo to your machine. It’s going to be empty, so it’s time to fill it with things. I’d test it first, by putting an index.html file with some stuff in it in the folder, and then adding, committing, and pushing. (If you don’t know the commands, they’re:

git add -A
git commit
git push origin master

git add -A adds every file in the folder, so watch out.) Then the server should be updated. Put the server address in an address bar and head to your user folder, and your files should now be there. They’re also tracked by git, so if you’re familiar with git you can revert changes and all that fun stuff.

Hope this is helpful to some people. I’d recommend doing this over the other options as this’ll help get you familiar with git and it’s also much simpler to do after the initial set-up.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

File Management on The Server

Now that we have a server set up and you can remotely access it, I can fully document how to get your files on there. Make sure you get your username set up by Dillon before attempting this; if you don’t have that info, you’re not going to be able to do anything. Also make sure you put your files in the html directory so that they show up on the server online under the development folder.

There’s a few options available to you.

Using SCP

scp (Secure Copy) is a means of simply copying files over SSH (Secure Shell). It’s a replacement for the old rcp (Remote Copy) command which was functionally equivalent, but unencrypted so not secure at all. It’s a safe, easy, and fast way of getting files from one place to another.

In Linux (and possibly OS X systems, I’m honestly not too sure), you can do this with a simple command line command. To copy files to the server, here’s what you’d do in a shell:

scp file-to-copy user@server:path

an example would be

scp test.txt chad@example.com:html/

which would copy a test.txt file to the html/ directory on the server, relative to the user chad’s home folder. You can also use the -P option to specify a port, but for our purposes we’re using the default SSH port so you don’t have to worry about that. Also, if you’re copying over a directory, you’ll have to use the -r option for recursive; without it, it’ll only transfer individual files (don’t worry though, it’ll warn you). Depending on your setup, it might ask for your password when you copy; if it does, enter your password (the text won’t show up on the screen as you type it), hit enter, and you should be OK.

You can also copy files from the server back, just by swapping the inputs,

scp user@server:file-to-copy location-to-copy-to

I use this command quite a bit, so if you have any trouble with it, feel free to ask.

For Windows users, you can use WinSCP. Basically you start the program up, and it should prompt you with a window called WinSCP Login. Set the protocol to SCP, set the Host name to our server, enter your username and password in the appropriate fields, and ignore the key field. Click Login, and another prompt should pop up telling you about the key on the server; just hit Yes and it should connect you from there. It should be pretty easy to understand from that point on.

You can also do SFTP from WinSCP, which leads us to the next option.

Using SFTP

SFTP is FTP (File Transfer Protocol) over SSH. It offers a few more features than SCP, such as changing file permissions, which shouldn’t matter here, but some people might be more comfortable with FTP. WinSCP works fine for this, but FileZilla is cross-platform, so that’s what I’d recommend using. Pretty much any modern FTP client should support the SFTP protocol though (you could even do it from the command line, although I’m not a fan of doing that).
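
For the command-line curious, the sftp client that ships with OpenSSH works fine too. A quick session looks like this, with the same username and server placeholders as before:

sftp user@server.internet.com
# once connected:
#   put index.html html/      upload a file into the html directory
#   get html/index.html .     download it back
#   exit                      close the session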

Open up FileZilla, and head to File -> Site Manager. Click on New Site and call it whatever you want. In the General tab, fill in our server for the host and 22 for the port. For Protocol, make sure to pick SFTP (FTP is the default option). Change Logon Type to Normal, enter your username and password in those fields, and hit Connect; it should put you on the server (if there’s a prompt about a key, just hit Yes or OK).

Using Git

Ideally we should be able to set up a git repo on the server that we can push local projects to. I’m going to look into this for another blog post.

Hope this helps, and if you have any questions, be sure to drop me an email at cdayjr@worcester.edu or send me a message in the comments.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

Cover Flow Interface – What’s Out There Now

For those of you who have used an iPod, I’m sure you’re familiar with the “Cover Flow” interface for browsing your albums (I’ve also heard it referred to as a carousel interface). If you haven’t heard of it, you can check out one of the examples here to see what it is. One idea we had for the project’s interface was to use something like this, so I was tasked with looking into it. So here are some HTML5 options for a Cover Flow interface.

Content Flow
http://www.jacksasylum.eu/ContentFlow/index.php
This looks like the best option. It’s compatible with pretty much everything out there, and it works pretty well in the demonstrations. However, one thing I don’t like about it is that it lacks the angled images the iPod’s Cover Flow uses; it just makes the images that aren’t in focus smaller.

zFlow
http://code.google.com/p/css-vfx/wiki/AboutZflow
zFlow looks like it’d be a bit more work to implement but I think it looks much nicer.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.

Use your worcester.edu account for Instant Messaging

Because we use Google Apps at our University, we can take advantage of this little-known feature. Basically, every Gmail account also gives you an XMPP account; XMPP is the IM protocol that Google uses. There are a lot of cool things you can do with it, such as XMPP transports to link it up with other IM accounts, but I won’t get into that. The main thing is, this could be useful for some people, especially in regards to keeping up with the project, so I’ll explain how to set it up.

When you log into your Worcester State Gmail account, the chat feature should already show up in the lower left corner. From there you can configure who shows up in your contact list, whether you appear online, and other things. You can click on people’s names and chat with them in the browser as well.

I don’t really like browser-based Instant Messaging, so I found out you can use this through Pidgin. You can do this with other XMPP clients but I’ll just explain Pidgin here as it’s cross-platform and easy enough to use.

Install Pidgin if you haven’t already. If you’ve just installed it, it should take you to the account management window automatically, but if you’ve been using it for a while already, go to the Accounts menu and click Manage Accounts. Then in the Accounts window, click Add. You should fill in the Basic tab something like this:

  • Protocol: XMPP
  • Username: (your worcester state username; for example, mine is cdayjr)
  • Domain: worcester.edu
  • Resource: (put whatever you want here; usually this is the name of your machine, as XMPP allows people to IM you on specific machines if you’re logged into multiple ones)
  • Password: (your Worcester State password. For example, mine’s hunter2)
  • Remember Password: (check this if you want)
  • Local Alias: (Set this to whatever you want, usually your name)
  • other stuff doesn’t matter
Then head to the Advanced tab, and fill it in like this:
  • Connection Security: Require encryption
  • Connect port: 5222
  • Connect server: talk.google.com
  • everything else should be ok here, so click add
You should be all set. If you need any help, I’m sure I can help; just get in contact with me. While I was writing this I went through the steps myself and made sure everything worked, so if you follow these instructions you should be OK.

From the blog Five mvs of Doom by Chad Wade Day, Jr. and used with permission of the author. All other rights reserved by the author.