Category Archives: tech

Running Bash Commands in Node.js

Disclaimer:

This is not explicitly a “guide”. There are far better resources available on this topic; this is simply a discussion of the child_process module.

Anecdotal Background

I’ve been in the process of creating programs to automate running my Minecraft server for my friends and me. In that process, I’ve created a functional system using a Node server, a C++ server, and a lot of bash files. While it works, I can’t be sure there aren’t bugs without proper testing. Trust me, nothing about this project has had “doing it properly” in mind.

When learning front-end JavaScript, I recall hearing that it can’t modify files for security reasons. That’s why I was using a C++ server to handle file interactions. Little did I realize that Node can easily interact with files and the system itself. Once I have the time, I’m going to recreate my setup in a single Node server. The final “total” setup will involve a Heroku Node server, a local Node server, and a Raspberry Pi running both a Node server (for Wake-on-LAN) and an NGINX server acting as a security proxy for the local Node servers.

Log

As a bit of a prerequisite, I’ve been using a basic module as a simple improvement on top of console.log. I create a log/index.js file (which could simply be log.js, but I prefer my app.js to be the only JavaScript file in the parent directory; the downside of this approach is that you end up with many index.js files, which can be hard to edit at the same time).

Now, depending on what I need for my node project I might change up the actual function. Here’s one example:

module.exports = (label, message = "", middle = "") => {
    console.log(label + " -- " + new Date().toLocaleString());
    if(middle) console.log(middle);
    if(message) console.log(message);
    console.log();
}

Honestly, all my log function does is print out a message with the current date and time. I’m sure I could make it significantly fancier, but this has proved useful when debugging a program that takes minutes to complete. To use it, I do:

// This line is for both log/index.js and log.js
const log = require("./log"); 

log("Something");

Maybe that’ll be useful to someone. If not, it provides context for what follows…

Exec

I’ve created this as a basic test to see what it’s like to run a Minecraft server from Node. Similar to log, I created an exec/index.js. First, I have:

const { execSync } = require("child_process");
const log = require("../log");

This uses the log module I referenced before, as well as execSync from Node’s built-in child_process module. It’s a synchronous version of exec, which is ideal for my purposes. Next, I created two basic functions:

module.exports.exec = (command) => {
    return execSync(command, { shell: "/bin/bash" }).toString().trim();
}

module.exports.execLog = (command) => {
    const output = this.exec(command);
    log("Exec", output, `$ ${command}`);
    return output;
}

I created a shorthand version of execSync, which is very useful by itself. Then I created a variant that also writes a log. From here, I found it tedious to enter multiple commands at a time, and very hard to use commands like cd, because every time execSync is run it starts back in the original directory. So you would have to do something along the lines of cd directory; command or cd directory && command, both of which become enormous commands when you have to do a handful of things in a directory. So, I created scripts:

function scriptToCommand(script, pre = "") {
    let command = "";

    script.forEach((line) => {
        if(pre) command += pre;
        command += line + "\n";
    });

    return command.trimEnd();
}

I created them as arrays of strings. This way, I can create scripts that look like this:

[
    "cd minecraft",
    "java -jar server.jar"
]

This seemed like a good compromise: the scripts look almost syntactically identical to an actual bash file, while still letting me handle each line individually (which I wanted so that when I log a script, each line begins with $ followed by the command). Then, I just have:

module.exports.execScript = (script) => {
    return this.exec(scriptToCommand(script));
}

module.exports.execScriptLog = (script) => {
    const output = this.execScript(script);
    log("Exec", output, scriptToCommand(script, "$ "));
    return output;
}
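
As an aside, going back to the cd problem: execSync also accepts an options object with a cwd property, which lets a single command run from inside another directory without any cd chaining. Here’s a small sketch of my own, assuming a minecraft directory exists next to the script:

const { execSync } = require("child_process");

// Run "ls" with the child process's working directory set to ./minecraft,
// so no "cd minecraft && ..." chaining is needed.
const files = execSync("ls", { cwd: "minecraft", shell: "/bin/bash" })
    .toString()
    .trim();

console.log(files);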

Key Note:

When using the module.exports.foo notation to add a function to a Node module, you don’t need a separate local variable just to call that function from elsewhere in the same module (or to type module.exports every time). At the top level of a CommonJS module, the this keyword refers to module.exports, and arrow functions defined there capture that same this.
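
Here’s a tiny sketch of what that means (my own illustration; the greet functions are hypothetical):

// At the top level of a CommonJS module, `this` is the exports object:
console.log(this === module.exports); // prints true

module.exports.greet = () => "hello";

// Arrow functions defined here capture the module-level `this`, so
// this.greet refers to the exported function above.
module.exports.greetLoudly = () => this.greet().toUpperCase();

console.log(module.exports.greetLoudly()); // HELLO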

Conclusion

Overall, running bash, other shells, or terminal commands from Node isn’t that hard of a task. One thing I’m discovering about Node is that almost any time I want to do something, spending a little time on a custom module lets me do it more efficiently. Even my basic log module could be made far more sophisticated and save a lot of keystrokes. That’s a key idea in coding in general.

Oh, and for anyone wondering: I can create a minecraft folder and place the server.jar file in it. Then, all I have to do is:

const { execScriptLog } = require("./exec");

execScriptLog([
    "cd minecraft",
    "java -jar server.jar"
]);

And, of course, set up the server files themselves after they generate.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

Don’t be Overwhelmed by Refactoring

As I’ve mentioned previously, I taught myself how to code a few years ago. I’ve learned a lot more since then, but I’m always learning. The other day, I had an assignment for another course that involved going back and refactoring old code. Since it was so bad, I’d like to discuss it.

Concept

The code itself is a terminal-based Java calculator. I imagine it was really tricky for me at the time, considering the way I implemented it. I didn’t know about the static keyword, nor how to use methods. It serves as a great example of what not to do:

The Original Code

Now, the code itself is so horrendously bad that I have to get creative to even display it. So here are a few entertaining lines. Keep in mind that the entire program is within the main method:

int load;
double num1, num2, ans;
boolean on, autoclose, autocontinue, loading, numchecking, divide, multiply, add, subtract, other, help;
String in, in2, in3, in4, in5, operation, fa, status1, status2, status3;
char op;

Now, a sane person might ask, “Why are there so many variables, and why are so many of them named so badly?” The reason is that I didn’t use methods; I used variables to separate control flow. I also find it funny that I had a String for every user input rather than just reusing one, and that I declared all of the variables at the top for no reason. Here’s an example of that horrible control flow in action:

in = input.next();

if (in.equals("/")) {
	divide = true;
	numchecking = true;
	operation = "divide";
	fa = "by";
	op = '/';
}

else if (in.equals("*")) {
	multiply = true;
	numchecking = true;
	operation = "multiply";
	fa = "by";
	op = '*';
}

I would ask for input, set these variables, and then later on:

if (numchecking) {
	Thread.sleep(1000);
	System.out.println("Enter your first number.");
	num1 = input.nextDouble();
	System.out.println("Enter another number to " + operation + "  " + fa + " " + num1);
	num2 = input.nextDouble();

	if (divide) {
		ans = num1 / num2;
	}
	if (multiply) {
		ans = num1 * num2;
	}
	if (add) {
		ans = num1 + num2;
	}
	if (subtract) {
		ans = num1 - num2;
	}

	Thread.sleep(3000);
	System.out.println("Calculating...");
	Thread.sleep(1000);
	System.out.println(num1 + " " + op + " " + num2 + " = " + ans);

I have no idea why I used if-else previously but not here; I suppose I just didn’t know how to use a switch statement yet. Anyway, I used Thread.sleep() to add artificial delay to the program for some reason. The code I’ve shown here is honestly pretty tame. Notice the line counts. I recommend taking a look at the original source code so you can really appreciate how horrible it is. Unfortunately, I had to convert the .java file to a .docx file to upload it to WordPress:

The Refactoring Process

I find myself falling into the same hole: when I realize I can do something in a better way, but it’s large and intimidating, I prefer to start from the beginning rather than modify the current product. Sometimes that is extremely useful and you just need a solid clean start. More often, though, it’s overkill and wastes time. I tend to do the same thing even in video games.

As an example, one of my all time favorite games is Factorio which is an indie game about building factories and trying to automate everything. The goal of the game is to get the game to play itself. Anyway, I have over 500 hours in this game and I haven’t actually reached the end goal of launching a rocket. It’s not because I don’t know how to do it or I die too quickly. It’s because I’m never satisfied with my factory layout or my world generation settings. When it comes to world generation, I do actually have to start over. When it comes to factory layout, I could take the time to manually replace the entire layout and keep my current research. Despite that, I almost always start over.

The saving grace with code is that, when its scale is manageable, it can be incredibly fun and relaxing to refactor. Sometimes it’s tough to get started, but once you do, it’s a really fun time. I honestly had less trouble with the refactoring than I did with the assignment itself. The assignment wanted me to make a change based on a guide: make the change, write it down, and continue. Instead, I fell into refactoring and made change after change extremely quickly. With code this bad, it was really easy to aim at one thing and make 40 changes that all fit into unique categories. So I prioritized refactoring the code well over documenting it well.

Virtually all of those variables I had before are gone. Here is the refactored beginning of the code:

private static final String[] OPERATORS = {"+", "-", "*", "/"};
private static Scanner scanner;
private static boolean autoContinue;
private static boolean autoClose;
private static boolean continueOnce;

public static void main(String[] args) {
    boolean programIsOn = true;

You’ll notice I made use of static variables. I only have one variable in the main method, and it controls the program’s main loop. Other variables are now in methods or simply don’t exist. I even created an array of operators to allow for easier expansion of functionality later on, despite the fact that I’ll almost certainly never come back to this project. I also have three booleans on three separate lines. This is my personal preference, but with only three, I would understand simply writing private static boolean autoContinue, autoClose, continueOnce; instead. I just tend to lean toward keeping things on separate lines, though now that I’ve written it out, I sort of do prefer the single line, even if such a wide line would break the column aesthetic.

Before, I showed part of how I took in user input and managed arithmetic operations. I set a bunch of variables and handled it later. Here’s how I manage it now:

private static void handleOperations() {
    String operation = scanner.next();
    System.out.println();

    if(isArithmeticOperator(operation)) {
        arithmeticOperation(operation);
        return;
    }

    switch(operation) {
        case "help":
            help();
            continueOnce = true;
            break;
        case "exit":
            autoClose = true;
            break;
        default:
            System.out.println("Sorry, I don't understand that operation. Try again.");
            continueOnce = true;
    }
}

I created a method to handle it that is called in the main loop. It has good variable names, uses a switch statement, and calls methods with descriptive names. It could be better, but it is leagues better than it was before. Originally in the refactoring process, I had:

private static boolean doUserRequestedMethod() {
    switch(scanner.next()) {
        case "/":
            divide();
            break;
        case "*":
            multiply();
            break;
        case "+":
            add();
            break;
        case "-":
            subtract();
            break;
        case "help":
            help();
            return true;
        case "exit":
            autoClose = true;
            break;
        default:
            System.out.println("Sorry, I don't understand. Try again.");
            return true;
    }

    return false;
}

It returned a boolean value to allow the loop to continue if it needed to, such as when the user entered an unknown command. I replaced that return value with global (static) variables. I renamed the method for clarity, and I created a single arithmeticOperation() method to avoid code repetition. However, that method itself needed a switch statement, so rather than check the operation twice, I split the arithmetic operations off from the original switch statement in a way that lets me add more arithmetic operations in the future. I’m pretty happy with that solution.

Lastly, since I showed how the numbers are calculated originally, I should show that in the refactored version. First I have to check if the input was for an arithmetic operation:

private static boolean isArithmeticOperator(String potentialOperator) {
    for(String operator : OPERATORS) {
        if(potentialOperator.equals(operator))
            return true;
    }

    return false;
}

This was part of why I created the OPERATORS array: it lets me avoid a long boolean expression chained together with &&. Then, for the actual calculation:

private static void arithmeticOperation(String operator) {
    System.out.println("Enter a number: ");
    double leftOperand = scanner.nextDouble();

    String partialEquation = leftOperand + " " + operator + " ";
    System.out.print(partialEquation);

    double rightOperand = scanner.nextDouble();
    double result = leftOperand;

    switch(operator) {
        case "/":
            result /= rightOperand;
            break;
        case "*":
            result *= rightOperand;
            break;
        case "-":
            result -= rightOperand;
            break;
        case "+":
            result += rightOperand;
            break;
    }

    System.out.println(partialEquation + rightOperand + " = " + result);
}

Again, you’ll find decent variable names and clearer control flow. Obviously this code isn’t flawless, but that’s not the real goal of refactoring. The point of refactoring is to create an improvement. You’ll never have perfect code, but you can always improve your code. This is pretty analogous to life itself: recognize the value and functionality currently there, recognize that you will never be perfect, but always aim to be better.

Here is the current state of the refactored code. It’s honestly amazing how much better it is:

Conclusion

Refactoring is not something to be feared; it should be enjoyed. There is something very relaxing about it if you enter it with the right mentality. Focus on small things you can easily tackle that would make a big improvement and do that. As you slowly cross off changes you need to make, it’ll become more manageable and more readable. In that program above, I cut the number of lines in half after refactoring. I can actually read it and understand what it’s doing. It should be satisfying to go through and make progress towards simplicity and organization. You just have to want it.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

The Insatiable Quest for Prime Numbers

I have been coding at least since 2013. I started off self-taught, and now I’m majoring in CS while still constantly learning outside of classes. I went from language to language, but something that almost always followed me was the idea of a prime number generator.

I must’ve tried this in almost every language I’ve used. It’s a very simple idea: create a console-based program that can both check whether an integer is prime and print out a list of primes in order as quickly as possible. Usually I don’t get too far, for reasons I’ll get into, although I did mostly succeed with C++. I had an incredibly quick program (once I realized how terribly slow it was to print the results to the screen) that ran on multiple threads and could fill up a file with prime numbers. I had two main problems: the file it filled became so big that Notepad couldn’t open it, and I hit the maximum size of an unsigned long.

Looking back at that code, it was a horrible mess, and I’ve come a long way in both C++ and general code design. However, ever since then I’ve been on a quest to create a data structure from scratch that could handle enormous primes. I managed to create a class that used strings to hold values. I defined all the necessary arithmetic operations and it worked. Incredibly slowly. I should also mention that while I’m constantly coming back to this idea, my attention wanes as I run out of time between vacations.

Using strings was a horrible idea. Not only are they really slow when you try to treat them like numbers, but they are a huge waste of memory. Each character is 1 byte and in base 10 I would only be using 10 values per character. Even using base Z (where I use 0-9 and then a-z and finally A-Z), it would be a very stupid waste of space. So I started thinking about how I could store numbers more efficiently. In C++, it really isn’t that hard.

The solution I came up with involved std::size_t (size_t), an unsigned integer type that is typically the width of the machine word (64 bits on most modern systems). Since I have a decent background in number bases (see my blog post on Duodecimal and why we should all switch to it), I was inspired. My design was a linked list of size_t values. I used a linked list rather than an array because I wanted “small” numbers to not take up a lot of space, I wanted to avoid reallocation, and I wanted to avoid an upper limit on the size. In theory, this object can be as large as memory allows. If I used something like a std::vector, its size itself is stored as a size_t.

Each size_t can hold the largest unsigned integer value of its width, and each node in the linked list acts like a digit of a base. Let n be the largest size_t value and suppose you have n + 1. That number would just be 1 <-> 0 in this form. Then you keep ticking up the 1’s digit, and so on. This means the numbers are stored incredibly densely: we’re dealing with base (n + 1). In my case, size_t is 64 bits, so n = 2^64 - 1. Anyway, to store n^2 you would need 2 nodes, and to store n^k you only need 64*k bits.

This is insanely better than using strings. Since the digits are already integers, I also get to avoid conversion. My problem recently has been finding the time and motivation to work on this. I try to create unit tests so I can guarantee it’s working as expected, and I get bored. I also need to find efficient algorithms for the arithmetic operations.

Despite my inability to complete this project, it does a good job of showing the benefits of Object Oriented Programming. I can take all of this complexity and shove it into a class. Once that class is done, I can create the functions to simply calculate primes the normal way.

Checking Primality

The most basic way to check whether an integer is prime is to check whether any positive integer less than it (other than 1) divides it. From there, you might realize that you can skip all of the even numbers after 2, because if 2 doesn’t divide it, then neither will any of the evens. You can also skip all integers larger than its square root, since factors come in pairs. A few simple proofs would show these results to be true. You can keep optimizing along these lines, but I’ve also come up with a method of my own.
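
As a rough illustration (a minimal sketch of my own in JavaScript, not code from the actual project), that basic check looks something like this:

function isPrime(n) {
    // Handle small cases, then skip even numbers and stop at the square root,
    // since factors come in pairs.
    if (n < 2) return false;
    if (n % 2 === 0) return n === 2;
    for (let factor = 3; factor * factor <= n; factor += 2) {
        if (n % factor === 0) return false;
    }
    return true;
}

console.log(isPrime(97)); // true
console.log(isPrime(99)); // false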

How come after you check 2 you can skip all of the even numbers? Okay, I suppose I should do one simple proof here.

In fact, let me do this in general:
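
Roughly, the general claim and its one-line proof go like this (my paraphrase, in standard divisibility notation):

Claim: if $d \nmid N$, then $kd \nmid N$ for every positive integer $k$.

Proof: if $kd \mid N$, then $N = (kd)q$ for some integer $q$, so $N = d(kq)$ and hence $d \mid N$. Taking the contrapositive gives the claim. $\square$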

I enjoy a nice simple proof every now and then, even though I could’ve just cited the transitivity of divisibility. What this tells us is that the same logic applies to any factor: any time an integer does not divide our potential prime, we can ignore all of its multiples.

What I’m getting at is that the hypothetically most efficient method for verifying primality is to check all primes up to its square root. Since every number is either prime or a product of primes, checking a prime also checks every single multiple of that prime. The problem is that you need a list of primes to check against. Hence, my program would store primes and use them to verify whether a number is prime.

You may be thinking: if I have a list of primes, why bother with arithmetic at all? And thinking about it, having an “exhaustive” list of primes (an exhaustive infinite list…) and simply checking membership in that list could be very efficient for verifying that a number is prime. However, it would be pretty inefficient for verifying that the average number is not prime. The reason I’m not using that method is that my list of primes stays small and grows slowly; recall that I only need to store primes up to the square root. That means to check 100, I’ll only have 2, 3, 5, and 7 stored. To check 10,000, I’ll only need the primes below 100.

So as I verify primes, the list grows and adds more factors in order. You also want to check the smaller primes first. While you can’t exactly say that more integers are even than are multiples of 5, since the set is infinite, any finite range of consecutive integers has this property: multiples of 2 make up about half of the elements, multiples of 3 about one third, and so on. So starting with the smaller primes rules out the most candidates earliest.
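
Here is a minimal sketch of that idea (my own JavaScript illustration, not the C++ class described above):

// Keep a growing, ordered list of known primes and test each candidate
// against only the stored primes up to its square root.
const knownPrimes = [2];

function isPrime(n) {
    if (n < 2) return false;
    // Extend the stored list until it covers sqrt(n), testing integers in order.
    let next = knownPrimes[knownPrimes.length - 1] + 1;
    while (knownPrimes[knownPrimes.length - 1] ** 2 < n) {
        if (knownPrimes.every((p) => next % p !== 0)) knownPrimes.push(next);
        next++;
    }
    // Smaller primes come first, so the densest factors are ruled out earliest.
    return knownPrimes.every((p) => p * p > n || n % p !== 0);
}

console.log(isPrime(101)); // true
console.log(isPrime(100)); // false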

How you determine divisibility is also up to you. For instance, in base 10 we can rule out any integer whose last digit is 0 or 5 (except the number 5 itself), since that rules out multiples of 5 and 10. However, when working with duodecimal, I realized that different number bases have different divisibility properties. For example, in base 12, every multiple of 4 and 8 ends in 0, 4, or 8; every multiple of 3 and 9 ends in 0, 3, 6, or 9; every multiple of 6 ends in 0 or 6; and every multiple of 12 ends in 0.
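
These rules are easy to spot-check, because the 1s digit of a number written in base b is just that number mod b. A quick sketch of my own, checking the multiples-of-4 rule in base 12:

// The last duodecimal digit of k is (k % 12); for multiples of 4 it is
// always 0, 4, or 8.
const lastDigitBase12 = (k) => (k % 12).toString(12);

for (let k = 4; k <= 48; k += 4) {
    console.log(k, "->", lastDigitBase12(k));
}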

You could potentially have an algorithm that converts the integer to a different number base and is more efficient than performing the arithmetic directly, especially considering that for these tricks you only need to convert the first few digits of the number. I think it’s a really interesting idea, but you risk circumventing the benefits of the machine code. Often, when you try to make something more efficient, it turns out to be unnecessary, or it makes performance worse because the compiler could have done a better job “fixing” your code. Either way, I’ll have to experiment with these ideas in the future. Thinking about it, you only need to convert the number to the base of the prime and then check whether the 1s digit is 0. You can then limit how many digits you actually convert based on how many digits the base-prime number will have. For larger numbers, this might have potential.

Conclusion

While this project may not be the best example, I think it’s very useful for programmers to have a “go-to” project for practicing an unfamiliar language: something simple enough that you can write it from memory, while complex enough to give you a good feel for the language. In this primes project, I have to handle the terminal, arithmetic, functions, objects, files, threads, etc. If I simplified it down and ignored my ambitions, it would serve as a very functional example for learning the syntax and libraries of a new language. Ideally, you wouldn’t even have the original code to look at; you would just program the functionality from memory, using your IDE and Google to figure out the syntax. The best way to learn a language is to use it, and if you have something familiar in mind, then you already have a great example project to work on. I find that struggling to implement functionality and spending a lot of time searching for solutions has taught me an invaluable amount about the languages I use.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

A Strange Way to Use Get Requests

Strap in because this post is going to get very anecdotal.

Backstory

I have a history of running Minecraft servers for my friends and me. A few years ago, I learned how to port-forward a locally hosted server and use it to play on. While it’s technically not the most secure thing in the world, it works really well and it’s produced a lot of fun. For whatever reason, I was thinking about it again recently.

We usually play when I’m out of school. I’ve slowly been learning more and more; I’m currently at the point where I can make intermediate datapacks for the game and use shell/bash/batch scripts to run the server. Over the summer, for instance, I created a shell script to run the server, back it up, and reload it if it crashes.

There’s still one major problem with my servers, however. I need to manually turn on one of my computers and run it. Then that computer has to stay on until I want the server off. It might waste power if no one is using it (also increasing the risk of the world becoming corrupted) and it’s just slowly going to wear out my hardware.

Recently, I remembered that I had two old laptops. Unfortunately, the newer one didn’t have a power cord, so I broke my way in and took out the RAM, hard drive, and Wi-Fi card, then swapped those components into the other laptop. Lastly, I installed Lubuntu onto it. It works surprisingly well. I then set up NoMachine so I can remote into the machine without having it in front of me. The BIOS supports Wake-on-LAN, so I took a few hours to get that working. Now I can leave that laptop plugged in next to my router and send a signal over the internet to wake it up. While the router blocks the magic packet required on ports 7 and 9, a simple port forward allows Wake-on-LAN to work from anywhere. I would definitely not recommend that for most scenarios, but until I find myself the victim of attacks on my router, I think I’ll take my chances.

Now, my goal has been to create programs that run the whole time the laptop is on and can receive requests to launch Minecraft servers. I’ll work out a script later for handling crashes and such, as well as a datapack for automatically stopping the server when no one has been online for 30 minutes or so. So I started work on a Node server.

Node.js

While I am learning about Node in my CS-343 course, I have already taken the Web Developer Bootcamp by Colt Steele on Udemy, and I highly recommend it. This is the basis of my server so far:

const express = require("express");
const app = express();

const server = app.listen(process.env.PORT, process.env.IP, function() {
     console.log("The server has started.");
});

var minecraft_server1_on = false;

app.get("/minecraft/server1/start", function(req, res) {
     res.send("Turning on the server…\n");
     minecraft_server1_on = true;
});

app.get("/minecraft/server1/ping", function(req, res) {
     res.send(minecraft_server1_on);
});

app.get("/minecraft/server1/declare_off", function(req, res) {
     res.send("The server is off.\n");
     minecraft_server1_on = false;
});

app.get("/exit", function(req, res) {
     res.send("Exiting…\n");
     server.close();
});

app.get("*", function(req, res) {
     res.send("The server is running.\n");
});

Now, I’m certain that I’m making mistakes and doing things in a strange way. It definitely is not in a state I would use for anything other than a private server, and even then I’m still working on it.

Here are the important parts. It’s an Express app with a few GET routes. I’m running this server from a bash script using nohup so that it can simply run in the background while the PC is on. However, I want to be able to stop the server from another bash script without using process IDs, so I don’t risk killing the wrong thing. While I was considering my options, I thought of something really interesting: I can use the curl command to perform a basic GET request from a bash script. So I created a route, /exit, that shuts the server down when it’s requested.

It’s incredibly simple and there are definitely ways around it, but I had never even thought of using GET requests this way. There is information in the mere act of requesting a web page, beyond the normal stuff: just the fact that a request occurred can be useful. This means I can check whether the server is running by pinging most routes. I can also both send and receive boolean data without any real complexity. Once again, I’m sure this violates some design principle, but for something like this, which is meant to be mostly casual and private, why should I need to set up proper POST routes and databases when I can use this?

My method of turning on a Minecraft server is now this: I store its state in a variable in the Node server. This is fine because when the Node server stops, all of the other servers should have stopped too, so I don’t need a database. Then, by pinging certain routes, I can turn a Minecraft server on, check whether it’s on, and declare that I have turned it off somewhere else. The Node server can maintain the state of the Minecraft servers (I can potentially run multiple on different ports) as well as handle tracking from inside and outside the network.
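
As a rough illustration (a hypothetical sketch of mine, not part of the actual setup), a separate Node process could poll that ping route with the built-in http module; the port number here is just a placeholder:

const http = require("http");

// Request the status route and report whether the Minecraft server
// is currently marked as running.
function checkServer(port) {
    http.get(`http://localhost:${port}/minecraft/server1/ping`, (res) => {
        let body = "";
        res.on("data", (chunk) => body += chunk);
        res.on("end", () => {
            console.log("Minecraft server running:", body.trim() === "true");
        });
    }).on("error", (err) => console.log("Node server unreachable:", err.message));
}

checkServer(3000);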

Keep in mind again, I’m looking for the easiest way I know to make this work right now, not the best way to do it. From here, I had no idea how to make the Node server actually start a Minecraft server, but I do know how to run one from a bash command. So I created a C++ program that also runs all the time and can periodically check the status of the Node server. For instance, if I send a request to the Node server to turn on a Minecraft server, the C++ program can detect that change by running system("curl http://localhost:PORT/minecraft/server1/ping");. Once again, I’m using a UNIX command in C++ rather than a wrapper library for curl because it was the easier solution for me. The Node server can then return true or false. In fact, the C++ program won’t run the command directly; it will run a bash script that runs the command and stores the output in a file, which the C++ program can then read to get the result.

I’m currently still in the process of making this work. After this, I’ll make another Node server hosted on Heroku with a nice front end that allows myself and other people to request the laptop to wake on LAN and then interact directly with those local Node servers. I may even make a Discord bot so people can simply send a chat message to request that the server turn on.

Conclusion

Once again, I do not recommend anyone actually does this the way I have. However, the whole point of coding is to make a computer do what you want it to. If you can hide the get requests behind authorization (which I will probably do) as well as fix any other issues, this could be useful. It’s not even specifically about using get requests in this way. Abstract out and realize that it’s possible to do something in a way you never thought of, and it’s possible to use something in a way you never have. Consider what you know and explore what’s possible. Figure out whether or not it’s a good practice based on what problems you run into. I think that’s one of the best ways to learn and if you can find a functional example to fixate on, the way I have, you can find yourself learning new things incredibly quickly.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

Don’t Spend So Much Time Coding Cleanly!

There comes a point in every developer’s career when they transition from “make it work” to “oh my God, I need to make this look decent.” Initially, our goal is to create a functional Hello World program. As we develop, however, we slowly begin to learn proper code styles and naming conventions. After reading enough Stack Overflow threads, we become self-aware of how our code looks and begin to focus on code aesthetics. While there’s merit in that, we often focus on the wrong things.

This video does a great job of explaining just that: there isn’t really any such thing as clean code. Now, that isn’t to say that there aren’t wrong ways to code. I’m sure we can all name plenty. That being said, the main point is that we shouldn’t spend our time trying to code perfectly the first time.

The best way to code is to do so as well as we can without spending too much time overly focused on getting everything right the first time. If you know you have the opportunity to do something right or cut corners, do it right. However, if you’ve spent 10 minutes trying to name a variable, give it some placeholder name and worry about it later. The first priority is functionality. Later on, you can review and refactor your code.

The main goal of refactoring is readability. In a compiled language especially, there is only so much efficiency the programmer can add by hand; compilers do a remarkable job of optimizing code on their own, so readability should be the priority. Make sure that other developers, as well as your future self, can read and understand your code. Add comments where necessary, but aim for self-commenting code; if your code is its own comment, that saves space.

In conclusion, focus on simple and readable code. Don’t waste time on something that can simply be refactored later, within reason. That said, try to do things right the first time when it’s obvious how.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

Adapter Design Pattern

I’ve always found Derek Banas’ channel really useful. In late middle school, I started watching his tutorials to learn how to code, so it’s really interesting to go back and watch his videos, to say the least.

I feel like the adapter design pattern is something that shouldn’t really need to exist. Don’t get me wrong, it’s very useful, but like real adapters, the underlying problem shouldn’t really exist in the first place. That’s especially true with code; it seems as though adapters become required when interfaces aren’t abstracted well enough. In his video, for example, he has an EnemyAttacker interface that represents some kind of enemy, but the methods are very specific and presume certain characteristics about the attacker. We then need an adapter to get around that specificity. It seems to me that writing the interface more generally to begin with would be ideal. But given that it’s a bad idea to modify old working code, an adapter is a great solution.

In principle, an adapter does exactly what you expect it to: it adapts code. It takes one “interface” and connects it to another “interface” (that’s the colloquial sense of interface, as opposed to a language-specific interface). We use adapters every day. The best example is a phone charger: it converts (or adapts) 120V AC power to a low DC voltage, and it also adapts the physical plug to USB Type-C. All of the functionality is hidden inside the charger, and from the user’s perspective it’s literally plug and play.

An adapter in code acts the same way. I honestly like the example he gave, so if you want to see actual code, check out his video. Conceptually, one description of an adapter is the following: we have two interfaces, A and B, that differ in many ways but are conceptually similar. We can create an adapter so we can use any A as a B, or vice versa. That’s okay because of their similarity (and perhaps they need not even be similar!). Really, it’s an incredibly basic pattern, so the specifics aren’t that important.
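
To make that concrete, here is a tiny hypothetical sketch in JavaScript (my own example, not the one from the video):

// Existing class with an interface we can't (or shouldn't) change.
class LegacyLogger {
    writeLine(text) {
        console.log(`[legacy] ${text}`);
    }
}

// The interface the rest of the code expects is log(level, message).
// The adapter wraps a LegacyLogger and translates between the two.
class LoggerAdapter {
    constructor(legacyLogger) {
        this.legacyLogger = legacyLogger;
    }

    log(level, message) {
        this.legacyLogger.writeLine(`${level.toUpperCase()}: ${message}`);
    }
}

const logger = new LoggerAdapter(new LegacyLogger());
logger.log("info", "One interface, used through another.");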

Fundamentally, an adapter is just code that allows other code to work together. Again, I think the best way to conceptualize it is through the imagery of real-life adapters. Ideally, though, it shouldn’t be necessary in scenarios like the one from the video; abstraction, within reason of course, should be prioritized ahead of time.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

Introductory Post for CS-343

Welcome! First, I would like to point out that all posts for this course will be placed in roughly the same categories, and they will all be tagged with CS-343. This should make them easy to find.

These posts will center around my searching for online materials that other people have posted relating to the core topics of CS-343. I will then share them, interpret them, etc. It will be a collection of information primarily relevant to the following topics (taken from the course syllabus):

  • Design Principles
    • Object Oriented Programming
    • SOLID
    • DRY
    • YAGNI
    • GRASP
    • “Encapsulate what varies.”
    • “Program to an interface, not an implementation.”
    • “Favor composition over inheritance.”
    • “Strive for loosely coupled designs between objects that interact”
    • Principle of Least Knowledge
    • Inversion of Control
  • Design Patterns
    • Creational
    • Structural
    • Behavioral
    • Concurrency
  • Refactoring
  • Smells
    • Code Smells
    • Design Smells
  • Software Architectures
    • Architectural Patterns
    • Architectural Styles
  • REST API Design
  • Software Frameworks
  • Documentation
  • Modeling
    • Unified Modeling Language
    • C4 Model
  • Anti-Patterns
  • Implementation of Web Systems
    • Front End
    • Back End
    • Data Persistence Layer

Hopefully, these posts provide you with many resources to help you learn these topics for the first time or to help you recall them after a long time has passed! I wish you the best of luck in whatever you’re hoping to achieve!

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

Find Mentors || S.S. 10

In this final installment of my individual apprenticeship patterns, I think an important one to write about is Find Mentors. To summarize its main point: it encourages people to observe their role and their surroundings to see where they can find the most value from learning and how to use their resources. It encourages you to look at things from one level back instead of blindly jumping into something right away.

Personally, I have been in a role where I had to figure out a lot of things that could have just been taught to me. I quickly learned that I could ask other junior developers how they managed to learn things on their own, and it helped me a lot. If other junior developers were not available, then I would work my way up to people who had the most recent onboarding experience and hope that they could recall the process I was currently going through. For the most part, that worked out well!

Thanks to this pattern, I thought it was useful to remember that even though our mentors know a lot more than us, they still do not know everything. They are continuing to learn in their own careers just as much as we are.

I thought I should update this blog to throw in a little hidden announcement, if anyone actually reads these: I will be learning at a company with about 100 peers going through the same thing. This makes me feel a lot more confident, knowing that I will have a designated support system around me and mentors nearby.

Overall, I agreed with the pattern, because I can testify from personal experience how useful it was to be able to use my resources, including being able to ask mentors questions or find my own. A common question I had for my interviewers was, “Will I have a mentor or support system along the way throughout my career progression?” Personally, it is important for me to have a designated place to go for support, because it takes away one more worry about having to ask somebody a question.

It is now time to conclude my individual apprenticeship pattern series! I hope you have at least learned one thing from it because I have learned so many things.

From the blog CS@Worcester by samanthatran and used with permission of the author. All other rights reserved by the author.

WSU x AMPATH || Sprint 5 Retrospective

Over the past two weeks, my team continued to discuss what we are working on as usual. We have come to the conclusion that we will add our Search Bar component once there are updates and more of a base to work off of; we concluded this after realizing the process would be much more efficient that way. The parameters and details of the search bar would be harder to figure out without a base to build on anyway.

Some advice for others who may be working on the same thing would be to try to collaborate or discuss the potential order of work between groups if one thing may depend on another. That would make things much simpler from the start and avoid clashes or time wasted on extra work that could have been done by one group or team.

In the meantime, I did a little more research on the AMPATH system out of curiosity since we are going to be building onto their work. I found out that there are 500+ care sites in Kenya! It is interesting to think about the potential impact our work may make on how AMPATH carries out their process. Their initiative reminds me of what Enactus at Worcester State strives for when they work on projects to help people or organizations in the community “sustain their own success, connect them with universal health insurance, train next generation medical professionals, and research new breakthroughs and best practices.” Being able to help a healthcare organization is pretty meaningful, especially as a project through my capstone.

One way our 348 course (Software Process Management) ties into our 448 course (Capstone) is that we can now use Travis CI and Heroku. It was interesting to experience these in class, help our peers use them, and now be able to use them in our capstone. I think the practice was valuable because my peers and I were more comfortable following steps that were written out and explained to us instead of just “going for it.” I have also noticed that our 348 course made us pay more attention to how we interact with others, which is very useful for the future, when we will be working in teams of developers to create or update new technologies. One more thing I found useful was watching Travis CI build, and the race against time when classmates pushed code at the same time; it pushed me to be a little faster while not being sloppy about what I was putting into my code.

Overall, we discussed what we will do in these coming weeks as the semester comes to a close. The project we are planning on presenting will feature a search bar which we plan to implement by then. I am excited to see what we end up with in terms of helping AMPATH and their healthcare system!


From the blog CS@Worcester by samanthatran and used with permission of the author. All other rights reserved by the author.

Draw Your Own Map || S.S. 9

For my second-to-last individual apprenticeship pattern, I have decided to go with something a little more relevant to my current situation: starting my career post-graduation.

The Draw Your Own Map pattern caught my attention right away with “we might come across situations or colleagues or people in the society who will try to prove that programming will become an unsustainable activity as time passes by.” Throughout my job search, I asked questions and requested advice from all different kinds of people across different fields (and especially within computer science) on how they knew which job they wanted to start with when given multiple opportunities.

In the end, I must choose what I think is best for me in terms of what I’m looking for. I’ve finally come up with a list, and it includes:

  • Having solid mentorship
  • Proper training (no room for imposter syndrome)
  • A company that tries to stay on top of new technology
  • Work-life balance that allows me to continue doing all the things I love to do outside of work and travel often

The Draw Your Own Map pattern is very encouraging, reminding us that we have options elsewhere if we feel that our current company is hindering our learning and personal growth. I found this pattern interesting because part of my decision-making process was “what if I am ____ amount of time into my first career and realize that I do not like what I am doing?” How would I move out of that role to figure out what I might like better in terms of my day-to-day tasks?

The activity of listing three jobs that I could do following my next one was really helpful for visualizing future career possibilities. I know that we can always learn on the job and at new jobs, but it is also important to build up skills that can be transferred in the first place.

The pattern has helped me feel more confident in my decision to start out in software engineering. I will build up my skills starting here and then move onward from there!

From the blog CS@Worcester by samanthatran and used with permission of the author. All other rights reserved by the author.