Author Archives: James Young

When It’s Easier to Just Do Everything [More] Manually

Sometimes doing things the hard way is a lot easier. The more tools you use, and the more complicated those tools are, the more complexity you have to deal with. It may be nice to call a few simple methods and have a framework do everything for you behind the scenes, but you’ll have to learn how the framework works, and you may realize down the line that it can’t do everything you want it to do. There may even be incompatibilities with other parts of your program.

This week in my independent study, I tried to figure out how I could run a machine learning model on Android. I had some success but quickly discovered some complications. Android offers TensorFlow Lite, which seems great. However, I built my model using Keras, so I needed to convert it. That was relatively straightforward, but before I could start calling the model, I realized I also needed to extract audio features on Android. That would require running Python code on the device, particularly Librosa and NumPy, which led me to yet more potential frameworks just to get everything running.
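For reference, the Keras-to-TensorFlow Lite conversion itself is the easy part. Here is a minimal sketch of what that step can look like, assuming TensorFlow 2.x and a hypothetical saved model file, digits_model.h5:

import tensorflow as tf

# Load the trained Keras model (hypothetical file name)
model = tf.keras.models.load_model('digits_model.h5')

# Convert it to the TensorFlow Lite flatbuffer format
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the converted model to disk so it can be bundled with an Android app
with open('digits_model.tflite', 'wb') as f:
    f.write(tflite_model)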

Bundling all of those libraries would lead to a bloated app, so I looked into Google Cloud services and thought about running the server-side code there. I had already set up a way to upload and download files with Google Firebase, so this seemed reasonable. But it’s a paid service, and it would be even more work to make everything functional.

I already have all the code running on my personal machine, so what if I just set up a server with a REST API to upload and download files and run the necessary Python code locally? If I could get that working, it would be trivial to call the code I’m already running.

Getting the server to upload and download files is what I did this week. I used Flask, which makes it very easy to get a basic server up and running. For the time being, data will only be transmitted over WiFi, since uncompressed audio files will be going back and forth.
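Here is a minimal sketch of what such a server might look like; the route names and upload directory are placeholders of mine, not necessarily what ended up in the project:

import os
from flask import Flask, request, send_from_directory

UPLOAD_DIR = 'uploads'  # placeholder directory for incoming audio
os.makedirs(UPLOAD_DIR, exist_ok=True)

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def upload():
    # Expect the audio file in the multipart form field 'file'
    audio = request.files['file']
    audio.save(os.path.join(UPLOAD_DIR, audio.filename))
    return 'ok', 200

@app.route('/download/<name>', methods=['GET'])
def download(name):
    # Send a previously processed file back to the app
    return send_from_directory(UPLOAD_DIR, name)

if __name__ == '__main__':
    # Listen on all interfaces so the phone can reach the server over WiFi
    app.run(host='0.0.0.0', port=5000)

A real version needs more care than this: validating file names, limiting file sizes, and the error checking and security mentioned below.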

While there was some additional work to figure out HTTP requests on Android, already knowing the basic building blocks gives me much more flexibility moving forward. But with great flexibility comes great responsibility: proper error checking will be an important part of development from here on, and security measures must be considered before the app is ever deployed to production.

The next iteration will involve running the machine learning code with a REST API call and getting back both the results of the model’s prediction and any data I will need to plot within the app.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Dig Deeper

For my final blog post on apprenticeship patterns, I wanted to discuss my favorite pattern. Software is so pervasive now that anyone can make a working product with little more than superficial knowledge of a language and a framework. This is great motivation to continue, but it may lead one to erroneously believe they are an expert. Finishing a product, even a successful one, doesn’t make you an expert programmer.

Digging deeper means going below surface knowledge of a technology and learning the nitty-gritty, bit-y details. The caveat is to not become too specialized. The book warns to keep your perspective of the project as a whole and to learn only as much detail as is necessary to help with a given task or problem.

I was originally taught to treat new classes as black boxes, and I only found it frustrating once I graduated to more complicated tools. To truly understand how something is meant to work, you have to look inside. Another example: I’ve taken a few introductory classes that used metaphors to explain concepts and/or taught from the top down, adding detail over time. Biology class was boring and difficult because I had to memorize that a blue circle will separate the green, spiked lines so that two red hexagons can copy each of them. It wasn’t until high school, which provided an understanding of the underlying chemical reactions, that biology became interesting and easy to remember.

So it is with software. I’ve been exceedingly frustrated with new tools when I tried to play without understanding. Sometimes that works. Other times, when things begin to get confusing, diving in becomes a necessity. Another caveat: you don’t know what you don’t know, and if you assume you’re doing it right, you may be wrong. Even if it works.

This is another pattern that requires balance. Learning details provides diminishing returns over time, but you should mostly understand why you need to do something a certain way, and how it works. If you can explain this in simple words, you’re probably on the right track. This applies not only to software tools, but to work processes as well.

You may not always agree with how a technology was designed. No one will tell you that the modern Internet is a perfect design, because it has been manipulated into working in a world it wasn’t designed for. Created in a world of text, it now works in a world of streaming video across billions of devices. This would never have been achieved without engineers and developers who understood the basic building blocks of the technology. Be like them.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Improving the Spoken Digit Speech Recognition Machine Learning Model

After getting a simple machine learning model to recognize spoken digits, I was able to begin the iterative process of improving it. Using only MFCCs, the model was failing more often than I’d like, reaching a maximum of 60% accuracy on validation data (my own voice, which was not used in training the model).

Below you will see plots of a sample of results from validating the model. For each digit, there are the extracted MFCC features, the actual spoken digit, the digit predicted by the model, and the certainty. There is also a plot of the certainty for every other digit for that recording.

This is just a sample of a larger validation set, and the actual results for this first model were only 45% accurate. But it shows that for all of these digits except 3 and 5, the model was 99% to 100% certain of the result. The differences in the MFCCs are subtle, but digits with stark differences in color appear more likely to be predicted correctly, whereas the 5 is clearly closer in color to the 1 it was mistaken for. Additionally, every single audio clip of 3 was mistaken for a 0 using this model.

Retraining the model with different parameters may help in this case, but we can also hypothesize about the reason for these mistakes. Perhaps the MFCC is finding patterns in vowels that make “zero” and “three” look identical. If that’s the case, features that can detect consonants might help improve results. This sounds pretty obvious anyway, so it might be a good step for the next iteration.

But first, let’s retrain the model without any changes.

Okay! This 3 was very accurately predicted. But the total accuracy of validation was only 50% (remember, this only shows a sample size of 10). Inspection of the actual results now shows that 3 is sometimes mistaken for a 2, and vice versa. This model is slightly better, but still flawed. Which makes sense, because no changes have been made to the model, and we just got lucky that it learned to be a bit better this time.

I’ve been training for 25 epochs, getting 95-97% accuracy during training and 93-97% accuracy on test data (data from the same dataset as the training data, but not used to train the model). Those results are pretty good, so maybe we can use fewer epochs and prevent some overfitting.
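As a sketch of what that adjustment might look like, the number of epochs can simply be lowered, or an early-stopping callback can halt training once the held-out loss stops improving. The model and data variables here are placeholders, since the actual architecture isn’t shown in this post:

from tensorflow.keras.callbacks import EarlyStopping

# Stop training when the held-out loss stops improving (hypothetical variables)
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

history = model.fit(
    x_train, y_train,
    epochs=15,  # fewer than the original 25
    validation_data=(x_test, y_test),
    callbacks=[early_stop])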

This certainly looks promising. With 95% accuracy during training, and 93.8% accuracy using test data, the results are still pretty good. However, the validation data with my voice is now 57.5% accurate! Only a single 3 was mistaken for a 0.

So I’m using a dataset of 4 voices to train and test, and my own voice to validate. But more data is probably better, so let’s use my voice to train the model and take a random sample to validate.
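One way that split could be done is to pool everything and hold out a random slice for validation. This is just a sketch with hypothetical array names, not necessarily how the project’s data loading is organized:

import numpy as np
from sklearn.model_selection import train_test_split

# Features and labels from the original four voices plus my own (hypothetical arrays)
all_features = np.concatenate([dataset_features, my_voice_features])
all_labels = np.concatenate([dataset_labels, my_voice_labels])

# Hold out a random 10% sample for validation; the rest is used for training
x_train, x_val, y_train, y_val = train_test_split(
    all_features, all_labels, test_size=0.1, random_state=0)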

The plot is looking good! Each of these was very accurately predicted. Accuracy on the training and test data was 97%, and the validation data was 100% accurate. Of course, now that the validation data contains only voices the model was trained on, it’s more likely to be correct. Furthermore, the sample is small. So let’s see what happens if we use a brand new voice to validate. I had my roommate record himself saying each digit and used only his voice as validation data.

In general, the model is much more certain of its guesses. The final validation result was 80% accuracy, so not perfect, but a major improvement. This much of an improvement was achieved just by adding more data and making small modifications to the model.

The importance of collecting data in order to improve a model is apparent. Even at 80% accuracy, there is still some predictive power. If the model proves useful, further data can be collected as it is used, then cleaned and fed back in to train better models.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

A Machine Learning Model That Recognizes Spoken Digits (Introduction)

This week, I managed to prove (to myself, at least) the power of MFCCs in speech recognition. I was quite skeptical that I could get anything to actually recognize speech, despite many sources saying how vital they are to DSP and speech recognition.

A tutorial on TensorFlow I found a couple of months ago sparked the idea: if 2-dimensional images can be represented as a 1-dimensional array and used to train a model, perhaps the same could be done with features extracted from an audio file. After all, the extracted features are nothing but an array of coefficients. So this week, armed with weeks of knowledge of basic TensorFlow and signal processing, I finally tried to get it to work. And of course, many problems arose.
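The core idea can be sketched in a few lines. This is only an illustration of the flatten-and-classify approach, with placeholder shapes and variable names rather than the exact model from this project:

import tensorflow as tf

# Hypothetical shape: 13 MFCC coefficients over 87 time slices, flattened to 1-D
n_features = 13 * 87

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(n_features,)),
    tf.keras.layers.Dense(10, activation='softmax')])  # one output per digit 0-9

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# x_train would hold flattened MFCC arrays and y_train the digit labels (assumed here)
# model.fit(x_train, y_train, epochs=25)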

After hours of struggling with mismatches in the shape of the data, waiting for the huge dataset to reload when I made a mistake, and getting no results, I finally put together the last piece of code that made it run correctly, and immediately second-guessed the accuracy of the model (“0.99 out of 100, right???”).

Of course, when training a model, a result this good could be a case of overfitting. And indeed it is, because the model is only 95% accurate when using separate test data. Even this percentage isn’t the whole story. The test data comes from the same dataset, which has a lot of recordings of each digit, but only 4 voices. It’s quite possible that there are patterns in those voices that would not exist in other voices. This would make the model great on a random sample from the original dataset, but possibly useless for someone else. There’s also the problem of noise, which MFCCs are strongly affected by. So naturally, I recorded my own voice speaking digits and ran it through the model. Unfortunately, I could only manage approximately 50% accuracy, although it is consistently accurate with digits 0, 1, 2, 4 and 6. Much better than chance, at least!

This is a very simple model, which allows you to extract only MFCCs from an audio recording of a spoken digit (0 through 9) and plug it into the model to get an answer. But MFCCs may not tell the whole story, so the next step will be to use additional extracted features to get this model to perform better. There is also much more tweaking I can do with the model to see if I can obtain better results.

I’d like to step through the actual code next week and describe the steps taken to achieve this result. In the meantime, I have a lot more tweaking and refactoring to do.

I would like to mention a very important concept that I studied this week in the context of DSP: convolution. With the help of Allen Downey’s ThinkDSP and the related lecture, I learned a bit more about filtering signals. Convolution is essentially sweeping one signal over another to get a new signal. In DSP, this is used for things such as low-pass filters and adding echo to audio.

Think of an impulse as an instantaneous tone consisting of many (or all) frequencies. If you record this noise in a room, you will get a recording of the “impulse response”: that is, how all of the frequencies are affected by the room over time. The discrete Fourier transform of this response is essentially a filter, because it gives the amplitude of each frequency in the impulse response, including all echoes and any muffling. Multiplying these amplitudes by the DFT of an entirely different audio signal will modify each frequency in the exact same way. And thus, to the human ear, this different audio signal will sound like it does in the same room. If this concept is interesting, I encourage you to watch the lecture and work through the examples in the book.
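As a small illustration of the idea (the file names are hypothetical, mono recordings), convolving a dry recording with a room’s impulse response makes the recording sound as though it were played in that room. The convolution theorem says this is equivalent to multiplying their spectra, which is what SciPy’s FFT-based convolution does under the hood:

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical inputs: a dry voice recording and a recorded room impulse response
sr, voice = wavfile.read('voice.wav')
_, impulse_response = wavfile.read('impulse_response.wav')

# Convolve the two signals (multiplication of spectra, done efficiently via FFT)
wet = fftconvolve(voice.astype(np.float64), impulse_response.astype(np.float64))

# Normalize into the 16-bit range before writing the result back out
wet = wet / np.max(np.abs(wet)) * 32767
wavfile.write('voice_in_room.wav', sr, wet.astype(np.int16))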

I think these topics may come in handy if I need to pre-process recordings, in the event that noise is in fact causing errors in the above model.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Learn How You Fail

All the big CEOs and entrepreneurs will give you trite advice on their LinkedIn pages: you need to fail a lot in order to succeed a lot.

I can’t be too hard on anyone who wants to get this message out, because I had to hear it at some point too. Failure meant incompetence to me, not part of the path to success. But what if failures don’t stop? The “Learn How You Fail” apprenticeship pattern attempts to solve the problem of lingering failures despite increased success.

This is likely due to something you are repeatedly doing that causes the failures. Home in on these behaviors, because changing them is the only way to stop failing. Furthermore, you may simply not be good at some things, and identifying those things can prevent wasted time and effort.

A potential problem with this pattern is that much of our failure stems from blind spots in our perception of ourselves, so uncovering the cause of our failures won’t be easy. But as software developers, our skills in pattern recognition should be top notch. Carefully documenting your failures and how they connect should reveal the patterns.

A coworker asked me years ago what my chosen superpower would be. I answered that I would want to know everything. Omniscience. Besides briefly earning me the nickname “know-it-all”, this reflects my own desire to fix my unfixable flaws. Now, I do think significant effort should be put into things that you are especially bad at. But this pattern is a good reminder to let go of the things that you’ve tried to do and simply haven’t worked out.

Success in many cases is about figuring out what you are good at, not fixing all of your weaknesses. This is the reason the US President has a cabinet, Congress has subcommittees, companies have divisions, and software team members have roles. We can’t all do everything.

My biggest issue with this pattern going forward will be striking a balance. There will always be a desire to learn as much as possible. Gaining more skill and broadening horizons may be tempting, but knowing when to give up in a specific area is the challenge. The CEOs who credit their success to past failures knew what to give up on within their chosen field. Almost always, this doesn’t mean choosing a brand new career; it means finding what you’re doing wrong in the current one.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Analysis and Comparison of Ascending and Descending Scales

With added pressure in realizing the semester is half over, as well as an upcoming interview for a position dealing with DSP and machine learning, I came into this week with newfound motivation. The focus that comes with a little bit of pressure is paradoxically quite freeing.

I had some issues when attempting to compare features between audio files. In hindsight, it was an obvious mistake that I had already learned in theory. But of course, applying theoretical knowledge always reveals the points of weak understanding.

As I’ve written in the past, MFCCs (mel-frequency cepstral coefficients) are most commonly used in speech processing. Time slices are taken from the audio file, and Librosa calculates 13 coefficients for each slice, the number commonly used for speech processing. The MFCC is therefore an array of time slices, each represented by 13 coefficients. These are plotted below, with color representing magnitude (from dark blue to dark red), time slices on the y-axis, and coefficients on the x-axis. The waveform, MFCC delta, and chromagram are also plotted.
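The extraction itself is only a few calls to Librosa. This is a rough sketch with a hypothetical file name, not the project’s exact plotting code:

import librosa

# Load a recording of the scale (hypothetical file name); y is the waveform
y, sr = librosa.load('ascending_scale.wav')

# 13 MFCCs per time slice, their deltas, and a chromagram
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
mfcc_delta = librosa.feature.delta(mfcc)
chroma = librosa.feature.chroma_stft(y=y, sr=sr)

print(mfcc.shape)  # (13, number_of_time_slices); transpose to put time on the rows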

The chromagram is of particular interest, as it shows the pitch content over time, revealing that the scale on the left is ascending and the scale on the right is descending. You can even see where my finger slipped playing the descending scale.

Analysis of an ascending and descending scale

This shows the importance of scale invariance when comparing features, which will also come into play in machine learning. It is why frames of equal duration, which usually overlap, are taken from an audio sample.

Originally, I was extracting features without cutting the audio files to the same size. This resulted in a larger MFCC matrix for the longer file, and attempting to plot the difference between the features caused an error. Files of the same length, however, naturally resulted in two arrays of the same size. Because the originals were only slightly off, I wanted to be sure my understanding was correct, so I made the ascending scale exactly half the length and ran the program again.

Indeed, cutting the first sample in half reveals that the resulting matrix has half as many MFCC time slices. Librosa extracts the first 13 mel-frequency coefficients, so each array will have a length of 13, and each time slice will have one of these arrays. Trying to find the difference by subtracting one matrix from the other results in this error message:

ValueError: operands could not be broadcast together with shapes (44,13) (87,13)
Analysis after cutting the ascending scale in half

Also notice the chromagram only reveals 4 major frequencies. And because a chromagram is in the time domain, but the plot still has the same x-axis, the notes end at approximately the halfway point.
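For reference, the comparison itself only takes a couple of lines once the two MFCC matrices are trimmed to the same number of time slices. The variable names here are hypothetical:

import numpy as np

# mfcc_a and mfcc_b are assumed to be MFCC matrices of shape (time_slices, 13)
# Trim both to the length of the shorter one so the shapes match
n_frames = min(mfcc_a.shape[0], mfcc_b.shape[0])
difference = np.abs(mfcc_a[:n_frames] - mfcc_b[:n_frames])

# Small values mean the two recordings have similar features at that point
print(difference.shape, difference.mean())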

Plotting the absolute difference between MFCC features may not be visually illuminating, but it potentially has uses for pattern identification. The real utility comes from comparing an audio sample to existing files. Take a look at the ascending versus descending scales:

The absolute difference in MFCC features between ascending and descending scales

There is little difference in the higher coefficients, but some strong differences in the first coefficient. There are irregular differences through the rest of the plot, both in time and within coefficients. In isolation, this doesn’t reveal much. But when instead comparing two ascending scales offset by 0.1 seconds, the differences are very small. There are regular spikes in the first coefficient however, likely due to the earlier change of note in one sample.

The absolute difference in MFCC features between ascending scales, offset by 0.1 seconds

This lack of difference is one example of how a machine learning algorithm can detect whether an audio sample fits into a group. Actually training these models will be the topic for next week.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Reflect As You Work

This apprenticeship pattern caught my eye for discussing the Peter Principle, which states that an individual keeps being promoted as long as they do well in their current position. The result is an organization full of people who have been promoted into positions in which they no longer perform well, creating an incompetent organization.

Reflecting as you work is meant to prevent this. The idea is simple enough: constantly probe your work process and ask yourself whether it is ideal. Take note of the positive aspects, but also the negative aspects so that they can be fixed.

The pattern suggests focusing on increasing skill rather than experience. This is a subtle but important distinction, because spending a lot of time doing a single thing wrong is still a lot of experience, but it’s not necessarily useful as a professional.

In certain parts of our industry, it is quite easy to repeat the same year of experience 10 times without making significant progress in your abilities.

Apprenticeship Patterns

I certainly reflect on my work when it’s done, but I tend to focus on the work itself rather than the process. I’ve spent countless hours before a project commit making sure every line of code is perfect and the necessary documentation is in place. This led to nice code and happy professors or managers, but it didn’t help my work process. It was doing more of the same: obtaining a good result, but without improving my workflow.

Figuring out how work processes connect is a helpful practice to identify counterproductive habits. We tend to take the path of least resistance and easily fall into a workflow that we assume works best for us, but this may be an illusion due to our limited scope of experience. We know redundancy can be reduced within a project’s architecture. We strive for this. Why then, would we not give our work process the same courtesy?

This pattern is a great example of software development being more than simply writing code. And by writing code, I mean using tests, DevOps, OOP, and anything else that can be considered a software engineering practice. As the entire book aims to prove, software development is an organic, human process. Moving from a software apprentice, to journeyman, to master requires understanding your own flaws, not just the flaws in your product.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Creating Chords from Sine Waves in Python

This week, as part of my independent study, I worked on feature extraction in Python. I have been using Python for Engineers as a reference; it has a chapter describing basic digital signal processing. As an exercise, I expanded on the code found in that chapter.

It’s a non-trivial task to create a sine wave in code (although compared to the most complex aspects of DSP, it’s a cakewalk). A sine wave produces the purest tone possible, because it oscillates at a single, constant frequency. This oscillation is what you perceive as pitch. The equation for a sine wave is given as:

y(t) = A sin(2πft + φ)

where A is amplitude, f is the frequency, t is time, and φ is the phase in radians. We can ignore phase because this simply indicates where the wave starts at t=0, and this doesn’t matter to our ear. We hear the same oscillation regardless of where it starts.

Take a look at the code for creating a sine wave. Some of the details aren’t as important, but you can see the book for a description. The line that actually creates the sine wave values is:

sine_wave = [np.sin(2 * np.pi * frequency * x/sampling_rate) for x in range(num_samples)]

If you aren’t familiar with list comprehension in Python, this is just using the sine wave equation above, substituting time, t, as a specified number of samples divided by the sampling rate. The result is a list of values representing a sine wave. In reality, this is all an audio file is, with some additional encoding (and usually more interesting oscillations than a sine wave).

So what if we wanted to do this for a chord of multiple sine waves? Maybe using more list comprehension? Sure.

# Note Frequencies
a4 = 440
c5 = 523.25
e5 = 659.25
chord = [a4, c5, e5]

sine_waves = [[np.sin(2 * np.pi * freq * x/sampling_rate) for x in range(num_samples)] for freq in chord]

This is doing the same as above, only for each frequency in a list of frequencies. But that’s the easy part. The original code just multiplies each sample by the amplitude, packs it as a 16-bit integer, and writes it to the file:

for s in sine_wave:
    wav_file.writeframes(struct.pack('h', int(s*amplitude)))

But we can’t simply do that for each sine wave in succession, or we’d get different sine waves playing one after another. That’s an arpeggio, not a chord!

So we have to get a little creative. But not too creative. If you think of each sine wave playing from a separate speaker, what you hear is the sum of the air pressure from each speaker. A single speaker is the same story: it’s just playing the sum of the three sine waves. So, iterating through each index, we can add the sample values of each individual sine wave. I also had to scale the sum down enough to store the value as a short int, by dividing by two.

# Only write samples up to the end of the shortest sine wave
shortest_sample_len = min([len(j) for j in sine_waves])
for i in range(shortest_sample_len):
    # Sum the sample values of each sine wave at this index, scaled down to avoid overflow
    current_samples = [wave[i] for wave in sine_waves]
    value = sum(current_samples) / 2
    wav_file.writeframes(struct.pack('h', int(value*amplitude)))

Output of the sums of the sine waves before multiplying by amplitude

Python for Engineers goes on to describe how to use a Fast Fourier Transform to get the frequencies from the wave file with a single sine wave. But the code works just as well for the sine wave chord! This is because the FFT is an array that treats each index as a frequency, and the value at that index is the frequency’s amplitude. This means that regardless of the number of tones in a sample, the FFT can be plotted and will reveal outliers: those indices with a much higher amplitude. These are the frequencies in the audio file.

One last thing to keep in mind is that since the indices are used to represent the frequencies, they will be whole numbers. For example, c5 = 523.25 Hz will show up as a spike at indices 523 and 524, with 523 having the larger value of the two.

Output of finding the frequencies of the chord created above
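Here is a rough sketch of that frequency-finding step, assuming the chord above was written as one second of audio so that each FFT bin lines up with a whole-number frequency in Hz (the variable name is a placeholder for the summed samples):

import numpy as np

# chord_samples is assumed to hold the summed sine wave values from above
spectrum = np.fft.rfft(chord_samples)
magnitudes = np.abs(spectrum)

# With one second of audio, bin index equals frequency in Hz, so the spikes
# should land near 440, 523/524, and 659
peaks = np.argsort(magnitudes)[-5:]  # top bins; non-integer frequencies spill into neighbors
print(sorted(peaks))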

Full code for creating a chord in Python is posted here.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Be The Worst

I love this apprenticeship pattern. It immediately caught my attention, and although I knew exactly what it was getting at by reading the title, it’s a great reminder.

We’ve probably all heard the phrase “if you’re the smartest person in the room, you’re in the wrong room”. That’s what this pattern is describing. As the worst on a team, you will have an occasion to rise to, riding on the coattails of your teammates and becoming better for it. This means harder work. It requires the ability to apply many other patterns described in the book to succeed efficiently.

The above quote is a bit harsh, though, and isn’t mentioned in the book. It’s not that one should openly criticize their coworkers or consider themselves the smartest person in the room, despite every programmer’s periodic God complex. Besides, it’s difficult to ascertain who exactly is the smartest, or the best developer, or the best employee. There are many metrics, and a team is full of people with different strengths and weaknesses. The individuals are less important than the group: this pattern is probably best implemented by looking at teams as a whole. Is the group as a whole more skilled than you? It might not be, even if it is full of great developers, if they don’t work well together. That, too, will slow your progress down. That, too, is a reason to change.

I have mixed feelings about this sentiment. It’s patently true that working with people significantly above your skill level will help you improve if you’re willing to put the work in. But it would be a real shame to leave a close, efficient team just because you’re no longer worse than all of them.

But that is an important part of growth: knowing when to move on. Finally ending my college career makes me consider all the things I could have done, or could do if I finished a second concentration or major. Or what I missed in general education credits, extracurricular activities, and social experiences by cramming the entire degree into three years and not taking my time to enjoy it. It’s nearing the time to move on, though. Despite a connection to classmates and professors, it would not be wise to stay longer. This doesn’t mean professional or academic relationships have to end, but the majority of my time will soon be spent improving in my chosen career, with new challenges, on a new team.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Independent Study: The Plateau, the Snag, and the Obstacle

Researching more on audio feature extraction last week gave me a lot to think about. As it settled in, I came up with great ideas for hypotheses to test and specific applications for them. The kind of ideas that lead to turning off your cell phone, skipping lunch and dinner, and getting things working.

Unfortunately, as mentioned last week, I do not have the time to learn audio processing algorithms to the extent that I could implement them myself, so I will have to choose a library to help. A further complication was getting code to run on Android. Libraries exist to run Python code on Android, but the Python libraries I’ve found for audio analysis are lacking in features, and I would need to use a combination of them.

The most comprehensive library I’ve found is Essentia. Of course, it’s so comprehensive I won’t be able to use the code for a professional app without a commercial license. Luckily there is a noncommercial license available that will allow me to get results for the project and determine if there are commercial applications.

Essentia is a C++ library. So the good news is I get to use JNI and the Android NDK (Native Development Kit) to run C++ code. Getting C++ code to run from an Android Activity is straightforward enough, but I do worry about potential complications in running Essentia. There are a number of dependencies that I fear might cause trouble with a feature I might want in the future. These are kept to a minimum with a special flag during compilation for Android. But alas, my paranoia strikes.

Because Essentia is open source, I am at least able to see the implementations of the audio processing algorithms, and the code is well documented with references to studies. Signal processing is a degree, not just a semester-long project; I’ve certainly come to appreciate that fact more over the past couple of weeks. But this will be a great overview, and using the code will still require an understanding of the underlying processes.

Progress was made on converting files from audio to basic byte data. When I was considering Python libraries, I was under the assumption I could use WAV files and easily get byte data (for example, with librosa). Android’s MediaRecorder doesn’t support saving WAV files, so other formats must be used.
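For comparison, this is roughly what I had assumed on the desktop side, where Librosa hands back the raw sample data in one call (the file name is a placeholder):

import librosa

# Load a WAV file; y is a NumPy array of floating-point samples, sr is the sample rate
y, sr = librosa.load('recording.wav', sr=None)
print(y.dtype, y.shape, sr)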

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.