
Thoughts on “Learn How You Fail”

The “Learn How You Fail” apprenticeship pattern is about finding the ways in which you tend to fail or make mistakes.  The pattern does not claim to save a programmer from ever failing (that’s not possible), but is instead about learning what tends to lead you to failure.  It’s part of drawing accurate boundaries around yourself as a learner: figuring out where you can grow and improve, what tends to throw you off track, and what may not be worth the effort of improvement.  The authors suggest a very concrete action step to help: using only a text editor, write an implementation of a simple algorithm, write all of its tests, refine the algorithm until you’re sure it will compile and pass the tests, then actually try to compile and test it to see where you failed.
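
To make that exercise concrete, here is a rough sketch of what one round might look like.  The choice of binary search is mine, not the authors’ (any simple algorithm works), and the tests are plain assertions written before ever running the code:

```typescript
// Written entirely in a text editor; the point is to predict, before running
// anything, whether this will compile and pass every assertion below.
function binarySearch(xs: number[], target: number): number {
  let lo = 0;
  let hi = xs.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (xs[mid] === target) return mid;
    if (xs[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1; // not found
}

// Hand-written tests: run them only once you are sure they will pass, then
// study any failures to learn where your predictions went wrong.
console.assert(binarySearch([1, 3, 5, 7], 5) === 2, "finds a middle element");
console.assert(binarySearch([1, 3, 5, 7], 1) === 0, "finds the first element");
console.assert(binarySearch([1, 3, 5, 7], 8) === -1, "misses an absent element");
console.assert(binarySearch([], 1) === -1, "handles an empty array");
```

The mistakes that surface (an off-by-one in the bounds, a forgotten edge case) are exactly the data the pattern asks you to collect about yourself.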

Learning my weak points, where I tend to make mistakes, is a big part of my life as a gamer.  I try to improve my performance (whether that’s technical gameplay, strategy, tactical decision-making, etc.) with every game.  A very important aspect of that is determining what kinds of mistakes I tend to make.  Where do I overthink?  Where do I underthink?  What can I take away from losses, and when is it worthwhile to acknowledge but let go of a weakness (at least for a while) to focus on something else?

This apprenticeship pattern also resonates with me for a different reason.  I struggled through much of middle and high school with a mild nonverbal learning disability.  It took years of practice and tutoring for me to really be able to work through it.  An important part of that (or, I think, any therapeutic process) was finding points of failure.  I struggled to express myself in writing, especially meeting word counts for papers.  I had (and still have) difficulty breaking focus or task-switching.  Recognizing these things was the first step to improving them.

I think that learning how you fail is not just a skill that’s important for software apprenticeship and craft.  It’s an important life skill; at least, it has been for me.

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

I think I learned more this sprint about the necessity of good sprint planning than I did about anything else.  We started the sprint by laying out goals, and by going over the user stories in the doc that AMPATH sent us.  This was useful work, but beyond that we hadn’t planned any real actionable items.  Broad goals are nice for setting a larger scope, but less useful for getting concrete work done.

I did, however, learn some concrete things; it’s also important to note that most of my learning took place in a group setting, both within my team and with another team meeting in an adjacent space.  I learned how the AMPATH app strings its services together to go from button press to REST API query, and how that information is propagated through the layers of services and onto the user interface.  I also learned how they check the online status, which may be more important going forward for other groups that are focused on smooth offline/online transitions (my team, for now, is taking on data encryption).

Other than the team planning process (which I will get to later), the most important thing I learned was that, at least for me, it’s far easier and better to go through code in a group setting than individually.  I tend to get lost or distracted when I’m looking through it on my own, whereas in a group I can focus and bounce ideas off of my peers.  We ended up using a projector to throw the code up on the wall.  We then started with a form that logically must request server access (we looked at the patient search form).  From there, we looked at the service invoked by the actual search function(s), which is set up to generate a table of search results.  In order to do that, it makes use of another service, which actually builds the REST request from a URL stored elsewhere and the string passed along from the search bar.  I think a similar tracing of services could be useful in the future to other groups interested in different parts of the UI.  We also received some help from the AMPATH team, directing us to the openmrs-api folder, which appears to contain all of the services that interact with the server.
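
For anyone tracing a similar path, the chain we followed had roughly the shape sketched below.  This is a hedged illustration with hypothetical names, written as plain TypeScript rather than the app’s actual Angular services, meant only to show the layering from button press down to REST request:

```typescript
// Hypothetical names; a sketch of the layering, not AMPATH's actual code.
interface Patient {
  uuid: string;
  name: string;
}

// Bottom layer: builds the REST request from a base URL stored elsewhere.
class HttpService {
  constructor(private baseUrl: string) {}

  async get<T>(path: string): Promise<T> {
    const response = await fetch(`${this.baseUrl}${path}`);
    return response.json() as Promise<T>;
  }
}

// Middle layer: turns the string from the search bar into a query and
// returns the results used to populate the search-results table.
class PatientSearchService {
  constructor(private http: HttpService) {}

  searchPatients(searchText: string): Promise<Patient[]> {
    return this.http.get<Patient[]>(`/patient?q=${encodeURIComponent(searchText)}`);
  }
}

// Top layer: the handler behind the search form's button press.
async function onSearchClicked(searchText: string): Promise<void> {
  const service = new PatientSearchService(new HttpService("https://example.org/api"));
  const patients = await service.searchPatients(searchText);
  console.log(`Found ${patients.length} patients`);
}
```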

The most important team-related learning that came out of this sprint is the now-clear necessity for specific, concrete tasks rather than more abstract ones.  We came to this conclusion during our team sprint retrospective, and we agreed to spend more time breaking down tasks and assigning them during sprint planning moving forward.

In addition, during the class time where we met for our retrospective, we started researching already-made encryption services that we could use to encrypt records pulled from the AMPATH servers.  We found three so far that may be promising; one way we can add specific, focused tasks for next sprint could be to assign one of these services to each team member so they can research it and prepare a brief report.  This way we will be able to present what we’ve found to AMPATH for feedback on which might be the most suitable (they’ve already given us a few guidelines).

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Thoughts on the Breakable Toys pattern

The Breakable Toys pattern addresses the issue of learning by making mistakes in an environment where failure is not an option.  This can make learning substantially more difficult.  The solution the authors propose is to build yourself an environment in which failure is allowed to happen.  The apprentice does this by creating a pet project (even if it’s reinventing the wheel) and just trying things until something works or something sticks.

This pattern really resonates with me and the way I learn.  I can’t just pick things up by reading code or manuals, or copying premade solutions.  I need to get my hands dirty and build things.  Sometimes I’ll “sketch out” programs first by building a skeleton of files I know I want and adjusting as I go.  Sometimes I implement the core of code and let the rest of the program grow around it.  Neither approach is necessarily suitable for a professional environment where expectations may be high, but it helps me learn.

I also think that the Breakable Toys pattern can be applied in a more limited way to a more pressured setting.  While you might not be able to just declare failure when tasked with writing a section of code, you could take a couple of runs at it if the logic or the language involved is unfamiliar.  The main difference between this and Breakable Toys (maybe this is Partially-Breakable Toys?  Scratchable Toys?) is that you still need to deliver a working end product in reasonable time.  The toy can’t be the whole assignment or program, but maybe it can be part of it.  This approach, however, doesn’t have the longevity the authors imply.

I really appreciate that this pattern is in the book.  Playing with problems, writing code, failing, and then finding ways to do what I want to do is how I learn things.  It’s also a big part of what I love about software development.  Reading about something like the way I prefer to learn in a book like this makes me feel validated about my learning methods, like I’m on the right track.

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Thoughts on “Confront Your Ignorance” apprenticeship pattern

This pattern seeks to solve the problem of skill gaps that are making daily work more challenging.  The authors propose that software apprentices suffering from this issue attempt to actively learn the missing pieces.  This could take different forms — they suggest reading tutorials/FAQs, constructing small low-stakes projects, and/or involving other people who are either experts in the area or trying to learn the same thing.  They also suggest keeping a list of these skill gap areas, and crossing them off as they’re sufficiently learned; this goes hand-in-hand with adding to the list as your learning exposes additional gaps.

This approach to learning really resonates with me.  My preference is to actively seek out knowledge, and I tend to learn best through hands-on practice.  I’ve already used a less formalized version of this when I taught myself Python: find a skill (in this case, a programming language) that I would like to learn and then give myself a project to work on that forces me to learn and use it.  There are three major additions (on top of the formalization) that I can take away from this pattern:

  1. Involve other people, whether they are experts or fellow learners.
  2. Don’t overuse this pattern to the point where it causes problems for others; I only have so much time.
  3. Balance learning with introspection.

The first point leads to the creation of a learning community, and extends both the resources and benefits of learning.  I know that I have a tendency to want to do everything myself, and while independence isn’t bad, it’s also important not to always be reinventing the wheel.  I also enjoy sharing my knowledge, and it makes sense for me to seek that out in a more mutual way.

The second point is also something I run into often, and partially extends from the first.  I really like to build things from scratch and see how they’re made.  However, that tendency can also lead to excessive use of time and energy for what should be a simpler project, or the preference for my own solution over another (quite possibly better) one that’s already been written and vetted.

The third point encourages me to set up a cycle of learning and introspection; crossing items off of the list and then adding more to the bottom.

While actual checklists are not a tool I particularly enjoy using, this pattern has nudged me towards keeping one (and actually updating it).  That, I think, is where I’ve found the most value in Confront Your Ignorance.

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Sprint 1 Retrospective

In this sprint I learned how to set up a development environment for a fairly unfamiliar IDE and language, and I learned how to work with other people to work through issues in building a local copy of the AMPATH app.  I struggled most in the latter with actually understanding the error messages and how to resolve them.  I can take the difficulty I experienced with this process and use it in the future as a reminder to pick through the error messages more carefully and try to resolve the conflicts myself.  I was too concerned about making changes to files or folders, even though the worst outcome would just be reverting back a version.

I think the team worked pretty well together on this sprint, although it’s a little early to tell.  We were all working on the same thing (environment/app setup) which didn’t offer a lot of room for collaboration other than helping each other resolve problems.  I think that we’ll get a better feel for our team dynamic in this upcoming sprint when we have more “real” work to do.

This sprint was primarily focused on getting the AMPATH app up and running on my local system.  I started by forking the repository from the group repo, then cloning it to my system.  I then looked through the README and tried to follow the instructions.  I must have done something incorrectly or in the wrong order, because I ran into missing dependencies (which should have been downloaded by npm).  I then tried a variety of solutions to resolve the problems.  I attempted to directly install the dependencies (didn’t work, due to a file/folder creation error).  I deleted and checked out the project again (didn’t work either, same issues as before).  I tried rolling back to older versions of Angular and npm, which was suggested by a couple of StackOverflow pages I found relating to similar errors (did not work, bricked my Angular install, had to reinstall).  On the second or third try of just installing the dependencies (with a clean Angular install and project version), I managed to resolve the issue, but then ran into a problem with the ladda module — it wasn’t in the right place for Angular to find it.  I was stuck on that until Matt Foley found and posted a solution.

Once I had the solution to the ladda error, I reinstalled Angular again (just to be sure), worked in order through the steps outlined in the README, worked through Matt Foley’s fix, and wrote up the procedure I took as I went so that other people in my group and in the class could use it.

The procedure boiled down to the following:

  1. Run npm install, then install the global dependencies, then run npm install again to catch anything that might have been missed.
  2. Create copies of one of the ladda files in the directory Angular wanted to look for it in, and modify one of the ladda files to point to the proper directory.

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Thoughts on “The White Belt” apprenticeship pattern

The White Belt pattern arises out of an issue I’ve encountered — I’ve learned one language fairly well and have some practice with others that follow similar paradigms, but have found it to be more challenging to learn new things.  Tools, skills, and languages that are different from what I know don’t come as naturally as it seems they should.

This apprenticeship pattern seeks to solve that problem through both a mindset and a more practical approach.  It teaches a willingness to feel like a beginner again, to fail and maybe look foolish in order to adopt a childlike ability to absorb knowledge.  More practically, it suggests a learning strategy: adapt a simple, known software project from one language to another, using the paradigms of the new language to their fullest rather than trying to write “Fortran in any language”.

As I said before, I have noticed that I struggle more and more to acquire new skills.  Whether that’s environment setup, picking up new languages, or adapting to a different set of tools, it seems to get harder as I learn more.  That doesn’t bode well for my mental plasticity, and this pattern provides a useful solution.

The most useful aspect of the pattern, for me, is what the authors call the mindset of not knowing.  Of willingness to ask questions, and to start from the beginning (whether that’s following tutorials that feel beneath me, or finding help elsewhere).  Of taking a harder road to learn things right and come up with a better and more nuanced solution rather than patching together something that’s familiar and comforting.  This speaks to something I learned in a high school gym class: we learn and grow best just a bit outside of our comfort zones.  I need to push myself into the “learning zone”.

The action advice is also something that I can take away and use moving forward.  While I don’t have a lot of time now, I am interested in refactoring old projects into new languages as a learning exercise, to blend something that I know well already with something brand-new.

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Thoughts on “Apprenticeship Patterns, Chapter 1”

In this blog post, I want to go through Hoover and Oshineye’s values of software craftsmanship, and how I can apply them to my work and my growth as a developer.  To summarize quickly, they are:

  • Talent derives from effort and practice.
  • Adapt and change.
  • Embrace pragmatism.
  • Share knowledge.
  • Experiment, and be willing to fail.
  • Take control of your own destiny.
  • Focus on individuals.
  • Include the best ideas from everyone.
  • Focus on skills over processes.
  • Find like-minded people to learn with and from.

I don’t necessarily agree that all talent comes from practice and effort, but practice and effort are the cornerstones of ability; talent certainly languishes without use.  I also don’t think that it’s enough to just do my work for classes or my job.  Improvement requires practice on new things and dedication of time outside of the minimum needed to put food on the table.

Like any machine learning system, we can improve via feedback.  Unfortunately, it’s not always as easy as minimizing a loss function and backpropagating information.  I agree with the authors wholeheartedly on the need to find solutions to my inadequacies.

Pragmatism is great, but I don’t agree that it’s the be-all and end-all; I do appreciate, however, that it’s more important in the context of apprenticeship than it might be later in my life.  I recognize the tendency within myself to become paralyzed by attempts to force “theoretical purity” or “future perfection” from the start, and then never start at all.  This is something I very much want to change, or at least to recognize when it is useful versus when it gets in the way.

One of the things I enjoy most about being a part of a learning community is the ability to share what I know, and learn from my peers.  I will miss that after I graduate, so I may try to bring that kind of sharing to wherever I end up next.

I absolutely agree that progress in any form requires experimentation and failure.  From my experience playing various games, I know that it’s important to take something away from each loss, and also to acknowledge the room for improvement in each win.

I think that many of Hoover and Oshineye’s points follow from taking control of one’s destiny.  I agree that conscious choice and involvement in learning and life are key to the improvement of self.

I like the idea of a community of like-minded individuals rather than a blind reliance on hierarchy.  In my gaming life, I’ve found that (sometimes somewhat heated) debate is an excellent way to grow myself and to learn.

Inclusiveness goes hand-in-hand with pragmatism.  If an idea is useful or productive, adopt it.  If a person has value to contribute or simply wishes to share in the learning and information, include them.

My earlier education was heavily focused on skills over memorization, and on the idea that different processes may work better or worse for different individuals.  I agree that it’s important to recognize these things, as well as to acknowledge that skill is not distributed equally.

Learning is communal, and this last value of “situated learning” seems to follow naturally from many of the earlier ones.  I want to continue to align myself with like-minded individuals, even if that requires more effort than just showing up to class.

There is more to unpack in this chapter than what I have covered here, far more than what I could digest and boil down for a single blog post.  I believe that commitment to the values Hoover and Oshineye lay out will help me improve as a software craftsman.

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Introduction for CS 448

This is the blog I will be using for this course.

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Thoughts on “Hybrid Verification: Mixing Formal Methods and Testing”

This article, by Dr. Ben Brosgol, focuses on a mixture of formal methods and testing practices (together called “hybrid verification”) and the use of “contracts” that consist of preconditions and postconditions in order to formalize the assumptions made by critical code.

I chose to write about this article because it highlights some of the limits of testing, shows how to provide additional security for critical code, and introduces contract-based programming.

Brosgol defines a “contract” in programming as a set of preconditions and postconditions that wrap a subprogram (function, method, etc.) call.  The subprogram cannot begin unless its preconditions are met, and cannot return until its postconditions are true.  This provides a contract between a program and its subprograms that guarantees a certain state at critical times.  There are tools written for some languages (he uses SPARK as an example) that can do both static and dynamic contract testing and provide proof that the code will work as specified.
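
To make the idea concrete, here is a minimal sketch of a contract checked at runtime.  This is my own hypothetical example in TypeScript; the article’s tooling is SPARK, where conditions like these can be proven statically instead of (or in addition to) being checked dynamically:

```typescript
// A hypothetical contract wrapped around a square-root subprogram.
function contractualSqrt(x: number): number {
  // Precondition: the subprogram cannot begin unless the input is non-negative.
  if (x < 0) {
    throw new Error("precondition violated: x must be >= 0");
  }

  const result = Math.sqrt(x);

  // Postcondition: the subprogram cannot return unless result * result equals
  // the input (within floating-point tolerance).
  if (Math.abs(result * result - x) > 1e-9 * Math.max(1, x)) {
    throw new Error("postcondition violated: result * result must equal x");
  }

  return result;
}

console.log(contractualSqrt(2)); // ~1.4142; both checks pass silently
```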

Brosgol then details ways to mix testing and formal verification.  If formally verified code calls subprograms that were tested rather than proven, the formal analysis tools will attempt to show that the preconditions are met, and assume that the tested code satisfies the postconditions.  This also requires that either the contracts are checked at runtime, or that sufficient testing was done such that the developer is confident the contracts will be fulfilled by the tested code.  If formally verified code is called from tested code, there need to be runtime checks for the preconditions (because the tested code does not guarantee those in the way the formal verification requires), but because the postconditions have been proven there is no need for checks at that point.
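
A sketch of the second direction, where tested code calls a proven subprogram, might look like the following.  Again the names and conditions are hypothetical; the point is only where the runtime checks do and don’t belong:

```typescript
// Pretend this subprogram's contract was formally proven. Its precondition
// still gets a runtime check, because the tested (unproven) caller cannot
// guarantee it; its proven postcondition needs no check after the call.
function provenDivide(a: number, b: number): number {
  if (b === 0) {
    throw new Error("precondition violated: b must not be 0");
  }
  // Postcondition (a === result * b, up to rounding) was proven statically,
  // so no runtime check is emitted here.
  return a / b;
}

// Ordinary tested calling code, exercised by unit tests rather than proof.
console.assert(provenDivide(10, 2) === 5, "10 / 2 should be 5");
```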

Next, Brosgol mentions the need for good choice of postconditions.  Strong, extensive postconditions make it easier to provide proof, but may have unacceptable overhead if they need to be checked dynamically.

He concludes that the relatively new combination of formal proof tools and contract verification on both a static and a dynamic basis opens up new avenues to create code that secures its critical sections.

This article helped me to understand that there’s a much wider world of testing beyond what we covered in class.  We didn’t talk about proof-based testing at all, and that’s a subject area that I believe I should learn more about.  It also highlights the way that our understanding of testing is ever-expanding.


From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Thoughts on “Getting Started with AI for Testing”

In my last post, I wrote about an article that dove into the uses of AI in software testing.  Given the volume of search engine results that turned up when I started doing some research into the subject area, I thought it was worthwhile to write another piece about it.

The post I chose to write about this time is an introduction to AIST – Artificial Intelligence for Software Testing.  It is defined by Tariq King (the author of the post) as “an emerging field aimed at the development of AI systems to test software, methods to test AI systems, and ultimately designing software that is capable of self-testing and self-healing.”  Most intriguing to me is the last part — self-healing software.

The organization hosting this blog (of which King is a founding member) is called AISTA, or the Artificial Intelligence for Software Testing Association.  Their mission is to pursue what they call the “Grand Dream” of testing: software that tests and updates itself with little need for human intervention.

King’s post is more of a survey than an in-depth piece.  He identifies three areas to explore when looking to get into AIST: artificial intelligence, software testing, and self-managing systems.  I know a little about the first two, but I haven’t touched much on the third.  Self-managing systems also appear to be the focus of AISTA.  King claims that there is “a general lack of research in the area of self-testable autonomic software”, but that recent technological developments appear to bring solutions closer to practicality.

Ultimately, self-managing and self-healing systems are designed to adapt to their environment, modeled (originally by IBM) after the autonomic nervous system in living creatures.  A self-healing system should be able to maintain homeostasis alongside self-optimization.  And that necessitates self-testing: before making changes to its own code, an autonomous system needs to ensure the change won’t do more harm than good.

So, what does a world of self-testing software mean for software testers?  It means that we may become more like teachers for software systems, moving them out of local pitfalls so that they can continue to grow.  Of course, these systems may be a long way off, and will need extensive human-driven testing and validation before they can start to test themselves.

The robots aren’t coming to take software testing jobs.  Yet.

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.