Category Archives: Week 4

Java @nnotation

Enhancing Your Code with Metadata

The blog from ioflood.com provides a comprehensive guide on Java Annotations, covering the basics and advanced aspects. It explains the fundamental annotations in Java, such as @Override, @Deprecated, and @SuppressWarnings, and also delves into creating custom annotations. The blog addresses how to deal with common challenges and compares Java Annotations with other metadata approaches like comments and naming conventions. It also touches upon the role of Java Annotations in larger projects and frameworks, emphasizing their importance in modern Java development.

Delving into Java’s built-in annotations, let’s begin with @Override. This annotation safeguards your method overrides, ensuring that you’re correctly extending a superclass method. Missteps in method naming or parameters can lead to subtle bugs, but @Override makes these issues immediately evident.

Next, consider @Deprecated. It’s a polite warning to developers that a particular method or class should be avoided, possibly due to security concerns or improved alternatives. Using @Deprecated helps maintain backward compatibility while steering developers towards better options.

Lastly, @SuppressWarnings plays a key role in managing compiler warnings. While it’s not advisable to ignore all warnings, this annotation is invaluable when dealing with known but unavoidable issues, particularly in cases of backward compatibility or deprecated usage.
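
To see how these three work together, here is a minimal sketch (the greeter classes are invented for illustration):

```java
class LegacyGreeter {
    /** @deprecated use {@link #greet(String)} instead */
    @Deprecated
    void greet() {
        System.out.println("Hello!");
    }

    void greet(String name) {
        System.out.println("Hello, " + name + "!");
    }
}

class FriendlyGreeter extends LegacyGreeter {
    // @Override makes the compiler verify this really overrides a superclass
    // method; a typo like great(String) would fail to compile instead of
    // silently adding a brand-new method.
    @Override
    void greet(String name) {
        System.out.println("Hi there, " + name + "!");
    }

    // @SuppressWarnings silences a known, deliberate use of the deprecated method.
    @SuppressWarnings("deprecation")
    void greetOldStyle() {
        greet();
    }
}
```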

Creating and Using Custom Annotations

Custom annotations take this utility a step further. They allow you to create tailor-made metadata suited to your specific project needs. For instance, you could define an @Configurable annotation to mark fields that should be populated from a configuration file.

Creating a custom annotation involves defining an interface with the @interface keyword. The real power lies in understanding and using retention policies effectively. These policies determine how the annotation is stored and used:

  • SOURCE: Discarded by the compiler, useful for annotations processed during source code analysis.
  • CLASS: Stored in the .class file but not available at runtime, ideal for annotations that don’t influence runtime behavior.
  • RUNTIME: Available at runtime, these annotations can be used for runtime processing, like those in many Java frameworks.
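
As a rough sketch of the @Configurable idea above (the key() element and the ConfigLoader class are my own illustrative additions, not from the blog), a RUNTIME-retained annotation can be defined and then read back through reflection:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// RUNTIME retention keeps the annotation visible to reflection,
// which is what lets a framework find and populate marked fields.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Configurable {
    String key(); // the configuration key to read the value from
}

class ServerSettings {
    @Configurable(key = "server.port")
    String port;
}

class ConfigLoader {
    // Scans an object's fields and fills in the ones marked @Configurable.
    static void populate(Object target) throws IllegalAccessException {
        for (Field field : target.getClass().getDeclaredFields()) {
            Configurable config = field.getAnnotation(Configurable.class);
            if (config != null) {
                field.setAccessible(true);
                // A real loader would look up config.key() in a properties
                // file; here we just stamp the key name to show the mechanism.
                field.set(target, "value-for-" + config.key());
            }
        }
    }
}
```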

Best practices for custom annotations include clear documentation and thoughtful consideration of their scope and retention policy. They can serve myriad purposes, from guiding framework behavior to enforcing coding standards.

Conclusion

Java Annotations, whether standard or custom, represent a powerful aspect of Java programming. They allow for cleaner code, clearer intent, and more robust software design. By understanding and utilizing annotations effectively, Java developers can ensure their code is not only efficient but also well-structured and easier to maintain.

Here is the link to the blog: https://ioflood.com/blog/java-annotations/

From the blog CS@Worcester – Coding by asejdi and used with permission of the author. All other rights reserved by the author.

Black Box vs White Box Testing

In the ever-changing, dynamic field of software development, understanding the nuances of different testing methodologies is crucial for ensuring quality and reliability. I would like to say that I simply stumbled upon the blog “Black vs White vs Grey Box Testing” on Shakebugs.com; the truth is that I was still a little confused after our last class and needed further clarification, not only on the difference between the two testing methods but on what they do and when they are used. This article did just that: it resonated with what we were learning and sparked several insights that I believe will impact my future practices.

The article navigates through the concepts of black, white, and grey box testing (I did not even know grey was a thing). Black box testing, as it explains, is an approach where the tester assesses the functionality without knowledge of the internal workings of the application. White box testing, on the other hand, requires a deep understanding of the code, as testers need to verify the internal processes and pathways. Grey box testing emerges as a hybrid approach, combining elements of both black and white box testing. It allows testers to apply their partial knowledge of the internal structures while examining the software’s external functionality.
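
To make the distinction concrete, here is a toy sketch (the Discount class and its tests are invented for illustration; run with java -ea to enable the assertions):

```java
class Discount {
    static double apply(double price, boolean isMember) {
        // internal branch that a white-box tester would want to cover
        return isMember ? price * 0.9 : price;
    }
}

public class DiscountTests {
    public static void main(String[] args) {
        // Black-box style: cases come from the spec ("members get 10% off"),
        // with no assumptions about how apply() is written internally.
        assert Discount.apply(100.0, true) == 90.0;
        assert Discount.apply(100.0, false) == 100.0;

        // White-box style: the tester reads the source and deliberately
        // chooses inputs to exercise both sides of the ternary branch.
        // (Here the same two calls happen to give full branch coverage.)
        System.out.println("All checks passed");
    }
}
```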

As I mentioned before, I chose this resource because it matched the topics we were discussing in class and further developed my understanding of the practical applications of the different testing methodologies. The clear and concise explanations, paired with practical examples and visuals, provide a framework for differentiating the methods and appreciating the unique attributes and applications of each.

Reading this article was more delightful than I initially anticipated; when I saw a 13-minute read time I almost closed the tab, but I am glad I did not. I learned that while black box testing is excellent for validating user requirements and functionalities, white box testing is indispensable for internal code optimization and security assessments. Grey box testing, with its balanced approach, offers a valuable perspective for comprehensive testing.

Going forward, I intend to integrate these insights into my approach to software testing. In future projects, I will not only consider the functional requirements but also the internal code structure and security aspects when deciding on a testing strategy.

The blog post is a must-read for anyone in the field of software development testing. It offers a clear and practical understanding of the different methods and guidance on how to apply them effectively. You can read the full article here. This resource not only enhanced my understanding but also equipped me with practical knowledge I am eager to apply in the future.

From the blog CS@Worcester – Josies Notes by josielrivas and used with permission of the author. All other rights reserved by the author.

Navigating the Software Development Spiral: A Closer Look at the Spiral Model

In the dynamic world of software development, finding the right approach to tackle complex projects and manage uncertainties is an important task, since we always need ways of maneuvering around problems or solving them outright. One such approach I came across in my searches, which has gained recognition for its adaptability and risk management capabilities, is the Spiral Model. This Software Development Life Cycle (SDLC) model provides a systematic and iterative method for building software, allowing developers to navigate the challenges of large and intricate projects.

The blog post delves deep into the intricacies of the Spiral Model, exploring its phases, characteristics, advantages, and disadvantages.
By the end, you’ll have a comprehensive understanding of this powerful SDLC model and its potential applications, helping you make informed decisions about its use in your software development projects. From what I’ve uncovered, the Spiral Model is often referred to as a “Meta-Model” because of its ability to incorporate multiple approaches: it seamlessly integrates concepts from other SDLC models, using a step-wise approach similar to the classic Waterfall method, with every loop representing a step or phase completed in the development process.

Following that is the prototyping model/technique; as the name implies, a prototype is built right at the beginning to serve as a baseline to draw on. A prototype is developed at the start of each phase, providing a tangible way to resolve any risks that crop up. The iterations in the spiral model can be thought of as evolution, through which the complete systems we have are built.

The primary focus of the spiral model is risk aversion and management: by addressing risks at each and every phase, it ensures that uncertainty in the software development cycle is kept to a minimum. Last but not least is the adaptability of the spiral method; its iterative and incremental approach can accommodate changing requirements or unexpected events that may crop up.

I selected this resource because the Spiral Model is a fundamental concept in software engineering. Understanding different SDLC models, their advantages, and disadvantages is crucial in the software development field. The Spiral Model’s focus on risk management and adaptability piqued my interest as it aligns with the evolving nature of software projects.

https://www.geeksforgeeks.org/software-engineering-spiral-model/#

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

Working Locally And Upstream.

As a student of Computer Science currently taking a class on Software Process Management, my journey through this course involves a lot of learning, experimenting, and finding better ways to grow as a student in this field. In this blog post, I shall share some of the things I have learnt as we delve into the concept of working locally and upstream, highlighting its significance and the benefits it offers when contributing to open-source projects.

What Is “Working Locally and Upstream”?

Before I go into the why and how, let’s clarify what working “locally” and “upstream” means in the context of open source:

  1. Working Locally: When you work locally, you are making changes and improvements to open-source software on your personal development environment. You might be fixing bugs, adding features, or simply experimenting with the code. This is your playground to test, experiment, and learn.
  2. Working Upstream: Once you’ve made changes locally and are confident in your code, the next step is to contribute your changes to the official or “upstream” repository. Upstream is where the original project is maintained, and your contributions become part of the official codebase.

Why Would one Work Locally?

  1. Learning and Experimentation: Working locally allows you to experiment freely. You can try out new ideas, make mistakes, and learn from them without the pressure of affecting the main project.
  2. Skill Development: This is a perfect opportunity to hone your coding, debugging, and collaboration skills. You’ll gain valuable experience that can be applied in your coursework and future career.
  3. Portfolio Building: Every contribution you make locally is a valuable addition to your portfolio. It showcases your practical experience and commitment to open source.

Why Should you Consider Contributing to the Upstream?

  1. Community Engagement: Contributing upstream allows you to be part of a wider community. Your code becomes part of a larger ecosystem, and you collaborate with experienced developers from all over the world.
  2. Impact: Your contributions have a real impact. The changes you make can benefit not only the project but also countless other users and developers who rely on it.
  3. Networking: Working upstream introduces you to industry professionals and like-minded individuals. This networking can be a stepping stone to internships, job opportunities, and mentorship.

How to Get Started Working locally and upstream.

  1. Choose a Project: Find an open-source project that aligns with your interests or field of study. Popular platforms like GitHub offer a wide selection.
  2. Fork the Repository: Forking creates a copy of the project in your GitHub account, which you can work on without affecting the original code.
  3. Make Local Changes: Clone your forked repository to your local machine. Make the desired changes, test them thoroughly, and commit your work.
  4. Make a Pull Request: Once you’re satisfied with your changes, submit a pull request to the original repository. This is your way of proposing your contributions to the upstream maintainers.
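
As a rough sketch of what those four steps look like on the command line (repository and branch names are placeholders):

```
# clone your fork, not the upstream repository
git clone https://github.com/your-username/some-project.git
cd some-project

# do your local work on a topic branch
git checkout -b fix-typo-in-readme
# ...edit files, run the tests...
git add README.md
git commit -m "Fix typo in README"

# publish the branch to your fork
git push origin fix-typo-in-readme
# then open a pull request from your fork's branch to the upstream repository
```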

In conclusion, working locally and upstream in open source is a valuable experience for software developers. It not only helps you grow as a developer but also connects you with a global community of like-minded individuals. So dive in, fork your first repository, and explore.

Here is where you can find some open source projects to work with:

https://github.com/

https://gitlab.com/

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.

learning concurrency

I’ve heard the terms concurrency and multithreading a lot, but I never really bothered looking into what they actually do. All I’ve really known about them was that they make things run faster. I mostly associated multithreading with hyperthreading; I’ve known that CPUs can use multiple cores for the same application to speed up runtime and make games perform better, but I was always somewhat confused about why some games have issues actually taking advantage of modern high-end CPUs. Usually this is fixed by some sort of modification, if one has been written already. That being said, my association between the two is only really surface level.

Hyperthreading is really just related to the physical hardware, which makes it different from multithreading in programming. Instead, multithreading is a form of concurrency in which a program has multiple threads that can run simultaneously, with each thread having its own operations and processes that it executes. What this ultimately means is that within a program, multiple operations can be executed at the same time.

This really fascinates me coming from the perspective of only writing straightforward code. While I sort of knew intuitively that programs can probably do multiple tasks at the same time, I’ve only experienced this on the end-user side of things, rather than the individual writing the program. After looking into how threads work in Java on a BairesDev post regarding Java concurrency, I can really see how powerful of a tool concurrency can be for runtime efficiency. This post essentially goes over what concurrency is and the ‘basics’ of utilizing built-in multithreading functions in Java, along with the advantages and disadvantages that this design comes with.
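
As a minimal sketch of the built-in mechanics (my own example, not the post’s code), two threads can run the same task concurrently:

```java
public class ConcurrencyDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            String name = Thread.currentThread().getName();
            for (int i = 0; i < 3; i++) {
                System.out.println(name + " step " + i);
            }
        };

        Thread worker1 = new Thread(task, "worker-1");
        Thread worker2 = new Thread(task, "worker-2");
        worker1.start();   // both threads now run at the same time,
        worker2.start();   // so their output interleaves unpredictably
        worker1.join();    // wait for both to finish before exiting
        worker2.join();
        System.out.println("done");
    }
}
```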

Of course, it does seem like it takes a careful approach to make sure that the implementation of such a tool is actually beneficial to the project. Even with this relatively simple tutorial, I did find myself a little confused at some points, particularly at the point where Locks are introduced. Though this could be the effect of the hour at which I decided to start writing this blog post, it still stands to reason that the complexity of writing a multithreaded program may result in a more difficult development process, especially in debugging.
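
For what it’s worth, here is my understanding of the basic Lock pattern that confused me, written as a sketch rather than the tutorial’s exact code:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Only one thread at a time may run the guarded section,
// so the counter never loses an increment to a race condition.
class SafeCounter {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    void increment() {
        lock.lock();        // block until this thread owns the lock
        try {
            count++;        // the critical section
        } finally {
            lock.unlock();  // always release, even if an exception is thrown
        }
    }

    int value() {
        return count;
    }
}
```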

Regardless, I was truly fascinated by the subject matter and I’m really excited to be going over concurrency in our course (CS-343). This seems like a tool I would really like to use as I enjoy toying with logistics in video games and the like.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

Code Documentation

For this week’s blog post, I chose the article “Code Documentation: How to Do It Right” by the editorial team of SkillReactor. I chose this article in particular because it aligns well with the code documentation section of the syllabus. This article goes into great depth about code documentation and its benefits, as well as how it is best implemented. In this blog post, I will specifically be going over the code documentation tools discussed in the article and how the article discusses overcoming documentation challenges. Prior to reading this article, I wasn’t very familiar with code documentation tools, but I learned much about their function and how some of the tools mentioned can be used with multiple programming languages.

The article mentions a few specific code documentation tools that are commonly used in software development. One of the tools mentioned that caught my attention is Doxygen. “Doxygen is a versatile tool that supports multiple programming languages and generates documentation in HTML, LaTeX, and RTF formats.” Doxygen is very interesting because it works alongside multiple programming languages and creates documentation in HTML, which, in my experience, can be very difficult to navigate without some form of documentation guiding you. I also found Sphinx fascinating because the article mentions that it offers automated API documentation generation. “Sphinx, primarily used for Python codebases, offers automated API documentation generation and support for reStructuredText markup language.” A tool that can generate documentation for your code automatically can be immensely helpful; automatically generated documentation also helps you avoid jargon or slang that may not be understood by others reading through your code. Another essential aspect that the article discusses is overcoming some challenges associated with creating code documentation.
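
As a small, made-up illustration of the comment style these generators consume (both Javadoc and Doxygen can read this format):

```java
public class TemperatureUtils {
    /**
     * Converts a temperature from Celsius to Fahrenheit.
     *
     * @param celsius the temperature in degrees Celsius
     * @return the equivalent temperature in degrees Fahrenheit
     */
    public static double toFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```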

Writing documentation for your code comes with great benefits, but it can often be difficult to implement in a project. One of these challenges is maintaining up-to-date documentation as a project evolves. “Although code documentation offers numerous advantages, it comes with its own set of challenges. Managing updates to documentation can be a daunting task, particularly in large projects where multiple developers are working simultaneously.” Another challenge that comes along with creating documentation is avoiding redundancy. “Another challenge is avoiding redundancy in documentation. When multiple code sections use the same functions or variables, it can be tempting to copy and paste documentation, resulting in redundant documentation and confusing code.” However, these challenges can be overcome with enough diligence. “To overcome these challenges, it is essential to establish specific standards for documentation management and to incorporate documentation review processes into the development workflow.” As long as these standards are maintained and code documentation is regularly reviewed, the documentation in your projects should be of high quality and make understanding them much easier for your fellow developers.

Article: https://www.skillreactor.io/blog/the-importance-of-code-documentation-and-how-to-do-it-right/

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.

Minimizing Anti-Patterns

The article I chose for this week’s blog post is “What Is an Anti-Pattern?” by Andreas Schöngruber. I chose this article because it discusses what anti-patterns are, gives some examples of anti-patterns, and how to avoid them. I chose this article for this week’s blog because it fits well with the syllabus topic of anti-patterns. I found the section about recognizing and avoiding anti-patterns to be very helpful. This blog post will focus on the aforementioned section about recognizing and avoiding anti-patterns in your code.

I will first discuss the section of the article which discusses recognizing anti-patterns in your code. One method that the article mentions involves keeping an open mind and looking to others for feedback. “When identifying anti-patterns in our code or design, we must keep an open mind and question our assumptions. Sometimes, we may become attached to a solution that required a lot of time and effort, but there might be a better solution out there. To avoid this, it is helpful to seek feedback from others.” As this quote states, it is imperative not to be too attached to a process that you developed or are developing, especially if the time it takes to develop or implement this feature is more than the value that the feature is worth. Another important part of being able to mitigate the harm done by anti-patterns is knowing how to avoid them.

While being able to identify anti-patterns is incredibly important, it is also important to know how to prevent their occurrence in the first place. One method of doing so is stepping back and looking at the greater picture of the project you are working on. “When working on software projects, it is essential to be aware of common pitfalls that can lead to anti-patterns. One strategy to avoid these pitfalls is to take a step back and consider the larger context of the problem. Understanding the problem in its entirety will help in coming up with a good solution.” Instead of being hyper-focused on one part of a larger project, which could lead to anti-patterns arising in your code, it’s best to take a step back occasionally to make sure what you are working on is not falling into the traps of anti-patterns. Another method of preventing anti-patterns is a divide-and-conquer approach to developing features. “Another strategy is to break down large problems into smaller pieces. Doing this can help avoid getting overwhelmed and make it easier to spot issues and inefficiencies.” Using this strategy can be very beneficial because it allows you to see anti-patterns as they appear in your code.

Article: https://www.baeldung.com/cs/anti-patterns

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.

CS343 – Week 4

Through the duration of this class, we have worked a lot with different types of classes, abstract and concrete, as well as interfaces to better design the code. It was not a new concept to me, as I had learned some of the basics of these in prior classes. However, I never truly understood the benefits of each and the right time to incorporate them. The articles I found give a simple explanation of the distinctions between concrete classes, abstract classes, and interfaces, as well as the correct situations to use them in.

A concrete class, sometimes just referred to as a class, is used to specify any entity, and it can also work as a blueprint for all entities with the same attributes. Abstract classes and interfaces are similar, and distinguishing the two was often confusing for me at first. An abstract class can have methods implemented inside its body, but its abstract methods have no body in that class. For example, the duck program I was working with for Homework 2 has the abstract class Duck and the abstract method display(). The body of the method is not present in the abstract class because the different concrete classes extending Duck use display() to show different messages. An interface only lists the method names for classes to implement but does not have any method bodies. This is what partially distinguishes abstract classes from interfaces: abstract classes can have concrete methods defined in them, while interfaces can only have the method signatures defined. Interfaces are handy when you only need to enforce a contract; whoever implements the interface must provide an implementation of all its methods. Abstract classes are helpful when you only need partial implementation, leaving subclasses responsible for implementing the remaining methods.
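
Here is a condensed sketch of how I think about it, modeled on the duck example (the Quackable interface is my own addition for contrast, not part of the homework):

```java
// The interface is pure contract: names only, no bodies.
interface Quackable {
    void quack();
}

// The abstract class mixes concrete and abstract methods.
abstract class Duck implements Quackable {
    // concrete method: shared by every duck, defined once here
    public void swim() {
        System.out.println("All ducks can swim");
    }

    // abstract method: no body, because each duck displays differently
    public abstract void display();
}

// The concrete class fills in everything that was left abstract.
class MallardDuck extends Duck {
    @Override
    public void display() {
        System.out.println("I'm a Mallard duck");
    }

    @Override
    public void quack() {
        System.out.println("Quack!");
    }
}
```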

The phrase “program to an interface, not an implementation” is new to me and I am starting to make more sense of it. Concrete classes are the actual implementation, interfaces are the contract, and abstract classes are a trade-off between the two. Using these three structures well helps with maintainability, extensibility, and testability. When programming to an interface, you are forced to split the program into subsystems, with each subsystem responsible for certain tasks and able to communicate with the others to complete the whole task. The key to making the program more flexible is focusing the design on what the code is doing rather than how it does it.
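
A tiny, hypothetical example of what that phrase means in practice:

```java
import java.util.ArrayList;
import java.util.List;

public class InterfaceDemo {
    public static void main(String[] args) {
        // Declared against the interface (what it does), constructed from a
        // concrete class (how it does it); callers never depend on ArrayList.
        List<String> tasks = new ArrayList<>();
        tasks.add("write blog post");
        // Swapping in new LinkedList<>() later changes nothing for callers.
        System.out.println(tasks);
    }
}
```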

I decided to pick this topic for my blog because I had always had a good idea of how to physically write programs to function correctly. However, something that I never focused on before was the structure of the code which can help when needing to develop the program further after initial creation. Most of our time spent in the class so far has been focusing on this concept and it is proving to be very important the more a program continues to develop.

Program to Interface, Not Implementation – Beginner’s Tutorial for Understanding Interface, Abstract Class and Concrete Class – CodeProject

Programming to an Interface – DZone

From the blog CS@Worcester – Jason Lee Computer Science Blog by jlee3811 and used with permission of the author. All other rights reserved by the author.

SW Design Strategy – Interfaces vs. Abstract Classes

An age-old discussion in the computer science and Object-Oriented Programming world is whether and when to implement interfaces or inherit through abstract classes. In these first few weeks of CS-343 we’ve been working on several activities discussing some of the strengths, weaknesses, and differences between interfaces and abstract classes. In the past I’ve worked on some modest-sized projects which include both and can think of a few great examples of using each, but I found that I still struggled to understand some of the basic conceptual differences. So while I had a solid grasp on some effective use cases, I didn’t have a very clear idea of how to choose one or the other in the design stages, when much less may be known about the project and how it may take shape later on.

Interfaces or Abstract Classes is a blog post from 2017 that I came across through web searching which explains some of the key conceptual and strategic differences between abstract classes and interfaces. Author Suhas Chatekar begins by discussing some of the most common responses he has received when asking this question in interviews. Abstract classes are typically preferred when changes or additions are suspected to be needed later on. Interfaces are considered best when there are likely to be many different definitions for the same inherited methods, or as a substitute for multiple inheritance in languages which do not support it (like Java/C#).

Often it’s difficult to verbalize these differences, but this pretty well summarized my understanding. However, these philosophies focus on using interfaces to get around a syntax/language obstacle rather than as a best-case tool, and they are what Chatekar dubs “futuristic”: they rely on a programmer knowing at the beginning how the program is going to turn out longer term, which is simply unrealistic in a large-scale project. Instead, he suggests an approach of considering interfaces as establishing a “can-do” relationship versus abstract classes creating an “is-a” relationship.

In the past and in CS-343, I’ve heard these terms thrown around, but this post helped me better understand the value of this approach and line of thinking for project planning. Project components and requirements commonly shift over the course of a project as unexpected needs are identified and addressed, which cannot always be planned for, so a futuristic interface-versus-abstract decision process seems likely to fail, or at least to be significantly less effective than a simpler approach focused on anticipated “is-a” and “can-do” relationships. One of my first and favorite interface/inheritance example projects simulated a chess game in Java with a ChessPiece abstract class as well as a PieceAction interface; regardless of later complications, each piece “is-a” ChessPiece and “can-do” all PieceActions. This approach helps plan for future project events and needs in a more present state of mind, especially in long-term projects that may include both.
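
A bare-bones sketch of how that structure might look (method names are simplified for illustration, not the actual project code):

```java
// The "can-do" contract: things every piece can do.
interface PieceAction {
    boolean moveTo(int row, int col);
}

// The "is-a" relationship: shared state and behavior for all pieces.
abstract class ChessPiece implements PieceAction {
    protected int row, col;

    protected ChessPiece(int row, int col) {
        this.row = row;
        this.col = col;
    }

    public String position() {
        return "(" + row + ", " + col + ")";
    }
}

class Rook extends ChessPiece {
    Rook(int row, int col) {
        super(row, col);
    }

    @Override
    public boolean moveTo(int newRow, int newCol) {
        // rooks move along a single rank or file
        if (newRow == row || newCol == col) {
            row = newRow;
            col = newCol;
            return true;
        }
        return false;
    }
}
```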

Source:

Interfaces or Abstract Classes?. I have been asking this question in… | by Suhas Chatekar | Medium

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

The Art of Crafting Web Systems: Unveiling the Front End, Back End, and Data Persistence Layer

In the exciting realm of web development, the successful implementation of web systems hinges on understanding the intricate dance between three core components: the front end, the back end, and the data persistence layer. This blog post aims to dissect these vital components, shedding light on how they collaborate harmoniously to bring web applications to life.

The Front-End Canvas

Imagine the front end as the artist’s canvas, the visual facade of your web application that users interact with directly. It encompasses everything you see and interact with on a website—buttons, forms, navigation menus, and the overall layout. The front end relies on a trio of essential technologies:

  1. HTML (Hypertext Markup Language): HTML acts as the foundation, structuring web pages by defining content and layout.
  2. CSS (Cascading Style Sheets): CSS is the stylistic wizard responsible for the visual appeal, ensuring that the web elements harmonize seamlessly.
  3. JavaScript: JavaScript adds interactivity to the canvas, bringing it to life by creating dynamic elements, managing user input, and facilitating communication with the back end.

The Back-End Engine

Behind the scenes, the back end serves as the powerhouse of your web application. It handles user requests, processes data, interacts with databases, and sends responses. Several technologies and frameworks excel in back-end development:

  1. Node.js: Node.js enables JavaScript to run on the server side, making full-stack development a breeze.
  2. Ruby on Rails: Rails offers a structured framework that simplifies the creation of robust web applications with concise code.

The Data Persistence Layer

No web system is complete without a reliable data persistence layer. This layer manages data storage, retrieval, and efficient data management. It’s home to various types of databases, including:

  1. MySQL: MySQL, a popular relational database, is known for its reliability and is widely used in web applications.

The Synchronized Symphony

These three layers—front end, back end, and data persistence—complement each other to deliver a seamless user experience. When a user interacts with the front end, JavaScript often sends HTTP requests to the back end. The back end processes these requests, interacts with the chosen database to retrieve or store data, and then sends a response back to the front end. JavaScript on the front end dynamically updates the user interface based on this response, creating a responsive and interactive web application.
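
As a hedged illustration of that request/response loop (the back-end examples above are Node.js and Rails; this sketch uses Java’s built-in HTTP server simply to show the same flow):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class TinyBackEnd {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/api/greeting", exchange -> {
            // 1. the front end's JavaScript sends an HTTP request here
            byte[] body = "{\"message\": \"Hello from the back end\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            // 2. the back end (after consulting the database, in a real app)
            //    sends a response back
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
            // 3. the front end updates the page from this JSON response
        });
        server.start();
    }
}
```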

In Closing

Understanding the synergy between the front end, back end, and data persistence layer is fundamental to web development. These layers work in harmony, much like the instruments in an orchestra, to create the symphony of a fully functional web system. As you embark on your web development journey, remember that mastery of these layers takes time, practice, and a touch of artistry. So, keep coding, keep building, and explore the boundless possibilities of web system implementation.

From the blog CS-343 – Hieu Tran Blog by Trung Hiếu and used with permission of the author. All other rights reserved by the author.