
Big Companies and Flaws

In software testing and quality assurance, it is very important that issues be ironed out before a product is released. A lot of companies pour a ton of resources into this, and inevitably things still slip through the cracks. One of the issues developers deal with most after launch is security. A lot of people assume security issues are exclusive to smaller companies, reasoning that “the bigger the company, the better the security”; however, this has been proven wrong time and time again. Throughout 2018, a lot of big Windows 10 updates were released, and a lot of these updates actually ended up breaking things. Let’s go over some of the issues and look at what a little more software testing and quality assurance could have avoided.

Microsoft had been planning to release a massive Windows 10 update in April that added a ton of new features (including security improvements) to its flagship operating system. However, a very bad bug was discovered that caused Windows 10 to spam the blue screen of death. Microsoft could not release a big update with an issue like this, because it would have left the operating system even more unstable than it already was. After the issue was fixed, Microsoft was finally ready to ship the update after a long delay. However, once the update shipped, there were over 600 million reports of Google Chrome freezing and crashing after the update.

I think the reason things like this happen is rushed deadlines. When scheduling updates, there is a list of prioritized tasks that need to be finished in order to meet a deadline, and in that rushed period bugs and glitches are bound to be overlooked amid the stress of development. In this case, Microsoft actually had to take the update offline and roll it back because users’ files were being deleted. From this, we learn that time management is important, but so is making sure rushed development doesn’t end up degrading the quality of the product for end users.

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Live Monitoring and Testing

This article from softwaretestingmagazine.com talks about how testing and monitoring live, active services is a key element of software quality assurance. After deployment, making sure all of the bells and whistles of a service are up to date is very important. It matters not only on the programmer’s end, but also on the client side, because the experience should be smooth for both of you. Without proper testing of a product or service, it is impossible to correctly gauge how it will perform, which is why both pre-launch and ongoing post-launch testing are a must, especially today. The article then goes into several online services that monitor the performance and uptime of a deployment. Let’s go into some of these now.

A very important aspect of tracking a service is recording its uptime. A service called “StatusCake” does just this. StatusCake is a paid monitoring service that can check page speeds at extremely high rates, and they claim to have a very large infrastructure for monitoring big servers. Another nice thing about StatusCake is that it can set reminders about domain renewals, handle SSL monitoring, and much more. Although it may seem like monitoring the uptime of your service wouldn’t matter much, it is actually crucial in many ways. One thing I learned from this article is that by monitoring and logging your uptime, you can determine where issues lie when something does go wrong. Something such as a service outage or service lag can easily be tracked and tested if you have tools available to help you track it.

Tracking these issues in a system can be tricky, but there is another testing tool that can help us do exactly this: Uptrends. Uptrends is a monitoring tool that notifies you, and double-checks, when something is wrong with your service. One of the harder things is pinpointing exactly when or where an error in a service occurs. The interesting thing about Uptrends is that it gives you detailed reports and statistics on these errors and also sends out email alerts when something goes wrong. This is another very important aspect of software quality assurance and testing: when something goes wrong, you need information about the failure as fast as possible. With services like this, you can receive notifications as soon as a fault happens and act on the issue accordingly.
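
To make the idea concrete, here is a minimal sketch of the kind of check these services automate, written in TypeScript for Node 18+ (which provides a global fetch). The target URL, polling interval, and alert hook are all illustrative stand-ins, not part of StatusCake’s or Uptrends’ actual APIs.

```typescript
// Minimal uptime-check sketch; TARGET, INTERVAL_MS, and alertTeam
// are illustrative, not a real monitoring service's interface.
const TARGET = 'https://example.com/health';
const INTERVAL_MS = 60_000; // poll once a minute

async function checkOnce(): Promise<void> {
  const started = Date.now();
  try {
    const res = await fetch(TARGET);
    const elapsed = Date.now() - started;
    if (!res.ok) {
      alertTeam(`DOWN: ${TARGET} returned HTTP ${res.status}`);
    } else if (elapsed > 3000) {
      alertTeam(`SLOW: ${TARGET} took ${elapsed} ms`);
    }
  } catch (err) {
    // A thrown fetch means the service was unreachable entirely.
    alertTeam(`UNREACHABLE: ${TARGET} (${String(err)})`);
  }
}

// Stand-in for the email/SMS alerting a real monitor would provide.
function alertTeam(message: string): void {
  console.error(`[${new Date().toISOString()}] ${message}`);
}

setInterval(checkOnce, INTERVAL_MS);
```

A real service adds what this sketch lacks: checks from many regions, retries to rule out false alarms, and historical dashboards.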

Many services are available to help developers and clients with software testing and quality assurance. Whatever your needs, it is very important to keep a close eye on operations after a service is launched or completed, especially if it is being upgraded or modified in any way.

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Round Earth Test Strategy

This article is very interesting in that it offers a new perspective: a testing scheme that puts the front-end user perspective first. It starts off by explaining the usual pyramid testing scheme, in which the user perspective and UI sit at the tip. Typically, a Test Automation Pyramid has unit tests at the bottom (the long base), service tests in the middle (integration, component, and API tests), and finally, at the top, the user interface and ideally what the user sees. This article runs contrary to all of those testing pyramids: by its account, the top of the pyramid is just as, if not MORE, important than the lower levels. It argues the pyramid should actually be flipped upside down, giving the user perspective greater weight; you would still have your unit and integration tests at the bottom and middle, they just wouldn’t take up as much room.

The article quotes the usual justification: “Just as a triangle has more area in its lower part than its upper part, so you should make more automated tests on lower levels than higher levels.” It then rebuts it: “This is not an argument; this is not reasoning. Nothing in the nature of a triangle tells us how it relates to technology problems. It’s simply a shape that matches an assertion that the authors wanted to make. It’s semiotics with weak semantics.” Pretty much, the article is saying that the shape of the triangle these schemes are based on doesn’t really carry much weight when applied to technological problems.
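
For context, here is a tiny sketch of what tests at two of the pyramid’s layers might look like, using plain Node assertions; the cart function is hypothetical, and the top-layer test is only outlined, since it would need a browser-automation tool.

```typescript
// Bottom of the pyramid: a unit test exercises one function in
// isolation. The cart logic here is a hypothetical example.
import assert from 'node:assert';

function addToCart(cart: string[], item: string): string[] {
  return [...cart, item];
}

assert.deepStrictEqual(addToCart([], 'book'), ['book']);
assert.deepStrictEqual(addToCart(['pen'], 'book'), ['pen', 'book']);
console.log('unit tests passed');

// Top of the pyramid: a user-perspective test would instead drive
// the real UI (e.g., via a browser-automation tool), asking "can a
// user actually put a book in the cart?" Such tests are slower and
// harder to automate, but closest to what the user experiences.
```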

My reaction to this article is that I agree with what it describes. Like the article, I think that in a project, each layer can often be a lot more complex than the layers below it, and that complexity can in turn carry higher risk. The model the author is talking about is the Round Earth model, which says you should think of technology as concentric spheres, where each successive layer can grow dramatically. This article opened my eyes to how certain models don’t really justify what they claim to stand for.

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

The Future of Performance Analytics

In the future, data analytics is going to see much heavier investment due to the sheer amount of information certain companies will need to collect and maintain. This issue not only needs to be solved, its problems also need to be addressed up front before progress can be made, and that is what is hindering it.

This article from centraljersey.com talks about the rapid speeds needed to meet deadlines for “high demand” analytical solutions. It goes into how certain markets are investing in analytical technologies in order to predict the future and thus market services optimally. However, the article states that three main factors are greatly hindering this push: security, privacy, and error-prone databases. Not only do these kinds of methods take time, they also need to be secure, both to protect mass amounts of data and to operate as efficiently as possible.

Upon reading this article, what interested me is that North America accounts for the largest market share due to the growing number of “players” in the region. Per the article, a lot of this investment is going into cloud-based solutions. What I found interesting, however, is that this company (Market Research Future) provides research to its clients. They have many dedicated teams devoted to specific fields, which is why they can craft their research very carefully. What I find useful about this posting is that it shows just how important the future of data analytics and organization can be. With the future of data collection, there will need to be more optimized solutions to handle and control these types of research data.

The content of this posting confirms my belief that cloud computing and cloud-based data analysis will continue to grow and evolve rapidly over the coming years. With more and more companies migrating to cloud-based systems, not only for internal needs but for client needs as well, we will see a great push toward optimized data sorting and faster data transfer. Expansion in cloud computing and web-based services will become the staple of future products such as this.

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Angular and The Future of Web Apps

Currently in class we are learning about the tools and functionality of Angular. Angular is a JavaScript-based, open-source web application framework, currently maintained by Google along with a community of developers. Being open source attracts many users and continually increases the framework’s popularity, and recently we have seen just that. Developers are choosing Angular because of its great framework and because it enables Progressive Web Applications. Progressive Web Apps (PWAs) is a term that is being universally adopted by developers across the world. It is a way of creating the best web and mobile apps by taking advantage of the most recent technologies in order to make web apps more efficient and fast. Google has been leading the initiative for Progressive Web Apps, putting its design philosophy behind them and distributing public data and toolkits to help people get started on their own web apps. I chose this article because I think it is a great example of how Angular is a great framework to work with: it is forward-thinking with respect to Progressive Web Applications. It will also help further my understanding of Angular and let me see the greater benefits of the framework.

Developers want their apps to be more efficient, and they also want them to be scalable. This goes for both desktop and mobile apps. By creating web apps that are user friendly and provide the scalability that allows multiple types of users to interact with them, you can create a very successful platform, because it gives users a reason to keep using your app. Angular is great because it is an open-source framework with a ton of support behind it.

Angular has gone through many great changes that improve its functionality, sustainability, and reliability. These are also the main pillars of a Progressive Web App, which is why a lot of developers tend to like it. The first version of AngularJS was released in 2012. Until then, no one had really seen a web app infrastructure that was this reliable and easy to understand; AngularJS reduced boilerplate and greatly improved code testability. Angular then made a great leap in 2014 with the team’s announcement of Angular 2. Angular 2 was the newest version of Angular, written in TypeScript, Microsoft’s superset of JavaScript. As you can imagine, these are two very popular and approachable languages. Angular 2 was also focused on being more compact and extremely fast.
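
To give a feel for that TypeScript-based style, here is a minimal component sketch; it assumes an Angular CLI project with @angular/core installed, and the selector and class names are illustrative.

```typescript
import { Component } from '@angular/core';

// A minimal Angular component: the decorator binds a template to a
// plain TypeScript class, and Angular re-renders when state changes.
@Component({
  selector: 'app-greeting',
  template: `
    <h1>Hello, {{ name }}!</h1>
    <button (click)="rename()">Rename</button>
  `,
})
export class GreetingComponent {
  name = 'world';

  // Event handlers are ordinary class methods; clicking the button
  // updates the bound property and the view follows automatically.
  rename(): void {
    this.name = 'Angular';
  }
}
```

Much of the boilerplate reduction the framework is praised for comes from exactly this: bindings like {{ name }} and (click) replace manual DOM wiring.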

As we can see, Angular 2 is becoming one of the fastest-growing environments for web app development among developers. This affects me personally because Angular is a great tool to utilize when designing and planning out web applications. I also learned that Progressive Web Applications are quickly evolving and that Angular 2’s infrastructure is a great resource to consider. In the future, I hope to expand my experience with Angular and apply that knowledge toward the development of reliable and easy-to-use web apps.

Source: https://jaxenter.com/angular-progressive-web-apps-2018-139076.html

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Progressive Web Apps

When you visit a website, web apps are mainly what you will use. Believe it or not, we utilize web apps almost every day; from online wikis to video-hosting websites, these are all included in the wide world of web apps. Today, I want to discuss Google’s developer program and its developer tools for progressive web apps. But what are progressive web apps? Progressive web apps are applications that are reliable, fast, and engaging, according to Google’s developer page. These are interesting qualities because they carry over to other aspects of computer science, whether it is programming or deciding which algorithm is best for a certain scenario. These three key factors can shape our understanding and visualization of future projects we may want to work on, which is why I chose this article. It details each important aspect of the user experience and describes why these aspects need to be present.

First, let’s start off with reliability. By Google’s definition of reliable, “When launched from the user’s home screen, service workers enable a Progressive Web App to load instantly, regardless of the network state.”

This is a great point, because you wouldn’t want your web apps to load slowly. A slow-loading web app degrades the user’s experience, which is exactly what we aim to enhance. Determining ways to make items load faster can be a great challenge in itself. The article explains that pre-caching key resources can increase stability and make the user’s experience reliably fast, because it eliminates the app’s dependence on the network. An example of this is a service worker, written in JavaScript, that acts as a client-side proxy.
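
Here is a minimal sketch of that idea, written as TypeScript that would be compiled to the service-worker script (it assumes the TypeScript “webworker” lib and that the page registers the worker via navigator.serviceWorker.register); the cache name and asset list are illustrative.

```typescript
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;

const CACHE = 'app-shell-v1'; // illustrative cache name
const ASSETS = ['/', '/index.html', '/styles.css', '/app.js'];

// Pre-cache the key resources at install time so the app shell can
// load instantly, regardless of network state.
self.addEventListener('install', (event) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

// Answer fetches from the cache first, falling back to the network:
// this is the "client-side proxy" role mentioned above.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```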

Google’s statistics mention that approximately 53% of users will abandon a website if it takes longer than 3 seconds to load. This data is interesting because it shows how far loading and caching algorithms and optimization have come. It can also have a big impact on monetized web pages: if a page doesn’t load fast enough, the user may leave, resulting in potential profit loss.

The final key point is engagement. An example of this is the push notifications you receive on a smartphone. Whenever the web app wants to notify you of a change or a message, depending on what the web app is, it sends a notification to the home screen of your phone, which lessens the burden of opening the app itself. Small quality-of-life enhancements such as push notifications can really immerse a user in your product, and with a progressive web app, that is our main goal. Knowing these main design principles of web apps really helped me understand why and how we can further enhance user experience. Most of the time when we are developing something, it will be used by others, whether in internal or client-facing operation, and reliability, speed, and engagement are all key aspects of creating a great web app.

Source: https://developers.google.com/web/progressive-web-apps/

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Web Apps vs. Native Apps

With our discussions in class about TypeScript and JavaScript for use with web apps, I believe it is important to discuss the difference between web apps and native apps and how our knowledge of them can help us decide which one is preferable. I chose this article by Lifewire because it provides a great comparison and contrast of web apps and native apps.

There has been an ongoing debate over which type of app is better: web apps or native apps. Firstly, I think it is important to distinguish the two. Typically, a native app is an app that runs locally on a device; these apps are usually downloaded and installed on the device. For example, the camera app on an Android phone or Microsoft Word on a desktop computer. While native apps live locally on a device, web apps are apps that are not installed locally on a device.

Let’s take, for instance, a locally installed app. That app can access almost all of the device’s features (if permissions are granted). Snapchat, for example, is an instant messaging app that uses a smart device’s internal camera; that is a native app using another native app. A web app only has access to a limited number of the device’s native features. This may seem like a bad thing; however, there are greater benefits to web apps than you may think.

The great thing about native apps is that since they run on software designed for a particular device, they can be greatly optimized and catered toward that device, thus enhancing the user’s experience. At the same time, this means that whenever a native app needs to be updated, the device needs to keep downloading updates and bug fixes. With a web app, all updates are handled on the back end, so no local changes or downloads need to be made.

Both web apps and native apps are used every day, and arguably, they can be used hand in hand: web apps can be developed for native apps and native apps can be developed for web apps. PayPal has a web app in the browser, but they also have a native app that can be downloaded and updated. I think as technology progresses, we will see more web apps, as cloud computing seems to be the future. Knowing this, I personally think that web apps will continue to evolve, because they require no user action to update and they do not need to be designed for a specific system, therefore making them easier to maintain and available to a much wider range of devices.

Source: https://www.lifewire.com/native-apps-vs-web-apps-2373133

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Practical Application of the Adapter Pattern

This week I chose to write about the adapter pattern, because it is a great tool to have when working with incompatible interfaces and will help me in the future by building more of an understanding of different types of frameworks. Since we are learning about three different design patterns, I figured the adapter would be an interesting topic because it can be compared to the porting of programs or video games to different types of hardware and interfaces. Let’s first start off by talking about what the adapter pattern is. The adapter pattern allows two incompatible interfaces to work together or use attributes of one another. One example is a memory card reader in a laptop: the memory card is plugged into the card reader, which translates its data so that the laptop can interpret it. This article talks about how video games can be adapted, or ported, to different platforms.
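
To make the pattern concrete, here is a minimal TypeScript sketch of the memory-card example above; all of the interface and class names are illustrative.

```typescript
// Target interface: what the client (the laptop) expects to use.
interface LaptopStorage {
  readData(): string;
}

// Adaptee: a memory card whose interface the laptop can't use directly.
class MemoryCard {
  readRawBlocks(): Uint8Array {
    return new TextEncoder().encode('photo data');
  }
}

// Adapter: wraps the incompatible interface and exposes the expected one.
class CardReaderAdapter implements LaptopStorage {
  constructor(private card: MemoryCard) {}

  readData(): string {
    // Translate raw blocks into the form the laptop understands.
    return new TextDecoder().decode(this.card.readRawBlocks());
  }
}

// The client only ever sees LaptopStorage, never MemoryCard itself.
const storage: LaptopStorage = new CardReaderAdapter(new MemoryCard());
console.log(storage.readData()); // "photo data"
```

This is the same shape a porting layer takes: the game keeps calling one interface, and a platform-specific adapter translates those calls for each target.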

This article explains the difficulties of porting or adapting a video game to other platforms because of the middleware and libraries that need to be used. For example, let’s say a game is built for PC, but down the road the developers would like to adapt it to work on mobile phones or tablets. One struggle is that games are often built with closed-source tools. This means the game would need to be rebuilt from the ground up, because the libraries and tools needed are locked. Some game development tools are aware of this and actually include ways to directly adapt the libraries from one platform to another. Some optimization by the developer is still necessary, but it means these games and programs can run across multiple platforms. The reason I believe developers want to make their products available across multiple platforms is to maximize profits by reaching more groups of potential users.

Performance is also an issue; sometimes it is more than code that can hinder the game. The actual hardware can greatly impact an adaptation of a game that was built for another hardware architecture. The article says, “The number one showstopper in game porting is the usage of closed-source tools, engines or libraries. Game developers should be aware of the technical decisions they are making, and how they will later affect portability of their game.” This can create great issues and slow down development time, especially if a deadline needs to be met.

One more important takeaway from the article is language choice. From the article: “Is it going to properly compile in all platforms? Will it perform well? Beware of ‘too new’ languages with super features (for example C++11) that might not be completely supported in all platforms (as we have painfully realized).” As we can see, there are many factors in the adaptation of different types of software, and if we create our programs with future adaptations in mind, an adapter implementation can be organized in such a way as to make the process easier.

Article: https://www.gamasutra.com/view/news/222363/What_exactly_goes_into_porting_a_video_game_BlitWorks_explains.php

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

PayPal’s Interesting Design Pattern API

PayPal is a massive American online payments company that allows users to transfer funds electronically. PayPal originally started out as a company that developed security software for handheld devices. This article describes the different design guidelines developed over the years of building PayPal’s APIs. All of PayPal’s platform services are connected through RESTful APIs. REST is an architectural style, commonly used for designing networked applications, in which an API uses HTTP requests to GET, PUT, POST, and DELETE data. We have learned that, through design patterns, we can optimize and organize our code in efficient ways that make later implementations of other objects easier. I chose this article because it is another look at the practical implementation of design principles in a large business.
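
As a quick illustration of that verb mapping, here is a hedged TypeScript sketch; the host and resource paths are made up for the example, not PayPal’s actual API.

```typescript
// Illustrative REST client: one resource, four HTTP verbs.
const BASE = 'https://api.example.com'; // hypothetical host

async function demo(): Promise<void> {
  // POST creates a new resource under the collection.
  const created = await fetch(`${BASE}/payments`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ amount: 10, currency: 'USD' }),
  });
  const { id } = await created.json();

  // GET reads the resource back by its URL.
  await fetch(`${BASE}/payments/${id}`);

  // PUT replaces it; DELETE removes it.
  await fetch(`${BASE}/payments/${id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ amount: 12, currency: 'USD' }),
  });
  await fetch(`${BASE}/payments/${id}`, { method: 'DELETE' });
}

demo().catch(console.error);
```

Because every resource is addressed by a URL and manipulated with the same few verbs, clients stay loosely coupled to the service, which ties into the principles discussed next.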

From the article, we can see that the basic principles of PayPal’s design foundation are very similar to what we are currently implementing in our own code. Some of the principles discussed are loose coupling, encapsulation, stability, reusability, contract-based design, consistency, ease of use, and externalizability. Since these APIs are developed with consumer business in mind, each principle is catered to that goal. For loose coupling, the services need to be loosely coupled from each other; this lets components in a network work together without depending heavily on one another. Encapsulation is important in order to group related attributes together and restrict direct access to certain parts of an object’s components. The stability principle matters in that the service needs to remain stable for its consumers. The code should also be reusable so that it can be used across multiple instances and by different consumers and users; this helps team collaboration and organization as well, because if multiple people can understand where an issue is occurring, it can be solved a lot faster. Contract-based functionality needs to be shared using standardized service interfaces, which loops back to the reusability principle and makes accessibility more seamless. Ease of use and consistency work hand in hand: since the service follows a specific set of rules and attributes, it also becomes easy for consumers to work with. The last design principle is externalizability, which requires that the service can save and restore the contents of its instances.

These design principles are interesting because they seem to be a great tool when designing a program, offering fundamental guidelines for optimization. I expect to apply these principles in my future practice by remembering how important it is for code to be organized so that it is accessible to both the consumer and the programmer.

Source: https://www.infoq.com/news/2017/09/paypal-api-guide

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Efficiency, Inside and Out.

Creating efficient programs is very important, in that it allows processes to run faster and, in some cases, with less memory. Program optimization is key to making fast and reusable code. Currently, we are learning about various ways to create efficient implementations of object-specific attributes. In doing so, we need to be careful and consistent with our designs, so that if things need to be modified or added in the future, it can be done with ease. Database querying is a process in which mass amounts of data are sifted through by algorithms to extract or insert specific data. The article I chose mentions that we not only have to be efficient with the database query algorithms, but also with other aspects such as power consumption.

The article explains that cache servers are very expensive because they use “power-hungry” and costly Random Access Memory modules. However, some cache servers are starting to implement and test flash-memory databases. Since flash memory is so cheap and has much higher storage density than RAM, it would seem like an obvious switch. However, along with the low cost comes slower performance than RAM.

I chose the article because working with efficient materials and hardware can lead to innovations and new implementations of design ideas. We discuss design principles so that we can understand the advantages and disadvantages of the better and worse resolutions of an issue. The more important concern, the article notes, is keeping up with the requests flooding the data center. From the article: “The CSAIL researchers’ system, dubbed BlueCache, does that by using the common computer science technique of ‘pipelining.’ Before a flash-based cache server returns the result of the first query to reach it, it can begin executing the next 10,000 queries. The first query might take 200 microseconds to process, but the responses to the succeeding ones will emerge at .02-microsecond intervals.”
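
To illustrate the pipelining idea in miniature, here is a hedged TypeScript sketch; lookupFlash is a stand-in for a flash-backed cache client, not BlueCache’s actual interface.

```typescript
// Stand-in for one flash-backed cache lookup with simulated latency.
async function lookupFlash(key: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 1));
  return `value-for-${key}`;
}

// Serial: each lookup waits for the previous one, so total time is
// roughly the per-request latency times the number of queries.
async function serial(keys: string[]): Promise<string[]> {
  const results: string[] = [];
  for (const key of keys) {
    results.push(await lookupFlash(key));
  }
  return results;
}

// Pipelined: every lookup is in flight at once, so total time stays
// close to the latency of a single lookup, as in the quote above.
async function pipelined(keys: string[]): Promise<string[]> {
  return Promise.all(keys.map(lookupFlash));
}

// Try swapping pipelined for serial below to feel the difference.
const keys = Array.from({ length: 1000 }, (_, i) => `key-${i}`);
pipelined(keys).then((r) => console.log(`fetched ${r.length} values`));
```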

In the future, I expect to apply what I learned by finding ways to further conserve memory and by choosing the most efficient design principles for my projects. This is important because querying mass amounts of data can take very long (and can use a lot of power), and it is important to recognize that certain design principles work better in certain situations. The article also states that “A data center for a major web service such as Google or Facebook might have as many as 1,000 servers dedicated just to caching,” and if you are performing that many operations, you will need efficient modeling and a good understanding of design patterns to handle all of the data correctly. This also made me realize that there are a lot of other factors, such as processing and energy costs, that are considered at certain stages of program modeling.

Source: https://www.sciencedaily.com/releases/2017/08/170831115210.htm

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.