In this blog article we will discuss what to look for, from an Executive’s perspective, to ensure that your Software Development Teams are designing and testing their code for scalability. And this isn’t just about the Code; it is also about the underlying Infrastructure it is deployed on.
During the past 20 years, a great deal of progress has been made in “Virtualizing” our infrastructure environments. Whether you prefer the term “Cloud”, “Containers”, or something else, the underlying idea is similar: a Virtual System that runs on a pool of physical devices, to which you can easily allocate additional memory, CPUs, and disk space. This provides significant benefits over older models, which required Servers and Systems to run on physical equipment, typically in an on-site Data Center or perhaps a Colocation (Co-Lo) Data Center. It also allows you to quickly clone an environment and recreate it.
What is interesting to me is that in many ways the “Cloud” is returning us to the original days of Information Technology, where companies would rent time (memory, CPU cycles, and storage) from a provider that had purchased a large Mainframe computer. That way, the Mainframe was shared simultaneously by multiple different companies and organizations.
As a former Chief Information Officer, unless there were a very specific Business or Security reason, today I would not build out or host my own Data Center. It is very expensive to do so, in terms of physical equipment, facility space, electricity, air conditioning, redundancy, fire suppression, monitoring systems, staff, etc.
VMware was the original market leader in Virtualization technology, although today there are many “Cloud” options, with the most notable from Amazon (AWS), Microsoft (Azure), and IBM (IBM Cloud). But there are hundreds of other options available, so you can easily pick a Partner based on price, quality of service, the types of service you need, etc.
Here are just a few of the many benefits of moving to the “Cloud” and using Virtualized Environments: you can add memory, CPUs, and disk space on demand; you can quickly clone and recreate environments; and you avoid the cost of building and staffing a physical Data Center.
All of these things will save you money. Again, the major reason not to do this is if you have very specific security concerns or issues, where you want much more control over the physical security of your systems and applications.
Multi-Threading is a fancy technical term for simply saying that we will allow an Application to do multiple things at the same time, instead of doing them in sequential order. This normally leads to faster processing of transactions within an Application, because you are able to spread them across all of the computer’s resources. Another benefit is that if one “thread” of execution is delayed for any reason, it does not automatically hold up the others. Especially when designing large-scale Enterprise Applications, you want to make sure your Software Development Team is using a Programming Language and runtime with strong Multi-Threading support.
While a Single-Threaded Programming Language is often much easier to work with from a Developer’s perspective, it can and will cause challenges if you need to scale the Application: you simply won’t get the speed, and you can run into bottlenecks in the application’s processing.
If you’re building a simple Cafeteria Ordering application, this isn’t a big deal. But if you are building a custom Enterprise Resource Planning application or a large-scale Social Media Platform, it is extremely important, and will make or break your system.
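To make the idea concrete, here is a minimal sketch in Python (the names, such as `process_transaction`, are hypothetical). It spreads simulated I/O-bound transactions across a thread pool so their waiting time overlaps. One caveat worth knowing: in Python specifically, threads help most with I/O-bound work such as database or network calls, while some other languages can also parallelize CPU-bound work this way.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_transaction(tx_id):
    # Simulate an I/O-bound step, such as a database or API call
    time.sleep(0.1)
    return f"tx-{tx_id} done"

def run_threaded(tx_ids, workers=10):
    # The threads overlap their waiting time, so ten 0.1-second
    # transactions complete in roughly 0.1s instead of roughly 1.0s
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_transaction, tx_ids))

if __name__ == "__main__":
    start = time.perf_counter()
    results = run_threaded(range(10))
    print(f"{len(results)} transactions in {time.perf_counter() - start:.2f}s")
```

If one transaction stalls, the other threads in the pool keep working, which is exactly the resilience benefit described above.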
Anytime you break something down into small components, it becomes easier to both Design and Test successfully. For example, a Web Service with 100 lines of code in it will be a whole lot easier to Design, Build, and Test than a Function with 10,000 lines of code.
So, you want to break things down into small components. This also makes it easier to Scale your application. If you are calling a Function with 10,000 lines of code in it, it has to work through most of that code before it can finish, while a simple Microservice with only 100 lines can often finish in a small fraction of the time.
The other benefit of doing this is that you can create more reusability in your code base. You might have the same function or process being used in multiple places within your application, so it is possible to create a single Microservice once and use it many times.
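The reuse idea can be sketched with ordinary functions (the order-total example below is hypothetical). A Microservice applies the same principle, exposing a small, single-purpose component over the network so many applications can call it.

```python
# A small, single-purpose component: compute an order total.
# Written once, it can be reused anywhere a total is needed.
def calculate_order_total(line_items, tax_rate=0.0):
    """line_items: iterable of (unit_price, quantity) pairs."""
    subtotal = sum(price * qty for price, qty in line_items)
    return round(subtotal * (1 + tax_rate), 2)

# Two different parts of the application reuse the same component:
def checkout_total(cart):
    return calculate_order_total(cart, tax_rate=0.08)

def invoice_total(invoice_lines):
    return calculate_order_total(invoice_lines)

print(checkout_total([(10.00, 2), (5.00, 1)]))  # 27.0
```

Because the component is tiny and has one job, it is easy to Design, Test, and Scale independently of everything that calls it.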
There are a number of Cloud-based monitoring tools available on the market, and each of them has different capabilities, functions, and ease of use or setup.
Most of them will require a Quality Assurance Engineer to implement and configure the Synthetic Tests that will run against either your STAGE or Production Environment. You can use these tools for multiple purposes.
Whichever tool you use, they all work in basically the same way.
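As a rough illustration of what such a tool does under the hood, here is a hypothetical, minimal synthetic check in Python; real products add scheduling, dashboards, and alerting on top of this basic probe.

```python
import time
import urllib.request

def synthetic_check(url, timeout=5.0, max_latency=2.0):
    """Fetch a URL the way a monitoring probe would, and report the result."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = (resp.status == 200)
    except Exception:
        ok = False  # connection errors and timeouts count as failures
    latency = time.perf_counter() - start
    return {"ok": ok, "latency_s": round(latency, 3), "slow": latency > max_latency}

# Example: a monitor would run something like this on a schedule
# result = synthetic_check("https://stage.example.com/health")
```

Run on a schedule against STAGE or Production, a check like this builds up the latency-over-time data that makes slow degradation visible.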
One of the great things about running a Synthetic Testing tool against your application is finding potential Memory Leaks within the application itself.
A Memory Leak occurs when a portion of the application runs and requires a certain amount of memory to be allocated to it, but when it finishes processing, for some reason it does not return all (or some) of that allocated memory to the system. Over time the application holds more and more allocated memory without giving it back when it should. If this continues long enough, the whole system can crash.
Normally these problems are difficult to detect and even harder to fix, because in many cases only a very small amount of memory leaks each time. A team might not notice it for days or even weeks.
One time I was working on a contract project with P.F. Chang’s to implement an Invoice Image Processing system for their Accounts Payable Department. After many months of configuration and testing, everything seemed to be working as expected. However, I wanted to run a stress test before we actually rolled it out to all of the restaurants. So, we started with a small load of Invoice images and increased it across multiple tests, until we reached what we expected our real load would be, plus room for growth.
In our Expected Load Test, the system crashed. We ran it again; another crash. Then we started analyzing the data, log files, server logs, etc., and figured out that over nearly a day of running our test, the memory allocated to the Application kept creeping up ever so slightly. Until BOOM! Nothing worked.
Fortunately, we were able to provide the vendor with our test data, test information, and logs; they were able to reproduce our results, determine where the problem was, and fix it. But this kind of problem is not uncommon.
Orphaned Code occurs over time when a programming function or method was created at one point to do something, but was later replaced and never completely removed from the code. What happens is that this procedure or method may still be called, but it does nothing useful. This is why we refer to it as an “Orphan”: this part of the code no longer does anything or has a home.
However, what it does do is take up processing cycles and slow down the overall application. By running an Application Performance Monitoring tool, we can start to determine where these older, orphaned pieces of code are and remove them.
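As a toy illustration of how a tool can spot code that is never called, here is a small static check using Python’s ast module; real APM and code-coverage tools are far more thorough (this sketch misses method calls, recursion, and dynamic dispatch).

```python
import ast

SOURCE = '''
def active(x):
    return x * 2

def orphan(x):  # replaced long ago, but never deleted
    return x + 1

print(active(21))
'''

tree = ast.parse(SOURCE)
defined = {node.name for node in ast.walk(tree)
           if isinstance(node, ast.FunctionDef)}
called = {node.func.id for node in ast.walk(tree)
          if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}
print(defined - called)  # {'orphan'}
```

A function that is defined but never called anywhere is a strong candidate for removal, which is the same signal an APM tool gives you from live traffic rather than from source code.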
In this blog article we have discussed the benefits of, and reasons why, you want your Software Development Team to look at the following areas: Virtualized “Cloud” Infrastructure, Multi-Threading, breaking code into small Microservices, Synthetic Testing and Monitoring tools, Memory Leaks, and Orphaned Code.
We hope that you have enjoyed this article.
Thank you, David Annis.