This blog post accompanies the presentation “Calibrate your Mobile Performance Testing”, which was presented at various testing events: Freetest 2014, Test Istanbul 2014 and the Romanian Testing Days.
Today’s world is fast and demanding, and your mobile application faces high expectations from users. Performance testing has to adapt to these challenges to serve the need for testing performance on mobile platforms. Any business today that isn’t going mobile is losing to a competitor that is. All major companies now have a mobile app so you can easily access their content, products and services. As a performance tester you need in-depth knowledge of the mobile environment you are testing against and of how to calibrate your performance test tooling to get production-like results.
Going mobile is no longer optional for a company today; it is a requirement for being successful.
What’s all this Devices and OS stuff anyhow?
Performance testing for mobile is less straightforward than for desktop applications, services and websites. If you go to a computer shop today to buy a new PC, it will come with Windows 8, and perhaps Windows 7 if you ask for it. But no shop will sell you a desktop based on Windows XP and a Pentium 4 CPU with 2 GB of RAM. The same goes for Apple with OS X. Performance testing for desktop is therefore mainly focused on the performance of infrastructure and servers.
A mobile device can be a smartphone, tablet, mini-tablet, phablet or other small handheld computer. With mobile devices the market is completely different: you can buy an entry-level smartphone with Android 2.2 or the latest high-end model with Android 4.4 at the same time. There are big differences not only in software but also in hardware: the entry-level smartphone may have a single-core CPU, 512 MB of memory and a 2.4” screen, while the high-end model packs a quad-core CPU, 2 GB of memory and a 5.1” screen. Both buyers nevertheless expect that your service, application or mobile website will work on their phone. And although smartphones have fewer resources than desktops, users expect the same performance as on a desktop.
Where PC, Apple and Linux users are regularly “forced” to upgrade their browsers, this is not the case with mobile devices: most users never upgrade their browser, which leads to a plethora of browser and operating system versions. While the PC market is dominated mainly by Windows and OS X, the mobile market has more operating systems; there are already eight major platforms: Android, iOS, Windows Mobile, Symbian, BlackBerry, Tizen, HP webOS and Bada.
Mobile performance therefore starts at the design and development stage. Which devices are you going to support? Are you going to build a native version of your app for every OS? The key problem is the conflict between the market penetration of each OS on the one hand and the cost of developing for every OS on the other.
Cross-platform app development is a strategy to cut those costs, but it also has an impact on performance. Applications that need maximum performance from the OS and hardware, such as games, are currently out of reach for web-based cross-platform alternatives. Native apps are superior in this respect regarding user experience and performance.
Within development and functional testing it is common practice to use emulators, which makes sense as it is costly and sometimes difficult to obtain all the different devices your users are using. Using an emulator is simple: just download the software, install it on your PC and you are ready to go; multiple emulators can be run in a straightforward manner. However, emulators are typically a “plain vanilla” version of the OS, often do not reflect the specific hardware and software features of each supported device, and are not connected to the mobile network. Testing on real devices is needed to test the impact of network-related events (e.g. an incoming call or text message) on mobile application behavior. While it is usually fine to use emulators in the development stage, performance and functional testing on real devices to capture the end-user experience is highly advisable; the optimal mobile testing solution is a combination of both.
What’s all this Mobile Networks stuff anyhow?
Once you have determined which devices and platforms you are going to develop for, the next performance hurdle comes up. Mobile users use, not surprisingly, mobile networks, which have different characteristics than fiber, ADSL or cable networks. As mobile networks are radio based, they suffer from latency, packet loss, CRC errors and packet reordering, to name but a few. Performance testing of your server from the internal LAN or through a cloud-based internet service creates a different load than the footprint of real mobile users. Mobile users on slow networks, for example, will keep connections open longer on your server than users on fast fiber or cable networks. You will need to review and adjust server settings for connection pooling in such a case.
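The connection-pooling effect can be sanity-checked with Little’s law: the average number of simultaneously open connections equals the request arrival rate multiplied by the time each connection is held open. A minimal sketch, where the rates and hold times are illustrative assumptions rather than measurements:

```python
# Little's law sketch: open connections = arrival rate x connection hold time.
# The numbers below are illustrative assumptions, not measurements.

def concurrent_connections(requests_per_second: float, hold_time_s: float) -> float:
    """Average number of simultaneously open server connections (Little's law)."""
    return requests_per_second * hold_time_s

# 100 req/s from cable users, each holding a connection for ~0.2 s
fast = concurrent_connections(100, 0.2)
# The same 100 req/s from slow mobile users, each holding a connection for ~2.5 s
slow = concurrent_connections(100, 2.5)

print(f"cable: {fast:.0f} open connections, mobile: {slow:.0f} open connections")
```

Same request rate, more than ten times the open connections: this is why a connection pool sized for LAN-based load tests can be exhausted by a realistic mobile footprint.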
Insight into the quality of the mobile networks your users are using is therefore mandatory. A website that can assist in this area is opensignal.com, which offers worldwide cell phone signal quality and coverage maps based on data submitted by mobile users.
Contacting mobile operators and purchasing reports to get in-depth knowledge about the quality of the mobile networks can be a next step, depending on the performance requirements. Keep in mind that this data can also be reused for other projects involving mobile networks.
Now that you have the data and insight about the quality of your users’ mobile networks, it is time to incorporate it into your performance testing. From my own research I found that performance test tools offer only limited options for mobile network emulation, also known as bandwidth shaping.
Moreover, the bandwidth-shaping functionality consumes resources on the load generator, which affects the amount of load you can generate. The most important reason not to use the bandwidth shaper integrated in the performance test tool is that mobile development is often done in an Agile fashion. This means that during development and functional testing you also want to use the bandwidth shaper to spot mobile-network-related issues at an early stage, so using a separate bandwidth shaper makes sense. Bandwidth shapers come in two flavors: as software that you can install on a server, or as a piece of hardware.
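As a sketch of the software flavor: shapers of this kind typically build on the Linux netem facility, so a standalone shaper can be set up on any Linux box with `tc`. The interface name, rates and loss figures below are illustrative assumptions, and the commands require root:

```shell
# Emulate a 3G-like link on eth0: 200 ms delay (+/- 50 ms jitter),
# 1% packet loss, 0.1% corruption and 2% packet reordering.
tc qdisc add dev eth0 root handle 1: netem delay 200ms 50ms loss 1% corrupt 0.1% reorder 2%

# Chain a token-bucket rate limit of 1 Mbit/s under the netem qdisc.
tc qdisc add dev eth0 parent 1: handle 2: tbf rate 1mbit burst 32kbit latency 400ms

# Remove all shaping from the interface when the test is done.
tc qdisc del dev eth0 root
```

Because the shaping runs on a separate box (or at least a separate network path), the load generator keeps all its resources for generating load.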
My own favorite is WANem from Tata Consultancy Services. It is open source and comes with a nice web interface that lets you adjust all the settings for latency, packet loss, CRC errors, packet reordering and more. By using multiple IPs on your load generator you can emulate several mobile networks in WANem and create a load with a footprint similar to that of real mobile users. WANem is not a simple tool and requires in-depth knowledge about networks; all settings will have a direct impact on the response times reported by your performance test tooling. As these results are often the reason to start performance optimization, it is very important that they are correct and traceable.
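To keep the reported response times traceable, it helps to know roughly what the emulated settings should produce. A back-of-envelope sketch, where the round-trip count and payload size are illustrative assumptions:

```python
# Rough lower bound on a request's response time under an emulated network:
# protocol round trips paying the full RTT, plus serialized transfer time.
# Round-trip count and payload size below are illustrative assumptions.

def min_response_time_s(rtt_s: float, round_trips: int,
                        payload_bytes: int, bandwidth_bps: float) -> float:
    """Latency cost of the round trips plus the time to push the payload."""
    return rtt_s * round_trips + payload_bytes * 8 / bandwidth_bps

# 3G-like settings: 200 ms RTT; DNS lookup + TCP handshake + HTTP request
# = 3 round trips; a 100 kB response over a 1 Mbit/s shaped link.
t = min_response_time_s(0.200, 3, 100_000, 1_000_000)
print(f"{t:.2f} s")  # 0.6 s of latency + 0.8 s of transfer = 1.40 s
```

If the tool reports response times well below such an estimate, the shaper is probably being bypassed; if far above, the server or the shaper configuration deserves a closer look.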
What’s all this Application Performance Monitoring stuff anyhow?
In the desktop world APM is mainly done at the backend server level; client hardware and the network connection play only a small role in the performance experienced by the user. Mobile applications are very different from their server-based counterparts. The main source of the difference lies in the architecture of the application, which is adapted to the mobile environment. The part used by the end user is now a full-fledged application installed on the mobile device, instead of a browser that executes only a limited portion of the application.
Therefore it makes sense to do APM at the client level, by incorporating an agent that instruments the application to monitor user behavior, network behavior and application performance. An example of such an agent is AppInsight (“Mobile App Performance Monitoring in the Wild”) from Microsoft. Instrumentation combined with customer-targeted ads in the application is what, for example, Flurry Analytics offers. For a service or application provider this kind of instrumentation is a great tool for application performance improvement, while end users in the meantime are getting more and more concerned about application permissions and privacy. From my own point of view, instrumented apps should only be used during field testing to get the necessary insight into application performance “in the wild”. As field testing is usually done in the last stages of application development before go-live, its purpose is mainly to validate that performance testing at device level and mobile network emulation were properly done earlier in the cycle.
The plethora of hardware, browser versions and operating systems, combined with the architecture used, makes performance testing at device level a key factor for mobile. The quality and characteristics of the mobile networks also have a lot of impact from a performance point of view. Performance should therefore be part of your development process from day one to guarantee the end-to-end performance experienced by the users.
Roland van Leusden