Spare your Android application from lags, stutters, and long loading screens

Before we get into the discussion of tools and techniques to improve performance, let’s take some time to see how lags generally appear and why an application can be slow. The main problems of modern applications:

  • the app and individual screens take too long to load;
  • freezes, when the app simply hangs until the system shows a dialog offering to kill it;
  • FPS drops, when instead of smooth scrolling and animation the user sees a slide show.

The cause of all these problems is the same: operations, whether computational or I/O, that take too long to execute. The optimization methods, however, differ.



Cold Start

Launching an application consists of several stages: initializing a new process, preparing a window to display the application interface, displaying the window on the screen, and passing control to the application code. After that, the application has to build the interface from its description in the XML file, load from "disk" or from the network everything needed to display it correctly (bitmaps, data for lists, charts, and so on), initialize additional interface elements such as a sliding menu (Drawer), and attach listeners to the interface elements.

Obviously, this is a huge amount of work and every effort should be made to do it as quickly as possible. The two main tools in this case:

  • deferred initialization;
  • running tasks in parallel.

Deferred initialization means that everything that can be done later should be done later. It is a bad idea to create and initialize at startup all the data and objects the application may ever need. First initialize only what is required to show the main screen correctly, then move on to everything else.
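A minimal sketch of the idea (AnalyticsClient is a hypothetical heavyweight dependency, not something from the article): onCreate() builds only the first screen, and the expensive object is created lazily on first use.

public class MainActivity extends AppCompatActivity {

    private AnalyticsClient analytics;   // hypothetical heavyweight dependency

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);   // only what the first screen needs
    }

    // Created on first use instead of during startup.
    private AnalyticsClient getAnalytics() {
        if (analytics == null) {
            analytics = new AnalyticsClient(getApplicationContext());
        }
        return analytics;
    }
}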



Complex and expensive operations that cannot be optimized, as well as blocking operations such as reading from disk or fetching data from a server, should be moved to a separate thread, and the interface should be updated asynchronously when they finish.

Example: you have an application whose main screen should display a summary of data received from the Internet. The most obvious approach is to get the data from the server first and then build the interface. And although Android by default does not allow network requests on the main thread of the application, forcing you to create a separate thread to fetch data from the server, most developers will still try to keep the code sequential.

The problem with this approach is that it introduces delays that are simply not needed. Most of the time the network thread will sit idle waiting for data, and that time is better spent displaying the interface. In other words: right after the application starts, you should create a thread that fetches the data from the server, but not wait for the data to arrive; instead, build the interface using placeholders in place of the data that has not been received yet.

Blank images, blank lines, and empty lists can serve as placeholders (for example, a RecyclerView can be initialized immediately, and when the data arrives you simply call notifyDataSetChanged()). Once the data has been retrieved from the server, it should be cached, so that on the next launch it can be shown instead of placeholders.
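A rough sketch of this approach (NewsItem, NewsAdapter, api.loadNews(), and the fields used here are illustrative names, not part of the article): the adapter is attached immediately with an empty list, the network request runs on a background thread, and the result is posted back to the main thread.

// In onCreate(): show the screen right away, fill the list later.
adapter = new NewsAdapter(items);              // items is an empty List<NewsItem> field
recyclerView.setAdapter(adapter);              // the empty list acts as the placeholder

Handler mainHandler = new Handler(Looper.getMainLooper());
Executors.newSingleThreadExecutor().execute(() -> {
    List<NewsItem> fresh = api.loadNews();     // blocking network call, off the main thread
    mainHandler.post(() -> {                   // back to the main thread to touch views
        items.clear();
        items.addAll(fresh);
        adapter.notifyDataSetChanged();        // replace the placeholders with real data
    });
});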

This approach works well not only for network communication, but for any task that requires long computation and/or waiting for data. For example, a firewall application needs a lot of time to request the list of installed applications from the system, sort it, and load icons and other data into memory. That is why modern firewalls do this asynchronously: they display the interface first and fill it with application icons from a background thread.

Another bottleneck is building the interface from the layout description in an XML file. When you call the setContentView() method, or inflate() on a LayoutInflater object (in fragment code), Android finds the required layout in the binary XML file (for efficiency, Android Studio packs XML into a binary format), reads and parses it, and builds the interface from the result, measuring and laying out the interface elements relative to each other.

This is a genuinely complex and expensive operation, so it pays to optimize your layouts: avoid unnecessarily nesting layouts inside each other (for example, use RelativeLayout instead of nested LinearLayouts) and break complex interface descriptions into many smaller ones, loading them only when necessary.
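One standard way to load a part of the interface only when it is needed is ViewStub: a zero-size placeholder declared in the layout XML that inflates its real layout on demand. A short sketch (R.id.stub_details and the layout the stub points to are illustrative names):

// The stub costs almost nothing while the panel is not shown.
ViewStub stub = (ViewStub) findViewById(R.id.stub_details);
if (stub != null) {                        // findViewById() returns null once the stub is inflated
    View detailsPanel = stub.inflate();    // the heavy layout is parsed only now
}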

Another option is to switch to Kotlin and use the Anko library. It lets you describe the interface directly in code, which makes displaying the interface up to four times faster and gives you more flexibility in controlling how the interface is built.

Freezes and FPS Drops

In Android, only the main thread of the application is allowed to update the screen and handle touch events. This means that while your application is doing heavy work on the main thread, it cannot respond to taps, and to the user it looks frozen. Here, again, moving heavy operations to separate threads helps.

There is, however, a much more subtle and less obvious point. Android updates the screen at 60 FPS, which means that during animation or list scrolling it has only 16.6 ms to render each frame. In most cases Android copes with this and does not drop frames. But a poorly written application can slow it down.


A simple example: RecyclerView is an interface element that allows you to create extremely long lists that take up the same amount of memory regardless of the length of the list itself. This is possible by reusing the same interface element sets (ViewHolder) to display different list elements. When a list item is hidden from the screen, its ViewHolder is moved to the cache and then used to display the next list items.

When RecyclerView pulls a ViewHolder from the cache, it calls the onBindViewHolder() method of your adapter to fill it with the data of a specific list item. And here the interesting part begins: if onBindViewHolder() does too much work, RecyclerView will not manage to prepare the next list item in time and the list will start to stutter while being scrolled.
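A hedged sketch of the idea, reusing the names from the tracing example later in the article (getFormattedDate() is an assumed helper): the expensive work, parsing and formatting, is done once when the data set is built, so onBindViewHolder() only copies ready-made values into views.

@Override
public void onBindViewHolder(MyViewHolder holder, int position) {
    // No parsing, formatting or I/O here - the list is being scrolled right now.
    RowItem item = mDataset.get(position);
    holder.title.setText(item.getTitle());
    holder.date.setText(item.getFormattedDate());   // precomputed when the data arrived
}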

One more example. You can attach a custom RecyclerView.OnScrollListener to a RecyclerView; its onScrolled() method is called while the list is being scrolled. It is usually used to dynamically hide and show the round action button in the corner of the screen (FAB, Floating Action Button). But if you put heavier code into this method, the list will start to stutter again.
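A typical lightweight implementation looks roughly like this (fab and recyclerView are assumed fields of the Activity):

recyclerView.addOnScrollListener(new RecyclerView.OnScrollListener() {
    @Override
    public void onScrolled(RecyclerView rv, int dx, int dy) {
        // Only toggle the FAB - no allocations, parsing or measurements here.
        if (dy > 0 && fab.isShown()) {
            fab.hide();     // scrolling down: hide the button
        } else if (dy < 0 && !fab.isShown()) {
            fab.show();     // scrolling up: show it again
        }
    }
});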

And a third example. Suppose your application's interface consists of many fragments that the user switches between via a side menu (Drawer). The most obvious solution seems to be to put roughly the following code into the menu item click handler:

// Switch the fragment
getSupportFragmentManager()
        .beginTransaction()
        .replace(R.id.container, fragment, "fragment")
        .commit();

// Close the drawer
drawer.closeDrawer(GravityCompat.START);

Everything looks logical, but if you run the application and test it, you will see that the menu closing animation stutters. The problem is that the commit() method is asynchronous, so switching the fragments and closing the menu happen at the same time, and the smartphone simply cannot handle all the screen update work within the frame budget.

To avoid this, switch the fragment only after the menu closing animation has finished. You can do this by attaching a custom DrawerListener to the drawer:

mDrawerLayout.addDrawerListener(new DrawerLayout.DrawerListener() {
    @Override public void onDrawerSlide(View drawerView, float slideOffset) {}
    @Override public void onDrawerOpened(View drawerView) {}
    @Override public void onDrawerStateChanged(int newState) {}

    @Override
    public void onDrawerClosed(View drawerView) {
      if (mFragmentToSet != null) {
        getSupportFragmentManager()
              .beginTransaction()
              .replace(R.id.container, mFragmentToSet)
              .commit();
        mFragmentToSet = null;
      }
    }
});
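The click handler itself then only remembers which fragment to show and starts the closing animation. A sketch (mFragmentToSet is a field of the Activity, and the drawer is the same mDrawerLayout as above; selectMenuItem() is an illustrative name):

private Fragment mFragmentToSet;

private void selectMenuItem(Fragment fragment) {
    mFragmentToSet = fragment;                        // the switch happens in onDrawerClosed()
    mDrawerLayout.closeDrawer(GravityCompat.START);   // just play the closing animation
}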

Another far from obvious point. Starting with Android 3.0, rendering of the application interface is done on the GPU. This means that all bitmaps, drawables, and resources specified in the application theme are uploaded to GPU memory, so access to them is very fast.

Every interface element shown on the screen is converted into a set of polygons and GPU instructions, so it is redrawn quickly, for example during fast swipes. Hiding and showing a View by changing its visibility attribute (button.setVisibility(View.GONE) and button.setVisibility(View.VISIBLE)) is just as fast.

But when you change a View, even minimally, the system has to recreate it from scratch, uploading new polygons and instructions to the GPU. For a TextView the operation is even more expensive, because Android first has to rasterize the font, that is, turn the text into a set of rectangular bitmaps, then do all the measurements and generate the GPU instructions. On top of that, the position of the changed element and of the other elements in the layout has to be recalculated. Keep all of this in mind and change a View only when you really have to.

Overdraw is another serious problem. As mentioned above, parsing complex layouts with many nested elements is slow in itself, but it also tends to bring with it the problem of the same pixels being painted several times.

Imagine that you have several nested LinearLayouts and some of them also have the background property set, i.e. they not only contain other interface elements but also draw a background image or color fill. As a result, during the rendering phase the GPU does the following: it fills the area occupied by the root LinearLayout with pixels of the required color, then fills the part of the screen occupied by the nested layout with another color (or the same one), and so on. Many pixels on the screen end up being painted several times within a single frame, which is pointless work.

Overdraw cannot be avoided completely. For example, if you want to display a button on a red background, you still have to fill the screen with red first and then repaint the pixels where the button is. Besides, Android can optimize rendering so that some overdraw never happens (for example, if two elements of the same size lie on top of each other and the upper one is opaque, the lower one is simply not drawn). Still, much depends on the programmer, who should do their best to minimize overdraw.

The overdraw debugging tool built into Android will help here: Settings → Developer options → Debug GPU overdraw → Show overdraw areas. Once it is enabled, areas of the screen are tinted with different colors, which mean the following:

  • original color – the pixel was drawn once (no overdraw);
  • blue – drawn twice;
  • green – three times;
  • red – four times or more.

The rule here is simple: if most of your application's interface turns green or red, you have a problem. If you see mostly blue (or the application's own colors) with a bit of green and red where switches or other dynamic interface elements are shown, everything is fine.

Overdraw in a healthy application versus a smoker's overdraw.

Well, a few tips:

  • Try not to set the background property on layouts that do not need it.
  • Reduce the number of nested layouts.
  • Remove the window's default background at the start of the Activity code with getWindow().setBackgroundDrawable(null) if your root layout draws its own background (see the sketch after this list).
  • Don’t use transparency where you can do without it.
  • Use the Hierarchy Viewer tool to analyze your layout hierarchy and the relationships between layouts, estimate rendering speed, and measure sizes.
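The window background tip from the list above, as a short sketch: if the root layout already paints every pixel itself, the theme's window background is one extra full-screen layer that can simply be dropped.

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    // The root layout draws an opaque background of its own, so the default
    // window background would only be overdrawn - remove it.
    getWindow().setBackgroundDrawable(null);
}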

Systrace

Finding the bottlenecks in a relatively simple application that you wrote in a few days is not difficult. It is enough to follow the rules described above. But when it comes to a really big project, you cannot do without special tools.

Systrace is one of the most important tools you should master. It is a tracer that lets you observe what happens on the device while your application is running. In particular, it clearly shows how each frame is drawn, which frames were rendered on time, and which ones the system had to drop.

Systrace is launched from the Android Device Monitor, which in turn can be found in Android Studio under Tools → Android. Open the Android Device Monitor, wait until it detects the smartphone, select the application, and press the tracing start button.


In the launch window that opens, leave all settings as they are and press OK. Tracing runs for five seconds, after which an HTML file with the data is generated (on *nix systems it is a trace.html file in your home directory). Open it in a browser.

At first glance the Systrace report is confusing: a huge amount of data with no indication of what matters. Fortunately, you will not need most of it; you are only interested in the Frames, UI Thread, and RenderThread rows.

Frames shows the screen updates. Each frame is a circle in one of three colors: green, yellow, or red. Green means the frame was drawn within 16.6 ms; yellow and red mean drawing took longer than 16.6 ms, i.e. the frame rate dropped. Directly below the Frames row is the UI Thread row, which lets you analyze the steps the system performed to display the frame.

Clicking on a circle gives you more information about why the system spent more time than it should have drawing the frame. The possible situations and ways to solve them are described in the developer documentation. Let us just add that you should not pay much attention to the Scheduling delay problem: it is usually caused not by your application but by Android itself, and it appears especially often on old and low-powered smartphones.

Systrace lets you see at which stage the delay in drawing occurred. But it will not tell you whether the problem was caused by your code, and if so, where exactly the bottleneck is. To find out, you can make Systrace output more detailed by adding markers to the application code, which lets you measure how long your own code takes to execute. An example of tracing the onBindViewHolder() method:

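// The markers below use android.os.Trace (available since API 18); each
// beginSection()/endSection() pair shows up as a named block in the Systrace report.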
@Override
public void onBindViewHolder(MyViewHolder holder, int position) {
    Trace.beginSection("MyAdapter.onBindViewHolder");
    try {
        try {
            Trace.beginSection("MyAdapter.queryDatabase");
            RowItem rowItem = queryDatabase(position);
            mDataset.add(rowItem);
        } finally {
            Trace.endSection();
        }
        holder.bind(mDataset.get(position));
    } finally {
        Trace.endSection();
    }
}

There is a simpler tracing tool built into Android. Just enable Developer options → Profile GPU rendering → On screen as bars, and a chart appears on the screen. The X-axis shows frames; the Y-axis shows bars whose height represents how long each frame took to draw. If a bar rises above the green line, drawing that frame took more than 16.6 ms.

Android Profiler

This is another important tracing tool, which lets you see how much time each method in your application took to execute. Like Systrace, it generates a report for a certain period of time, but the report is much more low-level and covers every single method that was called.

Open Android Studio, click Android Profiler at the bottom of the screen, then click CPU and press the round red record button at the top; stop recording when you are done. A window with the report will appear at the bottom of the screen.

By default the report is shown as a chart, with time on the X-axis and the called methods on the Y-axis. System (API) methods are shown in orange, the application's own methods in green, and third-party API methods (including Java language APIs) in blue. The Flame Chart tab shows a similar diagram in which identical methods are merged, which is convenient for visually estimating how much total time a given method took over the whole tracing period.

The Top Down and Bottom Up tabs show the call tree of methods, including information about the time spent on their execution:

  • Self – code execution time of the method itself;
  • Children – code execution time of all methods called by it;
  • Total – sum of Self and Children.

Like Systrace, this tool requires a thoughtful study of the report. It will not tell you where and what went wrong. It simply tells you when and what was going on in your application, and allows you to find the code fragments that took the most CPU time.

Conclusions

We have covered only a small part of what you should pay attention to when developing a performant application. There are many other situations and problems you will run into, and no article or book can teach you how to deal with all of them; only experience and an understanding of how Android works will help.


