Blur effects under iOS 7
With the introduction of iOS 7 and its blurred menus, interest in the topic of blur also increased noticeably in the developer community.
Developers who want to adapt their own apps to the new look and feel can hardly avoid the blur effect. In simple static scenarios, a blurred menu can be implemented with the filters of Apple's Core Image framework [1].
Figure 1: Blur effect vs. normal display
With dynamic or even three-dimensional content, however, the library quickly reaches its limits: if the blur has to be computed at runtime, this causes unacceptable waiting times for the user.
A more efficient approach is to perform the computation on the graphics hardware using OpenGL ES, for example in the GLKViewController provided by Apple's GLKit framework [2]. Before going into this in more detail, we first explain the concept behind the blur effect.
The blur effect, also known as soft focus, iterates over all pixels of an image and computes each new color value taking the neighboring pixels into account. For this purpose a filter matrix is used that determines the weight with which each neighboring pixel contributes to the resulting color value. One of the best-known filters of this kind is the Gaussian filter, whose matrix is defined as follows:
Figure 2: Mathematical formula for calculating the Gaussian filter
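For reference, the two-dimensional Gaussian function from which the matrix entries are sampled can be written as:

```latex
G(x, y) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{x^2 + y^2}{2\sigma^2}}
```

Here, sigma controls the width of the bell curve and thus the strength of the blur; the matrix entries are obtained by evaluating G at integer offsets from the center and normalizing them so that they sum to 1.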
More detailed information about the operation and mathematical background of the Gaussian filter can be found in the following references [3] or [4].
Apart from the type of filter used, the filter radius also plays a role: the more neighboring pixels are included, the stronger the blur. At the same time, however, the computational effort increases considerably, since the number of pixels to be considered grows quadratically with the filter radius. Doubling the radius from 4 to 8, for example, enlarges a square filter from 9×9 = 81 to 17×17 = 289 pixels.
Since pixel accesses account for most of the computing time, and resources are particularly scarce on mobile devices, it is advisable to reduce the number of accesses as far as possible. Thanks to the separability of the Gaussian filter, the filtering of a 2D image can be split into two one-dimensional passes: the image is filtered once horizontally and once vertically. For a 9×9 filter this reduces the number of accesses per pixel from 81 to 18. The following figures illustrate the application of this concept:
Figure 3: Blur horizontal
Figure 4: Blur vertical
Figure 5: result
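The separability property can be verified numerically. The following NumPy sketch (with an illustrative sigma and radius) builds a 9×9 Gaussian kernel as the outer product of a 1D kernel with itself and confirms that one direct 2D filtering and two sequential 1D passes produce the same image:

```python
import numpy as np

# 1D Gaussian kernel (sigma = 1.0, radius = 4 -> 9 taps); illustrative values
radius, sigma = 4, 1.0
xs = np.arange(-radius, radius + 1)
k1d = np.exp(-xs**2 / (2 * sigma**2))
k1d /= k1d.sum()

# The 2D Gaussian kernel is the outer product of the 1D kernel with
# itself -- this is exactly the separability exploited in the text.
k2d = np.outer(k1d, k1d)

def filter2d(img, kernel):
    """Naive 2D filtering with zero padding (for demonstration only)."""
    ry, rx = kernel.shape[0] // 2, kernel.shape[1] // 2
    padded = np.pad(img, ((ry, ry), (rx, rx)))
    out = np.zeros(img.shape)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y + 2*ry + 1, x:x + 2*rx + 1] * kernel)
    return out

rng = np.random.default_rng(0)
img = rng.random((16, 16))

direct = filter2d(img, k2d)                           # 81 accesses per pixel
horizontal = filter2d(img, k1d[np.newaxis, :])        # 9 accesses per pixel
separated = filter2d(horizontal, k1d[:, np.newaxis])  # + 9 accesses per pixel

print(np.allclose(direct, separated))  # -> True: both paths give the same image
```

The two 1D passes need 9 + 9 = 18 accesses per pixel instead of 81, matching the savings described above.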
The described concept is realized in OpenGL ES through per-pixel calculations in the fragment shader. For this purpose, the desired image area is first rendered into a texture with the help of a framebuffer object; the filter operation in the shader is then applied to this texture.
Applying the separated filter requires two framebuffers: the unblurred image is read from one buffer and the resulting image is written to the second. After the horizontal pass, the buffers are swapped and the vertical pass is performed.
A possible realization of such a shader based on the template of [5] can be seen in the code snippet below.
uniform sampler2D image;
uniform highp float blurSize;

void main(void) {
    // Tap offsets and weights of a 9-tap Gaussian, reduced to five
    // texture reads per pass by exploiting linear texture filtering.
    highp float offset[3];
    offset[0] = 0.0;
    offset[1] = 1.3846153846;
    offset[2] = 3.2307692308;

    highp float weight[3];
    weight[0] = 0.2270270270;
    weight[1] = 0.3162162162;
    weight[2] = 0.0702702703;

    // Center tap.
    gl_FragColor = texture2D(image, vec2(gl_FragCoord) / blurSize) * weight[0];

    // Symmetric taps on both sides (horizontal pass; the vertical
    // pass offsets in y instead of x).
    for (int i = 1; i < 3; i++) {
        gl_FragColor += texture2D(image, (vec2(gl_FragCoord) + vec2(offset[i], 0.0)) / blurSize) * weight[i];
        gl_FragColor += texture2D(image, (vec2(gl_FragCoord) - vec2(offset[i], 0.0)) / blurSize) * weight[i];
    }
}
If the result does not yet match the desired blur intensity, this process can be repeated as often as needed. Afterwards, the finished blurred image is available as a texture and can be used to draw the GUI elements. In practice, depending on the size of the blurred image area, up to five passes are feasible under OpenGL ES 2.0 even on an iPad 3.
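The effect of repeating the blur can be quantified: convolving a Gaussian kernel with itself adds the variances, so the effective blur width grows with the square root of the number of passes. A small NumPy sketch (with an illustrative sigma and radius) demonstrates this:

```python
import numpy as np

# 1D Gaussian kernel (sigma = 1.0, truncated at radius 6); illustrative values
radius, sigma = 6, 1.0
xs = np.arange(-radius, radius + 1)
k = np.exp(-xs**2 / (2 * sigma**2))
k /= k.sum()

def variance(kernel):
    """Variance of a normalized, zero-centered kernel."""
    x = np.arange(len(kernel)) - len(kernel) // 2
    return float(np.sum(kernel * x**2))

# Blurring twice is equivalent to blurring once with a wider kernel:
# the variances add, so the effective sigma after n passes is
# sigma * sqrt(n) rather than sigma * n.
k_twice = np.convolve(k, k)
print(round(variance(k_twice) / variance(k), 3))  # -> 2.0
```

This explains why each additional pass widens the blur noticeably less than the previous one.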
If performance problems occur despite the separated filter, the framebuffers can be stored at a lower resolution, since the image contains only low-frequency components once the filter has been applied.
In simple cases it is possible to store the required textures in already blurred form and use them directly to render the GUI elements. To create such a pre-blurred texture, it is sufficient to apply a suitable filter to the image in any common image-processing application and save the result. If enough memory is available, this pre-calculation is recommended.
In concrete individual cases, the choice of method depends on factors such as the complexity of the scenario and the available computing power and memory; a conclusive, generally valid recommendation can therefore not be given.