Performance Best Practices Using Java and AWS Lambda: Discussion

:::info
This paper is available on arxiv under CC BY-SA 4.0 DEED license.

Authors:

(1) Juan Mera Menéndez;

(2) Martin Bartlett.

:::

Table of Links

Abstract and Introduction

Initial Application

Performance Tests

Best Practices and Techniques

Combinations

Discussion

Related Work

Conclusion and References

VI. DISCUSSION

Figure 4 and Figure 5 summarize the improvements for cold starts and for warm Lambdas, respectively. Next, we analyze each approach and the possibilities it offers.


Starting with an appropriate configuration for each function: in our opinion, this is the minimum optimization that any function should have, both to improve performance and to balance cost. Besides offering a significant improvement, it is fully compatible with all the other techniques mentioned in this article.
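
The article does not prescribe a particular deployment tool, but as a rough illustration, a per-function configuration of this kind might look as follows with the AWS CDK v2 for Java. The stack, function, handler, memory, and timeout values are all hypothetical and should come from measurements (for example, load tests or AWS Lambda Power Tuning), and the Java 17 managed runtime is assumed here:

```java
import software.amazon.awscdk.Duration;
import software.amazon.awscdk.Stack;
import software.amazon.awscdk.services.lambda.Code;
import software.amazon.awscdk.services.lambda.Function;
import software.amazon.awscdk.services.lambda.Runtime;
import software.constructs.Construct;

public class OrderStack extends Stack {
    public OrderStack(final Construct scope, final String id) {
        super(scope, id);

        // Memory and timeout are set per function, from measurements, instead of
        // relying on a one-size-fits-all default. More memory also means a larger
        // CPU share, so the cheapest setting is not always the lowest one.
        Function.Builder.create(this, "OrderFunction")
                .runtime(Runtime.JAVA_17)
                .handler("com.example.OrderHandler::handleRequest")
                .code(Code.fromAsset("target/order-function.jar"))
                .memorySize(1024)
                .timeout(Duration.seconds(15))
                .build();
    }
}
```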


Continuing with the use of SnapStart, this technique focuses on mitigating cold starts. In our experience, the improvement it produced was not significant enough to justify the incompatibilities it introduces, including the lack of support for the arm64 architecture and custom runtimes, along with other important features such as the ability to use EFS or to attach the Lambda to a VPC.
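
For completeness, SnapStart takes its snapshot after initialization, and functions that adopt it typically pair it with runtime hooks through the org.crac API. The sketch below is not taken from the article; it is a minimal, hypothetical example of such a hook, with the actual handler logic omitted and all names illustrative:

```java
import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

// Illustrative CRaC hook for a SnapStart-enabled function: prime expensive
// state before the snapshot is taken, and refresh anything that must not be
// reused across restores (connections, credentials, random seeds, ...).
public class PricingHandler implements Resource {

    public PricingHandler() {
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        // Warm caches, load reference data, and trigger class loading here so the
        // work is captured in the snapshot instead of in the first invocation.
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        // Re-open network connections and re-read secrets after each restore.
    }
}
```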


The improvement offered by the arm64 architecture seems to depend on the specific use case in which it is employed. For example, it can be particularly beneficial for compute-intensive applications such as high-performance computing, which could explain why the improvement in our case was not very noticeable. Nevertheless, its use is highly recommended, as it can be combined with the other approaches discussed and its adoption is on the rise. The main drawback is the incompatibility of some third-party technologies or dependencies with this architecture.


The use of AWS’s SDK v2 for Java applications is also practically mandatory, as long as it supports all the services and the majority of the dependencies integrated into the function. Thanks to its improvements and modular approach, this second version of the SDK significantly increases performance. Moreover, it combines cleanly with the rest of the techniques without adding notable limitations, and it can even coexist with version 1 of the SDK if a feature that v2 does not yet support is needed.
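
As a rough sketch of what the modular approach can look like in practice (not the article's exact setup): only the DynamoDB module is pulled in, a lightweight UrlConnection HTTP client is used, and region and credentials are resolved explicitly so the SDK skips its discovery chains during cold start. The class and table names are illustrative:

```java
import java.util.Map;
import software.amazon.awssdk.auth.credentials.EnvironmentVariableCredentialsProvider;
import software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;

public class CatalogRepository {

    // Built once, outside the handler path, with explicit region, credentials,
    // and HTTP client so no discovery work is repeated per invocation.
    private static final DynamoDbClient DDB = DynamoDbClient.builder()
            .region(Region.of(System.getenv("AWS_REGION")))
            .credentialsProvider(EnvironmentVariableCredentialsProvider.create())
            .httpClient(UrlConnectionHttpClient.builder().build())
            .build();

    public String findName(String id) {
        GetItemRequest request = GetItemRequest.builder()
                .tableName("catalog") // illustrative table name
                .key(Map.of("id", AttributeValue.fromS(id)))
                .build();
        return DDB.getItem(request).item().get("name").s();
    }
}
```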


As expected, leveraging ahead-of-time compilation yields a significant performance boost, both during cold starts and once functions are warm. The main drawback, however, is the complexity this approach adds: it is common to run into runtime issues that force the developer to deal with low-level details, and any modification can become challenging. In principle, it can be combined with any of the other techniques that support custom runtimes.
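
As one example of the low-level work this approach can require (not taken from the article), classes that are reached only through reflection often have to be registered explicitly for GraalVM native image, either via JSON reachability metadata or programmatically with a Feature like the hypothetical sketch below, passed to native-image through the --features option:

```java
import org.graalvm.nativeimage.hosted.Feature;
import org.graalvm.nativeimage.hosted.RuntimeReflection;

// Registers a DTO that is only reached via reflection (e.g. by a JSON mapper)
// so the native-image analysis does not strip it from the binary.
public class ReflectionRegistrationFeature implements Feature {

    @Override
    public void beforeAnalysis(BeforeAnalysisAccess access) {
        Class<?> dto = access.findClassByName("com.example.OrderDto"); // illustrative class
        RuntimeReflection.register(dto);
        RuntimeReflection.register(dto.getDeclaredConstructors());
        RuntimeReflection.register(dto.getDeclaredMethods());
    }
}
```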


The possibilities offered by the JAVA_TOOL_OPTIONS environment variable are quite extensive, allowing significant JVM configuration. In our case, we only tested tiered compilation, but there are other possibilities, such as configuring garbage collector behavior [7]. In any case, it is worth trying this variable and integrating it into Java functions whenever the JVM is used, because it imposes no limitations and the improvement is significant.
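
The variable is normally set in the IaC template alongside the function. Purely as an illustration, it could also be applied to an existing function with the SDK v2 control-plane client, as in the hypothetical sketch below; the function name is made up, and the flags shown are the commonly used ones for stopping tiered compilation at the C1 level:

```java
import java.util.Map;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.Environment;

public class TieredCompilationConfig {

    public static void main(String[] args) {
        try (LambdaClient lambda = LambdaClient.create()) {
            // Restrict the JIT to the C1 compiler to shorten cold starts; other
            // JVM options (GC choice, heap sizing, ...) go through the same variable.
            lambda.updateFunctionConfiguration(req -> req
                    .functionName("order-function") // illustrative function name
                    .environment(Environment.builder()
                            .variables(Map.of("JAVA_TOOL_OPTIONS",
                                    "-XX:+TieredCompilation -XX:TieredStopAtLevel=1"))
                            .build()));
        }
    }
}
```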


It is also important to highlight that investing effort in refactoring the function’s code and applying the details mentioned in subsection IV-G can yield an even greater performance improvement than the techniques discussed. This work is entirely dependent on the use case and requires trial and error as well as skill.
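
The exact changes from subsection IV-G are not reproduced here, but a typical refactoring of this kind is moving expensive initialization out of the handler so it runs once per execution environment and is reused while the Lambda stays warm. The sketch below uses illustrative names, with an S3 read standing in for the real work:

```java
import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import software.amazon.awssdk.services.s3.S3Client;

// Before: the S3 client was built inside handleRequest on every invocation.
// After: it is built once per execution environment and reused across invocations.
public class ReportHandler implements RequestHandler<Map<String, String>, String> {

    private static final S3Client S3 = S3Client.create();

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        String bucket = event.get("bucket");
        String key = event.get("key");
        // Reuse the shared client instead of paying its construction cost here.
        return S3.getObjectAsBytes(b -> b.bucket(bucket).key(key)).asUtf8String();
    }
}
```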


Regarding the simultaneous application of techniques, we recommend either of our two combinations; the choice should favor whichever provides the greater benefit, whether in raw performance or in adaptability to the specific use case.
