This article is part of the series
"Cloud-Based Embedded Testing: A Case Study."
In automotive software development, tests typically run on a single PC; for very large models or extensive test suites, execution can take several days, mainly because test cases are processed sequentially and a PC offers limited scalability. By leveraging cloud infrastructure, the aim is to streamline testing processes, reduce execution times, and enhance scalability. But how is automated cloud-based testing for functional software implemented, especially when it comes to test execution and result evaluation?
This study demonstrates the feasibility and advantages of automated, scalable cloud testing for embedded systems, offering insights into optimizing both process efficiency and result accuracy in software quality assurance.
Key elements of the cloud testing approach include the use of an appropriate testing framework, the generation of relevant test data, and reliable assessment metrics to ensure functionality meets predefined standards. The case study also includes practical implementation steps, utilizing Amazon Web Services (AWS) and Jenkins for continuous integration, which are tailored to the unique requirements of automotive embedded software testing.
The focus is on automated testing of simple functional software with an emphasis on test execution and evaluation. The resulting report provides insights into which tests were successful and which ones failed.
To conduct these tests, three essential components are required:
- A test frame: TPT constructs a test framework tailored to the software.
- Test data: TPT creates test data to stimulate the test subject.
- Assessments: evaluations of whether the test subject functions correctly.
The test subject is a simple Simulink model that computes the control of the vehicle's exterior lights based on a light switch and the ambient light intensity: Testing Lights Control with TPT.
We judged this simple example to be highly suitable for the use cases: it is an exemplary representation of automotive embedded software development, has a manageable size, covers well-known functionality, and has been tested frequently.
The testing is conducted in both Model-in-the-Loop (MiL) and Software-in-the-Loop (SiL) environments. The results of both types of tests are automatically compared as part of a back-to-back (B2B) test.
The automated tests were created and executed with TPT. TPT is a widely used solution in the automotive industry and enables testing of MATLAB/Simulink and TargetLink models. In the comparative testing between MiL and SiL, the same test cases are used without modification. Test execution in MiL is achieved through the integration of TPT and MATLAB/Simulink, with TPT automatically establishing the connection.
We set up two use cases, each with a different focus, with the second building on the first.
The objective: A tester should be able to start a test execution on multiple parallel computing units at the push of a button. When the test execution is complete, there is a report that summarizes all test executions, measurements, and results as if the execution had occurred on a single computer.
To implement this, the cloud setup must be able to launch computing instances, receive the test files, execute the tests, and return the reports.
Use case 1 could be fully implemented. Setting up the Amazon Machine Image (AMI) and the other cloud computing resources was quick and easy thanks to AWS's good documentation.
The biggest effort went into the security activities. We often first had to work out which ports needed to be opened to allow communication between two entities, for example for uploading files from the local machine to the instances and for downloading the reports from the instances back to the local machine.
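As an illustration, opening such a port can be scripted with boto3. The sketch below is a minimal example, not our actual configuration; the security-group id, region, and CIDR range are hypothetical placeholders, and port 22 assumes file transfer over SSH/SCP:

```python
def open_ssh_ingress(ec2, group_id, cidr):
    """Allow inbound SSH (port 22) on a security group so files and
    reports can be copied to and from the instance, e.g. via SCP."""
    ec2.authorize_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,  # SSH / SCP
            "ToPort": 22,
            "IpRanges": [{"CidrIp": cidr}],  # restrict to known addresses
        }],
    )


def main():
    # Not executed here: requires boto3 and AWS credentials.
    import boto3
    ec2 = boto3.client("ec2", region_name="eu-central-1")
    open_ssh_ingress(ec2, "sg-0123456789abcdef0", "203.0.113.0/24")
```

Restricting the CIDR range to known addresses keeps the opened port from becoming a general attack surface.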
In numerous places, we also had to allow communication relationships between the AWS components and the user. Once we had established and understood these chains of action, we were able to implement an automation script in Python. Thanks to the intuitive and comprehensive boto3 library, this was done remarkably quickly and smoothly.
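Our script itself is not reproduced here, but its core can be sketched with boto3 along the following lines. All identifiers (AMI id, region, instance type, tag values) are hypothetical placeholders, not values from our setup:

```python
def launch_test_instances(ec2, ami_id, count, instance_type="t3.large"):
    """Start `count` EC2 instances from the prepared test AMI and
    return their instance ids."""
    resp = ec2.run_instances(
        ImageId=ami_id,
        MinCount=count,
        MaxCount=count,
        InstanceType=instance_type,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "tpt-test-run"}],
        }],
    )
    return [inst["InstanceId"] for inst in resp["Instances"]]


def terminate_test_instances(ec2, instance_ids):
    """Shut the instances down again so they stop incurring costs."""
    if instance_ids:
        ec2.terminate_instances(InstanceIds=instance_ids)


def main():
    # Not executed here: requires boto3 and AWS credentials.
    import boto3
    ec2 = boto3.client("ec2", region_name="eu-central-1")
    ids = launch_test_instances(ec2, "ami-0123456789abcdef0", count=4)
    try:
        pass  # upload test files, run the TPT tests, fetch the reports
    finally:
        terminate_test_instances(ec2, ids)  # always clean up
```

Tagging the instances at launch makes it possible to find and clean them up later; the try/finally pattern ensures they are terminated even if a step in between fails.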
However, scripts executed on a local computer to control the process can also pose risks: if the connection between the local computer and the cloud is lost, orphaned instances may continue to run and incur unnecessary costs.
Additionally, a script needs to be initiated by a user. This can lead to delays, can be inconvenient at times, and may also be error-prone.
To minimize such risks associated with local scripts, we expanded our first use case: We set up an additional instance in the cloud to initiate and monitor the test execution process. This is described in the second use case.
Because Use Case 1 was completed very quickly and successfully, the question of further automation arose immediately. Why? It is less error-prone, more precise, faster, and more robust.
The first case was to be extended by instantiating a Jenkins server in the cloud. Similar to setting up EC2 instances, Jenkins was installed in an AMI . The major difference between this instance and the instances used for testing is that this instance will run continuously.
The reason for this continuous operation: This instance is meant to monitor the connected software repository (version control) GIT for changes in the code and ongoing tests at all times.
If a team member makes a change to a part of the software, like the Matlab/Simulink model, this change should be immediately detected and verified. Typically, in software version control, there are multiple levels.
For quality assurance, the following principle has been established: Only changes that meet the quality standard are merged into the production path. In contrast, in a development path, the development hands over the change to the version management. This is detected by Jenkins.
Now, first, quality assurance measures like our software tests should demonstrate that the change is mature enough for the production path.
If it is mature enough, it will be incorporated. If not, the change will be discarded, and the development team will receive feedback indicating that the change is causing issues. This is usually communicated through an email from Jenkins.
Additionally, Jenkins offers a wide range of useful plugins, including the EC2 plugin. With its help, EC2 instances are automatically launched, monitored, and pipeline jobs (tests) are executed.
Use Case 2 was successfully implemented and represents an impressive integration of cloud services into our CI environment. The seamless integration was achieved effortlessly, thanks to the excellent documentation of the plugins used.
A significant portion of the work focused on security activities, such as opening inbound ports and setting up webhooks to fetch files from private repositories, and managing access using access tokens.
Additionally, communication relationships between the AWS services and the Jenkins server had to be established to enable automatic scaling through the automatic creation and connection of new instances. Thanks to this innovative solution, we were able to successfully overcome the challenges of Use Case 1.
As soon as changes are made in the Git repository, the Jenkins server automatically initiates the creation of new EC2 instances and starts the execution of the pipeline jobs.
The Jenkins server autonomously monitors the created instances and terminates them immediately once the test cases are completed and the results have been uploaded.
The current solution does not automatically detect when test cases are stuck and run longer than expected, failing to terminate them in a timely manner.
Internal monitoring mechanisms, known as ‘watchdogs’, can be implemented. These watchdogs can stop test cases if they unexpectedly run for an extended period to control execution time and prevent bottlenecks.
TPT constructs a test framework tailored to the software.
TPT creates test data to stimulate the test subject.
In automotive software development, tests are typically executed on a PC. For very large models or extensive tests, the execution can even take several days. This primarily arises from the sequential processing of test cases and the limited scalability on a PC.
Assessments for evaluating whether the test subject functions correctly.