Tagged: ARINC 661, CDS, Performance, SCADE
Introduction
Among all the challenges that the development of an ARINC 661 Cockpit Display System (CDS) presents, defining, designing for, and meeting performance objectives is one of the most difficult. This is mainly because multiple stakeholders produce software that shares a common resource: the Cockpit Display System.
Integrated Modular Avionics faces similar size, weight, and power challenges, for which consensus guidelines exist in the DO-297 guidance. However, no such guidelines exist yet for ARINC 661.
In this article, we will outline a generic framework to define a performance budget for an ARINC 661 CDS. We will first define activities required to measure the performance of a CDS, then suggest methods to comply with these requirements along the development and V&V phases of the cockpit. We will conclude with a summary diagram mapping out the approach.
Note: this article assumes familiarity with the ARINC 661 standard. If you need a refresher, you may first want to read our introduction article.
Aircraft (A/C) level definition activities
Defining User System (US) functions
An ARINC 661 Cockpit Display System provides a Human Machine interface between A/C pilot(s) and the User Systems (US) they need to interact with. The existence, identification and function definition of these different User Systems come very early in the A/C definition. These activities are needed to identify providers for these US. The initial contract between A/C manufacturer and US providers will describe the User System functions and stipulate that (parts of) the User Systems are going to communicate with the Cockpit through the ARINC 661 standard.
Defining cockpit configurations
With the definition of US functions, initial decisions are made on how they interact with cockpit functions. This leads to defining initial cockpit states, windows and screen partitions. This will be further referred to as cockpit configurations.
As part of exchanges with the US, logical layers are defined and allocated to the US. This logical split needs to be realized at the CDS level. CDS providers and US suppliers split the functional layers into display layers, decide which ones are going to rely on the ARINC 661 standard, and allocate them to a User Application (UA). In the end, a US can be composed of multiple UAs.
As a result, each US provider knows the UAs it will provide and the layers they will be composed of. Application IDs and Layer IDs can be defined at this time and used to formalize the ARINC 661 level configurations.
High-level layer-states may be identifiable at this stage (e.g. some layers are mutually exclusive, or not interactive).
Defining cockpit high-level performance requirements
With cockpit configurations defined, the A/C manufacturer can set initial expectations for CDS performance. High-level performance requirements can boil down to:
- Present information in time for pilot decision
- Timely react to pilot decision
- Provide usable and comfortable pilot interaction
The first two points are linked to the functions and their timing requirements, while the last point is linked to human factors. Human factors are usually configuration independent, so a blanket value can be given for the whole cockpit. However, lower performance might be accepted in severely degraded conditions.
Note: most of these human factors related performance requirements are stronger for interactive areas. Identifying and minimizing interactive areas (windows and/or layers) of the cockpit is a way to relax performance needs.
In each configuration, a window has a function linked to pilot activity. This function imposes its own performance requirements. We can then propagate these requirements down to the layers composing the window.
Example: a Horizontal Situation Indicator (HSI) window may show a Traffic Collision Avoidance System (TCAS) layer which needs to be refreshed at 5 Hz, and a weather radar that can be refreshed at 0.5 Hz.
Large variation in performance requirements within the same window should prompt for reconsidering window layout – typically splitting into sub-windows.
CDS level development activities
Hardware/software architecture
As the CDS architecture is being refined, it needs to be verified from a safety standpoint (ARP4761A). It is important to analyze high-level performance requirements to ensure they are achievable. Pay attention to:
- Latency due to communication between input devices and the ARINC 661 server
- Latency due to communication time between UAs and the ARINC 661 server
- Latency introduced by partition scheduling of all components in-between
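To make this analysis concrete, here is a minimal sketch of a worst-case latency roll-up against a hypothetical "React <" requirement; all component names and figures are assumptions for illustration, not measured values.

```python
# Illustrative latency budget check; names and numbers are assumptions.
from dataclasses import dataclass

@dataclass
class LatencyContribution:
    name: str
    worst_case_ms: float

def end_to_end_latency(contributions: list[LatencyContribution]) -> float:
    """Worst-case end-to-end latency is the sum of all contributions."""
    return sum(c.worst_case_ms for c in contributions)

react_requirement_ms = 100.0  # hypothetical "React <" requirement

chain = [
    LatencyContribution("input device -> ARINC 661 server", 12.0),
    LatencyContribution("server -> UA transmission", 8.0),
    LatencyContribution("UA partition scheduling wait", 25.0),
    LatencyContribution("UA -> server response", 8.0),
    LatencyContribution("server layer redraw", 16.0),
]

total = end_to_end_latency(chain)
print(f"Worst-case react latency: {total} ms "
      f"({'OK' if total <= react_requirement_ms else 'OVER BUDGET'})")
```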
Building a UA performance estimation strategy
The CDS is a shared resource between the different UAs. The performance of the whole CDS + USs can only be completely assessed in a complete, integrated cockpit, including all the User Systems with their hardware, and their physical connection to the server.
However, waiting for full integration to assess the USs and ARINC 661 CDS design choices is problematic: when performance does not meet expectations at the end, who is responsible? It could be anyone in the A/C → CDS → US chain, and this might trigger deep rework.
It is thus necessary to provide a performance reference for User System developers. They must be able to check against this reference as they progress through their design work.
This reference can only be built by the CDS provider, as it is linked to cockpit capabilities, architecture and implementation. It is derived from the high-level performance requirements established by the A/C provider and described above.
As the configuration of the CDS is usually not known by the UA, and a layer is usually present in multiple configurations, we can simplify the performance requirements for each layer of each UA by taking the most demanding one across all its potential configurations. Here again, a large variation should prompt questioning.
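As an illustration, here is a minimal sketch of this simplification with hypothetical per-configuration values; the threshold used to flag a "large variation" is arbitrary.

```python
# Hypothetical "Present <" requirements for one layer, per configuration (ms).
# "Most demanding" means the smallest allowed presentation time.
present_requirement_ms = {
    "ILS Landing": 200.0,
    "Cruise": 1000.0,
    "Pre-Flight": 1000.0,
}

layer_requirement_ms = min(present_requirement_ms.values())
spread = max(present_requirement_ms.values()) / layer_requirement_ms
print(f"Layer requirement: present < {layer_requirement_ms} ms")
if spread >= 5:  # arbitrary threshold: large variation should prompt questioning
    print("Large variation across configurations: consider revisiting the layer split")
```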
We now have a high-level performance requirement for each UA layer. We recommend communicating and refining these requirements with the UA supplier, as they should be consistent with UA logic performance requirements. What’s the point in being able to refresh something every 10ms if the logic produces a value every 50ms?
High-level performance requirements can be further broken down into timing requirements along the different components of the architecture (CDS elements, communications and User System) and analyzed as such. SysML v2 modeling is appropriate for this activity.
However, these high-level performance requirements are not sufficient to define the interfacing performance of UAs, that is, the cost of commands that set parameters at Definition time (D), modify them at Runtime (R), or both (D+R). This is because the cost of these commands depends on the CDS implementation. The CDS supplier therefore needs to provide a performance budget as well as a performance cost function, to be used by UA suppliers.
Determining performance-relevant resources
A CDS is a complex system providing various kinds of resources, like screen space, input devices, computing power, communication bandwidth between its components, etc. These resources are shared between the UAs. It is part of the duty of the CDS supplier to establish the list of performance-relevant resources as well as their availability. A resource can be something concrete or completely abstract. The unit used to quantify each resource’s availability can be physical or completely arbitrary. You only need to specify resources that:
- Can be consumed by UAs (for instance, if GPU memory consumption is not impacted by the UAs, it does not need to be specified)
- Are going to impact/contribute to high level performance requirements.
You may also list other limiting factors that do not impact performance, like allocable memory. These can be part of the breakdown and communicated to UA suppliers.
The result is an empty radar chart, with each branch representing a resource:
You must now specify, for each resource, its total availability for the CDS in the unit of your choice. E.g.:
- CPU Processing time: 9 billion cycles
- GPU processing time: 100 %
- CPU → GPU Bandwidth: 100 000 vertices/s
- UA → CDS Communication Bandwidth: 12 MB/s
- CDS → UA Communication Bandwidth: 100 events/cycle
- …
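A minimal sketch of how such a resource declaration could be captured, reusing the example values above; the key names and the choice of a Python dictionary are illustrative, not prescribed by the standard.

```python
# Hypothetical CDS resource availability declaration, in the arbitrary units
# chosen by the CDS supplier (values taken from the list above).
CDS_RESOURCES = {
    "cpu_cycles": 9_000_000_000,          # CPU processing time
    "gpu_time_pct": 100.0,                # GPU processing time
    "cpu_to_gpu_vertices_per_s": 100_000, # CPU -> GPU bandwidth
    "ua_to_cds_bandwidth_mb_s": 12.0,     # UA -> CDS communication bandwidth
    "cds_to_ua_events_per_cycle": 100,    # CDS -> UA communication bandwidth
}
```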
Establishing a cost function
Once we have a set of available resources, we need to understand how they are going to get consumed by the UAs. They can be consumed by elements of the User Application Definition File (UADF), mainly layers and widgets, and by run-time messages.
This consumption can be represented by a cost function. Its role is to produce a consumption estimation from a given UADF and/or set of runtime messages. To simplify our examples, this article will focus on costs related to UADF content, but the reasoning is similar for Runtime parameters.
The first step in establishing this cost function is to identify UADF items and attach a cost to each resource.
The level of abstraction used for these UADF items can range from very abstract to very concrete and can be refined during the lifecycle. The means used to define the corresponding cost (educated guess, direct measurement, static binary analysis) are the responsibility of the CDS supplier.
Widget types and states usually have a big impact on performance, so decomposing along the most meaningful states gives a finer-grained estimation. Visible, Enable and Anonymous are obvious states, but performance can also be linked to maximum string length, value range, widget screen surface, or any other parameter impacting resource consumption.
Example:
| Widget type | Widget state | CPU cost | GPU cost | … |
|---|---|---|---|---|
| * | Visible = A661_TRUE | 1 | 0 | |
| Graphics Primitive | Anonymous = A661_TRUE | 2 | 2 | |
| Graphics Primitive | Anonymous = A661_FALSE | 4 | 6 | |
| Interactive widget | Enable = A661_FALSE | 10 | 10 | |
| Interactive widget | Enable = A661_TRUE | 25 | 14 | |
| … | … | … | … | … |
| PopupPanel | Enable = A661_TRUE and Visible = A661_TRUE | 12 | 2 | |
| MapItem | | 0.1 | 1 | |
| MapItem | Interactive | 0.4 | 1 | |
Now that we have a “price list” for each item, we can estimate how much a layer is going to cost. As a first approximation, one can simply sum all the contributions of the widgets in their most expensive known reachable state, for each cost column.
Determining the most expensive reachable state for a widget depends on the state of its layer (usually Visible, Active or Disabled) as well as on its ancestors.
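A minimal sketch of this first-approximation cost function, reusing some of the example price-list values above; the lookup keys and helper names are illustrative.

```python
# Sketch of a first-approximation cost function: sum the price-list entries of
# every widget in its most expensive known reachable state.
PRICE_LIST = {
    # (widget type, widget state) -> {resource: cost}, values from the example table
    ("*", "Visible=TRUE"):                    {"cpu": 1,   "gpu": 0},
    ("GraphicsPrimitive", "Anonymous=TRUE"):  {"cpu": 2,   "gpu": 2},
    ("GraphicsPrimitive", "Anonymous=FALSE"): {"cpu": 4,   "gpu": 6},
    ("InteractiveWidget", "Enable=FALSE"):    {"cpu": 10,  "gpu": 10},
    ("InteractiveWidget", "Enable=TRUE"):     {"cpu": 25,  "gpu": 14},
    ("MapItem", ""):                          {"cpu": 0.1, "gpu": 1},
    ("MapItem", "Interactive"):               {"cpu": 0.4, "gpu": 1},
}

def layer_cost(widgets):
    """widgets: iterable of (widget_type, worst_reachable_state) pairs."""
    total = {"cpu": 0.0, "gpu": 0.0}
    for wtype, state in widgets:
        for resource, value in PRICE_LIST[(wtype, state)].items():
            total[resource] += value
    return total

# Example layer: one interactive widget and three map items
print(layer_cost([("InteractiveWidget", "Enable=TRUE"),
                  ("MapItem", ""), ("MapItem", ""), ("MapItem", "Interactive")]))
```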
Some containers offer more precise constraints that can be used to produce a less pessimistic cost function.
Example:
The MutuallyExclusiveContainer guarantees that only one of its children is visible. When computing its draw time, we can propagate state change for its children in the following fashion:
- Evaluate the cost of each branch in both its visible and invisible states
- For each branch, add its cost when visible to the cost of all other branches when invisible
- Estimate cost for the MutuallyExclusiveContainer as the maximum of the previously computed cases:
$$t_{visible}(C) = t_{visible}(C_{no\ children}) + \max\limits_{{i \in [1,N]}} \left(t_{visible}(b_i) + \sum_{\substack{j=1 \\ j \neq i}}^{N} t_{invisible}(b_j)\right)$$
This reduces the child draw time estimate to the slowest child draw time, instead of the sum of all child draw times.
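A short sketch of the same rule in code, under the assumption that branch costs are simple scalar draw times:

```python
# MutuallyExclusiveContainer rule: only one branch is visible at a time, so the
# container cost is its own cost plus the worst case of
# "branch i visible, all other branches invisible".
def mec_visible_cost(container_own_cost, branches):
    """branches: list of (visible_cost, invisible_cost) pairs, one per child."""
    total_invisible = sum(inv for _, inv in branches)
    worst_branch = max(vis + (total_invisible - inv) for vis, inv in branches)
    return container_own_cost + worst_branch

# Example: three branches with hypothetical draw-time costs
print(mec_visible_cost(1.0, [(10.0, 0.5), (4.0, 0.2), (7.0, 0.3)]))
```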
Most layers and widget states are dynamic, which means their state and cost cannot be determined by static analysis of the DF. In this case, one can either:
- Use the most pessimistic value per cost column
- Add a constraint to the UA using a specific requirement (e.g. BasicContainer A and B cannot be enabled simultaneously)
- Rely on information provided by the UA provider. This can be made explicit and refined later in the UA Test logic and final logic.
Mapping high-level performance requirements to low-level resources
For each configuration, the CDS provider now compares the available resources with the high-level performance requirements, and splits the resources accordingly:
| Configuration 1 (e.g. ILS Landing) | UA1, Layer 3 | UA1, Layer 2 | UA2, Layer 1 | … | Total Budgeted | Available |
|---|---|---|---|---|---|---|
| Present < | 10 ms | 1 s | 500 ms | | | |
| React < | N/A | 1 s | 100 ms | | | |
| Interact < | N/A | 30 ms | 30 ms | | | |
| CPU | 8 | 6 | 4 | … | 8+6+4+… = 53 | 100 |
| GPU | 14 | 4 | 3 | … | 14+4+3+… = 44 | 100 |
| UA→CDS BW | 0.5 | 1 | 0.1 | … | 0.5+1+0.1+… = 3.5 | 12 |
| CDS→UA BW | 0 | 2 | 2 | … | 0+2+2+… = 4 | 100 |
| Configuration 2 (e.g. Pre-Flight) | UA5, Layer 1 | UA5, Layer 2 | UA5, Layer 3 | … | Total Budgeted | Available |
|---|---|---|---|---|---|---|
| Present < | 1 s | 1 s | 1 s | | | |
| React < | N/A | N/A | 100 ms | | | |
| Interact < | 30 ms | 30 ms | 30 ms | | | |
| CPU | 4 | 4 | 4 | … | 8+6+4+… = 53 | 100 |
| GPU | 14 | 4 | 3 | … | 14+4+3+… = 44 | 100 |
| UA→CDS BW | 0.5 | 1 | 0.1 | … | 0.5+1+0.1+… = 3.5 | 12 |
| CDS→UA BW | 2 | 2 | 2 | … | 2+2+2+… = 6 | 100 |
This resource split is somewhat arbitrary and likely needs adjustments and negotiations between the CDS supplier, the airframer, and the UA suppliers. The high-level performance requirements should be used as a guide, but other parameters can help allocate resources. Examples include the DAL of the function (dedicate more resources to something more critical) and the surface of the layer (something smaller generally requires fewer resources to draw than something big).
Note: Reservation for future functional expansion should be inserted and verified at this level.
The initial split can be something as rough as a uniform division between UAs or their layers. This resource allocation for each layer, taking the most pessimistic configuration, is then given to UA suppliers. They compare the estimated cost of their UA, using the price list and cost estimation rules, with their allocated budget as they design. Having this roughly estimated horizon from the beginning helps keep UA design decisions compatible with CDS platform capabilities.
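A minimal sketch of such an initial split, assuming a uniform division between layers and a reserved margin; resource names and values are illustrative.

```python
# Rough initial budget split: reserve a margin on each resource, then divide
# the remainder uniformly between layers.
def uniform_layer_budgets(available, layer_ids, margin=0.20):
    budgets = {}
    for resource, total in available.items():
        per_layer = total * (1.0 - margin) / len(layer_ids)
        for layer in layer_ids:
            budgets.setdefault(layer, {})[resource] = per_layer
    return budgets

available = {"cpu": 100.0, "gpu": 100.0, "ua_to_cds_bw": 12.0}
layers = ["UA1/Layer2", "UA1/Layer3", "UA2/Layer1"]
print(uniform_layer_budgets(available, layers))
```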
Example of budget with a 20% margin (for safety or future expansion)
The outcome of this performance estimation activity is, for each UA layer, a resource budget and a cost function.
CDS and ARINC 661 server development
An avionics ARINC 661 cockpit is a complex architecture with strict safety and performance requirements. It differs from most other systems by its graphics stack, which is composed of multiple hardware components and software layers, potentially coming from different vendors. Designing for performance requires a good understanding of the graphics stack, but above all a systematic, comprehensive, and easy-to-execute performance measuring mechanism. Developers need means to immediately understand the performance impact of their design choices.
This implies in particular a fine-grained instrumentation of the server for its main functions: reception of messages from UAs, reception of events from input devices, decoding of incoming messages, UA/layer/widget finding functions, layer behavior execution, layer drawing, encoding of outgoing messages, etc. Pay attention to the graphics stack, by instrumenting each interface of the multiple SW layers.
This server instrumentation must be easy to deploy and execute on target. The more automation, the better: Continuous Integration, systematic measurement of performance against a reference (daily or at each commit), and publication of performance metrics are important tools for controlling and understanding CDS performance evolution during development.
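As an example of what the CI side of this could look like, here is a minimal sketch comparing per-phase timings against a stored reference; the phase names, figures, and tolerance are assumptions.

```python
# CI-side regression check: compare per-phase timings measured on target
# against a reference build, and flag regressions above a tolerance.
REFERENCE_US = {  # microseconds, hypothetical reference measurements
    "decode_messages": 120.0,
    "layer_behavior": 340.0,
    "layer_draw": 910.0,
    "encode_output": 60.0,
}

def check_regressions(measured_us, reference_us=REFERENCE_US, tolerance=0.10):
    regressions = []
    for phase, ref in reference_us.items():
        now = measured_us.get(phase, float("inf"))
        if now > ref * (1.0 + tolerance):
            regressions.append((phase, ref, now))
    return regressions

measured = {"decode_messages": 118.0, "layer_behavior": 420.0,
            "layer_draw": 905.0, "encode_output": 61.0}
for phase, ref, now in check_regressions(measured):
    print(f"Regression in {phase}: {ref} us -> {now} us")
```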
CDS resource budget validation
Now that the CDS supplier has defined the quantities of available resources to be shared, these can be used in the server verification strategy. The first step is to verify that this budget meets the high-level performance requirements. The CDS provider can create UADFs that consume all (or slightly more) of the proposed resources, using the price list and cost evaluation function, and then verify that the high-level cockpit performance objectives are effectively met.
A more refined approach is to do this for each configuration: create reference cost layers consuming the whole budget of each layer, and verify that the high-level performance requirements of each layer are met.
CDS performance validation
Once the overall budget is secured, the CDS supplier needs a way to ensure that for each configuration, the set of UA-provided layers fits the budget. This can be done:
- Manually, by applying the cost estimation function to each layer and comparing it to the budget.
- Manually, by comparing the sum of the costs of all layers in a given configuration to the total budget.
- Automatically, using an offline tool to perform the above calculations. This may be achieved using a tool such as the SCADE ARINC 661 Test Automation Framework.
- Dynamically, by the CDS itself at the end of the definition phase.
Note: in the last case, we could imagine having the CDS refuse to enter the runtime phase if the estimated cost is above budget. This is particularly interesting when the DO-178C certification strategy of the CDS is to consider the UADFs as Parameter Data Items (PDI). The dynamic budget consumption check may be presented as part of robustness verifications.
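A generic sketch of such a budget check, whether performed offline or at the end of the definition phase; it is not the API of any particular tool, and the data is illustrative.

```python
# For one configuration: sum the estimated cost of its layers and compare
# against the available resources.
def configuration_within_budget(layer_costs, available):
    """layer_costs: {layer_id: {resource: cost}}; available: {resource: total}."""
    totals = {}
    for costs in layer_costs.values():
        for resource, value in costs.items():
            totals[resource] = totals.get(resource, 0.0) + value
    overshoots = {r: (totals[r], available.get(r, 0.0))
                  for r in totals if totals[r] > available.get(r, 0.0)}
    return len(overshoots) == 0, overshoots

ok, overshoots = configuration_within_budget(
    {"UA1/Layer3": {"cpu": 8, "gpu": 14}, "UA1/Layer2": {"cpu": 6, "gpu": 4}},
    {"cpu": 100, "gpu": 100})
print("within budget" if ok else f"over budget: {overshoots}")
```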
Validation that each layer's estimated cost fits in the global CDS budget envelope, in each configuration
Solving performance overshoots
Solving overconsumption of CDS resources by UAs can be achieved using any combination of the following tracks:
- Add more resources to the global pool – e.g. change the hardware platform
- Reallocate resources between UAs – other UAs might not use their full budget
- Rework UA requirements, the UADF and UA logic
- Reduce the price list by improving server software: architecture, implementation of specific optimizations, driver update, etc.
- Refine the evaluation mechanism: define more price list items and update the cost estimation function
The ultimate arbiter in case of disagreement on resource sharing should be the airframer.
User System development activities
User System development activities should take into account the cost function and the allocated budget, which can have a deep impact. These constraints might prompt for a different layer split, use of different widgets, or different run-time communications. Early availability of these constraints avoids future tedious rework and helps determine a sound performance strategy.
As the UADF gets developed, regular application of the cost function prevents budget overshoots. Here again, automation helps. Ideally, a Continuous Integration system should compute the cost function regularly and publish its progression compared to the corresponding budget.
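A minimal sketch of such a CI gate on the UA side, with a placeholder in lieu of the actual UADF cost estimation; names and numbers are illustrative.

```python
import sys

ALLOCATED = {"cpu": 6.0, "gpu": 4.0}  # budget received from the CDS supplier

def estimate_uadf_cost(uadf_path):
    """Placeholder: in practice, parse the UADF and apply the price list."""
    return {"cpu": 5.2, "gpu": 4.5}  # hypothetical current estimate

estimated = estimate_uadf_cost("hsi_layer.xml")
over = {r: (estimated[r], ALLOCATED[r]) for r in ALLOCATED if estimated[r] > ALLOCATED[r]}
if over:
    print(f"Budget overshoot: {over}")
    sys.exit(1)  # fail the build so the overshoot is caught as soon as it appears
print("Estimated cost within budget")
```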
User System V&V activities
Does the estimated cost fit the budget?
The first User System verification is to ensure that the estimated cost fits within the envelope. Should that not be the case, the issue can be worked out at three different levels, with increasing levels of effort. The further back in the chain you go, the heavier the impact is.
UA level:
- Rework the UADF; this likely impacts the UA logic as well
CDS level:
- Request more budget from the CDS provider; maybe some UA providers are not going to use their whole budget and can give away some of it
- Refine the cost function to be less pessimistic and closer to reality; this has the added advantage of benefiting all UA providers
- Rework the CDS architecture, e.g. to reduce latencies
A/C level:
- Make the high-level performance requirements easier to attain: relax the constraints
- Rework the CDS Configuration
- Redefine the User System function
Providing UA Test logic
Once the UADF is available, its performance should be assessed by the CDS provider in the actual cockpit. When the CDS and UA are developed by two different providers, this can prove challenging.
Loading the DF into the cockpit with no UA logic would simply do nothing: layers are hidden by default. This does not put the CDS in an interesting situation from the performance measurement point of view.
The UA provider cannot expect the CDS provider to develop UA test logic themselves; understanding how a complex UADF is meant to be controlled by the UA logic is too costly.
Likewise, delivering the real UA logic is also frequently impossible:
- The complete US function might depend on dedicated hardware, significant volumes of input data, interactions with other components, complex scenarios, etc.
- US providers might not want to deliver their hardware and software to a CDS provider who might also be a competitor (IP protection)
Having the US provider deliver dedicated UA test logic is a good solution to this problem. Test logic is typically pure software, so it is easy to distribute. It should be pessimistic from a performance point of view, but only as much as necessary. Example of overly pessimistic test logic: make all widgets visible. Example of reasonably pessimistic test logic: set all Labels that the real UA can change (as listed in the Interface Control Document) to their maximum length.
Test logic can be partly generated by analyzing the UADF and the relevant Interface Control Documents (authorized messages). However, some aspects require high-level knowledge of the UA function that might not be guessable by the CDS provider.
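A minimal sketch of such partial generation, assuming a simplified extract of the Interface Control Document; the field names and the printed message format are illustrative.

```python
# For each Label the UA is allowed to change, emit a worst-case (maximum
# length) string update, as an example of "reasonably pessimistic" stimuli.
ICD_LABELS = [  # hypothetical extract of the Interface Control Document
    {"widget_id": 101, "parameter": "LabelString", "max_length": 12},
    {"widget_id": 102, "parameter": "LabelString", "max_length": 30},
]

def pessimistic_label_updates(icd_labels):
    """Build worst-case string updates for every writable Label."""
    return [(entry["widget_id"], entry["parameter"], "W" * entry["max_length"])
            for entry in icd_labels]

for widget_id, parameter, value in pessimistic_label_updates(ICD_LABELS):
    print(f"SetParameter(widget={widget_id}, {parameter}='{value}')")
```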
CDS-level integration V&V
At this stage, the CDS provider collects all the UADFs from the US providers, along with their test logic. It becomes possible to load all of them into the server, connect all the test logic, and iterate over the multiple configurations and test logic states to ensure that the high-level performance requirements are met. There is still time to fix some performance issues at this stage:
UA level:
- Refine the test logic: is it too pessimistic?
- Rework the UADF and possibly UA logic
CDS level:
- Rework the CDS implementation
- Rework the CDS architecture, e.g. to reduce latencies
A/C level:
- Make the high-level performance requirements easier to attain: relax the constraints
- Rework the CDS configuration
- Redefine the User System function
It is also time for a first check on the visual consistency of the cockpit. Some adjustments to the UADFs can still easily be made at this stage: style sets, layouts, white space, colors, fonts, etc.
A/C Level integration V&V
This is the last V&V step, where the CDS and all the complete UAs are integrated and connected through the embedded network. All the previous V&V steps are meant to reduce the chance of discovering problems only at this level. Only integration issues should remain, like actual network connections, UA/US scheduling, combination of UA logics, etc.
Summary
In this article, we outlined a set of methods to define a performance budget for an ARINC 661 Cockpit Display System and to enforce it.
Below is a large diagram summarizing the approach: a tight collaboration between the airframer, CDS provider and UA providers, based around a performance budget and cost function, with multiple opportunities to iterate and have the system fit the budget.
Diagram of the complete framework outlined in this article
Explore further
If you’d like to learn more about Ansys SCADE Solutions for ARINC 661 Compliant Systems, we’d love to hear from you! Get in touch on our product page.
About the author
Aubanel Monnier (LinkedIn) is a Senior Principal Engineer in the Ansys Customer Excellence organization. His main expertise is in the Embedded Software domain. He has been working on the SCADE product lines for more than 20 years. He contributed to the creation of the Ansys ARINC 661 solution and to the standard itself.
After 5 years in Asia (Shanghai, Tokyo), he relocated to France in August 2024. He is in charge of the global adoption of the new generation of the SCADE product line: Scade One.
© 2025 Copyright ANSYS, Inc. All rights reserved.

