Introduction
Performance, availability and traffic limits fall into the general category of non-functional requirements (NFRs) and are discussed in the CDS Non-functional Requirements section. This section covers the following areas:
- Availability requirements
- Performance requirements
- Traffic thresholds
- Data recipient requirements
- Reporting requirements
- Data latency
- Data quality
- Exemptions to protect service
NFRs give Data Holders a traffic and availability cap around which to plan their infrastructure, and give Accredited Data Recipients a predictable level of service.
Limiting large data payloads
For large data payloads that may affect response times, the DSB recommends that CDR participants make use of:
- The 1,000 maximum page size set in the CDS Pagination standards (Additional pagination rules), which puts an upper bound on the size of any single invocation regardless of the data available for the response (see the paging sketch after this list).
- The 95% compliance requirement for Performance in the NFRs, which accommodates the rare instances where the set thresholds are exceeded.
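As an illustration only, the sketch below shows a client paging through a resource instead of requesting everything in one oversized call. The `page` and `page-size` query parameters and the 1,000 page-size cap reflect the CDS Pagination standards; the endpoint URL, headers, token handling and response field names are assumptions made for the example, not a prescribed implementation.

```python
import requests  # any HTTP client will do; requests is used for brevity

# Placeholder endpoint and page-size cap. The page and page-size query
# parameters and the 1,000 maximum follow the CDS Pagination standards;
# the URL, headers and response field names here are assumptions.
BASE_URL = "https://data.holder.example/cds-au/v1/banking/accounts"
MAX_PAGE_SIZE = 1000

def fetch_all_accounts(access_token: str) -> list[dict]:
    """Page through a resource rather than requesting it in one oversized call."""
    records, page = [], 1
    while True:
        response = requests.get(
            BASE_URL,
            headers={
                "Authorization": f"Bearer {access_token}",
                "x-v": "1",  # illustrative endpoint version header
            },
            params={"page": page, "page-size": MAX_PAGE_SIZE},
        )
        response.raise_for_status()
        body = response.json()
        records.extend(body["data"]["accounts"])
        # meta.totalPages indicates when the client has retrieved every page.
        if page >= body["meta"]["totalPages"]:
            return records
        page += 1
```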
Audit information
The DH is required to collect information sufficient to meet the NFRs. While there is no specified requirement to collect additional audit information, the DSB suggests there is a non-binding expectation that the DH collect any information that would reasonably be required to troubleshoot and operationally manage the system. The ACCC, as regulator, can request any data it requires to ensure compliance and the operational integrity of the ecosystem.
Outages
Planned outages should be announced with as much lead time as possible. When a planned outage takes place, the resulting unavailability does not have to be reported as a failure.
Planned outages may occur with less than seven days' notice, or without notification, where the outage is required to resolve a critical service or security issue. Planned outages that satisfy these conditions are not included in availability reporting.
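As a non-authoritative sketch of how this exclusion might be applied, the function below removes planned outage minutes from the measurement window so that only unplanned downtime counts against the availability figure. The exact reporting rules are defined in the CDS NFRs; the function name and figures here are assumptions for illustration.

```python
def monthly_availability(total_minutes: int,
                         planned_outage_minutes: int,
                         unplanned_outage_minutes: int) -> float:
    """Availability for a reporting month, as a percentage.

    Planned outages that meet the notification conditions are removed from
    the measurement window, so only unplanned downtime counts against the
    availability figure.
    """
    measured_minutes = total_minutes - planned_outage_minutes
    available_minutes = measured_minutes - unplanned_outage_minutes
    return (available_minutes / measured_minutes) * 100

# A 30-day month (43,200 minutes) with a 120-minute planned outage and
# 30 minutes of unplanned downtime reports roughly 99.93% availability.
print(round(monthly_availability(43_200, 120, 30), 2))
```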
Performance calculation
To calculate performance, use aggregation rather than averaging.
- For each of the performance tiers (unauthenticated, high priority, etc.) and each time period to be reported:
  - Find Total Invocations, defined as the aggregated number of all invocations across all APIs in that tier for the time period.
    - Include error and rejected invocations.
    - Exclude timed out invocations.
  - Find Total Exceeded, defined as the number of these invocations that exceeded the response time threshold for the tier.
  - Calculate ( 1 - (Total Exceeded / Total Invocations) ) * 100 and report the result as a percentage for that tier and time period (see the sketch after this list).
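The following Python sketch illustrates the aggregation steps above for a single tier and time period. The Invocation record and the per-tier thresholds are hypothetical placeholders rather than the CDS-defined values; the point is the aggregation logic, not the field names.

```python
from dataclasses import dataclass

# Hypothetical invocation record; the field names are illustrative only.
@dataclass
class Invocation:
    tier: str                # e.g. "unauthenticated", "high_priority"
    response_time_ms: int
    timed_out: bool
    error_or_rejected: bool  # these still count towards Total Invocations

# Illustrative per-tier response time thresholds (ms); use the values
# defined in the CDS NFRs, not these placeholders.
THRESHOLDS_MS = {"unauthenticated": 1500, "high_priority": 1000}

def tier_performance(invocations: list[Invocation], tier: str) -> float:
    """Aggregated performance percentage for one tier over one reporting period."""
    # Total Invocations: everything in the tier except timed out invocations.
    in_scope = [i for i in invocations if i.tier == tier and not i.timed_out]
    total_invocations = len(in_scope)
    if total_invocations == 0:
        return 100.0  # design choice: report an empty period as fully compliant
    # Total Exceeded: invocations over the tier's response time threshold.
    total_exceeded = sum(
        1 for i in in_scope if i.response_time_ms > THRESHOLDS_MS[tier]
    )
    return (1 - total_exceeded / total_invocations) * 100
```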
For each time period (an hour, the current day so far, or a specific 24-hour period) the calculation approach is the same; only the length of the period differs.
Do not create the daily metric as an average of averages, as this biases the final result towards low traffic periods. The appropriate calculation is to aggregate: find the total number of invocations and the total number exceeding the threshold for the report period, then apply the formula above to those totals.
An example shows the difference between aggregating and averaging. Suppose you measure 1,000 invocations within the threshold in one hour and a single invocation outside the threshold in the next hour. If you average the hourly results (100% and 0%), the performance metric is 50%. If you aggregate across the two hours (1 exceeded out of 1,001 invocations), the result is 99.9%, which is far more representative.
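The same example in code, using the figures above:

```python
# Hour 1: 1,000 invocations within the threshold; hour 2: one invocation over it.
hour1_total, hour1_exceeded = 1000, 0
hour2_total, hour2_exceeded = 1, 1

# Averaging the hourly results biases the daily figure towards the quiet hour.
hour1_pct = (1 - hour1_exceeded / hour1_total) * 100  # 100.0
hour2_pct = (1 - hour2_exceeded / hour2_total) * 100  # 0.0
average_of_averages = (hour1_pct + hour2_pct) / 2     # 50.0

# Aggregating the totals across both hours gives the representative figure.
total = hour1_total + hour2_total                     # 1001
exceeded = hour1_exceeded + hour2_exceeded            # 1
aggregated = (1 - exceeded / total) * 100             # ~99.9
```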