Insights Frequently Asked Questions

The following are frequently asked questions about CA Agile Central Insights:

Why do I have to take a survey to see visualizations of my own data?

The survey data is used for research purposes. CA Agile Central performs analysis to find correlations between team behaviors, attitudes, practices, and team performance. If, for example, the analysis indicates that smaller teams have higher responsiveness, we can share those findings with our customers so they can make decisions about how to structure their teams based on real data. We believe the time it takes to complete the surveys is a worthwhile investment. All survey responses are non-attributable.

Why can’t I see the results of my team’s surveys?

We are planning to add that functionality soon.

How often are metrics updated?

Calculations are updated every month at the beginning of the month. Virtual workspace calculations are done on demand and typically take between 30 and 60 minutes to complete.

How is team size calculated?

We use a heuristic that looks at the Owner field on stories, tasks, and defects, as well as the users who create, edit, and move those items. If each team member actually uses CA Agile Central and the Owner field is kept current, this should be fairly representative of the true team size, regardless of the team membership configured in ALM.
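A minimal sketch of this heuristic in Python: count the distinct users who own or touch a project's work items. The field names and record shape are illustrative assumptions, not the actual Insights implementation.

```python
# Hypothetical team-size estimate: distinct users who own, create, edit,
# or move work items in a project. Field names are illustrative.
def estimate_team_size(work_items):
    """Return the number of distinct active users seen on the work items."""
    users = set()
    for item in work_items:
        if item.get("owner"):
            users.add(item["owner"])
        # users who created, edited, or moved the item also count
        users.update(item.get("edited_by", []))
    return len(users)

items = [
    {"owner": "alice", "edited_by": ["alice", "bob"]},
    {"owner": "carol", "edited_by": ["dave"]},
    {"owner": None, "edited_by": ["bob"]},
]
print(estimate_team_size(items))  # 4 distinct users: alice, bob, carol, dave
```

Note that a stale Owner field or team members who never touch CA Agile Central would skew such an estimate in either direction, which is why the answer above stresses keeping the Owner field current.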

How is the Industry Average calculated?

  • The percentiles virtual benchmark is calculated from all 13,000 teams using CA Agile Central.
  • We use a Bayesian classifier to identify real teams.
  • We recalculate the distributions every month and for every quarter. This means that they will drift over time. However, this drift is so slow that any change in them is minimal compared to changes in a particular team's performance.

How is the Workspace Average calculated?

  • Similar to the Industry Average calculation, we use a Bayesian classifier to identify the real teams in the workspace, which are then used to drive the Workspace Average.
  • In the future, we hope to provide more granularity on what teams constitute the Workspace Average.
  • We recalculate the distributions every month and for every quarter.

Do the metrics roll up or are they just at the team level?

They all roll up. However, the comparison to the industry distribution is tuned to the team level. This matters little for throughput normalized by team size, defect aging normalized by team size, and TiP (normalized or not), but it does matter for variability of throughput, velocity, and the non-normalized versions of throughput and defect aging. Those metrics are still fine for tracking over time, but their percentile scores are not useful above the team level.

Why do the metrics not roll up how I expect?

When you look at the scores at the project level, those are most likely real teams.

However, as you move up the project hierarchy, more data is included. A non-normalized metric value or percentile score for a high-level project will aggregate all data for a metric into sums that may make the values look unusually high or the scores unusually low. For this reason, we recommend looking at scores for real teams.

What is the difference between projects and workspaces?

The workspace is a top-level container that holds a project hierarchy (and is not limited to one: a workspace can contain multiple project hierarchies). The project entity in CA Agile Central is the team container, but its hierarchical nature means that some projects represent other organizational entities (meta-teams, divisions, departments, and so on); some may even represent actual projects. While workspaces can contain many projects, there is a fundamental difference between the two entities: workspaces represent a benchmark, and as such only aggregate information related to real teams, whereas projects roll up all information from their sub-projects (both real and non-real teams).

To determine which project entities are actually teams, we use a Bayesian classifier that looks at how much work is contained in the project, how close to the leaf of the hierarchy it is, and a number of other characteristics.

What is the difference between real teams and non-real teams?

Real teams are project entities whose characteristics match with our heuristics for what constitutes an implementation team. This is determined by a Bayesian classifier composed of several indicators, such as:

  • The number of levels (distance) from the leaf nodes of the current branch of the project tree. Real teams tend to be at the leaf nodes, which is 0, or one level up, which is 1.
  • The number of work items in-progress in the project.
  • The full-time equivalent value for the node. Real teams tend to have between 5 and 8 members, and outside of this range, the probability of being a real team decreases.

A non-real team is a project with one or more dimensions whose values fall outside of our heuristic for what constitutes a real team.
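The three indicators above can be combined in a naive-Bayes style, as this sketch illustrates. The likelihood values and the uniform prior are made-up assumptions for illustration; Insights' actual model and weights are not published.

```python
# Illustrative naive-Bayes-style "real team" score from the three
# indicators described above. All probability values are invented.
def real_team_probability(levels_from_leaf, items_in_progress, fte):
    p_real, p_not = 0.5, 0.5          # uniform prior
    # leaf proximity: real teams sit at the leaf (0) or one level up (1)
    like = (0.9, 0.2) if levels_from_leaf <= 1 else (0.1, 0.8)
    p_real, p_not = p_real * like[0], p_not * like[1]
    # active work: real teams carry work items in progress
    like = (0.8, 0.3) if items_in_progress > 0 else (0.2, 0.7)
    p_real, p_not = p_real * like[0], p_not * like[1]
    # FTE: likelihood peaks for teams of 5-8 members
    like = (0.7, 0.2) if 5 <= fte <= 8 else (0.3, 0.8)
    p_real, p_not = p_real * like[0], p_not * like[1]
    return p_real / (p_real + p_not)   # normalize to a probability

print(round(real_team_probability(0, 12, 6), 3))   # leaf project: 0.977
print(round(real_team_probability(3, 0, 40), 3))   # high-level node: 0.013
```

The design point is that no single indicator decides the classification; each one shifts the odds, so a leaf-level project with in-progress work and a plausible FTE scores high even if one indicator is borderline.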

Can we drill down to the data that makes up a metric or score to better understand the root issue?

We have added that functionality for the quality and productivity metrics and plan to add it for the other metrics.

What does full-time equivalent mean?

Full-time equivalent (FTE) is a measurement of team size that accounts for contributions from part-time members. For example, if three full-time team members work on a project alongside two part-time members who each devote 50% of their time to it, the full-time equivalent is four team members.
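The arithmetic is simply a sum of fractional allocations, as this minimal sketch of the example above shows:

```python
# FTE example from the text: fractional allocations sum directly.
allocations = [1.0, 1.0, 1.0, 0.5, 0.5]  # three full-time, two half-time
fte = sum(allocations)
print(fte)  # 4.0
```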

How can I make sure I am capturing data correctly for the charts to work?

It is important that you are using the expected fields, or if importing from elsewhere, that you have connected the fields correctly.

If indicated with an X, the requirement must be met in order for the charts to be accurate.

Requirements (table columns):

  • Schedule State (In-Progress, Completed, split stories not left open)
  • Story Points
  • Unique User in CA Agile Central (the person who performs the action has a user in CA Agile Central)
  • Defect State (Open, Fixed, Closed)
  • Defect Environment (Test, Production)

| Metric | Schedule State | Story Points | Unique User | Defect State | Defect Environment |
|---|---|---|---|---|---|
| Throughput (Stories) Normalized | X | | X | | |
| Throughput (Defects) Normalized | X | | X | | |
| Throughput (Stories and Defects) Normalized | X | | X | | |
| Throughput (Stories) Absolute | X | | | | |
| Throughput (Defects) Absolute | X | | | | |
| Throughput (Stories and Defects) Absolute | X | | | | |
| Velocity (Stories) Normalized | X | X | X | | |
| Velocity (Defects) Normalized | X | X | X | | |
| Velocity (Stories and Defects) Normalized | X | X | X | | |
| Velocity (Stories) Absolute | X | X | | | |
| Velocity (Defects) Absolute | X | X | | | |
| Velocity (Stories and Defects) Absolute | X | X | | | |
| Variability of Throughput (Stories) | X | | | | |
| Variability of Throughput (Defects) | X | | | | |
| Variability of Throughput (Stories and Defects) | X | | | | |
| Variability of Velocity (Stories) | X | X | | | |
| Variability of Velocity (Defects) | X | X | | | |
| Variability of Velocity (Stories and Defects) | X | X | | | |
| Time in Process Standard Deviation (Stories) | X | | | | |
| Time in Process Standard Deviation (Defects) | X | | | | |
| Time in Process Standard Deviation (Stories and Defects) | X | | | | |
| Time in Process (Stories) | X | | | | |
| Time in Process (Defects) | X | | | | |
| Time in Process (Stories and Defects) | X | | | | |
| Defect Density | X | | | | |
| Released Defect Density | X | | | | X |
| Cumulative Defect Age Normalized | | | X | X | |
| Cumulative Defect Age Absolute | | | | X | |

Do the names and order of states in CA Agile Central matter?

Schedule state and defect state values have an implied order based on their respective drop-down list orders. The order of the schedule states in CA Agile Central is unchangeable; the only variables are the use of optional initial and final states and their names. If the optional final state (Released, for example) is used, Insights treats it the same as the standard final state (Accepted), regardless of the name of the optional final state.

However, the order of defect states is changeable, and all names except Open and Closed are also changeable. For defect metrics to be collected correctly, ensure that Closed is the last (by implied order) and final (by workflow) defect state. One exception: if a defect's state is named Fixed, Insights treats the defect as if it were Closed. So, if your workflow distinguishes between fixing and closing defects such that the final (workflow) state for some defects is something other than Closed, ensure that Fixed is the name of that state.

Which defects are included in the Cumulative Defect Age calculations?

Defects are included if their state is less than Closed and not equal to Fixed at the end of the timeframe selected. This may include defects with Defect State = Unable to Reproduce.
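The inclusion rule can be sketched as a filter over the implied state order. The state list below is an illustrative workflow (including where Unable to Reproduce sits); your workspace's defect states may differ.

```python
# Sketch of the Cumulative Defect Age inclusion rule: a defect counts if,
# at the end of the timeframe, its state orders before Closed and is not
# named Fixed (which Insights treats the same as Closed).
STATE_ORDER = ["Submitted", "Open", "Unable to Reproduce", "Fixed", "Closed"]

def counts_toward_defect_age(state):
    return (STATE_ORDER.index(state) < STATE_ORDER.index("Closed")
            and state != "Fixed")

print([s for s in STATE_ORDER if counts_toward_defect_age(s)])
# ['Submitted', 'Open', 'Unable to Reproduce']
```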

Which defects are included in the Released Defect Density calculation?

Defects are included if the Environment value is production and they were created in the timeframe selected.

Why am I not able to see CA Agile Central Insights?

It may be that your company has IP restrictions. In order to allow CA Agile Central Insights, your subscription administrator needs to do the following:

  1. Click the Setup icon.
  2. Click Subscription.
  3. From Actions, select Edit Subscription.
  4. Select the Enable additional CA Agile Central services integrations field, which is required for CA Agile Central Insights to work with subscriptions with IP restrictions turned on.

Why is my quarterly percentile score for a dimension higher than any of the monthly scores in that same time period?

The percentile score for the quarter as a whole can be higher relative to other teams if a team is more consistent, even when its monthly scores are lower. See the table below for an example where Team A's quarterly percentile is 88.9%, higher than any of its monthly percentiles:

| Team | Month 1 Score | Month 2 Score | Month 3 Score | Quarter Score (sum) |
|---|---|---|---|---|
| A | 7 | 8 | 6 | 21.0 |
| B | 8 | 1 | 1 | 10.0 |
| C | 10 | 9 | 9 | 28.0 |
| D | 2 | 1 | 10 | 13.0 |
| E | 2 | 10 | 8 | 20.0 |
| F | 2 | 1 | 5 | 8.0 |
| G | 2 | 1 | 7 | 10.0 |
| H | 9 | 1 | 1 | 11.0 |
| I | 1 | 1 | 2 | 4.0 |
| J | 1 | 1 | 1 | 3.0 |
| Team A Percentile | 66.7% | 77.8% | 55.6% | 88.9% |

How does Time In Process (TiP) differ from ALM's Cycle Time reports?

ALM's cycle time reports calculate story TiP as the difference between the last time the story went In Progress and when it was Accepted. Transitions back to In Progress, such as In Progress → Defined → In Progress, reset the calculation.

Insights' story and defect TiP uses a more sophisticated calculation that includes all of the time the story or defect spends In Progress, and takes workdays and holidays into consideration. Insights' TiP charts do not include stories and defects that spend no time in process, such as those that move directly from Defined → Accepted or Released.
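The difference is easiest to see on one state history. This sketch contrasts the two calculations on illustrative dates; the weekend and holiday handling that Insights applies is omitted for brevity.

```python
from datetime import date

# One illustrative state history: the story is pushed back to Defined
# and then finished in a second In-Progress stint.
history = [
    ("In-Progress", date(2023, 5, 1)),
    ("Defined",     date(2023, 5, 4)),   # pushed back after 3 days of work
    ("In-Progress", date(2023, 5, 8)),
    ("Accepted",    date(2023, 5, 10)),
]

def alm_cycle_time(history):
    """ALM-style: last entry into In-Progress until Accepted."""
    last_start = max(d for s, d in history if s == "In-Progress")
    accepted = next(d for s, d in history if s == "Accepted")
    return (accepted - last_start).days

def insights_tip(history):
    """Insights-style: sum of every interval spent In-Progress."""
    total = 0
    for (state, start), (_, end) in zip(history, history[1:]):
        if state == "In-Progress":
            total += (end - start).days
    return total

print(alm_cycle_time(history))  # 2 -- the reset discards the first stint
print(insights_tip(history))    # 5 -- both In-Progress intervals count
```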

Feature TiP is calculated slightly differently from story and defect TiP. Feature TiP is the time between when the first story parented to the feature moves to In Progress and when the feature reaches 100% complete by story count. If an In Progress story is reparented to a feature, the timing begins at the time of the reparenting. Feature TiP also takes work days and holidays into account.

Closed or reparented projects and historical data

Metrics are always calculated using the project hierarchy that existed at the time period under consideration. This includes rollups into higher level projects. Closing or reparenting projects will only affect data for future time periods. 

For example, Project A has two child projects, Team 1 and Team 2. In December, Team 1 is reparented to Project B, so:

  • Team 1's history will not change. Team 1's full history should continue to be available.
  • Project A will contain Team 1's contributions for months prior to December.
  • Project B will start including Team 1's contributions for December and afterwards.
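The reparenting example above can be sketched as a rollup keyed on the historical parent, not the current one. The team names, months, and throughput numbers are illustrative.

```python
# Sketch: rollups use the parent a team had during the month being
# calculated, so reparenting only changes future months. Data is made up.
parent_by_month = {            # Team 1 moves from Project A to B in December
    ("Team 1", "2023-11"): "Project A",
    ("Team 1", "2023-12"): "Project B",
    ("Team 2", "2023-11"): "Project A",
    ("Team 2", "2023-12"): "Project A",
}
throughput = {("Team 1", "2023-11"): 8, ("Team 1", "2023-12"): 9,
              ("Team 2", "2023-11"): 5, ("Team 2", "2023-12"): 6}

def rollup(project, month):
    """Sum child-team throughput using the hierarchy as of that month."""
    return sum(v for (team, m), v in throughput.items()
               if m == month and parent_by_month[(team, m)] == project)

print(rollup("Project A", "2023-11"))  # 13 -- includes Team 1's November work
print(rollup("Project A", "2023-12"))  # 6  -- only Team 2 after the move
print(rollup("Project B", "2023-12"))  # 9  -- Team 1 from December onward
```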

To view past contributions from moved or closed child projects to their former parent project, the drill-down for each metric shows the work items that contributed to the metric and the projects the work items belonged to.

Expected differences in the number of items in Throughput and Time In Process drill downs

Time In Process only includes stories that were Accepted during the time period.

Throughput's drill down includes all positive, negative, and neutral contributions to the throughput value. All positive contributions display in the TiP chart, as well as some of the neutral contributions: a story that went from Accepted → In Progress → Accepted during the time period is included, but a story that went In Progress → Accepted → In Progress is not.
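One reading of that rule is that a story's contribution depends on whether it crossed the Accepted boundary between the start and end of the period, and that only stories ending the period Accepted appear in the TiP chart. The functions below are an illustrative sketch of that reading, not the shipped implementation.

```python
# Sketch: classify a story's throughput contribution for a period from its
# state at the period boundaries, and decide TiP-chart inclusion.
def contribution(state_at_start, state_at_end):
    accepted_before = state_at_start == "Accepted"
    accepted_after = state_at_end == "Accepted"
    if accepted_after and not accepted_before:
        return "positive"
    if accepted_before and not accepted_after:
        return "negative"
    return "neutral"

def in_tip_chart(state_at_start, state_at_end):
    # Accepted -> In Progress -> Accepted ends the period Accepted: included.
    # In Progress -> Accepted -> In Progress does not: excluded.
    return state_at_end == "Accepted"

print(contribution("In-Progress", "Accepted"))      # positive
print(contribution("Accepted", "Accepted"))         # neutral
print(in_tip_chart("Accepted", "Accepted"))         # True
print(in_tip_chart("In-Progress", "In-Progress"))   # False
```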

Feedback

Need more help? The CA Agile Central Community is your one-stop shop for self-service and support. To submit feedback or cases to CA Agile Central Support, find answers, and collaborate with others, please join us in the CA Agile Central Community.