Firm Standards

The Firm Standards page helps you determine which metric values should be recorded for your models.

 

The Clarity software records a wide variety of metrics, so it is best to define the metrics that are most important to your company and most relevant to your users. To ensure that users review them, these metrics are displayed on all Project-based Model Metrics, along with green, yellow, red, and white indicators.
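
The exact rules Clarity uses internally are not documented here, but the following minimal sketch (Python, with made-up function names and threshold values) illustrates how a recorded metric value could map to one of those indicator colors, with white representing a metric that is not available for the model:

    def indicator_color(value, green_max, red_min):
        """Return an indicator color for a metric where lower values are better."""
        if value is None:
            return "white"   # metric not available for this model
        if value <= green_max:
            return "green"   # within the firm's target range
        if value >= red_min:
            return "red"     # outside the acceptable range
        return "yellow"      # between the green and red thresholds

    print(indicator_color(120, green_max=200, red_min=500))   # green
    print(indicator_color(None, green_max=200, red_min=500))  # white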

 

To help determine the most typical ranges seen at your company, use the Scatter tab in the Graphs pane.

Defining the Typical Model Size

Many metric targets are valid for most cases; however, particularly large or particularly small models may represent valid exceptions to the rules.

Clarity enables you to record a range of file sizes for your company's typical models, and then adjust the metric targets for those that fall outside of the typical range.

Adding Firm Standard Metrics

Clarity can collect over 50 different metrics, some of which can examine every Project in a firm. These metrics can be added to the list of Firm Standards. The metrics can either be shown:

 

Note: Currently, it is only possible to have one Firm Standard for each kind of model metric.

 

The image above shows a set of "Firm Standards" and the range of values that are acceptable for each.
The order of the standards can be adjusted by dragging the rows up or down.
 

 

Metric Configuration: # of Warnings dialog box

In the dialog box that opens, you can select:

 

You can then define the green range and the red range; the yellow range is implicitly defined between them.

The Small Model Factor and Large Model Factor values enable you to shift the ranges to better accommodate small or large models by applying a factor to the range.

For example, in the diagram above:

 
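As a rough sketch of how these factors could combine with the green and red thresholds, the snippet below (with hypothetical size limits and factor values, not Clarity's actual implementation) scales the thresholds whenever a model's file size falls outside the typical range:

    def adjusted_thresholds(green_max, red_min, file_size_mb,
                            typical_min_mb=50, typical_max_mb=400,
                            small_factor=0.5, large_factor=2.0):
        """Scale the green/red thresholds for models outside the typical size range."""
        if file_size_mb < typical_min_mb:
            factor = small_factor    # tighter targets for small models
        elif file_size_mb > typical_max_mb:
            factor = large_factor    # looser targets for large models
        else:
            factor = 1.0             # typical models keep the standard range
        return green_max * factor, red_min * factor

    # A 600 MB model gets its thresholds doubled: (400.0, 1000.0)
    print(adjusted_thresholds(green_max=200, red_min=500, file_size_mb=600))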

Model Scoring

Clarity scores models on a scale from zero (bad) to 100 (perfect). For each Firm Standard, you define how many points that standard should be worth.

 

 

The points for all metrics are added up for each model and then divided by the number of available points for that model (for example, Revit Warnings are not reportable in Revit 2017, so a model is not penalized when that metric is unavailable). All scores are normalized to a zero to 100 scale.
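
The following sketch shows that arithmetic under those assumptions (the function and sample values are illustrative, not Clarity's code): points from unavailable metrics are excluded from the available total before the score is normalized to the zero to 100 scale:

    def model_score(results):
        """results: (points_possible, points_earned) pairs, one per Firm Standard;
        points_earned is None when the metric is unavailable for the model."""
        available = sum(possible for possible, earned in results if earned is not None)
        earned = sum(earned for _, earned in results if earned is not None)
        if available == 0:
            return None              # nothing could be measured for this model
        return round(100 * earned / available)

    # The second metric is unavailable (e.g. Revit Warnings in a Revit 2017
    # model), so its 20 points are excluded rather than counted against the
    # model: 100 * (10 + 20) / (10 + 30) = 75
    print(model_score([(10, 10), (20, None), (30, 20)]))  # 75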

 
Where Model Scores are Visible

Model Scores are visible:

 

 

