Domain-level pre-simulation QA — catches issues schema validation cannot.
Runs six checks against the loaded model:
- Zones with no BuildingSurface:Detailed surfaces
- Missing required simulation control objects (Version, Building, Timestep, RunPeriod, SimulationControl)
- Orphan schedules (defined but not referenced by any object)
- Surface boundary condition mismatches (non-reciprocal 'Surface' pairs)
- Fenestration surfaces referencing non-existent host surfaces
- ZoneHVAC:EquipmentConnections referencing non-existent zones
Use this after validate_model and before run_simulation. A model can pass
validate_model but still fail these checks.
Preconditions: model loaded.
Side effects: none — read-only.
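To illustrate why a model can pass validate_model yet fail here, consider the orphan-schedule check: a schedule object can be perfectly schema-valid on its own while nothing in the model references it. The sketch below is illustrative only, assuming toy data structures, not idfkit's internal representation:

```python
# Hypothetical sketch of orphan-schedule detection (not the actual idfkit
# implementation). A schedule is "orphaned" when it is defined but never
# referenced by any other object's fields.

def find_orphan_schedules(schedules: list[str], references: list[str]) -> list[str]:
    """Return schedule names defined but never referenced (case-insensitive,
    as IDF name matching is case-insensitive)."""
    referenced = {r.lower() for r in references}
    return [s for s in schedules if s.lower() not in referenced]

# Toy model: two schedules defined, but only one is referenced by another object.
schedules = ["OfficeOccupancy", "UnusedNightPurge"]
references = ["officeoccupancy"]  # schedule-name field values from other objects
print(find_orphan_schedules(schedules, references))  # ['UnusedNightPurge']
```

Schema validation sees both schedules as well-formed objects; only a cross-object reference scan like this flags the unused one.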
Source code in src/idfkit_mcp/tools/integrity.py
```python
@tool(annotations=_READ_ONLY)
def check_model_integrity() -> ModelIntegrityResult:
    """Domain-level pre-simulation QA — catches issues schema validation cannot.

    Runs six checks against the loaded model:

    - Zones with no BuildingSurface:Detailed surfaces
    - Missing required simulation control objects (Version, Building,
      Timestep, RunPeriod, SimulationControl)
    - Orphan schedules (defined but not referenced by any object)
    - Surface boundary condition mismatches (non-reciprocal 'Surface' pairs)
    - Fenestration surfaces referencing non-existent host surfaces
    - ZoneHVAC:EquipmentConnections referencing non-existent zones

    Use this after validate_model and before run_simulation. A model can pass
    validate_model but still fail these checks.

    Preconditions: model loaded.
    Side effects: none — read-only.
    """
    state = get_state()
    doc = state.require_model()

    checks: list[tuple[str, Callable[[IDFDocument], list[IntegrityIssue]]]] = [
        ("zones_with_no_surfaces", _check_zones_with_no_surfaces),
        ("required_simulation_controls", _check_required_controls),
        ("orphan_schedules", _check_orphan_schedules),
        ("surface_boundary_mismatches", _check_surface_boundary_mismatches),
        ("fenestration_host_check", _check_fenestration_hosts),
        ("hvac_zone_references", _check_hvac_zone_references),
    ]

    all_issues: list[IntegrityIssue] = []
    checks_run: list[str] = []
    for name, fn in checks:
        checks_run.append(name)
        try:
            all_issues.extend(fn(doc))
        except Exception:
            logger.exception("Integrity check '%s' failed unexpectedly", name)

    error_count = sum(1 for i in all_issues if i.severity == "error")
    warning_count = sum(1 for i in all_issues if i.severity == "warning")
    logger.info(
        "check_model_integrity: %d errors, %d warnings across %d checks",
        error_count,
        warning_count,
        len(checks_run),
    )
    return ModelIntegrityResult(
        passed=error_count == 0,
        error_count=error_count,
        warning_count=warning_count,
        issues=all_issues,
        checks_run=checks_run,
    )
```