Build your own takeoff for Division 08 — Doors, Frames & Hardware

Division 08 — doors, frames, and hardware — is one of the most tedious takeoffs in commercial construction. A large project can have hundreds of door types. Each one has a size, fire rating, frame material, hardware set, and glass spec. That data lives in two places: a floor plan (which tells you where the door is) and a door schedule (which tells you what it is). Getting from PDF to priced line items today means reading both manually, cross-referencing by eye, and hoping you didn't miss anything.

Here's what the AnchorGrid API can automate today, and where you still need to do work on your side.

What the pipeline looks like

Four steps. Three API calls. One post-processing join.

Step 1 — Classify the document

Before you run any detection, classify the drawing set.

POST /v1/drawings/classify

{ "document_id": "uuid" }


This returns a page-by-page breakdown of drawing types, titles, and scopes. Cost is 1 credit per PDF regardless of page count. For a 200-page drawing set, this tells you that floor plans are on pages 3–6 and the door schedule is on page 12 — before you spend a single credit on detection.

Without this step you'd be running door detection across the entire document. That's expensive and slow. Classification makes the rest of the pipeline targeted.
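A minimal sketch of that call in Python, using only the standard library. The base URL, the bearer-token auth scheme, and the response field names (`pages`, `page_number`, `drawing_type`) are assumptions for illustration; check the API reference for the actual shapes.

```python
import json
import os
import urllib.request

API_BASE = "https://api.anchorgrid.com"  # assumed base URL


def classify(document_id: str) -> dict:
    """One call, one credit: classify every page in the drawing set."""
    req = urllib.request.Request(
        f"{API_BASE}/v1/drawings/classify",
        data=json.dumps({"document_id": document_id}).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['ANCHORGRID_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def pages_of_type(classification: dict, drawing_type: str) -> list[int]:
    # Field names here are assumptions about the response shape;
    # adjust to the payload the API actually returns.
    return [
        p["page_number"]
        for p in classification.get("pages", [])
        if p.get("drawing_type") == drawing_type
    ]
```

With the classification in hand, `pages_of_type(result, "floor_plan")` gives you the page list to feed into the detection step, and the schedule pages are found the same way.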

Step 2 — Detect doors on floor plan pages

Pass only the floor plan pages to door detection.

POST /v1/drawings/detection/doors

{ "document_id": "uuid", "page_numbers": [3, 4, 5, 6] }

The response gives you a bounding box for every door swing found, keyed to page number.

{
  "doors": [
    { "id": "door_a3f9c12b8e4d", "page": 3, "bbox": { "x1": 420, "y1": 310, "x2": 465, "y2": 355 } }
  ],
  "doors_found": 47
}

You know how many doors exist and where each one sits in PDF coordinate space. What you don't know yet is what any of them are. That's what the next step is for.
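Before joining anything, it helps to index the detection response by page so each floor plan's doors can be processed independently. A small sketch, assuming the response shape shown above:

```python
from collections import defaultdict


def doors_by_page(detection: dict) -> dict[int, list[dict]]:
    """Group detected doors by page number for per-sheet processing
    (cropping tag bubbles, per-floor rollups, QA spot checks)."""
    by_page: dict[int, list[dict]] = defaultdict(list)
    for door in detection.get("doors", []):
        by_page[door["page"]].append(door)
    return dict(by_page)


def counts_reconcile(detection: dict) -> bool:
    """Sanity check: the doors list should account for every door
    the API says it found."""
    return len(detection.get("doors", [])) == detection.get("doors_found", 0)
```

The reconciliation check is cheap insurance against truncated responses or pagination you didn't expect.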

Step 3 — Extract the door schedule

Pass the schedule pages to the schedule extractor.

POST /v1/drawings/schedules

{ "document_id": "uuid", "page_numbers": [12] }


The response returns headers and rows parsed from the schedule table — no template required.

{
  "schedules": [{
    "schedule_type": "door_schedule",
    "headers": ["NUMBER", "WIDTH", "HEIGHT", "FIRE RATING", "DOOR MATERIAL", "HARDWARE SET"],
    "rows": [
      ["N101A", "3'-0\"", "7'-10\"", null, "AL & G", "AL-12"],
      ["N101B", "3'-0\"", "7'-10\"", "45", "AL & G", "AL-08"]
    ]
  }]
}


Every door tag and its full attribute set, structured as JSON.
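The headers/rows shape is easy to pivot into records keyed by door number, which is the form you'll want for both aggregation and the later join. A sketch, assuming the tag lives in the first column as in the example above:

```python
def schedule_records(schedule: dict) -> dict[str, dict]:
    """Zip the header row with each data row and key the result by door
    number. Assumes the first header is the tag column (NUMBER in the
    example response); adjust if your schedule orders columns differently."""
    headers = schedule["headers"]
    tag_col = headers[0]
    records = {}
    for row in schedule["rows"]:
        rec = dict(zip(headers, row))
        records[rec[tag_col]] = rec
    return records
```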

What you can build from this

At this point you have door locations and door attributes as two separate datasets. Aggregating the schedule alone already gets you a significant portion of the Division 08 takeoff — total counts by door type, hardware set groupings, fire-rated door counts — without touching the floor plan detection at all.

If you combine both, you can build a location-aware takeoff: every door placed on a floor, tagged with its attributes, grouped by floor or zone.
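The schedule-only rollup described above can be a few lines of counting. A sketch over records keyed by door number, using the column names from the sample response (an assumption; map your own schedule's headers first):

```python
from collections import Counter


def division_08_summary(records: dict[str, dict]) -> dict:
    """Schedule-only takeoff rollup: totals by hardware set and by size,
    plus a fire-rated door count."""
    rows = list(records.values())
    return {
        "total_doors": len(rows),
        "by_hardware_set": Counter(r["HARDWARE SET"] for r in rows),
        "by_size": Counter((r["WIDTH"], r["HEIGHT"]) for r in rows),
        "fire_rated": sum(1 for r in rows if r["FIRE RATING"]),
    }
```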

What we can't do yet — the gap

The door detector returns bounding boxes. It does not return the door tag number (e.g. "N101A") that links a physical door location to its schedule row. That tag lives in a small bubble drawn adjacent to the door swing on the floor plan.

Connecting location to attributes requires reading that tag. This is a post-processing step you implement on your side: render the page to a high-resolution image, crop a small region around each detected bbox, OCR it, and join the result to the matching schedule row. Fuzzy matching handles the occasional misread.

The resulting enriched door object looks like this:

{
  "id": "door_a3f9c12b8e4d",
  "page": 3,
  "tag": "N101A",
  "bbox": { "x1": 420, "y1": 310, "x2": 465, "y2": 355 },
  "width": "3'-0\"",
  "height": "7'-10\"",
  "fire_rating": null,
  "hardware_set": "AL-12",
  "matched": true
}


We're building this join as a native endpoint. Until then, the bbox coordinates give you everything you need to implement it.
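A sketch of the join half of that post-processing step. The rendering and OCR pieces depend on tools of your choosing (a PDF rasterizer plus an OCR engine such as Tesseract; both are assumptions, not part of the API), so they're left as a pluggable `ocr_text` input here. The fuzzy matching uses the standard library's `difflib`:

```python
import difflib


def match_tag(ocr_text: str, known_tags: list[str], cutoff: float = 0.6) -> "str | None":
    """Fuzzy-match an OCR'd tag bubble against the schedule's door numbers,
    tolerating the occasional misread character (e.g. 'N1O1A' for 'N101A')."""
    hits = difflib.get_close_matches(
        ocr_text.strip().upper(), known_tags, n=1, cutoff=cutoff
    )
    return hits[0] if hits else None


def enrich(door: dict, ocr_text: str, records: dict[str, dict]) -> dict:
    """Join one detected door to its schedule row. `records` maps door
    numbers to attribute dicts parsed from the schedule response."""
    tag = match_tag(ocr_text, list(records))
    row = records.get(tag, {})
    return {
        **door,
        "tag": tag,
        "width": row.get("WIDTH"),
        "height": row.get("HEIGHT"),
        "fire_rating": row.get("FIRE RATING"),
        "hardware_set": row.get("HARDWARE SET"),
        "matched": tag is not None,
    }
```

Doors that come back with `"matched": false` are your manual-review queue: usually a tag bubble the crop missed or an OCR read too garbled to match.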

What's coming

Room detection is releasing next week. For Division 08 specifically this matters because door counts without spatial context are useful but incomplete — knowing which room a door serves (office, corridor, stair) drives hardware set selection and code compliance checks. Room detection will return boundaries, labels, and floor area, giving you the spatial layer the pipeline currently lacks.

Start building

The API is live. The free tier gives you enough credits to run the full pipeline on a real drawing set.