By: Jason Loring, Partner, Jones Walker LLP
February 16, 2026

Artificial intelligence has moved from the conference room to the construction site. Contractors are using AI-powered tools to predict schedule delays, monitor safety through drone footage, optimize equipment maintenance and flag potential hazards in real time. These tools deliver genuine efficiency gains, but they also introduce risks that most construction contracts do not anticipate and many project teams aren’t yet equipped to manage. 

The problem is that AI tools are probabilistic, not deterministic, which means they can “hallucinate”: generate confident, but completely wrong, information. Your AI scheduling software might therefore predict a delay that never materializes, causing unnecessary resource mobilization. Your drone monitoring might flag a nonexistent safety hazard, stopping work and costing productivity. Or worse, it might miss a real hazard entirely.

The 80% Rule for AI on the Job Site

AI excels at processing massive amounts of data quickly: analyzing thousands of daily reports, sorting hundreds of thousands of emails or scanning drone footage in hours instead of weeks. But speed doesn’t equal accuracy. A useful heuristic is that AI can handle roughly 80% of the fact-finding and data extraction, but human experts must verify the critical 20%. This is not a formal rule, but as a practical framework it offers a useful way to think about the appropriate division of labor between AI tools and humans.

This isn’t just a best practice; it reflects an emerging standard of care across professional fields. The American Bar Association’s Formal Opinion 512 articulates this principle for lawyers, requiring competence in understanding AI tool limitations and verifying outputs. While that opinion addresses attorney ethics specifically, the underlying principle that professionals remain responsible for AI-assisted work product applies to construction professionals under their own duty-of-care standards and, increasingly, their contracts.

In practice, you can let AI scheduling tools process daily reports and identify delay patterns, but an experienced project manager should generally still review outputs before reallocating resources or filing notices. You can use AI to scan drone footage for safety issues, but have qualified personnel verify any flagged concerns before stopping work or dismissing warnings.

When AI Knows Something You Don’t Act On 

Imagine your AI system flags a potential latent site condition based on sensor data, but your site supervisor overrides the alert without documentation. Three weeks later, that condition causes a delay or added cost. Does having the AI capability but not using it properly increase your liability? 

Courts and arbitrators increasingly recognize AI as a “silent witness” that may have flagged problems in real time. If your company invested in predictive AI but ignores its warnings, you may face arguments that you had “superior knowledge” of risks and failed to act. Under the “Spearin Doctrine,” contractors typically have limited responsibility for latent defects in owner-provided plans or specifications, but that protection assumes the contractor didn’t know or have reason to know of the defect. When your AI flags a potential issue that human supervisors miss, that may open the door to arguments that you had constructive knowledge and failed to meet disclosure obligations. 

This gets particularly complicated under standard form contracts. ConsensusDocs 200 (Agreement Between Owner and Constructor) requires written notice of claims within specific timeframes. But does an AI dashboard notification sitting in your project management system count as “written notice”? Likely not, but a failure to respond to those AI alerts may still be held against you.

The same tension exists under AIA contract forms and most other industry standards. These contracts presume human observation and deliberate communication. They weren’t drafted with the assumption that AI might be continuously monitoring conditions and generating alerts that may or may not be reviewed by qualified personnel. This creates a gap: your AI “knows” something, but until a human reviews that information and makes a judgment call, have you really been put on notice? 

The practical answer is that you need documented processes for evaluating and responding to AI-generated warnings. This doesn’t mean blindly following every alert (again, many will be false positives). But it does mean you need clear protocols for who reviews AI alerts, how quickly they’re assessed, what triggers escalation to humans and how determinations are documented and communicated. 
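For teams that track alerts in software rather than on paper, even a lightweight structured record can embody these protocols. The following Python sketch is purely illustrative (the field names and review workflow are assumptions, not any vendor’s product or API); the point is that every AI alert is tied to a named reviewer, a decision, a rationale and a timestamp, so the record exists whether the alert proves right or wrong.

```python
# Hypothetical sketch of an AI-alert review record; all names are
# illustrative assumptions, not a real vendor schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Decision(Enum):
    ESCALATE = "escalate"   # send to qualified personnel for action
    MONITOR = "monitor"     # keep watching; no immediate action required
    DISMISS = "dismiss"     # judged a false positive after review


@dataclass
class AlertReview:
    """One reviewed AI alert: who looked, what they decided and why."""
    alert_id: str           # identifier assigned by the AI tool
    alert_summary: str      # what the tool flagged
    reviewer: str           # human with authority to act on the alert
    decision: Decision
    rationale: str          # why; essential when overriding the AI
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example: a supervisor dismisses a flagged hazard and records why.
review = AlertReview(
    alert_id="DRONE-2026-0412",
    alert_summary="Possible unsecured scaffolding, Zone B",
    reviewer="J. Smith, Site Safety Manager",
    decision=Decision.DISMISS,
    rationale=(
        "Physically inspected Zone B at 10:15; scaffolding tied off per "
        "plan. Flag appears to have been triggered by tarp movement."
    ),
)
print(review)
```

However a record like this is kept, what matters is that the rationale is captured at the time of the decision, not reconstructed after a dispute arises.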

Three Categories of AI Job Site Risk 

Operational Risks: When AI Gets It Wrong 

False positives waste time and money: AI predicts a delay that never happens, causing unnecessary expedited deliveries, or flags equipment for unneeded maintenance, taking productive machinery offline. Consider a scenario where your AI-powered crane monitoring system flags a hydraulic issue and automatically takes the crane offline during a critical concrete pour. The inspection reveals nothing wrong; the sensor data was corrupted. But you’ve now blown your schedule because you relied on AI without human verification. False negatives can be catastrophic: if AI-powered safety monitoring misses a hazard and someone gets injured, you face potential liability despite having invested in the technology. The question ultimately becomes whether you over-relied on AI without adequate human oversight.

Legal and Contractual Risks: When Liability Gets Complicated

When AI errors cause project problems, determining liability gets complicated. Was it the vendor’s faulty algorithm? Your team’s inadequate training? Bad input data? Most construction contracts don’t address this allocation of risk. 

Your AI vendor contracts should generally include specific provisions addressing liability for errors, indemnification requirements, insurance coverage for failures and clear documentation of capabilities and limitations. Your project contracts with owners and subcontractors should address how AI-generated data will be used, who bears the risk of errors and whether AI outputs alone satisfy contractual requirements or need human verification.

The insurance and bonding implications are equally critical. Traditional professional liability policies may not cover errors stemming from AI recommendations, particularly if the AI was marketed as reducing the need for professional judgment. Cyber insurance policies typically cover data breaches, but may exclude coverage for AI hallucinations that lead to faulty business decisions. And if an AI scheduling error causes massive delays, that could trigger a performance bond claim (an area where the surety industry is still working through coverage questions). 

Data and Privacy Risks: When Information Flows Where It Shouldn’t 

AI tools require massive amounts of data (e.g., site photos, progress reports, worker information, budget details and proprietary methods). If that data is processed on cloud platforms or shared with AI vendors, you must ensure compliance with data privacy regulations and contractual confidentiality obligations.

Suppose your project uses an AI-powered workforce management tool to optimize crew assignments, and a project administrator uploads daily crew rosters that include worker medical restrictions (to ensure proper job assignments) into the AI platform. If the platform’s terms of service permit the vendor to use that input data to improve its algorithms, you’ve now potentially violated state privacy laws, worker-protection statutes and your own employment policies.
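One simple technical control is to strip sensitive fields from any dataset before it leaves your systems. The sketch below is a minimal illustration (the field names are assumptions, not a real roster schema); if a tool genuinely needs sensitive data to function, that is the signal to negotiate data-use restrictions with the vendor rather than upload it under default terms.

```python
# Hypothetical sketch: remove sensitive fields from a crew roster before
# it is sent to a third-party AI platform. The field names are
# illustrative assumptions; map them to your actual roster export.

SENSITIVE_FIELDS = {"medical_restrictions", "ssn", "date_of_birth",
                    "home_address"}


def scrub_roster(roster: list[dict]) -> list[dict]:
    """Return a copy of the roster with sensitive fields removed."""
    return [
        {k: v for k, v in worker.items() if k not in SENSITIVE_FIELDS}
        for worker in roster
    ]


roster = [
    {"name": "A. Mason", "trade": "ironworker", "crew": "B",
     "medical_restrictions": "no work above 6 ft"},
]
print(scrub_roster(roster))
# [{'name': 'A. Mason', 'trade': 'ironworker', 'crew': 'B'}]
```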

For federal projects, sharing procurement-sensitive information with third-party AI platforms may violate Federal Acquisition Regulation rules. State privacy laws may apply to projects involving personal information. And if your AI vendor suffers a data breach, you may face exposure regardless of fault. 

Practical Steps for Managing AI Risk

Document Your Process. Create written protocols for how AI tools will be used, who has authority to act on recommendations and what verification is required. When you override an AI recommendation, document why. This protects you whether the AI is right or wrong. 

Train Your Teams. Everyone using AI tools needs to understand their limitations. Project managers should know AI predictions are probabilistic. Safety personnel should understand that AI supplements, but does not replace, independent judgment. Administrative staff should know what data can and cannot be input into AI tools. 

Update Your Contracts. Develop AI-specific provisions addressing ownership of outputs, liability allocation for errors, data privacy requirements, insurance and indemnification, and verification standards. 

Verify, Verify, Verify. Do not rely solely on AI for critical decisions. Use AI to process data and flag issues, but have qualified humans make final calls on safety, costs, personnel and legal notices. 

The Bottom Line

AI tools are transforming construction project management, offering genuine benefits in scheduling, safety and efficiency. But these tools supplement human judgment rather than replace it. The contractors who successfully navigate this transition will be those who embrace AI’s capabilities while remaining clear-eyed about its limitations and who build the processes and contracts necessary to manage the new risks. 

The technology isn’t going away, and it’s making many projects genuinely better. But when your AI scheduler hallucinates, you need to be ready with verification protocols, trained teams and proper contracts. Those who treat AI as a powerful tool (rather than a replacement for professional judgment) will be best positioned to reduce risk while capturing real project value. 

Jason M. Loring is a Partner and Co-Lead of the Privacy, Data Strategy and AI Practice at Jones Walker LLP. He represents contractors, owners and other construction industry participants on AI governance, technology contracts and data privacy compliance. Jason also serves on the State Bar of Georgia’s Special Committee on Artificial Intelligence and Technology.

“The Construction Industry Team at Jones Walker LLP is one of the most highly regarded and award-winning construction law practices in the nation. Our experienced construction attorneys understand the complex dynamics between — and the unique priorities of — project participants and can craft effective solutions that minimize disputes, manage risks, and help keep projects moving from conception to completion.”

The views expressed in this article are not necessarily those of ConsensusDocs. Readers should not take or refrain from taking any action based on any information without first seeking legal advice.