How to mitigate risk in disputes arising from AI use in construction projects

27 October 2023

Businesses are increasingly adopting AI tools to carry out functions traditionally performed by humans or by non-AI technology. Construction companies are no different: they are using AI tools across many parts of their businesses to increase productivity and add new capabilities to their construction, consultation, operation and development services (e.g. site and staff management, reporting, procurement controls, project management processes, utilities management, and contract generation, review and analysis). Notwithstanding these benefits, the use of AI can lead to complex and at times unexpected disputes.

This article focuses on construction projects, which are typically governed by bespoke and multifaceted agreements that evolve as the project progresses. As a result, when a construction project runs into difficulty, it can be challenging to ascertain which party is responsible for what has gone wrong, and to pinpoint the role AI may have played in causing the issue(s) giving rise to the dispute.

This article sets out the potential disputes that could arise due to AI use in construction projects and offers guidance on how to manage AI-related risks.

AI disputes in construction projects

Construction projects invariably involve participation from stakeholders at all points of the supply chain (construction developers, contractors, subcontractors, architects, purchasers, tenants, funders, etc.). Within the context of the overall project, there will be different contractual arrangements in place that govern the various relationships: for example, developers will have contracts in place with contractors, who in turn will have contracts in place with subcontractors and architects. The use of AI adds another layer to this contractual matrix and introduces an additional party to the chain – namely the AI developer.

The question of who is liable when a construction project goes wrong is ultimately a factual one, but the introduction of AI makes it far harder to answer, due to the inherent complexity of AI tools. The difficulty of assessing who is responsible for the failure of a construction project that uses AI means that a whole range of time-consuming and expensive disputes can be triggered. For instance, did the project ultimately fail because of the data used to train the AI tool (the answer is likely to differ depending on whether that data was provided to the AI developer by the construction developer or by contractors)? Was the training methodology used by the AI developer flawed or inadequate? Or was the failure due to an underlying issue with the hardware used in the tool itself? Moving higher up the chain, architects could be held liable for failing to exercise sufficient oversight over the outputs that the AI tool produces before submitting designs to contractors, who in turn could be held liable by the construction developers for failing to properly implement those designs. It is also possible that different stakeholders will use one or more AI tools for their own workstreams on the project, or that some or all of the stakeholders will provide inputs or contributions to the operation of the different AI tools on the project. Accordingly, AI use increases the number of potential disputes associated with a single project, but it is important to bear in mind that the nature of these disputes largely depends on the project itself, what specifically goes wrong, and what is stated in the various liability and risk provisions of the relevant contracts.

How to manage risk in construction projects that use AI

It is crucial that the various stakeholders are clear about the primary objectives of the project, and the ways in which AI will be used to achieve these objectives. Contracting parties should ensure that there are comprehensive agreements in place at each stage of the supply chain to provide certainty in the event that something goes wrong, in particular in relation to any warranties and limitations of liability. 

It is essential that the roles, responsibilities, expectations and risks of the various parties are contractualised and defined in detail from the outset of the project, so as to apportion liability throughout the chain with maximum clarity. This includes drafting detailed, bespoke specifications for the AI tool, together with the related customer service requirements (e.g. Statements of Work) and any corresponding supplier solution responses that must be met by the relevant parties, all of which should be contractually tied to clearly identified timelines in a comprehensive implementation/project plan.

If a construction project runs into difficulty, it is likely that there will be a cascade of claims, with stakeholders seeking to recover their losses from the party next in the contractual chain. The claims will typically be for breach of contract arising from a failure to provide services in accordance with the express terms of the contract (e.g. by missing contractual milestones, or by the AI producing results which do not meet the contractual specifications for the AI tool itself or the operational use requirements for that tool), and/or with reasonable care and skill. Such claims may give rise to damages, termination rights and/or other contractual remedies specified in the contracts.

The use of AI means that typical liability frameworks may not be suitable. Parties contracting to use an AI tool in a construction project should ensure from the outset that the agreement includes AI-specific warranties, indemnities and limitation provisions. These terms should be tailored to the specific context in which the AI tool will be deployed and should be based on standards that are clearly measurable. This will likely involve drafting warranties which, whilst based on common service standards such as reasonable care and skill, respond to the fact that an AI tool is being utilised. Examples of this type of warranty include: that the AI tool will behave in the same way as a suitably capable and experienced human exercising reasonable skill and care in providing the service; that its outputs will be monitored and reviewed by a suitably qualified human; and that the AI developer will use a suitably diverse team to design and develop the AI software. Where the AI's outputs are subject to human oversight, the scope of those obligations should be clearly drafted, including the required skills and experience of the individuals concerned, the nature of any training required, and the processes to be followed in testing, monitoring and reviewing the AI's outputs (e.g. testing and analysis of the AI outputs to be carried out on a monthly basis for the duration of the project). Record-keeping in relation to the review of decisions made by AI solutions can also help to manage the risk associated with their use.

Quite apart from the merits-based challenges of establishing the causes of an AI tool's failure, given the complexity of the tool's development, it is also important for parties to recognise and mitigate the risk that the AI developer (often a start-up) may not have sufficient assets or sufficient insurance coverage to satisfy a claim.

The contractual issues referred to above are highly specific to the use to which the AI will be put, and so contracting parties should engage with their stakeholders, consultants, lawyers and other experts to help them navigate this complex and evolving area.

Contact

Mark Hickson

Head of Business Development

onlineteaminbox@brownejacobson.com

+44 (0)370 270 6000


Anthony Nagle

Partner

Anthony.Nagle@brownejacobson.com

+44 (0)20 7871 8501
