AI is here. How can real estate navigate the risks and stay ahead?
Navigating risks to stay ahead of the pack
Understanding your role in bringing AI to your business is crucial. So let's look at how some real estate companies are already using the technology in an array of applications.
Common ways of interacting with AI and GenAI systems include:
- Using existing foundation models or tools such as GPT-4 or Microsoft Copilot for internal tasks, e.g., summarizing a market research report.
- Purchasing off-the-shelf AI-powered products/services from PropTech service providers, e.g., buying a SaaS product for HVAC system control.
- Partnering with external AI experts to customize solutions to your specific business needs, e.g., creating a customized tool to optimize sustainability performance across a portfolio.
- Training and fine-tuning models with proprietary data to provide services through a client-facing interface, e.g., building a client-facing chatbot that makes investment recommendations using historical transaction data.
In these use cases, real estate investors, developers, and corporate occupiers are generally categorized as "AI users/deployers," a term defined as "natural or legal persons that deploy an AI system in a professional capacity" by the European Union’s AI Act. This distinguishes them from AI developers and individual end-users.
AI developers focus on creating systems and ensuring the systems function correctly and responsibly, while AI users/deployers must navigate the practical, ethical, and regulatory implications of implementing and relying on these systems in their professional activities.
While the potential for damage looks substantial, these risks can be effectively managed with well-crafted strategic, technical, and legal frameworks. The next sections focus on key considerations in mitigating these three types of risks for real estate investors, developers, and corporate occupiers.
Yao Morin
Chief Technology Officer, JLLT
New regulations are changing the game
AI regulation reached a milestone in 2024. Following the U.S. Executive Order on AI at the end of October 2023, the EU AI Act – the world's first comprehensive AI law – was approved by the European Parliament. It is expected to set a global benchmark much as the GDPR did for data privacy. Concurrently, regulators and lawmakers in a number of other countries, including China, Canada and Australia, are actively advancing their own AI legislative efforts.
The EU AI Act classifies AI systems according to their societal risks into:
- Unacceptable (e.g., social scoring and manipulative AI)
- High-risk (e.g., biometrics, critical infrastructure, border control)
- Limited risk (e.g., chatbots)
- Minimal risk (e.g., video games, spam filters)
Different requirements apply to each risk tier, with the majority of obligations falling on AI providers and developers rather than users (although some remain with users). These obligations include enhancing data governance, disclosing technical documentation, providing instructions for use, and complying with the EU's Copyright Directive, among others.
For real estate AI users/deployers, this regulatory step is welcome, as it promises increased transparency in data collection, model training, and use. Nonetheless, as these regulations take effect, it is imperative to evaluate compliance within the specific context of your use cases and to verify that your AI providers build tools responsibly. Failure to do so could result in fines, liabilities, or even criminal penalties for your business.
In addition to overarching AI regulations, there are also regulations targeted at specific tools and use cases. For instance, Automated Valuation Models (AVMs) will be regulated in the U.S. and EU through legislation and appraisal standards in the near future.
At the same time, using AI systems irresponsibly could violate real estate industry regulations such as fair housing laws and antitrust rules. For example, RealPage has been accused of anti-competitive practices involving its pricing algorithms for rental housing: tools designed to optimize pricing could inadvertently lead to illegal price-fixing or other anti-competitive behavior. It is important to be aware of such risks and assess your application carefully.



