TLDRs
- Goldman blocks Claude in Hong Kong after a strict interpretation of its contract terms with Anthropic.
- ChatGPT and Gemini remain available on Goldman systems despite the Anthropic restriction.
- Ownership rules and geopolitics increasingly determine enterprise AI tool availability globally.
- Goldman move highlights rising complexity of cross-border AI compliance and governance risks.
Goldman Sachs has restricted access to Anthropic’s Claude for its bankers in Hong Kong after a recent internal review of its agreement with the AI startup.
The decision follows a stricter interpretation of Anthropic’s usage terms, which the bank assessed in consultation with the company.
The move highlights how fast-changing AI partnerships in global finance are increasingly shaped not only by technology capabilities but also by legal frameworks, ownership structures, and geopolitical considerations.
Strict Interpretation of Agreement
According to people familiar with the matter, Goldman Sachs determined that its Hong Kong-based staff should no longer use any Anthropic products, including Claude, after re-examining contract conditions. The decision came after discussions with Anthropic, which clarified how its access rules apply to global institutions operating in multiple jurisdictions.
While Claude has been removed from approved internal tools in Hong Kong, other AI systems such as OpenAI’s ChatGPT and Google’s Gemini remain available to staff. The bank’s internal AI platform continues to support these alternatives for productivity and workflow tasks.
Goldman had previously positioned itself as an early adopter of generative AI in financial services. Earlier in the year, the firm announced it was collaborating with Anthropic on developing AI agents designed to assist with internal operations, signaling a broader push toward automation and AI-driven efficiency.
Regional Rules Shape Access
The restriction also reflects the complex regulatory and contractual environment surrounding advanced AI systems. In mainland China, US-developed models like ChatGPT and Claude are not permitted, though Hong Kong has generally maintained separate access conditions.
However, Anthropic’s policies extend beyond geography alone. The company restricts access to organizations that are more than 50% owned by entities based in unsupported jurisdictions. This means access decisions can depend on corporate control structures rather than just the physical location of employees.
In Goldman Sachs’ case, the bank appears to have opted for a conservative interpretation of these rules to ensure compliance and avoid potential breaches of contract. This approach underscores how financial institutions are increasingly cautious about aligning AI usage with evolving provider terms.
AI Governance Becomes More Complex
Anthropic’s policy framework is designed, according to the company, to mitigate national security risks and reduce exposure to jurisdictions where data access could be subject to government pressure. This has introduced a new layer of complexity for multinational firms deploying AI tools globally.
For large institutions like Goldman Sachs, these constraints mean AI adoption strategies must account for ownership structures, compliance obligations, and cross-border legal interpretations. The result is a fragmented AI environment where different tools are available depending on region and regulatory alignment.
The impact of such restrictions is also spilling over into the broader tech ecosystem. AI products tied to Chinese-linked ownership structures or partnerships have faced uncertainty in global markets, with some users reportedly reconsidering tools that rely on restricted models.
Broader Market Implications
The tightening of AI access rules is also creating opportunities for alternative providers. Chinese AI developers are increasingly positioning themselves as substitutes for restricted US-based tools, offering competitive pricing and localized solutions to fill potential gaps in the market.
For global enterprises, the situation underscores a growing reality: AI deployment is no longer purely a technical decision. It is increasingly influenced by geopolitical alignment, corporate ownership, and evolving compliance frameworks.
Goldman Sachs’ decision reflects a wider trend in the financial sector, where institutions are balancing innovation in artificial intelligence with heightened scrutiny over data security and international regulatory exposure. As AI becomes more embedded in core business operations, similar restrictions may become more common across global firms.