In addition to the above objectives, we will not design or deploy AI in the following application areas:
1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance violating internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
As our experience in this space deepens, this list may evolve.