A Hazard Analysis Framework for Code Synthesis Large Language Models

Codex, a large language model (LLM) trained on a variety of codebases, exceeds the previous state of the art in its capacity to synthesize and generate code. Although Codex provides a plethora of benefits, models that can generate code at such scale have significant limitations, alignment problems, the potential to be misused, and the potential to increase the rate of progress in technical fields that may themselves have destabilizing impacts or misuse potential. Yet such safety impacts are not yet known or remain to be explored. In this paper, we outline a hazard analysis framework constructed at OpenAI to uncover the hazards and safety risks that the deployment of models like Codex may impose technically, socially, politically, and economically. The analysis is informed by a novel evaluation framework that determines the capacity of advanced code generation techniques against the complexity and expressivity of specification prompts, and their capability to understand and execute them relative to human ability.

Read in full here:

This thread was posted by one of our members via one of our news source trackers.
