
Regulating AI Power: A Calculated Risk

The US government has introduced a new regulation to track and potentially restrict artificial intelligence (AI) systems that pose a security risk due to their immense computational power.

A New Benchmark for AI Systems

Regulators are using a specific metric, the number of floating-point operations (flops) used to train a model, to gauge an AI system's potential threat. Any model trained with more than 10^26 flops must be reported to the US government and could soon trigger stricter requirements in California.

The benchmark counts raw arithmetic operations, and it has raised concerns among tech leaders that it might stifle innovation. Critics argue that such an arbitrary threshold could snuff out emerging AI startups.

Regulatory Thresholds: A Balancing Act

The European Union's sweeping AI Act uses a similar metric but sets the bar 10 times lower at 10^25 flops. China's government has also looked into measuring computing power to determine which AI systems need safeguards.
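The thresholds above are simple numeric cutoffs, so the comparison regulators describe can be sketched in a few lines. This is an illustrative sketch only, not anything from the regulations themselves; the hypothetical model's compute figure is invented for demonstration.

```python
# Regulatory compute thresholds mentioned in the article:
US_THRESHOLD_FLOPS = 1e26   # US reporting threshold (10^26 flops)
EU_THRESHOLD_FLOPS = 1e25   # EU AI Act threshold (10^25 flops)

def regulatory_flags(training_flops: float) -> dict:
    """Return which thresholds a model's total training compute exceeds."""
    return {
        "us_reporting_required": training_flops > US_THRESHOLD_FLOPS,
        "eu_ai_act_covered": training_flops > EU_THRESHOLD_FLOPS,
    }

# A hypothetical model trained with 5 x 10^25 flops clears the EU bar
# but falls below the US one.
print(regulatory_flags(5e25))
# → {'us_reporting_required': False, 'eu_ai_act_covered': True}
```

Because the EU bar sits an order of magnitude lower, a model can be covered in Europe while remaining below the US reporting line, which is one reason critics call the cutoffs arbitrary.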

California's legislation adds another metric: regulated AI models must also have cost at least $100 million to train. The aim is to exclude models that lack the ability to cause critical harm from safety testing requirements.

The Debate Over Flops Thresholds

Some in the industry, like AI researcher Sara Hooker, argue that using flops thresholds as a proxy for risk is too crude and hard-coded. They claim there's "no clear scientific support" for such metrics and that they might fail to mitigate risks.

Regulators, however, see this metric as a temporary one that could be adjusted later. They emphasize the need for regulation in light of AI systems' rapidly growing capabilities.

Navigating Regulatory Uncertainty

The debate surrounding flops thresholds and AI regulations reflects the challenges of regulating emerging technologies.

As the stakes grow higher, regulators must balance innovation with caution to prevent harm. Meanwhile, tech leaders must adapt to evolving regulatory landscapes while pushing for better metrics to gauge AI risks.

A Calculated Risk

The introduction of flops thresholds represents a calculated risk by regulators to ensure public safety in the face of rapidly advancing AI capabilities.

While this approach has its flaws, it also acknowledges that progress requires careful consideration and adaptation to new circumstances. As Anthony Aguirre, executive director of the Future of Life Institute, said, "This is all happening very fast... I think there's a legitimate criticism that these thresholds are not capturing exactly what we want them to capture."
