Helping policymakers weigh the benefits of open source AI
Policymakers are increasingly focusing on the software components of AI systems and on how developers make AI model weights available for downstream use. GitHub enables developer collaboration on innovative software projects, and we’re committed to ensuring policymakers understand developer needs when crafting AI regulation. We support AI governance that empowers developers to build more responsibly, securely, and effectively, accelerating human progress.
GitHub submitted a filing in response to the U.S. NTIA’s request for comment on the potential risks, benefits, and policy implications of widely available model weights, and of open source AI, which makes available to developers not only weights but also code and other components, under terms that allow developers to inspect, modify, (re)distribute, and use AI components for any purpose. Our submission can be found here, but there are a few important ideas we want to highlight.
Open source AI presents clear benefits
It is important to consider the myriad benefits of open source AI. Open source is a public good, designed for all to use: hobbyists, professional developers, companies, governments, and anyone looking to make an impact with code. The broad availability of open source has already generated tremendous value for society, accelerating innovation, competition, and the wide use of software and AI across the global economy. Open source AI advances the responsible development of AI systems, the use of AI in research across disciplines, developer education, and government capacity.
Evaluation and regulation should prioritize AI systems, not models
Evaluation and regulation are better focused on the full AI system and the policies governing its use, rather than on subcomponents such as AI models. Policies that focus on restricting models are likely to inhibit beneficial use more than they prevent criminal abuse. They also risk missing the forest for the trees: orchestration and safety software included in AI systems can expand or constrain AI capabilities. Current evidence does not support government restrictions on sharing AI models. Instead, irrespective of model type, policymakers should prioritize regulation of high-risk AI systems and prepare plans to address abuse by bad actors. Security through obscurity is not a winning strategy.
The path to societal resilience is neither open nor closed
Governments have an important role to play in steering the technological frontier and building societal resilience that allows us to seize the benefits enabled by AI while reducing its risks. From accelerating needed AI measurement science and safety research, to supporting public education and protective measures, civic institutions are well-positioned to usher in a new era of AI governed by our values. The open availability, diversity, and diffusion of AI models can support this societal resilience and flourishing. With this in mind, GitHub looks forward to continuing policy collaboration to accelerate human progress.