What we learned from the Security Lab’s Community Office Hours
The GitHub Security Lab provided office hours for open source projects looking to improve their security posture and reduce the risk of a breach. Here’s what we learned and how you can participate, too.
Earlier this year, the GitHub Security Lab kicked off an initiative to provide office hours for open source projects looking to improve their security posture and reduce the risk of a breach. The initiative aligned with our mission to inspire and enable the community to secure the open source software we all depend on, and it addressed an often-voiced need in the open source community for security expertise.
After extending an invitation to the community, we connected with six open source projects, and the results were impressive! Maintainers who participated in the initiative saw several immediate, concrete improvements to their projects’ security.
For example, following our discussion, Guzzle, a widely used PHP HTTP client with 22k stars and 2.3k forks on GitHub, reported a significant reduction in the time needed to process vulnerability reports. In the past, acknowledging and confirming a bug, implementing and reviewing the fix, and notifying their user base took the Guzzle team several weeks. With the Security Lab’s help, they were able to manage five separate vulnerabilities in just a few hours!
Another team of open source maintainers was inspired by the conversation to write an article about third-party GitHub Actions. This opened an opportunity for collaboration and allowed the team to share best practices for preventing permission escalation with other maintainers facing similar concerns; the sketch below shows two of the most common hardening steps.
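As a taste of what those best practices look like, here is a minimal, hedged sketch of two widely recommended hardening steps for workflows that use third-party actions: granting the automatically provided GITHUB_TOKEN only the permissions a job needs, and pinning actions to a full commit SHA instead of a mutable tag. The action name and SHA below are placeholders, not a real action.

```yaml
# .github/workflows/ci.yml (illustrative fragment)
name: CI
on: [push, pull_request]

# Least privilege: grant the GITHUB_TOKEN only what the jobs need,
# instead of relying on broad default permissions.
permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Pin third-party actions to a full commit SHA (placeholder below)
      # rather than a mutable tag like @v1, so a compromised or moved tag
      # cannot silently change the code your workflow runs.
      - uses: some-org/some-action@0123456789abcdef0123456789abcdef01234567
```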
How the Community Office Hours worked
We first asked interested maintainers to complete a short questionnaire with more information about their projects, including any security concerns. We then matched the projects up with internal security experts from GitHub based on the topics mentioned in the questionnaire and the programming languages used.
Leading up to the conversations with the project teams, the experts were asked to familiarize themselves with each codebase and prepare initial observations about the project’s security practices. This preparation allowed everyone to jump straight into useful conversations and maximize the value for each maintainer.
Common patterns that we observed
1. Maintainers struggle to define their attack surface
The top concern maintainers expressed was about their ability to define their project’s attack surface. Simply put, everyone asked how we would hack them. The attack surface is the set of places in a project that carry a higher risk of attack or malicious activity: points where user input enters the codebase, or where user-controlled data can reach a critical operation such as code execution or a file system call.
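To make that concrete, here is a minimal, hypothetical Python sketch of one such entry point: a user-supplied filename flowing into a file system operation. The function and directory here are invented for illustration; they are not from any of the participating projects.

```python
from pathlib import Path

REPORTS_DIR = Path("/srv/app/reports")

def read_report(user_supplied_name: str) -> str:
    """Part of the attack surface: user input reaches a file system call.

    Without the containment check below, a request for
    '../../etc/passwd' would escape REPORTS_DIR, a classic
    path traversal vulnerability.
    """
    candidate = (REPORTS_DIR / user_supplied_name).resolve()
    # Mitigation: ensure the resolved path is still inside REPORTS_DIR.
    if not candidate.is_relative_to(REPORTS_DIR):
        raise ValueError("invalid report name")
    return candidate.read_text()
```

Mapping out every such point where untrusted data meets a sensitive operation is the heart of defining your attack surface.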
Everyone should also be aware that attack surfaces can extend well beyond code. We found countless examples of attack vectors involving a project’s supply chain, confidential information, or CI/CD pipeline. Even maintainers themselves can be attack vectors through cleverly designed social engineering attacks and account takeovers.
Developers without a security background may not be aware of all of the examples above. That’s why we’re here! Since every project is different, the first step is to identify the most pressing attack vectors, find the weak points, and design theoretical attacks against them. This practice is known as threat modeling. During a threat modeling exercise, team members brainstorm ideas, present evidence of weak points, and explain how they would exploit these weaknesses.
It’s also important to understand that each attack differs in its impact on users, the effort and time required, the likelihood of success, and the skills needed to execute it. By weighing these factors, maintainers can prioritize their mitigation efforts and address the most pressing dangers with a data-driven approach. This cheat sheet from OWASP is a great starting point for readers who want to learn more.
2. Adopting a few simple practices can significantly improve your project’s security
Some maintainers weren’t aware of a handful of simple best practices that bring outsized benefits and are easy to implement. The following five practices provided the most value to our participants.
- Enable an additional factor (or several) of authentication (2FA or MFA) to safeguard against account takeover and impersonation.
- Activate automated code scanning in your CI/CD workflow to flag bugs early in the software development lifecycle (see the workflow sketch after this list).
- Activate Dependabot to keep dependencies up to date (a sample configuration also follows this list).
- Publish a security policy to tell users how to responsibly disclose vulnerabilities in your code.
- Create security advisories to alert users about vulnerabilities in your code and point them to a patched version once a fix is released.
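To ground the second and third items, here are minimal sketches of what enabling them can look like on GitHub: a CodeQL code scanning workflow and a Dependabot configuration. Both are trimmed-down examples; the branch names, language matrix, and update schedule are assumptions you would adapt to your own project.

```yaml
# .github/workflows/codeql.yml: a minimal code scanning workflow
name: CodeQL
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      contents: read          # needed to check out the code
      security-events: write  # needed to upload scan results
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: python   # assumption: match your codebase
      - uses: github/codeql-action/analyze@v3
```

```yaml
# .github/dependabot.yml: keep dependencies up to date
version: 2
updates:
  - package-ecosystem: "pip"  # assumption: match your ecosystem
    directory: "/"
    schedule:
      interval: "weekly"
```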
3. Imbalance between functionality testing and security testing
The final pattern we observed was that some projects focused far more (or exclusively) on functionality testing than on security testing. In one case, a project had implemented input sanitization to prevent injections, but the sanitization itself was never tested to ensure it worked properly. This non-functional requirement could easily have been covered by unit tests that intentionally feed in malicious inputs. Another option is fuzzing: a dynamic, automated security testing method that runs the program with invalid, malformed, or unexpected inputs to reveal vulnerabilities through crashes and information leakage.
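As a hedged sketch of that unit testing approach, the example below feeds deliberately malicious inputs to a hypothetical sanitize_filename function using pytest. The module, function, and assertions are assumptions for illustration, not any participant’s actual code.

```python
import pytest

from myproject.sanitize import sanitize_filename  # hypothetical module

# Deliberately malicious inputs that a sanitizer should neutralize.
MALICIOUS_INPUTS = [
    "../../etc/passwd",    # path traversal
    "report.txt\x00.png",  # null-byte trick
    "normal; rm -rf /",    # shell metacharacters
]

@pytest.mark.parametrize("payload", MALICIOUS_INPUTS)
def test_sanitizer_neutralizes_malicious_input(payload):
    # The security requirement itself is under test: sanitized output
    # must not enable directory traversal or command injection.
    cleaned = sanitize_filename(payload)
    assert ".." not in cleaned
    assert ";" not in cleaned
    assert "\x00" not in cleaned
```

The same payload list makes a natural seed corpus if you later graduate to a coverage-guided fuzzer.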
If you are an open source maintainer, you can immediately improve the security of your project by following the tips above. We encourage you to also consider participating in our Community Office Hours. If you are interested, please fill out this form and we’ll get back to you!