Abstract
:
Web APIs are widely used across application infrastructures because they allow different application services to interact and communicate over the network. A Web API lets an application share functionality and data with others, making it a preferred choice for integration across infrastructures. Despite these benefits, Web APIs are not without security concerns. Many vulnerabilities arise from misconfigurations or insufficient security mechanisms, and they can be caught through functionality testing. One critical functionality test is fuzzing, a testing method that identifies vulnerabilities stemming from flawed input and business-logic validation. In this research, we performed fuzzing experiments from both offensive and defensive perspectives. For the offensive approach, we compared several state-of-the-art fuzzing tools, namely EvoMaster, Restler, and RestTestGen. For the defensive approach, we compared several state-of-the-art input validation libraries: Joi, Zod, Marshmallow, and Pydantic. The performance metrics used are the fuzzing tool's effectiveness in finding bugs/errors and the validation library's effectiveness in rejecting fuzzing payloads, measured as the percentage of error reduction. Evaluation results show that Restler found the most bugs/errors among the fuzzing tools. The validation libraries achieved the following error-reduction rates: Joi 98.34%, Zod 96.68%, Marshmallow 98.04%, and Pydantic 98.04%.
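The error-reduction percentage used to score each validation library can be sketched as below. This is a minimal illustration of the metric only; the function name and the example counts are assumptions for demonstration and are not taken from the paper.

```python
def error_reduction(errors_without: int, errors_with: int) -> float:
    """Percentage of fuzzing-induced bugs/errors eliminated once a
    validation library is placed in front of the Web API.

    errors_without: errors observed when fuzzing the unprotected API
    errors_with:    errors remaining when the same payloads pass
                    through the validation library first
    """
    if errors_without == 0:
        # No baseline errors means there is nothing to reduce.
        return 0.0
    return (errors_without - errors_with) / errors_without * 100.0


# Hypothetical counts: 500 baseline errors, 10 surviving validation.
print(round(error_reduction(500, 10), 2))  # → 98.0
```

A library that blocks every malformed payload would score 100%; the figures reported in the abstract (e.g. Joi's 98.34%) indicate that a small fraction of fuzzing payloads still triggered errors behind each validator.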