Search Results

Documents authored by Tripp, Omer


Document
Static Analysis for AWS Best Practices in Python Code

Authors: Rajdeep Mukherjee, Omer Tripp, Ben Liblit, and Michael Wilson

Published in: LIPIcs, Volume 222, 36th European Conference on Object-Oriented Programming (ECOOP 2022)


Abstract
Amazon Web Services (AWS) is a comprehensive and broadly adopted cloud provider. AWS SDKs provide access to AWS services through API endpoints. However, incorrect use of these APIs can lead to code defects, crashes, performance issues, and other problems. AWS best practices are a set of guidelines for correct and secure use of these APIs to access cloud services, allowing conformant clients to fully reap the benefits of cloud computing. We present static analyses, developed in the context of a commercial service for detection of code defects and security vulnerabilities, to identify deviations from AWS best practices. We focus on applications that use the AWS SDK for Python, called Boto3. Precise static analysis of Python cloud applications requires robust type inference for inferring the types of cloud service clients. However, Boto3’s "Pythonic" APIs pose unique challenges for type resolution, as does the interprocedural style in which service clients are used. We offer a layered approach that combines multiple type-resolution and tracking strategies in a staged manner: (i) general-purpose type inference augmented by type annotations, (ii) interprocedural dataflow analysis expressed in a domain-specific language, and (iii) name-based resolution as a low-confidence fallback. Across >3,000 popular Python GitHub repos that make use of the AWS SDK, our layered type inference system achieves 85% precision and 100% recall in inferring Boto3 clients in Python client code. Additionally, we use real-world developer feedback to assess a representative sample of eight AWS best-practice rules. These rules detect a wide range of issues including pagination, polling, and batch operations. Developers have accepted more than 85% of the recommendations made by five out of eight Python rules, and almost 83% of all recommendations.
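
As a concrete illustration of the pagination rule mentioned in the abstract, the following hedged Python sketch (not taken from the paper; the function names and bucket-listing scenario are hypothetical) contrasts a call pattern that such a best-practice rule would flag, where a single list_objects_v2 call silently truncates results, with the conformant variant that drives the SDK's built-in paginator.

import boto3

def list_keys_nonconformant(bucket_name):
    # Non-conformant: list_objects_v2 returns at most 1,000 keys per call,
    # so results are silently truncated for buckets with more objects.
    s3 = boto3.client("s3")
    response = s3.list_objects_v2(Bucket=bucket_name)
    return [obj["Key"] for obj in response.get("Contents", [])]

def list_keys_conformant(bucket_name):
    # Conformant: use the SDK's paginator so every result page is consumed.
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    keys = []
    for page in paginator.paginate(Bucket=bucket_name):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    return keys

The interprocedural style noted in the abstract arises because, in realistic code, the client created by boto3.client("s3") is typically constructed in one function and consumed in another, which is why recognizing it reliably requires dataflow tracking rather than purely local type inference.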

Cite as

Rajdeep Mukherjee, Omer Tripp, Ben Liblit, and Michael Wilson. Static Analysis for AWS Best Practices in Python Code. In 36th European Conference on Object-Oriented Programming (ECOOP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 222, pp. 14:1-14:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{mukherjee_et_al:LIPIcs.ECOOP.2022.14,
  author =	{Mukherjee, Rajdeep and Tripp, Omer and Liblit, Ben and Wilson, Michael},
  title =	{{Static Analysis for AWS Best Practices in Python Code}},
  booktitle =	{36th European Conference on Object-Oriented Programming (ECOOP 2022)},
  pages =	{14:1--14:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-225-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{222},
  editor =	{Ali, Karim and Vitek, Jan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2022.14},
  URN =		{urn:nbn:de:0030-drops-162429},
  doi =		{10.4230/LIPIcs.ECOOP.2022.14},
  annote =	{Keywords: Python, Type inference, AWS, Cloud, Boto3, Best practices, Static analysis}
}
Document
Synergies among Testing, Verification, and Repair for Concurrent Programs (Dagstuhl Seminar 16201)

Authors: Julian Dolby, Orna Grumberg, Peter Müller, and Omer Tripp

Published in: Dagstuhl Reports, Volume 6, Issue 5 (2016)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 16201 "Synergies among Testing, Verification, and Repair for Concurrent Programs". This seminar builds upon, and is inspired by, several past seminars on program testing, verification, repair and combinations thereof. These include Dagstuhl Seminar 13021 "Symbolic Methods in Testing"; Dagstuhl Seminar 13061 "Fault Prediction, Localization and Repair"; Dagstuhl Seminar 14171 "Evaluating Software Verification Systems: Benchmarks and Competitions"; Dagstuhl Seminar 14352 "Next Generation Static Software Analysis Tools"; Dagstuhl Seminar 14442 "Symbolic Execution and Constraint Solving"; and Dagstuhl Seminar 15191 "Compositional Verification Methods for Next-Generation Concurrency". These were held in January 2013; February 2013; April 2014; August 2014; October 2014; and May 2015, respectively. Two notable contributions of Dagstuhl Seminar 16201, which distinguish it from these past seminars, are (i) the focus on concurrent programming, which introduces significant challenges to testing, verification and repair tools, as well as (ii) the goal of identifying and exploiting synergies between the testing, verification and repair research communities in light of common needs and goals.

Cite as

Julian Dolby, Orna Grumberg, Peter Müller, and Omer Tripp. Synergies among Testing, Verification, and Repair for Concurrent Programs (Dagstuhl Seminar 16201). In Dagstuhl Reports, Volume 6, Issue 5, pp. 56-71, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{dolby_et_al:DagRep.6.5.56,
  author =	{Dolby, Julian and Grumberg, Orna and M\"{u}ller, Peter and Tripp, Omer},
  title =	{{Synergies among Testing, Verification, and Repair for Concurrent Programs (Dagstuhl Seminar 16201)}},
  pages =	{56--71},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2016},
  volume =	{6},
  number =	{5},
  editor =	{Dolby, Julian and Grumberg, Orna and M\"{u}ller, Peter and Tripp, Omer},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.6.5.56},
  URN =		{urn:nbn:de:0030-drops-67203},
  doi =		{10.4230/DagRep.6.5.56},
  annote =	{Keywords: (automatic) bug repair, concurrency bugs, concurrent programming, deductive verification, interactive verification, linearizability, synchronization testing}
}
Document
Access-rights Analysis in the Presence of Subjects

Authors: Paolina Centonze, Marco Pistoia, and Omer Tripp

Published in: LIPIcs, Volume 37, 29th European Conference on Object-Oriented Programming (ECOOP 2015)


Abstract
Modern software development and run-time environments, such as Java and the Microsoft .NET Common Language Runtime (CLR), have adopted a declarative form of access control. Permissions are granted to code providers, and during execution, the platform verifies compatibility between the permissions required by a security-sensitive operation and those granted to the executing code. While convenient, configuring the access-control policy of a program is not easy. If a code component is not granted sufficient permissions, authorization failures may occur. Thus, security administrators tend to define overly permissive policies, which violate the Principle of Least Privilege (PLP). A considerable body of research has been devoted to building program-analysis tools for computing the optimal policy for a program. However, Java and the CLR also allow executing code under the authority of a subject (user or service), and no program-analysis solution has addressed the challenges of determining the policy of a program in the presence of subjects. This paper introduces Subject Access Rights Analysis (SARA), a novel analysis algorithm for statically computing the permissions required by subjects at run time. We have applied SARA to 348 libraries in IBM WebSphere Application Server - a commercial enterprise application server written in Java that consists of >2 million lines of code and is required to support the Java permission- and subject-based security model. SARA detected 263 PLP violations, 219 cases of policies with missing permissions, and 29 bugs that led code to be unnecessarily executed under the authority of a subject. SARA corrected all these vulnerabilities automatically, and additionally synthesized fresh policies for all the libraries, with a false-positive rate of 5% and an average running time of 103 seconds per library. SARA also implements mechanisms for mitigating the risk of false negatives due to reflection and native code; according to a thorough result evaluation based on testing, no false negative was detected. SARA enabled IBM WebSphere Application Server to receive the Common Criteria for Information Technology Security Evaluation Assurance Level 4 certification.

Cite as

Paolina Centonze, Marco Pistoia, and Omer Tripp. Access-rights Analysis in the Presence of Subjects. In 29th European Conference on Object-Oriented Programming (ECOOP 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 37, pp. 222-246, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{centonze_et_al:LIPIcs.ECOOP.2015.222,
  author =	{Centonze, Paolina and Pistoia, Marco and Tripp, Omer},
  title =	{{Access-rights Analysis in the Presence of Subjects}},
  booktitle =	{29th European Conference on Object-Oriented Programming (ECOOP 2015)},
  pages =	{222--246},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-86-6},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{37},
  editor =	{Boyland, John Tang},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2015.222},
  URN =		{urn:nbn:de:0030-drops-52249},
  doi =		{10.4230/LIPIcs.ECOOP.2015.222},
  annote =	{Keywords: Static Analysis, Security, Access Control}
}