Part 8 of our OWASP API Security Top 10 Deep Dive Series
The Doctor’s Dilemma
Imagine visiting a new doctor for a routine check-up. You share basic health details: height, weight, and medications – all needed to provide good care.
However, you then discover that the doctor’s back-office system can also access your diary entries, bank balance, private messages, and browsing history. What started as a sharing of information becomes an exposure of your digital life.
The doctor isn’t being malicious – they’re simply using the access their system provides. But somewhere in the architectural design, someone assumed that if a doctor should have access to some of your information, they should have access to all of it.
This is OWASP’s #3 API Security Risk: Broken Object Property Level Authorization – when systems grant access to entire objects instead of controlling access to specific properties. It’s a fundamental misunderstanding of how granular privacy should work in the digital age.
The All-or-Nothing Fallacy
Binary thinking creates a dangerous blind spot in system design. When we think about access control, we default to the idea that either someone has access or they don’t. This works for physical spaces, where you are either inside the building or outside it, but fails in digital environments where information has layers, contexts, and varying sensitivity.
In practice, developers are far more likely to implement object-level access controls than property-level ones. Teams are good at deciding who can access what, but rarely consider which parts of that “what” should be accessible.
This is the “all-or-nothing fallacy”, treating authorization as a single gate instead of a system of graduated permissions. It’s like giving your front door key the power to open every drawer, cabinet, and safe in the house.
The 2023 OWASP update merges two former vulnerabilities, Excessive Data Exposure (API3:2019) and Mass Assignment (API6:2019), because both stem from the same root problem: failing to authorize at the property level, not just the object level.
The Oversharing Architecture
Consider this scenario that plays out daily across modern applications: A social media API provides a user profile endpoint. When you request your profile, you rightly get your account object, but the response also includes your private email, internal user ID, account creation date, privacy settings, and even your hashed password.
The API isn’t broken in the traditional sense; you are authorized to access your account object. But it’s returning far more properties than you need or should see. It’s like asking a librarian for today’s newspaper and getting the entire archive since 1892.
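The gap between those two behaviors is easy to sketch. Here is a minimal Python illustration (the `user` record and its field names are hypothetical, standing in for whatever your data model holds):

```python
# Hypothetical user record as stored internally.
user = {
    "id": 42,
    "display_name": "Alice",
    "email": "alice@example.com",
    "password_hash": "$2b$12$abcdefghijklmnopqrstuv",
    "internal_risk_score": 0.87,
}

# Vulnerable pattern: serialize the whole object, so every property leaks.
def profile_vulnerable(record):
    return dict(record)

# Safer pattern: project only the fields this endpoint is meant to expose.
PUBLIC_PROFILE_FIELDS = {"id", "display_name"}

def profile_safe(record):
    return {k: v for k, v in record.items() if k in PUBLIC_PROFILE_FIELDS}
```

The safe version makes the exposed surface explicit and reviewable: adding a new column to the data model no longer silently widens the API response.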
This stems from “scope creep in permission design.” APIs start with legitimate needs: “Users should see their profile,” but over time, “profile” grows to include increasingly sensitive data. Before long, a simple request exposes everything the system knows about that user.
The mass assignment side is just as dangerous. It happens when APIs accept more input properties than users should be able to change. An attacker could inject extra fields, changing their role from “regular” to “admin” or altering their account balance, and the API processes it without question.
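A sketch of how the mass assignment half plays out, using a hypothetical account record and the naive “merge whatever the client sends” pattern:

```python
# Hypothetical account state on the server.
account = {"username": "mallory", "role": "regular", "balance": 10.0}

# Vulnerable pattern: merge the client's JSON straight into the stored
# record, keys and all.
def update_vulnerable(state, payload):
    updated = dict(state)
    updated.update(payload)  # attacker-controlled keys land directly
    return updated

# The attacker includes fields the endpoint never meant to accept.
attack_payload = {"username": "mallory", "role": "admin", "balance": 1000000.0}
hacked = update_vulnerable(account, attack_payload)
# hacked now carries role="admin": privilege escalation via mass assignment
```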
The Trust Gradient Problem
What makes this vulnerability dangerous is how it exploits the trust gradient in all authorization systems. Most systems do a decent job deciding whether you can access a user account, order record, or document, but they often skip the more nuanced question: “Which aspects of that object should this person, in this context, be able to see or modify?”
Consider an e-commerce platform where customer service reps need order details to help customers. The object-level authorization works: reps can access order #12345 when assisting its owner. But if the API returns the entire order object, the rep might also see:
- Full payment method details
- Internal pricing calculations and profit margins
- Fraud detection scores and risk assessments
- Personal notes from past interactions
- Administrative flags affecting service decisions
They don’t need this data to do their job, and seeing it could compromise privacy and security. Yet because the check happens only at the object level, these property-level exposures slip by unnoticed.
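One way to close that gap is a per-role field allow-list applied at serialization time. A sketch, with hypothetical roles and field names:

```python
# Hypothetical order record mixing operational and sensitive fields.
ORDER = {
    "order_id": 12345,
    "items": ["blue widget"],
    "shipping_status": "in transit",
    "payment_method": "VISA ending 1111",
    "profit_margin": 0.34,
    "fraud_score": 0.02,
}

# Which properties each role may see; anything unlisted is invisible.
VIEWABLE_FIELDS = {
    "support_rep": {"order_id", "items", "shipping_status"},
    "finance": {"order_id", "items", "payment_method", "profit_margin"},
}

def view_order(order, role):
    allowed = VIEWABLE_FIELDS.get(role, set())  # unknown roles see nothing
    return {k: v for k, v in order.items() if k in allowed}
```

The support rep still gets everything needed to help the customer; the fraud score and profit margin never leave the server.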
The Evolution of Expectation
This vulnerability has grown as our relationship with data sharing has evolved. Twenty years ago, applications were simpler, data models smaller, and “user data” straightforward. Today’s applications hold vast, interconnected webs of information: social connections, behavioral analytics, preferences, transactions, locations, and predictive models.
Our mental models of privacy haven’t kept pace. We still think in terms of “sharing contact info” rather than “sharing 847 distinct data points that together form a detailed behavioral profile.” This gap between simple privacy concepts and complex data reality creates perfect conditions for broken property-level authorization.
People usually underestimate both the amount of data collected and the sensitivity of seemingly harmless combinations. We grant “location access” without realizing it reveals work schedules, social circles, and sometimes even health conditions.
Building Intentional Boundaries
The solution to broken object property-level authorization requires a fundamental shift from reactive to proactive privacy design. Instead of asking “What data do we have?” and then “Who should see it?”, the question becomes “What specific information does this person need to accomplish this particular task?”
Property-Level Authorization Design: Every API endpoint should explicitly specify which properties are accessible for which roles in which contexts. This isn’t just about hiding sensitive fields – it’s about creating clear, documented relationships between user intentions and data exposure.
Least Privilege Data Sharing: Default to sharing the minimum necessary information, then explicitly add properties only when there’s a clear business justification. This is harder than it sounds because it requires understanding not just what users request, but what they need.
Context-Aware Property Access: The same object might expose different properties depending on how it’s being accessed. A user viewing their own profile sees different fields than a colleague viewing their public profile, which differs from what an administrator sees during account management.
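This can be expressed as a single serializer that selects a view based on who is asking. A sketch (the view names and fields are hypothetical):

```python
# Hypothetical per-context views over the same profile object.
PROFILE_VIEWS = {
    "owner": {"id", "display_name", "email", "privacy_settings"},
    "public": {"id", "display_name"},
    "admin": {"id", "display_name", "email", "account_flags"},
}

def serialize_profile(profile, viewer_id, viewer_is_admin=False):
    # Pick the view from the relationship between viewer and object.
    if viewer_is_admin:
        view = "admin"
    elif viewer_id == profile["id"]:
        view = "owner"
    else:
        view = "public"
    allowed = PROFILE_VIEWS[view]
    return {k: v for k, v in profile.items() if k in allowed}

profile = {
    "id": 7,
    "display_name": "Sam",
    "email": "sam@example.com",
    "privacy_settings": {"dm": "friends"},
    "account_flags": ["beta"],
}
```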
Input Validation Beyond Types: For APIs that accept user input, implement explicit allow-lists of which properties can be modified by which users. Never assume that because a property exists in your data model, it should be modifiable through external API calls.
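On the write side, that allow-list can reject any property outside the role's permitted set. A sketch with hypothetical roles and fields, which fails loudly so probing attempts show up in logs and tests:

```python
# Hypothetical per-role write allow-lists. Sensitive fields such as
# "role" or "verified_status" are simply absent for regular users.
WRITABLE_FIELDS = {
    "regular": {"display_name", "bio"},
    "admin": {"display_name", "bio", "account_type", "verified_status"},
}

def apply_update(record, payload, role):
    allowed = WRITABLE_FIELDS.get(role, set())
    rejected = set(payload) - allowed
    if rejected:
        # Reject the whole request rather than silently dropping keys.
        raise PermissionError(f"not writable for {role!r}: {sorted(rejected)}")
    updated = dict(record)
    updated.update(payload)
    return updated
```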
Graduated Disclosure Patterns: Design APIs that can return different levels of detail based on the requester’s permissions and the context of the request. Think of it as having multiple “zoom levels” of data access rather than a single all-or-nothing view.
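One sketch of such zoom levels: each role has a maximum detail level, and requests above that ceiling are clamped down rather than rejected. The level and role names are hypothetical:

```python
# Detail levels from coarsest to finest; each level includes the previous.
LEVELS = ["summary", "standard", "full"]
FIELDS_BY_LEVEL = {
    "summary": {"id", "title"},
    "standard": {"id", "title", "status", "updated_at"},
    "full": {"id", "title", "status", "updated_at", "audit_log"},
}
MAX_LEVEL_FOR_ROLE = {"viewer": "summary", "editor": "standard", "auditor": "full"}

def fetch(record, requested_level, role):
    ceiling = MAX_LEVEL_FOR_ROLE.get(role, "summary")
    # Clamp the requested zoom level to what this role may see
    # (assumes requested_level is one of LEVELS).
    granted = min(requested_level, ceiling, key=LEVELS.index)
    allowed = FIELDS_BY_LEVEL[granted]
    return {k: v for k, v in record.items() if k in allowed}

doc = {"id": 1, "title": "Q3 report", "status": "draft",
       "updated_at": "2024-01-01", "audit_log": ["created"]}
```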
Broken object property-level authorization is a maturation challenge, not a fundamental limitation. As digital systems grow more sophisticated, we’re learning to model the nuanced, contextual nature of human privacy expectations in code.
This shift toward granular authorization reflects a broader evolution in digital trust, moving from the early internet’s “public” versus “private” model toward a graduated, contextual privacy that mirrors human relationships. In an age where data underpins digital relationships, getting property-level authorization right is the foundation of digital trust.
Frequently Asked Questions About Object Property Level Authorization
Q: What’s the difference between object-level and property-level authorization?
A: Object-level authorization controls whether you can access a user account, order, or document at all. Property-level authorization controls which specific fields within that object you can see or modify. Think of it like a filing cabinet: object-level determines if you can open a folder, property-level decides which pages inside you can read. Many systems get the first part right but fail at the second.
Q: Can you give a simple example of this vulnerability?
A: Imagine a social media API where you can only modify your own profile (correct object-level authorization), but the update request also lets you change fields like account_type, verified_status, or admin_privileges, because the API accepts every property in the payload without checking whether you should be able to change it.
Q: How does this relate to “Excessive Data Exposure” and “Mass Assignment”?
A: They’re two sides of the same coin. Excessive Data Exposure happens when APIs return too many fields (like including password hashes). Mass Assignment happens when APIs accept too many fields in requests (like letting you update your user role). Both stem from not controlling access at the property level.
Q: Why don’t developers just limit fields?
A: Convenience and changing requirements. Returning the entire object is easier than selecting fields for each context. But returning everything means exposing everything, including to attackers.
Q: How can I test for property-level authorization issues?
A: Document what properties each endpoint should expose or accept for different roles. Test with different users: can a regular user see admin-only fields or modify restricted ones? Add unexpected properties to requests and see if they’re processed.
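That procedure can be automated as a contract check: compare each role's actual response keys against the documented set and flag anything extra. A sketch, with a deliberately leaky fake endpoint standing in for a real API call:

```python
# Documented contract: which fields each role should receive.
EXPECTED_FIELDS = {
    "regular": {"id", "display_name"},
    "admin": {"id", "display_name", "email"},
}

def check_exposure(get_profile, role):
    """Return any fields the endpoint leaks beyond its documented contract."""
    return set(get_profile(role)) - EXPECTED_FIELDS[role]

# A fake endpoint for illustration; note the "regular" response
# accidentally includes password_hash.
def fake_get_profile(role):
    data = {"id": 1, "display_name": "Ada", "email": "ada@example.com",
            "password_hash": "$2b$12$abcdefghijklmnopqrstuv"}
    if role == "admin":
        return {k: data[k] for k in ("id", "display_name", "email")}
    return {k: data[k] for k in ("id", "display_name", "password_hash")}
```

Run against every endpoint and role in CI, and a new field cannot reach production without someone consciously adding it to the contract.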
Q: What’s the business impact?
A: Beyond privacy violations, these flaws can cause privilege escalation, financial manipulation, competitive intelligence theft, and regulatory breaches. The risk grows with the sensitivity of your data.
Q: How do microservices affect this risk?
A: They help by separating data types, but they hurt when services have inconsistent rules. Ensure consistent policies across all services and clear agreements on data sensitivity.
Q: What role does documentation play?
A: Documentation should clearly state which fields are exposed or accepted for which roles. Unlisted fields can still be exploited; attackers will try them even if clients don’t.
Q: How do modern frameworks help?
A: Tools like serializers, DTOs, and role-based views can enforce property-level rules, but only if used consistently. It’s still possible to bypass them and expose raw data.
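For example, a plain dataclass used as a DTO turns the exposed surface into a type definition rather than a runtime filter. A sketch using Python's standard library (the DTO and record are hypothetical):

```python
from dataclasses import dataclass, asdict

# Hypothetical DTO: only the fields declared here can ever be serialized.
@dataclass
class PublicProfileDTO:
    id: int
    display_name: str

def to_public_profile(record):
    # Build the DTO by explicit field pick; extra keys in the record
    # (password_hash, email, ...) simply have nowhere to go.
    return PublicProfileDTO(id=record["id"], display_name=record["display_name"])

record = {"id": 3, "display_name": "Kim",
          "password_hash": "x", "email": "k@example.com"}
dto = to_public_profile(record)
```

The bypass risk mentioned above remains: a handler that returns the raw `record` instead of the DTO skips the protection entirely, which is why consistency matters more than the tool itself.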