Python Properties, Getters, Setters, and Descriptors

Here's a situation you've almost certainly run into. You're building a class, things are going well, and then you realize you need to add a little bit of logic around one of your attributes. Maybe you want to validate that an age is never negative, or you want to make sure a price is always a number, or you want to automatically update something else whenever a value changes. So you do what seems natural: you write a get_something() method and a set_something() method. Problem solved, right?
Kind of. It works, sure. But now every time someone uses your class, they have to call .get_age() to read the age and .set_age(31) to change it. That's not very Pythonic. More importantly, if you started with a plain attribute (person.age = 31) and later decided you needed validation, you'd have to change every single place in your codebase that reads or writes that attribute. That's a maintenance nightmare.
Python saw this coming and designed a solution that lets you have it both ways. You get the clean, simple attribute-access syntax that makes Python so pleasant to write, and you get the full power of methods running behind the scenes to validate, compute, cache, or do anything else you need. This isn't a hack or a workaround, it's a core part of how Python is designed to work.
The mechanism has two layers. The first layer is the @property decorator, which handles the common cases elegantly and with almost no extra code. The second layer is the descriptor protocol, which is the deeper machinery that @property itself is built on. Understanding both layers doesn't just make you a better Python programmer, it gives you genuine insight into how Python's attribute system actually works under the hood, which pays dividends every time you read Python code, debug a tricky problem, or design a clean API.
In this article we're going to work through both layers. We'll start with @property and get comfortable with the getter, setter, and deleter pattern. Then we'll look at real-world use cases like validation, computed values, and lazy-loading. After that we'll go deeper into the descriptor protocol, understand the difference between data and non-data descriptors, and see why things like @classmethod and @staticmethod are also just descriptors. By the end, the hidden layer won't be hidden anymore.
You've probably written code that looked something like this:
class Person:
    def __init__(self, name, age):
        self._age = age
        self.name = name

    def get_age(self):
        return self._age

    def set_age(self, value):
        if value < 0:
            raise ValueError("Age cannot be negative")
        self._age = value

person = Person("Alice", 30)
print(person.get_age())  # 30
person.set_age(31)       # Good
person.set_age(-5)       # ValueError

It works. You've got validation. But there's a Python way that's way cleaner, and it doesn't require you to call .get_age() and .set_age() like you're writing Java in 2005.
The problem we're solving: You want attribute-like access (person.age) but with the control of methods (validation, computed values, side effects).
Enter properties, the @property decorator, and the descriptor protocol. These tools let you write code that looks simple on the surface but does powerful stuff underneath.
Table of Contents
- Why Properties Over Direct Access
- The Simple Fix: @property
- Understanding the Three Pieces: Getter, Setter, Deleter
- Validation Without the Setter Overhead
- Computed and Cached Properties
- Why Not Just Use __setattr__?
- When Properties Aren't Enough: The Descriptor Protocol
- Descriptor Protocol Deep Dive
- Data vs. Non-Data Descriptors
- The Descriptor Protocol: How Methods and Properties Work
- Properties vs. Descriptors: Which One Should You Use?
- Common Property Mistakes
- Real-World Patterns: Lazy-Loaded Properties
- Type-Checked Properties with Descriptors
- Summary
Why Properties Over Direct Access
Before we dive into the mechanics, let's really nail down why this matters so you can make the right design decisions when you're building your own classes.
The naive approach is to just make all your attributes public and let people do whatever they want. It's simple and direct. But the moment your requirements change, and they always do, you have a problem. Say you started with account.balance as a plain attribute, and three months into your project you realize you need to log every read of the balance for auditing purposes. Or you need to validate that the balance is never set to a non-numeric value. With a plain attribute, you now have to go back and change your interface. Everyone using your class has to update their code too.
The Java-style getter/setter approach solves that brittleness but creates a different problem: clunky usage. Nobody wants to write account.get_balance() every time they need to check a balance. It's verbose, it's repetitive, and it makes your Python code look like it has an identity crisis.
Properties give you the best of both worlds. The interface stays clean, you still write account.balance to read and account.balance = value to set, but you have complete control over what happens when someone does that. You can add validation later without changing any calling code. You can make an attribute read-only just by not defining a setter. You can make an attribute computed on-the-fly without anyone knowing the difference. This is what people mean when they talk about encapsulation in Python: not hiding data behind complex access patterns, but controlling the interface while keeping the usage natural.
There's another benefit that's easy to overlook: computed properties. Sometimes "attributes" aren't really stored values at all, they're derived from other values. A Rectangle doesn't need to store its area; it can compute it from width and height. A Temperature object can provide both Celsius and Fahrenheit without storing either redundantly. Properties let you expose these computed values as if they were plain attributes, which means callers don't need to know or care whether a value is stored or calculated. That's clean API design.
The Simple Fix: @property
Let's rewrite that same code using @property:
class Person:
    def __init__(self, name, age):
        self._age = age
        self.name = name

    @property
    def age(self):
        return self._age

    @age.setter
    def age(self, value):
        if value < 0:
            raise ValueError("Age cannot be negative")
        self._age = value

person = Person("Alice", 30)
print(person.age)  # 30, looks like an attribute
person.age = 31    # Setter runs automatically
person.age = -5    # ValueError

What changed? You're now reading and assigning to .age directly, like it's a normal attribute. But behind the scenes, the @property decorator intercepts both reads and writes. No explicit method calls. No awkward naming.
This is the hidden layer at work: when you read person.age, Python calls the method decorated with @property. When you write person.age = 31, Python calls the method decorated with @age.setter. It's magic disguised as simplicity.
Notice that we still store the actual value in self._age with the underscore prefix. That underscore is a convention in Python that means "this is an internal implementation detail, please don't access it directly." The underscore doesn't prevent access (Python doesn't have truly private attributes), but it communicates your intent to other developers. The public-facing interface is person.age, and that's what goes through your controlled getter and setter. Anyone who bypasses that by reading person._age directly is explicitly choosing to ignore the interface contract, and that's on them.
Understanding the Three Pieces: Getter, Setter, Deleter
Properties come in three flavors. Let's see them all:
class Temperature:
    def __init__(self, celsius):
        self._celsius = celsius

    # The GETTER: what happens when you READ the property
    @property
    def celsius(self):
        return self._celsius

    # The SETTER: what happens when you ASSIGN to the property
    @celsius.setter
    def celsius(self, value):
        if value < -273.15:
            raise ValueError("Below absolute zero")
        self._celsius = value

    # The DELETER: what happens when you DELETE the property
    @celsius.deleter
    def celsius(self):
        print(f"Deleting temperature value: {self._celsius}")
        self._celsius = None

    # BONUS: computed property (no setter needed)
    @property
    def fahrenheit(self):
        return (self._celsius * 9/5) + 32

temp = Temperature(0)
print(temp.celsius)     # 0
print(temp.fahrenheit)  # 32.0
temp.celsius = 100      # Setter validates
print(temp.fahrenheit)  # 212.0
del temp.celsius        # Deleter runs
print(temp.celsius)     # None

The naming convention is important here: all three methods share the same name (celsius), and the decorators are what distinguish their roles. Python looks at the decorator chain to know which function handles which operation. When you define @celsius.setter, you're calling the .setter() method on the property object that was created by the first @property decorator, and that returns a new property object with both the getter and setter configured.
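The decorator syntax is sugar over the built-in property class, so it can help to see the same wiring done by hand. As a sketch, here is the celsius property built by calling property() directly with the getter, setter, and deleter functions:

```python
class Temperature:
    def __init__(self, celsius):
        self._celsius = celsius

    def _get_celsius(self):
        return self._celsius

    def _set_celsius(self, value):
        if value < -273.15:
            raise ValueError("Below absolute zero")
        self._celsius = value

    def _del_celsius(self):
        self._celsius = None

    # Equivalent to stacking @property / @celsius.setter / @celsius.deleter
    celsius = property(_get_celsius, _set_celsius, _del_celsius)

temp = Temperature(25)
temp.celsius = 30
print(temp.celsius)  # 30
```

Same behavior, just more verbose, which is exactly why the decorator form won out for everyday use.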
The pieces explained:
- The getter (@property): Computes or returns a value. Runs when you read the attribute.
- The setter (@<name>.setter): Validates and stores a value. Runs when you assign to the attribute.
- The deleter (@<name>.deleter): Handles cleanup. Runs when you delete the attribute.
Not every property needs all three. A read-only property (no setter) is totally valid. A computed property like fahrenheit doesn't touch internal state, it calculates on the fly.
The fahrenheit property here is a great example of the elegance this enables. It has no backing _fahrenheit variable. It has no setter, because if you want to change the temperature you set celsius directly. Any time you read temp.fahrenheit, Python runs the conversion calculation and returns the result. The caller just sees a clean attribute, they have no idea (and don't need to know) whether it's stored or computed.
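Read-only behavior falls out for free: if a property has no setter, assigning to it raises AttributeError. A minimal sketch:

```python
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def radius(self):
        return self._radius  # getter only -- no setter defined

c = Circle(2)
print(c.radius)  # 2
try:
    c.radius = 5  # no setter exists, so this raises AttributeError
except AttributeError as e:
    print("read-only:", e)
```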
Validation Without the Setter Overhead
Here's where properties shine in real code. You want users to set an attribute, but only if it passes your rules:
class BankAccount:
    def __init__(self, balance):
        self._balance = balance

    @property
    def balance(self):
        """Public read-only view of the balance."""
        return self._balance

    @property
    def is_overdrawn(self):
        """Computed property: no internal state needed."""
        return self._balance < 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("Deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount <= 0:
            raise ValueError("Withdrawal must be positive")
        self._balance -= amount

account = BankAccount(100)
print(account.balance)       # 100
print(account.is_overdrawn)  # False
account.deposit(50)
print(account.balance)       # 150
account.withdraw(200)
print(account.balance)       # -50
print(account.is_overdrawn)  # True

Why this pattern rocks: The balance is read-only (@property only, no setter). Users can't accidentally write to it, they must use the controlled methods deposit() and withdraw(). That's interface design. That's safety.
The is_overdrawn property is also a nice touch. It's just balance < 0 behind the scenes, but exposing it as a property rather than making callers write account.balance < 0 everywhere serves two purposes: it reads more naturally in code, and if your definition of "overdrawn" ever changes (say, you add a credit limit), you only need to update one place.
Computed and Cached Properties
Sometimes you need to calculate a value once and keep it around. Other times you want it fresh every read. Properties let you choose:
import time

class DataProcessor:
    def __init__(self, raw_data):
        self.raw_data = raw_data
        self._processed = None
        self._processed_time = None

    @property
    def processed(self):
        """Lazy-loaded, cached property."""
        if self._processed is None:
            print("Processing data (first time only)...")
            self._processed = [x * 2 for x in self.raw_data]
            self._processed_time = time.time()
        return self._processed

    @property
    def age_seconds(self):
        """Fresh computation every read."""
        if self._processed_time is None:
            return None
        return time.time() - self._processed_time

processor = DataProcessor([1, 2, 3, 4, 5])
print(processor.processed)   # Processing data (first time only)...
                             # [2, 4, 6, 8, 10]
time.sleep(2)
print(processor.age_seconds) # ~2.0
print(processor.processed)   # [2, 4, 6, 8, 10], no recompute

The hidden layer here: processed caches its result. The first read triggers the work. Subsequent reads return the cached value. Meanwhile, age_seconds computes fresh every time because it depends on the current time.
This is crucial for performance. If processing is expensive (database queries, API calls, heavy math), you cache. If the value changes constantly, you compute fresh.
The lazy-loading pattern is worth calling out specifically because it comes up constantly in real applications. The idea is that you defer expensive work until the moment it's actually needed. If your DataProcessor object is created but never actually has its processed property accessed, you've saved the computation entirely. If it's accessed multiple times, you only pay the cost once. This is especially valuable in frameworks where objects might be created in bulk and only a subset actually gets used.
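Since Python 3.8 the standard library ships this exact pattern as functools.cached_property: the wrapped method runs once, the result is stored on the instance, and later reads skip the call entirely. A minimal sketch:

```python
from functools import cached_property

class DataProcessor:
    def __init__(self, raw_data):
        self.raw_data = raw_data

    @cached_property
    def processed(self):
        # Runs only on the first access; the result is then
        # stored in the instance's __dict__ under 'processed'.
        print("Processing data (first time only)...")
        return [x * 2 for x in self.raw_data]

p = DataProcessor([1, 2, 3])
print(p.processed)  # first access: prints the message, then [2, 4, 6]
print(p.processed)  # cached: no message this time
del p.processed     # clears the cache; the next access recomputes
```

Worth noting: cached_property works precisely because it is a non-data descriptor (it defines __get__ but not __set__), so the value it stashes in the instance __dict__ shadows it on subsequent reads. That distinction between data and non-data descriptors is covered later in this article.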
Why Not Just Use __setattr__?
Before we move on to the deeper machinery, let me address a question that sometimes comes up: why not just skip all this and use __setattr__ to intercept all attribute assignments? Or use slots? The short answer is that properties give you surgical precision, you decide exactly which attributes get special behavior and which are plain. That makes your code easier to read and reason about, because readers can see at a glance which attributes are "managed." A class covered in @property decorators tells a clear story. A class that overrides __setattr__ requires readers to carefully trace what happens for every assignment everywhere.
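For contrast, here's a sketch of the __setattr__ approach. Every assignment on the instance, including the ones inside __init__, funnels through a single hook, which is exactly why it's harder to reason about:

```python
class Intercepted:
    def __init__(self, name, age):
        self.name = name  # even these assignments go through __setattr__
        self.age = age

    def __setattr__(self, attr, value):
        # ALL attribute writes land here; validation for different
        # attributes piles up in one method instead of living next
        # to the attribute it guards.
        if attr == "age" and value < 0:
            raise ValueError("Age cannot be negative")
        super().__setattr__(attr, value)

p = Intercepted("Alice", 30)
p.age = 31    # intercepted and validated
print(p.age)  # 31
```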
When Properties Aren't Enough: The Descriptor Protocol
Properties work great for individual attributes. But what if you want the same validation logic on multiple attributes?
That's where descriptors come in.
A descriptor is a Python object that implements at least one of these methods:
- __get__(self, instance, owner), called when you read the attribute
- __set__(self, instance, value), called when you assign to the attribute
- __delete__(self, instance), called when you delete the attribute
Properties are actually descriptors under the hood. But you can create custom descriptors for reusable behavior.
class PositiveInteger:
    """Descriptor that only allows positive integers."""

    def __init__(self, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__.get(self.name, None)

    def __set__(self, instance, value):
        if not isinstance(value, int):
            raise TypeError(f"{self.name} must be an integer")
        if value <= 0:
            raise ValueError(f"{self.name} must be positive")
        instance.__dict__[self.name] = value

    def __delete__(self, instance):
        del instance.__dict__[self.name]

class Product:
    price = PositiveInteger('price')
    quantity = PositiveInteger('quantity')

    def __init__(self, name, price, quantity):
        self.name = name
        self.price = price
        self.quantity = quantity

product = Product("Laptop", 999, 5)
print(product.price)        # 999
product.price = 1299        # Valid
print(product.price)        # 1299
product.price = -100        # ValueError: price must be positive
product.price = "expensive" # TypeError: price must be an integer

What's happening: Instead of writing the same @property logic for both price and quantity, we created a reusable PositiveInteger descriptor. When you access product.price, Python calls __get__. When you assign product.price = 1299, Python calls __set__.
The descriptor lives on the class, not the instance. It manages multiple attributes across multiple instances. This is powerful for shared validation logic.
Pay attention to that if instance is None: return self check in __get__. This handles the case where someone accesses the descriptor through the class itself rather than through an instance, like Product.price as opposed to product.price. When accessed through the class, instance will be None, and the convention is to return the descriptor object itself. This is how you can still inspect the descriptor if you need to, and it's a pattern you'll see in virtually every descriptor implementation.
Also notice how we store values: instance.__dict__[self.name] = value. We're writing directly to the instance's dictionary rather than going through normal attribute setting. This is necessary because if we used setattr(instance, self.name, value) instead, Python would call our descriptor's __set__ again, creating infinite recursion. Writing directly to __dict__ bypasses the descriptor protocol, which is exactly what we want here.
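One more refinement: since Python 3.6, you can drop the repeated name argument entirely. The descriptor protocol includes a __set_name__ hook that Python calls automatically at class-creation time, passing in the attribute name. A sketch of PositiveInteger rewritten to use it:

```python
class PositiveInteger:
    """Same validation, but the attribute name is captured automatically."""

    def __set_name__(self, owner, name):
        # Called once while the owning class body is being built,
        # e.g. name == 'price' for `price = PositiveInteger()`.
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__.get(self.name)

    def __set__(self, instance, value):
        if not isinstance(value, int):
            raise TypeError(f"{self.name} must be an integer")
        if value <= 0:
            raise ValueError(f"{self.name} must be positive")
        instance.__dict__[self.name] = value

class Product:
    price = PositiveInteger()     # no 'price' string needed
    quantity = PositiveInteger()

    def __init__(self, name, price, quantity):
        self.name = name
        self.price = price
        self.quantity = quantity

product = Product("Laptop", 999, 5)
print(product.price)  # 999
```

This removes the repetition of writing each name twice, and it's how most modern descriptor code is written.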
Descriptor Protocol Deep Dive
The descriptor protocol is the mechanism Python uses to turn attribute access into method calls. It's not just a feature, it's a fundamental part of how Python's object model works, and once you understand it, a lot of Python's behavior that seemed like "magic" starts making perfect sense.
When you access obj.attr, Python doesn't just look up a value. It follows a lookup chain. First it checks the class (and its parent classes via the MRO) for a data descriptor with that name. If found, it calls that descriptor's __get__. If not, it checks the instance's __dict__. If found there, it returns that. If not, it checks the class for a non-data descriptor (calling its __get__) or a plain class attribute. If nothing is found anywhere, Python falls back to __getattr__ if the class defines one, and otherwise raises AttributeError.
This lookup order is precise and deliberate. Data descriptors take highest priority because they're designed to maintain control, you're explicitly saying "this attribute should always go through my logic." The instance dict sits in the middle, which is where most regular attributes live. Non-data descriptors come last, because they're meant to provide default behavior that instances can override.
Understanding this chain also explains a subtle footgun: if you accidentally create an instance attribute with the same name as a property, with a plain attribute it would shadow the property, but with a data descriptor (which properties are) it won't, the descriptor wins. That's usually what you want, but it can be surprising the first time you encounter it.
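You can watch that priority directly. In this sketch we smuggle a value into the instance __dict__ under the same name as a property; the property, being a data descriptor, still wins on every read:

```python
class C:
    @property
    def x(self):
        return "from the property"

c = C()
c.__dict__['x'] = "from the instance dict"  # bypass normal assignment

print(c.x)              # from the property -- the data descriptor wins
print(c.__dict__['x'])  # from the instance dict -- the value exists, but c.x never sees it
```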
The three __get__ arguments also tell a story. self is the descriptor object itself. instance is the object the attribute was accessed on (or None if accessed through the class). owner is always the class. You almost always need instance to know whose data you're working with, and owner when you need class-level information or to handle the None instance case properly.
Data vs. Non-Data Descriptors
There's a subtle but important distinction.
A data descriptor has a __set__ method (and/or __delete__). It intercepts both reads and writes.
A non-data descriptor has only a __get__ method. It handles reads, but instance attributes can shadow it.
class NonDataDescriptor:
    def __get__(self, instance, owner):
        print("Getting value")
        return 42

class DataDescriptor:
    def __get__(self, instance, owner):
        print("Getting value")
        return 42

    def __set__(self, instance, value):
        print(f"Setting value to {value}")

class Example:
    non_data = NonDataDescriptor()
    data = DataDescriptor()

ex = Example()

# Non-data descriptor
print(ex.non_data)  # Getting value
                    # 42
ex.non_data = 99    # Straight to instance dict
print(ex.non_data)  # 99, shadowed the descriptor!

# Data descriptor
print(ex.data)  # Getting value
                # 42
ex.data = 99    # Setting value to 99
                # Descriptor intercepts it
print(ex.data)  # Getting value
                # 42, descriptor still controls it

Why this matters: Data descriptors take priority over instance __dict__. Non-data descriptors can be shadowed by instance attributes. For validation (where you want control), use data descriptors.
This distinction has practical consequences when you're designing your own descriptors. If you want your descriptor to have complete control, if you want to ensure that nobody can just do instance.attr = something and bypass your logic, you need __set__. Even if your __set__ doesn't actually do anything, just having it defined makes your descriptor a data descriptor and gives it priority over the instance dictionary. On the other hand, if you're building something like a method (where it makes sense for instances to override class-level behavior), a non-data descriptor is the right choice.
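Even a __set__ that does nothing but raise is enough to claim data-descriptor priority. As a sketch, that gives you a class-level read-only attribute that instances can't shadow or overwrite:

```python
class ReadOnly:
    """Data descriptor: __set__ exists only to forbid writes."""

    def __init__(self, value):
        self.value = value

    def __get__(self, instance, owner):
        return self.value

    def __set__(self, instance, value):
        raise AttributeError("attribute is read-only")

class Config:
    version = ReadOnly("1.0")

c = Config()
print(c.version)  # 1.0
try:
    c.version = "2.0"  # intercepted: the instance dict is never written
except AttributeError as e:
    print(e)
```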
The Descriptor Protocol: How Methods and Properties Work
You know what's mind-bending? @classmethod, @staticmethod, and functions themselves are all descriptors.
class Example:
    @classmethod
    def class_method(cls):
        return "Called on class"

    @staticmethod
    def static_method():
        return "No self or cls"

    def regular_method(self):
        return "Called on instance"

ex = Example()

# Functions are descriptors
print(type(Example.__dict__['regular_method'])) # <class 'function'>
print(type(ex.regular_method))                  # <class 'method'>
print(Example.regular_method)                   # <function Example.regular_method...>
print(ex.regular_method)                        # <bound method Example.regular_method...>

The hidden layer revealed: When you read ex.regular_method, Python finds the function in the class, then calls its __get__ method. That __get__ binds the instance to the method, creating a bound method. When you read Example.regular_method, the function's __get__ receives None as the instance and simply returns the function itself.
Properties, classmethods, staticmethods, they all leverage the descriptor protocol. It's one unified mechanism that powers attribute access across Python.
This is one of those insights that makes you see Python differently once it clicks. You've been using descriptors your entire Python career without knowing it. Every method call, every @classmethod, every @staticmethod, they all go through the same __get__ mechanism. The reason self gets automatically passed when you call an instance method is because the function object's __get__ creates a bound method that has the instance baked in. Python isn't doing anything special for methods; it's just using the same descriptor protocol that's available to all Python objects.
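You can invoke that machinery by hand. In this sketch we call __get__ on a plain function ourselves, producing the same bound method Python would create for an attribute lookup:

```python
def greet(self):
    return f"Hello, {self.name}"

class Person:
    def __init__(self, name):
        self.name = name

p = Person("Alice")

# Functions implement __get__; calling it manually binds the instance,
# which is exactly what happens behind the scenes when a method lives
# on a class and you access it through an instance.
bound = greet.__get__(p, Person)
print(bound())  # Hello, Alice
```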
Properties vs. Descriptors: Which One Should You Use?
Here's the decision tree:
Use @property when:
- You're managing a single attribute with custom logic
- You want simple read, write, or delete behavior
- The code lives on a single class
- You want clean, readable syntax
class Rectangle:
    def __init__(self, width, height):
        self._width = width
        self._height = height

    @property
    def area(self):
        return self._width * self._height

Use a descriptor when:
- You have the same validation across multiple attributes
- You want reusable, pluggable validation logic
- Different classes need the same behavior
- You're building a framework or library
class TypedProperty:
    def __init__(self, name, type_):
        self.name = name
        self.type_ = type_

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__.get(self.name)

    def __set__(self, instance, value):
        if not isinstance(value, self.type_):
            raise TypeError(f"{self.name} must be {self.type_.__name__}")
        instance.__dict__[self.name] = value

class User:
    name = TypedProperty('name', str)
    age = TypedProperty('age', int)

In practice: Start with @property. If you're copy-pasting the same logic multiple times, extract it into a descriptor.
The upgrade path from property to descriptor is clean. If you have validation in one @property setter and you need that same validation in three more classes, that's your signal to extract it. The descriptor version is a bit more code to write, but it pays back immediately in reduced duplication and easier maintenance. Change the validation in one place, and it applies everywhere the descriptor is used.
Common Property Mistakes
Let me walk you through the mistakes I see most often so you can avoid them.
The first and most classic mistake is calling a property with parentheses. Properties look like methods when you define them (they're decorated functions), but they behave like attributes when you use them. So person.age is right, and person.age() will give you a TypeError because you'd be trying to call an integer. If you're getting "object is not callable" errors on what looks like a simple attribute access, check whether it's a property that you're accidentally calling.
The second mistake is forgetting the underscore on the backing variable. If you write a property named age and store the value in self.age instead of self._age, you'll get infinite recursion: reading self.age in the getter triggers the getter again, forever. Always use a different name for the backing store, and the underscore-prefix convention is the standard way to signal that it's internal.
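A sketch of that bug, so it's recognizable when you hit it:

```python
class Broken:
    @property
    def age(self):
        return self.age  # BUG: this re-enters the getter, not a stored value

b = Broken()
try:
    b.age
except RecursionError:
    print("RecursionError: the getter kept calling itself")
```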
The third mistake is defining a setter without a getter, or in the wrong order. The getter must come first, it creates the property object that the setter and deleter methods are attached to. If you try to define @age.setter before @property def age, Python won't know what age is yet and you'll get a NameError.
The fourth mistake is making properties do too much work. Properties are called on every access, so if you put an expensive database query in your getter and then access that property in a tight loop, you're going to have a bad time. The lazy-loading cache pattern we saw earlier is the right solution when access might be expensive, compute once, cache the result, return the cached version on subsequent reads.
The fifth mistake is writing a setter that silently ignores invalid values instead of raising an error. If someone sets an age to a string, you want to know immediately with a clear error message, not three function calls later when you try to do arithmetic on it and get a confusing TypeError. Fail fast, fail loudly.
Real-World Patterns: Lazy-Loaded Properties
Here's a common pattern: you have expensive-to-load data (database record, API response, file read). You don't want to load it until someone asks for it.
class User:
    def __init__(self, user_id):
        self.user_id = user_id
        self._profile = None  # Cache

    @property
    def profile(self):
        """Load profile lazily on first access."""
        if self._profile is None:
            # Simulate expensive operation
            print(f"Loading profile for user {self.user_id}...")
            self._profile = {
                'name': 'Alice',
                'email': 'alice@example.com',
                'bio': 'Python enthusiast'
            }
        return self._profile

    @profile.deleter
    def profile(self):
        """Clear cache when deleted."""
        print("Profile cache cleared")
        self._profile = None

user = User(123)
print(user.profile)  # Loading profile...
print(user.profile)  # No load, cached
del user.profile
print(user.profile)  # Loading profile... (reloaded)

This pattern saves bandwidth and time. You load what you need, when you need it.
The deleter is particularly useful here for cache invalidation. If you know the underlying data has changed, maybe the user just updated their profile through the API, you can do del user.profile to clear the cache, and the next access will re-fetch fresh data. This is a clean pattern because the cache management logic lives entirely in the property, and callers just use natural Python syntax to interact with it.
In larger applications, this same pattern scales nicely to ORM models (where related objects are lazily fetched from the database), configuration objects (where file parsing happens on first access), and connection pools (where connections are established on demand). The interface stays the same regardless of what's happening behind the scenes.
Type-Checked Properties with Descriptors
Here's another real pattern: you want to ensure a property always has the right type.
class TypeChecker:
    def __init__(self, name, expected_type):
        self.name = name
        self.expected_type = expected_type

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__.get(self.name)

    def __set__(self, instance, value):
        if not isinstance(value, self.expected_type):
            raise TypeError(
                f"{self.name} must be {self.expected_type.__name__}, "
                f"got {type(value).__name__}"
            )
        instance.__dict__[self.name] = value

class Config:
    debug = TypeChecker('debug', bool)
    port = TypeChecker('port', int)
    host = TypeChecker('host', str)

    def __init__(self, debug, port, host):
        self.debug = debug
        self.port = port
        self.host = host

config = Config(True, 8000, "localhost")
print(config.debug, config.port, config.host)  # True 8000 localhost
config.port = 9000    # Good
config.port = "9000"  # TypeError

This catches type errors immediately, at assignment time. No silent failures. No weird bugs three layers deep.
The error message quality is worth thinking about. The TypeChecker here reports both what the attribute should be and what it actually got: "port must be int, got str." That's the kind of error message that helps people fix bugs in thirty seconds instead of thirty minutes. When you're building validation into descriptors or properties, investing in clear, actionable error messages is always worth it. Future you, debugging at 11pm, will be grateful.
This descriptor approach also shows how you can build mini type systems in Python without reaching for third-party libraries. For simple cases, a TypeChecker descriptor can be more transparent and flexible than, say, using dataclasses with type annotations (which still don't enforce types at runtime by default unless you add validation manually). It's not the right choice for every situation, but knowing you can do it gives you options.
Summary
Properties and descriptors are Python's answer to the getter/setter problem. They let you write code that looks simple (person.age = 31) but does powerful stuff underneath.
The journey we took in this article moved from the practical to the fundamental. We started with @property as the everyday tool, the thing you'll reach for when you need a bit of control over how an attribute works. We saw how the getter, setter, and deleter pattern handles the common cases cleanly, and we worked through real patterns like validation, computed values, read-only access, and lazy-loading caches. These patterns aren't theoretical, they show up in production Python code constantly.
Then we went deeper. We looked at descriptors and saw how they generalize the property concept into something reusable across multiple attributes and multiple classes. We understood the difference between data and non-data descriptors and why that distinction controls who wins when there's a conflict between the descriptor and the instance dictionary. And we saw how the descriptor protocol is the same mechanism that powers methods, @classmethod, and @staticmethod, which means understanding descriptors isn't just knowing a niche feature, it's understanding something fundamental about Python.
@property is your go-to: clean, readable, perfect for single-attribute logic.
Descriptors are the power move: reusable, composable, perfect for frameworks and shared validation.
And the real magic? The descriptor protocol ties it all together. Properties, methods, classmethods, staticmethods, they're all descriptors. Understanding this protocol gives you insight into how Python really works.
Start with properties. Learn their three faces: getter, setter, deleter. Use them for computed values, lazy-loading, and validation. Avoid the common mistakes, don't call properties with parentheses, always use a differently-named backing variable, put the getter before the setter. When you find yourself repeating the same logic across multiple attributes or classes, graduate to descriptors. You'll write less code, make it easier to change, and gain the kind of deep control that marks mature Python design.
The hidden layer is no longer hidden. You see the mechanism. You can build on it.