Context Managers and contextlib in Python

You've probably written code that opens a file, processes it, and needs to close it, no matter what happens in between. Or maybe you've needed to acquire a database connection, run some queries, and release it safely. These are classic resource management problems, and Python gives you an elegant solution: context managers.
If you've ever used a with statement, you've already benefited from context managers. But understanding how they work under the hood? That's where things get powerful. Today, we're diving deep into context managers, the contextlib module, and how to build them yourself. By the end, you'll know how to write cleaner, safer, more maintainable code.
Table of Contents
- The Problem: Resource Leaks and Error Safety
- The `with` Statement: A Closer Look
- Implementing Custom Context Managers
- The `@contextlib.contextmanager` Decorator
- `contextlib.suppress`: Clean Exception Handling
- `contextlib.ExitStack`: Dynamic Context Management
- Async Context Managers: The Modern Frontier
- Practical Patterns: Real-World Examples
- Pattern 1: Database Transactions
- Pattern 2: Temporary File Operations
- Pattern 3: Timing and Performance Monitoring
- Pattern 4: Mock Patching and Testing
- Thread-Safe Resource Management
- Common Pitfalls and How to Avoid Them
- Why Context Managers Matter: A Philosophy
- Key Takeaways
The Problem: Resource Leaks and Error Safety
Let's start with why context managers matter. This is about a fundamental challenge in programming: resources. A file handle, a database connection, a network socket, a lock, these aren't infinite. If you don't return them to the system when you're done, you create a resource leak.
Imagine you're working with files the naive way:
```python
f = open('data.txt', 'r')
content = f.read()
f.close()
```

This works fine when nothing goes wrong. The file opens, you read it, you close it. Simple. But here's the problem: what if an exception happens during the read() call? Say the file is corrupted, or the disk has an I/O error. An exception fires, and execution jumps to the exception handler. Your f.close() line never gets executed. The file remains open. The operating system still thinks you're using that file descriptor. If this happens thousands of times in a long-running application, say, a web server processing requests, you'll eventually run out of file handles. The system can't open any more files. Your application crashes.
You might write defensive code to prevent this:
```python
f = open('data.txt', 'r')
try:
    content = f.read()
finally:
    f.close()
```

Better. The finally block runs whether or not an exception occurred, so your file always closes. But notice what happened: we added extra lines of boilerplate to do a simple task. Imagine managing three files, or a database connection and a cache lock simultaneously. The try/finally nesting becomes unmaintainable.
This is exactly the problem context managers solve. With a context manager, you get guaranteed cleanup, automatically, without the ceremony:
```python
with open('data.txt', 'r') as f:
    content = f.read()
# File is automatically closed here, even if an exception occurred
```

Clean. Safe. Pythonic. That's the promise of context managers. And this isn't just convenience, it's a change in how you think about resource management. You're saying to Python: "I'm entering a scope where I need this resource. Please guarantee you clean it up when I leave."
The with Statement: A Closer Look
The with statement is syntactic sugar for a specific protocol. It's not a language hack, it's a designed pattern that Python implements at the interpreter level. Understanding what happens under the hood will change how you use context managers.
When you write:
```python
with expression as variable:
    # Do something with variable
    pass
```

Python executes a precise sequence of steps. First, it evaluates the expression to get an object. That object needs to implement two special methods: __enter__ and __exit__. Those methods form the context manager protocol.
Here's what happens step-by-step:
- Python evaluates `expression` and gets an object (let's call it `ctx`)
- Python calls `ctx.__enter__()` and stores the result in `variable`
- Python executes the block under the `with` statement
- When the block exits, whether normally, via `break`/`continue`/`return`, or via an exception, Python guarantees that `ctx.__exit__()` is called
The key word is guarantees. This is not a suggestion. It happens even if your code raises an exception. It happens even if you return from inside the block. Python's exception handling system makes sure that exit handlers run.
The magic, really, is in the __enter__ and __exit__ methods. These two methods form the contract. __enter__ acquires the resource (opens the file, connects to the database, acquires the lock). __exit__ releases it. That's the whole pattern.
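The steps above can be sketched in plain Python. This is a simplified illustration, not exactly what the interpreter emits (real `with` passes the live exception info to `__exit__`), using a small made-up `Managed` class:

```python
class Managed:
    """A minimal context manager used to demonstrate the protocol."""
    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append("enter")
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.events.append("exit")
        return False  # don't suppress exceptions

# Roughly what `with Managed() as variable: ...` does under the hood:
ctx = Managed()
variable = ctx.__enter__()
try:
    variable.events.append("body")   # the indented block
finally:
    ctx.__exit__(None, None, None)   # simplified: real code passes exception info

print(ctx.events)  # ['enter', 'body', 'exit']
```

Running either form, the `with` statement or the manual expansion, produces the same enter/body/exit sequence.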
Implementing Custom Context Managers
Now let's build our own context manager. This is where you'll start to see the real power. When you implement the protocol yourself, you control exactly what happens on entry and exit.
Say you're managing a database connection. You need to open it when you enter the block, and close it when you exit:
```python
class FakeConnection:
    """Stand-in for a real driver connection, so close() works."""
    status = "connected"

    def close(self):
        self.status = "closed"

class DatabaseConnection:
    def __init__(self, db_url):
        self.db_url = db_url
        self.connection = None

    def __enter__(self):
        print(f"Connecting to {self.db_url}")
        self.connection = self._connect()
        return self.connection

    def __exit__(self, exc_type, exc_val, exc_tb):
        print("Closing connection")
        if self.connection:
            self.connection.close()
        return False  # Don't suppress exceptions

    def _connect(self):
        # Simulate connecting; a real implementation would use a driver
        return FakeConnection()
```

Let's trace through what happens when you use it:
```python
with DatabaseConnection("postgres://localhost/mydb") as conn:
    print(f"Using connection: {conn}")
    # Do database work here
# Connection closes automatically
```

When Python encounters the with statement, it instantiates DatabaseConnection, calls __enter__(), stores the returned connection in conn, executes your block, then calls __exit__(). Even if your block raises an exception, __exit__() still runs. That's the guarantee.
Now, here's the critical part: __exit__ receives three parameters that tell you what happened:
- `exc_type`: The exception class (if one was raised; `None` otherwise)
- `exc_val`: The exception instance with the error message
- `exc_tb`: The traceback object (helpful for logging)
These parameters let you decide how to respond. If no exception occurred, all three are None. If an exception did occur, you have all the information you need to decide what to do about it.
The return value of __exit__ is crucial. If it returns True, the exception is suppressed, it's swallowed, and execution continues after the with block. If it returns False (or implicitly returns None), the exception propagates up to the caller. This is your way of saying, "This is an error I can handle" or "This is an error I can't handle, pass it up."
Here's a practical example that shows this in action:
```python
class ErrorHandler:
    def __init__(self, error_message="An error occurred"):
        self.error_message = error_message

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is not None:
            print(f"{self.error_message}: {exc_val}")
            return True  # Suppress the exception
        return False

# Usage
with ErrorHandler("Database error"):
    result = 1 / 0  # This would normally crash
    print(result)
print("We're still running!")  # This executes because the exception was suppressed
```

Output:

```
Database error: division by zero
We're still running!
```
Notice what happened. The ZeroDivisionError was raised inside the with block. Normally, this would crash the program. But because __exit__ returned True, the exception was suppressed, caught and logged, and execution continued. This is powerful, and dangerous. You should only suppress exceptions you know how to handle. If you suppress an exception you don't understand, you're hiding bugs.
Why does this matter? Because in real-world code, you often want to distinguish between expected failures (like a file not existing) and unexpected failures (like a permissions error). A context manager lets you handle each appropriately.
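As a small sketch of that distinction, here is a hypothetical context manager (the class name is made up) that suppresses only FileNotFoundError, an expected failure, while letting everything else, such as PermissionError, propagate:

```python
class IgnoreMissingFile:
    """Suppress FileNotFoundError only; all other exceptions propagate."""
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Suppress only the failure we expect and know is harmless
        return exc_type is not None and issubclass(exc_type, FileNotFoundError)

with IgnoreMissingFile():
    open('definitely_missing.txt')  # FileNotFoundError is swallowed
print("Still running")
```

A PermissionError raised inside the same block would not match the check, so it would propagate to the caller as usual.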
The @contextlib.contextmanager Decorator
Writing classes is fine if you have complex state to manage. But for simpler cases, Python offers a shortcut: turn a generator function into a context manager with the @contextlib.contextmanager decorator. This is where context managers become truly elegant.
The idea is simple but powerful. Instead of writing a class with __enter__ and __exit__, you write a generator function. The yield statement marks the boundary between setup and teardown. Everything before yield is setup (like acquiring a resource). Everything after yield is cleanup (like releasing the resource). The decorator handles the rest.
```python
from contextlib import contextmanager

@contextmanager
def database_connection(db_url):
    print(f"Connecting to {db_url}")
    conn = {"status": "connected"}
    try:
        yield conn  # This becomes the 'as' variable
    finally:
        print("Closing connection")
        conn["status"] = "closed"

with database_connection("postgres://localhost/mydb") as conn:
    print(f"Using: {conn}")
```

Let's understand what's happening here. When Python executes the with statement, it calls the decorated function. The function runs up to the yield statement (this is the setup phase). The value yielded becomes the variable after as. The with block executes. Then the code after yield runs (this is the cleanup phase). If an exception occurs, it's raised at the yield point, which means the except and finally blocks can catch and handle it.
How the decorator transforms this:
- Code before `yield` becomes `__enter__`
- The value yielded becomes the `as` variable
- Code after `yield` becomes `__exit__`
- Exceptions in the `with` block are raised at the `yield` point
- The `finally` block guarantees cleanup runs
This is often cleaner than writing a full class, especially for one-off context managers. Let's see a real-world example:
```python
import shutil
import tempfile

@contextmanager
def temporary_directory():
    tmpdir = tempfile.mkdtemp()
    try:
        yield tmpdir
    finally:
        shutil.rmtree(tmpdir)

with temporary_directory() as tmpdir:
    print(f"Working in {tmpdir}")
    # Create files, do stuff
# Directory is automatically deleted
```

This is beautiful. You don't have to manage the complexity of recursive cleanup (like deleting a directory and all its contents). The context manager handles it. If something goes wrong while you're working in the directory, the finally block still runs and cleans up. No orphaned directories left behind.
You can also handle exceptions in the generator. If you want to catch an exception and suppress it (not re-raise), just don't re-raise it:
```python
@contextmanager
def catch_and_log():
    try:
        yield
    except Exception as e:
        print(f"Caught exception: {e}")
        # Don't re-raise (implicitly suppressed)

with catch_and_log():
    raise ValueError("Something went wrong")
print("We recovered!")
```

In this example, when you raise ValueError inside the with block, it gets caught by the except clause in the generator. You log it and don't re-raise it, so the exception is suppressed. Execution continues with print("We recovered!"). This is different in mechanism from a class-based context manager returning True from __exit__: here, you're explicitly handling the exception in a try/except block. The effect is the same, but the code reads more naturally.
contextlib.suppress: Clean Exception Handling
Sometimes you want to suppress specific exceptions without all the ceremony. You know the exception might happen, and you're okay with it. You don't want to log it or handle it specially, you just want to ignore it and move on. That's what contextlib.suppress is for.
```python
from contextlib import suppress

dictionary = {"other_key": "value"}

# Instead of:
try:
    value = dictionary['missing_key']
except KeyError:
    pass

# Write:
with suppress(KeyError):
    value = dictionary['missing_key']
```

This is much cleaner and more readable. You're not hiding bugs; you're being explicit about which exception you're okay with and what you're doing about it (nothing). The code is self-documenting. Anyone reading it knows that you expect a KeyError might happen, and that's fine.
Here's a practical example that shows why this matters:
```python
from contextlib import suppress
import os

# Delete a file if it exists, ignore if it doesn't
with suppress(FileNotFoundError):
    os.remove('possibly_missing_file.txt')
```

Without suppress, you'd write try/except with a pass. That works, but it looks like dead code. With suppress, the intent is crystal clear: "Try to remove this file. If it doesn't exist, that's okay." This is a common pattern in cleanup code. You want to delete something if it's there, and silently succeed if it's not.
You can also suppress multiple exception types. This is useful when an operation might fail in several different ways, all of which you're willing to accept:
```python
with suppress(KeyError, ValueError, AttributeError):
    result = perform_risky_operation()
```

Now, a word of caution: suppress is convenient, but don't overuse it. If you find yourself suppressing many exceptions, you might be hiding real bugs. The point of exceptions is to alert you to things going wrong. Suppressing exceptions should be intentional and limited to cases where you really know what you're doing.
contextlib.ExitStack: Dynamic Context Management
Here's a scenario: you need to open multiple files, but you don't know how many until runtime. Maybe the number depends on command-line arguments or the contents of a directory. You can't write a nested with statement because you don't know the depth. This is where ExitStack shines. It's a tool for managing a dynamic number of context managers at runtime.
```python
from contextlib import ExitStack

def process_multiple_files(filenames):
    with ExitStack() as stack:
        files = [stack.enter_context(open(fname)) for fname in filenames]
        for f in files:
            print(f.readline())
    # All files close automatically when exiting the block
```

Let's trace through this. The ExitStack() creates a context manager that can manage other context managers. You call stack.enter_context() to register a new context manager (like open(fname)). The ExitStack tracks all registered context managers and guarantees they're cleaned up in the right order when the with block exits. Even if you register ten context managers in a loop, ExitStack handles them all.
Why is this useful? Because code like this without ExitStack would be a nightmare:
```python
# Don't do this, it's unmaintainable
with open(filenames[0]) as f1:
    with open(filenames[1]) as f2:
        with open(filenames[2]) as f3:
            # ...
            pass
```

ExitStack is powerful because it lets you:
- Dynamically register cleanup functions at runtime
- Manage a variable number of context managers
- Handle complex nested scenarios without nesting nightmares
- Register raw cleanup functions without full context managers
Here's a more advanced example that shows the power:
```python
from contextlib import ExitStack, contextmanager

@contextmanager
def verbose_context(name):
    print(f"Entering {name}")
    try:
        yield name
    finally:
        print(f"Exiting {name}")

with ExitStack() as stack:
    contexts = [
        stack.enter_context(verbose_context(f"Context {i}"))
        for i in range(3)
    ]
    print(f"Active contexts: {contexts}")
```

Output:

```
Entering Context 0
Entering Context 1
Entering Context 2
Active contexts: ['Context 0', 'Context 1', 'Context 2']
Exiting Context 2
Exiting Context 1
Exiting Context 0
```
Notice the order: contexts exit in reverse order (Context 2, then 1, then 0). This is intentional and important. When you nest context managers, they're cleaned up in reverse order. The most recently acquired resource is released first. This matters when resources depend on each other. If Context 1 needs Context 0 to still be alive, releasing Context 0 first would break Context 1. ExitStack respects this dependency order automatically.
You can also use callback() to register arbitrary cleanup functions, not just full context managers:
from contextlib import ExitStack
def cleanup_resource(resource_name):
print(f"Cleaning up {resource_name}")
with ExitStack() as stack:
stack.callback(cleanup_resource, "resource_a")
stack.callback(cleanup_resource, "resource_b")
print("Working...")
# Output:
# Working...
# Cleaning up resource_b
# Cleaning up resource_aHere, you're registering arbitrary cleanup functions. When you exit the with block, the callbacks run in reverse order (resource_b, then resource_a). This is useful when you need cleanup logic that doesn't fit neatly into a context manager. Maybe you're allocating memory with a C library that needs a cleanup function, not a full context manager. stack.callback() handles that.
Async Context Managers: The Modern Frontier
Here's where things get really interesting. Python supports async context managers for asynchronous code. As async programming becomes more prevalent in modern Python (especially with web frameworks like FastAPI and async database libraries), async context managers are increasingly important.
The pattern is the same as synchronous context managers, but with two differences: use async with instead of with, and use __aenter__ and __aexit__ instead of __enter__ and __exit__. The async keyword marks that the methods need to await asynchronous operations.
```python
class AsyncDatabaseConnection:
    def __init__(self, db_url):
        self.db_url = db_url

    async def __aenter__(self):
        print(f"Async connecting to {self.db_url}")
        await self._async_connect()
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        print("Async closing connection")
        await self._async_disconnect()
        return False

    async def _async_connect(self):
        # Simulate async connection
        print("Connected (async)")

    async def _async_disconnect(self):
        print("Disconnected (async)")

# Usage with async/await
async def main():
    async with AsyncDatabaseConnection("postgres://localhost/mydb") as conn:
        print("Using async connection")

# asyncio.run(main())
```

The flow is identical to synchronous context managers. __aenter__ runs when you enter the async with block, the block executes, and __aexit__ runs when you exit. But now you can use await to perform asynchronous operations, connecting to a database without blocking, waiting for network I/O, and so on.
The decorator version works too, and it's often cleaner:
```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def async_database_connection(db_url):
    print(f"Async connecting to {db_url}")
    await asyncio.sleep(0.1)  # Simulate async work
    try:
        yield {"status": "connected"}
    finally:
        print("Async closing")

async def main():
    async with async_database_connection("postgres://localhost/mydb") as conn:
        print(f"Using: {conn}")

# asyncio.run(main())
```

This is the async version of the @contextmanager decorator. Setup code runs before yield (and can await), the block executes, and cleanup runs in the finally. The same guarantees apply: cleanup always happens, even if the block raises an exception.
Why does this matter? Because async code is everywhere now. If you're building a modern web application, you're probably using async database drivers, async HTTP clients, and async file I/O. Context managers give you the same safety guarantees in async code that you get in synchronous code. You don't have to manually close database connections or manage cleanup callbacks. The context manager does it for you, and you get that guarantee even when handling concurrent async operations.
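contextlib also ships AsyncExitStack, the async counterpart of ExitStack, for managing a dynamic number of async context managers. A minimal runnable sketch (the resource helper and log list are made up for illustration):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

@asynccontextmanager
async def resource(name, log):
    log.append(f"open {name}")
    try:
        yield name
    finally:
        log.append(f"close {name}")

async def main():
    log = []
    async with AsyncExitStack() as stack:
        # enter_async_context registers async cleanups, LIFO like ExitStack
        names = [await stack.enter_async_context(resource(i, log)) for i in range(2)]
        log.append(f"using {names}")
    return log

print(asyncio.run(main()))
# ['open 0', 'open 1', 'using [0, 1]', 'close 1', 'close 0']
```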
This is your bridge from the context manager patterns we've learned to modern async Python: context managers work beautifully with async/await.
Practical Patterns: Real-World Examples
Now let's look at real-world patterns where context managers shine. These aren't contrived examples, you'll encounter these patterns constantly in professional code.
Pattern 1: Database Transactions
Database transactions are a classic use case. A transaction is an atomic unit of work: either all of it succeeds and commits, or all of it fails and rolls back. Context managers are perfect for this:
```python
@contextmanager
def database_transaction(connection):
    try:
        yield connection
        connection.commit()
        print("Transaction committed")
    except Exception as e:
        connection.rollback()
        print(f"Transaction rolled back: {e}")
        raise

# Usage
with database_transaction(db_conn) as conn:
    conn.execute("INSERT INTO users VALUES (...)")
# If an exception occurs, the transaction rolls back automatically
```

This pattern ensures that if anything goes wrong inside the block (a constraint violation, a network error, a logic bug), the transaction automatically rolls back. You don't have to manually call rollback(). You don't have to worry about accidentally committing partial data. The context manager handles it all. And the raise at the end means the exception still propagates to the caller, so they know something went wrong. This is the correct way to handle transactions: atomic cleanup that respects both success and failure paths.
Pattern 2: Temporary File Operations
Sometimes you need a temporary file for processing, but you want it deleted when you're done:
```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def temporary_file(suffix=''):
    f = tempfile.NamedTemporaryFile(mode='w', suffix=suffix, delete=False)
    try:
        yield f
    finally:
        f.close()
        os.unlink(f.name)

with temporary_file(suffix='.txt') as tmpfile:
    tmpfile.write('temporary data')
    print(f"Writing to {tmpfile.name}")
```

Why not just use Python's tempfile.NamedTemporaryFile with delete=True? Sometimes you need a different cleanup policy, or you need to perform additional cleanup steps. This context manager wraps the file and ensures it's closed and deleted, no matter what happens in the block. If an exception occurs while writing, the file still gets cleaned up.
Pattern 3: Timing and Performance Monitoring
You want to measure how long something takes, but you don't want to litter your code with timing boilerplate:
```python
from contextlib import contextmanager
import time

@contextmanager
def timer(name):
    start = time.time()
    try:
        yield
    finally:
        elapsed = time.time() - start
        print(f"{name} took {elapsed:.4f} seconds")

with timer("Database query"):
    time.sleep(0.5)  # Simulate work
```

Output:

```
Database query took 0.5001 seconds
```
This is elegant. Wrap any code block with timer() and you get timing information automatically. No manual time.time() calls before and after. No risk of forgetting to measure. The context manager guarantees the measurement happens.
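A small variant, sketched here rather than taken from any particular library, records the elapsed time on a mutable object so the caller can use the number instead of only printing it. It also uses time.perf_counter(), which is the better clock for measuring intervals:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed():
    """Yield a dict whose 'elapsed' key is filled in on exit."""
    result = {"elapsed": None}
    start = time.perf_counter()
    try:
        yield result
    finally:
        result["elapsed"] = time.perf_counter() - start

with timed() as t:
    time.sleep(0.05)

print(f"Block took {t['elapsed']:.3f}s")  # roughly 0.05
```

The dict indirection is needed because the block runs before cleanup: the elapsed value doesn't exist yet when yield hands the object to the caller.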
Pattern 4: Mock Patching and Testing
In unit tests, you often need to mock external dependencies:
```python
from contextlib import contextmanager
from unittest.mock import patch

@contextmanager
def mock_external_api():
    with patch('requests.get') as mock_get:
        mock_get.return_value.json.return_value = {"result": "mocked"}
        yield mock_get

# In tests
def test_api_call():
    with mock_external_api() as mock:
        result = my_api_function()
        assert result == {"result": "mocked"}
        mock.assert_called_once()
```

This pattern wraps the mock setup in a context manager. When you enter the block, the mock is active. When you exit, the mock is automatically cleaned up and the real requests.get is restored. This prevents test pollution where a mock from one test accidentally affects another test. The context manager guarantees cleanup.
Thread-Safe Resource Management
When working with threads, context managers become even more critical. Thread synchronization primitives like locks are easy to forget to release, which leads to deadlocks. Context managers enforce the discipline:
```python
from contextlib import contextmanager
import threading

lock = threading.Lock()

@contextmanager
def acquire_lock():
    lock.acquire()
    try:
        yield
    finally:
        lock.release()

# Or, even cleaner:
@contextmanager
def acquire_lock_v2():
    with lock:
        yield

def worker():
    with acquire_lock():
        # Critical section, guaranteed to release the lock
        print(threading.current_thread().name)
```

Why is this important? Imagine a scenario where a thread acquires a lock but an exception occurs before it releases the lock. Without proper cleanup, the lock stays held forever. Other threads waiting for that lock will wait forever too (deadlock). Your entire application hangs. The context manager prevents this. The finally block guarantees the lock is released, exception or not.
In fact, threading.Lock itself is a context manager, so you can actually use with lock: directly:
```python
with lock:
    # Critical section
    do_something()
# Lock is automatically released here
```

This is cleaner than the wrappers above. But if you need additional logic around the lock (like logging, monitoring, or handling specific exceptions), the custom context manager approach gives you that flexibility.
Common Pitfalls and How to Avoid Them
Context managers are powerful, but they have sharp edges. Let me show you the most common mistakes and how to avoid them.
Pitfall 1: Suppressing more than you intended in __exit__
This is subtle and dangerous. The return value of __exit__ decides whether an exception propagates, and a blanket return True swallows every exception, including ones you never anticipated. (Relatedly, __exit__ can itself raise; an exception thrown during cleanup replaces the original one, so keep cleanup code simple and defensive.)
```python
# Bad, silently suppresses exceptions
class BadContext:
    def __enter__(self):
        return self

    def __exit__(self, *args):
        return True  # Oops, all exceptions suppressed

# Good, only suppress what you mean to suppress
class GoodContext:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # issubclass also covers subclasses; `exc_type is KeyError` would miss them
        if exc_type is not None and issubclass(exc_type, KeyError):
            return True  # Only suppress KeyError
        return False
```

The first example silently suppresses all exceptions. If your code raises a ValueError, BadContext will swallow it and pretend nothing happened. Now imagine debugging that: your code fails silently, and you have no idea why. The second example is selective: it only suppresses KeyError, letting other exceptions propagate. This is the right approach. Only suppress the exceptions you specifically know how to handle.
Pitfall 2: Not handling exceptions in generators
When you use @contextmanager, the yield statement is the boundary between setup and cleanup. If an exception occurs in the with block, it's raised at the yield point. If you don't wrap yield in try/finally (or try/except), the cleanup code after it won't run:
```python
# Bad, cleanup won't run if the with block raises
@contextmanager
def bad_context():
    print("Setup")
    yield
    print("Cleanup")  # Won't run if an exception is raised in the with block

# Good, handle exceptions and guarantee cleanup
@contextmanager
def good_context():
    print("Setup")
    try:
        yield
    except Exception as e:
        print(f"Handling {e}")
        raise
    finally:
        print("Cleanup")  # Always runs
```

In the bad version, if an exception occurs in the with block, it's raised at yield and the function exits immediately. The "Cleanup" line never runs. In the good version, the exception is caught and can be handled (or re-raised), and the finally block guarantees cleanup runs no matter what. Always use try/finally around yield if you have cleanup code.
Pitfall 3: Order matters with ExitStack
ExitStack manages multiple context managers. They exit in reverse order (LIFO, Last In, First Out). This is usually what you want, but it can surprise you:
```python
# These exit in reverse order!
with ExitStack() as stack:
    a = stack.enter_context(context_a())
    b = stack.enter_context(context_b())
# b exits first, then a
# If a depends on b, you might have problems
```

If a's cleanup needs b to still be alive, this order is wrong: b is already gone by the time a exits. Register them in the opposite order so the dependency is torn down last. This is rarely a problem in practice (most resources don't have dependencies), but it's worth knowing about.
Additional Pitfall: Re-raising exceptions incorrectly
Sometimes you catch an exception in __exit__ or in the generator, but you don't re-raise it properly. Be careful:
```python
import logging
from contextlib import contextmanager

logger = logging.getLogger(__name__)

class SomeSpecificException(Exception):
    """Placeholder for an exception type you expect and know how to handle."""

# Dangerous, hides the exception
@contextmanager
def dangerous_context():
    try:
        yield
    except Exception:
        pass  # Silently suppresses all exceptions!

# Correct, re-raise to propagate
@contextmanager
def correct_context():
    try:
        yield
    except SomeSpecificException:
        # Handle and don't re-raise, this exception is expected
        pass
    except Exception:
        # Handle and re-raise, we didn't know about this
        logger.error("Unexpected exception", exc_info=True)
        raise
```

The key is intention: if you suppress an exception, you should know why you're doing it. If an exception is unexpected, re-raise it so the caller knows something went wrong.
Why Context Managers Matter: A Philosophy
Before we wrap up, let me explain why context managers are important beyond just being convenient syntax.
Context managers encode a promise: "I will clean up after myself." In a world where code gets more complex, where systems integrate with more external resources, and where bugs cost real money and time, this promise matters. When you see a with statement, you immediately know that whatever resource is being managed will be returned to the system. You don't have to read the entire function to verify this. You don't have to trace through exception handlers. The with statement itself is a guarantee.
This is why Python has a culture of using with statements everywhere. It's not just style; it's reliability. It's saying, "This code is safe. I've thought about what happens when things go wrong, and I've planned for it."
In enterprise code, this matters. Consider a web server handling thousands of requests. Each request opens a database connection. If even 0.1% of them leak (due to an exception), the connection pool steadily drains and eventually runs dry. Then the server crashes. Context managers prevent this. They're not a nice-to-have; they're fundamental to writing production code.
In open-source libraries, this matters too. When you write a library that manages resources, context managers are the expected interface. Users expect to be able to wrap your API in a with statement and know that cleanup happens. Libraries that don't provide context managers are considered less professional.
And in async code, as Python moves toward more concurrent systems, context managers become even more critical. Async/await makes it easy to forget cleanup because control flow is non-linear. A context manager makes cleanup deterministic again.
The lesson: whenever you acquire a resource, immediately think about how it will be released. Wrap it in a context manager, either one that exists in the standard library or one you write yourself. Your future self, and your team, will thank you.
Key Takeaways
You now understand:
✓ How the with statement guarantees resource cleanup via __enter__ and __exit__
✓ How to build custom context managers for your own resources
✓ How to use the @contextlib.contextmanager decorator for generator-based contexts
✓ How contextlib.suppress cleans up exception handling
✓ How ExitStack manages dynamic and nested contexts
✓ How async context managers (__aenter__, __aexit__) work with async code
✓ Real-world patterns: databases, files, timing, mocking
✓ Thread-safe resource management with guaranteed cleanup
Context managers are one of Python's best-kept secrets. They make your code safer, cleaner, and more professional. Whether you're opening files, managing database transactions, or coordinating async operations, context managers are your friend.
Start incorporating them into your code today. You'll write fewer bugs and spend less time debugging resource leaks.