ACID (atomicity, consistency, isolation, durability)
- sumitnagle
- Jun 24
Updated: Jun 27
Databases don’t just store data, they also need to guarantee that the data stays consistent and reliable, even when multiple operations happen together or failures occur. This is where transaction management comes in. To ensure this, databases follow a set of principles called the ACID properties, which define how transactions should behave to maintain integrity in the system.
Atomicity
Atomicity ensures that a transaction is treated as a single, indivisible unit of work. Either all operations within the transaction are completed successfully, or none are applied. If an error occurs during any step of the transaction, all previous operations are rolled back (through mechanisms like rollback logs) to ensure that the database remains unchanged.

For example, imagine we're building a simple banking system and suppose a user wants to transfer 1000 amount from Account A to Account B. Underneath, this operation involves two database queries,
UPDATE accounts SET balance = balance - 1000 WHERE account_id = 'A';
UPDATE accounts SET balance = balance + 1000 WHERE account_id = 'B';
Now, what if the first query succeeds and 1000 is deducted from Account A, but before the second query can run, the database server crashes or a network failure occurs?
Without atomicity, this would leave Account A short by 1000, while Account B never received it, which is an inconsistent, invalid state. Following atomicity, the database wraps both operations inside a transaction. If any step fails, the transaction manager detects the failure, triggers a rollback operation, and all changes made by the transaction are undone, returning both Account A and Account B to their original balances, as if nothing happened.
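To make this concrete, here is a minimal sketch of the transfer wrapped in a transaction (standard SQL, PostgreSQL-style syntax; the accounts table and IDs are the ones from the example above):
BEGIN;
UPDATE accounts SET balance = balance - 1000 WHERE account_id = 'A';
UPDATE accounts SET balance = balance + 1000 WHERE account_id = 'B';
COMMIT;  -- only now do both changes become permanent, together
If anything goes wrong before COMMIT, issuing ROLLBACK (or a crash followed by recovery) undoes both updates as a single unit.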
So how does a database achieve atomicity? Typically through a transaction log (also called a write-ahead log). Before any data modifications are applied to disk, a log entry describing the intended change is written to persistent storage. If a failure occurs mid-transaction, the DBMS can consult this log during recovery to either complete the remaining operations (if possible) or roll back already-performed changes.
Rollback uses undo logging, where before changing a value, its old value is recorded so that it can be restored if needed. Some systems use shadow paging, creating a duplicate page for modifications and only swapping it in when the transaction commits successfully.
Consistency
Consistency ensures that a transaction brings the database from one valid state to another. The integrity of the database is never violated. All predefined rules, constraints, triggers, and relationships must be maintained after a transaction is completed. If a transaction would violate these rules, it’s either rolled back or rejected.
In our banking system, imagine we have a rule, "No account should ever have a negative balance". Now, suppose a user attempts to transfer 10,000 from Account A, which only has 7,000. Inside the transaction, the following happens,
UPDATE accounts SET balance = balance - 10000 WHERE account_id = 'A';
UPDATE accounts SET balance = balance + 10000 WHERE account_id = 'B';
After step 1, Account A would have a balance of -3000, which violates the rule. To preserve consistency, before the transaction can commit, the database checks whether all constraints (like non-negative balances) still hold. In this case it detects the violation and automatically rolls back the changes, so the balances remain as they were before the transaction started.
The system refuses to move from a consistent state (valid balances) to an inconsistent one (negative balance).
Databases enforce consistency through,
Constraints, like primary keys, foreign keys, unique constraints, and check constraints.
Triggers, automatic actions performed when certain conditions are met.
Business rules in application logic, although ideally, integrity rules should also exist at the database level.
Every transaction is checked against these rules before it can commit. If any rule is broken, the transaction is aborted and rolled back.
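As a sketch, the "no negative balance" rule from the example could be declared directly in the schema as a check constraint (PostgreSQL-style syntax; the constraint name balance_non_negative is just an illustrative choice):
ALTER TABLE accounts ADD CONSTRAINT balance_non_negative CHECK (balance >= 0);
With this in place, any transaction that tries to drive a balance below zero is rejected and rolled back by the database itself, regardless of what the application code does.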
Isolation
Isolation ensures that concurrent transactions do not interfere with each other and are executed as if they were happening sequentially. Each transaction should be executed as if it were the only transaction running at that time.
Why is this important? Because in multi-user systems (which is basically every modern database), hundreds or thousands of transactions may be running simultaneously, and without isolation those concurrent transactions could read each other's uncommitted changes, causing unpredictable or incorrect results.
Back to our banking system, imagine two users transferring money at the same time, a transfer of 500 from Account A to B and a transfer of 300 from Account A to C. If both transactions read Account A's balance at the same time and both see 2000, they might both proceed to deduct their respective amounts, not knowing the other is also modifying it. This could result in Account A ending up with 1500 or 1700 instead of 1200, because whichever transaction writes last simply overwrites the other's deduction.
Isolation ensures that these transactions are handled properly, either one transaction waits for the other to finish (serially), or the database uses a clever mechanism (like locking, snapshots, or multi-version concurrency control — MVCC, more on this in later blogs) to avoid conflicts and ensure each transaction gets a consistent view of the data.
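As a sketch of the locking approach (SELECT ... FOR UPDATE is supported by PostgreSQL, MySQL/InnoDB, and others; the amounts and IDs follow the example above), each transfer first locks the row it is about to change:
BEGIN;
SELECT balance FROM accounts WHERE account_id = 'A' FOR UPDATE;  -- row lock: a concurrent transfer from A must wait
UPDATE accounts SET balance = balance - 500 WHERE account_id = 'A';
UPDATE accounts SET balance = balance + 500 WHERE account_id = 'B';
COMMIT;
Because the second transfer cannot read Account A's row until the first commits, it sees 1500 rather than the stale 2000, and the final balance is the correct 1200.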
Isolation prevents issues like dirty reads, non-repeatable reads, and phantom reads.
Databases implement isolation levels to define how strictly transactions are isolated from one another, that is, how and when the changes made by one transaction become visible to other transactions running concurrently. Each level trades protection against these anomalies for concurrency.
Read Uncommitted (Lowest Isolation), transactions can read data that has been modified but not yet committed by other transactions. This means dirty reads, non-repeatable reads, and phantom reads can all happen; if we are fine with that, we can set our isolation level to this. It's the fastest but least reliable.
Read Committed, a transaction can only read data that other transactions have committed. This prevents dirty reads (even if one transaction is updating a row, the update isn't visible to other transactions until it commits, so they keep seeing the old, already-committed data), but non-repeatable reads and phantom reads are still possible.
Repeatable Read, ensures that if a transaction reads a row once, it will see the same value for that row throughout the transaction, typically achieved by holding shared locks on the data it reads (or by reading from a consistent snapshot in MVCC systems). This prevents dirty reads and non-repeatable reads but still allows phantom reads.
Serialisable (Highest Isolation), ensures complete isolation. Transactions behave as if they were executed one after another, with no interference, typically achieved using strict locking or predicate locks; the trade-off is that concurrency drops sharply, since conflicting operations are effectively serialised. This prevents dirty reads, non-repeatable reads, and phantom reads.
Lower isolation levels (like Read Uncommitted) offer higher concurrency but risk data anomalies, whereas higher isolation levels (like Serialisable) ensure strong consistency but reduce concurrency and increase locking overhead.
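As a minimal sketch (PostgreSQL-style syntax; other databases expose the same levels but may require setting them before the transaction starts), the isolation level is chosen per transaction:
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- must be the first statement in the transaction
SELECT balance FROM accounts WHERE account_id = 'A';
-- ... any later re-read of Account A inside this transaction returns the same value ...
COMMIT;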
Durability
Durability ensures that once a transaction is committed, its effects are permanent. Even if there is a system crash, power failure, or error, the changes will not be lost. No power failure, disk fault, or system reboot should be able to undo a committed transaction. This is usually achieved using transaction logs (DBMS use mechanisms like write-ahead logging, where changes are written to a log before being applied to the database) and backups.
Let’s return one last time to our banking system. A user transfers 500 from Account A to Account B. The transaction completes, and the application shows "Transfer successful!". Now imagine that a millisecond later, the server hosting the database crashes hard, sudden power failure, no warning. When the system comes back up, durability ensures that the 500 transfer is still recorded in the database exactly as it was at the time of commit.
To guarantee this, databases typically use write-ahead logs (WAL) or redo logs (some systems also add journaling filesystems or synchronous disk writes for extra safety),
Before making changes to the actual database files, the database first writes a log entry describing the transaction’s operations.
Once this log entry is safely written to disk (not just to memory), the database can then say the transaction is “committed”.
If a crash happens later, during recovery, the database reads this log and reapplies any committed transactions to ensure nothing is lost.
Uncommitted transactions are rolled back. Committed ones are redone if needed.
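To make the commit step concrete, here is a small PostgreSQL-flavoured sketch (synchronous_commit is a PostgreSQL setting; other databases expose similar knobs), showing that COMMIT only returns once the log record is safely on disk:
SET synchronous_commit = on;  -- the default: COMMIT waits for the WAL record to be flushed to disk
BEGIN;
UPDATE accounts SET balance = balance - 500 WHERE account_id = 'A';
UPDATE accounts SET balance = balance + 500 WHERE account_id = 'B';
COMMIT;  -- by the time this returns, the change survives a crash or power failure
Turning synchronous_commit off trades a small window of possible loss for lower commit latency, which is exactly the durability guarantee being relaxed.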