Blockchain Security
Access Control in Smart Contracts: Patterns That Prevent Exploits
TL;DR
Access control bugs are the single most common finding in the smart contract audits I perform. Not reentrancy. Not oracle manipulation. Access control. A missing modifier on a single function can let anyone drain a treasury, mint unlimited tokens, or upgrade a proxy to malicious code. I use OpenZeppelin's AccessControl over Ownable for nearly every project because real protocols need more than one admin role. This article covers every access control pattern I have used in production, shows you the exact bugs I find in audits (anonymized), and gives you Foundry tests that catch these issues before an attacker does.
Why Access Control Is #1 in Audits
In the last two years, I have reviewed over forty smart contract codebases. Access control issues appear in roughly seventy percent of them. That is not a typo. Seven out of ten projects have at least one function that either lacks proper authorization or implements it incorrectly.
The reason is simple: access control is boring. Developers spend their energy on the complex financial math, the AMM curves, the liquidation logic. Then they slap an onlyOwner modifier on the admin functions and call it done. But "done" often means they forgot a function, used the wrong role, or left a backdoor that a single compromised private key can exploit.
Here is what the data looks like from real exploits:
- Ronin Bridge (2022): $625 million stolen because validator keys were compromised. A multi-sig with a higher threshold would have prevented it.
- Wormhole (2022): $320 million lost because a flaw in guardian signature verification let the attacker forge the approvals needed to mint 120,000 wETH.
- Parity Wallet (2017): $280 million frozen permanently because a library contract's initWallet function had no access control — anyone could call it and become the owner.
These are not edge cases. These are among the most expensive smart contract exploits in history, and every single one was an authorization failure. Not a math bug. Not reentrancy. A missing or broken permission check.
When I start a security audit, access control is the first thing I review. I map every external and public function, identify which ones modify state, and verify that each one has appropriate authorization. Before I even look at the business logic, I need to know: who can call what?
Ownable — When It's Enough
OpenZeppelin's Ownable is the simplest access control pattern. One address owns the contract. Functions marked onlyOwner can only be called by that address. That is the entire model.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol";
contract SimpleVault is Ownable {
constructor() Ownable(msg.sender) {}
function withdrawFees(address to, uint256 amount) external onlyOwner {
(bool success,) = to.call{value: amount}("");
require(success, "Transfer failed");
}
function setFeeRate(uint256 newRate) external onlyOwner {
require(newRate <= 1000, "Fee too high"); // max 10%
feeRate = newRate;
}
uint256 public feeRate;
receive() external payable {}
}

Ownable works when your contract genuinely has a single administrative role and no other permissioned operations. I use it for:
- Simple token contracts where only the deployer needs to toggle a pause
- Personal projects and prototypes
- Contracts that will transfer ownership to a multi-sig immediately after deployment
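For that multi-sig handoff in particular, a plain transferOwnership to a mistyped address bricks the contract permanently. A minimal sketch using OpenZeppelin's Ownable2Step, where the new owner must explicitly accept (contract name is illustrative; import paths follow current OpenZeppelin v5 layout — verify against your installed version):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol";
import {Ownable2Step} from "@openzeppelin/contracts/access/Ownable2Step.sol";

contract SafeHandoff is Ownable2Step {
    constructor() Ownable(msg.sender) {}

    // transferOwnership(newOwner) only records a pending owner;
    // nothing changes until newOwner calls acceptOwnership().
    // A typo'd address simply never accepts, and the current
    // owner can overwrite the pending transfer with a new one.
}
```

The two-step dance costs one extra transaction and removes an entire class of irreversible handoff mistakes.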
But Ownable breaks down fast. The moment you need a separate role for minting tokens, a different role for pausing, and another role for upgrading — you are forcing a single address to hold every key. If that key is compromised, everything is compromised. If you need to delegate one responsibility without giving away all of them, Ownable cannot do it.
I reach for Ownable maybe ten percent of the time in production work. The other ninety percent needs something more granular.
AccessControl — Role-Based
OpenZeppelin's AccessControl is what I use on nearly every production contract. It implements role-based access control (RBAC) where you define named roles and assign them to addresses independently.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
import {AccessControl} from "@openzeppelin/contracts/access/AccessControl.sol";
import {Pausable} from "@openzeppelin/contracts/utils/Pausable.sol";
contract TokenVault is AccessControl, Pausable {
bytes32 public constant MINTER_ROLE = keccak256("MINTER_ROLE");
bytes32 public constant PAUSER_ROLE = keccak256("PAUSER_ROLE");
bytes32 public constant TREASURER_ROLE = keccak256("TREASURER_ROLE");
mapping(address => uint256) public balances;
constructor(address admin, address minter, address pauser) {
_grantRole(DEFAULT_ADMIN_ROLE, admin);
_grantRole(MINTER_ROLE, minter);
_grantRole(PAUSER_ROLE, pauser);
_grantRole(TREASURER_ROLE, admin);
}
function mint(address to, uint256 amount) external onlyRole(MINTER_ROLE) whenNotPaused {
balances[to] += amount;
}
function withdrawTreasury(address to, uint256 amount) external onlyRole(TREASURER_ROLE) {
require(amount <= address(this).balance, "Insufficient balance");
(bool success,) = to.call{value: amount}("");
require(success, "Transfer failed");
}
function pause() external onlyRole(PAUSER_ROLE) {
_pause();
}
function unpause() external onlyRole(PAUSER_ROLE) {
_unpause();
}
receive() external payable {}
}

The key concepts:
- Roles are `bytes32` constants — hashed from readable strings. This avoids string comparison gas costs.
- `DEFAULT_ADMIN_ROLE` is the role admin for all roles by default. The admin of a role can grant and revoke that role for any address.
- Each role is independent — the minter cannot pause, the pauser cannot mint, and compromising one key does not compromise the others.
- Multiple addresses can hold the same role — you can have three minters if your protocol needs it.
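When a mid-tier role should manage another role without holding full admin power, the internal `_setRoleAdmin` rewires the hierarchy. A hedged sketch (contract and role names here are illustrative, not from the vault above):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import {AccessControl} from "@openzeppelin/contracts/access/AccessControl.sol";

contract TieredRoles is AccessControl {
    bytes32 public constant MANAGER_ROLE = keccak256("MANAGER_ROLE");
    bytes32 public constant OPERATOR_ROLE = keccak256("OPERATOR_ROLE");

    constructor(address admin, address manager) {
        _grantRole(DEFAULT_ADMIN_ROLE, admin);
        _grantRole(MANAGER_ROLE, manager);
        // Managers can grant/revoke OPERATOR_ROLE, but only
        // DEFAULT_ADMIN_ROLE can grant/revoke MANAGER_ROLE itself
        _setRoleAdmin(OPERATOR_ROLE, MANAGER_ROLE);
    }
}
```

The key property: an operator key compromise lets the manager rotate it out, while a manager key compromise still cannot touch the admin tier.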
I set up the role hierarchy carefully on every project. The DEFAULT_ADMIN_ROLE should be held by a multi-sig or governance contract, never a single EOA in production. I have seen too many projects where the deployer's hot wallet holds admin rights months after launch.
One pattern I enforce in every audit: the deployer should renounce DEFAULT_ADMIN_ROLE after transferring it to the multi-sig. If they forget, there are two admin addresses — and one of them is likely a developer's laptop.
// Post-deployment script
vault.grantRole(DEFAULT_ADMIN_ROLE, multiSigAddress);
vault.renounceRole(DEFAULT_ADMIN_ROLE, deployerAddress);

Custom Modifiers
Sometimes you need access control logic that goes beyond simple role checks. Maybe authorization depends on the contract's state, a time window, or a combination of conditions. Custom modifiers handle this cleanly.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
contract GovernedTreasury {
address public governor;
uint256 public proposalDeadline;
bool public emergencyMode;
mapping(address => bool) public guardians;
modifier onlyGovernor() {
require(msg.sender == governor, "Not governor");
_;
}
modifier onlyGuardian() {
require(guardians[msg.sender], "Not guardian");
_;
}
modifier onlyDuringVoting() {
require(block.timestamp <= proposalDeadline, "Voting ended");
_;
}
modifier notInEmergency() {
require(!emergencyMode, "Emergency mode active");
_;
}
// Compound modifier — governor can only execute when voting is closed
// and system is not in emergency mode
function executeProposal(
address target,
bytes calldata data
) external onlyGovernor notInEmergency {
require(block.timestamp > proposalDeadline, "Voting still active");
(bool success,) = target.call(data);
require(success, "Execution failed");
}
// Guardians can trigger emergency mode anytime
function triggerEmergency() external onlyGuardian {
emergencyMode = true;
}
}

A few rules I follow with custom modifiers:
- Keep modifiers pure checks — no state changes inside modifiers. State changes belong in the function body.
- Order modifiers so the cheapest check runs first — an unauthorized call then reverts before paying for the more expensive checks.
- Name them with `only` or `when`/`not` — `onlyGovernor`, `whenNotPaused`, `notInEmergency`. This makes the function signature self-documenting.
- Never use `tx.origin` — always `msg.sender`. Using `tx.origin` for authentication is a well-known vulnerability that allows phishing attacks through intermediary contracts.
Multi-Sig for Admin Functions
A single private key controlling a DeFi protocol with millions in TVL is not security — it is a countdown to an exploit. Multi-signature wallets require multiple parties to approve a transaction before it executes.
I require multi-sig for every production deployment I audit. The standard is Gnosis Safe (now Safe), and the minimum threshold I recommend depends on the number of signers:
| Signers | Minimum Threshold | My Recommendation |
|---|---|---|
| 3 | 2-of-3 | 2-of-3 |
| 5 | 3-of-5 | 3-of-5 |
| 7 | 4-of-7 | 5-of-7 |
| 9 | 5-of-9 | 6-of-9 |
The threshold should be high enough that no single compromised signer can execute, but low enough that the protocol does not grind to a halt if one signer loses their key.
Here is how I structure multi-sig ownership in a contract:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
import {AccessControl} from "@openzeppelin/contracts/access/AccessControl.sol";
contract ProtocolTreasury is AccessControl {
bytes32 public constant EXECUTOR_ROLE = keccak256("EXECUTOR_ROLE");
// The multi-sig is the only admin
constructor(address multiSig) {
_grantRole(DEFAULT_ADMIN_ROLE, multiSig);
_grantRole(EXECUTOR_ROLE, multiSig);
// Deployer gets NOTHING — no lingering admin access
}
function withdrawFunds(
address token,
address to,
uint256 amount
) external onlyRole(EXECUTOR_ROLE) {
// The multi-sig must approve this transaction
// with the required number of signatures
require(IERC20(token).transfer(to, amount), "Transfer failed");
// For tokens that do not return a bool (e.g. USDT), use
// OpenZeppelin's SafeERC20 instead of a raw transfer
}
}
interface IERC20 {
function transfer(address to, uint256 amount) external returns (bool);
}

The critical point: the deployer receives no roles at all. The multi-sig address is set in the constructor and receives all administrative power from the start. I have seen too many deployment scripts where the deployer grants themselves admin "temporarily" and never renounces it.
Timelock Controllers
Even with multi-sig protection, critical parameter changes should not execute immediately. A timelock forces a delay between proposing a change and executing it, giving users time to review the change and exit the protocol if they disagree.
OpenZeppelin's TimelockController handles this:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
import {TimelockController} from "@openzeppelin/contracts/governance/TimelockController.sol";
import {AccessControl} from "@openzeppelin/contracts/access/AccessControl.sol";
contract ProtocolConfig is AccessControl {
bytes32 public constant TIMELOCK_ROLE = keccak256("TIMELOCK_ROLE");
uint256 public interestRate;
uint256 public liquidationThreshold;
address public oracle;
constructor(address timelock) {
_grantRole(DEFAULT_ADMIN_ROLE, timelock);
_grantRole(TIMELOCK_ROLE, timelock);
}
// These functions can only be called by the timelock
// which enforces a minimum delay after proposal
function setInterestRate(uint256 newRate) external onlyRole(TIMELOCK_ROLE) {
require(newRate <= 5000, "Rate exceeds 50%");
interestRate = newRate;
}
function setLiquidationThreshold(uint256 newThreshold) external onlyRole(TIMELOCK_ROLE) {
require(newThreshold >= 10000 && newThreshold <= 20000, "Invalid threshold");
liquidationThreshold = newThreshold;
}
function setOracle(address newOracle) external onlyRole(TIMELOCK_ROLE) {
require(newOracle != address(0), "Zero address");
oracle = newOracle;
}
}

The deployment sets up the timelock with a minimum delay — I recommend 48 hours for mainnet and 24 hours for L2 deployments:
// Deployment script
address[] memory proposers = new address[](1);
proposers[0] = multiSigAddress;
address[] memory executors = new address[](1);
executors[0] = multiSigAddress;
TimelockController timelock = new TimelockController(
48 hours, // minimum delay
proposers, // who can propose
executors, // who can execute
address(0) // no admin — timelock governs itself
);

The flow is: multi-sig proposes a change, 48 hours pass, multi-sig executes the change. During those 48 hours, anyone monitoring the timelock contract can see the pending change on-chain. If the change is malicious — say, setting the oracle to a manipulated price feed — users can withdraw their funds before it executes.
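That propose-wait-execute flow maps onto two TimelockController calls. A sketch of what the multi-sig actually submits (the `config` target and the salt value are illustrative):

```solidity
// Multi-sig transaction 1: schedule the change
bytes memory payload = abi.encodeCall(ProtocolConfig.setInterestRate, (300));
bytes32 salt = keccak256("rate-change-proposal-1");
timelock.schedule(
    address(config), // target contract
    0,               // ETH value
    payload,         // calldata to forward
    bytes32(0),      // predecessor (no dependency)
    salt,
    48 hours         // must be >= the timelock's minDelay
);

// Multi-sig transaction 2, at least 48 hours later:
timelock.execute(address(config), 0, payload, bytes32(0), salt);
```

Between the two transactions, the operation id is queryable on-chain, which is exactly what monitoring bots and cautious users watch.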
I flag every protocol that has admin-controlled parameters without a timelock. Parameters that affect user funds — interest rates, collateral ratios, fee percentages, oracle addresses — should never change instantly.
Common Access Control Bugs with Code
These are the patterns I see repeatedly in audits. Every single one has led to a real exploit somewhere in DeFi.
Bug 1: Missing Access Control on State-Changing Functions
This is the most common bug. A function that should be restricted is left public or external without any modifier.
Vulnerable:
// VULNERABLE — anyone can call this
function setFeeRecipient(address newRecipient) external {
feeRecipient = newRecipient;
}

Fixed:
// FIXED — only admin can change fee recipient
function setFeeRecipient(address newRecipient) external onlyRole(DEFAULT_ADMIN_ROLE) {
require(newRecipient != address(0), "Zero address");
feeRecipient = newRecipient;
emit FeeRecipientUpdated(newRecipient);
}

Bug 2: Unprotected Initialize Functions
Proxy patterns use initialize instead of constructor. If initialize has no protection, anyone can call it and become the owner.
Vulnerable:
// VULNERABLE — anyone can call initialize and become owner
function initialize(address _token) external {
token = _token;
owner = msg.sender;
}

Fixed:
// FIXED — initializer modifier prevents re-initialization
import {Initializable} from "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";
function initialize(address _token, address _owner) external initializer {
require(_token != address(0), "Zero token");
require(_owner != address(0), "Zero owner");
token = _token;
__AccessControl_init();
_grantRole(DEFAULT_ADMIN_ROLE, _owner);
}

(For upgradeable contracts, also call _disableInitializers() in the implementation's constructor so the uninitialized implementation contract can never be claimed by an attacker.)

Bug 3: Front-Running Role Changes
Granting or revoking roles without considering transaction ordering allows front-running.
Vulnerable:
// VULNERABLE — malicious minter sees revokeRole in mempool
// and front-runs with a massive mint
function revokeMinter(address minter) external onlyRole(DEFAULT_ADMIN_ROLE) {
revokeRole(MINTER_ROLE, minter);
}

Fixed:
// FIXED — pause minting in a separate, earlier transaction, then revoke.
// Pausing and unpausing inside the SAME transaction does not help:
// the attacker front-runs the whole transaction, not steps within it.
// (Submitting the revoke through a private relay also works.)
function revokeMinter(address minter) external onlyRole(DEFAULT_ADMIN_ROLE) {
require(paused(), "Pause minting first");
revokeRole(MINTER_ROLE, minter);
}

Bug 4: Incorrect Role Admin Hierarchy
Setting up role admins incorrectly can allow privilege escalation.
Vulnerable:
// VULNERABLE — MINTER_ROLE is its own admin
// Any minter can grant minting rights to anyone
constructor() {
_setRoleAdmin(MINTER_ROLE, MINTER_ROLE);
_grantRole(MINTER_ROLE, msg.sender);
}

Fixed:
// FIXED — only DEFAULT_ADMIN_ROLE can manage minters
constructor(address admin) {
_grantRole(DEFAULT_ADMIN_ROLE, admin);
_grantRole(MINTER_ROLE, admin);
// MINTER_ROLE admin defaults to DEFAULT_ADMIN_ROLE
// Only admin can grant/revoke minter status
}

Bug 5: Using tx.origin for Authentication

Vulnerable:
// VULNERABLE — tx.origin can be exploited via phishing
modifier onlyOwner() {
require(tx.origin == owner, "Not owner");
_;
}

Fixed:
// FIXED — msg.sender is the immediate caller
modifier onlyOwner() {
require(msg.sender == owner, "Not owner");
_;
}

An attacker creates a contract that calls your contract. If the owner interacts with the attacker's contract (e.g., claiming an "airdrop"), tx.origin is the owner's address but msg.sender is the attacker's contract. The attacker passes the tx.origin check and executes privileged functions.
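A sketch of the phishing contract makes the failure concrete (all names here are hypothetical, assuming a vault whose ownerWithdraw is guarded only by a tx.origin check):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

interface IVulnerableVault {
    function ownerWithdraw(address to, uint256 amount) external;
}

// Hypothetical phishing contract. The victim (the vault owner) is
// lured into calling claimAirdrop(); inside the nested call,
// tx.origin is still the victim's address, so the vault's
// tx.origin == owner check passes even though msg.sender is
// this contract.
contract AirdropPhish {
    IVulnerableVault public immutable vault;
    address public immutable attacker;

    constructor(IVulnerableVault _vault) {
        vault = _vault;
        attacker = msg.sender;
    }

    function claimAirdrop() external {
        vault.ownerWithdraw(attacker, address(vault).balance);
    }
}
```

With msg.sender-based checks, the same call would fail: msg.sender would be AirdropPhish, never the owner.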
Testing Access Control
Every access control modifier needs a negative test — a test that verifies unauthorized callers get rejected. I write these in Foundry because the vm.expectRevert cheatcode makes it straightforward.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
import {Test, console} from "forge-std/Test.sol";
import {TokenVault} from "../src/TokenVault.sol";
import {IAccessControl} from "@openzeppelin/contracts/access/IAccessControl.sol";
contract AccessControlTest is Test {
TokenVault vault;
address admin = makeAddr("admin");
address minter = makeAddr("minter");
address pauser = makeAddr("pauser");
address attacker = makeAddr("attacker");
function setUp() public {
vault = new TokenVault(admin, minter, pauser);
}
// --- Positive tests: authorized callers succeed ---
function test_minter_can_mint() public {
vm.prank(minter);
vault.mint(address(0xBEEF), 1000);
assertEq(vault.balances(address(0xBEEF)), 1000);
}
function test_pauser_can_pause() public {
vm.prank(pauser);
vault.pause();
assertTrue(vault.paused());
}
// --- Negative tests: unauthorized callers revert ---
function test_attacker_cannot_mint() public {
// Read the role BEFORE vm.prank — vault.MINTER_ROLE() is an
// external view call and would otherwise consume the prank
bytes32 minterRole = vault.MINTER_ROLE();
vm.expectRevert(
abi.encodeWithSelector(
IAccessControl.AccessControlUnauthorizedAccount.selector,
attacker,
minterRole
)
);
vm.prank(attacker);
vault.mint(address(0xBEEF), 1000);
}
function test_minter_cannot_pause() public {
bytes32 pauserRole = vault.PAUSER_ROLE();
vm.expectRevert(
abi.encodeWithSelector(
IAccessControl.AccessControlUnauthorizedAccount.selector,
minter,
pauserRole
)
);
vm.prank(minter);
vault.pause();
}
function test_attacker_cannot_withdraw() public {
vm.deal(address(vault), 10 ether);
bytes32 treasurerRole = vault.TREASURER_ROLE();
vm.expectRevert(
abi.encodeWithSelector(
IAccessControl.AccessControlUnauthorizedAccount.selector,
attacker,
treasurerRole
)
);
vm.prank(attacker);
vault.withdrawTreasury(attacker, 10 ether);
}
// --- Role management tests ---
function test_admin_can_grant_roles() public {
bytes32 minterRole = vault.MINTER_ROLE(); // cache before prank
vm.prank(admin);
vault.grantRole(minterRole, address(0xCAFE));
assertTrue(vault.hasRole(minterRole, address(0xCAFE)));
}
function test_non_admin_cannot_grant_roles() public {
bytes32 minterRole = vault.MINTER_ROLE();
bytes32 adminRole = vault.DEFAULT_ADMIN_ROLE();
vm.expectRevert(
abi.encodeWithSelector(
IAccessControl.AccessControlUnauthorizedAccount.selector,
minter,
adminRole
)
);
vm.prank(minter);
vault.grantRole(minterRole, attacker);
}
// --- Fuzz test: no random address can call protected functions ---
function testFuzz_random_address_cannot_mint(address caller) public {
vm.assume(caller != minter);
vm.prank(caller);
vm.expectRevert();
vault.mint(address(0xBEEF), 1000);
}
function testFuzz_random_address_cannot_withdraw(address caller) public {
vm.assume(!vault.hasRole(vault.TREASURER_ROLE(), caller));
vm.deal(address(vault), 10 ether);
vm.prank(caller);
vm.expectRevert();
vault.withdrawTreasury(caller, 1 ether);
}
}

The fuzz tests are critical. Instead of testing one specific attacker address, Foundry generates hundreds of random addresses (256 runs per test by default, configurable) and verifies that none of them can bypass access control. This catches edge cases that manual tests miss — like the zero address or precompile addresses having unexpected permissions.
My testing checklist for access control:
- Every protected function has a negative test — verify that unauthorized callers revert with the correct error.
- Every role transition is tested — granting, revoking, renouncing. Verify state before and after.
- Role admin hierarchy is tested — verify that only the role admin can grant/revoke each role.
- Fuzz test every protected function — random addresses should never bypass access control.
- Test role interactions — verify that holding one role does not grant privileges of another.
Emergency Pause Mechanisms
Every production contract I audit should have an emergency pause. When an exploit is detected, you need to stop the bleeding immediately — not wait for a 48-hour timelock to expire.
The pause mechanism needs its own role, separate from the general admin. The person who can pause should not necessarily be the person who can change protocol parameters. And critically, pausing should bypass any timelock.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
import {AccessControl} from "@openzeppelin/contracts/access/AccessControl.sol";
import {Pausable} from "@openzeppelin/contracts/utils/Pausable.sol";
contract SecureProtocol is AccessControl, Pausable {
bytes32 public constant GUARDIAN_ROLE = keccak256("GUARDIAN_ROLE");
bytes32 public constant ADMIN_ROLE = keccak256("ADMIN_ROLE");
// Guardians can pause immediately — no timelock
// This is the emergency brake
function emergencyPause() external onlyRole(GUARDIAN_ROLE) {
_pause();
emit EmergencyPauseActivated(msg.sender, block.timestamp);
}
// Only admin can unpause — higher privilege required to resume
function unpause() external onlyRole(ADMIN_ROLE) {
_unpause();
emit ProtocolUnpaused(msg.sender, block.timestamp);
}
// All user-facing functions check pause state
function deposit(uint256 amount) external whenNotPaused {
// deposit logic
}
function withdraw(uint256 amount) external whenNotPaused {
// withdraw logic
}
// Emergency withdrawal bypasses pause — users can always exit
function emergencyWithdraw() external {
uint256 balance = userBalances[msg.sender];
require(balance > 0, "No balance");
userBalances[msg.sender] = 0;
// Users may forfeit pending rewards, but they get their principal
(bool success,) = msg.sender.call{value: balance}("");
require(success, "Transfer failed");
}
mapping(address => uint256) public userBalances;
event EmergencyPauseActivated(address indexed guardian, uint256 timestamp);
event ProtocolUnpaused(address indexed admin, uint256 timestamp);
}

The design principles for emergency mechanisms:
- Pause is low privilege, unpause is high privilege — a guardian (possibly a bot monitoring for anomalies) can trigger the circuit breaker, but only a multi-sig admin can restart the protocol after verifying the threat is resolved.
- Emergency withdrawal always works — even when the protocol is paused, users must be able to withdraw their principal. They may forfeit pending rewards or take a haircut on fees, but their deposited funds should never be locked by a pause.
- Emit events on state changes — monitoring services need to detect pauses immediately. Every pause and unpause should emit an event with the caller and timestamp.
I have seen protocols where the pause function was protected by the same timelock as everything else. That defeats the purpose. If an exploit is draining $10 million per block, waiting 48 hours to pause is not an option.
My Audit Findings — Real Examples Anonymized
These are real findings from security audits I have performed, anonymized to protect the clients. Every one of these was in code that had passed internal review and was heading to production.
Finding 1: Unprotected Price Oracle Update
Severity: Critical
A lending protocol allowed anyone to update the price oracle address. An attacker could point the oracle to a contract that reports manipulated prices, borrow against inflated collateral, and drain the lending pool.
// What I found — CRITICAL
function updatePriceOracle(address newOracle) external {
// No access control at all
priceOracle = newOracle;
}

The fix required adding role-based access control and a timelock for oracle changes, plus emitting events so monitoring bots could alert on unexpected oracle updates.
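The remediated version looked roughly like this (role and event names paraphrased from the fix, not the client's exact code):

```solidity
event PriceOracleUpdated(address indexed oldOracle, address indexed newOracle);

// TIMELOCK_ROLE is held by a TimelockController, so every oracle
// change sits visibly on-chain for the full delay before it lands
function updatePriceOracle(address newOracle) external onlyRole(TIMELOCK_ROLE) {
    require(newOracle != address(0), "Zero address");
    emit PriceOracleUpdated(priceOracle, newOracle);
    priceOracle = newOracle;
}
```

The event fires with both old and new addresses, which is what lets an off-chain monitor diff the change against an allowlist of known-good oracles.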
Finding 2: Deployer Retains Admin After Transfer
Severity: High
A yield aggregator transferred admin to a multi-sig but forgot to renounce the deployer's admin role. The deployer's wallet — a hot wallet used for testing — retained full admin privileges on a contract holding $4 million in user deposits.
// What I found — deployer still had DEFAULT_ADMIN_ROLE
// alongside the multi-sig
constructor() {
_grantRole(DEFAULT_ADMIN_ROLE, msg.sender); // deployer
}
// In deployment script, they added the multi-sig:
// vault.grantRole(DEFAULT_ADMIN_ROLE, multiSig);
// But never called:
// vault.renounceRole(DEFAULT_ADMIN_ROLE, deployer);

I flagged this as high severity because the deployer's hot wallet was a single point of failure. If compromised, the attacker would have full admin access to the protocol — bypassing the multi-sig entirely.
Finding 3: Mint Function Without Access Control
Severity: Critical
An NFT project's minting contract had a public mint function intended for the team allocation that lacked any access control. During the audit, I confirmed that anyone could call it to mint unlimited NFTs without paying.
// What I found — CRITICAL
function teamMint(address to, uint256 quantity) external {
// Was supposed to be onlyOwner
// Developer forgot the modifier
_mint(to, quantity);
}

One missing modifier. That is the difference between a successful NFT launch and a project-ending exploit. The fix was literally adding onlyOwner to the function signature.
Finding 4: Role Admin Allows Privilege Escalation
Severity: Medium
A DAO governance contract set the PROPOSER_ROLE as its own admin. Any proposer could grant proposer rights to any address, including malicious contracts that could spam governance with proposals.
// What I found
constructor() {
_setRoleAdmin(PROPOSER_ROLE, PROPOSER_ROLE);
// Any proposer can now make anyone a proposer
}

The fix was straightforward — remove the custom role admin so that DEFAULT_ADMIN_ROLE (held by the governance timelock) manages proposer assignments.
Finding 5: Emergency Pause Behind Timelock
Severity: Medium
A DeFi protocol routed its pause function through a 72-hour timelock. During an active exploit, the team could not pause the contracts for three days. The protocol was fortunate that the exploit was reported by a white hat before it was used in the wild.
I recommended creating a separate GUARDIAN_ROLE with direct pause capability, bypassing the timelock. The guardian was assigned to a monitoring bot that could trigger the pause within seconds of detecting anomalous transactions.
Key Takeaways
- Use `AccessControl` over `Ownable` for production contracts. Role-based access gives you granular permissions. A compromised minter key should not give access to the treasury.
- Every state-changing function needs an access control check. Map every `external` and `public` function in your contract. If it modifies state, it needs a modifier. No exceptions.
- Multi-sig is mandatory for mainnet admin functions. A single EOA controlling protocol parameters is a critical vulnerability, not a convenience trade-off.
- Timelock critical parameter changes. Users deserve time to review changes that affect their funds. 48 hours minimum on mainnet.
- Emergency pause bypasses the timelock. The circuit breaker must be instant. Assign it to a guardian role with a monitoring bot.
- Deployers must renounce admin after transferring to multi-sig. Check your deployment scripts. Verify on-chain with `hasRole` after deployment.
- Test every protected function with unauthorized callers. If you do not have a negative test for every modifier, you do not have access control tests.
- Never use `tx.origin` for authentication. Always `msg.sender`. This is not debatable.
- Emit events on every role change and admin action. Monitoring depends on it. Silent admin actions are a red flag in any audit.
- Fuzz test access control with random addresses. Manual test cases catch known scenarios. Fuzz tests catch the edge cases you did not think of.
About the Author
I am Uvin Vindula, a Web3 engineer and security auditor based between Sri Lanka and the UK. I build and break smart contracts — designing secure DeFi protocols, auditing codebases before they go to mainnet, and writing about the patterns that separate production-grade contracts from expensive mistakes.
Access control is the first thing I check in every audit, and it is the most common category of findings I report. If your protocol is heading to mainnet and you want a thorough security review, get in touch about an audit.
You can find more of my blockchain security writing at iamuvin.com, follow my work on Twitter/X, or explore my open-source projects on GitHub.