Shrinking Storage in Shared Autonomous Database on OCI
In one of our projects, we had deleted a huge volume of historical data from our Autonomous Database. Naturally, the expectation was simple:
Data is deleted → storage should reduce → cloud cost should go down.
But when we checked OCI metrics, the storage size was still the same. This created confusion in the team and raised an important question: “If the data is gone, why is storage still allocated?” If you have faced this, you are not alone. This is one of the most common and misunderstood behaviours of Shared Autonomous Database on OCI, and it is a scenario I see very often across customers and projects.
First, Let’s Understand How Storage Works in Autonomous Database
One of the biggest advantages of Autonomous Database is automatic storage scaling.
This means:
-> As your data grows → storage automatically increases.
-> You never need to worry about provisioning disks.
-> Performance remains consistent.
However, here is the important part many people don’t realize: When data is deleted, storage does NOT shrink automatically.
Yes — that surprises many people.
Why Storage Does Not Automatically Reduce
This behavior is actually by design. Oracle keeps the allocated storage to ensure:
-> Stable performance
-> No frequent resizing overhead
-> Faster future data growth handling
So when you delete data:
✔ The space becomes free inside the database
❌ But OCI still considers it allocated storage
And that means you continue paying for that storage unless you reclaim it. This is why understanding storage shrinking is very important for cloud cost optimization.
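You can see this gap yourself by comparing the allocated size of the DATA tablespace with the free space inside it. This is a sketch using standard Oracle dictionary views (sizes in GB):

```sql
-- Allocated vs. internally free space in the DATA tablespace (GB)
select (select sum(bytes)/1024/1024/1024 from dba_data_files
        where tablespace_name = 'DATA')  allocated_gb,
       (select sum(bytes)/1024/1024/1024 from dba_free_space
        where tablespace_name = 'DATA')  free_gb
from   dual;
```

A large free_gb relative to allocated_gb is exactly the situation where you are billed for space your data no longer occupies.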
When Should You Shrink Storage?
Based on my real project experience, you should consider shrinking storage after:
- Large data purging activities
- Archiving historical records
- Cleaning up staging tables
- Dropping big tables or partitions
- Post-migration cleanup
- Temporary ETL data removal
Basically, whenever a significant amount of data is removed.
Check Before Shrinking Storage
One important point to understand is that Oracle Autonomous Database does not add multiple datafiles to the DATA tablespace. Instead, it uses a single large datafile (a bigfile tablespace) and automatically increases the size of that datafile as storage grows.
Because of this architecture, shrinking storage can take significant time, since the database must reorganize and reclaim space within a very large datafile.
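You can confirm this from the data dictionary; the BIGFILE column of DBA_TABLESPACES shows whether DATA is a bigfile tablespace:

```sql
select tablespace_name, bigfile
from   dba_tablespaces
where  tablespace_name = 'DATA';
```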
Run the following query to assess the potential storage savings before initiating the shrink operation. The size columns in the output (sm, currsize, savings) are expressed in MB, and the &&blksize substitution variable must hold the database block size.
-- Populate &&blksize with the database block size (SQL*Plus / SQLcl)
column blksize new_value blksize noprint
select value blksize from v$parameter where name = 'db_block_size';

select file_name,
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 )  sm,
       ceil( blocks*&&blksize/1024/1024 )        currsize,
       ceil( blocks*&&blksize/1024/1024 ) -
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 )  savings
from   dba_data_files a,
       ( select file_id, max(block_id+blocks-1) hwm
         from   dba_extents
         where  tablespace_name = 'DATA'
         group by file_id ) b
where  a.file_id = b.file_id(+)
and    a.tablespace_name = 'DATA';
You should proceed with storage shrinking only when the savings column indicates a significant amount of reclaimable space. If the expected savings are minimal, it is not recommended to run this operation, as it is time-consuming and resource-intensive with limited practical benefit.
Steps to Shrink Storage in Autonomous Database
Step 1 — Log in to the OCI Console
Navigate to:
OCI Console → Autonomous Database → Select your database
Step 2 — Open Resource Allocation
Click the More actions drop-down menu and select Manage resource allocation.
Step 3 — Review Storage Details
In the Storage section, review:
"Allocated storage" — Total storage currently reserved
"Approximate used storage" — Actual space consumed by data
You can also run the query below to check space usage and reclaimable space per datafile across all tablespaces.
-- Populate &&blksize with the database block size (SQL*Plus / SQLcl)
column blksize new_value blksize noprint
select value blksize from v$parameter where name = 'db_block_size';

select file_name,
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 )  sm,
       ceil( blocks*&&blksize/1024/1024 )        currsize,
       ceil( blocks*&&blksize/1024/1024 ) -
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 )  savings
from   dba_data_files a,
       ( select file_id, max(block_id+blocks-1) hwm
         from   dba_extents
         group by file_id ) b
where  a.file_id = b.file_id(+);
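If you also want to see which schemas are consuming the space (for example, to decide what to purge next), a simple aggregation over DBA_SEGMENTS works. This is a generic dictionary query, not specific to Autonomous Database:

```sql
-- Space consumed per schema, largest first (GB)
select owner,
       round(sum(bytes)/1024/1024/1024, 2) used_gb
from   dba_segments
group by owner
order by used_gb desc;
```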
If you observe a significant difference between allocated and used storage, you may be able to reclaim space. For example, in my case there was approximately 4 TB of reclaimable storage between allocated and actual usage.
Step 4 — Click the "Shrink" Button
Click Shrink to initiate the storage reduction process.
Preconditions for Shrink Operation
The Shrink option is available only when all of the following conditions are met:
- Storage auto-scaling is enabled
- Allocated storage is greater than base (minimum) storage
- Allocated storage − Used storage > 100 GB
If these conditions are not satisfied and you click Shrink, Autonomous Database displays an “Action unavailable” message.
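The preconditions above can be sketched as a small check. The threshold values mirror the rules listed, while the function and parameter names are my own illustration, not an OCI API:

```python
def shrink_available(auto_scaling_enabled: bool,
                     allocated_gb: float,
                     base_gb: float,
                     used_gb: float) -> bool:
    """Mirror the console's Shrink preconditions:
    auto-scaling on, allocated > base storage, and
    more than 100 GB of reclaimable space."""
    return (auto_scaling_enabled
            and allocated_gb > base_gb
            and (allocated_gb - used_gb) > 100)

# Example: 9 TB allocated, 5 TB used, 1 TB base storage
print(shrink_available(True, 9216, 1024, 5120))  # True: ~4 TB reclaimable
```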
Considerations and Challenges with the Shrinking Process
There are a few important aspects to be aware of before initiating storage shrink:
1. No visible progress indicator
Once the shrinking process starts, there is no direct way in the console or database views to monitor its progress. The only practical approach is to raise an SR with Oracle Support and request periodic updates on the completion percentage.
2. Potentially long execution time
The duration largely depends on the size of the data file.
In my case, the data file was about 9 TB, and the shrink process took more than 28 hours to complete.
NOTE:
👉 The shrink operation internally runs alter table... move online operations, which is why it takes a long time to complete.
👉 Once your data deletion operation is complete, wait at least 1–2 hours before initiating the shrink process. Autonomous Database needs time to recalculate storage usage, and the updated values may take some time to appear in the OCI Console.
Because of this delay, the Shrink operation may occasionally be unavailable or fail immediately after large deletions, as the console has not yet reflected the updated storage metrics.
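For context, the reorganization that the shrink performs is equivalent to an online table move. A manual example on a hypothetical table (SALES_HISTORY is illustrative only) looks like this:

```sql
-- Rebuilds the table compactly, releasing space above the high-water mark,
-- while the table stays available for DML
alter table sales_history move online;
```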
3. ECPU allocation impacts duration
The time required for shrinking also depends on the number of ECPUs allocated to the Autonomous Database. Higher ECPU allocation provides more processing power for the reorganization work.
👉 Therefore, it is recommended to scale up ECPUs before starting the shrink operation.
This helps in two ways:
-> Reduces overall shrink duration
-> Minimizes impact on your ongoing workload
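If you prefer the CLI over the console for the temporary scale-up, the Autonomous Database update command can change the ECPU count. Verify the exact parameter name against your OCI CLI version; the OCID below is a placeholder:

```shell
oci db autonomous-database update \
  --autonomous-database-id ocid1.autonomousdatabase.oc1..example \
  --compute-count 16
```

Scale the ECPUs back down once the shrink has completed to avoid unnecessary compute cost.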
My final conclusion
👉 Storage will NOT shrink automatically
👉 You must reclaim it manually via the console
This small step can save significant cloud costs.
Thanks & Regards,
Chandan Tanwani

