When administering Linux systems I often find myself struggling to track down the culprit after a partition goes full. I normally use du / | sort -nr but on a large filesystem this takes a long time before any results are returned.

Also, this is usually successful in highlighting the worst offender but I've often found myself resorting to du without the sort in more subtle cases and then had to trawl through the output.

I'd prefer a command line solution which relies on standard Linux commands since I have to administer quite a few systems and installing new software is a hassle (especially when out of disk space!)

Stephen Kitt

40 Answers


Try ncdu, an excellent command-line disk usage analyser:

[screenshot: ncdu's interactive disk-usage listing]

  • when i try to ./configure this, it tells me a required header is missing – Alf47 Oct 28 '15 at 13:40
  • Typically, I hate being asked to install something to solve a simple issue, but this is just great. – jds Jul 05 '16 at 18:58
  • Install size is 81k... And it's super easy to use! :-) – TimH - Codidact Feb 02 '17 at 00:06
  • I was looking for a fast way to find what takes up disk space in an ordered way. This tool does it and it also provides sorting and easy navigation. Thank you for the reference. – DNT May 28 '17 at 09:56
  • `sudo apt install ncdu` on ubuntu gets it easily. It's great – Orion Edwards Jul 19 '17 at 22:30
  • You quite probably know which filesystem is short of space. In which case you can use `ncdu -x` to only count files and directories on the same filesystem as the directory being scanned. – Luke Cousins Jul 21 '17 at 11:51
  • best answer. also: `sudo ncdu -rx /` should give a clean read on biggest dirs/files ONLY on root area drive. (`-r` = read-only, `-x` = stay on same filesystem (meaning: do not traverse other filesystem mounts)) – B. Shea Sep 21 '17 at 15:52
  • @Alf47 Required header for what? You list only partial error. You are missing a lib dependency. Maybe try installing the ncurses lib. That seems to be the usual culprit. The info is there in the build output on what your system is missing. see: https://unix.stackexchange.com/a/113493/186861 – B. Shea Sep 21 '17 at 15:58
  • @bshea had a great suggestion, many times on AWS it's only your root filesystem that is small, everything else is an EBS or EFS mount that is huge, so you only need to find and clean the root partition. – dragon788 Oct 19 '17 at 23:01
  • I have so little space that I can't install ncdu – Chris Jun 14 '18 at 16:57
  • Hands down the best. ncdu is an amazing and beautiful tool! – Jay Taylor Oct 19 '18 at 22:32
  • Error, can't install ncdu, E: You don't have enough free space in /var/cache/apt/archives/. :( – Cerin Oct 26 '18 at 19:47
  • Problem is... ran out of disk space so can't install another dependency :) – Blairg23 May 05 '19 at 08:44
  • This is like WinDirStat for Linux users - absolutely perfect for evaluating disk consumption and treating out-of-control scenarios. – bsplosion May 23 '19 at 15:43
  • Pressing r when browsing disk usage refreshes current directory – too Apr 12 '20 at 18:33
  • Best answer imho – ekerner Aug 28 '21 at 07:49
  • no sudo, no problem: `wget -qO- https://dev.yorhel.nl/download/ncdu-linux-x86_64-1.16.tar.gz | tar xvz && ncdu -x` (official builds) – jan-glx Sep 27 '21 at 07:53
  • no space, no problem: `sudo mkdir /ncdu && sudo mount -t tmpfs -o size=500m tmpfs /ncdu && wget -qO- https://dev.yorhel.nl/download/ncdu-linux-x86_64-1.16.tar.gz | tar xvz --directory /ncdu && /ncdu/ncdu -x` – jan-glx Sep 27 '21 at 07:58
  • no space, no sudo, no problem: `wget -qO- https://dev.yorhel.nl/download/ncdu-linux-x86_64-1.16.tar.gz | tar xvz --directory /dev/shm && /dev/shm/ncdu -x` ... urls might change, newer version might be available see here: https://dev.yorhel.nl/ncdu – jan-glx Sep 27 '21 at 08:05

Don't go straight to du /. Use df to find the partition that's hurting you, and then try du commands.

One I like to try is

# U.S.
du -h <dir> | grep '[0-9\.]\+G'
# Others
du -h <dir> | grep '[0-9\,]\+G'

because it prints sizes in "human readable form". Unless you've got really small partitions, grepping for directories in the gigabytes is a pretty good filter for what you want. This will take you some time, but unless you have quotas set up, I think that's just the way it's going to be.

As @jchavannes points out in the comments, the expression can be made more precise if you're finding too many false positives. I incorporated the suggestion, which does improve matters, but false positives remain, so there is a tradeoff: a simpler expression gives worse results; a longer, more complex one gives better results. If too many small directories show up in your output, adjust your regex accordingly. For example,

grep '^\s*[0-9\.]\+G'

is even more accurate (no < 1GB directories will be listed).
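
Putting the pieces together, one way to filter and rank in a single pipeline (a sketch assuming GNU grep and sort; the path is just an example):

```shell
# List only gigabyte-scale entries, largest first; sort -h understands
# the K/M/G suffixes that du -h emits.
du -h /var 2>/dev/null | grep -E '^[0-9.,]+G' | sort -hr | head -n 20
```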

If you do have quotas, you can use

quota -v

to find users that are hogging the disk.

Ben Collins
  • This is very quick, simple and practical – zzapper Oct 29 '12 at 16:43
  • `grep '[0-9]G'` contained a lot of false positives and also omitted any decimals. This worked better for me: `sudo du -h / | grep -P '^[0-9\.]+G'` – jchavannes Aug 14 '14 at 06:09
  • @BenCollins I think you also need the -P flag for Perl regex. – jchavannes Aug 14 '14 at 18:10
  • @jchavannes `-P` is unnecessary for this expression because there's nothing specific to Perl there. Also, `-P` isn't portable to systems that don't have the GNU implementation. – Ben Collins Aug 14 '14 at 18:11
  • Ahh. Well having a carat at the beginning will remove false positives of directories which have a number followed by a G in the name, which I did. – jchavannes Aug 15 '14 at 04:11
  • In case you have really big directories, you'll want `[GT]` instead of just `G` – Vitruvie Mar 28 '15 at 20:20
  • Is there a tool that will continuously monitor disk usage across all directories (lazily) in the filesystem? Something that can be streamed to a web UI? Preferably soft-realtime information. – CMCDragonkai May 07 '15 at 09:00
  • I like to use `du -h | sort -hr | head` – augurar Jun 13 '16 at 18:48

For a first look, use the “summary” view of du:

du -s /*

The effect is to print the size of each of its arguments, i.e. every root folder in the case above.

Furthermore, both GNU du and BSD du can be depth-restricted (but POSIX du cannot!):

  • GNU (Linux, …):

    du --max-depth 3
  • BSD (macOS, …):

    du -d 3

This will limit the output display to depth 3. The calculated and displayed size is still the total of the full depth, of course. But despite this, restricting the display depth drastically speeds up the calculation.

Another helpful option is -h (works on both GNU and BSD but, once again, not on POSIX-only du) for “human-readable” output (i.e. using KiB, MiB etc.).
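
For example, on GNU systems the depth restriction combines nicely with sort -h (a sketch; the path is just an example):

```shell
# Summarize one level deep in human-readable form and rank the result.
du -h --max-depth=1 /var | sort -hr | head
```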

Konrad Rudolph

You can also run the following command using du:

~# du -Pshx /* 2>/dev/null
  • The -s option summarizes, displaying a total for each argument.
  • -h prints sizes in human-readable units (MiB, GiB, etc.).
  • -x = stay on one filesystem (very useful).
  • -P = don't follow symlinks (which could otherwise cause files to be counted twice, for instance).

Be careful with -x: it will not show the /root directory if that is on a different filesystem. In that case, you have to run du -Pshx /root 2>/dev/null to show it (once, I struggled for quite a while before realising that my /root directory had filled up).

Roger Pate

Finding the biggest files on the filesystem is always going to take a long time. By definition you have to traverse the whole filesystem looking for big files. The only solution is probably to run a cron job on all your systems to have the file ready ahead of time.

One other thing: the -x option of du is useful to keep du from following mount points into other filesystems, e.g.:

du -x [path]

The full command I usually run is:

sudo du -xm / | sort -rn > usage.txt

The -m means return results in megabytes, and sort -rn will sort the results largest number first. You can then open usage.txt in an editor, and the biggest folders (starting with /) will be at the top.

  • Thanks for pointing out the `-x` flag! – SamB Jun 02 '10 at 20:55
  • "finding biggest takes long time.." -> Well it depends, but tend to disagree: doesn't take that long with utilities like `ncdu` - at least quicker than `du` or `find` (depending on depth and arguments).. – B. Shea Sep 21 '17 at 15:35
  • since I prefer not to be root, I had to adapt where the file is written : `sudo du -xm / | sort -rn > ~/usage.txt` – Bruno Sep 14 '18 at 06:55

I always use du -sm * | sort -n, which gives you a sorted list of how much the subdirectories of the current working directory use up, in mebibytes.

You can also try Konqueror, which has a "size view" mode, which is similar to what WinDirStat does on Windows: it gives you a visual representation of which files/directories use up most of your space.

Update: on more recent versions, you can also use du -sh * | sort -h which will show human-readable filesizes and sort by those. (numbers will be suffixed with K, M, G, ...)

People looking for an alternative to KDE3's Konqueror file size view may take a look at filelight, though it's not quite as nice.


I use this for the top 25 worst offenders below the current directory

# -S to not include subdir size, sorted and limited to top 25
du -S . | sort -nr | head -25
  • This command did the trick to find a hidden folder that seemed to be increasing in size over time. Thanks! – thegreendroid Jun 20 '13 at 02:24
  • Is this in bytes? – User Sep 17 '14 at 00:12
  • By default, on my system, 'du -S' gives a nice human readable output. You get a plain number of bytes for small files, then a number with a 'KB' or 'MB' suffix for bigger files. – serg10 Sep 17 '14 at 08:48
  • You could do du -Sh to get a human readable output. – Siddhartha Jan 26 '16 at 02:57
  • @Siddhartha If you add `-h`, it will likely change the effect of the `sort -nr` command - meaning the sort will no longer work, and then the `head` command will also no longer work – Clare Macrae Dec 04 '17 at 13:00
  • On Ubuntu, I need to use `-h` to `du` for human readable numbers, as well as `sort -h` for human-numeric sort. The list is sorted in reverse, so either use `tail` or change order. – oarfish Aug 30 '18 at 08:41

At a previous company we used to have a cron job that was run overnight and identified any files over a certain size, e.g.

find / -size +10000k

You may want to be more selective about the directories that you are searching, and watch out for any remotely mounted drives which might go offline.

  • You can use the `-x ` option of find to make sure you don't find files on other devices than the start point of your find command. This fixes the remotely mounted drives issue. – rjmunro Jun 29 '15 at 16:29

I use

du -ch --max-depth=2 .

and I change the max-depth to suit my needs. The "c" option prints a grand total at the end, and the "h" option prints sizes in K, M, or G as appropriate. As others have said, it still scans all the directories, but it limits the output in a way that makes it easier to spot the large ones.


One option would be to run your du/sort command as a cron job, and output to a file, so it's already there when you need it.
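
A minimal sketch of such a crontab entry (the schedule and output path are assumptions; adjust to taste):

```shell
# Nightly at 03:00: rank directories on the root filesystem by size (MB).
0 3 * * * du -xm / 2>/dev/null | sort -rn > /var/tmp/du-report.txt
```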


For the commandline I think the du/sort method is the best. If you're not on a server you should take a look at Baobab - Disk usage analyzer. This program also takes some time to run, but you can easily find the sub directory deep, deep down where all the old Linux ISOs are.

Peter Stuifzand
    It can also scan remote folders via SSH, FTP, SMB and WebDAV. –  Dec 02 '08 at 16:34
  • This is great. Some things just work better with a GUI to visualize them, and this is one of them! I need an X-server on my server anyways for CrashPlan, so it works on that too. – timelmer Jun 25 '16 at 20:46

I'm going to second xdiskusage. But I'm going to add in the note that it is actually a du frontend and can read the du output from a file. So you can run du -ax /home > ~/home-du on your server, scp the file back, and then analyze it graphically. Or pipe it through ssh.


Try feeding the output of du into a simple awk script that checks to see if the size of the directory is larger than some threshold, if so it prints it. You don't have to wait for the entire tree to be traversed before you start getting info (vs. many of the other answers).

For example, the following displays any directories that consume more than about 500 MB.

du -kx / | awk '{ if ($1 > 500000) { print $0} }'

To make the above a little more reusable, you can define a function in your .bashrc (or make it into a standalone script):

dubig() {
    [ -z "$1" ] && echo "usage: dubig sizethreshMB [dir]" && return
    du -kx "${2:-.}" | awk '{ if ($1 > '"$1"'*1024) { print $0 } }'
}

So dubig 200 ~/ looks under the home directory (without following symlinks off device) for directories that use more than 200 MB.
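
A self-contained check of the same threshold idea on a throwaway tree (the directory names here are hypothetical):

```shell
# Build a small tree, then list only directories using more than 1 MiB.
dir=$(mktemp -d)
mkdir -p "$dir/big" "$dir/small"
head -c 3145728 /dev/urandom > "$dir/big/f"   # 3 MiB
head -c 10240 /dev/urandom > "$dir/small/f"   # 10 KiB
du -kx "$dir" | awk '{ if ($1 > 1*1024) print $0 }'
rm -rf "$dir"
```

Only "$dir" itself and "$dir/big" should survive the filter.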

  • It's a pity that a dozen of grep hacks are more upvoted. Oh and `du -k` will make it absolutely certain that du is using KB units – ndemou Nov 23 '16 at 20:05
  • Good idea about the -k. Edited. – Mark Borgerding Nov 24 '16 at 11:16
  • Even simpler and more robust: `du -kx $2 | awk '$1>'$(($1*1024))` (if you specify only a condition aka pattern to awk the default action is `print $0`) – dave_thompson_085 Nov 27 '16 at 11:31
  • Good point @date_thompson_085. That's true for all versions of awk I know of (net/free-BSD & GNU). @mark-borgerding so this means that you can greatly simplify your first example to just `du -kx / | awk '$1 > 500000'` – ndemou Dec 13 '16 at 09:46
  • @mark-borgerding: If you have just a few kBytes left somewhere you can also keep the whole output of du like this `du -kx / | tee /tmp/du.log | awk '$1 > 500000'`. This is very helpful because if your first filtering turns out to be fruitless you can try other values like this `awk '$1 > 200000' /tmp/du.log` or inspect the complete output like this `sort -nr /tmp/du.log|less` without re-scanning the whole filesystem – ndemou Dec 13 '16 at 09:59
  • Regarding the simplification -- I think that kills clarity to save a few characters. – Mark Borgerding Dec 13 '16 at 11:55
  • Regarding saving the whole du output, -- That "few kBytes" could easily be many megabytes if the volume contains millions of files. That seems dangerous under the presumable circumstances. – Mark Borgerding Dec 13 '16 at 12:01

I prefer to use the following to get an overview and drill down from there...

cd /folder_to_check
du -shx */

This will display results with human readable output such as GB, MB. It will also prevent traversing through remote filesystems. The -s option only shows summary of each folder found so you can drill down further if interested in more details of a folder. Keep in mind that this solution will only show folders so you will want to omit the / after the asterisk if you want files too.


Not mentioned here, but you should also check lsof in case of deleted files that are still held open. I had a 5.9GB deleted tmp file from a runaway cronjob.

https://serverfault.com/questions/207100/how-can-i-find-phantom-storage-usage helped me find the owner process of said file (cron), after which I was able to go to /proc/{cron id}/fd/{file handle #}, less the file in question to see the start of the runaway output, resolve that, and then run echo "" > file to free the space and let cron close itself up gracefully.
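
This situation is easy to reproduce; the sketch below (Linux /proc and GNU stat assumed) shows the reclaim trick of truncating through the still-open descriptor rather than the now-gone path:

```shell
#!/bin/sh
# Create a file, keep it open on fd 3, delete it: the space stays in use.
tmp=$(mktemp)
exec 3> "$tmp"
head -c 1048576 /dev/urandom >&3   # 1 MiB written through fd 3
rm "$tmp"                          # deleted, but still held open
fd=/proc/$$/fd/3
echo "before: $(stat -Lc %s "$fd") bytes"
: > "$fd"                          # truncate via the open descriptor
echo "after:  $(stat -Lc %s "$fd") bytes"
exec 3>&-                          # closing the fd releases it for good
```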


It may be worth noting that mc (Midnight Commander, a classic text-mode file manager) by default shows only the size of the directory inodes (usually 4096), but with Ctrl+Space or via the Tools menu you can see the space occupied by the selected directory in human-readable form (e.g., something like 103151M).

For instance, the picture below shows the full size of the vanilla TeX Live distributions of 2018 and 2017, while the 2015 and 2016 versions show only the size of the inode (though they are really close to 5 GB each).

That is, Ctrl+Space must be applied one directory at a time, only for the current directory level, but it is so fast and handy while navigating with mc that you may not need ncdu (which, indeed, is better for this particular purpose). Otherwise, you can also run ncdu from mc, without exiting mc or launching another terminal.



I like the good old xdiskusage as a graphical alternative to du(1).

  • Note this part of the question: "I'd prefer a command line solution which relies on standard Linux commands since..." – ndemou Jul 04 '17 at 20:20

From the terminal, you can get a visual representation of disk usage with dutree.

It is very fast and light because it is implemented in Rust.


$ dutree -h
Usage: dutree [options] <path> [<path>..]

    -d, --depth [DEPTH] show directories up to depth N (def 1)
    -a, --aggr [N[KMG]] aggregate smaller than N B/KiB/MiB/GiB (def 1M)
    -s, --summary       equivalent to -da, or -d1 -a1M
    -u, --usage         report real disk usage instead of file size
    -b, --bytes         print sizes in bytes
    -f, --files-only    skip directories for a fast local overview
    -x, --exclude NAME  exclude matching files or directories
    -H, --no-hidden     exclude hidden files
    -A, --ascii         ASCII characters only, no colors
    -h, --help          show help
    -v, --version       print version number

See all the usage details on the website.


You can use standard tools like find and sort to analyze your disk space usage.

List directories sorted by their size:

find / -mount -type d -exec du -s "{}" \; | sort -n

List files sorted by their size:

find / -mount -printf "%k\t%p\n" | sort -n

For the command line, du (and its options) seems to be the best way. DiskHog looks like it uses du/df info from a cron job too, so Peter's suggestion is probably the best combination of simple and effective.

(FileLight and KDirStat are ideal for GUI.)

Jeff Schaller

I have used this command to find files bigger than 100 MB:

find / -size +100M -exec ls -l {} \;

First, I check the size of directories, like so:

du -sh /var/cache/*/

Here is a tiny app that uses deep sampling to find tumors in any disk or directory. It walks the directory tree twice, once to measure it, and the second time to print out the paths to 20 "random" bytes under the directory.

import os

def walk(sDir, iPass, state, step):
    # state = [n, n1]: n counts bytes seen so far, n1 is the next sample point
    for name in sorted(os.listdir(sDir)):
        sPath = os.path.join(sDir, name)
        if os.path.isdir(sPath) and not os.path.islink(sPath):
            walk(sPath, iPass, state, step)
        elif os.path.isfile(sPath):
            length = os.path.getsize(sPath)
            if iPass == 2:
                # emit a path whenever the running total crosses a sample point
                while state[1] <= state[0] + length:
                    print(sPath)
                    state[1] += step
            state[0] += length

def dscan(root="."):
    state = [0, 0]
    walk(root, 1, state, 0)        # pass 1: measure
    total = state[0]
    print(total)
    step = max(total // 20, 1)     # pass 2: print ~20 evenly spaced samples
    state = [0, step // 2]
    walk(root, 2, state, step)
    print(state[0])

The output looks like this for my Program Files directory:

.\ArcSoft\PhotoStudio 2000\Samples\3.jpg
.\Common Files\Java\Update\Base Images\j2re1.4.2-b28\core1.zip
.\Common Files\Wise Installation Wizard\WISDED53B0BB67C4244AE6AD6FD3C28D1EF_7_0_2_7.MSI
.\Microsoft SQL Server\90\Setup Bootstrap\sqlsval.dll
.\Microsoft Visual Studio\DF98\DOC\TAPI.CHM
.\Microsoft Visual Studio .NET 2003\CompactFrameworkSDK\v1.0.5000\Windows CE\sqlce20sql2ksp1.exe
.\Microsoft Visual Studio .NET 2003\SDK\v1.1\Tool Developers Guide\docs\Partition II Metadata.doc
.\Microsoft Visual Studio .NET 2003\Visual Studio .NET Enterprise Architect 2003 - English\Logs\VSMsiLog0A34.txt
.\Microsoft Visual Studio 8\Microsoft Visual Studio 2005 Professional Edition - ENU\Logs\VSMsiLog1A9E.txt
.\Microsoft Visual Studio 8\SmartDevices\SDK\CompactFramework\2.0\v2.0\WindowsCE\wce500\mipsiv\NETCFv2.wce5.mipsiv.cab
.\Microsoft Visual Studio 8\VC\ce\atlmfc\lib\armv4i\UafxcW.lib
.\Microsoft Visual Studio 8\VC\ce\Dll\mipsii\mfc80ud.pdb
.\Movie Maker\MUI\0409\moviemk.chm
.\TheCompany\TheProduct\docs\TheProduct User's Guide.pdf

It tells me that the directory is 7.9 GB, of which

  • ~15% goes to the Intel Fortran compiler
  • ~15% goes to VS .NET 2003
  • ~20% goes to VS 8

It is simple enough to ask if any of these can be unloaded.

It also tells about file types that are distributed across the file system, but taken together represent an opportunity for space saving:

  • ~15% roughly goes to .cab and .MSI files
  • ~10% roughly goes to logging text files

It shows plenty of other things in there also, that I could probably do without, like "SmartDevices" and "ce" support (~15%).

It does take linear time, but it doesn't have to be done often.

Examples of things it has found:

  • backup copies of DLLs in many saved code repositories, that don't really need to be saved
  • a backup copy of someone's hard drive on the server, under an obscure directory
  • voluminous temporary internet files
  • ancient doc and help files long past being needed

If you know that the large files have been added in the last few days (say, 3), then you can use a find command in conjunction with "ls -ltra" to discover those recently added files:

find /some/dir -type f -mtime -3 -exec ls -lart {} \;

This will give you just the files ("-type f"), not directories; just the files modified within the last 3 days ("-mtime -3"); and will execute "ls -lart" against each file found (the "-exec" part).


To understand disproportionate disk space usage it's often useful to start at the root directory and walk up through some of its largest children.

We can do this by

  • saving the output of du into a file
  • grepping through the result iteratively

That is:

# sum up the size of all files and directories under the root filesystem
du -a -h -x / > disk_usage.txt
# display the size of root items
grep $'\t/[^/]*$' disk_usage.txt

now let's say /usr appears too large

# display the size of /usr items
grep $'\t/usr/[^/]*$' disk_usage.txt

now if /usr/local is suspiciously large

# display the size /usr/local items
grep $'\t/usr/local/[^/]*$' disk_usage.txt

and so on...
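
The iterative grep step can be wrapped in a small helper; `duchildren` is a hypothetical name, and it assumes the dump from above exists as disk_usage.txt in the current directory:

```shell
# Show the immediate children of a given directory from the saved dump,
# biggest first (sort -h pairs with the -h sizes in the dump).
duchildren() {
    tab=$(printf '\t')
    grep "${tab}${1%/}/[^/]*\$" disk_usage.txt | sort -hr
}

duchildren /usr
```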

Alex Jasmin

Still here? Or perhaps this answer has been upvoted...

While there are various graphical tools described in other answers, they don't do much to address the underlying issue of identifying how you may be able to free up space.

I am currently researching the same issue and came across agedu, which reports on access times as well as size. I've not had a chance to play with it yet; it's written by Simon Tatham (you may have heard of PuTTY), so it is probably sensible/reliable.

However, like all the tools listed here, it collects data on demand. Even the most efficient code on the fastest hardware will take time to walk a multi-terabyte filesystem.

  • If you can't use a GUI (like you're on a remote server), `ncdu -e` works nicely. Once the display opens up, use `m` then `M` to display and sort by mtime, while the (admittedly small) percentage graph is still there to get you an idea of the size. –  Aug 24 '19 at 12:53
  • "If you can't use a GUI (like you're on a remote server)," - why does a remote server prevent you from using a gui? – symcbean Aug 24 '19 at 16:02
  • `ncdu -e` is wrong becasue it requires an argument – Dennis Jan 17 '21 at 15:01

Another one is duc, sort of a collection of command line tools which are indeed scalable, fast and versatile. It also features some GUI/TUI options.

Nikos Alexandris

I've had success tracking down the worst offender(s) piping the du output in human readable form to egrep and matching to a regular expression.

For example:

du -h | egrep "[0-9]+G.*|[5-9][0-9][0-9]M.*"

which should give you back everything 500 megs or higher.

  • Don't use grep for arithmetic operations -- use awk instead: `du -k | awk '$1 > 500000'`. It is much easier to understand, edit and get correct on the first try. – ndemou Jul 04 '17 at 20:25

If you want speed, you can enable quotas on the filesystems you want to monitor (you need not set quotas for any user), and use a script that uses the quota command to list the disk space being used by each user. For instance:

quota -v $user | grep $filesystem | awk '{ print $2 }'

would give you the disk usage in blocks for the particular user on the particular filesystem. You should be able to check usages in a matter of seconds this way.

To enable quotas you will need to add usrquota to the filesystem options in your /etc/fstab file and then probably reboot so that quotacheck can be run on an idle filesystem before quotaon is called.
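
For reference, the fstab change might look like this (device and mount point are placeholders):

```shell
# /etc/fstab -- add usrquota to the options of the filesystem to monitor
/dev/sda1  /home  ext4  defaults,usrquota  0  2
```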


I had a similar issue, but the answers on this page weren't enough. I found the following command to be the most useful for the listing:

du -a / | sort -n -r | head -n 20

This shows the 20 biggest offenders. However, even after running it, it did not show me the real issue, because I had already deleted the file. The catch was that there was a process still running that was referencing the deleted log file... so I had to kill that process first; then the disk space showed up as free.

  • Good point but this should be a comment and not an answer by itself - this question suffers from too many answers – ndemou Jun 09 '17 at 10:36

You can use DiskReport.net to generate an online web report of all your disks.

Over many runs it will show you a history graph for all your folders, making it easy to find what has grown.

  • This tool doesn't match two main points of the question "I often find myself struggling to track down the culprit after a partition goes full" and "I'd prefer a command line solution which relies on standard Linux commands" – ndemou Jun 09 '17 at 10:35
du -sk ./* | sort -nr | \
awk 'BEGIN{ pref[1]="K"; pref[2]="M"; pref[3]="G";} \
     { total = total + $1; x = $1; y = 1; \
       while( x > 1024 ) { x = (x + 1023)/1024; y++; } \
       printf("%g%s\t%s\n",int(x*10)/10,pref[y],$2); } \
    END { y = 1; while( total > 1024 ) { total = (total + 1023)/1024; y++; } \
          printf("Total: %g%s\n",int(total*10)/10,pref[y]); }'



To show the top50 largest files:

find /st0 -type f 2>/dev/null -exec du -Sh {} + | sort -rh | head -n 50

To show the top50 largest folders:

du -hcs /st0/* 2>/dev/null | sort -rh | head -50

I have an alias called du1:

alias du1='du -h --max-depth=1'

which is handy if you want a quick list of what space everything in the current directory is taking up. But really, ncdu is all you need...


I can't take credit for this, but I found it just yesterday:

$ find <path> -size +10000k -print0 | xargs -0 ls -l



Identify the problematic filesystem and then use -xdev to only traverse that filesystem.


find / -xdev -size +500000 -ls

There is a nice piece of cross-platform freeware called JDiskReport which includes a GUI to explore what's taking up all that space.

Example screenshot:
JDiskReport screenshot

Of course, you'll need to clear up a little bit of space manually before you can download and install it, or download this to a different drive (like a USB thumbdrive).

(Copied here from same-author answer on duplicate question)


I realise that this thread is quite old, but nonetheless, very pertinent in any setup today and beyond. While all have offered excellent options to track down the disk hogs, what caught my attention was your statement "...I often find myself struggling...". It looks like you have to battle this symptom frequently. I would take a step back and see how you can prevent this. A precautionary measure will involve two steps:

  1. Alerting
  2. Action on the filesystem

As an example, when the FS hits 90%, you can set up an alert via Email to inform users about this situation. Or, you can Email yourself about it. A cron job can check the status at 5-min intervals.

Next, when it hits, say, 98%, you can run a script to remount the FS read-only. This won't hurt much, as the FS would have become unwritable shortly anyway. The advantage of setting an FS read-only before it reaches 100% is that the user(s) can delete files once write access is restored. While on this subject: there is a bug in some older versions of Solaris that will crash the system when an FS hits 100%, but we will leave that for another day.

Rui F Ribeiro
Hopping Bunny

Here's the best method I've found:

cd /
find . -size +500000 -print

The simplest is to change your current directory to / and execute:

du -chs * | sort -h