I tried to obtain the size of a directory (containing directories and subdirectories) by using the `ls` command with the option `-l`. It seems to work for files (`ls -l file_name`), but if I try to get the size of a directory (for instance, `ls -l /home`), I get only 4096 bytes, although altogether it is much bigger.

- 1) Strictly speaking, you can't. Linux has directories, not folders. 2) There's a difference between the size of a directory (which is a special file holding directory entries that point to other files) and the size of the contents of that directory. As others have pointed out, the `du` command provides the latter, which is what it appears you want. – jamesqf Feb 19 '15 at 18:27
- As you seem to be new, I'll just point out the helpful `-h` option you can add to the `-l` option (i.e. `ls -lh`) to get the sizes of files printed out in human-friendly notation like 1.1M instead of 1130301. The "h" in the `du -hs` command that @sam gave as the answer for your question about directories also means "human-readable", and it also appears in `df -h`, which shows the human-readable amounts of used and free space on disk. – msouth Feb 20 '15 at 05:44
- Cross-site duplicate: *[How do I determine the total size of a directory (folder) from the command line?](https://askubuntu.com/questions/1224)* – Peter Mortensen Nov 26 '17 at 23:16
- `du -sh -- *` works for me. – roottraveller Aug 03 '20 at 10:55
16 Answers
du -sh file_path

Explanation

`du` (disk usage) estimates the space usage of file_path. The options `-sh` are (from `man du`):

-s, --summarize       display only a total for each argument
-h, --human-readable  print sizes in human readable format (e.g., 1K 234M 2G)

To check more than one directory and see the total, use `du -sch`:

-c, --total           produce a grand total
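As a sketch of `-c` in action (the directory and file names are invented for the demo), `du -sch` prints one line per argument plus a grand total:

```shell
# Create two small directories with files of known size (demo names only)
mkdir -p demo/a demo/b
dd if=/dev/zero of=demo/a/file1 bs=1K count=100 2>/dev/null
dd if=/dev/zero of=demo/b/file2 bs=1K count=200 2>/dev/null

# One human-readable line per directory, plus a "total" line at the end
du -sch demo/a demo/b
```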

- ...unless you have hardlinks ;-) http://stackoverflow.com/questions/19951883/du-counting-hardlinks-towards-filesize – Rmano Feb 20 '15 at 10:31
- It works very nicely with `find`, e.g. to count the amount of space in specific subdirectories in the current path: `$ find . -type d -name "node_modules" -prune -exec du -sh {} \;` – Alex Glukhovtsev Apr 16 '19 at 06:02
- I'm looking right now at a folder I just copied from an external drive. It contains four files (no hardlinks). `du -ba $folder` reports that each of these files is identical in size across the copied folders, but the total at the folder level does not match. `du -bs`, `du -h`, etc. give the same answer. (One folder size is six bytes more than the sum of the files; the other is ~10% larger.) I've seen this issue before when comparing a folder on an external drive. Is there any Unix command that will reliably report two folders containing identical files as being the same "size"? – Sasgorilla May 08 '21 at 20:15
Just use the `du` command:

du -sh -- *

will give you the cumulative disk usage of all non-hidden directories, files, etc. in the current directory in human-readable format.

You can use the `df` command to know the free space in the filesystem containing the directory:

df -h .
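A minimal sketch of the two commands together (the directory and file names are invented for the demo):

```shell
# A throwaway directory with ~1 MB of data (names are illustrative)
mkdir -p dfdemo
dd if=/dev/zero of=dfdemo/data.bin bs=1K count=1024 2>/dev/null

# Cumulative usage of every non-hidden entry in dfdemo
( cd dfdemo && du -sh -- * )

# Free space on the filesystem that holds dfdemo
df -h dfdemo
```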

- `du -sh *` starts throwing "unknown option" errors if any of the files in that dir begin with a dash. Safer to do `du -sh -- *` – mpen Mar 01 '16 at 23:07
- `du -sh *` doesn't show the usage of hidden folders – Prashant Prabhakar Singh Oct 18 '16 at 13:00
- `du -sh -- * .*` to include dotfiles. This is useful to include a possibly large `.git` directory, for example. Alternatively, in zsh you can `setopt globdots` to glob dotfiles by default. – cbarrick Nov 29 '16 at 03:13
- What does the `--` do? I know it applies to shell *built-ins* to end option arguments, but `du` is not a built-in, and I don't see this usage documented for `du`: https://linux.die.net/man/1/du – flow2k Jan 10 '19 at 05:16
- `--` is used in most Bash built-in commands and many other commands to signify the end of command options, after which only positional parameters are accepted. [source](https://unix.stackexchange.com/questions/11376/what-does-double-dash-mean) – Krishna Jun 03 '20 at 11:47
`du` is your friend. If you just want to know the total size of a directory, jump into it and run:

du -hs

If you would also like to know which sub-folders take up how much disk space, you can extend this command to:

du -h --max-depth=1 | sort -hr

which will give you the size of all sub-folders (level 1). The output will be sorted (largest folder on top).
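For example (the sub-folder names are made up for the demo), combining it with `head` keeps only the biggest offenders:

```shell
# Two sub-folders of very different sizes (demo names only)
mkdir -p depth-demo/big depth-demo/small
dd if=/dev/zero of=depth-demo/big/blob bs=1K count=500 2>/dev/null
dd if=/dev/zero of=depth-demo/small/blob bs=1K count=50 2>/dev/null

# Largest entries first; pipe through `head` to show only the top 5
( cd depth-demo && du -h --max-depth=1 | sort -hr | head -n 5 )
```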

- It seems on some (perhaps older?) versions of Linux, `sort` does not have an `-h` switch, and therefore the next best command I could find is: `du -c --max-depth=1 | sort -rn` – richhallstoke Jul 13 '16 at 09:41
- @richhallstoke if you use [`ncdu`](https://unix.stackexchange.com/a/342796/139857) the files are sorted by descending size by default. – Armfoot Feb 03 '18 at 06:35
- +1 for `--max-depth`. I needed to see the size of the folders in my current directory, not every subdirectory and file. This solved my problem. – imans77 Dec 16 '19 at 20:36
- Just wanted to note that on OS X, `du` appears to use `-d` instead of `--max-depth`. – lfender6445 Dec 21 '20 at 18:11
- To avoid the line for the current directory in the result, just add a star (idea from Pacifist, above): `du -h --max-depth=1 * | sort -h` – honzajde May 24 '22 at 14:45
`du` can be complicated to use, since you seemingly have to pass 100 arguments to get decent output, and figuring out the size of hidden folders is even tougher.

Make your life easy and use `ncdu`.

You get per-folder summaries that are easily browsable.

- Checked out `ncdu` and would like to point out to others: when you're hunting for the files that are bloating some directory, this utility is extremely useful, as it displays size indicators that make the culprit(s) stand out. Overall this offers the right amount of interactivity, which may be particularly useful in command-line-only environments. – darbehdar Aug 02 '21 at 03:30
The `du` command shows the disk usage of a file.

The `-h` option shows results in human-readable form (e.g., 4K, 5M, 3G).

du -h (file name)

All of the above examples tell you the size of the data on disk (i.e. the amount of disk space a particular file is using, which is usually larger than the actual file size). There are some situations where these will not give you an accurate report, if the data is not actually stored on this particular disk and only inode references exist.

In your example, you used `ls -l` on a single file, which returned the file's actual size, NOT its size on disk.

If you want to know the actual file sizes, add the `-b` option to `du`:

du -csbh .
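A sparse file makes the difference easy to see (the file name here is just for the demo): its apparent size is 10 MB, but it occupies almost no disk blocks.

```shell
mkdir -p sparse-demo
# Allocate a 10 MB file without writing any data blocks (GNU truncate)
truncate -s 10M sparse-demo/sparse.img

du -sh sparse-demo    # disk usage: only a few KB
du -sbh sparse-demo   # apparent size: roughly 10M
```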

- Yes. I'm using SDFS, which compresses & dedups the files, so I couldn't figure out why it was reporting such low numbers. The actual size of the files as shown by `ls` can be found by using `du -b`. – Ryan Shillington Oct 03 '16 at 22:31
Personally, I think this is best if you don't want to use `ncdu`:

# du -sh ./*

- Thank you! A command to see the size of just the direct children, avoiding the huge wall of text that displays when you use the regular "recursive" version. – Venryx Oct 04 '21 at 16:48
df -h .; du -sh -- * | sort -hr

This shows how much disk space you have left on the current drive and then tells you how much every file/directory takes up. e.g.,

Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb2       206G  167G   29G  86% /
115M    node_modules
2.1M    examples
 68K    src
4.0K    webpack.config.js
4.0K    README.md
4.0K    package.json

- FYI, it seems to report the size on disk, i.e. it'll probably be padded to the nearest 4 KB. – mpen Jun 15 '16 at 15:51
Here is a function for your `.bash_aliases`:

# du with mount excludes, output sorted by size
dusort () {
    DIR=$(printf '%s\n' "$1" | sed 's#/$##')
    du -scxh $(mount | awk '{print $3}' | sort -u \
        | sed 's#^#--exclude=#') "$DIR"/* | sort -h
}
Sample output:
$ dusort /
...
0 /mnt
0 /sbin
0 /srv
4,0K /tmp
728K /home
23M /etc
169M /boot
528M /root
1,4G /usr
3,3G /var
4,3G /opt
9,6G total
For subdirectories:
$ dusort .
$ dusort /var/log/

Find all files under the current directory recursively and sum up their sizes:

find . -type f -print0 | xargs -0 stat --print='%s\n' | awk '{total+=$1} END {print total}'
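A quick sanity check on files of known size (the names are invented for the demo); the demo spells the option out as `--printf`, of which `--print` is an abbreviation in GNU stat:

```shell
# Two files with known sizes: 3 + 5 = 8 bytes (demo names only)
mkdir -p stat-demo
printf 'abc' > stat-demo/x
printf 'defgh' > stat-demo/y

find stat-demo -type f -print0 \
  | xargs -0 stat --printf='%s\n' \
  | awk '{total += $1} END {print total}'   # prints 8
```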

- I would use `-not -type d` to sum not only the sizes of ordinary files (`-type f`) but also the sizes of symbolic links and so on. – anton_rh Sep 24 '18 at 11:08
- This is great, because you don't get the overhead required to store the files, but only the size of the files themselves. – bballdave025 May 22 '19 at 21:31
Here is a POSIX script that will work with:
- A file
- Files
- A directory
- Directories
ls -A -R -g -o "$@" | awk '{n1 += $3} END {print n1}'
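For instance (file names are made up for the demo), on two files of 3 and 5 bytes the script reports 8: with `-g -o`, the size is the third field of each long-listing line, and the `total`/header lines contribute nothing to the sum.

```shell
mkdir -p posix-demo
printf 'abc' > posix-demo/x       # 3 bytes
printf 'defgh' > posix-demo/y     # 5 bytes

# Same pipeline as above, pointed at the demo directory
ls -A -R -g -o posix-demo | awk '{n1 += $3} END {print n1}'   # prints 8
```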

Note that `du` prints the space that a directory occupies on the media, which is usually bigger than just the total size of all files in the directory, because `du` takes into account the size of all auxiliary information that is stored on the media to organize the directory in compliance with the file system format.

If the file system is compressible, then `du` may output an even smaller number than the total size of all files, because files may be internally compressed by the file system and so take less space on the media than the uncompressed information they contain. The same applies if there are sparse files.

If there are hard links in the directory, then `du` may print a smaller value as well, because several different files in the directory refer to the same data on the media.

To get the straightforward total size of all files in the directory, the following one-line shell expression can be used (assuming a GNU system):

find . ! -type d -print0 | xargs -r0 stat -c %s | paste -sd+ - | bc

or even shorter:

find . ! -type d -printf '%s\n' | paste -sd+ - | bc

It just sums the sizes of all non-directory files in the directory (and its subdirectories, recursively) one by one. Note that for symlinks, it reports the size of the symlink (not of the file the symlink points to).
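A small demonstration with known sizes (the names are illustrative), including a symlink to show that only the link itself is counted (5 bytes, the length of its target string "a.txt"):

```shell
mkdir -p size-demo
printf 'hello' > size-demo/a.txt          # 5 bytes
printf '1234567890' > size-demo/b.txt     # 10 bytes
ln -sf a.txt size-demo/link.txt           # the symlink itself is 5 bytes

find size-demo ! -type d -printf '%s\n' | paste -sd+ - | bc   # prints 20
```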
