convert and copy a file
dd if=file1 of=file2 bs=2M
Why does 'dd' not work for creating bootable USB?
Have you made sure that your motherboard is set to boot from the
USB device before it tries booting from your HDD? I would guess
that may be your only issue - there's not much to using
dd as you can see.
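For reference, the usual invocation for writing an ISO to a stick is a plain raw copy (a sketch; /dev/sdX is a placeholder for your actual device, so the demo below runs against ordinary files and is safe to copy-paste):

```shell
# Writing an ISO to a USB stick is just a raw copy.  The real command
# would be:   dd if=image.iso of=/dev/sdX bs=4M conv=fsync
# (/dev/sdX is an assumption -- double-check the device name first,
# and make sure the stick is not mounted.)
iso=demo.iso
dev=fake-stick.img
dd if=/dev/urandom of="$iso" bs=1024 count=64 2>/dev/null  # stand-in ISO
dd if="$iso" of="$dev" bs=4M conv=fsync 2>/dev/null        # the raw copy
cmp "$iso" "$dev" && echo "copy verified"
```

conv=fsync makes dd flush the data before exiting, so the stick can be unplugged once the command returns.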
What happens if I dd zeros to the drive where dd resides?
It will keep running; the dd binary is already in memory by the
time it starts writing. Of course, it'll also cost you most of
your filesystem, but presumably you already know that...
How can I mount dd image of a partition?
You might first have to use losetup to create a device from your
file, and then mount that device. Here's what I do to mount a
backup file with partition image inside:
losetup /dev/loop1 /home/backup-file
mount /dev/loop1 /mnt/backup
My partition then appears under /mnt/backup, and the original
file is /home/backup-file. Maybe you can do this all with "mount
-o loop", but I haven't been successful with that, so I'm using
losetup explicitly. After I'm finished, I umount the partition
and then delete the loop device with "losetup -d /dev/loop1",
just in case.
Also, you can use losetup to find out what loop device is
currently free in your system, with losetup -f
Let me know if this works.
How to pad a file with "FF" using dd?
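This question is left unanswered above; one common approach (a sketch, not from the original thread) is to generate the 0xFF bytes by translating zeros from /dev/zero:

```shell
# Pad a file with 0xFF bytes: dd supplies the zeros, tr rewrites
# them to 0xFF (octal 377), and the shell appends them to the file.
printf 'hello' > padded.bin                  # 5-byte sample file
pad=11                                       # pad out to 16 bytes total
dd if=/dev/zero bs=1 count="$pad" 2>/dev/null | tr '\0' '\377' >> padded.bin
wc -c < padded.bin                           # -> 16
```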
What exactly does the dd command do?
dd does a byte-by-byte copy from the source to the
destination, with an optional conversion specified by the
conv argument. It performs reads and writes in block sizes
specified by the bs (or ibs and obs) arguments, with the
amount copied bounded by the count argument.
What happens if the specified output file is too small to
contain the specified input file?
If of is too small to contain the data, the data is
truncated to fit. Note that if of is a regular file then it
is overwritten.
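The truncate-versus-overwrite behaviour is easy to see with a small experiment (file names here are just illustrative):

```shell
# By default dd truncates a regular output file before writing;
# conv=notrunc overwrites in place instead.
printf 'ABCDEFGH' > out.txt
printf 'xy' | dd of=out.txt conv=notrunc 2>/dev/null
cat out.txt     # -> xyCDEFGH (the rest of the file survives)
printf 'xy' | dd of=out.txt 2>/dev/null
cat out.txt     # -> xy (the file was truncated first)
```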
How do I create a 1GB random file in Linux?
Try this:
openssl rand -out sample.txt $((1024 * 1024 * 1024))
Or, if you don't mind it being slower, read straight from the
kernel (use /dev/urandom; /dev/random would block long before
producing 1 GB):
dd if=/dev/urandom of=sample.txt bs=1M count=1024
How to create VHD disk image from a Linux live system?
One approach is to use a couple of handy technologies:
VirtualBox, and the ntfsprogs package.
Recent versions of VirtualBox allow you to create VHD hard disk
images, and ntfsprogs provides the ntfsclone utility. As its
name suggests, ntfsclone clones NTFS filesystems, and I believe
that it does it at the filesystem level, skipping over unused
blocks.
So, to begin, create a new VM in VirtualBox, and provision a new,
empty VHD-file drive for it. The VHD drive need only be as large
as the size of data in use on the physical drive you want to
clone (well, actually, make it a little bit larger, to allow for
some wiggle room).
Next, find a Linux live CD that contains the
ntfsprogs package, as well as
openssh-server. I like System Rescue CD
for this, but pretty much any Debian- or Ubuntu-based live CD
should work as well.
Boot the VirtualBox VM with the Linux live CD, and start
sshd within the VM so that you will be able to execute
commands on it remotely. Partition the empty VHD drive
appropriately, using whatever partitioning tool you prefer (I
like fdisk, but I'm somewhat old school).
With another copy of the Linux live CD, boot the machine
containing the physical disk you want to clone. I assume that the
VirtualBox VM and this machine are accessible to each other over
the network. On this machine, execute the following command (all
on one line):
ntfsclone --save-image -o - /dev/sdXX |
ssh root@VirtualBox-VM 'ntfsclone --restore-image --overwrite /dev/sdYY -'
/dev/sdXX is the device name (on the local
machine) of the physical drive you want to clone, and
/dev/sdYY is the device name (in the VM) of the
VHD destination drive.
Explanation: The first
ntfsclone command in the
pipeline extracts an image of the source NTFS filesystem and
sends it out through the ssh tunnel, while the second
ntfsclone command receives the image and restores it
to the VHD drive.
Once the operation completes, the VHD file should contain a
file-for-file exact clone of the original physical disk (barring
any hardware errors, like bad sectors, that might cause the
process to abort prematurely).
One last thing you may want to do is to run a Windows
chkdsk on the VHD drive, just to ensure the cloning
didn't introduce any problems (it shouldn't have, but hey, I'm a
bit paranoid about these things).
dd_rescue vs dcfldd vs dd
The three are different, and the two variants are derived for the
needs of specific communities. dd is general-purpose imaging
software, ddrescue is designed to rebuild damaged files from
multiple passes and sources, and the forensic dd variants are
designed to make verifiable, legally sound copies.
dd is the baseline version - it's the generic product, so to
speak. dd is designed to make a bit-perfect copy. It's what you
use when you want to make a disk image, with no fancy add-ons. dd
does one thing well, and absolutely nothing else. While there are
distinct GNU and BSD versions, their functionality and commands
are essentially identical, both tracing back to the classic Unix
dd, whose name and syntax in turn come from IBM's JCL DD
statement.
GNU ddrescue is optimised for data recovery - it will note down
where bad sectors are, and will attempt to fill in those areas
with data from subsequent runs. As a result, the aim is to get
files that are readable, as opposed to bit-perfect. You will want
to use it to recover data from a drive you suspect is damaged.
From the ddrescue webpage:
Ddrescue does not write zeros to the output when it finds bad
sectors in the input, and does not truncate the output file if
not asked to. So, every time you run it on the same output
file, it tries to fill in the gaps without wiping out the data
already rescued.
Automatic merging of backups: If you have two or more damaged
copies of a file, cdrom, etc, and run ddrescue on all of them,
one at a time, with the same output file, you will probably
obtain a complete and error-free file. This is so because the
probability of having damaged areas at the same places on
different input files is very low. Using the logfile, only the
needed blocks are read from the second and successive copies.
dcfldd and other forensic dd variants are designed
to make forensic copies. These need to be bit-perfect AND
verifiable. Use this when you absolutely need to know that a copy
and subsequent copies are identical to the original - the
forensic dd variants add additional features such as on-the-fly
hashing.
From the website, the additional features of dcfldd are:
Hashing on-the-fly - dcfldd can hash the input data as it is
being transferred, helping to ensure data integrity.
Status output - dcfldd can update the user of its progress in
terms of the amount of data transferred and how much longer the
operation will take.
Flexible disk wipes - dcfldd can be used to wipe disks quickly
and with a known pattern if desired.
Image/wipe verify - dcfldd can verify that a target drive is a
bit-for-bit match of the specified input file or pattern.
Multiple outputs - dcfldd can output to multiple files or disks
at the same time.
Split output - dcfldd can split output to multiple files with
more configurability than the split command.
Piped output and logs - dcfldd can send all its log data and
output to commands as well as files natively.
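If dcfldd itself isn't installed, the hashing-on-the-fly idea can be approximated with plain dd and tee (the dcfldd equivalent would be roughly `dcfldd if=src.bin of=dst.bin hash=sha256` -- an assumption that the package is available):

```shell
# Hash the data stream while copying it, dcfldd-style, using
# standard tools: tee forks the stream to the output file while
# sha256sum hashes everything that passed through.
printf 'forensic sample data' > src.bin
dd if=src.bin bs=4K 2>/dev/null | tee dst.bin |
  sha256sum | cut -d' ' -f1 > dst.sha256
cmp src.bin dst.bin && echo "bit-for-bit match"
cat dst.sha256                # the hash of the copied stream
```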
creating a bootable USB from command line on linux
This is a common issue with SanDisk USB sticks, or with sticks
not formatted as FAT32.
If it is neither of those, it is most likely an issue with your
stick's partition order or with the syslinux.cfg file.
Compressing a file in place - does "gzip -c file | dd of=file" really work?
Experiment shows that this does not work.
I created a 20-megabyte file filled with random data and
tried the above command on it. Here are the results:
% ls -l
-rw-r--r-- 1 kst kst 20971520 2012-01-18 03:47 file
-rw-r--r-- 1 kst kst 20971520 2012-01-18 02:48 orig
% gzip -c file | dd of=file
0+1 records in
0+1 records out
25 bytes (25 B) copied, 0.000118005 s, 212 kB/s
% ls -l
-rw-r--r-- 1 kst kst 25 2012-01-18 03:47 file
-rw-r--r-- 1 kst kst 20971520 2012-01-18 02:48 orig
Obviously a 20-megabyte random file won't compress to 25
bytes, and in fact running gunzip on the compressed
file yields an empty file.
I got similar results for a much smaller random file (100 bytes).
So what happened?
In this case, the
dd command truncated
file to zero bytes before starting to write to it;
gzip started reading from the newly empty file and
produced 25 bytes of output, which
dd then appended
to the empty
file. (An empty file "compresses" to a
non-zero size; it's theoretically impossible for any compressor
to make all input smaller).
Other results may be possible, depending on the timing of the
dd, gzip, and shell processes, all of which are running in
parallel.
There's a race condition because one process, gzip, reads
file, while another parallel process, dd, writes to it.
It should be possible to implement an in-place file compressor
that reads and writes to the same file, using whatever internal
buffering is necessary to avoid clobbering data. But I've never
heard of anyone actually implementing that, probably because it
usually isn't necessary and because if the compressor fails
partway through, the file will be permanently corrupted.
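A safe way to get the intended effect is to write to a temporary file and rename it over the original (which is close to what plain `gzip file` does for you anyway); a sketch:

```shell
# Compress "in place" without the race: write the compressed data
# to a temporary file first, then atomically replace the original.
head -c 100000 /dev/urandom > file          # sample data
cp file orig                                # keep a copy to verify against
gzip -c file > file.tmp && mv file.tmp file
gzip -dc file | cmp - orig && echo "round-trip ok"
```

The mv at the end is atomic on the same filesystem, so at no point does a half-written file replace the original.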
How can I mount a disk image?
This seems to be exactly what I was looking for.
Here's the key part:
mount -o loop,ro,offset=32256 hda.img /mnt/rabbit
where the value of offset is in bytes. The suggested way to get
the offset is to point parted at the image, then set
unit B for bytes and take the start value from the print output.
As an alternative, assuming you have the disk space, do the
obvious: once you have the offset and size, just use
dd to extract each partition to a separate file.
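The dd extraction mentioned above looks like this; the offsets are illustrative assumptions (take the real start sector and length from parted or fdisk):

```shell
# Carve a partition out of a whole-disk image with dd.
# START and SECTORS are made-up values for this demo; read the
# real ones from the partition table of your image.
truncate -s 1M hda.img          # stand-in for a real disk image
START=64                        # partition start, in 512-byte sectors
SECTORS=1024                    # partition length, in sectors
dd if=hda.img of=part1.img bs=512 skip=$START count=$SECTORS 2>/dev/null
wc -c < part1.img               # -> 524288 (1024 * 512)
```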
Mac OSX - Why is /dev/rdisk 20 times faster than /dev/disk
/dev/rdisk nodes are character-special devices, but are "raw"
in the BSD sense and force block-aligned I/O. They are closer
to the physical disk than the buffer cache. /dev/disk nodes, on
the other hand, are buffered block-special devices and are used
primarily by the kernel's filesystem code.
In layman's terms:
/dev/rdisk goes almost directly to the physical disk, while
/dev/disk goes via a longer, more expensive route through the
buffer cache.
dd performance on Mac OS X vs. Linux
For OS X, use the rdisk device. For some reason
rdisk is faster than
disk. I believe it has to do with buffers.
Also, in general, using the
bs flag with
dd helps with speed.
dd if=/path/to/image.iso of=/dev/sdc bs=1M
The block size is 1M, which transfers faster. On OS X you have to
write 1m (lowercase) instead of 1M.
How Do I Find The Hardware Block Read Size for My Hard Drive?
Linux exposes the physical sector size in files under sysfs, but
to get the best performance you should probably do a little
testing with different sizes and measure. I could not find a
clear answer on whether using exactly the physical block size
gives the optimal result (although I assume it cannot be a bad
choice).
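A simple way to run that test yourself (the sysfs path below is where recent Linux kernels expose the physical sector size; treat it as an assumption for your system):

```shell
# Print the physical sector size(s) the kernel reports, then time
# a few block sizes writing 16 MiB to a scratch file.
cat /sys/block/*/queue/physical_block_size 2>/dev/null || true
for bs in 512 4096 65536 1048576; do
  dd if=/dev/zero of=bench.tmp bs=$bs count=$((16777216 / bs)) 2>&1 |
    tail -n 1                   # the stats line with the MB/s figure
done
```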
Copy a file,
converting and formatting according to the operands.
bs=BYTES      read and write up to BYTES bytes at a time
cbs=BYTES     convert BYTES bytes at a time
conv=CONVS    convert the file as per the comma separated symbol list
count=N       copy only N input blocks
ibs=BYTES     read up to BYTES bytes at a time (default: 512)
if=FILE       read from FILE instead of stdin
iflag=FLAGS   read as per the comma separated symbol list
obs=BYTES     write BYTES bytes at a time (default: 512)
of=FILE       write to FILE instead of stdout
oflag=FLAGS   write as per the comma separated symbol list
seek=N        skip N obs-sized blocks at start of output
skip=N        skip N ibs-sized blocks at start of input
status=WHICH  WHICH info to suppress outputting to stderr;
              'noxfer' suppresses transfer stats, 'none' suppresses all
N and BYTES may be followed by the following multiplicative
suffixes: c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000,
M=1024*1024, xM=M, GB=1000*1000*1000, G=1024*1024*1024, and
so on for T, P, E, Z, Y.
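A quick sanity check of the suffixes (K is 1024; kB would be 1000):

```shell
# bs=1K means 1024-byte blocks, so 4 blocks -> 4096 bytes.
dd if=/dev/zero of=suffix.bin bs=1K count=4 2>/dev/null
wc -c < suffix.bin    # -> 4096
```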
Each CONV symbol may be:
ascii      from EBCDIC to ASCII
ebcdic     from ASCII to EBCDIC
ibm        from ASCII to alternate EBCDIC
block      pad newline-terminated records with spaces to cbs-size
unblock    replace trailing spaces in cbs-size records with newline
lcase      change upper case to lower case
ucase      change lower case to upper case
sparse     try to seek rather than write the output for NUL input
swab       swap every pair of input bytes
sync       pad every input block with NULs to ibs-size; when
           used with block or unblock, pad with spaces rather
           than NULs
excl       fail if the output file already exists
nocreat    do not create the output file
notrunc    do not truncate the output file
noerror    continue after read errors
fdatasync  physically write output file data before finishing
fsync      likewise, but also write metadata
Each FLAG symbol may be:
append       append mode (makes sense only for output;
             conv=notrunc suggested)
direct       use direct I/O for data
directory    fail unless a directory
dsync        use synchronized I/O for data
sync         likewise, but also for metadata
fullblock    accumulate full blocks of input (iflag only)
nonblock     use non-blocking I/O
noatime      do not update access time
nocache      discard cached data
noctty       do not assign controlling terminal from file
nofollow     do not follow symlinks
count_bytes  treat 'count=N' as a byte count (iflag only)
skip_bytes   treat 'skip=N' as a byte count (iflag only)
seek_bytes   treat 'seek=N' as a byte count (oflag only)
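The *_bytes flags are handy when a copy shouldn't be rounded to whole blocks; for example:

```shell
# With iflag=count_bytes, count=6 means 6 bytes, not 6 blocks of bs.
printf 'abcdefghij' > in.bin
dd if=in.bin of=out6.bin bs=4 count=6 iflag=count_bytes 2>/dev/null
wc -c < out6.bin      # -> 6
```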
Sending a USR1 signal to a running 'dd' process makes it print
I/O statistics to standard error and then resume copying.
$ dd if=/dev/zero of=/dev/null & pid=$!
$ kill -USR1 $pid; sleep 1; kill $pid
18335302+0 records in 18335302+0 records out 9387674624 bytes (9.4 GB)
copied, 34.6279 seconds, 271 MB/s
--help     display this help and exit
--version  output version information and exit
Copyright © 2012 Free Software Foundation, Inc. License
GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute
it. There is NO WARRANTY, to the extent permitted by law.
Report dd bugs to bug-coreutils@gnu.org
GNU coreutils home page:
General help using GNU software:
Report dd translation bugs to
The documentation for dd is maintained as a Texinfo
manual. If the info and dd programs are
properly installed at your site, the command
info coreutils 'dd invocation'
should give you access to the complete manual.
Written by Paul
Rubin, David MacKenzie, and Stuart Kemp.