output the last part of files
with examples, scripts, tricks and tips
Filter tail command through multiple grep commands to separate files
It might be easier if you use more than one line to do this. You
could write a bash script:
tail -f test.log | while read line; do
    if echo "$line" | grep -q "Error"; then
        echo "$line" >> error.log
    elif echo "$line" | grep -q "Warning"; then
        echo "$line" >> warning.log
    else
        # In case you want to print out lines that match neither pattern
        echo "$line"
    fi
done
tail -f `ls -t | head -1`
##What does it do?
It always opens the most recently modified file and shows data as it is appended, on the fly. It is useful for watching whatever log is currently active in a log directory, for example.
You get the last lines of the most recently modified file, plus any lines added from then on.
(It won't work if the newest entry is a directory.)
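To work around that caveat, a hedged variant (directory and file names below are invented for illustration) filters subdirectories out before picking the newest entry:

```shell
mkdir -p /tmp/logs_demo && cd /tmp/logs_demo
mkdir -p some_subdir                    # a directory that plain `ls -t` might list first
echo "hello from app.log" > app.log     # the newest regular file
# -p appends / to directory names, so grep -v / drops them;
# swap `tail -n 5` for `tail -f` to keep following the file.
tail -n 5 "$(ls -tp | grep -v / | head -1)"
```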
example added by yoshimura
How to easily break up a text file into pieces smaller than a threshold?
There already is a nice tool for that:
> man 1 split
split -- split a file into pieces
split [-l line_count] [-a suffix_length] [file [prefix]]
split -b byte_count[K|k|M|m|G|g] [-a suffix_length] [file [prefix]]
split -p pattern [-a suffix_length] [file [prefix]]
split --bytes 50M test.out test.out_ would split
test.out into test.out_aa, test.out_ab, and so on.
A much uglier solution would be to use
dd if=test.out of=test.out.part1 bs=50M count=1
skip=0 which creates a file named test.out.part1 with the first
50M of test.out. You can increase the value of skip to 1 to
get the second chunk, to 2 for the third, etc. (skip counts
blocks of bs, i.e. 50M here). Just make sure to also change the
output filename, or you will keep overwriting the same file.
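To sanity-check either approach, a quick round trip (file names and sizes here are illustrative) shows the pieces concatenate back to the original:

```shell
cd "$(mktemp -d)"
head -c 3000 /dev/urandom > test.out    # ~3 KB of throwaway sample data
split -b 1k test.out test.out_          # -> test.out_aa, test.out_ab, test.out_ac
cat test.out_* > rebuilt.out            # glob order matches split's suffix order
cmp -s test.out rebuilt.out && echo "identical"
```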
Combine tail -f with grep?
You almost wrote the answer yourself, which is:
tail -f file.log | grep "foobar"
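One gotcha: when grep's output goes into another pipe or a file, it block-buffers, so matches can lag far behind the log. GNU grep's --line-buffered flag flushes each matching line immediately. A terminating demo (file name and contents are made up; with -f you would keep following):

```shell
printf 'ok\nfoobar one\nnope\nfoobar two\n' > /tmp/file.log
tail -n 4 /tmp/file.log | grep --line-buffered "foobar"
```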
How to grab a random section in the middle of a huge file?
You just have to write a little program to seek to some random
spot and read some amount of lines.
An example in Python (reads one line, but you can modify it):
"""Return a randomly selected line from a file."""
fo = open("/some/file.txt")
point = random.randrange(fo.size)
c = fo.read(1)
while c != '\n' and fo.tell() > 0:
c = fo.read(1)
line = fo.readline().strip()
tail -f not tracking file changes
-f follows the open file descriptor (effectively the inode). If
you want to follow by name, such as when a program completely
recreates the file, then use tail -F (short for
--follow=name --retry).
Linux application to tail multiple log files (like OS X Console.app)
Not sure about other distros, but Ubuntu has/had the GNOME System Log Viewer.
quick tail on a huge file on linux
Apart from splitting the file into smaller files, you
could simply open the file and seek to a position you guess
is close to the end.
After that, read as many lines as come; if you hit
EOF without all of your 10000000 desired
lines, compute the difference between your first guessed
position and a new -- earlier -- position,
and try again to read the missing n = diff lines.
I do not actually know whether
tail does this, or whether
there's any available POSIX tool that performs this kind of
operation; implementing it shouldn't take more than five
minutes, I guess (: This may be of some help.
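In practice tail already seeks from the end, but the guess-a-window idea can be sketched in plain shell: grab a byte window from the end with tail -c, then trim to the line count you want. The 80-bytes-per-line estimate is an assumption; widen the window if you come up short:

```shell
printf 'line %s\n' $(seq 1 100000) > /tmp/huge.txt   # stand-in for a huge file
lines=5                                              # how many final lines we want
bytes=$((lines * 80))                                # guessed upper bound on line length
tail -c "$bytes" /tmp/huge.txt | tail -n "$lines"
```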
Delete first lines from a Unicode html file
I can't access your file so I can't test this, but one of these
should work:
gawk 'NR>5' Result.html > small2
sed '1,5d' Result.html > small2
perl -ne 'print if $.>5' Result.html > small2
If they don't work, I doubt it is a problem with the encoding;
you may have some strange characters screwing things up. Try
passing your file through
od to check:
od -c Result.html | more
I see in the output of
od -c that you have Mac-style
lines that end with a carriage return (\r) and not a line feed
(\n). So, try changing these to \n and running sed or one of the
other commands again:
perl -ne 's/\r/\n/g; print' Result.html | gawk 'NR>5' > small2
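As a self-contained illustration (the file contents are invented, and tr plus awk stand in for the perl/gawk pair), the CR-to-LF conversion followed by dropping the first 5 lines looks like:

```shell
printf 'l1\rl2\rl3\rl4\rl5\rl6\rl7\r' > /tmp/Result.html   # mac-style \r line endings
tr '\r' '\n' < /tmp/Result.html | awk 'NR>5' > /tmp/small2
cat /tmp/small2
```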
Also, please post your file so we can access it and try it
ourselves. It will greatly speed up the process. The service you
have linked to requires us to get an account.
running tail -f on a server which connects to another server over ssh
If you have netcat installed on server1.com (you probably do),
you may want to use the ProxyCommand directive
to seamlessly hop across server1.com; then, when you press
Ctrl+C, it will only terminate the command on server2.com, not
your SSH session.
Example of your
~/.ssh/config (create the file if it
doesn't exist; append to end if it does):
Host server2.com
    ProxyCommand ssh -q server1.com nc -q0 server2.com 22
What happens here:
- ssh connects to server1.com
- it remotely connects from there to server2.com (using nc)
- which ferries the data through server1.com
This is completely transparent to your ssh client, so you can
work with server2.com as if you were connected directly (e.g.
SFTP, X forwarding, TCP forwarding, etc.)
For a more detailed explanation (as well as extending this to
multiple hops), see this article, or this
similar question on SU.
Print the last
10 lines of each FILE to standard output. With more than one
FILE, precede each with a header giving the file name. With
no FILE, or when FILE is -, read standard input.
Mandatory arguments to long options are mandatory for short options too.
-c, --bytes=K
    output the last K bytes; alternatively, use -c +K to output bytes starting with the Kth of each file
-f, --follow[={name|descriptor}]
    output appended data as the file grows; -f, --follow, and --follow=descriptor are equivalent
-F
    same as --follow=name --retry
-n, --lines=K
    output the last K lines, instead of the last 10; or use -n +K to output lines starting with the Kth
--max-unchanged-stats=N
    with --follow=name, reopen a FILE which has not changed size after N (default 5) iterations to see if it has been unlinked or renamed (this is the usual case of rotated log files); with inotify, this option is rarely useful
--pid=PID
    with -f, terminate after process ID, PID dies
-q, --quiet, --silent
    never output headers giving file names
--retry
    keep trying to open a file even when it is or becomes inaccessible; useful when following by name, i.e., with --follow=name
-s, --sleep-interval=N
    with -f, sleep for approximately N seconds (default 1.0) between iterations. With inotify and --pid=P, check process P at least once every N seconds.
-v, --verbose
    always output headers giving file names
--help
    display this help and exit
--version
    output version information and exit
If the first
character of K (the number of bytes or lines) is a
'+', print beginning with the Kth item from the
start of each file; otherwise, print the last K items in the
file. K may have a multiplier suffix: b 512, kB 1000, K
1024, MB 1000*1000, M 1024*1024, GB 1000*1000*1000, G
1024*1024*1024, and so on for T, P, E, Z, Y.
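The suffixes are easy to confirm by hand (GNU tail assumed; the /tmp path is arbitrary):

```shell
printf 'x%.0s' $(seq 1 2048) > /tmp/suffix_demo   # 2048 bytes of 'x'
tail -c 1K  /tmp/suffix_demo | wc -c              # K  = 1024 bytes
tail -c 1kB /tmp/suffix_demo | wc -c              # kB = 1000 bytes
```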
With --follow (-f), tail defaults
to following the file descriptor, which means that even if a
tail’ed file is renamed, tail will continue to track
its end. This default behavior is not desirable when you
really want to track the actual name of the file, not the
file descriptor (e.g., log rotation). Use
--follow=name in that case. That
causes tail to track the named file in a way that
accommodates renaming, removal and creation.
Copyright © 2012 Free Software Foundation, Inc. License
GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute
it. There is NO WARRANTY, to the extent permitted by law.
Report tail bugs to bug-coreutils@gnu.org
GNU coreutils home page: <http://www.gnu.org/software/coreutils/>
General help using GNU software: <http://www.gnu.org/gethelp/>
Report tail translation bugs to <http://translationproject.org/team/>
The documentation for tail is maintained as a Texinfo
manual. If the info and tail programs are
properly installed at your site, the command
info coreutils 'tail invocation'
should give you
access to the complete manual.
Written by Paul
Rubin, David MacKenzie, Ian Lance Taylor, and Jim Meyering.