Print the man page for ls as a PDF.
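A common way to do this (assuming a groff-based man that can emit PostScript with -t, plus Ghostscript's ps2pdf) is the one-liner man -t ls | ps2pdf - ls.pdf; the sketch below is the same idea written defensively, so it degrades gracefully on systems whose man lacks -t:

```shell
cd "$(mktemp -d)"                    # demo location, so no files are clobbered
man -t ls > ls.ps || true            # PostScript render; skipped where man lacks -t
if [ -s ls.ps ] && command -v ps2pdf >/dev/null; then
  ps2pdf ls.ps ls.pdf                # Ghostscript converts the PostScript to PDF
fi
```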
This command won't delete resource forks from an HFS file system, only from file systems that don't natively support resource forks.
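On filesystems without native fork support, macOS stores resource forks as AppleDouble companion files named ._name next to the real file. A minimal sketch of the cleanup, assuming that naming convention, run in a throwaway demo directory:

```shell
# Demo: an AppleDouble companion file next to the real file.
cd "$(mktemp -d)"
: > ._photo.jpg                  # resource-fork companion
: > photo.jpg                    # the actual file
# Delete the companions, keep the real files:
find . -name '._*' -type f -delete
ls                               # photo.jpg
```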
This simple command recursively removes all .svn directories. Useful when you want a clean copy of the code without the .svn metadata. Preview what will be deleted by running " find . -name '.svn' -type d | xargs echo " first.
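The preview-then-delete flow can be sketched like this (demo tree used here so nothing real is touched; -prune keeps find from descending into a directory it is about to remove):

```shell
# Demo tree with stray .svn directories:
cd "$(mktemp -d)"
mkdir -p project/src/.svn project/.svn
: > project/src/main.c

# Preview what would be removed:
find . -name '.svn' -type d

# Delete; -exec ... + handles many matches without huge argument lists:
find . -name '.svn' -type d -prune -exec rm -rf {} +
```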
A replacement for rm /SOME/PATH/* when it fails with "argument list too long".
Like the above, but runs a single rm command
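Both variants can be sketched with find, which streams the names instead of building one huge argument list; xargs batches them into as few rm invocations as possible (demo directory standing in for /SOME/PATH):

```shell
# Demo directory standing in for /SOME/PATH:
dir=$(mktemp -d)
: > "$dir/a"; : > "$dir/b"; : > "$dir/c"

# Variant 1: null-delimited names batched into rm by xargs:
find "$dir" -maxdepth 1 -type f -print0 | xargs -0 rm --

# Variant 2: let find unlink the files itself:
find "$dir" -maxdepth 1 -type f -delete
```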
Code to delete a file with gremlins/special characters/unicode in its name. Use ls -i to find the inode number of the file, then delete it with a find statement matching that inode. Detailed here: http://www.arsc.edu/arsc/support/howtos/nonprintingchars/
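A self-contained sketch of the inode approach (demo file with an awkward name; in real use you would read the inode number off the ls -i output by eye):

```shell
# Demo: a file with an awkward name, removed by inode number.
cd "$(mktemp -d)"
: > 'bad name'
inode=$(ls -i | awk '{print $1; exit}')   # first column of ls -i is the inode
find . -maxdepth 1 -inum "$inode" -exec rm -- {} \;
```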
touch -t 201208211200 first ; touch -t 201208220100 last

This creates two reference files, first and last, with timestamps bracketing the window the find command should search:

    201208211200 = 2012-08-21 12:00
    201208220100 = 2012-08-22 01:00

Then run find with the -newer switch, which selects files by comparing timestamps against a reference file:

    find /path/to/files/ -newer first ! -newer last

meaning: find any files in /path/to/files that are newer than file "first" and not newer than file "last". Pipe the output of this find command through xargs to a move command:

    | xargs -ifile mv -fv file /path/to/destination/

Finally, remove the reference files we created for this operation:

    rm first; rm last
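Put end to end, the steps can be sketched as follows (throwaway demo directories stand in for the real source and destination paths; the reference files are excluded by name only because this demo creates them inside the directory being searched):

```shell
# Demo stand-ins for /path/to/files and /path/to/destination:
src=$(mktemp -d); dst=$(mktemp -d)
cd "$src"
touch -t 201208211800 report.txt     # mtime falls inside the window

# Reference files bracketing 2012-08-21 12:00 .. 2012-08-22 01:00:
touch -t 201208211200 first
touch -t 201208220100 last

# Move files strictly newer than "first" and not newer than "last"
# (GNU xargs -I substitutes each path for the word "file"):
find . -maxdepth 1 -type f -newer first ! -newer last \
     ! -name first ! -name last \
  | xargs -I file mv -fv file "$dst"/

rm first last
```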
Uses zsh globbing syntax to safely remove all the files known to be generated by LaTeX, but only if a .tex source file with the same basename is actually present, so we don't accidentally delete a .nav, .log, or .out file that has nothing to do with LaTeX. The qualifier e/'[[ -f ${REPLY:r}.tex ]]'/ checks for the existence of a .tex file of the same name beforehand.
A different way to do this would be to glob all *.tex files and generate a globbing pattern from them:
TEXTEMPFILES=(*.tex(.N:s/%tex/'(log|toc|aux|nav|snm|out|tex.backup|bbl|blg|bib.backup|vrb|lof|lot|hd|idx)(.N)'/)) ;
rm -v ${~TEXTEMPFILES}
or, you could use purge() from grml-etc-core ( http://github.com/grml/grml-etc-core/blob/master/usr_share_grml/zsh/functions/purge )
Good for when you're working on building a clean source install for RPM packaging or what have you. After testing, run this command to compare the original extracted source tree with your working source directory; it removes the differences created by running './configure' and 'make'.
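The idea can be sketched with diff -rq, whose "Only in DIR: name" lines identify the generated files (demo trees used here; this naive parse breaks on filenames containing ": " or newlines):

```shell
# Demo trees standing in for the pristine and working source dirs:
orig=$(mktemp -d); work=$(mktemp -d)
: > "$orig/main.c"; : > "$work/main.c"
: > "$work/config.log"                # generated by ./configure

# Remove everything that exists only in the working tree:
diff -rq "$orig" "$work" \
  | grep "^Only in $work" \
  | sed 's/^Only in //; s|: |/|' \
  | xargs -r rm -rf
```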
Only zsh supports this syntax. It removes all regular files in the current directory except .tex and .pdf files.
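One way to write it (an assumption on my part, since the original command is not shown): with EXTENDED_GLOB set, zsh's ~ operator excludes a pattern, and the (.) glob qualifier restricts matches to plain files. Demonstrated in a scratch directory:

```shell
if command -v zsh >/dev/null; then
  zsh -c 'setopt extendedglob
          cd "$(mktemp -d)"
          : > keep.tex; : > keep.pdf; : > junk.log; : > junk.aux
          rm -- *~*.(tex|pdf)(.)   # everything except *.tex / *.pdf
          ls'
fi
```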
Copies the file to a temporary location, opens it in an editor, stamps it with the real file's timestamp, and copies it back. Assumes /tmp is writable and $EDITOR is set, but both can be replaced with better values.
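A hypothetical helper sketching the same flow (the function name and structure are mine, not the original command): edit a copy, stamp the copy with the original's mtime, then copy it back with the timestamp intact.

```shell
edit_keep_mtime() {
  tmp=$(mktemp)
  cp "$1" "$tmp"               # work on a copy in /tmp
  ${EDITOR:-vi} "$tmp"         # edit the copy
  touch -r "$1" "$tmp"         # give the copy the real file's timestamp
  cp -p "$tmp" "$1"            # -p carries the timestamp back with the content
  rm -f "$tmp"
}
```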
Requires svn version 1.7.
Takes a screenshot after a pause of $1 seconds (so you can choose what to capture), then uploads it and returns the URI of the post on ompdlr.org.
Linux users who want to extract text from the PDF files in the current directory and its sub-directories can use this command. It requires "bash", "ps2ascii" and "par", and the PARINIT environment variable must be sanely set (see man par). WARNING: the file "junk.sh" will be created, run, and destroyed in the current directory, so you _must_ have sufficient rights there. Edit the command if you need to avoid using the file name "junk.sh".
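A sketch of the same idea without the junk.sh temp script, running the ps2ascii|par pipeline per PDF via sh -c (run in a throwaway directory here; run it from your real tree instead):

```shell
cd "$(mktemp -d)"    # demo location so no real PDFs are touched
# For each PDF, write a re-flowed .txt alongside it; requires
# ghostscript's ps2ascii and par, with PARINIT set (see man par):
find . -name '*.pdf' -exec sh -c \
  'ps2ascii "$1" | par > "${1%.pdf}.txt"' sh {} \;
```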
While `echo rm * | batch` might seem to work, it can still raise the system load: `rm` is merely _started_ when the load is low, and may then run for a long time. My proposed command executes a new `rm` once every minute, and only while the load is small. The load could be reduced further with `ionice`, but I still think this is a useful example for sequential batch jobs.
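One way to get a short-lived job per file is to queue each deletion separately with batch(1), which starts jobs only when the load average is low (an assumption on my part about the original command's shape; requires at/batch installed with atd running, and filenames containing single quotes would need extra escaping):

```shell
# Demo dir so nothing real is queued for deletion:
cd "$(mktemp -d)"; : > demo.txt

if command -v batch >/dev/null; then
  for f in ./*; do
    printf "rm -- '%s'\n" "$f" | batch   # one tiny job per file
  done
fi
```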
Removes the old index.html before downloading it again, and tidies the JavaScript tags in the downloaded index.html.
Installs ksuperkey with one command on Kubuntu. You must add ksuperkey to autostart manually in KDE's System Settings.
This command recursively deletes all files in the current directory and runs cvs remove on them.