Showing posts with label UNIX. Show all posts

Saturday, December 14, 2013

"Useless Use of Cat" Award, "Useless Use of …" Award, …


Rather reasonable and entertaining reading at the same time.
In the future I will certainly refer people to his page whenever I find an opportunity.

Wednesday, December 4, 2013

AIX moans: "Cannot open or remove a file containing a running program"


I came across this phenomenon in various contexts on AIX. 
It really sounds like "text file busy" from the old Unix days.

It looks like the programs that misuse this error message for their miserable situations actually just mean "cannot open or remove a file". I call this capturing.

It looks like some variants of the "copy" utility (like CPAN's File::Copy) use this error message instead of "no such file or directory". Maybe this happens in concurrent environments / situations, where a file disappears between the call to File::Copy and the actual kernel "open". Once "open" has returned a file handle, the inode's reference count is incremented, and the file cannot really disappear any longer, as long as it remains open, i.e. did not get closed again.
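The underlying open-file semantics are easy to demonstrate in the shell (a minimal sketch, POSIX shell assumed):

```shell
# Once a process holds an open file descriptor, removing the directory
# entry does not destroy the file's contents.
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"    # open the file on descriptor 3
rm "$tmp"         # unlink the name; the inode lives on while fd 3 is open
cat <&3           # prints: still here
exec 3<&-         # closing the last descriptor finally frees the inode
```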

To be continued …

Wednesday, October 23, 2013

created utility "cvsgrep" from "rcsgrep" shipping with O'Reilly's "Unix Power Tools" (the code TAR ball)

Derived cvsrevs from rcsrevs, and cvsgrep from rcsgrep. This is really nice!! Imagine this:
$ cvsgrep -a SEARCHME files


I knew rcsgrep and its sisters would be very, very useful.

svngrep should be quite useful as well.

Thursday, October 17, 2013

added a section on "XCOFF-based Unix-like systems" at the article on "dynamic linker" on the English wikipedia


started the article on LIBPATH at the English wikipedia

https://en.wikipedia.org/wiki/LIBPATH

I have no idea how long it will survive the storms on en.wikipedia.org. But I just felt I really should start it.

On AIX (apparently with some OS/2 ancestry) it means basically the same as LD_LIBRARY_PATH elsewhere.

Wednesday, October 16, 2013

long PATH, LD_LIBRARY_PATH, … – what kind of performance impact do they have?

A long PATH… makes the OS …
  1. "… access the disk quite often, hence there is a lot of disk I/O" – do you agree?
  2. "… compute a lot of …, hence a high CPU load" – do you agree?
I got involved in a rather "controversial" discussion on that. I voted for (1). What are you voting for?

I posted this issue at http://www.unix.com/shell-programming-scripting/238465-long-path-ld_library_path-libpath-what-kind-performance-impact-do-they-have.html as well.
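For the record, here is a quick shell experiment backing (1) – every non-matching PATH entry costs the kernel one failed lookup before the command is finally found (GNU userland assumed; the paths and the count 500 are made up for illustration):

```shell
# Put 500 nonexistent directories in front of the real PATH; the shell
# still finds the command, after one failed filesystem lookup per entry.
junk=$(printf '/nonexistent/dir%s:' $(seq 1 500))
PATH="${junk}${PATH}" sh -c 'command -v ls'
# Running the lookup under "strace -c sh -c ls" (Linux) would show the
# per-entry stat()/access() attempts in the syscall counts.
```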

"Lexical File Names in Plan 9" or "Getting Dot-Dot Right" (an article on symlinks in Unix)

http://www.cs.bell-labs.com/sys/doc/lexnames.html

An article on symlinks in Unix.
ABSTRACT
Symbolic links make the Unix file system non-hierarchical, resulting in multiple valid path names for a given file. This ambiguity is a source of confusion, especially since some shells work overtime to present a consistent view from programs such as pwd, while other programs and the kernel itself do nothing about the problem.
Plan 9 has no symbolic links but it does have other mechanisms that produce the same difficulty. Moreover, Plan 9 is founded on the ability to control a program’s environment by manipulating its name space. Ambiguous names muddle the result of operations such as copying a name space across the network.
To address these problems, the Plan 9 kernel has been modified to maintain an accurate path name for every active file (open file, working directory, mount table entry) in the system. The definition of ‘accurate’ is that the path name for a file is guaranteed to be the rooted, absolute name the program used to acquire it. These names are maintained by an efficient method that combines lexical processing—such as evaluating .. by just removing the last path name element of a directory—with local operations within the file system to maintain a consistent, easily understood view of the name system. Ambiguous situations are resolved by examining the lexically maintained names themselves.
A new kernel call, fd2path, returns the file name associated with an open file, permitting the use of reliable names to improve system services ranging from pwd to debugging. Although this work was done in Plan 9, Unix systems could also benefit from the addition of a method to recover the accurate name of an open file or the current directory.
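The ambiguity the abstract describes is easy to reproduce on a Unix system with a symlink and a POSIX shell:

```shell
# With a symlink in the path, the shell's lexical view and the kernel's
# physical view of the working directory differ.
d=$(mktemp -d)
mkdir -p "$d/real/sub"
ln -s "$d/real/sub" "$d/link"
cd "$d/link"
pwd        # lexical, shell-maintained: .../link
pwd -P     # physical, the kernel's view: .../real/sub
cd ..      # the shell strips the last path element lexically ...
pwd        # ... so we end up in $d, not in $d/real
```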

Wednesday, September 25, 2013

O'Reilly Media book: Unix Power Tools, 3rd Edition (2002)


a couple of nice utilities from their sample code:
  • rcsgrep, rcsegrep, rcsfgrep - check out files and grep/egrep/fgrep them – is there something similar for subversion etc.?
Update 2013-10-23:
Derived cvsrevs from rcsrevs, and cvsgrep from rcsgrep. This is really nice!! Imagine this:
$ cvsgrep -a SEARCHME files

do you remember "pipegrep" from a rather old Camel Book?

[Chapter 27] 27.13 More grep-like Programs Written in Perl (an outdated but still useful "Unix Power Tools" edition somewhere on a web-server discusses it)

The pipegrep program greps the output of a series of commands. The difficulty with doing this using the normal grep program is that you lose track of which file was being processed. This program prints out the command it was executing at the time, including the filename. The command, which is a single argument, will be executed once for each file in the list. If you give the string {} anywhere in the command, the filename will be substituted at that point. Otherwise the filename will be added on to the end of the command. This program has one option, -l, which causes it to list the files containing the pattern.

Follow the above link for a nice example!

I remembered the pipegrep utility today, because I came across a task …
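The core idea is easy to approximate in plain shell – the function name pipegrep_sh is hypothetical, and this is a sketch, not the Camel Book's Perl code:

```shell
# Run a command once per file and grep its output, keeping track of
# which command and file produced each match -- the pipegrep idea.
pipegrep_sh() {
    pattern=$1; cmd=$2; shift 2
    for f in "$@"; do
        $cmd "$f" | grep -e "$pattern" | sed "s|^|$cmd $f: |"
    done
}
# e.g.: pipegrep_sh PATTERN zcat *.gz
```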

Update 2013-10-17:

GNU Coding Standards: Option Table (command line options)

GNU Coding Standards: Option Table

Tuesday, September 10, 2013

yet another Perl power tool: ack - grep-like text finder

ack - grep-like text finder - metacpan.org - Perl programming language

added "Perl Power Tools" to the PPT "disambiguation page" on the English wikipedia

en.wikipedia.org/wiki/PPT

I am curious how long this entry will survive there. It's not actually referring to an existing article on the Perl Power Tools. It's just meant to tell "the reader": hey, there are these nice Perl Power Tools out there, and they also get abbreviated as PPT.

Well, yes, I could create a wikipedia stub page, but it would only have these 2 links:

And maybe some short text with like 30 words from the first of these 2 links.

But for how long would this stub article survive on wikipedia?

the Unix date command: yesterday, tomorrow, nanoseconds

GNU's date lets you specify yesterday, tomorrow, Sunday, … to be displayed following the format specification you supply. And (if available) you can also get nanoseconds displayed. Please do not complain that precisions between nanoseconds and seconds are missing!

On your UNIX flavor, GNU date may be installed as gnudate or gdate (just like: gnutar, gtar, …).
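A few examples, assuming GNU date:

```shell
date -d yesterday +%F       # yesterday as YYYY-MM-DD
date -d 'next Sunday' +%F   # relative weekday names work, too
date +%s.%N                 # seconds.nanoseconds since the epoch
```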

Tuesday, September 3, 2013

how to keep folders in sync over the Internet ("long-distance", not LAN)? I mean *sparingly*

So far I have been using rsync for this purpose, but there is still space for optimizations.

If I locally rename a big file, next time my "rsync -vaz --delete-after" will copy the file using the new name and remove it using its old name. I would actually prefer to see the file getting renamed remotely.

Posted this issue on the rsync user mailing list [link].
In response to that, I received a message pointing me to the "should" tool [link].

Tuesday, June 11, 2013

how to sort files by content

Same content, different names for 2 or more files – how do you identify the duplicates?

Use checksum utilities (like cksum, md5sum, sha1sum, …) for this, and sort on their output!
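With GNU coreutils, the whole recipe fits in one pipeline (md5sum prints a 32-character hash first, which is what -w32 compares on):

```shell
# Hash every file, sort by hash, and print groups of identical files,
# separated by blank lines; unique files are suppressed.
find . -type f -exec md5sum {} + \
    | sort \
    | uniq -w32 --all-repeated=separate
```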

Friday, June 7, 2013

linux - How to 'grep' a continuous stream - Stack Overflow

linux - How to 'grep' a continuous stream - Stack Overflow

  • I had never heard of stdbuf before
  • AIX grep has -u
  • GNU grep has --line-buffered; its -u is not the same as AIX grep's -u
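A small demonstration of the GNU options (the tail -f pipeline is the classic use; a finite pipe stands in here so the example terminates):

```shell
# Classic pattern -- see matches as soon as they are written (GNU grep):
#   tail -f app.log | grep --line-buffered ERROR
# If the *producer* block-buffers its stdout when writing into a pipe,
# stdbuf -oL (GNU coreutils) can force it back to line buffering:
#   stdbuf -oL ./producer | grep --line-buffered ERROR
# A finite stand-in so this block actually runs and exits:
printf 'ERROR one\nok two\n' | grep --line-buffered ERROR
```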

Monday, April 29, 2013

logcheck – scans your logfiles and warns you

Dependencies and restrictions:
  • logcheck depends on logtail
  • logtail runs on exactly one file …
  • … and only once on that file, as it keeps a sister file called logfile.offset
  • improving this should actually be "easy", e.g. keeping a sister file called logfile.offset.user
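The offset mechanism is simple enough to sketch in shell – minitail below is hypothetical, not the real logtail:

```shell
# Print only what was appended to a log since the last run, remembering
# the read position in a sister .offset file (the logtail idea).
minitail() {
    log=$1
    off_file="$log.offset"
    off=$(cat "$off_file" 2>/dev/null || echo 0)
    size=$(wc -c < "$log")
    if [ "$size" -lt "$off" ]; then off=0; fi  # log was rotated/truncated
    tail -c +"$((off + 1))" "$log"             # emit the new bytes
    echo "$size" > "$off_file"                 # remember how far we got
}
```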

Pros:
  • abstracting or filtering log files like /var/log/messages
  • this happens "realtime" i.e. instantly
Cons:
  • the results go to you (or a whole crowd) via e-mail – isn't that a little excessive regarding the resources necessary to look at the output?
  • if you already have a utility in place that filters daily chunks of log files, this is …
Ideas / inspirations taken from here:
  • the output could actually go to another log file, that you may want to "logtail" or "tail -f" as to your needs

Friday, March 8, 2013

AIX has a limit for cronjobs concurrently being run

Sounds reasonable in a way. But it's rather bad if you are not aware of it and your corporate OS Monitoring Services do not remind you of critical entries in the respective cron log file. They may say something like:
there is a crontab entry that did not get executed, because we exceeded the maximum number of … – maybe you want to fix something here
I suppose similar limits exist in every UNIX flavour.
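If I recall correctly, on AIX (and other System V cron descendants) this limit is configured in /var/adm/cron/queuedefs. The following is a hypothetical example – the numbers are made up, so check your system's documentation before touching the file:

```
# /var/adm/cron/queuedefs -- hypothetical example
# c queue (crontab jobs): at most 4 concurrent jobs (4j),
# nice value 2 (2n), retry after 60 seconds when the limit is hit (60w)
c.4j2n60w
```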

Thursday, November 8, 2012

PPT = Perl Power Tools: "Unix Reconstruction Project"


Welcome to the Unix Reconstruction Project.
Our goal is quite simply to reimplement the classic Unix command set in pure Perl, and to have as much fun as we can doing so.

I would really love to have a few more utilities in that collection.
Yes, the GNU coreutils do certain things much, much better – but you can install PPT privately and with a rather small footprint, which makes it rather suitable in certain corporate environments.

Just a few goodies:
  • tcgrep lets you traverse directory trees – more or less like ack (see above!)
  • tail lets you follow multiple files simultaneously
Parts of the collection with a separate life outside the collection:
A few utilities that are not formally part of this collection: