Bash help requested: Capturing command errors within pipes

Cameron Simpson cs at zip.com.au
Sun Mar 22 02:41:05 UTC 2009


On 21Mar2009 16:47, Daniel B. Thurman <dant at cdkkt.com> wrote:
>> =>> out=$(grep "$pat" "${TRACKER}" | \
>> =>>          eval "$rex" | sort -n | \
>> =>>          uniq >> "${TFILE}"); ret="$?";
[...]
>> =>  if out=$(grep "$pat" "$tracker" | $rex | sort -un >>"$tfile")
>>   
> Please note:
>
> When I tried `sort -un', the data was truncated, i.e.
> there is data loss.  So, when I went back to my original
> code using 'sort -n | uniq',  there is no data loss.  There
> seems to be a problem using the `sort -un' method.

Well, they do mean slightly different things.

"sort -un" sorts and returns the first row of each set of rows that
sorted equal. (i.e. "1 foo" and "1 bah" sort equal (numeric) and only "1
foo" is returned. (See "man sort" for the details, and "man 1p sort" for
what you may portably expect on multiple UNIX platforms.)

"uniq" discards repeated identical lines. "1 foo" and "1 foo" are
identical, but not "1 bah". (And uniq requires sorted input; the
repeated lines must be adjacent in the input.)

It is often correct to replace "sort -n | uniq" with "sort -un", but I was
clearly wrong to do so here.
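A tiny demonstration of the difference, with made-up data:

```shell
# Three lines that are all numerically equal on the first field,
# but two of them differ in their text.
printf '1 foo\n1 bah\n1 foo\n' | sort -n | uniq
# -> two lines survive: the identical "1 foo" pair collapses,
#    but "1 bah" is kept because it is not identical.
printf '1 foo\n1 bah\n1 foo\n' | sort -un
# -> only one line survives: all three rows sort equal numerically,
#    so -u keeps just the first of the equal run.
```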

> What I do in my code, is to create a copy of the sorted
> and uniq'd original file to a temp file, and then append
> new data to the temp file, then sorted and uniq the temp
> file back into the original file. The result was a file that
> ended up much smaller than the original file!

Yah, see discussion above.
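For the record, a sketch of that update cycle using "sort -n | uniq"
(filenames and sample data here are hypothetical, not from your script),
which keeps lines that sort equal numerically but differ in their text:

```shell
tracker=tracker.txt
printf '1 foo\n2 bar\n' > "$tracker"    # existing sorted data
tfile=$(mktemp) || exit 1
cp "$tracker" "$tfile"                  # work on a copy of the original
printf '1 bah\n' >> "$tfile"            # append the new record
sort -n "$tfile" | uniq > "$tracker"    # merge back; no equal-key rows lost
rm -f "$tfile"
```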

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

The govt MUST regulate the Net NOW! We can't have average people saying
what's on their minds!  - ezwriter at netcom.com
