Dear All
Taking a text file, how can one eliminate at once all superfluous blank lines?
Thanks in advance,
Paul
On Wed, 2005-11-30 at 11:00 +0000, Paul Smith wrote:
Taking a text file, how can one eliminate at once all superfluous blank lines?
I used to know of something that did that (something with various different reformatting options for massaging text files), but I can't think what it was. Quickly looking at the man file for the cat program, you could do something like:
cat --squeeze-blank inputfilename > outputfilename
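As a quick sanity check of the squeeze behaviour (a sketch; the file names are placeholders):

```shell
# Create a throwaway file with a run of blank lines, then squeeze it.
printf 'word1\n\n\n\nword2\n' > demo.txt
cat --squeeze-blank demo.txt    # GNU long form of: cat -s demo.txt
rm -f demo.txt
```

The run of three blank lines collapses to a single blank line. Note that `--squeeze-blank` is the GNU coreutils long option; on other systems use `cat -s`.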
On 11/30/05, Tim ignored_mailbox@yahoo.com.au wrote:
Taking a text file, how can one eliminate at once all superfluous blank lines?
I used to know of something that did that (something with various different reformatting options for massaging text files), but I can't think what it was. Quickly looking at the man file for the cat program, you could do something like:
cat --squeeze-blank inputfilename > outputfilename
Thanks, Tim and Paul. Mysteriously, Paul's method does not work:
$ more file1.txt
word1

word2

word3
$ more -s file1.txt > file2.txt
$ more file2.txt
word1

word2

word3
$
Tim's way works only partially, i.e., many blank lines are indeed erased, but some remain. I suspect that the remaining blank lines are not truly blank, although they look blank. Can one go further and delete these remaining "false" blank lines?
Paul
On Wed, 2005-11-30 at 12:42 +0000, Paul Smith wrote:
On 11/30/05, Tim ignored_mailbox@yahoo.com.au wrote:
$ more file1.txt
word1

word2

word3
$ more -s file1.txt > file2.txt
$ more file2.txt
word1

word2

word3
$
Tim's way works only partially, i.e., many blank lines are indeed erased, but some remain. I suspect that the remaining blank lines are not truly blank, although they look blank. Can one go further and delete these remaining "false" blank lines?
I did it this way:
cat file1 | grep -v "^[ ]*$"
supposing some lines could be completely empty (regex /^$/) or contain only spaces (/[ ]*/ matches any number of spaces)
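As written, the class `[ ]*` matches only spaces; a sketch of a variant using the POSIX class `[[:space:]]`, which also covers tabs (file name is a placeholder):

```shell
# Drop every line that is empty or contains nothing but whitespace.
printf 'word1\n\n   \nword2\n' > demo.txt
grep -v '^[[:space:]]*$' demo.txt
rm -f demo.txt
```

Both the empty line and the spaces-only line are removed, leaving only `word1` and `word2`.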
-- Rodolfo Alcazar - rodolfo.alcazar@padep.org.bo Netzmanager Padep, GTZ 591-70656800, -22417628, LA PAZ, BOLIVIA http://otbits.blogspot.com -- Murphy's Law of Thermodynamics: Things get worse under pressure.
Tim:
I used to know of something that did that (something with various different reformatting options for massaging text files), but I can't think what it was. Quickly looking at the man file for the cat program, you could do something like:
cat --squeeze-blank inputfilename > outputfilename
Paul Smith:
Thanks, Tim and Paul. Mysteriously, Paul's method does not work:
$ more file1.txt
word1

word2

word3
$ more -s file1.txt > file2.txt
$ more file2.txt
word1

word2

word3
Hmm, seems to work for me. Both with the "less" and "more" programs, as well as the "cat" program.
Tim's way works only partially, i.e., many blank lines are indeed erased, but some remain. I suspect that the remaining blank lines are not truly blank, although they look blank. Can one go further and delete these remaining "false" blank lines?
In what way do they remain? Can you provide an actual example? (Rather than an explanation of what's happening.)
What I see is that all consecutive blank lines are replaced by a single blank line, on the file I tried it with.
e.g. Tested on /etc/selinux/targeted/contexts/files/file_contexts
If you want to remove all blank lines, then perhaps you could use grep.
On 11/30/05, Tim ignored_mailbox@yahoo.com.au wrote:
Tim:
I used to know of something that did that (something with various different reformatting options for massaging text files), but I can't think what it was. Quickly looking at the man file for the cat program, you could do something like:
cat --squeeze-blank inputfilename > outputfilename
Paul Smith:
Thanks, Tim and Paul. Mysteriously, Paul's method does not work:
$ more file1.txt
word1

word2

word3
$ more -s file1.txt > file2.txt
$ more file2.txt
word1

word2

word3
Hmm, seems to work for me. Both with the "less" and "more" programs, as well as the "cat" program.
Tim's way works only partially, i.e., many blank lines are indeed erased, but some remain. I suspect that the remaining blank lines are not truly blank, although they look blank. Can one go further and delete these remaining "false" blank lines?
In what way do they remain? Can you provide an actual example? (Rather than an explanation of what's happening.)
What I see is that all consecutive blank lines are replaced by a single blank line, on the file I tried it with.
e.g. Tested on /etc/selinux/targeted/contexts/files/file_contexts
If you want to remove all blank lines, then perhaps you could use grep.
Thanks, Tim. Rodolfo's technique works fine for me. It trimmed about 40Kb from an HTML document produced by NVU. I do not know why, but NVU seems to add blocks of blank lines, drastically increasing the size of the document.
Paul
On Wed, 2005-11-30 at 15:01 +0000, Paul Smith wrote:
Rodolfo's technique works fine for me. It trimmed about 40Kb from an HTML document produced by NVU. I do not know why, but NVU seems to add blocks of blank lines, drastically increasing the size of the document.
NVU adds 40Kb to a file just from blank lines? 40,000 blank lines? How big's the document, overall?
Personally, I use tidy on HTML files. Though you have to use it with some care. It'll remove character entities if they're ANYWHERE on the page in or *after* a PRE element. And mangles some other character entities, too (e.g. × can get translated into garbage). If I know I haven't used them in a document, I'll use tidy on it. It also tidies up a few silly authoring errors (like not closing p tags, etc.).
On 11/30/05, Tim ignored_mailbox@yahoo.com.au wrote:
Rodolfo's technique works fine for me. It trimmed about 40Kb from an HTML document produced by NVU. I do not know why, but NVU seems to add blocks of blank lines, drastically increasing the size of the document.
NVU adds 40Kb to a file just from blank lines? 40,000 blank lines? How big's the document, overall?
The original size of the document was 220Kb, and now (without the blank lines) it is 180Kb. I think NVU does not insert all blank lines at once; rather, as one updates the HTML document, the number of superfluous blank lines increases. I think I will use Quanta instead in the future.
Paul
Paul Smith wrote:
On 11/30/05, Tim ignored_mailbox@yahoo.com.au wrote:
Taking a text file, how can one eliminate at once all superfluous blank lines?
I used to know of something that did that (something with various different reformatting options for massaging text files), but I can't think what it was. Quickly looking at the man file for the cat program, you could do something like:
cat --squeeze-blank inputfilename > outputfilename
Thanks, Tim and Paul. Mysteriously, Paul's method does not work:
$ more file1.txt
word1

word2

word3
$ more -s file1.txt > file2.txt
$ more file2.txt
word1

word2

word3
$
Tim's way works only partially, i.e., many blank lines are indeed erased, but some remain. I suspect that the remaining blank lines are not truly blank, although they look blank. Can one go further and delete these remaining "false" blank lines?
Paul
man tr

You may have some tabs. You might also need to remove consecutive blanks, in which case see man sed and maybe man regex.
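A sketch of how to confirm that suspicion: GNU `cat -A` makes tabs (`^I`) and line ends (`$`) visible, and shows why `cat -s` leaves a whitespace-bearing "blank" line alone (file name is a placeholder):

```shell
# The third line holds a single tab: it looks blank but is not.
printf 'word1\n\n\t\n\nword2\n' > demo.txt
cat -A demo.txt              # the tab-only line is displayed as ^I$
cat -s demo.txt | wc -l      # still 5 lines: the tab line blocks squeezing
rm -f demo.txt
```

Because the tab-only line is not empty, the two runs of blank lines around it never become consecutive, so `cat -s` has nothing to squeeze.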
On Wed, 2005-11-30 at 06:42, Paul Smith wrote:
cat --squeeze-blank inputfilename -> outputfilename
Thanks, Tim and Paul. Mysteriously, Paul's method does not work:
$ more file1.txt
word1

word2

word3
$ more -s file1.txt > file2.txt
$ more file2.txt
word1

word2

word3
$
Tim's way works only partially, i.e., many blank lines are indeed erased, but some remain. I suspect that the remaining blank lines are not truly blank, although they look blank. Can one go further and delete these remaining "false" blank lines?
In vi:

:%s/^[ ]*$//

That says: for the range of all lines (%), substitute any number of white-space characters (there's a space and a tab inside the []'s) filling the line from the beginning (^) to the end ($) with nothing. If you don't like the results, hit 'u' (undo). Then:

1G!Gcat -s

which says: filter the range from the first line through the last line through the command cat -s and replace the buffer with the results. Again, if you don't like the results, hit 'u'. Repeat until you get it right.
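The same two-step recipe can be scripted outside vi (a sketch; `file1.txt` and `file2.txt` are placeholders): sed plays the part of the `:%s` substitution and `cat -s` the filter step.

```shell
# Step 1: blank out lines that hold only spaces/tabs.
# Step 2: squeeze the resulting runs of empty lines down to one.
printf 'word1\n \n\t\n\nword2\n' > file1.txt
sed 's/^[[:space:]]*$//' file1.txt | cat -s > file2.txt
cat file2.txt
rm -f file1.txt file2.txt
```

Here `[[:space:]]` is the POSIX way of spelling "space or tab" without having to type a literal tab on the command line.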
Paul Smith wrote:
Dear All
Taking a text file, how can one eliminate at once all superfluous blank lines?
What is your definition of a "superfluous blank line"? Is that all blank lines, or all consecutive blank lines where the number is more than one?
If it's the latter, you may find "more -s filename" to your liking.
Paul.
On 11/30/05, Paul Howarth paul@city-fan.org wrote:
Taking a text file, how can one eliminate at once all superfluous blank lines?
What is your definition of a "superfluous blank line"? Is that all blank lines, or all consecutive blank lines where the number is more than one?
If it's the latter, you may find "more -s filename" to your liking.
Thanks, Paul. I mean "all consecutive blank lines where the number is more than one". Furthermore, I mean how to delete those blank lines from the file.
Paul
Paul Smith wrote:
On 11/30/05, Paul Howarth paul@city-fan.org wrote:
Taking a text file, how can one eliminate at once all superfluous blank lines?
What is your definition of a "superfluous blank line"? Is that all blank lines, or all consecutive blank lines where the number is more than one?
If it's the latter, you may find "more -s filename" to your liking.
Thanks, Paul. I mean "all consecutive blank lines where the number is more than one". Furthermore, I mean how to delete those blank lines from the file.
$ more -s file > file.new
$ mv file.new file
Paul
On Wed, 30 Nov 2005, Paul Howarth wrote:
Paul Smith wrote:
On 11/30/05, Paul Howarth paul@city-fan.org wrote:
Taking a text file, how can one eliminate at once all superfluous blank lines?
What is your definition of a "superfluous blank line"? Is that all blank lines, or all consecutive blank lines where the number is more than one?
If it's the latter, you may find "more -s filename" to your liking.
Thanks, Paul. I mean "all consecutive blank lines where the number is more than one". Furthermore, I mean how to delete those blank lines from the file.
$ more -s file > file.new
$ mv file.new file
Paul
Or better, "cat -s file > file.new" and avoid having to page through the file.
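A sketch of that two-step in-place replacement with cat -s (placeholder file name):

```shell
# Squeeze blank runs into a new file, then replace the original.
printf 'x\n\n\n\ny\n' > file
cat -s file > file.new
mv file.new file
cat file
rm -f file
```

After the `mv`, `file` contains `x`, one blank line, and `y`.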
Matthew Saltzman wrote:
On Wed, 30 Nov 2005, Paul Howarth wrote:
Paul Smith wrote:
On 11/30/05, Paul Howarth paul@city-fan.org wrote:
Taking a text file, how can one eliminate at once all superfluous blank lines?
What is your definition of a "superfluous blank line"? Is that all blank lines, or all consecutive blank lines where the number is more than one?
If it's the latter, you may find "more -s filename" to your liking.
Thanks, Paul. I mean "all consecutive blank lines where the number is more than one". Furthermore, I mean how to delete those blank lines from the file.
$ more -s file > file.new
$ mv file.new file
Paul
Or better, "cat -s file > file.new" and avoid having to page through the file.
You only have to page through a file when output is a terminal (not in this case) IIRC.
Paul.
On Wed, 30 Nov 2005, Paul Howarth wrote:
Matthew Saltzman wrote:
On Wed, 30 Nov 2005, Paul Howarth wrote:
$ more -s file > file.new
$ mv file.new file
Paul
Or better, "cat -s file > file.new" and avoid having to page through the file.
You only have to page through a file when output is a terminal (not in this case) IIRC.
Ah, makes sense. What I get for posting before coffee.
Paul Smith wrote:
Dear All
Taking a text file, how can one eliminate at once all superfluous blank lines?
Thanks in advance,
Paul
man cat
man grep
On Wed, 2005-11-30 at 23:44 +0800, John Summerfied wrote:
Paul Smith wrote:
Dear All
Taking a text file, how can one eliminate at once all superfluous blank lines?
Thanks in advance,
Paul
man cat
man grep
--
Cheers John
-- spambait 1aaaaaaa@computerdatasafe.com.au Z1aaaaaaa@computerdatasafe.com.au Tourist pics http://portgeographe.environmentaldisasters.cds.merseine.nu/
do not reply off-list
sed -e 's;^\s*$;;' file-to-clean | grep -v '^$'
--On Thursday, December 01, 2005 1:38 PM -0700 Guy Fraser guy@incentre.net wrote:
sed -e 's;^\s*$;;' file-to-clean | grep -v '^$'
Why not use egrep to do it in one pass? Something like:
egrep -v '^\s*$' file-to-clean
On Thu, 2005-12-01 at 16:18, Kenneth Porter wrote:
--On Thursday, December 01, 2005 1:38 PM -0700 Guy Fraser guy@incentre.net wrote:
sed -e 's;^\s*$;;' file-to-clean | grep -v '^$'
Why not use egrep to do it in one pass? Something like:
egrep -v '^\s*$' file-to-clean
Neither of these leaves any blank lines. The idea was to collapse repeated blank lines, or lines containing only white space, to one.
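A sketch of the contrast Les is pointing at (placeholder file name; `[[:space:]]` is used as a portable spelling of "space or tab"):

```shell
printf 'a\n\n \n\nb\n' > demo.txt
# Removing EVERY blank/whitespace-only line (what the grep-style recipes do):
grep -Ev '^[[:space:]]*$' demo.txt
# Collapsing each run of such lines to ONE blank line (the stated goal):
sed 's/^[[:space:]]*$//' demo.txt | cat -s
rm -f demo.txt
```

The first command leaves no blank line between `a` and `b`; the second keeps exactly one.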
On 12/1/05, Les Mikesell lesmikesell@gmail.com wrote:
sed -e 's;^\s*$;;' file-to-clean | grep -v '^$'
Why not use egrep to do it in one pass? Something like:
egrep -v '^\s*$' file-to-clean
Neither of these leaves any blank lines. The idea was to collapse repeated blank lines, or lines containing only white space, to one.
So many ways of solving my problem show that this list is quite creative!
Paul
Paul Smith wrote:
On 12/1/05, Les Mikesell lesmikesell@gmail.com wrote:
sed -e 's;^\s*$;;' file-to-clean | grep -v '^$'
Why not use egrep to do it in one pass? Something like:
egrep -v '^\s*$' file-to-clean
Neither of these leaves any blank lines. The idea was to collapse repeated blank lines, or lines containing only white space, to one.
So many ways of solving my problem show that this list is quite creative!
Paul
Do you have gcc installed on your system?
Mike
On 12/1/05, Mike McCarty mike.mccarty@sbcglobal.net wrote:
So many ways of solving my problem show that this list is quite creative!
Do you have gcc installed on your system?
Yes, I do. Should I run
gcc noblank.c
?
(I am not a programmer and I have never used gcc.)
Paul
Paul Smith wrote:
On 12/1/05, Mike McCarty mike.mccarty@sbcglobal.net wrote:
So many ways of solving my problem show that this list is quite creative!
Do you have gcc installed on your system?
Yes, I do. Should I run
gcc noblank.c
?
(I am not a programmer and I have never used gcc.)
Paul
I sent you the source. Extract it to a file named noblank.c. I suspect you have already done that. OK, now build the program...
$ gcc -o noblank noblank.c
After this runs (takes no more than 10 seconds), you will have a program named "noblank" in the current directory. Either move it to a place in your path, or use ./noblank to run it. The usage is:
$ noblank < input_file > output_file
IOW, it runs as a filter, and may be used in a pipe like this:
$ program_producing_output | noblank > processed_file
Or even
$ program_producing_output | noblank | program_consuming_input
Note that I typed it in, and ran a few test cases, but I don't guarantee that it does exactly what you want. If it doesn't, let me know, and we'll work together to make it do exactly what you want.
If this seems like too much work, let me know, and I'll send you a compiled version. I just thought that posting source was better because then you could be assured that you weren't getting malware. Since the source is there for all to see, anyone who spots some evil stuff in it would pipe up.
In fact, I'll send you a compiled version under separate cover, which if you wish, you may use.
I don't warranty it, except to claim that it is indeed the compiled version of what I sent you.
HTH
Mike
On 12/2/05, Mike McCarty mike.mccarty@sbcglobal.net wrote:
I sent you the source. Extract it to a file named noblank.c. I suspect you have already done that. OK, now build the program...
$ gcc -o noblank noblank.c
After this runs (takes no more than 10 seconds), you will have a program named "noblank" in the current directory. Either move it to a place in your path, or use ./noblank to run it. The usage is:
$ noblank < input_file > output_file
I kindly thank you, Mike, for your program. I have tried that, but when I run
$ ./noblank file1.txt file2.txt
the program does not terminate, as if waiting for something.
Paul
On Sat, 2005-12-03 at 17:54 +0000, Paul Smith wrote:
On 12/2/05, Mike McCarty mike.mccarty@sbcglobal.net wrote:
I sent you the source. Extract it to a file named noblank.c. I suspect you have already done that. OK, now build the program...
$ gcc -o noblank noblank.c
After this runs (takes no more than 10 seconds), you will have a program named "noblank" in the current directory. Either move it to a place in your path, or use ./noblank to run it. The usage is:
$ noblank < input_file > output_file
I kindly thank you, Mike, for your program. I have tried that, but when I run
$ ./noblank file1.txt file2.txt
the program does not terminate, as if waiting for something.
You have to redirect input and output with < and > as stated above.
Paul
On 12/3/05, peter kostov fedora@light-bg.com wrote:
I kindly thank you, Mike, for your program. I have tried that, but when I run
$ ./noblank file1.txt file2.txt
the program does not terminate, as if waiting for something.
You have to redirect input and output with < and > as stated above.
Thanks, Peter. That is it. Now, it works very fine!
Paul
On Sat, 2005-12-03 at 11:54, Paul Smith wrote:
After this runs (takes no more than 10 seconds), you will have a program named "noblank" in the current directory. Either move it to a place in your path, or use ./noblank to run it. The usage is:
$ noblank < input_file > output_file
I kindly thank you, Mike, for your program. I have tried that, but when I run
$ ./noblank file1.txt file2.txt
the program does not terminate, as if waiting for something.
OK, I looked up the holding space syntax for sed. This command will do it in one pass:
sed -e '/^\s*$/{
N
/^\s*\n\s*$/D
}' input_file > output_file
The first line matches, from the beginning of a line (^), any amount of white space (\s), which could also be written as [ ] with a space and a tab typed between the character-class brackets, though it is harder to type a tab on the shell command line. The repetition represented by * can be 0 or more characters, up to the end of the line ($). When a match is found, the N command appends the next line to the pattern space, where the match is tried again allowing for the embedded newline (\n) from the previous line; if it succeeds, the D command deletes up through the embedded newline.
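A quick check of the one-pass command (a sketch; GNU sed, since `\s` is a GNU extension):

```shell
# Input mixes truly empty lines with space-only and tab-only lines.
printf 'word1\n\n \n\t\n\nword2\n' > demo.txt
sed -e '/^\s*$/{
N
/^\s*\n\s*$/D
}' demo.txt
rm -f demo.txt
```

The run of four blank-looking lines collapses to a single blank line between `word1` and `word2`.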
If you expect to do it often, you can put the sed command (inside the 's) in a file and execute it with sed -f script_file instead of typing it again.
It's rarely worth the trouble to compile a specialized program for anything that can be done with regular expressions, since the native unix tools are so good at text manipulation. If you can describe the problem in words, you can usually come up with a sequence of steps to transform what you have into what you want with regular expression substitutions.
Les Mikesell wrote:
[snip]
It's rarely worth the trouble to compile a specialized program for anything that can be done with regular expressions, since the native unix tools are so good at text manipulation. If you can describe the problem
For me, it's rarely worth the trouble to read the man pages and try to figure out how the weird syntax of sed works all over again, when in 5 minutes or so I can have a specialized program working. In fact, for *really* simple stuff, I just use a keyboard macro for my favorite text editor (not vi, and not emacs) and tell it to repeat the edit.
in words, you can usually come up with a sequence of steps to transform what you have into what you want with regular expression substitutions.
IOW, regular expressions are (1) complicated (2) work differently with every tool even with the same OS, (3) usually take three or four iterations to get right, and (4) not portable to other OS, whereas a little C program (1) is simple to write, (2) has the same syntax on all systems, and (3) is portable even to my non-hosted environment on my little MC68HC11 machine running no OS at all.
Mike
On Mon, 2005-12-05 at 02:09, Mike McCarty wrote:
It's rarely worth the trouble to compile a specialized program for anything that can be done with regular expressions, since the native unix tools are so good at text manipulation. If you can describe the problem
For me, it's rarely worth the trouble to read the man pages and try to figure out how the weird syntax of sed works all over again, when in 5 minutes or so I can have a specialized program working.
I agree that the holding space notion is weird. However pipelines are not. I would have used the 2 step solution myself and been done in one minute without cluttering my system with bits of some specialized program I'll never use again.
In fact, for *really* simple stuff, I just use a keyboard macro for my favorite text editor (not vi, and not emacs) and tell it to repeat the edit.
Note that vi uses essentially the same syntax as sed and grep, so you only have to learn it once, and in vi you can practice interactively, using 'u' to undo a mistake. And the motion syntax of vi is approximately the same as that of 'less', so you also learn to search and page through files.
in words, you can usually come up with a sequence of steps to transform what you have into what you want with regular expression substitutions.
IOW, regular expressions are (1) complicated (2) work differently with every tool even with the same OS,
Partly true, but the differences are small and predictable. If you stick to the original form used in 'ed', they work the same everywhere, and the \char shorthands for character classes are usually either all there or all missing. The differences evolved over 30 years or so in discrete steps. They aren't just random changes. And if you use ed, you can include backward motions in your edit script syntax, but nobody does that anymore...
(3) usually take three or four iterations to get right,
Which is why doing them in vi is a good start because you see the result without extra steps and a simple 'u'ndo puts it back regardless of the complexity of the change.
and (4) not portable to other OS,
The free Cygwin tools bring all the useful parts to windows. OSX sensibly already has them. What else do you care about? And why use a unix-like system if you don't want to take advantage of its toolset?
whereas a little C program (1) is simple to write, (2) has the same syntax on all systems, and (3) is portable even to my non-hosted environment on my little MC68HC11 machine running no OS at all.
Many OS's don't come with a C compiler, and even on the ones that do, the easiest way to do a lot of text transformations is to use the regular expression library routines.
Les Mikesell wrote:
On Mon, 2005-12-05 at 02:09, Mike McCarty wrote:
It's rarely worth the trouble to compile a specialized program for anything that can be done with regular expressions, since the native unix tools are so good at text manipulation. If you can describe the problem
For me, it's rarely worth the trouble to read the man pages and try to figure out how the weird syntax of sed works all over again, when in 5 minutes or so I can have a specialized program working.
I agree that the holding space notion is weird. However
[snip]
My point (which perhaps got buried in the noise) is that there are different strokes for different folks.
[snip]
The free Cygwin tools bring all the useful parts to windows. OSX sensibly already has them. What else do you care about? And why use a unix-like system if you don't want to take advantage of its toolset?
I've used Cygwin for about one day, and took it back off my machine. When I'm not using Linux, I normally use DOS, not Windows. I found a couple of undesirable interactions between Cygwin and Windows XP, and anyway, as I said, I normally use DOS, not any version of Windows when I'm not using Linux.
I don't use Linux because it has a "powerful toolset". If I wanted that, I'd prefer DEC VMS, where the commands at least make sense and have the same syntax everywhere.
whereas a little C program (1) is simple to write, (2) has the same syntax on all systems, and (3) is portable even to my non-hosted environment on my little MC68HC11 machine running no OS at all.
Many OS's don't come with a C compiler, and even on the ones that do, the easiest way to do a lot of text transformations is to use the regular expression library routines.
Many machines don't come with an OS, but I can port my compiler anywhere.
I don't do a lot of text transformations. In fact, I hardly ever do text transformations. I can't recall the last one I did (other than the little one-off I did for the OP). Mostly, I read e-mail, and browse the web, and edit source code for programs. I installed Linux on my machine because I got a contract in October of 2004, and was requested to use it by the guy who hired me.
OTOH, I've been using *NIX like systems since 1985 or so, and am comfortable with the development environment.
But, as I said, different strokes for different folks. I know quite a few people who have as their first reaction to anything a script, others think awk always fits, and others like perl.
Some prefer C, since it goes anywhere, even where *NIX systems do not. Like very small embedded systems.
Mike
On Mon, 2005-12-05 at 13:39, Mike McCarty wrote:
But, as I said, different strokes for different folks. I know quite a few people who have as their first reaction to anything a script, others think awk always fits, and others like perl.
Some prefer C, since it goes anywhere, even where *NIX systems do not. Like very small embedded systems.
Agreed... But the perspective for my choice is that I spent a few days perhaps 20 years ago learning regular expressions and shell syntax (pipes, redirection, variable substitution, etc.) and those things have saved me time nearly every working day since, with a few new features to learn showing up every 5 years or so. I spent several months around the same time learning C, had to relearn a lot between K&R and ANSI, and haven't used it much since machines got fast enough to start perl before I lifted my finger off the <enter> key (but it is still handy to know).
And in contrast I can't think of much of anything reusable I've learned about GUI procedures. It's like starting from scratch with every new program and context.
Paul Smith wrote:
So many ways of solving my problem show that this list is quite creative!
Actually, it shows something more fundamental. It shows (one reason) why Unix is so powerful.
Both Linux and HTML came from the Unix tradition of text-based input and output wherever possible, and *good* tools for manipulating text. That files, input and output are text means that we can understand them fairly easily. That we have good tools to manipulate text means we can take text-based output, massage it, and use it as input. And this is incredibly powerful yet still pretty straightforward.
But this is not an accident. It's the way the system is supposed to work. It means *you* can control and work with the facilities on your computer, and script them how you want.
James.
James Wilkinson wrote:
Paul Smith wrote:
So many ways of solving my problem show that this list is quite creative!
Actually, it shows something more fundamental. It shows (one reason) why Unix is so powerful.
[snip]
Except that none of the "solutions" actually did what he wanted. I'm not a scripting expert, but I suspect that a script could be developed to do what he wants. I sent him source for a C program which I believe does exactly what he wants.
Limitation: No line over 1022 characters in length (plus a newline for 1023 if you count that).
Paul: If you have input lines longer than 1022 characters (not counting the newline at the end) then you'll have to modify that program. There is a line there
#define LINE_SIZE 1024
Change the number 1024 to be (longest_length_I_need + 2). So, for example, if you need 2176 characters in your longest line, you would make the number at least 2178. Larger is ok. Make it huge and the program will grow to be rather large, so don't make it millions of bytes, ok? If you really need *very* long lines, then I can help you there, too.
Mike
On Fri, 2005-12-02 at 14:06, Mike McCarty wrote:
Actually, it shows something more fundamental. It shows (one reason) why Unix is so powerful.
[snip]
Except that none of the "solutions" actually did what he wanted.
There were several, including my 2 vi commands that did it. It can be done in sed or ed scripts, but I'm too lazy to look up the holding-space syntax when 'cat -s' works after converting lines with only white space to empty lines.
On Thursday, Dec 1st 2005 at 23:30 -0000, quoth Paul Smith:
=>So many ways of solving my problem show that this list is quite creative!
=>Paul
And why is it pray tell that no one thought to use *THE* most efficient of all systems for this horribly CPU intensive task? I refer you all to flex of course. As long as you're using deterministic regular expressions then this is the best there is.
Here's foo.l
/* Note that the square brackets contain space tab */
%%%%%%%%%%%%%%%%%%START OF foo.l%%%%%%%%%%%%%%%%%%%
%%
[\n ]		putchar('\n');
[\n ]{2,}	puts ("\n\n");
.*		printf ( "%s", yytext );
%%
#include <stdio.h>
%%%%%%%%%%%%%%%%%%END OF foo.l%%%%%%%%%%%%%%%%%%%
flex foo.l
gcc -O -o foo lex.yy.c -ll
./foo < input_file