NAME
lex - fast lexical analyzer generator

SYNOPSIS
lex [ -bcdfinpstvFILT8 ] [ -C[efmF] ] [ -Sskeleton ] [ file ... ]

DESCRIPTION
Lex is a tool for generating scanners: programs which recognize
lexical patterns in text. Lex reads the given input files, or its
standard input if no file names are given, for a description of a
scanner to generate. The description is in the form of pairs of
regular expressions and C code, called rules. Lex generates as
output a C source file, lex.yy.c, which defines a routine yylex().
This file is compiled and linked with the -lln library to produce
an executable. When the executable is run, it analyzes its input
for occurrences of the regular expressions. Whenever it finds one,
it executes the corresponding C code.
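
For example, the following is a complete, if minimal, lex input;
the rules, the file name scan.l, and the build line below are
illustrative sketches rather than anything fixed by lex:

%%
[0-9]+      printf( "saw an integer: %s\n", yytext );
.|\n        ECHO;

Running "lex scan.l" writes lex.yy.c, and "cc lex.yy.c -o scan -lln"
builds a scanner which copies its input to its output while
announcing each run of digits (this assumes the -lln library
supplies a default main() that calls yylex(), as such libraries
conventionally do).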

For full documentation, see Lexdoc. This manual entry is intended
for use as a quick reference.

OPTIONS
Lex has the following options:

-b Generate backtracking information to lex.backtrack. This is a
list of scanner states which require backtracking and the
input characters on which they do so. By adding rules one can
remove backtracking states. If all backtracking states are
eliminated and -f or -F is used, the generated scanner will
run faster.

-c is a do-nothing, deprecated option included for POSIX
compliance.

NOTE: in previous releases of Lex -c specified table-
compression options. This functionality is now given by the
-C flag. To ease the impact of this change, when lex
encounters -c, it currently issues a warning message and
assumes that -C was desired instead. In the future this
"promotion" of -c to -C will go away in the name of full POSIX
compliance (unless the POSIX meaning is removed first).

-d Makes the generated scanner run in debug mode. Whenever a
pattern is recognized and the global yy_lex_debug is non-zero
(which is the default), the scanner will write to stderr a
line of the form:

--accepting rule at line 53 ("the matched text")
The line number refers to the location of the rule in the file
defining the scanner (i.e., the file that was fed to lex).
Messages are also generated when the scanner backtracks,
accepts the default rule, reaches the end of its input buffer
(or encounters a NUL; the two look the same as far as the
scanner’s concerned), or reaches an end-of-file.

-f Specifies (take your pick) full table or fast scanner. No
table compression is done. The result is large but fast.
This option is equivalent to -Cf (see below).

-i Instructs lex to generate a case-insensitive scanner. The
case of letters given in the lex input patterns will be
ignored, and tokens in the input will be matched regardless of
case. The matched text given in yytext will have the
preserved case (i.e., it will not be folded).

-n Is another do-nothing, deprecated option included only for
POSIX compliance.

-p Generates a performance report to stderr. The report consists
of comments regarding features of the lex input file which
will cause a loss of performance in the resulting scanner.

-s Causes the default rule (that unmatched scanner input is
echoed to stdout) to be suppressed. If the scanner encounters
input that does not match any of its rules, it aborts with an
error.

-t Instructs lex to write the scanner it generates to stdout
instead of lex.yy.c.

-v Specifies that lex should write to stderr a summary of
statistics regarding the scanner it generates.

-F Specifies that the fast scanner table representation should be
used. This representation is about as fast as the full table
representation (-f), and for some sets of patterns will be
considerably smaller (and for others, larger). See Lexdoc for
details.

This option is equivalent to -CF (see below).

-I Instructs lex to generate an interactive scanner, that is, a
scanner which stops immediately rather than looking ahead if
it knows that the currently scanned text cannot be part of a
longer rule’s match. Again, see Lexdoc for details.

Note, -I cannot be used in conjunction with full or fast
tables, i.e., the -f, -F, -Cf, or -CF flags.

-L Instructs lex not to generate #line directives in lex.yy.c.
The default is to generate such directives so error messages
in the actions will be correctly located with respect to the
original lex input file, and not to the fairly meaningless
line numbers of lex.yy.c.

-T Makes lex run in trace mode. It will generate a lot of
messages to stdout concerning the form of the input and the
resultant non-deterministic and deterministic finite automata.
This option is mostly for use in maintaining lex.

-8 Instructs lex to generate an 8-bit scanner. On some sites,
this is the default. On others, the default is 7-bit
characters. To see which is the case, check the verbose (-v)
output for "equivalence classes created". If the denominator
of the number shown is 128, then by default lex is generating
7-bit characters. If it is 256, then the default is 8-bit
characters.

-C[efmF]
Controls the degree of table compression. The default setting
is -Cem.

-C A lone -C specifies that the scanner tables should be
compressed but neither equivalence classes nor meta-
equivalence classes should be used.

-Ce Directs lex to construct equivalence classes, i.e., sets
of characters which have identical lexical properties.
Equivalence classes usually give dramatic reductions in
the final table/object file sizes (typically a factor of
2-5) and are pretty cheap performance-wise (one array
look-up per character scanned).

-Cf Specifies that the full scanner tables should be
generated - lex should not compress the tables by taking
advantage of similar transition functions for different
states.

-CF Specifies that the alternate fast scanner representation
(described in Lexdoc) should be used.

-Cm Directs lex to construct meta-equivalence classes, which
are sets of equivalence classes (or characters, if
equivalence classes are not being used) that are commonly
used together. Meta-equivalence classes are often a big
win when using compressed tables, but they have a
moderate performance impact (one or two "if" tests and
one array look-up per character scanned).

-Cem (Default) Generate both equivalence classes and meta-
equivalence classes. This setting provides the highest
degree of table compression.

Faster-executing scanners can be traded off at the cost of
larger tables with the following generally being true:

slowest & smallest
-Cem
-Cm
-Ce
-C
-C{f,F}e
-C{f,F}
fastest & largest
-C options are not cumulative; whenever the flag is
encountered, the previous -C settings are forgotten.

The options -Cf or -CF and -Cm do not make sense together -
there is no opportunity for meta-equivalence classes if the
table is not being compressed. Otherwise the options may be
freely mixed.

-Sskeleton_file
Overrides the default skeleton file from which lex constructs
its scanners. Useful for lex maintenance or development.

SUMMARY OF LEX REGULAR EXPRESSIONS
The patterns in the input are written using an extended set of
regular expressions. These are:

x Match the character ‘x’.

. Any character except newline.

[xyz] A "character class"; in this case, the pattern matches
either an ‘x’, a ‘y’, or a ‘z’.

[abj-oZ] A "character class" with a range in it; matches an ‘a’, a
‘b’, any letter from ‘j’ through ‘o’, or a ‘Z’.

[^A-Z] A "negated character class", i.e., any character but those
in the class. In this case, any character except an
uppercase letter.

[^A-Z\n] Any character except an uppercase letter or a newline.

r* Zero or more r’s, where r is any regular expression.

r+ One or more r’s.

r? Zero or one r’s (that is, "an optional r").

r{2,5} Anywhere from two to five r’s.

r{2,} Two or more r’s.

r{4} Exactly 4 r’s.

{name} The expansion of the "name" definition given in the
definitions section of the lex input (see Lexdoc).

"[xyz]
The literal string: [xyz]"foo.

If X is an ‘a’, ‘b’, ‘f’, ‘n’, ‘r’, ‘t’, or ‘v’, then the
ANSI-C interpretation of Otherwise, a literal ‘X’
(used to escape operators such as ‘*’).

\123 The character with octal value 123.

\x2a The character with hexadecimal value 2a.

(r) Match an r; parentheses are used to override precedence
(see below).

rs The regular expression r followed by the regular
expression s; called "concatenation".

r|s Either an r or an s.

r/s An r but only if it is followed by an s. The s is not
part of the matched text. This type of pattern is called
"trailing context".

^r An r, but only at the beginning of a line.

r$ An r, but only at the end of a line. Equivalent to
"r/\n".

<s>r An r, but only in start condition s (see below for
discussion of start conditions).

<s1,s2,s3>r
Same, but in any of start conditions s1, s2, or s3.

<<EOF>> An end-of-file.

<s1,s2><<EOF>>
An end-of-file when in start condition s1 or s2.

The regular expressions listed above are grouped according to
precedence, from highest precedence at the top to lowest at the
bottom. Those grouped together have equal precedence.
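
As an illustration of how these pieces combine, here is a sketch of
a definitions section and three rules; the names and patterns are
made up for this example:

DIGIT    [0-9]
ID       [a-z][a-z0-9]*
%%
{DIGIT}+"."{DIGIT}*    printf( "a float: %s\n", yytext );
{DIGIT}+               printf( "an integer: %s\n", yytext );
{ID}/"("               printf( "a name used before '(': %s\n", yytext );

The last rule uses trailing context: the "(" must be present for
the rule to match, but it is left in the input to be scanned by a
later match.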

Some notes on patterns:

Negated character classes match newlines unless "\n" (or an
equivalent escape sequence) is one of the characters explicitly
present in the negated character class (e.g., "[^A-Z\n]").

A rule can have at most one instance of trailing context (the ‘/’
operator or the ‘$’ operator). The start condition, ‘^’, and
"<<EOF>>" patterns can only occur at the beginning of a pattern
and, like ‘/’ and ‘$’, cannot be grouped inside parentheses. The
following are all illegal:

foo/bar$
foo(bar$)
foo^bar
<sc1>foo<sc2>bar

SUMMARY OF SPECIAL ACTIONS
In addition to arbitrary C code, the following can appear in
actions:

ECHO Copies yytext to the scanner’s output.

BEGIN Followed by the name of a start condition places the
scanner in the corresponding start condition.
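
For instance, here is a sketch which discards C-style comments
using a start condition; the %x exclusive-condition declaration and
the INITIAL condition name are described in Lexdoc, and COMMENT is
just an illustrative name:

%x COMMENT
%%
"/*"            BEGIN(COMMENT);
<COMMENT>"*/"   BEGIN(INITIAL);   /* return to the normal rules */
<COMMENT>\n     ;
<COMMENT>.      ;                 /* discard the comment text */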

REJECT Directs the scanner to proceed on to the "second best"
rule which matched the input (or a prefix of the input).
yytext and yyleng are set up appropriately. Note that
REJECT is a particularly expensive feature in terms of
scanner performance; if it is used in any of the
scanner’s actions it will slow down all of the scanner’s
matching. Furthermore, REJECT cannot be used with the -f
or -F options.

Note also that unlike the other special actions, REJECT
is a branch; code immediately following it in the action
will not be executed.
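
As an example of what REJECT buys, the following sketch counts
every word and, separately, every occurrence of the (arbitrarily
chosen) word "frob"; without REJECT the first rule would hide
"frob" from the second:

%{
int frob_count = 0, word_count = 0;
%}
%%
frob        ++frob_count; REJECT;
[a-z]+      ++word_count;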

yymore() tells the scanner that the next time it matches a rule,
the corresponding token should be appended onto the
current value of yytext rather than replacing it.

yyless(n) returns all but the first n characters of the current
token back to the input stream, where they will be
rescanned when the scanner looks for the next match.
yytext and yyleng are adjusted appropriately (e.g.,
yyleng will now be equal to n).
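
For example, on the input "foobar" the following sketch writes
"foobarbar": the first rule echoes its whole match and then gives
"bar" back, and the second rule then matches and echoes it:

%%
foobar      ECHO; yyless( 3 );
[a-z]+      ECHO;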

unput(c) puts the character c back onto the input stream. It will
be the next character scanned.

input() reads the next character from the input stream (this
routine is called yyinput() if the scanner is compiled
using C++).

yyterminate()
can be used in lieu of a return statement in an action.
It terminates the scanner and returns a 0 to the
scanner’s caller, indicating "all done".

By default, yyterminate() is also called when an end-of-
file is encountered. It is a macro and may be redefined.

YY_NEW_FILE
is an action available only in <<EOF>> rules. It means
"Okay, I’ve set up a new input file, continue scanning". 9 yy_create_buffer(file,size)
takes a FILE pointer and an integer size. It returns a
YY_BUFFER_STATE handle to a new input buffer large enough
to accommodate size characters and associated with the
given file. When in doubt, use YY_BUF_SIZE for the size.

yy_switch_to_buffer(new_buffer)
switches the scanner’s processing to scan for tokens from
the given buffer, which must be a YY_BUFFER_STATE.

yy_delete_buffer(buffer)
deletes the given buffer.
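
A sketch of how these three routines and YY_CURRENT_BUFFER
(described under VALUES AVAILABLE TO THE USER below) might be
combined to scan one nested file; the "#include" trigger, the file
name, and the single level of nesting are assumptions of this
example:

%{
YY_BUFFER_STATE outer_buffer = 0;
%}
%%
^"#include"\n   {
                /* "other.input" stands in for a real file name */
                FILE *nested = fopen( "other.input", "r" );

                if ( nested )
                    {
                    outer_buffer = YY_CURRENT_BUFFER;
                    yy_switch_to_buffer(
                        yy_create_buffer( nested, YY_BUF_SIZE ) );
                    }
                }
<<EOF>>         {
                if ( outer_buffer )
                    {
                    /* done with the nested file; resume the outer one */
                    yy_delete_buffer( YY_CURRENT_BUFFER );
                    yy_switch_to_buffer( outer_buffer );
                    outer_buffer = 0;
                    }
                else
                    yyterminate();
                }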

VALUES AVAILABLE TO THE USER
char *yytext holds the text of the current token. It may not be
modified.

int yyleng holds the length of the current token. It may not be
modified.

FILE *yyin is the file which by default lex reads from. It may
be redefined but doing so only makes sense before
scanning begins. Changing it in the middle of
scanning will have unexpected results since lex
buffers its input. Once scanning terminates because
an end-of-file has been seen, void yyrestart(FILE
*new_file) may be called to point yyin at the new
input file.
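
For example, a caller which wants to scan two files in succession
might do the following ("second.input" is just an illustrative
name):

FILE *second;

yylex();                        /* scan all of yyin */

second = fopen( "second.input", "r" );
if ( second )
    {
    yyrestart( second );
    yylex();                    /* scan the second file too */
    }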

FILE *yyout is the file to which ECHO actions are done. It can be
reassigned by the user.

YY_CURRENT_BUFFER
returns a YY_BUFFER_STATE handle to the current
buffer.

MACROS THE USER CAN REDEFINE
YY_DECL controls how the scanning routine is declared. By
default, it is "int yylex()", or, if prototypes are being
used, "int yylex(void)". This definition may be changed
by redefining the "YY_DECL" macro. Note that if you give
arguments to the scanning routine using a K&R-style/non-
prototyped function declaration, you must terminate the
definition with a semi-colon (;).
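
For example, the following redefinition (the name, return type, and
arguments are all illustrative) gives the scanning routine the name
lexscan, returning a float and taking two float arguments; note the
terminating semi-colon needed because the arguments are declared
K&R-style:

%{
#define YY_DECL float lexscan( a, b ) float a, b;
%}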

YY_INPUT The nature of how the scanner gets its input can be
controlled by redefining the YY_INPUT macro. YY_INPUT’s
calling sequence is "YY_INPUT(buf,result,max_size)". Its
action is to place up to max_size characters in the
character array buf and return in the integer variable
result either the number of characters read or the
constant YY_NULL (0 on Unix systems) to indicate EOF. The
default YY_INPUT reads from the global file-pointer
"yyin". A sample redefinition of YY_INPUT (in the
definitions section of the input file):

%{
#undef YY_INPUT
#define YY_INPUT(buf,result,max_size) \
    result = ((buf[0] = getchar()) == EOF) ? YY_NULL : 1;
%}
yywrap() When the scanner receives an end-of-file indication from
YY_INPUT, it then checks the yywrap() function. If
yywrap() returns false (zero), then it is assumed that the
function has gone ahead and set up yyin to point to
another input file, and scanning continues. If it returns
true (non-zero), then the scanner terminates, returning 0
to its caller.

The default yywrap() always returns 1. Presently, to
redefine it you must first "#undef yywrap", as it is
currently implemented as a macro. It is likely that
yywrap() will soon be defined to be a function rather than
a macro.
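
A sketch of such a redefinition, which arranges for one additional
input file (with an illustrative name) to be scanned before the
scanner gives up:

%{
#undef yywrap
%}
%%
[a-z]+      ECHO;
%%
int yywrap()
    {
    static int second_done = 0;
    FILE *next;

    if ( ! second_done )
        {
        second_done = 1;
        next = fopen( "second.input", "r" );
        if ( next )
            {
            yyin = next;
            return 0;   /* not done; scanning continues in the new file */
            }
        }

    return 1;           /* no more input; the scanner terminates */
    }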

YY_USER_ACTION
can be redefined to provide an action which is always
executed prior to the matched rule’s action.
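
For instance, to count every match the scanner makes
(rules_matched is a hypothetical variable introduced for this
example):

%{
int rules_matched = 0;
#define YY_USER_ACTION  ++rules_matched;
%}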

YY_USER_INIT
The macro YY_USER_INIT may be redefined to provide an
action which is always executed before the first scan.

YY_BREAK In the generated scanner, the actions are all gathered in
one large switch statement and separated using YY_BREAK,
which may be redefined. By default, it is simply a
"break", to separate each rule’s action from the following
rule’s.

FILES
lex.skel skeleton scanner.
lex.yy.c generated scanner (called lexyy.c on some systems).
lex.backtrack backtracking information for the -b flag (called
lex.bck on some systems).

SEE ALSO
lex(1), yacc(1), sed(1), awk(1).

lexdoc(1).

M. E. Lesk and E. Schmidt, LEX - Lexical Analyzer Generator.

DIAGNOSTICS
reject_used_but_not_detected undefined
or
yymore_used_but_not_detected undefined
These errors can occur at compile time. They indicate that
the scanner uses REJECT or yymore() but that lex failed to
notice the fact, meaning that lex scanned the first two
sections looking for occurrences of these actions and failed
to find any, but somehow you snuck some in via a #include
file, for example. Make an explicit reference to the action
in your lex input file. Note that previously lex supported a
%used/%unused mechanism for dealing with this problem; this
feature is still supported but now deprecated, and will go
away soon unless the author hears from people who can argue
compellingly that they need it.

lex scanner jammed
a scanner compiled with -s has encountered an input string
which wasn’t matched by any of its rules.

lex input buffer overflowed
a scanner rule matched a string long enough to overflow the
scanner’s internal input buffer (16K bytes by default,
controlled by YY_BUF_MAX in lex.skel).

scanner requires -8 flag
Your scanner specification includes recognizing 8-bit
characters and you did not specify the -8 flag and your site
has not installed lex with -8 as the default.

too many %t classes!
You managed to put every single character into its own %t
class. Lex requires that at least one of the classes share
characters.

HISTORY
A lex appeared in Version 6 AT&T UNIX. The version this man page
describes is derived from code contributed by Vern Paxson.

AUTHOR
Vern Paxson, with the help of many ideas and much inspiration from
Van Jacobson. Original version by Jef Poskanzer.

See Lexdoc for additional credits and the address to send comments
to.

BUGS
Some trailing context patterns cannot be properly matched and
generate warning messages ("Dangerous trailing context"). These
are patterns where the ending of the first part of the rule matches
the beginning of the second part, such as "zx*/xy*", where the ‘x*’
matches the ‘x’ at the beginning of the trailing context. (Note
that the POSIX draft states that the text matched by such patterns
is undefined.)

For some trailing context rules, parts which are actually fixed-
length are not recognized as such, leading to the abovementioned
performance loss. In particular, parts using ‘|’ or {n} (such as
"foo{3}") are always considered variable-length.

Combining trailing context with the special ‘|’ action can result
in fixed trailing context being turned into the more expensive
variable trailing context. This happens in the following example:

%%
abc |
xyz/def
Use of unput() invalidates yytext and yyleng.

Use of unput() to push back more text than was matched can result
in the pushed-back text matching a beginning-of-line (‘^’) rule
even though it didn’t come at the beginning of the line (though
this is rare!).

Pattern-matching of NUL’s is substantially slower than matching
other characters.

Lex does not generate correct #line directives for code internal to
the scanner; thus, bugs in lex.skel yield bogus line numbers.

Due to both buffering of input and read-ahead, you cannot intermix
calls to <stdio.h> routines, such as, for example, getchar(), with
lex rules and expect it to work. Call input() instead.

The total table entries listed by the -v flag excludes the number
of table entries needed to determine what rule has been matched.
The number of entries is equal to the number of DFA states if the
scanner does not use REJECT, and somewhat greater than the number
of states if it does.

REJECT cannot be used with the -f or -F options.

Some of the macros, such as yywrap(), may in the future become
functions which live in the -lln library. This will doubtless
break a lot of code, but may be required for POSIX compliance.

The lex internal algorithms need documentation.