+By default, the value of @code{yylloc} is a structure and you need only
+initialize the members that are going to be used by the actions. The
+four members are called @code{first_line}, @code{first_column},
+@code{last_line} and @code{last_column}. Note that the use of this
+feature makes the parser noticeably slower.
+
+@tindex YYLTYPE
+The data type of @code{yylloc} has the name @code{YYLTYPE}.
+
+@node Pure Calling
+@subsection Calling Conventions for Pure Parsers
+
+When you use the Bison declaration @samp{%define api.pure} to request a
+pure, reentrant parser, the global communication variables @code{yylval}
+and @code{yylloc} cannot be used. (@xref{Pure Decl, ,A Pure (Reentrant)
+Parser}.) In such parsers the two global variables are replaced by
+pointers passed as arguments to @code{yylex}. You must declare them as
+shown here, and pass the information back by storing it through those
+pointers.
+
+@example
+int
+yylex (YYSTYPE *lvalp, YYLTYPE *llocp)
+@{
+ @dots{}
+ *lvalp = value; /* Put value onto Bison stack. */
+ return INT; /* Return the type of the token. */
+ @dots{}
+@}
+@end example
+
+If the grammar file does not use the @samp{@@} constructs to refer to
+textual locations, then the type @code{YYLTYPE} will not be defined. In
+this case, omit the second argument; @code{yylex} will be called with
+only one argument.
+
+If you wish to pass additional arguments to @code{yylex}, use
+@code{%lex-param} just like @code{%parse-param} (@pxref{Parser
+Function}). To pass additional arguments to both @code{yylex} and
+@code{yyparse}, use @code{%param}.
+
+@deffn {Directive} %lex-param @{@var{argument-declaration}@} @dots{}
+@findex %lex-param
+Specify that @var{argument-declaration} are additional @code{yylex} argument
+declarations. You may pass one or more such declarations, which is
+equivalent to repeating @code{%lex-param}.
+@end deffn
+
+@deffn {Directive} %param @{@var{argument-declaration}@} @dots{}
+@findex %param
+Specify that @var{argument-declaration} are additional
+@code{yylex}/@code{yyparse} argument declarations. This is equivalent to
+@samp{%lex-param @{@var{argument-declaration}@} @dots{} %parse-param
+@{@var{argument-declaration}@} @dots{}}. You may pass one or more
+declarations, which is equivalent to repeating @code{%param}.
+@end deffn
+
+For instance:
+
+@example
+%lex-param @{scanner_mode *mode@}
+%parse-param @{parser_mode *mode@}
+%param @{environment_type *env@}
+@end example
+
+@noindent
+results in the following signature:
+
+@example
+int yylex (scanner_mode *mode, environment_type *env);
+int yyparse (parser_mode *mode, environment_type *env);
+@end example
+
+If @samp{%define api.pure} is added:
+
+@example
+int yylex (YYSTYPE *lvalp, scanner_mode *mode, environment_type *env);
+int yyparse (parser_mode *mode, environment_type *env);
+@end example
+
+@noindent
+and finally, if both @samp{%define api.pure} and @code{%locations} are used:
+
+@example
+int yylex (YYSTYPE *lvalp, YYLTYPE *llocp,
+ scanner_mode *mode, environment_type *env);
+int yyparse (parser_mode *mode, environment_type *env);
+@end example
+
+@node Error Reporting
+@section The Error Reporting Function @code{yyerror}
+@cindex error reporting function
+@findex yyerror
+@cindex parse error
+@cindex syntax error
+
+The Bison parser detects a @dfn{syntax error} (or @dfn{parse error})
+whenever it reads a token which cannot satisfy any syntax rule. An
+action in the grammar can also explicitly proclaim an error, using the
+macro @code{YYERROR} (@pxref{Action Features, ,Special Features for Use
+in Actions}).
+
+The Bison parser expects to report the error by calling an error
+reporting function named @code{yyerror}, which you must supply. It is
+called by @code{yyparse} whenever a syntax error is found, and it
+receives one argument, a string describing the error. For a syntax
+error, the string is normally @w{@code{"syntax error"}}.
+
+@findex %define parse.error
+If you invoke @samp{%define parse.error verbose} in the Bison
+declarations section (@pxref{Bison Declarations, ,The Bison Declarations
+Section}), then Bison provides a more verbose and specific error message
+string instead of just plain @w{@code{"syntax error"}}.
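+
+For instance, the declarations section of the grammar file might
+contain simply:
+
+@example
+%define parse.error verbose
+@end example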
+
+The parser can detect one other kind of error: memory exhaustion. This
+can happen when the input contains constructions that are very deeply
+nested. It isn't likely you will encounter this, since the Bison
+parser normally extends its stack automatically up to a very large limit. But
+if memory is exhausted, @code{yyparse} calls @code{yyerror} in the usual
+fashion, except that the argument string is @w{@code{"memory exhausted"}}.
+
+In some cases diagnostics like @w{@code{"syntax error"}} are
+translated automatically from English to some other language before
+they are passed to @code{yyerror}. @xref{Internationalization}.
+
+The following definition suffices in simple programs:
+
+@example
+@group
+void
+yyerror (char const *s)
+@{
+@end group
+@group
+ fprintf (stderr, "%s\n", s);
+@}
+@end group
+@end example
+
+After @code{yyerror} returns to @code{yyparse}, the latter will attempt
+error recovery if you have written suitable error recovery grammar rules
+(@pxref{Error Recovery}). If recovery is impossible, @code{yyparse} will
+immediately return 1.
+
+Obviously, in location tracking pure parsers, @code{yyerror} should
+have access to the current location. This is indeed the case for
+@acronym{GLR} parsers, but not for Yacc parsers, for historical
+reasons. Thus, if @samp{%locations %define api.pure} is passed, the
+prototypes for @code{yyerror} are:
+
+@example
+void yyerror (char const *msg); /* Yacc parsers. */
+void yyerror (YYLTYPE *locp, char const *msg); /* GLR parsers. */
+@end example
+
+If @samp{%parse-param @{int *nastiness@}} is used, then:
+
+@example
+void yyerror (int *nastiness, char const *msg); /* Yacc parsers. */
+void yyerror (int *nastiness, char const *msg); /* GLR parsers. */
+@end example
+
+Finally, @acronym{GLR} and Yacc parsers share the same @code{yyerror}
+calling convention for absolutely pure parsers, i.e., when the calling
+convention of @code{yylex} @emph{and} the calling convention of
+@code{yyparse} are both pure. For instance:
+
+@example
+/* Location tracking. */
+%locations
+/* Pure yylex. */
+%define api.pure
+%lex-param @{int *nastiness@}
+/* Pure yyparse. */
+%parse-param @{int *nastiness@}
+%parse-param @{int *randomness@}
+@end example
+
+@noindent
+results in the following signatures for all the parser kinds:
+
+@example
+int yylex (YYSTYPE *lvalp, YYLTYPE *llocp, int *nastiness);
+int yyparse (int *nastiness, int *randomness);
+void yyerror (YYLTYPE *locp,
+ int *nastiness, int *randomness,
+ char const *msg);
+@end example
+
+@noindent
+The prototypes are only indications of how the code produced by Bison
+uses @code{yyerror}. Bison-generated code always ignores the returned
+value, so @code{yyerror} can return any type, including @code{void}.
+Also, @code{yyerror} can be a variadic function; that is why the
+message is always passed last.
+
+Traditionally @code{yyerror} returns an @code{int} that is always
+ignored, but this is purely for historical reasons, and @code{void} is
+preferable since it more accurately describes the return type for
+@code{yyerror}.
+
+@vindex yynerrs
+The variable @code{yynerrs} contains the number of syntax errors
+reported so far. Normally this variable is global; but if you
+request a pure parser (@pxref{Pure Decl, ,A Pure (Reentrant) Parser})
+then it is a local variable which only the actions can access.
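+
+As a rough sketch, assuming the default impure parser where
+@code{yynerrs} is a global variable, the calling program can inspect
+the count after parsing:
+
+@example
+@group
+int status = yyparse ();
+if (yynerrs > 0)
+  fprintf (stderr, "%d syntax error(s) reported\n", yynerrs);
+@end group
+@end example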
+
+@node Action Features
+@section Special Features for Use in Actions
+@cindex summary, action features
+@cindex action features summary
+
+Here is a table of Bison constructs, variables and macros that
+are useful in actions.
+
+@deffn {Variable} $$
+Acts like a variable that contains the semantic value for the
+grouping made by the current rule. @xref{Actions}.
+@end deffn
+
+@deffn {Variable} $@var{n}
+Acts like a variable that contains the semantic value for the
+@var{n}th component of the current rule. @xref{Actions}.
+@end deffn
+
+@deffn {Variable} $<@var{typealt}>$
+Like @code{$$} but specifies alternative @var{typealt} in the union
+specified by the @code{%union} declaration. @xref{Action Types, ,Data
+Types of Values in Actions}.
+@end deffn
+
+@deffn {Variable} $<@var{typealt}>@var{n}
+Like @code{$@var{n}} but specifies alternative @var{typealt} in the
+union specified by the @code{%union} declaration.
+@xref{Action Types, ,Data Types of Values in Actions}.
+@end deffn
+
+@deffn {Macro} YYABORT;
+Return immediately from @code{yyparse}, indicating failure.
+@xref{Parser Function, ,The Parser Function @code{yyparse}}.
+@end deffn
+
+@deffn {Macro} YYACCEPT;
+Return immediately from @code{yyparse}, indicating success.
+@xref{Parser Function, ,The Parser Function @code{yyparse}}.
+@end deffn
+
+@deffn {Macro} YYBACKUP (@var{token}, @var{value});
+@findex YYBACKUP
+Unshift a token. This macro is allowed only for rules that reduce
+a single value, and only when there is no lookahead token.
+It is also disallowed in @acronym{GLR} parsers.
+It installs a lookahead token with token type @var{token} and
+semantic value @var{value}; then it discards the value that was
+going to be reduced by this rule.
+
+If the macro is used when it is not valid, such as when there is
+a lookahead token already, then it reports a syntax error with
+a message @samp{cannot back up} and performs ordinary error
+recovery.
+
+In either case, the rest of the action is not executed.
+@end deffn
+
+@deffn {Macro} YYEMPTY
+@vindex YYEMPTY
+Value stored in @code{yychar} when there is no lookahead token.
+@end deffn
+
+@deffn {Macro} YYEOF
+@vindex YYEOF
+Value stored in @code{yychar} when the lookahead is the end of the input
+stream.
+@end deffn
+
+@deffn {Macro} YYERROR;
+@findex YYERROR
+Cause an immediate syntax error. This statement initiates error
+recovery just as if the parser itself had detected an error; however, it
+does not call @code{yyerror}, and does not print any message. If you
+want to print an error message, call @code{yyerror} explicitly before
+the @samp{YYERROR;} statement. @xref{Error Recovery}.
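+
+For instance, an action can report its own message and then trigger
+error recovery; the rule and the semantic check below are illustrative
+only:
+
+@example
+@group
+exp: exp '/' exp
+       @{
+         if ($3 == 0)
+           @{
+             yyerror ("division by zero");
+             YYERROR;
+           @}
+         else
+           $$ = $1 / $3;
+       @}
+@end group
+@end example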
+@end deffn
+
+@deffn {Macro} YYRECOVERING
+@findex YYRECOVERING
+The expression @code{YYRECOVERING ()} yields 1 when the parser
+is recovering from a syntax error, and 0 otherwise.
+@xref{Error Recovery}.
+@end deffn
+
+@deffn {Variable} yychar
+Variable containing either the lookahead token, or @code{YYEOF} when the
+lookahead is the end of the input stream, or @code{YYEMPTY} when no lookahead
+has been performed so the next token is not yet known.
+Do not modify @code{yychar} in a deferred semantic action (@pxref{GLR Semantic
+Actions}).
+@xref{Lookahead, ,Lookahead Tokens}.
+@end deffn
+
+@deffn {Macro} yyclearin;
+Discard the current lookahead token. This is useful primarily in
+error rules.
+Do not invoke @code{yyclearin} in a deferred semantic action (@pxref{GLR
+Semantic Actions}).
+@xref{Error Recovery}.
+@end deffn
+
+@deffn {Macro} yyerrok;
+Resume generating error messages immediately for subsequent syntax
+errors. This is useful primarily in error rules.
+@xref{Error Recovery}.
+@end deffn
+
+@deffn {Variable} yylloc
+Variable containing the lookahead token location when @code{yychar} is not set
+to @code{YYEMPTY} or @code{YYEOF}.
+Do not modify @code{yylloc} in a deferred semantic action (@pxref{GLR Semantic
+Actions}).
+@xref{Actions and Locations, ,Actions and Locations}.
+@end deffn
+
+@deffn {Variable} yylval
+Variable containing the lookahead token semantic value when @code{yychar} is
+not set to @code{YYEMPTY} or @code{YYEOF}.
+Do not modify @code{yylval} in a deferred semantic action (@pxref{GLR Semantic
+Actions}).
+@xref{Actions, ,Actions}.
+@end deffn
+
+@deffn {Value} @@$
+@findex @@$
+Acts like a structure variable containing information on the textual location
+of the grouping made by the current rule. @xref{Locations, ,
+Tracking Locations}.
+
+@c Check if those paragraphs are still useful or not.
+
+@c @example
+@c struct @{
+@c int first_line, last_line;
+@c int first_column, last_column;
+@c @};
+@c @end example
+
+@c Thus, to get the starting line number of the third component, you would
+@c use @samp{@@3.first_line}.
+
+@c In order for the members of this structure to contain valid information,
+@c you must make @code{yylex} supply this information about each token.
+@c If you need only certain members, then @code{yylex} need only fill in
+@c those members.
+
+@c The use of this feature makes the parser noticeably slower.
+@end deffn
+
+@deffn {Value} @@@var{n}
+@findex @@@var{n}
+Acts like a structure variable containing information on the textual location
+of the @var{n}th component of the current rule. @xref{Locations, ,
+Tracking Locations}.
+@end deffn
+
+@node Internationalization
+@section Parser Internationalization
+@cindex internationalization
+@cindex i18n
+@cindex NLS
+@cindex gettext
+@cindex bison-po
+
+A Bison-generated parser can print diagnostics, including error and
+tracing messages. By default, they appear in English. However, Bison
+also supports outputting diagnostics in the user's native language. To
+make this work, the user should set the usual environment variables.
+@xref{Users, , The User's View, gettext, GNU @code{gettext} utilities}.
+For example, the shell command @samp{export LC_ALL=fr_CA.UTF-8} might
+set the user's locale to French Canadian using the @acronym{UTF}-8
+encoding. The exact set of available locales depends on the user's
+installation.
+
+The maintainer of a package that uses a Bison-generated parser enables
+the internationalization of the parser's output through the following
+steps. Here we assume a package that uses @acronym{GNU} Autoconf and
+@acronym{GNU} Automake.
+
+@enumerate
+@item
+@cindex bison-i18n.m4
+Into the directory containing the @acronym{GNU} Autoconf macros used
+by the package---often called @file{m4}---copy the
+@file{bison-i18n.m4} file installed by Bison under
+@samp{share/aclocal/bison-i18n.m4} in Bison's installation directory.
+For example:
+
+@example
+cp /usr/local/share/aclocal/bison-i18n.m4 m4/bison-i18n.m4
+@end example
+
+@item
+@findex BISON_I18N
+@vindex BISON_LOCALEDIR
+@vindex YYENABLE_NLS
+In the top-level @file{configure.ac}, after the @code{AM_GNU_GETTEXT}
+invocation, add an invocation of @code{BISON_I18N}. This macro is
+defined in the file @file{bison-i18n.m4} that you copied earlier. It
+causes @samp{configure} to find the value of the
+@code{BISON_LOCALEDIR} variable, and it defines the source-language
+symbol @code{YYENABLE_NLS} to enable translations in the
+Bison-generated parser.
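+
+A minimal @file{configure.ac} fragment might thus read as follows
+(the @code{external} argument to @code{AM_GNU_GETTEXT} is only one
+possible choice):
+
+@example
+AM_GNU_GETTEXT([external])
+BISON_I18N
+@end example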
+
+@item
+In the @code{main} function of your program, designate the directory
+containing Bison's runtime message catalog, through a call to
+@samp{bindtextdomain} with domain name @samp{bison-runtime}.
+For example:
+
+@example
+bindtextdomain ("bison-runtime", BISON_LOCALEDIR);
+@end example
+
+Typically this appears after any other call to @code{bindtextdomain
+(PACKAGE, LOCALEDIR)} that your package already has. Here we rely on
+@samp{BISON_LOCALEDIR} being defined as a string through the
+@file{Makefile}.
+
+@item
+In the @file{Makefile.am} that controls the compilation of the @code{main}
+function, make @samp{BISON_LOCALEDIR} available as a C preprocessor macro,
+either in @samp{DEFS} or in @samp{AM_CPPFLAGS}. For example:
+
+@example
+DEFS = @@DEFS@@ -DBISON_LOCALEDIR='"$(BISON_LOCALEDIR)"'
+@end example
+
+or:
+
+@example
+AM_CPPFLAGS = -DBISON_LOCALEDIR='"$(BISON_LOCALEDIR)"'
+@end example
+
+@item
+Finally, invoke the command @command{autoreconf} to generate the build
+infrastructure.
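+
+For example (the @option{-vi} options are customary but not required):
+
+@example
+autoreconf -vi
+@end example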
+@end enumerate
+
+
+@node Algorithm
+@chapter The Bison Parser Algorithm
+@cindex Bison parser algorithm
+@cindex algorithm of parser
+@cindex shifting
+@cindex reduction
+@cindex parser stack
+@cindex stack, parser
+
+As Bison reads tokens, it pushes them onto a stack along with their
+semantic values. The stack is called the @dfn{parser stack}. Pushing a
+token is traditionally called @dfn{shifting}.
+
+For example, suppose the infix calculator has read @samp{1 + 5 *}, with a
+@samp{3} to come. The stack will have four elements, one for each token
+that was shifted.
+
+But the stack does not always have an element for each token read. When
+the last @var{n} tokens and groupings shifted match the components of a
+grammar rule, they can be combined according to that rule. This is called
+@dfn{reduction}. Those tokens and groupings are replaced on the stack by a
+single grouping whose symbol is the result (left hand side) of that rule.
+Running the rule's action is part of the process of reduction, because this
+is what computes the semantic value of the resulting grouping.
+
+For example, if the infix calculator's parser stack contains this:
+
+@example
+1 + 5 * 3
+@end example
+
+@noindent
+and the next input token is a newline character, then the last three
+elements can be reduced to 15 via the rule:
+
+@example
+expr: expr '*' expr;
+@end example
+
+@noindent
+Then the stack contains just these three elements:
+
+@example
+1 + 15
+@end example
+
+@noindent
+At this point, another reduction can be made, resulting in the single value
+16. Then the newline token can be shifted.
+
+The parser tries, by shifts and reductions, to reduce the entire input down
+to a single grouping whose symbol is the grammar's start-symbol
+(@pxref{Language and Grammar, ,Languages and Context-Free Grammars}).
+
+This kind of parser is known in the literature as a bottom-up parser.
+
+@menu
+* Lookahead:: Parser looks one token ahead when deciding what to do.
+* Shift/Reduce:: Conflicts: when either shifting or reduction is valid.
+* Precedence:: Operator precedence works by resolving conflicts.
+* Contextual Precedence:: When an operator's precedence depends on context.
+* Parser States:: The parser is a finite-state-machine with stack.
+* Reduce/Reduce:: When two rules are applicable in the same situation.
+* Mystery Conflicts:: Reduce/reduce conflicts that look unjustified.
+* Generalized LR Parsing:: Parsing arbitrary context-free grammars.
+* Memory Management:: What happens when memory is exhausted. How to avoid it.
+@end menu
+
+@node Lookahead
+@section Lookahead Tokens
+@cindex lookahead token
+
+The Bison parser does @emph{not} always reduce immediately as soon as the
+last @var{n} tokens and groupings match a rule. This is because such a
+simple strategy is inadequate to handle most languages. Instead, when a
+reduction is possible, the parser sometimes ``looks ahead'' at the next
+token in order to decide what to do.
+
+When a token is read, it is not immediately shifted; first it becomes the
+@dfn{lookahead token}, which is not on the stack. Now the parser can
+perform one or more reductions of tokens and groupings on the stack, while
+the lookahead token remains off to the side. When no more reductions
+should take place, the lookahead token is shifted onto the stack. This
+does not mean that all possible reductions have been done; depending on the
+token type of the lookahead token, some rules may choose to delay their
+application.
+
+Here is a simple case where lookahead is needed. These three rules define
+expressions which contain binary addition operators and postfix unary
+factorial operators (@samp{!}), and allow parentheses for grouping.
+
+@example
+@group
+expr: term '+' expr
+ | term
+ ;
+@end group
+
+@group
+term: '(' expr ')'
+ | term '!'
+ | NUMBER
+ ;
+@end group
+@end example
+
+Suppose that the tokens @w{@samp{1 + 2}} have been read and shifted; what
+should be done? If the following token is @samp{)}, then the first three
+tokens must be reduced to form an @code{expr}. This is the only valid
+course, because shifting the @samp{)} would produce a sequence of symbols
+@w{@code{term ')'}}, and no rule allows this.
+
+If the following token is @samp{!}, then it must be shifted immediately so
+that @w{@samp{2 !}} can be reduced to make a @code{term}. If instead the
+parser were to reduce before shifting, @w{@samp{1 + 2}} would become an
+@code{expr}. It would then be impossible to shift the @samp{!} because
+doing so would produce on the stack the sequence of symbols @code{expr
+'!'}. No rule allows that sequence.
+
+@vindex yychar
+@vindex yylval
+@vindex yylloc
+The lookahead token is stored in the variable @code{yychar}.
+Its semantic value and location, if any, are stored in the variables
+@code{yylval} and @code{yylloc}.
+@xref{Action Features, ,Special Features for Use in Actions}.
+
+@node Shift/Reduce
+@section Shift/Reduce Conflicts
+@cindex conflicts
+@cindex shift/reduce conflicts
+@cindex dangling @code{else}
+@cindex @code{else}, dangling
+
+Suppose we are parsing a language which has if-then and if-then-else
+statements, with a pair of rules like this:
+
+@example
+@group
+if_stmt:
+ IF expr THEN stmt
+ | IF expr THEN stmt ELSE stmt
+ ;
+@end group
+@end example
+
+@noindent
+Here we assume that @code{IF}, @code{THEN} and @code{ELSE} are
+terminal symbols for specific keyword tokens.
+
+When the @code{ELSE} token is read and becomes the lookahead token, the
+contents of the stack (assuming the input is valid) are just right for
+reduction by the first rule. But it is also legitimate to shift the
+@code{ELSE}, because that would lead to eventual reduction by the second
+rule.
+
+This situation, where either a shift or a reduction would be valid, is
+called a @dfn{shift/reduce conflict}. Bison is designed to resolve
+these conflicts by choosing to shift, unless otherwise directed by
+operator precedence declarations. To see the reason for this, let's
+contrast it with the other alternative.
+
+Since the parser prefers to shift the @code{ELSE}, the result is to attach
+the else-clause to the innermost if-statement, making these two inputs
+equivalent:
+
+@example
+if x then if y then win (); else lose;
+
+if x then do; if y then win (); else lose; end;
+@end example
+
+But if the parser chose to reduce when possible rather than shift, the
+result would be to attach the else-clause to the outermost if-statement,
+making these two inputs equivalent:
+
+@example
+if x then if y then win (); else lose;
+
+if x then do; if y then win (); end; else lose;
+@end example
+
+The conflict exists because the grammar as written is ambiguous: either
+parsing of the simple nested if-statement is legitimate. The established
+convention is that these ambiguities are resolved by attaching the
+else-clause to the innermost if-statement; this is what Bison accomplishes
+by choosing to shift rather than reduce. (It would ideally be cleaner to
+write an unambiguous grammar, but that is very hard to do in this case.)
+This particular ambiguity was first encountered in the specifications of
+Algol 60 and is called the ``dangling @code{else}'' ambiguity.
+
+To avoid warnings from Bison about predictable, legitimate shift/reduce
+conflicts, use the @code{%expect @var{n}} declaration. There will be no
+warning as long as the number of shift/reduce conflicts is exactly @var{n}.
+@xref{Expect Decl, ,Suppressing Conflict Warnings}.
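+
+For instance, a grammar whose only expected conflict is the dangling
+@code{else} would declare:
+
+@example
+%expect 1
+@end example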
+
+The definition of @code{if_stmt} above is solely to blame for the
+conflict, but the conflict does not actually appear without additional
+rules. Here is a complete Bison input file that actually manifests the
+conflict:
+
+@example
+@group
+%token IF THEN ELSE variable
+%%
+@end group
+@group
+stmt: expr
+ | if_stmt
+ ;
+@end group
+
+@group
+if_stmt:
+ IF expr THEN stmt
+ | IF expr THEN stmt ELSE stmt
+ ;
+@end group
+
+expr: variable
+ ;
+@end example
+
+@node Precedence
+@section Operator Precedence
+@cindex operator precedence
+@cindex precedence of operators
+
+Another situation where shift/reduce conflicts appear is in arithmetic
+expressions. Here shifting is not always the preferred resolution; the
+Bison declarations for operator precedence allow you to specify when to
+shift and when to reduce.
+
+@menu
+* Why Precedence:: An example showing why precedence is needed.
+* Using Precedence:: How to specify precedence and associativity.
+* Precedence Only:: How to specify precedence only.
+* Precedence Examples:: How these features are used in the previous example.
+* How Precedence:: How they work.
+@end menu
+
+@node Why Precedence
+@subsection When Precedence is Needed
+
+Consider the following ambiguous grammar fragment (ambiguous because the
+input @w{@samp{1 - 2 * 3}} can be parsed in two different ways):
+
+@example
+@group
+expr: expr '-' expr
+ | expr '*' expr
+ | expr '<' expr
+ | '(' expr ')'
+ @dots{}
+ ;
+@end group
+@end example
+
+@noindent
+Suppose the parser has seen the tokens @samp{1}, @samp{-} and @samp{2};
+should it reduce them via the rule for the subtraction operator? It
+depends on the next token. Of course, if the next token is @samp{)}, we
+must reduce; shifting is invalid because no single rule can reduce the
+token sequence @w{@samp{- 2 )}} or anything starting with that. But if
+the next token is @samp{*} or @samp{<}, we have a choice: either
+shifting or reduction would allow the parse to complete, but with
+different results.
+
+To decide which one Bison should do, we must consider the results. If
+the next operator token @var{op} is shifted, then it must be reduced
+first in order to permit another opportunity to reduce the difference.
+The result is (in effect) @w{@samp{1 - (2 @var{op} 3)}}. On the other
+hand, if the subtraction is reduced before shifting @var{op}, the result
+is @w{@samp{(1 - 2) @var{op} 3}}. Clearly, then, the choice of shift or
+reduce should depend on the relative precedence of the operators
+@samp{-} and @var{op}: @samp{*} should be shifted first, but not
+@samp{<}.
+
+@cindex associativity
+What about input such as @w{@samp{1 - 2 - 5}}; should this be
+@w{@samp{(1 - 2) - 5}} or should it be @w{@samp{1 - (2 - 5)}}? For most
+operators we prefer the former, which is called @dfn{left association}.
+The latter alternative, @dfn{right association}, is desirable for
+assignment operators. The choice of left or right association is a
+matter of whether the parser chooses to shift or reduce when the stack
+contains @w{@samp{1 - 2}} and the lookahead token is @samp{-}: shifting
+yields right associativity.
+
+@node Using Precedence
+@subsection Specifying Operator Precedence
+@findex %left
+@findex %nonassoc
+@findex %precedence
+@findex %right
+
+Bison allows you to specify these choices with the operator precedence
+declarations @code{%left} and @code{%right}. Each such declaration
+contains a list of tokens, which are operators whose precedence and
+associativity is being declared. The @code{%left} declaration makes all
+those operators left-associative and the @code{%right} declaration makes
+them right-associative. A third alternative is @code{%nonassoc}, which
+declares that it is a syntax error to find the same operator twice ``in a
+row''.
+The last alternative, @code{%precedence}, allows you to define only
+precedence and no associativity at all. As a result, any
+associativity-related conflict that remains is reported as a
+compile-time error. The directive @code{%nonassoc} creates run-time
+errors: using the operator in an associative way is a syntax error. The
+directive @code{%precedence} creates compile-time errors: an operator
+@emph{can} be involved in an associativity-related conflict, contrary to
+what the grammar author expected.
+
+The relative precedence of different operators is controlled by the
+order in which they are declared. The first precedence/associativity
+declaration in the file declares the operators whose
+precedence is lowest, the next such declaration declares the operators
+whose precedence is a little higher, and so on.
+
+@node Precedence Only
+@subsection Specifying Precedence Only
+@findex %precedence
+
+Since @acronym{POSIX} Yacc defines only @code{%left}, @code{%right}, and
+@code{%nonassoc}, which all define both precedence and associativity,
+little attention is paid to the fact that precedence cannot be defined
+without defining associativity. Yet, sometimes, when trying to solve a
+conflict, precedence suffices. In such a case, using @code{%left},
+@code{%right}, or @code{%nonassoc} might hide future
+(associativity-related) conflicts that would then go unreported.
+
+The dangling @code{else} ambiguity (@pxref{Shift/Reduce, , Shift/Reduce
+Conflicts}) can be solved explicitly. This shift/reduce conflict occurs
+in the following situation, where the period denotes the current parsing
+state:
+
+@example
+if @var{e1} then if @var{e2} then @var{s1} . else @var{s2}
+@end example
+
+The conflict involves the reduction of the rule @samp{IF expr THEN
+stmt}, whose precedence is by default that of its last token
+(@code{THEN}), and the shifting of the token @code{ELSE}. For the usual
+disambiguation (attaching the @code{else} to the closest @code{if}),
+shifting must be preferred, i.e., the precedence of @code{ELSE} must be
+higher than that of @code{THEN}. But neither token is expected to be
+involved in an associativity-related conflict, which can be specified
+as follows.
+
+@example
+%precedence THEN
+%precedence ELSE
+@end example
+
+Unary minus is another typical example where associativity is usually
+over-specified; see @ref{Infix Calc, , Infix Notation Calculator:
+@code{calc}}. The @code{%left} directive is traditionally used to
+declare the precedence of @code{NEG}, which is more than needed since
+it also defines its associativity. While this is harmless in the
+traditional example, who knows how @code{NEG} might be used in future
+evolutions of the grammar@dots{}
+
+@node Precedence Examples
+@subsection Precedence Examples
+
+In our example, we would want the following declarations:
+
+@example
+%left '<'
+%left '-'
+%left '*'
+@end example
+
+In a more complete example, which supports other operators as well, we
+would declare them in groups of equal precedence. For example, @code{'+'} is
+declared with @code{'-'}:
+
+@example
+%left '<' '>' '=' NE LE GE
+%left '+' '-'
+%left '*' '/'
+@end example
+
+@noindent
+(Here @code{NE} and so on stand for the operators for ``not equal''
+and so on. We assume that these tokens are more than one character long
+and therefore are represented by names, not character literals.)
+
+@node How Precedence
+@subsection How Precedence Works
+
+The first effect of the precedence declarations is to assign precedence
+levels to the terminal symbols declared. The second effect is to assign
+precedence levels to certain rules: each rule gets its precedence from
+the last terminal symbol mentioned in the components. (You can also
+specify explicitly the precedence of a rule. @xref{Contextual
+Precedence, ,Context-Dependent Precedence}.)
+
+Finally, the resolution of conflicts works by comparing the precedence
+of the rule being considered with that of the lookahead token. If the
+token's precedence is higher, the choice is to shift. If the rule's
+precedence is higher, the choice is to reduce. If they have equal
+precedence, the choice is made based on the associativity of that
+precedence level. The verbose output file made by @samp{-v}
+(@pxref{Invocation, ,Invoking Bison}) says how each conflict was
+resolved.
+
+Not all rules and not all tokens have precedence. If either the rule or
+the lookahead token has no precedence, then the default is to shift.
+
+@node Contextual Precedence
+@section Context-Dependent Precedence
+@cindex context-dependent precedence
+@cindex unary operator precedence
+@cindex precedence, context-dependent
+@cindex precedence, unary operator
+@findex %prec
+
+Often the precedence of an operator depends on the context. This sounds
+outlandish at first, but it is really very common. For example, a minus
+sign typically has a very high precedence as a unary operator, and a
+somewhat lower precedence (lower than multiplication) as a binary operator.
+
+The Bison precedence declarations
+can only be used once for a given token; so a token has
+only one precedence declared in this way. For context-dependent
+precedence, you need to use an additional mechanism: the @code{%prec}
+modifier for rules.
+
+The @code{%prec} modifier declares the precedence of a particular rule by
+specifying a terminal symbol whose precedence should be used for that rule.
+It's not necessary for that symbol to appear otherwise in the rule. The
+modifier's syntax is:
+
+@example
+%prec @var{terminal-symbol}
+@end example
+
+@noindent
+and it is written after the components of the rule. Its effect is to
+assign the rule the precedence of @var{terminal-symbol}, overriding
+the precedence that would be deduced for it in the ordinary way. The
+altered rule precedence then affects how conflicts involving that rule
+are resolved (@pxref{Precedence, ,Operator Precedence}).
+
+Here is how @code{%prec} solves the problem of unary minus. First, declare
+a precedence for a fictitious terminal symbol named @code{UMINUS}. There
+are no tokens of this type, but the symbol serves to stand for its
+precedence:
+
+@example
+@dots{}
+%left '+' '-'
+%left '*'
+%left UMINUS
+@end example
+
+Now the precedence of @code{UMINUS} can be used in specific rules:
+
+@example
+@group
+exp: @dots{}
+ | exp '-' exp
+ @dots{}
+ | '-' exp %prec UMINUS
+@end group
+@end example
+
+@ifset defaultprec
+If you forget to append @code{%prec UMINUS} to the rule for unary
+minus, Bison silently assumes that minus has its usual precedence.
+This kind of problem can be tricky to debug, since one typically
+discovers the mistake only by testing the code.
+
+The @code{%no-default-prec;} declaration makes it easier to discover
+this kind of problem systematically. It causes rules that lack a
+@code{%prec} modifier to have no precedence, even if the last terminal
+symbol mentioned in their components has a declared precedence.
+
+If @code{%no-default-prec;} is in effect, you must specify @code{%prec}
+for all rules that participate in precedence conflict resolution.
+Then you will see any shift/reduce conflict until you tell Bison how
+to resolve it, either by changing your grammar or by adding an
+explicit precedence. This will probably add declarations to the
+grammar, but it helps to protect against incorrect rule precedences.
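+
+As a rough sketch, with rule names chosen only for illustration, a
+grammar using this approach might read:
+
+@example
+@group
+%no-default-prec;
+%left '+' '-'
+%left '*'
+%left UMINUS
+%%
+exp: exp '-' exp   %prec '-'
+   | '-' exp       %prec UMINUS
+   ;
+@end group
+@end example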
+
+The effect of @code{%no-default-prec;} can be reversed by giving
+@code{%default-prec;}, which is the default.
+@end ifset
+
+@node Parser States
+@section Parser States
+@cindex finite-state machine
+@cindex parser state
+@cindex state (of parser)
+
+The function @code{yyparse} is implemented using a finite-state machine.
+The values pushed on the parser stack are not simply token type codes; they
+represent the entire sequence of terminal and nonterminal symbols at or
+near the top of the stack. The current state collects all the information
+about previous input which is relevant to deciding what to do next.
+
+Each time a lookahead token is read, the current parser state together
+with the type of lookahead token are looked up in a table. This table
+entry can say, ``Shift the lookahead token.'' In this case, it also
+specifies the new parser state, which is pushed onto the top of the
+parser stack. Or it can say, ``Reduce using rule number @var{n}.''
+This means that a certain number of tokens or groupings are taken off
+the top of the stack, and replaced by one grouping. In other words,
+that number of states are popped from the stack, and one new state is
+pushed.
+
+There is one other alternative: the table can say that the lookahead token
+is erroneous in the current state. This causes error processing to begin
+(@pxref{Error Recovery}).
+
+@node Reduce/Reduce
+@section Reduce/Reduce Conflicts
+@cindex reduce/reduce conflict
+@cindex conflicts, reduce/reduce
+
+A reduce/reduce conflict occurs if there are two or more rules that apply
+to the same sequence of input. This usually indicates a serious error
+in the grammar.
+
+For example, here is an erroneous attempt to define a sequence
+of zero or more @code{word} groupings.
+
+@example
+sequence: /* empty */
+ @{ printf ("empty sequence\n"); @}
+ | maybeword
+ | sequence word
+ @{ printf ("added word %s\n", $2); @}
+ ;
+
+maybeword: /* empty */
+ @{ printf ("empty maybeword\n"); @}
+ | word
+ @{ printf ("single word %s\n", $1); @}
+ ;
+@end example
+
+@noindent
+The error is an ambiguity: there is more than one way to parse a single
+@code{word} into a @code{sequence}. It could be reduced to a
+@code{maybeword} and then into a @code{sequence} via the second rule.
+Alternatively, nothing-at-all could be reduced into a @code{sequence}
+via the first rule, and this could be combined with the @code{word}
+using the third rule for @code{sequence}.
+
+There is also more than one way to reduce nothing-at-all into a
+@code{sequence}. This can be done directly via the first rule,
+or indirectly via @code{maybeword} and then the second rule.
+
+You might think that this is a distinction without a difference, because it
+does not change whether any particular input is valid or not. But it does
+affect which actions are run. One parsing order runs the second rule's
+action; the other runs the first rule's action and the third rule's action.
+In this example, the output of the program changes.
+
+Bison resolves a reduce/reduce conflict by choosing to use the rule that
+appears first in the grammar, but it is very risky to rely on this. Every
+reduce/reduce conflict must be studied and usually eliminated. Here is the
+proper way to define @code{sequence}:
+
+@example
+sequence: /* empty */
+ @{ printf ("empty sequence\n"); @}
+ | sequence word
+ @{ printf ("added word %s\n", $2); @}
+ ;
+@end example
+
+Here is another common error that yields a reduce/reduce conflict:
+
+@example
+sequence: /* empty */
+ | sequence words
+ | sequence redirects
+ ;
+
+words: /* empty */
+ | words word
+ ;
+
+redirects: /* empty */
+ | redirects redirect
+ ;
+@end example
+
+@noindent
+The intention here is to define a sequence which can contain either
+@code{word} or @code{redirect} groupings. The individual definitions of
+@code{sequence}, @code{words} and @code{redirects} are error-free, but the
+three together make a subtle ambiguity: even an empty input can be parsed
+in infinitely many ways!
+
+Consider: nothing-at-all could be a @code{words}. Or it could be two
+@code{words} in a row, or three, or any number. It could equally well be a
+@code{redirects}, or two, or any number. Or it could be a @code{words}
+followed by three @code{redirects} and another @code{words}. And so on.
+
+Here are two ways to correct these rules. First, to make it a single level
+of sequence:
+
+@example
+sequence: /* empty */
+ | sequence word
+ | sequence redirect
+ ;
+@end example
+
+Second, to prevent either a @code{words} or a @code{redirects}
+from being empty:
+
+@example
+sequence: /* empty */
+ | sequence words
+ | sequence redirects
+ ;
+
+words: word
+ | words word
+ ;
+
+redirects: redirect
+ | redirects redirect
+ ;
+@end example
+
+@node Mystery Conflicts
+@section Mysterious Reduce/Reduce Conflicts
+
+Sometimes reduce/reduce conflicts can occur that don't look warranted.
+Here is an example:
+
+@example
+@group
+%token ID
+
+%%
+def: param_spec return_spec ','
+ ;
+param_spec:
+ type
+ | name_list ':' type
+ ;
+@end group
+@group
+return_spec:
+ type
+ | name ':' type
+ ;
+@end group
+@group
+type: ID
+ ;
+@end group
+@group
+name: ID
+ ;
+name_list:
+ name
+ | name ',' name_list
+ ;
+@end group
+@end example
+
+It would seem that this grammar can be parsed with only a single token
+of lookahead: when a @code{param_spec} is being read, an @code{ID} is
+a @code{name} if a comma or colon follows, or a @code{type} if another
+@code{ID} follows. In other words, this grammar is @acronym{LR}(1).
+
+@cindex @acronym{LR}(1)
+@cindex @acronym{LALR}(1)
+However, for historical reasons, Bison cannot by default handle all
+@acronym{LR}(1) grammars.
+In this grammar, two contexts, that after an @code{ID} at the beginning
+of a @code{param_spec} and likewise at the beginning of a
+@code{return_spec}, are similar enough that Bison assumes they are the
+same.
+They appear similar because the same set of rules would be
+active---the rule for reducing to a @code{name} and that for reducing to
+a @code{type}. Bison is unable to determine at that stage of processing
+that the rules would require different lookahead tokens in the two
+contexts, so it makes a single parser state for them both. Combining
+the two contexts causes a conflict later. In parser terminology, this
+occurrence means that the grammar is not @acronym{LALR}(1).
+
+For many practical grammars (specifically those that fall into the
+non-@acronym{LR}(1) class), the limitations of @acronym{LALR}(1) result in
+difficulties beyond just mysterious reduce/reduce conflicts.
+The best way to fix all these problems is to select a different parser
+table generation algorithm.
+Either @acronym{IELR}(1) or canonical @acronym{LR}(1) would suffice, but
+the former is more efficient and easier to debug during development.
+@xref{Decl Summary,,lr.type}, for details.
+(Bison's @acronym{IELR}(1) and canonical @acronym{LR}(1) implementations
+are experimental.
+More user feedback will help to stabilize them.)
+
+If you instead wish to work around @acronym{LALR}(1)'s limitations, you
+can often fix a mysterious conflict by identifying the two parser states
+that are being confused, and adding something to make them look
+distinct. In the above example, adding one rule to
+@code{return_spec} as follows makes the problem go away:
+
+@example
+@group
+%token BOGUS
+@dots{}
+%%
+@dots{}
+return_spec:
+ type
+ | name ':' type
+ /* This rule is never used. */
+ | ID BOGUS
+ ;
+@end group
+@end example
+
+This corrects the problem because it introduces the possibility of an
+additional active rule in the context after the @code{ID} at the beginning of
+@code{return_spec}. This rule is not active in the corresponding context
+in a @code{param_spec}, so the two contexts receive distinct parser states.
+As long as the token @code{BOGUS} is never generated by @code{yylex},
+the added rule cannot alter the way actual input is parsed.
+
+In this particular example, there is another way to solve the problem:
+rewrite the rule for @code{return_spec} to use @code{ID} directly
+instead of via @code{name}. This also causes the two confusing
+contexts to have different sets of active rules, because the one for
+@code{return_spec} activates the altered rule for @code{return_spec}
+rather than the one for @code{name}.
+
+@example
+param_spec:
+ type
+ | name_list ':' type
+ ;
+return_spec:
+ type
+ | ID ':' type
+ ;
+@end example
+
+For a more detailed exposition of @acronym{LALR}(1) parsers and parser
+generators, please see:
+Frank DeRemer and Thomas Pennello, Efficient Computation of
+@acronym{LALR}(1) Look-Ahead Sets, @cite{@acronym{ACM} Transactions on
+Programming Languages and Systems}, Vol.@: 4, No.@: 4 (October 1982),
+pp.@: 615--649 @uref{http://doi.acm.org/10.1145/69622.357187}.
+
+@node Generalized LR Parsing
+@section Generalized @acronym{LR} (@acronym{GLR}) Parsing
+@cindex @acronym{GLR} parsing
+@cindex generalized @acronym{LR} (@acronym{GLR}) parsing
+@cindex ambiguous grammars
+@cindex nondeterministic parsing
+
+Bison produces @emph{deterministic} parsers that choose uniquely
+when to reduce and which reduction to apply
+based on a summary of the preceding input and on one extra token of lookahead.
+As a result, normal Bison handles a proper subset of the family of
+context-free languages.
+Ambiguous grammars, since they have strings with more than one possible
+sequence of reductions, cannot have deterministic parsers in this sense.
+The same is true of languages that require more than one symbol of
+lookahead, since the parser lacks the information necessary to make a
+decision at the point it must be made in a shift-reduce parser.
+Finally, as previously mentioned (@pxref{Mystery Conflicts}),
+there are languages where Bison's default choice of how to
+summarize the input seen so far loses necessary information.
+
+When you use the @samp{%glr-parser} declaration in your grammar file,
+Bison generates a parser that uses a different algorithm, called
+Generalized @acronym{LR} (or @acronym{GLR}). A Bison @acronym{GLR}
+parser uses the same basic
+algorithm for parsing as an ordinary Bison parser, but behaves
+differently in cases where there is a shift-reduce conflict that has not
+been resolved by precedence rules (@pxref{Precedence}) or a
+reduce-reduce conflict. When a @acronym{GLR} parser encounters such a
+situation, it
+effectively @emph{splits} into several parsers, one for each possible
+shift or reduction. These parsers then proceed as usual, consuming
+tokens in lock-step. Some of the stacks may encounter other conflicts
+and split further, with the result that instead of a sequence of states,
+a Bison @acronym{GLR} parsing stack is, in effect, a tree of states.
+
+In effect, each stack represents a guess as to what the proper parse
+is. Additional input may indicate that a guess was wrong, in which case
+the appropriate stack silently disappears. Otherwise, the semantic
+actions generated in each stack are saved, rather than being executed
+immediately. When a stack disappears, its saved semantic actions never
+get executed. When a reduction causes two stacks to become equivalent,
+their sets of semantic actions are both saved with the state that
+results from the reduction. We say that two stacks are equivalent
+when they both represent the same sequence of states,
+and each pair of corresponding states represents a
+grammar symbol that produces the same segment of the input token
+stream.
+
+Whenever the parser makes a transition from having multiple
+states to having one, it reverts to the normal deterministic parsing
+algorithm, after resolving and executing the saved-up actions.
+At this transition, some of the states on the stack will have semantic
+values that are sets (actually multisets) of possible actions. The
+parser tries to pick one of the actions by first finding one whose rule
+has the highest dynamic precedence, as set by the @samp{%dprec}
+declaration. Otherwise, if the alternative actions are not ordered by
+precedence, but the same merging function is declared for both
+rules by the @samp{%merge} declaration,
+Bison resolves and evaluates both and then calls the merge function on
+the result. Otherwise, it reports an ambiguity.
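+
+As a rough sketch, with rule and nonterminal names invented for
+illustration, a grammar might request @acronym{GLR} parsing and use
+dynamic precedence to prefer one reading of an ambiguous construct:
+
+@example
+@group
+%glr-parser
+%%
+stmt: decl      %dprec 2
+    | expr_stmt %dprec 1
+    ;
+@end group
+@end example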
+
+It is possible to use a data structure for the @acronym{GLR} parsing tree that
+permits the processing of any @acronym{LR}(1) grammar in linear time (in the
+size of the input), any unambiguous (not necessarily
+@acronym{LR}(1)) grammar in
+quadratic worst-case time, and any general (possibly ambiguous)
+context-free grammar in cubic worst-case time. However, Bison currently
+uses a simpler data structure that requires time proportional to the
+length of the input times the maximum number of stacks required for any
+prefix of the input. Thus, really ambiguous or nondeterministic
+grammars can require exponential time and space to process. Such badly
+behaving examples, however, are not generally of practical interest.
+Usually, nondeterminism in a grammar is local---the parser is ``in
+doubt'' only for a few tokens at a time. Therefore, the current data
+structure should generally be adequate. On @acronym{LR}(1) portions of a
+grammar, in particular, it is only slightly slower than with the
+deterministic @acronym{LR}(1) Bison parser.
+
+For a more detailed exposition of @acronym{GLR} parsers, please see: Elizabeth
+Scott, Adrian Johnstone and Shamsa Sadaf Hussain, Tomita-Style
+Generalised @acronym{LR} Parsers, Royal Holloway, University of
+London, Department of Computer Science, TR-00-12,
+@uref{http://www.cs.rhul.ac.uk/research/languages/publications/tomita_style_1.ps},
+(2000-12-24).
+
+@node Memory Management
+@section Memory Management, and How to Avoid Memory Exhaustion
+@cindex memory exhaustion
+@cindex memory management
+@cindex stack overflow
+@cindex parser stack overflow
+@cindex overflow of parser stack
+
+The Bison parser stack can run out of memory if too many tokens are shifted and
+not reduced. When this happens, the parser function @code{yyparse}
+calls @code{yyerror} and then returns 2.
+
+Because Bison parsers have growing stacks, hitting the upper limit
+usually results from using a right recursion instead of a left
+recursion (@pxref{Recursion, ,Recursive Rules}).
+
+@vindex YYMAXDEPTH
+By defining the macro @code{YYMAXDEPTH}, you can control how deep the
+parser stack can become before memory is exhausted. Define the
+macro with a value that is an integer. This value is the maximum number
+of tokens that can be shifted (and not reduced) before overflow.
+
+The stack space allowed is not necessarily allocated. If you specify a
+large value for @code{YYMAXDEPTH}, the parser normally allocates a small
+stack at first, and then makes it bigger by stages as needed. This
+increasing allocation happens automatically and silently. Therefore,
+you do not need to make @code{YYMAXDEPTH} painfully small merely to save
+space for ordinary inputs that do not need much stack.
+
+However, do not allow @code{YYMAXDEPTH} to be a value so large that
+arithmetic overflow could occur when calculating the size of the stack
+space. Also, do not allow @code{YYMAXDEPTH} to be less than
+@code{YYINITDEPTH}.
+
+@cindex default stack limit
+The default value of @code{YYMAXDEPTH}, if you do not define it, is
+10000.
+
+@vindex YYINITDEPTH
+You can control how much stack is allocated initially by defining the
+macro @code{YYINITDEPTH} to a positive integer. For the deterministic
+parser in C, this value must be a compile-time constant
+unless you are assuming C99 or some other target language or compiler
+that allows variable-length arrays. The default is 200.
+
+Do not allow @code{YYINITDEPTH} to be greater than @code{YYMAXDEPTH}.
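+
+Both macros can be overridden from the prologue of the grammar file;
+the values below are only an illustration:
+
+@example
+@group
+%@{
+#define YYINITDEPTH 500
+#define YYMAXDEPTH 100000
+%@}
+@end group
+@end example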
+
+@c FIXME: C++ output.
+Because of semantic differences between C and C++, the deterministic
+parsers in C produced by Bison cannot grow their stacks when compiled
+by C++ compilers. In this precise case (compiling a C parser as C++),
+you are advised to increase @code{YYINITDEPTH}. The Bison maintainers
+hope to fix this deficiency in a future release.
+
+@node Error Recovery
+@chapter Error Recovery
+@cindex error recovery
+@cindex recovery from errors
+
+It is not usually acceptable to have a program terminate on a syntax
+error. For example, a compiler should recover sufficiently to parse the
+rest of the input file and check it for errors; a calculator should accept
+another expression.
+
+In a simple interactive command parser where each input is one line, it may
+be sufficient to allow @code{yyparse} to return 1 on error and have the
+caller ignore the rest of the input line when that happens (and then call
+@code{yyparse} again). But this is inadequate for a compiler, because it
+forgets all the syntactic context leading up to the error. A syntax error
+deep within a function in the compiler input should not cause the compiler
+to treat the following line like the beginning of a source file.
+
+@findex error
+You can define how to recover from a syntax error by writing rules to
+recognize the special token @code{error}. This is a terminal symbol that
+is always defined (you need not declare it) and reserved for error
+handling. The Bison parser generates an @code{error} token whenever a
+syntax error happens; if you have provided a rule to recognize this token
+in the current context, the parse can continue.
+
+For example:
+
+@example
+stmnts: /* empty string */
+ | stmnts '\n'
+ | stmnts exp '\n'
+ | stmnts error '\n'
+@end example
+
+The fourth rule in this example says that an error followed by a newline
+makes a valid addition to any @code{stmnts}.
+
+What happens if a syntax error occurs in the middle of an @code{exp}? The
+error recovery rule, interpreted strictly, applies to the precise sequence
+of a @code{stmnts}, an @code{error} and a newline. If an error occurs in
+the middle of an @code{exp}, there will probably be some additional tokens
+and subexpressions on the stack after the last @code{stmnts}, and there
+will be tokens to read before the next newline. So the rule is not
+applicable in the ordinary way.
+
+But Bison can force the situation to fit the rule, by discarding part of
+the semantic context and part of the input. First it discards states
+and objects from the stack until it gets back to a state in which the
+@code{error} token is acceptable. (This means that the subexpressions
+already parsed are discarded, back to the last complete @code{stmnts}.)
+At this point the @code{error} token can be shifted. Then, if the old
+lookahead token is not acceptable to be shifted next, the parser reads
+tokens and discards them until it finds a token which is acceptable. In
+this example, Bison reads and discards input until the next newline so
+that the fourth rule can apply. Note that discarded symbols are
+possible sources of memory leaks, see @ref{Destructor Decl, , Freeing
+Discarded Symbols}, for a means to reclaim this memory.
+
+The choice of error rules in the grammar is a choice of strategies for
+error recovery. A simple and useful strategy is simply to skip the rest of
+the current input line or current statement if an error is detected:
+
+@example
+stmnt: error ';' /* On error, skip until ';' is read. */
+@end example
+
+It is also useful to recover to the matching close-delimiter of an
+opening-delimiter that has already been parsed. Otherwise the
+close-delimiter will probably appear to be unmatched, and generate another,
+spurious error message:
+
+@example
+primary: '(' expr ')'
+ | '(' error ')'
+ @dots{}
+ ;
+@end example
+
+Error recovery strategies are necessarily guesses. When they guess wrong,
+one syntax error often leads to another. In the above example, the error
+recovery rule guesses that an error is due to bad input within one
+@code{stmnt}. Suppose that instead a spurious semicolon is inserted in the
+middle of a valid @code{stmnt}. After the error recovery rule recovers
+from the first error, another syntax error will be found straightaway,
+since the text following the spurious semicolon is also an invalid
+@code{stmnt}.
+
+To prevent an outpouring of error messages, the parser will output no error
+message for another syntax error that happens shortly after the first; only
+after three consecutive input tokens have been successfully shifted will
+error messages resume.
+
+Note that rules which accept the @code{error} token may have actions, just
+as any other rules can.
+
+@findex yyerrok
+You can make error messages resume immediately by using the macro
+@code{yyerrok} in an action. If you do this in the error rule's action, no
+error messages will be suppressed. This macro requires no arguments;
+@samp{yyerrok;} is a valid C statement.
+
+@findex yyclearin
+The previous lookahead token is reanalyzed immediately after an error. If
+this is unacceptable, then the macro @code{yyclearin} may be used to clear
+this token. Write the statement @samp{yyclearin;} in the error rule's
+action.
+@xref{Action Features, ,Special Features for Use in Actions}.
+
+For example, suppose that on a syntax error, an error handling routine is
+called that advances the input stream to some point where parsing should
+once again commence. The next symbol returned by the lexical scanner is
+probably correct. The previous lookahead token ought to be discarded
+with @samp{yyclearin;}.
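+
+A rough sketch of such an error rule follows; the helper
+@code{resynchronize} is hypothetical:
+
+@example
+@group
+stmnt: error
+         @{ resynchronize ();  /* skip input up to a safe point */
+           yyclearin;          /* discard the old lookahead token */
+           yyerrok; @}          /* resume error messages immediately */
+@end group
+@end example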
+
+@vindex YYRECOVERING
+The expression @code{YYRECOVERING ()} yields 1 when the parser
+is recovering from a syntax error, and 0 otherwise.
+Syntax error diagnostics are suppressed while recovering from a syntax
+error.
+
+@node Context Dependency
+@chapter Handling Context Dependencies
+
+The Bison paradigm is to parse tokens first, then group them into larger
+syntactic units. In many languages, the meaning of a token is affected by
+its context. Although this violates the Bison paradigm, certain techniques
+(known as @dfn{kludges}) may enable you to write Bison parsers for such
+languages.
+
+@menu
+* Semantic Tokens:: Token parsing can depend on the semantic context.
+* Lexical Tie-ins:: Token parsing can depend on the syntactic context.
+* Tie-in Recovery:: Lexical tie-ins have implications for how
+ error recovery rules must be written.
+@end menu
+
+(Actually, ``kludge'' means any technique that gets its job done but is
+neither clean nor robust.)
+
+@node Semantic Tokens
+@section Semantic Info in Token Types
+
+The C language has a context dependency: the way an identifier is used
+depends on what its current meaning is. For example, consider this:
+
+@example
+foo (x);
+@end example
+
+This looks like a function call statement, but if @code{foo} is a typedef
+name, then this is actually a declaration of @code{x}. How can a Bison
+parser for C decide how to parse this input?
+
+The method used in @acronym{GNU} C is to have two different token types,
+@code{IDENTIFIER} and @code{TYPENAME}. When @code{yylex} finds an
+identifier, it looks up the current declaration of the identifier in order
+to decide which token type to return: @code{TYPENAME} if the identifier is
+declared as a typedef, @code{IDENTIFIER} otherwise.
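+
+Here is a sketch of the corresponding fragment of @code{yylex}; the
+symbol-table helpers and the @code{sym} member of the semantic value are
+assumptions made for the example, not part of the @acronym{GNU} C parser:
+
+@example
+@group
+/* Inside yylex, once the identifier's text is in name.  */
+yylval.sym = symbol_lookup (name);    /* Hypothetical symbol table.  */
+if (symbol_is_typedef (yylval.sym))   /* Hypothetical predicate.  */
+  return TYPENAME;
+else
+  return IDENTIFIER;
+@end group
+@end example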
+
+The grammar rules can then express the context dependency by the choice of
+token type to recognize. @code{IDENTIFIER} is accepted as an expression,
+but @code{TYPENAME} is not. @code{TYPENAME} can start a declaration, but
+@code{IDENTIFIER} cannot. In contexts where the meaning of the identifier
+is @emph{not} significant, such as in declarations that can shadow a
+typedef name, either @code{TYPENAME} or @code{IDENTIFIER} is
+accepted---there is one rule for each of the two token types.
+
+This technique is simple to use if the decision of which kinds of
+identifiers to allow is made at a place close to where the identifier is
+parsed. But in C this is not always so: C allows a declaration to
+redeclare a typedef name provided an explicit type has been specified
+earlier:
+
+@example
+typedef int foo, bar;
+int baz (void)
+@{
+ static bar (bar); /* @r{redeclare @code{bar} as static variable} */
+ extern foo foo (foo); /* @r{redeclare @code{foo} as function} */
+ return foo (bar);
+@}
+@end example
+
+Unfortunately, the name being declared is separated from the declaration
+construct itself by a complicated syntactic structure---the ``declarator''.
+
+As a result, part of the Bison parser for C needs to be duplicated, with
+all the nonterminal names changed: once for parsing a declaration in
+which a typedef name can be redefined, and once for parsing a
+declaration in which that can't be done. Here is a part of the
+duplication, with actions omitted for brevity:
+
+@example
+initdcl:
+ declarator maybeasm '='
+ init
+ | declarator maybeasm
+ ;
+
+notype_initdcl:
+ notype_declarator maybeasm '='
+ init
+ | notype_declarator maybeasm
+ ;
+@end example
+
+@noindent
+Here @code{initdcl} can redeclare a typedef name, but @code{notype_initdcl}
+cannot. The distinction between @code{declarator} and
+@code{notype_declarator} is the same sort of thing.
+
+There is some similarity between this technique and a lexical tie-in
+(described next), in that information which alters the lexical analysis is
+changed during parsing by other parts of the program. The difference is
+that here the information is global, and is used for other purposes in the
+program. A true lexical tie-in has a special-purpose flag controlled by
+the syntactic context.
+
+@node Lexical Tie-ins
+@section Lexical Tie-ins
+@cindex lexical tie-in
+
+One way to handle context-dependency is the @dfn{lexical tie-in}: a flag
+which is set by Bison actions, whose purpose is to alter the way tokens are
+parsed.
+
+For example, suppose we have a language vaguely like C, but with a special
+construct @samp{hex (@var{hex-expr})}. After the keyword @code{hex} comes
+an expression in parentheses in which all integers are hexadecimal. In
+particular, the token @samp{a1b} must be treated as an integer rather than
+as an identifier if it appears in that context. Here is how you can do it:
+
+@example
+@group
+%@{
+ int hexflag;
+ int yylex (void);
+ void yyerror (char const *);
+%@}
+%%
+@dots{}
+@end group
+@group
+expr: IDENTIFIER
+ | constant
+ | HEX '('
+ @{ hexflag = 1; @}
+ expr ')'
+ @{ hexflag = 0;
+ $$ = $4; @}
+ | expr '+' expr
+ @{ $$ = make_sum ($1, $3); @}
+ @dots{}
+ ;
+@end group
+
+@group
+constant:
+ INTEGER
+ | STRING
+ ;
+@end group
+@end example
+
+@noindent
+Here we assume that @code{yylex} looks at the value of @code{hexflag}; when
+it is nonzero, all integers are parsed in hexadecimal, and tokens starting
+with letters are parsed as integers if possible.
+
+The declaration of @code{hexflag} shown in the prologue of the parser file
+is needed to make it accessible to the actions (@pxref{Prologue, ,The Prologue}).
+You must also write the code in @code{yylex} to obey the flag.
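+
+For instance, the part of @code{yylex} that scans words beginning with a
+letter might obey the flag like this; the buffer handling and the
+@code{yylval} members are assumptions made for the sketch:
+
+@example
+@group
+/* Inside yylex, after a word has been read into buf.  */
+char *end;
+long n = strtol (buf, &end, 16);      /* Try to read it as hexadecimal.  */
+if (hexflag && *end == '\0')          /* Every character was a hex digit.  */
+  @{
+    yylval.value = n;
+    return INTEGER;
+  @}
+yylval.name = strdup (buf);           /* Otherwise it is an identifier.  */
+return IDENTIFIER;
+@end group
+@end example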
+
+@node Tie-in Recovery
+@section Lexical Tie-ins and Error Recovery
+
+Lexical tie-ins make strict demands on any error recovery rules you have.
+@xref{Error Recovery}.
+
+The reason for this is that the purpose of an error recovery rule is to
+abort the parsing of one construct and resume in some larger construct.
+For example, in C-like languages, a typical error recovery rule is to skip
+tokens until the next semicolon, and then start a new statement, like this:
+
+@example
+stmt: expr ';'
+ | IF '(' expr ')' stmt @{ @dots{} @}
+ @dots{}
+ | error ';'
+ @{ hexflag = 0; @}
+ ;
+@end example
+
+If there is a syntax error in the middle of a @samp{hex (@var{expr})}
+construct, this error rule will apply, and then the action for the
+completed @samp{hex (@var{expr})} will never run. So @code{hexflag} would
+remain set for the entire rest of the input, or until the next @code{hex}
+keyword, causing identifiers to be misinterpreted as integers.
+
+To avoid this problem the error recovery rule itself clears @code{hexflag}.
+
+There may also be an error recovery rule that works within expressions.
+For example, there could be a rule which applies within parentheses
+and skips to the close-parenthesis:
+
+@example
+@group
+expr: @dots{}
+ | '(' expr ')'
+ @{ $$ = $2; @}
+ | '(' error ')'
+ @dots{}
+@end group
+@end example
+
+If this rule acts within the @code{hex} construct, it is not going to abort
+that construct (since it applies to an inner level of parentheses within
+the construct). Therefore, it should not clear the flag: the rest of
+the @code{hex} construct should be parsed with the flag still in effect.
+
+What if there is an error recovery rule which might abort out of the
+@code{hex} construct or might not, depending on circumstances? There is no
+way you can write the action to determine whether a @code{hex} construct is
+being aborted or not. So if you are using a lexical tie-in, you had better
+make sure your error recovery rules are not of this kind. Each rule must
+be such that you can be sure that it always will, or always won't, have to
+clear the flag.
+
+@c ================================================== Debugging Your Parser
+
+@node Debugging
+@chapter Debugging Your Parser
+
+Developing a parser can be a challenge, especially if you don't
+understand the algorithm (@pxref{Algorithm, ,The Bison Parser
+Algorithm}). Even so, sometimes a detailed description of the automaton
+can help (@pxref{Understanding, , Understanding Your Parser}), or
+tracing the execution of the parser can give some insight on why it
+behaves improperly (@pxref{Tracing, , Tracing Your Parser}).
+
+@menu
+* Understanding:: Understanding the structure of your parser.
+* Tracing:: Tracing the execution of your parser.
+@end menu
+
+@node Understanding
+@section Understanding Your Parser
+
+As documented elsewhere (@pxref{Algorithm, ,The Bison Parser Algorithm}),
+Bison parsers are @dfn{shift/reduce automata}. In some cases (much more
+frequent than one would hope), looking at this automaton is required to
+tune or simply fix a parser. Bison provides two different
+representations of it: textual, and graphical (as a DOT file).
+
+The textual file is generated when the options @option{--report} or
+@option{--verbose} are specified (@pxref{Invocation, ,Invoking Bison}).
+Its name is made by removing @samp{.tab.c} or @samp{.c} from
+the parser output file name, and adding @samp{.output} instead.
+Therefore, if the input file is @file{foo.y}, then the parser file is
+called @file{foo.tab.c} by default. As a consequence, the verbose
+output file is called @file{foo.output}.
+
+The following grammar file, @file{calc.y}, will be used in the sequel:
+
+@example
+%token NUM STR
+%left '+' '-'
+%left '*'
+%%
+exp: exp '+' exp
+ | exp '-' exp
+ | exp '*' exp
+ | exp '/' exp
+ | NUM
+ ;
+useless: STR;
+%%
+@end example
+
+@command{bison} reports:
+
+@example
+calc.y: warning: 1 nonterminal useless in grammar
+calc.y: warning: 1 rule useless in grammar
+calc.y:11.1-7: warning: nonterminal useless in grammar: useless
+calc.y:11.10-12: warning: rule useless in grammar: useless: STR
+calc.y: conflicts: 7 shift/reduce
+@end example
+
+When given @option{--report=state}, in addition to @file{calc.tab.c}, it
+creates a file @file{calc.output} with contents detailed below. The
+order of the output and the exact presentation might vary, but the
+interpretation is the same.
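+
+For instance, both files can be obtained with:
+
+@example
+bison --report=state calc.y
+@end example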
+
+The first section includes details on conflicts that were solved thanks
+to precedence and/or associativity:
+
+@example
+Conflict in state 8 between rule 2 and token '+' resolved as reduce.
+Conflict in state 8 between rule 2 and token '-' resolved as reduce.
+Conflict in state 8 between rule 2 and token '*' resolved as shift.
+@exdent @dots{}
+@end example
+
+@noindent
+The next section lists states that still have conflicts.
+
+@example
+State 8 conflicts: 1 shift/reduce
+State 9 conflicts: 1 shift/reduce
+State 10 conflicts: 1 shift/reduce
+State 11 conflicts: 4 shift/reduce
+@end example
+
+@noindent
+@cindex token, useless
+@cindex useless token
+@cindex nonterminal, useless
+@cindex useless nonterminal
+@cindex rule, useless
+@cindex useless rule
+The next section reports useless tokens, nonterminals, and rules. Useless
+nonterminals and rules are removed in order to produce a smaller parser,
+but useless tokens are preserved, since they might be used by the
+scanner (note the difference between ``useless'' and ``unused''
+below):
+
+@example
+Nonterminals useless in grammar:
+ useless
+
+Terminals unused in grammar:
+ STR
+
+Rules useless in grammar:
+#6 useless: STR;
+@end example
+
+@noindent
+The next section reproduces the exact grammar that Bison used:
+
+@example
+Grammar
+
+ Number, Line, Rule
+ 0 5 $accept -> exp $end
+ 1 5 exp -> exp '+' exp
+ 2 6 exp -> exp '-' exp
+ 3 7 exp -> exp '*' exp
+ 4 8 exp -> exp '/' exp
+ 5 9 exp -> NUM
+@end example
+
+@noindent
+and reports the uses of the symbols:
+
+@example
+Terminals, with rules where they appear
+
+$end (0) 0
+'*' (42) 3
+'+' (43) 1
+'-' (45) 2
+'/' (47) 4
+error (256)
+NUM (258) 5
+
+Nonterminals, with rules where they appear
+
+$accept (8)
+ on left: 0
+exp (9)
+ on left: 1 2 3 4 5, on right: 0 1 2 3 4
+@end example
+
+@noindent
+@cindex item
+@cindex pointed rule
+@cindex rule, pointed
+Bison then proceeds to the automaton itself, describing each state
+with its set of @dfn{items}, also known as @dfn{pointed rules}. Each
+item is a production rule together with a point (marked by @samp{.})
+that marks the position of the input cursor.
+
+@example
+state 0
+
+ $accept -> . exp $ (rule 0)
+
+ NUM shift, and go to state 1
+
+ exp go to state 2
+@end example
+
+This reads as follows: ``state 0 corresponds to being at the very
+beginning of the parsing, in the initial rule, right before the start
+symbol (here, @code{exp}). When the parser returns to this state right
+after having reduced a rule that produced an @code{exp}, the control
+flow jumps to state 2. If there is no such transition on a nonterminal
+symbol, and the lookahead is a @code{NUM}, then this token is shifted on
+the parse stack, and the control flow jumps to state 1. Any other
+lookahead triggers a syntax error.''
+
+@cindex core, item set
+@cindex item set core
+@cindex kernel, item set
+Even though the only active rule in state 0 seems to be rule 0, the
+report lists @code{NUM} as a lookahead token because @code{NUM} can be
+at the beginning of any rule deriving an @code{exp}. By default Bison
+reports the so-called @dfn{core} or @dfn{kernel} of the item set, but if
+you want to see more detail you can invoke @command{bison} with
+@option{--report=itemset} to list all the items, including those that can
+be derived:
+
+@example
+state 0
+
+ $accept -> . exp $ (rule 0)
+ exp -> . exp '+' exp (rule 1)
+ exp -> . exp '-' exp (rule 2)
+ exp -> . exp '*' exp (rule 3)
+ exp -> . exp '/' exp (rule 4)
+ exp -> . NUM (rule 5)
+
+ NUM shift, and go to state 1
+
+ exp go to state 2
+@end example
+
+@noindent
+In state 1...
+
+@example
+state 1
+
+ exp -> NUM . (rule 5)
+
+ $default reduce using rule 5 (exp)
+@end example
+
+@noindent
+rule 5, @samp{exp: NUM;}, is completed. Whatever the lookahead token
+(@samp{$default}), the parser will reduce it. If it was coming from
+state 0, then after this reduction it will return to state 0 and
+jump to state 2 (@samp{exp go to state 2}).
+
+@example
+state 2
+
+ $accept -> exp . $ (rule 0)
+ exp -> exp . '+' exp (rule 1)
+ exp -> exp . '-' exp (rule 2)
+ exp -> exp . '*' exp (rule 3)
+ exp -> exp . '/' exp (rule 4)
+
+ $ shift, and go to state 3
+ '+' shift, and go to state 4
+ '-' shift, and go to state 5
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+@end example
+
+@noindent
+In state 2, the automaton can only shift a symbol. For instance,
+because of the item @samp{exp -> exp . '+' exp}, if the lookahead is
+@samp{+}, it will be shifted onto the parse stack, and the automaton
+control will jump to state 4, corresponding to the item @samp{exp -> exp
+'+' . exp}. Since there is no default action, any token other than
+those listed above will trigger a syntax error.
+
+@cindex accepting state
+State 3 is called the @dfn{final state}, or the @dfn{accepting
+state}:
+
+@example
+state 3
+
+ $accept -> exp $ . (rule 0)
+
+ $default accept
+@end example
+
+@noindent
+the initial rule is completed (the start symbol and the end
+of input were read); parsing exits successfully.
+
+The interpretation of states 4 to 7 is straightforward, and is left to
+the reader.
+
+@example
+state 4
+
+ exp -> exp '+' . exp (rule 1)
+
+ NUM shift, and go to state 1
+
+ exp go to state 8
+
+state 5
+
+ exp -> exp '-' . exp (rule 2)
+
+ NUM shift, and go to state 1
+
+ exp go to state 9
+
+state 6
+
+ exp -> exp '*' . exp (rule 3)
+
+ NUM shift, and go to state 1
+
+ exp go to state 10
+
+state 7
+
+ exp -> exp '/' . exp (rule 4)
+
+ NUM shift, and go to state 1
+
+ exp go to state 11
+@end example
+
+As was announced at the beginning of the report, @samp{State 8 conflicts:
+1 shift/reduce}:
+
+@example
+state 8
+
+ exp -> exp . '+' exp (rule 1)
+ exp -> exp '+' exp . (rule 1)
+ exp -> exp . '-' exp (rule 2)
+ exp -> exp . '*' exp (rule 3)
+ exp -> exp . '/' exp (rule 4)
+
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+
+ '/' [reduce using rule 1 (exp)]
+ $default reduce using rule 1 (exp)
+@end example
+
+Indeed, there are two actions associated with the lookahead @samp{/}:
+either shifting (and going to state 7), or reducing rule 1. The
+conflict means that either the grammar is ambiguous, or the parser lacks
+information to make the right decision. Indeed the grammar is
+ambiguous: since we did not specify the precedence of @samp{/}, the
+sentence @samp{NUM + NUM / NUM} can be parsed as @samp{NUM + (NUM /
+NUM)}, which corresponds to shifting @samp{/}, or as @samp{(NUM + NUM) /
+NUM}, which corresponds to reducing rule 1.
+
+Because in deterministic parsing a single decision can be made, Bison
+arbitrarily chose to disable the reduction (@pxref{Shift/Reduce, ,
+Shift/Reduce Conflicts}). Discarded actions are reported between
+square brackets.
+
+Note that all the previous states had a single possible action: either
+shifting the next token and going to the corresponding state, or
+reducing a single rule. In the other cases, i.e., when shifting
+@emph{and} reducing is possible or when @emph{several} reductions are
+possible, the lookahead is required to select the action. State 8 is
+one such state: if the lookahead is @samp{*} or @samp{/} then the action
+is shifting, otherwise the action is reducing rule 1. In other words,
+the first two items, corresponding to rule 1, are not eligible when the
+lookahead token is @samp{*}, since we specified that @samp{*} has higher
+precedence than @samp{+}. More generally, some items are eligible only
+with some set of possible lookahead tokens. When run with
+@option{--report=lookahead}, Bison specifies these lookahead tokens:
+
+@example
+state 8
+
+ exp -> exp . '+' exp (rule 1)
+ exp -> exp '+' exp . [$, '+', '-', '/'] (rule 1)
+ exp -> exp . '-' exp (rule 2)
+ exp -> exp . '*' exp (rule 3)
+ exp -> exp . '/' exp (rule 4)
+
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+
+ '/' [reduce using rule 1 (exp)]
+ $default reduce using rule 1 (exp)
+@end example
+
+The remaining states are similar:
+
+@example
+state 9
+
+ exp -> exp . '+' exp (rule 1)
+ exp -> exp . '-' exp (rule 2)
+ exp -> exp '-' exp . (rule 2)
+ exp -> exp . '*' exp (rule 3)
+ exp -> exp . '/' exp (rule 4)
+
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+
+ '/' [reduce using rule 2 (exp)]
+ $default reduce using rule 2 (exp)
+
+state 10
+
+ exp -> exp . '+' exp (rule 1)
+ exp -> exp . '-' exp (rule 2)
+ exp -> exp . '*' exp (rule 3)
+ exp -> exp '*' exp . (rule 3)
+ exp -> exp . '/' exp (rule 4)
+
+ '/' shift, and go to state 7
+
+ '/' [reduce using rule 3 (exp)]
+ $default reduce using rule 3 (exp)
+
+state 11
+
+ exp -> exp . '+' exp (rule 1)
+ exp -> exp . '-' exp (rule 2)
+ exp -> exp . '*' exp (rule 3)
+ exp -> exp . '/' exp (rule 4)
+ exp -> exp '/' exp . (rule 4)
+
+ '+' shift, and go to state 4
+ '-' shift, and go to state 5
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+
+ '+' [reduce using rule 4 (exp)]
+ '-' [reduce using rule 4 (exp)]
+ '*' [reduce using rule 4 (exp)]
+ '/' [reduce using rule 4 (exp)]
+ $default reduce using rule 4 (exp)
+@end example
+
+@noindent
+Observe that state 11 contains conflicts not only due to the lack of
+precedence of @samp{/} with respect to @samp{+}, @samp{-}, and
+@samp{*}, but also because the
+associativity of @samp{/} is not specified.
+
+
+@node Tracing
+@section Tracing Your Parser
+@findex yydebug
+@cindex debugging
+@cindex tracing the parser
+
+If a Bison grammar compiles properly but doesn't do what you want when it
+runs, the @code{yydebug} parser-trace feature can help you figure out why.
+
+There are several means to enable compilation of trace facilities:
+
+@table @asis
+@item the macro @code{YYDEBUG}
+@findex YYDEBUG
+Define the macro @code{YYDEBUG} to a nonzero value when you compile the
+parser. This is compliant with @acronym{POSIX} Yacc. You could use
+@samp{-DYYDEBUG=1} as a compiler option or you could put @samp{#define
+YYDEBUG 1} in the prologue of the grammar file (@pxref{Prologue, , The
+Prologue}).
+
+@item the option @option{-t}, @option{--debug}
+Use the @samp{-t} option when you run Bison (@pxref{Invocation,
+,Invoking Bison}). This is @acronym{POSIX} compliant too.
+
+@item the directive @samp{%debug}
+@findex %debug
+Add the @code{%debug} directive (@pxref{Decl Summary, ,Bison Declaration
+Summary}). This Bison extension is maintained for backward
+compatibility with previous versions of Bison.
+
+@item the variable @samp{parse.trace}
+@findex %define parse.trace
+Add the @samp{%define parse.trace} directive (@pxref{Decl Summary,
+,Bison Declaration Summary}), or pass the @option{-Dparse.trace} option
+(@pxref{Bison Options}). This is a Bison extension, which is especially
+useful for languages that don't use a preprocessor. Unless
+@acronym{POSIX} and Yacc portability matter to you, this is the
+preferred solution.
+@end table
+
+We suggest that you always enable the trace option so that debugging is
+always possible.
+
+The trace facility outputs messages with macro calls of the form
+@code{YYFPRINTF (stderr, @var{format}, @var{args})} where
+@var{format} and @var{args} are the usual @code{printf} format and variadic
+arguments. If you define @code{YYDEBUG} to a nonzero value but do not
+define @code{YYFPRINTF}, @code{<stdio.h>} is automatically included
+and @code{YYFPRINTF} is defined to @code{fprintf}.
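+
+If you would rather send the traces somewhere else than @code{stderr},
+one possibility is to define @code{YYFPRINTF} yourself in the prologue.
+The following sketch appends the traces to a @file{parse.log} file; the
+file name and the helper function are assumptions made for the example:
+
+@example
+%@{
+#include <stdio.h>
+#include <stdarg.h>
+
+/* Hypothetical trace sink: append parser traces to parse.log.  */
+static void
+trace_printf (FILE *stream, char const *format, ...)
+@{
+  static FILE *log;
+  va_list args;
+  (void) stream;                /* The parser passes stderr; ignore it.  */
+  if (!log)
+    log = fopen ("parse.log", "w");
+  if (log)
+    @{
+      va_start (args, format);
+      vfprintf (log, format, args);
+      va_end (args);
+    @}
+@}
+#define YYFPRINTF trace_printf
+%@}
+@end example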
+
+Once you have compiled the program with trace facilities, the way to
+request a trace is to store a nonzero value in the variable @code{yydebug}.
+You can do this by making the C code do it (in @code{main}, perhaps), or
+you can alter the value with a C debugger.
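+
+For example, assuming the default @code{yyparse} signature and a
+@code{main} written outside the grammar file (hence the @code{extern}
+declarations):
+
+@example
+@group
+extern int yydebug;           /* Defined in the parser file.  */
+int yyparse (void);
+
+int
+main (void)
+@{
+  yydebug = 1;                /* Produce a trace of each parser step.  */
+  return yyparse ();
+@}
+@end group
+@end example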
+
+Each step taken by the parser when @code{yydebug} is nonzero produces a
+line or two of trace information, written on @code{stderr}. The trace
+messages tell you these things:
+
+@itemize @bullet
+@item
+Each time the parser calls @code{yylex}, what kind of token was read.
+
+@item
+Each time a token is shifted, the depth and complete contents of the
+state stack (@pxref{Parser States}).
+
+@item
+Each time a rule is reduced, which rule it is, and the complete contents
+of the state stack afterward.
+@end itemize
+
+To make sense of this information, it helps to refer to the listing file
+produced by the Bison @samp{-v} option (@pxref{Invocation, ,Invoking
+Bison}). This file shows the meaning of each state in terms of
+positions in various rules, and also what each state will do with each
+possible input token. As you read the successive trace messages, you
+can see that the parser is functioning according to its specification in
+the listing file. Eventually you will arrive at the place where
+something undesirable happens, and you will see which parts of the
+grammar are to blame.
+
+The parser file is a C program and you can use C debuggers on it, but it's
+not easy to interpret what it is doing. The parser function is a
+finite-state machine interpreter, and aside from the actions it executes
+the same code over and over. Only the values of variables show where in
+the grammar it is working.
+
+@findex YYPRINT
+The debugging information normally gives the token type of each token
+read, but not its semantic value. You can optionally define a macro
+named @code{YYPRINT} to provide a way to print the value. If you define
+@code{YYPRINT}, it should take three arguments. The parser will pass a
+standard I/O stream, the numeric code for the token type, and the token
+value (from @code{yylval}).
+
+Here is an example of @code{YYPRINT} suitable for the multi-function
+calculator (@pxref{Mfcalc Declarations, ,Declarations for @code{mfcalc}}):
+
+@smallexample
+%@{
+ static void print_token_value (FILE *, int, YYSTYPE);
+ #define YYPRINT(file, type, value) print_token_value (file, type, value)
+%@}
+
+@dots{} %% @dots{} %% @dots{}
+
+static void
+print_token_value (FILE *file, int type, YYSTYPE value)
+@{
+ if (type == VAR)
+ fprintf (file, "%s", value.tptr->name);
+ else if (type == NUM)
+ fprintf (file, "%d", value.val);
+@}
+@end smallexample
+
+@c ================================================= Invoking Bison
+
+@node Invocation
+@chapter Invoking Bison
+@cindex invoking Bison
+@cindex Bison invocation
+@cindex options for invoking Bison
+
+The usual way to invoke Bison is as follows:
+
+@example
+bison @var{infile}
+@end example
+
+Here @var{infile} is the grammar file name, which usually ends in
+@samp{.y}. The parser file's name is made by replacing the @samp{.y}
+with @samp{.tab.c} and removing any leading directory. Thus, running
+@samp{bison foo.y} yields @file{foo.tab.c}, and running
+@samp{bison hack/foo.y} also yields @file{foo.tab.c}. It's also
+possible, in case you are writing C++ code instead of C in your grammar
+file, to name it @file{foo.ypp} or @file{foo.y++}. Then the output
+files take an extension matching the input's (respectively
+@file{foo.tab.cpp} and @file{foo.tab.c++}).
+This feature takes effect with all options that manipulate file names like
+@samp{-o} or @samp{-d}.
+
+For example:
+
+@example
+bison -d @var{infile.yxx}
+@end example
+@noindent
+will produce @file{infile.tab.cxx} and @file{infile.tab.hxx}, and
+
+@example
+bison -d -o @var{output.c++} @var{infile.y}
+@end example
+@noindent
+will produce @file{output.c++} and @file{output.h++}.
+
+For compatibility with @acronym{POSIX}, the standard Bison
+distribution also contains a shell script called @command{yacc} that
+invokes Bison with the @option{-y} option.
+
+@menu
+* Bison Options:: All the options described in detail,
+ in alphabetical order by short options.
+* Option Cross Key:: Alphabetical list of long options.
+* Yacc Library:: Yacc-compatible @code{yylex} and @code{main}.
+@end menu
+
+@node Bison Options
+@section Bison Options
+
+Bison supports both traditional single-letter options and mnemonic long
+option names. Long option names are indicated with @samp{--} instead of
+@samp{-}. Abbreviations for option names are allowed as long as they
+are unique. When a long option takes an argument, like
+@samp{--file-prefix}, connect the option name and the argument with
+@samp{=}.
+
+Here is a list of options that can be used with Bison, alphabetized by
+short option. It is followed by a cross key alphabetized by long
+option.
+
+@c Please, keep this ordered as in `bison --help'.
+@noindent
+Operation modes:
+@table @option
+@item -h
+@itemx --help
+Print a summary of the command-line options to Bison and exit.
+
+@item -V
+@itemx --version
+Print the version number of Bison and exit.
+
+@item --print-localedir
+Print the name of the directory containing locale-dependent data.
+
+@item --print-datadir
+Print the name of the directory containing skeletons and XSLT.
+
+@item -y
+@itemx --yacc
+Act more like the traditional Yacc command. This can cause
+different diagnostics to be generated, and may change behavior in
+other minor ways. Most importantly, imitate Yacc's output
+file name conventions, so that the parser output file is called
+@file{y.tab.c}, and the other outputs are called @file{y.output} and
+@file{y.tab.h}.
+Also, if generating a deterministic parser in C, generate @code{#define}
+statements in addition to an @code{enum} to associate token numbers with token
+names.
+Thus, the following shell script can substitute for Yacc, and the Bison
+distribution contains such a script for compatibility with @acronym{POSIX}:
+
+@example
+#! /bin/sh
+bison -y "$@@"
+@end example
+
+The @option{-y}/@option{--yacc} option is intended for use with
+traditional Yacc grammars. If your grammar uses a Bison extension
+like @samp{%glr-parser}, Bison might not be Yacc-compatible even if
+this option is specified.
+
+@item -W [@var{category}]
+@itemx --warnings[=@var{category}]
+Output warnings falling in @var{category}. @var{category} can be one
+of:
+@table @code
+@item midrule-values
+Warn about mid-rule values that are set but not used within any of the actions
+of the parent rule.
+For example, warn about unused @code{$2} in:
+
+@example
+exp: '1' @{ $$ = 1; @} '+' exp @{ $$ = $1 + $4; @};
+@end example
+
+Also warn about mid-rule values that are used but not set.
+For example, warn about unset @code{$$} in the mid-rule action in:
+
+@example
+ exp: '1' @{ $1 = 1; @} '+' exp @{ $$ = $2 + $4; @};
+@end example
+
+These warnings are not enabled by default since they sometimes prove to
+be false alarms in existing grammars employing the Yacc constructs
+@code{$0} or @code{$-@var{n}} (where @var{n} is some positive integer).
+
+
+@item yacc
+Incompatibilities with @acronym{POSIX} Yacc.
+
+@item all
+All the warnings.
+@item none
+Turn off all the warnings.
+@item error
+Treat warnings as errors.
+@end table
+
+A category can be turned off by prefixing its name with @samp{no-}. For
+instance, @option{-Wno-yacc} will hide the warnings about
+@acronym{POSIX} Yacc incompatibilities.
+@end table
+
+@noindent
+Tuning the parser:
+
+@table @option
+@item -t
+@itemx --debug
+In the parser file, define the macro @code{YYDEBUG} to 1 if it is not
+already defined, so that the debugging facilities are compiled.
+@xref{Tracing, ,Tracing Your Parser}.
+
+@item -D @var{name}[=@var{value}]
+@itemx --define=@var{name}[=@var{value}]
+@itemx -F @var{name}[=@var{value}]
+@itemx --force-define=@var{name}[=@var{value}]
+Each of these is equivalent to @samp{%define @var{name} "@var{value}"}
+(@pxref{Decl Summary, ,%define}) except that Bison processes multiple
+definitions for the same @var{name} as follows:
+
+@itemize
+@item
+Bison quietly ignores all command-line definitions for @var{name} except
+the last.
+@item
+If that command-line definition is specified by a @code{-D} or
+@code{--define}, Bison reports an error for any @code{%define}
+definition for @var{name}.
+@item
+If that command-line definition is specified by a @code{-F} or
+@code{--force-define} instead, Bison quietly ignores all @code{%define}
+definitions for @var{name}.
+@item
+Otherwise, Bison reports an error if there are multiple @code{%define}
+definitions for @var{name}.
+@end itemize
+
+You should avoid using @code{-F} and @code{--force-define} in your
+makefiles unless you are confident that it is safe to quietly ignore any
+conflicting @code{%define} that may be added to the grammar file.
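+
+For example, to enable the trace facility from the command line without
+editing the grammar file (assuming the grammar does not define
+@code{parse.trace} itself):
+
+@example
+bison -Dparse.trace calc.y
+@end example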
+
+@item -L @var{language}
+@itemx --language=@var{language}
+Specify the programming language for the generated parser, as if
+@code{%language} was specified (@pxref{Decl Summary, , Bison Declaration
+Summary}). Currently supported languages include C, C++, and Java.
+@var{language} is case-insensitive.
+
+This option is experimental and its effect may be modified in future
+releases.
+
+@item --locations
+Pretend that @code{%locations} was specified. @xref{Decl Summary}.
+
+@item -p @var{prefix}
+@itemx --name-prefix=@var{prefix}
+Pretend that @code{%name-prefix "@var{prefix}"} was specified.
+@xref{Decl Summary}.
+
+@item -l
+@itemx --no-lines
+Don't put any @code{#line} preprocessor commands in the parser file.
+Ordinarily Bison puts them in the parser file so that the C compiler
+and debuggers will associate errors with your source file, the
+grammar file. This option causes them to associate errors with the
+parser file, treating it as an independent source file in its own right.
+
+@item -S @var{file}
+@itemx --skeleton=@var{file}
+Specify the skeleton to use, similar to @code{%skeleton}
+(@pxref{Decl Summary, , Bison Declaration Summary}).
+
+@c You probably don't need this option unless you are developing Bison.
+@c You should use @option{--language} if you want to specify the skeleton for a
+@c different language, because it is clearer and because it will always
+@c choose the correct skeleton for non-deterministic or push parsers.
+
+If @var{file} does not contain a @code{/}, @var{file} is the name of a skeleton
+file in the Bison installation directory.
+If it does, @var{file} is an absolute file name or a file name relative to the
+current working directory.
+This is similar to how most shells resolve commands.
+
+@item -k
+@itemx --token-table
+Pretend that @code{%token-table} was specified. @xref{Decl Summary}.
+@end table
+
+@noindent
+Adjust the output:
+
+@table @option
+@item --defines[=@var{file}]
+Pretend that @code{%defines} was specified, i.e., write an extra output
+file containing macro definitions for the token type names defined in
+the grammar, as well as a few other declarations. @xref{Decl Summary}.
+
+@item -d
+This is the same as @code{--defines} except @code{-d} does not accept a
+@var{file} argument since @acronym{POSIX} Yacc requires that @code{-d} can be bundled
+with other short options.
+
+@item -b @var{file-prefix}
+@itemx --file-prefix=@var{prefix}
+Pretend that @code{%file-prefix} was specified, i.e., specify prefix to use
+for all Bison output file names. @xref{Decl Summary}.
+
+@item -r @var{things}
+@itemx --report=@var{things}
+Write an extra output file containing verbose descriptions of the
+requested @var{things}, a comma-separated list among the following:
+
+@table @code
+@item state
+Description of the grammar, conflicts (resolved and unresolved), and
+parser's automaton.
+
+@item lookahead
+Implies @code{state} and augments the description of the automaton with
+each rule's lookahead set.
+
+@item itemset
+Implies @code{state} and augments the description of the automaton with
+the full set of items for each state, instead of its core only.
+@end table
+
+@item --report-file=@var{file}
+Specify the @var{file} for the verbose description.
+
+@item -v
+@itemx --verbose
+Pretend that @code{%verbose} was specified, i.e., write an extra output
+file containing verbose descriptions of the grammar and
+parser. @xref{Decl Summary}.
+
+@item -o @var{file}
+@itemx --output=@var{file}
+Specify the @var{file} for the parser file.
+
+The other output files' names are constructed from @var{file} as
+described under the @samp{-v} and @samp{-d} options.
+
+@item -g [@var{file}]
+@itemx --graph[=@var{file}]
+Output a graphical representation of the parser's
+automaton computed by Bison, in @uref{http://www.graphviz.org/, Graphviz}
+@uref{http://www.graphviz.org/doc/info/lang.html, @acronym{DOT}} format.
+@code{@var{file}} is optional.
+If omitted and the grammar file is @file{foo.y}, the output file will be
+@file{foo.dot}.
+
+@item -x [@var{file}]
+@itemx --xml[=@var{file}]
+Output an XML report of the parser's automaton computed by Bison.
+@code{@var{file}} is optional.
+If omitted and the grammar file is @file{foo.y}, the output file will be
+@file{foo.xml}.
+(The current XML schema is experimental and may evolve.
+More user feedback will help to stabilize it.)
+@end table
+
+@node Option Cross Key
+@section Option Cross Key
+
+Here is a list of options, alphabetized by long option, to help you find
+the corresponding short option and directive.
+
+@multitable {@option{--force-define=@var{name}[=@var{value}]}} {@option{-F @var{name}[=@var{value}]}} {@code{%nondeterministic-parser}}
+@headitem Long Option @tab Short Option @tab Bison Directive
+@include cross-options.texi
+@end multitable