+@group
+%token BOGUS
+@dots{}
+%%
+@dots{}
+return_spec:
+ type
+ | name ':' type
+ /* This rule is never used. */
+ | ID BOGUS
+ ;
+@end group
+@end example
+
+This corrects the problem because it introduces the possibility of an
+additional active rule in the context after the @code{ID} at the beginning of
+@code{return_spec}. This rule is not active in the corresponding context
+in a @code{param_spec}, so the two contexts receive distinct parser states.
+As long as the token @code{BOGUS} is never generated by @code{yylex},
+the added rule cannot alter the way actual input is parsed.
+
+In this particular example, there is another way to solve the problem:
+rewrite the rule for @code{return_spec} to use @code{ID} directly
+instead of via @code{name}. This also causes the two confusing
+contexts to have different sets of active rules, because the one for
+@code{return_spec} activates the altered rule for @code{return_spec}
+rather than the one for @code{name}.
+
+@example
+param_spec:
+ type
+ | name_list ':' type
+ ;
+return_spec:
+ type
+ | ID ':' type
+ ;
+@end example
+
+For a more detailed exposition of @acronym{LALR}(1) parsers and parser
+generators, please see:
+Frank DeRemer and Thomas Pennello, Efficient Computation of
+@acronym{LALR}(1) Look-Ahead Sets, @cite{@acronym{ACM} Transactions on
+Programming Languages and Systems}, Vol.@: 4, No.@: 4 (October 1982),
+pp.@: 615--649 @uref{http://doi.acm.org/10.1145/69622.357187}.
+
+@node Generalized LR Parsing
+@section Generalized @acronym{LR} (@acronym{GLR}) Parsing
+@cindex @acronym{GLR} parsing
+@cindex generalized @acronym{LR} (@acronym{GLR}) parsing
+@cindex ambiguous grammars
+@cindex nondeterministic parsing
+
+Bison produces @emph{deterministic} parsers that choose uniquely
+when to reduce and which reduction to apply
+based on a summary of the preceding input and on one extra token of lookahead.
+As a result, normal Bison handles a proper subset of the family of
+context-free languages.
+Ambiguous grammars, since they have strings with more than one possible
+sequence of reductions, cannot have deterministic parsers in this sense.
+The same is true of languages that require more than one symbol of
+lookahead, since the parser lacks the information necessary to make a
+decision at the point it must be made in a shift-reduce parser.
+Finally, as previously mentioned (@pxref{Mystery Conflicts}),
+there are languages where Bison's default choice of how to
+summarize the input seen so far loses necessary information.
+
+When you use the @samp{%glr-parser} declaration in your grammar file,
+Bison generates a parser that uses a different algorithm, called
+Generalized @acronym{LR} (or @acronym{GLR}). A Bison @acronym{GLR}
+parser uses the same basic
+algorithm for parsing as an ordinary Bison parser, but behaves
+differently in cases where there is a shift-reduce conflict that has not
+been resolved by precedence rules (@pxref{Precedence}) or a
+reduce-reduce conflict. When a @acronym{GLR} parser encounters such a
+situation, it
+effectively @emph{splits} into several parsers, one for each possible
+shift or reduction. These parsers then proceed as usual, consuming
+tokens in lock-step. Some of the stacks may encounter other conflicts
+and split further, with the result that instead of a sequence of states,
+a Bison @acronym{GLR} parsing stack is what is in effect a tree of states.
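+
+For example, the following sketch (it is only illustrative, not one of
+the complete examples given elsewhere in this manual) declares a
+@acronym{GLR} parser for a small grammar with a genuine ambiguity:
+@samp{ID '(' ID ')' ';'} can be parsed either as a declaration or as an
+expression statement, so the parser splits at that point. How the
+competing parses are eventually resolved is described below.
+
+@example
+@group
+%glr-parser
+%token ID
+%%
+stmt: decl
+    | expr ';'
+    ;
+decl: ID ID ';'
+    | ID '(' ID ')' ';'
+    ;
+expr: ID '(' ID ')'
+    ;
+@end group
+@end example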
+
+In effect, each stack represents a guess as to what the proper parse
+is. Additional input may indicate that a guess was wrong, in which case
+the appropriate stack silently disappears. Otherwise, the semantic
+actions generated in each stack are saved, rather than being executed
+immediately. When a stack disappears, its saved semantic actions never
+get executed. When a reduction causes two stacks to become equivalent,
+their sets of semantic actions are both saved with the state that
+results from the reduction. We say that two stacks are equivalent
+when they both represent the same sequence of states,
+and each pair of corresponding states represents a
+grammar symbol that produces the same segment of the input token
+stream.
+
+Whenever the parser makes a transition from having multiple
+states to having one, it reverts to the normal deterministic parsing
+algorithm, after resolving and executing the saved-up actions.
+At this transition, some of the states on the stack will have semantic
+values that are sets (actually multisets) of possible actions. The
+parser tries to pick one of the actions by first finding one whose rule
+has the highest dynamic precedence, as set by the @samp{%dprec}
+declaration. Otherwise, if the alternative actions are not ordered by
+precedence, but the same merging function is declared for both
+rules by the @samp{%merge} declaration,
+Bison resolves and evaluates both and then calls the merge function on
+the result. Otherwise, it reports an ambiguity.
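+
+Continuing the sketch above, the ambiguity could be resolved statically
+with @samp{%dprec}, or dynamically with @samp{%merge}. Here
+@code{stmt_merge} is a user-supplied function, not something provided by
+Bison:
+
+@example
+@group
+stmt: decl       %dprec 2
+    | expr ';'   %dprec 1
+    ;
+@end group
+
+@group
+/* Or, keep both interpretations and combine their semantic values.  */
+stmt: decl       %merge <stmt_merge>
+    | expr ';'   %merge <stmt_merge>
+    ;
+@end group
+@end example
+
+@noindent
+With @samp{%merge}, you would declare something like
+@samp{static YYSTYPE stmt_merge (YYSTYPE x0, YYSTYPE x1);} in the
+prologue; Bison calls it with the semantic values of the two competing
+parses.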
+
+It is possible to use a data structure for the @acronym{GLR} parsing tree that
+permits the processing of any @acronym{LR}(1) grammar in linear time (in the
+size of the input), any unambiguous (not necessarily
+@acronym{LR}(1)) grammar in
+quadratic worst-case time, and any general (possibly ambiguous)
+context-free grammar in cubic worst-case time. However, Bison currently
+uses a simpler data structure that requires time proportional to the
+length of the input times the maximum number of stacks required for any
+prefix of the input. Thus, really ambiguous or nondeterministic
+grammars can require exponential time and space to process. Such badly
+behaving examples, however, are not generally of practical interest.
+Usually, nondeterminism in a grammar is local---the parser is ``in
+doubt'' only for a few tokens at a time. Therefore, the current data
+structure should generally be adequate. On @acronym{LR}(1) portions of a
+grammar, in particular, it is only slightly slower than with the
+deterministic @acronym{LR}(1) Bison parser.
+
+For a more detailed exposition of @acronym{GLR} parsers, please see: Elizabeth
+Scott, Adrian Johnstone and Shamsa Sadaf Hussain, Tomita-Style
+Generalised @acronym{LR} Parsers, Royal Holloway, University of
+London, Department of Computer Science, TR-00-12,
+@uref{http://www.cs.rhul.ac.uk/research/languages/publications/tomita_style_1.ps},
+(2000-12-24).
+
+@node Memory Management
+@section Memory Management, and How to Avoid Memory Exhaustion
+@cindex memory exhaustion
+@cindex memory management
+@cindex stack overflow
+@cindex parser stack overflow
+@cindex overflow of parser stack
+
+The Bison parser stack can run out of memory if too many tokens are shifted and
+not reduced. When this happens, the parser function @code{yyparse}
+calls @code{yyerror} and then returns 2.
+
+Because Bison parsers have growing stacks, hitting the upper limit
+usually results from using a right recursion instead of a left
+recursion (@pxref{Recursion, ,Recursive Rules}).
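+
+For instance, here is a sketch of the two styles; the left-recursive
+form keeps the stack shallow because each @code{item} is reduced as soon
+as it has been read:
+
+@example
+@group
+/* Right recursion: every item is shifted before any reduction,
+   so the stack grows with the length of the sequence.  */
+sequence: /* empty */
+        | item sequence
+        ;
+@end group
+
+@group
+/* Left recursion: the stack depth stays bounded.  */
+sequence: /* empty */
+        | sequence item
+        ;
+@end group
+@end example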
+
+@vindex YYMAXDEPTH
+By defining the macro @code{YYMAXDEPTH}, you can control how deep the
+parser stack can become before memory is exhausted. Define the
+macro with a value that is an integer. This value is the maximum number
+of tokens that can be shifted (and not reduced) before overflow.
+
+The stack space allowed is not necessarily allocated. If you specify a
+large value for @code{YYMAXDEPTH}, the parser normally allocates a small
+stack at first, and then makes it bigger by stages as needed. This
+increasing allocation happens automatically and silently. Therefore,
+you do not need to make @code{YYMAXDEPTH} painfully small merely to save
+space for ordinary inputs that do not need much stack.
+
+However, do not allow @code{YYMAXDEPTH} to be a value so large that
+arithmetic overflow could occur when calculating the size of the stack
+space. Also, do not allow @code{YYMAXDEPTH} to be less than
+@code{YYINITDEPTH}.
+
+@cindex default stack limit
+The default value of @code{YYMAXDEPTH}, if you do not define it, is
+10000.
+
+@vindex YYINITDEPTH
+You can control how much stack is allocated initially by defining the
+macro @code{YYINITDEPTH} to a positive integer. For the deterministic
+parser in C, this value must be a compile-time constant
+unless you are assuming C99 or some other target language or compiler
+that allows variable-length arrays. The default is 200.
+
+Do not allow @code{YYINITDEPTH} to be greater than @code{YYMAXDEPTH}.
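+
+For example, both limits can be adjusted in the prologue of the grammar
+file; the values shown here are only illustrative:
+
+@example
+%@{
+  /* Allow a deeper parse stack than the default of 10000 @dots{} */
+  #define YYMAXDEPTH 100000
+  /* @dots{} and start with a larger initial allocation than 200.  */
+  #define YYINITDEPTH 1000
+%@}
+@end example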
+
+@c FIXME: C++ output.
+Because of semantical differences between C and C++, the deterministic
+parsers in C produced by Bison cannot grow when compiled
+by C++ compilers. In this precise case (compiling a C parser as C++), we
+suggest that you increase @code{YYINITDEPTH}. The Bison maintainers hope to fix
+this deficiency in a future release.
+
+@node Error Recovery
+@chapter Error Recovery
+@cindex error recovery
+@cindex recovery from errors
+
+It is not usually acceptable to have a program terminate on a syntax
+error. For example, a compiler should recover sufficiently to parse the
+rest of the input file and check it for errors; a calculator should accept
+another expression.
+
+In a simple interactive command parser where each input is one line, it may
+be sufficient to allow @code{yyparse} to return 1 on error and have the
+caller ignore the rest of the input line when that happens (and then call
+@code{yyparse} again). But this is inadequate for a compiler, because it
+forgets all the syntactic context leading up to the error. A syntax error
+deep within a function in the compiler input should not cause the compiler
+to treat the following line like the beginning of a source file.
+
+@findex error
+You can define how to recover from a syntax error by writing rules to
+recognize the special token @code{error}. This is a terminal symbol that
+is always defined (you need not declare it) and reserved for error
+handling. The Bison parser generates an @code{error} token whenever a
+syntax error happens; if you have provided a rule to recognize this token
+in the current context, the parse can continue.
+
+For example:
+
+@example
+stmnts: /* empty string */
+ | stmnts '\n'
+ | stmnts exp '\n'
+ | stmnts error '\n'
+@end example
+
+The fourth rule in this example says that an error followed by a newline
+makes a valid addition to any @code{stmnts}.
+
+What happens if a syntax error occurs in the middle of an @code{exp}? The
+error recovery rule, interpreted strictly, applies to the precise sequence
+of a @code{stmnts}, an @code{error} and a newline. If an error occurs in
+the middle of an @code{exp}, there will probably be some additional tokens
+and subexpressions on the stack after the last @code{stmnts}, and there
+will be tokens to read before the next newline. So the rule is not
+applicable in the ordinary way.
+
+But Bison can force the situation to fit the rule, by discarding part of
+the semantic context and part of the input. First it discards states
+and objects from the stack until it gets back to a state in which the
+@code{error} token is acceptable. (This means that the subexpressions
+already parsed are discarded, back to the last complete @code{stmnts}.)
+At this point the @code{error} token can be shifted. Then, if the old
+lookahead token is not acceptable to be shifted next, the parser reads
+tokens and discards them until it finds a token which is acceptable. In
+this example, Bison reads and discards input until the next newline so
+that the fourth rule can apply. Note that discarded symbols are
+possible sources of memory leaks; see @ref{Destructor Decl, , Freeing
+Discarded Symbols}, for a means to reclaim this memory.
+
+The choice of error rules in the grammar is a choice of strategies for
+error recovery. A simple and useful strategy is to skip the rest of
+the current input line or the current statement if an error is detected:
+
+@example
+stmnt: error ';' /* On error, skip until ';' is read. */
+@end example
+
+It is also useful to recover to the matching close-delimiter of an
+opening-delimiter that has already been parsed. Otherwise the
+close-delimiter will probably appear to be unmatched, and generate another,
+spurious error message:
+
+@example
+primary: '(' expr ')'
+ | '(' error ')'
+ @dots{}
+ ;
+@end example
+
+Error recovery strategies are necessarily guesses. When they guess wrong,
+one syntax error often leads to another. In the above example, the error
+recovery rule guesses that an error is due to bad input within one
+@code{stmnt}. Suppose that instead a spurious semicolon is inserted in the
+middle of a valid @code{stmnt}. After the error recovery rule recovers
+from the first error, another syntax error will be found straightaway,
+since the text following the spurious semicolon is also an invalid
+@code{stmnt}.
+
+To prevent an outpouring of error messages, the parser will output no error
+message for another syntax error that happens shortly after the first; only
+after three consecutive input tokens have been successfully shifted will
+error messages resume.
+
+Note that rules which accept the @code{error} token may have actions, just
+as any other rules can.
+
+@findex yyerrok
+You can make error messages resume immediately by using the macro
+@code{yyerrok} in an action. If you do this in the error rule's action, no
+error messages will be suppressed. This macro requires no arguments;
+@samp{yyerrok;} is a valid C statement.
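+
+For example, in the @code{stmnts} grammar shown earlier, the error rule
+could resume error messages immediately (a sketch):
+
+@example
+stmnts: @dots{}
+      | stmnts error '\n'   @{ yyerrok; @}
+      ;
+@end example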
+
+@findex yyclearin
+The previous lookahead token is reanalyzed immediately after an error. If
+this is unacceptable, then the macro @code{yyclearin} may be used to clear
+this token. Write the statement @samp{yyclearin;} in the error rule's
+action.
+@xref{Action Features, ,Special Features for Use in Actions}.
+
+For example, suppose that on a syntax error, an error handling routine is
+called that advances the input stream to some point where parsing should
+once again commence. The next symbol returned by the lexical scanner is
+probably correct. The previous lookahead token ought to be discarded
+with @samp{yyclearin;}.
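+
+A sketch of such a rule follows; @code{skip_to_next_statement} stands
+for whatever resynchronization routine your application provides:
+
+@example
+stmnt: @dots{}
+     | error
+         @{ skip_to_next_statement ();  /* Advance the input stream.  */
+           yyclearin;  /* Discard the old lookahead token.  */ @}
+     ;
+@end example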
+
+@vindex YYRECOVERING
+The expression @code{YYRECOVERING ()} yields 1 when the parser
+is recovering from a syntax error, and 0 otherwise.
+Syntax error diagnostics are suppressed while recovering from a syntax
+error.
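+
+For example, an action can consult it to avoid piling its own
+diagnostics on top of a syntax error that is still being recovered from
+(a sketch):
+
+@example
+exp: @dots{}
+   | exp '/' exp
+       @{ if ($3 == 0 && !YYRECOVERING ())
+            yyerror ("division by zero");
+          $$ = $3 ? $1 / $3 : 0; @}
+   ;
+@end example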
+
+@node Context Dependency
+@chapter Handling Context Dependencies
+
+The Bison paradigm is to parse tokens first, then group them into larger
+syntactic units. In many languages, the meaning of a token is affected by
+its context. Although this violates the Bison paradigm, certain techniques
+(known as @dfn{kludges}) may enable you to write Bison parsers for such
+languages.
+
+@menu
+* Semantic Tokens:: Token parsing can depend on the semantic context.
+* Lexical Tie-ins:: Token parsing can depend on the syntactic context.
+* Tie-in Recovery:: Lexical tie-ins have implications for how
+ error recovery rules must be written.
+@end menu
+
+(Actually, ``kludge'' means any technique that gets its job done but is
+neither clean nor robust.)
+
+@node Semantic Tokens
+@section Semantic Info in Token Types
+
+The C language has a context dependency: the way an identifier is used
+depends on what its current meaning is. For example, consider this:
+
+@example
+foo (x);
+@end example
+
+This looks like a function call statement, but if @code{foo} is a typedef
+name, then this is actually a declaration of @code{x}. How can a Bison
+parser for C decide how to parse this input?
+
+The method used in @acronym{GNU} C is to have two different token types,
+@code{IDENTIFIER} and @code{TYPENAME}. When @code{yylex} finds an
+identifier, it looks up the current declaration of the identifier in order
+to decide which token type to return: @code{TYPENAME} if the identifier is
+declared as a typedef, @code{IDENTIFIER} otherwise.
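+
+A sketch of that part of @code{yylex} might look as follows; the symbol
+table interface (@code{lookup_symbol}, @code{sym->is_typedef}) and the
+@code{yylval} member name are hypothetical, not part of Bison or
+@acronym{GNU} C:
+
+@example
+@group
+  /* Inside yylex, once an identifier has been scanned into `name':  */
+  symbol *sym = lookup_symbol (name);
+  yylval.sym = sym;
+  return (sym && sym->is_typedef) ? TYPENAME : IDENTIFIER;
+@end group
+@end example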
+
+The grammar rules can then express the context dependency by the choice of
+token type to recognize. @code{IDENTIFIER} is accepted as an expression,
+but @code{TYPENAME} is not. @code{TYPENAME} can start a declaration, but
+@code{IDENTIFIER} cannot. In contexts where the meaning of the identifier
+is @emph{not} significant, such as in declarations that can shadow a
+typedef name, either @code{TYPENAME} or @code{IDENTIFIER} is
+accepted---there is one rule for each of the two token types.
+
+This technique is simple to use if the decision of which kinds of
+identifiers to allow is made at a place close to where the identifier is
+parsed. But in C this is not always so: C allows a declaration to
+redeclare a typedef name provided an explicit type has been specified
+earlier:
+
+@example
+typedef int foo, bar;
+int baz (void)
+@{
+ static bar (bar); /* @r{redeclare @code{bar} as static variable} */
+ extern foo foo (foo); /* @r{redeclare @code{foo} as function} */
+ return foo (bar);
+@}
+@end example
+
+Unfortunately, the name being declared is separated from the declaration
+construct itself by a complicated syntactic structure---the ``declarator''.
+
+As a result, part of the Bison parser for C needs to be duplicated, with
+all the nonterminal names changed: once for parsing a declaration in
+which a typedef name can be redefined, and once for parsing a
+declaration in which that can't be done. Here is a part of the
+duplication, with actions omitted for brevity:
+
+@example
+initdcl:
+          declarator maybeasm '='
+          init
+        | declarator maybeasm
+        ;
+
+notype_initdcl:
+          notype_declarator maybeasm '='
+          init
+        | notype_declarator maybeasm
+        ;
+@end example
+
+@noindent
+Here @code{initdcl} can redeclare a typedef name, but @code{notype_initdcl}
+cannot. The distinction between @code{declarator} and
+@code{notype_declarator} is the same sort of thing.
+
+There is some similarity between this technique and a lexical tie-in
+(described next), in that information which alters the lexical analysis is
+changed during parsing by other parts of the program. The difference is
+that here the information is global, and is used for other purposes in the
+program. A true lexical tie-in has a special-purpose flag controlled by
+the syntactic context.
+
+@node Lexical Tie-ins
+@section Lexical Tie-ins
+@cindex lexical tie-in
+
+One way to handle context dependency is the @dfn{lexical tie-in}: a flag
+which is set by Bison actions, whose purpose is to alter the way tokens are
+parsed.
+
+For example, suppose we have a language vaguely like C, but with a special
+construct @samp{hex (@var{hex-expr})}. After the keyword @code{hex} comes
+an expression in parentheses in which all integers are hexadecimal. In
+particular, the token @samp{a1b} must be treated as an integer rather than
+as an identifier if it appears in that context. Here is how you can do it:
+
+@example
+@group
+%@{
+ int hexflag;
+ int yylex (void);
+ void yyerror (char const *);
+%@}
+%%
+@dots{}
+@end group
+@group
+expr:   IDENTIFIER
+      | constant
+      | HEX '('
+            @{ hexflag = 1; @}
+        expr ')'
+            @{ hexflag = 0;
+               $$ = $4; @}
+      | expr '+' expr
+            @{ $$ = make_sum ($1, $3); @}
+      @dots{}
+      ;
+@end group
+
+@group
+constant:
+        INTEGER
+      | STRING
+      ;
+@end group
+@end example
+
+@noindent
+Here we assume that @code{yylex} looks at the value of @code{hexflag}; when
+it is nonzero, all integers are parsed in hexadecimal, and tokens starting
+with letters are parsed as integers if possible.
+
+The declaration of @code{hexflag} shown in the prologue of the parser file
+is needed to make it accessible to the actions (@pxref{Prologue, ,The Prologue}).
+You must also write the code in @code{yylex} to obey the flag.
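+
+A sketch of the corresponding scanner code follows; the helpers
+@code{token_text} and @code{is_hex_digit_string} and the @code{yylval}
+member name are hypothetical:
+
+@example
+@group
+  extern int hexflag;
+  @dots{}
+  if (hexflag && is_hex_digit_string (token_text))
+    @{
+      /* Inside `hex (@dots{})': treat `a1b' as a number, not a name.  */
+      yylval.value = strtol (token_text, NULL, 16);
+      return INTEGER;
+    @}
+@end group
+@end example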
+
+@node Tie-in Recovery
+@section Lexical Tie-ins and Error Recovery
+
+Lexical tie-ins make strict demands on any error recovery rules you have.
+@xref{Error Recovery}.
+
+The reason for this is that the purpose of an error recovery rule is to
+abort the parsing of one construct and resume in some larger construct.
+For example, in C-like languages, a typical error recovery rule is to skip
+tokens until the next semicolon, and then start a new statement, like this:
+
+@example
+stmt:   expr ';'
+      | IF '(' expr ')' stmt @{ @dots{} @}
+      @dots{}
+      | error ';'
+            @{ hexflag = 0; @}
+      ;
+@end example
+
+If there is a syntax error in the middle of a @samp{hex (@var{expr})}
+construct, this error rule will apply, and then the action for the
+completed @samp{hex (@var{expr})} will never run. So @code{hexflag} would
+remain set for the entire rest of the input, or until the next @code{hex}
+keyword, causing identifiers to be misinterpreted as integers.
+
+To avoid this problem the error recovery rule itself clears @code{hexflag}.
+
+There may also be an error recovery rule that works within expressions.
+For example, there could be a rule which applies within parentheses
+and skips to the close-parenthesis:
+
+@example
+@group
+expr: @dots{}
+    | '(' expr ')'
+          @{ $$ = $2; @}
+    | '(' error ')'
+    @dots{}
+@end group
+@end example
+
+If this rule acts within the @code{hex} construct, it is not going to abort
+that construct (since it applies to an inner level of parentheses within
+the construct). Therefore, it should not clear the flag: the rest of
+the @code{hex} construct should be parsed with the flag still in effect.
+
+What if there is an error recovery rule which might abort out of the
+@code{hex} construct or might not, depending on circumstances? There is no
+way you can write the action to determine whether a @code{hex} construct is
+being aborted or not. So if you are using a lexical tie-in, you had better
+make sure your error recovery rules are not of this kind. Each rule must
+be such that you can be sure that it always will, or always won't, have to
+clear the flag.
+
+@c ================================================== Debugging Your Parser
+
+@node Debugging
+@chapter Debugging Your Parser
+
+Developing a parser can be a challenge, especially if you don't
+understand the algorithm (@pxref{Algorithm, ,The Bison Parser
+Algorithm}). Even so, sometimes a detailed description of the automaton
+can help (@pxref{Understanding, , Understanding Your Parser}), or
+tracing the execution of the parser can give some insight on why it
+behaves improperly (@pxref{Tracing, , Tracing Your Parser}).
+
+@menu
+* Understanding:: Understanding the structure of your parser.
+* Tracing:: Tracing the execution of your parser.
+@end menu
+
+@node Understanding
+@section Understanding Your Parser
+
+As documented elsewhere (@pxref{Algorithm, ,The Bison Parser Algorithm})
+Bison parsers are @dfn{shift/reduce automata}. In some cases (much more
+frequent than one would hope), looking at this automaton is required to
+tune or simply fix a parser. Bison provides two different
+representations of it: textual, and graphical (as a DOT file).
+
+The textual file is generated when the options @option{--report} or
+@option{--verbose} are specified (@pxref{Invocation, , Invoking
+Bison}). Its name is made by removing @samp{.tab.c} or @samp{.c} from
+the parser output file name, and adding @samp{.output} instead.
+Therefore, if the input file is @file{foo.y}, then the parser file is
+called @file{foo.tab.c} by default. As a consequence, the verbose
+output file is called @file{foo.output}.
+
+The following grammar file, @file{calc.y}, will be used in the sequel:
+
+@example
+%token NUM STR
+%left '+' '-'
+%left '*'
+%%
+exp: exp '+' exp
+ | exp '-' exp
+ | exp '*' exp
+ | exp '/' exp
+ | NUM
+ ;
+useless: STR;
+%%
+@end example
+
+@command{bison} reports:
+
+@example
+calc.y: warning: 1 nonterminal useless in grammar
+calc.y: warning: 1 rule useless in grammar
+calc.y:11.1-7: warning: nonterminal useless in grammar: useless
+calc.y:11.10-12: warning: rule useless in grammar: useless: STR
+calc.y: conflicts: 7 shift/reduce
+@end example
+
+When given @option{--report=state}, in addition to @file{calc.tab.c}, it
+creates a file @file{calc.output} with contents detailed below. The
+order of the output and the exact presentation might vary, but the
+interpretation is the same.
+
+The first section includes details on conflicts that were solved thanks
+to precedence and/or associativity:
+
+@example
+Conflict in state 8 between rule 2 and token '+' resolved as reduce.
+Conflict in state 8 between rule 2 and token '-' resolved as reduce.
+Conflict in state 8 between rule 2 and token '*' resolved as shift.
+@exdent @dots{}
+@end example
+
+@noindent
+The next section lists states that still have conflicts.
+
+@example
+State 8 conflicts: 1 shift/reduce
+State 9 conflicts: 1 shift/reduce
+State 10 conflicts: 1 shift/reduce
+State 11 conflicts: 4 shift/reduce
+@end example
+
+@noindent
+@cindex token, useless
+@cindex useless token
+@cindex nonterminal, useless
+@cindex useless nonterminal
+@cindex rule, useless
+@cindex useless rule
+The next section reports useless tokens, nonterminals, and rules. Useless
+nonterminals and rules are removed in order to produce a smaller parser,
+but useless tokens are preserved, since they might be used by the
+scanner (note the difference between ``useless'' and ``unused''
+below):
+
+@example
+Nonterminals useless in grammar:
+ useless
+
+Terminals unused in grammar:
+ STR
+
+Rules useless in grammar:
+#6 useless: STR;
+@end example
+
+@noindent
+The next section reproduces the exact grammar that Bison used:
+
+@example
+Grammar
+
+ Number, Line, Rule
+ 0 5 $accept -> exp $end
+ 1 5 exp -> exp '+' exp
+ 2 6 exp -> exp '-' exp
+ 3 7 exp -> exp '*' exp
+ 4 8 exp -> exp '/' exp
+ 5 9 exp -> NUM
+@end example
+
+@noindent
+and reports the uses of the symbols:
+
+@example
+Terminals, with rules where they appear
+
+$end (0) 0
+'*' (42) 3
+'+' (43) 1
+'-' (45) 2
+'/' (47) 4
+error (256)
+NUM (258) 5
+
+Nonterminals, with rules where they appear
+
+$accept (8)
+ on left: 0
+exp (9)
+ on left: 1 2 3 4 5, on right: 0 1 2 3 4
+@end example
+
+@noindent
+@cindex item
+@cindex pointed rule
+@cindex rule, pointed
+Bison then proceeds to the automaton itself, describing each state
+with its set of @dfn{items}, also known as @dfn{pointed rules}. Each
+item is a production rule together with a point (marked by @samp{.})
+that indicates where the input cursor stands.
+
+@example
+state 0
+
+ $accept -> . exp $ (rule 0)
+
+ NUM shift, and go to state 1
+
+ exp go to state 2
+@end example
+
+This reads as follows: ``state 0 corresponds to being at the very
+beginning of the parsing, in the initial rule, right before the start
+symbol (here, @code{exp}). When the parser returns to this state right
+after having reduced a rule that produced an @code{exp}, the control
+flow jumps to state 2. If there is no such transition on a nonterminal
+symbol, and the lookahead is a @code{NUM}, then this token is shifted on
+the parse stack, and the control flow jumps to state 1. Any other
+lookahead triggers a syntax error.''
+
+@cindex core, item set
+@cindex item set core
+@cindex kernel, item set
+@cindex item set core
+Even though the only active rule in state 0 seems to be rule 0, the
+report lists @code{NUM} as a lookahead token because @code{NUM} can be
+at the beginning of any rule deriving an @code{exp}. By default Bison
+reports the so-called @dfn{core} or @dfn{kernel} of the item set, but if
+you want to see more detail you can invoke @command{bison} with
+@option{--report=itemset} to list all the items, including those that can
+be derived:
+
+@example
+state 0
+
+ $accept -> . exp $ (rule 0)
+ exp -> . exp '+' exp (rule 1)
+ exp -> . exp '-' exp (rule 2)
+ exp -> . exp '*' exp (rule 3)
+ exp -> . exp '/' exp (rule 4)
+ exp -> . NUM (rule 5)
+
+ NUM shift, and go to state 1
+
+ exp go to state 2
+@end example
+
+@noindent
+In state 1@dots{}
+
+@example
+state 1
+
+ exp -> NUM . (rule 5)
+
+ $default reduce using rule 5 (exp)
+@end example
+
+@noindent
+rule 5, @samp{exp: NUM;}, is completed. Whatever the lookahead token is
+(hence @samp{$default}), the parser will reduce by this rule. If it came
+from state 0, then after this reduction it will return to state 0, and will
+jump to state 2 (@samp{exp: go to state 2}).
+
+@example
+state 2
+
+ $accept -> exp . $ (rule 0)
+ exp -> exp . '+' exp (rule 1)
+ exp -> exp . '-' exp (rule 2)
+ exp -> exp . '*' exp (rule 3)
+ exp -> exp . '/' exp (rule 4)
+
+ $ shift, and go to state 3
+ '+' shift, and go to state 4
+ '-' shift, and go to state 5
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+@end example
+
+@noindent
+In state 2, the automaton can only shift a symbol. For instance,
+because of the item @samp{exp -> exp . '+' exp}, if the lookahead is
+@samp{+}, it will be shifted onto the parse stack, and the automaton
+control will jump to state 4, corresponding to the item @samp{exp -> exp
+'+' . exp}. Since there is no default action, any token other than
+those listed above will trigger a syntax error.
+
+@cindex accepting state
+State 3 is called the @dfn{final state}, or the @dfn{accepting
+state}:
+
+@example
+state 3
+
+ $accept -> exp $ . (rule 0)
+
+ $default accept
+@end example
+
+@noindent
+Here the initial rule is completed (the start symbol and the end
+of input have been read), and parsing exits successfully.
+
+The interpretation of states 4 to 7 is straightforward, and is left to
+the reader.
+
+@example
+state 4
+
+ exp -> exp '+' . exp (rule 1)
+
+ NUM shift, and go to state 1
+
+ exp go to state 8
+
+state 5
+
+ exp -> exp '-' . exp (rule 2)
+
+ NUM shift, and go to state 1
+
+ exp go to state 9
+
+state 6
+
+ exp -> exp '*' . exp (rule 3)
+
+ NUM shift, and go to state 1
+
+ exp go to state 10
+
+state 7
+
+ exp -> exp '/' . exp (rule 4)
+
+ NUM shift, and go to state 1
+
+ exp go to state 11
+@end example
+
+As was announced at the beginning of the report, @samp{State 8 conflicts:
+1 shift/reduce}:
+
+@example
+state 8
+
+ exp -> exp . '+' exp (rule 1)
+ exp -> exp '+' exp . (rule 1)
+ exp -> exp . '-' exp (rule 2)
+ exp -> exp . '*' exp (rule 3)
+ exp -> exp . '/' exp (rule 4)
+
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+
+ '/' [reduce using rule 1 (exp)]
+ $default reduce using rule 1 (exp)
+@end example
+
+Indeed, there are two actions associated with the lookahead @samp{/}:
+either shifting (and going to state 7), or reducing rule 1. The
+conflict means that either the grammar is ambiguous, or the parser lacks
+information to make the right decision. Indeed the grammar is
+ambiguous: since we did not specify the precedence of @samp{/}, the
+sentence @samp{NUM + NUM / NUM} can be parsed as @samp{NUM + (NUM /
+NUM)}, which corresponds to shifting @samp{/}, or as @samp{(NUM + NUM) /
+NUM}, which corresponds to reducing rule 1.
+
+Because in deterministic parsing only a single decision can be made, Bison
+arbitrarily chose to disable the reduction (@pxref{Shift/Reduce, ,
+Shift/Reduce Conflicts}). Discarded actions are reported between
+square brackets.
+
+Note that all the previous states had a single possible action: either
+shifting the next token and going to the corresponding state, or
+reducing a single rule. In the other cases, i.e., when shifting
+@emph{and} reducing is possible or when @emph{several} reductions are
+possible, the lookahead is required to select the action. State 8 is
+one such state: if the lookahead is @samp{*} or @samp{/} then the action
+is shifting, otherwise the action is reducing rule 1. In other words,
+the first two items, corresponding to rule 1, are not eligible when the
+lookahead token is @samp{*}, since we specified that @samp{*} has higher
+precedence than @samp{+}. More generally, some items are eligible only
+with some set of possible lookahead tokens. When run with
+@option{--report=lookahead}, Bison specifies these lookahead tokens:
+
+@example
+state 8
+
+ exp -> exp . '+' exp (rule 1)
+ exp -> exp '+' exp . [$, '+', '-', '/'] (rule 1)
+ exp -> exp . '-' exp (rule 2)
+ exp -> exp . '*' exp (rule 3)
+ exp -> exp . '/' exp (rule 4)
+
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+
+ '/' [reduce using rule 1 (exp)]
+ $default reduce using rule 1 (exp)
+@end example
+
+The remaining states are similar:
+
+@example
+state 9
+
+ exp -> exp . '+' exp (rule 1)
+ exp -> exp . '-' exp (rule 2)
+ exp -> exp '-' exp . (rule 2)
+ exp -> exp . '*' exp (rule 3)
+ exp -> exp . '/' exp (rule 4)
+
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+
+ '/' [reduce using rule 2 (exp)]
+ $default reduce using rule 2 (exp)
+
+state 10
+
+ exp -> exp . '+' exp (rule 1)
+ exp -> exp . '-' exp (rule 2)
+ exp -> exp . '*' exp (rule 3)
+ exp -> exp '*' exp . (rule 3)
+ exp -> exp . '/' exp (rule 4)
+
+ '/' shift, and go to state 7
+
+ '/' [reduce using rule 3 (exp)]
+ $default reduce using rule 3 (exp)
+
+state 11
+
+ exp -> exp . '+' exp (rule 1)
+ exp -> exp . '-' exp (rule 2)
+ exp -> exp . '*' exp (rule 3)
+ exp -> exp . '/' exp (rule 4)
+ exp -> exp '/' exp . (rule 4)
+
+ '+' shift, and go to state 4
+ '-' shift, and go to state 5
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+
+ '+' [reduce using rule 4 (exp)]
+ '-' [reduce using rule 4 (exp)]
+ '*' [reduce using rule 4 (exp)]
+ '/' [reduce using rule 4 (exp)]
+ $default reduce using rule 4 (exp)
+@end example
+
+@noindent
+Observe that state 11 contains conflicts not only due to the lack of
+precedence of @samp{/} with respect to @samp{+}, @samp{-}, and
+@samp{*}, but also because the
+associativity of @samp{/} is not specified.
+
+
+@node Tracing
+@section Tracing Your Parser
+@findex yydebug
+@cindex debugging
+@cindex tracing the parser
+
+If a Bison grammar compiles properly but doesn't do what you want when it
+runs, the @code{yydebug} parser-trace feature can help you figure out why.
+
+There are several means to enable compilation of trace facilities:
+
+@table @asis
+@item the macro @code{YYDEBUG}
+@findex YYDEBUG
+Define the macro @code{YYDEBUG} to a nonzero value when you compile the
+parser. This is compliant with @acronym{POSIX} Yacc. You could use
+@samp{-DYYDEBUG=1} as a compiler option or you could put @samp{#define
+YYDEBUG 1} in the prologue of the grammar file (@pxref{Prologue, , The
+Prologue}).
+
+@item the option @option{-t}, @option{--debug}
+Use the @samp{-t} option when you run Bison (@pxref{Invocation,
+,Invoking Bison}). This is @acronym{POSIX} compliant too.
+
+@item the directive @samp{%debug}
+@findex %debug
+Add the @code{%debug} directive (@pxref{Decl Summary, ,Bison Declaration
+Summary}). This Bison extension is maintained for backward
+compatibility with previous versions of Bison.
+
+@item the variable @samp{parse.trace}
+@findex %define parse.trace
+Add the @samp{%define parse.trace} directive (@pxref{Decl Summary,
+,Bison Declaration Summary}), or pass the @option{-Dparse.trace} option
+(@pxref{Bison Options}). This is a Bison extension, which is especially
+useful for languages that don't use a preprocessor. Unless
+@acronym{POSIX} and Yacc portability matter to you, this is the
+preferred solution.
+@end table
+
+We suggest that you always enable the trace option so that debugging is
+always possible.
+
+The trace facility outputs messages with macro calls of the form
+@code{YYFPRINTF (stderr, @var{format}, @var{args})} where
+@var{format} and @var{args} are the usual @code{printf} format and variadic
+arguments. If you define @code{YYDEBUG} to a nonzero value but do not
+define @code{YYFPRINTF}, @code{<stdio.h>} is automatically included
+and @code{YYFPRINTF} is defined to @code{fprintf}.
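+
+For example, to route the traces through a function of your own instead
+of @code{fprintf} (a sketch; @code{trace_printf} is a user-supplied
+function, not part of Bison):
+
+@example
+%@{
+  #include <stdio.h>
+  /* Must accept the same arguments as fprintf.  */
+  int trace_printf (FILE *stream, char const *format, ...);
+  #define YYFPRINTF trace_printf
+%@}
+@end example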
+
+Once you have compiled the program with trace facilities, the way to
+request a trace is to store a nonzero value in the variable @code{yydebug}.
+You can do this by making the C code do it (in @code{main}, perhaps), or
+you can alter the value with a C debugger.
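+
+For example, if @code{main} lives in a file separate from the grammar
+file, a minimal sketch is:
+
+@example
+extern int yydebug;
+int yyparse (void);
+
+int
+main (void)
+@{
+  yydebug = 1;   /* Request parser traces on stderr.  */
+  return yyparse ();
+@}
+@end example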
+
+Each step taken by the parser when @code{yydebug} is nonzero produces a
+line or two of trace information, written on @code{stderr}. The trace
+messages tell you these things:
+
+@itemize @bullet
+@item
+Each time the parser calls @code{yylex}, what kind of token was read.
+
+@item
+Each time a token is shifted, the depth and complete contents of the
+state stack (@pxref{Parser States}).
+
+@item
+Each time a rule is reduced, which rule it is, and the complete contents
+of the state stack afterward.
+@end itemize
+
+To make sense of this information, it helps to refer to the listing file
+produced by the Bison @samp{-v} option (@pxref{Invocation, ,Invoking
+Bison}). This file shows the meaning of each state in terms of
+positions in various rules, and also what each state will do with each
+possible input token. As you read the successive trace messages, you
+can see that the parser is functioning according to its specification in
+the listing file. Eventually you will arrive at the place where
+something undesirable happens, and you will see which parts of the
+grammar are to blame.
+
+The parser file is a C program and you can use C debuggers on it, but it's
+not easy to interpret what it is doing. The parser function is a
+finite-state machine interpreter, and aside from the actions it executes
+the same code over and over. Only the values of variables show where in
+the grammar it is working.
+
+@findex YYPRINT
+The debugging information normally gives the token type of each token
+read, but not its semantic value. You can optionally define a macro
+named @code{YYPRINT} to provide a way to print the value. If you define
+@code{YYPRINT}, it should take three arguments. The parser will pass a
+standard I/O stream, the numeric code for the token type, and the token
+value (from @code{yylval}).
+
+Here is an example of @code{YYPRINT} suitable for the multi-function
+calculator (@pxref{Mfcalc Declarations, ,Declarations for @code{mfcalc}}):
+
+@smallexample
+%@{
+  static void print_token_value (FILE *, int, YYSTYPE);
+  #define YYPRINT(file, type, value) print_token_value (file, type, value)
+%@}
+
+@dots{} %% @dots{} %% @dots{}
+
+static void
+print_token_value (FILE *file, int type, YYSTYPE value)
+@{
+  if (type == VAR)
+    fprintf (file, "%s", value.tptr->name);
+  else if (type == NUM)
+    fprintf (file, "%d", value.val);
+@}
+@end smallexample
+
+@c ================================================= Invoking Bison
+
+@node Invocation
+@chapter Invoking Bison
+@cindex invoking Bison
+@cindex Bison invocation
+@cindex options for invoking Bison
+
+The usual way to invoke Bison is as follows:
+
+@example
+bison @var{infile}
+@end example
+
+Here @var{infile} is the grammar file name, which usually ends in
+@samp{.y}. The parser file's name is made by replacing the @samp{.y}
+with @samp{.tab.c} and removing any leading directory. Thus,
+@samp{bison foo.y} yields @file{foo.tab.c}, and @samp{bison hack/foo.y}
+also yields @file{foo.tab.c}. If you are writing C++ code instead of C
+in your grammar file, you can also name it @file{foo.ypp} or
+@file{foo.y++}; the output files then take an extension derived from
+that of the input file (respectively @file{foo.tab.cpp} and
+@file{foo.tab.c++}).
+This feature takes effect with all options that manipulate file names,
+such as @samp{-o} or @samp{-d}.
+
+For example:
+
+@example
+bison -d @var{infile.yxx}
+@end example
+@noindent
+will produce @file{infile.tab.cxx} and @file{infile.tab.hxx}, and
+
+@example
+bison -d -o @var{output.c++} @var{infile.y}
+@end example
+@noindent
+will produce @file{output.c++} and @file{output.h++}.
+
+For compatibility with @acronym{POSIX}, the standard Bison
+distribution also contains a shell script called @command{yacc} that
+invokes Bison with the @option{-y} option.
+
+@menu
+* Bison Options:: All the options described in detail,
+ in alphabetical order by short options.
+* Option Cross Key:: Alphabetical list of long options.
+* Yacc Library:: Yacc-compatible @code{yylex} and @code{main}.
+@end menu
+
+@node Bison Options
+@section Bison Options
+
+Bison supports both traditional single-letter options and mnemonic long
+option names. Long option names are indicated with @samp{--} instead of
+@samp{-}. Abbreviations for option names are allowed as long as they
+are unique. When a long option takes an argument, like
+@samp{--file-prefix}, connect the option name and the argument with
+@samp{=}.
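+
+For example, the following invocation is equivalent to
+@samp{bison -b calc -r state calc.y}:
+
+@example
+bison --file-prefix=calc --report=state calc.y
+@end example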
+
+Here is a list of options that can be used with Bison, alphabetized by
+short option. It is followed by a cross key alphabetized by long
+option.
+
+@c Please, keep this ordered as in `bison --help'.
+@noindent
+Operations modes:
+@table @option
+@item -h
+@itemx --help
+Print a summary of the command-line options to Bison and exit.
+
+@item -V
+@itemx --version
+Print the version number of Bison and exit.
+
+@item --print-localedir
+Print the name of the directory containing locale-dependent data.
+
+@item --print-datadir
+Print the name of the directory containing skeletons and XSLT.
+
+@item -y
+@itemx --yacc
+Act more like the traditional Yacc command. This can cause
+different diagnostics to be generated, and may change behavior in
+other minor ways. Most importantly, imitate Yacc's output
+file name conventions, so that the parser output file is called
+@file{y.tab.c}, and the other outputs are called @file{y.output} and
+@file{y.tab.h}.
+Also, if generating a deterministic parser in C, generate @code{#define}
+statements in addition to an @code{enum} to associate token numbers with token
+names.
+Thus, the following shell script can substitute for Yacc, and the Bison
+distribution contains such a script for compatibility with @acronym{POSIX}:
+
+@example
+#! /bin/sh
+bison -y "$@@"