+@noindent
+The driver is passed by reference to the parser and to the scanner.
+This provides a simple but effective pure interface, not relying on
+global variables.
+
+@comment file: calc++-parser.yy
+@example
+// The parsing context.
+%parse-param @{ calcxx_driver& driver @}
+%lex-param @{ calcxx_driver& driver @}
+@end example
+
+@noindent
+Then we request the location tracking feature, and initialize the
+first location's file name. Afterwards new locations are computed
+relative to the previous locations: the file name will be
+propagated automatically.
+
+@comment file: calc++-parser.yy
+@example
+%locations
+%initial-action
+@{
+ // Initialize the initial location.
+ @@$.begin.filename = @@$.end.filename = &driver.file;
+@};
+@end example
+
+@noindent
+Use the following two directives to enable parser tracing and verbose
+error messages.
+
+@comment file: calc++-parser.yy
+@example
+%debug
+%error-verbose
+@end example
+
+@noindent
+Semantic values cannot use ``real'' objects, but only pointers to
+them: the members of the @code{%union} are stored in a @code{union},
+which may not contain members with constructors such as
+@code{std::string}.
+
+@comment file: calc++-parser.yy
+@example
+// Symbols.
+%union
+@{
+ int ival;
+ std::string *sval;
+@};
+@end example
+
+@noindent
+The code between @samp{%@{} and @samp{%@}} after the introduction of the
+@samp{%union} is output in the @file{*.cc} file; it needs detailed
+knowledge about the driver.
+
+@comment file: calc++-parser.yy
+@example
+%@{
+# include "calc++-driver.hh"
+%@}
+@end example
+
+
+@noindent
+The token numbered 0 corresponds to the end of file; the following
+line allows for nicer error messages referring to ``end of file''
+instead of ``$end''. Similarly, user-friendly names are provided for
+each symbol. Note that the scanner refers to these tokens through the
+parser's @code{token} structure, as in @code{token::IDENTIFIER}, which
+avoids name clashes.
+
+@comment file: calc++-parser.yy
+@example
+%token END 0 "end of file"
+%token ASSIGN ":="
+%token <sval> IDENTIFIER "identifier"
+%token <ival> NUMBER "number"
+%type <ival> exp "expression"
+@end example
+
+@noindent
+To enable memory deallocation during error recovery, use
+@code{%destructor}; to have the symbols' semantic values reported in
+the debug traces, use @code{%printer}.
+
+@comment file: calc++-parser.yy
+@example
+%printer @{ debug_stream () << *$$; @} "identifier"
+%destructor @{ delete $$; @} "identifier"
+
+%printer @{ debug_stream () << $$; @} "number" "expression"
+@end example
+
+@noindent
+The grammar itself is straightforward.
+
+@comment file: calc++-parser.yy
+@example
+%%
+%start unit;
+unit: assignments exp @{ driver.result = $2; @};
+
+assignments: assignments assignment @{@}
+ | /* Nothing. */ @{@};
+
+assignment: "identifier" ":=" exp @{ driver.variables[*$1] = $3; @};
+
+%left '+' '-';
+%left '*' '/';
+exp: exp '+' exp @{ $$ = $1 + $3; @}
+ | exp '-' exp @{ $$ = $1 - $3; @}
+ | exp '*' exp @{ $$ = $1 * $3; @}
+ | exp '/' exp @{ $$ = $1 / $3; @}
+ | "identifier" @{ $$ = driver.variables[*$1]; @}
+ | "number" @{ $$ = $1; @};
+%%
+@end example
+
+@noindent
+Finally the @code{error} member function forwards the errors to the
+driver.
+
+@comment file: calc++-parser.yy
+@example
+void
+yy::calcxx_parser::error (const yy::calcxx_parser::location_type& l,
+ const std::string& m)
+@{
+ driver.error (l, m);
+@}
+@end example
+
+@node Calc++ Scanner
+@subsection Calc++ Scanner
+
+The Flex scanner first includes the driver declaration, then the
+parser's, in order to get the set of defined tokens.
+
+@comment file: calc++-scanner.ll
+@example
+%@{ /* -*- C++ -*- */
+# include <cstdlib>
+# include <errno.h>
+# include <limits.h>
+# include <string>
+# include "calc++-driver.hh"
+# include "calc++-parser.hh"
+%@}
+@end example
+
+@noindent
+Because there is no @code{#include}-like feature, we don't need
+@code{yywrap}; we don't need @code{unput} either; and since we parse
+an actual file, this is not an interactive session with the user.
+Finally, we enable the scanner tracing features.
+
+@comment file: calc++-scanner.ll
+@example
+%option noyywrap nounput batch debug
+@end example
+
+@noindent
+Abbreviations allow for more readable rules.
+
+@comment file: calc++-scanner.ll
+@example
+id [a-zA-Z][a-zA-Z_0-9]*
+int [0-9]+
+blank [ \t]
+@end example
+
+@noindent
+The following paragraph suffices to track locations accurately. Each
+time @code{yylex} is invoked, the begin position is moved onto the end
+position. Then, when a pattern is matched, the end position is
+advanced by its width. When end-of-line characters are matched, the
+end cursor's line count is adjusted, and each time blanks are matched,
+the begin cursor is moved onto the end cursor to effectively ignore
+the blanks preceding tokens. Comments would be treated the same way.
+
+@comment file: calc++-scanner.ll
+@example
+%@{
+# define YY_USER_ACTION yylloc->columns (yyleng);
+%@}
+%%
+%@{
+ yylloc->step ();
+%@}
+@{blank@}+ yylloc->step ();
+[\n]+ yylloc->lines (yyleng); yylloc->step ();
+@end example
+
+@noindent
+The rules are simple; just note the use of the driver to report errors.
+It is convenient to use a typedef to shorten
+@code{yy::calcxx_parser::token::IDENTIFIER} into
+@code{token::IDENTIFIER} for instance.
+
+@comment file: calc++-scanner.ll
+@example
+%@{
+ typedef yy::calcxx_parser::token token;
+%@}
+
+[-+*/] return yytext[0];
+":=" return token::ASSIGN;
+@{int@} @{
+ errno = 0;
+ long n = strtol (yytext, NULL, 10);
+ if (! (INT_MIN <= n && n <= INT_MAX && errno != ERANGE))
+ driver.error (*yylloc, "integer is out of range");
+ yylval->ival = n;
+ return token::NUMBER;
+@}
+@{id@} yylval->sval = new std::string (yytext); return token::IDENTIFIER;
+. driver.error (*yylloc, "invalid character");
+%%
+@end example
+
+@noindent
+Finally, because the driver's scanner-related member functions depend
+on the scanner's data, it is simpler to implement them in this file.
+
+@comment file: calc++-scanner.ll
+@example
+void
+calcxx_driver::scan_begin ()
+@{
+ yy_flex_debug = trace_scanning;
+ if (!(yyin = fopen (file.c_str (), "r")))
+ error (std::string ("cannot open ") + file);
+@}
+
+void
+calcxx_driver::scan_end ()
+@{
+ fclose (yyin);
+@}
+@end example
+
+@node Calc++ Top Level
+@subsection Calc++ Top Level
+
+The top-level file, @file{calc++.cc}, poses no problem.
+
+@comment file: calc++.cc
+@example
+#include <iostream>
+#include "calc++-driver.hh"
+
+int
+main (int argc, char *argv[])
+@{
+ calcxx_driver driver;
+ for (++argv; argv[0]; ++argv)
+ if (*argv == std::string ("-p"))
+ driver.trace_parsing = true;
+ else if (*argv == std::string ("-s"))
+ driver.trace_scanning = true;
+ else
+ @{
+ driver.parse (*argv);
+ std::cout << driver.result << std::endl;
+ @}
+@}
+@end example
+
+@c ================================================= FAQ
+
+@node FAQ
+@chapter Frequently Asked Questions
+@cindex frequently asked questions
+@cindex questions
+
+Several questions about Bison come up occasionally. Here some of them
+are addressed.
+
+@menu
+* Memory Exhausted:: Breaking the Stack Limits
+* How Can I Reset the Parser:: @code{yyparse} Keeps some State
+* Strings are Destroyed:: @code{yylval} Loses Track of Strings
+* Implementing Gotos/Loops:: Control Flow in the Calculator
+@end menu
+
+@node Memory Exhausted
+@section Memory Exhausted
+
+@display
+My parser returns with an error and a @samp{memory exhausted}
+message. What can I do?
+@end display
+
+This question is already addressed elsewhere. @xref{Recursion,
+,Recursive Rules}.
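+
+@noindent
+A common cause is a right-recursive rule: right recursion keeps every
+element of a sequence on the stack until the whole sequence has been
+read, while left recursion reduces as it goes and uses bounded stack
+space. The rules below are only an illustrative sketch, not part of
+any example in this manual:
+
+@example
+/* Right recursion: stack usage grows with the length of the input. */
+seq: item seq | /* Nothing. */;
+
+/* Left recursion: bounded stack usage. */
+seq: seq item | /* Nothing. */;
+@end example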
+
+@node How Can I Reset the Parser
+@section How Can I Reset the Parser
+
+The following phenomenon has several symptoms, resulting in the
+following typical questions:
+
+@display
+I invoke @code{yyparse} several times, and on correct input it works
+properly; but when a parse error is found, all the other calls fail
+too. How can I reset the error flag of @code{yyparse}?
+@end display
+
+@noindent
+or
+
+@display
+My parser includes support for an @samp{#include}-like feature, in
+which case I run @code{yyparse} from @code{yyparse}. This fails
+although I did specify I needed a @code{%pure-parser}.
+@end display
+
+These problems typically come not from Bison itself, but from
+Lex-generated scanners. Because these scanners use large buffers for
+speed, they might not notice a change of input file. As a
+demonstration, consider the following source file,
+@file{first-line.l}:
+
+@verbatim
+%{
+#include <stdio.h>
+#include <stdlib.h>
+%}
+%%
+.*\n ECHO; return 1;
+%%
+int
+yyparse (char const *file)
+{
+ yyin = fopen (file, "r");
+ if (!yyin)
+ exit (2);
+ /* One token only. */
+ yylex ();
+ if (fclose (yyin) != 0)
+ exit (3);
+ return 0;
+}
+
+int
+main (void)
+{
+ yyparse ("input");
+ yyparse ("input");
+ return 0;
+}
+@end verbatim
+
+@noindent
+If the file @file{input} contains
+
+@verbatim
+input:1: Hello,
+input:2: World!
+@end verbatim
+
+@noindent
+then instead of getting the first line twice, you get:
+
+@example
+$ @kbd{flex -ofirst-line.c first-line.l}
+$ @kbd{gcc -ofirst-line first-line.c -ll}
+$ @kbd{./first-line}
+input:1: Hello,
+input:2: World!
+@end example
+
+Therefore, whenever you change @code{yyin}, you must tell the
+Lex-generated scanner to discard its current buffer and switch to the
+new one. This depends upon your implementation of Lex; see its
+documentation for more. For Flex, it suffices to call
+@samp{YY_FLUSH_BUFFER} after each change to @code{yyin}. If your
+Flex-generated scanner needs to read from several input streams to
+handle features like include files, you might consider using Flex
+functions like @samp{yy_switch_to_buffer} that manipulate multiple
+input buffers.
+
+If your Flex-generated scanner uses start conditions (@pxref{Start
+conditions, , Start conditions, flex, The Flex Manual}), you might
+also want to reset the scanner's state, i.e., go back to the initial
+start condition, through a call to @samp{BEGIN (0)}.
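+
+@noindent
+For instance, a sketch of such a fix for the @file{first-line.l}
+example above keeps @code{yyparse} in the scanner file, where
+@code{YY_FLUSH_BUFFER} and @code{BEGIN} are visible, and flushes the
+buffer right after opening the new file:
+
+@verbatim
+int
+yyparse (char const *file)
+{
+  yyin = fopen (file, "r");
+  if (!yyin)
+    exit (2);
+  /* Discard whatever the scanner buffered from the previous file,
+     and go back to the initial start condition.  */
+  YY_FLUSH_BUFFER;
+  BEGIN (0);
+  /* One token only. */
+  yylex ();
+  if (fclose (yyin) != 0)
+    exit (3);
+  return 0;
+}
+@end verbatim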
+
+@node Strings are Destroyed
+@section Strings are Destroyed
+
+@display
+My parser seems to destroy old strings, or maybe it loses track of
+them. Instead of reporting @samp{"foo", "bar"}, it reports
+@samp{"bar", "bar"}, or even @samp{"foo\nbar", "bar"}.
+@end display
+
+This error is probably the single most frequent ``bug report'' sent to
+the Bison mailing lists, but it stems from a misunderstanding of the
+role of the scanner. Consider the following Lex code,
+@file{split-lines.l}:
+
+@verbatim
+%{
+#include <stdio.h>
+char *yylval = NULL;
+%}
+%%
+.* yylval = yytext; return 1;
+\n /* IGNORE */
+%%
+int
+main ()
+{
+ /* Similar to using $1, $2 in a Bison action. */
+ char *fst = (yylex (), yylval);
+ char *snd = (yylex (), yylval);
+ printf ("\"%s\", \"%s\"\n", fst, snd);
+ return 0;
+}
+@end verbatim
+
+If you compile and run this code, you get:
+
+@example
+$ @kbd{flex -osplit-lines.c split-lines.l}
+$ @kbd{gcc -osplit-lines split-lines.c -ll}
+$ @kbd{printf 'one\ntwo\n' | ./split-lines}
+"one
+two", "two"
+@end example
+
+@noindent
+This is because @code{yytext} is a buffer provided for @emph{reading}
+in the action, but if you want to keep it, you have to duplicate it
+(e.g., using @code{strdup}). Note that the output may depend on how
+your implementation of Lex handles @code{yytext}. For instance, when
+given the Lex compatibility option @option{-l} (which triggers the
+option @samp{%array}) Flex generates a different behavior:
+
+@example
+$ @kbd{flex -l -osplit-lines.c split-lines.l}
+$ @kbd{gcc -osplit-lines split-lines.c -ll}
+$ @kbd{printf 'one\ntwo\n' | ./split-lines}
+"two", "two"
+@end example
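+
+@noindent
+A minimal sketch of the fix in @file{split-lines.l}, assuming
+@code{<string.h>} is included in the prologue (and ignoring the
+resulting memory leak for brevity), duplicates the matched text before
+returning it:
+
+@verbatim
+.*      yylval = strdup (yytext); return 1;
+@end verbatim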
+
+
+@node Implementing Gotos/Loops
+@section Implementing Gotos/Loops
+
+@display
+My simple calculator supports variables, assignments, and functions,
+but how can I implement gotos, or loops?
+@end display
+
+Although very pedagogical, the examples included in the document blur
+the distinction between the parser---whose job is to recover the
+structure of a text and to transmit it to subsequent modules of the
+program---and the processing (such as the execution) of this
+structure. This works well with so-called straight-line programs,
+i.e., precisely those that have a straightforward execution model:
+execute simple instructions one after the other.
+
+@cindex abstract syntax tree
+@cindex @acronym{AST}
+If you want a richer model, you will probably need to use the parser
+to construct a tree that represents the structure it has recovered;
+this tree is usually called the @dfn{abstract syntax tree}, or
+@dfn{@acronym{AST}} for short. Then, traversing this tree in various
+ways enables treatments such as its execution or its translation,
+which will result in an interpreter or a compiler.
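+
+@noindent
+As a purely illustrative sketch (these types are not part of the
+calc++ example), expression nodes of such a tree could look as
+follows; statements, loops and gotos would be additional kinds of
+nodes, and evaluation would be one possible traversal:
+
+@verbatim
+// Hypothetical AST nodes for expressions.
+struct Exp
+{
+  virtual ~Exp () {}
+  virtual int eval () const = 0;
+};
+
+struct Number : Exp
+{
+  int value;
+  Number (int v) : value (v) {}
+  int eval () const { return value; }
+};
+
+struct Add : Exp
+{
+  Exp *left, *right;
+  Add (Exp *l, Exp *r) : left (l), right (r) {}
+  int eval () const { return left->eval () + right->eval (); }
+};
+@end verbatim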
+
+This topic is way beyond the scope of this manual, and the reader is
+invited to consult the dedicated literature.
+
+
+
+@c ================================================= Table of Symbols
+
+@node Table of Symbols
+@appendix Bison Symbols
+@cindex Bison symbols, table of
+@cindex symbols in Bison, table of
+
+@deffn {Variable} @@$
+In an action, the location of the left-hand side of the rule.
+@xref{Locations, , Locations Overview}.
+@end deffn
+
+@deffn {Variable} @@@var{n}
+In an action, the location of the @var{n}-th symbol of the right-hand
+side of the rule. @xref{Locations, , Locations Overview}.
+@end deffn
+
+@deffn {Variable} $$
+In an action, the semantic value of the left-hand side of the rule.
+@xref{Actions}.
+@end deffn
+
+@deffn {Variable} $@var{n}
+In an action, the semantic value of the @var{n}-th symbol of the
+right-hand side of the rule. @xref{Actions}.
+@end deffn
+
+@deffn {Delimiter} %%
+Delimiter used to separate the grammar rule section from the
+Bison declarations section or the epilogue.
+@xref{Grammar Layout, ,The Overall Layout of a Bison Grammar}.
+@end deffn
+
+@c Don't insert spaces, or check the DVI output.
+@deffn {Delimiter} %@{@var{code}%@}
+All code listed between @samp{%@{} and @samp{%@}} is copied directly to
+the output file uninterpreted. Such code forms the prologue of the input
+file. @xref{Grammar Outline, ,Outline of a Bison
+Grammar}.
+@end deffn
+
+@deffn {Construct} /*@dots{}*/
+Comment delimiters, as in C.
+@end deffn
+
+@deffn {Delimiter} :
+Separates a rule's result from its components. @xref{Rules, ,Syntax of
+Grammar Rules}.
+@end deffn
+
+@deffn {Delimiter} ;
+Terminates a rule. @xref{Rules, ,Syntax of Grammar Rules}.
+@end deffn