@copying
-This manual is for @acronym{GNU} Bison (version @value{VERSION},
-@value{UPDATED}), the @acronym{GNU} parser generator.
+This manual (@value{UPDATED}) is for GNU Bison (version
+@value{VERSION}), the GNU parser generator.
-Copyright @copyright{} 1988, 1989, 1990, 1991, 1992, 1993, 1995, 1998,
-1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007 Free Software Foundation, Inc.
+Copyright @copyright{} 1988-1993, 1995, 1998-2012 Free Software
+Foundation, Inc.
@quotation
Permission is granted to copy, distribute and/or modify this document
-under the terms of the @acronym{GNU} Free Documentation License,
-Version 1.2 or any later version published by the Free Software
+under the terms of the GNU Free Documentation License,
+Version 1.3 or any later version published by the Free Software
Foundation; with no Invariant Sections, with the Front-Cover texts
-being ``A @acronym{GNU} Manual,'' and with the Back-Cover Texts as in
+being ``A GNU Manual,'' and with the Back-Cover Texts as in
(a) below. A copy of the license is included in the section entitled
-``@acronym{GNU} Free Documentation License.''
+``GNU Free Documentation License.''
-(a) The @acronym{FSF}'s Back-Cover Text is: ``You have freedom to copy
-and modify this @acronym{GNU} Manual, like @acronym{GNU} software.
-Copies published by the Free Software Foundation raise funds for
-@acronym{GNU} development.''
+(a) The FSF's Back-Cover Text is: ``You have the freedom to copy and
+modify this GNU manual. Buying copies from the FSF
+supports it in developing GNU and promoting software
+freedom.''
@end quotation
@end copying
@dircategory Software development
@direntry
-* bison: (bison). @acronym{GNU} parser generator (Yacc replacement).
+* bison: (bison). GNU parser generator (Yacc replacement).
@end direntry
@titlepage
51 Franklin Street, Fifth Floor @*
Boston, MA 02110-1301 USA @*
Printed copies are available from the Free Software Foundation.@*
-@acronym{ISBN} 1-882114-44-2
+ISBN 1-882114-44-2
@sp 2
Cover art by Etienne Suvasa.
@end titlepage
@menu
* Introduction::
* Conditions::
-* Copying:: The @acronym{GNU} General Public License says
- how you can copy and share Bison
+* Copying:: The GNU General Public License says
+ how you can copy and share Bison.
Tutorial sections:
-* Concepts:: Basic concepts for understanding Bison.
-* Examples:: Three simple explained examples of using Bison.
+* Concepts:: Basic concepts for understanding Bison.
+* Examples:: Three simple explained examples of using Bison.
Reference sections:
-* Grammar File:: Writing Bison declarations and rules.
-* Interface:: C-language interface to the parser function @code{yyparse}.
-* Algorithm:: How the Bison parser works at run-time.
-* Error Recovery:: Writing rules for error recovery.
+* Grammar File:: Writing Bison declarations and rules.
+* Interface:: C-language interface to the parser function @code{yyparse}.
+* Algorithm:: How the Bison parser works at run-time.
+* Error Recovery:: Writing rules for error recovery.
* Context Dependency:: What to do if your language syntax is too
- messy for Bison to handle straightforwardly.
-* Debugging:: Understanding or debugging Bison parsers.
-* Invocation:: How to run Bison (to produce the parser source file).
-* C++ Language Interface:: Creating C++ parser objects.
-* FAQ:: Frequently Asked Questions
-* Table of Symbols:: All the keywords of the Bison language are explained.
-* Glossary:: Basic concepts are explained.
-* Copying This Manual:: License for copying this manual.
-* Index:: Cross-references to the text.
+ messy for Bison to handle straightforwardly.
+* Debugging:: Understanding or debugging Bison parsers.
+* Invocation:: How to run Bison (to produce the parser implementation).
+* Other Languages:: Creating C++ and Java parsers.
+* FAQ:: Frequently Asked Questions
+* Table of Symbols:: All the keywords of the Bison language are explained.
+* Glossary:: Basic concepts are explained.
+* Copying This Manual:: License for copying this manual.
+* Bibliography:: Publications cited in this manual.
+* Index:: Cross-references to the text.
@detailmenu
--- The Detailed Node Listing ---
The Concepts of Bison
-* Language and Grammar:: Languages and context-free grammars,
- as mathematical ideas.
-* Grammar in Bison:: How we represent grammars for Bison's sake.
-* Semantic Values:: Each token or syntactic grouping can have
- a semantic value (the value of an integer,
- the name of an identifier, etc.).
-* Semantic Actions:: Each rule can have an action containing C code.
-* GLR Parsers:: Writing parsers for general context-free languages.
-* Locations Overview:: Tracking Locations.
-* Bison Parser:: What are Bison's input and output,
- how is the output used?
-* Stages:: Stages in writing and running Bison grammars.
-* Grammar Layout:: Overall structure of a Bison grammar file.
-
-Writing @acronym{GLR} Parsers
-
-* Simple GLR Parsers:: Using @acronym{GLR} parsers on unambiguous grammars.
-* Merging GLR Parses:: Using @acronym{GLR} parsers to resolve ambiguities.
-* GLR Semantic Actions:: Deferred semantic actions have special concerns.
-* Compiler Requirements:: @acronym{GLR} parsers require a modern C compiler.
+* Language and Grammar:: Languages and context-free grammars,
+ as mathematical ideas.
+* Grammar in Bison:: How we represent grammars for Bison's sake.
+* Semantic Values:: Each token or syntactic grouping can have
+ a semantic value (the value of an integer,
+ the name of an identifier, etc.).
+* Semantic Actions:: Each rule can have an action containing C code.
+* GLR Parsers:: Writing parsers for general context-free languages.
+* Locations:: Overview of location tracking.
+* Bison Parser:: What are Bison's input and output,
+ how is the output used?
+* Stages:: Stages in writing and running Bison grammars.
+* Grammar Layout:: Overall structure of a Bison grammar file.
+
+Writing GLR Parsers
+
+* Simple GLR Parsers:: Using GLR parsers on unambiguous grammars.
+* Merging GLR Parses:: Using GLR parsers to resolve ambiguities.
+* GLR Semantic Actions:: Considerations for semantic values and deferred actions.
+* Semantic Predicates:: Controlling a parse with arbitrary computations.
+* Compiler Requirements:: GLR parsers require a modern C compiler.
Examples
-* RPN Calc:: Reverse polish notation calculator;
- a first example with no operator precedence.
-* Infix Calc:: Infix (algebraic) notation calculator.
- Operator precedence is introduced.
+* RPN Calc:: Reverse polish notation calculator;
+ a first example with no operator precedence.
+* Infix Calc:: Infix (algebraic) notation calculator.
+ Operator precedence is introduced.
* Simple Error Recovery:: Continuing after syntax errors.
* Location Tracking Calc:: Demonstrating the use of @@@var{n} and @@$.
-* Multi-function Calc:: Calculator with memory and trig functions.
- It uses multiple data-types for semantic values.
-* Exercises:: Ideas for improving the multi-function calculator.
+* Multi-function Calc:: Calculator with memory and trig functions.
+ It uses multiple data-types for semantic values.
+* Exercises:: Ideas for improving the multi-function calculator.
Reverse Polish Notation Calculator
-* Decls: Rpcalc Decls. Prologue (declarations) for rpcalc.
-* Rules: Rpcalc Rules. Grammar Rules for rpcalc, with explanation.
-* Lexer: Rpcalc Lexer. The lexical analyzer.
-* Main: Rpcalc Main. The controlling function.
-* Error: Rpcalc Error. The error reporting function.
-* Gen: Rpcalc Gen. Running Bison on the grammar file.
-* Comp: Rpcalc Compile. Run the C compiler on the output code.
+* Rpcalc Declarations:: Prologue (declarations) for rpcalc.
+* Rpcalc Rules:: Grammar Rules for rpcalc, with explanation.
+* Rpcalc Lexer:: The lexical analyzer.
+* Rpcalc Main:: The controlling function.
+* Rpcalc Error:: The error reporting function.
+* Rpcalc Generate:: Running Bison on the grammar file.
+* Rpcalc Compile:: Run the C compiler on the output code.
Grammar Rules for @code{rpcalc}
-* Rpcalc Input::
-* Rpcalc Line::
-* Rpcalc Expr::
+* Rpcalc Input:: Explanation of the @code{input} nonterminal
+* Rpcalc Line:: Explanation of the @code{line} nonterminal
+* Rpcalc Expr:: Explanation of the @code{expr} nonterminal
Location Tracking Calculator: @code{ltcalc}
-* Decls: Ltcalc Decls. Bison and C declarations for ltcalc.
-* Rules: Ltcalc Rules. Grammar rules for ltcalc, with explanations.
-* Lexer: Ltcalc Lexer. The lexical analyzer.
+* Ltcalc Declarations:: Bison and C declarations for ltcalc.
+* Ltcalc Rules:: Grammar rules for ltcalc, with explanations.
+* Ltcalc Lexer:: The lexical analyzer.
Multi-Function Calculator: @code{mfcalc}
-* Decl: Mfcalc Decl. Bison declarations for multi-function calculator.
-* Rules: Mfcalc Rules. Grammar rules for the calculator.
-* Symtab: Mfcalc Symtab. Symbol table management subroutines.
+* Mfcalc Declarations:: Bison declarations for multi-function calculator.
+* Mfcalc Rules:: Grammar rules for the calculator.
+* Mfcalc Symbol Table:: Symbol table management subroutines.
+* Mfcalc Lexer:: The lexical analyzer.
+* Mfcalc Main:: The controlling function.
Bison Grammar Files
-* Grammar Outline:: Overall layout of the grammar file.
-* Symbols:: Terminal and nonterminal symbols.
-* Rules:: How to write grammar rules.
-* Recursion:: Writing recursive rules.
-* Semantics:: Semantic values and actions.
-* Locations:: Locations and actions.
-* Declarations:: All kinds of Bison declarations are described here.
-* Multiple Parsers:: Putting more than one Bison parser in one program.
+* Grammar Outline:: Overall layout of the grammar file.
+* Symbols:: Terminal and nonterminal symbols.
+* Rules:: How to write grammar rules.
+* Recursion:: Writing recursive rules.
+* Semantics:: Semantic values and actions.
+* Tracking Locations:: Locations and actions.
+* Named References:: Using named references in actions.
+* Declarations:: All kinds of Bison declarations are described here.
+* Multiple Parsers:: Putting more than one Bison parser in one program.
Outline of a Bison Grammar
-* Prologue:: Syntax and usage of the prologue.
+* Prologue:: Syntax and usage of the prologue.
* Prologue Alternatives:: Syntax and usage of alternatives to the prologue.
-* Bison Declarations:: Syntax and usage of the Bison declarations section.
-* Grammar Rules:: Syntax and usage of the grammar rules section.
-* Epilogue:: Syntax and usage of the epilogue.
+* Bison Declarations:: Syntax and usage of the Bison declarations section.
+* Grammar Rules:: Syntax and usage of the grammar rules section.
+* Epilogue:: Syntax and usage of the epilogue.
Defining Language Semantics
* Expect Decl:: Suppressing warnings about parsing conflicts.
* Start Decl:: Specifying the start symbol.
* Pure Decl:: Requesting a reentrant parser.
+* Push Decl:: Requesting a push parser.
* Decl Summary:: Table of all Bison declarations.
+* %define Summary:: Defining variables to adjust Bison's behavior.
+* %code Summary:: Inserting code into the parser source.
Parser C-Language Interface
-* Parser Function:: How to call @code{yyparse} and what it returns.
-* Lexical:: You must supply a function @code{yylex}
- which reads tokens.
-* Error Reporting:: You must supply a function @code{yyerror}.
-* Action Features:: Special features for use in actions.
-* Internationalization:: How to let the parser speak in the user's
- native language.
+* Parser Function:: How to call @code{yyparse} and what it returns.
+* Push Parser Function:: How to call @code{yypush_parse} and what it returns.
+* Pull Parser Function:: How to call @code{yypull_parse} and what it returns.
+* Parser Create Function:: How to call @code{yypstate_new} and what it returns.
+* Parser Delete Function:: How to call @code{yypstate_delete} and what it returns.
+* Lexical:: You must supply a function @code{yylex}
+ which reads tokens.
+* Error Reporting:: You must supply a function @code{yyerror}.
+* Action Features:: Special features for use in actions.
+* Internationalization:: How to let the parser speak in the user's
+ native language.
The Lexical Analyzer Function @code{yylex}
* Calling Convention:: How @code{yyparse} calls @code{yylex}.
-* Token Values:: How @code{yylex} must return the semantic value
- of the token it has read.
-* Token Locations:: How @code{yylex} must return the text location
- (line number, etc.) of the token, if the
- actions want that.
-* Pure Calling:: How the calling convention differs
- in a pure parser (@pxref{Pure Decl, ,A Pure (Reentrant) Parser}).
+* Token Values:: How @code{yylex} must return the semantic value
+ of the token it has read.
+* Token Locations:: How @code{yylex} must return the text location
+ (line number, etc.) of the token, if the
+ actions want that.
+* Pure Calling:: How the calling convention differs in a pure parser
+ (@pxref{Pure Decl, ,A Pure (Reentrant) Parser}).
The Bison Parser Algorithm
* Contextual Precedence:: When an operator's precedence depends on context.
* Parser States:: The parser is a finite-state-machine with stack.
* Reduce/Reduce:: When two rules are applicable in the same situation.
-* Mystery Conflicts:: Reduce/reduce conflicts that look unjustified.
+* Mysterious Conflicts:: Conflicts that look unjustified.
+* Tuning LR:: How to tune fundamental aspects of LR-based parsing.
* Generalized LR Parsing:: Parsing arbitrary context-free grammars.
* Memory Management:: What happens when memory is exhausted. How to avoid it.
Operator Precedence
* Why Precedence:: An example showing why precedence is needed.
-* Using Precedence:: How to specify precedence in Bison grammars.
+* Using Precedence:: How to specify precedence and associativity.
+* Precedence Only:: How to specify precedence only.
* Precedence Examples:: How these features are used in the previous example.
* How Precedence:: How they work.
+Tuning LR
+
+* LR Table Construction:: Choose a different construction algorithm.
+* Default Reductions:: Disable default reductions.
+* LAC:: Correct lookahead sets in the parser states.
+* Unreachable States:: Keep unreachable parser states for debugging.
+
Handling Context Dependencies
* Semantic Tokens:: Token parsing can depend on the semantic context.
* Option Cross Key:: Alphabetical list of long options.
* Yacc Library:: Yacc-compatible @code{yylex} and @code{main}.
-C++ Language Interface
+Parsers Written In Other Languages
* C++ Parsers:: The interface to generate C++ parser classes
-* A Complete C++ Example:: Demonstrating their use
+* Java Parsers:: The interface to generate Java parser classes
C++ Parsers
* C++ Location Values:: The position and location classes
* C++ Parser Interface:: Instantiating and running the parser
* C++ Scanner Interface:: Exchanges between yylex and parse
+* A Complete C++ Example:: Demonstrating their use
+
+C++ Location Values
+
+* C++ position:: One point in the source file
+* C++ location:: Two points in the source file
A Complete C++ Example
* Calc++ Scanner:: A pure C++ Flex scanner
* Calc++ Top Level:: Conducting the band
+Java Parsers
+
+* Java Bison Interface:: Asking for Java parser generation
+* Java Semantic Values:: %type and %token vs. Java
+* Java Location Values:: The position and location classes
+* Java Parser Interface:: Instantiating and running the parser
+* Java Scanner Interface:: Specifying the scanner for the parser
+* Java Action Features:: Special features for use in actions
+* Java Differences:: Differences between C/C++ and Java Grammars
+* Java Declarations Summary:: List of Bison declarations used with Java
+
Frequently Asked Questions
-* Memory Exhausted:: Breaking the Stack Limits
-* How Can I Reset the Parser:: @code{yyparse} Keeps some State
-* Strings are Destroyed:: @code{yylval} Loses Track of Strings
-* Implementing Gotos/Loops:: Control Flow in the Calculator
-* Multiple start-symbols:: Factoring closely related grammars
-* Secure? Conform?:: Is Bison @acronym{POSIX} safe?
-* I can't build Bison:: Troubleshooting
-* Where can I find help?:: Troubleshouting
-* Bug Reports:: Troublereporting
-* Other Languages:: Parsers in Java and others
-* Beta Testing:: Experimenting development versions
-* Mailing Lists:: Meeting other Bison users
+* Memory Exhausted:: Breaking the Stack Limits
+* How Can I Reset the Parser:: @code{yyparse} Keeps some State
+* Strings are Destroyed:: @code{yylval} Loses Track of Strings
+* Implementing Gotos/Loops:: Control Flow in the Calculator
+* Multiple start-symbols:: Factoring closely related grammars
+* Secure? Conform?:: Is Bison POSIX safe?
+* I can't build Bison:: Troubleshooting
+* Where can I find help?:: Troubleshouting
+* Bug Reports:: Troublereporting
+* More Languages:: Parsers in C++, Java, and so on
+* Beta Testing:: Experimenting with development versions
+* Mailing Lists:: Meeting other Bison users
Copying This Manual
-* GNU Free Documentation License:: License for copying this manual.
+* Copying This Manual:: License for copying this manual.
@end detailmenu
@end menu
@cindex introduction
@dfn{Bison} is a general-purpose parser generator that converts an
-annotated context-free grammar into an @acronym{LALR}(1) or
-@acronym{GLR} parser for that grammar. Once you are proficient with
-Bison, you can use it to develop a wide range of language parsers, from those
-used in simple desk calculators to complex programming languages.
-
-Bison is upward compatible with Yacc: all properly-written Yacc grammars
-ought to work with Bison with no change. Anyone familiar with Yacc
-should be able to use Bison with little trouble. You need to be fluent in
-C or C++ programming in order to use Bison or to understand this manual.
+annotated context-free grammar into a deterministic LR or generalized
+LR (GLR) parser employing LALR(1) parser tables. As an experimental
+feature, Bison can also generate IELR(1) or canonical LR(1) parser
+tables. Once you are proficient with Bison, you can use it to develop
+a wide range of language parsers, from those used in simple desk
+calculators to complex programming languages.
+
+Bison is upward compatible with Yacc: all properly-written Yacc
+grammars ought to work with Bison with no change. Anyone familiar
+with Yacc should be able to use Bison with little trouble. You need
+to be fluent in C or C++ programming in order to use Bison or to
+understand this manual. Java is also supported as an experimental
+feature.
-We begin with tutorial chapters that explain the basic concepts of using
-Bison and show three explained examples, each building on the last. If you
-don't know Bison or Yacc, start by reading these chapters. Reference
-chapters follow which describe specific aspects of Bison in detail.
+We begin with tutorial chapters that explain the basic concepts of
+using Bison and show three explained examples, each building on the
+last. If you don't know Bison or Yacc, start by reading these
+chapters. Reference chapters follow, which describe specific aspects
+of Bison in detail.
-Bison was written primarily by Robert Corbett; Richard Stallman made it
-Yacc-compatible. Wilfred Hansen of Carnegie Mellon University added
-multi-character string literals and other features.
+Bison was written originally by Robert Corbett. Richard Stallman made
+it Yacc-compatible. Wilfred Hansen of Carnegie Mellon University
+added multi-character string literals and other features. Since then,
+Bison has grown more robust and evolved many other new features thanks
+to the hard work of a long list of volunteers. For details, see the
+@file{THANKS} and @file{ChangeLog} files included in the Bison
+distribution.
This edition corresponds to version @value{VERSION} of Bison.
The distribution terms for Bison-generated parsers permit using the
parsers in nonfree programs. Before Bison version 2.2, these extra
-permissions applied only when Bison was generating @acronym{LALR}(1)
+permissions applied only when Bison was generating LALR(1)
parsers in C@. And before Bison version 1.24, Bison-generated
parsers could be used only in programs that were free software.
-The other @acronym{GNU} programming tools, such as the @acronym{GNU} C
+The other GNU programming tools, such as the GNU C
compiler, have never
had such a requirement. They could always be used for nonfree
software. The reason Bison was different was not due to a special
policy decision; it resulted from applying the usual General Public
License to all of the Bison source code.
-The output of the Bison utility---the Bison parser file---contains a
-verbatim copy of a sizable piece of Bison, which is the code for the
-parser's implementation. (The actions from your grammar are inserted
-into this implementation at one point, but most of the rest of the
-implementation is not changed.) When we applied the @acronym{GPL}
-terms to the skeleton code for the parser's implementation,
+The main output of the Bison utility---the Bison parser implementation
+file---contains a verbatim copy of a sizable piece of Bison, which is
+the code for the parser's implementation. (The actions from your
+grammar are inserted into this implementation at one point, but most
+of the rest of the implementation is not changed.) When we applied
+the GPL terms to the skeleton code for the parser's implementation,
the effect was to restrict the use of Bison output to free software.
We didn't change the terms because of sympathy for people who want to
concluded that limiting Bison's use to free software was doing little to
encourage people to make other software free. So we decided to make the
practical conditions for using Bison match the practical conditions for
-using the other @acronym{GNU} tools.
+using the other GNU tools.
This exception applies when Bison is generating code for a parser.
You can tell whether the exception applies to a Bison output file by
exception@dots{}''. The text spells out the exact terms of the
exception.
-@include gpl.texi
+@node Copying
+@unnumbered GNU GENERAL PUBLIC LICENSE
+@include gpl-3.0.texi
@node Concepts
@chapter The Concepts of Bison
use Bison or Yacc, we suggest you start by reading this chapter carefully.
@menu
-* Language and Grammar:: Languages and context-free grammars,
- as mathematical ideas.
-* Grammar in Bison:: How we represent grammars for Bison's sake.
-* Semantic Values:: Each token or syntactic grouping can have
- a semantic value (the value of an integer,
- the name of an identifier, etc.).
-* Semantic Actions:: Each rule can have an action containing C code.
-* GLR Parsers:: Writing parsers for general context-free languages.
-* Locations Overview:: Tracking Locations.
-* Bison Parser:: What are Bison's input and output,
- how is the output used?
-* Stages:: Stages in writing and running Bison grammars.
-* Grammar Layout:: Overall structure of a Bison grammar file.
+* Language and Grammar:: Languages and context-free grammars,
+ as mathematical ideas.
+* Grammar in Bison:: How we represent grammars for Bison's sake.
+* Semantic Values:: Each token or syntactic grouping can have
+ a semantic value (the value of an integer,
+ the name of an identifier, etc.).
+* Semantic Actions:: Each rule can have an action containing C code.
+* GLR Parsers:: Writing parsers for general context-free languages.
+* Locations:: Overview of location tracking.
+* Bison Parser:: What are Bison's input and output,
+ how is the output used?
+* Stages:: Stages in writing and running Bison grammars.
+* Grammar Layout:: Overall structure of a Bison grammar file.
@end menu
@node Language and Grammar
recursive, but there must be at least one rule which leads out of the
recursion.
-@cindex @acronym{BNF}
+@cindex BNF
@cindex Backus-Naur form
The most common formal system for presenting such rules for humans to read
-is @dfn{Backus-Naur Form} or ``@acronym{BNF}'', which was developed in
+is @dfn{Backus-Naur Form} or ``BNF'', which was developed in
order to specify the language Algol 60. Any grammar expressed in
-@acronym{BNF} is a context-free grammar. The input to Bison is
-essentially machine-readable @acronym{BNF}.
-
-@cindex @acronym{LALR}(1) grammars
-@cindex @acronym{LR}(1) grammars
-There are various important subclasses of context-free grammar. Although it
-can handle almost all context-free grammars, Bison is optimized for what
-are called @acronym{LALR}(1) grammars.
-In brief, in these grammars, it must be possible to
-tell how to parse any portion of an input string with just a single
-token of lookahead. Strictly speaking, that is a description of an
-@acronym{LR}(1) grammar, and @acronym{LALR}(1) involves additional
-restrictions that are
-hard to explain simply; but it is rare in actual practice to find an
-@acronym{LR}(1) grammar that fails to be @acronym{LALR}(1).
-@xref{Mystery Conflicts, ,Mysterious Reduce/Reduce Conflicts}, for
-more information on this.
-
-@cindex @acronym{GLR} parsing
-@cindex generalized @acronym{LR} (@acronym{GLR}) parsing
+BNF is a context-free grammar. The input to Bison is
+essentially machine-readable BNF.
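+
+For instance, a rule that a BNF description might write as
+@samp{<exp> ::= <exp> "+" <exp>} appears in Bison input roughly as
+follows (only a sketch here; the notation is described in detail in
+later chapters):
+
+@example
+exp: exp '+' exp;
+@end example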
+
+@cindex LALR grammars
+@cindex IELR grammars
+@cindex LR grammars
+There are various important subclasses of context-free grammars. Although
+it can handle almost all context-free grammars, Bison is optimized for what
+are called LR(1) grammars. In brief, in these grammars, it must be possible
+to tell how to parse any portion of an input string with just a single token
+of lookahead. For historical reasons, Bison by default is limited by the
+additional restrictions of LALR(1), which are hard to explain simply.
+@xref{Mysterious Conflicts}, for more information on this. As an
+experimental feature, you can escape these additional restrictions by
+requesting IELR(1) or canonical LR(1) parser tables. @xref{LR Table
+Construction}, to learn how.
+
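+For example, the following directive is one way to request IELR(1)
+parser tables (only a sketch; the exact variable and its accepted
+values are described in @ref{LR Table Construction}):
+
+@example
+%define lr.type ielr
+@end example
+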
+@cindex GLR parsing
+@cindex generalized LR (GLR) parsing
@cindex ambiguous grammars
@cindex nondeterministic parsing
-Parsers for @acronym{LALR}(1) grammars are @dfn{deterministic}, meaning
+Parsers for LR(1) grammars are @dfn{deterministic}, meaning
roughly that the next grammar rule to apply at any point in the input is
uniquely determined by the preceding input and a fixed, finite portion
(called a @dfn{lookahead}) of the remaining input. A context-free
grammars can be @dfn{nondeterministic}, meaning that no fixed
lookahead always suffices to determine the next grammar rule to apply.
With the proper declarations, Bison is also able to parse these more
-general context-free grammars, using a technique known as @acronym{GLR}
-parsing (for Generalized @acronym{LR}). Bison's @acronym{GLR} parsers
+general context-free grammars, using a technique known as GLR
+parsing (for Generalized LR). Bison's GLR parsers
are able to handle any context-free grammar for which the number of
possible parses of any given string is finite.
Here is a simple C function subdivided into tokens:
-@ifinfo
@example
int /* @r{keyword `int'} */
square (int x) /* @r{identifier, open-paren, keyword `int',}
@r{identifier, semicolon} */
@} /* @r{close-brace} */
@end example
-@end ifinfo
-@ifnotinfo
-@example
-int /* @r{keyword `int'} */
-square (int x) /* @r{identifier, open-paren, keyword `int', identifier, close-paren} */
-@{ /* @r{open-brace} */
- return x * x; /* @r{keyword `return', identifier, asterisk, identifier, semicolon} */
-@} /* @r{close-brace} */
-@end example
-@end ifnotinfo
The syntactic groupings of C include the expression, the statement, the
declaration, and the function definition. These are represented in the
used in every rule.
@example
-stmt: RETURN expr ';'
- ;
+stmt: RETURN expr ';' ;
@end example
@noindent
two subexpressions:
@example
-expr: expr '+' expr @{ $$ = $1 + $3; @}
- ;
+expr: expr '+' expr @{ $$ = $1 + $3; @} ;
@end example
@noindent
from the values of the two subexpressions.
@node GLR Parsers
-@section Writing @acronym{GLR} Parsers
-@cindex @acronym{GLR} parsing
-@cindex generalized @acronym{LR} (@acronym{GLR}) parsing
+@section Writing GLR Parsers
+@cindex GLR parsing
+@cindex generalized LR (GLR) parsing
@findex %glr-parser
@cindex conflicts
@cindex shift/reduce conflicts
@cindex reduce/reduce conflicts
-In some grammars, Bison's standard
-@acronym{LALR}(1) parsing algorithm cannot decide whether to apply a
+In some grammars, Bison's deterministic
+LR(1) parsing algorithm cannot decide whether to apply a
certain grammar rule at a given point. That is, it may not be able to
decide (on the basis of the input read so far) which of two possible
reductions (applications of a grammar rule) applies, or whether to apply
(@pxref{Reduce/Reduce}), and @dfn{shift/reduce} conflicts
(@pxref{Shift/Reduce}).
-To use a grammar that is not easily modified to be @acronym{LALR}(1), a
+To use a grammar that is not easily modified to be LR(1), a
more general parsing algorithm is sometimes necessary. If you include
@code{%glr-parser} among the Bison declarations in your file
-(@pxref{Grammar Outline}), the result is a Generalized @acronym{LR}
-(@acronym{GLR}) parser. These parsers handle Bison grammars that
+(@pxref{Grammar Outline}), the result is a Generalized LR
+(GLR) parser. These parsers handle Bison grammars that
contain no unresolved conflicts (i.e., after applying precedence
-declarations) identically to @acronym{LALR}(1) parsers. However, when
+declarations) identically to deterministic parsers. However, when
faced with unresolved shift/reduce and reduce/reduce conflicts,
-@acronym{GLR} parsers use the simple expedient of doing both,
+GLR parsers use the simple expedient of doing both,
effectively cloning the parser to follow both possibilities. Each of
the resulting parsers can again split, so that at any given time, there
can be any number of possible parses being explored. The parsers
merged result.
@menu
-* Simple GLR Parsers:: Using @acronym{GLR} parsers on unambiguous grammars.
-* Merging GLR Parses:: Using @acronym{GLR} parsers to resolve ambiguities.
-* GLR Semantic Actions:: Deferred semantic actions have special concerns.
-* Compiler Requirements:: @acronym{GLR} parsers require a modern C compiler.
+* Simple GLR Parsers:: Using GLR parsers on unambiguous grammars.
+* Merging GLR Parses:: Using GLR parsers to resolve ambiguities.
+* GLR Semantic Actions:: Considerations for semantic values and deferred actions.
+* Semantic Predicates:: Controlling a parse with arbitrary computations.
+* Compiler Requirements:: GLR parsers require a modern C compiler.
@end menu
@node Simple GLR Parsers
-@subsection Using @acronym{GLR} on Unambiguous Grammars
-@cindex @acronym{GLR} parsing, unambiguous grammars
-@cindex generalized @acronym{LR} (@acronym{GLR}) parsing, unambiguous grammars
+@subsection Using GLR on Unambiguous Grammars
+@cindex GLR parsing, unambiguous grammars
+@cindex generalized LR (GLR) parsing, unambiguous grammars
@findex %glr-parser
@findex %expect-rr
@cindex conflicts
@cindex reduce/reduce conflicts
@cindex shift/reduce conflicts
-In the simplest cases, you can use the @acronym{GLR} algorithm
-to parse grammars that are unambiguous, but fail to be @acronym{LALR}(1).
-Such grammars typically require more than one symbol of lookahead,
-or (in rare cases) fall into the category of grammars in which the
-@acronym{LALR}(1) algorithm throws away too much information (they are in
-@acronym{LR}(1), but not @acronym{LALR}(1), @ref{Mystery Conflicts}).
+In the simplest cases, you can use the GLR algorithm
+to parse grammars that are unambiguous but fail to be LR(1).
+Such grammars typically require more than one symbol of lookahead.
Consider a problem that
arises in the declaration of enumerated and subrange types in the
@noindent
The original language standard allows only numeric
literals and constant identifiers for the subrange bounds (@samp{lo}
-and @samp{hi}), but Extended Pascal (@acronym{ISO}/@acronym{IEC}
+and @samp{hi}), but Extended Pascal (ISO/IEC
10206) and many other
Pascal implementations allow arbitrary expressions there. This gives
rise to the following situation, containing a superfluous pair of
valid, and more-complicated cases can come up in practical programs.)
These two declarations look identical until the @samp{..} token.
-With normal @acronym{LALR}(1) one-token lookahead it is not
+With normal LR(1) one-token lookahead it is not
possible to decide between the two forms when the identifier
@samp{a} is parsed. It is, however, desirable
for a parser to decide this, since in the latter case
work.
A simple solution to this problem is to declare the parser to
-use the @acronym{GLR} algorithm.
-When the @acronym{GLR} parser reaches the critical state, it
+use the GLR algorithm.
+When the GLR parser reaches the critical state, it
merely splits into two branches and pursues both syntax rules
simultaneously. Sooner or later, one of them runs into a parsing
error. If there is a @samp{..} token before the next
The effect of all this is that the parser seems to ``guess'' the
correct branch to take, or in other words, it seems to use more
-lookahead than the underlying @acronym{LALR}(1) algorithm actually allows
-for. In this example, @acronym{LALR}(2) would suffice, but also some cases
-that are not @acronym{LALR}(@math{k}) for any @math{k} can be handled this way.
+lookahead than the underlying LR(1) algorithm actually allows
+for. In this example, LR(2) would suffice, but also some cases
+that are not LR(@math{k}) for any @math{k} can be handled this way.
-In general, a @acronym{GLR} parser can take quadratic or cubic worst-case time,
+In general, a GLR parser can take quadratic or cubic worst-case time,
and the current Bison parser even takes exponential time and space
for some grammars. In practice, this rarely happens, and for many
grammars it is possible to prove that it cannot happen.
%%
@group
-type_decl : TYPE ID '=' type ';'
- ;
+type_decl: TYPE ID '=' type ';' ;
@end group
@group
-type : '(' id_list ')'
- | expr DOTDOT expr
- ;
+type:
+ '(' id_list ')'
+| expr DOTDOT expr
+;
@end group
@group
-id_list : ID
- | id_list ',' ID
- ;
+id_list:
+ ID
+| id_list ',' ID
+;
@end group
@group
-expr : '(' expr ')'
- | expr '+' expr
- | expr '-' expr
- | expr '*' expr
- | expr '/' expr
- | ID
- ;
+expr:
+ '(' expr ')'
+| expr '+' expr
+| expr '-' expr
+| expr '*' expr
+| expr '/' expr
+| ID
+;
@end group
@end example
-When used as a normal @acronym{LALR}(1) grammar, Bison correctly complains
+When used as a normal LR(1) grammar, Bison correctly complains
about one reduce/reduce conflict. In the conflicting situation the
parser chooses one of the alternatives, arbitrarily the one
declared first. Therefore the following correct input is not
type t = (a) .. b;
@end example
-The parser can be turned into a @acronym{GLR} parser, while also telling Bison
-to be silent about the one known reduce/reduce conflict, by
-adding these two declarations to the Bison input file (before the first
+The parser can be turned into a GLR parser, while also telling Bison
+to be silent about the one known reduce/reduce conflict, by adding
+these two declarations to the Bison grammar file (before the first
@samp{%%}):
@example
limited syntax above, transparently. In fact, the user does not even
notice when the parser splits.
-So here we have a case where we can use the benefits of @acronym{GLR},
+So here we have a case where we can use the benefits of GLR,
almost without disadvantages. Even in simple cases like this, however,
there are at least two potential problems to beware. First, always
-analyze the conflicts reported by Bison to make sure that @acronym{GLR}
-splitting is only done where it is intended. A @acronym{GLR} parser
+analyze the conflicts reported by Bison to make sure that GLR
+splitting is only done where it is intended. A GLR parser
splitting inadvertently may cause problems less obvious than an
-@acronym{LALR} parser statically choosing the wrong alternative in a
+LR parser statically choosing the wrong alternative in a
conflict. Second, consider interactions with the lexer (@pxref{Semantic
Tokens}) with great care. Since a split parser consumes tokens without
performing any actions during the split, the lexer cannot obtain
information via parser actions. Some cases of lexer interactions can be
-eliminated by using @acronym{GLR} to shift the complications from the
+eliminated by using GLR to shift the complications from the
lexer to the parser. You must check the remaining cases for
correctness.
they cannot be used within the same enumerated type declaration.
@node Merging GLR Parses
-@subsection Using @acronym{GLR} to Resolve Ambiguities
-@cindex @acronym{GLR} parsing, ambiguous grammars
-@cindex generalized @acronym{LR} (@acronym{GLR}) parsing, ambiguous grammars
+@subsection Using GLR to Resolve Ambiguities
+@cindex GLR parsing, ambiguous grammars
+@cindex generalized LR (GLR) parsing, ambiguous grammars
@findex %dprec
@findex %merge
@cindex conflicts
%%
-prog :
- | prog stmt @{ printf ("\n"); @}
- ;
+prog:
+ /* Nothing. */
+| prog stmt @{ printf ("\n"); @}
+;
-stmt : expr ';' %dprec 1
- | decl %dprec 2
- ;
+stmt:
+ expr ';' %dprec 1
+| decl %dprec 2
+;
-expr : ID @{ printf ("%s ", $$); @}
- | TYPENAME '(' expr ')'
- @{ printf ("%s <cast> ", $1); @}
- | expr '+' expr @{ printf ("+ "); @}
- | expr '=' expr @{ printf ("= "); @}
- ;
+expr:
+ ID @{ printf ("%s ", $$); @}
+| TYPENAME '(' expr ')'
+ @{ printf ("%s <cast> ", $1); @}
+| expr '+' expr @{ printf ("+ "); @}
+| expr '=' expr @{ printf ("= "); @}
+;
-decl : TYPENAME declarator ';'
- @{ printf ("%s <declare> ", $1); @}
- | TYPENAME declarator '=' expr ';'
- @{ printf ("%s <init-declare> ", $1); @}
- ;
+decl:
+ TYPENAME declarator ';'
+ @{ printf ("%s <declare> ", $1); @}
+| TYPENAME declarator '=' expr ';'
+ @{ printf ("%s <init-declare> ", $1); @}
+;
-declarator : ID @{ printf ("\"%s\" ", $1); @}
- | '(' declarator ')'
- ;
+declarator:
+ ID @{ printf ("\"%s\" ", $1); @}
+| '(' declarator ')'
+;
@end example
@noindent
Bison detects this as a reduce/reduce conflict between the rules
@code{expr : ID} and @code{declarator : ID}, which it cannot resolve at the
time it encounters @code{x} in the example above. Since this is a
-@acronym{GLR} parser, it therefore splits the problem into two parses, one for
+GLR parser, it therefore splits the problem into two parses, one for
each choice of resolving the reduce/reduce conflict.
Unlike the example from the previous section (@pxref{Simple GLR Parsers}),
however, neither of these parses ``dies,'' because the grammar as it stands is
identical state: they've seen @samp{prog stmt} and have the same unprocessed
input remaining. We say that these parses have @dfn{merged.}
-At this point, the @acronym{GLR} parser requires a specification in the
+At this point, the GLR parser requires a specification in the
grammar of how to choose between the competing parses.
In the example above, the two @code{%dprec}
declarations specify that Bison is to give precedence
@end example
@noindent
-This is another example of using @acronym{GLR} to parse an unambiguous
+This is another example of using GLR to parse an unambiguous
construct, as shown in the previous section (@pxref{Simple GLR Parsers}).
Here, there is no ambiguity (this cannot be parsed as a declaration).
However, at the time the Bison parser encounters @code{x}, it does not
follows:
@example
-stmt : expr ';' %merge <stmtMerge>
- | decl %merge <stmtMerge>
- ;
+stmt:
+ expr ';' %merge <stmtMerge>
+| decl %merge <stmtMerge>
+;
@end example
@noindent
@node GLR Semantic Actions
@subsection GLR Semantic Actions
+The nature of GLR parsing and the structure of the generated
+parsers give rise to certain restrictions on semantic values and actions.
+
+@subsubsection Deferred semantic actions
@cindex deferred semantic actions
By definition, a deferred semantic action is not performed at the same time as
the associated reduction.
This raises caveats for several Bison features you might use in a semantic
-action in a @acronym{GLR} parser.
+action in a GLR parser.
@vindex yychar
-@cindex @acronym{GLR} parsers and @code{yychar}
+@cindex GLR parsers and @code{yychar}
@vindex yylval
-@cindex @acronym{GLR} parsers and @code{yylval}
+@cindex GLR parsers and @code{yylval}
@vindex yylloc
-@cindex @acronym{GLR} parsers and @code{yylloc}
+@cindex GLR parsers and @code{yylloc}
In any semantic action, you can examine @code{yychar} to determine the type of
the lookahead token present at the time of the associated reduction.
After checking that @code{yychar} is not set to @code{YYEMPTY} or @code{YYEOF},
@xref{Lookahead, ,Lookahead Tokens}.
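+
+As a rough illustration (the rule and its use of the lookahead are
+invented for this sketch, which also assumes numeric semantic values
+and that @code{<stdio.h>} is included in the prologue), an action
+might inspect the lookahead like this:
+
+@example
+exp:
+  exp '+' exp
+    @{
+      $$ = $1 + $3;
+      /* Peek at the lookahead present at reduction time.  */
+      if (yychar != YYEMPTY && yychar != YYEOF)
+        fprintf (stderr, "reducing before token %d\n", yychar);
+    @}
+;
+@end example
+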
@findex yyclearin
-@cindex @acronym{GLR} parsers and @code{yyclearin}
+@cindex GLR parsers and @code{yyclearin}
In a deferred semantic action, it's too late to influence syntax analysis.
In this case, @code{yychar}, @code{yylval}, and @code{yylloc} are set to
shallow copies of the values they had at the time of the associated reduction.
to invoke @code{yyclearin} (@pxref{Action Features}) or to attempt to free
memory referenced by @code{yylval}.
+@subsubsection YYERROR
@findex YYERROR
-@cindex @acronym{GLR} parsers and @code{YYERROR}
+@cindex GLR parsers and @code{YYERROR}
Another Bison feature requiring special consideration is @code{YYERROR}
(@pxref{Action Features}), which you can invoke in a semantic action to
initiate error recovery.
-During deterministic @acronym{GLR} operation, the effect of @code{YYERROR} is
-the same as its effect in an @acronym{LALR}(1) parser.
-In a deferred semantic action, its effect is undefined.
-@c The effect is probably a syntax error at the split point.
+During deterministic GLR operation, the effect of @code{YYERROR} is
+the same as its effect in a deterministic parser.
+The effect in a deferred action is similar, but the precise point of the
+error is undefined; instead, the parser reverts to deterministic operation,
+selecting an unspecified stack on which to continue with a syntax error.
+In a semantic predicate (see @ref{Semantic Predicates}) during nondeterministic
+parsing, @code{YYERROR} silently prunes
+the parse that invoked the test.
+
+@subsubsection Restrictions on semantic values and locations
+GLR parsers require that you use POD (Plain Old Data) types for
+semantic values and location types when using the generated parsers as
+C++ code.
+
+@node Semantic Predicates
+@subsection Controlling a Parse with Arbitrary Predicates
+@findex %?
+@cindex Semantic predicates in GLR parsers
+
+In addition to the @code{%dprec} and @code{%merge} directives,
+GLR parsers
+allow you to reject parses on the basis of arbitrary computations executed
+in user code, without having Bison treat this rejection as an error
+if there are alternative parses. (This feature is experimental and may
+evolve. We welcome user feedback.) For example,
+
+@example
+widget:
+ %?@{ new_syntax @} "widget" id new_args @{ $$ = f($3, $4); @}
+| %?@{ !new_syntax @} "widget" id old_args @{ $$ = f($3, $4); @}
+;
+@end example
+
+@noindent
+is one way to allow the same parser to handle two different syntaxes for
+widgets. The clause preceded by @code{%?} is treated like an ordinary
+action, except that its text is treated as an expression and is always
+evaluated immediately (even when in nondeterministic mode). If the
+expression yields 0 (false), the clause is treated as a syntax error,
+which, in a nondeterministic parser, causes the stack in which it is reduced
+to die. In a deterministic parser, it acts like @code{YYERROR}.
+
+As the example shows, predicates otherwise look like semantic actions, and
+therefore you must be take them into account when determining the numbers
+to use for denoting the semantic values of right-hand side symbols.
+Predicate actions, however, have no defined value, and may not be given
+labels.
+
+There is a subtle difference between semantic predicates and ordinary
+actions in nondeterministic mode, since the latter are deferred.
+For example, we could try to rewrite the previous example as
+
+@example
+widget:
+ @{ if (!new_syntax) YYERROR; @}
+ "widget" id new_args @{ $$ = f($3, $4); @}
+| @{ if (new_syntax) YYERROR; @}
+ "widget" id old_args @{ $$ = f($3, $4); @}
+;
+@end example
-Also, see @ref{Location Default Action, ,Default Action for Locations}, which
-describes a special usage of @code{YYLLOC_DEFAULT} in @acronym{GLR} parsers.
+@noindent
+(reversing the sense of the predicate tests to cause an error when they are
+false). However, this
+does @emph{not} have the same effect if @code{new_args} and @code{old_args}
+have overlapping syntax.
+Since the mid-rule actions testing @code{new_syntax} are deferred,
+a GLR parser first encounters the unresolved ambiguous reduction
+for cases where @code{new_args} and @code{old_args} recognize the same string
+@emph{before} performing the tests of @code{new_syntax}. It therefore
+reports an error.
+
+Finally, be careful in writing predicates: deferred actions have not yet
+been evaluated, so using them in a predicate has undefined effects.
@node Compiler Requirements
-@subsection Considerations when Compiling @acronym{GLR} Parsers
+@subsection Considerations when Compiling GLR Parsers
@cindex @code{inline}
-@cindex @acronym{GLR} parsers and @code{inline}
+@cindex GLR parsers and @code{inline}
-The @acronym{GLR} parsers require a compiler for @acronym{ISO} C89 or
+The GLR parsers require a compiler for ISO C89 or
later. In addition, they use the @code{inline} keyword, which is not
C89, but is C99 and is a common extension in pre-C99 compilers. It is
up to the user of these parsers to handle
@example
%@{
- #if __STDC_VERSION__ < 199901 && ! defined __GNUC__ && ! defined inline
- #define inline
+ #if (__STDC_VERSION__ < 199901 && ! defined __GNUC__ \
+ && ! defined inline)
+ # define inline
#endif
%@}
@end example
-@node Locations Overview
+@node Locations
@section Locations
@cindex location
@cindex textual location
Bison provides a mechanism for handling these locations.
Each token has a semantic value. In a similar fashion, each token has an
-associated location, but the type of locations is the same for all tokens and
-groupings. Moreover, the output parser is equipped with a default data
-structure for storing locations (@pxref{Locations}, for more details).
+associated location, but the type of locations is the same for all tokens
+and groupings. Moreover, the output parser is equipped with a default data
+structure for storing locations (@pxref{Tracking Locations}, for more
+details).
Like semantic values, locations can be reached in actions using a dedicated
set of constructs. In the example above, the location of the whole grouping
of the first symbol, and the end of the last symbol.
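+
+As a rough sketch (reusing the addition rule shown earlier; the
+location fields are those of the default structure described in
+@ref{Tracking Locations}), an action can refer to these locations as
+follows:
+
+@example
+exp:
+  exp '+' exp
+    @{
+      $$ = $1 + $3;
+      /* By default @@$ already spans from the start of @@1 to the
+         end of @@3; here we merely read one of its fields.  */
+      fprintf (stderr, "addition ending on line %d\n", @@$.last_line);
+    @}
+;
+@end example
+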
@node Bison Parser
-@section Bison Output: the Parser File
+@section Bison Output: the Parser Implementation File
@cindex Bison parser
@cindex Bison utility
@cindex lexical analyzer, purpose
@cindex parser
-When you run Bison, you give it a Bison grammar file as input. The output
-is a C source file that parses the language described by the grammar.
-This file is called a @dfn{Bison parser}. Keep in mind that the Bison
-utility and the Bison parser are two distinct programs: the Bison utility
-is a program whose output is the Bison parser that becomes part of your
-program.
+When you run Bison, you give it a Bison grammar file as input. The
+most important output is a C source file that implements a parser for
+the language described by the grammar. This parser is called a
+@dfn{Bison parser}, and this file is called a @dfn{Bison parser
+implementation file}. Keep in mind that the Bison utility and the
+Bison parser are two distinct programs: the Bison utility is a program
+whose output is the Bison parser implementation file that becomes part
+of your program.
The job of the Bison parser is to group tokens into groupings according to
the grammar rules---for example, to build identifiers and operators into
parsing characters of text, but Bison does not depend on this.
@xref{Lexical, ,The Lexical Analyzer Function @code{yylex}}.
-The Bison parser file is C code which defines a function named
-@code{yyparse} which implements that grammar. This function does not make
-a complete C program: you must supply some additional functions. One is
-the lexical analyzer. Another is an error-reporting function which the
-parser calls to report an error. In addition, a complete C program must
-start with a function called @code{main}; you have to provide this, and
-arrange for it to call @code{yyparse} or the parser will never run.
-@xref{Interface, ,Parser C-Language Interface}.
+The Bison parser implementation file is C code which defines a
+function named @code{yyparse} which implements that grammar. This
+function does not make a complete C program: you must supply some
+additional functions. One is the lexical analyzer. Another is an
+error-reporting function which the parser calls to report an error.
+In addition, a complete C program must start with a function called
+@code{main}; you have to provide this, and arrange for it to call
+@code{yyparse} or the parser will never run. @xref{Interface, ,Parser
+C-Language Interface}.
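+
+For orientation only, here is a rough sketch of those user-supplied
+pieces for a hypothetical grammar whose tokens are single characters
+(it assumes @code{<stdio.h>} has been included; the rpcalc example
+later in this manual defines real versions of all three functions):
+
+@example
+int
+yylex (void)
+@{
+  int c = getchar ();
+  return c == EOF ? 0 : c;   /* Zero tells yyparse the input ended.  */
+@}
+
+void
+yyerror (char const *s)
+@{
+  fprintf (stderr, "%s\n", s);
+@}
+
+int
+main (void)
+@{
+  return yyparse ();
+@}
+@end example
+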
Aside from the token type names and the symbols in the actions you
-write, all symbols defined in the Bison parser file itself
-begin with @samp{yy} or @samp{YY}. This includes interface functions
-such as the lexical analyzer function @code{yylex}, the error reporting
-function @code{yyerror} and the parser function @code{yyparse} itself.
-This also includes numerous identifiers used for internal purposes.
-Therefore, you should avoid using C identifiers starting with @samp{yy}
-or @samp{YY} in the Bison grammar file except for the ones defined in
-this manual. Also, you should avoid using the C identifiers
-@samp{malloc} and @samp{free} for anything other than their usual
-meanings.
-
-In some cases the Bison parser file includes system headers, and in
-those cases your code should respect the identifiers reserved by those
-headers. On some non-@acronym{GNU} hosts, @code{<alloca.h>}, @code{<malloc.h>},
-@code{<stddef.h>}, and @code{<stdlib.h>} are included as needed to
-declare memory allocators and related types. @code{<libintl.h>} is
-included if message translation is in use
-(@pxref{Internationalization}). Other system headers may
-be included if you define @code{YYDEBUG} to a nonzero value
-(@pxref{Tracing, ,Tracing Your Parser}).
+write, all symbols defined in the Bison parser implementation file
+itself begin with @samp{yy} or @samp{YY}. This includes interface
+functions such as the lexical analyzer function @code{yylex}, the
+error reporting function @code{yyerror} and the parser function
+@code{yyparse} itself. This also includes numerous identifiers used
+for internal purposes. Therefore, you should avoid using C
+identifiers starting with @samp{yy} or @samp{YY} in the Bison grammar
+file except for the ones defined in this manual. Also, you should
+avoid using the C identifiers @samp{malloc} and @samp{free} for
+anything other than their usual meanings.
+
+In some cases the Bison parser implementation file includes system
+headers, and in those cases your code should respect the identifiers
+reserved by those headers. On some non-GNU hosts, @code{<alloca.h>},
+@code{<malloc.h>}, @code{<stddef.h>}, and @code{<stdlib.h>} are
+included as needed to declare memory allocators and related types.
+@code{<libintl.h>} is included if message translation is in use
+(@pxref{Internationalization}). Other system headers may be included
+if you define @code{YYDEBUG} to a nonzero value (@pxref{Tracing,
+,Tracing Your Parser}).
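+
+For instance, one way to define @code{YYDEBUG} is in the prologue of
+the grammar file (only a sketch; @pxref{Tracing, ,Tracing Your
+Parser}, for the recommended ways to enable the trace facilities):
+
+@example
+%@{
+  #define YYDEBUG 1
+%@}
+@end example
+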
@node Stages
@section Stages in Using Bison
@cindex simple examples
@cindex examples, simple
-Now we show and explain three sample programs written using Bison: a
+Now we show and explain several sample programs written using Bison: a
reverse polish notation calculator, an algebraic (infix) notation
-calculator, and a multi-function calculator. All three have been tested
-under BSD Unix 4.3; each produces a usable, though limited, interactive
-desk-top calculator.
+calculator --- later extended to track ``locations'' ---
+and a multi-function calculator. All
+produce usable, though limited, interactive desk-top calculators.
These examples are simple, but Bison grammars for real programming
languages are written the same way. You can copy these examples into a
source file to try them.
@menu
-* RPN Calc:: Reverse polish notation calculator;
- a first example with no operator precedence.
-* Infix Calc:: Infix (algebraic) notation calculator.
- Operator precedence is introduced.
+* RPN Calc:: Reverse polish notation calculator;
+ a first example with no operator precedence.
+* Infix Calc:: Infix (algebraic) notation calculator.
+ Operator precedence is introduced.
* Simple Error Recovery:: Continuing after syntax errors.
* Location Tracking Calc:: Demonstrating the use of @@@var{n} and @@$.
-* Multi-function Calc:: Calculator with memory and trig functions.
- It uses multiple data-types for semantic values.
-* Exercises:: Ideas for improving the multi-function calculator.
+* Multi-function Calc:: Calculator with memory and trig functions.
+ It uses multiple data-types for semantic values.
+* Exercises:: Ideas for improving the multi-function calculator.
@end menu
@node RPN Calc
The second example will illustrate how operator precedence is handled.
The source code for this calculator is named @file{rpcalc.y}. The
-@samp{.y} extension is a convention used for Bison input files.
+@samp{.y} extension is a convention used for Bison grammar files.
@menu
-* Decls: Rpcalc Decls. Prologue (declarations) for rpcalc.
-* Rules: Rpcalc Rules. Grammar Rules for rpcalc, with explanation.
-* Lexer: Rpcalc Lexer. The lexical analyzer.
-* Main: Rpcalc Main. The controlling function.
-* Error: Rpcalc Error. The error reporting function.
-* Gen: Rpcalc Gen. Running Bison on the grammar file.
-* Comp: Rpcalc Compile. Run the C compiler on the output code.
+* Rpcalc Declarations:: Prologue (declarations) for rpcalc.
+* Rpcalc Rules:: Grammar Rules for rpcalc, with explanation.
+* Rpcalc Lexer:: The lexical analyzer.
+* Rpcalc Main:: The controlling function.
+* Rpcalc Error:: The error reporting function.
+* Rpcalc Generate:: Running Bison on the grammar file.
+* Rpcalc Compile:: Run the C compiler on the output code.
@end menu
-@node Rpcalc Decls
+@node Rpcalc Declarations
@subsection Declarations for @code{rpcalc}
Here are the C and Bison declarations for the reverse polish notation
calculator. As in C, comments are placed between @samp{/*@dots{}*/}.
+@comment file: rpcalc.y
@example
/* Reverse polish notation calculator. */
%@{
#define YYSTYPE double
+ #include <stdio.h>
#include <math.h>
int yylex (void);
void yyerror (char const *);
Here are the grammar rules for the reverse polish notation calculator.
+@comment file: rpcalc.y
@example
-input: /* empty */
- | input line
+@group
+input:
+ /* empty */
+| input line
;
+@end group
-line: '\n'
- | exp '\n' @{ printf ("\t%.10g\n", $1); @}
+@group
+line:
+ '\n'
+| exp '\n' @{ printf ("%.10g\n", $1); @}
;
+@end group
-exp: NUM @{ $$ = $1; @}
- | exp exp '+' @{ $$ = $1 + $2; @}
- | exp exp '-' @{ $$ = $1 - $2; @}
- | exp exp '*' @{ $$ = $1 * $2; @}
- | exp exp '/' @{ $$ = $1 / $2; @}
- /* Exponentiation */
- | exp exp '^' @{ $$ = pow ($1, $2); @}
- /* Unary minus */
- | exp 'n' @{ $$ = -$1; @}
+@group
+exp:
+ NUM @{ $$ = $1; @}
+| exp exp '+' @{ $$ = $1 + $2; @}
+| exp exp '-' @{ $$ = $1 - $2; @}
+| exp exp '*' @{ $$ = $1 * $2; @}
+| exp exp '/' @{ $$ = $1 / $2; @}
+| exp exp '^' @{ $$ = pow ($1, $2); @} /* Exponentiation */
+| exp 'n' @{ $$ = -$1; @} /* Unary minus */
;
+@end group
%%
@end example
rule are referred to as @code{$1}, @code{$2}, and so on.
@menu
-* Rpcalc Input::
-* Rpcalc Line::
-* Rpcalc Expr::
+* Rpcalc Input:: Explanation of the @code{input} nonterminal
+* Rpcalc Line:: Explanation of the @code{line} nonterminal
+* Rpcalc Expr:: Explanation of the @code{expr} nonterminal
@end menu
@node Rpcalc Input
Consider the definition of @code{input}:
@example
-input: /* empty */
- | input line
+input:
+ /* empty */
+| input line
;
@end example
Now consider the definition of @code{line}:
@example
-line: '\n'
- | exp '\n' @{ printf ("\t%.10g\n", $1); @}
+line:
+ '\n'
+| exp '\n' @{ printf ("%.10g\n", $1); @}
;
@end example
followed by a plus-sign. The third handles subtraction, and so on.
@example
-exp: NUM
- | exp exp '+' @{ $$ = $1 + $2; @}
- | exp exp '-' @{ $$ = $1 - $2; @}
- @dots{}
- ;
+exp:
+ NUM
+| exp exp '+' @{ $$ = $1 + $2; @}
+| exp exp '-' @{ $$ = $1 - $2; @}
+@dots{}
+;
@end example
We have used @samp{|} to join all the rules for @code{exp}, but we could
equally well have written them separately:
@example
-exp: NUM ;
-exp: exp exp '+' @{ $$ = $1 + $2; @} ;
-exp: exp exp '-' @{ $$ = $1 - $2; @} ;
- @dots{}
+exp: NUM ;
+exp: exp exp '+' @{ $$ = $1 + $2; @};
+exp: exp exp '-' @{ $$ = $1 - $2; @};
+@dots{}
@end example
Most of the rules have actions that compute the value of the expression in
For example, this:
@example
-exp : NUM | exp exp '+' @{$$ = $1 + $2; @} | @dots{} ;
+exp: NUM | exp exp '+' @{ $$ = $1 + $2; @} | @dots{} ;
@end example
@noindent
means the same thing as this:
@example
-exp: NUM
- | exp exp '+' @{ $$ = $1 + $2; @}
- | @dots{}
+exp:
+ NUM
+| exp exp '+' @{ $$ = $1 + $2; @}
+| @dots{}
;
@end example
tokens by calling the lexical analyzer. @xref{Lexical, ,The Lexical
Analyzer Function @code{yylex}}.
-Only a simple lexical analyzer is needed for the @acronym{RPN}
+Only a simple lexical analyzer is needed for the RPN
calculator. This
lexical analyzer skips blanks and tabs, then reads in numbers as
@code{double} and returns them as @code{NUM} tokens. Any other character
The semantic value of the token (if it has one) is stored into the
global variable @code{yylval}, which is where the Bison parser will look
for it. (The C data type of @code{yylval} is @code{YYSTYPE}, which was
-defined at the beginning of the grammar; @pxref{Rpcalc Decls,
+defined at the beginning of the grammar; @pxref{Rpcalc Declarations,
,Declarations for @code{rpcalc}}.)
A token type code of zero is returned if the end-of-input is encountered.
Here is the code for the lexical analyzer:
+@comment file: rpcalc.y
@example
@group
/* The lexical analyzer returns a double floating point
/* Skip white space. */
while ((c = getchar ()) == ' ' || c == '\t')
- ;
+ continue;
@end group
@group
/* Process numbers. */
kept to the bare minimum. The only requirement is that it call
@code{yyparse} to start the process of parsing.
+@comment file: rpcalc.y
@example
@group
int
@code{yyerror} (@pxref{Interface, ,Parser C-Language Interface}), so
here is the definition we will use:
+@comment file: rpcalc.y
@example
@group
#include <stdio.h>
+@end group
+@group
/* Called by yyparse on error. */
void
yyerror (char const *s)
cause the calculator program to exit. This is not clean behavior for a
real calculator, but it is adequate for the first example.
-@node Rpcalc Gen
+@node Rpcalc Generate
@subsection Running Bison to Make the Parser
@cindex running Bison (introduction)
Before running Bison to produce a parser, we need to decide how to
arrange all the source code in one or more source files. For such a
-simple example, the easiest thing is to put everything in one file. The
-definitions of @code{yylex}, @code{yyerror} and @code{main} go at the
-end, in the epilogue of the file
+simple example, the easiest thing is to put everything in one file,
+the grammar file. The definitions of @code{yylex}, @code{yyerror} and
+@code{main} go at the end, in the epilogue of the grammar file
(@pxref{Grammar Layout, ,The Overall Layout of a Bison Grammar}).
For a large project, you would probably have several source files, and use
@code{make} to arrange to recompile them.
-With all the source in a single file, you use the following command to
-convert it into a parser file:
+With all the source in the grammar file, you use the following command
+to convert it into a parser implementation file:
@example
bison @var{file}.y
@end example
@noindent
-In this example the file was called @file{rpcalc.y} (for ``Reverse Polish
-@sc{calc}ulator''). Bison produces a file named @file{@var{file}.tab.c},
-removing the @samp{.y} from the original file name. The file output by
-Bison contains the source code for @code{yyparse}. The additional
-functions in the input file (@code{yylex}, @code{yyerror} and @code{main})
-are copied verbatim to the output.
+In this example, the grammar file is called @file{rpcalc.y} (for
+``Reverse Polish @sc{calc}ulator''). Bison produces a parser
+implementation file named @file{@var{file}.tab.c}, removing the
+@samp{.y} from the grammar file name. The parser implementation file
+contains the source code for @code{yyparse}. The additional functions
+in the grammar file (@code{yylex}, @code{yyerror} and @code{main}) are
+copied verbatim to the parser implementation file.
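+
+For the calculator at hand, the concrete command and its result are:
+
+@example
+bison rpcalc.y
+@end example
+
+@noindent
+which produces the parser implementation file @file{rpcalc.tab.c}.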
@node Rpcalc Compile
-@subsection Compiling the Parser File
+@subsection Compiling the Parser Implementation File
@cindex compiling the parser
-Here is how to compile and run the parser file:
+Here is how to compile and run the parser implementation file:
@example
@group
@example
$ @kbd{rpcalc}
@kbd{4 9 +}
-13
+@result{} 13
@kbd{3 7 + 3 4 5 *+-}
--13
+@result{} -13
@kbd{3 7 + 3 4 5 * + - n} @r{Note the unary minus, @samp{n}}
-13
+@result{} 13
@kbd{5 6 / 4 n +}
--3.166666667
+@result{} -3.166666667
@kbd{3 4 ^} @r{Exponentiation}
-81
+@result{} 81
@kbd{^D} @r{End-of-file indicator}
$
@end example
@example
/* Infix notation calculator. */
+@group
%@{
#define YYSTYPE double
#include <math.h>
int yylex (void);
void yyerror (char const *);
%@}
+@end group
+@group
/* Bison declarations. */
%token NUM
%left '-' '+'
%left '*' '/'
-%left NEG /* negation--unary minus */
-%right '^' /* exponentiation */
+%precedence NEG /* negation--unary minus */
+%right '^' /* exponentiation */
+@end group
%% /* The grammar follows. */
-input: /* empty */
- | input line
+@group
+input:
+ /* empty */
+| input line
;
+@end group
-line: '\n'
- | exp '\n' @{ printf ("\t%.10g\n", $1); @}
+@group
+line:
+ '\n'
+| exp '\n' @{ printf ("\t%.10g\n", $1); @}
;
+@end group
-exp: NUM @{ $$ = $1; @}
- | exp '+' exp @{ $$ = $1 + $3; @}
- | exp '-' exp @{ $$ = $1 - $3; @}
- | exp '*' exp @{ $$ = $1 * $3; @}
- | exp '/' exp @{ $$ = $1 / $3; @}
- | '-' exp %prec NEG @{ $$ = -$2; @}
- | exp '^' exp @{ $$ = pow ($1, $3); @}
- | '(' exp ')' @{ $$ = $2; @}
+@group
+exp:
+ NUM @{ $$ = $1; @}
+| exp '+' exp @{ $$ = $1 + $3; @}
+| exp '-' exp @{ $$ = $1 - $3; @}
+| exp '*' exp @{ $$ = $1 * $3; @}
+| exp '/' exp @{ $$ = $1 / $3; @}
+| '-' exp %prec NEG @{ $$ = -$2; @}
+| exp '^' exp @{ $$ = pow ($1, $3); @}
+| '(' exp ')' @{ $$ = $2; @}
;
+@end group
%%
@end example
types and says they are left-associative operators. The declarations
@code{%left} and @code{%right} (right associativity) take the place of
@code{%token} which is used to declare a token type name without
-associativity. (These tokens are single-character literals, which
+associativity/precedence. (These tokens are single-character literals, which
ordinarily don't need to be declared. We declare them here to specify
-the associativity.)
+the associativity/precedence.)
Operator precedence is determined by the line ordering of the
declarations; the higher the line number of the declaration (lower on
the page or screen), the higher the precedence. Hence, exponentiation
has the highest precedence, unary minus (@code{NEG}) is next, followed
-by @samp{*} and @samp{/}, and so on. @xref{Precedence, ,Operator
+by @samp{*} and @samp{/}, and so on. Unary minus is not associative;
+only its precedence matters (hence @code{%precedence}). @xref{Precedence, ,Operator
Precedence}.
The other important new feature is the @code{%prec} in the grammar
@example
@group
-line: '\n'
- | exp '\n' @{ printf ("\t%.10g\n", $1); @}
- | error '\n' @{ yyerrok; @}
+line:
+ '\n'
+| exp '\n' @{ printf ("\t%.10g\n", $1); @}
+| error '\n' @{ yyerrok; @}
;
@end group
@end example
analyzer.
@menu
-* Decls: Ltcalc Decls. Bison and C declarations for ltcalc.
-* Rules: Ltcalc Rules. Grammar rules for ltcalc, with explanations.
-* Lexer: Ltcalc Lexer. The lexical analyzer.
+* Ltcalc Declarations:: Bison and C declarations for ltcalc.
+* Ltcalc Rules:: Grammar rules for ltcalc, with explanations.
+* Ltcalc Lexer:: The lexical analyzer.
@end menu
-@node Ltcalc Decls
+@node Ltcalc Declarations
@subsection Declarations for @code{ltcalc}
The C and Bison declarations for the location tracking calculator are
%left '-' '+'
%left '*' '/'
-%left NEG
+%precedence NEG
%right '^'
%% /* The grammar follows. */
@example
@group
-input : /* empty */
- | input line
+input:
+ /* empty */
+| input line
;
@end group
@group
-line : '\n'
- | exp '\n' @{ printf ("%d\n", $1); @}
+line:
+ '\n'
+| exp '\n' @{ printf ("%d\n", $1); @}
;
@end group
@group
-exp : NUM @{ $$ = $1; @}
- | exp '+' exp @{ $$ = $1 + $3; @}
- | exp '-' exp @{ $$ = $1 - $3; @}
- | exp '*' exp @{ $$ = $1 * $3; @}
+exp:
+ NUM @{ $$ = $1; @}
+| exp '+' exp @{ $$ = $1 + $3; @}
+| exp '-' exp @{ $$ = $1 - $3; @}
+| exp '*' exp @{ $$ = $1 * $3; @}
@end group
@group
- | exp '/' exp
- @{
- if ($3)
- $$ = $1 / $3;
- else
- @{
- $$ = 1;
- fprintf (stderr, "%d.%d-%d.%d: division by zero",
- @@3.first_line, @@3.first_column,
- @@3.last_line, @@3.last_column);
- @}
- @}
+| exp '/' exp
+ @{
+ if ($3)
+ $$ = $1 / $3;
+ else
+ @{
+ $$ = 1;
+ fprintf (stderr, "%d.%d-%d.%d: division by zero",
+ @@3.first_line, @@3.first_column,
+ @@3.last_line, @@3.last_column);
+ @}
+ @}
@end group
@group
- | '-' exp %prec NEG @{ $$ = -$2; @}
- | exp '^' exp @{ $$ = pow ($1, $3); @}
- | '(' exp ')' @{ $$ = $2; @}
+| '-' exp %prec NEG @{ $$ = -$2; @}
+| exp '^' exp @{ $$ = pow ($1, $3); @}
+| '(' exp ')' @{ $$ = $2; @}
@end group
@end example
if (c == EOF)
return 0;
+@group
/* Return a single char, and update location. */
if (c == '\n')
@{
++yylloc.last_column;
return c;
@}
+@end group
@end example
Basically, the lexical analyzer performs the same processing as before:
Here is a sample session with the multi-function calculator:
@example
+@group
$ @kbd{mfcalc}
@kbd{pi = 3.141592653589}
-3.1415926536
+@result{} 3.1415926536
+@end group
+@group
@kbd{sin(pi)}
-0.0000000000
+@result{} 0.0000000000
+@end group
@kbd{alpha = beta1 = 2.3}
-2.3000000000
+@result{} 2.3000000000
@kbd{alpha}
-2.3000000000
+@result{} 2.3000000000
@kbd{ln(alpha)}
-0.8329091229
+@result{} 0.8329091229
@kbd{exp(ln(beta1))}
-2.3000000000
+@result{} 2.3000000000
$
@end example
Note that multiple assignment and nested function calls are permitted.
@menu
-* Decl: Mfcalc Decl. Bison declarations for multi-function calculator.
-* Rules: Mfcalc Rules. Grammar rules for the calculator.
-* Symtab: Mfcalc Symtab. Symbol table management subroutines.
+* Mfcalc Declarations:: Bison declarations for multi-function calculator.
+* Mfcalc Rules:: Grammar rules for the calculator.
+* Mfcalc Symbol Table:: Symbol table management subroutines.
+* Mfcalc Lexer:: The lexical analyzer.
+* Mfcalc Main:: The controlling function.
@end menu
-@node Mfcalc Decl
+@node Mfcalc Declarations
@subsection Declarations for @code{mfcalc}
Here are the C and Bison declarations for the multi-function calculator.
-@smallexample
+@comment file: mfcalc.y
+@example
@group
%@{
- #include <math.h> /* For math functions, cos(), sin(), etc. */
- #include "calc.h" /* Contains definition of `symrec'. */
+ #include <stdio.h> /* For printf, etc. */
+ #include <math.h> /* For pow, used in the grammar. */
+ #include "calc.h" /* Contains definition of `symrec'. */
int yylex (void);
void yyerror (char const *);
%@}
%right '='
%left '-' '+'
%left '*' '/'
-%left NEG /* negation--unary minus */
-%right '^' /* exponentiation */
+%precedence NEG /* negation--unary minus */
+%right '^' /* exponentiation */
@end group
%% /* The grammar follows. */
-@end smallexample
+@end example
The above grammar introduces only two new features of the Bison language.
These features allow semantic values to have various data types
Most of them are copied directly from @code{calc}; three rules,
those which mention @code{VAR} or @code{FNCT}, are new.
-@smallexample
+@comment file: mfcalc.y
+@example
@group
-input: /* empty */
- | input line
+input:
+ /* empty */
+| input line
;
@end group
@group
line:
- '\n'
- | exp '\n' @{ printf ("\t%.10g\n", $1); @}
- | error '\n' @{ yyerrok; @}
+ '\n'
+| exp '\n' @{ printf ("%.10g\n", $1); @}
+| error '\n' @{ yyerrok; @}
;
@end group
@group
-exp: NUM @{ $$ = $1; @}
- | VAR @{ $$ = $1->value.var; @}
- | VAR '=' exp @{ $$ = $3; $1->value.var = $3; @}
- | FNCT '(' exp ')' @{ $$ = (*($1->value.fnctptr))($3); @}
- | exp '+' exp @{ $$ = $1 + $3; @}
- | exp '-' exp @{ $$ = $1 - $3; @}
- | exp '*' exp @{ $$ = $1 * $3; @}
- | exp '/' exp @{ $$ = $1 / $3; @}
- | '-' exp %prec NEG @{ $$ = -$2; @}
- | exp '^' exp @{ $$ = pow ($1, $3); @}
- | '(' exp ')' @{ $$ = $2; @}
+exp:
+ NUM @{ $$ = $1; @}
+| VAR @{ $$ = $1->value.var; @}
+| VAR '=' exp @{ $$ = $3; $1->value.var = $3; @}
+| FNCT '(' exp ')' @{ $$ = (*($1->value.fnctptr))($3); @}
+| exp '+' exp @{ $$ = $1 + $3; @}
+| exp '-' exp @{ $$ = $1 - $3; @}
+| exp '*' exp @{ $$ = $1 * $3; @}
+| exp '/' exp @{ $$ = $1 / $3; @}
+| '-' exp %prec NEG @{ $$ = -$2; @}
+| exp '^' exp @{ $$ = pow ($1, $3); @}
+| '(' exp ')' @{ $$ = $2; @}
;
@end group
/* End of grammar. */
%%
-@end smallexample
+@end example
-@node Mfcalc Symtab
+@node Mfcalc Symbol Table
@subsection The @code{mfcalc} Symbol Table
@cindex symbol table example
definition, which is kept in the header @file{calc.h}, is as follows. It
provides for either functions or variables to be placed in the table.
-@smallexample
+@comment file: calc.h
+@example
@group
/* Function type. */
typedef double (*func_t) (double);
symrec *putsym (char const *, int);
symrec *getsym (char const *);
@end group
-@end smallexample
-
-The new version of @code{main} includes a call to @code{init_table}, a
-function that initializes the symbol table. Here it is, and
-@code{init_table} as well:
-
-@smallexample
-#include <stdio.h>
+@end example
-@group
-/* Called by yyparse on error. */
-void
-yyerror (char const *s)
-@{
- printf ("%s\n", s);
-@}
-@end group
+The new version of @code{main} (shown later) will call @code{init_table} to
+initialize the symbol table. Here are @code{init_table} and the table of
+arithmetic functions it installs:
+@comment file: mfcalc.y
+@example
@group
struct init
@{
@group
struct init const arith_fncts[] =
@{
- "sin", sin,
- "cos", cos,
- "atan", atan,
- "ln", log,
- "exp", exp,
- "sqrt", sqrt,
- 0, 0
+ @{ "atan", atan @},
+ @{ "cos", cos @},
+ @{ "exp", exp @},
+ @{ "ln", log @},
+ @{ "sin", sin @},
+ @{ "sqrt", sqrt @},
+ @{ 0, 0 @},
@};
@end group
@group
/* Put arithmetic functions in table. */
+static
void
init_table (void)
@{
int i;
- symrec *ptr;
for (i = 0; arith_fncts[i].fname != 0; i++)
@{
- ptr = putsym (arith_fncts[i].fname, FNCT);
+ symrec *ptr = putsym (arith_fncts[i].fname, FNCT);
ptr->value.fnctptr = arith_fncts[i].fnct;
@}
@}
@end group
-
-@group
-int
-main (void)
-@{
- init_table ();
- return yyparse ();
-@}
-@end group
-@end smallexample
+@end example
By simply editing the initialization list and adding the necessary include
files, you can add additional functions to the calculator.
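+
+For example, to make the tangent function available it suffices to add one
+entry to @code{arith_fncts} (@code{tan} is declared in @file{math.h}, which
+the grammar file already includes):
+
+@example
+  @{ "tan",  tan  @},
+@end example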
The function @code{getsym} is passed the name of the symbol to look up. If
found, a pointer to that symbol is returned; otherwise zero is returned.
-@smallexample
+@comment file: mfcalc.y
+@example
+#include <stdlib.h> /* malloc. */
+#include <string.h> /* strlen. */
+
+@group
symrec *
putsym (char const *sym_name, int sym_type)
@{
- symrec *ptr;
- ptr = (symrec *) malloc (sizeof (symrec));
+ symrec *ptr = (symrec *) malloc (sizeof (symrec));
ptr->name = (char *) malloc (strlen (sym_name) + 1);
strcpy (ptr->name,sym_name);
ptr->type = sym_type;
sym_table = ptr;
return ptr;
@}
+@end group
+@group
symrec *
getsym (char const *sym_name)
@{
symrec *ptr;
for (ptr = sym_table; ptr != (symrec *) 0;
ptr = (symrec *)ptr->next)
- if (strcmp (ptr->name,sym_name) == 0)
+ if (strcmp (ptr->name, sym_name) == 0)
return ptr;
return 0;
@}
-@end smallexample
+@end group
+@end example
+
+@node Mfcalc Lexer
+@subsection The @code{mfcalc} Lexer
The function @code{yylex} must now recognize variables, numeric values, and
the single-character arithmetic operators. Strings of alphanumeric
No change is needed in the handling of numeric values and arithmetic
operators in @code{yylex}.
-@smallexample
+@comment file: mfcalc.y
+@example
@group
#include <ctype.h>
@end group
int c;
/* Ignore white space, get first nonwhite character. */
- while ((c = getchar ()) == ' ' || c == '\t');
+ while ((c = getchar ()) == ' ' || c == '\t')
+ continue;
if (c == EOF)
return 0;
/* Char starts an identifier => read the name. */
if (isalpha (c))
@{
- symrec *s;
+ /* Initially make the buffer long enough
+ for a 40-character symbol name. */
+ static size_t length = 40;
static char *symbuf = 0;
- static int length = 0;
+ symrec *s;
int i;
@end group
-
-@group
- /* Initially make the buffer long enough
- for a 40-character symbol name. */
- if (length == 0)
- length = 40, symbuf = (char *)malloc (length + 1);
+ if (!symbuf)
+ symbuf = (char *) malloc (length + 1);
i = 0;
do
-@end group
@group
@{
/* If buffer is full, make it bigger. */
return c;
@}
@end group
-@end smallexample
+@end example
+
+@node Mfcalc Main
+@subsection The @code{mfcalc} Main
+
+The error reporting function is unchanged, and the new version of
+@code{main} includes a call to @code{init_table}:
+
+@comment file: mfcalc.y
+@example
+@group
+/* Called by yyparse on error. */
+void
+yyerror (char const *s)
+@{
+ fprintf (stderr, "%s\n", s);
+@}
+@end group
+
+@group
+int
+main (int argc, char const* argv[])
+@{
+ init_table ();
+ return yyparse ();
+@}
+@end group
+@end example
This program is both powerful and flexible. You may easily add new
functions, and it is a simple job to modify this code to install
Bison takes as input a context-free grammar specification and produces a
C-language function that recognizes correct instances of the grammar.
-The Bison grammar input file conventionally has a name ending in @samp{.y}.
+The Bison grammar file conventionally has a name ending in @samp{.y}.
@xref{Invocation, ,Invoking Bison}.
@menu
-* Grammar Outline:: Overall layout of the grammar file.
-* Symbols:: Terminal and nonterminal symbols.
-* Rules:: How to write grammar rules.
-* Recursion:: Writing recursive rules.
-* Semantics:: Semantic values and actions.
-* Locations:: Locations and actions.
-* Declarations:: All kinds of Bison declarations are described here.
-* Multiple Parsers:: Putting more than one Bison parser in one program.
+* Grammar Outline:: Overall layout of the grammar file.
+* Symbols:: Terminal and nonterminal symbols.
+* Rules:: How to write grammar rules.
+* Recursion:: Writing recursive rules.
+* Semantics:: Semantic values and actions.
+* Tracking Locations:: Locations and actions.
+* Named References:: Using named references in actions.
+* Declarations:: All kinds of Bison declarations are described here.
+* Multiple Parsers:: Putting more than one Bison parser in one program.
@end menu
@node Grammar Outline
@end example
Comments enclosed in @samp{/* @dots{} */} may appear in any of the sections.
-As a @acronym{GNU} extension, @samp{//} introduces a comment that
+As a GNU extension, @samp{//} introduces a comment that
continues until end of line.
@menu
-* Prologue:: Syntax and usage of the prologue.
+* Prologue:: Syntax and usage of the prologue.
* Prologue Alternatives:: Syntax and usage of alternatives to the prologue.
-* Bison Declarations:: Syntax and usage of the Bison declarations section.
-* Grammar Rules:: Syntax and usage of the grammar rules section.
-* Epilogue:: Syntax and usage of the epilogue.
+* Bison Declarations:: Syntax and usage of the Bison declarations section.
+* Grammar Rules:: Syntax and usage of the grammar rules section.
+* Epilogue:: Syntax and usage of the epilogue.
@end menu
@node Prologue
The @var{Prologue} section contains macro definitions and declarations
of functions and variables that are used in the actions in the grammar
-rules. These are copied to the beginning of the parser file so that
-they precede the definition of @code{yyparse}. You can use
-@samp{#include} to get the declarations from a header file. If you
-don't need any C declarations, you may omit the @samp{%@{} and
+rules. These are copied to the beginning of the parser implementation
+file so that they precede the definition of @code{yyparse}. You can
+use @samp{#include} to get the declarations from a header file. If
+you don't need any C declarations, you may omit the @samp{%@{} and
@samp{%@}} delimiters that bracket this section.
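+
+For example, the prologue of the calculators seen earlier includes the
+headers needed by the actions and declares the functions that are defined
+in the epilogue:
+
+@example
+@group
+%@{
+  #include <stdio.h>  /* For printf, used in the actions. */
+  #include <math.h>   /* For pow, used in the grammar. */
+  int yylex (void);
+  void yyerror (char const *);
+%@}
+@end group
+@end example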
The @var{Prologue} section is terminated by the first occurrence
can be done with two @var{Prologue} blocks, one before and one after the
@code{%union} declaration.
-@smallexample
+@example
%@{
#define _GNU_SOURCE
#include <stdio.h>
%@}
@dots{}
-@end smallexample
+@end example
When in doubt, it is usually safer to put prologue code before all
Bison declarations, rather than after. For example, any definitions
@findex %code requires
@findex %code provides
@findex %code top
-(The prologue alternatives described here are experimental.
-More user feedback will help to determine whether they should become permanent
-features.)
The functionality of @var{Prologue} sections can often be subtle and
-inflexible.
-As an alternative, Bison provides a %code directive with an explicit qualifier
-field, which identifies the purpose of the code and thus the location(s) where
-Bison should generate it.
-For C/C++, the qualifier can be omitted for the default location, or it can be
-@code{requires}, @code{provides}, or @code{top}.
-@xref{Decl Summary,,%code}.
+inflexible. As an alternative, Bison provides a @code{%code}
+directive with an explicit qualifier field, which identifies the
+purpose of the code and thus the location(s) where Bison should
+generate it. For C/C++, the qualifier can be omitted for the default
+location, or it can be one of @code{requires}, @code{provides},
+or @code{top}. @xref{%code Summary}.
Look again at the example of the previous section:
-@smallexample
+@example
%@{
#define _GNU_SOURCE
#include <stdio.h>
%@}
@dots{}
-@end smallexample
-
-@noindent
-Notice that there are two @var{Prologue} sections here, but there's a subtle
-distinction between their functionality.
-For example, if you decide to override Bison's default definition for
-@code{YYLTYPE}, in which @var{Prologue} section should you write your new
-definition?
-You should write it in the first since Bison will insert that code into the
-parser source code file @emph{before} the default @code{YYLTYPE} definition.
-In which @var{Prologue} section should you prototype an internal function,
-@code{trace_token}, that accepts @code{YYLTYPE} and @code{yytokentype} as
-arguments?
-You should prototype it in the second since Bison will insert that code
+@end example
+
+@noindent
+Notice that there are two @var{Prologue} sections here, but there's a
+subtle distinction between their functionality. For example, if you
+decide to override Bison's default definition for @code{YYLTYPE}, in
+which @var{Prologue} section should you write your new definition?
+You should write it in the first since Bison will insert that code
+into the parser implementation file @emph{before} the default
+@code{YYLTYPE} definition. In which @var{Prologue} section should you
+prototype an internal function, @code{trace_token}, that accepts
+@code{YYLTYPE} and @code{yytokentype} as arguments? You should
+prototype it in the second since Bison will insert that code
@emph{after} the @code{YYLTYPE} and @code{yytokentype} definitions.
This distinction in functionality between the two @var{Prologue} sections is
Let's go ahead and add the new @code{YYLTYPE} definition and the
@code{trace_token} prototype at the same time:
-@smallexample
+@example
%code top @{
#define _GNU_SOURCE
#include <stdio.h>
@}
@dots{}
-@end smallexample
+@end example
@noindent
In this way, @code{%code top} and the unqualified @code{%code} achieve the same
explicit which kind you intend.
Moreover, both kinds are always available even in the absence of @code{%union}.
-The @code{%code top} block above logically contains two parts.
-The first two lines before the warning need to appear near the top of the
-parser source code file.
-The first line after the warning is required by @code{YYSTYPE} and thus also
-needs to appear in the parser source code file.
-However, if you've instructed Bison to generate a parser header file
-(@pxref{Decl Summary, ,%defines}), you probably want that line to appear before
-the @code{YYSTYPE} definition in that header file as well.
-The @code{YYLTYPE} definition should also appear in the parser header file to
-override the default @code{YYLTYPE} definition there.
+The @code{%code top} block above logically contains two parts. The
+first two lines before the warning need to appear near the top of the
+parser implementation file. The first line after the warning is
+required by @code{YYSTYPE} and thus also needs to appear in the parser
+implementation file. However, if you've instructed Bison to generate
+a parser header file (@pxref{Decl Summary, ,%defines}), you probably
+want that line to appear before the @code{YYSTYPE} definition in that
+header file as well. The @code{YYLTYPE} definition should also appear
+in the parser header file to override the default @code{YYLTYPE}
+definition there.
In other words, in the @code{%code top} block above, all but the first two
lines are dependency code required by the @code{YYSTYPE} and @code{YYLTYPE}
definitions.
Thus, they belong in one or more @code{%code requires}:
-@smallexample
+@example
+@group
%code top @{
#define _GNU_SOURCE
#include <stdio.h>
@}
+@end group
+@group
%code requires @{
#include "ptypes.h"
@}
+@end group
+@group
%union @{
long int n;
tree t; /* @r{@code{tree} is defined in @file{ptypes.h}.} */
@}
+@end group
+@group
%code requires @{
#define YYLTYPE YYLTYPE
typedef struct YYLTYPE
char *filename;
@} YYLTYPE;
@}
+@end group
+@group
%code @{
static void print_token_value (FILE *, int, YYSTYPE);
#define YYPRINT(F, N, L) print_token_value (F, N, L)
static void trace_token (enum yytokentype token, YYLTYPE loc);
@}
+@end group
@dots{}
-@end smallexample
-
-@noindent
-Now Bison will insert @code{#include "ptypes.h"} and the new @code{YYLTYPE}
-definition before the Bison-generated @code{YYSTYPE} and @code{YYLTYPE}
-definitions in both the parser source code file and the parser header file.
-(By the same reasoning, @code{%code requires} would also be the appropriate
-place to write your own definition for @code{YYSTYPE}.)
-
-When you are writing dependency code for @code{YYSTYPE} and @code{YYLTYPE}, you
-should prefer @code{%code requires} over @code{%code top} regardless of whether
-you instruct Bison to generate a parser header file.
-When you are writing code that you need Bison to insert only into the parser
-source code file and that has no special need to appear at the top of that
-file, you should prefer the unqualified @code{%code} over @code{%code top}.
-These practices will make the purpose of each block of your code explicit to
-Bison and to other developers reading your grammar file.
-Following these practices, we expect the unqualified @code{%code} and
-@code{%code requires} to be the most important of the four @var{Prologue}
+@end example
+
+@noindent
+Now Bison will insert @code{#include "ptypes.h"} and the new
+@code{YYLTYPE} definition before the Bison-generated @code{YYSTYPE}
+and @code{YYLTYPE} definitions in both the parser implementation file
+and the parser header file. (By the same reasoning, @code{%code
+requires} would also be the appropriate place to write your own
+definition for @code{YYSTYPE}.)
+
+When you are writing dependency code for @code{YYSTYPE} and
+@code{YYLTYPE}, you should prefer @code{%code requires} over
+@code{%code top} regardless of whether you instruct Bison to generate
+a parser header file. When you are writing code that you need Bison
+to insert only into the parser implementation file and that has no
+special need to appear at the top of that file, you should prefer the
+unqualified @code{%code} over @code{%code top}. These practices will
+make the purpose of each block of your code explicit to Bison and to
+other developers reading your grammar file. Following these
+practices, we expect the unqualified @code{%code} and @code{%code
+requires} to be the most important of the four @var{Prologue}
alternatives.
-At some point while developing your parser, you might decide to provide
-@code{trace_token} to modules that are external to your parser.
-Thus, you might wish for Bison to insert the prototype into both the parser
-header file and the parser source code file.
-Since this function is not a dependency required by @code{YYSTYPE} or
+At some point while developing your parser, you might decide to
+provide @code{trace_token} to modules that are external to your
+parser. Thus, you might wish for Bison to insert the prototype into
+both the parser header file and the parser implementation file. Since
+this function is not a dependency required by @code{YYSTYPE} or
@code{YYLTYPE}, it doesn't make sense to move its prototype to a
-@code{%code requires}.
-More importantly, since it depends upon @code{YYLTYPE} and @code{yytokentype},
-@code{%code requires} is not sufficient.
-Instead, move its prototype from the unqualified @code{%code} to a
-@code{%code provides}:
+@code{%code requires}. More importantly, since it depends upon
+@code{YYLTYPE} and @code{yytokentype}, @code{%code requires} is not
+sufficient. Instead, move its prototype from the unqualified
+@code{%code} to a @code{%code provides}:
-@smallexample
+@example
+@group
%code top @{
#define _GNU_SOURCE
#include <stdio.h>
@}
+@end group
+@group
%code requires @{
#include "ptypes.h"
@}
+@end group
+@group
%union @{
long int n;
tree t; /* @r{@code{tree} is defined in @file{ptypes.h}.} */
@}
+@end group
+@group
%code requires @{
#define YYLTYPE YYLTYPE
typedef struct YYLTYPE
char *filename;
@} YYLTYPE;
@}
+@end group
+@group
%code provides @{
void trace_token (enum yytokentype token, YYLTYPE loc);
@}
+@end group
+@group
%code @{
static void print_token_value (FILE *, int, YYSTYPE);
#define YYPRINT(F, N, L) print_token_value (F, N, L)
@}
+@end group
@dots{}
-@end smallexample
+@end example
@noindent
-Bison will insert the @code{trace_token} prototype into both the parser header
-file and the parser source code file after the definitions for
-@code{yytokentype}, @code{YYLTYPE}, and @code{YYSTYPE}.
+Bison will insert the @code{trace_token} prototype into both the
+parser header file and the parser implementation file after the
+definitions for @code{yytokentype}, @code{YYLTYPE}, and
+@code{YYSTYPE}.
-The above examples are careful to write directives in an order that reflects
-the layout of the generated parser source code and header files:
-@code{%code top}, @code{%code requires}, @code{%code provides}, and then
-@code{%code}.
-While your grammar files may generally be easier to read if you also follow
-this order, Bison does not require it.
-Instead, Bison lets you choose an organization that makes sense to you.
+The above examples are careful to write directives in an order that
+reflects the layout of the generated parser implementation and header
+files: @code{%code top}, @code{%code requires}, @code{%code provides},
+and then @code{%code}. While your grammar files may generally be
+easier to read if you also follow this order, Bison does not require
+it. Instead, Bison lets you choose an organization that makes sense
+to you.
You may declare any of these directives multiple times in the grammar file.
In that case, Bison concatenates the contained code in declaration order.
For example, you may organize semantic-type-related directives by semantic
type:
-@smallexample
+@example
+@group
%code requires @{ #include "type1.h" @}
%union @{ type1 field1; @}
%destructor @{ type1_free ($$); @} <field1>
%printer @{ type1_print ($$); @} <field1>
+@end group
+@group
%code requires @{ #include "type2.h" @}
%union @{ type2 field2; @}
%destructor @{ type2_free ($$); @} <field2>
%printer @{ type2_print ($$); @} <field2>
-@end smallexample
+@end group
+@end example
@noindent
You could even place each of the above directive groups in the rules section of
the grammar file next to the set of rules that uses the associated semantic
type.
+(In the rules section, you must terminate each of those directives with a
+semicolon.)
And you don't have to worry that some directive (like a @code{%union}) in the
definitions section is going to adversely affect their functionality in some
counter-intuitive manner just because it comes first.
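+
+As a sketch of what this might look like, here is the first group above
+moved into the rules section (note the terminating semicolons); the rules
+that use @code{type1} would follow it:
+
+@example
+@group
+%%
+%code requires @{ #include "type1.h" @};
+%union @{ type1 field1; @};
+%destructor @{ type1_free ($$); @} <field1>;
+%printer @{ type1_print ($$); @} <field1>;
+
+@dots{}
+@end group
+@end example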
@cindex epilogue
@cindex C code, section for additional
-The @var{Epilogue} is copied verbatim to the end of the parser file, just as
-the @var{Prologue} is copied to the beginning. This is the most convenient
-place to put anything that you want to have in the parser file but which need
-not come before the definition of @code{yyparse}. For example, the
-definitions of @code{yylex} and @code{yyerror} often go here. Because
-C requires functions to be declared before being used, you often need
-to declare functions like @code{yylex} and @code{yyerror} in the Prologue,
-even if you define them in the Epilogue.
-@xref{Interface, ,Parser C-Language Interface}.
+The @var{Epilogue} is copied verbatim to the end of the parser
+implementation file, just as the @var{Prologue} is copied to the
+beginning. This is the most convenient place to put anything that you
+want to have in the parser implementation file but which need not come
+before the definition of @code{yyparse}. For example, the definitions
+of @code{yylex} and @code{yyerror} often go here. Because C requires
+functions to be declared before being used, you often need to declare
+functions like @code{yylex} and @code{yyerror} in the Prologue, even
+if you define them in the Epilogue. @xref{Interface, ,Parser
+C-Language Interface}.
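+
+For instance, here is a sketch of this arrangement, modeled on the
+calculator examples above:
+
+@example
+@group
+%@{
+  #include <stdio.h>
+  /* Declared here, defined in the epilogue. */
+  int yylex (void);
+  void yyerror (char const *);
+%@}
+@end group
+
+%%
+@dots{}
+%%
+
+@group
+void
+yyerror (char const *s)
+@{
+  fprintf (stderr, "%s\n", s);
+@}
+@end group
+@end example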
If the last section is empty, you may omit the @samp{%%} that separates it
from the grammar rules.
equivalent groupings. The symbol name is used in writing grammar rules.
By convention, it should be all lower case.
-Symbol names can contain letters, digits (not at the beginning),
-underscores and periods. Periods make sense only in nonterminals.
+Symbol names can contain letters, underscores, periods, and non-initial
+digits and dashes. Dashes in symbol names are a GNU extension, incompatible
+with POSIX Yacc. Periods and dashes make symbol names less convenient to
+use with named references, which require brackets around such names
+(@pxref{Named References}). Terminal symbols that contain periods or dashes
+make little sense: since they are not valid identifiers in most programming
+languages, they are not exported as token names.
There are three ways of writing terminal symbols in the grammar:
character, so @code{yylex} can use the identical value to generate the
requisite code, though you may need to convert it to @code{unsigned
char} to avoid sign-extension on hosts where @code{char} is signed.
-Each named token type becomes a C macro in
-the parser file, so @code{yylex} can use the name to stand for the code.
-(This is why periods don't make sense in terminal symbols.)
-@xref{Calling Convention, ,Calling Convention for @code{yylex}}.
+Each named token type becomes a C macro in the parser implementation
+file, so @code{yylex} can use the name to stand for the code. (This
+is why periods don't make sense in terminal symbols.) @xref{Calling
+Convention, ,Calling Convention for @code{yylex}}.
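+
+For example, the calculator lexers above return the @code{NUM} macro once
+they have read a number; a sketch of that fragment:
+
+@example
+@group
+/* Process numbers (isdigit comes from <ctype.h>). */
+if (c == '.' || isdigit (c))
+  @{
+    ungetc (c, stdin);
+    scanf ("%lf", &yylval);
+    return NUM;
+  @}
+@end group
+@end example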
If @code{yylex} is defined in a separate file, you need to arrange for the
token-type macro definitions to be available there. Use the @samp{-d}
The @code{yylex} function and Bison must use a consistent character set
and encoding for character tokens. For example, if you run Bison in an
-@acronym{ASCII} environment, but then compile and run the resulting
+ASCII environment, but then compile and run the resulting
program in an environment that uses an incompatible character set like
-@acronym{EBCDIC}, the resulting program may not work because the tables
-generated by Bison will assume @acronym{ASCII} numeric values for
+EBCDIC, the resulting program may not work because the tables
+generated by Bison will assume ASCII numeric values for
character tokens. It is standard practice for software distributions to
contain C source files that were generated by Bison in an
-@acronym{ASCII} environment, so installers on platforms that are
-incompatible with @acronym{ASCII} must rebuild those files before
+ASCII environment, so installers on platforms that are
+incompatible with ASCII must rebuild those files before
compiling them.
The symbol @code{error} is a terminal symbol reserved for error recovery
@example
@group
-@var{result}: @var{components}@dots{}
- ;
+@var{result}: @var{components}@dots{};
@end group
@end example
@example
@group
-exp: exp '+' exp
- ;
+exp: exp '+' exp;
@end group
@end example
braces, much like a compound statement in C@. Braced code can contain
any sequence of C tokens, so long as its braces are balanced. Bison
does not check the braced code for correctness directly; it merely
-copies the code to the output file, where the C compiler can check it.
+copies the code to the parser implementation file, where the C
+compiler can check it.
Within braced code, the balanced-brace count is not affected by braces
within comments, string literals, or character constants, but it is
@example
@group
-@var{result}: @var{rule1-components}@dots{}
- | @var{rule2-components}@dots{}
- @dots{}
- ;
+@var{result}:
+ @var{rule1-components}@dots{}
+| @var{rule2-components}@dots{}
+@dots{}
+;
@end group
@end example
@example
@group
-expseq: /* empty */
- | expseq1
- ;
+expseq:
+ /* empty */
+| expseq1
+;
@end group
@group
-expseq1: exp
- | expseq1 ',' exp
- ;
+expseq1:
+ exp
+| expseq1 ',' exp
+;
@end group
@end example
@example
@group
-expseq1: exp
- | expseq1 ',' exp
- ;
+expseq1:
+ exp
+| expseq1 ',' exp
+;
@end group
@end example
@example
@group
-expseq1: exp
- | exp ',' expseq1
- ;
+expseq1:
+ exp
+| exp ',' expseq1
+;
@end group
@end example
@example
@group
-expr: primary
- | primary '+' primary
- ;
+expr:
+ primary
+| primary '+' primary
+;
@end group
@group
-primary: constant
- | '(' expr ')'
- ;
+primary:
+ constant
+| '(' expr ')'
+;
@end group
@end example
In a simple program it may be sufficient to use the same data type for
the semantic values of all language constructs. This was true in the
-@acronym{RPN} and infix calculator examples (@pxref{RPN Calc, ,Reverse Polish
+RPN and infix calculator examples (@pxref{RPN Calc, ,Reverse Polish
Notation Calculator}).
Bison normally uses the type @code{int} for semantic values if your
@cindex action
@vindex $$
@vindex $@var{n}
+@vindex $@var{name}
+@vindex $[@var{name}]
An action accompanies a syntactic rule and contains C code to be executed
each time an instance of that rule is recognized. The task of most actions
a rule are tricky and used only for special purposes (@pxref{Mid-Rule
Actions, ,Actions in Mid-Rule}).
-The C code in an action can refer to the semantic values of the components
-matched by the rule with the construct @code{$@var{n}}, which stands for
-the value of the @var{n}th component. The semantic value for the grouping
-being constructed is @code{$$}. Bison translates both of these
-constructs into expressions of the appropriate type when it copies the
-actions into the parser file. @code{$$} is translated to a modifiable
-lvalue, so it can be assigned to.
+The C code in an action can refer to the semantic values of the
+components matched by the rule with the construct @code{$@var{n}},
+which stands for the value of the @var{n}th component. The semantic
+value for the grouping being constructed is @code{$$}. In addition,
+the semantic values of symbols can be accessed with the named
+references construct @code{$@var{name}} or @code{$[@var{name}]}.
+Bison translates both of these constructs into expressions of the
+appropriate type when it copies the actions into the parser
+implementation file. @code{$$} (or @code{$@var{name}}, when it stands
+for the current grouping) is translated to a modifiable lvalue, so it
+can be assigned to.
Here is a typical example:
@example
@group
-exp: @dots{}
- | exp '+' exp
- @{ $$ = $1 + $3; @}
+exp:
+@dots{}
+| exp '+' exp @{ $$ = $1 + $3; @}
+@end group
+@end example
+
+Or, in terms of named references:
+
+@example
+@group
+exp[result]:
+@dots{}
+| exp[left] '+' exp[right] @{ $result = $left + $right; @}
@end group
@end example
@noindent
This rule constructs an @code{exp} from two smaller @code{exp} groupings
connected by a plus-sign token. In the action, @code{$1} and @code{$3}
+(@code{$left} and @code{$right})
refer to the semantic values of the two component @code{exp} groupings,
which are the first and third symbols on the right hand side of the rule.
-The sum is stored into @code{$$} so that it becomes the semantic value of
+The sum is stored into @code{$$} (@code{$result}) so that it becomes the
+semantic value of
the addition-expression just recognized by the rule. If there were a
useful semantic value associated with the @samp{+} token, it could be
referred to as @code{$2}.
+@xref{Named References}, for more information about using the named
+references construct.
+
Note that the vertical-bar character @samp{|} is really a rule
separator, and actions are attached to a single rule. This is a
difference with tools like Flex, for which @samp{|} stands for either
@example
@group
-foo: expr bar '+' expr @{ @dots{} @}
- | expr bar '-' expr @{ @dots{} @}
- ;
+foo:
+ expr bar '+' expr @{ @dots{} @}
+| expr bar '-' expr @{ @dots{} @}
+;
@end group
@group
-bar: /* empty */
- @{ previous_expr = $0; @}
- ;
+bar:
+ /* empty */ @{ previous_expr = $0; @}
+;
@end group
@end example
@example
@group
-exp: @dots{}
- | exp '+' exp
- @{ $$ = $1 + $3; @}
+exp:
+ @dots{}
+| exp '+' exp @{ $$ = $1 + $3; @}
@end group
@end example
@example
@group
-stmt: LET '(' var ')'
- @{ $<context>$ = push_context ();
- declare_variable ($3); @}
- stmt @{ $$ = $6;
- pop_context ($<context>5); @}
+stmt:
+ LET '(' var ')'
+ @{ $<context>$ = push_context (); declare_variable ($3); @}
+ stmt
+ @{ $$ = $6; pop_context ($<context>5); @}
@end group
@end example
%%
-stmt: let stmt
- @{ $$ = $2;
- pop_context ($1); @}
- ;
+stmt:
+ let stmt
+ @{
+ $$ = $2;
+ pop_context ($1);
+ @};
-let: LET '(' var ')'
- @{ $$ = push_context ();
- declare_variable ($3); @}
- ;
+let:
+ LET '(' var ')'
+ @{
+ $$ = push_context ();
+ declare_variable ($3);
+ @};
@end group
@end example
@example
@group
-compound: '@{' declarations statements '@}'
- | '@{' statements '@}'
- ;
+compound:
+ '@{' declarations statements '@}'
+| '@{' statements '@}'
+;
@end group
@end example
@example
@group
-compound: @{ prepare_for_local_variables (); @}
- '@{' declarations statements '@}'
+compound:
+ @{ prepare_for_local_variables (); @}
+ '@{' declarations statements '@}'
@end group
@group
- | '@{' statements '@}'
- ;
+| '@{' statements '@}'
+;
@end group
@end example
@example
@group
-compound: @{ prepare_for_local_variables (); @}
- '@{' declarations statements '@}'
- | @{ prepare_for_local_variables (); @}
- '@{' statements '@}'
- ;
+compound:
+ @{ prepare_for_local_variables (); @}
+ '@{' declarations statements '@}'
+| @{ prepare_for_local_variables (); @}
+ '@{' statements '@}'
+;
@end group
@end example
@example
@group
-compound: '@{' @{ prepare_for_local_variables (); @}
- declarations statements '@}'
- | '@{' statements '@}'
- ;
+compound:
+ '@{' @{ prepare_for_local_variables (); @}
+ declarations statements '@}'
+| '@{' statements '@}'
+;
@end group
@end example
@example
@group
-subroutine: /* empty */
- @{ prepare_for_local_variables (); @}
- ;
-
+subroutine:
+ /* empty */ @{ prepare_for_local_variables (); @}
+;
@end group
@group
-compound: subroutine
- '@{' declarations statements '@}'
- | subroutine
- '@{' statements '@}'
- ;
+compound:
+ subroutine '@{' declarations statements '@}'
+| subroutine '@{' statements '@}'
+;
@end group
@end example
Now Bison can execute the action in the rule for @code{subroutine} without
deciding which rule for @code{compound} it will eventually use.
-@node Locations
+@node Tracking Locations
@section Tracking Locations
@cindex location
@cindex textual location
@} YYLTYPE;
@end example
-At the beginning of the parsing, Bison initializes all these fields to 1
-for @code{yylloc}.
+When @code{YYLTYPE} is not defined, Bison initializes all these fields of
+@code{yylloc} to 1 at the beginning of parsing. To initialize
+@code{yylloc} with a custom location type (or to choose a different
+initialization), use the @code{%initial-action} directive. @xref{Initial
+Action Decl, , Performing Actions before Parsing}.
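+
+For example, here is a sketch that merely makes the default initialization
+explicit, using the default @code{YYLTYPE} fields:
+
+@example
+@group
+%initial-action
+@{
+  @@$.first_line   = @@$.last_line   = 1;
+  @@$.first_column = @@$.last_column = 1;
+@}
+@end group
+@end example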
@node Actions and Locations
@subsection Actions and Locations
@cindex actions, location
@vindex @@$
@vindex @@@var{n}
+@vindex @@@var{name}
+@vindex @@[@var{name}]
Actions are not only useful for defining language semantics, but also for
describing the behavior of the output parser with locations.
@code{@@@var{n}}, while the location of the left hand side grouping is
@code{@@$}.
+In addition, the named references construct @code{@@@var{name}} and
+@code{@@[@var{name}]} may also be used to address the symbol locations.
+@xref{Named References}, for more information about using the named
+references construct.
+
Here is a basic example using the default data type for locations:
@example
@group
-exp: @dots{}
- | exp '/' exp
- @{
- @@$.first_column = @@1.first_column;
- @@$.first_line = @@1.first_line;
- @@$.last_column = @@3.last_column;
- @@$.last_line = @@3.last_line;
- if ($3)
- $$ = $1 / $3;
- else
- @{
- $$ = 1;
- fprintf (stderr,
- "Division by zero, l%d,c%d-l%d,c%d",
- @@3.first_line, @@3.first_column,
- @@3.last_line, @@3.last_column);
- @}
- @}
+exp:
+ @dots{}
+| exp '/' exp
+ @{
+ @@$.first_column = @@1.first_column;
+ @@$.first_line = @@1.first_line;
+ @@$.last_column = @@3.last_column;
+ @@$.last_line = @@3.last_line;
+ if ($3)
+ $$ = $1 / $3;
+ else
+ @{
+ $$ = 1;
+ fprintf (stderr,
+ "Division by zero, l%d,c%d-l%d,c%d",
+ @@3.first_line, @@3.first_column,
+ @@3.last_line, @@3.last_column);
+ @}
+ @}
@end group
@end example
@example
@group
-exp: @dots{}
- | exp '/' exp
- @{
- if ($3)
- $$ = $1 / $3;
- else
- @{
- $$ = 1;
- fprintf (stderr,
- "Division by zero, l%d,c%d-l%d,c%d",
- @@3.first_line, @@3.first_column,
- @@3.last_line, @@3.last_column);
- @}
- @}
+exp:
+ @dots{}
+| exp '/' exp
+ @{
+ if ($3)
+ $$ = $1 / $3;
+ else
+ @{
+ $$ = 1;
+ fprintf (stderr,
+ "Division by zero, l%d,c%d-l%d,c%d",
+ @@3.first_line, @@3.first_column,
+ @@3.last_line, @@3.last_column);
+ @}
+ @}
@end group
@end example
@node Location Default Action
@subsection Default Action for Locations
@vindex YYLLOC_DEFAULT
-@cindex @acronym{GLR} parsers and @code{YYLLOC_DEFAULT}
+@cindex GLR parsers and @code{YYLLOC_DEFAULT}
Actually, actions are not the best place to compute locations. Since
locations are much more general than semantic values, there is room in
rule. The @code{YYLLOC_DEFAULT} macro is invoked each time a rule is
matched, before the associated action is run. It is also invoked
while processing a syntax error, to compute the error's location.
-Before reporting an unresolvable syntactic ambiguity, a @acronym{GLR}
+Before reporting an unresolvable syntactic ambiguity, a GLR
parser invokes @code{YYLLOC_DEFAULT} recursively to compute the location
of that ambiguity.
rule is matched, the second parameter identifies locations of
all right hand side elements of the rule being matched, and the third
parameter is the size of the rule's right hand side.
-When a @acronym{GLR} parser reports an ambiguity, which of multiple candidate
+When a GLR parser reports an ambiguity, which of multiple candidate
right hand sides it passes to @code{YYLLOC_DEFAULT} is undefined.
When processing a syntax error, the second parameter identifies locations
of the symbols that were discarded during error processing, and the third
By default, @code{YYLLOC_DEFAULT} is defined this way:
-@smallexample
-@group
-# define YYLLOC_DEFAULT(Current, Rhs, N) \
- do \
- if (N) \
- @{ \
- (Current).first_line = YYRHSLOC(Rhs, 1).first_line; \
- (Current).first_column = YYRHSLOC(Rhs, 1).first_column; \
- (Current).last_line = YYRHSLOC(Rhs, N).last_line; \
- (Current).last_column = YYRHSLOC(Rhs, N).last_column; \
- @} \
- else \
- @{ \
- (Current).first_line = (Current).last_line = \
- YYRHSLOC(Rhs, 0).last_line; \
- (Current).first_column = (Current).last_column = \
- YYRHSLOC(Rhs, 0).last_column; \
- @} \
- while (0)
-@end group
-@end smallexample
+@example
+@group
+# define YYLLOC_DEFAULT(Cur, Rhs, N) \
+do \
+ if (N) \
+ @{ \
+ (Cur).first_line = YYRHSLOC(Rhs, 1).first_line; \
+ (Cur).first_column = YYRHSLOC(Rhs, 1).first_column; \
+ (Cur).last_line = YYRHSLOC(Rhs, N).last_line; \
+ (Cur).last_column = YYRHSLOC(Rhs, N).last_column; \
+ @} \
+ else \
+ @{ \
+ (Cur).first_line = (Cur).last_line = \
+ YYRHSLOC(Rhs, 0).last_line; \
+ (Cur).first_column = (Cur).last_column = \
+ YYRHSLOC(Rhs, 0).last_column; \
+ @} \
+while (0)
+@end group
+@end example
+@noindent
where @code{YYRHSLOC (rhs, k)} is the location of the @var{k}th symbol
in @var{rhs} when @var{k} is positive, and the location of the symbol
just before the reduction when @var{k} and @var{n} are both zero.
statement when it is followed by a semicolon.
@end itemize
-@node Declarations
-@section Bison Declarations
-@cindex declarations, Bison
-@cindex Bison declarations
+@node Named References
+@section Named References
+@cindex named references
+
+As described in the preceding sections, the traditional way to refer to any
+semantic value or location is a @dfn{positional reference}, which takes the
+form @code{$@var{n}}, @code{$$}, @code{@@@var{n}}, and @code{@@$}. However,
+such a reference is not very descriptive. Moreover, if you later decide to
+insert or remove symbols in the right-hand side of a grammar rule, the need
+to renumber such references can be tedious and error-prone.
+
+To avoid these issues, you can also refer to a semantic value or location
+using a @dfn{named reference}. First of all, original symbol names may be
+used as named references. For example:
+
+@example
+@group
+invocation: op '(' args ')'
+ @{ $invocation = new_invocation ($op, $args, @@invocation); @}
+@end group
+@end example
+
+@noindent
+Positional and named references can be mixed arbitrarily. For example:
+
+@example
+@group
+invocation: op '(' args ')'
+ @{ $$ = new_invocation ($op, $args, @@$); @}
+@end group
+@end example
+
+@noindent
+However, sometimes regular symbol names are not sufficient due to
+ambiguities:
+
+@example
+@group
+exp: exp '/' exp
+ @{ $exp = $exp / $exp; @} // $exp is ambiguous.
+
+exp: exp '/' exp
+ @{ $$ = $1 / $exp; @} // One usage is ambiguous.
+
+exp: exp '/' exp
+ @{ $$ = $1 / $3; @} // No error.
+@end group
+@end example
+
+@noindent
+When ambiguity occurs, explicitly declared names may be used for values and
+locations. Explicit names are declared as a bracketed name after a symbol
+appearance in rule definitions. For example:
+@example
+@group
+exp[result]: exp[left] '/' exp[right]
+ @{ $result = $left / $right; @}
+@end group
+@end example
+
+@noindent
+In order to access a semantic value generated by a mid-rule action, an
+explicit name may also be declared by putting a bracketed name after the
+closing brace of the mid-rule action code:
+@example
+@group
+exp[res]: exp[x] '+' @{$left = $x;@}[left] exp[right]
+ @{ $res = $left + $right; @}
+@end group
+@end example
+
+@noindent
+In references, in order to specify names containing dots and dashes, an explicit
+bracketed syntax @code{$[name]} and @code{@@[name]} must be used:
+@example
+@group
+if-stmt: "if" '(' expr ')' "then" then.stmt ';'
+ @{ $[if-stmt] = new_if_stmt ($expr, $[then.stmt]); @}
+@end group
+@end example
+
+It often happens that named references are followed by a dot, a dash, or
+some other C punctuation mark or operator. By default, Bison will read
+@samp{$name.suffix} as a reference to symbol value @code{$name} followed by
+@samp{.suffix}, i.e., an access to the @code{suffix} field of the semantic
+value. In order to force Bison to recognize @samp{name.suffix} in its
+entirety as the name of a semantic value, the bracketed syntax
+@samp{$[name.suffix]} must be used.
+
+The named references feature is experimental. More user feedback will help
+to stabilize it.
+
+@node Declarations
+@section Bison Declarations
+@cindex declarations, Bison
+@cindex Bison declarations
The @dfn{Bison declarations} section of a Bison grammar defines the symbols
used in formulating the grammar and the data types of semantic values.
declared if you need to specify which data type to use for the semantic
value (@pxref{Multiple Types, ,More Than One Value Type}).
-The first rule in the file also specifies the start symbol, by default.
-If you want some other symbol to be the start symbol, you must declare
-it explicitly (@pxref{Language and Grammar, ,Languages and Context-Free
-Grammars}).
+The first rule in the grammar file also specifies the start symbol, by
+default. If you want some other symbol to be the start symbol, you
+must declare it explicitly (@pxref{Language and Grammar, ,Languages
+and Context-Free Grammars}).
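+
+For instance, assuming the grammar's top-level nonterminal is named
+@code{program} (an illustrative name), the declaration would be:
+
+@example
+%start program
+@end example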
@menu
* Require Decl:: Requiring a Bison version.
* Expect Decl:: Suppressing warnings about parsing conflicts.
* Start Decl:: Specifying the start symbol.
* Pure Decl:: Requesting a reentrant parser.
+* Push Decl:: Requesting a push parser.
* Decl Summary:: Table of all Bison declarations.
+* %define Summary:: Defining variables to adjust Bison's behavior.
+* %code Summary:: Inserting code into the parser source.
@end menu
@node Require Decl
the parser, so that the function @code{yylex} (if it is in this file)
can use the name @var{name} to stand for this token type's code.
-Alternatively, you can use @code{%left}, @code{%right}, or
+Alternatively, you can use @code{%left}, @code{%right},
+@code{%precedence}, or
@code{%nonassoc} instead of @code{%token}, if you wish to specify
associativity and precedence. @xref{Precedence Decl, ,Operator
Precedence}.
You can explicitly specify the numeric code for a token type by appending
-a decimal or hexadecimal integer value in the field immediately
+a nonnegative decimal or hexadecimal integer value in the field immediately
following the token name:
@example
interchangeably in further declarations or the grammar rules. The
@code{yylex} function can use the token name or the literal string to
obtain the token type code number (@pxref{Calling Convention}).
+Syntax error messages passed to @code{yyerror} from the parser will reference
+the literal string instead of the token name.
+
+The token numbered as 0 corresponds to end of file; the following line
+allows for nicer error messages referring to ``end of file'' instead
+of ``$end'':
+
+@example
+%token END 0 "end of file"
+@end example
@node Precedence Decl
@subsection Operator Precedence
@cindex declaring operator precedence
@cindex operator precedence, declaring
-Use the @code{%left}, @code{%right} or @code{%nonassoc} declaration to
+Use the @code{%left}, @code{%right}, @code{%nonassoc}, or
+@code{%precedence} declaration to
declare a token and specify its precedence and associativity, all at
once. These are called @dfn{precedence declarations}.
@xref{Precedence, ,Operator Precedence}, for general information on
operator precedence.
-The syntax of a precedence declaration is the same as that of
+The syntax of a precedence declaration is nearly the same as that of
@code{%token}: either
@example
means that @samp{@var{x} @var{op} @var{y} @var{op} @var{z}} is
considered a syntax error.
+@code{%precedence} gives only precedence to the @var{symbols}, and
+defines no associativity at all. Use this to define precedence only,
+and leave any potential conflict due to associativity enabled.
+
@item
The precedence of an operator determines how it nests with other operators.
All the tokens declared in a single precedence declaration have equal
the one declared later has the higher precedence and is grouped first.
@end itemize
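+
+For example, the infix calculator above declares @samp{*} and @samp{/}
+after @samp{+} and @samp{-}, which is precisely what gives them higher
+precedence:
+
+@example
+@group
+%left '-' '+'
+%left '*' '/'
+%precedence NEG /* negation--unary minus */
+%right '^'      /* exponentiation */
+@end group
+@end example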
+For backward compatibility, there is a confusing difference between the
+argument lists of @code{%token} and precedence declarations.
+Only a @code{%token} can associate a literal string with a token type name.
+A precedence declaration always interprets a literal string as a reference to a
+separate token.
+For example:
+
+@example
+%left OR "<=" // Does not declare an alias.
+%left OR 134 "<=" 135 // Declares 134 for OR and 135 for "<=".
+@end example
+
@node Union Decl
@subsection The Collection of Value Types
@cindex declaring value types
in the @code{%token} and @code{%type} declarations to pick one of the types
for a terminal or nonterminal symbol (@pxref{Type Decl, ,Nonterminal Symbols}).
-As an extension to @acronym{POSIX}, a tag is allowed after the
+As an extension to POSIX, a tag is allowed after the
@code{union}. For example:
@example
@code{union value}. If you do not specify a tag, it defaults to
@code{YYSTYPE}.
-As another extension to @acronym{POSIX}, you may specify multiple
+As another extension to POSIX, you may specify multiple
@code{%union} declarations; their contents are concatenated. However,
only the first @code{%union} declaration can specify a tag.
@noindent
For example:
-@smallexample
+@example
%union @{ char *string; @}
%token <string> STRING1
%token <string> STRING2
%destructor @{ free ($$); @} <*>
%destructor @{ free ($$); printf ("%d", @@$.first_line); @} STRING1 string1
%destructor @{ printf ("Discarding tagless symbol.\n"); @} <>
-@end smallexample
+@end example
@noindent
guarantees that, when the parser discards any user-defined symbol that has a
However, it may invoke one of them for the end token (token 0) if you
redefine it from @code{$end} to, for example, @code{END}:
-@smallexample
+@example
%token END 0
-@end smallexample
+@end example
@cindex actions in mid-rule
@cindex mid-rule actions
Finally, Bison will never invoke a @code{%destructor} for an unreferenced
mid-rule semantic value (@pxref{Mid-Rule Actions,,Actions in Mid-Rule}).
-That is, Bison does not consider a mid-rule to have a semantic value if you do
-not reference @code{$$} in the mid-rule's action or @code{$@var{n}} (where
-@var{n} is the RHS symbol position of the mid-rule) in any later action in that
-rule.
-However, if you do reference either, the Bison-generated parser will invoke the
-@code{<>} @code{%destructor} whenever it discards the mid-rule symbol.
+That is, Bison does not consider a mid-rule to have a semantic value if you
+do not reference @code{$$} in the mid-rule's action or @code{$@var{n}}
+(where @var{n} is the right-hand side symbol position of the mid-rule) in
+any later action in that rule. However, if you do reference either, the
+Bison-generated parser will invoke the @code{<>} @code{%destructor} whenever
+it discards the mid-rule symbol.
@ignore
@noindent
@code{YYABORT} or @code{YYACCEPT}, or failed error recovery, or memory
exhaustion.
-Right-hand size symbols of a rule that explicitly triggers a syntax
+Right-hand side symbols of a rule that explicitly triggers a syntax
error via @code{YYERROR} are not discarded automatically. As a rule
of thumb, destructors are invoked only when user actions cannot manage
the memory.
Bison reports an error if the number of shift/reduce conflicts differs
from @var{n}, or if there are any reduce/reduce conflicts.
-For normal @acronym{LALR}(1) parsers, reduce/reduce conflicts are more
+For deterministic parsers, reduce/reduce conflicts are more
serious, and should be eliminated entirely. Bison will always report
-reduce/reduce conflicts for these parsers. With @acronym{GLR}
+reduce/reduce conflicts for these parsers. With GLR
parsers, however, both kinds of conflicts are routine; otherwise,
-there would be no need to use @acronym{GLR} parsing. Therefore, it is
+there would be no need to use GLR parsing. Therefore, it is
also possible to specify an expected number of reduce/reduce conflicts
-in @acronym{GLR} parsers, using the declaration:
+in GLR parsers, using the declaration:
@example
%expect-rr @var{n}
@item
Add an @code{%expect} declaration, copying the number @var{n} from the
-number which Bison printed. With @acronym{GLR} parsers, add an
+number which Bison printed. With GLR parsers, add an
@code{%expect-rr} declaration as well.
@end itemize
-Now Bison will warn you if you introduce an unexpected conflict, but
-will keep silent otherwise.
+Now Bison will report an error if you introduce an unexpected conflict,
+but will keep silent otherwise.
@node Start Decl
@subsection The Start-Symbol
@subsection A Pure (Reentrant) Parser
@cindex reentrant parser
@cindex pure parser
-@findex %pure-parser
+@findex %define api.pure
A @dfn{reentrant} program is one which does not alter in the course of
execution; in other words, it consists entirely of @dfn{pure} (read-only)
including @code{yylval} and @code{yylloc}.)
Alternatively, you can generate a pure, reentrant parser. The Bison
-declaration @code{%pure-parser} says that you want the parser to be
+declaration @samp{%define api.pure} says that you want the parser to be
reentrant. It looks like this:
@example
-%pure-parser
+%define api.pure
@end example
The result is that the communication variables @code{yylval} and
@code{yylloc} become local variables in @code{yyparse}, and a different
calling convention is used for the lexical analyzer function
@code{yylex}. @xref{Pure Calling, ,Calling Conventions for Pure
-Parsers}, for the details of this. The variable @code{yynerrs} also
-becomes local in @code{yyparse} (@pxref{Error Reporting, ,The Error
+Parsers}, for the details of this. The variable @code{yynerrs}
+becomes local in @code{yyparse} in pull mode, and a member of
+@code{yypstate} in push mode (@pxref{Error Reporting, ,The Error
Reporting Function @code{yyerror}}). The convention for calling
@code{yyparse} itself is unchanged.
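+
+For reference, with @samp{%define api.pure} the lexical analyzer
+receives pointers through which it returns the semantic value and, if
+@samp{%locations} is also used, the location:
+
+@example
+int yylex (YYSTYPE *lvalp);                  /* Without locations.  */
+int yylex (YYSTYPE *lvalp, YYLTYPE *llocp);  /* With %locations.  */
+@end example
+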
You can generate either a pure parser or a nonreentrant parser from any
valid grammar.
+@node Push Decl
+@subsection A Push Parser
+@cindex push parser
+@findex %define api.push-pull
+
+(The current push parsing interface is experimental and may evolve.
+More user feedback will help to stabilize it.)
+
+A pull parser is called once and it takes control until all its input
+is completely parsed. A push parser, on the other hand, is called
+each time a new token is made available.
+
+A push parser is typically useful when the parser is part of a
+main event loop in the client's application. This is a common
+requirement of a GUI, where the main event loop must regain control
+within a certain time period.
+
+Normally, Bison generates a pull parser.
+The following Bison declaration says that you want the parser to be a push
+parser (@pxref{%define Summary,,api.push-pull}):
+
+@example
+%define api.push-pull push
+@end example
+
+In almost all cases, you want to ensure that your push parser is also
+a pure parser (@pxref{Pure Decl, ,A Pure (Reentrant) Parser}). The only
+time you should create an impure push parser is for backward
+compatibility with the impure Yacc pull-mode interface. Unless you know
+what you are doing, your declarations should look like this:
+
+@example
+%define api.pure
+%define api.push-pull push
+@end example
+
+There is one major functional difference between the pure push parser
+and the impure push parser: a pure push parser may have many parser
+instances of the same type in memory at the same time, whereas an
+impure push parser should use only one parser instance at a time.
+
+When a push parser is selected, Bison will generate some new symbols in
+the generated parser. @code{yypstate} is a structure that the generated
+parser uses to store the parser's state. @code{yypstate_new} is the
+function that will create a new parser instance. @code{yypstate_delete}
+will free the resources associated with the corresponding parser instance.
+Finally, @code{yypush_parse} is the function that should be called whenever a
+token is available to provide to the parser. A trivial example
+of using a pure push parser would look like this:
+
+@example
+int status;
+yypstate *ps = yypstate_new ();
+do @{
+ status = yypush_parse (ps, yylex (), NULL);
+@} while (status == YYPUSH_MORE);
+yypstate_delete (ps);
+@end example
+
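+In an event-driven application, these calls would typically be spread
+across callbacks rather than written as a single loop. A minimal
+sketch, assuming a pure push parser without locations and a
+hypothetical callback @code{on_token} that the application's event
+loop invokes with each scanned token:
+
+@example
+/* Hypothetical event-loop callback: feed one token, then return.
+   Returns nonzero while the parser expects more input.  */
+static int
+on_token (yypstate *ps, int token, YYSTYPE value)
+@{
+  int status = yypush_parse (ps, token, &value);
+  return status == YYPUSH_MORE;
+@}
+@end example
+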
+If the user decides to use an impure push parser, a few things about
+the generated parser will change. The @code{yychar} variable becomes
+a global variable instead of a variable in the @code{yypush_parse} function.
+For this reason, the signature of the @code{yypush_parse} function is
+changed to remove the token as a parameter. A nonreentrant push parser
+example would thus look like this:
+
+@example
+extern int yychar;
+int status;
+yypstate *ps = yypstate_new ();
+do @{
+ yychar = yylex ();
+ status = yypush_parse (ps);
+@} while (status == YYPUSH_MORE);
+yypstate_delete (ps);
+@end example
+
+That's it. Notice that the next token is put into the global variable
+@code{yychar} for use by the next invocation of the @code{yypush_parse}
+function.
+
+Bison also supports both the push parser interface and the pull parser
+interface in the same generated parser. In order to get this functionality,
+you should replace the @samp{%define api.push-pull push} declaration with the
+@samp{%define api.push-pull both} declaration. Doing this will create all of
+the symbols mentioned earlier along with the two extra symbols, @code{yyparse}
+and @code{yypull_parse}. @code{yyparse} can be used exactly as it normally
+would be used. However, the user should note that it is implemented in the
+generated parser by calling @code{yypull_parse}.
+This makes the @code{yyparse} function that is generated with the
+@samp{%define api.push-pull both} declaration slower than the normal
+@code{yyparse} function. If the user
+calls the @code{yypull_parse} function, it will parse the rest of the input
+stream. It is possible to @code{yypush_parse} tokens to select a subgrammar
+and then @code{yypull_parse} the rest of the input stream. If you would like
+to switch back and forth between parsing styles, you would have to
+write your own @code{yypull_parse} function that knows when to quit looking
+for input. An example of using the @code{yypull_parse} function would look
+like this:
+
+@example
+yypstate *ps = yypstate_new ();
+yypull_parse (ps); /* Will call the lexer */
+yypstate_delete (ps);
+@end example
+
+Adding the @samp{%define api.pure} declaration does exactly the same thing to
+the generated parser with @samp{%define api.push-pull both} as it did for
+@samp{%define api.push-pull push}.
+
@node Decl Summary
@subsection Bison Declaration Summary
@cindex Bison declaration summary
directives:
@deffn {Directive} %code @{@var{code}@}
+@deffnx {Directive} %code @var{qualifier} @{@var{code}@}
@findex %code
-This is the unqualified form of the @code{%code} directive.
-It inserts @var{code} verbatim at the default location in the output.
-That default location is determined by the selected target language and/or
-parser skeleton.
-
-@cindex Prologue
-For the current C/C++ skeletons, the default location is the parser source code
-file after the usual contents of the parser header file.
-Thus, @code{%code} replaces the traditional Yacc prologue,
-@code{%@{@var{code}%@}}, for most purposes.
-For a detailed discussion, see @ref{Prologue Alternatives}.
-
-@comment For Java, the default location is inside the parser class.
-
-(Like all the Yacc prologue alternatives, this directive is experimental.
-More user feedback will help to determine whether it should become a permanent
-feature.)
-@end deffn
-
-@deffn {Directive} %code @var{qualifier} @{@var{code}@}
-This is the qualified form of the @code{%code} directive.
-If you need to specify location-sensitive verbatim @var{code} that does not
-belong at the default location selected by the unqualified @code{%code} form,
-use this form instead.
-
-@var{qualifier} identifies the purpose of @var{code} and thus the location(s)
-where Bison should generate it.
-Not all values of @var{qualifier} are available for all target languages:
-
-@itemize @bullet
-@findex %code requires
-@item requires
-
-@itemize @bullet
-@item Language(s): C, C++
-
-@item Purpose: This is the best place to write dependency code required for
-@code{YYSTYPE} and @code{YYLTYPE}.
-In other words, it's the best place to define types referenced in @code{%union}
-directives, and it's the best place to override Bison's default @code{YYSTYPE}
-and @code{YYLTYPE} definitions.
-
-@item Location(s): The parser header file and the parser source code file
-before the Bison-generated @code{YYSTYPE} and @code{YYLTYPE} definitions.
-@end itemize
-
-@item provides
-@findex %code provides
-
-@itemize @bullet
-@item Language(s): C, C++
-
-@item Purpose: This is the best place to write additional definitions and
-declarations that should be provided to other modules.
-
-@item Location(s): The parser header file and the parser source code file after
-the Bison-generated @code{YYSTYPE}, @code{YYLTYPE}, and token definitions.
-@end itemize
-
-@item top
-@findex %code top
-
-@itemize @bullet
-@item Language(s): C, C++
-
-@item Purpose: The unqualified @code{%code} or @code{%code requires} should
-usually be more appropriate than @code{%code top}.
-However, occasionally it is necessary to insert code much nearer the top of the
-parser source code file.
-For example:
-
-@smallexample
-%code top @{
- #define _GNU_SOURCE
- #include <stdio.h>
-@}
-@end smallexample
-
-@item Location(s): Near the top of the parser source code file.
-@end itemize
-@ignore
-@item imports
-@findex %code imports
-
-@itemize @bullet
-@item Language(s): Java
-
-@item Purpose: This is the best place to write Java import directives.
-
-@item Location(s): The parser Java file after any Java package directive and
-before any class definitions.
-@end itemize
-@end ignore
-@end itemize
-
-(Like all the Yacc prologue alternatives, this directive is experimental.
-More user feedback will help to determine whether it should become a permanent
-feature.)
-
-@cindex Prologue
-For a detailed discussion of how to use @code{%code} in place of the
-traditional Yacc prologue for C/C++, see @ref{Prologue Alternatives}.
+Insert @var{code} verbatim into the output parser source at the
+default location or at the location specified by @var{qualifier}.
+@xref{%code Summary}.
@end deffn
@deffn {Directive} %debug
-In the parser file, define the macro @code{YYDEBUG} to 1 if it is not
-already defined, so that the debugging facilities are compiled.
-@end deffn
+Instrument the output parser for traces. Obsoleted by @samp{%define
+parse.trace}.
@xref{Tracing, ,Tracing Your Parser}.
+@end deffn
-@deffn {Directive} %define @var{define-variable}
-@deffnx {Directive} %define @var{define-variable} @var{value}
-Define a variable to adjust Bison's behavior.
-The list of available variables and their meanings depends on the selected
-target language and/or the parser skeleton (@pxref{Decl Summary,,%language}).
-The @var{value} can be omitted for boolean variables; for
-boolean variables, the skeletons will treat a @var{value} of @samp{0}
-or @samp{false} as the boolean variable being false, and anything else
-as true.
+@deffn {Directive} %define @var{variable}
+@deffnx {Directive} %define @var{variable} @var{value}
+@deffnx {Directive} %define @var{variable} "@var{value}"
+Define a variable to adjust Bison's behavior. @xref{%define Summary}.
@end deffn
@deffn {Directive} %defines
-Write a header file containing macro definitions for the token type
-names defined in the grammar as well as a few other declarations.
-If the parser output file is named @file{@var{name}.c} then this file
-is named @file{@var{name}.h}.
+Write a parser header file containing macro definitions for the token
+type names defined in the grammar as well as a few other declarations.
+If the parser implementation file is named @file{@var{name}.c} then
+the parser header file is named @file{@var{name}.h}.
-For C parsers, the output header declares @code{YYSTYPE} unless
+For C parsers, the parser header file declares @code{YYSTYPE} unless
@code{YYSTYPE} is already defined as a macro or you have used a
-@code{<@var{type}>} tag without using @code{%union}.
-Therefore, if you are using a @code{%union}
-(@pxref{Multiple Types, ,More Than One Value Type}) with components that
-require other definitions, or if you have defined a @code{YYSTYPE} macro
-or type definition
-(@pxref{Value Type, ,Data Types of Semantic Values}), you need to
-arrange for these definitions to be propagated to all modules, e.g., by
-putting them in a prerequisite header that is included both by your
-parser and by any other module that needs @code{YYSTYPE}.
-
-Unless your parser is pure, the output header declares @code{yylval}
-as an external variable. @xref{Pure Decl, ,A Pure (Reentrant)
-Parser}.
-
-If you have also used locations, the output header declares
-@code{YYLTYPE} and @code{yylloc} using a protocol similar to that of
-the @code{YYSTYPE} macro and @code{yylval}. @xref{Locations, ,Tracking
-Locations}.
-
-This output file is normally essential if you wish to put the definition
-of @code{yylex} in a separate source file, because @code{yylex}
-typically needs to be able to refer to the above-mentioned declarations
-and to the token type codes. @xref{Token Values, ,Semantic Values of
-Tokens}.
+@code{<@var{type}>} tag without using @code{%union}. Therefore, if
+you are using a @code{%union} (@pxref{Multiple Types, ,More Than One
+Value Type}) with components that require other definitions, or if you
+have defined a @code{YYSTYPE} macro or type definition (@pxref{Value
+Type, ,Data Types of Semantic Values}), you need to arrange for these
+definitions to be propagated to all modules, e.g., by putting them in
+a prerequisite header that is included both by your parser and by any
+other module that needs @code{YYSTYPE}.
+
+Unless your parser is pure, the parser header file declares
+@code{yylval} as an external variable. @xref{Pure Decl, ,A Pure
+(Reentrant) Parser}.
+
+If you have also used locations, the parser header file declares
+@code{YYLTYPE} and @code{yylloc} using a protocol similar to that of the
+@code{YYSTYPE} macro and @code{yylval}. @xref{Tracking Locations}.
+
+This parser header file is normally essential if you wish to put the
+definition of @code{yylex} in a separate source file, because
+@code{yylex} typically needs to be able to refer to the
+above-mentioned declarations and to the token type codes. @xref{Token
+Values, ,Semantic Values of Tokens}.
@findex %code requires
@findex %code provides
If you have declared @code{%code requires} or @code{%code provides}, the output
header also contains their code.
-@xref{Decl Summary, ,%code}.
+@xref{%code Summary}.
@end deffn
@deffn {Directive} %defines @var{defines-file}
@end deffn
@deffn {Directive} %file-prefix "@var{prefix}"
-Specify a prefix to use for all Bison output file names. The names are
-chosen as if the input file were named @file{@var{prefix}.y}.
+Specify a prefix to use for all Bison output file names. The names
+are chosen as if the grammar file were named @file{@var{prefix}.y}.
@end deffn
@deffn {Directive} %language "@var{language}"
Specify the programming language for the generated parser. Currently
-supported languages include C and C++.
+supported languages include C, C++, and Java.
@var{language} is case-insensitive.
+
+This directive is experimental and its effect may be modified in future
+releases.
@end deffn
@deffn {Directive} %locations
in C parsers
is @code{yyparse}, @code{yylex}, @code{yyerror}, @code{yynerrs},
@code{yylval}, @code{yychar}, @code{yydebug}, and
-(if locations are used) @code{yylloc}. For example, if you use
-@samp{%name-prefix "c_"}, the names become @code{c_parse}, @code{c_lex},
-and so on. In C++ parsers, it is only the surrounding namespace which is
-named @var{prefix} instead of @samp{yy}.
+(if locations are used) @code{yylloc}. If you use a push parser,
+@code{yypush_parse}, @code{yypull_parse}, @code{yypstate},
+@code{yypstate_new} and @code{yypstate_delete} will
+also be renamed. For example, if you use @samp{%name-prefix "c_"}, the
+names become @code{c_parse}, @code{c_lex}, and so on.
+For C++ parsers, see the @samp{%define api.namespace} documentation in this
+section.
@xref{Multiple Parsers, ,Multiple Parsers in the Same Program}.
@end deffn
@end deffn
@end ifset
-@deffn {Directive} %no-parser
-Do not include any C code in the parser file; generate tables only. The
-parser file contains just @code{#define} directives and static variable
-declarations.
-
-This option also tells Bison to write the C code for the grammar actions
-into a file named @file{@var{file}.act}, in the form of a
-brace-surrounded body fit for a @code{switch} statement.
-@end deffn
-
@deffn {Directive} %no-lines
Don't generate any @code{#line} preprocessor commands in the parser
-file. Ordinarily Bison writes these commands in the parser file so that
-the C compiler and debuggers will associate errors and object code with
-your source file (the grammar file). This directive causes them to
-associate errors with the parser file, treating it an independent source
-file in its own right.
+implementation file. Ordinarily Bison writes these commands in the
+parser implementation file so that the C compiler and debuggers will
+associate errors and object code with your source file (the grammar
+file). This directive causes them to associate errors with the parser
+implementation file, treating it as an independent source file in its
+own right.
@end deffn
@deffn {Directive} %output "@var{file}"
-Specify @var{file} for the parser file.
+Specify @var{file} for the parser implementation file.
@end deffn
@deffn {Directive} %pure-parser
-Request a pure (reentrant) parser program (@pxref{Pure Decl, ,A Pure
-(Reentrant) Parser}).
+Deprecated version of @samp{%define api.pure} (@pxref{%define
+Summary,,api.pure}), for which Bison is more careful to warn about
+unreasonable usage.
@end deffn
@deffn {Directive} %require "@var{version}"
@deffn {Directive} %skeleton "@var{file}"
Specify the skeleton to use.
-You probably don't need this option unless you are developing Bison.
-You should use @code{%language} if you want to specify the skeleton for a
-different language, because it is clearer and because it will always choose the
-correct skeleton for non-deterministic or push parsers.
+@c You probably don't need this option unless you are developing Bison.
+@c You should use @code{%language} if you want to specify the skeleton for a
+@c different language, because it is clearer and because it will always choose the
+@c correct skeleton for non-deterministic or push parsers.
If @var{file} does not contain a @code{/}, @var{file} is the name of a skeleton
file in the Bison installation directory.
@end deffn
@deffn {Directive} %token-table
-Generate an array of token names in the parser file. The name of the
-array is @code{yytname}; @code{yytname[@var{i}]} is the name of the
-token whose internal Bison token code number is @var{i}. The first
-three elements of @code{yytname} correspond to the predefined tokens
-@code{"$end"},
-@code{"error"}, and @code{"$undefined"}; after these come the symbols
-defined in the grammar file.
+Generate an array of token names in the parser implementation file.
+The name of the array is @code{yytname}; @code{yytname[@var{i}]} is
+the name of the token whose internal Bison token code number is
+@var{i}. The first three elements of @code{yytname} correspond to the
+predefined tokens @code{"$end"}, @code{"error"}, and
+@code{"$undefined"}; after these come the symbols defined in the
+grammar file.
The name in the table includes all the characters needed to represent
the token in Bison. For single-character literals and literal
@end deffn
-@node Multiple Parsers
-@section Multiple Parsers in the Same Program
-
-Most programs that use Bison parse only one language and therefore contain
-only one Bison parser. But what if you want to parse more than one
-language with the same program? Then you need to avoid a name conflict
-between different definitions of @code{yyparse}, @code{yylval}, and so on.
+@node %define Summary
+@subsection %define Summary
-The easy way to do this is to use the option @samp{-p @var{prefix}}
-(@pxref{Invocation, ,Invoking Bison}). This renames the interface
-functions and variables of the Bison parser to start with @var{prefix}
-instead of @samp{yy}. You can use this to give each parser distinct
-names that do not conflict.
+There are many features of Bison's behavior that can be controlled by
+assigning the feature a single value. For historical reasons, some
+such features are assigned values by dedicated directives, such as
+@code{%start}, which assigns the start symbol. However, newer such
+features are associated with variables, which are assigned by the
+@code{%define} directive:
-The precise list of symbols renamed is @code{yyparse}, @code{yylex},
-@code{yyerror}, @code{yynerrs}, @code{yylval}, @code{yylloc},
-@code{yychar} and @code{yydebug}. For example, if you use @samp{-p c},
-the names become @code{cparse}, @code{clex}, and so on.
+@deffn {Directive} %define @var{variable}
+@deffnx {Directive} %define @var{variable} @var{value}
+@deffnx {Directive} %define @var{variable} "@var{value}"
+Define @var{variable} to @var{value}.
-@strong{All the other variables and macros associated with Bison are not
-renamed.} These others are not global; there is no conflict if the same
-name is used in different parsers. For example, @code{YYSTYPE} is not
-renamed, but defining this in different ways in different parsers causes
-no trouble (@pxref{Value Type, ,Data Types of Semantic Values}).
+@var{value} must be placed in quotation marks if it contains any
+character other than a letter, underscore, period, or non-initial dash
+or digit. Omitting @code{"@var{value}"} entirely is always equivalent
+to specifying @code{""}.
-The @samp{-p} option works by adding macro definitions to the beginning
-of the parser source file, defining @code{yyparse} as
-@code{@var{prefix}parse}, and so on. This effectively substitutes one
-name for the other in the entire parser file.
+It is an error if a @var{variable} is defined by @code{%define}
+multiple times, but see @ref{Bison Options,,-D
+@var{name}[=@var{value}]}.
+@end deffn
-@node Interface
-@chapter Parser C-Language Interface
-@cindex C-language interface
-@cindex interface
+The rest of this section summarizes variables and values that
+@code{%define} accepts.
-The Bison parser is actually a C function named @code{yyparse}. Here we
-describe the interface conventions of @code{yyparse} and the other
-functions that it needs to use.
+Some @var{variable}s take Boolean values. In this case, Bison will
+complain if the variable definition does not meet one of the following
+four conditions:
-Keep in mind that the parser uses many C identifiers starting with
-@samp{yy} and @samp{YY} for internal purposes. If you use such an
-identifier (aside from those in this manual) in an action or in epilogue
-in the grammar file, you are likely to run into trouble.
+@enumerate
+@item @code{@var{value}} is @code{true}.
-@menu
-* Parser Function:: How to call @code{yyparse} and what it returns.
-* Lexical:: You must supply a function @code{yylex}
- which reads tokens.
-* Error Reporting:: You must supply a function @code{yyerror}.
-* Action Features:: Special features for use in actions.
-* Internationalization:: How to let the parser speak in the user's
- native language.
-@end menu
+@item @code{@var{value}} is omitted (or @code{""} is specified).
+This is equivalent to @code{true}.
-@node Parser Function
-@section The Parser Function @code{yyparse}
-@findex yyparse
+@item @code{@var{value}} is @code{false}.
-You call the function @code{yyparse} to cause parsing to occur. This
-function reads tokens, executes actions, and ultimately returns when it
-encounters end-of-input or an unrecoverable syntax error. You can also
-write an action which directs @code{yyparse} to return immediately
-without reading further.
+@item @var{variable} is never defined.
+In this case, Bison selects a default value.
+@end enumerate
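+
+For instance, for a Boolean variable such as @code{parse.trace}
+(described below), each of the following alternative definitions is
+valid; the first two are equivalent, and of course only one of them may
+appear in a given grammar file:
+
+@example
+%define parse.trace          // Value omitted: equivalent to "true".
+%define parse.trace "true"
+%define parse.trace "false"
+@end example
+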
+What @var{variable}s are accepted, as well as their meanings and default
+values, depend on the selected target language and/or the parser
+skeleton (@pxref{Decl Summary,,%language}, @pxref{Decl
+Summary,,%skeleton}).
+Unaccepted @var{variable}s produce an error.
+Some of the accepted @var{variable}s are:
-@deftypefun int yyparse (void)
-The value returned by @code{yyparse} is 0 if parsing was successful (return
-is due to end-of-input).
+@table @code
+@c ================================================== api.namespace
+@item api.namespace
+@findex %define api.namespace
+@itemize
+@item Language(s): C++
-The value is 1 if parsing failed because of invalid input, i.e., input
-that contains a syntax error or that causes @code{YYABORT} to be
-invoked.
+@item Purpose: Specify the namespace for the parser class.
+For example, if you specify:
-The value is 2 if parsing failed due to memory exhaustion.
-@end deftypefun
+@example
+%define api.namespace "foo::bar"
+@end example
-In an action, you can cause immediate return from @code{yyparse} by using
-these macros:
+Bison uses @code{foo::bar} verbatim in references such as:
-@defmac YYACCEPT
-@findex YYACCEPT
-Return immediately with value 0 (to report success).
-@end defmac
+@example
+foo::bar::parser::semantic_type
+@end example
-@defmac YYABORT
-@findex YYABORT
-Return immediately with value 1 (to report failure).
-@end defmac
+However, to open a namespace, Bison removes any leading @code{::} and then
+splits it on any remaining occurrences of @code{::}:
-If you use a reentrant parser, you can optionally pass additional
-parameter information to it in a reentrant way. To do so, use the
-declaration @code{%parse-param}:
+@example
+namespace foo @{ namespace bar @{
+ class position;
+ class location;
+@} @}
+@end example
-@deffn {Directive} %parse-param @{@var{argument-declaration}@}
-@findex %parse-param
-Declare that an argument declared by the braced-code
-@var{argument-declaration} is an additional @code{yyparse} argument.
-The @var{argument-declaration} is used when declaring
-functions or prototypes. The last identifier in
-@var{argument-declaration} must be the argument name.
-@end deffn
+@item Accepted Values:
+Any absolute or relative C++ namespace reference without a trailing
+@code{"::"}. For example, @code{"foo"} or @code{"::foo::bar"}.
-Here's an example. Write this in the parser:
+@item Default Value:
+The value specified by @code{%name-prefix}, which defaults to @code{yy}.
+This usage of @code{%name-prefix} is for backward compatibility and can
+be confusing since @code{%name-prefix} also specifies the textual prefix
+for the lexical analyzer function. Thus, if you specify
+@code{%name-prefix}, it is best to also specify @samp{%define
+api.namespace} so that @code{%name-prefix} @emph{only} affects the
+lexical analyzer function. For example, if you specify:
@example
-%parse-param @{int *nastiness@}
-%parse-param @{int *randomness@}
+%define api.namespace "foo"
+%name-prefix "bar::"
@end example
-@noindent
-Then call the parser like this:
+The parser namespace is @code{foo} and @code{yylex} is referenced as
+@code{bar::lex}.
+@end itemize
+@c namespace
-@example
-@{
- int nastiness, randomness;
- @dots{} /* @r{Store proper data in @code{nastiness} and @code{randomness}.} */
- value = yyparse (&nastiness, &randomness);
- @dots{}
-@}
-@end example
-@noindent
-In the grammar actions, use expressions like this to refer to the data:
-@example
-exp: @dots{} @{ @dots{}; *randomness += 1; @dots{} @}
-@end example
+@c ================================================== api.pure
+@item api.pure
+@findex %define api.pure
+@itemize @bullet
+@item Language(s): C
-@node Lexical
-@section The Lexical Analyzer Function @code{yylex}
-@findex yylex
-@cindex lexical analyzer
+@item Purpose: Request a pure (reentrant) parser program.
+@xref{Pure Decl, ,A Pure (Reentrant) Parser}.
-The @dfn{lexical analyzer} function, @code{yylex}, recognizes tokens from
-the input stream and returns them to the parser. Bison does not create
-this function automatically; you must write it so that @code{yyparse} can
-call it. The function is sometimes referred to as a lexical scanner.
+@item Accepted Values: Boolean
-In simple programs, @code{yylex} is often defined at the end of the Bison
-grammar file. If @code{yylex} is defined in a separate source file, you
-need to arrange for the token-type macro definitions to be available there.
-To do this, use the @samp{-d} option when you run Bison, so that it will
-write these macro definitions into a separate header file
-@file{@var{name}.tab.h} which you can include in the other source files
-that need it. @xref{Invocation, ,Invoking Bison}.
+@item Default Value: @code{false}
+@end itemize
+@c api.pure
-@menu
-* Calling Convention:: How @code{yyparse} calls @code{yylex}.
-* Token Values:: How @code{yylex} must return the semantic value
- of the token it has read.
-* Token Locations:: How @code{yylex} must return the text location
- (line number, etc.) of the token, if the
- actions want that.
-* Pure Calling:: How the calling convention differs
- in a pure parser (@pxref{Pure Decl, ,A Pure (Reentrant) Parser}).
-@end menu
-@node Calling Convention
-@subsection Calling Convention for @code{yylex}
-The value that @code{yylex} returns must be the positive numeric code
-for the type of token it has just found; a zero or negative value
-signifies end-of-input.
+@c ================================================== api.push-pull
+@item api.push-pull
+@findex %define api.push-pull
-When a token is referred to in the grammar rules by a name, that name
-in the parser file becomes a C macro whose definition is the proper
-numeric code for that token type. So @code{yylex} can use the name
-to indicate that type. @xref{Symbols}.
+@itemize @bullet
+@item Language(s): C (deterministic parsers only)
-When a token is referred to in the grammar rules by a character literal,
-the numeric code for that character is also the code for the token type.
-So @code{yylex} can simply return that character code, possibly converted
-to @code{unsigned char} to avoid sign-extension. The null character
-must not be used this way, because its code is zero and that
-signifies end-of-input.
+@item Purpose: Request a pull parser, a push parser, or both.
+@xref{Push Decl, ,A Push Parser}.
+(The current push parsing interface is experimental and may evolve.
+More user feedback will help to stabilize it.)
-Here is an example showing these things:
+@item Accepted Values: @code{pull}, @code{push}, @code{both}
+
+@item Default Value: @code{pull}
+@end itemize
+@c api.push-pull
+
+
+
+@c ================================================== api.tokens.prefix
+@item api.tokens.prefix
+@findex %define api.tokens.prefix
+
+@itemize
+@item Language(s): all
+
+@item Purpose:
+Add a prefix to the token names when generating their definition in the
+target language. For instance
@example
-int
-yylex (void)
-@{
- @dots{}
- if (c == EOF) /* Detect end-of-input. */
- return 0;
- @dots{}
- if (c == '+' || c == '-')
- return c; /* Assume token type for `+' is '+'. */
- @dots{}
- return INT; /* Return the type of the token. */
- @dots{}
-@}
+%token FILE for ERROR
+%define api.tokens.prefix "TOK_"
+%%
+start: FILE for ERROR;
@end example
@noindent
-This interface has been designed so that the output from the @code{lex}
-utility can be used without change as the definition of @code{yylex}.
+generates the definition of the symbols @code{TOK_FILE}, @code{TOK_for},
+and @code{TOK_ERROR} in the generated source files. In particular, the
+scanner must use these prefixed token names, while the grammar itself
+may still use the short names (as in the sample rule given above). The
+generated informational files (@file{*.output}, @file{*.xml},
+@file{*.dot}) are not modified by this prefix. A scanner-side sketch
+follows this list; see @ref{Calc++ Parser} and @ref{Calc++ Scanner},
+for a complete example.
+
+@item Accepted Values:
+Any string. It should be a valid identifier prefix in the target
+language; in other words, it should typically be an identifier itself
+(a sequence of letters, underscores, and, except at the beginning,
+digits).
+
+@item Default Value:
+empty
+@end itemize
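+
+For the example above, the scanner must then return the prefixed names;
+for instance:
+
+@example
+  /* In yylex: return the prefixed token name.  */
+  return TOK_FILE;
+@end example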
+@c api.tokens.prefix
-If the grammar uses literal string tokens, there are two ways that
-@code{yylex} can determine the token type codes for them:
-@itemize @bullet
-@item
-If the grammar defines symbolic token names as aliases for the
-literal string tokens, @code{yylex} can use these symbolic names like
-all others. In this case, the use of the literal string tokens in
-the grammar file has no effect on @code{yylex}.
+@c ================================================== lex_symbol
+@item lex_symbol
+@findex %define lex_symbol
-@item
-@code{yylex} can find the multicharacter token in the @code{yytname}
-table. The index of the token in the table is the token type's code.
-The name of a multicharacter token is recorded in @code{yytname} with a
-double-quote, the token's characters, and another double-quote. The
-token's characters are escaped as necessary to be suitable as input
-to Bison.
+@itemize @bullet
+@item Language(s):
+C++
-Here's code for looking up a multicharacter token in @code{yytname},
-assuming that the characters of the token are stored in
-@code{token_buffer}, and assuming that the token does not contain any
-characters like @samp{"} that require escaping.
+@item Purpose:
+When variant-based semantic values are enabled (@pxref{C++ Variants}),
+request that symbols be handled as a whole (type, value, and possibly
+location) in the scanner. @xref{Complete Symbols}, for details.
-@smallexample
-for (i = 0; i < YYNTOKENS; i++)
- @{
- if (yytname[i] != 0
- && yytname[i][0] == '"'
- && ! strncmp (yytname[i] + 1, token_buffer,
- strlen (token_buffer))
- && yytname[i][strlen (token_buffer) + 1] == '"'
- && yytname[i][strlen (token_buffer) + 2] == 0)
- break;
- @}
-@end smallexample
+@item Accepted Values:
+Boolean.
-The @code{yytname} table is generated only if you use the
-@code{%token-table} declaration. @xref{Decl Summary}.
+@item Default Value:
+@code{false}
@end itemize
+@c lex_symbol
-@node Token Values
-@subsection Semantic Values of Tokens
-@vindex yylval
-In an ordinary (nonreentrant) parser, the semantic value of the token must
-be stored into the global variable @code{yylval}. When you are using
-just one data type for semantic values, @code{yylval} has that type.
-Thus, if the type is @code{int} (the default), you might write this in
-@code{yylex}:
+@c ================================================== lr.default-reductions
-@example
-@group
- @dots{}
- yylval = value; /* Put value onto Bison stack. */
- return INT; /* Return the type of the token. */
- @dots{}
-@end group
-@end example
+@item lr.default-reductions
+@findex %define lr.default-reductions
-When you are using multiple data types, @code{yylval}'s type is a union
-made from the @code{%union} declaration (@pxref{Union Decl, ,The
-Collection of Value Types}). So when you store a token's value, you
-must use the proper member of the union. If the @code{%union}
-declaration looks like this:
+@itemize @bullet
+@item Language(s): all
-@example
-@group
-%union @{
- int intval;
- double val;
- symrec *tptr;
-@}
-@end group
-@end example
+@item Purpose: Specify the kind of states that are permitted to
+contain default reductions. @xref{Default Reductions}. (The ability to
+specify where default reductions should be used is experimental. More user
+feedback will help to stabilize it.)
-@noindent
-then the code in @code{yylex} might look like this:
+@item Accepted Values: @code{most}, @code{consistent}, @code{accepting}
+@item Default Value:
+@itemize
+@item @code{accepting} if @code{lr.type} is @code{canonical-lr}.
+@item @code{most} otherwise.
+@end itemize
+@end itemize
-@example
-@group
- @dots{}
- yylval.intval = value; /* Put value onto Bison stack. */
- return INT; /* Return the type of the token. */
- @dots{}
-@end group
-@end example
+@c ============================================ lr.keep-unreachable-states
-@node Token Locations
-@subsection Textual Locations of Tokens
+@item lr.keep-unreachable-states
+@findex %define lr.keep-unreachable-states
-@vindex yylloc
-If you are using the @samp{@@@var{n}}-feature (@pxref{Locations, ,
-Tracking Locations}) in actions to keep track of the textual locations
-of tokens and groupings, then you must provide this information in
-@code{yylex}. The function @code{yyparse} expects to find the textual
-location of a token just parsed in the global variable @code{yylloc}.
-So @code{yylex} must store the proper data in that variable.
+@itemize @bullet
+@item Language(s): all
+@item Purpose: Request that Bison allow unreachable parser states to
+remain in the parser tables. @xref{Unreachable States}.
+@item Accepted Values: Boolean
+@item Default Value: @code{false}
+@end itemize
+@c lr.keep-unreachable-states
-By default, the value of @code{yylloc} is a structure and you need only
-initialize the members that are going to be used by the actions. The
-four members are called @code{first_line}, @code{first_column},
-@code{last_line} and @code{last_column}. Note that the use of this
-feature makes the parser noticeably slower.
+@c ================================================== lr.type
-@tindex YYLTYPE
-The data type of @code{yylloc} has the name @code{YYLTYPE}.
+@item lr.type
+@findex %define lr.type
-@node Pure Calling
-@subsection Calling Conventions for Pure Parsers
+@itemize @bullet
+@item Language(s): all
-When you use the Bison declaration @code{%pure-parser} to request a
-pure, reentrant parser, the global communication variables @code{yylval}
-and @code{yylloc} cannot be used. (@xref{Pure Decl, ,A Pure (Reentrant)
-Parser}.) In such parsers the two global variables are replaced by
-pointers passed as arguments to @code{yylex}. You must declare them as
-shown here, and pass the information back by storing it through those
-pointers.
+@item Purpose: Specify the type of parser tables within the
+LR(1) family. @xref{LR Table Construction}. (This feature is experimental.
+More user feedback will help to stabilize it.)
-@example
-int
-yylex (YYSTYPE *lvalp, YYLTYPE *llocp)
-@{
- @dots{}
- *lvalp = value; /* Put value onto Bison stack. */
- return INT; /* Return the type of the token. */
- @dots{}
-@}
-@end example
+@item Accepted Values: @code{lalr}, @code{ielr}, @code{canonical-lr}
-If the grammar file does not use the @samp{@@} constructs to refer to
-textual locations, then the type @code{YYLTYPE} will not be defined. In
-this case, omit the second argument; @code{yylex} will be called with
-only one argument.
+@item Default Value: @code{lalr}
+@end itemize
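+
+For instance, an illustrative combination of this variable with
+@code{lr.default-reductions}, described above:
+
+@example
+%define lr.type ielr
+%define lr.default-reductions consistent
+@end example
+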
-If you wish to pass the additional parameter data to @code{yylex}, use
-@code{%lex-param} just like @code{%parse-param} (@pxref{Parser
-Function}).
+@c ================================================== namespace
+@item namespace
+@findex %define namespace
+Obsoleted by @code{api.namespace}.
+@c namespace
-@deffn {Directive} lex-param @{@var{argument-declaration}@}
-@findex %lex-param
-Declare that the braced-code @var{argument-declaration} is an
-additional @code{yylex} argument declaration.
-@end deffn
-For instance:
+@c ================================================== parse.assert
+@item parse.assert
+@findex %define parse.assert
-@example
-%parse-param @{int *nastiness@}
-%lex-param @{int *nastiness@}
-%parse-param @{int *randomness@}
-@end example
+@itemize
+@item Language(s): C++
-@noindent
-results in the following signature:
+@item Purpose: Issue runtime assertions to catch invalid uses.
+In C++, when variants are used (@pxref{C++ Variants}), symbols must be
+constructed and
+destroyed properly. This option checks these constraints.
-@example
-int yylex (int *nastiness);
-int yyparse (int *nastiness, int *randomness);
-@end example
+@item Accepted Values: Boolean
-If @code{%pure-parser} is added:
+@item Default Value: @code{false}
+@end itemize
+@c parse.assert
-@example
-int yylex (YYSTYPE *lvalp, int *nastiness);
-int yyparse (int *nastiness, int *randomness);
-@end example
-@noindent
-and finally, if both @code{%pure-parser} and @code{%locations} are used:
+@c ================================================== parse.error
+@item parse.error
+@findex %define parse.error
+@itemize
+@item Language(s):
+all
+@item Purpose:
+Control the kind of error messages passed to the error reporting
+function. @xref{Error Reporting, ,The Error Reporting Function
+@code{yyerror}}.
+@item Accepted Values:
+@itemize
+@item @code{simple}
+Error messages passed to @code{yyerror} are simply @w{@code{"syntax
+error"}}.
+@item @code{verbose}
+Error messages report the unexpected token, and possibly the expected ones.
+However, this report can often be incorrect when LAC is not enabled
+(@pxref{LAC}).
+@end itemize
-@example
-int yylex (YYSTYPE *lvalp, YYLTYPE *llocp, int *nastiness);
-int yyparse (int *nastiness, int *randomness);
-@end example
+@item Default Value:
+@code{simple}
+@end itemize
+@c parse.error
-@node Error Reporting
-@section The Error Reporting Function @code{yyerror}
-@cindex error reporting function
-@findex yyerror
-@cindex parse error
-@cindex syntax error
-The Bison parser detects a @dfn{syntax error} or @dfn{parse error}
-whenever it reads a token which cannot satisfy any syntax rule. An
-action in the grammar can also explicitly proclaim an error, using the
-macro @code{YYERROR} (@pxref{Action Features, ,Special Features for Use
-in Actions}).
+@c ================================================== parse.lac
+@item parse.lac
+@findex %define parse.lac
-The Bison parser expects to report the error by calling an error
-reporting function named @code{yyerror}, which you must supply. It is
-called by @code{yyparse} whenever a syntax error is found, and it
-receives one argument. For a syntax error, the string is normally
-@w{@code{"syntax error"}}.
+@itemize
+@item Language(s): C (deterministic parsers only)
-@findex %error-verbose
-If you invoke the directive @code{%error-verbose} in the Bison
-declarations section (@pxref{Bison Declarations, ,The Bison Declarations
-Section}), then Bison provides a more verbose and specific error message
-string instead of just plain @w{@code{"syntax error"}}.
+@item Purpose: Enable LAC (lookahead correction) to improve
+syntax error handling. @xref{LAC}.
+@item Accepted Values: @code{none}, @code{full}
+@item Default Value: @code{none}
+@end itemize
+@c parse.lac
-The parser can detect one other kind of error: memory exhaustion. This
-can happen when the input contains constructions that are very deeply
-nested. It isn't likely you will encounter this, since the Bison
-parser normally extends its stack automatically up to a very large limit. But
-if memory is exhausted, @code{yyparse} calls @code{yyerror} in the usual
-fashion, except that the argument string is @w{@code{"memory exhausted"}}.
+@c ================================================== parse.trace
+@item parse.trace
+@findex %define parse.trace
-In some cases diagnostics like @w{@code{"syntax error"}} are
-translated automatically from English to some other language before
-they are passed to @code{yyerror}. @xref{Internationalization}.
+@itemize
+@item Language(s): C, C++
-The following definition suffices in simple programs:
+@item Purpose: Require parser instrumentation for tracing.
+In C/C++, define the macro @code{YYDEBUG} to 1 in the parser implementation
+file if it is not already defined, so that the debugging facilities are
+compiled. @xref{Tracing, ,Tracing Your Parser}.
-@example
-@group
-void
-yyerror (char const *s)
-@{
-@end group
-@group
- fprintf (stderr, "%s\n", s);
-@}
-@end group
-@end example
+@item Accepted Values: Boolean
-After @code{yyerror} returns to @code{yyparse}, the latter will attempt
-error recovery if you have written suitable error recovery grammar rules
-(@pxref{Error Recovery}). If recovery is impossible, @code{yyparse} will
-immediately return 1.
+@item Default Value: @code{false}
+@end itemize
+@c parse.trace
-Obviously, in location tracking pure parsers, @code{yyerror} should have
-an access to the current location.
-This is indeed the case for the @acronym{GLR}
-parsers, but not for the Yacc parser, for historical reasons. I.e., if
-@samp{%locations %pure-parser} is passed then the prototypes for
-@code{yyerror} are:
+@c ================================================== variant
+@item variant
+@findex %define variant
-@example
-void yyerror (char const *msg); /* Yacc parsers. */
-void yyerror (YYLTYPE *locp, char const *msg); /* GLR parsers. */
-@end example
-
-If @samp{%parse-param @{int *nastiness@}} is used, then:
-
-@example
-void yyerror (int *nastiness, char const *msg); /* Yacc parsers. */
-void yyerror (int *nastiness, char const *msg); /* GLR parsers. */
-@end example
-
-Finally, @acronym{GLR} and Yacc parsers share the same @code{yyerror} calling
-convention for absolutely pure parsers, i.e., when the calling
-convention of @code{yylex} @emph{and} the calling convention of
-@code{%pure-parser} are pure. I.e.:
-
-@example
-/* Location tracking. */
-%locations
-/* Pure yylex. */
-%pure-parser
-%lex-param @{int *nastiness@}
-/* Pure yyparse. */
-%parse-param @{int *nastiness@}
-%parse-param @{int *randomness@}
-@end example
+@itemize @bullet
+@item Language(s):
+C++
-@noindent
-results in the following signatures for all the parser kinds:
+@item Purpose:
+Request variant-based semantic values.
+@xref{C++ Variants}.
-@example
-int yylex (YYSTYPE *lvalp, YYLTYPE *llocp, int *nastiness);
-int yyparse (int *nastiness, int *randomness);
-void yyerror (YYLTYPE *locp,
- int *nastiness, int *randomness,
- char const *msg);
-@end example
+@item Accepted Values:
+Boolean.
-@noindent
-The prototypes are only indications of how the code produced by Bison
-uses @code{yyerror}. Bison-generated code always ignores the returned
-value, so @code{yyerror} can return any type, including @code{void}.
-Also, @code{yyerror} can be a variadic function; that is why the
-message is always passed last.
+@item Default Value:
+@code{false}
+@end itemize
+@c variant
+@end table
-Traditionally @code{yyerror} returns an @code{int} that is always
-ignored, but this is purely for historical reasons, and @code{void} is
-preferable since it more accurately describes the return type for
-@code{yyerror}.
-@vindex yynerrs
-The variable @code{yynerrs} contains the number of syntax errors
-reported so far. Normally this variable is global; but if you
-request a pure parser (@pxref{Pure Decl, ,A Pure (Reentrant) Parser})
-then it is a local variable which only the actions can access.
+@node %code Summary
+@subsection %code Summary
+@findex %code
+@cindex Prologue
-@node Action Features
-@section Special Features for Use in Actions
-@cindex summary, action features
-@cindex action features summary
+The @code{%code} directive inserts code verbatim into the output
+parser source at any of a predefined set of locations. It thus serves
+as a flexible and user-friendly alternative to the traditional Yacc
+prologue, @code{%@{@var{code}%@}}. This section summarizes the
+functionality of @code{%code} for the various target languages
+supported by Bison. For a detailed discussion of how to use
+@code{%code} in place of @code{%@{@var{code}%@}} for C/C++ and why it
+is advantageous to do so, @pxref{Prologue Alternatives}.
-Here is a table of Bison constructs, variables and macros that
-are useful in actions.
+@deffn {Directive} %code @{@var{code}@}
+This is the unqualified form of the @code{%code} directive. It
+inserts @var{code} verbatim at a language-dependent default location
+in the parser implementation.
-@deffn {Variable} $$
-Acts like a variable that contains the semantic value for the
-grouping made by the current rule. @xref{Actions}.
-@end deffn
+For C/C++, the default location is the parser implementation file
+after the usual contents of the parser header file. Thus, the
+unqualified form replaces @code{%@{@var{code}%@}} for most purposes.
-@deffn {Variable} $@var{n}
-Acts like a variable that contains the semantic value for the
-@var{n}th component of the current rule. @xref{Actions}.
+For Java, the default location is inside the parser class.
@end deffn
-@deffn {Variable} $<@var{typealt}>$
-Like @code{$$} but specifies alternative @var{typealt} in the union
-specified by the @code{%union} declaration. @xref{Action Types, ,Data
-Types of Values in Actions}.
+@deffn {Directive} %code @var{qualifier} @{@var{code}@}
+This is the qualified form of the @code{%code} directive.
+@var{qualifier} identifies the purpose of @var{code} and thus the
+location(s) where Bison should insert it. That is, if you need to
+specify location-sensitive @var{code} that does not belong at the
+default location selected by the unqualified @code{%code} form, use
+this form instead.
@end deffn
-@deffn {Variable} $<@var{typealt}>@var{n}
-Like @code{$@var{n}} but specifies alternative @var{typealt} in the
-union specified by the @code{%union} declaration.
-@xref{Action Types, ,Data Types of Values in Actions}.
-@end deffn
+For any particular qualifier or for the unqualified form, if there are
+multiple occurrences of the @code{%code} directive, Bison concatenates
+the specified code in the order in which it appears in the grammar
+file.
-@deffn {Macro} YYABORT;
-Return immediately from @code{yyparse}, indicating failure.
-@xref{Parser Function, ,The Parser Function @code{yyparse}}.
-@end deffn
+Not all qualifiers are accepted for all target languages. Unaccepted
+qualifiers produce an error. Some of the accepted qualifiers are:
-@deffn {Macro} YYACCEPT;
-Return immediately from @code{yyparse}, indicating success.
-@xref{Parser Function, ,The Parser Function @code{yyparse}}.
-@end deffn
+@table @code
+@item requires
+@findex %code requires
-@deffn {Macro} YYBACKUP (@var{token}, @var{value});
-@findex YYBACKUP
-Unshift a token. This macro is allowed only for rules that reduce
-a single value, and only when there is no lookahead token.
-It is also disallowed in @acronym{GLR} parsers.
-It installs a lookahead token with token type @var{token} and
-semantic value @var{value}; then it discards the value that was
-going to be reduced by this rule.
+@itemize @bullet
+@item Language(s): C, C++
-If the macro is used when it is not valid, such as when there is
-a lookahead token already, then it reports a syntax error with
-a message @samp{cannot back up} and performs ordinary error
-recovery.
+@item Purpose: This is the best place to write dependency code required for
+@code{YYSTYPE} and @code{YYLTYPE}.
+In other words, it's the best place to define types referenced in @code{%union}
+directives, and it's the best place to override Bison's default @code{YYSTYPE}
+and @code{YYLTYPE} definitions. A short sketch follows this list.
-In either case, the rest of the action is not executed.
-@end deffn
+@item Location(s): The parser header file and the parser implementation file
+before the Bison-generated @code{YYSTYPE} and @code{YYLTYPE}
+definitions.
+@end itemize
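+
+For instance, a sketch of the @code{requires} qualifier in use (the
+type name @code{symrec} is illustrative):
+
+@example
+%code requires @{
+  /* Type referenced by the %union below.  */
+  typedef struct symrec symrec;
+@}
+%union @{
+  symrec *sym;
+@}
+@end example
+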
-@deffn {Macro} YYEMPTY
-@vindex YYEMPTY
-Value stored in @code{yychar} when there is no lookahead token.
-@end deffn
+@item provides
+@findex %code provides
-@deffn {Macro} YYEOF
-@vindex YYEOF
-Value stored in @code{yychar} when the lookahead is the end of the input
-stream.
-@end deffn
+@itemize @bullet
+@item Language(s): C, C++
-@deffn {Macro} YYERROR;
-@findex YYERROR
-Cause an immediate syntax error. This statement initiates error
-recovery just as if the parser itself had detected an error; however, it
-does not call @code{yyerror}, and does not print any message. If you
-want to print an error message, call @code{yyerror} explicitly before
-the @samp{YYERROR;} statement. @xref{Error Recovery}.
-@end deffn
+@item Purpose: This is the best place to write additional definitions and
+declarations that should be provided to other modules.
-@deffn {Macro} YYRECOVERING
-@findex YYRECOVERING
-The expression @code{YYRECOVERING ()} yields 1 when the parser
-is recovering from a syntax error, and 0 otherwise.
-@xref{Error Recovery}.
-@end deffn
+@item Location(s): The parser header file and the parser implementation
+file after the Bison-generated @code{YYSTYPE}, @code{YYLTYPE}, and
+token definitions.
+@end itemize
-@deffn {Variable} yychar
-Variable containing either the lookahead token, or @code{YYEOF} when the
-lookahead is the end of the input stream, or @code{YYEMPTY} when no lookahead
-has been performed so the next token is not yet known.
-Do not modify @code{yychar} in a deferred semantic action (@pxref{GLR Semantic
-Actions}).
-@xref{Lookahead, ,Lookahead Tokens}.
-@end deffn
+@item top
+@findex %code top
-@deffn {Macro} yyclearin;
-Discard the current lookahead token. This is useful primarily in
-error rules.
-Do not invoke @code{yyclearin} in a deferred semantic action (@pxref{GLR
-Semantic Actions}).
-@xref{Error Recovery}.
-@end deffn
+@itemize @bullet
+@item Language(s): C, C++
-@deffn {Macro} yyerrok;
-Resume generating error messages immediately for subsequent syntax
-errors. This is useful primarily in error rules.
-@xref{Error Recovery}.
-@end deffn
+@item Purpose: The unqualified @code{%code} or @code{%code requires}
+should usually be more appropriate than @code{%code top}. However,
+occasionally it is necessary to insert code much nearer the top of the
+parser implementation file. For example:
-@deffn {Variable} yylloc
-Variable containing the lookahead token location when @code{yychar} is not set
-to @code{YYEMPTY} or @code{YYEOF}.
-Do not modify @code{yylloc} in a deferred semantic action (@pxref{GLR Semantic
-Actions}).
-@xref{Actions and Locations, ,Actions and Locations}.
-@end deffn
+@example
+%code top @{
+ #define _GNU_SOURCE
+ #include <stdio.h>
+@}
+@end example
-@deffn {Variable} yylval
-Variable containing the lookahead token semantic value when @code{yychar} is
-not set to @code{YYEMPTY} or @code{YYEOF}.
-Do not modify @code{yylval} in a deferred semantic action (@pxref{GLR Semantic
-Actions}).
-@xref{Actions, ,Actions}.
-@end deffn
+@item Location(s): Near the top of the parser implementation file.
+@end itemize
-@deffn {Value} @@$
-@findex @@$
-Acts like a structure variable containing information on the textual location
-of the grouping made by the current rule. @xref{Locations, ,
-Tracking Locations}.
+@item imports
+@findex %code imports
-@c Check if those paragraphs are still useful or not.
+@itemize @bullet
+@item Language(s): Java
-@c @example
-@c struct @{
-@c int first_line, last_line;
-@c int first_column, last_column;
-@c @};
-@c @end example
+@item Purpose: This is the best place to write Java import directives.
-@c Thus, to get the starting line number of the third component, you would
-@c use @samp{@@3.first_line}.
+@item Location(s): The parser Java file after any Java package directive and
+before any class definitions.
+@end itemize
+@end table
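+
+As an illustration, here is a sketch, not taken from a real grammar
+(the type and function names are invented), that uses
+@code{%code requires} to define a type referenced by @code{%union} and
+@code{%code provides} to declare a helper for other modules:
+
+@example
+%code requires @{
+  /* Hypothetical type, emitted before the YYSTYPE definition.  */
+  typedef struct @{ double x, y; @} point;
+@}
+
+%union @{
+  point pt;
+  double num;
+@}
+
+%code provides @{
+  /* Hypothetical helper, declared in the parser header file
+     after YYSTYPE and the token definitions.  */
+  void print_point (point p);
+@}
+@end example
+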
-@c In order for the members of this structure to contain valid information,
-@c you must make @code{yylex} supply this information about each token.
-@c If you need only certain members, then @code{yylex} need only fill in
-@c those members.
+Though we say the insertion locations are language-dependent, they are
+technically skeleton-dependent. However, writers of non-standard
+skeletons should choose their locations consistently with the behavior
+of the standard Bison skeletons.
-@c The use of this feature makes the parser noticeably slower.
-@end deffn
-@deffn {Value} @@@var{n}
-@findex @@@var{n}
-Acts like a structure variable containing information on the textual location
-of the @var{n}th component of the current rule. @xref{Locations, ,
-Tracking Locations}.
-@end deffn
+@node Multiple Parsers
+@section Multiple Parsers in the Same Program
-@node Internationalization
-@section Parser Internationalization
-@cindex internationalization
-@cindex i18n
-@cindex NLS
-@cindex gettext
-@cindex bison-po
+Most programs that use Bison parse only one language and therefore contain
+only one Bison parser. But what if you want to parse more than one
+language with the same program? Then you need to avoid a name conflict
+between different definitions of @code{yyparse}, @code{yylval}, and so on.
-A Bison-generated parser can print diagnostics, including error and
-tracing messages. By default, they appear in English. However, Bison
-also supports outputting diagnostics in the user's native language. To
-make this work, the user should set the usual environment variables.
-@xref{Users, , The User's View, gettext, GNU @code{gettext} utilities}.
-For example, the shell command @samp{export LC_ALL=fr_CA.UTF-8} might
-set the user's locale to French Canadian using the @acronym{UTF}-8
-encoding. The exact set of available locales depends on the user's
-installation.
+The easy way to do this is to use the option @samp{-p @var{prefix}}
+(@pxref{Invocation, ,Invoking Bison}). This renames the interface
+functions and variables of the Bison parser to start with @var{prefix}
+instead of @samp{yy}. You can use this to give each parser distinct
+names that do not conflict.
-The maintainer of a package that uses a Bison-generated parser enables
-the internationalization of the parser's output through the following
-steps. Here we assume a package that uses @acronym{GNU} Autoconf and
-@acronym{GNU} Automake.
+The precise list of symbols renamed is @code{yyparse}, @code{yylex},
+@code{yyerror}, @code{yynerrs}, @code{yylval}, @code{yylloc},
+@code{yychar} and @code{yydebug}. If you use a push parser,
+@code{yypush_parse}, @code{yypull_parse}, @code{yypstate},
+@code{yypstate_new} and @code{yypstate_delete} will also be renamed.
+For example, if you use @samp{-p c}, the names become @code{cparse},
+@code{clex}, and so on.
-@enumerate
-@item
-@cindex bison-i18n.m4
-Into the directory containing the @acronym{GNU} Autoconf macros used
-by the package---often called @file{m4}---copy the
-@file{bison-i18n.m4} file installed by Bison under
-@samp{share/aclocal/bison-i18n.m4} in Bison's installation directory.
-For example:
+@strong{None of the other variables and macros associated with Bison
+are renamed.} These others are not global; there is no conflict if the
+same name is used in different parsers. For example, @code{YYSTYPE} is not
+renamed, but defining this in different ways in different parsers causes
+no trouble (@pxref{Value Type, ,Data Types of Semantic Values}).
-@example
-cp /usr/local/share/aclocal/bison-i18n.m4 m4/bison-i18n.m4
-@end example
+The @samp{-p} option works by adding macro definitions to the
+beginning of the parser implementation file, defining @code{yyparse}
+as @code{@var{prefix}parse}, and so on. This effectively substitutes
+one name for the other in the entire parser implementation file.
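+
+For illustration only, the effect of @samp{-p c} is roughly equivalent
+to prepending definitions like these to the parser implementation file
+(a sketch; the exact set of definitions depends on the features your
+grammar uses):
+
+@example
+#define yyparse cparse
+#define yylex   clex
+#define yyerror cerror
+#define yylval  clval
+#define yylloc  clloc
+#define yychar  cchar
+#define yydebug cdebug
+#define yynerrs cnerrs
+@end example
+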
-@item
-@findex BISON_I18N
-@vindex BISON_LOCALEDIR
-@vindex YYENABLE_NLS
-In the top-level @file{configure.ac}, after the @code{AM_GNU_GETTEXT}
-invocation, add an invocation of @code{BISON_I18N}. This macro is
-defined in the file @file{bison-i18n.m4} that you copied earlier. It
-causes @samp{configure} to find the value of the
-@code{BISON_LOCALEDIR} variable, and it defines the source-language
-symbol @code{YYENABLE_NLS} to enable translations in the
-Bison-generated parser.
+@node Interface
+@chapter Parser C-Language Interface
+@cindex C-language interface
+@cindex interface
-@item
-In the @code{main} function of your program, designate the directory
-containing Bison's runtime message catalog, through a call to
-@samp{bindtextdomain} with domain name @samp{bison-runtime}.
-For example:
+The Bison parser is actually a C function named @code{yyparse}. Here we
+describe the interface conventions of @code{yyparse} and the other
+functions that it needs to use.
-@example
-bindtextdomain ("bison-runtime", BISON_LOCALEDIR);
-@end example
+Keep in mind that the parser uses many C identifiers starting with
+@samp{yy} and @samp{YY} for internal purposes. If you use such an
+identifier (aside from those in this manual) in an action or in the epilogue
+in the grammar file, you are likely to run into trouble.
-Typically this appears after any other call @code{bindtextdomain
-(PACKAGE, LOCALEDIR)} that your package already has. Here we rely on
-@samp{BISON_LOCALEDIR} to be defined as a string through the
-@file{Makefile}.
+@menu
+* Parser Function:: How to call @code{yyparse} and what it returns.
+* Push Parser Function:: How to call @code{yypush_parse} and what it returns.
+* Pull Parser Function:: How to call @code{yypull_parse} and what it returns.
+* Parser Create Function:: How to call @code{yypstate_new} and what it returns.
+* Parser Delete Function:: How to call @code{yypstate_delete} and what it returns.
+* Lexical:: You must supply a function @code{yylex}
+ which reads tokens.
+* Error Reporting:: You must supply a function @code{yyerror}.
+* Action Features:: Special features for use in actions.
+* Internationalization:: How to let the parser speak in the user's
+ native language.
+@end menu
-@item
-In the @file{Makefile.am} that controls the compilation of the @code{main}
-function, make @samp{BISON_LOCALEDIR} available as a C preprocessor macro,
-either in @samp{DEFS} or in @samp{AM_CPPFLAGS}. For example:
+@node Parser Function
+@section The Parser Function @code{yyparse}
+@findex yyparse
-@example
-DEFS = @@DEFS@@ -DBISON_LOCALEDIR='"$(BISON_LOCALEDIR)"'
-@end example
+You call the function @code{yyparse} to cause parsing to occur. This
+function reads tokens, executes actions, and ultimately returns when it
+encounters end-of-input or an unrecoverable syntax error. You can also
+write an action which directs @code{yyparse} to return immediately
+without reading further.
-or:
-@example
-AM_CPPFLAGS = -DBISON_LOCALEDIR='"$(BISON_LOCALEDIR)"'
-@end example
+@deftypefun int yyparse (void)
+The value returned by @code{yyparse} is 0 if parsing was successful (return
+is due to end-of-input).
-@item
-Finally, invoke the command @command{autoreconf} to generate the build
-infrastructure.
-@end enumerate
+The value is 1 if parsing failed because of invalid input, i.e., input
+that contains a syntax error or that causes @code{YYABORT} to be
+invoked.
+The value is 2 if parsing failed due to memory exhaustion.
+@end deftypefun
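+
+For instance, a minimal @code{main} (a sketch, not part of any Bison
+output) might act on these return values as follows:
+
+@example
+/* A sketch: acting on the value returned by yyparse.  */
+int yyparse (void);   /* Defined by Bison in the parser implementation file.  */
+
+int
+main (void)
+@{
+  switch (yyparse ())
+    @{
+    case 0:  /* Successful parse.  */
+      return 0;
+    case 1:  /* Invalid input, or YYABORT was invoked.  */
+      return 1;
+    case 2:  /* Memory exhaustion.  */
+      return 2;
+    @}
+  return 0;
+@}
+@end example
+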
-@node Algorithm
-@chapter The Bison Parser Algorithm
-@cindex Bison parser algorithm
-@cindex algorithm of parser
-@cindex shifting
-@cindex reduction
-@cindex parser stack
-@cindex stack, parser
+In an action, you can cause immediate return from @code{yyparse} by using
+these macros:
-As Bison reads tokens, it pushes them onto a stack along with their
-semantic values. The stack is called the @dfn{parser stack}. Pushing a
-token is traditionally called @dfn{shifting}.
+@defmac YYACCEPT
+@findex YYACCEPT
+Return immediately with value 0 (to report success).
+@end defmac
-For example, suppose the infix calculator has read @samp{1 + 5 *}, with a
-@samp{3} to come. The stack will have four elements, one for each token
-that was shifted.
+@defmac YYABORT
+@findex YYABORT
+Return immediately with value 1 (to report failure).
+@end defmac
-But the stack does not always have an element for each token read. When
-the last @var{n} tokens and groupings shifted match the components of a
-grammar rule, they can be combined according to that rule. This is called
-@dfn{reduction}. Those tokens and groupings are replaced on the stack by a
-single grouping whose symbol is the result (left hand side) of that rule.
-Running the rule's action is part of the process of reduction, because this
-is what computes the semantic value of the resulting grouping.
+If you use a reentrant parser, you can optionally pass additional
+parameter information to it in a reentrant way. To do so, use the
+declaration @code{%parse-param}:
-For example, if the infix calculator's parser stack contains this:
+@deffn {Directive} %parse-param @{@var{argument-declaration}@} @dots{}
+@findex %parse-param
+Declare that one or more
+@var{argument-declaration} are additional @code{yyparse} arguments.
+The @var{argument-declaration} is used when declaring
+functions or prototypes. The last identifier in
+@var{argument-declaration} must be the argument name.
+@end deffn
+
+Here's an example. Write this in the parser:
@example
-1 + 5 * 3
+%parse-param @{int *nastiness@} @{int *randomness@}
@end example
@noindent
-and the next input token is a newline character, then the last three
-elements can be reduced to 15 via the rule:
+Then call the parser like this:
@example
-expr: expr '*' expr;
+@{
+ int nastiness, randomness;
+ @dots{} /* @r{Store proper data in @code{nastiness} and @code{randomness}.} */
+ value = yyparse (&nastiness, &randomness);
+ @dots{}
+@}
@end example
@noindent
-Then the stack contains just these three elements:
+In the grammar actions, use expressions like this to refer to the data:
@example
-1 + 15
+exp: @dots{} @{ @dots{}; *randomness += 1; @dots{} @}
@end example
-@noindent
-At this point, another reduction can be made, resulting in the single value
-16. Then the newline token can be shifted.
+@node Push Parser Function
+@section The Push Parser Function @code{yypush_parse}
+@findex yypush_parse
-The parser tries, by shifts and reductions, to reduce the entire input down
-to a single grouping whose symbol is the grammar's start-symbol
-(@pxref{Language and Grammar, ,Languages and Context-Free Grammars}).
+(The current push parsing interface is experimental and may evolve.
+More user feedback will help to stabilize it.)
-This kind of parser is known in the literature as a bottom-up parser.
+You call the function @code{yypush_parse} to parse a single token. This
+function is available if either the @samp{%define api.push-pull push} or
+@samp{%define api.push-pull both} declaration is used.
+@xref{Push Decl, ,A Push Parser}.
-@menu
-* Lookahead:: Parser looks one token ahead when deciding what to do.
-* Shift/Reduce:: Conflicts: when either shifting or reduction is valid.
-* Precedence:: Operator precedence works by resolving conflicts.
-* Contextual Precedence:: When an operator's precedence depends on context.
-* Parser States:: The parser is a finite-state-machine with stack.
-* Reduce/Reduce:: When two rules are applicable in the same situation.
-* Mystery Conflicts:: Reduce/reduce conflicts that look unjustified.
-* Generalized LR Parsing:: Parsing arbitrary context-free grammars.
-* Memory Management:: What happens when memory is exhausted. How to avoid it.
-@end menu
+@deftypefun int yypush_parse (yypstate *yyps)
+The value returned by @code{yypush_parse} is the same as for
+@code{yyparse}, with one exception: @code{yypush_parse} returns
+@code{YYPUSH_MORE} if more input is required to finish parsing the grammar.
+@end deftypefun
-@node Lookahead
-@section Lookahead Tokens
-@cindex lookahead token
+@node Pull Parser Function
+@section The Pull Parser Function @code{yypull_parse}
+@findex yypull_parse
-The Bison parser does @emph{not} always reduce immediately as soon as the
-last @var{n} tokens and groupings match a rule. This is because such a
-simple strategy is inadequate to handle most languages. Instead, when a
-reduction is possible, the parser sometimes ``looks ahead'' at the next
-token in order to decide what to do.
+(The current push parsing interface is experimental and may evolve.
+More user feedback will help to stabilize it.)
-When a token is read, it is not immediately shifted; first it becomes the
-@dfn{lookahead token}, which is not on the stack. Now the parser can
-perform one or more reductions of tokens and groupings on the stack, while
-the lookahead token remains off to the side. When no more reductions
-should take place, the lookahead token is shifted onto the stack. This
-does not mean that all possible reductions have been done; depending on the
-token type of the lookahead token, some rules may choose to delay their
-application.
+You call the function @code{yypull_parse} to parse the rest of the input
+stream. This function is available if the @samp{%define api.push-pull both}
+declaration is used.
+@xref{Push Decl, ,A Push Parser}.
-Here is a simple case where lookahead is needed. These three rules define
-expressions which contain binary addition operators and postfix unary
-factorial operators (@samp{!}), and allow parentheses for grouping.
+@deftypefun int yypull_parse (yypstate *yyps)
+The value returned by @code{yypull_parse} is the same as for @code{yyparse}.
+@end deftypefun
-@example
-@group
-expr: term '+' expr
- | term
- ;
-@end group
+@node Parser Create Function
+@section The Parser Create Function @code{yypstate_new}
+@findex yypstate_new
-@group
-term: '(' expr ')'
- | term '!'
- | NUMBER
- ;
-@end group
-@end example
+(The current push parsing interface is experimental and may evolve.
+More user feedback will help to stabilize it.)
-Suppose that the tokens @w{@samp{1 + 2}} have been read and shifted; what
-should be done? If the following token is @samp{)}, then the first three
-tokens must be reduced to form an @code{expr}. This is the only valid
-course, because shifting the @samp{)} would produce a sequence of symbols
-@w{@code{term ')'}}, and no rule allows this.
+You call the function @code{yypstate_new} to create a new parser instance.
+This function is available if either the @samp{%define api.push-pull push} or
+@samp{%define api.push-pull both} declaration is used.
+@xref{Push Decl, ,A Push Parser}.
-If the following token is @samp{!}, then it must be shifted immediately so
-that @w{@samp{2 !}} can be reduced to make a @code{term}. If instead the
-parser were to reduce before shifting, @w{@samp{1 + 2}} would become an
-@code{expr}. It would then be impossible to shift the @samp{!} because
-doing so would produce on the stack the sequence of symbols @code{expr
-'!'}. No rule allows that sequence.
+@deftypefun yypstate *yypstate_new (void)
+The function returns a valid parser instance if memory was available,
+or 0 if no memory was available.
+In impure mode, it will also return 0 if a parser instance is currently
+allocated.
+@end deftypefun
-@vindex yychar
-@vindex yylval
-@vindex yylloc
-The lookahead token is stored in the variable @code{yychar}.
-Its semantic value and location, if any, are stored in the variables
-@code{yylval} and @code{yylloc}.
-@xref{Action Features, ,Special Features for Use in Actions}.
+@node Parser Delete Function
+@section The Parser Delete Function @code{yypstate_delete}
+@findex yypstate_delete
-@node Shift/Reduce
-@section Shift/Reduce Conflicts
-@cindex conflicts
-@cindex shift/reduce conflicts
-@cindex dangling @code{else}
-@cindex @code{else}, dangling
+(The current push parsing interface is experimental and may evolve.
+More user feedback will help to stabilize it.)
-Suppose we are parsing a language which has if-then and if-then-else
-statements, with a pair of rules like this:
+You call the function @code{yypstate_delete} to delete a parser instance.
+This function is available if either the @samp{%define api.push-pull push} or
+@samp{%define api.push-pull both} declaration is used.
+@xref{Push Decl, ,A Push Parser}.
-@example
-@group
-if_stmt:
- IF expr THEN stmt
- | IF expr THEN stmt ELSE stmt
- ;
-@end group
-@end example
+@deftypefun void yypstate_delete (yypstate *yyps)
+This function will reclaim the memory associated with a parser instance.
+After this call, you should no longer attempt to use the parser instance.
+@end deftypefun
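+
+Putting the push interface together, a driver loop might look like the
+following sketch. It assumes an impure C push parser in which
+@code{yypush_parse} also receives the current token kind and a pointer
+to its semantic value; the exact arguments depend on your grammar's
+declarations.
+
+@example
+/* A sketch, not generated code.  */
+int status;
+yypstate *ps = yypstate_new ();
+if (!ps)
+  return 2;                      /* No memory for a parser instance.  */
+do
+  status = yypush_parse (ps, yylex (), &yylval);
+while (status == YYPUSH_MORE);
+yypstate_delete (ps);
+/* status is now 0, 1, or 2, just as for yyparse.  */
+@end example
+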
-@noindent
-Here we assume that @code{IF}, @code{THEN} and @code{ELSE} are
-terminal symbols for specific keyword tokens.
+@node Lexical
+@section The Lexical Analyzer Function @code{yylex}
+@findex yylex
+@cindex lexical analyzer
-When the @code{ELSE} token is read and becomes the lookahead token, the
-contents of the stack (assuming the input is valid) are just right for
-reduction by the first rule. But it is also legitimate to shift the
-@code{ELSE}, because that would lead to eventual reduction by the second
-rule.
+The @dfn{lexical analyzer} function, @code{yylex}, recognizes tokens from
+the input stream and returns them to the parser. Bison does not create
+this function automatically; you must write it so that @code{yyparse} can
+call it. The function is sometimes referred to as a lexical scanner.
-This situation, where either a shift or a reduction would be valid, is
-called a @dfn{shift/reduce conflict}. Bison is designed to resolve
-these conflicts by choosing to shift, unless otherwise directed by
-operator precedence declarations. To see the reason for this, let's
-contrast it with the other alternative.
+In simple programs, @code{yylex} is often defined at the end of the
+Bison grammar file. If @code{yylex} is defined in a separate source
+file, you need to arrange for the token-type macro definitions to be
+available there. To do this, use the @samp{-d} option when you run
+Bison, so that it will write these macro definitions into the separate
+parser header file, @file{@var{name}.tab.h}, which you can include in
+the other source files that need it. @xref{Invocation, ,Invoking
+Bison}.
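+
+For example, a scanner kept in its own file might start like this
+sketch, where @file{calc.tab.h} stands for whatever header name Bison
+generates for your grammar:
+
+@example
+/* scanner.c -- sketch of a yylex defined outside the grammar file.  */
+#include "calc.tab.h"   /* Token-type macros and YYSTYPE.  */
+
+int
+yylex (void)
+@{
+  @dots{}
+@}
+@end example
+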
-Since the parser prefers to shift the @code{ELSE}, the result is to attach
-the else-clause to the innermost if-statement, making these two inputs
-equivalent:
+@menu
+* Calling Convention:: How @code{yyparse} calls @code{yylex}.
+* Token Values:: How @code{yylex} must return the semantic value
+ of the token it has read.
+* Token Locations:: How @code{yylex} must return the text location
+ (line number, etc.) of the token, if the
+ actions want that.
+* Pure Calling:: How the calling convention differs in a pure parser
+ (@pxref{Pure Decl, ,A Pure (Reentrant) Parser}).
+@end menu
-@example
-if x then if y then win (); else lose;
+@node Calling Convention
+@subsection Calling Convention for @code{yylex}
-if x then do; if y then win (); else lose; end;
-@end example
+The value that @code{yylex} returns must be the positive numeric code
+for the type of token it has just found; a zero or negative value
+signifies end-of-input.
-But if the parser chose to reduce when possible rather than shift, the
-result would be to attach the else-clause to the outermost if-statement,
-making these two inputs equivalent:
+When a token is referred to in the grammar rules by a name, that name
+in the parser implementation file becomes a C macro whose definition
+is the proper numeric code for that token type. So @code{yylex} can
+use the name to indicate that type. @xref{Symbols}.
-@example
-if x then if y then win (); else lose;
+When a token is referred to in the grammar rules by a character literal,
+the numeric code for that character is also the code for the token type.
+So @code{yylex} can simply return that character code, possibly converted
+to @code{unsigned char} to avoid sign-extension. The null character
+must not be used this way, because its code is zero and that
+signifies end-of-input.
-if x then do; if y then win (); end; else lose;
-@end example
+Here is an example showing these things:
-The conflict exists because the grammar as written is ambiguous: either
-parsing of the simple nested if-statement is legitimate. The established
-convention is that these ambiguities are resolved by attaching the
-else-clause to the innermost if-statement; this is what Bison accomplishes
-by choosing to shift rather than reduce. (It would ideally be cleaner to
-write an unambiguous grammar, but that is very hard to do in this case.)
-This particular ambiguity was first encountered in the specifications of
-Algol 60 and is called the ``dangling @code{else}'' ambiguity.
+@example
+int
+yylex (void)
+@{
+ @dots{}
+ if (c == EOF) /* Detect end-of-input. */
+ return 0;
+ @dots{}
+ if (c == '+' || c == '-')
+ return c; /* Assume token type for `+' is '+'. */
+ @dots{}
+ return INT; /* Return the type of the token. */
+ @dots{}
+@}
+@end example
-To avoid warnings from Bison about predictable, legitimate shift/reduce
-conflicts, use the @code{%expect @var{n}} declaration. There will be no
-warning as long as the number of shift/reduce conflicts is exactly @var{n}.
-@xref{Expect Decl, ,Suppressing Conflict Warnings}.
+@noindent
+This interface has been designed so that the output from the @code{lex}
+utility can be used without change as the definition of @code{yylex}.
-The definition of @code{if_stmt} above is solely to blame for the
-conflict, but the conflict does not actually appear without additional
-rules. Here is a complete Bison input file that actually manifests the
-conflict:
+If the grammar uses literal string tokens, there are two ways that
+@code{yylex} can determine the token type codes for them:
-@example
-@group
-%token IF THEN ELSE variable
-%%
-@end group
-@group
-stmt: expr
- | if_stmt
- ;
-@end group
+@itemize @bullet
+@item
+If the grammar defines symbolic token names as aliases for the
+literal string tokens, @code{yylex} can use these symbolic names like
+all others. In this case, the use of the literal string tokens in
+the grammar file has no effect on @code{yylex} (see the sketch after
+this list).
-@group
-if_stmt:
- IF expr THEN stmt
- | IF expr THEN stmt ELSE stmt
- ;
-@end group
+@item
+@code{yylex} can find the multicharacter token in the @code{yytname}
+table. The index of the token in the table is the token type's code.
+The name of a multicharacter token is recorded in @code{yytname} with a
+double-quote, the token's characters, and another double-quote. The
+token's characters are escaped as necessary to be suitable as input
+to Bison.
+
+Here's code for looking up a multicharacter token in @code{yytname},
+assuming that the characters of the token are stored in
+@code{token_buffer}, and assuming that the token does not contain any
+characters like @samp{"} that require escaping.
-expr: variable
- ;
+@example
+for (i = 0; i < YYNTOKENS; i++)
+ @{
+ if (yytname[i] != 0
+ && yytname[i][0] == '"'
+ && ! strncmp (yytname[i] + 1, token_buffer,
+ strlen (token_buffer))
+ && yytname[i][strlen (token_buffer) + 1] == '"'
+ && yytname[i][strlen (token_buffer) + 2] == 0)
+ break;
+ @}
@end example
-@node Precedence
-@section Operator Precedence
-@cindex operator precedence
-@cindex precedence of operators
+The @code{yytname} table is generated only if you use the
+@code{%token-table} declaration. @xref{Decl Summary}.
+@end itemize
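+
+To illustrate the first approach, here is a sketch (with invented token
+and rule names) of a grammar that gives a literal string token a
+symbolic alias:
+
+@example
+%token LE "<="
+%%
+exp: exp "<=" exp;
+@end example
+
+@noindent
+With this declaration, @code{yylex} can simply @code{return LE;} when
+it scans the two characters @samp{<=}.
+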
-Another situation where shift/reduce conflicts appear is in arithmetic
-expressions. Here shifting is not always the preferred resolution; the
-Bison declarations for operator precedence allow you to specify when to
-shift and when to reduce.
+@node Token Values
+@subsection Semantic Values of Tokens
-@menu
-* Why Precedence:: An example showing why precedence is needed.
-* Using Precedence:: How to specify precedence in Bison grammars.
-* Precedence Examples:: How these features are used in the previous example.
-* How Precedence:: How they work.
-@end menu
+@vindex yylval
+In an ordinary (nonreentrant) parser, the semantic value of the token must
+be stored into the global variable @code{yylval}. When you are using
+just one data type for semantic values, @code{yylval} has that type.
+Thus, if the type is @code{int} (the default), you might write this in
+@code{yylex}:
-@node Why Precedence
-@subsection When Precedence is Needed
+@example
+@group
+ @dots{}
+ yylval = value; /* Put value onto Bison stack. */
+ return INT; /* Return the type of the token. */
+ @dots{}
+@end group
+@end example
-Consider the following ambiguous grammar fragment (ambiguous because the
-input @w{@samp{1 - 2 * 3}} can be parsed in two different ways):
+When you are using multiple data types, @code{yylval}'s type is a union
+made from the @code{%union} declaration (@pxref{Union Decl, ,The
+Collection of Value Types}). So when you store a token's value, you
+must use the proper member of the union. If the @code{%union}
+declaration looks like this:
@example
@group
-expr: expr '-' expr
- | expr '*' expr
- | expr '<' expr
- | '(' expr ')'
- @dots{}
- ;
+%union @{
+ int intval;
+ double val;
+ symrec *tptr;
+@}
@end group
@end example
@noindent
-Suppose the parser has seen the tokens @samp{1}, @samp{-} and @samp{2};
-should it reduce them via the rule for the subtraction operator? It
-depends on the next token. Of course, if the next token is @samp{)}, we
-must reduce; shifting is invalid because no single rule can reduce the
-token sequence @w{@samp{- 2 )}} or anything starting with that. But if
-the next token is @samp{*} or @samp{<}, we have a choice: either
-shifting or reduction would allow the parse to complete, but with
-different results.
+then the code in @code{yylex} might look like this:
-To decide which one Bison should do, we must consider the results. If
-the next operator token @var{op} is shifted, then it must be reduced
-first in order to permit another opportunity to reduce the difference.
-The result is (in effect) @w{@samp{1 - (2 @var{op} 3)}}. On the other
-hand, if the subtraction is reduced before shifting @var{op}, the result
-is @w{@samp{(1 - 2) @var{op} 3}}. Clearly, then, the choice of shift or
-reduce should depend on the relative precedence of the operators
-@samp{-} and @var{op}: @samp{*} should be shifted first, but not
-@samp{<}.
+@example
+@group
+ @dots{}
+ yylval.intval = value; /* Put value onto Bison stack. */
+ return INT; /* Return the type of the token. */
+ @dots{}
+@end group
+@end example
-@cindex associativity
-What about input such as @w{@samp{1 - 2 - 5}}; should this be
-@w{@samp{(1 - 2) - 5}} or should it be @w{@samp{1 - (2 - 5)}}? For most
-operators we prefer the former, which is called @dfn{left association}.
-The latter alternative, @dfn{right association}, is desirable for
-assignment operators. The choice of left or right association is a
-matter of whether the parser chooses to shift or reduce when the stack
-contains @w{@samp{1 - 2}} and the lookahead token is @samp{-}: shifting
-makes right-associativity.
+@node Token Locations
+@subsection Textual Locations of Tokens
-@node Using Precedence
-@subsection Specifying Operator Precedence
-@findex %left
-@findex %right
-@findex %nonassoc
+@vindex yylloc
+If you are using the @samp{@@@var{n}}-feature (@pxref{Tracking Locations})
+in actions to keep track of the textual locations of tokens and groupings,
+then you must provide this information in @code{yylex}. The function
+@code{yyparse} expects to find the textual location of a token just parsed
+in the global variable @code{yylloc}. So @code{yylex} must store the proper
+data in that variable.
-Bison allows you to specify these choices with the operator precedence
-declarations @code{%left} and @code{%right}. Each such declaration
-contains a list of tokens, which are operators whose precedence and
-associativity is being declared. The @code{%left} declaration makes all
-those operators left-associative and the @code{%right} declaration makes
-them right-associative. A third alternative is @code{%nonassoc}, which
-declares that it is a syntax error to find the same operator twice ``in a
-row''.
+By default, the value of @code{yylloc} is a structure and you need only
+initialize the members that are going to be used by the actions. The
+four members are called @code{first_line}, @code{first_column},
+@code{last_line} and @code{last_column}. Note that the use of this
+feature makes the parser noticeably slower.
-The relative precedence of different operators is controlled by the
-order in which they are declared. The first @code{%left} or
-@code{%right} declaration in the file declares the operators whose
-precedence is lowest, the next such declaration declares the operators
-whose precedence is a little higher, and so on.
+@tindex YYLTYPE
+The data type of @code{yylloc} has the name @code{YYLTYPE}.
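+
+As an illustration (a sketch, not generated code), a @code{yylex} that
+keeps its own line and column counters might fill in @code{yylloc}
+before returning each token; here @code{line_number},
+@code{token_start_column} and @code{current_column} are hypothetical
+scanner bookkeeping:
+
+@example
+  yylloc.first_line = yylloc.last_line = line_number;
+  yylloc.first_column = token_start_column;
+  yylloc.last_column = current_column;
+  return INT;             /* Return the type of the token.  */
+@end example
+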
-@node Precedence Examples
-@subsection Precedence Examples
+@node Pure Calling
+@subsection Calling Conventions for Pure Parsers
-In our example, we would want the following declarations:
+When you use the Bison declaration @samp{%define api.pure} to request a
+pure, reentrant parser, the global communication variables @code{yylval}
+and @code{yylloc} cannot be used. (@xref{Pure Decl, ,A Pure (Reentrant)
+Parser}.) In such parsers the two global variables are replaced by
+pointers passed as arguments to @code{yylex}. You must declare them as
+shown here, and pass the information back by storing it through those
+pointers.
@example
-%left '<'
-%left '-'
-%left '*'
+int
+yylex (YYSTYPE *lvalp, YYLTYPE *llocp)
+@{
+ @dots{}
+ *lvalp = value; /* Put value onto Bison stack. */
+ return INT; /* Return the type of the token. */
+ @dots{}
+@}
@end example
-In a more complete example, which supports other operators as well, we
-would declare them in groups of equal precedence. For example, @code{'+'} is
-declared with @code{'-'}:
+If the grammar file does not use the @samp{@@} constructs to refer to
+textual locations, then the type @code{YYLTYPE} will not be defined. In
+this case, omit the second argument; @code{yylex} will be called with
+only one argument.
+
+If you wish to pass additional arguments to @code{yylex}, use
+@code{%lex-param} just like @code{%parse-param} (@pxref{Parser
+Function}). To pass additional arguments to both @code{yylex} and
+@code{yyparse}, use @code{%param}.
+
+@deffn {Directive} %lex-param @{@var{argument-declaration}@} @dots{}
+@findex %lex-param
+Specify that @var{argument-declaration} are additional @code{yylex} argument
+declarations. You may pass one or more such declarations, which is
+equivalent to repeating @code{%lex-param}.
+@end deffn
+
+@deffn {Directive} %param @{@var{argument-declaration}@} @dots{}
+@findex %param
+Specify that @var{argument-declaration} are additional
+@code{yylex}/@code{yyparse} argument declarations. This is equivalent to
+@samp{%lex-param @{@var{argument-declaration}@} @dots{} %parse-param
+@{@var{argument-declaration}@} @dots{}}. You may pass one or more
+declarations, which is equivalent to repeating @code{%param}.
+@end deffn
+
+For instance:
@example
-%left '<' '>' '=' NE LE GE
-%left '+' '-'
-%left '*' '/'
+%lex-param @{scanner_mode *mode@}
+%parse-param @{parser_mode *mode@}
+%param @{environment_type *env@}
@end example
@noindent
-(Here @code{NE} and so on stand for the operators for ``not equal''
-and so on. We assume that these tokens are more than one character long
-and therefore are represented by names, not character literals.)
-
-@node How Precedence
-@subsection How Precedence Works
+results in the following signature:
-The first effect of the precedence declarations is to assign precedence
-levels to the terminal symbols declared. The second effect is to assign
-precedence levels to certain rules: each rule gets its precedence from
-the last terminal symbol mentioned in the components. (You can also
-specify explicitly the precedence of a rule. @xref{Contextual
-Precedence, ,Context-Dependent Precedence}.)
+@example
+int yylex (scanner_mode *mode, environment_type *env);
+int yyparse (parser_mode *mode, environment_type *env);
+@end example
-Finally, the resolution of conflicts works by comparing the precedence
-of the rule being considered with that of the lookahead token. If the
-token's precedence is higher, the choice is to shift. If the rule's
-precedence is higher, the choice is to reduce. If they have equal
-precedence, the choice is made based on the associativity of that
-precedence level. The verbose output file made by @samp{-v}
-(@pxref{Invocation, ,Invoking Bison}) says how each conflict was
-resolved.
+If @samp{%define api.pure} is added:
-Not all rules and not all tokens have precedence. If either the rule or
-the lookahead token has no precedence, then the default is to shift.
+@example
+int yylex (YYSTYPE *lvalp, scanner_mode *mode, environment_type *env);
+int yyparse (parser_mode *mode, environment_type *env);
+@end example
-@node Contextual Precedence
-@section Context-Dependent Precedence
-@cindex context-dependent precedence
-@cindex unary operator precedence
-@cindex precedence, context-dependent
-@cindex precedence, unary operator
-@findex %prec
+@noindent
+and finally, if both @samp{%define api.pure} and @code{%locations} are used:
-Often the precedence of an operator depends on the context. This sounds
-outlandish at first, but it is really very common. For example, a minus
-sign typically has a very high precedence as a unary operator, and a
-somewhat lower precedence (lower than multiplication) as a binary operator.
+@example
+int yylex (YYSTYPE *lvalp, YYLTYPE *llocp,
+ scanner_mode *mode, environment_type *env);
+int yyparse (parser_mode *mode, environment_type *env);
+@end example
-The Bison precedence declarations, @code{%left}, @code{%right} and
-@code{%nonassoc}, can only be used once for a given token; so a token has
-only one precedence declared in this way. For context-dependent
-precedence, you need to use an additional mechanism: the @code{%prec}
-modifier for rules.
+@node Error Reporting
+@section The Error Reporting Function @code{yyerror}
+@cindex error reporting function
+@findex yyerror
+@cindex parse error
+@cindex syntax error
-The @code{%prec} modifier declares the precedence of a particular rule by
-specifying a terminal symbol whose precedence should be used for that rule.
-It's not necessary for that symbol to appear otherwise in the rule. The
-modifier's syntax is:
+The Bison parser detects a @dfn{syntax error} (or @dfn{parse error})
+whenever it reads a token which cannot satisfy any syntax rule. An
+action in the grammar can also explicitly proclaim an error, using the
+macro @code{YYERROR} (@pxref{Action Features, ,Special Features for Use
+in Actions}).
-@example
-%prec @var{terminal-symbol}
-@end example
+The Bison parser expects to report the error by calling an error
+reporting function named @code{yyerror}, which you must supply. It is
+called by @code{yyparse} whenever a syntax error is found, and it
+receives one argument. For a syntax error, the string is normally
+@w{@code{"syntax error"}}.
-@noindent
-and it is written after the components of the rule. Its effect is to
-assign the rule the precedence of @var{terminal-symbol}, overriding
-the precedence that would be deduced for it in the ordinary way. The
-altered rule precedence then affects how conflicts involving that rule
-are resolved (@pxref{Precedence, ,Operator Precedence}).
+@findex %define parse.error
+If you invoke @samp{%define parse.error verbose} in the Bison declarations
+section (@pxref{Bison Declarations, ,The Bison Declarations Section}), then
+Bison provides a more verbose and specific error message string instead of
+just plain @w{@code{"syntax error"}}. However, that message sometimes
+contains incorrect information if LAC is not enabled (@pxref{LAC}).
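+
+For example, with the following line in the declarations section:
+
+@example
+%define parse.error verbose
+@end example
+
+@noindent
+a report might read something like @samp{syntax error, unexpected '*',
+expecting NUM} instead of just @samp{syntax error} (the exact wording
+depends on the Bison version).
+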
-Here is how @code{%prec} solves the problem of unary minus. First, declare
-a precedence for a fictitious terminal symbol named @code{UMINUS}. There
-are no tokens of this type, but the symbol serves to stand for its
-precedence:
+The parser can detect one other kind of error: memory exhaustion. This
+can happen when the input contains constructions that are very deeply
+nested. It isn't likely you will encounter this, since the Bison
+parser normally extends its stack automatically up to a very large limit. But
+if memory is exhausted, @code{yyparse} calls @code{yyerror} in the usual
+fashion, except that the argument string is @w{@code{"memory exhausted"}}.
-@example
-@dots{}
-%left '+' '-'
-%left '*'
-%left UMINUS
-@end example
+In some cases diagnostics like @w{@code{"syntax error"}} are
+translated automatically from English to some other language before
+they are passed to @code{yyerror}. @xref{Internationalization}.
-Now the precedence of @code{UMINUS} can be used in specific rules:
+The following definition suffices in simple programs:
@example
@group
-exp: @dots{}
- | exp '-' exp
- @dots{}
- | '-' exp %prec UMINUS
+void
+yyerror (char const *s)
+@{
+@end group
+@group
+ fprintf (stderr, "%s\n", s);
+@}
@end group
@end example
-@ifset defaultprec
-If you forget to append @code{%prec UMINUS} to the rule for unary
-minus, Bison silently assumes that minus has its usual precedence.
-This kind of problem can be tricky to debug, since one typically
-discovers the mistake only by testing the code.
-
-The @code{%no-default-prec;} declaration makes it easier to discover
-this kind of problem systematically. It causes rules that lack a
-@code{%prec} modifier to have no precedence, even if the last terminal
-symbol mentioned in their components has a declared precedence.
-
-If @code{%no-default-prec;} is in effect, you must specify @code{%prec}
-for all rules that participate in precedence conflict resolution.
-Then you will see any shift/reduce conflict until you tell Bison how
-to resolve it, either by changing your grammar or by adding an
-explicit precedence. This will probably add declarations to the
-grammar, but it helps to protect against incorrect rule precedences.
-
-The effect of @code{%no-default-prec;} can be reversed by giving
-@code{%default-prec;}, which is the default.
-@end ifset
-
-@node Parser States
-@section Parser States
-@cindex finite-state machine
-@cindex parser state
-@cindex state (of parser)
-
-The function @code{yyparse} is implemented using a finite-state machine.
-The values pushed on the parser stack are not simply token type codes; they
-represent the entire sequence of terminal and nonterminal symbols at or
-near the top of the stack. The current state collects all the information
-about previous input which is relevant to deciding what to do next.
+After @code{yyerror} returns to @code{yyparse}, the latter will attempt
+error recovery if you have written suitable error recovery grammar rules
+(@pxref{Error Recovery}). If recovery is impossible, @code{yyparse} will
+immediately return 1.
-Each time a lookahead token is read, the current parser state together
-with the type of lookahead token are looked up in a table. This table
-entry can say, ``Shift the lookahead token.'' In this case, it also
-specifies the new parser state, which is pushed onto the top of the
-parser stack. Or it can say, ``Reduce using rule number @var{n}.''
-This means that a certain number of tokens or groupings are taken off
-the top of the stack, and replaced by one grouping. In other words,
-that number of states are popped from the stack, and one new state is
-pushed.
+Obviously, in location-tracking pure parsers, @code{yyerror} should
+have access to the current location.
+This is indeed the case for the GLR
+parsers, but not for the Yacc parser, for historical reasons. Thus, if
+@samp{%locations %define api.pure} is passed, the prototypes for
+@code{yyerror} are:
-There is one other alternative: the table can say that the lookahead token
-is erroneous in the current state. This causes error processing to begin
-(@pxref{Error Recovery}).
+@example
+void yyerror (char const *msg); /* Yacc parsers. */
+void yyerror (YYLTYPE *locp, char const *msg); /* GLR parsers. */
+@end example
-@node Reduce/Reduce
-@section Reduce/Reduce Conflicts
-@cindex reduce/reduce conflict
-@cindex conflicts, reduce/reduce
+If @samp{%parse-param @{int *nastiness@}} is used, then:
-A reduce/reduce conflict occurs if there are two or more rules that apply
-to the same sequence of input. This usually indicates a serious error
-in the grammar.
+@example
+void yyerror (int *nastiness, char const *msg); /* Yacc parsers. */
+void yyerror (int *nastiness, char const *msg); /* GLR parsers. */
+@end example
-For example, here is an erroneous attempt to define a sequence
-of zero or more @code{word} groupings.
+Finally, GLR and Yacc parsers share the same @code{yyerror} calling
+convention for absolutely pure parsers, i.e., when the calling
+convention of @code{yylex} @emph{and} the calling convention requested
+by @samp{%define api.pure} are both pure.
+For example:
@example
-sequence: /* empty */
- @{ printf ("empty sequence\n"); @}
- | maybeword
- | sequence word
- @{ printf ("added word %s\n", $2); @}
- ;
-
-maybeword: /* empty */
- @{ printf ("empty maybeword\n"); @}
- | word
- @{ printf ("single word %s\n", $1); @}
- ;
+/* Location tracking. */
+%locations
+/* Pure yylex. */
+%define api.pure
+%lex-param @{int *nastiness@}
+/* Pure yyparse. */
+%parse-param @{int *nastiness@}
+%parse-param @{int *randomness@}
@end example
@noindent
-The error is an ambiguity: there is more than one way to parse a single
-@code{word} into a @code{sequence}. It could be reduced to a
-@code{maybeword} and then into a @code{sequence} via the second rule.
-Alternatively, nothing-at-all could be reduced into a @code{sequence}
-via the first rule, and this could be combined with the @code{word}
-using the third rule for @code{sequence}.
-
-There is also more than one way to reduce nothing-at-all into a
-@code{sequence}. This can be done directly via the first rule,
-or indirectly via @code{maybeword} and then the second rule.
-
-You might think that this is a distinction without a difference, because it
-does not change whether any particular input is valid or not. But it does
-affect which actions are run. One parsing order runs the second rule's
-action; the other runs the first rule's action and the third rule's action.
-In this example, the output of the program changes.
-
-Bison resolves a reduce/reduce conflict by choosing to use the rule that
-appears first in the grammar, but it is very risky to rely on this. Every
-reduce/reduce conflict must be studied and usually eliminated. Here is the
-proper way to define @code{sequence}:
+results in the following signatures for all the parser kinds:
@example
-sequence: /* empty */
- @{ printf ("empty sequence\n"); @}
- | sequence word
- @{ printf ("added word %s\n", $2); @}
- ;
+int yylex (YYSTYPE *lvalp, YYLTYPE *llocp, int *nastiness);
+int yyparse (int *nastiness, int *randomness);
+void yyerror (YYLTYPE *locp,
+ int *nastiness, int *randomness,
+ char const *msg);
@end example
-Here is another common error that yields a reduce/reduce conflict:
-
-@example
-sequence: /* empty */
- | sequence words
- | sequence redirects
- ;
+@noindent
+The prototypes are only indications of how the code produced by Bison
+uses @code{yyerror}. Bison-generated code always ignores the returned
+value, so @code{yyerror} can return any type, including @code{void}.
+Also, @code{yyerror} can be a variadic function; that is why the
+message is always passed last.
-words: /* empty */
- | words word
- ;
+Traditionally @code{yyerror} returns an @code{int} that is always
+ignored, but this is purely for historical reasons, and @code{void} is
+preferable since it more accurately describes the return type for
+@code{yyerror}.
-redirects:/* empty */
- | redirects redirect
- ;
-@end example
+@vindex yynerrs
+The variable @code{yynerrs} contains the number of syntax errors
+reported so far. Normally this variable is global; but if you
+request a pure parser (@pxref{Pure Decl, ,A Pure (Reentrant) Parser})
+then it is a local variable which only the actions can access.
-@noindent
-The intention here is to define a sequence which can contain either
-@code{word} or @code{redirect} groupings. The individual definitions of
-@code{sequence}, @code{words} and @code{redirects} are error-free, but the
-three together make a subtle ambiguity: even an empty input can be parsed
-in infinitely many ways!
+@node Action Features
+@section Special Features for Use in Actions
+@cindex summary, action features
+@cindex action features summary
-Consider: nothing-at-all could be a @code{words}. Or it could be two
-@code{words} in a row, or three, or any number. It could equally well be a
-@code{redirects}, or two, or any number. Or it could be a @code{words}
-followed by three @code{redirects} and another @code{words}. And so on.
+Here is a table of Bison constructs, variables and macros that
+are useful in actions.
-Here are two ways to correct these rules. First, to make it a single level
-of sequence:
+@deffn {Variable} $$
+Acts like a variable that contains the semantic value for the
+grouping made by the current rule. @xref{Actions}.
+@end deffn
-@example
-sequence: /* empty */
- | sequence word
- | sequence redirect
- ;
-@end example
+@deffn {Variable} $@var{n}
+Acts like a variable that contains the semantic value for the
+@var{n}th component of the current rule. @xref{Actions}.
+@end deffn
-Second, to prevent either a @code{words} or a @code{redirects}
-from being empty:
+@deffn {Variable} $<@var{typealt}>$
+Like @code{$$} but specifies alternative @var{typealt} in the union
+specified by the @code{%union} declaration. @xref{Action Types, ,Data
+Types of Values in Actions}.
+@end deffn
-@example
-sequence: /* empty */
- | sequence words
- | sequence redirects
- ;
+@deffn {Variable} $<@var{typealt}>@var{n}
+Like @code{$@var{n}} but specifies alternative @var{typealt} in the
+union specified by the @code{%union} declaration.
+@xref{Action Types, ,Data Types of Values in Actions}.
+@end deffn
-words: word
- | words word
- ;
+@deffn {Macro} YYABORT;
+Return immediately from @code{yyparse}, indicating failure.
+@xref{Parser Function, ,The Parser Function @code{yyparse}}.
+@end deffn
-redirects:redirect
- | redirects redirect
- ;
-@end example
+@deffn {Macro} YYACCEPT;
+Return immediately from @code{yyparse}, indicating success.
+@xref{Parser Function, ,The Parser Function @code{yyparse}}.
+@end deffn
-@node Mystery Conflicts
-@section Mysterious Reduce/Reduce Conflicts
+@deffn {Macro} YYBACKUP (@var{token}, @var{value});
+@findex YYBACKUP
+Unshift a token. This macro is allowed only for rules that reduce
+a single value, and only when there is no lookahead token.
+It is also disallowed in GLR parsers.
+It installs a lookahead token with token type @var{token} and
+semantic value @var{value}; then it discards the value that was
+going to be reduced by this rule.
-Sometimes reduce/reduce conflicts can occur that don't look warranted.
-Here is an example:
+If the macro is used when it is not valid, such as when there is
+a lookahead token already, then it reports a syntax error with
+a message @samp{cannot back up} and performs ordinary error
+recovery.
-@example
-@group
-%token ID
+In either case, the rest of the action is not executed.
+@end deffn
-%%
-def: param_spec return_spec ','
- ;
-param_spec:
- type
- | name_list ':' type
- ;
+@deffn {Macro} YYEMPTY
+@vindex YYEMPTY
+Value stored in @code{yychar} when there is no lookahead token.
+@end deffn
+
+@deffn {Macro} YYEOF
+@vindex YYEOF
+Value stored in @code{yychar} when the lookahead is the end of the input
+stream.
+@end deffn
+
+@deffn {Macro} YYERROR;
+@findex YYERROR
+Cause an immediate syntax error. This statement initiates error
+recovery just as if the parser itself had detected an error; however, it
+does not call @code{yyerror}, and does not print any message. If you
+want to print an error message, call @code{yyerror} explicitly before
+the @samp{YYERROR;} statement. @xref{Error Recovery}.
+@end deffn
+
+@deffn {Macro} YYRECOVERING
+@findex YYRECOVERING
+The expression @code{YYRECOVERING ()} yields 1 when the parser
+is recovering from a syntax error, and 0 otherwise.
+@xref{Error Recovery}.
+@end deffn
+
+@deffn {Variable} yychar
+Variable containing either the lookahead token, or @code{YYEOF} when the
+lookahead is the end of the input stream, or @code{YYEMPTY} when no lookahead
+has been performed so the next token is not yet known.
+Do not modify @code{yychar} in a deferred semantic action (@pxref{GLR Semantic
+Actions}).
+@xref{Lookahead, ,Lookahead Tokens}.
+@end deffn
+
+@deffn {Macro} yyclearin;
+Discard the current lookahead token. This is useful primarily in
+error rules.
+Do not invoke @code{yyclearin} in a deferred semantic action (@pxref{GLR
+Semantic Actions}).
+@xref{Error Recovery}.
+@end deffn
+
+@deffn {Macro} yyerrok;
+Resume generating error messages immediately for subsequent syntax
+errors. This is useful primarily in error rules.
+@xref{Error Recovery}.
+@end deffn
+
+@deffn {Variable} yylloc
+Variable containing the lookahead token location when @code{yychar} is not set
+to @code{YYEMPTY} or @code{YYEOF}.
+Do not modify @code{yylloc} in a deferred semantic action (@pxref{GLR Semantic
+Actions}).
+@xref{Actions and Locations, ,Actions and Locations}.
+@end deffn
+
+@deffn {Variable} yylval
+Variable containing the lookahead token semantic value when @code{yychar} is
+not set to @code{YYEMPTY} or @code{YYEOF}.
+Do not modify @code{yylval} in a deferred semantic action (@pxref{GLR Semantic
+Actions}).
+@xref{Actions, ,Actions}.
+@end deffn
+
+@deffn {Value} @@$
+@findex @@$
+Acts like a structure variable containing information on the textual
+location of the grouping made by the current rule. @xref{Tracking
+Locations}.
+
+@c Check if those paragraphs are still useful or not.
+
+@c @example
+@c struct @{
+@c int first_line, last_line;
+@c int first_column, last_column;
+@c @};
+@c @end example
+
+@c Thus, to get the starting line number of the third component, you would
+@c use @samp{@@3.first_line}.
+
+@c In order for the members of this structure to contain valid information,
+@c you must make @code{yylex} supply this information about each token.
+@c If you need only certain members, then @code{yylex} need only fill in
+@c those members.
+
+@c The use of this feature makes the parser noticeably slower.
+@end deffn
+
+@deffn {Value} @@@var{n}
+@findex @@@var{n}
+Acts like a structure variable containing information on the textual
+location of the @var{n}th component of the current rule. @xref{Tracking
+Locations}.
+@end deffn
+
+@node Internationalization
+@section Parser Internationalization
+@cindex internationalization
+@cindex i18n
+@cindex NLS
+@cindex gettext
+@cindex bison-po
+
+A Bison-generated parser can print diagnostics, including error and
+tracing messages. By default, they appear in English. However, Bison
+also supports outputting diagnostics in the user's native language. To
+make this work, the user should set the usual environment variables.
+@xref{Users, , The User's View, gettext, GNU @code{gettext} utilities}.
+For example, the shell command @samp{export LC_ALL=fr_CA.UTF-8} might
+set the user's locale to French Canadian using the UTF-8
+encoding. The exact set of available locales depends on the user's
+installation.
+
+The maintainer of a package that uses a Bison-generated parser enables
+the internationalization of the parser's output through the following
+steps. Here we assume a package that uses GNU Autoconf and
+GNU Automake.
+
+@enumerate
+@item
+@cindex bison-i18n.m4
+Into the directory containing the GNU Autoconf macros used
+by the package---often called @file{m4}---copy the
+@file{bison-i18n.m4} file installed by Bison under
+@samp{share/aclocal/bison-i18n.m4} in Bison's installation directory.
+For example:
+
+@example
+cp /usr/local/share/aclocal/bison-i18n.m4 m4/bison-i18n.m4
+@end example
+
+@item
+@findex BISON_I18N
+@vindex BISON_LOCALEDIR
+@vindex YYENABLE_NLS
+In the top-level @file{configure.ac}, after the @code{AM_GNU_GETTEXT}
+invocation, add an invocation of @code{BISON_I18N}. This macro is
+defined in the file @file{bison-i18n.m4} that you copied earlier. It
+causes @samp{configure} to find the value of the
+@code{BISON_LOCALEDIR} variable, and it defines the source-language
+symbol @code{YYENABLE_NLS} to enable translations in the
+Bison-generated parser.
+
+@item
+In the @code{main} function of your program, designate the directory
+containing Bison's runtime message catalog, through a call to
+@samp{bindtextdomain} with domain name @samp{bison-runtime}.
+For example:
+
+@example
+bindtextdomain ("bison-runtime", BISON_LOCALEDIR);
+@end example
+
+Typically this appears after any other call @code{bindtextdomain
+(PACKAGE, LOCALEDIR)} that your package already has. Here we rely on
+@samp{BISON_LOCALEDIR} to be defined as a string through the
+@file{Makefile}.
+
+@item
+In the @file{Makefile.am} that controls the compilation of the @code{main}
+function, make @samp{BISON_LOCALEDIR} available as a C preprocessor macro,
+either in @samp{DEFS} or in @samp{AM_CPPFLAGS}. For example:
+
+@example
+DEFS = @@DEFS@@ -DBISON_LOCALEDIR='"$(BISON_LOCALEDIR)"'
+@end example
+
+or:
+
+@example
+AM_CPPFLAGS = -DBISON_LOCALEDIR='"$(BISON_LOCALEDIR)"'
+@end example
+
+@item
+Finally, invoke the command @command{autoreconf} to generate the build
+infrastructure.
+@end enumerate
+
+
+@node Algorithm
+@chapter The Bison Parser Algorithm
+@cindex Bison parser algorithm
+@cindex algorithm of parser
+@cindex shifting
+@cindex reduction
+@cindex parser stack
+@cindex stack, parser
+
+As Bison reads tokens, it pushes them onto a stack along with their
+semantic values. The stack is called the @dfn{parser stack}. Pushing a
+token is traditionally called @dfn{shifting}.
+
+For example, suppose the infix calculator has read @samp{1 + 5 *}, with a
+@samp{3} to come. The stack will have four elements, one for each token
+that was shifted.
+
+But the stack does not always have an element for each token read. When
+the last @var{n} tokens and groupings shifted match the components of a
+grammar rule, they can be combined according to that rule. This is called
+@dfn{reduction}. Those tokens and groupings are replaced on the stack by a
+single grouping whose symbol is the result (left hand side) of that rule.
+Running the rule's action is part of the process of reduction, because this
+is what computes the semantic value of the resulting grouping.
+
+For example, if the infix calculator's parser stack contains this:
+
+@example
+1 + 5 * 3
+@end example
+
+@noindent
+and the next input token is a newline character, then the last three
+elements can be reduced to 15 via the rule:
+
+@example
+expr: expr '*' expr;
+@end example
+
+@noindent
+Then the stack contains just these three elements:
+
+@example
+1 + 15
+@end example
+
+@noindent
+At this point, another reduction can be made, resulting in the single value
+16. Then the newline token can be shifted.
+
+The parser tries, by shifts and reductions, to reduce the entire input down
+to a single grouping whose symbol is the grammar's start-symbol
+(@pxref{Language and Grammar, ,Languages and Context-Free Grammars}).
+
+This kind of parser is known in the literature as a bottom-up parser.
+
+@menu
+* Lookahead:: Parser looks one token ahead when deciding what to do.
+* Shift/Reduce:: Conflicts: when either shifting or reduction is valid.
+* Precedence:: Operator precedence works by resolving conflicts.
+* Contextual Precedence:: When an operator's precedence depends on context.
+* Parser States:: The parser is a finite-state-machine with stack.
+* Reduce/Reduce:: When two rules are applicable in the same situation.
+* Mysterious Conflicts:: Conflicts that look unjustified.
+* Tuning LR:: How to tune fundamental aspects of LR-based parsing.
+* Generalized LR Parsing:: Parsing arbitrary context-free grammars.
+* Memory Management:: What happens when memory is exhausted. How to avoid it.
+@end menu
+
+@node Lookahead
+@section Lookahead Tokens
+@cindex lookahead token
+
+The Bison parser does @emph{not} always reduce immediately as soon as the
+last @var{n} tokens and groupings match a rule. This is because such a
+simple strategy is inadequate to handle most languages. Instead, when a
+reduction is possible, the parser sometimes ``looks ahead'' at the next
+token in order to decide what to do.
+
+When a token is read, it is not immediately shifted; first it becomes the
+@dfn{lookahead token}, which is not on the stack. Now the parser can
+perform one or more reductions of tokens and groupings on the stack, while
+the lookahead token remains off to the side. When no more reductions
+should take place, the lookahead token is shifted onto the stack. This
+does not mean that all possible reductions have been done; depending on the
+token type of the lookahead token, some rules may choose to delay their
+application.
+
+Here is a simple case where lookahead is needed. These three rules define
+expressions which contain binary addition operators and postfix unary
+factorial operators (@samp{!}), and allow parentheses for grouping.
+
+@example
+@group
+expr:
+ term '+' expr
+| term
+;
+@end group
+
+@group
+term:
+ '(' expr ')'
+| term '!'
+| NUMBER
+;
+@end group
+@end example
+
+Suppose that the tokens @w{@samp{1 + 2}} have been read and shifted; what
+should be done? If the following token is @samp{)}, then the first three
+tokens must be reduced to form an @code{expr}. This is the only valid
+course, because shifting the @samp{)} would produce a sequence of symbols
+@w{@code{term ')'}}, and no rule allows this.
+
+If the following token is @samp{!}, then it must be shifted immediately so
+that @w{@samp{2 !}} can be reduced to make a @code{term}. If instead the
+parser were to reduce before shifting, @w{@samp{1 + 2}} would become an
+@code{expr}. It would then be impossible to shift the @samp{!} because
+doing so would produce on the stack the sequence of symbols @code{expr
+'!'}. No rule allows that sequence.
+
+@vindex yychar
+@vindex yylval
+@vindex yylloc
+The lookahead token is stored in the variable @code{yychar}.
+Its semantic value and location, if any, are stored in the variables
+@code{yylval} and @code{yylloc}.
+@xref{Action Features, ,Special Features for Use in Actions}.
+
+@node Shift/Reduce
+@section Shift/Reduce Conflicts
+@cindex conflicts
+@cindex shift/reduce conflicts
+@cindex dangling @code{else}
+@cindex @code{else}, dangling
+
+Suppose we are parsing a language which has if-then and if-then-else
+statements, with a pair of rules like this:
+
+@example
+@group
+if_stmt:
+ IF expr THEN stmt
+| IF expr THEN stmt ELSE stmt
+;
+@end group
+@end example
+
+@noindent
+Here we assume that @code{IF}, @code{THEN} and @code{ELSE} are
+terminal symbols for specific keyword tokens.
+
+When the @code{ELSE} token is read and becomes the lookahead token, the
+contents of the stack (assuming the input is valid) are just right for
+reduction by the first rule. But it is also legitimate to shift the
+@code{ELSE}, because that would lead to eventual reduction by the second
+rule.
+
+This situation, where either a shift or a reduction would be valid, is
+called a @dfn{shift/reduce conflict}. Bison is designed to resolve
+these conflicts by choosing to shift, unless otherwise directed by
+operator precedence declarations. To see the reason for this, let's
+contrast it with the other alternative.
+
+Since the parser prefers to shift the @code{ELSE}, the result is to attach
+the else-clause to the innermost if-statement, making these two inputs
+equivalent:
+
+@example
+if x then if y then win (); else lose;
+
+if x then do; if y then win (); else lose; end;
+@end example
+
+But if the parser chose to reduce when possible rather than shift, the
+result would be to attach the else-clause to the outermost if-statement,
+making these two inputs equivalent:
+
+@example
+if x then if y then win (); else lose;
+
+if x then do; if y then win (); end; else lose;
+@end example
+
+The conflict exists because the grammar as written is ambiguous: either
+parsing of the simple nested if-statement is legitimate. The established
+convention is that these ambiguities are resolved by attaching the
+else-clause to the innermost if-statement; this is what Bison accomplishes
+by choosing to shift rather than reduce. (It would ideally be cleaner to
+write an unambiguous grammar, but that is very hard to do in this case.)
+This particular ambiguity was first encountered in the specifications of
+Algol 60 and is called the ``dangling @code{else}'' ambiguity.
+
+To avoid warnings from Bison about predictable, legitimate shift/reduce
+conflicts, use the @code{%expect @var{n}} declaration.
+There will be no warning as long as the number of shift/reduce conflicts
+is exactly @var{n}, and Bison will report an error if there is a
+different number.
+@xref{Expect Decl, ,Suppressing Conflict Warnings}.
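+
+For instance, for a grammar with a single expected shift/reduce
+conflict, such as the dangling @code{else} grammar below, the
+declarations section might contain:
+
+@example
+%expect 1
+@end example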
+
+The definition of @code{if_stmt} above is solely to blame for the
+conflict, but the conflict does not actually appear without additional
+rules. Here is a complete Bison grammar file that actually manifests
+the conflict:
+
+@example
+@group
+%token IF THEN ELSE variable
+%%
+@end group
+@group
+stmt:
+ expr
+| if_stmt
+;
+@end group
+
+@group
+if_stmt:
+ IF expr THEN stmt
+| IF expr THEN stmt ELSE stmt
+;
+@end group
+
+expr:
+ variable
+;
+@end example
+
+@node Precedence
+@section Operator Precedence
+@cindex operator precedence
+@cindex precedence of operators
+
+Another situation where shift/reduce conflicts appear is in arithmetic
+expressions. Here shifting is not always the preferred resolution; the
+Bison declarations for operator precedence allow you to specify when to
+shift and when to reduce.
+
+@menu
+* Why Precedence:: An example showing why precedence is needed.
+* Using Precedence:: How to specify precedence and associativity.
+* Precedence Only:: How to specify precedence only.
+* Precedence Examples:: How these features are used in the previous example.
+* How Precedence:: How they work.
+@end menu
+
+@node Why Precedence
+@subsection When Precedence is Needed
+
+Consider the following ambiguous grammar fragment (ambiguous because the
+input @w{@samp{1 - 2 * 3}} can be parsed in two different ways):
+
+@example
+@group
+expr:
+ expr '-' expr
+| expr '*' expr
+| expr '<' expr
+| '(' expr ')'
+@dots{}
+;
+@end group
+@end example
+
+@noindent
+Suppose the parser has seen the tokens @samp{1}, @samp{-} and @samp{2};
+should it reduce them via the rule for the subtraction operator? It
+depends on the next token. Of course, if the next token is @samp{)}, we
+must reduce; shifting is invalid because no single rule can reduce the
+token sequence @w{@samp{- 2 )}} or anything starting with that. But if
+the next token is @samp{*} or @samp{<}, we have a choice: either
+shifting or reduction would allow the parse to complete, but with
+different results.
+
+To decide which one Bison should do, we must consider the results. If
+the next operator token @var{op} is shifted, then it must be reduced
+first in order to permit another opportunity to reduce the difference.
+The result is (in effect) @w{@samp{1 - (2 @var{op} 3)}}. On the other
+hand, if the subtraction is reduced before shifting @var{op}, the result
+is @w{@samp{(1 - 2) @var{op} 3}}. Clearly, then, the choice of shift or
+reduce should depend on the relative precedence of the operators
+@samp{-} and @var{op}: @samp{*} should be shifted first, but not
+@samp{<}.
+
+@cindex associativity
+What about input such as @w{@samp{1 - 2 - 5}}; should this be
+@w{@samp{(1 - 2) - 5}} or should it be @w{@samp{1 - (2 - 5)}}? For most
+operators we prefer the former, which is called @dfn{left association}.
+The latter alternative, @dfn{right association}, is desirable for
+assignment operators. The choice of left or right association is a
+matter of whether the parser chooses to shift or reduce when the stack
+contains @w{@samp{1 - 2}} and the lookahead token is @samp{-}: shifting
+yields right association, while reducing yields left association.
+
+@node Using Precedence
+@subsection Specifying Operator Precedence
+@findex %left
+@findex %nonassoc
+@findex %precedence
+@findex %right
+
+Bison allows you to specify these choices with the operator precedence
+declarations @code{%left} and @code{%right}. Each such declaration
+contains a list of tokens, which are operators whose precedence and
+associativity is being declared. The @code{%left} declaration makes all
+those operators left-associative and the @code{%right} declaration makes
+them right-associative. A third alternative is @code{%nonassoc}, which
+declares that it is a syntax error to find the same operator twice ``in a
+row''.
+The last alternative, @code{%precedence}, allows you to define only
+precedence and no associativity at all. As a result, any
+associativity-related conflict that remains is reported as a
+compile-time error. The directive @code{%nonassoc} creates a run-time
+error: using the operator in an associative way is a syntax error. The
+directive @code{%precedence} creates compile-time errors: an operator
+@emph{can} be involved in an associativity-related conflict, contrary
+to what the grammar author expected.
+
+The relative precedence of different operators is controlled by the
+order in which they are declared. The first precedence/associativity
+declaration in the file declares the operators whose
+precedence is lowest, the next such declaration declares the operators
+whose precedence is a little higher, and so on.
+
+@node Precedence Only
+@subsection Specifying Precedence Only
+@findex %precedence
+
+Since POSIX Yacc defines only @code{%left}, @code{%right}, and
+@code{%nonassoc}, which all define both precedence and associativity,
+little attention is paid to the fact that precedence cannot be defined
+without defining associativity. Yet, sometimes, when trying to solve a
+conflict, precedence suffices. In such a case, using @code{%left},
+@code{%right}, or @code{%nonassoc} might hide future
+(associativity-related) conflicts.
+
+The dangling @code{else} ambiguity (@pxref{Shift/Reduce, , Shift/Reduce
+Conflicts}) can be solved explicitly. This shift/reduce conflict occurs
+in the following situation, where the period denotes the current parsing
+state:
+
+@example
+if @var{e1} then if @var{e2} then @var{s1} . else @var{s2}
+@end example
+
+The conflict involves the reduction of the rule @samp{IF expr THEN
+stmt}, whose precedence is by default that of its last token
+(@code{THEN}), and the shifting of the token @code{ELSE}. For the usual
+disambiguation (attaching the @code{else} to the closest @code{if}),
+shifting must be preferred, i.e., the precedence of @code{ELSE} must be
+higher than that of @code{THEN}. But neither token is expected to be
+involved in an associativity-related conflict, which can be specified as
+follows.
+
+@example
+%precedence THEN
+%precedence ELSE
+@end example
+
+Unary minus is another typical example where associativity is usually
+over-specified; see @ref{Infix Calc, , Infix Notation
+Calculator: @code{calc}}. The @code{%left} directive is traditionally
+used to declare the precedence of @code{NEG}, which is more than needed
+since it also defines its associativity. While this is harmless in the
+traditional example, who knows how @code{NEG} might be used in future
+evolutions of the grammar@dots{}
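+
+As a minimal sketch (not the declarations actually used in the
+calculator example), the same effect could be obtained with
+@code{%precedence}, granting unary minus a high precedence without
+committing to any associativity:
+
+@example
+%left '+' '-'
+%left '*' '/'
+%precedence NEG   /* precedence only, no associativity */
+@end example
+
+@noindent
+with the corresponding rule still reading @samp{'-' exp %prec NEG}.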
+
+@node Precedence Examples
+@subsection Precedence Examples
+
+In our example, we would want the following declarations:
+
+@example
+%left '<'
+%left '-'
+%left '*'
+@end example
+
+In a more complete example, which supports other operators as well, we
+would declare them in groups of equal precedence. For example, @code{'+'} is
+declared with @code{'-'}:
+
+@example
+%left '<' '>' '=' NE LE GE
+%left '+' '-'
+%left '*' '/'
+@end example
+
+@noindent
+(Here @code{NE} and so on stand for the operators for ``not equal''
+and so on. We assume that these tokens are more than one character long
+and therefore are represented by names, not character literals.)
+
+@node How Precedence
+@subsection How Precedence Works
+
+The first effect of the precedence declarations is to assign precedence
+levels to the terminal symbols declared. The second effect is to assign
+precedence levels to certain rules: each rule gets its precedence from
+the last terminal symbol mentioned in the components. (You can also
+specify explicitly the precedence of a rule. @xref{Contextual
+Precedence, ,Context-Dependent Precedence}.)
+
+Finally, the resolution of conflicts works by comparing the precedence
+of the rule being considered with that of the lookahead token. If the
+token's precedence is higher, the choice is to shift. If the rule's
+precedence is higher, the choice is to reduce. If they have equal
+precedence, the choice is made based on the associativity of that
+precedence level. The verbose output file made by @samp{-v}
+(@pxref{Invocation, ,Invoking Bison}) says how each conflict was
+resolved.
+
+Not all rules and not all tokens have precedence. If either the rule or
+the lookahead token has no precedence, then the default is to shift.
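+
+As a small illustrative grammar (a sketch, not one of the calculator
+examples), consider:
+
+@example
+@group
+%token NUM
+%left '-'
+%left '*'
+%%
+exp:
+  exp '-' exp   /* rule precedence: that of '-' */
+| exp '*' exp   /* rule precedence: that of '*' */
+| NUM
+;
+@end group
+@end example
+
+@noindent
+With @w{@samp{1 - 2}} on the stack and @samp{*} as the lookahead, the
+token's precedence is higher than the rule's, so the parser shifts.
+With @samp{-} as the lookahead, the precedences are equal and @samp{-}
+is left-associative, so the parser reduces, yielding
+@w{@samp{(1 - 2) - @dots{}}}.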
+
+@node Contextual Precedence
+@section Context-Dependent Precedence
+@cindex context-dependent precedence
+@cindex unary operator precedence
+@cindex precedence, context-dependent
+@cindex precedence, unary operator
+@findex %prec
+
+Often the precedence of an operator depends on the context. This sounds
+outlandish at first, but it is really very common. For example, a minus
+sign typically has a very high precedence as a unary operator, and a
+somewhat lower precedence (lower than multiplication) as a binary operator.
+
+The Bison precedence declarations
+can only be used once for a given token; so a token has
+only one precedence declared in this way. For context-dependent
+precedence, you need to use an additional mechanism: the @code{%prec}
+modifier for rules.
+
+The @code{%prec} modifier declares the precedence of a particular rule by
+specifying a terminal symbol whose precedence should be used for that rule.
+It's not necessary for that symbol to appear otherwise in the rule. The
+modifier's syntax is:
+
+@example
+%prec @var{terminal-symbol}
+@end example
+
+@noindent
+and it is written after the components of the rule. Its effect is to
+assign the rule the precedence of @var{terminal-symbol}, overriding
+the precedence that would be deduced for it in the ordinary way. The
+altered rule precedence then affects how conflicts involving that rule
+are resolved (@pxref{Precedence, ,Operator Precedence}).
+
+Here is how @code{%prec} solves the problem of unary minus. First, declare
+a precedence for a fictitious terminal symbol named @code{UMINUS}. There
+are no tokens of this type, but the symbol serves to stand for its
+precedence:
+
+@example
+@dots{}
+%left '+' '-'
+%left '*'
+%left UMINUS
+@end example
+
+Now the precedence of @code{UMINUS} can be used in specific rules:
+
+@example
+@group
+exp:
+ @dots{}
+| exp '-' exp
+ @dots{}
+| '-' exp %prec UMINUS
+@end group
+@end example
+
+@ifset defaultprec
+If you forget to append @code{%prec UMINUS} to the rule for unary
+minus, Bison silently assumes that minus has its usual precedence.
+This kind of problem can be tricky to debug, since one typically
+discovers the mistake only by testing the code.
+
+The @code{%no-default-prec;} declaration makes it easier to discover
+this kind of problem systematically. It causes rules that lack a
+@code{%prec} modifier to have no precedence, even if the last terminal
+symbol mentioned in their components has a declared precedence.
+
+If @code{%no-default-prec;} is in effect, you must specify @code{%prec}
+for all rules that participate in precedence conflict resolution.
+Then you will see any shift/reduce conflict until you tell Bison how
+to resolve it, either by changing your grammar or by adding an
+explicit precedence. This will probably add declarations to the
+grammar, but it helps to protect against incorrect rule precedences.
+
+The effect of @code{%no-default-prec;} can be reversed by giving
+@code{%default-prec;}, which is the default.
+@end ifset
+
+@node Parser States
+@section Parser States
+@cindex finite-state machine
+@cindex parser state
+@cindex state (of parser)
+
+The function @code{yyparse} is implemented using a finite-state machine.
+The values pushed on the parser stack are not simply token type codes; they
+represent the entire sequence of terminal and nonterminal symbols at or
+near the top of the stack. The current state collects all the information
+about previous input which is relevant to deciding what to do next.
+
+Each time a lookahead token is read, the current parser state together
+with the type of lookahead token are looked up in a table. This table
+entry can say, ``Shift the lookahead token.'' In this case, it also
+specifies the new parser state, which is pushed onto the top of the
+parser stack. Or it can say, ``Reduce using rule number @var{n}.''
+This means that a certain number of tokens or groupings are taken off
+the top of the stack, and replaced by one grouping. In other words,
+that number of states are popped from the stack, and one new state is
+pushed.
+
+There is one other alternative: the table can say that the lookahead token
+is erroneous in the current state. This causes error processing to begin
+(@pxref{Error Recovery}).
+
+@node Reduce/Reduce
+@section Reduce/Reduce Conflicts
+@cindex reduce/reduce conflict
+@cindex conflicts, reduce/reduce
+
+A reduce/reduce conflict occurs if there are two or more rules that apply
+to the same sequence of input. This usually indicates a serious error
+in the grammar.
+
+For example, here is an erroneous attempt to define a sequence
+of zero or more @code{word} groupings.
+
+@example
+@group
+sequence:
+ /* empty */ @{ printf ("empty sequence\n"); @}
+| maybeword
+| sequence word @{ printf ("added word %s\n", $2); @}
+;
+@end group
+
+@group
+maybeword:
+ /* empty */ @{ printf ("empty maybeword\n"); @}
+| word @{ printf ("single word %s\n", $1); @}
+;
+@end group
+@end example
+
+@noindent
+The error is an ambiguity: there is more than one way to parse a single
+@code{word} into a @code{sequence}. It could be reduced to a
+@code{maybeword} and then into a @code{sequence} via the second rule.
+Alternatively, nothing-at-all could be reduced into a @code{sequence}
+via the first rule, and this could be combined with the @code{word}
+using the third rule for @code{sequence}.
+
+There is also more than one way to reduce nothing-at-all into a
+@code{sequence}. This can be done directly via the first rule,
+or indirectly via @code{maybeword} and then the second rule.
+
+You might think that this is a distinction without a difference, because it
+does not change whether any particular input is valid or not. But it does
+affect which actions are run. One parsing order runs the second rule's
+action; the other runs the first rule's action and the third rule's action.
+In this example, the output of the program changes.
+
+Bison resolves a reduce/reduce conflict by choosing to use the rule that
+appears first in the grammar, but it is very risky to rely on this. Every
+reduce/reduce conflict must be studied and usually eliminated. Here is the
+proper way to define @code{sequence}:
+
+@example
+sequence:
+ /* empty */ @{ printf ("empty sequence\n"); @}
+| sequence word @{ printf ("added word %s\n", $2); @}
+;
+@end example
+
+Here is another common error that yields a reduce/reduce conflict:
+
+@example
+sequence:
+ /* empty */
+| sequence words
+| sequence redirects
+;
+
+words:
+ /* empty */
+| words word
+;
+
+redirects:
+ /* empty */
+| redirects redirect
+;
+@end example
+
+@noindent
+The intention here is to define a sequence which can contain either
+@code{word} or @code{redirect} groupings. The individual definitions of
+@code{sequence}, @code{words} and @code{redirects} are error-free, but the
+three together make a subtle ambiguity: even an empty input can be parsed
+in infinitely many ways!
+
+Consider: nothing-at-all could be a @code{words}. Or it could be two
+@code{words} in a row, or three, or any number. It could equally well be a
+@code{redirects}, or two, or any number. Or it could be a @code{words}
+followed by three @code{redirects} and another @code{words}. And so on.
+
+Here are two ways to correct these rules. First, to make it a single level
+of sequence:
+
+@example
+sequence:
+ /* empty */
+| sequence word
+| sequence redirect
+;
+@end example
+
+Second, to prevent either a @code{words} or a @code{redirects}
+from being empty:
+
+@example
+@group
+sequence:
+ /* empty */
+| sequence words
+| sequence redirects
+;
+@end group
+
+@group
+words:
+ word
+| words word
+;
+@end group
+
+@group
+redirects:
+ redirect
+| redirects redirect
+;
+@end group
+@end example
+
+@node Mysterious Conflicts
+@section Mysterious Conflicts
+@cindex Mysterious Conflicts
+
+Sometimes reduce/reduce conflicts can occur that don't look warranted.
+Here is an example:
+
+@example
+@group
+%token ID
+
+%%
+def: param_spec return_spec ',';
+param_spec:
+ type
+| name_list ':' type
+;
+@end group
+@group
+return_spec:
+ type
+| name ':' type
+;
+@end group
+@group
+type: ID;
+@end group
+@group
+name: ID;
+name_list:
+ name
+| name ',' name_list
+;
+@end group
+@end example
+
+It would seem that this grammar can be parsed with only a single token
+of lookahead: when a @code{param_spec} is being read, an @code{ID} is
+a @code{name} if a comma or colon follows, or a @code{type} if another
+@code{ID} follows. In other words, this grammar is LR(1).
+
+@cindex LR
+@cindex LALR
+However, for historical reasons, Bison cannot by default handle all
+LR(1) grammars.
+In this grammar, two contexts, that after an @code{ID} at the beginning
+of a @code{param_spec} and likewise at the beginning of a
+@code{return_spec}, are similar enough that Bison assumes they are the
+same.
+They appear similar because the same set of rules would be
+active---the rule for reducing to a @code{name} and that for reducing to
+a @code{type}. Bison is unable to determine at that stage of processing
+that the rules would require different lookahead tokens in the two
+contexts, so it makes a single parser state for them both. Combining
+the two contexts causes a conflict later. In parser terminology, this
+occurrence means that the grammar is not LALR(1).
+
+@cindex IELR
+@cindex canonical LR
+For many practical grammars (specifically those that fall into the non-LR(1)
+class), the limitations of LALR(1) result in difficulties beyond just
+mysterious reduce/reduce conflicts. The best way to fix all these problems
+is to select a different parser table construction algorithm. Either
+IELR(1) or canonical LR(1) would suffice, but the former is more efficient
+and easier to debug during development. @xref{LR Table Construction}, for
+details. (Bison's IELR(1) and canonical LR(1) implementations are
+experimental. More user feedback will help to stabilize them.)
+
+If you instead wish to work around LALR(1)'s limitations, you
+can often fix a mysterious conflict by identifying the two parser states
+that are being confused, and adding something to make them look
+distinct. In the above example, adding one rule to
+@code{return_spec} as follows makes the problem go away:
+
+@example
+@group
+%token BOGUS
+@dots{}
+%%
+@dots{}
+return_spec:
+ type
+| name ':' type
+| ID BOGUS /* This rule is never used. */
+;
+@end group
+@end example
+
+This corrects the problem because it introduces the possibility of an
+additional active rule in the context after the @code{ID} at the beginning of
+@code{return_spec}. This rule is not active in the corresponding context
+in a @code{param_spec}, so the two contexts receive distinct parser states.
+As long as the token @code{BOGUS} is never generated by @code{yylex},
+the added rule cannot alter the way actual input is parsed.
+
+In this particular example, there is another way to solve the problem:
+rewrite the rule for @code{return_spec} to use @code{ID} directly
+instead of via @code{name}. This also causes the two confusing
+contexts to have different sets of active rules, because the one for
+@code{return_spec} activates the altered rule for @code{return_spec}
+rather than the one for @code{name}.
+
+@example
+param_spec:
+ type
+| name_list ':' type
+;
+return_spec:
+ type
+| ID ':' type
+;
+@end example
+
+For a more detailed exposition of LALR(1) parsers and parser
+generators, @pxref{Bibliography,,DeRemer 1982}.
+
+@node Tuning LR
+@section Tuning LR
+
+The default behavior of Bison's LR-based parsers is chosen mostly for
+historical reasons, but that behavior is often not robust. For example, in
+the previous section, we discussed the mysterious conflicts that can be
+produced by LALR(1), Bison's default parser table construction algorithm.
+Another example is Bison's @code{%define parse.error verbose} directive,
+which instructs the generated parser to produce verbose syntax error
+messages, which can sometimes contain incorrect information.
+
+In this section, we explore several modern features of Bison that allow you
+to tune fundamental aspects of the generated LR-based parsers. Some of
+these features easily eliminate shortcomings like those mentioned above.
+Others can be helpful purely for understanding your parser.
+
+Most of the features discussed in this section are still experimental. More
+user feedback will help to stabilize them.
+
+@menu
+* LR Table Construction:: Choose a different construction algorithm.
+* Default Reductions:: Disable default reductions.
+* LAC:: Correct lookahead sets in the parser states.
+* Unreachable States:: Keep unreachable parser states for debugging.
+@end menu
+
+@node LR Table Construction
+@subsection LR Table Construction
+@cindex Mysterious Conflict
+@cindex LALR
+@cindex IELR
+@cindex canonical LR
+@findex %define lr.type
+
+For historical reasons, Bison constructs LALR(1) parser tables by default.
+However, LALR does not possess the full language-recognition power of LR.
+As a result, the behavior of parsers employing LALR parser tables is often
+mysterious. We presented a simple example of this effect in @ref{Mysterious
+Conflicts}.
+
+As we also demonstrated in that example, the traditional approach to
+eliminating such mysterious behavior is to restructure the grammar.
+Unfortunately, doing so correctly is often difficult. Moreover, merely
+discovering that LALR causes mysterious behavior in your parser can be
+difficult as well.
+
+Fortunately, Bison provides an easy way to eliminate the possibility of such
+mysterious behavior altogether. You simply need to activate a more powerful
+parser table construction algorithm by using the @code{%define lr.type}
+directive.
+
+@deffn {Directive} {%define lr.type @var{TYPE}}
+Specify the type of parser tables within the LR(1) family. The accepted
+values for @var{TYPE} are:
+
+@itemize
+@item @code{lalr} (default)
+@item @code{ielr}
+@item @code{canonical-lr}
+@end itemize
+
+(This feature is experimental. More user feedback will help to stabilize
+it.)
+@end deffn
+
+For example, to activate IELR, you might add the following directive to your
+grammar file:
+
+@example
+%define lr.type ielr
+@end example
+
+@noindent For the example in @ref{Mysterious Conflicts}, the mysterious
+conflict is then eliminated, so there is no need to invest time in
+comprehending the conflict or restructuring the grammar to fix it. If,
+during future development, the grammar evolves such that all mysterious
+behavior would have disappeared using just LALR, you need not fear that
+continuing to use IELR will result in unnecessarily large parser tables.
+That is, IELR generates LALR tables when LALR (using a deterministic parsing
+algorithm) is sufficient to support the full language-recognition power of
+LR. Thus, by enabling IELR at the start of grammar development, you can
+safely and completely eliminate the need to consider LALR's shortcomings.
+
+While IELR is almost always preferable, there are circumstances where LALR
+or the canonical LR parser tables described by Knuth
+(@pxref{Bibliography,,Knuth 1965}) can be useful. Here we summarize the
+relative advantages of each parser table construction algorithm within
+Bison:
+
+@itemize
+@item LALR
+
+There are at least two scenarios where LALR can be worthwhile:
+
+@itemize
+@item GLR without static conflict resolution.
+
+@cindex GLR with LALR
+When employing GLR parsers (@pxref{GLR Parsers}), if you do not resolve any
+conflicts statically (for example, with @code{%left} or @code{%prec}), then
+the parser explores all potential parses of any given input. In this case,
+the choice of parser table construction algorithm is guaranteed not to alter
+the language accepted by the parser. LALR parser tables are the smallest
+parser tables Bison can currently construct, so they may then be preferable.
+Nevertheless, once you begin to resolve conflicts statically, GLR behaves
+more like a deterministic parser in the syntactic contexts where those
+conflicts appear, and so either IELR or canonical LR can then be helpful to
+avoid LALR's mysterious behavior.
+
+@item Malformed grammars.
+
+Occasionally during development, an especially malformed grammar with a
+major recurring flaw may severely impede the IELR or canonical LR parser
+table construction algorithm. LALR can be a quick way to construct parser
+tables in order to investigate such problems while ignoring the more subtle
+differences from IELR and canonical LR.
+@end itemize
+
+@item IELR
+
+IELR (Inadequacy Elimination LR) is a minimal LR algorithm. That is, given
+any grammar (LR or non-LR), parsers using IELR or canonical LR parser tables
+always accept exactly the same set of sentences. However, like LALR, IELR
+merges parser states during parser table construction so that the number of
+parser states is often an order of magnitude less than for canonical LR.
+More importantly, because canonical LR's extra parser states may contain
+duplicate conflicts in the case of non-LR grammars, the number of conflicts
+for IELR is often an order of magnitude less as well. This effect can
+significantly reduce the complexity of developing a grammar.
+
+@item Canonical LR
+
+@cindex delayed syntax error detection
+@cindex LAC
+@findex %nonassoc
+While inefficient, canonical LR parser tables can be an interesting means to
+explore a grammar because they possess a property that IELR and LALR tables
+do not. That is, if @code{%nonassoc} is not used and default reductions are
+left disabled (@pxref{Default Reductions}), then, for every left context of
+every canonical LR state, the set of tokens accepted by that state is
+guaranteed to be the exact set of tokens that is syntactically acceptable in
+that left context. It might then seem that an advantage of canonical LR
+parsers in production is that, under the above constraints, they are
+guaranteed to detect a syntax error as soon as possible without performing
+any unnecessary reductions. However, IELR parsers that use LAC are also
+able to achieve this behavior without sacrificing @code{%nonassoc} or
+default reductions. For details and a few caveats of LAC, @pxref{LAC}.
+@end itemize
+
+For a more detailed exposition of the mysterious behavior in LALR parsers
+and the benefits of IELR, @pxref{Bibliography,,Denny 2008 March}, and
+@ref{Bibliography,,Denny 2010 November}.
+
+@node Default Reductions
+@subsection Default Reductions
+@cindex default reductions
+@findex %define lr.default-reductions
+@findex %nonassoc
+
+After parser table construction, Bison identifies the reduction with the
+largest lookahead set in each parser state. To reduce the size of the
+parser state, traditional Bison behavior is to remove that lookahead set and
+to assign that reduction to be the default parser action. Such a reduction
+is known as a @dfn{default reduction}.
+
+Default reductions affect more than the size of the parser tables. They
+also affect the behavior of the parser:
+
+@itemize
+@item Delayed @code{yylex} invocations.
+
+@cindex delayed yylex invocations
+@cindex consistent states
+@cindex defaulted states
+A @dfn{consistent state} is a state that has only one possible parser
+action. If that action is a reduction and is encoded as a default
+reduction, then that consistent state is called a @dfn{defaulted state}.
+Upon reaching a defaulted state, a Bison-generated parser does not bother to
+invoke @code{yylex} to fetch the next token before performing the reduction.
+In other words, whether default reductions are enabled in consistent states
+determines how soon a Bison-generated parser invokes @code{yylex} for a
+token: immediately when it @emph{reaches} that token in the input or when it
+eventually @emph{needs} that token as a lookahead to determine the next
+parser action. Traditionally, default reductions are enabled, and so the
+parser exhibits the latter behavior.
+
+The presence of defaulted states is an important consideration when
+designing @code{yylex} and the grammar file. That is, if the behavior of
+@code{yylex} can influence or be influenced by the semantic actions
+associated with the reductions in defaulted states, then the delay of the
+next @code{yylex} invocation until after those reductions is significant.
+For example, the semantic actions might pop a scope stack that @code{yylex}
+uses to determine what token to return. Thus, the delay might be necessary
+to ensure that @code{yylex} does not look up the next token in a scope that
+should already be considered closed.
+
+@item Delayed syntax error detection.
+
+@cindex delayed syntax error detection
+When the parser fetches a new token by invoking @code{yylex}, it checks
+whether there is an action for that token in the current parser state. The
+parser detects a syntax error if and only if either (1) there is no action
+for that token or (2) the action for that token is the error action (due to
+the use of @code{%nonassoc}). However, if there is a default reduction in
+that state (which might or might not be a defaulted state), then it is
+impossible for condition 1 to exist. That is, all tokens have an action.
+Thus, the parser sometimes fails to detect the syntax error until it reaches
+a later state.
+
+@cindex LAC
+@c If there's an infinite loop, default reductions can prevent an incorrect
+@c sentence from being rejected.
+While default reductions never cause the parser to accept syntactically
+incorrect sentences, the delay of syntax error detection can have unexpected
+effects on the behavior of the parser. However, the delay can be caused
+anyway by parser state merging and the use of @code{%nonassoc}, and it can
+be fixed by another Bison feature, LAC. We discuss the effects of delayed
+syntax error detection and LAC more in the next section (@pxref{LAC}).
+@end itemize
+
+For canonical LR, the only default reduction that Bison enables by default
+is the accept action, which appears only in the accepting state, which has
+no other action and is thus a defaulted state. However, the default accept
+action does not delay any @code{yylex} invocation or syntax error detection
+because the accept action ends the parse.
+
+For LALR and IELR, Bison enables default reductions in nearly all states by
+default. There are only two exceptions. First, states that have a shift
+action on the @code{error} token do not have default reductions because
+delayed syntax error detection could then prevent the @code{error} token
+from ever being shifted in that state. However, parser state merging can
+cause the same effect anyway, and LAC fixes it in both cases, so future
+versions of Bison might drop this exception when LAC is activated. Second,
+GLR parsers do not record the default reduction as the action on a lookahead
+token for which there is a conflict. The correct action in this case is to
+split the parse instead.
+
+To adjust which states have default reductions enabled, use the
+@code{%define lr.default-reductions} directive.
+
+@deffn {Directive} {%define lr.default-reductions @var{WHERE}}
+Specify the kind of states that are permitted to contain default reductions.
+The accepted values of @var{WHERE} are:
+@itemize
+@item @code{most} (default for LALR and IELR)
+@item @code{consistent}
+@item @code{accepting} (default for canonical LR)
+@end itemize
+
+(The ability to specify where default reductions are permitted is
+experimental. More user feedback will help to stabilize it.)
+@end deffn
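+
+For example, to keep default reductions only in consistent states (a
+sketch; whether this trade-off is worthwhile depends on your grammar and
+scanner), you might write:
+
+@example
+%define lr.default-reductions consistent
+@end example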
+
+@node LAC
+@subsection LAC
+@findex %define parse.lac
+@cindex LAC
+@cindex lookahead correction
+
+Canonical LR, IELR, and LALR can suffer from a couple of problems upon
+encountering a syntax error. First, the parser might perform additional
+parser stack reductions before discovering the syntax error. Such
+reductions can perform user semantic actions that are unexpected because
+they are based on an invalid token, and they cause error recovery to begin
+in a different syntactic context than the one in which the invalid token was
+encountered. Second, when verbose error messages are enabled (@pxref{Error
+Reporting}), the expected token list in the syntax error message can both
+contain invalid tokens and omit valid tokens.
+
+The culprits for the above problems are @code{%nonassoc}, default reductions
+in inconsistent states (@pxref{Default Reductions}), and parser state
+merging. Because IELR and LALR merge parser states, they suffer the most.
+Canonical LR can suffer only if @code{%nonassoc} is used or if default
+reductions are enabled for inconsistent states.
+
+LAC (Lookahead Correction) is a new mechanism within the parsing algorithm
+that solves these problems for canonical LR, IELR, and LALR without
+sacrificing @code{%nonassoc}, default reductions, or state merging. You can
+enable LAC with the @code{%define parse.lac} directive.
+
+@deffn {Directive} {%define parse.lac @var{VALUE}}
+Enable LAC to improve syntax error handling.
+@itemize
+@item @code{none} (default)
+@item @code{full}
+@end itemize
+(This feature is experimental. More user feedback will help to stabilize
+it. Moreover, it is currently only available for deterministic parsers in
+C.)
+@end deffn
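+
+For instance, a C parser that wants more reliable verbose error messages
+might combine LAC with verbose errors (a sketch, assuming a
+deterministic parser in C):
+
+@example
+%define parse.lac full
+%define parse.error verbose
+@end example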
+
+Conceptually, the LAC mechanism is straightforward. Whenever the parser
+fetches a new token from the scanner so that it can determine the next
+parser action, it immediately suspends normal parsing and performs an
+exploratory parse using a temporary copy of the normal parser state stack.
+During this exploratory parse, the parser does not perform user semantic
+actions. If the exploratory parse reaches a shift action, normal parsing
+then resumes on the normal parser stacks. If the exploratory parse reaches
+an error instead, the parser reports a syntax error. If verbose syntax
+error messages are enabled, the parser must then discover the list of
+expected tokens, so it performs a separate exploratory parse for each token
+in the grammar.
+
+There is one subtlety about the use of LAC. That is, when in a consistent
+parser state with a default reduction, the parser will not attempt to fetch
+a token from the scanner because no lookahead is needed to determine the
+next parser action. Thus, whether default reductions are enabled in
+consistent states (@pxref{Default Reductions}) affects how soon the parser
+detects a syntax error: immediately when it @emph{reaches} an erroneous
+token or when it eventually @emph{needs} that token as a lookahead to
+determine the next parser action. The latter behavior is probably more
+intuitive, so Bison currently provides no way to achieve the former behavior
+while default reductions are enabled in consistent states.
+
+Thus, when LAC is in use, for some fixed decision of whether to enable
+default reductions in consistent states, canonical LR and IELR behave almost
+exactly the same for both syntactically acceptable and syntactically
+unacceptable input. While LALR still does not support the full
+language-recognition power of canonical LR and IELR, LAC at least enables
+LALR's syntax error handling to correctly reflect LALR's
+language-recognition power.
+
+There are a few caveats to consider when using LAC:
+
+@itemize
+@item Infinite parsing loops.
+
+IELR plus LAC does have one shortcoming relative to canonical LR. Some
+parsers generated by Bison can loop infinitely. LAC does not fix infinite
+parsing loops that occur between encountering a syntax error and detecting
+it, but enabling canonical LR or disabling default reductions sometimes
+does.
+
+@item Verbose error message limitations.
+
+Because of internationalization considerations, Bison-generated parsers
+limit the size of the expected token list they are willing to report in a
+verbose syntax error message. If the number of expected tokens exceeds that
+limit, the list is simply dropped from the message. Enabling LAC can
+increase the size of the list and thus cause the parser to drop it. Of
+course, dropping the list is better than reporting an incorrect list.
+
+@item Performance.
+
+Because LAC requires many parse actions to be performed twice, it can have a
+performance penalty. However, not all parse actions must be performed
+twice. Specifically, during a series of default reductions in consistent
+states and shift actions, the parser never has to initiate an exploratory
+parse. Moreover, the most time-consuming tasks in a parse are often the
+file I/O, the lexical analysis performed by the scanner, and the user's
+semantic actions, but none of these are performed during the exploratory
+parse. Finally, the base of the temporary stack used during an exploratory
+parse is a pointer into the normal parser state stack so that the stack is
+never physically copied. In our experience, the performance penalty of LAC
+has proved insignificant for practical grammars.
+@end itemize
+
+While the LAC algorithm shares techniques that have been recognized in the
+parser community for years, for the publication that introduces LAC,
+@pxref{Bibliography,,Denny 2010 May}.
+
+@node Unreachable States
+@subsection Unreachable States
+@findex %define lr.keep-unreachable-states
+@cindex unreachable states
+
+If there exists no sequence of transitions from the parser's start state to
+some state @var{s}, then Bison considers @var{s} to be an @dfn{unreachable
+state}. A state can become unreachable during conflict resolution if Bison
+disables a shift action leading to it from a predecessor state.
+
+By default, Bison removes unreachable states from the parser after conflict
+resolution because they are useless in the generated parser. However,
+keeping unreachable states is sometimes useful when trying to understand the
+relationship between the parser and the grammar.
+
+@deffn {Directive} {%define lr.keep-unreachable-states @var{VALUE}}
+Request that Bison allow unreachable states to remain in the parser tables.
+@var{VALUE} must be a Boolean. The default is @code{false}.
+@end deffn
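+
+For example, while investigating how conflict resolution reshapes the
+parser, you might temporarily request:
+
+@example
+%define lr.keep-unreachable-states true
+@end example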
+
+There are a few caveats to consider:
+
+@itemize @bullet
+@item Missing or extraneous warnings.
+
+Unreachable states may contain conflicts and may use rules not used in any
+other state. Thus, keeping unreachable states may induce warnings that are
+irrelevant to your parser's behavior, and it may eliminate warnings that are
+relevant. Of course, the change in warnings may actually be relevant to a
+parser table analysis that wants to keep unreachable states, so this
+behavior will likely remain in future Bison releases.
+
+@item Other useless states.
+
+While Bison is able to remove unreachable states, it is not guaranteed to
+remove other kinds of useless states. Specifically, when Bison disables
+reduce actions during conflict resolution, some goto actions may become
+useless, and thus some additional states may become useless. If Bison were
+to compute which goto actions were useless and then disable those actions,
+it could identify such states as unreachable and then remove those states.
+However, Bison does not compute which goto actions are useless.
+@end itemize
+
+@node Generalized LR Parsing
+@section Generalized LR (GLR) Parsing
+@cindex GLR parsing
+@cindex generalized LR (GLR) parsing
+@cindex ambiguous grammars
+@cindex nondeterministic parsing
+
+Bison produces @emph{deterministic} parsers that choose uniquely
+when to reduce and which reduction to apply
+based on a summary of the preceding input and on one extra token of lookahead.
+As a result, normal Bison handles a proper subset of the family of
+context-free languages.
+Ambiguous grammars, since they have strings with more than one possible
+sequence of reductions, cannot have deterministic parsers in this sense.
+The same is true of languages that require more than one symbol of
+lookahead, since the parser lacks the information necessary to make a
+decision at the point it must be made in a shift-reduce parser.
+Finally, as previously mentioned (@pxref{Mysterious Conflicts}),
+there are languages where Bison's default choice of how to
+summarize the input seen so far loses necessary information.
+
+When you use the @samp{%glr-parser} declaration in your grammar file,
+Bison generates a parser that uses a different algorithm, called
+Generalized LR (or GLR). A Bison GLR
+parser uses the same basic
+algorithm for parsing as an ordinary Bison parser, but behaves
+differently in cases where there is a shift-reduce conflict that has not
+been resolved by precedence rules (@pxref{Precedence}) or a
+reduce-reduce conflict. When a GLR parser encounters such a
+situation, it
+effectively @emph{splits} into several parsers, one for each possible
+shift or reduction. These parsers then proceed as usual, consuming
+tokens in lock-step. Some of the stacks may encounter other conflicts
+and split further, with the result that instead of a sequence of states,
+a Bison GLR parsing stack is what is in effect a tree of states.
+
+In effect, each stack represents a guess as to what the proper parse
+is. Additional input may indicate that a guess was wrong, in which case
+the appropriate stack silently disappears. Otherwise, the semantic
+actions generated in each stack are saved, rather than being executed
+immediately. When a stack disappears, its saved semantic actions never
+get executed. When a reduction causes two stacks to become equivalent,
+their sets of semantic actions are both saved with the state that
+results from the reduction. We say that two stacks are equivalent
+when they both represent the same sequence of states,
+and each pair of corresponding states represents a
+grammar symbol that produces the same segment of the input token
+stream.
+
+Whenever the parser makes a transition from having multiple
+states to having one, it reverts to the normal deterministic parsing
+algorithm, after resolving and executing the saved-up actions.
+At this transition, some of the states on the stack will have semantic
+values that are sets (actually multisets) of possible actions. The
+parser tries to pick one of the actions by first finding one whose rule
+has the highest dynamic precedence, as set by the @samp{%dprec}
+declaration. Otherwise, if the alternative actions are not ordered by
+precedence, but the same merging function is declared for both
+rules by the @samp{%merge} declaration,
+Bison resolves and evaluates both and then calls the merge function on
+the result. Otherwise, it reports an ambiguity.
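+
+As a minimal sketch (the nonterminals @code{expr} and @code{decl} are
+assumed to be defined elsewhere), @samp{%dprec} might be used like this
+to prefer the declaration interpretation whenever both parses succeed:
+
+@example
+@group
+%glr-parser
+%%
+stmt:
+  expr ';'  %dprec 1
+| decl      %dprec 2
+;
+@end group
+@end example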
+
+It is possible to use a data structure for the GLR parsing tree that
+permits the processing of any LR(1) grammar in linear time (in the
+size of the input), any unambiguous (not necessarily
+LR(1)) grammar in
+quadratic worst-case time, and any general (possibly ambiguous)
+context-free grammar in cubic worst-case time. However, Bison currently
+uses a simpler data structure that requires time proportional to the
+length of the input times the maximum number of stacks required for any
+prefix of the input. Thus, really ambiguous or nondeterministic
+grammars can require exponential time and space to process. Such badly
+behaving examples, however, are not generally of practical interest.
+Usually, nondeterminism in a grammar is local---the parser is ``in
+doubt'' only for a few tokens at a time. Therefore, the current data
+structure should generally be adequate. On LR(1) portions of a
+grammar, in particular, it is only slightly slower than with the
+deterministic LR(1) Bison parser.
+
+For a more detailed exposition of GLR parsers, @pxref{Bibliography,,Scott
+2000}.
+
+@node Memory Management
+@section Memory Management, and How to Avoid Memory Exhaustion
+@cindex memory exhaustion
+@cindex memory management
+@cindex stack overflow
+@cindex parser stack overflow
+@cindex overflow of parser stack
+
+The Bison parser stack can run out of memory if too many tokens are shifted and
+not reduced. When this happens, the parser function @code{yyparse}
+calls @code{yyerror} and then returns 2.
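+
+A caller can distinguish this outcome from an ordinary syntax error by
+checking the return value. For example (a sketch, assuming it appears in
+the grammar file's epilogue so that @code{yyparse} is already defined):
+
+@example
+@group
+int
+main (void)
+@{
+  switch (yyparse ())
+    @{
+    case 0: return 0;  /* Successful parse.  */
+    case 1: return 1;  /* Invalid input.  */
+    case 2: return 2;  /* Memory exhaustion.  */
+    @}
+  return 0;
+@}
+@end group
+@end example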
+
+Because Bison parsers have growing stacks, hitting the upper limit
+usually results from using a right recursion instead of a left
+recursion (@pxref{Recursion, ,Recursive Rules}).
+
+@vindex YYMAXDEPTH
+By defining the macro @code{YYMAXDEPTH}, you can control how deep the
+parser stack can become before memory is exhausted. Define the
+macro with a value that is an integer. This value is the maximum number
+of tokens that can be shifted (and not reduced) before overflow.
+
+The stack space allowed is not necessarily allocated. If you specify a
+large value for @code{YYMAXDEPTH}, the parser normally allocates a small
+stack at first, and then makes it bigger by stages as needed. This
+increasing allocation happens automatically and silently. Therefore,
+you do not need to make @code{YYMAXDEPTH} painfully small merely to save
+space for ordinary inputs that do not need much stack.
+
+However, do not allow @code{YYMAXDEPTH} to be a value so large that
+arithmetic overflow could occur when calculating the size of the stack
+space. Also, do not allow @code{YYMAXDEPTH} to be less than
+@code{YYINITDEPTH}.
+
+@cindex default stack limit
+The default value of @code{YYMAXDEPTH}, if you do not define it, is
+10000.
+
+@vindex YYINITDEPTH
+You can control how much stack is allocated initially by defining the
+macro @code{YYINITDEPTH} to a positive integer. For the deterministic
+parser in C, this value must be a compile-time constant
+unless you are assuming C99 or some other target language or compiler
+that allows variable-length arrays. The default is 200.
+
+Do not allow @code{YYINITDEPTH} to be greater than @code{YYMAXDEPTH}.
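+
+Both macros can be defined in the grammar file's prologue (or on the
+compiler command line). The values below are purely illustrative, not
+recommendations:
+
+@example
+%@{
+  #define YYINITDEPTH 500    /* Initial stack allocation.  */
+  #define YYMAXDEPTH  20000  /* Give up beyond this many entries.  */
+%@}
+@end example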
+
+You can generate a deterministic parser containing C++ user code from
+the default (C) skeleton, as well as from the C++ skeleton
+(@pxref{C++ Parsers}). However, if you do use the default skeleton
+and want to allow the parsing stack to grow,
+be careful not to use semantic types or location types that require
+non-trivial copy constructors.
+The C skeleton bypasses these constructors when copying data to
+new, larger stacks.
+
+@node Error Recovery
+@chapter Error Recovery
+@cindex error recovery
+@cindex recovery from errors
+
+It is not usually acceptable to have a program terminate on a syntax
+error. For example, a compiler should recover sufficiently to parse the
+rest of the input file and check it for errors; a calculator should accept
+another expression.
+
+In a simple interactive command parser where each input is one line, it may
+be sufficient to allow @code{yyparse} to return 1 on error and have the
+caller ignore the rest of the input line when that happens (and then call
+@code{yyparse} again). But this is inadequate for a compiler, because it
+forgets all the syntactic context leading up to the error. A syntax error
+deep within a function in the compiler input should not cause the compiler
+to treat the following line like the beginning of a source file.
+
+@findex error
+You can define how to recover from a syntax error by writing rules to
+recognize the special token @code{error}. This is a terminal symbol that
+is always defined (you need not declare it) and reserved for error
+handling. The Bison parser generates an @code{error} token whenever a
+syntax error happens; if you have provided a rule to recognize this token
+in the current context, the parse can continue.
+
+For example:
+
+@example
+stmts:
+ /* empty string */
+| stmts '\n'
+| stmts exp '\n'
+| stmts error '\n'
+@end example
+
+The fourth rule in this example says that an error followed by a newline
+makes a valid addition to any @code{stmts}.
+
+What happens if a syntax error occurs in the middle of an @code{exp}? The
+error recovery rule, interpreted strictly, applies to the precise sequence
+of a @code{stmts}, an @code{error} and a newline. If an error occurs in
+the middle of an @code{exp}, there will probably be some additional tokens
+and subexpressions on the stack after the last @code{stmts}, and there
+will be tokens to read before the next newline. So the rule is not
+applicable in the ordinary way.
+
+But Bison can force the situation to fit the rule, by discarding part of
+the semantic context and part of the input. First it discards states
+and objects from the stack until it gets back to a state in which the
+@code{error} token is acceptable. (This means that the subexpressions
+already parsed are discarded, back to the last complete @code{stmts}.)
+At this point the @code{error} token can be shifted. Then, if the old
+lookahead token is not acceptable to be shifted next, the parser reads
+tokens and discards them until it finds a token which is acceptable. In
+this example, Bison reads and discards input until the next newline so
+that the fourth rule can apply. Note that discarded symbols are
+possible sources of memory leaks; see @ref{Destructor Decl, , Freeing
+Discarded Symbols}, for a means to reclaim this memory.
+
+The choice of error rules in the grammar is a choice of strategies for
+error recovery. A simple and useful strategy is simply to skip the rest of
+the current input line or current statement if an error is detected:
+
+@example
+stmt: error ';' /* On error, skip until ';' is read. */
+@end example
+
+It is also useful to recover to the matching close-delimiter of an
+opening-delimiter that has already been parsed. Otherwise the
+close-delimiter will probably appear to be unmatched, and generate another,
+spurious error message:
+
+@example
+primary:
+ '(' expr ')'
+| '(' error ')'
+@dots{}
+;
+@end example
+
+Error recovery strategies are necessarily guesses. When they guess wrong,
+one syntax error often leads to another. In the above example, the error
+recovery rule guesses that an error is due to bad input within one
+@code{stmt}. Suppose that instead a spurious semicolon is inserted in the
+middle of a valid @code{stmt}. After the error recovery rule recovers
+from the first error, another syntax error will be found straightaway,
+since the text following the spurious semicolon is also an invalid
+@code{stmt}.
+
+To prevent an outpouring of error messages, the parser will output no error
+message for another syntax error that happens shortly after the first; only
+after three consecutive input tokens have been successfully shifted will
+error messages resume.
+
+Note that rules which accept the @code{error} token may have actions, just
+as any other rules can.
+
+@findex yyerrok
+You can make error messages resume immediately by using the macro
+@code{yyerrok} in an action. If you do this in the error rule's action, no
+error messages will be suppressed. This macro requires no arguments;
+@samp{yyerrok;} is a valid C statement.
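+
+For example, a sketch of the statement-skipping rule above with
+immediate resumption of error messages:
+
+@example
+stmt: error ';'  @{ yyerrok; @}
+@end example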
+
+@findex yyclearin
+The previous lookahead token is reanalyzed immediately after an error. If
+this is unacceptable, then the macro @code{yyclearin} may be used to clear
+this token. Write the statement @samp{yyclearin;} in the error rule's
+action.
+@xref{Action Features, ,Special Features for Use in Actions}.
+
+For example, suppose that on a syntax error, an error handling routine is
+called that advances the input stream to some point where parsing should
+once again commence. The next symbol returned by the lexical scanner is
+probably correct. The previous lookahead token ought to be discarded
+with @samp{yyclearin;}.
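+
+A sketch of such a rule follows; @code{resynchronize_input} is a
+hypothetical routine standing in for whatever recovery logic your
+scanner provides:
+
+@example
+@group
+stmt:
+  error
+    @{
+      resynchronize_input ();  /* Hypothetical: skip to a safe point.  */
+      yyclearin;
+    @}
+;
+@end group
+@end example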
+
+@vindex YYRECOVERING
+The expression @code{YYRECOVERING ()} yields 1 when the parser
+is recovering from a syntax error, and 0 otherwise.
+Syntax error diagnostics are suppressed while recovering from a syntax
+error.
+
+@node Context Dependency
+@chapter Handling Context Dependencies
+
+The Bison paradigm is to parse tokens first, then group them into larger
+syntactic units. In many languages, the meaning of a token is affected by
+its context. Although this violates the Bison paradigm, certain techniques
+(known as @dfn{kludges}) may enable you to write Bison parsers for such
+languages.
+
+@menu
+* Semantic Tokens:: Token parsing can depend on the semantic context.
+* Lexical Tie-ins:: Token parsing can depend on the syntactic context.
+* Tie-in Recovery:: Lexical tie-ins have implications for how
+ error recovery rules must be written.
+@end menu
+
+(Actually, ``kludge'' means any technique that gets its job done but is
+neither clean nor robust.)
+
+@node Semantic Tokens
+@section Semantic Info in Token Types
+
+The C language has a context dependency: the way an identifier is used
+depends on what its current meaning is. For example, consider this:
+
+@example
+foo (x);
+@end example
+
+This looks like a function call statement, but if @code{foo} is a typedef
+name, then this is actually a declaration of @code{x}. How can a Bison
+parser for C decide how to parse this input?
+
+The method used in GNU C is to have two different token types,
+@code{IDENTIFIER} and @code{TYPENAME}. When @code{yylex} finds an
+identifier, it looks up the current declaration of the identifier in order
+to decide which token type to return: @code{TYPENAME} if the identifier is
+declared as a typedef, @code{IDENTIFIER} otherwise.
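+
+In @code{yylex}, that decision might look like the following sketch, where
+@code{symbol_is_typedef} stands for a hypothetical symbol-table lookup
+provided by the application:
+
+@example
+@group
+  /* Inside yylex, after an identifier has been scanned into name.  */
+  return symbol_is_typedef (name) ? TYPENAME : IDENTIFIER;
+@end group
+@end example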
+
+The grammar rules can then express the context dependency by the choice of
+token type to recognize. @code{IDENTIFIER} is accepted as an expression,
+but @code{TYPENAME} is not. @code{TYPENAME} can start a declaration, but
+@code{IDENTIFIER} cannot. In contexts where the meaning of the identifier
+is @emph{not} significant, such as in declarations that can shadow a
+typedef name, either @code{TYPENAME} or @code{IDENTIFIER} is
+accepted---there is one rule for each of the two token types.
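+
+For instance, the grammar might contain fragments along these lines (an
+illustrative sketch, not the actual GNU C grammar):
+
+@example
+@group
+primary:
+  IDENTIFIER
+| constant
+| '(' expr ')'
+;
+
+declaration:
+  TYPENAME declarator ';'
+;
+@end group
+@end example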
+
+This technique is simple to use if the decision of which kinds of
+identifiers to allow is made at a place close to where the identifier is
+parsed. But in C this is not always so: C allows a declaration to
+redeclare a typedef name provided an explicit type has been specified
+earlier:
+
+@example
+typedef int foo, bar;
+int baz (void)
+@group
+@{
+ static bar (bar); /* @r{redeclare @code{bar} as static variable} */
+ extern foo foo (foo); /* @r{redeclare @code{foo} as function} */
+ return foo (bar);
+@}
+@end group
+@end example
+
+Unfortunately, the name being declared is separated from the declaration
+construct itself by a complicated syntactic structure---the ``declarator''.
+
+As a result, part of the Bison parser for C needs to be duplicated, with
+all the nonterminal names changed: once for parsing a declaration in
+which a typedef name can be redefined, and once for parsing a
+declaration in which that can't be done. Here is a part of the
+duplication, with actions omitted for brevity:
+
+@example
+@group
+initdcl:
+ declarator maybeasm '=' init
+| declarator maybeasm
+;
@end group
+
@group
-return_spec:
- type
- | name ':' type
- ;
+notype_initdcl:
+ notype_declarator maybeasm '=' init
+| notype_declarator maybeasm
+;
+@end group
+@end example
+
+@noindent
+Here @code{initdcl} can redeclare a typedef name, but @code{notype_initdcl}
+cannot. The distinction between @code{declarator} and
+@code{notype_declarator} is the same sort of thing.
+
+There is some similarity between this technique and a lexical tie-in
+(described next), in that information which alters the lexical analysis is
+changed during parsing by other parts of the program. The difference is
+that here the information is global, and is used for other purposes in the
+program. A true lexical tie-in has a special-purpose flag controlled by
+the syntactic context.
+
+@node Lexical Tie-ins
+@section Lexical Tie-ins
+@cindex lexical tie-in
+
+One way to handle context-dependency is the @dfn{lexical tie-in}: a flag
+which is set by Bison actions, whose purpose is to alter the way tokens are
+parsed.
+
+For example, suppose we have a language vaguely like C, but with a special
+construct @samp{hex (@var{hex-expr})}. After the keyword @code{hex} comes
+an expression in parentheses in which all integers are hexadecimal. In
+particular, the token @samp{a1b} must be treated as an integer rather than
+as an identifier if it appears in that context. Here is how you can do it:
+
+@example
+@group
+%@{
+ int hexflag;
+ int yylex (void);
+ void yyerror (char const *);
+%@}
+%%
+@dots{}
@end group
@group
-type: ID
- ;
+expr:
+ IDENTIFIER
+| constant
+| HEX '(' @{ hexflag = 1; @}
+ expr ')' @{ hexflag = 0; $$ = $4; @}
+| expr '+' expr @{ $$ = make_sum ($1, $3); @}
+@dots{}
+;
+@end group
+
+@group
+constant:
+ INTEGER
+| STRING
+;
+@end group
+@end example
+
+@noindent
+Here we assume that @code{yylex} looks at the value of @code{hexflag}; when
+it is nonzero, all integers are parsed in hexadecimal, and tokens starting
+with letters are parsed as integers if possible.
+
+The declaration of @code{hexflag} shown in the prologue of the grammar
+file is needed to make it accessible to the actions (@pxref{Prologue,
+,The Prologue}). You must also write the code in @code{yylex} to obey
+the flag.
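+
+The scanner side might look roughly like this (a sketch only;
+@code{scan_hex_integer} and @code{scan_identifier_or_integer} are
+hypothetical helpers, not part of Bison):
+
+@example
+@group
+  /* Inside yylex, on a token beginning with a letter or digit.  */
+  if (hexflag)
+    return scan_hex_integer ();            /* always returns INTEGER */
+  else
+    return scan_identifier_or_integer ();  /* INTEGER or IDENTIFIER */
+@end group
+@end example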
+
+@node Tie-in Recovery
+@section Lexical Tie-ins and Error Recovery
+
+Lexical tie-ins make strict demands on any error recovery rules you have.
+@xref{Error Recovery}.
+
+The reason for this is that the purpose of an error recovery rule is to
+abort the parsing of one construct and resume in some larger construct.
+For example, in C-like languages, a typical error recovery rule is to skip
+tokens until the next semicolon, and then start a new statement, like this:
+
+@example
+stmt:
+ expr ';'
+| IF '(' expr ')' stmt @{ @dots{} @}
+@dots{}
+| error ';' @{ hexflag = 0; @}
+;
+@end example
+
+If there is a syntax error in the middle of a @samp{hex (@var{expr})}
+construct, this error rule will apply, and then the action for the
+completed @samp{hex (@var{expr})} will never run. So @code{hexflag} would
+remain set for the entire rest of the input, or until the next @code{hex}
+keyword, causing identifiers to be misinterpreted as integers.
+
+To avoid this problem the error recovery rule itself clears @code{hexflag}.
+
+There may also be an error recovery rule that works within expressions.
+For example, there could be a rule which applies within parentheses
+and skips to the close-parenthesis:
+
+@example
+@group
+expr:
+ @dots{}
+| '(' expr ')' @{ $$ = $2; @}
+| '(' error ')'
+@dots{}
+@end group
+@end example
+
+If this rule acts within the @code{hex} construct, it is not going to abort
+that construct (since it applies to an inner level of parentheses within
+the construct). Therefore, it should not clear the flag: the rest of
+the @code{hex} construct should be parsed with the flag still in effect.
+
+What if there is an error recovery rule which might abort out of the
+@code{hex} construct or might not, depending on circumstances? There is no
+way you can write the action to determine whether a @code{hex} construct is
+being aborted or not. So if you are using a lexical tie-in, you had better
+make sure your error recovery rules are not of this kind. Each rule must
+be such that you can be sure that it always will, or always won't, have to
+clear the flag.
+
+@c ================================================== Debugging Your Parser
+
+@node Debugging
+@chapter Debugging Your Parser
+
+Developing a parser can be a challenge, especially if you don't
+understand the algorithm (@pxref{Algorithm, ,The Bison Parser
+Algorithm}). Even so, sometimes a detailed description of the automaton
+can help (@pxref{Understanding, , Understanding Your Parser}), or
+tracing the execution of the parser can give some insight on why it
+behaves improperly (@pxref{Tracing, , Tracing Your Parser}).
+
+@menu
+* Understanding:: Understanding the structure of your parser.
+* Tracing:: Tracing the execution of your parser.
+@end menu
+
+@node Understanding
+@section Understanding Your Parser
+
+As documented elsewhere (@pxref{Algorithm, ,The Bison Parser Algorithm}),
+Bison parsers are @dfn{shift/reduce automata}. In some cases (much more
+frequent than one would hope), looking at this automaton is required to
+tune or simply fix a parser. Bison provides two different
+representations of it, either textual or graphical (as a DOT file).
+
+The textual file is generated when the options @option{--report} or
+@option{--verbose} are specified (@pxref{Invocation, , Invoking
+Bison}). Its name is made by removing @samp{.tab.c} or @samp{.c} from
+the parser implementation file name, and adding @samp{.output}
+instead. Therefore, if the grammar file is @file{foo.y}, then the
+parser implementation file is called @file{foo.tab.c} by default. As
+a consequence, the verbose output file is called @file{foo.output}.
+
+The following grammar file, @file{calc.y}, will be used in the sequel:
+
+@example
+%token NUM STR
+%left '+' '-'
+%left '*'
+%%
+exp:
+ exp '+' exp
+| exp '-' exp
+| exp '*' exp
+| exp '/' exp
+| NUM
+;
+useless: STR;
+%%
+@end example
+
+@command{bison} reports:
+
+@example
+calc.y: warning: 1 nonterminal useless in grammar
+calc.y: warning: 1 rule useless in grammar
+calc.y:11.1-7: warning: nonterminal useless in grammar: useless
+calc.y:11.10-12: warning: rule useless in grammar: useless: STR
+calc.y: conflicts: 7 shift/reduce
+@end example
+
+When given @option{--report=state}, in addition to @file{calc.tab.c}, it
+creates a file @file{calc.output} with contents detailed below. The
+order of the output and the exact presentation might vary, but the
+interpretation is the same.
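+
+For instance, the report discussed below can be obtained with:
+
+@example
+bison --report=state calc.y
+@end example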
+
+@noindent
+@cindex token, useless
+@cindex useless token
+@cindex nonterminal, useless
+@cindex useless nonterminal
+@cindex rule, useless
+@cindex useless rule
+The first section reports useless tokens, nonterminals and rules. Useless
+nonterminals and rules are removed in order to produce a smaller parser, but
+useless tokens are preserved, since they might be used by the scanner (note
+the difference between ``useless'' and ``unused'' below):
+
+@example
+Nonterminals useless in grammar
+ useless
+
+Terminals unused in grammar
+ STR
+
+Rules useless in grammar
+ 6 useless: STR
+@end example
+
+@noindent
+The next section lists states that still have conflicts.
+
+@example
+State 8 conflicts: 1 shift/reduce
+State 9 conflicts: 1 shift/reduce
+State 10 conflicts: 1 shift/reduce
+State 11 conflicts: 4 shift/reduce
+@end example
+
+@noindent
+Then Bison reproduces the exact grammar it used:
+
+@example
+Grammar
+
+ 0 $accept: exp $end
+
+ 1 exp: exp '+' exp
+ 2 | exp '-' exp
+ 3 | exp '*' exp
+ 4 | exp '/' exp
+ 5 | NUM
+@end example
+
+@noindent
+and reports the uses of the symbols:
+
+@example
+@group
+Terminals, with rules where they appear
+
+$end (0) 0
+'*' (42) 3
+'+' (43) 1
+'-' (45) 2
+'/' (47) 4
+error (256)
+NUM (258) 5
+STR (259)
@end group
+
@group
-name: ID
- ;
-name_list:
- name
- | name ',' name_list
- ;
+Nonterminals, with rules where they appear
+
+$accept (9)
+ on left: 0
+exp (10)
+ on left: 1 2 3 4 5, on right: 0 1 2 3 4
@end group
@end example
-It would seem that this grammar can be parsed with only a single token
-of lookahead: when a @code{param_spec} is being read, an @code{ID} is
-a @code{name} if a comma or colon follows, or a @code{type} if another
-@code{ID} follows. In other words, this grammar is @acronym{LR}(1).
-
-@cindex @acronym{LR}(1)
-@cindex @acronym{LALR}(1)
-However, Bison, like most parser generators, cannot actually handle all
-@acronym{LR}(1) grammars. In this grammar, two contexts, that after
-an @code{ID}
-at the beginning of a @code{param_spec} and likewise at the beginning of
-a @code{return_spec}, are similar enough that Bison assumes they are the
-same. They appear similar because the same set of rules would be
-active---the rule for reducing to a @code{name} and that for reducing to
-a @code{type}. Bison is unable to determine at that stage of processing
-that the rules would require different lookahead tokens in the two
-contexts, so it makes a single parser state for them both. Combining
-the two contexts causes a conflict later. In parser terminology, this
-occurrence means that the grammar is not @acronym{LALR}(1).
-
-In general, it is better to fix deficiencies than to document them. But
-this particular deficiency is intrinsically hard to fix; parser
-generators that can handle @acronym{LR}(1) grammars are hard to write
-and tend to
-produce parsers that are very large. In practice, Bison is more useful
-as it is now.
-
-When the problem arises, you can often fix it by identifying the two
-parser states that are being confused, and adding something to make them
-look distinct. In the above example, adding one rule to
-@code{return_spec} as follows makes the problem go away:
+@noindent
+@cindex item
+@cindex pointed rule
+@cindex rule, pointed
+Bison then proceeds to the automaton itself, describing each state
+with its set of @dfn{items}, also known as @dfn{pointed rules}. Each
+item is a production rule together with a point (@samp{.}) marking
+the location of the input cursor.
@example
-@group
-%token BOGUS
-@dots{}
-%%
-@dots{}
-return_spec:
- type
- | name ':' type
- /* This rule is never used. */
- | ID BOGUS
- ;
-@end group
+state 0
+
+ 0 $accept: . exp $end
+
+ NUM shift, and go to state 1
+
+ exp go to state 2
@end example
-This corrects the problem because it introduces the possibility of an
-additional active rule in the context after the @code{ID} at the beginning of
-@code{return_spec}. This rule is not active in the corresponding context
-in a @code{param_spec}, so the two contexts receive distinct parser states.
-As long as the token @code{BOGUS} is never generated by @code{yylex},
-the added rule cannot alter the way actual input is parsed.
+This reads as follows: ``state 0 corresponds to being at the very
+beginning of the parsing, in the initial rule, right before the start
+symbol (here, @code{exp}). When the parser returns to this state right
+after having reduced a rule that produced an @code{exp}, the control
+flow jumps to state 2. If there is no such transition on a nonterminal
+symbol, and the lookahead is a @code{NUM}, then this token is shifted onto
+the parse stack, and the control flow jumps to state 1. Any other
+lookahead triggers a syntax error.''
-In this particular example, there is another way to solve the problem:
-rewrite the rule for @code{return_spec} to use @code{ID} directly
-instead of via @code{name}. This also causes the two confusing
-contexts to have different sets of active rules, because the one for
-@code{return_spec} activates the altered rule for @code{return_spec}
-rather than the one for @code{name}.
+@cindex core, item set
+@cindex item set core
+@cindex kernel, item set
+Even though the only active rule in state 0 seems to be rule 0, the
+report lists @code{NUM} as a lookahead token because @code{NUM} can be
+at the beginning of any rule deriving an @code{exp}. By default Bison
+reports the so-called @dfn{core} or @dfn{kernel} of the item set, but if
+you want to see more detail you can invoke @command{bison} with
+@option{--report=itemset} to list the derived items as well:
@example
-param_spec:
- type
- | name_list ':' type
- ;
-return_spec:
- type
- | ID ':' type
- ;
+state 0
+
+ 0 $accept: . exp $end
+ 1 exp: . exp '+' exp
+ 2 | . exp '-' exp
+ 3 | . exp '*' exp
+ 4 | . exp '/' exp
+ 5 | . NUM
+
+ NUM shift, and go to state 1
+
+ exp go to state 2
@end example
-For a more detailed exposition of @acronym{LALR}(1) parsers and parser
-generators, please see:
-Frank DeRemer and Thomas Pennello, Efficient Computation of
-@acronym{LALR}(1) Look-Ahead Sets, @cite{@acronym{ACM} Transactions on
-Programming Languages and Systems}, Vol.@: 4, No.@: 4 (October 1982),
-pp.@: 615--649 @uref{http://doi.acm.org/10.1145/69622.357187}.
+@noindent
+In state 1@dots{}
-@node Generalized LR Parsing
-@section Generalized @acronym{LR} (@acronym{GLR}) Parsing
-@cindex @acronym{GLR} parsing
-@cindex generalized @acronym{LR} (@acronym{GLR}) parsing
-@cindex ambiguous grammars
-@cindex nondeterministic parsing
+@example
+state 1
-Bison produces @emph{deterministic} parsers that choose uniquely
-when to reduce and which reduction to apply
-based on a summary of the preceding input and on one extra token of lookahead.
-As a result, normal Bison handles a proper subset of the family of
-context-free languages.
-Ambiguous grammars, since they have strings with more than one possible
-sequence of reductions cannot have deterministic parsers in this sense.
-The same is true of languages that require more than one symbol of
-lookahead, since the parser lacks the information necessary to make a
-decision at the point it must be made in a shift-reduce parser.
-Finally, as previously mentioned (@pxref{Mystery Conflicts}),
-there are languages where Bison's particular choice of how to
-summarize the input seen so far loses necessary information.
+ 5 exp: NUM .
-When you use the @samp{%glr-parser} declaration in your grammar file,
-Bison generates a parser that uses a different algorithm, called
-Generalized @acronym{LR} (or @acronym{GLR}). A Bison @acronym{GLR}
-parser uses the same basic
-algorithm for parsing as an ordinary Bison parser, but behaves
-differently in cases where there is a shift-reduce conflict that has not
-been resolved by precedence rules (@pxref{Precedence}) or a
-reduce-reduce conflict. When a @acronym{GLR} parser encounters such a
-situation, it
-effectively @emph{splits} into a several parsers, one for each possible
-shift or reduction. These parsers then proceed as usual, consuming
-tokens in lock-step. Some of the stacks may encounter other conflicts
-and split further, with the result that instead of a sequence of states,
-a Bison @acronym{GLR} parsing stack is what is in effect a tree of states.
+ $default reduce using rule 5 (exp)
+@end example
-In effect, each stack represents a guess as to what the proper parse
-is. Additional input may indicate that a guess was wrong, in which case
-the appropriate stack silently disappears. Otherwise, the semantics
-actions generated in each stack are saved, rather than being executed
-immediately. When a stack disappears, its saved semantic actions never
-get executed. When a reduction causes two stacks to become equivalent,
-their sets of semantic actions are both saved with the state that
-results from the reduction. We say that two stacks are equivalent
-when they both represent the same sequence of states,
-and each pair of corresponding states represents a
-grammar symbol that produces the same segment of the input token
-stream.
+@noindent
+rule 5, @samp{exp: NUM;}, is completed. Whatever the lookahead token
+(@samp{$default}), the parser reduces this rule. If it was coming from
+state 0, then, after this reduction, it returns to state 0 and
+jumps to state 2 (@samp{exp: go to state 2}).
-Whenever the parser makes a transition from having multiple
-states to having one, it reverts to the normal @acronym{LALR}(1) parsing
-algorithm, after resolving and executing the saved-up actions.
-At this transition, some of the states on the stack will have semantic
-values that are sets (actually multisets) of possible actions. The
-parser tries to pick one of the actions by first finding one whose rule
-has the highest dynamic precedence, as set by the @samp{%dprec}
-declaration. Otherwise, if the alternative actions are not ordered by
-precedence, but there the same merging function is declared for both
-rules by the @samp{%merge} declaration,
-Bison resolves and evaluates both and then calls the merge function on
-the result. Otherwise, it reports an ambiguity.
+@example
+state 2
-It is possible to use a data structure for the @acronym{GLR} parsing tree that
-permits the processing of any @acronym{LALR}(1) grammar in linear time (in the
-size of the input), any unambiguous (not necessarily
-@acronym{LALR}(1)) grammar in
-quadratic worst-case time, and any general (possibly ambiguous)
-context-free grammar in cubic worst-case time. However, Bison currently
-uses a simpler data structure that requires time proportional to the
-length of the input times the maximum number of stacks required for any
-prefix of the input. Thus, really ambiguous or nondeterministic
-grammars can require exponential time and space to process. Such badly
-behaving examples, however, are not generally of practical interest.
-Usually, nondeterminism in a grammar is local---the parser is ``in
-doubt'' only for a few tokens at a time. Therefore, the current data
-structure should generally be adequate. On @acronym{LALR}(1) portions of a
-grammar, in particular, it is only slightly slower than with the default
-Bison parser.
+ 0 $accept: exp . $end
+ 1 exp: exp . '+' exp
+ 2 | exp . '-' exp
+ 3 | exp . '*' exp
+ 4 | exp . '/' exp
-For a more detailed exposition of @acronym{GLR} parsers, please see: Elizabeth
-Scott, Adrian Johnstone and Shamsa Sadaf Hussain, Tomita-Style
-Generalised @acronym{LR} Parsers, Royal Holloway, University of
-London, Department of Computer Science, TR-00-12,
-@uref{http://www.cs.rhul.ac.uk/research/languages/publications/tomita_style_1.ps},
-(2000-12-24).
+ $end shift, and go to state 3
+ '+' shift, and go to state 4
+ '-' shift, and go to state 5
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+@end example
-@node Memory Management
-@section Memory Management, and How to Avoid Memory Exhaustion
-@cindex memory exhaustion
-@cindex memory management
-@cindex stack overflow
-@cindex parser stack overflow
-@cindex overflow of parser stack
+@noindent
+In state 2, the automaton can only shift a symbol. For instance,
+because of the item @samp{exp: exp . '+' exp}, if the lookahead is
+@samp{+} it is shifted onto the parse stack, and the automaton
+jumps to state 4, corresponding to the item @samp{exp: exp '+' . exp}.
+Since there is no default action, any lookahead not listed triggers a syntax
+error.
-The Bison parser stack can run out of memory if too many tokens are shifted and
-not reduced. When this happens, the parser function @code{yyparse}
-calls @code{yyerror} and then returns 2.
+@cindex accepting state
+State 3 is named the @dfn{final state}, or the @dfn{accepting
+state}:
-Because Bison parsers have growing stacks, hitting the upper limit
-usually results from using a right recursion instead of a left
-recursion, @xref{Recursion, ,Recursive Rules}.
+@example
+state 3
-@vindex YYMAXDEPTH
-By defining the macro @code{YYMAXDEPTH}, you can control how deep the
-parser stack can become before memory is exhausted. Define the
-macro with a value that is an integer. This value is the maximum number
-of tokens that can be shifted (and not reduced) before overflow.
+ 0 $accept: exp $end .
-The stack space allowed is not necessarily allocated. If you specify a
-large value for @code{YYMAXDEPTH}, the parser normally allocates a small
-stack at first, and then makes it bigger by stages as needed. This
-increasing allocation happens automatically and silently. Therefore,
-you do not need to make @code{YYMAXDEPTH} painfully small merely to save
-space for ordinary inputs that do not need much stack.
+ $default accept
+@end example
-However, do not allow @code{YYMAXDEPTH} to be a value so large that
-arithmetic overflow could occur when calculating the size of the stack
-space. Also, do not allow @code{YYMAXDEPTH} to be less than
-@code{YYINITDEPTH}.
+@noindent
+the initial rule is completed (the start symbol and the end-of-input were
+read); the parsing exits successfully.
+
+The interpretation of states 4 to 7 is straightforward, and is left to
+the reader.
+
+@example
+state 4
+
+ 1 exp: exp '+' . exp
+
+ NUM shift, and go to state 1
+
+ exp go to state 8
+
+
+state 5
+
+ 2 exp: exp '-' . exp
+
+ NUM shift, and go to state 1
+
+ exp go to state 9
+
+
+state 6
+
+ 3 exp: exp '*' . exp
+
+ NUM shift, and go to state 1
+
+ exp go to state 10
+
+
+state 7
+
+ 4 exp: exp '/' . exp
+
+ NUM shift, and go to state 1
+
+ exp go to state 11
+@end example
+
+As was announced at the beginning of the report, @samp{State 8 conflicts:
+1 shift/reduce}:
+
+@example
+state 8
+
+ 1 exp: exp . '+' exp
+ 1 | exp '+' exp .
+ 2 | exp . '-' exp
+ 3 | exp . '*' exp
+ 4 | exp . '/' exp
+
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+
+ '/' [reduce using rule 1 (exp)]
+ $default reduce using rule 1 (exp)
+@end example
+
+Indeed, there are two actions associated with the lookahead @samp{/}:
+either shifting (and going to state 7), or reducing rule 1. The
+conflict means that either the grammar is ambiguous, or the parser lacks
+information to make the right decision. Here the grammar is indeed
+ambiguous: since we did not specify the precedence of @samp{/}, the
+sentence @samp{NUM + NUM / NUM} can be parsed as @samp{NUM + (NUM /
+NUM)}, which corresponds to shifting @samp{/}, or as @samp{(NUM + NUM) /
+NUM}, which corresponds to reducing rule 1.
+
+Because in deterministic parsing only a single decision can be made, Bison
+arbitrarily chose to disable the reduction (@pxref{Shift/Reduce, ,
+Shift/Reduce Conflicts}). Discarded actions are reported between
+square brackets.
-@cindex default stack limit
-The default value of @code{YYMAXDEPTH}, if you do not define it, is
-10000.
+Note that all the previous states had a single possible action: either
+shifting the next token and going to the corresponding state, or
+reducing a single rule. In the other cases, i.e., when shifting
+@emph{and} reducing is possible or when @emph{several} reductions are
+possible, the lookahead is required to select the action. State 8 is
+one such state: if the lookahead is @samp{*} or @samp{/} then the action
+is shifting, otherwise the action is reducing rule 1. In other words,
+the first two items, corresponding to rule 1, are not eligible when the
+lookahead token is @samp{*}, since we specified that @samp{*} has higher
+precedence than @samp{+}. More generally, some items are eligible only
+with some set of possible lookahead tokens. When run with
+@option{--report=lookahead}, Bison specifies these lookahead tokens:
-@vindex YYINITDEPTH
-You can control how much stack is allocated initially by defining the
-macro @code{YYINITDEPTH} to a positive integer. For the C
-@acronym{LALR}(1) parser, this value must be a compile-time constant
-unless you are assuming C99 or some other target language or compiler
-that allows variable-length arrays. The default is 200.
+@example
+state 8
-Do not allow @code{YYINITDEPTH} to be greater than @code{YYMAXDEPTH}.
+ 1 exp: exp . '+' exp
+ 1 | exp '+' exp . [$end, '+', '-', '/']
+ 2 | exp . '-' exp
+ 3 | exp . '*' exp
+ 4 | exp . '/' exp
-@c FIXME: C++ output.
-Because of semantical differences between C and C++, the
-@acronym{LALR}(1) parsers in C produced by Bison cannot grow when compiled
-by C++ compilers. In this precise case (compiling a C parser as C++) you are
-suggested to grow @code{YYINITDEPTH}. The Bison maintainers hope to fix
-this deficiency in a future release.
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
-@node Error Recovery
-@chapter Error Recovery
-@cindex error recovery
-@cindex recovery from errors
+ '/' [reduce using rule 1 (exp)]
+ $default reduce using rule 1 (exp)
+@end example
-It is not usually acceptable to have a program terminate on a syntax
-error. For example, a compiler should recover sufficiently to parse the
-rest of the input file and check it for errors; a calculator should accept
-another expression.
+Note however that while @samp{NUM + NUM / NUM} is ambiguous (which results in
+the conflicts on @samp{/}), @samp{NUM + NUM * NUM} is not: the conflict was
+solved thanks to associativity and precedence directives. If invoked with
+@option{--report=solved}, Bison includes information about the solved
+conflicts in the report:
-In a simple interactive command parser where each input is one line, it may
-be sufficient to allow @code{yyparse} to return 1 on error and have the
-caller ignore the rest of the input line when that happens (and then call
-@code{yyparse} again). But this is inadequate for a compiler, because it
-forgets all the syntactic context leading up to the error. A syntax error
-deep within a function in the compiler input should not cause the compiler
-to treat the following line like the beginning of a source file.
+@example
+Conflict between rule 1 and token '+' resolved as reduce (%left '+').
+Conflict between rule 1 and token '-' resolved as reduce (%left '-').
+Conflict between rule 1 and token '*' resolved as shift ('+' < '*').
+@end example
-@findex error
-You can define how to recover from a syntax error by writing rules to
-recognize the special token @code{error}. This is a terminal symbol that
-is always defined (you need not declare it) and reserved for error
-handling. The Bison parser generates an @code{error} token whenever a
-syntax error happens; if you have provided a rule to recognize this token
-in the current context, the parse can continue.
-For example:
+The remaining states are similar:
@example
-stmnts: /* empty string */
- | stmnts '\n'
- | stmnts exp '\n'
- | stmnts error '\n'
-@end example
+@group
+state 9
-The fourth rule in this example says that an error followed by a newline
-makes a valid addition to any @code{stmnts}.
+ 1 exp: exp . '+' exp
+ 2 | exp . '-' exp
+ 2 | exp '-' exp .
+ 3 | exp . '*' exp
+ 4 | exp . '/' exp
-What happens if a syntax error occurs in the middle of an @code{exp}? The
-error recovery rule, interpreted strictly, applies to the precise sequence
-of a @code{stmnts}, an @code{error} and a newline. If an error occurs in
-the middle of an @code{exp}, there will probably be some additional tokens
-and subexpressions on the stack after the last @code{stmnts}, and there
-will be tokens to read before the next newline. So the rule is not
-applicable in the ordinary way.
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
-But Bison can force the situation to fit the rule, by discarding part of
-the semantic context and part of the input. First it discards states
-and objects from the stack until it gets back to a state in which the
-@code{error} token is acceptable. (This means that the subexpressions
-already parsed are discarded, back to the last complete @code{stmnts}.)
-At this point the @code{error} token can be shifted. Then, if the old
-lookahead token is not acceptable to be shifted next, the parser reads
-tokens and discards them until it finds a token which is acceptable. In
-this example, Bison reads and discards input until the next newline so
-that the fourth rule can apply. Note that discarded symbols are
-possible sources of memory leaks, see @ref{Destructor Decl, , Freeing
-Discarded Symbols}, for a means to reclaim this memory.
+ '/' [reduce using rule 2 (exp)]
+ $default reduce using rule 2 (exp)
+@end group
-The choice of error rules in the grammar is a choice of strategies for
-error recovery. A simple and useful strategy is simply to skip the rest of
-the current input line or current statement if an error is detected:
+@group
+state 10
-@example
-stmnt: error ';' /* On error, skip until ';' is read. */
-@end example
+ 1 exp: exp . '+' exp
+ 2 | exp . '-' exp
+ 3 | exp . '*' exp
+ 3 | exp '*' exp .
+ 4 | exp . '/' exp
-It is also useful to recover to the matching close-delimiter of an
-opening-delimiter that has already been parsed. Otherwise the
-close-delimiter will probably appear to be unmatched, and generate another,
-spurious error message:
+ '/' shift, and go to state 7
-@example
-primary: '(' expr ')'
- | '(' error ')'
- @dots{}
- ;
+ '/' [reduce using rule 3 (exp)]
+ $default reduce using rule 3 (exp)
+@end group
+
+@group
+state 11
+
+ 1 exp: exp . '+' exp
+ 2 | exp . '-' exp
+ 3 | exp . '*' exp
+ 4 | exp . '/' exp
+ 4 | exp '/' exp .
+
+ '+' shift, and go to state 4
+ '-' shift, and go to state 5
+ '*' shift, and go to state 6
+ '/' shift, and go to state 7
+
+ '+' [reduce using rule 4 (exp)]
+ '-' [reduce using rule 4 (exp)]
+ '*' [reduce using rule 4 (exp)]
+ '/' [reduce using rule 4 (exp)]
+ $default reduce using rule 4 (exp)
+@end group
@end example
-Error recovery strategies are necessarily guesses. When they guess wrong,
-one syntax error often leads to another. In the above example, the error
-recovery rule guesses that an error is due to bad input within one
-@code{stmnt}. Suppose that instead a spurious semicolon is inserted in the
-middle of a valid @code{stmnt}. After the error recovery rule recovers
-from the first error, another syntax error will be found straightaway,
-since the text following the spurious semicolon is also an invalid
-@code{stmnt}.
+@noindent
+Observe that state 11 contains conflicts not only due to the lack of
+precedence of @samp{/} with respect to @samp{+}, @samp{-}, and
+@samp{*}, but also because the
+associativity of @samp{/} is not specified.
-To prevent an outpouring of error messages, the parser will output no error
-message for another syntax error that happens shortly after the first; only
-after three consecutive input tokens have been successfully shifted will
-error messages resume.
-Note that rules which accept the @code{error} token may have actions, just
-as any other rules can.
+@node Tracing
+@section Tracing Your Parser
+@findex yydebug
+@cindex debugging
+@cindex tracing the parser
-@findex yyerrok
-You can make error messages resume immediately by using the macro
-@code{yyerrok} in an action. If you do this in the error rule's action, no
-error messages will be suppressed. This macro requires no arguments;
-@samp{yyerrok;} is a valid C statement.
+If a Bison grammar compiles properly but doesn't do what you want when it
+runs, the @code{yydebug} parser-trace feature can help you figure out why.
-@findex yyclearin
-The previous lookahead token is reanalyzed immediately after an error. If
-this is unacceptable, then the macro @code{yyclearin} may be used to clear
-this token. Write the statement @samp{yyclearin;} in the error rule's
-action.
-@xref{Action Features, ,Special Features for Use in Actions}.
+There are several means to enable compilation of trace facilities:
-For example, suppose that on a syntax error, an error handling routine is
-called that advances the input stream to some point where parsing should
-once again commence. The next symbol returned by the lexical scanner is
-probably correct. The previous lookahead token ought to be discarded
-with @samp{yyclearin;}.
+@table @asis
+@item the macro @code{YYDEBUG}
+@findex YYDEBUG
+Define the macro @code{YYDEBUG} to a nonzero value when you compile the
+parser. This is compliant with POSIX Yacc. You could use
+@samp{-DYYDEBUG=1} as a compiler option or you could put @samp{#define
+YYDEBUG 1} in the prologue of the grammar file (@pxref{Prologue, , The
+Prologue}).
-@vindex YYRECOVERING
-The expression @code{YYRECOVERING ()} yields 1 when the parser
-is recovering from a syntax error, and 0 otherwise.
-Syntax error diagnostics are suppressed while recovering from a syntax
-error.
+@item the option @option{-t}, @option{--debug}
+Use the @samp{-t} option when you run Bison (@pxref{Invocation,
+,Invoking Bison}). This is POSIX compliant too.
-@node Context Dependency
-@chapter Handling Context Dependencies
+@item the directive @samp{%debug}
+@findex %debug
+Add the @code{%debug} directive (@pxref{Decl Summary, ,Bison Declaration
+Summary}). This Bison extension is maintained for backward
+compatibility with previous versions of Bison.
+
+@item the variable @samp{parse.trace}
+@findex %define parse.trace
+Add the @samp{%define parse.trace} directive (@pxref{%define
+Summary,,parse.trace}), or pass the @option{-Dparse.trace} option
+(@pxref{Bison Options}). This is a Bison extension, which is especially
+useful for languages that don't use a preprocessor. Unless POSIX and Yacc
+portability matter to you, this is the preferred solution.
+@end table
-The Bison paradigm is to parse tokens first, then group them into larger
-syntactic units. In many languages, the meaning of a token is affected by
-its context. Although this violates the Bison paradigm, certain techniques
-(known as @dfn{kludges}) may enable you to write Bison parsers for such
-languages.
+We suggest that you always enable the trace option so that debugging is
+always possible.
-@menu
-* Semantic Tokens:: Token parsing can depend on the semantic context.
-* Lexical Tie-ins:: Token parsing can depend on the syntactic context.
-* Tie-in Recovery:: Lexical tie-ins have implications for how
- error recovery rules must be written.
-@end menu
+The trace facility outputs messages with macro calls of the form
+@code{YYFPRINTF (stderr, @var{format}, @var{args})} where
+@var{format} and @var{args} are the usual @code{printf} format and variadic
+arguments. If you define @code{YYDEBUG} to a nonzero value but do not
+define @code{YYFPRINTF}, @code{<stdio.h>} is automatically included
+and @code{YYFPRINTF} is defined to @code{fprintf}.
-(Actually, ``kludge'' means any technique that gets its job done but is
-neither clean nor robust.)
+Once you have compiled the program with trace facilities, the way to
+request a trace is to store a nonzero value in the variable @code{yydebug}.
+You can do this by making the C code do it (in @code{main}, perhaps), or
+you can alter the value with a C debugger.
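+
+For example, if @code{main} is placed in the epilogue of the grammar file,
+where @code{yydebug} is visible, it might enable tracing like this (a
+minimal sketch):
+
+@example
+@group
+int
+main (void)
+@{
+#if YYDEBUG
+  yydebug = 1;
+#endif
+  return yyparse ();
+@}
+@end group
+@end example
+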
-@node Semantic Tokens
-@section Semantic Info in Token Types
+Each step taken by the parser when @code{yydebug} is nonzero produces a
+line or two of trace information, written on @code{stderr}. The trace
+messages tell you these things:
-The C language has a context dependency: the way an identifier is used
-depends on what its current meaning is. For example, consider this:
+@itemize @bullet
+@item
+Each time the parser calls @code{yylex}, what kind of token was read.
-@example
-foo (x);
-@end example
+@item
+Each time a token is shifted, the depth and complete contents of the
+state stack (@pxref{Parser States}).
-This looks like a function call statement, but if @code{foo} is a typedef
-name, then this is actually a declaration of @code{x}. How can a Bison
-parser for C decide how to parse this input?
+@item
+Each time a rule is reduced, which rule it is, and the complete contents
+of the state stack afterward.
+@end itemize
-The method used in @acronym{GNU} C is to have two different token types,
-@code{IDENTIFIER} and @code{TYPENAME}. When @code{yylex} finds an
-identifier, it looks up the current declaration of the identifier in order
-to decide which token type to return: @code{TYPENAME} if the identifier is
-declared as a typedef, @code{IDENTIFIER} otherwise.
+To make sense of this information, it helps to refer to the listing file
+produced by the Bison @samp{-v} option (@pxref{Invocation, ,Invoking
+Bison}). This file shows the meaning of each state in terms of
+positions in various rules, and also what each state will do with each
+possible input token. As you read the successive trace messages, you
+can see that the parser is functioning according to its specification in
+the listing file. Eventually you will arrive at the place where
+something undesirable happens, and you will see which parts of the
+grammar are to blame.
-The grammar rules can then express the context dependency by the choice of
-token type to recognize. @code{IDENTIFIER} is accepted as an expression,
-but @code{TYPENAME} is not. @code{TYPENAME} can start a declaration, but
-@code{IDENTIFIER} cannot. In contexts where the meaning of the identifier
-is @emph{not} significant, such as in declarations that can shadow a
-typedef name, either @code{TYPENAME} or @code{IDENTIFIER} is
-accepted---there is one rule for each of the two token types.
+The parser implementation file is a C program and you can use C
+debuggers on it, but it's not easy to interpret what it is doing. The
+parser function is a finite-state machine interpreter, and aside from
+the actions, it executes the same code over and over. Only the values
+of variables show where in the grammar it is working.
-This technique is simple to use if the decision of which kinds of
-identifiers to allow is made at a place close to where the identifier is
-parsed. But in C this is not always so: C allows a declaration to
-redeclare a typedef name provided an explicit type has been specified
-earlier:
+@findex YYPRINT
+The debugging information normally gives the token type of each token
+read, but not its semantic value. You can optionally define a macro
+named @code{YYPRINT} to provide a way to print the value. If you define
+@code{YYPRINT}, it should take three arguments. The parser will pass a
+standard I/O stream, the numeric code for the token type, and the token
+value (from @code{yylval}).
+
+Here is an example of @code{YYPRINT} suitable for the multi-function
+calculator (@pxref{Mfcalc Declarations, ,Declarations for @code{mfcalc}}):
@example
-typedef int foo, bar;
-int baz (void)
+%@{
+ static void print_token_value (FILE *, int, YYSTYPE);
+ #define YYPRINT(file, type, value) \
+ print_token_value (file, type, value)
+%@}
+
+@dots{} %% @dots{} %% @dots{}
+
+static void
+print_token_value (FILE *file, int type, YYSTYPE value)
@{
- static bar (bar); /* @r{redeclare @code{bar} as static variable} */
- extern foo foo (foo); /* @r{redeclare @code{foo} as function} */
- return foo (bar);
+ if (type == VAR)
+ fprintf (file, "%s", value.tptr->name);
+ else if (type == NUM)
+ fprintf (file, "%d", value.val);
@}
@end example
-Unfortunately, the name being declared is separated from the declaration
-construct itself by a complicated syntactic structure---the ``declarator''.
+@c ================================================= Invoking Bison
+
+@node Invocation
+@chapter Invoking Bison
+@cindex invoking Bison
+@cindex Bison invocation
+@cindex options for invoking Bison
+
+The usual way to invoke Bison is as follows:
+
+@example
+bison @var{infile}
+@end example
+
+Here @var{infile} is the grammar file name, which usually ends in
+@samp{.y}. The parser implementation file's name is made by replacing
+the @samp{.y} with @samp{.tab.c} and removing any leading directory.
+Thus, running @samp{bison foo.y} yields @file{foo.tab.c}, and running
+@samp{bison hack/foo.y} also yields @file{foo.tab.c}. It's
+also possible, in case you are writing C++ code instead of C in your
+grammar file, to name it @file{foo.ypp} or @file{foo.y++}. Then, the
+output files will take an extension matching that of the input file
+(respectively @file{foo.tab.cpp} and @file{foo.tab.c++}). This
+feature takes effect with all options that manipulate file names, like
+@samp{-o} or @samp{-d}.
-As a result, part of the Bison parser for C needs to be duplicated, with
-all the nonterminal names changed: once for parsing a declaration in
-which a typedef name can be redefined, and once for parsing a
-declaration in which that can't be done. Here is a part of the
-duplication, with actions omitted for brevity:
+For example:
@example
-initdcl:
- declarator maybeasm '='
- init
- | declarator maybeasm
- ;
-
-notype_initdcl:
- notype_declarator maybeasm '='
- init
- | notype_declarator maybeasm
- ;
+bison -d @var{infile.yxx}
@end example
-
@noindent
-Here @code{initdcl} can redeclare a typedef name, but @code{notype_initdcl}
-cannot. The distinction between @code{declarator} and
-@code{notype_declarator} is the same sort of thing.
+will produce @file{infile.tab.cxx} and @file{infile.tab.hxx}, and
-There is some similarity between this technique and a lexical tie-in
-(described next), in that information which alters the lexical analysis is
-changed during parsing by other parts of the program. The difference is
-here the information is global, and is used for other purposes in the
-program. A true lexical tie-in has a special-purpose flag controlled by
-the syntactic context.
+@example
+bison -d -o @var{output.c++} @var{infile.y}
+@end example
+@noindent
+will produce @file{output.c++} and @file{output.h++}.
-@node Lexical Tie-ins
-@section Lexical Tie-ins
-@cindex lexical tie-in
+For compatibility with POSIX, the standard Bison
+distribution also contains a shell script called @command{yacc} that
+invokes Bison with the @option{-y} option.
-One way to handle context-dependency is the @dfn{lexical tie-in}: a flag
-which is set by Bison actions, whose purpose is to alter the way tokens are
-parsed.
+@menu
+* Bison Options:: All the options described in detail,
+ in alphabetical order by short options.
+* Option Cross Key:: Alphabetical list of long options.
+* Yacc Library:: Yacc-compatible @code{yylex} and @code{main}.
+@end menu
-For example, suppose we have a language vaguely like C, but with a special
-construct @samp{hex (@var{hex-expr})}. After the keyword @code{hex} comes
-an expression in parentheses in which all integers are hexadecimal. In
-particular, the token @samp{a1b} must be treated as an integer rather than
-as an identifier if it appears in that context. Here is how you can do it:
+@node Bison Options
+@section Bison Options
-@example
-@group
-%@{
- int hexflag;
- int yylex (void);
- void yyerror (char const *);
-%@}
-%%
-@dots{}
-@end group
-@group
-expr: IDENTIFIER
- | constant
- | HEX '('
- @{ hexflag = 1; @}
- expr ')'
- @{ hexflag = 0;
- $$ = $4; @}
- | expr '+' expr
- @{ $$ = make_sum ($1, $3); @}
- @dots{}
- ;
-@end group
+Bison supports both traditional single-letter options and mnemonic long
+option names. Long option names are indicated with @samp{--} instead of
+@samp{-}. Abbreviations for option names are allowed as long as they
+are unique. When a long option takes an argument, like
+@samp{--file-prefix}, connect the option name and the argument with
+@samp{=}.
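+
+For example, the following two invocations are equivalent (the grammar
+file name is illustrative):
+
+@example
+bison --file-prefix=calc calc.y
+bison -b calc calc.y
+@end example
+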
-@group
-constant:
- INTEGER
- | STRING
- ;
-@end group
-@end example
+Here is a list of options that can be used with Bison, alphabetized by
+short option. It is followed by a cross key alphabetized by long
+option.
+@c Please, keep this ordered as in `bison --help'.
@noindent
-Here we assume that @code{yylex} looks at the value of @code{hexflag}; when
-it is nonzero, all integers are parsed in hexadecimal, and tokens starting
-with letters are parsed as integers if possible.
+Operations modes:
+@table @option
+@item -h
+@itemx --help
+Print a summary of the command-line options to Bison and exit.
-The declaration of @code{hexflag} shown in the prologue of the parser file
-is needed to make it accessible to the actions (@pxref{Prologue, ,The Prologue}).
-You must also write the code in @code{yylex} to obey the flag.
+@item -V
+@itemx --version
+Print the version number of Bison and exit.
-@node Tie-in Recovery
-@section Lexical Tie-ins and Error Recovery
+@item --print-localedir
+Print the name of the directory containing locale-dependent data.
-Lexical tie-ins make strict demands on any error recovery rules you have.
-@xref{Error Recovery}.
+@item --print-datadir
+Print the name of the directory containing skeletons and XSLT.
-The reason for this is that the purpose of an error recovery rule is to
-abort the parsing of one construct and resume in some larger construct.
-For example, in C-like languages, a typical error recovery rule is to skip
-tokens until the next semicolon, and then start a new statement, like this:
+@item -y
+@itemx --yacc
+Act more like the traditional Yacc command. This can cause different
+diagnostics to be generated, and may change behavior in other minor
+ways. Most importantly, imitate Yacc's output file name conventions,
+so that the parser implementation file is called @file{y.tab.c}, and
+the other outputs are called @file{y.output} and @file{y.tab.h}.
+Also, if generating a deterministic parser in C, generate
+@code{#define} statements in addition to an @code{enum} to associate
+token numbers with token names. Thus, the following shell script can
+substitute for Yacc, and the Bison distribution contains such a script
+for compatibility with POSIX:
@example
-stmt: expr ';'
- | IF '(' expr ')' stmt @{ @dots{} @}
- @dots{}
- error ';'
- @{ hexflag = 0; @}
- ;
+#! /bin/sh
+bison -y "$@@"
@end example
-If there is a syntax error in the middle of a @samp{hex (@var{expr})}
-construct, this error rule will apply, and then the action for the
-completed @samp{hex (@var{expr})} will never run. So @code{hexflag} would
-remain set for the entire rest of the input, or until the next @code{hex}
-keyword, causing identifiers to be misinterpreted as integers.
+The @option{-y}/@option{--yacc} option is intended for use with
+traditional Yacc grammars. If your grammar uses a Bison extension
+like @samp{%glr-parser}, Bison might not be Yacc-compatible even if
+this option is specified.
-To avoid this problem the error recovery rule itself clears @code{hexflag}.
+@item -W [@var{category}]
+@itemx --warnings[=@var{category}]
+Output warnings falling in @var{category}. @var{category} can be one
+of:
+@table @code
+@item midrule-values
+Warn about mid-rule values that are set but not used within any of the actions
+of the parent rule.
+For example, warn about unused @code{$2} in:
-There may also be an error recovery rule that works within expressions.
-For example, there could be a rule which applies within parentheses
-and skips to the close-parenthesis:
+@example
+exp: '1' @{ $$ = 1; @} '+' exp @{ $$ = $1 + $4; @};
+@end example
+
+Also warn about mid-rule values that are used but not set.
+For example, warn about unset @code{$$} in the mid-rule action in:
@example
-@group
-expr: @dots{}
- | '(' expr ')'
- @{ $$ = $2; @}
- | '(' error ')'
- @dots{}
-@end group
+exp: '1' @{ $1 = 1; @} '+' exp @{ $$ = $2 + $4; @};
@end example
-If this rule acts within the @code{hex} construct, it is not going to abort
-that construct (since it applies to an inner level of parentheses within
-the construct). Therefore, it should not clear the flag: the rest of
-the @code{hex} construct should be parsed with the flag still in effect.
+These warnings are not enabled by default since they sometimes prove to
+be false alarms in existing grammars employing the Yacc constructs
+@code{$0} or @code{$-@var{n}} (where @var{n} is some positive integer).
-What if there is an error recovery rule which might abort out of the
-@code{hex} construct or might not, depending on circumstances? There is no
-way you can write the action to determine whether a @code{hex} construct is
-being aborted or not. So if you are using a lexical tie-in, you had better
-make sure your error recovery rules are not of this kind. Each rule must
-be such that you can be sure that it always will, or always won't, have to
-clear the flag.
+@item yacc
+Incompatibilities with POSIX Yacc.
-@c ================================================== Debugging Your Parser
+@item conflicts-sr
+@itemx conflicts-rr
+Shift/reduce and reduce/reduce conflicts. These warnings are enabled by
+default. However, if the @code{%expect} or @code{%expect-rr} directive is
+specified, an unexpected number of conflicts is an error, and an expected
+number of conflicts is not reported, so @option{-W} and @option{--warning}
+then have no effect on the conflict report.
-@node Debugging
-@chapter Debugging Your Parser
+@item other
+All warnings not categorized above. These warnings are enabled by default.
-Developing a parser can be a challenge, especially if you don't
-understand the algorithm (@pxref{Algorithm, ,The Bison Parser
-Algorithm}). Even so, sometimes a detailed description of the automaton
-can help (@pxref{Understanding, , Understanding Your Parser}), or
-tracing the execution of the parser can give some insight on why it
-behaves improperly (@pxref{Tracing, , Tracing Your Parser}).
+This category is provided merely for the sake of completeness. Future
+releases of Bison may move warnings from this category to new, more specific
+categories.
-@menu
-* Understanding:: Understanding the structure of your parser.
-* Tracing:: Tracing the execution of your parser.
-@end menu
+@item all
+All the warnings.
+@item none
+Turn off all the warnings.
+@item error
+Treat warnings as errors.
+@end table
-@node Understanding
-@section Understanding Your Parser
+A category can be turned off by prefixing its name with @samp{no-}. For
+instance, @option{-Wno-yacc} will hide the warnings about
+POSIX Yacc incompatibilities.
+@end table
-As documented elsewhere (@pxref{Algorithm, ,The Bison Parser Algorithm})
-Bison parsers are @dfn{shift/reduce automata}. In some cases (much more
-frequent than one would hope), looking at this automaton is required to
-tune or simply fix a parser. Bison provides two different
-representation of it, either textually or graphically (as a DOT file).
+@noindent
+Tuning the parser:
-The textual file is generated when the options @option{--report} or
-@option{--verbose} are specified, see @xref{Invocation, , Invoking
-Bison}. Its name is made by removing @samp{.tab.c} or @samp{.c} from
-the parser output file name, and adding @samp{.output} instead.
-Therefore, if the input file is @file{foo.y}, then the parser file is
-called @file{foo.tab.c} by default. As a consequence, the verbose
-output file is called @file{foo.output}.
+@table @option
+@item -t
+@itemx --debug
+In the parser implementation file, define the macro @code{YYDEBUG} to
+1 if it is not already defined, so that the debugging facilities are
+compiled. @xref{Tracing, ,Tracing Your Parser}.
+
+@item -D @var{name}[=@var{value}]
+@itemx --define=@var{name}[=@var{value}]
+@itemx -F @var{name}[=@var{value}]
+@itemx --force-define=@var{name}[=@var{value}]
+Each of these is equivalent to @samp{%define @var{name} "@var{value}"}
+(@pxref{%define Summary}) except that Bison processes multiple
+definitions for the same @var{name} as follows:
-The following grammar file, @file{calc.y}, will be used in the sequel:
+@itemize
+@item
+Bison quietly ignores all command-line definitions for @var{name} except
+the last.
+@item
+If that command-line definition is specified by a @code{-D} or
+@code{--define}, Bison reports an error for any @code{%define}
+definition for @var{name}.
+@item
+If that command-line definition is specified by a @code{-F} or
+@code{--force-define} instead, Bison quietly ignores all @code{%define}
+definitions for @var{name}.
+@item
+Otherwise, Bison reports an error if there are multiple @code{%define}
+definitions for @var{name}.
+@end itemize
-@example
-%token NUM STR
-%left '+' '-'
-%left '*'
-%%
-exp: exp '+' exp
- | exp '-' exp
- | exp '*' exp
- | exp '/' exp
- | NUM
- ;
-useless: STR;
-%%
-@end example
+You should avoid using @code{-F} and @code{--force-define} in your
+makefiles unless you are confident that it is safe to quietly ignore
+any conflicting @code{%define} that may be added to the grammar file.
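+
+For instance, to enable the trace facility from the command line rather
+than from the grammar file (the grammar file name is illustrative):
+
+@example
+bison -Dparse.trace calc.y
+@end example
+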
-@command{bison} reports:
+@item -L @var{language}
+@itemx --language=@var{language}
+Specify the programming language for the generated parser, as if
+@code{%language} was specified (@pxref{Decl Summary, , Bison Declaration
+Summary}). Currently supported languages include C, C++, and Java.
+@var{language} is case-insensitive.
-@example
-calc.y: warning: 1 useless nonterminal and 1 useless rule
-calc.y:11.1-7: warning: useless nonterminal: useless
-calc.y:11.10-12: warning: useless rule: useless: STR
-calc.y: conflicts: 7 shift/reduce
-@end example
+This option is experimental and its effect may be modified in future
+releases.
-When given @option{--report=state}, in addition to @file{calc.tab.c}, it
-creates a file @file{calc.output} with contents detailed below. The
-order of the output and the exact presentation might vary, but the
-interpretation is the same.
+@item --locations
+Pretend that @code{%locations} was specified. @xref{Decl Summary}.
-The first section includes details on conflicts that were solved thanks
-to precedence and/or associativity:
+@item -p @var{prefix}
+@itemx --name-prefix=@var{prefix}
+Pretend that @code{%name-prefix "@var{prefix}"} was specified.
+@xref{Decl Summary}.
-@example
-Conflict in state 8 between rule 2 and token '+' resolved as reduce.
-Conflict in state 8 between rule 2 and token '-' resolved as reduce.
-Conflict in state 8 between rule 2 and token '*' resolved as shift.
-@exdent @dots{}
-@end example
+@item -l
+@itemx --no-lines
+Don't put any @code{#line} preprocessor commands in the parser
+implementation file. Ordinarily Bison puts them in the parser
+implementation file so that the C compiler and debuggers will
+associate errors with your source file, the grammar file. This option
+causes them to associate errors with the parser implementation file,
+treating it as an independent source file in its own right.
-@noindent
-The next section lists states that still have conflicts.
+@item -S @var{file}
+@itemx --skeleton=@var{file}
+Specify the skeleton to use, similar to @code{%skeleton}
+(@pxref{Decl Summary, , Bison Declaration Summary}).
-@example
-State 8 conflicts: 1 shift/reduce
-State 9 conflicts: 1 shift/reduce
-State 10 conflicts: 1 shift/reduce
-State 11 conflicts: 4 shift/reduce
-@end example
+@c You probably don't need this option unless you are developing Bison.
+@c You should use @option{--language} if you want to specify the skeleton for a
+@c different language, because it is clearer and because it will always
+@c choose the correct skeleton for non-deterministic or push parsers.
+
+If @var{file} does not contain a @code{/}, @var{file} is the name of a skeleton
+file in the Bison installation directory.
+If it does, @var{file} is an absolute file name or a file name relative to the
+current working directory.
+This is similar to how most shells resolve commands.
+
+@item -k
+@itemx --token-table
+Pretend that @code{%token-table} was specified. @xref{Decl Summary}.
+@end table
@noindent
-@cindex token, useless
-@cindex useless token
-@cindex nonterminal, useless
-@cindex useless nonterminal
-@cindex rule, useless
-@cindex useless rule
-The next section reports useless tokens, nonterminal and rules. Useless
-nonterminals and rules are removed in order to produce a smaller parser,
-but useless tokens are preserved, since they might be used by the
-scanner (note the difference between ``useless'' and ``not used''
-below):
+Adjust the output:
-@example
-Useless nonterminals:
- useless
+@table @option
+@item --defines[=@var{file}]
+Pretend that @code{%defines} was specified, i.e., write an extra output
+file containing macro definitions for the token type names defined in
+the grammar, as well as a few other declarations. @xref{Decl Summary}.
-Terminals which are not used:
- STR
+@item -d
+This is the same as @code{--defines} except @code{-d} does not accept a
+@var{file} argument since POSIX Yacc requires that @code{-d} can be bundled
+with other short options.
-Useless rules:
-#6 useless: STR;
-@end example
+@item -b @var{file-prefix}
+@itemx --file-prefix=@var{prefix}
+Pretend that @code{%file-prefix} was specified, i.e., specify a prefix to use
+for all Bison output file names. @xref{Decl Summary}.
-@noindent
-The next section reproduces the exact grammar that Bison used:
+@item -r @var{things}
+@itemx --report=@var{things}
+Write an extra output file containing a verbose description of the
+comma-separated list of @var{things} among:
-@example
-Grammar
+@table @code
+@item state
+Description of the grammar, conflicts (resolved and unresolved), and
+parser's automaton.
- Number, Line, Rule
- 0 5 $accept -> exp $end
- 1 5 exp -> exp '+' exp
- 2 6 exp -> exp '-' exp
- 3 7 exp -> exp '*' exp
- 4 8 exp -> exp '/' exp
- 5 9 exp -> NUM
-@end example
+@item lookahead
+Implies @code{state} and augments the description of the automaton with
+each rule's lookahead set.
-@noindent
-and reports the uses of the symbols:
+@item itemset
+Implies @code{state} and augments the description of the automaton with
+the full set of items for each state, instead of its core only.
+@end table
-@example
-Terminals, with rules where they appear
+@item --report-file=@var{file}
+Specify the @var{file} for the verbose description.
-$end (0) 0
-'*' (42) 3
-'+' (43) 1
-'-' (45) 2
-'/' (47) 4
-error (256)
-NUM (258) 5
+@item -v
+@itemx --verbose
+Pretend that @code{%verbose} was specified, i.e., write an extra output
+file containing verbose descriptions of the grammar and
+parser. @xref{Decl Summary}.
-Nonterminals, with rules where they appear
+@item -o @var{file}
+@itemx --output=@var{file}
+Specify the @var{file} for the parser implementation file.
-$accept (8)
- on left: 0
-exp (9)
- on left: 1 2 3 4 5, on right: 0 1 2 3 4
-@end example
+The other output files' names are constructed from @var{file} as
+described under the @samp{-v} and @samp{-d} options.
-@noindent
-@cindex item
-@cindex pointed rule
-@cindex rule, pointed
-Bison then proceeds onto the automaton itself, describing each state
-with it set of @dfn{items}, also known as @dfn{pointed rules}. Each
-item is a production rule together with a point (marked by @samp{.})
-that the input cursor.
+@item -g [@var{file}]
+@itemx --graph[=@var{file}]
+Output a graphical representation of the parser's
+automaton computed by Bison, in @uref{http://www.graphviz.org/, Graphviz}
+@uref{http://www.graphviz.org/doc/info/lang.html, DOT} format.
+@code{@var{file}} is optional.
+If omitted and the grammar file is @file{foo.y}, the output file will be
+@file{foo.dot}.
+
+@item -x [@var{file}]
+@itemx --xml[=@var{file}]
+Output an XML report of the parser's automaton computed by Bison.
+@code{@var{file}} is optional.
+If omitted and the grammar file is @file{foo.y}, the output file will be
+@file{foo.xml}.
+(The current XML schema is experimental and may evolve.
+More user feedback will help to stabilize it.)
+@end table
-@example
-state 0
+@node Option Cross Key
+@section Option Cross Key
- $accept -> . exp $ (rule 0)
+Here is a list of options, alphabetized by long option, to help you find
+the corresponding short option and directive.
- NUM shift, and go to state 1
+@multitable {@option{--force-define=@var{name}[=@var{value}]}} {@option{-F @var{name}[=@var{value}]}} {@code{%nondeterministic-parser}}
+@headitem Long Option @tab Short Option @tab Bison Directive
+@include cross-options.texi
+@end multitable
- exp go to state 2
-@end example
+@node Yacc Library
+@section Yacc Library
-This reads as follows: ``state 0 corresponds to being at the very
-beginning of the parsing, in the initial rule, right before the start
-symbol (here, @code{exp}). When the parser returns to this state right
-after having reduced a rule that produced an @code{exp}, the control
-flow jumps to state 2. If there is no such transition on a nonterminal
-symbol, and the lookahead is a @code{NUM}, then this token is shifted on
-the parse stack, and the control flow jumps to state 1. Any other
-lookahead triggers a syntax error.''
+The Yacc library contains default implementations of the
+@code{yyerror} and @code{main} functions. These default
+implementations are normally not useful, but POSIX requires
+them. To use the Yacc library, link your program with the
+@option{-ly} option. Note that Bison's implementation of the Yacc
+library is distributed under the terms of the GNU General
+Public License (@pxref{Copying}).
-@cindex core, item set
-@cindex item set core
-@cindex kernel, item set
-@cindex item set core
-Even though the only active rule in state 0 seems to be rule 0, the
-report lists @code{NUM} as a lookahead token because @code{NUM} can be
-at the beginning of any rule deriving an @code{exp}. By default Bison
-reports the so-called @dfn{core} or @dfn{kernel} of the item set, but if
-you want to see more detail you can invoke @command{bison} with
-@option{--report=itemset} to list all the items, include those that can
-be derived:
+If you use the Yacc library's @code{yyerror} function, you should
+declare @code{yyerror} as follows:
@example
-state 0
-
- $accept -> . exp $ (rule 0)
- exp -> . exp '+' exp (rule 1)
- exp -> . exp '-' exp (rule 2)
- exp -> . exp '*' exp (rule 3)
- exp -> . exp '/' exp (rule 4)
- exp -> . NUM (rule 5)
+int yyerror (char const *);
+@end example
- NUM shift, and go to state 1
+Bison ignores the @code{int} value returned by this @code{yyerror}.
+If you use the Yacc library's @code{main} function, your
+@code{yyparse} function should have the following type signature:
- exp go to state 2
+@example
+int yyparse (void);
@end example
-@noindent
-In the state 1...
+@c ================================================= C++ Bison
-@example
-state 1
+@node Other Languages
+@chapter Parsers Written In Other Languages
- exp -> NUM . (rule 5)
+@menu
+* C++ Parsers:: The interface to generate C++ parser classes
+* Java Parsers:: The interface to generate Java parser classes
+@end menu
- $default reduce using rule 5 (exp)
-@end example
+@node C++ Parsers
+@section C++ Parsers
-@noindent
-the rule 5, @samp{exp: NUM;}, is completed. Whatever the lookahead token
-(@samp{$default}), the parser will reduce it. If it was coming from
-state 0, then, after this reduction it will return to state 0, and will
-jump to state 2 (@samp{exp: go to state 2}).
+@menu
+* C++ Bison Interface:: Asking for C++ parser generation
+* C++ Semantic Values:: %union vs. C++
+* C++ Location Values:: The position and location classes
+* C++ Parser Interface:: Instantiating and running the parser
+* C++ Scanner Interface:: Exchanges between yylex and parse
+* A Complete C++ Example:: Demonstrating their use
+@end menu
-@example
-state 2
+@node C++ Bison Interface
+@subsection C++ Bison Interface
+@c - %skeleton "lalr1.cc"
+@c - Always pure
+@c - initial action
- $accept -> exp . $ (rule 0)
- exp -> exp . '+' exp (rule 1)
- exp -> exp . '-' exp (rule 2)
- exp -> exp . '*' exp (rule 3)
- exp -> exp . '/' exp (rule 4)
+The C++ deterministic parser is selected using the skeleton directive,
+@samp{%skeleton "lalr1.cc"}, or the synonymous command-line option
+@option{--skeleton=lalr1.cc}.
+@xref{Decl Summary}.
- $ shift, and go to state 3
- '+' shift, and go to state 4
- '-' shift, and go to state 5
- '*' shift, and go to state 6
- '/' shift, and go to state 7
-@end example
+When run, @command{bison} will create several entities in the @samp{yy}
+namespace.
+@findex %define api.namespace
+Use the @samp{%define api.namespace} directive to change the namespace name;
+see @ref{%define Summary,,api.namespace}. The various classes are generated
+in the following files:
-@noindent
-In state 2, the automaton can only shift a symbol. For instance,
-because of the item @samp{exp -> exp . '+' exp}, if the lookahead if
-@samp{+}, it will be shifted on the parse stack, and the automaton
-control will jump to state 4, corresponding to the item @samp{exp -> exp
-'+' . exp}. Since there is no default action, any other token than
-those listed above will trigger a syntax error.
+@table @file
+@item position.hh
+@itemx location.hh
+The definition of the classes @code{position} and @code{location},
+used for location tracking when enabled. @xref{C++ Location Values}.
-The state 3 is named the @dfn{final state}, or the @dfn{accepting
-state}:
+@item stack.hh
+An auxiliary class @code{stack} used by the parser.
-@example
-state 3
+@item @var{file}.hh
+@itemx @var{file}.cc
+(Assuming the extension of the grammar file was @samp{.yy}.) The
+declaration and implementation of the C++ parser class. The basename
+and extension of these two files follow the same rules as with regular C
+parsers (@pxref{Invocation}).
- $accept -> exp $ . (rule 0)
+The header is @emph{mandatory}; you must either pass
+@option{-d}/@option{--defines} to @command{bison}, or use the
+@samp{%defines} directive.
+@end table
- $default accept
-@end example
+All these files are documented using Doxygen; run @command{doxygen}
+for a complete and accurate documentation.
-@noindent
-the initial rule is completed (the start symbol and the end
-of input were read), the parsing exits successfully.
+@node C++ Semantic Values
+@subsection C++ Semantic Values
+@c - No objects in unions
+@c - YYSTYPE
+@c - Printer and destructor
-The interpretation of states 4 to 7 is straightforward, and is left to
-the reader.
+Bison supports two different means to handle semantic values in C++. One is
+similar to the C interface, and relies on unions (@pxref{C++ Unions}). As C++
+practitioners know, unions are inconvenient in C++; therefore another
+approach, based on variants, is provided (@pxref{C++ Variants}).
-@example
-state 4
+@menu
+* C++ Unions:: Semantic values cannot be objects
+* C++ Variants:: Using objects as semantic values
+@end menu
- exp -> exp '+' . exp (rule 1)
+@node C++ Unions
+@subsubsection C++ Unions
- NUM shift, and go to state 1
+The @code{%union} directive works as in C; see @ref{Union Decl, ,The
+Collection of Value Types}. In particular it produces a genuine
+@code{union}, which has a few specific features in C++.
+@itemize @minus
+@item
+The type @code{YYSTYPE} is defined but its use is discouraged: rather
+you should refer to the parser's encapsulated type
+@code{yy::parser::semantic_type}.
+@item
+Non-POD (Plain Old Data) types cannot be used. C++ forbids any
+instance of classes with constructors in unions: only @emph{pointers}
+to such objects are allowed.
+@end itemize
- exp go to state 8
+Because objects have to be stored via pointers, memory is not
+reclaimed automatically: using the @code{%destructor} directive is the
+only means to avoid leaks. @xref{Destructor Decl, , Freeing Discarded
+Symbols}.
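+
+For instance, a minimal union-based sketch (using the same @code{sval} field
+and @code{STRING} token as the @code{%union} shown in the next section) pairs
+the pointer member with a @code{%destructor}:
+
+@example
+%union
+@{
+  std::string* sval;   // Only a pointer: objects cannot live in the union.
+@}
+%token <sval> STRING;
+%destructor @{ delete $$; @} <sval>;   // Reclaim discarded values.
+@end example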
-state 5
+@node C++ Variants
+@subsubsection C++ Variants
- exp -> exp '-' . exp (rule 2)
+Starting with version 2.6, Bison provides a @emph{variant}-based
+implementation of semantic values for C++. This alleviates all the
+limitations reported in the previous section; in particular, object
+types can be used without pointers.
- NUM shift, and go to state 1
+To enable variant-based semantic values, set the @code{%define} variable
+@code{variant} (@pxref{%define Summary,, variant}). Once this is defined,
+@code{%union} is ignored; instead of using the names of the fields of the
+@code{%union} to ``type'' the symbols, use genuine types.
- exp go to state 9
+For instance, instead of
-state 6
+@example
+%union
+@{
+ int ival;
+ std::string* sval;
+@}
+%token <ival> NUMBER;
+%token <sval> STRING;
+@end example
- exp -> exp '*' . exp (rule 3)
+@noindent
+write
- NUM shift, and go to state 1
+@example
+%token <int> NUMBER;
+%token <std::string> STRING;
+@end example
- exp go to state 10
+@code{STRING} is no longer a pointer, which should considerably simplify the
+user actions in the grammar and in the scanner (in particular the memory
+management).
-state 7
+Since C++ features destructors, and since it is customary to specialize
+@code{operator<<} to support uniform printing of values, variants also
+typically simplify Bison printers and destructors.
- exp -> exp '/' . exp (rule 4)
+Variants are stricter than unions. When based on unions, you may play any
+dirty game with @code{yylval}, say storing an @code{int}, reading a
+@code{char*}, and then storing a @code{double} in it. This is no longer
+possible with variants: they must be initialized, then assigned to, and
+eventually, destroyed.
- NUM shift, and go to state 1
+@deftypemethod {semantic_type} {T&} build<T> ()
+Initialize, but leave empty. Return a reference to where the actual value
+may be stored. Requires that the variant was not initialized yet.
+@end deftypemethod
- exp go to state 11
-@end example
+@deftypemethod {semantic_type} {T&} build<T> (const T& @var{t})
+Initialize, and copy-construct from @var{t}.
+@end deftypemethod
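+
+As an illustrative sketch (not generated code), both forms can be exercised
+directly on a @code{semantic_type} object; in practice they are used on
+@code{yylval} in the scanner (@pxref{Split Symbols}):
+
+@example
+yy::parser::semantic_type num, str;
+num.build<int> () = 42;              // Initialize empty, then assign.
+str.build (std::string ("hello"));   // Copy-construct from the argument.
+@end example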
-As was announced in beginning of the report, @samp{State 8 conflicts:
-1 shift/reduce}:
-@example
-state 8
+@strong{Warning}: We do not use Boost.Variant, for two reasons. First, it
+appeared unacceptable to require Boost on the user's machine (i.e., the
+machine on which the generated parser will be compiled, not the machine on
+which @command{bison} was run). Second, for each possible semantic value,
+Boost.Variant not only stores the value, but also a tag specifying its
+type. But the parser already ``knows'' the type of the semantic value, so
+that would be duplicating the information.
- exp -> exp . '+' exp (rule 1)
- exp -> exp '+' exp . (rule 1)
- exp -> exp . '-' exp (rule 2)
- exp -> exp . '*' exp (rule 3)
- exp -> exp . '/' exp (rule 4)
+Therefore we developed lightweight variants whose type tag is external (so
+they are really like C++ @code{union}s). But our code is much
+less mature than Boost.Variant, so there are a number of limitations in
+(the current implementation of) variants:
+@itemize
+@item
+Alignment must be enforced: values should be aligned in memory according to
+the most demanding type. Computing the smallest possible alignment requires
+meta-programming techniques that are not currently implemented in Bison;
+since, as far as we know, @code{double} is the most demanding type on all
+platforms, alignment is therefore enforced for @code{double} whatever types
+are actually used. This may waste space in some cases.
- '*' shift, and go to state 6
- '/' shift, and go to state 7
+@item
+Our implementation does not conform to strict aliasing rules. Alias
+analysis is a technique used in optimizing compilers to detect when two
+pointers are disjoint (they cannot ``meet''). Our implementation breaks
+some of the rules that G++ 4.4 uses in its alias analysis, so @emph{strict
+alias analysis must be disabled}. Use the option
+@option{-fno-strict-aliasing} to compile the generated parser.
- '/' [reduce using rule 1 (exp)]
- $default reduce using rule 1 (exp)
-@end example
+@item
+There might be portability issues we are not aware of.
+@end itemize
-Indeed, there are two actions associated to the lookahead @samp{/}:
-either shifting (and going to state 7), or reducing rule 1. The
-conflict means that either the grammar is ambiguous, or the parser lacks
-information to make the right decision. Indeed the grammar is
-ambiguous, as, since we did not specify the precedence of @samp{/}, the
-sentence @samp{NUM + NUM / NUM} can be parsed as @samp{NUM + (NUM /
-NUM)}, which corresponds to shifting @samp{/}, or as @samp{(NUM + NUM) /
-NUM}, which corresponds to reducing rule 1.
+As far as we know, these limitations @emph{can} be alleviated. All it takes
+is some time and/or some talented C++ hacker willing to contribute to Bison.
-Because in @acronym{LALR}(1) parsing a single decision can be made, Bison
-arbitrarily chose to disable the reduction, see @ref{Shift/Reduce, ,
-Shift/Reduce Conflicts}. Discarded actions are reported in between
-square brackets.
+@node C++ Location Values
+@subsection C++ Location Values
+@c - %locations
+@c - class Position
+@c - class Location
+@c - %define filename_type "const symbol::Symbol"
-Note that all the previous states had a single possible action: either
-shifting the next token and going to the corresponding state, or
-reducing a single rule. In the other cases, i.e., when shifting
-@emph{and} reducing is possible or when @emph{several} reductions are
-possible, the lookahead is required to select the action. State 8 is
-one such state: if the lookahead is @samp{*} or @samp{/} then the action
-is shifting, otherwise the action is reducing rule 1. In other words,
-the first two items, corresponding to rule 1, are not eligible when the
-lookahead token is @samp{*}, since we specified that @samp{*} has higher
-precedence than @samp{+}. More generally, some items are eligible only
-with some set of possible lookahead tokens. When run with
-@option{--report=lookahead}, Bison specifies these lookahead tokens:
+When the directive @code{%locations} is used, the C++ parser supports
+location tracking; see @ref{Tracking Locations}. Two auxiliary classes
+define a @code{position}, a single point in a file, and a @code{location}, a
+range composed of a pair of @code{position}s (possibly spanning several
+files).
-@example
-state 8
+@tindex uint
+In this section @code{uint} is an abbreviation for @code{unsigned int}: in
+genuine code only the latter is used.
- exp -> exp . '+' exp [$, '+', '-', '/'] (rule 1)
- exp -> exp '+' exp . [$, '+', '-', '/'] (rule 1)
- exp -> exp . '-' exp (rule 2)
- exp -> exp . '*' exp (rule 3)
- exp -> exp . '/' exp (rule 4)
+@menu
+* C++ position:: One point in the source file
+* C++ location:: Two points in the source file
+@end menu
- '*' shift, and go to state 6
- '/' shift, and go to state 7
+@node C++ position
+@subsubsection C++ @code{position}
- '/' [reduce using rule 1 (exp)]
- $default reduce using rule 1 (exp)
-@end example
+@deftypeop {Constructor} {position} {} position (std::string* @var{file} = 0, uint @var{line} = 1, uint @var{col} = 1)
+Create a @code{position} denoting a given point. Note that @code{file} is
+not reclaimed when the @code{position} is destroyed: memory management must
+be handled elsewhere.
+@end deftypeop
-The remaining states are similar:
+@deftypemethod {position} {void} initialize (std::string* @var{file} = 0, uint @var{line} = 1, uint @var{col} = 1)
+Reset the position to the given values.
+@end deftypemethod
-@example
-state 9
+@deftypeivar {position} {std::string*} file
+The name of the file. It will always be handled as a pointer; the
+parser will never duplicate nor deallocate it. As an experimental
+feature you may change it to @samp{@var{type}*} using @samp{%define
+filename_type "@var{type}"}.
+@end deftypeivar
+
+@deftypeivar {position} {uint} line
+The line, starting at 1.
+@end deftypeivar
- exp -> exp . '+' exp (rule 1)
- exp -> exp . '-' exp (rule 2)
- exp -> exp '-' exp . (rule 2)
- exp -> exp . '*' exp (rule 3)
- exp -> exp . '/' exp (rule 4)
+@deftypemethod {position} {uint} lines (int @var{height} = 1)
+Advance by @var{height} lines, resetting the column number.
+@end deftypemethod
- '*' shift, and go to state 6
- '/' shift, and go to state 7
+@deftypeivar {position} {uint} column
+The column, starting at 1.
+@end deftypeivar
- '/' [reduce using rule 2 (exp)]
- $default reduce using rule 2 (exp)
+@deftypemethod {position} {uint} columns (int @var{width} = 1)
+Advance by @var{width} columns, without changing the line number.
+@end deftypemethod
-state 10
+@deftypemethod {position} {position&} operator+= (int @var{width})
+@deftypemethodx {position} {position} operator+ (int @var{width})
+@deftypemethodx {position} {position&} operator-= (int @var{width})
+@deftypemethodx {position} {position} operator- (int @var{width})
+Various forms of syntactic sugar for @code{columns}.
+@end deftypemethod
- exp -> exp . '+' exp (rule 1)
- exp -> exp . '-' exp (rule 2)
- exp -> exp . '*' exp (rule 3)
- exp -> exp '*' exp . (rule 3)
- exp -> exp . '/' exp (rule 4)
+@deftypemethod {position} {bool} operator== (const position& @var{that})
+@deftypemethodx {position} {bool} operator!= (const position& @var{that})
+Whether @code{*this} and @code{that} denote equal/different positions.
+@end deftypemethod
- '/' shift, and go to state 7
+@deftypefun {std::ostream&} operator<< (std::ostream& @var{o}, const position& @var{p})
+Report @var{p} on @var{o} like this:
+@samp{@var{file}:@var{line}.@var{column}}, or
+@samp{@var{line}.@var{column}} if @var{file} is null.
+@end deftypefun
- '/' [reduce using rule 3 (exp)]
- $default reduce using rule 3 (exp)
+@node C++ location
+@subsubsection C++ @code{location}
-state 11
+@deftypeop {Constructor} {location} {} location (const position& @var{begin}, const position& @var{end})
+Create a @code{location} from the endpoints of the range.
+@end deftypeop
- exp -> exp . '+' exp (rule 1)
- exp -> exp . '-' exp (rule 2)
- exp -> exp . '*' exp (rule 3)
- exp -> exp . '/' exp (rule 4)
- exp -> exp '/' exp . (rule 4)
+@deftypeop {Constructor} {location} {} location (const position& @var{pos} = position())
+@deftypeopx {Constructor} {location} {} location (std::string* @var{file}, uint @var{line}, uint @var{col})
+Create a @code{location} denoting an empty range located at a given point.
+@end deftypeop
- '+' shift, and go to state 4
- '-' shift, and go to state 5
- '*' shift, and go to state 6
- '/' shift, and go to state 7
+@deftypemethod {location} {void} initialize (std::string* @var{file} = 0, uint @var{line} = 1, uint @var{col} = 1)
+Reset the location to an empty range at the given values.
+@end deftypemethod
- '+' [reduce using rule 4 (exp)]
- '-' [reduce using rule 4 (exp)]
- '*' [reduce using rule 4 (exp)]
- '/' [reduce using rule 4 (exp)]
- $default reduce using rule 4 (exp)
-@end example
+@deftypeivar {location} {position} begin
+@deftypeivarx {location} {position} end
+The first, inclusive, position of the range, and the first beyond.
+@end deftypeivar
-@noindent
-Observe that state 11 contains conflicts not only due to the lack of
-precedence of @samp{/} with respect to @samp{+}, @samp{-}, and
-@samp{*}, but also because the
-associativity of @samp{/} is not specified.
+@deftypemethod {location} {uint} columns (int @var{width} = 1)
+@deftypemethodx {location} {uint} lines (int @var{height} = 1)
+Advance the @code{end} position.
+@end deftypemethod
+@deftypemethod {location} {location} operator+ (const location& @var{end})
+@deftypemethodx {location} {location} operator+ (int @var{width})
+@deftypemethodx {location} {location} operator+= (int @var{width})
+Various forms of syntactic sugar.
+@end deftypemethod
-@node Tracing
-@section Tracing Your Parser
-@findex yydebug
-@cindex debugging
-@cindex tracing the parser
+@deftypemethod {location} {void} step ()
+Move @code{begin} onto @code{end}.
+@end deftypemethod
-If a Bison grammar compiles properly but doesn't do what you want when it
-runs, the @code{yydebug} parser-trace feature can help you figure out why.
+@deftypemethod {location} {bool} operator== (const location& @var{that})
+@deftypemethodx {location} {bool} operator!= (const location& @var{that})
+Whether @code{*this} and @code{that} denote equal/different ranges of
+positions.
+@end deftypemethod
-There are several means to enable compilation of trace facilities:
+@deftypefun {std::ostream&} operator<< (std::ostream& @var{o}, const location& @var{p})
+Report @var{p} on @var{o}, taking care of special cases such as: no
+@code{filename} defined, or equal filename/line or column.
+@end deftypefun
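+
+The following sketch (not generated code, and assuming the generated
+@file{location.hh} and @code{<iostream>} are included) illustrates the
+typical scanner-side usage: @code{step} starts a new token where the
+previous one ended, while @code{columns} and @code{lines} advance its
+@code{end}.
+
+@example
+yy::location loc;          // Null file name, line 1, column 1 for both points.
+loc.step ();               // Start a new token: begin = end.
+loc.columns (3);           // Three columns consumed (say, "foo").
+std::cout << loc << '\n';  // Report the current range.
+loc.lines ();              // A newline: end moves to the next line.
+@end example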
-@table @asis
-@item the macro @code{YYDEBUG}
-@findex YYDEBUG
-Define the macro @code{YYDEBUG} to a nonzero value when you compile the
-parser. This is compliant with @acronym{POSIX} Yacc. You could use
-@samp{-DYYDEBUG=1} as a compiler option or you could put @samp{#define
-YYDEBUG 1} in the prologue of the grammar file (@pxref{Prologue, , The
-Prologue}).
+@node C++ Parser Interface
+@subsection C++ Parser Interface
+@c - define parser_class_name
+@c - Ctor
+@c - parse, error, set_debug_level, debug_level, set_debug_stream,
+@c debug_stream.
+@c - Reporting errors
-@item the option @option{-t}, @option{--debug}
-Use the @samp{-t} option when you run Bison (@pxref{Invocation,
-,Invoking Bison}). This is @acronym{POSIX} compliant too.
+The output files @file{@var{output}.hh} and @file{@var{output}.cc}
+declare and define the parser class in the namespace @code{yy}. The
+class name defaults to @code{parser}, but may be changed using
+@samp{%define parser_class_name "@var{name}"}. The interface of
+this class is detailed below. It can be extended using the
+@code{%parse-param} feature: its semantics is slightly different, since
+each parameter becomes an additional member of the parser class, and an
+additional argument for its constructor.
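+
+For instance (a sketch borrowing the @code{calcxx_driver} type from the
+complete example, @pxref{A Complete C++ Example}), the directive
+
+@example
+%parse-param @{ calcxx_driver& driver @}
+@end example
+
+@noindent
+adds a @code{calcxx_driver& driver} member to the parser class and an
+argument to its constructor, so that, given a @code{calcxx_driver} object
+@code{d}, the parser is instantiated as @code{yy::parser parser (d);}.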
-@item the directive @samp{%debug}
-@findex %debug
-Add the @code{%debug} directive (@pxref{Decl Summary, ,Bison
-Declaration Summary}). This is a Bison extension, which will prove
-useful when Bison will output parsers for languages that don't use a
-preprocessor. Unless @acronym{POSIX} and Yacc portability matter to
-you, this is
-the preferred solution.
-@end table
+@defcv {Type} {parser} {semantic_type}
+@defcvx {Type} {parser} {location_type}
+The types for semantic values and locations (if enabled).
+@end defcv
-We suggest that you always enable the debug option so that debugging is
-always possible.
+@defcv {Type} {parser} {token}
+A structure that contains (only) the @code{yytokentype} enumeration, which
+defines the tokens. To refer to the token @code{FOO},
+use @code{yy::parser::token::FOO}. The scanner can use
+@samp{typedef yy::parser::token token;} to ``import'' the token enumeration
+(@pxref{Calc++ Scanner}).
+@end defcv
-The trace facility outputs messages with macro calls of the form
-@code{YYFPRINTF (stderr, @var{format}, @var{args})} where
-@var{format} and @var{args} are the usual @code{printf} format and
-arguments. If you define @code{YYDEBUG} to a nonzero value but do not
-define @code{YYFPRINTF}, @code{<stdio.h>} is automatically included
-and @code{YYFPRINTF} is defined to @code{fprintf}.
+@defcv {Type} {parser} {syntax_error}
+This class derives from @code{std::runtime_error}. Throw instances of it
+from the scanner or from the user actions to raise parse errors. This is
+equivalent to first
+invoking @code{error} to report the location and message of the syntax
+error, and then invoking @code{YYERROR} to enter the error-recovery mode.
+But contrary to @code{YYERROR}, which can only be invoked from user actions
+(i.e., written in the action itself), the exception can be thrown from
+functions invoked from the user action.
+@end defcv
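+
+For instance (a sketch assuming the default class name @code{parser},
+enabled locations, and a hypothetical semantic check), a user action may
+raise a syntax error as follows:
+
+@example
+exp: "number"
+     @{
+       if ($1 < 0)   // Hypothetical constraint on the value.
+         throw yy::parser::syntax_error (@@$, "negative number");
+       $$ = $1;
+     @};
+@end example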
-Once you have compiled the program with trace facilities, the way to
-request a trace is to store a nonzero value in the variable @code{yydebug}.
-You can do this by making the C code do it (in @code{main}, perhaps), or
-you can alter the value with a C debugger.
+@deftypemethod {parser} {} parser (@var{type1} @var{arg1}, ...)
+Build a new parser object. There are no arguments by default, unless
+@samp{%parse-param @{@var{type1} @var{arg1}@}} was used.
+@end deftypemethod
-Each step taken by the parser when @code{yydebug} is nonzero produces a
-line or two of trace information, written on @code{stderr}. The trace
-messages tell you these things:
+@deftypemethod {syntax_error} {} syntax_error (const location_type& @var{l}, const std::string& @var{m})
+@deftypemethodx {syntax_error} {} syntax_error (const std::string& @var{m})
+Instantiate a syntax-error exception.
+@end deftypemethod
-@itemize @bullet
-@item
-Each time the parser calls @code{yylex}, what kind of token was read.
+@deftypemethod {parser} {int} parse ()
+Run the syntactic analysis, and return 0 on success, 1 otherwise.
+@end deftypemethod
-@item
-Each time a token is shifted, the depth and complete contents of the
-state stack (@pxref{Parser States}).
+@deftypemethod {parser} {std::ostream&} debug_stream ()
+@deftypemethodx {parser} {void} set_debug_stream (std::ostream& @var{o})
+Get or set the stream used for tracing the parsing. It defaults to
+@code{std::cerr}.
+@end deftypemethod
-@item
-Each time a rule is reduced, which rule it is, and the complete contents
-of the state stack afterward.
-@end itemize
+@deftypemethod {parser} {debug_level_type} debug_level ()
+@deftypemethodx {parser} {void} set_debug_level (debug_level_type @var{l})
+Get or set the tracing level. Currently its value is either 0, no trace,
+or nonzero, full tracing.
+@end deftypemethod
-To make sense of this information, it helps to refer to the listing file
-produced by the Bison @samp{-v} option (@pxref{Invocation, ,Invoking
-Bison}). This file shows the meaning of each state in terms of
-positions in various rules, and also what each state will do with each
-possible input token. As you read the successive trace messages, you
-can see that the parser is functioning according to its specification in
-the listing file. Eventually you will arrive at the place where
-something undesirable happens, and you will see which parts of the
-grammar are to blame.
+@deftypemethod {parser} {void} error (const location_type& @var{l}, const std::string& @var{m})
+@deftypemethodx {parser} {void} error (const std::string& @var{m})
+The definition for this member function must be supplied by the user:
+the parser uses it to report a parser error occurring at @var{l},
+described by @var{m}. If location tracking is not enabled, the second
+signature is used.
+@end deftypemethod
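+
+A minimal sketch, assuming the default namespace and class name and enabled
+locations, could be:
+
+@example
+void
+yy::parser::error (const location_type& l, const std::string& m)
+@{
+  // Report the error on the standard error stream.
+  std::cerr << l << ": " << m << std::endl;
+@}
+@end example
+
+@noindent
+The complete example below instead forwards the report to the parsing
+driver (@pxref{Calc++ Parsing Driver}).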
-The parser file is a C program and you can use C debuggers on it, but it's
-not easy to interpret what it is doing. The parser function is a
-finite-state machine interpreter, and aside from the actions it executes
-the same code over and over. Only the values of variables show where in
-the grammar it is working.
-@findex YYPRINT
-The debugging information normally gives the token type of each token
-read, but not its semantic value. You can optionally define a macro
-named @code{YYPRINT} to provide a way to print the value. If you define
-@code{YYPRINT}, it should take three arguments. The parser will pass a
-standard I/O stream, the numeric code for the token type, and the token
-value (from @code{yylval}).
+@node C++ Scanner Interface
+@subsection C++ Scanner Interface
+@c - prefix for yylex.
+@c - Pure interface to yylex
+@c - %lex-param
-Here is an example of @code{YYPRINT} suitable for the multi-function
-calculator (@pxref{Mfcalc Decl, ,Declarations for @code{mfcalc}}):
+The parser invokes the scanner by calling @code{yylex}. Contrary to C
+parsers, C++ parsers are always pure: there is no point in using the
+@samp{%define api.pure} directive. The actual interface with @code{yylex}
+depends on whether you use unions or variants.
-@smallexample
-%@{
- static void print_token_value (FILE *, int, YYSTYPE);
- #define YYPRINT(file, type, value) print_token_value (file, type, value)
-%@}
+@menu
+* Split Symbols:: Passing symbols as two/three components
+* Complete Symbols:: Making symbols a whole
+@end menu
-@dots{} %% @dots{} %% @dots{}
+@node Split Symbols
+@subsubsection Split Symbols
-static void
-print_token_value (FILE *file, int type, YYSTYPE value)
-@{
- if (type == VAR)
- fprintf (file, "%s", value.tptr->name);
- else if (type == NUM)
- fprintf (file, "%d", value.val);
-@}
-@end smallexample
+With split symbols, the interface is as follows.
-@c ================================================= Invoking Bison
+@deftypemethod {parser} {int} yylex (semantic_type* @var{yylval}, location_type* @var{yylloc}, @var{type1} @var{arg1}, ...)
+@deftypemethodx {parser} {int} yylex (semantic_type* @var{yylval}, @var{type1} @var{arg1}, ...)
+Return the next token. Its type is the return value, its semantic value and
+location (if enabled) being @var{yylval} and @var{yylloc}. Invocations of
+@samp{%lex-param @{@var{type1} @var{arg1}@}} yield additional arguments.
+@end deftypemethod
-@node Invocation
-@chapter Invoking Bison
-@cindex invoking Bison
-@cindex Bison invocation
-@cindex options for invoking Bison
+Note that when using variants, the interface for @code{yylex} is the same,
+but @code{yylval} is handled differently.
-The usual way to invoke Bison is as follows:
+Regular union-based code in a Lex scanner typically looks like:
@example
-bison @var{infile}
+[0-9]+ @{
+ yylval.ival = text_to_int (yytext);
+ return yy::parser::token::INTEGER;
+ @}
+[a-z]+ @{
+ yylval.sval = new std::string (yytext);
+ return yy::parser::token::IDENTIFIER;
+ @}
@end example
-Here @var{infile} is the grammar file name, which usually ends in
-@samp{.y}. The parser file's name is made by replacing the @samp{.y}
-with @samp{.tab.c} and removing any leading directory. Thus, the
-@samp{bison foo.y} file name yields
-@file{foo.tab.c}, and the @samp{bison hack/foo.y} file name yields
-@file{foo.tab.c}. It's also possible, in case you are writing
-C++ code instead of C in your grammar file, to name it @file{foo.ypp}
-or @file{foo.y++}. Then, the output files will take an extension like
-the given one as input (respectively @file{foo.tab.cpp} and
-@file{foo.tab.c++}).
-This feature takes effect with all options that manipulate file names like
-@samp{-o} or @samp{-d}.
-
-For example :
+Using variants, @code{yylval} is already constructed, but it is not
+initialized. So the code would look like:
@example
-bison -d @var{infile.yxx}
+[0-9]+ @{
+ yylval.build<int>() = text_to_int (yytext);
+ return yy::parser::token::INTEGER;
+ @}
+[a-z]+ @{
+ yylval.build<std::string> () = yytext;
+ return yy::parser::token::IDENTIFIER;
+ @}
@end example
+
@noindent
-will produce @file{infile.tab.cxx} and @file{infile.tab.hxx}, and
+or
@example
-bison -d -o @var{output.c++} @var{infile.y}
+[0-9]+ @{
+ yylval.build(text_to_int (yytext));
+ return yy::parser::token::INTEGER;
+ @}
+[a-z]+ @{
+ yylval.build(std::string (yytext));
+ return yy::parser::token::IDENTIFIER;
+ @}
@end example
-@noindent
-will produce @file{output.c++} and @file{outfile.h++}.
-
-For compatibility with @acronym{POSIX}, the standard Bison
-distribution also contains a shell script called @command{yacc} that
-invokes Bison with the @option{-y} option.
-@menu
-* Bison Options:: All the options described in detail,
- in alphabetical order by short options.
-* Option Cross Key:: Alphabetical list of long options.
-* Yacc Library:: Yacc-compatible @code{yylex} and @code{main}.
-@end menu
-@node Bison Options
-@section Bison Options
+@node Complete Symbols
+@subsubsection Complete Symbols
-Bison supports both traditional single-letter options and mnemonic long
-option names. Long option names are indicated with @samp{--} instead of
-@samp{-}. Abbreviations for option names are allowed as long as they
-are unique. When a long option takes an argument, like
-@samp{--file-prefix}, connect the option name and the argument with
-@samp{=}.
+If you specified both @code{%define variant} and @code{%define lex_symbol},
+the @code{parser} class also defines the class @code{parser::symbol_type},
+which represents a @emph{complete} symbol, aggregating its type (i.e., the
+traditional value returned by @code{yylex}), its semantic value (i.e., the
+value passed in @code{yylval}), and possibly its location (@code{yylloc}).
-Here is a list of options that can be used with Bison, alphabetized by
-short option. It is followed by a cross key alphabetized by long
-option.
+@deftypemethod {symbol_type} {} symbol_type (token_type @var{type}, const semantic_type& @var{value}, const location_type& @var{location})
+Build a complete terminal symbol whose token type is @var{type}, and whose
+semantic value is @var{value}. If location tracking is enabled, also pass
+the @var{location}.
+@end deftypemethod
-@c Please, keep this ordered as in `bison --help'.
-@noindent
-Operations modes:
-@table @option
-@item -h
-@itemx --help
-Print a summary of the command-line options to Bison and exit.
+This interface is low-level and should not be used for two reasons. First,
+it is inconvenient: you still have to build the semantic value, which is a
+variant. Second, consistency is not enforced: as with unions, it is still
+possible to give an integer as the semantic value of a string.
-@item -V
-@itemx --version
-Print the version number of Bison and exit.
+So for each token type, Bison generates named constructors as follows.
-@item --print-localedir
-Print the name of the directory containing locale-dependent data.
+@deftypemethod {symbol_type} {} make_@var{token} (const @var{value_type}& @var{value}, const location_type& @var{location})
+@deftypemethodx {symbol_type} {} make_@var{token} (const location_type& @var{location})
+Build a complete terminal symbol for the token type @var{token} (not
+including the @code{api.tokens.prefix}) whose semantic value, if any, is
+@var{value} of the adequate @var{value_type}. If location tracking is enabled,
+also pass the @var{location}.
+@end deftypemethod
-@item -y
-@itemx --yacc
-Act more like the traditional Yacc command. This can cause
-different diagnostics to be generated, and may change behavior in
-other minor ways. Most importantly, imitate Yacc's output
-file name conventions, so that the parser output file is called
-@file{y.tab.c}, and the other outputs are called @file{y.output} and
-@file{y.tab.h}.
-Also, if generating an @acronym{LALR}(1) parser in C, generate @code{#define}
-statements in addition to an @code{enum} to associate token numbers with token
-names.
-Thus, the following shell script can substitute for Yacc, and the Bison
-distribution contains such a script for compatibility with @acronym{POSIX}:
+For instance, given the following declarations:
@example
-#! /bin/sh
-bison -y "$@@"
+%define api.tokens.prefix "TOK_"
+%token <std::string> IDENTIFIER;
+%token <int> INTEGER;
+%token COLON;
@end example
-The @option{-y}/@option{--yacc} option is intended for use with
-traditional Yacc grammars. If your grammar uses a Bison extension
-like @samp{%glr-parser}, Bison might not be Yacc-compatible even if
-this option is specified.
+@noindent
+Bison generates the following functions:
-@end table
+@example
+symbol_type make_IDENTIFIER(const std::string& v,
+ const location_type& l);
+symbol_type make_INTEGER(const int& v,
+ const location_type& loc);
+symbol_type make_COLON(const location_type& loc);
+@end example
@noindent
-Tuning the parser:
+which should be used in a Lex scanner as follows.
-@table @option
-@item -t
-@itemx --debug
-In the parser file, define the macro @code{YYDEBUG} to 1 if it is not
-already defined, so that the debugging facilities are compiled.
-@xref{Tracing, ,Tracing Your Parser}.
+@example
+[0-9]+ return yy::parser::make_INTEGER(text_to_int (yytext), loc);
+[a-z]+ return yy::parser::make_IDENTIFIER(yytext, loc);
+":" return yy::parser::make_COLON(loc);
+@end example
-@item -L @var{language}
-@itemx --language=@var{language}
-Specify the programming language for the generated parser, as if
-@code{%language} was specified (@pxref{Decl Summary, , Bison Declaration
-Summary}). Currently supported languages include C and C++.
-@var{language} is case-insensitive.
+Tokens that do not have an identifier are not accessible: you cannot simply
+use characters such as @code{':'}; they must be declared with @code{%token}.
-@item --locations
-Pretend that @code{%locations} was specified. @xref{Decl Summary}.
+@node A Complete C++ Example
+@subsection A Complete C++ Example
-@item -p @var{prefix}
-@itemx --name-prefix=@var{prefix}
-Pretend that @code{%name-prefix "@var{prefix}"} was specified.
-@xref{Decl Summary}.
+This section demonstrates the use of a C++ parser with a simple but
+complete example. This example should be available on your system,
+ready to compile, in the directory @dfn{.../bison/examples/calc++}. It
+focuses on the use of Bison; therefore the design of the various C++
+classes is very naive: no accessors, no encapsulation of members, etc.
+We will use a Lex scanner, and more precisely a Flex scanner, to
+demonstrate the various interactions. A hand-written scanner is
+actually easier to interface with.
-@item -l
-@itemx --no-lines
-Don't put any @code{#line} preprocessor commands in the parser file.
-Ordinarily Bison puts them in the parser file so that the C compiler
-and debuggers will associate errors with your source file, the
-grammar file. This option causes them to associate errors with the
-parser file, treating it as an independent source file in its own right.
+@menu
+* Calc++ --- C++ Calculator:: The specifications
+* Calc++ Parsing Driver:: An active parsing context
+* Calc++ Parser:: A parser class
+* Calc++ Scanner:: A pure C++ Flex scanner
+* Calc++ Top Level:: Conducting the band
+@end menu
+
+@node Calc++ --- C++ Calculator
+@subsubsection Calc++ --- C++ Calculator
+
+Of course the grammar is dedicated to arithmetic: a single
+expression, possibly preceded by variable assignments. An
+environment containing possibly predefined variables such as
+@code{one} and @code{two} is exchanged with the parser. An example
+of valid input follows.
+
+@example
+three := 3
+seven := one + two * three
+seven * seven
+@end example
+
+@node Calc++ Parsing Driver
+@subsubsection Calc++ Parsing Driver
+@c - An env
+@c - A place to store error messages
+@c - A place for the result
+
+To support a pure interface with the parser (and the scanner), the
+technique of the ``parsing context'' is convenient: a structure
+containing all the data to exchange. Since, in addition to simply
+launching the parsing, there are several auxiliary tasks to execute
+(opening the file for parsing, instantiating the parser, etc.), we recommend
+transforming the simple parsing context structure into a fully blown
+@dfn{parsing driver} class.
-@item -n
-@itemx --no-parser
-Pretend that @code{%no-parser} was specified. @xref{Decl Summary}.
+The declaration of this driver class, @file{calc++-driver.hh}, is as
+follows. The first part includes the CPP guard and imports the
+required standard library components, and the declaration of the parser
+class.
-@item -S @var{file}
-@itemx --skeleton=@var{file}
-Specify the skeleton to use, similar to @code{%skeleton}
-(@pxref{Decl Summary, , Bison Declaration Summary}).
+@comment file: calc++-driver.hh
+@example
+#ifndef CALCXX_DRIVER_HH
+# define CALCXX_DRIVER_HH
+# include <string>
+# include <map>
+# include "calc++-parser.hh"
+@end example
-You probably don't need this option unless you are developing Bison.
-You should use @option{--language} if you want to specify the skeleton for a
-different language, because it is clearer and because it will always
-choose the correct skeleton for non-deterministic or push parsers.
-If @var{file} does not contain a @code{/}, @var{file} is the name of a skeleton
-file in the Bison installation directory.
-If it does, @var{file} is an absolute file name or a file name relative to the
-current working directory.
-This is similar to how most shells resolve commands.
+@noindent
+Then comes the declaration of the scanning function. Flex expects
+the signature of @code{yylex} to be defined in the macro
+@code{YY_DECL}, and the C++ parser expects it to be declared. We can
+factor both as follows.
-@item -k
-@itemx --token-table
-Pretend that @code{%token-table} was specified. @xref{Decl Summary}.
-@end table
+@comment file: calc++-driver.hh
+@example
+// Tell Flex the lexer's prototype ...
+# define YY_DECL \
+ yy::calcxx_parser::symbol_type yylex (calcxx_driver& driver)
+// ... and declare it for the parser's sake.
+YY_DECL;
+@end example
@noindent
-Adjust the output:
-
-@table @option
-@item -d
-@itemx --defines
-Pretend that @code{%defines} was specified, i.e., write an extra output
-file containing macro definitions for the token type names defined in
-the grammar, as well as a few other declarations. @xref{Decl Summary}.
+The @code{calcxx_driver} class is then declared with its most obvious
+members.
-@item --defines=@var{defines-file}
-Same as above, but save in the file @var{defines-file}.
+@comment file: calc++-driver.hh
+@example
+// Conducting the whole scanning and parsing of Calc++.
+class calcxx_driver
+@{
+public:
+ calcxx_driver ();
+ virtual ~calcxx_driver ();
-@item -b @var{file-prefix}
-@itemx --file-prefix=@var{prefix}
-Pretend that @code{%file-prefix} was specified, i.e., specify prefix to use
-for all Bison output file names. @xref{Decl Summary}.
+ std::map<std::string, int> variables;
-@item -r @var{things}
-@itemx --report=@var{things}
-Write an extra output file containing verbose description of the comma
-separated list of @var{things} among:
+ int result;
+@end example
-@table @code
-@item state
-Description of the grammar, conflicts (resolved and unresolved), and
-@acronym{LALR} automaton.
+@noindent
+To encapsulate the coordination with the Flex scanner, it is useful to have
+member functions to open and close the scanning phase.
-@item lookahead
-Implies @code{state} and augments the description of the automaton with
-each rule's lookahead set.
+@comment file: calc++-driver.hh
+@example
+ // Handling the scanner.
+ void scan_begin ();
+ void scan_end ();
+ bool trace_scanning;
+@end example
-@item itemset
-Implies @code{state} and augments the description of the automaton with
-the full set of items for each state, instead of its core only.
-@end table
+@noindent
+Similarly for the parser itself.
-@item -v
-@itemx --verbose
-Pretend that @code{%verbose} was specified, i.e., write an extra output
-file containing verbose descriptions of the grammar and
-parser. @xref{Decl Summary}.
+@comment file: calc++-driver.hh
+@example
+ // Run the parser on file F.
+ // Return 0 on success.
+ int parse (const std::string& f);
+ // The name of the file being parsed.
+ // Used later to pass the file name to the location tracker.
+ std::string file;
+ // Whether parser traces should be generated.
+ bool trace_parsing;
+@end example
-@item -o @var{file}
-@itemx --output=@var{file}
-Specify the @var{file} for the parser file.
+@noindent
+To demonstrate pure handling of parse errors, instead of simply
+dumping them on the standard error output, we will pass them to the
+compiler driver using the following two member functions. Finally, we
+close the class declaration and CPP guard.
-The other output files' names are constructed from @var{file} as
-described under the @samp{-v} and @samp{-d} options.
+@comment file: calc++-driver.hh
+@example
+ // Error handling.
+ void error (const yy::location& l, const std::string& m);
+ void error (const std::string& m);
+@};
+#endif // ! CALCXX_DRIVER_HH
+@end example
-@item -g
-Output a graphical representation of the @acronym{LALR}(1) grammar
-automaton computed by Bison, in @uref{http://www.graphviz.org/, Graphviz}
-@uref{http://www.graphviz.org/doc/info/lang.html, @acronym{DOT}} format.
-If the grammar file is @file{foo.y}, the output file will
-be @file{foo.dot}.
-
-@item --graph=@var{graph-file}
-The behavior of @var{--graph} is the same than @samp{-g}. The only
-difference is that it has an optional argument which is the name of
-the output graph file.
-@end table
+The implementation of the driver is straightforward. The @code{parse}
+member function deserves some attention. The @code{error} functions
+are simple stubs; they should actually register the located error
+messages and set an error state.
-@node Option Cross Key
-@section Option Cross Key
+@comment file: calc++-driver.cc
+@example
+#include "calc++-driver.hh"
+#include "calc++-parser.hh"
-@c FIXME: How about putting the directives too?
-Here is a list of options, alphabetized by long option, to help you find
-the corresponding short option.
-
-@multitable {@option{--defines=@var{defines-file}}} {@option{-b @var{file-prefix}XXX}}
-@headitem Long Option @tab Short Option
-@item @option{--debug} @tab @option{-t}
-@item @option{--defines=@var{defines-file}} @tab @option{-d}
-@item @option{--file-prefix=@var{prefix}} @tab @option{-b @var{file-prefix}}
-@item @option{--graph=@var{graph-file}} @tab @option{-d}
-@item @option{--help} @tab @option{-h}
-@item @option{--name-prefix=@var{prefix}} @tab @option{-p @var{name-prefix}}
-@item @option{--no-lines} @tab @option{-l}
-@item @option{--no-parser} @tab @option{-n}
-@item @option{--output=@var{outfile}} @tab @option{-o @var{outfile}}
-@item @option{--print-localedir} @tab
-@item @option{--token-table} @tab @option{-k}
-@item @option{--verbose} @tab @option{-v}
-@item @option{--version} @tab @option{-V}
-@item @option{--yacc} @tab @option{-y}
-@end multitable
+calcxx_driver::calcxx_driver ()
+ : trace_scanning (false), trace_parsing (false)
+@{
+ variables["one"] = 1;
+ variables["two"] = 2;
+@}
-@node Yacc Library
-@section Yacc Library
+calcxx_driver::~calcxx_driver ()
+@{
+@}
-The Yacc library contains default implementations of the
-@code{yyerror} and @code{main} functions. These default
-implementations are normally not useful, but @acronym{POSIX} requires
-them. To use the Yacc library, link your program with the
-@option{-ly} option. Note that Bison's implementation of the Yacc
-library is distributed under the terms of the @acronym{GNU} General
-Public License (@pxref{Copying}).
+int
+calcxx_driver::parse (const std::string &f)
+@{
+ file = f;
+ scan_begin ();
+ yy::calcxx_parser parser (*this);
+ parser.set_debug_level (trace_parsing);
+ int res = parser.parse ();
+ scan_end ();
+ return res;
+@}
-If you use the Yacc library's @code{yyerror} function, you should
-declare @code{yyerror} as follows:
+void
+calcxx_driver::error (const yy::location& l, const std::string& m)
+@{
+ std::cerr << l << ": " << m << std::endl;
+@}
-@example
-int yyerror (char const *);
+void
+calcxx_driver::error (const std::string& m)
+@{
+ std::cerr << m << std::endl;
+@}
@end example
-Bison ignores the @code{int} value returned by this @code{yyerror}.
-If you use the Yacc library's @code{main} function, your
-@code{yyparse} function should have the following type signature:
+@node Calc++ Parser
+@subsubsection Calc++ Parser
+The grammar file @file{calc++-parser.yy} starts by asking for the C++
+deterministic parser skeleton and the creation of the parser header file,
+and specifies the name of the parser class. Because the C++ skeleton
+has changed several times, it is safer to require the version you designed
+the grammar for.
+
+@comment file: calc++-parser.yy
@example
-int yyparse (void);
+%skeleton "lalr1.cc" /* -*- C++ -*- */
+%require "@value{VERSION}"
+%defines
+%define parser_class_name "calcxx_parser"
@end example
-@c ================================================= C++ Bison
+@noindent
+@findex %define variant
+@findex %define lex_symbol
+This example will use genuine C++ objects as semantic values; therefore, we
+require the variant-based interface. To make sure we use it properly, we
+enable assertions. To fully benefit from type safety and a more natural
+definition of ``symbol'', we enable @code{lex_symbol}.
-@node C++ Language Interface
-@chapter C++ Language Interface
+@comment file: calc++-parser.yy
+@example
+%define variant
+%define parse.assert
+%define lex_symbol
+@end example
-@menu
-* C++ Parsers:: The interface to generate C++ parser classes
-* A Complete C++ Example:: Demonstrating their use
-@end menu
+@noindent
+@findex %code requires
+Then come the declarations/inclusions needed by the semantic values.
+Because the parser uses the parsing driver and vice versa, both would like
+to include the header of the other, which is, of course, insane. This
+mutual dependency will be broken using forward declarations. Because the
+driver's header needs detailed knowledge about the parser class (in
+particular its inner types), it is the parser's header that will use a
+forward declaration of the driver. @xref{%code Summary}.
-@node C++ Parsers
-@section C++ Parsers
+@comment file: calc++-parser.yy
+@example
+%code requires
+@{
+# include <string>
+class calcxx_driver;
+@}
+@end example
-@menu
-* C++ Bison Interface:: Asking for C++ parser generation
-* C++ Semantic Values:: %union vs. C++
-* C++ Location Values:: The position and location classes
-* C++ Parser Interface:: Instantiating and running the parser
-* C++ Scanner Interface:: Exchanges between yylex and parse
-@end menu
+@noindent
+The driver is passed by reference to the parser and to the scanner.
+This provides a simple but effective pure interface, not relying on
+global variables.
-@node C++ Bison Interface
-@subsection C++ Bison Interface
-@c - %language "C++"
-@c - Always pure
-@c - initial action
+@comment file: calc++-parser.yy
+@example
+// The parsing context.
+%param @{ calcxx_driver& driver @}
+@end example
-The C++ @acronym{LALR}(1) parser is selected using the language directive,
-@samp{%language "C++"}, or the synonymous command-line option
-@option{--language=c++}.
-@xref{Decl Summary}.
+@noindent
+Then we request location tracking, and initialize the
+first location's file name. Afterward, new locations are computed
+relative to the previous locations: the file name will be
+propagated.
-When run, @command{bison} will create several
-entities in the @samp{yy} namespace. Use the @samp{%name-prefix}
-directive to change the namespace name, see @ref{Decl Summary}. The
-various classes are generated in the following files:
+@comment file: calc++-parser.yy
+@example
+%locations
+%initial-action
+@{
+ // Initialize the initial location.
+ @@$.begin.filename = @@$.end.filename = &driver.file;
+@};
+@end example
-@table @file
-@item position.hh
-@itemx location.hh
-The definition of the classes @code{position} and @code{location},
-used for location tracking. @xref{C++ Location Values}.
+@noindent
+Use the following two directives to enable parser tracing and verbose error
+messages. However, verbose error messages can contain incorrect information
+(@pxref{LAC}).
-@item stack.hh
-An auxiliary class @code{stack} used by the parser.
+@comment file: calc++-parser.yy
+@example
+%define parse.trace
+%define parse.error verbose
+@end example
-@item @var{file}.hh
-@itemx @var{file}.cc
-(Assuming the extension of the input file was @samp{.yy}.) The
-declaration and implementation of the C++ parser class. The basename
-and extension of these two files follow the same rules as with regular C
-parsers (@pxref{Invocation}).
+@noindent
+@findex %code
+The code between @samp{%code @{} and @samp{@}} is output in the
+@file{*.cc} file; it needs detailed knowledge about the driver.
-The header is @emph{mandatory}; you must either pass
-@option{-d}/@option{--defines} to @command{bison}, or use the
-@samp{%defines} directive.
-@end table
+@comment file: calc++-parser.yy
+@example
+%code
+@{
+# include "calc++-driver.hh"
+@}
+@end example
-All these files are documented using Doxygen; run @command{doxygen}
-for a complete and accurate documentation.
-@node C++ Semantic Values
-@subsection C++ Semantic Values
-@c - No objects in unions
-@c - YYSTYPE
-@c - Printer and destructor
+@noindent
+The token numbered 0 corresponds to end of file; the following line
+allows for nicer error messages referring to ``end of file'' instead of
+``$end''. Similarly, user-friendly names are provided for each symbol. To
+avoid name clashes in the generated files (@pxref{Calc++ Scanner}), tokens
+are prefixed with @code{TOK_} (@pxref{%define Summary,,api.tokens.prefix}).
-The @code{%union} directive works as for C, see @ref{Union Decl, ,The
-Collection of Value Types}. In particular it produces a genuine
-@code{union}@footnote{In the future techniques to allow complex types
-within pseudo-unions (similar to Boost variants) might be implemented to
-alleviate these issues.}, which have a few specific features in C++.
-@itemize @minus
-@item
-The type @code{YYSTYPE} is defined but its use is discouraged: rather
-you should refer to the parser's encapsulated type
-@code{yy::parser::semantic_type}.
-@item
-Non POD (Plain Old Data) types cannot be used. C++ forbids any
-instance of classes with constructors in unions: only @emph{pointers}
-to such objects are allowed.
-@end itemize
+@comment file: calc++-parser.yy
+@example
+%define api.tokens.prefix "TOK_"
+%token
+ END 0 "end of file"
+ ASSIGN ":="
+ MINUS "-"
+ PLUS "+"
+ STAR "*"
+ SLASH "/"
+ LPAREN "("
+ RPAREN ")"
+;
+@end example
-Because objects have to be stored via pointers, memory is not
-reclaimed automatically: using the @code{%destructor} directive is the
-only means to avoid leaks. @xref{Destructor Decl, , Freeing Discarded
-Symbols}.
+@noindent
+Since we use variant-based semantic values, @code{%union} is not used, and
+both @code{%type} and @code{%token} expect genuine types, as opposed to type
+tags.
+@comment file: calc++-parser.yy
+@example
+%token <std::string> IDENTIFIER "identifier"
+%token <int> NUMBER "number"
+%type <int> exp
+@end example
-@node C++ Location Values
-@subsection C++ Location Values
-@c - %locations
-@c - class Position
-@c - class Location
-@c - %define filename_type "const symbol::Symbol"
+@noindent
+No @code{%destructor} is needed to enable memory deallocation during error
+recovery; the memory, for strings for instance, will be reclaimed by the
+regular destructors. All the values are printed using their
+@code{operator<<}.
-When the directive @code{%locations} is used, the C++ parser supports
-location tracking, see @ref{Locations, , Locations Overview}. Two
-auxiliary classes define a @code{position}, a single point in a file,
-and a @code{location}, a range composed of a pair of
-@code{position}s (possibly spanning several files).
+@c FIXME: Document %printer, and mention that it takes a braced-code operand.
+@comment file: calc++-parser.yy
+@example
+%printer @{ debug_stream () << $$; @} <*>;
+@end example
-@deftypemethod {position} {std::string*} file
-The name of the file. It will always be handled as a pointer, the
-parser will never duplicate nor deallocate it. As an experimental
-feature you may change it to @samp{@var{type}*} using @samp{%define
-filename_type "@var{type}"}.
-@end deftypemethod
+@noindent
+The grammar itself is straightforward (@pxref{Location Tracking Calc, ,
+Location Tracking Calculator: @code{ltcalc}}).
-@deftypemethod {position} {unsigned int} line
-The line, starting at 1.
-@end deftypemethod
+@comment file: calc++-parser.yy
+@example
+%%
+%start unit;
+unit: assignments exp @{ driver.result = $2; @};
-@deftypemethod {position} {unsigned int} lines (int @var{height} = 1)
-Advance by @var{height} lines, resetting the column number.
-@end deftypemethod
+assignments:
+ /* Nothing. */ @{@}
+| assignments assignment @{@};
-@deftypemethod {position} {unsigned int} column
-The column, starting at 0.
-@end deftypemethod
+assignment:
+ "identifier" ":=" exp @{ driver.variables[$1] = $3; @};
+
+%left "+" "-";
+%left "*" "/";
+exp:
+ exp "+" exp @{ $$ = $1 + $3; @}
+| exp "-" exp @{ $$ = $1 - $3; @}
+| exp "*" exp @{ $$ = $1 * $3; @}
+| exp "/" exp @{ $$ = $1 / $3; @}
+| "(" exp ")" @{ std::swap ($$, $2); @}
+| "identifier" @{ $$ = driver.variables[$1]; @}
+| "number" @{ std::swap ($$, $1); @};
+%%
+@end example
-@deftypemethod {position} {unsigned int} columns (int @var{width} = 1)
-Advance by @var{width} columns, without changing the line number.
-@end deftypemethod
+@noindent
+Finally, the @code{error} member function registers the errors with the
+driver.
-@deftypemethod {position} {position&} operator+= (position& @var{pos}, int @var{width})
-@deftypemethodx {position} {position} operator+ (const position& @var{pos}, int @var{width})
-@deftypemethodx {position} {position&} operator-= (const position& @var{pos}, int @var{width})
-@deftypemethodx {position} {position} operator- (position& @var{pos}, int @var{width})
-Various forms of syntactic sugar for @code{columns}.
-@end deftypemethod
+@comment file: calc++-parser.yy
+@example
+void
+yy::calcxx_parser::error (const location_type& l,
+ const std::string& m)
+@{
+ driver.error (l, m);
+@}
+@end example
-@deftypemethod {position} {position} operator<< (std::ostream @var{o}, const position& @var{p})
-Report @var{p} on @var{o} like this:
-@samp{@var{file}:@var{line}.@var{column}}, or
-@samp{@var{line}.@var{column}} if @var{file} is null.
-@end deftypemethod
+@node Calc++ Scanner
+@subsubsection Calc++ Scanner
-@deftypemethod {location} {position} begin
-@deftypemethodx {location} {position} end
-The first, inclusive, position of the range, and the first beyond.
-@end deftypemethod
+The Flex scanner first includes the driver declaration, then the
+parser's to get the set of defined tokens.
-@deftypemethod {location} {unsigned int} columns (int @var{width} = 1)
-@deftypemethodx {location} {unsigned int} lines (int @var{height} = 1)
-Advance the @code{end} position.
-@end deftypemethod
+@comment file: calc++-scanner.ll
+@example
+%@{ /* -*- C++ -*- */
+# include <cerrno>
+# include <climits>
+# include <cstdlib>
+# include <string>
+# include "calc++-driver.hh"
+# include "calc++-parser.hh"
-@deftypemethod {location} {location} operator+ (const location& @var{begin}, const location& @var{end})
-@deftypemethodx {location} {location} operator+ (const location& @var{begin}, int @var{width})
-@deftypemethodx {location} {location} operator+= (const location& @var{loc}, int @var{width})
-Various forms of syntactic sugar.
-@end deftypemethod
+// Work around an incompatibility in flex (at least versions
+// 2.5.31 through 2.5.33): it generates code that does
+// not conform to C89. See Debian bug 333231
+// <http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=333231>.
+# undef yywrap
+# define yywrap() 1
-@deftypemethod {location} {void} step ()
-Move @code{begin} onto @code{end}.
-@end deftypemethod
+// The location of the current token.
+static yy::location loc;
+%@}
+@end example
+@noindent
+Because there is no @code{#include}-like feature, we don't need
+@code{yywrap}; we don't need @code{unput} either; and since we parse an
+actual file, this is not an interactive session with the user.
+Finally, we enable scanner tracing.
-@node C++ Parser Interface
-@subsection C++ Parser Interface
-@c - define parser_class_name
-@c - Ctor
-@c - parse, error, set_debug_level, debug_level, set_debug_stream,
-@c debug_stream.
-@c - Reporting errors
+@comment file: calc++-scanner.ll
+@example
+%option noyywrap nounput batch debug
+@end example
-The output files @file{@var{output}.hh} and @file{@var{output}.cc}
-declare and define the parser class in the namespace @code{yy}. The
-class name defaults to @code{parser}, but may be changed using
-@samp{%define parser_class_name "@var{name}"}. The interface of
-this class is detailed below. It can be extended using the
-@code{%parse-param} feature: its semantics is slightly changed since
-it describes an additional member of the parser class, and an
-additional argument for its constructor.
+@noindent
+Abbreviations allow for more readable rules.
-@defcv {Type} {parser} {semantic_value_type}
-@defcvx {Type} {parser} {location_value_type}
-The types for semantics value and locations.
-@end defcv
+@comment file: calc++-scanner.ll
+@example
+id [a-zA-Z][a-zA-Z_0-9]*
+int [0-9]+
+blank [ \t]
+@end example
-@deftypemethod {parser} {} parser (@var{type1} @var{arg1}, ...)
-Build a new parser object. There are no arguments by default, unless
-@samp{%parse-param @{@var{type1} @var{arg1}@}} was used.
-@end deftypemethod
+@noindent
+The following paragraph suffices to track locations accurately. Each
+time @code{yylex} is invoked, the begin position is moved onto the end
+position. Then when a pattern is matched, its width is added to the end
+column. When matching ends of lines, the end
+cursor is adjusted, and each time blanks are matched, the begin cursor
+is moved onto the end cursor to effectively ignore the blanks
+preceding tokens. Comments would be handled the same way.
-@deftypemethod {parser} {int} parse ()
-Run the syntactic analysis, and return 0 on success, 1 otherwise.
-@end deftypemethod
+@comment file: calc++-scanner.ll
+@example
+@group
+%@{
+ // Code run each time a pattern is matched.
+ # define YY_USER_ACTION loc.columns (yyleng);
+%@}
+@end group
+%%
+@group
+%@{
+ // Code run each time yylex is called.
+ loc.step ();
+%@}
+@end group
+@{blank@}+ loc.step ();
+[\n]+ loc.lines (yyleng); loc.step ();
+@end example
-@deftypemethod {parser} {std::ostream&} debug_stream ()
-@deftypemethodx {parser} {void} set_debug_stream (std::ostream& @var{o})
-Get or set the stream used for tracing the parsing. It defaults to
-@code{std::cerr}.
-@end deftypemethod
+@noindent
+The rules are simple. The driver is used to report errors.
-@deftypemethod {parser} {debug_level_type} debug_level ()
-@deftypemethodx {parser} {void} set_debug_level (debug_level @var{l})
-Get or set the tracing level. Currently its value is either 0, no trace,
-or nonzero, full tracing.
-@end deftypemethod
+@comment file: calc++-scanner.ll
+@example
+"-" return yy::calcxx_parser::make_MINUS(loc);
+"+" return yy::calcxx_parser::make_PLUS(loc);
+"*" return yy::calcxx_parser::make_STAR(loc);
+"/" return yy::calcxx_parser::make_SLASH(loc);
+"(" return yy::calcxx_parser::make_LPAREN(loc);
+")" return yy::calcxx_parser::make_RPAREN(loc);
+":=" return yy::calcxx_parser::make_ASSIGN(loc);
-@deftypemethod {parser} {void} error (const location_type& @var{l}, const std::string& @var{m})
-The definition for this member function must be supplied by the user:
-the parser uses it to report a parser error occurring at @var{l},
-described by @var{m}.
-@end deftypemethod
+@group
+@{int@} @{
+ errno = 0;
+ long n = strtol (yytext, NULL, 10);
+ if (! (INT_MIN <= n && n <= INT_MAX && errno != ERANGE))
+ driver.error (loc, "integer is out of range");
+ return yy::calcxx_parser::make_NUMBER(n, loc);
+@}
+@end group
+@{id@} return yy::calcxx_parser::make_IDENTIFIER(yytext, loc);
+. driver.error (loc, "invalid character");
+<<EOF>> return yy::calcxx_parser::make_END(loc);
+%%
+@end example
+@noindent
+Finally, because the driver's scanner-related member functions depend
+on the scanner's data, it is simpler to implement them in this file.
-@node C++ Scanner Interface
-@subsection C++ Scanner Interface
-@c - prefix for yylex.
-@c - Pure interface to yylex
-@c - %lex-param
+@comment file: calc++-scanner.ll
+@example
+@group
+void
+calcxx_driver::scan_begin ()
+@{
+ yy_flex_debug = trace_scanning;
+ if (file == "-")
+ yyin = stdin;
+ else if (!(yyin = fopen (file.c_str (), "r")))
+ @{
+ error ("cannot open " + file + ": " + strerror(errno));
+ exit (EXIT_FAILURE);
+ @}
+@}
+@end group
-The parser invokes the scanner by calling @code{yylex}. Contrary to C
-parsers, C++ parsers are always pure: there is no point in using the
-@code{%pure-parser} directive. Therefore the interface is as follows.
+@group
+void
+calcxx_driver::scan_end ()
+@{
+ fclose (yyin);
+@}
+@end group
+@end example
-@deftypemethod {parser} {int} yylex (semantic_value_type& @var{yylval}, location_type& @var{yylloc}, @var{type1} @var{arg1}, ...)
-Return the next token. Its type is the return value, its semantic
-value and location being @var{yylval} and @var{yylloc}. Invocations of
-@samp{%lex-param @{@var{type1} @var{arg1}@}} yield additional arguments.
-@end deftypemethod
+@node Calc++ Top Level
+@subsubsection Calc++ Top Level
+The top level file, @file{calc++.cc}, poses no problem.
-@node A Complete C++ Example
-@section A Complete C++ Example
+@comment file: calc++.cc
+@example
+#include <iostream>
+#include "calc++-driver.hh"
-This section demonstrates the use of a C++ parser with a simple but
-complete example. This example should be available on your system,
-ready to compile, in the directory @dfn{../bison/examples/calc++}. It
-focuses on the use of Bison, therefore the design of the various C++
-classes is very naive: no accessors, no encapsulation of members etc.
-We will use a Lex scanner, and more precisely, a Flex scanner, to
-demonstrate the various interaction. A hand written scanner is
-actually easier to interface with.
+@group
+int
+main (int argc, char *argv[])
+@{
+ int res = 0;
+ calcxx_driver driver;
+ for (++argv; argv[0]; ++argv)
+ if (*argv == std::string ("-p"))
+ driver.trace_parsing = true;
+ else if (*argv == std::string ("-s"))
+ driver.trace_scanning = true;
+ else if (!driver.parse (*argv))
+ std::cout << driver.result << std::endl;
+ else
+ res = 1;
+ return res;
+@}
+@end group
+@end example
+
+@node Java Parsers
+@section Java Parsers
@menu
-* Calc++ --- C++ Calculator:: The specifications
-* Calc++ Parsing Driver:: An active parsing context
-* Calc++ Parser:: A parser class
-* Calc++ Scanner:: A pure C++ Flex scanner
-* Calc++ Top Level:: Conducting the band
+* Java Bison Interface:: Asking for Java parser generation
+* Java Semantic Values:: %type and %token vs. Java
+* Java Location Values:: The position and location classes
+* Java Parser Interface:: Instantiating and running the parser
+* Java Scanner Interface:: Specifying the scanner for the parser
+* Java Action Features:: Special features for use in actions
+* Java Differences:: Differences between C/C++ and Java Grammars
+* Java Declarations Summary:: List of Bison declarations used with Java
@end menu
-@node Calc++ --- C++ Calculator
-@subsection Calc++ --- C++ Calculator
+@node Java Bison Interface
+@subsection Java Bison Interface
+@c - %language "Java"
+
+(The current Java interface is experimental and may evolve.
+More user feedback will help to stabilize it.)
+
+The Java parser skeletons are selected using the @code{%language "Java"}
+directive or the @option{-L java}/@option{--language=java} option.
+
+@c FIXME: Documented bug.
+When generating a Java parser, @code{bison @var{basename}.y} will
+create a single Java source file named @file{@var{basename}.java}
+containing the parser implementation. Using a grammar file without a
+@file{.y} suffix is currently broken. The basename of the parser
+implementation file can be changed by the @code{%file-prefix}
+directive or the @option{-b}/@option{--file-prefix} option. The
+entire parser implementation file name can be changed by the
+@code{%output} directive or the @option{-o}/@option{--output} option.
+The parser implementation file contains a single class for the parser.
+
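+For instance, the following directives (the file names are purely
+illustrative) select Java and override the default output file name:
+
+@example
+%language "Java"
+%output "CalcParser.java"
+@end example
+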
+You can create documentation for generated parsers using Javadoc.
+
+Contrary to C parsers, Java parsers do not use global variables; the
+state of the parser is always local to an instance of the parser class.
+Therefore, all Java parsers are ``pure'', and the @code{%pure-parser}
+and @samp{%define api.pure} directives do nothing when used in
+Java.
+
+Push parsers are currently unsupported in Java, and @code{%define
+api.push-pull} has no effect.
+
+GLR parsers are currently unsupported in Java. Do not use the
+@code{glr-parser} directive.
+
+No header file can be generated for Java parsers. Do not use the
+@code{%defines} directive or the @option{-d}/@option{--defines} options.
+
+@c FIXME: Possible code change.
+Currently, support for tracing is always compiled
+in. Thus the @samp{%define parse.trace} and @samp{%token-table}
+directives and the
+@option{-t}/@option{--debug} and @option{-k}/@option{--token-table}
+options have no effect. This may change in the future to eliminate
+unused code in the generated parser, so use @samp{%define parse.trace}
+explicitly
+if needed. Also, in the future the
+@code{%token-table} directive might enable a public interface to
+access the token names and codes.
+
+Getting a ``code too large'' error from the Java compiler means the code
+hit the 64KB bytecode per method limitation of the Java class file.
+Try reducing the amount of code in actions and static initializers;
+otherwise, report a bug so that the parser skeleton will be improved.
+
+
+@node Java Semantic Values
+@subsection Java Semantic Values
+@c - No %union, specify type in %type/%token.
+@c - YYSTYPE
+@c - Printer and destructor
-Of course the grammar is dedicated to arithmetics, a single
-expression, possibly preceded by variable assignments. An
-environment containing possibly predefined variables such as
-@code{one} and @code{two}, is exchanged with the parser. An example
-of valid input follows.
+There is no @code{%union} directive in Java parsers. Instead, the
+semantic values' types (class names) should be specified in the
+@code{%type} or @code{%token} directive:
@example
-three := 3
-seven := one + two * three
-seven * seven
+%type <Expression> expr assignment_expr term factor
+%type <Integer> number
@end example
-@node Calc++ Parsing Driver
-@subsection Calc++ Parsing Driver
-@c - An env
-@c - A place to store error messages
-@c - A place for the result
-
-To support a pure interface with the parser (and the scanner) the
-technique of the ``parsing context'' is convenient: a structure
-containing all the data to exchange. Since, in addition to simply
-launch the parsing, there are several auxiliary tasks to execute (open
-the file for parsing, instantiate the parser etc.), we recommend
-transforming the simple parsing context structure into a fully blown
-@dfn{parsing driver} class.
-
-The declaration of this driver class, @file{calc++-driver.hh}, is as
-follows. The first part includes the CPP guard and imports the
-required standard library components, and the declaration of the parser
-class.
+By default, the semantic stack is declared to have @code{Object} members,
+which means that the types you specify may be of any class.
+To improve the type safety of the parser, you can declare the common
+superclass of all the semantic values using the @samp{%define stype}
+directive. For example, after the following declaration:
-@comment file: calc++-driver.hh
@example
-#ifndef CALCXX_DRIVER_HH
-# define CALCXX_DRIVER_HH
-# include <string>
-# include <map>
-# include "calc++-parser.hh"
+%define stype "ASTNode"
@end example
-
@noindent
-Then comes the declaration of the scanning function. Flex expects
-the signature of @code{yylex} to be defined in the macro
-@code{YY_DECL}, and the C++ parser expects it to be declared. We can
-factor both as follows.
+any @code{%type} or @code{%token} specifying a semantic type that
+is not a subclass of @code{ASTNode} will cause a compile-time error.
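+
+For instance, with the declaration above and purely illustrative class
+names, the following compiles only if @code{Expression} is declared as a
+subclass of @code{ASTNode}:
+
+@example
+%define stype "ASTNode"
+%type <Expression> expr
+@end example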
-@comment file: calc++-driver.hh
-@example
-// Tell Flex the lexer's prototype ...
-# define YY_DECL \
- yy::calcxx_parser::token_type \
- yylex (yy::calcxx_parser::semantic_type* yylval, \
- yy::calcxx_parser::location_type* yylloc, \
- calcxx_driver& driver)
-// ... and declare it for the parser's sake.
-YY_DECL;
-@end example
+@c FIXME: Documented bug.
+Types used in the directives may be qualified with a package name.
+Primitive data types are accepted for Java version 1.5 or later. Note
+that in this case the autoboxing feature of Java 1.5 will be used.
+Generic types may not be used; this is due to a limitation in the
+implementation of Bison, and may change in future releases.
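+
+For example, both of the following declarations are valid (the token
+names are purely illustrative):
+
+@example
+%token <java.util.Date> DATE
+%token <int> NUM
+@end example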
-@noindent
-The @code{calcxx_driver} class is then declared with its most obvious
-members.
+Java parsers do not support @code{%destructor}, since the language
+uses garbage collection. The parser will try to hold references
+to semantic values for no longer than needed.
-@comment file: calc++-driver.hh
-@example
-// Conducting the whole scanning and parsing of Calc++.
-class calcxx_driver
-@{
-public:
- calcxx_driver ();
- virtual ~calcxx_driver ();
+Java parsers do not support @code{%printer}, as @code{toString()}
+can be used to print the semantic values. This however may change
+(in a backwards-compatible way) in future versions of Bison.
- std::map<std::string, int> variables;
- int result;
-@end example
+@node Java Location Values
+@subsection Java Location Values
+@c - %locations
+@c - class Position
+@c - class Location
-@noindent
-To encapsulate the coordination with the Flex scanner, it is useful to
-have two members function to open and close the scanning phase.
+When the directive @code{%locations} is used, the Java parser supports
+location tracking (@pxref{Tracking Locations}). An auxiliary user-defined
+class defines a @dfn{position}, a single point in a file; Bison itself
+defines a class representing a @dfn{location}, a range composed of a pair of
+positions (possibly spanning several files). The location class is an inner
+class of the parser; its name is @code{Location} by default, and may be
+changed using @samp{%define location_type "@var{class-name}"}.
-@comment file: calc++-driver.hh
-@example
- // Handling the scanner.
- void scan_begin ();
- void scan_end ();
- bool trace_scanning;
-@end example
+The location class treats the position as a completely opaque value.
+By default, the class name is @code{Position}, but this can be changed
+with @samp{%define position_type "@var{class-name}"}. This class must
+be supplied by the user.
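+
+As an illustration, here is a minimal sketch of such a user-supplied
+position class; the fields and their names are purely illustrative, since
+Bison treats positions as opaque values. Overriding @code{equals} and
+@code{toString} is recommended, as noted below.
+
+@example
+@group
+class Position @{
+  public final int line;
+  public final int column;
+
+  public Position (int line, int column)
+  @{
+    this.line = line;
+    this.column = column;
+  @}
+
+  public boolean equals (Object o)
+  @{
+    return o instanceof Position
+      && ((Position) o).line == line
+      && ((Position) o).column == column;
+  @}
+
+  public String toString ()
+  @{
+    return line + "." + column;
+  @}
+@}
+@end group
+@end example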
-@noindent
-Similarly for the parser itself.
-@comment file: calc++-driver.hh
-@example
- // Run the parser. Return 0 on success.
- int parse (const std::string& f);
- std::string file;
- bool trace_parsing;
-@end example
+@deftypeivar {Location} {Position} begin
+@deftypeivarx {Location} {Position} end
+The first, inclusive, position of the range, and the first beyond.
+@end deftypeivar
-@noindent
-To demonstrate pure handling of parse errors, instead of simply
-dumping them on the standard error output, we will pass them to the
-compiler driver using the following two member functions. Finally, we
-close the class declaration and CPP guard.
+@deftypeop {Constructor} {Location} {} Location (Position @var{loc})
+Create a @code{Location} denoting an empty range located at a given point.
+@end deftypeop
-@comment file: calc++-driver.hh
-@example
- // Error handling.
- void error (const yy::location& l, const std::string& m);
- void error (const std::string& m);
-@};
-#endif // ! CALCXX_DRIVER_HH
-@end example
+@deftypeop {Constructor} {Location} {} Location (Position @var{begin}, Position @var{end})
+Create a @code{Location} from the endpoints of the range.
+@end deftypeop
-The implementation of the driver is straightforward. The @code{parse}
-member function deserves some attention. The @code{error} functions
-are simple stubs, they should actually register the located error
-messages and set error state.
+@deftypemethod {Location} {String} toString ()
+Prints the range represented by the location. For this to work
+properly, the position class should override the @code{equals} and
+@code{toString} methods appropriately.
+@end deftypemethod
-@comment file: calc++-driver.cc
-@example
-#include "calc++-driver.hh"
-#include "calc++-parser.hh"
-calcxx_driver::calcxx_driver ()
- : trace_scanning (false), trace_parsing (false)
-@{
- variables["one"] = 1;
- variables["two"] = 2;
-@}
+@node Java Parser Interface
+@subsection Java Parser Interface
+@c - define parser_class_name
+@c - Ctor
+@c - parse, error, set_debug_level, debug_level, set_debug_stream,
+@c debug_stream.
+@c - Reporting errors
-calcxx_driver::~calcxx_driver ()
-@{
-@}
+The name of the generated parser class defaults to @code{YYParser}. The
+@code{YY} prefix may be changed using the @code{%name-prefix} directive
+or the @option{-p}/@option{--name-prefix} option. Alternatively, use
+@samp{%define parser_class_name "@var{name}"} to give a custom name to
+the class. The interface of this class is detailed below.
+
+By default, the parser class has package visibility. A declaration
+@samp{%define public} will change it to public visibility. Remember that,
+according to the Java language specification, the name of the @file{.java}
+file should match the name of the class in this case. Similarly, you can
+use @code{abstract}, @code{final} and @code{strictfp} with the
+@code{%define} declaration to add other modifiers to the parser class.
+A single @samp{%define annotations "@var{annotations}"} directive can
+be used to add any number of annotations to the parser class.
+
+The Java package name of the parser class can be specified using the
+@samp{%define package} directive. The superclass and the implemented
+interfaces of the parser class can be specified with the @code{%define
+extends} and @samp{%define implements} directives.
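+
+For instance, the following directives (with purely illustrative names)
+request a public, final parser class named @code{CalcParser} in a
+dedicated package, implementing an extra interface:
+
+@example
+%language "Java"
+%define public
+%define final
+%define package "org.example.calc"
+%define parser_class_name "CalcParser"
+%define implements "java.io.Serializable"
+@end example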
+
+The parser class defines an inner class, @code{Location}, that is used
+for location tracking (@pxref{Java Location Values}), and an inner
+interface, @code{Lexer} (@pxref{Java Scanner Interface}). Other than
+this inner class and interface, and the members described in the interface
+below, all the other members and fields are prefixed with @code{yy} or
+@code{YY} to avoid clashes with user code.
+
+The parser class can be extended using the @code{%parse-param}
+directive. Each occurrence of the directive will add a @code{protected
+final} field to the parser class, and an argument to its constructor,
+from which the field is initialized automatically.
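+
+For instance, assuming a user-supplied class @code{Environment} (the name
+is purely illustrative), the directive
+
+@example
+%parse-param @{ Environment env @}
+@end example
+
+@noindent
+adds a @code{protected final} field named @code{env} to the parser class,
+and an @code{Environment} argument to its constructors, from which the
+field is initialized.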
+
+@deftypeop {Constructor} {YYParser} {} YYParser (@var{lex_param}, @dots{}, @var{parse_param}, @dots{})
+Build a new parser object with embedded @code{%code lexer}. There are
+no parameters, unless @code{%param}s and/or @code{%parse-param}s and/or
+@code{%lex-param}s are used.
+
+Use @code{%code init} for code added to the start of the constructor
+body. This is especially useful to initialize superclasses. Use
+@samp{%define init_throws} to specify any uncaught exceptions.
+@end deftypeop
+
+@deftypeop {Constructor} {YYParser} {} YYParser (Lexer @var{lexer}, @var{parse_param}, @dots{})
+Build a new parser object using the specified scanner. There are no
+additional parameters unless @code{%param}s and/or @code{%parse-param}s are
+used.
+
+If the scanner is defined by @code{%code lexer}, this constructor is
+declared @code{protected} and is called automatically with a scanner
+created with the correct @code{%param}s and/or @code{%lex-param}s.
+
+Use @code{%code init} for code added to the start of the constructor
+body. This is especially useful to initialize superclasses. Use
+@samp{%define init_throws} to specify any uncaught exceptions.
+@end deftypeop
+
+@deftypemethod {YYParser} {boolean} parse ()
+Run the syntactic analysis, and return @code{true} on success,
+@code{false} otherwise.
+@end deftypemethod
-int
-calcxx_driver::parse (const std::string &f)
-@{
- file = f;
- scan_begin ();
- yy::calcxx_parser parser (*this);
- parser.set_debug_level (trace_parsing);
- int res = parser.parse ();
- scan_end ();
- return res;
-@}
+@deftypemethod {YYParser} {boolean} getErrorVerbose ()
+@deftypemethodx {YYParser} {void} setErrorVerbose (boolean @var{verbose})
+Get or set the option to produce verbose error messages. These methods are
+only available with @samp{%define parse.error verbose}, which also turns on
+verbose error messages.
+@end deftypemethod
-void
-calcxx_driver::error (const yy::location& l, const std::string& m)
-@{
- std::cerr << l << ": " << m << std::endl;
-@}
+@deftypemethod {YYParser} {void} yyerror (String @var{msg})
+@deftypemethodx {YYParser} {void} yyerror (Position @var{pos}, String @var{msg})
+@deftypemethodx {YYParser} {void} yyerror (Location @var{loc}, String @var{msg})
+Print an error message using the @code{yyerror} method of the scanner
+instance in use. The @code{Location} and @code{Position} parameters are
+available only if location tracking is active.
+@end deftypemethod
-void
-calcxx_driver::error (const std::string& m)
-@{
- std::cerr << m << std::endl;
-@}
-@end example
+@deftypemethod {YYParser} {boolean} recovering ()
+During the syntactic analysis, return @code{true} if recovering
+from a syntax error.
+@xref{Error Recovery}.
+@end deftypemethod
-@node Calc++ Parser
-@subsection Calc++ Parser
+@deftypemethod {YYParser} {java.io.PrintStream} getDebugStream ()
+@deftypemethodx {YYParser} {void} setDebugStream (java.io.PrintStream @var{o})
+Get or set the stream used for tracing the parsing. It defaults to
+@code{System.err}.
+@end deftypemethod
-The parser definition file @file{calc++-parser.yy} starts by asking for
-the C++ LALR(1) skeleton, the creation of the parser header file, and
-specifies the name of the parser class. Because the C++ skeleton
-changed several times, it is safer to require the version you designed
-the grammar for.
+@deftypemethod {YYParser} {int} getDebugLevel ()
+@deftypemethodx {YYParser} {void} setDebugLevel (int @var{l})
+Get or set the tracing level. Currently its value is either 0, no trace,
+or nonzero, full tracing.
+@end deftypemethod
-@comment file: calc++-parser.yy
-@example
-%language "C++" /* -*- C++ -*- */
-%require "@value{VERSION}"
-%defines
-%define parser_class_name "calcxx_parser"
-@end example
+@deftypecv {Constant} {YYParser} {String} {bisonVersion}
+@deftypecvx {Constant} {YYParser} {String} {bisonSkeleton}
+Identify the Bison version and skeleton used to generate this parser.
+@end deftypecv
-@noindent
-@findex %code requires
-Then come the declarations/inclusions needed to define the
-@code{%union}. Because the parser uses the parsing driver and
-reciprocally, both cannot include the header of the other. Because the
-driver's header needs detailed knowledge about the parser class (in
-particular its inner types), it is the parser's header which will simply
-use a forward declaration of the driver.
-@xref{Decl Summary, ,%code}.
-@comment file: calc++-parser.yy
-@example
-%code requires @{
-# include <string>
-class calcxx_driver;
-@}
-@end example
+@node Java Scanner Interface
+@subsection Java Scanner Interface
+@c - %code lexer
+@c - %lex-param
+@c - Lexer interface
+
+There are two possible ways to interface a Bison-generated Java parser
+with a scanner: the scanner may be defined by @code{%code lexer}, or
+defined elsewhere. In either case, the scanner has to implement the
+@code{Lexer} inner interface of the parser class. This interface also
+contains constants for all user-defined token names and the predefined
+@code{EOF} token.
+
+In the first case, the body of the scanner class is placed in
+@code{%code lexer} blocks. If you want to pass parameters from the
+parser constructor to the scanner constructor, specify them with
+@code{%lex-param}; they are passed before @code{%parse-param}s to the
+constructor.
+
+In the second case, the scanner has to implement the @code{Lexer} interface,
+which is defined within the parser class (e.g., @code{YYParser.Lexer}).
+The constructor of the parser object will then accept an object
+implementing the interface; @code{%lex-param} is not used in this
+case.
+
+In both cases, the scanner has to implement the following methods.
+
+@deftypemethod {Lexer} {void} yyerror (Location @var{loc}, String @var{msg})
+This method is defined by the user to emit an error message. The first
+parameter is omitted if location tracking is not active. Its type can be
+changed using @samp{%define location_type "@var{class-name}"}.
+@end deftypemethod
-@noindent
-The driver is passed by reference to the parser and to the scanner.
-This provides a simple but effective pure interface, not relying on
-global variables.
+@deftypemethod {Lexer} {int} yylex ()
+Return the next token. Its type is the return value; its semantic
+value and location are saved and returned by their respective methods in the
+interface.
-@comment file: calc++-parser.yy
-@example
-// The parsing context.
-%parse-param @{ calcxx_driver& driver @}
-%lex-param @{ calcxx_driver& driver @}
-@end example
+Use @samp{%define lex_throws} to specify any uncaught exceptions.
+Default is @code{java.io.IOException}.
+@end deftypemethod
-@noindent
-Then we request the location tracking feature, and initialize the
-first location's file name. Afterwards new locations are computed
-relatively to the previous locations: the file name will be
-automatically propagated.
+@deftypemethod {Lexer} {Position} getStartPos ()
+@deftypemethodx {Lexer} {Position} getEndPos ()
+Return respectively the first position of the last token that
+@code{yylex} returned, and the first position beyond it. These
+methods are not needed unless location tracking is active.
-@comment file: calc++-parser.yy
-@example
-%locations
-%initial-action
-@{
- // Initialize the initial location.
- @@$.begin.filename = @@$.end.filename = &driver.file;
-@};
-@end example
+The return type can be changed using @samp{%define position_type
+"@var{class-name}".}
+@end deftypemethod
-@noindent
-Use the two following directives to enable parser tracing and verbose
-error messages.
+@deftypemethod {Lexer} {Object} getLVal ()
+Return the semantic value of the last token that @code{yylex} returned.
-@comment file: calc++-parser.yy
-@example
-%debug
-%error-verbose
-@end example
+The return type can be changed using @samp{%define stype
+"@var{class-name}".}
+@end deftypemethod
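+
+As an illustration, here is a minimal sketch of a hand-written scanner
+implementing this interface. It assumes the default parser class name
+@code{YYParser}, a grammar without @code{%locations} (so the position
+methods are not needed and @code{yyerror} takes a single argument), and a
+token @code{NUM} carrying an @code{Integer} value; all these names are
+purely illustrative.
+
+@example
+@group
+import java.io.IOException;
+import java.io.Reader;
+import java.io.StreamTokenizer;
+
+class MyLexer implements YYParser.Lexer
+@{
+  private final StreamTokenizer st;
+  private Object lval;
+
+  MyLexer (Reader r) @{ st = new StreamTokenizer (r); @}
+
+  public Object getLVal () @{ return lval; @}
+
+  public void yyerror (String msg) @{ System.err.println (msg); @}
+
+  public int yylex () throws IOException
+  @{
+    switch (st.nextToken ())
+      @{
+      case StreamTokenizer.TT_EOF:
+        return EOF;              // constant provided by the Lexer interface
+      case StreamTokenizer.TT_NUMBER:
+        lval = (int) st.nval;    // autoboxed to Integer
+        return NUM;              // token constant from the grammar
+      default:
+        return st.ttype;         // single characters such as '+'
+      @}
+  @}
+@}
+@end group
+@end example
+
+@noindent
+An instance of this class can then be passed to the parser constructor
+described above, as in @code{new YYParser (new MyLexer (reader))}, where
+@code{reader} is some @code{java.io.Reader}.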
-@noindent
-Semantic values cannot use ``real'' objects, but only pointers to
-them.
-@comment file: calc++-parser.yy
-@example
-// Symbols.
-%union
-@{
- int ival;
- std::string *sval;
-@};
-@end example
+@node Java Action Features
+@subsection Special Features for Use in Java Actions
+
+The following special constructs can be used in Java actions.
+Other analogous C action features are currently unavailable for Java.
+
+Use @samp{%define throws} to specify any uncaught exceptions from parser
+actions, and initial actions specified by @code{%initial-action}.
+
+@defvar $@var{n}
+The semantic value for the @var{n}th component of the current rule.
+This may not be assigned to.
+@xref{Java Semantic Values}.
+@end defvar
+
+@defvar $<@var{typealt}>@var{n}
+Like @code{$@var{n}} but specifies an alternative type @var{typealt}.
+@xref{Java Semantic Values}.
+@end defvar
+
+@defvar $$
+The semantic value for the grouping made by the current rule. As a
+value, this has the base type (@code{Object} or as specified by
+@samp{%define stype}); it is not cast to the declared subtype because
+casts are not allowed on the left-hand side of Java assignments.
+Use an explicit Java cast if the correct subtype is needed.
+@xref{Java Semantic Values}.
+@end defvar
+
+@defvar $<@var{typealt}>$
+Same as @code{$$}, since Java always allows assigning to the base type.
+Perhaps we should use this and @code{$<>$} for the value and @code{$$}
+for setting the value, but there is currently no easy way to distinguish
+these constructs.
+@xref{Java Semantic Values}.
+@end defvar
+
+@defvar @@@var{n}
+The location information of the @var{n}th component of the current rule.
+This may not be assigned to.
+@xref{Java Location Values}.
+@end defvar
+
+@defvar @@$
+The location information of the grouping made by the current rule.
+@xref{Java Location Values}.
+@end defvar
+
+@deffn {Statement} {return YYABORT;}
+Return immediately from the parser, indicating failure.
+@xref{Java Parser Interface}.
+@end deffn
-@noindent
-@findex %code
-The code between @samp{%code @{} and @samp{@}} is output in the
-@file{*.cc} file; it needs detailed knowledge about the driver.
+@deffn {Statement} {return YYACCEPT;}
+Return immediately from the parser, indicating success.
+@xref{Java Parser Interface}.
+@end deffn
-@comment file: calc++-parser.yy
-@example
-%code @{
-# include "calc++-driver.hh"
-@}
-@end example
+@deffn {Statement} {return YYERROR;}
+Start error recovery without printing an error message.
+@xref{Error Recovery}.
+@end deffn
+@deftypefn {Function} {boolean} recovering ()
+Return whether error recovery is being done. In this state, the parser
+reads tokens until it reaches a known state, and then restarts normal
+operation.
+@xref{Error Recovery}.
+@end deftypefn
-@noindent
-The token numbered as 0 corresponds to end of file; the following line
-allows for nicer error messages referring to ``end of file'' instead
-of ``$end''. Similarly user friendly named are provided for each
-symbol. Note that the tokens names are prefixed by @code{TOKEN_} to
-avoid name clashes.
+@deftypefn {Function} {void} yyerror (String @var{msg})
+@deftypefnx {Function} {void} yyerror (Position @var{loc}, String @var{msg})
+@deftypefnx {Function} {void} yyerror (Location @var{loc}, String @var{msg})
+Print an error message using the @code{yyerror} method of the scanner
+instance in use. The @code{Location} and @code{Position} parameters are
+available only if location tracking is active.
+@end deftypefn
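+
+For instance, assuming @samp{%locations} and a declaration
+@samp{%type <Expr> exp} for a purely illustrative user class @code{Expr}
+(its methods are hypothetical too), an action may combine these
+constructs as follows:
+
+@example
+@group
+exp:
+  exp '+' exp   @{ $$ = Expr.plus ($1, $3); @}
+| exp '/' exp
+    @{
+      if ($3.isZero ())
+        @{
+          yyerror (@@3, "division by zero");
+          return YYERROR;
+        @}
+      $$ = Expr.div ($1, $3);
+    @}
+;
+@end group
+@end example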
-@comment file: calc++-parser.yy
-@example
-%token END 0 "end of file"
-%token ASSIGN ":="
-%token <sval> IDENTIFIER "identifier"
-%token <ival> NUMBER "number"
-%type <ival> exp
-@end example
-@noindent
-To enable memory deallocation during error recovery, use
-@code{%destructor}.
+@node Java Differences
+@subsection Differences between C/C++ and Java Grammars
-@c FIXME: Document %printer, and mention that it takes a braced-code operand.
-@comment file: calc++-parser.yy
-@example
-%printer @{ debug_stream () << *$$; @} "identifier"
-%destructor @{ delete $$; @} "identifier"
+The different structure of the Java language forces several differences
+between C/C++ grammars and grammars designed for Java parsers. This
+section summarizes these differences.
-%printer @{ debug_stream () << $$; @} <ival>
-@end example
+@itemize
+@item
+Java lacks a preprocessor, so the @code{YYERROR}, @code{YYACCEPT},
+@code{YYABORT} symbols (@pxref{Table of Symbols}) obviously cannot be
+macros. Instead, they should be preceded by @code{return} when they
+appear in an action. The actual definition of these symbols is
+opaque to the Bison grammar, and it might change in the future. The
+only meaningful operation that you can do is to return them.
+@xref{Java Action Features}.
+
+Note that of these three symbols, only @code{YYACCEPT} and
+@code{YYABORT} will cause a return from the @code{yyparse}
+method@footnote{Java parsers include the actions in a separate
+method from @code{yyparse} in order to have an intuitive syntax that
+corresponds to these C macros.}.
-@noindent
-The grammar itself is straightforward.
+@item
+Java lacks unions, so @code{%union} has no effect. Instead, semantic
+values have a common base type: @code{Object} or as specified by
+@samp{%define stype}. Angle brackets on @code{%token}, @code{%type},
+@code{$@var{n}} and @code{$$} specify subtypes rather than fields of
+a union. The type of @code{$$}, even with angle brackets, is the base
+type since Java casts are not allowed on the left-hand side of assignments.
+Also, @code{$@var{n}} and @code{@@@var{n}} are not allowed on the
+left-hand side of assignments. @xref{Java Semantic Values} and
+@ref{Java Action Features}.
+
+@item
+The prologue declarations have a different meaning than in C/C++ code.
+@table @asis
+@item @code{%code imports}
+blocks are placed at the beginning of the Java source code. They may
+include copyright notices. For a @code{package} declaration, it is
+suggested to use @samp{%define package} instead.
+
+@item unqualified @code{%code}
+blocks are placed inside the parser class.
+
+@item @code{%code lexer}
+blocks, if specified, should include the implementation of the
+scanner. If there is no such block, the scanner can be any class
+that implements the appropriate interface (@pxref{Java Scanner
+Interface}).
+@end table
+
+Other @code{%code} blocks are not supported in Java parsers.
+In particular, @code{%@{ @dots{} %@}} blocks should not be used
+and may give an error in future versions of Bison.
+
+The epilogue has the same meaning as in C/C++ code and it can
+be used to define other classes used by the parser @emph{outside}
+the parser class.
+@end itemize
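+
+The following skeleton summarizes where these pieces end up in the
+generated file (the contents are purely illustrative):
+
+@example
+@group
+%code imports @{
+  // Placed near the top of the Java file, after the package declaration.
+  import java.io.Reader;
+@}
+
+%code @{
+  // Placed inside the parser class.
+  private int nerrs = 0;
+@}
+
+%code lexer @{
+  // Body of the scanner class, when it is defined here.
+@}
+%%
+/* Grammar rules.  */
+%%
+// Epilogue: helper classes used by the parser, outside the parser class.
+@end group
+@end example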
+
+
+@node Java Declarations Summary
+@subsection Java Declarations Summary
+
+This summary only includes declarations that are specific to Java or that
+have a special meaning when used in a Java parser.
+
+@deffn {Directive} {%language "Java"}
+Generate a Java class for the parser.
+@end deffn
+
+@deffn {Directive} %lex-param @{@var{type} @var{name}@}
+A parameter for the lexer class defined by @code{%code lexer}
+@emph{only}, added as a parameter to the lexer constructor and to the parser
+constructor that @emph{creates} a lexer. Default is none.
+@xref{Java Scanner Interface}.
+@end deffn
+
+@deffn {Directive} %name-prefix "@var{prefix}"
+The prefix of the parser class name @code{@var{prefix}Parser} if
+@samp{%define parser_class_name} is not used. Default is @code{YY}.
+@xref{Java Bison Interface}.
+@end deffn
-@comment file: calc++-parser.yy
-@example
-%%
-%start unit;
-unit: assignments exp @{ driver.result = $2; @};
+@deffn {Directive} %parse-param @{@var{type} @var{name}@}
+A parameter for the parser class, added as a parameter to the constructor(s)
+and as a field initialized by the constructor(s). Default is none.
+@xref{Java Parser Interface}.
+@end deffn
-assignments: assignments assignment @{@}
- | /* Nothing. */ @{@};
+@deffn {Directive} %token <@var{type}> @var{token} @dots{}
+Declare tokens. Note that the angle brackets enclose a Java @emph{type}.
+@xref{Java Semantic Values}.
+@end deffn
-assignment:
- "identifier" ":=" exp
- @{ driver.variables[*$1] = $3; delete $1; @};
-
-%left '+' '-';
-%left '*' '/';
-exp: exp '+' exp @{ $$ = $1 + $3; @}
- | exp '-' exp @{ $$ = $1 - $3; @}
- | exp '*' exp @{ $$ = $1 * $3; @}
- | exp '/' exp @{ $$ = $1 / $3; @}
- | "identifier" @{ $$ = driver.variables[*$1]; delete $1; @}
- | "number" @{ $$ = $1; @};
-%%
-@end example
+@deffn {Directive} %type <@var{type}> @var{nonterminal} @dots{}
+Declare the type of nonterminals. Note that the angle brackets enclose
+a Java @emph{type}.
+@xref{Java Semantic Values}.
+@end deffn
-@noindent
-Finally the @code{error} member function registers the errors to the
-driver.
+@deffn {Directive} %code @{ @var{code} @dots{} @}
+Code appended to the inside of the parser class.
+@xref{Java Differences}.
+@end deffn
-@comment file: calc++-parser.yy
-@example
-void
-yy::calcxx_parser::error (const yy::calcxx_parser::location_type& l,
- const std::string& m)
-@{
- driver.error (l, m);
-@}
-@end example
+@deffn {Directive} {%code imports} @{ @var{code} @dots{} @}
+Code inserted just after the @code{package} declaration.
+@xref{Java Differences}.
+@end deffn
-@node Calc++ Scanner
-@subsection Calc++ Scanner
+@deffn {Directive} {%code init} @{ @var{code} @dots{} @}
+Code inserted at the beginning of the parser constructor body.
+@xref{Java Parser Interface}.
+@end deffn
-The Flex scanner first includes the driver declaration, then the
-parser's to get the set of defined tokens.
+@deffn {Directive} {%code lexer} @{ @var{code} @dots{} @}
+Code added to the body of an inner lexer class within the parser class.
+@xref{Java Scanner Interface}.
+@end deffn
-@comment file: calc++-scanner.ll
-@example
-%@{ /* -*- C++ -*- */
-# include <cstdlib>
-# include <errno.h>
-# include <limits.h>
-# include <string>
-# include "calc++-driver.hh"
-# include "calc++-parser.hh"
+@deffn {Directive} %% @var{code} @dots{}
+Code (after the second @code{%%}) appended to the end of the file,
+@emph{outside} the parser class.
+@xref{Java Differences}.
+@end deffn
-/* Work around an incompatibility in flex (at least versions
- 2.5.31 through 2.5.33): it generates code that does
- not conform to C89. See Debian bug 333231
- <http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=333231>. */
-# undef yywrap
-# define yywrap() 1
+@deffn {Directive} %@{ @var{code} @dots{} %@}
+Not supported. Use @code{%code imports} instead.
+@xref{Java Differences}.
+@end deffn
-/* By default yylex returns int, we use token_type.
- Unfortunately yyterminate by default returns 0, which is
- not of token_type. */
-#define yyterminate() return token::END
-%@}
-@end example
+@deffn {Directive} {%define abstract}
+Whether the parser class is declared @code{abstract}. Default is false.
+@xref{Java Bison Interface}.
+@end deffn
-@noindent
-Because there is no @code{#include}-like feature we don't need
-@code{yywrap}, we don't need @code{unput} either, and we parse an
-actual file, this is not an interactive session with the user.
-Finally we enable the scanner tracing features.
+@deffn {Directive} {%define annotations} "@var{annotations}"
+The Java annotations for the parser class. Default is none.
+@xref{Java Bison Interface}.
+@end deffn
-@comment file: calc++-scanner.ll
-@example
-%option noyywrap nounput batch debug
-@end example
+@deffn {Directive} {%define extends} "@var{superclass}"
+The superclass of the parser class. Default is none.
+@xref{Java Bison Interface}.
+@end deffn
-@noindent
-Abbreviations allow for more readable rules.
+@deffn {Directive} {%define final}
+Whether the parser class is declared @code{final}. Default is false.
+@xref{Java Bison Interface}.
+@end deffn
-@comment file: calc++-scanner.ll
-@example
-id [a-zA-Z][a-zA-Z_0-9]*
-int [0-9]+
-blank [ \t]
-@end example
+@deffn {Directive} {%define implements} "@var{interfaces}"
+The implemented interfaces of the parser class, a comma-separated list.
+Default is none.
+@xref{Java Bison Interface}.
+@end deffn
-@noindent
-The following paragraph suffices to track locations accurately. Each
-time @code{yylex} is invoked, the begin position is moved onto the end
-position. Then when a pattern is matched, the end position is
-advanced of its width. In case it matched ends of lines, the end
-cursor is adjusted, and each time blanks are matched, the begin cursor
-is moved onto the end cursor to effectively ignore the blanks
-preceding tokens. Comments would be treated equally.
+@deffn {Directive} {%define init_throws} "@var{exceptions}"
+The exceptions thrown by @code{%code init} from the parser class
+constructor. Default is none.
+@xref{Java Parser Interface}.
+@end deffn
-@comment file: calc++-scanner.ll
-@example
-%@{
-# define YY_USER_ACTION yylloc->columns (yyleng);
-%@}
-%%
-%@{
- yylloc->step ();
-%@}
-@{blank@}+ yylloc->step ();
-[\n]+ yylloc->lines (yyleng); yylloc->step ();
-@end example
+@deffn {Directive} {%define lex_throws} "@var{exceptions}"
+The exceptions thrown by the @code{yylex} method of the lexer, a
+comma-separated list. Default is @code{java.io.IOException}.
+@xref{Java Scanner Interface}.
+@end deffn
-@noindent
-The rules are simple, just note the use of the driver to report errors.
-It is convenient to use a typedef to shorten
-@code{yy::calcxx_parser::token::identifier} into
-@code{token::identifier} for instance.
+@deffn {Directive} {%define location_type} "@var{class}"
+The name of the class used for locations (a range between two
+positions). This class is generated as an inner class of the parser
+class by @command{bison}. Default is @code{Location}.
+@xref{Java Location Values}.
+@end deffn
-@comment file: calc++-scanner.ll
-@example
-%@{
- typedef yy::calcxx_parser::token token;
-%@}
- /* Convert ints to the actual type of tokens. */
-[-+*/] return yy::calcxx_parser::token_type (yytext[0]);
-":=" return token::ASSIGN;
-@{int@} @{
- errno = 0;
- long n = strtol (yytext, NULL, 10);
- if (! (INT_MIN <= n && n <= INT_MAX && errno != ERANGE))
- driver.error (*yylloc, "integer is out of range");
- yylval->ival = n;
- return token::NUMBER;
-@}
-@{id@} yylval->sval = new std::string (yytext); return token::IDENTIFIER;
-. driver.error (*yylloc, "invalid character");
-%%
-@end example
+@deffn {Directive} {%define package} "@var{package}"
+The package to put the parser class in. Default is none.
+@xref{Java Bison Interface}.
+@end deffn
-@noindent
-Finally, because the scanner related driver's member function depend
-on the scanner's data, it is simpler to implement them in this file.
+@deffn {Directive} {%define parser_class_name} "@var{name}"
+The name of the parser class. Default is @code{YYParser} or
+@code{@var{name-prefix}Parser}.
+@xref{Java Bison Interface}.
+@end deffn
-@comment file: calc++-scanner.ll
-@example
-void
-calcxx_driver::scan_begin ()
-@{
- yy_flex_debug = trace_scanning;
- if (file == "-")
- yyin = stdin;
- else if (!(yyin = fopen (file.c_str (), "r")))
- @{
- error (std::string ("cannot open ") + file);
- exit (1);
- @}
-@}
+@deffn {Directive} {%define position_type} "@var{class}"
+The name of the class used for positions. This class must be supplied by
+the user. Default is @code{Position}.
+@xref{Java Location Values}.
+@end deffn
-void
-calcxx_driver::scan_end ()
-@{
- fclose (yyin);
-@}
-@end example
+@deffn {Directive} {%define public}
+Whether the parser class is declared @code{public}. Default is false.
+@xref{Java Bison Interface}.
+@end deffn
-@node Calc++ Top Level
-@subsection Calc++ Top Level
+@deffn {Directive} {%define stype} "@var{class}"
+The base type of semantic values. Default is @code{Object}.
+@xref{Java Semantic Values}.
+@end deffn
-The top level file, @file{calc++.cc}, poses no problem.
+@deffn {Directive} {%define strictfp}
+Whether the parser class is declared @code{strictfp}. Default is false.
+@xref{Java Bison Interface}.
+@end deffn
-@comment file: calc++.cc
-@example
-#include <iostream>
-#include "calc++-driver.hh"
+@deffn {Directive} {%define throws} "@var{exceptions}"
+The exceptions thrown by user-supplied parser actions and
+@code{%initial-action}, a comma-separated list. Default is none.
+@xref{Java Parser Interface}.
+@end deffn
-int
-main (int argc, char *argv[])
-@{
- calcxx_driver driver;
- for (++argv; argv[0]; ++argv)
- if (*argv == std::string ("-p"))
- driver.trace_parsing = true;
- else if (*argv == std::string ("-s"))
- driver.trace_scanning = true;
- else if (!driver.parse (*argv))
- std::cout << driver.result << std::endl;
-@}
-@end example
@c ================================================= FAQ
* Strings are Destroyed:: @code{yylval} Loses Track of Strings
* Implementing Gotos/Loops:: Control Flow in the Calculator
* Multiple start-symbols:: Factoring closely related grammars
-* Secure? Conform?:: Is Bison @acronym{POSIX} safe?
+* Secure? Conform?:: Is Bison POSIX safe?
* I can't build Bison:: Troubleshooting
* Where can I find help?:: Troubleshouting
* Bug Reports:: Troublereporting
-* Other Languages:: Parsers in Java and others
+* More Languages:: Parsers in C++, Java, and so on
* Beta Testing:: Experimenting development versions
* Mailing Lists:: Meeting other Bison users
@end menu
@node Memory Exhausted
@section Memory Exhausted
-@display
+@quotation
My parser returns with error with a @samp{memory exhausted}
message. What can I do?
-@end display
+@end quotation
This question is already addressed elsewhere, @xref{Recursion,
,Recursive Rules}.
The following phenomenon has several symptoms, resulting in the
following typical questions:
-@display
+@quotation
I invoke @code{yyparse} several times, and on correct input it works
properly; but when a parse error is found, all the other calls fail
too. How can I reset the error flag of @code{yyparse}?
-@end display
+@end quotation
@noindent
or
-@display
+@quotation
My parser includes support for an @samp{#include}-like feature, in
which case I run @code{yyparse} from @code{yyparse}. This fails
-although I did specify I needed a @code{%pure-parser}.
-@end display
+although I did specify @samp{%define api.pure}.
+@end quotation
These problems typically come not from Bison itself, but from
Lex-generated scanners. Because these scanners use large buffers for
demonstration, consider the following source file,
@file{first-line.l}:
-@verbatim
-%{
+@example
+@group
+%@{
#include <stdio.h>
#include <stdlib.h>
-%}
+%@}
+@end group
%%
.*\n ECHO; return 1;
%%
+@group
int
yyparse (char const *file)
-{
+@{
yyin = fopen (file, "r");
if (!yyin)
- exit (2);
+ @{
+ perror ("fopen");
+ exit (EXIT_FAILURE);
+ @}
+@end group
+@group
/* One token only. */
yylex ();
if (fclose (yyin) != 0)
- exit (3);
+ @{
+ perror ("fclose");
+ exit (EXIT_FAILURE);
+ @}
return 0;
-}
+@}
+@end group
+@group
int
main (void)
-{
+@{
yyparse ("input");
yyparse ("input");
return 0;
-}
-@end verbatim
+@}
+@end group
+@end example
@noindent
If the file @file{input} contains
-@verbatim
+@example
input:1: Hello,
input:2: World!
-@end verbatim
+@end example
@noindent
then instead of getting the first line twice, you get:
@node Strings are Destroyed
@section Strings are Destroyed
-@display
+@quotation
My parser seems to destroy old strings, or maybe it loses track of
them. Instead of reporting @samp{"foo", "bar"}, it reports
@samp{"bar", "bar"}, or even @samp{"foo\nbar", "bar"}.
-@end display
+@end quotation
This error is probably the single most frequent ``bug report'' sent to
Bison lists, but is only concerned with a misunderstanding of the role
of the scanner. Consider the following Lex code:
-@verbatim
-%{
+@example
+@group
+%@{
#include <stdio.h>
char *yylval = NULL;
-%}
+%@}
+@end group
+@group
%%
.* yylval = yytext; return 1;
\n /* IGNORE */
%%
+@end group
+@group
int
main ()
-{
+@{
/* Similar to using $1, $2 in a Bison action. */
char *fst = (yylex (), yylval);
char *snd = (yylex (), yylval);
printf ("\"%s\", \"%s\"\n", fst, snd);
return 0;
-}
-@end verbatim
+@}
+@end group
+@end example
If you compile and run this code, you get:
@node Implementing Gotos/Loops
@section Implementing Gotos/Loops
-@display
+@quotation
My simple calculator supports variables, assignments, and functions,
but how can I implement gotos, or loops?
-@end display
+@end quotation
Although very pedagogical, the examples included in the document blur
the distinction to make between the parser---whose job is to recover
execute simple instructions one after the others.
@cindex abstract syntax tree
-@cindex @acronym{AST}
+@cindex AST
If you want a richer model, you will probably need to use the parser
to construct a tree that does represent the structure it has
recovered; this tree is usually called the @dfn{abstract syntax tree},
-or @dfn{@acronym{AST}} for short. Then, walking through this tree,
+or @dfn{AST} for short. Then, walking through this tree,
traversing it in various ways, will enable treatments such as its
execution or its translation, which will result in an interpreter or a
compiler.
@node Multiple start-symbols
@section Multiple start-symbols
-@display
+@quotation
I have several closely related grammars, and I would like to share their
implementations. In fact, I could use a single grammar but with
multiple entry points.
-@end display
+@end quotation
Bison does not support multiple start-symbols, but there is a very
simple means to simulate them. If @code{foo} and @code{bar} are the two
@example
%token START_FOO START_BAR;
%start start;
-start: START_FOO foo
- | START_BAR bar;
+start:
+ START_FOO foo
+| START_BAR bar;
@end example
These tokens prevent the introduction of new conflicts. As far as the
@node Secure? Conform?
@section Secure? Conform?
-@display
+@quotation
Is Bison secure? Does it conform to POSIX?
-@end display
+@end quotation
If you're looking for a guarantee or certification, we don't provide it.
However, Bison is intended to be a reliable program that conforms to the
-@acronym{POSIX} specification for Yacc. If you run into problems,
+POSIX specification for Yacc. If you run into problems,
please send us a bug report.
@node I can't build Bison
@section I can't build Bison
-@display
+@quotation
I can't build Bison because @command{make} complains that
@code{msgfmt} is not found.
What should I do?
-@end display
+@end quotation
Like most GNU packages with internationalization support, that feature
is turned on by default. If you have problems building in the @file{po}
@node Where can I find help?
@section Where can I find help?
-@display
+@quotation
I'm having trouble using Bison. Where can I find help?
-@end display
+@end quotation
First, read this fine manual. Beyond that, you can send mail to
@email{help-bison@@gnu.org}. This mailing list is intended to be
@node Bug Reports
@section Bug Reports
-@display
+@quotation
I found a bug. What should I include in the bug report?
-@end display
+@end quotation
Before you send a bug report, make sure you are using the latest
version. Check @url{ftp://ftp.gnu.org/pub/gnu/bison/} or one of its
send additional files as well (such as `config.h' or `config.cache').
Patches are most welcome, but not required. That is, do not hesitate to
-send a bug report just because you can not provide a fix.
+send a bug report just because you cannot provide a fix.
Send bug reports to @email{bug-bison@@gnu.org}.
-@node Other Languages
-@section Other Languages
+@node More Languages
+@section More Languages
-@display
-Will Bison ever have C++ support? How about Java or @var{insert your
+@quotation
+Will Bison ever have C++ and Java support? How about @var{insert your
favorite language here}?
-@end display
+@end quotation
-C++ support is there now, and is documented. We'd love to add other
+C++ and Java support is there now, and is documented. We'd love to add other
languages; contributions are welcome.
@node Beta Testing
@section Beta Testing
-@display
+@quotation
What is involved in being a beta tester?
-@end display
+@end quotation
It's not terribly involved. Basically, you would download a test
release, compile it, and use it to build and run a parser or two. After
@node Mailing Lists
@section Mailing Lists
-@display
+@quotation
How do I join the help-bison and bug-bison mailing lists?
-@end display
+@end quotation
See @url{http://lists.gnu.org/}.
@deffn {Variable} @@$
In an action, the location of the left-hand side of the rule.
-@xref{Locations, , Locations Overview}.
+@xref{Tracking Locations}.
@end deffn
@deffn {Variable} @@@var{n}
-In an action, the location of the @var{n}-th symbol of the right-hand
-side of the rule. @xref{Locations, , Locations Overview}.
+In an action, the location of the @var{n}-th symbol of the right-hand side
+of the rule. @xref{Tracking Locations}.
+@end deffn
+
+@deffn {Variable} @@@var{name}
+In an action, the location of a symbol addressed by name. @xref{Tracking
+Locations}.
+@end deffn
+
+@deffn {Variable} @@[@var{name}]
+In an action, the location of a symbol addressed by name. @xref{Tracking
+Locations}.
@end deffn
@deffn {Variable} $$
right-hand side of the rule. @xref{Actions}.
@end deffn
+@deffn {Variable} $@var{name}
+In an action, the semantic value of a symbol addressed by name.
+@xref{Actions}.
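+
+For instance, naming the symbols in the rule (the rule below is
+illustrative only):
+
+@example
+exp[result]:
+  exp[left] '+' exp[right]  @{ $result = $left + $right; @}
+;
+@end example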
+@end deffn
+
+@deffn {Variable} $[@var{name}]
+In an action, the semantic value of a symbol addressed by name.
+@xref{Actions}.
+@end deffn
+
@deffn {Delimiter} %%
Delimiter used to separate the grammar rule section from the
Bison declarations section or the epilogue.
@c Don't insert spaces, or check the DVI output.
@deffn {Delimiter} %@{@var{code}%@}
-All code listed between @samp{%@{} and @samp{%@}} is copied directly to
-the output file uninterpreted. Such code forms the prologue of the input
-file. @xref{Grammar Outline, ,Outline of a Bison
+All code listed between @samp{%@{} and @samp{%@}} is copied verbatim
+to the parser implementation file. Such code forms the prologue of
+the grammar file. @xref{Grammar Outline, ,Outline of a Bison
Grammar}.
@end deffn
+@deffn {Directive} %?@{@var{expression}@}
+Predicate actions. This is a type of action clause that may appear in
+rules. The expression is evaluated, and if false, causes a syntax error. In
+GLR parsers during nondeterministic operation,
+this silently causes an alternative parse to die. During deterministic
+operation, it is the same as the effect of @code{YYERROR}.
+@xref{Semantic Predicates}.
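+
+For instance, a rule might use predicates to choose between two
+alternatives according to a run-time flag (the @code{new_syntax}
+variable and all grammar symbols below are illustrative only):
+
+@example
+widget:
+  %?@{new_syntax@}  "widget" id new_args
+| %?@{!new_syntax@} "widget" id old_args
+;
+@end example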
+
+This feature is experimental.
+More user feedback will help to determine whether it should become a permanent
+feature.
+@end deffn
+
@deffn {Construct} /*@dots{}*/
Comment delimiters, as in C.
@end deffn
@deffn {Directive} %code @{@var{code}@}
@deffnx {Directive} %code @var{qualifier} @{@var{code}@}
-Insert @var{code} verbatim into output parser source.
-@xref{Decl Summary,,%code}.
-@end deffn
-
-@deffn {Directive} %debug
-Equip the parser for debugging. @xref{Decl Summary}.
+Insert @var{code} verbatim into the output parser source at the
+default location or at the location specified by @var{qualifier}.
+@xref{%code Summary}.
@end deffn
@deffn {Directive} %debug
@end deffn
@end ifset
-@deffn {Directive} %define @var{define-variable}
-@deffnx {Directive} %define @var{define-variable} @var{value}
-Define a variable to adjust Bison's behavior.
-@xref{Decl Summary,,%define}.
+@deffn {Directive} %define @var{variable}
+@deffnx {Directive} %define @var{variable} @var{value}
+@deffnx {Directive} %define @var{variable} "@var{value}"
+Define a variable to adjust Bison's behavior. @xref{%define Summary}.
@end deffn
@deffn {Directive} %defines
-Bison declaration to create a header file meant for the scanner.
-@xref{Decl Summary}.
+Bison declaration to create a parser header file, which is usually
+meant for the scanner. @xref{Decl Summary}.
@end deffn
@deffn {Directive} %defines @var{defines-file}
@deffn {Directive} %dprec
Bison declaration to assign a precedence to a rule that is used at parse
time to resolve reduce/reduce conflicts. @xref{GLR Parsers, ,Writing
-@acronym{GLR} Parsers}.
+GLR Parsers}.
@end deffn
@deffn {Symbol} $end
@end deffn
@deffn {Directive} %error-verbose
-Bison declaration to request verbose, specific error message strings
-when @code{yyerror} is called.
+An obsolete directive standing for @samp{%define parse.error verbose}
+(@pxref{Error Reporting, ,The Error Reporting Function @code{yyerror}}).
@end deffn
@deffn {Directive} %file-prefix "@var{prefix}"
@end deffn
@deffn {Directive} %glr-parser
-Bison declaration to produce a @acronym{GLR} parser. @xref{GLR
-Parsers, ,Writing @acronym{GLR} Parsers}.
+Bison declaration to produce a GLR parser. @xref{GLR
+Parsers, ,Writing GLR Parsers}.
@end deffn
@deffn {Directive} %initial-action
@end deffn
@deffn {Directive} %left
-Bison declaration to assign left associativity to token(s).
+Bison declaration to assign precedence and left associativity to token(s).
@xref{Precedence Decl, ,Operator Precedence}.
@end deffn
-@deffn {Directive} %lex-param @{@var{argument-declaration}@}
-Bison declaration to specifying an additional parameter that
+@deffn {Directive} %lex-param @{@var{argument-declaration}@} @dots{}
+Bison declaration to specify additional arguments that
@code{yylex} should accept. @xref{Pure Calling,, Calling Conventions
for Pure Parsers}.
@end deffn
Bison declaration to assign a merging function to a rule. If there is a
reduce/reduce conflict with a rule having the same merging function, the
function is applied to the two semantic values to get a single result.
-@xref{GLR Parsers, ,Writing @acronym{GLR} Parsers}.
+@xref{GLR Parsers, ,Writing GLR Parsers}.
@end deffn
@deffn {Directive} %name-prefix "@var{prefix}"
@deffn {Directive} %no-lines
Bison declaration to avoid generating @code{#line} directives in the
-parser file. @xref{Decl Summary}.
+parser implementation file. @xref{Decl Summary}.
@end deffn
@deffn {Directive} %nonassoc
-Bison declaration to assign nonassociativity to token(s).
+Bison declaration to assign precedence and nonassociativity to token(s).
@xref{Precedence Decl, ,Operator Precedence}.
@end deffn
@deffn {Directive} %output "@var{file}"
-Bison declaration to set the name of the parser file. @xref{Decl
-Summary}.
+Bison declaration to set the name of the parser implementation file.
+@xref{Decl Summary}.
+@end deffn
+
+@deffn {Directive} %param @{@var{argument-declaration}@} @dots{}
+Bison declaration to specify additional arguments that both
+@code{yylex} and @code{yyparse} should accept. @xref{Parser Function,, The
+Parser Function @code{yyparse}}.
@end deffn
-@deffn {Directive} %parse-param @{@var{argument-declaration}@}
-Bison declaration to specifying an additional parameter that
-@code{yyparse} should accept. @xref{Parser Function,, The Parser
-Function @code{yyparse}}.
+@deffn {Directive} %parse-param @{@var{argument-declaration}@} @dots{}
+Bison declaration to specify additional arguments that @code{yyparse}
+should accept. @xref{Parser Function,, The Parser Function @code{yyparse}}.
@end deffn
@deffn {Directive} %prec
@xref{Contextual Precedence, ,Context-Dependent Precedence}.
@end deffn
+@deffn {Directive} %precedence
+Bison declaration to assign precedence to token(s), but no associativity.
+@xref{Precedence Decl, ,Operator Precedence}.
+@end deffn
+
@deffn {Directive} %pure-parser
-Bison declaration to request a pure (reentrant) parser.
-@xref{Pure Decl, ,A Pure (Reentrant) Parser}.
+Deprecated version of @samp{%define api.pure} (@pxref{%define
+Summary,,api.pure}), for which Bison is more careful to warn about
+unreasonable usage.
@end deffn
@deffn {Directive} %require "@var{version}"
@end deffn
@deffn {Directive} %right
-Bison declaration to assign right associativity to token(s).
+Bison declaration to assign precedence and right associativity to token(s).
@xref{Precedence Decl, ,Operator Precedence}.
@end deffn
@end deffn
@deffn {Directive} %token-table
-Bison declaration to include a token name table in the parser file.
-@xref{Decl Summary}.
+Bison declaration to include a token name table in the parser
+implementation file. @xref{Decl Summary}.
@end deffn
@deffn {Directive} %type
making @code{yyparse} return 1 immediately. The error reporting
function @code{yyerror} is not called. @xref{Parser Function, ,The
Parser Function @code{yyparse}}.
+
+For Java parsers, this functionality is invoked using @code{return YYABORT;}
+instead.
@end deffn
@deffn {Macro} YYACCEPT
Macro to pretend that a complete utterance of the language has been
read, by making @code{yyparse} return 0 immediately.
@xref{Parser Function, ,The Parser Function @code{yyparse}}.
+
+For Java parsers, this functionality is invoked using @code{return YYACCEPT;}
+instead.
@end deffn
@deffn {Macro} YYBACKUP
@code{yyerror} and then perform normal error recovery if possible
(@pxref{Error Recovery}), or (if recovery is impossible) make
@code{yyparse} return 1. @xref{Error Recovery}.
+
+For Java parsers, this functionality is invoked using @code{return YYERROR;}
+instead.
@end deffn
@deffn {Function} yyerror
User-supplied function to be called by @code{yyparse} on error.
-@xref{Error Reporting, ,The Error
-Reporting Function @code{yyerror}}.
+@xref{Error Reporting, ,The Error Reporting Function @code{yyerror}}.
@end deffn
@deffn {Macro} YYERROR_VERBOSE
-An obsolete macro that you define with @code{#define} in the prologue
-to request verbose, specific error message strings
-when @code{yyerror} is called. It doesn't matter what definition you
-use for @code{YYERROR_VERBOSE}, just whether you define it. Using
-@code{%error-verbose} is preferred.
+An obsolete macro used in the @file{yacc.c} skeleton that you define
+with @code{#define} in the prologue to request verbose, specific error
+message strings when @code{yyerror} is called. It doesn't matter what
+definition you use for @code{YYERROR_VERBOSE}, just whether you define
+it. Using @samp{%define parse.error verbose} is preferred
+(@pxref{Error Reporting, ,The Error Reporting Function @code{yyerror}}).
@end deffn
@deffn {Macro} YYINITDEPTH
@deffn {Variable} yynerrs
Global variable which Bison increments each time it reports a syntax error.
-(In a pure parser, it is a local variable within @code{yyparse}.)
+(In a pure parser, it is a local variable within @code{yyparse}. In a
+pure push parser, it is a member of @code{yypstate}.)
@xref{Error Reporting, ,The Error Reporting Function @code{yyerror}}.
@end deffn
parsing. @xref{Parser Function, ,The Parser Function @code{yyparse}}.
@end deffn
+@deffn {Function} yypstate_delete
+The function to delete a parser instance, produced by Bison in push mode;
+call this function to delete the memory associated with a parser.
+@xref{Parser Delete Function, ,The Parser Delete Function
+@code{yypstate_delete}}.
+(The current push parsing interface is experimental and may evolve.
+More user feedback will help to stabilize it.)
+@end deffn
+
+@deffn {Function} yypstate_new
+The function to create a parser instance, produced by Bison in push mode;
+call this function to create a new parser.
+@xref{Parser Create Function, ,The Parser Create Function
+@code{yypstate_new}}.
+(The current push parsing interface is experimental and may evolve.
+More user feedback will help to stabilize it.)
+@end deffn
+
+@deffn {Function} yypull_parse
+The parser function produced by Bison in push mode; call this function to
+parse the rest of the input stream.
+@xref{Pull Parser Function, ,The Pull Parser Function
+@code{yypull_parse}}.
+(The current push parsing interface is experimental and may evolve.
+More user feedback will help to stabilize it.)
+@end deffn
+
+@deffn {Function} yypush_parse
+The parser function produced by Bison in push mode; call this function to
+parse a single token. @xref{Push Parser Function, ,The Push Parser Function
+@code{yypush_parse}}.
+(The current push parsing interface is experimental and may evolve.
+More user feedback will help to stabilize it.)
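+
+As a rough sketch of how these functions fit together (assuming a
+pure push parser, i.e., @samp{%define api.pure} together with
+@samp{%define api.push-pull push}, and no location tracking; the
+wrapper @code{parse} is illustrative):
+
+@example
+static int
+parse (void)
+@{
+  int status;
+  yypstate *ps = yypstate_new ();
+  do
+    @{
+      YYSTYPE lval;
+      /* Fetch one token and feed it to the parser.  */
+      status = yypush_parse (ps, yylex (&lval), &lval);
+    @}
+  while (status == YYPUSH_MORE);
+  yypstate_delete (ps);
+  return status;
+@}
+@end example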
+@end deffn
+
@deffn {Macro} YYPARSE_PARAM
An obsolete macro for specifying the name of a parameter that
@code{yyparse} should accept. The use of this macro is deprecated, and
@end deffn
@deffn {Macro} YYSTACK_USE_ALLOCA
-Macro used to control the use of @code{alloca} when the C
-@acronym{LALR}(1) parser needs to extend its stacks. If defined to 0,
+Macro used to control the use of @code{alloca} when the
+deterministic parser in C needs to extend its stacks. If defined to 0,
the parser will use @code{malloc} to extend its stacks. If defined to
1, the parser will use @code{alloca}. Values other than 0 and 1 are
reserved for future Bison extensions. If not defined,
@cindex glossary
@table @asis
-@item Backus-Naur Form (@acronym{BNF}; also called ``Backus Normal Form'')
+@item Accepting state
+A state whose only action is the accept action.
+The accepting state is thus a consistent state.
+@xref{Understanding}.
+
+@item Backus-Naur Form (BNF; also called ``Backus Normal Form'')
Formal method of specifying context-free grammars originally proposed
by John Backus, and slightly improved by Peter Naur in his 1960-01-02
committee document contributing to what became the Algol 60 report.
@xref{Language and Grammar, ,Languages and Context-Free Grammars}.
+@item Consistent state
+A state containing only one possible action. @xref{Default Reductions}.
+
@item Context-free grammars
Grammars specified as rules that can be applied regardless of context.
Thus, if there is a rule which says that an integer can be used as an
permitted. @xref{Language and Grammar, ,Languages and Context-Free
Grammars}.
+@item Default reduction
+The reduction that a parser should perform if the current parser state
+contains no other action for the lookahead token. In permitted parser
+states, Bison declares the reduction with the largest lookahead set to be
+the default reduction and removes that lookahead set. @xref{Default
+Reductions}.
+
+@item Defaulted state
+A consistent state with a default reduction. @xref{Default Reductions}.
+
@item Dynamic allocation
Allocation of memory that occurs during execution, rather than at
compile time or on entry to a function.
parsed, and the states correspond to various stages in the grammar
rules. @xref{Algorithm, ,The Bison Parser Algorithm}.
-@item Generalized @acronym{LR} (@acronym{GLR})
+@item Generalized LR (GLR)
A parsing algorithm that can handle all context-free grammars, including those
-that are not @acronym{LALR}(1). It resolves situations that Bison's
-usual @acronym{LALR}(1)
+that are not LR(1). It resolves situations that Bison's
+deterministic parsing
algorithm cannot by effectively splitting off multiple parsers, trying all
possible parsers, and discarding those that fail in the light of additional
right context. @xref{Generalized LR Parsing, ,Generalized
-@acronym{LR} Parsing}.
+LR Parsing}.
@item Grouping
A language construct that is (in general) grammatically divisible;
for example, `expression' or `declaration' in C@.
@xref{Language and Grammar, ,Languages and Context-Free Grammars}.
+@item IELR(1) (Inadequacy Elimination LR(1))
+A minimal LR(1) parser table construction algorithm. That is, given any
+context-free grammar, IELR(1) generates parser tables with the full
+language-recognition power of canonical LR(1) but with nearly the same
+number of parser states as LALR(1). This reduction in parser states is
+often an order of magnitude. More importantly, because canonical LR(1)'s
+extra parser states may contain duplicate conflicts in the case of non-LR(1)
+grammars, the number of conflicts for IELR(1) is often an order of magnitude
+less as well. This can significantly reduce the complexity of developing a
+grammar. @xref{LR Table Construction}.
+
@item Infix operator
An arithmetic operator that is placed between the operands on which it
performs some operation.
@item Input stream
A continuous flow of data between devices or programs.
+@item LAC (Lookahead Correction)
+A parsing mechanism that fixes the problem of delayed syntax error
+detection, which is caused by LR state merging, default reductions, and the
+use of @code{%nonassoc}. Delayed syntax error detection results in
+unexpected semantic actions, initiation of error recovery in the wrong
+syntactic context, and an incorrect list of expected tokens in a verbose
+syntax error message. @xref{LAC}.
+
@item Language construct
One of the typical usage schemas of the language. For example, one of
the constructs of the C language is the @code{if} statement.
A token already read but not yet shifted. @xref{Lookahead, ,Lookahead
Tokens}.
-@item @acronym{LALR}(1)
+@item LALR(1)
The class of context-free grammars that Bison (like most other parser
-generators) can handle; a subset of @acronym{LR}(1). @xref{Mystery
-Conflicts, ,Mysterious Reduce/Reduce Conflicts}.
+generators) can handle by default; a subset of LR(1).
+@xref{Mysterious Conflicts}.
-@item @acronym{LR}(1)
+@item LR(1)
The class of context-free grammars in which at most one token of
lookahead is needed to disambiguate the parsing of any piece of input.
A grammar symbol that has no rules in the grammar and therefore is
grammatically indivisible. The piece of text it represents is a token.
@xref{Language and Grammar, ,Languages and Context-Free Grammars}.
+
+@item Unreachable state
+A parser state to which there does not exist a sequence of transitions from
+the parser's start state. A state can become unreachable during conflict
+resolution. @xref{Unreachable States}.
@end table
@node Copying This Manual
@appendix Copying This Manual
+@include fdl.texi
-@menu
-* GNU Free Documentation License:: License for copying this manual.
-@end menu
+@node Bibliography
+@unnumbered Bibliography
-@include fdl.texi
+@table @asis
+@item [Denny 2008]
+Joel E. Denny and Brian A. Malloy, IELR(1): Practical LR(1) Parser Tables
+for Non-LR(1) Grammars with Conflict Resolution, in @cite{Proceedings of the
+2008 ACM Symposium on Applied Computing} (SAC'08), ACM, New York, NY, USA,
+pp.@: 240--245. @uref{http://dx.doi.org/10.1145/1363686.1363747}
+
+@item [Denny 2010 May]
+Joel E. Denny, PSLR(1): Pseudo-Scannerless Minimal LR(1) for the
+Deterministic Parsing of Composite Languages, Ph.D. Dissertation, Clemson
+University, Clemson, SC, USA (May 2010).
+@uref{http://proquest.umi.com/pqdlink?did=2041473591&Fmt=7&clientId=79356&RQT=309&VName=PQD}
+
+@item [Denny 2010 November]
+Joel E. Denny and Brian A. Malloy, The IELR(1) Algorithm for Generating
+Minimal LR(1) Parser Tables for Non-LR(1) Grammars with Conflict Resolution,
+in @cite{Science of Computer Programming}, Vol.@: 75, Issue 11 (November
+2010), pp.@: 943--979. @uref{http://dx.doi.org/10.1016/j.scico.2009.08.001}
+
+@item [DeRemer 1982]
+Frank DeRemer and Thomas Pennello, Efficient Computation of LALR(1)
+Look-Ahead Sets, in @cite{ACM Transactions on Programming Languages and
+Systems}, Vol.@: 4, No.@: 4 (October 1982), pp.@:
+615--649. @uref{http://dx.doi.org/10.1145/69622.357187}
+
+@item [Knuth 1965]
+Donald E. Knuth, On the Translation of Languages from Left to Right, in
+@cite{Information and Control}, Vol.@: 8, Issue 6 (December 1965), pp.@:
+607--639. @uref{http://dx.doi.org/10.1016/S0019-9958(65)90426-2}
+
+@item [Scott 2000]
+Elizabeth Scott, Adrian Johnstone, and Shamsa Sadaf Hussain,
+@cite{Tomita-Style Generalised LR Parsers}, Royal Holloway, University of
+London, Department of Computer Science, TR-00-12 (December 2000).
+@uref{http://www.cs.rhul.ac.uk/research/languages/publications/tomita_style_1.ps}
+@end table
@node Index
@unnumbered Index
@bye
-@c LocalWords: texinfo setfilename settitle setchapternewpage finalout
-@c LocalWords: ifinfo smallbook shorttitlepage titlepage GPL FIXME iftex
-@c LocalWords: akim fn cp syncodeindex vr tp synindex dircategory direntry
-@c LocalWords: ifset vskip pt filll insertcopying sp ISBN Etienne Suvasa
-@c LocalWords: ifnottex yyparse detailmenu GLR RPN Calc var Decls Rpcalc
-@c LocalWords: rpcalc Lexer Gen Comp Expr ltcalc mfcalc Decl Symtab yylex
-@c LocalWords: yyerror pxref LR yylval cindex dfn LALR samp gpl BNF xref
-@c LocalWords: const int paren ifnotinfo AC noindent emph expr stmt findex
-@c LocalWords: glr YYSTYPE TYPENAME prog dprec printf decl init stmtMerge
-@c LocalWords: pre STDC GNUC endif yy YY alloca lf stddef stdlib YYDEBUG
-@c LocalWords: NUM exp subsubsection kbd Ctrl ctype EOF getchar isdigit
-@c LocalWords: ungetc stdin scanf sc calc ulator ls lm cc NEG prec yyerrok
-@c LocalWords: longjmp fprintf stderr yylloc YYLTYPE cos ln
-@c LocalWords: smallexample symrec val tptr FNCT fnctptr func struct sym
-@c LocalWords: fnct putsym getsym fname arith fncts atan ptr malloc sizeof
-@c LocalWords: strlen strcpy fctn strcmp isalpha symbuf realloc isalnum
-@c LocalWords: ptypes itype YYPRINT trigraphs yytname expseq vindex dtype
-@c LocalWords: Rhs YYRHSLOC LE nonassoc op deffn typeless yynerrs
-@c LocalWords: yychar yydebug msg YYNTOKENS YYNNTS YYNRULES YYNSTATES
-@c LocalWords: cparse clex deftypefun NE defmac YYACCEPT YYABORT param
-@c LocalWords: strncmp intval tindex lvalp locp llocp typealt YYBACKUP
-@c LocalWords: YYEMPTY YYEOF YYRECOVERING yyclearin GE def UMINUS maybeword
-@c LocalWords: Johnstone Shamsa Sadaf Hussain Tomita TR uref YYMAXDEPTH
-@c LocalWords: YYINITDEPTH stmnts ref stmnt initdcl maybeasm notype
-@c LocalWords: hexflag STR exdent itemset asis DYYDEBUG YYFPRINTF args
-@c LocalWords: infile ypp yxx outfile itemx tex leaderfill
-@c LocalWords: hbox hss hfill tt ly yyin fopen fclose ofirst gcc ll
-@c LocalWords: nbar yytext fst snd osplit ntwo strdup AST
-@c LocalWords: YYSTACK DVI fdl printindex
+@c LocalWords: texinfo setfilename settitle setchapternewpage finalout texi FSF
+@c LocalWords: ifinfo smallbook shorttitlepage titlepage GPL FIXME iftex FSF's
+@c LocalWords: akim fn cp syncodeindex vr tp synindex dircategory direntry Naur
+@c LocalWords: ifset vskip pt filll insertcopying sp ISBN Etienne Suvasa Multi
+@c LocalWords: ifnottex yyparse detailmenu GLR RPN Calc var Decls Rpcalc multi
+@c LocalWords: rpcalc Lexer Expr ltcalc mfcalc yylex defaultprec Donnelly Gotos
+@c LocalWords: yyerror pxref LR yylval cindex dfn LALR samp gpl BNF xref yypush
+@c LocalWords: const int paren ifnotinfo AC noindent emph expr stmt findex lr
+@c LocalWords: glr YYSTYPE TYPENAME prog dprec printf decl init stmtMerge POSIX
+@c LocalWords: pre STDC GNUC endif yy YY alloca lf stddef stdlib YYDEBUG yypull
+@c LocalWords: NUM exp subsubsection kbd Ctrl ctype EOF getchar isdigit nonfree
+@c LocalWords: ungetc stdin scanf sc calc ulator ls lm cc NEG prec yyerrok rr
+@c LocalWords: longjmp fprintf stderr yylloc YYLTYPE cos ln Stallman Destructor
+@c LocalWords: symrec val tptr FNCT fnctptr func struct sym enum IEC syntaxes
+@c LocalWords: fnct putsym getsym fname arith fncts atan ptr malloc sizeof Lex
+@c LocalWords: strlen strcpy fctn strcmp isalpha symbuf realloc isalnum DOTDOT
+@c LocalWords: ptypes itype YYPRINT trigraphs yytname expseq vindex dtype Unary
+@c LocalWords: Rhs YYRHSLOC LE nonassoc op deffn typeless yynerrs nonterminal
+@c LocalWords: yychar yydebug msg YYNTOKENS YYNNTS YYNRULES YYNSTATES reentrant
+@c LocalWords: cparse clex deftypefun NE defmac YYACCEPT YYABORT param yypstate
+@c LocalWords: strncmp intval tindex lvalp locp llocp typealt YYBACKUP subrange
+@c LocalWords: YYEMPTY YYEOF YYRECOVERING yyclearin GE def UMINUS maybeword loc
+@c LocalWords: Johnstone Shamsa Sadaf Hussain Tomita TR uref YYMAXDEPTH inline
+@c LocalWords: YYINITDEPTH stmts ref initdcl maybeasm notype Lookahead yyoutput
+@c LocalWords: hexflag STR exdent itemset asis DYYDEBUG YYFPRINTF args Autoconf
+@c LocalWords: infile ypp yxx outfile itemx tex leaderfill Troubleshouting sqrt
+@c LocalWords: hbox hss hfill tt ly yyin fopen fclose ofirst gcc ll lookahead
+@c LocalWords: nbar yytext fst snd osplit ntwo strdup AST Troublereporting th
+@c LocalWords: YYSTACK DVI fdl printindex IELR nondeterministic nonterminals ps
+@c LocalWords: subexpressions declarator nondeferred config libintl postfix LAC
+@c LocalWords: preprocessor nonpositive unary nonnumeric typedef extern rhs sr
+@c LocalWords: yytokentype destructor multicharacter nonnull EBCDIC nterm LR's
+@c LocalWords: lvalue nonnegative XNUM CHR chr TAGLESS tagless stdout api TOK
+@c LocalWords: destructors Reentrancy nonreentrant subgrammar nonassociative Ph
+@c LocalWords: deffnx namespace xml goto lalr ielr runtime lex yacc yyps env
+@c LocalWords: yystate variadic Unshift NLS gettext po UTF Automake LOCALEDIR
+@c LocalWords: YYENABLE bindtextdomain Makefile DEFS CPPFLAGS DBISON DeRemer
+@c LocalWords: autoreconf Pennello multisets nondeterminism Generalised baz ACM
+@c LocalWords: redeclare automata Dparse localedir datadir XSLT midrule Wno
+@c LocalWords: Graphviz multitable headitem hh basename Doxygen fno filename
+@c LocalWords: doxygen ival sval deftypemethod deallocate pos deftypemethodx
+@c LocalWords: Ctor defcv defcvx arg accessors arithmetics CPP ifndef CALCXX
+@c LocalWords: lexer's calcxx bool LPAREN RPAREN deallocation cerrno climits
+@c LocalWords: cstdlib Debian undef yywrap unput noyywrap nounput zA yyleng
+@c LocalWords: errno strtol ERANGE str strerror iostream argc argv Javadoc PSLR
+@c LocalWords: bytecode initializers superclass stype ASTNode autoboxing nls
+@c LocalWords: toString deftypeivar deftypeivarx deftypeop YYParser strictfp
+@c LocalWords: superclasses boolean getErrorVerbose setErrorVerbose deftypecv
+@c LocalWords: getDebugStream setDebugStream getDebugLevel setDebugLevel url
+@c LocalWords: bisonVersion deftypecvx bisonSkeleton getStartPos getEndPos
+@c LocalWords: getLVal defvar deftypefn deftypefnx gotos msgfmt Corbett LALR's
+@c LocalWords: subdirectory Solaris nonassociativity perror schemas Malloy
+@c LocalWords: Scannerless ispell american
+
+@c Local Variables:
+@c ispell-dictionary: "american"
+@c fill-column: 76
+@c End: