* src/reduce.c (dump_grammar): Move to...
* src/gram.h, src/gram.c (grammar_dump): here.
Be sure to separate long item numbers.
Don't read the members of a rule's prec if it is nil.
* src/output.c (table_size, table_grow): New.
(MAXTABLE): Remove, replace uses with table_size.
(pack_vector): Instead of dying when the table is too big, grow it.
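A minimal sketch of the growth strategy this describes, not Bison's actual
code: the names table_size and table_grow come from the entry, while the
initial size, the doubling factor, and the realloc/memset handling are
assumptions.

    #include <stdlib.h>
    #include <string.h>

    static short *table;             /* assumed allocated with table_size entries */
    static int table_size = 32768;   /* initial capacity: an assumption */

    /* Grow the table so that index DESIRED fits, doubling the
       allocation instead of aborting as the old MAXTABLE check did.  */
    static void
    table_grow (int desired)
    {
      int old_size = table_size;
      while (table_size <= desired)
        table_size *= 2;
      table = realloc (table, table_size * sizeof *table);
      if (!table)
        abort ();
      memset (table + old_size, 0,
              (table_size - old_size) * sizeof *table);
    }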
* src/reader.c (token_translations_init): 256 is now the default
value for the error token, i.e., it will be assigned another
number if the user assigned 256 to one of her tokens.
(reader): Don't force 256 to error.
* doc/bison.texinfo (Symbols): Adjust.
* tests/torture.at (AT_DATA_HORIZONTAL_GRAMMAR)
(AT_DATA_TRIANGULAR_GRAMMAR): Number the tokens as 1, 2, 3
etc. instead of 10, 20, 30 (which was used to `jump' over error
(256) and undefined (2)).
* doc/bison.texinfo (Actions): Make clear that `|' is not the same
as Lex/Flex's.
(Debugging): More details about enabling the debugging features.
(Table of Symbols): Describe $$, $n, @$, and @n.
Suggested by Tim Josling.
* tests/calc.at (_AT_CHECK_CALC_ERROR): Receive as argument the
full stderr, and strip it according to the bison options, instead
of composing the error message from different bits.
This makes it easier to check for several error messages.
Adjust all the invocations.
Add an invocation exercising the error token.
Add an invocation demonstrating a stupid error message.
(_AT_DATA_CALC_Y): Follow the GCS: initial column is 1, not 0.
Adjust the tests.
Error messages are for stderr, not stdout.
* src/gram.h, src/gram.c (error_token_number): Remove, use
errtoken->number.
* src/reader.c (reader): Don't specify the user token number (2)
for $undefined, as it uselessly prevents using it.
* src/gram.h (token_number_t): Move to...
* src/symtab.h: here.
(state_t.number): Is a token_number_t.
* src/print.c, src/reader.c: Use undeftoken->number instead of
hard coded 2.
(Even though this 2 is not the same as above: the number of the
undeftoken remains 2; it is its user token number which might not
be 2.)
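To make the distinction above concrete, here is a hypothetical, heavily
trimmed view of a symbol entry; the member names follow the entries, and
everything else about the real bucket is elided.

    typedef struct bucket
    {
      const char *tag;         /* the symbol's name */
      int number;              /* internal symbol number: stays 2 for $undefined */
      int user_token_number;   /* the value yylex returns, for tokens only */
      /* ... many more members in the real structure ... */
    } bucket;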
* src/output.c (prepare_tokens): Rename the `maxtok' muscle as
`user_token_number_max'.
Output `undef_token_number'.
* data/bison.simple, data/bison.c++: Use them.
Be sure to map invalid yylex return values to
`undef_token_number'. This saves us from gratuitous SEGV.
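A sketch of that mapping, assuming the usual translation-macro shape; the
macro and table names are illustrative, and the example values merely stand
for the two muscles named above.

    extern const unsigned char yytranslate[];  /* generated mapping table */

    #define YYMAXUTOK  300   /* example value of `user_token_number_max' */
    #define YYUNDEFTOK 2     /* example value of `undef_token_number' */

    /* Any out-of-range yylex value is clamped to the undefined token
       instead of indexing past yytranslate.  */
    #define YYTRANSLATE(X) \
      ((unsigned) (X) <= YYMAXUTOK ? yytranslate[X] : YYUNDEFTOK)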
* tests/conflicts.at (Solved SR Conflicts)
(Unresolved SR Conflicts): Adjust.
* tests/regression.at (Web2c Actions): Adjust.
* data/bison.c++: s/b4_item_number_max/b4_rhs_number_max/.
Add #line.
Remove the duplicate `typedefs'.
(RhsNumberType): Fix the declaration and various other typos.
Use __ofile__.
* data/bison.simple: Use __ofile__.
* src/scan-skel.l: Handle __ofile__.
* src/gram.h (item_number_t): New, the type of item numbers in
RITEM. Note that it must be able to code symbol numbers as
positive number, and the negation of rule numbers as negative
numbers.
Adjust all dependencies (pretty many).
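A sketch of the encoding the entry describes; only the sign convention is
taken from the entry, the helper names are illustrative.

    /* RITEM holds symbol numbers as non-negative values; each rule's
       right-hand side is terminated by the rule's number, negated.  */
    typedef int item_number_t;

    #define symbol_number_as_item_number(Sym)  ((item_number_t) (Sym))
    #define item_number_is_rule_number(I)      ((I) < 0)
    #define item_number_as_rule_number(I)      (-(I))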
* src/reduce.c (rule): Remove this `short *' pointer: use
item_number_t.
* src/system.h (MINSHORT, MAXSHORT): Remove.
Include `limits.h'.
Adjust dependencies to using SHRT_MAX and SHRT_MIN.
(shortcpy): Remove.
(MAXTABLE): Move to...
* src/output.c (MAXTABLE): here.
(prepare_rules): Use output_int_table to output rhs.
* data/bison.simple, data/bison.c++: Adjust.
* tests/torture.at (Big triangle): Move the limit from 254 to
500.
* tests/regression.at (Web2c Actions): Adjust.
Trying with bigger grammars shows various phenomena: at 3000 (a 28MB
grammar file) bison is killed by my system; at 2000 (12MB) bison
passes but produces negative #line numbers; once that is fixed, GCC
is killed while compiling the 14MB of generated C; at 1500 (6.7MB of
grammar, 8.2MB of C), everything passes.
* src/state.h (state_h): Code input lines on ints, not shorts.
* src/muscle_tab.h (MUSCLE_INSERT_LONG_INT): New.
* src/output.c (output_table_data): Return the longest number.
(prepare_tokens): Output `token_number_max'.
* data/bison.simple, data/bison.c++ (b4_sint_type, b4_uint_type):
New.
Use them to define yy_token_number_type/TokenNumberType.
Use this type for yytranslate.
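The idea, sketched with the C preprocessor; the real skeletons make this
choice with m4 at generation time, and TOKEN_NUMBER_MAX here just stands
for the `token_number_max' muscle.

    #define TOKEN_NUMBER_MAX 300   /* example value */

    #if TOKEN_NUMBER_MAX <= 255
    typedef unsigned char yy_token_number_type;
    #elif TOKEN_NUMBER_MAX <= 65535
    typedef unsigned short yy_token_number_type;
    #else
    typedef unsigned int yy_token_number_type;
    #endif

    /* yytranslate can then be stored in the narrowest sufficient type.  */
    static const yy_token_number_type yytranslate[] = { 0, 2 /* , ... */ };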
* tests/torture.at (Big triangle): Push the limit from 124 to
253.
* tests/regression.at (Web2c Actions): Adjust.
Use lib/hash for the symbol table.
* src/gram.c (ntokens): Initialize to 1, to reserve a slot for
EOF.
* src/lex.c (lex): Set the `number' member of new terminals.
* src/reader.c (bucket_check_defined, bucket_make_alias)
(bucket_check_alias_consistence, bucket_translation): New.
(reader, grammar_free, readgram, token_translations_init)
(packsymbols): Adjust.
(reader): Number the predefined tokens.
* src/reduce.c (inaccessable_symbols): Just use hard coded numbers
for predefined tokens.
* src/symtab.h (bucket): Remove all the hash table related
members.
* src/symtab.c (symtab): Replace by...
(bucket_table): this.
(bucket_new, bucket_free, hash_compare_bucket, hash_bucket)
(buckets_new, buckets_do): New.
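A minimal sketch of a symbol table sitting on gnulib's lib/hash, with the
helper names from the entry; the bucket shown is heavily trimmed and the
exact hash.h signatures should be treated as assumptions.

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>
    #include "hash.h"   /* hash_initialize, hash_insert, hash_lookup, hash_string */

    typedef struct bucket { const char *tag; int number; } bucket;

    static Hash_table *bucket_table;

    static bool
    hash_compare_bucket (const void *a, const void *b)
    {
      return strcmp (((const bucket *) a)->tag,
                     ((const bucket *) b)->tag) == 0;
    }

    static size_t
    hash_bucket (const void *item, size_t tablesize)
    {
      return hash_string (((const bucket *) item)->tag, tablesize);
    }

    static void
    buckets_new (void)
    {
      bucket_table = hash_initialize (257, NULL,
                                      hash_bucket, hash_compare_bucket, NULL);
    }

    /* Look TAG up, creating its bucket on first sight (sketch).  */
    static bucket *
    getsym (const char *tag)
    {
      bucket probe = { tag, 0 };
      bucket *entry = hash_lookup (bucket_table, &probe);
      if (!entry)
        {
          entry = malloc (sizeof *entry);
          entry->tag = tag;
          entry->number = -1;
          if (!hash_insert (bucket_table, entry))
            abort ();
        }
      return entry;
    }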
* src/gram.h (rule_s): prec and precsym are now pointers
to the bucket giving the priority/associativity.
Member `associativity' removed: useless.
* src/reduce.c, src/conflicts.c: Adjust.
* src/gram.h (rule_t): Rename `number' as `user_number'.
`number' is a new member.
Adjust dependencies.
* src/reduce.c (reduce_grammar_tables): Renumber rule_t.number.
Be sure never to walk through RITEMS, but use only data related to
the rules themselves. RITEMS should be banished.
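Putting the two entries above together, the rule record now looks roughly
like this; the member names come from the entries, everything else is
elided or assumed.

    typedef struct bucket bucket;   /* the symbol entry, see the sketch above */

    typedef struct rule_s
    {
      int number;        /* new internal number, set by packgram */
      int user_number;   /* the former `number', the one shown to the user */

      bucket *prec;      /* symbol providing the rule's precedence, or NULL */
      bucket *precsym;   /* symbol named by an explicit %prec, or NULL */

      /* lhs, rhs, line and the remaining members are elided here.  */
    } rule_t;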
* src/output.c (output_token_translations): Rename as...
(prepare_tokens): this.
In addition to `translate', prepare the muscles `tname' and
`toknum', which were handled by...
(output_rule_data): this.
Remove, and move the remainder of its outputs into...
(prepare_rules): this new routine, which also merges content from
(output_gram): this.
(prepare_rules): Be sure never to walk through RITEMS.
(output_stos): Rename as...
(prepare_stos): this.
(output): Always invoke prepare_states; if its result is not needed,
the output simply does not use it.
* src/LR0.c (new_state): Display `nstates' as the name of the
newly created state.
Adjust to initialize first_state and last_state if needed.
Be sure to distinguish the initial from the final state.
(new_states): Create the itemset of the initial state, and use
new_state.
* src/closure.c (closure): Now that the initial state has its
items properly set, there is no need for a special case when
creating `ruleset'.
As a result, rule 0, reducing to $axiom, is now visible in the
outputs. Adjust the test suite.
* tests/conflicts.at (Solved SR Conflicts)
(Unresolved SR Conflicts): Adjust.
* tests/regression.at (Web2c Report, Rule Line Numbers): Idem.
* tests/conflicts.at (S/R in initial): New.
* src/gram.h (rule_t): `lhs' is now a pointer to the symbol's
bucket.
Adjust all dependencies.
* src/reduce.c (nonterminals_reduce): Don't forget to renumber the
`number' of the buckets too.
* src/gram.h: Include `symtab.h'.
(associativity): Move to...
* src/symtab.h: here.
No longer include `gram.h'.
Remove the useless rules from the parser.
* src/gram.h, src/gram.c (rules_swap, rule_rhs_length): New.
(ritem_longest_rhs): Use the latter.
* src/gram.h (rule_t): `number' is a new member.
* src/reader.c (packgram): Set it.
* src/reduce.c (reduce_grammar_tables): Move the useless rules to
the end of `rules', and count them out of `nrules'.
(reduce_output, dump_grammar): Adjust.
* src/print.c (print_grammar): There is no longer any need to check
a rule for usefulness, as useless rules are beyond `nrules + 1'.
* tests/reduce.at (Reduced Automaton): New test.
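A sketch of the partitioning step described above: the entry's rules_swap
helper is replaced here by an auxiliary array to keep the sketch short, and
the record is trimmed to the two members the sketch needs.

    #include <stdlib.h>

    typedef struct { int number; int useful; /* ... */ } rule_t;

    static rule_t *rules;             /* rules[1..nrules] are in use */
    static int nrules;
    static int nuseless_productions;

    /* Move the useless rules to the end of RULES, renumber the useful
       ones 1..nrules, and count the useless ones out of NRULES, so the
       useless rules end up beyond the new `nrules'.  */
    static void
    reduce_grammar_tables (void)
    {
      int nuseful = nrules - nuseless_productions;
      rule_t *sorted = malloc ((nrules + 1) * sizeof *sorted);
      int r, useful = 0, useless = nuseful;

      for (r = 1; r <= nrules; r++)
        sorted[rules[r].useful ? ++useful : ++useless] = rules[r];
      for (r = 1; r <= nrules; r++)
        {
          rules[r] = sorted[r];
          rules[r].number = r;
        }
      free (sorted);
      nrules = nuseful;
    }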
Changes in version 1.49a:
* False `Token not used' report fixed.
* src/reduce.c (inaccessable_symbols): Fix a buglet: because of a
missing `+ 1' on nrules, Bison reported a token as useless if it
was used solely to set the precedence of the last rule...
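The shape of the fix, as a hypothetical helper: the function name is made
up, and the numeric precsym member and the V1 bitset of precedence-only
tokens are as they were at the time.

    #include "bitset.h"   /* Bison's bitset module: bitset, bitset_set */

    typedef struct { int precsym; /* ... */ } rule_t;

    static void
    mark_precedence_tokens (bitset v1, rule_t const *rules, int nrules)
    {
      int r;
      /* Rules run from 1 to nrules inclusive, hence the `+ 1';
         without it the last rule's precedence token was missed.  */
      for (r = 1; r < nrules + 1; r++)
        if (rules[r].precsym)
          bitset_set (v1, rules[r].precsym);
    }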
* data/bison.c++, data/bison.simple: Don't output the current file
name in #line, to avoid useless diffs between two identical
outputs under different names.
* src/closure.c, src/derives.c, src/gram.h, src/lalr.c,
* src/nullable.c, src/output.c, src/print.c, src/print_graph.c,
* src/reader.c, src/reduce.c: Let rule_t.rhs point directly to the
RHS, instead of being an index in RITEMS.
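For instance, computing a rule's right-hand side length used to index RITEM
(for (i = rules[r].rhs; 0 <= ritem[i]; i++)) and becomes a plain pointer
walk; a minimal self-contained sketch, with the types reduced to what the
loop needs:

    typedef int item_number_t;
    typedef struct { item_number_t *rhs; /* ... */ } rule_t;

    static int
    rule_rhs_length (rule_t const *r)
    {
      int len = 0;
      item_number_t const *rp;
      for (rp = r->rhs; 0 <= *rp; rp++)
        len++;
      return len;
    }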
2002-04-04  Paul Eggert
* doc/bison.texinfo: Update copyright date.
(Rpcalc Lexer, Symbols, Token Decl): Don't assume ASCII.
(Symbols): Warn about running Bison in one character set,
but compiling and/or running in an incompatible one.
Warn about character code 256, too.
2002-03-20  Paul Eggert
* src/bison.simple (YYCOPY): New macro.
(YYSTACK_RELOCATE): Use it.
Remove Type arg; no longer needed. All callers changed.
(yymemcpy): Remove; no longer needed.
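For reference, a sketch of such a copying macro; the real skeleton also has
a __builtin_memcpy fast path under GCC, and defines YYSIZE_T itself.

    #include <stddef.h>
    #define YYSIZE_T size_t

    #ifndef YYCOPY
    # define YYCOPY(To, From, Count)                \
        do                                          \
          {                                         \
            YYSIZE_T yyi;                           \
            for (yyi = 0; yyi < (Count); yyi++)     \
              (To)[yyi] = (From)[yyi];              \
          }                                         \
        while (0)
    #endif

    /* YYSTACK_RELOCATE can then copy each of the parser's stacks with
       one call, e.g. YYCOPY (&yyptr->yyss, yyss, yysize);  (sketch) */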
2002-03-19  Akim Demaille
* tests/regression.at (%nonassoc and eof, Unresolved SR Conflicts)
(Solved SR Conflicts, %expect not enough, %expect right)
(%expect too much): Move to...
* tests/conflicts.at: this new file.
2002-03-19  Akim Demaille
* data/m4sugar/m4sugar.m4: Update from CVS Autoconf.
* data/bison.simple, data/bison.c++: Handle the `#define' part, so
that we can move to enums for instance.
* src/output.c (token_definitions_output): Output a list of
`token-name, token-number' instead of the #define.
(output_skeleton): Name this list `b4_tokens', not `b4_tokendefs'.
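In other words, the skeleton now receives plain name/number pairs and
decides the C form itself; for a grammar declaring %token NUM 258 it could
emit either of the following (illustrative output, not text copied from
Bison):

    #define NUM 258

    /* ...or, once the skeletons switch to enums: */
    enum yytokentype { NUM = 258 };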
2002-03-04  Robert Anisko
* data/bison.c++: Unmerge value as yylval and value as yyval. Unmerge
location as yylloc and location as yyloc. Use YYLLOC_DEFAULT, and
provide a default implementation.
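A sketch of one possible default of that kind, assuming the usual
first_line/first_column/last_line/last_column members; the empty right-hand
side case (N = 0) is ignored here.

    #ifndef YYLLOC_DEFAULT
    # define YYLLOC_DEFAULT(Current, Rhs, N)                  \
        do                                                    \
          {                                                   \
            (Current).first_line   = (Rhs)[1].first_line;     \
            (Current).first_column = (Rhs)[1].first_column;   \
            (Current).last_line    = (Rhs)[N].last_line;      \
            (Current).last_column  = (Rhs)[N].last_column;    \
          }                                                   \
        while (0)
    #endif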
2002-03-04  Akim Demaille
* tests/input.at (Invalid $n, Invalid @n): Add the ending `;'.
* tests/output.at (AT_CHECK_OUTPUT): Likewise.
* tests/headers.at (AT_TEST_CPP_GUARD_H): Ditto.
* tests/semantic.at (Parsing Guards): Similarly.
* src/reader.c (readgram): Complain if the last rule is not ended
with a semi-colon.
2002-03-04  Akim Demaille
* src/closure.c, src/conflicts.c, src/lalr.c, src/print.c,
* src/reduce.c: Remove the `bitset_zero's following the
`bitset_create's, as the zeroing is now performed by the latter.