<html>
<head>
<title>PLY (Python Lex-Yacc)</title>
</head>
<body bgcolor="#ffffff">

<h1>PLY (Python Lex-Yacc)</h1>

<b>
David M. Beazley <br>
dave@dabeaz.com<br>
</b>

<p>
<b>PLY Version: 3.6</b>
<p>

<!-- INDEX -->
<div class="sectiontoc">
<ul>
<li><a href="#ply_nn1">Preface and Requirements</a>
<li><a href="#ply_nn1">Introduction</a>
<li><a href="#ply_nn2">PLY Overview</a>
<li><a href="#ply_nn3">Lex</a>
<ul>
<li><a href="#ply_nn4">Lex Example</a>
<li><a href="#ply_nn5">The tokens list</a>
<li><a href="#ply_nn6">Specification of tokens</a>
<li><a href="#ply_nn7">Token values</a>
<li><a href="#ply_nn8">Discarded tokens</a>
<li><a href="#ply_nn9">Line numbers and positional information</a>
<li><a href="#ply_nn10">Ignored characters</a>
<li><a href="#ply_nn11">Literal characters</a>
<li><a href="#ply_nn12">Error handling</a>
<li><a href="#ply_nn14">EOF Handling</a>
<li><a href="#ply_nn13">Building and using the lexer</a>
<li><a href="#ply_nn14">The @TOKEN decorator</a>
<li><a href="#ply_nn15">Optimized mode</a>
<li><a href="#ply_nn16">Debugging</a>
<li><a href="#ply_nn17">Alternative specification of lexers</a>
<li><a href="#ply_nn18">Maintaining state</a>
<li><a href="#ply_nn19">Lexer cloning</a>
<li><a href="#ply_nn20">Internal lexer state</a>
<li><a href="#ply_nn21">Conditional lexing and start conditions</a>
<li><a href="#ply_nn21">Miscellaneous Issues</a>
</ul>
<li><a href="#ply_nn22">Parsing basics</a>
<li><a href="#ply_nn23">Yacc</a>
<ul>
<li><a href="#ply_nn24">An example</a>
<li><a href="#ply_nn25">Combining Grammar Rule Functions</a>
<li><a href="#ply_nn26">Character Literals</a>
<li><a href="#ply_nn26">Empty Productions</a>
<li><a href="#ply_nn28">Changing the starting symbol</a>
<li><a href="#ply_nn27">Dealing With Ambiguous Grammars</a>
<li><a href="#ply_nn28">The parser.out file</a>
<li><a href="#ply_nn29">Syntax Error Handling</a>
<ul>
<li><a href="#ply_nn30">Recovery and resynchronization with error rules</a>
<li><a href="#ply_nn31">Panic mode recovery</a>
<li><a href="#ply_nn35">Signalling an error from a production</a>
<li><a href="#ply_nn38">When Do Syntax Errors Get Reported</a>
<li><a href="#ply_nn32">General comments on error handling</a>
</ul>
<li><a href="#ply_nn33">Line Number and Position Tracking</a>
<li><a href="#ply_nn34">AST Construction</a>
<li><a href="#ply_nn35">Embedded Actions</a>
<li><a href="#ply_nn36">Miscellaneous Yacc Notes</a>
</ul>
<li><a href="#ply_nn37">Multiple Parsers and Lexers</a>
<li><a href="#ply_nn38">Using Python's Optimized Mode</a>
<li><a href="#ply_nn44">Advanced Debugging</a>
<ul>
<li><a href="#ply_nn45">Debugging the lex() and yacc() commands</a>
<li><a href="#ply_nn46">Run-time Debugging</a>
</ul>
<li><a href="#ply_nn49">Packaging Advice</a>
<li><a href="#ply_nn39">Where to go from here?</a>
</ul>
</div>
<!-- INDEX -->



<H2><a name="ply_nn1"></a>1. Preface and Requirements</H2>


<p>
This document provides an overview of lexing and parsing with PLY.
Given the intrinsic complexity of parsing, I would strongly advise
that you read (or at least skim) this entire document before jumping
into a big development project with PLY.
</p>

<p>
PLY-3.5 is compatible with both Python 2 and Python 3.  If you are using
Python 2, you have to use Python 2.6 or newer.
</p>

<H2><a name="ply_nn1"></a>2. Introduction</H2>


PLY is a pure-Python implementation of the popular compiler
construction tools lex and yacc.  The main goal of PLY is to stay
fairly faithful to the way in which traditional lex/yacc tools work.
This includes supporting LALR(1) parsing as well as providing
extensive input validation, error reporting, and diagnostics.  Thus,
if you've used yacc in another programming language, it should be
relatively straightforward to use PLY.

<p>
Early versions of PLY were developed to support an Introduction to
Compilers Course I taught in 2001 at the University of Chicago.
Since PLY was primarily developed as an instructional tool, you will
find it to be fairly picky about token and grammar rule
specification.  In part, this
added formality is meant to catch common programming mistakes made by
novice users.  However, advanced users will also find such features to
be useful when building complicated grammars for real programming
languages.  It should also be noted that PLY does not provide much in
the way of bells and whistles (e.g., automatic construction of
abstract syntax trees, tree traversal, etc.).  Nor would I consider it
to be a parsing framework.  Instead, you will find a bare-bones, yet
fully capable lex/yacc implementation written entirely in Python.

<p>
The rest of this document assumes that you are somewhat familiar with
parsing theory, syntax directed translation, and the use of compiler
construction tools such as lex and yacc in other programming
languages.  If you are unfamiliar with these topics, you will probably
want to consult an introductory text such as "Compilers: Principles,
Techniques, and Tools", by Aho, Sethi, and Ullman.  O'Reilly's "Lex
and Yacc" by John Levine may also be handy.  In fact, the O'Reilly book can be
used as a reference for PLY as the concepts are virtually identical.

<H2><a name="ply_nn2"></a>3. PLY Overview</H2>

<p>
PLY consists of two separate modules: <tt>lex.py</tt> and
<tt>yacc.py</tt>, both of which are found in a Python package
called <tt>ply</tt>.  The <tt>lex.py</tt> module is used to break input text into a
collection of tokens specified by a collection of regular expression
rules.  <tt>yacc.py</tt> is used to recognize language syntax that has
been specified in the form of a context free grammar.
</p>

<p>
The two tools are meant to work together.  Specifically,
<tt>lex.py</tt> provides an external interface in the form of a
<tt>token()</tt> function that returns the next valid token on the
input stream.  <tt>yacc.py</tt> calls this repeatedly to retrieve
tokens and invoke grammar rules.  The output of <tt>yacc.py</tt> is
often an Abstract Syntax Tree (AST).  However, this is entirely up to
the user.  If desired, <tt>yacc.py</tt> can also be used to implement
simple one-pass compilers.

<p>
Like its Unix counterpart, <tt>yacc.py</tt> provides most of the
features you expect including extensive error checking, grammar
validation, support for empty productions, error tokens, and ambiguity
resolution via precedence rules.  In fact, almost everything that is possible in traditional yacc
should be supported in PLY.

<p>
The primary difference between
<tt>yacc.py</tt> and Unix <tt>yacc</tt> is that <tt>yacc.py</tt>
doesn't involve a separate code-generation process.
Instead, PLY relies on reflection (introspection)
to build its lexers and parsers.  Unlike traditional lex/yacc which
require a special input file that is converted into a separate source
file, the specifications given to PLY <em>are</em> valid Python
programs.  This means that there are no extra source files nor is
there a special compiler construction step (e.g., running yacc to
generate Python code for the compiler).  Since the generation of the
parsing tables is relatively expensive, PLY caches the results and
saves them to a file.  If no changes are detected in the input source,
the tables are read from the cache.  Otherwise, they are regenerated.

<H2><a name="ply_nn3"></a>4. Lex</H2>


<tt>lex.py</tt> is used to tokenize an input string.  For example, suppose
you're writing a programming language and a user supplied the following input string:

<blockquote>
<pre>
x = 3 + 42 * (s - t)
</pre>
</blockquote>

A tokenizer splits the string into individual tokens

<blockquote>
<pre>
'x','=', '3', '+', '42', '*', '(', 's', '-', 't', ')'
</pre>
</blockquote>

Tokens are usually given names to indicate what they are. For example:

<blockquote>
<pre>
'ID','EQUALS','NUMBER','PLUS','NUMBER','TIMES',
'LPAREN','ID','MINUS','ID','RPAREN'
</pre>
</blockquote>

More specifically, the input is broken into pairs of token types and values.  For example:

<blockquote>
<pre>
('ID','x'), ('EQUALS','='), ('NUMBER','3'),
('PLUS','+'), ('NUMBER','42'), ('TIMES','*'),
('LPAREN','('), ('ID','s'), ('MINUS','-'),
('ID','t'), ('RPAREN',')')
</pre>
</blockquote>

The identification of tokens is typically done by writing a series of regular expression
rules.  The next section shows how this is done using <tt>lex.py</tt>.

<H3><a name="ply_nn4"></a>4.1 Lex Example</H3>


The following example shows how <tt>lex.py</tt> is used to write a simple tokenizer.

<blockquote>
<pre>
# ------------------------------------------------------------
# calclex.py
#
# tokenizer for a simple expression evaluator for
# numbers and +,-,*,/
# ------------------------------------------------------------
import ply.lex as lex

# List of token names.   This is always required
tokens = (
    'NUMBER',
    'PLUS',
    'MINUS',
    'TIMES',
    'DIVIDE',
    'LPAREN',
    'RPAREN',
)

# Regular expression rules for simple tokens
t_PLUS    = r'\+'
t_MINUS   = r'-'
t_TIMES   = r'\*'
t_DIVIDE  = r'/'
t_LPAREN  = r'\('
t_RPAREN  = r'\)'

# A regular expression rule with some action code
def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

# Define a rule so we can track line numbers
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

# A string containing ignored characters (spaces and tabs)
t_ignore  = ' \t'

# Error handling rule
def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)

# Build the lexer
lexer = lex.lex()

</pre>
</blockquote>
To use the lexer, you first need to feed it some input text using
its <tt>input()</tt> method.  After that, repeated calls
to <tt>token()</tt> produce tokens.  The following code shows how this
works:

<blockquote>
<pre>

# Test it out
data = '''
3 + 4 * 10
  + -20 *2
'''

# Give the lexer some input
lexer.input(data)

# Tokenize
while True:
    tok = lexer.token()
    if not tok:
        break      # No more input
    print(tok)
</pre>
</blockquote>

When executed, the example will produce the following output:

<blockquote>
<pre>
$ python example.py
LexToken(NUMBER,3,2,1)
LexToken(PLUS,'+',2,3)
LexToken(NUMBER,4,2,5)
LexToken(TIMES,'*',2,7)
LexToken(NUMBER,10,2,10)
LexToken(PLUS,'+',3,14)
LexToken(MINUS,'-',3,16)
LexToken(NUMBER,20,3,18)
LexToken(TIMES,'*',3,20)
LexToken(NUMBER,2,3,21)
</pre>
</blockquote>

Lexers also support the iteration protocol.  So, you can write the above loop as follows:

<blockquote>
<pre>
for tok in lexer:
    print(tok)
</pre>
</blockquote>

The tokens returned by <tt>lexer.token()</tt> are instances
of <tt>LexToken</tt>.  This object has
attributes <tt>tok.type</tt>, <tt>tok.value</tt>,
<tt>tok.lineno</tt>, and <tt>tok.lexpos</tt>.  The following code shows an example of
accessing these attributes:

<blockquote>
<pre>
# Tokenize
while True:
    tok = lexer.token()
    if not tok:
        break      # No more input
    print(tok.type, tok.value, tok.lineno, tok.lexpos)
</pre>
</blockquote>

The <tt>tok.type</tt> and <tt>tok.value</tt> attributes contain the
type and value of the token itself.
<tt>tok.lineno</tt> and <tt>tok.lexpos</tt> contain information about
the location of the token.  <tt>tok.lexpos</tt> is the index of the
token relative to the start of the input text.

<H3><a name="ply_nn5"></a>4.2 The tokens list</H3>


<p>
All lexers must provide a list <tt>tokens</tt> that defines all of the possible token
names that can be produced by the lexer.  This list is always required
and is used to perform a variety of validation checks.  The tokens list is also used by the
<tt>yacc.py</tt> module to identify terminals.
</p>

<p>
In the example, the following code specifies the token names:

<blockquote>
<pre>
tokens = (
    'NUMBER',
    'PLUS',
    'MINUS',
    'TIMES',
    'DIVIDE',
    'LPAREN',
    'RPAREN',
)
</pre>
</blockquote>

<H3><a name="ply_nn6"></a>4.3 Specification of tokens</H3>

Each token is specified by writing a regular expression rule compatible with Python's <tt>re</tt> module.  Each of these rules
is defined by making declarations with a special prefix <tt>t_</tt> to indicate that it
defines a token.  For simple tokens, the regular expression can
be specified as a string such as this (note: Python raw strings are used since they are the
most convenient way to write regular expression strings):

<blockquote>
<pre>
t_PLUS = r'\+'
</pre>
</blockquote>

In this case, the name following the <tt>t_</tt> must exactly match one of the
names supplied in <tt>tokens</tt>.  If some kind of action needs to be performed,
a token rule can be specified as a function.  For example, this rule matches numbers and
converts the string into a Python integer.

<blockquote>
<pre>
def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t
</pre>
</blockquote>

When a function is used, the regular expression rule is specified in the function documentation string.
The function always takes a single argument which is an instance of
<tt>LexToken</tt>.  This object has attributes of <tt>t.type</tt> which is the token type (as a string),
<tt>t.value</tt> which is the lexeme (the actual text matched), <tt>t.lineno</tt> which is the current line number, and <tt>t.lexpos</tt> which
is the position of the token relative to the beginning of the input text.
By default, <tt>t.type</tt> is set to the name following the <tt>t_</tt> prefix.  The action
function can modify the contents of the <tt>LexToken</tt> object as appropriate.  However,
when it is done, the resulting token should be returned.  If no value is returned by the action
function, the token is simply discarded and the next token read.

<p>
Internally, <tt>lex.py</tt> uses the <tt>re</tt> module to do its pattern matching.  Patterns are compiled
using the <tt>re.VERBOSE</tt> flag which can be used to help readability.  However, be aware that unescaped
whitespace is ignored and comments are allowed in this mode.  If your pattern involves whitespace, make sure you
use <tt>\s</tt>.  If you need to match the <tt>#</tt> character, use <tt>[#]</tt>.
</p>
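
<p>
For example, here are two rules written with these caveats in mind (a small illustrative sketch; the token names are assumptions, not part of the example lexer above):
</p>

<blockquote>
<pre>
t_ENDIF = r'end\s+if'     # \s matches the space; a literal space would be ignored
t_HASH  = r'[#]'          # a bare # would start a comment under re.VERBOSE
</pre>
</blockquote>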

<p>
When building the master regular expression,
rules are added in the following order:
</p>

<p>
<ol>
<li>All tokens defined by functions are added in the same order as they appear in the lexer file.
<li>Tokens defined by strings are added next by sorting them in order of decreasing regular expression length (longer expressions
are added first).
</ol>
<p>
Without this ordering, it can be difficult to correctly match certain types of tokens.  For example, if you
wanted to have separate tokens for "=" and "==", you need to make sure that "==" is checked first.  By sorting regular
expressions in order of decreasing length, this problem is solved for rules defined as strings.  For functions,
the order can be explicitly controlled since rules appearing first are checked first.

<p>
To handle reserved words, you should write a single rule to match an
identifier and do a special name lookup in a function like this:

<blockquote>
<pre>
reserved = {
    'if'    : 'IF',
    'then'  : 'THEN',
    'else'  : 'ELSE',
    'while' : 'WHILE',
    ...
}

tokens = ['LPAREN','RPAREN',...,'ID'] + list(reserved.values())

def t_ID(t):
    r'[a-zA-Z_][a-zA-Z_0-9]*'
    t.type = reserved.get(t.value,'ID')    # Check for reserved words
    return t
</pre>
</blockquote>

This approach greatly reduces the number of regular expression rules and is likely to make things a little faster.

<p>
<b>Note:</b> You should avoid writing individual rules for reserved words.  For example, if you write rules like this,

<blockquote>
<pre>
t_FOR   = r'for'
t_PRINT = r'print'
</pre>
</blockquote>

those rules will be triggered for identifiers that include those words as a prefix such as "forget" or "printed".  This is probably not
what you want.

<H3><a name="ply_nn7"></a>4.4 Token values</H3>


When tokens are returned by lex, they have a value that is stored in the <tt>value</tt> attribute.  Normally, the value is the text
that was matched.  However, the value can be assigned to any Python object.  For instance, when lexing identifiers, you may
want to return both the identifier name and information from some sort of symbol table.  To do this, you might write a rule like this:

<blockquote>
<pre>
def t_ID(t):
    ...
    # Look up symbol table information and return a tuple
    t.value = (t.value, symbol_lookup(t.value))
    ...
    return t
</pre>
</blockquote>

It is important to note that storing data in other attribute names is <em>not</em> recommended.  The <tt>yacc.py</tt> module only exposes the
contents of the <tt>value</tt> attribute.  Thus, accessing other attributes may be unnecessarily awkward.  If you
need to store multiple values on a token, assign a tuple, dictionary, or instance to <tt>value</tt>.

<H3><a name="ply_nn8"></a>4.5 Discarded tokens</H3>


To discard a token, such as a comment, simply define a token rule that returns no value.  For example:

<blockquote>
<pre>
def t_COMMENT(t):
    r'\#.*'
    pass
    # No return value. Token discarded
</pre>
</blockquote>

Alternatively, you can include the prefix "ignore_" in the token declaration to force a token to be ignored.  For example:

<blockquote>
<pre>
t_ignore_COMMENT = r'\#.*'
</pre>
</blockquote>

Be advised that if you are ignoring many different kinds of text, you may still want to use functions since these provide more precise
control over the order in which regular expressions are matched (i.e., functions are matched in order of specification whereas strings are
sorted by regular expression length).

<H3><a name="ply_nn9"></a>4.6 Line numbers and positional information</H3>


<p>By default, <tt>lex.py</tt> knows nothing about line numbers.  This is because <tt>lex.py</tt> doesn't know anything
about what constitutes a "line" of input (e.g., the newline character or even if the input is textual data).
To update this information, you need to write a special rule.  In the example, the <tt>t_newline()</tt> rule shows how to do this.

<blockquote>
<pre>
# Define a rule so we can track line numbers
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)
</pre>
</blockquote>
Within the rule, the <tt>lineno</tt> attribute of the underlying lexer <tt>t.lexer</tt> is updated.
After the line number is updated, the token is simply discarded since nothing is returned.
<p>
<tt>lex.py</tt> does not perform any kind of automatic column tracking.  However, it does record positional
information related to each token in the <tt>lexpos</tt> attribute.  Using this, it is usually possible to compute
column information as a separate step.  For instance, just count backwards until you reach a newline.

<blockquote>
<pre>
# Compute column.
#     input is the input text string
#     token is a token instance
def find_column(input, token):
    line_start = input.rfind('\n', 0, token.lexpos) + 1
    return (token.lexpos - line_start) + 1
</pre>
</blockquote>

Since column information is often only useful in the context of error handling, calculating the column
position can be performed when needed as opposed to doing it for each token.
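
<p>
For example, a minimal sketch of reporting column information from the error rule, using the input text that the lexer stores in <tt>lexdata</tt> (see section 4.18):
</p>

<blockquote>
<pre>
def t_error(t):
    col = find_column(t.lexer.lexdata, t)
    print("Illegal character '%s' at line %d, column %d" %
          (t.value[0], t.lineno, col))
    t.lexer.skip(1)
</pre>
</blockquote>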

<H3><a name="ply_nn10"></a>4.7 Ignored characters</H3>


<p>
The special <tt>t_ignore</tt> rule is reserved by <tt>lex.py</tt> for characters
that should be completely ignored in the input stream.
Usually this is used to skip over whitespace and other non-essential characters.
Although it is possible to define a regular expression rule for whitespace in a manner
similar to <tt>t_newline()</tt>, the use of <tt>t_ignore</tt> provides substantially better
lexing performance because it is handled as a special case and is checked in a much
more efficient manner than the normal regular expression rules.
</p>

<p>
The characters given in <tt>t_ignore</tt> are not ignored when such characters are part of
other regular expression patterns.  For example, if you had a rule to capture quoted text,
that pattern can include the ignored characters (which will be captured in the normal way).  The
main purpose of <tt>t_ignore</tt> is to ignore whitespace and other padding between the
tokens that you actually want to parse.
</p>
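
<p>
For example, the tokenizer in section 4.1 skips spaces and tabs with a single declaration:
</p>

<blockquote>
<pre>
# A string containing ignored characters (spaces and tabs)
t_ignore  = ' \t'
</pre>
</blockquote>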

<H3><a name="ply_nn11"></a>4.8 Literal characters</H3>


<p>
Literal characters can be specified by defining a variable <tt>literals</tt> in your lexing module.  For example:

<blockquote>
<pre>
literals = [ '+','-','*','/' ]
</pre>
</blockquote>

or alternatively

<blockquote>
<pre>
literals = "+-*/"
</pre>
</blockquote>

A literal character is simply a single character that is returned "as is" when encountered by the lexer.  Literals are checked
after all of the defined regular expression rules.  Thus, if a rule starts with one of the literal characters, it will always
take precedence.

<p>
When a literal token is returned, both its <tt>type</tt> and <tt>value</tt> attributes are set to the character itself.  For example, <tt>'+'</tt>.
</p>

<p>
It's possible to write token functions that perform additional actions
when literals are matched.  However, you'll need to set the token type
appropriately. For example:
</p>

<blockquote>
<pre>
literals = [ '{', '}' ]

def t_lbrace(t):
    r'\{'
    t.type = '{'      # Set token type to the expected literal
    return t

def t_rbrace(t):
    r'\}'
    t.type = '}'      # Set token type to the expected literal
    return t
</pre>
</blockquote>

<H3><a name="ply_nn12"></a>4.9 Error handling</H3>


<p>
The <tt>t_error()</tt>
function is used to handle lexing errors that occur when illegal
characters are detected.  In this case, the <tt>t.value</tt> attribute contains the
rest of the input string that has not been tokenized.  In the example, the error function
was defined as follows:

<blockquote>
<pre>
# Error handling rule
def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)
</pre>
</blockquote>

In this case, we simply print the offending character and skip ahead one character by calling <tt>t.lexer.skip(1)</tt>.

<H3><a name="ply_nn14"></a>4.10 EOF Handling</H3>

<p>
The <tt>t_eof()</tt> function is used to handle an end-of-file (EOF) condition in the input.  As input, it
receives a token type <tt>'eof'</tt> with the <tt>lineno</tt> and <tt>lexpos</tt> attributes set appropriately.
The main use of this function is to provide more input to the lexer so that it can continue to parse.  Here is an
example of how this works:
</p>

<blockquote>
<pre>
# EOF handling rule
def t_eof(t):
    # Get more input (Example)
    more = raw_input('... ')
    if more:
        t.lexer.input(more)
        return t.lexer.token()
    return None
</pre>
</blockquote>

<p>
The EOF function should return the next available token (by calling <tt>t.lexer.token()</tt>) or <tt>None</tt> to
indicate no more data.  Be aware that setting more input with the <tt>t.lexer.input()</tt> method does
NOT reset the lexer state or the <tt>lineno</tt> attribute used for position tracking.  The <tt>lexpos</tt>
attribute is reset so be aware of that if you're using it in error reporting.
</p>

<H3><a name="ply_nn13"></a>4.11 Building and using the lexer</H3>


<p>
To build the lexer, the function <tt>lex.lex()</tt> is used.  For example:</p>

<blockquote>
<pre>
lexer = lex.lex()
</pre>
</blockquote>

<p>This function
uses Python reflection (or introspection) to read the regular expression rules
out of the calling context and build the lexer.  Once the lexer has been built, two methods can
be used to control the lexer.
</p>
<ul>
<li><tt>lexer.input(data)</tt>.  Reset the lexer and store a new input string.
<li><tt>lexer.token()</tt>.  Return the next token.  Returns a special <tt>LexToken</tt> instance on success or
None if the end of the input text has been reached.
</ul>

<H3><a name="ply_nn14"></a>4.12 The @TOKEN decorator</H3>

In some applications, you may want to build tokens from a series of
more complex regular expression rules.  For example:

<blockquote>
<pre>
digit            = r'([0-9])'
nondigit         = r'([_A-Za-z])'
identifier       = r'(' + nondigit + r'(' + digit + r'|' + nondigit + r')*)'

def t_ID(t):
    # want docstring to be identifier above. ?????
    ...
</pre>
</blockquote>

In this case, we want the regular expression rule for <tt>ID</tt> to be one of the variables above.  However, there is no
way to directly specify this using a normal documentation string.  To solve this problem, you can use the <tt>@TOKEN</tt>
decorator.  For example:

<blockquote>
<pre>
from ply.lex import TOKEN

@TOKEN(identifier)
def t_ID(t):
    ...
</pre>
</blockquote>

<p>
This will attach <tt>identifier</tt> to the docstring for <tt>t_ID()</tt> allowing <tt>lex.py</tt> to work normally.
</p>
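
<p>
Putting the pieces together, a complete rule might look like the following sketch (the action body is an illustrative assumption; it reuses the reserved-words dictionary from section 4.3):
</p>

<blockquote>
<pre>
from ply.lex import TOKEN

digit      = r'([0-9])'
nondigit   = r'([_A-Za-z])'
identifier = r'(' + nondigit + r'(' + digit + r'|' + nondigit + r')*)'

@TOKEN(identifier)
def t_ID(t):
    t.type = reserved.get(t.value, 'ID')   # assumes the reserved dict from section 4.3
    return t
</pre>
</blockquote>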

<H3><a name="ply_nn15"></a>4.13 Optimized mode</H3>


For improved performance, it may be desirable to use Python's
optimized mode (e.g., running Python with the <tt>-O</tt>
option).  However, doing so causes Python to ignore documentation
strings.  This presents special problems for <tt>lex.py</tt>.  To
handle this case, you can create your lexer using
the <tt>optimize</tt> option as follows:

<blockquote>
<pre>
lexer = lex.lex(optimize=1)
</pre>
</blockquote>

Next, run Python in its normal operating mode.  When you do
this, <tt>lex.py</tt> will write a file called <tt>lextab.py</tt> in
the same directory as the module containing the lexer specification.
This file contains all of the regular
expression rules and tables used during lexing.  On subsequent
executions,
<tt>lextab.py</tt> will simply be imported to build the lexer.  This
approach substantially improves the startup time of the lexer and it
works in Python's optimized mode.

<p>
To change the name of the lexer-generated module, use the <tt>lextab</tt> keyword argument.  For example:
</p>

<blockquote>
<pre>
lexer = lex.lex(optimize=1,lextab="footab")
</pre>
</blockquote>

When running in optimized mode, it is important to note that lex disables most error checking.  Thus, this is really only recommended
if you're sure everything is working correctly and you're ready to start releasing production code.

<H3><a name="ply_nn16"></a>4.14 Debugging</H3>


For the purpose of debugging, you can run <tt>lex()</tt> in a debugging mode as follows:

<blockquote>
<pre>
lexer = lex.lex(debug=1)
</pre>
</blockquote>

<p>
This will produce various sorts of debugging information including all of the added rules,
the master regular expressions used by the lexer, and tokens generated during lexing.
</p>

<p>
In addition, <tt>lex.py</tt> comes with a simple main function which
will either tokenize input read from standard input or from a file specified
on the command line.  To use it, simply put this in your lexer:
</p>

<blockquote>
<pre>
if __name__ == '__main__':
    lex.runmain()
</pre>
</blockquote>

Please refer to the "Debugging" section near the end for some more advanced details
of debugging.

<H3><a name="ply_nn17"></a>4.15 Alternative specification of lexers</H3>


As shown in the example, lexers are specified all within one Python module.  If you want to
put token rules in a different module from the one in which you invoke <tt>lex()</tt>, use the
<tt>module</tt> keyword argument.

<p>
For example, you might have a dedicated module that just contains
the token rules:

<blockquote>
<pre>
# module: tokrules.py
# This module just contains the lexing rules

# List of token names.   This is always required
tokens = (
    'NUMBER',
    'PLUS',
    'MINUS',
    'TIMES',
    'DIVIDE',
    'LPAREN',
    'RPAREN',
)

# Regular expression rules for simple tokens
t_PLUS    = r'\+'
t_MINUS   = r'-'
t_TIMES   = r'\*'
t_DIVIDE  = r'/'
t_LPAREN  = r'\('
t_RPAREN  = r'\)'

# A regular expression rule with some action code
def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

# Define a rule so we can track line numbers
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

# A string containing ignored characters (spaces and tabs)
t_ignore  = ' \t'

# Error handling rule
def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)
</pre>
</blockquote>

Now, if you wanted to build a tokenizer from these rules from within a different module, you would do the following (shown for Python interactive mode):

<blockquote>
<pre>
>>> import tokrules
>>> <b>lexer = lex.lex(module=tokrules)</b>
>>> lexer.input("3 + 4")
>>> lexer.token()
LexToken(NUMBER,3,1,0)
>>> lexer.token()
LexToken(PLUS,'+',1,2)
>>> lexer.token()
LexToken(NUMBER,4,1,4)
>>> lexer.token()
None
>>>
</pre>
</blockquote>

The <tt>module</tt> option can also be used to define lexers from instances of a class.  For example:

<blockquote>
<pre>
import ply.lex as lex

class MyLexer(object):
    # List of token names.   This is always required
    tokens = (
        'NUMBER',
        'PLUS',
        'MINUS',
        'TIMES',
        'DIVIDE',
        'LPAREN',
        'RPAREN',
    )

    # Regular expression rules for simple tokens
    t_PLUS    = r'\+'
    t_MINUS   = r'-'
    t_TIMES   = r'\*'
    t_DIVIDE  = r'/'
    t_LPAREN  = r'\('
    t_RPAREN  = r'\)'

    # A regular expression rule with some action code
    # Note addition of self parameter since we're in a class
    def t_NUMBER(self,t):
        r'\d+'
        t.value = int(t.value)
        return t

    # Define a rule so we can track line numbers
    def t_newline(self,t):
        r'\n+'
        t.lexer.lineno += len(t.value)

    # A string containing ignored characters (spaces and tabs)
    t_ignore  = ' \t'

    # Error handling rule
    def t_error(self,t):
        print("Illegal character '%s'" % t.value[0])
        t.lexer.skip(1)

    <b># Build the lexer
    def build(self,**kwargs):
        self.lexer = lex.lex(module=self, **kwargs)</b>

    # Test it out
    def test(self,data):
        self.lexer.input(data)
        while True:
            tok = self.lexer.token()
            if not tok:
                break
            print(tok)

# Build the lexer and try it out
m = MyLexer()
m.build()           # Build the lexer
m.test("3 + 4")     # Test it
</pre>
</blockquote>


When building a lexer from a class, <em>you should construct the lexer from
an instance of the class</em>, not the class object itself.  This is because
PLY only works properly if the lexer actions are defined by bound methods.

<p>
When using the <tt>module</tt> option to <tt>lex()</tt>, PLY collects symbols
from the underlying object using the <tt>dir()</tt> function.  There is no
direct access to the <tt>__dict__</tt> attribute of the object supplied as a
module value. </p>

<P>
Finally, if you want to keep things nicely encapsulated, but don't want to use a
full-fledged class definition, lexers can be defined using closures.  For example:

<blockquote>
<pre>
import ply.lex as lex

# List of token names.   This is always required
tokens = (
    'NUMBER',
    'PLUS',
    'MINUS',
    'TIMES',
    'DIVIDE',
    'LPAREN',
    'RPAREN',
)

def MyLexer():
    # Regular expression rules for simple tokens
    t_PLUS    = r'\+'
    t_MINUS   = r'-'
    t_TIMES   = r'\*'
    t_DIVIDE  = r'/'
    t_LPAREN  = r'\('
    t_RPAREN  = r'\)'

    # A regular expression rule with some action code
    def t_NUMBER(t):
        r'\d+'
        t.value = int(t.value)
        return t

    # Define a rule so we can track line numbers
    def t_newline(t):
        r'\n+'
        t.lexer.lineno += len(t.value)

    # A string containing ignored characters (spaces and tabs)
    t_ignore  = ' \t'

    # Error handling rule
    def t_error(t):
        print("Illegal character '%s'" % t.value[0])
        t.lexer.skip(1)

    # Build the lexer from my environment and return it
    return lex.lex()
</pre>
</blockquote>

<p>
<b>Important note:</b> If you are defining a lexer using a class or closure, be aware that PLY still requires you to only
define a single lexer per module (source file).  There are extensive validation/error checking parts of PLY that
may falsely report error messages if you don't follow this rule.
</p>

<H3><a name="ply_nn18"></a>4.16 Maintaining state</H3>


In your lexer, you may want to maintain a variety of state
information.  This might include mode settings, symbol tables, and
other details.  As an example, suppose that you wanted to keep
track of how many NUMBER tokens had been encountered.

<p>
One way to do this is to keep a set of global variables in the module
where you created the lexer.  For example:

<blockquote>
<pre>
num_count = 0
def t_NUMBER(t):
    r'\d+'
    global num_count
    num_count += 1
    t.value = int(t.value)
    return t
</pre>
</blockquote>
If you don't like the use of a global variable, another place to store
information is inside the Lexer object created by <tt>lex()</tt>.
To do this, you can use the <tt>lexer</tt> attribute of tokens passed to
the various rules.  For example:

<blockquote>
<pre>
def t_NUMBER(t):
    r'\d+'
    t.lexer.num_count += 1     # Note use of lexer attribute
    t.value = int(t.value)
    return t

lexer = lex.lex()
lexer.num_count = 0            # Set the initial count
</pre>
</blockquote>

This latter approach has the advantage of being simple and working
correctly in applications where multiple instantiations of a given
lexer exist in the same application.  However, this might also feel
like a gross violation of encapsulation to OO purists.
Just to put your mind at some ease, all
internal attributes of the lexer (with the exception of <tt>lineno</tt>) have names that are prefixed
by <tt>lex</tt> (e.g., <tt>lexdata</tt>, <tt>lexpos</tt>, etc.).  Thus,
it is perfectly safe to store attributes in the lexer that
don't have names starting with that prefix or a name that conflicts with one of the
predefined methods (e.g., <tt>input()</tt>, <tt>token()</tt>, etc.).

<p>
If you don't like assigning values on the lexer object, you can define your lexer as a class as
shown in the previous section:

<blockquote>
<pre>
class MyLexer:
    ...
    def t_NUMBER(self,t):
        r'\d+'
        self.num_count += 1
        t.value = int(t.value)
        return t

    def build(self, **kwargs):
        self.lexer = lex.lex(object=self,**kwargs)

    def __init__(self):
        self.num_count = 0
</pre>
</blockquote>

The class approach may be the easiest to manage if your application is
going to be creating multiple instances of the same lexer and you need
to manage a lot of state.

<p>
State can also be managed through closures.  For example, in Python 3:

<blockquote>
<pre>
def MyLexer():
    num_count = 0
    ...
    def t_NUMBER(t):
        r'\d+'
        nonlocal num_count
        num_count += 1
        t.value = int(t.value)
        return t
    ...
</pre>
</blockquote>

<H3><a name="ply_nn19"></a>4.17 Lexer cloning</H3>


<p>
If necessary, a lexer object can be duplicated by invoking its <tt>clone()</tt> method.  For example:

<blockquote>
<pre>
lexer = lex.lex()
...
newlexer = lexer.clone()
</pre>
</blockquote>

When a lexer is cloned, the copy is exactly identical to the original lexer
including any input text and internal state.  However, the clone allows a
different set of input text to be supplied which may be processed separately.
This may be useful in situations when you are writing a parser/compiler that
involves recursive or reentrant processing.  For instance, if you
needed to scan ahead in the input for some reason, you could create a
clone and use it to look ahead.  Or, if you were implementing some kind of preprocessor,
cloned lexers could be used to handle different input files.
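
<p>
For instance, a look-ahead check might be sketched like this (<tt>peek_next_type()</tt> is a hypothetical helper, not part of PLY):
</p>

<blockquote>
<pre>
def peek_next_type(lexer):
    # Clone so the look-ahead doesn't disturb the real token stream
    peeker = lexer.clone()
    tok = peeker.token()
    return tok.type if tok else None
</pre>
</blockquote>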

<p>
Creating a clone is different than calling <tt>lex.lex()</tt> in that
PLY doesn't regenerate any of the internal tables or regular expressions.

<p>
Special considerations need to be made when cloning lexers that also
maintain their own internal state using classes or closures.  Namely,
you need to be aware that the newly created lexers will share all of
this state with the original lexer.  For example, if you defined a
lexer as a class and did this:

<blockquote>
<pre>
m = MyLexer()
a = lex.lex(object=m)      # Create a lexer

b = a.clone()              # Clone the lexer
</pre>
</blockquote>

Then both <tt>a</tt> and <tt>b</tt> are going to be bound to the same
object <tt>m</tt> and any changes to <tt>m</tt> will be reflected in both lexers.  It's
important to emphasize that <tt>clone()</tt> is only meant to create a new lexer
that reuses the regular expressions and environment of another lexer.  If you
need to make a totally new copy of a lexer, then call <tt>lex()</tt> again.

<H3><a name="ply_nn20"></a>4.18 Internal lexer state</H3>


A Lexer object <tt>lexer</tt> has a number of internal attributes that may be useful in certain
situations.

<p>
<tt>lexer.lexpos</tt>
<blockquote>
This attribute is an integer that contains the current position within the input text.  If you modify
the value, it will change the result of the next call to <tt>token()</tt>.  Within token rule functions, this points
to the first character <em>after</em> the matched text.  If the value is modified within a rule, the next returned token will be
matched at the new position.
</blockquote>

<p>
<tt>lexer.lineno</tt>
<blockquote>
The current value of the line number attribute stored in the lexer.  PLY only specifies that the attribute
exists---it never sets, updates, or performs any processing with it.  If you want to track line numbers,
you will need to add code yourself (see the section on line numbers and positional information).
</blockquote>

<p>
<tt>lexer.lexdata</tt>
<blockquote>
The current input text stored in the lexer.  This is the string passed with the <tt>input()</tt> method.  It
would probably be a bad idea to modify this unless you really know what you're doing.
</blockquote>

<P>
<tt>lexer.lexmatch</tt>
<blockquote>
This is the raw <tt>Match</tt> object returned by the Python <tt>re.match()</tt> function (used internally by PLY) for the
current token.  If you have written a regular expression that contains named groups, you can use this to retrieve those values.
Note: This attribute is only updated when tokens are defined and processed by functions.
</blockquote>
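
<p>
For example, a rule whose pattern uses named groups might retrieve them like this (an illustrative sketch; the <tt>KEYVAL</tt> token and its pattern are assumptions, and group names must not collide with those used in other rules):
</p>

<blockquote>
<pre>
def t_KEYVAL(t):
    r'(?P&lt;key&gt;[a-zA-Z_]\w*)=(?P&lt;val&gt;\d+)'
    m = t.lexer.lexmatch               # Match object for the current token
    t.value = (m.group('key'), int(m.group('val')))
    return t
</pre>
</blockquote>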

<H3><a name="ply_nn21"></a>4.19 Conditional lexing and start conditions</H3>


In advanced parsing applications, it may be useful to have different
lexing states.  For instance, you may want the occurrence of a certain
token or syntactic construct to trigger a different kind of lexing.
PLY supports a feature that allows the underlying lexer to be put into
a series of different states.  Each state can have its own tokens,
lexing rules, and so forth.  The implementation is based largely on
the "start condition" feature of GNU flex.  Details of this can be found
at <a
href="http://flex.sourceforge.net/manual/Start-Conditions.html">http://flex.sourceforge.net/manual/Start-Conditions.html</a>.

<p>
To define a new lexing state, it must first be declared.  This is done by including a "states" declaration in your
lex file.  For example:

<blockquote>
<pre>
states = (
    ('foo','exclusive'),
    ('bar','inclusive'),
)
</pre>
</blockquote>

This declaration declares two states, <tt>'foo'</tt>
and <tt>'bar'</tt>.  States may be of two types: <tt>'exclusive'</tt>
and <tt>'inclusive'</tt>.  An exclusive state completely overrides the
default behavior of the lexer.  That is, lex will only return tokens
and apply rules defined specifically for that state.  An inclusive
state adds additional tokens and rules to the default set of rules.
Thus, lex will return both the tokens defined by default in addition
to those defined for the inclusive state.

<p>
Once a state has been declared, tokens and rules are declared by including the
state name in token/rule declaration.  For example:

<blockquote>
<pre>
t_foo_NUMBER = r'\d+'                      # Token 'NUMBER' in state 'foo'
t_bar_ID     = r'[a-zA-Z_][a-zA-Z0-9_]*'   # Token 'ID' in state 'bar'

def t_foo_newline(t):
    r'\n'
    t.lexer.lineno += 1
</pre>
</blockquote>

A token can be declared in multiple states by including multiple state names in the declaration.  For example:

<blockquote>
<pre>
t_foo_bar_NUMBER = r'\d+'         # Defines token 'NUMBER' in both state 'foo' and 'bar'
</pre>
</blockquote>

Alternatively, a token can be declared in all states by using 'ANY' in the name.

<blockquote>
<pre>
t_ANY_NUMBER = r'\d+'         # Defines a token 'NUMBER' in all states
</pre>
</blockquote>

If no state name is supplied, as is normally the case, the token is associated with a special state <tt>'INITIAL'</tt>.  For example,
these two declarations are identical:

<blockquote>
<pre>
t_NUMBER = r'\d+'
t_INITIAL_NUMBER = r'\d+'
</pre>
</blockquote>

<p>
States are also associated with the special <tt>t_ignore</tt>, <tt>t_error()</tt>, and <tt>t_eof()</tt> declarations.  For example, if a state treats
these differently, you can declare:</p>

<blockquote>
<pre>
t_foo_ignore = " \t\n"       # Ignored characters for state 'foo'

def t_bar_error(t):          # Special error handler for state 'bar'
    pass
</pre>
</blockquote>

By default, lexing operates in the <tt>'INITIAL'</tt> state.  This state includes all of the normally defined tokens.
For users who aren't using different states, this fact is completely transparent.  If, during lexing or parsing, you want to change
the lexing state, use the <tt>begin()</tt> method.  For example:

<blockquote>
<pre>
def t_begin_foo(t):
    r'start_foo'
    t.lexer.begin('foo')             # Starts 'foo' state
</pre>
</blockquote>

To get out of a state, you use <tt>begin()</tt> to switch back to the initial state.  For example:

<blockquote>
<pre>
def t_foo_end(t):
    r'end_foo'
    t.lexer.begin('INITIAL')         # Back to the initial state
</pre>
</blockquote>

The management of states can also be done with a stack.  For example:

<blockquote>
<pre>
def t_begin_foo(t):
    r'start_foo'
    t.lexer.push_state('foo')        # Starts 'foo' state

def t_foo_end(t):
    r'end_foo'
    t.lexer.pop_state()              # Back to the previous state
</pre>
</blockquote>

<p>
The use of a stack would be useful in situations where there are many ways of entering a new lexing state and you merely want to go back
to the previous state afterwards.

<P>
An example might help clarify.  Suppose you were writing a parser and you wanted to grab sections of arbitrary C code enclosed by
curly braces.  That is, whenever you encounter a starting brace '{', you want to read all of the enclosed code up to the ending brace '}'
and return it as a string.  Doing this with a normal regular expression rule is nearly (if not actually) impossible.  This is because braces can
be nested and can be included in comments and strings.  Thus, simply matching up to the first matching '}' character isn't good enough.  Here is how
you might use lexer states to do this:

<blockquote>
<pre>
# Declare the state
states = (
    ('ccode','exclusive'),
)

# Match the first {. Enter ccode state.
def t_ccode(t):
    r'\{'
    t.lexer.code_start = t.lexer.lexpos        # Record the starting position
    t.lexer.level = 1                          # Initial brace level
    t.lexer.begin('ccode')                     # Enter 'ccode' state

# Rules for the ccode state
def t_ccode_lbrace(t):
    r'\{'
    t.lexer.level += 1

def t_ccode_rbrace(t):
    r'\}'
    t.lexer.level -= 1

    # If closing brace, return the code fragment
    if t.lexer.level == 0:
        t.value = t.lexer.lexdata[t.lexer.code_start:t.lexer.lexpos+1]
        t.type = "CCODE"
        t.lexer.lineno += t.value.count('\n')
        t.lexer.begin('INITIAL')
        return t

# C or C++ comment (ignore)
def t_ccode_comment(t):
    r'(/\*(.|\n)*?\*/)|(//.*)'
    pass

# C string
def t_ccode_string(t):
    r'\"([^\\\n]|(\\.))*?\"'

# C character literal
def t_ccode_char(t):
    r'\'([^\\\n]|(\\.))*?\''

# Any sequence of non-whitespace characters (not braces, strings)
def t_ccode_nonspace(t):
    r'[^\s\{\}\'\"]+'

# Ignored characters (whitespace)
t_ccode_ignore = " \t\n"

# For bad characters, we just skip over it
def t_ccode_error(t):
    t.lexer.skip(1)
</pre>
</blockquote>

In this example, the occurrence of the first '{' causes the lexer to record the starting position and enter a new state <tt>'ccode'</tt>.  A collection of rules then match
various parts of the input that follow (comments, strings, etc.).  All of these rules merely discard the token (by not returning a value).
However, if the closing right brace is encountered, the rule <tt>t_ccode_rbrace</tt> collects all of the code (using the earlier recorded starting
position), stores it, and returns a token 'CCODE' containing all of that text.  When returning the token, the lexing state is restored back to its
initial state.

<H3><a name="ply_nn21"></a>4.20 Miscellaneous Issues</H3>

<P>
<li>The lexer requires input to be supplied as a single input string.  Since most machines have more than enough memory, this
rarely presents a performance concern.  However, it means that the lexer currently can't be used with streaming data
such as open files or sockets.  This limitation is primarily a side-effect of using the <tt>re</tt> module.  You might be
able to work around this by implementing an appropriate <tt>def t_eof()</tt> end-of-file handling rule.  The main complication
here is that you'll probably need to ensure that data is fed to the lexer in a way so that it doesn't split in the middle
of a token.</p>

<p>
<li>The lexer should work properly with Unicode strings, both when given as token and pattern matching rules and
when given as input text.

<p>
<li>If you need to supply optional flags to the re.compile() function, use the reflags option to lex.  For example:

<blockquote>
<pre>
lex.lex(reflags=re.UNICODE)
</pre>
</blockquote>

<p>
<li>Since the lexer is written entirely in Python, its performance is
largely determined by that of the Python <tt>re</tt> module.  Although
the lexer has been written to be as efficient as possible, it's not
blazingly fast when used on very large input files.  If
performance is a concern, you might consider upgrading to the most
recent version of Python, creating a hand-written lexer, or offloading
the lexer into a C extension module.
1452 | |
1453 <p> | |
1454 If you are going to create a hand-written lexer and you plan to use it with <tt>yacc.py</tt>, | |
1455 it only needs to conform to the following requirements: | |
1456 | |
1457 <ul> | |
1458 <li>It must provide a <tt>token()</tt> method that returns the next token or <tt>None</tt> if no more | |
1459 tokens are available. | |
1460 <li>The <tt>token()</tt> method must return an object <tt>tok</tt> that has <tt>type</tt> and <tt>value</tt> attributes. If | |
1461 line number tracking is being used, then the token should also define a <tt>lineno</tt> attribute. | |
1462 </ul> | |
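<p>
For example, a minimal hand-written lexer satisfying these requirements
might be sketched like this (the class names and the usage shown are made
up for illustration; any objects with the right attributes will do):

<blockquote>
<pre>
class Token:
    def __init__(self, type, value, lineno=0):
        self.type = type        # token name, as used in the grammar
        self.value = value
        self.lineno = lineno    # only needed if line tracking is used

class HandLexer:
    def __init__(self, tokens):
        self._iter = iter(tokens)
    def token(self):
        # Return the next token, or None when input is exhausted
        return next(self._iter, None)

# Usage sketch:  parser.parse(lexer=HandLexer([Token('NUMBER', 42)]))
</pre>
</blockquote>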
1463 | |
1464 <H2><a name="ply_nn22"></a>5. Parsing basics</H2> | |
1465 | |
1466 | |
1467 <tt>yacc.py</tt> is used to parse language syntax. Before showing an | |
1468 example, there are a few important bits of background that must be | |
1469 mentioned. First, <em>syntax</em> is usually specified in terms of a BNF grammar. | |
1470 For example, if you wanted to parse | |
1471 simple arithmetic expressions, you might first write an unambiguous | |
1472 grammar specification like this: | |
1473 | |
1474 <blockquote> | |
1475 <pre> | |
1476 expression : expression + term | |
1477 | expression - term | |
1478 | term | |
1479 | |
1480 term : term * factor | |
1481 | term / factor | |
1482 | factor | |
1483 | |
1484 factor : NUMBER | |
1485 | ( expression ) | |
1486 </pre> | |
1487 </blockquote> | |
1488 | |
1489 In the grammar, symbols such as <tt>NUMBER</tt>, <tt>+</tt>, <tt>-</tt>, <tt>*</tt>, and <tt>/</tt> are known | |
1490 as <em>terminals</em> and correspond to raw input tokens. Identifiers such as <tt>term</tt> and <tt>factor</tt> refer to | |
grammar rules made up of a collection of terminals and other rules. These identifiers are known as <em>non-terminals</em>.
1492 <P> | |
1493 | |
1494 The semantic behavior of a language is often specified using a | |
1495 technique known as syntax directed translation. In syntax directed | |
1496 translation, attributes are attached to each symbol in a given grammar | |
1497 rule along with an action. Whenever a particular grammar rule is | |
1498 recognized, the action describes what to do. For example, given the | |
1499 expression grammar above, you might write the specification for a | |
1500 simple calculator like this: | |
1501 | |
1502 <blockquote> | |
1503 <pre> | |
1504 Grammar Action | |
1505 -------------------------------- -------------------------------------------- | |
1506 expression0 : expression1 + term expression0.val = expression1.val + term.val | |
1507 | expression1 - term expression0.val = expression1.val - term.val | |
1508 | term expression0.val = term.val | |
1509 | |
1510 term0 : term1 * factor term0.val = term1.val * factor.val | |
1511 | term1 / factor term0.val = term1.val / factor.val | |
1512 | factor term0.val = factor.val | |
1513 | |
1514 factor : NUMBER factor.val = int(NUMBER.lexval) | |
1515 | ( expression ) factor.val = expression.val | |
1516 </pre> | |
1517 </blockquote> | |
1518 | |
1519 A good way to think about syntax directed translation is to | |
1520 view each symbol in the grammar as a kind of object. Associated | |
1521 with each symbol is a value representing its "state" (for example, the | |
1522 <tt>val</tt> attribute above). Semantic | |
1523 actions are then expressed as a collection of functions or methods | |
1524 that operate on the symbols and associated values. | |
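<p>
As a rough sketch (ordinary Python, not PLY-specific code), the actions in
the table above can be viewed as functions that compute the left-hand
side's <tt>val</tt> attribute from the values of the right-hand-side symbols:

<blockquote>
<pre>
# Each action computes the left-hand side's 'val' attribute
def expression_plus(expression1_val, term_val):
    return expression1_val + term_val    # expression0.val

def factor_number(number_lexval):
    return int(number_lexval)            # factor.val
</pre>
</blockquote>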
1525 | |
1526 <p> | |
1527 Yacc uses a parsing technique known as LR-parsing or shift-reduce parsing. LR parsing is a | |
1528 bottom up technique that tries to recognize the right-hand-side of various grammar rules. | |
1529 Whenever a valid right-hand-side is found in the input, the appropriate action code is triggered and the | |
1530 grammar symbols are replaced by the grammar symbol on the left-hand-side. | |
1531 | |
1532 <p> | |
1533 LR parsing is commonly implemented by shifting grammar symbols onto a | |
1534 stack and looking at the stack and the next input token for patterns that | |
1535 match one of the grammar rules. | |
1536 The details of the algorithm can be found in a compiler textbook, but the | |
1537 following example illustrates the steps that are performed if you | |
1538 wanted to parse the expression | |
1539 <tt>3 + 5 * (10 - 20)</tt> using the grammar defined above. In the example, | |
1540 the special symbol <tt>$</tt> represents the end of input. | |
1541 | |
1542 | |
1543 <blockquote> | |
1544 <pre> | |
1545 Step Symbol Stack Input Tokens Action | |
1546 ---- --------------------- --------------------- ------------------------------- | |
1547 1 3 + 5 * ( 10 - 20 )$ Shift 3 | |
1548 2 3 + 5 * ( 10 - 20 )$ Reduce factor : NUMBER | |
1549 3 factor + 5 * ( 10 - 20 )$ Reduce term : factor | |
1550 4 term + 5 * ( 10 - 20 )$ Reduce expr : term | |
1551 5 expr + 5 * ( 10 - 20 )$ Shift + | |
1552 6 expr + 5 * ( 10 - 20 )$ Shift 5 | |
1553 7 expr + 5 * ( 10 - 20 )$ Reduce factor : NUMBER | |
1554 8 expr + factor * ( 10 - 20 )$ Reduce term : factor | |
1555 9 expr + term * ( 10 - 20 )$ Shift * | |
1556 10 expr + term * ( 10 - 20 )$ Shift ( | |
1557 11 expr + term * ( 10 - 20 )$ Shift 10 | |
1558 12 expr + term * ( 10 - 20 )$ Reduce factor : NUMBER | |
1559 13 expr + term * ( factor - 20 )$ Reduce term : factor | |
1560 14 expr + term * ( term - 20 )$ Reduce expr : term | |
1561 15 expr + term * ( expr - 20 )$ Shift - | |
1562 16 expr + term * ( expr - 20 )$ Shift 20 | |
1563 17 expr + term * ( expr - 20 )$ Reduce factor : NUMBER | |
1564 18 expr + term * ( expr - factor )$ Reduce term : factor | |
1565 19 expr + term * ( expr - term )$ Reduce expr : expr - term | |
1566 20 expr + term * ( expr )$ Shift ) | |
1567 21 expr + term * ( expr ) $ Reduce factor : (expr) | |
1568 22 expr + term * factor $ Reduce term : term * factor | |
1569 23 expr + term $ Reduce expr : expr + term | |
1570 24 expr $ Reduce expr | |
1571 25 $ Success! | |
1572 </pre> | |
1573 </blockquote> | |
1574 | |
1575 When parsing the expression, an underlying state machine and the | |
1576 current input token determine what happens next. If the next token | |
1577 looks like part of a valid grammar rule (based on other items on the | |
1578 stack), it is generally shifted onto the stack. If the top of the | |
1579 stack contains a valid right-hand-side of a grammar rule, it is | |
1580 usually "reduced" and the symbols replaced with the symbol on the | |
1581 left-hand-side. When this reduction occurs, the appropriate action is | |
1582 triggered (if defined). If the input token can't be shifted and the | |
1583 top of stack doesn't match any grammar rules, a syntax error has | |
1584 occurred and the parser must take some kind of recovery step (or bail | |
1585 out). A parse is only successful if the parser reaches a state where | |
1586 the symbol stack is empty and there are no more input tokens. | |
1587 | |
1588 <p> | |
1589 It is important to note that the underlying implementation is built | |
1590 around a large finite-state machine that is encoded in a collection of | |
1591 tables. The construction of these tables is non-trivial and | |
1592 beyond the scope of this discussion. However, subtle details of this | |
1593 process explain why, in the example above, the parser chooses to shift | |
1594 a token onto the stack in step 9 rather than reducing the | |
1595 rule <tt>expr : expr + term</tt>. | |
1596 | |
1597 <H2><a name="ply_nn23"></a>6. Yacc</H2> | |
1598 | |
1599 | |
1600 The <tt>ply.yacc</tt> module implements the parsing component of PLY. | |
1601 The name "yacc" stands for "Yet Another Compiler Compiler" and is | |
1602 borrowed from the Unix tool of the same name. | |
1603 | |
1604 <H3><a name="ply_nn24"></a>6.1 An example</H3> | |
1605 | |
1606 | |
1607 Suppose you wanted to make a grammar for simple arithmetic expressions as previously described. Here is | |
1608 how you would do it with <tt>yacc.py</tt>: | |
1609 | |
1610 <blockquote> | |
1611 <pre> | |
1612 # Yacc example | |
1613 | |
1614 import ply.yacc as yacc | |
1615 | |
1616 # Get the token map from the lexer. This is required. | |
1617 from calclex import tokens | |
1618 | |
1619 def p_expression_plus(p): | |
1620 'expression : expression PLUS term' | |
1621 p[0] = p[1] + p[3] | |
1622 | |
1623 def p_expression_minus(p): | |
1624 'expression : expression MINUS term' | |
1625 p[0] = p[1] - p[3] | |
1626 | |
1627 def p_expression_term(p): | |
1628 'expression : term' | |
1629 p[0] = p[1] | |
1630 | |
1631 def p_term_times(p): | |
1632 'term : term TIMES factor' | |
1633 p[0] = p[1] * p[3] | |
1634 | |
1635 def p_term_div(p): | |
1636 'term : term DIVIDE factor' | |
1637 p[0] = p[1] / p[3] | |
1638 | |
1639 def p_term_factor(p): | |
1640 'term : factor' | |
1641 p[0] = p[1] | |
1642 | |
1643 def p_factor_num(p): | |
1644 'factor : NUMBER' | |
1645 p[0] = p[1] | |
1646 | |
1647 def p_factor_expr(p): | |
1648 'factor : LPAREN expression RPAREN' | |
1649 p[0] = p[2] | |
1650 | |
1651 # Error rule for syntax errors | |
1652 def p_error(p): | |
1653 print("Syntax error in input!") | |
1654 | |
1655 # Build the parser | |
1656 parser = yacc.yacc() | |
1657 | |
while True:
    try:
        s = raw_input('calc > ')   # Python 2; use input() on Python 3
    except EOFError:
        break
    if not s: continue
    result = parser.parse(s)
    print(result)
1666 </pre> | |
1667 </blockquote> | |
1668 | |
1669 In this example, each grammar rule is defined by a Python function | |
1670 where the docstring to that function contains the appropriate | |
1671 context-free grammar specification. The statements that make up the | |
1672 function body implement the semantic actions of the rule. Each function | |
1673 accepts a single argument <tt>p</tt> that is a sequence containing the | |
1674 values of each grammar symbol in the corresponding rule. The values | |
1675 of <tt>p[i]</tt> are mapped to grammar symbols as shown here: | |
1676 | |
1677 <blockquote> | |
1678 <pre> | |
1679 def p_expression_plus(p): | |
1680 'expression : expression PLUS term' | |
1681 # ^ ^ ^ ^ | |
1682 # p[0] p[1] p[2] p[3] | |
1683 | |
1684 p[0] = p[1] + p[3] | |
1685 </pre> | |
1686 </blockquote> | |
1687 | |
1688 <p> | |
For tokens, the "value" of the corresponding <tt>p[i]</tt> is the
<em>same</em> as the <tt>t.value</tt> attribute assigned in the lexer
module. For non-terminals, the value is determined by whatever is
placed in <tt>p[0]</tt> when rules are reduced. This value can be
anything at all. However, it is probably most common for the value to be
a simple Python type, a tuple, or an instance. In this example, we
1695 are relying on the fact that the <tt>NUMBER</tt> token stores an | |
1696 integer value in its value field. All of the other rules simply | |
1697 perform various types of integer operations and propagate the result. | |
1698 </p> | |
1699 | |
1700 <p> | |
Note: The use of negative indices has a special meaning in
yacc---specifically, <tt>p[-1]</tt> does not have the same value
1703 as <tt>p[3]</tt> in this example. Please see the section on "Embedded | |
1704 Actions" for further details. | |
1705 </p> | |
1706 | |
1707 <p> | |
1708 The first rule defined in the yacc specification determines the | |
1709 starting grammar symbol (in this case, a rule for <tt>expression</tt> | |
1710 appears first). Whenever the starting rule is reduced by the parser | |
1711 and no more input is available, parsing stops and the final value is | |
1712 returned (this value will be whatever the top-most rule placed | |
1713 in <tt>p[0]</tt>). Note: an alternative starting symbol can be | |
1714 specified using the <tt>start</tt> keyword argument to | |
1715 <tt>yacc()</tt>. | |
1716 | |
1717 <p>The <tt>p_error(p)</tt> rule is defined to catch syntax errors. | |
1718 See the error handling section below for more detail. | |
1719 | |
1720 <p> | |
1721 To build the parser, call the <tt>yacc.yacc()</tt> function. This | |
1722 function looks at the module and attempts to construct all of the LR | |
1723 parsing tables for the grammar you have specified. The first | |
1724 time <tt>yacc.yacc()</tt> is invoked, you will get a message such as | |
1725 this: | |
1726 | |
1727 <blockquote> | |
1728 <pre> | |
1729 $ python calcparse.py | |
1730 Generating LALR tables | |
1731 calc > | |
1732 </pre> | |
1733 </blockquote> | |
1734 | |
1735 <p> | |
1736 Since table construction is relatively expensive (especially for large | |
1737 grammars), the resulting parsing table is written to | |
1738 a file called <tt>parsetab.py</tt>. In addition, a | |
1739 debugging file called <tt>parser.out</tt> is created. On subsequent | |
1740 executions, <tt>yacc</tt> will reload the table from | |
1741 <tt>parsetab.py</tt> unless it has detected a change in the underlying | |
1742 grammar (in which case the tables and <tt>parsetab.py</tt> file are | |
1743 regenerated). Both of these files are written to the same directory | |
1744 as the module in which the parser is specified. | |
1745 The name of the <tt>parsetab</tt> module can be changed using the | |
1746 <tt>tabmodule</tt> keyword argument to <tt>yacc()</tt>. For example: | |
1747 </p> | |
1748 | |
1749 <blockquote> | |
1750 <pre> | |
1751 parser = yacc.yacc(tabmodule='fooparsetab') | |
1752 </pre> | |
1753 </blockquote> | |
1754 | |
1755 <p> | |
1756 If any errors are detected in your grammar specification, <tt>yacc.py</tt> will produce | |
1757 diagnostic messages and possibly raise an exception. Some of the errors that can be detected include: | |
1758 | |
1759 <ul> | |
<li>Duplicated function names (if more than one rule function has the same name in the grammar file).
1761 <li>Shift/reduce and reduce/reduce conflicts generated by ambiguous grammars. | |
1762 <li>Badly specified grammar rules. | |
1763 <li>Infinite recursion (rules that can never terminate). | |
<li>Unused rules and tokens.
<li>Undefined rules and tokens.
1766 </ul> | |
1767 | |
1768 The next few sections discuss grammar specification in more detail. | |
1769 | |
1770 <p> | |
The final part of the example shows how to actually run the parser
created by
<tt>yacc()</tt>. To run the parser, you simply have to call
the <tt>parse()</tt> method with a string of input text. This will run all
of the grammar rules and return the result of the entire parse. The
result returned is the value assigned to <tt>p[0]</tt> in the starting
grammar rule.
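<p>
For example, assuming <tt>calclex</tt> converts <tt>NUMBER</tt> tokens to
integers as in the earlier lexer example:

<blockquote>
<pre>
result = parser.parse("2 * (3 + 4)")
print(result)                 # prints 14
</pre>
</blockquote>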
1778 | |
1779 <H3><a name="ply_nn25"></a>6.2 Combining Grammar Rule Functions</H3> | |
1780 | |
1781 | |
1782 When grammar rules are similar, they can be combined into a single function. | |
1783 For example, consider the two rules in our earlier example: | |
1784 | |
1785 <blockquote> | |
1786 <pre> | |
1787 def p_expression_plus(p): | |
1788 'expression : expression PLUS term' | |
1789 p[0] = p[1] + p[3] | |
1790 | |
def p_expression_minus(p):
1792 'expression : expression MINUS term' | |
1793 p[0] = p[1] - p[3] | |
1794 </pre> | |
1795 </blockquote> | |
1796 | |
1797 Instead of writing two functions, you might write a single function like this: | |
1798 | |
1799 <blockquote> | |
1800 <pre> | |
1801 def p_expression(p): | |
1802 '''expression : expression PLUS term | |
1803 | expression MINUS term''' | |
1804 if p[2] == '+': | |
1805 p[0] = p[1] + p[3] | |
1806 elif p[2] == '-': | |
1807 p[0] = p[1] - p[3] | |
1808 </pre> | |
1809 </blockquote> | |
1810 | |
1811 In general, the doc string for any given function can contain multiple grammar rules. So, it would | |
1812 have also been legal (although possibly confusing) to write this: | |
1813 | |
1814 <blockquote> | |
1815 <pre> | |
1816 def p_binary_operators(p): | |
1817 '''expression : expression PLUS term | |
1818 | expression MINUS term | |
1819 term : term TIMES factor | |
1820 | term DIVIDE factor''' | |
1821 if p[2] == '+': | |
1822 p[0] = p[1] + p[3] | |
1823 elif p[2] == '-': | |
1824 p[0] = p[1] - p[3] | |
1825 elif p[2] == '*': | |
1826 p[0] = p[1] * p[3] | |
1827 elif p[2] == '/': | |
1828 p[0] = p[1] / p[3] | |
1829 </pre> | |
1830 </blockquote> | |
1831 | |
1832 When combining grammar rules into a single function, it is usually a good idea for all of the rules to have | |
1833 a similar structure (e.g., the same number of terms). Otherwise, the corresponding action code may be more | |
complicated than necessary. However, it is possible to handle simple cases using <tt>len()</tt>. For example:
1835 | |
1836 <blockquote> | |
1837 <pre> | |
def p_expressions(p):
    '''expression : expression MINUS expression
                  | MINUS expression'''
    if len(p) == 4:
        p[0] = p[1] - p[3]
    elif len(p) == 3:
        p[0] = -p[2]
1845 </pre> | |
1846 </blockquote> | |
1847 | |
1848 If parsing performance is a concern, you should resist the urge to put | |
1849 too much conditional processing into a single grammar rule as shown in | |
1850 these examples. When you add checks to see which grammar rule is | |
1851 being handled, you are actually duplicating the work that the parser | |
1852 has already performed (i.e., the parser already knows exactly what rule it | |
1853 matched). You can eliminate this overhead by using a | |
1854 separate <tt>p_rule()</tt> function for each grammar rule. | |
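<p>
For example, the combined rule above could be split into one function per
production, eliminating the <tt>len()</tt> checks:

<blockquote>
<pre>
def p_expression_minus(p):
    'expression : expression MINUS expression'
    p[0] = p[1] - p[3]

def p_expression_uminus(p):
    'expression : MINUS expression'
    p[0] = -p[2]
</pre>
</blockquote>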
1855 | |
1856 <H3><a name="ply_nn26"></a>6.3 Character Literals</H3> | |
1857 | |
1858 | |
1859 If desired, a grammar may contain tokens defined as single character literals. For example: | |
1860 | |
1861 <blockquote> | |
1862 <pre> | |
1863 def p_binary_operators(p): | |
1864 '''expression : expression '+' term | |
1865 | expression '-' term | |
1866 term : term '*' factor | |
1867 | term '/' factor''' | |
1868 if p[2] == '+': | |
1869 p[0] = p[1] + p[3] | |
1870 elif p[2] == '-': | |
1871 p[0] = p[1] - p[3] | |
1872 elif p[2] == '*': | |
1873 p[0] = p[1] * p[3] | |
1874 elif p[2] == '/': | |
1875 p[0] = p[1] / p[3] | |
1876 </pre> | |
1877 </blockquote> | |
1878 | |
1879 A character literal must be enclosed in quotes such as <tt>'+'</tt>. In addition, if literals are used, they must be declared in the | |
1880 corresponding <tt>lex</tt> file through the use of a special <tt>literals</tt> declaration. | |
1881 | |
1882 <blockquote> | |
1883 <pre> | |
1884 # Literals. Should be placed in module given to lex() | |
1885 literals = ['+','-','*','/' ] | |
1886 </pre> | |
1887 </blockquote> | |
1888 | |
1889 <b>Character literals are limited to a single character</b>. Thus, it is not legal to specify literals such as <tt>'<='</tt> or <tt>'=='</tt>. For this, use | |
1890 the normal lexing rules (e.g., define a rule such as <tt>t_EQ = r'=='</tt>). | |
1891 | |
1892 <H3><a name="ply_nn26"></a>6.4 Empty Productions</H3> | |
1893 | |
1894 | |
1895 <tt>yacc.py</tt> can handle empty productions by defining a rule like this: | |
1896 | |
1897 <blockquote> | |
1898 <pre> | |
1899 def p_empty(p): | |
1900 'empty :' | |
1901 pass | |
1902 </pre> | |
1903 </blockquote> | |
1904 | |
1905 Now to use the empty production, simply use 'empty' as a symbol. For example: | |
1906 | |
1907 <blockquote> | |
1908 <pre> | |
def p_optitem(p):
    '''optitem : item
               | empty'''
    ...
1913 </pre> | |
1914 </blockquote> | |
1915 | |
1916 Note: You can write empty rules anywhere by simply specifying an empty | |
1917 right hand side. However, I personally find that writing an "empty" | |
1918 rule and using "empty" to denote an empty production is easier to read | |
1919 and more clearly states your intentions. | |
1920 | |
1921 <H3><a name="ply_nn28"></a>6.5 Changing the starting symbol</H3> | |
1922 | |
1923 | |
1924 Normally, the first rule found in a yacc specification defines the starting grammar rule (top level rule). To change this, simply | |
1925 supply a <tt>start</tt> specifier in your file. For example: | |
1926 | |
1927 <blockquote> | |
1928 <pre> | |
1929 start = 'foo' | |
1930 | |
1931 def p_bar(p): | |
1932 'bar : A B' | |
1933 | |
1934 # This is the starting rule due to the start specifier above | |
1935 def p_foo(p): | |
1936 'foo : bar X' | |
1937 ... | |
1938 </pre> | |
1939 </blockquote> | |
1940 | |
1941 The use of a <tt>start</tt> specifier may be useful during debugging | |
1942 since you can use it to have yacc build a subset of a larger grammar. | |
1943 For this purpose, it is also possible to specify a starting symbol as | |
1944 an argument to <tt>yacc()</tt>. For example: | |
1945 | |
1946 <blockquote> | |
1947 <pre> | |
1948 parser = yacc.yacc(start='foo') | |
1949 </pre> | |
1950 </blockquote> | |
1951 | |
1952 <H3><a name="ply_nn27"></a>6.6 Dealing With Ambiguous Grammars</H3> | |
1953 | |
1954 | |
1955 The expression grammar given in the earlier example has been written | |
1956 in a special format to eliminate ambiguity. However, in many | |
1957 situations, it is extremely difficult or awkward to write grammars in | |
1958 this format. A much more natural way to express the grammar is in a | |
1959 more compact form like this: | |
1960 | |
1961 <blockquote> | |
1962 <pre> | |
1963 expression : expression PLUS expression | |
1964 | expression MINUS expression | |
1965 | expression TIMES expression | |
1966 | expression DIVIDE expression | |
1967 | LPAREN expression RPAREN | |
1968 | NUMBER | |
1969 </pre> | |
1970 </blockquote> | |
1971 | |
Unfortunately, this grammar specification is ambiguous. For example,
if you are parsing the string "3 * 4 + 5", there is no way to tell how
the operators are supposed to be grouped. Does the
expression mean "(3 * 4) + 5" or "3 * (4 + 5)"?
1976 | |
1977 <p> | |
1978 When an ambiguous grammar is given to <tt>yacc.py</tt> it will print | |
1979 messages about "shift/reduce conflicts" or "reduce/reduce conflicts". | |
A shift/reduce conflict is caused when the parser generator can't
decide whether to reduce a rule or to shift a symbol onto the
parsing stack. For example, consider the string "3 * 4 + 5" and the
1983 internal parsing stack: | |
1984 | |
1985 <blockquote> | |
1986 <pre> | |
1987 Step Symbol Stack Input Tokens Action | |
1988 ---- --------------------- --------------------- ------------------------------- | |
1989 1 $ 3 * 4 + 5$ Shift 3 | |
2 $ 3 * 4 + 5$ Reduce expression : NUMBER
1991 3 $ expr * 4 + 5$ Shift * | |
1992 4 $ expr * 4 + 5$ Shift 4 | |
5 $ expr * 4 + 5$ Reduce expression : NUMBER
1994 6 $ expr * expr + 5$ SHIFT/REDUCE CONFLICT ???? | |
1995 </pre> | |
1996 </blockquote> | |
1997 | |
1998 In this case, when the parser reaches step 6, it has two options. One | |
1999 is to reduce the rule <tt>expr : expr * expr</tt> on the stack. The | |
other option is to shift the token <tt>+</tt> onto the stack. Both
options are perfectly legal according to the rules of the
context-free grammar.
2003 | |
2004 <p> | |
2005 By default, all shift/reduce conflicts are resolved in favor of | |
2006 shifting. Therefore, in the above example, the parser will always | |
2007 shift the <tt>+</tt> instead of reducing. Although this strategy | |
2008 works in many cases (for example, the case of | |
2009 "if-then" versus "if-then-else"), it is not enough for arithmetic expressions. In fact, | |
2010 in the above example, the decision to shift <tt>+</tt> is completely | |
2011 wrong---we should have reduced <tt>expr * expr</tt> since | |
2012 multiplication has higher mathematical precedence than addition. | |
2013 | |
2014 <p>To resolve ambiguity, especially in expression | |
2015 grammars, <tt>yacc.py</tt> allows individual tokens to be assigned a | |
2016 precedence level and associativity. This is done by adding a variable | |
2017 <tt>precedence</tt> to the grammar file like this: | |
2018 | |
2019 <blockquote> | |
2020 <pre> | |
2021 precedence = ( | |
2022 ('left', 'PLUS', 'MINUS'), | |
2023 ('left', 'TIMES', 'DIVIDE'), | |
2024 ) | |
2025 </pre> | |
2026 </blockquote> | |
2027 | |
2028 This declaration specifies that <tt>PLUS</tt>/<tt>MINUS</tt> have the | |
2029 same precedence level and are left-associative and that | |
2030 <tt>TIMES</tt>/<tt>DIVIDE</tt> have the same precedence and are | |
2031 left-associative. Within the <tt>precedence</tt> declaration, tokens | |
2032 are ordered from lowest to highest precedence. Thus, this declaration | |
2033 specifies that <tt>TIMES</tt>/<tt>DIVIDE</tt> have higher precedence | |
2034 than <tt>PLUS</tt>/<tt>MINUS</tt> (since they appear later in the | |
2035 precedence specification). | |
2036 | |
2037 <p> | |
2038 The precedence specification works by associating a numerical | |
2039 precedence level value and associativity direction to the listed | |
2040 tokens. For example, in the above example you get: | |
2041 | |
2042 <blockquote> | |
2043 <pre> | |
2044 PLUS : level = 1, assoc = 'left' | |
2045 MINUS : level = 1, assoc = 'left' | |
2046 TIMES : level = 2, assoc = 'left' | |
2047 DIVIDE : level = 2, assoc = 'left' | |
2048 </pre> | |
2049 </blockquote> | |
2050 | |
2051 These values are then used to attach a numerical precedence value and | |
2052 associativity direction to each grammar rule. <em>This is always | |
2053 determined by looking at the precedence of the right-most terminal | |
2054 symbol.</em> For example: | |
2055 | |
2056 <blockquote> | |
2057 <pre> | |
2058 expression : expression PLUS expression # level = 1, left | |
2059 | expression MINUS expression # level = 1, left | |
2060 | expression TIMES expression # level = 2, left | |
2061 | expression DIVIDE expression # level = 2, left | |
2062 | LPAREN expression RPAREN # level = None (not specified) | |
2063 | NUMBER # level = None (not specified) | |
2064 </pre> | |
2065 </blockquote> | |
2066 | |
2067 When shift/reduce conflicts are encountered, the parser generator resolves the conflict by | |
2068 looking at the precedence rules and associativity specifiers. | |
2069 | |
2070 <p> | |
2071 <ol> | |
2072 <li>If the current token has higher precedence than the rule on the stack, it is shifted. | |
2073 <li>If the grammar rule on the stack has higher precedence, the rule is reduced. | |
2074 <li>If the current token and the grammar rule have the same precedence, the | |
2075 rule is reduced for left associativity, whereas the token is shifted for right associativity. | |
2076 <li>If nothing is known about the precedence, shift/reduce conflicts are resolved in | |
2077 favor of shifting (the default). | |
2078 </ol> | |
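<p>
Schematically, the resolution of a shift/reduce conflict looks something
like this (an illustrative sketch, not PLY's actual implementation;
<tt>nonassoc</tt>, described below, would instead produce an error at equal
precedence):

<blockquote>
<pre>
def resolve(token_level, rule_level, rule_assoc):
    # rule 4: if nothing is known about precedence, shift by default
    if token_level is None or rule_level is None:
        return 'shift'
    if token_level > rule_level:
        return 'shift'                      # rule 1
    if token_level < rule_level:
        return 'reduce'                     # rule 2
    # rule 3: equal precedence -- associativity decides
    return 'reduce' if rule_assoc == 'left' else 'shift'
</pre>
</blockquote>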
2079 | |
2080 For example, if "expression PLUS expression" has been parsed and the | |
2081 next token is "TIMES", the action is going to be a shift because | |
2082 "TIMES" has a higher precedence level than "PLUS". On the other hand, | |
2083 if "expression TIMES expression" has been parsed and the next token is | |
2084 "PLUS", the action is going to be reduce because "PLUS" has a lower | |
2085 precedence than "TIMES." | |
2086 | |
2087 <p> | |
2088 When shift/reduce conflicts are resolved using the first three | |
2089 techniques (with the help of precedence rules), <tt>yacc.py</tt> will | |
2090 report no errors or conflicts in the grammar (although it will print | |
2091 some information in the <tt>parser.out</tt> debugging file). | |
2092 | |
2093 <p> | |
2094 One problem with the precedence specifier technique is that it is | |
2095 sometimes necessary to change the precedence of an operator in certain | |
2096 contexts. For example, consider a unary-minus operator in "3 + 4 * | |
2097 -5". Mathematically, the unary minus is normally given a very high | |
2098 precedence--being evaluated before the multiply. However, in our | |
2099 precedence specifier, MINUS has a lower precedence than TIMES. To | |
2100 deal with this, precedence rules can be given for so-called "fictitious tokens" | |
2101 like this: | |
2102 | |
2103 <blockquote> | |
2104 <pre> | |
2105 precedence = ( | |
2106 ('left', 'PLUS', 'MINUS'), | |
2107 ('left', 'TIMES', 'DIVIDE'), | |
2108 ('right', 'UMINUS'), # Unary minus operator | |
2109 ) | |
2110 </pre> | |
2111 </blockquote> | |
2112 | |
2113 Now, in the grammar file, we can write our unary minus rule like this: | |
2114 | |
2115 <blockquote> | |
2116 <pre> | |
2117 def p_expr_uminus(p): | |
2118 'expression : MINUS expression %prec UMINUS' | |
2119 p[0] = -p[2] | |
2120 </pre> | |
2121 </blockquote> | |
2122 | |
2123 In this case, <tt>%prec UMINUS</tt> overrides the default rule precedence--setting it to that | |
2124 of UMINUS in the precedence specifier. | |
2125 | |
2126 <p> | |
2127 At first, the use of UMINUS in this example may appear very confusing. | |
2128 UMINUS is not an input token or a grammar rule. Instead, you should | |
2129 think of it as the name of a special marker in the precedence table. When you use the <tt>%prec</tt> qualifier, you're simply | |
2130 telling yacc that you want the precedence of the expression to be the same as for this special marker instead of the usual precedence. | |
2131 | |
2132 <p> | |
2133 It is also possible to specify non-associativity in the <tt>precedence</tt> table. This would | |
2134 be used when you <em>don't</em> want operations to chain together. For example, suppose | |
2135 you wanted to support comparison operators like <tt><</tt> and <tt>></tt> but you didn't want to allow | |
2136 combinations like <tt>a < b < c</tt>. To do this, simply specify a rule like this: | |
2137 | |
2138 <blockquote> | |
2139 <pre> | |
2140 precedence = ( | |
2141 ('nonassoc', 'LESSTHAN', 'GREATERTHAN'), # Nonassociative operators | |
2142 ('left', 'PLUS', 'MINUS'), | |
2143 ('left', 'TIMES', 'DIVIDE'), | |
2144 ('right', 'UMINUS'), # Unary minus operator | |
2145 ) | |
2146 </pre> | |
2147 </blockquote> | |
2148 | |
2149 <p> | |
2150 If you do this, the occurrence of input text such as <tt> a < b < c</tt> will result in a syntax error. However, simple | |
2151 expressions such as <tt>a < b</tt> will still be fine. | |
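<p>
With the table above, the corresponding grammar rule might be sketched
like this (the action shown is just for illustration):

<blockquote>
<pre>
def p_expression_compare(p):
    '''expression : expression LESSTHAN expression
                  | expression GREATERTHAN expression'''
    # Because LESSTHAN/GREATERTHAN are 'nonassoc', input such as
    # a &lt; b &lt; c triggers a syntax error instead of chaining.
    p[0] = ('compare', p[2], p[1], p[3])
</pre>
</blockquote>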
2152 | |
2153 <p> | |
Reduce/reduce conflicts are caused when there are multiple grammar
rules that can be applied to a given set of symbols. This kind of
conflict is almost always bad and is always resolved by picking the
rule that appears first in the grammar file. Reduce/reduce conflicts
typically arise when different sets of grammar rules somehow
generate the same set of symbols. For example:
2160 | |
2161 <blockquote> | |
2162 <pre> | |
2163 assignment : ID EQUALS NUMBER | |
2164 | ID EQUALS expression | |
2165 | |
2166 expression : expression PLUS expression | |
2167 | expression MINUS expression | |
2168 | expression TIMES expression | |
2169 | expression DIVIDE expression | |
2170 | LPAREN expression RPAREN | |
2171 | NUMBER | |
2172 </pre> | |
2173 </blockquote> | |
2174 | |
2175 In this case, a reduce/reduce conflict exists between these two rules: | |
2176 | |
2177 <blockquote> | |
2178 <pre> | |
2179 assignment : ID EQUALS NUMBER | |
2180 expression : NUMBER | |
2181 </pre> | |
2182 </blockquote> | |
2183 | |
2184 For example, if you wrote "a = 5", the parser can't figure out if this | |
2185 is supposed to be reduced as <tt>assignment : ID EQUALS NUMBER</tt> or | |
2186 whether it's supposed to reduce the 5 as an expression and then reduce | |
2187 the rule <tt>assignment : ID EQUALS expression</tt>. | |
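<p>
Assuming the two rules really do mean the same thing, the usual fix is to
remove the redundant production and let <tt>NUMBER</tt> be reduced through
<tt>expression</tt>:

<blockquote>
<pre>
assignment : ID EQUALS expression

expression : expression PLUS expression
           | ...
           | NUMBER
</pre>
</blockquote>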
2188 | |
2189 <p> | |
It should be noted that reduce/reduce conflicts are notoriously
difficult to spot simply by looking at the input grammar. When a
2192 reduce/reduce conflict occurs, <tt>yacc()</tt> will try to help by | |
2193 printing a warning message such as this: | |
2194 | |
2195 <blockquote> | |
2196 <pre> | |
2197 WARNING: 1 reduce/reduce conflict | |
2198 WARNING: reduce/reduce conflict in state 15 resolved using rule (assignment -> ID EQUALS NUMBER) | |
2199 WARNING: rejected rule (expression -> NUMBER) | |
2200 </pre> | |
2201 </blockquote> | |
2202 | |
2203 This message identifies the two rules that are in conflict. However, | |
2204 it may not tell you how the parser arrived at such a state. To try | |
2205 and figure it out, you'll probably have to look at your grammar and | |
2206 the contents of the | |
2207 <tt>parser.out</tt> debugging file with an appropriately high level of | |
2208 caffeination. | |
2209 | |
2210 <H3><a name="ply_nn28"></a>6.7 The parser.out file</H3> | |
2211 | |
2212 | |
2213 Tracking down shift/reduce and reduce/reduce conflicts is one of the finer pleasures of using an LR | |
2214 parsing algorithm. To assist in debugging, <tt>yacc.py</tt> creates a debugging file called | |
2215 'parser.out' when it generates the parsing table. The contents of this file look like the following: | |
2216 | |
2217 <blockquote> | |
2218 <pre> | |
2219 Unused terminals: | |
2220 | |
2221 | |
2222 Grammar | |
2223 | |
2224 Rule 1 expression -> expression PLUS expression | |
2225 Rule 2 expression -> expression MINUS expression | |
2226 Rule 3 expression -> expression TIMES expression | |
2227 Rule 4 expression -> expression DIVIDE expression | |
2228 Rule 5 expression -> NUMBER | |
2229 Rule 6 expression -> LPAREN expression RPAREN | |
2230 | |
2231 Terminals, with rules where they appear | |
2232 | |
2233 TIMES : 3 | |
2234 error : | |
2235 MINUS : 2 | |
2236 RPAREN : 6 | |
2237 LPAREN : 6 | |
2238 DIVIDE : 4 | |
2239 PLUS : 1 | |
2240 NUMBER : 5 | |
2241 | |
2242 Nonterminals, with rules where they appear | |
2243 | |
2244 expression : 1 1 2 2 3 3 4 4 6 0 | |
2245 | |
2246 | |
2247 Parsing method: LALR | |
2248 | |
2249 | |
2250 state 0 | |
2251 | |
2252 S' -> . expression | |
2253 expression -> . expression PLUS expression | |
2254 expression -> . expression MINUS expression | |
2255 expression -> . expression TIMES expression | |
2256 expression -> . expression DIVIDE expression | |
2257 expression -> . NUMBER | |
2258 expression -> . LPAREN expression RPAREN | |
2259 | |
2260 NUMBER shift and go to state 3 | |
2261 LPAREN shift and go to state 2 | |
2262 | |
2263 | |
2264 state 1 | |
2265 | |
2266 S' -> expression . | |
2267 expression -> expression . PLUS expression | |
2268 expression -> expression . MINUS expression | |
2269 expression -> expression . TIMES expression | |
2270 expression -> expression . DIVIDE expression | |
2271 | |
2272 PLUS shift and go to state 6 | |
2273 MINUS shift and go to state 5 | |
2274 TIMES shift and go to state 4 | |
2275 DIVIDE shift and go to state 7 | |
2276 | |
2277 | |
2278 state 2 | |
2279 | |
2280 expression -> LPAREN . expression RPAREN | |
2281 expression -> . expression PLUS expression | |
2282 expression -> . expression MINUS expression | |
2283 expression -> . expression TIMES expression | |
2284 expression -> . expression DIVIDE expression | |
2285 expression -> . NUMBER | |
2286 expression -> . LPAREN expression RPAREN | |
2287 | |
2288 NUMBER shift and go to state 3 | |
2289 LPAREN shift and go to state 2 | |
2290 | |
2291 | |
2292 state 3 | |
2293 | |
2294 expression -> NUMBER . | |
2295 | |
2296 $ reduce using rule 5 | |
2297 PLUS reduce using rule 5 | |
2298 MINUS reduce using rule 5 | |
2299 TIMES reduce using rule 5 | |
2300 DIVIDE reduce using rule 5 | |
2301 RPAREN reduce using rule 5 | |
2302 | |
2303 | |
2304 state 4 | |
2305 | |
2306 expression -> expression TIMES . expression | |
2307 expression -> . expression PLUS expression | |
2308 expression -> . expression MINUS expression | |
2309 expression -> . expression TIMES expression | |
2310 expression -> . expression DIVIDE expression | |
2311 expression -> . NUMBER | |
2312 expression -> . LPAREN expression RPAREN | |
2313 | |
2314 NUMBER shift and go to state 3 | |
2315 LPAREN shift and go to state 2 | |
2316 | |
2317 | |
2318 state 5 | |
2319 | |
2320 expression -> expression MINUS . expression | |
2321 expression -> . expression PLUS expression | |
2322 expression -> . expression MINUS expression | |
2323 expression -> . expression TIMES expression | |
2324 expression -> . expression DIVIDE expression | |
2325 expression -> . NUMBER | |
2326 expression -> . LPAREN expression RPAREN | |
2327 | |
2328 NUMBER shift and go to state 3 | |
2329 LPAREN shift and go to state 2 | |
2330 | |
2331 | |
2332 state 6 | |
2333 | |
2334 expression -> expression PLUS . expression | |
2335 expression -> . expression PLUS expression | |
2336 expression -> . expression MINUS expression | |
2337 expression -> . expression TIMES expression | |
2338 expression -> . expression DIVIDE expression | |
2339 expression -> . NUMBER | |
2340 expression -> . LPAREN expression RPAREN | |
2341 | |
2342 NUMBER shift and go to state 3 | |
2343 LPAREN shift and go to state 2 | |
2344 | |
2345 | |
2346 state 7 | |
2347 | |
2348 expression -> expression DIVIDE . expression | |
2349 expression -> . expression PLUS expression | |
2350 expression -> . expression MINUS expression | |
2351 expression -> . expression TIMES expression | |
2352 expression -> . expression DIVIDE expression | |
2353 expression -> . NUMBER | |
2354 expression -> . LPAREN expression RPAREN | |
2355 | |
2356 NUMBER shift and go to state 3 | |
2357 LPAREN shift and go to state 2 | |
2358 | |
2359 | |
2360 state 8 | |
2361 | |
2362 expression -> LPAREN expression . RPAREN | |
2363 expression -> expression . PLUS expression | |
2364 expression -> expression . MINUS expression | |
2365 expression -> expression . TIMES expression | |
2366 expression -> expression . DIVIDE expression | |
2367 | |
2368 RPAREN shift and go to state 13 | |
2369 PLUS shift and go to state 6 | |
2370 MINUS shift and go to state 5 | |
2371 TIMES shift and go to state 4 | |
2372 DIVIDE shift and go to state 7 | |
2373 | |
2374 | |
2375 state 9 | |
2376 | |
2377 expression -> expression TIMES expression . | |
2378 expression -> expression . PLUS expression | |
2379 expression -> expression . MINUS expression | |
2380 expression -> expression . TIMES expression | |
2381 expression -> expression . DIVIDE expression | |
2382 | |
2383 $ reduce using rule 3 | |
2384 PLUS reduce using rule 3 | |
2385 MINUS reduce using rule 3 | |
2386 TIMES reduce using rule 3 | |
2387 DIVIDE reduce using rule 3 | |
2388 RPAREN reduce using rule 3 | |
2389 | |
2390 ! PLUS [ shift and go to state 6 ] | |
2391 ! MINUS [ shift and go to state 5 ] | |
2392 ! TIMES [ shift and go to state 4 ] | |
2393 ! DIVIDE [ shift and go to state 7 ] | |
2394 | |
2395 state 10 | |
2396 | |
2397 expression -> expression MINUS expression . | |
2398 expression -> expression . PLUS expression | |
2399 expression -> expression . MINUS expression | |
2400 expression -> expression . TIMES expression | |
2401 expression -> expression . DIVIDE expression | |
2402 | |
2403 $ reduce using rule 2 | |
2404 PLUS reduce using rule 2 | |
2405 MINUS reduce using rule 2 | |
2406 RPAREN reduce using rule 2 | |
2407 TIMES shift and go to state 4 | |
2408 DIVIDE shift and go to state 7 | |
2409 | |
2410 ! TIMES [ reduce using rule 2 ] | |
2411 ! DIVIDE [ reduce using rule 2 ] | |
2412 ! PLUS [ shift and go to state 6 ] | |
2413 ! MINUS [ shift and go to state 5 ] | |
2414 | |
2415 state 11 | |
2416 | |
2417 expression -> expression PLUS expression . | |
2418 expression -> expression . PLUS expression | |
2419 expression -> expression . MINUS expression | |
2420 expression -> expression . TIMES expression | |
2421 expression -> expression . DIVIDE expression | |
2422 | |
2423 $ reduce using rule 1 | |
2424 PLUS reduce using rule 1 | |
2425 MINUS reduce using rule 1 | |
2426 RPAREN reduce using rule 1 | |
2427 TIMES shift and go to state 4 | |
2428 DIVIDE shift and go to state 7 | |
2429 | |
2430 ! TIMES [ reduce using rule 1 ] | |
2431 ! DIVIDE [ reduce using rule 1 ] | |
2432 ! PLUS [ shift and go to state 6 ] | |
2433 ! MINUS [ shift and go to state 5 ] | |
2434 | |
2435 state 12 | |
2436 | |
2437 expression -> expression DIVIDE expression . | |
2438 expression -> expression . PLUS expression | |
2439 expression -> expression . MINUS expression | |
2440 expression -> expression . TIMES expression | |
2441 expression -> expression . DIVIDE expression | |
2442 | |
2443 $ reduce using rule 4 | |
2444 PLUS reduce using rule 4 | |
2445 MINUS reduce using rule 4 | |
2446 TIMES reduce using rule 4 | |
2447 DIVIDE reduce using rule 4 | |
2448 RPAREN reduce using rule 4 | |
2449 | |
2450 ! PLUS [ shift and go to state 6 ] | |
2451 ! MINUS [ shift and go to state 5 ] | |
2452 ! TIMES [ shift and go to state 4 ] | |
2453 ! DIVIDE [ shift and go to state 7 ] | |
2454 | |
2455 state 13 | |
2456 | |
2457 expression -> LPAREN expression RPAREN . | |
2458 | |
2459 $ reduce using rule 6 | |
2460 PLUS reduce using rule 6 | |
2461 MINUS reduce using rule 6 | |
2462 TIMES reduce using rule 6 | |
2463 DIVIDE reduce using rule 6 | |
2464 RPAREN reduce using rule 6 | |
2465 </pre> | |
2466 </blockquote> | |
2467 | |
2468 The different states that appear in this file are a representation of | |
2469 every possible sequence of valid input tokens allowed by the grammar. | |
2470 When receiving input tokens, the parser is building up a stack and | |
2471 looking for matching rules. Each state keeps track of the grammar | |
2472 rules that might be in the process of being matched at that point. Within each | |
2473 rule, the "." character indicates the current location of the parse | |
2474 within that rule. In addition, the actions for each valid input token | |
2475 are listed. When a shift/reduce or reduce/reduce conflict arises, | |
2476 rules <em>not</em> selected are prefixed with an !. For example: | |
2477 | |
2478 <blockquote> | |
2479 <pre> | |
2480 ! TIMES [ reduce using rule 2 ] | |
2481 ! DIVIDE [ reduce using rule 2 ] | |
2482 ! PLUS [ shift and go to state 6 ] | |
2483 ! MINUS [ shift and go to state 5 ] | |
2484 </pre> | |
2485 </blockquote> | |
2486 | |
2487 By looking at these rules (and with a little practice), you can usually track down the source | |
of most parsing conflicts. It should also be stressed that not all shift/reduce conflicts are
bad. However, the only way to be sure that they are resolved correctly is to look at <tt>parser.out</tt>.
2490 | |
2491 <H3><a name="ply_nn29"></a>6.8 Syntax Error Handling</H3> | |
2492 | |
2493 | |
2494 If you are creating a parser for production use, the handling of | |
2495 syntax errors is important. As a general rule, you don't want a | |
2496 parser to simply throw up its hands and stop at the first sign of | |
2497 trouble. Instead, you want it to report the error, recover if possible, and | |
2498 continue parsing so that all of the errors in the input get reported | |
2499 to the user at once. This is the standard behavior found in compilers | |
2500 for languages such as C, C++, and Java. | |
2501 | |
2502 In PLY, when a syntax error occurs during parsing, the error is immediately | |
2503 detected (i.e., the parser does not read any more tokens beyond the | |
2504 source of the error). However, at this point, the parser enters a | |
2505 recovery mode that can be used to try and continue further parsing. | |
2506 As a general rule, error recovery in LR parsers is a delicate | |
topic that involves ancient rituals and black magic. The recovery mechanism
provided by <tt>yacc.py</tt> is comparable to Unix yacc so you may want to
consult a book like O'Reilly's "Lex and Yacc" for some of the finer details.
2510 | |
2511 <p> | |
2512 When a syntax error occurs, <tt>yacc.py</tt> performs the following steps: | |
2513 | |
2514 <ol> | |
2515 <li>On the first occurrence of an error, the user-defined <tt>p_error()</tt> function | |
2516 is called with the offending token as an argument. However, if the syntax error is due to | |
2517 reaching the end-of-file, <tt>p_error()</tt> is called with an | |
2518 argument of <tt>None</tt>. | |
2519 Afterwards, the parser enters | |
2520 an "error-recovery" mode in which it will not make future calls to <tt>p_error()</tt> until it | |
2521 has successfully shifted at least 3 tokens onto the parsing stack. | |
2522 | |
2523 <p> | |
2524 <li>If no recovery action is taken in <tt>p_error()</tt>, the offending lookahead token is replaced | |
2525 with a special <tt>error</tt> token. | |
2526 | |
2527 <p> | |
2528 <li>If the offending lookahead token is already set to <tt>error</tt>, the top item of the parsing stack is | |
2529 deleted. | |
2530 | |
2531 <p> | |
2532 <li>If the entire parsing stack is unwound, the parser enters a restart state and attempts to start | |
2533 parsing from its initial state. | |
2534 | |
2535 <p> | |
2536 <li>If a grammar rule accepts <tt>error</tt> as a token, it will be | |
2537 shifted onto the parsing stack. | |
2538 | |
2539 <p> | |
2540 <li>If the top item of the parsing stack is <tt>error</tt>, lookahead tokens will be discarded until the | |
2541 parser can successfully shift a new symbol or reduce a rule involving <tt>error</tt>. | |
2542 </ol> | |
2543 | |
2544 <H4><a name="ply_nn30"></a>6.8.1 Recovery and resynchronization with error rules</H4> | |
2545 | |
2546 | |
2547 The most well-behaved approach for handling syntax errors is to write grammar rules that include the <tt>error</tt> | |
2548 token. For example, suppose your language had a grammar rule for a print statement like this: | |
2549 | |
2550 <blockquote> | |
2551 <pre> | |
2552 def p_statement_print(p): | |
2553 'statement : PRINT expr SEMI' | |
2554 ... | |
2555 </pre> | |
2556 </blockquote> | |
2557 | |
2558 To account for the possibility of a bad expression, you might write an additional grammar rule like this: | |
2559 | |
2560 <blockquote> | |
2561 <pre> | |
2562 def p_statement_print_error(p): | |
2563 'statement : PRINT error SEMI' | |
2564 print("Syntax error in print statement. Bad expression") | |
2565 | |
2566 </pre> | |
2567 </blockquote> | |
2568 | |
2569 In this case, the <tt>error</tt> token will match any sequence of | |
2570 tokens that might appear up to the first semicolon that is | |
2571 encountered. Once the semicolon is reached, the rule will be | |
2572 invoked and the <tt>error</tt> token will go away. | |
2573 | |
2574 <p> | |
2575 This type of recovery is sometimes known as parser resynchronization. | |
2576 The <tt>error</tt> token acts as a wildcard for any bad input text and | |
2577 the token immediately following <tt>error</tt> acts as a | |
2578 synchronization token. | |
2579 | |
2580 <p> | |
2581 It is important to note that the <tt>error</tt> token usually does not appear as the last token | |
2582 on the right in an error rule. For example: | |
2583 | |
2584 <blockquote> | |
2585 <pre> | |
2586 def p_statement_print_error(p): | |
2587 'statement : PRINT error' | |
2588 print("Syntax error in print statement. Bad expression") | |
2589 </pre> | |
2590 </blockquote> | |
2591 | |
2592 This is because the first bad token encountered will cause the rule to | |
2593 be reduced--which may make it difficult to recover if more bad tokens | |
2594 immediately follow. | |
2595 | |
2596 <H4><a name="ply_nn31"></a>6.8.2 Panic mode recovery</H4> | |
2597 | |
2598 | |
2599 An alternative error recovery scheme is to enter a panic mode recovery in which tokens are | |
2600 discarded to a point where the parser might be able to recover in some sensible manner. | |
2601 | |
2602 <p> | |
2603 Panic mode recovery is implemented entirely in the <tt>p_error()</tt> function. For example, this | |
2604 function starts discarding tokens until it reaches a closing '}'. Then, it restarts the | |
2605 parser in its initial state. | |
2606 | |
2607 <blockquote> | |
2608 <pre> | |
2609 def p_error(p): | |
2610 print("Whoa. You are seriously hosed.") | |
2611 if not p: | |
2612 print("End of File!") | |
2613 return | |
2614 | |
2615 # Read ahead looking for a closing '}' | |
2616 while True: | |
2617 tok = parser.token() # Get the next token | |
2618 if not tok or tok.type == 'RBRACE': | |
2619 break | |
2620 parser.restart() | |
2621 </pre> | |
2622 </blockquote> | |
2623 | |
2624 <p> | |
2625 This function simply discards the bad token and tells the parser that the error was ok. | |
2626 | |
2627 <blockquote> | |
2628 <pre> | |
2629 def p_error(p): | |
2630 if p: | |
2631 print("Syntax error at token", p.type) | |
2632 # Just discard the token and tell the parser it's okay. | |
2633 parser.errok() | |
2634 else: | |
2635 print("Syntax error at EOF") | |
2636 </pre> | |
2637 </blockquote> | |
2638 | |
2639 <P> | |
2640 More information on these methods is as follows: | |
2641 </p> | |
2642 | |
2643 <p> | |
2644 <ul> | |
2645 <li><tt>parser.errok()</tt>. This resets the parser state so it doesn't think it's in error-recovery | |
2646 mode. This will prevent an <tt>error</tt> token from being generated and will reset the internal | |
2647 error counters so that the next syntax error will call <tt>p_error()</tt> again. | |
2648 | |
2649 <p> | |
2650 <li><tt>parser.token()</tt>. This returns the next token on the input stream. | |
2651 | |
2652 <p> | |
2653 <li><tt>parser.restart()</tt>. This discards the entire parsing stack and resets the parser | |
2654 to its initial state. | |
2655 </ul> | |
2656 | |
2657 <p> | |
2658 To supply the next lookahead token to the parser, <tt>p_error()</tt> can return a token. This might be | |
2659 useful if trying to synchronize on special characters. For example: | |
2660 | |
2661 <blockquote> | |
2662 <pre> | |
2663 def p_error(p): | |
2664 # Read ahead looking for a terminating ";" | |
2665 while True: | |
2666 tok = parser.token() # Get the next token | |
2667 if not tok or tok.type == 'SEMI': break | |
2668 parser.errok() | |
2669 | |
2670 # Return SEMI to the parser as the next lookahead token | |
2671 return tok | |
2672 </pre> | |
2673 </blockquote> | |
2674 | |
2675 <p> | |
Keep in mind that in the above error handling functions,
<tt>parser</tt> is an instance of the parser created by
2678 <tt>yacc()</tt>. You'll need to save this instance someplace in your | |
2679 code so that you can refer to it during error handling. | |
2680 </p> | |
2681 | |
2682 <H4><a name="ply_nn35"></a>6.8.3 Signalling an error from a production</H4> | |
2683 | |
2684 | |
2685 If necessary, a production rule can manually force the parser to enter error recovery. This | |
2686 is done by raising the <tt>SyntaxError</tt> exception like this: | |
2687 | |
2688 <blockquote> | |
2689 <pre> | |
2690 def p_production(p): | |
2691 'production : some production ...' | |
2692 raise SyntaxError | |
2693 </pre> | |
2694 </blockquote> | |
2695 | |
2696 The effect of raising <tt>SyntaxError</tt> is the same as if the last symbol shifted onto the | |
2697 parsing stack was actually a syntax error. Thus, when you do this, the last symbol shifted is popped off | |
2698 of the parsing stack and the current lookahead token is set to an <tt>error</tt> token. The parser | |
2699 then enters error-recovery mode where it tries to reduce rules that can accept <tt>error</tt> tokens. | |
2700 The steps that follow from this point are exactly the same as if a syntax error were detected and | |
2701 <tt>p_error()</tt> were called. | |
2702 | |
2703 <P> | |
2704 One important aspect of manually setting an error is that the <tt>p_error()</tt> function will <b>NOT</b> be | |
2705 called in this case. If you need to issue an error message, make sure you do it in the production that | |
2706 raises <tt>SyntaxError</tt>. | |
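<p>
For example, a sketch that reports the error before raising the exception
(assuming line information is available for the first symbol):

<blockquote>
<pre>
def p_production(p):
    'production : some production ...'
    print("Malformed production at line", p.lineno(1))
    raise SyntaxError      # p_error() will NOT be called
</pre>
</blockquote>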
2707 | |
2708 <P> | |
2709 Note: This feature of PLY is meant to mimic the behavior of the YYERROR macro in yacc. | |
2710 | |
2711 <H4><a name="ply_nn38"></a>6.8.4 When Do Syntax Errors Get Reported</H4> | |
2712 | |
2713 | |
2714 <p> | |
2715 In most cases, yacc will handle errors as soon as a bad input token is | |
2716 detected on the input. However, be aware that yacc may choose to | |
2717 delay error handling until after it has reduced one or more grammar | |
2718 rules first. This behavior might be unexpected, but it's related to | |
2719 special states in the underlying parsing table known as "defaulted | |
2720 states." A defaulted state is parsing condition where the same | |
2721 grammar rule will be reduced regardless of what <em>valid</em> token | |
2722 comes next on the input. For such states, yacc chooses to go ahead | |
2723 and reduce the grammar rule <em>without reading the next input | |
2724 token</em>. If the next token is bad, yacc will eventually get around to reading it and | |
2725 report a syntax error. It's just a little unusual in that you might | |
2726 see some of your grammar rules firing immediately prior to the syntax | |
2727 error. | |
2728 </p> | |
2729 | |
2730 <p> | |
2731 Usually, the delayed error reporting with defaulted states is harmless | |
2732 (and there are other reasons for wanting PLY to behave in this way). | |
However, if you need to turn this behavior off for some reason, you
can clear the defaulted states table like this:
2735 </p> | |
2736 | |
2737 <blockquote> | |
2738 <pre> | |
2739 parser = yacc.yacc() | |
2740 parser.defaulted_states = {} | |
2741 </pre> | |
2742 </blockquote> | |
2743 | |
2744 <p> | |
2745 Disabling defaulted states is not recommended if your grammar makes use | |
2746 of embedded actions as described in Section 6.11.</p> | |
2747 | |
2748 <H4><a name="ply_nn32"></a>6.8.5 General comments on error handling</H4> | |
2749 | |
2750 | |
2751 For normal types of languages, error recovery with error rules and resynchronization characters is probably the most reliable | |
2752 technique. This is because you can instrument the grammar to catch errors at selected places where it is relatively easy | |
2753 to recover and continue parsing. Panic mode recovery is really only useful in certain specialized applications where you might want | |
2754 to discard huge portions of the input text to find a valid restart point. | |
2755 | |
2756 <H3><a name="ply_nn33"></a>6.9 Line Number and Position Tracking</H3> | |
2757 | |
2758 | |
2759 Position tracking is often a tricky problem when writing compilers. | |
2760 By default, PLY tracks the line number and position of all tokens. | |
2761 This information is available using the following functions: | |
2762 | |
2763 <ul> | |
2764 <li><tt>p.lineno(num)</tt>. Return the line number for symbol <em>num</em> | |
2765 <li><tt>p.lexpos(num)</tt>. Return the lexing position for symbol <em>num</em> | |
2766 </ul> | |
2767 | |
2768 For example: | |
2769 | |
2770 <blockquote> | |
2771 <pre> | |
2772 def p_expression(p): | |
2773 'expression : expression PLUS expression' | |
2774 line = p.lineno(2) # line number of the PLUS token | |
2775 index = p.lexpos(2) # Position of the PLUS token | |
2776 </pre> | |
2777 </blockquote> | |
2778 | |
2779 As an optional feature, <tt>yacc.py</tt> can automatically track line | |
2780 numbers and positions for all of the grammar symbols as well. | |
2781 However, this extra tracking requires extra processing and can | |
2782 significantly slow down parsing. Therefore, it must be enabled by | |
2783 passing the | |
2784 <tt>tracking=True</tt> option to <tt>yacc.parse()</tt>. For example: | |
2785 | |
2786 <blockquote> | |
2787 <pre> | |
2788 yacc.parse(data,tracking=True) | |
2789 </pre> | |
2790 </blockquote> | |
2791 | |
2792 Once enabled, the <tt>lineno()</tt> and <tt>lexpos()</tt> methods work | |
for all grammar symbols. In addition, two more methods can be
2794 used: | |
2795 | |
2796 <ul> | |
2797 <li><tt>p.linespan(num)</tt>. Return a tuple (startline,endline) with the starting and ending line number for symbol <em>num</em>. | |
2798 <li><tt>p.lexspan(num)</tt>. Return a tuple (start,end) with the starting and ending positions for symbol <em>num</em>. | |
2799 </ul> | |
2800 | |
2801 For example: | |
2802 | |
2803 <blockquote> | |
2804 <pre> | |
2805 def p_expression(p): | |
2806 'expression : expression PLUS expression' | |
2807 p.lineno(1) # Line number of the left expression | |
2808 p.lineno(2) # line number of the PLUS operator | |
2809 p.lineno(3) # line number of the right expression | |
2810 ... | |
2811 start,end = p.linespan(3) # Start,end lines of the right expression | |
2812 starti,endi = p.lexspan(3) # Start,end positions of right expression | |
2813 | |
2814 </pre> | |
2815 </blockquote> | |
2816 | |
2817 Note: The <tt>lexspan()</tt> function only returns the range of values up to the start of the last grammar symbol. | |
2818 | |
2819 <p> | |
2820 Although it may be convenient for PLY to track position information on | |
2821 all grammar symbols, this is often unnecessary. For example, if you | |
2822 are merely using line number information in an error message, you can | |
2823 often just key off of a specific token in the grammar rule. For | |
2824 example: | |
2825 | |
2826 <blockquote> | |
2827 <pre> | |
2828 def p_bad_func(p): | |
2829 'funccall : fname LPAREN error RPAREN' | |
2830 # Line number reported from LPAREN token | |
2831 print("Bad function call at line", p.lineno(2)) | |
2832 </pre> | |
2833 </blockquote> | |
2834 | |
2835 <p> | |
2836 Similarly, you may get better parsing performance if you only | |
2837 selectively propagate line number information where it's needed using | |
2838 the <tt>p.set_lineno()</tt> method. For example: | |
2839 | |
2840 <blockquote> | |
2841 <pre> | |
2842 def p_fname(p): | |
2843 'fname : ID' | |
2844 p[0] = p[1] | |
2845 p.set_lineno(0,p.lineno(1)) | |
2846 </pre> | |
2847 </blockquote> | |
2848 | |
2849 PLY doesn't retain line number information from rules that have already been | |
2850 parsed. If you are building an abstract syntax tree and need to have line numbers, | |
2851 you should make sure that the line numbers appear in the tree itself. | |
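
<p>
One simple way to do this is to record the line number of a convenient token
in the node itself as the node is created.  The tuple layout below is only an
illustration, not a fixed format:

<blockquote>
<pre>
def p_expression_binop(p):
    'expression : expression PLUS expression'
    # Store the PLUS token's line number as the last element of the node.
    # Token line numbers are available without tracking=True.
    p[0] = ('binop', p[2], p[1], p[3], p.lineno(2))
</pre>
</blockquote>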

<H3><a name="ply_nn34"></a>6.10 AST Construction</H3>


<tt>yacc.py</tt> provides no special functions for constructing an
abstract syntax tree.  However, such construction is easy enough to do
on your own.

<p>A minimal way to construct a tree is to simply create and
propagate a tuple or list in each grammar rule function.  There
are many possible ways to do this, but one example would be something
like this:

<blockquote>
<pre>
def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = ('binary-expression',p[2],p[1],p[3])

def p_expression_group(p):
    'expression : LPAREN expression RPAREN'
    p[0] = ('group-expression',p[2])

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = ('number-expression',p[1])
</pre>
</blockquote>

<p>
Another approach is to create a set of data structures for different
kinds of abstract syntax tree nodes and assign nodes to <tt>p[0]</tt>
in each rule.  For example:

<blockquote>
<pre>
class Expr: pass

class BinOp(Expr):
    def __init__(self,left,op,right):
        self.type = "binop"
        self.left = left
        self.right = right
        self.op = op

class Number(Expr):
    def __init__(self,value):
        self.type = "number"
        self.value = value

def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = BinOp(p[1],p[2],p[3])

def p_expression_group(p):
    'expression : LPAREN expression RPAREN'
    p[0] = p[2]

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = Number(p[1])
</pre>
</blockquote>

The advantage of this approach is that it may make it easier to attach more complicated
semantics, type checking, code generation, and other features to the node classes.

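<p>
For instance, here is a sketch of what that might look like.  It assumes the
operator token's value is the literal character (as in the calculator
examples); the <tt>evaluate()</tt> method is a hypothetical extension of the
classes above:

<blockquote>
<pre>
import operator

class BinOp(Expr):
    def __init__(self,left,op,right):
        self.type = "binop"
        self.left = left
        self.right = right
        self.op = op
    def evaluate(self):
        # Dispatch on the operator character and evaluate both subtrees
        ops = { '+': operator.add, '-': operator.sub,
                '*': operator.mul, '/': operator.truediv }
        return ops[self.op](self.left.evaluate(), self.right.evaluate())

class Number(Expr):
    def __init__(self,value):
        self.type = "number"
        self.value = value
    def evaluate(self):
        return self.value
</pre>
</blockquote>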
<p>
To simplify tree traversal, it may make sense to pick a very generic
tree structure for your parse tree nodes.  For example:

<blockquote>
<pre>
class Node:
    def __init__(self,type,children=None,leaf=None):
        self.type = type
        if children:
            self.children = children
        else:
            self.children = [ ]
        self.leaf = leaf

def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = Node("binop", [p[1],p[3]], p[2])
</pre>
</blockquote>
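
<p>
With this structure, a single recursive function can walk any tree the parser
builds.  A minimal sketch (it assumes every propagated value is either a
<tt>Node</tt> or a raw token value):

<blockquote>
<pre>
def print_tree(node, indent=0):
    # Leaves that aren't Node instances (e.g., raw token values) print as-is
    if not isinstance(node, Node):
        print(" "*indent + repr(node))
        return
    label = node.type if node.leaf is None else "%s (%s)" % (node.type, node.leaf)
    print(" "*indent + label)
    for child in node.children:
        print_tree(child, indent+4)
</pre>
</blockquote>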

<H3><a name="ply_nn35"></a>6.11 Embedded Actions</H3>


The parsing technique used by yacc only allows actions to be executed at the end of a rule.  For example,
suppose you have a rule like this:

<blockquote>
<pre>
def p_foo(p):
    "foo : A B C D"
    print("Parsed a foo", p[1],p[2],p[3],p[4])
</pre>
</blockquote>

<p>
In this case, the supplied action code only executes after all of the
symbols <tt>A</tt>, <tt>B</tt>, <tt>C</tt>, and <tt>D</tt> have been
parsed.  Sometimes, however, it is useful to execute small code
fragments during intermediate stages of parsing.  For example, suppose
you wanted to perform some action immediately after <tt>A</tt> has
been parsed.  To do this, write an empty rule like this:

<blockquote>
<pre>
def p_foo(p):
    "foo : A seen_A B C D"
    print("Parsed a foo", p[1],p[3],p[4],p[5])
    print("seen_A returned", p[2])

def p_seen_A(p):
    "seen_A :"
    print("Saw an A = ", p[-1])   # Access grammar symbol to left
    p[0] = some_value             # Assign value to seen_A

</pre>
</blockquote>

<p>
In this example, the empty <tt>seen_A</tt> rule executes immediately
after <tt>A</tt> is shifted onto the parsing stack.  Within this
rule, <tt>p[-1]</tt> refers to the symbol on the stack that appears
immediately to the left of the <tt>seen_A</tt> symbol.  In this case,
it would be the value of <tt>A</tt> in the <tt>foo</tt> rule
immediately above.  Like other rules, a value can be returned from an
embedded action by simply assigning it to <tt>p[0]</tt>.

<p>
The use of embedded actions can sometimes introduce extra shift/reduce conflicts.  For example,
this grammar has no conflicts:

<blockquote>
<pre>
def p_foo(p):
    """foo : abcd
           | abcx"""

def p_abcd(p):
    "abcd : A B C D"

def p_abcx(p):
    "abcx : A B C X"
</pre>
</blockquote>

However, if you insert an embedded action into one of the rules like this,

<blockquote>
<pre>
def p_foo(p):
    """foo : abcd
           | abcx"""

def p_abcd(p):
    "abcd : A B C D"

def p_abcx(p):
    "abcx : A B seen_AB C X"

def p_seen_AB(p):
    "seen_AB :"
</pre>
</blockquote>

an extra shift-reduce conflict will be introduced.  This conflict is
caused by the fact that the same symbol <tt>C</tt> appears next in
both the <tt>abcd</tt> and <tt>abcx</tt> rules.  The parser can either
shift the symbol (<tt>abcd</tt> rule) or reduce the empty
rule <tt>seen_AB</tt> (<tt>abcx</tt> rule).

<p>
A common use of embedded rules is to control other aspects of parsing
such as scoping of local variables.  For example, if you were parsing C code, you might
write code like this:

<blockquote>
<pre>
def p_statements_block(p):
    "statements : LBRACE new_scope statements RBRACE"
    # Action code
    ...
    pop_scope()        # Return to previous scope

def p_new_scope(p):
    "new_scope :"
    # Create a new scope for local variables
    s = new_scope()
    push_scope(s)
    ...
</pre>
</blockquote>

In this case, the embedded action <tt>new_scope</tt> executes
immediately after a <tt>LBRACE</tt> (<tt>{</tt>) symbol is parsed.
This might adjust internal symbol tables and other aspects of the
parser.  Upon completion of the rule <tt>statements</tt>, code
might undo the operations performed in the embedded action
(e.g., <tt>pop_scope()</tt>).

<H3><a name="ply_nn36"></a>6.12 Miscellaneous Yacc Notes</H3>


<ul>

<li>By default, <tt>yacc.py</tt> relies on <tt>lex.py</tt> for tokenizing.  However, an alternative tokenizer
can be supplied as follows (see the sketch after this list for a minimal example):

<blockquote>
<pre>
result = yacc.parse(lexer=x)
</pre>
</blockquote>
In this case, <tt>x</tt> must be a Lexer object that minimally has a <tt>x.token()</tt> method for retrieving the next
token.  If an input string is given to <tt>yacc.parse()</tt>, the lexer must also have an <tt>x.input()</tt> method.

<p>
<li>By default, yacc generates tables in debugging mode (which produces the parser.out file and other output).
To disable this, use

<blockquote>
<pre>
parser = yacc.yacc(debug=False)
</pre>
</blockquote>

<p>
<li>To change the name of the <tt>parsetab.py</tt> file, use:

<blockquote>
<pre>
parser = yacc.yacc(tabmodule="foo")
</pre>
</blockquote>

<P>
Normally, the <tt>parsetab.py</tt> file is placed into the same directory as
the module where the parser is defined.  If you want it to go somewhere else, you can
give an absolute package name for <tt>tabmodule</tt> instead.  In that case, the
tables will be written there.
</p>

<p>
<li>To change the directory in which the <tt>parsetab.py</tt> file (and other output files) are written, use:
<blockquote>
<pre>
parser = yacc.yacc(tabmodule="foo",outputdir="somedirectory")
</pre>
</blockquote>

<p>
Note: Be aware that unless the directory specified is also on Python's path (<tt>sys.path</tt>), subsequent
imports of the table file will fail.  As a general rule, it's better to specify a destination using the
<tt>tabmodule</tt> argument instead of directly specifying a directory using the <tt>outputdir</tt> argument.
</p>

<p>
<li>To prevent yacc from generating any kind of parser table file, use:
<blockquote>
<pre>
parser = yacc.yacc(write_tables=False)
</pre>
</blockquote>

Note: If you disable table generation, yacc() will regenerate the parsing tables
each time it runs (which may take a while depending on how large your grammar is).

<P>
<li>To print copious amounts of debugging during parsing, use:

<blockquote>
<pre>
result = yacc.parse(data, debug=True)
</pre>
</blockquote>

<p>
<li>Since the generation of the LALR tables is relatively expensive, previously generated tables are
cached and reused if possible.  The decision to regenerate the tables is determined by taking an MD5
checksum of all grammar rules and precedence rules.  Only in the event of a mismatch are the tables regenerated.

<p>
It should be noted that table generation is reasonably efficient, even for grammars that involve around 100 rules
and several hundred states.  </li>


<p>
<li>Since LR parsing is driven by tables, the performance of the parser is largely independent of the
size of the grammar.  The biggest bottlenecks will be the lexer and the complexity of the code in your grammar rules.
</li>
</p>

<p>
<li><tt>yacc()</tt> also allows parsers to be defined as classes and as closures (see the section on alternative specification of
lexers).  However, be aware that only one parser may be defined in a single module (source file).  There are various
error checks and validation steps that may issue confusing error messages if you try to define multiple parsers
in the same source file.
</li>
</p>

</ul>
</p>

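<p>
As a minimal sketch of the alternative-tokenizer hook mentioned in the first
item above: the object only needs a <tt>token()</tt> method (plus
<tt>input()</tt> if you pass a string to <tt>parse()</tt>), and each token it
returns must carry the usual <tt>type</tt>, <tt>value</tt>, <tt>lineno</tt>,
and <tt>lexpos</tt> attributes.  The class names here are hypothetical:

<blockquote>
<pre>
class SimpleToken:
    def __init__(self, type, value, lineno=0, lexpos=0):
        self.type   = type
        self.value  = value
        self.lineno = lineno
        self.lexpos = lexpos

class ListLexer:
    def __init__(self, toks):
        self._toks = iter(toks)
    def input(self, data):
        pass                            # Tokens are supplied up front in this sketch
    def token(self):
        return next(self._toks, None)   # None signals end-of-input

# Usage (hypothetical token stream):
# result = parser.parse(lexer=ListLexer([SimpleToken('NUMBER', 42)]))
</pre>
</blockquote>
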
<H2><a name="ply_nn37"></a>7. Multiple Parsers and Lexers</H2>


In advanced parsing applications, you may want to have multiple
parsers and lexers.

<p>
As a general rule, this isn't a problem.  However, to make it work,
you need to carefully make sure everything gets hooked up correctly.
First, make sure you save the objects returned by <tt>lex()</tt> and
<tt>yacc()</tt>.  For example:

<blockquote>
<pre>
lexer  = lex.lex()       # Return lexer object
parser = yacc.yacc()     # Return parser object
</pre>
</blockquote>

Next, when parsing, make sure you give the <tt>parse()</tt> function a reference to the lexer it
should be using.  For example:

<blockquote>
<pre>
parser.parse(text,lexer=lexer)
</pre>
</blockquote>

If you forget to do this, the parser will use the last lexer
created--which is not always what you want.

<p>
Within lexer and parser rule functions, these objects are also
available.  In the lexer, the "lexer" attribute of a token refers to
the lexer object that triggered the rule.  For example:

<blockquote>
<pre>
def t_NUMBER(t):
    r'\d+'
    ...
    print(t.lexer)           # Show lexer object
</pre>
</blockquote>

In the parser, the "lexer" and "parser" attributes refer to the lexer
and parser objects respectively.

<blockquote>
<pre>
def p_expr_plus(p):
    'expr : expr PLUS expr'
    ...
    print(p.parser)          # Show parser object
    print(p.lexer)           # Show lexer object
</pre>
</blockquote>

If necessary, arbitrary attributes can be attached to the lexer or parser object.
For example, if you wanted to have different parsing modes, you could attach a mode
attribute to the parser object and look at it later.
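
<p>
A minimal sketch of this idea (the <tt>mode</tt> attribute is hypothetical;
any attribute name works):

<blockquote>
<pre>
parser = yacc.yacc()
parser.mode = "strict"            # User-defined attribute on the parser object

def p_statement(p):
    'statement : expression'
    if p.parser.mode == "strict": # The same object is visible inside rules
        ...
</pre>
</blockquote>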

<H2><a name="ply_nn38"></a>8. Using Python's Optimized Mode</H2>


Because PLY uses information from doc-strings, parsing and lexing
information must be gathered while running the Python interpreter in
normal mode (i.e., not with the -O or -OO options).  However, if you
specify optimized mode like this:

<blockquote>
<pre>
lex.lex(optimize=1)
yacc.yacc(optimize=1)
</pre>
</blockquote>

then PLY can later be used when Python runs in optimized mode.  To make this work,
make sure you first run Python in normal mode.  Once the lexing and parsing tables
have been generated the first time, run Python in optimized mode.  PLY will use
the tables without the need for doc-strings.

<p>
Beware: running PLY in optimized mode disables a lot of error
checking.  You should only do this when your project has stabilized
and you don't need to do any debugging.  One of the purposes of
optimized mode is to substantially decrease the startup time of
your compiler (by assuming that everything is already properly
specified and works).

<H2><a name="ply_nn44"></a>9. Advanced Debugging</H2>


<p>
Debugging a compiler is typically not an easy task.  PLY provides some
advanced diagnostic capabilities through the use of Python's
<tt>logging</tt> module.  The next two sections describe this:

<H3><a name="ply_nn45"></a>9.1 Debugging the lex() and yacc() commands</H3>


<p>
Both the <tt>lex()</tt> and <tt>yacc()</tt> commands have a debugging
mode that can be enabled using the <tt>debug</tt> flag.  For example:

<blockquote>
<pre>
lex.lex(debug=True)
yacc.yacc(debug=True)
</pre>
</blockquote>

Normally, the output produced by debugging is routed to either
standard error or, in the case of <tt>yacc()</tt>, to a file
<tt>parser.out</tt>.  This output can be more carefully controlled
by supplying a logging object.  Here is an example that adds
information about where different debugging messages are coming from:

<blockquote>
<pre>
# Set up a logging object
import logging
logging.basicConfig(
    level = logging.DEBUG,
    filename = "parselog.txt",
    filemode = "w",
    format = "%(filename)10s:%(lineno)4d:%(message)s"
)
log = logging.getLogger()

lex.lex(debug=True,debuglog=log)
yacc.yacc(debug=True,debuglog=log)
</pre>
</blockquote>

If you supply a custom logger, the amount of debugging
information produced can be controlled by setting the logging level.
Typically, debugging messages are either issued at the <tt>DEBUG</tt>,
<tt>INFO</tt>, or <tt>WARNING</tt> levels.

<p>
PLY's error messages and warnings are also produced using the logging
interface.  This can be controlled by passing a logging object
using the <tt>errorlog</tt> parameter.

<blockquote>
<pre>
lex.lex(errorlog=log)
yacc.yacc(errorlog=log)
</pre>
</blockquote>

If you want to completely silence warnings, you can either pass in a
logging object with an appropriate filter level or use the <tt>NullLogger</tt>
object defined in either <tt>lex</tt> or <tt>yacc</tt>.  For example:

<blockquote>
<pre>
yacc.yacc(errorlog=yacc.NullLogger())
</pre>
</blockquote>

<H3><a name="ply_nn46"></a>9.2 Run-time Debugging</H3>


<p>
To enable run-time debugging of a parser, use the <tt>debug</tt> option to <tt>parse()</tt>.  This
option can either be an integer (which simply turns debugging on or off) or an instance
of a logger object.  For example:

<blockquote>
<pre>
log = logging.getLogger()
parser.parse(input,debug=log)
</pre>
</blockquote>

If a logging object is passed, you can use its filtering level to control how much
output gets generated.  The <tt>INFO</tt> level is used to produce information
about rule reductions.  The <tt>DEBUG</tt> level will show information about the
parsing stack, token shifts, and other details.  The <tt>ERROR</tt> level shows information
related to parsing errors.

<p>
For very complicated problems, you should pass in a logging object that
redirects to a file where you can more easily inspect the output after
execution.
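
<p>
A minimal sketch of that setup (the filename is arbitrary):

<blockquote>
<pre>
import logging
logging.basicConfig(
    level = logging.DEBUG,
    filename = "parsedebug.txt",
    filemode = "w"
)
log = logging.getLogger()
result = parser.parse(data, debug=log)
</pre>
</blockquote>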

<H2><a name="ply_nn49"></a>10. Packaging Advice</H2>


<p>
If you are distributing a package that makes use of PLY, you should
spend a few moments thinking about how you want to handle the files
that are automatically generated, such as the <tt>parsetab.py</tt>
file generated by the <tt>yacc()</tt> function.</p>

<p>
Starting in PLY-3.6, the table files are created in the same directory
as the file where a parser is defined.  This means that the
<tt>parsetab.py</tt> file will live side-by-side with your parser
specification.  In terms of packaging, this is probably the easiest and
most sane approach to manage.  You don't need to give <tt>yacc()</tt>
any extra arguments and it should just "work."</p>

<p>
One concern is the management of the <tt>parsetab.py</tt> file itself.
For example, should you have this file checked into version control (e.g., GitHub),
should it be included in a package distribution as a normal file, or should you
just let PLY generate it automatically for the user when they install your package?
</p>

<p>
As of PLY-3.6, the <tt>parsetab.py</tt> file should be compatible across all versions
of Python including Python 2 and 3.  Thus, a table file generated in Python 2 should
work fine if it's used on Python 3.  Because of this, it should be relatively harmless
to distribute the <tt>parsetab.py</tt> file yourself if you need to.  However, be aware
that older/newer versions of PLY may try to regenerate the file if there are future
enhancements or changes to its format.
</p>

<p>
To make the generation of table files easier for the purposes of installation, you might
want to make your parser files runnable as a script (e.g., with Python's <tt>-m</tt>
option) so that the tables can be generated on demand.  For example:
</p>

<blockquote>
<pre>
# calc.py
...
...
def make_parser():
    parser = yacc.yacc()
    return parser

if __name__ == '__main__':
    make_parser()
</pre>
</blockquote>

<p>
You can then use a command such as <tt>python -m calc</tt> to generate the tables.  Alternatively,
a <tt>setup.py</tt> script can import the module and use <tt>make_parser()</tt> to create the
parsing tables.
</p>
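
<p>
A hypothetical <tt>setup.py</tt> fragment along those lines (the module and
package names are placeholders):

<blockquote>
<pre>
# setup.py
from setuptools import setup

import calc
calc.make_parser()        # Pre-generate parsetab.py at build time

setup(
    name = "calc",
    version = "1.0",
    py_modules = ["calc", "parsetab"],
)
</pre>
</blockquote>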

<p>
If you're willing to sacrifice a little startup time, you can also instruct PLY to never write the
tables using <tt>yacc.yacc(write_tables=False, debug=False)</tt>.  In this mode, PLY will regenerate
the parsing tables from scratch each time.  For a small grammar, you probably won't notice.  For a
large grammar, you should probably reconsider--the parsing tables are meant to dramatically speed up this process.
</p>

<p>
During operation, it is normal for PLY to produce diagnostic error
messages (usually printed to standard error).  These are generated
entirely using the <tt>logging</tt> module.  If you want to redirect
these messages or silence them, you can provide your own logging
object to <tt>yacc()</tt>.  For example:
</p>

<blockquote>
<pre>
import logging
log = logging.getLogger('ply')
...
parser = yacc.yacc(errorlog=log)
</pre>
</blockquote>

<H2><a name="ply_nn39"></a>11. Where to go from here?</H2>


The <tt>examples</tt> directory of the PLY distribution contains several simple examples.  Please consult a
compilers textbook for the theory and underlying implementation details of LR parsing.

</body>
</html>