Search Results

Search found 231 results on 10 pages for 'lex webb'.

  • Unable to compile output of lex

    - by dbarker
    When I attempt to compile the output of this trivial lex program:

        # lex.l
        integer    printf("found keyword INT");

    using:

        $ gcc lex.yy.c

    I get:

        Undefined symbols:
          "_yywrap", referenced from: _yylex in ccMsRtp7.o, _input in ccMsRtp7.o
          "_main", referenced from: start in crt1.10.6.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    lex --version tells me I'm actually using 'flex 2.5.35', although ls -fla `which lex` shows it isn't a symlink. Any ideas why the output won't compile?
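
    A likely cause, judging from the linker output: a flex-generated scanner references yywrap(), and a standalone build still needs a main(). Either link against libfl, which supplies default stubs for both ($ gcc lex.yy.c -lfl), or disable yywrap and add a main() yourself. A minimal sketch of the second approach, assuming flex is the lex in use:

        %option noyywrap
        %%
        integer    printf("found keyword INT");
        %%
        int main(void) { return yylex(); }

    After regenerating lex.yy.c from this, plain $ gcc lex.yy.c should link cleanly.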

  • How To Compile YACC And LEX?

    - by nisha
    Actually I'm having YACC file as pos.yacc and LEX file name is pos1.lex.. while compilling I'm getting the folowing error... malathy@malathy:~$ cc lex.yy.c y.tab.c -ly -ll pos1.lex %{ #include "y.tab.h" int yylval; %} DIGIT [0-9]+ %% {DIGIT} {yylval=atoi(yytext);return DIGIT;} [\n ] {} . {return *yytext;} %% yacc file is pos.yacc %token DIGIT %% s:e {printf("%d\n",$1);} e:DIGIT {$$=$1;} |e e "+" {$$=$1+$2;} |e e "*" {$$=$1*$2;} |e e "-" {$$=$1-$2;} |e e "/" {$$=$1/$2;} ; %% main() { yyparse(); } yyerror() { printf("Error"); } so while compiling i m getting like malathy@malathy:~$ cc lex.yy.c y.tab.c -ly -ll pos.y: In function ‘yyerror’: pos.y:16: warning: incompatible implicit declaration of built-in function ‘printf’ pos.y: In function ‘yyparse’: pos.y:4: warning: incompatible implicit declaration of built-in function ‘printf’
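
    Those messages are only warnings: printf is used in pos.yacc without a declaration, so the compiler complains about the implicit declaration. A common fix, sketched here on the assumption that the grammar otherwise builds, is to give the yacc file a C prologue so the generated y.tab.c sees <stdio.h>:

        %{
        #include <stdio.h>   /* declares printf for the code in the actions */
        %}
        %token DIGIT
        %%
        /* grammar rules exactly as before */

    Then regenerate (yacc -d pos.yacc, lex pos1.lex) and recompile with the same cc command.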

  • Error with Phoenix placeholder _val in Boost.Spirit.Lex :(

    - by GooRoo
    Hello, everybody. I'm newbie in Boost.Spirit.Lex. Some strange error appears every time I try to use lex::_val in semantics actions in my simple lexer: #ifndef _TOKENS_H_ #define _TOKENS_H_ #include <iostream> #include <string> #include <boost/spirit/include/lex_lexertl.hpp> #include <boost/spirit/include/phoenix_operator.hpp> #include <boost/spirit/include/phoenix_statement.hpp> #include <boost/spirit/include/phoenix_container.hpp> namespace lex = boost::spirit::lex; namespace phx = boost::phoenix; enum tokenids { ID_IDENTIFICATOR = 1, ID_CONSTANT, ID_OPERATION, ID_BRACKET, ID_WHITESPACES }; template <typename Lexer> struct mega_tokens : lex::lexer<Lexer> { mega_tokens() : identifier(L"[a-zA-Z_][a-zA-Z0-9_]*", ID_IDENTIFICATOR) , constant (L"[0-9]+(\\.[0-9]+)?", ID_CONSTANT ) , operation (L"[\\+\\-\\*/]", ID_OPERATION ) , bracket (L"[\\(\\)\\[\\]]", ID_BRACKET ) { using lex::_tokenid; using lex::_val; using phx::val; this->self = operation [ std::wcout << val(L'<') << _tokenid // << val(L':') << lex::_val << val(L'>') ] | identifier [ std::wcout << val(L'<') << _tokenid << val(L':') << _val << val(L'>') ] | constant [ std::wcout << val(L'<') << _tokenid // << val(L':') << _val << val(L'>') ] | bracket [ std::wcout << phx::val(lex::_val) << val(L'<') << _tokenid // << val(L':') << lex::_val << val(L'>') ] ; } lex::token_def<wchar_t, wchar_t> operation; lex::token_def<std::wstring, wchar_t> identifier; lex::token_def<double, wchar_t> constant; lex::token_def<wchar_t, wchar_t> bracket; }; #endif // _TOKENS_H_ and #include <cstdlib> #include <iostream> #include <locale> #include <boost/spirit/include/lex_lexertl.hpp> #include "tokens.h" int main() { setlocale(LC_ALL, "Russian"); namespace lex = boost::spirit::lex; typedef std::wstring::iterator base_iterator; typedef lex::lexertl::token < base_iterator, boost::mpl::vector<wchar_t, std::wstring, double, wchar_t>, boost::mpl::true_ > token_type; typedef lex::lexertl::actor_lexer<token_type> lexer_type; typedef mega_tokens<lexer_type>::iterator_type iterator_type; mega_tokens<lexer_type> mega_lexer; std::wstring str = L"alfa+x1*(2.836-x2[i])"; base_iterator first = str.begin(); bool r = lex::tokenize(first, str.end(), mega_lexer); if (r) { std::wcout << L"Success" << std::endl; } else { std::wstring rest(first, str.end()); std::wcerr << L"Lexical analysis failed\n" << L"stopped at: \"" << rest << L"\"\n"; } return EXIT_SUCCESS; } This code causes an error in Boost header 'boost/spirit/home/lex/argument.hpp' on line 167 while compiling: return: can't convert 'const boost::variant' to 'boost::variant &' When I don't use lex::_val program compiles with no errors. Obviously, I use _val in wrong way, but I do not know how to do this correctly. Help, please! :) P.S. And sorry for my terrible English…

  • Why am I getting an error in this switch statement written in C?

    - by mekasperasky
    I have a character array b which stores different identifiers in different iterations . I have to compare b with various identifiers of the programming language C and print it into a file . When i do it using the following switch statement it gives me errors b[i]='\0'; switch(b[i]) { case "if":fprintf(fp2,"if ----> IDENTIFIER \n"); case "then":fprintf(fp2,"then ----> IDENTIFIER \n"); case "else":fprintf(fp2,"else ----> IDENTIFIER \n"); case "switch":fprintf(fp2,"switch ----> IDENTIFIER \n"); case 'printf':fprintf(fp2,"prtintf ----> IDENTIFIER \n"); case 'scanf':fprintf(fp2,"else ----> IDENTIFIER \n"); case 'NULL':fprintf(fp2,"NULL ----> IDENTIFIER \n"); case 'int':fprintf(fp2,"INT ----> IDENTIFIER \n"); case 'char':fprintf(fp2,"char ----> IDENTIFIER \n"); case 'float':fprintf(fp2,"float ----> IDENTIFIER \n"); case 'long':fprintf(fp2,"long ----> IDENTIFIER \n"); case 'double':fprintf(fp2,"double ----> IDENTIFIER \n"); case 'char':fprintf(fp2,"char ----> IDENTIFIER \n"); case 'const':fprintf(fp2,"const ----> IDENTIFIER \n"); case 'continue':fprintf(fp2,"continue ----> IDENTIFIER \n"); case 'break':fprintf(fp2,"long ----> IDENTIFIER \n"); case 'for':fprintf(fp2,"long ----> IDENTIFIER \n"); case 'size of':fprintf(fp2,"size of ----> IDENTIFIER \n"); case 'register':fprintf(fp2,"register ----> IDENTIFIER \n"); case 'short':fprintf(fp2,"short ----> IDENTIFIER \n"); case 'auto':fprintf(fp2,"auto ----> IDENTIFIER \n"); case 'while':fprintf(fp2,"while ----> IDENTIFIER \n"); case 'do':fprintf(fp2,"do ----> IDENTIFIER \n"); case 'case':fprintf(fp2,"case ----> IDENTIFIER \n"); } the error being lex.c:94:13: warning: character constant too long for its type lex.c:95:13: warning: character constant too long for its type lex.c:96:13: warning: multi-character character constant lex.c:97:13: warning: multi-character character constant lex.c:98:13: warning: multi-character character constant lex.c:99:13: warning: character constant too long for its type lex.c:100:13: warning: multi-character character constant lex.c:101:13: warning: character constant too long for its type lex.c:102:13: warning: multi-character character constant lex.c:103:13: warning: character constant too long for its type lex.c:104:13: warning: character constant too long for its type lex.c:105:13: warning: character constant too long for its type lex.c:106:13: warning: multi-character character constant lex.c:107:13: warning: character constant too long for its type lex.c:108:13: warning: character constant too long for its type lex.c:109:13: warning: character constant too long for its type lex.c:110:12: warning: multi-character character constant lex.c:111:13: warning: character constant too long for its type lex.c:112:13: warning: multi-character character constant lex.c:113:13: warning: multi-character character constant lex.c: In function ‘int main()’: lex.c:90: error: case label does not reduce to an integer constant lex.c:91: error: case label does not reduce to an integer constant lex.c:92: error: case label does not reduce to an integer constant lex.c:93: error: case label does not reduce to an integer constant lex.c:94: warning: overflow in implicit constant conversion lex.c:95: warning: overflow in implicit constant conversion lex.c:95: error: duplicate case value lex.c:94: error: previously used here lex.c:96: warning: overflow in implicit constant conversion lex.c:97: warning: overflow in implicit constant conversion lex.c:98: warning: overflow in implicit constant conversion lex.c:99: warning: overflow in implicit 
constant conversion lex.c:99: error: duplicate case value lex.c:97: error: previously used here lex.c:100: warning: overflow in implicit constant conversion lex.c:101: warning: overflow in implicit constant conversion lex.c:102: warning: overflow in implicit constant conversion lex.c:102: error: duplicate case value lex.c:98: error: previously used here lex.c:103: warning: overflow in implicit constant conversion lex.c:103: error: duplicate case value lex.c:97: error: previously used here lex.c:104: warning: overflow in implicit constant conversion lex.c:104: error: duplicate case value lex.c:101: error: previously used here lex.c:105: warning: overflow in implicit constant conversion lex.c:106: warning: overflow in implicit constant conversion lex.c:106: error: duplicate case value lex.c:98: error: previously used here lex.c:107: warning: overflow in implicit constant conversion lex.c:107: error: duplicate case value lex.c:94: error: previously used here lex.c:108: warning: overflow in implicit constant conversion lex.c:108: error: duplicate case value lex.c:98: error: previously used here lex.c:109: warning: overflow in implicit constant conversion lex.c:109: error: duplicate case value lex.c:97: error: previously used here lex.c:110: warning: overflow in implicit constant conversion lex.c:111: warning: overflow in implicit constant conversion lex.c:111: error: duplicate case value lex.c:101: error: previously used here lex.c:112: warning: overflow in implicit constant conversion lex.c:112: error: duplicate case value lex.c:110: error: previously used here lex.c:113: warning: overflow in implicit constant conversion lex.c:113: error: duplicate case value lex.c:101: error: previously used here
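
    The root of those diagnostics: switch in C only works on integral constant expressions. String literals such as "if" are not valid case labels, and single-quoted multi-character constants such as 'printf' are integers with implementation-defined values, which is exactly what the "character constant too long", "case label does not reduce to an integer constant" and "duplicate case value" messages are complaining about. The usual replacement is strcmp against a keyword table. A sketch (classify is a made-up helper; the buffer and output file correspond to b and fp2 in the original, and the keyword list is trimmed):

        #include <stdio.h>
        #include <string.h>

        /* Print one classification line for the word in buf to fp. */
        static void classify(FILE *fp, const char *buf)
        {
            static const char *keywords[] = {
                "if", "then", "else", "switch", "printf", "scanf", "int",
                "char", "float", "long", "double", "const", "continue",
                "break", "for", "sizeof", "register", "short", "auto",
                "while", "do", "case", NULL
            };
            for (int i = 0; keywords[i] != NULL; i++) {
                if (strcmp(buf, keywords[i]) == 0) {
                    fprintf(fp, "%s ----> KEYWORD\n", buf);
                    return;
                }
            }
            fprintf(fp, "%s ----> IDENTIFIER\n", buf);
        }

    Call classify(fp2, b) where the switch used to be, and adjust the printed labels to taste.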

  • Yacc and Lex inclusion confusion

    - by thejohnmeister
    I am wondering how to correctly compile a program with a Makefile when the program calls yyparse. This is what I do: I have a Makefile that compiles all my regular files, and they have no connection to y.tab.c or lex.yy.c (am I supposed to have one?). At the top of my code I do:

        #include "y.tab.c"
        #include "lex.yy.c"
        #include "y.tab.h"

    When I just type in "make", it gives me many warnings. Some examples are shown below:

        In function `yywrap':
        /src/parser.y:12: multiple definition of `yywrap'
        server.o:/src/parser.y:12: first defined here
        utils.o: In function `yyparse':
        /src/y.tab.c:1175: multiple definition of `yyparse'
        server.o:/src/y.tab.c:1175: first defined here
        utils.o: ...

    I get many different errors referring to different yy_* symbols. I have compiled successfully with multiple calls to yyparse in the past, but this time seems to be different. It seems an awful lot like an inclusion problem somewhere, but I can't tell what it is. All my header files have ifndef guards as well. Thanks for your help!
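
    The multiple-definition errors are consistent with y.tab.c and lex.yy.c being #included into more than one translation unit, so yyparse, yywrap and friends end up defined in several object files. The usual layout, sketched here with hypothetical file names, is to compile the generated files as their own objects and include only y.tab.h everywhere else (recipe lines in a Makefile must start with a tab):

        # Makefile fragment -- parser.y, scanner.l, main.c, server.c, utils.c are placeholders
        y.tab.c y.tab.h: parser.y
                yacc -d parser.y
        lex.yy.c: scanner.l y.tab.h
                lex scanner.l
        prog: main.o server.o utils.o y.tab.o lex.yy.o
                cc -o prog main.o server.o utils.o y.tab.o lex.yy.o -ll

    and in any C source that needs the parser:

        #include "y.tab.h"      /* token codes and yylval only                   */
        int yyparse(void);      /* declaration; the definition stays in y.tab.o */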

  • Flex (the lexer generator, not ActionScript) error

    - by incrediman
    I'm totally new to flex. I'm getting a build error when using flex. That is, I've generated a .c file using flex, and, when running it, am getting this error: 1>lextest.obj : error LNK2001: unresolved external symbol "int __cdecl isatty(int)" (?isatty@@YAHH@Z) 1>C:\...\lextest.exe : fatal error LNK1120: 1 unresolved externals here is the lex file I'm using (grabbed from here): /*** Definition section ***/ %{ /* C code to be copied verbatim */ #include <stdio.h> %} /* This tells flex to read only one input file */ %option noyywrap %% /*** Rules section ***/ /* [0-9]+ matches a string of one or more digits */ [0-9]+ { /* yytext is a string containing the matched text. */ printf("Saw an integer: %s\n", yytext); } . { /* Ignore all other characters. */ } %% /*** C Code section ***/ int main(void) { /* Call the lexer, then quit. */ yylex(); return 0; } As well, why do I have to put a 'main' function in the lex syntax code? What I'd like is to be able to call yylex(); from another c file.
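
    The C++-mangled name in the error suggests lex.yy.c is being compiled as C++ under MSVC, where isatty lives in <io.h> as _isatty, so the call flex emits for interactive-input detection never resolves. Two common workarounds: add #include <io.h> and #define isatty _isatty inside the %{ %} block, or tell flex not to probe the terminal at all with %option never-interactive. And main() does not have to live in the .l file; any other source file can declare and call yylex(). A sketch combining both points:

        %option noyywrap
        %option never-interactive
        %{
        #include <stdio.h>
        %}
        %%
        [0-9]+    { printf("Saw an integer: %s\n", yytext); }
        .|\n      { /* ignore everything else */ }
        %%

        /* in a separate source file, compiled and linked alongside lex.yy.c: */
        int yylex(void);
        int main(void) { return yylex(); }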

  • Parsing "true" and "false" using Boost.Spirit.Lex and Boost.Spirit.Qi

    - by Andrew Ross
    As the first stage of a larger grammar using Boost.Spirit I'm trying to parse "true" and "false" to produce the corresponding bool values, true and false. I'm using Spirit.Lex to tokenize the input and have a working implementation for integer and floating point literals (including those expressed in a relaxed scientific notation), exposing int and float attributes. Token definitions #include <boost/spirit/include/lex_lexertl.hpp> namespace lex = boost::spirit::lex; typedef boost::mpl::vector<int, float, bool> token_value_type; template <typename Lexer> struct basic_literal_tokens : lex::lexer<Lexer> { basic_literal_tokens() { this->self.add_pattern("INT", "[-+]?[0-9]+"); int_literal = "{INT}"; // To be lexed as a float a numeric literal must have a decimal point // or include an exponent, otherwise it will be considered an integer. float_literal = "{INT}(((\\.[0-9]+)([eE]{INT})?)|([eE]{INT}))"; literal_true = "true"; literal_false = "false"; this->self = literal_true | literal_false | float_literal | int_literal; } lex::token_def<int> int_literal; lex::token_def<float> float_literal; lex::token_def<bool> literal_true, literal_false; }; Testing parsing of float literals My real implementation uses Boost.Test, but this is a self-contained example. #include <string> #include <iostream> #include <cmath> #include <cstdlib> #include <limits> bool parse_and_check_float(std::string const & input, float expected) { typedef std::string::const_iterator base_iterator_type; typedef lex::lexertl::token<base_iterator_type, token_value_type > token_type; typedef lex::lexertl::lexer<token_type> lexer_type; basic_literal_tokens<lexer_type> basic_literal_lexer; base_iterator_type input_iter(input.begin()); float actual; bool result = lex::tokenize_and_parse(input_iter, input.end(), basic_literal_lexer, basic_literal_lexer.float_literal, actual); return result && std::abs(expected - actual) < std::numeric_limits<float>::epsilon(); } int main(int argc, char *argv[]) { if (parse_and_check_float("+31.4e-1", 3.14)) { return EXIT_SUCCESS; } else { return EXIT_FAILURE; } } Parsing "true" and "false" My problem is when trying to parse "true" and "false". This is the test code I'm using (after removing the Boost.Test parts): bool parse_and_check_bool(std::string const & input, bool expected) { typedef std::string::const_iterator base_iterator_type; typedef lex::lexertl::token<base_iterator_type, token_value_type > token_type; typedef lex::lexertl::lexer<token_type> lexer_type; basic_literal_tokens<lexer_type> basic_literal_lexer; base_iterator_type input_iter(input.begin()); bool actual; lex::token_def<bool> parser = expected ? 
basic_literal_lexer.literal_true : basic_literal_lexer.literal_false; bool result = lex::tokenize_and_parse(input_iter, input.end(), basic_literal_lexer, parser, actual); return result && actual == expected; } but compilation fails with: boost/spirit/home/qi/detail/assign_to.hpp: In function ‘void boost::spirit::traits::assign_to(const Iterator&, const Iterator&, Attribute&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Attribute = bool]’: boost/spirit/home/lex/lexer/lexertl/token.hpp:434: instantiated from ‘static void boost::spirit::traits::assign_to_attribute_from_value<Attribute, boost::spirit::lex::lexertl::token<Iterator, AttributeTypes, HasState>, void>::call(const boost::spirit::lex::lexertl::token<Iterator, AttributeTypes, HasState>&, Attribute&) [with Attribute = bool, Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, AttributeTypes = boost::mpl::vector<int, float, bool, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, HasState = mpl_::bool_<true>]’ ... backtrace of instantiation points .... boost/spirit/home/qi/detail/assign_to.hpp:79: error: no matching function for call to ‘boost::spirit::traits::assign_to_attribute_from_iterators<bool, __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, void>::call(const __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >&, const __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >&, bool&)’ boost/spirit/home/qi/detail/construct.hpp:64: note: candidates are: static void boost::spirit::traits::assign_to_attribute_from_iterators<bool, Iterator, void>::call(const Iterator&, const Iterator&, char&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >] My interpretation of this is that Spirit.Qi doesn't know how to convert a string to a bool - surely that's not the case? Has anyone else done this before? If so, how?

  • getting started with lex

    - by cambr
    I need to format some hexdump like this: 00010: 02 03 04 05 00020: 02 03 04 08 00030: 02 03 04 08 00010: 02 03 04 05 00020: 02 03 04 05 02 03 04 05 02 03 04 08 to 02 03 04 05 02 03 04 08 02 03 04 02 03 04 05 02 03 04 05 02 03 04 05 02 03 04 a) remove the address fields, if present b) remove any 08 at the end of a paragraph (followed by an empty line) c) remove any empty lines How can this be done using lex? thanks!
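
    The exact layout is hard to reconstruct from the excerpt, but assuming each address field starts a line and paragraphs are separated by a single empty line, a flex specification along these lines is a plausible starting point (the trailing-context rule for the final 08 is the fragile part and may need tuning):

        %option noyywrap
        %%
        ^[0-9A-Fa-f]+:" "*   { /* a) strip leading address fields */ }
        " 08"/\n\n           { /* b) drop a trailing 08 at the end of a paragraph */ }
        \n\n+                { putchar('\n'); /* c) squeeze out the empty lines */ }
        .|\n                 { ECHO; }
        %%
        int main(void) { return yylex(); }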

  • Sharing memory among YACC, Lex, and C files

    - by sczizzo
    I have a YACC (Bison) grammar, a Lex (Flex) tokenizer, and a C program among which I need to share a struct (or really any variable). Currently, I declare the actual object in the grammar file and extern it wherever I need it (which is to say, my C source file), usually using a pointer to manipulate it. I have a shared header (and implementation) file between the C file and the grammar file with functions useful for manipulating my data structure. This works, but it feels a little uncomfortable. Is there a better way to share memory between the grammar and program?
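
    One tidier arrangement is to let the shared header own the type and an extern declaration, give the object exactly one definition in an ordinary C file, and have the grammar, the tokenizer and the rest of the program all include that header in their %{ %} blocks. A sketch with hypothetical names:

        /* shared.h */
        #ifndef SHARED_H
        #define SHARED_H
        typedef struct {
            int line;
            int error_count;
        } parse_state;
        extern parse_state g_state;          /* the single definition lives in shared.c */
        void parse_state_reset(parse_state *s);
        #endif

        /* shared.c */
        #include "shared.h"
        parse_state g_state;
        void parse_state_reset(parse_state *s) { s->line = 1; s->error_count = 0; }

    If you are on bison, its %parse-param and %lex-param declarations are another option: they thread a pointer to the structure through yyparse and yylex, which avoids the global entirely.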

  • unrecognized rule in lex

    - by Max
    I'm writing a program in lex, and it gives me the following error: scanner.l:49: unrecognized rule Line 49 is: {number} {return(NUM);} Here's my code: #include <stdio.h> %token BOOL, ELSE, IF, TRUE, WHILE, DO, FALSE, INT, VOID %token LPAREN, RPAREN, LBRACK, RBRACK, LBRACE, RBRACE, SEMI, COMMA, PLUS, MINUS, TIMES %token DIV, MOD, AND, OR, NOT, IS, ADDR, EQ, NE, LT, GT, LE, GE %token NUM, ID, PUNCT, OP int line = 1, numAttr; char *strAttr; %} /* regular definitions */ delim [ \t] ws {delim}+ letter [A-Za-z] digit [0-9] id ({letter} | _)({letter} | {digit} | _)* number {digit}+ %% {ws} {/* no action and no return */} [\n] {line++;} bool {return(BOOL);} else {return(ELSE);} if {return(IF);} true {return(TRUE);} while {return(WHILE);} do {return(DO);} false {return(FALSE);} int {return(INT);} void {return(VOID);} {id} {return(ID);} {number} {return(NUM);} // error is here "(" {yylval = LPAREN; return(PUNCT);} ")" {yylval = RPAREN; return(PUNCT);} "[" {yylval = LBRACK; return(PUNCT);} "]" {yylval = RBRACK; return(PUNCT);} "{" {yylval = LBRACE; return(PUNCT);} "}" {yylval = RBRACE; return(PUNCT);} ";" {yylval = SEMI; return(PUNCT);} "," {yylval = COMMA; return(PUNCT);} "+" {yylval = PLUS; return(OP);} "-" {yylval = MINUS; return(OP);} "*" {yylval = TIMES; return(OP);} "/" {yylval = DIV; return(OP);} "%" {yylval = MOD; return(OP);} "&" {yylval = ADDR; return(OP);} "&&" {yylval = AND; return(OP);} "||" {yylval = OR; return(OP);} "!" {yylval = NOT; return(OP);} "!=" {yylval = NE; return(OP);} "=" {yylval = IS; return(OP);} "==" {yylval = EQ; return(OP);} "<" {yylval = LT; return(OP);} "<=" {yylval = LE; return(OP);} ">" {yylval = GT; return(OP);} ">=" {yylval = GE; return(OP);} %% What is wrong with that rule? Thanks.
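
    Two likely culprits, offered as guesses from the listing rather than a verified diagnosis: the C declarations at the top need to sit inside %{ ... %} (an opening %{ appears to be missing, and the %token lines belong in the yacc grammar, coming back to the lexer via y.tab.h), and lex name definitions must not contain unquoted blanks: the spaces inside the id definition truncate it, which poisons the {id} rule and makes the scanner report the error at the {number} line that follows. A sketch of a repaired definitions section:

        %{
        #include <stdio.h>
        #include "y.tab.h"   /* assuming the %token declarations move to the grammar */
        int line = 1;
        %}
        delim    [ \t]
        ws       {delim}+
        letter   [A-Za-z]
        digit    [0-9]
        id       ({letter}|_)({letter}|{digit}|_)*
        number   {digit}+
        %%
        {ws}       { /* skip */ }
        \n         { line++; }
        {id}       { return ID; }
        {number}   { return NUM; }
        %%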

  • TableTop: Wil Wheaton, Morgan Webb, and Friends Review Pandemic [Video]

    - by Jason Fitzpatrick
    In the newest edition of TableTop, the board gaming video blog, Wil Wheaton and his friends take a look at Pandemic, a challenging cooperative board game that pits players against a viral outbreak. Check out the above video for an overview of the game (although be forewarned they’re playing it on the highest difficulty setting) and then, for more information about it, hit up the Pandemic entry at BoardGameGeek. [via GeekDad]

  • Strange output in Ubuntu terminal when running a lex program

    - by Max
    Hi. I'm running a lexical analyzer using lex, and I've got it mostly correct, but my terminal gives strange output once I take out an ECHO statement I was using to help debug the code. With that statement, my output looks like this: max@Max-Ubuntu:~/Desktop/Compiler Project/project2$ ./a.out <../cmmFiles/expression.cmm VOIDID(){ INTID,ID,ID; BOOLID,ID,ID; ID(ID); ID(ID); ID(ID); ID(ID); ID=-ID-NUM+ID/NUM*(-NUM+ID*IDNUM); ID(ID); ID=ID>ID||ID>=ID; IF(ID)ID(NUM);ELSEID(NUM); ID=ID<ID&&ID<=ID; IF(ID==TRUE)ID(NUM);ELSEID(NUM); ID=ID&&!ID||!ID&&ID; IF(ID!=FALSE)ID(NUM);ELSEID(NUM); } While hard to read, that output is correct. Once I take out the ECHO statement, I instead get this: max@Max-Ubuntu:~/Desktop/Compiler Project/project2$ ./a.out <../cmmFiles/expression.cmm }F(ID!=FALSE)ID(NUM);ELSEID(NUM);; It looks like it's only outputting the final line, except with an extraneous } near the beginning, what looks like half an IF token immediately after, and an extraneous ; at the end. Is this some quirk of my terminal, or does removing that ECHO cause my lexer to screw up that badly? I'm hesitant to keep working until I know for sure what's going on here. Thanks for any answers. Here's my lexer: %{ /* definitions of manifest constants -reserved words- BOOL, ELSE, IF, TRUE, WHILE, DO, FALSE, INT, VOID -Punctuation and operators- LPAREN, RPAREN, LBRACK, RBRACK, LBRACE, RBRACE, SEMI, COMMA, PLUS, MINUS, TIMES, DIV, MOD, AND, OR, NOT, IS, ADDR, EQ, NE, LT, GT, LE, GE -Other tokens- NUMBER, ID, PUNCT, OP */ #include <stdio.h> #include <stdlib.h> //#include "y.tab.h" //int line = 1, numAttr; //char *strAttr; %} /* regular definitions */ delim [ \t] ws {delim}+ start "/*" one [^*] two "*" three [^*/] end "/" comment {start}({one}*{two}+{three})*{one}*{two}+{end} letter [A-Za-z] digit [0-9] id ({letter}|_)({letter}|{digit}|_)* number {digit}+ %% {ws} { /*no action and no return */} {comment} { /*no action and no return */} [\n] {ECHO; /*no action */} // <-- this is the ECHO in question. bool { printf("BOOL");} else { printf("ELSE");} if { printf("IF");} true { printf("TRUE");} while { printf("WHILE");} do { printf("DO");} false { printf("FALSE");} int { printf("INT");} void { printf("VOID");} {id} { printf("ID");} {number} { printf("NUM");} "(" { printf("(");} ")" { printf(")");} "[" { printf("[");} "]" { printf("]");} "{" { printf("{");} "}" { printf("}");} ";" { printf(";");} "," { printf(",");} "+" { printf("+");} "-" { printf("-");} "*" { printf("*");} "/" { printf("/");} "%" { printf("%");} "&" { printf("&");} "&&" { printf("&&");} "||" { printf("||");} "!" { printf("!");} "!=" { printf("!=");} "=" { printf("=");} "==" { printf("==");} "<" { printf("<");} "<=" { printf("<=");} ">" { printf(">");} ">=" { printf(">=");} %% int main() { yylex(); printf("\n"); } int yywrap(void) { return 1; } here's the file it's analyzing: /* this program * illustrates evaluation of * arithmetic and boolean * expressions */ void main( ) { int m,n,p; bool a,b,c; scan(m); print(m); scan(n); print(n); p=-m-3+n/2*(-5+m*n%4); print(p); a=m>n || n>=p; if (a) print(1); else print(0); b=m<n && n<=p; if (b==true) print(1); else print(0); c=a && !b || !a && b; if (c!=false) print(1); else print(0); }
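
    One plausible explanation, and it is only a guess because it depends on the input file: expression.cmm has Windows CR/LF line endings. Nothing in the rule set matches '\r', so the default rule echoes it; once the [\n] rule stops echoing anything at all, the whole output lands on one terminal line and every echoed carriage return sends the cursor back to column 0, so later text overwrites earlier text, which would produce exactly the half-overwritten last line shown. Keeping newlines in the output and discarding carriage returns removes the dependence on that stray ECHO:

        \r        { /* discard carriage returns from CRLF input */ }
        [\n]      { printf("\n"); }

    Running the input through cat -A (or dos2unix) is a quick way to confirm whether CR characters are actually there.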

  • Big problem with regular expression in Lex (lexical analyzer)

    - by Nazgulled
    Hi, I have some content like this: author = "Marjan Mernik and Viljem Zumer", title = "Implementation of multiple attribute grammar inheritance in the tool LISA", year = 1999 author = "Manfred Broy and Martin Wirsing", title = "Generalized Heterogeneous Algebras and Partial Interpretations", year = 1983 author = "Ikuo Nakata and Masataka Sassa", title = "L-Attributed LL(1)-Grammars are LR-Attributed", journal = "Information Processing Letters" And I need to catch everything between double quotes for title. My first try was this: ^(" "|\t)+"title"" "*=" "*"\"".+"\"," Which catches the first example, but not the other two. The other have multiple lines and that's the problem. I though about changing to something with \n somewhere to allow multiple lines, like this: ^(" "|\t)+"title"" "*=" "*"\""(.|\n)+"\"," But this doesn't help, instead, it catches everything. Than I though, "what I want is between double quotes, what if I catch everything until I find another " followed by ,? This way I could know if I was at the end of the title or not, no matter the number of lines, like this: ^(" "|\t)+"title"" "*=" "*"\""[^"\""]+"," But this has another problem... The example above doesn't have it, but the double quote symbol (") can be in between the title declaration. For instance: title = "aaaaaaa \"X bbbbbb", And yes, it will always be preceded by a backslash (\). Any suggestions to fix this regexp?
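
    One way to express "everything between the quotes, allowing escaped quotes inside" is to define the string body as zero or more of either a character that is neither a quote nor a backslash, or a backslash followed by any character. In flex a negated character class also matches newline, so multi-line values come along for free. A sketch of the title rule under that approach (drop the trailing comma if the field can also end the record):

        ^[ \t]+title[ \t]*=[ \t]*\"([^"\\]|\\.)*\",

    Here [^"\\] eats ordinary characters (including newlines) and \\. swallows any escaped character, so \" inside the value never terminates the match.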

  • How to get entire input string in Lex and Yacc?

    - by DevDevDev
    OK, so here is the deal. In my language I have some commands, say XYZ 3 5 GGB 8 9 HDH 8783 33 And in my Lex file XYZ { return XYZ; } GGB { return GGB; } HDH { return HDH; } [0-9]+ { yylval.ival = atoi(yytext); return NUMBER; } \n { return EOL; } In my yacc file start : commands ; commands : command | command EOL commands ; command : xyz | ggb | hdh ; xyz : XYZ NUMBER NUMBER { /* Do something with the numbers */ } ; etc. etc. etc. etc. My question is, how can I get the entire text XYZ 3 5 GGB 8 9 HDH 8783 33 Into commands while still returning the NUMBERs? Also when my Lex returns a STRING [0-9a-zA-Z]+, and I want to do verification on it's length, should I do it like rule: STRING STRING { if (strlen($1) < 5 ) /* Do some shit else error */ } or actually have a token in my Lex that returns different tokens depending on length?
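
    For the validation question: yytext is reused as soon as the next token is scanned, so it is usually easiest to hand the parser a copy and do the length check (and any reconstruction of the full command text) in the grammar action. A sketch, assuming a %union with a char *sval member, %token <sval> STRING, and <string.h>/<stdlib.h> included in the prologue:

        /* lex */
        [0-9a-zA-Z]+    { yylval.sval = strdup(yytext); return STRING; }

        /* yacc */
        rule : STRING STRING
                 {
                     if (strlen($1) >= 5)
                         yyerror("string too long");
                     /* ... use $1 and $2 ... */
                     free($1);
                     free($2);
                 }
             ;

    Having the lexer return different token kinds depending on length also works, but it scatters what is really a grammar-level rule across two files, so the parser-side check tends to stay clearer.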

  • Boost Spirit and Lex parser problem

    - by bpw1621
    I've been struggling to try and (incrementally) modify example code from the documentation but with not much different I am not getting the behavior I expect. Specifically, the "if" statement fails when (my intent is that) it should be passing (there was an "else" but that part of the parser was removed during debugging). The assignment statement works fine. I had a "while" statement as well which had the same problem as the "if" statement so I am sure if I can get help to figure out why one is not working it should be easy to get the other going. It must be kind of subtle because this is almost verbatim what is in one of the examples. #include <iostream> #include <fstream> #include <string> #define BOOST_SPIRIT_DEBUG #include <boost/config/warning_disable.hpp> #include <boost/spirit/include/qi.hpp> #include <boost/spirit/include/lex_lexertl.hpp> #include <boost/spirit/include/phoenix_operator.hpp> #include <boost/spirit/include/phoenix_statement.hpp> #include <boost/spirit/include/phoenix_container.hpp> namespace qi = boost::spirit::qi; namespace lex = boost::spirit::lex; inline std::string read_from_file( const char* infile ) { std::ifstream instream( infile ); if( !instream.is_open() ) { std::cerr << "Could not open file: \"" << infile << "\"" << std::endl; exit( -1 ); } instream.unsetf( std::ios::skipws ); return( std::string( std::istreambuf_iterator< char >( instream.rdbuf() ), std::istreambuf_iterator< char >() ) ); } template< typename Lexer > struct LangLexer : lex::lexer< Lexer > { LangLexer() { identifier = "[a-zA-Z][a-zA-Z0-9_]*"; number = "[-+]?(\\d*\\.)?\\d+([eE][-+]?\\d+)?"; if_ = "if"; else_ = "else"; this->self = lex::token_def<> ( '(' ) | ')' | '{' | '}' | '=' | ';'; this->self += identifier | number | if_ | else_; this->self( "WS" ) = lex::token_def<>( "[ \\t\\n]+" ); } lex::token_def<> if_, else_; lex::token_def< std::string > identifier; lex::token_def< double > number; }; template< typename Iterator, typename Lexer > struct LangGrammar : qi::grammar< Iterator, qi::in_state_skipper< Lexer > > { template< typename TokenDef > LangGrammar( const TokenDef& tok ) : LangGrammar::base_type( program ) { using boost::phoenix::val; using boost::phoenix::ref; using boost::phoenix::size; program = +block; block = '{' >> *statement >> '}'; statement = assignment | if_stmt; assignment = ( tok.identifier >> '=' >> expression >> ';' ); if_stmt = ( tok.if_ >> '(' >> expression >> ')' >> block ); expression = ( tok.identifier[ qi::_val = qi::_1 ] | tok.number[ qi::_val = qi::_1 ] ); BOOST_SPIRIT_DEBUG_NODE( program ); BOOST_SPIRIT_DEBUG_NODE( block ); BOOST_SPIRIT_DEBUG_NODE( statement ); BOOST_SPIRIT_DEBUG_NODE( assignment ); BOOST_SPIRIT_DEBUG_NODE( if_stmt ); BOOST_SPIRIT_DEBUG_NODE( expression ); } qi::rule< Iterator, qi::in_state_skipper< Lexer > > program, block, statement; qi::rule< Iterator, qi::in_state_skipper< Lexer > > assignment, if_stmt; typedef boost::variant< double, std::string > expression_type; qi::rule< Iterator, expression_type(), qi::in_state_skipper< Lexer > > expression; }; int main( int argc, char** argv ) { typedef std::string::iterator base_iterator_type; typedef lex::lexertl::token< base_iterator_type, boost::mpl::vector< double, std::string > > token_type; typedef lex::lexertl::lexer< token_type > lexer_type; typedef LangLexer< lexer_type > LangLexer; typedef LangLexer::iterator_type iterator_type; typedef LangGrammar< iterator_type, LangLexer::lexer_def > LangGrammar; LangLexer lexer; LangGrammar grammar( lexer ); std::string str( read_from_file( 1 == 
argc ? "boostLexTest.dat" : argv[1] ) ); base_iterator_type strBegin = str.begin(); iterator_type tokenItor = lexer.begin( strBegin, str.end() ); iterator_type tokenItorEnd = lexer.end(); std::cout << std::setfill( '*' ) << std::setw(20) << '*' << std::endl << str << std::endl << std::setfill( '*' ) << std::setw(20) << '*' << std::endl; bool result = qi::phrase_parse( tokenItor, tokenItorEnd, grammar, qi::in_state( "WS" )[ lexer.self ] ); if( result ) { std::cout << "Parsing successful" << std::endl; } else { std::cout << "Parsing error" << std::endl; } return( 0 ); } Here is the output of running this (the file read into the string is dumped out first in main) ******************** { a = 5; if( a ){ b = 2; } } ******************** <program> <try>{</try> <block> <try>{</try> <statement> <try></try> <assignment> <try></try> <expression> <try></try> <success>;</success> <attributes>(5)</attributes> </expression> <success></success> <attributes>()</attributes> </assignment> <success></success> <attributes>()</attributes> </statement> <statement> <try></try> <assignment> <try></try> <fail/> </assignment> <if_stmt> <try> if(</try> <fail/> </if_stmt> <fail/> </statement> <fail/> </block> <fail/> </program> Parsing error
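
    One detail stands out, offered as a guess rather than a verified fix: LangLexer adds identifier to self before if_ and else_, and when two token definitions match the same length of input the one added first wins, so the input "if" comes back as an identifier token and the if_stmt rule never sees tok.if_, failing exactly where the trace fails. Listing the keywords first may be all that is needed:

        this->self += if_ | else_ | identifier | number;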

  • How do I generate different yyparse functions from lex/yacc for use in the same program?

    - by th
    Hi, I want to generate two separate parsing functions from lex/yacc. Normally yacc gives you a function yyparse() that you can call when you need to do some parsing, but I need several different yyparse functions, each associated with a different lexer and grammar. The man page seems to suggest the -p (prefix) flag, but that didn't work for me: I got errors from gcc indicating that yylval was not being properly renamed (i.e. it claims that several different tokens are not defined). Does anyone know the general procedure for generating these separate functions? Thanks
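
    The prefix has to be applied to both generated files; the classic gotcha is renaming the parser but not the lexer (or vice versa), which leaves fooparse looking for a foolex and foolval that were never generated, and that mismatch produces exactly the yylval/undefined-symbol style of error. A sketch of the usual invocations (placeholder names; these are the bison and flex spellings, plain yacc/lex may differ):

        bison -d -p foo foo_parser.y              # yyparse -> fooparse, yylval -> foolval, ...
        flex  -P foo -o foo_lexer.c foo_lexer.l   # yylex  -> foolex,   yyin   -> fooin,   ...

        bison -d -p bar bar_parser.y
        flex  -P bar -o bar_lexer.c bar_lexer.l

    The caller then declares int fooparse(void); and int barparse(void); and links all four generated objects into the same program.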

  • Force CL-Lex to read whole word

    - by Flávio Cruz
    I'm using CL-Lex to implement a lexer (as input for CL-YACC) and my language has several keywords such as "let" and "in". However, while the lexer recognizes such keywords, it does too much. When it finds words such as "init", it returns the first token as IN, while it should return a "CONST" token for the "init" word. This is a simple version of the lexer: (define-string-lexer lexer (...) ("in" (return (values :in $@))) ("[a-z]([a-z]|[A-Z]|\_)" (return (values :const $@)))) How do I force the lexer to fully read the whole word until some whitespace appears?

  • I need to develop a parser. Can I use Lex and Yacc for the purpose?

    - by Scrooge
    I need to extract very particular data from log files (of different types and formats). Since I am a recent college graduate, my mind ran to using Lex and Yacc for the purpose. Now I have the following questions:
    1. Will it be legal to do so? (The product I am working on belongs to one of the biggest tech companies in the world.)
    2. Also, am I being too afraid to write my own parser?
    3. How can I use Lex and Yacc if my product is Windows-based?
    Please tell me if you need any clarification or extra information.

  • Indentation control while developing a small Python-like language

    - by sap
    Hello, I'm developing a small Python-like language using flex and byacc (for lexing and parsing) and C++, but I have a few questions regarding scope control. Just as in Python, it uses whitespace (or tabs) for indentation, and on top of that I want to implement indexed breaking: for instance, typing "break 2" inside a while loop that is itself inside another while loop would break not only out of the inner loop but out of the outer one as well (hence the number 2 after break), and so on. Example:

        while 1
            while 1
                break 2
            end
        end
        # after break 2 it would jump right here

    But since I don't have an "anti-tab" character to check when a scope ends (in C, for example, I would just use the '}' character), I was wondering whether this method would be best: I would define a global variable, say "int tabIndex", in my yacc file and access it from my lex file using extern. Every time I find a tab character in my lex file I would increment that variable by 1. While parsing in my yacc file, if I find a "break" keyword I would decrement tabIndex by the amount typed after it, and if I reach EOF with tabIndex != 0 I would report a compilation error. Now the problem is: what is the best way to see that the indentation got reduced? Should I read \b (backspace) characters in lex and decrement tabIndex from those (for when the user doesn't use break)? Or is there another method to achieve this? One more small question: I want every executable to have its starting point in a function called start(); should I hardcode this into my yacc file? Sorry for the long question; any help is greatly appreciated. Also, if someone could point me to a yacc file for Python as a guideline, that would be nice (I tried looking on Google and had no luck). Thanks in advance.
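
    There is no "anti-tab" character for the lexer to see (backspaces never appear in source text), so the usual approach for Python-style layout, sketched here as a starting point rather than a drop-in solution, is to measure the leading whitespace of each line in the lexer, keep a stack of indentation widths, and emit synthetic INDENT/DEDENT tokens for the grammar to consume; "break 2" then becomes an ordinary statement the parser checks against the number of open scopes, instead of arithmetic on a shared tabIndex. Token names and the stack size below are made up, and blank lines, comments and the first line of the file all need extra care:

        %{
        enum { NEWLINE = 258, INDENT, DEDENT };  /* normally these come from y.tab.h */
        static int levels[64] = {0};             /* stack of indentation widths */
        static int top = 0;
        %}
        %%
        \n[ \t]*    {
                        int width = (int)yyleng - 1;       /* whitespace after the newline */
                        if (width > levels[top]) {
                            levels[++top] = width;         /* deeper: open a scope */
                            return INDENT;
                        }
                        if (width < levels[top]) {
                            --top;                         /* shallower: close a scope; if several
                                                              levels close at once, the extra DEDENTs
                                                              must be queued for later yylex() calls */
                            return DEDENT;
                        }
                        return NEWLINE;
                    }
        %%

    As for start(): requiring the program's entry point to be a function named start() is a rule of your language, so checking for it in the grammar actions (or in a semantic pass after parsing) is perfectly reasonable; it does not need special support from yacc itself.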
