Search Results

Search found 126 results on 6 pages for 'lexer'.

Page 3/6 | < Previous Page | 1 2 3 4 5 6  | Next Page >

  • Is there a standard lexer/parser tool for Python?

    - by Salim Fadhley
    A volunteer job requires us to convert a large number of LaTeX documents into ePub format. It's a series of open-source fiction books which has so far been produced only on paper via a print-on-demand service. We'd like to be able to offer the book to users of book-reader devices (such as the Kindle) which require the ePub format for best results. Fortunately, ePub is a very simple format; however, there's no trivial way for LaTeX to produce the XHTML output required. We experimented with alternative LaTeX compilers (e.g. plastex) but in the end we figured that it would probably be a lot easier to simply write our own compiler which understands a tiny subset of the LaTeX language and compiles directly to XHTML / ePub. Previously I used a tool on Windows called GOLD. This allowed me to go directly from BNF grammars to a stub parser. It also allowed me to implement the parser in any language I liked (I'd choose Python). This product has to work on Linux, so I'm wondering if there's an equivalent toolchain that works as well under Ubuntu / Eclipse / Python. The idea is that we will take the grammar of TeX and just implement a teeny subset of that, but we do not want to spend a huge amount of time worrying about grammar and parsing. A parser generator would obviously save us a great deal of time. Sal. UPDATE 1: Bonus marks for a solution with excellent documentation or tutorials.
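
    For illustration, a minimal sketch using PLY, one commonly recommended pure-Python lexer/parser generator with good documentation; the token set and sample input below are assumptions for a tiny LaTeX-like subset, not part of the original question:

        # Minimal PLY lexer sketch for a hypothetical tiny LaTeX subset.
        # PLY: http://www.dabeaz.com/ply/ (pip install ply)
        import ply.lex as lex

        tokens = ('COMMAND', 'LBRACE', 'RBRACE', 'TEXT')

        t_COMMAND = r'\\[a-zA-Z]+'      # e.g. \section, \emph
        t_LBRACE  = r'\{'
        t_RBRACE  = r'\}'
        t_TEXT    = r'[^\\{}]+'         # any run of non-markup characters

        def t_error(t):
            t.lexer.skip(1)             # skip characters the lexer cannot classify

        lexer = lex.lex()
        lexer.input(r'\section{Introduction} Some body text.')
        for tok in lexer:
            print(tok.type, tok.value)

    A grammar over these tokens would be added the same way with ply.yacc, so the whole toolchain stays in Python on Linux.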

    Read the article

  • How come BraceMatch() doesn't work when a lexer is set in wxStyledTextCtrl?

    - by George Edison
    I have the following chunk of C++ code that gets called when } is pressed: int iCurPos = GetCurrentPos(); InsertText(iCurPos,wxT("}")); int iPos = BraceMatch(iCurPos); This works fine (iPos gets the position of the matching brace) except when I call SetLexer(...) beforehand. Then it returns -1? How can I get it to work? Edit: I should point out that the above code is being called from the EVT_KEY_DOWN handler in a wxStyledTextCtrl-derived class.
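
    A hedged guess at the cause (not confirmed against the poster's setup): once a lexer is active, Scintilla's BraceMatch only pairs braces that carry the same style, and programmatically inserted text has not been styled yet. A small sketch of that workaround in wxPython, which wraps the same control:

        # Sketch (assumption): ask the active lexer to style the freshly
        # inserted text with Colourise before calling BraceMatch.
        import wx
        import wx.stc as stc

        def insert_and_match(ctrl: stc.StyledTextCtrl) -> int:
            pos = ctrl.GetCurrentPos()
            ctrl.InsertText(pos, "}")
            ctrl.Colourise(0, pos + 1)   # style up to and including the new brace
            return ctrl.BraceMatch(pos)  # returns -1 if no match is found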

    Read the article

  • Parsing Indentation-based syntaxes in Haskell's Parsec

    - by pavpanchekha
    I'm trying to parse an indentation-based language (think Python, Haskell itself, Boo, YAML) in Haskell using Parsec. I've seen the IndentParser library, and it looks like it's the perfect match, but what I can't figure out is how to make my TokenParser into an indentation parser. Here's the code I have so far: import qualified Text.ParserCombinators.Parsec.Token as T import qualified Text.ParserCombinators.Parsec.IndentParser.Token as IT lexer = T.makeTokenParser mylangDef ident = IT.identifier lexer This throws the error: parser2.hs:29:28: Couldn't match expected type `IT.TokenParser st' against inferred type `T.GenTokenParser s u m' In the first argument of `IT.identifier', namely `lexer' In the expression: IT.identifier lexer In the definition of `ident': ident = IT.identifier lexer What am I doing wrong? How should I create an IT.TokenParser? Or is IndentParser broken and to be avoided?
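
    Separately from the Parsec type error, the offside rule itself is usually implemented as a stack of indentation widths that emits INDENT/DEDENT tokens; a language-agnostic sketch of that technique in Python (illustration only, not the IndentParser fix):

        # Offside-rule sketch: keep a stack of indentation widths and emit
        # INDENT/DEDENT tokens as lines move in and out of nesting.
        def offside_tokens(lines):
            stack = [0]
            for line in lines:
                if not line.strip():
                    continue                      # blank lines do not change indentation
                width = len(line) - len(line.lstrip(' '))
                if width > stack[-1]:
                    stack.append(width)
                    yield ('INDENT', width)
                while width < stack[-1]:
                    stack.pop()
                    yield ('DEDENT', width)
                yield ('LINE', line.strip())
            while len(stack) > 1:                 # close any still-open blocks at EOF
                stack.pop()
                yield ('DEDENT', 0)

        for tok in offside_tokens(["if x:", "    y = 1", "z = 2"]):
            print(tok)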

    Read the article

  • Visage

    - by Geertjan
    Raj, the Chennai JUG lead, together with others from that JUG, is interested in Visage, the JavaFX script language closely associated with Stephen Chin. He sent me the related lexer and parser and I started by having a look at them in the new version of ANTLRWorks being developed by Sam Harwell (who demonstrated it very effectively during JavaOne): Notice how the lexer and parser are shown in a tree structure, as well as in a cool syntax diagram. Next, I downloaded a bunch of JARs from here, so that packages such as from "com.sun.tools.mjavac" can be used, i.e., these are Visage-specific packages that aren't found anywhere except in the location below: http://code.google.com/p/visage/wiki/GettingStarted It turns out that there's also a Visage NetBeans plugin out there: http://code.google.com/p/visage/source/browse/?repo=netbeans-plugin Rather than recreating everything from scratch, i.e., generating ANTLR Java classes from the lexer and parser, I copied a lot of stuff from the site above and now a file Raj sent me looks as follows, i.e., basic syntax coloring is shown: For anyone wanting to seriously support Visage in NetBeans IDE, I recommend downloading the existing Visage NetBeans plugin above, rather than creating everything yourself from scratch, and then figuring out how to use that code in some way, i.e., add the JARs I pointed to above, and work on its build.xml file, which could be frustrating in the beginning, but there's no point in recreating everything if everything already exists.

    Read the article

  • Custom whiteSpace using Haskell Parsec

    - by fryguybob
    I would like to use Parsec's makeTokenParser to build my parser, but I want to use my own definition of whiteSpace. Doing the following replaces whiteSpace with my definition, but all the lexeme parsers still use the old definition (e.g. P.identifier lexer will use the old whiteSpace). ... lexer :: P.TokenParser () lexer = l { P.whiteSpace = myWhiteSpace } where l = P.makeTokenParser myLanguageDef ... Looking at the code for makeTokenParser I think I understand why it works this way. I want to know if there are any workarounds to avoid completely duplicating the code for makeTokenParser?

    Read the article

  • Parsing "true" and "false" using Boost.Spirit.Lex and Boost.Spirit.Qi

    - by Andrew Ross
    As the first stage of a larger grammar using Boost.Spirit I'm trying to parse "true" and "false" to produce the corresponding bool values, true and false. I'm using Spirit.Lex to tokenize the input and have a working implementation for integer and floating point literals (including those expressed in a relaxed scientific notation), exposing int and float attributes. Token definitions #include <boost/spirit/include/lex_lexertl.hpp> namespace lex = boost::spirit::lex; typedef boost::mpl::vector<int, float, bool> token_value_type; template <typename Lexer> struct basic_literal_tokens : lex::lexer<Lexer> { basic_literal_tokens() { this->self.add_pattern("INT", "[-+]?[0-9]+"); int_literal = "{INT}"; // To be lexed as a float a numeric literal must have a decimal point // or include an exponent, otherwise it will be considered an integer. float_literal = "{INT}(((\\.[0-9]+)([eE]{INT})?)|([eE]{INT}))"; literal_true = "true"; literal_false = "false"; this->self = literal_true | literal_false | float_literal | int_literal; } lex::token_def<int> int_literal; lex::token_def<float> float_literal; lex::token_def<bool> literal_true, literal_false; }; Testing parsing of float literals My real implementation uses Boost.Test, but this is a self-contained example. #include <string> #include <iostream> #include <cmath> #include <cstdlib> #include <limits> bool parse_and_check_float(std::string const & input, float expected) { typedef std::string::const_iterator base_iterator_type; typedef lex::lexertl::token<base_iterator_type, token_value_type > token_type; typedef lex::lexertl::lexer<token_type> lexer_type; basic_literal_tokens<lexer_type> basic_literal_lexer; base_iterator_type input_iter(input.begin()); float actual; bool result = lex::tokenize_and_parse(input_iter, input.end(), basic_literal_lexer, basic_literal_lexer.float_literal, actual); return result && std::abs(expected - actual) < std::numeric_limits<float>::epsilon(); } int main(int argc, char *argv[]) { if (parse_and_check_float("+31.4e-1", 3.14)) { return EXIT_SUCCESS; } else { return EXIT_FAILURE; } } Parsing "true" and "false" My problem is when trying to parse "true" and "false". This is the test code I'm using (after removing the Boost.Test parts): bool parse_and_check_bool(std::string const & input, bool expected) { typedef std::string::const_iterator base_iterator_type; typedef lex::lexertl::token<base_iterator_type, token_value_type > token_type; typedef lex::lexertl::lexer<token_type> lexer_type; basic_literal_tokens<lexer_type> basic_literal_lexer; base_iterator_type input_iter(input.begin()); bool actual; lex::token_def<bool> parser = expected ? 
basic_literal_lexer.literal_true : basic_literal_lexer.literal_false; bool result = lex::tokenize_and_parse(input_iter, input.end(), basic_literal_lexer, parser, actual); return result && actual == expected; } but compilation fails with: boost/spirit/home/qi/detail/assign_to.hpp: In function ‘void boost::spirit::traits::assign_to(const Iterator&, const Iterator&, Attribute&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Attribute = bool]’: boost/spirit/home/lex/lexer/lexertl/token.hpp:434: instantiated from ‘static void boost::spirit::traits::assign_to_attribute_from_value<Attribute, boost::spirit::lex::lexertl::token<Iterator, AttributeTypes, HasState>, void>::call(const boost::spirit::lex::lexertl::token<Iterator, AttributeTypes, HasState>&, Attribute&) [with Attribute = bool, Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, AttributeTypes = boost::mpl::vector<int, float, bool, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, HasState = mpl_::bool_<true>]’ ... backtrace of instantiation points .... boost/spirit/home/qi/detail/assign_to.hpp:79: error: no matching function for call to ‘boost::spirit::traits::assign_to_attribute_from_iterators<bool, __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, void>::call(const __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >&, const __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >&, bool&)’ boost/spirit/home/qi/detail/construct.hpp:64: note: candidates are: static void boost::spirit::traits::assign_to_attribute_from_iterators<bool, Iterator, void>::call(const Iterator&, const Iterator&, char&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >] My interpretation of this is that Spirit.Qi doesn't know how to convert a string to a bool - surely that's not the case? Has anyone else done this before? If so, how?

    Read the article

  • Compiling flex file into dll.

    - by Szpilona
    Hi, I've got a lexer created with flex (Cygwin). Normally I compile it to an .exe file. For the newest project I need a lexer to use in a bigger C# program running on Windows XP. Of course I can execute the file using System.Diagnostics.Process, but that is not the best solution for me, as I want the program to run on several machines. How can I create a DLL under Cygwin from the lexer's source code? Thanks in advance... Szpilona

    Read the article

  • Replace token in ANTLR

    - by user1019710
    I want to replace a token using ANTLR. I tried with TokenRewriteStream and replace, but it didn't work. Any suggestions? ANTLRStringStream in = new ANTLRStringStream(source); MyLexer lexer = new MyLexer(in); TokenRewriteStream tokens = new TokenRewriteStream(lexer); for(Object obj : tokens.getTokens()) { CommonToken token = (CommonToken)obj; tokens.replace(token, "replacement"); } The lexer finds all occurrences of single-line comments, and I want to replace them in the original source too.

    Read the article

  • Haskell's cabal dependency problem with happy

    - by wirrbel
    I have problems installing ghc-mod on my linux machine. cabal worries about "happy" not being available in versione = 1.17: $ cabal install ghc-mod Resolving dependencies... [1 of 1] Compiling Main ( /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/Setup.hs, /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/dist/setup/Main.o ) Linking /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/dist/setup/setup ... Configuring haskell-src-exts-1.14.0... setup: The program happy version =1.17 is required but it could not be found. Failed to install haskell-src-exts-1.14.0 cabal: Error: some packages failed to install: ghc-mod-3.1.3 depends on haskell-src-exts-1.14.0 which failed to install. haskell-src-exts-1.14.0 failed during the configure step. The exception was: ExitFailure 1 hlint-1.8.53 depends on haskell-src-exts-1.14.0 which failed to install. However, it even is installed in v. 1.19, as you can see here: $ cabal install happy Resolving dependencies... [1 of 1] Compiling Main ( /tmp/happy-1.19.0-1124/happy-1.19.0/Setup.lhs, /tmp/happy-1.19.0-1124/happy-1.19.0/dist/setup/Main.o ) Linking /tmp/happy-1.19.0-1124/happy-1.19.0/dist/setup/setup ... Configuring happy-1.19.0... Building happy-1.19.0... Preprocessing executable 'happy' for happy-1.19.0... [ 1 of 18] Compiling NameSet ( src/NameSet.hs, dist/build/happy/happy-tmp/NameSet.o ) [ 2 of 18] Compiling Target ( src/Target.lhs, dist/build/happy/happy-tmp/Target.o ) [ 3 of 18] Compiling AbsSyn ( src/AbsSyn.lhs, dist/build/happy/happy-tmp/AbsSyn.o ) [ 4 of 18] Compiling ParamRules ( src/ParamRules.hs, dist/build/happy/happy-tmp/ParamRules.o ) [ 5 of 18] Compiling GenUtils ( src/GenUtils.lhs, dist/build/happy/happy-tmp/GenUtils.o ) [ 6 of 18] Compiling ParseMonad ( src/ParseMonad.lhs, dist/build/happy/happy-tmp/ParseMonad.o ) [ 7 of 18] Compiling Lexer ( src/Lexer.lhs, dist/build/happy/happy-tmp/Lexer.o ) [ 8 of 18] Compiling Parser ( dist/build/happy/happy-tmp/Parser.hs, dist/build/happy/happy-tmp/Parser.o ) [ 9 of 18] Compiling AttrGrammar ( src/AttrGrammar.lhs, dist/build/happy/happy-tmp/AttrGrammar.o ) [10 of 18] Compiling AttrGrammarParser ( dist/build/happy/happy-tmp/AttrGrammarParser.hs, dist/build/happy/happy-tmp/AttrGrammarParser.o ) [11 of 18] Compiling Grammar ( src/Grammar.lhs, dist/build/happy/happy-tmp/Grammar.o ) [12 of 18] Compiling First ( src/First.lhs, dist/build/happy/happy-tmp/First.o ) [13 of 18] Compiling LALR ( src/LALR.lhs, dist/build/happy/happy-tmp/LALR.o ) [14 of 18] Compiling Paths_happy ( dist/build/autogen/Paths_happy.hs, dist/build/happy/happy-tmp/Paths_happy.o ) [15 of 18] Compiling ProduceCode ( src/ProduceCode.lhs, dist/build/happy/happy-tmp/ProduceCode.o ) [16 of 18] Compiling ProduceGLRCode ( src/ProduceGLRCode.lhs, dist/build/happy/happy-tmp/ProduceGLRCode.o ) [17 of 18] Compiling Info ( src/Info.lhs, dist/build/happy/happy-tmp/Info.o ) [18 of 18] Compiling Main ( src/Main.lhs, dist/build/happy/happy-tmp/Main.o ) Linking dist/build/happy/happy ... Installing executable(s) in /home/hope/.cabal/bin Installed happy-1.19.0 Any ideas? cabal-install version 1.16.0.2 using version 1.16.0 of the Cabal library

    Read the article

  • Netbeans and EOF

    - by sqlnoob
    Hey there, Java, ANTLR and NetBeans newbie here. I have installed a JDK and NetBeans. I started a new project on NetBeans 6.8 and I have added the antlr-3.2.jar as a library. I also created a lexer and parser class using ANTLRWorks. These classes are named ExprParser.java and ExprLexer.java. I copied them into a directory named path-to-netbeans-project/src/parsers. I have a main file: package javaapplication2; import org.antlr.runtime.*; import parsers.*; public class Main { public static void main(String[] args) throws Exception{ ANTLRInputStream input = new ANTLRInputStream(System.in); ExprLexer lexer = new ExprLexer(input); CommonTokenStream tokens = new CommonTokenStream(lexer); ExprParser parser = new ExprParser(tokens); parser.prog(); } } The application builds fine. The book I'm reading says I should run the program and type in some stuff and then press Ctrl+Z (I'm on Windows) to send EOF to the console. The problem is that nothing happens when I press Ctrl+Z in the NetBeans console. When I run from the command line, Ctrl+Z works fine. This is probably WAY too much information, but I can't figure it out. Sorry. Probably not a good idea to learn 3 new technologies at once.

    Read the article

  • error in a c code while trying to remove whitespace

    - by mekasperasky
    This code is the base of a lexer, and it does the basic operation of removing the whitespace from a source file and rewriting it into another file with each word on a separate line. But I am not able to understand why the file lext.txt is not getting updated. #include<stdio.h> /* this is a lexer which recognizes constants , variables ,symbols, identifiers , functions , comments and also header files . It stores the lexemes in 3 different files . One file contains all the headers and the comments . Another file will contain all the variables , another will contain all the symbols. */ int main() { int i; char a,b[20],c; FILE *fp1,*fp2; fp1=fopen("source.txt","r"); //the source file is opened in read only mode which will passed through the lexer fp2=fopen("lext.txt","w"); //now lets remove all the white spaces and store the rest of the words in a file if(fp1==NULL) { perror("failed to open source.txt"); //return EXIT_FAILURE; } i=0; while(!feof(fp1)) { a=fgetc(fp1); if(a!="") { b[i]=a; printf("hello"); } else { b[i]='\0'; fprintf(fp2, "%.20s\n", b); i=0; continue; } i=i+1; /*Switch(a) { case EOF :return eof; case '+':sym=sym+1; case '-':sym=sym+1; case '*':sym=sym+1; case '/':sym=sym+1; case '%':sym=sym+1; case ' */ } return 0; }

    Read the article

  • Error with Phoenix placeholder _val in Boost.Spirit.Lex :(

    - by GooRoo
    Hello, everybody. I'm newbie in Boost.Spirit.Lex. Some strange error appears every time I try to use lex::_val in semantics actions in my simple lexer: #ifndef _TOKENS_H_ #define _TOKENS_H_ #include <iostream> #include <string> #include <boost/spirit/include/lex_lexertl.hpp> #include <boost/spirit/include/phoenix_operator.hpp> #include <boost/spirit/include/phoenix_statement.hpp> #include <boost/spirit/include/phoenix_container.hpp> namespace lex = boost::spirit::lex; namespace phx = boost::phoenix; enum tokenids { ID_IDENTIFICATOR = 1, ID_CONSTANT, ID_OPERATION, ID_BRACKET, ID_WHITESPACES }; template <typename Lexer> struct mega_tokens : lex::lexer<Lexer> { mega_tokens() : identifier(L"[a-zA-Z_][a-zA-Z0-9_]*", ID_IDENTIFICATOR) , constant (L"[0-9]+(\\.[0-9]+)?", ID_CONSTANT ) , operation (L"[\\+\\-\\*/]", ID_OPERATION ) , bracket (L"[\\(\\)\\[\\]]", ID_BRACKET ) { using lex::_tokenid; using lex::_val; using phx::val; this->self = operation [ std::wcout << val(L'<') << _tokenid // << val(L':') << lex::_val << val(L'>') ] | identifier [ std::wcout << val(L'<') << _tokenid << val(L':') << _val << val(L'>') ] | constant [ std::wcout << val(L'<') << _tokenid // << val(L':') << _val << val(L'>') ] | bracket [ std::wcout << phx::val(lex::_val) << val(L'<') << _tokenid // << val(L':') << lex::_val << val(L'>') ] ; } lex::token_def<wchar_t, wchar_t> operation; lex::token_def<std::wstring, wchar_t> identifier; lex::token_def<double, wchar_t> constant; lex::token_def<wchar_t, wchar_t> bracket; }; #endif // _TOKENS_H_ and #include <cstdlib> #include <iostream> #include <locale> #include <boost/spirit/include/lex_lexertl.hpp> #include "tokens.h" int main() { setlocale(LC_ALL, "Russian"); namespace lex = boost::spirit::lex; typedef std::wstring::iterator base_iterator; typedef lex::lexertl::token < base_iterator, boost::mpl::vector<wchar_t, std::wstring, double, wchar_t>, boost::mpl::true_ > token_type; typedef lex::lexertl::actor_lexer<token_type> lexer_type; typedef mega_tokens<lexer_type>::iterator_type iterator_type; mega_tokens<lexer_type> mega_lexer; std::wstring str = L"alfa+x1*(2.836-x2[i])"; base_iterator first = str.begin(); bool r = lex::tokenize(first, str.end(), mega_lexer); if (r) { std::wcout << L"Success" << std::endl; } else { std::wstring rest(first, str.end()); std::wcerr << L"Lexical analysis failed\n" << L"stopped at: \"" << rest << L"\"\n"; } return EXIT_SUCCESS; } This code causes an error in Boost header 'boost/spirit/home/lex/argument.hpp' on line 167 while compiling: return: can't convert 'const boost::variant' to 'boost::variant &' When I don't use lex::_val program compiles with no errors. Obviously, I use _val in wrong way, but I do not know how to do this correctly. Help, please! :) P.S. And sorry for my terrible English…

    Read the article

  • ANTLR lexing getting confused over '...' and floats

    - by Andy Hull
    I think the ANTLR lexer is treating my attempt at a range expression "1...3" as a float. The expression "x={1..3}" is coming out of the lexer as "x={.3}" when I used the following token definitions: FLOAT : ('0'..'9')+ ('.' '0'..'9'+)? EXPONENT? | ('.' '0'..'9')+ EXPONENT? ; AUTO : '...'; When I change FLOAT to just check for integers, as so: FLOAT : ('0'..'9')+; then the expression "x={1...3}" is tokenized correctly. Can anyone help me to fix this? Thanks!
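
    The underlying issue is generic: a '.' after a digit can start either a float tail or the '...' operator, so the range operator needs to be tried first and the float rule should only take '.' when a digit follows. A sketch of that discipline in a plain regex tokenizer (Python, purely illustrative; it is not the ANTLR-specific fix):

        # Illustrative regex tokenizer: AUTO ('...') is listed before FLOAT, and
        # FLOAT only consumes '.' when digits follow, so "1...3" is not mis-read.
        import re

        TOKEN_SPEC = [
            ('AUTO',  r'\.\.\.'),                     # range operator, tried first
            ('FLOAT', r'\d+\.\d+(?:[eE][-+]?\d+)?'),  # '.' must be followed by digits
            ('INT',   r'\d+'),
            ('OP',    r'[={}+\-*/]'),
            ('ID',    r'[A-Za-z_]\w*'),
            ('SKIP',  r'\s+'),
        ]
        MASTER = re.compile('|'.join(f'(?P<{name}>{pat})' for name, pat in TOKEN_SPEC))

        def tokenize(text):
            for m in MASTER.finditer(text):
                if m.lastgroup != 'SKIP':
                    yield m.lastgroup, m.group()

        print(list(tokenize('x={1...3}')))
        # [('ID', 'x'), ('OP', '='), ('OP', '{'), ('INT', '1'),
        #  ('AUTO', '...'), ('INT', '3'), ('OP', '}')]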

    Read the article

  • How to write a Compiler in C for C

    - by Kerb_z
    I want to write a compiler for C. This is a college project I am doing as required by my university. I am an intermediate programmer in C, with an understanding of data structures. Now I know a compiler has the following parts: 1. Lexer 2. Parser 3. Intermediate Code Generator 4. Optimizer 5. Code Generator. I want to begin with the lexer part and move on to the parser. I am consulting the following book: Compilers: Principles, Techniques, and Tools by Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman. The thing is that this book is highly theoretical and perplexing to me. I really appreciate the authors, but the point is I am not able to begin my project; I feel blind as to where to go. I need guidance, please help.
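
    One practical way to get moving is to hand-write the lexer as a plain character loop first and map the book's theory onto it afterwards. A small sketch of that loop, written in Python for brevity; the read/classify/accumulate structure is the same thing you would code in C:

        # Minimal hand-rolled lexer loop: read one character at a time,
        # classify it, and accumulate multi-character tokens.
        def lex(src):
            i, n, tokens = 0, len(src), []
            while i < n:
                c = src[i]
                if c.isspace():
                    i += 1
                elif c.isdigit():
                    j = i
                    while j < n and src[j].isdigit():
                        j += 1
                    tokens.append(('NUMBER', src[i:j]))
                    i = j
                elif c.isalpha() or c == '_':
                    j = i
                    while j < n and (src[j].isalnum() or src[j] == '_'):
                        j += 1
                    tokens.append(('IDENT', src[i:j]))
                    i = j
                elif c in '+-*/=();{}':
                    tokens.append(('PUNCT', c))
                    i += 1
                else:
                    raise SyntaxError(f'unexpected character {c!r} at {i}')
            return tokens

        print(lex('int x = 42;'))
        # [('IDENT', 'int'), ('IDENT', 'x'), ('PUNCT', '='), ('NUMBER', '42'), ('PUNCT', ';')]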

    Read the article

  • Problem when reading backslash in Prolog

    - by Jerry
    I'm writing a lexer in Prolog which will be used as part of a functional language interpreter. The language spec allows expressions such as let \x = x + 2; to occur. What I want the lexer to do for such input is to "return": [tokLet, tokLambda, tokVar(x), tokEq, tokVar(x), tokPlus, tokNumber(2), tokSColon] and the problem is that Prolog seems to ignore the \ character and "returns" the line written above, except for tokLambda. One approach to solving this would be to somehow add a second backslash before/after every occurrence of one in the program code (everything works fine if I change the original input to let \\x = x + 2;), but I don't really like it. Any ideas?
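
    One thing worth ruling out, purely as an assumption about how the test input is supplied: if the expression is written as a quoted string or atom inside source code, the host language consumes the backslash as an escape before the lexer ever sees it, and only a doubled backslash survives. The same effect is easy to demonstrate in Python:

        # Illustration of the escape-character effect (Python used only as an analogy):
        # inside an ordinary string literal the backslash is interpreted by the host
        # language, so the lexer under test never receives it.
        escaped = "let \\x = x + 2;"   # doubled backslash: the string really contains \x
        raw     = r"let \x = x + 2;"   # raw string: same content, written without doubling
        print(escaped)                  # let \x = x + 2;
        print(raw == escaped)           # True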

    Read the article

  • Wiki .NET Parser, C#

    Wiki .NET Parser C#, 3 (4) files with lexer, grammar, parser definition and ANTLR engine...

    Read the article

  • CodePlex Daily Summary for Sunday, December 19, 2010

    CodePlex Daily Summary for Sunday, December 19, 2010Popular ReleasesTeam Foundation Server Administration Tool: 2.1: TFS Administration Tool 2.1, is the first version of the TFS Administration Tool which is built on top of the Team Foundation Server 2010 object model. TFS Administration Tool 2.1 can be installed on machines that are running either Team Explorer 2010, or Team Foundation Server 2010.SQL Monitor: SQL Monitor 3.0 alpha 5: 1. fix a problem with not expanding nodes in object explorer, sorryHacker Passwords: HackerPasswords.zip: Source code, executable and documentationWatchersNET.SiteMap: WatchersNET.SiteMap 01.03.03: Whats NewSkin Object: You can now filter by Terms for Example use: <object id="dnnSITEMAPSL" codetype="dotnetnuke/server" codebase="SITEMAPSL"> <param name="TaxMode" value="terms" /> <param name="TaxTerms" value="TermName1,TermName2" /> </object> changes Tax Term Filter should work correct nowSubtitleTools: SubtitleTools 1.3: - Added .srt FileAssociation & Win7 ShowRecentCategory feature. - Applied UnifiedYeKe to fix Persian search problems. - Reduced file size of Persian subtitles for uploading @OSDB.EnhSim: EnhSim 2.2.3 ALPHA: 2.2.3 ALPHAThis release adds in the changes for 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in th...Facebook C# SDK: 4.1.0: - Lots of bug fixes - Removed Dynamic Runtime Language dependencies from non-dynamic platforms. - Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample - Changed internal serialization to use Json.net - BREAKING CHANGE: Canvas Session is no longer support. Use Signed Request instead. Canvas Session has been deprecated by Facebook. - BREAKING CHANGE: Some renames and changes with Authorizer, CanvasAuthorizer, and Authorization ac...NuGet (formerly NuPack): NuGet 1.0 build 11217.102: Note: this release is slightly newer than RC1, and fixes a couple issues relating to updating packages to newer versions. NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (h...WCF Community Site: WCF Web APIs 10.12.17: Welcome to the second release of WCF Web APIs on codeplex Here is what is new in this release. WCF Support for jQuery - create WCF web services that are easy to consume from JavaScript clients, in particular jQuery. Better support for using JsonValue as dynamic Support for JsonValue change notification events for databinding and other purposes Support for going between JsonValue and CLR types WCF HTTP - create HTTP / REST based web services. 
This is a minor release which contains fixe...Orchard Project: Orchard 0.9: Orchard Release Notes Build: 0.9.253 Published: 12/16/2010 How to Install OrchardTo install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard-Using-Web-PI.ashx Web PI will detect your hardware environment and install the application. --OR-- Alternatively, to install the release manually, download the Orchard.Web.0.9.253.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orch...Pyxis 2: Beta 2.2: An issue stopping the Checkbox from working properly has been resolved (gradient background has also been added) A TabDialog control has been added NumericUpDown control added A driver for the VS1053 has been added (not yet in use) PyxisAPI.OpenFile now works with all its features PyxisAPI.SaveFile now works with all features Settings window is now available from the Pyxis menu You can now connect using Static IP or DHCP You can now set your system time from the Settings windo...sgMotion Animation Library: sgMotion v1.3 for SunBurn 2.0.9 [Includes Sample]: sgMotion 1.3 release for use with SunBurn 2.0.9 (all editions). Includes example project with assets, and full Windows and Xbox support. (tested on all platforms)DotSpatial: DotSpatial 12-15-2010: This release contains a few minor bug fixes and hopefully the GDAL libraries for the 3.5 x86 build actually built to the correct directory this time.DotNetNuke® Community Edition: 05.06.01 Beta: This is the initial Beta of DotNetNuke 5.6.1. See the DotNetNuke Roadmap a full list of changes in this release.MSBuild Extension Pack: December 2010: Release Blog Post The MSBuild Extension Pack December 2010 release provides a collection of over 380 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...TweetSharp: TweetSharp v2.0.0.0 - Preview 5: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 5 ChangesMaintenance release with user reported fixes Preview 4 ChangesReintroduced fluent interface support via satellite assembly Added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development Numerous fixes reported by preview users Preview 3 ChangesNumerous ...Silverlight Contrib: Silverlight Contrib 2010.1.0: 2010.1.0 New FeaturesCompatibility Release for Silverlight 4 and Visual Studio 2010FlickrNet API Library: 3.1.4000: Newest release. Now contains dedicated Windows Phone 7 DLL as well as all previous DLLs. Also contains Windows Help file documentation now as standard.mojoPortal: 2.3.5.8: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2358-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. 
To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-12-13: Code samples for Visual Studio 2010New ProjectsANLP - Another .NET Lexer Parser: This project aims to have a lexer/parser working in Silverlight and help people to write their own grammar and make the lexer/parser available in Silverlight.Better App Tab Shortcuts Firefox Extension: An extension for Firefox 4.0 which improves the shortcuts for selecting tabs.BizTalk Map Converter: Converts BizTalk Maps into ExcelCampo Minado: Campo minado em SilverlightCSharpHASH: Hash project is an college project conserning business software development. Written in C#.DS_HW1: hw1DynamicViewModel: MVVM using POCOs with .NET 4.0: This project aims to provide a way to implement the Model View ViewModel (MVVM) architectural pattern using Plain Old CLR Objects (POCOs) while taking full advantage of .NET 4.0 DynamicObject Class.Evolution Game: Evolution game that uses genetic algorithms to visualize life of different species.Fluent Parser: Fluent Parser is a library allowing to build a syntaxic analyser in C#. The grammar is built using only .NET objects into the form of a Parsing Expression Grammar (PEG).GoogleMusic Player: Choose and play music from newly launched google Music http://www.google.co.in/music create and save playlists Using C#Gracefully update SharePoint 2010 document metadata: Users want to update the metadata for a document in a document library, for which they don’t have access or if they wanted to update some read-only fields. Hacker Passwords: Hacker Passwords generator - free open sourceHTML5 Canvas FlowView: FlowView is a simple to use API for creating/implementing a flowview style element in HTML5/Canvas. The FlowView looks much like something from Apple's cover view.L#: A F# program to provide similar functionalities offered by Prolog for F# oriented applications.MapInfo Tools: Utilities for working with MapInfo files.Notepad X: Notepad X is an alternative open source text editor for Microsoft Windows, with a lot of customization options created to help users managing text documents, featuring tab navigation.OpenFlyClient: An educational open source of a client written in C# for FlyFF. This is for educational and evaluation purposes only. Using this for private servers is not allowed.Orkut API Library: Orkut API Library (more of a wrapper) for .NET makes it easier for .NET developers creating web and/or desktop applications to leverage Orkut OpenSocial API from within the familiar Visual Studio environment. PasteHtml.NET: PasteHtml.NET is a .NET API for using the PasteHtml.com service.Photon: Silverlight MVVM Framework RSATasync: Running analyses in RiskSpectrum PSA Professional software is a time-consuming task. RSATasync is a middleware, which uses .NET4 Parallel Extensions to parallelize analyses sent by the editor to the analyzer in the RiskSpectrum software.The Killie01 Project: all killie01 projects (opensource)United Nations News for Windows Phone 7: Open source project for United Nations News for Windows Phone 7. WCF SIP Stack: WCF SIPWordpress .NET: WPDotnet is a library to ease the Wordpress integration with ASP.NET. Some Server controls are implemented. If you don't want to use them, you can just use the classes.

    Read the article

  • Implementing eval and load functions inside a scripting engine with Flex and Bison.

    - by Simone Margaritelli
    Hi guys, I'm developing a scripting engine with flex and bison and now I'm implementing the eval and load functions for this language. Just to give you an example, the syntax is like: import std.*; load( "some_script.hy" ); eval( "foo = 123;" ); println( foo ); So, in my lexer I've implemented the function: void hyb_parse_string( const char *str ){ extern int yyparse(void); YY_BUFFER_STATE prev, next; /* * Save current buffer. */ prev = YY_CURRENT_BUFFER; /* * yy_scan_string will call yy_switch_to_buffer. */ next = yy_scan_string( str ); /* * Do actual parsing (yyparse calls yylex). */ yyparse(); /* * Restore previous buffer. */ yy_switch_to_buffer(prev); } But it does not seem to work. Well, it does, but when the string (loaded from a file or directly evaluated) is finished, I get a SIGSEGV: Program received signal SIGSEGV, Segmentation fault. 0xb7f2b801 in yylex () at src/lexer.cpp:1658 1658 if ( YY_CURRENT_BUFFER_LVALUE->yy_buffer_status == YY_BUFFER_NEW ) As you may notice, the SIGSEGV is generated by the flex/bison code, not mine... any hints, or at least an example of how to implement those kinds of functions? PS: I've successfully implemented the include directive, but I need eval and load to work not at parsing time but at execution time (kind of like PHP's include/require directives).

    Read the article

  • Syntax error beyond end of program

    - by a_m0d
    I am experimenting with writing a toy compiler in OCaml. Currently, I am trying to implement the offside rule for my lexer. However, I am having some trouble with the OCaml syntax (the compiler errors are extremely uninformative). The code below (33 lines of it) causes an error on line 34, beyond the end of the source code. I am unsure what is causing this error. open Printf let s = (Stack.create():int Stack.t); let rec check x = ( if Stack.is_empty s then Stack.push x s else if Stack.top s < x then ( Stack.push x s; printf "INDENT\n"; ) else if Stack.top s > x then ( printf "DEDENT\n"; Stack.pop s; check x; ) else printf "MATCHED\n"; ); let main () = ( check 0; check 4; check 6; check 8; check 5; ); let _ = Printexc.print main () OCaml output: File "lexer.ml", line 34, characters 0-0: Error: Syntax error Can someone help me work out what is causing the error and help me on my way to fixing it?

    Read the article

  • segmentation fault in file operations in c

    - by mekasperasky
    #include<stdio.h> /* this is a lexer which recognizes constants , variables ,symbols, identifiers , functions , comments and also header files . It stores the lexemes in 3 different files . One file contains all the headers and the comments . Another file will contain all the variables , another will contain all the symbols. */ int main() { int i; char a,b[20],c; FILE *fp1; fp1=fopen("source.txt","r"); //the source file is opened in read only mode which will passed through the lexer //now lets remove all the white spaces and store the rest of the words in a file if(fp1==NULL) { perror("failed to open source.txt"); //return EXIT_FAILURE; } i=0; while(1) { a=fgetc(fp1); if(a !="") { b[i]=a; } else { fprintf(fp1, "%.20s\n", b); i=0; continue; } i=i+1; /*Switch(a) { case EOF :return eof; case '+':sym=sym+1; case '-':sym=sym+1; case '*':sym=sym+1; case '/':sym=sym+1; case '%':sym=sym+1; case ' */ } return 0; } how does this code end up in segmentation fault?

    Read the article

  • Get sublime text 2 (sublimecodeintel) to parse mason

    - by user813182
    So Sublime Text 2 seems to be getting a lot of love lately. I am using the SublimeCodeIntel plugin, https://github.com/Kronuz/SublimeCodeIntel, for auto-completion. The problem is, it does not recognize .mi files as MASON files. I looked into the lexer and it does have the keywords in place. How do I force it to be parsed as MASON? Running "Set Syntax:" via Ctrl+Shift+P does not show a MASON option.

    Read the article

  • What can Go chan do that a list cannot?

    - by alpav
    I want to know in which situation Go chan makes code much simpler than using list or queue or array that is usually available in all languages. As it was stated by Rob Pike in one of his speeches about Go lexer, Go channels help to organize data flow between structures that are not homomorphic. I am interested in a simple Go code sample with chan that becomes MUCH more complicated in another language (for example C#) where chan is not available. I am not interested in samples that use chan just to increase performance by avoiding waiting of data between generating list and consuming the list (which can be solved by chunking) or as a way to organize thread safe queue or thread-safe communication (which can be easily solved by locking primitives). I am interested in a sample that makes code simpler structurally disregarding size of data. If such sample does not exist then sample where size of data matters. I guess desired sample would contain bi-directional communication between generator and consumer. Also if someone could add tag [channel] to the list of available tags, that would be great.

    Read the article

  • Converting ANTLR AST to Java bytecode using ASM

    - by Nick
    I am currently trying to write my own compiler, targeting the JVM. I have completed the parsing step using Java classes generated by ANTLR, and have an AST of the source code to work from (An ANTLR "CommonTree", specifically). I am using ASM to simplify the generating of the bytecode. Could anyone give a broad overview of how to convert this AST to bytecode? My current strategy is to explore down the tree, generating different code depending on the current node (using "Tree.getType()"). The problem is that I can only recognise tokens from my lexer this way, rather than more complex patterns from the parser. Is there something I am missing, or am I simply approaching this wrong? Thanks in advance :)
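
    The usual shape of the answer is a post-order walk: generate code for a node's children first, then emit the instruction for the node itself, dispatching on the parser-assigned node type. A language-agnostic sketch in Python that emits stack-machine-style pseudo-instructions; in the real thing, ASM's visitor calls would replace the appends, and the node kinds here are illustrative stand-ins for the types you would switch on with Tree.getType():

        # Post-order code generation sketch: children first, then the operator.
        class Node:
            def __init__(self, kind, value=None, children=()):
                self.kind, self.value, self.children = kind, value, list(children)

        def emit(node, out):
            if node.kind == 'NUM':
                out.append(('PUSH', node.value))        # leaf: push the constant
            elif node.kind in ('+', '-', '*', '/'):
                for child in node.children:             # generate operands first
                    emit(child, out)
                out.append(({'+': 'ADD', '-': 'SUB', '*': 'MUL', '/': 'DIV'}[node.kind],))
            else:
                raise ValueError(f'unhandled node kind {node.kind!r}')

        # (1 + 2) * 3
        tree = Node('*', children=[Node('+', children=[Node('NUM', 1), Node('NUM', 2)]),
                                   Node('NUM', 3)])
        code = []
        emit(tree, code)
        print(code)   # [('PUSH', 1), ('PUSH', 2), ('ADD',), ('PUSH', 3), ('MUL',)]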

    Read the article

  • Simple C# Tokenizer Using Regex

    - by Pete
    I'm looking to tokenize really simple strings, but struggling to get the right regex. The strings might look like this: string1 = "{[Surname]}, some text... {[FirstName]}" string2 = "{Item}foo.{Item2}bar" And I want to extract the tokens in the curly braces (so string1 gets "{[Surname]}", "{[FirstName]}" and string2 gets "{Item}" and "{Item2}"). This question is quite good, but I can't get the regex right: poor man's lexer for c# Thanks for the help!
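
    The regex itself is the portable part: an opening brace, then anything that is not a brace, then the closing brace. Sketched in Python below; the same pattern string should drop into .NET's Regex.Matches unchanged:

        # The pattern is the portable part: '{' then non-brace characters then '}'.
        import re

        pattern = re.compile(r'\{[^{}]*\}')

        string1 = "{[Surname]}, some text... {[FirstName]}"
        string2 = "{Item}foo.{Item2}bar"

        print(pattern.findall(string1))   # ['{[Surname]}', '{[FirstName]}']
        print(pattern.findall(string2))   # ['{Item}', '{Item2}']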

    Read the article
