Search Results

Search found 167 results on 7 pages for 'lexical analyser'.


  • BizTalk 2009 - BizTalk Server Best Practice Analyser

    - by StuartBrierley
    The BizTalk Server Best Practices Analyser allows you to carry out a configuration-level verification of your BizTalk installation, evaluating the deployed configuration but not modifying or tuning anything that it finds. The Best Practices Analyser uses "reading and reporting" to gather data from sources such as Windows Management Instrumentation (WMI) classes, SQL Server databases and registry entries. When I first ran the analyser I got a number of errors. Any errors you get should be acted upon to resolve them; you should then run the scan again and see if anything else is reported that needs action. As you can see in the image above, the initial issue that jumped out at me was that the SQL Server Agent was not started. The reason for this was absent-mindedness - this run was against my development PC and I don't have SQL/BizTalk actively running unless I am using them. Starting the agent service and running the scan again resolved most of the issues for me, but the next major issue to look at was that there was no tracking host running. You can also see that I was still getting an error with two of the SQL jobs; the problem here was that I had not yet configured them. Configuring the backup and purge jobs and then starting the tracking host before running the scan again cleared all the critical issues, but I did still have a number of warnings. For example, on this report I was warned that the BizTalk MessageBox is hosted on the BizTalk Server. While this is known to be less than ideal, it is as I expected on my development environment, where I have installed Visual Studio, SQL and BizTalk on my laptop, and I was happy to ignore this and other similar warnings. In your case you should take a look at any warnings you receive and decide what you want to do about each of them in turn.
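
    As an aside, the sort of check behind a finding like "the SQL Server Agent was not started" can be reproduced in a few lines of script. A hedged sketch, assuming the third-party Python wmi package and the default instance's service name (SQLSERVERAGENT); it only illustrates the analyser's WMI-based "reading and reporting" and is not part of the tool:

        import wmi  # third-party package (Windows only): pip install wmi

        # Read the state of the SQL Server Agent service via WMI - one of the
        # data sources the Best Practices Analyser reports on.
        conn = wmi.WMI()
        for svc in conn.Win32_Service(Name="SQLSERVERAGENT"):
            print(svc.Name, svc.State, svc.StartMode)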

    Read the article

  • Big problem with regular expression in Lex (lexical analyzer)

    - by Nazgulled
    Hi, I have some content like this:

        author = "Marjan Mernik and Viljem Zumer",
        title = "Implementation of multiple attribute grammar inheritance in the tool LISA",
        year = 1999

        author = "Manfred Broy and Martin Wirsing",
        title = "Generalized Heterogeneous Algebras and Partial Interpretations",
        year = 1983

        author = "Ikuo Nakata and Masataka Sassa",
        title = "L-Attributed LL(1)-Grammars are LR-Attributed",
        journal = "Information Processing Letters"

    and I need to catch everything between the double quotes for title. My first try was this:

        ^(" "|\t)+"title"" "*=" "*"\"".+"\","

    which catches the first example, but not the other two: those titles span multiple lines, and that's the problem. I thought about changing it to something with \n somewhere to allow multiple lines, like this:

        ^(" "|\t)+"title"" "*=" "*"\""(.|\n)+"\","

    but this doesn't help; instead, it catches everything. Then I thought, "what I want is between double quotes, so what if I catch everything until I find another " followed by a ,?" That way I could know whether I was at the end of the title or not, no matter the number of lines, like this:

        ^(" "|\t)+"title"" "*=" "*"\""[^"\""]+","

    But this has another problem... The examples above don't show it, but a double quote symbol (") can appear inside the title itself. For instance:

        title = "aaaaaaa \"X bbbbbb",

    And yes, it will always be preceded by a backslash (\). Any suggestions to fix this regexp?
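
    For what it's worth, the usual way to express "everything up to the closing quote, allowing escaped quotes" is to let the quoted part match either a backslash-escaped character or any character that is neither a quote nor a backslash. A minimal Python re sketch of that idea (the question is about Lex patterns, so this only illustrates the technique, with made-up sample data; roughly the same alternation can be written as a Lex rule):

        import re

        # The quoted value is a run of characters that are either escaped (\\.)
        # or neither a quote nor a backslash ([^"\\]), so it may span lines and
        # may contain \" without ending the match.
        TITLE_RE = re.compile(
            r'^[ \t]*title[ \t]*=[ \t]*"((?:[^"\\]|\\.)*)"',
            re.MULTILINE | re.DOTALL,
        )

        sample = '''
        author = "Manfred Broy and Martin Wirsing",
        title = "Generalized \\"Heterogeneous\\" Algebras
                 and Partial Interpretations",
        year = 1983
        '''

        for match in TITLE_RE.finditer(sample):
            print(match.group(1))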

    Read the article

  • Error compiling flex (the lexical analyzer)

    - by Maulrus
    I'm trying to install flex on my Windows computer. I have MSYS installed. I untar flex and ./configure it, but when I try to make it, I get this error:

        In file included from ccl.c:34:
        flexdef.h:94:19: error: regex.h: No such file or directory
        In file included from ccl.c:34:
        flexdef.h:1195: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'regex_linedir'
        flexdef.h:1197: error: expected ')' before '*' token
        flexdef.h:1198: error: expected ')' before '*' token
        flexdef.h:1199: error: expected ')' before '*' token
        flexdef.h:1200: error: expected ')' before '*' token
        flexdef.h:1201: error: expected ')' before '*' token
        flexdef.h:1202: error: expected ')' before '*' token
        make[2]: *** [ccl.o] Error 1
        make[1]: *** [all-recursive] Error 1
        make: *** [all] Error 2

    Until recently, I've only ever installed things using an .exe, so I'm pretty confused by this. Installing bison and m4 both went smoothly, and I'm wondering why this isn't. Any ideas?

    Read the article

  • Need an end of lexical scope action which can die normally

    - by Schwern
    I need the ability to add actions to the end of a lexical block where the action might die. And I need the exception to be thrown normally and be able to be caught normally. Unfortunately, Perl special-cases exceptions during DESTROY both by adding "(in cleanup)" to the message and making them untrappable. For example:

        {
            package Guard;

            use strict;
            use warnings;

            sub new {
                my $class = shift;
                my $code  = shift;
                return bless $code, $class;
            }

            sub DESTROY {
                my $self = shift;
                $self->();
            }
        }

        use Test::More tests => 2;

        my $guard_triggered = 0;

        ok !eval {
            my $guard = Guard->new(
        #line 24
                sub { $guard_triggered++; die "En guarde!" }
            );
            1;
        }, "the guard died";

        is $@, "En guarde! at $@ line 24\n", "with the right error message";
        is $guard_triggered, 1, "the guard worked";

    I want that to pass. Currently the exception is totally swallowed by the eval. This is for Test::Builder2, so I cannot use anything but pure Perl. The underlying issue is I have code like this:

        {
            $self->setup;
            $user_code->();
            $self->cleanup;
        }

    That cleanup must happen even if the $user_code dies, else $self gets into a weird state. So I did this:

        {
            $self->setup;
            my $guard = Guard->new(sub { $self->cleanup });
            $user_code->();
        }

    The complexity comes because the cleanup runs arbitrary user code and it is a use case where that code will die. I expect that exception to be trappable and unaltered by the guard. I'm avoiding wrapping everything in eval blocks because of the way that alters the stack.
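
    For comparison only, here is a rough Python analogue of the behaviour being asked for (not a Perl fix): an action registered for the end of a block may die, and the exception propagates and is caught in the ordinary way.

        import contextlib

        # The cleanup registered on the ExitStack raises; the exception escapes
        # the with-block normally and is trappable normally.
        def cleanup():
            raise RuntimeError("En guarde!")

        try:
            with contextlib.ExitStack() as stack:
                stack.callback(cleanup)   # runs when the block is left
                pass                      # the "user code" would go here
        except RuntimeError as err:
            print("caught:", err)         # caught: En guarde!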

    Read the article

  • Dependency Analyser

    - by tsutha
    I have been watching the 3 videos put together by the SSIS team on the Dependency Analyser. I am happy Microsoft have made an effort to do something about it; I have been asking for this feature since the SQL 2005 TAP. Still, there is a long way to go before they catch up with the competition. It looks like it currently supports SSIS and SQL dependencies. The release note states it is still in development and supports only a limited set of tasks. You still would not know the impact of dropping a column on cubes, reports etc. I hope that changes by the time the RTM comes out. I am struggling to understand why it is impossible to do it across the solution. Ideally, if you have a BI solution which holds DB, SSIS, SSAS, SSRS & PPS projects, I would like to right-click and execute a dependency analyser stating what impact it would have if I dropped or renamed a specific column. Has anyone else looked at it, and what are your thoughts? Thanks, Sutha

    Read the article

  • Learn how to analyse an RSIT (Random System Information Tool) report, by Simon-Sayce

    Hello, Sayce, one of the members of the editorial team, would like to invite you to read the following article: Random System Information Tool. This article is aimed at anyone wanting to check the integrity of their PC using the RSIT software. The course explains, line by line, the report generated by the tool. Don't hesitate to share your remarks. We wish you a pleasant read.

    Read the article

  • Theory: "Lexical Encoding"

    - by _ande_turner_
    I am using the term "Lexical Encoding" for lack of a better one. A Word is arguably the fundamental unit of communication, as opposed to a Letter. Unicode tries to assign a numeric value to each Letter of all known Alphabets. What is a Letter to one language is a Glyph to another. Unicode 5.1 currently assigns more than 100,000 values to these Glyphs. Out of the approximately 180,000 Words being used in Modern English, it is said that with a vocabulary of about 2,000 Words you should be able to converse in general terms. A "Lexical Encoding" would encode each Word, not each Letter, and encapsulate them within a Sentence.

        // A simplified example of a "Lexical Encoding"
        String sentence = "How are you today?";
        int[] sentence = { 93, 22, 14, 330, QUERY };

    In this example each Token in the String was encoded as an Integer. The Encoding Scheme here simply assigned an int value based on a generalised statistical ranking of word usage, and assigned a constant to the question mark. Ultimately, though, a Word has both a Spelling & Meaning. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole, and not be language specific. An English sentence would be encoded into "...language-neutral atomic elements of meaning ..." which could then be reconstituted into any language with a structured Syntactic Form and Grammatical Structure. What are other examples of "Lexical Encoding" techniques? If you are interested in where the word-usage statistics come from: http://www.wordcount.org
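
    A minimal Python sketch of the scheme described above; the numeric ranks are the illustrative values from the example, not real word-usage statistics:

        # Encode each Word of a Sentence as an integer drawn from a usage-ranked
        # vocabulary, with a reserved constant for the question mark.
        RANKED_VOCAB = {"how": 93, "are": 22, "you": 14, "today": 330}  # illustrative
        QUERY = -1  # sentinel for '?'

        def encode(sentence):
            tokens = sentence.lower().replace("?", " ?").split()
            return [QUERY if tok == "?" else RANKED_VOCAB[tok] for tok in tokens]

        print(encode("How are you today?"))  # [93, 22, 14, 330, -1]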

    Read the article

  • What follows after lexical analysis?

    - by madflame991
    I'm working on a toy compiler (for a simple language like PL/0) and I have my lexer up and running. At this point I should start working on building the parse tree, but before I start I was wondering: how much information can one gather from just the string of tokens? Here's what I gathered so far:

    - One can already do syntax highlighting having only the list of tokens: numbers, operators and keywords get coloured accordingly.
    - Autoformatting (indenting) should also be possible. How? Specify for each token type how many white spaces or new-line characters should follow it. Also, when printing tokens, keep an alignment variable: when the code printer reads "{", increment it by 1, and decrement it by 1 for "}". Whenever it starts printing on a new line, the code printer indents according to this alignment variable (see the sketch below).
    - In languages without nested subroutines one can get a complete list of subroutines and their signatures. How? Just read what follows the "procedure" or "function" keyword until you hit the first ")" (this should work fine in a Pascal-like language with no nested subroutines).
    - In languages like Pascal you can even determine local variables and their types, as they are declared in a special place (OK, you can't handle initialization as well, but you can parse sequences like "var a, b, c: integer").
    - Detection of recursive functions may also be possible, or even a graph representation of which subroutine calls which. If one can identify the body of a function then one can also search whether it mentions other functions' names.
    - Gathering statistics about the code, like number of lines, instructions, subroutines.

    EDIT: I clarified why I think some processes are possible. As I read comments and responses I realise that the answer depends very much on the language that I'm parsing.
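
    A rough Python sketch of the token-driven auto-indent idea from the second point, assuming a brace/semicolon language and an already tokenised input (the token list below is hypothetical, not output from the asker's lexer):

        # Keep an alignment counter: '{' increments it, '}' decrements it, and
        # every new line is indented by the current counter.
        def reindent(tokens, indent="    "):
            depth = 0
            lines = []
            current = []

            def flush():
                if current:
                    lines.append(indent * depth + " ".join(current))
                    current.clear()

            for tok in tokens:
                if tok == "}":
                    flush()
                    depth = max(depth - 1, 0)
                    lines.append(indent * depth + "}")
                elif tok == "{":
                    current.append("{")
                    flush()
                    depth += 1
                elif tok == ";":
                    current.append(";")
                    flush()
                else:
                    current.append(tok)
            flush()
            return "\n".join(lines)

        print(reindent(["if", "(", "c", ")", "{", "a", "=", "c", "*", "b", ";", "}"]))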

    Read the article

  • Intel VTune Performance Analyser 9.1 not working on Win 7 64

    - by ian
    I got the 32-bit and 64-bit versions of the Intel VTune Performance Analyser. I installed the 64-bit version (I think the installer was the "EMT" one), and when I go to create a new project, clicking the button to select an executable to profile brings up no file dialog. I have an old laptop and installed the 32-bit version onto Windows XP, where it works fine. As for the 64-bit version, I did try changing the compatibility setting to XP SP3, but it still didn't work. Does anyone know how to fix this?

    Read the article

  • Python serialize lexical closures?

    - by dsimcha
    Is there a way to serialize a lexical closure in Python using the standard library? pickle and marshal appear not to work with lexical closures. I don't really care about the details of binary vs. string serialization, etc., it just has to work. For example:

        def foo(bar, baz):
            def closure(waldo):
                return baz * waldo
            return closure

    I'd like to just be able to dump instances of closure to a file and read them back. Edit: One relatively obvious way that this could be solved is with some reflection hacks to convert lexical closures into class objects and vice-versa. One could then convert to classes, serialize, unserialize, and convert back to closures. Heck, given that Python is duck typed, if you overloaded the function call operator of the class to make it look like a function, you wouldn't even really need to convert it back to a closure, and the code using it wouldn't know the difference. If any Python reflection API gurus are out there, please speak up.
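
    A minimal sketch of the class-based workaround mentioned in the edit, without the reflection step: hand-write the closure as a small callable class, which pickles cleanly (the names are taken from the example above):

        import pickle

        # A picklable stand-in for the closure: the captured variable becomes an
        # instance attribute and __call__ plays the role of the function body.
        class Closure:
            def __init__(self, baz):
                self.baz = baz

            def __call__(self, waldo):
                return self.baz * waldo

        def foo(bar, baz):
            return Closure(baz)

        c = foo("unused", 3)
        data = pickle.dumps(c)        # works, unlike pickling a real closure
        restored = pickle.loads(data)
        print(restored(14))           # 42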

    Read the article

  • Web log analyser with daily statistics per URL

    - by Mat
    Are there any good web server log analysis tools that can provide me with daily statistics on individual URLs? I guess I'm looking for something that can drill down into particular URLs on particular days rather than just a monthly summary report. The following don't seem to meet my needs, as they don't offer drilling down to get more detailed info:

    - awstats
    - analog
    - webalizer

    (I'm running an nginx frontend into Apache, with nginx outputting 'combined' format logfiles, if it makes any difference.)
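
    For what it's worth, the per-URL, per-day drill-down is also only a few lines of scripting over a combined-format log. A hedged Python sketch (the log path and field layout are assumptions about a standard combined log, not the asker's setup):

        import re
        from collections import Counter

        # Count hits per (day, URL) from an nginx/Apache combined-format log.
        LOG_LINE = re.compile(
            r'\S+ \S+ \S+ \[(?P<day>[^:]+):[^\]]+\] '
            r'"(?:GET|POST|HEAD|PUT|DELETE) (?P<url>\S+)'
        )

        def daily_url_counts(path):
            counts = Counter()
            with open(path) as fh:
                for line in fh:
                    m = LOG_LINE.match(line)
                    if m:
                        counts[(m.group("day"), m.group("url"))] += 1
            return counts

        for (day, url), hits in daily_url_counts("access.log").most_common(10):
            print(day, url, hits)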

    Read the article

  • Linux disk usage analyser that acts like symlinks are real files

    - by Rory
    I am using git-annex, an extension to the DVCS git which is designed for handling large files. It makes heavy use of symlinks: the actual large files are moved to the .git/annex directory and the original files are symlinked to there. I am running out of disk space, need to clear up, and want to see what's using all my space. Usually I'd use a disk usage tool like ncdu, Baobab or Filelight. However they treat a symlink as essentially empty, and only count the file that it is pointing to as using any space. Which means that when I use git-annex, they show no space used in the main directories and lots of space used in the .git/annex directory. This is not helpful. Is there any (graphical or ncurses-based) disk usage program for Linux (apt-get installable would be easiest) that is capable (through options or not) of counting a symlink as using up the space that the original file uses up? Many have options for different behaviour for hard links, so it makes sense that some should handle symlinks similarly. (I know counting symlinks as using space has flaws, like counting the same space twice, broken symlinks, etc. But that's OK for my purposes.)
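
    In the meantime, a rough Python sketch of the accounting being asked for: walk a tree and charge each symlink with the size of the file it points to (the path is hypothetical, and symlinked directories are not descended into, so treat the totals as approximate):

        import os

        def usage(path):
            """Total bytes under *path*, charging symlinks with their target's size."""
            total = 0
            for root, dirs, files in os.walk(path):
                for name in files:
                    full = os.path.join(root, name)
                    try:
                        # os.stat follows symlinks, os.lstat does not.
                        if os.path.islink(full):
                            total += os.stat(full).st_size
                        else:
                            total += os.lstat(full).st_size
                    except OSError:
                        pass  # broken symlink, permission error, ...
            return total

        print(usage(os.path.expanduser("~/my-annex")), "bytes")  # hypothetical path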

    Read the article

  • difference between lexical, morphological and semantic mistakes?

    - by AnH
    Hi there, I just want to make sure that I have understood the differences between lexical, morphological and semantic mistakes correctly. If I am not mistaken, semantic mistakes deal with problems concerning meaning: for example, writing a sentence that is correct grammar-wise but doesn't make any sense is a semantic mistake. Morphological mistakes deal more or less with how the word should look: for example, "childrenS" is a morphological mistake. So that leaves lexical mistakes - what are those, exactly? Can someone sum up the differences between those three kinds of mistakes, please, so that I may know for sure that I have got them down correctly? Thank you in advance.

    Read the article

  • 'Lexical' scoping of type parameters in C#

    - by leppie
    I have 2 scenarios. This fails:

        class F<X>
        {
            public X X { get; set; }
        }

        error CS0102: The type 'F' already contains a definition for 'X'

    This works:

        class F<X>
        {
            class G
            {
                public X X { get; set; }
            }
        }

    The only logical explanation is that in the second snippet the type parameter X is out of scope, which is not true... Why should a type parameter affect my definitions in a type? IMO, for consistency, either both should work or neither should work. Any other ideas? PS: I call it 'lexical', but it probably is not the correct term.

    Read the article

  • Lexical Analyzer (Scanner) for Language G using C/C++

    - by udsha
        int a = 20;
        int b = 30;
        float c;
        c = 20 + a;
        if (c) {
            a = c*b + a;
        } else {
            c = a - b + c;
        }

    Use C or C++ to implement a lexer for this language G:
    1. Create an unambiguous grammar for language G.
    2. Create a lexical analyzer for language G (see the sketch below).
    3. It should identify the tokens and lexemes of that language.
    4. Create a parse tree.
    5. To use an attribute grammar on the parse tree, the values of the intrinsic attributes should be available in the symbol table.
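
    A minimal sketch of step 2 in Python rather than C/C++ (only to illustrate the token categories for the sample program; the token names are my own, not a prescribed set):

        import re

        # One master regex built from named groups; the first alternative that
        # matches at the current position wins, so keywords are listed before
        # identifiers.
        TOKEN_SPEC = [
            ("NUMBER",  r"\d+"),
            ("KEYWORD", r"\b(?:int|float|if|else)\b"),
            ("IDENT",   r"[A-Za-z_]\w*"),
            ("OP",      r"[+\-*/=]"),
            ("PUNCT",   r"[(){};]"),
            ("SKIP",    r"\s+"),
        ]
        MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

        def tokenize(source):
            for m in MASTER.finditer(source):
                if m.lastgroup != "SKIP":
                    yield m.lastgroup, m.group()

        for kind, lexeme in tokenize("int a = 20; c = 20 + a;"):
            print(kind, lexeme)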

    Read the article

  • Possible typos in ECMAScript 5 specification?

    - by Andy West
    Does anybody know why, at the end of section 7.6 of the ECMA-262, 5th Edition specification, the nonterminals UnicodeLetter, UnicodeCombiningMark, UnicodeDigit, UnicodeconnectorPunctuation, and UnicodeEscapeSequence are not followed by two colons? From section 5.1.6: Nonterminal symbols are shown in italic type. The definition of a nonterminal is introduced by the name of the nonterminal being defined followed by one or more colons. (The number of colons indicates to which grammar the production belongs.) Since lexical productions are distinguished by having two colons, and this is under "Lexical Conventions", I'm assuming that they meant to put the colons in. Does that sound right? Just making sure that these really are nonterminals and they really are part of the lexical grammar. EDIT: I noticed there have been votes to close this. Just to make my case about why this is programming-related, it is relevant to anyone wanting to implement an ECMAScript interpreter.

    Read the article

  • Fuzzy Regex, Text Processing, Lexical Analysis?

    - by justinzane
    I'm not quite sure what terminology to search for, so my title is funky... Here is the workflow I've got:
    1. Semi-structured documents are scanned to file.
    2. The files are OCR'd to text.
    3. The text is parsed into Python objects.
    4. The objects are serialized (to SQL, JSON, whatever) for use.

    The documents are structured like this:

        HEADER
        blah blah, Page ### blah
        Garbage text...
        1. Question Text...
        continued until now.
        A. Choice text... adsadsf.
        B. Another Choice...
        2. Another Question...

    I need to extract the questions and choices. The problem is that, because the text is OCR output, there are occasional strange substitutions like '2' -> 'Z', which makes ordinary regular expressions useless. I've tried the Levenshtein module and it helps, but it requires prior knowledge of what edit distance is to be expected. I don't know whether I'm looking to create a parser? A lexer? Something else? This has led me down all kinds of interesting but nonrelevant paths. Guidance would be greatly appreciated. Oh, also, the text is generally from specific technical domains, so general spelling tools are not so helpful. Regarding the structure of the documents, there is no clear visual pattern - like line breaks or indentation - with the exception of the fact that "questions" usually begin a line. Crap on the document can cause characters to appear before the actual beginning of the line, which means that something along the lines of r'^[0-9]+' does not reliably work. Though the "questions" always begin with an int, a period and a space, the OCR can substitute other characters or skip characters. This is not so much a problem with Tesseract or Cuneiform, rather with the poor quality of the paper documents. # Note: for the project in question, it was decided that having a human prep the OCR'd text was better than spending the time coding a solution. I'd still love good pointers, however.
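
    One low-tech option, sketched in Python below: when spotting question lines, accept a little leading junk and let characters commonly mis-read for digits stand in for them. The patterns and the confusion table are illustrative assumptions, not a tested solution; the third-party regex module's approximate matching (e.g. (?:...){e<=1}) is another route.

        import re

        DIGITISH = r"[0-9OolISZB]"   # digits plus characters OCR often produces instead
        QUESTION_LINE = re.compile(rf"^.{{0,3}}({DIGITISH}{{1,3}})[.\)]\s+(.*)$")
        CHOICE_LINE = re.compile(r"^.{0,3}([A-H])[.\)]\s+(.*)$")
        OCR_FIXUPS = str.maketrans("OolISZB", "0011528")  # map look-alikes back to digits

        def parse(lines):
            for line in lines:
                q = QUESTION_LINE.match(line)
                if q:
                    yield "question", int(q.group(1).translate(OCR_FIXUPS)), q.group(2)
                    continue
                c = CHOICE_LINE.match(line)
                if c:
                    yield "choice", c.group(1), c.group(2)

        sample = ["Z. Another Question about impedance?", "A. Choice text..."]
        print(list(parse(sample)))
        # [('question', 2, 'Another Question about impedance?'), ('choice', 'A', 'Choice text...')]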

    Read the article

  • Lexical and dynamic scoping in Mathematica: Local variables with Module, With, and Block

    - by dreeves
    The following code returns 14 as you'd expect:

        Block[{expr}, expr = 2 z; f[z_] = expr; f[7]]

    But if you change that Block to a Module then it returns 2*z. It seems to not matter what other variables besides expr you localize. I thought I understood Module, Block, and With in Mathematica but I can't explain the difference in behavior between Module and Block in this example. Related resources:

    - Tutorial on Modularity and the Naming of Things from the Mathematica documentation
    - Excerpt from a book by Paul R. Wellin, Richard J. Gaylord, and Samuel N. Kamin
    - Explanation from Dave Withoff on the Mathematica newsgroup

    Read the article

  • lexical analysis gives only one output?

    - by Caffè
    I tested this example (lexe.java), but it gave me only one output. I gave this text as a reader:

        public class LexeTest{
            private int a = 14;
        }

    And the nextToken() function is:

        public Category nextToken () {
            if (inp.findWithinHorizon (tokenPat, 0) == null)
                return Category.EOF;
            else {
                lastLexeme = inp.match ().group (0);
                if (inp.match ().start (1) != -1)
                    return nextToken ();
                else if (inp.match ().start (2) != -1)
                    return Category.IDENT;
                else if (inp.match ().start (3) != -1)
                    return Category.NUMERAL;
                Category result = tokenMap.get (lastLexeme);
                if (result == null)
                    return Category.ERROR;
                else
                    return result;
            }
        }

    Inside the main method:

        System.out.println(lexeObject.nextToken());

    The output is:

        IDENT

    Why? The text file contains multiple keywords. Does anyone know what the problem is?

    Read the article

  • White paper: "Open-source inventory and audit tools" - Smile discusses the value of analysing your "software assets"

    White paper: "Open-source inventory and audit tools" - Smile discusses the value of analysing your "software assets". For Smile, taking inventory is the starting point of an open-source policy: "it is a matter of taking stock of the open-source software used in the company or included in the composition of a given program". The aim is to optimise and support the analysis of a "software estate" (identifying the open-source components used, their licences, etc.). To review the various tools on the market (Blackduck Software, Protecode, Palamida, OpenLogic, or the French Antepedia), Smile has just...

    Read the article

  • Facebook wants to analyse cursor movements on the screen; the technology is already being tested

    Facebook wants to analyse cursor movements on the screen; the technology is already being tested. Ken Rudin, Facebook's head of analytics and data operations, told the Wall Street Journal that tests are under way on new techniques for observing user behaviour. First of all, Facebook intends to track its users' habits by following the movements of the mouse cursor as they browse the network...

    Read the article
