Search Results

Search found 1820 results on 73 pages for 'parser combinators'.

Page 62/73 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • How do I do MongoDB console-style queries in PHP?

    - by Zoe Boles
    I'm trying to get a MongoDB query from the JavaScript console into my PHP app. What I'm trying to avoid is having to translate the query into the PHP native driver's format... I don't want to hand-build arrays and hand-chain functions any more than I want to manually build an array of MySQL's internal query structure just to get data. I already have a string producing the exact content I want in the Mongo console: db.intake.find({"processed": {"$exists": "false"}}).sort({"insert_date": "1"}).limit(10); The question is: is there a way for me to hand this string, as is, to MongoDB and have it return a cursor with the dataset I request? Right now I'm at the "write your own parser, because it's not valid JSON, to turn a subset of valid Mongo queries into the format the PHP native driver wants" stage, which isn't much fun. I don't want an ORM or a massive wrapper library; I just want to give a function my query string as it exists in the console and get an Iterator back that I can work with. I know there are a couple of PHP-based Mongo manager applications that apparently take console-style queries and handle them, but from an initial browse through their code, I'm not sure how they handle the translation. I absolutely love working with Mongo in the console, but I'm rapidly starting to loathe the thought of converting every query into the format the native driver wants...
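
    For comparison, a rough sketch of what the same query looks like under the legacy Mongo PHP extension (not the string-to-cursor shortcut being asked for; "mydb" is a placeholder database name, and the driver usually wants a real boolean/int where the console example used "false" and "1"):

        <?php
        // minimal sketch, assuming the legacy Mongo extension and a local server
        $mongo = new Mongo();                 // connects to localhost:27017
        $collection = $mongo->mydb->intake;   // "mydb" is a placeholder
        $cursor = $collection
            ->find(array('processed' => array('$exists' => false)))
            ->sort(array('insert_date' => 1))
            ->limit(10);
        foreach ($cursor as $doc) {
            // each $doc is an associative array
        }
        ?>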

    Read the article

  • How to create more complex Lucene query strings?

    - by boris callens
    This question is a spin-off from this question. My inquiry is two-fold, but because both are related I think it is a good idea to put them together. How do I programmatically create queries? I know I could build strings and have them parsed by the query parser, but as I gather bits and pieces of information from other resources, there seems to be a programmatic way to do this. What are the syntax rules for Lucene queries? --EDIT-- I'll give an example of the requirements for a query I would like to make. Say I have 5 fields: First Name, Last Name, Age, Address, and Everything. All fields are optional; the last field should search over all the other fields. I go over every field and check it with IsNullOrEmpty(). If it's not empty, I would like to append a part to my query so it adds the relevant search clause. First name and last name should be exact matches and carry more weight than the other fields. Age is a string and should exact-match. The words in Address can vary in order, and so can those in Everything. How should I go about this?
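
    A sketch of the programmatic route against the Lucene 2.x/3.x API (isNullOrEmpty is a hypothetical helper standing in for the check described above):

        // rough sketch; field names taken from the example above
        BooleanQuery query = new BooleanQuery();
        if (!isNullOrEmpty(firstName)) {
            TermQuery tq = new TermQuery(new Term("firstName", firstName));
            tq.setBoost(2.0f);                       // exact match, extra weight
            query.add(tq, BooleanClause.Occur.MUST);
        }
        if (!isNullOrEmpty(address)) {
            // order-independent: one optional clause per token
            for (String word : address.split("\\s+")) {
                query.add(new TermQuery(new Term("address", word)),
                          BooleanClause.Occur.SHOULD);
            }
        }

    The same BooleanQuery/TermQuery combination covers the "Everything" field by adding SHOULD clauses against each of the other fields.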

    Read the article

  • Is there a better tool than postcat for viewing postfix mail queue files?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered sitting on our secondary mail server. Their link to the main server had been (and still is) down for two days and they needed to see their email. So I wrote up a quick Perl script to use mailq in combination with postcat to dump each email for their address into a separate file, tar'd it up and sent it off. Horrible code, I know, but it was urgent. My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could extract their email attachments and maybe strip some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser - so I was wondering if any of you have already done so, or know of a better command to use? Here's the code for my current solution:

        #!/usr/bin/perl
        $qCmd = "mailq | grep -B 2 \"someemailaddress@isp\" | cut -d \" \" -f 1";
        @data = split(/\n/, `$qCmd`);
        $i = 0;
        foreach $line (@data) {
            $i++;
            $remainder = $i % 2;
            if ($remainder == 0) { next; }
            if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") { next; }
            print "Processing: " . $line . "\n";
            `postcat -q $line > $line.email.txt`;
            $subject = `cat $line.email.txt | grep "Subject:"`;
            #print "SUB" . $subject;
            #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
        }

    Any advice appreciated.
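
    Since postcat already yields a complete RFC 822 message, one option (untested sketch, using the CPAN MIME-tools distribution; paths are hypothetical) is to feed each dumped file through MIME::Parser, which splits out attachments for free:

        use MIME::Parser;

        my $parser = MIME::Parser->new;
        $parser->output_under("/tmp/queue-mail");   # attachments land in per-message dirs
        my $entity = $parser->parse_open("$line.email.txt");
        printf "Subject: %s\n", $entity->head->get('Subject');
        for my $part ($entity->parts) {
            print "attachment: ", $part->head->recommended_filename || '-', "\n";
        }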

    Read the article

  • Is there a way to parse XML via SAX/DOM with line numbers available per node?

    - by Chris
    I have already written a DOM parser for a large XML document format that contains a number of items that can be used to automatically generate Java code. This is limited to small expressions that are then merged into a dynamically generated Java source file. So far, so good. Everything works. BUT - I wish to embed the line number of the XML node each piece of Java code was included from (so that if the configuration contains uncompilable code, each method will have a pointer to the source XML document and the line number, for ease of debugging). I don't require the line number at parse time, and I don't need to validate the XML source document and throw an error at a particular line number. I need to be able to access the line number for each node and attribute in my DOM, or per SAX event. Any suggestions on how I might achieve this? P.S. I have read that StAX has a method to obtain the line number while parsing, but ideally I would like to achieve the same result with regular SAX/DOM processing in Java 4/5 rather than become a Java 6+ application or take on extra .jar files.
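
    For the SAX side, the standard mechanism is the Locator the parser hands to the ContentHandler before parsing starts; a sketch (note the caveat that the reported line is where the start tag ends, not where it begins):

        // capture line numbers during SAX parsing via the Locator callback
        public class LineNumberHandler extends org.xml.sax.helpers.DefaultHandler {
            private org.xml.sax.Locator locator;

            @Override
            public void setDocumentLocator(org.xml.sax.Locator locator) {
                this.locator = locator;   // called before startDocument
            }

            @Override
            public void startElement(String uri, String local, String qName,
                                     org.xml.sax.Attributes atts) {
                int line = locator.getLineNumber();  // line where this tag ends
                // stash "line" alongside the node being built, e.g. in a Map
                // keyed by an XPath-ish node path, or in a DOM user-data slot
            }
        }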

    Read the article

  • How do I ensure that a regex does not match an empty string?

    - by Dancrumb
    I'm using the Jison parser generator for JavaScript and am having problems with my language specification. The program I'm writing will be a calculator that can handle feet, inches and sixteenths. In order to do this, I have the following specification:

        %%
        ([0-9]+\s*"'")?\s*([0-9]+\s*"\"")?\s*([0-9]+\s*"s")?   {return 'FIS';}
        [0-9]+("."[0-9]+)?\b                                   {return 'NUMBER';}
        \s+                                                    {/* skip whitespace */}
        "*"                                                    {return '*';}
        "/"                                                    {return '/';}
        "-"                                                    {return '-';}
        "+"                                                    {return '+';}
        "("                                                    {return '(';}
        ")"                                                    {return ')';}
        <<EOF>>                                                {return 'EOF';}

    Most of these lines come from a basic calculator specification; I simply added the first rule. The regex correctly matches feet-inches-sixteenths values such as 6'4" (six feet, four inches) or 4"5s (four inches, five sixteenths), with any kind of whitespace between the numbers and indicators. The problem is that the regex also matches the empty string, because all three groups are optional. As a result, the lexical analysis always records a FIS at the start of the line and the parsing fails. Here is my question: is there a way to modify this regex to guarantee that it will only match a non-zero-length string?
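
    One common fix is a (?=[0-9]) lookahead prefix, which demands at least one digit without consuming anything; Jison compiles lexer rules to JavaScript RegExp, so it should carry over, though that part is an assumption. A plain-JavaScript check of the idea:

        // sanity check of the lookahead idea in plain JS
        var fis = /^(?=[0-9])([0-9]+\s*')?\s*([0-9]+\s*")?\s*([0-9]+\s*s)?/;
        console.log(fis.test('6\'4"'));  // true
        console.log(fis.test('4"5s'));   // true
        console.log(fis.test(''));       // false - lookahead fails on empty input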

    Read the article

  • Extremely strange glitch in Chrome - parses contents of string!

    - by George Edison
    Okay - this is the dumbest glitch I have seen in a while:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
          <head>
            <script type='text/javascript'>
              var data = "</script>";
            </script>
          </head>
          <body>
            This should break!
          </body>
        </html>

    This causes syntax errors, because the JavaScript parser is apparently reading the contents of the string. How stupid! How can I put </script> in my code? Is there any way? Is there a valid reason for this behavior?
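
    For what it's worth, this isn't Chrome-specific: the HTML parser terminates the script element at the first literal </script> it sees, before the JavaScript engine ever runs, so every browser behaves this way. The usual workaround is to break the tag up inside the string:

        var data = "<\/script>";   // identical string at runtime; the HTML
                                   // scanner no longer sees a closing tag

    Concatenation ("</scr" + "ipt>") achieves the same thing.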

    Read the article

  • WSDL XML parsing: maxLength problem after encoding of text

    - by MichaelD
    We are working together with another firm. Our application communicates with theirs through WCF on our side and a custom Java WSDL handler on their side. They specify the WSDL format, and one of the rules is that a specific string cannot contain more than 15 characters (normally it's 60, but I use 15 here for the sake of the example). When we try to send the following string to them, we get an error that the string is too long according to the WSDL: "example & test". This is a string of 14 characters, so it should be allowed. The Microsoft WCF stack encodes this as "example &amp; test", and that encoded string is 18 characters long. Now, what is the standard behavior when checking a maxLength defined in a message? Is it checked against the encoded message or the decoded message? I would think it's the decoded message, but I'm not sure. If it is the encoded message, how should we handle this so we would know where to split the string?
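
    For reference, a 15-character rule like this is presumably expressed as an XSD facet along these lines (a sketch, not the firm's actual schema); as far as I understand schema validation, length facets apply to the parsed value, where &amp; has already decoded back to a single &, so counting the encoded form would arguably be a bug on the receiving side:

        <!-- sketch of the kind of facet behind the rule -->
        <xs:simpleType name="ShortString">
          <xs:restriction base="xs:string">
            <xs:maxLength value="15"/>
          </xs:restriction>
        </xs:simpleType>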

    Read the article

  • PHP XML Parsing

    - by ashchawla
    Hi, I am making a GET request in PHP but the parser is having issues. I am looking for a way to retrieve the value of URL in the following example. The response XML looks like this:

        <?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
        <testtemplate JobName="test1">
          <Sources>
            <Source xmlns="http://www.microsoft.com/test/schema/api">
              <System>
                <SystemIdentifier>HTTPS</SystemIdentifier>
                <URL>http://example.com</URL>
              </System>
            </Source>
          </Sources>
        </testtemplate>

    and my PHP code looks like this:

        $xml = DOMDocument::LoadXML($response);
        $mypath = new DOMXPath($xml);
        $mypath->registerNamespace("a", "http://www.microsoft.com/test/schema/api");
        $url = $mypath->evaluate("/testtemplate/Sources/Source/System/URL");
        $message = $url->item(0)->value;
        print($message);

    Any help would be appreciated. Thanks, Ashish
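
    A likely fix, sketched: the prefix registered with registerNamespace never appears in the XPath, yet the xmlns on Source puts Source and everything below it in that namespace; also, element nodes expose nodeValue rather than value:

        // testtemplate and Sources are in no namespace; Source and below are
        $url = $mypath->evaluate("/testtemplate/Sources/a:Source/a:System/a:URL");
        $message = $url->item(0)->nodeValue;   // DOMNode has nodeValue, not value
        print($message);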

    Read the article

  • C++: parsing with a simple regular expression, or should I use sscanf?

    - by Helltone
    I need to parse a string like func1(arg1, arg2); func2(arg3, arg4);. It's not a very complex parsing problem, so I would prefer to avoid resorting to flex/bison or similar utilities. My first approach was to use POSIX C regcomp/regexec or the Boost implementation of C++ std::regex. I wrote the following regular expression, which does not work (I'll explain why further on):

        "^"
        "[ ;\t\n]*"
        "("                        // (1) identifier
          "[a-zA-Z_][a-zA-Z0-9_]*"
        ")"
        "[ \t\n]*"
        "("                        // (2) non-marking
          "\["
          "("                      // (3) non-marking
            "[ \t]*"
            "("                    // (4..n-1) argument
              "[a-zA-Z0-9_]+"
            ")"
            "[ \t\n]*"
            ","
          ")*"
          "[ \t\n]*"
          "("                      // (n) last argument
            "[a-zA-Z0-9_]+"
          ")"
          "]"
        ")?"
        "[ \t\n]*"
        ";"

    Note that group 1 captures the identifier and groups 4..n-1 are intended to capture arguments, except the last, which is captured by group n. When I apply this regex to, say, func(arg1, arg2, arg3), the result I get is the array {func, arg2, arg3}. This is wrong, because arg1 is not in it! The problem is that in the standard regex libraries, submatches only capture the last match. In other words, if you have for instance the regex "((a*|b*))*" applied to "babb", the result of the inner match will be bb and all previous captures will have been forgotten. Another thing that annoys me here is that in case of error there is no way to know which character was not recognized, as these functions provide very little information about the state of the parser when the input is rejected. So I don't know if I'm missing something here... In this case, should I use sscanf or similar instead? Note that I prefer to use the C/C++ standard libraries (and maybe Boost).
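
    Since POSIX/ECMAScript captures keep only the last repetition, the usual workaround is two passes with a regex iterator: one pattern per call, then a second pattern over the captured argument list. A sketch with C++11 <regex> (Boost.Regex is nearly identical):

        #include <iostream>
        #include <regex>
        #include <string>

        int main() {
            std::string input = "func1(arg1, arg2); func2(arg3, arg4);";
            std::regex call(R"(([A-Za-z_]\w*)\s*\(([^)]*)\)\s*;)");  // one call
            std::regex arg(R"(\w+)");                                // one argument
            for (std::sregex_iterator it(input.begin(), input.end(), call), end;
                 it != end; ++it) {
                std::cout << "call: " << (*it)[1] << '\n';
                std::string args = (*it)[2];
                for (std::sregex_iterator a(args.begin(), args.end(), arg), e;
                     a != e; ++a)
                    std::cout << "  arg: " << a->str() << '\n';
            }
        }

    Iterating also gives a position after the last match, which helps report where unrecognized input begins.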

    Read the article

  • MS Access (Jet) transactions, workspaces & scope

    - by Eric G
    I am having trouble committing a transaction (using Access 2003 DAO). It's acting as if I had never called BeginTrans -- I get error 3034 on CommitTrans, "You tried to commit or rollback a transaction without first beginning a transaction", and the changes are written to the database (presumably because they were never wrapped in a transaction). However, BeginTrans does run, if you step through it. I am running it within the Access environment using the DBEngine(0) workspace. The tables I'm updating are all opened via a Jet database connection (to the same database) and updated using DAO.Recordset.Update. The connection is opened before BeginTrans is called. I'm not doing anything weird in the middle of the transaction, like closing/opening connections or using multiple workspaces. There is one nested transaction level (basically it's wrapping multiple transacted updates in an outer transaction, so if any fail they all fail). The inner transactions run without errors; it's the outer transaction that doesn't work. Here are a few things I've looked into and ruled out: The transaction is spread across several methods, and BeginTrans, CommitTrans and Rollback are all in different places. But when I tried a simple test of running a transaction this way, it doesn't seem like this should matter. I thought maybe the database connection gets closed when it goes out of local scope, even though I have another 'global' reference to it (I'm never sure what DAO does with database connections, to be honest). But this seems not to be the case -- right before the commit, the connection and its recordsets are alive (I can check their properties, EOF = False, etc.). My CommitTrans and Rollback are done within event callbacks. (Very basically, a parser program throws an 'onLoad' or 'onLoadFail' event at the end of parsing, which I handle by either committing or rolling back the inserts I made during processing.) However, again, trying a simple test, it doesn't seem like this should matter. Any ideas why this isn't working for me? Thanks.
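
    One thing worth pinning down (an untested suggestion, not a diagnosis): DAO transactions belong to a Workspace, so holding a single Workspace reference across the whole unit of work rules out BeginTrans and CommitTrans silently landing on different workspace objects, and makes it explicit that only databases opened from that workspace are covered:

        ' sketch: one Workspace reference for the whole unit of work
        Dim wrk As DAO.Workspace
        Set wrk = DBEngine.Workspaces(0)
        wrk.BeginTrans
        ' ... all Recordset.Update calls against databases opened via wrk ...
        wrk.CommitTrans   ' or wrk.Rollback in the onLoadFail path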

    Read the article

  • How to call a JavaScript function in JSF EL conditionally?

    - by Paul
    I have to call a JavaScript function based on a bean value. I use the following code:

        onmouseover="#{occasionBean.user.userPreference.defaultPreview==true?'':'Tip()'})"

    I also need to send some parameters in Tip(), like this: Tip(''). The error I am getting is:

        javax.servlet.jsp.JspException: javax.faces.el.EvaluationException:
        com.sun.faces.el.impl.parser.ParseException: Encountered "test" at line 1, column 60.
        Was expecting one of: "}" ... "." ... ">" ... "gt" ... "<" ... "lt" ... "==" ... "eq" ...
        "<=" ... "le" ... ">=" ... "ge" ... "!=" ... "ne" ... "[" ... "+" ... "-" ... "*" ...
        "/" ... "div" ... "%" ... "mod" ... "and" ... "&&" ... "or" ... "||" ... "?"
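
    For comparison, a version of the attribute without the unbalanced }) and the redundant ==true, in case the parse failure starts there (just a guess from the snippet):

        onmouseover="#{occasionBean.user.userPreference.defaultPreview ? '' : 'Tip()'}"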

    Read the article

  • Indexing and Searching Over Word Level Annotation Layers in Lucene

    - by dmcer
    I have a data set with multiple layers of annotation over the underlying text, such as part-of-speech tags, chunks from a shallow parser, named entities, and others from various natural language processing (NLP) tools. For a sentence like The man went to the store, the annotations might look like:

        Word    POS   Chunk   NER
        =====   ===   =====   ========
        The     DT    NP      Person
        man     NN    NP      Person
        went    VBD   VP      -
        to      TO    PP      -
        the     DT    NP      Location
        store   NN    NP      Location

    I'd like to index a bunch of documents with annotations like these using Lucene and then perform searches across the different layers. An example of a simple query would be to retrieve all documents where Washington is tagged as a person. While I'm not absolutely committed to the notation, syntactically end-users might enter the query as follows:

        Query: Word=Washington,NER=Person

    I'd also like to do more complex queries involving the sequential order of annotations across different layers, e.g. find all the documents where there's a word tagged person, followed by the words arrived at, followed by a word tagged location. Such a query might look like:

        Query: "NER=Person Word=arrived Word=at NER=Location"

    What's a good way to approach this with Lucene? Is there any way to index and search over document fields that contain structured tokens?
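
    One approach worth sketching (not battle-tested here): stack every annotation token at the same position as its word by emitting it with a position increment of 0. Roughly, against the Lucene 3.x attribute API:

        // "stacked token" trick: layers share the word's position, so phrase
        // queries can mix layers freely
        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.analysis.tokenattributes.*;

        final class AnnotationStream extends TokenStream {
            private final String[][] tokens; // {{"Word=The","POS=DT","NER=Person"}, ...}
            private int word = 0, layer = 0;
            private final CharTermAttribute term = addAttribute(CharTermAttribute.class);
            private final PositionIncrementAttribute posIncr =
                addAttribute(PositionIncrementAttribute.class);

            AnnotationStream(String[][] tokens) { this.tokens = tokens; }

            @Override
            public boolean incrementToken() {
                if (word >= tokens.length) return false;
                term.setEmpty().append(tokens[word][layer]);
                posIncr.setPositionIncrement(layer == 0 ? 1 : 0); // stack on the word
                if (++layer == tokens[word].length) { layer = 0; word++; }
                return true;
            }
        }

    With such a field, the second example becomes an ordinary PhraseQuery over "NER=Person", "Word=arrived", "Word=at", "NER=Location", since stacked tokens share positions.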

    Read the article

  • What language is this???

    - by Misha Koshelev
    Dear All: I thought this was JavaScript... but the parser is giving me trouble. Any ideas? Thank you! Misha

        for (;;); {"error":0,"errorSummary":"","errorDescription":"","errorIsWarning":false,"silentError":0,"payload":{"collections":[{"name":"bcm","type":"flp","filter":"flp_662923563701","value":"662923563701","editable":true,"deletable":true,"members":["1319651388","539562714","710814793","569071038","1553739575","2413243"]},{"name":"mstp","type":"flp","filter":"flp_715806870131","value":"715806870131","editable":true,"deletable":true,"members":["1263807225","1159429816","508447486","508005223","1234906348","642723993","552875889","23401888","10701320","8302901","7931988","3007490","1286522890","1128447272","1076062553","775679867","737520202","640799498","224400055","224400048","14211567","7909445","3005965","2404364","218216","660037273","224400089","73306230","9603033","1111694","1034418884","775680513","704526828","518753881","512477182","224400016","24904610","19000876","5403952","3005641","2100348","100000421128298","1445411167","691445174","1020020100","795471177","683724539","682441089","532450522","224400129","224400005","3006522","2246813","1302265","7197","1900494","100000474978266","2533582","1205125","1384091677","1260996959","710814793","514951289","224400164","224400156","173601800","13304723","7938844","3004783","3001379","302817","716739950","706849","1418109424","562676898","82501644","3007569","13173"]},{"name":"mystery","type":"flp","filter":"flp_687949656211","value":"687949656211","editable":true,"deletable":true,"members":["100001286464748","508123007","100001161894460","1148567030","1048974191","769992391","831734347","15347","1297180076","756692945","3005266","733396195","34410910","100000940154241","748426280","569417581","1318922027","100000164920046","1475269609","1436536592","100000162108385","754095305","100000421128298","537833189","100000692471928","7920231","673753496","3006217","1221878","8365333","1128447272","224400133","218216","505457123","1421958541","1838295926","2349408","1622810085","1201640391","510959992","23429895","542118016","1017385668","586225579","625100539","100000474886633","26404148","1384091677","224400156","806908635","843187401","400435","768261176","7901808","748496482","1541469473","2511982","25401573","503715506","1226000844","559195162","41400094","1099436201","409816","1584400985","1577092523","100000349351880","199301581"]},{"name":"SMS Subscriptions","type":"ms","filter":"ms","value":"3","editable":true,"deletable":false,"members":[]}],"current_view":false}}

    Read the article

  • node.js UDP data loss at high packet rates

    - by koleto
    I am observing significant data loss on a UDP connection with node.js 0.6.18 and 0.8.0. It appears at high packet rates, around 1200 packets per second with frames near the 1500-byte limit. Each data packet carries an incrementing number, so it is easy to track the number of lost packets.

        var server = dgram.createSocket("udp4");
        server.on("message", function (message, rinfo) {
            //~ processData(message);
            //~ writeData(message, null, 5000);
        }).bind(10001);

    In the receiving callback I tested two cases. First I saved 5000 packets to a file; the result was no dropped packets. Then I included a data-processing routine and got about a 50% drop rate. I expected the process-data routine to be completely asynchronous and not introduce dead time into the system, since it is a simple parser that processes the binary data in the packet and emits events to a further processing routine. It seems that the parsing routine introduces dead time during which the event handler is unable to handle each packet. At low packet rates (< 1200 packets/sec) no data loss is observed! Is this a bug, or am I doing something wrong?
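
    A sketch of one mitigation (processData stands in for the parsing routine mentioned above): let the "message" callback only queue the Buffer and drain the queue in small batches on a timer, so the socket's reads are never blocked by parsing. The kernel dropping packets from an overflowing receive buffer while the single thread parses is the usual mechanism behind losses like this:

        var dgram = require('dgram');
        var queue = [];

        var server = dgram.createSocket('udp4');
        server.on('message', function (message) {
            queue.push(message);              // Buffer is already a copy; just store it
        });
        server.bind(10001);

        function drain() {
            var batch = queue.splice(0, 100); // parse at most 100 packets per pass
            for (var i = 0; i < batch.length; i++) processData(batch[i]);
            setTimeout(drain, 1);             // yield back to the event loop
        }
        drain();

    Raising the socket's receive buffer at the OS level helps too, but keeping the handler cheap is the bigger win.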

    Read the article

  • PHP: json_decode dumping NULL, BOM not found

    - by SerEnder
    I've been trying to find out why this 'json_encode'd string isn't parsing out correctly, and came across previously answered questions where the UTF BOM sequence was throwing the error, but that didn't help me here. Here's the code that isn't currently working:

        //Decode the notes attached to the sig
        $aNotes = json_decode($rule->getNotes(), true);
        $bom = pack("CCC", 0xef, 0xbb, 0xbf);
        if (0 == strncmp($rule->getNotes(), $bom, 3)) {
            print('BOM detected - json encoding in UTF-8<br/>');
        } else {
            print('BOM NOT detected - json encoding correctly<br/>');
        }
        print('rule->getNotes:<br/>' . $rule->getNotes() . '<br/>');
        var_dump($aNotes);

    Which generates this result:

        BOM NOT detected - json encoding correctly
        rule->getNotes:
        [{"lDate":"Unknown","sAuthor":"Unknown","sNote":"This is a general purpose Russian spam rule that matches anything starting with 2, 3 or 4 hex digits followed by a domain name ending with .ru -RSK 2010-05-10"},{"lDate":"1295031463082","sAuthor":"Drew Thorstenson","sNote":"this is Ryan's ru rule"}]
        NULL

    I've run it through JSON Lint, which said it was valid, and an online JSON parser, which parsed it correctly too. Any insight would be greatly appreciated.
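
    A quick way to make json_decode explain itself (PHP 5.3+; the JSON_ERROR_UTF8 constant needs 5.3.3) - hidden control characters or malformed UTF-8 that don't survive the browser round trip are common culprits when the printed string lints fine:

        $aNotes = json_decode($rule->getNotes(), true);
        if ($aNotes === null) {
            switch (json_last_error()) {
                case JSON_ERROR_NONE:      print('decoded a literal null');   break;
                case JSON_ERROR_UTF8:      print('malformed UTF-8 in input'); break;
                case JSON_ERROR_SYNTAX:    print('syntax error');             break;
                case JSON_ERROR_CTRL_CHAR: print('control character found');  break;
                default:                   print('other decode error');       break;
            }
        }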

    Read the article

  • Sencha: how to pass a parameter to PHP using Ext.data.HttpProxy?

    - by Lauraire Jérémy
    I have successfully completed this great tutorial: http://www.sencha.com/learn/ext-js-grids-with-php-and-sql/ - I just can't use the baseParams field specified with the proxy. Here is my code, which follows the tutorial's description.

    My store, Communes.js:

        Ext.define('app.store.Communes', {
            extend: 'Ext.data.Store',
            id: 'communesstore',
            requires: ['app.model.Commune'],
            config: {
                model: 'app.model.Commune',
                departement: 'var',
                // the proxy with POST method
                proxy: new Ext.data.HttpProxy({
                    url: 'app/php/communes.php', // file to connect to
                    method: 'POST'
                }),
                // the parameter passed to the proxy
                baseParams: { departement: "VAR" },
                // the JSON parser
                reader: new Ext.data.JsonReader({
                    // we tell the datastore where to get its data from
                    rootProperty: 'results'
                }, [
                    { name: 'IdCommune', type: 'integer' },
                    { name: 'NomCommune', type: 'string' }
                ]),
                autoLoad: true,
                sortInfo: {
                    field: 'IdCommune',
                    direction: "ASC"
                }
            }
        });

    The PHP file, communes.php:

        <?php
        /**
         * CREATE THE CONNECTION
         */
        mysql_connect("localhost", "root", "pwd")
            or die("Could not connect: " . mysql_error());
        mysql_select_db("databasename");

        /**
         * INITIATE THE POST
         */
        $departement = 'null';
        if (isset($_POST['departement'])) {
            $departement = $_POST['departement']; // Get this from Ext
        }
        getListCommunes($departement);

        function getListCommunes($departement)
        {
            [CODE HERE WORK FINE : just a connection and query but $departement is NULL]
        }
        ?>

    No parameter is passed via the POST method... Any idea?
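
    A hedged guess: the store uses the Ext JS 4 / Sencha Touch 2 Ext.define style, and in that API the Ext 3 baseParams config from the tutorial was replaced by the proxy's extraParams; a declarative proxy along these lines may be what's needed:

        // sketch for the Ext 4 / Touch 2 config style the store above is using
        proxy: {
            type: 'ajax',
            url: 'app/php/communes.php',
            actionMethods: { read: 'POST' },        // load() defaults to GET otherwise
            extraParams: { departement: 'VAR' },
            reader: { type: 'json', rootProperty: 'results' }
        }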

    Read the article

  • Replace string with incremented value

    - by Andrei
    Hello, I'm trying to write a CSS parser to automatically dispatch URLs in background images to different subdomains in order to parallelize downloads. Basically, I want to replace things like url(/assets/some-background-image.png) with url(http://assets[increment].domain.com/assets/some-background-image.png). I'm using this inside a class that I eventually want to evolve into doing various CSS parsing tasks. Here are the relevant parts of the class:

        private function parallelizeDownloads()
        {
            static $counter = 1;
            $newURL = "url(http://assets" . $counter . ".domain.com";
            // the counter needs to be reset when it reaches 4,
            // in order to limit rotation to 4 subdomains
            if ($counter == 4) {
                $counter = 1;
            }
            $counter++;
            return $newURL;
        }

        public function replaceURLs()
        {
            // this is mostly nonsense, but I know the code I'm looking for
            // looks somewhat like this. Note: $this->css contains the CSS string.
            preg_match("/url/i", $this->css, $match);
            foreach ($match as $URL) {
                $newURL = self::parallelizeDownloads();
                $this->css = str_replace($match, $newURL, $this->css);
            }
        }
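
    A sketch of the replaceURLs body using preg_replace_callback, which invokes the callback once per match (PHP 5.4+ so $this is bound inside the closure; the pattern assumes root-relative url(/...) values):

        public function replaceURLs()
        {
            $this->css = preg_replace_callback(
                '~url\(\s*(/[^)]+)\s*\)~i',
                function ($m) {
                    // parallelizeDownloads() returns "url(http://assetsN.domain.com"
                    return $this->parallelizeDownloads() . $m[1] . ')';
                },
                $this->css
            );
        }

    One design note: with the reset check placed before the increment, the rotation runs 1, 2, 3, 4, 2, 3, 4, 2... so assets1 is only used once; returning based on something like ($counter % 4) + 1 keeps the four subdomains evenly loaded.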

    Read the article

  • Thread loses Message after wait() and notify()

    - by fugu2.0
    Hey guys! I have a problem handling messages in a thread. My run method looks like this:

        public void run() {
            Looper.prepareLooper();
            parserHandler = new Handler() {
                public void handleMessage(Message msg) {
                    Log.i("", "id from message: " + msg.getData().getString("id"));
                    // handle message
                    this.wait();
                }
            };
        }

    I have several Activities sending messages to this thread, like this:

        Message parserMessage = new Message();
        Bundle data = new Bundle();
        data.putString("id", realId);
        data.putString("callingClass", "CategoryList");
        parserMessage.setData(data);
        parserMessage.what = PARSE_CATEGORIES_OR_PRODUCTS;
        parserHandler = parser.getParserHandler();
        synchronized (parserHandler) {
            parserHandler.notify();
            Log.i("", "message ID: " + parserMessage.getData().getString("id"));
        }
        parserHandler.sendMessage(parserMessage);

    The problem is that the run method logs "id from message: null", even though "message ID" has a value in the log statement. Why does the message "lose" its data when being sent to the thread? Does it have something to do with the notify? Thanks for your help.
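
    For what it's worth, Looper.prepareLooper() isn't the actual framework API (the pair is Looper.prepare() plus a final Looper.loop()), and wait()/notify() fight the message queue, which may be where the Message goes astray. A sketch with HandlerThread, which wires the looper up and sleeps between messages on its own:

        // Looper-backed handler without any manual wait()/notify()
        HandlerThread parserThread = new HandlerThread("parser");
        parserThread.start();
        Handler parserHandler = new Handler(parserThread.getLooper()) {
            @Override
            public void handleMessage(Message msg) {
                Log.i("", "id from message: " + msg.getData().getString("id"));
                // handle message; the looper idles by itself between messages
            }
        };

    The sending side then just builds the Message (Handler.obtainMessage is the usual route) and calls sendMessage, with no synchronized/notify block at all.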

    Read the article

  • Managing libraries and imports in a programming language

    - by sub
    I've created an interpreter for a stupid programming language in C++, and the whole core structure is finished (tokenizer, parser, interpreter including symbol tables, core functions, etc.). Now I have a problem with creating and managing the function libraries for this interpreter (I'll explain what I mean by that later). Currently my core function handler is horrible:

        // Simplified version
        myLangResult SystemFunction( name, argc, argv )
        {
            if ( name == "print" ) {
                if ( argc < 1 ) { Error('blah'); }
                cout << argv[ 0 ];
            }
            else if ( name == "input" ) {
                if ( argc < 1 ) { Error('blah'); }
                string res;
                getline( cin, res );
                SetVariable( argv[ 0 ], res );
            }
            else if ( name == "exit" ) {
                exit( 0 );
            }
        }

    Now imagine each else if being 10 times more complicated, and 25 more system functions. Unmaintainable, feels horrible, is horrible. So I thought: how about creating some sort of libraries that contain all the functions and, when imported, initialize themselves and add their functions to the symbol table of the running interpreter? However, this is the point where I don't really know how to go on. What I want to achieve is, for example, an (external?) string library for my language, e.g. string, that is imported from within a program in that language. Example:

        import string
        myString = "abcde"
        print string.at( myString, 2 )   # output: c

    My problems: How do I separate the function libraries from the core interpreter and load them? How do I get all their functions into a list and add it to the symbol table when needed? What I was thinking of doing: at the start of the interpreter, as all libraries are compiled with it, every single function calls something like RegisterFunction( string namespace, myLangResult (*functionPtr) ); which adds itself to a list. When import X is then called from within the language, the list built with RegisterFunction is added to the symbol table. Disadvantage that springs to mind: all libraries sit directly in the interpreter core, so its size grows and it will definitely slow things down.
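
    A minimal sketch of the RegisterFunction idea with a plain registry map (the types are stand-ins for the interpreter's own):

        #include <map>
        #include <string>
        #include <vector>

        typedef int myLangResult;   // stand-in for the real result type
        typedef myLangResult (*NativeFn)(int argc,
                                         const std::vector<std::string>& argv);

        std::map<std::string, NativeFn>& registry() {
            static std::map<std::string, NativeFn> fns;  // avoids init-order issues
            return fns;
        }

        void RegisterFunction(const std::string& name, NativeFn fn) {
            registry()[name] = fn;
        }

        myLangResult SystemFunction(const std::string& name, int argc,
                                    const std::vector<std::string>& argv) {
            std::map<std::string, NativeFn>::iterator it = registry().find(name);
            if (it == registry().end()) { /* Error("unknown function"); */ return -1; }
            return it->second(argc, argv);
        }

    For libraries that truly live outside the core binary, the same registry works with dlopen/LoadLibrary: each shared object exposes an init function that calls RegisterFunction for its entries when import runs, so the core stays small.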

    Read the article

  • Searching Natural Language Sentence Structure

    - by Cerin
    What's the best way to store and search a database of natural-language sentence-structure trees? Using OpenNLP's English Treebank Parser, I can get fairly reliable sentence-structure parses for arbitrary sentences. What I'd like to do is create a tool that can extract all the doc strings from my source code, generate these trees for all sentences in the doc strings, store the trees and their associated function names in a database, and then allow a user to search the database using natural-language queries. So, given the sentence "This uploads files to a remote machine." for the function upload_files(), I'd have the tree:

        (TOP (S (NP (DT This)) (VP (VBZ uploads) (NP (NNS files)) (PP (TO to) (NP (DT a) (JJ remote) (NN machine)))) (. .)))

    If someone entered the query "How can I upload files?", equating to the tree:

        (TOP (SBARQ (WHADVP (WRB How)) (SQ (MD can) (NP (PRP I)) (VP (VB upload) (NP (NNS files)))) (. ?)))

    how would I store and query these trees in a SQL database? I've written a simple proof-of-concept script that can perform this search using a mix of regular expressions and network graph parsing, but I'm not sure how I'd implement it in a scalable way. And yes, I realize my example would be trivial to retrieve using a simple keyword search. The idea I'm trying to test is how I might take advantage of grammatical structure, so I can weed out entries with similar keywords but a different sentence structure. For example, with the above query, I wouldn't want to retrieve the entry associated with the sentence "Checks a remote machine to find a user that uploads files.", which has similar keywords but obviously describes completely different behavior.
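
    One possible shape for the storage half (a sketch; table and column names are invented): one row per tree node as an adjacency list, which turns structural conditions into self-joins:

        -- one row per tree node
        CREATE TABLE node (
            id        INTEGER PRIMARY KEY,
            doc_id    INTEGER NOT NULL,     -- which doc string the tree came from
            parent_id INTEGER,              -- NULL for the TOP node
            label     VARCHAR(8) NOT NULL,  -- NP, VP, VBZ, ...
            word      VARCHAR(64),          -- NULL for non-leaf nodes
            position  INTEGER NOT NULL      -- left-to-right order among siblings
        );

        -- e.g. docs whose tree has an upload verb with an NP sibling under the same VP
        SELECT v.doc_id
        FROM node v
        JOIN node np
          ON np.doc_id = v.doc_id
         AND np.parent_id = v.parent_id
         AND np.label = 'NP'
        WHERE v.label IN ('VB', 'VBZ')
          AND v.word IN ('upload', 'uploads');

    Deep patterns mean chains of self-joins, so at scale a closure table or a materialized path per node is the usual refinement.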

    Read the article

  • Does XAML work with file links in Visual Studio?

    - by Tim
    I'm adding a new WPF project to an existing Visual Studio solution and would like to reuse a bunch of code (C# and XAML) from an existing project within the solution. I've created the new project and added existing files as follows:

        1. Right-click the project
        2. Add - Add Existing Item
        3. Find the file to reuse, use the arrow next to "Add" and choose "Add as Link"

    I now have a nice project set up with all the proper links. However, XAML chokes on these links. For example:

        <ResourceDictionary.MergedDictionaries>
            <ResourceDictionary Source="Resources\Elements\Buttons\Buttons.xaml" />
            <ResourceDictionary Source="Resources\Elements\TextBox\TextBox.xaml" />
        </ResourceDictionary.MergedDictionaries>

    The files "Buttons.xaml" and "TextBox.xaml" exist as links in my new project. The project builds, but when I run, I get the following XamlParseException:

        'Resources\Elements\Buttons\Buttons.xaml' value cannot be assigned to property 'Source' of object 'System.Windows.ResourceDictionary'. Cannot locate resource 'resources/elements/buttons/buttons.xaml'.

    It seems like the XAML parser requires an actual copy of these XAML files to exist in my new project, rather than links. This is exactly what I'm trying to avoid: I want my project to share these files so that any changes carry over to the other project without hunting and copying. Any insight is appreciated!
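
    One thing that may help (untested for this exact setup): relative Source URIs resolve against the referencing assembly's compiled resources, and linked files can end up keyed differently; a pack URI naming the assembly that actually compiles the dictionaries is worth a try (the assembly name below is a placeholder), along with confirming the linked items' Build Action is still Page:

        <ResourceDictionary
            Source="pack://application:,,,/OriginalProject;component/Resources/Elements/Buttons/Buttons.xaml" />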

    Read the article

  • Parsing URL error

    - by user577875
    It didn't seem like there was a post about this, so here goes. I've been working on a simple app to grab my timetable from my school and get it on my phone. Currently I'm working on the Android port, but I've hit an issue. I get the error: java.io.IOException: -1 error loading URL urladress. Code:

        public void updateTimeTable() {
            // Get UID and birthday
            SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(getBaseContext());
            String uid = prefs.getString("uid", "000000");
            String fods = prefs.getString("fodsdag", "000000");

            // Set URL
            String url = "http://unv1.aalborg-stu.dk/cgi-bin/elevskema.pl?ugen=0&unavn="
                    + uid + "&fodsdag=" + fods;

            try {
                Document doc = Jsoup.connect(url).get();
                Elements td = doc.getElementsByTag("td");
                //ArrayList<String> tdArray = new ArrayList<String>();
                //for (Element tds : td) {
                //    String tdText = tds.text();
                //    tdArray.add(tdText);
                //}
                //String[] data = tdArray.toArray(new String[tdArray.size()]);
            } catch (IOException e) {
                Log.e("Parser", "shite", e);
            }

            Context context = getApplicationContext();
            CharSequence text = url;
            int duration = Toast.LENGTH_SHORT;
            Toast toast = Toast.makeText(context, text, duration);
            toast.show();
        }

    I've commented some lines out to identify where the issue is, and it seems to be at the actual parsing. I've got about four days' worth of Java experience, so forgive me if it's something silly. Best regards.
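
    A shot in the dark that often clears errors like this with Jsoup against picky servers: send a browser-like User-Agent and allow a longer timeout, since some servers reject Java's default agent outright:

        Document doc = Jsoup.connect(url)
                .userAgent("Mozilla/5.0")   // some servers reject the default agent
                .timeout(10 * 1000)         // milliseconds
                .get();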

    Read the article

  • Apache Lucene: Is Relevance Score Always Between 0 and 1?

    - by Eamorr
    Greetings, I have the following Apache Lucene snippet, which gives me some nice results:

        int numHits = 100;
        int resultsPerPage = 100;
        IndexSearcher searcher = new IndexSearcher(reader);
        TopScoreDocCollector collector = TopScoreDocCollector.create(numHits, true);
        Query q = parser.parse(queryString);
        searcher.search(q, collector);
        ScoreDoc[] hits = collector.topDocs(0 * resultsPerPage, resultsPerPage).scoreDocs;
        Results r = new Results();
        r.length = hits.length;
        for (int i = 0; i < hits.length; i++) {
            Document doc = searcher.doc(hits[i].doc);
            double distanceKm = getGreatCircleDistance(
                    lucene2double(doc.get("lat")), lucene2double(doc.get("lng")),
                    Double.parseDouble(userLat), Double.parseDouble(userLng));
            double newRelevance = ((1 / distanceKm) * Math.log(hits[i].score) / Math.log(2)) * (0 - 1);
            System.out.println(hits[i].doc + "\t" + hits[i].score + "\t" + doc.get("content")
                    + "\tKm=" + distanceKm + "\trlvnc=" + String.valueOf(newRelevance));
        }

    What I want to know is: is hits[i].score always between 0 and 1? It seems that way, but I can't be sure. I've even checked the Lucene documentation (class ScoreDoc) to no avail. You'll see I'm calculating the log of the "newRelevance" value, which is based on hits[i].score. I need hits[i].score to be between 0 and 1, because if it is below zero I'll get an error, and above 1 the sign will change from negative to positive. I hope some Lucene expert out there can offer me some insight. Many thanks.

    Read the article

  • How do you unit-test a method with complex input-output?

    - by Dan
    When you have a simple method, for example sum(int x, int y), it is easy to write unit tests. You can check that the method correctly sums two sample integers, for example 2 + 3 should return 5, and then check the same for some "extraordinary" numbers, for example negative values and zero. Each of these should be a separate unit test, as a single unit test should contain a single assert. What do you do when you have complex input and output? Take an XML parser, for example. You can have a single method parse(String xml) that receives a String and returns a Dom object. You can write separate tests that check that a certain text node is parsed correctly, that attributes are parsed OK, that a child node belongs to its parent, etc. For all of these I can write a simple input, for example <root><child/></root>, that will be used to check parent-child relationships between nodes, and so on for the rest of the expectations. Now, take a look at the following XML:

        <root>
            <child1 attribute11="attribute 11 value" attribute12="attribute 12 value">Text 1</child1>
            <child2 attribute21="attribute 21 value" attribute22="attribute 22 value">Text 2</child2>
        </root>

    In order to check that the method worked correctly, I need to check many complex conditions, like that attribute11 and attribute12 belong to child1, that Text 1 belongs to child1, etc. I do not want to put more than one assert in my unit test. How can I accomplish that?
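
    One conventional answer, sketched with JUnit 4: parse the document once in a fixture, then write one narrowly named test per condition, so each test keeps its single assert (Dom and its accessors below are stand-ins for the real return type of parse):

        import static org.junit.Assert.assertEquals;
        import org.junit.BeforeClass;
        import org.junit.Test;

        public class ParserTest {
            private static Dom dom;   // stand-in for whatever parse() returns

            @BeforeClass
            public static void parseOnce() {
                dom = MyParser.parse(
                    "<root><child1 attribute11=\"attribute 11 value\">Text 1</child1></root>");
            }

            @Test
            public void child1KeepsItsText() {
                assertEquals("Text 1", dom.root().child(0).text());
            }

            @Test
            public void attribute11BelongsToChild1() {
                assertEquals("attribute 11 value",
                             dom.root().child(0).attribute("attribute11"));
            }
        }

    The fixture keeps the expensive parse out of each test while the test names document exactly which condition failed.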

    Read the article

  • Speed of QHash lookups using QStrings as keys.

    - by Ryan R.
    I need to draw a dynamic overlay on a QImage. The component parts of the overlay are defined in XML and parsed out to a QHash<QString, QPicture> where the QString is the name (such as "crosshairs") and the QPicture is the resolution independent drawing. I then draw components of the overlay as they are needed at a position determined during runtime. Example: I have 10 pictures in my QHash composing every possible element in a HUD. During a particular frame of video I need to draw 6 of them at different positions on the image. During the next frame something has changed and now I only need to draw 4 of them but 2 of those positions have changed. Now to my question: If I am trying to do this quickly, should I redefine my QHash as QHash<int, QPicture> and enumerate the keys to counteract the overhead caused by string comparisons; or are the comparisons not going to make a very big impact on performance? I can easily make the conversion to integer keys as the XML parser and overlay composer are completely separate classes; but I would like to use a consistent data structure across the application. Should I overcome my desire for consistency and re-usability in order to increase performance? Will it even matter very much if I do?
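
    If the switch is made, an unscoped enum keeps call sites readable while the hash works on plain ints (a sketch; the names, painter and pictures are invented). That said, QHash hashes the QString once per lookup and only falls back to full comparisons on collisions, so with around 10 keys the gain is likely tiny - worth profiling before giving up the string keys:

        // enum keys: integer hashing, still self-documenting at the call site
        enum OverlayElement { Crosshairs, Altimeter, Compass /* ... */ };
        QHash<int, QPicture> overlay;
        overlay.insert(Crosshairs, crosshairPicture);
        // per-frame lookup is an integer hash, no QString allocation or comparison
        painter.drawPicture(pos, overlay.value(Crosshairs));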

    Read the article
