Search Results

Search found 19554 results on 783 pages for 'xml pull parser'.

  • Regular expressions - finding and comparing the first instance of a word

    - by Dan
    Hi there, I am currently trying to write a regular expression to pull links out of a page I have. The problem is that the links need to be pulled out only if the item is in stock. This is an outline of what I have, code-wise:

        <td class="prd-details">
            <a href="somepage"> ...
            <span class="collect unavailable">
        </td>
        <td class="prd-details">
            <a href="somepage"> ...
            <span class="collect available">
        </td>

    What I would like to do is pull out the links only if 'collect available' is in the tag. I have tried to do this with the regular expression:

        (?s)prd-details[^=]+="([^"]+)" .+?collect{1}[^\s]+ available

    However, on running it, it finds the first 'prd-details' class and keeps going until it reaches 'collect available', thereby returning the wrong link. I thought that specifying {1} after the word 'collect' would make it use only the first instance of the word it finds, but apparently I'm wrong. I've been trying different things such as positive and negative lookaheads, but I can't seem to get anything to work. Might anyone be able to help me with this issue? Thanks, Dan
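    One common fix for exactly this failure mode is a "tempered greedy token": instead of letting .+? run across cell boundaries, forbid it from crossing into the next 'prd-details' block. A minimal sketch, assuming Python and a hypothetical saved page (the same pattern works in most PCRE-style engines):

        import re

        html = open("page.html").read()  # hypothetical input file

        # Between the captured href and the 'collect available' marker,
        # refuse to cross into the next 'prd-details' cell, so each match
        # stays inside a single <td>.
        pattern = re.compile(
            r'(?s)prd-details[^=]+="([^"]+)"(?:(?!prd-details).)*?collect available'
        )

        print(pattern.findall(html))  # hrefs of available items only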

  • Lucene wildcard queries

    - by Javi
    Hello, I have this question relating to Lucene. I have a form, I get a text from it, and I want to perform a full text search in several fields. Suppose I get the text "textToLook" from the input. I have a Lucene Analyzer with several filters. One of them is a lowercase filter, so when I create the index, words are lowercased. Imagine I want to search in two fields, field1 and field2, so the Lucene query would be something like this (note that 'textToLook' is now 'texttolook'):

        field1:texttolook* field2:texttolook*

    In my class I have something like this to create the query. It works when there is no wildcard.

        String text = "textToLook";
        String[] fields = {"field1", "field2"};
        // analyzer is the same as the one used for indexing
        Analyzer analyzer = fullTextEntityManager.getSearchFactory().getAnalyzer("customAnalyzer");
        MultiFieldQueryParser parser = new MultiFieldQueryParser(fields, analyzer);
        org.apache.lucene.search.Query queryTextoLibre = parser.parse(text);

    With this code the query would be:

        field1:texttolook field2:texttolook

    but if I set text to "textToLook*" I get

        field1:textToLook* field2:textToLook*

    which won't match correctly, as the index is in lowercase. I have read this on the Lucene website: "Wildcard, Prefix, and Fuzzy queries are not passed through the Analyzer, which is the component that performs operations such as stemming and lowercasing." My problem cannot be solved just by making the search case-insensitive, because my analyzer has other filters which, for example, remove some suffixes of words. I think I can solve the problem by finding out how the text looks after going through the filters of my analyzer; then I could append the "*" and build the query with MultiFieldQueryParser. So in this example I would take "textToLook", pass it through the filters to get "texttolook", and from that make "texttolook*". But is there any way to get the value of my text variable after it goes through all my analyzer's filters? How can I get all the filters of my analyzer? Is this possible? Thanks
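    A sketch of one way to do that last step, assuming Lucene 2.9/3.x-era APIs: feed the raw text through the analyzer's token stream yourself, collect what comes out, and then append the wildcard. The helper name is my own invention, not part of Lucene:

        import java.io.IOException;
        import java.io.StringReader;
        import org.apache.lucene.analysis.Analyzer;
        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.analysis.tokenattributes.TermAttribute;

        // Run `text` through the analyzer's full filter chain and return
        // the processed form, ready to have "*" appended.
        public static String analyzeTerm(Analyzer analyzer, String field, String text)
                throws IOException {
            TokenStream stream = analyzer.tokenStream(field, new StringReader(text));
            TermAttribute term = stream.addAttribute(TermAttribute.class);
            StringBuilder out = new StringBuilder();
            while (stream.incrementToken()) {
                out.append(term.term());
            }
            stream.close();
            return out.toString();  // e.g. "texttolook"
        }

    (QueryParser also has setLowercaseExpandedTerms(true), but as noted above that only covers lowercasing, not the other filters.)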

  • Service reference error when moving dev. environment from XP to W7

    - by Peter
    Hi, I am building an application and I am using web services for getting data from a server. It was working fine when I was developing on my XP machine, but I had to switch to Windows 7. On the new machine I grabbed the latest version of the code using SourceSafe. However, when I try to add a service reference in the solution, or update an existing one, I get the following error:

        There was an error downloading 'http://localhost:52490/Service/CustomerService.asmx'.
        The request failed with the error message:
        Server Error in '/' Application.
        Parser Error
        Description: An error occurred during the parsing of a resource required to
        service this request. Please review the following specific parse error details
        and modify your source file appropriately.
        Parser Error Message: Could not create type 'Digital_Server.CustomerService'.
        Source Error:
        <%@ WebService Language="vb" CodeBehind="CustomerService.asmx.vb"
            Class="Digital_Server.CustomerService" %>
        Source File: /Service/CustomerService.asmx    Line: 1
        Version Information: Microsoft .NET Framework Version:2.0.50727.4927;
        ASP.NET Version:2.0.50727.4927

        Metadata contains a reference that cannot be resolved:
        'http://localhost:52490/Service/CustomerService.asmx'.
        An error occurred while receiving the HTTP response to
        http://localhost:52490/Service/CustomerService.asmx. This could be due to the
        service endpoint binding not using the HTTP protocol. This could also be due
        to an HTTP request context being aborted by the server (possibly due to the
        service shutting down). See server logs for more details.
        The underlying connection was closed: An unexpected error occurred on a receive.
        Unable to read data from the transport connection: An existing connection was
        forcibly closed by the remote host.
        If the service is defined in the current solution, try building the solution
        and adding the service reference again.

    Does it have anything to do with IIS, or is there some configuration file I have to change in the solution? Any help is appreciated.

  • Antlr MismatchedSetException During Interpretation

    - by user1646481
    I am new to ANTLR and I have defined a basic grammar using ANTLR 3. The grammar compiles and ANTLRWorks generates the parser and lexer code without any problems. The grammar can be seen below:

        grammar i;

        @header {
            package i;
        }

        module     : 'Module1' | 'Module2';
        object     : 'I';
        objectType : 'Name';
        filters    : EMPTY | 'WHERE' module;
        table      : module object objectType;

        STRING : ('a'..'z'|'A'..'Z')+;
        EMPTY  : ' ';

    The problem is that when I interpret the table parser rule, I get a MismatchedSetException. This is due to having the EMPTY token. As soon as I remove EMPTY from the grammar, the interpretation works. I have looked at the ANTLR website and some other examples, and an empty space is written as ' '. I am not sure what to do; I need this EMPTY. As soon as I change the EMPTY rule from

        EMPTY : ' ';

    to

        EMPTY : '';

    it actually interprets it. However, I then get the following exceptions:

        Interpreting...
        [10:57:23] problem matching token at 1:4 NoViableAltException(' '@[1:1: Tokens : ( T__4 | T__5 | T__6 | T__7 | T__8 | T__9 | T__10 | T__11 | T__12 | T__13 | T__14 | T__15 | T__16 );])
        [10:57:23] problem matching token at 1:9 NoViableAltException(' '@[1:1: Tokens : ( T__4 | T__5 | T__6 | T__7 | T__8 | T__9 | T__10 | T__11 | T__12 | T__13 | T__14 | T__15 | T__16 );])

    When I take out the EMPTY, there are no problems at all. I hope you can help.
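    For what it's worth, the usual ANTLR 3 treatment is to send whitespace to the hidden channel rather than match it in parser rules, and to express "no filter" as an empty alternative. A sketch (rule layout is mine):

        // route all whitespace to the hidden channel; the parser never sees it
        WS : (' ' | '\t' | '\r' | '\n')+ { $channel = HIDDEN; };

        // "no filter" becomes an empty alternative, not a space token
        filters : 'WHERE' module
                |                    // nothing
                ;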

  • boost::spirit (qi) decision between float and double

    - by ChrisInked
    I have a parser which parses different data types from an input file. I already figured out that Spirit can decide between short and int. For example:

        value %= (shortIntNode | longIntNode);

    with

        shortIntNode %= (qi::short_ >> !qi::double_)
            [qi::_val = phoenix::bind(&CreateShortIntNode, qi::_1)];
        longIntNode  %= (qi::int_ >> !qi::double_)
            [qi::_val = phoenix::bind(&CreateLongIntNode, qi::_1)];

    I used this type of rule to detect doubles as well (from the answers here and here). The parser was able to decide between int for numbers > 65535 and short for numbers <= 65535. But for float_ and double_ it does not work as expected. It just rounds these values to parse them into a float value, given a rule like this:

        value %= (floatNode | doubleFloatNode);

    with

        floatNode       %= (qi::float_)
            [qi::_val = phoenix::bind(&CreateFloatNode, qi::_1)];
        doubleFloatNode %= (qi::double_)
            [qi::_val = phoenix::bind(&CreateDoubleFloatNode, qi::_1)];

    Do you know if there is something like an option, or some other trick, to decide between float_ and double_ depending on the data type range? Thank you very much!
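    The usual difficulty is that qi::float_ happily consumes any real literal, so the first alternative always wins. One sketch of a workaround (the names are mine, not a Spirit feature): keep a single qi::double_ rule and decide at node-creation time whether the value survives a round trip through float, building the narrower node only if it does.

        // True if the literal survives a round trip through float, i.e. it
        // is exactly representable at single precision.
        bool FitsInFloat(double d)
        {
            const float f = static_cast<float>(d);
            return static_cast<double>(f) == d;
        }

        // then, in the node factory (mirroring the question's creators):
        Node *CreateRealNode(double d)
        {
            return FitsInFloat(d)
                ? CreateFloatNode(static_cast<float>(d))
                : CreateDoubleFloatNode(d);
        }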

  • How can I release this NSXMLParser without crashing my app?

    - by prendio2
    Below is the @interface for an MREntitiesConverter object I use to strip all HTML tags from a string using an NSXMLParser:

        @interface MREntitiesConverter : NSObject {
            NSMutableString* resultString;
            NSString* xmlStr;
            NSData *data;
            NSXMLParser* xmlParser;
        }
        @property (nonatomic, retain) NSMutableString* resultString;
        - (NSString*)convertEntitiesInString:(NSString*)s;
        @end

    And this is the implementation:

        @implementation MREntitiesConverter
        @synthesize resultString;

        - (id)init {
            if ([super init]) {
                self.resultString = [NSMutableString string];
            }
            return self;
        }

        - (NSString*)convertEntitiesInString:(NSString*)s {
            xmlStr = [NSString stringWithFormat:@"<data>%@</data>", s];
            data = [xmlStr dataUsingEncoding:NSUTF8StringEncoding allowLossyConversion:YES];
            xmlParser = [[NSXMLParser alloc] initWithData:data];
            [xmlParser setDelegate:self];
            [xmlParser parse];
            return [resultString autorelease];
        }

        - (void)dealloc {
            [data release];
            // I want to release xmlParser here but it crashes the app
            [super dealloc];
        }

        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)s {
            [self.resultString appendString:s];
        }
        @end

    If I release xmlParser in the dealloc method it crashes my app, but without releasing it I am quite obviously leaking memory. I am new to Instruments and trying to get the hang of optimising this app. Any help you can offer on this particular issue will likely help me solve other memory issues in my app. Yours in frustrated anticipation :) Oisin
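    A sketch of one common pattern for this situation: since a fresh parser is created on every call, balance the alloc inside the same method instead of waiting for dealloc, and keep the intermediate objects as locals.

        - (NSString*)convertEntitiesInString:(NSString*)s {
            NSString *wrapped = [NSString stringWithFormat:@"<data>%@</data>", s];
            NSData *xmlData = [wrapped dataUsingEncoding:NSUTF8StringEncoding
                                    allowLossyConversion:YES];
            NSXMLParser *parser = [[NSXMLParser alloc] initWithData:xmlData];
            [parser setDelegate:self];
            [parser parse];
            [parser release];  // balanced right here; dealloc no longer owns it
            return [[resultString retain] autorelease];
        }

    Note also that data comes back autoreleased from dataUsingEncoding:, so releasing it in dealloc over-releases it, which is a likely source of the crash; making it a local, as above, sidesteps that as well.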

  • Parsing data with Clojure, interval problem.

    - by Andrea Di Persio
    Hello! I'm writing a little parser in Clojure for learning purposes. Basically it is a parser for TSV files whose contents need to be put in a database, but I added a complication. The complication is that the same file contains more than one interval. The file looks like this:

        ###andreadipersio 2010-03-19 16:10:00###
        USER     COMM              PID  PPID  %CPU %MEM     TIME
        root     launchd             1     0   0.0  0.0  2:46.97
        root     DirectoryService   11     1   0.0  0.2  0:34.59
        root     notifyd            12     1   0.0  0.0  0:20.83
        root     diskarbitrationd   13     1   0.0  0.0  0:02.84
        ....
        ###andreadipersio 2010-03-19 16:20:00###
        USER     COMM              PID  PPID  %CPU %MEM     TIME
        root     launchd             1     0   0.0  0.0  2:46.97
        root     DirectoryService   11     1   0.0  0.2  0:34.59
        root     notifyd            12     1   0.0  0.0  0:20.83
        root     diskarbitrationd   13     1   0.0  0.0  0:02.84

    I ended up with this code:

        (defn is-header?
          "Return true if a line is a header"
          [line]
          (> (count (re-find #"^\#{3}" line)) 0))

        (defn extract-fields
          "Return regex matches"
          [line pattern]
          (rest (re-find pattern line)))

        (defn process-lines [lines]
          (map process-line lines))

        (defn process-line [line]
          (if (is-header? line)
            (extract-fields line header-pattern))
          (extract-fields line data-pattern))

    My idea is that in process-line the interval needs to be merged with the data, so that for every row until the next interval I have something like this:

        ("andreadipersio", "2010-03-19", "16:10:00", "root", "launchd", 1, 0, 0.0, 0.0, "2:46.97")

    But I can't figure out how to make this happen. I tried something like this:

        (def process-line [line]
          (if is-header? line)
            (def header-data (extract-fields line header-pattern)))
          (cons header-data (extract-fields line data-pattern)))

    But this doesn't work as expected. Any hints? Thanks!
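    One idiomatic way to carry the current header across rows is to thread it through a reduce instead of mapping over lines independently. A sketch, reusing the question's is-header?, extract-fields, and the two patterns:

        (defn process-lines [lines]
          (:rows
           (reduce (fn [{:keys [header] :as acc} line]
                     (if (is-header? line)
                       (assoc acc :header (extract-fields line header-pattern))
                       (update-in acc [:rows] conj
                                  (concat header (extract-fields line data-pattern)))))
                   {:header nil :rows []}
                   lines)))

    Each header line replaces the current header; every other line is emitted as the current header's fields followed by its own.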

  • Stack overflow while working with CFBuilder plugin

    - by lynxoid
    In the past 30 minutes of working in CFBuilder (I have it as an Eclipse plugin), I got this error 4 times:

        A stack overflow has occurred. You are recommended to exit the workbench.
        Subsequent errors may happen and may terminate the workbench without warning.
        See the .log file for more details. Do you want to exit the workbench?

    together with:

        Unhandled event loop exception
        java.lang.StackOverflowError

    The log file had this:

        !ENTRY org.eclipse.ui 4 0 2010-05-11 09:41:51.951
        !MESSAGE Unhandled event loop exception
        !STACK 0
        java.lang.StackOverflowError
            at java.util.Arrays.mergeSort(Unknown Source)
            at java.util.Arrays.mergeSort(Unknown Source)
            at java.util.Arrays.mergeSort(Unknown Source)
            at java.util.Arrays.mergeSort(Unknown Source)
            at java.util.Arrays.mergeSort(Unknown Source)
            at java.util.Arrays.sort(Unknown Source)
            at com.adobe.ide.cfml.parser.generated.CFMLParserBase.getVariableInfo(CFMLParserBase.java:1613)
            at com.adobe.ide.cfml.parser.generated.CFMLParserBase.getVariableInfo(CFMLParserBase.java:1603)
            at com.adobe.ide.editor.model.CFMLDOMUtils.getVariable(CFMLDOMUtils.java:2375)
            at com.adobe.ide.editor.model.CFMLDOMUtils.getComponentNameFromNode(CFMLDOMUtils.java:2484)
            at com.adobe.ide.editor.model.CFMLDOMUtils.getComponentNameFromFunctionCall(CFMLDOMUtils.java:2168)
            at com.adobe.ide.editor.model.CFMLDOMUtils.getComponentNameFromNode(CFMLDOMUtils.java:2495)
            at com.adobe.ide.editor.model.CFMLDOMUtils.getComponentNameFromFunctionCall(CFMLDOMUtils.java:2168)
            at com.adobe.ide.editor.model.CFMLDOMUtils.getComponentNameFromNode(CFMLDOMUtils.java:2495)
            at com.adobe.ide.editor.model.CFMLDOMUtils.getComponentNameFromFunctionCall(CFMLDOMUtils.java:2168)
            (and so on - repeat n times)

    It happens whenever I copy/paste something. Does anyone know what is going on?

  • using jquery selector to change attribute in variable returned from ajax request

    - by Blake
    I'm trying to pull in a filename.txt (which contains HTML) using AJAX, and change the src paths in the data variable before I load it into the target div. If I load it into the div first, the browser immediately requests the broken image paths, and I don't want that, so I would like to do my processing before I put anything on the page. I can read the src values fine, but I can't change them. In this example the src values aren't changed. Is there a way to do this with selectors, or can they only modify DOM elements? Otherwise I may have to do some regex replace, but using a selector would be more convenient if possible.

        $.ajax({
            url: getDate + '/' + name + '.txt',
            success: function(data) {
                $('img', data).attr('src', 'new_test_src');
                $('#' + target).fadeOut('slow', function() {
                    $('#' + target).html(data);
                    $('#' + target).fadeIn('slow');
                });
            }
        });

    My reason is that I'm building a fully standalone JavaScript template system for a newsletter, and since images and other things are uploaded via a Drupal web file manager, I want the content creators to keep their paths very short and simple; I can then modify them before I load in the content. This will also be distributed on a CD, so I need to change the paths for that too so they still work.
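    The reason the code above has no visible effect is that $('img', data) parses the string into detached nodes, mutates them, and then throws them away; the untouched data string is what gets inserted. A sketch of the usual fix (the new src prefix here is made up): keep the parsed nodes, mutate them, and insert their serialized HTML.

        $.ajax({
            url: getDate + '/' + name + '.txt',
            success: function(data) {
                var $wrapped = $('<div/>').html(data);   // detached DOM tree
                $wrapped.find('img').attr('src', function(i, src) {
                    return 'media/' + src;               // hypothetical prefix
                });
                $('#' + target).fadeOut('slow', function() {
                    $(this).html($wrapped.html()).fadeIn('slow');
                });
            }
        });

    One caveat: some browsers begin fetching images as soon as HTML is parsed, even into a detached node, so if that matters, a plain string replace on data before any parsing is the bulletproof route.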

  • What is the process involved in viewing a webservice in a browser from within visual studio?

    - by Sam Holder
    I have created a new VS2008 ASP.NET Web Service project with the default name WebService1. If I right-click on the Service1.asmx file and select 'View in Browser', what are the processes that go on to make this happen? I am asking because I have a situation where, when I run this from a Visual Studio project started in our development shell (which sets up a common build environment), I cannot get the web service to show up in the browser. It starts the ASP.NET Development Server and creates a single file:

        C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\c43ddc22\268ae91b\hash\hash.web

    but when I start it from a standalone project I get a whole slew of files in there:

        C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\edad4eee\d198cf0e\App_Web_defaultwsdlhelpgenerator.aspx.cdcab7d2.vicgkf94.dll
        C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\edad4eee\d198cf0e\service1.asmx.cdcab7d2.compiled
        etc etc

    I am trying to debug this but not really getting anywhere. I have inspected the output from VS, but the only option I get is the build output, which is basic and doesn't really contain any useful information. I have tried running both versions with DebugView running, but no output there either. I would like to know if there are any log files I could look at, or if anyone has any suggestions on how I might be able to debug what is going wrong here. For completeness, the output I get when it doesn't work is:

        Parser Error
        Description: An error occurred during the parsing of a resource required to
        service this request. Please review the following specific parse error details
        and modify your source file appropriately.
        Parser Error Message: Could not create type 'WebService1.Service1'.
        Source Error:
        Line 1:
        Source File: /Service1.asmx    Line: 1
        Version Information: Microsoft .NET Framework Version:2.0.50727.3603;
        ASP.NET Version:2.0.50727.3082

  • Git: changes not reflecting on other checkouts - huh?

    - by Chad Johnson
    Okay, so, I have my branches (git branch -a):

        * chat
          master
          remotes/origin/HEAD -> origin/master
          remotes/origin/chat

    I make changes (still with the 'chat' branch checked out), commit, and push. I go to my server, on which I have a clone of the repository, and I do a fetch:

        git fetch

    then I switch to the chat branch:

        git checkout --track -b chat origin/chat

    and I even do a pull, just to make sure everything is up to date:

        git pull

    and my changes from my other computer are NOT. THERE. What the heck am I doing wrong? If I had hair, I would have pulled it out. Thankfully I am bald. When I try a 'git commit' again, I get this:

        # On branch chat
        # Changed but not updated:
        #   (use "git add/rm <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   app/controllers/chat_controller.rb
        #       modified:   app/views/dashboard/index.html.erb
        #       modified:   app/views/dashboard/layout.js.erb
        #       modified:   app/views/layouts/dashboard.html.erb
        #       deleted:    app/views/project/.tmp_edit.html.erb.55742~
        #       deleted:    app/views/project/.tmp_edit.html.erb.83482~
        #       modified:   public/stylesheets/dashboard/layout.css
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       .loadpath
        #       .project
        #       config/database.yml
        #       config/environments/development.yml
        #       config/environments/production.yml
        #       config/environments/test.yml
        #       log/
        no changes added to commit (use "git add" and/or "git commit -a")

  • NSInvocation object not getting allocated iphone sdk

    - by neha
    Hi all, I'm doing this:

        NSString *_type_ = @"report";
        NSNumber *_id_ = [NSNumber numberWithInt:report.reportId];
        NSDictionary *paramObj = [NSDictionary dictionaryWithObjectsAndKeys:
                                     _id_, @"bla1",
                                     _type_, @"bla2", nil];
        _operation = [[NSInvocationOperation alloc] initWithTarget:self
                                                          selector:@selector(initParsersetId:)
                                                            object:paramObj];

    But my _operation object is nil even after processing this line. The selector here is actually a function I'm writing, which looks like:

        - (void)initParsersetId:(NSInteger)_id_ type:(NSString *)_type_ {
            NSString *urlStr = [NSString stringWithFormat:@"apimediadetails?id=624&type=report"];
            NSString *finalURLstr = [urlStr stringByAppendingString:URL];
            NSURL *url = [[NSURL alloc] initWithString:finalURLstr];
            NSXMLParser *xmlParser = [[NSXMLParser alloc] initWithContentsOfURL:url];

            // Initialize the delegate.
            DetailedViewObject *parser = [[DetailedViewObject alloc] initDetailedViewObject];

            // Set delegate
            [xmlParser setDelegate:parser];

            // Start parsing the XML file.
            BOOL success = [xmlParser parse];
            if (success)
                NSLog(@"No Errors");
            else
                NSLog(@"Error Error Error!!!");
        }

    Can anybody please point out where I'm going wrong? Thanks in advance.
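    One likely culprit, sketched below: -initWithTarget:selector:object: wants a selector that takes exactly one object argument, and @selector(initParsersetId:) doesn't match the two-argument method -initParsersetId:type: shown above, so the operation cannot be built. A one-argument wrapper that unpacks the dictionary (the wrapper name is mine) fits the expected shape:

        // in the code that builds the operation:
        _operation = [[NSInvocationOperation alloc]
                         initWithTarget:self
                               selector:@selector(startParsingWithParams:)
                                 object:paramObj];

        // the one-argument wrapper:
        - (void)startParsingWithParams:(NSDictionary *)params {
            NSNumber *reportId = [params objectForKey:@"bla1"];
            NSString *type     = [params objectForKey:@"bla2"];
            [self initParsersetId:[reportId intValue] type:type];
        }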

  • C callback functions defined in an unnamed namespace?

    - by Johannes Schaub - litb
    Hi all. I have a C++ project that uses a C bison parser. The C parser uses a struct of function pointers to call functions that create proper AST nodes when productions are reduced by bison:

        typedef void Node;

        struct Actions {
            Node *(*newIntLit)(int val);
            Node *(*newAsgnExpr)(Node *left, Node *right);
            /* ... */
        };

    Now, in the C++ part of the project, I fill those pointers:

        class AstNode { /* ... */ };
        class IntLit : public AstNode { /* ... */ };

        extern "C" {
            Node *newIntLit(int val) {
                return (Node*)new IntLit(val);
            }
            /* ... */
        }

        Actions createActions() {
            Actions a;
            a.newIntLit = &newIntLit;
            /* ... */
            return a;
        }

    Now, the only reason I put them within extern "C" is that I want them to have C calling conventions. But ideally, I would like their names to still be mangled. They are never called by name from C code, so name mangling isn't an issue. Having them mangled would avoid name conflicts, since some actions are called things like error, and the C++ callback function needs an ugly name like the following just to avoid name clashes with other modules:

        extern "C" {
            void uglyNameError(char const *str) { /* ... */ }
            /* ... */
        }

        a.error = &uglyNameError;

    I wondered whether it could be possible by merely giving the function type C linkage:

        extern "C" void fty(char const *str);

        namespace {
            fty error; /* Declared! But how can I define it with that type!? */
        }

    Any ideas? I'm looking for Standard C++ solutions.

  • Deploying a Rails app on an Ubuntu server using Git

    - by NudeCanalTroll
    I'm completely new to Linux, but today I find myself setting up a server (Ubuntu 10.04 LTS Lucid) from scratch to host a Rails application. Anyway, I managed to get a Rails app up and running on the server itself, but I had to scrap that because I want to use Git. So I set up a git repository on the server, then pushed all the code from my local machine to the repository. Buuuut, of course Git doesn't actually store the files themselves in the repository -- all the code for my Rails app is now only on my local machine. How am I supposed to tell the server to host that? Right now my solution is to have the server use git to pull the code from its own repository. That's the code I'll host for all the world to see. In order to update the code, I guess I'll have to do something like this:

        1. Update the code on my local machine.
        2. Do some git adds, git commits, and a git push.
        3. On the server, do a git pull to update the code.

    So my question is, am I doing this the right way?
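    That basic pull-based workflow is a common one. One refinement often used instead, sketched here (the paths are assumptions), is a post-receive hook in the server's bare repository, so that step 3 happens automatically on every push:

        #!/bin/sh
        # hooks/post-receive in the bare repo on the server.
        # Checks the just-pushed master out into the directory
        # the web server actually serves.
        GIT_WORK_TREE=/var/www/myapp git checkout -f master

    (Remember to make the hook executable with chmod +x.)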

  • Fixing merge conflicts?

    - by user291701
    I have two remote branches, "grape" and "master". I'm currently on "grape". Now I switch to "master":

        git checkout master

    Now I want to pull all the changes from "grape" into "master" -- is this the way to do it?

        git merge origin grape

    It's my understanding that git will then pull the current state of the remote branch "grape" into my local copy of "master". It will try to auto-merge for me. If there are conflicts, the files in conflict will have some conflict text actually injected into the file. I then have to go into those files and delete the chunks I don't want (essentially telling git how to merge these files). For each file in conflict, do I add and commit the changes again?

        git add problemfile1.txt
        git commit -m "Fixed merge conflict."
        git add problemfile2.txt
        git commit -m "Fixed another merge conflict."
        ...

    After I've fixed all the merge conflicts like above, do I just push to "master" again to finish up the process?

        git push origin master

    Or is there something else we need to do when we get into this conflict state? Thank you
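    For reference, a sketch of the usual cycle (branch and file names from the question). Two details differ from the guesses above: the remote-tracking branch is named origin/grape, not "origin grape", and conflict resolution normally ends in a single commit rather than one per file:

        git checkout master
        git merge origin/grape       # auto-merges; stops on conflicts

        # edit each conflicted file, removing the <<<<<<< ======= >>>>>>> markers

        git add problemfile1.txt problemfile2.txt
        git commit                   # one commit concludes the whole merge
        git push origin master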

  • Avoid slowdowns while using off-site database

    - by Anders Holmström
    The basic layout of my problem is this:

        - Website (ASP.NET/C#) hosted at a dedicated hosting company (location 1).
        - Company database (SQL Server) with records of relevant data (location 2).
        - Locations 1 and 2 connected through VPN.
        - Customers visiting the website want to pull data from the company database.
        - No possibility of changing the server locations or layout (i.e. moving the website to an in-office server isn't possible).

    What I want to do is figure out the best way to handle the data access in this case, minimizing the need for time-expensive database calls over the VPN. The first idea I had is this: when a user enters the section of the website needing the DB data, you pull all the needed tables from the database into an in-memory DataSet. All subsequent views/updates of the data are done on this DataSet. When the user leaves (logout, session timeout, browser closed, etc.) the DataSet gets sent back to the SQL Server. I'm not sure if this is a realistic solution, and it obviously has some problems. If two web visitors perform updates on the same data, the one finishing last will overwrite the first one's changes. There's also no way of knowing that you have the latest data (i.e. if a customer pulls some info on their projects and we update this info while they are viewing it, they won't see these changes, PLUS the above overwriting issue will arise). The other solution would be to somehow aggregate database calls and make sure they only happen when you need them, e.g. during data updates but not during data views. But then again, the longer the pause between these refreshing DB calls, the bigger the chance that the data view is out of date, as per the problem described above. Any input on the above or some fresh ideas would be most welcome.
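    On the lost-update problem specifically, the standard mitigation is optimistic concurrency: stamp each row with a version and make every update conditional on the version being unchanged. A sketch with invented table and column names:

        -- Only apply the update if the row hasn't changed since it was read.
        UPDATE Projects
        SET    Name       = @Name,
               RowVersion = RowVersion + 1
        WHERE  ProjectId  = @ProjectId
          AND  RowVersion = @OriginalRowVersion;

        -- If @@ROWCOUNT = 0, someone else got there first:
        -- reload the row, re-apply the edit, and retry.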

  • MySql "comments" parameter as descriptor?

    - by Nick
    So, I'm trying to learn a lot at once, and this place is really helpful! I'm making a little running-log website for myself and maybe a few other people, where the user can add workouts for each day. With each workout I have a variety of information the user can fill out, such as running distance, time, quality of run, course, etc. I store this in a MySQL table with fields titled "distance", "time", "runquality", etc. Now, these field titles don't match up with what I want displayed on the running log, so I was thinking of using the "comment" attribute of a field to store its human-readable title -- the field "runquality" would have "Quality of run" as its comment, and I would pull the comment with a SQL query and display it instead of the field name. Is this a good theoretical/practical way of going about it? And what sort of SQL would I use to pull the comment for the field, anyway? Secondly, suppose I want to add the ability for users to create their own workout descriptors. Say a user wants to add a "temperature" descriptor for their workout. Should I create a script that adds fields to my workout table, or should I create a separate table listing only workout descriptors and somehow link the descriptor table with the "contents" table? I haven't learned any theory about database design, so any help is appreciated!
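    To answer the "what sort of SQL" part: column comments are exposed through information_schema, so the human-readable titles can be fetched with an ordinary query. A sketch (the schema and table names are stand-ins):

        SELECT COLUMN_NAME, COLUMN_COMMENT
        FROM   information_schema.COLUMNS
        WHERE  TABLE_SCHEMA = 'runninglog'
          AND  TABLE_NAME   = 'workouts';

    As for user-defined descriptors, altering the table schema at runtime is generally the worse option; a separate descriptor table keyed to the workout (the layout the question already sketches) keeps the schema stable.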

  • Lucene.Net: How can I add a date filter to my search results?

    - by rockinthesixstring
    I've got my searcher working really well; however, it does tend to return results that are obsolete. My site is much like NerdDinner, whereby events in the past become irrelevant. I'm currently indexing like this:

        Public Function AddIndex(ByVal searchableEvent As [Event]) As Boolean Implements ILuceneService.AddIndex
            Dim writer As New IndexWriter(luceneDirectory, New StandardAnalyzer(), False)
            Dim doc As Document = New Document

            doc.Add(New Field("id", searchableEvent.ID, Field.Store.YES, Field.Index.UN_TOKENIZED))
            doc.Add(New Field("fullText", FullTextBuilder(searchableEvent), Field.Store.YES, Field.Index.TOKENIZED))
            doc.Add(New Field("user", If(searchableEvent.User.UserName = Nothing, "User" & searchableEvent.User.ID, searchableEvent.User.UserName), Field.Store.YES, Field.Index.TOKENIZED))
            doc.Add(New Field("title", searchableEvent.Title, Field.Store.YES, Field.Index.TOKENIZED))
            doc.Add(New Field("location", searchableEvent.Location.Name, Field.Store.YES, Field.Index.TOKENIZED))
            doc.Add(New Field("date", searchableEvent.EventDate, Field.Store.YES, Field.Index.UN_TOKENIZED))

            writer.AddDocument(doc)
            writer.Optimize()
            writer.Close()

            Return True
        End Function

    Notice how I have a "date" index that stores the event date. My search then looks like this:

        ''# code omitted
        Dim reader As IndexReader = IndexReader.Open(luceneDirectory)
        Dim searcher As IndexSearcher = New IndexSearcher(reader)
        Dim parser As QueryParser = New QueryParser("fullText", New StandardAnalyzer())
        Dim query As Query = parser.Parse(q.ToLower)

        ''# We're using 10,000 as the maximum number of results to return
        ''# because I have a feeling that we'll never reach that full amount
        ''# anyways. And if we do, who in their right mind is going to page
        ''# through all of the results?
        Dim topDocs As TopDocs = searcher.Search(query, Nothing, 10000)
        Dim doc As Document = Nothing

        ''# loop through the topDocs and grab the appropriate 10 results based
        ''# on the submitted page number
        While i <= last AndAlso i < topDocs.totalHits
            doc = searcher.Doc(topDocs.scoreDocs(i).doc)
            IDList.Add(doc.[Get]("id"))
            i += 1
        End While
        ''# code omitted

    I did try the following, but it was to no avail (it threw a NullReferenceException):

        While i <= last AndAlso i < topDocs.totalHits
            If Date.Parse(doc.[Get]("date")) >= Date.Today Then
                doc = searcher.Doc(topDocs.scoreDocs(i).doc)
                IDList.Add(doc.[Get]("id"))
                i += 1
            End If
        End While

    I also found the following documentation, but I can't make heads or tails of it:
    http://lucene.apache.org/java/1_4_3/api/org/apache/lucene/search/DateFilter.html
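    Two things stand out. First, the NullReferenceException comes from reading doc.[Get]("date") while doc is still Nothing; the assignment only happens inside the If. Second, filtering per hit on a parsed date string is what a range filter is for. A sketch from memory of the Lucene.Net 2.x-era approach -- index the date in a sortable string form via DateTools, then hand the searcher a filter (verify the exact names against your Lucene.Net version):

        ''# at index time: store the date as a sortable yyyyMMdd-style string
        Dim dateStr As String = DateTools.DateToString(searchableEvent.EventDate, DateTools.Resolution.DAY)
        doc.Add(New Field("date", dateStr, Field.Store.YES, Field.Index.UN_TOKENIZED))

        ''# at search time: only events from today onwards
        Dim lower As String = DateTools.DateToString(Date.Today, DateTools.Resolution.DAY)
        Dim filter As Filter = RangeFilter.More("date", lower)
        Dim topDocs As TopDocs = searcher.Search(query, filter, 10000)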

  • How to combine two separate unrelated Git repositories into one with single history timeline

    - by Antony
    I have two unrelated (not sharing any ancestor check-in) Git repositories. One is a super-repository which contains a number of smaller projects (let's call it repository A). The other one is just a makeshift local Git repository for a smaller project (let's call it repository B). Graphically, it looks like this:

        A0-B0-C0-D0-E0-F0-G0-HEAD (repo A)
        A0-B0-C0-D0-E0-F0-G0-HEAD (remote/master bare repo pulled & pushed from repo A)
        A1-B1-C1-D1-E1-HEAD       (repo B)

    Ideally, I would really like to merge repo B into repo A with a single history timeline, so it would appear that I originally started the project in repo A. Graphically, this would be the ideal end result:

        A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD (new repo A)
        A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD (remote/master bare repo pulled & pushed from repo A)

    I have been doing some reading on submodules and subtrees (Pro Git is a pretty good book, by the way), but both of them seem to cater to maintaining two separate branches, with submodules being able to pull changes from upstream and subtrees being slightly less of a headache. Both solutions require additional and specialized git commands to handle check-ins and syncing between the master and the submodule/subtree branch. Both solutions also result in multiple timelines (with --squash you even get three timelines with subtree). The closest solution from SO seems to talk about "grafts", but is that really it? The goal is to have a single unified repository where I can pull/push check-ins, so that there is no more repo B, just repo A in the end.
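    For what it's worth, a sketch of the simplest unification (paths are assumptions): fetch B into A and replay B's commits on top of A's history. This yields one linear timeline, though it appends B's commits rather than truly interleaving them by timestamp as in the diagram above:

        cd repoA
        git remote add repoB /path/to/repoB
        git fetch repoB
        git checkout -b import-b repoB/master
        git rebase master            # replay B's commits onto A's tip
        git checkout master
        git merge import-b           # fast-forward: one linear history
        git remote rm repoB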

  • Move to php in windows? Concern, hints, "please don't do!"?

    - by Daniel
    I am considering moving from Microsoft languages to PHP (just for web dev), which has quite an interesting syntax, a Perl-ish look (but a wider programmer base), and it allows me to reuse the web without reinventing it. I have some concerns too, and I would be more than happy to gather some wisdom from the Stack Overflow community (challenges to my opinions warmly welcome). Here are my doubts:

        1. Efficiency. CGI is slow; what am I supposed to use? FastCGI? Or what else?
        2. Efficiency + stability. Is PHP on Windows really stable and a good choice in terms of performance?
        3. Database. I very often use MSSQL (regrettably, I like it). Could I widely and efficiently interface PHP with MSSQL (smartly using stored procedures, for example)?
        4. XSLT + XML performance. I work quite a lot with XML and XSLT, and I really find the MS XML parser a great software component. Are the parsers used in PHP fast, reliable and efficient (I am interested mainly in DOM, not SAX)?
        5. Objects. Is the PHP object programming model valid and efficient?
        6. Regex. How efficient is PHP at processing regexes?

    Many thanks for your advice.

  • Git rebase and semi-tracked per-developer config files.

    - by dougkiwi
    This is my first SO question, and I'm new-ish to Git as well. Background: I am supposed to be the version control guru for Git in my group of about 8 developers. As I don't have a lot of Git experience, this is exciting. I decided we need a shared repository that would be the authoritative master for the production code and the main meeting point for the development code. As we work for a corporation, we really do need to show an authoritative source for the production code, at least. I have instructed the developers to pull with rebase when pulling from the shared repository, then push the commits that they want to share. We have been running into problems with a particular type of file. One of these files, which I currently assume is typical of the problem, is called web.config. We want a version-controlled master web.config for devs to clone, but each dev may make minor edits to this file that they wish to save locally but not share. The problem is this: how do I tell git not to consider local changes or commits to this file to be relevant for rebasing and pushing? Gitignore does not seem to solve the problem, but maybe that's because I put web.config into .gitignore too late? In some simple situations we have stashed local changes, rebased, pushed, and popped the stash, but that doesn't seem to work all of the time; I haven't picked up the pattern quite yet. The published documentation on pull --rebase tends to deal with simpler situations. Or do I have the wrong idea entirely? Are we misusing Git? Dougkiwi
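    Two commonly used patterns for this, sketched below. The first versions a template rather than the live file; the second keeps the file tracked but hides local edits:

        # Pattern 1: track web.config.example, ignore the real file
        git mv web.config web.config.example
        echo "web.config" >> .gitignore
        git commit -m "Track a template instead of the live web.config"
        # each developer then does:
        cp web.config.example web.config

        # Pattern 2: keep web.config tracked, but stop noticing local edits
        git update-index --assume-unchanged web.config
        # (reverse with: git update-index --no-assume-unchanged web.config)

    .gitignore only affects untracked files, which is why adding web.config "too late" (after it was already tracked) appeared to do nothing.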

  • How to parse multiple dates from a block of text in Python (or another language)

    - by mlissner
    I have a string that has several date values in it, and I want to parse them all out. The string is natural language, so the best thing I've found so far is dateutil. Unfortunately, if a string has multiple date values in it, dateutil throws an error:

        >>> s = "I like peas on 2011-04-23, and I also like them on easter and my birthday, the 29th of July, 1928"
        >>> parse(s, fuzzy=True)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/usr/lib/pymodules/python2.7/dateutil/parser.py", line 697, in parse
            return DEFAULTPARSER.parse(timestr, **kwargs)
          File "/usr/lib/pymodules/python2.7/dateutil/parser.py", line 303, in parse
            raise ValueError, "unknown string format"
        ValueError: unknown string format

    Any thoughts on how to parse all dates from a long string? Ideally, a list would be created, but I can handle that myself if I need to. I'm using Python, but at this point other languages are probably OK, if they get the job done. PS - I guess I could recursively split the input in the middle and try, try again until it works, but it's a hell of a hack.
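    One workable approach, sketched here: scan for date-looking fragments with a regex first, then let dateutil validate each candidate. The pattern below is deliberately loose, covers only the two shapes in the example, and would need tuning for other formats:

        import re
        from dateutil.parser import parse

        CANDIDATE = re.compile(
            r"\d{4}-\d{2}-\d{2}"                        # 2011-04-23
            r"|\d{1,2}(?:st|nd|rd|th)? of \w+,? \d{4}"  # 29th of July, 1928
        )

        def find_dates(text):
            dates = []
            for fragment in CANDIDATE.findall(text):
                try:
                    dates.append(parse(fragment, fuzzy=True))
                except ValueError:
                    pass  # looked like a date, wasn't one
            return dates

        s = ("I like peas on 2011-04-23, and I also like them on "
             "easter and my birthday, the 29th of July, 1928")
        print(find_dates(s))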

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data. It could be 16-bit, 24-bit packed, 32-bit, etc. It could be signed or unsigned, and it could be 32- or 64-bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24-bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16-bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code. What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32- and 16-bit signed PCM respectively, and I'll probably have to store the 24-bit PCM in an int32_t for 32-bit, 8-bit-byte systems as well. Can anyone recommend a good way to put data into this array and pull it out later? I'd like to avoid large sections of code which look like:

        switch( mFormat )
        {
        case 1: // unsigned 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
            ...

    Limitations: I'm working in C/C++; no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16-bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
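    One sketch that sidesteps the per-format switch (the names are invented): treat every sample as an opaque run of bytes plus a per-format byte width chosen once at setup, and move samples with memcpy; the caller reinterprets the bytes according to the format it already knows at runtime.

        #include <stdint.h>
        #include <string.h>

        typedef struct {
            void   **channels;       /* [channel] -> packed sample bytes  */
            size_t   bytesPerSample; /* 1, 2, 3, 4 or 8, set from mFormat */
        } PcmBuffer;

        /* Copy one sample's raw bytes out of the matrix. */
        static void pcm_read(const PcmBuffer *b, size_t ch, size_t frame, void *out)
        {
            const uint8_t *base = (const uint8_t *)b->channels[ch];
            memcpy(out, base + frame * b->bytesPerSample, b->bytesPerSample);
        }

        /* Copy one sample's raw bytes into the matrix. */
        static void pcm_write(PcmBuffer *b, size_t ch, size_t frame, const void *in)
        {
            uint8_t *base = (uint8_t *)b->channels[ch];
            memcpy(base + frame * b->bytesPerSample, in, b->bytesPerSample);
        }

    This keeps 24-bit packed data honest (bytesPerSample = 3) at the cost of one memcpy per sample; on the 16-bit-byte DSP the same scheme works if "bytes" are read as the machine's char units.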

  • Repository layout and sparse checkouts

    - by chuanose
    My team is considering moving from ClearCase to Subversion, and we are thinking of organising the repository like this:

        \trunk\project1
        \trunk\project2
        \trunk\project3
        \trunk\staticlib1
        \trunk\staticlib2
        \trunk\staticlib3
        \branches\..
        \tags\..

    The issue here is that we have lots of projects (1000+), and each project is a DLL that links in several common static libraries. Therefore checking out everything in trunk is a non-starter, as it would take far too long (~2 GB) and is unwieldy for branching. Using svn:externals to pull out the relevant folders for each project doesn't seem ideal, because it results in several working copies of each static library folder, and we also cannot do an atomic commit if the changes span the project and some static libraries. Sparse checkouts sound very suitable for this, as we can write a script to pull out only the required directories. However, when we want to merge changes from a branch back to the trunk, we will need to first check out a full trunk. I wonder if there is some advice on 1) a better repository organization, or 2) a way to merge branch changes over to a trunk working copy that is sparse?
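    For reference, a sketch of the sparse-checkout scripting mentioned above (SVN 1.5+ syntax; the URL is a stand-in): start from an empty-depth trunk and deepen only what a given project needs.

        svn checkout --depth empty http://svn.example.com/repo/trunk wc
        cd wc
        svn update --set-depth infinity project1 staticlib1 staticlib2

    Because everything lives in one working copy, a commit spanning project1 and the static libraries stays atomic, which is the property svn:externals loses.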

  • Can I keep git from pushing the master branch to all remotes by default?

    - by Curtis
    I have a local git repository with two remotes ('origin' is for internal development, and 'other' is for an external contractor to use). The master branch in my local repository tracks the master in 'origin', which is correct. I also have a branch 'external' which tracks the master in 'other'. The problem I have now is that my master branch ALSO wants to push to the master in 'other', which is an issue. Is there any way I can specify that the local master should NOT push to other/master? I've already tried updating my .git/config file to include:

        [branch "master"]
            remote = origin
            merge = refs/heads/master
        [branch "external"]
            remote = other
            merge = refs/heads/master
        [push]
            default = upstream

    But remote show still shows that my master is pushing to both remotes:

        toko:engine cmlacy$ git remote show origin
        Password:
        * remote origin
          Fetch URL: <REPO LOCATION>
          Push  URL: <REPO LOCATION>
          HEAD branch: master
          Remote branches:
            master       tracked
            refresh-hook tracked
          Local branch configured for 'git pull':
            master merges with remote master
          Local ref configured for 'git push':
            master pushes to master (up to date)

    Those are all correct.

        toko:engine cmlacy$ git remote show other
        Password:
        * remote other
          Fetch URL: <REPO LOCATION>
          Push  URL: <REPO LOCATION>
          HEAD branch: master
          Remote branch:
            master tracked
          Local branch configured for 'git pull':
            external merges with remote master
          Local ref configured for 'git push':
            master pushes to master (local out of date)

    That last section is the problem. 'external' should merge with other/master, but master should NEVER push to other/master. It's never going to work.
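    A couple of hedged observations. 'git remote show other' describes what 'git push other' would do, not what a bare 'git push' does; with push.default = upstream, a bare 'git push' from master should only touch origin. To make the restriction explicit:

        # always push master explicitly
        git push origin master

        # on newer Git versions (an assumption for a 2010-era setup),
        # pin master's push destination outright:
        git config branch.master.pushRemote origin

    With pushRemote set, even an accidental 'git push' while on master cannot reach 'other'.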
