Search Results

Search found 5919 results on 237 pages for 'regex matching'.

Page 209 of 237

  • An alternative to reading input from Java's System.in

    - by dvanaria
    I’m working on the UVa Online Judge problem set archive as a way to practice Java, and as a way to practice data structures and algorithms in general. They give an example input file to submit to the online judge to use as a starting point (it’s the solution to problem 100). Input from the standard input stream (java.lang.System.in) is required as part of any solution on this site, but I can’t understand the implementation of reading from System.in they give in their example solution. It’s true that the input file could consist of any variation of integers, strings, etc., but every solution program requires reading basic lines of text input from System.in, one line at a time. There has to be a better (simpler and more robust) method of gathering data from the standard input stream in Java than this: public static String readLn(int maxLg) { byte lin[] = new byte[maxLg]; int lg = 0, car = -1; String line = ""; try { while (lg < maxLg) { car = System.in.read(); if ((car < 0) || (car == '\n')) { break; } lin[lg++] += car; } } catch (java.io.IOException e) { return (null); } if ((car < 0) && (lg == 0)) { return (null); /* eof */ } return (new String(lin, 0, lg)); } I’m really surprised by this. It looks like something pulled directly from K&R’s “C Programming Language” (a great book regardless), minus the access level modifier and exception handling, etc. Even though I understand the implementation, it just seems like it was written by a C programmer and bypasses most of Java’s object-oriented nature. Isn’t there a better way to do this, using the StringTokenizer class or maybe using the split method of String or the java.util.regex package instead?
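
    For comparison, a minimal sketch of the more common BufferedReader approach (the class name and the per-line processing are placeholders, not part of the judge's template):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class ReadLines {
            public static void main(String[] args) throws IOException {
                // readLine() returns null at end of input, which replaces the manual
                // EOF and byte handling in the original readLn().
                BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] tokens = line.trim().split("\\s+"); // tokenize per problem
                    System.out.println(tokens.length);           // placeholder processing
                }
            }
        }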

    Read the article

  • WCF ReliableMessaging method called twice

    - by Brian
    Using Fiddler, we see 3 HTTP requests (and matching responses) for each call when WS-ReliableMessaging is enabled and the method returns a large amount of data (17 MB). The first HTTP request is a SOAP message with the action "CreateSequence" (presumably to establish the reliable session). The second and third HTTP requests are identical SOAP messages invoking our webservice method. Why are there two identical messages? Here is our config: <system.serviceModel> <client> <endpoint address="http://server/vdir/AccountingService.svc" binding="wsHttpBinding" bindingConfiguration="customWsHttpBinding" behaviorConfiguration="LargeServiceBehavior" contract="MyProject.Accounting.IAccountingService" name="BasicHttpBinding_IAccountingService" /> </client> <bindings> <wsHttpBinding> <binding name="customWsHttpBinding" maxReceivedMessageSize="90000000"> <reliableSession enabled="true"/> <security mode="None" /> </binding> </wsHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="LargeServiceBehavior"> <dataContractSerializer maxItemsInObjectGraph="2147483647"/> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> Thanks, Brian

    Read the article

  • JQuery Simplemodal and Tabs Help Needed

    - by Dave R
    Hi, I've got an asp.net page containing a Textbox with an Autocomplete extender on it. It's set up so the user can type a short reference code into the textbox and then choose from the list of matching codes returned by the autocomplete. On the "select", I then call the server using JQuery. I'm currently using $.get here.... The callback function from $.get checks for "success" and then displays a simple-modal dialog containing info about the item they've just selected. if (sStatus == "success") { $.modal(sText, { overlayClose: true, appendTo:'form', onShow: function(dialog) { $("#ccTargets_tabContainer").tabs(); }, onClose: function(dialog) { $("#<%=TextBox1.ClientID%>").val(""); $.modal.close(); } }); $.ready(); } One of the bits of info being loaded here is a JQuery TABS setup, so the onShow function of the simplemodal is used to initiate the tabs which are within the simplemodal. Now to the crux of my problem. If I do multiple consecutive "autocompletes" on the same page it all works fine, unless I have selected a different tab on the tabs in the simplemodal. If I select a different tab, close the simplemodal and then do another autocomplete, I get a JQuery error which seems to relate to a selector doing something with the "old" selected tab that was on the "closed" modal. I'm clearly missing some sort of cleardown / initialisation somewhere, but can't find what it is. Help? I've tried "tabs.destroy" before the modal call in the code above and I've tried a $.ready() call as indicated too.... UPDATE: Is it something to do with JQuery Tabs appending my addressbar URL with the selected tab's ID?
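
    One hedged guess at the missing cleanup, assuming the tabs come from jQuery UI: tear the widget down in onClose, while its markup still exists, so no stale tab state survives into the next autocomplete. A sketch:

        onClose: function (dialog) {
            // Destroy the tabs widget created in onShow before SimpleModal removes the content.
            $("#ccTargets_tabContainer").tabs("destroy");
            $("#<%=TextBox1.ClientID%>").val("");
            $.modal.close();
        }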

    Read the article

  • WordPress > Optimizing a query to show recent posts with a "View All" link when postcount exceeds ma

    - by Scott B
    I have a setting in my theme that allows the site owner to set the maximum number of posts ($maxPosts) to display in a "Recent Posts" menu. I'm using a custom script to generate the recent posts (because the Recent Posts widget does not highlight the current page, which I need for my css). My menu also is set up to display a "View All" link below the post listing, but only if the actual post count exceeds $maxPosts. I'm trying to work out the best method for getting the post count and comparing it to $maxPosts in order to determine whether or not to show a "View All" link. I'm sure there's probably a better way, but here's my code. I'm looking to optimize it to support very large post counts... $cat=get_cat_ID('excludeFromRecentPosts'); $catHidden=get_cat_ID('hidden'); $myquery = new WP_Query(); $myquery->query(array( 'cat' => "-$cat,-$catHidden", 'post_not_in' => get_option('sticky_posts') )); $myrecentpostscount = $myquery->found_posts; if ($myrecentpostscount > 0) { //show the menu if ($myrecentpostscount > $maxPosts) { //show "View All" link } } I really only need to determine if the total post count from the query is greater than the maxPost setting in order to determine whether to show the "View All" link, so I'm wondering if, in the case there are thousands of posts matching the criteria, to avoid performance issues, I don't need to get a count of all of them. I just need to count up until the point of maxPosts + 1, and that's where I'm struggling a bit because the user could elect to make maxPosts = -1 which means they want to show all posts. But this would be impractical, so I would probably set an upper limit of 20...
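
    A sketch of the capped-count idea, assuming WP_Query's posts_per_page and no_found_rows arguments: fetching at most $maxPosts + 1 rows is enough to decide whether the "View All" link is needed, without counting every matching post.

        $limit = ( $maxPosts > 0 ) ? $maxPosts + 1 : -1; // -1 keeps a "show everything" option
        $myquery = new WP_Query( array(
            'cat'            => "-$cat,-$catHidden",
            'post__not_in'   => get_option( 'sticky_posts' ),
            'posts_per_page' => $limit,
            'no_found_rows'  => true, // skip the expensive SQL_CALC_FOUND_ROWS
        ) );
        // post_count is the number of rows actually fetched (at most $maxPosts + 1).
        $showMenu    = $myquery->post_count > 0;
        $showViewAll = ( $maxPosts > 0 && $myquery->post_count > $maxPosts );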

    Read the article

  • Using hibernate criteria, is there a way to escape special characters?

    - by Kevin Crowell
    For this question, we want to avoid having to write a special query since the query would have to be different across multiple databases. Using only hibernate criteria, we want to be able to escape special characters. This situation is the reason for needing the ability to escape special characters: Assume that we have table 'foo' in the database. Table 'foo' contains only 1 field, called 'name'. The 'name' field can contain characters that may be considered special in a database. Two examples of such a name are 'name_1' and 'name%1'. Both the '_' and '%' are special characters, at least in Oracle. If a user wants to search for one of these examples after they are entered in the database, problems may occur. criterion = Restrictions.ilike("name", searchValue, MatchMode.ANYWHERE); return findByCriteria(null, criterion); In this code, 'searchValue' is the value that the user has given the application to use for its search. If the user wants to search for '%', the user is going to be returned with every 'foo' entry in the database. This is because the '%' character represents the "any number of characters" wildcard for string matching and the SQL code that hibernate produces will look like: select * from foo where name like '%' Is there a way to tell hibernate to escape certain characters, or to create a workaround that is not database-type specific?
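
    One portable-ish workaround, sketched below, is to escape the LIKE wildcards in the search value before building the restriction; whether the backslash is honoured as the escape character still varies by dialect (Oracle wants an explicit ESCAPE clause), so treat this as a starting point rather than a guaranteed fix.

        // Escape LIKE wildcards so a user-supplied '%' or '_' is matched literally.
        private static String escapeLikeValue(String value) {
            return value
                .replace("\\", "\\\\")
                .replace("%", "\\%")
                .replace("_", "\\_");
        }

        criterion = Restrictions.ilike("name", escapeLikeValue(searchValue), MatchMode.ANYWHERE);
        return findByCriteria(null, criterion);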

    Read the article

  • Autodetect timezone in Rails given UTC offset and DST

    - by Jose
    I basically want to autodetect a user's timezone using Rails. I can use this JS code in the user's browser (http://www.onlineaspect.com/2007/06/08/auto-detect-a-time-zone-with-javascript/) to send a form with the UTC offset and whether the user's time zone observes DST during the summer. Once I have that info in the server, I want to select the matching time zone. In Rails, I can get a list of time zones with ActiveSupport::TimeZone.all. Also, I can filter zones by UTC offset thanks to the utc_offset method. However, I don't know how to filter the timezones that do/don't observe DST. E.g. suppose a user lives in Amsterdam. Filtering by UTC offset will return the Berlin, Belgrade, Madrid, etc. timezones, as well as West Central Africa. All of them except West Central Africa would be appropriate timezones for a user in Amsterdam (as they provide the same time/date), but I need to filter out West Central Africa, which does not observe DST in summer. How can I do this in Rails? Also, are any of my assumptions wrong?
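
    A sketch of the kind of filter this needs, assuming the browser posts the base UTC offset in seconds plus a boolean for summer DST; the TZInfo calls (tzinfo, period_for_utc, dst?) are the assumed way to ask whether a zone is on DST in July:

        # Keep zones whose base offset matches and whose July behaviour matches the
        # browser's "observes DST in summer" flag (northern-hemisphere summer).
        def candidate_zones(utc_offset_seconds, observes_summer_dst)
          ActiveSupport::TimeZone.all.select do |zone|
            next false unless zone.utc_offset == utc_offset_seconds
            july_dst = zone.tzinfo.period_for_utc(Time.utc(Time.now.year, 7, 1)).dst?
            july_dst == observes_summer_dst
          end
        end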

    Read the article

  • WCF Service w/ SharePoint Error: Could not find default endpoint element that references contract...

    - by Brian Clark
    Full error message: Could not find default endpoint element that references contract 'PublicationServices.IPublicationService' in the ServiceModel client configuration section. This might be because no configuration file was found for your application, or because no endpoint element matching this contract could be found in the client element. I have a SharePoint site that I have already opened a project for in Visual Studio 2010. I also created a project that contains a WCF Service Application and added it to the same solution that contains the project for my SharePoint site. I have created a visual web part in my SharePoint project that I am trying to use to consume the WCF Service. I am doing so like this, from within the user control for my web part: PublicationServiceClient proxy = new PublicationServiceClient(); Just having this line alone in OnPreRender, Page_Load, etc. will generate the above error. I've read previous posts about having to have items in the config file of the WCF service also in the config file of the consuming application. I have done this: I copied the relevant section from the Web.config file of my WCF service and placed it inside the system.serviceModel tags of my SharePoint project's app.config file. In other words, this is in both of my config files. When I add this web part to the front page of my SharePoint site though, I get the above error every time. I should also note that I have created a console app that I was able to use with no problems to consume data from this very same WCF service. Any help would be appreciated!
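
    Worth checking: a web part executes inside the SharePoint web application, so WCF looks for the client endpoint in that web application's web.config (in its IIS virtual directory), not in an app.config belonging to the Visual Studio project. As a sketch of one way to sidestep the config lookup entirely, the binding and address can be supplied in code; the URL and security mode below are placeholders that must match the service:

        var binding = new WSHttpBinding(SecurityMode.None); // must mirror the service binding
        var address = new EndpointAddress("http://server/PublicationService.svc"); // placeholder URL
        PublicationServiceClient proxy = new PublicationServiceClient(binding, address);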

    Read the article

  • PHP preg_replace - Don't match with h1 tags

    - by James
    Hi there. I am using preg_replace to add a link to keywords if they are found within a long HTML string. I don't want to add a link if the keyword is found within h1 tags or strong tags. The below regex nearly works and basically says (I think): If the keyword is not immediately wrapped by either a h1 tag or a strong tag then replace with the keyword that was matched, as a bolded link to google. $result = preg_replace('%(?!<h1>)(?!<strong>)\b(bobs widgets)\b(?!<\/strong>)(?!<\/h1>)%i','<a href="http://www.google.com"><strong>$1</strong></a>', $result, -1); (the reason I don't want to match if in strong tags is because I am recursing through a lot of keywords so don't want to link an already linked keyword on subsequent passes) the above works fine and won't match: <h1>bobs widgets</h1> It will however match the keyword in the following text, because the h1 tag isn't immediately either side of the keyword: <h1>Here are bobs widgets for sale</h1> I need to make the spaces either side optional and have tried adding \s* but that doesn't get me anywhere. I'd be very grateful for a push in the right direction here.
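
    One way to make the check independent of where the keyword sits inside the heading is to look ahead from the match to the next closing tag; a sketch (fragile on malformed HTML, and it also skips keywords already wrapped in strong tags):

        // Skip any match whose next closing tag is </h1> or </strong>, i.e. a match that
        // sits anywhere inside such a block, not just immediately beside the tags.
        $pattern = '%\b(bobs widgets)\b(?![^<]*</(?:h1|strong)>)%i';
        $result  = preg_replace(
            $pattern,
            '<a href="http://www.google.com"><strong>$1</strong></a>',
            $result
        );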

    Read the article

  • Getting value from href using jQuery

    - by bateman_ap
    Hi, I wonder if anyone can help with a jQuery problem I am having. I am using the tooltips from the jQuery Tools library to create a popup window when mousing over a linked image, and I want to customise the call so it changes the content in the DIV. The links I am using are in the form: <a href="/venue/1313.htm" class="quickView"><img src="/images/site/quickView83.png" alt="Quick View" width="83" height="20" /></a> The code I am using to trigger the tip is: $(".quickView").live('mouseover', function() { if (!$(this).data('init')) { $(this).data('init', true); ajax_quickView(); $(this).tooltip ({ /* tooltip configuration goes here */ tip: "#quickViewWindow", position: "right", offset: [0, -300], effect: 'slide' }); $(this).trigger('mouseover'); } }); I have tried the following function to grab the ID (in the example above, 1313) from the link: function ajax_quickView(){ var pageNum = $("a.quickView").attr("href").match(/venue/([0-9]+)/).htm[1]; $("#quickViewWindow").load("/quick-view/", function(){}) } However, I think this is where it falls down; my regex is probably to blame... Once I get the var pageNum I presume I can just pass it into the .load as: $("#quickViewWindow").load("/quick-view/", {id : pageNum }, function(){}) Many thanks
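
    A sketch of the id extraction with the slashes escaped, and with the hovered link passed in so the lookup is not tied to the first .quickView on the page (the /quick-view/ URL and id parameter are kept from the question):

        function ajax_quickView(link) {
            // "/venue/1313.htm" -> "1313"; match() returns null if the href is unexpected.
            var match = $(link).attr("href").match(/\/venue\/(\d+)\.htm/);
            if (!match) { return; }
            var pageNum = match[1];
            $("#quickViewWindow").load("/quick-view/", { id: pageNum });
        }

    It would then be called as ajax_quickView(this) from inside the mouseover handler.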

    Read the article

  • How do I compress a Json result from ASP.NET MVC with IIS 7.5

    - by Gareth Saul
    I'm having difficulty making IIS 7 correctly compress a Json result from ASP.NET MVC. I've enabled static and dynamic compression in IIS. I can verify with Fiddler that normal text/html and similar records are compressed. Viewing the request, the accept-encoding gzip header is present. The response has the mimetype "application/json", but is not compressed. I've identified that the issue appears to relate to the MimeType. When I include mimeType="*/*", I can see that the response is correctly gzipped. How can I get IIS to compress WITHOUT using a wildcard mimeType? I assume that this issue has something to do with the way that ASP.NET MVC generates content type headers. The CPU usage is well below the dynamic throttling threshold. When I examine the trace logs from IIS, I can see that it fails to compress due to not finding a matching mime type. <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files" noCompressionForProxies="false"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/json" enabled="true" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> <add mimeType="application/json" enabled="true" /> </staticTypes> </httpCompression>
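
    One hedged guess: IIS matches the compression entry against the full Content-Type header, and MVC's JsonResult emits "application/json; charset=utf-8", so listing that exact value alongside the bare mime type may be all that is missing:

        <dynamicTypes>
          <add mimeType="application/json" enabled="true" />
          <!-- MVC appends the charset, so also match the full header value -->
          <add mimeType="application/json; charset=utf-8" enabled="true" />
        </dynamicTypes>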

    Read the article

  • Perform case-insensitive lookup on an Array in MongoDB?

    - by Hal
    So, I've decided to get my feet wet with MongoDB and love it so far. It seems very fast and flexible which is great. But, I'm still going through the initial learning curve and as such, I'm spending hours digging for info on the most basic things. I've searched throughout the MongoDB online documentation and have spent hours Googling through pages without any mention of this. I know Mongo is still quite new (v1.x) so it explains why there isn't much information yet. I've even tried looking for books on Mongo without much luck. So yes, I've tried to RTFM with no luck, so now I turn to you. I have an Array of various Hashtags nested in each document (ie: #apples, #oranges, #Apples, #APPLES) and I would like to perform a case-insensitive find() to access all the documents containing apples in any case. It seems that find does support some regex with /i, but I can't seem to get this working either. Anyway, I hope this is a quick answer for someone. Here's my existing call in PHP, which is case-sensitive: $cursor = $collection->find(array( "hashtags" => array("#".$keyword)))->sort(array('$natural' => -1))->limit(10); Help?
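
    A sketch of the regex form of the query, assuming the legacy PECL driver's MongoRegex class; the /i flag makes the hashtag match case-insensitive and preg_quote keeps user input from being read as pattern syntax. Note that a case-insensitive regex generally cannot use an index, so storing a lowercased copy of each tag is the usual faster alternative.

        $regex  = new MongoRegex('/^#' . preg_quote($keyword, '/') . '$/i');
        $cursor = $collection->find(array('hashtags' => $regex))
                             ->sort(array('$natural' => -1))
                             ->limit(10);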

    Read the article

  • Cocoa Touch UITableView Alphabetical '#' Match All Unmatched

    - by Kevin Sylvestre
    I have a UITableView containing names that I would like to group (and sort) by the first letter (similar to the Address Book application). I am currently able to match any section ('A'-'Z') using: // Sections is an array of strings "{search}" and "A" to "Z" and "#". NSString *pattern = [self.sections objectAtIndex:section]; NSPredicate *predicate = nil; // Ignore search pattern. if ([pattern isEqualToString:@"{search}"]) return nil; // Non-Alpha and Non-Diacritic-Alpha (?). if ([pattern isEqualToString:@"#"]); // Default case (use case and diacritic insensitivity). if (!predicate) predicate = [NSPredicate predicateWithFormat:@"name beginswith[cd] %@", pattern]; // Return filtered results. return [self.friends filteredArrayUsingPredicate:predicate]; However, matching for the '#' eludes me. I tried constructing a REGEX match using: [NSPredicate predicateWithFormat:@"name matches '[^a-zA-Z].*'"]; But this fails for diacritic-alpha (duplicate rows appear). Any ideas would be greatly appreciated! Thanks.

    Read the article

  • How would you answer Joel's sample programming questions?

    - by Khorkrak
    I recently interviewed a candidate for a new position here. I wish though that I'd read Joel's Guerrilla Guide to Interviewing prior to that interview - naturally I happened upon it the night afterwards :P http://www.joelonsoftware.com/articles/GuerrillaInterviewing3.html So I tried answering the easy questions myself - yeah, I used the Python interpreter to type stuff in and tested the results a bit - I didn't look up any solutions beforehand though, and I also thought about how long it took me to come up with answers for each one and what I'd look for the next time I interview someone. I'd let them type stuff into the interpreter and see how they used Python's introspection capabilities to find out things like which re module method builds a regex, etc. Here are my answers - these are in Python of course - what are yours in your favourite language? Do you see any issues with the answers I came up with - i.e. how could they be improved upon - what did I miss? Joel's example questions: Write a function that determines if a string starts with an upper-case letter A-Z. import re upper_regex = re.compile("^[A-Z]") def starts_with_upper(text): return upper_regex.match(text) is not None Write a function that determines the area of a circle given the radius. from math import pi def area(radius): return pi * radius**2 Add up all the values in an array. sum([1, 2, 3, 4, 5]) Harder Question: Write an example of a recursive function - so how about the classic factorial one: def factorial(num): if num > 1: return num * factorial(num - 1) else: return 1

    Read the article

  • What is the difference: LoadUserProfile -vs- RegOpenCurrentUser

    - by Will5801
    These two APIs are very similar but it is unclear what the differences are and when each should be used (except that LoadUserProfile is specified for use with CreateProcessAsUser, which I am not using; I am simply impersonating for hive access). LoadUserProfile http://msdn.microsoft.com/en-us/library/bb762281(VS.85).aspx RegOpenCurrentUser http://msdn.microsoft.com/en-us/library/ms724894(VS.85).aspx According to the Services & the Registry article: http://msdn.microsoft.com/en-us/library/ms685145(VS.85).aspx we should use RegOpenCurrentUser when impersonating. But what does/should RegOpenCurrentUser do if the user profile is roaming - should it load it? As far as I can tell from these docs, both APIs provide a handle to the HKEY_CURRENT_USER for the user the thread is impersonating. Therefore, they both "load" the hive, i.e. lock it as a database file and give a handle to it for registry APIs. It might seem that LoadUserProfile loads the user profile in the same way as the user does when he/she logs on, whereas RegOpenCurrentUser does not - is this correct? What is the fundamental difference (if any) in how these two APIs mount the hive? What are the implications and differences (if any) in what happens if a user logs on or logs off while each of these impersonated handles is already in use, or if a user is already logged on when each matching close function (RegCloseKey and UnloadUserProfile) is called?

    Read the article

  • extract specific element from nested elements using lxml html

    - by Dan.StackOverflow
    Hi all, I am having some problems that I think can be attributed to XPath. I am using the html module from the lxml package to try and get at some data. I am providing the most simplified situation below, but keep in mind the html I am working with is much uglier. <table> <tr> <td> <table> <tr><td></td></tr> <tr><td> <table> <tr><td><u><b>Header1</b></u></td></tr> <tr><td>Data</td></tr> </table> </td></tr> </table> </td></tr> </table> What I really want is the deeply nested table, because it has the header text "Header1". I am trying like so: from lxml import html page = '...' tree = html.fromstring(page) print tree.xpath('//table[//*[contains(text(), "Header1")]]') but that gives me all of the table elements. I just want the one table that contains this text. I understand what is going on but am having a hard time figuring out how to do this besides breaking out some nasty regex. Any thoughts?
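
    One way to keep the text test but still land on the innermost table is to anchor on the element that holds the text and then climb to its nearest enclosing table; a sketch:

        from lxml import html

        tree = html.fromstring(page)
        # ancestor::table[1] is the closest enclosing <table> of the element whose text
        # contains "Header1", i.e. only the deeply nested table.
        tables = tree.xpath('//*[contains(text(), "Header1")]/ancestor::table[1]')
        if tables:
            inner_table = tables[0]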

    Read the article

  • sed/awk or other: one-liner to increment a number by 1 keeping spacing characters

    - by WizardOfOdds
    EDIT: I don't know in advance at which "column" my digits are going to be and I'd like to have a one-liner. Apparently sed doesn't do arithmetic, so maybe a one-liner solution based on awk? I've got a string: (notice the spacing) eh oh 37 and I want it to become: eh oh 36 (so I want to keep the spacing) Using awk I don't find how to do it, so far I have: echo "eh oh 37" | awk '$3>=0&&$3<=99 {$3--} {print}' But this gives: eh oh 36 (the spacing characters were lost, because the field separator is ' ') Is there a way to ask awk something like "print the output using the exact same field separators as the input had"? Then I tried yet something else, using awk's sub(..,..) method: ' sub(/[0-9][0-9]/, ...) {print}' but no cigar yet: I don't know how to reference the regexp and do arithmetic on it in the second argument (which I left with '...' for now). Then I tried with sed, but got stuck after this: echo "eh oh 37" | sed -e 's/\([0-9][0-9]\)/.../' Can I do arithmetic from sed using a reference to the matching digits and have the output not modify the number of spacing characters? Note that it's related to my question concerning Emacs and how to apply this to some (big) Emacs region (using a replace region with Emacs's shell-command-on-region) but it's not an identical question: this one is specifically about how to "keep spaces" when working with awk/sed/etc.
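
    A sketch of one awk way to keep the spacing: let match() find the digits, then splice the decremented value back between the untouched halves of the line (if the number drops from two digits to one, the overall width still changes, which this does not try to fix):

        echo "eh oh    37" | awk '{
            if (match($0, /[0-9][0-9]/)) {
                n = substr($0, RSTART, RLENGTH) - 1
                $0 = substr($0, 1, RSTART - 1) n substr($0, RSTART + RLENGTH)
            }
            print   # lines without a two-digit number pass through untouched
        }'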

    Read the article

  • Symbolicate adhoc iphone app crashes.

    - by gotye
    Hey everyone, I can't manage to make my code symbolicated ... I read the part "below" : Given a crash report, the matching binary, and its .dSYM file, symbolication is relatively easy. The Xcode Organizer window has a tab for crash reports of the currently selected device. You can view externally received crash reports in this tab - just place them in the appropriate directory. This is the same as the Mac OS X directory described in the first section. It doesn't matter which device you have tethered, but the directory in which you place the crash report must be the directory for the tethered and selected device. It is not necessary to place the binary and .dSYM file in any particular location. Xcode uses Spotlight and the UUID to locate the correct files. It is necessary, though, that both files be in the same directory and that this directory is one that is indexed by Spotlight. Anywhere in your home directory should be fine. But it doesn't work for me ... here is what I did : I opened xcode organizer and I had my iphone device with crash logs App and dsym files are in my xcode project which is on my desktop All the rest should be automatic, right ? but crash logs aren't symbolicated yet ... Any comments welcome. Cheers. Gotye.

    Read the article

  • Mapping issue with multi-field primary keys using hibernate/JPA annotations

    - by Derek Clarkson
    Hi all, I'm stuck with a database which is using multi-field primary keys. I have a situation where I have a master and details table, where the details table's primary key contains fields which are also the foreign keys to the master table. Like this: Master primary key fields: master_pk_1 Details primary key fields: master_pk_1 details_pk_2 details_pk_3 In the Master class we define the hibernate/JPA annotations like this: @Id @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "idGenerator") @Column(name = "master_pk_1") private long masterPk1; @OneToMany(cascade=CascadeType.ALL) @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1") private List<Details> details = new ArrayList<Details>(); And in the details class I have defined the id and back reference like this: @EmbeddedId @AttributeOverrides( { @AttributeOverride( name = "masterPk1", column = @Column(name = "master_pk_1")), @AttributeOverride(name = "detailsPk2", column = @Column(name = "details_pk_2")), @AttributeOverride(name = "detailsPk3", column = @Column(name = "details_pk_3")) }) private DetailsPrimaryKey detailsPrimaryKey = new DetailsPrimaryKey(); @ManyToOne @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1", insertable=false) private Master master; The goal of all of this was that I could create a new master, add some details to it, and when saved, JPA/Hibernate would generate the new id for master in the masterPk1 field, and automatically pass it down to the details records, storing it in the matching masterPk1 field in the DetailsPrimaryKey class. At least that's what the documentation I've been looking at implies. What actually happens is that hibernate appears to correctly create and update the records in the database, but not pass the key to the details classes in memory. Instead I have to manually set it myself. I also found that without the insertable=false added to the back reference to master, hibernate would create sql that had the master_pk_1 field listed twice in the insert statement, resulting in the database throwing an exception. My question is simply: is this arrangement of annotations correct, or is there a better way of doing it?
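
    For the duplicated-column insert specifically, a common sketch is to make the child's back reference read-only on that column, so only the embedded id writes it (names follow the question's mapping):

        @ManyToOne
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1",
                    insertable = false, updatable = false)
        private Master master;

    Propagating the generated masterPk1 down into each DetailsPrimaryKey before saving still has to be done by hand (or in a @PrePersist callback) with this style of mapping.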

    Read the article

  • How can I filter a JTable?

    - by Jonas
    I would like to filter a JTable, but I don't understand how I can do it. I have read How to Use Tables - Sorting and Filtering and I have tried with the code below, but with that filter, no rows at all are shown in my table. And I don't understand what column it is filtered on. private void myFilter() { RowFilter<MyModel, Object> rf = null; try { rf = RowFilter.regexFilter(filterFld.getText(), 0); } catch (java.util.regex.PatternSyntaxException e) { return; } sorter.setRowFilter(rf); } MyModel has three columns, the first two are strings and the last column is of type Integer. How can I apply the filter above, considering the text in filterFld.getText(), and only keep the rows where the text is matched in the second column? I would like to show all rows that start with the text specified by filterFld.getText(). I.e. if the text is APP then the JTable should contain the rows where the second column is APPLE or APPLICATION, but not the rows where the second column is CAR or ORANGE. I have also tried with this filter: RowFilter<MyModel, Integer> itemFilter = new RowFilter<MyModel, Integer>(){ public boolean include(Entry<? extends MyModel, ? extends Integer> entry){ MyModel model = entry.getModel(); MyItem item = model.getRecord(entry.getIdentifier()); if (item.getSecondColumn().startsWith("APP")) { return true; } else { return false; } } }; How can I write a filter that filters the JTable on the second column, using the text specified in my text field?
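
    A sketch of the second-column, starts-with variant of the first filter: quote the user's text so it is taken literally, anchor it at the start, and pass column index 1 so only the second column is tested.

        private void myFilter() {
            RowFilter<MyModel, Object> rf;
            try {
                String prefix = java.util.regex.Pattern.quote(filterFld.getText());
                rf = RowFilter.regexFilter("^" + prefix, 1); // column 1 = the second column
            } catch (java.util.regex.PatternSyntaxException e) {
                return;
            }
            sorter.setRowFilter(rf);
        }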

    Read the article

  • What is the PIXELFORMATDESCRIPTOR parameter in SetPixelFormat() used for?

    - by Mads Elvheim
    Usually when setting up OpenGL contexts, I've simply filled out a PIXELFORMATDESCRIPTOR structure with the necessary information and called ChoosePixelFormat(), followed by a call to SetPixelFormat() with the returned matching pixelformat from ChoosePixelFormat(). Then I've simply passed the initial descriptor without giving much thought to why. But now I use wglChoosePixelFormatARB() instead of ChoosePixelFormat() because I need some extended traits like sRGB and multisampling. It takes an attribute list of integers, just like XLib/GLX on Linux, not a PIXELFORMATDESCRIPTOR structure. So, do I really have to fill in a descriptor for SetPixelFormat() to use? What does SetPixelFormat() use the descriptor for when it already has the pixelformat descriptor index? Why do I have to specify the same pixelformat attributes in two different places? And which one takes precedence: the attribute list to wglChoosePixelFormatARB(), or the PIXELFORMATDESCRIPTOR attributes passed to SetPixelFormat()? Here are the function prototypes, to make the question more clear: /* Finds a best match based on a PIXELFORMATDESCRIPTOR, and returns the pixelformat index */ int ChoosePixelFormat(HDC hdc, const PIXELFORMATDESCRIPTOR *ppfd); /* Finds a best match based on an attribute list of integers and floats, and returns a list of indices of matches, with the best matches at the head. Also supports extended pixelformat traits like sRGB color space, floating-point framebuffers and multisampling. */ BOOL wglChoosePixelFormatARB(HDC hdc, const int *piAttribIList, const FLOAT *pfAttribFList, UINT nMaxFormats, int *piFormats, UINT *nNumFormats ); /* Sets the pixelformat based on the pixelformat index */ BOOL SetPixelFormat(HDC hdc, int iPixelFormat, const PIXELFORMATDESCRIPTOR *ppfd);
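
    One common pattern, sketched below, avoids hand-building the descriptor at all: once wglChoosePixelFormatARB() has produced an index, DescribePixelFormat() fills in the PIXELFORMATDESCRIPTOR that belongs to that index, and that is what gets handed to SetPixelFormat():

        /* pixelFormat is the index returned by wglChoosePixelFormatARB(). */
        PIXELFORMATDESCRIPTOR pfd;
        ZeroMemory(&pfd, sizeof(pfd));
        DescribePixelFormat(hdc, pixelFormat, sizeof(pfd), &pfd);
        SetPixelFormat(hdc, pixelFormat, &pfd);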

    Read the article

  • Parse Text using scanner useDelimiter

    - by Brian
    Looking to parse the following text file: Sample text file: <2008-10-07text entered by user<2008-11-26additional text entered by user I would like to parse the above text so that I can have three variables: v1 = 2008-10-07 v2 = text entered by user v3 = Ted Parlor v1 = 2008-11-26 v2 = additional text entered by user v3 = Ted Parlor I attempted to use Scanner and useDelimiter; however, I'm having issues with how to set this up to get the results stated above. Here's my first attempt: import java.io.*; import java.util.Scanner; public class ScanNotes { public static void main(String[] args) throws IOException { Scanner s = null; try { //String regex = "(?<=\<)([^\*)(?=\)"; s = new Scanner(new BufferedReader(new FileReader("cur_notes.txt"))); s.useDelimiter("[<]+"); while (s.hasNext()) { String v1 = s.next(); String v2= s.next(); System.out.println("v1= " + v1 + " v2=" + v2); } } finally { if (s != null) { s.close(); } } } } The result is as follows: v1= 2008-10-07text entered by user v2=Ted Parlor What I desire is: v1= 2008-10-07 v2=text entered by user v3=Ted Parlor v1= 2008-11-26 v2=additional text entered by user v3=Ted Parlor Any help that would allow me to extract all three strings separately would be greatly appreciated.

    Read the article

  • Help setting up command line gist

    - by smotchkkiss
    Setup: I'm following defunkt's gist setup guide. [smotchkkiss ~]$ sudo gem install gist [smotchkkiss ~]$ git config --global github.user "my github name" [smotchkkiss ~]$ git config --global github.token "my github token" [smotchkkiss ~]$ echo "puts 'hello, gist.'" > hello.rb [smotchkkiss ~]$ gist hello.rb Output: Usage: open [-e] [-t] [-f] [-W] [-n] [-g] [-h] [-b <bundle identifier>] [-a <application>] [filenames] Help: Open opens files from a shell. By default, opens each file using the default application for that file. If the file is in the form of a URL, the file will be opened as a URL. Options: -a Opens with the specified application. -b Opens with the specified application bundle identifier. -e Opens with TextEdit. -t Opens with default text editor. -f Reads input from standard input and opens with TextEdit. -W, --wait-apps Blocks until the used applications are closed (even if they were already running). -n, --new Open a new instance of the application even if one is already running. -g, --background Does not bring the application to the foreground. -h, --header Searches header file locations for headers matching the given filenames, and opens them. Return value: nil. Help! A nil return value? What gives? No new gist appears on my My Gists page on github.

    Read the article

  • SQL (mySQL) update some value in all records processed by a select

    - by jdmuys
    I am using mySQL from their C API, but that shouldn't be relevant. My code must process records from a table that match some criteria, and then update the said records to flag them as processed. The lines in the table are modified/inserted/deleted by another process I don't control. I am afraid in the following, the UPDATE might flag some records erroneously since the set of records matching might have changed between step 1 and step 3. SELECT * FROM myTable WHERE <CONDITION>; # step 1 <iterate over the selected set of lines. This may take some time.> # step 2 UPDATE myTable SET processed=1 WHERE <CONDITION> # step 3 What's the smart way to ensure that the UPDATE updates all the lines processed, and only them? A transaction doesn't seem to fit the bill as it doesn't provide isolation of that sort: a recently modified record not in the originally selected set might still be targeted by the UPDATE statement. For the same reason, SELECT ... FOR UPDATE doesn't seem to help, though it sounds promising :-) The only way I can see is to use a temporary table to memorize the set of rows to be processed, doing something like: CREATE TEMPORARY TABLE workOrder (jobId INT(11)); INSERT INTO workOrder SELECT myID as jobId FROM myTable WHERE <CONDITION>; SELECT * FROM myTable WHERE myID IN (SELECT * FROM workOrder); <iterate over the selected set of lines. This may take some time.> UPDATE myTable SET processed=1 WHERE myID IN (SELECT * FROM workOrder); DROP TABLE workOrder; But this seems wasteful and not very efficient. Is there anything smarter? Many thanks from a SQL newbie.
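
    Another sketch worth considering: claim the rows up front by tagging them, then both the processing and the final flagging operate only on the claimed set, so rows changed by the other process in the meantime cannot slip in. The in-progress value 2, the batch column and the value 42 are hypothetical additions, not part of the original schema.

        -- 1. Claim the matching rows in a single atomic statement.
        UPDATE myTable SET processed = 2, batch = 42 WHERE <CONDITION> AND processed = 0;
        -- 2. Work only on the rows that were actually claimed.
        SELECT * FROM myTable WHERE batch = 42 AND processed = 2;
        -- 3. Flag exactly those rows as done.
        UPDATE myTable SET processed = 1 WHERE batch = 42 AND processed = 2;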

    Read the article

  • F# Active Pattern List.filter or equivalent

    - by akaphenom
    I have records of the types: type tradeLeg = { id : int ; tradeId : int ; legActivity : LegActivityType ; actedOn : DateTime ; estimates : legComponents ; entryType : ShareOrDollarBased ; confirmedPrice: DollarsPerShare option; actuals : legComponents option ; } type trade = { id : int ; securityId : int ; ricCode : string ; tradeActivity : TradeType ; enteredOn : DateTime ; closedOn : DateTime ; tradeLegs : tradeLeg list ; } Obviously the tradeLegs are a type off of a trade. A leg may be settled or unsettled (or unsettled but price confirmed) - thus I have defined the active pattern: let (|LegIsSettled|LegIsConfirmed|LegIsUnsettled|) (l: tradeLeg) = if Helper.exists l.actuals then LegIsSettled elif Helper.exists l.confirmedPrice then LegIsConfirmed else LegIsUnsettled and then to determine if a trade is settled (based on all legs matching the LegIsSettled pattern): let (|TradeIsSettled|TradeIsUnsettled|) (t: trade) = if List.exists ( fun l -> match l with | LegIsSettled -> false | _ -> true) t.tradeLegs then TradeIsSettled else TradeIsUnsettled I can see some advantages of this use of active patterns, however I would think there is a more efficient way to see if any item of a list either matches (or doesn't) an active pattern without having to write a lambda expression specifically for it, and using List.exists. Question is twofold: is there a more concise way to express this? is there a way to abstract the functionality / expression (fun l -> match l with | LegIsSettled -> false | _ -> true) such that let itemMatchesPattern pattern item = match item with | pattern -> true | _ -> false such that I could write (as I am reusing this design-pattern): let curriedItemMatchesPattern = itemMatchesPattern LegIsSettled if List.exists curriedItemMatchesPattern t.tradeLegs then TradeIsSettled else TradeIsUnsettled Thoughts?
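
    One small sketch of the abstraction, staying within the question's types: a plain predicate wraps the match once, and List.forall expresses "every leg is settled" directly (this assumes the intended logic is all-legs-settled, as the prose describes):

        let legIsSettled leg =
            match leg with
            | LegIsSettled -> true
            | _ -> false

        let (|TradeIsSettled|TradeIsUnsettled|) (t: trade) =
            if List.forall legIsSettled t.tradeLegs then TradeIsSettled
            else TradeIsUnsettled

    A fully generic itemMatchesPattern is awkward because the individual cases of a multi-case active pattern are not first-class values; a named predicate per case, as above, is the usual workaround.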

    Read the article

  • Filtering Wikipedia's XML dump: error on some accents

    - by streetpc
    I'm trying to index Wikipedia dumps. My SAX parser makes Article objects from the XML with only the fields I care about, then sends them to my ArticleSink, which produces Lucene Documents. I want to filter special/meta pages like those prefixed with Category: or Wikipedia:, so I made an array of those prefixes and test the title of each page against this array in my ArticleSink, using article.getTitle().startsWith(prefix). In English, everything works fine, I get a Lucene index with all the pages except for the matching prefixes. In French, the prefixes with no accent also work (i.e. filter the corresponding pages), some of the accented prefixes don't work at all (like Catégorie:), and some work most of the time but fail on some pages (like Wikipédia:) but I cannot see any difference between the corresponding lines (in less). I can't really inspect all the differences in the file because of its size (5 GB), but it looks like a correct UTF-8 XML. If I take a portion of the file using grep or head, the accents are correct (even on the incriminated pages, the <title>Catégorie:something</title> is correctly displayed by grep). On the other hand, when I recreate a wiki XML by tail/head-cutting the original file, the same page (here Catégorie:Rock par ville) gets filtered in the small file, not in the original… Any idea? Alternatives I tried: Getting the file (commented lines were tried without success): FileInputStream fis = new FileInputStream(new File(xmlFileName)); //ReaderInputStream ris = ReaderInputStream.forceEncodingInputStream(fis, "UTF-8" ); //(custom function opening the stream, reading it as UTF-8 into a Reader and returning another byte stream) //InputSource is = new InputSource( fis ); is.setEncoding("UTF-8"); parser.parse(fis, handler); Filtered prefixes: ignoredPrefix = new String[] {"Catégorie:", "Modèle:", "Wikipédia:", "Cat\uFFFDgorie:", "Mod\uFFFDle:", "Wikip\uFFFDdia:", //invalid char "Catégorie:", "Modèle:", "Wikipédia:", // UTF-8 as ISO-8859-1 "Image:", "Portail:", "Fichier:", "Aide:", "Projet:"}; // those last always work
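
    Two hedged things to try, sketched below: decode the stream explicitly as UTF-8 through a Reader instead of handing the parser raw bytes, and normalize both titles and prefixes to the same Unicode form, since a dump can legitimately mix composed and decomposed accents ("é" as one code point or as "e" plus a combining mark), which would explain a prefix that fails only on some pages.

        // Explicit UTF-8 decoding for the SAX input.
        InputStream in = new FileInputStream(new File(xmlFileName));
        InputSource source = new InputSource(new InputStreamReader(in, "UTF-8"));
        parser.parse(source, handler);

        // Normalize before comparing, so "Catégorie:" matches either accent encoding.
        String title = java.text.Normalizer.normalize(article.getTitle(), java.text.Normalizer.Form.NFC);
        for (String prefix : ignoredPrefix) {
            if (title.startsWith(java.text.Normalizer.normalize(prefix, java.text.Normalizer.Form.NFC))) {
                return; // skip this page
            }
        }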

    Read the article
