Search Results

Search found 15115 results on 605 pages for 'state pattern'.


  • Accessing running task scheduled with java.util.Timer

    - by jbatista
    I'm working on a Java project where I have created a class that looks like this (abridged version):

        public class Daemon {
            private static Timer[] timerarray = null;
            private static Daemon instance = null;

            protected Daemon() {
                ArrayList<Timer> timers = new ArrayList<Timer>();
                Timer t = new Timer("My application");
                t.schedule(new Worker(), 10000, 30000);
                timers.add(t);
                //...
                timerarray = timers.toArray(new Timer[]{});
            }

            public static Daemon getInstance() {
                if (instance == null)
                    instance = new Daemon();
                return instance;
            }

            public SomeClass getSomeValueFromWorker() {
                return theValue;
            }

            /////////////////////////////////////////////
            private class Worker extends TimerTask {
                public Worker() {}

                public void run() {
                    // do some work
                }

                public SomeReturnClass someMethod(SomeType someParameter) {
                    // return something;
                }
            }
            /////////////////////////////////////////////
        }

    I start this class, e.g. by invoking Daemon.getInstance();. However, I'd like to have some way to access the running task objects' methods (for example, for monitoring the objects' state). The Java class java.util.Timer does not seem to provide the means to access the running object; it just schedules the object instance extending TimerTask. Are there ways to access the "running" object instantiated within a Timer? Do I have to subclass the Timer class with the appropriate methods to somehow access the instance (this "feels" strange, somehow)? I suppose someone might have done this before ... where can I find examples of this "procedure"? Thank you in advance for your feedback.
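
    One idea that comes up for this kind of question is simply to keep your own reference to each TimerTask, since Timer only schedules tasks and never hands them back. A minimal sketch of that approach, assuming the names MonitorableDaemon, MonitorableWorker and getProcessedCount are illustrative placeholders rather than anything from the original class:

        import java.util.Timer;
        import java.util.TimerTask;
        import java.util.concurrent.atomic.AtomicInteger;

        public class MonitorableDaemon {
            // Keep our own handle on the task; Timer will not give it back to us.
            private final MonitorableWorker worker = new MonitorableWorker();
            private final Timer timer = new Timer("My application");

            public MonitorableDaemon() {
                timer.schedule(worker, 10000, 30000);
            }

            // Expose the worker's state through the owning class.
            public int getProcessedCount() {
                return worker.getProcessedCount();
            }

            private static class MonitorableWorker extends TimerTask {
                private final AtomicInteger processed = new AtomicInteger();

                @Override
                public void run() {
                    // do some work, then record it
                    processed.incrementAndGet();
                }

                int getProcessedCount() {
                    return processed.get();
                }
            }
        }

    With a reference held this way there is no need to subclass Timer; the scheduling object stays a plain Timer and the monitoring goes through the owning class.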

  • When to choose which machine learning classifier?

    - by LM
    Suppose I'm working on some classification problem. (Fraud detection and comment spam are two problems I'm working on right now, but I'm curious about any classification task in general.) How do I know which classifier I should use? (Decision tree, SVM, Bayesian, logistic regression, etc.) In which cases is one of them the "natural" first choice, and what are the principles for choosing that one?

    Examples of the type of answers I'm looking for (from Manning et al.'s "Introduction to Information Retrieval" book: http://nlp.stanford.edu/IR-book/html/htmledition/choosing-what-kind-of-classifier-to-use-1.html):

    a. If your data is labeled, but you only have a limited amount, you should use a classifier with high bias (for example, Naive Bayes). [I'm guessing this is because a higher-bias classifier will have lower variance, which is good because of the small amount of data.]

    b. If you have a ton of data, then the classifier doesn't really matter so much, so you should probably just choose a classifier with good scalability.

    What are other guidelines? Even answers like "if you'll have to explain your model to some upper management person, then maybe you should use a decision tree, since the decision rules are fairly transparent" are good. I care less about implementation/library issues, though.

    Also, for a somewhat separate question, besides standard Bayesian classifiers, are there "standard state-of-the-art" methods for comment spam detection (as opposed to email spam)?

    [Not sure if stackoverflow is the best place to ask this question, since it's more machine learning than actual programming -- if not, any suggestions for where else?]

  • Systems design question: DB connection management in load-balanced n-tier

    - by aoven
    I'm wondering about the best approach to designing a DB connection manager for a load-balanced n-tier system. Classic n-tier looks like this:

        Client -> BusinessServer -> DBServer

    A load-balancing solution as I see it would then look like this:

                             +--> BusinessServer --+--> SessionServer --+
        Client -> Gateway ---+--> BusinessServer --+                    +--> DBServer
                             +--> BusinessServer --+--------------------+
                             +--> ...

    As pictured, the business server component is being load-balanced via multiple instances, and a hardware gateway is distributing the load among them. The session server probably needs to be situated outside the load-balancing array, because it manages state, which mustn't be duplicated.

    Barring any major errors in design so far, what is the best way to implement DB connection management? I've come up with a couple of options, but there may be others I'm not aware of:

    - Introduce a new Broker component between the DBServer and the other components and let it handle the DB connections. The upside is that all the connections can be managed from a single point, which is very convenient. The downside is that now there is an additional "single point of failure" in the system. Other components must go through it for every request that involves DB in some way, which also makes this a bottleneck.
    - Move the DB connection management into BusinessServer and SessionServer components and let each handle its own DB connections. The upside is that there is no additional "single point of failure" or bottleneck component. The downside is that there is also no control over possible conflicts and deadlocks apart from what DBServer itself can provide.

    What else can be done?

    FWIW: Technology is .NET, but none of the vendor-specific stacks are used (e.g. no WCF, MSMQ or the like).

  • FQL query using stream table doesn't accept app access token

    - by tougher
    I've searched stackoverflow all day long to find an answer without luck, so here we go. I'm trying to fetch data from the stream table like this:

    FQL: SELECT post_id, message, created_time FROM stream WHERE source_id = 131559313586863

    URL: http:// graph.facebook.com/fql?q=SELECT+post_id%2C+message%2C+created_time+FROM+stream+WHERE+source_id+%3D+131559313586863&access_token=10669xxxxx74470|PF-7GSdBx0Nxxxxxkdi1KwSQG-w

    But I get a 400 Bad Request as response with the error message: "An access token is required to request this resource.". I'm fetching an application access token with this url:

    https:// graph.facebook.com/oauth/access_token?client_id=FACEBOOK_APP_ID&client_secret=FACEBOOK_APP_SECRET&grant_type=client_credentials

    Facebook states in this blog post that "You will need to pass a valid app or user access token to access this functionality." Functionality refers to /feed and /posts (the stream table). Furthermore this wiki tells the same story about using the stream table: "From June 3 2011 a token is required to query this table. You can use any application or user token to make the query." Does anyone see my hopefully obvious flaw?

    Please note:
    - The profile in the FQL query is public.
    - I need this to run userless through a cronjob. No user interaction is possible.
    - The request works if I replace the app access token with my own user token from https:// developers.facebook.com/tools/explorer

    Sorry for breaking the URL's. I need more reputation to post more than 2 links :-/

  • VS2010: Why do my custom Toolbox tabs and contained controls keep disappearing?

    - by Velika2
    This is how I expected the toolbox to work:

    Let's say I add a custom tab to the Toolbox called "Ajaxtoolkit." To add controls to the new tab, I right mouse click and select "Choose Items" and browse to a file, Ajaxtoolkit.dll, that is of a particular version number. I would expect that when I save and reopen the solution, the Ajaxtoolkit custom tab would still be in my Toolbox and that it would contain the same controls that were there last time, the controls that were in the dll that I referenced when the controls were added.

    If I created a brand new web app, I (possibly) wouldn't expect to see the same Ajaxtoolkit custom tab. However, I could perform the same steps as above and add an "Ajaxtoolkit" tab and perhaps, this time, select a DIFFERENT VERSION of the toolkit, and the state of the toolkit would be retained with each solution file. Another possibility would be for the original Ajaxtoolkit tab to be retained when the 2nd web solution is created, and perhaps, if I wanted to mix versions of the toolkit across different web sites in my solution, I should start naming my custom toolkit tabs with version-specific names like "Ajaxtoolkit 4.0," etc.

    ...But instead, the Ajaxtoolkit tab disappears when I close VS2010 and reopen it. Why? Is this desirable behavior or a bug?

  • How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

    - by reckoner
    I am currently updating a java-based web application which allows database developers to create stored procedure regression test suites for database testing. Currently, for the test setup, execution and clean-up stages, the user is provided with text boxes where they are able to enter SQL code which is executed by the isql command.

    I would like to extend the application to use DbUnit's DatabaseOperation methods to provide more ways to set up the state of the database than just SQL statements. The main reason for using DbUnit rather than just SQL statements is to be able to create and store xml and xls DataSets on a server where they can be associated with their test cases and used for data setup.

    My question is: How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

    I have considered:

    - Creating a simple programming language and a parser to read some simple syntax involving the DbUnit method names, which accept a parameter being the file location to an xml or xls DataSet. I was thinking of allowing the user to register the files they need with the web app, which would catalogue them and provide each file with an identifier which could be passed as a parameter to the methods in this simple programming language.
    - Creating an XML DTD which provides the user with the ability to specify operations and parameters. If I went this approach, how can I execute the methods and their parameters that I parse from the XML document?
    - Creating a table in the database which stores the method and a FK relation to a catalogued DataSet file; however, I don't think this would be a good solution due to the fact that data entry would be tedious.

    Thanks for your help.
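
    For what it's worth, the DbUnit calls themselves are small enough that a thin dispatch layer over them is workable. Here is a minimal sketch of mapping a user-supplied operation name and a catalogued dataset file to a DbUnit call; the operation constants and FlatXmlDataSetBuilder come from DbUnit's public API (assuming DbUnit 2.4+), while DbUnitOperationRunner and the surrounding servlet/JDBC wiring are made-up scaffolding for illustration:

        import java.io.File;
        import java.sql.Connection;
        import java.util.HashMap;
        import java.util.Map;

        import org.dbunit.database.DatabaseConnection;
        import org.dbunit.database.IDatabaseConnection;
        import org.dbunit.dataset.IDataSet;
        import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
        import org.dbunit.operation.DatabaseOperation;

        public class DbUnitOperationRunner {

            // Whitelist of operation names the web UI is allowed to request.
            private static final Map<String, DatabaseOperation> OPERATIONS = new HashMap<String, DatabaseOperation>();
            static {
                OPERATIONS.put("CLEAN_INSERT", DatabaseOperation.CLEAN_INSERT);
                OPERATIONS.put("INSERT", DatabaseOperation.INSERT);
                OPERATIONS.put("REFRESH", DatabaseOperation.REFRESH);
                OPERATIONS.put("DELETE_ALL", DatabaseOperation.DELETE_ALL);
            }

            /**
             * Runs the named DbUnit operation against a catalogued dataset file,
             * e.g. runOperation(jdbcConnection, "CLEAN_INSERT", fileForId("42")).
             */
            public void runOperation(Connection jdbc, String operationName, File dataSetFile) throws Exception {
                DatabaseOperation operation = OPERATIONS.get(operationName);
                if (operation == null) {
                    throw new IllegalArgumentException("Unknown operation: " + operationName);
                }
                IDatabaseConnection connection = new DatabaseConnection(jdbc);
                IDataSet dataSet = new FlatXmlDataSetBuilder().build(dataSetFile);
                operation.execute(connection, dataSet);
            }
        }

    A whitelist map like this also works as the backing for the "simple programming language" or XML options above: either front end only ever produces an operation name plus a catalogued file identifier, and everything else stays server-side.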

  • Allow paste in worksheet without overwriting locked cells

    - by jjeaton
    I have a protected worksheet that users would like to copy and paste into. I have no control over the workbook they are copying from. The protected worksheet has some rows that are available for data entry, and other rows that are locked and greyed out to the user. The users would like to be able to paste over the top of the entire worksheet from another random workbook and have all the cells available for data entry filled in, while the locked cells are undisturbed. In the current state, the user gets an error when they try to paste, because it cannot paste over the locked cells.

    Example:

    Worksheet 1:
        Act1 100 100 100
        Act2 100 100 100
        Act3 100 100 100

    Worksheet 2: (the second row is locked)
        Act1 300 300 300
        Act2 200 200 200
        Act3 100 100 100

    After copying/pasting, Worksheet 2 should look like this:
        Act1 100 100 100
        Act2 200 200 200
        Act3 100 100 100

    The values from worksheet 1 are populated and the locked rows are undisturbed.

    I've been thinking along the lines of having a hook where, on paste, the locked cells are unlocked so that the paste can happen, and then are reverted to their original values and relocked. Is there some way I can loop through the cells in the clipboard and only paste cells where the target isn't locked? It is preferable to not create a separate button for paste, so there is less impact on the users, but if that's the only way, I'm not opposed to it. Currently, I plan on grouping the locked rows together, so that the data entry cells are contiguous, but then the accounts will be out of order, which is not preferred.

  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: We have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query) etc. This service is consumed by 70-100 instances of a client, which is a Windows Forms based .NET app. Typically the service has 50-100 inward calls/minute to add or query jobs that are stored in a table on Sql Server. The same service is also responsible for processing these jobs in real time. It queries the database every 5 seconds, picks up the queued jobs and starts processing them. A job has these states: Queued, Pre-processing, Processing, Post-processing, Completed, Failed, Locked. Another responsibility of this service is to update all clients on every state change of every job. This means almost 200+ callbacks to clients per second.

    Question: This whole implementation is done using WCF Duplex bindings and works perfectly fine on a small number of parallel jobs. The problem arises when we scale it up to 1000 jobs at a time. The notifications don't work as expected, it leads to memory overflow etc. Is there any standard framework that can provide a clean infrastructure for handling this scenario?

    Apologies for the long explanation!

  • Very weird C file-handling anomaly

    - by KáGé
    Hello, I got a very weird issue that I can't figure out in my school project, which is the simulation of a simple filesystem in a human-readable textfile. Unfortunately I don't yet have enough time to translate the comments in my code or make it less gibberish, so if you are bothered by that, you don't have to help, I understand. See the code HERE.

    Now in drive.h, at line 574 is this part:

        i = getline();
        #ifdef DEBUG
        printf("Free space in all found at %d.\n\n", i);
        if(drive.disk != NULL){
            printf("Disk OK\n\n");
        }
        #endif
        //write in data
        state = seekline(i);

    Before this it finds place for the allocation database entry in the ALL sector (see the "image files" in the mounts folder, this issue was tested on mount_30.efs-dbf), then gets the line with i = getline() fine (getline is in lglobal.h, line 39), but after that any file manipulation (in this case seekline's fseek, but if I comment that out, then the first fprintf after that) crashes the program straight away. I think the file gets somehow corrupted (though the Disk OK message appears) but I can't figure out how. I've tried putting i = getline(); into a comment, but it didn't make any difference. I've also tried asking at local programming forums but they didn't really help either.

    The last few lines of the output before it crashes:

        Dir written. (drive.h line 562)
        Seekline entered: 268 (called at drive.h line 564)
        Getline entered. (called at drive.h line 574)
        Line got: 268.
        Free space in all found at 268. (drive.h line 576)
        Seekline entered: 268 (called at drive.h line 582; note that this exact call was run successfully less than 20 lines back. This one should set the pointer to the beginning of the line it is currently in)

    After this it crashes. Does anyone have any idea of what causes this and how I could fix it? Thank you.

  • Looking for combinations of server and embedded database engines

    - by codeelegance
    I'm redesigning an application that will be run as both a single user and multiuser application. It is a .NET 2.0 application. I'm looking for server and embedded databases that work well together. I want to deploy the embedded database in the single user setup and of course, the server in the multiuser setup. Past releases have been based on MSDE but in the past year we've been having a lot of install issues: new installs hanging and leaving the system in an unknown state, upgrades disconnecting the database, etc. I migrated the application to SQL Server 2005 and the install is more reliable (as long as a user doesn't try to install over a broken MSDE installation). Since next year's release will be a complete redesign I figured now's the best time to address the database issue as well. The database has been abstracted from the rest of the application so I just need to choose which database(s) to use and write an implementation for each one. So far I've considered: SQL Server/ SQL Server Compact Edition Firebird (same DB engine is available in two different server modes and an embedded dll) Each has its own merits but I'm also interested in any other suggestions. This is a fairly simple program and its data requirements are simple as well. I don't expect it to strain whatever database I eventually choose. So easy configuration and deployment hold more weight than performance.

  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server would then serialize the graph and send it to the client (presumably using WCF); the client then makes changes to this graph and sends it back to the server. The server receives the graph and proceeds to make modifications to the server.

    However, I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy and many have parametrised constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialisation is that it requires use of either the XmlSerializer or DataContractSerializer, but DCS places restrictions on the design of my object-graph (immutable data seems right out; it also requires parameter-less constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the de-serializer constructor. That is fine by me.

    I spoke to a friend of mine about this, and he advocates going for a Data Transport Object-only route, where I'd have to maintain a separate DataContract object-graph for the transport of data and re-implement my server objects on the client. Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile just skipping WCF entirely and implementing my own server that uses Sockets.

    So my questions are:

    - Has anyone faced a similar problem before and if so, are there better approaches?
    - Is it wise to send an entire object graph to the client for processing? Should I instead break it down so that the client requests a part of the object graph as it needs it and sends only bits that have changed (thus reducing concurrency-related risks)?
    - If sending the object-graph down is the right way, is WCF the right tool?
    - And if WCF is right, what's the best way to get WCF to serialise my object graph?

  • What's the best way to retrieve two pieces of data from an XML file?

    - by Morinar
    I've got an XML document that is in either a pre or post FO transformed state that I need to extract some information from. In the pre-case, I need to pull out two tags that represent the pageWidth and pageHeight and in the post case I need to extract the page-height and page-width parameters from a specific tag (I forget which one it is off the top of my head). What I'm looking for is an efficient/easily maintainable way to grab these two elements. I'd like to only read the document a single time fetching the two things I need. I initially started writing something that would use BufferedReader + FileReader, but then I'm doing string searching and it gets messy when the tags span multiple lines. I then looked at the DOMParser, which seems like it would be ideal, but I don't want to have to read the entire file into memory if I could help it as the files could potentially be large and the tags I'm looking for will nearly always be close to the top of the file. I then looked into SAXParser, but that seems like a big pile of complicated overkill for what I'm trying to accomplish. Anybody have any advice? Or simple implementations that would accomplish my goal? Thanks.
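
    A middle ground between DOM and SAX that fits the "stop after the first few elements near the top" requirement is a streaming pull parser. Here is a minimal StAX sketch in Java; the tag names pageWidth and pageHeight are taken from the question's pre-FO case and should be treated as placeholders, and the class name is made up:

        import java.io.FileInputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class PageSizeReader {

            // Pulls pageWidth/pageHeight from near the top of the file and stops early.
            public static String[] readPageSize(String path) throws Exception {
                XMLInputFactory factory = XMLInputFactory.newInstance();
                XMLStreamReader reader = factory.createXMLStreamReader(new FileInputStream(path));
                String width = null, height = null;
                try {
                    while (reader.hasNext() && (width == null || height == null)) {
                        if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                            String name = reader.getLocalName();
                            if ("pageWidth".equals(name)) {
                                width = reader.getElementText();
                            } else if ("pageHeight".equals(name)) {
                                height = reader.getElementText();
                            }
                            // For the post-FO case, the values live in attributes instead, e.g.
                            // reader.getAttributeValue(null, "page-height") on the relevant element.
                        }
                    }
                } finally {
                    reader.close();
                }
                return new String[] { width, height };
            }
        }

    Because the loop exits as soon as both values are found, the file is never read past the point where the tags occur, which avoids both the DOM memory cost and the SAX callback boilerplate.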

  • Should I Solve this with Multithreading in Ruby?

    - by viatropos
    I have a strange case; here's the sequence of actions:

    1. User edits a document and hits save
    2. Application sends GET request to service
    3. Service sends POST request back to application in the middle of responding to the GET request
    4. Application, in the same state as when it made the GET request, responds to the POST request (sends document data) to service.
    5. Service sends data back to Application (responding to original GET request)
    6. Application handles the rest...

    The use case is this: I was thinking how can I make Yahoo Pipes POST data? Specifically, I want it to be able to update Google Docs when a user makes a change locally (on a custom editor). So user edits doc, makes GET request to Yahoo Pipes, Pipes makes a POST request back to App to get the document (Pipes can only make this type of POST request), App sends doc, Pipes formats data according to the Google API, Pipes responds to GET request with Google API formatted XML, App makes the post request.

    Theoretically, how would I accomplish this? It seems that I need to create a separate ruby Process for the GET request, and when Pipes sends the POST request, I find that process and send its output, then I'm stuck. This would cut out the need for a database for this particular case (I could save the stuff temporarily in a database, but that doesn't seem right). Any ideas? This would make it so I don't have to format things to the Google API in ruby, I could leave that to Pipes.

  • How to get database table header information into a CSV file

    - by Rachel
    I am trying to connect to the database and get the current state of a table and update that information into a csv file. With the below mentioned piece of code, I am able to get the data into the csv file, but I am not able to get the header information from the database table into the csv file. So my question is: How can I get database table header information into a CSV file?

        $config['database'] = 'sakila';
        $config['host'] = 'localhost';
        $config['username'] = 'root';
        $config['password'] = '';

        $d = new PDO('mysql:dbname='.$config['database'].';host='.$config['host'], $config['username'], $config['password']);

        $query = "SELECT * FROM actor";
        $stmt = $d->prepare($query);

        // Execute the statement
        $stmt->execute();

        var_dump($stmt->fetch(PDO::FETCH_ASSOC));

        $data = fopen('file.csv', 'w');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo "Hi";
            // Export every row to a file
            fputcsv($data, $row);
        }

    Header information meaning:

        Vehicle  Build  Model
        car      2009   Toyota
        jeep     2007   Mahindra

    So the header information for this would be: Vehicle Build Model

    Any guidance would be highly appreciated.

  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing in C# what could basically be called my own interpretation of the NES hardware for an old-school looking game that I'm developing. I've fired up FCE and have been observing how the NES displayed and rendered graphics.

    In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with the dimensions of 128x128. These are called the PPU tables. One was for BG tiles and the other was for sprites. The data had to be in this memory for it to be drawn on-screen. Now, if a game had more graphical data than these two banks, it could write portions of this new information to these banks - overwriting what was there - at the end of each frame, and use it from the next frame onward.

    So, in old games how did the programmers 'bank switch'? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level - what sprites to copy over into the PPU tables, and where to write them? Another example would be hitting pause in MM2. BG tiles get over-written during pause, and then get restored when the player unpauses. How did they remember which tiles they replaced and how to restore them?

    If I was lazy, I could just make one huge static bitmap and just grab values that way. But I'm forcing myself to limit these values to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be barebones about how I program this game. It still just boggles my mind how these programmers accomplished what they did with what they had.

    EDIT: The only solution I can think of would be to hold separate tables that state what tiles should be in the PPU at what time, but I think that would be a huge memory resource that the NES wouldn't be able to handle.
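
    The "separate tables" idea from the EDIT is essentially the shape such bookkeeping usually takes: each screen or room in the level data carries a small list of "copy this range of tile data into this offset of the PPU table" records, and the pause screen does the reverse by saving the range it is about to clobber. A rough sketch of that idea, written in Java purely for illustration (TileCopy, PpuTable and the class name are all made up; none of this is how any particular NES game actually stored its data):

        import java.util.List;

        public class BankSwitchSketch {

            /** One "copy these tiles into the pattern table at this offset" record stored with a screen. */
            static class TileCopy {
                final int sourceOffset;   // offset into the game's full tile data (e.g. in ROM)
                final int destOffset;     // offset into the 128x128 PPU table
                final int length;         // number of bytes to copy

                TileCopy(int sourceOffset, int destOffset, int length) {
                    this.sourceOffset = sourceOffset;
                    this.destOffset = destOffset;
                    this.length = length;
                }
            }

            private final byte[] allTileData;                    // every tile the game owns
            private final byte[] bgTable = new byte[128 * 128];  // the in-memory PPU BG table

            BankSwitchSketch(byte[] allTileData) {
                this.allTileData = allTileData;
            }

            /** Applied when scrolling into a new screen: the screen's level data lists what to load. */
            void enterScreen(List<TileCopy> copiesForScreen) {
                for (TileCopy c : copiesForScreen) {
                    System.arraycopy(allTileData, c.sourceOffset, bgTable, c.destOffset, c.length);
                }
            }

            /** Pause: remember exactly the bytes we overwrite so unpause can restore them. */
            byte[] overwriteForPause(TileCopy pauseTiles) {
                byte[] saved = new byte[pauseTiles.length];
                System.arraycopy(bgTable, pauseTiles.destOffset, saved, 0, pauseTiles.length);
                System.arraycopy(allTileData, pauseTiles.sourceOffset, bgTable, pauseTiles.destOffset, pauseTiles.length);
                return saved;
            }

            void restoreAfterPause(TileCopy pauseTiles, byte[] saved) {
                System.arraycopy(saved, 0, bgTable, pauseTiles.destOffset, pauseTiles.length);
            }
        }

    The per-screen lists are tiny (a few offsets and lengths each), which is why this kind of table fits even very constrained memory: the bulk tile data stays in ROM and only the indices live with the level.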

  • Problem saving file on Motorola Droid, Android 2.1?

    - by Rob Kent
    Two of my users have reported a problem with my Android application, OftSeen Gestures. Both of them are using a Motorola Droid. The app saves a text file which is just a list of gesture names and phone numbers, both strings. It saves the file to the private data area. I don't know that it is this code that is failing, but they report the assigned numbers disappearing after the phone comes out of screen sleep. Since the file is reread in OnCreate each time, I'm assuming the file doesn't exist on return. As soon as I can get my hands on a Droid I will debug it, but in the meantime can you see a reason why this save operation would fail on Droid (no other users have reported this)?

        OutputStreamWriter out = new OutputStreamWriter(AppGlobal.getContext().openFileOutput(MAPPINGS_FILE_NAME, 0));
        for (String key : mMap.keySet()) {
            String number = mMap.get(key).number;
            out.write(String.format("%s,%s\n", key, number == null ? "" : number));
        }
        out.close();

    AppGlobal.getContext returns the application context and MAPPINGS_FILE_NAME resolves to "gesture_mappings.txt". Like I say, I don't know that this is the problem. It could be something else to do with state management inside the app. If anyone has a Droid, maybe they could download the app from Market and test it for me? Note this is a genuine request for help - not an attempt to increase my downloads.
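
    One small hardening of the snippet above that is often suggested for this kind of intermittent loss is to pass Context.MODE_PRIVATE explicitly and to make sure the stream is flushed and closed even when a write throws, logging the failure instead of losing it silently. A sketch of a drop-in replacement, assuming the same AppGlobal, MAPPINGS_FILE_NAME and mMap fields as in the question (needs android.content.Context, android.util.Log, java.io.IOException and java.io.OutputStreamWriter imports in the surrounding class):

        OutputStreamWriter out = null;
        try {
            out = new OutputStreamWriter(
                    AppGlobal.getContext().openFileOutput(MAPPINGS_FILE_NAME, Context.MODE_PRIVATE));
            for (String key : mMap.keySet()) {
                String number = mMap.get(key).number;
                out.write(String.format("%s,%s\n", key, number == null ? "" : number));
            }
            out.flush();
        } catch (IOException e) {
            // Surface the failure so a silent write error does not look like a vanished file.
            Log.e("OftSeen", "Failed to save mappings", e);
        } finally {
            if (out != null) {
                try { out.close(); } catch (IOException ignored) {}
            }
        }

    If the log then shows a write failure on the affected devices, the disappearing numbers would point at the save path; if it stays clean, the suspicion shifts back to state management inside the app.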

  • Changing an image's ALT value with jQuery

    - by NightMICU
    Hi all, I have a modal form that changes the caption of a photo (paragraph under the image) and I am also trying to change the image's ALT attribute, but cannot seem to. Here is the jQuery I am trying to make work:

        $(".edit").click(function() {
            var parent = $(this).parents('.item');
            var caption = $(parent).find('.labelCaption').html();
            $("#photoCaption").val(caption);

            $("#editCaptionDialog").dialog({
                width: 450,
                bgiframe: true,
                resizable: false,
                modal: true,
                title: 'Edit Caption',
                overlay: {
                    backgroundColor: '#000',
                    opacity: 0.5
                },
                buttons: {
                    'Edit': function() {
                        var newCaption = $("#photoCaption").val();
                        $(parent).find(".labelCaption").html(newCaption);
                        $(parent).find('img').attr('alt', newCaption);
                    }
                }
            });
            return false;
        });

    And the HTML:

        <li class="item ui-corner-all" id="photo<? echo $images['id'];?>">
            <div>
                <a href="http://tapp-essexvfd.org/gallery/photos/<?php echo $images['filename'];?>.jpg" class="lightbox" title="<?php echo $images['caption'];?>">
                    <img src="http://tapp-essexvfd.org/gallery/photos/thumbs/<?php echo $images['filename'];?>.jpg" alt="<?php echo $images['caption'];?>" class="photo ui-corner-all"/></a><br/>
                <p><span class="labelCaption"><?php echo $images['caption'];?> </span></p>
                <p><a href="edit_photo.php?filename=<?php echo $images['filename'];?>" class="button2 edit ui-state-default ui-corner-all">Edit</a></p>
            </div>
        </li>

    The caption is changing like it should. Thanks

  • How to Open Multiple PopUp windows with jQuery on form submission (popUps should be relative to form

    - by Ole Jak
    I have a form like this:

        <form>
            <fieldset class="ui-widget-content ui-corner-all">
                <select id="Streams" class="multiselect ui-widget-content ui-corner-all" multiple="multiple" name="Streams[]">
                    <br />
                    <option value="35"> Example Name (35)</option>
                    <option value="44"> Example Name (44)</option>
                    <option value="5698"> Example Name (5698)</option>
                    <option value="777"> Example Name (777)</option>
                    <option value="12"> Example Name (12)</option>
                </select>
                <br/>
                <input type="submit" class="ui-state-default ui-corner-all" name="submitForm" id="submitForm" value="Play Stream from selected URL's!"/>
            </fieldset>
        </form>

    In my JS + HTML page I use jQuery. As you can see, I allow the user to select multiple items. I want to open, on Submit button click, as many popup windows as there are items selected in the list. Each popup window should open a url like www.example.com/test.php?id=OPTION_SELECTED, so for each of the selected options I'll get a popup window with a different url. Please help.

  • JBoss Clustered Service that sends emails from txt file

    - by michael lucas
    I need a little push in the right direction. Here's my problem: I have to create an ultra-reliable service that sends email messages to clients whose addresses are stored in a txt file on an FTP server. A single txt file may contain an unlimited number of entries; most often the file contains about 300,000 entries. The service exposes an interface with just two simple methods:

        TaskHandle sendEmails(String ftpFilePath);
        ProcessStatus checkProcessStatus(TaskHandle taskHandle);

    The method sendEmails() returns a TaskHandle by which we can ask for the ProcessStatus. For such a service to be reliable, clustering is necessary. Processing a single txt file might take a long time. Restarting one node in the cluster should have no impact on sending emails. We use JBoss AS 4.2.0, which comes with a nice HASingletonController that ensures one instance of the service is running at any given time. But once a fail-over happens, the second service should continue work from where the first one stopped. How can I share state between nodes in a cluster in such a way that leaves no possibility of sending some emails twice?
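
    One pattern that addresses the "continue where the first node stopped" requirement is to checkpoint progress in shared, transactional storage (for example a database table keyed by task) rather than in the singleton's memory, so that whichever node takes over the HA singleton role can resume from the last committed index. A rough sketch of the checkpoint side; TaskProgressDao, Mailer and ResumableEmailTask are made-up names, and the JDBC/JPA wiring behind the DAO is assumed rather than shown:

        import java.util.List;

        // Sketch: resumable processing driven by a checkpoint stored in a shared database.
        public class ResumableEmailTask {

            public interface TaskProgressDao {                 // assumed DAO over a shared DB table
                int loadLastSentIndex(String taskId);          // -1 if nothing sent yet
                void markSent(String taskId, int index);       // committed before moving to the next address
            }

            public interface Mailer {
                void send(String address);
            }

            private final TaskProgressDao dao;
            private final Mailer mailer;

            public ResumableEmailTask(TaskProgressDao dao, Mailer mailer) {
                this.dao = dao;
                this.mailer = mailer;
            }

            /** Safe to call on whichever node currently holds the HA singleton role. */
            public void process(String taskId, List<String> addresses) {
                int lastSent = dao.loadLastSentIndex(taskId);
                for (int i = lastSent + 1; i < addresses.size(); i++) {
                    mailer.send(addresses.get(i));
                    dao.markSent(taskId, i);   // commit progress before moving on
                }
            }
        }

    Note the remaining gap: a crash between send() and markSent() can still resend exactly one message, so true "never twice" behaviour additionally needs the send and the checkpoint to be atomic, or deduplication on the mail side.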

  • use awk to identify multi-line records and filter them

    - by nanshi
    I need to process a big data file that contains multi-line records. Example input:

        1 Name Dan
        1 Title Professor
        1 Address aaa street
        1 City xxx city
        1 State yyy
        1 Phone 123-456-7890
        2 Name Luke
        2 Title Professor
        2 Address bbb street
        2 City xxx city
        3 Name Tom
        3 Title Associate Professor
        3 Like Golf
        4 Name
        4 Title Trainer
        4 Likes Running

    Note that the first integer field is unique and really identifies a whole record. So in the above input I really have 4 records, although I don't know how many lines of attributes each record may have. I need to:

    - identify valid records (must have "Name" and "Title" fields)
    - output the available attributes for each valid record, say "Name", "Title", "Address" are needed fields

    Example output:

        1 Name Dan
        1 Title Professor
        1 Address aaa street
        2 Name Luke
        2 Title Professor
        2 Address bbb street
        3 Name Tom
        3 Title Associate Professor

    So in the output file, record 4 is removed since it doesn't have the "Name" field. Record 3 doesn't have an Address field but is still printed to the output since it is a valid record that has "Name" and "Title". Can I do this with awk? But how do I identify a whole record using the first "id" field on each line? Thanks a lot to the unix shell script expert for helping me out! :)
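
    The grouping logic itself is small: bucket lines by the leading id, then print only records that have a non-empty Name and Title, keeping only the wanted attributes. If a short program is acceptable instead of awk, here is a sketch of that logic in Java (the class name and the hard-coded "needed" list are illustrative):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class RecordFilter {
            public static void main(String[] args) throws Exception {
                List<String> needed = Arrays.asList("Name", "Title", "Address");

                // Bucket lines by the leading id field, preserving input order.
                Map<String, List<String[]>> records = new LinkedHashMap<String, List<String[]>>();
                BufferedReader in = new BufferedReader(new FileReader(args[0]));
                String line;
                while ((line = in.readLine()) != null) {
                    String[] parts = line.split("\\s+", 3);   // id, attribute, value (value may be missing)
                    if (parts.length < 2) continue;
                    records.computeIfAbsent(parts[0], k -> new ArrayList<>()).add(parts);
                }
                in.close();

                // A record is valid only if it has a non-empty Name and Title.
                for (Map.Entry<String, List<String[]>> e : records.entrySet()) {
                    boolean hasName = false, hasTitle = false;
                    for (String[] p : e.getValue()) {
                        if (p.length == 3 && !p[2].trim().isEmpty()) {
                            if (p[1].equals("Name")) hasName = true;
                            if (p[1].equals("Title")) hasTitle = true;
                        }
                    }
                    if (!hasName || !hasTitle) continue;
                    for (String[] p : e.getValue()) {
                        if (p.length == 3 && needed.contains(p[1])) {
                            System.out.println(p[0] + " " + p[1] + " " + p[2]);
                        }
                    }
                }
            }
        }

    The same two ideas (group by field 1, require Name and Title before emitting the group) carry over directly to an awk solution, since awk arrays indexed by the id field give the same bucketing for free.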

  • Tell me SQL Server Full-Text searcher is crazy, not me.

    - by Ian Boyd
    I have some customers with a particular address that the user is searching for:

        123 generic way

    There are 5 rows in the database that match:

        ResidentialAddress1
        =============================
        123 GENERIC WAY
        123 GENERIC WAY
        123 GENERIC WAY
        123 GENERIC WAY
        123 GENERIC WAY

    I run a FT query to look for these rows. I'll show you each step as I add more criteria to the search:

        SELECT ResidentialAddress1 FROM Patrons
        WHERE CONTAINS(Patrons.ResidentialAddress1, '"123*"')

        ResidentialAddress1
        =========================
        123 MAPLE STREET
        12345 TEST
        123 MINE STREET
        123 GENERIC WAY
        123 FAKE STREET
        ...
        (30 row(s) affected)

    Okay, so far so good, now adding the word "generic":

        SELECT ResidentialAddress1 FROM Patrons
        WHERE CONTAINS(Patrons.ResidentialAddress1, '"123*"')
        AND CONTAINS(Patrons.ResidentialAddress1, '"generic*"')

        ResidentialAddress1
        =============================
        123 GENERIC WAY
        123 GENERIC WAY
        123 GENERIC WAY
        123 GENERIC WAY
        123 GENERIC WAY
        (5 row(s) affected)

    Excellent. And now I'll add the final keyword that the user wants to make sure exists:

        SELECT ResidentialAddress1 FROM Patrons
        WHERE CONTAINS(Patrons.ResidentialAddress1, '"123*"')
        AND CONTAINS(Patrons.ResidentialAddress1, '"generic*"')
        AND CONTAINS(Patrons.ResidentialAddress1, '"way*"')

        ResidentialAddress1
        ------------------------------
        (0 row(s) affected)

    Huh? No rows? What if I query for just "way*":

        SELECT ResidentialAddress1 FROM Patrons
        WHERE CONTAINS(Patrons.ResidentialAddress1, '"way*"')

        ResidentialAddress1
        ------------------------------
        (0 row(s) affected)

    At first I thought that perhaps it's because of the *, and it's requiring that the root "way" have more characters after it. But that's not true:

    - Searching for "123*" matches "123"
    - Searching for "generic*" matches "generic"

    Books Online says, "The asterisk matches zero, one, or more characters". What if I remove the * just for s&g:

        SELECT ResidentialAddress1 FROM Patrons
        WHERE CONTAINS(Patrons.ResidentialAddress1, '"way"')

        Server: Msg 7619, Level 16, State 1, Line 1
        A clause of the query contained only ignored words.

    So one might think that you are just not allowed to even search for "way", either alone or as a root. But this isn't true either:

        SELECT * FROM Patrons
        WHERE CONTAINS(Patrons.*, '"way*"')

        AccountNumber FirstName Lastname
        ------------- --------- --------
        33589         JOHN      WAYNE

    To sum up, the user is searching for rows that contain all the words:

        123 generic way

    Which I, correctly, translate into the WHERE clauses:

        SELECT * FROM Patrons
        WHERE CONTAINS(Patrons.*, '"123*"')
        AND CONTAINS(Patrons.*, '"generic*"')
        AND CONTAINS(Patrons.*, '"way*"')

    which returns no rows. Tell me this just isn't going to work, that it's not my fault, and SQL Server is crazy.

    Note: I've emptied the FT index and rebuilt it.

  • How much effort do you have to put in to get gains from using SSE?

    - by John
    Case One

    Say you have a little class:

        class Point3D {
        private:
            float x, y, z;
        public:
            Point3D& operator+=(Point3D& other); // ...etc
        };

        Point3D& Point3D::operator+=(Point3D& other) {
            this->x += other.x;
            this->y += other.y;
            this->z += other.z;
            return *this;
        }

    A naive use of SSE would simply replace these function bodies with using a few intrinsics. But would we expect this to make much difference? MMX used to involve costly state changes IIRC; does SSE, or are they just like other instructions? And even if there's no direct "use SSE" overhead, would moving the values into SSE registers and back out again really make it any faster?

    Case Two

    Instead, you're working with a less OO-based code base. Rather than an array/vector of Point3D objects, you simply have a big array of floats:

        float coordinateData[NUM_POINTS*3];

        void add(int i, int j) // yes it's unsafe, no overlap check... example only
        {
            for (int x = 0; x < 3; ++x)
            {
                coordinateData[i*3+x] += coordinateData[j*3+x];
            }
        }

    What about use of SSE here? Any better?

    In conclusion

    Is trying to optimise single vector operations using SSE actually worthwhile, or is it really only valuable when doing bulk operations?

  • Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be in an inconsistent state.

    My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next url. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

        ^C Interrupted; stopping... // indicates my interrupt handler ran
        Traceback (most recent call last):
          File "crawler_test.py", line 154, in <module>
            main()
          ...
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
            data = recv(1)
        socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?

  • WCF webHttpBinding with jQuery AJAX - removing/working around same origin policy

    - by csauve
    So I'm trying to create a C# WCF REST service that is called by jQuery. I've discovered that jQuery requires that AJAX calls are made under the same origin policy. I have a few questions for how I might proceed.

    I am already aware of:

    1. The hacky solution of JSONP with a server callback
    2. The way too much server overhead of having a cross-domain proxy
    3. Using Flash in the browser to make the call and setting up crossdomain.xml at my WCF server root

    I'd rather not use these because:

    1. I don't want to use JSON, or at least I don't want to be restricted to using it
    2. I would like to separate the server that serves static pages from the one that serves application state
    3. Flash in this day and age is out of the question

    What I'm thinking: is there anything like Flash's crossdomain.xml file that works for jQuery? Is this "same-origin" policy a part of jQuery or is it a restriction in specific browsers? If it's just a part of jQuery, maybe I'll try digging in the code to work around it.

  • HttpUnhandledException ASP.NET

    - by jweber
    Hi, I have an ASP.NET website. If I make a request for a page it works most of the time, but sometimes I get an HttpUnhandledException. I have tried to log the errors, but from the error messages I'm not able to solve the problem.

    StackTrace:

        at System.Web.UI.Page.HandleError(Exception e)
        at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        at System.Web.UI.Page.ProcessRequest()
        at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context)
        at System.Web.UI.Page.ProcessRequest(HttpContext context)
        at ASP.default_aspx.ProcessRequest(HttpContext context) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\4e215a3c\72ef69da\App_Web_ylvnbciw.6.cs:line 0
        at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
        at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Data: System.Collections.ListDictionaryInternal

    BaseException:

        System.InvalidOperationException: The connection was not closed. The connection's current state is open.
        at DbCategory.getParentCategories()
        at _Default.Page_Load(Object sender, EventArgs e)
        at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e)
        at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e)
        at System.Web.UI.Control.OnLoad(EventArgs e)
        at System.Web.UI.Control.LoadRecursive()
        at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    TargetSite: Boolean HandleError(System.Exception)

    I have an idea that it's something about my session and GET variables, but I'm not sure about that. Does anybody have an idea about what it could be?
