Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • .NET 4: Process.Start using credentials returns empty output

    - by alexey
    I run an external program from ASP.NET:

        var process = new Process();
        var startInfo = process.StartInfo;
        startInfo.FileName = filePath;
        startInfo.Arguments = arguments;
        startInfo.UseShellExecute = false;
        startInfo.RedirectStandardOutput = true;
        startInfo.RedirectStandardError = true;
        process.Start();
        process.WaitForExit();
        Console.Write("Output: {0}", process.StandardOutput.ReadToEnd());
        Console.Write("Error Output: {0}", process.StandardError.ReadToEnd());

    Everything works fine with this code: the external program is executed and process.StandardOutput.ReadToEnd() returns the correct output. But after I add these two lines before process.Start() (to run the program in the context of another user account):

        startInfo.UserName = userName;
        startInfo.Password = securePassword;

    the program is not executed and process.StandardOutput.ReadToEnd() returns an empty string. No exceptions are thrown. userName and securePassword are correct (with incorrect credentials an exception is thrown). How can I run the program in the context of another user account?
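
    In case it helps, a hedged sketch of two things that commonly bite in this setup (assumptions about this environment, not a confirmed diagnosis): the target account has no loaded profile or no usable working directory, so the child exits silently; and reading the pipes only after WaitForExit can deadlock once a pipe buffer fills:

        // Sketch only -- filePath/arguments/userName/securePassword as above;
        // Domain and WorkingDirectory are assumptions about this setup.
        var startInfo = new ProcessStartInfo
        {
            FileName = filePath,
            Arguments = arguments,
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            UserName = userName,
            Password = securePassword,
            Domain = Environment.MachineName,  // assumes a local account
            LoadUserProfile = true,            // some programs exit silently without a profile
            WorkingDirectory = @"C:\Temp"      // must be readable by the target account
        };
        using (var process = Process.Start(startInfo))
        {
            // Read before WaitForExit; fully robust code would read one of these
            // streams asynchronously (BeginOutputReadLine) to avoid pipe deadlocks.
            string output = process.StandardOutput.ReadToEnd();
            string error = process.StandardError.ReadToEnd();
            process.WaitForExit();
            Console.Write("Output: {0}", output);
            Console.Write("Error Output: {0}", error);
        }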


  • Why don't file type filters work properly with nsIFilePicker on Mac OSX?

    - by Eric Strom
    I am running a chrome app in Firefox (started with -app) with the following code to open a file picker:

        var nsIFilePicker = Components.interfaces.nsIFilePicker;
        var fp = Components.classes["@mozilla.org/filepicker;1"]
                           .createInstance(nsIFilePicker);
        fp.init(window, "Select Files", nsIFilePicker.modeOpenMultiple);
        fp.appendFilter("video", "*.mov; *.mpg; *.mpeg; *.avi; *.flv; *.m4v; *.mp4");
        fp.appendFilter("all", "*.*");
        var res = fp.show();
        if (res == nsIFilePicker.returnCancel) return;
        var files = fp.files;
        var paths = [];
        while (files.hasMoreElements()) {
            var arg = files.getNext().QueryInterface(
                Components.interfaces.nsILocalFile
            ).path;
            paths.push(arg);
        }

    Everything seems to work fine on Windows, and the file picker itself works on OS X, but the dropdown menu for selecting between file types only displays on Windows. The first filter ("video" in this case) is in effect, but the dropdown to select the other type never shows. Is there something extra needed to get this working on OS X? I have tried the latest Firefox (3.6) and an older one (3.0.13), and neither shows the file type dropdown on OS X.
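
    If the dropdown simply never renders in the OS X native dialog, one hedged workaround is to append only the permissive filter on that platform and enforce the video filter in code after selection (the regex below is an assumption reconstructed from the filter string above):

        var videoExt = /\.(mov|mpe?g|avi|flv|m4v|mp4)$/i;  // assumed extension list
        while (files.hasMoreElements()) {
            var path = files.getNext().QueryInterface(
                Components.interfaces.nsILocalFile
            ).path;
            if (videoExt.test(path)) {  // keep only video files, mimicking the missing dropdown
                paths.push(path);
            }
        }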


  • Display an input type=file over another input type=file

    - by Kevin Sedgley
    WARNING: Lengthy description coming up! I have written an uploader based upon the APC progress uploader for PHP. This works fine and dandy, but the script as a whole (APC etc.) is intended to be used only by those with JavaScript. To achieve this, I have searched for any input type=file and replaced each with an absolutely positioned form that appears over the area where the old file input was. The reasons for this are:

    - the new uploader can submit to a hidden in-page IFrame
    - it has to be in a separate <form> in order to submit to the APC receiver that displays the progress bar
    - it allows the uploader to be used within any form with an input type=file throughout the site

    I have used jQuery to do this, with the following code. Original HTML form code:

        <div><input type="file" name="media" id="media" /></div>

    Find the position of the div block:

        // get the parent div, and properties thereof
        parentDiv = $(this).closest('div');
        w = $(parentDiv).width();
        h = $(parentDiv).height();
        loc = $(parentDiv).offset();

    Locate the new block over the old block:

        $('#_sender').appendTo('body')
                     .css({left:loc.left, top:loc.top, position:'absolute',
                           zIndex:400, height:h, width:w})
                     .show();

    This works fine and shows over the old block OK. The problem: when other elements in the DOM before or above it change (in this case a "tree view" selector is pushing the old block down), the new upload form ends up over other elements. Is there a jQuery (or JS) method for reacting to this kind of DOM change - some kind of .onchange for the page, or an .onmove for the original block? A repositioning sketch follows below. Thanks in advance, you lovely people. (Before/after screenshots from the original post are not reproduced here.)
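
    There is no reliable cross-browser "element moved" event in this era of browsers, so a hedged workaround sketch is to re-run the positioning code on events that can reflow the page, plus a cheap poll as a catch-all (positionSender is a hypothetical name wrapping the positioning code above):

        function positionSender() {
            var loc = parentDiv.offset();
            $('#_sender').css({
                left: loc.left, top: loc.top,
                width: $(parentDiv).width(), height: $(parentDiv).height()
            });
        }
        positionSender();
        $(window).resize(positionSender);  // reflows triggered by window resizing
        setInterval(positionSender, 250);  // catch-all for DOM changes (e.g. the tree view)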


  • Convert UCS-2 characters to UTF-8 Using C#

    - by quanticle
    I'm pulling some internationalized text from a MS SQL Server 2005 database. As per the defaults for that DB, the characters are stored as UCS-2. However, I need to output the data in UTF-8 format, as I'm sending it out over the web. Currently, I have the following code to convert:

        SqlString dbString = resultReader.GetSqlString(0);
        byte[] dbBytes = dbString.GetUnicodeBytes();
        byte[] utf8Bytes = System.Text.Encoding.Convert(System.Text.Encoding.Unicode,
                                                        System.Text.Encoding.UTF8, dbBytes);
        System.Text.UTF8Encoding encoder = new System.Text.UTF8Encoding();
        string outputString = encoder.GetString(utf8Bytes);

    However, when I examine the output in the browser, it appears to be garbage, no matter what I set the encoding to. What am I missing?

    EDIT: In response to the answers below, the reason I thought I had to perform a conversion is that I can output literal multibyte strings just fine. For example:

        OutputControl.Text = "????????????????????????????????????????????????????????????????";

    works (the multibyte characters have been flattened to '?' in this copy). Here, OutputControl is an ASP.NET Literal. However,

        OutputControl.Text = outputString; // output from the snippet above

    results in mangled output as described above. My hypothesis was that the database's output was somehow getting mangled by ASP.NET. If that's not the case, then what are some other possibilities?
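
    For what it's worth, .NET strings are already UTF-16 in memory, so no byte-level conversion should be needed; garbage like this usually comes from converting twice. A hedged sketch, assuming the page's response encoding is the real culprit:

        // The SqlString already holds Unicode text; use it directly.
        string outputString = resultReader.GetSqlString(0).Value;
        OutputControl.Text = outputString;

        // Then make sure the response itself declares UTF-8, either per response:
        Response.ContentEncoding = System.Text.Encoding.UTF8;
        // ...or once in web.config: <globalization responseEncoding="utf-8" />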


  • Does Team Foundation support cross-app workitem groups?

    - by drachenstern
    We're currently using Visual SourceSafe and BugNet and looking to migrate up and away from VSS. I've been pushing for either SVN (a) we're an ASP.NET shop, b) DVCS is not an option - no matter how much I like Hg ;-) or TFS. Well, we finally got a new dev server, so I talked the boss into installing TFS on it (30-day trial). In the meantime, we had started experimenting with FogBugz. We really like FogBugz for about 80% of what we want to do, and the other 20% is probably stuff we don't know we want yet. I'm pushing for TFS because it allows for IDE-integrated (mostly) everything. He's pushing for FogBugz because he can group tasks by customer and then project and manage everything from one dashboard (which means I lose most of my IDE integration - no huge loss, I agree). Does TFS support a single dashboard that would span all our solutions (in this case each solution is a full app that we sell to a vertical-market client) and let us assign work items to each solution-spanning group? So for instance, I think we envision something like this:

        PROJECT1  - Bugtracker and work items
        PROJECT2  - Bugtracker and work items
        PROJECT3  - Bugtracker and work items
        CUSTOMER1 - Deployment schedules, required features, specific notes (uses PROJECT1, PROJECT2)
        CUSTOMER2 - Deployment schedules, required features, specific notes (uses PROJECT2, PROJECT3)
        CUSTOMER3 - Deployment schedules, required features, specific notes (uses PROJECT1, PROJECT3)

    Hopefully that makes sense; naturally it's more complicated than this, but I think I've given enough detail to paint a picture. I offered the option of creating dummy projects per customer, but he doesn't like that, and it doesn't really give us the single-dashboard view we're hoping to end up with (and that FogBugz, as we've sort of implemented things, does give us now). Has anyone got a good suggestion for a management app that would accomplish what both of us want?


  • How can I save an entire list of items true or false?

    - by JZ
    I'm following Ryan Bates' Railscasts episode 52, and I've translated the relevant parts of the code to work with Rails 3.0.0.beta2. In Ryan's case, he simply marks items incomplete and saves a timestamp; if an item contains a timestamp, the model returns it in the completed list. I'm attempting to save ALL values true or false, depending on whether the check_box_tag is selected or not (using a boolean). I am able to save ONLY the selected items as true or false. How can I save an entire list of items true or false, depending on whether each checkbox is selected? The following is my attempt.

    Controller logic:

        def yardsign
          Add.update_all(["yardsign=?", true], :id => params[:yard_ids])
          redirect_to adds_path
        end

    html.erb:

        <%= form_tag yardsign_adds_path, :method => :put do %>
          <% @adds.each do |add| %>
            <td><%= check_box_tag "yard_ids[]", add.id %></td>
          <% end %>
        <% end %>

    routes.rb:

        resources :adds do
          collection do
            put :yardsign
          end
        end

    Terminal:

        Started POST "/adds/yardsign" for 127.0.0.1 at 2010-04-15 19:22:49
        Processing by AddsController#yardsign as HTML
        Parameters: {"commit"=>"Update", "yardsigntakers"=>["1", "2"], "authenticity_token"=>"3arhsxg/Ky+0W7RNM2T3QditMTJmOnLR5CqmMYWN4Qw="}
        User Load (0.3ms) SELECT "users".* FROM "users" WHERE ("users"."id" = 1) LIMIT 1
        SQL (1.8ms) UPDATE "adds" SET yardsign='t' WHERE ("adds"."id" IN (1, 2))
        Redirected to http://localhost:3000/adds
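
    A hedged sketch of one way to get a true/false written for every row (assuming every listed Add should be reset on each submit, and using the yard_ids param name from the form above): set everything false first, then flip the checked ones to true:

        def yardsign
          Add.update_all(["yardsign = ?", false])  # unchecked boxes send nothing, so reset all
          Add.update_all(["yardsign = ?", true], :id => params[:yard_ids]) if params[:yard_ids].present?
          redirect_to adds_path
        end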


  • Converting non-delimited text into name/value pairs in Delphi

    - by robsoft
    I've got a text file that arrives at my application as many lines of the following form:

        <row amount="192.00" store="10" transaction_date="2009-10-22T12:08:49.640" comp_name="blah " comp_ref="C65551253E7A4589A54D7CCD468D8AFA" name="Accrington "/>

    and I'd like to turn this 'row' into a series of name/value pairs in a given TStringList (there could be dozens of these <row>s in the file, so eventually I will want to iterate through the file, breaking each row into name/value pairs in turn). The problem I've got is that the data isn't obviously delimited (technically, I suppose it's space-delimited). Now, if it weren't for the fact that some of the values contain leading or trailing spaces, I could probably make a few reasonable assumptions and code something to break a row up based on spaces. But as the values themselves may or may not contain spaces, I don't see an obvious way to do this. Delphi's TStringList.CommaText doesn't help, and I've tried playing around with Delimiter, but I get caught out by the spaces inside the values each time. Does anyone have a clever Delphi technique for turning the sample above into something resembling this?

        amount="192.00"
        store="10"
        transaction_date="2009-10-22T12:08:49.640"
        comp_name="blah "
        comp_ref="C65551253E7A4589A54D7CCD468D8AFA"
        name="Accrington "

    Unfortunately, as is usually the case with this kind of thing, I don't have any control over the format of the data to begin with - I can't go back and 'make' it comma-delimited at source, for instance. I guess I could write some code to turn it into comma-delimited, but I would rather find a nice way to work with what I have. This would be in Delphi 2007, if it makes any difference.
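
    A minimal sketch, assuming values never contain double quotes (true of the sample): scan for each `="` and walk back to the preceding space to recover the name. Spaces inside values are harmless because only the text before an '=' is ever treated as a name:

        uses SysUtils, StrUtils, Classes;

        procedure ParseRowAttributes(const Row: string; Pairs: TStrings);
        var
          EqPos, NameStart, ValueEnd: Integer;
        begin
          EqPos := Pos('="', Row);
          while EqPos > 0 do
          begin
            // walk back from '=' to the preceding space to find the attribute name
            NameStart := EqPos;
            while (NameStart > 1) and (Row[NameStart - 1] <> ' ') do
              Dec(NameStart);
            // the value runs from just after '="' to the next double quote
            ValueEnd := PosEx('"', Row, EqPos + 2);
            if ValueEnd = 0 then
              Break;
            // stored unquoted, so Pairs.Values['amount'] etc. works afterwards
            Pairs.Add(Copy(Row, NameStart, EqPos - NameStart) + '=' +
                      Copy(Row, EqPos + 2, ValueEnd - (EqPos + 2)));
            EqPos := PosEx('="', Row, ValueEnd + 1);
          end;
        end;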


  • Plink SSH: '-m file' option not working

    - by Technext
    Hi, I am trying to use Plink to run commands on a remote server. Both the local and remote machines are Windows. Though I am able to connect to the remote machine using Plink, I am not able to use the '-m file' option. I tried the following three ways, but to no avail.

    Try 1 (file.txt contains only one command, i.e. dir):

        plink.exe -ssh -pw mypwd gchhabra@machine -m file.txt

        Could not chdir to home directory /home/gchhabra: No such file or directory
        dir: not found

    Try 2:

        plink.exe -ssh -pw mypwd gchhabra@machine dir

        Could not chdir to home directory /home/gchhabra: No such file or directory
        dir: not found

    Try 3:

        plink.exe -ssh -pw mypwd gchhabra@machine < file.txt

    In this case, I get the following output:

        Using username "gchhabra".
        ****USAGE WARNING****
        This is a private computer system. This computer system, including all
        ..... including personal information, placed or sent over this system may be
        monitored. Use of this computer system, authorized or unauthorized,
        constitutes consent ... constitutes consent to monitoring for these purposes.
        dirCould not chdir to home directory /home/gchhabra: No such file or directory
        Microsoft Windows [Version x.x.xxx]
        (C) Copyright 1985-2003 Microsoft Corp.
        C:\Program Files\OpenSSH

    After I get the above prompt, it hangs. Can anyone please help me with this? Regards, Gaurav
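
    A hedged guess from the output above ("Could not chdir to /home/gchhabra" and "dir: not found" suggest the Windows OpenSSH server hands commands to a Unix-style shell, where dir is not a program): wrapping the command in cmd.exe may help:

        rem Run the built-in via cmd.exe explicitly:
        plink.exe -ssh -pw mypwd gchhabra@machine "cmd /c dir"

        rem Or keep -m, with file.txt containing the single line:  cmd /c dir
        plink.exe -ssh -pw mypwd gchhabra@machine -m file.txt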


  • T-SQL Table Joins - Unique Situation

    - by Dimitri
    Hello everyone. This is my first time encountering a case like this, and I don't quite know how to handle it.

    Situation: I have one table, tblSettingDefinition, with fields ID, GroupID, Name, TypeID, DefaultValue. Then I have tblSettingTypes, with fields TypeID, Name. And I have a final table, tblUserSettings, with fields SettingID, SettingDefinitionID, UserID, Value. The whole point of this is to have customizable settings. A setting can be defined for a group or as a global setting (if GroupID is NULL). It has a default value, but if a user modifies the setting, an entry is added to tblUserSettings that stores the new value. I want a query that grabs user settings by first looking at tblUserSettings and, if it has records for the given user, grabbing them; if not, it should retrieve the default settings. But the trick is that whether or not the user has settings, I need the fields from the other two tables retrieved, to know the setting's type, name, etc. (which are stored in those other tables). I'm writing a query something like this:

        SELECT *
        FROM tblSettingDefinition SD
        LEFT JOIN tblUserSettings US ON SD.SettingID = US.SettingDefinitionID
        JOIN tblSettingTypes ST ON SD.TypeID = ST.ID
        WHERE US.UserID = @UserID
           OR ((SD.GroupID IS NULL)
            OR (SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID)))

    but it retrieves settings for all users from tblUserSettings instead of just the ones that match the current @UserID. And if @UserID has no records in tblUserSettings, all user settings are still retrieved instead of the defaults from tblSettingDefinition. Hope I made myself clear. Any help would be highly appreciated. Thank you.
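
    A hedged sketch of the usual fix (column names follow the schema described above, so SD.ID and ST.TypeID are assumed where the original query had SD.SettingID and ST.ID): move the per-user filter out of WHERE and into the LEFT JOIN's ON clause, so unmatched definitions survive as NULLs, then fall back to the default with COALESCE:

        SELECT SD.ID,
               SD.Name,
               ST.Name AS TypeName,
               COALESCE(US.Value, SD.DefaultValue) AS EffectiveValue
        FROM tblSettingDefinition SD
        JOIN tblSettingTypes ST
          ON SD.TypeID = ST.TypeID
        LEFT JOIN tblUserSettings US
          ON US.SettingDefinitionID = SD.ID
         AND US.UserID = @UserID  -- filtering here keeps the join outer
        WHERE SD.GroupID IS NULL
           OR SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID)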


  • Reformat SQLGeography polygons to JSON

    - by James
    I am building a web service that serves geographic boundary data in JSON format. The geographic data is stored in a SQL Server 2008 R2 database using the geography type in a table. I use the [ColumnName].ToString() method to return the polygon data as text. Example output:

        POLYGON ((-6.1646509904325884 56.435153006374627, ... -6.1606079906751 56.4338050060666))
        MULTIPOLYGON (((-6.1646509904325884 56.435153006374627 0 0, ... -6.1606079906751 56.4338050060666 0 0)))

    Geographic definitions can take the form of either an array of lat/long pairs defining a polygon or, in the case of multiple definitions, an array of polygons (multipolygon). I have the following regex that converts the output to JSON objects contained in multi-dimensional arrays, depending on the output:

        Regex latlngMatch = new Regex(@"(-?[0-9]{1}\.\d*)\s(\d{2}.\d*)(?:\s0\s0,?)?", RegexOptions.Compiled);

        private string ConvertPolysToJson(string polysIn)
        {
            return this.latlngMatch.Replace(polysIn.Remove(0, polysIn.IndexOf("(")) // remove POLYGON or MULTIPOLYGON
                                                   .Replace("(", "[")               // convert to JSON array syntax
                                                   .Replace(")", "]"),              // same as above
                                            "{lng:$1,lat:$2},");                    // reformat lat/lng pairs to JSON objects
        }

    This is actually working pretty well and converts the DB output to JSON on the fly in response to an operation call. However, I am no regex master, and the calls to String.Replace() also seem inefficient to me. Does anyone have any suggestions/comments about the performance of this?
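
    One alternative sketch that avoids string parsing altogether (assuming the Microsoft.SqlServer.Types assembly is referenced and the row holds a single ring; a MULTIPOLYGON would need an outer loop over STNumGeometries): read the column as SqlGeography and walk its points directly:

        // Hedged sketch -- the reader column index is assumed, as above.
        var geo = (SqlGeography)resultReader.GetValue(0);
        var sb = new StringBuilder("[");
        for (int i = 1; i <= geo.STNumPoints().Value; i++)  // point indexes are 1-based
        {
            SqlGeography p = geo.STPointN(i);
            if (i > 1) sb.Append(',');
            sb.AppendFormat("{{lng:{0},lat:{1}}}", p.Long.Value, p.Lat.Value);
        }
        sb.Append("]");
        string json = sb.ToString();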


  • How does browser know when to prompt user to save password?

    - by Eric
    This is related to the question I asked here: http://stackoverflow.com/questions/2382329/how-can-i-get-browser-to-prompt-to-save-password

    This is the problem: I CAN'T get my browser to prompt me to save the password for the site I'm developing. (I'm talking about the bar that sometimes appears when you submit a form in Firefox, saying "Remember the password for yoursite.com? Yes / Not now / Never".) This is super frustrating because this feature of Firefox (and most other modern browsers, which I hope work in a similar fashion) seems to be a mystery. It's like a magic trick the browser does, where it looks at your code, or what you submit, or something, and if it "looks" like a login form with a username (or email address) field and a password field, it offers to save. Except in this case, where it's not offering my users that option after they use my login form, and it's making me nuts. :-) (I checked my Firefox settings - I have NOT told the browser "never" for this site. It should be prompting.)

    My question: exactly what are the heuristics that Firefox (or any other modern browser) uses to know when it should prompt the user to save? This shouldn't be too difficult to answer, since it's right there in the Mozilla source (I don't know where to look, or else I'd try to dig it out myself). You'd think there would be a blog post or some similar developer note from the Mozilla developers about this, but I can't find that either. (* Note that if your answer has anything to do with cookies, encryption, or anything else about how I'm storing the user's passwords in the database, you've probably misread my question. :-)
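
    As a data point, the commonly observed heuristic (a hedged summary, not a quote from the Mozilla source) is: a real, navigation-producing form submission that contains a password field, with a text or email field before it, no autocomplete="off" on the form or fields, and no script cancelling the submit in favour of XHR. A minimal form that tends to trigger the bar:

        <form method="post" action="/login">
          <input type="text" name="username">
          <input type="password" name="password">
          <input type="submit" value="Log in">
        </form>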


  • Why aren't google api clients built on top of Apache's Abdera project?

    - by lisak
    Hey, could anybody please explain this to me? As far as I can see, the developers of Java's Google API client library are reinventing the wheel. It's like writing a new JDK for a Java project. I'm aware that the Google Data protocol is a little specific regarding Atom publishing, but if one needs the fancy extensions and features that the Apache Abdera project offers for this protocol, it is better not to use the Google API client library and to implement the client from scratch with Abdera... And I'm sure that in a lot of cases its features, such as Abdera's JCR adapter, would come in very handy for Google Docs, Google Translator Toolkit and others. Now, it's great that there is a Google API client library to be used for Google Docs, but what am I going to do with the documents? I believe that in more than half the cases there is also a repository or database on the other side. And in that case Abdera is needed, not the simple Google API clients that only marshal/unmarshal the feeds... In fact, there is something to persist in all of the Google APIs. It would make sense if Google decided to invest the effort into Abdera enhancement... This doesn't...

    To make the question more specific: how are you developing Google API clients that need entry persistence (JCR, for instance)? What would be the best way to integrate a Google API client library with Apache Abdera?


  • Thread Message Loop Hangs in Delphi

    - by erikjw
    Hello all. I have a simple Delphi program that I'm working on, in which I am attempting to use threading to separate the program's functionality from its GUI and to keep the GUI responsive during lengthy tasks, etc. Basically, I have a 'controller' TThread and a 'view' TForm. The view knows the controller's handle, which it uses to send the controller messages via PostThreadMessage. I have had no problem in the past using this sort of model for forms that are not the main form, but for some reason, when I use it with the main form, the thread's message loop just quits. Here is my code for the thread's message loop:

        procedure TController.Execute;
        var
          Msg: TMsg;
        begin
          while not Terminated do
          begin
            if (Integer(GetMessage(Msg, hwnd(0), 0, 0)) = -1) then
            begin
              Synchronize(Terminate);
            end;
            TranslateMessage(Msg);
            DispatchMessage(Msg);
            case Msg.message of
              // ...call different methods based on message
            end;
          end;
        end;

    To set up the controller, I do this:

        Controller := TController.Create(true); // Create suspended
        Controller.FreeOnTerminate := True;
        Controller.Resume;

    For processing the main form's messages, I have tried using both Application.Run and the following loop (immediately after Controller.Resume):

        while not Application.Terminated do
        begin
          Application.ProcessMessages;
        end;

    I'm stuck here - any help would be greatly appreciated.
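
    Two hedged observations, offered as a sketch rather than a certain diagnosis: PostThreadMessage takes a thread ID (the TThread.ThreadID property), not a handle; and it fails for messages posted before the target thread has created its message queue, which Windows does lazily on the thread's first message-API call. Forcing queue creation at the top of Execute, and letting WM_QUIT end the loop, looks roughly like this:

        procedure TController.Execute;
        var
          Msg: TMsg;
        begin
          // Force creation of this thread's message queue before the view posts to it.
          PeekMessage(Msg, 0, WM_USER, WM_USER, PM_NOREMOVE);
          while not Terminated do
          begin
            case Integer(GetMessage(Msg, 0, 0, 0)) of
              -1: Break;  // error retrieving a message
               0: Break;  // WM_QUIT received
            else
              TranslateMessage(Msg);
              DispatchMessage(Msg);
              // dispatch on Msg.message here, as before
            end;
          end;
        end;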


  • Troubleshooting a JavaScript Function in IE

    - by CreativeNotice
    So this function works fine in Gecko and WebKit browsers, but not IE7. I've busted my brain trying to spot the issue. Does anything stick out to you? The basic premise is that you pass in a data object (in this case a response from jQuery's $.getJSON), we check for a response code, set the notification's class, append a layer, show it to the user, then reverse the process after a time limit.

        function userNotice(data){
            // change class based on error code returned
            var myClass = '';
            if(data.code == 200){ myClass = 'success'; }
            else if(data.code == 400){ myClass = 'error'; }
            else{ myClass = 'notice'; }

            // create message html, add to DOM, fade in
            var myNotice = '<div id="notice" class="ajaxMsg ' + myClass + '">' + data.msg + '</div>';
            $("body").append(myNotice);
            $("#notice").fadeIn('fast');

            // fade out and remove from DOM after delay
            var t = setTimeout(function(){
                $("#notice").fadeOut('slow', function(){
                    $(this).remove();
                });
            }, 5000);
        }
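
    Two common IE7 culprits, offered strictly as guesses: a leftover #notice from an earlier call leaves duplicate ids in the page (which confuses id-based selection in old IE), and fadeIn only animates elements that start hidden. A hedged rework covering both:

        function userNotice(data) {
            var myClass = data.code == 200 ? 'success'
                        : data.code == 400 ? 'error'
                        : 'notice';
            $('#notice').remove();  // never leave two elements with the same id
            $('<div id="notice" class="ajaxMsg ' + myClass + '"></div>')
                .text(data.msg)
                .hide()             // ensure fadeIn has something to animate
                .appendTo('body')
                .fadeIn('fast');
            setTimeout(function () {
                $('#notice').fadeOut('slow', function () { $(this).remove(); });
            }, 5000);
        }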


  • Using custom coordinates with QGraphicsScene

    - by Rob
    I am experimenting with a WYSIWYG editor that allows a user to draw shapes on a page, and the Qt graphics scene support seems perfect for this. However, instead of working in pixels I want all my QGraphicsItem objects to work in tenths of a millimetre, but I don't know how to achieve this. For example:

        // Create a scene that is the size of an A4 page (2100 = 21cm, 2970 = 29.7cm)
        QGraphicsScene* scene = new QGraphicsScene(0, 0, 2100, 2970);

        // Add a rectangle located 1cm across, 1cm down, 5cm wide and 2cm high
        QGraphicsItem* item = scene->addRect(100, 100, 500, 200);
        ...
        QGraphicsView* view = new QGraphicsView(scene);
        setCentralWidget(view);

    Now, when I display the scene above, I want the shapes to appear at the correct size for the screen DPI. Is this simply a case of using QGraphicsView::scale, or do I have to do something more complicated? Note that if I were using a custom QWidget instead, I would use QPainter::setWindow and QPainter::setViewport to create a custom mapping mode, but I can't see how to do this using the graphics scene support.
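
    A hedged sketch of the scale mapping (assuming scene units of 0.1 mm and that the widget's logical DPI is accurate for the display): one inch is 25.4 mm, i.e. 254 scene units, so:

        QGraphicsView* view = new QGraphicsView(scene);
        // 1 scene unit = 0.1 mm; 25.4 mm per inch => 254 units per inch
        const qreal sx = view->logicalDpiX() / 254.0;
        const qreal sy = view->logicalDpiY() / 254.0;
        view->scale(sx, sy);
        setCentralWidget(view);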


  • OutOfMemoryException

    - by Andrew
    I have an application that is pretty memory-hungry. It holds a large amount of data in some big arrays. I have recently been noticing the occasional OutOfMemoryException. These OutOfMemoryExceptions occur long before my application (ASP.NET) has used up the 800 MB available to it. I have tracked the issue down to the area of code where the array is resized. The array contains a structure that is 74 bytes in size. (I know you shouldn't create structs bigger than 16 bytes, but this application is a port from a VB6 application.) I have tried changing the struct to a class, and this appears to have fixed the problem for now. I think the reason changing to a class solves the problem is that when using a struct, resizing the array requires reserving a single segment of memory large enough to store the new array (e.g. (currentArraySize + increaseBySize) * 74), and such a segment cannot always be found. This leads to the OutOfMemoryException. This isn't the case with a class, as each element of the array only needs 8 bytes to store a pointer to the new object. Is my thinking correct here?
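
    A hedged sketch of the mitigation that follows from that reasoning (sizes from the question; ChunkedList is a hypothetical helper): with a 74-byte struct, resizing needs one contiguous block of n * 74 bytes, which lands on the Large Object Heap once past roughly 85 KB, where fragmentation makes big allocations fail early. Storing the data in fixed-size chunks keeps every allocation small:

        using System.Collections.Generic;

        // Hypothetical chunked storage: no single allocation ever exceeds
        // ChunkSize elements, so nothing lands on the LOH.
        class ChunkedList<T>
        {
            private const int ChunkSize = 1024;
            private readonly List<T[]> chunks = new List<T[]>();
            private int count;

            public void Add(T item)
            {
                if (count % ChunkSize == 0)
                    chunks.Add(new T[ChunkSize]);  // grow one small chunk at a time
                chunks[count / ChunkSize][count % ChunkSize] = item;
                count++;
            }

            public T this[int index]
            {
                get { return chunks[index / ChunkSize][index % ChunkSize]; }
                set { chunks[index / ChunkSize][index % ChunkSize] = value; }
            }

            public int Count { get { return count; } }
        }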


  • MyMessage<T> throws an exception when calling XmlSerializer

    - by Arthis
    I am very new to NServiceBus. I am using version 3.0.1, the latest to date, and I wonder if my case is a normal limitation of NSB that I am not aware of. I have an ASP.NET MVC application I am trying to set up, and in my global.asax I have the following:

        var configure = Configure.WithWeb()
                                 .DefaultBuilder()
                                 .ForMvc()
                                 .XmlSerializer();

    But I get an error from the XmlSerializer when it deals with one of my objects:

        [Serializable]
        public class MyMessage<T> : IMessage
        {
            public T myobject { get; set; }
        }

    The call passes through:

        XmlSerializer()
        instance.Initialize(types);
        this.InitType(type, moduleBuilder);
        this.InitType(info2.PropertyType, moduleBuilder);

    and then, when dealing with T:

        string typeName = GetTypeName(t);

    typeName is null, and the following instruction:

        if (!nameToType.ContainsKey(typeName))

    fails with "null value not allowed". Is this a limitation of NServiceBus, or am I messing something up?
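
    I can't say for certain, but generic message types are a known sore spot for NSB's XML serializer in this era; the usual workaround sketch is a closed, non-generic message per payload (names here are hypothetical):

        // Hypothetical concrete message replacing MyMessage<Order>:
        public class OrderMessage : IMessage
        {
            public Order Payload { get; set; }
        }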


  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into two parts: the Front End, which holds all the objects except tables, and the Back End, which holds the tables. The MSDN page links to the article "Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability", which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time any changes are made to the Front End, they need to be deployed to every user machine.

    My question is: assuming the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption, etc.? Thank you.

    EDIT: Just to clarify, the scenario in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to deploy the FE to each user machine, but my question is more about the dangers if that is not done - e.g. when you are given an existing solution that keeps both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? The danger of possible data corruption would definitely be a strong enough argument, but is that the case? This is a follow-up to my previous question, "From SQL Server to MS Access 2007".


  • Navigating to nodes using xpath in flat structure

    - by James Berry
    I have an XML file in a flat structure. We do not control the format of this file; we just have to deal with it. I've renamed the fields because they are highly domain-specific and don't really make any difference to the problem.

        <attribute name="Title">Book A</attribute>
        <attribute name="Code">1</attribute>
        <attribute name="Author">
            <value>James Berry</value>
            <value>John Smith</value>
        </attribute>
        <attribute name="Title">Book B</attribute>
        <attribute name="Code">2</attribute>
        <attribute name="Title">Book C</attribute>
        <attribute name="Code">3</attribute>
        <attribute name="Author">
            <value>James Berry</value>
        </attribute>

    Key things to note: the file is not particularly hierarchical. Books are delimited by an occurrence of an attribute element with name='Title', but the name='Author' attribute node is optional. Is there a simple XPath statement I can use to find the authors of book 'n'? It is easy to identify the title of book 'n', but the Author value is optional, and you can't just take the following Author, because in the case of book 2 that would give the author of book 3. I have written a state machine to parse this as a series of elements, but I can't help thinking there must be a way to get the results I want directly.
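
    A hedged sketch (assuming the elements are siblings under one parent, as shown): instead of walking forward from the Title, anchor on the Author node and ask which Title is its nearest preceding sibling, which sidesteps the optionality problem:

        //attribute[@name='Author'][preceding-sibling::attribute[@name='Title'][1] = 'Book C']/value

    Substituting 'Book B' returns an empty node-set rather than book 3's authors, because that Author node's nearest preceding Title is 'Book C'.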


  • HTTP crawler in Erlang

    - by ctp
    I'm writing a simple HTTP crawler, but I have an issue running the code at the bottom. I'm requesting 50 URLs and get the content of 20+ back. I've generated a few files, each 150 kB in size, to test the crawler. So I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file has been fetched? The test data server is online, so please try the code out; any hints are welcome :)

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/1]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            lists:foreach(fun do_spawn/1, Ids).

        do_spawn(Id) ->
            proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

        do_send_req(Id) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n",
                              [Id, Status, length(B)]);
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
            end.

    That's the output:

        Requesting ID 1 ...
        Requesting ID 2 ...
        Requesting ID 3 ...
        Requesting ID 4 ...
        Requesting ID 5 ...
        Requesting ID 6 ...
        Requesting ID 7 ...
        Requesting ID 8 ...
        Requesting ID 9 ...
        Requesting ID 10 ...
        Requesting ID 11 ...
        Requesting ID 12 ...
        Requesting ID 13 ...
        Requesting ID 14 ...
        Requesting ID 15 ...
        Requesting ID 16 ...
        Requesting ID 17 ...
        Requesting ID 18 ...
        Requesting ID 19 ...
        Requesting ID 20 ...
        Requesting ID 21 ...
        Requesting ID 22 ...
        Requesting ID 23 ...
        Requesting ID 24 ...
        Requesting ID 25 ...
        Requesting ID 26 ...
        Requesting ID 27 ...
        Requesting ID 28 ...
        Requesting ID 29 ...
        Requesting ID 30 ...
        Requesting ID 31 ...
        Requesting ID 32 ...
        Requesting ID 33 ...
        Requesting ID 34 ...
        Requesting ID 35 ...
        Requesting ID 36 ...
        Requesting ID 37 ...
        Requesting ID 38 ...
        Requesting ID 39 ...
        Requesting ID 40 ...
        Requesting ID 41 ...
        Requesting ID 42 ...
        Requesting ID 43 ...
        Requesting ID 44 ...
        Requesting ID 45 ...
        Requesting ID 46 ...
        Requesting ID 47 ...
        Requesting ID 48 ...
        Requesting ID 49 ...
        Requesting ID 50 ...
        OK -- ID: 49 -- Status: "200" -- Content length: 150000
        OK -- ID: 47 -- Status: "200" -- Content length: 150000
        OK -- ID: 50 -- Status: "200" -- Content length: 150000
        OK -- ID: 17 -- Status: "200" -- Content length: 150000
        OK -- ID: 48 -- Status: "200" -- Content length: 150000
        OK -- ID: 45 -- Status: "200" -- Content length: 150000
        OK -- ID: 46 -- Status: "200" -- Content length: 150000
        OK -- ID: 10 -- Status: "200" -- Content length: 150000
        OK -- ID: 09 -- Status: "200" -- Content length: 150000
        OK -- ID: 19 -- Status: "200" -- Content length: 150000
        OK -- ID: 13 -- Status: "200" -- Content length: 150000
        OK -- ID: 21 -- Status: "200" -- Content length: 150000
        OK -- ID: 16 -- Status: "200" -- Content length: 150000
        OK -- ID: 27 -- Status: "200" -- Content length: 150000
        OK -- ID: 03 -- Status: "200" -- Content length: 150000
        OK -- ID: 23 -- Status: "200" -- Content length: 150000
        OK -- ID: 29 -- Status: "200" -- Content length: 150000
        OK -- ID: 14 -- Status: "200" -- Content length: 150000
        OK -- ID: 18 -- Status: "200" -- Content length: 150000
        OK -- ID: 01 -- Status: "200" -- Content length: 150000
        OK -- ID: 30 -- Status: "200" -- Content length: 150000
        OK -- ID: 40 -- Status: "200" -- Content length: 150000
        OK -- ID: 05 -- Status: "200" -- Content length: 150000

    Update: thanks stemm for the hint with the wait_workers. I've combined your code and mine, but I see the same behaviour :(

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/2]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            %% collect reference to each worker
            Refs = [ do_spawn(Id) || Id <- Ids ],
            %% wait for response from each worker
            wait_workers(Refs).

        wait_workers(Refs) ->
            lists:foreach(fun receive_by_ref/1, Refs).

        receive_by_ref(Ref) ->
            %% receive message only from worker with specific reference
            receive
                {Ref, done} -> done
            end.

        do_spawn(Id) ->
            Ref = make_ref(),
            proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
            Ref.

        do_send_req(Id, {Pid, Ref}) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n",
                              [Id, Status, length(B)]),
                    %% send message that work is done
                    Pid ! {Ref, done};
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
                    %% repeat request if there was an error while fetching a page,
                    do_send_req(Id, {Pid, Ref})
                    %% or - if you don't want to repeat the request, put there:
                    %% Pid ! {Ref, done}
            end.

    Running the crawler works fine for a handful of files, but then it doesn't even fetch entire files (each 150,000 bytes) - the crawler fetches some files only partially; see the following web server log :(

        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"

    Any hints are welcome. I have no clue what's going wrong there :(
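
    One hedged thing to check (an assumption about the setup, not a confirmed diagnosis): ibrowse throttles connections per host with fairly small defaults, and requests that can't get a connection may time out rather than queue indefinitely. The limits can be raised per request - option names as in ibrowse 2.x:

        %% Sketch: allow up to 50 concurrent sessions / pipelined requests to this host.
        Result = (catch ibrowse:send_req(to_url(Id), [], get, [],
                                         [{max_sessions, 50},
                                          {max_pipeline_size, 50}], 10000)),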


  • grep 5 seconds of input from the serial port inside a shell-script

    - by pica
    I've got a device that I'm operating next to my PC, and as it runs it's spitting log lines out its serial port. I have this wired to my PC, and I can see the log lines fine if I'm using either minicom or something like:

        ttylog -b 115200 -d /dev/ttyS0

    I want to write 5 seconds of the device's serial output to a temp file (or assign it to a variable) and then later grep that file for keywords that will tell me how the device is operating. I've already tried redirecting the output to a file while running the command in the background, then sleeping 5 seconds and killing the process, but the log lines never get written to my temp file. Example:

        touch tempFile
        ttylog -b 115200 -d /dev/ttyS0 >> tempFile &
        serialPID=$!
        sleep 5
        #kill ${serialPID} #does not work, gets wrong PID
        killall ttylog
        cat tempFile

    The file gets created but never filled with any data. I can also replace the ttylog line with:

        ttylog -b 115200 -d /dev/ttyS0 | tee -a tempFile &

    In neither case do I ever see any log lines logged to stdout or the log file, unless I have multiple copies of ttylog running by mistake (see the commented-out line, d'oh). I have no idea what's going on here. It seems to be a failure of redirection within my script. Am I on the right track? Is there a better way to sample 5 seconds of the serial port?
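
    A hedged alternative sketch that avoids the background-job/kill dance entirely (assumes GNU coreutils' timeout is available and that reading the tty directly is acceptable):

        stty -F /dev/ttyS0 115200 raw        # configure the port first
        timeout 5 cat /dev/ttyS0 > tempFile  # capture exactly 5 seconds of output
        grep -q "keyword" tempFile && echo "device is behaving"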


  • How do I locate a particular word in a text file using .NET

    - by cmrhema
    I am sending mails (in ASP.NET, C#) using a template held in a text file (.txt), like the one below:

        User Name : <User Name>
        Address : <Address>

    I used to replace the words within the angle brackets in the text file using the code below:

        StreamReader sr;
        sr = File.OpenText(HttpContext.Current.Server.MapPath(txt));
        copy = sr.ReadToEnd();
        sr.Close(); // close the reader
        copy = copy.Replace(word.ToUpper(), "#" + word.ToUpper()); // remove the word specified UC

        // save new copy into existing text file
        FileInfo newText = new FileInfo(HttpContext.Current.Server.MapPath(txt));
        StreamWriter newCopy = newText.CreateText();
        newCopy.WriteLine(copy);
        newCopy.Write(newCopy.NewLine);
        newCopy.Close();

    Now I have a new problem: users will be adding new words within angle brackets, say for example <Salary>. In that case I have to read the file and find the word <Salary>. In other words, I have to find all the words located within angle brackets. How do I do that?
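
    A hedged sketch using a regular expression to enumerate every placeholder (assuming placeholders never contain angle brackets themselves):

        using System.Text.RegularExpressions;

        string template = File.ReadAllText(HttpContext.Current.Server.MapPath(txt));
        foreach (Match m in Regex.Matches(template, @"<([^<>]+)>"))
        {
            string placeholder = m.Groups[1].Value;  // e.g. "Salary"
            // look up the replacement for this placeholder and substitute it
        }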


  • Multithreaded IOCP Client Issue

    - by Carl
    I am writing a multithreaded client that uses an IO completion port. I create and connect the socket that has the WSA_FLAG_OVERLAPPED attribute set:

        if ((m_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) == INVALID_SOCKET)
        {
            throw std::exception("Failed to create socket.");
        }

        if (WSAConnectByName(m_socket, L"server.com", L"80",
                             &localAddressLength, reinterpret_cast<sockaddr*>(&localAddress),
                             &remoteAddressLength, &remoteAddress, NULL, NULL) == FALSE)
        {
            throw std::exception("Failed to connect.");
        }

    I associate the IO completion port with the socket:

        if ((m_hIOCP = CreateIoCompletionPort(reinterpret_cast<HANDLE>(m_socket), m_hIOCP, NULL, 8)) == NULL)
        {
            throw std::exception("Failed to create IOCP object.");
        }

    All appears to go well until I try to send some data over the socket:

        SocketData* socketData = new SocketData;
        socketData->hEvent = 0;

        DWORD bytesSent = 0;
        if (WSASend(m_socket, socketData->SetBuffer(socketData->GenerateLoginRequestHeader()), 1,
                    &bytesSent, NULL, reinterpret_cast<OVERLAPPED*>(socketData), NULL) == SOCKET_ERROR
            && WSAGetLastError() != WSA_IO_PENDING)
        {
            throw std::exception("Failed to send data.");
        }

    Instead of returning SOCKET_ERROR with the last error set to WSA_IO_PENDING, WSASend returns immediately. I need the IO to pend and its completion to be handled in my thread function, which is also my worker thread:

        unsigned int __stdcall MyClass::WorkerThread(void* lpThis)
        {
        }

    I've done this before, but I don't know what is going wrong in this case. I'd greatly appreciate any efforts to help me fix this problem.
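
    For what it's worth, an immediate return from WSASend on an IOCP-associated socket is not a failure mode: when the buffer can be sent straight away, the call returns 0, and a completion packet is still queued to the port (unless skip-on-success has been requested via SetFileCompletionNotificationModes). A hedged sketch of the usual handling:

        // Sketch: treat synchronous success and WSA_IO_PENDING identically and let
        // the worker thread's GetQueuedCompletionStatus loop process the packet.
        int rc = WSASend(m_socket,
                         socketData->SetBuffer(socketData->GenerateLoginRequestHeader()), 1,
                         NULL,  // byte count arrives with the completion packet
                         0,     // dwFlags
                         reinterpret_cast<OVERLAPPED*>(socketData), NULL);
        if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
        {
            throw std::exception("Failed to send data.");
        }
        // rc == 0: the send completed inline, but the completion packet is still
        // delivered to the IOCP, so no special-casing is needed here.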


  • Why use INCLUDE in a SQL index

    - by StarLite
    I recently encountered an index in a database I maintain that was of the form:

        CREATE INDEX [IX_Foo] ON [Foo]
        (
            Id ASC
        )
        INCLUDE ( SubId )

    In this particular case, the performance problem I was encountering (a slow SELECT filtering on both Id and SubId) could be fixed by simply moving the SubId column into the index proper rather than leaving it as an included column. This got me thinking, however, that I don't understand the reasoning behind included columns at all, when generally they could simply be part of the index itself. Even if I don't particularly care about the items being in the index itself, is there any downside to having a column in the index rather than merely included? After some research, I am aware that there are a number of restrictions on what can go into an indexed column (the maximum width of the index, and some column types, like 'image', that can't be indexed). In these cases I can see that you would be forced to include the column in the index page data. The only other thing I can think of is that if there are updates to SubId, the row will not need to be relocated when the column is included (though the value in the index would still need to be changed). Is there something else I'm missing? I'm considering going through the other indexes in the database and shifting included columns into the index proper where possible. Would this be a mistake? I'm primarily interested in MS SQL Server, but information on other DB engines is welcome too.
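
    A hedged side-by-side of the two shapes (names from the question) may make the trade-off concrete:

        -- SubId in the key: the B-tree is ordered by (Id, SubId); both columns can
        -- be seeked and sorted on, but updates to SubId can move rows within the
        -- index, and the pair counts against the index key-size limit.
        CREATE INDEX IX_Foo_Key ON Foo (Id ASC, SubId ASC);

        -- SubId included: stored only at the leaf level; it covers SELECTs that
        -- read SubId without widening the key, and dodges the key restrictions
        -- (width limits and non-indexable types) mentioned above.
        CREATE INDEX IX_Foo_Incl ON Foo (Id ASC) INCLUDE (SubId);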


  • Can a class inherit from LambdaExpression in .NET? Or is this not recommended?

    - by d.
    Consider the following code (C# 4.0):

        public class Foo : LambdaExpression
        {
        }

    This throws the following design-time error:

        Foo does not implement inherited abstract member
        System.Linq.Expressions.LambdaExpression.Accept(System.Linq.Expressions.Compiler.StackSpiller)

    There's absolutely no problem with

        public class Foo : Expression
        {
        }

    but, out of curiosity and for the sake of learning, I searched Google for System.Linq.Expressions.LambdaExpression.Accept(System.Linq.Expressions.Compiler.StackSpiller) and guess what: zero results returned (when was the last time you saw that?). Needless to say, I haven't found any documentation on this method anywhere else. As I said, one can easily inherit from Expression; on the other hand, LambdaExpression, while not marked as sealed (Expression<TDelegate> inherits from it), seems designed to prevent inheriting from it. Is this actually the case? Does anyone out there know what this method is about?

    EDIT (1): More info based on the first answers - if you try to implement Accept, the editor (C# 2010 Express) automatically gives you the following stub:

        protected override Expression Accept(System.Linq.Expressions.ExpressionVisitor visitor)
        {
            return base.Accept(visitor);
        }

    But you still get the same error. If you try to use a parameter of type StackSpiller directly, the compiler throws a different error:

        System.Linq.Expressions.Compiler.StackSpiller is inaccessible due to its protection level.

    EDIT (2): Based on other answers, inheriting from LambdaExpression is not possible, so the question of whether or not it is recommended becomes irrelevant. I wonder if, in cases like this, the error message should be "Foo cannot implement inherited abstract member ... because [reasons go here]"; the current error message (as some answers prove) seems to tell me that all I need to do is implement Accept (which I can't do).
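
    Given that the abstract Accept override is blocked by an internal type (StackSpiller), a hedged workaround sketch is composition instead of inheritance:

        // Sketch: wrap a LambdaExpression rather than deriving from it.
        public class Foo
        {
            private readonly LambdaExpression inner;

            public Foo(LambdaExpression inner)
            {
                this.inner = inner;
            }

            public LambdaExpression Inner { get { return inner; } }

            // Re-expose or extend whatever members Foo actually needs by delegation.
            public Expression Body { get { return inner.Body; } }
        }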

