Search Results

Search found 22641 results on 906 pages for 'case'.


  • Dojo: How do you reinitialize form elements after an ajax call?

    - by Dural
    I'm trying to do a couple of things using Zend Framework & Dojo Toolkit, and any help would be appreciated. Here's the problem: I have a form that is rendered with the Zend Framework form class, and it has an ajax radio button selection. Clicking one of these radio buttons sends an ajax request to another controller, which has no layout, just a rendered form. The ajax request then populates a div with the new form options. The problem is that when I replace the innerHTML of the div with the ajax response, the new form inputs and elements do not inherit the same Dojo styling and form validation. Is there a way to reinitialize form elements after an ajax call? I have tried the code attached, which I found and modified slightly for this example; however, it did not work. If I use the line dojo.parser.parse(div); nothing changes (rg_address in the example is the ID of a form element that is placed in the DOM). Here is the console.log of rg_address:

        <input type="text" dojotype="dijit.form.ValidationTextBox" required="1"
               invalidmessage="The first name of the recipient" value=""
               name="rg_address" id="rg_address" class="textbox"/>

    And here is the click handler:

        onClick='
            dojo.xhrGet({
                url: "/transfer/newrecipient/",
                handleAs: "text",
                timeout: 10000, // time in milliseconds
                // The LOAD function will be called on a successful response.
                load: function(response, ioArgs) {
                    $("#newRecipient").html(response);
                    $("#newPayMethod").html("");
                    $("#newPayDetail").html("");
                    var div = dojo.byId("rg_address");
                    console.log(div);
                    dojo.parser.parse(div);
                    return response;
                },
                // The ERROR function will be called in an error case.
                error: function(response, ioArgs) {
                    $("#newRecipient").html("Error loading registration form");
                    $("#newPayMethod").html("");
                    $("#newPayDetail").html("");
                    return response;
                }
            });'

    Thanks, Dural
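
    A possible direction, offered as a hedged sketch rather than a confirmed fix: dojo.parser.parse() instantiates the dojoType widgets found inside the node it is given, so parsing the replaced input node itself may do nothing. Assuming #newRecipient is the div that received the new markup, and Dojo 1.4+ (where dijit.findWidgets exists), the load handler could parse the container instead:

        // destroy any stale widget instances tied to the replaced DOM,
        // then parse the *container*, not the individual input node
        var container = dojo.byId("newRecipient");
        dojo.forEach(dijit.findWidgets(container), function (w) {
            w.destroyRecursive(false);
        });
        dojo.parser.parse(container); // instantiates dojoType widgets in the new HTML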


  • Microsoft Solver Foundation constraint

    - by emaster70
    Hello, I'm trying to use Microsoft Solver Foundation 2 to solve a fairly complicated situation, but I'm stuck with an UnsupportedModelException even when I dumb down the model as much as possible. Does anyone have an idea of what I'm doing wrong? The following is the smallest example required to reproduce the problematic behavior:

        var ctx = SolverContext.GetContext();
        var model = ctx.CreateModel();
        var someConstant = 1337.0;

        var decisionA = new Decision(Domain.Real, "decisionA");
        var decisionB = new Decision(Domain.Real, "decisionB");
        var decisionC = new Decision(Domain.Real, "decisionC");

        model.AddConstraint("ca", decisionA <= someConstant);
        model.AddConstraint("cb", decisionB <= someConstant);
        model.AddConstraint("cc", decisionC <= someConstant);
        model.AddConstraint("mainConstraint",
            Model.Equal(Model.Sum(decisionA, decisionB, decisionC), someConstant));
        model.AddGoal("myComplicatedGoal", GoalKind.Minimize, decisionC);

        var solution = ctx.Solve();
        solution.GetReport().WriteTo(Console.Out);
        Console.ReadKey();

    Please consider that my actual model, once complete, should include a few constraints of the form a*a + b*a <= someValue, so if what I'm ultimately trying to do isn't supported, please let me know in advance. If that's the case, I'd also appreciate a suggestion for some other solver with a .NET-friendly interface that I could use (only well-known commercial packages, please). Thanks in advance.
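
    One thing that stands out, offered as a guess rather than a verified diagnosis: the snippet never registers the decisions with the model, and Solver Foundation requires that before they are used in constraints or goals. A sketch of the missing line, placed right after the decisions are constructed:

        // hedged sketch: register the decisions before referencing them
        model.AddDecisions(decisionA, decisionB, decisionC);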


  • Plink SSH: '-m file' option not working

    - by Technext
    Hi, I am trying to use Plink to run commands on a remote server. Both the local and remote machines are Windows. Though I am able to connect to the remote machine using Plink, I am not able to use the '-m file' option. I tried the following three ways, to no avail.

    Try 1 (file.txt contains only one command, dir):

        plink.exe -ssh -pw mypwd gchhabra@machine -m file.txt
        Could not chdir to home directory /home/gchhabra: No such file or directory
        dir: not found

    Try 2:

        plink.exe -ssh -pw mypwd gchhabra@machine dir
        Could not chdir to home directory /home/gchhabra: No such file or directory
        dir: not found

    Try 3:

        plink.exe -ssh -pw mypwd gchhabra@machine < file.txt

    In this case, I get the following output:

        Using username "gchhabra".
        ****USAGE WARNING****
        This is a private computer system. This computer system, including all
        ..... including personal information, placed or sent over this system
        may be monitored. Use of this computer system, authorized or
        unauthorized, constitutes consent ... constitutes consent to monitoring
        for these purposes.
        dirCould not chdir to home directory /home/gchhabra: No such file or directory
        Microsoft Windows [Version x.x.xxx]
        (C) Copyright 1985-2003 Microsoft Corp.
        C:\Program Files\OpenSSH

    After I get the above prompt, it hangs. Can anyone please help me with this? Regards, Gaurav
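
    A hedged guess at the "dir: not found" output: it reads as if the command is being handed to a Unix-style shell by the Windows OpenSSH service, in which case cmd.exe built-ins such as dir never run. One cheap experiment (the flags are real Plink options, but the remote-shell behavior is an assumption about this particular server) is to invoke cmd.exe explicitly:

        plink.exe -ssh -batch -pw mypwd gchhabra@machine "cmd.exe /c dir"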


  • Does NHibernate LINQ support ToLower() in Where() clauses?

    - by Daniel T.
    I have an entity and its mapping:

        public class Test
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual string Description { get; set; }
        }

        public class TestMap : EntityMap<Test>
        {
            public TestMap()
            {
                Id(x => x.Id);
                Map(x => x.Name);
                Map(x => x.Description);
            }
        }

    I'm trying to run a query on it (to grab it out of the database):

        var keyword = "test";            // this is coming in from the user
        keyword = keyword.ToLower();     // convert it to all lower-case
        var results = session.Linq<Test>()
            .Where(x => x.Name.ToLower().Contains(keyword));
        results.Count();                 // execute the query

    However, whenever I run this query, I get the following exception:

        Index was out of range. Must be non-negative and less than the size of the collection.
        Parameter name: index

    Am I right in saying that, currently, Linq to NHibernate does not support ToLower()? And if so, is there an alternative compatible with Linq to NHibernate that lets me search for a string in the middle of another string? For example, if the user searches for kap, I need it to match Kapiolani, Makapuu, and Lapkap.
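
    One alternative that sidesteps the old Linq provider entirely, sketched under the assumption that pushing a case-insensitive substring match down to the database is acceptable (Restrictions and MatchMode live in NHibernate.Criterion):

        // hedged sketch: the Criteria API has a case-insensitive LIKE
        var results = session.CreateCriteria(typeof(Test))
            .Add(Restrictions.InsensitiveLike("Name", keyword, MatchMode.Anywhere))
            .List<Test>();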


  • Problem with GWT behind a reverse proxy - either nginx or apache

    - by Don Branson
    I'm having this problem with GWT when it's behind a reverse proxy. The backend app is deployed within a context - let's call it /context. The GWT app works fine when I hit it directly:

        http://host:8080/context/

    I can configure a reverse proxy in front of it. Here's my nginx example:

        upstream backend {
            server 127.0.0.1:8080;
        }
        ...
        location / {
            proxy_pass http://backend/context/;
        }

    But when I run through the reverse proxy, GWT gets confused, saying:

        2009-10-04 14:05:41.140:/:WARN: Login: ERROR: The serialization policy file '/C7F5ECA5E3C10B453290DE47D3BE0F0E.gwt.rpc' was not found; did you forget to include it in this deployment?
        2009-10-04 14:05:41.140:/:WARN: Login: WARNING: Failed to get the SerializationPolicy 'C7F5ECA5E3C10B453290DE47D3BE0F0E' for module 'https://hostname:444/'; a legacy, 1.3.3 compatible, serialization policy will be used. You may experience SerializationExceptions as a result.
        2009-10-04 14:05:41.292:/:WARN: StoryService: ERROR: The serialization policy file '/0445C2D48AEF2FB8CB70C4D4A7849D88.gwt.rpc' was not found; did you forget to include it in this deployment?
        2009-10-04 14:05:41.292:/:WARN: StoryService: WARNING: Failed to get the SerializationPolicy '0445C2D48AEF2FB8CB70C4D4A7849D88' for module 'https://hostname:444/'; a legacy, 1.3.3 compatible, serialization policy will be used. You may experience SerializationExceptions as a result.

    In other words, GWT isn't getting the word that it needs to prepend /context/ when looking for C7F5ECA5E3C10B453290DE47D3BE0F0E.gwt.rpc, but only when the request comes through the proxy. A workaround is to add the context to the URL for the web site:

        location /context/ {
            proxy_pass http://backend/context/;
        }

    but that means the context is now part of the URL that the user sees, and that's ugly. Anybody know how to make GWT happy in this case? Software versions: GWT 1.7.0 (same problem with 1.7.1); Jetty 6.1.21 (but the same problem existed under Tomcat); nginx 0.7.62 (same problem under Apache 2.x). I've looked at the traffic between the proxy and the backend using DonsProxy, but there's nothing noteworthy there.
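
    One commonly suggested direction, sketched without having verified it against this exact setup: the RPC servlet derives the policy-file path from the module base URL reported by the browser, which no longer contains /context once the proxy strips it. RemoteServiceServlet exposes a hook for exactly this (the hard-coded URL below is an assumption to adapt):

        // hedged sketch: re-point the module base URL at the real context
        // before the servlet resolves the .gwt.rpc policy file
        @Override
        protected SerializationPolicy doGetSerializationPolicy(
                HttpServletRequest request, String moduleBaseURL, String strongName) {
            String adjusted = "http://127.0.0.1:8080/context/"; // backend's own view of the module base
            return super.doGetSerializationPolicy(request, adjusted, strongName);
        }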


  • Why use INCLUDE in a SQL index

    - by StarLite
    I recently encountered an index in a database I maintain that was of the form:

        CREATE INDEX [IX_Foo] ON [Foo]
        (
            Id ASC
        )
        INCLUDE ( SubId )

    In this particular case, the performance problem I was encountering (a slow SELECT filtering on both Id and SubId) could be fixed by simply moving the SubId column into the index proper rather than leaving it as an included column. This got me thinking, however, that I don't understand the reasoning behind included columns at all when, generally, they could simply be part of the index itself. Even if I don't particularly care about the items being in the index itself, is there any downside to having a column in the index rather than merely included? After some research, I am aware that there are a number of restrictions on what can go into an indexed column (the maximum width of the index, and some column types that can't be indexed, like 'image'). In those cases I can see that you would be forced to include the column in the index page data. The only other thing I can think of is that if there are updates on SubId, the row will not need to be relocated if the column is included (though the value in the index would still need to be changed). Is there something else that I'm missing? I'm considering going through the other indexes in the database and shifting included columns into the index proper where possible. Would this be a mistake? I'm primarily interested in MS SQL Server, but information on other DB engines is welcome also.
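
    For concreteness, a sketch of the two shapes being compared (names taken from the question). In SQL Server, key columns are carried at every level of the index B-tree and count against the key limits (900 bytes and 16 columns in this generation), while INCLUDE columns live only in the leaf pages:

        -- SubId in the key: the index can seek and sort on (Id, SubId)
        CREATE INDEX IX_Foo_Key ON Foo (Id ASC, SubId ASC);

        -- SubId included: covers SELECT SubId ... WHERE Id = @id,
        -- but the index can only seek on Id
        CREATE INDEX IX_Foo_Incl ON Foo (Id ASC) INCLUDE (SubId);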


  • Is ASP.NET MVC really MVC? Or how to separate the model from the controller?

    - by Andrey
    Hi all, this question is a bit rhetorical. At some point I got the feeling that ASP.NET MVC is not that authentic an implementation of the MVC pattern - or I didn't understand it. Consider the following domain: an electric bulb, a switch, and a motion detector. They are connected together, and when you enter the room the motion detector switches on the bulb. If I want to represent them as MVC:

    - The switch is the model, because it holds the state and contains the logic.
    - The bulb is the view, because it presents the state of the model to a human.
    - The motion detector is the controller, because it converts user actions into generic model commands.

    The switch has one private field (On/Off) as its state and two methods (PressOn, PressOff). If you call PressOn when it is Off, it goes to On; if you call it again, the state doesn't change. The bulb can be replaced with a buzzer, the motion detector with a timer or a button, but the model still represents the same logic, and eventually the system will have the same behavior. This is how I understand classical MVC decomposition - please correct me if I am wrong. Now let's decompose it the ASP.NET MVC way:

    - The bulb is still the view.
    - The controller is the switch plus the motion detector.
    - The model is some object that just passes state to the bulb.

    So the logic that defines behavior moves to the controller. Question 1: Is my understanding of MVC and ASP.NET MVC correct? Question 2: If yes, do you agree that ASP.NET MVC is not a 100% accurate implementation? And back to real life: the final question is how to separate the model from the controller in ASP.NET MVC. There are two extremes. In one, the controller does basic stuff and calls the model to do all the logic. In the other, the controller does all the logic and the model is just something like a class with properties that is mapped to the DB. Question 3: Where should I draw the line between these extremes? How do I balance them? Thanks, Andrey
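
    To make the bulb/switch decomposition concrete, here is a small C# sketch (all names invented for illustration) of the "logic stays in the model" reading discussed above:

        // the model owns the state-transition logic ...
        public class LightSwitch
        {
            public bool IsOn { get; private set; }
            public void PressOn()  { IsOn = true; }
            public void PressOff() { IsOn = false; }
        }

        // ... while the controller only translates a request into a model command
        public class RoomController : Controller
        {
            private readonly LightSwitch _lightSwitch = new LightSwitch();

            public ActionResult MotionDetected()  // the "motion detector"
            {
                _lightSwitch.PressOn();           // generic model command
                return View(_lightSwitch);        // the "bulb" renders the model's state
            }
        }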


  • EOL Special Char not matching

    - by Aurélien Ribon
    Hello, I am trying to find every "a -> b, c, d" pattern in an input string. The pattern I am using is the following:

        "^[ \t]*(\\w+)[ \t]*->[ \t]*(\\w+)((?:,[ \t]*\\w+)*)$"

    This pattern is a C# pattern: "\t" refers to a tabulation (a single-escaped literal, interpreted by the .NET String API), and "\w" refers to the well-known predefined regex character class, double-escaped so it is seen as "\w" by the .NET String API and then as a word-character class by the .NET Regex API. The input is:

        a -> b
        b -> c
        c -> d

    The function is:

        private void ParseAndBuildGraph(String input)
        {
            MatchCollection mc = Regex.Matches(input,
                "^[ \t]*(\\w+)[ \t]*->[ \t]*(\\w+)((?:,[ \t]*\\w+)*)$",
                RegexOptions.Multiline);
            foreach (Match m in mc)
            {
                Debug.WriteLine(m.Value);
            }
        }

    The output is:

        c -> d

    Actually, there is a problem with the line-ending "$" special char. If I insert a "\r" before "$", it works, but I thought "$" would match any line termination (with the Multiline option), especially a \r\n in a Windows environment. Is that not the case?
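
    For what it's worth, in .NET the "$" anchor in Multiline mode matches only immediately before '\n' (and at the very end of the input), never before the '\r' of a Windows "\r\n" pair, so the optional carriage return has to be spelled out. A sketch of the adjusted call:

        // tolerate both \n and \r\n line endings
        MatchCollection mc = Regex.Matches(input,
            @"^[ \t]*(\w+)[ \t]*->[ \t]*(\w+)((?:,[ \t]*\w+)*)\r?$",
            RegexOptions.Multiline);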


  • Parsing multibyte string in PHP

    - by Petr Peller
    I would like to write an (HTML) parser based on a state machine, but I have doubts about how to actually read/use the input. I decided to load the whole input into one string, then work with it as with an array and hold its index as the current parsing position. There would be no problem with a single-byte encoding, but in a multi-byte encoding each value does not represent a character, but one byte of a character. Example:

        $mb_string = 'žščř'; // 4 multi-byte characters in UTF-8
        for ($i = 0; $i < 4; $i++) {
            echo $mb_string[$i], PHP_EOL;
        }

    This outputs four mangled bytes (fragments of just the first two characters), not four characters. That means I cannot iterate through the string in a loop to check single characters, because I never know whether I am in the middle of a character or not. So the questions are: How do I read a single character from a string in a multi-byte-safe, performance-friendly way? Is it a good idea to work with the string as if it were an array in this case? How would you read the input?
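
    A hedged sketch of two common approaches, assuming the mbstring and PCRE extensions are available; splitting once up front avoids re-scanning the string on every character access:

        // one-time split into an array of characters (UTF-8 aware),
        // after which plain array indexing is safe
        $chars = preg_split('//u', $mb_string, -1, PREG_SPLIT_NO_EMPTY);
        foreach ($chars as $ch) {
            echo $ch, PHP_EOL;
        }

        // or character-at-a-time (simpler, but mb_substr re-walks the
        // string on each call, so it is O(n^2) over the whole input)
        $len = mb_strlen($mb_string, 'UTF-8');
        for ($i = 0; $i < $len; $i++) {
            echo mb_substr($mb_string, $i, 1, 'UTF-8'), PHP_EOL;
        }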


  • Load external pages using jquery

    - by user1688011
    I'm trying to use jQuery to load external pages into the current one without reloading. Apparently everything works fine except for one little issue; I hope I'll be as clear as possible. When I call the page 'info.php', it is loaded into the #content div. That's what the script is supposed to do. The problem is that in the main page, which contains the script and the #content div, I already have some code that I want to be executed when someone visits the page, not called from an external page. That is actually the case, but when I click on one of the links in the menu, I can't go back to the initial content.

        <script>
        $(function() {
            $('#nav a').click(function() {
                var page = $(this).attr('href');
                $('#content').load(page + '.php');
                return false;
            });
        });
        </script>

        <ul id="nav">
            <li><a href="#">Page1</a></li>
            <li><a href="about">About</a></li>
            <li><a href="contact">Contact</a></li>
            <li><a href="info">Info</a></li>
        </ul>

        <div id="content">
            Here I have some code that I want to be attributed to "Page1"
        </div>

    Do you have any suggestions on how to fix this issue? Thanks
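
    One simple direction, sketched against the markup above: snapshot the initial #content markup before any load() call, and restore it when the "Page1" link (href="#") is clicked:

        <script>
        $(function() {
            var initial = $('#content').html(); // snapshot of the start page
            $('#nav a').click(function() {
                var page = $(this).attr('href');
                if (page === '#') {
                    $('#content').html(initial); // restore the original content
                } else {
                    $('#content').load(page + '.php');
                }
                return false;
            });
        });
        </script>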


  • SAP related testing

    - by mgj
    Dear all, one notion that has been prevalent, mostly as rumour, among many aspiring programmers is that the testing phase of the SDLC (Software Development Life Cycle) is not as challenging and interesting as development, and that a tester's job becomes monotonous after a while because a person does the same thing repeatedly, over and over again. That's why many prefer to look for a developer's job rather than a tester's. Don't testers have a space of their own in software companies in which to grow? Please feel free to express your views for or against this. How true is it? Could you please give examples of instances (they need not be practical; even theoretical ones would suffice) that actually contradict this statement with respect to a tester's career, specifically in the SAP domain? Examples from other domains are also welcome. This question is not meant to hurt the feelings of anyone who is in the testing domain. It's just that, in my case for example, I want to know what challenges a tester could face in real-life situations - something that would make their job also interesting and fun-filled. I myself am pro-testing and interested in pursuing testing as a profession in a software company; I'm just curious to know more about it. Thanks :)


  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into two parts: the Front End, which holds all the objects except tables, and the Back End, which holds the tables. The MSDN page links to the article "Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability", which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time any changes are made to the Front End, they need to be deployed to every user machine. My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption, etc.? Thank you.

    EDIT: Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about the dangers if that is not done - e.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case? This is a follow-up to my previous question, "From SQL Server to MS Access 2007".


  • Multithreaded IOCP Client Issue

    - by Carl
    I am writing a multithreaded client that uses an IO completion port. I create and connect the socket with the WSA_FLAG_OVERLAPPED attribute set:

        if ((m_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) == INVALID_SOCKET)
        {
            throw std::exception("Failed to create socket.");
        }

        if (WSAConnectByName(m_socket, L"server.com", L"80", &localAddressLength,
                reinterpret_cast<sockaddr*>(&localAddress), &remoteAddressLength,
                &remoteAddress, NULL, NULL) == FALSE)
        {
            throw std::exception("Failed to connect.");
        }

    I associate the IO completion port with the socket:

        if ((m_hIOCP = CreateIoCompletionPort(reinterpret_cast<HANDLE>(m_socket),
                m_hIOCP, NULL, 8)) == NULL)
        {
            throw std::exception("Failed to create IOCP object.");
        }

    All appears to go well until I try to send some data over the socket:

        SocketData* socketData = new SocketData;
        socketData->hEvent = 0;
        DWORD bytesSent = 0;

        if (WSASend(m_socket, socketData->SetBuffer(socketData->GenerateLoginRequestHeader()),
                1, &bytesSent, NULL, reinterpret_cast<OVERLAPPED*>(socketData),
                NULL) == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
        {
            throw std::exception("Failed to send data.");
        }

    Instead of returning SOCKET_ERROR with the last error set to WSA_IO_PENDING, WSASend returns immediately. I need the IO to pend and its completion to be handled in my thread function, which is also my worker thread:

        unsigned int __stdcall MyClass::WorkerThread(void* lpThis)
        {
        }

    I've done this before, but I don't know what is going wrong in this case. I'd greatly appreciate any help in fixing this problem.
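
    One point worth noting, stated with reasonable confidence about the Win32 API though without knowledge of the rest of this code base: an overlapped WSASend that can be satisfied immediately returns 0 rather than SOCKET_ERROR with WSA_IO_PENDING, and a completion packet is still queued to the associated IOCP. Both outcomes can therefore be funneled through the same worker-thread path; a sketch (wsaBuf stands in for the WSABUF that the SetBuffer call returns):

        // hedged sketch: treat immediate success and pending IO identically;
        // the completion packet reaches the IOCP in both cases
        int rc = WSASend(m_socket, &wsaBuf, 1, NULL /* unused with overlapped IO */,
                         0, reinterpret_cast<OVERLAPPED*>(socketData), NULL);
        if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
        {
            delete socketData; // the only path where the worker never sees it
            throw std::exception("WSASend failed.");
        }
        // rc == 0 (sent immediately) or pending: either way, do not touch
        // socketData here; GetQueuedCompletionStatus will hand it back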


  • Why don't file type filters work properly with nsIFilePicker on Mac OSX?

    - by Eric Strom
    I am running a chrome app in firefox (started with -app) with the following code to open a filepicker:

        var nsIFilePicker = Components.interfaces.nsIFilePicker;
        var fp = Components.classes["@mozilla.org/filepicker;1"]
                           .createInstance(nsIFilePicker);
        fp.init(window, "Select Files", nsIFilePicker.modeOpenMultiple);
        fp.appendFilter("video", "*.mov; *.mpg; *.mpeg; *.avi; *.flv; *.m4v; *.mp4");
        fp.appendFilter("all", "*.*");

        var res = fp.show();
        if (res == nsIFilePicker.returnCancel) return;

        var files = fp.files;
        var paths = [];
        while (files.hasMoreElements()) {
            var arg = files.getNext().QueryInterface(
                Components.interfaces.nsILocalFile
            ).path;
            paths.push(arg);
        }

    Everything seems to work fine on Windows, and the file picker itself works on OSX, but the dropdown menu to select between file types only displays in Windows. The first filter (video in this case) is in effect, but the dropdown to select the other type never shows. Is there something extra that is needed to get this working on OSX? I have tried the latest firefox (3.6) and an older one (3.0.13) and both don't show the file type dropdown on OSX.


  • Convert UCS-2 characters to UTF-8 Using C#

    - by quanticle
    I'm pulling some internationalized text from an MS SQL Server 2005 database. As per the defaults for that DB, the characters are stored as UCS-2. However, I need to output the data in UTF-8 format, as I'm sending it out over the web. Currently, I have the following code to convert:

        SqlString dbString = resultReader.GetSqlString(0);
        byte[] dbBytes = dbString.GetUnicodeBytes();
        byte[] utf8Bytes = System.Text.Encoding.Convert(System.Text.Encoding.Unicode,
            System.Text.Encoding.UTF8, dbBytes);
        System.Text.UTF8Encoding encoder = new System.Text.UTF8Encoding();
        string outputString = encoder.GetString(utf8Bytes);

    However, when I examine the output in the browser, it appears to be garbage, no matter what I set the encoding to. What am I missing?

    EDIT: In response to the answers below, the reason I thought I had to perform a conversion is that I can output literal multibyte strings just fine. For example:

        OutputControl.Text = "????????????????????????????????????????????????????????????????";

    works. Here, OutputControl is an ASP.NET Literal. However,

        OutputControl.Text = outputString; // output from the snippet above

    results in mangled output as described above. My hypothesis was that the database's output was somehow getting mangled by ASP.NET. If that's not the case, then what are some other possibilities?
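
    A hedged observation about the snippet itself: .NET strings are UTF-16 internally, so converting to UTF-8 bytes and decoding them straight back yields the original string; the byte-level encoding is normally chosen when the response is written, not during string handling. A sketch of the usual shape, assuming an ASP.NET Web Forms page:

        // keep the value as a string; let the response encoding
        // produce the UTF-8 bytes on the way out
        string outputString = resultReader.GetSqlString(0).Value;
        OutputControl.Text = outputString;
        Response.ContentEncoding = System.Text.Encoding.UTF8;
        // or globally: <globalization responseEncoding="utf-8" /> in web.config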


  • unable to place breakpoints in eclipse

    - by anjanb
    I am using Eclipse Europa (3.5) on Windows Vista Home Premium 64-bit, using JDK 1.6.0_18 (32-bit). Normally, I am able to put breakpoints just fine; however, for a particular class that is NOT part of the project (this class is inside a .JAR file, and the .JAR file is part of the project), although I have attached a source directory to this .JAR file, I am unable to place a breakpoint in this class. If I double-click on the breakpoint pane (left border), I notice that a class breakpoint is placed. I wondered if there was no debug info; however, I found that this particular class was compiled using the ant javac task with debug="true" and debuglevel="lines,vars,source". I even ran jad on this class to confirm that it indeed contains the debug info. So why is Eclipse preventing me from placing a breakpoint?

    EDIT: Just so everyone understands the context, this is a webapp running under Tomcat 6.0. I am remote-debugging the application from Eclipse, having started Tomcat outside. The application is working just fine. I am trying to understand the behavior of the above class, which I'm unable to do since Eclipse is not letting me set a BP.

    P.S.: I saw a few threads here talking about BPs not being hit, but in my case, I am unable to place the BP!

    P.P.S.: I tried JDK 1.6.0_16 before trying 1.6.0_18. Thanks for any pointers.


  • HTTP crawler in Erlang

    - by ctp
    I'm coding a simple HTTP crawler, but I have an issue running the code at the bottom. I'm requesting 50 URLs and getting the content of 20+ back. I've generated a few files, 150kB each, to test the crawler. So I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file is fetched? The test data server is online, so please try the code out - any hints are welcome :)

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/1]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            lists:foreach(fun do_spawn/1, Ids).

        do_spawn(Id) ->
            proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

        do_send_req(Id) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n",
                              [Id, Status, length(B)]);
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
            end.

    That's the output: first a "Requesting ID N ..." line for every ID from 1 to 50, then only these responses before the process quits:

        OK -- ID: 49 -- Status: "200" -- Content length: 150000
        OK -- ID: 47 -- Status: "200" -- Content length: 150000
        OK -- ID: 50 -- Status: "200" -- Content length: 150000
        OK -- ID: 17 -- Status: "200" -- Content length: 150000
        OK -- ID: 48 -- Status: "200" -- Content length: 150000
        OK -- ID: 45 -- Status: "200" -- Content length: 150000
        OK -- ID: 46 -- Status: "200" -- Content length: 150000
        OK -- ID: 10 -- Status: "200" -- Content length: 150000
        OK -- ID: 09 -- Status: "200" -- Content length: 150000
        OK -- ID: 19 -- Status: "200" -- Content length: 150000
        OK -- ID: 13 -- Status: "200" -- Content length: 150000
        OK -- ID: 21 -- Status: "200" -- Content length: 150000
        OK -- ID: 16 -- Status: "200" -- Content length: 150000
        OK -- ID: 27 -- Status: "200" -- Content length: 150000
        OK -- ID: 03 -- Status: "200" -- Content length: 150000
        OK -- ID: 23 -- Status: "200" -- Content length: 150000
        OK -- ID: 29 -- Status: "200" -- Content length: 150000
        OK -- ID: 14 -- Status: "200" -- Content length: 150000
        OK -- ID: 18 -- Status: "200" -- Content length: 150000
        OK -- ID: 01 -- Status: "200" -- Content length: 150000
        OK -- ID: 30 -- Status: "200" -- Content length: 150000
        OK -- ID: 40 -- Status: "200" -- Content length: 150000
        OK -- ID: 05 -- Status: "200" -- Content length: 150000

    Update: thanks stemm for the hint with the wait_workers. I've combined your code and mine, but the behaviour is the same :(

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/2]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            %% collect a reference to each worker
            Refs = [ do_spawn(Id) || Id <- Ids ],
            %% wait for a response from each worker
            wait_workers(Refs).

        wait_workers(Refs) ->
            lists:foreach(fun receive_by_ref/1, Refs).

        receive_by_ref(Ref) ->
            %% receive a message only from the worker with this reference
            receive
                {Ref, done} -> done
            end.

        do_spawn(Id) ->
            Ref = make_ref(),
            proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
            Ref.

        do_send_req(Id, {Pid, Ref}) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n",
                              [Id, Status, length(B)]),
                    %% send a message that the work is done
                    Pid ! {Ref, done};
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
                    %% repeat the request if there was an error while fetching a page,
                    do_send_req(Id, {Pid, Ref})
                    %% or - if you don't want to repeat the request, put there:
                    %% Pid ! {Ref, done}
            end.

    Running the crawler works fine for a handful of files, but then the code doesn't even fetch entire files (file size 150000 bytes each) - the crawler fetches some files only partially; see the following web server log :(

        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"

    Any hints are welcome. I have no clue what's going wrong there :(
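
    A hedged guess at the partial and missing fetches, based on ibrowse's documented behavior rather than a reproduction here: ibrowse caps concurrent connections per host (max_sessions) and pipelines requests over them, and requests beyond the queue limit fail fast (typically as {error, retry_later}). Raising the per-host limits before spawning the workers is a cheap experiment:

        %% sketch: allow up to 50 parallel sessions to the test host,
        %% one request per connection
        ibrowse:set_max_sessions("46.4.117.69", 80, 50),
        ibrowse:set_max_pipeline_size("46.4.117.69", 80, 1),

    The same knobs can reportedly be passed per request as options to ibrowse:send_req (e.g. {max_sessions, 50}); retrying in the worker on {error, retry_later} is the complementary fix.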


  • jQuery carousel in a div with display:none

    - by Fred Kafka
    I have to use on my site a responsive jQuery carousel with 4 displayed items that slide one at a time, etc. The point is: this carousel is placed in a div with display:none, and it appears on clicking a button with a slideToggle script (jQuery). Well, when the div appears, the carousel is not displayed. Nothing! Notice that if I remove the display:none, the carousel shows perfectly. I've tried a bunch of carousel plugins (bxslider, caroufredsel, elastislide, flexslider) and this issue happens with all of them. And then... I'm going crazy!! Sorry friends, here is the code.

    HTML (this is the FlexSlider case, but the code is similar for the other plugins):

        <div id="hiddenDiv">
            <div id="hiddenDivInner">
                <div class="flexslider">
                    <ul class="slides">
                        <li>...</li>
                        <li>...</li>
                        <li>...</li>
                    </ul>
                </div>
            </div>
        </div>

    CSS:

        #hiddenDiv {
            display: none;
            padding-bottom: 10px;
            background: url("../img/xxx.gif") repeat left bottom #FFFFFF;
        }

    SCRIPT (copy-paste from the site; this script sits inside $(document).ready together with other scripts; I already tried removing the load function):

        $(window).load(function() {
            $('.flexslider').flexslider({
                animation: "slide",
                animationLoop: false,
                itemWidth: 300,
                itemMargin: 5,
                minItems: 1,
                maxItems: 4
            });
        });

        $("#trigger").click(function () {
            $("#hiddenDiv").slideToggle(400, "easeInOutExpo");
        });

    I remind you that with this code and no display:none, every carousel works, even if I slide the div up and then down using the slideToggle button (#trigger).
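
    A common explanation, hedged: sliders measure element widths when they initialize, and inside a display:none subtree every measured width is 0. One sketch of a workaround is to defer initialization until the first time the div is actually visible - slideToggle accepts a completion callback:

        var sliderReady = false;
        $("#trigger").click(function () {
            $("#hiddenDiv").slideToggle(400, "easeInOutExpo", function () {
                // init once, the first time the container is really visible
                if (!sliderReady && $(this).is(":visible")) {
                    sliderReady = true;
                    $('.flexslider').flexslider({
                        animation: "slide",
                        animationLoop: false,
                        itemWidth: 300,
                        itemMargin: 5,
                        minItems: 1,
                        maxItems: 4
                    });
                }
            });
        });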


  • Add a double-tap action (presentModalViewController) to UIScrollView

    - by R.J.
    I have been wrestling with this issue for a while now and cannot seem to get the following touchesEnded method to execute within a UIScrollView. I have read on many of the forums that UIScrollView will take control of all touch events unless it is subclassed, but I cannot seem to get the code right (still new to the SDK). Basically I have a scroll view made up of several UIImageViews, and currently have scrolling with paging (much like the Photos app). I have been studying the scrolling-madness example without success. All I want is for a double tap anywhere in the UIScrollView to presentModalViewController back to my info page, i.e.:

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
        {
            NSSet *allTouches = [event allTouches];
            switch ([allTouches count]) {
                case 1: { // one-finger touch
                    UITouch *touch = [[allTouches allObjects] objectAtIndex:0];
                    if ([touch tapCount] == 2) {
                        InfoButtonViewController *scroll = [[InfoButtonViewController alloc]
                            initWithNibName:nil bundle:nil];
                        scroll.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
                        [self presentModalViewController:scroll animated:YES];
                        [scroll release];
                    }
                }
            }
        }

    Any code assistance would be greatly appreciated. The UIScrollView is implemented as follows (let me know if I need to provide additional details); MyViewController.h:

        @interface MyViewController : UIViewController {
        }
        @end

    Thank you in advance...
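
    A sketch of an alternative that avoids subclassing entirely, assuming the app can target iOS 3.2+ where gesture recognizers exist; the wiring would go wherever the scroll view is created (scrollView is a placeholder name):

        // attach a double-tap recognizer directly to the scroll view
        UITapGestureRecognizer *doubleTap = [[UITapGestureRecognizer alloc]
            initWithTarget:self action:@selector(handleDoubleTap:)];
        doubleTap.numberOfTapsRequired = 2;
        [scrollView addGestureRecognizer:doubleTap];
        [doubleTap release];

        // and the action, reusing the presentation code from the question:
        - (void)handleDoubleTap:(UITapGestureRecognizer *)recognizer {
            InfoButtonViewController *scroll = [[InfoButtonViewController alloc]
                initWithNibName:nil bundle:nil];
            scroll.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
            [self presentModalViewController:scroll animated:YES];
            [scroll release];
        }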


  • Most efficient way to check for DBNull and then assign to a variable?

    - by ilitirit
    This question comes up occasionally, but I haven't seen a satisfactory answer. A typical pattern is (row is a DataRow):

        if (row["value"] != DBNull.Value)
        {
            someObject.Member = row["value"];
        }

    My first question is which is more efficient (I've flipped the condition):

        row["value"] == DBNull.Value;             // or
        row["value"] is DBNull;                   // or
        row["value"].GetType() == typeof(DBNull)  // or... any suggestions?

    This indicates that .GetType() should be faster, but maybe the compiler knows a few tricks I don't? Second question: is it worth caching the value of row["value"], or does the compiler optimize the indexer away anyway? e.g.:

        object valueHolder;
        if (DBNull.Value == (valueHolder = row["value"])) {}

    Disclaimers: row["value"] exists; I don't know the column index of the column (hence the column-name lookup); and I'm asking specifically about checking for DBNull and then assigning (not about premature optimization, etc.).

    EDIT: I benchmarked a few scenarios (time in seconds, 10,000,000 trials):

        row["value"] == DBNull.Value:             00:00:01.5478995
        row["value"] is DBNull:                   00:00:01.6306578
        row["value"].GetType() == typeof(DBNull): 00:00:02.0138757

    Object.ReferenceEquals has the same performance as "==". The most interesting result? If you mismatch the case of the column name (e.g. "Value" instead of "value"), it takes roughly ten times longer (for a string):

        row["Value"] == DBNull.Value:             00:00:12.2792374

    The moral of the story seems to be that if you can't look up a column by its index, then ensure the column name you feed to the indexer matches the DataColumn's name exactly. Caching the value also appears to be nearly twice as fast:

        No caching:   00:00:03.0996622
        With caching: 00:00:01.5659920

    So the most efficient method seems to be:

        object temp;
        string variable;
        if (DBNull.Value != (temp = row["value"]))
        {
            variable = temp.ToString();
        }

    This was a good learning experience.
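
    A side note, confident about the API though not benchmarked here: on .NET 3.5+, the DataRowExtensions in System.Data.DataSetExtensions fold the DBNull check into a single call, which reads better even if it isn't the raw-speed winner:

        // Field<T> maps DBNull to null for reference and nullable value types
        string variable = row.Field<string>("value") ?? string.Empty;
        int? maybeNumber = row.Field<int?>("number"); // "number" is a hypothetical column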


  • F#, Linux and makefiles

    - by rwallace
    I intend to distribute an F# program as both binary and source so the user has the option of recompiling it if desired. On Windows, I understand how to do this: provide .fsproj and .sln files, which both Visual Studio and MSBuild can understand. On Linux, the traditional solution for C programs is a makefile. This depends on gcc being directly available, which it always is. The F# compiler can be installed on Linux and works under Mono, so that's fine so far. However, as far as I can tell, it doesn't create a scenario where fsc runs the compiler; instead the command is mono ...path.../fsc.exe. This is also fine, except I don't know what the path is going to be. So the full command to run the compiler in my case could be

        mono ~/FSharp-2.0.0.0/bin/fsc.exe types.fs tptp.fs main.fs -r FSharp.PowerPack.dll

    except that I'm not sure where fsc.exe will actually be located on the user's machine. Is there a way to find that out within a makefile, or would it be better to fall back on just explaining the above in the documentation and relying on the user to modify the command according to his setup?
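
    A sketch of the makefile-variable approach; the search locations and file names are assumptions, and the user can always override on the command line (make FSC_EXE=/opt/fsharp/fsc.exe):

        # locate fsc.exe at build time, falling back through likely spots
        FSC_EXE ?= $(firstword $(wildcard $(HOME)/FSharp-2.0.0.0/bin/fsc.exe /usr/local/lib/fsharp/fsc.exe))
        FSC = mono $(FSC_EXE)

        # note: the recipe line below must start with a tab
        tptp.exe: types.fs tptp.fs main.fs
        	$(FSC) types.fs tptp.fs main.fs -r FSharp.PowerPack.dll --out:tptp.exe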


  • Delphi: How to wait for a socket answer inside a procedure?

    - by Astronavigator
    For some specific needs I need to create, in a DLL, a procedure that waits for a socket request (or answer):

        TForm1 = class(TForm)
          ServerSocket1: TServerSocket;
          ......

        procedure MyWaitProc; stdcall;
        begin
          Go := false;
          while not Go do
          begin
            // Waiting...
            // Application.ProcessMessages; // works with this line
          end;
        end;

        procedure TForm1.ServerSocket1ClientRead(Sender: TObject; Socket: TCustomWinSocket);
        begin
          MessageBoxA(0, PAnsiChar('Received: ' + Socket.ReceiveText), '', MB_OK);
          Go := true;
        end;

        exports MyWaitProc;

    When I call Application.ProcessMessages everything works fine: the application waits for the request and then continues. But in my case calling Application.ProcessMessages leads to unblocking the main form of the host application (not the DLL's one). When I don't call Application.ProcessMessages the application just hangs, because it cannot handle the message... So, how do I create a procedure that waits for a socket answer? Maybe there is a way to wait for a socket answer without using Application.ProcessMessages?

    EDIT: I also tried to use TIdTCPServer, for the same reasons, with the same result:

        TForm1 = class(TForm)
          IdTCPServer1: TIdTCPServer;
          .....

        procedure MyWaitProc; stdcall;
        begin
          Go := false;
          while not Go do
          begin
            // Waiting ...
            // Application.ProcessMessages;
          end;
        end;

        procedure TForm1.IdTCPServer1Execute(AContext: TIdContext);
        var
          s: string;
        begin
          s := AContext.Connection.Socket.ReadString(1);
          AllText := AllText + s;
          Go := True;
        end;
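
    A hedged sketch of one direction: with the threaded TIdTCPServer (whose OnExecute runs in a worker thread, unlike the message-driven TServerSocket), the waiting procedure can block on a synchronization object instead of spinning or pumping the host's message queue. Names and the timeout are illustrative:

        uses SyncObjs;

        var
          GotData: TEvent; // created once, e.g. TEvent.Create(nil, True, False, '')

        procedure MyWaitProc; stdcall;
        begin
          GotData.ResetEvent;
          // blocks this thread without touching the host's message loop
          if GotData.WaitFor(30000) <> wrSignaled then
            raise Exception.Create('Timed out waiting for socket data');
        end;

        procedure TForm1.IdTCPServer1Execute(AContext: TIdContext);
        begin
          AllText := AllText + AContext.Connection.Socket.ReadString(1);
          GotData.SetEvent; // wakes MyWaitProc
        end;

    One caveat: if MyWaitProc is invoked on the very thread the host UI needs, blocking it will still freeze that UI; no waiting scheme avoids that.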


  • Why aren't Google API clients built on top of Apache's Abdera project?

    - by lisak
    Hey, could anybody please explain this to me? As far as I can see, the developers of Java's Google API client library are reinventing the wheel. It's like writing a new JDK for a Java project. I'm aware that the Google Data protocol is a little specific regarding Atom publishing, but if one needs some of the fancy extensions and features that the Apache Abdera project offers for this protocol, it is better not to use the Google API client library and instead to implement the client from scratch with Abdera... And I'm sure that in a lot of cases its features, such as Abdera's JCR adapter, would come in very handy for Google Docs, Google Translator Toolkit and others. Now, it's great that there is a Google API client library to be used for Google Docs, but what am I going to do with the documents? I believe that in more than half the cases there is also a repository or database on the other side. And in that case Abdera is needed, not the simple Google API clients that only marshal/unmarshal the feeds... In fact, there is something to persist in all of the Google APIs. It would make sense if Google decided to invest the effort into Abdera enhancement... This doesn't... Also, to make the question more specific: How are you developing Google API clients that need entry persistence (JCR, for instance)? What would be the best way to integrate a Google API client library with Apache Abdera?


  • I didn't say anything treasonous, quit putting words in my mouth

    - by You guys Lie
    Where at all did I say ANYTHING about organizing some kind of anti-government activity? Nowhere. I do not give a flying fuck about America in case you hadn't realized, I'm talking about what I'm going to do when Jesus Christ returns. Besides, why would I make it easy for them to throw me in a black site prison? Use common sense, sheeple. In the end I guess it doesn't matter anyways, since I do not recognize the government as my ultimate authority. Police/Army/Whatever-- funny outfits and a shiny badge don't make you better than me. Allah will do away with your kind. However, as long as they're with me (as they semi-currently are), we have no problems. I fear the government will have other plans to control the population when things start to further decline, and that is when they will run into problems with me and mine, and probably a large percentage of the public. You'd have to be a fucking fool to think we'd fall into anarchy immediately, given the vast resources and blind loyalty of this country. Saying there are practical limits to free speech for anything I have said in this thread is not only ignorant, it is unpatriotic. I should be being celebrated as the modern day Paul Revere for warning you people about our impending doom. You would do well to study the foundations of this country. Nothing I have said is treasonous at all. Besides, the DoD knows I'm harmless until shit pops off, they have bigger fish to fry right now, go forward. Keep in mind, I don't personally have to do anything to overthrow the government. I just said, I would not advocate for any paramilitary organizations at this time, only if/after we dissolve into (more) tyranny. I am not a terrorist, like some soldiers in Iraq. The machine is doing a great job of ruining itself, while I get to comfortably laugh at it on the nightly news knowing that I'm ready for it to shut down.


  • How can I choose different hints for different joins for a single table in a query hint?

    - by RenderIn
    Suppose I have the following query:

        select *
        from A, B, C, D
        where A.x = B.x
          and B.y = C.y
          and A.z = D.z

    I have indexes on A.x, B.x, B.y, C.y, and D.z. There is no index on A.z. How can I give this query an INDEX hint for the A.x join but a USE_HASH hint for the A.z join? It seems like hints only take the table name, not the specific join, so when a single table participates in multiple joins I can only specify a single strategy for all of them. Alternatively, suppose I'm using a LEADING or ORDERED hint on the above query. Both of these hints also only take a table name, so how can I ensure that the A.x = B.x join takes place before the A.z = D.z one? I realize in this case I could list D first, but imagine D subsequently joins to E and that the D-E join is the last one I want in the entire query. A third configuration: suppose I want the A.x join to be the first of the entire query, and the A.z join to be the last one. How can I use a hint to have a single join from A take place first, followed by the B-C join, with the A-D join last?
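
    For reference, a sketch of how this is often expressed in Oracle, offered as illustration rather than a verified plan: LEADING fixes the join order, and each join-method hint names the table being joined into the growing row source, so per-join methods fall out of the order:

        select /*+ LEADING(A B C D) USE_NL(B) INDEX(B) USE_HASH(D) */ *
        from A, B, C, D
        where A.x = B.x
          and B.y = C.y
          and A.z = D.z;

    Here USE_NL(B) with INDEX(B) asks for an indexed nested-loops join when B is joined first, and USE_HASH(D) asks for a hash join when D is joined last.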

