Search Results

Search found 15403 results on 617 pages for 'request querystring'.


  • problem in accessing the path involving pagination using display tag

    - by sarah
    Hi all, I am using the display tag for pagination and tabular display of data. The code is like this:

    ```jsp
    <display:column title="Select" style="width: 90px;">
      <input type="checkbox" name="optionSelected" value="<c:out value='${userList.loginName}'/>"/>
    </display:column>
    <display:column property="loginName" sortable="false" title="UserName"
                    paramId="loginName" style="width: 150px; text-align:center"
                    href="./editUser.do?method=editUser"/>
    ```

    The page size is one, so each page displays a single row, and the edit page is displayed on click of the login name. But when I go to the second page I get an error saying the path /views/editUser was not found. How exactly should I define the path? The struts-config is like this:

    ```xml
    <!-- action for edit user -->
    <action path="/editUser" name="editUserForm" type="com.actions.UserManagementAction"
            parameter="method" input="/EditUser.jsp" scope="request">
      <forward name="success" path="/views/EditUser.jsp" />
      <forward name="failure" path="/views/failure.jsp" />
    </action>
    ```

    Please tell me how I should define the access path for the display tag: on click of the edit link the first page works, but the second does not.
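
    A hedged guess at a fix (assuming the standard displaytag library): the `./` prefix makes the edit link relative to whatever URL the current page was rendered under, which changes once the action forwards to /views/EditUser.jsp. Building the href from the context path keeps it stable on every page:

    ```jsp
    <%-- Sketch: context-relative link instead of a page-relative "./" path --%>
    <display:column property="loginName" sortable="false" title="UserName"
                    paramId="loginName" style="width: 150px; text-align:center"
                    href="${pageContext.request.contextPath}/editUser.do?method=editUser"/>
    ```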


  • WCF and ASP.NET - Server.Execute throwing object reference not set to an instance of an object

    - by user208662
    Hello, I have an ASP.NET page that calls a WCF service. This WCF service uses a BackgroundWorker to asynchronously render an ASP.NET page on my server:

    ```csharp
    [OperationContract]
    [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest,
               RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    public void PostRequest(string comments)
    {
        // Do stuff

        // If everything went o.k., asynchronously render a page on the server.
        // I do not want to block the caller while this is occurring.
        BackgroundWorker myWorker = new BackgroundWorker();
        myWorker.DoWork += new DoWorkEventHandler(myWorker_DoWork);
        myWorker.RunWorkerAsync(HttpContext.Current);
    }

    private void myWorker_DoWork(object sender, DoWorkEventArgs e)
    {
        // Set the current context so we can render the page via Server.Execute
        HttpContext currentContext = (HttpContext)(e.Argument);
        HttpContext.Current = currentContext;

        // Retrieve the url to the page
        string applicationPath = currentContext.Request.ApplicationPath;
        string sourceUrl = applicationPath + "/log.aspx";
        string targetDirectory = currentContext.Server.MapPath("/logs/");

        // Execute the other page and load its contents
        using (StringWriter stringWriter = new StringWriter())
        {
            // Write the contents out to the target url
            // NOTE: THIS IS WHERE MY ERROR OCCURS
            currentContext.Server.Execute(sourceUrl, stringWriter);

            // Prepare to write out the result of the log
            string targetPath = targetDirectory + "/" + DateTime.Now.ToShortDateString() + ".aspx";
            using (StreamWriter streamWriter = new StreamWriter(targetPath, false))
            {
                // Write out the content to the file
                streamWriter.Write(stringWriter.ToString());
            }
        }
    }
    ```

    Oddly, when the currentContext.Server.Execute method is executed, it throws an "object reference not set to an instance of an object" error. This is strange because I can look at the currentContext properties in the watch window, and Server is not null. Because of this, I have no idea where this error is coming from. Can someone point me in the correct direction of what the cause could be? Thank you!
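
    A hedged reading of the failure: Server.Execute depends on internals of the original request (handler, response state) that are torn down as soon as PostRequest returns, so by the time the background thread runs, parts of the captured HttpContext are already disposed even though the object itself is still inspectable. One workaround sketch, assuming the page is reachable over HTTP and an absolute URL was captured while the request was still alive:

    ```csharp
    // Minimal sketch (assumed URL/paths): render the page with a fresh,
    // independent HTTP request on the background thread, so nothing
    // depends on the finished WCF request's HttpContext.
    using System;
    using System.IO;
    using System.Net;

    public static class LogPageRenderer
    {
        public static void RenderToFile(string sourceUrl, string targetDirectory)
        {
            using (var client = new WebClient())
            {
                string html = client.DownloadString(sourceUrl); // issues its own GET
                string targetPath = Path.Combine(
                    targetDirectory, DateTime.Now.ToString("yyyy-MM-dd") + ".aspx");
                File.WriteAllText(targetPath, html);
            }
        }
    }
    ```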


  • Speeding up a soap powered website

    - by ChrisRamakers
    Hi all, we're currently looking into doing some performance tweaking on a website which relies heavily on a SOAP webservice. But... our servers are located in Belgium and the webservice we connect to is located in San Francisco, so it's a long-distance connection to say the least. Our website is PHP powered, using PHP's built-in SoapClient class. On average a call to the webservice takes 0.7 seconds and we are doing about 3-5 requests per page. All possible request/response caching is already implemented, so we are now looking at other ways to improve the connection speed. This is the code which instantiates the SoapClient; what I'm looking for now is other ways/methods to improve speed on single requests. Anyone have ideas or suggestions?

    ```php
    private function _createClient()
    {
        try {
            $wsdl = sprintf($this->config->wsUrl.'?wsdl', $this->wsdl);
            $client = new SoapClient($wsdl, array(
                'soap_version'       => SOAP_1_1,
                'encoding'           => 'utf-8',
                'connection_timeout' => 5,
                'cache_wsdl'         => 1,
                'trace'              => 1,
                'features'           => SOAP_SINGLE_ELEMENT_ARRAYS
            ));
            $header_tags = array(
                'username' => new SOAPVar($this->config->wsUsername, XSD_STRING, null, null, null, $this->ns),
                'password' => new SOAPVar(md5($this->config->wsPassword), XSD_STRING, null, null, null, $this->ns)
            );
            $header_body = new SOAPVar($header_tags, SOAP_ENC_OBJECT);
            $header = new SOAPHeader($this->ns, 'AuthHeaderElement', $header_body);
            $client->__setSoapHeaders($header);
        } catch (SoapFault $e) {
            controller('Error')->error($id.': Webservice connection error '.$e->getCode());
            exit;
        }
        $this->client = $client;
        return $this->client;
    }
    ```
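
    A few hedged client-side options worth trying (all constants below are real ext/soap constants; 'keep_alive' requires PHP 5.4 or later, so it may not apply to older installs):

    ```php
    // Sketch of options that can shave time off each transatlantic call:
    $client = new SoapClient($wsdl, array(
        'soap_version'       => SOAP_1_1,
        'encoding'           => 'utf-8',
        'connection_timeout' => 5,
        'cache_wsdl'         => WSDL_CACHE_DISK,  // skip re-fetching the WSDL
        'trace'              => 0,                // tracing buffers every request/response
        'keep_alive'         => true,             // reuse the TCP connection across calls (PHP >= 5.4)
        'compression'        => SOAP_COMPRESSION_ACCEPT | SOAP_COMPRESSION_GZIP,
        'features'           => SOAP_SINGLE_ELEMENT_ARRAYS
    ));
    ```

    Reusing one client instance for all 3-5 calls on a page (rather than reconnecting) is the other cheap win, since connection setup is paid per TCP handshake over a ~150 ms round trip.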


  • HTTP crawler in Erlang

    - by ctp
    I'm coding a simple HTTP crawler, but I have an issue running the code at the bottom. I'm requesting 50 URLs and get the content of 20+ back. I've generated a few files, each 150kB in size, to test the crawler. So I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file is fetched? The test data server is online, so please try the code out; any hints are welcome :)

    ```erlang
    -module(crawler).
    -define(BASE_URL, "http://46.4.117.69/").
    -export([start/0, send_reqs/0, do_send_req/1]).

    start() ->
        ibrowse:start(),
        proc_lib:spawn(?MODULE, send_reqs, []).

    to_url(Id) ->
        ?BASE_URL ++ integer_to_list(Id).

    fetch_ids() ->
        lists:seq(1, 50).

    send_reqs() ->
        spawn_workers(fetch_ids()).

    spawn_workers(Ids) ->
        lists:foreach(fun do_spawn/1, Ids).

    do_spawn(Id) ->
        proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

    do_send_req(Id) ->
        io:format("Requesting ID ~p ... ~n", [Id]),
        Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
        case Result of
            {ok, Status, _H, B} ->
                io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]);
            Err ->
                io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
        end.
    ```

    That's the output:

    ```
    Requesting ID 1 ... Requesting ID 2 ... Requesting ID 3 ... Requesting ID 4 ... Requesting ID 5 ...
    Requesting ID 6 ... Requesting ID 7 ... Requesting ID 8 ... Requesting ID 9 ... Requesting ID 10 ...
    Requesting ID 11 ... Requesting ID 12 ... Requesting ID 13 ... Requesting ID 14 ... Requesting ID 15 ...
    Requesting ID 16 ... Requesting ID 17 ... Requesting ID 18 ... Requesting ID 19 ... Requesting ID 20 ...
    Requesting ID 21 ... Requesting ID 22 ... Requesting ID 23 ... Requesting ID 24 ... Requesting ID 25 ...
    Requesting ID 26 ... Requesting ID 27 ... Requesting ID 28 ... Requesting ID 29 ... Requesting ID 30 ...
    Requesting ID 31 ... Requesting ID 32 ... Requesting ID 33 ... Requesting ID 34 ... Requesting ID 35 ...
    Requesting ID 36 ... Requesting ID 37 ... Requesting ID 38 ... Requesting ID 39 ... Requesting ID 40 ...
    Requesting ID 41 ... Requesting ID 42 ... Requesting ID 43 ... Requesting ID 44 ... Requesting ID 45 ...
    Requesting ID 46 ... Requesting ID 47 ... Requesting ID 48 ... Requesting ID 49 ... Requesting ID 50 ...
    OK -- ID: 49 -- Status: "200" -- Content length: 150000
    OK -- ID: 47 -- Status: "200" -- Content length: 150000
    OK -- ID: 50 -- Status: "200" -- Content length: 150000
    OK -- ID: 17 -- Status: "200" -- Content length: 150000
    OK -- ID: 48 -- Status: "200" -- Content length: 150000
    OK -- ID: 45 -- Status: "200" -- Content length: 150000
    OK -- ID: 46 -- Status: "200" -- Content length: 150000
    OK -- ID: 10 -- Status: "200" -- Content length: 150000
    OK -- ID: 09 -- Status: "200" -- Content length: 150000
    OK -- ID: 19 -- Status: "200" -- Content length: 150000
    OK -- ID: 13 -- Status: "200" -- Content length: 150000
    OK -- ID: 21 -- Status: "200" -- Content length: 150000
    OK -- ID: 16 -- Status: "200" -- Content length: 150000
    OK -- ID: 27 -- Status: "200" -- Content length: 150000
    OK -- ID: 03 -- Status: "200" -- Content length: 150000
    OK -- ID: 23 -- Status: "200" -- Content length: 150000
    OK -- ID: 29 -- Status: "200" -- Content length: 150000
    OK -- ID: 14 -- Status: "200" -- Content length: 150000
    OK -- ID: 18 -- Status: "200" -- Content length: 150000
    OK -- ID: 01 -- Status: "200" -- Content length: 150000
    OK -- ID: 30 -- Status: "200" -- Content length: 150000
    OK -- ID: 40 -- Status: "200" -- Content length: 150000
    OK -- ID: 05 -- Status: "200" -- Content length: 150000
    ```

    Update: thanks stemm for the hint with the wait_workers. I've combined your code and mine, but same behaviour :(

    ```erlang
    -module(crawler).
    -define(BASE_URL, "http://46.4.117.69/").
    -export([start/0, send_reqs/0, do_send_req/2]).

    start() ->
        ibrowse:start(),
        proc_lib:spawn(?MODULE, send_reqs, []).

    to_url(Id) ->
        ?BASE_URL ++ integer_to_list(Id).

    fetch_ids() ->
        lists:seq(1, 50).

    send_reqs() ->
        spawn_workers(fetch_ids()).

    spawn_workers(Ids) ->
        %% collect reference to each worker
        Refs = [ do_spawn(Id) || Id <- Ids ],
        %% wait for response from each worker
        wait_workers(Refs).

    wait_workers(Refs) ->
        lists:foreach(fun receive_by_ref/1, Refs).

    receive_by_ref(Ref) ->
        %% receive message only from worker with specific reference
        receive
            {Ref, done} ->
                done
        end.

    do_spawn(Id) ->
        Ref = make_ref(),
        proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
        Ref.

    do_send_req(Id, {Pid, Ref}) ->
        io:format("Requesting ID ~p ... ~n", [Id]),
        Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
        case Result of
            {ok, Status, _H, B} ->
                io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]),
                %% send message that work is done
                Pid ! {Ref, done};
            Err ->
                io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
                %% repeat request if there was error while fetching a page,
                do_send_req(Id, {Pid, Ref})
                %% or - if you don't want to repeat request, put there:
                %% Pid ! {Ref, done}
        end.
    ```

    Running the crawler works fine for a handful of files, but then the code doesn't even fetch entire files (file size 150000 bytes each): the crawler fetches some files only partially, see the following web server log :(

    ```
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"
    ```

    Any hints are welcome. I have no clue what's going wrong there :(
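
    A hedged thought (assuming stock ibrowse settings): ibrowse caps concurrent connections per host at a small default, so 50 simultaneous workers queue behind a few sockets, and requests that sit in the queue can hit the 10-second timeout mid-transfer, which would look like partial fetches in the server log. Raising the per-host limit before spawning the workers is one cheap experiment:

    ```erlang
    %% Sketch: lift ibrowse's per-host session limit (host/port from ?BASE_URL)
    %% so the 50 workers are not serialized behind the default connection pool.
    send_reqs() ->
        ibrowse:set_max_sessions("46.4.117.69", 80, 50),
        spawn_workers(fetch_ids()).
    ```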


  • Pattern for limiting number of simultaneous asynchronous calls

    - by hitch
    I need to retrieve multiple objects from an external system. The external system supports multiple simultaneous requests (i.e. threads), but it is possible to flood the external system, therefore I want to be able to retrieve multiple objects asynchronously while throttling the number of simultaneous async requests. I.e., I need to retrieve 100 items, but don't want to be retrieving more than 25 of them at once. When each request of the 25 completes, I want to trigger another retrieval, and once they are all complete I want to return all of the results in the order they were requested (i.e. there is no point returning the results until the entire call is complete). Are there any recommended patterns for this sort of thing? Would something like this be appropriate (pseudocode, obviously)?

    ```csharp
    private List<externalSystemObjects> returnedObjects = new List<externalSystemObjects>();

    public List<externalSystemObjects> GetObjects(List<string> ids)
    {
        int callCount = 0;
        int maxCallCount = 25;
        WaitHandle[] handles;

        foreach(id in itemIds to get)
        {
            if(callCount < maxCallCount)
            {
                WaitHandle handle = executeCall(id, callback);
                addWaitHandleToWaitArray(handle);
            }
            else
            {
                int returnedCallId = WaitHandle.WaitAny(handles);
                removeReturnedCallFromWaitHandles(handles);
            }
        }

        WaitHandle.WaitAll(handles);
        return returnedObjects;
    }

    public void callback(object result)
    {
        returnedObjects.Add(result);
    }
    ```
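
    One pattern sketch in the same spirit (not the asker's API; FetchFromExternalSystem is a placeholder, and CountdownEvent assumes .NET 4): a semaphore caps concurrency at 25, results are stored by request index so order is preserved, and nothing is returned until the countdown reaches zero:

    ```csharp
    using System.Collections.Generic;
    using System.Threading;

    public static class ThrottledFetcher
    {
        public static List<string> GetObjects(List<string> ids, int maxConcurrency)
        {
            var results = new string[ids.Count];
            using (var gate = new Semaphore(maxConcurrency, maxConcurrency))
            using (var allDone = new CountdownEvent(ids.Count))
            {
                for (int i = 0; i < ids.Count; i++)
                {
                    int index = i; // per-iteration copy for the closure
                    ThreadPool.QueueUserWorkItem(_ =>
                    {
                        gate.WaitOne();   // blocks while 25 calls are in flight
                        try { results[index] = FetchFromExternalSystem(ids[index]); }
                        finally { gate.Release(); allDone.Signal(); }
                    });
                }
                allDone.Wait();           // no partial results are ever returned
            }
            return new List<string>(results);
        }

        // Placeholder for the real external-system call
        static string FetchFromExternalSystem(string id) { return id; }
    }
    ```

    Indexing into a pre-sized array also removes the need to lock the shared result list, since each worker writes its own slot.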


  • How can I display an ASP.NET MVC html part from one application in another

    - by Frank Sessions
    We have several ASP.NET MVC apps in the following setup:

        SecurityApp (root application - handles forms auth for SSO and has a profile edit page)
        Application1 (virtual directory)
        Application2 (virtual directory)
        Application3 (virtual directory)

    so that domain.com points to SecurityApp and domain.com/Application1 etc. point to their associated virtual directories. All of our Single Sign On (SSO) is working properly using forms authentication. Based on the user's permissions when logging in, a menu that lists their available applications and a logout link is generated and saved in the cache. This menu displays fine whenever the user is in the SecurityApp (editing their profile), but we cannot figure out how to get the applications in the virtual directories to display the same application menu. We have tried:

    1) Using JSONP to make a request that returns the HTML for the menu. The Ajax call returns the HTML; however, because User.IsAuthenticated is false, the menu comes back empty.

    2) Creating a user control and including it along with the DLLs for the SecurityApp project. This works; however, we don't want to have to include all the DLLs for the SecurityApp project (along with all the app settings in the web.config) in every application that we create.

    We would like this to be as simple as possible to implement, so that anyone creating a new app can add the menu to their application in as few steps as possible... Any ideas? To clarify: we are using ASP.NET MVC 1.0, since these apps are in production and we do not have the okay to move to ASP.NET MVC 2.0 (unfortunately).
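
    One option sketch (hypothetical helper and endpoint, not existing code): have SecurityApp expose the menu at a URL, and fetch it server-side from each app while forwarding the shared forms-auth cookie, so the menu renders for the authenticated user without referencing SecurityApp's DLLs:

    ```csharp
    // Assumes a /Menu/Render endpoint on SecurityApp and the default
    // forms-auth cookie shared across the virtual directories.
    using System.IO;
    using System.Net;
    using System.Web;
    using System.Web.Mvc;
    using System.Web.Security;

    public static class MenuHelper
    {
        public static string RenderSharedMenu(this HtmlHelper html)
        {
            HttpRequestBase request = html.ViewContext.HttpContext.Request;
            var menuRequest = (HttpWebRequest)WebRequest.Create("http://domain.com/Menu/Render");

            HttpCookie authCookie = request.Cookies[FormsAuthentication.FormsCookieName];
            if (authCookie != null)
            {
                // Forward the SSO ticket so User.IsAuthenticated holds on the other side
                menuRequest.CookieContainer = new CookieContainer();
                menuRequest.CookieContainer.Add(
                    new Cookie(authCookie.Name, authCookie.Value, "/", "domain.com"));
            }

            using (WebResponse response = menuRequest.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
                return reader.ReadToEnd();
        }
    }
    ```

    Each app would then only need a reference to this one helper and a single `Html.RenderSharedMenu()` call in its layout, with the menu HTML cached per user to avoid a request per page.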


  • Django: Serving Media Behind Custom URL

    - by TheLizardKing
    So I of course know that serving static files through Django will send you straight to hell, but I am confused on how to use a custom URL to mask the true location of the file using Django. I asked http://stackoverflow.com/questions/2681338/django-serving-a-download-in-a-generic-view, but the answer I accepted seems to be the "wrong" way of doing things.

    urls.py:

    ```python
    url(r'^song/(?P<song_id>\d+)/download/$', song_download, name='song_download'),
    ```

    views.py:

    ```python
    def song_download(request, song_id):
        song = Song.objects.get(id=song_id)
        fsock = open(os.path.join(song.path, song.filename))
        response = HttpResponse(fsock, mimetype='audio/mpeg')
        response['Content-Disposition'] = "attachment; filename=%s - %s.mp3" % (song.artist, song.title)
        return response
    ```

    This solution works perfectly, but not perfectly enough, it turns out. How can I avoid having a direct link to the mp3 while still serving through nginx/apache?

    EDIT 1 - ADDITIONAL INFO

    Currently I can get my files by using an address such as http://www.example.com/music/song/1692/download/ but the above-mentioned method is the devil's work. How can I accomplish what I get above while still making nginx/apache serve the media? Is this something that should be done at the webserver level? Some crazy mod_rewrite?

    http://static.example.com/music/Aphex%20Twin%20-%20Richard%20D.%20James%20(V0)/10%20Logon-Rock%20Witch.mp3
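
    A sketch of the usual pattern for this (assuming nginx; the location name is made up, and Song is the asker's model): Django authorizes the request, then sets X-Accel-Redirect so nginx streams the file from an internal-only location. The real path never appears in any URL the client sees.

    ```python
    # views.py sketch, assuming an nginx block like:
    #     location /protected_music/ { internal; alias /srv/media/music/; }
    # (Apache equivalent: mod_xsendfile and an X-Sendfile header.)
    from django.http import HttpResponse

    def song_download(request, song_id):
        song = Song.objects.get(id=song_id)
        response = HttpResponse(mimetype='audio/mpeg')
        response['Content-Disposition'] = "attachment; filename=%s - %s.mp3" % (song.artist, song.title)
        # nginx intercepts this header and serves the file itself; the client
        # only ever sees /song/<id>/download/.
        response['X-Accel-Redirect'] = '/protected_music/' + song.filename
        return response
    ```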


  • Images in database vs file system

    - by Jesse
    We have a project coming up where we will be building a whole backend CMS system that will power our entire extranet and intranet with one package. The question I have been trying to find an answer to is which is better: storing images in the database (SQL Server 2005), so we get integrity, a single replication plan, etc., or storing them on the file system? One issue we have is that we have multiple load-balanced servers that are required to have the same data at all times. As of now, SQL replication takes care of that, but file replication seems to be a little tougher. Another concern is that we would like to have multiple resolutions of the same image; we are not sure whether creating and storing each version on the file system would be best, or perhaps dynamically creating the requested resolution on demand. Our concerns are with the following:

    - Data integrity
    - Data replication
    - Multiple resolutions
    - Speed of database vs file system
    - Overhead load of database vs file system
    - Data management and backup

    Does anyone have a similar situation or any input on what would be recommended? Thanks in advance for the help!


  • How can I fetch Google static maps with TIdHTTP?

    - by cloudstrif3
    I'm trying to return content from maps.google.com from within Delphi 2006 using the TIdHTTP component. My code is as follows:

    ```delphi
    procedure TForm1.GetGoogleMap();
    var
      t_GetRequest: String;
      t_Source: TStringList;
      t_Stream: TMemoryStream;
    begin
      t_Source := TStringList.Create;
      try
        t_Stream := TMemoryStream.Create;
        try
          t_GetRequest :=
            'http://maps.google.com/maps/api/staticmap?' +
            'center=Brooklyn+Bridge,New+York,NY' +
            '&zoom=14' +
            '&size=512x512' +
            '&maptype=roadmap' +
            '&markers=color:blue|label:S|40.702147,-74.015794' +
            '&markers=color:green|label:G|40.711614,-74.012318' +
            '&markers=color:red|color:red|label:C|40.718217,-73.998284' +
            '&sensor=false';
          IdHTTP1.Post(t_GetRequest, t_Source, t_Stream);
          t_Stream.SaveToFile('google.html');
        finally
          t_Stream.Free;
        end;
      finally
        t_Source.Free;
      end;
    end;
    ```

    However, I keep getting the response HTTP/1.0 403 Forbidden. I assume this means that I don't have permission to make this request, but if I copy the URL into my web browser (IE 8) it works fine. Is there some header information that I need, or something else?
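
    A hedged observation suggested by the browser comparison: the browser issues a GET, while the code above POSTs a form to the same URL; the static maps endpoint rejecting the POST would explain the 403. A sketch of the GET variant:

    ```delphi
    // Sketch: fetch with GET and save the raw response bytes; the static
    // maps response is an image (PNG by default), not HTML.
    t_Stream := TMemoryStream.Create;
    try
      IdHTTP1.Get(t_GetRequest, t_Stream);
      t_Stream.SaveToFile('google.png');
    finally
      t_Stream.Free;
    end;
    ```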


  • Save a form in an XML file using Ajax and JSP

    - by novellino
    Hello, I want to create a simple form with a name and an email and save these data in an XML file. So far I found that using Ajax with jQuery is quite easy, so I used the usual code:

    ```javascript
    // dataString holds the values taken from the form
    var dataString = 'name=' + name + '&email=' + email;
    $.ajax({
        type: "POST",
        url: "users.xml",
        data: dataString,
        dataType: "xml",
        success: function() {
            ....
        }
    });
    ```

    If I understood well, in the url I should add the name of the XML file that will be created. When the user clicks a button, I call the function with the Ajax request, and then I should call somewhere a function for generating the XML. I am also using two beans: one is for setting the elements of the user, and the other is for saving the data in the XML. I am using the XStream library for the XML, although I don't know if it is the best solution. The problem now is that I cannot connect all these together in order to save the data in the XML. Does anyone know what I should do? Thanks a lot!
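
    One sketch of the missing link (hypothetical servlet and User bean; XStream's toXML(Object, Writer) is its real API): the Ajax url must point at server-side code that writes the file, not at users.xml itself.

    ```java
    import java.io.FileWriter;
    import java.io.IOException;
    import javax.servlet.http.*;
    import com.thoughtworks.xstream.XStream;

    // Sketch: a servlet mapped to e.g. /saveUser receives the POST and
    // uses XStream to serialize the bean to the XML file.
    public class SaveUserServlet extends HttpServlet {
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            User user = new User(req.getParameter("name"), req.getParameter("email"));
            XStream xstream = new XStream();
            xstream.alias("user", User.class);
            FileWriter out = new FileWriter(getServletContext().getRealPath("/users.xml"));
            try {
                xstream.toXML(user, out); // real code would merge with existing entries
            } finally {
                out.close();
            }
        }
    }
    ```

    The $.ajax call would then use url: "saveUser", and the servlet can answer with whatever confirmation the success callback needs.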


  • I like the way they Design/Architecture it but how do I implement this

    - by Rachel
    Summary: I have different components on the homepage and each component shows some promotion to the user. I have the cart as one component, and promotions are shown depending upon the content of the cart. I have to track the user's online activities and send that information to Omniture for report generation. Now, my components are loaded asynchronously; they are loaded when an AjaxRequest is fired, so there is no fixed pattern or information on when components will appear on the webpage. In order to pass information to Omniture I need to call a track function on $(document).(ready) and append information for each component (7 parameters are required by Omniture for each component). So in the init:config function of each component I am calling Omniture and passing parameters, but now the number of Omniture calls is directly proportional to the number of components on the webpage, and this is not acceptable, as each call to Omniture is very expensive. I am looking for a way to club the information about the 7 parameters for all components together and then make one call to Omniture to pass it. Note that I do not know when the components are loaded, so there is no pre-defined time or number of components that will be loaded. The thing is, I am calling the track function when the document is ready, but components are loaded after the call to Omniture has been made. So my question is:

    Q: How can I collect the information for all the components and then make just one call to Omniture to send it?

    As mentioned, I do not know when the components are loaded, as they are done on the Ajax request. Hope I am able to explain my challenge; I would appreciate it if someone could offer a design/architecture solution.
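
    A sketch of one design (illustrative only: track() and the 7-parameter shape are the asker's own, and QUIET_MS is an arbitrary value): components register their parameters in a shared buffer instead of calling Omniture directly, and a short quiet-period timer batches whatever has arrived into a single call.

    ```javascript
    var TrackingBuffer = (function () {
        var pending = [], timer = null, QUIET_MS = 500;

        function flush() {
            timer = null;
            if (pending.length) {
                track(pending);   // one Omniture call covering every component
                pending = [];
            }
        }

        return {
            register: function (componentParams) {
                pending.push(componentParams);
                if (timer) { clearTimeout(timer); }
                timer = setTimeout(flush, QUIET_MS);  // restart the quiet window
            }
        };
    })();
    ```

    Each component's init:config would call TrackingBuffer.register(...) as it loads; components arriving within the quiet window share one call, and late stragglers trigger at most one more.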


  • PHP Json Encoding w/ quote escaping in 5.2?

    - by NickAldwin
    I'm playing with the flickr API and PHP. I want to pass some information from PHP to Javascript through Ajax. I have the following code:

    ```php
    json_encode($pics);
    ```

    which results in the following example JSON string:

    ```json
    [{"id":"4363603591","title":"blue, white and red...another seattle view","date_faved":"1266379499"},{"id":"4004908219","title":"\u201cI just told you my dreams and you made me see that I could walk into the sun and I could still be me and now I can't deny nothing lasts forever.\u201d","date_faved":"1259987670"}]
    ```

    Javascript has problems with this, however, due to the unescaped single quote in the second item ("can't deny"). I want to use json_encode's options parameter to make it strip the quotes, but that's only available in PHP 5.3, and I'm running 5.2 (not my server). Is there a fast way to run through the entire array and escape everything before encoding it as JSON? I looked for a way to do this, but everything seems to deal with escaping as the data is generated, something I cannot do as I'm not the one generating the data. If it helps, I'm currently using the following javascript after the Ajax request:

    ```javascript
    var photos = eval('(' + resptxt + ')');
    ```
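
    A hedged note: an apostrophe is valid inside a JSON string, so json_encode's output above is well-formed as-is; what usually breaks is the eval() round-trip (for instance, if the response text is ever embedded into a single-quoted script string). Letting the Ajax layer parse the response sidesteps the escaping question entirely:

    ```javascript
    // Sketch: let jQuery parse the JSON; no eval(), no manual escaping.
    // 'photos.php' stands in for whatever endpoint returns json_encode($pics).
    $.ajax({
        url: 'photos.php',
        dataType: 'json',       // jQuery hands success() a parsed object
        success: function (photos) {
            // photos[1].title holds the apostrophe safely
        }
    });
    ```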


  • Why are changes to coffeescript files not being compiled when my Rails 3.2.0 app is in development mode?

    - by ben
    Normally, any changes I make to .js.coffee files in my Rails 3.2.0 app in development mode take effect when I refresh the page. All of a sudden, this is not happening. If I do rake assets:precompile, then the changes are shown, but then if I do rake assets:clean they go back to not being shown. What is causing this?

    Edit: Restarting the server makes the changes show. Why isn't this happening automatically as before?

    Edit: Here is my development.rb:

    ```ruby
    Myapp::Application.configure do
      # Settings specified here will take precedence over those in config/application.rb

      # In the development environment your application's code is reloaded on
      # every request. This slows down response time but is perfect for development
      # since you don't have to restart the web server when you make code changes.
      config.cache_classes = false

      # Log error messages when you accidentally call methods on nil.
      config.whiny_nils = true

      # Show full error reports and disable caching
      config.consider_all_requests_local = true
      config.action_controller.perform_caching = false

      # Don't care if the mailer can't send
      config.action_mailer.raise_delivery_errors = false

      # Print deprecation notices to the Rails logger
      config.active_support.deprecation = :log

      # Only use best-standards-support built into browsers
      config.action_dispatch.best_standards_support = :builtin

      # Raise exception on mass assignment protection for Active Record models
      config.active_record.mass_assignment_sanitizer = :strict

      # Log the query plan for queries taking more than this (works
      # with SQLite, MySQL, and PostgreSQL)
      config.active_record.auto_explain_threshold_in_seconds = 0.5

      # Do not compress assets
      config.assets.compress = false

      # Expands the lines which load the assets
      config.assets.debug = true

      config.action_mailer.default_url_options = { :host => 'localhost:3000' }
      config.log_level = :warn
    end
    ```


  • Dragging a copy of all selected elements from a select box--possible?

    - by Sean
    I have a picklist web interface: a pair of select elements with a pair of buttons (a left-pointing arrow and a right-pointing arrow) between them. Users can move items between the two columns by, e.g., selecting one of the options in the left column and clicking on the right-pointing arrow.

    Now I have an enhancement request: someone wants to be able to drag-and-drop items between the two columns instead of clicking a button. The problem with my initial two-select-box setup is that as soon as I click one of the highlighted options to initiate a drag, all of the other selected options are deselected. Using jQuery, I've attached mousedown event handlers to both the select boxes and each individual option that just call preventDefault() on the event object, but this isn't sufficient. On Firefox 3 the clicked-on option loses focus immediately but all other options are still deselected, and on IE6 (which I regrettably still have to support) it makes no difference at all.

    So I thought I could maybe create a reasonable facsimile of a select box using list elements or divs or something. I can create something reasonable-looking that works on Firefox, but on IE6, if I shift-click on an element of my pseudo-select object (in order to select a range of options), IE selects all of the text between where I clicked and the last place I clicked. Again, I've attached preventDefault-ing mousedown, mouseup, and click handlers to all of the elements involved, but it doesn't help.

    I've even tried overlaying transparent divs over both my original select boxes and my pseudo-select objects, thinking to intercept mouse clicks and manage the selections manually, but I can't make it work on IE. If I use a select box, I can't prevent clicks from changing the selection, and if I use text that just looks like a select element, I can't prevent it from selecting a range of text on a shift-click. Is there some general solution, or am I just out of luck?
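
    One hedged lead for the IE6 shift-click problem: the range highlighting is text selection, not option selection, and IE exposes a separate hook for it that can be cancelled without touching the mouse events:

    ```javascript
    // Sketch: in IE, text highlighting can be vetoed independently of click
    // handling; onselectstart fires before any text gets selected.
    // The .pseudo-select class name is illustrative.
    $('.pseudo-select').each(function () {
        this.onselectstart = function () { return false; };  // IE-specific hook
    });
    ```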


  • i18n redirection breaks my tests ....

    - by Mike
    I have a big application covered by more than a thousand tests via RSpec. We just made the choice to redirect any page like:

    ```
    /                ->  /en
    /foo             ->  /en/foo
    /foo/4/bar/34    ->  /fr/foo/4/bar/34
    ...
    ```

    So I made a before filter in application.rb like so:

    ```ruby
    if params[:locale].blank?
      headers["Status"] = "301 Moved Permanently"
      redirect_to request.env['REQUEST_URI'].sub!(%r(^(http.?://[^/]*)?(.*))) { "#{$1}/#{I18n.locale}#{$2}" }
    end
    ```

    It's working great, but... it's breaking a lot of my tests, e.g.:

    ```ruby
    it "should return 404" do
      Video.should_receive(:failed_encodings).and_return([])
      get :last_failed_encoding
      response.status.should == "404 Not Found"
    end
    ```

    To fix this test, I should do:

    ```ruby
    get :last_failed_encoding, :locale => "en"
    ```

    But... seriously, I don't want to fix all my tests one by one... I tried to make the locale a default parameter like this:

    ```ruby
    class ActionController::TestCase
      alias_method(:old_get, :get) unless method_defined?(:old_get)

      def get(path, parameters = {}, headers = nil)
        parameters.merge({:locale => "fr"}) if parameters[:locale].blank?
        old_get(path, parameters, headers)
      end
    end
    ```

    ... but I couldn't make this work... Any idea??
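
    For what it's worth, a hedged observation about the monkeypatch above: Hash#merge returns a new hash, and that return value is discarded, so the default locale never reaches old_get. A sketch of the in-place version:

    ```ruby
    class ActionController::TestCase
      alias_method(:old_get, :get) unless method_defined?(:old_get)

      def get(path, parameters = {}, headers = nil)
        parameters ||= {}
        parameters[:locale] ||= "en"   # mutate the hash that old_get will actually see
        old_get(path, parameters, headers)
      end
    end
    ```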


  • How to send a JSONObject to a REST service?

    - by Sebi
    Retrieving data from the REST server works well, but if I want to POST an object it doesn't work:

    ```java
    public static void postJSONObject(int store_type, FavoriteItem favorite, String token, String objectName) {
        String url = "";
        switch (store_type) {
            case STORE_PROJECT:
                url = URL_STORE_PROJECT_PART1 + token + URL_STORE_PROJECT_PART2;
                //data = favorite.getAsJSONObject();
                break;
        }

        HttpClient httpClient = new DefaultHttpClient();
        HttpPost postMethod = new HttpPost(url);

        try {
            HttpEntity entity = new StringEntity("{\"ID\":0,\"Name\":\"Mein Projekt10\"}");
            postMethod.setEntity(entity);
            HttpResponse response = httpClient.execute(postMethod);
            Log.i("JSONStore", "Post request, to URL: " + url);
            System.out.println("Status code: " + response.getStatusLine().getStatusCode());
        } catch (ClientProtocolException e) {
    ```

    I always get a 400 error code. Does anybody know what's wrong? I have working C# code, but I can't convert it:

    ```csharp
    System.Net.WebRequest wr = System.Net.HttpWebRequest.Create("http://localhost:51273/WSUser.svc/pak3omxtEuLrzHSUSbQP/project");
    wr.Method = "POST";
    string data = "{\"ID\":1,\"Name\":\"Mein Projekt\"}";
    byte[] d = UTF8Encoding.UTF8.GetBytes(data);
    wr.ContentLength = d.Length;
    wr.ContentType = "application/json";
    wr.GetRequestStream().Write(d, 0, d.Length);
    System.Net.WebResponse wresp = wr.GetResponse();
    System.IO.StreamReader sr = new System.IO.StreamReader(wresp.GetResponseStream());
    string line = sr.ReadToEnd();
    ```
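
    A hedged comparison of the two snippets: the working C# request sets Content-Type: application/json, while the HttpPost above sends none, and a WCF JSON endpoint commonly answers an untyped body with 400. A sketch of the equivalent in the Java code:

    ```java
    // Sketch: declare the body's content type, mirroring the C# request
    StringEntity entity = new StringEntity("{\"ID\":0,\"Name\":\"Mein Projekt10\"}", "UTF-8");
    entity.setContentType("application/json");
    postMethod.setEntity(entity);
    ```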


  • Events are not displayed on my fullcalendar

    - by ChangJiu
    Hi BalusC! I have used your method from above in my servlet (CalendarMap):

    ```java
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        Map<String, Object> map = new HashMap<String, Object>();
        map.put("id", 115);
        map.put("title", "changjiu");
        map.put("start", new SimpleDateFormat("yyyy-MM-15").format(new Date()));
        map.put("url", "http://yahoo.com/");

        // Convert to JSON string.
        String json = new Gson().toJson(map);

        // Write JSON string.
        response.setContentType("application/json");
        response.setCharacterEncoding("UTF-8");
        response.getWriter().write(json);
    }
    ```

    I want to display it in my fullcalendar as follows:

    ```javascript
    $(document).ready(function() {
        $('#calendar').fullCalendar({
            eventSources: [
                "CalendarMap"
            ]
        });
    });
    ```

    But it's not working! Can you help me? Thank you!
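
    A hedged guess at the problem: a FullCalendar event source is expected to return a JSON array of event objects, while the servlet above writes a single object. Wrapping it is a small change:

    ```java
    // Sketch: serialize a list, so the response is [{...}] rather than {...}
    List<Map<String, Object>> events = new ArrayList<Map<String, Object>>();
    events.add(map);
    String json = new Gson().toJson(events);
    ```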


  • Qt Socket blocking functions required to run in QThread where created. Any way past this?

    - by Alexander Kondratskiy
    The title is very cryptic, so here goes! I am writing a client that behaves in a very synchronous manner. Due to the design of the protocol and the server, everything has to happen sequentially (send request, wait for reply, service reply, etc.), so I am using blocking sockets. Here is where Qt comes in: in my application I have a GUI thread, a command processing thread and a scripting engine thread. I create the QTcpSocket in the command processing thread, as part of my Client class. The Client class has various methods that boil down to writing to the socket, reading back a specific number of bytes, and returning a result. The problem comes when I try to directly call Client methods from the scripting engine thread. The Qt sockets randomly time out, and when using a debug build of Qt, I get these warnings:

    ```
    QSocketNotifier: socket notifiers cannot be enabled from another thread
    QSocketNotifier: socket notifiers cannot be disabled from another thread
    ```

    Any time I call these methods from the command processing thread (where Client was created), I do not get these problems. To phrase the situation simply: calling blocking functions of QAbstractSocket, like waitForReadyRead(), from a thread other than the one where the socket was created (dynamically allocated) causes random behaviour and debug asserts/warnings. Anyone else experienced this? Ways around it? Thanks in advance.
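
    A sketch of the usual workaround (illustrative only; host, port and protocol are made up): keep every socket call in the thread that owns the socket, either by proxying cross-thread requests through queued signal/slot connections, or by creating the socket inside the thread that drives it:

    ```cpp
    #include <QThread>
    #include <QTcpSocket>

    // Sketch: the socket lives and dies in the thread that uses it, so its
    // notifiers are created there and blocking waits are safe.
    class ClientThread : public QThread {
    protected:
        void run() {
            QTcpSocket socket;                          // created in *this* thread
            socket.connectToHost("example.com", 4242);  // assumed host/port
            if (!socket.waitForConnected(5000))
                return;
            socket.write("request\n");
            if (socket.waitForReadyRead(5000)) {
                QByteArray reply = socket.readAll();
                // hand `reply` back to other threads via queued signals
            }
        }
    };
    ```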


  • VS2010 express beta2 - no add reference dialog, no open file/project dialogs

    - by David
    Just installed VS2010 Express for Windows Phone last night. The install went smoothly. It creates a project, compiles, and deploys the app to the emulator. Here's the problem: when I try to "Add Reference" through the Project menu, I do not get the Add Reference dialog box. Same thing if I right-click References in the Solution Explorer and click Add Reference. That's not all: "File...Open" and "File...Open Project" also fail to throw up an open file dialog box. When attempting any of these actions, the IDE quickly loses and regains focus. Even pressing a keyboard shortcut (Ctrl+O) causes the IDE to quickly lose and regain focus, but no open file dialog box appears. This is what I have tried, not particularly in this order:

    1. Turned off UAC.
    2. Monitored file and registry access using Process Monitor during a File...Open operation. File activity showed mostly "SUCCESS" with a few "FAST IO DISALLOWED" and a few "INVALID DEVICE REQUEST" results. Registry activity showed mostly "SUCCESS" with some "NAME NOT FOUND" and a few "BUFFER OVERFLOW" results.
    3. Created a new, clean Windows account to run the IDE from.
    4. Forced a test project to add a reference to "System.Xml.Linq" by editing the ".csproj" project file. The project then failed to load in the IDE.

    I don't have these problems at all on 2 other Windows 7 computers with VS2010 C# Express beta 2 installed. One machine is 32-bit and the other 64-bit, both Home Premium edition.

    My system: Windows 7 Home Premium, 64-bit. Other Visual Studio products installed: VS2008 C# Express, VS2008 C++ Express.

    One other thing to note: several months ago I installed the non-phone distribution of VS2010 C# Express beta 2, and I had the same exact problems. Back then I chalked it up to being beta and went back to VS2008 C# Express, where I do not have these issues.


  • Exception Handling in ASP.NET MVC and Ajax - [HandleException] filter

    - by Graham
    All, I'm learning MVC and using it for a business app (MVC 1.0). I'm really struggling to get my head around exception handling. I've spent a lot of time on the web but haven't found anything along the lines of what I'm after. We currently use a filter attribute that implements IExceptionFilter. We decorate a base controller class with this, so all server-side exceptions are nicely routed to an exception page that displays the error and performs logging. I've started to use Ajax calls that return JSON data, but when the server-side implementation throws an error, the filter is fired but the page does not redirect to the error page; it just stays on the page that called the Ajax method. Is there any way to force the redirect on the server (e.g. an ASP.NET Server.Transfer or redirect)? I've read that I must return a JSON object (wrapping the .NET exception) and then redirect on the client, but then I can't guarantee the client will redirect... and then (although I'm probably doing something wrong) the server attempts to redirect but gets an unauthorised exception (the base controller is secured, but the Exception controller is not, as it does not inherit from it). Has anybody got a simple example (.NET and jQuery code)? I feel like I'm randomly trying things in the hope it will work. Exception filter so far:

    ```csharp
    public class HandleExceptionAttribute : FilterAttribute, IExceptionFilter
    {
        #region IExceptionFilter Members

        public void OnException(ExceptionContext filterContext)
        {
            if (filterContext.ExceptionHandled)
            {
                return;
            }

            filterContext.Controller.TempData[CommonLookup.ExceptionObject] = filterContext.Exception;

            if (filterContext.HttpContext.Request.IsAjaxRequest())
            {
                filterContext.Result = AjaxException(filterContext.Exception.Message, filterContext);
            }
            else
            {
                // Redirect to global handler
                filterContext.Result = new RedirectToRouteResult(new RouteValueDictionary(
                    new { controller = AvailableControllers.Exception, action = AvailableActions.HandleException }));
                filterContext.ExceptionHandled = true;
                filterContext.HttpContext.Response.Clear();
            }
        }

        #endregion

        private JsonResult AjaxException(string message, ExceptionContext filterContext)
        {
            if (string.IsNullOrEmpty(message))
            {
                message = "Server error"; // TODO: Replace with better message
            }

            filterContext.HttpContext.Response.StatusCode = (int)HttpStatusCode.InternalServerError;
            filterContext.HttpContext.Response.TrySkipIisCustomErrors = true; // Needed for IIS 7.0

            return new JsonResult
            {
                Data = new { ErrorMessage = message },
                ContentEncoding = Encoding.UTF8,
            };
        }
    }
    ```
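
    One hedged clarification of why the server-side redirect cannot work here: for an XMLHttpRequest, anything the server returns (including a redirect or a Server.Transfer result) only changes the payload the script receives, never the page the user is looking at, so the navigation has to happen in client script. With the filter above already returning a 500 plus a JSON body, a single global handler covers every $.ajax call on the page:

    ```javascript
    // Sketch: the '/Exception/HandleException' route is assumed from the
    // filter's RedirectToRouteResult above.
    $(document).ajaxError(function (event, xhr) {
        if (xhr.status === 500) {
            window.location.href = '/Exception/HandleException';
        }
    });
    ```

    No per-call wiring is needed, which removes the "can't guarantee the client will redirect" worry for any page that includes this snippet.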


  • "Scheduling restart of crashed service", but no call to onStart() follows

    - by kostmo
    In the 1.6 API, is there a way to ensure that the onStart() method of a Service is called after the service is killed due to memory pressure? From the logs, it seems that the "process" that the service belongs to is restarted, but the service itself is not. I have placed a Log.d() call in the onStart() method, and this is not reached. To test my service under memory pressure, I spawn it from an activity, then launch the web browser and visit some Javascript-heavy websites like Slashdot until my service is killed. The logcat reads:

    ```
    03-07 16:44:13.778: INFO/ActivityManager(52): Process com.kostmo.charbuilder.full (pid 2909) has died.
    03-07 16:44:13.778: WARN/ActivityManager(52): Scheduling restart of crashed service com.kostmo.charbuilder.full/com.kostmo.charbuilder.DownloadImagesService in 5000ms
    03-07 16:44:13.778: INFO/ActivityManager(52): Low Memory: No more background processes.
    03-07 16:44:13.778: ERROR/ActivityThread(52): Failed to find provider info for android.server.checkin
    03-07 16:44:13.778: WARN/Checkin(52): Can't log event SYSTEM_SERVICE_LOOPING: java.lang.IllegalArgumentException: Unknown URL content://android.server.checkin/events
    03-07 16:44:18.908: INFO/ActivityManager(52): Start proc com.kostmo.charbuilder.full for service com.kostmo.charbuilder.full/com.kostmo.charbuilder.DownloadImagesService: pid=3560 uid=10027 gids={3003, 1015}
    03-07 16:44:19.868: DEBUG/ddm-heap(3560): Got feature list request
    03-07 16:44:20.128: INFO/ActivityThread(3560): Publishing provider com.kostmo.charbuilder.full.provider.character: com.kostmo.charbuilder.provider.ImageFileContentProvider
    ```
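
    A hedged note on the observed behaviour: when the system restarts a killed service, it re-creates it (so onCreate() runs), but on the older APIs it does not necessarily re-deliver the original Intent, in which case onStart() is skipped, which matches the log above. Putting restart-safe initialization in onCreate() is one way to cover both paths:

    ```java
    // Sketch: resumeDownloadsFromPersistedState() is a hypothetical helper
    // that re-reads whatever state the service saved before being killed.
    @Override
    public void onCreate() {
        super.onCreate();
        Log.d("DownloadImagesService", "onCreate: runs on first launch AND on restart");
        resumeDownloadsFromPersistedState();
    }
    ```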


  • Reading and writing XML over an SslStream

    - by Mark
    I want to read and write XML data over an SslStream. The data (both written and read) consists of objects serialized by an XmlSerializer. I have tried the following (some details left out for clarity!):

    ```csharp
    TcpClient tcpClient = new TcpClient(server, port);
    SslStream sslStream = new SslStream(tcpClient.GetStream(), true,
        new RemoteCertificateValidationCallback(ValidateServerCertificate), null);
    sslStream.AuthenticateAsClient(server);

    XmlReader xmlReader = XmlReader.Create(sslStream, readerSettings);
    XmlWriter xmlWriter = XmlWriter.Create(sslStream, writerSettings);

    myClass c = new myClass();
    XmlSerializer serializer = new XmlSerializer(typeof(myClass));
    serializer.Serialize(xmlWriter, c);

    myClass c2 = (myClass)serializer.Deserialize(xmlReader);
    ```

    First of all, it appears that writing to the stream succeeds, but the XmlSerializer throws an error because of invalid XML. It appears that the first character read from the stream is '00', a null character. (I have googled this problem and see a million people with the same 'problem', but no viable solution.) I can work around this 'issue' by using a StreamReader, reading everything into a string, and then using that string as input for another stream that the serializer can use. (Very dirty, but it works.)

    The second problem is that when I try to use the same SslStream to write a second request, I do not get a response from the reader. (The server DOES send a valid XML response though!) So serializer.Serialize(xmlWriter, c3) works, but reading from the stream yields no results. I have tried several different classes that implement Stream (StreamReader, XmlTextReader, etc.). Does anyone have an idea how reading and writing XML data to and from an SslStream is supposed to work? Thanks in advance!
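
    Two hedged observations: a socket stream like SslStream has no message boundaries, so an XmlReader can block waiting for an end-of-stream that never arrives on a connection that stays open, and whatever the peer happened to write last (including a stray byte) is simply the next byte in line. A common fix is to frame each XML message with a length prefix: serialize to a MemoryStream, send the length, then the payload; on the receiving side read exactly that many bytes and deserialize from them. A minimal sketch of the framing layer:

    ```csharp
    using System;
    using System.IO;

    static class Framing
    {
        // Serialize your object into a MemoryStream first, then:
        public static void WriteMessage(Stream stream, byte[] payload)
        {
            byte[] length = BitConverter.GetBytes(payload.Length);
            stream.Write(length, 0, 4);                 // 4-byte length prefix
            stream.Write(payload, 0, payload.Length);
            stream.Flush();
        }

        // Returns exactly one XML message; wrap the bytes in a MemoryStream
        // and hand that to XmlSerializer.Deserialize.
        public static byte[] ReadMessage(Stream stream)
        {
            byte[] lengthBytes = ReadExactly(stream, 4);
            int length = BitConverter.ToInt32(lengthBytes, 0);
            return ReadExactly(stream, length);
        }

        static byte[] ReadExactly(Stream stream, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0) throw new EndOfStreamException();
                offset += read;
            }
            return buffer;
        }
    }
    ```

    This also makes the second request/response work naturally, since each Deserialize consumes exactly one framed message instead of waiting on the open connection.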


  • C# Byte[] to Url Friendly String

    - by LorenVS
    Hello, I'm working on a quick captcha generator for a simple site I'm putting together, and I'm hoping to pass an encrypted key in the URL of the page. I could probably do this as a query string parameter easily enough, but I'm hoping not to (just because nothing else runs off the query string)... My encryption code produces a byte[], which is then transformed using Convert.ToBase64String(byte[]) into a string. This string, however, is still not quite URL friendly, as it can contain things like '/' and '='. Does anyone know of a better function in the .NET framework to convert a byte array to a URL-friendly string? I know all about System.Web.HttpUtility.UrlEncode() and its equivalents; however, they only work properly with query string parameters. If I URL-encode an '=' inside the path, my web server brings back a 400 Bad Request error. Anyway, not a critical issue, but I'm hoping someone can give me a nice solution.

    EDIT: Just to be absolutely sure exactly what I'm doing with the string, I figured I would supply a little more information. The byte[] that results from my encryption algorithm should be fed through some sort of algorithm to make it into a URL-friendly string. After this, it becomes the content of an XElement, which is then used as the source document for an XSLT transformation, and is used as part of the href attribute for an anchor. I don't believe the XSLT transformation is causing the issues, since what is coming through on the path appears to be an encoded query string parameter, but it causes the HTTP 400. I've also tried HttpUtility.UrlPathEncode() on a base64 string, but that doesn't seem to do the trick either (I still end up with '/'s in my URL).
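
    There is no single built-in framework call for this in .NET 2.0-3.5, but the widely used "base64url" variant is a few lines: swap the two path-hostile characters and drop the padding. A sketch:

    ```csharp
    using System;

    static class UrlSafeBase64
    {
        // '+' -> '-', '/' -> '_', and the trailing '=' padding is removed
        public static string Encode(byte[] bytes)
        {
            return Convert.ToBase64String(bytes)
                .Replace('+', '-')
                .Replace('/', '_')
                .TrimEnd('=');
        }

        public static byte[] Decode(string s)
        {
            string padded = s.Replace('-', '+').Replace('_', '/');
            switch (padded.Length % 4)     // restore the stripped '=' padding
            {
                case 2: padded += "=="; break;
                case 3: padded += "=";  break;
            }
            return Convert.FromBase64String(padded);
        }
    }
    ```

    The result contains only letters, digits, '-' and '_', all of which are safe in a URL path segment with no encoding needed.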


  • One Oracle schema supporting large request volume per day: is this safe?

    - by Hlex
    I'm a Java system designer. We have several large projects to deliver on a tight schedule; those projects are Java APIs without webpages. My design creates a general flow engine to support all projects. This idea uses one Oracle schema with a general transaction table, plus routing control tables. They are all nearly complete. But the DBA team is concerned that maintaining a very large request volume against one schema will be painful. One reason: if there is a problem in some table, they must take the tablespace offline to fix it. This is a problem because all projects would be affected. I have tried to convince them by splitting the data of each table into partitions by project_code and "month number to delete". Example partitions:

    ```
    PROJ1_05  PROJ1_06  PROJ1_07
    PROJ2_05  PROJ2_06  PROJ2_07
    ```

    All transactions would be stored in their own partition. So if there is a problem in any part of the tablespace, the DBA can take some partitions offline while other projects using the same table can still be serviced. The volume should be around 10M records per day. Is this a good idea? If I must use one schema, what strategy should I use? Do you have any comments?
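
    If it helps the discussion with the DBA team, the PROJx_MM layout maps onto Oracle's composite partitioning; a sketch with all names illustrative (each partition or subpartition can be placed in its own tablespace, which is what makes the selective-offline argument work):

    ```sql
    -- Sketch: range by month, list by project, so dropping a month or taking
    -- one project's segment offline leaves the rest of the table serviceable.
    CREATE TABLE general_transaction (
      project_code VARCHAR2(10) NOT NULL,
      created_at   DATE         NOT NULL,
      payload      VARCHAR2(4000)
    )
    PARTITION BY RANGE (created_at)
    SUBPARTITION BY LIST (project_code)
    SUBPARTITION TEMPLATE (
      SUBPARTITION proj1 VALUES ('PROJ1'),
      SUBPARTITION proj2 VALUES ('PROJ2')
    )
    (
      PARTITION m05 VALUES LESS THAN (TO_DATE('2010-06-01', 'YYYY-MM-DD')),
      PARTITION m06 VALUES LESS THAN (TO_DATE('2010-07-01', 'YYYY-MM-DD'))
    );
    ```

    The month-number-to-delete requirement then becomes a metadata-only `ALTER TABLE ... DROP PARTITION` instead of a bulk DELETE.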


  • How to persist objects between requests in PHP

    - by SztupY
    I've used Rails, Merb, Django and ASP.NET MVC applications in the past. What they have in common (that is relevant to the question) is that they have code that sets up the framework. This usually means creating objects and state that is persisted until the web server is recycled (like setting up routing, or checking which controllers are available, etc.). As far as I know, PHP is more like a CGI script that gets compiled to some bytecode each time it's run, and after the request it's discarded. Of course you can have sessions to persist data between requests from the same user, and as I see there are extensions like APC with which you can persist objects between requests at the server level. My question is: how can one create a PHP application that works like Rails and such? I mean an application that on the first request sets up the framework, then on the second and later requests uses the objects that are already set up. Is there some built-in caching facility in mod_php (for example, one that stores the compiled bytecode of executed PHP applications)? Or is using APC or some similar extension the only way to solve this problem? How would you do it? Thanks.
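
    As a sketch of the APC route the question mentions (apc_fetch/apc_store are the real APC functions; build_routing_table is a hypothetical expensive setup step): stock mod_php has no built-in bytecode cache, but APC adds exactly that, plus a server-level data cache for the framework's setup objects:

    ```php
    // Sketch: compute the expensive setup once per cache lifetime and share
    // it with later requests served by the same PHP process pool.
    $routes = apc_fetch('app.routes', $hit);
    if (!$hit) {
        $routes = build_routing_table();
        apc_store('app.routes', $routes, 300);  // cached for 5 minutes
    }
    ```

    Stored values are serialized, so this fits configuration and routing tables well, while live resources (connections, file handles) still have to be rebuilt per request.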

