Search Results

Search found 15040 results on 602 pages for 'request servervariables'.


  • Using DotNetOpenAuth AccessToken for uploading docx file to google

    - by PrashantC
    Hi, I am using the DotNetOpenAuth package and I am trying to upload a .docx file to Google Docs. Using client credentials I am able to do it successfully with the following code:

        DocumentEntry objDocumentEntry = new DocumentEntry();
        objDocumentsService.setUserCredentials(strUserName, strPassWord);
        string strAuthenticationToken = objDocumentsService.QueryAuthenticationToken();
        objDocumentEntry = objDocumentsService.UploadDocument(Server.MapPath("test.docx"), "New Name");

    I want to achieve the same with plain OAuth, and I have the following code written for it:

        if (this.TokenManager != null)
        {
            if (!IsPostBack)
            {
                var google = new WebConsumer(GoogleConsumer.ServiceDescription, this.TokenManager);

                // Is Google calling back with authorization?
                var accessTokenResponse = google.ProcessUserAuthorization();
                if (accessTokenResponse != null)
                {
                    this.AccessToken = accessTokenResponse.AccessToken;
                }
                else if (this.AccessToken == null)
                {
                    // If we don't yet have access, immediately request it.
                    GoogleConsumer.RequestAuthorization(google, GoogleConsumer.Applications.DocumentsList);
                }
            }
        }

    I successfully get the AccessToken, but I am not sure how to use it. Do I need to exchange this token? What exactly do I do with it? Is it a session token? Please provide some input; I have been badly stuck on this problem for the last 3 days. Prashant C

    Read the article

  • Why does Android allocate more memory than needed when loading images

    - by Simon
    Folks, I don't think this is a duplicate, and it is NOT one of those "how do I avoid OOMs" questions. This is a genuine quest for knowledge, so hold off on those downvotes please... Imagine I have a JPEG of 500x500 pixels. I load it as ARGB_8888, which is as "bad as it gets". I would expect Android to allocate 500x500x4 bytes = a little under 1MB; however, look at a heap dump and you will see that Android allocates significantly more, often by a factor of 5-10. You frequently see questions on here about OOMs where the stack trace shows a heap request of, say, 15MB, and it is ALWAYS much larger than is required simply to hold the bytes of the image. The OP usually catches some downvotes, then is bombarded with stock answers and comments about using less memory (thanks Romain!) and about scaling. I think there is more here than meets the eye. Anybody know why this is? If there is no apparent answer, I will put together an SSCCE if it helps. PS. I assume that JPEG vs PNG etc. is irrelevant, since we're talking about the memory usage of the backing bitmap, which is simply x times y times BPP - or am I being slow?

    Read the article

  • How can I determine PerlLogHandler performance impact?

    - by Timmy
    I want to create a custom Apache2 log handler, and the template that is found on the Apache site is:

        #file:MyApache2/LogPerUser.pm
        #---------------------------
        package MyApache2::LogPerUser;

        use strict;
        use warnings;
        use Apache2::RequestRec ();
        use Apache2::Connection ();
        use Fcntl qw(:flock);
        use File::Spec::Functions qw(catfile);
        use Apache2::Const -compile => qw(OK DECLINED);

        sub handler {
            my $r = shift;

            my ($username) = $r->uri =~ m|^/~([^/]+)|;
            return Apache2::Const::DECLINED unless defined $username;

            my $entry = sprintf qq(%s [%s] "%s" %d %d\n),
                $r->connection->remote_ip, scalar(localtime),
                $r->uri, $r->status, $r->bytes_sent;

            my $log_path = catfile Apache2::ServerUtil::server_root, "logs", "$username.log";
            open my $fh, ">>$log_path" or die "can't open $log_path: $!";
            flock $fh, LOCK_EX;
            print $fh $entry;
            close $fh;

            return Apache2::Const::OK;
        }
        1;

    What is the performance cost of the flocks? Is this logging process done in parallel, or in serial with the HTTP request? In parallel the performance would not matter as much, but I wouldn't want the user to wait another split second for something like this.

    Read the article

  • What is the easiest way to add compression to WCF in Silverlight?

    - by caryden
    I have a Silverlight 2 Beta 2 application that accesses a WCF web service. Because of this, it currently can only use basicHttpBinding. The web service will return fairly large amounts of XML data. This seems fairly wasteful from a bandwidth standpoint, as the response, if zipped, would be smaller by a factor of 5 (I actually pasted the response into a txt file and zipped it). The request does have the "Accept-Encoding: gzip, deflate" header. Is there any way to have the WCF service gzip (or otherwise compress) the response? I did find this link, but it sure seems a bit complex for functionality that should be handled out of the box, IMHO.

    OK: at first I marked the solution using System.IO.Compression as the answer, as I could never seem to get IIS7 dynamic compression to work. Well, as it turns out, dynamic compression on IIS7 was working all along. It is just that Nikhil's Web Development Helper plugin for IE did not show it working. My guess is that since Silverlight hands the web service call off to the browser, the browser handles it "under the covers" and Nikhil's tool never sees the compressed response. I was able to confirm this by using Fiddler, which monitors traffic outside the browser application. In Fiddler, the response was, in fact, gzip compressed! The other problem with the System.IO.Compression solution is that System.IO.Compression does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable WCF compression in Silverlight is to enable dynamic compression in IIS7 and write no code at all.

    Read the article

  • Ajax: Add new <div> from JSON with jQuery

    - by Francesc
    Hi, on a page I have this HTML code:

        <div id="content">
          <div class="container">
            <div id="author">@Francesc</div>
            <div id="message">Hey World!</div>
            <div id="time">13/06/2010 11:53 GMT</div>
          </div>
          <div class="container">
            <div id="author">@SomeOtherUser</div>
            <div id="message">Bye World!</div>
            <div id="time">13/06/2010 14:53 GMT</div>
          </div>
          <div class="container">
            <div id="author">@Me</div>
            <div id="message">Hey World!</div>
            <div id="time">13/06/2010 18:53 GMT</div>
          </div>
        </div>

    I want to ask how to get a JSON file from the server that has more recent messages and put them at the top, above the first <div class="container">. Another question: is it possible to pass the time of the last update with GET when submitting the request to the server? How can I do it? Thanks.

    Read the article

  • How to send XML and other post parameters via cURL in PHP

    - by tomaszs
    Hello. I've used the code below to send XML to my REST API. $xml_string_data contains proper XML, and it is passed correctly to mypi.php:

        //set POST variables
        $url = 'http://www.server.cu/mypi.php';
        $fields = array(
            'data' => urlencode($xml_string_data)
        );

        //url-ify the data for the POST
        $fields_string = "";
        foreach ($fields as $key => $value) {
            $fields_string .= $key . '=' . $value . '&';
        }
        rtrim($fields_string, '&');
        echo $fields_string;

        //open connection
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_POST, count($fields));
        curl_setopt($ch, CURLOPT_POSTFIELDS, $fields_string);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array("Expect: "));

        //execute post
        $result = @curl_exec($ch);

    But when I added another field:

        $fields = array(
            'method' => "methodGoPay",
            'data'   => urlencode($xml_string_data)
        );

    it stopped working. On the mypi.php side I don't receive any POST parameters at all! Could you please tell me what to do to send XML and other POST parameters in one cURL request? Please don't suggest using any libraries; I want to accomplish this in plain PHP.

    Read the article

  • Doing without partial commits the "Mercurial way"

    - by David Moles
    Subversion shop considering switching to Mercurial, trying to figure out in advance what all the complaints from developers are going to be. There's one fairly common use case here that I can't see how to handle. I'm working on some largish feature, and I have a significant part of the code -- or possibly several significant parts of the code -- in pieces all over the garage floor, totally unsuitable for checkin, maybe not even compiling. An urgent bugfix request comes in. The fix is nice and local and doesn't touch any of the code I've been working on. I make the fix in my working copy. Now what? I've looked at "Mercurial cherry picking changes for commit" and "best practices in mercurial: branch vs. clone, and partial merges?" and all the suggestions seem to be extensions of varying complexity, from Record and Shelve to Queues. The fact that there apparently isn't any core functionality for this makes me suspect that in some sense this working style is Doing It Wrong. What would a Mercurial-like solution to this use case look like?

    Read the article

  • Gdata JavaScript Authsub continues redirect

    - by Krustal
    I am using the JavaScript Google Data API and having issues getting the AuthSub script to work correctly. This is my script currently:

        google.load('gdata', '1');

        function getCookie(c_name) {
            if (document.cookie.length > 0) {
                c_start = document.cookie.indexOf(c_name + "=");
                if (c_start != -1) {
                    c_start = c_start + c_name.length + 1;
                    c_end = document.cookie.indexOf(";", c_start);
                    if (c_end == -1) c_end = document.cookie.length;
                    return unescape(document.cookie.substring(c_start, c_end));
                }
            }
            return "";
        }

        function main() {
            var scope = 'http://www.google.com/calendar/feeds/';
            if (!google.accounts.user.checkLogin(scope)) {
                google.accounts.user.login();
            } else {
                /* Retrieve all calendars */
                // Create the calendar service object
                var calendarService = new google.gdata.calendar.CalendarService('GoogleInc-jsguide-1.0');

                // The default "allcalendars" feed is used to retrieve a list of all
                // calendars (primary, secondary and subscribed) of the logged-in user
                var feedUri = 'http://www.google.com/calendar/feeds/default/allcalendars/full';

                // The callback method that will be called when getAllCalendarsFeed() returns feed data
                var callback = function(result) {
                    // Obtain the array of CalendarEntry
                    var entries = result.feed.entry;
                    //for (var i = 0; i < entries.length; i++) {
                    var calendarEntry = entries[0];
                    var calendarTitle = calendarEntry.getTitle().getText();
                    alert('Calendar title = ' + calendarTitle);
                    //}
                }

                // Error handler to be invoked when getAllCalendarsFeed() produces an error
                var handleError = function(error) {
                    alert(error);
                }

                // Submit the request using the calendar service object
                calendarService.getAllCalendarsFeed(feedUri, callback, handleError);
            }
        }

        google.setOnLoadCallback(main);

    However, when I run this the page redirects me to the authentication page. After I authenticate it sends me back to my page and then quickly sends me back to the authentication page again. I've included alerts to check whether the token is being set, and it doesn't seem to be working. Has anyone had this problem?

    Read the article

  • How to keep confirmation messages after POST while doing a post-submit redirect?

    - by MicE
    Hello, I'm looking for advice on how to share certain bits of data (i.e. post-submit confirmation messages) between individual requests in a web application. Let me explain.

    Current approach:
      - The user submits an add/edit form for a resource.
      - If there were no errors, the user is shown a confirmation with links to: submit a new resource (for the "add" form), view the submitted/edited resource, or view all resources (one step up in the hierarchy).
      - The user then has to click one of the three links to proceed (i.e. to the page "above").

    Programmatically, the form and its confirmation page are one set of classes. The page above that is another. They can technically share code, but at the moment they are both independent during processing of individual requests.

    We would like to amend the above as follows:
      - The user submits an add/edit form for a resource.
      - If there were no errors, the user is redirected to the page with all resources (one step up in the hierarchy), with one or more confirmation messages displayed at the top of the page (i.e. a success message, to whom the request was assigned, etc.).

    This will save users one click (they have to go through a lot of these add/edit forms), and the post-submit redirect will address common problems with browser refresh / back buttons.

    What approach would you recommend for sharing the data needed for the confirmation messages between the two requests? I'm not sure if it helps: it's a PHP application backed by a RESTful API, but I think this is a language-agnostic question. A few simple solutions that come to mind are to share the data via cookies or in the session; this however breaks statelessness and would pose a significant problem for users who work in several tabs (the data could clash). Passing the data as GET parameters is not suitable, as we are talking about several messages which are dynamic (e.g. changing actors, dates). Thanks, M.
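    A sketch of one possible approach (not something proposed in the question itself) is a variation on "flash" messages that avoids the shared session: the POST handler stores the messages server-side under a random one-time token, puts only that token in the redirect URL, and the listing page pops the messages for that token. Because the token travels in the URL, parallel tabs cannot clash. A minimal Python sketch; the in-memory dict, redirect_to and request names are placeholders for whatever shared store and framework a real deployment would use:

        import time
        import uuid

        # One-time message store; in production this would live in a shared cache or table.
        _pending = {}          # token -> (expiry_timestamp, [messages])
        TTL_SECONDS = 300

        def stash_messages(messages):
            """Called by the POST handler; returns the token to append to the redirect URL."""
            token = uuid.uuid4().hex
            _pending[token] = (time.time() + TTL_SECONDS, list(messages))
            return token

        def pop_messages(token):
            """Called by the listing page; returns the messages once, then forgets them."""
            expiry, messages = _pending.pop(token, (0, []))
            return messages if expiry >= time.time() else []

        # POST handler sketch: redirect_to("/resources?notice=" + stash_messages(["Resource saved."]))
        # GET handler sketch:  messages = pop_messages(request.GET.get("notice", ""))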

    Read the article

  • How to get at JSON in grails 2.0

    - by Mikey
    I am sending myself JSON like so with jQuery:

        $.ajax({
            type: "POST",
            url: 'http://localhost:8080/myproject/myController/myAction',
            dataType: 'json',
            async: false,
            // json object to send to the authentication url
            data: {"stuff": "yes", "listThing": [1,2,3], "listObjects": [{"one": "thing"}, {"two": "thing2"}]},
            success: function () { alert("Thanks!"); }
        })

    I send this to a controller and do println params, and I know I'm already in trouble:

        [stuff:yes, listObjects[1][two]:thing2, listObjects[0][one]:thing, listThing[]:[1, 2, 3], action:myAction, controller:myController]

    I cannot figure out how to get at most of these values. I can get "yes" with params.stuff, but I can't do params.listThing.each{} or params.listObjects.each{}. What am I doing wrong?

    UPDATE: I make the controller do this to try the two suggestions so far:

        println params
        println params.stuff
        println params.list('listObjects')
        println params.listThing
        def thisWontWork = JSON.parse(params.listThing)
        render("omg l2json")

    Look how weird the parameters look at the end of the null pointer exception when I try the answers:

        [stuff:yes, listObjects[1][two]:thing2, listObjects[0][one]:thing, listThing[]:[1, 2, 3], action:l2json, controller:rateAPI]
        yes
        []
        null
        | Error 2012-03-25 22:16:13,950 ["http-bio-8080"-exec-7] ERROR errors.GrailsExceptionResolver - NullPointerException occurred when processing request: [POST] /myproject/myController/myAction - parameters:
        stuff: yes
        listObjects[1][two]: thing2
        listObjects[0][one]: thing
        listThing[]: 1
        listThing[]: 2
        listThing[]: 3

    UPDATE 2: I am learning things, but this can't be right:

        println params['listThing[]']
        println params['listObjects[0][one]']

    prints

        [1, 2, 3]
        thing

    It seems like this is some part of Grails' new JSON marshalling. This is somewhat inconvenient for my purposes of hacking around with the values. How would I get all these individual params back into a big Groovy object of nested maps and lists? Maybe I am not doing what I want with jQuery?

    Read the article

  • How can I get a custom made set of checkboxes return values in the postback?

    - by AngryHacker
    I have the following in an aspx page:

        <td colspan="2">
            <% DisplayParties(); %>
        </td>

    In the code-behind for the aspx page, I have this (i.e. I build the HTML for the checkboxes):

        public void DisplayParties()
        {
            var s = new StringBuilder();
            s.Append("<input type=\"checkbox\" id=\"attorney\" value=\"12345\"/>");
            s.Append("<input type=\"checkbox\" id=\"attorney\" value=\"67890\"/>");
            s.Append("<input type=\"checkbox\" id=\"adjuster\" value=\"125\"/>");
            Response.Write(s.ToString());
        }

    Not my proudest moment, but whatever. The problem is that when this page posts back via some event on the page, I never get these tags in the Request.Form collection. Is this simply how ASP.NET works (i.e. only server-side controls post back), or am I missing something simple? My understanding was that a postback should bring back all the form variables.

    Read the article

  • Need some help understanding this problem

    - by Legend
    I was wondering if someone could help me understand this problem. I prepared a small diagram because it is much easier to explain visually. The problem I am trying to solve has two parts. 1. Constructing the dependency graph: given the connectivity of the graph and a metric that determines how well a node depends on another, order the dependencies. For instance, I could put in a few rules saying that node 3 depends on node 4, node 2 depends on node 3, and node 3 depends on node 5; but because the final rule is not "valuable" (again, based on the same metric), I will not add that rule to my system. 2. Executing the request order: once I have built the dependency graph, execute the list in an order that maximizes the final connectivity. First and foremost, I am wondering if I constructed the problem correctly and if I should be aware of any corner cases. Secondly, is there a closely related algorithm that I can look at? Currently, I am thinking of something like Feedback Arc Set or the Secretary Problem, but I am a little confused at the moment. Any suggestions? PS: I am a little confused about the problem myself, so please don't flame me for that. If any clarifications are needed, I will try to update the question.

    Read the article

  • Django IntegrityError: foreign key violation upon delete

    - by Lukasz Korzybski
    I have Order and Shipment models. Shipment has a foreign key to Order:

        class Order(...):
            ...

        class Shipment():
            order = m.ForeignKey('Order')
            ...

    Now in one of my views I want to delete an order object along with all related objects, so I invoke order.delete(). I have Django 1.0.4 and PostgreSQL 8.4, and I use the transaction middleware, so the whole request is enclosed in a single transaction. The problem is that upon order.delete() I get:

        ...
        File "/usr/local/lib/python2.6/dist-packages/django/db/backends/__init__.py", line 28, in _commit
            return self.connection.commit()
        IntegrityError: update or delete on table "main_order" violates foreign key constraint "main_shipment_order_id_fkey" on table "main_shipment"
        DETAIL: Key (id)=(45) is still referenced from table "main_shipment".

    I checked in connection.queries that the proper queries are executed in the proper order. First the shipment is deleted, after that Django executes the delete on the order row:

        {'time': '0.000', 'sql': 'DELETE FROM "main_shipment" WHERE "id" IN (17)'},
        {'time': '0.000', 'sql': 'DELETE FROM "main_order" WHERE "id" IN (45)'}

    The foreign key has ON DELETE NO ACTION (the default) and is initially deferred. I don't know why I get the foreign key constraint violation. I also tried registering a pre_delete signal and manually deleting shipment objects before the delete on the order is called, but it resulted in the same error. I could change the ON DELETE behaviour for this key in Postgres, but that would be just a hack, and I wonder if anyone has a better idea of what's going on here. There is also a small detail: my Order model inherits from a Cart model, so it actually doesn't have an id field but cart_ptr_id, and after the DELETE on order is executed there is also a DELETE on cart, but that seems unrelated to the shipment-order problem, so I simplified it in the example.

    Read the article

  • Django: Paginator + raw SQL query

    - by Silver Light
    Hello! I'm using the Django Paginator everywhere on my website and even wrote a special template tag to make it more convenient. But now I have reached a point where I need a complex custom raw SQL query that, without a LIMIT, will return about 100K records. How can I use the Django Paginator with a custom query? Simplified example of my problem. My model:

        class PersonManager(models.Manager):
            def complicated_list(self):
                from django.db import connection
                # Real query is much more complex
                cursor = connection.cursor()
                cursor.execute("""SELECT * FROM `myapp_person`""")
                result_list = []
                for row in cursor.fetchall():
                    result_list.append(row[0])
                return result_list

        class Person(models.Model):
            name = models.CharField(max_length=255)
            surname = models.CharField(max_length=255)
            age = models.IntegerField()

            objects = PersonManager()

    The way I use pagination with the Django ORM:

        all_objects = Person.objects.all()
        paginator = Paginator(all_objects, 10)

        try:
            page = int(request.GET.get('page', '1'))
        except ValueError:
            page = 1

        try:
            persons = paginator.page(page)
        except (EmptyPage, InvalidPage):
            persons = paginator.page(paginator.num_pages)

    This way Django gets very smart and adds a LIMIT to the query when executing it. But when I use the custom manager:

        all_objects = Person.objects.complicated_list()

    all data is selected, and only then is the Python list sliced, which is VERY slow. How can I make my custom manager behave like the built-in one?
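    A pattern that often comes up for this situation (a sketch, not something from the post) is to hand Paginator a lazy, list-like object that runs a COUNT query for len() and a LIMIT/OFFSET query when sliced, since Paginator only needs those two operations. The RawQueryPage name and the queries below are illustrative assumptions:

        from django.db import connection

        class RawQueryPage(object):
            """Lazy list-like wrapper: Paginator only needs len() and slice access."""

            def __init__(self, sql, count_sql, params=()):
                self.sql = sql              # query WITHOUT a LIMIT clause
                self.count_sql = count_sql  # matching SELECT COUNT(*) query
                self.params = list(params)

            def __len__(self):
                cursor = connection.cursor()
                cursor.execute(self.count_sql, self.params)
                return cursor.fetchone()[0]

            def __getitem__(self, key):
                # Paginator slices with [bottom:top]; translate that to LIMIT/OFFSET.
                if not isinstance(key, slice):
                    raise TypeError("only slice access is supported in this sketch")
                offset = key.start or 0
                limit = key.stop - offset
                cursor = connection.cursor()
                cursor.execute(self.sql + " LIMIT %s OFFSET %s",
                               self.params + [limit, offset])
                return cursor.fetchall()

        # usage (hypothetical):
        # pages = Paginator(RawQueryPage("SELECT * FROM myapp_person",
        #                                "SELECT COUNT(*) FROM myapp_person"), 10)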

    Read the article

  • ASP FileSystemObject

    - by sushant
    I am using this code to access files and folders:

        <%@ Language=VBScript %>
        <%
        option explicit
        dim sRoot, sDir, sParent, objFSO, objFolder, objFile, objSubFolder, sSize
        %>
        <%
        sRoot = "D:Raghu"
        sDir = Request("Dir")
        sDir = sDir & "\"
        Response.Write "" & sDir & "" & vbCRLF

        Set objFSO = CreateObject("Scripting.FileSystemObject")

        on error resume next
        Set objFolder = objFSO.GetFolder(sRoot & sDir)
        if err.number <> 0 then
            Response.Write "Could not open folder"
            Response.End
        end if
        on error goto 0

        sParent = objFSO.GetParentFolderName(objFolder.Path)

        ' Remove the contents of sRoot from the front. This gives us the parent
        ' path relative to the root folder
        ' eg. if parent folder is "c:webfilessubfolder1subfolder2" then we just want "subfolder1subfolder2"
        sParent = mid(sParent, len(sRoot) + 1)

        Response.Write ""
        ' Give a link to the parent folder. This is just a link to this page only passing in
        ' the new folder as a parameter
        Response.Write "Parent folder" & vbCRLF

        ' Now we want to loop through the subfolders in this folder
        For Each objSubFolder In objFolder.SubFolders
            ' And provide a link to them
            Response.Write "" & objSubFolder.Name & "" & vbCRLF
        Next

        ' Now we want to loop through the files in this folder
        For Each objFile In objFolder.Files
            if Clng(objFile.Size) < 1024 then
                sSize = objFile.Size & " bytes"
            else
                sSize = Clng(objFile.Size / 1024) & " KB"
            end if
            ' And provide a link to view them. This is a link to show.asp passing in the directory and the file
            ' as parameters
            Response.Write "" & objFile.Name & "" & sSize & "" & objFile.Type & "" & vbCRLF
        Next

        Response.Write ""
        %>

    It works fine, but when I try to access something on a shared path like "\cvrdd0110:share", it gives an error. How do I access these files?

    Read the article

  • ASP.NET MVC 2.0 + implementation of an IRouteHandler does not fire

    - by Peter
    Can anybody please help me with this, as I have no idea why public IHttpHandler GetHttpHandler(RequestContext requestContext) is not executing. In my Global.asax.cs I have:

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                routes.MapRoute(
                    "Default",                                              // Route name
                    "{controller}/{action}/{id}",                           // URL with parameters
                    new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
                );

                routes.Add("ImageRoutes", new Route("Images/{filename}", new CustomRouteHandler()));
            }

            protected void Application_Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }

        // CustomRouteHandler implementation is below
        public class CustomRouteHandler : IRouteHandler
        {
            public IHttpHandler GetHttpHandler(RequestContext requestContext)
            {
                // IF I SET A BREAK POINT HERE IT DOES NOT HIT FOR SOME REASON.
                string filename = requestContext.RouteData.Values["filename"] as string;

                if (string.IsNullOrEmpty(filename))
                {
                    // return a 404 HttpHandler here
                }
                else
                {
                    requestContext.HttpContext.Response.Clear();
                    requestContext.HttpContext.Response.ContentType = GetContentType(requestContext.HttpContext.Request.Url.ToString());

                    // find physical path to image here.
                    string filepath = requestContext.HttpContext.Server.MapPath("~/logo.jpg");

                    requestContext.HttpContext.Response.WriteFile(filepath);
                    requestContext.HttpContext.Response.End();
                }
                return null;
            }
        }

    Can anybody tell me what I'm missing here? Simply put, public IHttpHandler GetHttpHandler(RequestContext requestContext) does not fire. I haven't changed anything in the web.config either. What am I missing? Please help.

    Read the article

  • Getting service from WSDD via XPath not working

    - by subes
    Hi, I am trying to get the element at the XPath "/deployment/service". Tested on this site: http://www.xmlme.com/XpathTool.aspx

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <deployment xmlns="http://xml.apache.org/axis/wsdd/"
                    xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
          <service name="kontowebservice" provider="java:RPC" style="rpc" use="literal">
            <parameter name="wsdlTargetNamespace" value="http://strategies.spine"/>
            <parameter name="wsdlServiceElement" value="ExposerService"/>
            <parameter name="wsdlServicePort" value="kontowebservice"/>
            <parameter name="className" value="dmd4biz.container.webservice.konto.internal.KontoWebServiceImpl_WS"/>
            <parameter name="wsdlPortType" value="Exposer"/>
            <parameter name="typeMappingVersion" value="1.2"/>
            <operation xmlns:operNS="http://strategies.spine" xmlns:rtns="http://www.w3.org/2001/XMLSchema"
                       name="expose" qname="operNS:expose" returnQName="exposeReturn" returnType="rtns:anyType" soapAction="">
              <parameter xmlns:tns="http://www.w3.org/2001/XMLSchema" qname="in0" type="tns:anyType"/>
            </operation>
            <parameter name="allowedMethods" value="expose"/>
            <parameter name="scope" value="Request"/>
          </service>
        </deployment>

    I absolutely can't find out why it always tells me that my XPath does not match... This may be stupid, but am I missing something?

    Read the article

  • find contiguous stretches of equal data in a vector

    - by mariotomo
    I have a numeric vector that contains patches of repeating elements, something like:

        R> data <- c(1,1,1,2,2,2,3,3,2,2,2,2,2,3,3,1,1,1,1,1)
        R> data
         [1] 1 1 1 2 2 2 3 3 2 2 2 2 2 3 3 1 1 1 1 1
        R>

    I need to extract contiguous patches of elements equal to a specific value, but I'm only interested in the patch around a specific position. So my input is: (1) the numeric vector, (2) the desired value, (3) the position. I want to return a logical vector indicating which positions satisfy the request. If the data at that position does not equal the value, I return all FALSE. The possible outcomes that are not all F would be:

            1 1 1 2 2 2 3 3 2 2 2 2 2 3 3 1 1 1 1 1
        [1] T T T F F F F F F F F F F F F F F F F F
        [2] F F F T T T F F F F F F F F F F F F F F
        [3] F F F F F F T T F F F F F F F F F F F F
        [4] F F F F F F F F T T T T T F F F F F F F
        [5] F F F F F F F F F F F F F T T F F F F F
        [6] F F F F F F F F F F F F F F F T T T T T
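    For reference, a language-agnostic sketch of the run-finding logic (written in Python rather than R, and not taken from the post): starting at the given position, expand left and right while the neighbours equal the desired value.

        def patch_around(data, value, pos):
            # Return a boolean mask marking the contiguous run of `value` containing `pos`.
            mask = [False] * len(data)
            if data[pos] != value:
                return mask            # position does not hold the value: all False
            lo = pos
            while lo > 0 and data[lo - 1] == value:
                lo -= 1
            hi = pos
            while hi < len(data) - 1 and data[hi + 1] == value:
                hi += 1
            for i in range(lo, hi + 1):
                mask[i] = True
            return mask

        # example with the vector from the question (0-based position 8 holds a 2)
        data = [1,1,1,2,2,2,3,3,2,2,2,2,2,3,3,1,1,1,1,1]
        print(patch_around(data, 2, 8))   # True at indices 8..12, False elsewhere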

    Read the article

  • Resource mapping in a Ruby on Rails URL (RESTful API)

    - by randombits
    I'm having a bit of difficulty coming up with the right answer to this, so I will pose my problem here. I'm working on a RESTful API. Naturally, I have multiple resources, some of which are in parent-to-child relationships and some of which are stand-alone resources. Where I'm having a bit of difficulty is figuring out how to make things easier for the folks who will be building clients against my API. The situation is this: hypothetically I have a 'Street' resource. Each street has multiple homes, so Street has_many Homes and a Home belongs_to a Street. If a user wants to make an HTTP GET request for a specific home resource, the following should work:

        http://mymap/streets/5/homes/10

    That allows a user to get information for the home with id 10. Straightforward. My question is, am I breaking the rules of the book by also giving the user access to:

        http://mymap/homes/10

    Technically that home resource exists on its own, without the street. It makes sense that it exists as its own entity without an encapsulating street, even though business logic says otherwise. What's the best way to handle this?

    Read the article

  • Async WebRequest Timeout Windows Phone 7

    - by Tyler
    Hi all, I'm wondering: what is the "right" way of timing out an HttpWebRequest on Windows Phone 7? I've been reading about ThreadPool.RegisterWaitForSingleObject(), but this can't be used, as WaitHandles throw a NotImplementedException at run time. I've also been looking at ManualResetEvents, but A) I don't understand them properly and B) I don't understand how blocking the calling thread is an acceptable way to implement a timeout on an async request. Here's my existing code, sans timeout; can someone please show me how I would add a timeout to this?

        public static void Get(Uri requestUri, HttpResponseReceived httpResponseReceivedCallback, ICredentials credentials, object userState, bool getResponseAsString = true, bool getResponseAsBytes = false)
        {
            var httpWebRequest = (HttpWebRequest)WebRequest.Create(requestUri);
            httpWebRequest.Method = "GET";
            httpWebRequest.Credentials = credentials;

            var httpClientRequestState = new JsonHttpClientRequestState(null, userState, httpResponseReceivedCallback, httpWebRequest, getResponseAsString, getResponseAsBytes);

            httpWebRequest.BeginGetResponse(ResponseReceived, httpClientRequestState);
        }

        private static void ResponseReceived(IAsyncResult asyncResult)
        {
            var httpClientRequestState = asyncResult.AsyncState as JsonHttpClientRequestState;
            Debug.Assert(httpClientRequestState != null, "httpClientRequestState cannot be null. Fatal error.");

            try
            {
                var webResponse = (HttpWebResponse)httpClientRequestState.HttpWebRequest.EndGetResponse(asyncResult);
            }
        }

    Read the article

  • Most efficient way to bind a Listbox with SelectionMode=Multiple

    - by Draak
    Hi, I have an ASP.NET webform that has a listbox (lbxRegions) with the multi-select option enabled. In my db, I have a table with an xml field that holds a list of regions. I need to populate the listbox with all available regions and then "check off" the list items that match the regions in the db table. The list options also need to be ordered by region name. So, I wrote the following code that works just fine -- no problems. But I was wondering if anyone can think of a better (more succinct, more efficient) way to have done the same thing. Thanks in advance.

        Dim allRegions = XElement.Load(Server.MapPath(Request.ApplicationPath) & "\Regions.xml").<country>.<regions>.<region>
        Dim selectedRegions = (From ev In dc.Events Where ev.EventId = 2951).Single.CEURegions.<country>.<regions>.<region>
        Dim unselectedRegions = allRegions.Except(selectedRegions)

        Dim selectedItems = From x In selectedRegions Select New ListItem() _
            With {.Value = x.@code, .Text = x.Value, .Selected = True}
        Dim unselectedItems = From x In unselectedRegions Select New ListItem() _
            With {.Value = x.@code, .Text = x.Value}

        Dim allItems = selectedItems.Union(unselectedItems).OrderBy(Function(x) x.Text)
        lbxRegions.Items.AddRange(allItems.ToArray())

    P.S. You can post code in C# if you like.

    Read the article

  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on Google App Engine returns content items (just text) and comments on them. It works like this (pseudo-ish code):

        query: get keys of latest content          # query to datastore
        for each item in content:
            if item_dict in memcache:
                use item_dict
            else:
                build_item_dict(item)               # by fetching from datastore
                store item_dict in memcache
        send all item_dicts to template

    Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the webpage. My problem is that if the memcache entries have expired, then for each item I want to display I have to (1) look the item up in memcache, (2) since no memcache entry exists, fetch the item from the datastore, and (3) store the item in memcache. These calls add up quickly. I don't set an expiry time for the entries in memcache, so this really only happens once in the morning, but the webpage then takes long enough to load (~1 sec) that the browser reports it as not existing. Normally, my webpages take about 50ms to load. This approach works decently for frequent visits, but it has its flaws, as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance.
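    One way to cut the per-item round trips (a suggestion, not something from the post) is to batch the cache lookups and writes with memcache.get_multi()/set_multi(), so a cold cache costs one memcache read, one batch of datastore fetches, and one memcache write instead of three calls per item. A rough Python sketch, assuming a build_item_dict() helper like the one described that can be called per key:

        from google.appengine.api import memcache

        def get_item_dicts(keys):
            # One batched memcache lookup for every key.
            key_names = [str(k) for k in keys]
            cached = memcache.get_multi(key_names)

            missing = [k for k, name in zip(keys, key_names) if name not in cached]
            if missing:
                # Build the missing dicts (datastore fetches) and write them back in one call.
                fresh = dict((str(k), build_item_dict(k)) for k in missing)
                memcache.set_multi(fresh)
                cached.update(fresh)

            # Preserve the original ordering of the keys.
            return [cached[name] for name in key_names]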

    Read the article

  • joomla and allow_url_fopen [closed]

    - by liz
    So I have been reading about the pros and cons of allowing allow_url_fopen, but I am still confused. After a recent hacking incident (which I believe had nothing to do with allow_url_fopen) my host turned allow_url_fopen off. The thing I don't get is this: in Joomla 2.5.x there is an updating feature. You can search for new versions and be notified if things are out of date, and there is a big security hole if Joomla or its extensions get out of date. But the catch is that the feature needs allow_url_fopen turned on. So why did Joomla build a security risk into a feature meant to improve security? Is it okay to turn allow_url_fopen on and have the updating feature? To clarify, my question is: I have Joomla installed, I have cURL installed, and when I run the discover-updates feature through native Joomla I get a request for fopen. Shouldn't I not need to enable a security risk? I am running version 2.5.8 of Joomla.

    Read the article

  • Endianness and C APIs: Specifically OpenSSL.

    - by Hassan Syed
    I have an algorithm that uses the following OpenSSL calls:

        HMAC_Update() / HMAC_Final()            // RIPEMD-160
        EVP_CipherUpdate() / EVP_CipherFinal()  // CBC Blowfish

    These functions take an unsigned char * to the "plain text". My input data comes from a C++ std::string::c_str(), which originates from a protocol buffer object as an encoded UTF-8 string. UTF-8 strings are meant to be endian neutral. However, I'm a bit paranoid about how OpenSSL may perform operations on the data. My understanding is that encryption algorithms work on 8-bit blocks of data, and if an unsigned char * is used for pointer arithmetic when the operations are performed, the algorithms should be endian neutral and I do not need to worry about anything. My uncertainty is compounded by the fact that I am working on a little-endian machine and have never done any real cross-architecture programming. My reasoning is based on the following two properties: (1) std::string (not wstring) internally uses an 8-bit pointer, and the resulting c_str() pointer will iterate the same way regardless of the CPU architecture; (2) encryption algorithms are, either by design or by implementation, endian neutral. I know the best way to get a definitive answer is to use QEMU and do some cross-platform unit tests (which I plan to do). My question is a request for comments on my reasoning, and perhaps it will assist other programmers faced with similar problems.

    Read the article

  • Checking multiple conditions in Ruby (within Rails, which may not matter)

    - by Ev
    Hello rubyists and railers, I have a method which checks over a params hash to make sure that it contains certain keys and that certain values are set within a certain range. This is for an action that responds to a POST query by an iPhone app. Anyway, this method checks about 10 different conditions, any of which will result in an HTTP error being returned (I'm still considering this, but probably a 400: Bad Request error). My current syntax is basically this (paraphrased):

        def invalid_submission_params?(params)
          [check one] or [check two] or [check three] or [check four]  # etc etc
        end

    Each of the check statements returns true if that particular check results in an invalid parameter set. I call it as a before filter with params[:submission] as the argument. This seems a little ugly (all the strung-together or statements). Is there a better way? I have tried using case but can't see a way to make it more elegant. Or, perhaps, is there a Rails method that lets me check the incoming params hash for certain conditions before handing control off to my action method?

    Read the article
