Search Results

Search found 13324 results on 533 pages for 'send on behalf of'.

  • Integrate Google Maps API into an iPhone app

    - by Corey Floyd
    Update: iPhone SDK 3.0 now addresses the question here; however, the NDA prevents any in-depth discussion. Log in to the iPhone Dev Center if you need more info. Ok, I have to admit I'm a little lost here. I am fairly comfortable with Cocoa, but am having trouble picking up the bit of JavaScript needed to solve this problem. I am trying to send a request to Google for a reverse geocode. I have looked over the Google documentation here: http://code.google.com/apis/maps/documentation/index.html http://code.google.com/apis/maps/documentation/geocoding/ Even after a rough reading, I am missing a basic concept: how do I talk to Google? In some examples they show a URL being sent to Google (which seems easy enough), but in others they show JavaScript. It seems that for reverse geocoding the request might be harder than sending a URL with some parameters (but I hope I am wrong). Can someone point me to the correct way to make a request? (In Objective-C, so I can wrap my head around it.)

  • Sending data through POST request from a node.js server to a node.js server

    - by Masiar
    I'm trying to send data through a POST request from a node.js server to another node.js server. What I do in the "client" node.js is the following: var options = { host: 'my.url', port: 80, path: '/login', method: 'POST' }; var req = http.request(options, function(res){ console.log('status: ' + res.statusCode); console.log('headers: ' + JSON.stringify(res.headers)); res.setEncoding('utf8'); res.on('data', function(chunk){ console.log("body: " + chunk); }); }); req.on('error', function(e) { console.log('problem with request: ' + e.message); }); // write data to request body req.write('data\n'); req.write('data\n'); req.end(); This chunk is taken more or less from the node.js website so it should be correct. The only thing I don't see is how to include username and password in the options variable to actually login. This is how I deal with the data in the server node.js (I use express): app.post('/login', function(req, res){ var user = {}; user.username = req.body.username; user.password = req.body.password; ... }); How can I add those username and password fields to the options variable to have it logged in? Thanks

  • WPF DataGridTemplateColumn. Am I missing something?

    - by plotnick
    <data:DataGridTemplateColumn Header="Name"> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock Text="{Binding Name}" /> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> <data:DataGridTemplateColumn.CellEditingTemplate> <DataTemplate> <TextBox Text="{Binding Name}" /> </DataTemplate> </data:DataGridTemplateColumn.CellEditingTemplate> </data:DataGridTemplateColumn> It's a clear example of a template column, right? What could be wrong with that? So, here is the thing - when a user navigates through the DataGrid by hitting the TAB key, he needs to hit TAB twice(!) to be able to edit the text in the TextBox. How could I make it editable as soon as the user gets the column focus, I mean even if he just starts typing? Ok, I found a way - in Grid.KeyUp() I put the code below: if (Grid.CurrentColumn.Header.ToString() == "UserName") { if (e.Key != Key.Escape) { Grid.BeginEdit(); // Simply send another TAB press if (Keyboard.FocusedElement is Microsoft.Windows.Controls.DataGridCell) { var keyEvt = new KeyEventArgs(Keyboard.PrimaryDevice, Keyboard.PrimaryDevice.ActiveSource, 0, Key.Tab) { RoutedEvent = Keyboard.KeyDownEvent }; InputManager.Current.ProcessInput(keyEvt); } } }

  • Why is WCF Stream response getting corrupted on write to disk?

    - by Alvin S
    I want to write a WCF web service that can send files over the wire to the client. So I have one setup that sends a Stream response. Here is my code on the client: private void button1_Click(object sender, EventArgs e) { string filename = System.Environment.CurrentDirectory + "\\Picture.jpg"; if (File.Exists(filename)) File.Delete(filename); StreamServiceClient client = new StreamServiceClient(); int length = 256; byte[] buffer = new byte[length]; FileStream sink = new FileStream(filename, FileMode.CreateNew, FileAccess.Write); Stream source = client.GetData(); int bytesRead; while ((bytesRead = source.Read(buffer,0,length))> 0) { sink.Write(buffer,0,length); } source.Close(); sink.Close(); MessageBox.Show("All done"); } Everything processes fine with no errors or exceptions. The problem is that the .jpg file that is getting transferred is reported as being "corrupted or too large" when I open it. What am I doing wrong? On the server side, here is the method that is sending the file: public Stream GetData() { string filename = Environment.CurrentDirectory+"\\Chrysanthemum.jpg"; FileStream myfile = File.OpenRead(filename); return myfile; } I have the server configured with basicHttpBinding and TransferMode.StreamedResponse.
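
    A likely culprit here (a sketch, not a confirmed diagnosis): the copy loop writes the full buffer length on every pass, but the final Read usually returns fewer bytes, so the file gets padded with stale buffer data. A minimal helper showing the usual fix; the name StreamCopy.SaveToFile is hypothetical:

    ```csharp
    using System.IO;

    // Sketch: copy a response stream to disk, writing only the bytes actually
    // read on each pass. Writing the full buffer length instead of bytesRead
    // pads the last chunk with stale data and corrupts the file.
    static class StreamCopy
    {
        public static void SaveToFile(Stream source, string filename)
        {
            using (var sink = new FileStream(filename, FileMode.Create, FileAccess.Write))
            {
                var buffer = new byte[256];
                int bytesRead;
                while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
                {
                    sink.Write(buffer, 0, bytesRead);   // not: sink.Write(buffer, 0, length)
                }
            }
        }
    }
    ```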

  • Running ASP.NET 3.5 and ASP.NET 2.0 in the same site

    - by cori
    We're running ASP.NET 2.0 on our corporate web site, and I'd like to get it up to ASP.NET 3.5 as smoothly as possible. The project/solution architecture in VS 2005 is an ASP.NET 2.0 web project and a .NET 2.0 data access layer project which is used by the site code. Upon opening the projects in a new VS 2008 solution they seemed to be converted to .NET 3.5 with a minimum of fuss - they built correctly out of the box, deployed successfully, and seem to work just fine, which is exactly as I would expect given that .NET 2.0 and 3.5 share a common runtime. The major difference after the conversion is that the web.config file's referenced DLLs are now the 3.5 versions. What I would like to do is update the site piecemeal; as I make modifications to a given page, send the 3.5 version of that page over to our web server and not update the whole site at once. In testing on our dev box this approach seems to be working fine - the site code is interacting with the .NET 3.5 data access layer without difficulty, a handful of pages are running 3.5 code-behind (by this I mean that they're running assemblies built in VS 2008 - the site is using single-page assemblies for code-behind), the 3.5 web.config is in place, and the bulk of the site is running code-behind assemblies built in VS 2005. Everything looks great. Which makes me worried that I'm missing something. Is this architecture workable, or is there a problem lying in wait for me that I haven't considered?

  • How to parse a raw HTTP response?

    - by Ed
    If I have a raw HTTP response as a string: HTTP/1.1 200 OK Date: Tue, 11 May 2010 07:28:30 GMT Expires: -1 Cache-Control: private, max-age=0 Content-Type: text/html; charset=UTF-8 Server: gws X-XSS-Protection: 1; mode=block Connection: close <!doctype html><html>...</html> Is there an easy way I can parse it into an HttpListenerResponse object? Or at least into some kind of .NET object so I don't have to work with raw responses. What I'm doing currently is extracting the header key/value pairs and setting them on the HttpListenerResponse. But some headers can't be set, and then I have to cut out the body of the response and write it to the OutputStream. But the body could be gzipped, or it could be an image, which I can't get to work yet. And some responses contain random characters everywhere, which looks like an encoding problem. It's a lot of trouble. I'm getting a raw response because I'm using SOCKS to send an HTTP request. The program I'm working on is basically an HTTP proxy that can route requests through a SOCKS proxy, like Privoxy does.
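
    If a full parser isn't required, the response can be split at the first blank line and the headers read key by key. A rough sketch, assuming the string above is the complete response and the body is text (a gzipped or image body would have to be kept as raw bytes instead):

    ```csharp
    using System;
    using System.Collections.Generic;

    // Rough sketch: split a raw HTTP response into status line, headers and body.
    // Assumes the header block ends at the first blank line (CRLF CRLF).
    class RawHttpResponse
    {
        public string StatusLine;
        public Dictionary<string, string> Headers =
            new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        public string Body;

        public static RawHttpResponse Parse(string raw)
        {
            var result = new RawHttpResponse();
            int split = raw.IndexOf("\r\n\r\n", StringComparison.Ordinal);
            string head = split >= 0 ? raw.Substring(0, split) : raw;
            result.Body = split >= 0 ? raw.Substring(split + 4) : string.Empty;

            string[] lines = head.Split(new[] { "\r\n" }, StringSplitOptions.None);
            result.StatusLine = lines[0];
            for (int i = 1; i < lines.Length; i++)
            {
                int colon = lines[i].IndexOf(':');
                if (colon > 0)
                    result.Headers[lines[i].Substring(0, colon).Trim()] =
                        lines[i].Substring(colon + 1).Trim();
            }
            return result;
        }
    }
    ```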

  • How to find the cause of a SocketException with the message that an established connection was aborted

    - by cdpnet
    Hi all, I know a similar question may have been asked many times, but I want to present the behavior I'm seeing and find out if somebody can help predict the cause of this. I am writing a Windows service which connects to another Windows service over TCP. There are 100 user entities, with 5 connections each. These users perform their tasks using their individual connections. The application goes on without seeing this problem for 1 or 2 days, or sometimes shows the problem right after starting (rarely). The best run I had was 4 to 5 days without this exception, and after that the application died or I had to stop it for various reasons. I want to know what could be causing this. Here is the stack trace: System.IO.IOException: Unable to write data to the transport connection: An established connection was aborted by the software in your host machine. ---> System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size) --- End of inner exception stack trace --- at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Security._SslStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslStream.Write(Byte[] buffer, Int32 offset, Int32 count)
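
    This error corresponds to WSAECONNABORTED, which typically means something between the two services (an idle timeout, a firewall/NAT device, or the OS itself) tore the connection down. One low-cost experiment, assuming the client side is built on a TcpClient/Socket and the aborts correlate with idle periods, is to enable TCP keep-alives and treat the IOException as a cue to reconnect rather than as fatal:

    ```csharp
    using System.Net.Sockets;

    // Sketch: enable TCP keep-alives on the underlying socket so the OS probes
    // idle connections instead of letting an intermediate firewall/NAT drop
    // them silently.
    static class ConnectionTuning
    {
        public static void EnableKeepAlive(TcpClient client)
        {
            client.Client.SetSocketOption(
                SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
        }
    }
    ```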

  • How to catch an expected (and intended) 302 response code with generic XmlHttpRequest?

    - by Anthony
    So, if you look back at my previous question about Exchange Autodiscover, you'll see that the easiest way to get the autodiscover URL is to send a non-secure, non-authenticated GET request to the server, like so: http://autodiscover.exchangeserver.org/autodiscover/autodiscover.xml The server will respond with a 302 redirect with the correct URL in the Location header. I'm trying out something really simple at first with a Chrome extension, where I have: if (req.readyState==4 && req.status==302) { return req.getResponseHeader("Location"); } with another ajax call set up with the full XML POST and the user credentials. But instead Chrome hangs at this point, and a look at the developer panel shows that it is not returning the response but instead is acting like no response was given, meanwhile showing an Uncaught Error: NETWORK_ERR: XMLHttpRequest Exception 101 in the error log. The way I see it, referring to the exact response status is about the same as "catching" it, but I'm not sure if the problem is with Chrome/WebKit or if this is how XHR requests always handle redirects. I'm not sure how to catch this so that I can still get the headers from the response. Or would it be possible to set up a secondary XHR such that when it gets the 302, it sends a totally different request? Quick update: I just changed it so that it doesn't check the response code: if (req.readyState==4) { return req.getResponseHeader("Location"); } and instead when I alert out the value it's null, and there is still the same error and no response in the dev console. So it seems like it either doesn't track 302 responses as responses, or something happens after that wipes that response out?

  • Deployed Qt5 Application Doesn't Print or Show Print Dialog

    - by MustacheMcLimey
    I'm experiencing Qt4 to Qt5 troubles. In my application when the user clicks the print button two things should happen, one is that a PDF gets written to disk (which still works fine in the new version, so I know that some of the printing functions are working properly) and the other is that a QPrintDialog should exec() and then send to a connected printer. I see the dialog when I launch from my development machine. The application launches on the deployed machine, but the QPrintDialog never shows and the document never prints. I am including print support. QT += core gui network webkitwidgets widgets printsupport I have been using Process Explorer to see what DLLs the application uses on my development machine, and I believe that everything is present. My application bundle includes: {myAppPath}\MyApp[MyApp.exe, Qt5PrintSupport.dll, ...] {myAppPath}\plugins\printsupport\windowsprintersupport.dll {myAppPath}\plugins\imageformats[ qgif.dll, qico.dll,qjpeg.dll, qmng.dll, qtga.dll, qtiff.dll, qwbmp.dll ] The following is the relevant code snippet: void PrintableForm::printFile() { //Writes the PDF to disk in every environment pdfCopy(); //Paper Copy only works on my dev machine QPrinter paperPrinter; QPrintDialog printDialog(&paperPrinter,this); if( printDialog.exec() == QDialog::Accepted ) { view->print(&paperPrinter); } this->accept(); } My first thought is that the relevant DLLs are not being found come print time, and that means that my application file system is incorrect, but I have not found anything that shows me a different file structure. Am I on the right track or is there something else wrong with this setup?

  • Struts2 - How to use the Struts2 Annotations?

    - by Aaron
    I'm trying to implement the Struts 2 annotations in my project, but I don't know how. I added the convention-plugin v 2.1.8.1 to my pom and modified the web.xml ... <init-param> <param-name>actionPackages</param-name> <param-value>org.apache.struts.helloworld.action</param-value> </init-param> ... My action: package org.apache.struts.helloworld.action; import org.apache.struts.helloworld.model.MessageStore; import com.opensymphony.xwork2.ActionSupport; import org.apache.struts2.convention.annotation.Result; import org.apache.struts2.convention.annotation.Results; @Results({ @Result(name="success", location="HelloWorld.jsp") }) public class HelloWorld extends ActionSupport { public String execute() throws Exception { messageStore = new MessageStore() ; return SUCCESS; } The JSP page from where I'm trying to use my action: <body> <h1>Welcome To Struts 2!</h1> <p><a href="<s:url action='helloWorld'/>">Hello World</a></p> </body> When I press the link associated with the helloWorld action, it sends me to exactly the same page. So, from index.jsp, it sends me to index.jsp. The way it should behave: it should send me to HelloWorld.jsp. I uploaded the project (a very simple HelloWorld app) to FileFront, maybe someone can see where the problem is. http://www.filefront.com/16364385/Hello_World.zip

  • HTTP POST to Imageshack

    - by Brandon Schlenker
    I am currently uploading images to my server via HTTP POST. Everything works fine using the code below. NSString *UDID = md5([UIDevice currentDevice].uniqueIdentifier); NSString *filename = [NSString stringWithFormat:@"%@-%@", UDID, [NSDate date]]; NSString *urlString = @"http://taptation.com/stationary_data/index.php"; request= [[[NSMutableURLRequest alloc] init] autorelease]; [request setURL:[NSURL URLWithString:urlString]]; [request setHTTPMethod:@"POST"]; NSString *boundary = @"---------------------------14737809831466499882746641449"; NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@",boundary]; [request addValue:contentType forHTTPHeaderField: @"Content-Type"]; NSMutableData *postbody = [NSMutableData data]; [postbody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]]; [postbody appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"userfile\"; filename=\"%@.jpg\"\r\n", filename] dataUsingEncoding:NSUTF8StringEncoding]]; [postbody appendData:[[NSString stringWithString:@"Content-Type: application/octet-stream\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]]; [postbody appendData:[NSData dataWithData:imageData]]; [postbody appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]]; [request setHTTPBody:postbody]; NSData *returnData = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil]; returnString = [[NSString alloc] initWithData:returnData encoding:NSUTF8StringEncoding]; NSLog(returnString); However, when I try to convert this to work with ImageShack's XML API, it doesn't return anything. The directions from ImageShack are below: send the following variables via POST to imageshack.us/index.php: fileupload; (the image) xml = "yes"; (specifies the return of XML) cookie; (registration code, optional) Does anyone know where I should go from here?

  • How to extend an 'unloadable' Rails plugin?

    - by Vitaly Kushner
    I'm trying to write a plugin that will extend InheritedResources. Specifically I want to rewrite some default helpers. And I'd like it to "just work" once installed, w/o any changes to application code. The functionality is provided in a module which needs to be included in the right place. The question is where? :) The first attempt was to do it in my plugin's init.rb: InheritedResources::Base.send :include, MyModule It works in production, but fails miserably in development since InheritedResource::Base is declared as unloadable and so its code is reloaded on each request. So my module is there for the first request, and then it's gone. InheritedResource::Base is 'pulled' in again by any controller that uses it: class SomeController < InheritedResource::Base But no code is 'pulling in' my extension module, since it is not referenced anywhere except init.rb, which is not re-loaded on each request. So right now I'm just including the module manually in every controller that needs it, which sucks. I can't even include it once in ApplicationController because InheritedResources inherits from it and so it will override any changes back. Update: I'm not looking for advice on how to 'monkey patch'. The extension is working in production just great. My problem is how to catch the moment exactly after InheritedResources is loaded so I can stick my extension into it :) Update 2: another attempt at clarification. The sequence of events is: a) Rails loads plugins; my plugin loads after inherited_resources and patches it. b) A development-mode request is served and works. c) Rails unloads all the 'unloadable' code, which includes all application code and also inherited_resources. d) Another request comes in. e) Rails loads the controller, which inherits from inherited_resources. f) Rails loads inherited_resources, which inherits from application_controller. g) Rails loads application_controller (or maybe it's already loaded at this stage, not sure). h) The request fails as no one loaded my plugin to patch inherited_resources; plugin init.rb files are not reloaded. I need to catch the point in time between g and h.

  • MSBuild 2010 - how to publish web app to a specific location (nant)?

    - by Mr. Flibble
    I'm trying to get MSBuild 2010 to publish a web app to a specific location. I can get it to publish the deployment package to a particular path, but the deployment package then adds its own path, which changes. For example: if I tell it to publish to C:\dev\build\Output\Debug, then the actual web files end up at C:\dev\build\Output\Debug\Archive\Content\C_C\code\sawadee\frontend\IPP-FrontEnd\Source\ControllersViews\obj\Debug\Package\PackageTmp. And the C_C part of the path changes (not sure how it chooses this part of the path). This means I can't just script a copy from the publish location. I'm using this nant/msbuild command at the moment: <target name="compile" description="Compiles"> <msbuild project="${name}.sln"> <property name="Platform" value="Any CPU"/> <property name="Configuration" value="Debug"/> <property name="DeployOnBuild" value="true"/> <property name="DeployTarget" value="Package"/> <property name="PackageLocation" value="C:\dev\build\Output\Debug\"/> <property name="AutoParameterizationWebConfigConnectionStrings" value="false"/> <property name="PackageAsSingleFile" value="false"/> </msbuild> Any ideas on how to get it to send the web files directly to a specific location?

  • How to validate selects / inserts are hitting the right server with MySQL Master/Slave

    - by bwizzy
    I've got a rails app using the master_slave_adapter plugin (http://github.com/mauricio/master_slave_adapter/tree/master) to send all selects to a slave, and all other statements to the master. Replication is setup using Mysql master / slave. I'm trying to validate that all the SQL statements are indeed going to the right place. Selects to the slave (db2), inserts to the master (db1) but I'm not sure how to do it. I've tried using tcpdump on the webservers: sudo /usr/sbin/tcpdump -q -i eth0 dst port 3306 and this is the output for a page request with a ton of selects: 10:32:36.570930 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0 10:32:36.576805 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0 10:32:36.577201 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0 10:32:36.577980 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 86 10:32:36.578186 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 21 10:32:36.578359 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 27 10:32:36.578522 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 5 10:32:36.578741 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 13 10:32:36.579611 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 29 10:32:36.588201 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0 10:32:36.588323 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0 10:32:36.588677 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0 10:32:36.588784 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 86 It doesn't look like all the selects are going to the slave. Maybe this isn't the right way to test, anyone know a better way?

  • passing timezone from client (GWT) to server (Joda Time)

    - by Caffeine Coma
    I'm using GWT on the client (browser) and Joda Time on the server. I'd like to perform some DB lookups bounded by the day (i.e. 00:00:00 until 23:59:59) that a request comes in, with the time boundaries based on the user's (i.e. browser) timezone. So I have the GWT code do a new java.util.Date() to get the time of the request, and send that to the server. Then I use Joda Time like so: new DateTime(clientDate).toDateMidnight().toDateTime() The trouble of course is that toDateMidnight(), in the absence of a specified TimeZone, will use the system's (i.e. the server's) TimeZone. I've been trying to find a simple way to pass the TimeZone from the browser to the server without much luck. In GWT I can get the GMT offset with: DateTimeFormat.getFormat("Z").fmt(new Date()) which results in something like "-0400". But Joda Time's DateTimeZone.forID() wants strings formatted like "America/New_York", or an integer argument of hours and minutes. Of course I can parse "-0400" into -4 hours and 0 minutes, but I'm wondering if there is not a more straightforward way of doing this.

  • Pasting formatted Excel range into Outlook message

    - by Steph
    Hi everyone, I am using Office 2007 and I would like to use VBA to paste a range of formatted Excel cells into an Outlook message and then mail the message. In the following code (that I lifted from various sources), it runs without error and then sends an empty message... the paste does not work. Can anyone see the problem and better yet, help with a solution? Thanks, -Steph Sub SendMessage(SubjectText As String, Importance As OlImportance) Dim objOutlook As Outlook.Application Dim objOutlookMsg As Outlook.MailItem Dim objOutlookRecip As Outlook.Recipient Dim objOutlookAttach As Outlook.Attachment Dim iAddr As Integer, Col As Integer, SendLink As Boolean 'Dim Doc As Word.Document, wdRn As Word.Range Dim Doc As Object, wdRn As Object ' Create the Outlook session. Set objOutlook = CreateObject("Outlook.Application") ' Create the message. Set objOutlookMsg = objOutlook.CreateItem(olMailItem) Set Doc = objOutlookMsg.GetInspector.WordEditor 'Set Doc = objOutlookMsg.ActiveInspector.WordEditor Set wdRn = Doc.Range wdRn.Paste Set objOutlookRecip = objOutlookMsg.Recipients.Add("[email protected]") objOutlookRecip.Type = 1 objOutlookMsg.Subject = SubjectText objOutlookMsg.Importance = Importance With objOutlookMsg For Each objOutlookRecip In .Recipients objOutlookRecip.Resolve ' Set the Subject, Body, and Importance of the message. '.Subject = "Coverage Requests" 'objDrafts.GetFromClipboard Next .Send End With Set objOutlookMsg = Nothing Set objOutlook = Nothing End Sub

  • Update CGridView when a dropdown value changes

    - by Gautam Borad
    I have a CGridView with columns from a table "product" => {'product_id','category_id',...} I have another table "category" => {'category_id','category_name'} category_id is the FK in the product table. Now I want a dropdown list of the category table, and on selecting a particular value the CGridView of product should be updated to show only the rows with that category_id. I also need the column filtering/sorting for the CGridView to work (using AJAX). I was able to refresh the CGridView when a value is selected from the dropdown, however I am not able to send the category_id with the 'data' for the CGridView: clientScript->registerScript('search', " $('.cat_dropdown').change(function(){ $.fn.yiiGridView.update('order-grid', { data: $(this).serialize(), }); return false; }); "); The data: $(this).serialize() sends only the values that are present in the filtering text fields of the CGridView. How do I append the category_id to it? If the above method is not the right one, please suggest an alternative method.

  • Using section header in Sendgrid

    - by Zefiryn
    I am trying to send emails through SendGrid in a Zend application. I copied the PHP code from the SendGrid documentation (the smtpapi class and Swift). I created a template with places that should be substituted with %variable%. Now I create headers for SendGrid as defined here: http://docs.sendgrid.com/documentation/api/smtp-api/developers-guide/ As a result I get something looking like this: { "to": ["[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]"], "sub": {"%firstname%": ["Benny", "Chaim", "Ephraim", "Yehuda", "will"]}, "section": {"%postername%": "Rabbi Yitzchak Lieblich", "%postermail%": "[email protected]", "%categoryname%": "General", "%threadname%": "Completely new thread", "%post%": "This thread is to inform you about something very important", "%threadurl%": "http:\/\/hb.local\/forums\/general\/thread\/143", "%replyto%": "http:\/\/hb.local\/forums\/general\/thread\/143", "%unsubscribeurl%": "http:\/\/hb.local\/forums\/settings\/", "%subscribeurl%": "http:\/\/hb.local\/forums\/subscribe-thread\/id\/143\/token\/1b20eb7799829e22ba2d48ca0867d3ce"} } Now, while all the data defined in "sub" gets substituted, I cannot make sections work. In the final email I still get %postername%. When I move this data to sub and repeat it for each email, everything works fine. Does anyone have a clue what I am doing wrong? Docs for sections are here: http://docs.sendgrid.com/documentation/api/smtp-api/developers-guide/section-tags/

  • PHP session_write_close() causes empty response

    - by Xeoncross
    When using session_write_close() in a shutdown function at the end of my script - PHP just dies. There is no error logged, response headers (firebug), or data (even whitespace!) returned. I have full PHP error reporting on with STRICT enabled and PHP 5.2.1. My guess is that since session_write_close() is being called after shutdown - some fatal error is being encountered that crashes PHP before it has a chance to send the output or log anything. This only happens on the logout page where I first: ... //If there is no session to delete (not started) if ( ! session_id()) { return; } // Get the session name $name = session_name(); // Delete the session cookie (if exists) if ( ! empty($_COOKIE[$name])) { //Get the current cookie config $params = session_get_cookie_params(); // Delete the cookie from globals unset($_COOKIE[$name], $_SESSION); //Delete the cookie on the user_agent setcookie($name, '', time()-43200, $params['path'], $params['domain'], $params['secure']); } // Destroy the session session_destroy(); ... then 2) do some more stuff 3) issue a redirect and 4) finally, after the whole page is done the register_shutdown_function(); I placed earlier runs and calls session_write_close() which saves the session to the database. The end. Since this blank response only occurs on logout I'm guessing that I'm not restarting the session properly which is causing session_write_close() to die fatally at the end of the script.

  • ASP.NET MVC JSON

    - by user310657
    Hi, I am working on an MVC project and having a problem with JSON. I have created a demo project with a list of colors: public JsonResult GetResult() { List<string> strList = new List<string>(); strList.Add("white"); strList.Add("blue"); strList.Add("black"); strList.Add("red"); strList.Add("orange"); strList.Add("green"); return this.Json(strList); } I am able to get these on my page, but when I try to delete one color, I send the following using jQuery: function deleteItem(item) { $.ajax({ type: "POST", url: "/Home/Delete/white", data: "{}", contentType: "application/json; charset=utf-8", success: ajaxCallSucceed, dataType: "json", failure: ajaxCallFailed }); } The controller action: public JsonResult Delete(string Color) {} Color is always null, even though I have specified "/Home/Delete/white" in the URL. I know I am doing something wrong or missing something, but I am not able to find out what. Please can anyone guide me in the right direction.
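
    One likely explanation, assuming the app uses the default {controller}/{action}/{id} route: the trailing "white" in /Home/Delete/white binds to a parameter named id, not Color, so Color stays null. A sketch of the two usual fixes (names are illustrative):

    ```csharp
    using System.Web.Mvc;

    // Sketch: with the default "{controller}/{action}/{id}" route, the trailing
    // segment of /Home/Delete/white binds to a parameter named "id", so naming
    // the action parameter "id" (or renaming the route token) makes the value
    // arrive instead of null.
    public class HomeController : Controller
    {
        [HttpPost]
        public JsonResult Delete(string id)      // id == "white" for /Home/Delete/white
        {
            // ... remove the colour from the underlying list here ...
            return Json(new { deleted = id });
        }

        // Alternative: keep Delete(string Color) and send the value in the POST
        // body instead, e.g. data: "Color=white" with the default form-encoded
        // content type in the $.ajax call.
    }
    ```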

  • WCF: Serializing and Deserializing generic collections

    - by Fabiano
    I have a class Team that holds a generic list: [DataContract(Name = "TeamDTO", IsReference = true)] public class Team { [DataMember] private IList<Person> members = new List<Person>(); public Team() { Init(); } private void Init() { members = new List<Person>(); } [System.Runtime.Serialization.OnDeserializing] protected void OnDeserializing(StreamingContext ctx) { Log("OnDeserializing of Team called"); Init(); if (members != null) Log(members.ToString()); } [System.Runtime.Serialization.OnSerializing] private void OnSerializing(StreamingContext ctx) { Log("OnSerializing of Team called"); if (members != null) Log(members.ToString()); } [System.Runtime.Serialization.OnDeserialized] protected void OnDeserialized(StreamingContext ctx) { Log("OnDeserialized of Team called"); if (members != null) Log(members.ToString()); } [System.Runtime.Serialization.OnSerialized] private void OnSerialized(StreamingContext ctx) { Log("OnSerialized of Team called"); Log(members.ToString()); } When I use this class in a WCF service, I get the following log output: OnSerializing of Team called System.Collections.Generic.List`1[Person] OnSerialized of Team called System.Collections.Generic.List`1[Person] OnDeserializing of Team called System.Collections.Generic.List`1[ENetLogic.ENetPerson.Model.FirstPartyPerson] OnDeserialized of Team called ENetLogic.ENetPerson.Model.Person[] After the deserialization, members is an array and no longer a generic list, although the field type is IList<Person> (?!) When I try to send this object back over the WCF service I get the log output: OnSerializing of Team called ENetLogic.ENetPerson.Model.FirstPartyPerson[] After this my unit test crashes with a System.ExecutionEngineException, which means the WCF service is not able to serialize the array (maybe because it expected an IList<Person>). So, my question is: does anybody know why the type of my IList<Person> is an array after deserializing and why I can't serialize my Team object any longer after that? Thanks
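
    A minimal workaround sketch, in case the root cause can't be pinned down: normalise the field back to a List<Person> once deserialization finishes, so later code and re-serialization always see the same concrete collection type. This would replace the body of the existing OnDeserialized callback:

    ```csharp
    // Sketch: after deserialization, copy whatever concrete collection the
    // serializer materialised (possibly a Person[] array) back into a
    // List<Person>.
    [System.Runtime.Serialization.OnDeserialized]
    protected void OnDeserialized(StreamingContext ctx)
    {
        Log("OnDeserialized of Team called");
        members = (members == null) ? new List<Person>() : new List<Person>(members);
    }
    ```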

  • Forms authentication in Silverlight

    - by Matt
    I have a website using forms authentication. Everything runs sweet there. I've got a Silverlight app that uses duplex messaging to talk to a WCF service. I'd like to be able to authenticate users in my service. I realize that by doing this <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> my service would then have access to HttpContext.Current and I could easily authenticate a user. But herein lies the problem: aspNetCompatibilityEnabled="true" combined with duplex messaging results in very, very, very slow communication between Silverlight and the website (10 seconds or more). Unless I have a configuration wrong, I'm going to assume that this is a bug in WCF / Silverlight. So basically I'm looking for a workaround. One idea I wanted to try was to read the ASPSESSID cookie from the browser and send that value over the wire. But I don't know what to do with the cookie on the service side. Is there some way to authenticate a user by sending their cookie data over duplex messaging?
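
    One hedged sketch of the cookie idea: rather than the session id, pass the forms-authentication ticket (the auth cookie value) on the first duplex message and validate it in the service with FormsAuthentication.Decrypt, with no aspNetCompatibilityEnabled needed. This assumes the Silverlight client can read the cookie via HtmlPage.Document.Cookies, which only works if the auth cookie is not marked HttpOnly, so it is an assumption rather than a guaranteed fit:

    ```csharp
    using System.Web.Security;

    // Service-side sketch: validate a forms-auth cookie value sent by the
    // Silverlight client on the first duplex message. ValidateTicket is a
    // hypothetical helper name.
    public static class DuplexAuth
    {
        public static string ValidateTicket(string encryptedTicketFromClient)
        {
            try
            {
                FormsAuthenticationTicket ticket =
                    FormsAuthentication.Decrypt(encryptedTicketFromClient);
                return (ticket != null && !ticket.Expired) ? ticket.Name : null;
            }
            catch (System.ArgumentException)
            {
                return null;   // malformed or tampered ticket
            }
        }
    }
    ```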

  • How does Visual Studio decide the order in which stack variables should be allocated?

    - by Jason
    I'm trying to turn some of the programs in gera's Insecure Programming by example into client/server applications that could be used in capture the flag scenarios to teach exploit development. The problem I'm having is that I'm not sure how Visual Studio (I'm using 2005 Professional Edition) decides where to allocate variables on the stack. When I compile and run example 1: int main() { int cookie; char buf[80]; printf("buf: %08x cookie: %08x\n", &buf, &cookie); gets(buf); if (cookie == 0x41424344) printf("you win!\n"); } I get the following result: buf: 0012ff14 cookie: 0012ff64 buf starts at an address eighty bytes lower than cookie, and any four bytes that are copied in buf after the first eighty will appear in cookie. The problem I'm having is when I place this code in some other function. When I compile and run the following code, I get a different result: buf appears at an address greater than cookie's. void ClientSocketHandler(SOCKET cs){ int cookie; char buf[80]; char stringToSend[160]; int numBytesRecved; int totalNumBytes; sprintf(stringToSend,"buf: %08x cookie: %08x\n",&buf,&cookie); send(cs,stringToSend,strlen(stringToSend),NULL); The result is: buf: 0012fd00 cookie: 0012fcfc Now there is no way to set cookie to arbitrary data via overwriting buf. Is there any way to tell Visual Studio to allocate cookie before buf? Is there any way to tell beforehand how the variables will be allocated? Thanks, Jason

  • Cannot update a single field using Linq to Sql

    - by KallDrexx
    I am having a hard time updating a single field without having to retrieve the whole record prior to saving. For example, in my web application I have an in-place editor for the Name and Description fields of an object. Once you edit either field, it sends the new field value (with the object's ID) to the web server. What I want is for the web server to take that value and ID and update only the one field. There are only two ways Google tells me to do this: 1) When I get the value I want to change and the ID, retrieve the record from the database, update the field in the C# object, and then send it back to the server. I don't like this method because it includes a completely unnecessary database read call (which involves two tables due to the way my schema is). 2) Set UpdateCheck for all the fields (except the primary keys) to UpdateCheck.Never. This doesn't work for me (I think) due to my mapping layer between LINQ to SQL and my Entity/ViewModel layer. When I convert my entity into the LINQ to SQL db object it seems to update those fields regardless of the UpdateCheck setting. This might be just because of integers, since not setting an int means it is zero (and no, I can't use int? instead). Are there any other options?
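
    For completeness, a third option sometimes used with LINQ to SQL is to attach a stub object that carries only the primary key, set the one property, and submit. This is only a sketch: it assumes the non-key columns are UpdateCheck.Never (or there is a timestamp/version column) so no original values are needed for the concurrency check, and MyDataContext, Items, Item, Id and Name are placeholder names:

    ```csharp
    // Sketch: update a single column without reading the row first by attaching
    // a key-only stub entity to the table and changing just one member.
    static void UpdateName(int itemId, string newName)
    {
        using (var db = new MyDataContext())
        {
            var stub = new Item { Id = itemId };
            db.Items.Attach(stub);
            stub.Name = newName;        // only this member is marked as changed
            db.SubmitChanges();
        }
    }
    ```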

  • Subscription Management with Merchant Account via API

    - by Josh
    I'm researching gateways/vendors that provide the ability to create subscription-based transactions for merchant accounts. In other words, I want to allow customers to sign up for a subscription to a website service that charges once a month. Authorize.Net has an ARB (Automated Recurring Billing) Module. The cost is cheap, $10 a month for the service, with unlimited subscriptions, and they have an API that allows XML or SOAP access to create, update and cancel. The LARGE negative of the service is that it doesn't have an elegant way to obtain the current status of a subscription. They can send a daily email with an attached CSV file, or someone can log in to the site and review statuses - neither is an enterprise solution. The parent company "CyberSource" has a "Recurring Billing Service" which implies a more robust solution, including API access to subscription information. I'm currently waiting for a sales call back on costs related to the service. I also looked at PayPal's Recurring Billing Service, but that appears to require that users are redirected to the PayPal site to sign up for the subscription -- again, not an elegant solution. Does anyone know of any other vendors/gateways that offer a subscription service and meet the following criteria: 1) the vendor/gateway must host the credit card number and be PCI compliant; 2) have an API that is accessible via a web service, POST over HTTPS, or SOAP; 3) have an API that allows querying the status of subscriptions and/or the ability to query for activity since a certain date. Thanks in advance for your suggestions.
