Search Results

Search found 5875 results on 235 pages for 'https'.


  • How can I insert a new sitemap with the Google GData API? It returns 400 Bad Request

    - by wingoo
    I am trying to insert a new sitemap into Google Webmaster Tools using the GData API, but I cannot get it to work. This is the first method:

        var fullDomainUrl = "http://www.example.com/";
        var entry = new SitemapsEntry();
        entry.Id = new AtomId(fullDomainUrl + "sitemap.xml");
        entry.Categories.Add(new AtomCategory("http://schemas.google.com/webmasters/tools/2007#site-info", new AtomUri("http://schemas.google.com/g/2005#kind")));
        entry.SitemapType = "WEB";
        myService.Insert(new Uri(string.Format("https://www.google.com/webmasters/tools/feeds/{0}/sitemaps/", HttpUtility.UrlEncode(fullDomainUrl))), entry);

    This returns a 400 Bad Request, so I tried another method:

        var settings = new RequestSettings("TesterApp1", domain.GoogleAuthToken, CommonService.GetRsaPrivateKey(Context));
        var request = new WebmasterToolsRequest(settings);
        var sitemap = new Sitemap();
        sitemap.Id = fullDomainUrl + "sitemap.xml";
        sitemap.Categories.Add(new AtomCategory("http://schemas.google.com/webmasters/tools/2007#site-info", new AtomUri("http://schemas.google.com/g/2005#kind")));
        sitemap.SitemapType = "WEB";
        //request.AddSitemap(fullDomainUrl, sitemap);
        request.Insert(new Uri(string.Format("https://www.google.com/webmasters/tools/feeds/{0}/sitemaps/", HttpUtility.UrlEncode(fullDomainUrl))), sitemap);

    This also returns a 400 Bad Request, and posting the Atom entry directly with HttpWebRequest returns a 400 Bad Request as well. I can insert and update sites successfully, but I cannot insert a new sitemap. Can anyone share working .NET code for this?
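    One thing worth checking (a hedged guess, not a confirmed fix): both attempts label the entry with the #site-info kind, which identifies a site entry; the sitemaps feed may expect a sitemap kind instead. A minimal sketch, reusing the classes from the question and assuming the feed accepts the sitemap-regular term from the Webmaster Tools GData documentation:

        // Sketch only: same Insert call, but the entry is declared as a sitemap
        // kind. The "#sitemap-regular" term is an assumption taken from the
        // Webmaster Tools GData docs, not verified against the live feed.
        var entry = new SitemapsEntry();
        entry.Id = new AtomId(fullDomainUrl + "sitemap.xml");
        entry.Categories.Add(new AtomCategory(
            "http://schemas.google.com/webmasters/tools/2007#sitemap-regular",
            new AtomUri("http://schemas.google.com/g/2005#kind")));
        entry.SitemapType = "WEB";
        myService.Insert(new Uri(string.Format(
            "https://www.google.com/webmasters/tools/feeds/{0}/sitemaps/",
            HttpUtility.UrlEncode(fullDomainUrl))), entry);

    If the feed still answers 400, the response body usually carries a short XML explanation of which element was rejected, which narrows things down further.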


  • Retrieve events where uid is the creator and application id is the admin - Facebook API

    - by Anup Parekh
    I would like to know whether there is a Facebook API call to retrieve the events (eids) of all the events a user has created with my Facebook Connect application. The events are created using the following REST API call:

        https://api.facebook.com/method/events.create?event_info=' . $e_i . '&access_token=' . $cookie['access_token']

    $e_i is the event info array, in which the 'host' value is set to 'Me':

        $event_info['host'] = 'Me';

    On the Facebook event page, the "Created by:" section lists "My user name, Application Name". I presume this is because I am the creator and the application is the admin, as stated in the REST API documentation (http://developers.facebook.com/docs/reference/rest/events.create/). Unfortunately, I cannot find a way (in either the REST or the Graph API) to return a list of events where I am the creator and the application is the admin, as in the scenario above. So far I have tried:

        REST API: events.get with uid=application_id. This only returns events created by the application, not those that also record the user who created them.
        Graph API: https://graph.facebook.com/me/events?fields=owner&access_token=... This returns all events for 'me', but cannot filter on the application also being the admin.

    It seems strange that the API exposes no link between the event creator and the event admin, yet Facebook itself pulls both and displays them on the event details page. If this is possible, I would really appreciate some assistance with how it is done.


  • Problem with CruiseControl.NET and VisualSVN

    - by Andrew
    Hi. I wonder if anyone can help: I am seeing a strange issue with my configuration of CruiseControl.NET and VisualSVN. The current ccnet.config is:

        <sourcecontrol type="svn">
          <trunkUrl>https://bladerunner.azullo.local:8443/svn/application/trunk</trunkUrl>
          <executable>C:\Program Files (x86)\VisualSVN Server\bin\svn.exe</executable>
          <username>test</username>
          <password>test</password>
          <workingDirectory>D:\Development\Build\application\</workingDirectory>
        </sourcecontrol>
        <publishers>
          <xmllogger/>
        </publishers>
        <modificationDelaySeconds>10</modificationDelaySeconds>
        </project>

    When I run this I expect it to go to https://bladerunner.azullo.local:8443/svn/application/trunk; instead I get the following:

        ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        svn: OPTIONS of 'http://bladerunner.azullo.local:8080/svn/application/trunk':
        could not connect to server (http://bladerunner.azullo.local:8080).
        Process command: C:\Program Files (x86)\VisualSVN Server\bin\svn.exe update
        D:\Development\build\application\ --username test --password ** --no-auth-cache --non-interactive
          at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
          at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.UpdateSource(IIntegrationResult result)
          at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Build(IIntegrationResult result)
          at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    So for some reason it goes to 'http://bladerunner.azullo.local:8080/svn/application/trunk'. If I remove the username and password elements from ccnet.config, it goes to the correct URL. I don't understand this behaviour. I have configured VisualSVN with a certificate from Active Directory Certificate Services; if that were the problem, I would expect an error about the certificate rather than a changed URL. I have cleared out state, etc. Any ideas?
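    A hedged observation: the failing command is svn update, which takes no URL at all - it always talks to the URL recorded inside the existing working copy. If D:\Development\Build\application\ was originally checked out from http://bladerunner.azullo.local:8080/..., every update will go back there no matter what trunkUrl says. Deleting the working directory so CruiseControl.NET performs a fresh checkout from trunkUrl should confirm this; alternatively, the working copy can be repointed in place (svn 1.6-era syntax):

        svn switch --relocate http://bladerunner.azullo.local:8080/svn/application/trunk https://bladerunner.azullo.local:8443/svn/application/trunk D:\Development\Build\application\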


  • Problem with GWT behind a reverse proxy - either nginx or apache

    - by Don Branson
    I'm having a problem with GWT when it sits behind a reverse proxy. The backend app is deployed within a context - let's call it /context. The GWT app works fine when I hit it directly:

        http://host:8080/context/

    I can configure a reverse proxy in front of it. Here's my nginx example:

        upstream backend {
          server 127.0.0.1:8080;
        }
        ...
        location / {
          proxy_pass http://backend/context/;
        }

    But when I run through the reverse proxy, GWT gets confused, saying:

        2009-10-04 14:05:41.140:/:WARN: Login: ERROR: The serialization policy file '/C7F5ECA5E3C10B453290DE47D3BE0F0E.gwt.rpc' was not found; did you forget to include it in this deployment?
        2009-10-04 14:05:41.140:/:WARN: Login: WARNING: Failed to get the SerializationPolicy 'C7F5ECA5E3C10B453290DE47D3BE0F0E' for module 'https://hostname:444/'; a legacy, 1.3.3 compatible, serialization policy will be used. You may experience SerializationExceptions as a result.
        2009-10-04 14:05:41.292:/:WARN: StoryService: ERROR: The serialization policy file '/0445C2D48AEF2FB8CB70C4D4A7849D88.gwt.rpc' was not found; did you forget to include it in this deployment?
        2009-10-04 14:05:41.292:/:WARN: StoryService: WARNING: Failed to get the SerializationPolicy '0445C2D48AEF2FB8CB70C4D4A7849D88' for module 'https://hostname:444/'; a legacy, 1.3.3 compatible, serialization policy will be used. You may experience SerializationExceptions as a result.

    In other words, GWT isn't getting the word that it needs to prepend /context/ when looking for C7F5ECA5E3C10B453290DE47D3BE0F0E.gwt.rpc, but only when the request comes through the proxy. A workaround is to add the context to the URL of the web site:

        location /context/ {
          proxy_pass http://backend/context/;
        }

    but that means the context is now part of the URL that the user sees, and that's ugly. Anybody know how to make GWT happy in this case? Software versions: GWT 1.7.0 (same problem with 1.7.1); Jetty 6.1.21 (the same problem existed under Tomcat); nginx 0.7.62 (same problem under Apache 2.x). I've looked at the traffic between the proxy and the backend using DonsProxy, but there's nothing noteworthy there.
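    One approach worth trying (hedged - this is the standard pattern for proxied GWT-RPC, not a verified fix for this exact setup): the policy lookup happens server-side in RemoteServiceServlet, which derives the .gwt.rpc path from the moduleBaseURL the client sends - here the proxied root, which lacks /context. Overriding the lookup in each service servlet to rebuild the base URL from the server's own context path sidesteps the proxy entirely:

        // Inside each RemoteServiceServlet subclass (GWT 1.7-era API).
        @Override
        protected SerializationPolicy doGetSerializationPolicy(
                HttpServletRequest request, String moduleBaseURL, String strongName) {
            // Rebuild the module base from the backend's view of the request so
            // the /context prefix is present; adjust if the compiled module
            // lives in a subfolder of the war rather than at its root.
            String fixedBase = request.getScheme() + "://" + request.getServerName()
                    + ":" + request.getServerPort() + request.getContextPath() + "/";
            return super.doGetSerializationPolicy(request, fixedBase, strongName);
        }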


  • PayPal return URL

    - by Sam
    Here's the code for my Paypal button:

        <form action="https://www.sandbox.paypal.com/cgi-bin/webscr" method="post">
          <input type="hidden" name="cmd" value="_xclick">
          <input type="hidden" name="business" value="[email protected]">
          <input type="hidden" name="lc" value="GB">
          <input type="hidden" name="button_subtype" value="products">
          <input type="hidden" name="no_note" value="1">
          <input type="hidden" name="no_shipping" value="1">
          <input type="hidden" name="rm" value="0">
          <input type="hidden" name="return" value="http://www.example.com">
          <input type="hidden" name="item_name" value="My Item">
          <input type="hidden" name="amount" value="25.00">
          <input type="hidden" name="currency_code" value="GBP">
          <input type="hidden" name="bn" value="PP-BuyNowBF:proceed_btn.gif:NonHosted">
          <input type="hidden" name="item_number" value="4BD9569402CDE">
          <input type="image" src="http://www.example.com/image.gif" border="0" name="submit" alt="PayPal - The safer, easier way to pay online.">
          <img alt="" border="0" src="https://www.paypal.com/en_GB/i/scr/pixel.gif" width="1" height="1">
        </form>

    Is it possible to add the item_number to the return URL? For example, after completing the payment within PayPal, the user gets sent back to http://www.example.com?item_number=4BD9569402CDE
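    A hedged suggestion: since the button markup is generated per item anyway, the simplest route is to bake the item_number into the return value itself - PayPal redirects to the return URL as given, query string included (a sketch, not verified against every checkout flow):

        <input type="hidden" name="return" value="http://www.example.com/?item_number=4BD9569402CDE">

    If the transaction details themselves are needed back, the rm field controls how PayPal returns the buyer (2 posts the payment variables to the return URL); for merely echoing a known item number, the query-string approach above avoids that entirely.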


  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I have trouble creating a Bitmap from a byte array. I post this after a careful scrutiny of the existing posts about Bitmap creation from byte arrays, such as: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array.

    My code executes a filter on an 8bppIndexed digital image, writing the pixel values to a byte[] buffer that is then converted (after some processing to manage grey levels) back into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific Perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct

    After executing the filter, the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. To stay focused on the issue, I have commented out the code responsible for executing the filter, so that the operation actually performed is an obvious:

        ComputedPixel = InputImage.GetPixel(myColumn, myRow).R;

    (I know I should use LockBits rather than GetPixel, but one headache at a time.) This code should amount to an identity transform, and at the end I use:

        private unsafe void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);
            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;
            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length);
            OutputImage.UnlockBits(data);
        }

    The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e

    It is quite clear that I am losing a pixel per row, but I cannot understand why: I have carefully checked all the parameters (OutputImageCols, OutputImageRows, and the length and content of byteBuffer), even writing known values as a test. The code is nearly identical to other code posted on Stack Overflow and elsewhere. Could someone help me identify where the problem is? Thanks a lot.
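    A likely culprit, hedged but consistent with the symptom: LockBits pads each row to a 4-byte boundary, so for a 254-pixel-wide 8bpp image data.Stride is 256, not 254. A single Marshal.Copy of a tightly packed buffer therefore shifts every row two bytes further out of place, which on screen looks exactly like losing a pixel per row. Copying row by row respects the padding; a sketch assuming byteBuffer holds OutputImageRows * OutputImageCols tightly packed bytes:

        var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
        for (int row = 0; row < OutputImageRows; row++)
        {
            // Each destination row starts on the padded stride boundary,
            // while the source rows are packed end to end.
            IntPtr destRow = new IntPtr(data.Scan0.ToInt64() + (long)row * data.Stride);
            Marshal.Copy(byteBuffer, row * OutputImageCols, destRow, OutputImageCols);
        }
        OutputImage.UnlockBits(data);

    The one-shot Marshal.Copy only works when the width happens to be a multiple of 4 (as it was for the original 256-wide image), which would explain why the same code appears to work elsewhere.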


  • Double script tags in Google Analytics tracking code

    - by Tom
    This is more a curiosity question than anything else... Google instructs you to add the Analytics tracking code as follows:

        <script type="text/javascript">
        var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
        document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
        </script>
        <script type="text/javascript">
        try {
            var pageTracker = _gat._getTracker("UA-xxxxxx-x");
            pageTracker._trackPageview();
        } catch(err) {}
        </script>

    I'm wondering whether some JS guru here could tell me why it is separated into two script tags instead of sticking it all inside one. I know the top part could be put in the header and the bottom part just before the body tag to ensure the page has loaded before it's tracked, but I'm wondering if there's something more to it. Anyone who'd know that would likely know how to separate the code into two tags anyway. I'm only asking because this comes from the Goog and is used by millions of sites... Thanks
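    For what it's worth, a hedged explanation: document.write only inserts the <script src=...> tag into the page; the browser does not fetch and execute ga.js until the current script block has finished running. Collapsed into one tag, the tracker call would run before ga.js has defined _gat, as this sketch of the broken single-tag version shows:

        <script type="text/javascript">
        var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
        document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
        // ga.js has NOT been fetched or executed yet at this point,
        // so the next line would throw: _gat is still undefined.
        var pageTracker = _gat._getTracker("UA-xxxxxx-x");
        pageTracker._trackPageview();
        </script>

    Splitting the code into a second <script> block guarantees the injected script is loaded and executed first.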


  • PayPal Express Checkout integration in ASP classic

    - by Noam Smadja
    I have been trying to figure this out for almost a week now, with no success. I am coding in classic ASP and would love some help. I am following the steps from the PayPal wizard here: https://www.paypal-labs.com/integrationwizard/ecpaypal/code.php

    I collect all the information on my website; I just want to pass it to PayPal when the buyer clicks checkout. So I pointed the checkout form at expresscheckout.asp, as instructed by the wizard. But when I click the PayPal button I get a white page with nothing on it - no errors, no nothing. It just hangs there on expresscheckout.asp. The shopping cart page shows a review of the cart, and I save the total amount into SESSION("Payment_Amount"). The form is:

        <form action='cart/expresscheckout.asp' METHOD='POST'>
          <input type='image' name='submit' src='https://www.paypal.com/en_US/i/btn/btn_xpressCheckout.gif' border='0' align='top' alt='Check out with PayPal'/>
        </form>


  • Editing a User's Likes on Facebook

    - by Ed Marty
    I've been looking at the Facebook API to find some way to edit a user's Likes (that is, add or remove items from https://graph.facebook.com/me/likes/). The API doesn't say anything about it specifically, but it does say this: "You can publish to the Facebook graph by issuing HTTP POST requests to the appropriate connection URLs above", where one of the connection URLs listed is the aforementioned https://graph.facebook.com/me/likes. However, there's no documentation for posting to PROFILE_ID/likes, and whenever I try it the call returns the error "invalid post_id". I assume this is because to like something you post a request to POST_ID/likes, which is a bit inconsistent.

    What I'm trying to do is have the user's profile add a Page to their Likes (by posting with the Page's id as an "id" parameter in the post body). However, it seems there is simply no way to edit a user's Likes. At the end of the day, I just want a user to be able to click a button in my application (a mobile application, not a web app) and have our Facebook Page added to their Likes; I've found no way of doing that short of presenting our Page to them and making them click the "Like" button manually. Many other things are supported without showing the Facebook website, like posting to their wall or creating albums, but I can't find anything for this. Any ideas?


  • How to log in to craigslist through C#

    - by kosikiza
    I'm using the following code to log in to craigslist, but I haven't succeeded yet.

        string formParams = string.Format("inputEmailHandle={0}&inputPassword={1}", "[email protected]", "pakistan01");
        //string postData = "[email protected]&inputPassword=pakistan01";
        string uri = "https://accounts.craigslist.org/";
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.KeepAlive = true;
        request.ProtocolVersion = HttpVersion.Version10;
        request.Method = "POST";
        byte[] postBytes = Encoding.ASCII.GetBytes(formParams);
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = postBytes.Length;
        Stream requestStream = request.GetRequestStream();
        requestStream.Write(postBytes, 0, postBytes.Length);
        requestStream.Close();
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        cookyHeader = response.Headers["Set-cookie"];
        string pageSource;
        string getUrl = "https://post.craigslist.org/del";
        WebRequest getRequest = WebRequest.Create(getUrl);
        getRequest.Headers.Add("Cookie", cookyHeader);
        WebResponse getResponse = getRequest.GetResponse();
        using (StreamReader sr = new StreamReader(getResponse.GetResponseStream()))
        {
            pageSource = sr.ReadToEnd();
        }
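    A hedged sketch of an alternative: let HttpWebRequest manage the cookie jar with a CookieContainer instead of copying the raw Set-Cookie header by hand - login flows typically set several cookies across a redirect, and a single header copy misses them. The URL and field names below are taken from the question, not verified against the live site, and the credentials are placeholders:

        var cookies = new CookieContainer();
        string formParams = "inputEmailHandle=" + HttpUtility.UrlEncode("user@example.com")
                          + "&inputPassword=" + HttpUtility.UrlEncode("secret");

        var login = (HttpWebRequest)WebRequest.Create("https://accounts.craigslist.org/");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;        // session cookies accumulate here
        byte[] body = Encoding.ASCII.GetBytes(formParams);
        login.ContentLength = body.Length;
        using (Stream s = login.GetRequestStream())
            s.Write(body, 0, body.Length);
        using (login.GetResponse()) { }         // redirects are followed, cookies kept

        var page = (HttpWebRequest)WebRequest.Create("https://post.craigslist.org/del");
        page.CookieContainer = cookies;         // reuse the same jar for the next request
        using (var sr = new StreamReader(page.GetResponse().GetResponseStream()))
        {
            string pageSource = sr.ReadToEnd();
        }

    It is also worth loading the login page in a browser and checking the form's actual action URL and any hidden fields; if the form posts somewhere other than the account root, the POST above has to match it.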


  • .NET proxy detection

    - by Ziplin
    I am having an issue with .NET detecting the proxy settings configured through Internet Explorer. I'm writing a client application that supports proxies, and to test it I set up an array of nine Squid servers supporting various authentication methods for HTTP and HTTPS. I have a script that updates IE to whichever configuration I choose (which proxy; detection via "Auto", PAC, or hardcoded). I have tried the three methods below to detect the IE configuration through .NET, and on occasion .NET picks up the wrong set of proxy servers. IE has the correct settings, and if I browse the web with IE, I can see via Wireshark that I am hitting the correct servers.

        WebRequest.GetSystemWebProxy().GetProxy(destination);
        GlobalProxySelection.Select.GetProxy(destination);
        WebRequest.DefaultWebProxy

    Here is what I have observed:

        My script sets a PAC file on a web server, updates the configuration in IE, then clears IE's cache.
        .NET seems to get "stuck" on a certain proxy configuration, and I have to set another configuration for .NET to realize there was a change.
        Occasionally it picks a seemingly random set of servers (surely not actually random - just a set I used once, from some cached PAC file or similar). For example, I will check the proxy for the destination "https://www.secure.com" expecting "http://squidserver:18" and instead get "http://squidserver:28" (port 18 runs NTLM, 28 runs without authentication). All the Squid servers work.
        This does not appear to be an issue on XP, only Vista, Server 2003, and Windows 7.
        Hardcoding the proxy servers in IE ALWAYS works.
        Time always solves the issue - if I leave the computer for 20 or 30 minutes and come back, .NET picks up the correct proxy settings, as if a cached PAC script expired.
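    The 20-30 minute self-healing strongly suggests a cached PAC evaluation rather than a detection bug: .NET resolves the system proxy once per process, and the WinHTTP/WinINET layers cache auto-detect results with their own lifetime (a hedged reading of the symptoms, not a confirmed diagnosis). A small harness like the sketch below, run as a brand-new process after each IE reconfiguration, helps separate process-level caching from OS-level caching - if a fresh process still reports the stale server, the cache sits below .NET:

        // Print what each detection API returns for a destination.
        static void DumpProxy(Uri destination)
        {
            Console.WriteLine("GetSystemWebProxy: {0}",
                WebRequest.GetSystemWebProxy().GetProxy(destination));
            Console.WriteLine("DefaultWebProxy:   {0}",
                WebRequest.DefaultWebProxy.GetProxy(destination));
        }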


  • What's the difference between the input types "text" and "password" in an HTML form?

    - by Domingo
    Hi everybody. This question might seem stupid, but here's the situation: I'm trying to create an auto-login page for my mail using jQuery's post request, but it's not working. It works with all other pages except webmail. So, trying to figure out what was wrong, I recreated the login form. Here's the code:

        <form id="form1" name="form1" method="post" action="https://login.hostmonster.com/">
          <label>User <input type="text" name="login" id="user" /> </label>
          <label>Pass <input name="password" type="password" id="pass" /> </label>
          <input name="doLogin" type="submit" id="doLogin" value="Login">
        </form>

    The strange thing is that when you change the input type of pass to text, the form doesn't work! I can't figure out why. Anyway, if you can tell me the real difference between the input types text and password (and not what it says everywhere on the net, that the only difference is that stars appear as you type), I would appreciate it. Also, do you think this is affecting my jQuery post? Here's the code for it:

        $j.post('https://login.hostmonster.com/', {
            login: '[email protected]',
            password: 'xxx'
        }, function(data, text){
            if (text == 'success') { alert('Success ' + data); }
            else { alert('Failed'); }
        });

    Thanks a lot! Regards, D
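    One hedged observation on the jQuery half: $.post from your page to https://login.hostmonster.com/ is a cross-domain request, and the browser's same-origin policy blocks reading the response (this predates CORS), so the callback will never report success whatever the input types are. The usual workaround is to post to a same-origin server-side script that relays the credentials; the endpoint below is hypothetical:

        // '/webmail-login-proxy.php' is a placeholder for a script on YOUR
        // domain that forwards the credentials server-side, where the
        // same-origin policy does not apply.
        $j.post('/webmail-login-proxy.php', {
            login: 'user@example.com',
            password: 'xxx'
        }, function (data, text) {
            alert(text == 'success' ? 'Success ' + data : 'Failed');
        });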


  • How important is it to use SSL?

    - by Mark
    Recently I installed a certificate on the website I'm working on. I've made as much of the site as possible work over HTTP, but after you log in, it has to remain on HTTPS to prevent session hijacking, doesn't it? Unfortunately, this causes some problems with Google Maps: I get warnings in IE saying "this page contains insecure content", and I don't think we can afford Google Maps Premier right now to get their secure service. It's an auction site of sorts, so it's fairly important that people don't get charged for things they didn't purchase because some hacker got into their account. All payments go through PayPal, so I'm not storing any credit card info, but I am keeping personal contact information. Fraudulent charges could be reversed fairly easily if it ever came to that. What do you suggest I do? Should I take the bulk of the site off HTTPS and only secure certain pages, such as wherever you enter your password?


  • Using mod_rewrite to redirect requests for jquery.js to the Google-hosted copy

    - by Aditya Advani
    Hi all. Our Linux server (Apache 2.x, Plesk 8.x) hosts a number of e-commerce websites. To take advantage of browser caching, we would like to serve Google's hosted copy of jquery.js. Hence, in the vhost.conf file of each domain we can use the following rewrite rule:

        RewriteCond %{REQUEST_FILENAME} jquery.min.js [NC]
        RewriteRule . http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js [L]

    And in vhost_ssl.conf:

        RewriteCond %{REQUEST_FILENAME} jquery.min.js [NC]
        RewriteRule . https://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js [L]

    These rules work fine in the individual vhost.conf files of each domain. However, we host over 200 domains, and I cannot get the rules to work globally in httpd.conf. The challenges are the following (a sketch addressing them follows this list):

        1. Get the RewriteRule to work in httpd.conf.
        2. Detect whether HTTPS is on, and if it is, rewrite to the https:// URL instead.
        3. Each individual domain will still have its own custom mod_rewrite rules. Which rules take precedence - global or per-domain? Do they combine?
        4. Is it OK to have the "RewriteEngine On" directive in the global httpd.conf and then again in each vhost.conf?

    Please let me know what your suggestions are. Desperate for a solution to this problem.
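    A hedged sketch addressing points 1-4 together: mod_rewrite rules defined in the server-wide context are not inherited by virtual hosts unless each vhost opts in with RewriteOptions Inherit, which is the usual reason global rules appear to do nothing. The HTTPS case can be folded into the same rule set with the %{HTTPS} variable, and repeating RewriteEngine On in both contexts is harmless:

        # httpd.conf (server-wide context)
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} jquery\.min\.js [NC]
        RewriteCond %{HTTPS} !=on
        RewriteRule . http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js [R,L]
        RewriteCond %{REQUEST_FILENAME} jquery\.min\.js [NC]
        RewriteCond %{HTTPS} =on
        RewriteRule . https://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js [R,L]

        # each vhost.conf that should pick up the global rules
        RewriteEngine On
        RewriteOptions Inherit

    With Inherit, the parent's rules are applied after the vhost's own rules, so per-domain rules keep precedence. Note that a rewrite to an external URL is necessarily a client-side redirect (hence the explicit R flag), so each request for jquery.min.js costs an extra round trip; referencing the Google URL directly in the page templates avoids even that.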


  • Getting a broken-link error while using App Engine service accounts

    - by jade
    I'm following this tutorial: https://developers.google.com/bigquery/docs/authorization#service-accounts-appengine

    Here is my main.py code:

        import httplib2
        from apiclient.discovery import build
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp.util import run_wsgi_app
        from oauth2client.appengine import AppAssertionCredentials

        # BigQuery API Settings
        SCOPE = 'https://www.googleapis.com/auth/bigquery'
        PROJECT_NUMBER = 'XXXXXXXXXX'  # REPLACE WITH YOUR Project ID

        # Create a new API service for interacting with BigQuery
        credentials = AppAssertionCredentials(scope=SCOPE)
        http = credentials.authorize(httplib2.Http())
        bigquery_service = build('bigquery', 'v2', http=http)

        class ListDatasets(webapp.RequestHandler):
            def get(self):
                datasets = bigquery_service.datasets()
                listReply = datasets.list(projectId=PROJECT_NUMBER).execute()
                self.response.out.write('Dataset list:')
                self.response.out.write(listReply)

        application = webapp.WSGIApplication(
            [('/listdatasets(.*)', ListDatasets)], debug=True)

        def main():
            run_wsgi_app(application)

        if __name__ == "__main__":
            main()

    Here is my app.yaml file:

        application: bigquerymashup
        version: 1
        runtime: python
        api_version: 1

        handlers:
        - url: /favicon\.ico
          static_files: favicon.ico
          upload: favicon\.ico
        - url: .*
          script: main.py

    And yes, I have added the App Engine service account name in the Google APIs Console Team tab with "can edit" permissions. When I ran this locally and tried to access it at localhost:8080 it gave the error, and I thought running locally might be the cause, so I uploaded my code to http://bigquerymashup.appspot.com/ - but it still says "Oops! This link appears to be broken."
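    A hedged observation: the only route the WSGI application registers is '/listdatasets(.*)', so a request for the bare root URL (http://bigquerymashup.appspot.com/) matches nothing - which the browser reports as a broken link. There is also a subtle routing issue: because the pattern contains a capture group, webapp passes the captured text as an argument to get(), while get(self) accepts none. A sketch fixing both:

        # Drop the capture group (its match would be passed into get())
        # and add a root route so the bare appspot URL resolves as well.
        application = webapp.WSGIApplication(
            [('/', ListDatasets),
             ('/listdatasets', ListDatasets)],
            debug=True)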


  • Check-for-modifications failure in continuous integration using VisualSVN Server and CruiseControl.NET

    - by harun123
    I am using CruiseControl.NET for continuous integration. I've created a repository for my project using VisualSVN Server (which uses Windows authentication). Both servers are hosted on the same system (OS: Microsoft Windows Server 2003 SP2). When I force-build the project using CruiseControl.NET, "Failed task(s): Svn: CheckForModifications" is shown as the message. The build report says the following:

        BUILD EXCEPTION
        Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        svn: OPTIONS of 'https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source':
        Server certificate verification failed: issuer is not trusted (https://sp-ci.sbsnetwork.local:8443).
        Process command: C:\Program Files\VisualSVN Server\bin\svn.exe log <sameUrlAbove>
        -r "{2010-04-29T08:35:26Z}:{2010-04-29T09:04:02Z}" --verbose --xml
        --username ccnetadmin --password cruise --non-interactive --no-auth-cache
          at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
          at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
          at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
          at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
          at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    My sourcecontrol node in ccnet.config is as shown below:

        <sourcecontrol type="svn">
          <executable>C:\Program Files\VisualSVN Server\bin\svn.exe</executable>
          <trunkUrl> check out url </trunkUrl>
          <workingDirectory> C:\ProjectWorkingDirectories\IntranetPortal\Source </workingDirectory>
          <username> ccnetadmin </username>
          <password> cruise </password>
        </sourcecontrol>

    Can anyone suggest how to avoid this error?
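    The usual fix, offered as a hedged suggestion: svn is running with --non-interactive, so it cannot show the "accept this certificate?" prompt and fails instead. Accepting the certificate once, interactively, under the same Windows account that runs the CruiseControl.NET service caches the trust decision so later non-interactive runs succeed:

        REM Run this as the CC.NET service account (e.g. via runas) and answer "p".
        "C:\Program Files\VisualSVN Server\bin\svn.exe" list https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source
        REM (R)eject, accept (t)emporarily or accept (p)ermanently?  ->  p

    Alternatively, issuing the server certificate from a CA the build machine already trusts (here, the AD Certificate Services root) removes the warning for every account at once.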


  • PHP: Post an ASPX form that uses 'WebForm_DoPostBackWithOptions' with cURL

    - by Oliver Cooper
    I am trying to post an ASPX form (a SharePoint page) from PHP. Posting data works with standard forms; I just can't get this form working. I did notice an onclick attribute on the submit button:

        onclick="javascript:WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions(&quot;ctl00$PlaceHolderMain$btnSaveTask&quot;, &quot;&quot;, true, &quot;&quot;, &quot;&quot;, false, false))"

    I have never used ASPX and have no idea what this is. It is not an authentication problem or a wrong URL, as the request returns the web page (which contains the form) fine. Here is my PHP code:

        function add($username, $password){
            $data = array(
                'ctl00$PlaceHolderSearchArea$ctl01$ctl00' => 'https://keystone.stpeters.sa.edu.au',
                'ctl00$PlaceHolderSearchArea$ctl01$ctl04' => '0',
                'ctl00$PlaceHolderMain$DateStart$DateStartDate' => '9/05/2012',
                'ctl00$PlaceHolderMain$DateDue$DateDueDate' => '10/05/2012',
                'ctl00$PlaceHolderMain$Title' => 'test again',
                'ctl00$PlaceHolderMain$btnSaveTask' => 'OK',
                '__spText1' => '',
                '__spText2' => ''
            );
            $fullurl = "https://keystone.stpeters.sa.edu.au/_layouts/StPeters.Keystone/MyTasks/MyTaskDetail.aspx?id=0&IsDlg=1";
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_HEADER, 0);
            curl_setopt($ch, CURLOPT_VERBOSE, 1);
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
            curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
            curl_setopt($ch, CURLOPT_FAILONERROR, 0);
            curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
            curl_setopt($ch, CURLOPT_USERPWD, "$username:$password");
            curl_setopt($ch, CURLOPT_URL, $fullurl);
            $returned = curl_exec($ch);
            curl_close($ch);
            return $returned;
        }
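    A hedged pointer: WebForms pages validate postbacks against hidden fields (__VIEWSTATE, __EVENTVALIDATION, and friends) that WebForm_DoPostBackWithOptions submits along with the visible inputs, so a POST that omits them is typically rejected even when everything else is right. A sketch of lifting those fields with a GET first and replaying them in the POST (the regex is naive - a DOM parser is safer - and this is untested against SharePoint):

        // Fetch the form page and capture the WebForms hidden fields.
        $ch = curl_init($fullurl);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
        curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
        curl_setopt($ch, CURLOPT_USERPWD, "$username:$password");
        $page = curl_exec($ch);
        curl_close($ch);

        $hidden = array();
        preg_match_all('/<input[^>]*name="(__[A-Za-z0-9]+)"[^>]*value="([^"]*)"/',
                       $page, $m, PREG_SET_ORDER);
        foreach ($m as $field) {
            $hidden[$field[1]] = $field[2];   // __VIEWSTATE, __EVENTVALIDATION, ...
        }

        // Then, in add() above, replace the CURLOPT_POSTFIELDS line with the
        // merged, urlencoded body (an array argument would switch curl to
        // multipart/form-data, which WebForms does not expect):
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array_merge($hidden, $data)));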


  • git-svn cannot create a branch to follow SVN branching

    - by Serhiy Yakovyn
    Hello everybody. I'm struggling with the following issue: when I continue fetching revisions from SVN with git svn fetch, I get the following error (https removed to be able to post the question):

        Found possible branch point: somecompany.com/product/trunk => somecompany.com/product/branches/deep/branches/product-001, 72666
        Found branch parent: (refs/remotes/deep/branches/product-001) b685b7b92813885fdf6b8e2663daf884bf504b14
        Following parent with do_switch
        Successfully followed parent
        error: 'refs/remotes/deep' exists; cannot create 'refs/remotes/deep/branches/product-001'
        fatal: Cannot lock the ref 'refs/remotes/deep/branches/product-001'.
        update-ref -m r72667 refs/remotes/deep/branches/product-001 df51920e8f0a53f26507c2679eb6a9dbad91e0d6: command returned error: 128

    This happened because I was fetching revisions using the default filter for SVN branches:

        [svn-remote "svn"]
            url = https://somecompany.com/someproduct
            fetch = trunk:refs/remotes/trunk
            branches = branches/*:refs/remotes/*
            tags = tags/*:refs/remotes/tags/*

    I have since added the line below, but it's too late:

        branches = branches/deep/branches/*:refs/remotes/deep/branches/*

    I tried to fix this with git reset to remove the commits. As the error message shows, git is attempting the right thing, but it cannot create the new ref because a ref named remotes/deep already exists. I have looked for two possible solutions: (1) remove the remotes/deep ref - but since it is tracked by git as a remote, I could not find a way to do it; (2) remove the whole history related to that branch - no success either :( Does anybody know how to deal with this issue? Thank you in advance, Serhiy Y
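    A hedged suggestion: refs live in a single namespace where a name cannot be both a ref and a directory of refs, so refs/remotes/deep (created by the old branches/* glob) blocks everything under refs/remotes/deep/. Deleting that one ref and re-fetching lets git-svn recreate the branch under the new layout; record the SHA first in case you want it back:

        git rev-parse refs/remotes/deep        # note the old SHA, just in case
        git update-ref -d refs/remotes/deep    # delete the blocking ref
        git svn fetch                          # resume; the nested ref can now be created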


  • Qt QSslError being signaled with the error code set to NoError

    - by Nantucket
    My problem: I compiled OpenSSL into Qt to enable OpenSSL support, and everything appeared to go correctly in the compile. However, when I use the official HTTP example application, every time I download an https page it signals two QSslErrors, each with contents NoError. The QSslError types, including NoError, are documented only poorly: there is no explanation of why a NoError error type even exists, or what it means. Bizarrely, the NoError error code appears to be harmless - the remote https document downloads perfectly even while the error is being signaled. Does anyone have any idea what this means and what could be causing it?

    Optional background reading: here is the relevant part of the example code (connected to the network reply's sslErrors signal by the constructor):

        void HttpWindow::sslErrors(QNetworkReply*, const QList<QSslError> &errors)
        {
            QString errorString;
            foreach (const QSslError &error, errors) {
                if (!errorString.isEmpty())
                    errorString += ", ";
                errorString += error.errorString();
            }

            if (QMessageBox::warning(this, tr("HTTP"),
                                     tr("One or more SSL errors has occurred: %1").arg(errorString),
                                     QMessageBox::Ignore | QMessageBox::Abort) == QMessageBox::Ignore) {
                reply->ignoreSslErrors();
            }
        }

    I have tried the old version of this example, and it produced the same result. I have tried OpenSSL 1.0.0a and 0.9.8o. I have tried compiling OpenSSL myself and using pre-compiled versions from the net. All produce the same result. If this were my first time using Qt with SSL, I would almost think this was the intended behaviour (even though the example pops up error warning windows), except that the last time I used Qt with SSL - an older Qt with an older OpenSSL - I distinctly remember everything working with no error windows. My system is Windows 7 x64.
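    Whatever the root cause, a hedged tweak to the handler: the example treats every entry in the list as fatal, so explicitly skipping QSslError::NoError entries keeps spurious ones from raising the dialog while still surfacing real failures:

        void HttpWindow::sslErrors(QNetworkReply *reply, const QList<QSslError> &errors)
        {
            QString errorString;
            foreach (const QSslError &error, errors) {
                if (error.error() == QSslError::NoError)
                    continue;                    // ignore spurious NoError entries
                if (!errorString.isEmpty())
                    errorString += ", ";
                errorString += error.errorString();
            }

            if (errorString.isEmpty()) {         // nothing real went wrong
                reply->ignoreSslErrors();
                return;
            }

            if (QMessageBox::warning(this, tr("HTTP"),
                                     tr("One or more SSL errors has occurred: %1").arg(errorString),
                                     QMessageBox::Ignore | QMessageBox::Abort) == QMessageBox::Ignore)
                reply->ignoreSslErrors();
        }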


  • Facebook Open Graph - post to all approved users' feeds

    - by simnom
    Hi, I'm struggling to get to grips with posting a feed item to all the approved users of an application. Within the application settings for the user, it states that the application has permission to post to the wall, but I can only achieve this if that user is currently logged in to Facebook. I would like any items I upload to be posted to all the members of the application at any one time. I am using the Facebook PHP SDK from http://github.com/facebook/php-sdk/ and currently my code is as follows:

        require 'src/facebook.php';

        // Generates an access token for this transaction
        $accessToken = file_get_contents("https://graph.facebook.com/oauth/access_token?type=client_cred&client_id=MyAppId&client_secret=MySecret");

        // Gets the full user details as an object
        $contents = json_decode(file_get_contents("https://graph.facebook.com/SomeUserId?scope=publish_stream&" . $accessToken));
        print_r($contents);

        if ($facebook->api('/' . $contents->id . '/feed', 'POST', array(
                'title' => 'New and Improved, etc - 12/03/2010',
                'link' => 'http://www.ib3.co.uk/news/2010/03/12/new-and-improved--etc',
                'picture' => 'http://www.ib3.co.uk/userfiles/image/etc-booking.jpg',
                'scope' => 'publish_stream'
            )) == TRUE) {
            echo "message posted";
        } else {
            echo "message failed";
        }

    The output from $contents shows the expected user details but nothing relating to the permissions for my application. Am I missing a trick here? Then, using the $facebook->api() function, I receive a "#200 - Permissions error. The application does not have permission to perform this action." This is driving me a little potty, as I suspect I'm missing something straightforward with the authorisation - but what? Many thanks in advance for any assistance offered.
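    A hedged note on the tokens: an application access token is not interchangeable with a user access token, and posting to a user's feed while they are offline required (in this era of the API) the user's own token, granted with the publish_stream and offline_access permissions at authorisation time and stored by the app. A sketch, with the stored-token retrieval hypothetical and field names per the Graph API feed documentation (message/name/link/picture rather than title):

        // $storedToken: the user access token saved when this user authorised
        // the app with publish_stream + offline_access (storage is up to you).
        $result = $facebook->api('/' . $userId . '/feed', 'POST', array(
            'access_token' => $storedToken,   // the user's token, not the app token
            'message' => 'New and Improved, etc - 12/03/2010',
            'name'    => 'New and Improved, etc',
            'link'    => 'http://www.ib3.co.uk/news/2010/03/12/new-and-improved--etc',
            'picture' => 'http://www.ib3.co.uk/userfiles/image/etc-booking.jpg',
        ));

    Looping over every stored token then posts the item to each member's feed in turn.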


  • How to use a loop to download HTML with paging?

    - by Nai
    I want to loop through this URL and download the HTML:

        https://www.googleapis.com/customsearch/v1?key=AIzaSyAAoPQprb6aAV-AfuVjoCdErKTiJHn-4uI&cx=017576662512468239146:omuauf_lfve&q=" + searchTermFormat + "&num=10" + "&start=" + i

    start and num control the paging of the URL: start is the index of the first result to return and num is how many to return per request. Given that Google has a maximum limit of num = 10, how can I write a loop that pages through the results and scrapes the first 10 pages? This is what I have so far, which just scrapes the first page:

        //input search term
        Console.WriteLine("What is your search query?:");
        string searchTerm = Console.ReadLine();

        //concatenate the strings using the + symbol to make it URL-friendly for Google
        string searchTermFormat = searchTerm.Replace(" ", "+");

        //create a new instance of WebClient and use DownloadString to fetch the JSON
        WebClient client = new WebClient();
        int i = 1;
        string Json = client.DownloadString("https://www.googleapis.com/customsearch/v1?key=AIzaSyAAoPQprb6aAV-AfuVjoCdErKTiJHn-4uI&cx=017576662512468239146:omuauf_lfve&q=" + searchTermFormat + "&num=10" + "&start=" + i);

        //create a new instance of JavaScriptSerializer and deserialise the desired content
        JavaScriptSerializer js = new JavaScriptSerializer();
        GoogleSearchResults results = js.Deserialize<GoogleSearchResults>(Json);

        //output results to console
        Console.WriteLine(js.Serialize(results));
        Console.ReadLine();
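    A sketch of the paging loop: start is the 1-based index of the first result, so with num=10 the ten pages begin at start = 1, 11, 21, ... 91 (the Custom Search API caps a query at 100 results, hence stopping at 91). This reuses the client, js, and GoogleSearchResults type from the code above, with the key and engine id elided:

        var allResults = new List<GoogleSearchResults>();   // needs System.Collections.Generic
        for (int start = 1; start <= 91; start += 10)
        {
            string json = client.DownloadString(
                "https://www.googleapis.com/customsearch/v1?key=YOUR_KEY" +   // elided
                "&cx=YOUR_ENGINE_ID" +                                        // elided
                "&q=" + searchTermFormat + "&num=10&start=" + start);
            GoogleSearchResults page = js.Deserialize<GoogleSearchResults>(json);
            allResults.Add(page);
            Console.WriteLine(js.Serialize(page));
        }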


  • FQL query using stream table doesn't accept app access token

    - by tougher
    I've searched Stack Overflow all day long to find an answer, without luck, so here we go. I'm trying to fetch data from the stream table like this:

        FQL: SELECT post_id, message, created_time FROM stream WHERE source_id = 131559313586863
        URL: https://graph.facebook.com/fql?q=SELECT+post_id%2C+message%2C+created_time+FROM+stream+WHERE+source_id+%3D+131559313586863&access_token=10669xxxxx74470|PF-7GSdBx0Nxxxxxkdi1KwSQG-w

    But I get a 400 Bad Request as the response, with the error message "An access token is required to request this resource.". I'm fetching an application access token with this URL:

        https://graph.facebook.com/oauth/access_token?client_id=FACEBOOK_APP_ID&client_secret=FACEBOOK_APP_SECRET&grant_type=client_credentials

    Facebook states in this blog post that "You will need to pass a valid app or user access token to access this functionality", where the functionality in question is /feed and /posts (the stream table). Furthermore, the wiki page for the stream table tells the same story: "From June 3 2011 a token is required to query this table. You can use any application or user token to make the query."

    Does anyone see my hopefully obvious flaw? Please note: the profile in the FQL query is public, and I need this to run userless through a cron job - no user interaction is possible. The request works if I replace the app access token with my own user token from https://developers.facebook.com/tools/explorer
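    One hedged thing to rule out first: the application token returned by the client_credentials call has the form APP_ID|HASH, and the raw | character is not always safe in a query string, so it is worth URL-encoding it (| becomes %7C) before appending it to the request:

        https://graph.facebook.com/fql?q=...&access_token=10669xxxxx74470%7CPF-7GSdBx0Nxxxxxkdi1KwSQG-w

    If the encoded token still yields "An access token is required", that points at the stream table itself not accepting app tokens for this profile despite the documentation, in which case a stored long-lived user token in the cron job is the pragmatic fallback.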


  • Django install on a shared host, .htaccess help

    - by redconservatory
    I am trying to install Django on a shared host using the following instructions: docs.google.com/View?docid=dhhpr5xs_463522g

    My problem is with the following line in my root .htaccess:

        RewriteRule ^(.*)$ /cgi-bin/wcgi.py/$1 [QSA,L]

    When I include this line, I get a 500 error with almost all of the domains on this account. My cgi-bin directory is /home/my-username/public_html/cgi-bin/. The wcgi.py file contains:

        #!/usr/local/bin/python
        import os, sys

        sys.path.insert(0, "/home/username/django/")
        sys.path.insert(0, "/home/username/django/projects")
        sys.path.insert(0, "/home/username/django/projects/newprojects")

        import django.core.handlers.wsgi

        os.chdir("/home/username/django/projects/newproject")  # optional
        os.environ['DJANGO_SETTINGS_MODULE'] = "newproject.settings"

        def runcgi():
            environ = dict(os.environ.items())
            environ['wsgi.input'] = sys.stdin
            environ['wsgi.errors'] = sys.stderr
            environ['wsgi.version'] = (1,0)
            environ['wsgi.multithread'] = False
            environ['wsgi.multiprocess'] = True
            environ['wsgi.run_once'] = True
            application = django.core.handlers.wsgi.WSGIHandler()
            if environ.get('HTTPS','off') in ('on','1'):
                environ['wsgi.url_scheme'] = 'https'
            else:
                environ['wsgi.url_scheme'] = 'http'
            headers_set = []
            headers_sent = []

            def write(data):
                if not headers_set:
                    raise AssertionError("write() before start_response()")
                elif not headers_sent:
                    # Before the first output, send the stored headers
                    status, response_headers = headers_sent[:] = headers_set
                    sys.stdout.write('Status: %s\r\n' % status)
                    for header in response_headers:
                        sys.stdout.write('%s: %s\r\n' % header)
                    sys.stdout.write('\r\n')
                sys.stdout.write(data)
                sys.stdout.flush()

            def start_response(status, response_headers, exc_info=None):
                if exc_info:
                    try:
                        if headers_sent:
                            # Re-raise original exception if headers sent
                            raise exc_info[0], exc_info[1], exc_info[2]
                    finally:
                        exc_info = None  # avoid dangling circular ref
                elif headers_set:
                    raise AssertionError("Headers already set!")
                headers_set[:] = [status, response_headers]
                return write

            result = application(environ, start_response)
            try:
                for data in result:
                    if data:  # don't send headers until body appears
                        write(data)
                if not headers_sent:
                    write('')  # send headers now if body was empty
            finally:
                if hasattr(result, 'close'):
                    result.close()

        runcgi()

    (I only changed "username" to my actual username.)
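    A hedged guess at the 500s on the other domains: on shared hosts, add-on domains usually have document roots beneath public_html, so a catch-all rule in the root .htaccess is inherited by every site and rewrites all of their requests into this one script (and, without an exclusion, can even rewrite its own target in a loop). Scoping the rule to the Django domain and letting the cgi-bin path through should contain it; the host name below is a placeholder:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?your-django-domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/cgi-bin/
        RewriteRule ^(.*)$ /cgi-bin/wcgi.py/$1 [QSA,L]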


  • PHP web service for a C# application

    - by user293995
    Hi, I want to create a PHP web service server to be used from a C# application, with a library that auto-generates the WSDL file for easy management (that's the reason I chose NuSOAP). I tried to use NuSOAP on PHP 5, but I have some problems with the charset and Content-Type. Visual Studio gives this error:

        The HTML document does not contain Web service discovery information. Metadata contains a reference that cannot be resolved: 'https://www.xxx.yy'. There is a problem with the XML that was received from the network. See inner exception for more details. The encoding in the declaration 'ISO-8859-1' does not match the encoding of the document 'utf-8'. If the service is defined in the current solution, try building the solution and adding the service reference again.

    Here is the server code:

        $soap = new soap_server();
        $soap->xml_encoding = 'utf-8';
        $soap->configureWSDL('Bonjour', 'https://www.xxx.yy');
        $soap->wsdl->schemaTargetNamespace = 'http://soapinterop.org/xsd/';
        $soap->register('bonjour', array('prenom' => 'xsd:string'));
        $HTTP_RAW_POST_DATA = file_get_contents("php://input");
        $soap->service($HTTP_RAW_POST_DATA);
        header('Content-Type: application/soap+xml; charset=utf-8');

        function bonjour($prenom)
        {
            return "Bonjour " . $prenom;
        }

    Does someone know how to change this to make it compliant and working? Thanks
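    A hedged pointer on the mismatch: NuSOAP writes ISO-8859-1 into the XML declaration by default, and the property that controls it is soap_defencoding (xml_encoding is not what the serializer consults). Also note that the trailing header() call comes too late - $soap->service() has already sent the response, headers included - so it can simply be dropped. A sketch:

        $soap = new soap_server();
        $soap->soap_defencoding = 'UTF-8';   // make the XML declaration say UTF-8
        $soap->decode_utf8 = false;          // pass strings through untouched
        $soap->configureWSDL('Bonjour', 'https://www.xxx.yy');
        $soap->wsdl->schemaTargetNamespace = 'http://soapinterop.org/xsd/';
        $soap->register('bonjour', array('prenom' => 'xsd:string'));
        $soap->service(file_get_contents("php://input"));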


