Search Results

Search found 20873 results on 835 pages for 'url fetch'.


  • HttpWebRequest Timeouts After Ten Consecutive Requests

    - by Bob Mc
    I'm writing a web crawler for a specific site. The application is a VB.Net Windows Forms application that does not use multiple threads - each web request is consecutive. However, after ten successful page retrievals, every subsequent request times out. I have reviewed the similar questions already posted here on SO and have implemented the recommended techniques in my GetPage routine, shown below:

        Public Function GetPage(ByVal url As String) As String
            Dim result As String = String.Empty
            Dim uri As New Uri(url)
            Dim sp As ServicePoint = ServicePointManager.FindServicePoint(uri)
            sp.ConnectionLimit = 100

            Dim request As HttpWebRequest = WebRequest.Create(uri)
            request.KeepAlive = False
            request.Timeout = 15000

            Try
                Using response As HttpWebResponse = DirectCast(request.GetResponse, HttpWebResponse)
                    Using dataStream As Stream = response.GetResponseStream()
                        Using reader As New StreamReader(dataStream)
                            If response.StatusCode <> HttpStatusCode.OK Then
                                Throw New Exception("Got response status code: " + response.StatusCode)
                            End If
                            result = reader.ReadToEnd()
                        End Using
                    End Using
                    response.Close()
                End Using
            Catch ex As Exception
                Dim msg As String = "Error reading page """ & url & """. " & ex.Message
                Logger.LogMessage(msg, LogOutputLevel.Diagnostics)
            End Try

            Return result
        End Function

    Have I missed something? Am I failing to close or dispose of an object that I should be? It seems strange that it always happens after ten consecutive requests.

    Notes:

    - In the constructor for the class in which this method resides, I have the following: ServicePointManager.DefaultConnectionLimit = 100
    - If I set KeepAlive to true, the timeouts begin after five requests.
    - All the requests are for pages in the same domain.

    EDIT: I added a delay of between two and seven seconds between each web request so that I do not appear to be "hammering" the site or attempting a DoS attack. However, the problem still occurs.


  • NSMutableArray of Objects

    - by Terry Owen
    First off, I am very new to Objective-C and iPhone programming. Now that that is out of the way: I have read through most of the Apple documentation on this and some third-party manuals. I guess I just want to know if I'm going about this the correct way ...

        - (NSMutableArray *)makeModel {
            NSString *api = @"http://www.mycoolnewssite.com/api/v1";
            NSArray *namesArray = [NSArray arrayWithObjects:@"News", @"Sports", @"Entertainment", @"Business", @"Features", nil];
            NSArray *urlsArray = [NSArray arrayWithObjects:
                [NSString stringWithFormat:@"%@/news/news/25/stories.json", api],
                [NSString stringWithFormat:@"%@/news/sports/25/stories.json", api],
                [NSString stringWithFormat:@"%@/news/entertainment/25/stories.json", api],
                [NSString stringWithFormat:@"%@/news/business/25/stories.json", api],
                [NSString stringWithFormat:@"%@/news/features/25/stories.json", api],
                nil];
            NSMutableArray *result = [NSMutableArray array];
            for (int i = 0; i < [namesArray count]; i++) {
                NSMutableDictionary *objectDict = [NSMutableDictionary dictionary];
                NSString *name = (NSString *)[namesArray objectAtIndex:i];
                NSString *url = (NSString *)[urlsArray objectAtIndex:i];
                [objectDict setObject:name forKey:@"NAME"];
                [objectDict setObject:url forKey:@"URL"];
                [objectDict setObject:@"NO" forKey:@"HASSTORIES"];
                [result addObject:objectDict];
            }
            return result;
        }

    Any insight would be appreciated ;-)


  • How can I easily test connectivity to external sources with WinHTTP?

    - by Mike B
    I've got a server application that uses WinHTTP to fetch information from an external source. Occasionally I need to troubleshoot connectivity issues, and I'd like an easy way to test connections through WinHTTP (on the off chance that something is specifically impeding WinHTTP and not other, unrelated connectivity tools like telnet). Does IE use WinHTTP? If not, are there any tools (preferably already integrated into Windows) that I can use? Occasionally I'll use IE, but I'm not sure that's quite the same.


  • iPhone webview dynamic font support

    - by Kiran
    Friends, I am trying to build a simple iPhone application to view a local webpage that uses dynamic fonts. The URL is www.eenadu.net. I have a single view-based application with a webview inserted into the view, and the view controller implements the webview delegate. The site uses ttf/eot fonts that are dynamically downloaded by the browser from http://www.eenadu.net/eenadu.ttf or http://www.eenadu.net/EENADU0.eot. Here is what I am doing in the code, after some research:

        - (void)loadFont {
            NSString *fontPath = [[NSBundle mainBundle] pathForResource:@"eenadu" ofType:@"ttf"];
            CGDataProviderRef fontDataProvider = CGDataProviderCreateWithFilename([fontPath UTF8String]);
            // Create the font with the data provider, then release the data provider.
            customFont = CGFontCreateWithDataProvider(fontDataProvider);
            CGDataProviderRelease(fontDataProvider);

            fontPath = [[NSBundle mainBundle] pathForResource:@"eenadu" ofType:@"eot"];
            fontDataProvider = CGDataProviderCreateWithFilename([fontPath UTF8String]);
            // Create the font with the data provider, then release the data provider.
            customFont = CGFontCreateWithDataProvider(fontDataProvider);
            CGDataProviderRelease(fontDataProvider);
        }

        - (void)viewDidLoad {
            [super viewDidLoad];
            [self loadFont];
            [webView setBackgroundColor:[UIColor whiteColor]];
            NSString *urlAddress = @"http://www.eenadu.net/";
            NSURL *url = [NSURL URLWithString:urlAddress];
            NSURLRequest *requestObj = [NSURLRequest requestWithURL:url];
            [webView loadRequest:requestObj];
        }

    I see that the page loads, but it doesn't display the font. Please help. Also, for loadFont, I have included the fonts in the build with the right names and took care of the case as well.


  • Download-from-PyPI-and-install script

    - by zubin71
    Hello, I have written a script which fetches a distribution, given the URL. After downloading the distribution, it compares the md5 hashes to verify that the file has been downloaded properly. This is how I do it:

        def download(package_name, url):
            import urllib2
            downloader = urllib2.urlopen(url)
            package = downloader.read()
            package_file_path = os.path.join('/tmp', package_name)
            package_file = open(package_file_path, "w")
            package_file.write(package)
            package_file.close()

    I wonder if there is a better (more Pythonic) way to do what the above code snippet does. Also, once the package is downloaded, this is what is done:

        def install_package(package_name):
            if package_name.endswith('.tar'):
                import tarfile
                tarfile.open('/tmp/' + package_name)
                tarfile.extract('/tmp')
            import shlex
            import subprocess
            installation_cmd = 'python %ssetup.py install' % ('/tmp/' + package_name)
            subprocess.Popen(shlex.split(installation_cmd))

    As there are a number of imports in the install_package method, I wonder if there is a better way to do this. I'd love some constructive criticism and suggestions for improvement. Also, I have only implemented the install_package method for .tar files; is there a better way to also install .tar.gz and .zip files without having to write separate methods for each?
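
    One possible tightening of the download step - a sketch only, assuming Python 2.6's urllib2, hashlib, and tarfile, with fetch_package, extract_package, and expected_md5 as illustrative names - streams the file to disk and folds in the md5 check the poster mentions:

        import hashlib
        import shutil
        import tarfile
        import urllib2

        def fetch_package(package_name, url, expected_md5=None):
            # Stream the download straight to disk instead of holding it all in memory.
            path = '/tmp/' + package_name
            response = urllib2.urlopen(url)
            with open(path, 'wb') as package_file:
                shutil.copyfileobj(response, package_file)
            # Compare checksums, if one was supplied.
            if expected_md5 is not None:
                with open(path, 'rb') as f:
                    digest = hashlib.md5(f.read()).hexdigest()
                if digest != expected_md5:
                    raise ValueError('md5 mismatch for %s' % package_name)
            return path

        def extract_package(path):
            # tarfile autodetects compression, so one branch covers .tar and .tar.gz.
            if tarfile.is_tarfile(path):
                archive = tarfile.open(path)
                archive.extractall('/tmp')
                archive.close()

    Since tarfile.open autodetects compression, the same branch handles .tar and .tar.gz; only .zip would need a separate branch using the zipfile module.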


  • Google Maps API: internal server error when inserting a feature

    - by user142764
    Hi, I try to insert features on a custom Google map. I use the sample code from the docs, but I get a ServiceException (Internal server error) when I call the service's insert method. Here is what I do. I create a map and get the resulting MapEntry object:

        myMapEntry = (MapEntry) service.insert(mapUrl, myEntry);

    This works fine: I can see the map I created in "my maps" on Google. I use the feed URL from the map to insert a feature:

        final URL featureEditUrl = myMapEntry.getFeatureFeedUrl();

    I create a KML string using the sample from the docs:

        String kmlStr = "<Placemark xmlns=\"http://www.opengis.net/kml/2.2\">"
            + "<name>Aunt Joanas Ice Cream Shop</name>"
            + "<Point>"
            + "<coordinates>-87.74613826475604,41.90504663195118,0</coordinates>"
            + "</Point></Placemark>";

    And when I call the insert method I get an internal server error. I must be doing something wrong, but I can't see what; can anybody help? Here is the complete code I use:

        public void doCreateFeaturesFormap(MapEntry myMap) throws ServiceException, IOException {
            final URL featureEditUrl = myMap.getFeatureFeedUrl();
            FeatureEntry featureEntry = new FeatureEntry();
            try {
                String kmlStr = "<Placemark xmlns=\"http://www.opengis.net/kml/2.2\">"
                    + "<name>Aunt Joanas Ice Cream Shop</name>"
                    + "<Point>"
                    + "<coordinates>-87.74613826475604,41.90504663195118,0</coordinates>"
                    + "</Point></Placemark>";
                XmlBlob kml = new XmlBlob();
                kml.setFullText(kmlStr);
                featureEntry.setKml(kml);
                featureEntry.setTitle(new PlainTextConstruct("Feature Title"));
            } catch (NullPointerException e) {
                System.out.println("Error: " + e.getClass().getName());
            }
            FeatureEntry myFeature = (FeatureEntry) service.insert(featureEditUrl, featureEntry);
        }

    Thanks in advance, Vincent.


  • Selenium RC: how to capture/handle error?

    - by KenBurnsFan1
    Hi, My test uses Selenium to loop through a CSV list of URLs via an HTTP proxy (working script below). As I watch the script run, I can see about 10% of the calls produce "Proxy error: 502" ("Bad_Gateway"); however, the errors are not captured by my catch-all "except Exception" clause - i.e., instead of writing 'error' in the appropriate row of "output.csv", they get passed to the else clause and produce a short piece of HTML that starts: "Proxy error: 502 Read from server failed: Unknown error." Also, if I collect all the URLs which returned 502s and re-run the script, they all pass, which leads me to believe that this is a sporadic network-path issue.

    Question: Can the script be made to recognize the 502 errors, sleep a minute, and then retry the URL instead of moving on to the next URL in the list? The only alternative I can think of is to apply re.search("Proxy error: 502") after "get_html_source" as a way to catch the bad calls. Then, if the RE matches, put the script to sleep for a minute and retry 'sel.open(row[0])' on the URL which produced the 502. Any advice would be much appreciated. Thanks!

        # python 2.6
        from selenium import selenium
        import unittest, time, re, csv, logging

        class Untitled(unittest.TestCase):
            def setUp(self):
                self.verificationErrors = []
                self.selenium = selenium("localhost", 4444, "*firefox", "http://baseDomain.com")
                self.selenium.start()
                self.selenium.set_timeout("60000")

            def test_untitled(self):
                sel = self.selenium
                spamReader = csv.reader(open('ListOfSubDomains.csv', 'rb'))
                for row in spamReader:
                    try:
                        sel.open(row[0])
                    except Exception:
                        ofile = open('output.csv', 'ab')
                        ofile.write("error" + '\n')
                        ofile.close()
                    else:
                        time.sleep(5)
                        html = sel.get_html_source()
                        ofile = open('output.csv', 'ab')
                        ofile.write(html.encode('utf-8') + '\n')
                        ofile.close()

            def tearDown(self):
                self.selenium.stop()
                self.assertEqual([], self.verificationErrors)

        if __name__ == "__main__":
            unittest.main()
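
    The re.search idea the poster proposes is workable; a minimal sketch - assuming the proxy's 502 text always appears verbatim in the page source, and with the retry count and pause chosen arbitrarily - looks like this:

        import re
        import time

        def fetch_with_retry(sel, url, attempts=3, pause=60):
            # Retry a URL whenever the proxy returns its 502 error page.
            for attempt in range(attempts):
                sel.open(url)
                time.sleep(5)
                html = sel.get_html_source()
                if not re.search("Proxy error: 502", html):
                    return html
                time.sleep(pause)  # sporadic network-path issue: wait, then retry
            return None  # caller writes the 'error' row after all attempts fail

    The loop body in test_untitled would then call fetch_with_retry(sel, row[0]) and write "error" only when it returns None.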


  • Reading data from DDFS: ValueError: No JSON object could be decoded

    - by secumind
    I'm running dozens of map/reduce jobs for a number of different purposes using Disco. My data has grown enormous, and I thought I would try using DDFS for a change rather than standard txt files. I've followed the Disco map/reduce example "Counting Words" as a map/reduce job without too much difficulty, and with the help of others ("Reading JSON specific data into DISCO") I've gotten past one of my latest problems. I'm trying to read data in and out of DDFS to better chunk and distribute it, but am having a bit of trouble. Here's an example file, file.txt, with one JSON record per line:

        {"favorited": false, "in_reply_to_user_id": null, "contributors": null, "truncated": false, "text": "I'll call him back tomorrow I guess", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": null, "entities": {"user_mentions": [], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016843603968", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/305726905/FASHION-3.png", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1818996723/image_normal.jpg", "profile_sidebar_fill_color": "292727", "is_translator": false, "id": 113532729, "profile_text_color": "000000", "followers_count": 78, "protected": false, "location": "With My Niggas In Paris!", "default_profile_image": false, "listed_count": 0, "utc_offset": -21600, "statuses_count": 6733, "description": "Made in CHINA., Educated && Making My Own $$. Fear GOD && Put Him 1st. #TeamFollowBack #TeamiPhone\n", "friends_count": 74, "profile_link_color": "b03f3f", "profile_image_url": "http://a2.twimg.com/profile_images/1818996723/image_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "1f9199", "id_str": "113532729", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/305726905/FASHION-3.png", "name": "Bee'Jay", "lang": "en", "profile_background_tile": true, "favourites_count": 19, "screen_name": "OohMyBEEsNice", "url": "http://www.bitchimpaid.org", "created_at": "Fri Feb 12 03:32:54 +0000 2010", "contributors_enabled": false, "time_zone": "Central Time (US & Canada)", "profile_sidebar_border_color": "000000", "default_profile": false, "following": null}, "in_reply_to_screen_name": null, "retweet_count": 0, "geo": null, "id": 168931016843603968, "source": "<a href=\"http://twitter.com/#!/download/iphone\" rel=\"nofollow\">Twitter for iPhone</a>"}
        {"favorited": false, "in_reply_to_user_id": 50940453, "contributors": null, "truncated": false, "text": "@LegaMrvica @MimozaBand makasi om artis :D kadoo kadoo", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": "168653037894770688", "coordinates": null, "in_reply_to_user_id_str": "50940453", "entities": {"user_mentions": [{"indices": [0, 11], "screen_name": "LegaMrvica", "id": 50940453, "name": "Lega_thePianis", "id_str": "50940453"}, {"indices": [12, 23], "screen_name": "MimozaBand", "id": 375128905, "name": "Mimoza", "id_str": "375128905"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": 168653037894770688, "id_str": "168931016868761600", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "profile_sidebar_fill_color": "DDFFCC", "is_translator": false, "id": 48293450, "profile_text_color": "333333", "followers_count": 182, "protected": false, "location": "\u00dcT: -6.906799,107.622383", "default_profile_image": false, "listed_count": 0, "utc_offset": -28800, "statuses_count": 3052, "description": "Fashion design maranatha '11 // traditional dancer (bali) at sanggar tampak siring & Natya Nataraja", "friends_count": 206, "profile_link_color": "0084B4", "profile_image_url": "http://a3.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "9AE4E8", "id_str": "48293450", "profile_background_image_url": "http://a0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "name": "nana afiff", "lang": "en", "profile_background_tile": true, "favourites_count": 2, "screen_name": "hasnfebria", "url": null, "created_at": "Thu Jun 18 08:50:29 +0000 2009", "contributors_enabled": false, "time_zone": "Pacific Time (US & Canada)", "profile_sidebar_border_color": "BDDCAD", "default_profile": false, "following": null}, "in_reply_to_screen_name": "LegaMrvica", "retweet_count": 0, "geo": null, "id": 168931016868761600, "source": "<a href=\"http://blackberry.com/twitter\" rel=\"nofollow\">Twitter for BlackBerry\u00ae</a>"}
        {"favorited": false, "in_reply_to_user_id": 27260086, "contributors": null, "truncated": false, "text": "@justinbieber u were born to be somebody, and u're super important in beliebers' life. thanks for all biebs. I love u. follow me? 84", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": "27260086", "entities": {"user_mentions": [{"indices": [0, 13], "screen_name": "justinbieber", "id": 27260086, "name": "Justin Bieber", "id_str": "27260086"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016856178688", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/416005864/Captura.JPG", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "profile_sidebar_fill_color": "f5e7f3", "is_translator": false, "id": 406750700, "profile_text_color": "333333", "followers_count": 1122, "protected": false, "location": "Adentro de una supra.", "default_profile_image": false, "listed_count": 0, "utc_offset": -14400, "statuses_count": 20966, "description": "Mi \u00eddolo es @justinbieber , si te gusta \u00a1genial!, si no, solo respetalo. El cambi\u00f3 mi vida completamente y mi sue\u00f1o es conocerlo #TrueBelieber . ", "friends_count": 1015, "profile_link_color": "9404b8", "profile_image_url": "http://a1.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "notifications": null, "show_all_inline_media": false, "geo_enabled": false, "profile_background_color": "f9fcfa", "id_str": "406750700", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/416005864/Captura.JPG", "name": "neversaynever,right?", "lang": "es", "profile_background_tile": false, "favourites_count": 22, "screen_name": "True_Belieebers", "url": "http://www.wehavebieber-fever.tumblr.com", "created_at": "Mon Nov 07 04:17:40 +0000 2011", "contributors_enabled": false, "time_zone": "Santiago", "profile_sidebar_border_color": "C0DEED", "default_profile": false, "following": null}, "in_reply_to_screen_name": "justinbieber", "retweet_count": 0, "geo": null, "id": 168931016856178688, "source": "<a href=\"http://yfrog.com\" rel=\"nofollow\">Yfrog</a>"}

    I load it into DDFS with:

        # ddfs chunk data:test1 ./file.txt
        created: disco://localhost/ddfs/vol0/blob/44/file_txt-0$549-db27b-125e1

    I test that the file is indeed loaded into DDFS with:

        # ddfs xcat data:test1

    which echoes back the same three JSON records shown above, so the chunk's content looks intact. At this point everything is great, and I load up the script that resulted from a previous Stack post:

        from disco.core import Job, result_iterator
        import gzip

        def map(line, params):
            import unicodedata
            import json
            r = json.loads(line).get('text')
            s = unicodedata.normalize('NFD', r).encode('ascii', 'ignore')
            for word in s.split():
                yield word, 1

        def reduce(iter, params):
            from disco.util import kvgroup
            for word, counts in kvgroup(sorted(iter)):
                yield word, sum(counts)

        if __name__ == '__main__':
            job = Job().run(input=["tag://data:test1"], map=map, reduce=reduce)
            for word, count in result_iterator(job.wait(show=True)):
                print word, count

    NOTE: this script runs fine if the input is ["file.txt"]; however, when I run it with "tag://data:test1" I get the following error:

        # DISCO_EVENTS=1 python count_normal_words.py
        Job@549:db30e:25bd8: Status: [map] 0 waiting, 1 running, 0 done, 0 failed
        2012/11/25 21:43:26 master   New job initialized!
        2012/11/25 21:43:26 master   Starting job
        2012/11/25 21:43:26 master   Starting map phase
        2012/11/25 21:43:26 master   map:0 assigned to solice
        2012/11/25 21:43:26 master   ERROR: Job failed: Worker at 'solice' died: Traceback (most recent call last):
          File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/__init__.py", line 329, in main
            job.worker.start(task, job, **jobargs)
          File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/__init__.py", line 290, in start
            self.run(task, job, **jobargs)
          File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/classic/worker.py", line 286, in run
            getattr(self, task.mode)(task, params)
          File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/classic/worker.py", line 299, in map
            for key, val in self['map'](entry, params):
          File "count_normal_words.py", line 12, in map
          File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
            return _default_decoder.decode(s)
          File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
            raise ValueError("No JSON object could be decoded")
        ValueError: No JSON object could be decoded
        2012/11/25 21:43:26 master   WARN: Job killed
        Status: [map] 1 waiting, 0 running, 0 done, 1 failed
        Traceback (most recent call last):
          File "count_normal_words.py", line 28, in <module>
            for word, count in result_iterator(job.wait(show=True)):
          File "/usr/local/lib/python2.7/site-packages/disco/core.py", line 348, in wait
            timeout, poll_interval * 1000)
          File "/usr/local/lib/python2.7/site-packages/disco/core.py", line 309, in check_results
            raise JobError(Job(name=jobname, master=self), "Status %s" % status)
        disco.error.JobError: Job Job@549:db30e:25bd8 failed: Status dead

    The error states: ValueError: No JSON object could be decoded. Again, this works fine using the text file as input, but not with DDFS. Any ideas? I'm open to suggestions.
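
    The root cause is hard to confirm from here, but a defensive map - a sketch only, not a fix for the chunking itself - would skip lines that fail to parse and show whether only some records arrive damaged from DDFS:

        def map(line, params):
            import json
            import unicodedata
            try:
                text = json.loads(line).get('text', '')
            except ValueError:
                return  # skip fragments that are not complete JSON records
            s = unicodedata.normalize('NFD', text).encode('ascii', 'ignore')
            for word in s.split():
                yield word, 1

    If the counts come out roughly right with this map, the chunker is splitting records mid-line; if the job still dies, the input is not reaching the map as lines at all.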


  • Pulling data and printing it in an HTML table

    - by John
    Hello, From a MySQL table called "submission" containing the fields "loginid, submissionid, title, url, datesubmitted, displayurl", I would like to print an HTML table that contains all "title" and corresponding "datesubmitted" values where "loginid" equals "$profile". The code I am trying to use is below. It isn't working. Any ideas why? Thanks in advance, John

        $profile = $_GET['profile'];

        $sqlStr = "SELECT loginid, submissionid, title, url, datesubmitted, displayurl
                   FROM submission
                   WHERE loginid = $profile
                   ORDER BY datesubmitted DESC";

        $result = mysql_query($sqlStr);

        $arr = array();
        echo "<table class=\"samplesrec\">";
        while ($row = mysql_fetch_array($result)) {
            echo '<tr>';
            echo '<td class="sitename1"><a href="http://www.'.$row["url"].'">'.$row["title"].'</a></td>';
            echo '</tr>';
            echo '<tr>';
            echo '<td class="sitename2">'.$row["datesubmitted"].'</a></td>';
            echo '</tr>';
        }
        echo "</table>";


  • Updating permissions on Amazon S3 files that were uploaded via JungleDisk

    - by Simon_Weaver
    I am starting to use JungleDisk to upload files to an Amazon S3 bucket which corresponds to a CloudFront distribution, i.e. I can access it via an http:// URL and I am using Amazon as a CDN. The problem I am facing is that JungleDisk doesn't set 'read' permissions on the files, so when I go to the corresponding URL in a browser I get an Amazon 'AccessDenied' error. If I use a tool like BucketExplorer to set the ACL, then that URL returns a 200.

    I really, really like the simplicity of dragging files to a network drive. JungleDisk is the best program I've found to do this reliably without tripping over itself and getting confused. However, it doesn't seem to have an option to make the files readable. I really don't want to have to go to a different tool (especially if I have to buy it) just to change the permissions - and this seems really slow anyway, because such tools generally traverse the whole directory structure. JungleDisk provides some kind of 'web access', but this is a paid feature and I'm not sure whether it will work or not. S3 doesn't appear to propagate permissions down, which is a real pain. I'm considering writing a manual tool to traverse my tree and set everything to 'read', but I'd rather not do this if it is a problem someone else has already solved.
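
    For the manual tool the poster is considering, a short boto sketch (an assumption that the boto library is acceptable; the bucket name and credentials are placeholders) can walk the bucket and mark every key world-readable:

        import boto

        conn = boto.connect_s3('ACCESS_KEY', 'SECRET_KEY')  # placeholder credentials
        bucket = conn.get_bucket('my-cloudfront-bucket')    # placeholder bucket name

        for key in bucket.list():
            # Grant anonymous read so browser/CloudFront GETs return 200 instead of AccessDenied.
            key.set_acl('public-read')

    This still traverses every key, but run as a one-off after each JungleDisk upload it avoids buying a separate GUI tool.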


  • Check if the internet cannot be accessed in Python

    - by Sridhar Ratnakumar
    I have an app that makes an HTTP GET request to a particular URL on the internet. But when the network is down (say, no public wifi, or my ISP is down, or some such thing), I get the following traceback at urllib.urlopen:

        70, in get
            u = urllib2.urlopen(req)
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 126, in urlopen
            return _opener.open(url, data, timeout)
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 391, in open
            response = self._open(req, data)
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 409, in _open
            '_open', req)
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 369, in _call_chain
            result = func(*args)
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 1161, in http_open
            return self.do_open(httplib.HTTPConnection, req)
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 1136, in do_open
            raise URLError(err)
        URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known>

    I want to print a friendly error telling the user that his network may be down, instead of this unfriendly "nodename nor servname provided" message. Sure, I can catch URLError, but that would catch every URL error, not just the ones related to network downtime. I am not a purist, so even an error message like "The server example.com cannot be reached; either the server is indeed having problems or your network connection is down" would be nice. How do I go about selectively catching such errors? (For a start, if DNS resolution fails at urllib.urlopen, can that reasonably be assumed to mean network inaccessibility? If so, how do I "catch" it in the except block?)
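
    One way to narrow this down - a sketch, assuming Python 2.6's urllib2, where URLError.reason wraps the underlying socket error - is to check the reason attribute for a DNS failure before falling back to a generic message:

        import socket
        import urllib2

        def friendly_get(url):
            try:
                return urllib2.urlopen(url)
            except urllib2.HTTPError:
                raise  # the server answered, so this is not a connectivity problem
            except urllib2.URLError, e:
                if isinstance(e.reason, socket.gaierror):
                    # DNS resolution failed: plausibly (not certainly) the network is down.
                    print "The server cannot be reached; your network connection may be down."
                else:
                    print "Could not connect: %s" % (e.reason,)

    Note that HTTPError is a subclass of URLError, so it has to be caught first if server-side errors should be treated differently from connectivity errors.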


  • Making HTTP POST request

    - by infrared
    I'm trying to make a POST request to retrieve information about a book. Here is the code; it returns HTTP code 302, Moved:

        import httplib, urllib

        params = urllib.urlencode({
            'isbn' : '9780131185838',
            'catalogId' : '10001',
            'schoolStoreId' : '15828',
            'search' : 'Search'
        })
        headers = {"Content-type": "application/x-www-form-urlencoded",
                   "Accept": "text/plain"}
        conn = httplib.HTTPConnection("bkstr.com:80")
        conn.request("POST", "/webapp/wcs/stores/servlet/BuybackSearch", params, headers)
        response = conn.getresponse()
        print response.status, response.reason
        data = response.read()
        conn.close()

    When I try from a browser, from this page: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828 , it works. What am I missing in my code? Thanks

    EDIT: Here's what I get when I print response.msg:

        302 Moved
        Date: Tue, 07 Sep 2010 16:54:29 GMT
        Vary: Host,Accept-Encoding,User-Agent
        Location: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
        X-UA-Compatible: IE=EmulateIE7
        Content-Length: 0
        Content-Type: text/plain; charset=utf-8

    It seems that the Location points to the same URL I'm trying to access in the first place?

    EDIT2: I've tried using urllib2 as suggested here. Here is the code:

        import urllib, urllib2

        url = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
        values = {'isbn' : '9780131185838',
                  'catalogId' : '10001',
                  'schoolStoreId' : '15828',
                  'search' : 'Search' }

        data = urllib.urlencode(values)
        req = urllib2.Request(url, data)
        response = urllib2.urlopen(req)
        print response.geturl()
        print response.info()
        the_page = response.read()
        print the_page

    And here is the output:

        http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
        Date: Tue, 07 Sep 2010 16:58:35 GMT
        Pragma: No-cache
        Cache-Control: no-cache
        Expires: Thu, 01 Jan 1970 00:00:00 GMT
        Set-Cookie: JSESSIONID=0001REjqgX2axkzlR6SvIJlgJkt:1311s25dm; Path=/
        Vary: Accept-Encoding,User-Agent
        X-UA-Compatible: IE=EmulateIE7
        Content-Length: 0
        Connection: close
        Content-Type: text/html; charset=utf-8
        Content-Language: en-US
        Set-Cookie: TSde3575=225ec58bcb0fdddfad7332c2816f1f152224db2f71e1b0474c866f3b; Path=/
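
    One hedged reading of those Set-Cookie headers is that the server wants an established session before it will answer the search POST. A sketch using a cookie-aware opener (the assumption being that visiting the form page first issues the needed JSESSIONID):

        import cookielib
        import urllib
        import urllib2

        # Keep cookies (JSESSIONID, TSde3575) across requests.
        jar = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

        # Visit the form page first so the server establishes a session.
        opener.open('http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView'
                    '?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828')

        data = urllib.urlencode({'isbn': '9780131185838', 'catalogId': '10001',
                                 'schoolStoreId': '15828', 'search': 'Search'})
        response = opener.open(
            'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch', data)
        print response.read()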


  • Problem with grails web app running in production: "No such property: save for class: JsecRole"

    - by Sarah Boyd
    I've got a grails 1.1 web app running great in development, but when I try to run it in production with an SQL Server database it crashes in a weird way. The relevant part of my datasource.groovy is as follows:

        environments {
            development {
                dataSource {
                    dbCreate = "create-drop" // one of 'create', 'create-drop', 'update'
                    url = "jdbc:hsqldb:mem:devDB"
                }
            }
            test {
                dataSource {
                    dbCreate = "update"
                    url = "jdbc:hsqldb:mem:testDb"
                }
            }
            production {
                dataSource {
                    dbCreate = "update"
                    driverClassName = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
                    endUsername = "sa"
                    password = "pw4db"
                    url = "jdbc:sqlserver://localhost:1433;databaseName=ReleasePlanner;selectMethod=cursor"

    The error message I receive is:

        Message: No such property: save for class: JsecRole
        Caused by: groovy.lang.MissingPropertyException: No such property: save for class: JsecRole
        Class: ProjectController
        At Line: [28]
        Code Snippet:
        27: println "###about to create project roles"
        28: userManagerService.createProjectRoles(project)
        29: userManagerService.addUserToProject(session.user.id.toString(), project, 'owner') } } }

    The stacktrace is as follows:

        org.codehaus.groovy.runtime.InvokerInvocationException: groovy.lang.MissingPropertyException: No such property: save for class: JsecRole
            at org.jsecurity.web.servlet.JSecurityFilter.doFilterInternal(JSecurityFilter.java:382)
            at org.jsecurity.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:180)
        Caused by: groovy.lang.MissingPropertyException: No such property: save for class: JsecRole
            at UserManagerService.createProjectRoles(UserManagerService.groovy:9)
            at UserManagerService$$FastClassByCGLIB$$6fa73713.invoke(<generated>)
            at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:149)
            at UserManagerService$$EnhancerByCGLIB$$fcf60984.createProjectRoles(<generated>)
            at UserManagerService$createProjectRoles.call(Unknown Source)
            at ProjectController$_closure4.doCall(ProjectController.groovy:28)
            at ProjectController$_closure4.doCall(ProjectController.groovy)
            ... 2 more

    Any help is appreciated. Thanks Sarah


  • Newly created document library and columns created via web services are not visible in SharePoint

    - by Royson
    Hi, for creating columns I worked on this code, and for creating a document library:

        Lists listService = new Lists();
        listService.PreAuthenticate = true;
        listService.Credentials = new NetworkCredential(username, password, domain);
        String url = "http://YourServer/SiteName/";
        listService.Url = url + "_vti_bin/lists.asmx";
        XmlNode ndList = listService.AddList(NewListName, "Description", 101);

    Both work successfully. But the problem I am facing is that the new columns and the document library are not visible. I tried comparing the Field value of visible and non-visible types. The difference I found is that the visible one (created manually) doesn't contain a Version value, whereas the ones I create have it. Can you help me out with this?

    EDIT: I checked the contents of the ndList node; the list is created and it is visible in my UI. But in SharePoint it should be listed in the 'Documents' tab where the default 'Shared Documents' library is shown. If I click on 'Documents', then I can also see all the libraries created by this code. Visible means the library is displayed under the 'Documents' tab.


  • How do I scrape information off ASP.NET websites when paging and JavaScript links are being used?

    - by Ian Roke
    I have been given a staff list which is supposed to be up to date, but it doesn't match an intranet People Finder which is written in ASP.NET. As the information is sensitive, I am not able to access the database the People Finder uses, so the only way I can get at the information is by scraping the structure, starting with the top brass and then going through each tier in turn. Each person has a staff number which forms the URL http://intranet/peoplefinder/index.aspx?srn=ABC1234, and all the people who report to them are listed underneath in the format

        <a id="gvEmployees_ctl03_lnkFullName" href="index.aspx?srn=ABC4321" target="_self">

    where each URL indicates the staff number and provides a link to their team. The trouble arises when the teams are big, as paging is implemented in the GridView with a URL such as

        <a href="javascript:__doPostBack('gvEmployees','Page$2')">2</a>

    How would I scrape this page, capture the SRN and other details along with the people who report to the person on all pages of the GridView, then loop through each reportee and repeat the process until the whole list is complete?
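
    The __doPostBack links mean the pager is an ASP.NET form post, so a scraper has to replay that post itself, echoing back the page's hidden state fields. A sketch in Python (an assumption - any HTTP client works; hidden field names beyond the standard __VIEWSTATE pair may vary by page) using the requests library:

        import re
        import requests

        def get_page(session, url, page_number=None):
            # Fetch the URL; for later pages, replay the GridView's __doPostBack.
            response = session.get(url)
            if page_number is None:
                return response.text
            # Echo back the hidden state fields ASP.NET requires on a postback.
            hidden = dict(re.findall(
                r'id="(__VIEWSTATE|__EVENTVALIDATION)" value="([^"]*)"', response.text))
            hidden['__EVENTTARGET'] = 'gvEmployees'
            hidden['__EVENTARGUMENT'] = 'Page$%d' % page_number
            return session.post(url, data=hidden).text

        session = requests.Session()
        html = get_page(session, 'http://intranet/peoplefinder/index.aspx?srn=ABC1234', 2)

    Each index.aspx?srn=... link found in the returned HTML can then be pushed onto a queue and crawled the same way until no new SRNs appear.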


  • https google urlshortener request missing body

    - by Peter
    Hi, Just trying to get the new API for the goo.gl URL shortening service working on my iPhone, following the instructions on http://code.google.com/apis/urlshortener/v1/getting_started.html. I'm set up and the API is enabled etc., but when I send a request in the recommended format:

        POST https://www.googleapis.com/urlshortener/v1/url
        Content-Type: application/json

        {"longUrl": "http://www.google.com/"}

    I get an error returned. The error is exactly the one listed in the errors section of that page for when you haven't passed in a longUrl param. This makes me think that I'm not setting up the body of the POST request properly. Here's the code, if you have any pointers:

        NSString *longURLString = @"http://www.stackoverflow.com";
        NSString *googlRequestString = @"https://www.googleapis.com/urlshortener/v1/url";

        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:googlRequestString]];
        [request setHTTPMethod:@"POST"];
        [request addValue:@"application/json" forHTTPHeaderField:@"Content-Type:"];

        NSString *bodyString = [NSString stringWithFormat:@"{\"longUrl\": \"%@\"}", longURLString];
        [request setHTTPBody:[bodyString dataUsingEncoding:NSUTF8StringEncoding]];

        NSURLResponse *theResponse;
        NSError *error = nil;
        NSData *receivedData = [NSURLConnection sendSynchronousRequest:request returningResponse:&theResponse error:&error];

        NSString *receivedString = [[NSString alloc] initWithData:receivedData encoding:NSUTF8StringEncoding];
        NSLog(@"Received data: %@", receivedString);
        [receivedString release];

    The NSLog returns:

        Received data: {
         "error": {
          "errors": [
           {
            "domain": "global",
            "reason": "required",
            "message": "Required",
            "locationType": "parameter",
            "location": "resource.longUrl"
           }
          ],
          "code": 400,
          "message": "Required"
         }
        }

    which is exactly what Google says you get if you have not passed a longUrl parameter... My guess is I'm missing something very obvious here :-) P
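
    When a request misbehaves like this, it can help to reproduce it outside the app to isolate whether the body or a header is at fault. A sketch of the same call in Python (purely for comparison; the API key query parameter is omitted, as in the post):

        import json
        import urllib2

        request = urllib2.Request(
            'https://www.googleapis.com/urlshortener/v1/url',
            data=json.dumps({'longUrl': 'http://www.stackoverflow.com'}))
        # The header field name must be exactly 'Content-Type', with no trailing colon.
        request.add_header('Content-Type', 'application/json')
        print urllib2.urlopen(request).read()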


  • Unknown error when submitting a REST request to the Liferay JSON API

    - by r.rodriguez
    I'm writing a script in Python to automatically update the structures in my Liferay portal, and I want to do it via the JSON REST API. I make a request to get a structure (method getStructure), and it works. But when I try to do a structure update in the portal, it shows me the following error:

        ValueError: Content-Length should be specified for iterable data of type class 'dict'
        {'serviceContext': "{'prueba'}", 'serviceClassName': 'com.liferay.portlet.journal.service.JournalStructureServiceUtil', 'name': 'FOO', 'xsd': '... THE XSD OBTAINED VIA JSON ...', 'serviceParameters': '[groupId,structureId,parentStructureId,name,description,xsd,serviceContext]', 'description': 'FOO Structure', 'serviceMethodName': 'updateStructure', 'groupId': '10133'}

    What I'm doing is the following:

        urllib.request.Request(url = URL, data = data_update, headers = headers)

    URL is http://localhost:8080/tunnel-web/secure/json. The headers are configured with basic authentication (it works; it is tested with the getStructure method). Data is:

        data_update = {
            "serviceClassName" : "com.liferay.portlet.journal.service.JournalStructureServiceUtil",
            "serviceMethodName" : "updateStructure",
            "serviceParameters" : "[groupId,structureId,parentStructureId,name,description,xsd,serviceContext]",
            "groupId" : 10133,
            "name" : FOO,
            "description" : FOO Structure,
            "xsd" : ... THE XSD OBTAINED VIA JSON ...,
            "serviceContext" : "{}"
        }

    Does anybody know the solution? Do I have to specify the length for the dictionary, and if so, how? Or is this a bug?
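
    The error message suggests urllib.request received the dict unencoded; request bodies must be bytes. A hedged sketch (assuming Python 3's urllib, with the URL from the post and a placeholder for the working basic-auth headers) encodes the form data first:

        import urllib.parse
        import urllib.request

        URL = 'http://localhost:8080/tunnel-web/secure/json'
        headers = {}  # basic-auth headers, as configured in the working getStructure call
        data_update = {'serviceMethodName': 'updateStructure'}  # abridged; full dict as in the post

        # urlencode the dict, then encode to bytes so Content-Length can be computed.
        body = urllib.parse.urlencode(data_update).encode('utf-8')
        request = urllib.request.Request(url=URL, data=body, headers=headers)
        response = urllib.request.urlopen(request)

    Whether Liferay also needs the individual values JSON-encoded is a separate question from the encoding error itself.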


  • How do you configure jax-ws to work with Spring using jax-ws commons?

    - by LES2
    In web.xml I have the following:

        <servlet>
            <description>JAX-WS endpoint - EARM</description>
            <display-name>jaxws-servlet</display-name>
            <servlet-name>jaxws-servlet</servlet-name>
            <servlet-class>com.sun.xml.ws.transport.http.servlet.WSSpringServlet</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>jaxws-servlet</servlet-name>
            <url-pattern>/webServices/*</url-pattern>
        </servlet-mapping>

    In my application context I have the following definitions:

        <bean id="helloService" class="com.foo.HelloServiceImpl">
            <property name="regularService" ref="regularService" />
        </bean>

        <wss:binding url="/webServices/helloService" service="#helloService" />

    I get a NullPointerException when trying to access the WSDL:

        java.lang.NullPointerException
            at com.sun.xml.ws.transport.http.HttpAdapter.<init>(HttpAdapter.java:145)
            at com.sun.xml.ws.transport.http.servlet.ServletAdapter.<init>(ServletAdapter.java:76)
            at com.sun.xml.ws.transport.http.servlet.ServletAdapterList.createHttpAdapter(ServletAdapterList.java:50)
            at com.sun.xml.ws.transport.http.servlet.ServletAdapterList.createHttpAdapter(ServletAdapterList.java:47)
            at com.sun.xml.ws.transport.http.HttpAdapterList.createAdapter(HttpAdapterList.java:73)
            at com.sun.xml.ws.transport.http.servlet.SpringBinding.create(SpringBinding.java:24)
            at com.sun.xml.ws.transport.http.servlet.WSSpringServlet.init(WSSpringServlet.java:46)

    Strange ... it appears to be a configuration error, but the darn thing just dies with a NullPointerException!!!!!!!! No logging is provided. Deployed in Resin.


  • Why do the page posts take so long?

    - by Olle
    Hi! I am having some problems with some page postbacks that take a loooong time to execute. If I do an "appcmd list requests" I can get something like this:

        REQUEST "79000001800004e3" (url:POST /dir/file.aspx, time:87219 msec, client:xxx.xxx.xxx.xxx, stage:ExecuteRequestHandler, module:ManagedPipelineHandler)
        REQUEST "8600000080002f82" (url:POST /dir/file.aspx, time:61391 msec, client:xxx.xxx.xxx.xxx, stage:AcquireRequestState, module:Session)
        REQUEST "5e00010280000420" (url:POST /dir/file.aspx, time:21047 msec, client:xxx.xxx.xxx.xxx, stage:AcquireRequestState, module:Session)

    It's one particular file that causes the problem (/dir/file.aspx in this case), and the requests come from the same IP address. The first one is in the ManagedPipelineHandler module and the two after it are in the Session module. I do not have any details about the web browser, or anything more about the client for that matter. I have looked for SQL deadlocks and did not find any, and there are no long-running SQL queries at all. Do you have any idea what the problem could be? Regards.


  • Sample Twitter application

    - by Jack
        <?php
        function updateTwitter($status) {
            // Twitter login information
            $username = 'xxxxx';
            $password = 'xxxxxx';
            // The url of the update function
            $url = 'http://twitter.com/statuses/update.xml';
            // Arguments we are posting to Twitter
            $postargs = 'status='.urlencode($status);
            // Will store the response we get from Twitter
            $responseInfo = array();
            // Initialize CURL
            $ch = curl_init($url);
            // Tell CURL we are doing a POST
            curl_setopt($ch, CURLOPT_POST, true);
            // Give CURL the arguments in the POST
            curl_setopt($ch, CURLOPT_POSTFIELDS, $postargs);
            // Set the username and password in the CURL call
            curl_setopt($ch, CURLOPT_USERPWD, $username.':'.$password);
            // Set some curl flags (not too important)
            curl_setopt($ch, CURLOPT_VERBOSE, 1);
            curl_setopt($ch, CURLOPT_NOBODY, 0);
            curl_setopt($ch, CURLOPT_HEADER, 0);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            // Execute the CURL call
            $response = curl_exec($ch);
            // Get information about the response
            $responseInfo = curl_getinfo($ch);
            // Close the CURL connection
            curl_close($ch);
            // Make sure we received a response from Twitter
            if (intval($responseInfo['http_code']) == 200) {
                // Display the response from Twitter
                echo $response;
            } else {
                // Something went wrong
                echo "Error: " . $responseInfo['http_code'];
            }
        }

        updateTwitter("Just finished a sweet tutorial on http://brandontreb.com");
        ?>

    I get the following output: Error: 0. Please help.


  • Blank graph for some munin plugins

    - by jack
    I have a munin-master and munin-node installed on the same server (Ubuntu 9.10 server). Most pre-installed plugins work well, but the following plugins produce blank graphs:

    - Memcached bytes used
    - Memcached connections
    - Memcached cache hits and misses
    - Memcached cached items
    - Memcached requests
    - Memcached network traffic
    - MySQL Queries Cache Size

    I ran the following three scripts in a terminal and the results were OK:

        /etc/munin/plugins/memcached_bytes
        /etc/munin/plugins/memcached_counters
        /etc/munin/plugins/memcached_rates

    But when I tried the commands below after "telnet localhost 4949":

        fetch memcached_bytes
        # Unknown service
        fetch memcached_bytes_
        # timeout pid 28009 - killing...done

    Does anyone know the reason?


  • Qt Jambi: Accessing the content of QNetworkReply

    - by Richard
    Hi All, I'm having trouble accessing the content of QNetworkReply objects. The content appears to be empty or zero. According to the docs (translating from C++ to Java), I think I've got this set up correctly, but to no avail. Additionally, an "Unknown error" is being reported. Any ideas much appreciated. Code:

        public class Test extends QObject {
            private QWebPage page;

            public Test() {
                page = new QWebPage();
                QNetworkAccessManager nac = new QNetworkAccessManager();
                nac.finished.connect(this, "requestFinished(QNetworkReply)");
                page.setNetworkAccessManager(nac);
                page.loadProgress.connect(this, "loadProgress(int)");
                page.loadFinished.connect(this, "loadFinished()");
            }

            public void requestFinished(QNetworkReply reply) {
                reply.reset();
                reply.open(OpenModeFlag.ReadOnly);
                reply.readyRead.connect(this, "ready()");             // never gets called
                System.out.println("bytes: " + reply.url().toString()); // writes out asset uri no problem
                System.out.println("bytes: " + reply.bytesToWrite());   // 0
                System.out.println("At end: " + reply.atEnd());         // true
                System.out.println("Error: " + reply.errorString());    // "Unknown error"
            }

            public void loadProgress(int progress) {
                System.out.println("Loaded " + progress + "%");
            }

            public void loadFinished() {
                System.out.println("Done");
            }

            public void ready() {
                System.out.println("Ready");
            }

            public void open(String url) {
                page.mainFrame().load(new QUrl(url));
            }

            public static void main(String[] args) {
                QApplication.initialize(new String[] { });
                Test t = new Test();
                t.open("http://news.bbc.co.uk");
                QApplication.exec();
            }
        }


  • foreach is crashing the script, but no errors are reported

    - by ILMV
    So I've created this Smarty function to get images from my Flickr photostream using SimplePie... simple really, or so it should be. The problem I'm having is that the foreach crashes the script; this doesn't happen if I put an exit after the closing foreach, but of course then the rest of my script doesn't execute. The problem also completely subsides if I remove the foreach. I've tested it, and it's not the contents of the foreach but the loop itself. Error reporting is turned on but I don't get any errors; I also tried messing with the memory_limit, with no luck. Anyone know why this foreach is killing my script? Thanks!

        function smarty_function_flickr($params, &$smarty)
        {
            require_once('system/library/SimplePie/simplepie.inc');
            require_once('system/library/SimplePie/idn/idna_convert.class.php');

            $flickr = new flickr();

            /**
             * Set up SimplePie with all default values using shorthand syntax.
             */
            $feed = new SimplePie($params['feed'], 'system/library/SimplePie/cache', '600');
            $feed->handle_content_type();

            /**
             * What sizes should we use?
             * Choices: square, thumb, small, medium, large.
             */
            $thumb = 'square';
            $full = 'medium';

            $output = array();
            $counter = 0;

            // If I comment this foreach out the problem subsides; I know it is not the code within the foreach
            foreach ($feed->get_items() as $item) {
                $url = $flickr->image_from_description($item->get_description());
                $output[$counter]['title'] = $item->get_title();
                $output[$counter]['image'] = $flickr->select_image($url, $full);
                $output[$counter]['thumb'] = $flickr->select_image($url, $thumb);
                $counter++;
            }

            // Set template variables and template
            $smarty->assign('flickr', $output);
            $smarty->display('forms/'.$params['template'].'.tpl');
        }


  • Send POST request from client to node.js

    - by Husar
    In order to learn node.js I have built a very simple guestbook app. There is basically just a comment form and a list of previous comments. Currently the app is client-side only, and the items are stored in local storage. What I want to do is send the items to Node, where I will save them using MongoDB. The problem is I have not yet found a way to establish a connection to send data back and forth between the client and node.js using POST requests. What I do now is add listeners to the request and wait for data I send:

        request.addListener('data', function(chunk) {
            console.log("Received POST data chunk '" + chunk + "'.");
        });

    On the client side I send the data using a simple AJAX request:

        $.ajax({
            url: '/',
            type: 'post',
            dataType: 'json',
            data: 'test'
        })

    This does not work at all at the moment. It could be that I don't know what URL to place in the AJAX request 'url' parameter. Or the whole thing might just be built using the wrong approach. I have also tried implementing the method described here, but with no success. It would really help if anyone can share some tips on how to make this work (sending POST requests from the client side to Node and back) or share any good tutorials. Thanks.


  • NoneType object has no attribute '__getitem__'

    - by adohertyd
    I am trying to use an API wrapper downloaded from the net to get results from the new Azure Bing API. I'm trying to implement it as per the instructions, but I'm getting this runtime error:

        Traceback (most recent call last):
          File "bingwrapper.py", line 4, in <module>
            bingsearch.request("affirmative action")
          File "/usr/local/lib/python2.7/dist-packages/bingsearch-0.1-py2.7.egg/bingsearch.py", line 8, in request
            return r.json['d']['results']
        TypeError: 'NoneType' object has no attribute '__getitem__'

    This is the wrapper code:

        import requests

        URL = 'https://api.datamarket.azure.com/Data.ashx/Bing/SearchWeb/Web?Query=%(query)s&$top=50&$format=json'
        API_KEY = 'SECRET_API_KEY'

        def request(query, **params):
            r = requests.get(URL % {'query': query}, auth=('', API_KEY))
            return r.json['d']['results']

    The instructions are:

        >>> import bingsearch
        >>> bingsearch.API_KEY = 'Your-Api-Key-Here'
        >>> r = bingsearch.request("Python Software Foundation")
        >>> r.status_code
        200
        >>> r[0]['Description']
        u'Python Software Foundation Home Page. The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to ...'
        >>> r[0]['Url']
        u'http://www.python.org/psf/'

    And this is my code that uses the wrapper (as per the instructions):

        import bingsearch
        bingsearch.API_KEY = 'abcdefghijklmnopqrstuv'
        r = bingsearch.request("affirmative+action")
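
    A NoneType here usually means r.json failed to parse the response body (often an auth or quota error page rather than JSON). A defensive sketch - noting as assumptions that newer requests versions expose json as a method, r.json(), rather than a property, and that the Datamarket endpoint may expect the query wrapped in single quotes:

        import requests

        URL = ('https://api.datamarket.azure.com/Data.ashx/Bing/SearchWeb/Web'
               '?Query=%(query)s&$top=50&$format=json')
        API_KEY = 'SECRET_API_KEY'  # placeholder

        def request(query):
            r = requests.get(URL % {'query': "'%s'" % query}, auth=('', API_KEY))
            r.raise_for_status()  # surface HTTP-level failures instead of returning None
            payload = r.json() if callable(r.json) else r.json
            if payload is None:
                raise ValueError('Response was not JSON: %r' % r.text[:200])
            return payload['d']['results']

    Printing r.status_code and r.text at the point of failure should make the actual cause visible.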

