Search Results

Search found 978 results on 40 pages for 'feeds'.

Page 12 of 40

  • Web services not reachable through the web browser [closed]

    - by Tony
    I am trying to reference my .asmx web services in .NET, but my server is not exposed to the internet. When I enter the address below I get the message quoted underneath. What is the reason for not being able to see the directory? Am I missing something in my IIS configuration? Am I missing anything in my permissions? For reference, I have other folders with web services and the same issue occurs there. When I log in to the server I do so with my Windows user and password (I am using Windows authentication). I should mention that when I enter the URL I do get a popup asking for my user id and password, but it seems unable to validate them, since it keeps prompting me a couple of times. Let me know if you need more information to address this issue. http://appsvr02/Inetpub/wwwroot/DevWebApi/ Internet Explorer cannot display the webpage. What you can try: it appears you are connected to the Internet, but you might want to try to reconnect to the Internet; retype the address; go back to the previous page. Most likely causes: you are not connected to the Internet; the website is encountering problems; there might be a typing error in the address. More information: this problem can be caused by a variety of issues, including: Internet connectivity has been lost; the website is temporarily unavailable; the Domain Name Server (DNS) is not reachable; the Domain Name Server (DNS) does not have a listing for the website's domain; if this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check that the SSL and TLS protocols are enabled under the Security section. For offline users: you can still view subscribed feeds and some recently viewed webpages. To view subscribed feeds: click the Favorites Center button, click Feeds, and then click the feed you want to view. To view recently visited webpages (might not work on all pages): click Tools, and then click Work Offline; click the Favorites Center button, click History, and then click the page you want to view.

    Read the article

  • How to Install Mac OS X Lion on Your HP ProBook (or Compatible Laptop)

    - by Usman
    There's nothing more satisfying than building a hackintosh, i.e. installing Mac OS X on a non-Apple machine. Although it isn't as easy as it sounds, the end result is worth the effort. Building a PC with specific components and installing Mac OS X on it can save you thousands of dollars you might spend on a real Mac. And now, it's time to step into the portable world. Today we will show how you can turn an HP ProBook (or any compatible Sandy Bridge laptop) into a 95% MacBook Pro! Why should (or shouldn't) you do it? Let's clarify whether or not it should be done. Firstly, we all know Apple makes awesome laptops. The design, build quality, and the aesthetics (not to mention the glowing Apple) will make you crave one. Secondly, all these Apple laptops are bundled with Mac OS X, which (for some people) is the most user-friendly and annoyance-free operating system. Digital artists, musicians, and video editors all prefer the Mac for a reason. So the verdict is: if hardware design is what you really look for, you should get a real Mac, and we are not at all stopping you from doing so. But if you're only concerned with the OS (and saving a few bucks), you may consider giving this a shot. But remember, it may not perform as well as a real Mac does. Results vary, so hope for the best and proceed with caution. Why HP ProBook?

    Read the article

  • Invitation: Manageability Partner Community

    - by pfolgado
    WELCOME TO THE NEW ORACLE EMEA MANAGEABILITY PARTNER COMMUNITY Dear partner, You are receiving this message because you are a registered member of the Oracle Applications & Systems Management Partner Community in EMEA. On the occasion of the announcement of Oracle Enterprise Manager 12c we are revitalizing and rebranding our EMEA Applications & Systems Management Partner Community. To do this we have improved the community platform for better and increased collaboration: The EMEA Applications & Systems Management Partner Community is now renamed the "Manageability Partner Community EMEA". We have created a Manageability Community blog and a Collaboration Workspace: The EMEA Manageability Partner Community blog is a public blog and we use it to provide quick and easy communication to the community members (please bookmark it or subscribe to the RSS feeds). The EMEA Manageability Partner Community Collaborative Workspace is a restricted area that only community members can access. It contains materials from community events, sales kits, and implementation experiences, reserved for community members. It also allows partners to share content and collaborate with other community members. As a registered member of the community you have already been granted access to this restricted area. A dedicated team manages the EMEA Manageability community on a continuous basis. What do you have to do? All you have to do now is to bookmark the EMEA Manageability Partner Community blog page or subscribe to the blog's RSS feeds and use this as your central point of contact for Manageability information from Oracle. I look forward to developing a strong community in the Manageability area, where Oracle Manageability partners can share experiences and mutually benefit. Best regards, Javier Puerta, Director Core Technology Partner Programs, Alliances & Channels EMEA, Phone: +34 916 312 41, Mobile: +34 609 062 373; Patrick Rood, EMEA Partner Programs for Manageability, Oracle EMEA Technology, Phone: +31 306 627 969, Mobile: +31 611 954 277

    Read the article

  • How do comparison sites work?

    - by Vijay
    I need your thoughts on how these comparison sites actually work. Sites like Junglee.com and policybazaar.com (and there are many like them) provide comparisons of products, fares, etc. gathered from different websites. I have read a little about it, and what I found is: these sites use feeds of the source sites' data; they use APIs that those sites actually provide; and for sites that offer neither of those two possibilities, the comparison sites use a web crawler to crawl the data. This is what I have found out; if you think there is more to it, please give your own views. But I mostly want to know this for learning purposes, and a little out of curiosity: how do they actually match the crawled data, feeds, and other sources so that there is no duplication? What is the process or algorithm for it? And where should I go to learn these concepts? References to books, articles or anything else are welcome.
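    One common building block for the de-duplication part is to normalize listing titles and compare them for similarity before merging records. The sketch below only illustrates that idea; the normalization rules, threshold, and sample listings are assumptions, not taken from any real comparison site:

        import re
        from difflib import SequenceMatcher

        def normalize(title):
            """Lowercase, strip punctuation and collapse whitespace so
            near-identical listings from different sites compare equal."""
            title = re.sub(r"[^a-z0-9 ]+", " ", title.lower())
            return " ".join(title.split())

        def same_product(title_a, title_b, threshold=0.85):
            """Treat two listings as the same product when their normalized
            titles are sufficiently similar."""
            ratio = SequenceMatcher(None, normalize(title_a), normalize(title_b)).ratio()
            return ratio >= threshold

        # Listings as they might arrive from two different stores' feeds.
        a = "Canon EOS 1500D DSLR Camera (18-55mm Lens)"
        b = "Canon EOS 1500D DSLR camera with 18-55 mm lens"
        print(same_product(a, b))  # True for these near-identical titles

    Real systems usually combine several signals (brand, model number, price band, category) rather than titles alone, but the normalize-then-compare step is where most of them start.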

    Read the article

  • Booby Traps and Locked-in Kids: An Interview with a Safecracker

    - by Jason Fitzpatrick
    While most of our articles focus on security of the digital sort, this interview with a professional safecracker is an interesting look at the physical side of securing your goods. As part of their Interviews with People Who Have Interesting or Unusual Jobs series over at McSweeney's, they interviewed Ken Doyle, a professional locksmith and safecracking veteran with 30 years of industry experience. The interview is both entertaining and an interesting read. One of the more unusual aspects of safecracking he highlights: Q: Do you ever look inside? A: I NEVER look. It's none of my business. Involving yourself in people's private affairs can lead to being subpoenaed in a lawsuit or criminal trial. Besides, I'd prefer not knowing about a client's drug stash, personal porn, or belly button lint collection. When I'm done I gather my tools and walk to the truck to write my invoice. Sometimes I'm out of the room before they open it. I don't want to be nearby if there is a booby trap. Q: Why would there be a booby trap? A: The safe owner intentionally uses trip mechanisms, explosives or tear gas devices to "deter" unauthorized entry into his safe. It's pretty stupid because I have yet to see any signs warning a would-be culprit about the danger.

    Read the article

  • SQL Monitor Alerts in Outlook Without Configuring Email Settings

    - by Fatherjack
    SQL Monitor is a Red Gate tool that I have a long history with; I have worked closely with the development team since before it was even called SQL Monitor. It is with that history in mind that I am a little disappointed in myself for only just finding out about a pretty cool feature. Out of the box, SQL Monitor keeps itself to itself: it busily goes about watching over your servers, noting down when things look suspicious, change drastically or are just out-and-out wrong. You have to go into the settings and provide email details (SMTP server, account details, etc.) before it starts getting at all intrusive with warnings and alerts on the condition of your servers. However, it was after installing the most recent version, while going through the application screen by screen looking for new and interesting changes, that I noticed something that had escaped my attention. On the Alerts tab there is an option in the left-hand menu. I don't know how long ago it appeared or why I have never explored it previously, but it appears that you can see your alerts in the form of an RSS feed. When you click that link you are taken to a page that is the raw RSS XML, which is not too interesting in itself, but clearly you can use it in an RSS aggregator. Such as Outlook. Note the URL of the newly opened page and take it with you into Outlook. For me it is of the form http://SQLMonitorServerName/Alerts/Inbox/Feed. Again, this is something that I have only recently noticed: Outlook can aggregate RSS feeds. Down below the Inbox, Drafts and other folders, one up from the bottom, is RSS Feeds. If you right-click that and choose to add a feed, you can supply the URL for the SQL Monitor alerts. And there you have it: your SQL Monitor alerts available in Outlook, where you can keep an eye on the number of unread items and pick them off at your convenience.

    Read the article

  • How news applications work internally [on hold]

    - by Vijay
    How do news applications work, other than the RSS-feed-based ones? I know some of them take the RSS content from the source site, but sometimes those applications show a title, description, date, image, video, etc. even though, when I look at the original site's RSS, the image and video are not in the feed. So how do they get that content into their applications? Some applications even show feeds from magazine and newspaper sites. How do these applications work? I am creating an application that will link to different news sites' feeds, organized by category (top news, technology, games, articles, etc.). The front page will show the website names; on selection of a news site it will fetch that site's feed and show it to the user. So I would like to know: should all data be fetched when the user makes a selection, or should it be prefetched? And how should I go about fetching the detailed information from the original article, beyond what the RSS data provides?

    Read the article

  • Adding UCM as a search source in Windows Explorer

    - by kyle.hatlestad
    A customer recently pointed out to me that Windows 7 supports federated search within Windows Explorer. This means you can perform searches against external sources such as Google, Flickr, YouTube, etc. right from within Explorer. While we do have the Desktop Integration Suite, which offers searching within Explorer, I thought it would be interesting to look into this method, which does not require any client software to implement. Basically, federated searching hooks up in Windows Explorer through the OpenSearch protocol. A Search Connector Descriptor file is run and it installs the search provider. The file is a .osdx file, which is an OpenSearch Description document. It describes the search provider you are hooking up to along with the URL for the query. If those results can come back as an RSS or ATOM feed, then you're all set. So the first step is to install the RSS Feeds component from the UCM Samples page on OTN. If you're on 11g, I've found the RSS Feeds component works just fine on that version too. Next, you want to perform a Quick Search with a particular search term and then copy the RSS link address for that search result. Here is what an example URL might look like: http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&feedName=search_results&QueryText=%28+%3cqsch%3eoracle%3c%2fqsch%3e+%29&SortField=dInDate&SortOrder=Desc&ResultCount=20&SearchQueryFormat=Universal&SearchProviders=server& Now you want to create a new text file and start out with this information: <?xml version="1.0" encoding="UTF-8"?><OpenSearchDescription xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/"> <ShortName></ShortName> <Description></Description> <Url type="application/rss+xml" template=""/> <Url type="text/html" template=""/> </OpenSearchDescription> Enter a ShortName and Description. The ShortName will be the value used when displaying the search provider in Explorer. In the template attribute for the first Url element, enter the URL copied previously. You will then need to convert the ampersand symbols to '&amp;' to make them XML compliant. Finally, you'll want to switch out the search term with '{searchTerms}'. For the second Url element, you can do the same thing, except you want to copy the UCM search results URL from the page of results. That URL will look something like: http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&SortField=dInDate&SortOrder=Desc&ResultCount=20&QueryText=%3Cqsch%3Eoracle%3C%2Fqsch%3E&listTemplateId=&ftx=1&SearchQueryFormat=Universal&TargetedQuickSearchSelection=&MiniSearchText=oracle Again, convert the ampersand symbols and replace the search term with '{searchTerms}'. When complete, save the file with the .osdx extension.
The completed file should look like: <?xml version="1.0" encoding="UTF-8"?> <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/"> <ShortName>Universal Content Management</ShortName> <Description>OpenSearch for UCM via Windows 7 Search Federation.</Description> <Url type="application/rss+xml" template="http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&amp;feedName=search_results&amp;QueryText=%28+%3Cqsch%3E{searchTerms}%3C%2fqsch%3E+%29&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=200&amp;SearchQueryFormat=Universal"/> <Url type="text/html" template="http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=20&amp;QueryText=%3Cqsch%3E{searchTerms}%3C%2Fqsch%3E&amp;listTemplateId=&amp;ftx=1&amp;SearchQueryFormat=Universal&amp;TargetedQuickSearchSelection=&amp;MiniSearchText={searchTerms}"/> </OpenSearchDescription> After you save the file, simply double-click it to create the provider. It will ask if you want to add the search connector to Windows. Click Add and it will add it to the Searches folder in your user folder as well as your Favorites. Now just click on the search icon and in the upper right search box, enter your term. As you are typing, it begins executing searches and the results will come back in Explorer. Now when you double-click on an item, it will try and download the web viewable for viewing. You also have the ability to save the search, just as you would in UCM. And there is a link to Search On Website which will launch your browser and go directly to the search results page there. And with some tweaks to the RSS component, you can make the results a bit more interesting. It supports the Media RSS standard, so you can pass along the thumbnail of the documents in the results. To enable this, edit the rss_resources.htm file in the RSS Feeds component. In the std_rss_feed_begin resource include, add the namespace 'xmlns:media="http://search.yahoo.com/mrss/' to the rss definition: <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:media="http://search.yahoo.com/mrss/"> Next, in the rss_channel_item_with_thumb include, below the closing image element, add this element: </images> <media:thumbnail url="<$if strIndexOf(thumbnailUrl, "@t") > 0 or strIndexOf(thumbnailUrl, "@g") > 0 or strIndexOf(thumbnailUrl, "@p") > 0$><$rssHttpHost$><$thumbnailUrl$><$elseif dGif$><$HttpWebRoot$>images/docgifs/<$dGif$><$endif$>" /> <description> This and lots of other tweaks can be done to the RSS component to help extend it for optimum use in Explorer. Hopefully this can get you started. *Note: This post also applies to Universal Records Management (URM).

    Read the article

  • XmlWriter and lower ASCII characters

    - by Rick Strahl
    Ran into an interesting problem today on my CodePaste.net site: the main RSS and ATOM feeds on the site were broken because one code snippet on the site contained a lower ASCII character (CHR(3)). I don't think this was done on purpose, but it was enough to make the feeds fail. After quite a bit of debugging, and after throwing a custom error handler into my actual feed generation code that just spit out the raw error instead of running it through ASP.NET MVC and my own error pipeline, I found the actual error. The lovely base exception and error trace I got looked like this: Error: '', hexadecimal value 0x03, is an invalid character. at System.Xml.XmlUtf8RawTextWriter.InvalidXmlChar(Int32 ch, Byte* pDst, Boolean entitize) at System.Xml.XmlUtf8RawTextWriter.WriteElementTextBlock(Char* pSrc, Char* pSrcEnd) at System.Xml.XmlUtf8RawTextWriter.WriteString(String text) at System.Xml.XmlWellFormedWriter.WriteString(String text) at System.Xml.XmlWriter.WriteElementString(String localName, String ns, String value) at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItemContents(XmlWriter writer, SyndicationItem item, Uri feedBaseUri) at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItem(XmlWriter writer, SyndicationItem item, Uri feedBaseUri) at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri) at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteFeed(XmlWriter writer) at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteTo(XmlWriter writer) at CodePasteMvc.Controllers.ApiControllerBase.GetFeed(Object instance) in C:\Projects2010\CodePaste\CodePasteMvc\Controllers\ApiControllerBase.cs:line 131 XML doesn't like extended ASCII characters. It turns out the issue is that XML in general does not deal well with lower ASCII characters. According to the XML spec it looks like any characters below 0x09 are invalid. If you generate an XML document in .NET with an embedded &#x3; entity (as mine did to create the error above), you tend to get an XML document error when displaying it in a viewer. For example, that is what the result of my feed output looks like with the invalid character embedded, viewed in Chrome, which displays RSS feeds as raw XML by default. Other browsers show similar error messages. The nice thing about Chrome is that you can actually view source and jump down to see the line that causes the error, which allowed me to track down the actual message that failed. If you create an XML document that contains a 0x03 character, the XML writer fails outright with the error: '', hexadecimal value 0x03, is an invalid character. The good news is that this behavior is overridable, so XML output can at least be created by using the XmlWriterSettings object when configuring the XmlWriter instance. In my RSS configuration code this looks something like this: MemoryStream ms = new MemoryStream(); var settings = new XmlWriterSettings() { CheckCharacters = false }; XmlWriter writer = XmlWriter.Create(ms, settings); and voila, the feed now generates. Now generally this is probably NOT a good idea, because as mentioned above these characters are illegal and if you view a raw XML document you'll get validation errors. Luckily, though, most RSS feed readers don't care and happily accept and display the feed correctly, which is good because it got me over an embarrassing hump until I figured out a better solution. How to handle extended characters?
    I was glad to get the feed fixed for the time being, but now I was still stuck with an interesting dilemma. CodePaste.net accepts user input for code snippets, and those code snippets can contain just about anything. This means that ASP.NET's standard request filtering cannot be applied to this content. The code content displayed is encoded before display, so for the HTML end the CHR(3) input is not really an issue. While invisible characters are hardly useful in user input, it's not uncommon for odd characters to show up in code snippets. You know the old fat-fingering that happens when you're in the middle of a coding session: those invisible characters do sometimes end up in code editors and then get pasted into the HTML textbox as a CodePaste.net snippet. The question is how to filter this text. Looking back at the XML charset spec, it looks like all characters below 0x20 (space) except for 0x09 (tab), 0x0A (LF) and 0x0D (CR) are illegal. So applying the following filter with a RegEx should work to remove invalid characters: string code = Regex.Replace(item.Code, @"[\u0000-\u0008\u000B\u000C\u000E-\u001F]", ""); Applying this RegEx to the code snippet (and title) eliminates the problems and the feed renders cleanly. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, XML

    Read the article

  • AJAX call through CAS

    - by manu1001
    I need to write a Google gadget that reads feeds from Google Groups. The trouble is that I'm making an ajax call to retrieve the feeds, and our Google Apps domain is protected by CAS (Central Authentication Service), so I'm getting a 400 Bad Request when making the call. I suspect that the browser is not sending the cookie when making the ajax call. How do I ensure that the cookie is also sent with the ajax call? Or, if that is not actually the problem, what do I need to do?

    Read the article

  • Where/what is the specification for odata.service meta tag?

    - by Sam Saffron
    I would like to add some tags to our web app to enable auto-discovery of our OData feeds. So, for example, Nerd Dinner has the following tags: <link rel="odata.service" title="NerdDinner.com OData Service" href="/Services/OData.svc" /><link rel="odata.feed" title="NerdDinner.com OData Service - Dinners" href="/Services/OData.svc/Dinners" /> The trouble is that I have 4 different feeds and am unclear whether I am allowed to add multiple link rel="odata.service" elements to the document. Where is the specification for this meta tag? (Follow-on question: are there any apps that take advantage of this tag that I can use to test out the behavior?)

    Read the article

  • SyndicationItem.Content is Null

    - by kdmurray
    I'm trying to pull the contents of an RSS feed into an object that can be manipulated in code. It looks like the SyndicationFeed and SyndicationItem classes in .NET 3.5 will do what I need, except for one thing. Every time I've tried to read in the contents of an RSS feed using the SyndicationFeed class, the .Content element for each SyndicationItem is null. I've run my feed through FeedValidator and have tried this with feeds from several other sources, but to no avail. XmlReader xr = XmlReader.Create("http://shortordercode.com/feed/"); SyndicationFeed feed = SyndicationFeed.Load(xr); foreach (SyndicationItem item in feed.Items) { Console.WriteLine(item.Title.Text); Console.WriteLine(item.Content.ToString()); } Console.ReadLine(); I suspect I may just be missing a step somewhere, but I can't seem to find a good tutorial on how to consume RSS feeds using these classes.

    Read the article

  • YouTube API: get all videos from the uploads feed

    - by Paul
    Hi guys, I can't seem to retrieve ALL videos from a particular channel on YouTube, despite the API giving example code that should perform just that. I'm using Java. http://gdata.youtube.com/feeds/api/users/GoogleDevelopers/uploads The above RSS feed is the URL they suggest using, along with the following sample code: /* init the list */ String feedUrl = "http://gdata.youtube.com/feeds/api/users/GoogleDevelopers/uploads"; VideoFeed videoFeeder = null; videoFeeder = serviceobject.getFeed(new URL(feedUrl), VideoFeed.class); Looping over this with a for loop suggests 25 entries (as per the RSS). However, the actual number of videos uploaded is significantly larger (662 at the time of writing). My question is how on earth you retrieve everything with the API, not just a subset of the data. Any ideas on where I'm going wrong? Should I be using a different URL? http://www.youtube.com/GoogleDevelopers#g/a
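    The old GData feeds returned results a page at a time (25 entries by default) and were paged through with the start-index and max-results query parameters (max-results was capped at 50); the usual approach in the Java client was to keep following the feed's "next" link until there was none. The asker is on Java, and the GData YouTube API has since been retired, so the following Python sketch only illustrates the pagination pattern rather than a call that still works today:

        import urllib.request
        import xml.etree.ElementTree as ET

        ATOM = "{http://www.w3.org/2005/Atom}"
        UPLOADS = "http://gdata.youtube.com/feeds/api/users/GoogleDevelopers/uploads"

        def fetch_page(start_index, max_results=50):
            # One page of the uploads feed; GData capped max-results at 50.
            url = f"{UPLOADS}?start-index={start_index}&max-results={max_results}"
            with urllib.request.urlopen(url) as resp:
                return ET.parse(resp).getroot()

        def all_video_titles():
            titles, start = [], 1
            while True:
                feed = fetch_page(start)
                entries = feed.findall(ATOM + "entry")
                if not entries:
                    break
                titles.extend(e.findtext(ATOM + "title") for e in entries)
                start += len(entries)  # the next page begins after the last entry seen
            return titles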

    Read the article

  • Question on SyndicationItem - ASP.NET

    - by bobsmith123
    I am trying to understand how to use SyndicationItem to display a feed that is RSS 2.0 or Atom compliant. What property of SyndicationItem gives me the entire description of the post? It appears that there is a Summary property, but per MSDN it gives only the summary. Also, I have noticed that in my RSS feed reader, some RSS feeds show only a few lines of description and I have to click through to the website for the full post, but in some feeds I can see the full post within the feed reader. Can someone explain how all this comes together? PS: My web page lets the user enter an RSS feed address and I need to validate whether the feed exists. If it does, I need to grab the last x items and show the feed's title and full description.
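    Part of the answer is down to the feed itself rather than the reader: many RSS 2.0 feeds put only an excerpt in <description> (which is what maps to a summary), and feeds that syndicate the full post usually carry it in a separate element such as content:encoded or an Atom <content>. A quick, hedged way to check what a given feed actually provides is the third-party Python feedparser package; the URL below is just a placeholder:

        import feedparser  # pip install feedparser

        feed = feedparser.parse("https://example.com/feed/")  # placeholder feed URL
        print("feed title:", feed.feed.get("title"))

        for entry in feed.entries[:5]:
            print(entry.get("title"))
            # 'summary' corresponds to the RSS <description> / Atom <summary>.
            print("  summary length:", len(entry.get("summary", "")))
            # 'content' is only present when the feed carries a full body
            # (content:encoded in RSS, <content> in Atom).
            if "content" in entry:
                print("  full content length:", len(entry.content[0].value))
            else:
                print("  no full-content element; readers can only show the excerpt")

    In the .NET SyndicationItem the same distinction is why Summary is usually populated for RSS 2.0 input while Content often is not; whether a full post is available at all is decided by the feed publisher.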

    Read the article

  • gdata youtube api 302 'The document has moved'

    - by zalew
    I'm trying to get YouTube feeds with the python gdata library. Authentication features work ok, yt_service.ProgrammaticLogin() works, generating subauth token works, etc., but when I try to get some feeds (GetMostRecentVideoFeed, GetYouTubeVideoEntry, even GetFeed, and any other) I get: RequestError: {'status': 302, 'body': '<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">\n<TITLE>302 Moved</TITLE></HEAD><BODY>\n<H1>302 Moved</H1>\nThe document has moved\n<A HREF="http://www.google.com">here</A>.\r\n</BODY></HTML>\r\n', 'reason': 'Redirect received, but redirects_remaining <= 0'} 302 to 'google.com'??? I've even tried to do something from the google online tutorials and I get the same error. What's going on?

    Read the article

  • How to move a google doc into a folder using Zend Gdata

    - by Andre
    Hi guys, I am really struggling with moving a document from my root folder to another folder using Zend Gdata. Here is how I am trying to do it, but it's not working: $service = Zend_Gdata_Docs::AUTH_SERVICE_NAME; $client = Zend_Gdata_ClientLogin::getHttpClient($gUser, $gPass, $service); $link = "https://docs.google.com/feeds/documents/private/full/spreadsheet:0AUFNVEpLOVg2U0E"; // Not real id for privacy purposes $docs = new Zend_GData_Docs($client); // Attach a category object of folder to this entry // I have tried many variations of this including attaching label categories $cat = new Zend_Gdata_App_Extension_Category('My Folder Name','http://schemas.google.com/docs/2007#folder'); $entry = $docs->getDocumentListEntry($link); $entry->setCategory(array($cat)); $return = $docs->updateEntry($entry,$entry->getEditLink()->href); When I run this I get no errors, but nothing seems to change, and the returned data does not contain the new category. EDIT: OK, I realised that it's not the category but the link that decides which "collection" (folder) a resource belongs to. https://developers.google.com/google-apps/documents-list/#managing_collections_and_their_contents says that each resource has a set of parent links, so I tried changing my code to set the link instead of the category, but this did not work either. $folder = "https://docs.google.com/feeds/documents/private/full/folder%3A0wSFA2WHc"; $rel = "http://schemas.google.com/docs/2007#parent"; $linkObj = new Zend_Gdata_App_Extension_Link($folder,$rel,'application/atom+xml', NULL,'Folder Name'); $links = $entry->getLink(); array_push($links,$linkObj); $entry->setLink($links); $return = $docs->updateEntry($entry,$entry->getEditLink()->href); EDIT: SOLVED [nearly]. Here is how to move/copy, sort of, from one folder to another. It is simpler than initially thought, but the problem is it creates a reference and NOT a move! The document is now in both places at the same time.... // Folder you want to move to $folder = "https://docs.google.com/feeds/folders/private/full/folder%asdsad"; $data = $docs->insertDocument($entry, $folder); // $entry is the entry you want moved; using insert automatically assigns the link & category for you...

    Read the article

  • Server is on or off

    - by Daniel
    Curious. I'm starting to broadcast high school football games online, using a program that broadcasts the feeds off my computer. However, when I shut down the program or the computer, the server goes offline and guests won't be able to access the feeds. Is there any kind of code out there that I can post on my website to indicate to my guests whether the server is on or off? I would figure it would be something simple: a PHP script or something that periodically checks whether a site is online and then displays ON or OFF.
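    The check itself can be as small as trying to open the streaming server's port and reporting the result. The poster mentions PHP; here is the same idea as a hedged Python sketch, where the host, port and timeout are placeholder assumptions since the actual broadcast setup isn't specified:

        import socket

        def broadcast_is_up(host="broadcast.example.com", port=8000, timeout=3):
            """Return True if a TCP connection to the streaming server succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        # Run from cron (or a tiny status endpoint) and write the result to a
        # file or page fragment that the website includes as ON / OFF.
        print("ON" if broadcast_is_up() else "OFF")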

    Read the article

  • Dealing with HTTP content in HTTPS pages

    - by El Yobo
    We have a site which is accessed entirely over HTTPS but sometimes displays external content which is HTTP (images from RSS feeds, mainly). The vast majority of our users are also stuck on IE6. I would like both of the following: to prevent the IE warning message about insecure content, and to present something useful to users in place of the images that they can't otherwise see; if there were some JS I could run to figure out which images haven't been loaded and replace them with an image of ours instead, that would be great. I suspect that the first aim is simply not possible, but the second may be sufficient. A worst-case scenario is that I parse the RSS feeds when we import them, grab the images and store them locally so that the users can access them that way, but it seems like a lot of pain for reasonably little gain.
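    That "worst case" import-time option is less painful than it sounds: when an item is imported, download any http:// images it references and rewrite the tags to point at copies served from the HTTPS site. A rough Python sketch, assuming the item HTML is available server-side at import time; the cache directory, URL prefix and regex are illustrative assumptions:

        import hashlib
        import os
        import re
        import urllib.request

        CACHE_DIR = "static/feedimages"    # directory served over HTTPS by our site
        CACHE_URL = "/static/feedimages"

        def cache_http_images(item_html):
            """Download each http:// image in the item HTML and point the tag
            at our locally hosted HTTPS copy instead."""
            os.makedirs(CACHE_DIR, exist_ok=True)

            def swap(match):
                src = match.group(1)
                ext = os.path.splitext(src.split("?")[0])[1] or ".img"
                name = hashlib.sha1(src.encode()).hexdigest() + ext
                local_path = os.path.join(CACHE_DIR, name)
                if not os.path.exists(local_path):
                    urllib.request.urlretrieve(src, local_path)
                return match.group(0).replace(src, f"{CACHE_URL}/{name}")

            return re.sub(r'<img[^>]+src="(http://[^"]+)"', swap, item_html)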

    Read the article

  • Django Generating RSS feed with description

    - by Issy
    Hey guys, I am trying to generate a full RSS feed; however, when loading the feed in Mail it just shows the title, with a "read more" link at the bottom. I have tried several different options, but none seem to work. I would like to generate the feed description from a combination of several fields in my model. Here is the code I have tried: class LatestEvents(Feed): description_template = "events_description.html" def title(self): return "%s Events" % SITE.name def link(self): return '/events/' def items(self): events = list(Event.objects.all().order_by('-published_date')[:5]) return events author_name = 'Latest Events' def item_pubdate(self, item): return item.published_date And in my template, which is stored in TEMPLATE_ROOT/feeds/: {{ obj.description|safe }} <h1>Event Location Details</h1> {{ obj.location|safe }} Even if I hard-code the description, it does not work.
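    Two things worth checking here, offered as assumptions rather than a confirmed diagnosis: description_template is resolved like any other template name, so a file kept under a feeds/ directory would normally be referenced as "feeds/events_description.html"; and the template route can be skipped entirely by building the description in an item_description method. A minimal sketch against Django's syndication framework follows; the model fields mirror the snippet above, while the title field and import paths are assumptions:

        from django.contrib.syndication.views import Feed
        from django.utils.safestring import mark_safe

        from events.models import Event  # assumed app/model location


        class LatestEvents(Feed):
            title = "Latest Events"
            link = "/events/"
            description = "The five most recently published events."

            def items(self):
                return Event.objects.order_by("-published_date")[:5]

            def item_title(self, item):
                return item.title  # assumed field name

            def item_description(self, item):
                # Whatever goes into <description> is what readers such as
                # Mail can show, so return the full HTML body here.
                return mark_safe(
                    f"{item.description}<h1>Event Location Details</h1>{item.location}"
                )

            def item_pubdate(self, item):
                return item.published_date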

    Read the article

  • Which server log file (on CentOS 5) should I check to locate specific IP activity?

    - by makmour
    Hi! I lease a dedicated server running CentOS 5. Just recently I set up a blog website that grabs feeds from other news websites and presents them on my blog. A few days later I started seeing high server loads that can't be coming just from the cron jobs that I run every now and then to grab feeds from other websites for the blog. That's why I'm trying to open some log files in pico in order to have a better look at the whole problem. These last few days I have noticed (from the StatCounter service) the same IP address visiting my blog many times per day, so I want to find out what it is trying to do. I tried looking in all the /var/log log files, and in httpd too, but no luck. Is there any other log file I should open, or any other procedure to track this IP's activity on the server?
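    On a stock CentOS 5 / Apache setup the per-request history normally lives in /var/log/httpd/access_log (with rotated copies alongside it), so filtering that file for the suspicious address is the quickest check. A small Python sketch of that filter; the log path, log format and IP are placeholder assumptions:

        from collections import Counter

        LOG_FILE = "/var/log/httpd/access_log"  # typical Apache location on CentOS
        SUSPECT_IP = "203.0.113.42"             # the address seen in the stats counter

        requests = Counter()
        with open(LOG_FILE) as log:
            for line in log:
                if line.startswith(SUSPECT_IP + " "):
                    # Combined log format: IP ... "METHOD /path HTTP/1.x" status ...
                    parts = line.split('"')
                    requests[parts[1] if len(parts) > 1 else "(unparsed)"] += 1

        for request, count in requests.most_common(20):
            print(count, request)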

    Read the article

  • How to sort row contents in a table cell by date

    - by MS Nathan
    Hi, I am new to iPhone application development. I am developing an RSS feed reader that has a number of different RSS feeds. In my application (XML parsing), I want to display the content of each row in a cell for a particular date, with the corresponding title and description (I am using three labels for displaying title, date and description on the table view cell), after parsing the XML data. I also want to sort the row contents (title, date, description) of all the RSS feeds by date in a single, separate UIViewController, while each RSS feed has its own separate view controller. Thanks in advance.

    Read the article

  • ModelName(django.contrib.auth.models.User) vs ModelName(models.Model)

    - by amr.negm
    I am developing a Django project. I created some apps, and some of them are related to the User model; for instance, I have a feeds app that handles user feeds, and another app that deals with extra user data like age, contacts, and friends. For each of these I created a table that should be connected to the User model, which I am using for storing and authenticating users. I found two ways to deal with this. One is extending the User model, like this: ModelName(User): friends = models.ManyToManyField('self') ..... The other is adding a foreign key to the new table, like this: ModelName(models.Model): user = models.ForeignKey(User, unique=True) friends = models.ManyToManyField('self') ...... I can't decide which to use in which case; in other words, what are the core differences between the two?
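    The practical difference: ModelName(User) is multi-table inheritance, so every row of the new model literally is a User (useful when the extra data should exist for every account and be queried as a user type), while a separate model pointing at User leaves django.contrib.auth completely untouched and just hangs optional data off it. For one-record-per-user data the usual pattern is a OneToOneField, which is what ForeignKey(User, unique=True) amounts to. A minimal sketch of that profile-style approach, with illustrative field names:

        from django.contrib.auth.models import User
        from django.db import models


        class Profile(models.Model):
            # One extra-data row per auth user; login/auth keeps using User as-is.
            user = models.OneToOneField(User, on_delete=models.CASCADE)
            age = models.PositiveIntegerField(null=True, blank=True)
            friends = models.ManyToManyField("self", blank=True)

            def __str__(self):
                return self.user.username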

    Read the article

  • getting real link from rss feed link

    - by pfunc
    I am experimenting with scraping certain pages from an RSS feed using curl and PHP. The page scraping was working fine when I was just using the actual links, not links from the RSS feeds. However, I realize now that links in RSS feeds are usually just redirects to the actual page (at least that is what it seems like), because now when I scrape a page using the RSS link, it doesn't actually get the information I am looking for. Has anyone encountered this, and does anyone know of a workaround? Is there any way to see where the RSS link is redirecting to and capture that value?
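    The usual workaround is to resolve the redirect first and scrape the final URL. In PHP's curl that is what CURLOPT_FOLLOWLOCATION plus curl_getinfo($ch, CURLINFO_EFFECTIVE_URL) are for; the same idea as a short Python sketch, where the feed link below is a placeholder:

        import urllib.request

        def resolve_final_url(feed_link):
            """Follow any redirects on an RSS item link and return the URL that
            actually serves the article, so the scraper fetches the right page."""
            # Some redirectors only answer GET; drop method="HEAD" in that case.
            request = urllib.request.Request(feed_link, method="HEAD")
            with urllib.request.urlopen(request) as response:
                return response.geturl()  # URL after redirects have been followed

        # print(resolve_final_url("http://feeds.example.com/some-item"))  # placeholder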

    Read the article

  • google contacts api service account oauth2.0 sub user

    - by user3709507
    I am trying to use the Google Contacts API to connect to a user's contact information on my Google Apps domain. Generating an access_token using the gdata API's ContactsService ClientLogin function with the API key for my project works fine, but I would prefer not to store the user's credentials, and from the information I have found, that method uses OAuth 1.0. So, to use OAuth 2.0 I have: generated a Service Account in the Developers Console for my project; granted access to the service account for the scope of https://www.google.com/m8/feeds/ in the Google Apps domain admin panel; and attempted to generate credentials using SignedJwtAssertionCredentials: credentials = SignedJwtAssertionCredentials( service_account_name=service_account_email, private_key=key_from_p12_file, scope='https://www.google.com/m8/feeds/', sub=user_email) The problem I am running into is that attempting to generate an access token using this method fails. It succeeds in generating the token when I remove the sub parameter, but then that token fails when I try to fetch the user's contacts. Does anyone know why this might be happening?
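    A quick way to test whether the delegation itself is the problem is to authorize a plain HTTP client with those credentials and hit the Contacts feed directly; with the old oauth2client/httplib2 stack that looks roughly like the sketch below. The key path, service account address and user address are placeholders, and domain-wide delegation must be granted for exactly this scope:

        import httplib2
        from oauth2client.client import SignedJwtAssertionCredentials

        SERVICE_ACCOUNT = "my-service-account@my-project.iam.gserviceaccount.com"  # placeholder
        IMPERSONATED_USER = "someone@yourdomain.com"                               # placeholder
        with open("service-account-key.p12", "rb") as key_file:                    # placeholder path
            private_key = key_file.read()

        credentials = SignedJwtAssertionCredentials(
            service_account_name=SERVICE_ACCOUNT,
            private_key=private_key,
            scope="https://www.google.com/m8/feeds/",
            sub=IMPERSONATED_USER,
        )

        http = credentials.authorize(httplib2.Http())
        # A 200 here means both the token request and the domain-wide delegation
        # for this scope/sub combination are working; a 403 usually points back
        # at the admin-console grant rather than the code.
        response, content = http.request(
            "https://www.google.com/m8/feeds/contacts/default/full?max-results=5")
        print(response.status)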

    Read the article

  • How does SQLite on Android handle long strings?

    - by Levara
    Hi! I'm wondering how Android's implementation of SQLite handles long strings. Reading the online documentation on SQLite, it says that strings in SQLite are limited to one million characters; my strings are definitely smaller. I'm creating a simple RSS application, and after parsing an HTML document and extracting text, I'm having a problem saving it to the database. I have two tables in the database, feeds and articles. RSS feeds are correctly saved to and retrieved from the feeds table, but when saving to the articles table, logcat says that it cannot save the extracted text to its column. I don't know if other columns are causing problems too; there is no mention of them in logcat. I'm wondering, since the text comes from an article on the web, whether characters like (", ', ;) are creating problems. Is Android automatically escaping them, or do I have to do that? I'm using an insert technique similar to the one in the Notepad tutorial: public long insertArticle(long feedid, String title, String link, String description, String h1, String h2, String h3, String p, String image, long date) { ContentValues initialValues = new ContentValues(); initialValues.put(KEY_FEEDID, feedid); initialValues.put(KEY_TITLE, title); initialValues.put(KEY_LINK, link); initialValues.put(KEY_DESCRIPTION, description ); initialValues.put(KEY_H1, h1 ); initialValues.put(KEY_H2, h2); initialValues.put(KEY_H3, h3); initialValues.put(KEY_P, p); initialValues.put(KEY_IMAGE, image); initialValues.put(KEY_DATE, date); return mDb.insert(DATABASE_TABLE_ARTICLES,null, initialValues); } Column p is for the extracted text; h1, h2 and h3 are for headers from the page. Logcat reports only column p to be the problem. The table is created with the following statement: private static final String DATABASE_CREATE_ARTICLES = "create table articles( _id integer primary key autoincrement, feedid integer, title text, link text not null, description text," + "h1 text, h2 text, h3 text, p text, image text, date integer);"; Thanks!

    Read the article
