Search Results

Search found 19155 results on 767 pages for 'url redirection'.


  • Should OpenID clients accept adding WWW to the domain?

    - by Steve Clay
    For a long time I've used OpenID delegation on my site: http://example.org/ delegated to http://example.openid-provider.com/, so I logged into OpenID-consuming sites using the former as my ID. Recently I added www. to my site's canonical domain, so http://example.org/ now redirects to http://www.example.org/. Should I be able to continue logging into existing OpenID accounts using http://example.org/? StackExchange sites say "yes": I can use either URL. At least one other site doesn't recognize my existing account. Who's "right" (per spec), and is there anything I can fix on my end?
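    Worth noting (not from the question): under OpenID 2.0 identifier normalization, http://example.org/ and http://www.example.org/ are distinct claimed identifiers, so a consumer that treats them as separate accounts is arguably within spec; sites that accept both are doing their own account linking on top. Delegation itself is declared with link tags in the page head; a sketch of what http://example.org/ would typically serve (the provider endpoint paths are illustrative):

        <!-- OpenID 1.x delegation -->
        <link rel="openid.server" href="http://example.openid-provider.com/server">
        <link rel="openid.delegate" href="http://example.openid-provider.com/">
        <!-- OpenID 2.0 equivalents -->
        <link rel="openid2.provider" href="http://example.openid-provider.com/server">
        <link rel="openid2.local_id" href="http://example.openid-provider.com/">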


  • Monitoring almost anything with BizTalk 360

    - by Michael Stephenson
    When you work in an integration environment it is common to find yourself integrating with some unusual applications or having some unusual dependencies. That is the nature of integration. When you work with BizTalk, one of the common problems is that BizTalk is often the place where problems with the applications you integrate with are highlighted, and these external applications may have poor monitoring solutions.

    Fortunately, if you are working with a customer who uses BizTalk 360, it contains a feature called the "Web Endpoint Manager". Typically the Web Endpoint Manager is used to monitor web services that you integrate with; it will ping them at appropriate times to make sure they return the expected HTTP status code. When you have an unusual situation where you want to monitor something which is key to the success of your solution, and you find yourself having to consider a significant custom solution to monitor the external dependency, the Web Endpoint Manager could be your friend. The endpoint manager monitors a URL and checks for a certain status code. This means that you can create your own ASPX web page and then make BizTalk 360 monitor that page. Behind the web page you could write any code you wished. An example of this architecture is shown in the diagram below.

    In the custom web page you would implement some custom code to do whatever it is that you want to monitor. In the code snippet below you can see how the default Page_Load method does some kind of check and then, depending on the result of the check, returns a certain HTTP code.

        protected void Page_Load(object sender, EventArgs e)
        {
            var result = CheckSomething();

            if (result == "Success")
                Response.StatusCode = 202;
            else if (result == "DatabaseError")
                Response.StatusCode = 510;
            else if (result == "SystemError")
                Response.StatusCode = 512;
            else
                Response.StatusCode = 513;
        }

    In BizTalk 360 you would go into the Monitor and Notify tab and then to BizTalk Environment, which gives you access to the Web Endpoint Manager. You need an alarm set up which configures how the endpoint will be checked. I'm not going to go through the details of creating the alarm, as this is already documented in the BizTalk 360 documentation. One point to note is that in this example I set up a threshold alarm, which means that the URL is checked about every minute and, if an error persists for a period of time, the alarm raises the alert notification. In my example I configured the alarm to fire if the error persisted for 3 minutes. The below picture shows accessing the endpoint manager.

    In the Web Endpoint Manager you would then configure the endpoint to monitor and the HTTP response code which indicates all is working fine. The below picture shows this. I now have my endpoint monitoring set up, and BizTalk 360 should be checking my custom endpoint to see that it is available. If I want to manually sanity-check that the endpoints I have registered are working fine, clicking the Refresh button will show whether they are all good. If the custom ASP.NET page which is checking my dependency hits a problem, you will see in the endpoint manager that the status code does not match the expected return code; your endpoints will display in red and you can see the problem. The below picture shows this. If I use specific HTTP response codes for the errors the custom ASP.NET page might encounter, I can easily interpret them to know what the problem is.
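    The post leaves CheckSomething() entirely to the reader. As a minimal sketch of what such a probe might look like (the "MonitoredDb" connection string name and the query are hypothetical placeholders, not from the original post), a database dependency check could be:

        // Hypothetical dependency probe returning the strings that the
        // Page_Load above switches on; "MonitoredDb" is a placeholder.
        private string CheckSomething()
        {
            try
            {
                using (var conn = new System.Data.SqlClient.SqlConnection(
                    System.Configuration.ConfigurationManager
                        .ConnectionStrings["MonitoredDb"].ConnectionString))
                {
                    conn.Open();
                    using (var cmd = new System.Data.SqlClient.SqlCommand("SELECT 1", conn))
                    {
                        cmd.ExecuteScalar(); // round-trip to prove the DB answers
                    }
                }
                return "Success";
            }
            catch (System.Data.SqlClient.SqlException)
            {
                return "DatabaseError";
            }
            catch (System.Exception)
            {
                return "SystemError";
            }
        }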
    Using the alarms and notifications in BizTalk 360, when your endpoint goes into an error state you can easily configure email or SMS notifications from BizTalk 360 to tell you that your endpoint is having problems, and you can use BizTalk 360 to help correlate what the problem is so you can investigate further. Below you can see the email which tells me my endpoint is not working. When everything returns to normal you will see that the status is fixed, and you will see a situation like below where the web endpoints are green and the return code matches what is expected.

    Conclusion

    As you can see, it is really easy to plug your own custom ASP.NET page into the BizTalk 360 web endpoint monitoring feature. This extension gives you the power to extend the monitoring to almost anything you want, as long as you can write some .NET code to check that the dependency is available and working. It would be interesting to hear of any ideas people have around things they would monitor with this extension. More details on the endpoint monitor can be found at the following link: http://www.biztalk360.com/tour/monitoring_notifications


  • Can't find Localhost files

    - by GMF
    Hope you can help. This is my first time trying Ubuntu/Linux. I am logged in as root. I have downloaded and installed LAMP and phpMyAdmin, and I get the test page under localhost saying that it works and is installed correctly. I have also put my files in /var/www; they are PHP files. When I enter the address localhost/(page name).php I get an error saying: Not Found. The requested URL /index.php was not found on this server. Apache/2.2.22 (Ubuntu) Server at localhost Port 80. Have I put the files in the wrong folder? If I look in /etc/apache2/sites-available/default, it tells me my DocumentRoot is /var/www. Would love some help on this please. Many thanks, GF
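    Not part of the question, but a quick sanity check from a shell usually settles this kind of 404 (commands assume the stock Apache 2.2 layout on Ubuntu 12.04):

        ls -l /var/www/           # are the .php files actually there, and readable?
        apache2ctl -S             # which vhosts and DocumentRoots are active?
        sudo a2ensite default     # enable the default site if it is not enabled
        sudo service apache2 reload

    One hedged observation: the error mentions /index.php specifically, so if there is no file named index.php in /var/www, requesting http://localhost/ will 404 even though the files that were copied are reachable under their own names.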


  • Get your content off Blogger.com

    - by Daniel Moth
    Due to blogger.com deprecating FTP users I've decided to move my blog. When I think of the content of a blog, 4 items come to mind: blog posts, comments, binary files that the blog posts linked to (e.g. images, ZIP files) and the CSS+structure of the blog.

    1. Binaries
    The binary files you used in your blog posts are sitting on your own web space, so really blogger.com is not involved with that. Nothing for you to do at this stage; I'll come back to these in another post.

    2. CSS and structure
    In the best case this exists as a separate CSS file on your web space (so no action for now) or, in the worst case, like me, your CSS is embedded with the HTML. In the latter case, simply navigate from your dashboard to "Template" then "Edit HTML" and copy-paste the contents of the box. Save that locally in a txt file and we'll come back to it in another post.

    3. Blog posts and comments
    The blog posts and comments exist in all the HTML files on your own web space. Parsing HTML files to extract them can be painful, so it is easier to download the XML files from blogger's servers that contain all your blog posts and comments.

    3.1 Single XML file, but incomplete
    The obvious thing to do is go into your dashboard "Settings" and under the "Basic" tab look at the top next to "Blog Tools". There is a link there to "Export blog" which downloads an XML file with both comments and posts. The problem is that it only contains 200 comments - if you have more than that, you will lose the surplus. Also, this XML file has a lot of noise compared to the better solution described next. (Note that a tool I will refer to in a future post deals with either kind of XML file.)

    3.2 Multiple XML files
    First you need to find your blog ID. In case you don't know what that is, navigate to the "Template" as described in section 2 above. You will find references to the blog id in the HTML there, but you can also see it as part of the URL in your browser: blogger.com/template-edit.g?blogID=YOUR_NUMERIC_ID. Mine is 7 digits. You can now navigate to these URLs to download the XML for your posts and comments respectively:
    blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=1
    blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=1
    Note that you can only get 500 posts at a time and only 200 comments at a time. To get more than that you have to change the URL and download the next batch. To get you started, for the next 500 posts and the next 200 comments respectively you'd use these URLs:
    blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=501
    blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=201
    ...and so on and so forth. Keep all the XML files in the same folder on your local machine (with nothing else in there).

    4. Validating the XML, aka editing older blog posts
    The XML files you just downloaded really contain HTML fragments for all your blog posts. If you are like me, your blog posts did not conform to XHTML, so passing them to an XML parser (which is what we will want to do) will result in the parser choking. So the next step is to fix that. This can be no work at all for you, a huge time sink, or just a couple of hours of pain (which was my case). The process I followed was to attempt to load the XML files using XmlDocument.Load and wait for the exception to be thrown from my code. The exception would point to the exact offending line and column, which would help me fix the issue. Rather than fix it in the XML itself, I would go back and edit the offending blog post and fix it there - recommended! Then I'd repeat the cycle until the XML could be loaded into the XmlDocument. To give you an idea, some of the issues I encountered were: extra or missing quotes in img and href elements, direct usage of chevrons instead of encoding them as &lt;, missing closing tags, mismatched nested pairs of elements, and capitalization of html elements. For a full list of things that may go wrong see this.

    5. Opportunity for other changes
    I also found a few posts that did not have a category assigned, so I fixed those too. I took the further opportunity to create new categories and tag some of my blog posts with them. Note that I did not remove/change categories of existing posts, but only added.

    In another post we'll see how to use the XML files you stored in the local folder… Comments about this post welcome at the original blog.
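    To make the validation loop in section 4 concrete, here is a minimal sketch (the folder path is a placeholder, not from the post); XmlException carries the line and position the post relies on:

        using System;
        using System.IO;
        using System.Xml;

        class ValidateExports
        {
            static void Main()
            {
                // Placeholder path: the local folder holding the downloaded XML files.
                foreach (var file in Directory.GetFiles(@"C:\blog-export", "*.xml"))
                {
                    try
                    {
                        new XmlDocument().Load(file);
                        Console.WriteLine("OK: " + file);
                    }
                    catch (XmlException ex)
                    {
                        // Points at the offending line/column so the original
                        // blog post can be fixed and the XML re-downloaded.
                        Console.WriteLine("{0}: line {1}, position {2}: {3}",
                            file, ex.LineNumber, ex.LinePosition, ex.Message);
                    }
                }
            }
        }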


  • Google Analytics on Static Site Hosted by GAE

    - by Cody Hess
    I finagled hosting a static site on Google App Engine at http://corbyhaas.com. The HTML when visiting that URL shows some meta information and a frame to the site's actual address, http://cody-static-sites.appspot.com/corbyhaas, which has the content. This is done automagically by Google App Engine. I've set up Google Analytics by including their script in my index.html, but the report shows 100% of visits coming from the referring site "corbyhaas.com", which is useless information. Has anyone set up Google Analytics for a static GAE site? Is there a setting in my Analytics dashboard I can tweak, or is this a hazard of using Google App Engine for static content? Also, while it's not relevant here (but could be for future sites), does GAE's method of showing only meta information with frames for static data affect SEO?


  • It's a Long, Long Way to Tipperary but not that Far to Yak about Apps

    - by linda.fishman.hoyle
    I wanted to let everyone know that my blog URL will be moving to http://blogs.oracle.com/lindafishman/. I will focus my future writing on the upgrade and adoption strategies of Oracle E-Business Suite customers. To give you a little preview, here is a link to a book of 60 customers who are live on E-Business Suite Release 12 and 12.1. We have thousands of customers live on Release 12.x and are feverishly trying to write as many stories as we can. So whether you are thinking about upgrading, putting a business case together to move from another ERP application to E-Business Suite, or you are a small or midsize company that wants a better understanding of the benefits E-Business Suite provides organizations of your size, this will be the place to go. See you at the new site! Linda


  • how to point godaddy to my entrydns domain

    - by geminiCoder
    I have a server connected via a dynamic IP. I have set up EntryDNS to manage the changes to my IP: if I put in my EntryDNS URL it points me to my server's current IP. I purchased a domain from GoDaddy, but I have been unable to get it to point to my EntryDNS name. What I want is to be able to SSH to my server, ideally by using my domain name. I must confess I'm a bit overwhelmed by the GoDaddy interface. So the bottom line is: how do I point my GoDaddy domain to my dynamic DNS domain, so that when I look up the domain I get the current IP of the server?
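    Not part of the original question, but the usual pattern: in GoDaddy's DNS manager you add a CNAME record pointing a subdomain at the EntryDNS hostname (a CNAME is not allowed at the bare domain apex, which is why a host such as "home" or "www" is normally used). In zone-file notation the record would look something like this (both names are placeholders, not real hostnames):

        ; "home" is a host under the GoDaddy-registered domain; the target is
        ; the dynamic hostname managed by EntryDNS.
        home.example.com.    IN  CNAME  myserver.entrydns.net.

    Once the record propagates, ssh user@home.example.com resolves through the CNAME to whatever IP EntryDNS currently holds.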


  • Continue without a default route?

    - by user2009
    I am doing a completely unattended install of Ubuntu 12.04 with static network configuration. Here is the static network content from the preseed file:

        d-i netcfg/disable_dhcp boolean true
        d-i netcfg/no_default_route boolean true
        d-i netcfg/get_nameservers string 192.168.1.254
        d-i netcfg/get_ipaddress string 192.168.1.13
        d-i netcfg/get_netmask string 255.255.255.0
        d-i netcfg/get_gateway string 192.168.1.1
        d-i netcfg/confirm_static boolean true

    The installer still asks "Continue without a default route?", and only after I answer does the install go ahead. I am passing the preseed file via the network (preseed/url). How do I avoid this manual intervention? Does the order of the netcfg statements matter?
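    Not stated in the question, but a well-known wrinkle with network preseeding: when the preseed file is fetched over the network (preseed/url), netcfg has to bring the network up before the file can be downloaded, so netcfg answers inside the file arrive too late and the installer prompts anyway. The usual fix is to pass the netcfg answers on the kernel command line instead. An illustrative boot line (the preseed server address is a placeholder):

        auto url=http://192.168.1.10/preseed.cfg netcfg/disable_dhcp=true netcfg/no_default_route=true netcfg/get_ipaddress=192.168.1.13 netcfg/get_netmask=255.255.255.0 netcfg/get_gateway=192.168.1.1 netcfg/get_nameservers=192.168.1.254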


  • Evolution - exchange-connector, Rackspace global catalog server

    - by user10669
    I'm (trying to) switch to Ubuntu from Windows XP on my work laptop. Unfortunately, one of the dealbreakers is that I need full Exchange contact/calendar syncing. Our email is hosted by Rackspace (owa.mailseat.com). We log in using usernames of the format [email protected]. I'm trying to set up Evolution to use this account, using exchange-connector-setup-2.32. The first step, entering the OWA URL, username and password, works and progresses to step 2, where I need to enter a Global Catalog server. I have no idea what to enter here; everything fails. My questions are:
    - What is the "Global Catalog server"? Can I enter/run some dummy server here?
    - If it's necessary, where can I get this information?
    I have a Windows XP machine synced up using Outlook 2007, so if I need to gather any information from that setup I can.


  • Blogger sub-directory

    - by user137263
    There has long been a debate on the internet about SEO in relation to using either sub-domains or sub-directories for blogs. I am not terribly interested in that debate. I merely want to redirect my Blogger blog to my domain, the easiest way possible, and in a manner least likely to impair the current functionality of my server/websites. I believe the simplest way to do this is to use a subdirectory for the blog (although I am slightly concerned that the CNAME record will be shared by both). My question is this: how do I use custom domain sub-directories when Blogger refuses them, complaining that the "URL must not end with a path" when a user attempts to set up such a custom domain? Google searches on this matter are oddly useless, as most results return Blogger's forum entries that always seem to redirect to Blogger's Help home page (using the search facilities of Blogger's Help directory itself fails to unearth these forum posts). Any pointers (no pun intended) would be greatly appreciated.


  • Responsible BI for Excel, Even for Older Versions

    - by andrewbrust
    On Wednesday, I will have the honor of co-presenting, for both The Data Warehouse Institute (TDWI) and the New York Technology Council, on the subject of Excel and BI. My co-presenter will be none other than Bill Baker, who was a Microsoft Distinguished Engineer and, essentially, the father of BI at that company. Details on the events are here and here.

    We'll be talking about PowerPivot, of course, but that's not all. Probably even more important than any one product will be our discussion of whether the usual characterization of Excel as the nemesis of IT, the guilty pleasure of business users and the antithesis of formal BI is really valid and/or hopelessly intractable. Without giving away our punchline, I'll tell you that we are much more optimistic than that. There are huge upsides to Excel, and while there are real dangers to using it in the BI space, there are standards and practices you can employ to ensure Excel is used responsibly. And when those practices are followed, Excel becomes quite powerful indeed.

    One of the keys to this is using Excel as a data consumer rather than a data storage mechanism. Caching data in Excel is OK, but only if that data is (a) not modified and (b) configured for automated periodic refresh. PowerPivot meets both criteria -- it stores a read-only copy of your data in the form of a model, and once a workbook containing a PowerPivot model is published to SharePoint, it can be configured for scheduled data refresh, on the server, requiring no user intervention whatsoever. Data refresh is a bit like hard drive backup: it will only happen reliably if it's automated and super-easy to configure. PowerPivot hits a real home run here (as does Windows Home Server for PC backup, but I digress).

    The thing about PowerPivot is that it's an add-in for Excel 2010. What if you're not planning to go to that new version for quite a while? What if you've just deployed Office 2007 in your organization? What if you're still on Office 2003, or an even earlier version? What can you do immediately to share data responsibly and easily? As it turns out, there's a feature in Excel that's been around for quite a while that can help: Web Queries.

    The Web Query feature was introduced, ostensibly, to allow Excel to pull data in from Internet Web pages… for example, data in a stock quote history table will come in nicely, as will any data in a Web page that is displayed in an HTML table. To use the feature in Excel 2007 or 2010, click the Data tab of the ribbon and click the "From Web" button towards the left; in older versions use the corresponding option in the menu or toolbars. Next, paste a URL into the resulting dialog box and tap Enter or click the Go button. A preview of the Web page will come up, and the dialog will allow you to select the specific table within the page whose data you'd like to import. Now just click the table, click the Import button, and the Import Data dialog appears. You can simply click OK to bring in your data, or you can first click the Properties… button and configure the data import to be refreshed at an interval in minutes that you select. Now your data's in the spreadsheet and ready to work with. Your data may be vulnerable to modification, but if you've set up the data refresh, any accidental or malicious changes will be corrected in time anyway.

    The thing about this feature is that it's most useful not for public Web pages, but for pages behind the firewall. In effect, the Web Query feature provides an incredibly easy way to consume data in Excel that's "published" from an application. Users just need a URL. They don't need to know server and database names, and since the data is read-only, providing credentials may be unnecessary, or can be handled using integrated security. If that's not good enough, the Web Query can be saved to a special .iqy file, which can be edited to provide POST parameter data.

    The only requirement is that the data must be provided in an HTML table, with the first row providing the column names. From an ASP.NET project, it couldn't be easier: a simple bound GridView control is totally compatible. Use a data source control with it, and you don't even have to write any code. Users can link to pages that are part of an application's UI, or developers can create pages specially designed to provide an interface to the Web Query import feature. And none of this is Microsoft- or .NET-specific. You can create pages in any language you want (PHP comes to mind) that output the result set of a query in HTML table format, and then consume that data in a Web Query. Then build PivotTables and charts on the data, and in Excel 2007 or 2010 you can use conditional formatting to create scorecards and dashboards.

    This strategy allows you to create pages that function quite similarly to the OData XML feeds rendered when .NET developers create an "Astoria" WCF Data Service. And while it's cool that PowerPivot and Excel 2010 can import such OData feeds, it's good to know that older versions of Excel can function in a similar fashion and can consume data produced by virtually any Web development platform. As a final matter, instead of just telling you that "older versions" of Excel support this feature, I'll be more specific: to discover which version of Excel was the first to support Web Queries, go to http://bit.ly/OldSchoolXL.
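    To make the "HTML table, first row = column names" requirement concrete, here is a minimal hand-rolled sketch (the class name and data rows are illustrative placeholders; as the post notes, a bound GridView achieves the same thing with no code):

        // Minimal ASP.NET handler that emits a result set as an HTML table.
        // The region/sales rows are hard-coded placeholders; a real page
        // would populate them from a database query.
        public class SalesTableHandler : System.Web.IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(System.Web.HttpContext context)
            {
                context.Response.ContentType = "text/html";
                var w = context.Response.Output;
                w.Write("<html><body><table>");
                w.Write("<tr><td>Region</td><td>Sales</td></tr>"); // column names
                w.Write("<tr><td>East</td><td>1200</td></tr>");
                w.Write("<tr><td>West</td><td>950</td></tr>");
                w.Write("</table></body></html>");
            }
        }

    Point a Web Query at the handler's URL (e.g. a generic .ashx registration) and the table imports like any other page.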


  • Can't recognize local webserver

    - by Syed Khalil-ur-Rehman
    My Internet cable provider has set up a web server which hosts different entertainment material: movies, songs, TV shows, games, etc. Under Windows, the PC recognises it as a local web server and downloads files at the full LAN speed of 10 MB per second. On the contrary, when using Ubuntu I am only able to download the files at my Internet speed, no more than 100 KB per second. Whatever I try, Ubuntu does not recognize the web server as a local-area-network web server but treats it as a normal internet website. How do I make Ubuntu download files from this server at full LAN speed? Please help in this regard. The URL is http://dmasti.pk and yes, it is a web server browsable by a web browser like Firefox or IE.


  • Remove multiple trailing slashes in a single 301 in .htaccess?

    - by Jakobud
    There is a similar question here, but the solution does not work in Apache for our site. I'm trying to remove multiple trailing slashes from URLs on our site. I found some .htaccess code that seems to work:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} ^(.*)//(.*)$
        RewriteRule . %1/%2 [R=301,L]

    This rule removes multiple slashes from anywhere in the URL, so http://www.mysite.com/category/accessories//// becomes http://www.mysite.com/category/accessories/. However, it redirects once for every extra slash. So:

        http://www.mysite.com/category/accessories///////
        301 redirects to http://www.mysite.com/category/accessories//////
        301 redirects to http://www.mysite.com/category/accessories/////
        301 redirects to http://www.mysite.com/category/accessories////
        301 redirects to http://www.mysite.com/category/accessories///
        301 redirects to http://www.mysite.com/category/accessories//
        301 redirects to http://www.mysite.com/category/accessories/

    Is it possible to rewrite this rule so that it does it all in a single 301 redirect? Also, the above directive does not work at the root level of our site: http://www.mysite.com///// does not redirect, but it should.
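    A commonly cited approach (untested sketch) leans on the fact that, in per-directory (.htaccess) context, Apache has already collapsed consecutive slashes in the path the RewriteRule pattern matches, while %{THE_REQUEST} still holds the raw request line. Matching the raw line for a double slash and substituting the already-collapsed match yields one 301 for any number of slashes, and it fires at the root as well:

        # %{THE_REQUEST} is e.g. "GET /category/accessories/////// HTTP/1.1"
        RewriteCond %{THE_REQUEST} \s[^?\s]*//
        # $0 is the whole matched path, which Apache has already de-duplicated
        RewriteRule ^.*$ /$0 [R=301,L,NE]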


  • How Do I Export Pages from Browser with Embedded Hyperlinks?

    - by Volomike
    Made a sad discovery today. I have Ubuntu 10.04 LTS. My client is in the ad business and she had a marketing competition task for me. She wanted me to visit websites of the competitors, and export the home pages as PDF. However, she wanted me to do so with embedded hyperlinks. As it turns out, Firefox (and even the latest Chrome) on Ubuntu 10.04 LTS do not embed hyperlinks in PDF web page exports. Sure, there are several Chrome and FF plugins that let you export as PDF, but what these do is connect to the URL remotely, generate the PDF remotely, and then force a download in your browser to download it from a remote location. That's not good for me, though, because some of these competitor pages require an initial login. That means that all I get back on the PDF printing from these FF or Chrome plugins is a login page. Is there a way to get around this problem, to fix the broken PDF printer on Ubuntu 10.04?


  • Xoom Giveaway Courtesy of the Complete Android Guide [Giveaway]

    - by Jason Fitzpatrick
    If you’re an Android fan and looking to score an Android 3.0 tablet, you can enter to win a Xoom tablet courtesy of the Complete Android Guide. What do you need to do? Per their official rules: Contribute content to the site. To do so: Sign up (via the Register link in the top-right corner). Email android ‘at’ completeguides ‘dot’ net and request contributor access to this site. Write a killer tutorial, reference or chapter for the book.  Buy the book, in paperback or ebook form.  The deadline is March 31, the winner will be drawn in in April. Note: The link to the officials rules appears defunct, we’ll update shortly when the URL is fixed. Xoom Drawing @ Complete Android Guide [Complete Guides] How To Make a Youtube Video Into an Animated GIFHTG Explains: What Are Character Encodings and How Do They Differ?How To Make Disposable Sleeves for Your In-Ear Monitors


  • How do I point a new domain to start on a page that's not index.html on separate hosting? [closed]

    - by Owen Campbell-Moore
    Possible Duplicate: How do I point a new domain to start on a page that's not index.html on separate hosting?
    I'm using a service called SquareSpace to host my site, and today I'm registering the domain for it. Basically, how do I make it so that when somebody types www.tedxoxford.com it points at http://www.tedxoxford.com/landing (currently http://tedxoxford.squarespace.com/landing) instead of the default index? Is this possible? Squarespace is quite a restricted CMS, which means that logos etc. all point to the index, so I don't want people ending up on my landing/splash page every time they want the home page, only the first time they type in the URL. A dirty hack would be to check the referrer and redirect anyone hitting the index to the landing page, but that's a lot of loading overhead I'd rather avoid...


  • Save Points

    - by raghu.yadav
    Explicit save point: requires an end-user action before a bounded or unbounded task flow creates a save point. For example, an end user clicks a button that invokes a method call activity that, in turn, creates a save point.

    Implicit save point: can only originate from a bounded task flow if:
    1) a session times out due to end-user inactivity;
    2) an end user logs out without saving the data;
    3) an end user closes the only browser window, thus logging out of the application;
    4) an end user navigates away from the current application using control flow rules (for example, uses a goLink component to go to an external URL) while having unsaved data.

    Good use cases and examples on implicit save points are given by Frank and Biemond:
    http://www.oracle.com/technology/products/jdev/tips/fnimphius/cancelForm/cancelForm_wsp.html?_template=/ocom/print
    http://biemond.blogspot.com/2008/04/automatically-save-transactions-with.html


  • Reinstall of Windows Bootloader not working

    - by MrBoxy
    It first started when I deleted the partition that had Ubuntu 12.04 on it. Little did I realize at the time that doing so would destroy all of GRUB's resources, and subsequently prevent me from booting into my hibernated Windows. So I made a 12.10 Ubuntu live USB and reinstalled the Windows bootloader. When I rebooted, it gave me a bootloader error saying it could not find the resources to boot from. So I went back to my live USB, installed Boot-Repair and tried to repair the MBR. It didn't work though. Here is the URL Boot-Repair gave me in case I ran into trouble: paste.ubuntu.com/1328309. Thanks in advance :)


  • GWB | Administrator Blog Is Back To Life

    - by Jeff Julian
    We are bringing back the administrator's blog for Geekswithblogs.net as a place to get information on what is going on with GWB. There are a couple of reasons we are doing this. One, I post a lot of information on my blog that is not Geekswithblogs.net related. Most of the time it isn't even developer related, and I know I need to work on that too, but in an effort to keep the signal much higher than the noise, we are moving the GWB information over there. The blog URL is http://geekswithblogs.net/administrator. The other reason we are doing it is that I am not the only member of the GWB staff. So please subscribe to that blog and let us know what you think about Geekswithblogs.net and how we can make the site better.


  • SEO Suggestion For My Blog [closed]

    - by Rana
    I have a programming tutorial blog which has decent traffic. However, I am interested in doing some basic SEO for my blog to get it optimized, and I want to do it myself by learning. I was wondering if the experts here could suggest how I should proceed? Also, if you would review my blog and suggest the most common SEO concerns that come to mind first, that would be helpful as well. My blog's URL is as follows: http://codesamplez.com/ Looking forward to your feedback soon. Thanks.


  • Implement blogging system as a part of a web application/website

    - by Rana
    I am working on developing a website where registered members will be able to write blogs/tutorials, and each registered site user will automatically be a registered blogger. Site news/announcements will be posted through it too. Now, would it be wise to use WordPress for this purpose, or should I develop a custom blogging system myself? Also, is there any impact if I use a subdomain for the blog section versus the same domain (sub-directory URL)? Looking forward to hearing your feedback/comments/suggestions. Thanks.


  • Fun With the Chrome JavaScript Console and the Pluralsight Website

    - by Steve Michelotti
    Originally posted on: http://geekswithblogs.net/michelotti/archive/2013/07/24/fun-with-the-chrome-javascript-console-and-the-pluralsight-website.aspx

    I'm currently working on my third course for Pluralsight. Everyone already knows that Scott Allen is a "dominating force" for Pluralsight, but I was curious how many courses other authors have published as well. The Pluralsight Authors page - http://pluralsight.com/training/Authors - shows all 146 authors, and you can click on any author's page to see how many (and which) courses they have authored. The problem is: I don't want to have to click into 146 pages to get a count for each author. With this in mind, I figured I could write a little JavaScript in the Chrome JavaScript console to do some "detective work."

    My first step was to figure out how the HTML on this page was structured so I could do some screen-scraping. Right-click the first author - "Inspect Element". I can see there is a primary <div> with a class of "main" which contains all the authors. Each author is in an <h3> with an <a> tag containing their name and the link to their page.

    This web page already has jQuery loaded, so I can use $ directly from the console. This allows me to use jQuery to inspect items on the current page. Notice this is a multi-line command: to use multiple lines in the console you have to press SHIFT-ENTER to go to the next line. Now I can see I'm extracting data just fine. At this point I want to follow each URL and screen-scrape the resulting page to see how many courses each author has done. Let's take a look at the author detail page: I can see we have a table (with a CSS class of "course") that contains rows for each course authored. This means I can get the number of courses pretty easily.

    Now I can put this all together. Back on the authors page, I want to follow each URL, extract the returned HTML, and grab the count. In the code below, I simply use the jQuery $.get() method to get the author detail page; the "data" variable in the callback contains the HTML. A nice feature of jQuery is that I can simply put this HTML string inside of $() and use jQuery selectors directly on it in conjunction with the find() method.

    Now I'm getting somewhere: I have every Pluralsight author and how many courses each one has authored. But that's not quite what I'm after - what I want to see are the authors that have the MOST courses in the library. What I'd like to do is put all of the data in an array and then sort that array descending by number of courses. I can add an item to the array after each author detail page is returned, but the catch here is that I can't perform the sort operation until ALL of the author detail pages have been processed. The jQuery $.get() method is naturally an async method, so I essentially have 146 async calls, and I don't want to perform my sort action until ALL have completed (side note: don't run this script too many times or the Pluralsight servers might think you're an evil hacker attempting a DoS attack and deny you). My C# brain wants to use a WaitHandle WaitAll() method here, but this is JavaScript. I was able to do this by using the jQuery Deferred() object. I create a new deferred object for each request and push it onto a deferred array. After each request is complete, I signal completion by calling the resolve() method. Finally, I use the $.when.apply() method to execute my descending sort operation once all requests are complete.

    Here is my complete console command:

        var authorList = [],
            defList = [];
        $(".main h3 a").each(function() {
            var def = $.Deferred();
            defList.push(def);
            var authorName = $(this).text();
            var authorUrl = $(this).attr('href');
            $.get(authorUrl, function(data) {
                var courseCount = $(data).find("table.course tbody tr").length;
                authorList.push({ name: authorName, numberOfCourses: courseCount });
                def.resolve();
            });
        });
        $.when.apply($, defList).then(function() {
            console.log("*Everything* is complete");
            var sortedList = authorList.sort(function(obj1, obj2) {
                return obj2.numberOfCourses - obj1.numberOfCourses;
            });
            for (var i = 0; i < sortedList.length; i++) {
                console.log(sortedList[i]);
            }
        });

    And here are the results: WOW! John Sonmez has 44 courses!! And Matt Milner has 29! I guess Scott Allen isn't the only "dominating force". I would have assumed Scott Allen was #1, but he comes in as #3 in total course count (of course Scott has 11 courses in the Top 50, and 14 in the Top 100, which is incredible!). Given that I'm in the middle of producing only my third course, I'd better get to work!


  • CSS Style Element if it does not contain another specific type of Element [migrated]

    - by Chris S
    My CSS includes the following:

        #mainbody a[href^='http'] {
            background: transparent url('/images/icons/external.svg') no-repeat top right;
            padding-right: 12px;
        }

    This places an "external" icon next to links that start with "http" (all internal site links are relative). It works perfectly, except that if I link an image, the image also gets this icon. For example:

        <a href='http://example.com'><img src='whatever.jpg'/></a>

    would also get the "external" icon next to the image. I can live with this if necessary, but would like to eliminate it. This must be implemented in CSS (no JS), and must not require any special IDs, classes, or styling in the HTML for the image or the anchor around it. Is this possible?
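    At the time this was asked, CSS had no way to select an element based on its children, so the honest answer was "not in CSS alone." Browsers that support the newer :has() pseudo-class can express the rule directly, though; a sketch (the same selector as above, excluding anchors that contain an img):

        /* Requires :has() support, which only newer browsers provide */
        #mainbody a[href^='http']:not(:has(img)) {
            background: transparent url('/images/icons/external.svg') no-repeat top right;
            padding-right: 12px;
        }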


  • Morning Routine Is an Alarm Clock Deactivated via Barcode Scan

    - by Jason Fitzpatrick
    Android: If you have trouble getting out of bed in the morning, Morning Routine might be just the tool you need: an alarm clock that requires you to scan a barcode to deactivate it. Similar in concept to alarm clocks which require you to solve a puzzle to shut them down, Morning Routine requires you to get out of bed and scan a barcode/QR code to turn the alarm off. If you're worried that's not enough, you can even set it up to require a sequence of scans. In addition, you can have the scan(s) open a URL to launch your favorite news site, web radio, or other resource that serves as part of your morning routine. Morning Routine is free for a limited time, Android only. Morning Routine [via Addictive Tips]


  • Replace %26 in htaccess to %2526

    - by Patrick
    I would like .htaccess to rewrite example.com/something_%26_else into example.com/something_%2526_else. I'm importing a bunch of pages that have ampersands in the title from MediaWiki; these are encoded as %26. Drupal, for various reasons, has decided to double-encode the URL so it becomes %2526. I simply can't create the alias within Drupal, so I have to use .htaccess. This is what I have as my rule so far:

        RewriteRule ^w/([^%26]+)\%26(.*)$ w/$1\%2526$2 [R=301]

    I asked this question three months ago on Stack Exchange and was not able to get it working. I tried hiring a contractor for this but was unable to find one. So this is my last-ditch effort before I completely give up. I really appreciate the help. All the best, Patrick
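    Not from the original question, but two mod_rewrite details are probably biting here: in per-directory context the pattern is matched against the already-decoded path, so the %26 arrives at this rule as a literal "&" (meaning [^%26]+ and \%26 never match what you expect), and in the substitution a bare % can be taken as a RewriteCond backreference, so a literal percent sign needs a backslash. The NE flag stops Apache from re-escaping the result. An untested sketch along those lines:

        # /w/something_%26_else reaches the rule as /w/something_&_else
        RewriteRule ^w/(.+)_&_(.+)$ /w/$1_\%2526_$2 [R=301,L,NE]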

