Search Results

Search found 19950 results on 798 pages for 'url scheme'.


  • Extracting meta tag attributes using wget [migrated]

    - by Amit
    I have a file with some URLs, one per line. I need to extract the "keywords" present in the meta tags, i.e. if there is a meta tag for "keywords", I want to get its "content" value. Example: if the web page has the meta tag <meta name="keywords" content="wikipedia,encyclopedia">, then for that URL I want "wikipedia,encyclopedia" to be extracted. One approach is to download the web page using wget and then parse it using some standard HTML parser. I was wondering: is there any better way to do this without downloading the entire web page?
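
    One way to avoid pulling the whole page, sketched in Python: stream the response and stop reading once </head> has been seen, since the keywords tag lives in the head. The regex and size cap are illustrative assumptions (a real HTML parser is more robust, and meta attribute order can vary):

        import re
        import urllib.request

        KEYWORDS_RE = re.compile(
            r'<meta\s+name=["\']keywords["\']\s+content=["\']([^"\']*)["\']',
            re.IGNORECASE)

        def extract_keywords(url, max_bytes=65536):
            # Read in small chunks and stop at </head>, so the page
            # body is never downloaded.
            head = b""
            with urllib.request.urlopen(url) as resp:
                while b"</head>" not in head.lower() and len(head) < max_bytes:
                    chunk = resp.read(1024)
                    if not chunk:
                        break
                    head += chunk
            match = KEYWORDS_RE.search(head.decode("utf-8", errors="replace"))
            return match.group(1) if match else None

        print(extract_keywords("https://en.wikipedia.org/"))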

    Read the article

  • help redirecting IP address

    - by Alice
    Google has indexed the IP address of my site rather than the domain, so now I'm trying to set up a 301 redirect that will send the IP address and all subsequent pages to the domain. I currently have something like this in my .htaccess file (however, I don't think it's working correctly):

        RewriteCond %{HTTP_HOST} ^12.34.567.890
        RewriteRule (.*) (domain address)/$1 [R=301,L]

    I've used various redirect checker tools and keep getting the message: "... not redirecting to any URL or the redirect is NOT SEARCH ENGINE FRIENDLY". Am I doing something wrong, or is there something else I should be trying? Thanks! Alice

    Read the article

  • Clean SOAP Calls from iOS - SudzC

    - by Richard Jones
    This is worth another mention. If you need to call SOAP web services from iOS or JavaScript (and let's face it, who doesn't?), http://SudzC.com really delivers. You give it the URL of your WSDL file (or upload a file) and it just spits out a ready-to-go Xcode project. I would point out that to get it to work 100%, I changed line 204 in Soap.m (the commented-out line is the old version; mine is below it):

        //if([child respondsToSelector:@selector(name)] && [[child name] isEqual: name]) {
        if([child respondsToSelector:@selector(name)] && [[child name] hasSuffix: name]) {

    I consumed a Microsoft Dynamics NAV set of web-service pages with no problem (and they tend to be fairly complex WSDL definitions).

    Read the article

  • SEO Benefits of adding a Tumblr feed to site

    - by Paul
    A client of ours has a CMS-driven blog on his hotel site. He would like to use the blog to add depth to his site and gain SEO benefits from the blog's content. The current blog is a basic header/text field and doesn't have any tagging or meta features. Unfortunately, we don't have a .NET developer on our team to alter the existing blog and add meta/tagging, and there isn't budget to hire one, so I considered using a Tumblr blog: setting it up externally, giving it a blog.hotelname.com address, and feeding it into the existing page via Tumblr's JS, which basically does a document.write into the page, which we can style. I understand from a previous post (Poor CMS blog vs Tumblr embed) that as a general rule most search engines ignore JS-created content, but will the above approach act as an improvement on the existing system for now, given that the blog will be set up externally with its own URL and also feed into the existing site? Cheers, Paul

    Read the article

  • The first non-Latin domain names are working, with URLs in Arabic characters

    Update of 07.05.2010 by Katleen. The first non-Latin domain names are working, with URLs in Arabic characters. A few hours ago, the first three non-Latin domain names were placed in the root zone of the DNS. They are therefore now in service, and working perfectly. Here is an example of what you may see in your browser's URL field if you visit one of these sites: http://blog.icann.org/wp-content/uploads/2010/05/idn-example-450px.png These three new domains are السعودية ("Al-Saudiah"), امارات ("Emarat") and ...

    Read the article

  • SEO: Joomla Category Page Optimization + Canonical Linking

    - by Huberis
    I'm wondering how best to optimize my Joomla site's SEO. I have pages with multiple articles on each page, either via category-type pages or via modules. In each case, I don't want users to access the articles separately from the forward-facing, menu-linked pages. I understand, however, that Joomla still generates a URL for those articles, and Google can still crawl and display these articles separately from the pages. My question is: what is the best way to control this so that my users get directed only to the front-facing pages? By using the canonical element on each article to point to the front-facing page it's on? Or is there a better method? Thanks for your help!

    Read the article

  • Will removing unused query string parameters negatively affect SEO?

    - by trm
    Will changing links to remove query string parameters that are no longer used have any negative impact on search engine rankings? Say I have a page about.php on my site, and all of my links to this page are of the form http://www.example.com/about.php?foo=bar and I've made some changes to the script such that the parameter foo is no longer used. I would like to remove the unused parameter from the links so the URL will look cleaner, but I am concerned that this could cause problems with SEO. Is it safe to remove ?foo=bar from my links?

    Read the article

  • How to generate a Visa Checkout token? [on hold]

    - by Muhammad Junaid
    I am in the process of creating a Visa Checkout plugin but am stuck generating the token. Here are the token requirements:

    Format: alphanumeric; maximum 100 characters, in the form x:UNIX_UTC_Timestamp:SHA256_hash, where UNIX_UTC_Timestamp is a UNIX Epoch timestamp and SHA256_hash is an SHA256 hash of the following unseparated items: your shared secret; the timestamp from the transaction (exactly the same as UNIX_UTC_Timestamp); the resource path (API name); and this HTTPS request's query string.

    Note: The query string includes one or more parameters in name-value pair format, whose names are separated from values by equal signs (=); an empty value may be omitted, but the name and equal sign must be present. The initial question mark (?) is not included.

    Note: All parameters must be present. The parameters must be in lexicographic sort order (UTF-8, uppercase hex characters), with parameters separated from each other by an ampersand (&).

    Note: The query string must be URL-encoded (excepting the following characters, per RFC 3986: hyphen, period, underscore, and tilde). You can find this on Google: "visa checkout developer updating 1 px image".
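
    A minimal sketch of the hashing step in Python, assuming the query string has already been sorted and URL-encoded as the notes above require (the function and variable names are my own, not from Visa's documentation):

        import hashlib
        import time

        def visa_checkout_token(shared_secret, resource_path, query_string):
            # Token format: x:UNIX_UTC_Timestamp:SHA256_hash
            timestamp = str(int(time.time()))
            # The four items are concatenated with no separators before hashing.
            payload = shared_secret + timestamp + resource_path + query_string
            digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
            return "x:{}:{}".format(timestamp, digest)

        # Example with made-up values:
        print(visa_checkout_token("MY_SHARED_SECRET", "payment/info", "apikey=1234&format=json"))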

    Read the article

  • Release 51 of Sun Rack II capacity calculator available

    - by uwes
    A new release of the Sun Rack II capacity calculator is available on the eSTEP portal. We just uploaded release 51 of the calculator. The following changes have been integrated:

        - Added LOD date of 30 NOV 2014 for ST25xx M2 (NEP LOD; the LOD for other customers is 31 MAY 2014)
        - Moved the 7420 to EOL HW due to met LOD
        - Bug correction: X4-2 and X4-2L weren't working
        - Bug correction: ES1-24 RU are now correctly shown (2 ES1-24 only take 1 RU)

    The tool calculates all the necessary data (power requirements, BTU, number of rack units, needed power outlets, etc.) while inserting the many different kinds of HW equipment into a Sun Rack II cabinet (version 1000 and 1200). It takes into consideration most of the available servers, storage devices, tapes, and Netra products. A couple of third-party products are also taken into account. The spreadsheet can be downloaded from the eSTEP portal. URL: http://launch.oracle.com/ PIN: eSTEP_2011
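
    As a rough illustration of the kind of tally such a calculator performs, here is a toy version in Python; the equipment list and all figures are made up, not taken from the tool:

        # A toy rack tally (hypothetical figures, not from the calculator).
        equipment = [
            {"name": "2U server", "rack_units": 2, "watts": 750},
            {"name": "4U storage shelf", "rack_units": 4, "watts": 1200},
        ]

        total_ru = sum(item["rack_units"] for item in equipment)
        total_watts = sum(item["watts"] for item in equipment)
        # 1 W of load dissipates roughly 3.412 BTU per hour.
        btu_per_hour = total_watts * 3.412

        print(total_ru, "RU,", total_watts, "W,", round(btu_per_hour), "BTU/h")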

    Read the article

  • Calling a web service through a reverse proxy

    - by Ken
    I had a web service that, when I first read the WSDL in test, was http, but it needed to be accessed from behind a reverse proxy with https. Here are the steps:

    1. In the app.config, change <httpTransport> to <httpsTransport>.
    2. In the app.config, change the url address in the <endpoint> to the reverse proxy address.
    3. Add the following to disable certificate validation:

        System.Net.ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };

    Note that this will accept all certificates (including invalid, expired, or self-signed ones).

    Read the article

  • Stop Google Analytics from appending hostname?

    - by Nick Q.
    I've come across an Analytics profile that appends the rest of a URL to the end of a page's path. For example, when looking at the page that exists at http://example.com/page, I would expect to see /page, but instead it shows me /page/http://example.com/. The profile has no filters applied to it, and until July it was reporting as expected (/page). In July the site in question switched hosts (and absolutely nothing else, so I'm not sure that's the problem). The analytics code on the site is the standard Google async code with a domain set. All other profiles for the site show /page as expected. Any ideas as to how I can get the profile to function as expected?

    Read the article

  • Can SSL Wildcards have multiple/nested levels of wildcard?

    - by Don Faulkner
    I know that an SSL wildcard certificate (*.example.org) can be used to support many names under the domain (a.example.org, b.example.org, c.example.org). I also know that the * is only good for matching a single level of name. That is, *.example.org will not work on a.b.example.org. What if I used a certificate with the name *.*.example.org? I'd like to build a certificate with the following name configuration:

        CN=example.org
        subjectAltName=DNS:example.org, DNS:*.example.org, DNS:*.*.example.org, DNS:*.*.*.example.org

    I've tried building a few like this as self-signed certificates, but I've not had good results. For example, Chrome tells me "Server's certificate does not match the URL." Is it possible to have nested wildcards in a certificate, or do the popular browsers not support this?
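
    For what it's worth, the popular browsers follow the single-level rule from RFC 6125: a wildcard is only recognized when it is the entire left-most label. A minimal sketch of that matching logic in Python (illustrative only, not any browser's actual code):

        def wildcard_matches(pattern, hostname):
            # RFC 6125-style: '*' only counts as the complete left-most
            # label; it is never treated as a wildcard elsewhere.
            p = pattern.lower().split(".")
            h = hostname.lower().split(".")
            if len(p) != len(h):
                return False
            if p[0] == "*":
                return p[1:] == h[1:]
            return p == h

        print(wildcard_matches("*.example.org", "a.example.org"))      # True
        print(wildcard_matches("*.example.org", "a.b.example.org"))    # False: label counts differ
        print(wildcard_matches("*.*.example.org", "a.b.example.org"))  # False: second '*' is not literal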

    Read the article

  • How to delete all your old website data from the internet?

    - by Akky Awesøme
    I had my website at rohbits.com, but for some reasons I had to delete it and recreate it at this URL: www.rohbits.com/blog. My problem is that the old links are still visible in Google search, and when people click on those links, they land on a 404 error page from the hosting company. I want to either delete all the previous data from the search engines or have a 404 error page of my own, so that I can tell my visitors where the actual website is. I have already redirected all the traffic that comes to rohbits.com to www.rohbits.com/blog, but when people click on the expired links, they get this error page. One sample expired link is this one: http://rohbits.com/wordpress-tricks.

    Read the article

  • Website .htaccess file for Wordpress sub folder

    - by ubique
    I developed a Flash website for a client and added the following .htaccess file in the root directory, and the non-www to www redirect works perfectly:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^website.com [NC]
        RewriteRule ^(.*)$ http://www.website.com/$1 [L,R=301]

    I was also asked to add a WordPress blog, so I put it in a new directory folder (as opposed to a subdomain), so the URL is www.website.com/blog. Does Google now see the main site and blog as two different websites? Do I need to link them together using another .htaccess file in the WordPress root so Google automatically crawls the whole domain? Any help appreciated....

    Read the article

  • SAML Request / Response decoding.

    - by Shawn Cicoria
    When you're working with Web SSO integration, it is sometimes helpful to be able to decode the tokens that get passed around via the browser by the various participants in the trust: RP, STS, etc. With SAML tokens, sometimes they're simply base64-encoded when they're in the POST body; other times they're part of the query string, in which case they end up being deflated, base64-encoded, then URL-encoded. I always end up putting together some simple tool that does this for me, so this is an effort to make it more permanent. It's a simple WinForms application that uses .NET Framework 4.0. Download
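
    For reference, a minimal sketch of both decodings in Python (the query-string form of the SAML redirect binding uses a raw DEFLATE stream, hence the negative window-bits value):

        import base64
        import urllib.parse
        import zlib

        def decode_post_token(value):
            # POST-body form: the XML is simply base64-encoded.
            return base64.b64decode(value).decode("utf-8")

        def decode_redirect_token(value):
            # Query-string form: URL-encoded, then base64, over a raw
            # DEFLATE stream (no zlib header, hence wbits=-15).
            compressed = base64.b64decode(urllib.parse.unquote(value))
            return zlib.decompress(compressed, -15).decode("utf-8")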

    Read the article

  • "find" command and piping its output through another program

    - by Charbel
    This is not an Ubuntu-specific question; it applies to all Unix/Linux systems. How can I run a command like this:

        find . -maxdepth 1 -type d -print -exec svn info "{}" | grep URL \;

    The command above doesn't do what I want: I can't seem to pipe the output of svn info to grep. This works, but the output contains much more than I need:

        find . -maxdepth 1 -type d -print -exec svn info "{}" \;

    Any ideas?
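
    To illustrate what the pipeline is after, here is a rough Python equivalent (an illustrative sketch, assuming the svn client is on the PATH):

        import subprocess
        from pathlib import Path

        for d in sorted(Path(".").iterdir()):
            if d.is_dir():
                # Equivalent of: svn info "$d" | grep URL
                result = subprocess.run(["svn", "info", str(d)],
                                        capture_output=True, text=True)
                for line in result.stdout.splitlines():
                    if "URL" in line:
                        print(d, line)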

    Read the article

  • HTML5 - check if font has loaded

    - by espais
    At present I load the font for my game with @font-face. For instance:

        @font-face {
            font-family: 'Orbitron';
            src: url('res/orbitron-medium.ttf');
        }

    and then reference it throughout my JS implementation as such:

        ctx.font = "12pt Orbitron";

    where ctx is my 2D context from the canvas. However, I notice a certain lag while the font is downloaded to the user. Is there a way I can use a default font until it is loaded in? Edit: I'll expand the question, because I hadn't taken the first comment into account. What would the proper method of handling this be in the case that a user has disabled custom fonts?

    Read the article

  • Editing a command-line argument to create a new variable

    - by user1883614
    I have a bash script called test.sh that uses a command-line argument:

        lynx -dump $1 > $name".txt"

    But I need name to be created from the argument by specific keywords in the argument. An example:

        http://www.pcmag.com/article2/0,2817,2412941,00.asp
        http://www.pcmag.com/article2/0,2817,2412919,00.asp

    These are two separate articles, but the difference can only be seen in those 12 characters. How do I create a variable from a URL for those 12 characters? So that when I run test.sh in a terminal:

        ./test.sh http://www.pcmag.com/article2/0,2817,2412941,00.asp

    there is a text file saved as 0,2817,2412941,00?
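
    For illustration, the extraction itself sketched in Python (the function name is my own; a bash version would use the same idea of taking the last path segment and dropping the extension):

        from urllib.parse import urlparse

        def name_from_url(url):
            # ".../article2/0,2817,2412941,00.asp" -> "0,2817,2412941,00"
            last_segment = urlparse(url).path.rsplit("/", 1)[-1]
            return last_segment.rsplit(".", 1)[0]

        print(name_from_url("http://www.pcmag.com/article2/0,2817,2412941,00.asp"))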

    Read the article

  • IIS isn't propagating domain

    - by ErocM
    I called GoDaddy and 'verified' that my settings for the two IPs were correct: ns1.asezo.com = xx.xx.xx.15 and ns2.asezo.com = xx.xx.xx.16. Then I set the nameservers of asezo.com to the ns1/ns2 above, which GoDaddy tech support says is right. When I try to visit my site, it says "Oops! Google Chrome could not find asezo.com." When I try to ping the website, it times out. I have the bindings in IIS for the default website as:

        http - xx.xx.xx.15 - 80 - www.asezo.com
        http - xx.xx.xx.15 - 80 - asezo.com

    And I'm still getting nothing. I can go directly to the IP, http://xx.xx.xx.15/, and it gives me the IIS default website; I just can't get the domain to resolve. What am I doing wrong?
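
    As a quick sanity check (nothing IIS-specific), the following Python snippet shows whether the name resolves at all from a given machine; a socket.gaierror means the DNS delegation isn't answering yet:

        import socket

        try:
            # Returns the A record if DNS delegation is working.
            print(socket.gethostbyname("asezo.com"))
        except socket.gaierror as exc:
            print("Name is not resolving yet:", exc)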

    Read the article

  • Best strategy for supporting multiple server communication from iPhone/android app?

    - by tipycalFlow
    I'm making an app that will be used in multiple hospitals in the US. As per HIPAA compliance requirements, every hospital will have its own server that complies with the requirements for ensuring patient data security, etc. Now the task is that the app should communicate with a particular server based on the login info. An additional requirement is that new hospitals (servers) are likely to be added along the way, even after the app is available on the market. So basically, according to some login credentials, the app should communicate with the server of the hospital assigned to that person. One pretty crude way is to set up our own server that links the hospitals with the login info and, accordingly, provides a base URL for data exchange. Is there a more efficient way to handle this?
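
    A minimal sketch of that directory-server idea in Python, assuming a central lookup endpoint that maps a login to its hospital's base URL; the endpoint, the response field, and the requests dependency are all illustrative assumptions:

        import requests  # third-party: pip install requests

        DIRECTORY_URL = "https://directory.example.com/lookup"  # hypothetical endpoint

        def base_url_for(username):
            # Ask the central directory which hospital server owns this account.
            resp = requests.get(DIRECTORY_URL, params={"user": username}, timeout=10)
            resp.raise_for_status()
            return resp.json()["base_url"]  # hypothetical response field

        # The app then targets base_url_for(login) for all subsequent API calls.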

    Read the article

  • Will duplicate international (i18n) content hinder SEO rankings?

    - by Rhys
    Google clearly states that duplicate content, within a single domain or across multiple domains, is not advised. This is understood, but I am not sure of any exceptions for sites with region-specific content that is often replicated across locales. For example, a site's /en-us/about page could be identical to /en-uk/about, whereas most likely /en-ja/about is unique. Are GYM smart enough to understand that the initial URL segment is a locale specifier? Is there any robots.txt or header, etc., trickery that I should include to outline the site's international structure?

    Read the article

  • How to tell Google that I have changed my website URLs?

    - by Momen M El Zalabany
    I have made major updates to my website and renamed all of its URLs. The problem is: how can I tell Google that I have renamed all the URLs, so that Google refreshes its index? I have uploaded the sitemap via Google Webmaster Tools many times. My website URL: http://www.pndmasr.com. My sitemap: http://www.pndmasr.com/sitemap.xml. But still, every time I search Google for "pndmasr", I get results for the old pages; I have waited more than 3 days with the same problem. Any solutions? Is there a problem with my sitemap?

    Read the article

  • Github Feed affecting my WordPress installation? [on hold]

    - by saul
    Any idea how this fork is affecting my site? I went to verify my website's log stats and realized this may be the cause of a strange redirect constantly happening on my WordPress installation. Here's a line I found in my log:

        54.81.91.95 - - [07/May/2014:22:52:08 -0400] "GET /category/selfie/feed/ HTTP/1.1" 200 1826 "-" "feedzirra http://github.com/pauldix/feedzirra/tree/master"

    And this is the GitHub fork (or however these are called): https://github.com/feedjira/feedjira/tree/master. Basically, I think every time I update my categories (selfie, in this case), I get redirected to install.php, probably by triggering some GET on that feed. To the best of my knowledge, this feed parser crawls all URLs with this structure, blocking them, kind of like a DDoS attack?? Any ideas how to go about it??

    Read the article

  • Disqus thread migration. Gotchas?

    - by sramsay
    I've been migrating a site to a new domain. The site itself is pretty straightforward (it uses Jekyll), and everything has gone fine, except the migration of Disqus threads. I've had partial success: some of the threads have migrated successfully, but not all. I've tried the domain migration wizard (which caught a few), the URL mapper (which caught a few), and the 301 redirect crawler (which caught a few). But the remaining threads just won't move, no matter which method I use. So I suppose I'm asking whether there are any "gotchas" I should know about with this. When you execute any of these migration tools, it says it will "take a while." Does that mean hours? Days? I can't tell if it's working, and there's no logging or error reporting that I can see.

    Read the article

  • JavaScript malware analysis

    - by begueradj
    I want to test websites for the presence of JavaScript malware. I plan to develop a Python program that sends the URL of a given website to a virtual machine, where the dynamic execution of any malicious JavaScript embedded in the website's page is monitored. My questions: Should my VM be Windows or Linux? What if the malware damages my VM: is there a hint how to avoid that, or should I launch a new VM automatically instead? If I use a telnet client library to communicate with the VM, must I implement a server within the VM to handle my queries, or can I avoid this? I am just looking for hints and general ideas. Thank you for any help.

    Read the article
