Search Results

Search found 34778 results on 1392 pages for 'url link'.

Page 279/1392 | < Previous Page | 275 276 277 278 279 280 281 282 283 284 285 286  | Next Page >

  • Can't ping external websites

    - by Frantumn
    I can't ping google.com from my virtual Ubuntu 12.04 server. I have set up a proxy URL in my /etc/apt/apt.conf file, and it says Aquire::http::proxy http://urlname.com:9999; Now, I don't know a lot about how the proxy works, but I do know that when we use it on Windows virtual machines it's a PAC script that we place in Internet Explorer's LAN settings, and it automatically detects the script and gives internet access. I tried including 9999/proxy.pac in the apt.conf URL and it didn't seem to work any better. Would Ubuntu know how to handle a proxy.pac, assuming it was created for Windows? Should my URL include the .pac or just end after the port number? I've tried both without success, but I would like to know. A quick test pinging a fellow co-worker's PC was successful, so I can see network computers, but not Google or other internet sources.
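
    For reference, a minimal /etc/apt/apt.conf sketch of the proxy line, using the host and port quoted above as placeholders. Note that apt's directive is spelled "Acquire" and the value is normally quoted; apt cannot interpret a browser-style proxy.pac (a PAC file is JavaScript meant for browsers), so the URL should end after the port. Also, ping uses ICMP and never goes through an HTTP proxy, so pinging google.com can fail even when the proxy is configured correctly.

        # /etc/apt/apt.conf - HTTP proxy for apt only (host/port taken from the question)
        Acquire::http::Proxy "http://urlname.com:9999";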

    Read the article

  • Google Analytics goal funnel does not recognize virtual page views

    - by Webber Smith
    I have a setup wizard with 3 steps. Since I'm using AJAX, each step fires a virtual pageview with an appropriate URL (see below). The pageviews are being recorded in the Content section of Google Analytics, but the Goal Funnel still shows zero for each step. I've tried advice from other forums, such as:

    - Make sure the Goal URL is set to Exact match.
    - Make sure neither the steps nor the Goal URL are a parent directory of any other step; for example, don't track /wizard/ as a goal/step while also tracking /wizard/step2/. Not sure why this would be a problem since it is an exact match, but it shouldn't hurt, so I tried it.
    - Require (or don't require - tried both) the first step in the funnel.

    ...but none of these seem to work. Thoughts?

    Goal settings
    Goal URL (Exact match): "/wizard/setup-complete/"
    Funnel
    Step 1: "/wizard/step1/"
    Step 2: "/wizard/step2/"
    Step 3: "/wizard/step3/"
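
    A hedged sketch of how such virtual pageviews are usually fired, assuming the classic asynchronous ga.js snippet (the _gaq queue); the paths must match the funnel step URLs character for character, including the leading and trailing slashes:

        // fired when the wizard advances to a step (classic ga.js API)
        _gaq.push(['_trackPageview', '/wizard/step1/']);
        // likewise '/wizard/step2/', '/wizard/step3/' and, on completion,
        // '/wizard/setup-complete/'

    One other thing worth remembering is that goal and funnel reports are not retroactive: conversions only start counting from the moment the goal was defined, so a freshly configured funnel can legitimately show zeros for a while.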

    Read the article

  • Does a lead-to screen with AdSense ad conform to Google's rules?

    - by ElHaix
    Re: Google's ad placement policy. I have noticed that when clicking on some Forbes links, I am taken to a screen with an ad in the middle; at the top there is a link to skip the ad. Upon clicking the skip link, I am taken to the article I want to view. I want to implement something similar on my sites: when clicking on a search result, the results window first displays one AdSense ad on the screen, with a UI similar to what I saw on Forbes. Currently, when a user clicks on a result, a new tab/window opens with the result. What I am proposing is that before the result appears, the screen displays a "Continue to result" link at the top in large letters, and in the center of the page "Advertisement" with the ad below. This is the only popup, it is user-initiated, and there are no other popups on the site. Navigation elements are not modified in any way. Will I get penalized by Google for implementing this?

    Read the article

  • Dev Connections Azure Tutorial

    I am more than a little tardy with this blog post but the link for the tutorial code can be found here: http://www.dasblonde.net/downloads/windowsazureessentialslaunch042010.zip If you had already downloaded the code from the link specified in my tutorial slides, that link (and this one) are both updated with some new stuff. If you attended my similar tutorial in Norway, there are updates to the scripts here that you might be interested in. I created some PowerShell scripts to delete all Windows Azure...

    Read the article

  • Ethernet not working in 12.04 (Dell Inspiron 14z)

    - by Izabela
    When I plug in a network cable, it is not recognized. The WI-FI is working properly, though. ifconfig output:

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:1645 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1645 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:206840 (206.8 KB)  TX bytes:206840 (206.8 KB)

    wlan0     Link encap:Ethernet  HWaddr e0:06:e6:de:57:e7
              inet addr:150.164.201.145  Bcast:150.164.201.255  Mask:255.255.255.0
              inet6 addr: 2001:12f0:601:a921:98a2:3dd:3be8:c483/64 Scope:Global
              inet6 addr: 2001:12f0:601:a921:e206:e6ff:fede:57e7/64 Scope:Global
              inet6 addr: fe80::e206:e6ff:fede:57e7/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:23892 errors:0 dropped:0 overruns:0 frame:0
              TX packets:14676 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:30123226 (30.1 MB)  TX bytes:2189050 (2.1 MB)
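
    Since no eth0 appears in the output above, a hedged first step (assuming a stock 12.04 install) is to check whether the wired controller is detected at all and whether a kernel driver is bound to it:

        # is the wired controller visible, and which kernel driver claims it?
        lspci -nnk | grep -iA3 ethernet

        # recent kernel messages mentioning the interface
        dmesg | grep -i eth

        # list all network hardware, including interfaces that are down
        sudo lshw -C network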

    Read the article

  • Search engine bots accessing strange URLs

    - by casasoft
    We have ELMAH enabled on our site, so we get an error whenever a Page Not Found error is triggered on the website. We recently redesigned the website, so we understand that search engine robots might try to access previously indexed pages and trigger Page Not Found errors. For this reason, we have set up permanent redirects from such previously indexed pages to the respective new pages. The website in question is www.chambercollege.com, and, for example, a previously indexed URL was www.chambercollege.com/special-offers.aspx. This page is no longer accessible, so we created the necessary permanent redirect to the respective page at www.chambercollege.com/en/content/special-offers-161/. Now we are starting to receive Page Not Found errors from search engine bots (e.g. the MSN bot) trying to access the URL www.chambercollege.com/special-offers.aspx/images/shadow_right.jpg/. Any idea how a search engine could make up that strange URL, and do you have any suggestions on the best thing to do?
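
    A hedged sketch of the permanent redirect described above, assuming the site uses the IIS URL Rewrite module and a web.config (the paths come from the question; if the redirects are done elsewhere, for example in ASP.NET code, the idea is the same):

        <!-- inside <system.webServer> in web.config; assumes the URL Rewrite module -->
        <rewrite>
          <rules>
            <rule name="OldSpecialOffers" stopProcessing="true">
              <match url="^special-offers\.aspx" />
              <action type="Redirect" url="/en/content/special-offers-161/" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>

    Because the pattern is not anchored at the end, a request such as /special-offers.aspx/images/shadow_right.jpg/ is also caught. URLs like that often come from a bot resolving old relative image references (e.g. images/shadow_right.jpg) against the retired page's address, so redirecting them, or returning 410 Gone, is generally safe.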

    Read the article

  • Google Analytics doesn't show the correct landing page for my Facebook ads

    - by chiba
    Most of the visitors to my site are supposed to arrive via an external link with the URL my-site.com/en/var from Facebook ads, but Google Analytics shows that the most common landing page is my-site.com/var, without the en prefix for the English version of my site. Am I missing something in configuring Google Analytics, or is Facebook sending the visitors to the wrong URL? (In the preview page of the Facebook ads, the URL is set correctly, with the en prefix.) Any advice is appreciated. Thanks.

    Read the article

  • Share buttons vs sharer URLs

    - by TeeOh
    As some people might know, adding share buttons from Facebook and Twitter can slow a page down. I've seen many sites pass on the common iframe implementations that these networks offer and simply create icons that link to a sharer URL, for better control of page performance: http://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Fwww.cnn.com%2F&t=CNN%26s+website%27 However, I've also read that Facebook is dropping support for these links. For example, this link now redirects to the Like button: http://www.facebook.com/facebook-widgets/share.php Here is an article noting that Facebook is deprecating/has deprecated its share functionality and is sticking with the Like button: http://www.barbariangroup.com/posts/7544-the_facebook_share_button_has_been_deprecated_called_it I'm assuming this is the same for the sharer URL. If the sharer URL is no longer a reliable option, what other methods are there besides using third-party widgets (like AddThis)?
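
    For reference, a minimal sketch of the plain-link approach described above (the shared URL is a placeholder, and, given the deprecation notes here, Facebook may change or drop the endpoint at any time):

        <!-- plain anchor to Facebook's sharer endpoint; no Facebook JavaScript is loaded -->
        <a href="http://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Fwww.example.com%2F"
           target="_blank">Share on Facebook</a>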

    Read the article

  • SEO penalty for landing page redirects

    - by therealsix
    Using eBay as an example, let's say I have a large number of items whose URLs look like this: cgi.ebay.com/ebaymotors/1981-VW-Vanagon-manual-seats-seven-/250953153841 I want to give my client the ability to put links to these items on their website EASILY, without knowing or checking my URL. So I created a redirect service that maps their identifier to my URL: ebay.com/fake_redirect_service/shared_identifier9918 would redirect to the link above. This works great: my clients can easily set up these links with information they already have, and the user sees the page as usual. So on to the problem... I'm concerned that this redirect service will have a negative impact on my SEO ranking. Having a landing page redirect you immediately to a different URL seems like something a typical spam site would do. Will this hurt me? Any better solutions?
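
    A sketch of what the redirect service would return for the shared identifier above, assuming it answers with a permanent (301) redirect; a 301 is generally treated by search engines as "this content lives at the target URL" rather than as a doorway page, whereas meta-refresh or JavaScript redirects are the patterns that tend to look spammy:

        GET /fake_redirect_service/shared_identifier9918 HTTP/1.1
        Host: ebay.com

        HTTP/1.1 301 Moved Permanently
        Location: http://cgi.ebay.com/ebaymotors/1981-VW-Vanagon-manual-seats-seven-/250953153841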

    Read the article

  • URIs and Resource vs Resource representation

    - by bckpwrld
    A URL is a URI which identifies a resource by location. A resource representation is a view of the resource's state. This view is encoded in one or more transferable formats, such as XHTML, Atom, XML, MP3... URIs associate resource representations with their resources. a) So I assume a URI identifies a resource and not a resource representation? b) I've read that the relationship between a URI and resource representations is one to many. Assuming we're talking about a URL, how can a single URL address more than one resource representation? Thank you.
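
    One concrete way a single URL can yield several representations is HTTP content negotiation: the client states the formats it can handle and the server picks one. A sketch with a hypothetical URI:

        GET /reports/2012/sales HTTP/1.1
        Host: example.com
        Accept: application/xml

        HTTP/1.1 200 OK
        Content-Type: application/xml
        ...the same resource state, serialized as XML...

        GET /reports/2012/sales HTTP/1.1
        Host: example.com
        Accept: application/json

        HTTP/1.1 200 OK
        Content-Type: application/json
        ...the same resource state, serialized as JSON...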

    Read the article

  • How to dynamically add Google Analytics tracking code using PHP?

    - by foodil2
    I would like to add a tracking code to each link in my email content. I have registered a Google Analytics account and found that there is only one tracking code, so, using PHP and given a Google Analytics ID and password, how do I:

    1. register a new tracking code,
    2. add each code to a link (do I need to use PHP to add a 1px x 1px image for each link?), and
    3. return the codes added?

    Thank you. Besides that, do I have to track the results in Google Analytics (Traffic Sources > Campaigns), or can I use an API that integrates the Google Analytics results panel into my own system? Thank you again for any kind help.
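
    A minimal PHP sketch of the approach usually taken for e-mail campaigns: rather than registering extra tracking codes, each link is tagged with Google Analytics campaign parameters (utm_source, utm_medium, utm_campaign) so clicks show up under Traffic Sources > Campaigns of the existing profile. The function name and example values are hypothetical:

        <?php
        // Append GA campaign parameters to a link used in the e-mail.
        function tagEmailLink($url, $campaign)
        {
            $params = http_build_query(array(
                'utm_source'   => 'newsletter',
                'utm_medium'   => 'email',
                'utm_campaign' => $campaign,
            ));
            return $url . (strpos($url, '?') === false ? '?' : '&') . $params;
        }

        echo tagEmailLink('http://www.example.com/offers', 'june-mailing');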

    Read the article

  • List of backlinks to a specific website, listed by decreasing PageRank

    - by Nicolas Raoul
    With backlinkwatch.com I can get a list of pages that link to a particular website. Unfortunately, it lists tons of obscure blogs and small forums, and it is hard to find which links are really important. Is there a similar service where links are displayed sorted by "importance"? For instance, a link from the New York Times would be shown at the top of the list, while links from small blogs would not appear until a few pages in. "Importance" can be subjective, so I suggest using PageRank, but other metrics could be fine too.

    Read the article

  • Invoking JavaScript from Java

    - by Geertjan
    Here's an Action class defined in Java. The Action class executes a script via the JavaFX WebEngine:

        @NbBundle.Messages("CTL_AddBananasAction=Add Banana")
        private class AddBananasAction extends AbstractAction {
            public AddBananasAction() {
                super(Bundle.CTL_AddBananasAction());
            }
            @Override
            public void actionPerformed(ActionEvent e) {
                Platform.runLater(new Runnable() {
                    @Override
                    public void run() {
                        webengine.executeScript("addBanana(' " + newBanana + " ') ");
                    }
                });
            }
        }

    How does the executeScript call know where to find the JavaScript file? Well, earlier in the code, the WebEngine loaded an HTML file in which the JavaScript file was registered:

        WebView view = new WebView();
        view.setMinSize(widthDouble, heightDouble);
        view.setPrefSize(widthDouble, heightDouble);
        webengine = view.getEngine();
        URL url = getClass().getResource("home.html");
        webengine.load(url.toExternalForm());

    Finally, here's a skeleton addBanana method, which is invoked via the Action class shown above:

        function addBanana(user){
            statustext.text(user);
        }

    By the way, if you have your JavaScript and CSS embedded within your HTML file, the code navigator combines all three into the same window, which is kind of cool.

    Read the article

  • WiFi USB adapter showing the network but no connection in effect

    - by Idrees
    I have a Pentium 4 system, 3 GHz, 1 GB RAM (no built-in WiFi). I installed Ubuntu 12.10 on my PC and it works fine; it picked up all the drivers for audio and video by itself. I plugged in a TP-Link 54Mbps High Gain Wireless USB Adapter (TL-WN422G) (link for the device: http://www.tp-link.com/en/products/details/?model=TL-WN422G). Now what happens is that the WiFi network is detected and shown in "Network Connections", and the system is connected to it, but when I open Firefox it is as if there is no internet connection at all.
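
    For the "associated but no internet" symptom, a few hedged checks (standard tools on Ubuntu 12.10) narrow down whether the problem is addressing, routing, or DNS:

        # did the adapter get an IP address and a default gateway?
        ifconfig wlan0
        route -n

        # can an outside host be reached by IP (bypasses DNS)?
        ping -c 3 8.8.8.8

        # does name resolution work?
        nslookup google.com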

    Read the article

  • How do I get the root index page to redirect to a subdirectory without affecting SEO?

    - by paradroid
    I am reviving/reorganising my personal WordPress blog. It's using a URL that looks like this: http://mydomain.com/blog The webserver 301 redirects www.mydomain.com to mydomain.com. I want to use the blog subdirectory because I plan to add other parts to the site, with the blog only being one part of the site. However, at the moment there is nothing there but the blog, so I want to have the root index page redirect to the blog for the time being. I have been using this on the root index.html page to do the redirect... <meta http-equiv="REFRESH" content="0;url=./blog"></HEAD> ...but this seemed to have stopped the site being indexed by Google and Bing. How do I do this without affecting SEO? Also, what URL should I put in the sitemap.xml?
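
    A hedged server-side alternative to the meta refresh, assuming the site runs on Apache with mod_rewrite enabled: an HTTP redirect is the documented way to tell crawlers where the content lives, whereas a meta refresh passes little or no signal. The sitemap.xml would then list the /blog/... URLs that actually serve content rather than the bare root.

        # .htaccess in the document root: send only the bare root URL to /blog/
        RewriteEngine On
        RewriteRule ^$ /blog/ [R=302,L]
        # switch 302 to 301 once /blog/ is the permanent home of that content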

    Read the article

  • Ubuntu 12.04 Ethernet works sporadically on lenovo x200 tablet

    - by user73100
    This is what I got from ifconfig:

    eth0      Link encap:Ethernet  HWaddr 00:1f:16:1a:0e:7e
              inet6 addr: fe80::21f:16ff:fe1a:e7e/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:34 errors:0 dropped:0 overruns:0 frame:0
              TX packets:387 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:16036 (16.0 KB)  TX bytes:98315 (98.3 KB)
              Interrupt:20 Memory:f2700000-f2720000

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:704 errors:0 dropped:0 overruns:0 frame:0
              TX packets:704 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:55280 (55.2 KB)  TX bytes:55280 (55.2 KB)

    Read the article

  • Which status to use for a temporarily inactive page

    - by aji
    I was wondering if someone could help me with how to manage temporarily inactive pages with regard to SEO and search engines. The case is that I manage a big ecommerce site, and sometimes I need to take down a page (or pages). It could be days, weeks, or months, depending on our vendor. If my visitors land on a page that is temporarily inactive, I can show them a message that the vendor they are looking for is not available at this time and that they can check back later OR check another vendor with similar products - but how do I send that message to search engine robots? If I use a 301 status and forward the URL to another page with similar products, the chance of the current URL being deindexed is huge, while I still want to use that URL in the future if the vendor wants to re-join. Any advice is highly appreciated.
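
    One commonly suggested option, sketched below, is to keep the URL live but answer with a temporary status instead of a 301: HTTP 503 with a Retry-After header tells robots the page is unavailable only temporarily, so the URL is far less likely to be dropped from the index. (For outages lasting many months, keeping the page up with a normal 200 response and an "unavailable" message is the usual alternative.)

        HTTP/1.1 503 Service Unavailable
        Retry-After: 604800
        Content-Type: text/html

        <html><body>This vendor is not available at the moment - please check back later.</body></html>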

    Read the article

  • C# 5 - async method callback with Task.ContinueWith? [migrated]

    - by user1142433
    I have a method that pulls some HTML via the HttpClient like so:

        public static HttpClient web = new HttpClient();

        public static async Task<string> GetHTMLDataAsync(string url)
        {
            string responseBodyAsText = "";
            try
            {
                HttpResponseMessage response = await web.GetAsync(url);
                response.EnsureSuccessStatusCode();
                responseBodyAsText = await response.Content.ReadAsStringAsync();
            }
            catch (Exception e)
            {
                // Error handling
            }
            return responseBodyAsText;
        }

    I have another method that looks like so:

        private void HtmlReadComplete(string data)
        {
            // do something with the data
        }

    I would like to be able to call GetHTMLDataAsync and then have it call HtmlReadComplete on the UI thread when the HTML has been read. I naively thought this could somehow be done with something that looks like GetHTMLDataAsync(url).ContinueWith(HtmlReadComplete); but I can't get the syntax correct, nor am I even sure that's the appropriate way to handle it. Thanks in advance!
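
    A minimal sketch of the await-based way to get that behaviour, assuming GetHTMLDataAsync is called from a UI event handler (the handler name and URL are hypothetical); awaiting captures the UI SynchronizationContext, so the code after the await resumes on the UI thread without an explicit ContinueWith:

        // hypothetical UI event handler; 'await' resumes on the UI thread
        private async void LoadButton_Click(object sender, EventArgs e)
        {
            string data = await GetHTMLDataAsync("http://example.com/");
            HtmlReadComplete(data);
        }

    If ContinueWith is preferred, the equivalent is GetHTMLDataAsync(url).ContinueWith(t => HtmlReadComplete(t.Result), TaskScheduler.FromCurrentSynchronizationContext()), which schedules the continuation back onto the UI thread.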

    Read the article

  • Drop in rankings after removing sitewide backlinks

    - by user319940
    Here's the scenario: I have a small web design business and was using a branded backlink on the bottom of all client sites. Recently this has become a bit taboo with the Google updates so I went back to a few of my sites and made it so there's only a homepage backlink. After doing this, I've had a drop in rankings, despite this apparently being a best practice. Is this likely a temporary drop that will pick back up? For any new sites, I still want to have a link on all pages of client sites as it's good advertising. I plan to have a do-follow homepage link and then no-follow every other link - is this a good idea?
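
    For reference, a sketch of the markup difference being described (the studio URL is a placeholder); rel="nofollow" is the standard way to mark a link that should not pass ranking credit:

        <!-- client homepage: normal, followed credit link -->
        <a href="http://www.mydesignstudio.example/">Web design by My Design Studio</a>

        <!-- every other page: the same link, marked nofollow -->
        <a href="http://www.mydesignstudio.example/" rel="nofollow">Web design by My Design Studio</a>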

    Read the article

  • New domain and submission to search engines

    - by Guandalino
    I have registered a new domain with a hosting company. They offer the feature that for each new domain there is an associated placeholder page - actually a "Site not configured" page with some technical text and links to the hosting site. I could:

    1. submit its URL to search engines right now;
    2. remove the page and submit the URL when the site is online (could be a couple of months);
    3. replace the default page with "coming soon" contents and submit the URL;
    4. opt for simplicity and add a blank HTML page with a focused, well-descriptive title and maybe some meta tags;
    5. other?

    I prefer 4 over 3 because at the moment there aren't precise project details to provide. What's the proper way to notify search engines that this site will be online soon, without getting penalized for side effects I'm not considering or aware of?

    Read the article

  • How to execute a "name.desktop" file? [duplicate]

    - by Pubudug
    This question already has an answer here: Running a .desktop file in the terminal (10 answers)

        #!/usr/bin/env xdg-open
        [Desktop Entry]
        Version=1.0
        Type=Link
        Name=ShareFolder
        Icon=/usr/share/icons/DPL/NetworkShare.png
        Name[en_US]=ShareFolder
        URL=smb://servername/sharefolder

    This is my .desktop file, which has a URL. How do I execute this desktop shortcut in the terminal? If I double-click it, it works perfectly, but I need to execute it in a terminal. I tried "Running a .desktop file in the terminal"; that didn't work for me either, although it does if it's an "application" shortcut. Here I'm trying to execute a "link" .desktop file, where the type section says Type=Link and the target is URL=smb://servername/sharefolder.
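
    A hedged workaround sketch: since Type=Link entries are not launchable programs, one option from a terminal is to pull the URL= value out of the file and hand it to xdg-open (file name as in the question; this assumes a desktop session with GVFS so smb:// locations can be opened in the file manager):

        # open the smb:// location referenced by the .desktop link
        xdg-open "$(awk -F= '/^URL=/{print $2}' ShareFolder.desktop)"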

    Read the article

  • Does spreading content across domains improve ranking? [closed]

    - by usertest
    Possible duplicate: The SEO Benefit of Breaking Up Content Onto Different Websites. I was wondering if (assuming all your content is related) it would be better to put all your content under a single domain or on multiple domains that link to each other. Let's say I have Site A, which doesn't have a good search ranking. If I have a new product that I'm sure could get a good ranking on its own, would I get a better search ranking for Site A if I: (a) add the new product as a new section of Site A, or (b) put the product on a new Site B that links back to Site A? To give you an example: if you were developing a few browser plugins, would it be better (in terms of ranking) to showcase them all on the same site, or would you give them each their own domain that links to the others? Thanks.

    Read the article

  • Off-Page SEO - The Ethos of Backlinking

    I want to delve under the bonnet a bit more regarding off-page SEO, so let's get to it! How do we get backlinks to our site? It may seem easy, and what springs to mind for many is to ask someone to link to your site while in return you link back to theirs. In general this is a very fair and sensible undertaking, it really is. However, in Google's eyes, and perhaps by now in the other two big search engines' too, it is no longer counted as a valuable link.

    Read the article

  • Apache loads any file that begins with the same string as used in the URL. How to prevent this?

    - by MarshallBananas
    If I point to mywebsite.com/search and there is a file called search.php or search.html or search.inc.php or search.whatthehell.php in the website's directory, Apache will serve that file instead of returning a 404. What is even more annoying is that if I point to mywebsite.com/search/string?also=whatever, Apache will still serve any file whose name begins with "search.". Also, all RewriteRules with patterns containing filenames that exist in the directory are ignored/useless. I'm using Apache 2 on Mac with an unmodified httpd.conf. How do I prevent it from redirecting my URLs so freely?
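
    The behaviour described - an extension-less URL being mapped onto search.php, search.html, and so on - is what mod_negotiation's MultiViews option does. A minimal sketch of turning it off, assuming .htaccess overrides are allowed (otherwise the same line belongs in the relevant <Directory> block of httpd.conf):

        # stop content negotiation from mapping /search onto search.* files
        Options -MultiViews

    With MultiViews disabled, requests for /search fall through to mod_rewrite (or to a plain 404) instead of being negotiated onto a file, so the RewriteRules mentioned above get a chance to match again.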

    Read the article
