Search Results

Search found 9757 results on 391 pages for 'shekhar pro'.

  • How to open the JavaScript console in different browsers?

    - by Šime Vidas
    Chrome: Press CTRL + SHIFT + I to open the Developer Tools. Click on the "Open console" icon in the bottom left corner.
    Safari: Press CTRL + ALT + I to display the Web Inspector. Click on the "Open Console" icon in the bottom left corner. Note: this only works if the "Show Develop menu in menu bar" check box in the Advanced tab of the Preferences menu is checked!
    IE9: Press F12 to open the developer tools. Open the Script tab, then click the "Console" button on the right.
    Firefox 4: Press CTRL + SHIFT + K to open the Web Console.
    What about Opera 11?
    Clarification: by console I mean the JavaScript console that lets you input and execute JavaScript code.

  • Need to redirect Wordpress category archives

    - by Scott
    I recently changed my Wordpress category structure a bit, changing some of the names and placing some under different parent categories. I don't use the category name in my post URLs, so that's not a problem. But my category archive pages are indexed and have page rank I don't want to lose. So I need to redirect "/category/old_cat_name" to "/category/new_cat_name", or in some cases to "/new_cat_name/new_sub_cat". I gather that I can't do this through the WP Redirection plugin and that I have to modify my .htaccess. Can someone show me what lines to add there, or is there another, better way to do this? Thanks.
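
    For illustration, a minimal .htaccess sketch of the redirects described above (assuming Apache's mod_alias; the category slugs are placeholders, and the lines would sit above the standard WordPress rewrite block):

        # hypothetical old-to-new category redirects
        Redirect 301 /category/old_cat_name /category/new_cat_name
        Redirect 301 /category/other_old_cat /new_cat_name/new_sub_cat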

  • Spam bot constantly hitting our site 800-1,000 times a day. Causing loss in sales

    - by akaDanPaul
    For the past 5 months our site has been receiving hits from these four sites: sheratonbd.com, newsheraton.com, newsheration.com, newsheratonltd.com. Typically the exact URL they come from looks something like this: http://www.newsheraton.com/ClickEarnArea.aspx?loginsession_expiredlogin=85. The spam bot goes to our homepage, stays there for about a minute and then exits. Luckily we have some pretty beefy servers, so it hasn't come close to overloading them yet. Last month I started blocking the IP addresses of the spam bots, but they seem to keep getting new ones every day. So far I have blocked over 200 IP addresses; below are a few of the ones I have blocked, all from Bangladesh: 58.97.238.214, 58.97.149.132, 180.234.109.108, 180.149.31.221, 117.18.231.5, 117.18.231.12. Since this has been going on for the past 5 months, our real site traffic has started to drop, and every day our orders get lower and lower. Also, since these spam bots simply go to our homepage and then leave, our bounce rate in analytics has skyrocketed. My questions are: Is it possible that these spam bots are affecting our SEO? 60% of our orders come from natural search, and since this whole thing started orders have slowly been dropping. What would be the reason someone would want to waste resources doing this to our site? IPs aren't free and neither are domain names, so what would be the goal in doing this to us? We have Google AdWords but don't advertise on extended networks nor advertise in Bangladesh, since we don't ship there, so they are not making money on AdSense. Has anyone experienced anything similar to this? What did you do, and what was the final outcome?
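
    Since the hits arrive with those referring URLs, one option (a sketch, assuming Apache with mod_setenvif and 2.2-style access control, and assuming the bots actually send a Referer header) is to block by referrer instead of chasing IP addresses:

        # flag requests referred by the spam domains listed above
        SetEnvIfNoCase Referer "sheratonbd\.com"     spam_ref
        SetEnvIfNoCase Referer "newsheraton\.com"    spam_ref
        SetEnvIfNoCase Referer "newsheration\.com"   spam_ref
        SetEnvIfNoCase Referer "newsheratonltd\.com" spam_ref
        Order Allow,Deny
        Allow from all
        Deny from env=spam_ref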

  • Is there a general rule of thumb for which browsers to optimize your site for?

    - by Christian
    I have a site (recently relaunched with a new design) that I have put off optimizing for IE7 for far too long. I was just never too worried about it. The site is optimized for IE8-10, Firefox, Chrome, Opera, Safari, etc. Then I asked myself, is it even worth it? I checked traffic over the last couple of months before the relaunch, and about 1.3% of the traffic is coming from IE7. So, is there a general cutoff percentage below which you would not optimize for a specific browser?

  • How Google Web Starter Kit serves adaptive image for mobile?

    - by 5argon
    My website weirdly (in a good way) serves smaller images when viewed on mobile. I wanted to know what causes this. As far as I know this is not the default behaviour, so I think it must be Google Web Starter Kit's doing. Here is the debug information when debugging on a device: all images became 231 B in size no matter how large they actually are. (When debugging on desktop the size varies.) I tried using Google Web Starter Kit (https://github.com/google/web-starter-kit) recently. The tools in it are built on Ruby, Node.js, SASS and Gulp to help you 'build' the website. Before building you get automatic reload because the Gulp script watches all files for you. When you build, it runs various tools to minify HTML and CSS and compress images. According to this page, https://developers.google.com/web/fundamentals/tools/build/build_site, gulp-imagemin was used. So I guess imagemin is doing the mobile optimization for me? What kind of compression can serve automatically resized images on mobile? And why is the size 231 B? Is this related to my screen size?
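
    For reference, a Gulp image task of the kind the Starter Kit used looks roughly like this (a sketch; paths and option names vary between gulp-imagemin versions). Note that imagemin only recompresses existing files; it does not by itself generate per-device sizes.

        // hypothetical gulpfile excerpt
        var gulp = require('gulp');
        var imagemin = require('gulp-imagemin');

        gulp.task('images', function () {
          // losslessly recompress images and copy them to the build directory
          return gulp.src('app/images/**/*')
            .pipe(imagemin({ progressive: true, interlaced: true }))
            .pipe(gulp.dest('dist/images'));
        });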

  • How do I remove a LOT of indexed pages from Google?

    - by Thierry
    A few weeks ago we figured out that Google has indexed some information we would rather keep confidential, in the form of individual PDF files. Our assumption was that this was a problem with our robots.txt that we had overlooked. Even though we are not sure whether or not this is the case, we are certain that the robots.txt file is in a valid format and is, according to Google's Webmaster Tools, blocking the files. However, even after this adjustment, made weeks ago, Google still has the PDF files indexed, but tells us that further information cannot be provided due to the robots.txt file being present. As you can hopefully understand, this is unwanted behaviour given the nature of the documents. I am aware that Google provides a removal request page for this purpose, but there are a lot of files. Is there an easier way to get Google to remove all of the files from its search engine? If not, is there anything else you could advise us to do besides manually requesting Google to remove every single page? Thanks in advance.
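
    One common pattern here (a sketch, assuming Apache with mod_headers) is to serve a noindex directive on the PDFs themselves via the X-Robots-Tag response header; note that Google has to be able to crawl the files to see the header, so the robots.txt block would have to be lifted for it to take effect:

        # hypothetical rule covering all PDF files
        <FilesMatch "\.pdf$">
            Header set X-Robots-Tag "noindex, nofollow"
        </FilesMatch>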

  • Adding tagged / dynamic pages in sitemap

    - by sam
    I've got a blog that's been running for about a year. I've made about 200 posts, and there should be about 220 pages to index (additional pages for about / contact etc.). When I crawl the site I get 1,900 pages because of all the pages related to tags I've used in my posts; 70% of these pages only contain one blog post. When submitting my sitemap to Google, should I exclude all pages with /tagged/ in the URL so I'll only be submitting unique pages, or should I submit the full sitemap?

  • Google Webmasters tools crawl error caused by URL split into two lines

    - by Shiro
    I am looking into the Crawl Errors section of Google Webmaster Tools. How should I handle URLs that another system / application has turned into invalid URLs? e.g. the real URL is http://www.example/images/products/s_=enlarge_16gb.jpg but, I don't know why, Yahoo Groups breaks the link into http://www.example/images/products/s_= enlarge_16gb.jpg and only makes the top part a hyperlink, which is http://www.example/images/products/s_= Because of that URL, Google shows a crawl error. I get a few errors because of this kind of result, or because of other people's typos. How do I prevent this? I am sure I don't have the right to go and change other people's posts. What is the solution for this? Thanks!

  • Do CDNs work with POST operations?

    - by iddqd
    I'm using a CDN (Level3) for the first time and I'm a bit confused. I'm accessing dynamic URLs such as http://cdn.mysite.com?getItem=1234 that return text data. Do CDNs work with HTTP POST operations? When I issue an HTTP POST, my "real" server receives the request every time, so I'm wondering if the CDN has a problem with POST operations. If I use HTTP GET it seems to work: I call the URL once (from my application) and I can see my server receiving the request; if I call it a second time, the CDN delivers it directly and my server doesn't get anything. However, if I open the same link manually from a second browser tab, my server is asked to deliver again. Shouldn't it be cached by now? Many thanks.
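
    For context: CDNs generally pass POST requests straight through to the origin rather than caching them; only GET (and HEAD) responses are normally cached, and only when the origin marks them cacheable. A sketch of the kind of response header that lets an edge cache a GET (assuming Apache with mod_headers on the origin; the one-hour TTL is an arbitrary example):

        # mark dynamic GET responses as cacheable for an hour
        Header set Cache-Control "public, max-age=3600"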

  • Correctly indexing multiple domains with same content in Google and others

    - by AJweb
    I have a client with a dozen territorial domains, like mydomain.co.uk, mydomain.fr, mydomain.de, etc. Most of these domains hold a different language of the same dynamic content (shop), but some, like .co.uk and .com, have the same language and content, except for some content customized to each country/domain on the front page, contact and other pages. I am aware that we should use the canonical meta tag to mark that duplicated content, but we want the .co.uk site to be present in the UK (indexed in google.co.uk) and the .com site to be present in the US and other countries, for example, or at least that is the goal. Is there anything we can do to "help" Google determine the geographical meaning of each domain? If we mark the .com and .co.uk sites with a canonical tag, do you know how Google will decide which one to show for a given search?
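
    One way to signal this (a sketch; the language/region codes are assumptions) is Google's hreflang annotation, placed in the <head> of each country/language version (or in the sitemap), so the duplicate .com and .co.uk pages point at each other as regional alternates instead of one being canonicalized away:

        <link rel="alternate" hreflang="en-gb" href="http://mydomain.co.uk/" />
        <link rel="alternate" hreflang="en-us" href="http://mydomain.com/" />
        <link rel="alternate" hreflang="fr"    href="http://mydomain.fr/" />
        <link rel="alternate" hreflang="de"    href="http://mydomain.de/" />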

  • Server 2008R2 in Extra Small Windows Azure Instance?

    - by Shawn Eary
    Windows Azure hosting for an Extra Small (XS) Windows VM seems to come out to about $10 a month right now. I think this XS instance gives you the equivalent of a 1 GHz CPU with 768 MB of RAM. I think the minimum requirements for Server 2008 are a 1 GHz CPU with 512 MB of RAM. Also, I think the minimum requirements for SQL Server Express are a 1 GHz CPU with 256 MB of RAM, and that the minimum requirements for Team Foundation Server Express 11 Beta are a 2.2 GHz CPU with 1 GB of RAM (this 2.2 GHz part could be a problem for my 1 GHz XS VM...). Given the performance of the XS Azure instance, would I be able to install a very basic MVC web site, a free instance of SQL Server Express, and a free single-user instance of Team Foundation Server Express 11 Beta, and run the XS VM instance without serious crashing? I know there are other shared web host providers that can provide these features for me, but those hosting providers have the following disadvantages: They sometimes cost a lot of money after all of the "addons" are in place. They probably don't provide the level of security and employee integrity that Microsoft can provide. They don't provide the total control that an Azure VM seems to provide.

  • How to get useful feedback/bug reports from users

    - by Mikael Eliasson
    I'm sure most webmasters have received an email like this: "Creating [insert item here] is not working!" When you check it out, there is no general problem with the function; rather, the user has discovered an edge case. Almost every mail I get is like this, and in the long run it gets a bit annoying to always have to ask the user for more information. Is there anything I can do to get my users to provide more useful feedback? Right now I have a mailto: link for the webmaster address in the page footer. I was thinking of changing this so that they have to report through a form on the site. Has anyone got any experience with this? Do you get better/more reports by having a feedback form instead of giving users the email address?

  • Cannot add DataTables.net javascript into Joomla 1.5

    - by mfmz
    I've been having this problem where I couldn't add the DataTables.net JavaScript to my Joomla article. I have been trying to include it through Jumi. Saying that my editor strips out the tags isn't quite right, as I have been able to run the Google Chart API in Joomla, which also uses JavaScript. Any clue why? The code is as below:

        <link href="//datatables.net/download/build/nightly/jquery.dataTables.css" rel="stylesheet" type="text/css" />
        <script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
        <script src="//datatables.net/download/build/nightly/jquery.dataTables.js"></script>
        <script type="text/javascript">
            $(document).ready( function () {
                var table = $('#example').DataTable();
            } );
        </script>

  • Using a front controller design pattern doesn't allow images to be served

    - by MrMe TumbsUp
    I am currently using a front controller; all requests for my website go through it. I have a problem with image links like <img src="img/image.jpg" />: my front controller tries to dispatch the request to application/controller/ImgController.php, and the image won't load. I think it has something to do with the .htaccess file:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

  • What dangers await if I block non-standard, non-major-usa search engine bots from my USA only website?

    - by Ryan
    I noticed tons of bandwidth being used by non-USA search engine bots, so I began blocking them in an effort to save bandwidth and CPU cycles for actual users and the search engines they come from (Google, Bing, Yahoo, Ask, etc.). Other than potentially losing some international traffic (which isn't really important to us, since all of our content is very USA-centric), what additional dangers should I be concerned about? I'm using a modified version of Jeff Starr's User Agent Blocklist.

  • Stopping duplicate H1 and title from dynamic content

    - by codemonkey
    I have a web site with lots of dynamically (database-driven) created pages. These pages are basically used to show uploaded images, and they look a bit like this:
        URL: http://www.mywebsite.com/page-id/page-title/
        H1: View from the sea
    This is a big issue because I might have 10 other pages with the title 'View from the sea'. I know the simple solution would be to make sure the pages are named differently, but I have lots of users on the web site, so it's not that simple. What do you guys think about putting the page-id with the page-title in the H1 tag, so it might read "437 - View from the sea"? I need to differentiate the H1 titles. I think using the page-id would help, but if anyone has a better solution that would be great! Thanks in advance.
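
    For illustration, a sketch of the markup the page-id idea would produce (the exact format and wording are just examples):

        <!-- hypothetical markup for page 437 -->
        <title>View from the sea (437) | mywebsite.com</title>
        <h1>437 - View from the sea</h1>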

  • Attaining credit card data

    - by Adam
    I've read the many posts on this site that say we are not allowed to store CC numbers if we are not PCI-compliant. But I'm wondering: is it possible to send a CC number through a form to an email address? Would that still be infringing on the standards? The reason I ask is that a local business owner wants to retrieve a number through a form on his website, so he can manually enter the CC info on his end. I'm assuming the only way to properly get a credit card number is to set up a merchant account? What's the best way to get a CC number without calling the actual customer? I'm thinking email is a bad idea as well.

  • Joomla Hide Menu Item, or: Using Rich Content as part of the navigation

    - by chiccodoro
    In my Joomla based web site, I have a two-layer main menu. The page layout contains two sections: the left one displays the content and the right one displays some other kind of content which at the same time serves as a menu. For example, if the user clicks on the "Products" - "SomeCategory" 2nd-level menu item, the left section displays an image. The right section lists all products of that category. Each product is represented by an image and text. The content is scrollable. This section is implemented by means of a custom module (mod_custom) assigned to the menu. The content is rich text (HTML). Each product is entered manually by adding a picture and a text in the WYSIWYG editor, and by inserting a link for the picture and text. Now the issue: when the user clicks on a product, I want to display the corresponding product description article ("SomeProduct") to the left, accounting for the following requirements:
        The breadcrumb now displays "Products - SomeCategory - SomeProduct".
        The main menu still displays the 2nd level for "Products", and "SomeCategory" is still marked as selected.
        (I would love it if the right section which lists the products would remain in the exact same scroll state, but that's a completely different story.)
    If I link the product entry from the right hand side directly to the article "SomeProduct", then the article appears to the left, but the breadcrumb and menu are reset. So I wanted to create a hidden menu item "SomeProduct" beneath "SomeCategory", and to link the product entry to that menu item. This way, if I click on the product entry, the article appears to the left, the breadcrumb behaves correctly, and the menu state is preserved. However, it is not possible to configure the SomeProduct menu item as "hidden", so it appears in the main menu. I found some resources that suggest creating another menu, called "hidden", which does not use any modules, and creating the "SomeProduct" menu item in that menu. Unfortunately this did not work for me: if I link that menu item from the product entry, and click on that entry, then the article appears to the left, but the menu is reset, and the breadcrumb displays "SomeProduct" instead of "Products - SomeCategory - SomeProduct". Lucky me! I found an appropriate Stack Exchange site where I can pour out my heart to you guys. Surely you can help me :-)

  • New blog post shows immediately in google search results where as other HTML content takes time, why?

    - by Jayapal Chandran
    I have a blog which has been active for 3 years. Recently I posted an article and it appeared in Google search almost immediately, maybe within 5 to 10 minutes. A point to note is that I was logged into my Google account. Maybe Google checked my posts when I searched since I was logged in? Yet I logged out, used another browser, searched again for that specific text, and it still appeared in the Google search results. How did this happen? However, if I make an article in static HTML and publish it, it takes time to show up. (I assume this is the case, but I haven't tested much; I have tested a few cases after updating it in my sitemap XML.) How does Google search work for a blog versus other content?

  • Track anchor links with Google Analytics

    - by Fredrik
    I have searched for how to track anchor links in Analytics, but couldn't get it working. I have this code in the header:

        <script>
        (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
        (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
        m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
        })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
        ga('_setAllowAnchor', true);
        ga('create', 'UA-*******-1', '****.com');
        ga('send', 'pageview');
        </script>

    And my links look like this:

        <a href='#/contact'><span>Contact</span></a>

    I also tried to use links like this:

        <a href='#/contact' onClick="_gaq.push(['_trackPageview', location.pathname+location.search+location.hash]);"><span>Contact</span></a>

    Are there any tips on what I can do?
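
    For reference (not necessarily the asker's fix): '_setAllowAnchor' and '_gaq' belong to the older ga.js library, not analytics.js. With analytics.js, hash-based navigation is typically tracked along these lines (a sketch):

        // send a virtual pageview whenever the URL hash changes
        window.addEventListener('hashchange', function () {
          ga('send', 'pageview', location.pathname + location.search + location.hash);
        });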

  • ASP.NET website deployment [on hold]

    - by Rei Brazilva
    I am getting my hands wet with ASP.NET and I have been following the tutorials. I deployed the site to Azure and it worked great. Today I started actually designing the site, and when I published, it looks as if Azure doesn't read any of the files I just updated, added, or modified. It works on my localhost, but not on Azure. I thought when you publish, everything goes up, including the new files. I don't have enough reputation to add a picture, so you'll have to forgive me. So, basically, how do I get my entire site uploaded? In case anyone does stop by, I was able to pull this out just recently:

        CA0058 Error Running Code Analysis
        CA0058 : The referenced assembly 'DotNetOpenAuth.AspNet, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246' could not be found. This assembly is required for analysis and was referenced by: C:\Users\lotusms\Desktop\LOTUS MARKETING\ASP.NET\WebsiteManager\WebsiteManager\bin\WebsiteManager.dll, C:\Users\lotusms\Desktop\LOTUS MARKETING\ASP.NET\WebsiteManager\packages\Microsoft.AspNet.WebPages.OAuth.2.0.20710.0\lib\net40\Microsoft.Web.WebPages.OAuth.dll. [Errors and Warnings] (Global)

        CA0001 Error Running Code Analysis
        CA0001 : The following error was encountered while reading module 'Microsoft.Web.WebPages.OAuth': Assembly reference cannot be resolved: DotNetOpenAuth.AspNet, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246. [Errors and Warnings] (Global)

    Could this have something to do with the problem?

  • Sharing one static ip for both ftp and www service

    - by user11496
    Trying to figure out how to update the zone records and configure the web server so that one application on it is accessible to the public. I'm not good at NS/DNS/NAT/firewall/routing/port forwarding/networking etc. "faraday" is the intranet name. Everyone within the local network can access all applications hosted on "faraday". The hostname for the web server is "www", the FTP server is "ftpserver". Both servers run RHEL4. The goal is to allow anyone outside the company network (the public) to access only one of the many applications on "faraday". Hope somebody can help me with some of the questions below, if not all.
        1. From the zoneedit record, the static IP is used by FTP now. Can I use the same existing static IP, 219.95.10.100, for the web service?
        2. Currently anyone who enters "http://www.abc.com.my" is directed to "http://www.abc.com". I don't want this to change.
        3. Currently no one else, except employees on the local network, can access "faraday" web pages. How do I configure it so that when anyone types "http://thisapp.abc.com.my" in their web browser, the URL leads them to "http://faraday/thisapp" (the application folder is /var/www/html/thisapp on the RHEL4 web server)? If possible, how do I make the URL continue to show "http://thisapp.abc.com.my" instead of "http://faraday/thisapp"?
        4. How do I limit/restrict users (those who are not on the local network) so they only have access to "http://thisapp.abc.com.my", but not "http://faraday" or "http://faraday/anotherapp", etc.? What configuration changes are needed in /etc/httpd.conf on the web server?
    The company domain name is "abc.com.my". Following are the zone records on www.zoneedit.com:

        Subdomain   Type    IP
        sdsl        A       219.95.10.100
        ftp         CNAME   sdsl.abc.com.my
        @           NS      ns3.zoneedit.com
        @           NS      ns7.zoneedit.com

    WebForward record:

        New Domain       Destination          Cloaked
        www.abc.com.my   http://www.abc.com   N

    On my local DNS server, there are 2 zone files: abc.com.my and pnmy.abc.com.

        > cat abc.com.my.zone
        ftp     CNAME   ftp.pnmy.abc.com.
        sdsl    A       219.95.10.100

        > cat pnmy.abc.com.zone
        ftp         CNAME   ftpserver
        ftpserver   A       172.16.5.1
        faraday     CNAME   www
        www         A       172.16.5.2
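
    A minimal sketch of the web-server side of question 3 (assuming Apache 2.x name-based virtual hosts on the "www" server, with a DNS record for thisapp.abc.com.my pointing at the same static IP; hostnames and paths are the ones from the question):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.abc.com.my
            DocumentRoot /var/www/html
        </VirtualHost>

        # requests for thisapp.abc.com.my are served straight from the app folder,
        # so the browser keeps showing http://thisapp.abc.com.my
        <VirtualHost *:80>
            ServerName thisapp.abc.com.my
            DocumentRoot /var/www/html/thisapp
        </VirtualHost>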

  • Page appears indexed in Google but not findable for any search terms?

    - by Jeff Atwood
    (Note that I am going to use screenshots here because I suspect writing about this will change the behavior over time.) If you do a Google search for uiviewcontroller best practices, either with or without the quotes, none of the results resolve to the actual Stack Overflow question containing those words in the title. They resolve to either a) sites that are mirroring our creative commons data and correctly pointing back to the source question without nofollow, as properly specified by our attribution requirements, or b) our own internal links to the question, but not the actual question itself. The actual page with the title "Custom UIView and UIViewController best practices?" does exist at this URL, http://stackoverflow.com/questions/3300183/custom-uiview-and-uiviewcontroller-best-practices, and apparently it is present in Google's index! But why does it not appear when we search for uiviewcontroller best practices?
        We know that Google contains this page in its index.
        Our search terms match the title of the question.
        Stack Overflow has much higher pagerank than the other sites that are mirroring this question under Creative Commons.
    I don't get it. What are we doing wrong here?

  • Reverse proxying only a specific URL

    - by Bart Silverstrim
    I have a web server at www.ourcompany.com running Apache2. Using the proxy modules, I am able to (for example) make 172.16.0.5, an internal IP device, accessible at www.ourcompany.com/device. The trouble is that anyone can play with or explore the device using strings sent to www.ourcompany.com/device/change/settings/here.html. I'd like the reverse proxy to only work for a specific URL, www.ourcompany.com/device/you/must/use/this, while anything else is rejected if requested. Is there a setting that can be used to do this, or is it a simple rewrite condition placed in the virtualhost for the site under sites-enabled? What is the simplest, most maintainable way to sanitize requests to the internal device through the reverse proxy? Running Apache2 on Ubuntu.
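
    One minimal approach (a sketch, assuming mod_proxy/mod_proxy_http inside the site's VirtualHost; the internal IP and path are the ones from the question) is to proxy only the single allowed path, so other /device/... URLs never reach the internal host at all:

        ProxyPass        /device/you/must/use/this http://172.16.0.5/you/must/use/this
        ProxyPassReverse /device/you/must/use/this http://172.16.0.5/you/must/use/this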

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a CSV file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one CSV file that has every URL with a 404, but also has every URL that links to each one of them. Am I overlooking this functionality somewhere, or does anyone have a good solution?
    Edit 1 (2/11/2013): Example of what the CSV output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmasters, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact that there are only 2 unique URLs now (like before), but all of the "Linked From" pages are shown in one report.
    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s, and more about generating a report of what Google has listed as errors. Oftentimes these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.
