Search Results

Search found 5380 results on 216 pages for 'webmasters'.


  • Site inaccessible to some people, fine for others [on hold]

    - by Paul Howell
    A couple of days ago my website www.howellphoto.com (hosted by one.com, a WordPress site) started loading really slowly, and I have been unable to access any pages linked from the homepage. Several of my friends have hit the same issue, yet many others can access the site without problems. Live support at one.com has not been much help, requesting the IP addresses of a few people who cannot access the site and suggesting it could be a firewall issue. WordPress support (my site was created in prophotoblogs) has been better and has updated all plugins, etc., but can see no issue from their end. My main issue is that even if there were a local fix I could apply on my computer, it would not help with any potential customers visiting my site for information! This is driving me crazy!!! Any help will be legendary! Cheers, Paul

    Read the article

  • Is the use of hashbang really a good idea? [on hold]

    - by user32642
    I've been working on a WordPress site lately that was designed with hashbangs (shebangs) in its dynamically generated URLs. After doing some research, I noticed that Google expressed some preference about their use and how it crawls such sites. However, after I ran several sitemap generators and Screaming Frog SEO Spider, I realized that the only page being crawled was the index page. So now I am questioning the use of hashbangs. What do you think? Should I attempt to remove them? Or will it even matter? And does anyone know of an easy way to remove them? The site is www.modernvintage1005.com
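
    For reference, a minimal server-side sketch of one common way to retire hashbang URLs: Google's AJAX-crawling scheme translates "#!foo" into "?_escaped_fragment_=foo", so a 301 from that form to a clean path lets crawlers reach the real pages. The clean-URL mapping below is an assumption, not the site's actual structure.

        <?php
        // Crawlers request "?_escaped_fragment_=about" in place of "#!about".
        // Redirect such requests to clean URLs (mapping here is hypothetical).
        if (isset($_GET['_escaped_fragment_'])) {
            $path = '/' . ltrim($_GET['_escaped_fragment_'], '/');
            header('Location: http://www.modernvintage1005.com' . $path, true, 301);
            exit;
        }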

    Read the article

  • Best Way for Developers to Upload Files to Production Server

    - by ultrajohn
    We are a small team of developers doing their work here and there. We have a team leader, who is solely responsible for uploading updated source files from the development server to the production server. So if an updated file needs to go to the prod server, the developer concerned notifies the team lead, and the team lead uploads the files to the prod server. No developer has access to the prod server except the team lead. That's our current setup. Now, what we want is to give developers a way to upload their updated files to the server without the team lead intervening in the process. What do you think is the best way to go about this?

    Read the article

  • Should I create topics in a forum I'm about to launch so that new users won't feel it is "empty"?

    - by janoChen
    I'm about to launch a discussion forum about Taiwan. I'm really trying to figure out how to deal with the first visitors. I've thought about the following so far: invite a few friends to start some discussions and post some replies; create discussions myself and reply to them myself (with another account). I don't want the first visitors to feel like the site is empty. Maybe I'm missing something. Any suggestions?

    Read the article

  • How long is the penalty for duplicate ecommerce content after it has been resurrected?

    - by will
    I am fixing all of the duplicate content on my ecommerce site, writing all-original descriptions, etc. How long does it take Google to start ranking it again? I used to have a good ranking that converted quite a few sales; in the last week I have had next to nothing. Also, would the disclaimer I created under each product be considered duplicate content, given that it appears on most of my product pages and is the same everywhere?

    Read the article

  • Another website is mirroring and ranks above my site in search results

    - by Marlboro Goodluck
    There is a site of ill repute known as thedirty that has completely mirrored my site and now has links appearing in Google at the #1 spot using my content. I checked my log files and noticed that this site has been crawling mine for some time; it also has 10,000 links from its site to mine. I have blocked user access referred from this site and already reported them to Google as web spam. I also disavowed the domain. How are they getting top links in Google (even overtaking mine) with such nefarious tactics? What are the steps to completely eliminating an issue like this?
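
    For reference, a minimal sketch of the kind of referrer block the asker describes (the host substring is illustrative; in practice one would match the exact referring domain):

        <?php
        // Refuse requests that arrive via links from the mirroring site.
        $ref = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
        if (stripos($ref, 'thedirty') !== false) {
            header('HTTP/1.1 403 Forbidden');
            exit;
        }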

    Read the article

  • AdWords: is it possible to automatically copy changes in one campaign to other campaigns?

    - by Richard
    Is it possible to have identical campaigns except for one thing, so that whenever changes are made to one campaign, the same changes are automatically made to the others? I want to have identical campaigns running for different time zones, but for the campaigns only to run between particular hours of the day in each time zone. If I have a campaign for each time zone, when I make changes to one campaign, such as adding ad groups, adding keywords, or changing bid amounts, I want those changes to occur in the other campaigns also. Is this possible? Thanks

    Read the article

  • Should I include everything in the sitemap or only new content?

    - by Mee
    For a website with dynamic content (new content is constantly being added), should I only include the newest content in the sitemap, or should I include everything (with a sitemap index)? What are the best practices for sitemaps, especially for large sites? Also, is there any way to make Google (and other search engines) crawl only the pages in the sitemap? Thanks. Update: Also, any idea how Stack Overflow handles this? I'd like to know, but unfortunately (also understandably) they have blocked access to their sitemap.
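
    For reference, a sitemap index is just a small XML file pointing at child sitemaps (each child limited to 50,000 URLs by the sitemaps.org protocol); the file names below are hypothetical:

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap>
            <loc>http://example.com/sitemap-content-1.xml</loc>
            <lastmod>2012-06-01</lastmod>
          </sitemap>
          <sitemap>
            <loc>http://example.com/sitemap-content-2.xml</loc>
          </sitemap>
        </sitemapindex>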

    Read the article

  • Why is my site cached 2 times per day?

    - by clarawood
    I have read the FAQs and checked for similar issues: yes. My site's URL (web address) is www.adultxdating.com. Description (including timeline of any changes made): I lost my top search listings 4 months ago. I am still working on this but not getting proper guidance. The site is cached 2 times in 24 hours. Sometimes the site is back in the top 10 listings for hundreds of keywords; at other times it is gone beyond 1,000. Can anybody help me understand why this is happening? I have more than 200K incoming links and update the site regularly. Please help. Thanks, Clara Wood

    Read the article

  • Spaces in an img's "alt" attribute: good or bad for search engines?

    - by Camran
    I am trying to make it easier for search engines to crawl my website, as it is almost 100% dynamic. I have a couple of transparent images which are actually links to sections of my page. I wonder: if I add an "alt" attribute containing space characters to explain the target, will this improve SE rankings etc.? For example: <img src="blabla.png" alt="post new classified"> Or will this just result in errors? And, what should I put in the alt attribute if I can't use spaces? PS: Another different and short question: will JavaScript-rich content make a page less important to crawlers? Thanks

    Read the article

  • What framework for a text rating site?

    - by problemofficer
    I want to start a "rate my"-style site. The rated objects are mostly texts. I want it to be rather simple. Features I need:

    - object rating (thumbs up, thumbs down)
    - object comments
    - object tags
    - related-object presentation based on tags
    - user authentication and management
    - a private message system
    - sanity checks for text inputs (i.e. prevention of code injection)
    - caching
    - open source
    - runs on GNU/Linux

    I would gladly take something that is tailored for my scenario, but a generic framework would be fine too. I simply don't want to write stuff like user authentication that has been written a million times, risking security flaws. Programming language is irrelevant, but Python/PHP preferred.

    Read the article

  • How do I serve multiple domains from the same directory and codebase without my configuration breaking when apache.conf is overwritten?

    - by neokio
    I have 20 domains on a VPS running cPanel. One public_html is filled with code; the remaining 19 are symbolic links to that one. (For example, assets is a directory within public_html; for the 19 others, there's a symbolic link to that directory in each account's public_html dir.) It's all PHP/MySQL database driven, with content changing depending on the domain. It works like a charm, assuming cPanel has suExec enabled correctly, and assuming apache.conf does NOT have SymLinksIfOwnerMatch enabled. However, every few weeks my apache.conf is mysteriously overwritten, re-enabling SymLinksIfOwnerMatch and disabling all 19 linked sites for as long as it takes me to notice. Here's the offending block in apache.conf:

        <Directory "/">
            AllowOverride All
            Options ExecCGI FollowSymLinks IncludesNOEXEC Indexes SymLinksIfOwnerMatch
        </Directory>

    The addition of SymLinksIfOwnerMatch disables the sites in a strange way: the HTML is generated correctly, but all CSS/JS/images in the HTML fail to load, and clicking any link redirects to /. I have no idea why. I do have a few things in my .htaccess, which work fine when SymLinksIfOwnerMatch is not present:

        <IfModule mod_rewrite.c>
            # www.example.com -> example.com
            RewriteCond %{HTTPS} !=on
            RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
            RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]

            # Remove query strings from static resources
            RewriteRule ^assets/js/(.*)_v(.*)\.js /assets/js/$1.js [L]
            RewriteRule ^assets/css/(.*)_v(.*)\.css /assets/css/$1.css [L]
            RewriteRule ^assets/sites/(.*)/(.*)_v(.*)\.css /assets/sites/$1/$2.css [L]

            # Block access to hidden files and directories
            RewriteCond %{SCRIPT_FILENAME} -d [OR]
            RewriteCond %{SCRIPT_FILENAME} -f
            RewriteRule "(^|/)\." - [F]

            # SLIR ... reroute images to image processor
            RewriteCond %{REQUEST_URI} ^/images/.*$
            RewriteRule ^.*$ - [L]

            # ignore rules if URL is a file
            RewriteCond %{REQUEST_FILENAME} !-f
            # ignore rules if URL is not php
            #RewriteCond %{REQUEST_URI} !\.php$

            # catch-all for routing
            RewriteRule . index.php [L]
        </IfModule>

    I also use most of the 5G Blacklist 2013 for protection against exploits and other depravities. Again, all of this works great except when SymLinksIfOwnerMatch gets added back into apache.conf. Since I've failed to find the cause of whatever cPanel/security update is overwriting apache.conf, I thought there might be a more correct way to accomplish my goal using group permissions. I've created a 'www' group, added all accounts to the group, and chmod -R'd the code source to use that group. Everything is 644 or 755, but that doesn't seem to be enough. My Unix isn't that strong. Do you need to restart something for group changes to take effect? Probably not. Anyway, I'm entering unknown territory. Can anyone recommend the right way to configure a website for multiple sites using one codebase that doesn't rely on apache.conf?
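
    For context, a minimal sketch of the host-based dispatch such a shared codebase typically performs; the site map and its fields are hypothetical, not the asker's actual code:

        <?php
        // One codebase, many domains: pick per-site settings from the Host header.
        $host = isset($_SERVER['HTTP_HOST']) ? strtolower($_SERVER['HTTP_HOST']) : '';
        $host = preg_replace('/^www\./', '', $host);

        $sites = array(
            'example-one.com' => array('site_id' => 1, 'theme' => 'one'),
            'example-two.com' => array('site_id' => 2, 'theme' => 'two'),
        );

        if (!isset($sites[$host])) {
            header('HTTP/1.1 404 Not Found');
            exit('Unknown site');
        }

        // Everything downstream keys its content queries off $site['site_id'].
        $site = $sites[$host];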

    Read the article

  • Does the user agent in any regular browser contain 'bot' or 'crawl'?

    - by Echo
    Does the user agent in any regular browser contain 'bot' or 'crawl'? I check the user agent on my site to see whether a request is coming from a bot or not. If it is, I can do some little optimizations, since bots don't log in. (I don't change the content at all.) After adding checks for 30-40+ bots, I'm getting tired of adding them. So I was wondering about just checking whether the user agent contains 'bot' or 'crawl'. I know that won't catch all bots, but it would catch a lot of them. But if that could cause any false positives, it would totally mess up the ability to add to cart, place an order, and log in.
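
    A minimal sketch of the substring check being proposed (the extra tokens 'spider' and 'slurp' are an assumption, not part of the question):

        <?php
        // Case-insensitive check for common bot markers in the user agent.
        $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        $looksLikeBot = (bool) preg_match('/bot|crawl|spider|slurp/i', $ua);

        if ($looksLikeBot) {
            // Skip login/session work for bots; the content itself is unchanged.
        }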

    Read the article

  • Getting started with SOAP [closed]

    - by EmmyS
    A site I developed has a new requirement to get weather data from the National Weather Service. They have quite a bit of info on how to use SOAP to get their data and display it in the browser, but what we need to do is use a cron job to get the data at specific intervals, then parse the data out into a database. I have no problem writing PHP code that will run an XSLT and parse XML records out into SQL queries, but I have no idea how to handle this with SOAP (which I've never worked with). Do I get the data via a SOAP request, save it to an XML file on my web server, then run the XSLT against that? Or is there some other way to go about this?
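
    For reference, a minimal PHP sketch of the request-then-transform flow described above; the WSDL URL, operation name, and file paths are placeholders, not the NWS's actual interface:

        <?php
        // Fetch the data over SOAP (run this from cron) and keep the raw XML.
        $client = new SoapClient('https://example.gov/forecast?wsdl', array('trace' => true));
        $client->__soapCall('GetForecastByZip', array(array('zip' => '10001')));
        file_put_contents('/var/data/weather.xml', $client->__getLastResponse());

        // Transform the saved XML into SQL (or anything else) with an XSLT.
        $doc = new DOMDocument();
        $doc->load('/var/data/weather.xml');
        $xsl = new DOMDocument();
        $xsl->load('weather-to-sql.xsl');
        $proc = new XSLTProcessor();
        $proc->importStylesheet($xsl);
        $sql = $proc->transformToXML($doc);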

    Read the article

  • Why does my sub-domain redirect return a blank page?

    - by Tom Brito
    I have the domain http://dropbox.tombrito.com/ (on GoDaddy) forwarding with masking to www.dropbox.com/sh/k6ypvx4y4kf0gu6/rdjxQ1b1OL. It was working fine some time ago, but now the result is a blank page (although Dropbox's favicon appears correctly in the browser's tab title). The DNS manager shows me a single entry with the name "dropbox": A dropbox 64.202.189.170. Any idea what's wrong? Related: Why my domain redirect on Google Apps is returning 404?
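
    For comparison, a masking forward serves the target inside a frame, which some sites refuse to render; a plain 301 issued from the subdomain avoids framing entirely. A minimal PHP sketch, with the target URL as in the question:

        <?php
        // Redirect instead of framing the Dropbox share link.
        header('Location: https://www.dropbox.com/sh/k6ypvx4y4kf0gu6/rdjxQ1b1OL', true, 301);
        exit;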

    Read the article

  • Convert video files for home IIS server [closed]

    - by Jey
    I am finally learning to set up an IIS server (personal use only), and I thought it would be cool to have some videos on it for me to watch when I am away from home. Since I'm usually on 3G (iPhone) or work Wi-Fi, I'd like to convert them to an optimal format that will stream fast. The video files are mostly AVI and MP4 (from 30 minutes to 2 hours in length). What would be an easy and fast way to go about doing this? Thanks.

    Read the article

  • How did Google Analytics kill my site?

    - by user1813359
    Yesterday I created a Google Analytics profile for one of my sites and included the JS block in the layout template. What happened next was very strange. Within about 2 minutes, the site had become unreachable. I had been checking the AWStats page for the site when I thought to set up GA. After that had been done, I clicked on the link for 404 stats, which opens in a new tab. It churned for a long while and then showed a nearly blank page, similar to the one Firefox shows when it chokes on a badly formatted XML page, except there was no error message. But I was logged into the server and could see that the page has an HTML 4.01 Transitional DTD. Strange! I tried viewing source but it just churned endlessly. I then tried "inspect element" and was able to see an error message having to do with some internal Firefox lib. Unfortunately, I neglected to copy that. :-( All further attempts to load anything on the site would time out. Firebug's Net panel showed no request being made. Chrome would time out. So, I deleted the GA profile, removed the JS block, and cleared the server cache. No joy. I then removed all Google cookies and disabled JS. Still nothing. No luck in any other browser. And now my client couldn't access the site. Terrific. I was able to use wget while logged into another server. The retrieved page was fine, and did not contain the GA JS block. However, the two servers are on the same network. (Perhaps a clue.) The server itself was fine. Ping and traceroute looked great. I could SSH in. I tailed the access log and tried a browser request. Nothing. But I forgot to quit, and a minute or so later I saw a request from someone else being logged. Later, I could see that requests had been served all day to some people. Now, 24 hours later, the site works once again, but is still unreachable by the client (who is in another city). So, does anyone have some insight into what's going on? Does this have something to do with Google's CDN? I don't know very much about how GA works, but what I'm seeing reminds me of DNS propagation issues. And why the initial XML error? And why the heck was the site just plain unreachable? What did Google do to my site?! Sorry for the length, but I wanted to cover everything.

    Read the article

  • Using apple-mobile-web-app-capable and cache.manifest issue [migrated]

    - by LocoMike
    So I have this simple HTML file:

        <!DOCTYPE HTML>
        <html manifest="cache.manifest">
        <head>
            <meta name="apple-mobile-web-app-capable" content="yes">
            <meta name="apple-mobile-web-app-status-bar-style" content="black">
            <title>Test</title>
            <meta http-equiv="content-type" content="text/html">
            <meta name="HandheldFriendly" content="true">
            <meta name="viewport" content="width=320; initial-scale=1.0; maximum-scale=1.0; user-scalable=0;">
            <style type="text/css"></style>
        </head>
        <body marginwidth="0" marginheight="0" topmargin="0" leftmargin="0">
            <h1>hello</h1>
        </body>
        </html>

    My cache.manifest is simply:

        CACHE MANIFEST

    I run this website on my local server (localhost). I load it in iPhone Safari and it works fine. I then stop the server and load it again, and it still works, because the offline cache is doing its job. However, if I save the website as a start icon on the iPhone home screen and then try to open it with the server stopped, it won't load. And yet, if I open it with the server running at least once (it will work), then I can open it later without problem. It looks like even though the page was cached in Safari, it is not cached in this saved app. Anybody know how to get around this? Thank you!

    Read the article

  • Looking for advice on B2B promotion [closed]

    - by IconicDigital
    Can anyone recommend affiliate networks that focus on B2B development? We are about to launch a UK job search engine that allows job boards to list their jobs on the engine. We have decided to keep the advertising in house, with the goal of keeping costs down. I was wondering if anyone could offer any advice on potential advertising routes that we could take, for example B2B affiliate networks, AdWords, etc. We are in the position of launching an empty site, and ideally we would like to attract recruitment agencies or businesses to sign up for either a free or paid account. They can then begin to populate the engine with job listings. An obvious choice so far would be to promote on networks like LinkedIn. Any ideas? Thanks

    Read the article

  • .htaccess mobile redirect issues

    - by val
    I'm trying to set up a mobile redirect for a site with 2 subfolders, and I cannot get both to work at the same time. This is the structure of the site: www.mysite.com/EN/ and www.mysite.com/ES/. This is a bilingual site, so each subfolder contains the files corresponding to each language version. I was using a 301 redirect and setting up the index in /EN/ as the main index; everything was getting redirected to it:

        DirectoryIndex index.html
        Redirect /index.html http://www.mysite.com/EN/index.html

    plus several RewriteCond rules to redirect mysite.com and old URLs to the new URLs. It worked fine before I decided to add a mobile version, m.mysite.com. I used the solution provided in http://stackoverflow.com/questions/3680463/mobile-redirect-using-htaccess, and it redirects my mobile version properly, but now the desktop version no longer works. Besides, my mobile version must be bilingual as well.
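
    As an alternative to .htaccess rules, a rough PHP sketch of a user-agent redirect that preserves the language folder; the token list is illustrative, not exhaustive:

        <?php
        // Send obvious mobile user agents to m.mysite.com, keeping /EN/ or /ES/ paths.
        $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        if (preg_match('/iPhone|iPod|Android|BlackBerry|Opera Mini/i', $ua)) {
            header('Location: http://m.mysite.com' . $_SERVER['REQUEST_URI'], true, 302);
            exit;
        }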

    Read the article

  • How to make the most of GWT's "Search queries"?

    - by DisgruntledGoat
    I've been looking at the "Search queries" section in Google Webmaster Tools recently, and it seems like there is a lot of potential there in finding which pages on a site need improvement. I'm trying to figure out exactly what to sort or filter on. Do I look at pages with a low average position? Low impressions but high clicks? Pages that are rising up/falling down the rankings? What is the low-hanging fruit here?

    Read the article

  • Does Google penalize pseudo-duplicate pages for different locations?

    - by mikewowb
    My company's site's home page was not specifically optimized for any location. Now I am planning to optimize it for Boston and create ten or so other landing pages for the other locations we serve. If we made these new pages by copying the original Boston one and changing the location's name (s/Boston/Montreal/), would Google consider them duplicate pages and penalize us? What is the best practice for this?

    Read the article

  • How to reverse engineer the SEO on a website?

    - by Startup Crazy
    I have read this question. My question is a bit different. I want to know how I can reverse engineer another website that ranks best for some keywords. For example, some website called www.bla.com ranks high for many keywords, and I want to learn from it how my website can gain the same authority and the same ranking (or perhaps a better ranking, if I find something they are missing). Can anyone outline a procedure for how to reverse engineer a website?

    Read the article

  • How do I fix the paths of my website?

    - by EASI
    I have Joomla 2.5.7 on my client's server, recently updated from 1.6 to 1.7. I did not make that site, but I am responsible for it now; I prefer to make a site from zero. Now users are clicking on the menu options, and when Joomla sends them to a meta-URL like http://iap.pa.gov.br/acervo they get the message "404: The requested URL /acervo was not found on this server." Could that be because they moved the Joomla folder from the root to root/iap (the name of the site)? If it is, what configuration adapts it to that new folder?

    Read the article

  • Is it legal to charge extra fees for copyrighted content on mobile platforms?

    - by Macrow Willson
    This question just came up as we recently bought content from image stock portals. Many of those have altered their license agreements in favor of charging more for use in mobile apps. So instead of their standard licenses, you need to pay for an "extended" license, which easily multiplies the fee by 5-10. That doesn't make sense, as a mobile device is just a smaller browser and protects the content even better than a desktop computer. Are those stock agencies allowed to do that, and is it legal at all? I am not a lawyer, but I would even risk going ahead with the standard license and waiting to be sued over the matter.

    Read the article
