Search Results

Search found 9727 results on 390 pages for 'llblgen pro'.

Page 246 of 390

  • How to Automate Checking for Stolen Content?

    - by Hisoka
I know about tools like Copyscape and Google Alerts. They're great, but it's quite tedious to copy and paste a URL or phrase for every page on my sites. Is there a tool that monitors your website and emails or alerts you whenever someone has stolen your content? The only service I know of is CopySentry, and honestly it's too expensive for me, since I have thousands of pages to monitor. Does anyone else have this problem, or is it just me? Thanks for any help.

    Read the article

  • Is my htaccess setting hurting SEO?

    - by Ramanonos
I have a site that redirects to HTTPS so I can use a wildcard SSL certificate for my password-protected pages. The redirect itself works fine in testing: whether you type http or www, you always end up on the SSL https URL.

    That said, I have about 200-300 external backlinks, many of them high quality, yet Google Webmaster Tools (along with SEOmoz) shows I have just 4. Huh? I'm embarrassed to say I only just discovered this. It has led me to hypothesize that maybe my .htaccess settings are messed up, so Google isn't recognizing a link because another site recorded it as http instead of https. Maybe? At any rate, here is my simple .htaccess setup for 301-redirecting www to http, and http to https:

        RewriteCond %{SERVER_PORT} !443
        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

        RewriteCond %{SERVER_PORT} 443
        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]

    Like I said, everything works for the redirect over HTTPS, so I'd rather not break what works. On the other hand, something is very wrong with Google finding my backlinks, so I need to fix something. I'm wondering whether Google isn't picking up backlinks from other websites that record me as http because I'm at https, or whether Google doesn't care and it's some other issue. Am I barking up the right tree? If so, any quick fixes? Thanks as always!
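    A side note for readers: rules like these send http://www visitors through a redirect chain (www to bare http, then http to https), and each extra hop is one more place for crawlers and link tools to lose track of a backlink. A minimal consolidated sketch, assuming Apache with mod_rewrite and a single canonical https://example.com host (the domain is a placeholder), redirects every non-canonical variant in one 301:

        RewriteEngine On
        # Anything that is not already https://example.com gets a single 301 to it
        RewriteCond %{HTTPS} off [OR]
        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]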

    Read the article

  • How can I register a domain that requires country residency?

    - by zzatkin
I tried to register a .pm domain through ovh.co.uk, but they e-mailed me asking for valid proof that I am a resident of the United Kingdom. I currently live in the United States. I am aware that I have to be a resident; that's not what I'm asking. I want to know whether it's possible, whether through some service that provides residency or some 'hacking' method, to register the domain I am interested in without having to physically be a resident of the country. I will try to find out whether ovh.co.uk will charge me an extra fee, but until then I am curious to know if there is any way I could do this. Also, is there any other website where I could purchase .pm domains?

    Read the article

  • Are there any services that sell 'packs' of domain names?

    - by DC_
    Are there any services that sell 'packs' of domain names? What I'm looking for is a service that will allow me to buy, for example, 50 random domain names. I don't care what the names are, they could be kjsgadkj286.mobi for all I care. Does anything like this exist? I am not interested in registering them myself. I thought that maybe domain squatters or someone similar might have leftover worthless names they want to get rid of.

    Read the article

  • How should I design my website to allow posterity to edit?

    - by SSumner
I'm building a website for a student organization I am involved in at my college. Most of the site will be static, i.e. it won't change from year to year, but certain pieces will. I am high-tech, but most of the others aren't, and I am graduating in the spring. So how should I go about building the website so that those who take over in subsequent years can edit information? Examples:

        Events: I already plan on using a Google calendar for this.
        Officers: There will be profiles/pictures for all the officers on the web page.
        Connections: Partnerships with other organizations that we have currently but may not have in the future, or may add more of in the future.

    Should I use some form of CMS (content management system)? If so, how restrictive are they (e.g. Drupal) in what you can build, and how easy is the editing afterwards? What other ways could I make a very nice-looking website while allowing certain pieces to be edited later?

    Read the article

  • SEO for duplicate sites with multiple domain extensions

    - by lock
I run a business in several countries, and I have registered domains such as:

        www.mydomain.com
        www.mydomain.us
        www.mydomain.ca
        www.mydomain.uk
        www.mydomain.com.au

    If I run the same website with the same content on all of these domains (of course there will be small changes, like the address), will it be considered spam, or will each domain rank well in its own country? Also, is there a solution if Google does consider it spam?
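    For what it's worth, the standard way to tell Google that these are country-targeted versions of the same site rather than spam is rel="alternate" hreflang annotations. A minimal sketch, assuming each site serves English and using the question's domains as placeholders, placed in the <head> of every version:

        <link rel="alternate" hreflang="x-default" href="http://www.mydomain.com/" />
        <link rel="alternate" hreflang="en-US" href="http://www.mydomain.us/" />
        <link rel="alternate" hreflang="en-CA" href="http://www.mydomain.ca/" />
        <link rel="alternate" hreflang="en-GB" href="http://www.mydomain.uk/" />
        <link rel="alternate" hreflang="en-AU" href="http://www.mydomain.com.au/" />

    Each page should list every alternate, including itself, and the set must be reciprocal across all the sites.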

    Read the article

  • .Museum Domain Name Registrar

    - by mmundiff
Does anyone know of a reputable domain name company that deals in .museum domain names? Previously we used DomainBank without much issue, but DomainBank was bought by Dotster and we have had nothing but trouble since the switch. Currently the website has been down for two days, and it is a MAJOR issue. I know .museum is a seldom-used domain, but I really need to switch to a reputable company. GoDaddy doesn't work with it, unfortunately. Does anyone know of a good company that does?

    Read the article

  • Since Google reduces the value of links alongside nofollow links, what is an alternative?

    - by SharkTheDark
Since 2009, Google has counted nofollow links as outgoing links too, and thus reduces the value of the other links on the page. What are some alternatives that stop Google from counting outbound links on my page? Suppose I make links appear in my page source like this:

        <span hrefs="http://link" rel="nofollow" link="true">Link Name</span>

    and then, in JavaScript, replace the span with an a tag and hrefs with href for every span that has link="true". Will this help?
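    For reference, a minimal sketch of the swap described above (hrefs and link="true" are the asker's invented markup, not a standard API) could look like this; note the caveat that Googlebot executes JavaScript, so the rewritten anchors may well be seen anyway:

        // Replace every <span link="true"> with a real anchor element.
        var spans = document.querySelectorAll('span[link="true"]');
        Array.prototype.forEach.call(spans, function (span) {
            var a = document.createElement('a');
            a.href = span.getAttribute('hrefs'); // stand-in attribute from the markup above
            a.rel = span.getAttribute('rel') || '';
            a.textContent = span.textContent;
            span.parentNode.replaceChild(a, span);
        });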

    Read the article

  • Create an Even Shadow On an Element [migrated]

    - by youarefunny
When a box-shadow is applied to an element, the corners are less "thick" than the middle because they don't have shadow on both sides. This creates an odd effect on full-width elements: http://jsfiddle.net/kevincox/6FhYe/18/ If you look at that example you will see that the edges are lighter. If the "banner" is at the top of a page you can spread the shadow and shift it up, but that doesn't work in the middle of the page, where the top of the shadow is visible. Does anyone have a solution with no images? Preferably cross-browser, but I can deal with vendor prefixes for a bit. Is there something like a separate horizontal and vertical stretch?
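    One common workaround, sketched below on the assumption that the banner has an opaque background and can take a pseudo-element (the class name and offsets are placeholders): let an invisible box wider than the banner cast the shadow, so the thin corner regions fall outside the visible area.

        .banner {
            position: relative;
        }
        .banner::before {
            content: "";
            position: absolute;
            top: 0;
            bottom: 0;
            /* extend the shadow caster past both edges so its corners are off-screen */
            left: -30px;
            right: -30px;
            box-shadow: 0 0 15px rgba(0, 0, 0, 0.5);
            z-index: -1; /* keep the caster behind the banner's own background */
        }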

    Read the article

  • Help me become the next Zuckerberg [closed]

    - by Kraken
I hope someone can help me with my problem, stated below. I started learning website making by taking the tutorials on w3schools.com, but after finishing the HTML tutorials I don't think I can really do anything with it yet. Maybe I don't know everything about HTML. I know online tutorials alone are not enough; what I need you to tell me is how I can learn to make some nice websites. I have two months of vacation now, and I think it would be the best time to learn, as I really love to be a website developer. I would like to start from scratch. I know some C (though I know it will not help me here). So tell me where I should learn HTML, what things I should learn after that, and in what order. If you can, also point me to material (books or online tutorials). I would like to know how to gain confidence that I can actually build websites, since just following tutorials doesn't help me much. I also tried some video tutorials, but they only cover some of the functions, not all. So what am I supposed to do: know only the limited set of functions the video tutorials cover, or something else?

    Read the article

  • Images disallowed in my Joomla site's robots.txt aren't displayed when shared on Facebook

    - by opk
I have noticed that since I disallowed images in the robots.txt of my Joomla site, the image is not displayed when an article is shared on Facebook. Why is that? Are the two related? My robots.txt file:

        User-agent: *
        Disallow: /administrator/
        Disallow: /cache/
        Disallow: /cli/
        Disallow: /components/
        Disallow: /images/
        Disallow: /includes/
        Disallow: /installation/
        Disallow: /language/
        Disallow: /libraries/
        Disallow: /logs/
        Disallow: /media/
        Disallow: /modules/
        Disallow: /plugins/
        Disallow: /templates/
        Disallow: /tmp/
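    The two are very likely related: Facebook's preview crawler (facebookexternalhit) obeys robots.txt, and Disallow: /images/ blocks it from fetching the preview image. A hedged sketch of a fix is to give that user-agent its own group; a named group replaces the * group entirely for that crawler, and paths it does not disallow default to allowed:

        # Facebook's preview crawler; the rules under User-agent: * no longer apply to it
        User-agent: facebookexternalhit
        Allow: /images/

    Repeat inside this group any Disallow lines you still want to apply to Facebook.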

    Read the article

  • How to clean unused app files

    - by Ando
I've finished working on a web app, and looking back at the process I can see that a lot of extra files have accumulated: backup CSS, PHP files, JavaScript, images. I'm using an MVC framework (CodeIgniter) and I would like to clean the app of unused files. There are also libraries I downloaded where I referenced only some of the files in my code, yet kept the unused files too. A total mess, really; I'll take a note to be more organized for the next app. There is also a fair security concern in this sort of situation, plus I think it would be better if search engines didn't index all the extra files. Has anyone been in this situation, and what is the safest/fastest way to clean the app? My setup: CodeIgniter (MVC), NetBeans, Mac.
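    One low-tech starting point, sketched below under the assumption of a typical CodeIgniter layout with an assets/ directory for static files (the paths are placeholders): list every asset whose filename is never mentioned anywhere in the application source, then review that list by hand before deleting anything.

        # Print asset files whose basename never appears in the application code.
        # Review the output manually; a file can still be referenced dynamically.
        find assets -type f | while read -r f; do
            grep -rq "$(basename "$f")" application/ || echo "unreferenced: $f"
        done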

    Read the article

  • Dropbox and IIS

    - by DigitalAce7
I'm running into a problem using Dropbox with IIS. I am using the Dropbox Folder Sync plugin to sync a folder outside the Dropbox folder; the files are synced into my inetpub\root\downloads\ directory. The problem is not that it doesn't sync. The problem is that the synced files have no permissions attached to them, so I can't open any of them through IIS. Initially the downloads were to be only PDFs, but since they don't open I tried an .aspx file, and it fails to load as well. Is there any way around this, or another program I can use to sync files while still allowing them to be served by IIS? Thanks for any help!
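    For anyone hitting the same thing: the usual cause is that the synced files arrive without ACLs the IIS worker process can read. A hedged fix, assuming the default IIS_IUSRS group and that the path below matches the actual sync target, is to re-grant inheritable read access from an elevated command prompt:

        :: (OI)=object inherit, (CI)=container inherit, R=read; /T also fixes existing files
        icacls "C:\inetpub\root\downloads" /grant "IIS_IUSRS:(OI)(CI)R" /T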

    Read the article

  • Change player in JavaScript game [migrated]

    - by KLUSTER
The game: on clicking the start button, Math.random decides which player starts. There are 4 pictures: 2 of them are player 1 and player 2, the other 2 show whose turn it is. What I need help with: on a button click, switch to the next player's turn.

        function game() {
            var PlayerTurn;
            PlayerTurn = parseInt(Math.random() * 2);
            if (PlayerTurn == 0) {
                PlayerTurn = 1;
                window.document.player1.src = "Cache/Player3.PNG";
            } else {
                PlayerTurn = 0;
                window.document.player2.src = "Cache/Player4.PNG";
            }
        }

    Any help is appreciated.
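    A minimal sketch of one way to do it: keep the turn in a variable that outlives the function, randomize it once at the start, and flip it on every click. The image paths and the player1/player2 element names come from the question; everything else is an assumption.

        var playerTurn; // persists between clicks

        function startGame() {
            // Randomly pick who goes first: 0 or 1.
            playerTurn = Math.floor(Math.random() * 2);
            showTurn();
        }

        function nextTurn() {
            // Flip 0 -> 1 or 1 -> 0 on each button click.
            playerTurn = 1 - playerTurn;
            showTurn();
        }

        function showTurn() {
            if (playerTurn === 0) {
                window.document.player1.src = "Cache/Player3.PNG";
            } else {
                window.document.player2.src = "Cache/Player4.PNG";
            }
        }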

    Read the article

  • Should I design and then look for a CMS or vice versa? [closed]

    - by Livingston Storm
I am currently designing an e-commerce site, and unfortunately my PHP is garbage at the moment, so open-source CMSes are out of the question. I am debating between Joomla and BigCommerce, and as the title states, I am unsure whether to design first or to try the CMS first to see what limitations I will face. I couldn't find any previous questions about this on this site; forgive me if it's a stupid or commonly asked question. Thanks for any feedback.

    Read the article

  • Site loads extremely slowly over HTTPS, loads perfectly over HTTP - sudden and new issue

    - by guest234239048
My business's website suddenly started loading extremely slowly over HTTPS last night, and the problem continues today. The facts:

        Page load time via HTTPS: 2 minutes or more. Via HTTP: 3 seconds max.
        NO updates were made to any site files in the past 2 weeks.
        This is on shared hosting.
        HTTPS worked perfectly for the past 9 months, then suddenly failed yesterday.
        The hosting company and the SSL issuer say there are no problems on their end.
        Searches for others having similar issues return no results; it appears to be just me.
        The site is primarily run via PHP/MySQL.

    Troubleshooting attempted so far:

        Tried all major browsers and different versions: same result.
        Tried 2 separate ISPs: same result.
        Tried proxies: same result.
        Tried 3 separate computers: same result.

    I'm basically at a total loss here. Does anyone know what could cause such a thing? Please help guide my troubleshooting.
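    One diagnostic worth running from a shell, a sketch assuming curl is available and with the URL as a placeholder: break the load time into DNS, connection, TLS handshake, and transfer phases. If time_appconnect dominates, the delay is in the SSL handshake itself rather than in the application:

        # -w prints timing variables; -o /dev/null discards the body, -s silences progress
        curl -o /dev/null -s -w "dns: %{time_namelookup}s  connect: %{time_connect}s  tls: %{time_appconnect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n" https://www.example.com/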

    Read the article

  • We've had our content copied under a different URL - why and what do we do?

    - by Shaun
We have a problem. We noticed a large amount of traffic showing up in our Google Analytics, and upon further investigation we found that our content has been copied under a different URL. Our site: http://www.targetis.co.uk. The copied site: http://www.target-is.com (it isn't loading in Chrome for us). We don't own that domain. Their content is hosted by them (not served via a proxy). The large part of the traffic is coming from a video hosting site. What do we do?
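    Besides filing a DMCA complaint with the copier's host and with Google, one defensive measure (a sketch, assuming the page templates can be edited) is a self-referencing canonical tag on every page; scrapers that copy markup verbatim then tell search engines that the original lives on your domain:

        <!-- in each page's head, pointing at that page's own URL on the real site -->
        <link rel="canonical" href="http://www.targetis.co.uk/" />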

    Read the article

  • Content from a domain I used to own is appearing in my twitter feed

    - by user19424
I had an old domain with a WordPress setup hosted by GoDaddy. A couple of years ago I changed the business name and moved everything over to a new site, with a new domain and a new host, and I let the old domain expire. It was recently purchased by someone for cheap, article-spun content. Now, any time they post a new article, it is automatically posted to my Twitter account. I contacted them to ask for this to be removed but, given the quality of their spun articles, I doubt they will do it. Is there any recourse through Twitter or their host to get this removed?

    Read the article

  • How to improve a single-page site's search results [closed]

    - by Trisism
Possible Duplicate: How to SEO a Single-Page website

    I created an online CV a couple of weeks ago and it has had quite a few visits. Now I want to improve its chances of appearing in Google search results; however, my web CV is a one-page site and contains only internal links (those with a hash, #), so I can't really submit a sitemap. I could change the internal links to normal links processed on the server side, but there's no point in doing so. I'm very new to SEO, so I would really appreciate it if somebody could show me what to do for a single-page site with internal links to be effectively indexed by crawlers.

    Read the article

  • SEO: influence search results per device class (mobile/desktop)

    - by user32224
We're currently building a new responsive website, and while working on the site map we figured that we don't want to show certain sections on mobile devices. This can easily be done by hiding the navigation parts using CSS/media queries. However, the trouble is that the hidden pages would still show up in search engines' results. If a user happens to click on one of these links, she might see a badly formatted page, as we'd use desktop/tablet-only code to show images and video. Is there any way to influence the search engines to exclude certain pages when the search is done on a mobile device? Do search engines crawl pages once, or twice with a device-specific view? Could we set a noindex meta tag for a specific device class?

    Read the article

  • Exclude PHP from WYSIWYG output in a CMS

    - by bytewalls
I'm writing a basic CMS for one of my sites and have run into an issue: some pages need to dynamically serve PHP and JS, whereas others are plain HTML. I want a setting that allows this for the pages that need it and loads the ACE editor instead of a different WYSIWYG editor. The challenge is that on the pages where I have not explicitly said there will be code, I want to reject any input that contains code. I can set it up to insert a for all pages without JS, but how can I keep PHP code from running?
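    One reassuring detail, sketched in PHP below with a hypothetical variable name: PHP embedded in stored content only executes if it is passed through eval(), include, or written into a file the interpreter parses. Content echoed from the database is plain text to the server, and escaping it additionally neutralizes tags on the client side:

        <?php
        // $pageBody is the stored page content (hypothetical name).
        // Echoing a string never executes PHP embedded in it:
        echo $pageBody;

        // Stricter, for no-code pages: any <script> or PHP open tag in the
        // content is rendered as visible characters instead of markup.
        echo htmlspecialchars($pageBody, ENT_QUOTES, 'UTF-8');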

    Read the article

  • Facebook Insights numbers don't add up

    - by ChristopherJ
I'm having trouble understanding the Facebook stats for a fan page under Reach, as they don't seem to add up. On the Reach tab, if I expand the country demographics at the top, the countries listed add up to around 16k people. But scrolling down under Reach, the graph reaches as high as 260k in a particular week. From what Facebook says, both the graph and the demographics are counting unique users, so where are the other 244k unique users who are in the graph but seemingly not from any country on Earth?

    Read the article

  • CentOS drive mapping? [on hold]

    - by DroidOS
This is the first time I am posting on this particular Stack Exchange site, and I hope I am using the right one for the present question. Briefly, this is what I need to do.

    I am running a web service where users can, amongst other things, upload and store files on the server. I want to hive off user file storage to a different location, so my server (CentOS, 64-bit) can concentrate on what it does best: server-side scripting and database management. As things stand, all user files go into subdirectories of a folder called stash that lies above DOC_ROOT. What I would like to do is:

        Transparently detect all attempts to read/write stash/sub_folder and get/set the file data on a remote server; ideally one that replicates files like a CDN, so they can be delivered from the closest/fastest location based on the user's location.
        Even nicer would be if, for all read accesses, I could provide a URL that allows the user's browser to fetch the relevant file directly, without having to funnel it via my server.

    I am a relative newbie when it comes to this sort of thing, so I hope I have phrased this question adequately well. From the little searching I have done, I gather that WebDAV can be used to map drives to a different location on the web, so perhaps that is a starting point. But if that will work, I need to:

        Establish how to get WebDAV up and running on my CentOS 64-bit server.
        Ideally, identify a service that allows this kind of file storage and provides an API I can use in my own scripting.

    I'd much appreciate any help with this.
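    On the mounting side of WebDAV (making a remote share behave like a local directory), a minimal sketch for CentOS, assuming the EPEL repository is enabled and using placeholder paths and URLs, relies on davfs2:

        # Install the WebDAV filesystem client (packaged in EPEL)
        yum install davfs2

        # Mount the remote share where the app expects its stash folder;
        # credentials are prompted for, or read from /etc/davfs2/secrets
        mount -t davfs https://storage.example.com/stash /var/www/stash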

    Read the article

  • Exclude pages from search results based on device class (mobile/desktop)

    - by user32224
We're currently building a new responsive website. While working on the site map, we figured that we don't want to show certain sections on mobile devices. This can be easily done by hiding the navigation parts using CSS/media queries. However, the trouble is that the hidden pages would still show up in search engine results. If a user happens to click on one of these links, she might see a badly formatted page, as we'd use desktop/tablet-only code to show images and video. Is there any way to influence the search engines to exclude certain pages when the search is done on a mobile device? Do search engines crawl pages once, or twice with a device-specific view? Could we set a noindex meta tag for a specific device class?

    Read the article

  • How can I prevent my site from being branded a "content farm?"

    - by Fredashay
I'm building a small social Q&A site. Another Q&A site that I use was recently branded by Google as a content farm and removed from Google's results. I know what Wikipedia says the definition of a content farm is (low-quality paid articles and spammy text across the page to catch search engines). The other site doesn't do those things, so there must be more to it than that. I want to make sure I don't do anything that causes Google to think my Q&A site is a content farm. What should I do, or avoid doing, in designing my site layout?

    Read the article
