
  • How to find a good photo gallery for my website?

    - by Roflcoptr
    For my website I'm searching for a really simple gallery module that looks like the one used by Dropbox, but with two additional features: visitors should be able to leave comments, and each photo should display its hit count. I have googled a lot for such galleries but couldn't find any that really matched my requirements. Could someone recommend a simple, good-looking gallery that fulfills them?

  • Website creation preparation [closed]

    - by Loki
    I am in the pre-coding phase of creating a website. I know that it will be account based (users have to register/login to use the features). I also know that the server will have to perform certain timer-based operations: users will set up events that trigger at a point of their choosing and do something. I am looking for a good choice of server-side technology and wondering what my options are and which is the best fit. I would prefer open technology and nothing built on Java or .NET. My first thought is PHP + PostgreSQL for the server side and HTML + CSS + JS for clients, but I am still weighing my options.
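
    A minimal sketch of one way to run those timer-based events with the PHP + PostgreSQL stack mentioned above: a script invoked every minute from cron that fires whatever is due. The table schema, action names and connection details are assumptions made up for illustration.

        <?php
        // run from cron, e.g.:  * * * * * php /var/www/cron/run_events.php
        // Assumed table: scheduled_events(id, user_id, fire_at timestamptz,
        //                                 action text, done boolean)
        $db = new PDO('pgsql:host=localhost;dbname=mysite', 'web', 'secret');
        $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        // Fetch events that are due and not yet handled
        $due = $db->query(
            "SELECT id, user_id, action FROM scheduled_events
             WHERE fire_at <= now() AND NOT done"
        );

        foreach ($due as $event) {
            switch ($event['action']) {          // placeholder dispatch
                case 'send_reminder':
                    // mail(...) or queue a notification here
                    break;
            }
            $db->prepare("UPDATE scheduled_events SET done = TRUE WHERE id = ?")
               ->execute([$event['id']]);
        }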

  • apache2 and htaccess help

    - by user1052448
    For some reason domain.com/YYYY-MM-DD redirects to domain.com/var/htdocs/public_html. These are my rules:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^[^\./]+\.[^\./]+$
        RewriteRule ^/(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteRule ^archive/index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^. /archive/index.php [L]

    I am trying to get anything after www.domain.com other than index.php and archive/index.php to display MySQL content via archive/index.php (which grabs the PHP request URI). The browser URL should remain www.domain.com/YYYY/MM/DD/blog-title, or www.domain.com/YYYY/MM/ to display all posts from YYYY-MM.
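
    One possible explanation, assuming these rules live in a .htaccess file: in per-directory context mod_rewrite strips the leading slash before matching, so ^/(.*)$ never matches, and a substitution that is not an absolute URL can end up treated as a filesystem path, which is one way /var/htdocs/public_html can leak into a redirect. A sketch of the same intent with per-directory patterns:

        RewriteEngine On
        RewriteBase /

        # Canonicalize bare domains to www (no leading slash in the pattern)
        RewriteCond %{HTTP_HOST} ^[^\./]+\.[^\./]+$
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        # Leave the two front controllers alone
        RewriteRule ^index\.php$ - [L]
        RewriteRule ^archive/index\.php$ - [L]

        # Everything that isn't a real file or directory goes to the archive
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ /archive/index.php [L]

    The last rule is an internal rewrite rather than an external redirect, which is what keeps www.domain.com/YYYY/MM/DD/blog-title in the address bar.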

  • SEO and domain name - which form?

    - by user984621
    I just want to register a domain name for my Spanish class and wonder which name is better for this purpose: learningspanish.com or ilearnspanish.com? The domain name must be English, but I don't know what works better for Google and SEO - "learn" or "learning". I would be grateful for your feedback, and sorry if the explanation above is hard to follow (I can try to explain it better). Thank you.

  • Can I Use A Canonical Tag Instead of a Redirect for Updated Content?

    - by Ewan Heming
    I have some old articles on my blog that get quite a bit of traffic but are very outdated. I want to remove them from Google's index using the noindex tag, but I'm not sure of the best way to send the same traffic to my new article on the subject without using a redirect (I want to keep the old articles in my blog archives). I was intending to just put a link at the top of each old article pointing to the new one, but was wondering whether it would be appropriate to use a canonical tag instead; the new article is on the same subject but doesn't contain the same content, so it isn't really a copy.
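
    For reference, the two tags in question, as they would sit in the <head> of an old article (the URL is a placeholder):

        <!-- keep the page out of the index but leave it in the archives -->
        <meta name="robots" content="noindex, follow">

        <!-- or: point search engines at the newer article instead -->
        <link rel="canonical" href="http://example.com/new-article">

    Since rel=canonical is documented for duplicate or very similar pages, it may not be honored when the content differs, which is what the question hinges on.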

  • Verifying that a user comes from a 'partner' site?

    - by matt_tm
    We're building a Drupal module that is going to be given to trusted 'corporate partners'. When a user clicks on a link, he should be redirected to our site as if he were a logged-in user. How should I verify that the user is indeed coming from that site? 'HTTP_REFERER' does not look sufficient, because it appears it can be faked. We are providing these partner sites with API keys. If I receive the API key as a POST value, sent over HTTPS, would that be a sufficient indicator that the user is a genuine partner-site user?
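
    One caveat worth noting: a static API key sent as a POST value passes through the end user's browser, where it can be read and replayed. A common alternative is a signed, expiring link - the partner signs the parameters with a shared secret and the receiving site recomputes the signature. A minimal PHP sketch; the parameter names, endpoint and secret handling are assumptions:

        <?php
        // Partner side: build a link that expires in 5 minutes
        $secret = 'shared-secret-for-this-partner';   // issued with the API key
        $params = ['partner' => 'acme', 'user' => 'jdoe', 'expires' => time() + 300];
        $params['sig'] = hash_hmac('sha256', http_build_query($params), $secret);
        $url = 'https://example.com/partner-login?' . http_build_query($params);

        // Our side: recompute and compare before trusting the request
        $check = $_GET;
        $sig = $check['sig'] ?? '';
        unset($check['sig']);
        $expected = hash_hmac('sha256', http_build_query($check), $secret);
        $valid = hash_equals($expected, $sig) && (int)$_GET['expires'] > time();

    hash_equals() does a constant-time comparison, so the check does not leak timing information about the expected signature.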

  • What guidelines should be followed when implementing third-party tracking pixels?

    - by Strozykowski
    Background: I work on a website that gets a fair amount of traffic, and as such, we have implemented different tracking pixels and techniques across the site for various specific reasons. Because many agencies send traffic our way through email campaigns, print ads and SEM, we have agreements with a variety of outside agencies for tracking these page hits. Consequently, we have tracking pixels that span the entire site, as well as some that appear on specific pages only. We have worked to reduce the total number of pixels on any one page, but occasionally the site is rendered close to unusable when one of these third-party tracking pixels fails to load. This is a huge difficulty on parts of the site where JavaScript is needed for functionality built into the page but is unable to initialize until a 404 is returned on the external tracking pixel (sometimes up to 30 seconds later). I have spent some time researching how other firms deal with this sort of instability in third-party components, but have come up a bit short. The current plan is to implement our own stop-gap method for these external outages, but rather than reinvent the wheel, we wanted to find out how this is dealt with on other sites. Question: Is there a good set of guidelines to follow when implementing third-party tracking pixels? I would love to see some white papers or other written documents about how other people have dealt with this issue.
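
    One common stop-gap, sketched below under the assumption that the pixel is a plain image request (the tracker URL is a placeholder): inject the pixel only after the page's own scripts have initialized, so a hanging third-party host can no longer block them.

        <script>
        // Fire the tracking pixel after the window 'load' event, so page
        // functionality never waits on the third-party host.
        window.addEventListener('load', function () {
            var px = new Image(1, 1);
            px.src = 'https://tracker.example.com/pixel.gif?page=' +
                     encodeURIComponent(location.pathname);
        });
        </script>

    Because the Image object loads outside the document flow, a slow response or a 404 costs only the tracking hit, not the page.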

  • Proper caching method with .htaccess

    - by mark075
    There are a lot of snippets that enable caching on a website, and I don't know which one I should use. The most popular is something like this:

        <IfModule mod_expires.c>
          ExpiresActive On
          ExpiresByType image/jpg "access 1 year"
          ExpiresByType image/png "access 1 year"
          ExpiresByType text/css "access 1 month"
          ExpiresDefault "access 2 days"
        </IfModule>

    I also found something similar, but with the keyword 'plus', like this:

        ExpiresByType image/png "access plus 2592000 seconds"

    What does it mean? I didn't find anything about it in the documentation. Another snippet I found:

        <IfModule mod_headers.c>
          <FilesMatch "\.(ico|jpe?g|png|gif|swf)$">
            Header set Cache-Control "max-age=2592000, public"
          </FilesMatch>
          <FilesMatch "\.(css)$">
            Header set Cache-Control "max-age=604800, public"
          </FilesMatch>
          <FilesMatch "\.(js)$">
            Header set Cache-Control "max-age=216000, private"
          </FilesMatch>
          <FilesMatch "\.(x?html?|php)$">
            Header set Cache-Control "max-age=600, private, must-revalidate"
          </FilesMatch>
        </IfModule>

    What is the best practice?
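
    Worth noting on the 'plus' question: in mod_expires' alternate syntax the keyword plus is optional, so the following two lines mean the same thing - expire 2592000 seconds (30 days) after the client last fetched the file:

        ExpiresByType image/png "access 2592000 seconds"
        ExpiresByType image/png "access plus 2592000 seconds"

    The base may be 'access' or 'modification'; the interval is simply added to it.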

  • Changing hosts but keeping old emails [closed]

    - by LDaniel
    Possible Duplicate: Change host / keep emails

    OK, here's the situation: I'm trying to transfer my domain and email address to a new hosting service, and I would like to start using Google's domain apps for my email, etc. My email address currently sits on a webmail-type platform; when I move my domain and start using Google domain apps, I would like to keep the old emails and have them imported into the mailbox on Google. Note: the email address will be the same on both hosts, for example [email protected], so it's a matter of keeping the same address between two different systems. I've been Googling how to do this for a while, and none of the migration options that come up appear in the Google inbox settings or domain apps config settings. Any help is greatly appreciated.

  • Massive 404 attack with non-existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never existed. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin) and probes for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking; however, this doesn't seem to stop. The referrer is, according to Google Webmaster Tools, totally.me. I have looked for a way to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal and casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless waste of resources?

    EDIT: I've never used the question itself to thank for the answers, and I hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following: a honeypot; a script that listens for suspect URLs on the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header; and a script that rewards legitimate users, on the same custom 404 page, in case they end up clicking one of those URLs. In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
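
    A minimal sketch of the logging half of such a custom 404 page; the suspect-pattern list and the notification address are placeholders:

        <?php
        // custom 404 handler, wired up with: ErrorDocument 404 /404.php
        http_response_code(404);

        $uri = $_SERVER['REQUEST_URI'] ?? '';
        $suspect = ['wp-admin', 'wp-login', 'viewtopic.php', 'cpanel'];

        foreach ($suspect as $needle) {
            if (stripos($uri, $needle) !== false) {
                $report = sprintf("URI: %s\nIP: %s\nUA: %s\n",
                    $uri,
                    $_SERVER['REMOTE_ADDR'] ?? '-',
                    $_SERVER['HTTP_USER_AGENT'] ?? '-');
                mail('admin@example.com', 'Suspect 404 hit', $report);
                break;
            }
        }
        ?>
        <p>Sorry, that page does not exist.</p>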

  • Why do Google search results include pages disallowed in robots.txt?

    - by Ilmari Karonen
    I have some pages on my site that I want to keep search engines away from, so I disallowed them in my robots.txt file like this:

        User-Agent: *
        Disallow: /email

    Yet I recently noticed that Google still sometimes returns links to those pages in their search results. Why does this happen, and how can I stop it?

    Background: Several years ago, I made a simple web site for a club a relative of mine was involved in. They wanted to have e-mail links on their pages, so, to try and keep those e-mail addresses from ending up on too many spam lists, instead of using direct mailto: links I made those links point to a simple redirector / address-harvester trap script running on my own site. This script would return either a 301 redirect to the actual mailto: URL or, if it detected a suspicious access pattern, a page containing lots of random fake e-mail addresses and links to more such pages. To keep legitimate search bots away from the trap, I set up the robots.txt rule shown above, disallowing the entire space of both legit redirector links and trap pages.

    Just recently, however, one of the people in the club searched Google for their own name and was quite surprised when one of the results on the first page was a link to the redirector script, with a title consisting of their e-mail address followed by my name. Of course, they immediately e-mailed me and wanted to know how to get their address out of Google's index. I was quite surprised too, since I had no idea that Google would index such URLs at all, seemingly in violation of my robots.txt rule.

    I did manage to submit a removal request to Google, and it seems to have worked, but I'd like to know why and how Google is circumventing my robots.txt like that, and how to make sure that none of the disallowed pages will show up in their search results.

    PS. I actually found a possible explanation and solution, which I'll post below, while preparing this question, but I thought I'd ask it anyway in case someone else has the same problem. Please do feel free to post your own answers. I'd also be interested in knowing whether other search engines do this too, and whether the same solutions work for them as well.
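
    The gist of the usual explanation: robots.txt blocks crawling, not indexing, so Google can still list a disallowed URL it has discovered through links, with a title pieced together from anchor text. A noindex signal has to be served by the page itself, and the crawler must be allowed to fetch the page to see it. A sketch for Apache with mod_headers, assuming the disallowed URLs live under an /email/ directory:

        # .htaccess inside /email/ - let bots fetch the pages,
        # but forbid indexing them
        <IfModule mod_headers.c>
            Header set X-Robots-Tag "noindex"
        </IfModule>

    This only works together with removing (or narrowing) the Disallow rule, since a crawler that is blocked from fetching the URL never sees the header.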

  • Changed Plesk root name - what DNS settings get modified?

    - by NRGdallas
    We recently changed our Plesk server's main URL from siteold.com to sitenew.com. Many websites had their NS records set to ns1.siteold.com - does Plesk automatically update that to ns1.sitenew.com, or should I change the GoDaddy settings? Attempting to change them gives "Nameserver Not Registered" - is this simply the required delay? Lastly, when adding a new domain to Plesk, would one simply point that site's nameserver in GoDaddy to ns1.sitenew.com, or to ns1.newdomain.com? (Does Plesk have a centralized name server, or does each site get its own?)

  • My Heroku app is inaccessible from its custom domain name

    - by picardo
    I have a Heroku app located at myapp.herokuapp.com. I have mapped a domain name to this app using A records, following the instructions on Heroku's website to the letter, and it worked for a few days. Now when I try to access the site from the custom domain it times out! In Chrome I get the "Oops! Google Chrome could not find that page!" message. I tried pinging the name as well, but got this error:

        ping: cannot resolve yourhostname.org: Unknown host

    The app itself is working, and I don't see any error messages from Heroku or from New Relic. What's going on here? I also tried running host, and this was the error message:

        ;; connection timed out; no servers could be reached

  • How to tackle archived whois personal data after opting out?

    - by defaye
    As far as I understand it, it is possible (in the UK at least) for non-trading individuals to opt out of having their address details displayed in the whois information for a domain. What I want to know is: after opting out, how do individuals combat archived data? Is there any enforcement of this? How many whois websites are there that archive data, and what rights do we have to force them to remove that data without paying absurd fees? And if one capitulates to these scoundrels, what is the point of paying for the removal of archived data if it can presumably resurface in another whois repository? In other words, what strategy is one supposed to take, besides being wiser after the fact?

  • Schema.org for Product Reviews

    - by Lynda
    I have product reviews on a site, and I am adding schema.org markup to them. Here is the code I am using:

        <div class="blockquote-wrap">
          <blockquote itemprop="review" itemscope itemtype="http://schema.org/Review">
            <span itemprop="reviewBody">Text of the review itself.</span>
            <cite><span itemprop="author">Author Name</span>, Location of Author</cite>
          </blockquote>
        </div>

    This is all the reviews are. When I test the page using Google's Structured Data Testing Tool I receive this error:

        Error: Incomplete microdata with schema.org.

    My question is: what required data is missing? I don't see which properties are required on the schema.org page for reviews.
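
    For comparison, a sketch of the same markup extended with the two properties review rich snippets usually complain about - the thing being reviewed (itemReviewed) and a rating (reviewRating). The names and values are placeholders:

        <blockquote itemprop="review" itemscope itemtype="http://schema.org/Review">
          <span itemprop="itemReviewed" itemscope itemtype="http://schema.org/Product">
            <meta itemprop="name" content="Product Name">
          </span>
          <span itemprop="reviewBody">Text of the review itself.</span>
          <span itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
            <meta itemprop="ratingValue" content="5">
          </span>
          <cite><span itemprop="author">Author Name</span>, Location of Author</cite>
        </blockquote>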

  • Problem with an ASP application on IIS 7.5 with Oracle 11g

    - by Hichem
    We have an application developed with .NET on Server 2008 R2 and IIS 7.5 which is linked to Oracle 11g on another server. Our problem is that we lose the service every two or three or more: the application stops, and we have to restart IIS or the server to continue working (the number of users is fewer than 50). We get no error message. Do you know of anything to do to avoid this situation, something like:

    - A parameter to put on the ASP pages
    - A parameter to switch on or off on the server or on the database

  • Do I need to physically host my website in separate countries for SEO?

    - by noelmcg
    I run an ecommerce store that is hosted in Ireland and ranks OK on google.ie. The market for this company is the Republic of Ireland and the UK. Would it be beneficial to have a UK-hosted version of my site (.co.uk) to rank higher on google.co.uk (and other localised search engines, of course)? If so, how would I prevent the sites from being punished for duplicate content? Thanks in advance for any assistance on the above.

  • Phishing attack - where do I start the cleanup?

    - by Suz
    I'm a newbie webmaster. I've got a domain and a site... and no clue about the web (I'm OK with files and programs...). I got a message from Google that my site is a possible phishing site, with a link to the suspect page: http://www.mydomain.com/~phishers/Paypal/us/Confirm.php - needless to say, I didn't put that up. Can someone point me to a good tutorial on what to do now? I'd like to figure out what happened so I can defend against it next time. How do I identify what kind of attack this is? Also, what is the tilde doing in the URL path? I couldn't find any path like this in my hosting account, so I'm not entirely sure how to delete it.

  • Finding what is causing my site to issue 301 redirects

    - by php-b-grader
    I have a URL which is 301 redirecting, but I cannot find where or how it is happening, and I wanted some checks to perform if possible:

    - I've checked .htaccess - it's not there
    - I've checked the redirects section in cPanel - it's not there
    - In WordPress, I have the Redirection plugin active, and it's not there either

    Is there anywhere else that could be issuing redirects? I'm at a loss to find out where and how the page is redirecting!
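
    One quick check worth sketching, in PHP (the URL is a placeholder, and the context argument to get_headers() needs PHP 7.1+): fetch the response headers without following the redirect, since the headers on the 301 itself often hint at which layer issued it.

        <?php
        // Don't follow the redirect; we want to inspect the 301 response itself
        $ctx = stream_context_create(['http' => ['follow_location' => 0]]);
        $headers = get_headers('http://example.com/redirecting-page', 1, $ctx);
        print_r($headers);   // look at Location, Server, X-Powered-By, etc.

    If the 301 response carries PHP markers such as X-Powered-By, the redirect is probably issued by application code; a bare server header points back at the web-server or control-panel layer.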

  • What is the benefit of the "download will begin shortly" page?

    - by Fammy
    I've noticed that many websites that host files for download have an interstitial page between the download link/button and the actual start of the download. Wording on the page typically includes "Your download will begin shortly. If it does not, try this direct link". What is the purpose of this page? It seems to detract from the general experience of downloading a file. Is it beneficial for bookmarking? For less experienced users? For analytics?
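
    For reference, the pattern is usually just a page that starts the download with a delayed redirect while keeping real page content (the direct link, and often ads or analytics tags) in view. A minimal sketch with placeholder URLs:

        <!-- download.html: shown instead of linking straight to the file -->
        <meta http-equiv="refresh" content="3;url=/files/release.zip">
        <p>Your download will begin shortly.
           If it does not, <a href="/files/release.zip">try this direct link</a>.</p>

    Because the visitor lands on a real HTML page for a few seconds, the site records a countable page view, which a bare file link would not produce in most analytics setups.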

  • My site disappeared from Google search, how long does it take to get back?

    - by Sweb Dizajn
    Due to damage by malicious code, Google wrote:

        Google Analytics web property: link has been removed from
        http://swebdizajn.com - November 29, 2011

        Your Webmaster Tools site http://swebdizajn.com is no longer linked to a
        Google Analytics web property. Possible reasons are:

        - You are no longer the owner of the site in Google Analytics, and nobody
          else owns both the site and the property
        - Another site owner removed the link

    After that I restored from a backup and then acknowledged the Google message to tell them that all is well. How long will I have to wait for my site to return to the position it held before?

  • How do I prevent ISPs from killing downloads of files in mid-transfer?

    - by Gorchestopher H
    I run a small website with a few users and low traffic, mostly to share personal MP3 files with a small community. Depending on their ISP, my users can't always download or stream larger files - by larger I mean larger than 1 MB. Essentially either the host stops sending or the client stops receiving: one of the links along the connection chain simply ends its connection before the transfer completes. Traceroute shows no connection issues, and there are no problems with short transfers that take no more than a few seconds; it's the 10-second transfers that just end up dying. Even a straight download from a direct link can yield this error if you have the wrong ISP. Strangely enough, this is most common with users whose ISPs are essentially independent providers that buy service via a fiber link. Unfortunately these providers aren't very knowledgeable, are unable to do any testing, and insist it's a problem with the host. I have gotten my host to move my site to different servers of theirs, to the same effect. Nearly identical sites (affiliate sites, actually) experience no such issue. What can I do to troubleshoot this further? How can I prove that someone is dropping the ball, and identify who that party is? Can I do a 5 MB traceroute?

    EDIT: Maybe I can clear up some misconceptions with my question. The files are not very large - they are simply over 2 MB. The users do not have "slow" connections; they are at least 5 Mbps. This "time out" happens very quickly, in the realm of 5 seconds, so I don't know if it's a timeout or not; the user often gets 1 or 2 MB in that chunk of time. I have tried streaming with a Flash player, saving the target, forcing the download, and allowing the browser to stream the file. I have tried different browsers (Firefox, IE, Chrome). Users are able to download identical files when they are on different hosts.

  • Image slider not working when website is hosted on remote server [on hold]

    - by Tushar Khatiwada
    I'm having a different problem. I made an HTML website, and the index page contains a Nivo Slider. The site works perfectly when viewed locally. I uploaded the site to a remote server, but the slider is not displayed, and the photo gallery does not work as expected (it pops up correctly on the local PC). The URL of the site is http://d138444.u24.elitehostingwizard.com/ and a screenshot from the local PC is at http://postimg.org/image/lxiqzx7br/ - thanks.

  • Suddenly my server rejects all POST requests

    - by Sharen Eayrs
    Just go to meet-romance.com/test.htm. The script there is simple - a form with a button:

        <form action="test.htm" method="post">
          <input name="Button1" type="submit" value="button" />
        </form>

    It doesn't work: press the button in Firefox and I get a "connection reset" error, and I wonder why. It has been happening since yesterday. I have migrated all domains that require POST requests somewhere else. I suppose a reset of the server would fix it, only for it to happen again some other time, so I wonder if anyone has a clue as to why. All domains that require POST have been moved to another server.

  • What is the optimum length for an HTML title tag in Unicode format?

    - by user1501256
    I have a website that generates its title tag dynamically, and the title is in Unicode. The title is limited to 65 characters, but sometimes Google doesn't show it completely in the SERP. I'd like to know the optimum length of a title tag, in terms of SEO, for Unicode titles, and whether there is any difference between a Unicode and a non-Unicode title tag. And what about other search engines - Bing, Yahoo and so on?
