Search Results

  • Subdomain still times out after being set up a month ago

    - by user8137
    I'm a newbie at this and it has probably been asked already, but the answers I found online were close but too vague, so I've probably really messed this up. I would really appreciate specific step-by-step instructions. This is what I'd like to do: have the subdomain www.high-res.domain.com accessible to external customers, with specific permissions to access the site (like FTP). We use Network Solutions to house domain.com. We recently added a new IP address to point to www.high-res.domain.com and I gave that IP address to the company that hosts our website. When I ping www.high-res.domain.com it resolves to the correct IP address but still times out. It's been a few weeks now, and when you ping it, it still times out:

        C:\>ping XXX.XXX.X.XXX
        Pinging XXX.XXX.X.XXX with 32 bytes of data:
        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.
        Ping statistics for XXX.XXX.X.XXX:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss).

    Tracert times out as well. I even went to DNS tools and a few other sites to check this and they show the same thing. I recently went into the DNS management console (dnsmgmt) on our server (Win2k3 SP1) and created an A record under DomainDnsZones, which shows up as a CNAME when you look at it. Under the domain it has two entries, one for the subdomain and the other for the website host, each with a separate IP address. Is this correct? The website people are too busy on another project to research it further and my friends haven't gotten back to me. Please help. Thanks, KK
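    For reference, a public record for the subdomain would normally look something like the sketch below in BIND-style zone-file notation; the IP is a placeholder from the documentation range, not the real one. The key point is that, for external customers, this record has to exist in the public domain.com zone (managed at Network Solutions in this case); an A record created only in an internal Windows DNS zone will not be visible from outside.

        ; inside the public zone for domain.com (at the DNS provider)
        high-res        IN  A      203.0.113.25      ; placeholder: the new IP given by the web host
        www.high-res    IN  CNAME  high-res          ; lets www.high-res.domain.com resolve as well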

    Read the article

  • Changing website URL - am I making an SEO mistake?

    - by Denis
    I have a website on a .com domain that is a year old. The business is a shop based in Ireland and I have purchased a .ie domain. I plan to move the website over to the new domain. SEO-wise, good or bad idea? Old URL - SmythsOfTerenure.com | New URL - SmythsComputerRepair.ie (I am using fake names and a fictional business in the example URLs.) The new domain has my main keyword in it; the old domain has my family name and business location (city district). It currently ranks high for lots of relevant keywords in Google, with low traffic and low competition. Current website traffic is about 80 sessions per week, 80% of which is organic from Google. I am changing domain in an attempt to help SEO long term, by having a ccTLD (.ie rather than .com) and by having my main keyword in the domain. I plan to do 301 redirects from old to new and update Google Webmaster Tools and Google Analytics, but am I making a mistake changing it at all, given that rankings may fall in the short term? Homepage PR is 0 and there are very few inbound links. Should I just leave it on the old domain? Or after a few months should I be back up ranking as well as I am now?
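    As a reference for the redirect step, a per-URL 301 from the old domain to the new one might look roughly like the sketch below, assuming the old .com site runs on Apache with mod_rewrite enabled (the domain names follow the fictional examples above):

        # .htaccess on the old .com site: send every URL to the same path on the new .ie domain
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?smythsofterenure\.com$ [NC]
        RewriteRule ^(.*)$ http://smythscomputerrepair.ie/$1 [R=301,L]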

    Read the article

  • How can I set up friendly URLs with Nginx?

    - by MKK
    I'm trying to use DokuWiki with its friendly URLs on Nginx. The problem I'm facing is that it doesn't show the correct path for any link (even stylesheets and images) on every page; it looks like the paths are missing the wiki/ part. If I click on the image and look at its destination, it shows this URL: http://foo-sample.com/lib/tpl/dokuwiki/images/logo.png but it should be this: http://foo-sample.com/wiki/lib/tpl/dokuwiki/images/logo.png. The login URL is not working either: if I click on the login link, it takes me to http://foo-sample.com/wiki/start?do=login&sectok=ff7d4a68936033ed398a8b82ac9 and it says 404 Not Found. I took a look at https://www.dokuwiki.org/rewrite#nginx and tried as much as possible, but it still doesn't work. Here are my conf files. How can I fix this problem? DokuWiki is installed in /usr/share/wiki.

    /etc/nginx/conf.d/rails.conf:

        upstream sample {
            ip_hash;
            server unix:/var/run/unicorn/unicorn_foo-sample.sock fail_timeout=0;
        }
        server {
            listen 80;
            server_name foo-sample.com;
            root /var/www/html/foo-sample/public;
            location /wiki {
                alias /usr/share/wiki;
                index doku.php;
            }
            location ~ ^/wiki.+\.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index doku.php;
                fastcgi_split_path_info ^/wiki(.+\.php)(.*)$;
                fastcgi_param SCRIPT_FILENAME /usr/share/wiki$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    /usr/share/wiki/.htaccess:

        ## Enable this to restrict editing to logged in users only
        ## You should disable Indexes and MultiViews either here or in the
        ## global config. Symlinks maybe needed for URL rewriting.
        #Options -Indexes -MultiViews +FollowSymLinks

        ## make sure nobody gets the htaccess files
        <Files ~ "^[\._]ht">
            Order allow,deny
            Deny from all
            Satisfy All
        </Files>

        # Uncomment these rules if you want to have nice URLs using
        # $conf['userewrite'] = 1 - not needed for rewrite mode 2
        # Not all installations will require the following line. If you do,
        # change "/dokuwiki" to the path to your dokuwiki directory relative
        # to your document root.

        # If you enable DokuWikis XML-RPC interface, you should consider to
        # restrict access to it over HTTPS only! Uncomment the following two
        # rules if your server setup allows HTTPS.
        RewriteCond %{HTTPS} !=on
        RewriteRule ^lib/exe/xmlrpc.php$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]

        <IfModule mod_geoip.c>
            GeoIPEnable On
            Order deny,allow
            deny from all
            SetEnvIf GEOIP_COUNTRY_CODE JP AllowCountry
            Allow from .googlebot.com
            Allow from .yahoo.net
            Allow from .msn.com
            Allow from env=AllowCountry
        </IfModule>
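    One thing worth checking (a sketch of a possible cause, not a confirmed fix): when DokuWiki is served from a sub-path such as /wiki via an nginx alias, its URL autodetection often fails and it then generates links without the /wiki prefix, exactly as described. Telling it its base directory explicitly in conf/local.php, roughly as below, is the usual workaround; the values shown are assumptions for this particular setup.

        <?php
        // conf/local.php - hypothetical values for a DokuWiki served under /wiki
        $conf['basedir']    = '/wiki/';   // URL path DokuWiki is reachable at
        $conf['userewrite'] = 0;          // raise to 1/2 only once the rewrites actually work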

    Read the article

  • Sharing unique links on social media vs SEO

    - by MJWadmin
    We're currently implementing a voucher system on our site which will allow our users to obtain a 25+% discount on certain products, provided they donate 10% of the purchase price to charity. We will offer the ability to share the discounts via social media, in return for larger discounts for the sharer for each person who clicks through the link and buys an item. I understand that social links have SEO benefits, but this appears to be based on lots of people sharing the same link. If our voucher users share a unique link, i.e. http://ourdomain.com/sipsfesdf rather than a fixed link http://ourdomain.com/product-name, will we still receive the same benefits? Should we instead share something like http://ourdomain.com/product-name/sipsfesdf? Thanks in advance.
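    One option worth considering, assuming the unique voucher URLs end up serving the same product page: have each of them declare the fixed product URL as canonical, so any link value is consolidated onto one URL rather than split across per-user variants. A minimal sketch, using the example URLs above:

        <!-- in the <head> of the page served at http://ourdomain.com/sipsfesdf -->
        <link rel="canonical" href="http://ourdomain.com/product-name" />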

    Read the article

  • Searching for a page with a very unique title doesn't find the intended page... Why?

    - by Sam
    Dear folks, a question about appearing in search results in Google. A page of mine has this extremely unique page title: Ein gutes Logo passt wie ein Handschuh auf Ihre Marke in die Hände. Now, when I search for the phrase Ein gutes Logo passt wie ein Handschuh auf Ihre Marke in die Hände, all kinds of other irrelevant pages show up containing only one or at best two words from my unique title, although I searched for the entire phrase. And when I search for the phrase in quotes, "Ein gutes Logo passt wie ein Handschuh auf Ihre Marke in die Hände", it finds one result, which is my page. What is going on? Why doesn't it show the unique result without the quotes? Thanks: your ideas and suggestions are welcome and much appreciated.

    Read the article

  • Using both .co.uk and .com domain

    - by Mantorok
    We presently have a .co.uk domain that has been live for around 9-10 months. Recently the .com version of this domain became available and we purchased it, based on reading that suggests you should try to go for .com where possible. My issue now is: what should I do with the .com? Should I use it in favour of the .co.uk, or use both? I'm concerned that rankings might be affected if I completely replace the .co.uk with the .com. Any suggestions?
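    For reference, one common pattern is to pick a single domain as canonical and 301-redirect the other to it, so link value and rankings consolidate rather than split between the two. A rough nginx sketch, assuming the established .co.uk stays primary; the server names are placeholders:

        server {
            listen 80;
            server_name example.com www.example.com;          # the newly purchased .com
            return 301 http://www.example.co.uk$request_uri;   # send everything to the established .co.uk
        }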

    Read the article

  • How to write a blog for SEO purposes

    - by Mathieu Imbert
    I have a photo-sharing website which provides very little textual content. Users can add tags and a description to photos, but this creates a lot of duplicate content, because most of the descriptions are 'wow', 'lol', ... I don't think I should rely on users to build my SEO. I think it would be a great idea to write a blog and use it to describe the best photos, start contests and explain themes; in short, to create original content that search engines will love. Our website's main URL is like www.domain.com, and our new blog is hosted on blog.domain.com. From an SEO perspective, is it a good idea to keep the blog separate from the main site? This has the advantage of leaving the original site unchanged, but will it add any PageRank to www.domain.com? If the blog ranks well it will obviously pass some PageRank to the original site through links. What do you think is the best option from an SEO perspective? Include the blog in www.domain.com, or leave it on blog.domain.com?

    Read the article

  • Website with sections in Drupal?

    - by Matt Hampel
    What is the best way to create a website with sections in Drupal? Users need to be able to add, remove, and nest pages fairly easily. Pages added to a section should have an appropriate URL, like "/[section name]/[page title]". This seems like a straightforward task, but I can't find the right combination of tools to do it. Subsite comes close, but for some odd reason, doesn't set up the correct content paths. The closest I got was creating a book for each subsection, but that feels like I'm using the wrong tool for the job. Edited with my solution: I used organic groups with pathauto. I set pathauto so that pages in groups had URLs that were of the form [group path]/[page title].

    Read the article

  • Are you aware of any client-side malware that sends lots of junk requests for .gifs?

    - by Matt Sherman
    I am getting dozens of 404 errors on my site that are requests for GIFs with apparently random names, like 4273uaqa.gif and 5pwowlag.gif. I see that most of them are coming from one user. I assume something is happening in the background on her machine without her knowledge, i.e. a malware thing on the client. Have you seen this behavior before, and do you know what sort of malware might cause it? I'd love to advise my customer that s/he has an issue. I'd also like to stop getting these 404 reports. (Reposted from main Stack Overflow.)
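    To quiet the 404 reports while investigating, one possible approach (a sketch only, assuming an Apache host with mod_rewrite; the pattern simply matches the random eight-character lowercase names seen so far) is to answer those requests with 410 Gone instead of 404:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            # e.g. 4273uaqa.gif, 5pwowlag.gif - eight random lowercase alphanumerics
            RewriteRule ^[0-9a-z]{8}\.gif$ - [G,L]
        </IfModule>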

    Read the article

  • How to hold a payment in PayGate for a while?

    - by Fero
    Hi all, I have a query regarding holding a payment in the PayGate payment gateway. Here is the problem in brief: I am building a website where the payment should only be taken once a certain number of members have bought the product. For example, if there is an iPhone on my site, then that particular phone must be bought in a certain quantity set by the admin. The quantity may be bought one unit at a time by different users, or a single user can buy the whole quantity at once. Until then I need to hold the payments, because I don't want to receive the money unless the required quantity is actually sold; if it is not, I would have to refund the money to everyone's accounts, and we would rather avoid that process. That's why we are looking at holding the payment. Is this possible, and what is the best way to solve this problem? Please let me know your professional opinion. Thanks in advance...
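    The general mechanism gateways offer for this is an authorize/capture split: place a hold (authorization) on the buyer's card at purchase time, then capture it only once the required quantity is reached, or void it otherwise. Whether PayGate supports this, and for how long an authorization can be held, is something to confirm with them. The PHP below is only a hypothetical client to illustrate the flow; the class and method names are made up, not PayGate's actual API.

        <?php
        // Hypothetical gateway client - names are illustrative only, not a real API.
        $gateway = new HypotheticalGatewayClient('MERCHANT_ID', 'SECRET_KEY');

        // 1. At checkout: place a hold on the buyer's card, without taking the money yet.
        $auth = $gateway->authorize($orderId, $amount);   // $orderId / $amount come from the order

        // 2. Later, when the admin-defined quantity has been reached (tracked in your database):
        if ($quantitySold >= $requiredQuantity) {
            $gateway->capture($auth->transactionId);      // the money is actually taken now
        } else {
            $gateway->void($auth->transactionId);         // release the hold, so nothing to refund
        }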

    Read the article

  • What is the correct step-by-step logic for exporting a scene with baked occlusion, for loading it at runtime?

    - by myWallJSON
    I wonder what the correct step-by-step logic is for exporting a scene with baked occlusion (culling data), so that the scene can be loaded at runtime (on the fly, from the internet for example). Currently my plan looks like this:
        1. Create prefabs.
        2. Place them onto my scene (into the Hierarchy), say 20 buffaloes, some horses and some buildings.
        3. Create an empty prefab and drag all my scene objects from the Hierarchy onto it.
        4. Export the prefab.
    So basically I put all my scene objects into one large prefab and export it, but it seems that all objects that were marked as static get this property turned off when they are loaded at runtime, so no frustum culling and no occlusion culling happens. So I wonder: what is the correct way of exporting Scene + Objects + Occlusion (and other culling) data for a future load of such a scene at runtime? I'm asking about the current 3.5.2 Pro and the upcoming 4 Pro versions of Unity3D.

    Read the article

  • Google: What does a return to PR 'Unranked' mean?

    - by UpTheCreek
    One of my sites is very new (about 3 months old). When first launched, its pages had (unsurprisingly) a Google PR of 'Unranked' [from Google Toolbar stats, via the Firefox SearchStatus plugin]. After a few weeks these changed to PR0. Just recently I noticed that they are showing PR 'Unranked' once more in the Google Toolbar. As far as I know I'm following the Google guidelines. Results for the site still seem to be showing for its keywords. What could this mean?

    Read the article

  • Can a registrar outage affect my website?

    - by Harry Muscle
    If my registrar has an outage and their servers go down for a while, will that affect my website? I'm not using any of my registrar's hosting packages or name servers; I simply have a few domain names registered with them that point to my actual hosting provider. Based on my current understanding of how these things work, I believe that an outage at my registrar will not affect my websites at all, but I'd like to hear this from someone with more experience in this field.
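    One quick way to confirm whether the registrar is in the serving path at all is to check which name servers are authoritative for the domain; if they belong to the hosting provider, an outage of the registrar's own servers or control panel should not affect the live site (the registration itself only matters for renewals or delegation changes). An example check, with a placeholder domain:

        dig +short NS example.com
        # expected output along the lines of:
        #   ns1.your-hosting-provider.com.
        #   ns2.your-hosting-provider.com.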

    Read the article

  • Website attack: form submissions triggering emails (related questions)

    - by IberoMedia
    We are experiencing website attacks that trigger the submission of a form and send alert emails. The normal process of form submission is to fill in a couple of text fields, and when the user is redirected, the next page processes $_POST. If $_POST exists, then the email to the intended form recipients is triggered. What is happening right now is that we are receiving the form-submission email three emails at a time with the same information. The information per email is the same, but not all of the spam emails contain the same information; each batch of triggered emails has unique information. The form has no CAPTCHA, and if possible we would like to keep it that way. The website has worked fine and had no spamming problems until today. We have monitoring software for the website, but whoever is submitting this form over and over is not being recorded by the tracking software. Why is this? Is the person actually visiting the website? The only suspicious visit tracked was on November 10th, and that record also shows three forms submitted (this is how I identified a possible first visit by the attacker). Then no incidents until today. What is the goal of the spam attack? Is the attacker expecting us to respond to the bogus emails? What can they achieve with repeated submission of the form? Why are three emails triggered in a row? Is this indicative that they may be using a script? This is a PHP website. Is there a way for a client to view the PHP code of a page? Thank you.
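    Without adding a CAPTCHA, two lightweight server-side checks often catch scripted submissions like this: a hidden "honeypot" field that humans leave empty, and a per-session token so a POST is only accepted if the form page was actually loaded first. A rough PHP sketch; the field and session names are made up for illustration, not taken from the site in question:

        <?php
        session_start();

        // On the page that renders the form: issue a one-time token and add a honeypot field, e.g.
        //   <input type="hidden" name="form_token" value="...">
        //   <input type="text" name="website" style="display:none">   (honeypot - humans leave it empty)
        if (empty($_SESSION['form_token'])) {
            $_SESSION['form_token'] = md5(uniqid(mt_rand(), true));
        }

        // On the page that processes $_POST and sends the alert email:
        $isBot = !empty($_POST['website'])                              // honeypot was filled in
              || !isset($_POST['form_token'], $_SESSION['form_token'])
              || $_POST['form_token'] !== $_SESSION['form_token'];      // token missing or mismatched

        if (!empty($_POST) && !$isBot) {
            // mail(...) to the intended recipients, as before
        }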

    Read the article

  • Adwords: Is there a drawback to setting a really high CPC to learn what works faster?

    - by Rob Sobers
    I'm toying with setting my max CPC really high on all my keywords to ensure my ad gets shown in the top spot on page one, in order to draw more clicks. I think this would be a good way to quickly figure out whether the ads I'm writing have a decent CTR and, more importantly, whether the landing pages I'm building are converting. Since I can set a max daily budget for my campaign, I won't risk breaking the bank. I can't think of any drawbacks, personally. Am I missing any?

    Read the article

  • Looking for someone to point me in the right direction. I want to learn how to use hosted servers

    - by Leisure
    TL;DR: I want a Java program to run on a server, I want the server to forward a particular port from an external to an internal IP, and I want to store a few files on the server. Guides please. So I made a hack-job Java program that acts as a server for my Android application. It stores data in text files and HTML files, uploads them via FTP to my web host, and manages socket connections (using port forwarding) with any phones connected. Right now I'm running it in NetBeans on my home computer. I know that it will probably slow down or crash once about 50 phones are connected at once. Is there any way I can run this program on a server with high bandwidth? Can someone please find me a guide for that? I'm a noob and don't know where to start looking. I seriously don't know anything about renting or using servers; I need a nice guide and recommendations. My requirements for the server:
        - Can handle about 2k socket connections at once
        - Can run my Java code and store my txt files
        - Can give me a port and an IP address so TCP/IP clients can connect
    My budget: $50 CAD per month. Please someone set my ship sailing in the right direction, I really don't know where to look for resources.

    Read the article

  • Will these type of 403 errors affect my ranking?

    - by Gkhan14
    Let's say I have a directory that returns a 403 Forbidden error for all of the content in it, however a few of the images in the subdirectories of the main directory do NOT return a 403 error. Will this fact affect my ranking? For example:
        - test.com/system/ (has a 403 error for all files)
        - test.com/system/pie/ (has a 403 error for all files)
        - test.com/system/pie/image.png (does not have a 403 error, and this image is embedded on a page on test.com, e.g. test.com/pie/)
    This sort of pattern repeats for about 10 different images. This directory is like a secret "system", but all of the content on the main site (test.com) is still accessible to everyone.

    Read the article

  • Yahoo Media Player not working

    - by luca590
    I have a Yahoo Media Player embedded in my webpage. I am currently using Ruby on Rails to create/edit my web page. When I click the play button next to a track, the YMP waits a while and then goes to the next track without playing the first one. I then get a warning on my second (last) track that its file could not be found. Does anyone have a better recommendation for an audio player, or a way to fix this one?

    Read the article

  • Google's process for publishing/modifying pages [closed]

    - by Glenn Dayton
    I'm assuming that a group of people at Google have control of certain sections of google.com, but how does Google make sure that employees don't accidentally or intentionally sabotage the website? Does Google use Adobe Contribute or some similar product for sharing/publishing the website? Do employees use WebDAV, FTP, SFTP, or SSH to publish the site? Since Google has hundreds of thousands of servers, it probably takes some time for its servers to update. Do they transmit the new copy of the website to all servers before publishing it at once? This question does not apply to Google editing a database and having a page reflect the database's changes. It applies to employees editing the source code and/or back end of the site.

    Read the article

  • Is a merchant account a requirement for a website to take payments?

    - by calum
    Hi, I have had a quick look but couldn't see anything related. Basically, if we were to accept payments for events on our website via PayPal (essentially a Buy It Now! button), as a business, do we need a merchant account, or will a regular bank account be acceptable? I may have some confusion in terms. My understanding is that you need a merchant account to accept credit card payments, but as we are using PayPal, is this necessary? Thank you for any clarification. Disclaimer: I've read "What are some options for taking payments on my website?" but it doesn't explicitly say whether we require a merchant account or not. Thank you.

    Read the article

  • How to disallow indexing but allow crawling?

    - by John Doe
    On the front page of my website, I have some previews of articles (with a small introduction to them) that link to the full articles. I want to stop the front page from being indexed, to prevent duplicate content. But if I do this (in robots.txt), would it still be crawled? I mean, would the full articles still be reached by the crawler even though I disallowed the only page that links to them? I don't want to stop the crawler from accessing the page and following the links on it; I just don't want it to index the content (which will be repeated in the full articles).
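    For this particular goal (let the crawler fetch the front page and follow its links, but keep the page itself out of the index) the usual tool is a robots meta tag rather than robots.txt, since a robots.txt Disallow blocks crawling entirely and the links would then not be followed from that page. A minimal sketch:

        <!-- in the <head> of the front page -->
        <meta name="robots" content="noindex, follow">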

    Read the article

  • What is the maximum time for a user to return to Google for the visit to be flagged up as a bounce in GA?

    - by Anonymous
    I know that Google measures bounce rates by how fast a user returns to the results page after clicking through to a website. Roughly what is the maximum duration of the visit for the user's return to still be considered a bounce? I.e. <5 seconds, <30 seconds? I'm mainly interested because a lot of users clicking through my PPC adverts (AdWords) appear to be bouncing, despite my ads having a high quality score and the pages being entirely related to the advert copy and tied as closely as possible to what I think users may be searching for with the key phrases I've selected, so the high bounce rate (100% on some keywords) seems a bit strange. If a bounce isn't determined by time, but simply by whether or not a user returns to the SERP after visiting my site, after any amount of time, that would make more sense; but the average duration of visit for my keywords with a 100% bounce rate in GA is 00:00:00, which suggests users immediately returned to the SERPs, which again is odd. Is my GA data being skewed by https or anything like that? Scratching my head here.

    Read the article

  • Magento checkout with PayPal

    - by jplozanojuan
    I've set up PayPal under Configuration > Sales > PayPal a thousand times, but I'm still not getting the PayPal option listed among the payment methods in the checkout process. And yes, I have filled out all the required info like the API credentials and such. Also, I'm not getting any errors from Magento at all. This is driving me crazy, and it seems that no one (as far as I can see) has gone through this situation. Thanks in advance.

    Read the article

  • Fake links cause crawl error in Google Webmaster Tools

    - by Itai
    Google reported crawl errors last week on my largest site through Webmaster Tools. Here is the message: "Google detected a significant increase in the number of URLs that return a 404 (Page Not Found) error. Investigating these errors and fixing them where appropriate ensures that Google can successfully crawl your site's pages." The Crawl Errors list is now full of hundreds of fake links, causing 16,519 errors so far. Note that my site does not even have a search.html and is not related to any of the terms shown in those URLs. Inspecting the sources for one of those links, I can see this is not simply an isolated source but a concerted effort: each of the links has a few to a dozen sources, all from different, seemingly unrelated sites. It is completely baffling why someone would spend effort doing this. What are they hoping to achieve? Is this an attack? Most importantly: does this have a negative effect on my site? Could it negatively impact my ranking? If so, what can I do about it? The few linking pages I looked at are full of thousands of links to tons of sites, have no contact information, and do not seem like the kind of people who would simply stop if asked nicely! According to Google Webmaster Tools, these errors have appeared in a span of 11 days. No crawl errors were being reported previously.

    Read the article

  • How to set up Google DFP (DoubleClick for Publishers) for a site?

    - by Manoj Kumar
    I have a website and I have an AdSense account as well. I have integrated AdSense and ads are getting displayed (480 x 60). Somewhere I read that I can manage the ads being shown on my website (480 x 60) and filter the ads on a CPM/CPC basis. NOTE: I don't have any ads to be displayed on other people's websites; I just want to show other people's ads on my website. Now, can I use Google DFP to manage the ads? I mean, is Google DFP useful for me to filter the ads and get more revenue?

    Read the article
