Search Results


  • Loading main JavaScript on every page? Or breaking it up into relevant pages?

    - by Kyle
    I have a 700 KB (uncompressed) JavaScript file which is loaded on every page. I used to have 12 JavaScript files on each page, but to reduce HTTP requests I combined them all into one file. That file is ~130 KB gzipped and is served with gzip, but on the client it is still decompressed and parsed on every page. Is this a performance issue? I've profiled the JavaScript with the Firebug profiler but did not see any issues. The problem (or illusion) I am facing is that the file bundles jQuery libraries that are sometimes not used on the current page. For example, jQuery DataTables accounts for about 200 KB of it and is only used on 2 of my website's pages; jqPlot is another 200 KB. So I have 400 KB of excess code that isn't executed on 80% of the pages. Should I leave everything in one file, or should I take the jQuery libraries out and load only the JavaScript relevant to the current page?
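
    A minimal sketch of the split-bundle approach the question is weighing, assuming a shared core file plus on-demand loading of the heavy libraries (the file names and the .data-table selector are placeholders, not the site's actual ones):

        <!-- core.min.js: jQuery plus everything used site-wide; loaded on every page -->
        <script src="/js/core.min.js"></script>
        <script>
        // Load DataTables only when the current page actually contains a table that needs it.
        if ($('.data-table').length) {
          $.getScript('/js/jquery.dataTables.min.js', function () {
            $('.data-table').dataTable();
          });
        }
        </script>

    Either variant (separate script tags per page, or on-demand loading like this) keeps the 400 KB of page-specific code off the 80% of pages that never run it, at the cost of an extra HTTP request on the pages that do.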

    Read the article

  • How can I use domain masking without causing self-referrals in Google Analytics?

    - by Cdore
    I have an old domain that points to a website's server IP (let's call it www.oldsite.com). I have a new one, www.newsite.com, that is set up to be forwarded to a specific page on that website. Because the host of newsite.com places the website in a frame, Google Analytics lists newsite.com as a source rather than the source visitors were actually at beforehand, causing a self-referral. One solution I found is to edit the code of the iframe, but of course there's no way to edit the host's masking source code. Another solution I tried previously was to point www.newsite.com at the address that www.oldsite.com pointed to. That solved the Analytics problem, but in exchange the URL masking no longer worked: the address bar showed www.oldsite.com. Is there a way to keep URL masking and still forward in a way that Google Analytics agrees with? The website is hosted on a cloud server, in case that information helps.

    Read the article

  • Webserver on a rotating server with NAT IP or changing IPs

    - by hpsoftware
    I'll have to elaborate on my question, so please have patience. Explaining the logic: if you are familiar with LogMeIn, it installs a client on your computer and keeps track of where that computer is as long as it's connected to the internet, so you can always access it no matter where it is or what its IP is; you just go to logmein.com. Now what I am asking: assume I have a website hosted on my laptop (call it the webserver). As I move around I get a new IP, sometimes even on a hotel network. Is it possible to do something like what LogMeIn does, so I can keep moving my webserver to new IPs but have some local client that keeps my IP updated? I'm sure I would need a gateway server somewhere that my domain name points to via DNS, so somebody accessing www.mywebsite.com reaches my main server and then gets routed to my laptop, which could be anywhere, as long as the gateway server can communicate with my webserver. I will keep updating the description based on comments to make more sense; please have patience with me. Regards
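
    One common way to build the gateway being described is a reverse SSH tunnel from the laptop to a server with a fixed IP; a rough sketch, assuming a gateway host called gateway.example.com and a web server listening on the laptop's port 80 (this is a general technique, not something LogMeIn itself exposes):

        # Run on the laptop: whatever network it is on, it dials out to the gateway and
        # forwards the gateway's port 8080 back to the local web server on port 80.
        ssh -N -R 8080:localhost:80 user@gateway.example.com

        # On the gateway, DNS for www.mywebsite.com points at its fixed IP, and a reverse
        # proxy (or GatewayPorts yes in sshd_config) exposes the tunnelled port publicly.

    Because the laptop initiates the outbound connection, NAT and changing hotel IPs don't matter; dynamic-DNS services are the other common approach when the laptop gets a publicly reachable address.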

    Read the article

  • Google Goals process not working through similarly named pages

    - by David
    Well, I'm at a loss. I've made sure my tracking script is in place etc., and I've set up my goal and funnel path, but only the first step ever shows on the funnel.

        Goal URL: /checkout/checkoutComplete/ (Type: Head Match), or should this be /checkout/checkoutComplete/(.*) with the type set to Regex, since there are parameters after the main part of the URL? (I thought handling that was what Head Match was for.)
        Step 1: /checkout/ <-- required
        Step 2: /checkout/confirm/

    Both of the above are valid, correct URLs for my domain. But for some reason the funnel visualization shows entries into the first step, then an exit count that matches the entry count, including /checkout/confirm, but it never goes on to the next step! Perhaps I'm doing something obviously wrong, but I can't quite see it. Also, two semi-related questions: does a change to the funnel only affect new incoming data, and how often does it update? Thanks in advance for your help.

    Read the article

  • Is it ok for a canonical link to point to itself?

    - by Tom Gullen
    I've got this canonical tag:

        <link href="http://www.Site.com/Blog/how-to-know-when-this" rel="canonical" />

    Is it OK if this is on the very page it points to? I'm also putting it on all of these pages:

        http://www.Site.com/Blog/how-to-know-when-this
        http://www.Site.com/Blog/how-to-know-when-this/
        http://www.Site.com/Blog.aspx?ID=1
        http://www.Site.com/Blog/how-to-know-when-this/?q=

    Is this correct usage?
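
    For reference, a self-referencing canonical is fine; the usual pattern is for every URL variant to carry the same tag in its <head>, all pointing at the one preferred address (a sketch using the URL from the question):

        <head>
          ...
          <link rel="canonical" href="http://www.Site.com/Blog/how-to-know-when-this" />
        </head>

    That way the trailing-slash, query-string and Blog.aspx?ID=1 variants all consolidate onto the single clean URL.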

    Read the article

  • Hijax == sneaky JavaScript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question: will I get penalised by Google for "sneaky JavaScript redirects" if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)?

    Goal: I want to implement Hijax so that AJAX content is accessible to non-JavaScript users and search engine crawlers.

    Background: I'm working on a static file server (GitHub Pages). No server-side tricks are allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY: I don't want to repeat the common OUTER template (header, navigation menu, footer, etc.) in all my files; it lives in the main index.html.

    The Hijax setup:
        index.html contains all the OUTER html/css/js, i.e. the site's template.
        index.html has a <div id="content"> which defaults to containing the "homepage" html.
        index.html has a navigation menu with a Hijax link to an "about" page. With JavaScript disabled (e.g. a crawler) it follows the link to /about.html. With JavaScript enabled (e.g. most people) the link updates the URL hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");.

    The AJAX content:
        about.html does not contain anything extra to try and cloak content for crawlers.
        about.html contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data, e.g. <html><head><title>About</title>...</head><body></body></html>.
        about.html has NO OUTER HTML template (i.e. header, navigation menu, footer, etc.).
        about.html's <body> contains a <div id="inner-container"> which holds the content that is injected into index.html.
        about.html has a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page's "inner content", with a link to the index.html page to get the full page layout with menu.

    The (sneaky?) redirect: Google indexes the /about.html page. However, when a person with JavaScript enabled visits that page there is no OUTER html template (header, navigation menu, footer, etc.), so I need a JavaScript redirect to get that person over to the /#about page (deep-linking to the "about" page "state" in index.html). I'm thinking of a "redirect on click or after 10 seconds". The end result is that the user ends up on an "enhanced" page back on index.html with all its OUTER template, but the core "page" content is practically identical.

    Known issue with inbound links (e.g. share / bookmarking): it seems that if a user shares the URL /#about on their blog, Google ignores everything after the # when allocating inbound links to my site; it allocates the value to the / page. See http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try and minimise this issue by offering "share" buttons on the page with the appropriate URLs, i.e. /about.html.

    Duplicate: sorry, I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google and then realised it probably belongs more on this Stack Exchange site. Not sure if I should delete the Stack Overflow question or just leave it on both sites; please leave a comment.
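
    A minimal sketch of the redirect being described, as it might sit in about.html. Only the #inner-container fragment ever gets injected into index.html, so a script placed outside that div runs only when a JavaScript-capable visitor opens /about.html directly, which is exactly the case that needs redirecting (the 10-second delay is the question's own suggestion):

        <script>
        // Visitors with JavaScript get bounced to the enhanced "state" in index.html;
        // crawlers and non-JS users never execute this and keep the standalone page.
        setTimeout(function () {
          window.location.replace('/#about');
        }, 10000);
        </script>

    Using location.replace rather than setting location.href keeps the standalone /about.html out of the back-button history, which makes the hop feel less like a redirect chain to the user.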

    Read the article

  • Is it a bad practice to register sntsh.com if my name is Santosh?

    - by Santosh
    My name is Santosh, but I can't register santosh.com because it is already taken; most extensions for "santosh" are taken as well. I looked at registering a domain with the .sh extension, but santo.sh would cost a lot, and I can't afford ~$100 for just a personal domain, and that's for only one year. So now I am thinking I should register sntsh.com. But here is my worry: would using sntsh.com rather than my name Santosh create an SEO problem? One more thing, on a slightly different topic: if I register santosh.name, which is not yet registered, could it create copyright or other legal problems with the other "santosh" domains?

    Read the article

  • When will my old page stop appearing on Google?

    - by Bane
    I recently bought a new address for my Blogger blog, moving from yannbane.blogspot.com to www.yannbane.com. However, www.yannbane.com addresses do not appear when they are searched for! Is this normal? How much time will it take for Google to update its index? yannbane.blogspot.com 301-redirects to www.yannbane.com. Both are added to my Webmaster Tools account, but strangely it shows no data for www.yannbane.com. And finally, is there something I could do to speed up the process?

    Read the article

  • How to change Internet Explorer settings through JavaScript? [on hold]

    - by Abhi
    I have a webpage which fetches a value dynamically from a config file (whose contents change after some interval). Initially I thought it might be a problem with my code, but when I cross-checked it with other browsers it ran successfully. On further research I changed a setting in Internet Explorer regarding temporary files: Tools > Internet Options > Browsing history > Settings, where I selected "Every time I visit the webpage" from among the 4 options available. What I want to know is: can I set this programmatically?
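
    A page can't change Internet Explorer's browser settings from script, but the usual workaround for this particular symptom is to make the config request uncacheable so the setting no longer matters; a rough sketch, where the config file name is a placeholder:

        // Append a unique query string so IE never serves the config file from its cache.
        var url = '/config.json?_=' + new Date().getTime();
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.onreadystatechange = function () {
          if (xhr.readyState === 4 && xhr.status === 200) {
            // parse xhr.responseText here
          }
        };
        xhr.send();

    Sending Cache-Control: no-cache / Expires headers for the config file on the server has the same effect without the query-string trick.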

    Read the article

  • What is this chargeback scam from eBooks bought on my website?

    - by Dan Friedman
    We have a scammer that is buying our e-Books and then performing chargebacks. Our e-Books don't have DRM, so if they wanted to resell them, they would only need to buy each book once. But instead, they keep buying the same books over and over again and then performing hundreds of chargebacks. We have created some additional rules in our fraud protection tools to block certain aspects, even though all the info looks legit, and are hopeful this will slow them down. But my question is: What is the scam? If they aren't getting any product and they only get chargebacks for something they already purchased, then they can't get additional money from the credit card company, so then what's their motivation?

    Read the article

  • Find "secret" port number

    - by CJ Sculti
    This may be kind of an odd question. My friend has challenged me: somehow he changed the "port" of his site to 31337. If you just go to domain.com you get redirected to Google; to access the real site you go to domain.com:31337. He is going to change it again, and he is challenging me to find out which port it is. Is this possible without guessing? Hopefully someone can help, thanks. Oh, and is this the right Stack Exchange site to post this on?

    Read the article

  • How would I go about setting a CSS gradient background in JavaScript?

    - by Dan
    The CSS gradient is described here, but I have no idea how to set these properties from JavaScript. I would rather not use jQuery for this if at all possible. EDIT: Just doing the following doesn't seem to work:

        document.getElementById("selected-tab").style.background = "#860432";
        document.getElementById("selected-tab").style.background = "-moz-linear-gradient(#b8042f, #860432)";
        document.getElementById("selected-tab").style.background = "-o-linear-gradient(#b8042f, #860432)";
        document.getElementById("selected-tab").style.background = "-webkit-gradient(linear, 0% 0%, 0% 100%, from(#b8042f), to(#860432))";
        document.getElementById("selected-tab").style.background = "-webkit-linear-gradient(#b8042f, #860432)";
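
    For what it's worth, the repeated assignments above are close to the standard trick, since browsers silently ignore style values they can't parse; one hedged way to make that explicit is to probe which form the browser accepts and stop there (plain DOM, no jQuery):

        var el = document.getElementById('selected-tab');
        el.style.backgroundColor = '#860432'; // solid-colour fallback for browsers without gradient support
        var candidates = [
          'linear-gradient(#b8042f, #860432)',
          '-moz-linear-gradient(#b8042f, #860432)',
          '-o-linear-gradient(#b8042f, #860432)',
          '-webkit-linear-gradient(#b8042f, #860432)',
          '-webkit-gradient(linear, 0% 0%, 0% 100%, from(#b8042f), to(#860432))'
        ];
        for (var i = 0; i < candidates.length; i++) {
          try {
            el.style.backgroundImage = candidates[i];
          } catch (e) {
            continue; // older IE throws on values it cannot parse
          }
          if (el.style.backgroundImage !== '') break; // this value was accepted
        }

    Using backgroundImage (rather than the background shorthand) keeps the colour fallback in place while the gradient value is being probed.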

    Read the article

  • Question about headings for professionals, <h1> ... <h9>: SEO and browser compatibility differences

    - by Sam
    We all know the importance and significance of headings for professional webmasters; developers know them as <h1>Heading 1</h1>, h2 ... h6. As a daring web developer I recently needed more short headings for a complex structured document, so I thought "what the hell", went ahead, and used in CSS: h1,h2,h3,h4,h5,h6{ } h7{ } h8{ } h9{ }. My experiment paid off, but only in Firefox, Safari, Chrome etc., not in Internet Explorer 8.
    Q1. Who decided (and when) that headings should go up to h6, and not h4 or h7?
    Q2. Why do h7-h9 work perfectly in all major browsers except IE8?
    Q3. What is the significance of headings h1 ~ h9 for Bing, Yahoo and Google in terms of recognition? Obviously h1 is more important than h2, but do they differentiate between h5 and h6, or not any more after h3?
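
    On Q2, a likely explanation (hedged): h7-h9 are not part of any HTML specification, so they are "unknown elements". Most browsers still let CSS style unknown elements, but IE8 and earlier only do so after the tag has been created once via script, which is the same trick the HTML5 shiv uses for then-new HTML5 elements. A sketch:

        <!--[if lt IE 9]>
        <script>
          // Make IE8 apply CSS to the non-standard heading tags used on this site.
          document.createElement('h7');
          document.createElement('h8');
          document.createElement('h9');
        </script>
        <![endif]-->

    Since h7-h9 are non-standard, search engines are unlikely to treat them as headings at all, which is worth weighing against the styling convenience.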

    Read the article

  • Redirect pages to fix crawl errors

    - by sarah
    Google is giving me crawl errors for pages that I have removed, like www.mysite.com/mypage.html. I want to redirect these pages to the new page, www.mysite.com/mysite/mypage. I tried to do that with .htaccess, but instead of fixing the problem the number of crawl errors increased, and a new one appeared for www.mysite.com/www.mysite.com. This is my .htaccess file:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /sitename/
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /sitename/index.php [L]
        </IfModule>
        # END WordPress

    Should I add this after the rewrite rule, or should I do something else?

        RewriteRule ^pagename\.html$ http://www.sitename.com/pagename [R=301]
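
    A hedged sketch of one way to place it (the page names are the ones from the question): put the 301 before the WordPress catch-all, give it an absolute http:// target and an [L] flag so the request never falls through to index.php. A substitution written without the scheme (e.g. just www.mysite.com/...) is treated by mod_rewrite as a path, which is the classic cause of doubled-hostname URLs like www.mysite.com/www.mysite.com.

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /sitename/

        # Old removed page -> new location, redirected before WordPress routing kicks in
        RewriteRule ^mypage\.html$ http://www.mysite.com/mysite/mypage [R=301,L]

        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /sitename/index.php [L]
        </IfModule>
        # END WordPress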

    Read the article

  • Track those visitors who come through a particular link

    - by busybee235
    I want to track visitors who come to my site through a particular link. For example, for visitors arriving via http://www.domain.com/abc123, I want their pageviews, time on site, bounce rate, referrer, pages per visit, etc., and then to store that info in my database on a daily basis. Can anyone suggest a service, API or piece of software for this? I have used Google Analytics utm tags, which work well for my requirement, but I don't know how many links I can track with them. I have around 80-100 links to track per day, and the number of links will be increasing. I couldn't find any documentation regarding a limit on the number of campaigns in GA. If there's no such limit, I can start this project. Thanks
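
    For reference, a campaign-tagged version of the link described above would look roughly like this (the parameter values are placeholders; utm_source, utm_medium and utm_campaign are the standard campaign tags GA uses to attribute the visit):

        http://www.domain.com/abc123?utm_source=partner-site&utm_medium=referral&utm_campaign=link-abc123

    Giving each of the 80-100 links its own utm_campaign (or utm_content) value is what lets them be separated later in the reports or via the API.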

    Read the article

  • robots.txt not updated

    - by Haridharan
    I have updated some URLs and files in my robots.txt file to block them from Google search results, but the files are still displayed in the search results. Following a suggestion from a site, I tried to get the robots.txt picked up with these steps: in Google Webmaster Tools, Health > Fetch as Google, type the URL and click the Fetch button. But the files are still displayed in the search results. Note: in Google Webmaster Tools, under Health > Blocked URLs > robots.txt file, the downloaded date is a couple of days old.
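
    Worth noting as background (not a guarantee for this case): robots.txt only stops crawling, and URLs that were already indexed can keep appearing in results. The usual way to remove an already-indexed page is a noindex meta tag, and the page has to stay crawlable (i.e. not blocked in robots.txt) until Google re-fetches it and sees the tag:

        <meta name="robots" content="noindex">

    Google also refetches robots.txt on its own schedule, which is why the downloaded date shown in Webmaster Tools can lag a day or two behind an update.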

    Read the article

  • Recovering from an incorrectly deployed robots.txt?

    - by Doug T.
    We accidentally deployed a robots.txt from our development site that disallowed all crawling. This has caused traffic to dip dramatically, and Google results to report:

        A description for this result is not available because of this site's robots.txt - learn more.

    We corrected the robots.txt about 1.5 weeks ago, and you can see our robots.txt here. However, search results still report the same robots.txt message; the same appears to be true for Bing. We've taken the following action:

        Submitted the site to be recrawled through Google Webmaster Tools
        Submitted a sitemap to Google (basically doing everything possible to say "Hey, we're here! and we're crawlable!")

    Indeed a lot of crawl activity seems to be happening lately, but still no description is crawled. I noticed this question where the problem was specific to a 303 redirect back to a disallowed path. We are 301 redirecting to /blog, and crawling is allowed there. This redirect is due to a site redesign: WordPress paths for posts such as /2012/02/12/yadda yadda have been moved to /blog/2012/02/12. We 301 redirect the old paths into WordPress under /blog to keep our Google juice. However, the sitemap we submitted might have /blog URLs; I'm not sure how much this matters. We clearly want to preserve Google juice for URLs that linked to us before the redesign with the /2012/02/... form. So perhaps this has prevented some content from getting recrawled? How can we get all of our content, with links pointing to our site from both before and after the redesign, reporting descriptions? How can we resolve this problem and get our search traffic back to where it used to be?
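
    For illustration, the old-permalink-to-/blog redirect described above would typically be a rule along these lines in Apache (a hedged sketch; the date pattern is assumed from the /2012/02/12/... example):

        # /2012/02/12/some-post  ->  /blog/2012/02/12/some-post
        RewriteEngine On
        RewriteRule ^(\d{4}/\d{2}/\d{2}/.+)$ /blog/$1 [R=301,L]

    As long as rules like this return 301s to paths that robots.txt allows, they shouldn't themselves keep descriptions from being recrawled; listing the final /blog/... URLs in the sitemap is generally the safer choice.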

    Read the article

  • Oversizing images to produce better looking pages?

    - by Joannes Vermorel
    In the past, improper image resizing used to be a big no-no of web design (not to mention improper compression formats). Hence, for years I stuck to the policy that images (PNG or JPG) are resized on the server to match, pixel for pixel, the resolution they will have in the rendered page. Recently, though, I hastily put together an HTML draft with oversized images, using inline CSS such as width:123px and height:123px to resize them. To my (slight) surprise, the page turned out to look much better that way. Indeed, with better screen resolutions, some people (like me) tend to browse with some level of zoom (say 125% or even 150%), otherwise fonts are just too small on screen. If the image is strictly sized, the enlarged image appears blurry (a pixel-interpolation effect), but if the image is oversized the result is much better. Obviously, oversizing images is not an acceptable pattern if your website is intended for mobile browsing, but is there a case where it would be considered acceptable, especially if the extra page weight is small anyway?
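
    A minimal sketch of the pattern being described, assuming an image exported at roughly twice its display size (the file name and dimensions are placeholders):

        <!-- The source file is ~800px wide; the browser downscales it to 400px on a normal
             display, so it still has pixels to spare when the user zooms to 125-150%. -->
        <img src="/img/photo-large.jpg" style="width:400px; height:300px;" alt="Photo">

    The trade-off is exactly the one raised above: the extra bytes are wasted on visitors who never zoom, so it mostly makes sense for images whose oversized file is still small.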

    Read the article

  • Track Promotional Code Sales

    - by Scott
    Is there a way I can track actual sales on purchases utilizing Promo or Discount Codes obtained through my site? My site will link to e-commerce sites where users can use those promo codes on their purchases to save money. My site will not actually be selling any items, it is all referrals to other sites. I want this to be done outside of any 3rd party commission platform such as Commission Junction or LinkShare. Thanks!

    Read the article

  • Image SEO - always repeat main keyword in alt text?

    - by Marcus Edensky
    I'm working on an Easter Island website and I'm currently redesigning my image system. Virtually all my photos are of Easter Island. My question is: should I always include the keywords "Easter Island" so that Google can more easily understand that my photos are from Easter Island, or is it sufficient that the "Easter Island" keywords are in the domain, as well as in all other pages of the site? For example:

        Alt text 1: "Moai statues at volcano Rano Raraku at Easter Island (Rapa Nui)"
        Alt text 2: "Moai statues at volcano Rano Raraku"

    Would example 1 be considered keyword stuffing by Google?

    Read the article

  • Wiki Application With A Reputation System

    - by Christofian
    I'm really impressed with Stack Exchange's concept of reputation (you gain reputation as you post, and the more you post, the more privileges you get), and I want to apply the concept to a wiki that I am building. Does anyone know of a PHP wiki that has a concept of privileges/reputation similar to Stack Exchange? I'm not necessarily looking for something identical to SE, just a wiki application that gives users more privileges the more they contribute positively to the wiki (SE has downvotes, so the wiki should have some way of identifying negative contributions too). The privileges should be category-based, so the more active you are in a specific category or page, the more privileges you get for that category. There should also be site-wide privileges, though those should be harder to earn than the category ones. NOTE: if it is not possible to have both category-wide and site-wide privileges, I am OK with just one or the other. I should be able to change the requirements for each privilege, through an administration panel or by editing a file (some wiki applications don't have administration interfaces). Does anyone have a script or a solution that will do this? If the script uses something similar to reputation to determine how much a user has positively contributed to the site, that is OK too. Please note: I am looking for a way to rate individual user contributions, not a way to rate the quality of an entire page.

    Read the article

  • Content appearing under multiple categories; anything I can do to prevent duplicate penalty?

    - by dave
    I'm working with a CMS that allows me to post content into multiple categories. So I have this link:

        www.site.com/category/green-cars
            Here are the GREEN cars
            TITLE: A Big green car
            INTRO: this is a great big green car.

    But then I have this link:

        www.site.com/category/big-cars
            Here are the BIG cars
            TITLE: A Big green car
            INTRO: this is a great big green car.

    So essentially, for every item of content, the header and the intro sentence are the same regardless of the category the item appears in. Will a search engine penalise the site for having the same content in this way? I've looked at canonical links, but I don't think they're relevant here: all my content points to the same page, but the content may appear in multiple categories first. Or am I worrying about nothing? Thanks.

    Read the article

  • Broken links in content reports when tracking subdomains with Google Analytics

    - by Rob Sobers
    I have a tracking code that I use on my main site and my blog, which is on a subdomain:

        www.example.com
        blog.example.com

    I have a single profile in Google Analytics and use advanced segments to look at traffic to the main site vs. traffic to the blog.

    Problem 1: when I'm browsing my content reports under Standard Reporting, the "Page" column doesn't show the domain or subdomain, so I can't easily tell www.example.com/index.html apart from blog.example.com/index.html. According to the docs, this filter is supposed to make GA prepend the hostname to the page URL in the content reports, but it doesn't seem to work.

    Problem 2: when I click the little "open in new window" icon next to a page in a content report, it always assumes the page lives on www.example.com, so I get 404s when the page is actually on blog.example.com.

    Is there a good solution to these subdomain tracking problems?
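
    A couple of hedged notes that often explain this: GA filters (including the hostname-prepending one) only rewrite data collected after the filter was added, so older rows never show the domain; and the tracking snippet itself needs the cookie domain widened so www. and blog. share one visitor cookie. With the classic async (ga.js) code that looks roughly like this (the property ID is a placeholder):

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXX-1']);      // placeholder property ID
        _gaq.push(['_setDomainName', '.example.com']);   // share the cookie across subdomains
        _gaq.push(['_trackPageview']);

    Problem 2 follows from the same missing-hostname issue: without the hostname in the page dimension, the report UI has no way to know which subdomain a bare path belongs to.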

    Read the article

  • Which JavaScript carousel zooms blocks from the playlist?

    - by Iain Hallam
    I saw a carousel/slider for displaying featured content a while ago that does something that most don't. It started fairly simply, with the top feature large, and a playlist to the side of other featured stories: Feature 1 then began to slide towards the bottom right, while feature 2 moved to occupy the main slot, and the previews of features 3 and 4 moved up: The slider had now completed a whole swap, and was ready to do the same thing with feature 3. My Google-fu seems to be lacking in finding this again; does anyone know of this slider? I think it was based on one of the frameworks, but I'm not sure whether it was jQuery or one of the others.

    Read the article

  • Is this Anti-Scraping technique viable with Crawl-Delay?

    - by skibulk
    I want to prevent web scrapers from abusing the 1,000,000 pages on my website. I'd like to do this by returning a "503 Service Unavailable" error code to users who access an abnormal number of pages per minute, but I don't want search engine spiders to ever receive that error. My inclination is to set a robots.txt crawl-delay that ensures spiders request few enough pages per minute to stay under my 503 threshold. Is this an appropriate solution? Do all major search engines support the directive? Could it negatively affect SEO? Are there any other solutions or recommendations?
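
    For reference, the directive in question is a one-liner in robots.txt (the value shown is just an example):

        User-agent: *
        Crawl-delay: 10

    Support is uneven, which bears on the "do all major search engines support it" question: Bing and Yahoo honour Crawl-delay (interpreting the value as seconds between requests), while Googlebot ignores the directive and takes its crawl rate from the Webmaster Tools setting instead. So the 503 threshold still has to leave headroom for Googlebot, or its traffic needs to be identified (e.g. by reverse-DNS verification) and exempted.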

    Read the article
