Search Results

Search found 9721 results on 389 pages for 'quicktest pro'.

Page 181/389 | < Previous Page | 177 178 179 180 181 182 183 184 185 186 187 188  | Next Page >

  • Is it possible to use a VB master page to cover an entirely separate directory written in C#?

    - by Jason Weber
    I have a company website written in VB.NET with five master pages. I recently began using a forum application, also ASP.NET 4.0, but this one is written in C#. My forum directory is domain.com/knowledgebase/. Is there any way to take one of my VB.NET master pages and somehow integrate it into the /knowledgebase/ directory?

    This is what's at the top of every page in my site:

        <%@ Page Title="USS Vision Inc." Language="VB" MasterPageFile="~/homepage.master" AutoEventWireup="false" CodeFile="default.aspx.vb" Inherits="_default" culture="auto" meta:resourcekey="PageResource1" uiculture="auto" Debug="true" %>

    This is what's in my /knowledgebase/ directory:

        <%@ Page Language="C#" AutoEventWireup="true" ValidateRequest="false" Inherits="YAF.ForumPageBase" culture="auto" uiculture="auto" %>
        <%@ Register TagPrefix="YAF" Assembly="YAF" Namespace="YAF" %>
        <script runat="server">

    Is it somehow possible to use, for instance, homepage.master in the /knowledgebase/ directory? If so, how would I accomplish this? Thanks for any guidance anybody can offer!
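
    In ASP.NET Web Forms, a content page's language and its master page's language do not have to match, since each file compiles separately, so pointing the C# forum pages at the VB master may work as long as both live in the same web application. A minimal sketch under that assumption (whether YAF tolerates a different master, and the ContentPlaceHolder ID used below, are assumptions to verify against homepage.master):

        <%@ Page Language="C#" AutoEventWireup="true" ValidateRequest="false"
            MasterPageFile="~/homepage.master" Inherits="YAF.ForumPageBase"
            culture="auto" uiculture="auto" %>
        <%-- The page body must then sit inside asp:Content controls whose
             ContentPlaceHolderID values match the placeholders defined in homepage.master. --%>
        <asp:Content ID="ForumContent" ContentPlaceHolderID="MainContent" runat="server">
            <!-- forum markup goes here -->
        </asp:Content>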

    Read the article

  • Question about server usage, big community platform

    - by Json
    I'm working on a community platform written in PHP and MySQL, and I have some questions about server usage; maybe someone can help me out. The community is based on jQuery with many AJAX requests to update content. It makes 5-10 AJAX (JSON, GET, POST) requests every 5 seconds, and the requests fetch user data like notifications and messages by running MySQL queries. I wonder how a server will handle this when there are more than 5,000 users online: that would be 50,000 requests every 5 seconds. What kind of server do you need to handle this? Or maybe even more - with 15,000 users online, 150,000 requests every 5 seconds. My web server has the following specs: Xeon quad-core, 2048 MB RAM, 5000 GB traffic. Will it be good enough, and for how many users? Can anyone help me out, or point me to where I can find such information or how to make such a calculation?
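
    A rough back-of-the-envelope check, using the worst-case numbers from the question (10 requests per user every 5 seconds, each hitting MySQL):

        5,000 users  x 10 requests / 5 s = 10,000 requests per second
        15,000 users x 10 requests / 5 s = 30,000 requests per second

    How much of that a single quad-core box can absorb depends almost entirely on the cost of the queries and on caching, so these figures are an upper bound on the polling load rather than a capacity estimate.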

    Read the article

  • What is a good network for full-page rich ads?

    - by Vishnu
    I'm currently developing a website where users will be able to upload content. I would like to show a full-page ad whenever someone tries to view that content. The ad should take up most of the screen, with a "continue to the content" link at the top. Preferably, I want something like what is currently on Forbes (if you haven't seen it: http://www.forbes.com/fdc/welcome.shtml, but with an ad in the black area). Of course, the more revenue the better. Thanks.

    Read the article

  • Google analytics/adwords account and leaking of private data

    - by Satellite
    I am frequently asked to log into clients' Google Analytics and AdWords accounts. If I forget to log out before visiting other Google properties (Google Search, YouTube, etc.), this leaves traces of my views and searches, exposing my activities to the client. Summary:

    1. Client gives me access to their Google Analytics / AdWords account
    2. I log into the client's Analytics account and do some stuff
    3. Then in another tab I perform some related Google searches to solve some related issues
    4. Issues solved, I then close the Analytics tab
    5. I then visit google.com and perform some unrelated searches
    6. I then visit YouTube and view some unrelated videos
    7. All web and YouTube searches are recorded in the client's Google account, thus leaking potentially sensitive data

    Even assuming that I remember to log out correctly at step 4 (as I do 95% of the time), anything I do at step 3 is exposed to the client. I would be surprised if this is not a very common issue. I'm looking for a technical solution to ensure that this can never happen. Any ideas?

    Read the article

  • .htaccess rules to rewrite URLs to front end page?

    - by Dizzley
    I am adding a new application to my site at example.com/app. I want views at that URL to always open myapp.php, e.g. example.com/app -> example.com/app/myapp.php and example.com/app/ -> example.com/app/myapp.php. What's the correct form of rewrite rules in the .htaccess file? I've got:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /app/
        RewriteRule ^myapp\.php$ - [L]
        RewriteRule ^myapp.php$ - [L]
        RewriteRule . - [L]
        </IfModule>

    ...based on what the WordPress front end does. But all I see at example.com/app is a directory of files. :( (I put those rewrites at the top of my .htaccess file.) Any ideas?

    Update - what actually worked:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_URI} ^/app(/.*)?$ [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule . /app/myapp.php [L]

    This is good because:

    1. Explicit or implicit calls to app/myapp.php work.
    2. example.com/app redirects to app/myapp.php
    3. example.com/app/ redirects to app/myapp.php
    4. example.com/app/subfunction redirects to app/myapp.php
    5. All other calls to example.com/otherstuff are untouched.

    Item 4 is WordPress-like Front Controller pattern behaviour. I think the condition RewriteCond %{REQUEST_URI} ^/app.*$ [NC] needs refining, as it lets /app-oh-my-goodness etc. through too. Thanks for the answers.
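
    If the looser pattern is still in use anywhere, a slightly tighter condition that matches only /app itself or paths beneath it would be (a sketch, not tested against this exact setup):

        RewriteCond %{REQUEST_URI} ^/app(/|$) [NC]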

    Read the article

  • Make an agenda view google calendar entry display initially as if it had been clicked

    - by aslum
    So I've got a Google Calendar embedded in my web page. It's set to agenda view, so when you click on an entry it expands and shows more information about that entry. I'd like to be able to link to the page with the embedded calendar from elsewhere and have a specific entry already expanded (as if it had been clicked). Is this even possible? I'm not really sure where to start. PS: I don't have enough rep on this SE to create tags, and there isn't already a tag for "google-calendar".

    Read the article

  • Google analytics e-commerce tracking

    - by crayden
    Good morning or afternoon, wherever you are. I am having issues with Google Analytics e-commerce tracking: on certain days the e-commerce tracking returns a revenue value of $1.00, which is impossible because it is a hotel booking website. I am puzzled and don't know where to go next with this. Any assistance is greatly appreciated. Thank you! Here is some code that might help; I received it from our contact who develops the booking engine.

    This is included on every page except the reservation confirmation page:

        <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-26956700-1']);
        _gaq.push(["_setDomainName", "none"]);
        _gaq.push(["_setAllowLinker", true]);
        _gaq.push(['_trackPageview']);
        (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();
        </script>

    This is included only on the reservation confirmation page (the "${res.xxx}" elements are replaced on the server side with reservation details):

        <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(["_setAccount", "UA-26956700-1"]);
        _gaq.push(["_setDomainName", "none"]);
        _gaq.push(["_setAllowLinker", true]);
        _gaq.push(["_trackPageview"]);
        _gaq.push(["_addTrans", "${res.confirmationNumber}", "Sunshine", "${res.grandTotal}", "${res.totalPriceTax}", "", "", "", ""]);
        _gaq.push(["_addItem", "${res.confirmationNumber}", "${res.roomType}", "", "", "${res.totalPrice}", "1"]);
        _gaq.push(["_addItem", "${res.confirmationNumber}", "Options", "", "", "${res.otherChargeChoices.totalCostExclTax}", "1"]);
        _gaq.push(["_trackTrans"]);
        (function(){
            var ga = document.createElement("script");
            ga.type = "text/javascript";
            ga.async = true;
            ga.src = ("https:" == document.location.protocol ? "https://ssl" : "http://www") + ".google-analytics.com/ga.js";
            var s = document.getElementsByTagName("script")[0];
            s.parentNode.insertBefore(ga, s);
        })();
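
    One way to narrow this down, sketched under the assumption that the server-side placeholders sometimes render empty or non-numeric (placeholder names are taken from the snippet above): check the substituted total before pushing the transaction, so suspect confirmations can be matched against the $1.00 days.

        var grandTotal = "${res.grandTotal}";
        // Only record the transaction when the rendered value is a plain number;
        // otherwise log it for inspection instead of sending bogus revenue.
        if (/^\d+(\.\d+)?$/.test(grandTotal)) {
            _gaq.push(["_addTrans", "${res.confirmationNumber}", "Sunshine", grandTotal,
                       "${res.totalPriceTax}", "", "", "", ""]);
        } else if (window.console) {
            console.log("Suspect grandTotal value: " + grandTotal);
        }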

    Read the article

  • Is there a way to take credit cards on my website without needing a merchant account/payment gateway?

    - by Erik
    I've been looking for a service like this but can't find one -- it boggles my mind that such a thing doesn't exist. The ideal thing I'm looking for would work something like this:

    1. User fills out a form on my website
    2. I submit the data to the service (card number, payment amount)
    3. The service pays me the charged amounts, perhaps monthly, less a fee

    This is more or less how accepting PayPal for payments works, except that it takes my users to PayPal's site and forces them to create a PayPal account, which I'd like to avoid. Does such a service exist?

    Read the article

  • Is using CSS3 a bad practice? [closed]

    - by Qmal
    Possible duplicate: Should I use HTML5 and/or CSS3 to build my website? I just want to know whether it's considered a "bad practice" to use things like rounded corners, gradients and so on. I understand that there are bots and crawlers that do not process CSS, but they don't need to, and nowadays most people use browsers that can process CSS3 with no problem. So should I make my buttons, shadows and such look pretty with CSS3, or with images?
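
    A minimal progressive-enhancement sketch (selector and colours are made up for illustration): browsers that understand CSS3 get the rounded corners, gradient and shadow, while older ones simply fall back to the solid background colour, so nothing breaks either way.

        .button {
            background-color: #3a7bd5;                            /* fallback for older browsers */
            background-image: linear-gradient(#3a7bd5, #2c5fa8);  /* CSS3 gradient */
            border-radius: 6px;                                   /* CSS3 rounded corners */
            box-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);             /* CSS3 shadow */
        }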

    Read the article

  • Approach to retrieve files from server

    - by Aerus
    I'm in the process of making a Java application with a corresponding update application. At any given time the user may want to update the application, and the updater will ask for a list of files of the latest release. Based on this list, the updater can determine which files need to be downloaded to complete the update. I now have two approaches to solve this, but I would like to know which approach will put the least stress on my application and server:

    1. I send a list of the files I want to download to my server, the server zips them, and simply returns this compressed file to the application.
    2. The updater sends a request for each separate file to the server, which simply returns the file.

    The application will be used mainly in Belgium and the Netherlands, and connections/bandwidth tend to be pretty decent here. The average size of a single file should be around 100 KB and at most 1 MB. I expect an update to have anywhere between 10 and 50 new files, and at most 100 people a day to update the application, i.e. in the week when a new version is released. I hope this is enough information to sketch my problem, and any advice is welcome. If there is another common way to tackle this, I'd be glad to hear it.
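
    A rough client-side sketch of the first approach, assuming a hypothetical endpoint that accepts a newline-separated file list in the request body and streams back a zip (the URL and protocol are made up for illustration):

        import java.io.*;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.*;
        import java.util.List;

        public class UpdateDownloader {
            /** POSTs the list of wanted files and saves the server's zip response to target. */
            public static void fetchAsZip(List<String> files, Path target) throws IOException {
                URL url = new URL("https://updates.example.org/bundle"); // hypothetical endpoint
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "text/plain; charset=utf-8");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(String.join("\n", files).getBytes(StandardCharsets.UTF_8));
                }
                // One request, one response: the server spends CPU on zipping, while the
                // client trades many small round trips for a single larger download.
                try (InputStream in = conn.getInputStream()) {
                    Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }

    The trade-off is essentially server CPU (zipping) against connection overhead (many small requests); with files of this size either variant is a modest load, so simplicity of implementation may matter more than raw efficiency.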

    Read the article

  • SEO Benefits of adding a Tumblr feed to site

    - by Paul
    A client of ours has a CMS-driven blog on his hotel site. He would like to use the blog to add depth to his site and gain SEO benefits from the blog's content. The current blog is a basic header/text field and doesn't contain any tagging or meta features. Unfortunately we don't have a .NET developer in our team to alter the existing blog and add meta/tagging, and there isn't budget to hire one. So I considered using a Tumblr blog: setting it up externally, giving it a blog.hotelname.com address and feeding it into the existing page via Tumblr's JS, which basically does a document.write into the page, which we can style. I understand from a previous post (Poor CMS blog vs Tumblr embed) that as a general rule most search engines ignore JS-created content, but will the above approach be an improvement on the existing system for now, given that the blog will be set up externally with its own URL and also feed into the existing site? Cheers, Paul

    Read the article

  • Why doesn't Wikipedia appear as a referral in Google Analytics' Traffic sources?

    - by Rober
    One of my clients has a website that got a non-spammy backlink in a Wikipedia article. When I test it for SEO purposes with Google Analytics (from different IPs), there is apparently no referral information: on the Real-Time view my test visit is visible, but with "There is no data for this view" in the referrals subview, and these visits appear as (direct) / (none) in the Traffic sources view. Wikipedia is not hiding its links' origin in any way, since it shows up in the server visit log. Is Google ignoring Wikipedia as a referral? Am I missing anything else?

    Update: Now it works, several days after the link became active. Maybe something detects how long the link has been there, so that it doesn't count right from the beginning, as a security measure? Many visits are actually not recorded.

    Read the article

  • Google search question, front page not showing

    - by user5746
    I know this is probably a dumb question, but I hope someone can give me some insight. I was ranked on Google's first page of search results for "funny st patricks day shirts", but I was third from the bottom and not familiar enough with SEO, so I signed up for "Attracta" to rank higher. Big mistake. Since using Attracta, I've lost the first page and I'm now on the fourth page for that search.

    What I noticed is that Google is now just showing a sub-page or side page (a link from my front page to a page which has only a few designs in it); this is not where I would want customers to land first, but my front page is not showing in that search anymore. Obviously, the title of this side page is not geared toward that search result, so I know that's why I had the PR drop. Why is my front page not ranking over that page, though? Why is it apparently gone from that search, or so far back that no one will ever find it?

    I need to know how to fix this quickly if anyone has any advice at all for me. It's the busiest season for my website, and the people who were stealing design ideas from me are all ranked higher than my site now (I can prove this, lol), so I'm very frustrated by that. I would be very grateful for any advice at all as to what I can do to fix this. THANKS in advance for any advice you can offer. Catelyn

    Read the article

  • Validation Meta tag for Bing [closed]

    - by Yannis Dran
    Author's note: I did my research before posting; the old "duplicate" was created over a year ago and things have changed since then. In addition, it was a generic question, whereas mine targets only ONE search engine: Bing. The FAQ is not clear about how we should deal with these "duplicate" cases. The "duplicate" can be found here: Validation Meta tags for search engines.

    Should I remove the validation meta tag for the Bing search engine after I validate the website?
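
    For reference, the tag in question is Bing Webmaster Tools' site-verification meta tag, which looks roughly like this (the content value is the token Bing issues for the site, shown here as a placeholder):

        <meta name="msvalidate.01" content="YOUR-BING-VERIFICATION-TOKEN" />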

    Read the article

  • Open source login solution

    - by David
    Authentication is such a general problem that most websites have to implement it. There are a few commercial solutions, but all lack sufficient functionality to customize the registration process. Therefore, I am looking for an open-source alternative. I am using PHP with PostgreSQL as the database, but as far as I understand one could use authentication solutions built on other technologies and integrate them into our site in various ways. So I am looking for such solutions in any technology, apart from those requiring Microsoft infrastructure. I would prefer an open-source solution which has already implemented the following features:

    - Has a password recovery procedure
    - Username is the email address of the user
    - Has "Remember me" functionality (meaning that the user is logged in automatically without seeing the login page)
    - Email address verification

    Google has gotten me nowhere on this, and neither has a search on this site.

    Read the article

  • Asterisk in URL?

    - by KajMagnus
    Are there any reasons I shouldn't use an asterisk (*) in a URL? Background: with asterisks, I could provide these nice and user-friendly (or what do you think?) URLs:

    - example.com/some/folder/search-phrase* means search for pages with names starting with "search-phrase", located in /some/folder/.
    - example.com/some/**/*search-phrase* means search for any page with "search-phrase" anywhere in its name.
    - example.com/some/folder/* means list all pages in /some/folder/ (rather than showing the /some/folder/index page).

    Read the article

  • Client website compromised, found a strange .php file. Any ideas?

    - by Kevin Strong
    I do support work for a web development company, and today I found a suspicious file called "hope.php" on the website of one of our clients. It contained several eval(gzuncompress(base64_decode('....'))) commands, which on a site like this usually indicates that it has been hacked. Searching Google for the compromised site, we got a bunch of results that link to hope.php with various query strings, each of which seems to generate a different group of SEO terms (the second result from the top is legitimate; all the rest are not).

    Here is the source of "hope.php": http://pastebin.com/7Ss4NjfA

    And here is the decoded version I got by replacing the eval()s with echo(): http://pastebin.com/m31Ys7q5

    Any ideas where this came from or what it is doing? I've of course already removed the file from the server, but I've never seen code like this, so I'm rather curious about its origin. Where could I go to find more info about something like this?
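
    For the record, the usual way to inspect such a payload is the one the question describes: decode it without ever executing it. A minimal sketch of that step ($payload stands in for the base64 string from hope.php, which is elided here):

        <?php
        // Decode the obfuscated payload for inspection only -- never eval() it.
        $payload = '...base64 string from hope.php...';
        $decoded = gzuncompress(base64_decode($payload));
        file_put_contents('decoded.txt', $decoded);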

    Read the article

  • Lazyloading images and SEO

    - by surpr
    I'm lazy-loading images with a noscript fallback. Should I expect any damage in the SERPs? The site is completely thumbnail-based. Also, should I put a smaller image size in the noscript fallback to increase crawlability? We have nearly 1 million thumbnails, so it's a decision I'm hesitant to make. The reason I'm thinking about it in the first place is that we're upping the thumbnail size by about 50%, which will add roughly 10% to the page size.
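
    A minimal sketch of the markup pattern in question (the data-src attribute name depends on the lazy-loading script in use, and the smaller noscript variant is the idea floated in the question, not an existing setup):

        <img data-src="/thumbs/large/1234.jpg" src="placeholder.gif" alt="thumbnail">
        <noscript>
            <!-- What crawlers and no-JS visitors see; it could point at a smaller variant. -->
            <img src="/thumbs/small/1234.jpg" alt="thumbnail">
        </noscript>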

    Read the article

  • Category to page and blocking category URLs via robots.txt - good for SEO?

    - by user2952353
    I am using a template which, in pages, allows me to add sidebars and extra content below and above the content I want to pull from a category, which is very helpful. If I create pages to display my categories' content, won't the page URLs conflict with the category URLs? By conflict I mean causing a duplicate-content error. What I thought might help was to block the blog's category URLs via robots.txt, e.g. /category/books and /category/music. Would that be a good practice in order to avoid the duplicate-content penalty? Any tips appreciated.
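
    The robots.txt rule being considered would look roughly like this (paths taken from the question); note that Disallow only stops crawling, it does not by itself remove URLs that are already indexed:

        User-agent: *
        Disallow: /category/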

    Read the article

  • Installing Ruby on Rails without access to command line

    - by Darwin
    I'm VERY new to this whole web dev thing, but I can program, and I liked Ruby when I used it before. Now I've got web hosting and a domain, and a site on there that's currently run under Joomla, but I'd like to experiment with Rails. The most access I can get to the server is FTP and maybe a setting here and there in the control panel - definitely no command line. Is there a way to just, I don't know, upload Ruby on Rails to a folder and run it in a browser? That's how Joomla works, I think. Literally every article I read about this starts with "you just do sudo get..." mumbo jumbo.

    Read the article

  • Estimate of Hits / Visits / Uniques in order to fall within a given Alexa Tier?

    - by Alex C
    Hi there! I was wondering if anyone could offer up rough estimates of how many hits a day move you into a given Alexa rank:

    - Top 5,000
    - Top 10,000
    - Top 50,000
    - Top 100,000
    - Top 500,000
    - Top 1,000,000

    I know this is incredibly subjective, and thus the broad brush strokes with the number ranges... BUT I've got a site currently ranked just over 1.2M worldwide and over 500K in the USA (http://www.alexa.com/siteinfo/fstr.net). Pretty cool for something hand-built on weekends (pat self on back). I was applying to an ad platform and was told that their program doesn't accept webmasters who have an Alexa rank greater than 100,000 (time to take back that pat on the back, I guess). I know that my hits in the last 30 days are somewhere on the order of 15,000 uniques and 20,000 pageviews. So I'm wondering how much harder I have to work to achieve my next "goals"? I'd like to break into the top million, then re-evaluate from there. It'd be nice to know what those targets translate into (very roughly, of course). I imagine that Alexa ranks and tiers become very much exponential as you move up the ranks, but even hearing anecdotal evidence from other webmasters would be really useful to me (i.e. "I have a site that is ranked X and it got Y hits in the last 30 days"). Thanks :) - Alex

    Read the article

  • What is the best shopping cart or implementation for unlimited users posting unlimited products? [closed]

    - by Matt
    I've been working with X-Cart a lot lately, and I was thinking about using it for a much larger site, but I don't know if it can handle what I'm looking for. I need a platform or strategy that can support as many users as possible, where each user can post multiple products (hopefully up to a hundred, but that's less important) in their own private catalogs. So what am I looking for? With X-Cart, I'm used to customizing it with jQuery, Smarty and PHP, so I can handle that much.

    Read the article

  • Strategy for hosting 700+ domains names, each with a static HTML site

    - by jonschlinkert
    I have a portfolio of more than 700 domain names, and ideally I'd like to put up a single-page HTML/CSS/JavaScript site for each domain. Is there a system/strategy/workflow that will allow me to:

    1. Automate the deployment of new websites, quickly and easily, without having to manually initiate each new website in an admin panel. For instance, I've seen Dropbox-based solutions that claim to make it simple to set up new websites on your Dropbox account, but you still have to set each one up in an admin interface first. It would be so much easier to have a folder naming convention that allowed the user to easily clone/copy/duplicate sites inside their Dropbox App folder (https://www.dropbox.com/developers/blog/23) to create new ones. Sounds interesting, however...
    2. It's easy to manage CNAMEs on the registrar side, but is there a way to quickly associate CNAMEs with new websites on the hosting side, maybe using the method offered by GitHub Pages (https://help.github.com/articles/setting-up-a-custom-domain-with-pages)? With GitHub's gh-pages, all you have to do is drop a file called CNAME into your repo, with the domain name you want associated with the repo inside the file. Unfortunately, gh-pages isn't a good solution for what I'm doing.

    I'm also a front-end developer specializing in rapid web development and "front-end build systems", so building and maintaining static assets for hundreds of sites is no problem. It's the hosting side that I really struggle with. Any suggestions?

    Read the article

  • I've changed my URL schema. How do I tell Google to index the new schema and forget the old one?

    - by growse
    I had a site where the URLs were constructed like this:

        /index.php/Topic
        /index.php/AnotherTopic

    These were indexed in Google, and search results were returned that pointed to them. However, I've recently replatformed the site and reconfigured it so the above URLs become:

        /index.php?title=Topic
        /index.php?title=AnotherTopic

    The original URLs now return 404s. The site links to the correct URL schema internally, but Google is retaining the original schema in its search results. I've updated and resubmitted the sitemap, which only contains the new schema. Also, Google's Webmaster Tools is going slightly bananas at the spike in 404 errors in its crawl results. What would be the best approach to get Google to "forget" about the old schema and instead index the new one? Should I try blocking /index.php/ in robots.txt? Should I be returning 301 codes instead of 404 for the original URLs?
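
    If the old path-style URLs map mechanically onto the new query-string form, the 301 route can be expressed in a single rewrite rule; a sketch for Apache mod_rewrite in the site root's .htaccess (assuming the topic name carries straight across):

        RewriteEngine On
        # /index.php/Topic  ->  /index.php?title=Topic  (permanent redirect)
        RewriteRule ^index\.php/(.+)$ /index.php?title=$1 [R=301,L]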

    Read the article

  • Using mod_speling with multi-level htaccess and rewriterules

    - by michaelcgorman
    We recently switched formats for managing our 301s. For the most part everything went well, but it seems to have stopped mod_speling from working properly. Here's what we changed.

    Old /var/www/html/.htaccess:

        RewriteEngine on
        RewriteBase /
        # Change SHTML to HTML
        RewriteRule ^(.*)\.shtml$ $1.html [R=permanent,L]
        # Change PCF to HTML ('cause, you know, we probably have CMS users like that...)
        RewriteRule ^(.*)\.pcf$ $1.html [R=permanent,L]
        # Force WWW subdomain for all requests
        RewriteCond %{HTTP_HOST} !^www.example.edu$ [NC]
        RewriteRule ^(.*)$ http://www.example.edu/$1 [R,L]
        # User accounts are on sun.example.edu
        RedirectMatch ^/~(.*)$ http://sun.example.edu/~$1
        # Remove index.html at the end of URLs
        RewriteCond %{REQUEST_URI} ^(.*/)index\.html$ [NC]
        RewriteRule . %1 [R=301,NE,L]
        Redirect 301 /academics/calendar2012-13.html http://www.example.edu/academics/calendar.html
        Redirect 301 /academics/departments/ http://www.example.edu/majors/
        Redirect 301 /academics/Pre-Medical.pdf http://www.example.edu/academics/Pre-Medicine.pdf
        Redirect 301 ...

    New /var/www/html/.htaccess:

        RewriteEngine on
        RewriteBase /
        # Change SHTML to HTML
        RewriteRule ^(.*)\.shtml$ $1.html [R=permanent,L]
        # Change PCF to HTML ('cause, you know, we probably have CMS users like that...)
        RewriteRule ^(.*)\.pcf$ $1.html [R=permanent,L]
        # Force WWW subdomain for all requests
        RewriteCond %{HTTP_HOST} !^www.example.edu$ [NC]
        RewriteRule ^(.*)$ http://www.example.edu/$1 [R,L]
        # User accounts are on sun.example.edu
        RedirectMatch ^/~(.*)$ http://sun.example.edu/~$1
        # Remove index.html at the end of URLs
        RewriteCond %{REQUEST_URI} ^(.*/)index\.html$ [NC]
        RewriteRule . %1 [R=301,NE,L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*) 404/$1

    And then we added a new file at /var/www/html/404/.htaccess:

        RewriteEngine on
        RewriteBase /404
        RewriteRule ^academics/calendar2012-13.html$ /academics/calendar.html [R=302,L]
        RewriteRule ^academics/departments/$ /majors/ [R=301,L]
        RewriteRule ^academics/Pre-Medical.pdf$ /academics/Pre-Medicine.pdf [R=301,L]
        RewriteRule ...

    I do have (Webmin-based) access to httpd.conf (though we don't want to store all our 301s there if possible). We're running Apache 2.2.15 on RHEL 6 on a server in our own data center. Like I said, the only problem we're seeing is that mod_speling isn't doing its magic anymore. The new format has so many advantages over the old that we really don't want to go back, but mod_speling is so nice to have that we'd also really like it to work if possible. Any ideas for how we might be able to fix mod_speling?

    Read the article
