Search Results


  • CMSs & ERPs for hospital management system

    - by Akshey
    Hi, what are the best free CMSs, CMS plugins, ERPs, or other free tools for developing a hospital management system? I want to develop it for a children's hospital run by my father. The hospital is small, with two doctors. Currently everything is done manually on paper. The main people who will use the system are the receptionist, the two doctors, the chemist and the medical lab technician. They will use it mainly to keep patient records. Patients will not interact with the system directly. The system needs to be user friendly and easy to learn. I was thinking of building such a system with a CMS, an ERP or some other free tool. I have used WordPress and Drupal in the past but have never used an ERP. Can you please guide me towards building such a system using free, and preferably open source, tools? Thanks, Akshey

    Read the article

  • How to CURL and avoid timeout death (Twitter Down) [migrated]

    - by David
    Twitter is down right now, and one of my site's home pages relies on getting data from Twitter (relying on it is the problem; it should be more of an accessory feature, as it just shows the follower count from its feed). Here's the code in question:

        function socials_Twitter_GetFollowerCount($username) {
            $method = function () use ($username) {
                return file_get_contents('https://api.twitter.com/1/users/show.json?screen_name='.$username.'&include_entities=true');
            };
            $json = cache('bmdtwitter', 3600, $method, false);
            $json = json_decode($json, true);
            return intval($json['followers_count']);
        }

    What is a good way to make it so that if Twitter is down (or not responsive for some reasonable amount of time), our site doesn't appear to be down? I think the timeout may be defaulting to 30-60 seconds or more.
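
    One way to bound the wait, as a minimal sketch rather than the poster's actual fix: file_get_contents() accepts a stream context with an HTTP timeout, so the fetch can give up after a few seconds and fall back to a default value. The existing cache() wrapper could still be used around it as before.

        function socials_Twitter_GetFollowerCountWithTimeout($username) {
            // Hypothetical sketch: cap the HTTP wait at 5 seconds so a slow or
            // unresponsive API cannot stall page rendering.
            $context = stream_context_create(array(
                'http' => array('timeout' => 5),   // seconds before the read gives up
            ));
            $url = 'https://api.twitter.com/1/users/show.json?screen_name='
                 . urlencode($username) . '&include_entities=true';

            // Suppress the warning on failure and degrade quietly to 0 followers.
            $json = @file_get_contents($url, false, $context);
            if ($json === false) {
                return 0;
            }
            $data = json_decode($json, true);
            return isset($data['followers_count']) ? intval($data['followers_count']) : 0;
        }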

    Read the article

  • Google Analytics: custom variables data discrepancy

    - by Bart
    We've set up tracking through custom variables in Google Analytics to measure which offices are getting the most traffic. The custom variable consists of a key (office) and a value (the office name). In the Custom Variables tab under Audience we get no data (actually we got one hit, but we think the data is way off). When we set up advanced segments with filters on the key and value, we get the correct data. Now we are wondering why we aren't getting that data in the Custom Variables tab.
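
    For reference, a minimal sketch of how a custom variable is recorded with classic ga.js: the variable has to be pushed before a hit such as _trackPageview in order to be attached to it, which is a common reason the Custom Variables report stays empty. The slot number, scope and values below are placeholders, not the poster's actual setup.

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);          // placeholder property ID
        // Slot 1, key 'office', value is the office name, scope 3 = page-level.
        // The push must happen before _trackPageview so the variable rides on that hit.
        _gaq.push(['_setCustomVar', 1, 'office', 'Brussels', 3]);
        _gaq.push(['_trackPageview']);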

    Read the article

  • Can't get Rewrite rule to keep original URL

    - by user38100
    I have these rewrites, but I would like the URL to stay the same as what was typed originally. I thought removing the [R] flags would stop the redirect, but it hasn't:

        RewriteCond %{HTTP_HOST} ^examplea\.example\.com$ [NC]
        RewriteRule (.*) http://examplea.example.com:32400/web [L]

        RewriteCond %{HTTP_HOST} ^exampleb\.example\.com$ [NC]
        RewriteRule (.*) http://exampleb.example.com:9091 [L]

    Edit: would this work better?

        RewriteCond %{HTTP_HOST} ^hello.example.com$
        RewriteRule ^(/)?$ welcome [L]
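
    A likely reason, for context: when a RewriteRule target is a fully qualified URL on another host or port, mod_rewrite treats it as an external redirect whether or not [R] is present. Keeping the original URL in the address bar would mean proxying instead, for example with the [P] flag (requires mod_proxy and mod_proxy_http). A hedged sketch, not a drop-in config:

        # Proxy the request instead of redirecting, so the browser keeps the
        # original host name. mod_proxy and mod_proxy_http must be enabled.
        RewriteCond %{HTTP_HOST} ^examplea\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://examplea.example.com:32400/web/$1 [P,L]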

    Read the article

  • How to Stop Browser from rejecting my downloads

    - by melki0795
    I have a portfolio site where I am hosting some of my work so people can download it. Some of the files are exe executables, and some are jar executables run through batch files. When a user tries to download one of my apps, the browser says that the file is not commonly downloaded and may be harmful, and therefore blocks the download. If I zip the folders, it still does the same thing; whatever format I choose, the download is still blocked. How can I stop Chrome from doing this? Is there a way I can verify my files so they will be considered trusted? Thanks in advance!

    Read the article

  • Does Google count backlinks from the homepage to inside pages?

    - by SharkTheDark
    I have a site with good PR, and my inside pages are seeing a PR increase even though no links point to them other than from my homepage. Does that mean Google counts ALL links on my homepage, including links to inside pages? Does it calculate the PR of inside pages from links coming from my own domain, i.e. my homepage, too? Also, if inside pages that got high PR from the homepage link back to the homepage, will that increase the homepage PR further, since those links should count too? By the Google PR formula, by the calculations on Wikipedia, and by the Stanford explanation of the algorithm (where PageRank was originally developed), such links are counted, and the increased backlinks are counted again on the next pass, circling a few iterations (it converges because of the damping factor d = 0.85), but they are counted. Does anyone know if this is correct?
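
    For reference, the classic form of the PageRank formula the question alludes to, where T_1 ... T_n are the pages linking to A, C(T_i) is the number of outbound links on page T_i, and d is the damping factor:

        PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right), \qquad d \approx 0.85

    Internal links are not treated specially in this formula, which is why links from the homepage to inside pages (and back again) do contribute on each iteration until the values converge.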

    Read the article

  • Is there an open source solution that I can host on a web server that will allow users to anonymously upload a file to me?

    - by mjn12
    I'm looking for some kind of web application I can host on my Linux web server that will allow users to upload files of arbitrary size to me from their browser without requiring them to log in. Ideally this application would let me generate a link to my website that allows a one-time upload. It might contain a unique, random key that is only good for that session. I could email them the link, they click it, and they are taken to a page where they can upload their file to me. I'm mainly targeting friends and family who need to send me files that are too large for email. I don't want to require them to install anything (Dropbox), sign up and log in, etc. I'm definitely not teaching them to use FTP. This wouldn't be a difficult project to roll on my own, but I'd like to take something off the shelf if possible. Does anything like this exist that my google-fu isn't turning up?
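
    For anyone weighing the roll-your-own option, a minimal PHP sketch of the one-time-key idea described above (file names and paths are assumptions, and this is not a recommendation of any specific package): issue a random token, hand it out in the link, and burn the token after the first successful upload.

        <?php
        // upload.php - hypothetical sketch of a one-time upload link.
        // A token would be issued elsewhere, for example:
        //   $token = bin2hex(openssl_random_pseudo_bytes(16));
        //   touch("tokens/$token");   // then email https://example.com/upload.php?key=$token
        $key = isset($_GET['key']) ? basename($_GET['key']) : '';
        if ($key === '' || !is_file("tokens/$key")) {
            http_response_code(403);
            exit('This upload link is invalid or has already been used.');
        }
        if (!empty($_FILES['upload'])) {
            move_uploaded_file($_FILES['upload']['tmp_name'], "incoming/$key");
            unlink("tokens/$key");   // burn the token so the link only works once
        }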

    Read the article

  • Help! Requesting a change of URL on Yahoo Directory!

    - by Sei
    I submitted a couple of websites to the Yahoo Directory a month ago. For some reason, the URL they listed was not the URL I asked for: they listed the Japanese version instead of the submitted English version (this is an English directory, so it is obviously a mistake). I requested changes and the request was accepted, but in reality they promised a change that was never actually made. I contacted them again and again through the 'request a change in URL' form, but there is no answer. Is there any effective way to reach them, preferably a phone number or email? Thanks a lot!

    Read the article

  • Is the content on the front page considered a duplicate of the post?

    - by yibe
    I asked this same question on Stack Overflow, but it was closed as off topic, so I am posting it here. In WordPress blogs, the front page displays many posts in full or as excerpts. When the link to a post is clicked, the content is opened with another template file (single.php). Is the content displayed on the front page considered a duplicate of the post pages? Does it harm SEO in any way?

    Read the article

  • How to use a database to generate content pages for multiple folders? [migrated]

    - by VenomVipes
    Scenario: I am trying to build a mobile entertainment portal. It will let users download music and movies to their cell phones... Problem example: suppose I upload 100 folders of songs, each folder holding one album. I want a way to generate a page listing all the folder names (album names). If a user clicks an album on that page, they should be taken to a page listing all the songs in that album, and clicking any song name should let them download it. Can this be done automatically, or will I have to design each of the three pages by hand for every album? If I do that, it is time consuming and will also make it difficult to change anything like the footer or header...
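
    A minimal sketch of the generated-page idea (directory names and layout are assumptions; a database table of albums and songs could be queried the same way instead of scanning folders): one script renders both levels, so header and footer changes live in one place.

        <?php
        // index.php - hypothetical sketch: one template generates both levels.
        $album = isset($_GET['album']) ? basename($_GET['album']) : '';

        echo '<h1>Albums</h1>';   // shared header would go here

        if ($album === '') {
            // Level 1: list every album folder as a link to its own listing.
            foreach (glob('albums/*', GLOB_ONLYDIR) as $dir) {
                $name = basename($dir);
                echo '<a href="?album=' . urlencode($name) . '">'
                   . htmlspecialchars($name) . "</a><br>\n";
            }
        } else {
            // Level 2: list the songs in the chosen album as download links.
            foreach (glob("albums/$album/*") as $file) {
                $song = basename($file);
                echo '<a href="albums/' . rawurlencode($album) . '/' . rawurlencode($song) . '">'
                   . htmlspecialchars($song) . "</a><br>\n";
            }
        }
        // shared footer would go here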

    Read the article

  • Move site to new domain divided by language across subdomains

    - by mark
    I managed to find a nice domain for a fairly fledgling site of mine that actually hasn't been parked by scumbag squatters. Given the upcoming move, I'm thinking I'd take the opportunity to split the content across subdomains according to language, much like Wikipedia for example.

    Current:

        www.old-domain.com/en/subject   # English
        www.old-domain.com/subjecto     # Spanish (default, so no locale in URL)

    Proposed:

        en.new-domain.com/subject
        es.new-domain.com/subjecto

    The advantage of doing this is a fairly competitive keyword, such that I may wish to put a copy of my application on a Spanish slice in order to gain a few SERPs. Also pure vanity. Google's Webmaster Tools allows me to move to the new domain, and I can add the root domain and the subdomains but forward to only one. I'll 301 from the old domain appropriately, but is there anything I should know about Webmaster Tools in this respect, where effectively I'm moving to two addresses? (Feel free to dissuade me from doing this in comments if it's a bad idea.) I've now asked this same question on Google's forums.
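
    For the 301 part, a hedged .htaccess sketch of splitting the old URLs by language (domain names taken from the example above; rule order matters so the /en/ prefix is caught first):

        RewriteEngine On
        # English content lived under /en/ on the old domain.
        RewriteRule ^en/(.*)$ http://en.new-domain.com/$1 [R=301,L]
        # Everything else was the Spanish default, so it goes to the es. subdomain.
        RewriteRule ^(.*)$ http://es.new-domain.com/$1 [R=301,L]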

    Read the article

  • How does 301 redirection work across the network, and should I use it if there is a chance we may need to change the resource back to the original URL?

    - by Faust
    I've built a CMS that makes it fairly easy for my client to relocate pages in their site hierarchy. This site has all human-readable, intuitive URLs, so moving a page necessarily means that its URL changes. I am storing records of each resource's past URLs in the data store so that requests for bygone URLs are re-routed to their appropriate successors. I'm warning my clients not to rearrange the site willy-nilly (for numerous reasons), but nevertheless I suspect there's a chance page moves could get reversed from time to time. So I'm trying to figure out whether 301, 302 or 307 redirects should be used when serving pages for out-of-date URLs. I understand the value of using 301 for search engine optimization, but my concern is that this system might inadvertently make some pages unavailable to some users. Questions: if the client moves a page at location/URL A to a new location B, users get the redirect from A to B, and then the client moves the page back to A again, how long can I expect any of those users to keep getting their requests for A redirected to B, in this case sending them to my friendly 404 page? Is it until an item in their browser history is cleared? Is the redirect somehow cached in routers throughout the internet? How does this work? How long can I expect the 301 redirect to linger out there?
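
    For context on the moving parts: 301 responses may be cached by browsers (and by intermediate HTTP caches that honor the response's caching headers), not by network routers, so a reversed move can keep earlier visitors bouncing from A to B until their cache expires or is cleared. A hedged sketch of choosing the status code per move, with 302 for moves the client might still undo; lookup_successor() is an assumed helper, not part of the poster's CMS:

        <?php
        // Hypothetical sketch: look up the successor of an out-of-date URL and redirect.
        $new = lookup_successor($_SERVER['REQUEST_URI']);
        if ($new !== null) {
            // 302 (Found) is not cached aggressively, so it is safer while a move
            // might still be reversed; switch to 301 once the move is final.
            $permanent = false;
            header('Location: ' . $new, true, $permanent ? 301 : 302);
            exit;
        }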

    Read the article

  • Why do spammers use CELESTRON NEXTAR 6SE?

    - by fmz
    I am running a website for a volunteer organization that hosts an annual event. There is a form where people can volunteer to bring items for the event. All too frequently I get spam from users across the globe who enter things like this:

        Country - 1: Australia   Material - 1: CELESTRON NEXTAR 6SE
        Country - 2: Australia   Material - 2: C8 Newton
        Country - 3: Australia   Material - 3: ETX 125EC
        Country - 4: Australia   Material - 4: ETX 125EC
        Country - 5: Australia   Material - 5: CELESTRON NEXTAR 6SE

    I don't really care about the country, but what is it with the telescope stuff? Is there some hidden meaning behind all this, or is it some astronomy group that moonlights as spammers?

    Read the article

  • Comments Application SEO

    - by user1015448
    I am developing a commenting application. Users will be able to integrate this application into blogs. I am unsure how to make the comments searchable by search engines. What I want is for all comments that are posted to be included in search engine results when someone searches for relevant keywords. Please give me a hint on how to do this. Do I need to use meta tags? If so, how should I create them?
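
    One general point worth illustrating (a sketch of the idea, not a specific product's API): meta tags alone won't make comments indexable. Crawlers mostly index whatever HTML the blog page serves, so comments injected only by client-side JavaScript into a widget or iframe are easy to miss, while comments rendered into the page markup on the server are indexed along with the post. A hypothetical server-rendered fragment, with $comments assumed to come from the application's own storage:

        <?php
        // Hypothetical sketch: emit comments as plain HTML inside the blog page itself
        // (e.g. via a server-side plugin), so crawlers index them with the post.
        foreach ($comments as $comment) {
            echo '<div class="comment">';
            echo '<p>' . htmlspecialchars($comment['text']) . '</p>';
            echo '<cite>' . htmlspecialchars($comment['author']) . '</cite>';
            echo '</div>';
        }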

    Read the article

  • I got a MySQL error on this statement and I don't know why [closed]

    - by John Smiith
    I got a MySQL error on this statement and I don't know why. The error is:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds
        to your MySQL server version for the right syntax to use near 'CONSTRAINT
        fk_objet_code FOREIGN KEY (objet_code) REFERENCES objet(code) ) ENG' at line 6

    The SQL code is:

        CREATE TABLE IF NOT EXISTS `class` (
          `numero` int(11) NOT NULL AUTO_INCREMENT,
          `type_class` varchar(100) DEFAULT NULL,
          `images` varchar(200) NOT NULL,
          PRIMARY KEY (`numero`)
          CONSTRAINT fk_objet_code FOREIGN KEY (objet_code) REFERENCES objet(code)
        ) ENGINE=InnoDB;;
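
    For what it's worth, the two things the parser appears to be tripping over (a hedged reading of the error, assuming the objet table already exists): there is no comma after PRIMARY KEY (`numero`), and the foreign key references a column objet_code that the table never defines. A corrected sketch:

        CREATE TABLE IF NOT EXISTS `class` (
          `numero` int(11) NOT NULL AUTO_INCREMENT,
          `type_class` varchar(100) DEFAULT NULL,
          `images` varchar(200) NOT NULL,
          `objet_code` int(11) NOT NULL,   -- column the key points from; its type must match objet.code
          PRIMARY KEY (`numero`),          -- the comma was missing here
          CONSTRAINT fk_objet_code FOREIGN KEY (`objet_code`) REFERENCES objet(`code`)
        ) ENGINE=InnoDB;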

    Read the article

  • Google search preview shows content not on the website

    - by SDG
    My website's Google search entry is messed up. In the preview in Google search results, I see things like cracks, serials and random IP addresses. I scanned all files and my computer for viruses and malware and could not find anything. I also tried downloading and re-uploading all content from a friend's computer, and still that content persists. I also scanned the source code of all files, but the content does not appear in any file. Google also does not detect any malware on the website, as seen in their Webmaster Tools. I have searched using the same keywords in other search engines such as Bing and Yahoo, and the search results there are fine. I am quite clueless as to what the cause could be and what a possible remedy would be.

    Read the article

  • Google Analytics not tracking data correctly: an IP-address issue?

    - by PaperThick
    I have developed a small site for a client, and the site has been placed inside an <iframe> on the client's site. The GA script I'm using looks like this:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(
            ['_setAccount', 'UA-XXXXXXXX-2'],           // My company's GA account
            ['_trackPageview'],
            ['b._setAccount', 'UA-XXXXXXXX-1'],         // Test GA account
            ['b._trackPageview'],
            ['th._setAccount', 'UA-XXXXXXX-3'],         // Client GA account
            ['th._setDomainName', '.clientdomain.se'],
            ['th._trackPageview']
          );

          (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>
        </head>

    As you can see, I report the GA pageviews to the client as well. The GA script is tracking visitors and pageviews at both ends, but the problem is that on my client's side the visitor count is more than double what it is on my end (20 000 vs 5 000). At first I thought it was being duplicated at some point, but when I checked my Crazy Egg account I saw that it had tracked over 10 000 visits and then stopped tracking because that was the limit on my account. The page my site is on is served from an IP address (http://XXX.XXX.XX.X/campaign/) and not from a "valid URL". Could that be a reason why some of the visitors aren't being tracked? Thanks in advance

    Read the article

  • How do you create links with a NULL or # in Drupal?

    - by blunders
    Trying to create folders for links where the parent has no content; it's just a folder. I need to be able to insert #, but Drupal says it's not a link. I just want the user to click it and nothing happen; the child of that menu item will already be displayed without a click. Version: Drupal 6 (this appears to have worked in D5). I've attempted the following: '', #, <#>, empty, <empty>, null, <null>, blank, <blank>, <none>, none, <answer> ...just kidding. ERROR: The path '<insert_non-url>' is either invalid or you do not have access to it. Questions? Just ask -- thanks!

    Read the article

  • How to Do htaccess 301 Redirect from Old Filename Pattern to New Filename Pattern?

    - by user249493
    I have a bunch of old files prefixed with "old-" (e.g. "old-abcde.php"). I need an .htaccess rule to set up a 301 redirect so that any request for a file starting with "old-" goes to its corresponding new version (e.g. "abcde.php"). To be clear, I have many files, not just one, so I can't do a literal filename match; I basically just need to strip the "old-" off the request and redirect to the version without it. I know I probably just need a simple regular expression, but I'm not good at writing them. Can anyone provide assistance?
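
    A hedged .htaccess sketch of the pattern described (assuming the files live in the site root; adjust the path otherwise):

        RewriteEngine On
        # Capture everything after the "old-" prefix and 301 to the bare name.
        RewriteRule ^old-(.+)$ /$1 [R=301,L]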

    Read the article

  • Tracking logged in vs. non-logged in users in Google Analytics

    - by Justin
    I am building a social media site that is similar in structure to Twitter and Facebook, where unauthenticated users who go to https://mysite.com will see a login + sign-up page, and authenticated users who go to https://mysite.com will see their timeline. My question is: what is the best practice (using Google Analytics) for tracking these two different types of users, who are viewing completely different content but are visiting the same URL? I tried searching the Google Analytics docs but couldn't find what they suggest for this scenario. Perhaps I just don't know what keywords to search for. Thanks in advance for any help.
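
    One common approach, sketched with the classic ga.js API that appears elsewhere on this page (slot number, names and the isLoggedIn variable are placeholders): label each visit with a session-scoped custom variable for the login state, and optionally report a virtual page name so the two experiences of the same URL separate cleanly in reports.

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);   // placeholder property ID
        // Slot 2, session scope (2): every hit in this visit carries the login state.
        _gaq.push(['_setCustomVar', 2, 'userStatus', isLoggedIn ? 'member' : 'visitor', 2]);
        // Virtual pageview so the signup page and the timeline report separately.
        _gaq.push(['_trackPageview', isLoggedIn ? '/home/timeline' : '/home/signup']);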

    Read the article

  • Redirect a URL ending with a dot

    - by Michael
    I submitted my site's URL to my workplace's printed newsletter, and when I got the printed version, they had added a dot to the end of it. Some people will realize that the period is not part of the URL, but others will not. Is there an easy way to redirect from http://example.com/home. to http://example.com/home? I have IIS 7.0 shared hosting with GoDaddy, which means I have access to the box only through their interface, so some options might be limited.
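
    A heavily hedged web.config sketch, assuming the IIS URL Rewrite module is available on the shared host and the request actually reaches it with the trailing dot intact (IIS handling of trailing dots can vary, so this would need testing on the host):

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <!-- 301 any path ending with a dot to the same path without it. -->
                <rule name="Strip trailing dot" stopProcessing="true">
                  <match url="^(.*)\.$" />
                  <action type="Redirect" url="{R:1}" redirectType="Permanent" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>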

    Read the article

  • How are certain analytics metrics (time on site, etc.) usually distributed?

    - by a barking spider
    I'm not sure if I've come to the right place to ask this question, but I'm gathering some information for a research project. We're trying to design an experiment that will heavily involve web analytics, and I'm trying to figure out some sensible values of mean +/- standard deviation for the following visitor-level metrics (i.e., if visitor 1 spends 2 minutes on site and visitor 2 spends 1 minute, that is a mean of 1.5 +/- 0.71):

        - time spent on site
        - page views

    If time allowed, we would put up the sites and gather the information ourselves, but we have a grant deadline coming up. I realize that the distributions of these quantities are probably heavily skewed towards zero, but we need some reasonable figures or estimates of them in order to do sample size calculations, etc. Anyway, I'm not sure where else I'd turn, and I have certainly had a difficult time finding these values in the prior literature. If someone could direct me to a paper with the right information, or if you have these figures on hand (perhaps taken directly from your logs!), that would be amazing, and I'd love to hear from you. Thanks in advance, and even though I'm not allowed to reveal too much, rest assured that this info will be applied towards a good cause :)

    Read the article

  • Schema.org for Product Reviews

    - by Lynda
    I have product reviews on a site and I am adding schema.org markup to the reviews. Here is the code I am using:

        <div class="blockquote-wrap">
          <blockquote itemprop="review" itemscope itemtype="http://schema.org/Review">
            <span itemprop="reviewBody">Text of the review itself.</span>
            <cite><span itemprop="author">Author Name</span>, Location of Author</cite>
          </blockquote>
        </div>

    This is all the reviews are. When I test the page using Google's Structured Data Testing Tool I receive this error: "Error: Incomplete microdata with schema.org." My question is: what data is missing that is required? I don't see which data is required on the schema.org page for reviews.
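
    For comparison, a hedged sketch of the same markup with the pieces Google's tool usually expects for a review: the review nested inside the item being reviewed, plus a rating. The names and values below are placeholders, not the poster's data.

        <div itemscope itemtype="http://schema.org/Product">
          <span itemprop="name">Product Name</span>
          <blockquote itemprop="review" itemscope itemtype="http://schema.org/Review">
            <span itemprop="reviewBody">Text of the review itself.</span>
            <cite><span itemprop="author">Author Name</span></cite>
            <span itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
              <meta itemprop="ratingValue" content="5">
            </span>
          </blockquote>
        </div>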

    Read the article

  • How do I get the root index page to redirect to a subdirectory without affecting SEO?

    - by paradroid
    I am reviving/reorganising my personal WordPress blog. It's using a URL that looks like this: http://mydomain.com/blog. The web server 301-redirects www.mydomain.com to mydomain.com. I want to use the blog subdirectory because I plan to add other parts to the site, with the blog being only one part. However, at the moment there is nothing there but the blog, so I want the root index page to redirect to the blog for the time being. I have been using this on the root index.html page to do the redirect...

        <meta http-equiv="REFRESH" content="0;url=./blog"></HEAD>

    ...but this seems to have stopped the site being indexed by Google and Bing. How do I do this without affecting SEO? Also, what URL should I put in the sitemap.xml?
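
    One commonly suggested alternative to the meta refresh, as a hedged sketch assuming Apache with .htaccess available: answer the bare root with a server-side 301 (or a 302 while the arrangement is only temporary) so crawlers follow it cleanly, and list the /blog/ URLs rather than the root in sitemap.xml.

        RewriteEngine On
        # Redirect only the site root, not every path, to the blog subdirectory.
        RewriteRule ^$ /blog/ [R=301,L]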

    Read the article

  • Is there a way to return a response every x seconds or so to a single http request?

    - by luis
    I'm wondering if it's possible to send a response every second or so to a single HTTP request. For example, the client makes an HTTP request, then the server sends a space character every second. This could be never-ending or limited, for example to a minute. I think the word 'response' is misleading in this context, since I don't necessarily mean an HTTP response: the whole HTTP response could be composed of the space characters, which would mean a single HTTP response to a single HTTP request, except that it takes a minute to complete. I tried chunked encoding but I don't think it works, or at least my implementation is wrong.
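
    This kind of trickle response is doable as long as nothing between the script and the client buffers the output. A minimal PHP sketch (PHP chosen only because it appears elsewhere on this page; any server-side language with an explicit flush works the same way):

        <?php
        // Stream one space per second for a minute over a single HTTP response.
        set_time_limit(0);                 // in case the platform counts sleep() toward the limit
        header('Content-Type: text/plain');
        header('X-Accel-Buffering: no');   // hint to nginx-style proxies not to buffer
        @ob_end_flush();                   // drop any PHP output buffer that would batch the writes
        for ($i = 0; $i < 60; $i++) {
            echo ' ';
            flush();                       // push the byte to the client immediately
            sleep(1);
        }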

    Read the article
