Search Results

Search found 1856 results on 75 pages for 'hits lucky'.

Page 17 of 75

  • Awstats logformat typo?

    - by user66700
    I've been through the awstats docs for a while now, and it just seems to be failing on the LogFormat: http://pastebin.com/raw.php?i=J1Ecfu4c
    I'm using the following in awstats:

        LogFormat = "%host - - %host_r %time1 %methodurl %code %bytesd %refererquot %uaquot %otherquot"

    and this in nginx:

        log_format main '$remote_addr - $remote_user [$time_local] $request '
                        '"$status" $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log logs/access.log main;

    Sample hits: http://pastebin.com/raw.php?i=qD9PKN52
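
    One thing that stands out in the pair above: nginx is writing $request without quotes and $status with quotes, while the awstats tokens (%methodurl, %code) normally expect a quoted request and an unquoted status. As a minimal sketch of a matching pair, assuming the stock awstats tokens and the standard combined-style layout (untested against this setup, adjust the log path to yours):

        # nginx: emit a combined-style line (sketch)
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log logs/access.log main;

        # awstats: tokens in the same order as the fields above
        LogFormat = "%host - %logname %time1 %methodurl %code %bytesd %refererquot %uaquot %otherquot"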

    Read the article

  • Get statistics on how ReadyBoost is being used by Windows

    - by TomA
    I have experimentally started using a flash drive for ReadyBoost. There's no blinking light on it, so I don't even know whether it's being accessed at all. Is there some way to get statistics on how often, or how well, Windows actually uses the drive to improve performance? Something like cache hit/miss counts, for example.
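
    Windows does expose performance counters related to ReadyBoost; a minimal PowerShell sketch to discover and sample whatever counter set is present on a given machine might look like the following (the exact set name "ReadyBoost Cache" is an assumption, so use the first command to confirm what your system actually reports):

        # List any ReadyBoost-related counter sets available on this machine
        Get-Counter -ListSet *ReadyBoost* | Select-Object CounterSetName, Paths

        # Sample every counter in the set a few times (assumes the set exists with this name)
        Get-Counter -Counter '\ReadyBoost Cache\*' -SampleInterval 5 -MaxSamples 12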

    Read the article

  • Windows CHKDSK starts when 6 seconds have passed

    - by Whirlwin
    After reading this thread I might have sorted out why the CHKDSK prompt appears. The bigger problem is that when it prompts me with "Press any key within 10 seconds", nothing happens until the countdown hits 6, and then the check starts even though I have pressed a key (with 4 seconds still left). This question describes the problem as it appears to me, though the answer there is wrong. Why does the actual check start when the countdown reaches 6, and is there any way to disable it from within Windows?
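
    As a sketch of the "disable it from within Windows" part only (this does not explain the 6-second behaviour): the boot-time autochk run can be queried and excluded per volume with the built-in chkntfs tool from an elevated command prompt. Excluding a volume only skips the automatic check; it does not clear the dirty bit or repair anything.

        rem Check whether C: is currently flagged for a boot-time check
        chkntfs C:

        rem Exclude C: from the automatic check at boot
        chkntfs /X C:

        rem Restore the default behaviour later
        chkntfs /D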

    Read the article

  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules of thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get.

    Now for some performance details. During the normal workflow of a single project run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, reads become extremely high, as it needs to deduplicate records identified in the first half of processing. This is the workflow that "keep your working set in RAM" is made for, and we're designing around that assumption.

    The entire dataset is continually hit with random queries from end-user-derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it will very likely incur disk hits.

    A secondary processing workflow is heavy reads of previous processing runs that may be days, weeks, or even months old; it is run infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache warming can help with this, I suspect. Finished document sizes vary widely, but the median size is about 8K.

    The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that 1:10 RAM-GB to HD-GB is a good rule of thumb for slow disks. As we are seriously considering much faster SSDs, I'd like to know whether there is a similar rule of thumb for fast disks.

    I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.
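
    For what it's worth, the arithmetic behind that rule of thumb is simple enough to sketch; the numbers below (1 TB of data, the quoted 1:10 ratio for spinning disks, and a hypothetical looser ratio for SSDs) are illustrative assumptions, not recommendations:

        # Back-of-the-envelope sizing sketch; all inputs are illustrative assumptions.
        data_gb = 1024            # projected dataset size (~1 TB within half a year)
        ram_to_disk = 1 / 10      # rule-of-thumb ratio quoted for slow disks

        ram_gb = data_gb * ram_to_disk
        # Each replica-set member holds a full copy, so this is per member, not divided.
        print(f"RAM implied by the 1:10 rule, per member: {ram_gb:.0f} GB")

        # With fast SSDs the open question is how far the ratio can be relaxed,
        # e.g. a hypothetical 1:20 would imply:
        print(f"At 1:20, per member: {data_gb / 20:.0f} GB")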

    Read the article

  • Mysterious visitor to hidden PHP page

    - by B. VB.
    On my website, I have a "hidden" page that displays a list of the most recent visitors. No links at all point to this single PHP page and, in theory, only I know of its existence. I check it many times per day to see what new hits I have. However, about once a week, I get a hit from a 208.80.194.* address on this supposedly hidden page (it records hits to itself).

    The strange thing is this: this mysterious person/bot does not visit any other page on my site. Not the public PHP pages, only this hidden page that prints the visitors. It's always a single hit, and the HTTP_REFERER is blank. The other data is always some variation of Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; YPC 3.2.0; FunWebProducts; .NET CLR 1.1.4322; SpamBlockerUtility 4.8.4; yplus 5.1.04b) ... but sometimes MSIE 6.0 instead of 7, and various other plug-ins. The browser string is different every time, as are the lowest-order bits of the address. And it's just that: one hit per week or so, to that one page. Absolutely no other pages are touched by this mysterious visitor.

    Doing a whois on that IP address shows it's from the New York area, and from the "Websense" ISP. The lowest-order 8 bits of their address are always different, but always within 208.80.194.*. From most of the computers I access my website from, a traceroute to my server does not pass through any router with an IP in 208.80.*, so I would think that rules out any kind of HTTP sniffing. I have NO idea how or why this is happening. Does anyone have any clue, or has anyone seen something as strange as this before? It seems completely benign, but unexplainable and a little creepy. Thanks in advance!

    Read the article

  • How to unsubscribe from stumbleupon?

    - by P a u l
    StumbleUpon has started spamming me. I have never registered on their site or installed their software, and there seems to be no way to unsubscribe. The Google hits I find for 'stumbleupon unsubscribe' are mostly touting or promoting the service in some way. On their site I see no way to unsubscribe, unless perhaps you create a membership.

    Read the article

  • Partial Client Certificate request for Apache HTTP

    - by Joshua
    I have an Apache HTTP Server with SSL enabled that requests a client certificate. How do I set up Apache to request the certificate only when a user hits a certain part of the website? For example:

        /myapp/  should not request the cert
        /myapp2/ should request the cert

    Note: these applications are served through mod_jk.
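
    A minimal sketch of one way this is commonly done with mod_ssl, assuming per-directory SSL renegotiation is acceptable for your clients (paths and the CA file are placeholders, and this is untested against the mod_jk setup described):

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile    /etc/apache2/ssl/server.crt
            SSLCertificateKeyFile /etc/apache2/ssl/server.key
            SSLCACertificateFile  /etc/apache2/ssl/client-ca.crt

            # No client cert for the site as a whole
            SSLVerifyClient none

            # Require one for this location only; this triggers an
            # SSL renegotiation when the path is hit.
            <Location /myapp2/>
                SSLVerifyClient require
                SSLVerifyDepth  2
            </Location>
        </VirtualHost>

    Using SSLVerifyClient optional instead of require would make Apache ask for a certificate but still serve clients that don't present one.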

    Read the article

  • Nginx https rewrite turns POST to GET

    - by x7311
    My proxy server runs on IP A, and this is how people access my web service. The nginx configuration redirects to a virtual machine on IP B. For the proxy server on IP A, I have this in sites-available:

        server {
            listen 443;
            ssl on;
            ssl_certificate nginx.pem;
            ssl_certificate_key nginx.key;

            client_max_body_size 200M;
            server_name localhost 127.0.0.1;
            server_name_in_redirect off;

            location / {
                proxy_pass http://10.10.0.59:80;
                proxy_redirect http://10.10.0.59:80/ /;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

        server {
            listen 80;
            rewrite ^(.*) https://$http_host$1 permanent;

            server_name localhost 127.0.0.1;
            server_name_in_redirect off;

            location / {
                proxy_pass http://10.10.0.59:80;
                proxy_redirect http://10.10.0.59:80/ /;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The proxy_redirect was taken from "how do I get nginx to forward HTTP POST requests via rewrite?". Everything that hits the public IP hits 443 because of the rewrite; internally, we forward to 80 on the virtual machine. But when I run a Python script such as the one below to test our configuration:

        import requests
        data = {'username': '....', 'password': '.....'}
        url = 'http://IP_A/api/service/signup'
        res = requests.post(url, data=data, verify=False)
        print res
        print res.json
        print res.status_code
        print res.headers

    I get a 405 Method Not Allowed. In nginx we found that when the request reached the internal server, the internal nginx received a GET, even though the original request was a POST (as shown in the Python script). So it seems like the rewrite has a problem. Any idea how to fix this? When I commented out the rewrite, the request hit 80 directly and went through. Since the rewrite was able to talk to our internal server, the rewrite itself works; it just turns the POST into a GET. Thank you! (This will also be asked on the Nginx forum because this is a critical blocker...)
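
    A note on why this typically happens, with a hedged sketch of one workaround: rewrite ... permanent answers with a 301, and most clients (python-requests included) follow a 301/302 to a POST by reissuing the request as a GET without the body. A 307 redirect preserves the method; assuming an nginx version new enough to accept it in a return directive, the port-80 server block could instead look something like:

        server {
            listen 80;
            server_name localhost 127.0.0.1;

            # 307 tells the client to repeat the SAME method (POST stays POST)
            # against the https URL, instead of downgrading to GET as with 301.
            return 307 https://$host$request_uri;
        }

    The other common approach is to skip the redirect for API clients entirely and have them POST to the https:// URL directly.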

    Read the article

  • Website & Forum sharing the same login credentials ?

    - by Brian
    I am going to be running a small site (100 hits a week, maybe) and I am looking for a quick and easy way to share login information between the main website, a control panel (Webmin, cPanel, or something), and the forum. One login should give access to any of the three. The website won't have much use for the login per se, but it will display "logged in" when you are on the website. Any custom solutions, thoughts, logic, or examples?

    Read the article

  • Apache on Ubuntu very slow on inital calls, very fast afterwards

    - by papakost
    I own an Ubuntu 10 VPS server with Apache 2 hosting a Magento website. The first hit to the site from any client takes about 15-20 seconds, while subsequent hits from the same client take 0-1 seconds. I suppose it doesn't have to do with Magento caching, because this also happens when the first call is to a very light page and the next calls are to heavy ones. Does anyone have an idea of what is going wrong here?

    Read the article

  • What would be a quick fix in case of server downtime due to sudden high traffic?

    - by PMoubed
    Let's consider a scenario like the one below: a small blog built on the LAMP stack and deployed on shared hosting. Suddenly it becomes popular in one day and gets a million hits per day. Since the developer had not planned for high traffic, the load causes server downtime and crashes. What would be a quick fix for such a scenario? BTW, I know that on cloud servers I might be able to add more RAM or CPU to avoid that, as with Amazon EC2.

    Read the article

  • How to enable WordPress to have multiple sites without a re-direct

    - by user57039
    I'm using WordPress to manage my site, and when the site does a redirect it slows down performance. For example, WordPress allows you a single default site, www.mycompany.com. If a user goes to mycompany.com, WP will redirect it to www.mycompany.com. Is there a way to configure WP so that it will listen on both www.mycompany.com and mycompany.com without redirects? The redirects are causing performance hits to the site.

    Read the article

  • Does a 300mbps 802.11n wireless connection have any noticeable speed improvement over 54mbps g?

    - by j j
    300 Mbps sounds wonderful, but not with my horrible Comcast internet connection. I doubt there's an internet connection in America that even hits 54 Mbps. So I'm guessing the only reason someone would be inclined to upgrade is faster data transfer within the local network. With my internet connection, where download rates are rarely above a few hundred kilobytes a second, would I even see any improvement in switching from 802.11g to 802.11n?
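
    A rough back-of-the-envelope comparison; the "usable throughput is about half the nominal link rate" figures below are assumptions for illustration, not measurements:

        # Compare wireless link rates with a typical cable-internet download rate (illustrative numbers).
        def mbps_to_mbytes(mbps):
            return mbps / 8  # megabits per second -> megabytes per second

        g_real_mbps = 54 * 0.5      # assume ~50% of nominal for 802.11g
        n_real_mbps = 300 * 0.5     # assume ~50% of nominal for 802.11n
        internet_mbytes = 0.3       # "a few hundred kilobytes a second"

        print(f"802.11g usable:    ~{mbps_to_mbytes(g_real_mbps):.1f} MB/s")
        print(f"802.11n usable:    ~{mbps_to_mbytes(n_real_mbps):.1f} MB/s")
        print(f"Internet download: ~{internet_mbytes:.1f} MB/s  (the bottleneck either way)")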

    Read the article

  • What is the best tool to aggregate traffic stats from multiple nginx servers?

    - by gekkz
    The setup:

        - 2 or more nginx machines
        - each machine has the same virtual hosts
        - traffic is load balanced via DNS to each machine

    I need to figure out the best tools to get some traffic stats; I'm mostly interested in the number of hits and the total traffic in gigabytes. Obviously, the log information will come from nginx, formatted like this:

        log_format main '$remote_addr $host $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" "$http_user_agent" "$gzip_ratio"';
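
    As a minimal sketch of the "aggregate it yourself" end of the spectrum (as opposed to running a tool like awstats or GoAccess over the merged logs): a small Python script that tallies hits and bytes across access logs pulled from each machine. The field positions assume the log_format shown above; adjust the pattern if your format differs.

        #!/usr/bin/env python3
        """Tally hits and total bytes from nginx access logs (sketch for the log_format above)."""
        import glob
        import re
        import sys

        # $status and $body_bytes_sent come right after the quoted "$request"
        LINE = re.compile(r'" (\d{3}) (\d+|-) ')

        hits = 0
        total_bytes = 0
        for path in sys.argv[1:] or glob.glob("access.log*"):
            with open(path, errors="replace") as fh:
                for line in fh:
                    m = LINE.search(line)
                    if not m:
                        continue
                    hits += 1
                    if m.group(2) != "-":
                        total_bytes += int(m.group(2))

        print(f"hits: {hits}")
        print(f"traffic: {total_bytes / 1024**3:.2f} GiB")

    In practice a ready-made log analyzer would give the same numbers with far less effort; the sketch just shows what the aggregation amounts to once the logs from all machines are collected in one place.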

    Read the article

  • std::vector optimisation required

    - by marcp
    I've written a routine that uses std::vector<double> rather heavily. It runs rather slowly, and AQTime seems to imply that I am constructing mountains of vectors, but I'm not sure why I would be. For some context, my sample run iterates 10 times. Each iteration copies 3 C arrays of ~400 points into vectors and creates 3 new same-sized vectors for output. Each output point might be the result of summing up to 20 points from 2 of the input vectors, which works out to a worst case of 10*400*3*2*20 = 480,000 dereferences. Incredibly, the profiler indicates that some of the std:: methods are being called 46 MILLION times. I suspect I'm doing something wrong! Some code:

        vector<double> gdbChannel::GetVector() {
            if (fHaveDoubleData & (fLength > 0)) {
                double * pD = getDoublePointer();
                vector<double> v(pD, pD + fLength);
                return v;
            } else {
                throw(Exception("attempt to retrieve vector on empty line"));
            }
        }

        void gdbChannel::SaveVector(GX_HANDLE _hLine, const vector<double> & V) {
            if (hLine != _hLine) {
                GetLine(_hLine, V.size(), true);
            }
            GX_DOUBLE * pData = getDoublePointer();
            memcpy(pData, &V[0], V.size() * sizeof(V[0]));
            ReplaceData();
        }

        /// This routine gets called 10 times
        bool SpecRatio::DoWork(GX_HANDLE_PTR pLine) {
            if (!(hKin.GetLine(*pLine, true) && hUin.GetLine(*pLine, true) && hTHin.GetLine(*pLine, true))) {
                return true;
            }
            vector<double> vK = hKin.GetVector();
            vector<double> vU = hUin.GetVector();
            vector<double> vTh = hTHin.GetVector();
            if ((vK.size() == 0) || (vU.size() == 0) || (vTh.size() == 0)) {
                return true;
            }
            /// TODO: confirm all vectors the same length
            len = vK.size();
            vUK.clear();   // these 3 vectors are declared as private class members
            vUTh.clear();
            vThK.clear();
            vUK.reserve(len);
            vUTh.reserve(len);
            vThK.reserve(len);
            // TODO: ensure everything is same fidincr, fidstart and length
            for (int i = 0; i < len; i++) {
                if (vK.at(i) < MinK) {
                    vUK.push_back(rDUMMY);
                    vUTh.push_back(rDUMMY);
                    vThK.push_back(rDUMMY);
                } else {
                    vUK.push_back(RatioPoint(vU, vK, i, UMin, KMin));
                    vUTh.push_back(RatioPoint(vU, vTh, i, UMin, ThMin));
                    vThK.push_back(RatioPoint(vTh, vK, i, ThMin, KMin));
                }
            }
            hUKout.setFidParams(hKin);
            hUKout.SaveVector(*pLine, vUK);
            hUTHout.setFidParams(hKin);
            hUTHout.SaveVector(*pLine, vUTh);
            hTHKout.setFidParams(hKin);
            hTHKout.SaveVector(*pLine, vThK);
            return TestError();
        }

        double SpecRatio::VValue(vector<double> V, int Index) {
            double result;
            if ((Index < 0) || (Index >= len)) {
                result = 0;
            } else {
                try {
                    result = V.at(Index);
                    if (OasisUtils::isDummy(result)) {
                        result = 0;
                    }
                } catch (out_of_range) {
                    result = 0;
                }
            }
            return result;
        }

        double SpecRatio::RatioPoint(vector<double> Num, vector<double> Denom, int Index, double NumMin, double DenomMin) {
            double num = VValue(Num, Index);
            double denom = VValue(Denom, Index);
            int s = 0;
            // Search equalled 10 in this case
            while (((num < NumMin) || (denom < DenomMin)) && (s < Search)) {
                num += VValue(Num, Index - s) + VValue(Num, Index + s);
                denom += VValue(Denom, Index - s) + VValue(Denom, Index + s);
                s++;
            }
            if ((num < NumMin) || (denom < DenomMin)) {
                return rDUMMY;
            } else {
                return num / denom;
            }
        }

    The top AQTime offenders are:

        std::_Uninit_copy (double *, std::allocator)      3.65 secs, 115731 hits
        std::_Construct                                   1.69 secs, 46450637 hits
        std::_Vector_const_iterator<...>::operator !=     1.66 secs, 46566395 hits

    and so on. std::allocator<double>::construct, operator new, std::_Vector_const_iterator<double, std::allocator<double> >::operator ++, std::_Vector_const_iterator<double, std::allocator<double> >::operator * and std::_Vector_const_iterator<double, std::allocator<double> >::operator == each get called over 46 million times. I'm obviously doing something wrong to cause all these objects to be created. Can anyone see my error(s)?

    Read the article

  • How to make easily PDF version of a web?

    - by MartyIX
    I'm trying to make an offline version of a website, and I'm looking for a tool that would do the task automatically for the whole site (circa 1000 pages of HTML plus images). Is there anything like that that's free? I know it is quite a challenge for a program, but maybe I'll be lucky :). Thanks!
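
    One commonly suggested free option is wget's mirroring mode; a sketch (the URL is a placeholder, and the wait flag is just there to be polite to the server):

        wget --mirror --convert-links --page-requisites --adjust-extension \
             --wait=1 --no-parent https://www.example.com/

    HTTrack is another free tool often mentioned for the same job.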

    Read the article

  • How to access Memory pool mbeans

    - by nandula-shankar
    Hi, I want to access MemoryPool MBeans through a Java program so that I can retrieve the Eden Space, Perm Gen, Code Cache, and Survivor Space statistics over a period of time. How do I do this? I tried java.lang:type=MemoryPool,name=Eden Space but was not lucky. Thanks, Shankar

    Read the article

  • What's the worst working environment you've had to suffer?

    - by John
    We'll leave "worst job ever" for another day if it wasn't already done... but after some recent discussions on good environments, what is the worst you've had? I've always been quite lucky - seats that go up and down, some kind of natural light, etc. But I think I dodged a bullet... what horror stories can you share?

    Read the article

  • Instead of buying VS 2010 what options will you use for .net development in the future?

    - by Eric Neunaber
    Given the recent release of VS 2010, I was shocked to see the pricing structure for the different versions of the product. I was lucky enough to receive free versions of VS 2005 and 2008 from attending various MS events. For the hacking I do at home, I'm not sure I'm going to spend the money to purchase the IDE, and I wanted to see what others are using, for example: SharpDevelop, MonoDevelop, or the Express Editions.

    Read the article

  • shouldAutorotateToInterfaceOrientation always get called more than once.

    - by lovecactus
    (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation — my code gets this event more than once while the device is rotating. I'm searching the Apple docs for a reference but with no luck. Could anyone offer a hint as to why this is happening? My code is an Apple doc sample, without any change except some logging: http://developer.apple.com/iphone/library/samplecode/AlternateViews/Introduction/Intro.html#//apple_ref/doc/uid/DTS40008755

    Read the article

  • Lotto program doesn't stop

    - by Naseyb Yaramis
    So I'm making a lotto game. You have to enter 6 lucky numbers, and if they're the same as the lotto numbers, you win. Here is my code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace OefeningExaam
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Random getal = new Random();
                    int[] lottotrekking = new int[6];
                    Console.WriteLine("Geef je geluksgetallen in <tussen 1 en 42>");
                    Console.WriteLine("Geef je eerste getal in");
                    int getal1 = Convert.ToInt32(Console.ReadLine());
                    Console.WriteLine("Geef je tweede getal in");
                    int getal2 = Convert.ToInt32(Console.ReadLine());
                    Console.WriteLine("Geef je derde getal in");
                    int getal3 = Convert.ToInt32(Console.ReadLine());
                    Console.WriteLine("Geef je vierde getal in");
                    int getal4 = Convert.ToInt32(Console.ReadLine());
                    Console.WriteLine("Geef je vijfde getal in");
                    int getal5 = Convert.ToInt32(Console.ReadLine());
                    Console.WriteLine("Geef je zesde getal in");
                    int getal6 = Convert.ToInt32(Console.ReadLine());

                    while (getal1 != lottotrekking[0] || getal2 != lottotrekking[1] || getal3 != lottotrekking[2] || getal4 != lottotrekking[3] || getal5 != lottotrekking[4] || getal5 != lottotrekking[4] || getal6 != lottotrekking[5])
                    {
                        for (int i = 0; i < lottotrekking.Length; i++)
                        {
                            int cijfer = getal.Next(1, 43);
                            lottotrekking[i] = cijfer;
                            Console.WriteLine(lottotrekking[0] + "\t " + lottotrekking[1] + "\t " + lottotrekking[2] + "\t " + lottotrekking[3] + "\t " + lottotrekking[4] + "\t " + lottotrekking[5]);
                        }
                    }

                    if (getal1 == lottotrekking[0] && getal2 == lottotrekking[1] && getal3 == lottotrekking[2] && getal4 == lottotrekking[3] && getal5 == lottotrekking[4] && getal5 == lottotrekking[4] && getal6 == lottotrekking[5])
                    {
                        Console.WriteLine(lottotrekking[0] + " " + lottotrekking[1] + " " + lottotrekking[2] + " " + lottotrekking[3] + " " + lottotrekking[4] + " " + lottotrekking[5]);
                    }
                    Console.ReadLine();
                }
            }
        }

    The problem is that the program just keeps going and doesn't stop. It's supposed to stop when the lucky numbers are the same as the lotto numbers.

    Read the article

  • Latest Ubuntu stuck on "completing the ubuntu installation"

    - by Joesph Atkinson
    So my issue is that after installing Ubuntu 12.10 using wubi.exe on a dedicated partition and rebooting the computer, I am given the option to boot Ubuntu. When I choose Ubuntu, I am brought to the "completing the Ubuntu installation" page, and after the countdown hits 0 it just sits there doing nothing. I left my computer on all day and it was still the same, so I know it's not just a slow install. Some say it's because Ubuntu cannot find a video driver for my card (XFX Radeon 5770). If that is the case, is there a way I can run the install without it needing to look for drivers?

    Read the article
