Search Results

Search found 8224 results on 329 pages for 'sometimes'.

Page 245/329

  • How to make rewrite rules relative to the .htaccess file

    - by Kendall Hopkins
    Currently I have an .htaccess file like this:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f [OR]
        RewriteCond %{REQUEST_URI} ^/(always|rewrite|these|dirs)/ [NC]
        RewriteRule ^(.*)$ router.php [L,QSA]

    It works correctly when the site files are in the document root of the webserver (i.e. domain.com/abc.php -> /abc.php). But in our current setup (which isn't changeable), this isn't guaranteed. We can sometimes have an arbitrary folder between the document root and the folder containing the .htaccess file (i.e. domain.com/something/abc.php -> /something/abc.php). The only problem with that is that the second RewriteCond no longer works. Is there any way to dynamically match the accessed path against a path relative to the .htaccess file? For example: if domain.com/rewrite/ is the directory of the .htaccess file, then domain.com/rewrite/index.php is NOT FORCED TO REWRITE and domain.com/rewrite/rewrite/index.php is FORCED TO REWRITE. If domain.com/ is the directory of the .htaccess file, then domain.com/index.php is NOT FORCED TO REWRITE and domain.com/rewrite/index.php is FORCED TO REWRITE.
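
    A sketch of one possible approach, on the assumption that the directory test can move out of the REQUEST_URI condition: in per-directory (.htaccess) context the path that RewriteRule matches against has the .htaccess directory's prefix already stripped, so the directory names from the question can be matched in the rule itself and stay relative to wherever the file lives (untested):

        RewriteEngine On
        # always rewrite requests under these directories, relative to this .htaccess
        RewriteRule ^(always|rewrite|these|dirs)/ router.php [L,QSA]
        # otherwise rewrite only when the requested file does not exist
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^ router.php [L,QSA]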

    Read the article

  • Arch Linux drops me on my school network

    - by Kravlin
    I'm running a Lenovo X61 which I carry around my college for getting on the internet at various points in the day. The network has always been finicky, but recently it's gotten worse. I'll connect using iwconfig, get an IP from dhcpcd and log in using vpnc to their system. Sometimes I'll stay connected for hours, but most of the time my network traffic will drop to zero within 30 seconds and I'll be unable to do anything. My computer still believes it's connected; however, to try again I need to take my wireless interface down, bring it back up and try again. It's gotten so bad that I keep a window on my computer pinging Yahoo or Google constantly in order to know whether I'm still able to get online. I know other people who use Arch Linux who don't have the same problems, as well as people who use Ubuntu who haven't had any problems either. It seems like my computer is a special case. Does anyone have any suggestions on how to fix it? dmesg doesn't show anything out of the ordinary, and I don't know where else to look for errors or other things to try. Edit: This doesn't happen on my home network. It's a problem that only happens at school.
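
    A minimal sketch of the kind of watchdog the asker already runs by hand (ping constantly, bounce the interface when traffic stops); the interface name wlan0 is an assumption, and the vpnc login from the question would still need to be re-run afterwards:

        #!/bin/sh
        # bounce the wireless interface whenever pings stop getting through
        IF=wlan0                     # assumed interface name
        while true; do
            if ! ping -c 3 -W 5 8.8.8.8 >/dev/null 2>&1; then
                ip link set "$IF" down
                ip link set "$IF" up
                dhcpcd "$IF"         # re-request a lease, as in the question
                # re-run the vpnc login here
            fi
            sleep 30
        done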

    Read the article

  • Server 2008R2 in Extra Small Windows Azure Instance?

    - by Shawn Eary
    Windows Azure hosting for an Extra Small (XS) Windows VM seems to come out to about $10 a month right now. I think this XS instance gives you the equivalent of a 1 GHz CPU with 768 MB of RAM. I think the minimum requirements for Server 2008 are a 1 GHz CPU with 512 MB of RAM. Also, I think the minimum requirements for SQL Server Express are a 1 GHz CPU with 256 MB of RAM, and that the minimum requirements for Team Foundation Server Express 11 Beta are a 2.2 GHz CPU with 1 GB of RAM (this 2.2 GHz part could be a problem for my 1 GHz XS VM...). Given the performance of the XS Azure instance, would I be able to install a very basic MVC web site, a free instance of SQL Server Express, and a free single-user instance of Team Foundation Server Express 11 Beta, and run the XS VM instance without serious crashing? I know there are other shared web host providers that can provide these features for me, but those hosting providers have the following disadvantages: they sometimes cost a lot of money after all of the "addons" are in place; they probably don't provide the level of security and employee integrity that Microsoft can provide; and they don't provide the total control that an Azure VM seems to provide.

    Read the article

  • Errors when using a webcam

    - by C.G.
    I have been having some issues accessing a webcam from my machine. Sometimes (not always) when I run a program that accesses the device (cheese, guvcview, and code using OpenCV), I get either of two messages, which lead to the program crashing. The first occurs after running the webcam for some time:

        libv4l2: error dequeuing buf: No such device
        VIDIOC_DQBUF: No such device

    The other occurs without even letting me have a chance to run the webcam:

        libv4l2: error turning on stream: No space left on device
        VIDIOC_STREAMON - Unable to start capture: No space left on device

    Occasionally after getting these errors I will also receive a message saying that no such device can be found on subsequent runs. Other than the times the "No device found" message appears, the webcam shows up when I use lsusb. My machine runs Fedora 16, and the webcam is a Logitech C920. I do have ffmpeg installed, and I have been able to run the web camera many times in the past without errors. What is particularly puzzling about these errors is that they just sprang up this past weekend. No new software or hardware has been installed on this machine recently, and I haven't changed any settings either. It could possibly be a driver issue, but I don't know what could have changed to cause it. My attempts at researching this problem have been fruitless, as it seems to most commonly occur with multiple webcams, and I am only working with one device. I'd appreciate any advice, as this has become a bit frustrating.
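
    A short diagnostic sketch along the lines the question already hints at (lsusb, kernel messages); reloading the uvcvideo module is an assumption about which driver the C920 uses, not something stated in the question:

        # confirm the camera is still enumerated and which device node it owns
        lsusb | grep -i logitech
        ls -l /dev/video*
        # watch kernel messages while reproducing the error
        dmesg | grep -iE 'uvc|video' | tail -n 20
        # reload the USB Video Class driver (assumed to be uvcvideo for the C920)
        sudo modprobe -r uvcvideo && sudo modprobe uvcvideo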

    Read the article

  • How can I avoid a few seconds of blank video when using -vcodec copy?

    - by arlomedia
    I'm processing user-uploaded videos on a CentOS web server with ffmpeg. I need to convert each video to a standard size and format, then extract a 30-second sample clip from each video. I want to use the "-vcodec copy" flag in the extraction command to avoid encoding a second time. This command works for my initial conversion:

        ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4

    And this sometimes works for the extraction:

        ffmpeg -i formatted.mp4 -vcodec copy -acodec copy -ss 0 -t 30 formatted_sample.mp4

    However, when I run the extraction command on some videos, the extracted sample clip starts with several seconds of blank video. The audio starts right away but the video doesn't start for 3-6 seconds. To demonstrate the problem, I've uploaded two video clips and run the above commands on them. I created the first clip in Final Cut Express and encoded it with Handbrake before uploading to the web server:

        1a) uploaded clip
        1b) converted with first command
        1c) extracted with second command, missing first six seconds

    By comparison, this second clip comes from Apple's website and does not show the problem:

        2a) uploaded clip
        2b) converted with first command
        2c) extracted with second command, no problem

    Can anyone see what's different about the two source clips? And if so, is there anything I can do in my conversion command so that when the extraction command runs, the clip is set up to avoid the missing video? By the way, I initially had the problem with ffmpeg 0.6.1 installed from yum, but I upgraded to the latest git version and the problem remains.
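
    One detail worth sketching: with -vcodec copy the output can only begin cleanly on a keyframe, so if the converted file's first keyframe sits a few seconds in, the extracted clip plays audio over blank video until that keyframe arrives. A possible adjustment is to force a short GOP in the conversion step so every cut point lands near a keyframe; the -g value here is an assumption, not something from the question:

        ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -g 15 \
               -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4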

    Read the article

  • Windows 7 mapped drive kicking off OS X users

    - by Collin White
    I've mapped a network drive on my Windows 7 PC at my office. The Windows machine has a few TB of storage that is being accessed by my development team (all running Mac OS 10.7). The share seems to work fine for a little while, but it will time out and kick the Mac users off, and sometimes it disallows a connection on the next attempt. Restarting the Windows machine fixes the problem. I've tried this tutorial as well as setting the maximum session length in the Local Security Policy section to 99999 (I discovered 0 did not mean unlimited, only a 'reasonable amount of time'); anyway, the setting is now ~208 days, which is sufficient (see attached). I'm having trouble debugging this in general, so if anyone has some pointers I'm all ears. This is an intermittent issue, which in my opinion is the hardest kind to debug. If anyone knows how I might monitor connections from the PC, that would also be pretty cool. Previously the files were hosted on a Mac mini and everything was working just fine (the mini just didn't have the storage capacity we needed), so I believe it is some Windows setting that is kicking users off. Anyway, thanks for reading.
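
    A couple of standard Windows commands that may help with the "monitor connections from the PC" part, plus the server-side autodisconnect timer that is often involved when idle SMB sessions get dropped; whether autodisconnect is actually the culprit here is an assumption:

        rem list the SMB sessions and open files currently held against this machine
        net session
        openfiles /query

        rem disable the idle-session autodisconnect timer (-1 = never disconnect)
        net config server /autodisconnect:-1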

    Read the article

  • SVN hangs on commit - any suggestions for troubleshooting?

    - by Richard Beier
    We're having a problem with SVN... Subversion clients such as TortoiseSVN hang when we commit any more than a few files at a time to our server. Everything appears to actually be committed successfully to the repository, but the client hangs after all the data has been transmitted. We're using version 1.4.4 of the SVN server, and we use the svn:// protocol rather than http to connect. We've reproduced this problem with several clients: TortoiseSVN (1.6.10), AnkhSVN (2.1), and the Silk command-line client (1.6.12). This is happening for everyone on the team, though some people seem to be more affected than others. If someone commits only a few files, it often works; but with more than half a dozen files, it usually hangs. Does anyone have troubleshooting suggestions? This has been happening sporadically for a while, but it's become pretty consistent lately. We've been working around the issue by killing the hung SVN client, doing "svn cleanup", and then doing "svn up", but sometimes that causes tree conflicts. Another workaround is to blow away the workspace and check it out again after every commit, but of course that's pretty annoying. Are there any diagnostics that could help us troubleshoot this? We're considering upgrading to an SVN 1.6 server and installing the server on a new machine, but we're wondering if there's an easier solution. Thanks for your help, Richard
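
    For reference, a sketch of the client-side recovery already described in the question, plus a server-side integrity check that is cheap to run; the repository path is a placeholder:

        # client side: recover a hung working copy after killing the client
        svn cleanup
        svn up

        # server side: verify the repository is intact (path is a placeholder)
        svnadmin verify /path/to/repository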

    Read the article

  • Setting Windows 7's Recycle Bin to automatically have a default disk space allocation for deleted files from newly mounted drives

    - by galacticninja
    How do I set Windows 7's Recycle Bin to automatically have a default disk space allocation for deleted files from external hard drives and TrueCrypt-mounted volumes? I remember that in Windows XP I could set a percentage of total disk space that would automatically be used as storage capacity for deleted files by the Recycle Bin, and this was applied to all external HDs or TC-mounted volumes. Windows 7 defaults to the 'Don't move files to the Recycle Bin. Remove files immediately when deleted' setting for newly mounted external HDs and TC-mounted volumes. Since I am expecting deleted files to go to the Recycle Bin, this sometimes causes an 'oops' when I delete files on external hard drives or TC-mounted volumes, as Windows does not move the deleted files to the Recycle Bin but just deletes them permanently. I have to remember to manually set a custom Recycle Bin storage space for each new drive that is mounted by Windows to avoid this issue. I only use and mount TrueCrypt file containers, not drives. I also don't mount TrueCrypt file containers as removable drives. ('Mount volume as removable medium' is unchecked in Mount Options.) In my $Recycle.Bin > Properties > Security settings, 'System' and 'Administrators' are already set to 'Full Control', while 'Users' only have 'Special Permissions' checked in gray. There are no other groups. I haven't changed or edited anything in these settings. I am using Windows 7 Ultimate.
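
    One avenue that may be worth exploring: per-volume Recycle Bin settings are stored in the registry under the current user's BitBucket key, so a small script could pre-seed them when a new volume is mounted. The key layout and value semantics here (MaxCapacity in MB, NukeOnDelete as an on/off flag, one subkey per volume GUID) are recalled from memory rather than taken from the question, so treat this as an assumption to verify:

        rem {volume-guid} is a placeholder for the newly mounted volume's GUID
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket\Volume\{volume-guid}" /v MaxCapacity /t REG_DWORD /d 2048 /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket\Volume\{volume-guid}" /v NukeOnDelete /t REG_DWORD /d 0 /f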

    Read the article

  • Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows?

    - by Andrew
    I have a ~35 GB file on a remote Linux Ubuntu server. Locally, I am running Windows XP, so I am connecting to the remote Linux server using SSH (specifically, a Windows program called SSH Secure Shell Client, version 3.3.2). Although my broadband internet connection is quite good, my download of the large file often fails with a Connection Lost error message. I am not sure, but I think it fails because my internet connection goes out for a second or two every several hours. Since the file is so large, downloading it may take 4.5 to 5 hours, and perhaps the internet connection drops for a second or two during that long time. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer; in other words, sometimes I get lucky and the download finishes before the connection drops. Is there any way to download the file intelligently, whereby the operating system or software "knows" where it left off and can resume from the last point if a break in the internet connection occurs? Perhaps it is possible to download the file in sections? I do not know if I can conveniently split my file into multiple files; I think this would be very difficult, since the file is binary and is not human-readable. As it is now, if the entire ~35 GB download doesn't finish before the break in the connection, I have to start the download over and overwrite the ~5-20 GB chunk that was downloaded locally so far. Do you have any advice? Thanks.
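
    Two standard approaches map directly onto the two ideas in the question (resuming where a broken transfer left off, and pulling the file over in sections). Both run over SSH; the hostname and paths are placeholders, and this assumes a Unix-style toolset on the Windows XP side (e.g. via Cygwin), which the question doesn't mention:

        # resumable transfer: --partial keeps the partly-downloaded file so a rerun continues it
        rsync --partial --progress -e ssh user@remote-host:/path/to/bigfile.bin .

        # or: split the file into 2 GB pieces on the server, fetch them, and reassemble locally
        ssh user@remote-host 'split -b 2G /path/to/bigfile.bin /path/to/bigfile.part.'
        scp 'user@remote-host:/path/to/bigfile.part.*' .
        cat bigfile.part.* > bigfile.bin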

    Read the article

  • Windows 7 remote desktop encryption error every few minutes

    - by rfrankel
    "Because of an error in data encryption, this session will now end."

    This is the error I've been getting more and more frequently over the past few days, to the point that I can't ignore it because it's happening consistently within 5 minutes of connecting (sometimes within a few seconds). Both the remote and local machines are Windows 7 Pro x64. The remote machine is behind a Linksys RV082, and I'm using UPnP to forward a remote port to the correct local port. This setup had been working fine for several months, and I can't think of any recent relevant changes that might have been made. Things I've already tried:

        - Disabling unnecessary components of the network connection on the remote machine, until only IPv4 and Client for Microsoft Networks remain.
        - Disabling TCP large send offload on both the remote and local machines.
        - Confirming that the remote machine is not mentioned anywhere in any DMZ settings on the Linksys router.
        - Confirming that there are no x509-related registry keys screwing things up (this is the suggested fix for a slightly different error anyway).

    These are the only solutions I've been able to find after about an hour of searching, and most of them apply to XP or Server 2003 in any case. If anyone could suggest something else, it would be much appreciated.

    Read the article

  • VNC from Windows to OS X Lion: App stuck in fullscreen mode

    - by Jonny
    I'm connecting to a remote Mac through Windows... ah, it gets more complicated than that. I'm sitting at my iMac. In it I use VirtualBox to launch Windows 7. In that I have a VPN connection to a remote Windows network, which allows me to use Remote Desktop to one of the Windows (Vista!) boxes over there. From that Vista box I VNC into a Mac running OS X Lion. (Don't ask me why, but that Mac doesn't have a public IP, which prevents me from accessing it directly.) So: OS X Lion - (virtual) Windows 7 - Windows Vista - OS X Lion. That last Mac was recently upgraded from Snow Leopard. Now with Lion, apps sometimes run in fullscreen mode, and somehow I can't get out of it. Normally you'd move the mouse pointer to the top of the screen, the menu bar would drop down, and you could reach the fullscreen button at the top right. In my current setup, that menu bar never drops down on the remote Mac at the end of the line. Any ideas?

    Read the article

  • Disable CTRL+mouse wheel zooming in Chrome?

    - by Peter Nore
    I'm a normal-sighted person and I would like to view pages at 100% all the time. I use keyboard shortcuts that involve CTRL a lot, so about twenty times a day I accidentally hit CTRL at the same time that I'm scrolling, which results in the page being reflowed and repainted. This is annoying because it can take up to 30 seconds to fix, depending on how complex the site layout is. On sites with dynamic layout such as Google Docs the problem is more serious; accidentally hitting CTRL+mouse wheel corrupts the display and forces me to refresh the page entirely, sometimes causing me to lose information in the process. I would like to either decouple CTRL+mouse wheel from zoom, or disable zoom functionality altogether. This is possible in Firefox by using about:config; is there a similar way to edit detailed settings in Chrome? Would I have access to the detailed settings if I used Chromium instead of Chrome? I'll probably jump ship back to Firefox if I can't solve this problem. There is a superuser question that asks basically the same thing I'm asking, but for Firefox and Internet Explorer exclusively. Other people on the Chrome forum have had related issues, but none have the same problem: "I would really like it if I could deactivate the auto zoom in/out" turned out to be "something with laptops and Windows 7", not the feature built into Chrome, and other people have had PDF-specific issues, which don't concern me. I've also tried searching for extensions that can disable CTRL+scroll zooming; I had hoped that "Zoom Lock" would be able to lock the zoom at 100% and prevent CTRL+scroll wheel from distorting the display, but it doesn't work for my use case. Google Chrome version 9.0.597.84 (Official Build 72991). Operating System: Ubuntu 10.10.

    Read the article

  • Why *do* Windows print queues occasionally choke on a print job?

    - by Ian
    You know the way Windows print queues will occasionally stop working, with a print job at the head of the queue which just won't print and which you can't delete? Anyone know what's going on when this happens? I've been seeing this since the NT4 days and it still happens on 2008. I'm talking about standard IP-connected laser printers, nothing fancy. I support a lot of servers and loads of workstations and see this happen a few times a year. The user will call saying they can't print. When you examine the print queue, which in my case will generally be a server-based queue shared out to the workstations, you find a print job which you cannot cancel. You also can't pause it, reinitialize it, nothing. Stopping the spooler is the usual trick and works sometimes. However, I occasionally see cases where even this doesn't cure it and a reboot is the only solution: pause the queue, reboot, and when it comes back up the job can then be deleted. Once gone, the printer happily goes back to its normal state. No action is ever necessary on the printer. I regard having to reboot as a last resort and don't like it. What on earth can be going on when stopping the process (the spooler) and restarting it doesn't clear a problem? It's not linked to any manufacturer either; I've seen this on HPs, Lexmarks, Canons, Ricohs, on lasers, on plotters... can't say I ever saw it on dot matrix. Anyone got any ideas as to what may be going on? Ian
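
    For what it's worth, the stronger version of the "stop the spooler" trick is to also clear the spool directory while the service is down; these are standard commands, though whether they help in the cases where a plain restart doesn't is exactly the open question here:

        net stop spooler
        del /q %systemroot%\System32\spool\PRINTERS\*.*
        net start spooler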

    Read the article

  • Controller Error: Do I need to worry?

    - by Kryten
    I have an HP Pavilion dv5224ea laptop with Windows 7 on it. Recently I discovered an error in Event Viewer: "The driver detected a controller error on \Device\Ide\IdePort1." More details:

        System
          Provider Name:  atapi
          EventID:        11 (Qualifiers: 49156)
          Level:          2, Task: 0, Keywords: 0x80000000000000
          TimeCreated:    2010-03-07T12:43:07.090197600Z
          EventRecordID:  30198
          Channel:        System
          Computer:       Alistair-Win7
        EventData
          \Device\Ide\IdePort1
          0000100001000000000000000B0004C002000000850100C00000000000000000000000000000000000000000000000000000000004100000

        Binary data, in words:
          0000: 00100000 00000001 00000000 C004000B
          0008: 00000002 C0000185 00000000 00000000
          0010: 00000000 00000000 00000000 00000000
          0018: 00000000 00001004
        Binary data, in bytes:
          0000: 00 00 10 00 01 00 00 00
          0008: 00 00 00 00 0B 00 04 C0
          0010: 02 00 00 00 85 01 00 C0
          0018: 00 00 00 00 00 00 00 00
          0020: 00 00 00 00 00 00 00 00
          0028: 00 00 00 00 00 00 00 00
          0030: 00 00 00 00 04 10 00 00

    Event Viewer is recording A LOT of these errors (sometimes 13, one after the other!). Do I need to worry? What does this error mean? What device could "\Device\Ide\IdePort1" be? What is an ATAPI error? Do I need to re-install Windows? I generally find this occurs when I try to back up my machine (using Windows Backup) or when using a program that uses Volume Shadow Copy. I have run "sfc", no problems. There are no device errors in Device Manager. I have also run "vssadmin list writers", no problems. What's going on? Would it be a good idea to re-install Windows 7?
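
    Since atapi event ID 11 generally points at the drive, its cabling, or the disk controller rather than at Windows itself, a quick hardware health check may be more informative than a reinstall. These are standard commands, but smartctl assumes the smartmontools package is installed and the /dev/sda device name is an assumption:

        rem basic status as reported by Windows
        wmic diskdrive get model,status

        rem full SMART attributes (requires smartmontools for Windows)
        smartctl -a /dev/sda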

    Read the article

  • Synaptics touchpad stops working randomly

    - by Jus12
    I have two laptops, one a Dell Vostro and the other a Vaio Z. Both have Synaptics touchpads (yes, I have checked, and the original drivers were from Synaptics as well). On both laptops, touchpad scrolling stops working at some arbitrary time and nothing seems to solve it except a reboot. Sometimes it randomly starts working again. I have downloaded all the latest drivers from the OEM. Interestingly, when I run a program as Administrator, scrolling works in that window only. This problem is very odd. It happens without any reason and I've not been able to find a fix for more than a year. I have seen some unusual suggestions on forums (e.g., to "restore Windows to a previous working state") but never any fix that solves this issue properly. I have tried installing the latest drivers and I DO NOT want to restore Windows to a previous working configuration. OS: Windows 7 64-bit Professional (Sony Vaio Z - VPCZ128GG), Windows 7 32-bit Professional (Dell). Edit: A temporary solution is to uninstall the Synaptics driver and let Windows 7 use its default built-in one. However, I really prefer the Synaptics driver because it activates the scroll button rather than the mouse wheel (useful in some apps such as MS Photo Editor).

    Read the article

  • DNS failover in a two datacenter scenario

    - by wanson
    I'm trying to implement a low-cost solution for website high availability, and I'm looking for the downsides of the following scenario. I have two servers with the same configuration, content and MySQL replication (dual-master). They are in different datacenters; let's call them serverA and serverB. Users use serverA; serverB is more like a backup. Now I want to use DNS failover to switch users from serverA to serverB when serverA goes down. My idea is that I set up DNS servers (BIND/PowerDNS) on serverA and serverB, called ns1.website.com and ns2.website.com (assuming I own website.com), and then configure my domain to use them as its nameservers. Both DNS servers will return serverA's IP as my website's IP. If serverA goes down I can (either manually or automatically from serverB) change serverB's DNS configuration to return serverB's IP as the website's IP. Of course the TTL will be low, as it's supposed to be in DNS failover. I know that it may take some time to switch to serverB (DNS TTL, time to detect serverA's failure, serverB DNS reconfiguration etc.), and that a small fraction of users won't reach serverB anyway, and I'm OK with that. But what are the other downsides of such an approach? An alternative scenario is that ns1.website.com returns serverA's IP as the website's IP, and ns2.website.com returns serverB's IP. But AFAIK clients don't always use the primary nameserver and sometimes use the secondary one, so some small portion of users would use serverB instead of serverA, which is not quite what I'd like. Can you confirm that DNS clients behave like that, and can you tell what percentage of clients would likely use serverB instead of serverA (statistically)? This option also has the downside that when serverA comes back up, it will automatically be used as the website's primary server again, which is also a bad situation (cold cache, MySQL replication could fail in the meantime etc.), so I'm adding it only as a theoretical alternative. I was thinking about using professional DNS failover companies, but they charge by the number of DNS requests and the fees are very high (why?).
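
    A minimal sketch of the zone data the first scenario implies: a low TTL on the site's records, both nameservers delegated, and failover amounting to editing the A record on serverB's copy of the zone. The names, addresses and TTL below are placeholders:

        ; website.com zone, served by ns1 (on serverA) and ns2 (on serverB)
        $TTL 60
        @       IN  SOA  ns1.website.com. hostmaster.website.com. (
                         2024010101 3600 600 604800 60 )
                IN  NS   ns1.website.com.
                IN  NS   ns2.website.com.
        ns1     IN  A    192.0.2.1        ; serverA (placeholder address)
        ns2     IN  A    198.51.100.1     ; serverB (placeholder address)
        @       IN  A    192.0.2.1        ; normally serverA; switched to serverB's IP on failover
        www     IN  A    192.0.2.1        ; likewise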

    Read the article

  • DNS in a small network with a router and an AD domain

    - by Felix
    I have a small office network with a router (running OpenWrt), a Windows domain controller (it used to be 2008 R2; I just backed it up and upgraded to 2012), about a dozen AD clients (3 servers and Windows workstations) and several non-AD clients (network printer, PBX). The problem is that the clients can't access the servers by name (only by IP). I tried all kinds of permutations. Right now the domain controller runs DNS for all desktops, but unless I put an entry in the hosts file, I can only get to a machine by IP. The router is the DHCP server (since not all devices are on AD), and except for the domain controller, all IP addresses, including "static" ones, are assigned by the router. Most frustrating, some servers sometimes just work! For example, I can often get to the Linux box by name (it is part of the domain using BeyondTrust Integration Services), but I can never get to the SQL Server box. It seems like non-domain devices see more names than domain members... This network should be fairly typical, but I couldn't find any guidance on how to set up DNS/DHCP so that all nodes are happy. The closest is this question, but it's still different! Thanks
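
    A few checks that narrow down whether the clients are even asking the domain controller's DNS, and whether the records exist there; the host name and the DC's address below are placeholders:

        rem which DNS server and DNS suffix did DHCP hand this client?
        ipconfig /all

        rem does the DC's DNS actually hold a record for the problem host?
        nslookup sqlserver.yourdomain.local 192.168.1.10

        rem re-register this machine's own A record with the DC's DNS
        ipconfig /registerdns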

    Read the article

  • Sporadic routing to another website when opening a common URL

    - by user226098
    I have a strange problem in our office: sometimes when opening a URL from one of our projects, in any browser, it's not the right website that shows up but some other website. In most cases it redirects to google.com with some parameters, like https://www.google.de/?gfe_rd=cr&ei=krOOU8_kGcSKswadyYDQBw&gws_rd=ssl, or just the ugly Google 404 page. But today it stayed on the original URL while showing the content of http://debug.netdna-cdn.com/. This happens about once a week and for no apparent reason. Even stranger, it originally occurred only on a single PC in the network; it now happens on two different computers in the network, both running Windows 8. The problem cannot be fixed by clearing the browser cache, but it can by rebooting the PC or using ipconfig /flushdns, so I think it has something to do with the machine's DNS cache. But I have no idea what the reason is or how to figure out how to solve it. Any ideas?

    Read the article

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes goes into some funny state and I cannot mount it. (* Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.)

        # mount /opt
        mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
               missing codepage or helper program, or other error
               (could this be the IDE device where you in fact use
               ide-scsi so that sr0 or sda or so is needed?)
               In some cases useful info is found in syslog - try
               dmesg | tail  or so

    The RAID device appears to be inactive somehow:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d0 : inactive sda4[0](S)
              241095104 blocks

        # mdadm --detail /dev/md_d0
        mdadm: md device /dev/md_d0 does not appear to be active.

    The question is: how do I make the device active again (using mdadm, I presume)? (Other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab:

        /dev/md_d0 /opt ext4 defaults 0 0

    So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time?) This is an Ubuntu 9.10 workstation. Background info about my RAID setup is in this question.
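
    A sketch of the usual reactivation sequence, plus pinning the array definition so it assembles consistently at boot; the device names come from the question, but the exact lines that --examine --scan emits should be reviewed before being appended to mdadm.conf:

        # stop the half-assembled array and let mdadm reassemble it from its superblocks
        sudo mdadm --stop /dev/md_d0
        sudo mdadm --assemble --scan
        sudo mount /opt

        # record the array in mdadm.conf and rebuild the initramfs so early boot sees it
        sudo sh -c 'mdadm --examine --scan >> /etc/mdadm/mdadm.conf'
        sudo update-initramfs -u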

    Read the article

  • What's next for all of these Microsoft "overlapping" and "enhanced" products?

    - by indyvoyage
    Recently I attended a road show, organised by an MS Gold Partner company in the UK. The products discussed were: SharePoint Server (2010 and 2007), Exchange Server, Office Communication Server 2007, Exchange Hosted Services, Office Live Meeting, Office Communicator, System Center Configuration Manager and Operations Manager, VMware, Windows 7, etc. As Microsoft touted the enhancements in each product over the previous version, I felt that clients are not much interested in all these details. For example Office Communicator: surely they have improved the product a lot, and at first sight everyone said 'WOW, great product', but nobody wishes to pay money for all these extra features. Some argued they are bogged down by the increased number of menus; they don't need a soft-call feature integrated with mobile calls. The same applies to all the other products as well, such as MS Office (what next, 2 ribbons?), the Windows OS and many more. Indeed there must be good features in all these products, but is it worth spending money and time to update the older systems? Also, sometimes these features decrease productivity instead of increasing it. So do you think whatever enhancements MS is making to these products are only for selling purposes and not real use, and also a way to keep developers busy learning the new tools and features? I am sure some people here will argue that some people need this sort of feature, but I am not talking about NASA or MI5 guys; I am talking about regular businesses and Joe Public. Any ideas welcome.

    Read the article

  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On our current setup we're running into a problem with Varnish. We're running CentOS 5.7 x86_64 (xen pv) with cPanel WHM, hosted at VPS.net. Sometimes we receive a Guru Meditation from Varnish, and when we look in the varnishlog with the following command:

        varnishlog -d -c -m TxStatus:503

    it returns output similar to the following:

        15 VCL_call     c recv
        15 VCL_acl      c NO_MATCH devs
        15 VCL_return   c pass
        15 VCL_call     c hash
        15 Hash         c ****
        15 Hash         c *************
        15 VCL_return   c hash
        15 VCL_call     c pass pass
        15 Backend      c 12 default default
        15 TTL          c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0
        15 VCL_call     c fetch hit_for_pass
        15 ObjProtocol  c HTTP/1.1
        15 ObjResponse  c OK
        15 ObjHeader    c Date: Thu, 22 Mar 2012 22:07:35 GMT
        15 ObjHeader    c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6
        15 ObjHeader    c X-Powered-By: PHP/5.3.9
        15 ObjHeader    c Expires: Thu, 19 Nov 1981 08:52:00 GMT
        15 ObjHeader    c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        15 ObjHeader    c Pragma: no-cache
        15 ObjHeader    c Content-Type: text/html; charset=utf-8
        15 ObjHeader    c X-Cacheable: NO:Cache-Control=private
        15 FetchError   c chunked read_error: 12 (Could not get storage)
        15 VCL_call     c error deliver
        15 VCL_call     c deliver deliver

    As far as I could gather, we could try increasing the nuke_limit, but we currently have a nuke_limit of 500, and when running varnishstat -1 -f n_lru_nuked we "only" get a total of 1031, even though we have seen the error happen on several pages. When we then run top to see how much memory Varnish is using, it shows it is only using 763m, although we've set it to be allowed to use 1200m. Any ideas what the problem could be?
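
    "Could not get storage" means the backend fetch could not obtain space from Varnish's storage backend, so one knob worth sketching is the storage specification passed to varnishd. The malloc backend and the 2G figure below are assumptions to adapt, not values from the question, and the exact variable names in /etc/sysconfig/varnish vary between package versions:

        # e.g. in /etc/sysconfig/varnish on CentOS
        VARNISH_STORAGE="malloc,2G"

        # equivalent direct invocation
        varnishd -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -s malloc,2G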

    Read the article

  • MicroSD card getting corrupted for no good reason

    - by ChaosR
    I recently bought a MicroSD card online. It's a SanDisk 16 GB Class 2. However, it has a nasty problem: every time I fill it with my data, the FAT tables get corrupted. I've tried reformatting it and blanking it; neither seems to solve the problem. I have tried Windows and Linux (Ubuntu); both have the problem. I've used my USB MicroSD readers, and even tried putting it in my phone and copying data onto it from there. All have this problem. Now the really odd thing is that, besides the corrupted file tables, no program can find anything wrong with the hardware. I've tried both chkdsk and "badblocks -w"; neither gives any kind of error. Now I don't know if the actual data gets corrupted, or if it's just the filesystem tables. What happens is that one or more folders start showing a load of Chinese-looking characters (random UTF-8 symbols, I suppose) for folder and file names, and it is impossible to do anything with those. All the other data (outside the corrupted folders) seems fine. I've tried to test it, and the problem doesn't seem to show up until I fill the disk up to about 3-4 GB. After that I can still access the data, but as soon as I eject/safely remove/unmount it, the bad things happen somehow. The next time I plug it in, the folders I most recently wrote to (but sometimes also the folders I wrote to the time before that) are all gibberish. Does anybody have any clue what might be going on here? EDIT: It seems I can't even put ext3 or ext4 on it; they both complain about a corrupted journal. Gheh, guess something is really broken here.
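
    Corruption that only appears once roughly 3-4 GB has been written is also the classic signature of a card that reports more capacity than it physically has, so it may be worth ruling that out with a full write-then-verify pass of the card. f3 (Linux) and h2testw (Windows) are the usual tools for this; the mount point below is a placeholder:

        # fill the card with test data, then read everything back and verify it
        f3write /media/sdcard
        f3read /media/sdcard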

    Read the article

  • Is there a free PDF printer / distiller that creates signable documents?

    - by Coderer
    I've used various methods (mentioned elsewhere on this site) to create PDFs, using a printer driver or converting from PostScript, etc. The common problem is that if I open any of the output files in the newer versions of Adobe Reader, there's an option to "Place Signature" but it's greyed out, or it gives an error message that the feature has been disabled for this document. As far as I can tell, there's an option set somewhere in the document metadata that tells Reader "allow the user to sign this document", or not. None of the free/open source tools that have been linked to in other SU posts list this as an option (though to be fair I haven't actually downloaded and tried all of them). Is there a tool that does this? Can I just poke a bit with a hex editor somewhere to turn on this functionality? I can sometimes get access to Acrobat Professional to turn on this option, but doing it for every desired case would be more work than I care to do. The current workaround for single-page documents is:

        1. Print the document to PDF (possibly via PostScript).
        2. Open a single-page blank PDF with the "signable" bit turned on in Reader.
        3. Create a custom "stamp" using the Reader markup tools, by importing the printed-to document.
        4. "Stamp" an image of the printed document on the blank page, hoping to get it centered about right.
        5. Place a signature over the document-but-not-really that you just stamped.

    This obviously does not scale well at all. It would be much better if I could:

        1. Print the document to PDF.
        2. Drag the document to a simple shortcut / tool / whatever.
        3. Open the document in Reader.
        4. Place a signature in the document.

    ETA: Sorry, maybe I should have been clearer -- I'm talking about the certificate-based digital signing available in Adobe Reader, not adding a virtual ink signature. Also, any solution really would have to be available offline.

    Read the article

  • FreeBSD's ng_nat stops passing packets periodically

    - by Korjavin Ivan
    I have a FreeBSD router:

        # uname
        9.1-STABLE FreeBSD 9.1-STABLE #0: Fri Jan 18 16:20:47 YEKT 2013

    It's a powerful computer with a lot of memory:

        # top -S
        last pid: 45076;  load averages:  1.54,  1.46,  1.29    up 0+21:13:28  19:23:46
        84 processes:  2 running, 81 sleeping, 1 waiting
        CPU:  3.1% user,  0.0% nice, 32.1% system,  5.3% interrupt, 59.5% idle
        Mem: 390M Active, 1441M Inact, 785M Wired, 799M Buf, 5008M Free
        Swap: 8192M Total, 8192M Free

          PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
           11 root        4 155 ki31     0K    64K RUN     3  71.4H 254.83% idle
           13 root        4 -16    -     0K    64K sleep   0 101:52 103.03% ng_queue
            0 root       14 -92    0     0K   224K -       2 229:44  16.55% kernel
           12 root       17 -84    -     0K   272K WAIT    0 213:32  15.67% intr
        40228 root        1  22    0 51060K 25084K select  0  20:27   1.66% snmpd
        15052 root        1  52    0   104M 22204K select  2   4:36   0.98% mpd5
           19 root        1  16    -     0K    16K syncer  1   0:48   0.20% syncer

    Its tasks are NAT via ng_nat and a PPPoE server via mpd5. Traffic through it is about 300 Mbit/s, about 40 kpps at peak. Up to 350 PPPoE sessions are created. ng_nat is configured by the script:

        /usr/sbin/ngctl -f- <<-EOF
            mkpeer ipfw: nat %s out
            name ipfw:%s %s
            connect ipfw: %s: %s in
            msg %s: setaliasaddr 1.1.%s
        EOF

    There are 20 such ng_nat nodes, with about 150 clients. Sometimes the traffic via NAT stops. When this happens vmstat reports a lot of FAILs:

        # vmstat -z | grep -i netgraph
        ITEM                  SIZE   LIMIT    USED    FREE         REQ     FAIL  SLEEP
        NetGraph items:         72,  10266,      1,    376,   39178965,       0,     0
        NetGraph data items:    72,  10266,      9,  10257, 2327948820, 2131611,  4033

    I tried increasing

        net.graph.maxdata=10240
        net.graph.maxalloc=10240

    but this doesn't work. It's a new problem (1-2 weeks old). The configuration had been working well for about 5 months, and no configuration changes were made leading up to the problems starting. In the last few weeks we have slightly increased traffic (from 270 to 300 Mbit/s) and have a few more PPPoE sessions (300-350). Please help me figure out how to find and solve this problem.
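
    If the netgraph item limits are what is being exhausted, note that on FreeBSD these are normally set as boot-time tunables in /boot/loader.conf rather than adjusted at runtime; a sketch, with values that are only a guess to be sized against the observed FAIL counts:

        # /boot/loader.conf -- netgraph queue sizing (values are assumptions; reboot to apply)
        net.graph.maxdata=65536
        net.graph.maxalloc=65536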

    Read the article

  • illegitimate traffic from user agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729)

    - by user114293
    Since the beginning of the year, I've been getting a lot of traffic with the user agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729). My access logs show 40%-60% of traffic coming from that user agent. That's strange, because the user agent string claims a Firefox 3.0.10 browser (is anybody using that browser in 2012? Definitely not 40%-60% of visitors to a normal website). Also, the logs show that this user agent only requests the HTML document and none of the referenced assets such as images, CSS or JS files. I checked the IPs of those requests (with that UA); they're coming from all over the world. I noticed that those IPs sometimes also appear with a mobile user agent. So my suspicion is a mobile app that is making a lot of "spider requests", but if that were the case, other websites should have the same problem. That's actually my question: does anybody experience the same or similar problems?
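
    In case it helps to isolate or throttle the suspicious traffic while investigating, a sketch of web server directives that tag requests carrying that exact user agent; Apache 2.2 is an assumption, since the question doesn't say which server is in use:

        # mark requests from the suspicious user agent
        SetEnvIfNoCase User-Agent "Firefox/3\.0\.10.*\.NET CLR 3\.5\.30729" suspect_ua

        # log them separately for analysis...
        CustomLog logs/suspect_ua.log combined env=suspect_ua

        # ...or block them outright (Apache 2.2 access-control syntax)
        Order Allow,Deny
        Allow from all
        Deny from env=suspect_ua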

    Read the article
