Search Results

Search found 25196 results on 1008 pages for 'hard drive cache'.

  • Windows 7 sharing folder from command line, selecting users and triggering the "Apply" of changes

    - by clintp
    I have a drive that doesn't get mounted until after I log in (a TrueCrypt thumbdrive device, and no, I'm not making it a "System Favorite" to get around this). I'd like to construct a batch file to share it once I've mounted it, because the sharing info doesn't seem to survive a reboot.

    From the GUI, I'd go into the folder's Properties - Sharing, pick the name to share it as under Advanced Sharing, and then under the "Share..." button pick the users and the permissions I want to grant them. After "Apply" there's a pause -- I'm not sure what's happening here, but the dialog says "Sharing Items..." -- and then everything is okay.

    From the command line, I've done:

        net share MyFolder=F:\MyFolder
        cacls F:\MyFolder /G FirstUser:F
        cacls F:\MyFolder /G OtherUser:F

    This almost works: I can see the share on the network, but nobody has permissions to do anything. If I go into the GUI, change anything (my command-line changes are already visible in there), and press "Apply", I get the "Sharing Items... This may take a few minutes" dialog... and then voila! It works. I get the "Your folder is shared" dialog with the command-line changes I made, along with the GUI change I made to trigger the "Sharing Items..." dialog. Everything's peachy.

    Is a service being restarted? Which one? What's triggering the sharing to take effect? And -- more importantly -- how do I do it from the command line?
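
    One possible explanation, offered as a sketch rather than a confirmed fix: net share without a /GRANT clause leaves the share-level permissions at their defaults, and cacls only edits the NTFS ACL, which would explain a share that is visible but unusable. Granting share permissions explicitly (and using icacls, the successor to cacls, for the NTFS side) may remove the need for the GUI "Apply" step entirely; /GRANT is supported by net share on Vista and later:

        net share MyFolder=F:\MyFolder /GRANT:FirstUser,FULL /GRANT:OtherUser,FULL
        icacls F:\MyFolder /grant FirstUser:(OI)(CI)F OtherUser:(OI)(CI)F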

  • Why does Facebook Chat through XMPP protocol on Pidgin Portable not authorize?

    - by Sara Neff
    I heard you can use Facebook chat on desktops now. That's awesome! What I didn't hear is that it's a pain in the butt. Not awesome! I've followed six nearly identical sets of instructions from six different websites, including the one that Facebook generates for you, to get Facebook chat connected through Pidgin. It's the latest portable version, so from what I hear the plugin is out of the question. Whenever I try to connect I get a message saying "Not Authorized" and buttons to either modify the account info or retry. NOTHING I have done has fixed this, and I can't find anything remotely useful anywhere. I am running Windows XP, and running Pidgin (Portable) off of a flash drive. Someone please tell me what I have to do. I read about authorizing the chat on my actual Facebook page; I'd have tried that if I could find out how to do it, but if it's there, they hid it well. HELP?!

  • Having trouble keeping a 1GB RAM Centos server running

    - by Josh
    This is my first time configuring a VPS and I'm having a few issues. We're running WordPress on a 1GB CentOS server, configured per what I could find online. No custom queries or anything crazy, but we're closing in on 8K posts.

    At arbitrary intervals, the server just goes down. From the client side, it just says "Loading..." and spins more or less indefinitely. On the server side, the shell locks completely. We have to do a hard reboot from the control panel, and then everything is fine.

    Watching top, I see it hovering between 35-55% memory usage generally, with occasional spikes up to around 80%. When I saw it go down, there were about 30-40 Apache processes showing, which pushed memory over the edge. error_log tells me that MaxClients was reached right before each reboot. I've tried tinkering with that, but to no avail.

    I think we'll probably need to bump the server up to the next RAM level, but with ~120K pageviews per month that seems like overkill, since it ran fairly well on a shared server before. Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps. Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating!

    Edit: quick top snapshot:

        top - 15:18:15 up 2 days, 13:04, 1 user, load average: 0.56, 0.44, 0.38
        Tasks:  85 total, 2 running, 83 sleeping, 0 stopped, 0 zombie
        Cpu(s): 6.7%us, 3.5%sy, 0.0%ni, 89.6%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
        Mem:  2051088k total, 736708k used, 1314380k free, 199576k buffers
        Swap: 4194300k total,      0k used, 4194300k free, 287688k cached
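
    Since MaxClients is the limit that trips right before each hang, the usual fix is to size it so that worst-case Apache memory stays inside physical RAM. A rough sketch for httpd.conf on a 1GB box, assuming each Apache/PHP process uses on the order of 40-50 MB (check the real per-process RSS in top first; these numbers are placeholders, not tested values):

        <IfModule prefork.c>
            StartServers          4
            MinSpareServers       2
            MaxSpareServers       6
            MaxClients           15
            MaxRequestsPerChild 500
        </IfModule>

    With roughly 15 x 50 MB = 750 MB reserved for Apache, the remainder is left for MySQL and the OS; excess requests queue briefly instead of driving the box into swap, which is what the indefinite "Loading..." and frozen shell usually indicate.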

  • Unexpected(?) high 'wasted' memory in memcached

    - by Nanne
    Looking at our memcached stats, I think I have found an issue I was not aware of before: a strangely high amount of wasted space. I checked with phpMemcacheAdmin for a change, and found this image staring at me:

    Now I was under the impression that the worst-case scenario would be 50% waste, although I am the first to admit not knowing all the details. I have read, amongst others, this page, which is indeed somewhat old, but so is our version of memcached. I believe I do understand how the system works in principle, but I have a hard time understanding how we could get to 76% wasted space. The eviction rate that phpMemcacheAdmin shows is 2 ev/s, so there is some problem here.

    The primary question is: what can I do to fix this? I could throw more memory at it (there is some extra available, I think); maybe I should fiddle with the slab config (is that even possible in this version?); maybe there are other options? Upgrading the memcached version is not a quickly available option. The secondary question, out of curiosity, is of course whether a rate of 75% (and rising) wasted space is expected, and if so, why.

    System (this is currently not something I can do anything about; I know the memcached version isn't the newest, but these are the cards I've been dealt):

        Memcached 1.4.5
        Apache 2.2.17
        PHP 5.3.5
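
    A sketch of where one might start, on the assumption that the waste comes from item sizes mapping poorly onto slab classes (each item is stored in the smallest chunk that fits it, and everything between item size and chunk size counts as wasted). Both of these should work on 1.4.5, though the flag values below are examples to measure against, not recommendations:

        # Per-slab-class breakdown: compare chunk_size, used_chunks, total_chunks
        echo "stats slabs" | nc localhost 11211

        # At (re)start, a finer growth factor (-f) and smaller minimum item
        # overhead (-n) make chunk sizes track item sizes more closely
        memcached -d -m 1024 -f 1.08 -n 48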

  • Hardware freeze during disk activity

    - by Thomi
    I built myself a Linux-based NAS. It has several drives of various sizes and ages in an LVM configuration, with 800GB or so of data. The data is served using a simple Samba server. This was working flawlessly, but after physically moving it, it has developed a strange fault: whenever I do something on the server that causes disk activity, the entire machine freezes hard. This has the effect of killing any open network connections to the box, and generally making it useless. If I leave the machine for a few minutes it seems to come right again, but obviously this isn't really a solution. There are no error or warning messages in syslog or the kernel logs. If I power the machine on and leave it, it runs for several days without locking up; after that time I stopped testing. It doesn't freeze instantly - obviously it doesn't freeze while booting, and I can normally log in via SSH and poke around in a few log files for a couple of minutes before it dies. My question is: what diagnostic tests can I run to determine the cause?
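
    Given that the fault appeared right after the box was physically moved, a loose SATA/power connector or a knocked drive is the first suspect. A few non-destructive checks to start with, assuming smartmontools is installed (device names are examples; repeat for each member disk):

        # SMART health and the drive's own error log
        sudo smartctl -H -l error /dev/sda

        # Exercise each disk read-only while watching the kernel log
        # from a second SSH session
        sudo dd if=/dev/sda of=/dev/null bs=1M count=4096 &
        sudo tail -f /var/log/kern.log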

  • Apache Virtual Host with directory aliases

    - by brechtvhb
    I'm trying to set up a dynamic virtual host in Apache with a directory alias pointing to a different path for every domain. Here's what I'm trying to achieve. Say I have 2 domains:

        * www.domain1.com
        * www.domain2.com

    I want both to point to the same index.php file (C:/cms/index.php). Now the hard part... I want directories or certain file types to point to a different path for each domain. Example:

        * www.domain1.com/layout    -> C:/store/www.domain1.com/layout
        * www.domain2.com/layout    -> C:/store/www.domain2.com/layout
        * www.domain1.com/image.png -> C:/store/www.domain1.com/image.png
        * www.domain2.com/image.png -> C:/store/www.domain2.com/image.png

    However, the admin directory should point to the same path again for all sites:

        * www.domain1.com/admin -> C:/cms/admin
        * www.domain2.com/admin -> C:/cms/admin

    Is there a way to achieve this kind of behaviour in Apache 2.2 without having to create a VirtualHost entry for each new domain?
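
    An untested sketch of one way this is commonly done: a single catch-all vhost that uses mod_rewrite with %{HTTP_HOST} to build the per-domain paths, so no per-domain VirtualHost entries are needed. The paths and extensions are taken from the example above; treat the whole block as a starting point, not a verified config:

        <VirtualHost *:80>
            ServerName catchall.invalid
            ServerAlias *
            RewriteEngine On

            # /admin is shared by every domain
            RewriteRule ^/admin(.*)$ C:/cms/admin$1 [L]

            # /layout and static files are looked up per-domain
            RewriteCond %{REQUEST_URI} ^/layout/ [OR]
            RewriteCond %{REQUEST_URI} \.(png|gif|jpe?g|css|js)$
            RewriteRule ^/(.*)$ C:/store/%{HTTP_HOST}/$1 [L]

            # everything else is handled by the shared CMS entry point
            RewriteRule ^ C:/cms/index.php [L]
        </VirtualHost>

    mod_vhost_alias (VirtualDocumentRoot) is the other common tool for mass vhosting, but it only varies the document root, so the shared /admin and shared index.php would still need aliases or rewrites layered on top of it.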

  • Booting Ubuntu as VM with KVM on Ubuntu 12.04

    - by CrazycodeMonkey
    I am trying to boot my very first VM using KVM. I have Ubuntu 12.04 installed, and I made sure the BIOS had the right virtualization flag enabled for the Intel processor by running kvm-ok. I have researched this on Google and all the instructions that I have found so far are outdated. For example, most instructions talk about booting a virtual machine with the following commands:

        qemu-img create -f qcow2 foo.img 100G                           # create a virtual disk for your VM
        kvm --name foo -m 1024 -hda foo.img -cdrom whatever.iso -boot d # run kvm

    This command line is incomplete. First, you need to be root to run it. Second, it is missing an option for the video device: when you run it you get the error "Could not initialize SDL(No available video device) - exiting". I googled this error and looked it up on Stack Overflow (http://stackoverflow.com/questions/4841908/sdl-init-failure-reason-is-no-available-video-device), but the answer provided there does not work on Ubuntu 12.04. Googling further, I found out that I need to specify a video device, so I finally ran the following command (after I had created the myimage.img image on the drive):

        sudo kvm --name mymachine -m 8096 -hda myimage.img --cdrom ubuntu.iso -boot d -vga cirruss -k en-us -vmc :0

    Now this command does not give me an error, but it just hangs. Does anyone have clear instructions on how to run a VM using KVM on Ubuntu?
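
    Two details in that last command look like plain typos rather than missing features: qemu/KVM spells the VGA option cirrus (not cirruss), and the display option is -vnc (not -vmc). A minimal corrected invocation, under the assumption that a VNC viewer will be used as the display (and with a more modest -m, since 8096 MB may exceed the host's RAM):

        sudo kvm -name mymachine -m 1024 \
            -hda myimage.img -cdrom ubuntu.iso -boot d \
            -vga cirrus -vnc :0

        # then, from the same machine:
        vncviewer localhost:0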

  • Advice: USB Monitoring Programming

    - by Kashif
    I need advice about USB programming in Linux. I have to design a USB monitoring program that will keep checking the USB ports of a CentOS machine. As soon as a USB stick or external hard disk is connected, this program will send an email to a specific person with details of the device (size, mount point, time). When the device is disconnected, it will again send an email to that person with the same kind of information. Meanwhile, the program will also write logs to syslog/messages, tagged with the program's name for easy tracking.

    Now I want to ask: what is the best way to develop this program? As I'm new to this field, I know nothing about it. Should I use Perl, Bash scripting, or some other language? I have no idea what the right approach is, because this program will have to keep running all the time to keep a check on the USB ports. I know a few commands, like lsusb and fdisk (to check attached USB devices) and df -h (to get details of a device), but I don't know how to achieve what I have in mind using them. One more thing: in future I will also need to port this program to Ubuntu and Citrix XenServer, and it should behave the same everywhere.
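
    Rather than polling in a loop, one portable approach is to let udev call a script on add/remove events; udev exists on CentOS, Ubuntu, and XenServer's dom0 alike, though rule syntax can vary slightly between versions, so treat this as a sketch. File names and the address are hypothetical. The rule file:

        # /etc/udev/rules.d/99-usb-monitor.rules (hypothetical name)
        ACTION=="add",    SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-notify.sh add %k"
        ACTION=="remove", SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-notify.sh remove %k"

    and the helper it runs:

        #!/bin/sh
        # /usr/local/bin/usb-notify.sh -- called by udev with "add|remove"
        # and the kernel device name (e.g. sdb)
        EVENT="$1"; DEV="$2"
        logger -t usb-monitor "USB $EVENT: /dev/$DEV"
        {
            echo "USB $EVENT: /dev/$DEV at $(date)"
            fdisk -l "/dev/$DEV" 2>/dev/null
            df -h | grep "^/dev/$DEV"
        } | mail -s "USB $EVENT on $(hostname)" someone@example.com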

  • System State Backup Retention Policies

    - by isoscelestriangle
    I was wondering if there was a general consensus on how long to keep system state backups. I am trying to reevaluate our current backup process, and trying to get a good handle on our current storage requirements. Our current setup involves tapes and sending backups offsite with Barracuda Networks. We have been doing our system state backups with Barracuda now, which does full backups daily, leaving our storage requirements growing quite quickly. My boss is a little too gung-ho with backups and wants our system states saved for quite a while. We currently have 5 days of nightlies, 5 weeklies, 3 monthlies, and so on. I think this is quite overkill for system state backups. My boss wants the ability to go back in time to find when an issue appeared, but I don't think that is practical. Many things change in the course of several months. I also think it would be hard not to notice problems with our DCs and other servers for several months. I would think that a previous week's snapshot and the current week's dailies would suffice. Any advice or reading you can point me to? Thanks!

  • Production deployment to EC2 with minimal downtime

    - by jensendarren
    I have a simple web application deployed on a large instance with EC2. I now want to deploy the latest code to this server in a way that minimizes downtime and is as smooth as possible for the end user. Here is my plan (see the sketch of step 9 below):

        1. Fire up another large instance
        2. Install all the software layers on that instance
        3. Restore and attach an EBS volume to the instance
        4. Deploy our latest production-ready code on the new instance
        5. Run all tests (including manual testing of the application)
        6. (If tests pass) Put a "Site Under Maintenance" notice on the live site
        7. Back up the EBS volume on the live site
        8. Detach the EBS volume from the new server and replace it with the latest backup
        9. Use ec2-associate-address to move the IP address to the new instance
        10. Sit back and wait for traffic to start flowing through the new instance
        11. Terminate the old instance

    Does this seem like a good strategy? Are there any tutorials or books that might cover this topic? I have already read Cloud Application Architectures by George Reese, which is an excellent book, but it does not cover deployment. Additionally, I know there are tools that can help with this, like RightScale or enStratus, which I will use when I start using more than one instance.
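
    The IP move in step 9 is the actual cutover, and it is near-instant. With the classic EC2 API tools the call looks roughly like this (the instance ID and address are placeholders, and the exact argument order is worth checking against your tools' version):

        # Repoint the Elastic IP at the new instance; connections in flight
        # to the old instance are dropped, new ones reach the new box
        ec2-associate-address 203.0.113.10 -i i-0123abcd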

  • How Would I Restrict a Linux Binary to a Limited Amount of RAM?

    - by Ken S.
    I would like to be able to limit an installed binary to only be able to use up to a certain amount of RAM. I don't want it to get killed if it exceeds that, only for that to be the maximum amount it can use.

    The problem I am facing is that I am running an Apache 2.2 server with PHP and some custom code that a developer is writing for us. Somewhere in their code they launch a PHP exec call that runs ImageMagick's 'convert' to create a resized image file. I'm not privy to many details of the project or the code, but I need to find a solution to keep it from killing the server until they can find a way to optimize the code.

    I had thought that I could do this with /etc/security/limits.conf by setting a limit on the apache user, but it seems to have no effect. This is what I used:

        www-data hard as 500

    If I understand it correctly, this should have limited any apache user process to a maximum of 500KB; however, when I ran a test script that chews up a lot of RAM, it actually got up to 1.5GB before I killed it. Here is the output of 'ps auxf' after the setting change and a system reboot:

        USER       PID %CPU %MEM     VSZ     RSS TTY STAT START TIME COMMAND
        root      5268  0.0  0.0  401072   10264 ?   Ss  15:28 0:00 /usr/sbin/apache2 -k start
        www-data  5274  0.0  0.0  402468    9484 ?   S   15:28 0:00 \_ /usr/sbin/apache2 -k start
        www-data  5285  102  9.4 1633500 1503452 ?   Rl  15:29 0:58 |  \_ /usr/bin/convert ../tours/28786/.….
        www-data  5275  0.0  0.0  401072    5812 ?   S   15:28 0:00 \_ /usr/sbin/apache2 -k start

    Next I thought I could do it with Apache's RLimitMEM setting, but I get the same result of it not being limited. Here is what I have in my apache.conf file:

        RLimitMEM 500000 512000

    It wasn't until many hours later that I figured out that if the process actually reached that amount, it would die with an OOM error. I would love any ideas on how to set this limit so other things can function on the server, and all of them can play together nicely.
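
    A note offered as an assumption to verify rather than a diagnosis: limits.conf is applied by PAM at session login, so a daemon started by its init script at boot usually never sees those limits, which would explain the first result. And since a hard address-space cap makes large allocations fail rather than slow down, the "cap it without killing it" goal fits ImageMagick's own resource limits better: past -limit memory, convert spills its pixel cache to disk and keeps running. Two sketches, values illustrative:

        # Option A: ImageMagick's built-in caps on the convert call itself
        convert -limit memory 256MiB -limit map 512MiB in.jpg -resize 800x600 out.jpg

        # Option B: a hypothetical wrapper placed ahead of the real binary in
        # PATH, so every invocation runs under a ulimit'd shell
        #!/bin/sh
        ulimit -v 524288            # address-space cap in KB; tune to taste
        exec /usr/bin/convert.real "$@"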

  • Queries passed to SQL Server are getting corrupted

    - by adrianbanks
    We are experiencing a bizarre error with our application at a customer site. We have managed to narrow it down to the point where we can replicate the behaviour using just Management Studio and SQL Server. We have two machines, A and B:

        +------------+                  +--------------------+
        |    [A]     |                  |        [B]         |
        | Management | ---------------- | SQL Server 2008 R2 |
        |   Studio   |                  |   Enterprise x64   |
        +------------+                  +--------------------+

    We are running a SQL script in Management Studio on machine A against the SQL Server instance on machine B. We are not actually executing the script, just parsing it. Most of the time, the parse operation works fine. Occasionally (seemingly randomly), it fails with a syntax error. The error message shows the part of the script with the error, which appears as some SQL from the original script that has been truncated and has random characters appended to it. An example. The original SQL:

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC
            ON SC.TABLE_NAME = ST.TABLE_NAME
        WHERE ST.TABLE_TYPE = 'BASE TABLE'
          AND SC.COLUMN_NAME = 'Identity'
          AND ST.TABLE_NAME != 'dtproperties'
        ORDER BY ST.TABLE_NAME

    The SQL that is in error (as reported by SQL Server):

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC
            ON SC.TABLE_NAME = Sa?

    The above example shows how the query is being corrupted. It doesn't always happen, and it is not always the same bit of SQL that causes the error. Parsing this script against another SQL Server instance produces no errors, showing that the script itself is fine. It appears that something is corrupting the SQL that is being received by the server. This leads me to think that the problem lies either at the client end or in the transmission of the SQL from client to server. I have a SQL trace from a period where an error occurs, which shows the SQL has already been corrupted when SQL Server receives it. We have been unable to track down any possible cause of this behaviour, and so cannot find a fix. Because the errors occur seemingly randomly, it is also very hard to produce reproduction steps to submit a bug report. Any ideas?

  • Cloned Win7: Keyboard doesn't work

    - by Marc
    I cloned my old Windows 7 hard disk to a shiny new Seagate Momentus XT 500GB using the free EaseUS Disk Copy tool on my laptop. After the clone process I used the Windows 7 installation disc to run the automatic startup repair. This took maybe 15 minutes, and then my cloned disk was able to start. Now the cloned disk boots until the login screen, and then I can't do anything because my keyboard just doesn't work. I tried connecting an external USB keyboard, but this didn't help. The mouse is working fine. Note that the keyboard works fine in the BIOS and in the Windows startup options menu. I booted into safe mode, and again the keyboard is not working at all. I also noticed that the words "Press CTRL+ALT+Delete to login" are now shown in an italic font, but they used to be shown non-italic on the original disk. I have now replaced the clone with the original disk again, and from there everything works fine. Does anybody have an idea how I can get my keyboard back?

  • Recover data from Dynamic Disk (MBR) bigger than 2TB

    - by Helder
    Here is the situation: a Promise FastTrak TX4310 array with 3 disks (750 GB each) in RAID5. This comes to around 1500 GB of data. Last week I had the idea of expanding the RAID with an additional 750 GB disk, which would bring the volume to around 2250 GB. I plugged in the disk and used the WebPAM software to do the RAID expansion. However, I didn't reckon with the 2TB MBR limit, as I didn't remember that the disk was using MBR instead of GPT, and I didn't check it prior to the expansion.

    After a couple of days of expansion, today when I got home, the disk in the Windows disk manager showed the message "Invalid disk", and when I try to activate it, it says "The operation is not allowed on the Invalid pack". From what I can figure, the logical volume on the RAID expanded, passed that info up to the Windows layer, and I ended up with a "larger than 2TB" MBR disk.

    I'm hoping that somehow I can still recover some data from this, and I was wondering if I can "rewrite" the MBR structure back to the 1500 GB partition size, so I can access the partition in Windows. Right now I'm running an "Analyse" with TestDisk, as I hope the program will pick up the old 1500 GB structure and allow me to somehow revert back to it. I think that even though the logical drive in the RAID is bigger than 2TB, I can somehow correct the MBR to show the 1500 GB partition again. I had a similar problem once, and I was able to recover the data using a similar method.

    What do you guys think? Is it a dead end? Am I totally screwed because of the extra RAID layer that I'm not accounting for? Or is there another way forward? Thanks all!

  • Everyone can access my Windows 7 Homegroup file shares - Even Windows XP computers.

    - by adriangrigore
    Hi, I have 3 computers in my network, two running Windows 7 and one running Windows XP. I've set up a homegroup on both Windows 7 computers, and all computers are in the same workgroup. The problem is that one of the Windows 7 computers makes all shares accessible to the entire workgroup instead of sharing only to the homegroup, as it should.

    I created the file share in Windows 7 via right-click in Explorer, then "Share For" - "Homegroup (Read/Write)" (translated from German, so the actual wording may be different). Also, when I look at the file sharing properties of that folder, Windows Explorer informs me that users must have a valid account and password for this computer to access drive shares. Unfortunately this is not true: being in the same workgroup is enough to get access. Homegroup restrictions work as expected on my other Windows 7 computer, and when trying to browse those shares from the XP computer, I get a dialog asking for a login and password.

    What might cause the homegroup restrictions to fail, and how can I fix this?

  • ADSL Modem/Router sometimes hands out incorrect IP addresses

    - by Peter Keevill
    My setup is as follows: the main ADSL modem/router (switch) is configured as a DHCP server with the address range 192.168.0.25-60. The office machines are configured with fixed IPs (not in the same address pool, of course) and hard-wired to this router. A wireless access point (router) is connected to provide Internet access for guests in a separate area. This access point is NOT configured as a DHCP server, wireless authentication is turned off, and IP address lease times are set to 4 hours.

    Sometimes guests are able to connect to the wireless access point but are not given a valid IP: they get 169.x.x.x addresses. Rebooting their machines does not resolve the problem. The only way to resolve it is to reboot the main ADSL router, which is frustrating for the other users who are successfully connected with a valid IP and default gateway. The problem seems to occur more frequently with Apple/Mac guests, although it also sometimes occurs with Windows machines. I personally use Ubuntu on my laptop and thus far have never had any problem connecting and getting a valid IP address in the guest area.

    One further point of note which may give a clue: certain guests (always Apple/Mac) get lease times of 90 days. However, this does not exhaust the pool of available addresses, and of course rebooting the router clears those leases until the next time they log in.

  • Disk usage on IIS, PHP5, performance problems.

    - by Jacob84
    Hi everybody, I'm quite worried about a performance problem I'm facing on one of our production servers. I work for a hosting company, so you can imagine how heterogeneous the applications running here are. It all started with a call from a client complaining about how slowly a Joomla site loads. The setup is IIS6 (Windows 2003) with PHP5 and FastCGI, which normally works pretty well. I tested the loading time and indeed, he was right: 7 or 8 seconds to load, when usually this can be accomplished in 2.

    Seeing these results, I started by checking CPU and RAM. Everything normal: 2GB of RAM free, 3%-8% CPU activity. That's what I call a relaxed server ;). Unfortunately, digging a little deeper I found the PhysicalDisk counters quite high (above 10), especially the read queues. I used Process Explorer to see which of the processes had the highest deltas, but everything seemed normal. As the problem is specifically related to PHP pages, I checked the relevant IIS counters, active connections, number of CGI requests, and number of ISAPI requests:

        CGI         -> 3 to 7
        ISAPI       -> 5 to 9
        Connections -> 90 to 120 (which appears at the top of the graph)

    More than a solution (I know this is hard to find), I would like to know whether you have a specific methodology for facing this kind of problem. Thanks a lot, as always.

  • No LAN and SMB access, and Explorer not responsive, when using a second connection

    - by Lorenzo
    I apologize if this is a duplicate question; I know there are several questions about multiple connections (LAN + LAN and LAN + dialup), but I haven't been able to find one that fits my scenario.

    I'm still using Windows XP on my corporate laptop, and I'm connected to the corporate LAN via Ethernet. The LAN NIC has a public IP address, although not accessible externally, obtained from the corporate DHCP server. This connection is firewalled and requires a proxy to access the Internet. To reach Internet sites blocked by the corporate firewall, I use my smartphone via USB tethering. It is seen as a new LAN interface, and I get a private IP address (192.168.x.x). There are two problems:

    1. The LAN is not accessible, as the default gateway goes to the tethering NIC. I'd like to solve this, but I can live with it.
    2. My PC becomes unresponsive if I use Windows Explorer to view local files, or even when I open the Start menu. I guess this is caused by attempts to connect to a mapped network drive. But I disabled "Client for Microsoft Networks" on the tethering NIC, so why does the system still hang?

    Of course, if I disable the Ethernet NIC, Explorer stops hanging. If you need further details, add a comment. Thanks!
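
    For the first problem, a static route that sends only the corporate networks through the Ethernet NIC usually coexists fine with a default route on the tethering side. A sketch with placeholder addresses (substitute the real corporate subnet and the LAN gateway):

        :: persistent route: corporate 10.0.0.0/8 via the LAN gateway
        route -p add 10.0.0.0 mask 255.0.0.0 10.1.2.254

        :: verify which gateway each destination now uses
        route print

    If the mapped drives become reachable again with that route in place, the Explorer hangs from the second problem may well disappear too, since they typically come from SMB timeouts against a file server that is no longer routable.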

  • Boot records messed on dual boot (win7 and ubuntu) machine with SSD and HDD

    - by Michael
    I have a Lenovo IdeaPad Y570 with two hard drives, an SSD and a normal HDD, both managed by RapidDrive, with Windows 7 pre-installed. First, I shrank my 500 GB HDD a little to make some room for a Linux installation. Then I installed Linux Mint 12 on it, also installing GRUB onto that drive (/dev/sdb); the installer did not allow me to install GRUB on sda. Then I replaced Linux Mint with Ubuntu 12.04, but installed GRUB onto the SSD (which is /dev/sda, and was the default option). After that I couldn't boot into my Windows; only Ubuntu worked. So I did some research and tried rewriting the Windows MBR onto sda1, reinstalling GRUB, and replacing GRUB 2 with GRUB legacy, and now I think my partition tables are totally messed up. Here is the fdisk -l output:

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 64.0 GB, 64023257088 bytes
        255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      411647      204800    7  HPFS/NTFS/exFAT
        /dev/sda2          411648  1009430959   504509656    7  HPFS/NTFS/exFAT

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x5e5d1cc8

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        1979   884389887   442193954+  12  Compaq diagnostics
        /dev/sdb2       884391934   976771071    46189569    5  Extended
        /dev/sdb5       884391936   937705471    26656768   83  Linux
        /dev/sdb6       937707520   967006207    14649344   83  Linux
        /dev/sdb7       967008256   976771071     4881408   82  Linux swap / Solaris

    I also can't mount any Windows partitions to recover data. When I open GParted, the whole sda disk appears unallocated and it states "can not have a partition outside the disk!", and the end-sector address of /dev/sda2 confuses me, since it lies far beyond the end of the 64 GB disk. If I boot from the SSD, it throws an MBR error and won't boot; if I boot from the HDD, I only get the GRUB shell. How do I restore the partition tables? I can only boot this machine from a live CD. Thanks for any help.
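
    Whatever recovery route is taken from here (TestDisk is the usual tool), it is worth snapshotting the current state first so no further experiment makes things worse. From the live CD, something like the following, saved to a USB stick; sfdisk may complain about the inconsistent sda table, but the dd copies capture the first sector byte-for-byte regardless:

        # dump both partition tables as editable text
        sudo sfdisk -d /dev/sda > sda-table.txt
        sudo sfdisk -d /dev/sdb > sdb-table.txt

        # back up the MBR (partition table + bootstrap code) of each disk
        sudo dd if=/dev/sda of=sda-mbr.bin bs=512 count=1
        sudo dd if=/dev/sdb of=sdb-mbr.bin bs=512 count=1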

  • Weird rendering artefact in vim (terminal, not MacVim)

    - by Tobi Lehman
    Running Mac OS X, using either Terminal.app or iTerm2, there is a strange artefact in the character rendering that I have a hard time explaining, and an even harder time understanding. I'll start with a video of my screen so that you can see an example of it in action. From the video you can see a few ways it is weird; for example, sometimes when I hit a letter in insert mode, the character is double-printed. When I go into normal mode, the artefact remains. When I re-enter insert mode, hitting backspace copies the characters on the left to the position under the cursor. This has happened in OS X Lion and Mountain Lion, under both Terminal.app and iTerm 2. It never happens under MacVim. Also, I use GNU/Linux on my other machine and have never had this happen there; I am pretty sure it is strictly a Mac OS X issue, but I do not know how to fix it. For a while I've been working around it by using MacVim most of the time, but I prefer working in a terminal. Does anyone know what is happening here, and if so, how can I fix it? EDIT: I tried using the MacVim Vim executable, and I still get strange artefacts, but they are localized to the left side of the screen. Here is an example:

  • How to backup a large FreeNAS?

    - by Ze'ev
    We have a 12TB FreeNAS box in the office and are looking for a way to keep a backup of it offsite. We're considering (1) tape; (2) a bunch of bare drives (popped into a spare hotswap bay); (3) external drives. Any advice on which solution is best? (Online backup is not an option because our Internet connection is too slow.)

    Also, is there some software that will keep track of which files have been backed up and which haven't, so that when one backup unit fills up, we can continue the backup on the next? (We don't want to have to back up to a single 12TB device.) This software would preferably run on the NAS itself, or else on one of our Mac clients.

    Our goal is a situation where we attach some backup device; it automatically fills up with stuff from the server; the contents of that unit are catalogued somewhere; something prompts us to swap in a fresh drive/tape; and backup continues until complete, including any files that have changed since being backed up.

  • RTorrent stops my torrents, crashes, and I have to manually re-add torrents and start them. How can I stop this cycle of doom?

    - by meder
    I cannot use Transmission, which is the best torrent client, because it's banned from one of the trackers I use, so I am forced to use rtorrent. Normally I am all for command-line programs; however, rtorrent (0.8.6/0.12.6) is simply frustrating. It is not intuitive, IMO.

    I have 400 MB left on the HD, and that's more than enough to download this 200 MB AVI. rtorrent stops the download anyway, though: it says [CLOSED] near the torrent. I do Ctrl-R, which invokes the local hash check, and after that's done rtorrent simply dies (wtf?). Afterwards, it gives me:

        rtorrent: TrackerManager::send_later() m_control->set() == DownloadInfo::STOPPED

    That leads me to open rtorrent again, hit Enter, type /home/meder/file.avi.torrent, down arrow, and Ctrl-S. I am looking for multiple things:

    1. How can I tell rtorrent not to worry about disk space? Again, it stops the torrent when my HD has only 400 MB free even though the torrent I'm downloading is 200 MB (there are no other torrents).
    2. Why does Ctrl-R fail so hard? Why does it cause rtorrent to crash?
    3. If #2 is not solvable, can someone provide an easy way to add a torrent and start it, a more efficient method than typing the torrent name, hitting the down arrow, and Ctrl-S?
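
    A sketch of an ~/.rtorrent.rc that addresses #1 and #3 directly, using the 0.8.6-era option names; the threshold and paths are examples. If I remember the default correctly, rtorrent closes torrents when free space drops below roughly 500M, which would explain why 400 MB free trips the [CLOSED] state:

        # 1: lower the free-space threshold below which torrents are closed
        #    (check every 60 seconds, close below 100M instead of the default)
        schedule = low_diskspace,5,60,close_low_diskspace=100M

        # 3: auto-load and auto-start any .torrent dropped into a watch dir,
        #    instead of typing paths at the Enter prompt
        schedule = watch_directory,5,5,load_start=/home/meder/watch/*.torrent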

  • How to stop NAT dropping idle connections?

    - by WGH
    I have a TCP connection that can be idle for many hours. The traffic flows from the server to the client only; one might call it a kind of push notification. My home router, however, tends to drop the connection silently after 20 minutes (the value of /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established). The server detects the loss once it tries to send anything (I assume it receives an RST from the router itself). As the client never sends anything, it never detects the loss. RFC 5382, "NAT Behavioral Requirements for TCP", states the following:

        A NAT can check if an endpoint for a session has crashed by sending
        a TCP keep-alive packet and receiving a TCP RST packet in response.

    It makes sense. It's much more effective than the hosts sending keep-alives themselves (as only the NAT knows its own timeout), and probably not hard to implement. Are there any NAT solutions implementing this? It would be great if there were a way to enable this in iptables.
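
    Not an answer on NAT-side probing, but for completeness, the usual host-side workaround is to make the endpoints send TCP keep-alives well inside the 20-minute window; this only helps sockets that set SO_KEEPALIVE, and the Linux default of 2 idle hours is far too slow for this NAT. A sketch for the client (values illustrative):

        # start probing after 10 idle minutes, then every minute, give up after 5
        sysctl -w net.ipv4.tcp_keepalive_time=600
        sysctl -w net.ipv4.tcp_keepalive_intvl=60
        sysctl -w net.ipv4.tcp_keepalive_probes=5

    And since the conntrack sysctl quoted above suggests the router itself is accessible, raising its established-connection timeout is the other lever:

        echo 28800 > /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established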

  • Administrative shares in Windows 7 Pro not visible

    - by Chris Tybur
    My desktop machine has a clean install of Windows 7 Professional. For some reason the standard administrative shares Admin$, C$, D$, etc. are not visible, either in Computer Management - Shared Folders - Shares or via net share. I also have a laptop with a clean install of Windows 7 Professional, and I can see the admin shares in both places there. As a result, I can map to \\laptop\c$ from the desktop, but I can't map to \\desktop\c$ from the laptop.

    I pretty much took the defaults during both Windows 7 installations. I've tried adding LocalAccountTokenFilterPolicy to the registry on the desktop, but that didn't work. On the desktop I've also disabled UAC, turned off the Windows firewall, removed it from its homegroup, and made sure file and printer sharing is turned on, but nothing has worked. There is some subtle difference between the two machines that I can't seem to find. I'm logging into both machines using a local account that is in the Administrators group; both accounts have the same name and password.

    I really don't want to have to create a new share for the desktop's C drive, especially since C$ is visible and working on the laptop, so I should be able to make it work on the desktop. Any idea why the admin shares would work on one machine and not the other? Or why LocalAccountTokenFilterPolicy would fail?
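
    One more registry value worth comparing between the two machines, offered as a guess rather than a diagnosis: the Server service only recreates the admin shares at startup if AutoShareWks is absent or nonzero (absent is the default, meaning enabled). From an elevated prompt:

        :: see whether admin shares have been turned off (0 = disabled;
        :: an error here means the value is absent, i.e. the default)
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v AutoShareWks

        :: re-enable and restart the Server service to recreate C$, D$, Admin$
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v AutoShareWks /t REG_DWORD /d 1 /f
        net stop server && net start server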

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and reinstalled my iMac; I am running 10.6.5. Prior to the format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on.

    The document root for the Apache web server points to an external hard drive. In my httpd.conf, here is what I have:

        DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    And then beneath that:

        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>

    At the end of this file, I have commented out the userdir include (/private/etc/apache2/extra/httpd-userdir.conf) and uncommented the virtual hosts include (/private/etc/apache2/extra/httpd-vhosts.conf). Moving on, I have the following entry in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    I also have a record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am running into is that despite having PHP files in /Storage/Sites/mysite, the server is still serving /Storage/Sites: I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, and they all do the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
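
    One quick check that usually pins this kind of fall-through down: ask Apache how it actually parsed the vhost configuration. If the mysite.dev vhost isn't listed, or another vhost is listed as the default for *:80, requests are falling through to the main DocumentRoot:

        # dump the parsed virtual host table (and any config complaints)
        apachectl -S

    It is also worth confirming that the httpd.conf being edited is the one Apache loads (apachectl -V prints SERVER_CONFIG_FILE) and that Apache was restarted after the edits.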
