Search Results

Search found 28288 results on 1132 pages for 'home directory'.

Page 533 of 1132

  • How to handle these variables in rsync exclude file?

    - by linux
    I have an ignore file for rsync but I can't figure out how to ignore this string of file names and the username: backup/cpbackup/daily/username/homedir/mail/cur/1244452567.H511146P7355.dwhs45.dwhs.net,S=2161:2, backup/cpbackup/daily/username/homedir/mail/cur/1244455430.H516330P14494.dwhs45.dwhs.net,S=4062:2, I tried this: backup/cpbackup/daily/*/homedir/mail/cur/* and this: *.*.dwhs45.dwhs.* But of course that would be too easy. Basically I just want to not transfer all the mail in the /cur/ directory for all users to the backups.
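
    A minimal sketch of an exclude file that should skip the per-user mail spools, assuming the patterns are fed to rsync with --exclude-from (the file name mail-excludes.txt is just a placeholder). Patterns without a leading slash are matched against the end of each path, so a single rule covers every user's .../homedir/mail/cur directory and everything inside it:

      # mail-excludes.txt: the trailing slash limits the match to directories;
      # once the directory is excluded, its contents are never transferred.
      homedir/mail/cur/

      # invocation (paths illustrative):
      # rsync -av --exclude-from=mail-excludes.txt /backup/ remote:/backup/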

    Read the article

  • Windows 7 forgets my default settings

    - by j-t-s
    Hi all, I recently bought a new computer and Windows 7 Home Premium. I only have one small problem though. I have the option "Show Window Contents While Dragging" enabled, but every time I restart the computer, it reverts back to disabled. The only thing I could think of was the system requirements, but that is not the case, as my computer more than meets the full requirements. Can somebody help me please? Thank you

    Read the article

  • Quantifying the effects of partition mis-alignment

    - by Matt
    I'm experiencing some significant performance issues on an NFS server. I've been reading up a bit on partition alignment, and I think I have my partitions mis-aligned. I can't find anything that tells me how to actually quantify the effects of mis-aligned partitions. Some of the general information I found suggests the performance penalty can be quite high (upwards of 60%) and others say it's negligible. What I want to do is determine if partition alignment is a factor in this server's performance problems or not; and if so, to what degree? So I'll put my info out here, and hopefully the community can confirm if my partitions are indeed mis-aligned, and if so, help me put a number to what the performance cost is.

    The server is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5” 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. The RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as an LSI MegaSAS 9260). The OS is CentOS 5.6, and the home directory partition is ext3, with options “rw,data=journal,usrquota”. I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for a big NFS share:

      [root@lnxutil1 ~]# parted -s /dev/sda unit s print
      Model: DELL PERC H700 (scsi)
      Disk /dev/sda: 134217599s
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos

      Number  Start    End         Size        Type     File system  Flags
       1      63s      465884s     465822s     primary  ext2         boot
       2      465885s  134207009s  133741125s  primary               lvm

      [root@lnxutil1 ~]# parted -s /dev/sdb unit s print
      Model: DELL PERC H700 (scsi)
      Disk /dev/sdb: 5720768639s
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt

      Number  Start  End          Size         File system  Name  Flags
       1      34s    5720768606s  5720768573s               lvm

    Edit 1: Using the cfq IO scheduler (default for CentOS 5.6):

      # cat /sys/block/sd{a,b}/queue/scheduler
      noop anticipatory deadline [cfq]
      noop anticipatory deadline [cfq]

    Chunk size is the same as stripe size, right? If so, then 64kB:

      # /opt/MegaCli -LDInfo -Lall -aALL -NoLog

      Adapter #0
      Number of Virtual Disks: 2

      Virtual Disk: 0 (target id: 0)
      Name: os
      RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
      Size: 65535MB
      State: Optimal
      Stripe Size: 64kB
      Number Of Drives: 7
      Span Depth: 1
      Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
      Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
      Access Policy: Read/Write
      Disk Cache Policy: Disk's Default
      Number of Spans: 1
      Span: 0 - Number of PDs: 7

      ... physical disk info removed for brevity ...

      Virtual Disk: 1 (target id: 1)
      Name: share
      RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
      Size: 2793344MB
      State: Optimal
      Stripe Size: 64kB
      Number Of Drives: 7
      Span Depth: 1
      Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
      Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
      Access Policy: Read/Write
      Disk Cache Policy: Disk's Default
      Number of Spans: 1
      Span: 0 - Number of PDs: 7

    If it's not obvious, virtual disk 0 corresponds to /dev/sda, for the OS; virtual disk 1 is /dev/sdb (the exported home directory tree).
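
    A quick back-of-the-envelope check, offered as a sketch: with a 64 kB stripe element, a partition start is aligned when its byte offset (start sector × 512) is a multiple of 65536 (and ideally of the full data stripe, roughly number-of-data-disks × 64 kB). Using the start sectors from the parted output above:

      # start_sector * 512 modulo the 64 KiB stripe element; non-zero means misaligned
      echo $(( 63 * 512 % 65536 ))       # sda1 -> 32256, misaligned
      echo $(( 465885 * 512 % 65536 ))   # sda2 -> non-zero, misaligned
      echo $(( 34 * 512 % 65536 ))       # sdb1 -> 17408, misaligned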

    Read the article

  • Deleting windows.edb and unchecking Indexing service led to hard drive file records swapping

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on the hard drive by deleting the windows.edb indexing file... I also stopped the Windows Search service as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the properties dialog for the C:\ drive, I did the same for two USB-connected hard drives (J:\ and I:\). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky to me at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3 GB of space, yeehaw. However, as soon as I had done this I could not access the USB-connected hard drives anymore. The error I got was "I:\photos is not accessible" "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\.

    Here is where I enter the twilight zone... I try disconnecting the I:\ USB hard drive, but XP shows me that the J:\ drive has disconnected instead and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists the contents of the other drive at the root level. I tried connecting the drives vice versa and the same thing happens. I tried taking one of the hard drives to another computer, and when I connect it there it lists not its own contents but the contents of the other hard drive, and gives the same error as above when I try to access any of the folders (even folders on the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos)??? And no, this is not me mixing up my drive letters.

    Computer Management - Disk Management shows the same result as Explorer: the drive size is correct (one is 500 GB, the other is 640 GB), but the drive name is that of the opposite drive, as are the contents. Also, one drive was full of data and the other almost empty, but each incorrectly shows the free-space status of the other drive. Somehow the USB drives seem to have switched file tables, file records, boot records or something, extremely weird! Even weirder, if I try to create a text file or folder on this drive, it works fine, accessing them, saving, whatever, all good, but accessing any other data on the drive gives me an error.

    Does anyone have a clue what is going on and, more importantly, how I can restore the correct folder listings to access my family photos??? cheers, linni

    Read the article

  • How to set up nginx and a subdomain

    - by Evolutio
    I have GitLab installed on my server and it works on all domains, e.g. git.lars-dev.de, lars-dev.de and *.lars-dev.de. How can I run GitLab only on git.lars-dev.de and another subdomain on files.lars-dev.de? My lars-dev conf:

      server {
        listen *:80; ## listen for ipv4; this line is default and implied
        #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

        root /var/www/webdata/lars-dev.de/htdocs;
        index index.html index.htm;

        server_name lars-dev.de;

        location / {
          try_files $uri $uri/ /index.html;
        }

        #error_page 500 502 503 504 /50x.html;
        #location = /50x.html {
        #  root /usr/share/nginx/www;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #  deny all;
        #}
      }

    and the GitLab configuration:

      upstream gitlab {
        server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
      }

      server {
        listen *:80;                 # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
        server_name git.lars-dev.de; # e.g., server_name source.example.com;
        server_tokens off;           # don't show the version number, a security best practice
        root /home/git/gitlab/public;

        # individual nginx logs for this gitlab vhost
        access_log /var/log/nginx/gitlab_access.log;
        error_log /var/log/nginx/gitlab_error.log;

        location / {
          # serve static files from defined root folder
          # @gitlab is a named location for the upstream fallback, see below
          try_files $uri $uri/index.html $uri.html @gitlab;
        }

        # if a file which is not found in the root folder is requested,
        # then the proxy passes the request to the upstream (gitlab unicorn)
        location @gitlab {
          proxy_read_timeout 300;    # https://github.com/gitlabhq/gitlabhq/issues/694
          proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
          proxy_redirect off;

          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;

          proxy_pass http://gitlab;
        }
      }
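
    A likely reason GitLab answers on every name is that its server block acts as the default when no server_name matches. A minimal sketch of pinning things down, assuming the files site lives in a hypothetical /var/www/webdata/files.lars-dev.de/htdocs directory: declare an explicit default_server that rejects unmatched hosts, and add a dedicated block for files.lars-dev.de:

      server {
          listen *:80 default_server;
          server_name _;      # catch-all for unmatched Host headers
          return 444;         # close the connection for names not served here
      }

      server {
          listen *:80;
          server_name files.lars-dev.de;
          root /var/www/webdata/files.lars-dev.de/htdocs;   # hypothetical path
          index index.html index.htm;
      }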

    Read the article

  • If a raid controller changes, are the drives still usable without re-formatting?

    - by Jeremy
    I've been wanting to do a RAID 1 setup in my home with a pair of SATA drives. Someone told me that if the controller fails, you can't just get a new controller because you'll have to reformat the drives. Is that true, or only in some implementations? I was originally just looking at an onboard RAID controller or an entry-level NAS device like the Intel SS4200-E, but if the hardware (controller) ever fails, will I be out of luck accessing the data if I can't get the exact same hardware to replace it?

    Read the article

  • domain is pointing to default static page on server but settings look correct

    - by Cues
    I have edited my Apache vhost file in /etc/apache2/sites-enabled to add the following:

      <VirtualHost *:80>
          ServerName www.mysite.cn
          ServerAlias mysite.cn *.mysite.cn
          DocumentRoot /home/user/static/mysite/cn
      </VirtualHost>

    It still points to the default site on the server when I browse to mysite.cn, but when I enter anything along the lines of ww3.mysite.cn it points to the new, correct document root. Any clues as to what the problem could be? I am lost.
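
    A quick way to see which vhost Apache actually selects for each hostname, as a sketch (the command is apache2ctl or apachectl depending on the distribution); the first vhost listed for *:80 is the default that catches any Host header with no better match, and DNS for the bare domain is worth confirming too:

      # dump the parsed virtual-host table
      apache2ctl -S

      # confirm the bare domain and www really resolve to this server
      dig +short mysite.cn
      dig +short www.mysite.cn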

    Read the article

  • SSH, run a command on login, and then stay logged in?

    - by jonathan
    I tried this with expect, but it didn't work: it closed the connection at the end. Can we run a script via ssh which will log into remote machines, run a command, and not disconnect? So: ssh into a machine, cd to such and such a directory, run a command, and stay logged in. -Jonathan (the expect script I used):

      #!/usr/bin/expect -f
      set password [lrange $argv 0 0]
      spawn ssh root@marlboro "cd /tmp; ls -altr | tail"
      expect "?assword:*"
      send -- "$password\r"
      send -- "\r"
      interact
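
    A minimal sketch of doing this without expect, assuming key-based or interactive password authentication: force a pseudo-terminal and replace the remote command with an interactive shell once the setup commands have run, so the session stays open:

      # run the command(s), then hand the terminal over to a login shell
      ssh -t root@marlboro 'cd /tmp && ls -altr | tail; exec bash -l'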

    Read the article

  • Batch script to rename a portion of a filename

    - by Rubik'sCube
    I've been trying to make a script that will take a file name and change only one word in it. An example would be: projectname.vcproj.domainname.username.user to projectname.vcproj.otherdomainname.username.user. I've tried using a for loop to list the directory and set the delimiter to a period, but it doesn't seem to be able to identify and change it. I'm working from examples that rename .txt files, but it doesn't seem to work. Any suggestions?
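
    A minimal sketch in batch, assuming the files sit in the current directory and that the literal text "domainname" only appears in the part you want replaced (delayed expansion is needed so the variable is re-evaluated inside the loop):

      @echo off
      setlocal enabledelayedexpansion
      for %%F in (*.vcproj.domainname.*.user) do (
          set "name=%%F"
          rem substitute domainname with otherdomainname in the file name
          ren "%%F" "!name:domainname=otherdomainname!"
      )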

    Read the article

  • In search of a network file system with extended caching to speed up file access

    - by Brecht Machiels
    I'm running a small home server that stores my documents. The disks in this server are in a RAID 1 configuration (using Linux md) and it's also periodically backed up to an external hard drive to make sure I don't lose them. However, I'm always accessing the files from other computers on the home network using an SMB share, and this results in a considerable speed penalty (especially when connected over WLAN). This is quite annoying when editing large files, such as digital camera RAWs, for example.

    I've been looking for a solution to this problem. It would have to offer some kind of local caching to speed up the file access. The client would preferably not keep a copy of all data on the server, as it consists of a very large collection of photographs, most of which I will not access frequently. Instead, it should only cache the accessed files and sync the changes back in the background. Ideally, it would also do some smart read-ahead (cache the files that are in the same directory as the currently opened file, for example), but I suppose that's asking a bit much. Synchronization should be automatic (on file change). Conflicting file changes (at the same time on different clients) are unlikely to happen in my use case, but I would prefer if they are handled properly (notification to the user).

    I've come across the following options so far:

    - Something similar to Dropbox. iFolder seems to be the only thing that comes close, but its reputation (stability) and requirements put me off.
    - A distributed file system such as OpenAFS. I'm not sure this will speed up file access. It is probably overkill for what I need.
    - Maybe NFS or even Samba offer these possibilities.
    - Windows' Offline Files, which I read a bit about, but its operation seems limited (at least on Windows XP).

    As this is just for personal use, I'm not willing to spend a lot of money. A free solution would be preferred. Also, the server needs to run on Linux, and I need a client for at least Windows.
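
    One partial match, offered as a sketch: for Linux clients, NFS can use FS-Cache to keep a persistent local cache of the files actually read (it does not sync writes in the background, prefetch sibling files, or help Windows clients), assuming the cachefilesd package and kernel support are available on the client:

      # /etc/cachefilesd.conf names the local cache directory; start the daemon,
      # then mount the share with the fsc option so reads are cached locally.
      sudo service cachefilesd start
      sudo mount -t nfs -o fsc server:/export/photos /mnt/photos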

    Read the article

  • How Do I Cache Just the Homepage with Apache .htaccess?

    - by Volomike
    This config is close...

      <FilesMatch "\.(php)$">
          Header set Cache-Control "max-age=7200, must-revalidate"
      </FilesMatch>

    ...but it does all php pages, not just the home page like I want. Basically the developer said he wants example.com to be cached, while http://example.com/electronics/ would not be cached. Note the developer is using pretty URLs with an MVC framework that runs everything through index.php.
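
    A minimal sketch of one way to scope the header to the site root only, assuming mod_setenvif and mod_headers are enabled (the HOMEPAGE variable name is just illustrative; SetEnvIf sees the original request URI, so the rewrite to index.php doesn't matter):

      # mark requests whose URI is exactly "/" and send Cache-Control only for them
      SetEnvIf Request_URI "^/$" HOMEPAGE
      Header set Cache-Control "max-age=7200, must-revalidate" env=HOMEPAGE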

    Read the article

  • How to export registry hives from Windows.000 after a re-install

    - by Mawg
    I had to reinstall Windows and my old Windows directory is now called Windows.000. I tried to reinstall my software applications, but one told me it had been installed on the maximum number of PCs, even though this was the first PC I ever installed it on. I think it might be OK if I can export the relevant registry hive from the old Windows and import it into the new one... but how can I do that? Thanks in advance
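
    A minimal sketch of mounting a hive from the old installation so it can be exported, assuming the old system hive sits at the usual C:\Windows.000\System32\config\SOFTWARE location (the OldSoftware key and the SomeVendor\SomeApp branch are placeholders); run from an elevated command prompt:

      rem mount the old SOFTWARE hive under a temporary key, export the branch
      rem you need, then unload it again
      reg load HKLM\OldSoftware C:\Windows.000\System32\config\SOFTWARE
      reg export HKLM\OldSoftware\SomeVendor\SomeApp C:\old-app-settings.reg
      reg unload HKLM\OldSoftware

    Per-user settings live in ntuser.dat under the old profile folder instead, and that hive can be loaded the same way.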

    Read the article

  • 2Wire USB Network Adapter Not Working With Windows 7

    - by Andrew
    I installed Windows 7 on a separate drive, and everything works fine, but I can't get the above network adapter to work properly. It's displayed in Device Manager as a USB device, and I have the driver that makes it function in Vista, but when I run the driver wizard and point it to the directory where I have the Vista driver, it almost immediately says "Can't install driver." Is there any workaround for this?

    Read the article

  • Apache HTTPd - rotatelogs not working

    - by Mike C
    I've edited my conf.d/ssl.conf file and changed the TransferLog directive from:

      TransferLog logs/ssl_access_log

    to:

      TransferLog "|/usr/sbin/rotatelogs logs/ssl_access_log.%Y-%m-%d.log 60"

    (I am using 60 seconds for testing.) Since that change and an httpd restart, my original ssl_access_log is not updating and a new log was not generated. What am I missing? In my error log, I am receiving this message:

      Could not open log file 'logs/ssl_access_log.2014-05-30.log' (No such file or directory)
      piped log program '/usr/sbin/rotatelogs logs/ssl_access_log.%Y-%m-%d.log 60' failed unexpectedly
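
    One likely cause, offered as a sketch: the relative logs/ path in the original directive was resolved by httpd against ServerRoot, but the path in the piped version is opened by rotatelogs itself, so the usual advice is to hand rotatelogs an absolute path (the /var/log/httpd location assumes the stock CentOS/RHEL layout):

      TransferLog "|/usr/sbin/rotatelogs /var/log/httpd/ssl_access_log.%Y-%m-%d.log 60"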

    Read the article

  • So I want to separate my Program Files from the hard disk with the other system files. What is the b

    - by grg-n-sox
    So I am running Windows 7 as my only OS. I have two hard drives in my computer. The first one is a 74 GB Western Digital 10K RPM Raptor. The second one is a 1 TB Seagate Barracuda (couldn't remember if it was a 7200.12 or some other decimal after the 7200). The OS is installed on the Raptor and I am just using the Barracuda for storage. With this setup, in case you couldn't guess already, the Raptor fills up quickly and I am constantly having to maintain file locations. And although it is nice to have that quicker boot time and program loading, the time spent maintaining the drive makes me waste more time overall. So I am looking for a way to keep it clear while still keeping up system loading speeds. A performance hit on games and such is easily acceptable, and as long as I can guarantee 5 GB of space on the Raptor, I can always just temporarily move a disc image there.

    So, figuring that I have games like Borderlands and Mass Effect installed, as well as large files such as Linux distro DVD disc images in My Documents, I probably should be moving my personal files and Program Files directories to the Barracuda. I currently have folders on the Barracuda for this, but this means routinely copying files over, and I can't really do anything with the Program Files folder that already exists. The best I can do is remember to set the install directory of any program installation to the alternative install directory, which I can't seem to get to ever work right with Steam.

    With that in mind, is there a way that is not too drastic to let me just change some folders and system settings once and have everything work fine afterwards for my setup? I have considered just reinstalling Windows 7 to the Barracuda, but that would defeat the purpose of the Raptor except for running disc images off of it. I have also heard a bit about being able to use symlinks to fix this, but I have also heard that symlinks in Windows are not necessarily the same and not as well supported as elsewhere. An example a friend mentioned was that if you have a symlink in Windows on a small hard drive pointing to a large hard drive, and the contents the symlink points to are larger than the small hard drive's capacity, then Windows will think the smaller hard drive is full. So is there a fix/workaround that will let me use symlinks across hard drives without these issues, or is there a better solution I am not being told about, not mentioning, or not thinking of?
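
    A minimal sketch of the junction approach on Windows 7, assuming a game's folder has already been installed on the Raptor (the paths below are placeholders); run from an elevated command prompt. Junctions can point at a directory on another local volume, so programs keep using the old path:

      rem move the folder to the big drive, then leave a junction behind at the old location
      robocopy "C:\Program Files (x86)\Borderlands" "D:\Games\Borderlands" /E /MOVE
      mklink /J "C:\Program Files (x86)\Borderlands" "D:\Games\Borderlands"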

    Read the article

  • OSX Samba Server

    - by Chris Maness
    Hello, I have a Mac Pro that I'm using as a local file server. I mapped a directory on the secondary hard drive of the Mac Pro as a drive on our local Windows box. It worked fine for about a week and then suddenly stopped working. The Windows computer gives an error saying the username and password are invalid; however, I have checked and double-checked to ensure that they are correct, to no avail. Does anyone have any ideas about what's going on here?

    Read the article

  • ASP.NET web application can't find an assembly

    - by Charlie Somerville
    I deployed an ASP.NET web application last night and I when I woke up this morning it was very slow and would occasionally just throw a 'Service Unavailable' error. I checked the Event Viewer and it was filled up with these errors: I'm puzzled as it was working perfectly when I deployed it (MonoTorrent is required to retrieve the number of seeders/leechers for a certain torrent off the tracker - this was working fine), but it's no longer working and whenever code that uses MonoTorrent gets involved, the worker process just crashes. MonoTorrent.dll is in the /bin/ directory.

    Read the article

  • Why is setuid ignored on directories?

    - by Blacklight Shining
    On Linux systems, you can successfully chmod u+s $some_directory, but instead of forcing the ownership of new subdirectories and files to be the owner of the containing directory (and setting subdirectories u+s as well) as you might expect, the system just ignores the setuid bit. Subdirectories and files continue to inherit the UIDs of their creating processes, and subdirectories are not setuid by default. Why is setuid ignored on directories, and how can I get the system to recognize it?
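
    For comparison, the setgid bit is honoured on directories and gives exactly this kind of inheritance, just for the group rather than the owner; a quick sketch (the developers group is assumed to exist):

      mkdir /srv/shared
      chgrp developers /srv/shared
      chmod g+s /srv/shared               # new files and subdirectories inherit the group
      touch /srv/shared/demo && ls -l /srv/shared/demo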

    Read the article

  • Where does $PATH get set in OS X 10.6 Snow Leopard?

    - by Andrew
    I type echo $PATH on the command line and get /opt/local/bin:/opt/local/sbin:/Users/andrew/bin:/usr/local/bin:/usr/local/mysql/bin:/usr/local/pear/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin I'm wondering where this is getting set since my .bash_login file is empty. I'm particularly concerned that, after installing MacPorts, it installed a bunch of junk in /opt. I don't think that directory even exists in a normal Mac OS X install. Update: Thanks to jtimberman for correcting my echo $PATH statement
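
    A sketch of where the base PATH typically comes from on Snow Leopard: /usr/libexec/path_helper assembles it from /etc/paths and the files in /etc/paths.d, and is invoked from /etc/profile for login shells; MacPorts normally adds its /opt/local entries via the shell startup file it edits at install time (e.g. ~/.profile or ~/.bash_profile):

      cat /etc/profile             # look for the path_helper invocation
      cat /etc/paths               # one directory per line
      ls /etc/paths.d              # packages can drop extra entries here
      /usr/libexec/path_helper -s  # prints the PATH/MANPATH export statements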

    Read the article

  • How to solve "Broken Pipe" error when using awk with head

    - by Jon
    I'm getting broken pipe errors from a command that does something like:

      ls -tr1 /a/path | awk -F '\n' -vpath=/prepend/path/ '{print path$1}' | head -n 50

    Essentially I want to list (with absolute path) the oldest X files in a directory. What seems to happen is that the output is correct (I get 50 file paths output) but that when head has output the 50 files it closes stdin causing awk to throw a broken pipe error as it is still outputting more rows.
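
    A minimal sketch of one way to avoid the error: truncate the stream before awk, so awk only ever receives 50 lines and never writes into a closed pipe (paths are the ones from the question; since awk transforms lines one-to-one, the result is the same):

      ls -tr1 /a/path | head -n 50 | awk -F '\n' -v path=/prepend/path/ '{print path $1}'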

    Read the article

  • Installing OpenLDAP on Fedora 12: ldap_bind: Invalid credentials (49)

    - by Alpha Hydrae
    I've been trying to set up the OpenLDAP installed by default on Fedora 12, very unsuccessfully. My ultimate goal is to use LDAP authentication for user login and Apache, using the OpenLDAP server running on the same machine. The server is running, but the error I always get when I try to use ldapsearch or ldapadd is:

      ldap_bind: Invalid credentials (49)

    I've been following these tutorials, but none of them helped me:

    - http://www.howtoforge.com/openldap_fedora7
    - http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/s1-ldap-quickstart.html
    - http://www.howtoforge.com/linux_ldap_authentication
    - http://docs.fedoraproject.org/deployment-guide/f12/en-US/html/s1-ldap-pam.html
    - http://www.openldap.org/doc/admin24/quickstart.html

    First, some components were already installed, and I installed these with yum:

      yum install openldap-servers openldap-devel

    Then, I created a basic slapd.conf file in /etc/openldap:

      database bdb
      suffix "dc=sniejana-sandbox,dc=com"
      rootdn "cn=root,dc=sniejana-sandbox,dc=com"
      rootpw {SSHA}cxdz55ygPu4T3ykg7dgu+L0VRvsFSeom
      directory /var/lib/ldap/sniejana-sandbox.com

    I obtained the rootpw with this command:

      slappasswd -s changeme

    I also created the /var/lib/ldap/sniejana-sandbox.com directory and made sure the entire contents of /var/lib/ldap were owned by the ldap user.

    I found two ldap.conf files, one in /etc and one in /etc/openldap. I don't know which is the right one. If I understood correctly, this file is to configure the client. I put this in both:

      HOST localhost
      BASE dc=sniejana-sandbox,dc=com

    I then ran the server with:

      service slapd start

    It said OK. Most of the tutorials above say to use the command ldapsearch -D "cn=Manager,dc=my-domain,dc=com" -W to ensure that everything's working. When I execute this command, a password prompt appears, and after entering the password, I get the error:

      ldapsearch -D "cn=root,dc=sniejana-sandbox,dc=com" -W
      Enter LDAP password:
      ldap_bind: Invalid credentials (49)

    The same thing happens when trying to use ldapadd. I tried with an encrypted and an unencrypted password in slapd.conf; it doesn't change anything. Adding a -x for simple authentication doesn't change anything either.

    netstat -ap confirms the server is listening:

      tcp        0      0 *:ldap    *:*    LISTEN    4148/slapd
      tcp        0      0 *:ldap    *:*    LISTEN    4148/slapd

    ps -ef | grep slapd confirms the process is running:

      ldap   4148   1   0 15:22 ?   00:00:00 /usr/sbin/slapd -h ldap:/// -u ldap

    Running slaptest produces: config file testing succeeded.

    I read somewhere that the command ldapsearch -x -b '' -s base '(objectclass=*)' namingContext can confirm the server is running. It appears to work:

      # extended LDIF
      #
      # LDAPv3
      # base <> with scope baseObject
      # filter: (objectclass=*)
      # requesting: namingContext
      #

      #
      dn:

      # search result
      search: 2
      result: 0 Success

      # numResponses: 2
      # numEntries: 1

    I'm running out of ideas. Am I missing something obvious?
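
    One thing worth checking, offered as a sketch rather than a definitive diagnosis: OpenLDAP 2.4 on Fedora may read its configuration from the /etc/openldap/slapd.d directory (cn=config) instead of slapd.conf, in which case the rootdn/rootpw above are never seen. If that directory exists, either move it aside or regenerate it from the edited file:

      service slapd stop
      mv /etc/openldap/slapd.d /etc/openldap/slapd.d.bak
      mkdir /etc/openldap/slapd.d
      slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d
      chown -R ldap:ldap /etc/openldap/slapd.d
      service slapd start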

    Read the article

  • Printer driver unavailable after Windows 7 upgrade

    - by kngofwrld
    Upgraded to Windows 7 and lost the ability to print to my old but still perfect Brother HL-1440 laser printer. I cannot run in XP compatibility mode with my version of Windows (Home Professional). Is there anything that can be done to get printing to work? I just want to print via USB but there is no Windows 7 driver.

    Read the article
