Search Results

Search found 26640 results on 1066 pages for 'a troubled linux newbie'.


  • Diff -b and -w difference

    - by dotancohen
    From the diff manpage: "-b, --ignore-space-change: ignore changes in the amount of white space" and "-w, --ignore-all-space: ignore all white space". From this, I infer that the difference between the -b and -w options must be that -b is sensitive to the type of whitespace (tabs vs. spaces). However, that does not seem to be the case:
        $ diff 1.txt 2.txt
        1,3c1,3
        < Four spaces, changed to one tab
        < Eight Spaces, changed to two tabs
        < Four spaces, changed to two spaces
        ---
        > Four spaces, changed to one tab
        > Eight Spaces, changed to two tabs
        > Four spaces, changed to two spaces
        $ diff -b 1.txt 2.txt
        $ diff -w 1.txt 2.txt
        $
    So, what is the difference between the -b and -w options? Tested with diffutils 3.2 on Kubuntu Linux 13.04.
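    For what it's worth, a case where the two flags should differ (a hedged illustration based on the manpage wording, not on the asker's files) is whitespace that is present in one file and entirely absent in the other:
        $ printf 'foo bar\n' > a.txt
        $ printf 'foobar\n' > b.txt
        $ diff -b a.txt b.txt    # the space vanished entirely, so -b still reports it
        1c1
        < foo bar
        ---
        > foobar
        $ diff -w a.txt b.txt    # -w ignores all whitespace: no output
        $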

    Read the article

  • Creating a file server - How can I use a large VHD file in Hyper-V? (700GB)

    - by barfoon
    Hey everyone, after a few discussions (here, here, and here), I am still unable to create a simple VM to be used as a file server hosted on my Hyper-V box. I have created a fixed 700 GB SCSI drive (.vhd file), as I have learned that an IDE drive of this size is not possible. Not to sound too cynical, but it has blown me away how much trouble it has been to create a large amount of space and start using it. What is the best way to create a file server with a drive of this size hosted on Hyper-V Server 2008, and how can I get it going? Any details you feel are required (OS, drivers, integration tools, etc.) would be greatly appreciated. Extra information: I am using the stand-alone version of Hyper-V Server, not Windows Server 2008. I have tried loading the Linux Integration Tools (linked in the comments of the last link above) onto a SUSE 11 VM; the installation fails and the machine cannot see the VHD at all. Thanks very much,

    Read the article

  • Preventing users from copying and modifying a script

    - by EverythingRightPlace
    Is it possible to deny the right to copy files? I have a script which should be executable by others. They are also allowed to read the file (though it would not be a problem to forbid reading). But I don't want a changed copy of the script to be executed. It's not a problem to set those permissions, but one could easily copy, change, and run the script. Can this even be avoided? Edit: the OS is Red Hat Enterprise Linux Workstation release 6.2 (Santiago).
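    For reference, the permission setup the asker describes might look like the sketch below (the path is hypothetical). As the question suspects, mode bits cannot prevent copying: any readable file can be copied, and the copy belongs to the copier.
        # minimal sketch, path assumed: root owns the script; others may
        # read and execute it, but cannot modify the original in place
        chown root:root /usr/local/bin/myscript.sh
        chmod 755 /usr/local/bin/myscript.sh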

    Read the article

  • How do I associate server traffic to a domain hosted on that server?

    - by morley
    I have three or four Linux servers, each of which hosts anywhere from 5 to 50 domains. Each domain has its own folder:
        /www/projectname/web/
    Logs go in:
        /www/projectname/log
    However, if there's a traffic spike (or, as I see it on my end, a memory usage spike), I'm not sure how to figure out which domain is responsible for the traffic without running tail -f on each of the projects and making an educated guess based on how fast things scroll. There's got to be a better way, but I haven't seen it, and the last time I checked, bandwidth monitors only report system-wide load. So if anyone knows how to do this the right way, please let me know. Thanks!
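    A rough way to spot the busy project from the shell, assuming each project writes an access log under its log directory (the file name access.log is a guess; the question only gives the directory):
        # hedged sketch: rank projects by access-log line count; adjust the
        # glob to the real log file names
        for log in /www/*/log/access.log; do
            printf '%10d  %s\n' "$(wc -l < "$log")" "$log"
        done | sort -rn | head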

    Read the article

  • Tool for purging unneeded backups

    - by Dana the Sane
    I'm in the common situation where one of the Linux servers I use for storing backups is filling up. I'm wondering what tools are available for purging old backups. Ideally, what I would like is something that keeps nightlies for the previous month, weeklies for the 2nd to 5th preceding months, and retains monthlies (well, every 3rd week) for an indefinite period. Everything that falls outside of that would be deleted after the backups are run. I could write a script to do this, but I feel like there must be a standard tool for this task.
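    Absent a dedicated tool, the simplest tier of that policy can be sketched with find (the directory layout and file naming are assumptions, and the weekly/monthly tiers would still need real rotation logic or a grandfather-father-son style tool):
        # partial sketch only: prune nightlies older than 31 days, assuming
        # weeklies and monthlies live in their own directories
        find /backups/nightly -type f -name '*.tar.gz' -mtime +31 -delete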

    Read the article

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower. My question is this: the directory size of the folder has grown from 4096 to over 200000, as seen in this output of ls -l:
        root@ad57rs0b# ls -l 15000PN5AIA3I6_B
        total 232
        drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes
    On ext3, can these large directory sizes slow down performance? Thanks. Aaron
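    For context, an ext3 directory's on-disk size never shrinks when entries are removed, so one common workaround (a sketch using the names from the listing above) is to recreate the directory once it is empty:
        # hedged workaround sketch: swap in a fresh directory so lookups stop
        # scanning the bloated 233472-byte directory file
        mv barcodes barcodes.old
        mkdir barcodes
        chown chef:chef barcodes
        # after verifying barcodes.old is empty:
        rmdir barcodes.old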

    Read the article

  • LVM, Soft RAID1, and Replication?

    - by mtkoan
    Hi all, I am practicing putting together an HA file server. It is a Linux server with two 1.5 TB hard drives. My plan is to use LVM to manage the physical volumes into logical volumes for /, /home, and /var, then use md (soft RAID 1) to mirror the image onto the second HDD, THEN use DRBD to mirror the entire setup to another server. Is this overkill? Would I be okay with just md and DRBD? The system will serve users' home dirs (~100) and probably some groupware or other local intranet. On my own machines I've always separated the root and /home partitions, so that if I break something I can easily reinstall the OS. Should I follow that same theory here? If so I need LVM, because I really can't predict where we'll need more space, /var or /home.
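    The usual stacking order puts md underneath LVM, roughly as sketched below (device names and sizes are assumptions); DRBD would then sit above either the md device or the logical volumes, depending on the design:
        # hedged sketch: mirror the two disks first, then carve LVs from the mirror
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        pvcreate /dev/md0
        vgcreate vg0 /dev/md0
        lvcreate -L 20G -n root vg0
        lvcreate -L 1T -n home vg0    # leave slack in the VG to grow /var or /home later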

    Read the article

  • Bash & 'su' script giving an error "standard in must be a tty".

    - by sHz
    Folks, I'm having an issue with a bash script which runs a particular command as a different user. The background: running on a Linux box (CentOS), the script is quite simple; it starts the hudson-ci application.
        declare -r HOME=/home/hudson
        declare -r RUNAS=hudson
        declare -r LOG=hudson.log
        declare -r PID=hudson.pid
        declare -r BINARY=hudson.war
        su - ${RUNAS} -c "nohup java -jar ${HOME}/${BINARY} >> ${HOME}/${LOG} 2>&1; echo $! > ${HOME}/${PID}" &
    This is the abridged version of the script; when run, it exits with "standard in must be a tty". Any ideas on what I could be doing wrong? I've tried Dr Google and all the advice hasn't helped thus far.

    Read the article

  • Updating autoreconf

    - by AzaraT
    So I need to use autoreconf to configure a package, and I need at least version 2.61. I'm on CentOS 5.8 and it seems there's no package for it, so I went ahead and compiled it myself. I got the source of autoconf from http://www.gnu.org/software/autoconf/ and compiled that, and sure enough, when I run autoconf -V it shows up as version 2.68, which is indeed the latest version. However, autoreconf (note the 're') still shows up as the old version 2.59, which causes me some problems. So could someone help a relatively new Linux user update autoreconf properly? Thanks
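    One likely explanation (an assumption, since the question doesn't show the install prefix) is that the self-built autoconf went into /usr/local while the old /usr/bin/autoreconf still shadows the new one. A quick check-and-fix sketch:
        # hedged sketch: autoreconf ships inside the autoconf tarball, so a
        # prefix/PATH mismatch is the usual cause of mixed versions
        which autoreconf              # e.g. /usr/bin/autoreconf (the old 2.59)
        which autoconf                # e.g. /usr/local/bin/autoconf (the new 2.68)
        export PATH=/usr/local/bin:$PATH
        hash -r                       # drop bash's cached command lookups
        autoreconf --version          # should now report 2.68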

    Read the article

  • How to remotely open a URL in Firefox in a specific profile?

    - by miernik
    I have several instances of Firefox running with several different profiles, among them profiles named "software" and "test". I am trying to open a URL from a bash script and have it open in the "test" profile, like this:
        firefox -P "test" http://www.example.org/
    However, that opens it in the "software" profile anyway. Any ideas? Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.8) Gecko/20100308 Iceweasel/3.5.8 (like Firefox/3.5.8)
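    One plausible cause (hedged: based on how Firefox's remoting generally works, not on this particular setup): without -no-remote, the command hands the URL to whichever running instance answers first, ignoring -P. Forcing a separate instance may help, as long as the "test" profile is not already running:
        # hedged sketch: -no-remote stops the command from delegating to an
        # already-running Firefox, so -P is actually honored
        firefox -no-remote -P "test" http://www.example.org/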

    Read the article

  • Is rsync --delete safe in case of disk failure

    - by enedene
    I have two data hard drives on my Linux server, and I use the second as a backup of the first. I use rsync for that purpose. An example would be:
        rsync -r -v --delete /media/disk1/ /media/disk2/
    What this does is copy every file/directory from /media/disk1/ to /media/disk2/, but also delete any difference. For example, let's say files A and B, but not file C, are on disk1, while disk2 has no files A and B, but does have C. After the command, disk2 would have files A and B, and file C would be deleted, just like on disk1. Now, a rather disastrous scenario has crossed my mind: what if disk1 dies? The system continues to work, since the system files are on my system disk, but when rsync tries to back up my data onto disk2 from the broken disk1, will it delete all the files from disk2 because it can't read anything on disk1? Is this a possible scenario, or is there protection against it built into rsync?
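    A common defensive pattern, sketched under the assumption that disk1 is a mount point (so a dead or missing disk leaves an empty directory behind rather than unreadable files):
        # hedged guard sketch: only run the destructive sync when the source
        # filesystem is actually mounted; an unmounted /media/disk1 would look
        # empty and make --delete wipe the backup
        if mountpoint -q /media/disk1; then
            rsync -r -v --delete /media/disk1/ /media/disk2/
        else
            echo "disk1 not mounted; skipping backup" >&2
        fi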

    Read the article

  • FTP Error 550 when trying to access a folder via symbolic link

    - by OrangeTux
    I'm configuring vsftpd on a Linux machine. At the moment local users can log in via FTP and will see their home dir listed; they have write access to it. Now I want the users to be able to write in the /var/www/ dir, so I created a new group, apache, added the users to the group, and gave the group write access to /var/www. From the terminal, all users can write to /var/www/. I created a link in the home directory to /var/www via ln -s /var/www/ /home/user/www. ls gives:
        drwxr-xr-x 2 orangetux orangetux 4096 Jun 23 15:06 ftp
        lrwxrwxrwx 1 orangetux orangetux   21 Jun 23 15:00 www -> /var/www/
    But when I use FTP I see the link but I cannot follow it: error 550, which means file not found or bad access. How can I solve this, so that the users have access to /var/www via their home dir?
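    FTP daemons that confine users to their home directory generally cannot follow a symlink pointing outside it. The usual workaround is a bind mount instead of a symlink; a sketch with the paths from the question:
        # hedged workaround sketch: replace the symlink with a bind mount,
        # which looks like a real directory to the FTP server
        rm /home/user/www
        mkdir /home/user/www
        mount --bind /var/www /home/user/www
        # to persist across reboots, an /etc/fstab line such as:
        # /var/www  /home/user/www  none  bind  0  0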

    Read the article

  • /proc/pid/environ missing variables

    - by Josh Arenberg
    Google is giving no love on this one today, so I turn to the experts... I'm currently hacking together a script that relies on the /proc/pid/environ feature in Linux (RHEL 4) to check for a particular environment variable. Trouble is, it seems certain environment variables aren't showing up in there for some reason. Example: create some test vars:
        $ export T_1=testval TEST_1=testval T=testval TESTING_LONGEST=testval
    Open a subshell:
        $ bash
        $ cat /proc/self/environ | tr "\0" "\n" | grep testval
        TESTING_LONGEST=testval
        T=testval
    Hmm... where did T_1 and TEST_1 go? What rules govern this strange universe? Thanks in advance, Josh
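    One rule /proc/<pid>/environ is known to follow, illustrated below (hedged: it may or may not explain the missing variables above): it is a snapshot of the environment at exec time, so variables exported after the process starts never appear in it.
        $ bash                        # child is exec'd with the current environment
        $ export LATER=testval
        $ tr '\0' '\n' < /proc/$$/environ | grep LATER
        $                             # no match: LATER was set after the exec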

    Read the article

  • cat contents of one file into another file

    - by Attila O.
    I have a large (binary) file that has some corruption near the beginning. I also have a second, smaller file that I obtained by starting to download the same file again, interrupting once I had enough bytes to fix the original. My question is: how do I simply overwrite the beginning of the large file with the contents of the second, smaller file? I could use cat, tail and head, but that would create a copy of the file. There must be a more efficient way. Oh yes, and I'm looking for a Linux command-line solution, if that wasn't obvious. I'm using bash, but I have other shells if that helps.
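    A standard in-place approach (file names assumed) is dd with conv=notrunc, which overwrites the first bytes of the large file without truncating or copying it:
        # hedged sketch: write small.bin over the start of large.bin in place
        dd if=small.bin of=large.bin conv=notrunc bs=1M
        # bs only affects speed; the length written is the size of small.bin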

    Read the article

  • Systemd-networkd: How can I prepend a static nameserver entry to DHCP-discovered nameservers?

    - by runiq
    I'm using systemd 213 on Arch Linux, and systemd-networkd with DHCP to connect to the internet. I'm also running a caching DNS server on 127.0.0.1. I'd like to make this server the first DNS server in the list, but I'd also like to use the nameservers discovered by systemd-networkd's DHCP facility. Using a static resolv.conf isn't really possible, because I connect to networks with different DNS settings. I know I can set fallback DNS servers in /etc/systemd/resolved.conf, but is there a way with systemd-networkd to prepend my local DNS server to those discovered by DHCP?
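    One possible direction (hedged: option availability in systemd 213 would need checking against its man pages) is a static DNS= entry in the matching .network file, which networkd combines with the DHCP-supplied servers, though whether it ends up first in the list is not guaranteed:
        # /etc/systemd/network/20-wired.network -- illustrative sketch;
        # the interface match and option support are assumptions
        [Match]
        Name=en*

        [Network]
        DHCP=yes
        DNS=127.0.0.1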

    Read the article

  • Changing filesystem format from jfs to ext4 without losing data

    - by A.Rashad
    I have a fresh Lucid Lynx (Ubuntu 10.04) running on a laptop, where I defined the filesystems as:
        mount point / on ext4 (46 GB)
        mount point /home on jfs (63 GB)
        swap of 3 GB
    I left the machine overnight to do some task, without AC power. Next morning I found it on standby, the task completed, but the filesystem was not reachable; it gave me an I/O error. It seems there is a problem with jfs and standby. Anyway, to avoid any hassle, I want to move this mount point from jfs to ext4. Can I do this without losing data and without having to place the data in a temporary location until the transformation is done? Sorry to mention it, but I recall back in the Windows days we would change FAT16 to FAT32, or FAT32 to NTFS, without losing the data. I hope something similar is available on Linux.

    Read the article

  • SSO solution and centralized user mgmt for about 10-30 Ubuntu machines?

    - by nbr
    Hello, I'm looking for a clean way to centralize user management. The setup: about 10-30 Linux machines (Ubuntu 10.04 LTS server) and maybe 10-30 users for now. The requirements (hopes and expectations): a single place for the administrator to manage user accounts, passwords, and the list of machines each user has access to (and probably groups); it doesn't have to be fancy. Also single sign-on for SSH: the user should be able to log in from machine A to machine B without re-entering his/her password. Quick Google searches give me pointers to OpenLDAP and Kerberos, but I'm not sure where to start and what problem each solution actually solves. Which way to go? I'd love to find a clear guide that focuses on this subject. (Or am I asking "a wrong question"?)
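    For the SSH single sign-on requirement taken on its own, a key-plus-agent setup is the minimal illustration (hedged: this covers only the "no password re-entry" part; centralized account management is where OpenLDAP or Kerberos would come in):
        # hedged sketch of agent-based SSH hops
        eval "$(ssh-agent)"           # start an agent for this session
        ssh-add ~/.ssh/id_rsa         # unlock the key once
        ssh -A machineA               # -A forwards the agent to machineA
        # then, on machineA, the next hop needs no password:
        ssh machineB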

    Read the article

  • Password Self-serve Active Directory via LAMP environment

    - by keithosu
    I would like to be able to change Active Directory passwords via a Linux/Apache-based web page. This would be a self-serve web page for the user. I have LDAP over SSL set up on the Active Directory side to make this happen. Is there any project or code out there that will do this? I've looked at phpadadmin and I cannot get it to work; I think it is for IIS/PHP/MySQL. Another thing to note: I would like the user to authenticate in order to change their own password, so the product/service should not need a privileged account to run. Thanks, Keith
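    For reference, the raw mechanism such a page would ultimately drive: AD lets a user change (not reset) their own password by replacing the unicodePwd attribute over LDAPS, bound as themselves. A command-line sketch with placeholder server and DN; the quote-and-UTF-16LE encoding is an AD requirement:
        # hedged sketch using OpenLDAP's ldapmodify; :: marks base64 values
        OLD=$(printf '"OldPass1!"' | iconv -t UTF-16LE | base64 -w0)
        NEW=$(printf '"NewPass2!"' | iconv -t UTF-16LE | base64 -w0)
        printf '%s\n' \
            "dn: CN=Jane Doe,CN=Users,DC=example,DC=com" \
            "changetype: modify" \
            "delete: unicodePwd" \
            "unicodePwd:: $OLD" \
            "-" \
            "add: unicodePwd" \
            "unicodePwd:: $NEW" |
        ldapmodify -H ldaps://dc.example.com \
            -D "CN=Jane Doe,CN=Users,DC=example,DC=com" -W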

    Read the article

  • Disable passwd history feature with remember=0

    - by user1915177
    PAM version: pam-0.79. Is setting "remember" to 0 allowed in the /etc/pam.d/common-passwd file of the pam_unix module to disable the password history feature? With "remember=0" in /etc/pam.d/common-passwd, I am observing a memory fault when running the passwd command as a user. Browsing the source, the function _set_ctrl in support.c of the pam_unix module handles wrong values of remember, but it is currently not robust enough to handle 0, which is a wrong value. So is the valid (and only) way to disable the history feature to not include the "remember" option in /etc/pam.d/common-passwd and to not set up the /etc/security/opasswd file? The following link mentions that setting "remember" to 0 has no effect on the remember value in /etc/security/opasswd: https://lists.fedorahosted.org/pipermail/linux-pam-commits/2011-June/000060.html
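    An illustrative common-passwd style line without history (hedged: module options vary by distribution; the point is that omitting remember= altogether disables history, rather than setting it to 0):
        # illustrative PAM lines; "obscure" and "md5" are typical pam_unix options
        # history disabled: no remember= option at all
        password  required  pam_unix.so  obscure  md5
        # versus the history-enabled variant it replaces:
        # password  required  pam_unix.so  obscure  md5  remember=5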

    Read the article

  • Remote X-windows between new RHEL5 and old Solaris 8

    - by joshxdr
    I have a very small lab network with three boxes: a modern x86-based RHEL3 box, an x86-based RHEL5 box, and a 1998-vintage SPARC Ultra 5 with Solaris 8. I can use ssh -X to run a program on the RHEL5 box and view the windows on the RHEL3 box; I believe this uses xauth and magic cookies? I have followed the X Windows HOWTO to set up xauth on the Solaris box, but so far no dice. I would like to be able to use the X server on the RHEL3 box with a client program on the Solaris box (program running on the Solaris host, windows appearing on the Linux host). Is there a trick to this, or have I made a mistake following the instructions for setting up xauth and magic cookies?
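    Since Solaris 8's bundled tools may not do X11 forwarding the way ssh -X does, the manual cookie transfer is worth sketching (host names and the Solaris xauth path are assumptions; the RHEL3 X server must also accept TCP connections, which many setups disable with -nolisten tcp):
        # hedged sketch of the manual xauth dance, run on the RHEL3 box:
        xauth nextract - "$DISPLAY" | ssh solarisbox /usr/openwin/bin/xauth nmerge -
        # then, on the Solaris box, point clients at the RHEL3 display:
        DISPLAY=rhel3box:0; export DISPLAY
        /usr/openwin/bin/xclock &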

    Read the article

  • Manually forcing TCP connection to retry

    - by Vi.
    I have the following situation:
        1. A TCP connection (an SSH session to some computer, for example).
        2. The network suddenly goes down and drops all packets (disconnected cable, out of range).
        3. TCP resends packets again and again, retrying with increasing delays.
        4. I see the problem and plug the cable back in (or restore the network somehow).
        5. The TCP connection finally, successfully resends some packet and continues.
    The problem is that I have to wait for some timeout at point 5. I want to use my opened SSH session now and not wait 5-10 seconds until it finds out that the connection is working again. How can I force all TCP connections to resend data without delays in GNU/Linux?
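    There appears to be no per-socket "retransmit now" switch exposed to userspace; the closest related knobs are the retransmission sysctls, sketched here for completeness (hedged: these shorten the overall back-off schedule rather than forcing an immediate resend):
        # hedged sketch: fewer retries before a connection is declared dead;
        # this does not trigger an immediate retransmit on a live socket
        sysctl -w net.ipv4.tcp_retries2=8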

    Read the article

  • What is the difference between du -h and ls -lh?

    - by PeanutsMonkey
    I am having a difficult time grasping the correct way to read the size of files, since each command gives varying results. I also came across a post at http://forums.devshed.com/linux-help-33/du-and-ls-generating-inconsistent-file-sizes-42169.html which states the following: du gives you the size of the file as it resides on the file system (i.e. it will always give you a result that is divisible by 1024), while ls will give you the actual size of the file. What you are looking at is the difference between the actual size of the file and the amount of space on disk it takes (also called file system efficiency). What is the difference between "as it resides on the file system" and the "actual size" of the file?
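    A quick experiment makes the distinction concrete (block size assumed to be the common 4 KiB):
        # hedged illustration: a 1-byte file still occupies a whole disk block
        printf x > tiny
        ls -lh tiny    # apparent size: 1 byte
        du -h tiny     # space used on disk: typically 4.0K (one block)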

    Read the article

  • automated printouts from a wireless printer

    - by Piotr
    I have a wireless printer which is always on, and an always-on fanless Linux server. Looking at the mprinter project on Kickstarter, I started to wonder if there is an app somewhere on the internet already that will prepare an automated daily printout based on some settings. Things to be printed could include: the weather forecast for my location, todos scheduled for that day, a "quote of the day" or "word of the day", stats from Google Analytics for my site, and more. I would schedule the printout for 6:15 every work day so it's on my printer when I am already up, having a coffee. Does anyone know of something that can be used for this purpose? While I know this can be done by combining the power of TeX, cron and a scripting language to manage the dynamic part of the PDF, I believe this is a use case someone has already addressed.
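    Absent a ready-made app, the cron-plus-CUPS plumbing is small enough to sketch (the script path and printer queue name are placeholders):
        # crontab entry: weekdays at 06:15
        #   15 6 * * 1-5  /usr/local/bin/daily-sheet.sh
        # /usr/local/bin/daily-sheet.sh, a minimal skeleton:
        #!/bin/sh
        OUT=$(mktemp /tmp/daily.XXXXXX)
        date '+%A, %B %e, %Y' > "$OUT"
        # ...append forecast, todos, quote of the day here...
        lp -d wireless_printer "$OUT"    # CUPS queue name assumed
        rm -f "$OUT"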

    Read the article

  • Dedicated server for all network functions?

    - by Alan
    I want to set up a fictional network configuration for a school in my neighborhood. They have about 50 computers altogether: 2x20 in computer rooms for students and another 10 scattered around for various professors. They should all access the internet through a dedicated Linux router machine. What they would like is to have domain names for the three computer groups: Lab1, Lab2 and Professors. The computers in Lab1 and Lab2 should have static IPs and should all be named by number, so there should be 1@Lab1, 2@Lab1, etc. The Professors network should have DHCP, with authentication. Is it an OK solution to have all these functions on a single server (the one which will be used as the router)? Do I have to set up a local DNS for domain naming? Do the host names for lab computers have to be set on the clients, or can they be assigned automatically?
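    All of that can plausibly live on the router box. As one illustration (the tool choice and addresses are assumptions, and the authentication requirement would need something beyond plain DHCP, such as 802.1X or MAC-based filtering), dnsmasq combines the local-DNS and DHCP roles in a single daemon:
        # /etc/dnsmasq.conf -- illustrative excerpt, subnets assumed
        domain=lab1,192.168.1.0/24
        domain=lab2,192.168.2.0/24
        domain=professors,192.168.3.0/24
        dhcp-range=192.168.3.50,192.168.3.150,12h    # DHCP for the Professors net only
        host-record=pc1.lab1,192.168.1.11            # static names for lab hosts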

    Read the article

  • Enabling Compiz Viewport Switcher key bindings

    - by David Moles
    I'm running Compiz 0.8.2 with CompizConfig on Scientific Linux 6.2 with GNOME 2.28.2. In the CompizConfig "General Options" I have "Desktop Size" set as follows:
        Horizontal Virtual Size: 6
        Vertical Virtual Size: 1
        Number of Desktops: 1
    This gets me the layout I want, i.e. 6 workspaces in a horizontal layout, and ctrl-alt-cursor-keys work fine for switching between them. However, I can't figure out how to get key bindings for specific workspaces. I've tried enabling "Viewport Switcher" in CompizConfig and tried various combinations both in "Number-based viewport switching" and "Go to specific viewport", to no apparent effect. My first thought was that something else was eating the specific key bindings I chose, but I think I've tried every combination of shift, control, alt and super (i.e., the Windows key) by now. I tried setting 6 desktops under "General Options" instead of one desktop with horizontal virtual size 6, but that doesn't seem to make a difference either. What am I missing?
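    As a fallback while the Viewport Switcher bindings are being debugged, wmctrl can jump to a specific viewport from any hotkey tool (a sketch that assumes a 1920-pixel-wide screen; with one large desktop, viewports are addressed by pixel offset):
        # hedged workaround sketch: pan to the 4th of the 6 viewports
        wmctrl -o $((3 * 1920)),0
        # -o X,Y moves the viewport; bind such a command to a key with the
        # desktop's keyboard-shortcut tool or xbindkeys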

    Read the article
