Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.


  • What's the reason to break an HDD into several partitions for MDADM+LVM2?

    - by archer
    I'm using 2 HDDs, each 1TB in size. I'm going to create MDADM+LVM2 over them. Initially I thought about this partition layout:

      /dev/sda1 - 1GB   (boot)
      /dev/sda2 - 500GB (md0)
      /dev/sda3 - 499GB (md1)
      /dev/sdb1 - 1GB   (boot)
      /dev/sdb2 - 500GB (md0)
      /dev/sdb3 - 499GB (md1)

    md0 is going to be RAID0 and md1 is going to be RAID1. However, I found some info saying it would be better to break each drive into more partitions (let's say 10 partitions, 100GB in size each). What's the reason for doing that?
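
    For reference, a minimal sketch of the originally proposed layout, assuming the partitions above already exist; the volume group name is illustrative:

      # RAID0 across the two 500GB partitions, RAID1 across the 499GB ones
      mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

      # put LVM on top of both arrays
      pvcreate /dev/md0 /dev/md1
      vgcreate vg0 /dev/md0 /dev/md1

    One commonly cited reason for many small partitions is resync granularity: a failure only forces a rebuild of the ~100GB array it belongs to rather than a whole 500GB one. Whether that outweighs the added complexity is debatable.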


  • disable specific PCI device at boot

    - by Rhymoid
    I've just reinstalled Debian on my Sony VAIO laptop, and my dmesg and virtual consoles all get spammed with the same messages over and over again:

      [   59.662381] hub 1-1:1.0: unable to enumerate USB device on port 2
      [   59.901732] usb 1-1.2: new high-speed USB device number 91 using ehci_hcd
      [   59.917940] hub 1-1:1.0: unable to enumerate USB device on port 2
      [   60.157256] usb 1-1.2: new high-speed USB device number 92 using ehci_hcd

    I believe these messages are coming from an internally connected USB device, most likely the webcam (since that's the only thing that doesn't work). The only way I can seem to shut it up (without killing my actually useful USB ports) is to disable one of the USB host controllers:

      # echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci_hcd/unbind

    This also takes down my Bluetooth interface, but I'm fine with that. I would like this setting to persist, so that I can painlessly use my virtual console again in case I need it. I want my operating system (Debian amd64) to never wake it up, but I don't know how to do this. I've tried to blacklist the module alias for the PCI device, but it seems to be ignored:

      $ cat /sys/bus/pci/devices/0000\:00\:1a.0/modalias
      pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20
      $ cat /etc/modprobe.d/blacklist
      blacklist pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20

    How do I ensure that this specific PCI device is never automatically activated, without disabling its driver altogether?

    Edit: The module was renamed recently; now the following works from userland:

      echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci-pci/unbind

    Still, I'm looking for a way to stop the kernel from binding that device in the first place.
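
    One possible approach (a sketch, not tested on this hardware): have udev run the unbind as soon as the device appears, so it persists across boots. The PCI address is taken from the question; the rule file name is arbitrary:

      # /etc/udev/rules.d/99-unbind-ehci.rules
      ACTION=="add", SUBSYSTEM=="pci", KERNEL=="0000:00:1a.0", \
        RUN+="/bin/sh -c 'echo 0000:00:1a.0 > /sys/bus/pci/drivers/ehci-pci/unbind'"

    Alternatively the same echo line can go into /etc/rc.local; the udev rule just fires earlier in boot. Neither stops the kernel from binding initially, but both unbind before login.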


  • Can't get squid proxy to work

    - by danielgratz
    I need a squid proxy on my CentOS server, but I just can't get it to work. I did yum install squid. Here is my squid.conf file (I removed all comments):

      acl all src 0.0.0.0/0.0.0.0
      acl manager proto cache_object
      acl localhost src 127.0.0.1/255.255.255.255
      acl to_localhost dst 127.0.0.0/8
      acl SSL_ports port 443
      acl Safe_ports port 80
      acl Safe_ports port 21
      acl Safe_ports port 443
      acl Safe_ports port 70
      acl Safe_ports port 210
      acl Safe_ports port 1025-65535
      acl Safe_ports port 280
      acl Safe_ports port 488
      acl Safe_ports port 591
      acl Safe_ports port 777
      acl CONNECT method CONNECT
      acl our_networks src 192.168.1.0/24 192.168.2.0/24
      http_access allow our_networks
      http_access allow manager localhost
      http_access deny manager
      http_access deny !Safe_ports
      http_access deny CONNECT !SSL_ports
      http_access allow localhost
      http_access deny all
      icp_access allow all
      http_port 3128
      hierarchy_stoplist cgi-bin ?
      access_log /var/log/squid/access.log squid
      acl QUERY urlpath_regex cgi-bin \?
      cache deny QUERY
      refresh_pattern ^ftp: 1440 20% 10080
      refresh_pattern ^gopher: 1440 0% 1440
      refresh_pattern . 0 20% 4320
      acl apache rep_header Server ^Apache
      broken_vary_encoding allow apache
      coredump_dir /var/spool/squid

    Then I just put my server's public IP and port 3128 into my web browser's proxy settings, but it isn't working: I can't visit any website. Please help. Thanks.
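
    A likely culprit, judging from the config alone: our_networks only matches the two private LAN ranges, so a browser reaching the server over its public IP falls through to "http_access deny all", and a default CentOS firewall won't have port 3128 open either. A sketch of the fix; 203.0.113.0/24 is a placeholder for the client's real source range:

      # in squid.conf, before "http_access deny all"
      acl my_clients src 203.0.113.0/24
      http_access allow my_clients

      # open the port and reload (CentOS with iptables)
      iptables -I INPUT -p tcp --dport 3128 -j ACCEPT
      service squid reload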


  • Verify that a cron job has completed

    - by skylarking
    Is there a command that can be run to verify that a user's cron job has run successfully? The platform is Ubuntu 8.04 LTS, and I have scripts in /home/useraccount/bin/. Running crontab -l while logged in as that user results in:

      # m h dom mon dow command
      @hourly /home/useraccount/bin/script_1
      @hourly /home/locateruser/bin/script_2

    I realize the scripts could send email or write to a log with a timestamp, but I'm wondering if there is just a way to verify from the command line that they ran.
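
    On Ubuntu, cron logs each job launch to the syslog, so a quick check (assuming the default syslog configuration) is:

      grep CRON /var/log/syslog | grep useraccount

    This shows that cron started the command and when; it does not prove the script itself exited successfully, only that it was launched.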


  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtual hosted server running CentOS. In the past month a process (Java based) that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, its PID says it is using 470MB of RAM while the 'used' memory immediately drops by over 1GB. If I run 'top', the total RES used across all processes falls short of the 'used' listed at the top by almost 700MB. The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up using 'top'. I'm a developer and not a server guy, so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server set-up. Would you also suspect a memory-leaking Java process in this case?

    free before:

                   total     used     free  shared  buffers  cached
      Mem:       2097152   149264  1947888       0        0       0
      -/+ buffers/cache:   149264  1947888
      Swap:            0        0        0

    free after:

                   total     used     free  shared  buffers  cached
      Mem:       2097152  1094116  1003036       0        0       0
      -/+ buffers/cache:  1094116  1003036
      Swap:            0        0        0

    So it looks as though the process is using (or causing to be used) nearly 1GB of RAM. Since the process (based on top) is only using 452MB, does that mean that the kernel is all of a sudden using an additional 500MB?
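
    Memory that belongs to no process's RES (page cache, kernel slab, tmpfs, page tables) is the usual explanation for such a gap. A quick way to see where it went, using standard procfs interfaces (nothing hosting-specific assumed):

      egrep 'Slab|Cached|Buffers|Shmem|PageTables' /proc/meminfo
      slabtop -o | head -20     # top kernel slab consumers; needs root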


  • Gentoo+urxvt+terminus: How do I change font version?

    - by gaidal
    In my Debian installation I can type extended ASCII characters such as åäö by default using the terminus font; however, in Gentoo I can't get this to work so far. Nothing happens when I hit those keys, like in this thread: Missing glyphs in Terminus font, how to setup a fallback font? But in this case I know terminus supports those characters in at least some of its versions, since it works in Debian. So what I want is to find out how to see and choose which of the many different terminus font files is being used. I set the font in the same way on both Debian and Gentoo, using URxvt*font: xft:terminus:size=xx in .Xdefaults. Both systems use en_US.UTF-8 as the default locale.
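
    Since the font is requested through Xft, fontconfig decides which file backs the pattern, and its standard tools can show you which one wins:

      fc-match -v 'terminus' | grep -iE 'file|family'   # which file matches the pattern
      fc-list | grep -i terminus                        # every terminus file fontconfig knows

    Comparing the matched file on Debian and Gentoo should reveal whether Gentoo picked a variant without the needed glyphs.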


  • Tmux installation problems

    - by RayQuang
    Hi, I am trying to install the terminal multiplexer tmux on my Debian Lenny server so that I can have multiple terminals through SSH. However, I have had a lot of difficulty installing it, both from the Debian package and by compiling it. When I try the package, it complains about the wrong version of libc6, and when I compile it I get the following error:

      server.o: In function `server_start':
      server.c:(.text+0x273): undefined reference to `event_reinit'
      collect2: ld returned 1 exit status
      make: *** [tmux] Error 1

    Help would be very much appreciated, RayQuang
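
    The undefined event_reinit strongly suggests the system libevent is too old for tmux (event_reinit appeared in the libevent 1.4 series, and Lenny ships older). A sketch of building a newer libevent into your home directory and pointing the tmux build at it; version numbers and paths are illustrative:

      # build a private copy of a newer libevent
      cd libevent-2.0.x            # unpacked source tree
      ./configure --prefix=$HOME/local
      make && make install

      # build tmux against it
      cd ../tmux-1.x
      ./configure CFLAGS="-I$HOME/local/include" LDFLAGS="-L$HOME/local/lib"
      make

      # run with the private library on the loader path
      LD_LIBRARY_PATH=$HOME/local/lib ./tmux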


  • Apache configuration FollowSymLinks - does it apply to PHP scripts?

    - by Josh
    I have Options set to None for my webroot directory. I also have a symlink /var/coderoot -> /var/webroot/coderoot. In a PHP script I can do include("/var/coderoot/file"); and it works fine, regardless of the option (yes, I save and restart Apache). Does FollowSymLinks only apply to symlinks used in a certain way? Is there a performance loss using the include with a symlink?
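
    FollowSymLinks governs Apache's own URL-to-filesystem mapping; include() is resolved by PHP through ordinary kernel path lookup, which Apache cannot restrict. A sketch illustrating the distinction (paths from the question; the directory block is assumed):

      <Directory /var/webroot>
          Options None    # Apache refuses to *serve* symlinked URLs...
      </Directory>
      # ...but PHP running under that vhost can still execute:
      #   include("/var/coderoot/file");   (resolved by the kernel, not Apache)

    The per-include cost is one extra symlink resolution in the kernel, which is negligible.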


  • Debian or CentOS?

    - by Tres
    I am looking at using either Debian or CentOS for a production server, and I've heard mixed reviews of each. I've heard CentOS performs better under load; however, I'm aware that Debian has a much larger package repository. Personally, I'm partial to Debian since I'm less familiar with Red Hat distros, but I wanted to reach out on Server Fault to see which I really should be using. Any ideas? Thanks!


  • How to restrict all services to single domain in Ubuntu?

    - by harold
    Someone has pointed an unknown domain to my server's IP address likely via A records. I would like to reject access to ALL services (httpd, ssh, mail, etc.) from this domain and only allow requests from my domain. I want to make it so when I connect to that domain it's completely rejected from my server. I can disallow access from HTTP by changing my web server settings, but I want to do this for every single type of connection. How can I do this?
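
    Worth noting: most services never see the DNS name the client typed, only the source IP, so for SSH and mail the best you can do is filter the addresses behind that domain. HTTP is the exception, since the Host header is visible; a catch-all default vhost can reject unknown names (an Apache 2.2-style sketch; yourdomain.example is a placeholder):

      # first vhost = default; catches any Host that is not yours
      # (Apache 2.2 also needs: NameVirtualHost *:80)
      <VirtualHost *:80>
          ServerName catchall.invalid
          <Location />
              Order allow,deny
              Deny from all
          </Location>
      </VirtualHost>

      <VirtualHost *:80>
          ServerName yourdomain.example
          # ... your real site ...
      </VirtualHost>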


  • What causes high CPU usage on the server during file upload?

    - by bosiang
    When I try to upload a huge file (approx 2GB), the server CPU usage goes really high. What should I do to fix this? I just use a standard HTML form and PHP for the file upload. I'm sorry if I posted on the wrong forum; please point me in the right direction.

    Here is the result of the "top" command while uploading 4 files (18MB, 38MB, 60MB, 33MB):

      1904 apache  20   0 33504 5740 1952 R 28.3 0.2  0:02.19 httpd
      1905 apache  20   0 33504 5740 1952 R 28.3 0.2  0:01.99 httpd
      1903 apache  20   0 33232 6968 3060 R 28.0 0.2  0:01.98 httpd
      1910 apache  20   0 33240 6020 2248 S 11.5 0.2  0:02.85 httpd
      2133 root    20   0  2656 1124  896 R  1.6 0.0  0:00.71 top
         1 root    20   0  2864 1404 1188 S  0.0 0.0  0:03.99 init

    Here is the code for chunking, although even when I don't use this code (just a simple file upload), it still causes the same high CPU usage:

      function sendRequest() {
          // clean the screen
          // bars.innerHTML = '';
          var file = document.getElementById('fileToUpload');
          for (var i = 0; i < file.files.length; i++) {
              var blob = file.files[i];
              var originalFileName = blob.name;
              var filePart = 0;
              const BYTES_PER_CHUNK = 100 * 1024 * 1024; // 100MB chunk size (10MB would be 10 * 1024 * 1024)
              var realFileSize = blob.size;
              var start = 0;
              var end = BYTES_PER_CHUNK;
              totalChunks = Math.ceil(realFileSize / BYTES_PER_CHUNK);
              alert(realFileSize);
              while (start < realFileSize) {
                  var chunk;
                  if (blob.webkitSlice) {        // Google Chrome
                      chunk = blob.webkitSlice(start, end);
                  } else if (blob.mozSlice) {    // Mozilla Firefox
                      chunk = blob.mozSlice(start, end);
                  } else {                       // standard File API
                      chunk = blob.slice(start, end);
                  }
                  uploadFile(chunk, originalFileName, filePart, totalChunks, i);
                  filePart++;
                  start = end;
                  end = start + BYTES_PER_CHUNK;
              }
          }
      }
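
    Independent of the JavaScript, PHP buffers and copies each upload on the server side, and several parallel multi-megabyte requests will keep httpd busy. Two settings worth checking first (a sketch; the values are illustrative, not tuned):

      ; php.ini - limits must cover the chunk size actually sent
      upload_max_filesize = 110M
      post_max_size       = 120M
      max_input_time      = 300

    Shrinking BYTES_PER_CHUNK to the 10MB the original comment suggested would also reduce how much each request buffers at once.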


  • RSync over SSH hangs and fails with timeout

    - by tx2
    Client: Gentoo, GCC 4.3.4, rsync 3.0.9
    Server: Ubuntu 10.04.4 LTS, rsync 3.0.7

    Client and server are connected through the Internet, at about 2Mbps. Ping is OK. rsync called on any files in either direction hangs on a random file, then, after a timeout, fails with:

      [sender] io timeout after 30 seconds -- exiting
      rsync error: timeout in data send/receive (code 30) at io.c(140) [sender=3.0.9]
      [sender] _exit_cleanup(code=30, file=io.c, line=140): about to call exit(30)

    About 1 in 10 tries it passes correctly. I've tried adding the SSH options TcpRcvBufPoll=yes and KeepAlive=yes, and disabling and enabling rsync compression -- no changes. How can I make rsync work properly?
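
    The 30-second limit in the error comes from rsync's own --timeout setting (possibly inherited from a daemon config). Raising it and keeping the SSH session alive often helps on flaky links; a sketch using standard rsync/OpenSSH options, with host and paths as placeholders:

      rsync -avP --partial --timeout=300 \
            -e "ssh -o ServerAliveInterval=15 -o ServerAliveCountMax=4" \
            user@remote.example:/src/ /dst/

    --partial also lets an interrupted transfer resume instead of restarting the file.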


  • Munin "Available entropy" when using adress space layout randomization

    - by clawspoon
    Having just configured Munin for statistics logging on my Gentoo server (hardened profile), I am noticing that my "available entropy" is consistently in the 200-300 range. This seems way too low, so I checked it manually using the following command:

      $ cat /proc/sys/kernel/random/entropy_avail
      3544

    Odd. Consistently very low values in Munin, and a practically full pool when checking manually. After thinking about the problem for a while, I came to the conclusion that the cause is probably that I'm using Address Space Layout Randomization, which consumes entropy when running commands/programs. Since Munin runs a whole slew of programs, all the entropy is used up, and Munin then measures how much entropy is left, resulting in the low values. Does anyone have any experience with this? How can this be avoided?
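
    If the pool really is being drained by process startup, the usual workaround is to feed it continuously. On Gentoo, haveged is a common choice; a sketch (the package category may differ by tree):

      emerge sys-apps/haveged
      rc-update add haveged default
      /etc/init.d/haveged start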


  • free -m output: should I be concerned about this server's low memory?

    - by Michael
    This is the output of free -m on a production database (MySQL) machine. 83MB free looks pretty bad, but I assume the buffers/cache will be reclaimed before swap is used?

      [admin@db1 www]$ free -m
                   total    used   free  shared  buffers  cached
      Mem:         16053   15970     83       0      122    5343
      -/+ buffers/cache:   10504    5549
      Swap:         2047       0   2047

    top output sorted by memory:

      top - 10:51:35 up 140 days, 7:58, 1 user, load average: 2.01, 1.47, 1.23
      Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
      Cpu(s): 6.5%us, 1.2%sy, 0.0%ni, 60.2%id, 31.5%wa, 0.2%hi, 0.5%si, 0.0%st
      Mem: 16439060k total, 16353940k used, 85120k free, 122056k buffers
      Swap: 2096472k total, 104k used, 2096368k free, 5461160k cached

        PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
      20757 mysql 15   0 10.2g 9.7g 5440 S 29.0 61.6 28588:24 mysqld
      16610 root  15   0  184m  18m 4340 S  0.0  0.1  0:32.89 sysshepd
       9394 root  15   0  154m 8336 4244 S  0.0  0.1  0:12.20 snmpd
      17481 ntp   15   0 23416 5044 3916 S  0.0  0.0  0:02.32 ntpd
       2000 root   5 -10 12652 4464 3184 S  0.0  0.0  0:00.00 iscsid
       8768 root  15   0 90164 3376 2644 S  0.0  0.0  0:00.01 sshd
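
    The '-/+ buffers/cache' line is the one to watch: ~5.5GB is reclaimable cache, so the box is not actually out of memory. Since mysqld holds 9.7GB RES, comparing that against its configured buffers tells you whether the footprint is expected; a quick check with standard MySQL introspection:

      mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
      mysql -e "SHOW VARIABLES LIKE 'key_buffer_size';"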


  • Shell script to read value from a file and compare it to another one

    - by maneeshshetty
    I have a C program which writes one unique value into a text file (it will be a two-digit number). Now I want to run a shell script to read that number and compare it with my required number (e.g. 40). The comparison should deliver "equal to" or "greater". For example: the output of the C program is written into the file called c.txt with the value 36, and I want to compare it with the number 40, then echo "equal" or "greater" accordingly.
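
    A minimal sketch, assuming c.txt contains nothing but the number:

      #!/bin/sh
      value=$(cat c.txt)
      target=40

      if [ "$value" -eq "$target" ]; then
          echo "equal"
      elif [ "$value" -gt "$target" ]; then
          echo "greater"
      else
          echo "less"
      fi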


  • XFS: No space left on device

    - by beketa
    I am using XFS on a small HDD (/dev/sdb1, less than 1TB) and storing many small files (< 32KB). df -h and -i show that it has available space:

      # df -hv
      Filesystem   Size  Used Avail Use% Mounted on
      /dev/sda3    127G   19G  102G  16% /
      tmpfs         16G     0   16G   0% /lib/init/rw
      udev          16G  168K   16G   1% /dev
      tmpfs         16G     0   16G   0% /dev/shm
      /dev/sda1     99M   20M   75M  21% /boot
      /dev/sdb1    136G  123G   14G  91% /mnt/sdb1

      # df -iv
      Filesystem    Inodes    IUsed    IFree IUse% Mounted on
      /dev/sda3    8421376    36199  8385177    1% /
      tmpfs        4126158        5  4126153    1% /lib/init/rw
      udev         4124934      671  4124263    1% /dev
      tmpfs        4126158        1  4126157    1% /dev/shm
      /dev/sda1      26112      222    25890    1% /boot
      /dev/sdb1   24905120 11076608 13828512   45% /mnt/sdb1

    However, I got a "No space left on device" error:

      # touch /mnt/sdb1/test
      touch: cannot touch `/mnt/sdb1/test': No space left on device

    I think the inode64 issue is not related to this case because the drive is less than 1TB and df -i shows that there are free inodes. I unmounted and mounted with -o inode64 but got the same error. xfs_repair does not report any problem. xfs_info shows the drive information as follows:

      # xfs_info /dev/sdb1
      meta-data=/dev/sdb1  isize=1024  agcount=16, agsize=2227764 blks
               =           sectsz=512  attr=2
      data     =           bsize=4096  blocks=35644210, imaxpct=25
               =           sunit=0     swidth=0 blks
      naming   =version 2  bsize=4096  ascii-ci=0
      log      =internal   bsize=4096  blocks=17404, version=2
               =           sectsz=512  sunit=0 blks, lazy-count=1
      realtime =none       extsz=4096  blocks=0, rtextents=0

    Any ideas? Thanks!
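
    One hypothesis worth testing: XFS allocates inodes in contiguous chunks, so a 91%-full filesystem whose free space is badly fragmented can fail to create new files even though df -i reports free inodes. The free-space histogram shows whether that is the case (xfs_db ships with xfsprogs; -r opens the device read-only):

      # histogram of free extents per allocation group
      xfs_db -r -c freesp /dev/sdb1

    If most free extents are only a block or two long, deleting or consolidating files (or growing the filesystem) is the practical fix.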


  • Webmail server set up but unable to receive mail

    - by user26516
    I installed CentOS and configured an email server. Sending email works perfectly, but when someone replies to one of those emails, I get this kind of error:

      Technical details of permanent failure:
      Google tried to deliver your message, but it was rejected by the server for
      the recipient domain example.com by mx00.1and1.com. [74.208.5.3].

    I bought the domain at 1and1.com and successfully parked it, but I suspect I need to change something in the MX records so they point at my own mail server. Can anyone help?
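
    The bounce shows replies being delivered to 1and1's own mail exchanger (mx00.1and1.com) rather than your server, which is what a parked domain's default DNS looks like. Checking where mail is currently routed is the first step (dig is standard; the domain is the one from the bounce):

      # see which hosts currently receive mail for the domain
      dig MX example.com +short

    After updating the MX record at 1and1 to point at your server's hostname (and waiting for the TTL to expire), the same query should show your own host instead.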


  • rsync not writing files

    - by Cyrcle
    I'm trying to setup rsync to backup a remote directory to my local drive. I cd to the directory that I want to pull the files to, then I enter: rsync -vrtW [email protected]:~/public_html I enter the password then it starts running. I get all the files listed, but none of them actually transfer. What am I missing? Thanks
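
    This matches a documented rsync behavior: with a single remote argument and no destination, rsync only lists the source, exactly as described. Adding the local destination (here, the current directory) makes it actually copy:

      rsync -vrtW [email protected]:~/public_html .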


  • (monit) What does the failure status "Changed" mean?

    - by bresc
    Hi, I installed monit on my server and tried to monitor nginx:

      check process nginx with pidfile /var/run/nginx.pid
        start program = "/etc/init.d/nginx start"
        stop program = "/etc/init.d/nginx stop"
        group server

    And I get:

      Process 'nginx'
        status              Changed
        monitoring status   monitored
        data collected      Wed Mar 24 00:37:49 2010

    What does "Changed" mean? I couldn't find anything. Thanks.


  • Ubuntu doesn't mount one of my NTFS disks

    - by Jader Dias
    - There is a mountable /dev/sda, NTFS formatted (the Windows disk).
    - There is no /dev/sdb when I ls /dev (the NTFS data disk).
    - There is a /dev/sdc, which is another disk of the same model (the Ubuntu disk).
    - I can see that Ubuntu detected the unmountable disk in the Disk Utility.
    - It incorrectly states that the disk is unpartitioned and a RAID volume (it was previously in a RAID0 setup with /dev/sdc, but now it is a simple volume, no RAID whatsoever).
    - When I boot Windows 7, it uses this unmountable disk without a glitch.
    - The problem happens in both IDE and AHCI modes.
    - Ubuntu 10.04 Lucid Lynx.
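
    The "RAID volume" claim is the clue: stale fakeRAID metadata left over from the old RAID0 set can make Ubuntu's dmraid claim the disk and hide it, while Windows simply ignores the metadata. A possible check and fix (dmraid is in the Ubuntu repos; be certain the disk is truly out of any array before erasing anything, and substitute the affected device for /dev/sdX):

      sudo dmraid -r               # list disks carrying RAID metadata
      sudo dmraid -r -E /dev/sdX   # erase the stale metadata from that disk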


  • Secure & stable open source mail server

    - by Fanar ALHAYALI
    I have asked a question on http://stackoverflow.com/questions/9868426/i-need-to-know-which-email-server-i-have-to-use and someone told me my question would be better on Server Fault. I know that this is a common question that has been asked many times, but there are so many available mail servers that I am not able to decide on one. Kindly tell me which is a secure, stable and fast open source mail server for CentOS or Red Hat servers. Is there any guide which can be used to deploy a mail server with all its components, e.g. SMTP, POP3, IMAP, spam filtering, a calendar server, antivirus and DNS settings? Currently I'm using Sun Messaging v6 installed on Solaris 10, and my boss asked me to write a report on the best mail server on the market today. I tried to have a look on Google but I couldn't find interesting information for my report. Any advice would be appreciated.


  • How to back up initial state of external backup drive?

    - by intuited
    I've picked up an HP Simplesave external drive. It comes with some fancy software that is of no use to me because I don't use Windows. Like many current consumer-targeted backup drives, the backup software is actually contained on the drive itself. I'd like to save the drive's initial state so that I can restore it if I decide to sell it. The backup box itself is somewhat customized: in addition to the hard drive device, it presents a CDROM-like device on /dev/sr0. I gather that the purpose of this cdrom device is to bootstrap, via Windows autoplay, the backup application which lives on the disk itself. I wouldn't suppose any guarantees about how it does this, so it seems important to preserve the exact state of the disk. The drive is formatted with a single 500GB NTFS partition. My initial thought was to use dd to dump the disk (/dev/sdb) itself, but this proved impractical, as the resulting file was not sparse. This seemed to be because the NTFS empty space is not filled with zeroes, but with a repeating series of 16 bytes. I tried gzipping the output of dd. This reduced the file to a manageable size -- the first 18GB was compressed to 81MB, versus 47MB to tarball the contents of the mounted filesystem -- but it was very slow on my admittedly somewhat derelict Pentium M processor: that first 18GB took about 30 minutes. So I've resorted to dumping the disk state and partition data separately. I've dumped the partition table with:

      sfdisk -d /dev/sdb > sfdisk.-d.out

    I've also created a compressed image of the NTFS partition (the only one on the disk) with:

      ntfsclone --save-image --output - /dev/sdb1 | gzip -c > ntfsclone.img.gz

    Is there anything else I should do to ensure that I can restore the precise original state of the drive?
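
    One small addition would round out the hard-disk side of the dump, and the matching restore commands follow (standard sfdisk/ntfsclone usage; device names as in the question):

      # also save the MBR (bootstrap code + partition signature), which sfdisk -d omits
      dd if=/dev/sdb of=mbr.bin bs=512 count=1

      # --- restore ---
      dd if=mbr.bin of=/dev/sdb bs=512 count=1
      sfdisk /dev/sdb < sfdisk.-d.out
      gunzip -c ntfsclone.img.gz | ntfsclone --restore-image --overwrite /dev/sdb1 -

    Note this covers only the hard drive itself; if the /dev/sr0 device is emulated by the enclosure's firmware rather than backed by the disk, no disk-level dump can capture it.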


  • Triggering GDM login on a remote machine

    - by creator
    I have to briefly describe the situation. We are planning to make a computer classroom with workstations running Ubuntu 10.04. Since making accounts for each student has not been considered reasonable, we decided to make accounts for each student group. We don't want students to share their passwords between groups so the solution would be not to give them passwords at all, but let the teacher log them in instead. Obviously he shouldn't go from one machine to another typing in credentials by hand, so we need some script that will connect to a remote machine by ssh and make GDM (or probably any other login manager if GDM cannot serve this purpose) log in specified user. I couldn't find any solutions, as well as I haven't noticed anybody in similar situation asking for help, so my question will be: can the scheme described be realized and if yes, then how? Thanks in advance.


  • Is it safe to run two instances of svnserve on one repository, or only one?

    - by fredden
    We've two nodes running heartbeat/drbd, and one of the services we're using is subversion. What I want to know is: is it safe to run svnserve on both nodes all the time, or should it only run on the active node? Does svnserve use file-level locking, or is it all in memory? What are the implications of running svnserve without its repositories accessible? Please let me know if this isn't clear, and I'll try my best to rephrase/clarify. :)


  • Is there any way I can add alternative key binding to a feature in compiz?

    - by vava
    I was wondering: is there any way to add an additional, alternative key binding to a particular feature in Compiz? I am using the Wall plugin, and on my ThinkPad it is convenient to switch between horizontal workspaces with the media buttons for browser navigation. But there are just two of them, so I have to use completely different combinations to switch between workspaces vertically, and it would be very helpful if I could also use a similar kind of combination to switch horizontally, in addition to those media buttons. Is there maybe a way to send a message to Compiz to execute a particular command? That would solve the issue.
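
    One workaround that sidesteps the one-binding-per-action limit: bind the extra keys externally and have them change the workspace through standard X calls. A sketch using xbindkeys plus xdotool (both in most distro repos; the keysyms are examples):

      # ~/.xbindkeysrc
      "xdotool set_desktop --relative -- 1"
          XF86Forward
      "xdotool set_desktop --relative -- -1"
          XF86Back

    One caveat: Compiz usually presents workspaces as viewports on a single desktop, so depending on configuration you may need xdotool's set_desktop_viewport command instead of set_desktop.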

