Search Results

Search found 30511 results on 1221 pages for 'linux networking'.


  • How to create VHD disk image from a Linux live system?

    - by Federico
    Once more, I have to turn to the experts here at SuperUser, as my other sources (mainly Google ;-)) didn't prove very helpful... Basically, I would like to create a VHD image of a physical disk, to be archived, accessed, and maybe even mounted in a virtual machine. There are dozens of articles and tutorials on the web about how to do that, but none that meets exactly the conditions I would like to achieve:

    • The destination file should be a VHD image, as Windows 7 can mount it natively (even over the network) and many other programs can use it (VirtualBox, ...).
    • The disk I'm trying to image contains a Windows XP install, so in theory I could use the disk2vhd utility, but I would like a solution that doesn't require booting that Windows XP install (i.e. one that keeps the disk read-only). Thus I was searching for a solution involving some sort of live system (running from a USB stick or the network).

    However, all the solutions I've come across either make use of disk2vhd or use the dd command under Linux, which makes a complete copy of the disk (i.e. even empty blocks) and does not output a VHD file. Is there a tool under Linux that can directly create a VHD file? Or is it possible to convert a raw disk image created using dd to a VHD file, without allocating space for the empty blocks? How would you proceed? As always, any advice or comment is highly appreciated!
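
    A hedged sketch of the conversion route (my suggestion, not from the original question; file names are hypothetical): both qemu-img and VBoxManage can turn a raw dd image into a dynamic VHD, and a dynamic VHD only allocates space for blocks that actually contain data:

        # "vpc" is qemu's name for the VHD format; the output is a dynamic (sparse) VHD
        qemu-img convert -O vpc /mnt/archive/disk.img /mnt/archive/disk.vhd

        # Alternative using VirtualBox's own tool
        VBoxManage convertfromraw /mnt/archive/disk.img /mnt/archive/disk.vhd --format VHD

    The usual caveat with either tool is that free space inside the filesystem should be zeroed first, so that "empty" blocks really are empty.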

  • QNAP NAS 509 (Linux) - how to unmount a busy volume and find the physical disk?

    - by Horst Walter
    On my QNAP TS 509 NAS I have a technical issue. I need to run e2fsck. This works fine for me on md0 (see below), but how can I unmount the busy devices md9 and sda4 in order to do the same? Whenever I try, I fail because the device is busy. [This part is solved, see below.] In order to further track down the issue, I need to sort out the physical-disk-to-device relationship. How can I find this out? E.g. md0 is a striped volume on 2 disks, but I need to find out which physical disks. Remark: as you can easily derive from my questions, I am not a Linux expert, but I manage to get along.

        /dev/ram0   124.0M  94.1M  29.8M  76%  /
        tmpfs        32.0M  80.0k  31.9M   0%  /tmp
        /dev/sda4   310.0M 103.9M 206.1M  34%  /mnt/ext
        /dev/md9    509.5M  39.2M 470.2M   8%  /mnt/HDA_ROOT
        /dev/md0      1.8T   1.4T 444.7G  76%  /share/MD0_DATA
        tmpfs        32.0M      0  32.0M   0%  /.eaccelerator.tmp

    -- Added -- QNAP seems to be based on BusyBox. I do not find anything like init / telinit / runlevel. The BusyBox docs say I need to run the commands below, but in /var/service sv is not available. I want to go to single-user mode to unmount the devices.

        # cd /var/service
        # sv d *
        # sv u getty*

    -- Added, thanks A4L -- This QNAP box runs a special flavor of Linux, so not all SOPs apply. In my particular case I found a services.sh script that stops all services. After that the drive could be unmounted. The information passed by A4L is valid and worth reading; maybe I'll profit from it next time. Links: http://unix.stackexchange.com/questions/19918/umount-device-is-busy and http://unix.stackexchange.com/questions/15024/umount-device-is-busy-why. So the unmount issue is solved; I am still looking for the best option to find the physical-disk-to-volume mapping.
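
    For the remaining mapping question, a hedged sketch using the stock mdraid tooling (standard commands, though whether QNAP's firmware ships all of them is an assumption on my part):

        # List which component partitions (e.g. /dev/sda3, /dev/sdb3) make up each md array
        cat /proc/mdstat
        mdadm --detail /dev/md0

        # Map a component device back to a physical drive via its model and serial number
        smartctl -i /dev/sda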

  • Mac OSX server command equivalent for dhclient?

    - by John Hall
    Is there a Mac OS X command that makes a DHCP request and renews the old lease, drops it for a new one, or usefully reports errors or lack of response from a DHCP server? This would help fix networking on the machine after network problems without rebooting, and would also be useful for diagnosing wider networking problems from a Mac. I cannot find any command equivalent of dhclient, though obviously some component must be serving this purpose. The question is: is that component exposed through a command-line interface? I am biased toward the command line for these features and may have overlooked settings panels or tools that might solve it through a GUI. I believe this question is at the heart of this other question: Is there an equivalent command for 'init.d/networking restart' in OS X
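
    A hedged pointer (from general OS X experience, not from the original question): Mac OS X ships a BSD-level ipconfig utility (unrelated to the Windows tool of the same name) that covers most of this:

        # Drop the current lease and request a fresh one on en0
        sudo ipconfig set en0 DHCP

        # Dump the DHCP packet the interface last received (server, lease time, options)
        ipconfig getpacket en0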

  • Rsync between Mac OS X Server and Linux CentOS works manually but not when run from cron

    - by Brady
    I have a working rsync setup between Mac OS X Server and Linux CentOS when run manually in a terminal: I enter the rsync command, it asks for the password, I enter it, and off it goes, runs and completes. Knowing that works, I set out to fully automate it via cron. First I create an SSH key by running this command on the Mac server:

        ssh-keygen -t dsa -b 1024 -f /Users/admin/Documents/Backup/rsync-key

    entering the password and then confirming it. I then copy the rsync-key.pub file across to the Linux server, place it in the rsync user's .ssh folder and rename it to authorized_keys:

        /home/philosophy/.ssh/authorized_keys

    I make sure that the authorized_keys file is chmod 600 and the .ssh folder chmod 700. I then set up a shell script for cron to run:

        #!/bin/bash
        RSYNC=/usr/bin/rsync
        SSH=/usr/bin/ssh
        KEY=/Users/admin/Documents/Backup/rsync-key
        RUSER=philosophy
        RHOST=example.com
        RPATH=data/
        LPATH="/Volumes/G Technology G Speed eS/Backup"
        $RSYNC -avz --delete --progress -e "$SSH -i $KEY" "$LPATH" $RUSER@$RHOST:$RPATH

    I give the shell file execute permissions and add the following to the crontab using crontab -e:

        29 12 * * * /Users/admin/Documents/Backup/backup.sh

    After the time the script should have run, I check my cron log file and get this in the log and nothing else:

        Feb 21 12:29:00 fileserver /usr/sbin/cron[80598]: (admin) CMD (/Users/admin/Documents/Backup/backup.sh)

    So I assume everything has run as it should, but when I check the remote server no files have been copied across. If I run the backup.sh file in a terminal as normal, it still prompts for a password, but this time through the Mac keychain system rather than by typing into the console window. With the keychain I can set it to save the password so that it doesn't ask again, but I'm sure this saved password isn't picked up when run from cron. This is where I assume rsync in cron is failing: it needs a password to connect, but I thought the whole idea of making the SSH keys was to avoid using a password. Have I missed a step or done something wrong here? Thanks Scott
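
    A hedged reading of the symptom (my interpretation, not something stated in the original): the prompt that persists is usually the passphrase that was set on the key during ssh-keygen, and nothing in a cron session can supply it. One way to test that theory:

        # Generate the key with an empty passphrase (-N ""), then attempt a fully
        # non-interactive login; BatchMode makes ssh fail rather than prompt
        ssh-keygen -t dsa -b 1024 -N "" -f /Users/admin/Documents/Backup/rsync-key
        ssh -i /Users/admin/Documents/Backup/rsync-key -o BatchMode=yes philosophy@example.com true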

  • Netinstalling CentOS if the gateway is in a different subnet

    - by James Lawrie
    I have a KVM host (A) running a virtual machine (B). They each have their own external IP address, and the networking is set up using bridging between eth0 and br0 on A. B uses eth0, with A being the gateway. The problem is that the two external IP addresses are on different subnets (different /8s, in fact), so by default B claims it cannot reach A (Network Unreachable). I can resolve this by adding a static route on B:

        echo "any host gateway_ip dev eth0" > /etc/sysconfig/static-routes

    and modifying /etc/init.d/networking to reload the gateway after applying static routes (I only added the final line before fi):

        if [ -f /etc/sysconfig/static-routes ]; then
            grep "^any" /etc/sysconfig/static-routes | while read ignore args ; do
                /sbin/route add -$args
            done
            route add default gw "${GATEWAY}"
        fi

    If I then restart networking, it comes online. How can I do this (or work around it some other way) prior to the system being installed, ideally inside an Anaconda kickstart file?
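
    A hedged kickstart sketch (reusing the gateway_ip placeholder from above; I have not verified this against this exact CentOS/Anaconda release): %post runs chrooted into the freshly installed system, so it can drop the same configuration in place before first boot:

        %post
        # Recreate the static route that was added manually above
        echo "any host gateway_ip dev eth0" > /etc/sysconfig/static-routes
        # Point the default gateway at the same router via the standard sysconfig file
        echo "GATEWAY=gateway_ip" >> /etc/sysconfig/network
        %end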

  • What could cause the file command in Linux to report a text file as data?

    - by Jonah Bishop
    I have a couple of C++ source files (one .cpp and one .h) that are being reported as type data by the file command in Linux. When I run the file -bi command against these files, I'm given this output (the same for each file):

        application/octet-stream; charset=binary

    Each file is clearly plain text (I can view them in vi). What's causing file to misreport the type of these files? Could it be some sort of Unicode thing? Both of these files were created in Windows-land (using Visual Studio 2005), but they're being compiled in Linux (it's a cross-platform application). Any ideas would be appreciated.

    Update: I don't see any null characters in either file. I found some extended characters in the .cpp file (in a comment block) and removed them, but file still reports the same encoding. I've tried forcing the encoding in SlickEdit, but that didn't seem to have an effect. When I open the file in vim, I see a [converted] line as soon as the file opens. Perhaps I can get vim to force the encoding?
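
    A hedged way to test the Unicode theory (the guess being that Visual Studio saved the files in a wide encoding such as UTF-16, which file reports as binary data; file names here are hypothetical):

        # Inspect the first bytes: FF FE or FE FF at offset 0 is a UTF-16 byte-order mark
        hexdump -C broken.cpp | head -n 2

        # If it is UTF-16, write a UTF-8 copy
        iconv -f UTF-16 -t UTF-8 broken.cpp > fixed.cpp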

  • /dev/input/uinput Device appears to be 'broken'

    - by Adam Luchjenbroers
    I'm trying to set up Pystromo so that I can remap the keys on my Belkin N52TE gamepad. Pystromo basically captures the keystrokes and then outputs the remapped keystrokes to the uinput device. At the moment, however, it simply swallows the input and outputs absolutely nothing. I've tracked the issue to something being wrong with my uinput device, the smoking gun being:

        # ls -l /dev/input/uinput
        crw-rw---- 1 root plugdev 10, 223 Dec 31 2009 /dev/input/uinput
        # cat /dev/input/uinput
        cat: /dev/input/uinput: No such device

    The uinput module is loaded and can be clearly seen via lsmod. Has anyone seen this before, or can anyone think of something worth attempting?

    Current setup: Gentoo Linux, kernel 2.6.32 (Gentoo Sources 2.6.32-r1), HP DV7 laptop.

    dmesg: dmesg | grep uinput prints nothing, and no new lines appear if I run modprobe -r uinput && modprobe uinput. Yet the uinput module can clearly be seen when running lsmod:

        # lsmod | grep uinput
        uinput                  6200  0

    lsusb:

        # lsusb
        Bus 005 Device 003: ID 050d:0200 Belkin Components
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 002: ID 1532:0101 Razer USA, Ltd
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 002: ID 5986:0143 Acer, Inc
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 002: ID 03f0:171d Hewlett-Packard Wireless (Bluetooth + WLAN) Interface [Integrated Module]
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    lsusb -v: PasteBin

    Update: updating evdev and hal seems to have partially fixed it. /dev/input/uinput still can't be accessed, but Pystromo is now remapping keys successfully. I'm a little mystified about what's going on here; it seems my understanding of how all this works is flawed. Since I've posted a bounty, I'll leave this here for someone to post an explanation of how user-space input devices work under the hood.
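
    One hedged thing to check (an assumption, not from the original post): a device node that exists but points at wrong or stale major/minor numbers produces exactly this "No such device" error, and on some kernel/udev combinations the live node is /dev/uinput rather than /dev/input/uinput:

        # See whether udev created a node at the newer location
        ls -l /dev/uinput

        # If only a stale node exists, recreate it with uinput's documented numbers (char 10, 223)
        rm /dev/input/uinput
        mknod /dev/input/uinput c 10 223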

  • How can I identify which process is making UDP traffic on Linux?

    - by boos
    My machine is continuously making UDP DNS requests. What I need to know is the PID of the process generating this traffic. The normal way with a TCP connection is to use netstat/lsof and get the process associated with the PID. But UDP is stateless, so when I call netstat/lsof I can only see the UDP socket if it is open and sending traffic at that very moment. I have tried lsof -i UDP and netstat -anpue, but I can't find which process is making the requests, because I would need to call lsof/netstat exactly when the traffic is sent; calling them before or after the datagram goes out shows no open UDP socket. Calling netstat/lsof at the exact moment 3-4 UDP packets are sent is impossible. How can I identify the infamous process? I have already inspected the traffic to try to identify the sending PID from the packet contents, but it is not possible to identify it that way. Can anyone help me? I'm root on this machine:

        FEDORA 12
        Linux noise.company.lan 2.6.32.16-141.fc12.x86_64 #1 SMP Wed Jul 7 04:49:59 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
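
    A hedged approach that sidesteps the timing race (my suggestion, not from the original question): have netfilter log each outgoing DNS packet along with the UID of the sending process, which at least narrows the search to one user; auditd can take it the rest of the way to a PID:

        # Log every outgoing DNS packet, including the sender's UID
        iptables -I OUTPUT -p udp --dport 53 -j LOG --log-prefix "DNS-OUT: " --log-uid

        # Watch the kernel log for the tagged entries
        tail -f /var/log/messages | grep DNS-OUT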

  • How to host a scalable social networking app

    - by christopher-mccann
    I am in the middle of developing a social networking application for a very select user niche which could scale to a few million users. Up to now I have always hosted applications on Rackspace Cloud, and I have no issues with them at all: it has always been a really good service and I've never had any downtime. My question, though, is whether cloud computing is the right way to host scalable web apps; can anyone with experience of this recommend a better solution? I have always shunned running big servers from my own facilities, as it seems silly to go to the expense of bringing in backup power supplies and all the other necessary precautions when other companies already do this. I looked at managed hosting services, but that proved a bit too expensive for us at the start, and its scalability wasn't good enough: it would take a day or two to get a new server provisioned. So I ended up on a cloud platform. If anyone has any recommendations or advice, it would be greatly appreciated.

  • How can I format an SD card with a more robust Linux-usable filesystem with a specific cluster size for better write performance?

    - by Harvey
    Goal: a microSD card formatted...

    • for best write performance
    • for use only with embedded Linux
    • for better reliability (random power failures may occur)
    • using a 64 kB cluster size

    I'm using an 8 GB microSD card for data storage inside an embedded Linux/ARM device. The SD card is not removable. I've been using ext3 instead of the pre-installed FAT32 because it seems to handle random power failures during writes better. However, I keep noticing that my write performance is always best with the pre-installed FAT32 from Kingston; if I reformat the card, even with FAT32, the performance suffers. Browsing Wikipedia, I stumbled upon the following comment saying that some cards are optimized for specific cluster sizes. In my case, the Kingston comes pre-formatted with a 64 kB cluster size.

    "Risks of reformatting: Reformatting an SD card with a different file system, or even with the same one, may make the card slower, or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of the file allocation table on a FAT16 or FAT32 device.[60] In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient."
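
    A hedged formatting sketch (assuming 64 kB really is the card's erase/allocation unit, and /dev/mmcblk0p1 as a hypothetical device node): ext3 cannot use 64 kB blocks, since the block size is capped at the 4 kB page size, but mke2fs can be told about the 64 kB geometry so the allocator aligns to it:

        # 4 kB blocks; stride/stripe-width of 16 blocks = 64 kB alignment
        mkfs.ext3 -b 4096 -E stride=16,stripe-width=16 /dev/mmcblk0p1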

  • How do I change the Dropbox directory on a headless GNU/Linux server?

    - by DrTwox
    I have installed Dropbox 2.0.0 via the command line on my home server (Ubuntu Server 12.04) to use for off-site automated backups, but I can't change the directory that the Dropbox daemon keeps synced. I've tried the following:

    • The official docs say to use the desktop application, which is not applicable in my situation. I did install the desktop app on my desktop machine and changed the default folder location, but I can't find where this change is stored in the ~/.dropbox/ directory, so I can't make the same change on the server.
    • This page (and several others) recommends a Python script to do the job. Looking at the script, it opens a SQLite database called ~/.dropbox/dropbox.db, which does not exist in my Dropbox install, leading me to believe the script is out of date.
    • This forum thread suggests manually inserting the required row in the config.db database, which I did, but it made no difference. I checked the same database file on my desktop machine, and it does not have the dropbox_path key, so I presume the information in that thread is also out of date for version 2.0.
    • I have tried to launch the Dropbox GUI configuration wizard over SSH with X11 forwarding, as suggested in one of the answers, but the binary must detect the absence of a local X11 install: it starts a command-line daemon instead, which provides no means to change the option I need.

    I am currently using a symlink, as suggested in an answer, but this is a kludge. I would like to know the correct way to make the change. How do I change the Dropbox directory on a headless GNU/Linux server?

    Update: I've ditched Dropbox and started using Copy. Their Linux tools and support are far superior to Dropbox's. I leave this question here in case someone, someday, can answer it.

  • Copy-paste speed very slow for a large number of tiny files on Windows but not on Linux

    - by Arno2501
    I've got a folder which contains 15,000 tiny images (around 400 bytes each). If I copy-paste this folder on my laptop (Windows 7, latest-gen i7, super-fast SSD), it takes about 30 seconds (yes, for 7 MB!); the average transfer rate is 400 kB/second, which is very slow. I mean, my usual transfer rate is more like hundreds of MB per second! I get the same problem on my servers (Windows 2003, 2008/R2) and on every Windows box that I could get my hands on. On the other hand, if I do the same on a Linux box (Debian-based, ext3 FS), which runs on the same SAN as all the Windows servers I've tested, it's nearly instantaneous! I'm pretty sure the size/number of the files may stress one filesystem more than another, but such differences!? Why is it so slow on the Windows boxes (more than 30 s for 7 MB) and so fast on the Linux ones (a second or so)? (I mean a true copy, not a hardlink.) Is this normal behaviour or something unusual?

  • Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows?

    - by Andrew
    I have a ~35 GB file on a remote Linux Ubuntu server. Locally, I am running Windows XP, so I am connecting to the remote Linux server using SSH (specifically, I am using a Windows program called SSH Secure Shell Client version 3.3.2). Although my broadband internet connection is quite good, my download of the large file often fails with a Connection Lost error message. I am not sure, but I think that it fails because perhaps my internet connection goes out for a second or two every several hours. Since the file is so large, downloading it may take 4.5 to 5 hours, and perhaps the internet connection goes out for a second or two during that long time. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer. In other words, sometimes I get lucky and the download finishes before the internet connection drops for a second. Is there any way that I can download the file in an intelligent way -- whereby the operating system or software "knows" where it left off and can resume from the last point if a break in the internet connection occurs? Perhaps it is possible to download the file in sections? Although I do not know if I can conveniently split my file into multiple files -- I think this would be very difficult, since the file is binary and is not human-readable. As it is now, if the entire ~35 GB file download doesn't finish before the break in the connection, then I have to start the download over and overwrite the ~5-20 GB chunk that was downloaded locally so far. Do you have any advice? Thanks.
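
    A hedged sketch of the usual fix (my suggestion; it assumes rsync is available on the Ubuntu server and an rsync client on Windows, e.g. via Cygwin, with hypothetical names): rsync over SSH keeps a partial file on failure and resumes it on the next run:

        # --partial keeps the partly-downloaded file if the connection drops;
        # re-running the same command then resumes rather than starting over
        rsync --partial --progress -e ssh user@server:/data/huge.bin .

        # Optionally retry in a loop until the transfer completes
        until rsync --partial --progress -e ssh user@server:/data/huge.bin .; do sleep 30; done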

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as a host and CentOS 5.x as a guest OS (and with which system: Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option, just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue.

    Edit: the target audience/users of this kind of system would be developers, and each one needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there for building your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
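
    On the hardware-virtualization question, a quick check that works on any Linux box:

        # A non-zero count means the CPU advertises Intel VT-x (vmx) or AMD-V (svm)
        egrep -c '(vmx|svm)' /proc/cpuinfo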

  • Connect two networks

    - by Meek Barrios
    I am connecting two different offices with a wireless link and Linux boxes.

    Hardware: 2 Cisco RV42s, 2 dual-homed Linux boxes running Debian, 2 2Wires and 2 AirMax 5s. Configuration is:

        Office A
        LAN A (10.1.1.0/24) -> RV42 A (WAN1 - 10.1.1.254) -> 2Wire A (Internet)
        LINUX A (ETH0 (LAN) 10.1.1.253, ETH1 (LINK) 10.1.3.3)

        Wireless link: AirMax A <-> AirMax B connected as a wireless bridge

        Office B
        LAN B (10.1.2.0/24) -> RV42 B (WAN1 - 10.1.2.254) -> 2Wire B (Internet)
        LINUX B (ETH0 (LAN) 10.1.2.253 -> ETH1 (LINK) 10.1.3.4)

    Network configuration is:

        LAN A    - Default gateway 10.1.1.254
        RV42 A   - Static route 10.1.3.0/24 via 10.1.1.253
                   Static route 10.1.2.0/24 via 10.1.1.253
                   Default via 192.168.1.1 (WAN1 Internet access)
        Linux A  - ETH0 10.1.1.253 netmask 255.255.255.0 gw 10.1.1.254
                   ETH1 10.1.3.3 netmask 255.255.255.0 gw 10.1.3.1
        AirMax A - 10.1.3.1 netmask 255.255.255.0 gw 10.1.3.1

        LAN B    - Default gateway 10.1.2.254
        RV42 B   - Static route 10.1.3.0/24 via 10.1.2.253
                   Static route 10.1.1.0/24 via 10.1.2.253
                   Default via 192.168.1.1 (WAN1 Internet access)
        Linux B  - ETH0 10.1.2.253 netmask 255.255.255.0 gw 10.1.2.254
                   ETH1 10.1.3.4 netmask 255.255.255.0 gw 10.1.3.2
        AirMax B - 10.1.3.2 netmask 255.255.255.0 gw 10.1.3.2

    Both Linux boxes have ip_forward set to 1 and the following iptables rules:

        iptables -F
        iptables -X
        iptables -P FORWARD ACCEPT
        iptables -P INPUT ACCEPT
        iptables -P OUTPUT ACCEPT

    I can ping any IP on the 10.1.1.0/24 segment from Linux B, and any IP on the 10.1.2.0/24 segment from Linux A; however, I cannot connect to HTTP or FTP on those machines. From LAN A I cannot see any other network. I'm looking for some advice on this configuration, or a better solution. Regards

  • One 16K random read I/O issues 2 SCSI I/O requests (16K and 4K) in Linux

    - by hiroyuki
    I noticed a weird issue when benchmarking random read I/O for files in Linux (2.6.18). The benchmarking program is my own, and it simply keeps reading 16 KB of a file from a random offset. I traced I/O behaviour at the system-call level and the SCSI level with SystemTap, and noticed that one 16 KB sysread issues 2 SCSI I/Os, as follows:

        SYSPREAD random(8472) 3, 0x16fc5200, 16384, 128137183232
        SCSI random(8472) 0 1 0 0 start-sector: 226321183 size: 4096 bufflen 4096 FROM_DEVICE 1354354008068009
        SCSI random(8472) 0 1 0 0 start-sector: 226323431 size: 16384 bufflen 16384 FROM_DEVICE 1354354008075927
        SYSPREAD random(8472) 3, 0x16fc5200, 16384, 21807710208
        SCSI random(8472) 0 1 0 0 start-sector: 1889888935 size: 4096 bufflen 4096 FROM_DEVICE 1354354008085128
        SCSI random(8472) 0 1 0 0 start-sector: 1889891823 size: 16384 bufflen 16384 FROM_DEVICE 1354354008097161
        SYSPREAD random(8472) 3, 0x16fc5200, 16384, 139365318656
        SCSI random(8472) 0 1 0 0 start-sector: 254092663 size: 4096 bufflen 4096 FROM_DEVICE 1354354008100633
        SCSI random(8472) 0 1 0 0 start-sector: 254094879 size: 16384 bufflen 16384 FROM_DEVICE 1354354008111723
        SYSPREAD random(8472) 3, 0x16fc5200, 16384, 60304424960
        SCSI random(8472) 0 1 0 0 start-sector: 58119807 size: 4096 bufflen 4096 FROM_DEVICE 1354354008120469
        SCSI random(8472) 0 1 0 0 start-sector: 58125415 size: 16384 bufflen 16384 FROM_DEVICE 1354354008126343

    As shown above, one 16 KB pread issues 2 SCSI I/Os. (I traced SCSI I/O dispatching with the probe scsi.iodispatching; please ignore the values other than start-sector and size.) One SCSI I/O is the 16 KB read requested by the application, and that's fine. The issue is the other 4 KB I/O, which I don't know why Linux issues. Of course, I/O performance is degraded by the weird 4 KB I/O, and that is my trouble. I also used fio (the well-known I/O benchmark tool) and saw the same issue, so it's not caused by my application. Does anybody know what is going on? Any comments or advice are appreciated. Thanks
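
    A hedged cross-check (standard tooling, not from the original post; the device name is assumed): blktrace shows each request entering and leaving the block layer, which can reveal whether the 4 KB read is issued by the filesystem (one plausible culprit on ext3 is an indirect-block read for the file's block mapping) or added lower down the stack:

        # Trace the device while the benchmark runs, decoding events to stdout
        blktrace -d /dev/sda -o - | blkparse -i -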

  • How to tell Linux to explicitly swap out main memory of a suspended process?

    - by Vi
    I run a memory-hungry process (mkcromfs) which consumes more memory than I have physical memory in my laptop, so it is paging and swapping and thrashing all the time, and loadavg is about 2 (compcache is already in use alongside the usual swap partition), but it is slowly moving forward. (Although I'm afraid it will eventually try to allocate 2 GB and crash, draining 2 days of thrashing.) When I want to use the laptop for something else, I stop the process and start the X server, Firefox and other programs. The problem is that when I start Firefox, the loadavg jumps to 10 and the system becomes almost completely unresponsive (a long time to turn caps lock on/off, slow mouse cursor updates, slow switching from the X server to the Linux console, slow login). The stopped mkcromfs still holds a lot of memory (464.8 MiB and slowly falling) and moves it to swap only when more memory is needed for some other program, which results in a great slowdown. How can I tell Linux to swap this process out entirely (e.g. when I'm not intending to resume it in the short term), possibly paging other data back in from swap? It would also be useful to be able to specify the exact swap device to swap the given process out to.
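
    A hedged sketch using the cgroup memory controller (assuming a kernel built with the memory controller; the group name is invented): shrinking the stopped process's memory limit forces its pages out to swap without touching anything else. As far as I know there is no mainline knob for the second wish, choosing the swap device per process:

        # Mount the memory controller (if not already mounted) and create a group
        mount -t cgroup -o memory none /sys/fs/cgroup/memory
        mkdir /sys/fs/cgroup/memory/parked

        # Move the stopped process in, then squeeze its allowed resident memory
        echo $MKCROMFS_PID > /sys/fs/cgroup/memory/parked/tasks
        echo 16M > /sys/fs/cgroup/memory/parked/memory.limit_in_bytes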

  • Unable to change IP address for eth0 without restart in Ubuntu

    - by Rodnower
    I have Ubuntu 12.04.1 installed. I tried to change the IP address of the interface eth0 in /etc/network/interfaces from 192.168.1.3 to 192.168.1.4:

        auto lo
        iface lo inet loopback
        pre-up iptables-restore < /etc/iptables.up.rules

        auto eth0
        iface eth0 inet static
        address 192.168.1.4
        gateway 192.168.1.1
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.1.255

    When I issue sudo service networking status or sudo service networking restart, I get this response:

        stop: Unknown instance:
        networking stop/waiting

    And the IP remains 192.168.1.3:

        eth0    Link encap:Ethernet  HWaddr 00:1e:33:71:cd:a4
                inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0
                inet6 addr: fe80::21e:33ff:fe71:cda4/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:3861 errors:0 dropped:0 overruns:0 frame:0
                TX packets:3291 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:3423285 (3.4 MB)  TX bytes:521854 (521.8 KB)
                Interrupt:45 Base address:0x4000

    Only after a restart does the IP change. Any ideas?
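
    A hedged note on the usual workaround (standard ifupdown commands, not from the original question): on Ubuntu 12.04, networking is an upstart task rather than a restartable service, so changes are normally applied per interface:

        # Take eth0 down and bring it back up with the new /etc/network/interfaces settings
        sudo ifdown eth0 && sudo ifup eth0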

  • Good support for multiple desktops AND multiple monitors in Linux (Ubuntu)?

    - by Somebody still uses you MS-DOS
    I'm starting to have A LOT of open windows on my machine. Sometimes within a project I have e-mail, task management, personal e-mail, Twitter, and a lot of different open applications/terminals in my Linux environment. Nowadays I have 4 workspaces:

    • Corporate management (e-mail) and corporate messenger
    • Work (documents, requisites)
    • Dev (development: all gVim windows, terminal and Firefox for development)
    • Personal (personal stuff: personal e-mail, Delicious, Twitter and so on)

    Sometimes it would be interesting to have a workspace per project, instead of this configuration I have nowadays, which is really classes of work (bad name, I know, but I think you get the idea). I'm starting to think about using two monitors: one with Corporate Management, Work and Personal; the second monitor only for development, where each workspace is a project being worked on, instead of a group of work like before. A workspace may be implementing different classes, for example. My question is: I just want to change to the second monitor using the mouse, while still being able to change workspaces on the same monitor using keyboard shortcuts. The keyboard shortcuts wouldn't change monitors, just workspaces on the same monitor. Does Linux (Ubuntu 10.04 Lucid Lynx) support this envisioned setup? If so, how?

  • Why am I getting programs stuck in log_wait_commit under Linux?

    - by staticsan
    There is something subtly wrong with my Linux install that I just can't locate. It is Ubuntu Lucid Lynx (10.04) 64-bit. The hardware is a Dell Optiplex 960: Intel Core 2 Quad CPU, 8 GB of RAM, 2x 300 GB HDDs. /home is ext3 on one disk and everything else is on the other (/ is also ext3). I have VirtualBox running a 64-bit Vista image for Outlook calendaring, but the heavyweight apps are IntelliJ, NetBeans, MySQL and Opera. Opera also loads my mail (IMAP), of which there are over 10,000 messages. The problem is that Opera stalls for a few seconds from time to time. Watching the process list shows it's in log_wait_commit, which means (as far as I have figured out) the filesystem is holding things up. Sometimes I can make this happen by doing a subversion update, but usually it happens for no reason I can see. It usually happens to Opera, but I've seen NetBeans go under too. It doesn't make the app crash; it's just completely unresponsive for a few seconds. Googling has not helped. The closest I got was to remove the sync attribute on the file system; this achieved nothing. On the advice of a Linux guru friend, I lowered /proc/sys/vm/dirty_writeback_centisecs to 300, but that didn't do anything either, and it was all he could think of. What is going on, and can I fix it? (And how?)
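
    One hedged experiment suggested by the symptom (my assumption, not advice from the original post; the device name is assumed): log_wait_commit is the ext3 journal-commit wait, so relaxing the journal mode and commit interval on the /home disk is a common test:

        # In /etc/fstab: writeback journalling and a 30 s commit interval for /home
        # (data= generally cannot be changed on a live remount, so remount/reboot after editing)
        /dev/sdb1  /home  ext3  defaults,data=writeback,commit=30  0  2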

  • What are the best Linux permissions to use for my website?

    - by Nic
    This is a Canonical Question about File Permissions on a Linux web server.

    I have a Linux web server running Apache2 that hosts several websites. Each website has its own folder in /var/www/:

        /var/www/contoso.com/
        /var/www/contoso.net/
        /var/www/fabrikam.com/

    The base directory /var/www/ is owned by root:root. Apache is running as www-data:www-data. The Fabrikam website is maintained by two developers, Alice and Bob. Both Contoso websites are maintained by one developer, Eve. All websites allow users to upload images. If a website is compromised, the impact should be as limited as possible. I want to know the best way to set up permissions so that Apache can serve the content, the website is secure from attacks, and the developers can still make changes. One of the websites is structured like this:

        /var/www/fabrikam.com
            /cache
            /modules
            /styles
            /uploads
            /index.php

    How should the permissions be set on these directories and files? I read somewhere that you should never use 777 permissions on a website, but I don't understand what problems that could cause. During busy periods, the website automatically caches some pages and stores the results in the cache folder. All of the content submitted by website visitors is saved to the uploads folder.
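
    To make the question concrete, a minimal sketch of one common arrangement (the webdev group name is invented here, and this is only one of several defensible layouts): developers write through a shared group, the other-read bits let Apache serve the files, and only the directories Apache itself must write to change owner:

        # Developers Alice and Bob share the (hypothetical) webdev group;
        # setgid on directories keeps new files group-owned by webdev
        chown -R alice:webdev /var/www/fabrikam.com
        find /var/www/fabrikam.com -type d -exec chmod 2775 {} \;
        find /var/www/fabrikam.com -type f -exec chmod 664 {} \;

        # Only cache and uploads are writable by the Apache user
        chown www-data /var/www/fabrikam.com/cache /var/www/fabrikam.com/uploads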

  • When using software RAID and LVM on Linux, which IO scheduler and readahead settings are honored?

    - by andrew311
    In the case of multiple layers (physical drives -> md -> dm -> lvm), how do the schedulers, readahead settings, and other disk settings interact? Imagine you have several disks (/dev/sda - /dev/sdd), all part of a software RAID device (/dev/md0) created with mdadm. Each device (including the physical disks and /dev/md0) has its own setting for the IO scheduler (changed like so) and readahead (changed using blockdev). When you throw in things like dm (crypto) and LVM, you add even more layers, each with its own settings. For example, if the physical device has a readahead of 128 blocks and the RAID has a readahead of 64 blocks, which is honored when I do a read from /dev/md0? Does the md driver attempt a 64-block read, which the physical device driver then translates to a read of 128 blocks? Or does the RAID readahead "pass through" to the underlying device, resulting in a 64-block read? The same kind of question holds for schedulers: do I have to worry about multiple layers of IO schedulers and how they interact, or does /dev/md0 effectively override the underlying schedulers? In my attempts to answer this question, I've dug up some interesting data on schedulers and tools which might help figure this out:

    • Linux Disk Scheduler Benchmarking from Google
    • blktrace - generate traces of the I/O traffic on block devices
    • a relevant Linux kernel mailing list thread
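
    For reference, a hedged sketch of how the per-layer settings can at least be inspected side by side (standard blockdev/sysfs interfaces; the LVM device name is assumed):

        # Readahead, in 512-byte sectors, for each layer
        blockdev --getra /dev/sda /dev/md0 /dev/mapper/vg0-lv0

        # Scheduler per device; md and dm devices typically show none because they are
        # bio-based, so the physical disks' schedulers do the actual queueing
        cat /sys/block/sda/queue/scheduler
        cat /sys/block/md0/queue/scheduler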

  • How do I get Tomcat 7 to start up faster in Linux CentOS kernel version 2.6.18?

    - by user1786833
    I am experiencing a problem with slow start-up times for Tomcat 7. I have done some testing by tweaking configuration parameters, both on Linux CentOS (kernel version 2.6.18) and on Windows 7, using this link as my primary guide: http://wiki.apache.org/tomcat/HowTo/FasterStartUp - and managed only a modest improvement. The improvements seemed to come when I added the metadata-complete="true" attribute to the <web-app> element of my WEB-INF/web.xml file, and when I added the names of almost all the jars we use for our application to the tomcat.util.scan.DefaultJarScanner.jarsToSkip property in conf/catalina.properties. I've also used this JAVA_OPTS in the setenv.sh file:

        JAVA_OPTS="$JAVA_OPTS -server -Xms1536m -Xmx1536m -XX:MaxPermSize=256m -XX:NewRatio=2 -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000 -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true "

    but actually saw my start-up times increase slightly. Our QA and production environments are on Linux CentOS, so I'm hoping to get more information on improving Tomcat 7 start-up times in that environment. My primary role is Java developer and I don't have much system administration experience, so I appreciate any input. Thank you for your time and suggestions.
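
    One hedged addition that often dominates Tomcat start-up on Linux (an assumption about this particular setup, though it is a well-known issue): the session-ID SecureRandom seeds from /dev/random, which can block for a long time on an entropy-starved server. Pointing it at urandom is the common workaround:

        # In setenv.sh: seed SecureRandom from the non-blocking source
        # (the /dev/./urandom spelling bypasses the JVM's special-casing of /dev/urandom)
        JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"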
