Search Results

Search found 41561 results on 1663 pages for 'linux command'.


  • Is the /etc/inittab file read top down?

    - by PeanutsMonkey
    When the init process is executed after the kernel has loaded, does it read the /etc/inittab file in a top-down approach, i.e. does it execute each line in the order it appears in the file? If so, and based on my reading and understanding, does this mean that it enters the documented runlevel and then launches the sysinit process, or vice versa? For example, the common entries I have seen are:

        id:3:initdefault:
        # System initialization.
        si::sysinit:/etc/rc.d/rc.sysinit


  • Constantly diminishing free space on fedora 17

    - by Varun Madiath
    I don't know how to explain this other than to say that my computer seems to magically run out of free space when it runs for a while. The output of df -h . on my home directory is below:

        /dev/mapper/vg_vmadiath--dev-lv_home 50G 47G 0 100% /home

    When I run sudo du -cks * | sort -rn | head -11 on /home I get the following output (I got this command from "decreasing free space on fedora 12"):

        32744344 total
        32744328 vmadiath
              16 lost+found

    If I restart my system things seem to fix themselves and I'm left with about 20 or 25 GB of free space. I'm running XFCE with XMonad as my window manager under Fedora 17. Programs I'm running include the XFCE terminal, grep, find, Firefox, Eclipse, LibreOffice Writer, zsh, and Emacs. Any help will be greatly appreciated. I'll gladly give you any other output you might need.
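
    A pattern like this (space that only comes back after a reboot) often points to large files that have been deleted but are still held open by a running process, so du cannot see them while df still counts them. A minimal diagnostic sketch, assuming lsof is installed and run as root:

        # Compare what the filesystem reports with what du can actually see
        df -h /home
        sudo du -sh /home

        # List deleted-but-still-open files on the /home filesystem;
        # the SIZE/OFF column shows how much space each one still consumes
        sudo lsof +L1 /home | sort -k7 -n | tail -20

    Stopping the offending process (often a logger or an editor holding a large temp file) releases the space without a reboot, which would match the behaviour described above.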


  • Routing RFC1918 addresses through dd-wrt via a switch

    - by espenfjo
    I am a bit stuck with an experiment of mine. I have a network looking somewhat like this:

        Internet
            |
          Switch ------- Server w/ public IP
            |
        DD-WRT router (192.168.1.1)
            |
        RFC1918 clients (192.168.1.0/24)

    What I want is for the RFC1918 clients to speak directly with each other. On the server with the public IP I have this route:

        192.168.1.0/24 dev eth0 scope link

    and I can see that packets are in fact reaching the dd-wrt router for 192.168.1.1, even though I get no answer. Trying to reach one of the RFC1918 clients from the public-IP server gets no result, as the dd-wrt router is not announcing that network on its external interface (arp who-has 192.168.1.107 tell xxx.xxx.xxx.xxx, but no answer). The router, being a WLAN dd-wrt router, has of course a load of routes, VLANs and interfaces:

        xxx.xxx.xxx.1 dev vlan2 scope link
        192.168.1.0/24 dev br0 proto kernel scope link src 192.168.1.1
        192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.244
        84.215.64.0/18 dev vlan2 proto kernel scope link src xxx.xxx.xxx.xxx
        169.254.0.0/16 dev br0 proto kernel scope link src 169.254.255.1
        127.0.0.0/8 dev lo scope link
        0.0.0.0 via xxx.xxx.xxx.1 dev vlan2

    xxx.xxx.xxx.xxx being the public IP, and xxx.xxx.xxx.1 being the default route for the public IP. I am not sure where to continue with this. I would reckon that I need both routing on the dd-wrt router and some iptables magic? Why do something this complex? Why not ;) Also, do not mind that "Internet" can get RFC1918 traffic; it won't go outside of these walls.

    EDIT 1: Following the tip from stew I do indeed get the correct ARP flowing, and after adding an iptables rule allowing traffic from that specific public-IP machine I get traffic between the systems! Oddly enough though, the speed I get from the public-IP server to the RFC1918 clients is the same as if the traffic were routed out onto the Internet and back.

    EDIT 2: OK, disconnecting the external Internet connection still gives the same, crappy transfer speed. So it has to be something else.

    EDIT 3: OK, I guess there are other reasons for this crappy speed. Case closed. :)
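
    A sketch of the kind of iptables change EDIT 1 refers to, run on the DD-WRT router, with vlan2 as the WAN-side interface and xxx.xxx.xxx.xxx standing in for the server's public IP (matching the routes above):

        # Allow the public-IP server to initiate connections towards the LAN clients;
        # DD-WRT's default firewall otherwise drops unsolicited traffic arriving on vlan2
        iptables -I FORWARD -i vlan2 -s xxx.xxx.xxx.xxx -d 192.168.1.0/24 -j ACCEPT

    With the 192.168.1.0/24 route already present on the server, this one rule is typically enough, since replies from the clients follow the router's default route back out of vlan2 and established connections are allowed through.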


  • HP Server Automation - agent misreporting hostname

    - by warren
    I've been using HP Server Automation for some time, but have noticed an interesting issue I'm hoping the SF community has seen or knows a workaround for. When the management agent on Solaris or RHEL (the only platforms I've noticed it on) reports the hostname of the managed server, it does not return the value of hostname; it returns the first alias for that entry in /etc/hosts. Any ideas on how to get around that, other than editing /etc/hosts so the alias is at the end of the line instead of the front?


  • How to allow writing to a mounted NFS partition

    - by Cerin
    How do you allow a specific user permission to write to an NFS partition? I've mounted an NFS share on my localhost (a Fedora install), and I can read and write as root, but I'm unable to write as the apache user, even though all the files and directories in the share on my localhost and remote host are owned by apache. For example, I've mounted it via this line in my /etc/fstab:

        remotehost:/data/media /data/media nfs _netdev,soft,intr,rw,bg 0 0

    And both locations are owned by apache:

        [root@remotehost ~]# ls -la /data
        total 24
        drwxr-xr-x.  6 root   root   4096 Jan  6  2011 .
        dr-xr-xr-x. 28 root   root   4096 Oct 31  2011 ..
        drwxr-xr-x   4 apache apache 4096 Jan 14  2011 media

        [root@localhost ~]# ls -la /data
        total 16
        drwxr-xr-x   4 apache apache 4096 Dec  7  2011 .
        dr-xr-xr-x. 27 root   root   4096 Jun 11 15:51 ..
        drwxrwxrwx   5 apache apache 4096 Jan 31  2011 media

    However, when I try to write as the apache user, I get a "Permission denied" error:

        [root@localhost ~]# sudo -u apache touch /data/media/test.txt
        touch: cannot touch `/data/media/test.txt': Permission denied

    But of course it works fine as root. What am I doing wrong?
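
    Writes over a classic NFS mount are authorised by numeric UID/GID, not by user name, so "apache" on the client must have the same UID as "apache" on the server (and root is usually squashed to nobody by the export, which is why root working here is the surprising part). A quick check, as a sketch:

        # Compare the numeric IDs on both ends; they must match for AUTH_SYS-style mounts
        id apache                          # on localhost
        ssh remotehost id apache           # on the NFS server

        # On the server, confirm how the share is exported; look for root_squash/all_squash
        # and anonuid/anongid options that could remap the writing user
        ssh remotehost exportfs -v

    If the UIDs differ, either align them on both machines or move to an ID-mapping (NFSv4) setup; if the export uses all_squash, writes arrive as the anonymous user regardless of the client-side owner.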


  • Intel CPU hyper-threading on or off for IBM DB2?

    - by rtorti19
    Has anyone ever done any database performance comparisons with hyper-threading enabled vs. disabled? We are running IBM DB2 and I'm curious if anyone has any recommendations for enabling hyper-threading or not. With hyper-threading enabled it becomes quite difficult to do capacity planning for CPU usage. For example: with 8 physical cores represented as 16 "threads" to the OS and a CPU-bound workload, does that mean that when your CPU usage hits 50% you are actually running at 100%? What real benefits do I gain from leaving hyper-threading enabled on an Intel server running DB2? Does hyper-threading help if your workload is truly disk I/O bound? If so, up to what percentage? These are the types of questions I'm trying to answer. Any thoughts?


  • How to change the Nginx default folder?

    - by Ido Bukin
    I set up a server with Nginx and I put my public_html in:

        /home/user/public_html/website.com/public

    but requests always end up being served from:

        /usr/local/nginx/html/

    How can I change this? nginx.conf:

        user www-data www-data;
        worker_processes 4;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 5;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /usr/local/nginx/sites-enabled/*;
        }

    /usr/local/nginx/sites-enabled/default:

        server {
            listen 80;
            server_name localhost;

            location / {
                root html;
                index index.php index.html index.htm;
            }

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    /usr/local/nginx/sites-available/website.com:

        server {
            listen 80;
            server_name website.com;
            rewrite ^/(.*) http://www.website.com/$1 permanent;
        }

        server {
            listen 80;
            server_name www.website.com;

            access_log /home/user/public_html/website.com/log/access.log;
            error_log /home/user/public_html/website.com/log/error.log;

            location / {
                root /home/user/public_html/website.com/public/;
                index index.php index.html;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name;
            }
        }

    The error message I get is:

        Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php'

    The server tries to find the file in the Nginx folder and not in my public_html.
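
    One way to see which server block is actually answering, as a sketch: nginx picks the block by matching the request's Host header against server_name, and anything that matches nothing falls through to the first/default block, whose "root html" resolves to /usr/local/nginx/html/.

        # Ask for the site by name: should be served from /home/user/public_html/...
        curl -sI -H 'Host: www.website.com' http://127.0.0.1/

        # Ask with an unmatched Host: falls through to the default server block
        curl -sI -H 'Host: something-else' http://127.0.0.1/

    If the second request reproduces the error, the request that fails in practice (including any redirects the PHP application generates) is arriving with a Host name the website.com block does not match, or the symlink for that site is missing from sites-enabled.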


  • Have an unprivileged non-account user ssh into another box?

    - by Daniel Quinn
    I know how to get a user to ssh into another box with a key:

        ssh -l targetuser -i path/to/key targethost

    But what about non-account users like apache? As this user doesn't have a home directory to which it can write a .ssh directory, the whole thing keeps failing with:

        $ sudo -u apache ssh -o StrictHostKeyChecking=no -l targetuser -i path/to/key targethost
        Could not create directory '/var/www/.ssh'.
        Warning: Permanently added '<hostname>' (RSA) to the list of known hosts.
        Permission denied (publickey).

    I've tried variations using -o UserKnownHostsFile=/dev/null and setting $HOME to /dev/null, and none of these have done the trick. I understand that sudo could probably fix this for me, but I'm trying to avoid requiring manual server configuration, since this code will be deployed on a number of different environments. Any ideas? Here are a few examples of what I've tried that don't work:

        $ sudo -u apache export HOME=path/to/apache/writable/dir/ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l deploy -i path/to/key targethost

    Eventually, I'll be using this solution to run rsync as the apache user.
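
    A detail worth noting about the first variation above: sudo does not run a shell here, so "export HOME=..." is treated as the command to execute rather than as an assignment that ssh would ever see. A sketch of two alternatives that keep everything in a directory apache can write, assuming path/to/key is readable by the apache user:

        # Carry HOME via env(1) so ssh derives ~/.ssh from a writable location
        sudo -u apache env HOME=path/to/apache/writable/dir \
            ssh -o StrictHostKeyChecking=no -l deploy -i path/to/key targethost

        # Or skip HOME entirely and send known-hosts bookkeeping to /dev/null
        sudo -u apache ssh \
            -o StrictHostKeyChecking=no \
            -o UserKnownHostsFile=/dev/null \
            -o IdentitiesOnly=yes \
            -l deploy -i path/to/key targethost

    Either way, "Permission denied (publickey)" is about the key itself (unreadable by the apache user, or rejected by the server) rather than about the known_hosts warning, so the .ssh directory message and the authentication failure are worth treating as separate problems.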


  • Webapp in Jetty can't find properties file after running a couple days

    - by Cuga
    I have a webapp running in Jetty on Mac OS X 10.6. After a few days of running, without the server losing power or rebooting, it stops working, saying it can't find a properties file. This properties file is included inside the .war file deployed to the /webapps directory. If I restart Jetty as the superuser, the web service works again just fine. Can anyone lend any advice on what's going on and how I can fix it? The error shown when it isn't working is:

        Problem accessing /my-web-service. Reason:
            INTERNAL_SERVER_ERROR
        Caused by:
        java.lang.NullPointerException
            at com.company.service.Dao.readFromPropertiesFile(BwDao.java:35)
            at com.company.service.ServletHandler.doGet(ProxyClass.java:66)
            ...
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
            at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
            at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

    The properties file it's trying to read does exist inside the .war file, and this is how the properties are being read from the classpath:

        Properties properties = new Properties();
        properties.load(Thread.currentThread().getContextClassLoader().getResourceAsStream(
                "app.properties"));

    Again, this does work just fine if I have just restarted the server, but it seems to fail after running for a few days.


  • Grub hangs at "Starting up ..." when USB flash card reader is plugged in (on Ubuntu Hardy)

    - by Laurence Gonsalves
    I have a PC with Ubuntu Hardy installed. The machine boots fine unless my USB flash card reader (one of those N-in-1 readers by MediaGear) is plugged in at startup. If the reader is plugged in, the boot process proceeds as normal until it gets to the screen that says "Starting up ...". At that point it just hangs forever. To work around this I currently leave the reader unplugged when booting, and then plug it back in after I see that Ubuntu is actually starting. This is annoying though, especially when I reboot the machine (typically for updates), forget to unplug the reader, and walk away only to come back hours later to find the machine hung. My guess is that the presence of the reader is confusing Grub about where to find the kernel. The weird thing is that Grub is on the same drive as the kernel I want it to boot so clearly the drive is still readable even when the flash card reader is plugged in. Is there some way I can tell Grub to never go looking on the flash card reader?


  • How to test server throughput

    - by embwbam
    I've always used apache benchmark to try to get a rough idea of how many requests/second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run apache benchmark on a simple hello world server it can handle 2500 requests per second or so. However, if I put a timeout in the hello world function, so that it responds after 2 seconds, apache benchmark reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, it goes up. This makes sense, because apache benchmark is basically sending out requests in batches of 100, which come back every 2 seconds. 100 requests / 2 seconds = 50 requests / second If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit, I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Any way I can get a good guess about how many requests my server can handle? I want to make sure the test computer isn't the one causing the problem.
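
    The crash at 400-500 concurrent connections is consistent with hitting the per-process open-file limit on the test machine rather than a node.js limit. A quick sketch for checking and raising that limit before re-running the benchmark (the URL and request counts are placeholders):

        # See the current file-descriptor ceiling for this shell
        ulimit -n

        # Raise it for this session (needs a sufficiently high hard limit, or root)
        ulimit -n 10240

        # Re-run the benchmark: -n is total requests, -c is concurrency
        ab -n 20000 -c 1000 http://localhost:3000/

    Running ab from a second machine also rules out the client and the server competing for the same descriptors and ephemeral ports, which helps separate "my server's limit" from "my test box's limit".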


  • Ubuntu 12, limit the resolution to 640x480

    - by TimothyP
    How can I set and limit the resolution in Ubuntu 12 to 640x480? There's not much in the xorg.conf file anymore, so I'm guessing this is no longer the place to do it? I can't do it using the GUI either, because it doesn't show me the 640x480 option. While setting the resolution the computer is connected to a normal screen, but later it will be connected to a screen that only supports 640x480 and doesn't report its supported modes to the computer. The only thing in my xorg.conf (by default) is this:

        Section "Device"
            Identifier "Default Device"
            Option     "NoLogo" "True"
        EndSection
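
    When the target screen does not advertise its modes over EDID, one approach is to define the mode by hand with xrandr; a sketch, assuming the output is called VGA-0 (check the real name with xrandr -q):

        # Generate a modeline for 640x480 at 60 Hz
        cvt 640 480 60

        # Register the modeline, attach it to the output, then switch to it
        xrandr --newmode "640x480_60.00" 23.75 640 664 720 800 480 483 487 500 -hsync +vsync
        xrandr --addmode VGA-0 "640x480_60.00"
        xrandr --output VGA-0 --mode "640x480_60.00"

    To make it permanent, the same Modeline plus a PreferredMode entry can go into a Monitor/Screen section of xorg.conf (or a snippet under /etc/X11/xorg.conf.d/), which also has the effect of limiting the display to that mode.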


  • How do I configure additional phone lines asterisk/trixbox?

    - by Matt
    I have a 4 port Digium card in there, and have 4 lines running smoothly. Now, we added ANOTHER 4 port card and have 4 more analog lines coming into the Trixbox server. It still runs the 4 fine, but what do I need to do to add the additional 4 phone numbers/lines? I want it to act exactly as before, there's nothing special about the new lines. We just need more lines so that when we have 4 out of state customers call, we can have 4 more call and not get the busy signal. Trixbox CE 2.8


  • Getting rsync to move files from source to destination?

    - by fabien-barbier
    Is rsync a good choice for my project? I have to:

        - copy files from a source to a destination folder via SSH,
        - be sure all files are copied,
        - delete the source files after copying,
        - rename files when there is a name conflict.

    It looks like I can use the --remove-source-files option to delete the source files, but how does rsync manage conflicts — can I add rules? The use case for my project: I run scientific calculations on server A and the results are written into a folder "process"; for each calculation I get a directory like /process/calc1. Now I would like to transfer the directory "calc1" to server B (so I end up with /process/calc1 there) and delete "calc1" from server A. During another calculation I get "/process/calc2" on server A; the idea is likewise to move "calc2" into the "/process/" directory on server B, so that I then have on server B:

        /process/calc1
        /process/calc2

    (and /process/ on server A is empty). How will rsync manage a conflict (on server B) if a new calculation creates another "/process/calc1" on server A while "/process/calc1" already exists on server B? Is it possible to add rules to rsync that rename "/process/calc1" to "/process/calc1R2" on server B, and so on (e.g. calc1R3)? Thanks.
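
    A sketch of the transfer itself, plus one way to approximate rename-on-conflict (rsync has no built-in renaming rule, so the destination name has to be chosen before the transfer); serverB, the paths and the calc name are placeholders for this setup:

        #!/bin/bash
        # Move one calculation directory from server A to server B over SSH.
        src="/process/calc1"
        name=$(basename "$src")
        dest="/process/$name"

        # Pick a free name on the destination: calc1, calc1R2, calc1R3, ...
        n=2
        while ssh serverB "test -e '$dest'"; do
            dest="/process/${name}R${n}"
            n=$((n + 1))
        done

        # -a preserves permissions/times; --remove-source-files deletes each source
        # file only after it has been transferred successfully
        rsync -a --remove-source-files "$src/" "serverB:$dest/"

        # --remove-source-files leaves the now-empty directories behind; clean them up
        find "$src" -depth -type d -empty -delete

    Without the renaming loop, a second /process/calc1 would simply be merged into the existing /process/calc1 on server B (overwriting files that changed), which is usually not what is wanted here.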


  • pppd disconnects from 3G, doesn't reconnect, w/ persist set

    - by bytenik
    I am trying to configure pppd to connect to a 3G network (Sprint, in this case) and then stay connected, reconnecting automatically if the remote connection is terminated. I have enabled the persist option. My configuration file is as follows:

        hide-password
        noauth
        connect "/usr/sbin/chat -v -f /etc/chatscripts/cellular"
        debug
        /dev/cell
        921600
        defaultroute
        noipdefault
        user " "
        persist
        maxfail 0
        lcp-echo-failure 10
        lcp-echo-interval 60
        holdoff 5

    However, when the peer disconnects the connection, pppd often waits a long time (substantially more than my holdoff) to reconnect the modem, if it ever reconnects at all! An example log showing this:

        May 23 05:17:24 00270e0a8888 pppd[2408]: rcvd [LCP TermReq id=0x26]
        May 23 05:17:24 00270e0a8888 pppd[2408]: LCP terminated by peer
        May 23 05:17:24 00270e0a8888 pppd[2408]: Connect time 60.1 minutes.
        May 23 05:17:24 00270e0a8888 pppd[2408]: Sent 0 bytes, received 0 bytes.
        May 23 05:17:24 00270e0a8888 pppd[2408]: Script /etc/ppp/ip-down started (pid 2456)
        May 23 05:17:24 00270e0a8888 pppd[2408]: sent [LCP TermAck id=0x26]
        May 23 05:17:24 00270e0a8888 pppd[2408]: Script /etc/ppp/ip-down finished (pid 2456), status = 0x0
        May 23 05:17:24 00270e0a8888 pppd[2408]: Hangup (SIGHUP)
        May 23 05:17:24 00270e0a8888 pppd[2408]: Modem hangup
        May 23 05:17:24 00270e0a8888 pppd[2408]: Connection terminated.
        May 23 05:17:24 00270e0a8888 pppd[2408]: Terminating on signal 15
        May 23 05:17:24 00270e0a8888 pppd[2408]: Exit.
        May 23 06:08:07 00270e0a8888 pppd[2500]: pppd 2.4.5 started by root, uid 0
        May 23 06:08:10 00270e0a8888 pppd[2500]: Script /usr/sbin/chat -v -f /etc/chatscripts/cellular finished (pid 2530), status = 0x0
        May 23 06:08:10 00270e0a8888 pppd[2500]: Serial connection established.
        May 23 06:08:10 00270e0a8888 pppd[2500]: using channel 11

    The disconnect at the request of the peer occurs at 05:17, but the reconnect didn't happen until 06:08. I had a friend monitoring the server, so I'm not certain this wasn't a manual reconnection. Either way, it either took almost an hour to reconnect or never reconnected. Shouldn't persist + holdoff 5 cause this to automatically reconnect 5 seconds after the link terminates?


  • Disk Partitioning problem with fdisk.

    - by MA1
    Currently I am using fdisk to create/resize Windows partitions. The following is a sample invocation, feeding fdisk an input script:

        fdisk /dev/sda < partInput

    The contents of partInput are as follows:

        d       # delete the partition
        3       # partition number to be deleted
        n       # add a new partition
        p       # primary: type of new partition
        3       # new partition number
        18804   # start cylinder of new partition
        77433   # end cylinder of new partition
        t       # change the type of a partition
        3       # partition number whose type (filesystem) is to be changed
        7       # HPFS/NTFS: partition type (filesystem)
        n       # add a new partition
        p       # primary: type of partition
        77434   # first cylinder of new partition
        77825   # end cylinder of new partition
        w       # write all the above changes

    As you can see, the above input uses cylinders for the start and end. Earlier I was using sectors as the unit and everything was working fine, but I ran into problems when partitioning a 1.5 TB hard drive. I then changed the unit to cylinders, but it works on some machines and not on others; on some machines fdisk fails to create the partition table correctly. So I am thinking of moving to parted if there is no way to do the above with fdisk. Please also tell me how to correctly convert sectors to cylinders. How can I perform all the above steps using parted without losing the data, or how can I use fdisk correctly?
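
    On the conversion question: a cylinder is heads × sectors-per-track logical sectors (commonly 255 × 63 = 16065 sectors of 512 bytes, roughly 7.84 MiB), and fdisk -l prints the exact geometry it is using for a given disk. A sketch of both the conversion and a parted-based equivalent that addresses partitions by sector directly; /dev/sda and the sector numbers are placeholders:

        # Show the geometry fdisk assumes (heads, sectors/track, cylinders, sector size)
        fdisk -l /dev/sda | head

        # Convert a sector number to a cylinder number (integer division)
        sectors_per_cylinder=$((255 * 63))
        echo $(( 302086260 / sectors_per_cylinder ))   # -> 18804, the start cylinder above

        # The same style of scripted changes with parted, working in sectors so large
        # disks do not depend on cylinder rounding
        parted -s /dev/sda unit s print
        parted -s /dev/sda rm 3
        parted -s /dev/sda mkpart primary ntfs 302086260s 1243977209s   # placeholder boundaries

    Note that parted, like fdisk, only edits the partition table here; "without losing the data" still requires the new boundaries to cover the existing filesystem, and an actual resize of the NTFS filesystem itself is a separate step (e.g. with ntfsresize).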


  • Missing dependency when trying to update

    - by ant2009
    Hello,

    Fedora 12, 2.6.32.9-67.fc12.i686. I have tried doing what is recommended at the bottom of the output; however, that didn't work, so I have to run yum upgrade --skip-broken. Does anyone know how to solve this problem? Many thanks.

        nss-3.12.6-1.2.fc12.i686 from updates has depsolving problems
          --> Missing Dependency: nspr >= 4.8.4 is needed by package nss-3.12.6-1.2.fc12.i686 (updates)
        nss-3.12.6-1.2.fc12.i686 from updates has depsolving problems
          --> Missing Dependency: nss-util = 3.12.6 is needed by package nss-3.12.6-1.2.fc12.i686 (updates)
        Error: Missing Dependency: nspr >= 4.8.4 is needed by package nss-3.12.6-1.2.fc12.i686 (updates)
        Error: Missing Dependency: nss-util = 3.12.6 is needed by package nss-3.12.6-1.2.fc12.i686 (updates)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
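
    The error says the new nss needs newer nspr and nss-util packages that yum is not finding, which often comes down to stale repository metadata or a mirror that only carries part of the update. A sketch of the usual first steps, assuming the standard Fedora update repositories are enabled:

        # Throw away cached metadata and re-resolve against a fresh mirror
        yum clean all

        # Update the whole dependency chain in one transaction instead of nss alone
        yum update nspr nss-util nss

        # If duplicate or half-installed packages are the cause, the suggested
        # cleanup tools (from yum-utils) will report them
        package-cleanup --problems
        package-cleanup --dupes

    If nspr >= 4.8.4 or nss-util 3.12.6 genuinely is not present in any enabled repository, the problem is on the mirror/repo side rather than on this machine, and waiting for the mirror to sync (or switching mirrors) is the fix.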


  • Why does ganglia think my host is down?

    - by NZKoz
    I have ganglia set up to monitor our staging server, and it's working great, but I'm confused by ganglia's definition of 'down'. There's a single node running gmetad, gmond and the web frontend, but some small percentage of the time the web frontend shows confusing output. Despite the fact that it's a single server in the cluster, and that server is the one serving the web interface, the dashboard output insists that the host is down. Below that it shows a graph with 50% down, 50% up. You can see an example of this here: http://i.imgur.com/MCWaS.jpg There's obviously something confusing ganglia somewhere, but I'm not sure where to start looking. Unfortunately, googling for any combination of 'ganglia', 'down' and the metric name seems to return nothing but other people's ganglia installations displaying the same nonsense. Any tips on where to start looking would be greatly appreciated.


  • PCI scan findings and problems with weak ciphers on ports 993, 443, 995, 465

    - by user64991
    From the PCI scan results:

        Synopsis: The remote service encrypts traffic using a protocol with known weaknesses.

        Description: The remote service accepts connections encrypted using SSL 2.0, which
        reportedly suffers from several cryptographic flaws and has been deprecated for
        several years. An attacker may be able to exploit these issues to conduct
        man-in-the-middle attacks or decrypt communications between the affected service
        and clients.

        See also: http://www.schneier.com/paper-ssl.pdf

        Solution: Consult the application's documentation to disable SSL 2.0 and use
        SSL 3.0 or TLS 1.0 instead.

        Risk Factor: Medium / CVSS Base Score: 2 (AV:R/AC:L/Au:NR/C:P/A:N/I:N/B:N)

    I have tried to change:

        SSLProtocol all -SSLv2

    to:

        SSLProtocol -ALL +SSLv3 +TLSv1

    and:

        SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW

    to:

        SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:!MEDIUM:!LOW:!SSLv2:!EXPORT

    But using SSLDigger, it shows the same result. Is this the right way to do something like this?
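
    One thing worth noting about the port list: 443 is Apache, but 993, 995 and 465 are usually IMAPS, POP3S and SMTPS, which have their own TLS settings (Dovecot/Courier, Postfix/Sendmail) that a change to Apache's SSLProtocol does not touch. A sketch for testing each port directly, assuming the locally installed openssl still supports the -ssl2 flag (newer builds have removed SSLv2 support entirely); mail.example.com is a placeholder host:

        # If the handshake succeeds, that service still accepts SSLv2
        for port in 443 465 993 995; do
            echo "== port $port =="
            openssl s_client -connect mail.example.com:$port -ssl2 < /dev/null 2>&1 \
                | grep -E 'Protocol|Cipher|handshake failure'
        done

    A failed SSLv2 handshake on every listed port (after restarting the corresponding daemons) is what the scanner needs to see; the result staying the same after only the Apache change suggests the mail daemons are the ones still offering SSLv2.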


  • How can I make XAnalogTV fill my screen?

    - by Breakthrough
    I recently installed xscreensaver, as well as the additional/extra screensavers. Many of the OpenGL ones function correctly, going fullscreen as expected. However, for some reason, the XAnalogTV screensaver leaves two "blank" spots on the edges of my screen. If I manually launch XAnalogTV, it displays a window, which it fills correctly. When I maximize the window, the same effect occurs: the window maximizes, but the two edges of the screen are literally "transparent". This effect also occurs when the screensaver is set to fullscreen. For these reasons, I believe the problem may be related to the aspect ratio of the screen. The edges of the screen are literally "ignored", with nothing being drawn there. Specifically, note the transition between the maximized and full-screen screenshots (with the un-drawn whitespace shrinking as the vertical height has been increased). For reference, I am running Xubuntu 12.04 on a Dell Vostro 1520 (Intel P8600, Nvidia 9300M) with a 1440 x 900 display (16:10). I have also set the GetViewPortIsFullOfLies preference to true. Is there any way to force XAnalogTV to fill my entire screen? Relevant screenshots (windowed, maximized, and full-screen, respectively):


  • How can I reset the permissions of /bin, /boot, /etc and /dev to the original owner on Ubuntu?

    - by Camsoft
    I accidentally changed the ownership of /bin, /boot, /etc and /dev recursively to nobody:nogroup using chown when I misplaced a forward slash! How can I restore the original file ownership? I've managed to get them all back to root:root, but I'm not sure whether all the files should be owned by root and whether this will break something. Is there an option to fix file permissions like there is in OS X? Help!
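
    On a stock Ubuntu install almost everything under /bin, /boot, /etc and /dev is owned by root:root; the notable exceptions are a few /etc files with special groups (for example /etc/shadow and /etc/gshadow are root:shadow) and the device nodes, which udev recreates with their own owners at every boot. A sketch for auditing what still deviates, assuming a reference machine is available for comparison:

        # List anything in the affected trees not owned by root:root
        sudo find /bin /boot /etc \( ! -user root -o ! -group root \) -ls

        # Files that legitimately carry a non-root group, for comparison
        ls -l /etc/shadow /etc/gshadow

        # /dev is managed by udev, so a reboot restores its device-node ownerships

    Anything flagged by the find command (and any locally created files under /etc) is best checked by hand against a similar Ubuntu machine, since the package manager does not record per-file ownership in a way that can simply be replayed.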

