Search Results

Search found 26263 results on 1051 pages for 'linux guest'.


  • Nginx and automatic updates

    - by Desmond Hume
    I'm on Ubuntu 12.04.1 with unattended-upgrades configured for automatic security updates. I installed Nginx by first adding

      deb http://nginx.org/packages/ubuntu/ lucid nginx
      deb-src http://nginx.org/packages/ubuntu/ lucid nginx

    to /etc/apt/sources.list, as suggested by the official wiki, and then running

      sudo apt-get update
      sudo apt-get install nginx

    which installed Nginx with all the standard modules. Now I think I could make good use of one or two of the optional modules, like the gzip precompression module or one of the security-related ones. So far I see two ways of adding an optional module to Nginx: compiling and installing from source, or the approach described in this article. Which way should I choose so that automatic updates still run for, and apply to, Nginx and its optional modules? Or should I create a cron job with a command/script specific to Nginx instead of using the unattended-upgrades utility? Can I choose between full updates and security-only updates being automatically applied to the standard and optional modules? And finally, is it possible to update Nginx's modules on the fly (without dropping any connections), as the documentation suggests is possible with

      sudo kill -USR2 $( cat /run/nginx.pid )

    P.S. Actually, I'm not certain that unattended-upgrades would automatically update the standard modules in the first place; not enough time has passed since Nginx was installed to say for sure.
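
    A possible direction (not from the question): unattended-upgrades only applies packages whose origin is listed in its configuration, so a third-party repository such as nginx.org has to be added there explicitly. A minimal sketch, assuming the repo's Release file reports origin "nginx" and suite "lucid" (verify with apt-cache policy nginx; the exact syntax varies by unattended-upgrades version):

      // /etc/apt/apt.conf.d/50unattended-upgrades (sketch)
      Unattended-Upgrade::Allowed-Origins {
              "${distro_id}:${distro_codename}-security";
              "nginx:lucid";   // assumption -- check with: apt-cache policy nginx
      };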

    Read the article

  • Grub hangs at "Starting up ..." when USB flash card reader is plugged in (on Ubuntu Hardy)

    - by Laurence Gonsalves
    I have a PC with Ubuntu Hardy installed. The machine boots fine unless my USB flash card reader (one of those N-in-1 readers by MediaGear) is plugged in at startup. If the reader is plugged in, the boot process proceeds as normal until it gets to the screen that says "Starting up ...". At that point it just hangs forever. To work around this I currently leave the reader unplugged when booting, and then plug it back in after I see that Ubuntu is actually starting. This is annoying though, especially when I reboot the machine (typically for updates), forget to unplug the reader, and walk away only to come back hours later to find the machine hung. My guess is that the presence of the reader is confusing Grub about where to find the kernel. The weird thing is that Grub is on the same drive as the kernel I want it to boot so clearly the drive is still readable even when the flash card reader is plugged in. Is there some way I can tell Grub to never go looking on the flash card reader?
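
    One hedged way to test the "Grub is confused about drives" theory: drop to the GRUB legacy shell (press Esc at the menu, then c) with the reader plugged in, and see how the drives enumerate:

      grub> find /boot/grub/stage1     # which (hdX,Y) holds the boot files?
      grub> geometry (hd0)             # what does GRUB think (hd0) is?

    If the reader shifts the hard disk out of the (hd0) slot, adjusting the BIOS boot order or /boot/grub/device.map may help; this is a guess based on the symptoms, not a confirmed diagnosis.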

    Read the article

  • What is ranlib?

    - by Ying
    I have been using a Mac OS X system for a while, but only recently started poking into its guts. I found a guide telling me to run 'sudo ranlib /usr/local/lib/libjpeg.a' (while installing libjpeg). I have read the ranlib manual and tried looking it up online, but I simply don't understand it. What resources should I look at to learn more, or can someone give a concise explanation of its use? Thanks in advance!
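
    In short: ranlib writes (or refreshes) the symbol index of a static archive, which the linker uses to find which member object defines which symbol; it is equivalent to ar -s. A small illustration (the .o names are placeholders):

      ar r libjpeg.a jcapimin.o jcapistd.o   # create/update the archive
      ranlib libjpeg.a                       # add/update the symbol table (same as: ar -s libjpeg.a)
      nm -s libjpeg.a | head                 # "Archive index:" shows the table ranlib built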

    Read the article

  • Is it possible to use SELinux MCS permissions with Samba?

    - by Yuri
    Created a user1:

      adduser --shell /sbin/nologin --no-create-home user1
      passwd user1
      smbpasswd -a user1
      smbpasswd -e user1
      semanage login -a -s "unconfined_u" -r "s0-s0:c0" user1

    Added category c0 to the folder ./123 inside the Samba share:

      chcat s0:c0 /share/123/

    After that, user1 can't go into this folder:

      type=AVC msg=audit(1332693158.129:48): avc: denied { read } for pid=1122
      comm="smbd" name="123" dev=sda1 ino=786438
      scontext=system_u:system_r:smbd_t:s0
      tcontext=unconfined_u:object_r:samba_share_t:s0:c0 tclass=dir

    But if I remove the c0 category:

      restorecon -v /share/123/

    user1 opens the folder with no problem. Am I doing something wrong, or does Samba not support SELinux MCS? Installed on CentOS 6.2:

      samba3.i686 3.6.3-44.el6 @sernet-samba
      selinux-policy.noarch 3.7.19-126.el6_2.10 @updates
      selinux-policy-targeted.noarch 3.7.19-126.el6_2.10 @updates
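
    A few checks that may help localize this (not from the question): note that the denial's source context is the smbd daemon itself, system_u:system_r:smbd_t:s0, which carries no c0 category, while the directory is labeled s0:c0 -- the user's login range never enters into it:

      ls -dZ /share/123/     # confirm the directory label is ...:s0:c0
      semanage login -l      # confirm user1 maps to s0-s0:c0
      ps -eZ | grep smbd     # smbd runs as smbd_t:s0 -- no categories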

    Read the article

  • MySQL Master-Master Replication Broken

    - by Recc
    I've inherited a MySQL master-master system. I noticed the second master (let's call it the slave from now on, as it runs on the 'slave' machine) stopped getting its databases updated. The master shows:

      Slave_IO_Running: Yes
      Slave_SQL_Running: Yes

    The slave shows (error truncated):

      Slave_IO_Running: Yes
      Slave_SQL_Running: No
      Last_Errno: 1062
      Last_Error: Error 'Duplicate entry '3' for key 'PRIMARY'' on [...]

    I don't know what caused it to process that, considering we can't get a duplicate there. What's important is to resume normal operations. Right now I've run stop slave; on the master and stop slave; on the slave, because I saw that if I change records on the slave, the changes do get propagated to the master, which is in active use. How do I force-sync EVERYTHING from master to slave without affecting data on the master, and then hopefully have the slave pick up replication as usual?

    UPDATE: OK, I tried deleting all tables on the slave, and then it complained in that error section that a table doesn't exist. So I made a no-data dump of the master and made sure I had only empty tables on the secondary (slave). I ran start slave; on the slave, but now it complains about ALTER TABLE statements, for instance:

      Last_Errno: 1060
      Last_Error: Error 'Duplicate column name [...] Query: 'ALTER TABLE [...]

    How do I skip the ALTER statements? I just want to replicate the data and be done with it; my tables already have the latest changes, and now it's complaining about schema changes made after replication seized up weeks ago. How do I reset the log or something?

    OUTSTANDING: Why would this start happening? The "secondary" is propagating to the "primary", but the "primary" is not propagating to the "secondary", and any fixes I tried left it in the same Yes-Yes / Yes-No state with the same Last_Error. I think around that time the server was taken off the network; could that confuse MySQL in some way?
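
    A common full-resync recipe (a sketch, not from the thread; credentials are placeholders, and it assumes InnoDB tables for --single-transaction):

      # On the master: dump everything and record the binlog position in the dump
      mysqldump -u root -p --all-databases --single-transaction --master-data=1 > full.sql

      # On the slave: stop replication, load the dump (its embedded
      # CHANGE MASTER TO line resets the log file/position), then restart
      mysql -u root -p -e "STOP SLAVE;"
      mysql -u root -p < full.sql
      mysql -u root -p -e "START SLAVE; SHOW SLAVE STATUS\G"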

    Read the article

  • How to "ignore" username and password prompt in net use

    - by Mattisdada
    I have a logon.cmd script that I'm using to map network drives in the user's profile. It looks like this:

      ::Onboarding
      net use m: /delete
      net use m: \\BOB\onboarding
      ::Bookings
      net use n: /delete
      net use n: \\BOB\bookings
      ::Accounts
      net use j: /delete
      net use j: \\BOB\accounts

    It works fine until it gets to a share that the current user cannot access; it then asks for a username and password instead of erroring and continuing. Notes: this very script used to work on another Samba PDC network, but I've moved it over to another server (still a Samba PDC) and now it's breaking. Is there any way for it to ignore the username/password prompt and just continue?
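
    One workaround (a sketch, not a confirmed fix): net use prompts because it reads the missing credentials from standard input, so redirecting stdin from NUL makes the command fail immediately and lets the script continue:

      ::Accounts -- fails fast instead of prompting if the user lacks access
      net use j: /delete
      net use j: \\BOB\accounts < NUL: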

    Read the article

  • Can I mark a folder as mountpoint-only?

    - by Collin
    I have a folder, ~/nas, on which I usually mount a network drive with sshfs. Today I didn't realize the share hadn't been mounted yet and copied some data into it. It took me a bit to realize that I'd just copied the data onto my own local drive rather than the network share. Is there some way to mark this folder as being a mount point, so that no one can copy data into it while nothing is mounted? I tried the permissions solution here: How to only allow a program to write to a directory if it is mounted?, but if I don't have write access, I also can't mount anything onto it.
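
    One approach often suggested for this (a sketch; needs root and an ext-family filesystem): make the empty mount-point directory immutable, so writes fail while nothing is mounted, while mounting over it still works because mounting doesn't modify the underlying directory:

      sudo chattr +i ~/nas               # block writes to the bare directory
      sshfs user@nas:/share ~/nas        # mounting over it is unaffected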

    Read the article

  • Are these MySQL user settings vulnerable?

    - by Kavon Farvardin
    I'm using phpMyAdmin to manage the databases, and I'm new to SQL in general. Am I supposed to keep an open anonymous user on localhost so that things like Drupal can access MySQL? Having a password-less root account reachable via the server's hostname seems unwise, but I don't really know what I'm doing here. The user whose name starts with a b is the one I use to log in and do things like create databases.
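
    Generally, no: Drupal connects with the credentials from its own settings, so the anonymous accounts can be dropped and root should get a password. A minimal hardening sketch (or run mysql_secure_installation, which does the same interactively):

      -- as root in the mysql client
      DROP USER ''@'localhost';
      SET PASSWORD FOR 'root'@'localhost' = PASSWORD('a-strong-password');
      FLUSH PRIVILEGES;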

    Read the article

  • Webapp in Jetty can't find properties file after running a couple days

    - by Cuga
    I have a webapp running in Jetty on Mac OS 10.6. After a few days of running, without the server losing power or rebooting, it stops working, saying it can't find a properties file. This properties file is included inside the .war file deployed to the /webapps directory. If I restart Jetty as the superuser, the web service works again just fine. Can anyone offer any advice as to what's going on and how I can fix it? The error shown when it isn't working is:

      Problem accessing /my-web-service. Reason: INTERNAL_SERVER_ERROR
      Caused by: java.lang.NullPointerException
        at com.company.service.Dao.readFromPropertiesFile(BwDao.java:35)
        at com.company.service.ServletHandler.doGet(ProxyClass.java:66)
        ...
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

    [screenshot of the .war layout showing where the properties files live was here] This is how the properties are read from the classpath:

      Properties properties = new Properties();
      properties.load(Thread.currentThread().getContextClassLoader().getResourceAsStream(
          "app.properties"));

    Again, this works just fine right after a server restart, but it seems to fail after running for a few days.
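
    Not from the post, but a defensive variant of that snippet: getResourceAsStream returns null when the resource is no longer visible to the thread's context classloader (which can happen with a stale context loader after a hot redeploy), and that null is exactly what produces an NPE inside Properties.load. Loading via the class's own loader and checking for null turns the failure into a readable error:

      Properties properties = new Properties();
      // use the webapp class's own loader rather than the thread's context loader
      InputStream in = Dao.class.getClassLoader().getResourceAsStream("app.properties");
      if (in == null) {
          throw new IllegalStateException("app.properties not visible on the classpath");
      }
      try {
          properties.load(in);
      } finally {
          in.close();
      }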

    Read the article

  • Is the /etc/inittab file read top down?

    - by PeanutsMonkey
    When the init process executes after the kernel has loaded, does it read the /etc/inittab file top-down, i.e. executing each entry in the order it appears in the file? If so, and based on my reading, does this mean that it enters the documented runlevel and then launches the sysinit process, or vice versa? For example, the common examples I have seen are:

      id:3:initdefault:
      # System initialization.
      si::sysinit:/etc/rc.d/rc.sysinit
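
    For reference, a minimal annotated sketch of the ordering: init parses the whole file and then runs entries by action type rather than strictly top-down, so sysinit runs first no matter where it appears, and initdefault merely names the runlevel whose entries run afterwards:

      id:3:initdefault:                  # only *selects* the default runlevel
      si::sysinit:/etc/rc.d/rc.sysinit   # runs first, regardless of position
      l3:3:wait:/etc/rc.d/rc 3           # then the runlevel-3 entries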

    Read the article

  • Why would one of my servers stop being able to access other servers by FQDN?

    - by Newlyn Erratt
    I have a number of servers on our local network, and our Debian server has suddenly stopped being able to access the other servers via their FQDNs. The initial symptom was an inability to log in with Active Directory accounts. On further inspection, this machine, porkbelly, was unable to access our other servers (e.g. bacon and albert) via their FQDNs. That is, it can ping albert by running ping albert, but not by running ping albert.domain.local, even though when running ping albert the name is expanded to albert.domain.local. The server is still accessible from other servers as both porkbelly and porkbelly.domain.local. Its hostname and FQDN are correct, per the hosts file and the output of hostname. The resolv.conf appears correct. It contains:

      domain domain.local
      search domain.local
      nameserver 192.168.0.xxx (the nameserver)

    The DNS server is also our Windows AD server. I'm not sure where to go from here or why DNS seems to be only partially working; I don't have much experience with this. Where should I go from here? What might cause machines to be visible via their hostname but not their FQDN?
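
    The usual first checks, as a sketch (host names are from the question; the nameserver address is redacted above, so substitute the real one):

      dig albert.domain.local @192.168.0.xxx   # does the AD DNS server answer at all?
      getent hosts albert.domain.local         # what the resolver libraries return
      grep '^hosts' /etc/nsswitch.conf         # e.g. "hosts: files dns" -- lookup order matters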

    Read the article

  • pppd disconnects from 3G, doesn't reconnect, w/ persist set

    - by bytenik
    I am trying to configure pppd to connect to a 3G network (Sprint, in this case) and then stay connected, reconnecting automatically if the remote connection is terminated. I have enabled the persist option. My configuration file is as follows:

      hide-password
      noauth
      connect "/usr/sbin/chat -v -f /etc/chatscripts/cellular"
      debug
      /dev/cell
      921600
      defaultroute
      noipdefault
      user " "
      persist
      maxfail 0
      lcp-echo-failure 10
      lcp-echo-interval 60
      holdoff 5

    However, when the peer disconnects, pppd often waits a long time (substantially more than my holdoff) to reconnect the modem -- if it ever reconnects at all. An example log showing this:

      May 23 05:17:24 00270e0a8888 pppd[2408]: rcvd [LCP TermReq id=0x26]
      May 23 05:17:24 00270e0a8888 pppd[2408]: LCP terminated by peer
      May 23 05:17:24 00270e0a8888 pppd[2408]: Connect time 60.1 minutes.
      May 23 05:17:24 00270e0a8888 pppd[2408]: Sent 0 bytes, received 0 bytes.
      May 23 05:17:24 00270e0a8888 pppd[2408]: Script /etc/ppp/ip-down started (pid 2456)
      May 23 05:17:24 00270e0a8888 pppd[2408]: sent [LCP TermAck id=0x26]
      May 23 05:17:24 00270e0a8888 pppd[2408]: Script /etc/ppp/ip-down finished (pid 2456), status = 0x0
      May 23 05:17:24 00270e0a8888 pppd[2408]: Hangup (SIGHUP)
      May 23 05:17:24 00270e0a8888 pppd[2408]: Modem hangup
      May 23 05:17:24 00270e0a8888 pppd[2408]: Connection terminated.
      May 23 05:17:24 00270e0a8888 pppd[2408]: Terminating on signal 15
      May 23 05:17:24 00270e0a8888 pppd[2408]: Exit.
      May 23 06:08:07 00270e0a8888 pppd[2500]: pppd 2.4.5 started by root, uid 0
      May 23 06:08:10 00270e0a8888 pppd[2500]: Script /usr/sbin/chat -v -f /etc/chatscripts/cellular finished (pid 2530), status = 0x0
      May 23 06:08:10 00270e0a8888 pppd[2500]: Serial connection established.
      May 23 06:08:10 00270e0a8888 pppd[2500]: using channel 11

    The disconnect at the peer's request occurs at 5:17, but the reconnect didn't happen until 6:08. I had a friend monitoring the server, so I'm not certain this wasn't a manual reconnection. Either way, it either took almost an hour to reconnect or never reconnected on its own. Shouldn't persist + holdoff 5 cause pppd to reconnect automatically 5 seconds after the link terminates?

    Read the article

  • Get XMMS2 to call outside script on automatic playlist advance?

    - by Alex Balashov
    Is there a way to get XMMS2 to call an outside script when it advances in a playlist - either automatically or via manual intervention (e.g. xmms2 next)? The goal is to have balloons pop up on my desktop to tell me what new song has started playing, and I really, really don't want to write a background daemon that polls 'xmms2 info' or 'xmms2 current' if there's a way to get it to issue the callback. Thanks in advance!

    Read the article

  • Difference between tcp recv buffer and tcp receive window size?

    - by pradeepchhetri
    This command shows the TCP receive buffer sizes, in bytes:

      $ cat /proc/sys/net/ipv4/tcp_rmem
      4096    87380   4001344

    where the three values signify the min, default, and max values respectively. Then I tried to find the TCP window size using tcpdump:

      $ sudo tcpdump -n -i eth0 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn and port 80 and host google.com'
      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
      16:15:41.465037 IP 172.16.31.141.51614 > 74.125.236.73.80: Flags [S], seq 3661804272, win 14600, options [mss 1460,sackOK,TS val 4452053 ecr 0,nop,wscale 6], length 0

    I got a window size of 14600, which is 10 times the MSS. Can anyone please explain the relationship between the two?
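
    For reference, the two are related but not the same thing: tcp_rmem bounds the kernel's receive buffer, while the advertised window is what the kernel offers the peer out of that buffer. In the SYN above, win 14600 is the initial window, which is sent unscaled during the handshake (10 x the 1460-byte MSS), and wscale 6 means every window value advertised after the handshake is multiplied by 2^6 = 64, letting the window grow toward the 87380-byte default buffer and, as the kernel autotunes, toward the 4001344-byte maximum.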

    Read the article

  • How to efficiently count how many files a process has open?

    - by Victor Lin
    I know I can use the command lsof -p xxxx | wc -l to count the files opened by a process, and it works, but it is just too inefficient. I have some server processes with so many socket files open that the wc -l method never returns a result. What is an efficient way to find out how many files a process has open? Thanks.
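
    The cheap alternative is to count the descriptor entries in /proc directly instead of making lsof enumerate and resolve everything (a sketch; the pid is illustrative, and reading /proc/PID/fd needs appropriate privileges):

      ls /proc/12345/fd | wc -l   # number of open fds for pid 12345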

    Read the article

  • How can I determine what gnome desktop number a gnome terminal is connected to?

    - by Ross Rogers
    In KDE's Konsole, I can do the following from the terminal: dcop kwin KWinInterface currentDesktop And it will tell me which desktop my terminal is connected to ( per http://stackoverflow.com/questions/738059/in-kde-how-can-i-automatically-tell-which-desktop-a-konsole-terminal-is-in/745250#745250 ) How can I determine what desktop number the current gnome terminal in a gnome session is connected to?
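
    On any EWMH-compliant window manager (GNOME's Metacity included), the analogous query goes through root-window properties (a sketch; desktops are numbered from 0, and WINDOWID is exported by gnome-terminal in most builds):

      xprop -root _NET_CURRENT_DESKTOP      # desktop the session is currently showing
      xprop -id $WINDOWID _NET_WM_DESKTOP   # desktop this terminal window is on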

    Read the article

  • How can I reset the permissions of /bin, /boot, /etc and /dev to the original owner on Ubuntu?

    - by Camsoft
    I accidentally changed the ownership of /bin, /boot, /etc and /dev recursively to nobody:nogroup using chown when I misplaced a forward slash! How can I restore the original file ownerships? I've managed to get them all back to root:root, but I'm not sure whether all the files should be owned by root, or whether this will break something. Is there an option to fix file permissions like there is in OS X? Help!

    Read the article

  • Postfix count relayed messages per user

    - by Martino Dino
    I would like to know if it's possible to count outgoing (relayed) messages on a per-user basis in Postfix. I'm managing a small commercial SMTP relay and decided it would be nice to have a detailed daily report on how much mail a single user has sent (and eventually to enforce some limits), ideally in real time. I've looked almost everywhere and have started to think that writing my own milter would be the way to go. Are you aware of anything that already exists for Postfix that can count and report relayed mail for authenticated users (a script, milter, or whatever)?
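
    Before writing a milter, the logs may already be enough: Postfix logs sasl_username= on the smtpd connection line for authenticated clients, so a daily per-user count can be a one-liner (a sketch; the log path and rotation vary by distro):

      grep -o 'sasl_username=[^,]*' /var/log/mail.log |
        cut -d= -f2 | sort | uniq -c | sort -rn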

    Read the article

  • How to change the Nginx default folder?

    - by Ido Bukin
    I set up a server with Nginx and put my public_html at

      /home/user/public_html/website.com/public

    but requests are always served from

      /usr/local/nginx/html/

    How can I change this? nginx.conf:

      user www-data www-data;
      worker_processes 4;

      events {
        worker_connections 1024;
      }

      http {
        include mime.types;
        default_type application/octet-stream;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay off;
        keepalive_timeout 5;
        gzip on;
        gzip_comp_level 2;
        gzip_proxied any;
        gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        include /usr/local/nginx/sites-enabled/*;
      }

    /usr/local/nginx/sites-enabled/default:

      server {
        listen 80;
        server_name localhost;

        location / {
          root html;
          index index.php index.html index.htm;
        }

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
          root html;
        }
      }

    /usr/local/nginx/sites-available/website.com:

      server {
        listen 80;
        server_name website.com;
        rewrite ^/(.*) http://www.website.com/$1 permanent;
      }

      server {
        listen 80;
        server_name www.website.com;

        access_log /home/user/public_html/website.com/log/access.log;
        error_log /home/user/public_html/website.com/log/error.log;

        location / {
          root /home/user/public_html/website.com/public/;
          index index.php index.html;
        }

        # pass the PHP scripts to the FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          include /usr/local/nginx/conf/fastcgi_params;
          fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name;
        }
      }

    The error message I get is:

      Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php'

    The server tries to find the file in the Nginx default folder rather than in my public_html.
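
    The error path under /usr/local/nginx/html/ suggests the request is being handled by the default server block, which would happen if the website.com vhost was never linked into sites-enabled (nginx.conf only includes sites-enabled/*). A sketch of the likely fix (the nginx binary path here is an assumption):

      ln -s /usr/local/nginx/sites-available/website.com \
            /usr/local/nginx/sites-enabled/website.com
      sudo /usr/local/nginx/sbin/nginx -t          # validate the config
      sudo /usr/local/nginx/sbin/nginx -s reload   # apply it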

    Read the article

  • Shell script to control user initiated processes

    - by Gnanam
    I'm not a shell script expert. I'm looking for a shell script that checks the number of Java processes (MyJavaStandalone) running on the system before starting/executing the current Java process. Example: the script /home/myfolder/script.sh contains:

      /usr/java/jdk1.6.0/bin/java MyJavaStandalone >> $DATE.log &

    Here, before executing MyJavaStandalone, if there are already 10 such processes running, then this new process should not be started.
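
    A minimal sketch of such a guard (the process name and the limit of 10 come from the question; pgrep -f matches against the full command line):

      #!/bin/sh
      MAX=10
      COUNT=$(pgrep -cf MyJavaStandalone)
      if [ "$COUNT" -ge "$MAX" ]; then
          echo "$COUNT MyJavaStandalone processes already running; not starting" >&2
          exit 1
      fi
      /usr/java/jdk1.6.0/bin/java MyJavaStandalone >> "$DATE.log" &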

    Read the article

  • Partition table corrupted (USB flash drive)

    - by 13ren
    It's an 8 GB Patriot thumb drive which I've used extensively with lots of data. Today it is detected, but all data appears gone. (EDIT: at least some data is still there, but the partition table is gone.)

    EDIT @Sathya (thanks): here's the relevant output from sudo fdisk -l:

      Disk /dev/sdc: 8019 MB, 8019509248 bytes
      247 heads, 62 sectors/track, 1022 cylinders
      Units = cylinders of 15314 * 512 = 7840768 bytes

      Disk /dev/sdc doesn't contain a valid partition table

    So it is /dev/sdc, with the expected 8 GB... and no partition table. I tried to mount /dev/sdc (and then ran dmesg | tail):

      /media> sudo mount /dev/sdc mytmp
      mount: wrong fs type, bad option, bad superblock on /dev/sdc,
             missing codepage or other error
             In some cases useful info is found in syslog - try
             dmesg | tail or so

      /media> dmesg | tail
      [   24.300000] sdc: unknown partition table
      [   24.320000] sd 2:0:0:0: Attached scsi removable disk sdc
      [   24.370000] usb-storage: device scan complete
      [   26.870000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
      [   26.870000] EXT2-fs: group descriptors corrupted!
      [   50.420000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
      [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
      [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
      [ 5565.470000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
      [ 5565.470000] EXT2-fs: group descriptors corrupted!

    EDIT @Col: results from testdisk:

      Disk /dev/sdc - 8013 MB / 7642 MiB - CHS 1022 247 62
      Current partition structure:
           Partition                  Start        End    Size in sectors
      Partition sector doesn't have the endmark 0xAA55

    After I hit [Proceed], it says:

      Structure: Ok.  Keys A: add partition, L: load backup, Enter: to continue

    The "Structure: Ok." seems reassuring... will "A: add partition" make my old data accessible (if it's still there), or will it create a new, fresh partition? Another option is "[ MBR Code ] Write TestDisk MBR code to first sector" -- would it be better to do this?

    EDIT: I found that at least some of my data is still on the flash drive by using the command below and searching for English text in less (like " the "):

      cat /dev/sde | tr -cd '\11\12\40\1540-\176' | less

    (The drive changed from "/dev/sdb" to "/dev/sde" because I connected some extra drives today.) I've learnt that "/dev/sde1" would be the first partition and "/dev/sde" is the whole drive. Because Unix treats these devices just like files, you can use all the ordinary Unix file commands on them, like cat, and then process them like any other stream of data. The tr above removes non-printable characters ("\40" is space, which I wanted to preserve).

    How can I get my data back (assuming it's still there)? If only the partition table is corrupted, is there a standard "partition recovery tool"? Is there a way to "repartition" without deleting everything?
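
    One hedged observation: since dmesg reports EXT2 errors against /dev/sdc itself, the stick may have been formatted as a single filesystem with no partition table at all, in which case adding a partition in TestDisk would be the wrong move. A safer sketch is to work on an image and try the backup superblocks:

      dd if=/dev/sdc of=sdc.img bs=4M conv=noerror,sync   # copy first; recover on the copy
      mke2fs -n sdc.img        # -n = dry run: only *lists* backup superblock locations
      e2fsck -b 32768 sdc.img  # retry fsck using a backup superblock from that list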

    Read the article

  • How to allow writing to a mounted NFS partition

    - by Cerin
    How do you give a specific user permission to write to an NFS partition? I've mounted an NFS share on my localhost (a Fedora install), and I can read and write as root, but I'm unable to write as the apache user, even though all the files and directories in the share on both my localhost and the remote host are owned by apache. For example, I've mounted it via this line in /etc/fstab:

      remotehost:/data/media /data/media nfs _netdev,soft,intr,rw,bg 0 0

    And both locations are owned by apache:

      [root@remotehost ~]# ls -la /data
      total 24
      drwxr-xr-x.  6 root   root   4096 Jan  6  2011 .
      dr-xr-xr-x. 28 root   root   4096 Oct 31  2011 ..
      drwxr-xr-x   4 apache apache 4096 Jan 14  2011 media

      [root@localhost ~]# ls -la /data
      total 16
      drwxr-xr-x   4 apache apache 4096 Dec  7  2011 .
      dr-xr-xr-x. 27 root   root   4096 Jun 11 15:51 ..
      drwxrwxrwx   5 apache apache 4096 Jan 31  2011 media

    However, when I try to write as the apache user, I get a "Permission denied" error:

      [root@localhost ~]# sudo -u apache touch /data/media/test.txt
      touch: cannot touch `/data/media/test.txt': Permission denied

    But of course it works fine as root. What am I doing wrong?
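
    NFSv3 authorizes by numeric UID, not by user name, so the usual culprits are an apache UID that differs between the two hosts, or an export that squashes non-root users (root working here suggests the export sets no_root_squash). A sketch of the checks:

      id apache                          # run on BOTH hosts; the UIDs must match
      cat /etc/exports                   # on remotehost: look for all_squash / anonuid
      sudo -u apache stat /data/media    # how the mount presents the directory to apache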

    Read the article

  • HP Server Automation - agent misreporting hostname

    - by warren
    I've been using HP Server Automation for some time, but have noticed an interesting issue that I'm hoping the SF community has seen or knows a workaround for. When the management agent on Solaris or RHEL (the only platforms I've noticed it on) reports the hostname of a managed server, it does not return the value of hostname; it returns the first alias for that entry in /etc/hosts. Any ideas on how to get around that, other than editing /etc/hosts so the alias is at the end of the line instead of the front?
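
    If reordering is acceptable, the fix is just putting the canonical name first on the line, since (as described above) the agent takes the first name after the address (names below are illustrative):

      # /etc/hosts
      192.168.1.10   myserver.example.com   myserver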

    Read the article
