Search Results

Search found 1034 results on 42 pages for 'chen jun'.

Page 6/42

  • Unable to make sound play in headset

    - by user50849
    Top right, I click the sound icon, select sound settings, and connect my USB headset. I can then see the headset being detected as it pops up in the menu. I click it, and expect the currently playing audio to be sent to the headset instead. My problem is that it does not: the audio keeps playing through the built-in speakers. More info: the icon for my built-in card in the sound settings is a circuit with a note symbol on top. The symbol for the headset is just a black background with a "No" symbol on it, which might mean it doesn't work somehow. I installed pavucontrol, and noticed that no second sound card shows up in there. When connecting, the syslog says:

      Jun 20 09:38:46 yuna kernel: [40144.553431] usb 2-1.2: new full-speed USB device number 11 using ehci_hcd
      Jun 20 09:38:46 yuna kernel: [40144.650609] input: C-Media USB Headphone Set as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.3/input/input20
      Jun 20 09:38:46 yuna kernel: [40144.650895] generic-usb 0003:0D8C:000C.000B: input,hidraw0: USB HID v1.00 Device [C-Media USB Headphone Set ] on usb-0000:00:1d.0-1.2/input3
      Jun 20 09:38:46 yuna mtp-probe: checking bus 2, device 11: "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2"
      Jun 20 09:38:46 yuna mtp-probe: bus: 2, device: 11 was not an MTP device
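
    If pavucontrol never shows a second card, it is worth checking whether PulseAudio sees a second sink at all from the command line. A minimal diagnostic sketch, assuming PulseAudio is the sound server (the sink index 1 below is hypothetical):

      # list sinks -- does a second one appear for the USB card?
      pactl list short sinks
      # make the USB sink the default for new streams
      pactl set-default-sink 1
      # move streams that are already playing over to it as well
      for input in $(pactl list short sink-inputs | cut -f1); do
          pactl move-sink-input "$input" 1
      done

    If no second sink ever shows up, the problem sits below PulseAudio (ALSA not creating a playback device for the card), which would match pavucontrol showing nothing.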

    Read the article

  • Nginx issue with two web nodes

    - by HTF
    I'm running a Wordpress website with Nginx and Memcached. I have simple DNS round-robin balancing, with A records pointing to both web servers. I've noticed the following entries in the access logs of both web servers:

      192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
      192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
      192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
      192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
      192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000

    I've configured the W3 Total Cache plugin for Wordpress to point to the loopback address (127.0.0.1:11211) on each Wordpress installation. Is this because the web server is trying to access content that is cached on the other web server? Should I add the IPs of both web servers to the W3 plugin on each website (192.168.1.:11211, 192.168.1.2:11211)? I'm not sure if this is related to Memcached or maybe a configuration issue on the server itself? Regards
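
    For what it's worth, a 400 with no request line ("-") and zero bytes usually means something opened a connection and closed it without sending anything (monitoring probes, keepalive pre-connects), so it may not involve Memcached at all. One way to rule Memcached in or out is to test from each node what it can actually reach; a hedged sketch, assuming the peer server is 192.168.1.2:

      # from web node 1: the loopback instance should answer; the peer
      # should refuse if each W3TC install really talks to 127.0.0.1 only
      echo stats | nc -w 2 127.0.0.1 11211 | head -n 3
      echo stats | nc -w 2 192.168.1.2 11211 | head -n 3

    Note that with loopback-only caches each node keeps an independent cache, so neither server ever queries the other; listing both servers' IPs in W3TC on both installs would instead shard a single cache across the pair.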

    Read the article

  • Tomcat/Hibernate Problem "SEVERE: Error listenerStart"

    - by JSteve
    I downloaded a working example of Hibernate (with Maven) and installed it on my Tomcat, and it worked. Then I created a new web project in MyEclipse, added Hibernate support and moved all source files (no jars) to this new project, fixing packages/paths wherever necessary. My servlets are responding correctly, but when I add the "Listener" in web.xml, Tomcat returns the error "Error listenerStart" on startup and my application doesn't start. I've carefully checked all packages, paths and classes, and they look good. The error message also doesn't say anything more than those two words. Here is the complete Tomcat startup log:

      17-Jun-2010 12:13:37 PM org.apache.coyote.http11.Http11Protocol init
      INFO: Initializing Coyote HTTP/1.1 on http-8810
      17-Jun-2010 12:13:37 PM org.apache.catalina.startup.Catalina load
      INFO: Initialization processed in 293 ms
      17-Jun-2010 12:13:37 PM org.apache.catalina.core.StandardService start
      INFO: Starting service Catalina
      17-Jun-2010 12:13:37 PM org.apache.catalina.core.StandardEngine start
      INFO: Starting Servlet Engine: Apache Tomcat/6.0.20
      17-Jun-2010 12:13:37 PM org.apache.catalina.core.StandardContext start
      SEVERE: Error listenerStart
      17-Jun-2010 12:13:37 PM org.apache.catalina.core.StandardContext start
      SEVERE: Context [/addressbook] startup failed due to previous errors
      17-Jun-2010 12:13:37 PM org.apache.coyote.http11.Http11Protocol start
      INFO: Starting Coyote HTTP/1.1 on http-8810
      17-Jun-2010 12:13:37 PM org.apache.jk.common.ChannelSocket init
      INFO: JK: ajp13 listening on /0.0.0.0:8009
      17-Jun-2010 12:13:37 PM org.apache.jk.server.JkMain start
      INFO: Jk running ID=0 time=0/22 config=null
      17-Jun-2010 12:13:37 PM org.apache.catalina.startup.Catalina start
      INFO: Server startup in 446 ms

    My web.xml is:

      <?xml version="1.0" encoding="UTF-8"?>
      <web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
        <listener>
          <listener-class>addressbook.util.SessionFactoryInitializer</listener-class>
        </listener>
        <filter>
          <filter-name>Session Interceptor</filter-name>
          <filter-class>addressbook.util.SessionInterceptor</filter-class>
        </filter>
        <filter-mapping>
          <filter-name>Session Interceptor</filter-name>
          <servlet-name>Country Manager</servlet-name>
        </filter-mapping>
        <servlet>
          <servlet-name>Country Manager</servlet-name>
          <servlet-class>addressbook.managers.CountryManagerServlet</servlet-class>
        </servlet>
        <servlet-mapping>
          <servlet-name>Country Manager</servlet-name>
          <url-pattern>/countrymanager</url-pattern>
        </servlet-mapping>
      </web-app>

    Can somebody either help me figure out what I am doing wrong, or point me to some resource where I may find a precise solution to my problem?
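
    "SEVERE: Error listenerStart" swallows the listener's real exception by default; Tomcat 6 will print the underlying stack trace if per-webapp JULI logging is turned up. A hedged sketch, assuming an exploded deployment under $CATALINA_HOME/webapps/addressbook (the path is an assumption):

      # give the webapp its own logging config so the listener's stack
      # trace is written out on the next startup
      cat > $CATALINA_HOME/webapps/addressbook/WEB-INF/classes/logging.properties <<'EOF'
      handlers = java.util.logging.ConsoleHandler
      java.util.logging.ConsoleHandler.level = FINE
      java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
      org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = FINE
      EOF

    With a listener that builds a Hibernate SessionFactory, the trace frequently turns out to be a NoClassDefFoundError for a jar the Maven build shipped but the new project never copied into WEB-INF/lib.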

    Read the article

  • Parsing email with Python

    - by Manuel Ceron
    I'm writing a Python script to process emails returned from Procmail. As suggested in this question, I'm using the following Procmail config:

      :0:
      |$HOME/process_mail.py

    My process_mail.py script is receiving an email via stdin like this:

      From hostname Tue Jun 15 21:43:30 2010
      Received: (qmail 8580 invoked from network); 15 Jun 2010 21:43:22 -0400
      Received: from mail-fx0-f44.google.com (209.85.161.44) by ip-73-187-35-131.ip.secureserver.net with SMTP; 15 Jun 2010 21:43:22 -0400
      Received: by fxm19 with SMTP id 19so170709fxm.3 for <[email protected]>; Tue, 15 Jun 2010 18:47:33 -0700 (PDT)
      MIME-Version: 1.0
      Received: by 10.103.84.1 with SMTP id m1mr2774225mul.26.1276652853684; Tue, 15 Jun 2010 18:47:33 -0700 (PDT)
      Received: by 10.123.143.4 with HTTP; Tue, 15 Jun 2010 18:47:33 -0700 (PDT)
      Date: Tue, 15 Jun 2010 20:47:33 -0500
      Message-ID: <[email protected]>
      Subject: TEST 12
      From: Full Name <[email protected]>
      To: [email protected]
      Content-Type: text/plain; charset=ISO-8859-1

      ONE TWO THREE

    I'm trying to parse the message in this way:

      >>> import email
      >>> msg = email.message_from_string(full_message)

    I want to get message fields like 'From', 'To' and 'Subject'. However, the message object does not contain any of these fields. What am I doing wrong?
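
    For what it's worth, email.message_from_string copes with the leading mbox "From " envelope line in my experience, so it's worth testing the parse outside Procmail first. A minimal sketch (message.txt is a hypothetical file holding the raw message exactly as Procmail delivers it):

      python - message.txt <<'PYEOF'
      import email, sys
      raw = open(sys.argv[1]).read()
      msg = email.message_from_string(raw)
      # if these print None, the headers never reached the parser intact
      print 'From:    %s' % msg['From']
      print 'To:      %s' % msg['To']
      print 'Subject: %s' % msg['Subject']
      PYEOF

    If that prints the headers correctly, the problem is more likely in how process_mail.py reads stdin (e.g. reading line by line and mangling the header/body separator) than in the email module itself.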

    Read the article

  • Snow Leopard mounted directory changes permissions sporadically

    - by Galen
    I have a smb mounted directory: /Volumes/myshare. This was mounted via Finder's "Connect to Server..." with smb://myservername/myshare. Everything good so far. However, when I try to access the directory via PHP (running under Apache), it fails with permission denied about 10% of the time. By this I mean that repeated accesses to my page sometimes result in a failure. My PHP page looks like:

      <?php
      $cmd = "ls -la /Volumes/ 2>&1";
      exec($cmd, $execOut, $exitCode);
      echo "<PRE>EXIT CODE = $exitCode<BR/>";
      foreach ($execOut as $line) {
          echo "$line <BR/>";
      }
      echo "</PRE>";
      ?>

    When it succeeds it looks like:

      EXIT CODE = 0
      total 40
      drwxrwxrwt@  4 root  admin    136 Jun 14 12:34 .
      drwxrwxr-t  30 root  admin   1088 Jun  4 13:09 ..
      drwx------   1 galen staff  16384 Jun 14 09:28 myshare
      lrwxr-xr-x   1 root  admin      1 Jun 11 16:05 galenhd -> /

    When it fails it looks like:

      EXIT CODE = 1
      ls: myshare: Permission denied
      total 8
      drwxrwxrwt@  4 root  admin    136 Jun 14 12:34 .
      drwxrwxr-t  30 root  admin   1088 Jun  4 13:09 ..
      lrwxr-xr-x   1 root  admin      1 Jun 11 16:05 galenhd -> /

    OTHER INFO: I'm working with the PHP (5.3.1) and Apache server that come out of the box with Snow Leopard. Also, if I write a PHP script that loops and retries the "ls -la.." from the command line, it doesn't seem to fail. Nothing is changing about the code and/or filesystem between successes and failures, so this appears to be a truly intermittent failure. This is driving me crazy. Anyone have any idea what might be going on? Thanks, Galen
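
    Since the share was mounted from a Finder session as user galen (note the drwx------ galen ownership), while Apache's PHP runs as _www, the two are not touching the mount with the same credentials, which would also explain why the command-line loop never fails. A quick hedged check, assuming the stock _www user:

      # run the same listing as the Apache user; if this fails too,
      # the problem is the mount's ownership, not PHP
      sudo -u _www ls -la /Volumes/myshare

      # note who mounted the share and with what options
      mount | grep myshare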

    Read the article

  • Is it safe to delete rotated MySQL binary logs?

    - by Milan Babuškov
    I have a MySQL server with binary logging active. Once a day the log file is "rotated", i.e. MySQL seems to stop writing to it and creates a new log file. For example, I currently have these files in /var/lib/mysql:

      -rw-rw---- 1 mysql mysql 10485760 Jun 7 09:26 ibdata1
      -rw-rw---- 1 mysql mysql  5242880 Jun 7 09:26 ib_logfile0
      -rw-rw---- 1 mysql mysql  5242880 Jun 2 15:20 ib_logfile1
      -rw-rw---- 1 mysql mysql  1916844 Jun 6 09:20 mybinlog.000004
      -rw-rw---- 1 mysql mysql 61112500 Jun 7 09:26 mybinlog.000005
      -rw-rw---- 1 mysql mysql 15609789 Jun 7 13:57 mybinlog.000006
      -rw-rw---- 1 mysql mysql       54 Jun 7 09:26 mybinlog.index

    and mybinlog.000006 is growing. Can I simply take mybinlog.000004 and mybinlog.000005, zip them up and transfer them to another server, or do I need to do something else first? What info is stored in mybinlog.index? Only the info about the latest binary log?

    UPDATE: I understand I can delete the logs with PURGE BINARY LOGS, which updates the mybinlog.index file. However, I need to transfer the logs to another computer before deleting them (I test whether the backup is valid on another machine). To reduce the transfer size, I wish to bzip2 the files. What will PURGE BINARY LOGS do if the log files are not "there" anymore?
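
    mybinlog.index is just the list of binlog file names the server currently knows about, and PURGE BINARY LOGS both deletes the files and rewrites that index, so it is safest to let MySQL do the deleting and only copy/compress beforehand. A hedged sketch of that order of operations (file names follow the listing above; /backup/staging is a hypothetical destination):

      mysql -e "FLUSH LOGS"     # close the current binlog, start .000007
      cp /var/lib/mysql/mybinlog.00000[45] /backup/staging/
      bzip2 /backup/staging/mybinlog.00000[45]
      # only now let MySQL delete the originals and update mybinlog.index
      mysql -e "PURGE BINARY LOGS TO 'mybinlog.000006'"

    Removing the files by hand first leaves the index out of step with reality, which is exactly the situation the UPDATE paragraph worries about; purging via the server instead of rm avoids it entirely.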

    Read the article

  • Types of ER Diagrams

    - by syrion
    I'm currently taking a class for database design, and we're using the ER diagram style designed by Peter Chen. I have a couple of problems with this style: Keys in relationships don't seem realistic. In practice, synthetic keys like "orderid" seem to be used in almost all tables, including association tables, but the Chen style diagrams heavily favor (table1key, table2key) compound keys. There is no notation for datatype. The diamond shape for associations is horrible, and produces a cluttered diagram. In general, it just seems hard to capture some relationships with the Chen system. What ERD style, if any, do you use? What has been the most popular in your workplaces? What tools have you used, or do you use, to create these diagrams?

    Read the article

  • Oracle 10g for Windows does not start up on system boot

    - by Mike Dimmick
    We have an Oracle 10g Enterprise Edition installation (10.2.0.1.0) on a Windows Server 2003 virtual machine. It was initially created with Virtual Server 2005 R2 SP1 but has now been migrated to Windows Server 2008 Hyper-V. The services start on system boot, but the instance does not start up. This problem was actually occurring on Virtual Server after a migration from one server to another, but I managed to fix it then with:

      oradim -edit -sid ORCL -startmode auto

    However, this now has no effect. oradim.log (in %OracleHome%\database\oradim.log) says:

      Thu Jun 10 14:14:48 2010
      C:\oracle\product\10.2.0\db_3\bin\oradim.exe -startup -sid orcl -usrpwd * -log oradim.log -nocheck 0
      Thu Jun 10 14:14:48 2010
      ORA-12560: TNS:protocol adapter error

    sqlnet.log in the same folder has:

      Fatal NI connect error 12560, connecting to:
      (DESCRIPTION=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle)(ARGV0=oracleorcl)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=orcl)(CID=(PROGRAM=C:\oracle\product\10.2.0\db_3\bin\oradim.exe)(HOST=ORACLE-VM)(USER=SYSTEM))))

      VERSION INFORMATION:
      TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
      Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 - Production
      Time: 10-JUN-2010 14:14:48
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12560
        TNS-12560: TNS:protocol adapter error
        ns secondary err code: 0
        nt main err code: 530
        TNS-00530: Protocol adapter error
        nt secondary err code: 2
        nt OS err code: 0

    The ORA_ORCL_AUTOSTART registry value is set to TRUE, so it should be auto-starting - and you can see that it's trying to. The problem also occurs when stopping and restarting the OracleServiceORCL service. I've enabled SQL*Net tracing which shows:

      [10-JUN-2010 15:09:33.919] snlpcss: entry
      [10-JUN-2010 15:09:34.419] snlpcss: Unable to spawn Oracle oracle (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) orcl, error 2.
      [10-JUN-2010 15:09:34.419] snlpcall: exit

    On a hunch that error 2 is Windows error 2 (file not found) I tried restarting the service with Process Monitor watching oradim.exe, but this appears to delay things just enough that it always works. Right now I have a horrible hack where I've created a Scheduled Task to run oradim -startup -sid ORCL when the Administrator account logs on, and set the VM to auto-logon. I'd still like to work out why it's not working.

    Read the article

  • Apache Logs - Not Showing Requested URL or User IP

    - by iarfhlaith
    Hey all, I'm having a problem with a server that keeps falling over. Looking through the Apache error logs it appears to come from a rogue PHP script. I'm trying to track this down using Apache's error_log and access_log, but the server log format isn't giving me the detail I need. I suspect the log format isn't sufficient, but I've reviewed the Apache documentation and I've included the switches that I think I need to see. Here's my LogFormat configuration in the httpd.conf file:

      LogFormat "%h %l %u %t \"%r\" %s %b %U %q %T \"%{Referer}i\" \"%{User-Agent}i\"" extended
      CustomLog logs/access_log extended

    Using the %U %q %T switches I expected to see the requested URL, query string, and the time it took to serve the request, but I'm not seeing any of this information when I tail the log. Here's an example:

      127.0.0.1 - - [01/Jun/2010:14:12:04 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"
      127.0.0.1 - - [01/Jun/2010:14:12:05 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"
      127.0.0.1 - - [01/Jun/2010:14:12:06 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"
      127.0.0.1 - - [01/Jun/2010:14:12:07 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"
      127.0.0.1 - - [01/Jun/2010:14:12:08 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"
      127.0.0.1 - - [01/Jun/2010:14:12:09 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"

    Have I made a mistake in configuring the LogFormat, or is it something else? Also, each request appears to come from the localhost. How come it's not giving me the remote user's IP address? Thanks, Iarfhlaith
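
    The lines shown are Apache's own wakeup requests (note the "internal dummy connection" User-Agent), which is why they all come from 127.0.0.1 with no real URL; the format itself is working ("OPTIONS *" is %r, the lone "*" is %U, the "0" is %T). Filtering them out leaves only real client traffic; a hedged sketch for Apache 2.2:

      # tag loopback requests and log only requests that lack the tag --
      # replace the existing CustomLog line in httpd.conf with:
      #
      #   SetEnvIf Remote_Addr "127\.0\.0\.1" internal_dummy
      #   CustomLog logs/access_log extended env=!internal_dummy
      #
      # then validate and reload:
      apachectl configtest && apachectl graceful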

    Read the article

  • Why are SMART error rates going down?

    - by Jeff Shattock
    I have a hard drive that's part of a Linux software RAID5 array. SMART has reported that its multi_zone_error_rate was 0, then 1, then 3. So I figured I had better start backing up more frequently and prepare to replace the drive. Now, today, the multi_zone_error_rate of that very same drive is back down to 1. It seems that 2 errors unhappened while I wasn't looking. I've also seen similar behaviour by inspecting the syslog on the server:

      Jun 7 21:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
      Jun 7 21:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
      Jun 7 21:01:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
      Jun 8 02:31:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
      Jun 8 03:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
      Jun 8 03:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200

    These are raw values, not the human-useful values that smartctl -a produces, but the behaviour is similar: error rates changing, then undoing the change. None of these are the drive that had the multi_zone weirdness. I haven't seen any problems from the RAID; its most recent scrub (< 24 hours ago) came back totally clean. The only thing I can think of is that the SMART reporting circuitry on the drive isn't working properly all the time. The cables are in tight on the drive and board. What's going on here?
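
    One thing worth checking: smartd's "changed from 200 to 100" lines track the normalized attribute value, not the raw error count, and on many drives the normalized Seek_Error_Rate reportedly oscillates by design without any raw errors accruing. Comparing normalized and raw values side by side makes that visible; a hedged sketch, assuming smartmontools is installed:

      for d in sdc sde sdg; do
          echo "== /dev/$d =="
          # 4th column is the normalized value, last column the raw count
          smartctl -A /dev/$d | egrep 'Seek_Error_Rate|Multi_Zone_Error_Rate'
      done

    If the raw counts stay at 0 while the normalized values bounce around, the drive is almost certainly fine and it's just the vendor's normalization being noisy.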

    Read the article

  • Openfiler crashing without cause or leaving any log messages

    - by user44725
    So my Linux machine keeps crashing, without so much as a bye or a leave. I've tried and tried and failed again to work out what's happening. Any help would be much appreciated.

      Linux chai 2.6.29.6-0.24.smp.gcc3.4.x86_64 #1 SMP Tue Mar 9 05:06:08 GMT 2010 x86_64 x86_64 x86_64 GNU/Linux
      Openfiler

    Here is what the /var/log/messages file says at the time of the latest crash. Nothing that unusual - just greg logging in and out via Samba. You'll notice there is a cron running for root every minute - ignore this, it isn't the issue either; it was some check I've been doing to discover the problem.

      Jun 2 10:32:01 chai crond(pam_unix)[16529]: session closed for user root
      Jun 2 10:32:49 chai samba(pam_unix)[15454]: session opened for user greg by (uid=0)
      Jun 2 10:33:01 chai crond(pam_unix)[16537]: session opened for user root by (uid=0)
      Jun 2 10:33:04 chai crond(pam_unix)[16537]: session closed for user root
      Jun 2 10:41:40 chai syslogd 1.4.1: restart.
      Jun 2 10:41:43 chai syslog: syslogd startup succeeded

    That restart was called manually, by hand - by clicking the restart button on the box. So basically messages isn't revealing many secrets. dmesg only shows from startup. If there is any output I should paste, just say when and where and it'll be done. Thanks for your help! Tim
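
    When a box dies without flushing anything to disk, the last kernel messages usually only survive if they are sent somewhere else. A hedged sketch using netconsole, with 192.168.1.50 standing in for any other machine on the LAN that can run a listener:

      # on the crashing host: stream kernel messages over UDP
      modprobe netconsole netconsole=@/,6666@192.168.1.50/

      # on the receiving machine: capture whatever arrives, including a
      # panic or oops that never reaches the local disk
      nc -u -l 6666 | tee openfiler-console.log    # (nc -u -l -p 6666 with traditional netcat)

    Silent hard resets with nothing on a remote console tend to point at hardware (RAM, PSU, overheating) rather than software, in which case memtest86+ and checking sensors would be the next steps.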

    Read the article

  • Postfix not working

    - by user1488723
    A while ago I installed the Postfix mail server on my Ubuntu 10.04 VPS. At the time it was working well, but now it has just stopped working. I was trying to enable SASL authentication, and somewhere it must have gone really wrong. I've studied the Postfix main.cf and done everything in an orderly fashion to ensure that nothing is wrong. I also have Dovecot installed and configured dovecot.conf to run with Postfix. If I try to do telnet localhost 25 while logged in on the server I just get:

      Connection closed by foreign host.

    If I try to do telnet mail.example.com 25 "from the outside" I get:

      telnet: Unable to connect to remote host: No route to host

    And when I check the server log after the failed attempts I see this:

      Jun 28 15:49:31 msv postfix/smtpd[11839]: initializing the server-side TLS engine
      Jun 28 15:49:31 msv postfix/smtpd[11839]: connect from localhost.localdomain[127.0.0.1]
      Jun 28 15:49:31 msv postfix/smtpd[11839]: warning: SASL: Connect to /var/spool/postfix/private/auth failed: Connection refused
      Jun 28 15:49:31 msv postfix/smtpd[11839]: fatal: no SASL authentication mechanisms
      Jun 28 15:49:32 msv postfix/master[11598]: warning: process /usr/lib/postfix/smtpd pid 11839 exit status 1
      Jun 28 15:49:32 msv postfix/master[11598]: warning: /usr/lib/postfix/smtpd: bad command startup -- throttling

    main.cf looks like this:

      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      biff = no
      append_dot_mydomain = no
      delay_warning_time = 4h
      myhostname = mail.example.com
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      mydomain = example.com
      myorigin = $mydomain
      mydestination = $mydomain
      relayhost =
      mynetworks = 127.0.0.1
      mailbox_command = procmail -a "$EXTENSION"
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all
      smtpd_use_tls = yes
      smtpd_tls_loglevel = 2
      smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
      smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
      smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
      smtpd_sasl_auth_enable = yes
      smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
      smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks
      smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination
      broken_sasl_auth_clients = yes
      smtpd_sasl_type = dovecot
      smtpd_sasl_path = /var/spool/postfix/private/auth
      smtpd_sasl_security_options = noanonymous

    dovecot.conf looks like this:

      protocols = imap imaps
      disable_plaintext_auth = no
      log_timestamp = "%b %d %H:%M:%S "
      ssl = yes
      ssl_cert_file = /etc/postfix/ssl/smtpd.crt
      ssl_key_file = /etc/postfix/ssl/smtpd.key
      mail_location = maildir:~/mail
      mail_access_groups = mail
      auth_username_chars = abcdefghijklmnopqrstuvwxyz
      protocol imap {
        imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
      }
      auth default {
        mechanisms = plain login
        passdb pam {
        }
        userdb passwd {
        }
        socket listen {
          client {
            path = /var/spool/postfix/private/auth
            user = postfix
            group = postfix
            mode = 0660
          }
        }
      }
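
    The key line is "Connect to /var/spool/postfix/private/auth failed: Connection refused": smtpd can't reach the Dovecot auth socket, bails out with "no SASL authentication mechanisms", and master throttles it, which is exactly why local connections get closed immediately. A hedged checklist:

      # does the socket exist, with postfix-accessible ownership?
      ls -l /var/spool/postfix/private/auth

      # if not, (re)start dovecot, which owns that listener, and re-check
      /etc/init.d/dovecot restart
      ls -l /var/spool/postfix/private/auth

      # then confirm postfix comes up clean and answers locally
      postfix check && /etc/init.d/postfix restart
      telnet localhost 25

    The external "No route to host" is a separate issue (firewall or VPS provider blocking port 25) and is worth testing again only after the local banner appears.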

    Read the article

  • Spam or exchange issue?

    - by John
    I am getting an error message about an unknown user on my domain. I would like to know: is this just a phishing/spam email, or was it really sent from our domain? I have changed our domain name to OURDOMAIN.COM. I have Exchange 2010 installed. The body of the email is:

      Delivery has failed to these recipients or distribution lists:

      sales
      The recipient's e-mail address was not found in the recipient's e-mail system. Microsoft Exchange will not try to redeliver this message for you. Please check the e-mail address and try resending this message, or provide the following diagnostic text to your system administrator.

      Sent by Microsoft Exchange Server 2007

    Diagnostic information for administrators:

      Generating server: murraygroup.local
      [email protected]
      #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##

    Original message headers:

      Received: from ironport.mih.co.uk (10.10.29.9) by mih-exca-01.murraygroup.local (10.10.29.133) with Microsoft SMTP Server id 8.3.106.1; Fri, 29 Jun 2012 12:36:12 +0100
      Received: from glamf04.netintelligence.com (HELO mailfilter.iomart.com) ([62.128.193.114]) by ironport.mih.co.uk with SMTP; 29 Jun 2012 12:42:48 +0100
      Received: from glamta4.netintelligence.com(localhost.localdomain[127.0.0.1]) by mailfilter.iomart.com ; Fri, 29 Jun 2012 12:37:18 BST
      Received: from [195.43.137.66] ([195.43.137.66]) by glamta4.netintelligence.com (8.13.1/8.12.8) with ESMTP id q5TBbH4j022142 for <[email protected]>; Fri, 29 Jun 2012 12:37:18 +0100
      Date: Fri, 29 Jun 2012 12:37:17 +0100
      Message-ID: <20120629145229.4C2A817231D8A7958044@SONW>
      From: Ines Hampton <[email protected]>
      To: sales <[email protected]>
      Reply-To: Marguerite Soto <[email protected]>
      Subject: User sales
      MIME-Version: 1.0
      Content-Type: text/plain; charset="utf-8"
      Content-Transfer-Encoding: 7bit
      Return-Path: [email protected]

      Reporting-MTA: dns;murraygroup.local
      Received-From-MTA: dns;ironport.mih.co.uk
      Arrival-Date: Fri, 29 Jun 2012 11:36:12 +0000

      Final-Recipient: rfc822;[email protected]
      Action: failed
      Status: 5.1.1
      Diagnostic-Code: smtp;550 5.1.1 RESOLVER.ADR.RecipNotFound; not found
      X-Display-Name: sales
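
    Read bottom-up, the earliest Received header shows the original message entering from an outside host (195.43.137.66) with a forged OURDOMAIN.COM sender; your Exchange server then generated this bounce to the forged address, so it is backscatter from spam, not mail that left your domain. Publishing an SPF record makes such forgeries easier for receivers to reject; a hedged check with dig (record values below are placeholders, not recommendations for your DNS):

      # does the domain publish an SPF policy at all, and is
      # 195.43.137.66 anywhere in it?
      dig +short TXT OURDOMAIN.COM

      # a typical record shape, once published, would look like:
      #   "v=spf1 mx ip4:203.0.113.10 -all"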

    Read the article

  • Repeated installation of malicious software to do outbound DDOS attack [duplicate]

    - by user224294
    This question already has an answer here: How do I deal with a compromised server? (12 answers)

    We have an Ubuntu Virtual Private Server hosted by a Canadian company. Our VPS was abused to run an "outbound DDOS attack", as reported by the server security team. There are 4 files in /boot that look like iptables files; please note the capital letters "I" and "L".

      VPS:/boot# ls -lha
      total 1.8M
      drwx------  2 root root 4.0K Jun  3 09:25 .
      drwxr-xr-x 22 root root 4.0K Jun  3 09:25 ..
      -r----x--x  1 root root 1.1M Jun  3 09:25 .IptabLes
      -r----x--x  1 root root 706K Jun  3 09:23 .IptabLex
      -r----x--x  1 root root   33 Jun  3 09:25 IptabLes
      -r----x--x  1 root root   33 Jun  3 09:23 IptabLex

    We deleted them. But after a few hours, they appeared again and the attack resumed. We deleted them again. They resurfaced again, and so on and so forth. So finally we had to disable our VPS. Please let us know how we can find the malicious script somewhere on the VPS that automatically reinstalls the attacking software. Thanks.
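
    These file names match the known IptabLes/IptabLex DDoS malware, which reinstalls itself from a persistence hook, so deleting the payload alone never lasts; as the linked duplicate says, the only safe fix is a rebuild from clean media. For locating the respawn mechanism before wiping, a hedged sketch:

      # anything still running from the (deleted) files?
      lsof -nP | grep -i iptabl

      # persistence spots this family is reported to use
      grep -ri iptabl /etc/rc.local /etc/rc*.d /etc/init.d /etc/cron* /var/spool/cron 2>/dev/null

      # other files dropped around the same time as the payload
      find / -xdev -type f -newer /boot/IptabLes 2>/dev/null | head

      # immutable flags can also stop deletions from sticking
      lsattr /boot/.IptabLes /boot/.IptabLex 2>/dev/null

    Treat any findings as forensics only; credentials and SSH keys on the box should be considered compromised either way.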

    Read the article

  • Iptables blocking mysql port 3306

    - by valmar
    I got a Tomcat server running a web application that must access a MySQL server via Hibernate on the same machine. So, I added a rule for port 3306 to my iptables script, but Tomcat cannot connect to the MySQL server for some reason. If I reset all iptables rules, Tomcat can connect to the MySQL server again. All the other iptables rules work perfectly though. What's wrong? Here is my script:

      iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
      iptables -A INPUT -p tcp --dport 24 -j ACCEPT
      iptables -A INPUT -p tcp --dport 80 -j ACCEPT
      iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
      iptables -A INPUT -p tcp -s localhost --dport 8009 -m state --state ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -p tcp -d localhost --dport 8009 -j ACCEPT
      iptables -A INPUT -p tcp -s localhost --dport 3306 -j ACCEPT
      iptables -A OUTPUT -p tcp -d localhost --dport 3306 -j ACCEPT
      iptables -A INPUT -p tcp --dport 443 -j ACCEPT
      iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
      iptables -A INPUT -p tcp --dport 25 -m state --state ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -p tcp --dport 25 -j ACCEPT
      iptables -A INPUT -p tcp --dport 587 -m state --state ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -p tcp --dport 587 -j ACCEPT
      iptables -A INPUT -p tcp --dport 465 -m state --state ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -p tcp --dport 465 -j ACCEPT
      iptables -A INPUT -p tcp --dport 110 -m state --state ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -p tcp --dport 110 -j ACCEPT
      iptables -A INPUT -p tcp --dport 995 -m state --state ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -p tcp --dport 995 -j ACCEPT
      iptables -A INPUT -p tcp --dport 143 -m state --state ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -p tcp --dport 143 -j ACCEPT
      iptables -A INPUT -p tcp --dport 993 -m state --state ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -p tcp --dport 993 -j ACCEPT
      iptables -A INPUT -j DROP

    My /etc/hosts file:

      # nameserver config
      # IPv4
      127.0.0.1 localhost
      46.4.7.93 mydomain.com
      46.4.7.93 Ubuntu-1004-lucid-64-minimal
      46.4.7.93 horst
      # IPv6
      ::1     ip6-localhost ip6-loopback
      fe00::0 ip6-localnet
      ff00::0 ip6-mcastprefix
      ff02::1 ip6-allnodes
      ff02::2 ip6-allrouters
      ff02::3 ip6-allhosts

    Having a look into the iptables logs gives me this:

      Jun 22 16:52:43 Ubuntu-1004-lucid-64-minimal kernel: [ 435.111780] denied-input IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=52432 DF PROTO=TCP SPT=56108 DPT=8009 WINDOW=32792 RES=0x00 SYN URGP=0
      Jun 22 16:52:46 Ubuntu-1004-lucid-64-minimal kernel: [ 438.110555] denied-input IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=52433 DF PROTO=TCP SPT=56108 DPT=8009 WINDOW=32792 RES=0x00 SYN URGP=0
      Jun 22 16:52:46 Ubuntu-1004-lucid-64-minimal kernel: [ 438.231954] denied-input IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=48020 DF PROTO=TCP SPT=56109 DPT=8009 WINDOW=32792 RES=0x00 SYN URGP=0
      Jun 22 16:52:49 Ubuntu-1004-lucid-64-minimal kernel: [ 441.229778] denied-input IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=48021 DF PROTO=TCP SPT=56109 DPT=8009 WINDOW=32792 RES=0x00 SYN URGP=0
      Jun 22 16:53:57 Ubuntu-1004-lucid-64-minimal kernel: [ 508.731839] denied-input IN=eth0 OUT= MAC=6c:62:6d:85:bf:0e:00:26:88:75:dc:01:08:00 SRC=78.92.97.67 DST=46.4.7.93 LEN=64 TOS=0x00 PREC=0x00 TTL=122 ID=23053 DF PROTO=TCP SPT=1672 DPT=445 WINDOW=65535 RES=0x00 SYN URGP=0
      Jun 22 16:53:59 Ubuntu-1004-lucid-64-minimal kernel: [ 511.625038] denied-input IN=eth0 OUT= MAC=6c:62:6d:85:bf:0e:00:26:88:75:dc:01:08:00 SRC=78.92.97.67 DST=46.4.7.93 LEN=64 TOS=0x00 PREC=0x00 TTL=122 ID=23547 DF PROTO=TCP SPT=1672 DPT=445 WINDOW=65535 RES=0x00 SYN URGP=0
      Jun 22 16:54:22 Ubuntu-1004-lucid-64-minimal kernel: [ 533.981995] denied-input IN=eth0 OUT= MAC=6c:62:6d:85:bf:0e:00:26:88:75:dc:01:08:00 SRC=27.254.39.16 DST=46.4.7.93 LEN=48 TOS=0x00 PREC=0x00 TTL=117 ID=6549 PROTO=TCP SPT=6005 DPT=33796 WINDOW=64240 RES=0x00 ACK SYN URGP=0
      Jun 22 16:54:44 Ubuntu-1004-lucid-64-minimal kernel: [ 556.297038] denied-input IN=eth0 OUT= MAC=6c:62:6d:85:bf:0e:00:26:88:75:dc:01:08:00 SRC=94.78.93.41 DST=46.4.7.93 LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=7712 PROTO=TCP SPT=57598 DPT=445 WINDOW=512 RES=0x00 SYN URGP=0
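
    The denied-input lines are all loopback traffic (IN=lo) dropped on the way to port 8009, and the same can happen to MySQL: matching on -s localhost or requiring ESTABLISHED misses the first SYN of a new local connection, and misses entirely when the client connects to the machine's public address (46.4.7.93, per /etc/hosts above) rather than 127.0.0.1. The usual fix is to trust the loopback interface wholesale; a hedged sketch:

      # accept all loopback traffic before any other rule
      iptables -I INPUT 1 -i lo -j ACCEPT
      iptables -I OUTPUT 1 -o lo -j ACCEPT

      # if Hibernate's JDBC URL points at the public address rather than
      # 127.0.0.1, allow that local-to-local case explicitly as well
      iptables -I INPUT 2 -p tcp -s 46.4.7.93 -d 46.4.7.93 --dport 3306 -j ACCEPT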

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process:

    1. Putting your database in to a source control system, and,
    2. Running a continuous integration process, each time changes are checked in.

    However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:

    1. Putting some actual integration tests in to the CI process (otherwise, they don't really do much, do they!?),
    2. Deploying the database changes with a managed, automated approach,
    3. Monitoring what you've just put live, to make sure you haven't broken anything.

    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system.

    In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in an heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:

      Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
      Versioning Databases – Branching and Merging, by Scott Allen

    In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: "Continuous Delivery is about keeping your application in a state where it is always able to deploy into production." I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure. That is possible (and what Martin calls Continuous Deployment). However, again, that's more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!

    Integration Testing

    Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found on http://www.red-gate.com/products/sql-development/sql-test/, or can be downloaded separately from www.tsqlt.org – though I'll provide a step-by-step guide below for setting this up.

    Getting tSQLt set up via SQL Test

    Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu. Clicking this link will give you the basic SQL Test dialogue. As yet, though we've installed the SQL Test product, we haven't yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link.

    In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option. SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database; however, we won't be using them in this particular simple example. Once you've clicked on the OK button, the changes described in the dialogue will be made to your database. We've now installed the framework. However, we haven't actually created any tests, so this will be the next step. But, before we proceed, we've made an update to our database, so we should again check this in to source control, adding comments as required. Also worth a quick check that your build still runs with the new additions! (And a quick check of the RedGateAppCI database shows that the changes have been made.)

    Creating and Testing a Unit Test

    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

    Adding Reference Data to our Database

    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). First of all I need to add some data in to my solitary table – this can be done a number of ways, but I'll do this in SSMS for simplicity. I then add some reference data to my table. Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first "id" row. Then click on "Set Primary Key". NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and select the linking option. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close". We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won't show screenshots for the GitHub side of things – it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).

    An interesting point to note here: when these changes are committed in SQL Source Control (right-click database and select "Commit Changes to Source Control.."), the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mention, the test I'm going to use here is a very simple one – are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) – just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article. In SSMS select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test, that needs filling in.

    We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below. This needs to be copied and pasted in to the query window, to replace the default given by tSQLt:

      -- Basic email check test
      ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
      AS
      BEGIN
          SET NOCOUNT ON

          DECLARE @Output VARCHAR(MAX)
          SET @Output = ''

          SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
          FROM dbo.Email
          WHERE Email NOT LIKE '%_@__%.__%'

          IF @Output > ''
          BEGIN
              SET @Output = CHAR(13) + CHAR(10) + @Output
              EXEC tSQLt.Fail @Output
          END
      END;

    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before and you'll see the test fail. Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database shows that the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process – i.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows: sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

      <!-- tSQLt tests -->
      <!-- Optional -->
      <!-- To run tSQLt tests in source control for the database, enter true. -->
      <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log is towards the bottom. Pulling out this part:

      21-Jun-2013 11:35:19    Build FAILED.
      21-Jun-2013 11:35:19
      21-Jun-2013 11:35:19    "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
      21-Jun-2013 11:35:19    (sqlCI target) ->
      21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
      21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
      21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
      21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
      21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
      21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build. It worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we've tested our database changes, how can we now deploy these to target sites?
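
    For checking the same thing by hand, outside Bamboo, the tSQLt run can be driven from the command line against the CI database. A hedged sketch using sqlcmd (server and database names follow the article; adjust to taste):

      # run every tSQLt test in the CI database; -b makes sqlcmd exit
      # non-zero when tSQLt raises its failure error
      sqlcmd -S localhost -d RedGateAppCI -b -Q "EXEC tSQLt.RunAll"
      echo "exit code: $?"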

    Read the article

  • Why is the latency on one LVM volume consistently higher?

    - by David Schmitt
    I've got a server with LVM over RAID1. One of the volumes has a consistently higher IO latency (as measured by the diskstats_latency munin plugin) than the other volumes from the same group. As you can see, the dark orange /root volume has consistently high IO latency - actually ten times the average latency of the physical devices. It also has the highest Min and Max values. My main concern are not the peaks, which occur under high load, but the constant load on (semi-)idle. The server is running Debian Squeeze with the VServer kernel and has four VServer containers and one KVM guest. I'm looking for ways to fix - or at least understand - this situation. Here're some parts of the system configuration:

      root@kvmhost2:~# df -h
      Filesystem                     Size  Used Avail Use% Mounted on
      /dev/mapper/system--host-root   19G  3.8G   14G  22% /
      tmpfs                           16G     0   16G   0% /lib/init/rw
      udev                            16G  224K   16G   1% /dev
      tmpfs                           16G     0   16G   0% /dev/shm
      /dev/md0                       942M   37M  858M   5% /boot
      /dev/mapper/system--host-isos   28G   19G  8.1G  70% /srv/isos
      /dev/mapper/system--host-vs_a   30G   23G  6.0G  79% /var/lib/vservers/a
      /dev/mapper/system--host-vs_b  5.0G  594M  4.1G  13% /var/lib/vservers/b
      /dev/mapper/system--host-vs_c  5.0G  555M  4.2G  12% /var/lib/vservers/c
      /dev/loop0                     4.4G  4.4G     0 100% /media/debian-6.0.0-amd64-DVD-1
      /dev/loop1                     4.4G  4.4G     0 100% /media/debian-6.0.0-i386-DVD-1
      /dev/mapper/system--host-vs_d   74G   55G   16G  78% /var/lib/vservers/d

      root@kvmhost2:~# cat /proc/mounts
      rootfs / rootfs rw 0 0
      none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
      none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
      none /dev devtmpfs rw,relatime,size=16500836k,nr_inodes=4125209,mode=755 0 0
      none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
      /dev/mapper/system--host-root / ext3 rw,relatime,errors=remount-ro,data=ordered 0 0
      tmpfs /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
      tmpfs /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
      fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
      /dev/md0 /boot ext3 rw,sync,relatime,errors=continue,data=ordered 0 0
      /dev/mapper/system--host-isos /srv/isos ext3 rw,relatime,errors=continue,data=ordered 0 0
      /dev/mapper/system--host-vs_a /var/lib/vservers/a ext3 rw,relatime,errors=continue,data=ordered 0 0
      /dev/mapper/system--host-vs_b /var/lib/vservers/b ext3 rw,relatime,errors=continue,data=ordered 0 0
      /dev/mapper/system--host-vs_c /var/lib/vservers/c ext3 rw,relatime,errors=continue,data=ordered 0 0
      /dev/loop0 /media/debian-6.0.0-amd64-DVD-1 iso9660 ro,relatime 0 0
      /dev/loop1 /media/debian-6.0.0-i386-DVD-1 iso9660 ro,relatime 0 0
      /dev/mapper/system--host-vs_d /var/lib/vservers/d ext3 rw,relatime,errors=continue,data=ordered 0 0

      root@kvmhost2:~# cat /proc/mdstat
      Personalities : [raid1]
      md1 : active raid1 sda2[0] sdb2[1]
            975779968 blocks [2/2] [UU]
      md0 : active raid1 sda1[0] sdb1[1]
            979840 blocks [2/2] [UU]
      unused devices: <none>

      root@kvmhost2:~# iostat -x
      Linux 2.6.32-5-vserver-amd64 (kvmhost2) 06/28/2012 _x86_64_ (8 CPU)
      avg-cpu:  %user %nice %system %iowait %steal %idle
                 3.09  0.14    2.92    1.51   0.00 92.35
      Device: rrqm/s  wrqm/s   r/s    w/s    rsec/s  wsec/s  avgrq-sz avgqu-sz  await  svctm %util
      sda      23.25  161.12   7.46  37.90   855.27 1596.62     54.05     0.13   2.80   1.76  8.00
      sdb      22.82  161.36   7.36  37.66   850.29 1596.62     54.35     0.54  12.01   1.80  8.09
      md0       0.00    0.00   0.00   0.00     0.14    0.02     38.44     0.00   0.00   0.00  0.00
      md1       0.00    0.00  53.55 198.16   768.01 1585.25      9.35     0.00   0.00   0.00  0.00
      dm-0      0.00    0.00   0.48  20.21    16.70  161.71      8.62     0.26  12.72   0.77  1.60
      dm-1      0.00    0.00   3.62  10.03    28.94   80.21      8.00     0.19  13.68   1.59  2.17
      dm-2      0.00    0.00   0.00   0.00     0.00    0.00      9.17     0.00   9.64   6.42  0.00
      dm-3      0.00    0.00   6.73   0.41    53.87    3.28      8.00     0.02   3.44   0.12  0.09
      dm-4      0.00    0.00  17.45  18.18   139.57  145.47      8.00     0.42  11.81   0.76  2.69
      dm-5      0.00    0.00   2.50  46.38   120.50  371.07     10.06     0.69  14.20   0.46  2.26
      dm-6      0.00    0.00   0.02   0.10     0.67    0.81     12.53     0.01  75.53  18.58  0.22
      dm-7      0.00    0.00   0.00   0.00     0.00    0.00      7.99     0.00  11.24   9.45  0.00
      dm-8      0.00    0.00  22.69 102.76   407.25  822.09      9.80     0.97   7.71   0.39  4.95
      dm-9      0.00    0.00   0.06   0.08     0.50    0.62      8.00     0.07 481.23  11.72  0.16

      root@kvmhost2:~# ls -l /dev/mapper/
      total 0
      crw------- 1 root root 10, 59 May 11 11:19 control
      lrwxrwxrwx 1 root root      7 Jun  5 15:08 system--host-kvm1 -> ../dm-4
      lrwxrwxrwx 1 root root      7 Jun  5 15:08 system--host-kvm2 -> ../dm-3
      lrwxrwxrwx 1 root root      7 Jun  5 15:06 system--host-isos -> ../dm-2
      lrwxrwxrwx 1 root root      7 May 11 11:19 system--host-root -> ../dm-0
      lrwxrwxrwx 1 root root      7 Jun  5 15:06 system--host-swap -> ../dm-9
      lrwxrwxrwx 1 root root      7 Jun  5 15:06 system--host-vs_d -> ../dm-8
      lrwxrwxrwx 1 root root      7 Jun  5 15:06 system--host-vs_b -> ../dm-6
      lrwxrwxrwx 1 root root      7 Jun  5 15:06 system--host-vs_c -> ../dm-7
      lrwxrwxrwx 1 root root      7 Jun  5 15:06 system--host-vs_a -> ../dm-5
      lrwxrwxrwx 1 root root      7 Jun  5 15:08 system--host-kvm3 -> ../dm-1
      root@kvmhost2:~#
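
    From the mapper listing, dm-0 is system--host-root, and its iostat line shows modest traffic with relatively high await, matching the munin graph. A hedged sketch for narrowing down what keeps writing to the root LV while the box is semi-idle:

      # map dm numbers to logical volumes (cross-checks the ls -l above)
      dmsetup info -c

      # watch await/svctm per device live, not just the boot-time average
      iostat -xk 5 /dev/dm-0 /dev/dm-8

      # log block-level writers to dmesg for a short window (noisy;
      # remember to switch it off again)
      echo 1 > /proc/sys/vm/block_dump
      sleep 30; dmesg | tail -n 40
      echo 0 > /proc/sys/vm/block_dump

    Frequent small writers on / (syslog, munin's own RRD updates, atime-like metadata churn from the containers' host-side paths) would show up here and would explain a constant baseline latency on that one volume.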

    Read the article

  • Newsletter sent with drupal goes to Spam Folder [closed]

    - by HerrSerker
    Possible Duplicate: How could I prevent my mail from being recognized as spam?

    I'm sending a newsletter with Drupal's Simplenews module. The website is hosted on a 1und1 server in Germany (as seen in the header domains online.de and kundenserver.de). When I send it, it goes to the spam folder in Yahoo & GMail mailboxes, but not in web.de, Hotmail and GMX mailboxes. Here is what I have in the mail header (for Yahoo in this example):

      Received: from 12.345.678.90 (EHLO sXXXXXXXXX.online.de) (12.345.678.90) by mtaXXX.mail.kks.yahoo.co.jp with SMTP; Fri, 15 Jun 2012 18:45:24 +0900
      Received: from [127.0.0.1] (helo=infongdXXXXX.rtr.kundenserver.de) by sXXXXXXXXX.online.de with esmtp (Exim 4.72) (envelope-from <[email protected]>) id 1SfT5k-00068r-Q8 for [email protected]; Fri, 15 Jun 2012 11:45:20 +0200
      Received: from 83.136.130.41 (IP may be forged by CGI script) by infongdXXXXX.rtr.kundenserver.de with HTTP id 0Z04SW-1SQTKp3LPr-00YxYk; Fri, 15 Jun 2012 11:45:20 +0200
      From: SENDER <[email protected]>
      To: "[email protected]" <[email protected]>
      Date: Fri, 15 Jun 2012 11:45:20 +0200
      Subject: This is the subject of the newsletter
      Thread-Topic: This is the subject of the newsletter
      Thread-Index: Ac1K3nT42juzo7uCSkq5dTlby1ZvpQ==
      List-Unsubscribe: <http://www.example.com/newsletter/confirm/remove/XXXXXXXXX>
      X-MS-Has-Attach:
      X-Auto-Response-Suppress: All
      X-MS-TNEF-Correlator:
      x-originating-ip: [12.345.678.90]
      authentication-results: mtaXXX.mail.kks.yahoo.co.jp from=example.com; domainkeys=neutral (no sig); dkim=neutral (no sig) [email protected]
      errors-to: "SENDER" <[email protected]>
      received-spf: none (sXXXXXXXXX.online.de: domain of [email protected] does not designate permitted sender hosts)
      x-apparently-to: [email protected] via 123.45.67.890; Fri, 15 Jun 2012 18:45:25 +0900
      x-sender-info: <[email protected]>
      content-length: 13762
      Content-Type: multipart/alternative; boundary="_000_7471797868716571796675707173696675806577726778666766687_"
      MIME-Version: 1.0

    I cannot see any direct spam filter message in this, but I'm kind of stunned by the "Received: from 83.136.130.41 (IP may be forged by CGI script)" part. After I searched a bit, it seems that this is a special 'feature' of 1und1 mail servers. Here are my questions: Is it possible that, if I get rid of the 'IP may be forged' part, the mail is no longer regarded as spam? If so, does anyone know how I can get rid of it in Drupal?
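
    That Received comment is added by 1und1's web servers whenever mail is injected from a CGI/PHP process, and there is no Drupal-side switch for it; what Yahoo and GMail more likely score on is the "received-spf: none" and "dkim=neutral (no sig)" lines, i.e. no SPF or DKIM at all. A hedged sketch of the DNS-side check (record values are placeholders, and whether 1und1 publishes an includable SPF record is an assumption to verify with them):

      # what, if anything, does the sending domain publish today?
      dig +short TXT example.com

      # after publishing SPF, a record covering the 1und1 relays plus your
      # own host might look like:
      #   "v=spf1 a mx include:kundenserver.de ~all"
      dig +short TXT example.com | grep spf1

    Sending the newsletter via SMTP through an authenticated relay (e.g. with an SMTP module) instead of PHP mail() would also make the "forged by CGI script" header disappear, since the mail then no longer enters via the web server.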

    Read the article

  • nagios NRPE: Unable to read output

    - by user555854
    I set up a script to restart my HTTP server and php5-fpm but can't get it to work. From googling, permissions are the usual cause of this error, but I can't figure it out. I start my script using:

    /usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http

    This is the output in the syslog on the node I want to restart:

    Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028
    Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts
    Jun 27 06:29:35 bart nrpe[8926]: Handling the connection...
    Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run...
    Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart
    Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output:
    Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output
    Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed.

    If I run the command myself (as the nagios user) it runs fine, but asks for a password. These are the script permissions and the script contents:

    -rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart

    #!/bin/bash
    echo "ok"
    /etc/init.d/nginx stop
    /etc/init.d/nginx start
    /etc/init.d/php5-fpm stop
    /etc/init.d/php5-fpm start
    echo "done"

    I also added this line via visudo:

    nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/

    My local nagios nrpe.cfg:

    #############################################################################
    # Sample NRPE Config File
    # Written by: Ethan Galstad ([email protected])
    #
    # NOTES:
    # This is a sample configuration file for the NRPE daemon. It needs to be
    # located on the remote host that is running the NRPE daemon, not the host
    # from which the check_nrpe client is being executed.
    #############################################################################

    # LOG FACILITY
    # The syslog facility that should be used for logging purposes.
    log_facility=daemon

    # PID FILE
    # The name of the file in which the NRPE daemon should write its process ID
    # number. The file is only written if the NRPE daemon is started by the root
    # user and is running in standalone mode.
    pid_file=/var/run/nagios/nrpe.pid

    # PORT NUMBER
    # Port number we should wait for connections on.
    # NOTE: This must be a non-privileged port (i.e. > 1024).
    # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
    server_port=5666

    # SERVER ADDRESS
    # Address that nrpe should bind to in case there are more than one interface
    # and you do not want nrpe to bind on all interfaces.
    # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
    #server_address=127.0.0.1

    # NRPE USER
    # This determines the effective user that the NRPE daemon should run as.
    # You can either supply a username or a UID.
    # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
    nrpe_user=nagios

    # NRPE GROUP
    # This determines the effective group that the NRPE daemon should run as.
    # You can either supply a group name or a GID.
    # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
    nrpe_group=nagios

    # ALLOWED HOST ADDRESSES
    # This is an optional comma-delimited list of IP addresses or hostnames
    # that are allowed to talk to the NRPE daemon.
    # Note: The daemon only does rudimentary checking of the client's IP
    # address. I would highly recommend adding entries in your /etc/hosts.allow
    # file to allow only the specified host to connect to the port
    # you are running this daemon on.
    # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
    allowed_hosts=127.0.0.1,192.168.133.17

    # COMMAND ARGUMENT PROCESSING
    # This option determines whether or not the NRPE daemon will allow clients
    # to specify arguments to commands that are executed. This option only works
    # if the daemon was configured with the --enable-command-args configure script
    # option.
    #
    # *** ENABLING THIS OPTION IS A SECURITY RISK! ***
    # Read the SECURITY file for information on some of the security implications
    # of enabling this variable.
    #
    # Values: 0=do not allow arguments, 1=allow command arguments
    dont_blame_nrpe=0

    # COMMAND PREFIX
    # This option allows you to prefix all commands with a user-defined string.
    # A space is automatically added between the specified prefix string and the
    # command line from the command definition.
    #
    # *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! ***
    # Usage scenario:
    # Execute restricted commands using sudo. For this to work, you need to add
    # the nagios user to your /etc/sudoers. An example entry for allowing
    # execution of the plugins might be:
    #
    # nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/
    #
    # This lets the nagios user run all commands in that directory (and only them)
    # without asking for a password. If you do this, make sure you don't give
    # random users write access to that directory or its contents!
    command_prefix=/usr/bin/sudo

    # DEBUGGING OPTION
    # This option determines whether or not debugging messages are logged to the
    # syslog facility.
    # Values: 0=debugging off, 1=debugging on
    debug=1

    # COMMAND TIMEOUT
    # This specifies the maximum number of seconds that the NRPE daemon will
    # allow plugins to finish executing before killing them off.
    command_timeout=60

    # CONNECTION TIMEOUT
    # This specifies the maximum number of seconds that the NRPE daemon will
    # wait for a connection to be established before exiting. This is sometimes
    # seen where a network problem stops the SSL being established even though
    # all network sessions are connected. This causes the nrpe daemons to
    # accumulate, eating system resources. Do not set this too low.
    connection_timeout=300

    # WEAK RANDOM SEED OPTION
    # This directive allows you to use SSL even if your system does not have
    # a /dev/random or /dev/urandom (on purpose or because the necessary patches
    # were not applied). The random number generator will be seeded from a file
    # which is either a file pointed to by the environment variable $RANDFILE
    # or $HOME/.rnd. If neither exists, the pseudo random number generator will
    # be initialized and a warning will be issued.
    # Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness
    #allow_weak_random_seed=1

    # INCLUDE CONFIG FILE
    # This directive allows you to include definitions from an external config file.
    #include=<somefile.cfg>

    # INCLUDE CONFIG DIRECTORY
    # This directive allows you to include definitions from config files (with a
    # .cfg extension) in one or more directories (with recursion).
    #include_dir=<somedirectory>
    #include_dir=<someotherdirectory>

    # COMMAND DEFINITIONS
    # Command definitions that this daemon will run. Definitions
    # are in the following format:
    #
    # command[<command_name>]=<command_line>
    #
    # When the daemon receives a request to return the results of <command_name>
    # it will execute the command specified by the <command_line> argument.
    #
    # Unlike Nagios, the command line cannot contain macros - it must be
    # typed exactly as it should be executed.
    #
    # Note: Any plugins that are used in the command lines must reside
    # on the machine that this daemon is running on! The examples below
    # assume that you have plugins installed in a /usr/local/nagios/libexec
    # directory. Also note that you will have to modify the definitions below
    # to match the argument format the plugins expect. Remember, these are
    # examples only!

    # The following examples use hardcoded command arguments...
    command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
    command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
    command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
    command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
    command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200

    # The following examples allow user-supplied arguments and can
    # only be used if the NRPE daemon was compiled with support for
    # command arguments *AND* the dont_blame_nrpe directive in this
    # config file is set to '1'. This poses a potential security risk, so
    # make sure you read the SECURITY file before doing this.
    #command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$
    #command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$
    #command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
    #command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$

    command[restart_http]=/usr/lib/nagios/plugins/http-restart

    # local configuration:
    # if you'd prefer, you can instead place directives here
    include=/etc/nagios/nrpe_local.cfg

    # you can place your config snippets into nrpe.d/
    include_dir=/etc/nagios/nrpe.d/

    My sudoers file:

    # /etc/sudoers
    #
    # This file MUST be edited with the 'visudo' command as root.
    #
    # See the man page for details on how to write a sudoers file.
    #
    Defaults env_reset

    # Host alias specification

    # User alias specification

    # Cmnd alias specification

    # User privilege specification
    root ALL=(ALL) ALL
    nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/

    # Allow members of group sudo to execute any command
    # (Note that later entries override this, so you might need to move
    # it further down)
    %sudo ALL=(ALL) ALL
    #
    #includedir /etc/sudoers.d

    Hopefully someone can help!
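
    For what it's worth, "NRPE: Unable to read output" with return code 1 generally means the command NRPE ran printed nothing on stdout, most often because sudo stopped at a password prompt. A rough way to reproduce this outside NRPE, assuming the nagios user has a nologin shell (hence the explicit -s):

    # Run the command exactly as the NRPE daemon does, as the nagios user:
    su nagios -s /bin/bash -c '/usr/bin/sudo /usr/lib/nagios/plugins/http-restart; echo "exit: $?"'
    # If sudo prompts for a password here, the sudoers rule is not matching as
    # expected; also check that a requiretty default is not blocking sudo when
    # there is no terminal attached:
    grep -n requiretty /etc/sudoers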

    Read the article

  • Apache2 will not start on OpenSUSE after enabling mod_pagespeed

    - by alpha1
    I have a Linode VPS running openSUSE 12.1 (a little outdated, but stable). I have installed the RPMs for mod_pagespeed, and mod_pagespeed.conf has "ModPagespeed on". Restarting Apache fails after enabling pagespeed, and the errors are not very helpful:

    li361-39:/usr/lib64/apache2 # a2enmod pagespeed
    li361-39:/usr/lib64/apache2 # service apache2 restart
    redirecting to systemctl
    Job failed. See system logs and 'systemctl status' for details.
    li361-39:/usr/lib64/apache2 # systemctl status apache2.service
    apache2.service - apache
        Loaded: loaded (/lib/systemd/system/apache2.service; enabled)
        Active: failed since Thu, 06 Jun 2013 20:49:00 +0000; 1s ago
        Process: 6701 ExecStop=/usr/sbin/httpd2 -D SYSTEMD -k stop (code=exited, status=0/SUCCESS)
        Process: 6704 ExecStart=/usr/sbin/start_apache2 -D SYSTEMD -k start (code=exited, status=1/FAILURE)
        Main PID: 6637 (code=exited, status=0/SUCCESS)
        CGroup: name=systemd:/system/apache2.service
    li361-39:/usr/lib64/apache2 # a2dismod pagespeed
    li361-39:/usr/lib64/apache2 # service apache2 restart
    redirecting to systemctl
    li361-39:/usr/lib64/apache2 #

    And the error log (/var/log/apache2/error_log) is just as useless:

    [Thu Jun 06 20:48:59 2013] [notice] caught SIGTERM, shutting down
    [Thu Jun 06 20:49:12 2013] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
    [Thu Jun 06 20:49:13 2013] [notice] Apache/2.2.21 (Linux/SUSE) mod_ssl/2.2.21 OpenSSL/1.0.0k PHP/5.4.15 configured -- resuming normal operations

    EDIT: This is from /var/log/messages:

    Jun 12 14:24:14 li361-39 start_apache2[27951]: httpd2-prefork: Syntax error on line 116 of /etc/apache2/httpd.conf: Syntax error on line 34 of /etc/apache2/sysconfig.d/loadmodule.conf: Cannot load /usr/lib64/apache2/mod_pagespeed.so into server: /usr/lib64/apache2/mod_pagespeed.so: undefined symbol: ap_unixd_config

    The full log is here: http://pastebin.com/hjnbZZTr

    I've tried looking for other logs and checking my mod_pagespeed.conf against posts claiming it works, and nothing strikes me as wrong. Any ideas?
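
    For context, ap_unixd_config is a symbol that first appears with Apache 2.4's mod_unixd (Apache 2.2 called it unixd_config), so an undefined reference to it usually means the module binary was built against a newer Apache than the 2.2.21 this server is running. A minimal check, assuming nm from binutils is installed:

    httpd2 -v                                    # confirm the running server is 2.2.x
    nm -D /usr/lib64/apache2/mod_pagespeed.so | grep ap_unixd_config
    # An undefined 'U ap_unixd_config' here, with no provider in a 2.2 server,
    # points to an RPM packaged for Apache 2.4.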

    Read the article

  • Linux can't find file that exists

    - by Joe
    I'm trying to get Google's Dart language up and running, but it errors when running dart2js. I'm running Arch Linux and I installed dart-sdk from the AUR. Some relevant terminal output is below.

    % dart2js main.dart
    /usr/local/bin/dart2js: line 7: /usr/local/bin/dart: No such file or directory
    % cat /usr/local/bin/dart2js
    #!/bin/sh
    # Copyright (c) 2012, the Dart project authors. Please see the AUTHORS file
    # for details. All rights reserved. Use of this source code is governed by a
    # BSD-style license that can be found in the LICENSE file.
    BIN_DIR=`dirname $0`
    exec $BIN_DIR/dart --allow_string_plus=false $BIN_DIR/../lib/dart2js/lib/compiler/implementation/dart2js.dart "$@"
    % file /usr/local/bin/dart
    /usr/local/bin/dart: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, BuildID[sha1]=0x27fe166ca015c1adfeaf3a6f9c018e7d7af46d9f, stripped
    % ls -alh /usr/local/bin
    total 4.9M
    drwxr-xr-x  2 root root 4.0K Jun 10 22:51 .
    drwxr-xr-x 12 root root 4.0K Jun 10 22:51 ..
    -rwxr-xr-x  1 root root 422K May 10 22:41 cargo
    -rwxr-xr-x  1 root root 2.7M Jun 10 22:50 dart
    -rwxr-xr-x  1 root root  360 Jun  6 16:20 dart2js
    -rwxr-xr-x  1 root root  176 Jun  6 16:20 pub
    -rwxr-xr-x  1 root root 166K May 10 22:41 rustc
    -rwxr-xr-x  1 root root 1.6M May 10 22:41 rustdoc
    % uname -rm
    3.3.7-1-ARCH x86_64

    Could it be because I'm running a 64-bit OS and the dart binary is 32-bit?
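
    That "No such file or directory" for a file that clearly exists is the classic sign of a 32-bit binary on a 64-bit system without the 32-bit loader: the message refers to the missing ELF interpreter, not to the binary itself. A quick check, with the caveat that the multilib package name is an assumption and may differ:

    readelf -l /usr/local/bin/dart | grep interpreter
    # Typically prints: [Requesting program interpreter: /lib/ld-linux.so.2]
    # If /lib/ld-linux.so.2 does not exist, enable [multilib] in /etc/pacman.conf
    # and install the 32-bit C library:
    pacman -S lib32-glibc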

    Read the article

  • Apache reverse proxy: no protocol handler

    - by gonvaled
    I am trying to configure a reverse proxy with Apache, but I am getting a "No protocol handler was valid for the URL" error, which I do not understand. This is the relevant Apache configuration:

    ProxyRequests Off
    ProxyPreserveHost On
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPass /gonvaled/examples/jsonrpc/output/services/ http://localhost:8000/services/
    ProxyPassReverse /gonvaled/examples/jsonrpc/output/services/ http://localhost:8000/services/

    The requests reach Apache as:

    POST /gonvaled/examples/jsonrpc/output/services/EchoService.py HTTP/1.1

    and should be forwarded to my internal service at:

    0.0.0.0:8000/services/EchoService.py

    These are the logs:

    ==> /var/log/apache2/error.log <==
    [Wed Jun 20 02:05:20 2012] [debug] proxy_util.c(1506): [client 127.0.0.1] proxy: http: found worker http://localhost:8000/services/ for http://localhost:8000/services/EchoService.py, referer: http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html
    [Wed Jun 20 02:05:20 2012] [debug] mod_proxy.c(998): Running scheme http handler (attempt 0)
    [Wed Jun 20 02:05:20 2012] [warn] proxy: No protocol handler was valid for the URL /gonvaled/examples/jsonrpc/output/services/EchoService.py. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
    [Wed Jun 20 02:05:20 2012] [debug] mod_deflate.c(615): [client 127.0.0.1] Zlib: Compressed 614 to 373 : URL /gonvaled/examples/jsonrpc/output/services/EchoService.py, referer: http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html

    ==> /var/log/apache2/access.log <==
    127.0.0.1 - - [20/Jun/2012:02:05:20 +0200] "POST /gonvaled/examples/jsonrpc/output/services/EchoService.py HTTP/1.1" 500 598 "http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html" "Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.162 Safari/535.19"
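
    The "No protocol handler was valid" warning usually means mod_proxy itself is loaded but the scheme-specific submodule (mod_proxy_http for http:// backends) is not. A sketch of the check and fix on a Debian-style layout, assuming a2enmod is available; other layouts need the equivalent LoadModule lines, shown commented:

    apache2ctl -M | grep proxy        # list the proxy modules currently loaded
    a2enmod proxy proxy_http          # enable the HTTP handler alongside mod_proxy
    service apache2 restart
    # Manual equivalent for a hand-edited configuration:
    # LoadModule proxy_module modules/mod_proxy.so
    # LoadModule proxy_http_module modules/mod_proxy_http.so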

    Read the article

  • How to mount vfat drive on Linux with ownership other than root?

    - by Norman Ramsey
    I'm running into trouble mounting an iPod on a newly upgraded Debian Squeeze. I suspect either a protocol has changed or I've tickled a bug, but I don't know where to report it. I'm trying to mount the iPod so that I have permission to read and write it, but my efforts come to nothing:

    $ sudo mount -v -t vfat -o uid=32074,gid=6202 /dev/sde2 /mnt
    /dev/sde2 on /mnt type vfat (rw,uid=32074,gid=6202)
    $ ls -l /mnt
    total 80
    drwxr-xr-x 2 root root 16384 Jan  1  2000 Calendars
    drwxr-xr-x 2 root root 16384 Jan  1  2000 Contacts
    drwxr-xr-x 2 root root 16384 Jan  1  2000 Notes
    drwxr-xr-x 3 root root 16384 Jun 23  2007 Photos
    drwxr-xr-x 6 root root 16384 Jun 19  2007 iPod_Control
    $ sudo umount /mnt
    $ sudo mount -v -t vfat -o uid=nr,gid=nr /dev/sde2 /mnt
    /dev/sde2 on /mnt type vfat (rw,uid=32074,gid=6202)
    $ ls -l /mnt
    total 80
    drwxr-xr-x 2 root root 16384 Jan  1  2000 Calendars
    drwxr-xr-x 2 root root 16384 Jan  1  2000 Contacts
    drwxr-xr-x 2 root root 16384 Jan  1  2000 Notes
    drwxr-xr-x 3 root root 16384 Jun 23  2007 Photos
    drwxr-xr-x 6 root root 16384 Jun 19  2007 iPod_Control

    As you can see, I've tried both symbolic and numeric IDs, but the files persist in being owned by root (and only writable by root). The IDs are really mine; I've had the UID since 1993.

    $ id
    uid=32074(nr) gid=6202(nr) groups=6202(nr),0(root),2(bin),4(adm),...

    I've put an strace at http://pastebin.com/Xue2u9FZ, and the mount(2) call looks good:

    mount("/dev/sde2", "/mnt", "vfat", MS_MGC_VAL, "uid=32074,gid=6202") = 0

    Finally, here's my kernel version from uname -a:

    Linux homedog 2.6.32-5-686 #1 SMP Mon Jun 13 04:13:06 UTC 2011 i686 GNU/Linux

    Does anyone know if I should be doing something different, whether there is a workaround, or, if this is a bug, where it should be reported?
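
    One hedged diagnostic: compare what the kernel actually recorded for the mount (via /proc/mounts, which reflects the filesystem's view) against what mount(8) echoed, and try explicit permission masks in case ownership mapping is applied but masked out. A sketch, reusing the device and mountpoint above:

    grep ' /mnt ' /proc/mounts
    # Remount with explicit masks alongside the ownership options:
    sudo umount /mnt
    sudo mount -t vfat -o uid=32074,gid=6202,dmask=022,fmask=133 /dev/sde2 /mnt
    ls -ld /mnt /mnt/iPod_Control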

    Read the article

  • How to delete all mail in solaris

    - by conandor
    I have a bunch of mail in my Solaris account:

    107 letters found in /var/mail/icinga, 1 scheduled for deletion, 0 newly arrived
     107 d 2886 MAILER-DAEMON Fri Jun 11 00:39:39 2010
    >106   2895 MAILER-DAEMON Fri Jun 11 00:13:02 2010
     105   2890 MAILER-DAEMON Fri Jun 11 00:10:05 2010
     104   2888 MAILER-DAEMON Tue May 18 15:13:34 2010
     103   2874 MAILER-DAEMON Tue May 18 14:58:29 2010
     102   2874 MAILER-DAEMON Tue May 18 14:28:34 2010

    Any idea how I can delete all of them with one command instead of line by line?
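
    Two common approaches, sketched with the caveat that mailx syntax varies between Solaris releases: feed delete commands to the mail reader non-interactively, or truncate the mbox file directly (safe only if nothing is delivering to it at that moment):

    printf 'd *\nq\n' | mailx -N    # 'd *' deletes all messages; 'd 1-107' is the explicit range
    cp /dev/null /var/mail/icinga   # or empty the mailbox file outright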

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >