Search Results

Search found 20946 results on 838 pages for 'at command'.

  • Very different results from df after a few seconds

    - by tatus2
    When the backup moves files from one server to the other, the results of df change every few seconds in a way that can't be right. The source host is running rsync. On the destination host I'm running the following command every few seconds:

        echo `date` `df|grep md0`

    Results are below:

        Sat Jun 29 23:57:12 CEST 2013 /dev/md0 4326425568 579316100 3527339636 15% /MD0
        Sat Jun 29 23:57:14 CEST 2013 /dev/md0 4326425568 852513700 3254142036 21% /MD0
        Sat Jun 29 23:57:15 CEST 2013 /dev/md0 4326425568 969970340 3136685396 24% /MD0
        Sat Jun 29 23:57:17 CEST 2013 /dev/md0 4326425568 1255222180 2851433556 31% /MD0
        Sat Jun 29 23:57:20 CEST 2013 /dev/md0 4326425568 1276006720 2830649016 32% /MD0
        Sat Jun 29 23:57:24 CEST 2013 /dev/md0 4326425568 1355440016 2751215720 34% /MD0
        Sat Jun 29 23:57:26 CEST 2013 /dev/md0 4326425568 1425090960 2681564776 35% /MD0
        Sat Jun 29 23:57:27 CEST 2013 /dev/md0 4326425568 1474601872 2632053864 36% /MD0
        Sat Jun 29 23:57:28 CEST 2013 /dev/md0 4326425568 1493627384 2613028352 37% /MD0
        Sat Jun 29 23:57:32 CEST 2013 /dev/md0 4326425568 615934400 3490721336 15% /MD0
        Sat Jun 29 23:57:33 CEST 2013 /dev/md0 4326425568 636071360 3470584376 16% /MD0

    As you can see, I start from a usage of 15% and after 15 seconds I'm at 37% (needless to say, the backup cannot copy that much data in such a short time). After roughly 20 seconds the cycle closes and I'm back at about the same usage as before. That value is reasonable: about 35 MB were copied. Can somebody explain what is going on? Does df only estimate usage rather than report the actual used value?
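
    One way to narrow this down (a sketch of my own, not from the post): sample df and du together, since du counts file sizes while df counts allocated blocks, and a growing gap between them points at sparse or preallocated temporary files. The log path and target directory are placeholders; pointing du at the directory rsync writes into keeps it fast:

        #!/bin/sh
        # Log df's used-blocks figure next to du's file-size total every 2 seconds.
        TARGET=/MD0/backup-dir   # hypothetical: the directory rsync writes into
        while true; do
            printf '%s df=%s du=%s\n' "$(date '+%T')" \
                "$(df -P /MD0 | awk 'NR==2 {print $3}')" \
                "$(du -sk "$TARGET" | awk '{print $1}')" >> /tmp/md0-trace.log
            sleep 2
        done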

  • lsof not showing what port a proc is listening on

    - by ericslaw
    I have many processes on a box listening on several ports, and I am trying to map ports to pids. The problem is that lsof is not telling me which ports belong to which process. Given an apache listening on port 80, I can see it listening via netstat:

        user@host% netstat -an | grep LISTEN | grep 80
        *.80 *.* 0 0 49152 0 LISTEN

    But when I try to map port 80 to a pid I get nothing:

        user@host% lsof -iTCP:80 -t

    When I try to see which sockets that specific pid is using I get:

        user@host% lsof -lnP -p31 -a -i
        COMMAND   PID USER FD  TYPE DEVICE        SIZE/OFF NODE NAME
        libhttpd. 31  0    15u IPv4 0x6002d970b80 0t0      TCP  *:65535 (LISTEN)

    Notice the *:65535 in the NAME column. Does anyone know why lsof is not reporting the port in use? I am running as root. I am using a mix of lsof and OS versions: lsof v4.77 on Solaris 10 SPARC, lsof v4.72 on Red Hat 4.2, etc. I know that on Linux I can use "netstat -p", so I guess I'm only asking why Solaris isn't working, but I find lsof is frequently silent and doesn't show me the data I expect.
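
    For what it's worth, a workaround sketch for the Solaris side (my assumption, not from the post): pfiles prints each process's socket endpoints, so a brute-force scan can recover the port-to-pid mapping that lsof isn't giving:

        #!/bin/sh
        # Scan every pid and report those holding a socket on the port (run as root).
        PORT=80
        for pid in $(ps -e -o pid=); do
            if pfiles "$pid" 2>/dev/null | grep "port: $PORT\$" >/dev/null; then
                echo "PID $pid appears to hold port $PORT"
            fi
        done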

  • OS X can connect to Windows machine, but can't access shared folders

    - by Bonnie
    I can create new folders on my Windows XP machine and set them to "shared". On my Mac, I pick Finder → Go → Connect to Server → smb://192.168.1.4 → Connect → Name / Password. It even shows me the names of the newly created shared folders on my PC, but when I try to actually connect to any of them I get:

        connection failed, there was an error connecting

    Any idea what would cause that? The fact that it successfully gets so far, to actually showing me my PC share names, must mean I have 99% of this working correctly: the physical connection, the IP address, the user name, the password, etc. Still, I can't access the folders themselves. I've tried this with my Windows XP firewall on and off, and with Norton AntiVirus on and off. Same problem. Everything worked fine 4 months ago. Were any odd OS X or Windows updates released recently? I always apply them all. smbclient on the Mac does correctly find the XP machine and my XP user name, and accepts my XP password. I get the following from that smbclient command:

        Doing spnego session setup (blob length=16)
        server didn't supply a full spnego negprot
        Got challenge flags: ...
        Got NTLMSSP flags: ...
        Got NTLMSP flags: ...
        Domain=[XPMACHINE] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]
        tree connect failed: NT_STATUS_INSUFF_SERVER_RESOURCES

    I'm not sure why a standard XP box can't "supply a full spnego negprot", whatever that means. Using XP's RegEdit to change IRPStackSize from 11 to 13, 15, 20, 22... still gives that NT_STATUS_INSUFF_SERVER_RESOURCES error on the Mac.
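
    To get a more detailed error than Finder's generic one, it may help to attempt the tree connect from smbclient itself, since it already authenticates fine. A sketch (share and user names are placeholders):

        # Connect to one specific share rather than just enumerating the list;
        # a failure at the "smb: \>" prompt reproduces the Finder error with
        # the server's actual NT status code attached.
        smbclient //192.168.1.4/SharedDocs -U xpusername
        smb: \> ls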

  • FreeBSD's ng_nat periodically stops passing packets

    - by Korjavin Ivan
    I have a FreeBSD router:

        #uname
        9.1-STABLE FreeBSD 9.1-STABLE #0: Fri Jan 18 16:20:47 YEKT 2013

    It's a powerful computer with a lot of memory:

        #top -S
        last pid: 45076;  load averages: 1.54, 1.46, 1.29   up 0+21:13:28  19:23:46
        84 processes: 2 running, 81 sleeping, 1 waiting
        CPU:  3.1% user,  0.0% nice, 32.1% system,  5.3% interrupt, 59.5% idle
        Mem: 390M Active, 1441M Inact, 785M Wired, 799M Buf, 5008M Free
        Swap: 8192M Total, 8192M Free

          PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
           11 root        4 155 ki31     0K    64K RUN     3  71.4H 254.83% idle
           13 root        4 -16    -     0K    64K sleep   0 101:52 103.03% ng_queue
            0 root       14 -92    0     0K   224K -       2 229:44  16.55% kernel
           12 root       17 -84    -     0K   272K WAIT    0 213:32  15.67% intr
        40228 root        1  22    0 51060K 25084K select  0  20:27   1.66% snmpd
        15052 root        1  52    0   104M 22204K select  2   4:36   0.98% mpd5
           19 root        1  16    -     0K    16K syncer  1   0:48   0.20% syncer

    Its tasks are NAT via ng_nat and a PPPoE server via mpd5. Traffic through it is about 300 Mbit/s, around 40 kpps at peak, with up to 350 PPPoE sessions. ng_nat is configured by the script:

        /usr/sbin/ngctl -f- <<-EOF
        mkpeer ipfw: nat %s out
        name ipfw:%s %s
        connect ipfw: %s: %s in
        msg %s: setaliasaddr 1.1.%s
        EOF

    There are 20 such ng_nat nodes, with about 150 clients. Sometimes the traffic via NAT stops. When this happens, vmstat reports a lot of FAIL counts:

        vmstat -z | grep -i netgraph
        ITEM                 SIZE   LIMIT   USED   FREE        REQ     FAIL SLEEP
        NetGraph items:        72,  10266,     1,   376,  39178965,       0,    0
        NetGraph data items:   72,  10266,     9, 10257, 2327948820, 2131611, 4033

    I tried increasing

        net.graph.maxdata=10240
        net.graph.maxalloc=10240

    but this doesn't help. It's a new problem (the last 1-2 weeks). The configuration had been working well for about 5 months, and no configuration changes were made before the problems started. In the last few weeks we slightly increased traffic (from 270 to 300 Mbit/s) and added a few more PPPoE sessions (300 to 350). Please help me find and solve this problem.
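
    One thing worth checking (an assumption on my part, not from the post): on many FreeBSD releases the net.graph.* limits are boot-time tunables, so setting them with sysctl at runtime may silently fail to take effect. A sketch:

        # /boot/loader.conf -- values here are illustrative, sized well above
        # the 10266 limit that vmstat -z shows being exhausted
        net.graph.maxdata=65536
        net.graph.maxalloc=65536

        # after a reboot, verify the limits actually changed:
        sysctl net.graph.maxdata net.graph.maxalloc
        vmstat -z | grep -i netgraph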

  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (e.g. the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to your HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get:

        Windows failed to start. A recent hardware or software change might be the cause. To fix the problem...
        Status: 0xc000000e
        Info: The boot selection failed because a required device is inaccessible.

    The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD and choose "Repair". It indicated my Recovery Environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted, but got the same issue as above. In addition, when I booted into Windows 7 and looked at the boot menu options, the recovery/repair option was not there, only my Windows installation. Finally, I ran the Disk Management tool (diskmgmt.msc) and looked at the contents of my "System Reserved" partition (which was set to "Active" as normal). It's unclear to me what the contents should look like, but it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process, which led to http://www.sevenforums.com/tutorials/668-system-recovery-options.html.)
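
    A possible next step (my suggestion, not part of the original question): Windows 7 ships reagentc.exe for inspecting and re-registering WinRE, which can confirm whether the recovery image the boot entry points at actually exists. From an elevated Command Prompt:

        REM Show whether WinRE is enabled and where its image lives:
        reagentc /info
        REM Attempt to re-register WinRE on the boot menu:
        reagentc /enable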

  • Only root can send out mail through Postfix

    - by Arash
    I have postfix installed and running. The problem is that only root can send email; other users fail to. Here is the log for the user www-data, which the web server application runs as (the same error appears for other users):

        postfix/smtp[32003]: 513765FEB9: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:11125,
            delay=2.1, delays=0.07/0/1.7/0.32, dsn=5.0.0, status=bounced
            (host 127.0.0.1[127.0.0.1] said: 550-Verification failed for <[email protected]>
            550-Unrouteable address 550 Sender verify failed (in reply to RCPT TO command))

    Here is /etc/postfix/main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydestination = $myhostname, localhost.$mydomain, localhost
        relayhost = [127.0.0.1]:11125
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/lizard_password
        smtp_sasl_security_options =
        mynetworks = 127.0.0.1/8 [::ffff:127.0.0.1]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        myorigin = /etc/mailname
        mydestination = $myhostname, localhost.$mydomain, localhost
        inet_protocols = ipv4
        smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_destination

    And here is the section I added to /etc/stunnel/stunnel.conf:

        [smtp-tls-wrapper]
        accept = 11125
        client = yes
        connect = smtp.mydomain.com:465

    I appreciate any help.
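
    A diagnostic sketch (mine, with placeholder addresses): the relay's complaint is about the envelope sender, so forcing a routable one with sendmail -f from the affected account should show whether sender verification at the relay is the culprit:

        # Run as the failing user; -f sets the envelope sender the relay verifies.
        sudo -u www-data /usr/sbin/sendmail -f [email protected] [email protected] <<'EOF'
        Subject: sender-verify test

        test body
        EOF
        # If this goes through, the default www-data@hostname envelope sender is
        # what the upstream relay is rejecting as unrouteable.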

  • Configuration of Zend Framework on Ubuntu

    - by Rahul Mehta
    I have created a project, zfapi, with the zf command on Ubuntu. Right now http://myserverpath.com/zfapi/ gives me a listing of the folders public, application, and others, and http://myserverpath.com/zfapi/public gives me the index page, index.php. I have created UserController.php in application/controllers, but http://myserverpath.com/zfapi/user/ says the user is not found. What configuration do I need to set for it to run properly? I edited /etc/apache2/apache2.conf and added the following at the end:

        <VirtualHost *:80>
            DocumentRoot "/var/www/mypath/zfapi"
            <Directory "/var/www/mypath/zfapi">
                Order allow,deny
                Allow from all
                AllowOverride all
            </Directory>
        </VirtualHost>

    This gives me the following errors while restarting the server:

        [Sat Jan 08 13:32:53 2011] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Sat Jan 08 13:33:03 2011] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    What should I do?
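
    For reference, a sketch of how such a vhost often ends up looking (my assumption, not a confirmed fix): Zend Framework projects are normally served from the project's public/ directory, and the "mixing * ports" warning suggests a NameVirtualHost *:80 line is missing:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot "/var/www/mypath/zfapi/public"
            <Directory "/var/www/mypath/zfapi/public">
                Order allow,deny
                Allow from all
                # Needed so public/.htaccess can route /user/ to UserController
                AllowOverride All
            </Directory>
        </VirtualHost>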

  • NTLM, Kerberos and F5 switch issues

    - by G33kKahuna
    I'm supporting an IIS-based application that is scaled out into web and application servers; both tiers run behind IIS. The application is NTLM-capable when IIS is configured to authenticate via Kerberos, and it has been working so far without a glitch. Now I'm trying to bring in two F5 switches, one in front of the web servers and another in front of the application servers. The two F5 instances (say IPs 185 and 186) sit on a Linux host, and F5-to-F5 traffic uses a NAT IP (say IPs 194, 195 and 196). I created DNS entries for all the IPs, including the NAT ones, and ran a SETSPN command to register the IIS service account to be trusted at the HTTP, HOST and domain level. With the web F5 turned on and each web server connecting to a designated app server, when the user connects to the web F5's domain name, the trust works and the user authenticates without a problem. However, when the app load balancer is turned on and the web servers are pointed at the new F5 app domain name, the user gets a 401. The IIS log shows no authenticated username and a 401 status. Wireshark does show a negotiate ticket header passed into the system. Any ideas or suggestions are much appreciated. Please advise.
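
    A sketch of the SPN side of this (placeholders throughout; not from the original post): Kerberos tickets are issued for the host name the client requests, so the new F5 app-tier name needs its own SPN on the same service account:

        REM Register the load-balanced app-tier name and list what's already there
        setspn -A HTTP/f5app.example.com MYDOMAIN\iis_svc_account
        setspn -L MYDOMAIN\iis_svc_account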

  • How can I batch convert SVG files containing text to PDF files (specifically on CentOS 5.3 x86_64)?

    - by molecules
    I would like to programmatically convert SVG files to PDF files. However, the SVG files contain text that must remain searchable in the generated PDFs. It also has to work on Red Hat Enterprise Linux 5.3 or CentOS 5.3 on the x86_64 architecture, and it would be nice if it were open source or at least not very expensive. Here is what I've tried. All of these, except Batik, work fine on Debian Lenny.

    - Inkscape: I can get it installed using autopackages from http://inkscape.modevia.com/ap, but when I use it from the command line, the text is not searchable.
    - Batik rasterizer [sic]: when it converts SVG files to PDF files, the text is no longer searchable.
    - svg2pdf: the source for this and several of its dependencies is available to download. I have been trying to get it to compile on CentOS, but haven't had success yet. I found a precompiled version for Debian x86_64, but it doesn't work on CentOS.
    - rsvg-convert: the generated PDF isn't searchable on CentOS 5.3. Perhaps installing a newer version of cairo would help. Thanks to DaveParillo for mentioning rsvg-convert (on Super User).

    SOLUTION (but perhaps some of the above will still be useful to the reader):

    - princeXML: it works fine on CentOS when installed from source. For some reason it doesn't work when installed from the .rpm. Thanks Erik Dahlström! (provided the solution that worked for my case on Stack Overflow)

    Cross-posted on Stack Overflow.
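
    Whichever converter wins, the batch step itself is only a loop. A sketch using rsvg-convert's flags (substitute whichever tool produces searchable text on your box):

        # Convert every SVG in the current directory to a same-named PDF.
        for f in *.svg; do
            rsvg-convert -f pdf -o "${f%.svg}.pdf" "$f"
        done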

  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On the current setup we're running into a problem with Varnish. The machine is a CentOS 5.7 x86_64 xenpv VPS with cPanel WHM, hosted at VPS.net. Sometimes we receive a Guru Meditation from Varnish, and when we look in the varnishlog with

        varnishlog -d -c -m TxStatus:503

    it returns output similar to the following:

        15 VCL_call     c recv
        15 VCL_acl      c NO_MATCH devs
        15 VCL_return   c pass
        15 VCL_call     c hash
        15 Hash         c ****
        15 Hash         c *************
        15 VCL_return   c hash
        15 VCL_call     c pass pass
        15 Backend      c 12 default default
        15 TTL          c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0
        15 VCL_call     c fetch hit_for_pass
        15 ObjProtocol  c HTTP/1.1
        15 ObjResponse  c OK
        15 ObjHeader    c Date: Thu, 22 Mar 2012 22:07:35 GMT
        15 ObjHeader    c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6
        15 ObjHeader    c X-Powered-By: PHP/5.3.9
        15 ObjHeader    c Expires: Thu, 19 Nov 1981 08:52:00 GMT
        15 ObjHeader    c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        15 ObjHeader    c Pragma: no-cache
        15 ObjHeader    c Content-Type: text/html; charset=utf-8
        15 ObjHeader    c X-Cacheable: NO:Cache-Control=private
        15 FetchError   c chunked read_error: 12 (Could not get storage)
        15 VCL_call     c error deliver
        15 VCL_call     c deliver deliver

    As far as I can gather, we could try increasing the nuke_limit, but we currently have a nuke_limit of 500, and running varnishstat -1 -f n_lru_nuked "only" gives a total of 1031, even though we have seen the error happen on several pages. When we run top to see how much memory Varnish is using, it shows only 763m in use, although we've allowed it to use 1200m. Any ideas what the problem could be?
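
    A first check I'd add (my sketch, not from the post): "Could not get storage" comes from the storage backend rather than the cache contents, so it's worth confirming how varnishd's -s storage was actually sized and how full it is:

        # Which -s argument is the daemon really running with?
        ps auxww | grep varnish[d]
        # Storage counters (names vary between Varnish versions, hence the loose grep):
        varnishstat -1 | grep -i 'sm[af]'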

  • PHP displays blank white page even with all error reporting enabled

    - by Andy Shinn
    I am trying to debug a broken page in a Drupal application and am having a hard time getting PHP to spit anything useful out. I have the following set:

        error_reporting = E_ALL
        display_errors = On
        display_startup_errors = On
        log_errors = On
        error_log = /var/log/php/php_error.log

    I have a file showing me phpinfo(), which confirms these variables are set correctly for the environment, and I have increased memory_limit to 256M (which should be more than enough). Yet the only indication I get is a status 500 code in the Apache access log and a blank white page from PHP. The Apache virtual host has LogLevel set to debug, and the error log only outputs:

        [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 0 to 2 : URL /index.php, referer: http://ec2-174-129-192-237.compute-1.amazonaws.com/admin/reports/updates
        [Sat Jun 16 20:03:03 2012] [error] [client 173.8.175.217] File does not exist: /var/www/favicon.ico
        [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 42 to 44 : URL /favicon.ico

    The PHP error log outputs nothing at all, and kernel and syslog show nothing related to Apache or PHP. I have also tried installing suphp; checking its log just confirms the user is executing correctly:

        [Sat Jun 16 20:02:59 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000
        [Sat Jun 16 20:05:03 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000

    This is on Ubuntu 12.04 x86_64 with the following PHP modules:

        ii  php5         5.3.10-1ubuntu3.1  server-side, HTML-embedded scripting language (metapackage)
        ii  php5-cgi     5.3.10-1ubuntu3.1  server-side, HTML-embedded scripting language (CGI binary)
        ii  php5-cli     5.3.10-1ubuntu3.1  command-line interpreter for the php5 scripting language
        ii  php5-common  5.3.10-1ubuntu3.1  Common files for packages built from the php5 source
        ii  php5-curl    5.3.10-1ubuntu3.1  CURL module for php5
        ii  php5-gd      5.3.10-1ubuntu3.1  GD module for php5
        ii  php5-mysql   5.3.10-1ubuntu3.1  MySQL module for php5

    So, what am I missing here? Why no error reporting?
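
    One more probe that might help (a sketch, not from the original post): running the same entry point through the CLI interpreter bypasses Apache and the CGI wrapper entirely, so a fatal error that the web stack is swallowing gets printed straight to the terminal:

        # The CLI uses its own php.ini, so force the reporting flags explicitly.
        php -d display_errors=1 -d error_reporting=E_ALL /var/www/index.php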

  • Solr startup script problem

    - by Camran
    I have installed Solr and it finally works. Now I'm having problems setting it up to start automatically with a start command. I followed a tutorial and created a file called solr in /etc/init.d:

        #!/bin/sh -e
        # SOLR auto-start
        #
        # description: auto-starts solr engine
        # processname: solr-production
        # pidfile: /var/run/solr-production.pid

        NAME="solr"
        PIDFILE="/var/run/solr-production.pid"
        LOG_FILE="/var/log/solr-production.log"
        SOLR_DIR="/etc/jetty"
        JAVA_OPTIONS="-Xmx1024m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar"
        JAVA="/usr/bin/java"

        start() {
            echo -n "Starting $NAME... "
            if [ -f $PIDFILE ]; then
                echo "is already running!"
            else
                cd $SOLR_DIR
                $JAVA $JAVA_OPTIONS 2> $LOG_FILE &
                sleep 2
                echo `ps -ef | grep -v grep | grep java | awk '{print $2}'` > $PIDFILE
                echo "(Done)"
            fi
            return 0
        }

        stop() {
            echo -n "Stopping $NAME... "
            if [ -f $PIDFILE ]; then
                cd $SOLR_DIR
                $JAVA $JAVA_OPTIONS --stop
                sleep 2
                rm $PIDFILE
                echo "(Done)"
            else
                echo "can not stop, it is not running!"
            fi
            return 0
        }

        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            restart)
                stop
                sleep 5
                start
                ;;
            *)
                echo "Usage: $0 (start | stop | restart)"
                exit 1
                ;;
        esac

    Whenever I do solr -start I get this error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap

    I think this is caused by the file above. Solr is installed in /var/www/solr, and the start.jar file is located at /var/www/start.jar. Help me out if you know what's causing this. Thanks. BTW: the OS is Ubuntu 9.10.
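
    Two details in the script look worth testing (my reading, not a confirmed fix): SOLR_DIR points at /etc/jetty while start.jar lives in /var/www, and the JVM error is typically the -Xmx heap reservation failing rather than anything Solr-specific. A sketch of the two variables adjusted:

        SOLR_DIR="/var/www"   # where start.jar actually lives, per the question
        JAVA_OPTIONS="-Xmx512m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar"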

  • How to install Windows with no CD drive boot option in BIOS?

    - by Kris Hollenbeck
    I have a new computer which I built from scratch, and I am trying to install a copy of Windows Vista on it. I am able to get into the BIOS and change the boot options, which are as follows:

        - Built-in EFI Shell
        - SATA: ST31000528AS

    Everything I find when searching says to boot from the CD-ROM; as you can see, that is not an option for me. So I am wondering if there is another way around this. Is it possible to boot the disc from the EFI Shell? Any help is much appreciated. Thanks.

    EDIT: I have tried this: http://technet.microsoft.com/en-us/library/dd744321%28v=ws.10%29.aspx

    UPDATE: I managed to make my USB stick bootable via the BIOS and copied my Windows Vista disc onto it via drag and drop. However, I am still not able to get the Windows install to start. I have also tried booting it from the EFI shell using the following commands:

        Shell> blk6:
        blk6:\> \EFI\BOOT\BOOTX64.EFI

    Still no luck.
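
    For the EFI shell attempt, a sketch of the usual sequence (device numbers are illustrative): map -r refreshes the mappings after plugging the stick in, and the shell exposes readable filesystems as fsN: mappings, which are more likely to launch a loader than raw blkN: block devices:

        Shell> map -r
        Shell> fs0:
        fs0:\> cd \EFI\BOOT
        fs0:\EFI\BOOT> BOOTX64.EFI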

  • iptables to block VPN traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 with some Apache, Tomcat, Asterisk and mail stuff. We needed to add VPN support for one particular program, so we installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunnelling. To tunnel only that single program through the VPN, we start the program with

        sudo -u username -g groupname command

    and added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets. The problem: if the VPN fails, there is a chance that the to-be-tunneled program communicates over the normal eth0 interface. Desired solution: marked traffic should never be allowed to go directly through eth0; it has to go through tun0 first. I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    It might be that these rules don't work because the packets are first marked, then put into tun0, and then transmitted by eth0 while they are still marked. I don't know how to un-mark them after tun0, or how to tell iptables that marked packets may pass eth0 if they have been through tun0 before or are going to the gateway of my VPN provider. Does someone have any idea for a solution? Some config info:

        iptables -nL -v --line-numbers -t mangle
        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num   pkts bytes target   prot opt in out source    destination
        1     591K   50M MARK     all  --  *  *   0.0.0.0/0 0.0.0.0/0  owner GID match 1005 MARK set 0x2a
        2    82812 6938K CONNMARK all  --  *  *   0.0.0.0/0 0.0.0.0/0  owner GID match 1005 CONNMARK save

        iptables -nL -v --line-numbers -t nat
        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num  pkts bytes target prot opt in out  source    destination
        1      15  1052 SNAT   all  --  *  tun0 0.0.0.0/0 0.0.0.0/0  mark match 0x2a to:VPN_IP

        ip rule add from all fwmark 42 lookup 42
        ip route show table 42
        default via VPN_IP dev tun0
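
    One kill-switch sketch to test (my assumption about the packet flow, not a verified fix): the re-emitted packets on eth0 belong to the openvpn process, not to groupname, and carry no mark, so matching the mark itself on eth0 should only ever catch traffic that is leaking around the tunnel:

        # Reject marked packets that try to leave on eth0 directly; packets that
        # went through tun0 leave eth0 re-encapsulated by openvpn, unmarked.
        iptables -A OUTPUT -o eth0 -m mark --mark 42 -j REJECT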

  • Fix Video timelines

    - by Josh
    I have been ripping all of my DVDs, and it seems the way to get the highest quality is to have DVD Shrink decrypt, rip and decompress them. After that I usually end up with a high-quality (large) set of .vob files in a classic DVD structure. Then I use a Python script I wrote to automate finding the title sequence, combining all of the title sequence's .vob files into one file (similar to the "copy /b" command in Windows), and changing the extension to .mpg (a more widely supported format than .vob). This lets me get a high-quality rip in about 40 minutes. The problem comes in playing the files. I need all of the ripped DVDs to play on my media computer using Windows Media Center, but Windows Media Center (and VLC, for that matter) thinks the video files are anywhere from 5 minutes to 0 minutes long. That isn't a problem in itself (the video still plays all the way through), but if you pause it, it starts over from the beginning when unpaused, and fast forward and rewind don't work. I suspect something is wrong with how the timeline is encoded in the video file. Various forums on the internet recommended using VirtualDub to fix the errors, but when I try to open the file, VirtualDub says the file is not MPEG-1 and may be MPEG-2. Is there any way to fix this? PS: I am aware there is a similar question, but it hasn't had any activity for 2 months and deals more with WMV files.
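
    A remux pass might be worth a try before deeper surgery (my suggestion; the filename is a placeholder): rewriting the container without re-encoding forces the muxer to generate fresh timestamps, which is exactly the part that concatenated VOBs tend to get wrong:

        # Copy both streams untouched into a freshly muxed file.
        ffmpeg -i combined.mpg -acodec copy -vcodec copy fixed.mpg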

  • ubuntu 10.04 + php + postfix

    - by mononym
    I have a server running Ubuntu 10.04, PHP 5.3.5 (FPM) and Nginx. I have installed Postfix and set it to loopback-only (I only need to send). The problem is that it is not sending. If I issue at the command line

        echo "testing local delivery" | mail -s "test email to localhost" [email protected]

    I get the email no problem, but mail sent through PHP does not arrive. When I send it via PHP, mail.log shows:

        Mar 28 10:15:04 host postfix/pickup[32102]: 435EF580D7: uid=0 from=<root>
        Mar 28 10:15:04 host postfix/cleanup[32229]: 435EF580D7: message-id=<20120328091504.435EF580D7@FQDN>
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: from=<root@FQDN>, size=1127, nrcpt=1 (queue active)
        Mar 28 10:15:04 host postfix/local[32230]: 435EF580D7: to=<root@FQDN>, orig_to=<root>, relay=local, delay=3.1, delays=3/0.01/0/0.09, dsn=2.0.0, status=sent (delivered to maildir)
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: removed

    Any help appreciated. My main.cf file:

        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = FQDN
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        #myorigin = $mydomain
        mydestination = FQDN, localhost.FQDN, , localhost
        relayhost = $mydomain
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        virtual_alias_maps = hash:/etc/postfix/virtual
        home_mailbox = mail/
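
    Since the log shows the message landing in root's local maildir rather than going out, a sketch to see what PHP actually hands to Postfix (my diagnostic, not from the post; the recipient is a placeholder):

        # What sendmail binary and flags does PHP use?
        php -r 'var_dump(ini_get("sendmail_path"));'
        # Bypass PHP and submit directly, with an explicit external recipient:
        echo "test body" | /usr/sbin/sendmail [email protected]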

  • Errors generating gpg keypair

    - by Evan Lynch
    I posted this originally on Stack Overflow, but was told it was off-topic and that this would be the better place, so I am reposting it here and deleting the original. I have a rather old PGP key, but I long ago lost the private key for it, so I'm trying to generate a new key with GPG on Windows 7. While it technically generates the key, GPA crashes every time I generate the keypair. I've tried this four times now, downloaded what appears to be the latest version of Gpg4win, and still get the problem. A comment on my original post pointed out that "GPA crashes" is not a very good description of the problem, but unfortunately I can't do much better than that: all it tells me is "gpa.exe has crashed and will close now", with no error dump or anything. Is there anything I can do to fix this, or is this just a bug in the latest version of Gpg4win? Here are the specs of what I'm using: GPA 0.9.4, GnuPG 2.0.22, Windows 7 64-bit, 5 GB of RAM. Also, I was told to try generating the keypair on the command line but can't find any documentation on how to do this in Windows 7. If anyone could link me to current documentation, that would be a good workaround for solving this problem.
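
    For the command-line route, key generation needs no GPA at all. A sketch using the gpg2.exe that Gpg4win installs (the interactive prompts walk through key type, size and expiry):

        gpg2 --gen-key
        gpg2 --list-keys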

  • What is the 'best practice' for installing perl modules on Solaris/OpenSolaris?

    - by AndrewR
    I'm currently writing setup instructions for some software I've written that is implemented as a set of Perl modules. Having done this for various flavours of Linux, I'm now doing the same for Solaris/OpenSolaris (v10 only). Part of the setup process is making sure that dependent Perl modules are installed. This has been pretty easy on Linux, as the Perl modules I require tend to be within the distro's packaging system (e.g. yum install perl-Cache-Cache). This is not the case on Solaris, so I'm working on setup instructions that use the CPAN module to fetch dependent modules (e.g. perl -MCPAN -e 'install Cache::Cache'). This works, but there are known problems with modules that require things to be built with a C compiler: the generated Makefile assumes you're using Sun's compiler and uses command-line options not understood by gcc, which you may be using instead. Consulting the Internet throws up a number of solutions, all of which work:

    - Install and use Sun's compiler
    - Use the perlgcc wrapper script
    - Edit the makefiles by hand (yuk)

    My question to those more familiar with Solaris than I am: is one of these the 'best' or 'most commonly used' method?
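
    For the perlgcc route, a sketch of how it slots into the usual CPAN build (my illustration; perlgcc ships with Solaris 10's bundled perl and rewrites the compiler settings for gcc):

        perl -MCPAN -e shell
        cpan> look Cache::Cache
        # ...inside the subshell CPAN opens in the unpacked source directory:
        perlgcc Makefile.PL
        make && make test && make install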

  • How to test an HTTPS URL with a given IP address

    - by GreatFire
    Let's say a website is load-balanced between several servers. I want to run a command to test whether it's working, such as curl DOMAIN.TLD. To isolate each IP address, I specify the IP manually, and because many websites may be hosted on the same server, I still provide a Host header, like this:

        curl IP_ADDRESS -H 'Host: DOMAIN.TLD'

    In my understanding, these two commands create the exact same HTTP request; the only difference is that in the latter I take the DNS lookup out of cURL's hands and do it manually (please correct me if I'm wrong). So far so good. But now I want to do the same for an HTTPS URL. Again, I could test it like curl https://DOMAIN.TLD, but I want to specify the IP manually:

        curl https://IP_ADDRESS -H 'Host: DOMAIN.TLD'

    Now I get a cURL error:

        curl: (51) SSL: certificate subject name 'DOMAIN.TLD' does not match target host name 'IP_ADDRESS'

    I can of course get around this by telling cURL not to check the certificate (the "-k" option), but that's not ideal. Is there a way to isolate the IP address being connected to from the host name being verified against the certificate?
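
    Newer curl releases grew an option for exactly this. A sketch (needs curl 7.21.3 or later): --resolve pins the name to an address while leaving certificate-name checking, and SNI, fully intact:

        curl --resolve DOMAIN.TLD:443:IP_ADDRESS https://DOMAIN.TLD/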

  • Broken Python installation on CentOS 5.8

    - by Beckett
    I have already searched for a solution via Google and Stack Overflow's search facility, but haven't found anything related specifically to my problem. Here it is: I needed Python 2.7.3 on a CentOS 5.8 machine which has only Python 2.4.3 preinstalled. There is no suitable version in its repositories, and I can't upgrade the installed version, so I decided to build Python from source. But I made a mistake: instead of make altinstall I ran make install, thus changing the default Python of the current installation. This was before I found the article "How to install Python 2.7.3 on CentOS 6.2"; I assume 5.8 and 6.2 aren't different enough to make it inapplicable. After installing the new Python version I installed pip, but when I tried to invoke it I got a "No module named pkg_resources" error. To solve this I installed setuptools from the repository, but that only led to another error: "Distribution Not Found". My final step was to follow the guide linked above, but I was unable to perform the last step: the easy_install-2.7 virtualenv command threw

        -bash: /usr/local/bin/easy_install-2.7: .: bad interpreter: Permission denied

    Now when I try to invoke pip or pip-2.7, both commands raise the same error with different binary names after "-bash:". Is there any way to fix this so I can install the new Python version (2.7.3) alongside the preinstalled one (2.4.3), according to the guide? Any help will be appreciated. P.S.: yum is working fine, although it needs Python to function, so I hope the damage I unknowingly caused isn't very severe. Also, I'm not a native English speaker, so I apologize for any occasional grammatical and/or spelling errors.
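
    The "bad interpreter" message usually points at the script's shebang target rather than the script itself. A diagnostic sketch (mine, not from the guide):

        # Which interpreter does the wrapper ask for, and is it executable?
        head -1 /usr/local/bin/easy_install-2.7
        ls -l /usr/local/bin/python2.7
        chmod +x /usr/local/bin/python2.7 /usr/local/bin/easy_install-2.7
        # A filesystem mounted noexec produces the same error:
        mount | grep -w /usr/local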

  • How do I reinitialise a failed RAID 5 drive using terminal on Ubuntu Server

    - by Stephen
    I've currently put together a new system, and part of that has been creating a software RAID 5 using 'mdadm' in Ubuntu Server. I successfully got to the point where I created the array using:

        sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    I left it to do its thing overnight, then used the following command to check on it:

        watch cat /proc/mdstat

    which returned:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sdd1[4](S) sdc1[2] sdb1[1] sda1[0](F)
              5860535808 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [_UU_]

        unused devices: <none>

    It appears that one drive has failed (and I'm not too savvy on why another is a spare). So, just to be sure that something else isn't amiss, I want to try to re-engage the failed drive. Can someone explain how to do that, and what I should do with the spare (if anything)? And how do I know when synchronisation is complete? The tutorial I used to get this far is located here: http://sonniesedge.co.uk/2009/06/13/software-raid-5-on-ubuntu-904/ Many thanks! p.s. Here is some extra information that may help:

        sudo mdadm --detail /dev/md0
        /dev/md0:
                Version : 1.2
          Creation Time : Mon Jun 18 21:14:21 2012
             Raid Level : raid5
             Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
          Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent

            Update Time : Mon Jun 18 21:50:26 2012
                  State : clean, FAILED
         Active Devices : 2
        Working Devices : 3
         Failed Devices : 1
          Spare Devices : 1

                 Layout : left-symmetric
             Chunk Size : 512K

                   Name : myraidbox:0  (local to host myraidbox)
                   UUID : a269ee94:a161600c:fb1665e7:bd2f27b3
                 Events : 13

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       17        1      active sync   /dev/sdb1
               2       8       33        2      active sync   /dev/sdc1
               3       0        0        3      removed

               0       8        1        -      faulty spare   /dev/sda1
               4       8       49        -      spare   /dev/sdd1
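
    A sketch of the usual re-add cycle, using the device names from the --detail output above (if sda1 keeps getting kicked out, suspect the drive or its cabling rather than re-adding again):

        sudo mdadm /dev/md0 --remove /dev/sda1
        sudo mdadm /dev/md0 --add /dev/sda1
        # The spare (sdd1) should be pulled in automatically once a rebuild runs;
        # sync is finished when /proc/mdstat shows [UUUU] and no recovery line.
        watch cat /proc/mdstat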

  • ffmpeg - h264 to xvid creates large file

    - by fatnic
    I'm trying to use ffmpeg to convert an h264/aac video file to an xvid/mp3 file so I can play it on my ultra-cheap media player. At the moment the converted video file is TWICE the size of the original mp4. Is there any way to get a smaller file size without losing too much quality? Even a drop to -qmin 1 is pretty awful! The command I'm using is:

        ffmpeg -i input.mp4 -vcodec libxvid -sameq -acodec libmp3lame -ab 128k -ac 2 output.avi

    And the ffmpeg output is:

        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4'
          Metadata:
            major_brand      : isom
            minor_version    : 1
            compatible_brands: isomavc1
          Duration: 01:34:27.69, start: 0.000000, bitrate: 1520 kb/s
            Stream #0.0(und): Video: h264, yuv420p, 720x304 [PAR 1:1 DAR 45:19], 1387 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc
            Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s
        Output #0, avi, to 'output.avi':
          Metadata:
            ISFT             : Lavf52.64.2
            Stream #0.0(und): Video: mpeg4, yuv420p, 720x304 [PAR 1:1 DAR 45:19], q=2-31, 200 kb/s, 25 tbn, 25 tbc
            Stream #0.1(und): Audio: libmp3lame, 48000 Hz, stereo, s16, 128 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
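
    For what it's worth, -sameq keeps the same quantizer scale rather than "the same quality", which is why the output balloons. A sketch with a fixed quantizer instead (tune the -qscale value; lower means better quality and a larger file):

        ffmpeg -i input.mp4 -vcodec libxvid -qscale 4 \
               -acodec libmp3lame -ab 128k -ac 2 output.avi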

  • Configure IIS site to work with host header & hosts file entry

    - by HarveySaayman
    I'm a bit of an IIS / web noob (I'm a C# backend service / WinForms dev), so please bear with me. :-) I've set up a site in IIS on my local dev machine. In the bindings section of the site I've added 4 bindings, all for http:

        Host Name                  Port   IP Address
        blog.sourcecube.co.za      26581  *
        www.blog.sourcecube.co.za  26581  *
        blog.sourcecube.co.za      26581  127.0.0.1
        www.blog.sourcecube.co.za  26581  127.0.0.1

    In my hosts file (drivers\etc\hosts), I've added the following entries:

        127.0.0.1 blog.sourcecube.co.za
        127.0.0.1 www.blog.sourcecube.co.za

    When I ping the domain name from the command line, it does resolve to the loopback address, 127.0.0.1. So what I expect to happen when I navigate to blog.sourcecube.co.za in my browser is for it to resolve to 127.0.0.1, and when the request hits IIS, it should know which site to serve because of the host header. But when I navigate to blog.sourcecube.co.za, I get an "Unable to connect, Firefox can't establish a connection to the server at blog.sourcecube.co.za" error. What am I doing wrong?

    --- UPDATE --- Navigating to blog.sourcecube.co.za:26581 from my browser works... I'd like to get it working without specifying the port number, though.
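
    A sketch of one way forward (my assumption about the cause: a browser given no explicit port uses 80, and every binding above is on 26581). appcmd can add an http binding on port 80; the site name is a placeholder:

        %windir%\system32\inetsrv\appcmd set site /site.name:"MySite" /+bindings.[protocol='http',bindingInformation='*:80:blog.sourcecube.co.za']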

  • GNU screen, how to get current sessionname programmatically

    - by Jimm Chen
    [This can be considered step 2 of my previous question, "Is it possible to change GNU screen session name after created?"] I'd like to write a script that can display the current screen session name and change it. For example:

        sren armcross

    would change the session name to armcross (ARM gcc cross compiler) and output something like:

        screen session name changed from '25278.pts-15.linux-ic37' to 'armcross'

    So the key question is how to get the current session name, not only to display the old name but because, according to my previous question, I have to know it (to pass to -d -r) before I can change it to something else. Can we use $STY for the current session name? No: $STY does not change after you have changed the session name to a user-defined one. However, for the command

        screen -d -r <oldsessname> -X sessionname armcross

    <oldsessname> should be the user-defined name (if one was ever defined) instead of $STY; otherwise screen reports "No screen session found." Maybe there is a verbose way: use screen -list to list all sessions (user-defined names included), then match the pid part of $STY against the listed sessions to find the current session's user-defined name. It shouldn't be that verbose for such a straightforward question, should it? The -d -D and -r -R options seem to expose too much implementation detail to screen's users. It seems that to rename a session, you have to detach it, do the rename, and then reattach it. Right? My environment: openSUSE 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06. Thank you.
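
    A sketch of that "verbose way", wrapped up so it isn't verbose to use (my own take on the matching idea above):

        # Recover the current session's (possibly user-defined) name by matching
        # the pid part of $STY against screen -ls output.
        pid="${STY%%.*}"
        name=$(screen -ls | awk -v p="$pid" '$1 ~ "^" p "\\." {print $1}')
        echo "current session: ${name:-$STY}"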

  • What steps should I take to debug this non-starting hvm virtual machine?

    - by Ophidian
    I have a dom0 machine running CentOS 5.4 with all the latest updates, using Xen as my hypervisor. I am using Xen in part because this machine was set up prior to KVM being included in RHEL, and in part because KVM's network bridging configuration is not nearly as simple as Xen's. The dom0 machine is headless, and I do all of my VM management via virsh from the command line. I have two hvm domUs:

    - A web server running CentOS 5.4
    - A mail server running Gentoo

    Both VMs are backed by LVs on the dom0 but do not use LVM in the domU, and both have virtually identical libvirt configurations (differing only in expected things like name, UUID, NIC MAC, VNC port, etc.). The web server domU (WSdomU hereafter) does not start since applying the most recent kernel update (kernel-xen-2.6.18-164.15.1.el5.x86_64 and kernel-2.6.18-164.15.1.el5.x86_64 for the dom0 and WSdomU respectively). By 'not start' I mean it appears to be running but uses no CPU cycles, does not bring up a graphical console, and does not respond on the network; it is listed as "no state" rather than the normal "running" or "blocked" in xentop. The mail server domU starts fine and functions normally. Here are the steps I have taken so far that did not solve the problem:

    - Reboot the dom0 to see if things come up on their own
    - Check xen dmesg on the dom0
    - Check the xend logs (a cursory viewing did not show anything blatant; specific suggestions of things to look for would be appreciated)
    - Attempt to connect to the WSdomU's graphical (VNC) console from the dom0
    - Shut down the mail server domU and attempt to start the WSdomU
    - Check the SELinux labels on the backing LVs (they're the same)
    - Set SELinux to permissive and attempt to start the WSdomU
    - Use virsh edit to try tweaking the WSdomU config
    - virsh undefine, reboot, virsh define the WSdomU config
    - dd the WSdomU LV to an .img file, copy it to my Fedora desktop and run it under KVM (works fine)

    What steps should I take next to debug this? I will edit in any additional configuration requested in the comments.
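
    A few places that often say why a Xen hvm domain wedges in "no state" (a sketch of where I'd look next; the domain name matches the question's placeholder):

        # Hypervisor-level messages around the failed start:
        xm dmesg | tail -50
        # Per-domain qemu device-model log, often the real clue for hvm guests:
        tail /var/log/xen/qemu-dm-WSdomU.log
        # Full view of the stuck domain's configuration and state:
        xm list --long WSdomU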
