Search Results

Search found 16902 results on 677 pages for 'strange errors'.


  • I just restarted Apache and now the server is down

    - by James
    I am pretty terrified right now. I'm scared I'm going to get a call in a couple minutes from a hundred people saying the website doesn't work. I was at the terminal changing some configuration files when I went to restart the server to update the .conf files with this command: /etc/init.d/apache2 graceful After I ran that, none of the websites work and I have no idea what to do. There are about 100 errors I am getting according to the log files. They all begin with "PHP Notice" and most relate to "use of undefined constant" Also, I just spoke with a coworker, describing what I did, and he noticed that there are two installations of apache on the server and that I restarted the one that we don't use. This is what the error log says (assuming it's the correct error log): [Wed Jan 05 11:52:06 2011] [notice] Graceful restart requested, doing restart Warning: DocumentRoot [/u/apps/staging/antetr/current/public/] does not exist [Wed Jan 05 11:52:08 2011] [warn] NameVirtualHost *:80 has no VirtualHosts (98)Address already in use: make_sock: could not bind to address [::]:80 (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs FINAL UPDATE: Ok, I fixed it. The problem was (as you experts facepalming probably know) that it couldn't access an error log in the directory I was working in. I created an empty error log file and tried the restart command again and now all the sites are back up... Though my original problem is still there.. Thanks to all those who offered advice, it really helped and let me breathe for a moment.
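    A minimal recovery sketch for this kind of situation, using the paths from the log excerpt above (adjust to your layout): validate the config, create the log paths Apache complains about, and only then restart.

      # Confirm which config file this Apache instance loads, and validate it
      apache2ctl -V | grep -i server_config
      apache2ctl configtest

      # Create the missing error-log path named in the messages above
      mkdir -p /u/apps/production/madfilmdash/current/log
      touch /u/apps/production/madfilmdash/current/log/apache-error.log

      # Graceful restart only once configtest reports "Syntax OK"
      /etc/init.d/apache2 graceful

    With two Apache installations on the box, running the same checks against the other instance's apachectl and config first would show which one actually serves the sites.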


  • broken apache .htaccess (mod_rewrite)

    - by Tim
    Hey there, I'm running into an Apache mod_rewrite configuration issue on one of our machines. Has anyone encountered or overcome any of these issues? URL1 ( http://www.uppereast.com ) is not being redirected to URL2 ( http://www.nyclocalliving.com ). This definitely worked in my test environment where a localhost address was rewritten to URL2 ( RewriteRule ^http://upe.localhost$ http://www.nyclocalliving.com ). I'm trying to get all of the redirect rules working ( 2200+ ), but the 'http://www.nyclocalliving.com' site encounters a server error if I use roughly 1000 or more rules. A) .htaccess file - I've tried the simplest approach, which worked in a local environment 75 # Various rewrite rules. 76 <IfModule mod_rewrite.c> 77 RewriteEngine on 78 79 # BEGIN new URL Mapping rules 80 #RewriteRule ^http://www.uppereast.com/$ http://www.nyclocalliving.com ... 2307 #RewriteRule ^http://www.uppereast.com/zipcodechange.html$ http://www.nyclocalliving.com/zip-code-change fig. 1 B) /var/log/httpd/error_log file - there are these segmentation fault errors when I enable the first rule ( line 80 ); no error logs otherwise. 1893 [Fri Sep 25 17:53:46 2009] [notice] Digest: generating secret for digest authentication ... 1894 [Fri Sep 25 17:53:46 2009] [notice] Digest: done 1895 [Fri Sep 25 17:53:46 2009] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations 1896 [Fri Sep 25 17:53:47 2009] [notice] child pid 29774 exit signal Segmentation fault (11) 1897 [Fri Sep 25 17:53:47 2009] [notice] child pid 29775 exit signal Segmentation fault (11) 1898 [Fri Sep 25 17:53:47 2009] [notice] child pid 29776 exit signal Segmentation fault (11) 1899 [Fri Sep 25 17:53:47 2009] [notice] child pid 29777 exit signal Segmentation fault (11) 1900 [Fri Sep 25 17:53:47 2009] [notice] child pid 29778 exit signal Segmentation fault (11) 1901 [Fri Sep 25 17:53:47 2009] [notice] child pid 29779 exit signal Segmentation fault (11) fig. 2 C) Some more debug information from the shell; mod_rewrite is turned on and this is the machine architecture 1 # apachectl -t -D DUMP_MODULES | more 2 Loaded Modules: 3 core_module (static) 4 ... 5 rewrite_module (shared) 1 # uname -a 2 Linux RegionalWeb 2.6.24-23-xen #1 SMP Mon Jan 26 03:09:12 UTC 2009 x86_64 x86_64 x86_64 GNU/Linux fig. 3 I looked into some previous posts (http://serverfault.com/questions/18744/htaccess-not-working-modrewrite), but didn't find a solution for this. I'm sure there's a small switch somewhere that I'm missing. Thanks in advance, Tim
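    Since RewriteRule patterns only ever match the URL path (never the scheme or hostname), and thousands of individual rules are evaluated on every request, one hedged alternative is a single rule backed by a RewriteMap. RewriteMap is only allowed in server/virtual-host context, not in .htaccess; the map file path below is an example.

      # Virtual-host context (not .htaccess)
      RewriteEngine On
      RewriteMap redirects txt:/etc/httpd/maps/uppereast-redirects.txt

      RewriteCond %{HTTP_HOST} ^(www\.)?uppereast\.com$ [NC]
      RewriteRule ^ ${redirects:%{REQUEST_URI}|http://www.nyclocalliving.com} [R=301,L]

    The map file then holds one "old-path new-url" pair per line, e.g. "/zipcodechange.html http://www.nyclocalliving.com/zip-code-change", and unmapped paths fall back to the new homepage.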


  • Providing reverse records for records that map to ISP IP

    - by thejartender
    I have been instructed to use my ISP IP as a temporary fix for mapping my name server and domain records, since my router dishes out RFC 1918 addresses to the devices on my network (where I am running an Ubuntu server, my router and my development laptop), and so I have changed: $TTL 3H @ IN SOA ns.thejarbar.org. email. ( 13112012 28800 3600 604800 38400 ); thejarbar.org. IN A 10.0.0.42 @ IN NS ns.thejarbar.org. yuccalaptop IN A 10.0.0.19 ns IN A 10.0.0.42 gw IN A 10.0.0.138 www IN CNAME thejarbar.org. to a temporary version of: $TTL 3H @ IN SOA ns.thejarbar.org. email. ( 13112012 28800 3600 604800 38400 ); thejarbar.org. IN A 88.89.190.171 @ IN NS ns.thejarbar.org. yuccalaptop IN A 10.0.0.19 ns IN A 88.89.190.171 gw IN A 10.0.0.138 www IN CNAME thejarbar.org. I am using BIND, and when I run named-checkzone on this file against my zone configuration it reports no errors. I then run dig thejarbar.org @88.89.190.171 and get the expected authoritative reply. My issue is creating my reverse DNS zone, and I would greatly appreciate assistance and guidance. I am stuck on how to represent the reverse records correctly for the addresses that map to my ISP IP. I am trying: $TTL 3H 0.0.10.in-addr.arpa. IN SOA ns.thejarbar.org. email. ( 13112012 28800 3600 604800 38400 ); 171.190.89.88. IN PTR thejarbar.org. 171.190.89.88. IN NS ns.thejarbar.org. 19 IN PTR yuccalaptop.thejarbar.org. 138 IN PTR gw.thejarbar.org. www IN PTR www.thejarbar.org. But running named-checkzone on this file returns an error that the zone has no NS records. I would greatly appreciate assistance.
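    Two separate issues seem tangled here: the 10.0.0.x reverse zone needs its own SOA and NS records at the zone apex, and the reverse zone covering the public address 88.89.190.171 is delegated to the ISP, so only the ISP can publish a PTR for it (you can request one, but not serve it yourself). A hedged sketch of just the private reverse zone, with names taken from the question and an example file path:

      ; /etc/bind/db.10.0.0 (example path)
      $TTL 3H
      $ORIGIN 0.0.10.in-addr.arpa.
      @    IN SOA ns.thejarbar.org. email.thejarbar.org. (
               2012111301 ; serial
               28800 3600 604800 38400 )
           IN NS  ns.thejarbar.org.
      19   IN PTR yuccalaptop.thejarbar.org.
      42   IN PTR ns.thejarbar.org.
      138  IN PTR gw.thejarbar.org.

    It can then be checked with: named-checkzone 0.0.10.in-addr.arpa /etc/bind/db.10.0.0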


  • DNS entries for OCS 2007 R2 basic deploy

    - by Anero
    I'm doing a test deploy on a Lab with 3 VMs: TEST-DC: DC / DHCP / DNS / Root CA (Joined to TEST.AD Domain) TEST-CS: OCS Front End (Joined to TEST.AD Domain - IP: 10.0.128.1) TEST-EDGES: OCS Edge Server (Joined to Workgroup: EDGE-WKG - Internal IP: 10.0.128.3, External IPs: 192.168.129.12 - Access Edge Server, 192.168.129.13 - Web Conferencing, 192.168.129.14 - A/V) I can login with the Communicator Client from within computers in the domain (using [email protected]) and even the Automatic Sign-In works as expected. Nevertheless, I cannot login neither from within machines in the domain nor from outside the domain using [email protected]. I'm pretty sure it is a DNS related issue, so I'm including below a list of the entries. DNS Entries on TEST-DC: Forward Lookup Zones TEST.AD sip.test.ad (Host A). IP Address: 10.0.128.1 sipinternal.test.ad (Host A). IP Address: 10.0.128.1 sipexternal.test.ad (Host A). IP Address: 10.0.128.3 _sipinternaltls._tcp.test.ad (Service Location SRV). Port: 5061. Host: sipinternal.test.ad _sipinternal._tcp.test.ad (Service Location SRV). Port: 5061. Host: sipinternal.test.ad _sip._tcp.test.ad (Service Location SRV). Port: 5061. Host: sipexternal.test.ad _sipfederationtls._tcp.test.ad (Service Location SRV). Port: 5061. Host: sipexternal.test.ad _sip._tls.test.ad (Service Location SRV). Port: 443. Host: sipexternal.test.ad TEST.COM sip.test.com (Host A). IP Address: 10.0.128.1 sipinternal.test.com (Host A). IP Address: 10.0.128.1 sipexternal.test.com (Host A). IP Address: 10.0.128.3 _sipinternaltls._tcp.test.com (Service Location SRV). Port: 5061. Host: sipinternal.test.com _sipinternal._tcp.test.com (Service Location SRV). Port: 5061. Host: sipinternal.test.com _sip._tcp.test.com (Service Location SRV). Port: 5061. Host: sipexternal.test.com _sip._tls.test.ad (Service Location SRV). Port: 443. Host: sipexternal.test.ad Validation Errors OCS Front End Edge Server I ran the OCS 2007 Automatic Sign-In Troubleshooting and all DNS entries for both TEST.AD and TEST.COM are reported to be OK. What am I missing?
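    One detail that stands out in the listing above: under TEST.COM the last record is _sip._tls.test.ad pointing at sipexternal.test.ad, rather than _sip._tls.test.com pointing at sipexternal.test.com. A hedged sketch of adding that record with dnscmd on TEST-DC (verify the port and target against your Edge topology before relying on it):

      dnscmd TEST-DC /RecordAdd test.com _sip._tls SRV 0 0 443 sipexternal.test.com
      nslookup -type=SRV _sip._tls.test.com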


  • Setting up Apache and PHP on Mac OS X Snow Leopard

    - by Martin Bean
    I've recently purchased an Apple iMac. Unfortunately, enabling Apache and PHP has thrown up some problems. I enabled Mac's built-in Web Sharing through System Preferences, at which point I got an output and could add HTML files to my user directory. However, PHP files were being displayed rather than interpreted. I then discovered this is because PHP isn't enabled by default on Mac's Apache set-up. After a quick Google search, I came across this page: http://developer.apple.com/mac/articles/internet/phpeasyway.html I proceeded to the section, Enabling PHP in Apache, copying and pasting the following code snippet into a new Terminal window and hitting Return: set admin_email to (do shell script "defaults read AddressBookMe ExistingEmailAddress") user_www=$HOME/Sites filename=php-test user_index=${user_www}/${filename}.php user_db=${user_www}/${filename}-db.sqlite3 # NOTE: Having a writeable database in your home directory can be a security risk! conf=`apachectl -V | awk -F= '/SERVER_CONFIG/ {print \$2}'| sed 's/"//g'` conf_old=$conf.$$ conf_new=/tmp/php_conf.new touch $user_db chmod a+r $user_index chmod a+w $user_db chmod a+w $user_www echo "Enabling PHP in $conf ..." sed '/#LoadModule php5_module/s/#LoadModule/LoadModule/' $conf | sed "s^[email protected]^<b>\$admin_email</b>^" > $conf_new echo "(Re)Starting Apache ..." osascript <<EOF do shell script "/bin/mv -f $conf $conf_old; /bin/mv $conf_new $conf; /usr/sbin/apachectl restart" with administrator privileges EOF Unfortunately, this has completed thrown Apache and now nothing is being served; instead I'm receiving "Failed to open page" errors because it cannot connect to the server, despite Web Sharing still being active in System Preferences. So therefore I guess my question is this: how can I undo the changes made by the copy-and-pasting of the above code snippet? Admittedly, I don't understand what the above did; I just thought it looked like a Terminal command and tried it. I have no experience in setting up Apache on Mac OS X (and I've only installed XAMPP and WampServer on Windows). So any points on reversing the aforementioned, and then successfully enabling PHP would be great. EDIT: I've discovered, via Console, the following error message is being recorded when trying to browse to 127.0.0.1... (org.apache.httpd) Throttling respawn: Will start in 10 seconds no listening sockets available, shutting down Unable to open logs (org.apache.httpd[13453]) Exited with exit code: 1 Does this point any more to the issue? EDIT #2: I'm now getting this in Console... 15/02/2010 21:24:14 osascript[3597] Error loading /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: dlopen(/Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types, 262): no suitable image found. Did find: /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: no matching architecture in universal wrapper
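    The pasted script moves the original httpd.conf aside (as httpd.conf.<PID>) before writing a broken copy, so the usual way back is to restore that backup and enable PHP by hand. A rough sketch, assuming the stock Snow Leopard config location; the <PID> suffix is whatever the script happened to run under:

      # Find the active config and any backups the script left behind
      apachectl -V | grep SERVER_CONFIG_FILE
      ls -l /private/etc/apache2/httpd.conf*

      # Restore the newest backup (substitute the suffix you actually see)
      sudo cp /private/etc/apache2/httpd.conf.<PID> /private/etc/apache2/httpd.conf

      # Re-enable PHP, validate, restart
      sudo sed -i.bak 's|^#LoadModule php5_module|LoadModule php5_module|' /private/etc/apache2/httpd.conf
      apachectl configtest
      sudo apachectl restart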


  • umount bind of stale NFS

    - by Paul Eisner
    I've got a problem removing mounts created with mount -o bind from a locally mounted NFS folder. Assume the following mount structure: NFS mounted directory: $ mount -o rw,soft,tcp,intr,timeo=10,retrans=2,retry=1 \ 10.20.0.1:/srv/source /srv/nfs-source Bound directory: $ mount -o bind /srv/nfs-source/sub1 /srv/bind-target/sub1 Which results in this mount map $ mount /dev/sda1 on / type ext3 (rw,errors=remount-ro) # ... 10.20.0.1:/srv/source on /srv/nfs-source type nfs (rw,soft,tcp,intr,timeo=10,retrans=2,retry=1,addr=10.20.0.100) /srv/nfs-source/sub1 on /srv/bind-target/sub1 type none (rw,bind) If the server (10.20.0.1) goes down (e.g. ifdown eth0), the handles become stale, which is expected. I can now un-mount the NFS mount with force $ umount -f /srv/nfs-source This takes some seconds, but works without any problems. However, I cannot un-mount the bound directory in /srv/bind-target/sub1. The forced umount results in: $ umount -f /srv/bind-target/sub1 umount2: Stale NFS file handle umount: /srv/bind-target/sub1: Stale NFS file handle umount2: Stale NFS file handle Here is a trace http://pastebin.com/ipvvrVmB I've tried unmounting the sub-directories beforehand and finding any processes accessing anything within the NFS or bind mounts (there are none). lsof also complains: $ lsof -n lsof: WARNING: can't stat() nfs file system /srv/nfs-source Output information may be incomplete. lsof: WARNING: can't stat() nfs file system /srv/bind-target/sub1 (deleted) Output information may be incomplete. lsof: WARNING: can't stat() nfs file system /srv/bind-target/ Output information may be incomplete. I've tried with recent stable Linux kernels 3.2.17, 3.2.19 and 3.3.8 (cannot use 3.4.x, because we need the grsecurity patch, which is not yet supported - grsecurity is not patched in in the tests above!). My nfs-utils are version 1.2.2 (Debian stable). Does anybody have an idea how I can either: force the un-mount some other way? (any dirty trick is welcome; data loss or damage is negligible at this point) use something else instead of mount -o bind? (cannot use soft links, because the mounted directories will be used in a chroot; bindfs via FUSE is far too slow to be an option) Thanks, Paul Update 1 With 2.6.32.59 the umount of the (stale) sub-mounts works just fine. It seems to be a kernel regression bug. The above tests were with NFSv3. Additional tests with NFSv4 showed no change. Update 2 We have now tested multiple 2.6 and 3.x kernels and are sure that this was introduced in 3.0.x. We will file a bug report; hopefully they figure it out.
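    One workaround worth trying before anything more drastic is a lazy (detached) unmount, which removes the mount point from the namespace immediately and lets the kernel clean up whenever it can; given that data loss is acceptable here, it fits the "any dirty trick" request. A sketch:

      # Detach the bind mount even though the underlying NFS handle is stale
      umount -l /srv/bind-target/sub1

      # Or combine force and lazy if the plain lazy unmount is refused
      umount -f -l /srv/bind-target/sub1

      # Verify it is gone
      grep bind-target /proc/mounts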


  • Linux not picking up new partition correctly on emc pseudo device

    - by James
    Hi We have a database server running oracle rac. We were recently running out of space on the main LUN that it is attached to. I created a new 100GB LUN and concatenated this onto the existing LUN creating a new MetaLUN. After some messing I managed to get linux to recognise the new space. I then created a new partition in on the pseudo device, to use the new space. Previously when I have done this on other system the next step is to create an ASM disk on the new partition and add this disk to the oracle disk group. This however fails. I am aware of various issues with ASM and powerpath, but I don't think this is the issue here. As on while investigating the issue I discovered that one of the underlying logical device is not reflecting the size change. See below; Powermt displays all of the underlying logical units [root@XXXXX~]# powermt display dev=emcpowerd Pseudo name=emcpowerd CLARiiON ID=CKM00091500009 [VFRAC2] Logical device ID=6006016030312200787502866C65DE11 [LUN 30] state=alive; policy=CLAROpt; priority=0; queued-IOs=0 Owner: default=SP A, current=SP A Array failover mode: 1 ============================================================================== ---------------- Host --------------- - Stor - -- I/O Path - -- Stats --- ### HW Path I/O Paths Interf. Mode State Q-IOs Errors ============================================================================== 3 qla2xxx sde SP A0 active alive 0 0 3 qla2xxx sdj SP B0 active alive 0 0 4 qla2xxx sdo SP A1 active alive 0 0 4 qla2xxx sdt SP B1 active alive 0 0 Fdisk on the pseudo device shows correct space. [root@XXXXX ~]# fdisk -l /dev/emcpowerd Disk /dev/emcpowerd: 429.4 GB, 429496729600 bytes 255 heads, 63 sectors/track, 52216 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/emcpowerd1 1 39162 314568733+ 83 Linux /dev/emcpowerd2 39163 52216 104856255 83 Linux fdisk on one of the logical units is wrong [root@XXXXX~]# fdisk -l /dev/sde Disk /dev/sde: 322.1 GB, 322122547200 bytes 255 heads, 63 sectors/track, 39162 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sde1 1 39162 314568733+ 83 Linux /dev/sde2 39163 52216 104856255 83 Linux fdisk on the rest of the units is fine [root@XXXXX ~]# fdisk -l /dev/sdj Disk /dev/sdj: 429.4 GB, 429496729600 bytes 255 heads, 63 sectors/track, 52216 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdj1 1 39162 314568733+ 83 Linux /dev/sdj2 39163 52216 104856255 83 Linux Also when I created the the partition linux did not create the any entries in the /dev directory for the second partition so I created these manually [root@XXXXX dev]# mknod sde2 b 8 66 [root@XXXXX dev]# ls -al sd[ejot]? brw-r----- 1 root disk 8, 65 Dec 29 14:20 sde1 brw-r--r-- 1 root disk 8, 66 Apr 8 20:31 sde2 brw-r----- 1 root disk 8, 145 Dec 29 14:19 sdj1 brw-r--r-- 1 root disk 8, 146 Apr 8 20:33 sdj2 brw-r----- 1 root disk 8, 225 Apr 6 23:12 sdo1 brw-r--r-- 1 root disk 8, 226 Apr 8 20:33 sdo2 brw-r----- 1 root disk 65, 49 Dec 29 14:19 sdt1 brw-r--r-- 1 root disk 65, 50 Apr 8 20:33 sdt2 This is a production server that we cannot easily reboot. Any ideas would be much appreciated. J
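    Without rebooting, the usual next step is to ask the kernel to re-read the capacity and partition table of the one stale path (sde) and then confirm all four paths agree; a hedged sketch using the device names from the question:

      # Re-read the capacity of the out-of-date path
      echo 1 > /sys/block/sde/device/rescan

      # Re-read its partition table (either tool should do)
      blockdev --rereadpt /dev/sde
      partprobe /dev/sde

      # All four paths should now report 429.4 GB and matching partitions
      fdisk -l /dev/sde /dev/sdj /dev/sdo /dev/sdt | grep '^Disk /dev'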


  • Incorrect gzipping of http requests, can't find who's doing it

    - by Ned Batchelder
    We're seeing some very strange mangling of HTTP responses, and we can't figure out what is doing it. We have an app server handling JSON requests. Occasionally, the response is returned gzipped, but with incorrect headers that prevent the browser from interpreting it correctly. The problem is intermittent, and changes behavior over time. Yesterday morning it seemed to fail 50% of the time, and in fact, seemed tied to one of our two load-balanced servers. Later in the afternoon, it was failing only 20 times out of 1000, and didn't correlate with an app server. The two app servers are running Apache 2.2 with mod_wsgi and a Django app stack. They have identical Apache configs and source trees, and even identical packages installed on Red Hat. There's a hardware load balancer in front, I don't know the make or model. Akamai is also part of the food chain, though we removed Akamai and still had the problem. Here's a good request and response: * Connected to example.com (97.7.79.129) port 80 (#0) > POST /claim/ HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15 > Host: example.com > Accept: */* > Referer: http://example.com/apps/ > Accept-Encoding: gzip,deflate > Content-Length: 29 > Content-Type: application/x-www-form-urlencoded > } [data not shown] < HTTP/1.1 200 OK < Server: Apache/2 < Content-Language: en-us < Content-Encoding: identity < Content-Length: 47 < Content-Type: application/x-javascript < Connection: keep-alive < Vary: Accept-Encoding < { [data not shown] * Connection #0 to host example.com left intact * Closing connection #0 {"msg": "", "status": "OK", "printer_name": ""} And here's a bad one: * Connected to example.com (97.7.79.129) port 80 (#0) > POST /claim/ HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15 > Host: example.com > Accept: */* > Referer: http://example.com/apps/ > Accept-Encoding: gzip,deflate > Content-Length: 29 > Content-Type: application/x-www-form-urlencoded > } [data not shown] < HTTP/1.1 200 OK < Server: Apache/2 < Content-Language: en-us < Content-Encoding: identity < Content-Type: application/x-javascript < Content-Encoding: gzip < Content-Length: 59 < Connection: keep-alive < Vary: Accept-Encoding < X-N: S < { [data not shown] * Connection #0 to host example.com left intact * Closing connection #0 ?V?-NW?RPR?QP*.I,)-???A??????????T??Z? ??/ There are two things to notice about the bad response: It has two Content-Encoding headers, and the browsers seem to use the first. So they see an identity encoding header, and gzipped content, so they can't interpret the response. The bad response has an extra "X-N: S" header. Perhaps if I could find out what intermediary adds "X-N: S" headers to responses, I could track down the culprit...
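    To work out which hop injects the second Content-Encoding header and the X-N header, it helps to replay the same POST directly against each app server, bypassing Akamai and the load balancer, and compare headers; a sketch with hypothetical backend addresses:

      #!/bin/bash
      # Direct addresses of the two app servers (placeholders)
      for backend in 10.0.0.11 10.0.0.12; do
          echo "== $backend =="
          curl -s -o /dev/null -D - \
               -H 'Host: example.com' \
               -H 'Accept-Encoding: gzip,deflate' \
               -d 'claim=test' \
               "http://$backend/claim/" | grep -iE '^content-encoding|^x-n:'
      done

    If neither backend ever produces the doubled header when hit directly, suspicion shifts to the load balancer or CDN layer.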


  • How to reset mysql's replication settings completely, without reinstalling it?

    - by user38060
    I set up mysql replication by adding references to binlogs, relay logs etc in my.cnf restarted mysql, it worked. I wanted to change it so I deleted all binlog related files including log-bin.index, removed binlog statements from my.cnf restarted server, works set master to '', purge master logs since now(), reset slave, stop slave, stop master. now, to set up replication again, I added binlog statements to the server. But then I hit this problem when restarting with: sudo mysqld (the only way to see mysql's startup errors) I get this error: /usr/sbin/mysqld: File '/etc/mysql/var/log-bin.index' not found (Errcode: 13) Because indeed, this file does not exist! (I deleted it, while trying to set up a new replication system) Hmm, if I change the config line to: log-bin-index = log-bin.index I get a different error: [ERROR] Can't generate a unique log-filename /etc/mysql/var/bin.(1-999) [ERROR] MSYQL_BIN_LOG::open failed to generate new file name. [ERROR] Aborting The first time I set up replication on this system, I didn't need to create this file. I did the same thing - added references to a previously non-existing file, and mysql created it. Same with relay logs, etc. I don't know why mysql insists on trying to read the old folder. Should I just reinstall the whole package again? That seems like overkill. my my.cnf: [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = IP key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP table_cache = 64 sort_buffer =64K net_buffer_length =2K query_cache_limit = 1M query_cache_size = 16M slow_query_log_file = /etc/mysql/var/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes expire_logs_days = 10 max_binlog_size = 100M server-id = 3 log-bin = /etc/mysql/var/bin.log log-slave-updates log-bin-index = /etc/mysql/var/log-bin.index log-error = /etc/mysql/var/error.log relay-log = /etc/mysql/var/relay.log relay-log-info-file = /etc/mysql/var/relay-log.info relay-log-index = /etc/mysql/var/relay-log.index auto_increment_increment = 10 auto_increment_offset = 3 master-host = HOST master-user = USER master-password=PWD replicate-do-db = DBNAME collation_server=utf8_unicode_ci character_set_server=utf8 skip-character-set-client-handshake [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash [myisamchk] key_buffer_size = 16M sort_buffer_size = 8M [mysqlhotcopy] interactive-timeout !includedir /etc/mysql/conf.d/ Update: Changing all the /etc/mysql/var/xxx paths in binlog & relay log statements to local has somehow solved the problem. I thought it was apparmor causing it at first, but when I added /etc/mysql/* rw, to apparmor's config and restarted it, it still couldn't read the full path.
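    Errcode 13 is EACCES (permission denied), which on Ubuntu usually points at AppArmor rather than a genuinely missing file — consistent with the update at the end. A hedged sketch of the two usual fixes: keep the logs where mysqld may already write, or explicitly allow the /etc/mysql/var path in the profile (profile paths vary by release):

      # Option 1: move binlogs/relay logs to a directory owned by mysql
      mkdir -p /var/log/mysql && chown mysql:mysql /var/log/mysql
      #   my.cnf:  log-bin   = /var/log/mysql/bin.log
      #            relay-log = /var/log/mysql/relay.log

      # Option 2: allow the existing path in the AppArmor profile
      # (add inside the profile block of /etc/apparmor.d/usr.sbin.mysqld):
      #     /etc/mysql/var/ r,
      #     /etc/mysql/var/** rwk,
      apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld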


  • Tracking down rogue disk usage

    - by Amadan
    I found several other questions regarding the theory behind my problem (e.g. this, this), but I don't know how to apply the answers to my machine. # du -hsx / 11000283 / # df -kT / Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/mapper/csisv13-root ext4 516032952 361387456 128432532 74% / There is a big difference between 11G (du) and 345G (df). Where are the remaining 334G? It's not in deleted files. There was only one, it was short, and I truncated it just in case. This is what remains: # lsof -a +L1 / COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME zabbix_ag 4902 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4902 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4906 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4906 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4907 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4907 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4908 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4908 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4909 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4909 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4910 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4910 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) I rebooted to see if fsck does anything. 
But, from /var/log/boot.log, it seems there are no issues: /dev/mapper/server-root: clean, 3936097/32768000 files, 125368568/131064832 blocks Thinking maybe someone overzealously reserved root space, I checked the master record: # tune2fs -l /dev/mapper/server-root tune2fs 1.42 (29-Nov-2011) Filesystem volume name: <none> Last mounted on: / Filesystem UUID: 86430ade-cea7-46ce-979c-41769a41ecbe Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 32768000 Block count: 131064832 Reserved block count: 6553241 Free blocks: 5696264 Free inodes: 28831903 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 992 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 Flex block group size: 16 Filesystem created: Fri Feb 1 13:44:04 2013 Last mount time: Tue Aug 19 16:56:13 2014 Last write time: Fri Feb 1 13:51:28 2013 Mount count: 9 Maximum mount count: -1 Last checked: Fri Feb 1 13:44:04 2013 Check interval: 0 (<none>) Lifetime writes: 1215 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 First orphan inode: 28836028 Default directory hash: half_md4 Directory Hash Seed: bca55ff5-f530-48d1-8347-25c004f66d43 Journal backup: inode blocks The system is: # uname -a Linux server 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux # cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS" Does anyone have any tips on what exactly to do to find and hopefully reclaim the missing space?
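    Since du cannot see deleted-but-open files or anything hidden underneath other mount points, a quick way to test the second case is to bind-mount the root filesystem somewhere else and measure it there; a sketch:

      mkdir -p /mnt/root-bind
      mount --bind / /mnt/root-bind

      # This view exposes files that live on / even where other filesystems
      # are mounted over the original directories
      du -hsx /mnt/root-bind
      du -hx --max-depth=1 /mnt/root-bind | sort -h

      umount /mnt/root-bind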


  • PHP 5.3 Not Logging

    - by BHare
    I have set error_log = "/var/log/apache2/php_errors.log" and made sure errors were being logged. I have set the file to be owned by the www-data owner and group and even set the permissions to 777. I have confirmed with phpinfo() that the error_log is correctly set, however The logging still only happens in my vhost's apache error log. The following is my php.ini for 5.3.3-7 on Debian Squeeze Apache 2: The top is populated with comments on what I have been interested, or have changed. I have deleted all comments to save space. Full versions here: http://pastebin.com/AhWLiQBR [PHP] ;short_open_tag = On ;allow_call_time_pass_reference = On ;error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED ;display_errors = On ;display_startup_errors = Off ;log_errors = On ;html_errors = On error_log = "/var/log/apache2/php_errors.log" engine = On short_open_tag = On asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = On safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED display_errors = On display_startup_errors = Off log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = Off html_errors = On variables_order = "GPCS" request_order = "GPC" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 100M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off file_uploads = On upload_tmp_dir = /tmp upload_max_filesize = 100M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 
[Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 0 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba]
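    A quick way to see which error_log value the web SAPI actually ends up with (per-vhost php_value/php_admin_value directives and .htaccess files can override php.ini) is to compare the CLI view with a test script served by the vhost; a sketch, with the docroot path as a placeholder:

      # What the CLI sees
      php -i | grep -E '^(error_log|log_errors)'

      # What the vhost sees: drop this into its docroot and request it once
      cat > /var/www/phperrtest.php <<'EOF'
      <?php
      var_dump(ini_get('error_log'), ini_get('log_errors'));
      error_log('error_log test entry');
      trigger_error('trigger_error test entry', E_USER_WARNING);
      EOF

      # And check for per-vhost overrides that would redirect errors
      grep -RniE 'error_log|log_errors' /etc/apache2/sites-enabled/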


  • Nginx + PHP - No input file specified for 1 server block. Other server block works fine

    - by F21
    I am running Ubuntu Desktop 12.04 with nginx 1.2.6. PHP is PHP-FPM 5.4.9. This is the relevant part of my nginx.conf: http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { server_name testapp.com; root /www/app/www/; index index.php index.html index.htm; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } server { listen 80 default_server; root /www index index.html index.php; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } } Relevant bits from php-fpm.conf: ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot chdir = /www In my hosts file, I redirect 2 domains: testapp.com and test.com to 127.0.0.1. My web files are all stored in /www. From the above settings, if I visit test.com/phpinfo.php and test.com/app/www, everything works as expected and I get output from PHP. However, if I visit testapp.com, I get the dreaded No input file specified. error. So, at this point, I pull out the log files and have a look: 2012/12/19 16:00:53 [error] 12183#0: *17 FastCGI sent in stderr: "Unable to open primary script: /www/app/www/index.php (No such file or directory)" while reading response header from upstream, client: 127.0.0.1, server: testapp.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "testapp.com" This baffles me because I have checked again and again and /www/app/www/index.php definitely exists! This is also validated by the fact that test.com/app/www/index.php works which means the file exists and the permissions are correct. Why is this happening and what are the root causes of things breaking for just the testapp.com v-host? Just an update to my investigation: I have commented out chroot and chdir in php-fpm.conf to narrow down the problem If I remove the location ~ \.php$ block for testapp.com, then nginx will send me a bin file which contains the PHP code. This means that on nginx's side, things are fine. The problem is that something must be mangling the file paths when passing it to PHP-FPM. Having said that, it is quite strange that the default_server v-host works fine because its root is /www, where as things just won't work for the testapp.com v-host because the root is /www/app/www.
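    One hedged thing to check, given the chdir/chroot settings quoted from php-fpm.conf: while the pool is chrooted to /www, the SCRIPT_FILENAME nginx sends has to be valid inside that chroot, which breaks exactly the vhost whose real root is /www/app/www. A sketch of what the testapp.com block would pass in that case (illustrative, not a confirmed fix):

      # Only applies while php-fpm has chroot = /www:
      # real /www/app/www/index.php is /app/www/index.php inside the chroot
      location ~ \.php$ {
          fastcgi_intercept_errors on;
          fastcgi_pass  127.0.0.1:9000;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME /app/www$fastcgi_script_name;
          include       fastcgi_params;
      }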


  • Apache + Tomcat error 120006 Using mod_proxy_ajp for Load Balance

    - by Wakaru44
    I have an apache 2 frontend with two nodes, and a backend with two instances of tomcat 6 balance with mod_proxy_ajp. The bbdd is in a separate machine. All machines use RHEL, 6.2 on the frontend, 5.5 on the backend. The infraestructure is virtualized using VMware. # This is the apache config in one of the virtualHost. ProxyPreserveHost On ProxyPass / balancer://liferay/ <Proxy balancer://liferay> BalancerMember ajp://lrab:8009 route=liferaya BalancerMember ajp://lrbb:8009 route=liferayb status=+H ProxySet lbmethod=byrequests nofailover=on </Proxy> The conector in tomcat is now configured like this: <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" URIEncoding="UTF-8" enableLookups="false" allowTrace="true" /> Do you think it could be useful to set a maxThreads parameter, like in this post?? in that case, How can i determine a proper number of threads? From time to time, we get errors like this [Tue Sep 18 17:57:02 2012] [error] ajp_read_header: ajp_ilink_receive failed [Tue Sep 18 17:57:02 2012] [error] (120006)APR does not understand this error code: proxy: read response failed from 192.168.1.104:8009 (lrab) And apache switches to the pasive node (if its active) or fails with 503. Some things i have tried so far: I think that i have some performance issues with one of the applications, Here you can see a threadDump But i'm not quite sure about it. I also started to monitor the network connection. I have noticed that there are some pings lost when i have a "ping -f " so maybe it could be a network issue, but the success rate is 100% (so the lost packets are only a few among the flood, but maybe, i don't know, enough to break the link betwen apache and tomcat). I wrote a python script to check connectivity with timestamps on the pings, so i can know when the network fails. After sniffing the network , i can also see some RST packets, but i don't know if that is a normal behaviour (some applications do that to end a network communication). I have also noticed that the applications have problems communicating with the database, but im not even sure if this could be related or not. If you think so, i can post more info about it. I changed the connector on the tomcats to use the native one, but still the same. I need not even a solution to this, but maybe some guidance on how can i troubleshoot this better ¿Analyze threads, monitor mysql performance, sniff the traffic between apaches and tomcats? Ultimately, all i need is to balance the tomcat instances in Active/pasive mode, so if there is another way to do it, i could give it a try.


  • Windows: Impact of clean install Service Pack 2 to applications & data?

    - by Thomas Matthews
    My Windows Vista Home Premium system is corrupt and won't install Service Pack 2. I have followed all the advice from Microsoft and still no luck. I would like to perform a clean install of Vista, then SP1, and then SP2. My concern is the effect of the clean install on the registry, my apps and all my data. My plan: 1. Download Vista Service Pack 1 (SP1) ISO and write to DVD. 2. Download Vista Service Pack 2 (SP2) ISO and write to DVD. 3. Backup all data, applications and registry to external hard drive (file copy not disk image) 4. ?? Format hard drive?? (is this necessary?) 5. Install Vista from DVDs / CDs. 6. Install SP1 from DVD 7. Install SP2 from DVD 8. Restore registry, applications and data from external hard drive. My questions: 1. Is formatting the hard drive a necessary step? 2. Will restoring the registry from the backup corrupt the system? 3. Should I use Windows Backup or ZIP/RAR? 4. Any gotcha's that I should look out for? Background: I am using Windows Vista Home Premium with SP1. The sfc program does not finish due to a resources problem (even when run as administrator). I have 5 users on it. After a while, the screen goes black and shows an error message window about an error with login.scr. Standard accounts display a black screen and can't run any applications. Administrative accounts have no problems (even standard accounts when converted to Administrative have no problem). The CBS log contains a lot of 0x8000ffff and E_UNEXPECTED errors (which Microsoft defines as catastrophic failure). This is the reasoning behind performing a clean install up to service pack 2.


  • Having troubles connectiong Magento to external Windows Database Server using Windows Azure

    - by Kevin H
    "I tried to make this easy to read through" I am using Ubuntu 12.04 LTS for Magento and installed these commands onto the system: sudo apt-get install apache2 sudo apt-get install php5 libapache2-mod-php5 sudo apt-get install php5-mysql sudo apt-get install php5-curl php5-mcrypt php5-gd php5-common sudo apt-get install php5-gd I used Windows Server 2008 R2 August 2012 for Mysql Server For a reference, I used http://www.windowsazure.com/en-us/manage/windows/common-tasks/install-mysql/ When the server was setup, I added an empty disk to it Then, I added endpoints 3306 Next I accessed the server remotely After that, I formatted the empty disk and was inserted as F: Next I downloaded Mysql from http://*.mysql.com version Windows (x86, 64-bit), MSI Installer 5.5.28 In the installation process, I used these settings: Typical Setup - Clicked Next, install, next Chose Detailed Configuration - Clicked next Chose Dedicated MySQL Server Machine - Clicked Next Chose Transactional Database Only - Clicked Next Chose the "F:" Drive - Clicked Next Chose Online Transactional Processing (OLTP) - Clicked Next For Networking Options, I checkmarked 'Enable TCP/IP Networking" 'Add firewall exception for this port' 'Enable Strict Mode' - Clicked Next Chose Standard Character Set - Clicked Next For Windows Options, I checkedmarked 'Install as Window Service" 'Launch the MySQL Server automatically' 'Include Bin Directory in Windows PATH - Clicked Next For Security Options, I checkmarked 'Modify Security Settings' and set root password - Clicked Next Finally clicked Execute and Finish These are the Firewall Setting that I set I clicked inbound rules Properties Scope Allow IP Address and used the internal Address for Magento Server Clicked Apply and exited Next, I opened up MySQL 5.x Command Line Client Entered Root Password Then entered these commands mysql create database magento; mysql Create user magentouser identified by 'password'; mysql Grant select, insert, create, alter, update, delete, lock tables on magento.* to magentouser mysql exit Finally, I opened up the Magento Downloader Magento validation has approved all PHP version is right. Your version is 5.3.10-1ubuntu3.4. PHP Extension curl is loaded PHP Extension dom is loaded PHP Extension gd is loaded PHP Extension hash is loaded PHP Extension iconv is loaded PHP Extension mcrypt is loaded PHP Extension pcre is loaded PHP Extension pdo is loaded PHP Extension pdo_mysql is loaded PHP Extension simplexml is loaded These are all installed on Magento Server For the Database Connection, I used: The Database server only has MySQL 5.5 Server installed on it Host - Internal IP address User Name - The User I created when setting up database Password - The Password I created when setting up database For the password, I did some research and found out that Magento only accepts alphanumeric, so I went and set it up again and used only alphanumeric for the User password Now, I am still getting Accessed denied for database Connection. Also, I have tryed to setup mysql on independant Linux Server but kept getting errors. When, I found the solution. Wouldn't work, so I decided to try Windows. These is the questions, I have been asking and researching to debug this issue Is it because I am using Linux for magento and Windows for Database. 
    I have had no luck in finding a reason why this wouldn't work. There must be something I am missing. I also researched the difference between Linux SQL databases and Windows SQL databases, but have not come to a conclusion on whether installing MySQL on Windows would make a difference in syntax and coding. I have spent a lot of time looking into this and need some help with direction on how to complete my project. Any type of help would be appreciated.
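    Access denied at connection time is almost always the grant itself (host part, quoting) or the network path, not a Linux-versus-Windows difference, so it is worth proving both independently of Magento; a hedged sketch (the server address is a placeholder):

      # On the Windows MySQL server (mysql command-line client):
      #   CREATE USER 'magentouser'@'%' IDENTIFIED BY 'password';
      #   GRANT SELECT, INSERT, CREATE, ALTER, UPDATE, DELETE, LOCK TABLES
      #       ON magento.* TO 'magentouser'@'%';
      #   FLUSH PRIVILEGES;

      # From the Ubuntu/Magento box: prove the network path and credentials
      mysql -h <windows-server-ip> -u magentouser -p magento -e 'SELECT 1;'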


  • Add Mirror for volumes other than the last one in Windows 7 (disk "not up-to-date")

    - by rakslice
    I'm using Windows 7 x64 Ultimate. I have an existing 4TB disk with 3 NTFS volumes, a new 3TB blank disk, and I'm trying to mirror the volumes onto the new disk. My Windows install is on an SSD which is Disk 0. The 4TB disk with volumes is Disk 1, and the new blank disk is Disk 2. I can add a mirror successfully for the last volume, but when I try to add a mirror for the first volume I immediately get errors (see below). Is there something I special I need to do to add a mirror for a volume other than the last one? More info: I opened Disk Management, right-clicked on the first volume on the existing disk, went to Add Mirror, and selected the new disk. The first time I did this I was prompted to convert the new disk to a Dynamic Disk, which I approved. Subsequently I got a message: The operation failed to complete because the Disk Management console view is not up-to-date. Refresh the view by using the refresh task. If the problem persists close the Disk Management console, then restart Disk Management or restart the computer. I've refreshed disk management, restarted the computer, and converted the new disk to basic and back to dynamic, but I still get that error message. Looking around for suggestions of a workaround, I saw a suggestion to use the diskpart command line tool. Running diskpart from the Start Menu as Administrator, I did select volume 2 (the first volume I want to mirror) and then add disk 2 (the new disk), and received a somewhat similar error: Virtual Disk Service error: The disk's extent information is corrupted. DiskPart has referenced an object which is not up-to-date. Refresh the object by using the RESCAN command. If the problem persists exit DiskPart, then restart DiskPart or restart the computer. A rescan appears to be successful: DISKPART> select disk 2 Disk 2 is now the selected disk. DISKPART> rescan Please wait while DiskPart scans your configuration... DiskPart has finished scanning your configuration. but attempting to add the mirror again resulted in the same error. The only similar report I found online was this: http://www.sevenforums.com/hardware-devices/335780-unable-mirror-all-but-last-partition-drive.html Based on that I attempted to mirror the last volume on the disk to the new disk using diskpart, and that started successfully -- it is currently resynchronizing. More Background: In the course of dealing with a failing 3TB hard drive, I bought a replacement 4TB drive and installed it, then copied the partitions from the failing drive to it using Minitool Partition Wizard Home, and then removed the failing drive and was up and running again normally. Now I've received a warranty replacement for the failing drive, and installed it, and now I'm attempting to mirror my partitions to it.


  • Resolving CloudFlare DNS related mail delivery problems

    - by Andy Castles
    I recently started using CloudFlare and am having a few teething problems. Our domain is netlanguages.com and while we have a lot of sub-domains listen, we are currently only trialling a few of the servers through the CloudFlare CDN (for example, www.netlanguages.com is enabled for CDN, netlanguages.com is not). The actual CDN service seems to be reliable, but the problem that we are having is with DNS, and specifically with mail delivery. The background is that we have contact forms on our web site which use PHP mail() to send the details to end-users' email addresses, with the "from" address of the messages being [email protected] which is a valid address on our mail server. Most of the mails are arriving correctly, but a few specific people are not receiving them. The webserver uses qmail to deliver the messages, and the qmail log files show us some of the errors that the receiving mail servers return when they reject the mail delivery attempt. Two examples: Connected to 94.100.176.20 but sender was rejected./Remote host said: 421 DNS problem (interdominios.netlanguages.com). Try again later Connected to 213.186.33.29 but sender was rejected./Remote host said: 451 DNS temporary failure (#4.3.0) From what I can tell, the receiving SMTP server is doing a DNS lookup of some description on either the host of the "from" email address (netlanguages.com) or the server name given in the EHLO command of the SMTP conversation (in the first example above, interdominios.netlanguages.com), both of which should resolve to non-CloudFlare IP addresses. I've read that the CloudFlare DNS service is very reliable and fast but both of the problems above seem to point to a problem with remote servers unable to do DNS lookups. I should also point out that we changed our DNS to CloudFlare on 6th Feb, and since then started experiencing these mail delivery problems. On 22nd Feb we moved our DNS away from CloudFlare to see if the issues were related to CloudFlare and after a few hours delivery began to work. Then on 26th Feb I moved the DNS back to CloudFlare again and delivery problems started again. The issues definitely seems to be related to DNS, but I don't know if it's a configuration issue, or something else. Finally, I should say that our two DNS MX records point to non-CDN A record IP addresses, interdominios.netlanguages.com (the web and qmail server) also points to a non-CDN A record IP address. Does anyone know what the problem could be here? Any light you can shed on this will be most appreciated. Many thanks, Andy
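    Since the receiving servers complain about DNS on the sending domain and HELO name, the first thing to confirm is that those names resolve from the outside world while the zone sits on CloudFlare; a sketch using a public resolver (the CloudFlare nameserver name is a placeholder for whichever pair is assigned to the zone):

      # HELO name used by qmail and the envelope-from domain
      dig +short A interdominios.netlanguages.com @8.8.8.8
      dig +short A netlanguages.com @8.8.8.8

      # Records receivers commonly check as well
      dig +short MX netlanguages.com @8.8.8.8
      dig +short TXT netlanguages.com @8.8.8.8

      # Compare against the authoritative answer from CloudFlare itself
      dig +short A interdominios.netlanguages.com @<assigned>.ns.cloudflare.com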


  • Unable to add IPv6 address to sendmail access list

    - by David M. Syzdek
    I am running Sendmail 8.14.4 on Slackware 13.37. I have the following in my /etc/mail/access file and it works without any errors: Connect:127 OK Connect:10.0.1 RELAY # Net: office Connect:50.116.6.8 RELAY # Host: glider Connect:96.126.127.87 RELAY # Host: kite The above configuration also allows me to send an e-mail via IPv6 to a local user on the mail server. However, it does not allow my office to relay via IPv6. I have tried two ways of adding IPv6 networks to my access file. Method 1: Connect:127 OK Connect:10.0.1 RELAY # Net: office Connect:IPv6:2001:470:b:84a RELAY # Net: office Connect:50.116.6.8 RELAY # Host: glider Connect:96.126.127.87 RELAY # Host: kite Method 2: Connect:127 OK Connect:10.0.1 RELAY # Net: office Connect:[IPv6:2001:470:b:84a] RELAY # Net: office Connect:50.116.6.8 RELAY # Host: glider Connect:96.126.127.87 RELAY # Host: kite However whenever I try using either method 1 or 2, I am unable to relay e-mail messages through the host. /var/log/maillog entry: May 31 11:57:15 freshsalmon sm-mta[25500]: ruleset=check_relay, arg1=[IPv6:2001:470:b:84a:223:6cff:fe80:35dc], arg2=IPv6:2001:470:b:84a:223:6cff:fe80:35dc, relay=[IPv6:2001:470:b:84a:223:6cff:fe80:35dc], reject=553 5.3.0 RELAY # Net:office Test session from telnet: syzdek@blackenhawk$ telnet -6 freshsalmon.office.example.com 25 Trying 2001:470:b:84a::69... Connected to freshsalmon.office.bindlebinaries.com. Escape character is '^]'. 220 office.example.com ESMTP Sendmail 8.14.4/8.14.4; Thu, 31 May 2012 11:57:15 -0800 HELO blackenhawk.office.example.com 250 office.example.com Hello [IPv6:2001:470:b:84a:223:6cff:fe80:35dc], pleased to meet you MAIL FROM:[email protected] 553 5.3.0 RELAY # Net:office What is the correct way to add an IPv6 address/network to the access file in sendmail? Update: Apparently my access file was not working regardless. Removing the comments at the end of the line seems to have fixed the problem. Here is the lines which worked: Connect:127 OK Connect:IPv6:::1 OK # Net: office Connect:10.0.1 RELAY Connect:IPv6:2001:470:b:84a RELAY # Host: glider Connect:50.116.6.8 RELAY Connect:IPv6:2600:3c01::f03c:91ff:fedf:381a RELAY # Host: kite Connect:96.126.127.87 RELAY Connect:IPv6:2600:3c00::f03c:91ff:fedf:52a4 RELAY
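    One step that is easy to forget after editing /etc/mail/access: sendmail reads the hashed map, not the text file, so the map has to be rebuilt and sendmail restarted before a change takes effect. A sketch for Slackware:

      cd /etc/mail
      makemap hash access < access
      /etc/rc.d/rc.sendmail restart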


  • Microsoft signed driver appears as publisher not verfied

    - by Priyanka Gupta
    Task at hand: Microsoft sign drivers on Win 7. I microsoft signed my driver package 3 times every time thinking I might have missed a step or something. However, I cannot seem to get rid of the Windows Security error message "Windows can't verify the publisher of this driver software'. This is not the first time I have signed the driver packages. I was successfully able to sign other driver packages a few months ago. However, with this driver package I keep getting Windows security dialog box. Here's the procedure I follow - Create a new cat file using INF2CAT tool. Self sign the driver using a Versign Class 3 Public Primary Certification Authority - G5.cer. Run the microsoft tests on DTM Servers and clients with the devices that use this driver. Create WLK submission package. Self sign the cab file. Submit the package for certification. The catalog file that comes back after successfully passing tests says Name of signer "Microsoft Windows Hardware Comptibility Publisher". When I check the validity of signature using SignTool, it says the signature is vaild. However, when I try to install the driver with new signed catalog file the windows complain. Any ideas? Edit 11/12/2012: Reply to Eugene's comment Thanks for the help, Eugene. Yes. I did sign two other driver packages before. One of them was modified version of WinUSB driver. I am using the same certificate I used when I signed those two driver packages a few months ago. It costs $250 per signing from Microsoft. I would think that Microsoft would complain about it during certification if the certificate is wrong. I use the following command to self sign the CAT file. I don't have to specify the ceritificate name as there's only one certificate in the directory - Signtool sign /v /a /n CompanyName /t http://timestamp.verisign.com/scripts/timestamp.dll OurCatalogFile.cat Below is the result from running Verify command on the Microsoft signed OurCatalogFile.cat C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\x64signtool verify /v "C:\User s\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Verifying: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Hash of file (sha1): BDDF39B1DD95881B462164129758A7FFD54F47D9 Signing Certificate Chain: Issued to: Microsoft Root Certificate Authority Issued by: Microsoft Root Certificate Authority Expires: Sun May 09 18:28:13 2021 SHA1 hash: CDD4EEAE6000AC7F40C3802C171E30148030C072 Issued to: Microsoft Windows Hardware Compatibility PCA Issued by: Microsoft Root Certificate Authority Expires: Thu Jun 04 16:15:46 2020 SHA1 hash: 8D42419D8B21E5CF9C3204D0060B19312B96EB78 Issued to: Microsoft Windows Hardware Compatibility Publisher Issued by: Microsoft Windows Hardware Compatibility PCA Expires: Wed Sep 18 18:20:55 2013 SHA1 hash: D94345C032D23404231DD3902F22AB1C2100341E The signature is timestamped: Tue Nov 06 11:26:48 2012 Timestamp Verified by: Issued to: Microsoft Root Authority Issued by: Microsoft Root Authority Expires: Thu Dec 31 02:00:00 2020 SHA1 hash: A43489159A520F0D93D032CCAF37E7FE20A8B419 Issued to: Microsoft Timestamping PCA Issued by: Microsoft Root Authority Expires: Sun Sep 15 02:00:00 2019 SHA1 hash: 3EA99A60058275E0ED83B892A909449F8C33B245 Issued to: Microsoft Time-Stamp Service Issued by: Microsoft Timestamping PCA Expires: Tue Apr 09 16:53:56 2013 SHA1 hash: 1895C2C907E0D7E5C0292B92C6EA8D0E236F525E Successfully verified: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Number of files successfully Verified: 1 Number of warnings: 0 
Number of errors: 0 Thank you!
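    A useful cross-check is to verify the actual driver binary against the kernel-mode signing policy via the catalog, which is closer to what the installer evaluates than verifying the catalog alone; a hedged sketch (the .sys file name is hypothetical):

      signtool verify /kp /v /c OurCatalogFile.cat ourdriver.sys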


  • Puppet master fails to run under nginx+passenger configuration as rack app, works when run as system service

    - by Anadi Misra
    When I start the nginx server with the passenger module configured and puppet master set up to run through Rack, I get the following error:
    [anadi@bangda ~]# tail -f /var/log/nginx/error.log
    [ pid=19741 thr=23597654217140 file=utils.rb:176 time=2012-09-17 12:52:43.307 ]: *** Exception LoadError in PhusionPassenger::Rack::ApplicationSpawner (no such file to load -- puppet/application/master) (process 19741, thread #<Thread:0x2aec83982368>):
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        from config.ru:13
        from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval'
        from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize'
        from config.ru:1:in `new'
        from config.ru:1
    Here is the config.ru:
    [anadi@bangda ~]# cat /etc/puppet/rack/config.ru
    # a config.ru, for use with every rack-compatible webserver.
    # SSL needs to be handled outside this, though.
    # if puppet is not in your RUBYLIB:
    #$:.unshift('/usr/share/puppet/lib')
    $0 = "master"
    # if you want debugging:
    # ARGV << "--debug"
    ARGV << "--rack"
    require 'puppet/application/master'
    # we're usually running inside a Rack::Builder.new {} block,
    # therefore we need to call run *here*.
    run Puppet::Application[:master].run
    And the nginx configuration for puppet master is as follows:
    [anadi@bangda ~]# cat /etc/nginx/conf.d/puppet-master.conf
    server {
        listen 8140 ssl;
        server_name bangda.mycompany.com;
        passenger_enabled on;
        passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
        passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
        access_log /var/log/nginx/puppet/master.access.log;
        error_log /var/log/nginx/puppet/master.error.log;
        root /etc/puppet/rack/public;
        ssl_certificate /var/lib/puppet/ssl/certs/bangda.mycompany.com.pem;
        ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangda.mycompany.com.pem;
        ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
        ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
        ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
        ssl_prefer_server_ciphers on;
        ssl_verify_client optional;
        ssl_verify_depth 1;
        ssl_session_cache shared:SSL:128m;
        ssl_session_timeout 5m;
    }
    However, when I run puppet through the usual puppetmasterd daemon it works perfectly, with no errors. Somehow the nginx+passenger+rack setup fails to initialize, while the same thing works when running the native puppetmaster daemon. Is there any configuration I am missing?
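    In case it helps narrow things down, a small check sketch; the path below is a guess, and this presumes passenger spawns the same ruby binary found first in PATH:
    # can this ruby load the puppet master application at all?
    ruby -rpuppet/application/master -e 'puts "puppet master loadable"'
    # if not, uncommenting the $:.unshift line in config.ru and pointing it at the
    # directory that actually contains the puppet libraries (assumed path shown) may help:
    #   $:.unshift('/usr/lib/ruby/site_ruby/1.8')
    The LoadError suggests the Rack spawner's load path simply does not include wherever puppet/application/master.rb lives, which is exactly the case that commented $:.unshift line in config.ru is meant to cover.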

    Read the article

  • 401 - Unauthorized On Server 2008 R2 IIS 7.5

    - by mxmissile
    I have a web application deployed to a Server 2008 IIS 7.5 box. From remote clients it gives this error: 401 - Unauthorized: Access is denied due to invalid credentials. (Remote = desktops on the same LAN.) I have tried several remote clients using different browsers (IE, FF, and Chrome), all with the same result. Hitting the application from the desktop of the server itself works flawlessly; however, I have not tried Firebug on the server desktop. I would assume it's still issuing a 401 status code yet returning the content anyway; see Update #2.
    The application uses Anonymous Authentication and is written in ASP.NET 4.0 using the MVC framework. Static content works fine, for example: http://server.com/content/image.jpg
    Sysinternals procmon returns these 2 results for each request: FAST IO DISALLOWED and PATH NOT FOUND. I have 2 other MVC apps running fine on the same server, and I have checked the security on the folders and they all match. The app runs fine on a Server 2008 IIS 7.0 box. Nothing related to this shows up in the event log on the server. Pulling my hair out here, any troubleshooting tips?
    UPDATE #1: This just gets more WTF as I dig. If I click on the application in IIS Manager - Error Pages - Edit Feature Settings and select Detailed Errors, the app works remotely. I'm not leaving this on, so the problem is not solved yet; it's just more confusing.
    UPDATE #2: Using Firebug, I see that the status is still 401 Unauthorized, but the response is returning the application's correct HTML.
    UPDATE #3: Playing around with Failed Request Tracing, here is the WARNING request trace that is causing the 401:
    ModuleName ManagedPipelineHandler
    Notification 128
    HttpStatus 401
    HttpReason Unauthorized
    HttpSubStatus 0
    ErrorCode 0
    ConfigExceptionInfo
    Notification EXECUTE_REQUEST_HANDLER
    ErrorCode The operation completed successfully. (0x0)
    UPDATE #4: The regular IIS log is showing this:
    #Software: Microsoft Internet Information Services 7.5
    #Version: 1.0
    #Date: 2010-07-20 19:17:22
    #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken
    2010-07-20 19:17:22 10.10.1.10 GET /Purchasing/Home - 80 - 10.10.1.12 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US;+rv:1.9.2.6)+Gecko/20100625+Firefox/3.6.6 401 0 0 4414
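    One thing worth checking from the command line, as a sketch only; the site and application names below are placeholders for your own, and this simply confirms what IIS thinks the effective authentication settings are for the failing application versus the two that work:
    rem effective anonymous authentication setting for the application
    %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/MyMvcApp" /section:anonymousAuthentication
    rem compare against one of the MVC apps that works
    %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/WorkingMvcApp" /section:anonymousAuthentication
    A 401 that nevertheless returns the correct body, as in Update #2, often points at an authentication or authorization module stamping the status after the managed handler has already produced its output, so comparing the effective configuration of the broken and working apps is a cheap first step.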

    Read the article

  • How to unlock and remove a protected partition from Prestigio USB stick?

    - by mr.b
    Ok, so, I have one of those fancy schmancy devices, which was given to me by a frustrated friend of mine. The device is a Prestigio Leather 8GB, which identifies itself to a Linux host as:
    Bus 001 Device 006: ID 1307:0165 Transcend Information, Inc. 2GB/4GB Flash Drive
    Kernel messages as the USB device is plugged in:
    kernel: [ 2769.580042] usb 1-9: new high speed USB device using ehci_hcd and address 7
    kernel: [ 2769.714782] scsi8 : usb-storage 1-9:1.0
    kernel: [ 2770.713937] scsi 8:0:0:0: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2
    kernel: [ 2770.714535] scsi 8:0:0:1: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2
    kernel: [ 2770.715734] sd 8:0:0:0: Attached scsi generic sg3 type 0
    kernel: [ 2770.716108] sd 8:0:0:1: Attached scsi generic sg4 type 0
    kernel: [ 2770.722175] sd 8:0:0:0: [sdc] 962560 512-byte logical blocks: (492 MB/470 MiB)
    kernel: [ 2770.722657] sd 8:0:0:0: [sdc] Write Protect is on
    kernel: [ 2770.731078] sd 8:0:0:1: [sdd] 14012416 512-byte logical blocks: (7.17 GB/6.68 GiB)
    kernel: [ 2770.731215] sdc:
    kernel: [ 2770.738251] sd 8:0:0:1: [sdd] Write Protect is off
    kernel: [ 2770.880328]
    kernel: [ 2770.885876] sd 8:0:0:0: [sdc] Attached SCSI removable disk
    kernel: [ 2770.887442] sdd: unknown partition table
    kernel: [ 2771.049605] sd 8:0:0:1: [sdd] Attached SCSI removable disk
    So, the symptoms are typical for U3-like devices: two separate devices inside a single flash device. Windows also sees it as two USB devices and mounts two separate drives, where the first one presents itself as a CD-ROM device holding write-protected content, and the second is a regular flash-disk partition that "can" be written to. However, it seems to be broken in some weird way, since it won't let me write anything to it or format it, nothing; but that's not the issue right now.
    Question: how can I unlock the entire USB stick so that it appears to the system as a single 8GB device which can be partitioned and used normally, without restrictions?
    Since it appeared to be a U3 device, I have tried the standard utilities: both the U3 Uninstaller by u3.com (found on Softpedia) and the open-source u3_tool from SourceForge (on both Windows and Linux). The first utility failed to even detect the USB stick as a U3 device (it simply stood idle while I re-plugged the stick several times), while the second tool failed with an obscure error about a SCSI command being unable to do something (I might be able to provide the exact errors when I switch back to Windows).
    u3_tool -i /dev/sg3 (display device info) fails with
    u3_partition_info() failed: Device reported command failed: status 1
    ...and every other option fails with the same error, minus the first part, which states which command precisely has failed. So, apparently, this isn't a U3 device. Or, if it is, it doesn't behave like one. I have read on a few occasions that this kind of device protection is done by a special command sent to the device which tells it to lock itself, and so there should be an unlock command that would set the drive straight. Does anyone have any idea what I could do to this device to fix it?
    P.S. I also mentioned a problem with being unable to use the second "drive", but I'll tackle that problem when (and if) I manage to merge those two devices into one...

    Read the article

  • Windows 7 The boot selection failed because a required device is inaccessible 0xc000000f

    - by piratejackus
    I have a problem with my Windows 7 installation. Hardware: Acer 3820TG. Operating systems: Windows 7 and Ubuntu 10.04, dual boot.
    Case: when I try to boot Windows 7 I see an error:
    "Windows failed to start. A recent hardware or software change might be the cause. To fix the problem: 1. Insert.... 2. .... ...
    status: 0xc000000f
    info: The boot selection failed because a required device is inaccessible ...."
    I can't exactly remember what my last actions on Windows were. I have already searched this error and applied the proposed solutions. I created a repair USB (because I don't have a CD-ROM drive or a Windows 7 CD), and then:
    - Repair operating system: it says it cannot repair it.
    - Check disk (chkdsk D: /f /r): it checks the disk without any problem or error, and it takes pretty long (more than an hour). But when I restart, I still get the same error.
    - I didn't create a restore point, so I have to pass on this option.
    - I don't have a system image.
    - I tried to run Windows recovery (I have a recovery partition), but there are just two options: 1. Format the operating system but retain user data (this copies the files under Users to a c:\backup folder, but when I searched deeper I found people who already tried this option and couldn't find their user files under the backup directory). Plus, I unfortunately have just one partition, D (my fault, I know), because I always use Ubuntu, so this is not applicable in my situation. 2. Format the entire system (Windows). I keep my valuable data on the Windows partition, but not in the user folder; I was accessing it from Windows.
    - I tried to repair the Windows boot with:
    bootrec /fixMBR
    bootrec /fixBoot
    bootrec /rebuildBCD
    I lost the whole GRUB menu and reinstalled it (ubuntuforums.org/showthread.php?t=1014708&page=29), but nothing changed: same error.
    I created a thread in the Microsoft forums (http://social.answers.microsoft.com/Forums/en-US/w7install/thread/69517faf-850a-45fd-8195-6d4ed831f805) but I couldn't find a solution there.
    Before I ran chkdsk from the USB repair disk I couldn't mount the Windows (NTFS) partition from Ubuntu; I was getting "couldn't mount file system, error code 2". I tried to fix the NTFS partition from Ubuntu and got "segmentation fault". I also created a thread on ubuntuforums for this mount problem: http://ubuntuforums.org/showthread.php?t=1606427
    So, after chkdsk, I was able to mount the Windows partition, but all I see on it are chkdsk logs, no other data. Now, I don't think I lost my data, because I don't get any filesystem errors, just the boot problem, but those log files on the Windows partition make me afraid. I see that Microsoft developers don't have a solution yet for this error. If you need any more information to get a better idea, I can provide it; maybe I have missed some points or it could be complicated. Thanks in advance.
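    In case it helps anyone in the same spot, a sketch of two further commands from the repair USB's command prompt; drive letters inside the recovery environment often differ from the ones Windows normally uses, so D: below is an assumption, and bcdboot may not be present on every recovery image:
    rem show what the boot manager currently knows about
    bcdedit /enum all
    rem recreate the boot files for the Windows installation found on D:
    bcdboot D:\Windows /s C:
    The 0xc000000f status generally means the boot manager could not find the device or path named in the BCD entry, so inspecting (and, if needed, regenerating) that store is a reasonable next step after bootrec /rebuildBCD.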

    Read the article

  • Providing DNS redirection to honeypot server for known bad domains

    - by syn-
    Currently running BIND on RHEL 5.4 and I am looking for a more efficient manner of providing DNS redirection to a honeypot server for a large (30,000+) list of forbidden domains. Our current solution for this requirement is to include a file containing a zone master declaration for each blocked domain in named.conf. Each of these zone declarations points to the same zone file, which resolves all hosts in that domain to our honeypot servers. Basically, this allows us to capture any "phone home" attempts by malware that may infiltrate the internal systems.
    The problem with this configuration is the large amount of time taken to load all 30,000+ domains, as well as management of the domain list configuration file itself: if any errors creep into this file, the BIND server will fail to start, thereby making automation of the process a little frightening. So I'm looking for something more efficient and potentially less error prone.
    named.conf entry:
    include "blackholes.conf";
    blackholes.conf entry example:
    zone "bad-domain.com" IN {
        type master;
        file "/var/named/blackhole.zone";
        allow-query { any; };
        notify no;
    };
    blackhole.zone entries:
    $INCLUDE std.soa
    @    NS    ns1.ourdomain.com.
    @    NS    ns2.ourdomain.com.
    @    NS    ns3.ourdomain.com.
         IN    A     192.168.0.99
    *    IN    A     192.168.0.99
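    Whatever ends up generating blackholes.conf, the "one typo kills named" risk can at least be contained by validating the generated file before BIND ever sees it. A minimal sketch, assuming the standard RHEL paths; the generator script name and the blackholes.conf location are placeholders for your own:
    #!/bin/sh
    # regenerate the zone declarations from the domain list (generator name is a placeholder)
    /usr/local/bin/generate-blackholes.sh > /var/named/blackholes.conf.new
    # swap the new file in, then verify the full configuration still parses;
    # roll back and bail out if it does not, so named never starts with a broken config
    cp /var/named/blackholes.conf /var/named/blackholes.conf.bak
    mv /var/named/blackholes.conf.new /var/named/blackholes.conf
    if named-checkconf /etc/named.conf; then
        rndc reconfig
    else
        echo "generated blackholes.conf is invalid, rolling back" >&2
        mv /var/named/blackholes.conf.bak /var/named/blackholes.conf
        exit 1
    fi
    Using rndc reconfig rather than a full restart also avoids reloading the existing zones, which helps with the load-time complaint, though it does not remove it entirely.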

    Read the article

  • Why does nginx rewrite a POST request from /login to //login?

    - by jiangchengwu
    There is an if statement which rewrites the URL when the client is Android. Everything was OK, but then something strange happened: nginx rewrites the POST request /login to //login even when the body of the if block is blank, so I get a 404 page, since the Jetty server only accepts the /login request.
    Server conf:
    location / {
        proxy_pass http://localhost:8785/;
        proxy_set_header Host $http_host;
        proxy_set_header Remote-Addr $http_remote_addr;
        proxy_set_header X-Real-IP $remote_addr;
        if ( $http_user_agent ~ Android ){
            # rewrite something, been commented
        }
    }
    Debug info, original log: https://gist.github.com/3799021
    ...
    2012/09/28 16:29:49 [debug] 26416#0: *1 http script regex: "Android"
    2012/09/28 16:29:49 [notice] 26416#0: *1 "Android" matches "Android/1.0", client: 106.187.97.22, server: ireedr.com, request: "POST /login HTTP/1.1", host: "ireedr.com"
    ...
    2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header:
    "POST //login HTTP/1.0
    Host: ireedr.com
    X-Real-IP: 106.187.97.22
    Connection: close
    Accept-Encoding: identity, deflate, compress, gzip
    Accept: */*
    User-Agent: Android/1.0
    "
    ...
    2012/09/28 16:29:49 [debug] 26416#0: *1 HTTP/1.1 404 Not Found
    Server: nginx/1.2.1
    Date: Fri, 28 Sep 2012 08:29:49 GMT
    Content-Type: text/html;charset=ISO-8859-1
    Transfer-Encoding: chunked
    Connection: keep-alive
    Cache-Control: must-revalidate,no-cache,no-store
    Content-Encoding: gzip
    ...
    Only when I comment out the if block in the configuration file:
    location / {
        proxy_pass http://localhost:8785/;
        proxy_set_header Host $http_host;
        proxy_set_header Remote-Addr $http_remote_addr;
        proxy_set_header X-Real-IP $remote_addr;
        #if ( $http_user_agent ~ Android ){
        #
        #}
    }
    does the client get a 200 response.
    Debug info, original log: https://gist.github.com/3799023
    ...
    "POST /login HTTP/1.0
    Host: ireedr.com
    X-Real-IP: 106.187.97.22
    Connection: close
    Accept-Encoding: identity, deflate, compress, gzip
    Accept: */*
    User-Agent: Android/1.0
    "
    ...
    2012/09/28 16:27:19 [debug] 26319#0: *1 HTTP/1.1 200 OK
    Server: nginx/1.2.1
    Date: Fri, 28 Sep 2012 08:27:19 GMT
    Content-Type: application/json;charset=UTF-8
    Content-Length: 17
    Connection: keep-alive
    ...
    As the log shows:
    2012/09/28 16:29:49 [notice] 26416#0: *1 "Android" matches "Android/1.0", client: 106.187.97.22, server: ireedr.com, request: "POST /login HTTP/1.1", host: "ireedr.com"
    2012/09/28 16:29:49 [debug] 26416#0: *1 http script if
    2012/09/28 16:29:49 [debug] 26416#0: *1 post rewrite phase: 4
    2012/09/28 16:29:49 [debug] 26416#0: *1 generic phase: 5
    2012/09/28 16:29:49 [debug] 26416#0: *1 generic phase: 6
    2012/09/28 16:29:49 [debug] 26416#0: *1 generic phase: 7
    2012/09/28 16:29:49 [debug] 26416#0: *1 access phase: 8
    2012/09/28 16:29:49 [debug] 26416#0: *1 access phase: 9
    2012/09/28 16:29:49 [debug] 26416#0: *1 access phase: 10
    2012/09/28 16:29:49 [debug] 26416#0: *1 post access phase: 11
    2012/09/28 16:29:49 [debug] 26416#0: *1 try files phase: 12
    2012/09/28 16:29:49 [debug] 26416#0: *1 posix_memalign: 0000000001E798F0:4096 @16
    2012/09/28 16:29:49 [debug] 26416#0: *1 http init upstream, client timer: 0
    2012/09/28 16:29:49 [debug] 26416#0: *1 epoll add event: fd:13 op:3 ev:80000005
    2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: "Host: "
    2012/09/28 16:29:49 [debug] 26416#0: *1 http script var: "ireedr.com"
    2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: " "
    2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: ""
    2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: ""
    2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: "X-Real-IP: "
    2012/09/28 16:29:49 [debug] 26416#0: *1 http script var: "106.187.97.22"
    2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: " "
    2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: "Connection: close "
    2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header: "Accept-Encoding: identity, deflate, compress, gzip"
    2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header: "Accept: */*"
    2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header: "User-Agent: Android/1.0"
    2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header:
    "POST //login HTTP/1.0
    Host: ireedr.com
    X-Real-IP: 106.187.97.22
    Connection: close
    Accept-Encoding: identity, deflate, compress, gzip
    Accept: */*
    User-Agent: Android/1.0
    "
    ...
    Maybe the post-rewrite phase rewrote the request. Can anybody help me solve this problem, or explain why nginx does that? Much appreciated.
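    For what it's worth, a possible workaround sketch (an assumption, not a confirmed fix): when proxy_pass is written without a URI part, nginx passes the client's request URI to the upstream unchanged, which sidesteps the prefix-replacement step that the if block appears to interfere with here. Assuming the Jetty backend expects exactly the URIs the client sends, the location could look like this:
    location / {
        # no trailing slash on proxy_pass, so /login is forwarded as /login untouched
        proxy_pass http://localhost:8785;
        proxy_set_header Host $http_host;
        proxy_set_header Remote-Addr $http_remote_addr;
        proxy_set_header X-Real-IP $remote_addr;
        if ( $http_user_agent ~ Android ){
            # rewrite something, been commented
        }
    }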

    Read the article

< Previous Page | 593 594 595 596 597 598 599 600 601 602 603 604  | Next Page >