Search Results

Search found 13469 results on 539 pages for 'avoid trouble'.


  • swapping or thrashing with vast amounts of unmapped pagecache

    - by Marco
    I'm using Kubuntu Jaunty (i386, 32-bit), kernel 2.6.28-13-generic. I have 4 GB of RAM, of which only 3317 MB are seen by the system (I guess because of the 32-bit kernel). The pagecache utilization grows continually, to the point that the system becomes unusable after a few days. This happens even when I don't do anything (all user applications closed and the bare minimum of services enabled). If swap is enabled, the system starts to use swap space, eventually using it all. Even with swap disabled, disk activity becomes continuous and the system unresponsive. For example, right now the system is working (albeit a tad slow) with only Firefox and Wing IDE running, and I have 2 GB cached with only 45 MB mapped:

        $ free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3247328      99060          0       8416    2117980
        -/+ buffers/cache:    1120932    2225456
        Swap:      2144668     519448    1625220

        $ cat /proc/meminfo
        MemTotal:        3346388 kB
        MemFree:           97128 kB
        Buffers:            7872 kB
        Cached:          2120224 kB
        SwapCached:       413860 kB
        Active:          2304596 kB
        Inactive:         865984 kB
        Active(anon):    2279168 kB
        Inactive(anon):   830236 kB
        Active(file):      25428 kB
        Inactive(file):    35748 kB
        Unevictable:          32 kB
        Mlocked:              32 kB
        HighTotal:       2492940 kB
        HighFree:           5456 kB
        LowTotal:         853448 kB
        LowFree:           91672 kB
        SwapTotal:       2144668 kB
        SwapFree:        1625244 kB
        Dirty:                84 kB
        Writeback:             0 kB
        AnonPages:        629304 kB
        Mapped:            45768 kB
        Slab:              45600 kB
        SReclaimable:      21756 kB
        SUnreclaim:        23844 kB
        PageTables:         4468 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     3817860 kB
        Committed_AS:    3735020 kB
        VmallocTotal:     122880 kB
        VmallocUsed:        9352 kB
        VmallocChunk:      66600 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       4096 kB
        DirectMap4k:       16376 kB
        DirectMap4M:      888832 kB

    If I try to drop the caches, little happens:

        # sync ; echo 3 > /proc/sys/vm/drop_caches ; free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3220580     125808          0       3020    2100600
        -/+ buffers/cache:    1116960    2229428
        Swap:      2144668     519356    1625312

    Right now I have vm.swappiness = 5, but I've also tried 0 and 1 (no noticeable difference). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no difference). As I said, the pagecache eats all memory even with swapping turned off. What is happening, and how can I avoid it?
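
    For readers wanting to reproduce the diagnosis above, here is a minimal sketch of the knobs the poster mentions. The sysctl names are standard Linux; the values are only illustrative, not a known fix:

        #!/bin/sh
        # Inspect the current reclaim tuning
        sysctl vm.swappiness vm.vfs_cache_pressure

        # Try different settings (illustrative values)
        sysctl -w vm.swappiness=1
        sysctl -w vm.vfs_cache_pressure=150

        # Flush dirty pages, then drop pagecache, dentries and inodes (3 = all)
        sync
        echo 3 > /proc/sys/vm/drop_caches

        # Watch what is actually mapped vs. merely cached
        grep -E 'Cached|Mapped|AnonPages' /proc/meminfo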

  • Can a CNAME be a hostname?

    - by pulegium
    This is a bit of a theological question, but nonetheless... A server has a hostname; let's say the FQDN is hostname.example.com (to be really precise about what I mean, this is the name set in /etc/sysconfig/network). The very same server has multiple interfaces on different subnets; let's say the IPs are 10.0.0.1 and 10.0.1.1. Now the question: is it theoretically (mind you, this is important; I know that practically it works, but I'm interested in the purely academic answer) allowed to have the following setup:

        interface1.example.com.  IN A      10.0.0.1
        interface2.example.com.  IN A      10.0.1.1
        hostname.example.com.    IN CNAME  interface1.example.com.

    Or should it rather be:

        hostname.example.com.    IN A      10.0.0.1
        interface2.example.com.  IN A      10.0.1.1
        interface1.example.com.  IN CNAME  hostname.example.com.

    I guess it's obvious which one makes more sense from the management/administration POV, but is it technically correct? The argument against the first setup is that a reverse lookup of 10.0.0.1 returns interface1.example.com and not what one might expect (i.e. the hostname, hostname.example.com), so a forward request followed by a subsequent reverse lookup would return different results. Now, as I said, I want a theoretical answer: links to RFC sections etc. that explicitly allow or disallow using a CNAME as a hostname. If there are none, that's fine too; I just need to confirm. I have failed to find any explicit statements so far, bar this book, where this situation is given as an example and implies it can be done, as one of the ways to avoid MX records pointing to a CNAME.
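
    A quick way to see the forward/reverse asymmetry described above; the hostnames are the example ones from the question, so substitute real records:

        #!/bin/sh
        # Forward lookup: follows the CNAME down to the A record
        dig +short hostname.example.com

        # Reverse lookup: returns whatever the PTR for 10.0.0.1 says,
        # e.g. interface1.example.com, not necessarily the hostname
        dig +short -x 10.0.0.1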

  • Use RAID for desktop computer?

    - by NickAldwin
    I'm building a new computer over the summer. I'm fairly competent with computer hardware, and am thus building the computer from scratch. I have everything planned out, but I was wondering whether I should use RAID, and if so, which level. I plan to purchase two 1 TB drives. Currently I'm leaning toward RAID 1 for the redundancy: I've heard newer super-capacity drives fail more often than one would think, and I don't want to have a problem and lose all my data. What do you think? My mobo supports RAID 0/1/5/10. Is it worth using RAID at all, or should I just use a backup service like Mozy? Should I consider RAID 0 instead, for the performance? I keep going back and forth on this one. Thanks a lot for your help. EDIT: I'd like to avoid splitting the OS drive from the data drive, because that gets frustrating when programs like to store a lot of data in their Program Files folder. I've lived with that situation before and it gets annoying after a while.
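
    If software RAID ends up being the answer, a Linux mdadm mirror is one alternative to motherboard fakeraid; this is a hedged sketch only, and the device names are placeholders:

        #!/bin/sh
        # Create a RAID 1 mirror from two whole disks (sdb/sdc are placeholders)
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

        # Watch the initial sync and check array health
        cat /proc/mdstat
        mdadm --detail /dev/md0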

  • How can I recover data from a dmg that cannot mount?

    - by Benjamin Lee
    I backed up a hard drive into a dmg, then reformatted the hard drive. (I had previously deleted the EFI partition, which was preventing me from reinstalling the operating system.) When I try to use the restore function in Disk Utility, it gives an input/output error. I get this error with anything I do to the image, including mounting, converting, attaching, verifying, scanning, and getting info through hdiutil imageinfo. I have run all of these with hdiutil and the -noverify, -nomount, and -ignorebadchecksums flags. When I copy the image onto another disk/partition I get a different error, something like "No filesystem". I cannot repair the image with Disk Utility or asr; both throw the I/O error. When I put the -verbose flag on the command, I actually get a different error: "hdiutil: attach failed - No child processes". I have output from both the -verbose and -debug flags, but it is fairly long, so I had to attach a link to avoid the 3000 character limit. No recovery tool can get at the data because the image is both compressed and unmountable. How can I get the data back, and what has gone wrong? -debug -verbose
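
    For reference, the kind of invocations described above look like the following; the image path is a placeholder, and this is only a sketch of the attempts, not a recovery recipe:

        #!/bin/sh
        # Attempt to attach without mounting or verifying, tolerating bad checksums
        hdiutil attach -nomount -noverify -ignorebadchecksums backup.dmg

        # Inspect the image metadata, then try converting to an uncompressed,
        # writable format, which sometimes surfaces a more specific error
        hdiutil imageinfo backup.dmg
        hdiutil convert backup.dmg -format UDRW -o backup-rw.dmg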

  • Home Networking Questions

    - by Eddie Parker
    Hello: I'm looking to wire my home with CAT-X (where X is probably going to be CAT-6, unless someone can convince me otherwise ;) ). I'm seeking advice on what equipment I'll need for the job, and anything I should watch out for. It's a two-story half-duplex I'll be wiring, roughly 1800 sq ft. Here's what I believe I need so far:

    - Bulk CAT-6 Ethernet cabling, CM rated
    - Gigabit switch(es?)
    - Patch panel
    - Equipment for cutting and terminating wire, fishing it through walls, etc.
    - Wall outlet covers, etc.

    Questions I have:

    - Does the MHz rating on the Ethernet cable matter? If so, why?
    - I currently have two gigabit switches, an 8-port and a 5-port. Should I buy one big switch to cover all the connections I need, or should I just chain the two together and buy a switch for however many other connections I need?
    - Do I really need a patch panel? I understand it keeps the cables looking cleaner than coming out of a hole in the wall, but is there some other product I can use, perhaps combining a switch with a patch panel or some such? Ideally I'll have all this running out of a relatively small closet, so the fewer (or smaller) the components, the better.

    Any advice, links, or suggested products to use/avoid would be appreciated!

  • Route specific HTTP requests through pfSense OpenVPN

    - by DennisQ
    Hi. To start, I have very little knowledge of routes, iptables, etc. That said, here's what I'm trying to accomplish and where I think I'm stumped. Problem: we have an external website which we recently firewalled so that it only accepts traffic from our office IP addresses. This works well at the office, but not for remote access through VPN, as we don't route all traffic through OpenVPN, and I would rather avoid forcing everyone to route all traffic through the tunnel just to accommodate this one site. Environment: the main router box runs pfSense; em0 is the internal interface, em1 the external. The internal net is 10.23.x and the VPN is 10.0.8.0/24. I believe what I need to do is add a route to the VPN server config to send all traffic for that site's IP over the VPN tunnel. I think that part is working, but I don't get a response back, so I'm assuming I need some NAT config on the VPN server to route the response back over the tunnel. What I've found so far suggests the following, but since this is a pfSense box on FreeBSD, I can't run iptables:

        # Make sure IP forwarding is enabled:
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Set up NAT back out:
        iptables -t nat -A POSTROUTING -s 10.0.8.0/24 -o em0 -j MASQUERADE

    Am I on the right path, and if so, how do I accomplish this through the pfSense UI or the FreeBSD CLI? Thanks!
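
    Since iptables doesn't exist on FreeBSD, the pf equivalent of that masquerade rule would look roughly like the sketch below. The interface and subnet are taken from the question, but treat this as an untested outline rather than the config pfSense itself would generate:

        # /etc/pf.conf fragment (sketch): NAT VPN clients out of the internal interface
        nat on em0 from 10.0.8.0/24 to any -> (em0)

        # reload the ruleset
        pfctl -f /etc/pf.conf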

  • mysql master-master setup as a way to simplify master-slave promotion

    - by Chris Go
    I'm trying to see if the following plan is viable. The goal here is HA (uptime), not load: writes are fine on one MySQL 5.5 server (with InnoDB), but not possible when the database is down. Currently I have a master-slave replication setup which works fine, except that it doesn't have automatic promotion (obviously). What I am planning to do is set up master-master replication and do this "automatic promotion" using Amazon Route 53 DNS failover (health checks). What I am trying to avoid is the auto-increment trick, because the "business folks" got used to the auto-incrementing PK as consecutive numbers (yeah, I know this is bad, but the data goes back to 2004). So: set up the master-master replication WITHOUT the auto-increment collision prevention bit. The primary master is db1.domain.com and the secondary master is db2.domain.com. In Amazon Route 53, set up a DNS failover record for db.domain.com:

    - primary failover is db1.domain.com, with a TCP health check on the IP address, port 3306
    - secondary failover is db2.domain.com, with a TCP health check on the IP address, port 3306

    Most of the time (99%), unless tcp://db1.domain.com:3306 is dead, db1.domain.com will be served up on DNS hits to db.domain.com; in fact, hopefully this is 100%. The possible downside is the loss of a primary key (collision), and I think I am OK with losing one order. We are a low-data-volume B2B business and can just call our client up if this occurs (like an order disappearing). Does this sound like a good plan? I will then also run another slave replication off db1.domain.com as "master" to a slave-db1.domain.com; not sure why, maybe for heavy SELECTs?
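
    A minimal sketch of the replication half of that plan; server IDs, user names and log coordinates are placeholders, and the auto_increment_increment/auto_increment_offset pair is deliberately omitted, as described above:

        # my.cnf fragment on db1.domain.com (db2 is identical except server-id = 2)
        [mysqld]
        server-id         = 1
        log-bin           = mysql-bin
        log-slave-updates = 1

        -- on each master, point replication at the other one (SQL)
        CHANGE MASTER TO
          MASTER_HOST='db2.domain.com',
          MASTER_USER='repl',
          MASTER_PASSWORD='secret',
          MASTER_LOG_FILE='mysql-bin.000001',
          MASTER_LOG_POS=107;
        START SLAVE;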

  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings: to skip reverse hostname lookups on connect and to enable the slow query log. I edited /etc/my.cnf, making only these changes, then restarted mysqld with /etc/init.d/mysql restart. All appeared to be well, and when I connect to mysqld, remotely or locally, it connects okay; the slight problem is that mysqld crashes whenever you try to issue any kind of statement. The client session looks like:

        Reading table information for completion of table and column names
        You can turn off this feature to get a quicker startup with -A

        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 3
        Server version: 5.1.31-1ubuntu2-log

        Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

        mysql> show tables;
        ERROR 2006 (HY000): MySQL server has gone away
        No connection. Trying to reconnect...
        Connection id:    1
        Current database: mydb

        ERROR 2006 (HY000): MySQL server has gone away
        No connection. Trying to reconnect...
        ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
        ERROR: Can't connect to the server

        ERROR 2006 (HY000): MySQL server has gone away
        No connection. Trying to reconnect...
        ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
        ERROR: Can't connect to the server

        ERROR 2006 (HY000): MySQL server has gone away
        Bus error

    The mysqld error log looks like:

        101210 16:35:51  InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
        101210 16:35:51  InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595
        InnoDB: Failing assertion: error == DB_SUCCESS
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: about forcing recovery.
        101210 16:35:51 - mysqld got signal 6 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=3
        max_threads=600
        threads_connected=3
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x18209220
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f8d791580d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f902a76a080]
        /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5]
        /lib/libc.so.6(abort+0x183) [0x7f90291fabc3]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f902a7623ba]
        /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd]

        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x18213c70 =
        thd->thread_id=3
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
        101210 16:35:51 mysqld_safe Number of processes running now: 0
        101210 16:35:51 mysqld_safe mysqld restarted
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        101210 16:35:54  InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        101210 16:35:56  InnoDB: Started; log sequence number 456 143528628
        101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
        101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so
        replication may break when this MySQL server acts as a slave and has his hostname
        changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
        101210 16:35:56 [Note] Event Scheduler: Loaded 0 events
        101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.31-1ubuntu2-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
        101210 16:36:11  InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
        101210 16:36:11  InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595
        InnoDB: Failing assertion: error == DB_SUCCESS
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: about forcing recovery.
        101210 16:36:11 - mysqld got signal 6 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=600
        threads_connected=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x18588720
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f4c8a73f080]
        /lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5]
        /lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f4c8a7373ba]
        /lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd]

        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x18599950 =
        thd->thread_id=1
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
        101210 16:36:11 mysqld_safe Number of processes running now: 0
        101210 16:36:11 mysqld_safe mysqld restarted

    The config is:

        [mysqld_safe]
        socket          = /var/run/mysqld/mysqld.sock
        nice            = 0

        [mysqld]
        innodb_file_per_table
        innodb_buffer_pool_size=10G
        innodb_log_buffer_size=4M
        innodb_flush_log_at_trx_commit=2
        innodb_thread_concurrency=8
        skip-slave-start
        server-id=3
        #
        # * IMPORTANT
        #   If you make changes to these settings and your system uses apparmor, you may
        #   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /DB2/mysql
        tmpdir          = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #bind-address           = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer              = 16M
        max_allowed_packet      = 16M
        thread_stack            = 128K
        thread_cache_size       = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover          = BACKUP
        max_connections         = 600
        #table_cache            = 64
        #thread_concurrency     = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit       = 1M
        query_cache_size        = 32M
        #
        skip-federated
        slow-query-log
        skip-name-resolve

    Update: I followed the instructions at http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4. The logs show a different error, but the behavior is still the same:

        101210 19:14:15 mysqld_safe mysqld restarted
        101210 19:14:19  InnoDB: Started; log sequence number 456 143528628
        InnoDB: !!! innodb_force_recovery is set to 4 !!!
        101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
        101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so
        replication may break when this MySQL server acts as a slave and has his hostname
        changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
        101210 19:14:19 [Note] Event Scheduler: Loaded 0 events
        101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.31-1ubuntu2-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
        101210 19:14:32  InnoDB: error: space object of table mydb/__twitter_friend,
        InnoDB: space id 1602 did not exist in memory. Retrying an open.
        101210 19:14:32  InnoDB: error: space object of table mydb/access_request,
        InnoDB: space id 1318 did not exist in memory. Retrying an open.
        101210 19:14:32  InnoDB: error: space object of table mydb/activity,
        InnoDB: space id 1595 did not exist in memory. Retrying an open.
        101210 19:14:32 - mysqld got signal 11 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=600
        threads_connected=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x1753c070
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f7cbc350080]
        /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516]
        /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f7cbc3483ba]
        /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd]

        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x1754d690 =
        thd->thread_id=1
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
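
    Given that the log itself points at the forcing-recovery procedure, the usual salvage path is sketched below. The paths and layout are taken from the question, but the exact steps (and whether level 4 is high enough) are assumptions, not a verified fix:

        #!/bin/sh
        # With innodb_force_recovery = 4 already in [mysqld] and the server up,
        # dump everything while it is still readable
        mysqldump --all-databases > /root/all-databases.sql

        # Stop mysqld, move the damaged InnoDB files aside, drop the
        # innodb_force_recovery line from my.cnf, then restart and reload
        /etc/init.d/mysql stop
        mkdir -p /root/innodb-damaged
        mv /DB2/mysql/ibdata1 /DB2/mysql/ib_logfile* /root/innodb-damaged/
        /etc/init.d/mysql start
        mysql < /root/all-databases.sql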

  • Alter charset and collation in all columns in all tables in MySQL

    - by The Disintegrator
    I need to execute these statements for all columns in all tables:

        alter table table_name charset=utf8;
        alter table table_name alter column column_name charset=utf8;

    Is it possible to automate this in any way inside MySQL? I would prefer to avoid mysqldump. Update: Richard Bronosky showed me the way :-) The query I needed to execute on every table:

        alter table DBname.DBfield CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

    Crazy query to generate all the other queries:

        SELECT distinct
          CONCAT('alter table ', TABLE_SCHEMA, '.', TABLE_NAME,
                 ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;')
        FROM information_schema.COLUMNS
        WHERE TABLE_SCHEMA = 'DBname';

    I only wanted to execute it on one database. It was taking too long to execute all in one pass; it turned out it was generating one query per field per table, when only one query per table was necessary (distinct to the rescue). Getting the output into a file was how I realized this. How to write the output to a file:

        mysql -B -N --user=user --password=secret -e "SELECT distinct CONCAT('alter table ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;') FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'DBname';" > alter.sql

    And finally, to execute all the queries:

        mysql --user=user --password=secret < alter.sql

    Thanks Richard. You're the man!

  • How do I use a Zyxel P660 router as just a modem so that I can connect a WRT54GL router in cascade?

    - by Kenji Kina
    I have a Zyxel P660HW-T1 v2 router (which has a DSL port) and a WRT54GL router (which does not), and the exact same situation as in this thread. (UPDATE: the connection between the two devices is the important part, since I have been able to set the Zyxel router to act as a bridge by itself quite nicely. I have accessed my internet connection directly from a PC using PPPoE without any problems; the issues arise when I try to connect the WRT54GL router between the Zyxel "modem" and my PCs.) I've been trying to use my Zyxel P660 as a modem only:

    - Set the P660 to bridge mode.
    - Changed the WRT54GL's IP address to 192.168.2.1 to avoid a conflict on the network.
    - Configured the PPPoE settings as required on the WRT54GL.

    The thing is that when I connect the Zyxel modem/router to the WRT54GL's internet port, the port's light doesn't turn on. I can confirm that this port has been working fine, so I'm not really sure what's going on between the devices. I checked several settings, such as IPs, and tried disabling DHCP on the Zyxel/Linksys and the firewall on both: still nothing. Also, I tried connecting the Zyxel directly to a computer in bridge mode and dialed successfully. I have even posted a question here before, thinking that what I asked there was the only thing I needed to get things done. Unfortunately it wasn't, and the guy who solved his issue didn't give enough details in his post (and is unlikely to give more, since he was an anonymous user). For one, I don't know how to do this part: "connected to the Zyxel through telnet and forced LAN port 1 to be at 100mb as well". I can't find the option that does this on the Zyxel router, either through telnet or the web admin. Can anyone help me solve this?

  • EC2 Ubuntu - Force instance to use internal IP

    - by Peter
    I've just set up a micro instance on EC2 (AMI ID ami-e59ca991). I had hoped to avoid charges for a year, as my usage falls well within the bounds of the free tier, but I have been charged $0.01 for "regional data transfer". I read here that this is because my instance is talking to itself via its external IP address. From what I've Googled, it looks like you can stop the charges by making sure the instance uses its internal IP address. However, when I ping the hostname of my instance internally (via an ssh session), it already resolves to the instance's internal IP address. How can I configure my instance so that I do not get these charges? Is it as simple as adding a line to my hosts file? Additionally, is this the real reason for the charge? I'm concerned that I've misunderstood the pricing somewhere. I have Apache and MySQL (with phpMyAdmin) running on the machine; could I be being charged for data transfer associated with these? (I have only one flat HTML page, I have only logged in via phpMyAdmin, and I have no data in my database.) Edit: additionally, my user account on MySQL was declared as:

        grant all privileges on *.* to 'peter'@'localhost';

    Should I have instead used the internal hostname for the instance?

        grant all privileges on *.* to '[email protected]';

    Cheers, Pete
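
    One way to check what the instance's names resolve to is EC2's instance metadata service; the endpoint and field names below are standard, though which of them matters for this charge is exactly the open question:

        #!/bin/sh
        # Ask the metadata service for the instance's internal and public names
        curl -s http://169.254.169.254/latest/meta-data/local-hostname; echo
        curl -s http://169.254.169.254/latest/meta-data/local-ipv4; echo
        curl -s http://169.254.169.254/latest/meta-data/public-hostname; echo

        # From inside EC2, the public DNS name resolves to the *internal* IP,
        # so connecting by that name should stay on the internal network
        nslookup "$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)"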

  • How to fix Firefox Blue Screen of Death?

    - by WilliamKF
    I am running Windows XP SP3 and Firefox v3.6.2 and have an issue with Firefox causing the Blue Screen of Death. If I run in Windows safe mode it does not occur, but running normally, it seems my Firefox profile is going bad, with the result that certain web pages cause the BSOD. For example, at present, visiting ebay.com triggers it. Before that it was happening on google.com, but IT removed my Firefox profile and that seemed to fix the Google failure; now it has started occurring on eBay. I turned off all extensions and it still occurs. I'd like to fix my system so this does not happen. The IT folks don't seem to be able to solve it, so I am trying to fix it on my own. The BSOD says something like (from memory) IRQL_NOT_LESS_THAN_OR_EQUAL. Why would safe mode avoid the issue, and what does that tell us about the probable cause? I don't want to keep deleting my profile, so I'd like to find out the cause of the corruption.

  • puppet cert mismatch in ec2

    - by Stick
    I'm setting up a puppetmaster (2.7.6) in EC2 via gems (on RHEL 6) and I'm running into problems with the cert names and getting the master able to talk to itself. My puppet.conf looks like this:

        [main]
        logdir = /var/log/puppet
        rundir = /var/run/puppet
        vardir = /var/lib/puppet
        ssldir = $vardir/ssl
        pluginsync = true
        environment = production
        report = true
        certname = master

    When I start the puppetmaster process, the ssl directory looks like:

        ssl/private_keys/master.pem
        ssl/crl.pem
        ssl/public_keys/master.pem
        ssl/ca/ca_crl.pem
        ssl/ca/signed/master.pem
        ssl/ca/ca_crt.pem
        ssl/ca/ca_pub.pem
        ssl/ca/ca_key.pem
        ssl/certs/ca.pem
        ssl/certs/master.pem

    I have an /etc/hosts entry on the box to point the 'puppet' hostname to localhost so that I don't have to change the 'server' option. When I run the agent I get the following:

        # puppet agent --test
        info: Retrieving plugin
        err: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Server hostname 'puppet' did not match server certificate; expected master
        err: /File[/var/lib/puppet/lib]: Could not evaluate: Server hostname 'puppet' did not match server certificate; expected master Could not retrieve file metadata for puppet://puppet/plugins: Server hostname 'puppet' did not match server certificate; expected master
        err: Could not retrieve catalog from remote server: Server hostname 'puppet' did not match server certificate; expected master
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run
        err: Could not send report: Server hostname 'puppet' did not match server certificate; expected master

    If I specify the certname as the server (with a corresponding hosts entry) I get:

        # puppet agent --test --server master
        info: Retrieving plugin
        err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://master/plugins
        info: Caching catalog for master
        info: Applying configuration version '1321805956'
        notice: Finished catalog run in 0.05 seconds

    Which is success of a sort; that source error will bite me later when I'm applying manifests. I've tried a couple of other variations using the EC2 private hostname and gotten mixed results. I'd like to avoid setting server = 'x' and instead use dns/hosts to control which server 'puppet' resolves to (it plays better with availability zones, etc.).
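
    A common way out of this mismatch is to reissue the master's certificate with 'puppet' as a DNS alt name. The commands below are the standard puppet 2.7 cert CLI, but treat the sequence as a sketch; the extra alt name is an assumption about the desired setup:

        #!/bin/sh
        # On the master: remove the old cert so it can be regenerated
        puppet cert --clean master

        # Add to the [master] section of puppet.conf before restarting:
        #   dns_alt_names = master,puppet

        # Restart the puppetmaster so it mints a new cert, then verify
        puppet cert --list --all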

  • Windows 7 migration led to crashdump and hibernate problems

    - by MartyMacGyver
    Note: I'm using a Samsung 830 SSD (the OS was migrated from a defunct PC) and, other than these two (interrelated?) problems, it's working fine. Surprisingly well, actually. The motherboard is an ASUS P8Z77-V Deluxe.

    Problem 1: crash dumps are not working. volmgr throws an event 45, "The system could not successfully load the crash dump driver.", whenever you modify crash dump settings or a crash dump occurs. diskpart says "Crashdump disk = no", which is peculiar.

    Problem 2: hibernation isn't working. Again, volmgr throws the same event 45 if you try to hibernate. The screen blanks, then you're at the password prompt; no sleep occurs. (Yes, I know I should avoid hibernation on SSDs, but it's enabled and the hibernation file is definitely there, so I'd like to know why it's failing.) diskpart claims "Hibernation file = no", which is again peculiar: the file is plainly there and getting created by the system.

    The common factor appears to be volmgr and/or the crashdump "service" (if that's what it is). I'd much rather get this working than spend days reinstalling and reconfiguring the entire system, especially when it's working perfectly otherwise. Sleep works as well (as long as it's not hybrid sleep). So, what defines the "Crashdump disk" and "Hibernation file disk" flags in diskpart's output? And what might be going wrong that breaks crash dumps in particular?
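
    For reference, a hedged way to poke at the two features from an elevated prompt; these are stock Windows commands, though whether they clear event 45 in this migrated-disk situation is an open question:

        :: Recreate the hibernation file from scratch
        powercfg -h off
        powercfg -h on

        :: Re-enable a kernel memory dump via the registry (value names are standard)
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled /t REG_DWORD /d 2 /f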

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run GlassFish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables:

        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080

    This works, but I wonder:

    - whether this has any significant performance impact compared to binding directly to port 80;
    - whether I could make a similar setup work for HTTPS (or if that must run on 443);
    - whether there's a way to stop other users from binding to port 8080 (in case my server crashes); maybe block that port permanently to other users somehow?

    ...or whether I should use authbind/privbind instead. Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to:

        exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    For privbind:

        exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    (Only) with these settings can I successfully perform a create-domain --domainport 80. This proves that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root, of course). However, in both cases I get the following exception when starting the domain (start-domain):

        [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#]

    I haven't found a solution for that yet (after searching the web, it seems this isn't so easy?). But maybe the solution with iptables is good enough; what do you think? Thanks, Chris
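
    The HTTPS variant of the iptables approach is just a second REDIRECT; 8181 is GlassFish v3's usual HTTPS listener, but that's an assumption, so check the domain's config before relying on it:

        # Redirect the privileged HTTPS port to the non-privileged listener
        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 443 -j REDIRECT --to-port 8181

        # Same trick for locally generated traffic, which skips PREROUTING
        iptables -t nat -I OUTPUT -p tcp -d x.x.x.x --dport 443 -j REDIRECT --to-port 8181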

  • What can be the causes of an HTTP server crash?

    - by mithunmo
    Hello, I am using WAMP server on Windows XP:

    - Apache 2.2.11
    - MySQL 5.1.36 (InnoDB engine)
    - PHP 5.3.0

    I observe that my WAMP server crashes in the following scenarios:

    - if I use a low-end PC (low processor speed and low RAM);
    - after making changes to the httpd.conf file, e.g. changing the Allow from IP address (but here it crashes only once and then starts to work fine);
    - random crashes.

    Crash log:

        szAppName : httpd.exe    szAppVer : 2.2.11.0    szModName : php5ts.dll
        szModVer : 5.3.0.0    offset : 0000c309
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt

    My questions:

    - Can high CPU utilization or low RAM also cause the HTTP server to crash?
    - Excessive file reading, as in every 10 seconds?
    - Unlimited script execution time? I have set the maximum execution time in my PHP script to 0, as it sometimes has to run for 2-3 days. Is there any way to avoid this?
    - Access to the database: should we take a lock before reading and writing?

    Can these be the reasons for the random WAMP server crashes, or is it some other programming error? Please guide me. Regards, Mithun

  • Problem recreating BCD on Windows 7 64bit - The requested system device cannot be found

    - by Domchi
    An NVIDIA driver upgrade crashed my Windows 7 installation, so I'm working to undo the damage. What I can do: I can boot the Windows installer from a USB drive, and I can boot Hiren's Boot CD. Although automated Windows repair fails, I can get to a command prompt when I boot the Windows installer from the USB drive, and I can see my drive and all my data. What I cannot do is boot into Windows; I get this message:

        Windows failed to start. A recent hardware or software change might be the cause.
        To fix the problem:
        1. Insert Windows CD and run a repair your computer option.

        File: /boot/bcd
        Status: 0xc000000f
        Info: an error occurred while attempting to read the boot configuration data.

    It seems that something is wrong with my /Boot/BCD, so I'm trying to recreate it from scratch. I've tried all the methods detailed here (including Windows repair, which fails), and I'm left with the last one (near the bottom of that page). When I type the following command, as in the tutorial:

        bcdedit.exe /import c:\boot\bcd.temp

    ...it fails with the following error:

        The store import operation has failed.
        The requested system device cannot be found.

    Many Google results say that I must use diskpart to set my partition active, but it's already set as active. Also, when I try this:

        bcdedit /enum

    ...it fails with a similar message:

        The boot configuration data store could not be opened.
        The requested system device cannot be found.

    Does anyone know what that error message means, and what the requested system device is? I'd like to avoid having to reinstall Windows, since all the files on disk seem to be fine.
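
    For context, the standard rebuild sequence from the recovery prompt looks like this; it's the usual bootrec dance rather than anything specific to this failure, so treat it as a sketch:

        :: From the Windows RE command prompt
        bootrec /fixmbr
        bootrec /fixboot

        :: Move the damaged store aside, then let bootrec rebuild it
        attrib -h -s C:\boot\bcd
        ren C:\boot\bcd bcd.old
        bootrec /rebuildbcd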

  • Xen hypervisor 4.1 Kernel Panic on Ubuntu 12.04

    - by rkmax
    I have a fresh Ubuntu 12.04.1 amd64 server install, following this guide. I chose the LVM option, used the whole disk, and made two LVs:

        /dev/mapper/vg-root   /      (80GB)
        vg-swap               swap   (4GB)

    I then installed Xen with:

        apt-get install xen-hypervisor-4.1-amd64

    and configured /etc/default/grub as in the guide, adding:

        GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=768M"

    After all this I ran update-grub and rebooted. But when I try to boot with Xen 4.1-amd64, I always get a kernel panic with the message:

        Domain-0 allocation is too small for kernel image

    My questions are: what is this error about, and where can I grow this allocation to avoid it?

    grub.cfg:

        menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.2.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
            insmod part_gpt
            insmod ext2
            set root='(hd0,gpt2)'
            search --no-floppy --fs-uuid --set=root 3541e241-7f39-4ebe-8d99-c5306294c266
            echo 'Loading Xen 4.1-amd64 ...'
            multiboot /xen-4.1-amd64.gz placeholder dom0_mem=768M
            echo 'Loading Linux 3.2.0-29-generic ...'
            module /vmlinuz-3.2.0-29-generic placeholder root=/dev/mapper/backup--xen-root ro rootdelay=180
            echo 'Loading initial ramdisk ...'
            module /initrd.img-3.2.0-29-generic
        }

    Note: I've followed this guide too.
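
    One hedged experiment is to raise the allocation and pin it with an explicit maximum; the dom0_mem=X,max:Y form is standard Xen command-line syntax, but whether a larger value clears this particular panic is untested here:

        # /etc/default/grub (sketch): a bigger, capped dom0 allocation
        GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M,max:1024M"

        # then regenerate the menu and reboot
        update-grub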

  • To clone or to automate a system installation?

    - by Shtééf
    Let's say you're setting up a cluster of servers performing the same task, or just setting up a bunch of different servers while expecting to use a base configuration on all of them. Is it better practice to create a base image and clone it, or to automate the installation and configuration? I occasionally end up in this argument with my boss, in situations where we're time-pressed. When he sees me struggle with perfecting the automation, his suggestion is often to clone the entire disk to the other machines. But my instinct has always been to avoid cloning. This is mostly from an Ubuntu perspective, but the question is fairly general. My reasons for avoiding cloning are:

    - On a typical install, even a fresh one, there are already several unique identifiers installed: filesystem UUIDs, SSH host keys, among others. These would have to be regenerated.
    - Networking needs to be reconfigured for each clone. This would have to be done off-line, of course, or the settings would conflict with other machines on the network.

    On the other hand, some of the cloning advantages are quite clear as well:

    - (Initially?) less effort required than automating the configuration.
    - Tools exist to quickly address (some of) the above disadvantages.

    (I can see right through my own bias there.)
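
    The "regenerate the unique bits" step from the first list, as a hedged Ubuntu-flavored sketch; the device name is a placeholder:

        #!/bin/sh
        # Regenerate SSH host keys on a freshly cloned Ubuntu/Debian machine
        rm -f /etc/ssh/ssh_host_*
        dpkg-reconfigure openssh-server

        # Give a cloned ext filesystem a new UUID (run against the right device!)
        tune2fs -U random /dev/sda1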

  • Two hosting providers running simultaneously... possible / not possible? good practice / unnecessary?

    - by user29600
    For the sake of their reputations, I won't mention names; I'll just use placeholders: ABC Web Dev for the business I worked for previously, and XYZ Hosting for the hosting company they used. I recently found out that XYZ Hosting had some sort of incident in which they lost a lot of their clients' data, including ABC Web Dev's. ABC Web Dev was able to recover some of their customers' websites by pulling them from their local development computers and putting them up on another hosting provider. They lost a lot of clients because of it, and their reputation was ruined. I'm starting my own web dev company and I don't want to run into the same issue. I'm planning on using Rackspace, and although they are a great company, according to Wikipedia they have still had downtime in the past. I thought it might be a good idea to run two providers at once, to ensure that if anything happened to one, the websites would still be live on the other. I know the websites would have to be served from one server at a time, but if there's a way to redirect requests to the second server when the first one is down, that would solve my issue. As a note, we will have a staging environment set up locally, which allows quick recovery if a provider has any issues; however, I'd like to avoid any downtime at all if possible. So my questions are:

    - Has anyone tried running two providers simultaneously?
    - Would this be considered good practice, or am I going too far?
    - Is there really any way to run two simultaneously, where one server acts as a backup?

  • Apple video adapters: Mini-DVI to DVI + DVI to video?

    - by Ken
    I have:

    - a MacBook (which has Mini-DVI output)
    - an Apple Mini-DVI to DVI adapter (so I can hook my MacBook up to my DVI LCD)
    - a Mac mini (which has DVI output)

    I see that I can get a DVI-to-video (S-Video and composite) adapter from Apple that will let me hook my Mac mini up to my TV set (which has only component, S-Video, and composite inputs; nothing digital). So far so good. Question: will that same adapter also let me hook my MacBook up to my TV? That is, can I hook up MacBook - Mini-DVI to DVI - DVI to video - TV set, and see a picture? I know there are digital/analog/integrated variants of DVI, and it's not at all clear to me what pins these things have and what signals they're sending. One website I found suggested that it would work, and another suggested that it wouldn't even physically connect. So I'm looking for, ideally, someone who's actually tried it, or who has these adapters sitting around to try. I know I can buy two adapters (Mini-DVI to video, and DVI to video) to do this, but at $20 a pop I'll avoid that if at all possible.

  • Compiling mod_auth_kerb on OS X

    - by bshacklett
    I'm trying to get mod_auth_kerb installed, but I can't seem to find any information on compiling it on OS X. I'm getting the following when I attempt to compile:

        ./apxs.sh "-I. -Ispnegokrb5 -I/include " "-dynamic -g -O2 -arch x86_64 -Wl,-search_paths_first -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lresolv -lresolv" "" "/Applications/XAMPP/xamppfiles/bin/apxs" "-c" "src/mod_auth_kerb.c"
        /Applications/XAMPP/xamppfiles/build/libtool --silent --mode=compile gcc -prefer-pic -I/Applications/XAMPP/xamppfiles/include -L/Applications/XAMPP/xamppfiles/lib -mmacosx-version-min=10.4 -arch i386 -arch ppc -DDARWIN -DSIGPROCMASK_SETS_THREAD_MASK -no-cpp-precomp -I/Applications/XAMPP/xamppfiles/include -I/Applications/XAMPP/xamppfiles/include -I/Applications/XAMPP/xamppfiles/include -I/Applications/XAMPP/xamppfiles/include -I. -Ispnegokrb5 -I/include -c -o src/mod_auth_kerb.lo src/mod_auth_kerb.c && touch src/mod_auth_kerb.slo
        src/mod_auth_kerb.c: In function ‘authenticate_user_krb5pwd’:
        src/mod_auth_kerb.c:1030: warning: passing argument 8 of ‘verify_krb5_user’ discards qualifiers from pointer target type
        src/mod_auth_kerb.c: In function ‘authenticate_user_krb5pwd’:
        src/mod_auth_kerb.c:1030: warning: passing argument 8 of ‘verify_krb5_user’ discards qualifiers from pointer target type
        /Applications/XAMPP/xamppfiles/build/libtool --silent --mode=link gcc -o src/mod_auth_kerb.la -dynamic -g -O2 -arch x86_64 -Wl,-search_paths_first -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lresolv -lresolv -rpath /Applications/XAMPP/xamppfiles/modules -module -avoid-version src/mod_auth_kerb.lo
        ld: warning: in src/.libs/mod_auth_kerb.o, missing required architecture x86_64 in file
        warning: no debug symbols in executable (-arch x86_64)

    I'm configuring as follows:

        ./configure --with-krb4=no CFLAGS='-g -O2 -arch x86_64'

    I should mention that I'm using XAMPP with the development package on this machine.
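
    Reading the log, the object file is compiled -arch i386 -arch ppc (XAMPP's apxs flags) but linked -arch x86_64, which is what the "missing required architecture x86_64" warning points at. A hedged experiment is to make the flags match, assuming the XAMPP Apache itself is 32-bit as the compile line suggests:

        #!/bin/sh
        # Configure with flags matching XAMPP's 32-bit build (an assumption)
        ./configure --with-krb4=no CFLAGS='-g -O2 -arch i386' LDFLAGS='-arch i386'
        make clean && make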

  • Issues resolving DNS entries for multi-homed servers

    - by I.T. Support
    This is difficult to explain, so bear with me. We have two domain controllers, each multi-homed to straddle two internal subnets (subnet A and subnet B) and provide DNS, DHCP, and LDAP authentication. Each domain controller has two DNS entries, with identical host names, corresponding to subnet A and subnet B respectively (example entries shown):

        dc1  host  192.168.8.1
        dc1  host  192.168.9.1
        dc2  host  192.168.8.2
        dc2  host  192.168.9.2

    We also have a third subnet for our DMZ (subnet C), on which neither domain controller has an IP address; our firewall/routing tables provide access between subnet A and subnet C, but don't allow access to subnet B from subnet C. Here's my issue: how can I force, or determine, which DNS entry is used when a server on subnet C queries either domain controller by host name? Right now it seems to randomly pick one of the two entries, swap out the name for the IP address, and that's that. The problem is that if it randomly selects the entry for the 9.x subnet B (no access from subnet C), the resolution is useless; if it picks the entry for the 8.x subnet A, it works (firewall/routing tables are defined for communication between those two subnets). Here's what I'd like to know:

    - What are best practices (if any) for DNS resolution on subnets where the DNS servers don't have a presence?
    - Can I control something akin to a metric value to force an order of DNS resolution when there are multiple entries for the same host name on different IP subnets?
    - Should I even have two DNS HOST entries for the same name?

    Here's what I'd like to avoid:

    - Editing the HOSTS files of servers on subnet C to force resolution of the hostname to the appropriate subnet.
    - Adding NICs to the DCs to have them straddle the DMZ as well, thus obtaining a third DNS entry that corresponds to subnet C.

    Again, my apologies if this was too verbose or unclear. Thanks!
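
    On Windows DNS, the knob closest to a "metric" is netmask ordering; dnscmd is the stock tool for it, though whether subnet-based ordering helps a querier on a third subnet is exactly the open question here, so treat this as a hedged pointer:

        :: Show the server's current local-net-priority setting
        dnscmd /Info /LocalNetPriority

        :: Prefer records on the querier's own subnet when several A records exist
        dnscmd /Config /LocalNetPriority 1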

  • How do you add a domain name to a VPS?

    - by jasonaburton
    Hi all, I have a VPS with allgamer.net (I use it to play Minecraft), and a domain name with networksolutions.com. What I want to do is attach that domain name to the VPS so I can run a wiki on it; if this is possible, I can avoid buying another hosting plan just for the wiki. How do I go about doing this? I have very little knowledge of server administration, so any advice you guys have is greatly appreciated! I'm pretty sure I have to change the DNS in my domain name to the DNS for my VPS, but on allgamer.net's interface there is no discernible place to find out what I need to change it to. Is there a way to find out the DNS via SSH on my VPS? As well, when I first got my VPS with allgamer.net I filled out a form with all my information, and they also wanted a domain name along with it. I gave them the domain name I currently own, but for some reason it's as if it's not connected to the VPS: if I go to mydomain.com there's nothing, and if I use mydomain.com for my Minecraft server, it also doesn't work. It's as if it serves no purpose by being "attached" to my VPS. Any insights into this? Thanks for any help you guys can give me.
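
    Rather than changing nameservers, the simpler route for a setup like this is usually to find the VPS's public IP over SSH and point an A record at it from the registrar's DNS panel. A hedged sketch of the look-ups involved (mydomain.com is the placeholder from the question):

        #!/bin/sh
        # On the VPS: list the configured addresses
        ip addr show | grep 'inet '

        # From anywhere: check what the domain and its nameservers currently are
        dig +short mydomain.com
        dig +short NS mydomain.com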

  • Network corruption - corrupt downloads, corrupt streams, etc.

    - by rfrankel
    I've been having some problems with my home LAN. Downloaded executables won't run, my remote desktop sessions keep getting interrupted by encryption errors, Flash video streams show visible corruption (both Hulu and YouTube), and I've had a couple of downloads for which the MD5 hashes don't match. The problem has even occurred with a couple of images embedded in web pages, though that's rare (presumably because images are relatively small files). I've had this problem across two Windows machines and a Mac, so it's neither machine-specific nor at the app or OS level. Comcast claims it's nothing to do with them, and my Linksys/Cisco RV016 router is out of warranty, so I have no access to official support. When I log into my router, it shows no error packets or dropped packets received. I plugged a laptop directly into the router and was able to download a 5.5 MB file and verify its MD5 hash. That's not proof that the problem is downstream of the router, but it makes it seem quite likely, since I failed to download the same file several times from two desktops (one Mac, one Windows). Could this be a wiring problem? If so, is there any clever or elegant way to determine which wiring is faulty with just software? If I can avoid tracing all the wires throughout my entire house, it would make my life quite a bit easier.
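
    One software-only way to compare cable runs is to watch per-interface error counters while pushing traffic over each link in turn; a hedged sketch for the Mac (the flags are the BSD ones, and the target address is a placeholder):

        #!/bin/sh
        # Input/output error counts per interface; a bad run usually shows
        # Ierrs climbing while a large transfer is in flight
        netstat -i

        # Generate sustained traffic against a LAN host (flood ping needs root)
        ping -f -c 10000 192.168.1.1

        # Compare the counters before and after
        netstat -i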
