Search Results

Search found 7374 results on 295 pages for 'strange behaviour'.

Page 207/295 | < Previous Page | 203 204 205 206 207 208 209 210 211 212 213 214  | Next Page >

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    I have a heavy application running on a CentOS server and I'm seeing some strange memory behavior. Here is a snapshot of a munin graph: As you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is memory that has been freed but not yet cleaned up by the OS and returned to the free memory pool. It seems that running out of memory is actually caused by this lack of cleanup, but I may be wrong. Can you give some tips to find the cause of the problem and/or make CentOS reclaim the inactive memory? Thanks. Some extra info:
    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).
    2) cat /proc/meminfo (at a later stage than the image) gives:
        MemTotal:     14371428 kB
        MemFree:       1207108 kB
        Buffers:         35440 kB
        Cached:        4276628 kB
        SwapCached:     785316 kB
        Active:        9038924 kB
        Inactive:      3902876 kB
        HighTotal:           0 kB
        HighFree:            0 kB
        LowTotal:     14371428 kB
        LowFree:       1207108 kB
        SwapTotal:    10223608 kB
        SwapFree:      6438320 kB
        Dirty:          627792 kB
        Writeback:           0 kB
        AnonPages:     7844560 kB
        Mapped:          49304 kB
        Slab:           146676 kB
        PageTables:      27480 kB
        NFS_Unstable:        0 kB
        Bounce:              0 kB
        CommitLimit:  17409320 kB
        Committed_AS: 16471488 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:    275852 kB
        VmallocChunk: 34359462007 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB
    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/) and a Tomcat-based Java servlet to manage things.
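
    A few shell checks can narrow this down. Below is a minimal diagnostic sketch (it assumes a kernel with /proc/sys/vm/drop_caches, i.e. 2.6.16 or later; note that tmpfs-backed pages are not released by dropping caches, only by deleting the files in /tmp or letting them swap out):

        # Compare tmpfs usage against the inactive/cached counters.
        df -h /tmp
        grep -E 'Inactive|Cached|Dirty' /proc/meminfo

        # Flush dirty pages, then drop clean page cache and slab objects.
        # This does NOT free tmpfs pages -- those only go away when the
        # files are deleted (or get pushed out to swap).
        sync
        echo 3 > /proc/sys/vm/drop_caches

        # Which processes hold the most anonymous memory?
        ps -eo pid,rss,vsz,comm --sort=-rss | head -15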

    Read the article

  • Bad Blocks Exist in Virtual Device PERC H700 Integrated

    - by neoX
    I have a DELL server with a PERC H700 Integrated controller. I've made a RAID5 array with 12 hard drives and the virtual device is in Optimal state, but I receive errors like these under Linux:
        sd 0:2:0:0: [sda] Unhandled error code
        sd 0:2:0:0: [sda] Result: hostbyte=0x07 driverbyte=0x00
        sd 0:2:0:0: [sda] CDB: cdb[0]=0x88: 88 00 00 00 00 07 22 50 bd 98 00 00 00 08 00 00
        end_request: I/O error, dev sda, sector 30640487832
        sd 0:2:0:0: [sda] Unhandled error code
        sd 0:2:0:0: [sda] Result: hostbyte=0x07 driverbyte=0x00
        sd 0:2:0:0: [sda] CDB: cdb[0]=0x88: 88 00 00 00 00 07 22 50 bd 98 00 00 00 08 00 00
        end_request: I/O error, dev sda, sector 30640487832
        sd 0:2:0:0: [sda] Unhandled error code
        sd 0:2:0:0: [sda] Result: hostbyte=0x07 driverbyte=0x00
        sd 0:2:0:0: [sda] CDB: cdb[0]=0x88: 88 00 00 00 00 07 22 50 bc e0 00 00 01 00 00 00
        end_request: I/O error, dev sda, sector 30640487648
    But all disks are in Firmware state: Online, Spun Up. Also, there is not a single ATA read or write error on any disk in the RAID (I check them with smartctl -a -d sat+megaraid,N -H /dev/sda). The only strange thing is in the output of megacli:
        megacli -LDInfo -L0 -a0
        ...
        Bad Blocks Exist: Yes
    How can there be bad blocks in a virtual drive that is in Optimal state when no disk is broken or even shows a single error? I tried a "Consistency Check", but it finished successfully and the errors are still in dmesg. Could someone help me figure out what is wrong with my RAID?
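
    One way to rule out the physical disks is to sweep SMART health across every member drive, building on the smartctl invocation above. A minimal sketch, assuming the drives enumerate as device IDs 0-11 (check megacli -PDList -a0 for the real IDs):

        # Poll SMART health for all 12 member drives behind the controller.
        for N in $(seq 0 11); do
            echo "=== physical disk $N ==="
            smartctl -H -d sat+megaraid,$N /dev/sda
        done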

    Read the article

  • How does a vsftpd server work and how to configure it?

    - by ysap
    I was asked to configure an FTP server based on the vsftpd package. The server is running on a remote machine to which I have superuser access. Being unfamiliar with the mechanics of FTP servers, I tried to figure out how FTP user accounts are configured. The previous maintainer used a shell script, which works from a list that we maintain to track user accounts and passwords, to configure the FTP accounts. From reading the script, I see that he generates a list of usernames and passwords and actually creates a user account on the Linux machine. This means that for each user we configure in the list, a new user account is added with the adduser command:
        adduser --home /home/ftp --no-create-home $user
    (but without a private /home/username directory - using /home/ftp instead). Each of these users can log into his account using the ssh command. This fact seems a little strange to me, as I'd think that the FTP accounts should be decoupled from the Ubuntu user accounts. As another side effect, when a user connects using a web browser, he lands in the /home/ftp directory. However, he can then use the "Up to a higher level directory" link to go up and effectively have access to all of our system. So, the questions are:
    1. Is this really how the FTP server is supposed to work in terms of configuring FTP accounts?
    2. If not, how do I configure the vsftpd server so that I have only the superuser Ubuntu account on that machine and all FTP accounts are... just FTP user accounts? Additionally, these FTP accounts should be configured in terms of how and what they are allowed to access.
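
    For what it's worth, FTP-only accounts are usually achieved by giving each user a non-login shell and chrooting them into their home directory. A minimal sketch, assuming a stock Ubuntu layout with the config at /etc/vsftpd.conf:

        # Create an FTP-only account: the nologin shell blocks SSH access.
        adduser --home /home/ftp --no-create-home --shell /usr/sbin/nologin ftpuser1

        # vsftpd/PAM may reject shells not listed in /etc/shells.
        grep -qx /usr/sbin/nologin /etc/shells || echo /usr/sbin/nologin >> /etc/shells

        # In /etc/vsftpd.conf, confine local users to their home directory:
        #   anonymous_enable=NO
        #   local_enable=YES
        #   chroot_local_user=YES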

    Read the article

  • How do I install OpenStack on a single Ubuntu 12.04 node?

    - by Sam Edwards
    I'm having trouble installing OpenStack on Ubuntu 12.04, for various reasons:
    1. The official Ubuntu website recommends Juju and MAAS. However, this is a single node I am trying to get OpenStack installed on, and MAAS requires "two or more nodes" according to the docs. Additionally, I don't have any experience with MAAS and Juju and would rather stick to technologies I am more familiar with, so that I can debug problems that arise.
    2. I have tried StackGeek, but this fails because the node only has a single Ethernet port. The node does, however, have the second hard drive required for nova storage.
    3. I have tried DevStack, but cannot log into the dashboard. The login form appears fine, but as soon as I try to submit the page, my browser begins loading indefinitely.
    4. I have tried installing straight from packages, but I get an Internal Server Error in the dashboard upon trying to log in, with no helpful logs anywhere in sight to aid me in debugging the issue.
    Each of these attempts was with a fresh Ubuntu 12.04 LTS setup; I'm finding it really strange that no matter what I try, I cannot get OpenStack installed. Is this even a stable/mature project? Why am I encountering so many bugs?
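
    For reference, single-node DevStack installs of that era are driven by a localrc file in the devstack checkout; pinning the host address to the single NIC avoids some of the networking guesswork. A minimal sketch (the addresses and passwords are placeholders):

        # localrc -- placed in the devstack checkout before running ./stack.sh
        HOST_IP=192.168.1.10        # the address on your single Ethernet port
        FLAT_INTERFACE=eth0
        ADMIN_PASSWORD=secret
        MYSQL_PASSWORD=secret
        RABBIT_PASSWORD=secret
        SERVICE_PASSWORD=secret
        SERVICE_TOKEN=tokentoken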

    Read the article

  • Windows 7 alt-tab window disappears to back when aero peek is enabled

    - by nhinkle
    There is a strange bug involving Aero Peek and Alt-Tab where the Alt-Tab window itself is sent all the way to the back, obscured by whatever window is being previewed in front of it. While it still works for switching windows, it's extremely annoying not to be able to see what other windows are there. One solution I've found is to just disable Aero Peek, but I like the Aero Peek feature when it works, and I want it enabled. Some users of Lenovo products have found that uninstalling the "ThinkVantage Communications" VoIP suite fixes it for them, but my laptop is an HP, not a Lenovo. The only VoIP software I have installed is Skype, which I removed to see if it had something to do with VoIP, but that didn't have any effect. I had seen this issue before on my old laptop, but it usually went away after a while, and always after a reboot. On my new computer (HP dm4t) it always occurs, and it is driving me nuts. If anybody can pinpoint what the problem actually is and, more importantly, how to fix it, I will be extremely thankful. Window being previewed is in front of the Alt-Tab window.
    Update: The issue seems to have randomly resolved itself, at least for now. I have no idea what changed, since I haven't made any modifications to the system between when it was broken and when it started working. Attached is another screenshot of it working properly, with the Alt-Tab window in front of everything else. I'd still like to know what causes this, if anybody can determine it.
    Update 2: And now it's broken again...

    Read the article

  • /scripts/easyapache broke PHP and now outputs binary

    - by James Lawrie
    I ran /scripts/easyapache on one of my cPanel servers last night and since then PHP just gives 501 errors, with no logs that I can find. I tried running it again and it runs as normal, but all output has been replaced with strange characters, e.g.:
        ????+?+± °?? ?-?.?... =??
        ????+?+± °?? ¦+?+?.?... =??
        ????+?+± °?? ?=?/¦+?+?.?... +?
        ????+?+± °?? ?=?/??++.?... =??
        ????+?+± °?? ??++.?... =??
        ????+?+± °?? ???+?+.?... +?
        ????+?+± °?? ?=?/????¦???.?... =??
        ????+?+± °?? +??±?+.?... =??
        ????+?+± °?? +??¦+?.?... =??
    Has anyone come across this before? Is it fixable?
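
    Output like this often means the build script is emitting UTF-8 line-drawing characters into a session whose locale can't render them. A minimal sketch of how one might test that theory (the locale name is an assumption; pick one that locale -a actually lists):

        # What does the current session advertise?
        locale

        # Re-run the build under an explicit UTF-8 locale and see whether
        # the progress characters render instead of degrading to ?/±.
        LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 /scripts/easyapache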

    Read the article

  • Embedded Spotlight does not function in Outlook 2011

    - by syntaxcollector
    I have a rather strange problem. I manage a network of about 35 Macs, and we all recently switched from Mail.app to Outlook 2011 (please don't debate this, I've already had that conversation ad nauseam). We are using network home directories (NHD) served from a Windows file server over the SMB protocol. The problem I'm having is that Spotlight does not function inside of Outlook - but ONLY inside of Outlook. Global Spotlight can find all email and contacts within Outlook, but the embedded Spotlight cannot. As a test, I took one of my users and switched them from a network home directory to a portable home directory (PHD) (meaning the home folder was copied to the local hard drive). This resulted in a working Spotlight within Outlook; as soon as I switched the user back to an NHD, however, it stopped working. I have already tried erasing the Spotlight index and killing the process to force re-indexing. I have exhausted all Spotlight troubleshooting, and since global search is working, that is obviously not the issue. I believe it has something to do with the Spotlight plugin Microsoft wrote that is located in /Library/Spotlight. Any ideas?
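
    Since Spotlight generally refuses to index network volumes, the per-volume indexing state is worth checking when homes move to SMB. A minimal sketch (the volume path is an assumption; substitute the mount point of the network homes):

        # Is indexing enabled on the volume holding the network homes?
        mdutil -s /Volumes/homes

        # List the loaded Spotlight importers -- the Outlook importer lives
        # in /Library/Spotlight, per the above.
        mdimport -L

        # Try enabling indexing on the volume and rebuilding its index.
        mdutil -i on /Volumes/homes
        mdutil -E /Volumes/homes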

    Read the article

  • VMware databursts to vCenter

    - by Erwin Blonk
    [edit Nov. 16th, 2009: Thank you for the responses. I'm no longer on this project, so the problem is out of my hands. Rest assured, I will return with other problems :-) ]
    I have a strange occurrence of regular data bursts over my WAN links from two sites to a site with vCenter. A vCenter server is managing local ESX 3.5 servers plus two more at two sites across a WAN link. Each server sends approximately 3 MB worth of TLS data (less than 10% of the time it varies higher or lower) every 15 minutes (with a margin of 2 minutes). So far, I've not been able to single out a process that causes it. I looked through all applications on each site. So far, it seems to originate from one server on each site. Although it may be coincidence and therefore not relevant, I found that one server, with very few exceptions, does a burst at 00:00 on the hour. The other 3 during the hour are a bit off the 15-minute mark, but back at the top of the hour you can sync your watch on it. The other server follows 5 minutes after that with no such precision. But, as said, it never differs more than 2 minutes. Servers are ESX 3.5, vCenter is 2.5.
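
    For anyone who inherits this: the bursts can usually be tied to a process by capturing them at the source. A minimal sketch from the ESX service console, assuming tcpdump is present there and substituting your vCenter address for 10.0.0.5:

        # Watch for the quarter-hour burst toward vCenter and note the
        # local port that opens the connection.
        tcpdump -nn -i eth0 host 10.0.0.5 and tcp port 443

        # Map that local port back to a process.
        netstat -tnp | grep 10.0.0.5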

    Read the article

  • fail2ban Error Gentoo

    - by Mark Davidson
    Hi all, I've recently set up a new VPS running Gentoo (my first time using the distro, so please forgive me if this is a really easy one) and, as I've done with other servers, installed fail2ban, setting it up to block hosts via iptables after too many unsuccessful SSH logins. However, I'm getting a strange error that I can't quite solve. When I start fail2ban, I get these lines in the error log:
        2009-11-13 18:02:01,290 fail2ban.jail : INFO Jail 'ssh-iptables' started
        2009-11-13 18:02:01,480 fail2ban.actions.action: ERROR iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100
    If I try to force a ban, these errors show up in the log and the host is not banned:
        2009-11-13 11:23:26,905 fail2ban.actions: WARNING [ssh-iptables] Ban XXX.XXX.XXX.XXX
        2009-11-13 11:23:26,929 fail2ban.actions.action: ERROR iptables -n -L INPUT | grep -q fail2ban-SSH returned 100
        2009-11-13 11:23:26,930 fail2ban.actions.action: ERROR Invariant check failed. Trying to restore a sane environment
        2009-11-13 11:23:27,007 fail2ban.actions.action: ERROR iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100
        2009-11-13 11:23:27,016 fail2ban.actions.action: ERROR iptables -n -L INPUT | grep -q fail2ban-SSH returned 100
        2009-11-13 11:23:27,016 fail2ban.actions.action: CRITICAL Unable to restore environment
    My versions are as follows:
        Linux masked 2.6.18-xen-r12 #2 SMP Wed Mar 4 11:45:03 GMT 2009 x86_64 Intel(R) Xeon(R) CPU E5504 @ 2.00GHz GenuineIntel GNU/Linux
        net-analyzer/fail2ban-0.8.4
        net-firewall/iptables-1.4.3.2
    If anyone could shed some light on these errors, that would be great. I did wonder if it was a problem with iptables or some kernel modules, but I can block an IP manually with:
        iptables -I INPUT -s 25.55.55.55 -j DROP
    which makes me think it's something a bit more unusual. Thanks a lot in advance.
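
    Since every failing action exits with code 100, running fail2ban's commands by hand usually surfaces the real iptables error; on a Xen or otherwise custom Gentoo kernel, missing netfilter modules are a frequent culprit. A minimal sketch:

        # Run the action commands from the log manually and read the error.
        iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH

        # Check that the netfilter pieces these rely on are actually loaded.
        lsmod | grep -E 'ip_tables|iptable_filter|xt_tcpudp'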

    Read the article

  • Vista ICS issue

    - by Bill Grey
    I have a strange problem with Internet Connection Sharing on a laptop running Vista Business. This laptop is connected to the internet via the ethernet port, which goes to an ADSL modem; it is automatically assigned the IP address 192.168.1.50, and the modem/gateway is 192.168.1.1. My friend's laptop is running Vista Home. Previously, I would create an ad hoc wireless network, enable ICS, and everything would be perfect: my friend would have internet access through it. However, something has now mysteriously broken. If I enable ICS on the wireless connection, it resets my Local Area Connection, assigning it the manual IP address 192.168.0.1, which means my connection to the internet is destroyed. Both wireless adapters are assigned autoconfiguration addresses in the 169.254.x.x range. They can see each other fine, but my friend's laptop cannot access the internet via mine, even after I have restored the Local Area Connection settings. I understand the computer with ICS enabled must have the IP 192.168.0.1, but previously, before whatever went wrong, my wireless adapter would be 192.168.0.1 and my friend's computer would get an IP via DHCP. I have also tried setting static IP addresses and making a bridge, none of which works. How can I fix this problem and prevent enabling ICS from touching my Local Area Connection? Both machines have no firewall, have appropriate settings, etc...

    Read the article

  • error 20014 with mod_proxy

    - by punkish
    I have a strange situation. I need to call a program in cgi-bin from within a Perl script. When I try to do that with exec($program), I get:
        (20014)Internal error: proxy: error reading status line from remote server
        proxy: Error reading from remote server returned by ...
    The long story... I am calling mapserv (http://mapserver.org) as a CGI program from OpenLayers (http://openlayers.org). Ordinarily, my web site is served by Perl Dancer, but the mapserver calls are made directly to http://server/cgi-bin/mapserv from JavaScript. The Dancer web site is served by Starman behind an Apache2 proxy front-end. This is how it looks:
        [browser] -> http://server/app -> [apache2] -> proxy port 5000 -> Starman
            |
            |
            +-> http://server/cgi-bin/mapserv -> [apache2] -> cgi-bin -> mapserv
    This is what I am trying to accomplish:
        [browser] -> http://server/app -> [apache2] -> proxy port 5000 -> Starman
                                                                            |
        mapserv <-- cgi-bin <-- [apache2] <---------------------------------+
    I saw this question re: the 20014 error, but that suggested solution didn't help. Any other hints?
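
    One common cause: the catch-all ProxyPass also swallows /cgi-bin, so requests for mapserv are forwarded to Starman instead of being executed as CGI. A minimal sketch of the vhost fragment that excludes /cgi-bin before the catch-all (the file path is an assumption):

        # Apache vhost fragment -- exclusions must precede the general
        # ProxyPass rule, so /cgi-bin/mapserv runs locally as CGI:
        #
        #   ProxyPass /cgi-bin !
        #   ProxyPass / http://localhost:5000/
        #   ProxyPassReverse / http://localhost:5000/
        #
        # then validate and reload:
        apachectl configtest && apachectl graceful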

    Read the article

  • Online Storage and security concerns

    - by Megge
    I plan to set up a small file server. I already own a small server at HostEurope (VirtualServer L, 250 GB space), but they don't offer enough space (there is the HostEurope Cloud, but paying for bandwidth isn't an option here, and video streaming should be possible). Requirements summarized: storage: 2 TB; users: ~15; file sizes: < 100 GB; should be easily reachable (mount as a network drive, or at least have solid "communication" software). My first question would be: where can I get halfway affordable online storage, and how should I connect it to my server? Getting an additional server is a bit overkill, as I know of no hoster which allows 2 TB on a small 2 GHz dual-core, 2 GB RAM machine (that would be enough by far, I just need a lot of space), and connecting it via NFS or FTP over the internet seems a bit strange and cripples performance. Do you have any advice where I could get that storage service from? (I sent HostEurope a custom request today, but they haven't answered yet. If they can provide me with that space, this question will be irrelevant, but the second one is the more important one anyway; just recommend some services based on experience, you don't have to crawl for hours through hosting services.) livedrive, for example, offers 5 TB for 17 €/month; I'd be happy with 2 TB for 20 €. The caveat: it doesn't allow multiple users, which leads me to my second question: where are the security problems? Which protocol is sufficient (I want private and "public" folders etc., the usual "every user has his own plus a public space" thing), secure, and fast? I'd tend towards (S)FTP; the problem with FTP is that most of those hosting services don't even allow FTP with multiple users, and single users lead me into "hacking" a solution (you could map the basic folder structure on the main server and just mount every subfolder from the storage, though things get difficult with a public folder with 644 permissions). Is using something like PKI or 802.1X overkill for private use?
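
    On the connection side, SFTP-backed storage can be mounted on the existing server and re-exported with the private/public layout, which sidesteps the storage provider's single-user limitation. A minimal sketch (host, paths and user are assumptions):

        # Mount the bulk storage on the main server over SFTP...
        apt-get install sshfs
        sshfs storageuser@storage.example.com:/srv/data /mnt/storage -o allow_other

        # ...then lay out per-user and public areas on top of it.
        mkdir -p /mnt/storage/users/alice /mnt/storage/public
        chmod 755 /mnt/storage/public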

    Read the article

  • Clicking a link in IE6 doesn't load page (internal DNS entry on our intranet)

    - by Callum
    I have a very strange problem that is only affecting some versions of IE6. The problem does affect IE 6.0.2900.5512, but does not seem to affect 6.0.3790.3959. Basically, I work for a company and we have an intranet. While I'm not an expert on "internal DNS pointers", what I was able to do was create a website (let's say about football), so that when an employee sitting behind the company firewall types the word "football" into their browser's address bar, they get redirected to a particular server. I am told this is some kind of "internally pointing DNS entry". So, I've set one of these up, and I have placed a link to it on our company intranet page. However, when the link is clicked in IE 6.0.2900.5512, the page goes blank. Clicking "refresh" then loads the correct page (the one specified in the link). Can anyone help me out here? I have tried changing the way the URL is formed, everything from //football to http://football/ etc. The link works fine in every other browser and in IE7+, but unfortunately IE6 is still the most common browser in use at my organisation.

    Read the article

  • Postfix character encoding?

    - by Anonymous12345
    I use Postfix as a mail server on Ubuntu, and I use PHP to send emails. The problem is that none of my emails are encoded properly by the mail software my VPS provider uses. According to them, the problem lies with me. It is only the name field which isn't encoded properly. For example, "Björn" becomes "BjÃ¶rn" in my emails. However, when I echo the $name, it outputs "Björn", which is correct. Also, Gmail and Hotmail do show it correctly. The strange part is that the text (the message itself) is encoded properly. I use the following for sending mail:
        $headers = "MIME-Version: 1.0" . "\n";
        $headers .= "Content-type: text/plain; charset=UTF-8" . "\n";
        $headers .= "From: $name <$email>" . "\n";
        $name = iconv(mb_detect_encoding($name), "UTF-8//IGNORE//TRANSLIT", $name);
        // I HAVE TRIED WITH AND WITHOUT THE LINE ABOVE, NO DIFFERENCE
        mail($to, '=?UTF-8?B?' . base64_encode($subject) . '?=', $text, $headers, '[email protected]');
    I have tried with and without the iconv line, no luck. The last thing I can think of is Postfix - could there be a setting for character encoding there? Does anybody know?
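
    The usual fix is that a non-ASCII display name in a header must be RFC 2047-encoded, exactly like the subject already is; software that assumes ASCII-only headers will otherwise mangle it. A minimal sketch from the shell (assumes PHP CLI with the mbstring extension; the address is a placeholder):

        php -r '
            $name = "Björn";
            // Produces an encoded-word such as =?UTF-8?B?QmrDtnJu?=
            $encoded = mb_encode_mimeheader($name, "UTF-8", "B");
            echo "From: " . $encoded . " <sender@example.com>\n";
        '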

    Read the article

  • MySQL replication - rapidly growing relay bin logs

    - by Rob Forrest
    Morning all. I've got a really strange situation here this morning, much like a reportedly fixed MySQL bug: http://bugs.mysql.com/bug.php?id=28421. My relay bin logs are rapidly filling with an infinite loop of junk made of this sort of thing:
        #121018 5:40:04 server id 101 end_log_pos 15598207
        #Append_block: file_id: 2244 block_len: 8192
        # at 15598352
        #121018 5:40:04 server id 101 end_log_pos 15606422
        #Append_block: file_id: 2244 block_len: 8192
        # at 15606567
        ...
        # at 7163731
        #121018 5:38:39 server id 101 end_log_pos 7171801
        #Append_block: file_id: 2243 block_len: 8192
        WARNING: Ignoring Append_block as there is no Create_file event for file_id: 2243
        # at 7171946
        #121018 5:38:39 server id 101 end_log_pos 7180016
        #Append_block: file_id: 2243 block_len: 8192
        WARNING: Ignoring Append_block as there is no Create_file event for file_id: 2243
    These log files grow to 1 GB within about a minute before rotating and starting again. The big files are interspersed with one or two smaller files with just this in them:
        /*!40019 SET @@session.max_insert_delayed_threads=0*/;
        /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
        DELIMITER /*!*/;
        # at 4
        #121023 9:43:05 server id 100 end_log_pos 106 Start: binlog v 4, server v 5.1.61-log created 121023 9:43:05
        BINLOG '
        mViGUA9kAAAAZgAAAGoAAAAAAAQANS4xLjYxLWxvZwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAAUwAEGggAAAAICAgC
        '/*!*/;
        # at 106
        #121023 9:43:05 server id 100 end_log_pos 156 Rotate to mysqld-relay-bin.000003 pos: 4
        DELIMITER ;
        # End of log file
        ROLLBACK /* added by mysqlbinlog */;
        /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
    We're running a master-master replication setup, with the problematic server running MySQL 5.1.61. The other server, which is for the moment stable, is running 5.1.58. Has anyone got any ideas what the solution to this is and, moreover, what might have caused it?
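
    Those Append_block events are the binlog representation of LOAD DATA INFILE-style statements, so it helps to pin down which master log and which statement produced them. A minimal sketch (the relay log name is taken from the snippet above; run it from the relay log directory):

        # Where does the slave think it is, and which logs is it reading?
        mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Master_Log_File|Relay_Log'

        # Decode the offending relay log and inspect the events around the
        # looping file_ids (2243/2244 above).
        mysqlbinlog mysqld-relay-bin.000003 | grep -n -m 20 'Append_block'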

    Read the article

  • USB Wifi will not connect on Windows 7 (Even though the driver installs OK)

    - by Pete Roberts
    Windows 7 will not connect to a WiFi network using a USB network adapter. I have 3 adapters: a Senoa SUB 364 (EXT), a Repeatit SU2410 USB V2 and a ZYXEL G202. All of these devices install OK on Windows 7 Home Premium, on both my desktop PC (64-bit) and my Asus Wii netbook (32-bit). In each case the adapter can be enabled/disabled, and the driver properties say it is working correctly. When I try to connect to a network, Windows 7 behaves as though the adapter does not exist and reports no networks. The Wii has an integrated adapter which works perfectly under Windows and connects to any of the 3 networks available to me. I have done all the checks I can on the configuration. What seems odd to me is that it happens with all 3 devices on 2 different Windows 7 PCs, both of which are working perfectly in every other respect. This suggests the common denominator is me, and I must be doing something wrong... What's also strange is that I cannot find any similar problems reported on any of the forums. From what reading I've been able to do, it seems like the new WiFi virtualisation thingy in W7 is not recognising the adapters, which suggests I'm missing a configuration option somewhere. Looking forward to finding out whether I'm not alone or just being stupid. Pete

    Read the article

  • SQL Server 2008 cluster freezing

    - by Ed Leighton-Dick
    We have run into a strange situation in which a SQL Server 2008 single-node cluster hangs. As background, we are rebuilding a Windows Server 2003/SQL Server 2005 two-node cluster using Windows 2008 and SQL Server 2008. Here's the timeline:
    1. Evicted the passive node (server B) from the Windows 2003/SQL 2005 cluster. The active node now functions as a single-node cluster with no problems.
    2. Wiped server B's disks and installed Windows 2008 and SQL Server 2008 as a single-node cluster. Since we do not want the two clusters to communicate yet, we left the cluster's private network "heartbeat" adapter unconfigured. The cluster comes up and functions normally.
    3. Moved all databases to the new cluster. Cluster continues to function normally.
    4. Turned off server A (old cluster) in preparation for rebuilding it as the second node of the new cluster. The SQL Server instance on server B (new cluster) locks up, even though it should have no knowledge of or interaction with server A.
    5. Restarted server A. The SQL Server instance on server B (new cluster) immediately begins working again.
    Things we have tried:
    - The new cluster's name responds to ping and NETBIOS requests, even while the SQL Server is hung.
    - We have confirmed that no IP address is assigned to the old heartbeat adapter, and it is not pulling an IP address from DHCP.
    - Disabling the heartbeat's network card has the same effect.
    - No errors were generated in any logs - Windows or SQL.
    - When the error first occurred, it sat in the hung state for quite some time (well over 10 minutes) before anyone figured out what was going on. This would seem to eliminate any sort of normal cluster timeout in which it would have been searching for the other node (even if one had been configured).
    Server B is running Windows 2008 SP2, fully patched, and SQL Server 2008 SP1 CU7 (10.0.2775).

    Read the article

  • INFORMIX - listener thread err 25582

    - by Samuel Lao
    I've been digging through different forums for the last 7 days looking for a possible solution... Our database is Informix, running on a Linux server (SUSE Linux 11). Suddenly, last Saturday, Informix began to show an error message:
        listener-thread err=-25582 oserr=0, network connection is broken
    End users started calling to report slow network performance against this server, and moments where the database application lost its connection to the server... so we pinged the DB server, getting good responses (1 ms) without losing packets. I tried telnet (ipserver) 1526, which is Informix's port for the application; it works. We had to disconnect the server and enable a backup DB server located at another branch. It has been working, but only passably, because the backup server doesn't have good specs (it is an old Dell server model). So, I scanned the main server for viruses using Trend Micro Server Protect; it didn't find anything (0 viruses and spyware). I checked the switches and routers, but I haven't found anything strange... What else could it be? Thanks in advance for your time and help with this issue... I would really appreciate any advice...
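
    A few server-side checks usually come first with listener errors like -25582. A minimal sketch (assumes a shell as the informix user with the instance environment set; 1526 is the port from the sqlhosts entry above):

        # Tail the Informix message log for errors around the failures.
        onstat -m

        # Is the listener still accepting on the sqlhosts port, and how
        # many connections are piled up on it?
        netstat -an | grep 1526

        # Count active sessions -- a flood of half-open connections can
        # exhaust listener threads.
        onstat -g ses | tail -5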

    Read the article

  • FTP on Linux "Failed to retrieve directory listing" not firewall issue

    - by Jaka Prasnikar
    I've got a VPS in Germany running Debian x64, and I have a very strange issue. I have the ISPConfig control panel installed using proftpd, and I cannot connect to FTP by any means. A few hours ago I had DirectAdmin installed on CentOS on the same VPS - same issue. Simply put, when I connect to the FTP server I get this:
        Status: Resolving address of web02.defikon.com
        Status: Connecting to 130.255.190.71:21...
        Status: Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        Response: 220-You are user number 1 of 50 allowed.
        Response: 220-Local time is now 12:15. Server port: 21.
        Response: 220-This is a private system - No anonymous login
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command: USER default1
        Response: 331 User default1 OK. Password required
        Command: PASS ******
        Response: 230-User default1 has group access to: client0 sshusers
        Response: 230 OK. Current restricted directory is /
        Command: OPTS UTF8 ON
        Response: 200 OK, UTF-8 enabled
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is your current location
        Command: TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command: PASV
        Error: Connection timed out
        Error: Failed to retrieve directory listing
    I even tried telnet localhost 21, and the same thing happens: once I issue the LIST command, I get a timeout. I've tried everything and I can't get this to work =( Please help!
    P.S.: iptables is turned off.
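
    The trace dies exactly at PASV, which on a VPS usually means the passive-mode data ports are filtered somewhere upstream even with local iptables off. A minimal sketch for Pure-FTPd on Debian (the port range is an assumption, and the provider's firewall must allow it too):

        # Pin Pure-FTPd's passive data connections to a known port range.
        echo '30000 35000' > /etc/pure-ftpd/conf/PassivePortRange
        /etc/init.d/pure-ftpd restart

        # Reconnect with the client: the "227 Entering Passive Mode
        # (a,b,c,d,p1,p2)" reply now encodes a data port p1*256+p2 inside
        # 30000-35000, which must be reachable from outside the VPS.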

    Read the article

  • Printer deployment via Group Policy not working on a single system

    - by Aron Rotteveel
    One of my coworkers just got a new laptop running Windows 7 Pro x64. We use a GPO to deploy printers to every system, but for some reason it is not working on this one. I have been racking my brain over this for the past 3 hours without any result. The strange thing is that gpresult /H seems to indicate that the GPO did run. The hardware:
    - Laptop: Windows 7 Professional x64
    - Print server: Windows Server 2008 x64 R1
    - HP Color LaserJet 2605dn
    - HP LaserJet P2015
    - Driver packages on server: HP Universal Print Driver PCL5, both x86 and x64
    Oddities and other info:
    - GPO working flawlessly on every other system, including my own Windows 7 Ultimate x64 laptop
    - gpresult /H shows the GPO being run
    - Windows Firewall completely disabled on the new laptop
    Below is the output of gpresult /H (translated from Dutch, but I think you'll recognize it):
        Policies / Windows Settings / Printer Connections
        Path | Winning GPO
        \\Server2008\HP Color LaserJet 2605dn | Printers
        \\Server2008\HP LaserJet P2015 | Printers
        Administrative Templates: Policy definitions (ADMX files) retrieved from the local computer.
        Control Panel/Printers
        Policy | Setting | Winning GPO
        Point and Print Restrictions | Disabled
    Like I said, I have been trying to figure this out for the past few hours without any result, so you are my last hope. Any help is appreciated.

    Read the article

  • How can MySQL be in GDAL's dependencies when it's already installed?

    - by Julien Fouilhé
    I'm trying to install GDAL on my 64-bit CentOS server to be able to do some GIS operations. I tried a simple:
        # yum install gdal
    First, the GDAL version is 1.4 (the latest release is 1.9). Then, I see mysql in the dependency list. But I already have MySQL installed, from another repository (remi), with a newer version than the one suggested by yum... Is it an architecture problem (yum suggests i386)? I guessed yes, but it is still impossible to install! Here's the error I get:
        Transaction Check Error:
        package mysql-5.5.28-1.el5.remi.x86_64 (which is newer than mysql-5.0.95-1.el5_7.1.i386) is already installed
    Then I tried to install from source with the latest version available (1.9.2). I downloaded the GDAL tar.gz, extracted the files and installed it as follows:
        # tar -xzf gdal-1.9.2.tar.gz
        # ./configure --with-static-proj4=/usr/local/lib --with-threads --with-libtiff=internal --with-geotiff=internal --with-jpeg=internal --with-gif=internal --with-png=internal --with-libz=internal
        # make
        # make install
    But during the make, I get some strange errors about RegisterOGRMySQL that I can't understand:
        chmod a+x gdal-config
        /bin/sh /home/benjamin/gdal-1.9.2/libtool --mode=link g++ gdalinfo.lo /home/benjamin/gdal-1.9.2/libgdal.la -o gdalinfo
        libtool: link: g++ .libs/gdalinfo.o -o .libs/gdalinfo /home/benjamin/gdal-1.9.2/.libs/libgdal.so -L/usr/local/lib/lib -L/usr/kerberos/lib64 -lproj -lsqlite3 /usr/lib64/libexpat.so -lpthread -lrt -lcurl -ldl -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lidn -lssl -lcrypto -lz -Wl,-rpath -Wl,/usr/local/lib -Wl,-rpath -Wl,/usr/lib64
        /home/benjamin/gdal-1.9.2/.libs/libgdal.so: undefined reference to `RegisterOGRMySQL'
        collect2: ld returned 1 exit status
        make[1]: *** [gdalinfo] Error 1
        make[1]: Leaving directory `/home/benjamin/gdal-1.9.2/apps'
        make: *** [apps-target] Error 2
    Does anyone have a solution? Thanks a lot!
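
    The undefined RegisterOGRMySQL reference usually means configure half-enabled the OGR MySQL driver. A minimal sketch of rebuilding with the driver explicitly resolved against the remi MySQL, or disabled outright (the mysql_config path is an assumption; locate it with which mysql_config):

        make clean
        # Either point the OGR MySQL driver at the installed client...
        ./configure --with-static-proj4=/usr/local/lib --with-threads \
                    --with-mysql=/usr/bin/mysql_config
        # ...or drop the driver entirely:
        #   ./configure ... --without-mysql
        make && make install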

    Read the article

  • Vagrant up doesn't load chef configs and doesn't keep an error log

    - by la_f0ka
    I'm trying to set up a Vagrant box and I'm running into all sorts of trouble. Right now I'm getting a strange error message stating there's a stack trace file with more info, but that file is nowhere to be found. This is the error:
        stdin: is not a tty
        [Sun, 16 Sep 2012 18:31:47 +0000] INFO: *** Chef 0.10.0 ***
        [Sun, 16 Sep 2012 18:31:48 +0000] INFO: Setting the run_list to ["recipe[apt]", "recipe[openssl]", "recipe[apache2]", "recipe[mysql]", "recipe[mysql::server]", "recipe[php]", "recipe[php::module_apc]", "recipe[php::module_curl]", "recipe[php::module_mysql]", "recipe[apache2::mod_php5]", "recipe[apache2::mod_rewrite]"] from JSON
        [Sun, 16 Sep 2012 18:31:48 +0000] INFO: Run List is [recipe[apt], recipe[openssl], recipe[apache2], recipe[mysql], recipe[mysql::server], recipe[php], recipe[php::module_apc], recipe[php::module_curl], recipe[php::module_mysql], recipe[apache2::mod_php5], recipe[apache2::mod_rewrite]]
        [Sun, 16 Sep 2012 18:31:48 +0000] INFO: Run List expands to [apt, openssl, apache2, mysql, mysql::server, php, php::module_apc, php::module_curl, php::module_mysql, apache2::mod_php5, apache2::mod_rewrite]
        [Sun, 16 Sep 2012 18:31:48 +0000] INFO: Starting Chef Run for natty.talifun.com
        [Sun, 16 Sep 2012 18:31:48 +0000] ERROR: Running exception handlers
        [Sun, 16 Sep 2012 18:31:48 +0000] ERROR: Exception handlers complete
        [Sun, 16 Sep 2012 18:31:48 +0000] FATAL: Stacktrace dumped to /tmp/vagrant-chef-1/chef-stacktrace.out
        [Sun, 16 Sep 2012 18:31:48 +0000] FATAL: NameError: wrong constant name Chef-symfony2Console
        Chef never successfully completed! Any errors should be visible in the output above. Please fix your recipes so that they properly complete.
    And this is what my Vagrantfile looks like:
        Vagrant::Config.run do |config|
          config.vm.box = "ubuntu-1104-server-i386"
          config.vm.network :hostonly, "33.33.33.33"
          config.vm.forward_port 80, 8000
          config.vm.share_folder "symfony.tests", "/var/www/symfony.tests", "data", :nfs => true
          config.vm.provision :chef_solo do |chef|
            chef.cookbooks_path = ["../my-recipes/cookbooks", "site-cookbooks"]
            chef.add_recipe "apt"
            chef.add_recipe "openssl"
            chef.add_recipe "apache2"
            chef.add_recipe "mysql"
            chef.add_recipe "mysql::server"
            chef.add_recipe "php"
            chef.add_recipe "php::module_apc"
            chef.add_recipe "php::module_curl"
            chef.add_recipe "php::module_mysql"
            chef.add_recipe "apache2::mod_php5"
            chef.add_recipe "apache2::mod_rewrite"
            chef.add_recipe "Symfony"
            chef.json = {
              :mysql => {
                :server_root_password => 'root',
                :bind_address => '127.0.0.1'
              }
            }
          end
        end
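
    Chef builds Ruby constant names from cookbook and resource names, and "Chef-symfony2Console" is not a legal Ruby constant, which points at a cookbook whose directory or metadata name contains a hyphen. A minimal sketch of hunting it down (paths per the Vagrantfile above):

        # Which cookbook names will Chef see?
        ls site-cookbooks/ ../my-recipes/cookbooks/

        # Check that directory name, metadata name and the add_recipe string
        # agree, and avoid characters that cannot form a Ruby constant.
        grep -ri 'name' site-cookbooks/*/metadata.rb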

    Read the article

  • How to test email spam scores with amavis?

    - by CaptSaltyJack
    I'd like a way to test a spam message to see the spam score SpamAssassin gives it. The SA db files (bayes_toks, etc.) reside in /var/lib/amavis/.spamassassin. I've been testing emails by doing this:
        sudo su amavis -c 'spamassassin -t msgfile'
    Though this yields some strange results, such as:
        Content analysis details: (3.7 points, 5.0 required)
         pts rule name              description
        ---- ---------------------- --------------------------------------------------
         3.5 BAYES_99               BODY: Bayes spam probability is 99 to 100%
                                    [score: 1.0000]
        -0.0 NO_RELAYS              Informational: message was not relayed via SMTP
         0.0 LONG_TERM_PRICE        BODY: LONG_TERM_PRICE
         0.2 BAYES_999              BODY: Bayes spam probability is 99.9 to 100%
                                    [score: 1.0000]
        -0.0 NO_RECEIVED            Informational: message has no Received headers
    0.2 is an awfully low score for BAYES_999! But this is the first time I've used amavis; previously I've always just used SpamAssassin directly as a content filter in Postfix, but apparently running amavis/SpamAssassin is more efficient. So, with amavis in the picture, how can I run a test on a message to see its spam score breakdown? Another email I ran a test on got this result:
        2.0 BAYES_80 BODY: Bayes spam probability is 80 to 95% [score: 0.8487]
    It doesn't make sense that BAYES_80 can yield a higher score than BAYES_999. Help!
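
    Two things worth knowing here. First, as the quoted output itself shows, BAYES_999 fires in addition to BAYES_99, so its 0.2 is an increment on top of the 3.5 rather than a replacement; that also resolves the BAYES_80 comparison above, since the real totals are 2.0 versus 3.5 + 0.2. Second, debug mode shows exactly which config and Bayes database the amavis user is reading. A minimal sketch:

        # Show which config files and Bayes db SpamAssassin reads when run
        # as the amavis user, and how the Bayes rules actually fire.
        sudo -u amavis spamassassin -D config,bayes -t < msgfile 2>&1 | grep -i bayes

        # Inspect the Bayes database amavis sees (token and ham/spam counts).
        sudo -u amavis sa-learn --dump magic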

    Read the article
