Search Results

Search found 8715 results on 349 pages for 'bad sectors'.

  • Is "DSLAM congestion" a legitimate reason for slow DSL?

    - by Jay Bazuzi
    My DSL has been extremely slow in the evenings recently. To test it, I telnet to my DSL modem and ping the gateway; this way I eliminate internet congestion and local network issues. In the mornings I get 30ms - 50ms pings. In the evenings it bounces around a lot, but 10000ms pings are common. I complained to Qwest support, and they said it was a known issue on their end, their engineers were working on it, and wouldn't say anything else. A couple of days later I complained again, and they sent out a technician. He tested my house wiring and found that one of the lines had a short. It was an unused line, so we disconnected it, and he said things looked better and left. My daytime speeds improved at this point, but evening is still bad. I complained to Qwest support again, and they said it was a problem with DSLAM congestion at their end, and that they were working on it, but no ETA. My neighbor has Qwest DSL and doesn't seem to have these problems, which seems strange; I go use her network when I absolutely must get online and mine is behaving badly. I can't tell if they're yanking my chain or not. Regardless, these speeds are crap. I'm paying for 7Mbps but am lucky if I get 1/10th of that in the evenings. My kids like to watch Netflix streaming movies, and it's just impossible after 5pm or so. Should I wait it out? Will complaining again produce any results? Should I change my subscription to a lower speed until they fix their end? Or switch to cable?

  • Windows Server 2008 R2 DNS - One IP, multiple servers

    - by Blu Dragon
    I need opinions and examples on how best to accomplish the setup I am looking for. I have a public-facing AD domain server with one public IP address. I have set up an external zone for example.com, and I successfully have my own name servers pointing to it at ns0.example.com and ns1.example.com. I also have an internal zone for my private network at home.example.com. I am behind a router, with the domain server in the DMZ. I want dev.example.com to be accessible from the outside world over https and to point to internal IP address 192.168.1.78. Likewise, I want www.example.com to be accessible from the outside world and point to internal IP address 192.168.1.79. Both the dev and www servers are CentOS 5.6 VMs running inside Hyper-V on the domain server (bad idea, I know, but I am limited on hardware atm). What is the best way to achieve this? From what I have read and researched on Google, I may need to set up a reverse proxy, but I am not sure how well that will work with SSL.
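
    To make the reverse-proxy idea concrete, here is a minimal sketch for one of the two sites, assuming an Apache front end with mod_proxy and mod_ssl (certificate paths are hypothetical; the same pattern would be repeated for www.example.com pointing at 192.168.1.79):

        <VirtualHost *:443>
            ServerName dev.example.com
            SSLEngine on
            SSLCertificateFile    /etc/pki/tls/certs/dev.example.com.crt
            SSLCertificateKeyFile /etc/pki/tls/private/dev.example.com.key

            # terminate SSL here and hand the request to the internal VM
            ProxyPreserveHost On
            ProxyPass        / http://192.168.1.78/
            ProxyPassReverse / http://192.168.1.78/
        </VirtualHost>

    Since both names share one public IP, two SSL virtual hosts like this rely on SNI support in the connecting clients; the router would simply forward ports 80/443 to whichever box runs the proxy.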

  • Firebird database corruption causes

    - by Rytis
    I am running several different Firebird versions (2.0, 2.1) on multiple entry-level Windows-based servers with wildly varying hardware. The only thing they have in common is that they are running the same home-built application with the same database structure. Lately I've been seeing massive slowdowns on multiple servers. It turns out that the database gets corrupted, so each time it breaks I get to mend, back up and restore the database, and then everything is fine for some time (1-2 weeks) until it happens again. Thankfully, I haven't seen any data loss or damage... yet. The thing is that every such downtime results in lost productivity, and often quite a bit of driving for me, as some of the databases are in remote locations. I've been trying to find out what's causing the corruption, but I haven't been able to. The fact that it's happening on different hardware hints that it should not be a hardware-based problem. If we rule out hardware issues, I have a bad feeling that it's a bug in Firebird, as I'm not doing anything fancy via SQL. Do you have any idea how to find out exactly what's causing the corruption and hopefully fix the problem?
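
    For reference, the mend/backup/restore cycle described above looks roughly like this with Firebird's own command-line tools (file names and credentials are hypothetical; this is only a sketch). The last step enables forced writes, which being switched off is a commonly cited cause of corruption on Windows:

        rem validate and report damaged pages
        gfix -v -full -user SYSDBA -password masterkey mydb.fdb

        rem back up (-g skips garbage collection on a damaged database) and restore into a fresh file
        gbak -b -v -g -user SYSDBA -password masterkey mydb.fdb mydb.fbk
        gbak -c -v -user SYSDBA -password masterkey mydb.fbk newdb.fdb

        rem make sure forced writes are enabled on the database
        gfix -write sync -user SYSDBA -password masterkey newdb.fdb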

  • Opera 10.5 RAM usage and Google Reader?

    - by David
    Hi all, today I upgraded to Opera 10.5 from Google Chrome and I have two really important questions about it. 1) Is it normal for it to use SO MUCH RAM?! Closing tabs doesn't help, but opening new ones adds to the usage. I can have just 4 tabs open and it goes up to the 300MB mark, and I only have 1.5GB in my laptop, 596MB of which is used by the graphics card, so this is really unacceptable. Is there a way to fix it? 2) Why does Google Reader feel so slow and unresponsive in it? It lags so badly when I just try scrolling through the page. I know Opera is known for being really smooth while scrolling through pages. There's also a white bar at the bottom of the page that I can't get rid of; it blocks the "Next" and "Previous" buttons. The text between articles is also sort of intersecting, which just looks completely unattractive and is something I'm not used to with any web browser. I realize there's a built-in RSS reader, but it doesn't sync across multiple computers and is very late at updating. Here are my specs: Windows 7 Ultimate (x86), Intel Pentium M 1.86 GHz, 1.5GB RAM, ATI Mobility Radeon X600 (64MB dedicated, 596MB shared)

  • mpirun -np N, what if N is larger than my core count?

    - by Daniel
    Say I have a 4-core workstation: what will Linux (Ubuntu) do if I execute mpirun -np 9 XXX? Q1. Will all 9 processes run together immediately, or will they run 4 at a time? Q2. I suppose that using 9 is not good because of the leftover process. Will it confuse the computer at all, or will the "head" of the computer decide which of the 4 cores gets used? Or is it picked randomly? Who decides which core to use? Q3. If my CPU is not bad, my RAM is okay and large enough, and my case is not very big, is it a good idea, in order to fully use my CPU and RAM, to run mpirun -np 8 XXX, or even mpirun -np 12 XXX? Q4. Who decides all of this efficiency optimization: Ubuntu, Linux, the motherboard, or the CPU? Your enlightenment would be really appreciated.
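
    A small sketch of how this is usually checked and launched (XXX stands in for the real program, as in the question; the exact behaviour of oversubscription can vary between MPI implementations):

        nproc                 # report how many cores the kernel sees (4 here)
        mpirun -np 4 ./XXX    # one rank per core - the usual starting point
        mpirun -np 9 ./XXX    # all 9 ranks start at once; the kernel scheduler time-slices them over the 4 cores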

  • USB mouse disconnecting and reconnecting in Windows and Linux

    - by Kalak
    I have a problem similar to what is described at "Why is my USB mouse disconnecting and reconnecting randomly and often?", except it is happening in both Windows 7 and Linux (Ubuntu 12.04 LTS, fully patched): multiple mice, multiple OSs. The mouse stops responding to input for about 3-5 seconds, then starts responding again. It's more frequent and lasts longer when running games (TF2, L4D, Dishonored, Borderlands 2, and more), but it happens when just running the OS as well. I was hoping it was the motherboard, so I bought a USB 2.0 PCI card to try that, but it's still happening. I've stripped it down to just the keyboard and mouse (a different keyboard too, just in case the keyboard was the problem), but it's still happening. All the hardware (mice and keyboards) works fine on other computers. I have literally pulled the mouse and keyboard out, plugged them into another computer (laptop), re-joined the online game, and had no problem with the keyboard and mouse combo that had just failed on my gaming rig. Please, no driver / Windows-only or Linux-only suggestions, as those wouldn't affect both OSs. Edit: known good mice I've brought home are now going bad. I suspect the hardware is messed up (voltage?) and has been frying the mice.

  • Windows 7 doesn't boot when second hard disk is connected

    - by kenshin9786
    Apologies in advance for my bad English. I have two hard disks, one SATA and one IDE. I have Windows XP and 7 on the SATA one, and Ubuntu on the IDE one. Both of them boot and work; the BIOS recognizes them; everything just works. After I installed Windows 7 and connected the IDE drive, it freezes on "Starting Windows" (the black screen with the Windows logo). If I unplug the IDE drive, it starts normally. Windows XP starts normally in both situations (with or without the IDE drive connected), and the same goes for Ubuntu (it works with both disks connected, or with just the IDE drive it lives on). The IDE drive is in good health according to SMART. The IDE drive is first in the boot order. It goes to Ubuntu's GRUB first, then by default to the Windows 7 bootloader, and then to XP. I don't think the problem is the bootloader or GRUB. I just read that it can be solved by formatting the "problematic" hard disk, because Windows 7 cannot handle so many active partitions or something like that. But that's not an option for me: I don't want to lose my Ubuntu install or leave it unbootable. How can I solve this without the consequences I mentioned? Any help would be appreciated.

  • Network drive mapping script (VBS) doesn't work on Vista, does on XP

    - by The_cobra666
    Hi all, I've got a weird problem (like always :p). The situation: a Windows 2003 domain with XP clients. Via a GPO I'm running a VBS script at login to map a few drives. This works great on XP, but not on Vista. If I manually run the script after the user has logged on, it works, so I know the script works on Vista; it just doesn't run via the GPO. The user has admin privileges. I also have the same problem on Windows 7 RC1, so it must be related. The script:

        on error resume next
        Dim objNetwork
        Dim strDriveLetter, strRemotePath, strUserName

        strDriveLetter = "Z:"
        strRemotePath = "\\Onsgeluk.ons_geluk.local\Profieldoc"

        Set objNetwork = WScript.CreateObject("WScript.Network")
        strUserName = objNetwork.UserName

        objNetwork.RemoveNetworkDrive "Z:"
        objNetwork.MapNetworkDrive strDriveLetter, strRemotePath & "\" & strUserName

        objNetwork.RemoveNetworkDrive "X:"
        objNetwork.MapNetworkDrive "X:", "\\Onsgeluk.ons_geluk.local\Data"

        objNetwork.RemoveNetworkDrive "Y:"
        objNetwork.MapNetworkDrive "Y:", "\\Onsgeluk.ons_geluk.local\Mappen\hoofdverpleging"

    Does anyone have a clue? Thanks in advance guys (and girls). PS: sorry for my bad English!

  • GNU screen multiuser mode is broken in OS X 10.6 (Snow Leopard)

    - by schustafa
    I'm using GNU screen for remote pair programming. Let's call the local account for the remote user 'pairpair'. I have the following lines in my .screenrc:

        multiuser on
        acladd pairpair

    I have run sudo chmod u+s /usr/bin/screen. However, when the remote user tries to connect to my screen with the command

        screen -r [my_account_name]/[pid_of_screen]

    I receive the following message:

        Attach attempt with bad pid(xxx)

    The pid listed in the error message matches the pid of the screen process run by the remote user. The remote user's screen process hangs; my screen session continues happily along after the error message disappears. I've tried using both the built-in screen (at /usr/bin/screen) and the screen available from MacPorts, but I get the same error in both cases. This worked on OS X 10.5 (Leopard). I've googled around for the error message, but most of the hits relate to some BSD bug from 2003 or so (which was fixed). Has anyone else seen this behavior? Does anyone have any idea how to make multiuser support in screen work in SL?

  • 32-bit SQL Server with AWE not enabled: Buffer Cache Hit Ratio high, Disk Read Queue very high. Why?

    - by chenwq
    We have SQL Server 2005 SP3 32-bit Enterprise Edition running on Windows 2003 32-bit Enterprise Edition with 12GB RAM, now with AWE enabled, using RAID 5 (5 physical disks). We turned AWE on and restarted SQL Server this afternoon after work, hoping performance will be better than it has been. But there is something we are very confused about. On working days, SQL Server has very bad performance. While looking for reasons, we checked the Windows performance counters:

        Avg. Disk Read Queue Length        > 140
        Avg. Disk Write Queue Length       < 1
        SQL Server Buffer Cache Hit Ratio  > 96%
        % Processor Time                   < 30%
        SQL Server Total Server Memory     < 1.8GB

    Obviously, without AWE enabled, SQL Server can use less than 2GB of memory. My question is: why is "SQL Server Total Server Memory" less than 2GB? I thought SQL Server would use the whole 2GB process address space. Does this counter leave anything out? We know that SQL Server is suffering from a lack of memory, but then why is the buffer cache hit ratio as high as 96%? Any advice is welcomed!
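
    A couple of checks often suggested for this situation, sketched in T-SQL (requires sysadmin; the values are examples only). On 32-bit SQL Server, AWE only takes effect if the option really is enabled at the instance level and the service account holds the "Lock pages in memory" privilege, so it is worth confirming the run value and capping max server memory below the 12GB of physical RAM:

        EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
        EXEC sp_configure 'awe enabled';                     -- compare config_value with run_value
        EXEC sp_configure 'awe enabled', 1;  RECONFIGURE;    -- takes effect after a service restart
        EXEC sp_configure 'max server memory (MB)', 10240;  RECONFIGURE;  -- example cap, leave room for the OS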

  • MySQL-Cluster or Multi-Master for production? Performance issues?

    - by Phillip Oldham
    We are expanding our network of webservers on EC2 to a number of different regions and currently use master/slave replication. We've found that over the past couple of months our slave has stopped replicating a number of times, which required us to clear the db and initialise the replication again. As we're now looking to have servers in 3 different regions, we're a little concerned about these MySQL replication errors. We believe they're due to auto_increment values, so we're considering a number of approaches to quell these errors and stabilise replication: multi-master replication, with 3 masters (one in each region), the relevant auto_increment offsets, and regular backups to S3; or MySQL Cluster, with 3 nodes (one in each region) and a separate management node which will also aggregate logs and statistics. After investigating, it seems they both have downsides (replication errors for the former, performance issues for the latter). We believe the cluster approach would allow us to manage and add new nodes more easily than the multi-master route, and would reduce/eliminate the replication issues we're currently seeing. But performance is a priority. Are the performance issues of MySQL Cluster as bad as people say?
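
    For what it's worth, the auto_increment collisions mentioned above are usually handled in multi-master setups with the two server variables below (a minimal sketch, assuming 3 masters; each master gets a different offset):

        # my.cnf on master 1 (use offset 2 on master 2, offset 3 on master 3)
        auto_increment_increment = 3
        auto_increment_offset    = 1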

  • Error in auth.log but I can log in; LDAP/PAM

    - by Peter
    I have a server running OpenLDAP. When I start an SSH session I can log in without problems, but an error appears in the logs. This only happens when I log in with an LDAP account (not with a system account such as root). Any help to eliminate these errors would be much appreciated. The relevant piece from /var/log/auth.log:

        sshd[6235]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=example.com user=peter
        sshd[6235]: Accepted password for peter from 192.168.1.2 port 2441 ssh2
        sshd[6235]: pam_unix(sshd:session): session opened for user peter by (uid=0)

    PAM common-session:

        session [default=1] pam_permit.so
        session required pam_unix.so
        session optional pam_ldap.so
        session required pam_mkhomedir.so skel=/etc/skel umask=0022
        session required pam_limits.so
        session required pam_unix.so
        session optional pam_ldap.so

    PAM common-auth:

        auth [success=1 default=ignore] pam_ldap.so
        auth required pam_unix.so nullok_secure use_first_pass
        auth required pam_permit.so
        session required pam_mkhomedir.so skel=/etc/skel umask=0022 silent
        auth sufficient pam_unix.so nullok_secure use_first_pass
        auth requisite pam_succeed_if.so uid >= 1000 quiet
        auth sufficient pam_ldap.so use_first_pass
        auth required pam_deny.so

    PAM common-account:

        account [success=2 new_authtok_reqd=done default=ignore] pam_ldap.so
        account [success=1 default=ignore] pam_unix.so
        account required pam_unix.so
        account sufficient pam_succeed_if.so uid < 1000 quiet
        account [default=bad success=ok user_unknown=ignore] pam_ldap.so
        account required pam_permit.so
        account sufficient pam_ldap.so
        account sufficient pam_unix.so
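
    One common reading of that auth.log line is that pam_unix runs before pam_ldap in common-auth, fails for an account that exists only in LDAP, and logs the failure even though pam_ldap then succeeds. A minimal common-auth sketch that consults LDAP first (Debian-style jump syntax; module options are illustrative and would need adjusting to the files above) might look like:

        auth [success=2 default=ignore] pam_ldap.so
        auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass
        auth requisite pam_deny.so
        auth required pam_permit.so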

  • Chipset fan on the fritz - compressed air hasn't fixed anything - is there anything I can do?

    - by Anthony
    Yesterday, my computer started to make an annoying whining noise. Knowing that this is likely a fan issue, I opened the case and proceeded to determine which fan was causing it. I got some compressed air and tried cleaning out the dust around the fan (and the rest of the computer while I was at it). This hasn't fixed the issue. Now, if it were just any fan, I would probably just replace it - they're relatively cheap, after all. However, this is a special fan. (Aside: for what it's worth, I feel bad that the graphics card blocks part of the fan, but it is the only slot the graphics card fits in, so I had no choice.) After pulling out my motherboard user guide, it looks like this is a fan placed directly on top of the chipset. To be perfectly honest, I have no clue what the purpose of the chipset is - but it sounds important. After some quick research, I see that it is responsible for providing the bridge between my CPU, RAM and graphics, among other things. A quick search at Newegg tells me that chipset fans can be purchased at pretty reasonable prices (< 20 dollars). Is it practical to replace this fan? It is an old computer as computers go, and I wouldn't be terribly upset to upgrade the motherboard and processor, so perhaps this is a sign. Hardware specs: Motherboard: Asus A8N-E; Chipset: NVIDIA nForce4 Ultra

  • "Open file location" is broken for Win Live Photo Gallery and Chrome

    - by Arnold Spence
    Symptoms: Windows Live Photo Gallery: right-clicking on an image (.jpg, .png) and selecting the context menu item "Open file location" should open the containing folder in Explorer. Instead, I get an error dialog stating "This file does not have a program associated with it for performing this action. Please install a program or, if one is already installed, create an association in the Default Programs control panel." Chrome: after downloading a file (of any type), clicking the down arrow on the download status bar at the bottom and clicking "Show in folder" results in the same error dialog mentioned above. I'm using Windows 7. I'm pretty sure this is a registry entry gone bad, but I've been unable to locate any information about this specific problem. It's not a file type association problem, as I am not trying to open the files concerned; I'm trying to open an Explorer window at the location of the file. I've found a similar issue that somebody had with Explorer itself. However, the suggested registry fixes there for "Folder", "Directory" and "Drive" did not solve the problem. Also, if I use the search bar in Explorer to do a search, right-click on a file and choose "Open file location", Explorer jumps to that location without any trouble. I have not yet identified other programs with this issue.

  • SQL Server 2005 SP3 on Windows 7 - No Management Studio

    - by Mike Thomas
    I've been trying for a day and a half now to get SQL Server 2005, Developer Edition, to work on Windows 7 64-bit Professional. I install from the disk, then run SP3. I get a failure on the Client Components section of the installation progress, along with this vague message:

        Product                    : Client Components
        Product Version (Previous) : 1399
        Product Version (Final)    :
        Status                     : Failure
        Log File                   : C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix\SQLTools9_Hotfix_KB955706_sqlrun_tools.msp.log
        Error Number               : 1712
        Error Description          : MSP Error: 1712  One or more of the files required to restore your computer to its previous state could not be found. Restoration will not be possible.

    I've uninstalled all Visual Studio versions and tried to make this as clean as possible, and have read a lot of the blog posts, but am really at my wits' end about this. I am not a DBA, but I use SQL Server all the time when coding and testing apps. Does anyone have any ideas as to where I can get this sorted out? I've been at this for a long time and have never encountered an installation as bad as this one. Thanks, Mike Thomas

  • Hostname and SSL (Apache) issue on Debian

    - by user105566
    I have been trying to set up an SSL virtual host:

        ServerAdmin [email protected]
        ServerName moclm.tap.pt
        DocumentRoot /var/www/tapme/
        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
        <Directory /var/www/tapme/>
            Options -Indexes FollowSymLinks MultiViews
            AllowOverride All
            #Order allow,deny
            #allow from all
        </Directory>
        SSLEngine on
        SSLCertificateFile /etc/ssl/moclm.cer
        SSLCertificateKeyFile /etc/ssl/moclm.pem
        </VirtualHost>

    For some reason, the server automatically redirects from http:// to https://. Apache is not configured to redirect, and the application was working fine on port 80 only. I have no knowledge of how the internal network works, as I am working remotely. The SSL error log shows:

        [Tue Oct 02 22:40:32 2012] [error] Hostname linemnt01.tap.pt provided via SNI and hostname moclm.tap.pt provided via HTTP are different

    I thought maybe the hostname had some issue, so I changed the hostname of the server from "linemnt01.tap.pt" to "moclm.tap.pt", but the issue is still there. I am getting the following error in the browser:

        Bad Request
        Your browser sent a request that this server could not understand.

    I have in /etc/hosts:

        127.0.0.1 localhost.localdomain localhost moclm.tap.pt moclm

    and openssl returns:

        openssl verify -CAfile cert-CA.cer moclm.cer
        moclm.tap.pt.cer: OK

    I have been trying to troubleshoot the issue but no luck. Need help. Thanks

  • Disk IO slow on ESXi, even slower on a VM (freeNAS + iSCSI)

    - by varesa
    I have a server with ESXi 5 and iSCSI-attached network storage (4x1TB RAID-Z on FreeNAS 8.0.4). The two machines are connected to each other with Gigabit Ethernet. The RAID-Z volume is divided into three parts: two zvols, shared with iSCSI, and one directly on top of ZFS, shared with NFS and similar. I ssh'd into the FreeNAS box and did some testing on the disks. I used dd to test the third part of the disks (straight on top of ZFS). I copied a 4GB block (2x the amount of RAM) from /dev/zero to the disk, and the speed was 80MB/s. One of the iSCSI-shared zvols is a datastore for the ESXi host. I did a similar test with time dd there. Since dd there did not report the speed, I divided the amount of data transferred by the time shown by time. The result was around 30-40MB/s. That's about half the speed from the FreeNAS host! Then I tested the IO on a VM running on the same ESXi host. The VM was a light CentOS 6.0 machine which was not really doing anything else at that time. There were no other VMs running on the server at the time, and the other two "parts" of the disk array were not used. A similar dd test gave me a result of about 15-20MB/s. That is again about half of the result one level lower! Of course there is some overhead in raid-z - zfs - zvolume - iSCSI - VMFS - VM, but I don't expect it to be that big. I believe there must be something wrong in my system. I have heard about bad performance of FreeNAS's iSCSI; is that it? I have not managed to get any other "big" SAN OS to run on the box (NexentaSTOR, Openfiler). Can you see any obvious problems with my setup?
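
    For anyone reproducing the numbers above, this is roughly the test being described (the paths are hypothetical; 4GB is about twice the RAM so the cache cannot absorb the whole write):

        # on the FreeNAS host, writing straight to the ZFS filesystem
        dd if=/dev/zero of=/mnt/tank/ddtest bs=1M count=4096

        # inside the CentOS guest (GNU dd); conv=fdatasync flushes before the rate is reported
        dd if=/dev/zero of=/root/ddtest bs=1M count=4096 conv=fdatasync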

  • VoIP setup for one external PSTN line

    - by Jcl
    I'm completely new to VoIP and the like, and I'm trying to find out what the best setup for this would be. I need 4 wireless extensions (maybe more in the future, but 5 or 6 at most), connected to 1 PSTN line, and maybe 2 lines in the future. I've been trying to gather information about the gear needed, but everything I find seems way over the top (and extremely expensive). The main problem is that the physical location we are in has no possibility of a decent internet connection, so using an external VoIP "virtual PBX" is not an option. The thing is, even though the organization is small, the phone is critical to it. I currently have an analog DECT/GAP PBX which does what I need, but the PBX is very bad and the call quality is horrible, and that's why I want to change it. The requirements would be:

        - 4 wireless terminals (routing cable is not an option), all of them ringing on incoming PSTN calls
        - ability to make internal calls (4 separate offices) and to pass calls between terminals
        - the 4 terminals should be able to access the external PSTN line without dialing any special codes
        - very important: terminals should be able to issue commands on the PSTN line to the external operator in the form *nn*nnnnnnnn# (I don't know whether this could turn out to be a problem, but I've had problems with analog PBXs that would take any * as a PBX command and wouldn't let terminals send it to the external line)
        - not so important, but nice to have: call-waiting music

    Could anyone recommend such a setup? I need to be able to do this on an EXTREMELY LIMITED budget (that is: I don't have a hard limit, but it should get as close to zero as possible). I have enough spare powerful computers and a 300mbps wireless network which works just fine, so that doesn't need to be included in the budget. I don't really know if this is the best place to ask, but it's the most relevant StackExchange site I've found for this subject.

  • EC2: How dangerous is it to turn off fsck for EBS volumes?

    - by Janine
    I have been tearing my hair out trying to figure out why my EC2 instances (made from my own custom AMIs) were taking many tries to come up properly. They would fail with the following error for both of the EBS volumes I was attaching during startup:

        fsck.ext3: No such file or directory while trying to open /dev/sdf

    Finally, I figured out the problem. I had put this in /etc/fstab:

        /dev/sdf /export  ext3 defaults 1 2
        /dev/sdi /export2 ext3 defaults 1 2

    The 2 tells the system to fsck the drives on the way up. Changing this to

        /dev/sdf /export  ext3 defaults 1 0
        /dev/sdi /export2 ext3 defaults 1 0

    avoids the problem completely, but now the volumes are never going to be fsck'd. How much does this matter? Once the instance goes into production it's going to be running pretty much 24/7, so not many fscks would be happening anyway, but still... this just feels like a bad idea. I have not been able to find anyone else even reporting this problem (there are people with the same error message, but different causes). It seems unbelievable that I could be the only person to ever make this mistake, but perhaps I'm just talented that way. :) If there is another solution to the problem I would love to hear it; I have not been able to find one.
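
    One middle ground, sketched here on the assumption that the volumes can be unmounted briefly during a maintenance window (device and mount names as in the question): leave the fstab pass number at 0 and run the check by hand now and then.

        umount /export
        fsck.ext3 -f /dev/sdf    # -f forces a full check even if the filesystem is marked clean
        mount /export

        tune2fs -l /dev/sdf | grep -i 'last checked'    # see when the volume was last checked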

  • How to eliminate the downtime when a dynamic IP address changes?

    - by xenon
    We currently have a number of client computers linked up to a database server (MS SQL 2008) for replication. The database server recognises the computers based on their Windows hostnames. We are using dynamic IP addresses at this time because we tend to change the computers' hardware quite frequently, and so the MAC address may be different. Unless static IP has a good way for us to manage frequently changing MAC addresses, we are sticking with dynamic IP. The problem with dynamic IP addresses, however, is that when a client fetches a new IP from the DHCP server, i.e. there is a change in the IP address, there is downtime while the hostname record catches up with the new IP address, the client's DNS cache of the hostname reloads, and the server's DNS cache reloads to see the new IP for the hostname. All of these have different timings, and the delay can be really bad at times. Restarting the computer doesn't always work either. The clients are on Windows 7. How can I eliminate the amount of downtime required when there is a change in IP in the case of dynamic IP addresses?
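
    A small sketch of the manual nudge often suggested for this (standard Windows commands; how much it shortens the gap depends on the DNS zone settings and cache TTLs):

        rem on the client, right after it takes the new lease: re-register its A record immediately
        ipconfig /registerdns

        rem on the client and on the SQL Server box: drop stale entries from the local resolver cache
        ipconfig /flushdns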

  • Kerberos & localhost

    - by Alex Leach
    I've got a Kerberos v5 server set up on a Linux machine, and it's working very well when connecting to other hosts (using Samba, LDAP or SSH) for which there are principals in my Kerberos database. Can I use Kerberos to authenticate against localhost, though? And if I can, are there reasons why I shouldn't? I haven't made a Kerberos principal for localhost. I don't think I should; instead I think the principal should resolve to the machine's full hostname. Is that possible? I'd ideally like a way to configure this on just one server (whether Kerberos, DNS, or SSH), but if each machine needs some custom configuration, that'd work too. e.g.

        $ ssh -v localhost
        ...
        debug1: Unspecified GSS failure.  Minor code may provide more information
        Server host/[email protected] not found in Kerberos database
        ...

    EDIT: So I had a bad /etc/hosts file. If I remember correctly, the original version I got with Ubuntu had two 127.0.x.x IP addresses, something like:

        127.0.0.1 localhost
        127.0.1.1 hostname

    For no good reason, I'd changed mine a long time ago to:

        127.0.0.1 localhost
        127.0.0.1 hostname.example.com hostname

    This seemed to work fine with everything until I tried out SSH with Kerberos (a recent endeavour). Somehow this configuration led to sshd resolving the machine's Kerberos principal to "host/localhost@\n", which I suppose makes sense if it uses /etc/hosts for forward and reverse DNS lookups in preference to external DNS. So I commented out the latter line, and sshd magically started authenticating with gssapi-with-mic. Awesome. (Then I investigated localhost and asked the question.)
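
    For reference, a minimal sketch of how a host principal tied to the machine's FQDN is usually created and exported on an MIT KDC (hypothetical hostname and admin principal; ktadd writes to /etc/krb5.keytab by default):

        kadmin -p admin/admin -q "addprinc -randkey host/hostname.example.com"
        kadmin -p admin/admin -q "ktadd host/hostname.example.com"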

  • Munin with postgresql 9.2

    - by jreid9001
    I am trying to set up Munin to collect stats on a server with PostgreSQL 9.1 and 9.2 (the server is currently running 9.1; I have tested on a fresh VM with 9.2 to rule out some weird problem on the running server. I had to patch some of the plugins for 9.2 due to renamed columns (e.g. procpid to pid), but that's no problem). Munin is installed from the EPEL repos, Postgres from the official one. Both are up to date. When I try to run munin-node-configure --suggest, I get this output:

        # The following plugins caused errors:
        # postgres_bgwriter:
        #     Junk printed to stderr
        # postgres_cache_:
        #     Junk printed to stderr
        # postgres_checkpoints:
        #     Junk printed to stderr
        # postgres_connections_:
        #     Junk printed to stderr
        # postgres_connections_db:
        #     Junk printed to stderr
        # postgres_locks_:
        #     Junk printed to stderr
        # postgres_querylength_:
        #     Junk printed to stderr
        # postgres_scans_:
        #     Junk printed to stderr
        # postgres_size_:
        #     Junk printed to stderr
        # postgres_transactions_:
        #     Junk printed to stderr
        # postgres_tuples_:
        #     Junk printed to stderr
        # postgres_users:
        #     Junk printed to stderr
        # postgres_xlog:
        #     Junk printed to stderr

    After a lot of searching around, I edited /etc/munin/plugin-conf.d/munin-node and added the following:

        [postgres*]
        user postgres

    This stops munin-node-configure complaining about stderr and lets me add the plugins, but when I telnet to the server on 4949 and try to fetch the stats, I just get "Bad exit". When I run the plugin individually via munin-run (e.g. munin-run postgres_size_ALL), it works completely fine. Looking at /var/log/munin/munin-node.log, this is the output:

        Error output from postgres_size_ALL:
        DBI connect('dbname=template1','',...) failed: could not connect to server: Permission denied
            Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
            at /usr/share/perl5/vendor_perl/Munin/Plugin/Pgsql.pm line 377
        Service 'postgres_size_ALL exited with status 1/0.

    I am now out of ideas... the socket definitely exists, and pg_hba.conf is set to allow all users/databases from localhost with trust.

  • Bounce backs from web-generated e-mails are missing

    - by JerSchneid
    We use Google Apps to host my company's mail. On our website, we send some e-mails on behalf of our users. In those e-mails we include lines like this:

        Return-Path: <[email protected]>
        Sender: <[email protected]>

    Sending the messages works great (they pass SPF tests), but in the case that a message is sent TO an invalid e-mail address, we expect to get a bounce-back message sent to "[email protected]". That message never arrives. (If we send an e-mail manually from within the Gmail interface to the same bad address, the bounce does arrive.) We used to receive the bounce-back messages as expected, but it seems like they are always quietly blocked now (not in spam or anything). Is there a new policy that blocks bounce-backs when the "From" does not match the "Return-Path" or something? We would really like to get these bounce-backs to verify the delivery of the messages. Is there any way to prevent them from being blocked?! Thank you!

  • Very poor SCSI hd performance on IBM x336 with LSI 1030 RAID1

    - by David Tschoepe
    I'm experiencing very poor performance on an IBM x336 server with dual 73GB 15k hard drives on a U320 controller, an LSI 1030. We're getting maybe 3.5MB/sec max (per the HD Tune utility). It should be over 100MB/sec at least, I would think (another x335 box is running at 70-80MB/sec). The server was recently set up and I didn't really notice the problem at first, but it may have been there from the beginning; I'm not sure. I have installed the IBM ServeRAID Windows utility. The server is running Windows 2008 R2 Web edition (if that matters). I thought maybe one of the drives was bad, so far I have removed one of the drives from the array and tested again, but I get the same results. I'm waiting for the RAID 1 to resync and will try pulling the other drive next. I've also looked through the ServeRAID utility but haven't noticed anything in there that might indicate a problem. Not sure if I'm on the right path here, so I'm looking for some advice to track this down.

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts. I suspect that it is in fact only 32-bit SQL, and I'd like to verify this. Here are some details:

        - The OS is definitely 64-bit.
        - xp_msver shows Platform as NT INTEL X86.
        - SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)...
        - However, sqlservr.exe is not shown with '* 32' in taskmgr. Does anyone know why that is, if it is in fact 32-bit as claimed?
        - Despite this, it does seem to be running out of the x86 Program Files folder.

    If I do the same checks on a confirmed 64-bit installation, I get back the expected 64-bit readings, which can only mean that the server in question is running 32-bit. Now, that being the case, the question arises of how much memory this '32-bit' install can use. Task Manager reports about 3.5GB memory usage for sqlservr.exe (the server has 16GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64-bit) if SQL is simply using a 32-bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64-bit in order to fully utilise the hardware platform, however it is currently heavily in production; this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.
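
    A quick way to settle the 32-vs-64-bit question without reading Task Manager, as a plain T-SQL sketch (the Edition string carries a "(64-bit)" suffix only on 64-bit builds):

        SELECT SERVERPROPERTY('Edition')        AS Edition,          -- e.g. "Enterprise Edition (64-bit)"
               SERVERPROPERTY('ProductVersion') AS ProductVersion,
               @@VERSION                        AS FullVersionString;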
