Search Results

Search found 4155 results on 167 pages for 'joe weeks'.

  • Speakers silent, headphones work in Ubuntu 9.04

    - by CarlF
    I'm running Ubuntu 9.04. Worked fine for months, then I rebooted yesterday after weeks of continuous operation. Now audio won't play through the speakers. The USB headset works fine, but the Conexant audio (CX20549) does not. Weirdly, it thinks it's playing. pavumeter shows appropriate levels, volume looks OK in alsamixer, but no sound. I did find this page: http://www.eugeneteplitsky.com/fixing-silent-pulseaudio-in-ubuntu-9-04/ Unfortunately the advice there doesn't help me. For one thing, the syntax for the alsa-base.conf file is apparently not actually documented anywhere. For another, my chipset isn't listed in the kernel.org docs he links to! EDIT: would upgrading to 9.10 help? Is there a major change in the audio subsystem between 9.04 and 9.10? Any suggestions? EDIT 2: This is stranger than I thought. Sound works normally in Xine, but is silent in Audacity, VLC and mplayer. What the?
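
    A hedged starting point, assuming the CX20549 is driven by snd-hda-intel and that a model override applies to this codec (the model name below is an illustrative guess, not a confirmed value; valid names are listed in the kernel's HD-audio model documentation):

        # /etc/modprobe.d/alsa-base.conf
        options snd-hda-intel model=laptop

        # then reload the ALSA driver stack (or reboot):
        sudo alsa force-reload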

  • Intermittent NFS lockups on Isilon cluster

    - by blackbox222
    We have an Isilon cluster with 8 IQ 12000x nodes which exports storage via several NFS shares for a handful of Linux and Solaris clients. There is a Linux system that has one of these NFS filesystems mounted, and I/O to it is moderately heavy. Every 3-4 weeks (it's not on any discernible schedule, and is sometimes more or less frequent than that), all activity ceases on this NFS mount: processes hang as if the network had stopped working, stuck in uninterruptible sleep. About 30 minutes later the share recovers and things continue to work normally. The kernel log from the affected machine is as follows:

        Dec 3 10:07:29 redacted kernel: [8710020.871993] nfs: server nfs-redacted not responding, still trying
        Dec 3 10:37:17 redacted kernel: [8711805.966130] nfs: server nfs-redacted OK

    Relevant /etc/fstab line:

        nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs defaults 0 0

    I've checked for scheduled processes (e.g. cron jobs) and Isilon-related functions (e.g. snapshots) that might be causing these hangups, but I can't find anything. I'm also not aware of any network issues or maintenance that would cause this. All of the lockups last almost exactly 30 minutes per the kernel logs. Perhaps someone has some suggestions I could try? (I considered a soft mount to avoid hung processes, but I am wary of the corruption that could result, and it would not really solve the underlying issue anyway.)
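
    One way to bound the hang rather than risk a soft mount, sketched below (the timeo/retrans values are illustrative, not tuned for this cluster): keep the mount hard for data safety but make the retry behavior explicit, and add intr so stuck processes can at least be killed on older kernels.

        # /etc/fstab -- timeo is in tenths of a second; intr is ignored
        # (and harmless) on kernels 2.6.25 and later
        nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs hard,intr,timeo=150,retrans=3 0 0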

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats, which in turn connect to various databases. We balance the load to the Tomcats with mod_proxy_balancer. Currently we receive 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks there is an event happening, and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time, and as soon as the response time from Tomcat climbs above some threshold, serve an error page. That way users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long, eventually timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense; I'm looking for suggestions on how I could achieve this. Thanks.
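
    Apache itself won't watch a rolling average of response times, but a rough approximation of "fail fast once Tomcat saturates" can be sketched in mod_proxy terms (directive values are illustrative): cap how long Apache waits on a balancer member and serve a cheap static page when the proxy gives up.

        <Proxy balancer://tomcats>
            # short per-request timeout so a saturated Tomcat fails fast;
            # retry= keeps a failed member out of rotation for a while
            BalancerMember http://tomcat1:8080 timeout=10 retry=30
            BalancerMember http://tomcat2:8080 timeout=10 retry=30
        </Proxy>
        ProxyPass /app balancer://tomcats/app

        # static error pages cost almost nothing to render
        ErrorDocument 503 /overloaded.html
        ErrorDocument 504 /overloaded.html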

  • No apparent reason for high load average

    - by Oz.
    We have several web servers running on Amazon EC2 c1.xlarge instances, on the Amazon AMI. The servers are duplicates of each other, running the exact same hardware and software. Each server's spec is: 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform, I/O Performance: High, API name: c1.xlarge. A couple of weeks ago we ran a yum upgrade on one of the servers. Since that upgrade, the upgraded server has shown a high load average. Needless to say, we did not update the other servers, and we cannot do so until we understand the reason for this behavior. The strange thing is that when we compare the servers using top or iostat, we cannot find the reason for the high load. Note that we have moved traffic from the "problematic" server to the others, which has made the "problematic" server less crowded in terms of requests, and still its load is higher. Do you have any idea what it could be, or where else we can check? Many thanks for the help! Oz.

        # proper server -- w
        00:42:26 up 2 days, 19:54, 2 users, load average: 0.41, 0.48, 0.49
        USER  TTY    FROM          LOGIN@  IDLE   JCPU   PCPU   WHAT
              pts/1  82.80.137.29  00:28   14:05  0.01s  0.01s  -bash
              pts/2  82.80.137.29  00:38   0.00s  0.02s  0.00s  w

        # proper server -- iostat
        Linux 3.2.12-3.2.4.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   9.03   0.02     4.26     0.17    0.13  86.39
        Device:  tps   Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
        xvdap1   1.63        1.50       55.00    367236  13444008
        xvdfp1   4.41       45.93       70.48  11227226  17228552
        xvdfp2   2.61        2.01       59.81    491890  14620104
        xvdfp3   8.16       14.47       94.23   3536522  23034376
        xvdfp4   0.98        0.79       45.86    192818  11209784

        # problematic server -- w
        00:43:26 up 2 days, 21:52, 2 users, load average: 1.35, 1.10, 1.17
        USER  TTY    FROM          LOGIN@  IDLE   JCPU   PCPU   WHAT
              pts/0  82.80.137.29  00:28   15:04  0.02s  0.02s  -bash
              pts/1  82.80.137.29  00:38   0.00s  0.05s  0.00s  w

        # problematic server -- iostat
        Linux 3.2.20-1.29.6.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   7.97   0.04     3.43     0.19    0.07  88.30
        Device:  tps   Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
        xvdap1   2.10        1.49       76.54    374660  19253592
        xvdfp1   5.64       40.98       85.92  10308946  21612112
        xvdfp2   3.97        4.32       93.18   1087090  23439488
        xvdfp3  10.87       30.30      115.14   7622474  28961720
        xvdfp4   1.12        0.28       65.54     71034  16487112
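
    Since the load average counts tasks in uninterruptible (D) sleep as well as runnable ones, and the CPUs above are mostly idle, a sketch of where to look next (pidstat is in the sysstat package):

        # tasks stuck in uninterruptible sleep inflate load without CPU use
        ps -eo state,pid,cmd | awk '$1 == "D"'

        # per-process disk I/O, sampled every 5 seconds
        pidstat -d 5

        # note the kernels differ: 3.2.12 on the good box, 3.2.20 on the bad one
        uname -r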

  • Ubuntu: Tar doesn't work correctly

    - by Phuong Nguyen
    It looks like tar is having some problems. I installed the Ubuntu 10.04 alpha a few weeks ago. The alpha was so unstable that I had to switch back to 9.10, so I backed up all of my profile data (/home/my_user_name) to my_user_name.tar.gz. Here is what I did (in 10.04): open Nautilus and go to /home/my_user_name, press Ctrl+H to view all hidden files, press Ctrl+A to select all files, right-click and choose [Compress...]. Now that I have set up Ubuntu 9.10 again, I extracted the tar using the Extract Here command in Nautilus, and funny things happened: instead of extracting to the current folder, the archive manager created a folder named my_user_name and put the extracted content into it; none of the files that I placed directly under /home/my_user_name were extracted; and none of the directories starting with . (dot) were extracted. I wonder if there is an incompatibility between tar in the 10.04 alpha and in 9.10 that causes the problem? Now all of my email and Basket data is gone. I'm freaking out right now. Is there anything I can do to get tar to give me back my data?
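
    Before assuming the data is gone, it is worth listing the archive (a sketch; the filename is taken from the post): if the dotfiles show up in the listing, the backup is intact and only the extraction step misbehaved.

        # if dotfiles appear here, the data is still in the archive
        tar -tzf my_user_name.tar.gz | grep -E '(^|/)\.' | head

        # extract from the command line instead of Nautilus
        tar -xzf my_user_name.tar.gz -C /home/my_user_name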

  • Diagnosing a BSOD involving USB

    - by David Ebbo
    [Running Win7 Ultimate 64-bit] My new HP Pavilion Elite HPE-450t has been plagued by BSOD crashes since I got it about 5 weeks ago. The crashes are somewhat rare, sometimes not occurring for 3 or 4 days. I have spent a lot of time trying to isolate the device that could be at fault, but I have seen crashes with only the keyboard and mouse plugged in (as USB devices), and I tried two sets of keyboard/mouse, so I'm running out of ideas. :( The WhoCrashed tool gave this info about my latest BSOD:

        crash dump file: C:\Windows\Minidump\121310-11887-01.dmp
        This was probably caused by the following module: usbport.sys (USBPORT+0x2DE4E)
        Bugcheck code: 0xFE (0x5, 0xFFFFFA8008F571A0, 0x80863B34, 0xFFFFFA80092F2510)
        Error: BUGCODE_USB_DRIVER
        file path: C:\Windows\system32\drivers\usbport.sys
        product: Microsoft® Windows® Operating System
        company: Microsoft Corporation
        description: USB 1.1 & 2.0 Port Driver
        Bug check description: This indicates that an error has occurred in a Universal Serial Bus (USB) driver.
        The crash took place in a standard Microsoft module. Your system configuration may be incorrect.
        Possibly this problem is caused by another driver on your system which cannot be identified at this time.

    I looked at http://msdn.microsoft.com/en-us/library/ff560407(VS.85).aspx, and for Parameter1 = 0x5 it says "A hardware failure has occurred due to a bad physical address found in a hardware data structure. This is not due to a driver bug". Should I conclude that it's a hardware issue in the machine itself, rather than a bad USB driver or USB device? Here is the minidump, in case someone can get more info out of it: http://ewt52q.blu.livefilestore.com/y1peS4Ce8nSK1SXghzMDoxDWXlaEu-EKCJsv25y8y5DXXIUzZ9U0_tYgFJXd939fykwa0zRmx98IW0PYG18GioqKAuARYjtspSA/121310-11887-01.dmp?download&psid=2
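
    For anyone who wants to dig into the dump directly, a sketch using the Debugging Tools for Windows (the install path of windbg.exe may differ):

        rem open the minidump
        windbg -z C:\Windows\Minidump\121310-11887-01.dmp

        rem then at the kd> prompt:
        rem   !analyze -v      verbose crash analysis, including the 0xFE parameters
        rem   lmvm usbport     version/timestamp of the implicated driver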

  • MySQL Master-Master Broken

    - by Recc
    I've inherited a MySQL master-master system, and I've noticed that the second master (let's call it the slave from now on, as it's running on a "slave" machine) stopped getting its DBs updated. I saw:

        Master:
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes

        Slave (with an error I truncated):
        Slave_IO_Running: Yes
        Slave_SQL_Running: No
        Last_Errno: 1062
        Last_Error: Error 'Duplicate entry '3' for key 'PRIMARY'' on [...]

    I don't know what caused it, considering we shouldn't be able to get a duplicate there. What's important is to resume normal operations. Right now I've run stop slave; on the Master and stop slave; on the Slave, because I saw that if I change records on the Slave, the changes do get propagated to the Master, which is in active use. How do I force a sync of EVERYTHING from master to slave without affecting data on the master, and then hopefully have the slave pick up replication as usual?

    UPDATE: OK, I tried deleting all tables on the slave; then it complained in that error section that the table doesn't exist. So I made a no-data dump of the Master and made sure I had only empty tables on the Secondary (slave). I ran start slave; on the slave, but now it's complaining about ALTER TABLE statements, for instance:

        Last_Errno: 1060
        Last_Error: Error 'Duplicate column name [...] Query: 'ALTER TABLE [...]

    How do I skip the ALTER statements? I just want to replicate the data and be done with it; my tables already have the latest changes, and now it's complaining about changes made after the replication ceased weeks ago. How do I reset the log or something?

    OUTSTANDING: Why would this start happening? The "Secondary" is propagating to the "Primary", but the "Primary" is not propagating to the "Secondary", and any fixes I tried left it in the same Yes-Yes / Yes-No state with the same Last_Error. I think around that time the server was taken off the network; could that confuse MySQL in some way?
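
    A sketch of the usual full-resync procedure (this assumes only the Master's data is current, that the tables are InnoDB so --single-transaction gives a consistent dump, and that file names are illustrative):

        -- 1) on the Master, from a shell: dump with the binlog position embedded
        --    mysqldump -u root -p --all-databases --master-data=1 --single-transaction > full.sql
        -- 2) on the Slave: stop replication, load the dump, restart
        STOP SLAVE;
        -- (load full.sql with the mysql client here)
        START SLAVE;
        SHOW SLAVE STATUS\G

        -- alternatively, to step over a single offending statement:
        STOP SLAVE;
        SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
        START SLAVE;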

  • Tracking down source of duplicate email messages in Outlook / Exchange environment

    - by Ken Pespisa
    I have a few users, all of them also BlackBerry users, who occasionally have duplicate emails generated from their "mailbox". I put mailbox in quotes because I'm not exactly sure where the duplicates are created. One of these users is in non-cached mode, the other in cached mode, and both experience the problem. In fact, the non-cached-mode user was originally experiencing the problem while in cached mode, and I made the switch a few weeks ago to attempt to solve the problem; today I discovered the issue still exists. I'm not sure whether the fact that they are BlackBerry users could be causing the problem at all. I don't see how, but I felt I should mention it anyway. Does anyone have ideas on how I might begin to troubleshoot this? I can see in the non-cached user's "Sent Items" that the message was sent only once, and I confirmed the message does not state that there was a conflict, which in fact makes sense because they are in non-cached mode. On the server, we have a mail journaling feature turned on for our third-party mail archiving system, and that system sees two sent messages. Likewise, the recipient does in fact have two messages in their inbox with consecutive message IDs ([email protected] and [email protected]). It would seem that the duplicates are generated on the client, but is there a way to tell for sure?
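
    If the Exchange version is 2007 or later, the message tracking log can show whether the store accepted one submission or two, which would settle the client-versus-server question; a sketch (substitute a real message ID from one of the duplicates):

        # Exchange Management Shell
        Get-MessageTrackingLog -MessageId "<id-from-duplicate>" |
            Select-Object Timestamp,EventId,Source,Sender,MessageSubject |
            Format-Table -AutoSize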

  • Wireless card on HP laptop not working

    - by D. Strout
    I just bought an HP Envy m6-1125dx online from Best Buy. When I got it home and started it up, the wireless card did not work well - at all. I could connect, but any real usage would cause the connection to start dropping every 30 seconds or so, and it would be really slow. Taking another look at the reviews on the Best Buy site, it seems only a few others had this problem, so I took it to my local Best Buy and exchanged it for another unit. Got it home again and the card had the same issues. Which leads to my dilemma. First: does this model have several different cards that it could come with? Mine is a Ralink RT5390R (on both units I received). If it does, then I can keep exchanging until I get a unit with a different card. I wouldn't ask this, except it seems weird that only a few people mentioned this issue, so I thought that might be one possibility. I looked in to replacing the card with a different one myself, but it seems that HP blocks certain wireless cards. However, some people reported success in replacing the card, and this site said it was only an issue on "older HP computer[s]". Can anyone confirm this? Finally, if that fails/will not work, does anyone know what I can get through Best Buy? I am concerned that they will not put any different card than the Ralink, and after two of those, I don't want that. Can I ask Best Buy support to use a different card? Can they even get another card from HP? I guess the base question is: should I attempt to replace the card myself (two days via Amazon to get a new card), should I try to get the laptop repaired through Best Buy (two - four weeks), should I go for a different model laptop from Best Buy, or should I try a different unit of the same model (three's the charm?).

  • iPhone Lag Terrible - SLOW - What's going on with the iPhone OS?

    - by Sam Schutte
    I've had my iPhone 3G for about a year now, and it seems like at least once a month it gets bogged down and gets slower and slower: horrible lag when typing, and going back to the home screen or opening an app can take 20 seconds. Has anyone else run into this and found "the" solution? What you always read on other boards is to reboot the handset (hold down home and the power button), but that doesn't improve anything for me. I've reinstalled the OS five times now, and I'm getting pretty sick of doing it so often. And I don't buy that it's a hardware issue, since it works fine for weeks after a fresh install. Anyone have a solution, or an idea of what specific actions cause this kind of evident data corruption (OS corruption?) and slowness? Note: I'm looking for specific things here. That is, has anyone done the research to see exactly what on the phone operating system is getting messed up to cause this lag (which is discussed all over the internet, with no working solutions)? I don't own a Mac, so I can't delve into the guts of the iPhone very well to see what's up with it. Some additional info: reboots (hold down power/home) and "sleeps" (slide off) do nothing; only fresh re-installs help. I only have about 15 apps installed; uninstalling apps is a common suggestion, but I'd hope that 15 isn't too many, and even with none installed it still gets hung up after a period of time. This phone is not jailbroken, and it is running the 3.0.1 release.

  • No clue for high load average on top

    - by Oz.
    We have several machines on Amazon EC2 of the type c1.xlarge with 16 cpus, running the Amazon AMI. Details on the machine: 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform, I/O Performance: High, API name: c1.xlarge. One of the machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command shows no hint of the cause of the load: CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st (see below), and about 1.5 GB of memory is free. Any idea what it could be, or where else we can check? Many thanks for the help.

        # top
        top - 07:57:42 up 4:18, 1 user, load average: 1.36, 1.45, 1.47
        Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
        Cpu(s): 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 7120092k total, 5644920k used, 1475172k free, 532888k buffers
        Swap: 0k total, 0k used, 0k free, 3463936k cached

        PID    USER    PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+     COMMAND
        1557   mysql   20  0   1829m  374m  6448  S  14.3  5.4   11:15.09  mysqld
        6655   apache  20  0   416m   49m   3744  S  9.3   0.7   0:04.85   httpd
        27683  apache  20  0   421m   54m   3708  S  9.0   0.8   0:00.99   httpd
        6682   apache  20  0   424m   57m   3788  S  8.3   0.8   0:03.81   httpd
        16816  apache  20  0   419m   51m   3760  S  4.3   0.7   0:04.09   httpd
        22182  apache  20  0   417m   50m   3756  S  1.7   0.7   0:06.34   httpd
        219    root    20  0   0      0     0     S  0.3   0.0   0:00.34   kworker/7:1
        699    root    20  0   0      0     0     S  0.3   0.0   0:00.40   kworker/3:1
        1      root    20  0   19376  1508  1212  S  0.0   0.0   0:00.29   init
        2      root    20  0   0      0     0     S  0.0   0.0   0:00.00   kthreadd
        3      root    20  0   0      0     0     S  0.0   0.0   0:00.71   ksoftirqd/0
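
    Because the load average includes tasks blocked in uninterruptible sleep, which never show up as CPU use in top, a quick sketch of two checks (the /proc stack read may need root):

        # "b" column = processes blocked in uninterruptible sleep
        vmstat 5 5

        # list D-state tasks and what they are waiting in
        for p in $(ps -eo state,pid | awk '$1=="D"{print $2}'); do
            echo "== pid $p =="; cat /proc/$p/stack 2>/dev/null
        done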

  • Why does my DSL modem now need a reboot each time for my laptop to connect?

    - by msorens
    I have a rather peculiar home networking issue. For some time my home network was purring along fine: I could turn on either of my laptops and it would quickly find and connect to my DSL modem (and thence the internet). Several days ago I unplugged my DSL modem for the first time in months. Upon turning it back on and waiting for the boot to finish, the lights on the panel indicated the DSL modem was fully operational, just as before. But that's not what happened, not at all. Now when I turn on my Win7 laptop, the network icon in my system tray shows a small starburst; hovering over it, the tooltip states "Not connected; connections are available". Clicking it lists several nearby networks, including my own network showing a strong signal. If I click to connect, it attempts a connection but then I get a dialog stating "Windows was unable to connect to MyNet." Turning wireless off and back on on my laptop makes no difference, and neither does running the network troubleshooter (which includes doing a repair on the network connection). The only remedy is to reboot the DSL modem (i.e. unplug it, wait a few seconds, then plug it back in). As soon as it goes online, my laptop finds it and connects properly. To add one more twist to the story: this happened to me once before, several months ago, and after a couple of weeks the situation resolved itself(!). Everything started working properly again, due to nothing I did. Final note: this problem only affects the wireless connection to the DSL modem. My desktop computer, connected via hardline to the DSL modem, connects fine when I turn it on. Any thoughts on why this is happening or how to fix it?

  • Why is Apache htdigest authentication failing in IE10 on Windows 8?

    - by Kevin Fodness
    One of our developers reported that for the past week or two, the htdigest authentication that we have set up on our test sites in Apache is not working in IE10 on Windows 8. It's fine in IE10 on Windows 7, and it's fine in Chrome on Windows 8. The specific behavior is: navigate to a site with htdigest authentication enabled, the username and password form pops up, enter the correct username and password, and the username and password box pops up again. Potentially useful information:

        All patches applied on the Windows 8 box
        No additional software on the Windows 8 box other than Outlook 2013 and a browser test suite (Chrome, Firefox, Opera, Chrome Canary, Opera Next)
        Win8 running in a virtual machine on Xen
        The same behavior can be replicated on Win8/IE10 on Browserstack.com
        Server running Ubuntu 10.10 with Apache 2.2.16

    This feels like a patch was applied to the Windows box that broke digest authentication for IE10 on Win8 (the box is configured for automatic updates). However, without knowing a specific date I can't necessarily nail this down. Has anyone else experienced this problem? EDIT: This problem only happens in the "Metro" interface, not when running IE10 in desktop mode. As of a few weeks ago, it worked fine even in the "Metro" interface.
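
    For comparison, a minimal digest setup of the kind described (a sketch, with paths and realm assumed; AuthDigestDomain is worth setting explicitly, since some IE versions have historically been picky about digest responses when it is absent):

        <Location />
            AuthType Digest
            AuthName "test"                  # realm must match the htdigest file
            AuthDigestDomain /
            AuthDigestProvider file
            AuthUserFile /etc/apache2/digest.passwd
            Require valid-user
        </Location>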

  • Windows 7 pc freezes for an indeterminate amount of time after unlocking

    - by pikes
    Not sure if this type of question is appropriate for this forum, but I've tried everything I can think of to solve this problem aside from a format/reinstall. I recently got a new work PC (Dell OptiPlex 755) with Windows 7 Professional x64, with standard developer software installed for .NET development: VS2008, VS2005, SQL Management Studio, Office 2007, etc. Recently I've been having this weird problem where, after I lock my PC, the screen stays black for a while after unlocking. I can Ctrl+Alt+Del and put my password in, but then it just goes black. The amount of time on the black screen seems to be related to the amount of time I am away from my PC: if I'm only away a few minutes, it takes about a minute to get to the desktop; if I'm away for an hour, it can take up to 15 minutes; and if I lock it and go home for the night, I have to restart my PC in the morning (I've let it sit for an hour after a night of being locked and nothing happened). It doesn't do it every time, but definitely the majority of the time. One weird thing I've seen is that if I remote into my machine before trying to log back in, it does not do it. I uninstalled all software back to the point when I remember it started happening, and it still does it. I was using this PC for a few weeks without this problem happening at all. Does anyone know what my next troubleshooting steps could be? My IT department tried to fix it by moving my old profile to another disk and having me log in, effectively recreating a profile from scratch, but that didn't solve it. As I said above, if this isn't the right forum for these types of questions, please let me know. Thanks in advance!

  • Backup Exec 10 - Network connection to the remote agent has been lost

    - by jherlitz
    Okay, so I have 4 remote offices, all running off of 3 Mb Ethernet connections. Two sites are part of a WAN, and two use their 3 Mb connections over a site-to-site tunnel. I am using Backup Exec 2010 and have the Remote Agent installed on all the remote servers. For the past few weeks, backups for the two sites running over the site-to-site tunnel have been failing with the following error message:

        "The network connection to the Backup Exec Remote Agent has been lost. Check for network errors"

    We used to run the site-to-site tunnel over a DSL connection; we have since changed to the 3 Mb Ethernet connection. I still have to find out whether it has been failing ever since we changed, or only recently. Backup Exec support is telling me it is a network issue, but my connection to the server is solid; we don't have any issues or outages. So I am baffled as to why this continues to fail, and why just those two sites. Any advice?
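
    If the default has not been changed, the Remote Agent listens on TCP port 10000, so one hedged check is whether that port stays reachable through the tunnel for the duration of a job (the server name below is a placeholder):

        rem from the media server, test basic reachability of the agent port
        telnet remote-office-server 10000

        rem watch for drops while a backup job runs
        ping -t remote-office-server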

  • [Mac OS X] Terratec Cinergy Hybrid T USB XS is not recognized anymore

    - by Gabble
    I have used the Terratec Cinergy Hybrid T USB XS for years now, alongside the Elgato EyeTV software. I am a happy and completely satisfied user! A couple of weeks ago the USB stick stopped working on my MacPro1,1 (OS version 10.6.3): EyeTV does not see any device attached, and the green LED on the stick stays off.

    1) It is not a USB port fault: I have unplugged every other USB/FireWire device and tried different USB ports, to no avail (all other USB devices work as expected on any port).
    2) I have completely uninstalled the EyeTV software, including preferences and system daemons/extensions, rebooted, and reinstalled the latest EyeTV. No way.
    3) Reset the PRAM. Nope.
    4) Checked the Apple System Profiler - USB: no device attached; the MacPro does not see it at all.

    I need to say that: a) the device worked like a charm even with the latest 10.6.x (so an OS upgrade is not the cause); b) I have plugged the Terratec Cinergy Hybrid T USB XS into my MacBook5,1, where EyeTV is not and never was installed: the green LED on the stick turns on, the Growl bubble pops up, and the device is perfectly recognized by the system. Apple System Profiler says (translated from Italian):

        Cinergy Hybrid T USB XS (2882):
        Product ID: 0x005e
        Vendor ID: 0x0ccd
        Version: 1.10
        Serial Number: 061102005755
        Speed: Up to 12 Mb/sec
        Manufacturer: TerraTec Electronic GmbH
        Location ID: 0x04100000
        Current Available (mA): 500
        Current Required (mA): 500

    At this point I am pretty sure the Terratec stick is not damaged and there is something wrong with my MacPro. I kindly ask you: is there a way to force my MacPro to recognize the USB device? What can I check? Is there something that caches USB connections that can be reset? An OS reinstall would be the very last resort for me. Thanks in advance for any help you can offer!
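
    Two further checks from Terminal, as a sketch (an SMC reset on a Mac Pro of that era amounts to shutting down and unplugging the power cord for 15 seconds or more):

        # dump the USB device tree; if vendor ID 0x0ccd is absent,
        # the hardware was never enumerated at all
        system_profiler SPUSBDataType

        # any kernel messages logged when the stick is plugged in
        dmesg | tail -20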

  • MSSQL error: consistency-based I/O error - can it be caused by an MSSQL or OS problem?

    - by Philipp Keller
    This is what I saw in the Windows error log:

        SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0x19fedd20; actual: 0x19fed5e3). It occurred during a read of page (1:1764) in database ID 6 at offset 0x00000000dc8000 in file 'D:\mssql\local_repository_pbdiffimport.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

    I ran DBCC CHECKDB, which told me I should restore with the REPAIR_ALLOW_DATA_LOSS option, so I eventually ran:

        DBCC CHECKDB (my_db_name, REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS

    But that resulted in about 2,000 rows being lost. I restored a backup, but now I'm afraid this will happen again, since we already had a consistency problem in the same database about two weeks ago; back then it happened in an index (recreating the indexes solved the problem). We have investigated the disks: the RAID 5 looks good, no errors, and none of the disk-check utilities have revealed any hardware problem. Can this be caused by the OS (Windows Server 2003) or by MSSQL (SQL Server 2005)?
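
    For completeness, when a clean full backup and an unbroken log-backup chain exist, a single damaged page can be restored instead of running REPAIR_ALLOW_DATA_LOSS; a sketch, with the database and backup file names assumed from the error message:

        -- the failing page from the error message is (1:1764)
        RESTORE DATABASE local_repository_pbdiffimport
            PAGE = '1:1764'
            FROM DISK = 'D:\backup\local_repository_pbdiffimport.bak'
            WITH NORECOVERY;

        -- then apply subsequent log backups and recover
        RESTORE LOG local_repository_pbdiffimport
            FROM DISK = 'D:\backup\local_repository_pbdiffimport_log.trn'
            WITH RECOVERY;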

  • Recover original email from Sendmail log

    - by Xavi Colomer
    I have a website whose contact form has been failing silently for two weeks (WordPress + Contact Form 7). Apparently updating Contact Form 7 made the assigned email address fail ([email protected]); I also tested with [email protected], which also failed, until I tried Gmail and it finally worked. Apparently @telefonica.net and @me.com addresses are not working with this version of the plugin, but I have to investigate the cause. I found the logs of the lost emails, and I would like to know if I can recover the sender or the content of the original messages.

        May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: from=www-data, size=3250, class=0, nrcpts=1, msgid=<[email protected]_web.com>, relay=www-data@localhost
        May 24 23:41:11 localhost sm-mta[27655]: s4P3fBdA027655: from=<[email protected]>, size=3359, class=0, nrcpts=1, msgid=<[email protected]_web.com>, proto=ESMTP, daemon=MTA-v4, relay=localhost.localdomain [127.0.0.1]
        May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=33250, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s4P3fBdA027655 Message accepted for delivery)
        May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:00:01, xdelay=00:00:01, mailer=esmtp, pri=123359, relay=tnetmx.telefonica.net. [86.109.99.69], dsn=5.0.0, stat=Service unavailable
        May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: s4P3fCdA027657: DSN: Service unavailable
        May 24 23:41:12 localhost sm-mta[27657]: s4P3fCdA027657: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30000, dsn=2.0.0, stat=Sent

    Thanks
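
    The log shows the remote MX rejected the message ("Service unavailable"), so the body is usually gone from the spool, but two places may still hold it; a sketch (paths assume a stock Debian/Ubuntu sendmail layout):

        # leftover queue files for the message (qf* = headers, df* = body)
        ls -l /var/spool/mqueue/ | grep -i 027655

        # the bounce (DSN) was delivered locally and normally quotes the
        # original headers, sometimes the body; check the local mailbox
        # of the sending user (www-data here)
        sudo less /var/mail/www-data

        # full trail for the original envelope
        grep s4P3fBdA027655 /var/log/mail.log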

  • Wildcard DNS entry to match lang subdomain

    - by Adam Benayoun
    Hey, we have a website www.example.com pointing to x.x.x.1 and a system of multiple minisites, all with subdomains of example.com pointing to x.x.x.2. Basically what we have in place is a wildcard DNS entry that matches any possible subdomain; once a request reaches x.x.x.2, the Apache vhost intercepts it and redirects it to a PHP script, which in turn knows which minisite to serve. On www.example.com, however, we serve content that is translated into several languages. Until a few weeks ago you could switch languages by clicking on a flag, and you'd be served the translated content. The only problem is that the URL wouldn't change, and SEO-wise this isn't the best solution. Now, I cannot change the way subdomains are handled (being directed to x.x.x.2), since we have hundreds, if not thousands, of minisites live. I have to come up with a solution to have language.example.com resolve to x.x.x.1, and then a rewrite rule that rewrites the language subdomain into a URL, in order to pass the language parameter to example.com. One solution is to list all possible languages as DNS entries right before the wildcard DNS entry. The other solution, which I am almost sure is not feasible, is to have some kind of regex in a DNS entry matching all two-letter subdomains (en|es|fr|cn|cl, etc.). Any ideas?
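
    Standard DNS has no regex matching, so the first option is the workable one: explicit records for each language ahead of the wildcard, plus a rewrite on the www server. A sketch (the language list and parameter name are illustrative):

        # zone file: one record per language, before the wildcard
        # en.example.com.  IN  A  x.x.x.1
        # es.example.com.  IN  A  x.x.x.1

        # vhost on x.x.x.1:
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(en|es|fr|cn|cl)\.example\.com$ [NC]
        RewriteRule ^(.*)$ /$1?lang=%1 [QSA,L]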

  • SQL Server becomes slow after restart

    - by Tobi DM
    I already posted this one on Stack Overflow, but someone gave me the hint that I might have more luck on Server Fault. We use SQL Server 2005 on Windows Server 2008. The server has 48 GB RAM; SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app beside SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server, performance is great. The server grabs the 40 GB of RAM it is allowed to and then runs fast as hell. But after about 4 weeks the system becomes slower and slower; the execution times of statements (seen in the Profiler) rise slowly. But I cannot see anything going wrong on the server: CPU usage is at about 20%; I/O also seems to be no problem; the process monitor does not show any strange apps or anything like that; the event log has no interesting messages; there are no open transactions or blockings to see; and we do not use cursors in our app. We have already tried the following things, without effect: dropped the caches using the statements

        DBCC FREEPROCCACHE
        DBCC FREESYSTEMCACHE('ALL')
        DBCC DROPCLEANBUFFERS

    restarted the app server we are using, and restarted the SQL Server service. Nothing helped except restarting the whole server. Any ideas?
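
    A sketch of what to trend across those four weeks: if Page life expectancy sinks as the slowdown builds, the buffer pool is being churned, and the second query shows which statements actually got slower.

        -- buffer pool health
        SELECT counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN ('Page life expectancy', 'Lazy writes/sec');

        -- top statements by average elapsed time
        SELECT TOP 10
               qs.total_elapsed_time / qs.execution_count AS avg_elapsed,
               qs.execution_count, st.text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY avg_elapsed DESC;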

  • Is my webserver being abused for banking fraud?

    - by koffie
    For the past few weeks I've been getting a lot of 403 errors from Apache in my log files that seem to be related to a bank-fraud scheme. The relevant log entries look like this (the IP 1.2.3.4 is one I made up; I did not modify the rest of each line):

        www.bradesco.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 427 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.bb.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.santander.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.banese.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"

    The LogFormat I use is:

        LogFormat "%V:%p %U %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\""

    The strange thing is that all these domains are domains of banks, and 3 out of the 4 are also on the list for the bank-fraud scheme described at http://www.abuse.ch/?p=2925. I would really like to know whether my server is being abused for bank fraud or not. I suspect not, because it's returning 403 to all the requests, but any extra checks I can do to ensure that my server is not being abused are welcome. I'm also curious how the "bad guys" expected my server to behave: are they expecting my server to act as a proxy to hide the IP of the fake site, or are they expecting my server to actually serve the fake banking website? And is the IP 1.2.3.4 more likely to be the IP of a victim or of a bad guy? I suspect a bad guy, because it's quite unlikely that a real person would visit 4 bank sites in one second. If it's a bad guy, I'm very curious what he is trying to do.
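
    Those Host headers hit whatever answers first on the IP, so one way to make the refusal explicit is a catch-all default vhost that matches any unknown Host value (a sketch in Apache 2.2 syntax):

        # the first vhost loaded is the default; it catches every Host:
        # value that no real site claims
        <VirtualHost *:80>
            ServerName default.invalid
            <Location />
                Order deny,allow
                Deny from all
            </Location>
        </VirtualHost>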

  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file-hosting site that serves a large amount of semi-randomly accessed files. The setup is as follows: a high-horsepower front-end + DB server that also does encoding for files that need encoding; a fresh file server, which stores newly uploaded content that is probably (and usually) rapidly accessed, with 500 GB of RAIDed SSD storage that can push over 3 Gbit of traffic; and 3 cheap node servers, each containing 2 x 750 GB SATA drives in RAID 1, where files older than 2 weeks are archived from the SSD server mentioned above. Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc.). Where I have the problem is this: I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com, via a different modsec script with a different passphrase. That all works fine and dandy... except the cheap node servers are all I/O-bound, so accessing the files on them via a different, unsaturated network makes no difference, since they cannot read off the drives fast enough. What would be a good way to make files on the site accessible via 2 different network routes, one of which will be saturated (the "free" network), while all other files are on an unsaturated "premium" network?

  • SSL timeout on some sites, across all browsers, on Mac OS X Snow Leopard

    - by dansays
    For the past several weeks, I've been receiving "Error 7 (net::ERR_TIMED_OUT): The operation timed out" when I attempt to connect to either Twitter or PayPal via SSL. I get this specific error in Google Chrome, but the same problem occurs in both Safari and Firefox. Other sites work fine, and other computers on my network can access these two sites. I have no firewall settings that would prevent me from accessing these sites over port 443. I notice that both Twitter and PayPal have "VeriSign Class 3 Extended Validation SSL CA" certificates; it is unclear whether this is related to the problem. In an effort to troubleshoot, I attempted to open the test sites referenced on VeriSign's root certificate support page, which worked fine. Just to be sure, I downloaded the root package file and installed all the included VeriSign certificates. No joy. I feel like I've hit a dead end. Any ideas? Update the first: I also cannot connect to FedEx.com, which also has a VeriSign Class 3 Extended Validation cert. Update the second: Aaaaaaand it fixed itself. I did nothing. Or I did something that worked, but in a delayed fashion. Frustrating, but a win is a win. I'll take it.
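
    For future reference, should it recur, a sketch of isolating where the connection stalls from the affected machine:

        # does the TLS handshake complete, and whose certificate comes back?
        openssl s_client -connect twitter.com:443

        # plain TCP reachability on 443, for comparison
        nc -v twitter.com 443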

  • Differential backup missing moved folders (flawed archive attribute logic)

    - by Max
    Recently I've discovered that my backup system is flawed: there are situations where various files/folders are missed. I back up from a local disk to a network NAS. I use Cobian Backup, and I have set up the backup software to create one full backup every week and one differential backup every day. Now, the backup software (to my knowledge, any backup software works this way) decides which files go into the differential backup by looking at the file's archive attribute: if the attribute is set, the file goes into the backup. When you move a file to a new location on a Windows system, the archive attribute gets set and the file is included in the backup, and that's fine... but when you move an entire folder, no archive attribute is set, neither on the folder nor on any of the files inside it, so the moved folder isn't included in the differential backup! So, if you have a full backup plus a differential backup and you moved folders around, it's impossible to reconstruct the original file/folder structure from the full + differential backup, because the backup software didn't include the moved folders in the differential backup. So my differential backups are useless... Why does Windows set the archive attribute when moving a file, but not when moving a folder? How can I deal with this issue? Is there a way to create a differential backup that works as it's supposed to? Doing a full backup every day is not practical, because the data changes by about 0.1% a day (by using differential backups I can keep 4 weeks of file history without using too much disk space).
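
    A workaround, sketched with the stock attrib tool (the path is a placeholder): re-set the archive bit across the tree after moving folders, so the next differential picks everything up.

        rem +a sets the archive attribute; /s recurses into subdirectories,
        rem /d applies the change to the folders themselves as well
        attrib +a "D:\data\*" /s /d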

  • Diagnosing Logon Audit Failure event log entries

    - by Scott Mitchell
    I help a client manage a website that runs on a dedicated web server at a hosting company. Recently we noticed that over the last two weeks there have been tens of thousands of Audit Failure entries in the Security event log with Task Category of Logon. These were coming in about every two seconds but, interestingly, stopped altogether as of two days ago. In general, the event description looks like the following:

        An account failed to log on.
        Subject:
            Security ID: SYSTEM
            Account Name: ...The Hosting Account...
            Account Domain: ...The Domain...
            Logon ID: 0x3e7
        Logon Type: 10
        Account For Which Logon Failed:
            Security ID: NULL SID
            Account Name: david
            Account Domain: ...The Domain...
        Failure Information:
            Failure Reason: Unknown user name or bad password.
            Status: 0xc000006d
            Sub Status: 0xc0000064
        Process Information:
            Caller Process ID: 0x154c
            Caller Process Name: C:\Windows\System32\winlogon.exe
        Network Information:
            Workstation Name: ...The Domain...
            Source Network Address: 173.231.24.18
            Source Port: 1605

    The value in the Account Name field differs: above you see "david", but there are entries with "john", "console", "sys", and even ones like "support83423". The Logon Type field indicates that the logon attempt was a remote interactive attempt via Terminal Services or Remote Desktop. My presumption is that these are brute-force attacks attempting to guess username/password combinations in order to log into our dedicated server. Are these presumptions correct? Are these types of attacks common? Is there a way to help stop them? We need to be able to access the desktop via Remote Desktop, so simply turning off that service is not feasible. Thanks
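
    Yes: this pattern (Logon Type 10, rotating usernames, a single source address) is a textbook RDP brute-force attempt, and it is very common against any server with port 3389 exposed. One common mitigation, sketched here with a placeholder address range, is to scope the built-in Remote Desktop firewall rule to known source IPs (rule name as on Windows Server 2008; a VPN or RD Gateway achieves the same more flexibly):

        netsh advfirewall firewall set rule name="Remote Desktop (TCP-In)" new remoteip=203.0.113.0/24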
