Search Results

Search found 8785 results on 352 pages for 'bug reporting'.


  • Some apps very slow to load on Windows 7, copy & paste very slow.

    - by Mike
    Hello, I searched here and couldn't find a similar issue to mine, but apologies if I missed it. I've searched the web and no one else seems to be having the same issue either. I'm running Windows 7 Ultimate 64-bit on a pretty high-spec machine (well, apart from the graphics):

      Asus M4A79T Deluxe
      AMD Phenom II 965 Black Edition (quad core, 3.4 GHz)
      8 GB Crucial Ballistix DDR 1333 MHz RAM
      80 GB Intel X25 SSD for the OS
      500 GB mechanical drive for data
      ATi Radeon HD 4600 series PCI-e
      Be Quiet! 850 W PSU

    I think those are all the relevant stats; if you need anything else let me know. I've updated chipset, graphics and various other drivers, all to no avail, the problem remains. I have also unplugged and replugged every connection internally and cleaned the RAM edge connectors.

    The problems: VLC and CDBurnerXP both take ages to load, I'm talking 30 seconds and 1 minute respectively, which is really not right. Copy and paste from an OpenOffice spreadsheet into Firefox, for example, is really, really slow; I'll have pressed Ctrl+V 5 or 6 times before it actually happens. If I copy and then wait 5 to 10 seconds or so it'll paste first time, so it's definitely some sort of time lag.

    Command and Conquer - Generals: Zero Hour: when playing it'll run perfectly for about 10 or 20 seconds, then it'll just pause for 3 or 4, then run for another 10 or 20 seconds and pause again, and so on. I had the Task Manager open on my 2nd monitor whilst playing once and I noticed it was using about 25% of the CPU, pretty stable, but when the pause came another task didn't shoot up to 100% like others on the web have been reporting (similar but not the same as my issue, often svchost.exe for them); instead it dropped to 2 or 3% usage, then went back up to 25% when it started playing properly again. Very odd! But it gets even odder...

    I had a BSOD and reboot last week; when it rebooted the problem had completely gone. I could play C&C to my heart's content, both the other apps loaded instantly, and copy and paste worked instantly too. I did an AVG update earlier this week which required a reboot; I rebooted and the problem's back. I don't think it's AVG related though, I think it was just coincidence that that's the app that required a reboot. I think any reboot would have brought the issue back. A number of reboots later and it hasn't gone away again.

    If anyone could make any suggestions as to the likely cause and solution to these issues I'd be most grateful; it's driving me nuts! Thanks, Mike.

    Read the article

  • Exchange 2007 Standard Edition

    - by Phrontiste
    We have: Exchange 2007 Standard Edition on an IBM System X3650, 2 x Intel Xeon 5430 2.66 GHz, Version 8.1 Build 240.6, with the Mailbox, Hub Transport and Client Access roles installed on one box. Total number of mailboxes: 110 - 130.

    6 physical disks:
      Disk 0,1 (68 GB) = RAID-1, OS partition (C: partition)
      Disk 2,3 (279 GB) = RAID-1, Exchange database (First and Second Storage Groups) (D: partition)
      Disk 4,5 (68 GB) = RAID-1, Exchange transaction logs (E: partition)

    Setup:
      Storage Groups: D:\First Storage group\Mailbox database.edb
      Storage Groups: D:\Second Storage Group\Public Folder Database.edb
      Transaction Logs: E partition

    Problem 1: On our D partition (mailbox database partition) the total size is 279 GB and the free space remaining is 64.7 GB, yet when I select the First Storage Group and Second Storage Group folders and right-click Properties they report a size of 165 GB. The mailbox database reports a size of 157 GB when right-clicked for Properties, whereas the size displayed in the folder is 164,893,456 KB. So we are missing around 50-54 GB, and there is nothing else on these drives, no page file, nothing at all. The partition housing the transaction logs is reporting its sizes accurately. Any suggestions / fixes on the above?

    Problem 2: As you may have already read in Problem 1, the size of the mailbox database is 157 GB (or 164 GB as reported), which is not recommended.
      a) What would you suggest we do to divide mailboxes into storage groups on this same server?
      b) How would we move mailboxes into different storage groups?
      c) Is this the information store size? (Am I right in thinking that this is not recommended?)
      d) Having multiple storage groups with one mailbox DB in each, would that reduce the size of the information store?
      e) Any suggestions / how-to to reduce the size of the information store?

    We didn't install this, we have inherited it. What other recommendations can you make to keep us better prepared for any server disaster? We are backing up with Yosemite Backup to an RD1000 (320 GB) at the moment, which is backing up successfully and flushing the logs daily. We haven't done a test restore YET. I have tried to provide as much info as possible; please let me know if you need further info. Also, we haven't yet faced any problems in mail flow or access speeds, everything is working fine; we have two to five people accessing OWA or Outlook via VPN only. Thanks for your time to read the above - will look forward to your expert suggestions.

    Read the article

  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-Port PCI-X to SATA RAID Controller with 12 7200 RPM 1.5 TB disks and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet Controller. uname -a output: 2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux

    The server is a file server running Nginx with the stub status module enabled, so I can see the current number of connections. The problem presents itself when I have a high number of simultaneous connections in a writing state, usually around 350; at this very moment it's at 590 and the server is almost unusable and stuck at 230 Mbit/s. If I run top and hit 1 to see per-core CPU usage, all 4 cores are at around 99% IO wait; if I run iotop, the nginx workers are the only processes producing any read load, currently at around 25 MB/s. I have each of the workers bound to their own core.

    Initially I figured it was just the disks being bugged, but I've run fsck and smartmontools checks and found no errors. I also ran an iozone test, the result of which you can see here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa

    Additionally, when the number of connections is low I have no problem getting a good speed: if I wget over the local network it easily hits 60 MB/s. Right now I tried putting a file in /dev/shm, then symlinked a file from the public dir to it and used wget over the local network, and only got 50 KB/s. Also, if I try to cp /dev/shm/test /root/test it quickly copies around 740 MB and then slows down HEAVILY, again with iotop reporting 99% iowait.

    I'm not really sure how to go about figuring out what the problems are. It could be a natural disk limitation, but then the file from /dev/shm ought to transfer, so it seems there's a network limit; but that's fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required then let me know and I'll try to get it. Thanks.
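    As a starting point for separating a device-level bottleneck from an nginx/TCP problem, a rough, hedged sketch of what to watch during a stall (assuming the sysstat and blktrace packages are installed and the array shows up as /dev/sda):

        # Per-device IO statistics every 5 seconds: watch %util and await.
        # A device pinned near 100% util with high await points at the disk/RAID
        # layer rather than the network stack.
        iostat -dxm 5

        # Per-process IO in batch mode, so it can be captured to a file mid-stall.
        iotop -obn 3

        # Block-layer trace for one device, for a closer look at request latency.
        blktrace -d /dev/sda -o - | blkparse -i -

    If %util stays low while iowait stays high, the controller, scheduler or filesystem journal is a more likely suspect than the spindles themselves.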

    Read the article

  • Email server can send internal, but messages never arrive at external recipients

    - by Chase Florell
    I'm running MailEnable on my server, and have been for many years. Recently we had an attack on our server, and I was able to close the hole. Since then, our mail server doesn't seem to be sending mail out.

    If I send an email from myself to another account hosted on the server, the email arrives as expected. If I send an email from my gmail account to my business account, the email also arrives as expected. The problem comes when I send from my business account to an external domain. I tried the following: Gmail.com, Hotmail.com, Shaw.ca. When I send to any of the above, the message leaves my client as expected, the logs appear to accept and forward on the message, the SMTP outbound queue is empty, and the message never arrives.

    I have checked our domain with mxtoolbox.com and senderbase.org, and neither of them is reporting any problems with our domain. I have ensured that port 25 is open (along with the other standard ports). Here is one of the log entries from the SMTP connector:

      11/05/13 12:10:00 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 220 mx1.example.com ESMTP MailEnable Service, Version: 6.81--6.81 ready at 11/05/13 12:10:00 0 0
      11/05/13 12:10:00 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 EHLO EHLO ASSP.nospam 250-mx1.example.com [127.0.0.1], this server offers 6 extensions 159 18
      11/05/13 12:10:00 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 EHLO EHLO ASSP.nospam 250-mx1.example.com [127.0.0.1], this server offers 6 extensions 159 18
      11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 AUTH AUTH LOGIN 334 VXNlcm5hbWU6 18 12
      11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 AUTH {blank} 334 UGFzc3dvcmQ6 18 26 [email protected]
      11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 AUTH Y29sb25lbGZhY2U= 235 Authenticated 19 18 [email protected]
      11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 MAIL MAIL FROM:<[email protected]> 250 Requested mail action okay, completed 43 31 [email protected]
      11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 RCPT RCPT TO:<[email protected]> 250 Requested mail action okay, completed 43 35 [email protected]
      11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 DATA DATA 354 Start mail input; end with <CRLF>.<CRLF> 46 6 [email protected]

    Here are the headers of the sent message:

      X-Assp-Version: 1.7.5.7(1.0.07) on ASSP.nospam
      X-Assp-ID: ASSP.nospam 78601-04523
      X-Assp-Intended-For: [email protected]
      X-Assp-Envelope-From: [email protected]
      Received: from [10.10.1.101] ([68.147.245.149] helo=[10.10.1.101]) with IPv4:587 by ASSP.nospam; 5 Nov 2013 12:10:00 -0700
      From: Chase Florell <[email protected]>
      Content-Type: text/plain
      Content-Transfer-Encoding: 7bit
      Subject: Test Message
      Message-Id: <[email protected]>
      Date: Tue, 5 Nov 2013 12:10:18 -0700
      To: Chase Florell <[email protected]>
      Mime-Version: 1.0 (Mac OS X Mail 7.0 \(1816\))
      X-Mailer: Apple Mail (2.1816)

    Where else can I check to see if there is something broken? What could cause a problem like this, whereby the message appears to send, but never arrives and never returns a bounce?
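    Since the logs show the message being accepted locally but never reaching a remote MX, one hedged check (from the server itself, or any machine behind the same outbound IP, and assuming telnet and nslookup are available) is to test raw outbound port-25 connectivity yourself; the MX host name below is only whatever the lookup happens to return:

        # Find a destination MX for one of the failing domains.
        nslookup -type=mx gmail.com

        # Try to open an SMTP session to it directly on port 25.
        # If this times out here but works from another network, outbound
        # port 25 is probably being blocked or intercepted upstream of the server.
        telnet gmail-smtp-in.l.google.com 25

    A post-attack firewall rule or an ISP-level port 25 block would produce exactly this "sent but never arrives, no bounce" symptom.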

    Read the article

  • Vagrant - Failed login, ssh not set up

    - by motleydev
    This question is twofold, because somewhere in my attempts to solve the problem I created a new one.

    First: I was trying to vagrant up using a Vagrantfile based on the standard hashicorp/precise32 box. Everything worked up until "default: SSH auth method: private key", where it would eventually time out. Enabling the gui in the Vagrantfile showed that the machine never actually logs in. I can use the standard user/pass and log in from that point, but the vagrant up process still remains at that prior status. Here's where my understanding might be a little dim: I've tried setting the auth method from the insecure_pass_key to my root ~/.ssh/id_rsa or whichever one I wanted to use. I'm not entirely sure where to put a copy of the public key or my authorized keys file. I've got a .vagrant.d folder in my user dir (I'm on OSX) which seems to contain the box images, and a .vagrant folder in the directory with the Vagrantfile which seems to contain the specific machine I am building off of. I've tried poring over the docs and forums but I seem to be missing a key concept here.

    And now: after a host of tips/tricks such as rolling back my VirtualBox install and uninstalling/reinstalling Vagrant and VirtualBox several times, when I try to run vagrant ssh-config with a Vagrantfile based on the same hashicorp/precise32 it says that the box is not enabled for SSH. Specifically, this error:

      The provider for this Vagrant-managed machine is reporting that it is not yet ready for SSH. Depending on your provider this can carry different meanings. Make sure your machine is created and running and try again. Additionally, check the output of `vagrant status` to verify that the machine is in the state that you expect. If you continue to get this error message, please view the documentation for the provider you're using.

    So now I am slightly up a creek. Any help would be appreciated, if not just clarifying a concept.

    Some pertinent info: I'm on OSX Mavericks. Due to the fact I'm running a dual-HD system with system files on one HD and user files on another, my permissions are a little wonky and VBoxManage will only let me run commands via sudo - not sure if it's pertinent, but maybe. I have no idea what I'm doing. That part is perhaps more important.
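    A hedged sequence of checks that often narrows this kind of thing down (assuming VirtualBox is the provider and the box really is hashicorp/precise32):

        # Compare what Vagrant and VirtualBox each think the machine state is.
        vagrant status
        VBoxManage list runningvms

        # Once the machine is reported as running, see exactly which key,
        # user and port Vagrant intends to use.
        vagrant ssh-config

        # Start over from a clean slate with a freshly downloaded box and
        # full debug output, to rule out a corrupted box or stale .vagrant state.
        vagrant destroy -f
        vagrant box remove hashicorp/precise32
        vagrant up --debug

    This is only a starting point; host DNS problems, VirtualBox guest additions and the sudo-only VBoxManage situation mentioned above can all produce the same "not yet ready for SSH" message.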

    Read the article

  • Cannot access website from inside network

    - by musclez
    I have a website running on my internal network, available at the example IP 192.168.1.5. When I type this into the browser, it redirects to my domain name, i.e. "example.com", and gives me Error code: ERR_CONNECTION_REFUSED. Any other machine inside the network can access the website, and the website is also accessible from outside the network. Other services from the server, like file sharing or FTP, are available to all machines in the network, including the one I'm having HTTP issues with.

    The issue may be linked to a proxy service, but to my understanding the service has been completely disabled and its executables have been uninstalled from the machine. I am wondering if there is some residual proxy information remaining on the machine that limits the connection. I'm fairly positive that "example.com" is what is being blocked by the local machine, and not an IP address being blocked or a faulty connection. When I examine the hosts file, there are no redirects to the local machine for "example.com". There was a rule, as on my other machines within the network:

      192.168.1.5 example.com

    but I have since removed that for troubleshooting purposes. What intrigued me is that when I use the actual IP, the IP address will redirect to the domain in the browser and THEN say ERR_CONNECTION_REFUSED.

    Server-side results. The server logs are reporting this:

      example.com ::1 - - [Date & time] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Unix) (internal dummy connection)"

    However, this seems to be irrelevant, as it is not triggered when I try to connect to the server from the specified machine.

    Fiddler results:

      Host: *example.com*
      Proxy-Connection: keep-alive
      Chrome-Side [Fiddler] The connection to 'example.com' failed. Error: ConnectionRefused (0x274d). System.Net.Sockets.SocketException No connection could be made because the target machine actively refused it 01.23.45.67:80

    01.23.45.67:80 would be the external IP, which the server and the machine in question both share. I've done some reading into 0x274d and it comes back with .NET web.config information; I am still at a loss as to what to do with that. I have WireShark running as well. There is a lot of sensitive information in the readout and I'm not sure what to extract from it. Either way, if it helps, I can access that information if anyone would like me to. Thanks for the help!
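    One hedged way to separate the DNS/hosts side from the actual TCP listener is to pin the host name to specific addresses and compare; this assumes curl is available on the affected machine (e.g. via Git Bash or Cygwin), and 203.0.113.10 below is only a stand-in for the real external IP:

        # Talk to the LAN address directly, bypassing DNS and the hosts file entirely.
        curl -v --resolve example.com:80:192.168.1.5 http://example.com/

        # Same request, forced to the external (WAN) address.
        # If this one is refused while the first works, the culprit is likely
        # hairpin NAT (the router not looping external-IP traffic back inside),
        # not a proxy leftover on the client.
        curl -v --resolve example.com:80:203.0.113.10 http://example.com/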

    Read the article

  • how to debug "deep" crashes in Android?

    - by eerok512
    Hi all, I've been trying to debug an Android crash that is occurring without a Java stack trace. Java stack trace bugs are very easy for me to fix, but this bug seems to be crashing inside the "NDK", or whatever the deep internals of Android are called. I've made no modifications to the NDK, btw; I just don't know what else to call that layer, hehe. Anyway, I'm mainly looking for advice on deep-debug methods rather than help with this specific problem, because I doubt I can post all the source code involved. So really I just need to know how to set breakpoints at the deep layers, or whatever other methods there are to trace deep crashes to their source; I will briefly describe the bug and then post a LogCat.

    I have an app with 7 Activities:
      Activity_INTRO
      Activity_EULA
      Activity_MAIN
      Activity_Contact
      Activity_News
      Activity_Library
      Activity_More

    INTRO is the initiating one; it fades in some company logos. After displaying them for a set time it jumps to the EULA activity; after the user accepts the EULA, it jumps to MAIN. MAIN then creates a TabHost and populates it with the 4 remaining activities.

    Now here's the thing: when I click on, say, the More tab of the TabHost, the app pauses for a few seconds and then hard-crashes. No Java stack trace, but an actual ASM-level trace with the registers, IP and stack. The same thing occurs no matter which tab I select (Contact, News, Library, More); all of them crash with the same hard crash. If however I set the manifest to start the app at Activity_MAIN, bypassing the INTRO and EULA, then these crashes do not occur. So something is lingering from those opening activities that is somehow hosing the TabHost'ed activities, and I'm wondering what the hell that could be, because I'm using finish() on those activities when they need to jump. In fact, here is how I'm doing it; let me know if you see any bugs.

    When jumping from INTRO to EULA I do:

      //Display the EULA
      Intent newIntent = new Intent (avi, Activity_EULA.class);
      startActivity (newIntent);
      finish();

    and EULA to MAIN:

      Intent newIntent = new Intent (this, Activity_Main.class);
      startActivity (newIntent);
      finish();

    Anyway, here is the hard-crash log. Please let me know if there is some way I can reverse engineer either /system/lib/libcutils.so or /system/lib/libandroid_runtime.so, because I think the crash is happening in one of them; I think it's happening in libandroid_runtime, in fact.
anyway on to the log: 12-25 00:56:07.322: INFO/DEBUG(551): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 12-25 00:56:07.332: INFO/DEBUG(551): Build fingerprint: 'generic/sdk/generic/:1.5/CUPCAKE/150240:eng/test-keys' 12-25 00:56:07.362: INFO/DEBUG(551): pid: 722, tid: 723 >>> com.killerapps.chokes <<< 12-25 00:56:07.362: INFO/DEBUG(551): signal 11 (SIGSEGV), fault addr 00000004 12-25 00:56:07.362: INFO/DEBUG(551): r0 00000004 r1 40021800 r2 00000004 r3 ad3296c5 12-25 00:56:07.372: INFO/DEBUG(551): r4 00000000 r5 00000000 r6 ad342da5 r7 41039fb8 12-25 00:56:07.372: INFO/DEBUG(551): r8 100ffcb0 r9 41039fb0 10 41e014a0 fp 00001071 12-25 00:56:07.382: INFO/DEBUG(551): ip ad35b874 sp 100ffc98 lr ad3296cf pc afb045a8 cpsr 00000010 12-25 00:56:07.552: INFO/DEBUG(551): #00 pc 000045a8 /system/lib/libcutils.so 12-25 00:56:07.572: INFO/DEBUG(551): #01 lr ad3296cf /system/lib/libandroid_runtime.so 12-25 00:56:07.582: INFO/DEBUG(551): stack: 12-25 00:56:07.582: INFO/DEBUG(551): 100ffc58 00000000 12-25 00:56:07.592: INFO/DEBUG(551): 100ffc5c 001c5278 [heap] 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc60 000000da 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc64 0016c778 [heap] 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc68 100ffcc8 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc6c 001c5278 [heap] 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc70 427d1ac0 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc74 000000c1 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc78 40021800 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc7c 000000c2 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc80 00000000 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc84 00000000 12-25 00:56:07.622: INFO/DEBUG(551): 100ffc88 00000000 12-25 00:56:07.622: INFO/DEBUG(551): 100ffc8c 00000000 12-25 00:56:07.622: INFO/DEBUG(551): 100ffc90 df002777 12-25 00:56:07.632: INFO/DEBUG(551): 100ffc94 e3a070ad 12-25 00:56:07.632: INFO/DEBUG(551): #00 100ffc98 00000000 12-25 00:56:07.632: INFO/DEBUG(551): 100ffc9c ad3296cf /system/lib/libandroid_runtime.so 12-25 00:56:07.632: INFO/DEBUG(551): 100ffca0 100ffcd0 12-25 00:56:07.642: INFO/DEBUG(551): 100ffca4 ad342db5 /system/lib/libandroid_runtime.so 12-25 00:56:07.642: INFO/DEBUG(551): 100ffca8 410a79d0 12-25 00:56:07.642: INFO/DEBUG(551): 100ffcac ad00e3b8 /system/lib/libdvm.so 12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb0 410a79d0 12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb4 0016bac0 [heap] 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcb8 ad342da5 /system/lib/libandroid_runtime.so 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcbc 40021800 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc0 410a79d0 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc4 afe39dd0 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc8 100ffcd0 12-25 00:56:07.662: INFO/DEBUG(551): 100ffccc ad040a8d /system/lib/libdvm.so 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd0 41039fb0 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd4 420000f8 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd8 ad342da5 /system/lib/libandroid_runtime.so 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcdc 100ffd48 12-25 00:56:07.852: DEBUG/dalvikvm(722): GC freed 367 objects / 15144 bytes in 210ms 12-25 00:56:08.081: DEBUG/InetAddress(722): www.akillerapp.com: 74.86.47.202 (family 2, proto 6) 12-25 00:56:08.242: DEBUG/dalvikvm(722): GC freed 62 objects / 2328 bytes in 122ms 12-25 00:56:08.771: DEBUG/dalvikvm(722): GC freed 245 objects / 11744 bytes in 179ms 12-25 00:56:09.131: INFO/ActivityManager(577): Process com.killerapps.chokes (pid 722) has died. 
12-25 00:56:09.171: INFO/WindowManager(577): WIN DEATH: Window{43719320 com.killerapps.chokes/com.killerapps.chokes.Activity_Main paused=false} 12-25 00:56:09.251: INFO/DEBUG(551): debuggerd committing suicide to free the zombie! 12-25 00:56:09.291: DEBUG/Zygote(553): Process 722 terminated by signal (11) 12-25 00:56:09.311: INFO/DEBUG(781): debuggerd: Jun 30 2009 17:00:51 12-25 00:56:09.331: WARN/InputManagerService(577): Got RemoteException sending setActive(false) notification to pid 722 uid 10020
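    For a native SIGSEGV like the one above, the usual next step is to symbolicate the fault address against the library it falls in. A rough, hedged sketch, assuming the Android NDK toolchain is installed and an emulator/device is attached (the toolchain prefix varies by NDK version, e.g. arm-eabi-addr2line on older NDKs):

        # Pull the libraries named in the tombstone from the device.
        adb pull /system/lib/libcutils.so .
        adb pull /system/lib/libandroid_runtime.so .

        # Resolve the crash PC (here "#00 pc 000045a8" in libcutils.so) to a
        # function and, if symbols are present, a source line. System libraries
        # are usually stripped, so this may only name the function.
        arm-linux-androideabi-addr2line -C -f -e libcutils.so 0x45a8

        # Newer NDKs ship ndk-stack, which symbolicates a whole logcat dump
        # against your own unstripped libraries in obj/local/<abi>.
        adb logcat | ndk-stack -sym ./obj/local/armeabi

    The fault address 00000004 in the tombstone is a near-null dereference, which is consistent with a JNI or framework call being made against an object that was already torn down by a finish()ed activity.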

    Read the article

  • Computer experiencing slowdowns and lockups despite low CPU usage

    - by user157145
    My setup:
      i5-2300
      Nvidia GTX 550 Ti
      6 GB RAM
      600 W OCZ modular PSU

    I recently reformatted and am already experiencing drastic slowdown as soon as Windows comes up, including repeated lockups with various programs reporting that they are not responsive, then recovering after 10-30 seconds. I've checked memory and hard drive, both of which come out fine. Despite my plethora of worthless antivirus software, I'm forced to assume that my illicit downloading practices have led me into some computer trouble that I can't seem to pin down. I have used CCleaner, Search and Destroy and Malwarebytes, all of which have found nothing to indicate what is causing this massive slowdown. In addition, according to my resource manager, my computer is operating at a load of only 30-50 percent CPU usage and 60 percent RAM usage, but taking 5-10 seconds to load files and open folders, with repeated lockups of multiple programs, especially Firefox, which seems to go unresponsive every 2-3 minutes. Any help would be appreciated. I used a program called OTL by OldTimer, but can't make any sense of the results I was given. Thank you for taking the time to read this.

    I have Avast, but it didn't even find anything when I had it do a full system scan, so I'm thinking it's clueless (also Norton, AVG, and Ad-Aware). I also have MSE, but it has yet to complete a full scan, it takes so long (I left it on last night, but when I woke up my computer had a problem and had to restart). My hard drive has 300 GB out of 1 TB open and I already used HD Tune Pro, which said my hard drive was fine, and it's not an SSD. Also, I'm a noob at computers and only have the HD that is currently inside the computer. In addition, I'm not sure stuttering is the issue I'm suffering: my problem is that while typing these responses Firefox has gone "not responsive" at least 5 times, each for about 5-10 seconds, and when I tried Ctrl+Alt+Delete to bring up Windows Task Manager it took 20 seconds. Essentially my computer goes super slow at bringing up anything, or taking any action whatsoever that opens a program or file, and has repeated incidents where I can't even click on whatever I'm trying to do because it locks up. The confusing thing about these incidents is that they happen right after restarting, when there are minimal programs running and the CPU and memory load is light.

    Read the article

  • How to get more information from the system crash

    - by viraptor
    I'd like to debug an issue I'm having with a Linux (Debian stable) server, but I'm running out of ideas for how to confirm any diagnosis.

    Some background: the servers are DL160 class with hardware RAID between two disks. They're running a lot of services, mostly utilising the network interface and CPU. There are 8 CPUs, and the 7 "main" most CPU-hungry processes are bound to one core each via CPU affinity. Other random background scripts are not forced anywhere. The filesystem is writing ~1.5k blocks/s the whole time (it goes up above 2k/s in peak times). Normal CPU usage for those servers is ~60% on 7 cores and some minimal usage on the last (whatever's running on shells, usually).

    What actually happens is that the "main" services start using 100% CPU at some point, mainly stuck in kernel time. After a couple of seconds, LA goes over 400 and we lose any way to connect to the box (KVM is on its way, but not there yet). Sometimes we see the kernel reporting a hung task (but not always):

      [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds.
      [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [118951.273037] zsh D 0000000000000000 0 15911 1
      [118951.273093] ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a
      [118951.273183] ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740
      [118951.273274] 0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001
      [118951.273335] Call Trace:
      [118951.273424] [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
      [118951.273510] [<ffffffff804294f6>] schedule_timeout+0x1e/0xad
      [118951.273563] [<ffffffff8027577c>] __pagevec_free+0x21/0x2e
      [118951.273613] [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a
      [118951.273692] [<ffffffff8022c168>] default_wake_function+0x0/0xe
      ....

    This would point at RAID / disk failure; however, sometimes the tasks are hung on the kernel's gettsc, which would indicate some general weird hardware behaviour.

    It's also running MySQL (almost read-only, 99% cache hit), which seems to spawn a lot more threads during the system problems. During the day it does ~200kq/s (selects) and ~10q/s (writes). The host is never running out of memory or swapping, and no OOM reports are spotted. We've got many boxes with similar/same hardware and they all seem to behave that way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. The applications themselves don't really report anything wrong when they're running. I can run anything safely on the same hardware in an isolated environment.

    What can I do to narrow down the problem? Where else should I look for an explanation?
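    When the box stops accepting logins, the usual trick is to make the kernel record what every task was doing at that moment and to keep a history of system activity. A hedged sketch of the preparation, set up before the next hang (the syslog target address and MAC below are placeholders, and netconsole assumes a reachable remote syslog server):

        # Allow magic SysRq and start background activity collection.
        sysctl -w kernel.sysrq=1
        apt-get install sysstat        # then enable collection in /etc/default/sysstat

        # Ship kernel messages to another host so they survive the lockup.
        modprobe netconsole netconsole=6665@192.0.2.20/eth0,514@192.0.2.10/00:11:22:33:44:55

        # If the local console still responds during an incident, dump all task
        # states and all blocked (D-state) tasks into the kernel log.
        echo t > /proc/sysrq-trigger
        echo w > /proc/sysrq-trigger

    The sar history from sysstat then shows whether iowait, run queue or context switches spiked first, and the SysRq task dumps show where the "main" processes were stuck in kernel time.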

    Read the article

  • PTR Record Troubles

    - by Physikal
    I am having a hell of a time getting our PTR record right. Our current PTR zone looks like this:

      $ttl 38400
      @ IN SOA ns1.domain.com. admin.domain.com. ( 1268669139 10800 3600 604800 38400 )
      xxx.xxx.xxx.in-addr.arpa. IN NS ns2.domain.com.
      xxx.xxx.xxx.in-addr.arpa. IN NS ns1.domain.com.
      97 IN PTR mail.domain.com.
      xxx.xxx.xxx.xxx.in-addr.arpa. IN PTR mail.domain.com.
      97.96/28. IN PTR mail.domain.com

    For some reason the only thing that works is the 97.96/28. line. When this line is in there it actually says I have a PTR record when reporting from intodns.com; if I remove that line, it says I have no PTR. I have followed instructions from http://www.philchen.com/2007/04/04/configuring-reverse-dns and when I follow those instructions intodns.com says I have no PTR. When it does work with the 97.96/28. line, the PTR kicks back as (from intodns.com):

      97.xxx.xxx.xxx.in-addr.arpa -> mail.domain.com.xxx.xxx.xxx.in-addr.arpa

    which is, to my knowledge, an incorrect PTR. I want it to just kick back as mail.domain.com, without the xxx.xxx.xxx.in-addr.arpa extension. I have tried everything I can think of but I can't fix it. I can't help but think it's one of those things that is so stupid and simple I'm going to do the ol' facepalm. Any help is greatly appreciated. Thanks!

    In the event that the domain zone is needed, here it is:

      $ttl 38400
      @ IN SOA domain.com. [email protected]. ( 1265221037 10800 3600 604800 38400 )
      domain.com. IN A xxx.xxx.xxx.xxx
      www.domain.com. IN A xxx.xxx.xxx.xxx
      ftp.domain.com. IN A xxx.xxx.xxx.xxx
      m.domain.com. IN A xxx.xxx.xxx.xxx
      localhost.domain.com. IN A 127.0.0.1
      webmail.domain.com. IN A xxx.xxx.xxx.xxx
      admin.domain.com. IN A xxx.xxx.xxx.xxx
      mail.domain.com. IN A xxx.xxx.xxx.xxx
      domain.com. IN MX 5 mail.domain.com.
      domain.com. IN TXT "v=spf1 a mx a:domain.com ip4:xxx.xxx.xxx.xxx ?all"
      domain.com. IN NS ns1
      domain.com. IN NS ns2
      ns1 IN A xxx.xxx.xxx.xxx
      ns2 IN A xxx.xxx.xxx.xxx

    Any double entries in different formats were part of my troubleshooting process.
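    For what it's worth, a hedged way to sanity-check both the delegation and the final record from the outside (203.0.113.97 below is only a stand-in for the real address) is:

        # Follow the delegation chain for the reverse zone of this address.
        dig +trace -x 203.0.113.97

        # Query the PTR itself; the answer should be exactly "mail.domain.com."
        # with a trailing dot. A record value written without the trailing dot
        # in the zone file gets the zone origin appended, which is what produces
        # answers like mail.domain.com.xxx.xxx.xxx.in-addr.arpa.
        dig -x 203.0.113.97 +short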

    Read the article

  • Generating/managing config files for hosted application

    - by mfinni
    I asked a question about config management and haven't seen a reply. It's possible my question was too vague, so let's get down to brass tacks. Here's the process we follow when onboarding a new customer instance into our hosted application: how would you manage this? I'm leaning towards a Perl script to populate templates to generate shell scripts, config files, XML config files, etc. Looking briefly at CFengine and Chef, it seems like they're not going to reduce the amount of work, because I'd still have to manually specify all of the changes/edits within the tool. That doesn't seem to be much of a gain over touching the config files directly.

    1. We add a stanza to the main config file for the core (3rd-party) application. This stanza has values that define:
       - the instance (customer) name
       - the TCP listener port for this instance (not one currently used)
       - the DB2 database name (serial numeric identifier, already exists; they get prestaged for us by the DBAs)
       - three sub-config files, by name; they need to be created from 3 templates and be named after the instance
    2. The sub-config files define:
       - the filepath for the DB2 volumes
       - the filepath for the storage of objects
       - the filepath for just one of the DB2 volumes (yes, redundant to the first item)
    3. We run some application commands, start the instance.
    4. We do some LDAP thingies (make an OU for the instance, etc.)
    5. We add a stanza to the config file for our security listener that acts as a passthrough to LDAP:
       - instance name
       - LDAP OU
       - TCP port for instance
       - DB2 database name
    6. We restart the security listener (off-hours), change the main config file from item 1, stop and restart the instance. It is now authenticating via LDAP.
    7. We add the stop and start commands for this instance to the HA failover scripts.
    8. We import an XML config file into the instance that defines things for the actual application for the customer: user names, groups, permissions, and business rules. The XML is supplied by the implementation team.
    9. Now we configure the dataloading application. We add a stanza to the existing top-level config file that points to a new customer-level config file. The new customer-level config file includes the instance (customer) name, the DB2 database name, and an arbitrary number of sub-config files, by name. Each of the sub-config files defines filepaths to the directories for ingestion, feedback, backup, and failure; those filepaths share a common path to a customer-specific folder and then one folder for each sub-config file. Each of those filepaths needs to be created.
    10. We need to add this customer instance to our monitoring scripts that confirm the proper processes are running and can be logged into. Of course, those monitoring config files include the instance name, the TCP port, the DB2 database name, etc.

    There's also a reporting application that needs to be configured for the new instance. You get the idea. There's also XML that is loaded into WAS by the middleware team; we give them the values for them to plug into the XML. They could very easily hand us the template and we could give them back completed XML.
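    Since the ask boils down to "fill the same handful of values into many templates", here is a rough, hedged sketch of the template-population idea in shell rather than Perl; the file names and @TOKEN@ placeholders are invented for illustration:

        #!/bin/sh
        # Fill every *.tmpl under ./templates with the per-customer values and
        # write the results to ./out/<instance>/.
        INSTANCE="$1"; PORT="$2"; DB2NAME="$3"
        mkdir -p "out/$INSTANCE"
        for t in templates/*.tmpl; do
            out="out/$INSTANCE/$(basename "${t%.tmpl}")"
            sed -e "s/@INSTANCE@/$INSTANCE/g" \
                -e "s/@PORT@/$PORT/g" \
                -e "s/@DB2NAME@/$DB2NAME/g" "$t" > "$out"
        done

    The same three or four values drive almost every file in the list above, which is why a small generator (Perl, shell, or a templating tool) tends to pay off even before a full config-management system does.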

    Read the article

  • Suspicious process running under user named

    - by Amit
    I get a lot of emails reporting this and I want this issue to auto correct itself. These process are run by my server and are a result of updates, session deletion and other legitimate session handling reported as false positives. Here's a sample report: Time: Sat Oct 20 00:00:03 2012 -0400 PID: 20077 Account: named Uptime: 326117 seconds Executable: /usr/sbin/nsd\00507d27e9\0053\00\00\00\00\00 (deleted) The file system shows this process is running an executable file that has been deleted. This typically happens when the original file has been replaced by a new file when the application is updated. To prevent this being reported again, restart the process that runs this excecutable file. See csf.conf and the PT_DELETED text for more information about the security implications of processes running deleted executable files. Command Line (often faked in exploits): /usr/sbin/nsd -c /etc/nsd/nsd.conf Network connections by the process (if any): udp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 udp: 127.0.0.1:53 -> 0.0.0.0:0 udp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 tcp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 tcp: 127.0.0.1:53 -> 0.0.0.0:0 tcp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 Files open by the process (if any): /dev/null /dev/null /dev/null Memory maps by the process (if any): 0045e000-00479000 r-xp 00000000 fd:00 2582025 /lib/ld-2.5.so 00479000-0047a000 r--p 0001a000 fd:00 2582025 /lib/ld-2.5.so 0047a000-0047b000 rw-p 0001b000 fd:00 2582025 /lib/ld-2.5.so 0047d000-005d5000 r-xp 00000000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so 005d5000-005d7000 r--p 00157000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so 005d7000-005d8000 rw-p 00159000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so 005d8000-005db000 rw-p 005d8000 00:00 0 005dd000-005e0000 r-xp 00000000 fd:00 2582087 /lib/libdl-2.5.so 005e0000-005e1000 r--p 00002000 fd:00 2582087 /lib/libdl-2.5.so 005e1000-005e2000 rw-p 00003000 fd:00 2582087 /lib/libdl-2.5.so 0062b000-0063d000 r-xp 00000000 fd:00 2582079 /lib/libz.so.1.2.3 0063d000-0063e000 rw-p 00011000 fd:00 2582079 /lib/libz.so.1.2.3 00855000-0085f000 r-xp 00000000 fd:00 2582022 /lib/libnss_files-2.5.so 0085f000-00860000 r--p 00009000 fd:00 2582022 /lib/libnss_files-2.5.so 00860000-00861000 rw-p 0000a000 fd:00 2582022 /lib/libnss_files-2.5.so 00ac0000-00bea000 r-xp 00000000 fd:00 2582166 /lib/libcrypto.so.0.9.8e 00bea000-00bfe000 rw-p 00129000 fd:00 2582166 /lib/libcrypto.so.0.9.8e 00bfe000-00c01000 rw-p 00bfe000 00:00 0 00e68000-00e69000 r-xp 00e68000 00:00 0 [vdso] 08048000-08074000 r-xp 00000000 fd:00 927261 /usr/sbin/nsd 08074000-08079000 rw-p 0002b000 fd:00 927261 /usr/sbin/nsd 08079000-0808c000 rw-p 08079000 00:00 0 08a20000-08a67000 rw-p 08a20000 00:00 0 b7f8d000-b7ff2000 rw-p b7f8d000 00:00 0 b7ffd000-b7ffe000 rw-p b7ffd000 00:00 0 bfa6d000-bfa91000 rw-p bffda000 00:00 0 [stack] Would /etc/nsd/restart or kill -1 20077 solve the problem?
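    As a side note on verifying this kind of report, a hedged pair of checks (run as root, using the PID from the report; the restart command depends on how nsd was packaged):

        # Show which binary the running process was started from; "(deleted)"
        # means the on-disk file has been replaced since the daemon started.
        ls -l /proc/20077/exe

        # List every process still holding a deleted file open.
        lsof +L1

        # If the binary really was just replaced by a package update, a clean
        # restart makes the daemon pick up the new file and clears the warning.
        service nsd restart      # or /etc/init.d/nsd restart

    A plain kill -1 only sends SIGHUP, which many daemons treat as "reload config" rather than re-exec, so it may not replace the deleted image; a full stop/start does.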

    Read the article

  • graphic error on console

    - by Christian Elsner
    I have Linux on an embedded system. There is no graphic system, but I still have graphic errors. For example, if I type: ifconfig eth2 hw ether 00:0e:8c:d0:59:d2 I see: ifconfig eth hw ether 00:0e:8:2:2 If I type Enter, it accepts the command I typed, so it's just a matter of displaying. Everything is fine, when I log in via SSH. Anyone any ideas, what could be the cause or where to look at? Output of lspci: 00:00.0 Host bridge: Intel Corporation 3100 Chipset Memory I/O Controller Hub 00:00.1 Unassigned class [ff00]: Intel Corporation 3100 DRAM Controller Error Reporting Registers 00:01.0 System peripheral: Intel Corporation 3100 Chipset Enhanced DMA Controller 00:02.0 PCI bridge: Intel Corporation 3100 Chipset PCI Express Port A 00:03.0 PCI bridge: Intel Corporation 3100 Chipset PCI Express Port A1 00:1c.0 PCI bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI Express Root Port 1 (rev 01) 00:1c.1 PCI bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI Express Root Port 2 (rev 01) 00:1d.0 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #1 (rev 01) 00:1d.1 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #2 (rev 01) 00:1d.7 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset EHCI USB2 Controller (rev 01) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c9) 00:1f.0 ISA bridge: Intel Corporation 631xESB/632xESB/3100 Chipset LPC Interface Controller (rev 01) 00:1f.2 IDE interface: Intel Corporation 631xESB/632xESB/3100 Chipset SATA IDE Controller (rev 01) 00:1f.3 SMBus: Intel Corporation 631xESB/632xESB/3100 Chipset SMBus Controller (rev 01) 02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection 03:01.0 Network controller: Siemens Nixdorf AG Device 4003 (rev 02) 03:01.1 Unassigned class [ff00]: Siemens Nixdorf AG Device 4003 (rev 02) 03:02.0 Ethernet controller: Siemens Nixdorf AG Device 4047 (rev 01) 03:03.0 Ethernet controller: National Semiconductor Corporation DP83815 (MacPhyter) Ethernet Controller 03:04.0 Unassigned class [ff00]: Siemens Nixdorf AG Device 4057 (rev 01) 04:00.0 PCI bridge: Texas Instruments XIO2000(A)/XIO2200(A) PCI Express-to-PCI Bridge (rev 03) 05:00.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 44) 06:00.0 PCI bridge: Texas Instruments XIO2000(A)/XIO2200(A) PCI Express-to-PCI Bridge (rev 03) 07:00.0 VGA compatible controller: Silicon Motion, Inc. SM720 Lynx3DM (rev c1) 07:01.0 USB Controller: NEC Corporation USB (rev 43) 07:01.1 USB Controller: NEC Corporation USB (rev 43) 07:01.2 USB Controller: NEC Corporation USB 2.0 (rev 04) The whole thing is running on an Intel Core 2 Duo U2500

    Read the article

  • startup Error for Zend Server CE

    - by Jamison
    Hello! I've got a strange startup error for Zend Server CE - it's probably easy to fix, but I don't have much experience with Zend Server! I'm running the latest OSX 10.6.6 and the latest Zend Server CE for Mac. When I run the "start" command from the command line, here is what I get: /usr/local/zend/bin/apachectl start [OK] spawn-fcgi: child spawned successfully: PID: 4206 /usr/local/zend/bin/shell_functions.rc: line 133: 4210 Bus error $WATCHDOG -i $BINARY 1>&3 2>&4 /usr/local/zend/bin/shell_functions.rc: line 133: 4211 Bus error $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4 Starting Zend Server GUI [Lighttpd] [FAILED] /usr/local/zend/bin/lighttpdctl.sh: line 46: 4212 Bus error $WATCHDOG -i $BINARY Starting MySQL SUCCESS! /usr/local/zend/bin/shell_functions.rc: line 133: 4304 Bus error $WATCHDOG -i $BINARY 1>&3 2>&4 /usr/local/zend/bin/shell_functions.rc: line 133: 4425 Bus error $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4 Starting Java bridge [FAILED] /usr/local/zend/bin/java_bridge.sh: line 39: 4426 Bus error $WATCHDOG -i $BINARY Zend Server started... The challenge is that ZEND SERVER wont open the GUI with this error, and seemingly I can click on Zend Server in the Applications folder and it opens for a second and immediately closes. I've made sure that Web Sharing is turned off to avoid conflicts, and I've run Disk Utility from my recovery disk to make sure there are no file system errors. Here is what the lines that are referenced in the errors have in terms of code: shell_functions.rc: (starting on line 132 - the error message says line 133...): launch() { if [ -z "$DEBUG" ]; then exec 3>/dev/null 4>&3 else exec 3>&1 4>&2 fi $WATCHDOG -i $BINARY 1>&3 2>&4 RET=$? if [ $RET -eq 0 ];then $ECHO_CMD "$BINARY watchdog is up and running.. ${OK_COLOR}[OK]${T_RESET}" return $RET else #$WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY >> "$PREFIX/logs/watchdog_$BINARY.log" 2>&1 $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4 report $? "Starting" fi } _kill() { $WATCHDOG -i $BINARY > /dev/null 2>&1 if [ $? -eq 1 ];then $ECHO_CMD "$BINARY is not running" else $WATCHDOG -t $BINARY > /dev/null 2>&1 report $? "Stopping" fi } lighttpdctl.sh: (starting on line 45 - the error message says line 46...): status() { $WATCHDOG -i $BINARY } case "$1" in start) start status ;; stop) stop ;; restart) stop sleep 1 start ;; status) status ;; *) usage exit 1 esac exit $? java_bridge.sh: (starting on line 38 - the error message says line 39...): status() { $WATCHDOG -i $BINARY } Question: "Watchdog" is library in this zend BIN folder - it seems to handle error reporting? all the errors in my start command seem to deal with this Watchdog thing, but I don't know what to do about it... Thanks!

    Read the article

  • how to setup kismet.conf on Ubuntu

    - by Registered User
    I installed Kismet on my Ubuntu 10.04 machine as apt-get install kismet every thing seems to work fine. but when I launch it I see following error kismet Launching kismet_server: //usr/bin/kismet_server Suid priv-dropping disabled. This may not be secure. No specific sources given to be enabled, all will be enabled. Non-RFMon VAPs will be destroyed on multi-vap interfaces (ie, madwifi-ng) Enabling channel hopping. Enabling channel splitting. NOTICE: Disabling channel hopping, no enabled sources are able to change channel. Source 0 (addme): Opening none source interface none... FATAL: Please configure at least one packet source. Kismet will not function if no packet sources are defined in kismet.conf or on the command line. Please read the README for more information about configuring Kismet. Kismet exiting. Done. I followed this guide http://www.ubuntugeek.com/kismet-an-802-11-wireless-network-detector-sniffer-and-intrusion-detection-system.html#more-1776 how ever in kismet.conf I am not clear with following line source=none,none,addme as to what should I change this to. lspci -vnn shows 0c:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g [14e4:4315] (rev 01) Subsystem: Dell Device [1028:000c] Flags: bus master, fast devsel, latency 0, IRQ 17 Memory at f69fc000 (64-bit, non-prefetchable) [size=16K] Capabilities: [40] Power Management version 3 Capabilities: [58] Vendor Specific Information <?> Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable- Capabilities: [d0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting <?> Capabilities: [13c] Virtual Channel <?> Capabilities: [160] Device Serial Number Capabilities: [16c] Power Budgeting <?> Kernel driver in use: wl Kernel modules: wl, ssb and iwconfig shows lo no wireless extensions. eth0 no wireless extensions. eth1 IEEE 802.11bg ESSID:"WIKUCD" Mode:Managed Frequency:2.462 GHz Access Point: <00:43:92:21:H5:09> Bit Rate=11 Mb/s Tx-Power:24 dBm Retry min limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Managementmode:All packets received Link Quality=1/5 Signal level=-81 dBm Noise level=-90 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:169 Invalid misc:0 Missed beacon:0 So what should I be putting in place of source=none,none,addme with output I mentioned above ?
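    For the source= line, old-style Kismet expects source=<capture driver>,<interface>,<name>. The card here is a Broadcom BCM4312 currently using the proprietary wl module, which generally cannot do monitor mode, so a hedged sketch of what the change might look like if the card is moved to the in-kernel b43 driver (this assumes b43 supports this particular chip on your kernel and that the b43 firmware has been installed; the interface may also come up as wlan0 instead of eth1 after the switch):

        # Swap the proprietary driver for the open one.
        sudo modprobe -r wl
        sudo modprobe b43

        # Point Kismet at the wireless interface; "bcm4312" is just a label.
        sudo sed -i 's/^source=none,none,addme/source=b43,eth1,bcm4312/' /etc/kismet/kismet.conf

        # Then launch Kismet again.
        kismet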

    Read the article

  • Seriousness of a "Smart" disk error. How long will it last?

    - by Workshop Alex
    I have a 1 TB data disk, and the BIOS and Windows are reporting a "SMART" error. At least, I get a SMART event, but it doesn't indicate how serious the failure could be. My system is about 6 months old, including the disk, so the warranty will cover the damage. Unfortunately, I lack a second disk of 1 TB in size which I could use to make a full backup. The most important data on this disk is safe, but there's a lot of work data which can be regenerated, though that would cost a lot of time. So I ordered a USB disk of 1 TB which will arrive in three days. By then I can make a full backup of the data, and afterwards the disk can crash. But will the disk live that long? (Well, I won't use the PC as long as I can't make a backup.) How serious is such a SMART event? I know it's serious enough to have the disk replaced, but will it live for another week, or could it die at any moment?

    Update: I purchased a 1 TB external disk and spent most of the day making a backup of the 1 TB disk. It survived that. I then received a new disk, since it was still under warranty, and replaced the hard disk. Then I had to spend most of a day again putting the backup back. I need to send back the faulty disk and now have an additional external disk, which could always be practical. :-) The SMART error report did not cause any failures on the original disk. I won't advise ignoring these warnings, but the disk still had enough life in it to last a few more days. (Just make sure you have a good backup.) And oh, the horror of having to make a complete backup of such a huge disk. :-) If your data is important, make sure you have something that supports incremental backups and lots of space. (In my case, the data wasn't very important, just practical to have together on one disk.)
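    To judge how urgent a SMART warning is, it helps to see which attribute actually tripped. A hedged sketch using smartmontools (available for both Linux and Windows; the device name is an assumption):

        # Overall health verdict plus the full attribute table.
        # Reallocated_Sector_Ct, Current_Pending_Sector and Offline_Uncorrectable
        # climbing over time are the attributes that most often precede failure.
        smartctl -a /dev/sda

        # Kick off a long self-test, then read the result a couple of hours later.
        smartctl -t long /dev/sda
        smartctl -l selftest /dev/sda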

    Read the article

  • Deployment/provisioning tool for commercial applications (not developed in-house)

    - by mfinni
    I help manage a few hosted commercial applications, and we have a lot of manual processes involved when doing new customer-instance deployments into the shared (multitenant) environment. Allow me to describe the most relevant features, and then we can talk about the tools. We have an application on AIX, that requires dozens of changes to config files (some plain text, some XML) as well as a good number of commands to be run on multiple servers - some to start the new instance, some to restart our shared authentication and reporting engines, etc. The config changes follow templates, of course. The servers in question will also depend on the initial conditions specified by the implementer/deployer - we may choose to deploy a given customer to our servers in Europe, or one set of servers may be active-active whereas a different set of servers is active-passive - in short, there's a lot of complications. We have another application that run on IIS 6 and SQL. The DBAs don't want any automation of the SQL components and that's fine with me, but automating the IIS bit would be great. For a new customer instance, we make a filesystem copy of a template Virtual Directory target named after the new customer, make a new AppPool to match, edit a VirDir template .xml file to replace the filepaths and AppPool names with the new ones, and then make a new VirDir from the modified template XML to point to the new filesystem folder and app pool. For the first case, something like ControlTier or Chef might be good. For the second, the new(ish) Web Deploy from MS would probably do a good job. Has anyone used these tools or others to do something similar for applications? More of a nice-to-have, not a fixed requirement - Has anyone used anything that works on both platforms? I'm looking for something free, because the official word is that within a year, we will have whatever HP has renamed the OpsWare suite, which should be able to do stuff like this. Edit - based on someone's suggestion, looking at CFengine for the AIX application, it doesn't seem to address my pain. The problem isn't keeping a given config synced across dozens of servers, we have rsync for that. The problem is that onboarding a new customer instance touches dozens of files, putting pieces of the same or similar information into them - some are new stanzas in existing files, some are new files, and some are new directories. This is a several-hours-long process that is also error-prone because it's mostly done by hand. I guess I'm looking for config-file generation and management. I have built a small Perl script to do something similar for a much smaller case - it binds a CSV file into variables, and then does a copy-and-search-and-replace from a set of template config files. I could probably do the same here.

    Read the article

  • Both nginx and php5-fpm init.d startup scripts are non-functional and returning no errors..? But they used to work perfectly

    - by Ollie Treend
    I have been using nginx and php5-fpm on my Ubuntu box for a while now. Everything has been configured and setup correctly, and it ran like a charm. I have been keeping the packages updated & upgraded as usual, but haven't touched the nginx OR php5-fpm config files at all (thus I'm pretty sure this isn't my fault... ) Basically, I noticed nginx wasn't running as it should be. I ran the command sudo service nginx start, and the script did nothing. The same thing happens when trying to do anything - start, stop, restart or reload. This also happens for the "php5-fpm" init script - although all other init scripts seem to be functioning correctly. When trying to start nginx OR php5-fpm, this is what happens: root@HAL:/etc# service php5-fpm start root@HAL:/etc# I can't understand what is going wrong. The script isn't returning errors, but similarly it isn't starting the daemon or reporting success as usual. For reference, both installations are from the official nginx and php5-fpm PPAs. The fact that both started doing this at the same time has thrown me - since they are both unrelated packages. I have since purged both sets of packages from my system with apt-get purge ... and also apt-get remove --purge ... both of which have successfully removed the packages, their config files, and their init.d startup scripts. After having reinstalled nginx, I now have a functioning startup script again - I can start the web server as usual. However, php5-fpm is still experiencing the strange premature exiting of the startup script.. and I really can't figure out what's causing it. I have no idea what caused this to occur initially, but have managed to fix nginx. I now need to fix the php5-fpm startup script. If anybody could shed some light on this situation, I would be very grateful! The chances are both these issues are related - and they were caused by me doing something stupid. But now I need to fix it. This time I was lucky - because these problems are just on my development server. But I have 2 other live servers which are configured in a similar way, and I am worried the same thing will happen to these two as well! Has anybody else come across this? Do you have any words of advice? Thank you
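    One hedged way to see why an init script exits silently is to trace it and look for stale state (paths below are the usual Ubuntu/PPA defaults, so treat them as assumptions):

        # Trace every command the script runs; silent exits usually turn out to be
        # a pidfile/lockfile test or an "already running" guard near the top.
        sudo sh -x /etc/init.d/php5-fpm start

        # Check for leftover pid files that make the script think the daemon is up.
        ls -l /var/run/php5-fpm.pid /var/run/nginx.pid 2>/dev/null

        # Run the daemon in the foreground so it prints its own errors.
        sudo php5-fpm --fpm-config /etc/php5/fpm/php-fpm.conf --nodaemonize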

    Read the article

  • Turn off / disable the performance cache

    - by jessie
    OK I run a streaming website and my CMS is giving me an error when uploading videos "Failed To Find Flength File" ok so I did some research. The answer I got from the coder was below. I did do all that, but the only thing I could not do is turn off what he refers to as performance cache, talked about in the last sentence... I am on a Cent OS Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file do not actually hit the disk until the upload is complete, making it worthless for reporting on progress during the upload. This may be fixed using a .htaccess file assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files. I strongly recommend giving it a quick read before attempting to install your own .htaccess file. 1. A mod_security module for Apache. To fix it just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts resides, or in some directory above it (like your server's DOCUMENT_ROOT, i.e. the top-level of your webspace). htaccess files must be uploaded as ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 or (RW-R--R--). # Turn off mod_security filtering. SecFilterEngine Off # The below probably isn't needed, # but better safe than sorry. SecFilterScanPOST Off If the above method does not work, try putting the following lines into the file SetEnvIfNoCase Content-Type \ "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads" mod_gzip_on No 2. "Performance Cache" enabled on OS X SERVER. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching." Apparently if ANY of your hosted sites are using performance caching, then by default, all sites (domains) will attempt to. The fix then is to disable the performance cache on all hosted sites.

    Read the article

  • Undelivered Mail Returned to Sender

    - by Alex
    When sending to [email protected] via the PHP mail() function, I receive mails. When sending emails from external machines, I receive the following (e.g., sending from [email protected]; mail.ru is the Russian Gmail):

      This is the mail system at host fallback2.mail.ru.
      I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below.
      For further assistance, please send mail to <postmaster>
      If you do so, please include this problem report. You can delete your own text from the attached returned message.
      The mail system
      <[email protected]>: lost connection with mail.mydomain.com[xxx.xxx.xxx.xxx] while receiving the initial server greeting

      Reporting-MTA: dns; fallback2.mail.ru
      X-mPOP-Fallback_MX-Queue-ID: D8C19F2411F1
      X-mPOP-Fallback_MX-Sender: rfc822; [email protected]
      Arrival-Date: Tue, 29 Oct 2013 10:09:21 +0400 (MSK)
      Final-Recipient: rfc822; [email protected]
      Original-Recipient: rfc822;[email protected]
      Action: failed
      Status: 4.4.2
      Diagnostic-Code: X-mPOP-Fallback_MX; lost connection with mail.tld.com[xxx.xxx.xxx.xxx] while receiving the initial server greeting

    Here is my postfix main.cf:

      command_directory = /usr/sbin
      daemon_directory = /usr/libexec/postfix
      data_directory = /var/lib/postfix
      myhostname = mail.mydomain.com
      mydomain = mydomain.com
      myorigin = mydomain.com
      inet_interfaces = all
      inet_protocols = all
      unknown_local_recipient_reject_code = 550
      in_flow_delay = 1s
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      mail_name = mydomain.com daemon
      debug_peer_level = 2
      debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
      sendmail_path = /usr/sbin/sendmail.postfix
      newaliases_path = /usr/bin/newaliases.postfix
      mailq_path = /usr/bin/mailq.postfix
      setgid_group = postdrop
      html_directory = no
      manpage_directory = /usr/share/man
      sample_directory = /usr/share/doc/postfix-2.6.6/samples
      readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
      bounce_queue_lifetime = 4h
      maximal_queue_lifetime = 4h
      delay_warning_time = 1h
      strict_rfc821_envelopes = yes
      show_user_unknown_table_name = no
      allow_percent_hack = no
      swap_bangpath = no
      smtpd_delay_reject = yes
      smtpd_error_sleep_time = 20
      smtpd_soft_error_limit = 1
      smtpd_hard_error_limit = 3
      smtpd_junk_command_limit = 2
      mydestination = mydomain.com, localhost.localdomain, localhost
      smtpd_client_restrictions = permit_inet_interfaces
      smtpd_recipient_limit = 100
      virtual_alias_domains = mydomain.com
      virtual_alias_maps = hash:/etc/postfix/virtual
      smtpd_sasl_type = dovecot
      smtpd_sasl_path = private/auth
      smtpd_sasl_auth_enable = yes
      smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination

    Why are emails from external servers not being delivered? Thank you!

    Update: In the log, the following lines appear many times:

      Oct 30 10:48:29 mydomain postfix/smtpd[16216]: connect from fallback5.mail.ru[94.100.176.59]
      Oct 30 10:48:29 mydomain postfix/smtpd[16216]: warning: SASL: Connect to private/auth failed: Connection refused
      Oct 30 10:48:29 mydomain postfix/smtpd[16216]: fatal: no SASL authentication mechanisms

    It appears I have to configure SASL? I would understand if I wanted to send emails from Postfix, but why do I need it to receive emails?
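    On the update: because smtpd_sasl_auth_enable = yes is set, smtpd tries to reach Dovecot's auth socket at startup, and the "Connect to private/auth failed" followed by "fatal: no SASL authentication mechanisms" means each incoming smtpd process dies before it can send its greeting, which matches the "lost connection while receiving the initial server greeting" in the bounce. A hedged set of checks (paths assume a typical chrooted layout and Dovecot 2.x; use dovecot -n on 1.x):

        # What Postfix thinks it should use for SASL.
        postconf smtpd_sasl_type smtpd_sasl_path smtpd_sasl_auth_enable

        # The socket Postfix tries to connect to lives under its queue directory.
        ls -l /var/spool/postfix/private/auth

        # Confirm Dovecot actually exports an auth listener at that path.
        doveconf -n | grep -A6 'service auth'

    Either adding the missing unix_listener for Postfix in Dovecot's auth service, or temporarily setting smtpd_sasl_auth_enable = no, should let smtpd survive long enough to accept external mail again.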

    Read the article

  • Windows 7 immediately disconnects a USB drive

    - by Daniel Saner
    I am having a problem with Windows 7 x64 consistently disconnecting one specific USB mass storage drive immediately after it is connected. The drive in question is a Cowon C2 digital music player which works in standard mass storage controller mode (i.e. no device-specific drivers needed/available). When I connect the player, Windows plays the "USB connect" sound and the device appears (under its correct name) in the device manager, but it never appears as a drive. The player itself displays "USB Connected" for a split-second before reporting that it has been disconnected again. Since the player, by design, reboots after it has been disconnected, Windows plays the "USB disconnect" sound before restarting the whole cycle once the player has powered back on. I am connecting the player through an Intel X79 Chipset motherboard (Gigabyte GA-X79-UD3) to Windows 7 Pro 64-bit. The player used to work fine the first few times I connected it, showing up as an external drive; it only recently stopped working. It is not a problem with the player, since it works fine when connected to another computer, even such running the exact same operating system. It is also not a problem with the USB controller, since the issue is the same on both the Intel USB 2.0 and the Fresco Logic FL1009 USB 3.0 controller ports. I have also not had the problem with any other drive so far. Among the things I have tried so far: Disabling USB legacy mode in BIOS Disabling energy-saving power down for all USB controllers in Windows' device manager Removing and reinstalling Windows' USB mass storage driver Removing and reinstalling Intel and Fresco Logic USB controller driver Restoring the player to factory defaults None of these made a difference. Again, the player used to work fine on the exact same system just days ago; I didn't install any new hardware or drivers on it since then. I would be very grateful for any hints on what else to try. Edit: Here is another new hint; I found out that when I connect the drive before booting Windows, it is available in Windows Explorer as it should, and does not automatically disconnect. If I remove and reconnect it though, the infinite connect/disconnect-loop starts anew.

    Read the article

  • SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual database

    - by pinaldave
    I recently downloaded Idera’s SQL virtual database and tested it. There are a few things about this tool which caught my attention.

My Scenario

It is quite common in real life that observing or retrieving older data is necessary, even though that data has changed as time passed by. The full database backup was 40 GB in size, and restoring it on our production server usually takes around 16 to 22 minutes, depending on the load the server is under at the time. This time range varies from one server to another as per the configuration of the computer. Some other issues we used to have are the following:
- To restore a large 40-GB database, we needed at least that much free space on our production server.
- Once in a while, we even had to make changes in the restored database and use that changed, restored database for our purpose, making it even more time-consuming.

My Solution

I had heard a lot about Idera’s SQL virtual database tool. Right after we started to test it, we found out that it really delivers what it promises. Using this software was very easy, and we were able to restore our database from backup in less than 2 minutes, sparing us the usual 16–22 minutes; the whole task was finished in a total of 10 minutes. Another interesting observation is that there is no need for additional space to restore the database: for a complete database restoration, not even a single additional MB on the drive is required. We can use the database in the same way as our regular database, and there is no need for any additional configuration or setup. Let us look at the most relevant points of this product based on my initial experience:
- Quick restoration of the database backup.
- No additional space required for database restoration.
- A virtual database has no physical .MDF or .LDF.
- The database which is restored is, in fact, the backup file presented as a virtual database.
- DDL and DML queries can be executed against this virtually restored database.
- A regular backup operation can be run against the virtual database, creating a physical .bak file that can be used in the future.
- There was no observed degradation in performance on either the original database or the restored virtual database.
- Additional T-SQL queries can be run against the virtual database.
Well, this summarizes my quick review. And, as I was saying, I am very impressed with the product and I plan to explore it more. There are many features in this tool which I think can be very useful if properly understood. I took a few screenshots using my demo database afterwards. Let us see what other things this tool can do besides the activities mentioned. I am surprised by its performance, so I want to know how exactly this feature works: specifically, why it does not create any additional files and yet still allows updates on the virtually restored database. I guess I will have to send an e-mail to the developers at Idera and try to figure this out from them. I think this tool is very useful, and it delivers a much higher level of performance than I expected. Soon, I will write a review of additional uses of SQL virtual database. If you are using SQL virtual database in your production environment, I am eager to learn about your experience while using it.

The ‘Virtual’ Part of virtual database

When I set out to test this software, I thought virtual database had something to do with Hyper-V or virtualization.
In fact, a virtual database is a database which shows up in your SQL Server Management Studio without actually being restored or even created. The tool creates a database in SSMS from a backup of that database; the backup, however, behaves virtually the same way as the original database.

Potential Usage of virtual database

As soon as I described this tool to my teammate, his very first reaction was, “hey, if we have this then there is no need for log shipping.” I find his comment very interesting, as log shipping is something where logs are moved to another server; here, however, the database is not updated from logs, so I would rather compare it with Snapshot Replication. In fact, a snapshot-replicated database can be used and configured in much the same way as a virtual database. I totally believe that we can use it for reporting purposes. Once this database is configured, I think the uses of this tool are unlimited. I will have to spend some more time studying it and will get back to you.

Screenshots from the original post (click on images to see larger versions): virtual database Console; Harddrive Space before virtual database Setup; Attach Full Backup Screen; Backup on Harddrive; Attach Full Backup Screen with Settings; virtual database Setup – less than 60 sec; virtual database Setup – Online; Harddrive Space after virtual database Setup; Point in Time Recovery Option – Timeline View; virtual database Summary; No Performance Difference between Regular DB vs Virtual DB.

Please note that all SQL Server MVPs get a free license of this software.

Reference: Pinal Dave (http://blog.SQLAuthority.com), Idera (virtual database)
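To illustrate the point above that the virtually restored database can be used "in the same way as our regular database", here is a small, hypothetical C# (ADO.NET) sketch. The database name SalesDB_Virtual, the dbo.Orders table, and the backup path are made up for illustration and do not come from the original post; nothing in the code is specific to Idera's tool, which is exactly the point.

using System;
using System.Data.SqlClient;

class VirtualDatabaseDemo
{
    static void Main()
    {
        // Connection string to a database mounted from a .bak file with SQL virtual
        // database; it is addressed exactly like a normally restored database.
        const string connectionString =
            "Server=.;Database=SalesDB_Virtual;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Ordinary SELECT/DML statements work against the virtual database.
            using (var count = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", connection))
            {
                Console.WriteLine("Rows in Orders: {0}", count.ExecuteScalar());
            }

            // A regular BACKUP DATABASE also works, producing a physical .bak file.
            using (var backup = new SqlCommand(
                "BACKUP DATABASE SalesDB_Virtual TO DISK = 'D:\\Backups\\SalesDB_FromVirtual.bak'",
                connection))
            {
                backup.CommandTimeout = 0; // backups may exceed the default 30-second timeout
                backup.ExecuteNonQuery();
            }
        }
    }
}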

    Read the article

  • The Challenge with HTML5 – In Pictures

    - by dwahlin
    I love working with Web technologies and am looking forward to the new functionality that HTML5 will ultimately bring to the table (some of which can be used today). Having been through the div versus layer battle back in the IE4 and Netscape 4 days I think we’re headed down that road again as a result of browsers implementing features differently. I’ve been spending a lot of time researching and playing around with HTML5 samples and features (mainly because we’re already seeing demand for training on HTML5) and there’s a lot of great stuff there that will truly revolutionize web applications as we know them. However, browsers just aren’t there yet and many people outside of the development world don’t really feel a need to upgrade their browser if it’s working reasonably well (Mom and Dad come to mind) so it’s going to be awhile. There’s a nice test site at http://www.HTML5Test.com that runs through different HTML5 features and scores how well they’re supported. They don’t test for everything and are very clear about that on the site: “The HTML5 test score is only an indication of how well your browser supports the upcoming HTML5 standard and related specifications. It does not try to test all of the new features offered by HTML5, nor does it try to test the functionality of each feature it does detect. Despite these shortcomings we hope that by quantifying the level of support users and web developers will get an idea of how hard the browser manufacturers work on improving their browsers and the web as a development platform. The score is calculated by testing for the many new features of HTML5. Each feature is worth one or more points. Apart from the main HTML5 specification and other specifications created the W3C HTML Working Group, this test also awards points for supporting related drafts and specifications. Some of these specifications were initially part of HTML5, but are now further developed by other W3C working groups. WebGL is also part of this test despite not being developed by the W3C, because it extends the HTML5 canvas element with a 3d context. The test also awards bonus points for supporting audio and video codecs and supporting SVG or MathML embedding in a plain HTML document. These test do not count towards the total score because HTML5 does not specify any required audio or video codec. Also SVG and MathML are not required by HTML5, the specification only specifies rules for how such content should be embedded inside a plain HTML file. Please be aware that the specifications that are being tested are still in development and could change before receiving an official status. In the future new tests will be added for the pieces of the specification that are currently still missing. The maximum number of points that can be scored is 300 at this moment, but this is a moving goalpost.” It looks like their tests haven’t been updated since June, but the numbers are pretty scary as a developer because it means I’m going to have to do a lot of browser sniffing before assuming a particular feature is available to use. Not that much different from what we do today as far as browser sniffing you say? I’d have to disagree since HTML5 takes it to a whole new level. In today’s world we have script libraries such as jQuery (my personal favorite), Prototype, script.aculo.us, YUI Library, MooTools, etc. that handle the heavy lifting for us. Until those libraries handle all of the key HTML5 features available it’s going to be a challenge. 
Certain features such as Canvas are supported fairly well across most of the major browsers, while other features such as audio and video are hit or miss depending upon which codec you want to use. Run the tests yourself to see what passes and what fails for different browsers. You can also view the HTML5 Test Suite Conformance Results at http://test.w3.org/html/tests/reporting/report.htm (a work in progress). The table below lists the scores that the HTML5Test site returned for different browsers I have installed on my desktop PC and laptop; a specific list of tests run and features supported is given when you go to the site. Note that I went ahead and tested the IE9 beta and it didn't do nearly as well as I expected it would, but it's not officially out yet so I expect that number will change a lot. Am I opposed to HTML5 as a result of these tests? Of course not - I'm actually really excited about what it offers. However, I'm trying to be realistic: having been through something like this many years ago, I feel it'll definitely add a new level of headache to the Web application development process. On the flip side, developers who are able to target a specific browser (typically intranet apps) or who master the cross-browser issues are going to release some pretty sweet applications. Check out http://html5gallery.com/ for a look at some of the more cutting-edge sites out there that use HTML5. Also check out the http://www.beautyoftheweb.com site that Microsoft put together to showcase IE9. Browsers tested (scores appear in the original table): Chrome 8, Safari 5 for Windows, Opera 10, Firefox 3.6, Internet Explorer 9 Beta (note that it's still beta), and Internet Explorer 8.

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 3

    - by rajbk
    We continue building our report in this three-part series.
Creating an ASP.NET report using Visual Studio 2010 - Part 1
Creating an ASP.NET report using Visual Studio 2010 - Part 2

Adding the ReportViewer control and filter drop downs

Open the source code for index.aspx and add a ScriptManager control; this control is required for the ReportViewer control. Add a DropDownList for the categories and one for the suppliers, then add the ReportViewer control. The markup after these steps is shown below.

<div>
    <asp:ScriptManager ID="smScriptManager" runat="server">
    </asp:ScriptManager>
    <div id="searchFilter">
        Filter by: Category :
        <asp:DropDownList ID="ddlCategories" runat="server" />
        and Supplier :
        <asp:DropDownList ID="ddlSuppliers" runat="server" />
    </div>
    <rsweb:ReportViewer ID="rvProducts" runat="server">
    </rsweb:ReportViewer>
</div>

The design view for index.aspx is shown below. The dropdowns will display the categories and suppliers in the database. Changing the selection in the drop downs will cause the report to be filtered by those selections; you will see how to do this in the next steps.

Attach the RDLC to the ReportViewer control by clicking on the top right of the control, going to ReportViewer Tasks and selecting Products.rdlc. Resize the ReportViewer control by dragging at the bottom right corner; I set mine to 800px x 500px. You can also set this value in source view.

Defining the data sources

We will now define the data source used to populate the report. Go back to the "ReportViewer Tasks" and select "Choose Data Sources". Select a "New data source..", select "Object" and name your data source ID "odsProducts". In the next screen, choose "ProductRepository" as your business object, then choose "GetProductsProjected" in the screen after that.

The method requires a SupplierID and CategoryID. We will set these so that our data source gets the values from the drop down lists we defined earlier: set the parameter source to be of type "Control" and set the ControlIDs to ddlSuppliers and ddlCategories respectively. Your screen will look like this:

We are now going to define the data source for our drop downs. Select the ddlCategories drop down and pick "Choose Data Source". Pick "Object" and give it an ID of "odsCategories". In the next screen, choose "ProductRepository", select the GetCategories() method in the next screen, and then select "CategoryName" and "CategoryID". We are done defining the data source for the Category drop down; perform the same steps for the Suppliers drop down.

Select each dropdown and set AppendDataBoundItems to true and AutoPostBack to true. AppendDataBoundItems is needed because we are going to insert an "All" list item with an empty value. Go to each drop down and add this list item markup as shown below. Finally, double click on each drop down in the designer and add the following code in the code-behind. This, along with the AutoPostBack="true" attribute, refreshes the report any time a drop down selection is changed.

protected void ddlCategories_SelectedIndexChanged(object sender, EventArgs e)
{
    rvProducts.LocalReport.Refresh();
}

protected void ddlSuppliers_SelectedIndexChanged(object sender, EventArgs e)
{
    rvProducts.LocalReport.Refresh();
}

Compile your report and run the page. You should see the report rendered. Note that the toolbar in the ReportViewer control gives you a couple of options, including the ability to export the data to Excel, PDF or Word.
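For context, the ObjectDataSource defined above calls ProductRepository.GetProductsProjected and ProductRepository.GetCategories, which were built in Part 1 of this series and are not repeated here. The sketch below is only an assumption about their shape (the in-memory data is purely illustrative; the real repository presumably queries Northwind, per the sample project name). It shows one common way to make the "All" list items work: nullable parameters, so that the empty Value, which the ObjectDataSource converts to null by default, simply means "do not filter".

using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of the Part 1 repository; names are taken from the wizard
// screens above, everything else is illustrative.
public class Product
{
    public int ProductID { get; set; }
    public string ProductName { get; set; }
    public int CategoryID { get; set; }
    public int SupplierID { get; set; }
}

public class Category
{
    public int CategoryID { get; set; }
    public string CategoryName { get; set; }
}

public class ProductRepository
{
    // Illustrative data so the sketch is self-contained.
    private static readonly List<Product> Products = new List<Product>
    {
        new Product { ProductID = 1, ProductName = "Chai",  CategoryID = 1, SupplierID = 1 },
        new Product { ProductID = 2, ProductName = "Chang", CategoryID = 1, SupplierID = 2 },
    };

    // Nullable parameters: the "All" ListItem has an empty Value, which the
    // ObjectDataSource passes as null, meaning "do not filter on this column".
    public List<Product> GetProductsProjected(int? supplierID, int? categoryID)
    {
        IEnumerable<Product> query = Products;
        if (supplierID.HasValue)
            query = query.Where(p => p.SupplierID == supplierID.Value);
        if (categoryID.HasValue)
            query = query.Where(p => p.CategoryID == categoryID.Value);
        return query.ToList();
    }

    // Bound to odsCategories; a matching GetSuppliers() would back odsSuppliers.
    public List<Category> GetCategories()
    {
        return new List<Category>
        {
            new Category { CategoryID = 1, CategoryName = "Beverages" },
            new Category { CategoryID = 2, CategoryName = "Condiments" },
        };
    }
}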
Conclusion

Through this three-part series, we did the following:
- Created a data layer for use by our RDLC.
- Created an RDLC using the report wizard and defined a dataset for the report.
- Used the report design surface to design our report, including adding a chart.
- Used the ReportViewer control to attach the RDLC.
- Connected our ReportViewer to a data source and took parameter values from the drop down lists.
- Used AutoPostBack to refresh the report when the drop down selection was changed.
RDLCs allow you to create interactive reports including drill downs and grouping. For even more advanced reports you can use Microsoft® SQL Server™ Reporting Services with RDLs. With RDLs, the report is rendered on the report server instead of the web server. Another nice thing about RDLs is that you can define a parameter list for the report and it gets rendered automatically for you. RDLCs and RDLs both have their advantages, and it is best to compare them and choose the right one for your requirements. Download VS2010 RTM Sample project NorthwindReports.zip

Alfred Borden: Are you watching closely?

    Read the article

  • Add Windows 7’s AeroSnap Feature to Vista and XP

    - by Asian Angel
    Are you using Windows Vista or XP and want that Windows 7 AeroSnap goodness on your own system? Then join us as we look at AeroSnap for Windows Vista and XP. Note: Requires .NET Framework 2.0 or higher (link provided at bottom of article).

Setup

What exactly does AeroSnap do, you might ask…here is a quote directly from the website: “AeroSnap is a simple but powerful application that allows you to resize, arrange or maximize your desktop windows with just drag’n’drop. Simply drag a window to a side of your desktop to snap it or drag it to the top to maximize. When you drag it back to the last position, the last window size will be restored.” As soon as you have finished installing AeroSnap and started it for the first time, the only item that will be visible is the “System Tray Icon”. Before going any further you should take a moment to view and make any desired adjustments in the “Options”. Note: AeroSnap works with multiple monitors. You may want to have AeroSnap start with Windows each time, but the really nice setting to enable here is the “Snap Preview”. If you are using AeroSnap on Vista and have Aero enabled, this will really be nice. The second portion may be of interest for those who would like to enable the keyboard shortcut function. One point worth noting about this screen is that the highest number of pixels from the screen’s edge that you can set AeroSnap for is 20 pixels.

AeroSnap in Action

AeroSnap is extremely easy to use…just grab the top of an app window and drag it to the left, right, or top of your screen. Since we installed this on Windows Vista, we made certain to enable the “Snap Preview” in the “Options”. We started off with dragging our Firefox 3.7 window towards the left…once we got close to the edge of the screen you can see that the left half of the screen temporarily “shaded over”. Note: The “Snap Preview” displays on the left and right movements but not the top movement. Releasing Firefox snapped it right into the “shaded over” part of the screen. The great thing about AeroSnap is that it is really easy to return the app window to its former size…all that you have to do is simply click on and grab the top portion of the app window. Moving Firefox towards the top of our screen and… It quickly snaps into filling the screen. One thing that we did notice is that the window did not “Maximize” as per the function of the button in the upper right corner. Dragging towards the right side now… And snap! Tucked in all nice and neat… You can minimize the app windows to the Taskbar and they will return to their previous “snap area” when “maximized” again.

Conclusion

If you have been wanting to add Windows 7’s AeroSnap goodness to your Vista and XP systems, then you should definitely give this app a try. AeroSnap is very easy to set up and operate…

Links

Download AeroSnap for Windows Vista & XP
Download the .NET Framework

    Read the article

< Previous Page | 285 286 287 288 289 290 291 292 293 294 295 296  | Next Page >