Search Results

Search found 59449 results on 2378 pages for 'disk error'.

Page 104 of 2378

  • Saving a movie generated with Jython/JES on local disk

    - by Golgauth
    I made an auto-generated movie clip using JES (Jython Environment for Students). I can play it without any problem using playMovie(), but I can't figure out how to save it to disk. The full script is located here.

        ...
        movie = synthesizeFrameAndCreateMovie("D:\\FOLDER")
        print movie
        writeQuicktime(movie, "D:\\FOLDER\\movie.mov", 30)   # [LINE 35]
        #playMovie(movie)

    I get this error when calling writeQuicktime():

        >>> ======= Loading Progam =======
        Movie, frames: 60
        The error was: Index: 0, Size: 0
        I wasn't able to do what you wanted.
        The error java.lang.IndexOutOfBoundsException has occured
        Please check line 35

    Note: I also tried writeAVI(), with exactly the same result. This error sounds like a Java bug in the Jython/JES library. I am running JES under Windows 7 and have the common QuickTime and AVI codecs installed, as well as the QTjava library in my JRE... Any brilliant ideas? EDIT: I also tried the Linux version, with the same result for both QuickTime and AVI.
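
    Since the trace reports "Index: 0, Size: 0", one thing worth checking before blaming the library is whether the movie's frame list is actually populated, i.e. whether any frame images were written to D:\FOLDER at all. A minimal diagnostic sketch (plain Jython standard library, no JES-specific calls; the .jpg extension is an assumption about how JES names its frame files):

        import os

        frame_dir = "D:\\FOLDER"   # the folder from the question
        frames = [f for f in os.listdir(frame_dir) if f.lower().endswith(".jpg")]
        print("found %d frame files in %s" % (len(frames), frame_dir))

    If this prints 0, writeQuicktime() has an empty frame list to assemble, which would be consistent with the IndexOutOfBoundsException.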

    Read the article

  • DiskArbitration and disk images (dmg)

    - by Nyxem
    Hello, I have a problem using the DiskArbitration framework. To catch disk image mounting I register DARegisterDiskMountApprovalCallback, but each time a disk image is mounted the callback is called twice. Why is that, and how can I solve it? Thanks.

    Read the article

  • How to copy a RAM-based file to disk efficiently

    - by Hui Jin
    I want to copy a large RAM-based file (located in the /dev/shm directory) to local disk. Is there an efficient way to do this, instead of reading it one char at a time or allocating another piece of memory? I can use only the C language here. Is there any way to put the memory file directly to disk? Thanks!
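
    The usual answer is to copy in large blocks rather than byte by byte; no second in-memory copy of the whole file is needed. A sketch of the idea (shown in Python since the idea is language-agnostic; in C the equivalent is the same read(2)/write(2) loop with a big buffer, or sendfile(2) to avoid the userspace copy entirely; both file paths are placeholders):

        import os

        BUF_SIZE = 1 << 20  # copy in 1 MiB blocks, not one char at a time

        src = os.open("/dev/shm/bigfile", os.O_RDONLY)
        dst = os.open("/tmp/bigfile", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        while True:
            chunk = os.read(src, BUF_SIZE)
            if not chunk:     # empty read means end of file
                break
            os.write(dst, chunk)
        os.close(src)
        os.close(dst)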

    Read the article

  • Iomega eGo Encrypt Plus encrypted partition not mounting properly, shows as "local disk"

    - by mosiac
    I'm working with an Iomega eGo 500GB Encrypt Plus portable drive. When I first set it up, installed the software, and set a user password, everything worked fine: the partition labeled "IomegaHDD" mounted properly and I could access the free space. Then I changed the ADMIN password, which required me to lock out the device, wait 60 seconds, log in to the Admin section and change the password, then lock out the device again, wait 60 seconds, and log back in with my user password. When I did that it of course unmounted the IomegaHDD partition to secure it, but when it remounts the partition it now shows up only as "local disk" and will not remount properly. I had not removed the cable while doing any of this. I have since tried unplugging and plugging back in to log in to the drive, but that has not worked. I'm wondering if I should remove every instance of "generic usb hub" from Device Manager and wait for it to re-add itself, or move the drive to a new set of USB ports temporarily to see if that helps. Any ideas?

    Read the article

  • ASP.Net application can no longer write to DB after having run out of disk space

    - by remi.despres-smyth
    I'm a software developer troubleshooting a sticky problem on a client's production server, and I've got a bit of a problem. They have a virtual server running Windows Server 2008, SQL Server 2008 R1 and IIS7. It was provisioned with two partitions: one with the OS (~15 GB), and the other with IIS's web sites (another ~15 GB). My application running on this server had been working perfectly well, up until about an hour ago, when it started throwing System.IO.IOException: "There is not enough space on disk". As soon as my client notified me, I cleared up some space on C:\, emptied the recycle bin, and restarted SQL Server and IIS. The web server came back up and the application is running, but it no longer saves information to the database. No error message comes up; the application can get information out of the DB, but it can no longer save data back to it. I rebooted the server, to no effect. I spoke with a sysadmin at the hosting company, and he says SQL Server appears to have come up fine and the database is not in read-only mode. I confirmed that, as I can add records to tables from SQL Server Management Studio. I looked at the event log immediately after trying to save an edited record in the app, and no new events appear there that I can tell. I'm assuming this is related to having run out of space, as it was all working fine prior to that, but I'm at a bit of a loss as to what exactly needs a kick in the pants to get going again. Can anyone help me out? What the heck is going on here?

    Read the article

  • Errors in PHP not logging or displaying

    - by Grant M
    I cannot get any errors to display on screen or be written to a log file. When I check the phpinfo() printout, the master and local values are the same:

        display_errors          On
        display_startup_errors  On
        error_log               /var/log/php.log
        error_reporting         E_ALL & ~E_NOTICE
        log_errors              On
        log_errors_max_len

    ls -l /var/log/php.log gives

        -rw-rw-rw- 1 root root 0 Jun 21 07:47 /var/log/php.log

    and for /var and /var/log:

        drwxrwxrwx 23 root root 4096 Jun 2 11:13 var

    When there is an error in the code, the browser shows nothing and says there is no source for the page. Any suggestions on where else to look, or what to change to make errors appear somewhere? (Anywhere would be good.) Edit: my error script is now

        <?php
        ini_set('display_errors', 1);
        error_reporting(E_ALL);
        echo "print from error.php 2 ";
        error // undefined constant, to cause logging to happen
        ?>

    This prints to the display and to the log:

        Notice: Use of undefined constant error - assumed 'error' in /var/www/piku_dev2/error.php on line 7

    But if I put garbage like #@$%$ on the error line, I get no error messages anywhere. Edit 2: The problem turned out to be in the httpd.conf file. I don't know what it was yet, as it was fixed by someone else.

    Read the article

  • I used disk copy to clone my drive, now my Windows 7 profile won't load correctly

    - by RzK
    I used EaseUS Disk Copy, after Acronis, Clonezilla, and Windows image restore all failed me. Basically it copies all sectors; I set it to skip bad sectors (40). The source drive works, it just gave me a couple of errors and stopped booting at one point. The new drive is an identical copy, minus the 40 bad sectors. The drive is set to C and the active partition, and I rebuilt the boot order. I've run sfc /scannow and chkdsk /r; chkdsk found 20KB of bad sectors, if I remember right. Now the issue: when I log into my profile, which was saved correctly, I get a blank light-blue wallpaper (the non-licensed look), explorer.exe is not running, and there are only 4 processes running in Task Manager, including Task Manager itself. I would try a repair install, but Ctrl-E will not open, and nothing will open once I force-start explorer.exe, almost as if all services are down. What should I do? A fresh install is almost not a possibility; I will try to fix this issue. sfc /scannow /offbootdir=c:\ /offwindir=c:\windows returns "Windows Resource Protection could not perform the requested operation".

    Read the article

  • Error configuring kerberos5 using MacPorts

    - by ario
    While trying to install libmemcached via MacPorts, I hit the following issue:

        libmemcached @0.40 +universal
        --->  Computing dependencies for libmemcached
        --->  Dependencies to be installed: cyrus-sasl2 kerberos5
        --->  Configuring kerberos5
        Error: org.macports.configure for port kerberos5 returned: configure failure: command execution failed
        Error: Failed to install kerberos5

    It tells me to look in the log for details. Here's the last bit of the log file:

        :info:configure checking for setupterm in -lcurses... no
        :info:configure checking for setupterm in -lncurses... no
        :info:configure checking for tgetent... no
        :info:configure configure: error: Could not find tgetent; are you missing a curses/ncurses library?
        :info:configure configure: error: /bin/sh './configure' failed for appl/telnet
        :info:configure Command failed: cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/work/krb5-1.7.2/src" && ./configure --prefix=/opt/local --disable-dependency-tracking --mandir=/opt/local/share/man
        :info:configure Exit code: 1
        :error:configure org.macports.configure for port kerberos5 returned: configure failure: command execution failed
        :debug:configure Error code: NONE
        :debug:configure Backtrace: configure failure: command execution failed
            while executing
        "$procedure $targetname"
        :info:configure Warning: targets not executed for kerberos5: org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
        :error:configure Failed to install kerberos5
        :debug:configure Registry error: kerberos5 not registered as installed & active.
            invoked from within
        "registry_active ${subport}"
            invoked from within
        "$workername eval registry_active \${subport}"
        :notice:configure Please see the log file for port kerberos5 for details: /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/main.log

    It seems to say it's missing ncurses. Looks like it's there though, since if I run port installed I see these:

        ncurses @5.7_0
        ncurses @5.9_1 (active)
        ncursesw @5.7_0

    Any ideas on how to get around this error?

    Read the article

  • System Center Essentials server running out of disk space due to storing old updates

    - by Ricket
    We have a System Center Essentials (SCE) server to filter updates to our laptops. We've configured it to download each update, and the laptops then get the update from this server; this of course reduces our internet bandwidth usage and the time it takes for employees to receive updates, which reduces the complaints we get about how long updates take. However, we currently have a total of 2,255 updates stored on the server. SCE gives a breakdown:

        Updates with installation errors: 29
        Updates needed by computers: 280
        Updates installed/up-to-date: 0
        Updates with no status: 1946

    Our little server has 68 GB of hard disk space, and the updates are currently taking 32 GB and counting. Some of the updates date back to 2003, but we can't figure out a way to delete them to free up space on the server. Right-clicking an update and clicking Uninstall threatens to remove the update from all computers, which is not what we want. Some of the updates even inform us upon viewing: "This update has been replaced by a newer update. Before declining this update, it is recommended that you approve the new update first and verify that this update is no longer needed by any computers." How do you prevent your SCE server from filling its hard drive space? Is there a way to configure the server to keep only updates that are still needed? Furthermore, why (in the above breakdown) are there so many updates with "no status" and 0 updates that are "installed/up-to-date"?

    Read the article

  • OpenVPN server throws an "access denied" error

    - by HackToHell
    OpenVPN refuses to start up and exits with this error, ever since I upgraded Ubuntu from 11.04 to 11.10:

        Dec 14 19:12:38 oogle ovpn-server[32150]: OpenVPN 2.2.0 i686-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul 4 2011
        Dec 14 19:12:38 oogle ovpn-server[32150]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open openvpn-status.log for WRITE
        Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open ipp.txt for READ/WRITE
        Dec 14 19:12:38 oogle ovpn-server[32150]: Diffie-Hellman initialized with 1024 bit key
        Dec 14 19:12:38 oogle ovpn-server[32150]: Cannot load private key file server.key: error:0200100D:system library:fopen:Permission denied: error:20074002:BIO routines:FILE_CTRL:system lib: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib
        Dec 14 19:12:38 oogle ovpn-server[32150]: Error: private key password verification failed
        Dec 14 19:12:38 oogle ovpn-server[32150]: Exiting

    A second start attempt eight seconds later (pid 32201) logs exactly the same sequence, from the NOTE lines through "Cannot load private key file server.key: ... Permission denied" and "private key password verification failed" to "Exiting".

    Read the article

  • Windows 7 doesn't boot when second hard disk is connected

    - by kenshin9786
    Apologies in advance for my bad English. I have two hard disks, one SATA and one IDE. I have Windows XP and 7 on the SATA drive, and Ubuntu on the IDE one. Both of them boot and work; the BIOS recognizes them. But after I installed Windows 7 and connected the IDE drive, Windows 7 freezes on "Starting Windows" (the black screen with the Windows logo). If I unplug the IDE drive, it starts normally. Windows XP starts normally in both situations (with or without the IDE drive connected), and the same goes for Ubuntu (it works with both disks connected, or with just the IDE drive it lives on). The IDE drive is in good health according to SMART. The IDE drive is first in the boot order: boot goes to Ubuntu's GRUB first, then by default to the Windows 7 bootloader, and then to XP. So I don't think the problem is the bootloader or GRUB. I've read that this can be solved by formatting the "problematic" hard disk, because Windows 7 supposedly cannot handle that many active partitions, or something like that. But that's not an option for me: I don't want to lose my Ubuntu install or leave it unbootable. How can I solve this without those consequences? Any help would be appreciated.

    Read the article

  • Formula to calculate probability of unrecoverable read error during RAID rebuild

    - by OlafM
    I need to compare the reliability of different RAID systems with either consumer or enterprise drives. Ignoring mechanical problems, the formula for the probability that a rebuild completes without hitting an unrecoverable read error (URE) is simple:

        error_probability = 1 - (1 - per_bit_error_rate)^bits_read

    With 3 TB drives I get:

        38% probability of experiencing a URE during rebuild for a 2+1-disk RAID5 (4.7% for enterprise drives)
        21% for a RAID1 (2.4% for enterprise drives)
        51% for the 3+1 RAID5 often used in SOHO products like Synology's

    Most people don't know about this. Calculating the error for single-disk tolerance is easy; my question concerns systems tolerant to multiple disk failures (RAID6/Z2, RAIDZ3, and RAID1 with multiple disks). If only the first disk is used for the rebuild and the second one is read again from the beginning in case of a URE, then the error probability is the one calculated above, squared (14.5% for consumer RAID5 2+1, 4.5% for consumer RAID1 1+2). However, I suppose (at least in ZFS, which has full checksums!) that the second parity/available disk is read only where needed, meaning that only a few sectors are required: how many UREs can possibly happen on the first disk? Not many, otherwise the error probability for single-disk-tolerance systems would skyrocket even more than I calculated. If I'm correct, a second parity disk would practically lower the risk to extremely low values. Am I correct?
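
    For reference, a short sketch that reproduces the single-disk-tolerance numbers above, assuming the usual datasheet URE rates (1 error per 10^14 bits read for consumer drives, 1 per 10^15 for enterprise) and a rebuild that reads every surviving drive in full:

        def rebuild_ure_probability(per_bit_error_rate, drives_read, drive_bytes):
            bits_read = drives_read * drive_bytes * 8
            return 1 - (1 - per_bit_error_rate) ** bits_read

        TB = 10 ** 12
        for label, rate in (("consumer", 1e-14), ("enterprise", 1e-15)):
            p22 = rebuild_ure_probability(rate, 2, 3 * TB)  # 2+1 RAID5: read 2 drives
            p11 = rebuild_ure_probability(rate, 1, 3 * TB)  # 2-disk RAID1: read 1 drive
            p33 = rebuild_ure_probability(rate, 3, 3 * TB)  # 3+1 RAID5: read 3 drives
            print("%-10s RAID5 2+1: %4.1f%%  RAID1: %4.1f%%  RAID5 3+1: %4.1f%%"
                  % (label, 100 * p22, 100 * p11, 100 * p33))

    This prints roughly 38%, 21% and 51% for consumer drives, and 4.7% and 2.4% for the enterprise RAID5 2+1 and RAID1 cases, matching the figures above.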

    Read the article

  • How to discover the true identity of a hard disk?

    - by F21
    I have 2 fake external hard drives that claim a storage capacity of 2TB. I pulled the enclosures apart, and the hard drives seem to be refurbished ones with their labels replaced by Barracuda LP 2000 GB labels (the serial numbers on both labels are the same). Interestingly, one of the drives has 160G written on it in pencil. The counterfeiters seem to have done something to the firmware, though, because CrystalDiskInfo reports both as 2TB ST2000DL003 drives, with identical serial numbers (and these are 2 different drives!). I then deleted the 1.81 TB partition in Windows Disk Management and tried to create a new one and format it. At that point the drives make the kind of noise that is common to dying drives. I am not interested in using these drives for production, but I am interested in finding their true identity (manufacturer, serial number, model number, etc.) and restoring them to factory defaults with the right capacity. Can this be done without any special equipment? It would be an interesting learning exercise. How is this done? Did they have to tamper with the controller board? I would assume that changing the firmware doesn't change the serial number at all.

    Read the article

  • Export-Mailbox Error

    - by tuck918
    All, I am using export-mailbox to move some data, and it works fine until I get this error:

        StatusMessage : Error occurred in the step: Moving messages. Failed to copy messages to the destination mailbox store with error: MAPI or an unspecified service provider. ID no: 00000000-0000-00000000

    This is the command I am using:

        export-mailbox -identity mailboxA -targetmailbox mailboxB -targetfolder folderA -allowmerge

    We are on SP2, and I am running this under an account that is not a domain or enterprise admin. The account has the Exchange Server Administrator permission on both the source and target Exchange mailbox servers, is a member of the local Administrators group on both servers, and has Full Access permission on both the source and target servers. The issue happens at any time, and I am only trying to run this on one mailbox, the only mailbox I need to run it on. The event log shows "Error Exchange Migration Export Mailbox Event 1008". The migration log just shows that it was running okay, then gives the same error as above: "Error was found for mailboxA ([email protected]) because: Error occurred in the step: Moving messages. Failed to copy messages to the destination mailbox store with error: MAPI or an unspecified service provider. ID no: 00000000-0000-00000000, error code: -1056749164". Any ideas on what to do or try?

    Read the article

  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL server for a while and have noticed that I/O hits 100% every so often in Task Manager and Perfmon. I have normally been able to correlate these spikes with SUSPENDED processes in SQL Server Management Studio when I execute exec sp_who2. The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup:

        System drive (Windows) on RAID 1, two 280GB drives
        SQL on RAID 10 (two mirrored pairs of 280GB drives, in two spans)

    This is a database that is hammered during the day but is pretty inactive at night. The DB size is currently about 13GB, and it is used by approximately 200 (and growing) users a day. I have a couple of ideas I'm toying with:

        1. Checking the indexes and reindexing some tables.
        2. Adding an additional RAID 1 (with 2 new, smaller HDs) and moving SQL's transaction log file (LDF) onto the new RAID.

    For #2, my question is this: would we really be increasing disk performance (I/O) by moving data off the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1. Furthermore, SQL must write to the transaction log before writing to the database. But on the flip side, we'd be reducing the amount of data written to the RAID 10, which is where all of the "meat" is, thereby improving that RAID's performance for read requests. Is there any way to find out what our current limiting factor is (the drives vs. the RAID controller)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense. But if the limiting factor is the controller itself, then I think we're approaching this the wrong way. Finally, are we just wasting our time? Should we instead focus our efforts on #1 (reindexing tables, reducing network latency where possible, etc.)?

    Read the article

  • Managing disks in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over the functionality of my current servers. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32GB of ECC RAM. My issue is the following: I would like a FreeBSD file server with ZFS managing the disks, but I also need other VMs, e.g. a shell/git server, a mail server, etc. I'm wondering how to deal with two issues:

        1. I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller directly to the FreeBSD system as PCI passthrough?
        2. I want to maximize the reliability of the setup. On which disks should I install the hypervisor and keep the VM system disks?

    For (2) I have the option of a RAID setup on the SAS controller, using that as the system disk to store the hypervisor as well as the VM images. However, this makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how do SSDs compare to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with e.g. WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but this could be done over NFS between the VMs. BTW, this is for home use, so the load is not really that big. What I'm looking for is best practices for splitting up a server.

    Read the article

  • RAID-Z inaccessible after putting one disk offline

    - by varesa
    I have installed FreeNAS on a test server with 3x 1TB drives set up in RAID-Z. I offlined one of the disks (from the FreeNAS web UI), and the array became degraded, as I think it should. The problem is that the array then became inaccessible. I thought a RAID like this should be able to run fine with one of the disks missing. Very soon after I offlined and pulled out the disk, the iSCSI share disappeared from an ESXi host's datastores. I also ssh'd into the FreeNAS server and tried simply executing ls /mnt/raid (/mnt/raid/ being the mount point); the whole terminal froze, not accepting ^C or anything.

        # zpool status -v
          pool: raid
         state: DEGRADED
        status: One or more devices are faulted in response to IO failures.
        action: Make sure the affected devices are connected, then run 'zpool clear'.
           see: http://www.sun.com/msg/ZFS-8000-HC
         scrub: none requested
        config:

            NAME                                            STATE     READ WRITE CKSUM
            raid                                            DEGRADED      1    30     0
              raidz1                                        DEGRADED      4    56     0
                gptid/c8c9e44c-08e1-11e2-9ba6-001b212a83ea  ONLINE        3    60     0
                gptid/c96f32d5-08e1-11e2-9ba6-001b212a83ea  ONLINE        3    63     0
                gptid/ca208205-08e1-11e2-9ba6-001b212a83ea  OFFLINE       0     0     0

        errors: Permanent errors have been detected in the following files:

            /mnt/raid/
            raid/iscsivol:<0x0>
            raid/iscsivol:<0x1>

    Have I misunderstood how RAID-Z works, or is there something else going on? It would not be nice to have the same thing happen on a production system...

    Read the article

  • Disk wipe preferences

    - by hmvm123
    I manage a pool of systems that are loaded with software and sent to potential customers for evaluations, which often leaves sensitive information on the drives. Before shipping the systems back, customers typically like a standard wipe to be run to clean the drives. Most are familiar with DBAN, so I try to make sure it works on my systems. Unfortunately, this means I'm usually in RAID-driver hell, trying to make sure the DBAN versions out there support the controllers my systems ship with (various 3ware and LSI models). Consequently, I have DBAN 1.0.7 working on some, a beta version of 2.0 on others, and 2.2.6 on some of the latest SSD-based ones. Now, with the LSI controllers (1064/1068) on my IBM x3550 M3s, I'm getting no love at all. Is there a way out? Do you use buildroot with DBAN and try to piece the drivers together? Any other tools, free or commercial, that stay updated? I'm trying to walk people of varying technical proficiency through this, so a boot disk with simple choices is preferable.

    Read the article

  • Is this hard disk dead?

    - by Korjavin Ivan
    Not sure if this is the right site for this question, but let me try. Lately I've had a problem with a hard disk: sometimes it makes a strange sound, and I see this in the logs:

        $ dmesg | grep ata4
        [29409.945516] ata4.00: exception Emask 0x10 SAct 0xf SErr 0x90202 action 0xe frozen
        [29409.945529] ata4.00: irq_stat 0x00400000, PHY RDY changed
        [29409.945538] ata4: SError: { RecovComm Persist PHYRdyChg 10B8B }
        [29409.945546] ata4.00: failed command: READ FPDMA QUEUED
        [29409.945562] ata4.00: cmd 60/30:00:56:22:5f/00:00:00:00:00/40 tag 0 ncq 24576 in
        [29409.945573] ata4.00: status: { DRDY }
        [29409.945580] ata4.00: failed command: READ FPDMA QUEUED
        [29409.945594] ata4.00: cmd 60/18:08:8e:22:5f/00:00:00:00:00/40 tag 1 ncq 12288 in
        [29409.945605] ata4.00: status: { DRDY }
        [29409.945611] ata4.00: failed command: READ FPDMA QUEUED
        [29409.945625] ata4.00: cmd 60/08:10:46:02:66/00:00:00:00:00/40 tag 2 ncq 4096 in
        [29409.945635] ata4.00: status: { DRDY }
        [29409.945641] ata4.00: failed command: READ FPDMA QUEUED
        [29409.945656] ata4.00: cmd 60/80:18:ee:04:66/00:00:00:00:00/40 tag 3 ncq 65536 in
        [29409.945666] ata4.00: status: { DRDY }
        [29409.945679] ata4: hard resetting link
        [29413.976083] ata4: softreset failed (device not ready)
        [29413.976097] ata4: applying SB600 PMP SRST workaround and retrying
        [29414.148070] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [29414.184986] ata4.00: SB600 AHCI: limiting to 255 sectors per cmd
        [29414.243280] ata4.00: SB600 AHCI: limiting to 255 sectors per cmd
        [29414.243292] ata4.00: configured for UDMA/133
        [29414.243324] ata4: EH complete
        [680674.804563] ata4: exception Emask 0x50 SAct 0x0 SErr 0x90a02 action 0xe frozen
        [680674.804575] ata4: irq_stat 0x00400000, PHY RDY changed
        [680674.804584] ata4: SError: { RecovComm Persist HostInt PHYRdyChg 10B8B }
        [680674.804603] ata4: hard resetting link
        [680678.840561] ata4: softreset failed (device not ready)

    Is this ata4 SATA hard drive dead? Must I change it ASAP? Should I provide more info?

    Read the article

  • Hardware freeze during disk activity

    - by Thomi
    I built myself a Linux-based NAS. It has several drives of various sizes and ages in an LVM configuration, with 800GB or so of data, served by a simple Samba server. This was working flawlessly, but after physically moving the machine, it has developed a strange fault: whenever I do something on the server that causes disk activity, the entire machine freezes hard. This kills any open network connections to the box and generally makes it useless. If I leave the machine for a few minutes it seems to come right again, but obviously this isn't really a solution. There are no error or warning messages in syslog or the kernel logs. If I power the machine on and leave it idle, it runs for several days without locking up; after that much time I stopped testing. It doesn't freeze instantly: obviously it doesn't freeze while booting, and I can normally log in via SSH and poke around in a few log files for a couple of minutes before it dies. My question is: what diagnostic tests can I run to determine the cause?

    Read the article

  • Incredibly low disk performance on HP DL385 G7

    - by 3molo
    Hi, as a test of the Opteron processor family, I bought an HP DL385 G7 6128 with an HP Smart Array P410i controller (no cache memory). The machine has 20GB RAM and 2x146GB 15k rpm SAS + 2x250GB SATA2 drives, both pairs in RAID 1 configurations. I run VMware ESXi 4.1. Problem: even with only one virtual machine (I tried Linux 2.6, Windows Server 2008, and Windows 7), the VMs feel really sluggish. With Windows 7, the VMware Converter installation even timed out. I have tried both the SATA and SAS disks; the SATA disks are nearly unusable, while the SAS disks feel extremely slow. I can't see a lot of disk activity in the Infrastructure Client, but I haven't looked for causes or even run diagnostics, because I have a feeling it's either the cheap RAID controller or simply its lack of cache memory. Despite the problems, I went ahead and installed a virtual machine that serves a key function, so it's not easy to take the host down and run diagnostics. I would very much like to know what you think: is this more likely a problem with the controller/disks, or is it just low performance from budget components? Thanks in advance.

    Read the article

  • Discrepancy between file size on disk and ls output

    - by smokinguns
    I have a script that checks for gzipped files larger than 1MB and outputs them along with their sizes as a report. This is the code:

        myReport=`ls -ltrh "$somePath" | egrep '\.gz$' | awk '{print $9,"=>",$5}'`
        # Count files that exceed 1MB
        oversizeFiles=`find "$somePath" -maxdepth 1 -size +1M -iname "*.gz" -print0 | xargs -0 ls -lh | wc -l`
        if [ $oversizeFiles -eq 0 ]; then
            status="PASS"
        else
            status="CHECK FAILED. FOUND FILES GREATER THAN 1MB"
        fi
        echo -e $status"\n"$myReport

    The problem is that the ls command reports the file sizes as 1.0MB in the report, but the status is "FAIL" because the $oversizeFiles variable's value is 2. I checked the file sizes on disk, and two files are 1.1MB. Why this discrepancy? How should I modify the script so that it generates an accurate report? BTW, I'm on a Mac. Here is what the man page for find says on Mac OS X:

        -size n[ckMGTP]
               True if the file's size, rounded up, in 512-byte blocks is n. If n
               is followed by a c, then the primary is true if the file's size is
               n bytes (characters). Similarly, if n is followed by a scale
               indicator, then the file's size is compared to n scaled as:
                     k    kilobytes (1024 bytes)
                     M    megabytes (1024 kilobytes)
                     G    gigabytes (1024 megabytes)
                     T    terabytes (1024 gigabytes)
                     P    petabytes (1024 terabytes)
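
    The likely culprit is that both tools round, in opposite directions: a 1.1MB file is about 1.05MiB, which ls -h displays as 1.0M, while BSD find's -size +1M rounds the size up to whole megabytes (here, 2) and so still counts it as over 1M. One way to sidestep both roundings is to compare exact byte counts; a minimal sketch of the idea (Python used for illustration; the directory path stands in for $somePath):

        import os

        some_path = "/path/to/files"   # stands in for $somePath
        threshold = 1024 * 1024        # 1MB as an exact byte count

        oversize = 0
        for name in sorted(os.listdir(some_path)):
            if name.endswith(".gz"):
                size = os.path.getsize(os.path.join(some_path, name))
                print("%s => %d bytes" % (name, size))
                if size > threshold:
                    oversize += 1

        print("PASS" if oversize == 0
              else "CHECK FAILED: %d file(s) over 1MB" % oversize)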

    Read the article

  • Apache displays an error page halfway through PHP page execution

    - by Shep
    I've just installed Zend Server Community Edition on a Windows Server 2003 box, but there's a problem with the display of a lot of our PHP pages. The code was previously running under the same version of PHP (5.3) on IIS without any issues. By the looks of things, Apache (installed as part of Zend Server) errors out during the rendering of the page when it comes across something it doesn't like in the PHP. Going through the code, I've been able to get past some of the problems by removing the error-suppression operator (@) from function calls and by changing the format of some includes. However, I can't do this for the whole site! Weirdly, the error code is reported as "200 OK". The snippet below shows how the Apache error HTML interrupts the regular HTML of the page, cutting in mid-tag:

        <p>Ma quande lingues coalesce, li grammatica del resultant lingue es plu simplic e regulari quam ti del coalescent lingues.</<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>200 OK</title>
        </head><body>
        <h1>OK</h1>
        <p>The server encountered an internal error or misconfiguration and was unable to complete your request.</p>
        <p>Please contact the server administrator, [email protected] and inform them of the time the error occurred, and anything you might have done that may have caused the error.</p>
        <p>More information about this error may be available in the server error log

    The Apache error log doesn't offer any explanation for this, and I've exhausted my Googling skills, so any help would be greatly appreciated. Thanks.

    Read the article

  • Hard-disk-based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We've decided that we're not backing up nearly everything we would like to, and that we still have a lot of vulnerabilities. To get where we want to be, we're going to have to back up a lot more servers than we currently do. All of our internal servers have some sort of directly attached drive (e.g. a LaCie RAID box or a simple portable hard drive) for backups, but what we want is to get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 file server, so to back anything up to tape, it has to be funneled through that file server. With the increase we have planned, I don't think funneling everything through the file server is the right course of action, and I'm thinking a second backup device would be more appropriate. I would like your input on a couple of ideas:

        1. Using HDDs instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so the media don't need years and years of shelf life, and I'm wondering if something HDD-based would be better.
        2. Something accessible over the network. Instead of having the device directly attached to one specific machine, make it available to all the servers over the network.

    Our file server is a 12-disk RAID 6 setup. I was thinking of something like that, but with no RAID involved: all disks standalone, so they can be used/installed/removed on an individual basis. Does any such thing exist? Thanks for your ideas. I'm really interested to hear about some of the solutions you're using.

    Read the article
