Search Results

Search found 52450 results on 2098 pages for 'disk operating system'.

  • Set up access to SAS RAID drives with NTFS partitions on a CentOS machine

    - by Quanano
    We have a Dell Poweredge 2900 system with Adaptec 39320A SCSI CONTROLLER CARD and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on the other raid array with a different controller and it is working fine. We are now trying to access the drives shown above and they are not being shown in /dev as sdb, etc. sda is the drive that we installed centos on and it has sda1, sda2, sda3, etc. The CDROM has been picked up as well. If I scan for scsi devices then the perc and adaptec controllers are both found. sg0 is the CDROM and sg2 is the centos installed, however I think sg1 is the other drive but I cannot see anyway to mount the partitions, as only the drive is listed in /dev. Thanks. EXTRA INFO fdisk -l Disk /dev/sda: 72.7 GB, 72746008576 bytes 255 heads, 63 sectors/track, 8844 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x11e3119f Device Boot Start End Blocks Id System /dev/sda1 * 1 64 512000 83 Linux Partition 1 does not end on cylinder boundary. /dev/sda2 64 8845 70528000 8e Linux LVM Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes 255 heads, 63 sectors/track, 4186 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes 255 heads, 63 sectors/track, 2570 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes 255 heads, 63 sectors/track, 2023 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table These are all from the install hdd not the additional hard drives modprobe a320raid FATAL: Module a320raid not found. lsscsi -v [0:0:0:0] cd/dvd TSSTcorp CDRWDVD TS-H492C DE02 /dev/sr0 dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0] [4:0:10:0] enclosu DP BACKPLANE 1.05 - dir: /sys/bus/scsi/devices/4:0:10:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0] [4:2:0:0] disk DELL PERC 5/i 1.03 /dev/sda dir: /sys/bus/scsi/devices/4:2:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0] . 
lsmod Module Size Used by fuse 66285 0 des_generic 16604 0 ecb 2209 0 md4 3461 0 nls_utf8 1455 0 cifs 278370 0 autofs4 26888 4 ipt_REJECT 2383 0 ip6t_REJECT 4628 2 nf_conntrack_ipv6 8748 2 nf_defrag_ipv6 12182 1 nf_conntrack_ipv6 xt_state 1492 2 nf_conntrack 79453 2 nf_conntrack_ipv6,xt_state ip6table_filter 2889 1 ip6_tables 19458 1 ip6table_filter ipv6 322029 31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6 bnx2 79618 0 ses 6859 0 enclosure 8395 1 ses dcdbas 9219 0 serio_raw 4818 0 sg 30124 0 iTCO_wdt 13662 0 iTCO_vendor_support 3088 1 iTCO_wdt i5000_edac 8867 0 edac_core 46773 3 i5000_edac i5k_amb 5105 0 shpchp 33482 0 ext4 364410 3 mbcache 8144 1 ext4 jbd2 88738 1 ext4 sd_mod 39488 3 crc_t10dif 1541 1 sd_mod sr_mod 16228 0 cdrom 39771 1 sr_mod megaraid_sas 77090 2 aic79xx 129492 0 scsi_transport_spi 26151 1 aic79xx pata_acpi 3701 0 ata_generic 3837 0 ata_piix 22846 0 radeon 1023359 1 ttm 70328 1 radeon drm_kms_helper 33236 1 radeon drm 230675 3 radeon,ttm,drm_kms_helper i2c_algo_bit 5762 1 radeon i2c_core 31276 4 radeon,drm_kms_helper,drm,i2c_algo_bit dm_mirror 14101 0 dm_region_hash 12170 1 dm_mirror dm_log 10122 2 dm_mirror,dm_region_hash dm_mod 81500 11 dm_mirror,dm_log
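
    If the Adaptec-attached disks ever do get exposed to the kernel, the usual sequence is: rescan the SCSI hosts, re-read the partition tables, then mount the NTFS partitions. A minimal sketch, assuming the aic79xx driver already claims the card (the lsmod output above shows it loaded) and that the ntfs-3g package is installed; /dev/sdb1 is a placeholder for whatever node actually appears:

        # ask every SCSI host adapter to rescan its bus for new devices
        for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$h"; done
        # re-read partition tables and list what the kernel now knows about
        partprobe
        cat /proc/partitions
        # mount one of the NTFS partitions read-only to start with
        mkdir -p /mnt/sas1
        mount -t ntfs-3g -o ro /dev/sdb1 /mnt/sas1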

  • libvirt upgrade caused VMs to not see drives (boot media not found)

    - by bias
    I upgraded to Ubuntu 12.04.1 and now libvirt (via open nebula) successfully runs vms but they aren't finding the 2 drives (specifically, the boot drive). One is "hd" the other is "cdrom". The machine boots but fails and displays something like "boot media not found hd" (this was in a vnc terminal and I didn't copy the output anywhere so that's not the verbatim message). I tried constructing a new disk using the new version of qemu (via vmbuilder) and this new machine has the same problem as the old machine. In case it matters (I can't see why it would) I'm using open nebula to manage the machines. There's nothing relevant in any of the logs: syslog, libvirtd, oned. Which is to say nothing interesting/anomalous is reported when the machine is brought up. Versions libvirt 0.9.8-2ubuntu17.4 qemu-kvm 1.0+noroms-0ubuntu14.3 The libvirt xml config portions (relavent) <os> <type arch='x86_64' machine='pc-1.0'>hvm</type> <boot dev='hd'/> </os> ... <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/var/lib/one//203/images/disk.0'/> <target dev='sda' bus='scsi'/> <alias name='scsi0-0-0'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/var/lib/one//203/images/disk.1'/> <target dev='sdc' bus='scsi'/> <readonly/> <alias name='scsi0-0-2'/> <address type='drive' controller='0' bus='0' unit='2'/> </disk> <controller type='scsi' index='0'> <alias name='scsi0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </controller> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </memballoon> ... </devices> The libvirt/qemu log contains 2012-11-25 22:19:24.328+0000: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name one-204 -uuid 4be6c276-19e8-bdc2-e9c9-9ca5352f2be3 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-204.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device lsi,id=scsi0,bus=pci.0,addr=0x5 -drive file=/var/lib/one//204/images/disk.0,if=none,id=drive-scsi0-0-0,format=qcow2 -device scsi-disk,bus=scsi0.0,scsi-id=0,drive=drive-scsi0-0-0,id=scsi0-0-0,bootindex=1 -drive file=/var/lib/one//204/images/disk.1,if=none,media=cdrom,id=drive-scsi0-0-2,readonly=on,format=raw -device scsi-disk,bus=scsi0.0,scsi-id=2,drive=drive-scsi0-0-2,id=scsi0-0-2 -netdev tap,fd=18,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:00:68,bus=pci.0,addr=0x3 -netdev tap,fd=19,id=hostnet1 -device rtl8139,netdev=hostnet1,id=net1,mac=02:00:ad:f0:1b:94,bus=pci.0,addr=0x4 -usb -vnc 0.0.0.0:204 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 kvm: -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:00:68,bus=pci.0,addr=0x3: pci_add_option_rom: failed to find romfile "pxe-rtl8139.rom" kvm: -device rtl8139,netdev=hostnet1,id=net1,mac=02:00:ad:f0:1b:94,bus=pci.0,addr=0x4: pci_add_option_rom: failed to find romfile "pxe-rtl8139.rom"
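
    Since the logs show nothing anomalous, two quick host-side checks can narrow this down to either a damaged disk image or a controller/boot-order issue. A diagnostic-only sketch using the domain name and paths from the log above (one-204); it changes nothing, it only shows what qemu was actually given:

        # is the boot image still a readable qcow2 file with the expected virtual size?
        qemu-img info /var/lib/one/204/images/disk.0
        # which disks, targets, controllers and boot order did libvirt hand to qemu?
        virsh dumpxml one-204 | grep -E "<boot|<disk|<target|<controller"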

  • Making files generally available on a Linux system (when security is relatively unimportant)?

    - by Ole Thomsen Buus
    Hi, I am using Ubuntu 9.10 on a desktop PC. I have a secondary 1 TB hard drive with a single big logical partition (currently formatted as ext4). It is mounted as /usr3 with the options user, exec in /etc/fstab. I am doing high-speed imaging experiments. Well, only 260 fps, but that still creates many individual files since each frame is saved as one PNG file. The machine is not used by anyone other than me, which is why the default security model posed by Ubuntu is not necessary. What is the best way to make the entire contents of /usr3 generally available on all systems, in case I need to move the hard drive to another Ubuntu 9.x or 10.x machine? When grabbing images with the FireWire camera I use a self-made, console-based grabbing utility run under sudo. This creates all files with root as owner and group. I am logged in as user otb, and usually I do the following when I have to make files generally available to otb: sudo chown otb -R * sudo chgrp otb -R * sudo chmod a=rwx -R * This takes some time since the disk now contains ~200,000 individual files. After this, how would Linux behave if I moved the hard drive to another system where the user otb is also available? Would the files still be accessible without using sudo?
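
    The three recursive passes above can be collapsed into a single chown that sets owner and group together, plus a chmod that grants execute only where it makes sense; a minimal sketch using the /usr3 mount point and the otb user from the question. Note that ext4 stores ownership as a numeric UID/GID, so on another machine the files belong to otb only if otb has the same UID there:

        # owner and group in one recursive pass
        sudo chown -R otb:otb /usr3
        # read/write for everyone, execute only on directories and files already marked executable
        sudo chmod -R a+rwX /usr3
        # ownership follows the numeric UID, so compare this value on both machines
        id -u otb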

  • Weird system freeze. Nothing works: keyboard/mouse/reset button - Ubuntu 12.04 64-bit [closed]

    - by Simon
    I have a fresh PC: i5 3570K with Intel HD 4000 onboard, ASRock Z77 Extreme4-M, 8GB Adata 1333MHz RAM, 1TB Seagate 7200rpm drive. I also have fresh systems - Ubuntu/Win7 - and today there was something strange: Ubuntu just froze, twice. Everything stopped; even the keyboard and mouse weren't responding. Even the RESET button didn't work. Right now memtest is running, but I'd like to know where else I can look for the cause. Can it be a software fault if even reset isn't working? Only a long press of the reset button rebooted the PC... I'm a bit confused. Or should I test components - CPU, motherboard, disk? Which logs in Ubuntu should I check to diagnose the cause? I've had a few adventures with this PC already. The shipped motherboard was broken (ASRock Z68 Extreme3) and had an old BIOS, so I had to contact the reseller, replace it, and in the end decided on the Z77, but everything took 3 weeks, so I have bad feelings... Edit: Both freezes were while editing something in gedit (it could be coincidence) and after a few updates today - when memtest finishes I'll check what was updated.
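
    On the log question: Ubuntu 12.04 writes kernel and system messages to /var/log/kern.log and /var/log/syslog, so after a reboot the entries from just before the freeze can be searched for machine-check or panic traces. A minimal sketch (stock 12.04 paths; a hard lock-up that even ignores reset often leaves nothing in the logs, which itself points toward hardware):

        # kernel messages from before the freeze, including the rotated file
        grep -iE "mce|machine check|panic|oops" /var/log/kern.log /var/log/kern.log.1
        # general system log around the time of the freeze
        less /var/log/syslog.1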

  • How to get my SanDisk Cruzer Blade 4GB disk back to normal?

    - by Jack
    Currently, my SanDisk Cruzer Blade 4GB has become a 64MB Firebird RAW flash drive (thumb drive). I don't know why it became like this, but when I plug it into a PC, it suddenly transforms itself into a 64MB flash drive. (From 4 GB to 64 MB, that is a huge change!) I read the following articles: http://forums.sandisk.com/t5/All-SanDisk-USB-Flash-Drives/Cruzer-Blade-will-not-format/td-p/214932 http://forums.whirlpool.net.au/archive/1691847 http://forum.hddguru.com/sandiskfirebird-64mb-t23539.html and noticed that their solutions do not work at all. Some of their solutions are as follows: Using the HP USB Disk Storage Format Tool (http://files.extremeoverclocking.com/file.php?f=1970), which did not work for me as it could not format. Using h2testw to see if it is genuine, which did not work for me because it is a RAW partition. Returning it to the vendor, which I doubt will help since the product only has a 1-year warranty and it has expired. I also noticed that the flash drive is very fragile: after a few uses, the flash drive has ended up something like the following (the upper cover of the USB connector was broken or torn). So, I'm wondering if someone has any good solution to fix it back, so that at least I can retrieve my data from the fragile thumb drive.
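
    Before trying more format tools it may be worth imaging whatever the controller still exposes, since each failed format attempt risks losing more. A best-effort sketch, assuming the stick shows up as /dev/sdb on a Linux machine and the gddrescue package is installed; when the controller only presents 64MB, only that much can be read back, so this is a salvage step, not a fix:

        # confirm which device node the stick got, and its reported size
        lsblk -o NAME,SIZE,MODEL
        # copy everything still readable into an image file, keeping a map so runs can resume
        sudo ddrescue -n /dev/sdb cruzer.img cruzer.map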

  • Hard Disk: S.M.A.R.T. Status BAD, Back up and replace

    - by Nick
    I have a laptop hard drive I was trying to use in my new media computer. The case is small and can accommodate two 2.5" drives, no 3.5" drives. I had been using the hard drive as a storage drive until now. When I go to install Windows on the hard drive, I'm first prompted by the BIOS with: Hard Disk: S.M.A.R.T. Status BAD, Back up and replace. And then again in Windows Setup, informing me that the hard drive is bad. So I did a full format of the drive and tried again. Same error. So I took it out and hooked it back up to my other computer via a SATA USB adapter kit (maybe the cause?). The hard drive is recognized fine, and when I scanned it for errors by going right click -> Properties -> Tools -> Error checking, it reported that the hard drive is fine. I have tried 3 different SATA cables and multiple jumpers. When I plugged in my 1.5 TB 3.5" drive, the computer that gives me the S.M.A.R.T. error on the 2.5" drive recognizes it with no problems. Any ideas on why this is happening and how I can fix it?
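
    Windows' error checking only inspects the filesystem, not the drive's own SMART counters, so the two results don't contradict each other. One way to see the attributes the BIOS is objecting to is smartmontools from a Linux live CD (or its Windows build); a minimal sketch assuming the 2.5" drive appears as /dev/sdb - over a USB adapter the -d sat option is sometimes needed for SMART commands to pass through:

        # overall health verdict (PASSED/FAILED) as reported by the drive itself
        sudo smartctl -H /dev/sdb
        # full attribute table - look at Reallocated_Sector_Ct and Current_Pending_Sector
        sudo smartctl -a -d sat /dev/sdb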

  • Why can't my hard disk boot from the BIOS?

    - by Mario
    I installed a new SATA DVD burner. When I turned on the machine (Windows 7) it didn't boot. It can boot from a lubuntu CD. There is an option in lubuntu to boot from the first hard disk; if I select it, the machine boots normally into Windows 7. So I can boot from the CD but not from the BIOS. I checked all the options more than once: boot from HD, don't boot from removable, boot from USB, boot from optical. The order of the boot sequence is HD then DVD. I tried booting with only the HD; I disconnected both DVDs. I even tried recovering the MBR: bootsect, bootrec, fixmbr, rebuildbcd, nt60, etc. So, the question is: is there a reason for this, and what's the difference between booting from the BIOS (as I think of it) and booting from the DVD? The BIOS is Intel; it shows BIOS codes in the bottom right corner, and it stays at 5A for a while. 5A is "Resetting PATA/SATA bus and all devices".
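
    Since lubuntu's "boot from the first hard disk" entry works, the MBR code and the Windows boot files are evidently intact, which points at how the BIOS itself enumerates or hands off to the disk. From the live CD you can at least confirm the disk still looks bootable to the firmware; a minimal, read-only sketch assuming the Windows disk is /dev/sda:

        # is one partition marked with the boot flag?
        sudo parted /dev/sda print
        # does the first sector carry a valid DOS/MBR boot record?
        sudo file -s /dev/sda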

  • Copying an LVM partition to a smaller disk, and renaming volume groups

    - by dlamblin
    I was trying to shrink a vmdk (VMware disk image) file to be as small as possible, and found two recommendations. The first is to cat /dev/zero into the fs, delete the resulting file, and run VMware Tools' shrink. This works okay. The second is to copy everything into a new vmdk. I went the second route. I did not use dd because I actually want to use as few blocks as possible, instead of having a block-by-block copy; any unlinked files would still have blocks that aren't zeroed out. Secondly, the CentOS image was mostly LVM, except for the boot partition, and my target was going to be 4GB instead of 8GB. I did use dd for the first 40MB to get the boot blocks and partition copied. I then used parted to create an identical primary boot partition and a smaller primary LVM partition. Then I used pvcreate on that device (sdb2), vgcreate, and lvcreate to create a root and swap. I used mkfs.ext3 on the root partition and then rsync -av / /2root, excluding /proc /sys /2root /dev. So far everything went fine. My problem is that: The result is 2.7 GB while the source was 2.1 GB. This is weird to me. The second volume group is called VolGroup01, while the original was called VolGroup00. How can I rename VolGroup01 to VolGroup00 and swap it out after all this?
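
    The rename itself is a single LVM command, but the clone's fstab, GRUB config and initrd all reference the volume group by name, so they must be updated as well or the copy won't boot. A minimal sketch assuming the copy is still attached as sdb, its root is mounted at /mnt/clone, and a CentOS-5-era layout; the exact grub/initrd paths are placeholders that depend on the release:

        # rename the new volume group to match the original and activate it
        sudo vgrename VolGroup01 VolGroup00
        sudo vgchange -ay VolGroup00
        # point the clone's configuration at the new name
        sudo sed -i 's/VolGroup01/VolGroup00/g' /mnt/clone/etc/fstab /mnt/clone/boot/grub/grub.conf
        # rebuild the initrd so it activates the renamed group at boot (run inside the chroot)
        mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)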

  • Does AMD Cool n Quiet Slow Down Your System?

    - by Software Monkey
    I discovered today that having AMD Cool n Quiet enabled in my BIOS appears to be slowing down my Windows XP SP2 system by about 29% on memory & CPU intensive workloads. I was wondering if (a) anyone else had encountered this, (b) anyone can offer an explanation, (c) there are any negatives I need to be aware of if I keep AMD CnQ disabled. With some superficial testing so far, I don't immediately notice any difference with CnQ off (other than the performance being what I expected from this new hardware). It seems to ramp up the CPU fan a little bit as my program maxes out 1 core, but that's the same as with CnQ on. And when I let the system idle the CPU fan slows down and the systems as quiet as a mouse (after years of 6 small fans churning like they want to go into orbit it's nice to again have a system where I can hear the HDDs seeking). Bonus question: Does CnQ cause issues with system stability? I ask because the reason I disabled it was because I have had a few freezes and 1 spontaneous reboot with my new hardware.

  • How would I force Debian to use the physical sector size on a hard disk?

    - by Confused User
    I just purchased a few new 3TB WD drives. These have physical 4k sectors, but there is some sort of layer which is providing 512B logical sectors (see the partition table below). In order to attempt to get some more speed out of my hard drives, I would like to get rid of this logical layer and actually use the physical 4k sectors. However, I can't figure out how to do this (or even if it's possible) from the man pages of fdisk and parted, or from searching Google. Does anybody know how this could be done? As to why this is relevant, this page demonstrates that merely aligning the sectors properly can already make up to a 25% speed difference for reads, and more than 2500% for writes in some cases! Getting rid of the logical sectors in favor of the physical ones should improve speeds even more. Thanks!

        $ parted /dev/sdc
        GNU Parted 2.3
        Using /dev/sdc
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) print
        Model: ATA WDC WD30EZRX-00M (scsi)
        Disk /dev/sdc: 3001GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  3001GB  3001GB  zfs
         9      3001GB  3001GB  8389kB

    P.S. I don't care about the data on the drives, I was just playing with different file systems. Also, this is my first time posting here, so please let me know if my posts should be formatted differently, etc.
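
    For what it's worth, on these 512e "Advanced Format" drives the 512B logical sectors are emulated by the drive firmware, so there is no software switch that exposes the 4k sectors directly; what the linked benchmarks really measure is partition alignment. You can at least confirm what the kernel sees and create partitions on 1MiB boundaries (a multiple of the 4k physical sector); a sketch against /dev/sdc as in the output above, destructive to the existing partition table:

        # logical sector size, physical sector size and optimal I/O size as the kernel reports them
        sudo blockdev --getss --getpbsz --getioopt /dev/sdc
        # recreate the table with a partition that starts on a 1MiB boundary
        sudo parted -a optimal /dev/sdc mklabel gpt
        sudo parted -a optimal /dev/sdc mkpart primary 1MiB 100%
        # verify the alignment of partition 1
        sudo parted /dev/sdc align-check optimal 1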

  • Messed up partitions... system will not boot!

    - by someguy
    I did a really dumb thing. cfdisk threw an error at me saying "FATAL ERROR: Bad primary partition 3: Partition ends in the final partial cylinder", so I installed Partition Table Doctor to see if I could fix the problem. When the program started up, it told me there were problems with my partitions, and asked if I wanted them fixed (cannot remember real message, but I believe it had something to do with the cylinder boundaries), so, blindly, without thinking of the consequences, I did. Now, my system will not boot. I tried booting from the Windows 7 installation CD. I went to install a fresh copy, but it said that "No drives were found". I then opened up diskpart. According to diskpart, there is only one partition, containing one volume, assigned the letter "C". Before, I had four partitions! It is also saying that the file system is RAW. Is there any way I can fix this? I have important data that I do not want to lose. Later on... I tried fdisk with the option -l, which lists the partition table(s), and this is what I got: Ignoring extra extended partition 4 Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 64 sectors/track, 30401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x163df116 Device Boot Start End Blocks Id System /dev/sda1 6 18 102400 7 HPFS/NTFS Partition 1 does not end on cylinder boundary. /dev/sda2 18 7851 62918572+ 7 HPFS/NTFS /dev/sda3 13073 30402 139196416 f W95 Ext'd (LBA) /dev/sda3 13073 30402 139196416 f W95 Ext'd (LBA) /dev/sda3 13073 30403 139203193 7 HPFS/NTFS I don't know if this will help, but it's extra information, at least. Also, this is how I had my partitions: 40MB (Unallocated) 100MB (System Reserved) 60GB (Windows, C:) 40GB (Was reserved for secondary OS) ~132GB (Home, E:)
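
    Given that fdisk still sees remnants of the old layout (two NTFS entries plus the extended partition), the usual next step is to image the disk before touching it again and then let a partition-recovery tool rebuild the table from the filesystem signatures it finds. A sketch assuming the disk is /dev/sda and a second disk large enough to hold the image is mounted at /mnt/backup; TestDisk itself is interactive, the command below only starts it:

        # take a full raw image first, so further experiments can work on a copy
        sudo dd if=/dev/sda of=/mnt/backup/sda.img bs=1M conv=noerror,sync
        # scan for lost NTFS partitions and optionally write a repaired partition table
        sudo testdisk /dev/sda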

  • At what point does the performance gap between GPU & CPU become so great that the CPU is holding back a system?

    - by Matthew Galloway
    I know that, generally speaking, for gaming performance the GPU is the primary factor which holds back performance, with everything else such as RAM/motherboard/PSU/CPU being secondary in importance to the graphics card. But at some point the other components ARE going to be significant in holding back the whole system! For instance, nobody would be silly enough to play modern games with 512MB RAM and the very latest graphics card (such as an HD7970), as I bet the performance increase over such a system with only 512MB but a mid-range card would be non-existent! Thus it would be a "waste" for such a person to buy any high-end graphics card without first resolving the system's other problems. The same point applies to other components: if a system only had a Pentium II, a current high-end graphics card would be wasted on it! So my core question is: how do you determine at what point spending on extra GPU power for your system becomes completely "wasted"? (Also, a slightly more nuanced question is trying to work out at what point the extra graphics power might not be "wasted" but would be "sub-optimal" value for money, when the expenditure should instead be split between the graphics card and other components. Obviously a gamer shouldn't always just spend on upgrading the graphics card, but needs to balance it out.)

  • Windows Small Business Server 2003: SQL timeout in Server Performance Report

    - by tetranz
    I'm the volunteer IT admin at a small school. We have SBS 2003 with about ten desktops. The server performance report is emailed to me daily. It is setup with a wizard in the Monitoring and Performance part of the "Server Management" console. It often fails with a "The page cannot be displayed" error. The event log shows Event Type: Error Event Source: ServerStatusReports Event Category: None Event ID: 1 Date: 1/16/2011 Time: 6:03:14 AM User: N/A Computer: ALPHA Description: Server Status Report: URL: http://localhost/monitoring/perf.aspx?reportMode=1&allHours=1 Error Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Stack Trace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, TdsParserState state) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, TdsParserState state) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning() at System.Data.SqlClient.TdsParser.ReadNetlib(Int32 bytesExpected) [plus lots more stack trace] This has been happening for years :) I've never really solved it. It seems to be related to WSUS. When it happens, I run the Update Services "Server Cleanup Wizard". That takes a long time to run. If I haven't run it for a while it can take 10 hours. I also run the WsusDBMaintenance.sql script (from TechNet I think) which reindexes the database etc. Those two things seem to get it working again for a while. Recently the "while" has become a couple of weeks. My searching online has revealed lots of people having this problem but no real solution. Does anyone have any good ideas about this? I have to wonder if something in the WSUS SQL schema is not indexed properly. The time that the server cleanup wizard takes seems ridiculous. Thanks

  • How can I access user files on a disk moved from a Windows 7 machine to an XP machine?

    - by Fantius
    I moved the hard drive from one machine (Win 7) to another (XP) and now certain folders tell me "Access denied". I am logged in as an administrator. I had a different account on the other machine. Neither account authenticated to anything besides the local machine. The old machine is apparently dead, so I can't do anything in there like change permissions, etc. How can I access these files? Edit: After changing the ownerships of all the files and folders on the drive, I am getting a different error. And it is troubling me deeply. "xxx refers to a location that is unavailable. It could be on a hard drive on this computer, or on a network. Check to make sure that the disk is properly inserted, or that you are connected to the Internet or your network, and then try again. If it still cannot be located, the information might have been moved to a different location." No change after rebooting. Any ideas? Surely the files are still there, right?

  • Why does lighttpd keep static files in cache, even when modified on disk ?

    - by Pixelastic
    I am using lighttpd to serve static files. I have a bunch of images in a dir that I regularly update. This changes the file content (and file size) as well as the modification date, but not the filenames. When I access the files through HTTP, the updates are not taken into account and lighty serves the old file. If I manually rename a file to something different, lighttpd returns a 404 error, and if I rename the file back, I get the correct updated version. It seems lighty is using some kind of caching mechanism of its own (which is fine) to return static files. Unfortunately, it seems that this mechanism doesn't update itself when files are modified. I checked with Wireshark, and my browser really is making a request for the file, so this is not a browser caching issue. It returns a 200 OK when requesting it from an empty cache, and a 304 Not Modified otherwise, as expected. But the file is returned with a wrong Last-Modified header that does not reflect the real last modification date. Maybe there is some config directive that I am not aware of? I would like the files returned by lighty to reflect the changes made on disk directly, or at least to be able to invalidate its cache.
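
    The symptoms match lighttpd's stat cache, which remembers stat() results (including the mtime that feeds Last-Modified and the 304 logic) and can go stale when files are replaced behind its back, for example over NFS or by tools that write to a temp file and rename it. A config sketch worth trying in lighttpd.conf, assuming no other caching layer (mod_cache, a front proxy) is involved:

        # consult the filesystem on every request instead of caching stat() results
        server.stat-cache-engine = "disable"
        # the default engine is "simple", which re-checks files roughly once per second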

  • Can't create system image. 0x80780119 error after upgrade from 8 to 8.1

    - by cichy202
    I upgraded my Windows 8 PC to 8.1 yesterday and it seemed like everything was working fine until I tried to create a System Image. I got error 0x80780119 saying that there is too little space on one of the partitions. I started looking into this problem and indeed one of the partitions does not meet the requirements. These are the partitions on my drive:

        DISKPART> list partition

        Partition ###  Type              Size     Offset
        -------------  ----------------  -------  -------
        Partition 1    Recovery           300 MB  1024 KB
        Partition 2    System             100 MB   301 MB
        Partition 3    Reserved           128 MB   401 MB
        Partition 4    Primary             74 GB   529 MB
        Partition 5    Primary            390 GB    75 GB

    Partition 1 has only 13MB of free space. Partition 2 has 70MB free, partition 3 is MSFTRES, partition 4 is my C drive with around 35GB free, and partition 5 is not included in the system image. The partitions were created like this during the installation of Windows 8 - a clean install from scratch. I am using UEFI, so the drive is GPT formatted. So I thought, OK, I can resize my C drive a little, move the partitions, and expand the first one. I tried using GParted but it is not able to move the MSFTRES partition; it does not recognize the file system on it. So the question is: Is it possible to "clean up" the 1st partition in any way? If not, is there anything special about the MSFTRES partition? Or can I just remove it, create it a little further along, and just flag it as msftres with GParted?

  • How to avoid the LinearAlloc exceeded capacity error on Android

    - by Udaykiran
    The application gets crashing every-time, when am running eclipse saying LinearAlloc exceeded capacity (5242880), last=208 This is happening, when am creating AsyncTask, thats strange this is happening everytime . when am commenting and running its running. Logcat is: 02-09 04:02:23.374: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/http/HttpEntityEnclosingRequest;' 02-09 04:02:23.374: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/codec/binary/Base64;' 02-09 04:02:23.378: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/codec/net/QuotedPrintableCodec;': multiple definitions 02-09 04:02:23.378: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/codec/net/StringEncodings;': multiple definitions 02-09 04:02:23.378: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/codec/net/URLCodec;': multiple definitions 02-09 04:02:23.394: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/logging/LogFactory;' 02-09 04:02:23.397: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/codec/net/URLCodec;' 02-09 04:02:23.487: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/logging/impl/LogFactoryImpl;' 02-09 04:02:23.487: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/logging/impl/LogFactoryImpl;': multiple definitions 02-09 04:02:23.487: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/logging/impl/NoOpLog;': multiple definitions 02-09 04:02:23.487: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/logging/impl/SimpleLog$1;': multiple definitions 02-09 04:02:23.487: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/logging/impl/SimpleLog;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/ConnectionClosedException;': multiple definitions /http/StatusLine;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/TokenIterator;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/UnsupportedHttpVersionException;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/auth/AUTH;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/auth/AuthScheme;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/auth/AuthSchemeFactory;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/auth/AuthSchemeRegistry;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/auth/AuthScope;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/ConnectionKeepAliveStrategy;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/ConnectionPoolTimeoutException;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/EofSensorInputStream;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/HttpHostConnectException;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/ManagedClientConnection;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 
'Lorg/apache/http/conn/MultihomePlainSocketFactory;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/OperatedClientConnection;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/params/ConnConnectionParamBean;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/params/ConnManagerParamBean;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/params/ConnPerRoute;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/params/ConnManagerParams$1;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/DefaultRequestDirector;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/DefaultTargetAuthenticationHandler;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/DefaultUserTokenHandler;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/RequestWrapper;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/EntityEnclosingRequestWrapper;': multiple definitions 02-09 04:02:23.597: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/http/client/methods/HttpRequestBase;' 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/RedirectLocations;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/RoutedRequest;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/TunnelRefusedException;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/conn/AbstractClientConnAdapter;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/conn/AbstractPoolEntry;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/protocol/ResponseServer;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/protocol/SyncBasicHttpContext;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/protocol/UriPatternMatcher;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/ByteArrayBuffer;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/CharArrayBuffer;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/EncodingUtils;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/EntityUtils;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/ExceptionUtils;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/LangUtils;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/VersionInfo;': multiple definitions 02-09 04:02:23.608: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 
'Lorg/apache/commons/logging/LogFactory;' 02-09 04:02:23.612: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/logging/LogFactory;' 02-09 04:02:23.612: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/logging/LogFactory;' 02-09 04:02:23.612: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/logging/LogFactory;' 02-09 04:02:23.616: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/logging/LogFactory;' 02-09 04:02:24.312: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParser;' 02-09 04:02:24.312: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParser;' 02-09 04:02:24.312: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParser;' 02-09 04:02:24.312: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParser;' 02-09 04:02:24.312: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParser;' 02-09 04:02:24.312: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParser;' 02-09 04:02:24.312: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParser;' 02-09 04:02:24.312: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/kxml2/io/KXmlSerializer;' 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/xmlpull/v1/XmlPullParser;': multiple definitions 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/io/KXmlParser;': multiple definitions 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/xmlpull/v1/XmlSerializer;': multiple definitions 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/io/KXmlSerializer;': multiple definitions 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/kdom/Node;': multiple definitions 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/kdom/Document;': multiple definitions 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/kdom/Element;': multiple definitions 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/wap/Wbxml;': multiple definitions 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/wap/WbxmlParser;': multiple definitions 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/wap/WbxmlSerializer;': multiple definitions 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/wap/syncml/SyncML;': multiple definitions 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/wap/wml/Wml;': multiple definitions 02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/wap/wv/WV;': multiple definitions 02-09 04:02:24.323: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/codec/binary/Base64;' 02-09 04:02:24.398: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParserFactory;' 02-09 04:02:24.398: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParser;' 02-09 04:02:24.398: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParser;' 02-09 04:02:24.398: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParser;' 02-09 04:02:24.398: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParser;' 02-09 04:02:24.495: D/dalvikvm(3351): DexOpt: not 
verifying 'Lorg/xmlpull/v1/XmlPullParserException;': multiple definitions 02-09 04:02:24.495: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/xmlpull/v1/XmlPullParserFactory;': multiple definitions 02-09 04:02:24.612: E/dalvikvm(3351): LinearAlloc exceeded capacity (5242880), last=208 02-09 04:02:24.612: E/dalvikvm(3351): VM aborting 02-09 04:02:24.640: D/dalvikvm(3307): GC_FOR_MALLOC freed 18195 objects / 1125640 bytes in 287ms 02-09 04:02:24.745: I/DEBUG(2372): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 02-09 04:02:24.745: I/DEBUG(2372): Build fingerprint: 'samsung/SGH-T849/SGH-T849/SGH-T849:2.2/FROYO/UVJJB:user/release-keys' 02-09 04:02:24.745: I/DEBUG(2372): pid: 3351, tid: 3351 >>> /system/bin/dexopt <<< 02-09 04:02:24.745: I/DEBUG(2372): signal 11 (SIGSEGV), fault addr deadd00d 02-09 04:02:24.745: I/DEBUG(2372): r0 00000026 r1 afd14921 r2 afd14921 r3 00000000 02-09 04:02:24.745: I/DEBUG(2372): r4 800a13f4 r5 800a13f4 r6 004fffa4 r7 000000d0 02-09 04:02:24.745: I/DEBUG(2372): r8 00000000 r9 00000000 10 00000000 fp 00000000 02-09 04:02:24.745: I/DEBUG(2372): ip deadd00d sp beade740 lr afd1596b pc 80042078 cpsr 20000030 02-09 04:02:24.745: I/DEBUG(2372): d0 643a64696f72646e d1 6472656767756265 02-09 04:02:24.745: I/DEBUG(2372): d2 410be43800000067 d3 00000000410c080a 02-09 04:02:24.745: I/DEBUG(2372): d4 6c706d49746e6569 d5 74746977744c293b 02-09 04:02:24.745: I/DEBUG(2372): d6 746e692f6a347265 d7 74682f6c616e7265 02-09 04:02:24.745: I/DEBUG(2372): d8 0000003108f12b80 d9 0000000000000000 02-09 04:02:24.745: I/DEBUG(2372): d10 0000000000000000 d11 0000000000000000 02-09 04:02:24.745: I/DEBUG(2372): d12 0000000000000000 d13 0000000000000000 02-09 04:02:24.745: I/DEBUG(2372): d14 0000000000000000 d15 0000000000000000 02-09 04:02:24.745: I/DEBUG(2372): d16 0000000000000000 d17 0000000000000000 02-09 04:02:24.745: I/DEBUG(2372): d18 0000000000000000 d19 0000000000000000 02-09 04:02:24.745: I/DEBUG(2372): d20 0000000000000000 d21 0000000000000000 02-09 04:02:24.745: I/DEBUG(2372): d22 0000000000000000 d23 0000000000000000 02-09 04:02:24.745: I/DEBUG(2372): d24 0000000000000000 d25 0000000000000000 02-09 04:02:24.745: I/DEBUG(2372): d26 0000000000000000 d27 0000000000000000 02-09 04:02:24.745: I/DEBUG(2372): d28 0000000000000000 d29 0000000000000000 02-09 04:02:24.745: I/DEBUG(2372): d30 0000000000000000 d31 0000000000000000 02-09 04:02:24.745: I/DEBUG(2372): scr 00000000 02-09 04:02:24.757: I/DEBUG(2372): #00 pc 00042078 /system/lib/libdvm.so 02-09 04:02:24.757: I/DEBUG(2372): #01 pc 00049f40 /system/lib/libdvm.so 02-09 04:02:24.757: I/DEBUG(2372): #02 pc 00067998 /system/lib/libdvm.so 02-09 04:02:24.757: I/DEBUG(2372): #03 pc 00067dba /system/lib/libdvm.so 02-09 04:02:24.757: I/DEBUG(2372): #04 pc 00068612 /system/lib/libdvm.so 02-09 04:02:24.757: I/DEBUG(2372): #05 pc 00068846 /system/lib/libdvm.so 02-09 04:02:24.757: I/DEBUG(2372): #06 pc 0006806a /system/lib/libdvm.so 02-09 04:02:24.757: I/DEBUG(2372): #07 pc 00057a0c /system/lib/libdvm.so 02-09 04:02:24.757: I/DEBUG(2372): #08 pc 00057fe6 /system/lib/libdvm.so 02-09 04:02:24.757: I/DEBUG(2372): #09 pc 00053d1e /system/lib/libdvm.so 02-09 04:02:24.757: I/DEBUG(2372): #10 pc 000566d4 /system/lib/libdvm.so 02-09 04:02:24.761: I/DEBUG(2372): #11 pc 000576c0 /system/lib/libdvm.so 02-09 04:02:24.761: I/DEBUG(2372): #12 pc 00057948 /system/lib/libdvm.so 02-09 04:02:24.761: I/DEBUG(2372): #13 pc 0005a1f4 /system/lib/libdvm.so 02-09 04:02:24.761: I/DEBUG(2372): #14 pc 0005a25c /system/lib/libdvm.so 02-09 04:02:24.761: 
I/DEBUG(2372): #15 pc 0005a32a /system/lib/libdvm.so 02-09 04:02:24.761: I/DEBUG(2372): #16 pc 000590f2 /system/lib/libdvm.so 02-09 04:02:24.761: I/DEBUG(2372): code around pc: 02-09 04:02:24.761: I/DEBUG(2372): 80042058 20061861 f7d418a2 2000eb8e ece6f7d4 02-09 04:02:24.761: I/DEBUG(2372): 80042068 58234808 b1036bdb f8df4798 2026c01c 02-09 04:02:24.761: I/DEBUG(2372): 80042078 0000f88c ed4cf7d4 0005f3a0 fffe300c 02-09 04:02:24.761: I/DEBUG(2372): 80042088 fffe6280 0000039c deadd00d f8dfb40e 02-09 04:02:24.761: I/DEBUG(2372): 80042098 b503c02c bf00490a 188ba200 f853aa03 02-09 04:02:24.761: I/DEBUG(2372): code around lr: 02-09 04:02:24.761: I/DEBUG(2372): afd15948 b5f74b0d 490da200 2600189b 585c4602 02-09 04:02:24.761: I/DEBUG(2372): afd15958 686768a5 f9b5e008 b120000c 46289201 02-09 04:02:24.761: I/DEBUG(2372): afd15968 9a014790 35544306 37fff117 6824d5f3 02-09 04:02:24.761: I/DEBUG(2372): afd15978 d1ed2c00 bdfe4630 0002c9d8 000000d8 02-09 04:02:24.761: I/DEBUG(2372): afd15988 b086b570 f602fb01 9004460c a804a901 02-09 04:02:24.761: I/DEBUG(2372): stack: 02-09 04:02:24.761: I/DEBUG(2372): beade700 410e9e18 /dev/ashmem/mspace/dalvik-heap/0 (deleted) 02-09 04:02:24.765: I/DEBUG(2372): beade704 410e9e18 /dev/ashmem/mspace/dalvik-heap/0 (deleted) 02-09 04:02:24.765: I/DEBUG(2372): beade708 afd425a0 /system/lib/libc.so 02-09 04:02:24.765: I/DEBUG(2372): beade70c afd4254c /system/lib/libc.so 02-09 04:02:24.765: I/DEBUG(2372): beade710 00000000 02-09 04:02:24.765: I/DEBUG(2372): beade714 afd1596b /system/lib/libc.so 02-09 04:02:24.765: I/DEBUG(2372): beade718 afd14921 /system/lib/libc.so 02-09 04:02:24.765: I/DEBUG(2372): beade71c afd14921 /system/lib/libc.so 02-09 04:02:24.765: I/DEBUG(2372): beade720 afd14978 /system/lib/libc.so 02-09 04:02:24.765: I/DEBUG(2372): beade724 800a13f4 /system/lib/libdvm.so 02-09 04:02:24.765: I/DEBUG(2372): beade728 800a13f4 /system/lib/libdvm.so 02-09 04:02:24.765: I/DEBUG(2372): beade72c 004fffa4 02-09 04:02:24.765: I/DEBUG(2372): beade730 000000d0 02-09 04:02:24.765: I/DEBUG(2372): beade734 afd14985 /system/lib/libc.so 02-09 04:02:24.765: I/DEBUG(2372): beade738 df002777 02-09 04:02:24.765: I/DEBUG(2372): beade73c e3a070ad 02-09 04:02:24.765: I/DEBUG(2372): #00 beade740 00016810 [heap] 02-09 04:02:24.765: I/DEBUG(2372): beade744 80049f45 /system/lib/libdvm.so 02-09 04:02:24.765: I/DEBUG(2372): #01 beade748 000000d0 02-09 04:02:24.765: I/DEBUG(2372): beade74c 000fc750 [heap] 02-09 04:02:24.765: I/DEBUG(2372): beade750 0050007c 02-09 04:02:24.765: I/DEBUG(2372): beade754 00000004 02-09 04:02:24.765: I/DEBUG(2372): beade758 00016814 [heap] 02-09 04:02:24.765: I/DEBUG(2372): beade75c afd0c9c3 /system/lib/libc.so 02-09 04:02:24.765: I/DEBUG(2372): beade760 42978eee /system/framework/core.odex 02-09 04:02:24.765: I/DEBUG(2372): beade764 42978efe /system/framework/core.odex 02-09 04:02:24.765: I/DEBUG(2372): beade768 410e9e18 /dev/ashmem/mspace/dalvik-heap/0 (deleted) 02-09 04:02:24.765: I/DEBUG(2372): beade76c 00000000 02-09 04:02:24.765: I/DEBUG(2372): beade770 00000004 02-09 04:02:24.765: I/DEBUG(2372): beade774 8006799d /system/lib/libdvm.so 02-09 04:02:25.129: I/DEBUG(2372): dumpmesg > /data/log/dumpstate_app_native.log 02-09 04:02:25.218: I/dumpstate(3355): begin 02-09 04:02:25.253: I/dalvikvm(2495): threadid=3: reacting to signal 3 02-09 04:02:25.276: I/dalvikvm(2495): Wrote stack traces to '/data/anr/traces.txt' 02-09 04:02:25.444: I/dalvikvm(2593): threadid=3: reacting to signal 3 02-09 04:02:25.452: I/dalvikvm(2593): Wrote stack traces to 
'/data/anr/traces.txt' 02-09 04:02:25.460: I/dalvikvm(2598): threadid=3: reacting to signal 3 02-09 04:02:25.464: I/dalvikvm(2598): Wrote stack traces to '/data/anr/traces.txt' 02-09 04:02:25.480: I/dalvikvm(2601): threadid=3: reacting to signal 3 02-09 04:02:25.487: I/dalvikvm(2601): Wrote stack traces to '/data/anr/traces.txt' 02-09 04:02:25.503: I/dalvikvm(2655): threadid=3: reacting to signal 3 02-09 04:02:25.526: I/dalvikvm(2655): Wrote stack traces to '/data/anr/traces.txt' 02-09 04:02:25.703: I/dalvikvm(2676): threadid=3: reacting to signal 3 02-09 04:02:25.851: I/dalvikvm(2708): threadid=3: reacting to signal 3 02-09 04:02:25.855: I/dalvikvm(2676): Wrote stack traces to '/data/anr/traces.txt' 02-09 04:02:25.866: I/dalvikvm(2746): threadid=3: reacting to signal 3 02-09 04:02:25.886: I/dalvikvm(2746): Wrote stack traces to '/data/anr/traces.txt' 02-09 04:02:25.901: I/dalvikvm(2753): threadid=3: reacting to signal 3 02-09 04:02:25.905: I/dalvikvm(2753): Wrote stack traces to '/data/anr/traces.txt' 02-09 04:02:26.097: I/dalvikvm(2795): threadid=3: reacting to signal 3 02-09 04:02:26.315: I/dalvikvm(2850): threadid=3: reacting to signal 3 am using jaxab-xalan-1.5 jar in referenced libraries. How to avoid this Linearalloc exceeded capacity error ? Thanks

  • Can I get a particular disk's space (like C:) using a Java program?

    - by Venkats
    I used the OperatingSystemMXBean class in Java to get system information. With that I can only get the RAM size; I can't get the space of a specific disk like C: or D:. The code is: com.sun.management.OperatingSystemMXBean mxbean = (com.sun.management.OperatingSystemMXBean)ManagementFactory.getOperatingSystemMXBean(); System.out.println("Total RAM:"+mxbean.getTotalSwapSpaceSize()/(1024*1024*1024)+""+"GB"); Can I get this using a Java program?

  • C# Item system design approach, should I use abstract classes, interfaces or virtuals?

    - by vexe
    I'm working on a Resident Evil 1/2/3/0/Remake type of game. Currently I've done a big part of the inventory system (here's a link if you wanna see my inventory, pretty outdated, added a lot of features and made a lot of enhancements) Now I'm thinking about how to approach the items system, If you've played any Resident Evil game or any of its likes you should be familiar with what I'm trying to achieve. Here's a very simple category I made for the items: So you have different items, with different operations you could perform on them, there are usable items that you could use, like for example herbs and first aid kits that 'using' them would affect your health, keys to unlock doors, and equipable items that you could 'equip' like weapons. Also, you can 'combine' two items together to get new one, like for example mixing a green and red herb would give you a new type of herb, or combining a lighter with a paper, would give you a burnt paper, or ammo with a gun, would reload the gun or something. etc. You know the usual RE drill. Not all items are 'transformable', in that, for example: lighter + paper = burnt paper (it's the paper that 'transforms' to burnt paper and not the lighter, the lighter is not transformable it will remain as it is) green herb + red herb = newHerb1 (both herbs will vanish and transform to this new type of herb) ammo + gun = reload gun (ammo state will remain as it is, it won't change but it will just decrease, nothing will happen to the gun it just gets reloaded) Also a key note to remember is that you can't just combine items randomly, each item has a 'mating' item(s). So to sum up, different items, and different operations on them. The question is, how to approach this, design-wise? I've been learning about interfaces, but it just doesn't quite get into my head, I mean, why not just use classes with the good old inheritance? I know the technical details of interfaces and that the cool thing about them is that they don't require an inheritance chain, but I just can't see how to use them properly, that is, if they were the right thing to use here. So should I go with just classes and inheritance? just like in the tree I showed you? or should I think more about how to use interfaces? (IUsable, IEquipable, ITransformable) - why not just use classes UsableItem, Equipable item, TransformableItem? I want something that won't give me headaches in the long run, something resilient/flexible to future changes. I'm OK using classes, but I smell something better here. The way I'm thinking is to possibly use both inheritance and interfaces, so that you have a branch like this: item - equipable - weapon. but then again, the weapon has methods like 'reload' 'examine' 'equip' some of them 'combine' so I'm thinking to make weapon implement ICombinable?... not all items get used the same way, using herbs will increase your health, using a key will open a door, so IUsable maybe? Should I use a big database (XML for example) for all the items, items names, mates, nRowsReq, nColsReq, etc? Thanks so much for your answers in advanced, note that demo 3 is coming after I'm done with items :D

  • How do you re-mount an ext3 fs readwrite after it gets mounted readonly from a disk error?

    - by cagenut
    It's a relatively common problem when something goes wrong in a SAN for ext3 to detect the disk write errors and remount the filesystem read-only. That's all well and good, but once the SAN is fixed I can't figure out how to re-re-mount the filesystem read-write without rebooting. Behold: [root@localhost ~]# multipath -ll mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400 [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw] \_ round-robin 0 [prio=2][active] \_ 1:0:0:1 sdb 8:16 [active][ready] \_ 2:0:0:1 sdc 8:32 [active][ready] [root@localhost ~]# mount /dev/mapper/mpath0 /mnt/foo [root@localhost ~]# touch /mnt/foo/blah
    All good, now I yank the LUN out from under it. [root@localhost ~]# touch /mnt/foo/blah [root@localhost ~]# touch /mnt/foo/blah touch: cannot touch `/mnt/foo/blah': Read-only file system [root@localhost ~]# tail /var/log/messages Mar 18 13:17:33 localhost multipathd: sdb: tur checker reports path is down Mar 18 13:17:34 localhost multipathd: sdc: tur checker reports path is down Mar 18 13:17:35 localhost kernel: Aborting journal on device dm-2. Mar 18 13:17:35 localhost kernel: Buffer I/O error on device dm-2, logical block 1545 Mar 18 13:17:35 localhost kernel: lost page write due to I/O error on dm-2 Mar 18 13:17:36 localhost kernel: ext3_abort called. Mar 18 13:17:36 localhost kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal Mar 18 13:17:36 localhost kernel: Remounting filesystem read-only
    It only thinks it's read-only; in reality it's not even there. [root@localhost ~]# multipath -ll sdb: checker msg is "tur checker reports path is down" sdc: checker msg is "tur checker reports path is down" mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400 [size=1.1T][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][enabled] \_ 1:0:0:1 sdb 8:16 [failed][faulty] \_ 2:0:0:1 sdc 8:32 [failed][faulty] [root@localhost ~]# ll /mnt/foo/ ls: reading directory /mnt/foo/: Input/output error total 20 -rw-r--r-- 1 root root 0 Mar 18 13:11 bar How it still remembers that 'bar' file being there is a mystery, but not important right now.
    Now I re-present the LUN: [root@localhost ~]# tail /var/log/messages Mar 18 13:23:58 localhost multipathd: sdb: tur checker reports path is up Mar 18 13:23:58 localhost multipathd: 8:16: reinstated Mar 18 13:23:58 localhost multipathd: mpath0: queue_if_no_path enabled Mar 18 13:23:58 localhost multipathd: mpath0: Recovered to normal mode Mar 18 13:23:58 localhost multipathd: mpath0: remaining active paths: 1 Mar 18 13:23:58 localhost multipathd: dm-2: add map (uevent) Mar 18 13:23:58 localhost multipathd: dm-2: devmap already registered Mar 18 13:23:59 localhost multipathd: sdc: tur checker reports path is up Mar 18 13:23:59 localhost multipathd: 8:32: reinstated Mar 18 13:23:59 localhost multipathd: mpath0: remaining active paths: 2 Mar 18 13:23:59 localhost multipathd: dm-2: add map (uevent) Mar 18 13:23:59 localhost multipathd: dm-2: devmap already registered [root@localhost ~]# multipath -ll mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400 [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw] \_ round-robin 0 [prio=2][enabled] \_ 1:0:0:1 sdb 8:16 [active][ready] \_ 2:0:0:1 sdc 8:32 [active][ready]
    Great, right? It says [rw] right there. Not so fast: [root@localhost ~]# touch /mnt/foo/blah touch: cannot touch `/mnt/foo/blah': Read-only file system
    OK, it doesn't do it automatically, I'll just give it a little push: [root@localhost ~]# mount -o remount /mnt/foo mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only
    The hell you are: [root@localhost ~]# mount -o remount,rw /mnt/foo mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only
    Noooooooooo. I have tried all sorts of different mount/tune2fs/dmsetup commands and I cannot figure out how to get it to un-flag the block device as write-protected. Rebooting will fix it, but I'd much rather do it online. An hour of googling has gotten me nowhere either. Save me, ServerFault.

    Read the article

  • System.AccessViolationException when calling DLL from WCF on IIS.

    - by Wodzu
    Hi guys. I've created just a test WCF service in which I need to call an external DLL. Everything works fine under the Visual Studio development server. However, when I try to use my service on IIS I am getting this error: Exception: System.AccessViolationException Message: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. The stack trace leads to the DLL call, which is presented below. After a lot of reading and experimenting I am almost sure that the error is caused by strings being passed incorrectly to the called function.
    Here is what the wrapper for the DLL looks like: using System; using System.Runtime.InteropServices; using System.Text; using System; using System.Security; using System.Security.Permissions; using System.Runtime.InteropServices; namespace cdn_api_wodzu { public class cdn_api_wodzu { [DllImport("cdn_api.dll", CharSet=CharSet.Ansi)]
    // [SecurityPermission(SecurityAction.Assert, Unrestricted = true)]
    public static extern int XLLogin([In, Out] XLLoginInfo _lLoginInfo, ref int _lSesjaID); } [Serializable, StructLayout(LayoutKind.Sequential)] public class XLLoginInfo { public int Wersja; public int UtworzWlasnaSesje; public int Winieta; public int TrybWsadowy; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 0x29)] public string ProgramID; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 0x15)] public string Baza; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 9)] public string OpeIdent; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 9)] public string OpeHaslo; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 200)] public string PlikLog; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 0x65)] public string SerwerKlucza; public XLLoginInfo() { } } }
    This is how I call the DLL function: int ErrorID = 0; int SessionID = 0; XLLoginInfo Login; Login = new XLLoginInfo(); Login.Wersja = 18; Login.UtworzWlasnaSesje = 1; Login.Winieta = -1; Login.TrybWsadowy = 1; Login.ProgramID = "TestProgram"; Login.Baza = "TestBase"; Login.OpeIdent = "TestUser"; Login.OpeHaslo = "TestPassword"; Login.PlikLog = "C:\\LogFile.txt"; Login.SerwerKlucza = "MyServ\\MyInstance"; ErrorID = cdn_api_wodzu.cdn_api_wodzu.XLLogin(Login, ref SessionID);
    When I comment out all the string field assignments, the function works - it returns an error message saying that the program ID has not been given. But when I try to assign ProgramID (or any other string field, or all of them at once) I get the mentioned exception. I am using VS2008 SP1, WinXP and IIS 5.1. Maybe IIS itself is the problem? I've tried all the workarounds that have been described here: http://forums.asp.net/t/675515.aspx Thanks for your time. After edit: Installing Windows 2003 Server and IIS 6.0 solved the problem.
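    Since the struct is what carries the strings across the managed/native boundary, one thing worth experimenting with - offered purely as a sketch under the assumption that the native side expects fixed-size single-byte character buffers, not as a confirmed fix - is declaring the character set on the struct layout itself instead of relying only on the DllImport attribute, and keeping every value inside its SizeConst buffer. The fields mirror the post's XLLoginInfo; the LoginStrings helper is hypothetical and not part of the vendor API:

    using System;
    using System.Runtime.InteropServices;

    // Declare ANSI on the layout so the ByValTStr fields marshal as single-byte
    // character buffers regardless of the defaults on the calling declaration.
    [Serializable, StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
    public class XLLoginInfo
    {
        public int Wersja;
        public int UtworzWlasnaSesje;
        public int Winieta;
        public int TrybWsadowy;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 0x29)] public string ProgramID;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 0x15)] public string Baza;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 9)] public string OpeIdent;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 9)] public string OpeHaslo;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 200)] public string PlikLog;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 0x65)] public string SerwerKlucza;
    }

    // Hypothetical guard: ByValTStr buffers are fixed-size and SizeConst includes
    // the terminating NUL, so clamp each value to SizeConst - 1 characters before
    // assigning it to the struct.
    public static class LoginStrings
    {
        public static string FitToBuffer(string value, int sizeConst)
        {
            if (string.IsNullOrEmpty(value)) return string.Empty;
            int max = sizeConst - 1;
            return value.Length <= max ? value : value.Substring(0, max);
        }
    }

    Used as, e.g., Login.ProgramID = LoginStrings.FitToBuffer("TestProgram", 0x29); if the crash disappears once the character set and lengths are pinned down, the string marshaling is the right place to keep digging, and if it persists only under IIS 5.1 that points back at the hosting process rather than the interop declaration.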

    Read the article

  • WCF REST POST error bad request 400

    - by lyatcomit
    Here's my code:
    DOMAIN: using System; using System.Collections; using System.Runtime.Serialization; namespace Comit.TrafficService.Services.Mobile { [DataContract(Namespace = "http://192.168.0.161:9999/TrafficService/Domain/Mobile")] public class Error { [DataMember] public int Id { get; set; } [DataMember] public DateTime Time { get; set; } public string Message { get; set; } [DataMember] public string Stacktrace { get; set; } [DataMember] public string Os { get; set; } [DataMember] public string Resolution { get; set; } } }
    CONTRACT: using System.ServiceModel; using System.ServiceModel.Web; using Comit.TrafficService.Services.Mobile; namespace Comit.TrafficService.Services.Contracts { [ServiceContract(Name = "MobileErrorService")] public interface IMobileError { /// <summary> /// ??????????? /// </summary> /// <param name="Error">??????</param> /// <returns></returns>
    [OperationContract] [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedResponse, UriTemplate = "ErrorReport", RequestFormat = WebMessageFormat.Xml, ResponseFormat = WebMessageFormat.Xml) ] int ErrorReport(Error error); } }
    SERVICE: using System.ServiceModel.Web; using Comit.TrafficService.Services.Contracts; using Comit.TrafficService.Dao.Mobile; using System; using Comit.TrafficService.Services.Mobile; namespace Comit.TrafficService.Services { public class MobileErrorService : IMobileError { public int ErrorReport(Error error) { return HandleAdd(error); } public int HandleAdd(Error error) { Console.WriteLine("?????error.Message:" + error.Message); ErrorDao edao = new ErrorDao(); Console.WriteLine("??error" ); int result = (int)edao.Add(error); return result; } } }
    Configuration: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <services> <service name="Comit.TrafficService.Services.MobileErrorService"> <host> <baseAddresses> <add baseAddress="http://192.168.0.161:9999"/> </baseAddresses> </host> <endpoint address="http://192.168.0.161:9999/Comit/TrafficService/Services" binding="webHttpBinding" contract="Comit.TrafficService.Services.Contracts.IMobileError" behaviorConfiguration="RestfulBehavior" name="webHttpBinding"> </endpoint> </service> </services> <behaviors> <endpointBehaviors> <behavior name="RestfulBehavior"> <webHttp/> <dataContractSerializer ignoreExtensionDataObject="true"/> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> </configuration>
    Host: using System; using System.Collections.Generic; using System.Linq; using System.ServiceModel; using System.Text; using Comit.TrafficService.Services; namespace ServiceTest { class Program { static void Main(string[] args) { using (ServiceHost host = new ServiceHost(typeof(MobileErrorService))) { host.Opened += delegate { Console.WriteLine("CalculaorService????,????????!"); }; host.Open(); Console.Read(); } } } }
    Client code: using System; using System.ServiceModel; using System.ServiceModel.Description; using TestWCFRest.WcfServices.Services; using System.Net; namespace TestWCFRest.WcfServices.Hosting { class Program { static void Main(string[] args) {
    //using (ServiceHost host = new ServiceHost(typeof(CalculatorService))) //{ // host.Opened += delegate // { // Console.WriteLine("CalculaorService????,????????!"); // }; // host.Open(); // Console.Read(); //}
    HttpWebRequest req = null; HttpWebResponse res = null; try { string url = "http://192.168.0.161:9999/Comit/TrafficService/Services/ErrorReport"; req = (HttpWebRequest)WebRequest.Create(url); req.Method = "POST"; req.ContentType = "application/xml; charset=utf-8"; req.Timeout = 30000; req.Headers.Add("SOAPAction", url); System.Xml.XmlDocument xmlDoc = new System.Xml.XmlDocument(); xmlDoc.XmlResolver = null; xmlDoc.Load(@"d:\test.xml"); string sXML = xmlDoc.InnerXml; req.ContentLength = sXML.Length; System.IO.StreamWriter sw = new System.IO.StreamWriter(req.GetRequestStream()); sw.Write(sXML); sw.Close(); res = (HttpWebResponse)req.GetResponse(); } catch (Exception ex) { System.Console.WriteLine(ex.Message); } } } }
    It's the first time I'm trying to do something with WCF, so I don't know how to solve this problem. Since there are a lot of professionals here, I would appreciate your help in solving it. Thank you in advance!

    Read the article
