Search Results

Search found 7078 results on 284 pages for 'extended range'.

Page 271/284

  • problems mounting an external IDE drive via USB in ubuntu

    - by Roy Rico
    I am having a problem connecting a specific IDE drive to my linux box. It's an old drive which I just want to get about 3 GB of files off of. INFO I am trying to connect a 200GB IDE Maxtor Drive, internally and externally... externally: I am using a self-powered USB IDE external drive enclosure which I have used to connect various drives, under ubuntu and windows, in the past. The other posts stated it could be a problem: I think I may have formatted the /dev/sdc partition instead of the /dev/sdc1 partition when I originally formatted the drive. internally: I only have one machine left that has an internal IDE interface, and it's got XP on it. I plugged this drive internally into this machine with windows XP and used the ext2/ext3 drivers to mount this drive, but some files have question marks (?) in the file names which is messing up my copy process in windows. I can't delete the files under windows. Ubuntu Linux will not install on my only remaining machine that has an IDE controller. I have tried the suggestions in the questions below http://superuser.com/questions/88182/mount-an-external-drive-in-ubuntu http://superuser.com/questions/23210/ubuntu-fails-to-mount-usb-drive it looks like I can see the drive in /proc/partitions $ cat /proc/partitions major minor #blocks name 8 0 78125000 sda 8 1 74894998 sda1 8 2 1 sda2 8 5 3229033 sda5 8 16 199148544 sdb <-- could be my drive? but it's not listed under fdisk -l $ fdisk -l Disk /dev/sda: 80.0 GB, 80000000000 bytes 255 heads, 63 sectors/track, 9726 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xd0f4738c Device Boot Start End Blocks Id System /dev/sda1 * 1 9324 74894998+ 83 Linux /dev/sda2 9325 9726 3229065 5 Extended /dev/sda5 9325 9726 3229033+ 82 Linux swap / Solaris and here is my log of /var/log/messages, with a bunch of weird output; can someone let me know what that weird output is? Mar 3 19:49:40 mala kernel: [687455.112029] usb 1-7: new high speed USB device using ehci_hcd and address 3 Mar 3 19:49:41 mala kernel: [687455.248576] usb 1-7: configuration #1 chosen from 1 choice Mar 3 19:49:41 mala kernel: [687455.267450] Initializing USB Mass Storage driver... Mar 3 19:49:41 mala kernel: [687455.269180] scsi4 : SCSI emulation for USB Mass Storage devices Mar 3 19:49:41 mala kernel: [687455.269410] usbcore: registered new interface driver usb-storage Mar 3 19:49:41 mala kernel: [687455.269416] USB Mass Storage support registered.
Mar 3 19:49:46 mala kernel: [687460.270917] scsi 4:0:0:0: Direct-Access Maxtor 6 Y200P0 YAR4 PQ: 0 ANSI: 2 Mar 3 19:49:46 mala kernel: [687460.271485] sd 4:0:0:0: Attached scsi generic sg2 type 0 Mar 3 19:49:46 mala kernel: [687460.278858] sd 4:0:0:0: [sdb] 398297088 512-byte logical blocks: (203 GB/189 GiB) Mar 3 19:49:46 mala kernel: [687460.280866] sd 4:0:0:0: [sdb] Write Protect is off Mar 3 19:50:16 mala kernel: [687460.283784] sdb: Mar 3 19:50:16 mala kernel: [687491.112020] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:50:47 mala kernel: [687522.120030] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:51:18 mala kernel: [687553.112034] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:51:49 mala kernel: [687584.116025] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:52:02 mala kernel: [687596.170632] type=1505 audit(1267671122.035:31): operation="profile_replace" pid=8426 name=/usr/lib/cups/backend/cups-pdf Mar 3 19:52:02 mala kernel: [687596.171551] type=1505 audit(1267671122.035:32): operation="profile_replace" pid=8426 name=/usr/sbin/cupsd Mar 3 19:52:06 mala kernel: [687600.908056] async/0 D c08145c0 0 7655 2 0x00000000 Mar 3 19:52:06 mala kernel: [687600.908062] e5601d38 00000046 e5774000 c08145c0 e4c2a848 c08145c0 d203973a 0002713d Mar 3 19:52:06 mala kernel: [687600.908072] c08145c0 c08145c0 e4c2a848 c08145c0 00000000 0002713d c08145c0 f0a98c00 Mar 3 19:52:06 mala kernel: [687600.908079] e4c2a5b0 c20125c0 00000002 e5601d80 e5601d44 c056f3be e5601d78 e5601d4c Mar 3 19:52:06 mala kernel: [687600.908087] Call Trace: Mar 3 19:52:06 mala kernel: [687600.908099] [<c056f3be>] io_schedule+0x1e/0x30 Mar 3 19:52:06 mala kernel: [687600.908107] [<c01b2cf5>] sync_page+0x35/0x40 Mar 3 19:52:06 mala kernel: [687600.908111] [<c056f8f7>] __wait_on_bit_lock+0x47/0x90 Mar 3 19:52:06 mala kernel: [687600.908115] [<c01b2cc0>] ? sync_page+0x0/0x40 Mar 3 19:52:06 mala kernel: [687600.908121] [<c020f390>] ? blkdev_readpage+0x0/0x20 Mar 3 19:52:06 mala kernel: [687600.908125] [<c01b2ca9>] __lock_page+0x79/0x80 Mar 3 19:52:06 mala kernel: [687600.908130] [<c015c130>] ? wake_bit_function+0x0/0x50 Mar 3 19:52:06 mala kernel: [687600.908135] [<c01b459f>] read_cache_page_async+0xbf/0xd0 Mar 3 19:52:06 mala kernel: [687600.908139] [<c01b45c2>] read_cache_page+0x12/0x60 Mar 3 19:52:06 mala kernel: [687600.908144] [<c0232dca>] read_dev_sector+0x3a/0x80 Mar 3 19:52:06 mala kernel: [687600.908148] [<c0233d3e>] adfspart_check_ICS+0x1e/0x160 Mar 3 19:52:06 mala kernel: [687600.908152] [<c023339f>] ? disk_name+0xaf/0xc0 Mar 3 19:52:06 mala kernel: [687600.908157] [<c0233d20>] ? adfspart_check_ICS+0x0/0x160 Mar 3 19:52:06 mala kernel: [687600.908161] [<c02334de>] check_partition+0x10e/0x180 Mar 3 19:52:06 mala kernel: [687600.908165] [<c02335f6>] rescan_partitions+0xa6/0x330 Mar 3 19:52:06 mala kernel: [687600.908171] [<c0312472>] ? kobject_get+0x12/0x20 Mar 3 19:52:06 mala kernel: [687600.908175] [<c0312472>] ? kobject_get+0x12/0x20 Mar 3 19:52:06 mala kernel: [687600.908180] [<c039fc43>] ? get_device+0x13/0x20 Mar 3 19:52:06 mala kernel: [687600.908185] [<c03c263f>] ? sd_open+0x5f/0x1b0 Mar 3 19:52:06 mala kernel: [687600.908189] [<c020fda0>] __blkdev_get+0x140/0x310 Mar 3 19:52:06 mala kernel: [687600.908194] [<c020f0ac>] ? 
bdget+0xec/0x100 Mar 3 19:52:06 mala kernel: [687600.908198] [<c020ff7a>] blkdev_get+0xa/0x10 Mar 3 19:52:06 mala kernel: [687600.908202] [<c0232f30>] register_disk+0x120/0x140 Mar 3 19:52:06 mala kernel: [687600.908207] [<c0308b4d>] ? blk_register_region+0x2d/0x40 Mar 3 19:52:06 mala kernel: [687600.908211] [<c03084f0>] ? exact_match+0x0/0x10 Mar 3 19:52:06 mala kernel: [687600.908216] [<c0308cf0>] add_disk+0x80/0x140 Mar 3 19:52:06 mala kernel: [687600.908221] [<c03084f0>] ? exact_match+0x0/0x10 Mar 3 19:52:06 mala kernel: [687600.908225] [<c0308860>] ? exact_lock+0x0/0x20 Mar 3 19:52:06 mala kernel: [687600.908230] [<c03c53df>] sd_probe_async+0xff/0x1c0
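    If the filesystem really was written to the whole /dev/sdc device rather than to /dev/sdc1, there is no partition table for fdisk to print, which would explain why the disk shows up in /proc/partitions but not in fdisk -l. A minimal sketch of what to try next, assuming the enclosure keeps presenting the disk as /dev/sdb and the filesystem is ext3 (fall back to ext2):
      $ sudo fdisk -l /dev/sdb                            # will likely report no valid partition table if mkfs ran on the raw device
      $ sudo mkdir -p /mnt/olddrive
      $ sudo mount -t ext3 -o ro /dev/sdb /mnt/olddrive   # mount the raw device read-only
      $ sudo mount -t ext2 -o ro /dev/sdb /mnt/olddrive   # only if the ext3 attempt fails
      $ dmesg | tail -n 20                                # if the mount hangs, look for more of the USB resets seen above
    The repeated "reset high speed USB device" lines and the hung partition-scan trace can also mean the enclosure is underpowered or the drive is failing, so a read-only mount and an immediate copy of the 3 GB is the safer route.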

    Read the article

  • ESX 3.5 Cluster & MD3000i -- both servers see iSCSI targets, but only one can use the partition.

    - by GruffTech
    Alright. First and foremost, Warning. This is a bigger-then-normal question. I like to be thorough and try to eliminate all possible "easymode" answers, as well as give everyone a feel of what i've tried. I've included several images of our setup and the problem it is having.. TLDR Version: So I've followed the guides located here: ESX Deployment Guide V1 this is the guide Dell has sent me to setup two ESX3.5 servers mounting a Dell MD3000i. It doesn't work. Both servers can't use the same storage partition on the MD3000. Both servers see it, but only one server can actually use it. (that server being whatever server created the partition on the target.) Both ESX servers are members of the Host Group. Full Version I have 2 ESX3.5 Servers (10.0.7.102, also called EPI2, and 10.0.7.103, also called EPI3.) connected to a iSCSI SAN Device (Dell MD3000i). Both ESX servers can "scan" the SAN and see the LUNS. Part One: MD3000i Storage On the MD3000i, Both servers are in my host group. I have two partitions, VM1 and VM2, both 1.6TB (vmware doesn't like anything past 2tb.) And you can even see that the ESX servers are targetting the MD3000 just fine. Part Two: The ESX Servers Figure 1. So as you can see above, Both ESX Servers (10.0.7.102 and 10.0.7.103) are able to see and scan the MD3000i SAN. Figure 2. Above is the storage both servers see. I created the storage partition on EPI2 (102). I then Extended the partition to include the second LUN for a grand total of 3.27 TB of storage. When i "rescan" on 103 (the server not mounting the partition), I get the below log in log/messages. Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4). being the only line that grabs my attentions. (EPI3 is the server name) Mar 11 10:41:04 epi3 vmkiscsid[5436]: Connected to Discovery Address 192.168.130.101 Mar 11 10:41:04 epi3 vmkiscsid[5437]: Connected to Discovery Address 192.168.130.102 Mar 11 10:41:04 epi3 vmkiscsid[5438]: Connected to Discovery Address 192.168.131.101 Mar 11 10:41:04 epi3 vmkiscsid[5439]: Connected to Discovery Address 192.168.131.102 Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 0 Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735 Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdb : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Device id info for sdb: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0xa2 0x0 0x0 0x15 0xe2 0x4d 0x75 0xf6 0x99 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Id for sdb 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0xa2 0x00 0x00 0x15 0xe2 
0x4d 0x75 0xf6 0x99 0x4d 0x44 0x33 0x30 0x30 0x30 Mar 11 10:41:17 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdb at scsi2, channel 0, id 0, lun 0 Mar 11 10:41:17 epi3 kernel: Attached scsi disk sdb at scsi2, channel 0, id 0, lun 0 Mar 11 10:41:17 epi3 kernel: scan_scsis starting finish Mar 11 10:41:17 epi3 kernel: SCSI device sdb: 3509329920 512-byte hdwr sectors (1797751 MB) Mar 11 10:41:17 epi3 kernel: sdb: sdb1 Mar 11 10:41:17 epi3 kernel: scan_scsis done with finish Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 1 Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735 Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdc : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Device id info for sdc: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0x86 0x0 0x0 0xd 0xb7 0x4d 0x75 0xf2 0x77 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Id for sdc 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0x86 0x00 0x00 0x0d 0xb7 0x4d 0x75 0xf2 0x77 0x4d 0x44 0x33 0x30 0x30 0x30 Mar 11 10:41:18 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdc at scsi2, channel 0, id 0, lun 1 Mar 11 10:41:18 epi3 kernel: Attached scsi disk sdc at scsi2, channel 0, id 0, lun 1 Mar 11 10:41:18 epi3 kernel: scan_scsis starting finish Mar 11 10:41:18 epi3 kernel: SCSI device sdc: 3509329920 512-byte hdwr sectors (1797751 MB) Mar 11 10:41:18 epi3 kernel: sdc: sdc1 Mar 11 10:41:18 epi3 kernel: scan_scsis done with finish Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4). Mar 11 10:41:18 epi3 kernel: scsi singledevice 1 0 0 0 Things I've Tried: Removing iSCSI targets from only 103, disabling iSCSI, rebooting, enabled iSCSI, re-adding targets, rescan. Same result. Removing partition on 102, Formatted partition on 103 instead. Same result, except flipped. 103 can use storage, 102 can not. Starting Over. Removing all iSCSI Targets on both ESX Boxes, disabling iSCSI, turning off the firewall for iSCSI, rebooting ESX. Then on the MD3000, Removed the Host Group, Removed the Host-to-Virtual Mappings, Restarted the SAN. Followed the Documentation again, same result. Both servers see the storage, but only one server can use it. Disabling and Re-enabling VMware DRS and HA. Same result. Flat-out turning off VMware DRS and HA, and doing the "start over" step to see if maybe that borked it. Same Result. I'm kinda loosing my mind here, Everything i read online says "just partition it and if the ESX boxes can see the targets, it just works".... well crap. Any ideas, any other things to try? 
Can anyone at least point me in the right direction? I'm really tired of working from 1am till 4am (our maintenance hours).
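    One classic cause of exactly this symptom on ESX 3.x is that the second host detects the VMFS volume as a snapshot LUN (for instance when the LUN presentation differs slightly between the two hosts) and quietly refuses to mount it. A sketch of service-console checks to run on the host that cannot use the datastore; the vmhba number below is a guess (software iSCSI is usually vmhba32 or vmhba40), so substitute whatever esxcfg-vmhbadevs reports:
      # list the LUNs and existing VMFS mappings as this host sees them
      esxcfg-vmhbadevs
      esxcfg-vmhbadevs -m
      # snapshot-LUN handling; ESX 3.x defaults are EnableResignature=0, DisallowSnapshotLUN=1
      esxcfg-advcfg -g /LVM/EnableResignature
      esxcfg-advcfg -g /LVM/DisallowSnapshotLUN
      # rescan the iSCSI adapter and refresh the VMFS volume list
      esxcfg-rescan vmhba40
      vmkfstools -V
    Changing EnableResignature re-signatures and relabels the volume, so only flip it once the side effects are understood; the read-only checks above are safe.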

    Read the article

  • Wireless internet connection connects but internet does not work (no packets received). Wired does.

    - by Rodney
    When I connect my PC via ethernet cable to my ADSL router it works fine. When I connect via Wireless it connects and the internet will work for a random amount of time and then stop working. It stays connected with a strong signal but no packets are received. My laptop/iphone are right next to it and wireless works fine. If I open the Wireless USB status, it says it is connected to my SSID with full strength (54 Mbps - I am 3 metres away from my router) and the activity shows as Packets 594 SENT and 105 RECEIVED (this goes up VERY slowly). I have tried the following: Turned off antivirus and firewall completely. Tested the wifi signal - I am writing this on my laptop which is next to my PC and also has full wifi strength. Tried a different wireless adapter - I dug out an old PCI wireless card - it does the exact same thing. Compared all wireless settings to my laptop. I can ping google.com and it replies (sometimes with packet loss). When I reboot the PC it will connect for a minute or two (random time) and then just stops again. I tried Firefox, IE etc. - no joy. I have updated all latest versions (Netgear WG111v2) and drivers. Checked Event Log - nothing unusual. Pinged the router (and even connected as admin for the few minutes when the internet does work). Changed the MTU down to 1200 using DrTCP. Checked Device Manager for conflicts - none. I ping the router from the PC (192.168.0.10 - 192.168.0.1) and it replies with 4 packets. BUT, on my router admin page (which I access via http on my laptop wirelessly) - if I ping 192.168.0.10 all packets time out (pinging my laptop 192.168.0.12 works fine). My router admin page shows the leased IP address for 192.168.0.10 (i.e. it is definitely talking to the router initially). Now I am out of ideas - please help. I think it is an OS/software issue as I have tried 2 different wireless adapters (PCI and USB) with the same result, but all other wireless devices work fine around mine. It's not the firewall. It is getting assigned an IP address correctly (my PC gets 192.168.0.10, my laptop is .12); it is assigned by DHCP. As soon as I plug in the ethernet cable it all works fine. Repairing the adapter sometimes helps but it will always stop working after a random time. The wireless adapter always shows as connected with Excellent signal but the internet does not work. I am running Windows XP SP3 and have tried a Netgear WG111v2 USB adapter. Thanks in advance! UPDATE: The internet seems to be working; it is just either sending packets too small or too slowly to work (some small pages load bits of them very slowly but then hang).
XP seems to have a networking diagnostic app - here is the output: Last diagnostic run time: 08/30/10 08:16:38 IP Configuration Diagnostic Invalid IP address info Valid IP address detected: 192.168.0.10 IP Layer Diagnostic Corrupted IP routing table info The default route is valid info The loopback route is valid info The local host route is valid info The local subnet route is valid Invalid ARP cache entries action The ARP cache has been flushed Gateway Diagnostic Gateway info The following proxy configuration is being used by IE: Automatically Detect Settings:Disabled Automatic Configuration Script: Proxy Server: Proxy Bypass list: info This computer has the following default gateway entry(ies): 192.168.0.1 info This computer has the following IP address(es): 192.168.0.10 info The default gateway is in the same subnet as this computer info The default gateway entry is a valid unicast address info The default gateway address was resolved via ARP in 1 try(ies) info The default gateway was reached via ICMP Ping in 1 try(ies) info TCP port 80 on host 65.55.12.249 was successfully reached info The Internet host www.microsoft.com was successfully reached info The default gateway is OK DNS Client Diagnostic DNS - Not a home user scenario info Using Web Proxy: no info Resolving name ok for (www.microsoft.com): yes No DNS servers DNS failure HTTP, HTTPS, FTP Diagnostic HTTP, HTTPS, FTP connectivity info FTP (Passive): Successfully connected to ftp.microsoft.com. info HTTP: Successfully connected to www.microsoft.com. warn HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out warn HTTPS: Error 12002 connecting to www.passport.net: The operation timed out error Could not make an HTTPS connection. info Redirecting user to support call WinSock Diagnostic WinSock status info All base service provider entries are present in the Winsock catalog. info The Winsock Service provider chains are valid. info Provider entry MSAFD Tcpip [TCP/IP] passed the loopback communication test. info Provider entry MSAFD Tcpip [UDP/IP] passed the loopback communication test. info Provider entry RSVP UDP Service Provider passed the loopback communication test. info Provider entry RSVP TCP Service Provider passed the loopback communication test. info Connectivity is valid for all Winsock service providers. Wireless Diagnostic Wireless - Service disabled Wireless - User SSID action User input required: Specify network name or SSID Wireless - First time setup info The Wireless Network name (SSID) to which the user would like to connect = RodSof Wifi. 
Wireless - Radio off info Valid IP address detected: 192.168.0.10 Wireless - Out of range Wireless - Hardware issue Wireless - Novice user Wireless - Ad-hoc network Wireless - Less preferred Wireless - 802.1x enabled Wireless - Configuration mismatch Wireless - Low SNR Network Adapter Diagnostic Network location detection info Using home Internet connection Network adapter identification info Network connection: Name=Local Area Connection 2, Device=Realtek RTL8168C(P)/8111C(P) PCI-E Gigabit Ethernet NIC, MediaType=LAN, SubMediaType=LAN info Network connection: Name=Wireless USB, Device=NETGEAR WG111v2 54Mbps Wireless USB 2.0 Adapter, MediaType=LAN, SubMediaType=WIRELESS info Both Ethernet and Wireless connections available, prompting user for selection action User input required: Select network connection info Wireless connection selected Network adapter status info Network connection status: Connected HTTP, HTTPS, FTP Diagnostic HTTP, HTTPS, FTP connectivity info FTP (Active): Successfully connected to ftp.microsoft.com. warn HTTP: Error 12007 connecting to www.microsoft.com: The server name or address could not be resolved warn HTTP: Error 12002 connecting to www.hotmail.com: The operation timed out warn HTTPS: Error 12002 connecting to www.passport.net: The operation timed out warn HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out error Could not make an HTTP connection. error Could not make an HTTPS connection.
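    Given that the MTU was already lowered with DrTCP and the failures cluster around larger transfers (HTTPS handshakes time out, small pages half-load), a fragmentation probe plus a TCP/Winsock reset from a command prompt is a cheap next step. A sketch for XP, assuming the router is still 192.168.0.1; the -l values are ICMP payload sizes, so 1472 corresponds to a 1500-byte MTU and 1172 to the 1200-byte MTU set earlier:
      ipconfig /all
      rem confirm the wireless adapter actually received DNS servers; the diagnostic above reported "No DNS servers"
      ping -f -l 1472 192.168.0.1
      ping -f -l 1172 192.168.0.1
      rem -f forbids fragmentation; if the larger probe fails over wireless but works over the cable, suspect the driver or MTU
      netsh int ip reset reset.log
      netsh winsock reset
      rem both resets exist on XP SP2 and later; reboot afterwards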

    Read the article

  • Slow queries in Rails - not sure if my indexes are being used.

    - by Max Williams
    I'm doing a quite complicated find with lots of includes, which rails is splitting into a sequence of discrete queries rather than do a single big join. The queries are really slow - my dataset isn't massive, with none of the tables having more than a few thousand records. I have indexed all of the fields which are examined in the queries but i'm worried that the indexes aren't helping for some reason: i installed a plugin called "query_reviewer" which looks at the queries used to build a page, and lists problems with them. This states that indexes AREN'T being used, and it features the results of calling 'explain' on the query, which lists various problems. Here's an example find call: Question.paginate(:all, {:page=>1, :include=>[:answers, :quizzes, :subject, {:taggings=>:tag}, {:gradings=>[:age_group, :difficulty]}], :conditions=>["((questions.subject_id = ?) or (questions.subject_id = ? and tags.name = ?))", "1", 19, "English"], :order=>"subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id", :per_page=>30}) And here are the generated sql queries: SELECT DISTINCT `questions`.id FROM `questions` LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question' LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English'))) ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id LIMIT 0, 30 SELECT `questions`.`id` AS t0_r0 <..etc...> FROM `questions` LEFT OUTER JOIN `answers` ON answers.question_id = questions.id LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`) LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`) LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question' LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English'))) AND `questions`.id IN (602, 634, 666, 698, 730, 762, 613, 645, 677, 709, 741, 592, 624, 656, 688, 720, 752, 603, 635, 667, 699, 731, 763, 614, 646, 678, 710, 742, 593, 625) ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id SELECT count(DISTINCT `questions`.id) AS count_all FROM `questions` LEFT OUTER JOIN `answers` ON answers.question_id = questions.id LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`) LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`) LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question' LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id WHERE 
(((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English'))) Actually, looking at these all nicely formatted here, there's a crazy amount of joining going on here. This can't be optimal surely. Anyway, it looks like i have two questions. 1) I have an index on each of the ids and foreign key fields referred to here. The second of the above queries is the slowest, and calling explain on it (doing it directly in mysql) gives me the following: +----+-------------+----------------+--------+---------------------------------------------------------------------------------+-------------------------------------------------+---------+------------------------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+----------------+--------+---------------------------------------------------------------------------------+-------------------------------------------------+---------+------------------------------------------------+------+----------------------------------------------+ | 1 | SIMPLE | questions | range | PRIMARY,index_questions_on_subject_id | PRIMARY | 4 | NULL | 30 | Using where; Using temporary; Using filesort | | 1 | SIMPLE | answers | ref | index_answers_on_question_id | index_answers_on_question_id | 5 | millionaire_development.questions.id | 2 | | | 1 | SIMPLE | quiz_questions | ref | index_quiz_questions_on_question_id | index_quiz_questions_on_question_id | 5 | millionaire_development.questions.id | 1 | | | 1 | SIMPLE | quizzes | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.quiz_questions.quiz_id | 1 | | | 1 | SIMPLE | subjects | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.questions.subject_id | 1 | | | 1 | SIMPLE | taggings | ref | index_taggings_on_taggable_id_and_taggable_type,index_taggings_on_taggable_type | index_taggings_on_taggable_id_and_taggable_type | 263 | millionaire_development.questions.id,const | 1 | | | 1 | SIMPLE | tags | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.taggings.tag_id | 1 | Using where | | 1 | SIMPLE | gradings | ref | index_gradings_on_question_id | index_gradings_on_question_id | 5 | millionaire_development.questions.id | 2 | | | 1 | SIMPLE | age_groups | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.gradings.age_group_id | 1 | | | 1 | SIMPLE | difficulties | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.gradings.difficulty_id | 1 | | +----+-------------+----------------+--------+---------------------------------------------------------------------------------+-------------------------------------------------+---------+------------------------------------------------+------+----------------------------------------------+ The query_reviewer plugin has this to say about it - it lists several problems: Table questions: Using temporary table, Long key length (263), Using filesort MySQL must do an extra pass to find out how to retrieve the rows in sorted order. To resolve the query, MySQL needs to create a temporary table to hold the result. The key used for the index was rather long, potentially affecting indices in memory 2) It looks like rails isn't splitting this find up in a very optimal way. Is it, do you think? Am i better off doing several find queries manually rather than one big combined one? Grateful for any advice, max

    Read the article

  • Architecture choice about representation of collections in Business Objects

    - by Rajarshi
    I have made certain choices in my architecture which I request the community to review and comment. I am breaking up the post in smaller sections to make it easier to understand the context and then suggest/comment. I am sorry that the post is long, but is required to explain the context. What am I building A typical business application where there are application users, security roles, business operation/action rights based on roles and several business modules like Stock Receive, Stock Transfer, Sale Order, Sale Invoice, Sale Return, Stock Audit etc. and several reports. The application is a WinForm application since it has a lot of rich and responsive UI requirements and has to operate in disconnected mode (with a local SQL Server), most of the time. What have I done I have built a framework - nothing to boast about, but just a set of libraries that serves the repetative requirements of my application, e.g. authentication, role based authorization, data access, validation, exception handling, logging, change status tracking, presentation model compliance and reasonable loose coupling between components. No, I have not written everything from scratch, you can say I have consolidated many things together like some concepts from CSLA, Martin Fowler for Presentation Model, blocks from Enterprise Library, Unity etc. to build a set of libraries that will help my developers be productive quickly without having to look up Google for many of the technical requirements. I have tried to keep the framework generic so that it can be used in typical business applications and also tried to follow some best practices that will support the same Business Objects to be used in an ASP.NET MVC environment also. My present architecture serves my objectives well, and have built several modules (on WinForm) without much trouble. The architecture also lent itself well to build some usable prototype on ASP.NET MVC with the same set of business objects, without changing a single line of code. My Dilemma I have used Custom Business Objects since that gives me a clearer OOP representation of the problem scope in my solution scope, and helps me visualize my entire solution as collection of objects with data and behavior rather than having a set of relational data (DataSet) and implement behaviours (business logic, validation) etc. separately. With rich databinding support in .NET 2.0 binding Custom Business Objects to UI was a breeze. Now while building my business objects, I am still in a dilemma about representation of collections in business objects. Currently I am using DataSets to represent collections while I have seen many suggestions to implement custom collections. For example, in my vision, a typical Sale Invoice Object will contain 'Sales Invoice Items' as a collection. Now theoritically, I can accept that the each 'Sales Invoice Item' should have its own behavior along with their data (ItemCode, Name, Qty, Price etc.) but typically managing of Sale Invoice Items in a Sale Invoice is handled by the Sale Invoice Object itself, e.g. adding/removing Items from collection. Additionally, we can also put business logic/rules for the Sales Invoice Items like "Qty should not be greater than the ordered qty", "Price should be max 10% above the price in Sale Order" etc. in the Sale Invoice object itself. 
With that kind of a vision, I felt that most business object child collections can be managed by the parent itself, including add/remove from collection as well and implementing business logic for the collection items, hence the collection items hold nothing but data. Additionally, typical collections are represented in UI in Grids, where ability to support DataBinding becomes very important for any collection. Implementing a custom collection, in that case would also mean, I have to implement robust DataBinding support as well, for the collection, which is of course time consuming. Now, considering child collection behaviors are implemented in the parent and the need for DataBinding of child collections, I chose DataSet to represent any child collection in my business objects. In the above example of Sale Invoice I will have 'Invoice Number', 'Date', 'Customer' etc. as attributes of the 'Sale Invoice' but 'InvoiceItems' as a DataSet. Of course, when I say DataSet, it is not a vanilla dataset but an extended DataSet that supports business rule validation and the same role based security model of my framework to allow/deny any business operation to rows/columns of the DataSet, automatically. This approach has allowed easier collection management and databinding in my business objects and my developers are able to deliver modules rapidly. Questions Do you feel that the approach is reasonable? Do you see any shortcomings of this approach? I am recently thinking of using 'Typed DataSets' as child collections, for easier representation in code, that will allow me to write 'currentInvoice.InvoiceItems' (for the DataTable) and 'invoiceItem.ProductCode' or 'invoiceItem.Qty', instead of 'drow["ProductCode"].ToString()' or '(int)drow["Qty"]' etc. Does this choice have any demerits? Thank you if you have read so far and a salute if you still have the Energy to answer.

    Read the article

  • What is "elegant" code?

    - by Breton
    I see a lot of lip service and talk about the most "elegant" way to do this or that. I think if you spend enough time programming you begin to obtain a sort of intuitive feel for what it is we call "elegance". But I'm curious. Even if we can look at a bit of code, and say instinctively "That's elegant", or "That's messy", I wonder if any of us really understands what that means. Is there a precise definition for this "elegance" we keep referring to? If there is, what is it? Now, what I mean by a precise definition, is a series of statements which can be used to derive questions about a peice of code, or a program as a whole, and determine objectively, or as objectively as possible, whether that code is "elegant" or not. May I assert, that perhaps no such definition exists, and it's all just personal preference. In this case, I ask you a slightly different question: Is there a better word for "elegance", or a better set of attributes to use for judging code quality that is perhaps more objective than merely appealing to individual intuition and taste? Perhaps code quality is a matter of taste, and the answer to both of my questions is "no". But I can't help but feel that we could be doing better than just expressing wishy washy feelings about our code quality. For example, user interface design is something that to a broad range of people looks for all the world like a field of study that oughtta be 100% subjective matter of taste. But this is shockingly and brutally not the case, and there are in fact many objective measures that can be applied to a user interface to determine its quality. A series of tests could be written to give a definitive and repeatable score to user interface quality. (See GOMS, for instance). Now, okay. is Elegance simply "code quality" or is it something more? Is it something that can be measured? Or is it a matter of taste? Does our profession have room for taste? Maybe I'm asking the wrong questions altogether. Help me out here. Bonus Round If there is such a thing as elegance in code, and that concept is useful, do you think that justifies classifying the field of programming as an "Art" capital A, or merely a "craft". Or is it just an engineering field populated by a bunch of wishful thinking humans? Consider this question in the light of your thoughts about the elegance question. Please note that there is a distinction between code which is considered "art" in itself, and code that was written merely in the service of creating an artful program. When I ask this question, I ask if the code itself justifies calling programming an art. Bounty Note I liked the answers to this question so much, I think I'd like to make a photographic essay book from it. Released as a free PDF, and published on some kind of on demand printing service of course, such as "zazz" or "tiggle" or "printley" or something . I'd like some more answers, please!

    Read the article

  • Bug in CF9: values for unique struct keys referenced and overwritten by other keys.

    - by Gin Doe
    We've run into a serious issue with CF9 wherein values for certain struct keys can be referenced by other keys, despite those other keys never being set. See the following examples: Edit: Looks like it isn't just something our servers ate. This is Adobe bug-track ticket 81884: http://cfbugs.adobe.com/cfbugreport/flexbugui/cfbugtracker/main.html#bugId=81884. <cfset a = { AO = "foo" } /> <cfset b = { AO = "foo", B0 = "bar" } /> <cfoutput> The following should throw an error. Instead both keys refer to the same value. <br />Struct a: <cfdump var="#a#" /> <br />a.AO: #a.AO# <br />a.B0: #a.B0# <hr /> The following should show a struct with 2 distinct keys and values. Instead it contains a single key, "AO", with a value of "bar". <br />Struct b: <cfdump var="#b#" /> This is obviously a complete show-stopper for us. I'd be curious to know if anyone has encountered this or can reproduce this in their environment. For us, it happens 100% of the time on Apache/CF9 running on Linux, both RH4 and RH5. We're using the default JRun install on Java 1.6.0_14. To see the extent of the problem, we ran a quick loop to find other naming sequences that are affected and found hundreds of matches for 2 letter key names. A similar loop found more conflicts in 3 letter names. <cfoutput>Testing a range of affected key combinations. This found hundreds of cases on our platform. Aborting after 50 here.</cfoutput> <cfscript> teststring = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; stringlen = len(teststring); matchesfound = 0; matches = ""; for (i1 = 1; i1 <= stringlen; i1++) { symbol1 = mid(teststring, i1, 1); for (i2 = 1; i2 <= stringlen; i2++) { teststruct = structnew(); symbol2 = mid(teststring, i2, 1); symbolwhole = symbol1 & symbol2; teststruct[ symbolwhole ] = "a string"; for (q1 = 1; q1 <= stringlen; q1++) { innersymbol1 = mid(teststring, q1, 1); for (q2 = 1; q2 <= stringlen; q2++) { innersymbol2 = mid(teststring, q2, 1); innersymbolwhole = innersymbol1 & innersymbol2; if ((i1 != q1 || i2 != q2) && structkeyexists(teststruct, innersymbolwhole)) { // another affected pair of keys! writeoutput ("<br />#symbolwhole# = #innersymbolwhole#"); if (matchesfound++ > 50) { // we've seen enough abort; } } } } } } </cfscript> And edit again: This doesn't just affect struct keys but names in the variables scope as well. At least the variables scope has the presence of mind to throw an error, "can't load a null": <cfset test_b0 = "foo" /> <cfset test_ao = "bar" /> <cfoutput> test_b0: #test_b0# <br />test_ao: #test_ao# </cfoutput>

    Read the article

  • Google App Engine + Form Validation

    - by Iwona
    Hi, I would like to do google app engine form validation but I dont know how to do it? I tried like this: from google.appengine.ext.db import djangoforms from django import newforms as forms class SurveyForm(forms.Form): occupations_choices = ( ('1', ""), ('2', "Undergraduate student"), ('3', "Postgraduate student (MSc)"), ('4', "Postgraduate student (PhD)"), ('5', "Lab assistant"), ('6', "Technician"), ('7', "Lecturer"), ('8', "Other" ) ) howreach_choices = ( ('1', ""), ('2', "Typed the URL directly"), ('3', "Site is bookmarked"), ('4', "A search engine"), ('5', "A link from another site"), ('6', "From a book"), ('7', "Other") ) boxes_choices = ( ("des", "Website Design"), ("svr", "Web Server Administration"), ("com", "Electronic Commerce"), ("mkt", "Web Marketing/Advertising"), ("edu", "Web-Related Education") ) name = forms.CharField(label='Name', max_length=100, required=True) email = forms.EmailField(label='Your Email Address:') occupations = forms.ChoiceField(choices=occupations_choices, label='What is your occupation?') howreach = forms.ChoiceField(choices=howreach_choices, label='How did you reach this site?') # radio buttons 1-5 rating = forms.ChoiceField(choices=range(1,6), label='What is your occupation?', widget=forms.RadioSelect) boxes = forms.ChoiceField(choices=boxes_choices, label='Are you involved in any of the following? (check all that apply):', widget=forms.CheckboxInput) comment = forms.CharField(widget=forms.Textarea, required=False) And I wanted to display it like this: template_values = { 'url' : url, 'url_linktext' : url_linktext, 'userName' : userName, 'item1' : SurveyForm() } And I have this error message: Traceback (most recent call last): File "C:\Program Files\Google\google_appengine\google\appengine\ext\webapp_init_.py", line 515, in call handler.get(*groups) File "C:\Program Files\Google\google_appengine\demos\b00213576\main.py", line 144, in get self.response.out.write(template.render(path, template_values)) File "C:\Program Files\Google\google_appengine\google\appengine\ext\webapp\template.py", line 143, in render return t.render(Context(template_dict)) File "C:\Program Files\Google\google_appengine\google\appengine\ext\webapp\template.py", line 183, in wrap_render return orig_render(context) File "C:\Program Files\Google\google_appengine\lib\django\django\template_init_.py", line 168, in render return self.nodelist.render(context) File "C:\Program Files\Google\google_appengine\lib\django\django\template_init_.py", line 705, in render bits.append(self.render_node(node, context)) File "C:\Program Files\Google\google_appengine\lib\django\django\template_init_.py", line 718, in render_node return(node.render(context)) File "C:\Program Files\Google\google_appengine\lib\django\django\template\defaulttags.py", line 209, in render return self.nodelist_true.render(context) File "C:\Program Files\Google\google_appengine\lib\django\django\template_init_.py", line 705, in render bits.append(self.render_node(node, context)) File "C:\Program Files\Google\google_appengine\lib\django\django\template_init_.py", line 718, in render_node return(node.render(context)) File "C:\Program Files\Google\google_appengine\lib\django\django\template_init_.py", line 768, in render return self.encode_output(output) File "C:\Program Files\Google\google_appengine\lib\django\django\template_init_.py", line 757, in encode_output return str(output) File "C:\Program Files\Google\google_appengine\lib\django\django\newforms\util.py", line 26, in str return 
self.unicode().encode(settings.DEFAULT_CHARSET) File "C:\Program Files\Google\google_appengine\lib\django\django\newforms\forms.py", line 73, in unicode return self.as_table() File "C:\Program Files\Google\google_appengine\lib\django\django\newforms\forms.py", line 144, in as_table return self._html_output(u'%(label)s%(errors)s%(field)s%(help_text)s', u'%s', '', u'%s', False) File "C:\Program Files\Google\google_appengine\lib\django\django\newforms\forms.py", line 129, in _html_output output.append(normal_row % {'errors': bf_errors, 'label': label, 'field': unicode(bf), 'help_text': help_text}) File "C:\Program Files\Google\google_appengine\lib\django\django\newforms\forms.py", line 232, in unicode value = value.str() File "C:\Program Files\Google\google_appengine\lib\django\django\newforms\util.py", line 26, in str return self.unicode().encode(settings.DEFAULT_CHARSET) File "C:\Program Files\Google\google_appengine\lib\django\django\newforms\widgets.py", line 246, in unicode return u'\n%s\n' % u'\n'.join([u'%s' % w for w in self]) File "C:\Program Files\Google\google_appengine\lib\django\django\newforms\widgets.py", line 238, in iter yield RadioInput(self.name, self.value, self.attrs.copy(), choice, i) File "C:\Program Files\Google\google_appengine\lib\django\django\newforms\widgets.py", line 212, in init self.choice_value = smart_unicode(choice[0]) TypeError: 'int' object is unsubscriptable Do You have any idea how I can do this validation in different case? I have tried to do it using this kind of: class ItemUserAnswer(djangoforms.ModelForm): class Meta: model = UserAnswer But I dont know how to add extra labels to this form and it is displayed in one line. Do You have any suggestions? Thanks a lot as it making me crazy why it is still not working:/

    Read the article

  • FreeBSD: high load on the loopback interface

    - by user1740915
    I have a problem with a FreeBSD server. There is a FreeBSD 9.0 amd64, two network cards em1 (internet), em0 (local network) configured firewall ipfw, natd, squid (not transparent), the server acts as a gateway for access to the Internet. Next problem: upload via squid is very low. At this moment I see next: natd, dhcpd load the cpu at that time when uploading through squid and there are a lot of traffic through the loopback interface. ipfw show output 0100 655389684 36707144666 allow ip from any to any via lo0 00200 0 0 deny ip from any to 127.0.0.0/8 00300 0 0 deny ip from 127.0.0.0/8 to any 00400 0 0 deny ip from any to ::1 00500 0 0 deny ip from ::1 to any 00600 4 292 allow ipv6-icmp from :: to ff02::/16 00700 0 0 allow ipv6-icmp from fe80::/10 to fe80::/10 00800 1 76 allow ipv6-icmp from fe80::/10 to ff02::/16 00900 0 0 allow ipv6-icmp from any to any ip6 icmp6types 1 01000 0 0 allow ipv6-icmp from any to any ip6 icmp6types 2,135,136 01100 1615 76160 deny ip from 192.168.1.1 to any in via em1 01200 0 0 deny ip from 199.69.99.11 to any in via em0 01300 46652 3705426 deny ip from any to 172.16.0.0/12 via em1 01400 3936404 345618870 deny ip from any to 192.168.0.0/16 via em1 01500 4 336 deny ip from any to 0.0.0.0/8 via em1 01600 4129 387621 deny ip from any to 169.254.0.0/16 via em1 01700 0 0 deny ip from any to 192.0.2.0/24 via em1 01800 917566 33777571 deny ip from any to 224.0.0.0/4 via em1 01900 147872 22029252 deny ip from any to 240.0.0.0/4 via em1 02000 1132194739 1190981955947 divert 8668 ip4 from any to any via em1 02100 3 248 deny ip from 172.16.0.0/12 to any via em1 02200 35925 2281289 deny ip from 192.168.0.0/16 to any via em1 02300 1808 122494 deny ip from 0.0.0.0/8 to any via em1 02400 3 174 deny ip from 169.254.0.0/16 to any via em1 02500 0 0 deny ip from 192.0.2.0/24 to any via em1 02600 0 0 deny ip from 224.0.0.0/4 to any via em1 02700 0 0 deny ip from 240.0.0.0/4 to any via em1 02800 960156249 1095316736582 allow tcp from any to any established 02900 64236062 8243196577 allow ip from any to any frag 03000 34 1756 allow tcp from any to me dst-port 25 setup 03100 193 11580 allow tcp from any to me dst-port 53 setup 03200 63 4222 allow udp from any to me dst-port 53 03300 64 8350 allow udp from me 53 to any 03400 417 24140 allow tcp from any to me dst-port 80 setup 03500 211 10472 allow ip from any to me dst-port 3389 setup 05300 77 4488 allow ip from any to me dst-port 1723 setup 05400 3 156 allow ip from any to me dst-port 8443 setup 05500 9882 590596 allow tcp from any to me dst-port 22 setup 05600 1 60 allow ip from any to me dst-port 2000 setup 05700 0 0 allow ip from any to me dst-port 2201 setup 07400 4241779 216690096 deny log logamount 1000 ip4 from any to any in via em1 setup proto tcp 07500 21135656 1048824936 allow tcp from any to any setup 07600 474447 35298081 allow udp from me to any dst-port 53 keep-state 07700 532 40612 allow udp from me to any dst-port 123 keep-state 65535 1990638432 1122305322718 allow ip from any to any systat -ifstat when uploading via squid Load Average ||| Interface Traffic Peak Total tun0 in 79.507 KB/s 232.479 KB/s 42.314 GB out 2.022 MB/s 2.424 MB/s 59.662 GB lo0 in 4.450 MB/s 4.450 MB/s 43.723 GB out 4.450 MB/s 4.450 MB/s 43.723 GB em1 in 2.629 MB/s 2.982 MB/s 464.533 GB out 2.493 MB/s 2.875 MB/s 484.673 GB em0 in 240.458 KB/s 296.941 KB/s 442.368 GB out 512.508 KB/s 850.857 KB/s 416.122 GB top output PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 66885 root 1 92 0 26672K 2784K CPU3 3 528:43 65.48% natd 9160 
dhcpd 1 45 0 31032K 9280K CPU1 1 7:40 32.96% dhcpd 66455 root 1 20 0 18344K 2856K select 1 119:27 1.37% openvpn 16043 squid 1 20 0 44404K 17884K kqread 2 0:22 0.29% squid squid.conf cat /usr/local/etc/squid/squid.conf # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 # Example rule allowing access from your local networks. # Adapt to list your (internal) IP networks from where browsing # should be allowed acl localnet src 10.0.0.0/8 # RFC1918 possible internal network acl localnet src 172.16.0.0/12 # RFC1918 possible internal network acl localnet src 192.168.0.0/16 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT # # Recommended minimum Access Permission configuration: # # Only allow cachemgr access from localhost http_access allow manager localhost http_access deny manager # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on "localhost" is a local user http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow localnet http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port 192.168.1.1:3128 # Uncomment and adjust the following to add a disk cache directory. #cache_dir ufs /var/squid/cache 100 16 256 # Leave coredumps in the first cache dir coredump_dir /var/squid/cache I understand that the traffic passes through the SQUID several times. But can not find why.
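    A couple of stock FreeBSD tools can show what is actually crossing lo0 while an upload runs through squid, and whether natd is being handed squid's own connections; a short diagnostic sketch using the interfaces and rule numbers from the config above:
      # watch the loopback traffic live during one upload through the proxy
      tcpdump -ni lo0 -c 100
      # see which sockets squid and natd are holding
      sockstat -4 | egrep 'squid|natd'
      # zero the ipfw counters, run one upload, then see which rules the packets actually hit
      ipfw zero
      ipfw show | head -n 30
    If the capture shows squid's upstream connections bouncing via 127.0.0.1 and rule 02000's divert counter climbing in step, the natd/divert path is processing traffic it does not need to, which would line up with the natd CPU usage seen in top.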

    Read the article

  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files, images, videos etc. The download seems to be capped at around 550kB/s even over 100Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that, when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7MByte/s! I also tried the AnalogX SimpleWWW server just to test the plain HTTP speed of it, and it gave me a healthy 3.3Mbyte/s. I am at a total loss here. I searched the web, and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all: SendBufferSize 1048576 #(tried multiples of that too, up to 100Mbytes) EnableSendfile Off #(minor performance boost) EnableMMAP Off Win32DisableAcceptEx HostnameLookups Off #(default) I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards: HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow Additionally, I tried to install mod_bw, which sets the event timer precision to 1ms, and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. So: AnalogX HTTP: 3300kB/s Gene6 FTPD, plain: 3500kB/s Gene6 FTPD, Implicit and Explicit SSL, AES256 Cipher: 1800-2000kB/s freeSSHD: 1100kB/s SMB shared folder: about 3000kB/s Apache HTTP, plain: 550kB/s Apache HTTPS: 2700kB/s Clients that were used in the bandwidth testing: Internet Explorer 8 (HTTP, HTTPS) Firefox 8 (HTTP, HTTPS) Chrome 13 (HTTP, HTTPS) Opera 11.60 (HTTP, HTTPS) wget under CygWin (HTTP, HTTPS) FileZilla (FTP, FTPS, FTP+ES, SFTP) Windows Explorer (SMB) Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200MHz machine with 2GB RAM. However, I would like Apache to serve at at least 2Mbyte/s instead of 550kB/s, and that already works with HTTPS easily, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but no Throttling and no special filters peeking into HTTP traffic, just the plain Firewall functionality for blocking/allowing connections. The Apache error.log (Loglevel info) shows no warnings, no errors. Also nothing strange to be seen in access.log. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks! Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same. Edit 2: KM01 has requested a sniff on the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The Output is generated on downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an Open Source license. I have taken the freedom to edit the URL, removing folders and changing the actual servers domain name to www.mydomain.com. 
Here it is: LiveHTTPHeaders, Plain HTTP: http://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:55:16 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain LiveHTTPHeaders, HTTPS: https://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:56:57 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain
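    One variable the tests above do not isolate is whether the ~550 kB/s cap also applies on the loopback path. A quick check with the Cygwin wget already in use (file and host names as in the headers above; add --no-check-certificate for a self-signed certificate):
      # from the server itself, taking the NIC and the Kerio firewall out of the picture
      wget -O /dev/null http://localhost/elephantsdream_source.264
      # from a LAN client, the same file over plain HTTP and over HTTPS
      wget -O /dev/null http://www.mydomain.com/elephantsdream_source.264
      wget --no-check-certificate -O /dev/null https://www.mydomain.com/elephantsdream_source.264
    If the localhost download is also capped near 550 kB/s, the bottleneck sits inside Apache's plain-HTTP write path rather than in the network stack; if localhost is fast, the firewall's handling of port 80 becomes a prime suspect.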

    Read the article

  • Cascading S3 Sink Tap not being deleted with SinkMode.REPLACE

    - by Eric Charles
    We are running Cascading with a Sink Tap configured to store in Amazon S3 and were facing some FileAlreadyExistsException (see [1]). This happened only from time to time (about 1 time in 100) and was not reproducible.

    Digging into the Cascading code, we discovered that Hfs.deleteResource() is called (among others) by BaseFlow.deleteSinksIfNotUpdate(). By the way, we were quite intrigued by the silent NPE (with the comment "hack to get around npe thrown when fs reaches root directory").

    From there, we extended the Hfs tap with our own Tap to add more action in the deleteResource() method (see [2]), with a retry mechanism calling getFileSystem(conf).delete directly. The retry mechanism seemed to bring improvement, but we are still sometimes facing failures (see the example in [3]): it looks like HDFS returns isDeleted=true, but when we ask directly afterwards whether the folder exists, we receive exists=true, which should not happen. The logs also randomly show isDeleted as true or false when the flow succeeds, which suggests the returned value is irrelevant or not to be trusted.

    Can anybody share their own S3 experience with such behavior: "folder should be deleted, but it is not"? We suspect an S3 issue, but could it also be in Cascading or HDFS? We run on Hadoop Cloudera-cdh3u5 and Cascading 2.0.1-wip-dev.

    [1] org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory s3n://... already exists
            at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132)
            at com.twitter.elephantbird.mapred.output.DeprecatedOutputFormatWrapper.checkOutputSpecs(DeprecatedOutputFormatWrapper.java:75)
            at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:923)
            at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:882)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:396)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
            at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:882)
            at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:856)
            at cascading.flow.hadoop.planner.HadoopFlowStepJob.internalNonBlockingStart(HadoopFlowStepJob.java:104)
            at cascading.flow.planner.FlowStepJob.blockOnJob(FlowStepJob.java:174)
            at cascading.flow.planner.FlowStepJob.start(FlowStepJob.java:137)
            at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:122)
            at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:42)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
            at java.util.concurrent.FutureTask.run(FutureTask.java:138)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.j

    [2] @Override
        public boolean deleteResource(JobConf conf) throws IOException {
          LOGGER.info("Deleting resource {}", getIdentifier());
          boolean isDeleted = super.deleteResource(conf);
          LOGGER.info("Hfs Sink Tap isDeleted is {} for {}", isDeleted, getIdentifier());
          Path path = new Path(getIdentifier());
          int retryCount = 0;
          int cumulativeSleepTime = 0;
          int sleepTime = 1000;
          while (getFileSystem(conf).exists(path)) {
            LOGGER.info("Resource {} still exists, it should not... - I will continue to wait patiently...",
                getIdentifier());
            try {
              LOGGER.info("Now I will sleep " + sleepTime / 1000
                  + " seconds while trying to delete {} - attempt: {}", getIdentifier(), retryCount + 1);
              Thread.sleep(sleepTime);
              cumulativeSleepTime += sleepTime;
              sleepTime *= 2;
            } catch (InterruptedException e) {
              e.printStackTrace();
              LOGGER.error("Interrupted while sleeping trying to delete {} with message {}...",
                  getIdentifier(), e.getMessage());
              throw new RuntimeException(e);
            }
            if (retryCount == 0) {
              getFileSystem(conf).delete(getPath(), true);
            }
            retryCount++;
            if (cumulativeSleepTime > MAXIMUM_TIME_TO_WAIT_TO_DELETE_MS) {
              break;
            }
          }
          if (getFileSystem(conf).exists(path)) {
            LOGGER.error("We didn't succeed to delete the resource {}. Throwing now a runtime exception.",
                getIdentifier());
            throw new RuntimeException("Although we waited to delete the resource for "
                + getIdentifier() + ' ' + retryCount
                + " iterations, it still exists - This must be an issue in the underlying storage system.");
          }
          return isDeleted;
        }

    [3] INFO [pool-2-thread-15] (BaseFlow.java:1287) - [...] at least one sink is marked for delete
        INFO [pool-2-thread-15] (BaseFlow.java:1287) - [...] sink oldest modified date: Wed Dec 31 23:59:59 UTC 1969
        INFO [pool-2-thread-15] (HiveSinkTap.java:148) - Now I will sleep 1 seconds while trying to delete s3n://... - attempt: 1
        INFO [pool-2-thread-15] (HiveSinkTap.java:130) - Deleting resource s3n://...
        INFO [pool-2-thread-15] (HiveSinkTap.java:133) - Hfs Sink Tap isDeleted is true for s3n://...
        ERROR [pool-2-thread-15] (HiveSinkTap.java:175) - We didn't succeed to delete the resource s3n://... Throwing now a runtime exception.
        WARN [pool-2-thread-15] (Cascade.java:706) - [...] flow failed: ...
        java.lang.RuntimeException: Although we waited to delete the resource for s3n://... 0 iterations, it still exists - This must be an issue in the underlying storage system.
            at com.qubit.hive.tap.HiveSinkTap.deleteResource(HiveSinkTap.java:179)
            at com.qubit.hive.tap.HiveSinkTap.deleteResource(HiveSinkTap.java:40)
            at cascading.flow.BaseFlow.deleteSinksIfNotUpdate(BaseFlow.java:971)
            at cascading.flow.BaseFlow.prepare(BaseFlow.java:733)
            at cascading.cascade.Cascade$CascadeJob.call(Cascade.java:761)
            at cascading.cascade.Cascade$CascadeJob.call(Cascade.java:710)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
            at java.util.concurrent.FutureTask.run(FutureTask.java:138)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:619)
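
    For readers who just want the shape of the workaround, here is a minimal sketch of the delete-then-verify-with-backoff pattern the custom tap implements. It is not the actual Cascading/Hadoop code: std::filesystem stands in for the S3/HDFS tap, and the base delay and time cap are illustrative assumptions.

        #include <chrono>
        #include <filesystem>
        #include <stdexcept>
        #include <thread>

        namespace fs = std::filesystem;

        // Delete 'path', then poll until the storage layer agrees it is gone.
        // With an eventually consistent store, the delete call returning success
        // does not guarantee that an immediate subsequent exists() returns false.
        void deleteAndVerify(const fs::path& path) {
            fs::remove_all(path);
            auto delay = std::chrono::milliseconds(1000);        // illustrative base delay
            const auto limit = std::chrono::milliseconds(60000); // illustrative cap
            std::chrono::milliseconds waited{0};
            while (fs::exists(path)) {
                if (waited >= limit)
                    throw std::runtime_error("resource still exists after retries: " + path.string());
                std::this_thread::sleep_for(delay);
                waited += delay;
                delay *= 2;                                      // exponential backoff
            }
        }

        int main() {
            fs::create_directories("sink_dir");
            deleteAndVerify("sink_dir");   // a local FS is consistent, so this returns immediately
        }

    The point of the structure is that success is defined by the verification loop, not by the return value of the delete call, which matches the observation above that isDeleted cannot be trusted.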

    Read the article

  • Moving the swapfiles to a dedicated partition in Snow Leopard

    - by e.James
    I have been able to move Apple's virtual memory swapfiles to a dedicated partition on my hard drive up until now. The technique I have been using is described in a thread on forums.macosxhints.com. However, with the developer preview of Snow Leopard, this method no longer works. Does anyone know how it could be done with the new OS?

    Update: I have marked dblu's answer as accepted even though it didn't quite work, because he gave excellent, detailed instructions and because his suggestion to use plutil ultimately pointed me in the right direction. The complete, working solution is posted here in the question because I don't have enough reputation to edit the accepted answer.

    Complete solution:

    1. Open Terminal and make a backup copy of Apple's default dynamic_pager.plist:
        $ cd /System/Library/LaunchDaemons
        $ sudo cp com.apple.dynamic_pager.plist{,_bak}

    2. Convert the plist from binary to plain XML:
        $ sudo plutil -convert xml1 com.apple.dynamic_pager.plist

    3. Open the converted plist with your text editor of choice (I use pico; see dblu's answer for an example using vim):
        $ sudo pico -w com.apple.dynamic_pager.plist
    It should look as follows:
        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs$
        <plist version="1.0">
        <dict>
            <key>EnableTransactions</key>
            <true/>
            <key>HopefullyExitsLast</key>
            <true/>
            <key>Label</key>
            <string>com.apple.dynamic_pager</string>
            <key>OnDemand</key>
            <false/>
            <key>ProgramArguments</key>
            <array>
                <string>/sbin/dynamic_pager</string>
                <string>-F</string>
                <string>/private/var/vm/swapfile</string>
            </array>
        </dict>
        </plist>

    4. Change the ProgramArguments array (lines 13 through 18) so that it launches an intermediate shell script instead of launching dynamic_pager directly. See note #1 for details on why this is necessary.
        <key>ProgramArguments</key>
        <array>
            <string>/sbin/dynamic_pager_init</string>
        </array>

    5. Save the plist, and return to the terminal prompt. Using pico, the commands would be:
        <ctrl+o> to save the file
        <enter> to accept the same filename (com.apple.dynamic_pager.plist)
        <ctrl+x> to exit

    6. Convert the modified plist back to binary:
        $ sudo plutil -convert binary1 com.apple.dynamic_pager.plist

    7. Create the intermediate shell script:
        $ cd /sbin
        $ sudo pico -w dynamic_pager_init
    The script should look as follows (my partition is called 'Swap', and I chose to put the swapfiles in a hidden directory on that partition, called '.vm'; be sure that the directory you specify actually exists). Update: this version of the script makes use of wait4path, as suggested by ZILjr:
        #!/bin/bash
        # launch Apple's dynamic_pager only when the swap volume is mounted
        echo "Waiting for Swap volume to mount";
        wait4path /Volumes/Swap;
        echo "Launching dynamic pager on volume Swap";
        /sbin/dynamic_pager -F /Volumes/Swap/.vm/swapfile;

    8. Save and close dynamic_pager_init (same commands as step 5).

    9. Modify permissions and ownership for dynamic_pager_init:
        $ sudo chmod a+x-w /sbin/dynamic_pager_init
        $ sudo chown root:wheel /sbin/dynamic_pager_init

    10. Verify the permissions on dynamic_pager_init:
        $ ls -l dynamic_pager_init
        -r-xr-xr-x 1 root wheel 6 18 Sep 15:11 dynamic_pager_init

    11. Restart your Mac. If you run into trouble, switch to verbose startup mode by holding down Command-v immediately after the startup chime. This will let you see all of the startup messages that appear during startup. If you run into even worse trouble (i.e. you never see the login screen), hold down Command-s instead. This will boot the computer in single-user mode (no graphical UI, just a command prompt) and allow you to restore the backup copy of com.apple.dynamic_pager.plist that you made in step 1.

    12. Once the computer boots, fire up Terminal and verify that the swap files have actually been moved:
        $ cd /Volumes/Swap/.vm
        $ ls -l
    You should see something like this:
        -rw------- 1 someUser staff 67108864 18 Sep 12:02 swapfile0

    13. Delete the old swapfiles:
        $ cd /private/var/vm
        $ sudo rm swapfile*

    14. Profit!

    Note 1: Simply modifying the arguments to dynamic_pager in the plist does not always work, and when it fails, it does so in a spectacularly silent way. The problem stems from the fact that dynamic_pager is launched very early in the startup process. If your swap partition has not yet been mounted when dynamic_pager is first loaded (in my experience, this happens 99% of the time), then the system will fake its way through. It will create a symbolic link in your /Volumes directory which has the same name as your swap partition, but points back to the default swapfile location (/private/var/vm). Then, when your actual swap partition mounts, it will be given the name Swap 1 (or YourDriveName 1). You can see the problem by opening up Terminal and listing the contents of your /Volumes directory:
        $ cd /Volumes
        $ ls -l
    You will see something like this:
        drwxrwxrwx 11 yourUser staff 442 16 Sep 12:13 Swap -> private/var/vm
        drwxrwxrwx 14 yourUser staff 5 16 Sep 12:13 Swap 1
        lrwxr-xr-x 1 root admin 1 17 Sep 12:01 System -> /
    Note that this failure can be very hard to spot. If you were to check for the swapfiles as I show in step 12, you would still see them! The symbolic link would make it seem as though your swapfiles had been moved, even though they were actually being stored in the default location.

    Note 2: I was originally unable to get this to work in Snow Leopard because com.apple.dynamic_pager.plist was stored in binary format. I made a copy of the original file and opened it with Apple's Property List Editor (available with Xcode) in order to make changes, but this process added some extended attributes to the plist file which caused the system to ignore it and just use the defaults. As dblu pointed out, using plutil to convert the file to plain XML works like a charm.

    Note 3: You can check the Console application to see any messages that dynamic_pager_init echoes to the screen. If you see the following lines repeated over and over again, there is a problem with the setup. I ran into these messages because I forgot to create the '.vm' directory that I specified in dynamic_pager_init:
        com.apple.launchd[1] (com.apple.dynamic_pager[176]) Exited with exit code: 1
        com.apple.launchd[1] (com.apple.dynamic_pager) Throttling respawn: Will start in 10 seconds
    When everything is working properly, you may see the above message a couple of times, but you should also see the following message, and then no more of the "Throttling respawn" messages afterwards:
        com.apple.dynamic_pager[???] Launching dynamic pager on volume Swap
    This means that the script did have to wait for the partition to load, but in the end it was successful.
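
    The core of the intermediate script is just "block until a path exists, then hand off to the real daemon"; on OS X the bundled wait4path does this for you. For readers porting the idea elsewhere, here is a rough sketch of that wait-for-mount pattern (the path, polling interval, and final command are illustrative assumptions):

        #include <chrono>
        #include <cstdlib>
        #include <filesystem>
        #include <iostream>
        #include <thread>

        int main() {
            const char* path = "/Volumes/Swap/.vm";   // assumed mount point, adjust to taste
            // Daemons launched early in boot can start before external volumes
            // are mounted, so poll until the volume actually shows up.
            while (!std::filesystem::exists(path)) {
                std::this_thread::sleep_for(std::chrono::seconds(1));
            }
            std::cout << "Path is available, launching the real worker\n";
            // Hand off to the actual daemon once the path exists.
            return std::system("/sbin/dynamic_pager -F /Volumes/Swap/.vm/swapfile");
        }

    The design mirrors note 1 above: the waiting happens before the worker starts, so the worker never sees (and never creates) the bogus symlinked location.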

    Read the article

  • Scripts help FIND command via atime output to multiple files

    - by sswagner
    Here is a script I have written that I need help with. In the script I do a find for any file that has not been accessed for over 30, 60, 90, 180, 270 & 365 days. This works just fine; however, it takes a few days just to finish the 30-day portion. It is scanning a NAS (millions and millions of files). As you see, the 30-day information really holds all the data needed for the rest of the script; the 60, 90, etc. portions just redo the same effort as the 30-day portion, only over an extended time frame. It would save weeks' worth of re-scanning if the 60, 90, 180, etc. portions could somehow get their data from the 30-day output. This is where I am asking for help. The output is just like an ls -l command, and as you can also see from the output below, there are multiple years in this output. The script is attached and printed below.

        total 24
        -rw-r--r-- 1 root bin 60 Apr 12 13:07 config_file
        -rw-r--r-- 1 root bin 9 Apr 12 13:07 config_file.InProgress
        -rw-r--r-- 1 root bin 0 Apr 12 13:07 config_file.sids
        -rw-r--r-- 1 root bin 1284 Apr 19 10:41 rpt_file
        -rw-r--r-- 1 16074 5003 20083 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/console/dat1_01.gif
        -rw-r--r-- 1 16074 5003 20088 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/console/set1_04.gif
        -rw-r--r-- 1 16074 5003 2008 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/oapps/get2_03.htm
        -rw-r--r-- 1 16074 5003 20083 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/oapps/per1_01.gif

    Any help is appreciated. These are Linux distro boxes, so I am sure Perl is on there too if needed. Thanks!

        #!/bin/ksh
        # Search shares for files that have not been accessed for a certain time.
        # NOTE: $IN = input search; $OUT = output directory for the text files.
        # TESTS: numeric arguments can be specified as
        #   +n for greater than n, -n for less than n, n for exactly n.
        # -atime n: file was last accessed n*24 hours ago.
        IN1=/nas/quota/slot_2/CR*
        IN2=/nas/quota/slot_3/CR*
        IN3=/nas/quota/slot_4/CR*
        IN4=/nas/quota/slot_5/CR*
        OUT=/nas/quota/slot_3/CR_PRJ144/steve
        mkdir ${OUT}
        for dir in ${IN1}; do find $dir -atime +30 -exec ls -l '{}' \; >> ${OUT}/30days.txt; done
        for dir in ${IN2}; do find $dir -atime +30 -exec ls -l '{}' \; >> ${OUT}/30days.txt; done
        for dir in ${IN3}; do find $dir -atime +30 -exec ls -l '{}' \; >> ${OUT}/30days.txt; done
        for dir in ${IN4}; do find $dir -atime +30 -exec ls -l '{}' \; >> ${OUT}/30days.txt; done
        for dir in ${IN1}; do find $dir -atime +60 -exec ls -l '{}' \; >> ${OUT}/60days.txt; done
        for dir in ${IN2}; do find $dir -atime +60 -exec ls -l '{}' \; >> ${OUT}/60days.txt; done
        for dir in ${IN3}; do find $dir -atime +60 -exec ls -l '{}' \; >> ${OUT}/60days.txt; done
        for dir in ${IN4}; do find $dir -atime +60 -exec ls -l '{}' \; >> ${OUT}/60days.txt; done
        for dir in ${IN1}; do find $dir -atime +90 -exec ls -l '{}' \; >> ${OUT}/90days.txt; done
        for dir in ${IN2}; do find $dir -atime +90 -exec ls -l '{}' \; >> ${OUT}/90days.txt; done
        for dir in ${IN3}; do find $dir -atime +90 -exec ls -l '{}' \; >> ${OUT}/90days.txt; done
        for dir in ${IN4}; do find $dir -atime +90 -exec ls -l '{}' \; >> ${OUT}/90days.txt; done
        for dir in ${IN1}; do find $dir -atime +180 -exec ls -l '{}' \; >> ${OUT}/180days.txt; done
        for dir in ${IN2}; do find $dir -atime +180 -exec ls -l '{}' \; >> ${OUT}/180days.txt; done
        for dir in ${IN3}; do find $dir -atime +180 -exec ls -l '{}' \; >> ${OUT}/180days.txt; done
        for dir in ${IN4}; do find $dir -atime +180 -exec ls -l '{}' \; >> ${OUT}/180days.txt; done
        for dir in ${IN1}; do find $dir -atime +270 -exec ls -l '{}' \; >> ${OUT}/270days.txt; done
        for dir in ${IN2}; do find $dir -atime +270 -exec ls -l '{}' \; >> ${OUT}/270days.txt; done
        for dir in ${IN3}; do find $dir -atime +270 -exec ls -l '{}' \; >> ${OUT}/270days.txt; done
        for dir in ${IN4}; do find $dir -atime +270 -exec ls -l '{}' \; >> ${OUT}/270days.txt; done
        for dir in ${IN1}; do find $dir -atime +365 -exec ls -l '{}' \; >> ${OUT}/365days.txt; done
        for dir in ${IN2}; do find $dir -atime +365 -exec ls -l '{}' \; >> ${OUT}/365days.txt; done
        for dir in ${IN3}; do find $dir -atime +365 -exec ls -l '{}' \; >> ${OUT}/365days.txt; done
        for dir in ${IN4}; do find $dir -atime +365 -exec ls -l '{}' \; >> ${OUT}/365days.txt; done
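
    One way to avoid re-walking the NAS is exactly what the question suggests: since every file older than 60, 90, 180, etc. days is by definition in the 30-day output, re-stat only those files and bucket them by access age. A rough sketch of that idea, under the assumption that the 30-day listing has first been reduced to one absolute path per line (e.g. with awk, which breaks on paths containing spaces); the file names and thresholds are illustrative:

        #include <sys/stat.h>
        #include <ctime>
        #include <fstream>
        #include <string>
        #include <vector>

        int main() {
            // Age buckets in days; every path in 30days.paths is a candidate.
            const std::vector<int> buckets = {60, 90, 180, 270, 365};
            std::ifstream in("30days.paths");            // one absolute path per line
            std::vector<std::ofstream> out;
            for (int b : buckets) out.emplace_back(std::to_string(b) + "days.txt");

            const std::time_t now = std::time(nullptr);
            std::string path;
            while (std::getline(in, path)) {
                struct stat st;
                if (stat(path.c_str(), &st) != 0) continue;          // vanished or unreadable
                double days = difftime(now, st.st_atime) / 86400.0;  // days since last access
                for (size_t i = 0; i < buckets.size(); ++i)
                    if (days > buckets[i]) out[i] << path << '\n';   // belongs in this bucket
            }
        }

    This still stats each candidate once, but only over the already-filtered set, which is a tiny fraction of the millions of files the full find passes re-scan.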

    Read the article

  • Business rule validation of hierarchical list of objects ASP.NET MVC

    - by SergeanT
    I have a list of objects that are organized in a tree using a Depth property:

        public class Quota
        {
            [Range(0, int.MaxValue, ErrorMessage = "Please enter an amount above zero.")]
            public int Amount { get; set; }

            public int Depth { get; set; }

            [Required]
            [RegularExpression("^[a-zA-Z]+$")]
            public string Origin { get; set; }

            // ... other properties with validation attributes
        }

    For example, for the data (amount - origin)

        100 originA
        200 originB
         50 originC
        150 originD

    the model data looks like:

        IList<Quota> model = new List<Quota>();
        model.Add(new Quota { Amount = 100, Depth = 0, Origin = "originA" });
        model.Add(new Quota { Amount = 200, Depth = 0, Origin = "originB" });
        model.Add(new Quota { Amount = 50, Depth = 1, Origin = "originC" });
        model.Add(new Quota { Amount = 150, Depth = 1, Origin = "originD" });

    Editing of the list: I use "Editing a variable length list, ASP.NET MVC 2-style" to handle editing of the list.

    Controller actions, QuotaController.cs:

        public class QuotaController : Controller
        {
            // GET: /Quota/EditList
            public ActionResult EditList()
            {
                IList<Quota> model = // ... assignments as in the example above;
                return View(model);
            }

            // POST: /Quota/EditList
            [HttpPost]
            public ActionResult EditList(IList<Quota> quotas)
            {
                if (ModelState.IsValid)
                {
                    // ... save logic
                    return RedirectToAction("Details");
                }
                return View(quotas); // Redisplay the form with errors
            }

            // ... other controller actions
        }

    View EditList.aspx:

        <%@ Page Title="" Language="C#" ... Inherits="System.Web.Mvc.ViewPage<IList<Quota>>" %>
        ...
        <h2>Edit Quotas</h2>
        <%=Html.ValidationSummary("Fix errors:") %>
        <% using (Html.BeginForm()) {
               foreach (var quota in Model) {
                   Html.RenderPartial("QuotaEditorRow", quota);
               }
        %>
        <% } %>
        ...

    Partial view QuotaEditorRow.ascx:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<Quota>" %>
        <div class="quotas" style="margin-left: <%=Model.Depth*45 %>px">
        <% using (Html.BeginCollectionItem("Quotas")) { %>
            <%=Html.HiddenFor(m=>m.Id) %>
            <%=Html.HiddenFor(m=>m.Depth) %>
            <%=Html.TextBoxFor(m=>m.Amount, new {@class = "number", size = 5})%>
            <%=Html.ValidationMessageFor(m=>m.Amount) %>
            Origin: <%=Html.TextBoxFor(m=>m.Origin)%>
            <%=Html.ValidationMessageFor(m=>m.Origin) %>
            ...
        <% } %>
        </div>

    Business rule validation: how do I implement validation of the business rule that the amount of a quota should equal the sum of the amounts of its nested quotas (e.g. 200 = 50 + 150 in the example)? I want the corresponding Html.TextBoxFor(m=>m.Amount) inputs to be highlighted in red if the rule is broken for them; in the example, if the user enters 201 instead of 200, it should turn red on submit. Using server validation only. Thanks a lot for any advice.
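
    The sum-check itself is independent of ASP.NET: walk the depth-encoded list once in document order and compare each node's amount with the sum of its immediate children. A rough sketch of that traversal (written in C++ for consistency with the other sketches in this digest; the indexes it flags are the ones a controller would attach ModelState errors to):

        #include <iostream>
        #include <vector>

        struct Quota { int amount; int depth; };

        // Validate one subtree rooted at index i; returns the index just past it.
        // Marks ok[i] = false when a node's amount != the sum of its children.
        size_t validate(const std::vector<Quota>& q, size_t i, std::vector<bool>& ok) {
            size_t j = i + 1;
            long long childSum = 0;
            bool hasChildren = false;
            while (j < q.size() && q[j].depth == q[i].depth + 1) {
                hasChildren = true;
                childSum += q[j].amount;
                j = validate(q, j, ok);          // skip over the child's own subtree
            }
            if (hasChildren && childSum != q[i].amount) ok[i] = false;
            return j;
        }

        int main() {
            std::vector<Quota> model = {{200, 0}, {50, 1}, {151, 1}};  // 50 + 151 != 200
            std::vector<bool> ok(model.size(), true);
            for (size_t i = 0; i < model.size(); ) i = validate(model, i, ok);
            for (size_t i = 0; i < model.size(); ++i)
                if (!ok[i]) std::cout << "rule broken at index " << i << "\n";
        }

    This assumes the posted list is in preorder (each node immediately followed by its subtree), which is what the depth-indented editor above produces.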

    Read the article

  • Improving TCP performance over a gigabit network lots of connections and high traffic for storage and streaming services

    - by Linux Guy
    I have two servers. Both servers' hardware specifications are:

        Processor: dual processor
        RAM: over 128 GB
        Hard disk: SSD
        Outgoing traffic bandwidth: 3 Gbps
        Network card speed: 10 Gbps

    Server A is for encoding videos; Server B is for storing videos and streaming them over a web interface, the way YouTube streams videos. The inbound bandwidth between the two servers is 10 Gbps; the outbound internet bandwidth is 500 Mbps. Both servers use public IP addresses on the public and private network, and both transfer and connect on the nginx port. Both servers are in the same network, yet when I ping from Server A to Server B I get high latency, well above 1.0 ms, ranging from time=52.7 ms to time=215.7 ms.

    This is the output of the iftop utility (scale: 353Mb 707Mb 1.04Gb 1.38Gb 1.73Gb):

        server.example.com => ip.address  6.36Mb 4.31Mb 1.66Mb
                           <=             158Kb  94.8Kb 35.1Kb
        server.example.com => ip.address  1.23Mb 4.28Mb 1.12Mb
                           <=             17.1Kb 83.5Kb 21.9Kb
        server.example.com => ip.address  395Kb  3.89Mb 1.07Mb
                           <=             6.09Kb 109Kb  28.6Kb
        server.example.com => ip.address  4.55Mb 3.83Mb 1.04Mb
                           <=             55.6Kb 45.4Kb 13.0Kb
        server.example.com => ip.address  649Kb  3.38Mb 1.47Mb
                           <=             9.00Kb 38.7Kb 16.7Kb
        server.example.com => ip.address  5.00Mb 3.32Mb 1.80Mb
                           <=             65.7Kb 55.1Kb 29.4Kb
        server.example.com => ip.address  387Kb  3.13Mb 1.06Mb
                           <=             18.4Kb 39.9Kb 15.0Kb
        server.example.com => ip.address  3.27Mb 3.11Mb 1.01Mb
                           <=             81.2Kb 64.5Kb 20.9Kb
        server.example.com => ip.address  1.75Mb 3.08Mb 2.72Mb
                           <=             16.6Kb 35.6Kb 32.5Kb
        server.example.com => ip.address  1.75Mb 2.90Mb 2.79Mb
                           <=             22.4Kb 32.6Kb 35.6Kb
        server.example.com => ip.address  3.03Mb 2.78Mb 1.82Mb
                           <=             26.6Kb 27.4Kb 20.2Kb
        server.example.com => ip.address  2.26Mb 2.66Mb 1.36Mb
                           <=             51.7Kb 49.1Kb 24.4Kb
        server.example.com => ip.address  586Kb  2.50Mb 1.03Mb
                           <=             4.17Kb 26.1Kb 10.7Kb
        server.example.com => ip.address  2.42Mb 2.49Mb 2.44Mb
                           <=             31.6Kb 29.7Kb 29.9Kb
        server.example.com => ip.address  2.41Mb 2.46Mb 2.41Mb
                           <=             26.4Kb 24.5Kb 23.8Kb
        server.example.com => ip.address  2.37Mb 2.39Mb 2.40Mb
                           <=             28.9Kb 27.0Kb 28.5Kb
        server.example.com => ip.address  525Kb  2.20Mb 1.05Mb
                           <=             7.03Kb 26.0Kb 12.8Kb
        TX:    cum: 102GB  peak: 1.65Gb  rates: 1.46Gb 1.44Gb 1.48Gb
        RX:         1.31GB       24.3Mb         19.5Mb 18.9Mb 20.0Mb
        TOTAL:      103GB        1.67Gb         1.48Gb 1.46Gb 1.50Gb

    I check the transfer speed using the iperf utility, from Server A to Server B:

        # iperf -c 0.0.0.2 -p 8777
        ------------------------------------------------------------
        Client connecting to 0.0.0.2, TCP port 8777
        TCP window size: 85.3 KByte (default)
        ------------------------------------------------------------
        [  3] local 0.0.0.1 port 38895 connected with 0.0.0.2 port 8777
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.8 sec  528 KBytes   399 Kbits/sec

    My current connections on Server B:

        # netstat -an|grep ":8777"|awk '/tcp/ {print $6}'|sort -nr| uniq -c
           2072 TIME_WAIT
             28 SYN_RECV
              1 LISTEN
            189 LAST_ACK
            139 FIN_WAIT2
            373 FIN_WAIT1
           3381 ESTABLISHED
             34 CLOSING

    Server A network card information:

        Settings for eth0:
            Supported ports: [ TP ]
            Supported link modes: 100baseT/Full 1000baseT/Full 10000baseT/Full
            Supported pause frame use: No
            Supports auto-negotiation: Yes
            Advertised link modes: 10000baseT/Full
            Advertised pause frame use: No
            Advertised auto-negotiation: Yes
            Speed: 10000Mb/s
            Duplex: Full
            Port: Twisted Pair
            PHYAD: 0
            Transceiver: external
            Auto-negotiation: on
            MDI-X: Unknown
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000007 (7) drv probe link
            Link detected: yes

    Server B network card information:

        Settings for eth2:
            Supported ports: [ FIBRE ]
            Supported link modes: 10000baseT/Full
            Supported pause frame use: No
            Supports auto-negotiation: No
            Advertised link modes: 10000baseT/Full
            Advertised pause frame use: No
            Advertised auto-negotiation: No
            Speed: 10000Mb/s
            Duplex: Full
            Port: Direct Attach Copper
            PHYAD: 0
            Transceiver: external
            Auto-negotiation: off
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000007 (7) drv probe link
            Link detected: yes

    ifconfig on Server A:

        eth0  Link encap:Ethernet  HWaddr 00:25:90:ED:9E:AA
              inet addr:0.0.0.1  Bcast:0.0.0.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:1202795665 errors:0 dropped:64334 overruns:0 frame:0
              TX packets:2313161968 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:893413096188 (832.0 GiB)  TX bytes:3360949570454 (3.0 TiB)

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:2207544 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2207544 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:247769175 (236.2 MiB)  TX bytes:247769175 (236.2 MiB)

    ifconfig on Server B:

        eth2  Link encap:Ethernet  HWaddr 00:25:90:82:C4:FE
              inet addr:0.0.0.2  Bcast:0.0.0.2  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:39973046980 errors:0 dropped:1828387600 overruns:0 frame:0
              TX packets:69618752480 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:3013976063688 (2.7 TiB)  TX bytes:102250230803933 (92.9 TiB)

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:1049495 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1049495 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:129012422 (123.0 MiB)  TX bytes:129012422 (123.0 MiB)

    netstat -i on Server B:

        # netstat -i
        Kernel Interface table
        Iface  MTU    Met RX-OK        RX-ERR RX-DRP      RX-OVR TX-OK        TX-ERR TX-DRP TX-OVR Flg
        eth2   9000   0   42098629968  0      2131223717  0      73698797854  0      0      0      BMRU
        lo     65536  0   1077908      0      0           0      1077908      0      0      0      LRU

    I turned the send/receive buffers on the network card up to 2048 and the problem still persists. I increased the MTU for Server A and the problem still persists. I also increased the MTU on Server B for better connectivity and transfer speed, but then it could not transfer at all.

    The problem: as you can see from the iperf output, the transfer speed from Server A to Server B is slow. When I restart the network service on Server B, the transfer on Server A runs at full speed; after about 2 minutes, it gets slow again. How can I troubleshoot this slow speed issue and fix it on Server B? Note: if there are any other commands I should execute on the servers to gather more information that might help resolve the problem, let me know in the comments.
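
    Before tuning anything, it helps to check the ceiling that the TCP window and the abnormal RTT impose on a single stream: throughput <= window / RTT, so an 85.3 KByte window at ~200 ms caps one connection at a few Mbit/s no matter how fast the link is. A small sketch of that calculation plus requesting larger per-socket buffers; the 16 MB figure is an illustrative assumption, and the kernel still clamps it by net.core.rmem_max / net.core.wmem_max:

        #include <sys/socket.h>
        #include <cstdio>

        int main() {
            // Bandwidth-delay product view: the window one stream needs to fill the pipe.
            const double rtt_s = 0.2;                  // ~200 ms, the worst ping seen above
            const double window_bytes = 85.3 * 1024;   // iperf's default TCP window
            std::printf("single-stream ceiling = %.1f Mbit/s\n",
                        window_bytes / rtt_s * 8 / 1e6);

            // Ask for bigger per-socket buffers; raise the sysctl limits as well,
            // or this request is silently capped by the kernel.
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int size = 16 * 1024 * 1024;               // illustrative 16 MB
            setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
            setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
            return 0;
        }

    The computed ceiling (~3.5 Mbit/s here) is already far below the link rate, and the measured 399 Kbit/s is lower still, which points at the latency spikes and the eth2 RX drops above rather than at raw bandwidth.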

    Read the article

  • PHP MVC Framework Structure

    - by bigstylee
    I am sorry about the amount of code here. I have tried to show enough for understanding while avoiding confusion (I hope). I have included a second copy of the code at Pastebin. (The code does execute without error/notice/warning.) I am currently creating a Content Management System while trying to implement the idea of Model View Controller. I have only recently come across the concept of MVC (within the last week) and trying to implement this into my current project. One of the features of the CMS is dynamic/customisable menu areas and each feature will be represented by a controller. Therefore there will be multiple versions of the Controller Class, each with specific extended functionality. I have looked at a number of tutorials and read some open source solutions to the MVC Framework. I am now trying to create a lightweight solution for my specific requirements. I am not interested in backwards compatibility, I am using PHP 5.3. An advantage of the Base class is not having to use global and can directly access any loaded class using $this->Obj['ClassName']->property/function();. Hoping to get some feedback using the basic structure outlined (with performance in mind). Specifically; a) Have I understood/implemented the concept of MVC correctly? b) Have I understood/implemented Object Orientated techniques with PHP 5 correctly? c) Should the class propertise of Base be static? d) Improvements? Thank you very much in advance! <?php /* A "Super Class" that creates/stores all object instances */ class Base { public static $Obj = array(); // Not sure this is the correct use of the "static" keyword? public static $var; static public function load_class($directory, $class) { echo count(self::$Obj)."\n"; // This does show the array is getting updated and not creating a new array :) if (!isset(self::$Obj[$class]) && !is_object(self::$Obj[$class])) //dont want to load it twice { /* Locate and include the class file based upon name ($class) */ return self::$Obj[$class] = new $class(); } return TRUE; } } /* Loads general configuration objects into the "Super Class" */ class Libraries extends Base { public function __construct(){ $this->load_class('library', 'Database'); $this->load_class('library', 'Session'); self::$var = 'Hello World!'; //testing visibility /* Other general funciton classes */ } } class Database extends Base { /* Connects to the the database and executes all queries */ public function query(){} } class Session extends Base { /* Implements Sessions in database (read/write) */ } /* General functionality of controllers */ abstract class Controller extends Base { protected function load_model($class, $method) { /* Locate and include the model file */ $this->load_class('model', $class); call_user_func(array(self::$Obj[$class], $method)); } protected function load_view($name) { /* Locate and include the view file */ #include('views/'.$name.'.php'); } } abstract class View extends Base { /* ... */ } abstract class Model extends Base { /* ... */ } class News extends Controller { public function index() { /* Displays the 5 most recent News articles and displays with Content Area */ $this->load_model('NewsModel', 'index'); $this->load_view('news', 'index'); echo $this->var; } public function menu() { /* Displays the News Title of the 5 most recent News articles and displays within the Menu Area */ $this->load_model('news/index'); $this->load_view('news/index'); } } class ChatBox extends Controller { /* ... 
*/ } /* Lots of different features extending the controller/view/model class depending upon request and layout */ class NewsModel extends Model { public function index() { echo $this->var; self::$Obj['Database']->query(/*SELECT 5 most recent news articles*/); } public function menu() { /* ... */ } } $Libraries = new Libraries; $controller = 'News'; // Would be determined from Query String $method = 'index'; // Would be determined from Query String $Content = $Libraries->load_class('controller', $controller); //create the controller for the specific page if (in_array($method, get_class_methods($Content))) { call_user_func(array($Content, $method)); } else { die('Bad Request'. $method); } $Content::$var = 'Goodbye World'; echo $Libraries::$var . ' - ' . $Content::$var; ?> /* Ouput */ 0 1 2 3 Goodbye World! - Goodbye World
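
    The Base::$Obj idea above is essentially a registry: construct each service once, key it by name, and hand back the shared instance on every later request. A language-neutral sketch of that load-once behaviour (shown in C++ for consistency with the other sketches here; the class names are illustrative, and the static member mirrors question (c): static state means one registry per process):

        #include <iostream>
        #include <map>
        #include <memory>
        #include <string>

        struct Service { virtual ~Service() = default; };
        struct Database : Service { Database() { std::cout << "Database constructed once\n"; } };

        class Registry {
        public:
            // Returns the existing instance, constructing it on first request only.
            template <typename T>
            static std::shared_ptr<Service> load(const std::string& name) {
                auto it = instances_.find(name);
                if (it != instances_.end()) return it->second;   // don't load it twice
                auto obj = std::make_shared<T>();
                instances_[name] = obj;
                return obj;
            }
        private:
            static std::map<std::string, std::shared_ptr<Service>> instances_;
        };
        std::map<std::string, std::shared_ptr<Service>> Registry::instances_;

        int main() {
            auto a = Registry::load<Database>("Database");
            auto b = Registry::load<Database>("Database");   // same object, no reconstruction
            std::cout << (a == b ? "same instance\n" : "bug\n");
        }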

    Read the article

  • How to design a high-level application protocol for metadata syncing between devices and server?

    - by Jaanus
    I am looking for guidance on how to best think about designing a high-level application protocol to sync metadata between end-user devices and a server. My goal: the user can interact with the application data on any device, or on the web. The purpose of this protocol is to communicate changes made on one endpoint to other endpoints through the server, and ensure all devices maintain a consistent picture of the application data. If user makes changes on one device or on the web, the protocol will push data to the central repository, from where other devices can pull it. Some other design thoughts: I call it "metadata syncing" because the payloads will be quite small, in the form of object IDs and small metadata about those ID-s. When client endpoints retrieve new metadata over this protocol, they will fetch actual object data from an external source based on this metadata. Fetching the "real" object data is out of scope, I'm only talking about metadata syncing here. Using HTTP for transport and JSON for payload container. The question is basically about how to best design the JSON payload schema. I want this to be easy to implement and maintain on the web and across desktop and mobile devices. The best approach feels to be simple timer- or event-based HTTP request/response without any persistent channels. Also, you should not have a PhD to read it, and I want my spec to fit on 2 pages, not 200. Authentication and security are out of scope for this question: assume that the requests are secure and authenticated. The goal is eventual consistency of data on devices, it is not entirely realtime. For example, user can make changes on one device while being offline. When going online again, user would perform "sync" operation to push local changes and retrieve remote changes. Having said that, the protocol should support both of these modes of operation: Starting from scratch on a device, should be able to pull the whole metadata picture "sync as you go". When looking at the data on two devices side by side and making changes, should be easy to push those changes as short individual messages which the other device can receive near-realtime (subject to when it decides to contact server for sync). As a concrete example, you can think of Dropbox (it is not what I'm working on, but it helps to understand the model): on a range of devices, the user can manage a files and folders—move them around, create new ones, remove old ones etc. And in my context the "metadata" would be the file and folder structure, but not the actual file contents. And metadata fields would be something like file/folder name and time of modification (all devices should see the same time of modification). Another example is IMAP. I have not read the protocol, but my goals (minus actual message bodies) are the same. Feels like there are two grand approaches how this is done: transactional messages. Each change in the system is expressed as delta and endpoints communicate with those deltas. Example: DVCS changesets. REST: communicating the object graph as a whole or in part, without worrying so much about the individual atomic changes. What I would like in the answers: Is there anything important I left out above? Constraints, goals? What is some good background reading on this? (I realize this is what many computer science courses talk about at great length and detail... I am hoping to short-circuit it by looking at some crash course or nuggets.) 
What are some good examples of such protocols that I could model after, or even use out of box? (I mention Dropbox and IMAP above... I should probably read the IMAP RFC.)
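
    As a concrete starting point, the two modes of operation map naturally onto two message shapes: a small delta push carrying individual changes, and a full pull parameterised by the last revision seen (revision 0 covers the starting-from-scratch case with the same request shape). A hypothetical payload sketch follows; every field name here is an assumption for illustration, not part of any existing spec:

        #include <iostream>
        #include <string>

        int main() {
            // Delta push: the client reports local changes made on top of the
            // last server revision it has seen.
            std::string push = R"({
              "baseRevision": 41,
              "changes": [
                {"op": "update", "id": "f7", "fields": {"name": "Report.pdf", "modified": 1324500000}},
                {"op": "delete", "id": "f3"}
              ]
            })";

            // Pull: "give me everything after revision N" serves both bootstrap
            // (N = 0) and near-realtime catch-up.
            std::string pull = R"({"sinceRevision": 41})";

            std::cout << push << "\n" << pull << "\n";
        }

    Keeping a monotonically increasing server revision in both directions is what makes the eventual-consistency story simple: a device is up to date exactly when its stored revision matches the server's.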

    Read the article

  • Apache/PHP serving file multiple times

    - by easement
    I have a system with a download.php page. The page takes an id, loads a file based on the corresponding DB record, and then serves it up. I've noticed a couple of instances where files are requested multiple times in short time spans (20 ms) - times that are too quick for human input. There are plenty of instances where the downloader functions fine. However, in taking a closer look at the downloader's usage, I did see some interesting behavior.

    For instance, the IP address xxx.xxx.xxx.xxx (which is one in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php. There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23 AM. That alone is not a big deal. However, what is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs the request first completes between 9:09:48 and 9:10:00. It looks like the user must have gotten tired of waiting during that time and requested the document two more times. Between 09:14:47 and 09:15:00 the same request appears again, except it is from 9:04:43 AM, 20 seconds after the first request. Then it pops up a third time, with a request that started at 09:05:06 completing between 09:19:55 and 09:19:58!

    I'm suspicious of that document. In looking through the logs I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz [different than above] for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before):

        Request time    Complete time
        04:32:43        04:33:36
        04:32:50        04:33:36
        04:32:51        04:33:38
        04:33:05        04:33:38
        04:33:34        04:33:42
        04:33:05        04:33:42

    So something is definitely going on. Whether it has to do with this specific document tripping up the server, the download.php page's code, or whether we're just seeing the evidence of some server-level overload as it plays out in real time, I'm not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple processes only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php. It bears further investigation whether there's something amiss inside the code that causes the system to fire off multiple download requests that occupy the server. I don't know if press+125.php is a rabbit hole, but there is a weird coincidence. Any ideas? I'm totally out of ideas. Apache maxed out? Things like that.

        // DOWNLOAD.php
        $file = new files();
        $file->comparison_filter("id", "=", $id); // sql to load
        if ($file->load()) {
            $file->serve();
        }

        // FILES
        function serve() {
            if ($this->is_loaded) {
                if (file_exists($this->get_value("filename"))) {
                    if ($this->get_value("content_type") != "") {
                        header("Content-Type: " . $this->get_value("content_type"));
                    }
                    header("Content-Length: " . filesize($this->get_value("filename")));
                    if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) {
                        header("Cache-Control: private");
                        header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename")));
                    }
                    set_time_limit(0);
                    @readfile($this->get_value("filename"));
                    exit;
                }
            }
        }

    Read the article

  • "C variable type sizes are machine dependent." Is it really true? signed & unsigned numbers ;

    - by claws
    Hello, I've been told that C types are machine dependent. Today I wanted to verify it.

        void legacyTypes()
        {
            /* character types */
            char k_char = 'a';
            // Signedness --> signed & unsigned
            signed char k_char_s = 'a';
            unsigned char k_char_u = 'a';

            /* integer types */
            int k_int = 1; /* Same as "signed int" */
            // Signedness --> signed & unsigned
            signed int k_int_s = -2;
            unsigned int k_int_u = 3;
            // Size --> short, _____, long, long long
            short int k_s_int = 4;
            long int k_l_int = 5;
            long long int k_ll_int = 6;

            /* real number types */
            float k_float = 7;
            double k_double = 8;
        }

    I compiled it on a 32-bit machine using the MinGW C compiler:

        _legacyTypes:
            pushl %ebp
            movl %esp, %ebp
            subl $48, %esp
            movb $97, -1(%ebp)   # char
            movb $97, -2(%ebp)   # signed char
            movb $97, -3(%ebp)   # unsigned char
            movl $1, -8(%ebp)    # int
            movl $-2, -12(%ebp)  # signed int
            movl $3, -16(%ebp)   # unsigned int
            movw $4, -18(%ebp)   # short int
            movl $5, -24(%ebp)   # long int
            movl $6, -32(%ebp)   # long long int
            movl $0, -28(%ebp)
            movl $0x40e00000, %eax
            movl %eax, -36(%ebp)
            fldl LC2
            fstpl -48(%ebp)
            leave
            ret

    I compiled the same code on a 64-bit processor (Intel Core 2 Duo) with GCC (Linux):

        legacyTypes:
        .LFB2:
            .cfi_startproc
            pushq %rbp
            .cfi_def_cfa_offset 16
            movq %rsp, %rbp
            .cfi_offset 6, -16
            .cfi_def_cfa_register 6
            movb $97, -1(%rbp)   # char
            movb $97, -2(%rbp)   # signed char
            movb $97, -3(%rbp)   # unsigned char
            movl $1, -12(%rbp)   # int
            movl $-2, -16(%rbp)  # signed int
            movl $3, -20(%rbp)   # unsigned int
            movw $4, -6(%rbp)    # short int
            movq $5, -32(%rbp)   # long int
            movq $6, -40(%rbp)   # long long int
            movl $0x40e00000, %eax
            movl %eax, -24(%rbp)
            movabsq $4620693217682128896, %rax
            movq %rax, -48(%rbp)
            leave
            ret

    Observations: char, signed char, unsigned char, int, unsigned int, signed int, short int, unsigned short int, and signed short int all occupy the same number of bytes on both the 32-bit and 64-bit processors. The only change is in long int and long long int: both of these occupy 32 bits on the 32-bit machine and 64 bits on the 64-bit machine. And also the pointers, which take 32 bits on a 32-bit CPU and 64 bits on a 64-bit CPU.

    Questions:
    1. I cannot say what the books say is wrong, but I'm missing something here. What exactly does "variable types are machine dependent" mean?
    2. As you can see, there is no difference between the instructions for unsigned and signed numbers. Then how come the range of numbers that can be addressed using both is different?
    3. I was reading http://stackoverflow.com/questions/2511246/how-to-maintain-fixed-size-of-c-variable-types-over-different-machines and I didn't get the purpose of the question or its answers. What "maintaining fixed size"? They are all the same. I didn't understand how those answers would ensure the same size.
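
    A quick way to see both points at once: sizeof reports the per-platform sizes (the long and pointer growth visible between the two dumps above), while signedness only changes which instructions the compiler picks for comparisons, division, and widening, never the plain stores shown in the dumps. A small demo (C++, but plain C behaves identically):

        #include <cstdio>

        int main() {
            // Sizes are what vary per platform/ABI: long and pointers grew in the
            // 64-bit dump above, while char/short/int stayed put.
            std::printf("int=%zu long=%zu long long=%zu void*=%zu\n",
                        sizeof(int), sizeof(long), sizeof(long long), sizeof(void*));

            // Same bit pattern, different interpretation: signedness lives in the
            // operations applied to a value, not in the mov that stores it, which
            // is why the two assembly dumps look alike.
            unsigned int u = 0xFFFFFFFFu;
            int s = static_cast<int>(u);
            std::printf("as unsigned: %u  as signed: %d\n", u, s);
            std::printf("u > 0 is %d, s > 0 is %d\n", u > 0 ? 1 : 0, s > 0 ? 1 : 0);
        }

    On a typical x86 compiler, the u > 0 and s > 0 comparisons are where signed (jg) versus unsigned (ja) instructions finally diverge, even though the stores of u and s were identical.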

    Read the article

  • Data adapter not filling my dataset

    - by Doug Ancil
    I have the following code: Imports System.Data.SqlClient Public Class Main Protected WithEvents DataGridView1 As DataGridView Dim instForm2 As New Exceptions Private Sub Button1_Click_1(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles startpayrollButton.Click Dim ssql As String = "select MAX(payrolldate) AS [payrolldate], " & _ "dateadd(dd, ((datediff(dd, '17530107', MAX(payrolldate))/7)*7)+7, '17530107') AS [Sunday]" & _ "from dbo.payroll" & _ " where payrollran = 'no'" Dim oCmd As System.Data.SqlClient.SqlCommand Dim oDr As System.Data.SqlClient.SqlDataReader oCmd = New System.Data.SqlClient.SqlCommand Try With oCmd .Connection = New System.Data.SqlClient.SqlConnection("Initial Catalog=mdr;Data Source=xxxxx;uid=xxxxx;password=xxxxx") .Connection.Open() .CommandType = CommandType.Text .CommandText = ssql oDr = .ExecuteReader() End With If oDr.Read Then payperiodstartdate = oDr.GetDateTime(1) payperiodenddate = payperiodstartdate.AddSeconds(604799) Dim ButtonDialogResult As DialogResult ButtonDialogResult = MessageBox.Show(" The Next Payroll Start Date is: " & payperiodstartdate.ToString() & System.Environment.NewLine & " Through End Date: " & payperiodenddate.ToString()) If ButtonDialogResult = Windows.Forms.DialogResult.OK Then exceptionsButton.Enabled = True startpayrollButton.Enabled = False End If End If oDr.Close() oCmd.Connection.Close() Catch ex As Exception MessageBox.Show(ex.Message) oCmd.Connection.Close() End Try End Sub Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles exceptionsButton.Click Dim connection As System.Data.SqlClient.SqlConnection Dim adapter As System.Data.SqlClient.SqlDataAdapter = New System.Data.SqlClient.SqlDataAdapter Dim connectionString As String = "Initial Catalog=mdr;Data Source=xxxxx;uid=xxxxx;password=xxxxx" Dim ds As New DataSet Dim _sql As String = "SELECT [Exceptions].Employeenumber,[Exceptions].exceptiondate, [Exceptions].starttime, [exceptions].endtime, [Exceptions].code, datediff(minute, starttime, endtime) as duration INTO scratchpad3" & _ " FROM Employees INNER JOIN Exceptions ON [Exceptions].EmployeeNumber = [Exceptions].Employeenumber" & _ " where [Exceptions].exceptiondate between @payperiodstartdate and @payperiodenddate" & _ " GROUP BY [Exceptions].Employeenumber, [Exceptions].Exceptiondate, [Exceptions].starttime, [exceptions].endtime," & _ " [Exceptions].code, [Exceptions].exceptiondate" connection = New SqlConnection(connectionString) connection.Open() Dim _CMD As SqlCommand = New SqlCommand(_sql, connection) _CMD.Parameters.AddWithValue("@payperiodstartdate", payperiodstartdate) _CMD.Parameters.AddWithValue("@payperiodenddate", payperiodenddate) adapter.SelectCommand = _CMD Try adapter.Fill(ds) If ds Is Nothing OrElse ds.Tables.Count = 0 OrElse ds.Tables(0).Rows.Count = 0 Then 'it's empty MessageBox.Show("There was no data for this time period. 
Press Ok to continue", "No Data") connection.Close() Exceptions.saveButton.Enabled = False Exceptions.Hide() Else connection.Close() End If Catch ex As Exception MessageBox.Show(ex.ToString) connection.Close() End Try Exceptions.Show() End Sub Private Sub payrollButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles payrollButton.Click Payrollfinal.Show() End Sub End Class and when I run my program and press this button Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles exceptionsButton.Click I have my date range within a time that I know that my dataset should produce a result, but when I put a line break in my code here: adapter.Fill(ds) and look at it in debug, I show a table value of 0. If I run the same query that I have to produce these results in sql analyser, I see 1 result. Can someone see why my query on my form produces a different result than the sql analyser does? Also here is my schema for my two tables: Exceptions employeenumber varchar no 50 yes no no SQL_Latin1_General_CP1_CI_AS exceptiondate datetime no 8 yes (n/a) (n/a) NULL starttime datetime no 8 yes (n/a) (n/a) NULL endtime datetime no 8 yes (n/a) (n/a) NULL duration varchar no 50 yes no no SQL_Latin1_General_CP1_CI_AS code varchar no 50 yes no no SQL_Latin1_General_CP1_CI_AS approvedby varchar no 50 yes no no SQL_Latin1_General_CP1_CI_AS approved varchar no 50 yes no no SQL_Latin1_General_CP1_CI_AS time timestamp no 8 yes (n/a) (n/a) NULL employees employeenumber varchar no 50 no no no SQL_Latin1_General_CP1_CI_AS name varchar no 50 no no no SQL_Latin1_General_CP1_CI_AS initials varchar no 50 no no no SQL_Latin1_General_CP1_CI_AS loginname1 varchar no 50 yes no no SQL_Latin1_General_CP1_CI_AS

    Read the article

  • trie reg exp parse step over char and continue

    - by forest.peterson
    Setup: 1) a string trie database formed from linked nodes and a vector array linking to the next node terminating in a leaf, 2) a recursive regular expression function that if A) char '*' continues down all paths until string length limit is reached, then continues down remaining string paths if valid, and B) char '?' continues down all paths for 1 char and then continues down remaining string paths if valid. 3) after reg expression the candidate strings are measured for edit distance against the 'try' string. Problem: the reg expression works fine for adding chars or swapping ? for a char but if the remaining string has an error then there is not a valid path to a terminating leaf; making the matching function redundant. I tried adding a 'step-over' ? char if the end of the node vector was reached and then followed every path of that node - allowing this step-over only once; resulted in a memory exception; I cannot find logically why it is accessing the vector out of range - bactracking? Questions: 1) how can the regular expression step over an invalid char and continue with the path? 2) why is swapping the 'sticking' char for '?' resulting in an overflow? Function: void Ontology::matchRegExpHelper(nodeT *w, string inWild, Set<string> &matchSet, string out, int level, int pos, int stepover) { if (inWild=="") { matchSet.add(out); } else { if (w->alpha.size() == pos) { int testLength = out.length() + inWild.length(); if (stepover == 0 && matchSet.size() == 0 && out.length() > 8 && testLength == tokenLength) {//candidate generator inWild[0] = '?'; matchRegExpHelper(w, inWild, matchSet, out, level, 0, stepover+1); } else return; //giveup on this path } if (inWild[0] == '?' || (inWild[0] == '*' && (out.length() + inWild.length() ) == level ) ) { //wild matchRegExpHelper(w->alpha[pos].next, inWild.substr(1), matchSet, out+w->alpha[pos].letter, level, 0, stepover);//follow path -> if ontology is full, treat '*' like a '?' } else if (inWild[0] == '*') matchRegExpHelper(w->alpha[pos].next, '*'+inWild.substr(1), matchSet, out+w->alpha[pos].letter, level, 0, stepover); //keep adding chars if (inWild[0] == w->alpha[pos].letter) //follow self matchRegExpHelper(w->alpha[pos].next, inWild.substr(1), matchSet, out+w->alpha[pos].letter, level, 0, stepover); //follow char matchRegExpHelper(w, inWild, matchSet, out, level, pos+1, stepover);//check next path } } Error Message: +str "Attempt to access index 1 in a vector of size 1." std::basic_string<char,std::char_traits<char>,std::allocator<char> > +err {msg="Attempt to access index 1 in a vector of size 1." } ErrorException Note: this function works fine for hundreds of test strings with '*' wilds if the extra stepover gate is not used Semi-Solved: I place a pos < w->alpha.size() condition on each path that calls w->alpha[pos]... - this prevented the backtrack calls from attempting to access the vector with an out of bounds index value. Still have other issues to work out - it loops infinitely adding the ? and backtracking to remove it, then repeat. But, moving forward now. Revised question: why during backtracking is the position index accumulating and/or not deincrementing - so at somepoint it calls w->alpha[pos]... with an invalid position that is either remaining from the next node or somehow incremented pos+1 when passing upward?
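
    On the out-of-range access specifically: every alpha[pos] access needs a bounds guard before it, because the sibling recursion (pos + 1) can legitimately run past the end of the vector during backtracking. A stripped-down, self-contained sketch of that guarded recursion, with the '*' handling, step-over logic, and leaf bookkeeping omitted so the guard stays visible (node layout and names are illustrative, not the question's types):

        #include <iostream>
        #include <string>
        #include <vector>

        struct Node;
        struct Edge { char letter; Node* next; };
        struct Node { std::vector<Edge> alpha; };

        // Walk one wildcard pattern over the trie. The sibling call below can
        // push 'pos' past alpha.size(), so the access is guarded first.
        void match(Node* w, const std::string& wild, std::string out,
                   std::vector<std::string>& hits, size_t pos = 0) {
            if (wild.empty()) { hits.push_back(out); return; }
            if (pos >= w->alpha.size()) return;   // nothing left to try at this node
            if (wild[0] == '?' || wild[0] == w->alpha[pos].letter)
                match(w->alpha[pos].next, wild.substr(1), out + w->alpha[pos].letter, hits);
            match(w, wild, out, hits, pos + 1);   // try the next sibling edge
        }

        int main() {
            Node leaf;
            Node mid{{{'b', &leaf}, {'c', &leaf}}};
            Node root{{{'a', &mid}}};
            std::vector<std::string> hits;
            match(&root, "a?", "", hits);
            for (const auto& h : hits) std::cout << h << "\n";  // prints "ab" and "ac"
        }

    The accumulating-position symptom described above is consistent with a guard that only existed at the top of the function: a backtracking sibling call re-enters with pos + 1, and any alpha[pos] touched before the size check fires is out of range.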

    Read the article

  • How to handle failure to release a resource which is contained in a smart pointer?

    - by cj
    How should an error during resource deallocation be handled, when the object representing the resource is contained in a shared pointer? Smart pointers are a useful tool to manage resources safely. Examples of such resources are memory, disk files, database connections, or network connections. // open a connection to the local HTTP port boost::shared_ptr<Socket> socket = Socket::connect("localhost:80"); In a typical scenario, the class encapsulating the resource should be noncopyable and polymorphic. A good way to support this is to provide a factory method returning a shared pointer, and declare all constructors non-public. The shared pointers can now be copied from and assigned to freely. The object is automatically destroyed when no reference to it remains, and the destructor then releases the resource. /** A TCP/IP connection. */ class Socket { public: static boost::shared_ptr<Socket> connect(const std::string& address); virtual ~Socket(); protected: Socket(const std::string& address); private: // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; But there is a problem with this approach. The destructor must not throw, so a failure to release the resource will remain undetected. A common way out of this problem is to add a public method to release the resource. class Socket { public: virtual void close(); // may throw // ... }; Unfortunately, this approach introduces another problem: Our objects may now contain resources which have already been released. This complicates the implementation of the resource class. Even worse, it makes it possible for clients of the class to use it incorrectly. The following example may seem far-fetched, but it is a common pitfall in multi-threaded code. socket->close(); // ... size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use! Either we ensure that the resource is not released before the object is destroyed, thereby losing any way to deal with a failed resource deallocation. Or we provide a way to release the resource explicitly during the object's lifetime, thereby making it possible to use the resource class incorrectly. There is a way out of this dilemma. But the solution involves using a modified shared pointer class. These modifications are likely to be controversial. Typical shared pointer implementations, such as boost::shared_ptr, require that no exception be thrown when their object's destructor is called. Generally, no destructor should ever throw, so this is a reasonable requirement. These implementations also allow a custom deleter function to be specified, which is called in lieu of the destructor when no reference to the object remains. The no-throw requirement is extended to this custom deleter function. The rationale for this requirement is clear: The shared pointer's destructor must not throw. If the deleter function does not throw, nor will the shared pointer's destructor. However, the same holds for other member functions of the shared pointer which lead to resource deallocation, e.g. reset(): If resource deallocation fails, no exception can be thrown. The solution proposed here is to allow custom deleter functions to throw. This means that the modified shared pointer's destructor must catch exceptions thrown by the deleter function. On the other hand, member functions other than the destructor, e.g. reset(), shall not catch exceptions of the deleter function (and their implementation becomes somewhat more complicated). 
Here is the original example, using a throwing deleter function: /** A TCP/IP connection. */ class Socket { public: static SharedPtr<Socket> connect(const std::string& address); protected: Socket(const std::string& address); virtual Socket() { } private: struct Deleter; // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; struct Socket::Deleter { void operator()(Socket* socket) { // Close the connection. If an error occurs, delete the socket // and throw an exception. delete socket; } }; SharedPtr<Socket> Socket::connect(const std::string& address) { return SharedPtr<Socket>(new Socket(address), Deleter()); } We can now use reset() to free the resource explicitly. If there is still a reference to the resource in another thread or another part of the program, calling reset() will only decrement the reference count. If this is the last reference to the resource, the resource is released. If resource deallocation fails, an exception is thrown. SharedPtr<Socket> socket = Socket::connect("localhost:80"); // ... socket.reset();
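
    For contrast with the modified-SharedPtr proposal above, here is a minimal sketch of the conventional compromise it argues against: an explicit close() that may throw (so callers who care can see the failure), plus a destructor that calls close() as a last resort and swallows any error. The Connection class and its simulated failure are illustrative assumptions, not a real socket API:

        #include <iostream>
        #include <memory>
        #include <stdexcept>

        class Connection {
        public:
            void close() {                      // explicit path: may throw
                if (closed_) return;            // makes repeated close a no-op
                closed_ = true;
                // pretend the OS reported an error while releasing the resource:
                throw std::runtime_error("close failed");
            }
            ~Connection() {
                try { close(); }                // last-resort cleanup
                catch (...) { /* destructor must not throw: error is lost */ }
            }
        private:
            bool closed_ = false;
        };

        int main() {
            auto c = std::make_shared<Connection>();
            try {
                c->close();                     // deallocation failure is visible here
            } catch (const std::exception& e) {
                std::cout << "caught: " << e.what() << "\n";
            }
        }   // destructor runs; the second close() is a harmless no-op

    The idempotent closed_ flag is what tames the "object contains an already-released resource" problem the text raises, though, as the text notes, it cannot prevent a caller from using the object after close() in other ways.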

    Read the article

  • how to avoid sub-query to gain performance

    - by chun
    hi i have a reporting query which have 2 long sub-query SELECT r1.code_centre, r1.libelle_centre, r1.id_equipe, r1.equipe, r1.id_file_attente, r1.libelle_file_attente,r1.id_date, r1.tranche, r1.id_granularite_de_periode,r1.granularite, r1.ContactsTraites, r1.ContactsenParcage, r1.ContactsenComm, r1.DureeTraitementContacts, r1.DureeComm, r1.DureeParcage, r2.AgentsConnectes, r2.DureeConnexion, r2.DureeTraitementAgents, r2.DureePostTraitement FROM ( SELECT cc.id_centre_contact, cc.code_centre, cc.libelle_centre, a.id_equipe, a.equipe, a.id_file_attente, f.libelle_file_attente, a.id_date, g.tranche, g.id_granularite_de_periode, g.granularite, sum(Nb_Contacts_Traites) as ContactsTraites, sum(Nb_Contacts_en_Parcage) as ContactsenParcage, sum(Nb_Contacts_en_Communication) as ContactsenComm, sum(Duree_Traitement/1000) as DureeTraitementContacts, sum(Duree_Communication / 1000 + Duree_Conference / 1000 + Duree_Com_Interagent / 1000) as DureeComm, sum(Duree_Parcage/1000) as DureeParcage FROM agr_synthese_activite_media_fa_agent a, centre_contact cc, direction_contact dc, granularite_de_periode g, media m, file_attente f WHERE m.id_media = a.id_media AND cc.id_centre_contact = a.id_centre_contact AND a.id_direction_contact = dc.id_direction_contact AND dc.direction_contact ='INCOMING' AND a.id_file_attente = f.id_file_attente AND m.media = 'PHONE' AND ( ( g.valeur_min = date_format(a.id_date,'%d/%m') and g.granularite = 'Jour') or ( g.granularite = 'Heure' and a.id_th_heure = g.id_granularite_de_periode) ) GROUP by cc.id_centre_contact, a.id_equipe, a.id_file_attente, a.id_date, g.tranche, g.id_granularite_de_periode) r1, ( (SELECT cc.id_centre_contact,cc.code_centre, cc.libelle_centre, a.id_equipe, a.equipe, a.id_date, g.tranche, g.id_granularite_de_periode,g.granularite, count(distinct a.id_agent) as AgentsConnectes, sum(Duree_Connexion / 1000) as DureeConnexion, sum(Duree_en_Traitement / 1000) as DureeTraitementAgents, sum(Duree_en_PostTraitement / 1000) as DureePostTraitement FROM activite_agent a, centre_contact cc, granularite_de_periode g WHERE ( g.valeur_min = date_format(a.id_date,'%d/%m') and g.granularite = 'Jour') AND cc.id_centre_contact = a.id_centre_contact GROUP BY cc.id_centre_contact, a.id_equipe, a.id_date, g.tranche, g.id_granularite_de_periode ) UNION (SELECT cc.id_centre_contact,cc.code_centre, cc.libelle_centre, a.id_equipe, a.equipe, a.id_date, g.tranche, g.id_granularite_de_periode,g.granularite, count(distinct a.id_agent) as AgentsConnectes, sum(Duree_Connexion / 1000) as DureeConnexion, sum(Duree_en_Traitement / 1000) as DureeTraitementAgents, sum(Duree_en_PostTraitement / 1000) as DureePostTraitement FROM activite_agent a, centre_contact cc, granularite_de_periode g WHERE ( g.granularite = 'Heure' AND a.id_th_heure = g.id_granularite_de_periode) AND cc.id_centre_contact = a.id_centre_contact GROUP BY cc.id_centre_contact,a.id_equipe, a.id_date, g.tranche, g.id_granularite_de_periode) ) r2 WHERE r1.id_centre_contact = r2.id_centre_contact AND r1.id_equipe = r2.id_equipe AND r1.id_date = r2.id_date AND r1.tranche = r2.tranche AND r1.id_granularite_de_periode = r2.id_granularite_de_periode GROUP BY r1.id_centre_contact , r1.id_equipe, r1.id_file_attente, r1.id_date, r1.tranche, r1.id_granularite_de_periode ORDER BY r1.code_centre, r1.libelle_centre, r1.equipe, r1.libelle_file_attente, r1.id_date, r1.id_granularite_de_periode,r1.tranche the EXPLAIN shows | id | select_type | table | type| possible_keys | key | key_len | ref| rows | Extra | '1', 'PRIMARY', '<derived3>', 
'ALL', NULL, NULL, NULL, NULL, '2520', 'Using temporary; Using filesort' '1', 'PRIMARY', '<derived2>', 'ALL', NULL, NULL, NULL, NULL, '4378', 'Using where; Using join buffer' '3', 'DERIVED', 'a', 'ALL', 'fk_Activite_Agent_centre_contact', NULL, NULL, NULL, '83433', 'Using temporary; Using filesort' '3', 'DERIVED', 'g', 'ref', 'Index_granularite,Index_Valeur_min', 'Index_Valeur_min', '23', 'func', '1', 'Using where' '3', 'DERIVED', 'cc', 'ALL', 'PRIMARY', NULL, NULL, NULL, '6', 'Using where; Using join buffer' '4', 'UNION', 'g', 'ref', 'PRIMARY,Index_granularite', 'Index_granularite', '23', '', '24', 'Using where; Using temporary; Using filesort' '4', 'UNION', 'a', 'ref', 'fk_Activite_Agent_centre_contact,fk_activite_agent_TH_heure', 'fk_activite_agent_TH_heure', '5', 'reporting_acd.g.Id_Granularite_de_periode', '2979', 'Using where' '4', 'UNION', 'cc', 'ALL', 'PRIMARY', NULL, NULL, NULL, '6', 'Using where; Using join buffer' NULL, 'UNION RESULT', '<union3,4>', 'ALL', NULL, NULL, NULL, NULL, NULL, '' '2', 'DERIVED', 'g', 'range', 'PRIMARY,Index_granularite,Index_Valeur_min', 'Index_granularite', '23', NULL, '389', 'Using where; Using temporary; Using filesort' '2', 'DERIVED', 'a', 'ALL', 'fk_agr_synthese_activite_media_fa_agent_centre_contact,fk_agr_synthese_activite_media_fa_agent_direction_contact,fk_agr_synthese_activite_media_fa_agent_file_attente,fk_agr_synthese_activite_media_fa_agent_media,fk_agr_synthese_activite_media_fa_agent_th_heure', NULL, NULL, NULL, '20903', 'Using where; Using join buffer' '2', 'DERIVED', 'cc', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_Centre_Contact', '1', '' '2', 'DERIVED', 'f', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_File_Attente', '1', '' '2', 'DERIVED', 'dc', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_Direction_Contact', '1', 'Using where' '2', 'DERIVED', 'm', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_Media', '1', 'Using where' don't know it very clear, but i think is the problem of seems it take full scaning than i change all the sub-query to views(create view as select sub-query), and the result is the same thanks for any advice

    Read the article

  • How would you go about tackling this problem? [SOLVED in C++]

    - by incrediman
    Intro: EDIT: See solution at the bottom of this question (c++) I have a programming contest coming up in about half a week, and I've been prepping :) I found a bunch of questions from this canadian competition, they're great practice: http://cemc.math.uwaterloo.ca/contests/computing/2009/stage2/day1.pdf I'm looking at problem B ("Dinner"). Any idea where to start? I can't really think of anything besides the naive approach (ie. trying all permutations) which would take too long to be a valid answer. Btw, the language there says c++ and pascal I think, but i don't care what language you use - I mean really all I want is a hint as to the direction I should proceed in, and perhpas a short explanation to go along with it. It feels like I'm missing something obvious... Of course extended speculation is more than welcome, but I just wanted to clarify that I'm not looking for a full solution here :) Short version of the question: You have a binary string N of length 1-100 (in the question they use H's and G's instead of one's and 0's). You must remove all of the digits from it, in the least number of steps possible. In each step you may remove any number of adjacent digits so long as they are the same. That is, in each step you can remove any number of adjacent G's, or any number of adjacent H's, but you can't remove H's and G's in one step. Example: HHHGHHGHH Solution to the example: 1. HHGGHH (remove middle Hs) 2. HHHH (remove middle Gs) 3. Done (remove Hs) -->Would return '3' as the answer. Note that there can also be a limit placed on how large adjacent groups have to be when you remove them. For example it might say '2', and then you can't remove single digits (you'd have to remove pairs or larger groups at a time). Solution I took Mark Harrison's main algorithm, and Paradigm's grouping idea and used them to create the solution below. You can try it out on the official test cases if you want. //B.cpp //include debug messages? 
    //B.cpp
    //include debug messages?
    #define DEBUG false
    #include <iostream>
    #include <stdio.h>
    #include <vector>
    using namespace std;

    #define FOR(i,n) for (int i=0;i<n;i++)
    #define FROM(i,s,n) for (int i=s;i<n;i++)
    #define H 'H'
    #define G 'G'

    class String{
    public:
        int num;
        char type;
        String(){ type=H; num=0; }
        String(char type){ this->type=type; num=1; }
    };

    //n is the number of bits originally in the line
    //k is the minimum number of people you can remove at a time
    //moves is the counter used to determine how many moves we've made so far
    int n, k, moves;

    int main(){
        /*Input from file*/
        scanf("%d %d",&n,&k);
        char * buffer = new char[200];
        scanf("%s",buffer);

        /*Process input into a vector*/
        //the 'line' is a vector of 'String's (essentially contiguous groups of identical 'bits')
        vector<String> line;
        line.push_back(String());
        FOR(i,n){
            //if the last String is of the correct type, simply increment its count
            if (line.back().type==buffer[i]) line.back().num++;
            //if the last String is of the wrong type but has a 0 count, correct its type and set its count to 1
            else if (line.back().num==0){ line.back().type=buffer[i]; line.back().num=1; }
            //otherwise this is the beginning of a new group, so create the new group at the back with the correct type, and a count of 1
            else{ line.push_back(String(buffer[i])); }
        }

        /*Greedily remove groups until there are at most two groups left*/
        moves=0;
        int I;       //the position of the best group to remove
        int bestNum; //the size of the newly connected group the removal of group I will create
        while (line.size()>2){
            /*START DEBUG*/
            if (DEBUG){
                cout<<"\n"<<moves<<"\n----\n";
                FOR(i,line.size()) printf("%d %c \n",line[i].num,line[i].type);
                cout<<"----\n";
            }
            /*END DEBUG*/
            I=1;
            bestNum=-1;
            FROM(i,1,line.size()-1){
                if (line[i-1].num+line[i+1].num>bestNum && line[i].num>=k){
                    bestNum=line[i-1].num+line[i+1].num;
                    I=i;
                }
            }
            //remove the chosen group, thus merging the two adjacent groups
            line[I-1].num+=line[I+1].num;
            line.erase(line.begin()+I);
            line.erase(line.begin()+I);
            moves++;
        }

        /*START DEBUG*/
        if (DEBUG){
            cout<<"\n"<<moves<<"\n----\n";
            FOR(i,line.size()) printf("%d %c \n",line[i].num,line[i].type);
            cout<<"----\n";
            cout<<"\n\nFinal Answer: ";
        }
        /*END DEBUG*/

        /*Attempt the removal of the last two groups, and output the final result*/
        if (line.size()==2 && line[0].num>=k && line[1].num>=k) cout<<moves+2; //success
        else if (line.size()==1 && line[0].num>=k) cout<<moves+1;              //success
        else cout<<-1;                                                         //not everyone could dine.

        /*START DEBUG*/
        if (DEBUG){ cout<<" moves."; }
        /*END DEBUG*/
    }
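    As a quick sanity check on the greedy solution above: when k = 1, removing an interior run merges its two neighbours into one, so each such step reduces the number of maximal runs ("groups") by two, and the answer depends only on the initial group count. The sketch below is my own cross-check, not part of the contest material; the class name DinnerCheck is made up, and it deliberately ignores the k > 1 variant.

    // Cross-check for the k = 1 case only (my observation, not the official solution):
    // the minimum number of steps is floor(groups / 2) + 1, where "groups" is the
    // number of maximal runs of equal characters in the string.
    public final class DinnerCheck {
        static int minSteps(String s) {
            if (s.isEmpty()) return 0;
            int groups = 1;
            for (int i = 1; i < s.length(); i++)
                if (s.charAt(i) != s.charAt(i - 1)) groups++; // a new run starts here
            return groups / 2 + 1; // each interior removal eliminates two runs
        }
        public static void main(String[] args) {
            System.out.println(minSteps("HHHGHHGHH")); // prints 3, matching the walkthrough above
        }
    }

    For the example string HHHGHHGHH there are five runs (H, G, HH, G, HH), giving 5/2 + 1 = 3, in agreement with the step-by-step solution.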

    Read the article

  • Java code optimization leads to numerical inaccuracies and errors

    - by rano
    I'm trying to implement a version of the Fuzzy C-Means algorithm in Java, and I'm trying to do some optimization by computing just once everything that can be computed just once. This is an iterative algorithm, and regarding the updating of a matrix, the clusters x pixels membership matrix U, this is the update rule I want to optimize:

    u_ij = 1 / sum_{k=1..C} ( ||x_i - v_j|| / ||x_i - v_k|| )^( 2 / (m - 1) )

    where the x_i are the rows of a matrix X (pixels x features) and the v_j are the rows of the matrix V (clusters x features). m is a parameter that ranges from 1.1 to infinity. The distance used is the Euclidean norm; since D[i][j] stores the squared distance ||x_i - v_j||^2, the exponent in the code becomes 1/(m - 1). If I had to implement this formula in a banal way, I'd do:

    for (int i = 0; i < X.length; i++) {
        for (int j = 0; j < V.length; j++) {
            double num = D[i][j];
            double sumTerms = 0;
            for (int k = 0; k < V.length; k++) {
                double thisDistance = D[i][k];
                sumTerms += Math.pow(num / thisDistance, 1.0 / (m - 1.0));
            }
            U[i][j] = (float) (1f / sumTerms);
        }
    }

    In this way some optimization is already done: I precomputed all the possible squared distances between X and V and stored them in a matrix D. But that is not enough, since I'm cycling through the elements of V two times, resulting in two nested loops. Looking at the formula, the numerator of the fraction is independent of the sum, so I can compute numerator and denominator independently, and the denominator can be computed just once for each pixel. So I came to a solution like this:

    int nClusters = V.length;
    double exp = 1.0 / (m - 1.0);
    for (int i = 0; i < X.length; i++) {
        for (int j = 0; j < nClusters; j++) {
            double distance = D[i][j];
            double denominator = D[i][nClusters];
            double numerator = Math.pow(distance, exp);
            U[i][j] = (float) (1f / (numerator * denominator));
        }
    }

    where I precomputed the denominator into an additional column of the matrix D while I was computing the distances:

    for (int i = 0; i < X.length; i++) {
        for (int j = 0; j < V.length; j++) {
            double sum = 0;
            for (int k = 0; k < nDims; k++) {
                final double d = X[i][k] - V[j][k];
                sum += d * d;
            }
            D[i][j] = sum;
            D[i][V.length] += Math.pow(1 / D[i][j], exp); // extra column accumulates the denominator
        }
    }

    By doing so I encounter numerical differences between the 'banal' computation and the second one, which lead to different numerical values in U (not in the first iterations, but soon enough). I guess the problem is that raising very small numbers to high powers (the elements of U can range from 0.0 to 1.0, and exp, for m = 1.1, is 10) leads to very small values, whereas dividing the numerator by the denominator and THEN exponentiating the ratio seems to behave better numerically. The problem is that it involves many more operations. Am I doing something wrong? Is there a possible solution to get the code both optimized and numerically stable? Any suggestion or criticism will be appreciated.
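    One way to keep the single pass per pixel while avoiding the tiny intermediate values is to rescale by each pixel's smallest squared distance before exponentiating, the same trick used in log-sum-exp. The sketch below is my suggestion, not the asker's code: FcmUpdateSketch and updateMemberships are hypothetical names, D is the (pixels x clusters) squared-distance matrix from the question without the extra column, and, like the original snippets, it assumes every D[i][j] is strictly positive.

    public final class FcmUpdateSketch {
        // Numerically stabler membership update: algebraically identical to
        // u_ij = 1 / sum_k (D[i][j] / D[i][k])^exp, but every Math.pow argument
        // is a ratio anchored at 1 rather than a raw squared distance.
        static float[][] updateMemberships(double[][] D, double m) {
            int nPixels = D.length, nClusters = D[0].length;
            double exp = 1.0 / (m - 1.0);
            float[][] U = new float[nPixels][nClusters];
            for (int i = 0; i < nPixels; i++) {
                double dMin = Double.MAX_VALUE; // smallest squared distance for pixel i
                for (int k = 0; k < nClusters; k++) dMin = Math.min(dMin, D[i][k]);
                double S = 0; // S = sum_k (dMin / D[i][k])^exp; every term lies in (0, 1]
                for (int k = 0; k < nClusters; k++) S += Math.pow(dMin / D[i][k], exp);
                for (int j = 0; j < nClusters; j++)
                    // u_ij = 1 / ((D[i][j] / dMin)^exp * S)
                    U[i][j] = (float) (1.0 / (Math.pow(D[i][j] / dMin, exp) * S));
            }
            return U;
        }
        public static void main(String[] args) {
            double[][] D = { {1.0, 4.0}, {9.0, 1.0} }; // toy squared distances
            float[][] U = updateMemberships(D, 2.0);   // m = 2 gives exp = 1
            System.out.println(U[0][0] + " " + U[0][1]); // 0.8 0.2
        }
    }

    Multiplying numerator and denominator of each term by dMin^exp leaves the result unchanged, but the summed terms can no longer underflow (the largest is exactly 1), and the extra cost is one O(nClusters) scan per pixel, so the single-pass structure of the optimized version is preserved.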

    Read the article
