Search Results

Search found 609 results on 25 pages for 'bs c3'.

  • R- delete rows in multiple columns by unique number

    - by Vincent Moriarty
    Given data like this: C1<-c(3,-999.000,4,4,5) C2<-c(3,7,3,4,5) C3<-c(5,4,3,6,-999.000) DF<-data.frame(ID=c("A","B","C","D","E"),C1=C1,C2=C2,C3=C3) How do I go about removing the -999.000 data in all of the columns? I know this works per column: DF2<-DF[!(DF$C1==-999.000 | DF$C2==-999.000 | DF$C3==-999.000),] But I'd like to avoid referencing each column. I am thinking there is an easy way to reference all of the columns in a particular data frame, e.g.: DF3<-DF[!(DF[,]==-999.000),] or DF3<-DF[!(DF[,(2:4)]==-999.000),] but obviously these do not work. And out of curiosity, bonus points if you can tell me why I need that last comma before the ending square bracket, as in: ==-999.000),]
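    A minimal sketch of the row-wise generalisation being asked for (my own illustration, not from the post; it assumes the goal is to drop any row containing -999.000 in columns 2 to 4). As for the bonus question, the trailing comma is needed because DF[rows, cols] indexes rows before the comma and columns after it, so leaving the column slot empty means "these rows, all columns":

    ```r
    # TRUE for rows that are free of -999.000 in columns 2:4, then keep only those rows.
    keep <- !apply(DF[, 2:4] == -999.000, 1, any)
    DF3 <- DF[keep, ]   # empty slot after the comma = all columns
    ```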

  • how to swap array-elements to transfer the array from a column-like into a row-like representation

    - by Christian Ammer
    For example: the array a1, a2, a3, b1, b2, b3, c1, c2, c3, d1, d2, d3 represents the following table a1, b1, c1, d1 a2, b2, c2, d2 a3, b3, c3, d3 Now I would like to bring the array into the following form: a1, b1, c1, d1, a2, b2, c2, d2, a3, b3, c3, d3 Does an algorithm exist which takes the array (in the first form) and the dimensions of the table as input arguments and transforms the array into the second form? I thought of an algorithm which doesn't need to allocate additional memory; instead, I think it should be possible to do the job with element-swap operations.
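    This is the classic in-place matrix transposition problem. A sketch of the cycle-following approach in C++ (my own illustration, not from the post): in an r-by-c row-major array, the element at index i belongs at index (i * r) mod (r * c - 1) after transposition, so we walk those permutation cycles and swap as we go. The visited bitmap is kept for clarity; a strictly in-place variant would instead only start a cycle when the start index is the smallest index of that cycle.

    ```cpp
    #include <utility>
    #include <vector>

    // Transpose an r x c matrix stored contiguously in row-major order,
    // by following permutation cycles: index i moves to (i * rows) % (n - 1).
    void transpose_in_place(std::vector<int>& a, int rows, int cols) {
        const long long n = static_cast<long long>(rows) * cols;
        std::vector<bool> placed(static_cast<std::size_t>(n), false);
        for (long long start = 1; start + 1 < n; ++start) {
            if (placed[start]) continue;
            long long cur = start;
            int carry = a[start];              // value waiting to be dropped at its destination
            do {
                long long next = cur * rows % (n - 1);
                std::swap(carry, a[next]);     // place the carried value, pick up the displaced one
                placed[next] = true;
                cur = next;
            } while (cur != start);
        }
    }
    ```

    For the example above (a 3-by-4 table stored column by column, i.e. a 4x3 row-major array), transpose_in_place(a, 4, 3) produces a1, b1, c1, d1, a2, ... in place.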

  • HLSL - Combining textures

    - by b34r
    Hi All, I'm trying to combine two textures in HLSL - specifically, I want to take the alpha values from a base image, and the color data from an overlay image. My pixel shader for this looks like this: float4 PixelShaderFunction(VertexOut input) : COLOR0 { float4 baseColor = tex2D( BaseSampler, input.baseCoords.xy ).rgba; float4 overlayColor = tex2D( OverlaySampler, input.overlayCoords.xy ).rgba; float4 color; color.r = overlayColor.r; color.g = overlayColor.g; color.b = overlayColor.b; color.a = baseColor.a; return color.rgba; } and my blend state looks like this: BlendState bs = new BlendState(); bs.AlphaSourceBlend = Blend.SourceAlpha; bs.AlphaDestinationBlend = Blend.DestinationAlpha; bs.ColorSourceBlend = Blend.SourceColor; bs.ColorDestinationBlend = Blend.DestinationColor; What this leaves me with is a washed out version of what should be the overlay color. I've tried numerous permutations of the BlendState settings, and played with the pixel shader math quite a bit, but to no avail. Can anyone point me in the right direction? Thanks in advance =)

  • Linux not buffering block I/O when the device is not "in use" (i.e. mounted)

    - by Radek Hladík
    I am installing new server and I've found an interesting issue. The server is running Fedora 19 (3.11.7-200.fc19.x86_64 kernel) and is supposed to host a few KVM/Qemu virtual servers (mail server, file server, etc..). The HW is Intel(R) Xeon(R) CPU 5160 @ 3.00GHz with 16GB RAM. One of the most important features will be Samba server and we have decided to make it as virtual machine with almost direct access to the disks. So the real HDD is cached on SSD (via bcache) then raided with md and the final device is exported into the virtual machine via virtio. The virtual machine is again Fedora 19 with the same kernel. One important topic to find out is whether the virtualization layer will not introduce high overload into disk I/Os. So far I've been able to get up to 180MB/s in VM and up to 220MB/s on real HW (on the SSD disk). I am still not sure why the overhead is so big but it is more than the network can handle so I do not care so much. The interesting thing is that I've found that the disk reads are not buffered in the VM unless I create and mount FS on the disk or I use the disks somehow. Simply put: Lets do dd to read disk for the first time (the /dev/vdd is an old Raptor disk 70MB/s is its real speed): [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 36.8038 s, 71.2 MB/s Buffers: 14444 kB Rereading the data shows that they are cached somewhere but not in buffers of the VM. Also the speed increased to "only" 500MB/s. The VM has 4GB of RAM (more that the test file) [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.16016 s, 508 MB/s Buffers: 14444 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.05727 s, 518 MB/s Buffers: 14444 kB Now lets mount the FS on /dev/vdd and try the dd again: [root@localhost ~]# mount /dev/vdd /mnt/tmp [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 4.68578 s, 559 MB/s Buffers: 2574592 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 1.50504 s, 1.7 GB/s Buffers: 2574592 kB While the first read was the same, all 2.6GB got buffered and the next read was at 1.7GB/s. And when I unmount the device: [root@localhost ~]# umount /mnt/tmp [root@localhost ~]# cat /proc/meminfo | grep Buffers Buffers: 14452 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.10499 s, 514 MB/s Buffers: 14468 kB The bcache was disabled while testing and the results are same on faster (newer) HDDs and on SSD (except for the initial read speed of course). To sum it up. When I read from the device via dd first time, it gets read from the disk. Next time I reread it gets cached in the host but not in the guest (thats actually the same issue, more on that later). When I mount the filesystem but try to read the device directly it gets cached in VM (via buffers). As soon as I stop "using" it, buffers are discarded and the device is not cached anymore in the VM. When I looked into buffers value on the host I realized that the situation is the same. The block I/O gets buffered only when the disk is in use, in this case it means "exported to a VM". 
On the host, after all the measurements were done: 3165552 buffers. On the host, after the VM shutdown: 119176 buffers. I know this is not important, as the disks will be mounted all the time, but I am curious and would like to know why it works this way.

  • UDP through NAT

    - by youllknow
    Hi everyone! I have two private networks (each of them behind a typical DSL router). The routers are connected to the WWW. The external interface of each router has one dynamic IP address. I want to stream data via UDP directly between one client in private network A and one client in private network B. I've already tried a lot of things (see: http://en.wikipedia.org/wiki/UDP_hole_punching, or STUN), but it wasn't possible for me to transfer data between the two clients. It's possible to use a server (located in the WWW, with a static IP) to transfer the external IPs (and external ports) of the routers between the clients. So imagine client A knows client B's external IP and client B's external port assigned by its router. I simply tried sending a UDP packet to the receiver's external IP/port combination, but without any result. So does anyone know what to do to communicate via UDP through the two NAT routers? It must be possible? Or does Skype, for example, not communicate directly between the clients when they call each other (voice over IP)? I am sorry for my bad English! If something is confusing, don't mind asking me! Thanks for your help in advance. ::::EDIT:::: I can't get pwnat or chownat working. I tried it with my own DSL gateway - it didn't work. Then I set up a complete virtual environment using VMware. C1 (Client 1, WinXP Prof SP3): 172.16.16.100/24, GW 172.16.16.1 C2 (Client 2, WinXP Prof SP3): 10.0.0.100/24, GW 10.0.0.1 C3 (Client 3, WinXP Prof SP3): 3.0.0.2/24, GW 3.0.0.1 S1 (Ubuntu 10.04 x64 Server): eth0: 172.16.16.1/24, eth1: 1.0.0.2/24 GW 1.0.0.1 S2 (Ubuntu 10.04 x64 Server): eth0: 10.0.0.1/24, eth1: 2.0.0.2/24 GW 2.0.0.1 S3 (Ubuntu 10.04 x64 Server): eth0: 1.0.0.1/24, eth1: 2.0.0.1/24, eth2: 3.0.0.1/24 +--+ +--+ +--+ +--+ +--+ |C1|-----|S1|-----|S3|-----|S2|-----|C2| +--+ +--+ +--+ +--+ +--+ | +--+ |C3| +--+ Servers S1 and S2 provide NAT functionality (they have routing enabled and provide a firewall, which allows traffic from the internal net and provides the NAT functionality). Server S3 has routing enabled. The client firewalls are turned off. C1 and C2 are able to ping C3 and, e.g., visit C3's webserver. They are also able to send UDP packets to C3 (C3 successfully receives them)! C1 and C2 also have webservers running for test reasons. I run "chownat -s 80 2.0.0.2" at C1, and "chownat -c 8000 1.0.0.2" at C2. Then I tried to access the webpage from C1 via web browser at localhost, port 8000. It didn't work. Can anybody help me? Any suggestions? If you have any questions about my question, please ask!

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed Debian 7 on a physical machine. This is the configuration of the machine: 3 hard drives using RAID 5, stripe element size: 1M, read policy: Adaptive read ahead, write policy: Write Through; /boot 200 MB ext2, / 15 GB ext3, SWAP 10GB, LVM rest (~500GB). I installed PostgreSQL and created a big database (over 1GB). I have an SQL request that takes a lot of time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds to run. Then, I installed Xen and created a domU with another Debian distro. On this OS, I also installed PostgreSQL, with the same database. The same SQL request takes only 2.5 seconds to run. I checked the kernel on both dom0 and domU. uname -a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same. For those that are relevant, I changed their values to make them match on both systems using sysctl. I saw no changes (the requests still take the same amount of time). After this, I checked the file systems. I used ext3 on domU. Still no changes. I installed hdparm and ran hdparm -Tt on all my partitions on both systems, and I get similar results. Now I am stuck: I don't know what is different, and what could be the cause of such a big difference. Additional info: Debian runs on a Dell PowerEdge 2950 server; postgresql: 9.1.9 (both dom0 and domU); xen-linux-system: 3.2.0; xen-hypervisor: 4.1. Thanks. EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both the read and write speed. Here is domU: root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB ^C2020+0 records in 2020+0 records out 2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB 2020+0 records in 2020+0 records out 2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s And here is dom0: root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB ^C1693+0 records in 1693+0 records out 1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB 1693+0 records in 1693+0 records out 1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s What can be the cause of this caching behavior? And how can we "fix" it? Can we apply it to dom0? EDIT 2: I switched my virtual disk type. To do so I followed this article. I did a dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M and then in /etc/xen/test1.cfg I changed the disk parameter to use file: instead of phy:. It should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres).
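    A note on the dd figures above (a hedged suggestion of mine, not part of the original post): without direct-I/O flags, dd largely measures the page cache rather than the disk, which is why dom0 reports 3.4 GB/s on the read-back. Standard GNU dd options can take the cache out of the comparison between dom0 and domU:

    ```sh
    # Write test that bypasses the page cache, then a read test that does the same.
    dd if=/dev/zero of=/root/dd bs=1M count=2000 oflag=direct conv=fsync
    dd if=/root/dd of=/dev/null bs=1M count=2000 iflag=direct
    ```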

  • dd oflag=direct 5x faster

    - by César
    I have CentOS 6.2 on a server with these specs: 2x CPU 16-core AMD Opteron 6282 SE, 64GB RAM; RAID controller H700 with 1GB NV cache - 2 HD 74GB SAS 15Krpm RAID1 stripe 16k (OS CentOS 6.2) sda - 4 HD 146GB SAS 15Krpm RAID10 stripe 16k (ext4 bs 4096, no barriers) sdb -> /vol01; RAID controller H800 with 1GB NV cache - MD1200 12 HD 300GB SAS 15Krpm RAID10 stripe 256k (for DB Postgres 8.3.18) (ext4 bs 4096, stride 64, stripe-width 384, no barriers) sdc -> /vol02. I'm benchmarking IO speed with dd, and I see that if I run this on the 12-disk RAID10: dd if=/dev/zero of=DD bs=8M count=10000 oflag=direct 10000+0 records in 10000+0 records out 83886080000 bytes (84 GB) copied, 126,03 s, 666 MB/s but if I remove the "oflag=direct" option I obtain about 80 MB/s. In the read benchmark, results are similar: dd of=/dev/null if=DD bs=8M count=10000 iflag=direct 10000+0 records in 10000+0 records out 83886080000 bytes (84 GB) copied, 79,5918 s, 1,1 GB/s If I remove iflag=direct I obtain 150MB/s... I don't understand these huge differences; on other machines I don't have this behavior. Could I have some kernel parameter misconfigured? Thanks!

  • Java invalid stream header Problem

    - by David zsl
    Hi all, im writen a client-server app, and now i´m facing a problem that I dont know how to solve: This is the client: try { Socket socket = new Socket(ip, port); ObjectOutputStream ooos = new ObjectOutputStream(socket .getOutputStream()); SendMessage message = new SendMessage(); message.numDoc = value.numDoc; message.docFreq = value.docFreq; message.queryTerms = query; message.startIndex = startIndex; message.count = count; message.multiple = false; message.ips = null; message.ports = null; message.value = true; message.docFreq = value.docFreq; message.numDoc = value.numDoc; ooos.writeObject(message); ObjectInputStream ois = new ObjectInputStream(socket .getInputStream()); ComConstants mensajeRecibido; Object mensajeAux; String mensa = null; byte[] by = null; do { mensajeAux = ois.readObject(); if (mensajeAux instanceof ComConstants) { System.out.println("Thread by Thread has Search Results"); String test; ByteArrayOutputStream testo = new ByteArrayOutputStream(); mensajeRecibido = (ComConstants) mensajeAux; byte[] wag; testo.write( mensajeRecibido.fileContent, 0, mensajeRecibido.okBytes); wag = testo.toByteArray(); if (by == null) { by = wag; } else { int size = wag.length; System.arraycopy(wag, 0, by, 0, size); } } else { System.err.println("Mensaje no esperado " + mensajeAux.getClass().getName()); break; } } while (!mensajeRecibido.lastMessage); //ByteArrayInputStream bs = new ByteArrayInputStream(by.toByteArray()); // bytes es el byte[] ByteArrayInputStream bs = new ByteArrayInputStream(by); ObjectInputStream is = new ObjectInputStream(bs); QueryWithResult[] unObjetoSerializable = (QueryWithResult[])is.readObject(); is.close(); //AQUI TOCARIA METER EL QUICKSORT XmlConverter xce = new XmlConverter(unObjetoSerializable, startIndex, count); String serializedd = xce.runConverter(); tempFinal = serializedd; ois.close(); socket.close(); } catch (Exception e) { e.printStackTrace(); } i++; } And this is the sender: try { QueryWithResult[] outputLine; Operations op = new Operations(); boolean enviadoUltimo=false; ComConstants mensaje = new ComConstants(); mensaje.queryTerms = query; outputLine = op.processInput(query, value); //String c = new String(); //c = outputLine.toString(); //StringBuffer swa = sw.getBuffer(); ByteArrayOutputStream bs= new ByteArrayOutputStream(); ObjectOutputStream os = new ObjectOutputStream (bs); os.writeObject(outputLine); os.close(); byte[] mybytearray = bs.toByteArray(); ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(mybytearray); BufferedInputStream bis = new BufferedInputStream(byteArrayInputStream); int readed = bis.read(mensaje.fileContent,0,4000); while (readed > -1) { mensaje.okBytes = readed; if (readed < ComConstants.MAX_LENGTH) { mensaje.lastMessage = true; enviadoUltimo=true; } else mensaje.lastMessage = false; oos.writeObject(mensaje); if (mensaje.lastMessage) break; mensaje = new ComConstants(); mensaje.queryTerms = query; readed = bis.read(mensaje.fileContent); } if (enviadoUltimo==false) { mensaje.lastMessage=true; mensaje.okBytes=0; oos.writeObject(mensaje); } oos.close(); } catch (Exception e) { e.printStackTrace(); } } And this is the error log: Thread by Thread has Search Results java.io.StreamCorruptedException: invalid stream header: 20646520 at java.io.ObjectInputStream.readStreamHeader(Unknown Source) at java.io.ObjectInputStream.<init>(Unknown Source) at org.tockit.comunication.ServerThread.enviaFicheroMultiple(ServerThread.java:747) at org.tockit.comunication.ServerThread.run(ServerThread.java:129) at 
java.lang.Thread.run(Unknown Source) where org.tockit.comunication.ServerThread.enviaFicheroMultiple(ServerThread.java:747) corresponds to the line ObjectInputStream is = new ObjectInputStream(bs); in the first code block, just after while (!mensajeRecibido.lastMessage);. Any ideas?

  • SATA errors reported during boot: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0

    - by digby280
    I have noticed some error during the Linux boot. They seem to continue to occur after the boot adding lines to the log every few seconds. Once booted this normally does not appear to be causing any problems. However, around 1 in 10 boots results in a kernel panic and the computer has on two or three occasions suddenly rebooted after being powered on for a number of hours. I presume the cause of the reboot is a kernel panic as well. I am running Ubuntu 11.10 and I have had Ubuntu installed on the computer for around a year. I have googled around and not found anything useful. I have provided the kernel log lines and the output of smartctl. Can anyone explain exactly what these errors mean, or better still how to resolve them? Apr 2 16:51:27 dell580 kernel: [ 19.831140] EXT4-fs (sdb2): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0 Apr 2 16:51:27 dell580 kernel: [ 19.934194] tg3 0000:03:00.0: eth0: Link is down Apr 2 16:51:28 dell580 kernel: [ 20.929468] tg3 0000:03:00.0: eth0: Link is up at 100 Mbps, full duplex Apr 2 16:51:28 dell580 kernel: [ 20.929471] tg3 0000:03:00.0: eth0: Flow control is on for TX and on for RX Apr 2 16:51:28 dell580 kernel: [ 20.929727] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 2 16:51:29 dell580 kernel: [ 21.609381] EXT4-fs (sdb2): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0 Apr 2 16:51:29 dell580 kernel: [ 21.616515] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:29 dell580 kernel: [ 21.616519] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:29 dell580 kernel: [ 21.616525] ata2.00: hard resetting link Apr 2 16:51:29 dell580 kernel: [ 21.934036] ata2.01: hard resetting link Apr 2 16:51:29 dell580 kernel: [ 22.408890] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:29 dell580 kernel: [ 22.408907] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:29 dell580 kernel: [ 22.440934] ata2.00: configured for UDMA/100 Apr 2 16:51:29 dell580 kernel: [ 22.449040] ata2.01: configured for UDMA/133 Apr 2 16:51:29 dell580 kernel: [ 22.449818] ata2: EH complete Apr 2 16:51:33 dell580 kernel: [ 26.122664] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:33 dell580 kernel: [ 26.122670] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:33 dell580 kernel: [ 26.122677] ata2.00: hard resetting link Apr 2 16:51:33 dell580 kernel: [ 26.442684] ata2.01: hard resetting link Apr 2 16:51:34 dell580 kernel: [ 26.925545] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:34 dell580 kernel: [ 26.925561] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:34 dell580 kernel: [ 26.961542] ata2.00: configured for UDMA/100 Apr 2 16:51:34 dell580 kernel: [ 26.969616] ata2.01: configured for UDMA/133 Apr 2 16:51:34 dell580 kernel: [ 26.970400] ata2: EH complete Apr 2 16:51:35 dell580 kernel: [ 28.111180] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:35 dell580 kernel: [ 28.111184] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:35 dell580 kernel: [ 28.111191] ata2.00: hard resetting link Apr 2 16:51:35 dell580 kernel: [ 28.429674] ata2.01: hard resetting link Apr 2 16:51:36 dell580 kernel: [ 28.904557] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:36 dell580 kernel: [ 28.904572] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:36 dell580 kernel: [ 28.936609] ata2.00: configured for UDMA/100 Apr 2 16:51:36 dell580 kernel: [ 28.944692] ata2.01: configured for UDMA/133 
Apr 2 16:51:36 dell580 kernel: [ 28.945464] ata2: EH complete Apr 2 16:51:38 dell580 kernel: [ 31.581756] eth0: no IPv6 routers present Apr 2 16:51:38 dell580 kernel: [ 32.103066] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:38 dell580 kernel: [ 32.103074] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:38 dell580 kernel: [ 32.103085] ata2.00: hard resetting link Apr 2 16:51:38 dell580 kernel: [ 32.419669] ata2.01: hard resetting link Apr 2 16:51:39 dell580 kernel: [ 32.894518] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:39 dell580 kernel: [ 32.894533] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:39 dell580 kernel: [ 32.926536] ata2.00: configured for UDMA/100 Apr 2 16:51:39 dell580 kernel: [ 32.934715] ata2.01: configured for UDMA/133 Apr 2 16:51:39 dell580 kernel: [ 32.935578] ata2: EH complete Here's the output of smartctl for the drive. smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.0.0-17-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: SAMSUNG SpinPoint F1 DT Device Model: SAMSUNG HD103UJ Serial Number: S13PJ90QC19706 LU WWN Device Id: 5 0000f0 00b1c7960 Firmware Version: 1AA01113 User Capacity: 1,000,204,886,016 bytes [1.00 TB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 3b Local Time is: Mon Apr 2 17:13:48 2012 BST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled. Self-test execution status: ( 41) The self-test routine was interrupted by the host with a hard or soft reset. Total time to complete Offline data collection: (11772) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 197) minutes. Conveyance self-test routine recommended polling time: ( 21) minutes. SCT capabilities: (0x003f) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 100 100 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0007 076 076 011 Pre-fail Always - 7940 4 Start_Stop_Count 0x0032 099 099 000 Old_age Always - 521 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 253 253 051 Pre-fail Always - 0 8 Seek_Time_Performance 0x0025 100 100 015 Pre-fail Offline - 0 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 642 10 Spin_Retry_Count 0x0033 100 100 051 Pre-fail Always - 0 11 Calibration_Retry_Count 0x0012 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 482 13 Read_Soft_Error_Rate 0x000e 100 100 000 Old_age Always - 0 183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 759 184 End-to-End_Error 0x0033 100 100 000 Pre-fail Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 073 069 000 Old_age Always - 27 (Min/Max 16/27) 194 Temperature_Celsius 0x0022 073 067 000 Old_age Always - 27 (Min/Max 16/28) 195 Hardware_ECC_Recovered 0x001a 100 100 000 Old_age Always - 320028 196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 099 099 000 Old_age Always - 1494 200 Multi_Zone_Error_Rate 0x000a 100 100 000 Old_age Always - 0 201 Soft_Read_Error_Rate 0x000a 253 253 000 Old_age Always - 0 SMART Error Log Version: 1 ATA Error Count: 211 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 211 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 0f 31 63 8f e1 Error: ICRC, ABRT 15 sectors at LBA = 0x018f6331 = 26174257 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 40 62 8f e1 08 00:01:00.460 READ DMA c8 00 20 00 7c 30 e0 08 00:01:00.450 READ DMA c8 00 00 10 49 8f e1 08 00:01:00.440 READ DMA c8 00 e0 20 d0 30 e0 08 00:01:00.420 READ DMA c8 00 00 c0 59 90 e1 08 00:01:00.400 READ DMA Error 210 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 cf e9 cf 66 e0 Error: ICRC, ABRT 207 sectors at LBA = 0x0066cfe9 = 6737897 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 b8 cf 66 e0 08 00:08:29.780 READ DMA c8 00 60 60 c9 18 e0 08 00:08:29.770 READ DMA c8 00 40 20 c9 18 e0 08 00:08:29.770 READ DMA c8 00 20 00 c9 18 e0 08 00:08:29.760 READ DMA c8 00 20 98 cf 66 e0 08 00:08:29.750 READ DMA Error 209 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 2f d1 74 e0 e0 Error: ICRC, ABRT 47 sectors at LBA = 0x00e074d1 = 14709969 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 00 74 e0 e0 08 00:00:30.940 READ DMA c8 00 20 18 36 de e0 08 00:00:30.930 READ DMA c8 00 08 48 f1 dd e0 08 00:00:30.930 READ DMA c8 00 08 a8 f0 dd e0 08 00:00:30.930 READ DMA c8 00 08 90 f0 dd e0 08 00:00:30.930 READ DMA Error 208 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 7f 21 88 9d e0 Error: ICRC, ABRT 127 sectors at LBA = 0x009d8821 = 10324001 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 a0 00 88 9d e0 08 00:00:27.610 READ DMA c8 00 58 a8 e7 9c e0 08 00:00:27.610 READ DMA c8 00 00 28 e6 9c e0 08 00:00:27.610 READ DMA c8 00 00 e0 e4 9c e0 08 00:00:27.610 READ DMA c8 00 00 90 e0 9c e0 08 00:00:27.600 READ DMA Error 207 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 04 51 26 6a 6a c3 e0 Error: ABRT at LBA = 0x00c36a6a = 12806762 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- ca 00 00 90 69 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 90 68 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 50 65 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 d0 64 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 90 63 c3 e0 08 00:29:39.350 WRITE DMA SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Interrupted (host reset) 90% 638 - # 2 Short offline Interrupted (host reset) 90% 638 - # 3 Extended offline Interrupted (host reset) 90% 638 - # 4 Short offline Interrupted (host reset) 90% 638 - # 5 Extended offline Interrupted (host reset) 90% 638 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. 
If Selective self-test is pending on power-up, resume after 0 minute delay.

  • SQL DB design to support user feeds (in application like facebook)

    - by Yoav
    I have a social network server with a MySQL DB. I want to show users feeds like Facebook does. Example - UserX is now friends with UserY, UserX liked PostX, etc. Currently I have this table: C1 : UserId C2 : LogType (now friend, did like, etc.) C3 : ObjectId (can be a userId or a postId) - set depending on the LogType. Currently, to get all the related logs to show to the user, I do the following: 1. Get all the user's friends' userIds. 2. Query all rows whose C1 is in userIds (one query). 3. Scan the DB and check - if LogType equals DidLike, check whether the post's OwnerId is the userId - if yes, add it to the logs. And so on. Obviously this is not efficient at all. I am looking for a better way. Here is what I had in mind: create a new table (in addition to the Log table): C1 : UserId C2 : LogId (from the Log table) C3 : UserId of the one who did the action. When querying logs, look in this table and fetch the related logs (by LogId) from the Log table. Updating the table: whenever a user does an action that should be in the log: 1. Add the log entry to the Log table. 2. Scan the DB to see which users are interested in the log (who my friends are, who the owner of the post is) and add related entries to the new table (must be done in the background). 3. If a user UNFRIENDs another user, then look in the logs for all rows where C3 == the unfriended user's id and delete them. Any opinions? Other suggestions?
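    A rough SQL sketch of the fan-out table described above (table and column names are invented for illustration, not taken from the post). The idea is that each log entry is written once per interested user, so reading a feed becomes a single indexed lookup and unfriending becomes a targeted delete:

    ```sql
    -- One row per (feed owner, log entry); written when the action happens.
    CREATE TABLE user_feed (
        user_id  INT NOT NULL,   -- the user whose feed shows this entry
        log_id   INT NOT NULL,   -- reference to the Log table
        actor_id INT NOT NULL,   -- the user who performed the action (for unfriend cleanup)
        PRIMARY KEY (user_id, log_id),
        KEY idx_owner_actor (user_id, actor_id)
    );

    -- Reading a feed is one indexed join:
    SELECT l.*
    FROM user_feed f
    JOIN log_table l ON l.log_id = f.log_id
    WHERE f.user_id = ?
    ORDER BY f.log_id DESC
    LIMIT 50;

    -- Unfriending removes that user's actions from this feed:
    DELETE FROM user_feed WHERE user_id = ? AND actor_id = ?;
    ```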

  • How can i merge arrays like a gearwheel

    - by JuKe
    I have 3 arrays like this: $a = array( 0 => 'a1', 1 => 'a2', 2 => 'a3' ); $b = array( 0 => 'b1', 1 => 'b2', 2 => 'b3' ); $c = array( 0 => 'c1', 1 => 'c2', 2 => 'c3' ); and I would like to get something like this: $r = array( 0 => 'a1', 1 => 'b1', 2 => 'c1', 3 => 'a2', 4 => 'b2', 5 => 'c2', 6 => 'a3', .... ... ); How can I do this, with the option of using more than 3 arrays? EDIT: I have tried this: $a = array( 0 => 'a1', 1 => 'a2', 2 => 'a3', 3 => 'a4' ); $b = array( 0 => 'b1', 1 => 'b2', 2 => 'b3' ); $c = array( 0 => 'c1', 1 => 'c2', 2 => 'c3', 3 => 'c3', 4 => 'c3', 5 => 'c3' ); $l['a'] = count($a); $l['b'] = count($b); $l['c'] = count($c); arsort($l); $largest = key($l); $result = array(); foreach ($$largest as $key => $value) { $result[] = $a[$key]; if(array_key_exists($key, $b)) $result[] = $b[$key]; if(array_key_exists($key, $c)) $result[] = $c[$key]; } print_r($result); This works, but the code isn't nice. Does anyone have a better solution?
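    One tidier variant of the same interleaving idea (a sketch of my own, not from the post; it assumes the inputs are plain zero-indexed arrays as in the example and requires PHP 5.6+ for variadic arguments):

    ```php
    <?php
    // Interleave any number of arrays "gearwheel style", skipping arrays that
    // have already run out of elements at a given position.
    function interleave(array ...$arrays)
    {
        $result = array();
        $max = max(array_map('count', $arrays));
        for ($i = 0; $i < $max; $i++) {
            foreach ($arrays as $array) {
                if (array_key_exists($i, $array)) {
                    $result[] = $array[$i];
                }
            }
        }
        return $result;
    }

    print_r(interleave($a, $b, $c));   // a1, b1, c1, a2, b2, c2, a3, ...
    ```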

  • Split a binary file into chunks c++

    - by L4nce0
    I've been bashing my head against trying to first divide up a file into chunks, for the purpose of sending over sockets. I can read / write a file easily without splitting it into chunks. The code below runs, works, kinda. It will write a textfile and has a garbage character. Which if this was just for txt, no problem. Jpegs aren't working with said garbage. Been at it for a few days, so I've done my research, and it's time to get some help. I do want to stick strictly to binary readers, as this need to handle any file. I've seen a lot of slick examples out there. (none of them worked for me with jpgs) Mostly something along the lines of while(file)... I subscribe to the, if you know the size, use a for-loop, not a while-loop camp. Thank you for the help!! vector<char*> readFile(const char* fn){ vector<char*> v; ifstream::pos_type size; char * memblock; ifstream file; file.open(fn,ios::in|ios::binary|ios::ate); if (file.is_open()) { size = fileS(fn); file.seekg (0, ios::beg); int bs = size/3; // arbitrary. Actual program will use the socket send size int ws = 0; int i = 0; for(i = 0; i < size; i+=bs){ if(i+bs > size) ws = size%bs; else ws = bs; memblock = new char [ws]; file.read (memblock, ws); v.push_back(memblock); } } else{ exit(-4); } return v; } int main(int argc, char **argv) { vector<char*> v = readFile("foo.txt"); ofstream myFile ("bar.txt", ios::out | ios::binary); for(vector<char*>::iterator it = v.begin(); it!=v.end(); ++it ){ myFile.write(*it,strlen(*it)); } }
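    A small sketch of the chunking part (my own, under the assumption that keeping each chunk's real length alongside its data is acceptable). The key point it illustrates is avoiding strlen() on binary data: strlen() stops at the first zero byte, which is why JPEGs break while plain text appears to survive. Writing the chunks back out would then use each chunk's .size() instead of strlen().

    ```cpp
    #include <algorithm>
    #include <fstream>
    #include <utility>
    #include <vector>

    // Read a file into fixed-size chunks, remembering each chunk's real length.
    std::vector<std::vector<char>> readChunks(const char* fn, std::size_t chunkSize)
    {
        std::ifstream file(fn, std::ios::in | std::ios::binary | std::ios::ate);
        std::vector<std::vector<char>> chunks;
        if (!file.is_open())
            return chunks;

        const std::streamoff size = file.tellg();   // opened with ios::ate, so this is the file size
        file.seekg(0, std::ios::beg);

        for (std::streamoff offset = 0; offset < size; offset += static_cast<std::streamoff>(chunkSize)) {
            const std::streamoff want =
                std::min<std::streamoff>(static_cast<std::streamoff>(chunkSize), size - offset);
            std::vector<char> buf(static_cast<std::size_t>(want));
            file.read(buf.data(), static_cast<std::streamsize>(want));
            chunks.push_back(std::move(buf));       // the size travels with the data
        }
        return chunks;
    }
    ```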

  • Device being used by VxVM

    - by Onur Bingul
    If you are using vxvm, you may have issues when you try to unconfigure a disk
    root@techsupport2 # cfgadm -c unconfigure c1::dsk/c1t3d0
    cfgadm: Component system is busy, try again: failed to offline:
        Resource           Information
        ----------------   -------------------------
        /dev/dsk/c1t3d0    Device being used by VxVM
    “cfgadm unconfigure” command fails here. The way to resolve this is to disable the disks path from DMP control. Since there is only one path to this disk, the “-f” (for force) option needs to be used:
    root@techsupport2 # vxdmpadm -f disable path=c1t3d0s2
    root@techsupport2 # vxdmpadm getsubpaths
    NAME         STATE[A]      PATH-TYPE[M]  DMPNODENAME  ENCLR-NAME   CTLR   ATTRS
    ================================================================================
    c1t6d0       ENABLED(A)    -             disk_0       disk         c1     -
    c1t3d0       DISABLED(M)   -             disk_1       disk         c1     -
    c1t0d0s2     ENABLED(A)    -             disk_2       disk         c1     -
    c1t1d0       ENABLED(A)    -             disk_3       disk         c1     -
    c3t47d0      ENABLED(A)    -             sun35100_0   sun35100     c3     -
    c3t47d1      ENABLED(A)    -             sun35100_1   sun35100     c3     -
    c3t47d2s2    ENABLED(A)    -             sun35100_2   sun35100     c3     -
    c3t47d3s2    ENABLED(A)    -             sun35100_3   sun35100     c3     -
    You can see the path now disabled from DMP.
    root@techsupport2 # cfgadm -c unconfigure c1::dsk/c1t3d0
    Now you can unconfigure the disk

  • Why can't Perl's DBD::DB2 find dbivport.h during installation?

    - by Liju Mathew
    We are using a Perl utility to dump data from DB2 database. We installed DBI package and it is asking for DBD package also. We dont have root access and when we try to install DBD package we are getting the following error: ERROR BUILDING DB2.pm [lijumathew@intblade03 DBD-DB2-1.78]$ make make[1]: Entering directory '/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants' gcc -c -I"/db2/db2tf1/sqllib/include" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" Constants.c Running Mkbootstrap for DBD::DB2::Constants () chmod 644 Constants.bs rm -f ../blib/arch/auto/DBD/DB2/Constants/Constants.so gcc -shared -L/usr/local/lib Constants.o -o ../blib/arch/auto/DBD/DB2/Constants/Constants.so chmod 755 ../blib/arch/auto/DBD/DB2/Constants/Constants.so cp Constants.bs ../blib/arch/auto/DBD/DB2/Constants/Constants.bs chmod 644 ../blib/arch/auto/DBD/DB2/Constants/Constants.bs make[1]: Leaving directory `/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants' gcc -c -I"/db2/db2tf1/sqllib/include" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" DB2.c In file included from DB2.h:22, from DB2.xs:7: dbdimp.h:10:22: dbivport.h: No such file or directory make: *** [DB2.o] Error 1 How do we fix this? Do we need root access to resolve this?

  • DBI::DBD package not getting installed for Perl?

    - by Liju Mathew
    Hi, We are using a Perl utility to dump data from DB2 database. We installed DBI package and it is asking for DBD package also. We dont have root access and when we try to install DBD package we are getting the following error. ERROR BUILDING DB2.pm [lijumathew@intblade03 DBD-DB2-1.78]$ make make[1]: Entering directory '/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants' gcc -c -I"/db2/db2tf1/sqllib/include" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" Constants.c Running Mkbootstrap for DBD::DB2::Constants () chmod 644 Constants.bs rm -f ../blib/arch/auto/DBD/DB2/Constants/Constants.so gcc -shared -L/usr/local/lib Constants.o -o ../blib/arch/auto/DBD/DB2/Constants/Constants.so chmod 755 ../blib/arch/auto/DBD/DB2/Constants/Constants.so cp Constants.bs ../blib/arch/auto/DBD/DB2/Constants/Constants.bs chmod 644 ../blib/arch/auto/DBD/DB2/Constants/Constants.bs make[1]: Leaving directory `/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants' gcc -c -I"/db2/db2tf1/sqllib/include" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" DB2.c In file included from DB2.h:22, from DB2.xs:7: dbdimp.h:10:22: dbivport.h: No such file or directory make: *** [DB2.o] Error 1 How to fix this? Do we need root access to resolve this? Appreciate the help in advance. Thanks, Mathew Liju

  • Prevent deferred creation of controls.

    - by Scott Chamberlain
    Here is a test framework to show what I am doing, just create a new project add a tabbed control, on tab 1 put a button on tab 2 put a check box (default names) and paste this code for its code public partial class Form1 : Form { private List<bool> boolList = new List<bool>(); BindingSource bs = new BindingSource(); public Form1() { InitializeComponent(); boolList.Add(false); bs.DataSource = boolList; checkBox1.DataBindings.Add("Checked", bs, ""); } bool updating = false; private void button1_Click(object sender, EventArgs e) { updating = true; boolList[0] = true; bs.ResetBindings(false); Application.DoEvents(); updating = false; } private void checkBox1_CheckedChanged(object sender, EventArgs e) { if (!updating) MessageBox.Show("CheckChanged fired outside of updating"); } } The issue is if you run the program and look at tab 2 then press the button on tab 1 the program works as expected, however if you press the button on tab 1 then look at tab 2 the event for the checkbox will not fire untill you look at tab 2. The reason for this is the controll on tab 2 is not in the "created" state, so its binding to change the checkbox from unchecked to checked does not happen until after the control has been "Created". checkbox1.CreateControl() does not do anything because according to MSDN CreateControl does not create a control handle if the control's Visible property is false. You can either call the CreateHandle method or access the Handle property to create the control's handle regardless of the control's visibility, but in this case, no window handles are created for the control's children. I tried getting the value of Handle(there is no CreateHandle for Button) but still the same result. Any suggestions other than have the program quickly flash all of my tabs that have data-bound check boxes when it first loads?

  • sql query - how to apply limit within group by

    - by Raj
    Hey guys, assuming I have a table named t1 with the following fields: ROWID, CID, PID, Score, SortKey, and it has the following data: 1, C1, P1, 10, 1 2, C1, P2, 20, 2 3, C1, P3, 30, 3 4, C2, P4, 20, 3 5, C2, P5, 30, 2 6, C3, P6, 10, 1 7, C3, P7, 20, 2 What query do I write so that it applies a GROUP BY on CID, but instead of returning a single result per group, it returns a maximum of 2 results per group? Also, the WHERE condition is score >= 20, and I want the results ordered by CID and SortKey. If I ran my query on the above data, I would expect the following result: RESULTS FOR C1 - note: ROWID 1 is not considered as its score < 20 C1, P2, 20, 2 C1, P3, 30, 3 RESULTS FOR C2 - note: ROWID 5 appears before ROWID 4 as ROWID 5 has a lower SortKey value C2, P5, 30, 2 C2, P4, 20, 3 RESULTS FOR C3 - note: ROWID 6 does not appear as its score is less than 20, so only 1 record is returned here C3, P7, 20, 2 IN SHORT, I WANT A LIMIT WITHIN A GROUP BY. I want the simplest solution and want to avoid temp tables; subqueries are fine. Also note I am using SQLite for this.
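    One common way to express "limit within a group" is a ranking subquery (a sketch of mine, not from the post; it assumes an engine with window functions, which for SQLite means version 3.25 or newer, and keeps the poster's column names):

    ```sql
    -- Rank the qualifying rows inside each CID by SortKey, then keep at most two per group.
    SELECT CID, PID, Score, SortKey
    FROM (
        SELECT t1.*,
               ROW_NUMBER() OVER (PARTITION BY CID ORDER BY SortKey) AS rn
        FROM t1
        WHERE Score >= 20
    ) ranked
    WHERE rn <= 2
    ORDER BY CID, SortKey;
    ```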

  • .htaccess URL rewriting issue: how to distinguish listing and detail pages

    - by Asad kamran
    I am developing a commerce site where users can post items in any category (categories can be 2 to 4 levels deep). I want to generate URLs for listing and detail pages: the listing page will show the list of items in an inner category, and the detail page will show all information for an item in an inner category (inner category means the last category in the hierarchy, i.e. in classified/autos4x4s/mitsubishi/lancer/ the inner category is "lancer"). Here are the links I want to generate: 1) www.example.com/classified/autos4x4s/mitsubishi/lancer/ (for listing) 2) www.example.com/classified/autos4x4s/mitsubishi/lancer/2011/3/12/lancer-2002-in-good-condition-14/ (for detail) I want to redirect to ads.php if just 4 categories exist in the URL, and to detail.php if 6 items are passed (4 category names + date and title). I wrote these rules: Listing ads: RewriteRule ^(.*)/(.*)/(.*)/(.*)/?$ ads.php?c1=$1&c2=$2&c3=$3&c4=$4 [NC,L] Detail pages: RewriteRule ^(.*)/(.*)/(.*)/(.*)/(.*)/(.*)/?$ detail.php?c1=$1&c2=$2&c3=$3&c4=$4&dt=$5&at=$6 [NC,L] But all the site's pages redirect to ads.php (the listing page), even the home page. I changed the rules as follows (even though I do not want to use Listing and Detail at the start of the URL; I want URLs like the ones I see on some sites, e.g. dubai.dubizzle.com/classified/autos4x4s/mitsubishi/lancer/2011/3/12/lancer-2002-in-good-condition-14/): Listing pages: RewriteRule ^Listing/(.*)/(.*)/(.*)/(.*)/?$ ads.php?c1=$1&c2=$2&c3=$3&c4=$4 [NC,L] Detail pages: RewriteRule ^Detail/(.*)/(.*)/(.*)/(.*)/(.*)/(.*)/?$ detail.php?c1=$1&c2=$2&c3=$3&c4=$4&dt=$5&at=$6 [NC,L] Now all other pages are fine, but when I request www.example.com/classified/autos4x4s/mitsubishi/lancer/2011/3/12/lancer-2002-in-good-condition-14/ it always goes to the listing page (ads.php), not the detail page. Any help would be appreciated.

  • TFS How does merging work?

    - by Johannes Rudolph
    I have a release branch (RB, starting at C5) and a changeset on trunk (C10) that I now want to merge onto RB. The file has changes at C3 (common to both), one at C7 on RB, and one each at C9 and C10 on trunk. So the history for my changed file looks like this: RB: C5 -> C7 Trunk: C3 -> C9 -> C10 When I merge C10 from trunk to RB, I'd expect to see a merge window showing me C10 | C3 | C7, since C3 is the common ancestor revision and C10 and C7 are the tips of my two branches respectively. However, my merge tool shows me C10 | C9 | C7. My merge tool is configured to show %1(OriginalFile)|%3(BaseFile)|%2(Modified File), so this tells me TFS chose C9 as the base revision. This is totally unexpected and completely contrary to the way I'm used to merges working in Mercurial or Git. Did I get something wrong, or is TFS trying to drive me nuts with merging? Is this the default TFS merge behavior? If so, can you provide insight into why they chose to implement it this way? I'm using TFS 2008 with VS2010 as a client.

  • Sequential access to asynchronous sockets

    - by Lars A. Brekken
    I have a server that has several clients C1...Cn to each of which there is a TCP connection established. There are less than 10,000 clients. The message protocol is request/response based, where the server sends a request to a client and then the client sends a response. The server has several threads, T1...Tm, and each of these may send requests to any of the clients. I want to make sure that only one of these threads can send a request to a specific client at any one time, while the other threads wanting to send a request to the same client will have to wait. I do not want to block threads from sending requests to different clients at the same time. E.g. If T1 is sending a request to C3, another thread T2 should not be able to send anything to C3 until T1 has received its response. I was thinking of using a simple lock statement on the socket: lock (c3Socket) { // Send request to C3 // Get response from C3 } I am using asynchronous sockets, so I may have to use Monitor instead: Monitor.Enter(c3Socket); // Before calling .BeginReceive() And Monitor.Exit(c3Socket); // In .EndReceive I am worried about stuff going wrong and not letting go of the monitor and therefore blocking all access to a client. I'm thinking that my heartbeat thread could use Monitor.TryEnter() with a timeout and throw out sockets that it cannot get the monitor for. Would it make sense for me to make the Begin and End calls synchronous in order to be able to use the lock() statement? I know that I would be sacrificing concurrency for simplicity in this case, but it may be worth it. Am I overlooking anything here? Any input appreciated.

  • Modified map2 (without truncation of lists) in F# - how to do it idiomatically?

    - by Maciej Piechotka
    I'd like to rewrite such function into F#: zipWith' :: (a -> b -> c) -> (a -> c) -> (b -> c) -> [a] -> [b] -> [c] zipWith' _ _ h [] bs = h `map` bs zipWith' _ g _ as [] = g `map` as zipWith' f g h (a:as) (b:bs) = f a b:zipWith f g h as bs My first attempt was: let inline private map2' (xs : seq<'T>) (ys : seq<'U>) (f : 'T -> 'U -> 'S) (g : 'T -> 'S) (h : 'U -> 'S) = let xenum = xs.GetEnumerator() let yenum = ys.GetEnumerator() seq { let rec rest (zenum : IEnumerator<'A>) (i : 'A -> 'S) = seq { yield i(zenum.Current) if zenum.MoveNext() then yield! (rest zenum i) else zenum.Dispose() } let rec merge () = seq { if xenum.MoveNext() then if yenum.MoveNext() then yield (f xenum.Current yenum.Current); yield! (merge ()) else yenum.Dispose(); yield! (rest xenum g) else xenum.Dispose() if yenum.MoveNext() then yield! (rest yenum h) else yenum.Dispose() } yield! (merge ()) } However it can hardly be considered idiomatic. I heard about LazyList but I cannot find it anywhere.
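    For comparison, a sketch of a direct list-based translation (my own, under the assumption that F# lists rather than seqs are acceptable), which stays very close to the Haskell original:

    ```fsharp
    // zipWith' f g h: combine elements pairwise with f, and map the leftover tail
    // of either list with g (left) or h (right) instead of truncating it.
    let rec zipWith' f g h xs ys =
        match xs, ys with
        | [], _            -> List.map h ys
        | _, []            -> List.map g xs
        | x :: xt, y :: yt -> f x y :: zipWith' f g h xt yt
    ```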

  • Determine target architecture of binary file in Linux (library or executable)

    - by Fernando Miguélez
    We have an issue related to a Java application running under a (rather old) FC3 on an Advantech POS board with a Via C3 processor. The Java application has several compiled shared libs that are accessed via JNI. The Via C3 processor is supposed to be i686 compatible. Some time ago, after installing Ubuntu 6.10 on a Mini-ITX board with the same processor, I found out that the previous statement is not 100% true. The Ubuntu kernel hung on startup due to the lack of some specific, optional instructions of the i686 set in the C3 processor. These instructions, missing from the C3 implementation of the i686 set, are used by default by the GCC compiler when i686 optimizations are enabled. The solution in this case was to go with an i386-compiled version of the Ubuntu distribution. The base problem with the Java application is that the FC3 distribution was installed on the HD by cloning from an image of the HD of another PC, this time an Intel P4. Afterwards the distribution needed some hacking to get it running, such as replacing some packages (such as the kernel one) with the i386-compiled versions. The problem is that after working for a while the system completely hangs without a trace. I am afraid that some i686 code is left somewhere in the system and could be executed randomly at any time (for example after recovering from suspend mode or something like that). My question is: is there any tool or way to find out which specific architecture a binary file (executable or library) targets, given that "file" does not give much information?

  • Dereferencing possible null pointer in java

    - by Nealio
    I am just starting to get into graphics and when I am trying to get the graphics, I get the error"Exception in thread "Thread-2" java.lang.NullPointerException" and I have no clue on what is going on! Any help is greatly appreciated. //The display class for the game //Crated: 10-30-2013 //Last Modified: 10-30-2013 package gamedev; import gamedev.Graphics.Render; import gamedev.Graphics.Screen; import java.awt.Canvas; import java.awt.Dimension; import java.awt.Graphics; import java.awt.Toolkit; import java.awt.image.BufferStrategy; import java.awt.image.BufferedImage; import java.awt.image.DataBufferInt; import javax.swing.JFrame; private void tick() { } private void render() { System.out.println("display.render"); BufferStrategy bs = this.getBufferStrategy(); if (bs == null) { createBufferStrategy(3); } for (int i = 0; i < GAMEWIDTH * GAMEHEIGHT; i++) { pixels[i] = screen.PIXELS[i]; } screen.Render(); //The line of code that is the problem Graphics g = bs.getDrawGraphics(); //end problematic code g.drawImage(img, 0, 0, GAMEWIDTH, GAMEHEIGHT, null); g.dispose(); bs.show(); }
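    For context, here is a sketch of the guard that is commonly used with Canvas.getBufferStrategy() (my own illustration, not the poster's code): right after createBufferStrategy(3) the local bs variable is still null, so the frame has to be skipped (or the strategy fetched again) before getDrawGraphics() is called, otherwise the call dereferences null exactly as in the stack trace above.

    ```java
    import java.awt.Canvas;
    import java.awt.Graphics;
    import java.awt.image.BufferStrategy;

    // Minimal Canvas subclass showing the usual BufferStrategy guard.
    public class GameCanvas extends Canvas {
        public void render() {
            BufferStrategy bs = getBufferStrategy();
            if (bs == null) {
                createBufferStrategy(3);   // the strategy exists on the next call
                return;                    // bs is still null in this frame, so skip it
            }
            Graphics g = bs.getDrawGraphics();
            try {
                // ... draw the frame into g here ...
            } finally {
                g.dispose();               // always release the graphics context
            }
            bs.show();
        }
    }
    ```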

  • Basics of Join Factorization

    - by Hong Su
    We continue our series on optimizer transformations with a post that describes the Join Factorization transformation. The Join Factorization transformation was introduced in Oracle 11g Release 2 and applies to UNION ALL queries. Union all queries are commonly used in database applications, especially in data integration applications. In many scenarios the branches in a UNION All query share a common processing, i.e, refer to the same tables. In the current Oracle execution strategy, each branch of a UNION ALL query is evaluated independently, which leads to repetitive processing, including data access and join. The join factorization transformation offers an opportunity to share the common computations across the UNION ALL branches. Currently, join factorization only factorizes common references to base tables only, i.e, not views. Consider a simple example of query Q1. Q1:    select t1.c1, t2.c2    from t1, t2, t3    where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2   union all    select t1.c1, t2.c2    from t1, t2, t4    where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3; Table t1 appears in both the branches. As does the filter predicates on t1 (t1.c1 > 1) and the join predicates involving t1 (t1.c1 = t2.c1). Nevertheless, without any transformation, the scan (and the filtering) on t1 has to be done twice, once per branch. Such a query may benefit from join factorization which can transform Q1 into Q2 as follows: Q2:    select t1.c1, VW_JF_1.item_2    from t1, (select t2.c1 item_1, t2.c2 item_2                   from t2, t3                    where t2.c2 = t3.c2 and t2.c2 = 2                                  union all                   select t2.c1 item_1, t2.c2 item_2                   from t2, t4                    where t2.c3 = t4.c3) VW_JF_1    where t1.c1 = VW_JF_1.item_1 and t1.c1 > 1; In Q2, t1 is "factorized" and thus the table scan and the filtering on t1 is done only once (it's shared). If t1 is large, then avoiding one extra scan of t1 can lead to a huge performance improvement. Another benefit of join factorization is that it can open up more join orders. Let's look at query Q3. Q3:    select *    from t5, (select t1.c1, t2.c2                  from t1, t2, t3                  where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2                 union all                  select t1.c1, t2.c2                  from t1, t2, t4                  where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3) V;   where t5.c1 = V.c1 In Q3, view V is same as Q1. Before join factorization, t1, t2 and t3 must be joined first before they can be joined with t5. But if join factorization factorizes t1 from view V, t1 can then be joined with t5. This opens up new join orders. That being said, join factorization imposes certain join orders. For example, in Q2, t2 and t3 appear in the first branch of the UNION ALL query in view VW_JF_1. T2 must be joined with t3 before it can be joined with t1 which is outside of the VW_JF_1 view. The imposed join order may not necessarily be the best join order. For this reason, join factorization is performed under cost-based transformation framework; this means that we cost the plans with and without join factorization and choose the cheapest plan. Note that if the branches in UNION ALL have DISTINCT clauses, join factorization is not valid. For example, Q4 is NOT semantically equivalent to Q5.   
Q4:     select distinct t1.*      from t1, t2      where t1.c1 = t2.c1  union all      select distinct t1.*      from t1, t2      where t1.c1 = t2.c1 Q5:    select distinct t1.*     from t1, (select t2.c1 item_1                   from t2                union all                   select t2.c1 item_1                  from t2) VW_JF_1     where t1.c1 = VW_JF_1.item_1 Q4 might return more rows than Q5. Q5's results are guaranteed to be duplicate free because of the DISTINCT key word at the top level while Q4's results might contain duplicates.   The examples given so far involve inner joins only. Join factorization is also supported in outer join, anti join and semi join. But only the right tables of outer join, anti join and semi joins can be factorized. It is not semantically correct to factorize the left table of outer join, anti join or semi join. For example, Q6 is NOT semantically equivalent to Q7. Q6:     select t1.c1, t2.c2    from t1, t2    where t1.c1 = t2.c1(+) and t2.c2 (+) = 2  union all    select t1.c1, t2.c2    from t1, t2      where t1.c1 = t2.c1(+) and t2.c2 (+) = 3 Q7:     select t1.c1, VW_JF_1.item_2    from t1, (select t2.c1 item_1, t2.c2 item_2                  from t2                  where t2.c2 = 2                union all                  select t2.c1 item_1, t2.c2 item_2                  from t2                                                                                                    where t2.c2 = 3) VW_JF_1       where t1.c1 = VW_JF_1.item_1(+)                                                                  However, the right side of an outer join can be factorized. For example, join factorization can transform Q8 to Q9 by factorizing t2, which is the right table of an outer join. Q8:    select t1.c2, t2.c2    from t1, t2      where t1.c1 = t2.c1 (+) and t1.c1 = 1 union all    select t1.c2, t2.c2    from t1, t2    where t1.c1 = t2.c1(+) and t1.c1 = 2 Q9:   select VW_JF_1.item_2, t2.c2   from t2,             (select t1.c1 item_1, t1.c2 item_2            from t1            where t1.c1 = 1           union all            select t1.c1 item_1, t1.c2 item_2            from t1            where t1.c1 = 2) VW_JF_1   where VW_JF_1.item_1 = t2.c1(+) All of the examples in this blog show factorizing a single table from two branches. This is just for ease of illustration. Join factorization can factorize multiple tables and from more than two UNION ALL branches.  SummaryJoin factorization is a cost-based transformation. It can factorize common computations from branches in a UNION ALL query which can lead to huge performance improvement. 
