Search Results

Search found 4 results on 1 page for 'biohazard'.

Page 1/1

  • My server is running out of memory, despite having all swap free

    - by Biohazard
    I am using Debian 6 (Squeeze). The server has 4 GB of memory and 8 GB of swap. I'm starting to get memory allocation errors at times of high application load, yet top reports:

      Mem:  4055944k total, 3915436k used,  140508k free,   10444k buffers
      Swap: 7999480k total,       0k used, 7999480k free, 3604496k cached

    The system isn't even trying to use the swap. Why would this be happening? I would like to upgrade the primary memory, but that isn't possible right now. Thanks.
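
    A minimal diagnostic sketch, assuming a root shell on a standard Linux box; the sysctls and commands below are not taken from the question itself:

      cat /proc/sys/vm/swappiness   # 0 discourages pushing pages to swap
      sysctl vm.overcommit_memory   # mode 2 (strict) can fail allocations while swap sits free
      sysctl vm.overcommit_ratio    # caps the commit limit under strict overcommit
      ulimit -v                     # per-process virtual memory limit, if any
      dmesg | tail -n 50            # allocation-failure or OOM-killer messages

    A strict overcommit policy or a per-process limit can make malloc fail even while swap is untouched, which would match the symptoms described.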

    Read the article

  • Decrease in disk performance after partitioning and encryption: is this much of a drop normal?

    - by Biohazard
    I have a server that I only have remote access to. Earlier in the week I repartitioned the two-disk RAID as follows:

      Filesystem              Size  Used Avail Use% Mounted on
      /dev/mapper/sda1_crypt  363G  1.8G  343G   1% /
      tmpfs                   2.0G     0  2.0G   0% /lib/init/rw
      udev                    2.0G  140K  2.0G   1% /dev
      tmpfs                   2.0G     0  2.0G   0% /dev/shm
      /dev/sda5               461M   26M  412M   6% /boot
      /dev/sda7               179G  8.6G  162G   6% /data

    The RAID consists of 2 x 300 GB 15k SAS disks. Before the changes it was a single unencrypted root partition, and hdparm -t /dev/sda gave readings around 240 MB/s, which I still get now:

      /dev/sda:
       Timing buffered disk reads: 730 MB in 3.00 seconds = 243.06 MB/sec

    Since the repartitioning and encryption, the separate partitions give:

      Unencrypted /dev/sda7:
       Timing buffered disk reads: 540 MB in 3.00 seconds = 179.78 MB/sec

      Unencrypted /dev/sda5:
       Timing buffered disk reads: 476 MB in 2.55 seconds = 186.86 MB/sec

      Encrypted /dev/mapper/sda1_crypt:
       Timing buffered disk reads: 150 MB in 3.03 seconds = 49.54 MB/sec

    I expected a drop in performance on the encrypted partition, but not that much, and I didn't expect any drop at all on the other partitions. The other hardware in the server is 2 x quad-core Intel(R) Xeon(R) E5405 CPUs @ 2.00 GHz and 4 GB of RAM.

      $ cat /proc/scsi/scsi
      Attached devices:
      Host: scsi0 Channel: 00 Id: 32 Lun: 00
        Vendor: DP       Model: BACKPLANE        Rev: 1.05
        Type:   Enclosure                        ANSI SCSI revision: 05
      Host: scsi0 Channel: 02 Id: 00 Lun: 00
        Vendor: DELL     Model: PERC 6/i         Rev: 1.11
        Type:   Direct-Access                    ANSI SCSI revision: 05
      Host: scsi1 Channel: 00 Id: 00 Lun: 00
        Vendor: HL-DT-ST Model: CD-ROM GCR-8240N Rev: 1.10
        Type:   CD-ROM                           ANSI SCSI revision: 05

    I'm guessing this means the server has a PERC 6/i RAID controller? The encryption was done with default settings during the Debian 6 installation; I can't recall the exact specifics and am not sure how to go about finding them. Thanks
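
    A hedged way to narrow this down, assuming dd and cryptsetup are available (the device names are taken from the question):

      # sequential reads that bypass the page cache, per device
      dd if=/dev/sda7 of=/dev/null bs=1M count=1024 iflag=direct
      dd if=/dev/mapper/sda1_crypt of=/dev/null bs=1M count=1024 iflag=direct

      # show the cipher and key size chosen by the installer
      cryptsetup status sda1_crypt

    If the dm-crypt device reads far slower than the raw partitions while one CPU core saturates, the gap is decryption cost rather than the PERC controller or the partition layout.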

    Read the article

  • Function to find common rows between more than two data frames in R

    - by biohazard
    I have 4 data frames and would like to find the rows whose values in a certain column do not exist in any of the other data frames. I wrote this function:

      # function to test presence of $Name in 3 other datasets
      common <- function(a, b, c, d) {
        is.B <- is.numeric(a$Name %in% b$Name) == 1
        is.C <- is.numeric(a$Name %in% c$Name) == 1
        is.D <- is.numeric(a$Name %in% d$Name) == 1
        t <- as.numeric(is.B & is.C & is.D)
        t
      }

    However, the output is always t = 0. That tells me there are no unique rows in any data set, even though the data frames have very different numbers of rows. Since there are no duplicate rows in any of the data frames, I should be getting t = 1 for at least some rows in the biggest dataset. Can someone figure out what I got wrong?
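
    A minimal sketch of one likely fix: is.numeric() tests the type of its argument, and %in% returns a logical vector, so every is.numeric(...) == 1 comparison is FALSE and t collapses to 0. The logical vectors can be used directly; the commented variant matches the stated goal of finding names absent from all the other frames:

      common <- function(a, b, c, d) {
        in.b <- a$Name %in% b$Name
        in.c <- a$Name %in% c$Name
        in.d <- a$Name %in% d$Name
        as.numeric(in.b & in.c & in.d)  # 1 where a$Name appears in all three
      }

      # rows of `a` whose Name appears in none of b, c, d:
      # a[!(a$Name %in% b$Name) & !(a$Name %in% c$Name) & !(a$Name %in% d$Name), ]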

    Read the article

  • Calculate rolling / moving average in C or C++

    - by Biohazard
    I know this is achievable with boost, as in: Using boost::accumulators, how can I reset a rolling window size, does it keep extra history? But I would really like to avoid using boost. I have googled and not found any suitable or readable examples. Basically, I want to track the moving average of an ongoing stream of floating point numbers, using the most recent 1000 numbers as the data sample. What is the easiest way to achieve this?
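
    A minimal sketch, not a definitive implementation: a fixed-size ring buffer plus a running sum gives O(1) updates over the most recent N samples (N = 1000 here). Accumulating in double keeps drift small; for very long streams the sum can be recomputed from the buffer periodically.

      #include <cstddef>
      #include <vector>

      class RollingAverage {
      public:
          // window must be > 0
          explicit RollingAverage(std::size_t window) : buf_(window, 0.0) {}

          double add(double sample) {
              sum_ += sample - buf_[pos_];          // drop oldest, add newest
              buf_[pos_] = sample;
              pos_ = (pos_ + 1) % buf_.size();
              if (count_ < buf_.size()) ++count_;   // window not yet full
              return sum_ / static_cast<double>(count_);
          }

      private:
          std::vector<double> buf_;
          std::size_t pos_ = 0;
          std::size_t count_ = 0;
          double sum_ = 0.0;
      };

      // Usage: RollingAverage avg(1000); call avg.add(x) for each incoming value.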

    Read the article
