Search Results

Search found 16731 results on 670 pages for 'memory limit'.


  • Tracking down data load performance issues in SSIS package

    - by SteveC
    Are there any ways to determine what differences between databases affect an SSIS package's load performance? I've got a package which loads and does various bits of processing on ~100k records; against my laptop database it finishes in about 5 minutes. Running the same package with the same data on the test server, which is a reasonable box in both CPU and memory, it's still going... about 1 hour so far :-( I checked the package with a small set of data, and it ran through OK.
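
    One way to narrow this down is to look at what the load is actually waiting on while it runs on the slow server. A minimal T-SQL sketch using the standard DMVs (the session_id > 50 filter is just a rough way to skip system sessions):

        -- What is each active request waiting on right now?
        SELECT r.session_id, r.status, r.command, r.wait_type,
               r.wait_time, r.blocking_session_id, r.total_elapsed_time
        FROM sys.dm_exec_requests AS r
        WHERE r.session_id > 50;

        -- Top cumulative waits on the instance since the last restart
        SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;

    Comparing the dominant wait types between the laptop and the test server (I/O, locking, memory grants) usually points at where the two environments really differ.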

    Read the article

  • controlling the bandwidth using tc

    - by Supratik
    Hi, I have two NICs: eth0 is connected to the internet and eth1 is connected to the LAN. I want to limit the download rate using iptables and Linux tc, so I wrote a test script to verify that it is working. My iptables configuration is as below:

        iptables -t mangle -N INBOUND
        iptables -t mangle -I PREROUTING -i eth0 -j INBOUND
        iptables -t mangle -A INBOUND -j MARK --set-mark 60

    My ingress configuration is as below:

        tc qdisc add dev eth0 handle 1: ingress
        tc filter add dev eth0 parent 1: protocol ip prio 1 handle 60 fw \
            police rate 100kbit burst 20kbit drop flowid :1

    Can you please tell me what I am missing here?
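
    For comparison, here is the same policer written with a u32 match instead of the fw mark, in the style of the LARTC ingress examples; it takes the mark/iptables interaction out of the picture (ffff: is just the conventional ingress handle, and the rate is the one from the script above, with the burst given in bytes):

        tc qdisc add dev eth0 handle ffff: ingress
        tc filter add dev eth0 parent ffff: protocol ip prio 1 \
            u32 match ip src 0.0.0.0/0 \
            police rate 100kbit burst 10k drop flowid :1

    If this version limits a test download while the fw-mark version does not, the problem is in how the mark reaches the ingress filter rather than in the policer itself.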

    Read the article

  • c read float from file and sort

    - by Franky
    Hi all, I have a problem with a C application: I have some float numbers in a .txt file and I have to read them and sort them in descending order. When I call fscanf and then printf, I get strange numbers on the screen (memory addresses, I suppose). How can I solve this? Thanks in advance.
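
    The usual cause of "strange numbers" here is a format-specifier mismatch: reading with fscanf but forgetting the & on the variable, or printing a float with %d. A minimal sketch, assuming the .txt file just contains whitespace-separated numbers:

        #include <stdio.h>
        #include <stdlib.h>

        /* qsort comparator: descending order */
        static int cmp_desc(const void *a, const void *b)
        {
            float x = *(const float *)a, y = *(const float *)b;
            return (x < y) - (x > y);
        }

        int main(void)
        {
            float v[1000];
            size_t n = 0;
            FILE *f = fopen("numbers.txt", "r");
            if (!f) return 1;
            while (n < 1000 && fscanf(f, "%f", &v[n]) == 1)  /* note the & */
                n++;
            fclose(f);
            qsort(v, n, sizeof v[0], cmp_desc);
            for (size_t i = 0; i < n; i++)
                printf("%f\n", v[i]);                        /* %f, not %d */
            return 0;
        }

    The file name and the fixed 1000-element buffer are placeholders; a real version would grow the array as needed.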

    Read the article

  • Shortening large CSV on debian

    - by Unkwntech
    I have a very large CSV file and I need to write an app that will parse it, but using the 6 GB file to test against is painful. Is there a simple way to extract the first hundred or two lines without having to load the entire file into memory? The file resides on a Debian server.
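
    A sketch of the usual approach: head stops reading as soon as it has printed the requested lines, so the 6 GB file is never pulled in (the file names and the line count are placeholders):

        head -n 200 big.csv > sample.csv    # first 200 lines only
        sed '200q' big.csv > sample.csv     # same idea with sed

    Either command can also be piped straight into the app under test instead of writing sample.csv.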

    Read the article

  • Assembly GDB Print String

    - by Ken
    So in assembly I declare the following string: Sample db "This is a sample string",0. In GDB I type "p Sample" (without quotes) and it spits out 0x73696854. I want the actual string to print out, so I tried "printf "%s", Sample" (again, without quotes) and it spits out "Cannot access memory at address 0x73696854." Short version: how do I print a string in GDB?
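
    0x73696854 is just the first four bytes of the string ("This") read as a little-endian integer, so GDB is treating Sample as data rather than as a pointer. Telling it to interpret the symbol's address as a C string usually does the trick; a couple of variants (whether the & is needed can depend on how the assembler recorded the symbol's type):

        (gdb) x/s &Sample
        (gdb) p (char *) &Sample
        (gdb) printf "%s\n", &Sample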

    Read the article

  • Perl kill(0, $pid) in Windows always returning 1

    - by banshee_walk_sly
    I'm trying to make a Perl script that will run a set of other programs in Windows. I need to be able to capture the stdout, stderr, and exit code of the process, and I need to be able to see if a process exceeds its allotted execution time. Right now, the pertinent part of my code looks like:

        ...
        $pid = open3($wtr, $stdout, $stderr, $command);
        if($time < 0){
            waitpid($pid, 0);
            $return = $? >> 8;
            $death_sig = $? & 127;
            $core_dump = $? & 128;
        }
        else{
            # Do timeout stuff, currently not working as planned
            print "pid: $pid\n";
            my $elapsed = 0;
            # THIS LOOP ONLY TERMINATES WHEN $time > $elapsed ...?
            while(kill 0, $pid and $time > $elapsed){
                Time::HiRes::usleep(1000); # sleep for milliseconds
                $elapsed += 1;
                $return = $? >> 8;
                $death_sig = $? & 127;
                $core_dump = $? & 128;
            }
            if($elapsed >= $time){
                $status = "FAIL";
                print $log "TIME LIMIT EXCEEDED\n";
            }
        }
        # These lines are needed to grab the stdout and stderr in arrays
        # so I may reuse them in multiple logs
        if(fileno $stdout){ @stdout = <$stdout>; }
        if(fileno $stderr){ @stderr = <$stderr>; }
        ...

    Everything works correctly if $time = -1 (no timeout is needed), but kill 0, $pid always returns 1, which makes the loop run for the entirety of the time allowed. Some extra details for clarity: this is being run on Windows, and I know my process does terminate because I get all the expected output. Perl version:

        This is perl, v5.10.1 built for MSWin32-x86-multi-thread
        (with 2 registered patches, see perl -V for more detail)
        Copyright 1987-2009, Larry Wall
        Binary build 1007 [291969] provided by ActiveState http://www.ActiveState.com
        Built Jan 26 2010 23:15:11

    I appreciate your help :D For any future person who may have a similar issue, I got the code to work; here are the modified code sections:

        $pid = open3($wtr, $stdout, $stderr, $command);
        close($wtr);
        if($time < 0){
            waitpid($pid, 0);
        }
        else{
            print "pid: $pid\n";
            my $elapsed = 0;
            while(waitpid($pid, WNOHANG) <= 0 and $time > $elapsed){
                Time::HiRes::usleep(1000); # sleep for milliseconds
                $elapsed += 1;
            }
            if($elapsed >= $time){
                $status = "FAIL";
                print $log "TIME LIMIT EXCEEDED\n";
            }
        }
        $return = $? >> 8;
        $death_sig = $? & 127;
        $core_dump = $? & 128;
        if(fileno $stdout){ @stdout = <$stdout>; }
        if(fileno $stderr){ @stderr = <$stderr>; }
        close($stdout);
        close($stderr);

    Read the article

  • With "viewDidLoad" my viewController takes too much time to appear!

    - by dingua
    Hi, when I load my viewController I use the "viewDidLoad" method to initialize my view, but it takes a long time for the view to appear. So I had the idea of using the "viewDidAppear" method to speed up the appearance of my view, but now the information for my view is loaded into memory every time I push my view (which is normal) or pop back to it (and there is my problem). Do you have an idea?

    Read the article

  • SQL Server insert slow

    - by andrew007
    Hi, I have two servers where I installed SQL Server 2008. Production: RAID 1 on SCSI disks. Test: IDE disk. When I execute a script with about 35,000 inserts, the test server needs 30 seconds while the production server takes more than 2 minutes! Does anybody know why there is such a difference? I mean, the DB is configured in the same way, and the production server also has a RAID config, a better processor and more memory... THANKS!
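
    Independent of the hardware difference, note that 35,000 separate INSERT statements in autocommit mode force a log flush per statement, which magnifies any gap in disk write latency. A hedged sketch of batching them (table and column names are made up):

        SET NOCOUNT ON;
        BEGIN TRANSACTION;
            INSERT INTO dbo.MyTable (Col1, Col2) VALUES (1, 'a');
            INSERT INTO dbo.MyTable (Col1, Col2) VALUES (2, 'b');
            -- ... remaining inserts ...
        COMMIT TRANSACTION;

    If the gap between the two servers shrinks dramatically inside one transaction, the difference is mostly log-write latency (RAID 1 write penalty, controller cache settings) rather than CPU or memory.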

    Read the article

  • Ways to set up a ZFS pool on a device without the possibility to create/manage partitions?

    - by Karl Richter
    I have a NAS where I don't have the possibility to create and manage partitions (maybe I could with some hacks that I don't want to make). What ways exist to set up multiple ZFS pools with one partition each (for starters, I just want to use deduplication)? The setup should work with the NAS, i.e. over the network (I'd mount the images via NFS or CIFS). My idea and its associated issues so far: sparse files mounted over a loop device (specifying a sparse file directly as a ZFS vdev doesn't work, see "Can I choose a sparse file as vdev for a zfs pool?"). The problems are that the name/number of the assigned loop device is anything but constant, and I'm not sure how increasing the number of loop devices with a kernel parameter affects performance (there has to be a reason for limiting it to 8 by default, right?)
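
    A minimal sketch of the loop-device route (paths and sizes are placeholders; the NAS is assumed to be mounted at /mnt/nas). losetup -f picks the first free loop device and --show prints which one it chose, so nothing has to be hard-coded even though the number is not stable across reboots:

        truncate -s 100G /mnt/nas/zpool0.img        # sparse backing file on the NAS mount
        DEV=$(losetup -f --show /mnt/nas/zpool0.img)
        zpool create tank "$DEV"
        zfs set dedup=on tank

        # after a reboot: re-attach the file, then let zpool search /dev for the pool
        losetup -f --show /mnt/nas/zpool0.img
        zpool import -d /dev tank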

    Read the article

  • Password History Storage and Variability Comparison

    - by z3ke
    I believe this situation is similar to many others out there, so maybe some of you can shed some light... Supposedly, when making password changes through MS Exchange every 90 days, you cannot use any simple variation of one of your old passwords, up to whatever history limit the admins have set for the system. My question: if your previous passwords are only stored as hashes, how can they check for the "just changed one letter" case? Wouldn't they have to have access to the old plain-text passwords in order to make those comparisons? The only other thing I can think of is that upon original creation of a password, they also store all one-character permutations of it, so that they can be banned later.
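
    For what it's worth, the "changed one letter" check does not require keeping old plaintexts or pre-computed permutations: at change time the system has the new password in the clear, so it can generate one-character variants of the new password and hash each of them against the stored history. A rough Python sketch of that idea (plain SHA-256 stands in for whatever salted scheme the server really uses):

        import hashlib
        import string

        def hash_pw(pw: str) -> str:
            return hashlib.sha256(pw.encode()).hexdigest()

        def too_similar(new_pw: str, old_hashes: set) -> bool:
            """True if the new password, or any single-character substitution
            of it, hashes to something already in the password history."""
            candidates = {new_pw}
            for i in range(len(new_pw)):
                for c in string.ascii_letters + string.digits:
                    candidates.add(new_pw[:i] + c + new_pw[i + 1:])
            return any(hash_pw(p) in old_hashes for p in candidates)

        history = {hash_pw("Summer2024!")}
        print(too_similar("Summer2025!", history))   # True: one character differs

    This only catches single-character substitutions; insertions, deletions, or appended suffixes would need their own candidate sets.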

    Read the article

  • How do I create and read non-global variables that aren't destroyed at end of function?

    - by Paul Reilly
    I am attempting to code some plugins to use with MIDI sequencers but have hit a stumbling block. I can't use global-scope variables to store information because multiple instances of the .dll can exist and would share that memory. How do I create a class (for reusability in other plugins) containing a two-dimensional array and other variables whose contents are shared between the plugin's functions? If that is possible, how would I read and write the data from the function in the framework where I do the processing?
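
    Assuming the host creates one plugin object per loaded instance (the usual pattern for such SDKs), the per-instance data can simply live as members of a state class owned by that object, so nothing sits at file scope. A skeletal C++ sketch with made-up names, not any particular SDK's API:

        #include <array>

        // Everything that must be private to one plugin instance lives here.
        class PluginState {
        public:
            static constexpr int kRows = 16, kCols = 128;

            void  setCell(int r, int c, float v) { grid_[r][c] = v; }
            float cell(int r, int c) const       { return grid_[r][c]; }

        private:
            std::array<std::array<float, kCols>, kRows> grid_{};  // 2-D array, zero-initialised
            float tempo_ = 120.0f;                                // any other shared variables
        };

        // Each instance the host creates owns its own PluginState, so two
        // loaded copies of the plugin never see each other's data.
        class MyPlugin {
        public:
            void processEvent(int note) {
                state_.setCell(0, note % PluginState::kCols, 1.0f);
            }
        private:
            PluginState state_;
        };

    The processing callback reaches the data through the MyPlugin object it was invoked on (usually via a user-data pointer the host hands back), which covers the "read and write from the framework function" part of the question.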

    Read the article

  • Efficient storage in C#.net App

    - by Tommy
    I'm looking for the fastest, least memory-consuming, stand-alone storage method available for large amounts of data for my C# app. My initial thoughts: SQL: no, not stand-alone. XML in a flat file: no, it takes too long to parse large amounts of data. Other options? Basically what I'm looking for is a way to load the data when my application loads, keep all the data in my app, and when the data in my app changes, just update the storage location.

    Read the article

  • What are possible causes of IDirect3DVertexBuffer9::Lock failing?

    - by Suma
    In error reports from some users I have quite often seen the following behaviour: IDirect3DVertexBuffer9::Lock fails with the error code D3DERR_NOTAVAILABLE. Once this happens, it is quite frequently (but not always) followed by CreateTexture or CreateVertexBuffer failing with D3DERR_OUTOFVIDEOMEMORY. What are possible reasons for a vertex buffer lock failure? Could the virtual memory address space be exhausted, or what?

    Read the article

  • When should I use SATA 6 Gb/s?

    - by Gili
    I purchased a Barracuda hard drive (model ST3000DM001) that supports a maximum read transfer rate of 210 MB/s and SATA 1.5/3/6 Gb/s. My motherboard has a limited number of 6 Gb/s ports, so I'd like to reserve them for when it's really necessary. When does a hard drive benefit from a SATA 6 Gb/s port? Doesn't it require a transfer speed of at least 375 MB/s to surpass the limit of SATA 3 Gb/s? Are there any other benefits of SATA 6 Gb/s vs 3 Gb/s ports?
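
    For reference, a back-of-the-envelope calculation (SATA uses 8b/10b line coding, i.e. 10 line bits per data byte):

        SATA 3 Gb/s:  3,000 Mb/s / 10 bits per byte ~ 300 MB/s usable
        SATA 6 Gb/s:  6,000 Mb/s / 10 bits per byte ~ 600 MB/s usable
        Drive's sustained read:   210 MB/s, well under the 3 Gb/s ceiling

    So sustained reads from the platters cannot saturate a 3 Gb/s port; short bursts served from the drive's on-board cache are the main case where the 6 Gb/s link could make a measurable difference.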

    Read the article

  • Difference between STLPort and SGI STL

    - by Yan Cheng CHEOK
    Recently, I was bitten by the following problem: the STL std::string class causes crashes and memory corruption on multi-processor machines when using VC6. I plan to use an alternative STL library instead of the one provided by VC6. I came across two libraries: STLPort and SGI STL. I was wondering what the difference between the two is. Which one should I use? Which one is able to guarantee thread safety? Thanks.

    Read the article

  • using trickle to slow down browser

    - by tester
    According to trickle's man page, http://linux.die.net/man/1/trickle, I can limit the download speed of a process, e.g. trickle -u 10 -d 20 ncftp to launch ncftp(1), limiting its upload capacity to 10 KB/s and its download capacity to 20 KB/s. How would I go about limiting google-chrome or firefox with trickle? Edit: for those of you asking why I asked such an obvious question, I tried trickle -u 10 -d 20 firefox and I'm getting an error: trickle: Could not reach trickled, working independently: No such file or directory. Firefox opens right after, but is definitely not rate limited...
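
    The "Could not reach trickled, working independently" line only means no trickled daemon was running, after which trickle proceeds on its own, so that message by itself is not the failure. A hedged sketch of the two usual invocations (the trickled flags are taken on trust from trickled(8) and should be checked; -no-remote forces Firefox to start a new process instead of handing the URL to an already-running one, which trickle would never have wrapped):

        # standalone mode, no daemon involved
        trickle -s -u 10 -d 20 firefox -no-remote

        # or run the daemon once, then wrap clients against it
        trickled -d 20 -u 10          # flags assumed, see trickled(8)
        trickle firefox -no-remote

    Note that trickle only intercepts dynamically linked programs doing their own socket I/O, so a browser that was already running when the command was issued will never be limited.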

    Read the article

  • Why is the Objective-C Boolean data type defined as a signed char?

    - by EddieCatflap
    Something that has piqued my interest is Objective-C's BOOL type definition. Why is it defined as a signed char (which could cause unexpected behaviour if a value greater than 1 byte in length is assigned to it) rather than as an int, as C does (much less margin for error: a zero value is false, a non-zero value is true)? The only reason I can think of is the Objective-C designers micro-optimising storage because the char will use less memory than the int. Please can someone enlighten me?
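
    A small illustration of the pitfall the question alludes to: any value whose low byte happens to be zero silently becomes NO when stored in a signed char. Plain C, with BOOL spelled out as the typedef the question describes:

        #include <stdio.h>

        typedef signed char BOOL;        /* as in (32-bit era) objc.h */
        #define YES ((BOOL)1)
        #define NO  ((BOOL)0)

        int main(void)
        {
            int flags = 0x0100;          /* non-zero, so "true" by C's rules  */
            BOOL b = flags;              /* low byte is 0x00, so b becomes 0  */

            printf("int  test: %s\n", flags ? "true" : "false");  /* true  */
            printf("BOOL test: %s\n", b     ? "true" : "false");  /* false */
            return 0;
        }

    With an int-backed Boolean the truncation cannot happen, which is the trade-off the question weighs against the one-byte storage saving.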

    Read the article

  • Is it possible to view Facebook news feeds page by page rather than loading it all as I scroll?

    - by oscilatingcretin
    If you want to scroll through your Facebook news feed (be it on the main feed, your personal feed, or in groups) to older parts, you have to scroll to the bottom, wait for the ajax load of the next part of the feed, and repeat. The problem with this is that, if you're scrolling down very far, the HTML document just gets bigger and bigger until your browser starts to die due to the overload of resources brought on by added HTML, text, and even images. This pretty much sets a limit to how far back you can scroll. Clicking on months and years on your personal feed has the same effect of cumulatively adding feed segments to the HTML document. I notice that this month/year feature is not available on the main feed and for groups. If there were a way to literally page through the feed so that only a single page's worth of feed data is loaded at a time, that would make scrolling through it much more doable.

    Read the article

  • Google Contacts and Phone Contacts: difference

    - by Rahul
    Hi, I've got an HTC Hero. There are two kinds of contacts, marked as Google and Phone, and both are stored in phone memory. If I want to access only Phone-tagged contact info, what should I do? And if I want to convert Google-tagged contacts to Phone-tagged contacts, what should I do? Thanks.

    Read the article

  • Looking for a free SMTP server program

    - by Richard
    Hello all, I am looking for a free SMTP server. I am currently using Free SMTP Server (http://www.softstack.com/freesmtp.html). This software works great other than the fact that it can only send 10 messages a day, which is a bit of a problem seeing that the software I am writing needs to send a message every half hour. Does anyone know of a good piece of software that does the same thing but does not limit the number of messages that can be sent in a day? I am using Windows XP, so the software must be Windows-friendly.

    Read the article
