Search Results

Search found 870 results on 35 pages for 'allocation'.

Page 15/35 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • What is a good metaphor for C memory management?

    - by fsmc
    I'm trying to find a good metaphor to explain memory allocation, initialization and freeing in C to a non-technical audience. I've heard pass-by-reference/value explained quite well in terms of the postal service, but not so much allocation/deallocation. So far I've thought the idea of renting a space might work, but I wonder if the SO crew can provide something better.
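    For reference, the three operations such a metaphor needs to cover look roughly like this in C (the names here are purely illustrative):

        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            /* allocation: ask the system for a block of memory ("renting the space") */
            char *name = malloc(32);
            if (name == NULL)
                return 1;                 /* the request can be refused */

            /* initialization: put something meaningful into it ("moving in") */
            strcpy(name, "example");

            /* freeing: give the block back when done ("handing back the keys") */
            free(name);
            return 0;
        }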

    Read the article

  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, We have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc libraries for memory allocation. The Linux PC has 4 GB of physical memory. Swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2 GB. It can be up to about 1.9 GB. When the request has completed, the buffer is released using 'delete'. This works fine for several consecutive requests that allocate buffers of the same size, or when a request allocates a smaller buffer than the previous one. The memory appears to be freed OK - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard. The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory freed from the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, the second request for a larger buffer succeeds; i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping-table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way memory is released back to the system. BTW - I'm not sure if it's relevant to our problem, but the server uses pthreads that get created and destroyed on each processing request. Cheers, Brian.
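    For what it's worth, the mallopt tuning mentioned above might look roughly like the sketch below with glibc; the threshold values are illustrative, not recommendations:

        #include <malloc.h>   /* glibc-specific: mallopt() and the M_* constants */
        #include <stdlib.h>

        int main(void)
        {
            /* Serve large requests via mmap() so they go back to the kernel
               immediately on free() instead of lingering in the heap. */
            mallopt(M_MMAP_THRESHOLD, 1024 * 1024);   /* 1 MB; tune as needed */

            /* Trim the main heap back to the OS more aggressively. */
            mallopt(M_TRIM_THRESHOLD, 1024 * 1024);

            void *buf = malloc((size_t)1200 * 1024 * 1024);   /* ~1.2 GB request */
            if (buf == NULL)
                return 1;
            /* ... process the request ... */
            free(buf);
            return 0;
        }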

    Read the article

  • LVM mirror attempt results in "Insufficient free space"

    - by MattK
    Attempting to add a disk to mirror an LVM volume on CentOS 7 always fails with "Insufficient free space: 1 extents needed, but only 0 available". Having searched for a solution, I have tried specifying disks, multiple logging options, adding 3rd log partition, but have not found a solution Not sure if I am making a rookie mistake, or there is something more subtle wrong (I am more familiar with ZFS, new to using LVM): # lvconvert -m1 centos_bi/home Insufficient free space: 1 extents needed, but only 0 available # lvconvert -m1 --corelog centos_bi/home Insufficient free space: 1 extents needed, but only 0 available # lvconvert -m1 --corelog --alloc anywhere centos_bi/home Insufficient free space: 1 extents needed, but only 0 available # lvconvert -m1 --mirrorlog mirrored --alloc anywhere centos_bi/home /dev/sda2 Insufficient free space: 1 extents needed, but only 0 available # lvconvert -m1 --corelog --alloc anywhere centos_bi/home /dev/sdi2 /dev/sda2 Insufficient free space: 1 extents needed, but only 0 available The two disks are of the same size, and have identical partition layouts via "sfdisk -d /dev/sdi part_table; sfdisk /dev/sda < part_table". The current configuration is detailed below. # pvs PV VG Fmt Attr PSize PFree /dev/sda1 centos_bi lvm2 a-- 496.00m 496.00m /dev/sda2 centos_bi lvm2 a-- 465.27g 465.27g /dev/sdi2 centos_bi lvm2 a-- 465.27g 0 # vgs VG #PV #LV #SN Attr VSize VFree centos_bi 3 3 0 wz--n- 931.02g 465.75g # lvs -a -o +devices LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Devices home centos_bi -wi-ao---- 391.64g /dev/sdi2(6050) root centos_bi -wi-ao---- 50.00g /dev/sdi2(106309) swap centos_bi -wi-ao---- 23.63g /dev/sdi2(0) # pvdisplay --- Physical volume --- PV Name /dev/sdi2 VG Name centos_bi PV Size 465.27 GiB / not usable 3.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 119109 Free PE 0 Allocated PE 119109 --- Physical volume --- PV Name /dev/sda2 VG Name centos_bi PV Size 465.27 GiB / not usable 3.00 MiB Allocatable yes PE Size 4.00 MiB Total PE 119109 Free PE 119109 Allocated PE 0 --- Physical volume --- PV Name /dev/sda1 VG Name centos_bi PV Size 500.00 MiB / not usable 4.00 MiB Allocatable yes PE Size 4.00 MiB Total PE 124 Free PE 124 Allocated PE 0 # vgdisplay --- Volume group --- VG Name centos_bi System ID Format lvm2 Metadata Areas 3 Metadata Sequence No 10 VG Access read/write VG Status resizable MAX LV 0 Cur LV 3 Open LV 3 Max PV 0 Cur PV 3 Act PV 3 VG Size 931.02 GiB PE Size 4.00 MiB Total PE 238342 Alloc PE / Size 119109 / 465.27 GiB Free PE / Size 119233 / 465.75 GiB # lvdisplay --- Logical volume --- LV Path /dev/centos_bi/swap LV Name swap VG Name centos_bi LV Write Access read/write LV Creation host, time localhost, 2014-08-07 16:34:34 -0400 LV Status available # open 2 LV Size 23.63 GiB Current LE 6050 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:1 --- Logical volume --- LV Path /dev/centos_bi/home LV Name home VG Name centos_bi LV Write Access read/write LV Creation host, time localhost, 2014-08-07 16:34:35 -0400 LV Status available # open 1 LV Size 391.64 GiB Current LE 100259 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:2 --- Logical volume --- LV Path /dev/centos_bi/root LV Name root VG Name centos_bi LV Write Access read/write LV Creation host, time localhost, 2014-08-07 16:34:37 -0400 LV Status available # open 1 LV Size 50.00 GiB Current LE 12800 Segments 1 Allocation inherit Read ahead sectors auto - currently 
set to 256 Block device 253:0

    Read the article

  • How can I figure out an IIS7 memory leak from this dump result?

    - by Cenk Erdem
    My application sometimes starts to eat too much memory within a few seconds and then crashes. I used DebugDiag to take a dump when this happened. In the analysis I see lots of memory allocations; they all carry the same information, and each of them allocates 128 MB. They look like this:

        Address           0x00000000`aff41798
        Allocation Time   06:56:06 since tracking started
        Allocation Size   128.00 MBytes

        Function Source Destination
        LeakTrack+186cf
        clr!CExecutionEngine::ClrVirtualAlloc+3c
        clr!ClrVirtualAlloc+3c
        clr!WKS::virtual_alloc+42
        clr!WKS::gc_heap::get_segment+a2
        clr!WKS::gc_heap::get_large_segment+204
        clr!WKS::gc_heap::loh_get_new_seg+78
        clr! ?? ::FNODOBFM::`string'+a008a
        clr!WKS::gc_heap::try_allocate_more_space+31b
        clr!WKS::gc_heap::allocate_more_space+26
        clr!WKS::gc_heap::allocate_large_object+6a
        clr!WKS::GCHeap::Alloc+b5
        clr!FramedAllocateString+b06
        mscorlib_ni+39f5fd
        mscorlib_ni+389f83
        System_Xml_ni+451adc
        System_Data_SqlXml_ni+2275d4
        System_Data_SqlXml_ni+233f32
        System_Data_SqlXml_ni+8ec28
        System_Data_SqlXml_ni+8eb65
        System_Web_ni+2882b2
        System_Web_ni+2794b6
        System_Web_ni+2794b6
        0x7FF002474BC

    What could be wrong with my code? Any suggestions?

    Read the article

  • Unable to connect to Github for the first time

    - by MaxMackie
    This is my first time with Git and I'm trying to set it up on my box. I added my key to my profile in the GitHub web interface. This is what happens when I try to connect:

        max@linux-vwzy:~> ssh [email protected]
        The authenticity of host 'github.com (207.97.227.239)' can't be established.
        RSA key fingerprint is xx
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added 'github.com,207.97.227.239' (RSA) to the list of known hosts.
        PTY allocation request failed on channel 0
        max@linux-vwzy:~> ssh-add ~/.ssh/id_rsa
        Identity added: /home/max/.ssh/id_rsa (/home/max/.ssh/id_rsa)
        max@linux-vwzy:~> ssh [email protected]
        PTY allocation request failed on channel 0

    I'm supposed to be getting some kind of welcome message; however, I'm not.

    Read the article

  • SSH_ORIGINAL_ENVIRONMENT error with snow leopard client to a gitosis server on debian

    - by Mica
    I have a server running gitosis (installed from the package manager) on debian lenny. I am able to perform all operations from my linux mint laptop, but from my Mac running an up-to-date Snow Leopard gives me the following error: mica@waste Desktop$ git clone [email protected]:Poems.git Initialized empty Git repository in /Users/micas/Desktop/Poems/.git/ ERROR:gitosis.serve.main:Repository read access denied fatal: The remote end hung up unexpectedly mica@waste Desktop$ ssh -v [email protected] OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Applying options for * debug1: Connecting to 192.168.0.156 [192.168.0.156] port 22. debug1: Connection established. debug1: identity file /Users/micas/.ssh/identity type -1 debug1: identity file /Users/micas/.ssh/id_rsa type 1 debug1: identity file /Users/micas/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5 debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '192.168.0.156' is known and matches the RSA host key. debug1: Found key in /Users/mica/.ssh/known_hosts:5 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Offering public key: /Users/mica/.ssh/id_rsa debug1: Remote: Forced command: gitosis-serve mica@waste debug1: Remote: Port forwarding disabled. debug1: Remote: X11 forwarding disabled. debug1: Remote: Agent forwarding disabled. debug1: Remote: Pty allocation disabled. debug1: Server accepts key: pkalg ssh-rsa blen 277 debug1: Remote: Forced command: gitosis-serve micas@waste debug1: Remote: Port forwarding disabled. debug1: Remote: X11 forwarding disabled. debug1: Remote: Agent forwarding disabled. debug1: Remote: Pty allocation disabled. debug1: Authentication succeeded (publickey). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. debug1: Requesting authentication agent forwarding. PTY allocation request failed on channel 0 ERROR:gitosis.serve.main:Need SSH_ORIGINAL_COMMAND in environment. debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0 debug1: channel 0: free: client-session, nchannels 1 Connection to 192.168.0.156 closed. Transferred: sent 2544, received 2888 bytes, in 0.1 seconds Bytes per second: sent 29642.1, received 33650.3 debug1: Exit status 1 Extensive googling of the error isn't returning much-- I changed the /etc/sshd_config file on my Mac as per http://www.schmidp.com/2009/06/23/enable-ssh-agent-key-forwarding-on-snow-leopard/. I still get the same error.

    Read the article

  • What FOSS solutions are available to manage software requirements?

    - by boos
    In the company where I work, we are starting to plan to be compliant with the software development life cycle. We already have a wiki, a VCS, a bug tracking system, and a continuous integration system. The next step we want to take is to start managing software requirements in a structured way. We don't want to use a wiki or shared documentation because we have many inputs (developer, manager, commercial, security analyst and others) and we don't want to handle a proliferation of .doc files around the network share. We are trying to search, and we hope we can find and use a FOSS application to manage all these things. We have about 30 people and don't have a budget for commercial software, so we need a free solution for requirements management. What we want is software that can manage:

    Required features:
      - Software requirements divided in a structured, configurable way
      - Versioning of the requirements (history, diff, etc., like source code)
      - Interdependency of requirements (child of, parent of, related to)
      - Rule-based access control for data handling
      - Multi-user, multi-project
      - File upload (for graphs, related documents and so on)
      - Report and extraction features

    Optional features:
      - Web based
      - Test cases
      - Time-based management (timeline, expected data, result data)
      - Person allocation and so on
      - Business-related stuff
      - Hardware allocation handling

    I have already played with TestLink and now I'm playing with RTH; the next one I'll try is Redmine.

    Read the article

  • Understanding ESXi and Memory Usage

    - by John
    Hi, I am currently testing VMware ESXi on a test machine. My host machine has 4 GB of RAM. I have three guests and each is assigned a memory limit of 1 GB (with only 512 MB reserved). The host summary screen shows a memory capacity of 4082.55 MB and a usage of 2828 MB with two guests running. This seems to make sense: two gigs for the VMs plus an overhead for the host. 800 MB seems high, but that is still reasonable. But on the Resource Allocation screen I see a memory capacity of 2356 MB and an available capacity of 596 MB. Under the Configuration tab, Memory link, I see a physical total of 4082.5 MB, System of 531.5 MB, and VM of 3551.0 MB. I have only allocated my VMs a gig each, and with two VMs running they are taking up almost twice the amount of RAM allocated. Why is this, and why does the Resource Allocation screen short-change me so much?

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database The last place I like to visit is always a hospital. With the monsoon season starting, intermittent rains, it has become sort of a routine to get a cycle of fever every other year (seriously I hate it). So when I visit my doctor, it is always interesting in the way he quizzes me. The routine question of – “How many days have you had this?”, “Is there any pattern?”, “Did you drench in rain?”, “Do you have any other symptom?” and so on. The idea here is that the doctor wants to find any anomaly or a pattern that will guide him to a viral or bacterial type. Most of the time they get it based on experience and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution out. SQL Server has its way to find if the server data / files are in consistent state using the DBCC commands. Back to SQL Server In real life, Database consistency check is one of the critical operations a DBA generally doesn’t give much priority. Many readers of my blogs have asked many times, how do we know if the database is consistent? How do I read output of DBCC CHECKDB and find if everything is right or not? My common answer to all of them is – look at the bottom of checkdb (or checktable) output and look for below line. CHECKDB found 0 allocation errors and 0 consistency errors in database ‘DatabaseName’. Above is a “good sign” because we are seeing zero allocation and zero consistency error. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown as below: CHECKDB found 0 allocation errors and 2 consistency errors in database ‘DatabaseName’. repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName). If we see non-zero error then most of the time (not always) we get repair options depending on the level of corruption. There is risk involved with above option (repair_allow_data_loss), that is – we would lose the data. Sometimes the option would be repair_rebuild which is little safer. Though these options are available, it is important to find the root cause to the problem. In standard report, there is a report which can show the history of checkdb executed for the selected database. Since this is a database level report, we need to right click on database, click Reports, click Standard Reports and then choose “Database Consistency History” report. The information in this report is picked from default trace. If default trace is disabled or there is no checkdb run or information is not there in default trace (because it’s rolled over), we would get report like below. As we can see report says it very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled. To demonstrate, I have caused corruption in one of the database and did below steps. Run CheckDB so that errors are reported. Fix the corruption by losing the data using repair option Run CheckDB again to check if corruption is cleared. After that I have launched the report and below is what we would see. If you are lazy like me and don’t want to run the report manually for each database then below query would be handy to provide same report for all database. This query is runs behind the scenes by the report. All I have done is remove the filter for database name (at the last – highlighted). 
DECLARE @curr_tracefilename VARCHAR(500); DECLARE @base_tracefilename VARCHAR(500); DECLARE @indx INT; SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1; SET @curr_tracefilename = REVERSE(@curr_tracefilename); SELECT @indx  = PATINDEX('%\%', @curr_tracefilename) ; SET @curr_tracefilename = REVERSE(@curr_tracefilename); SET @base_tracefilename = LEFT( @curr_tracefilename,LEN(@curr_tracefilename) - @indx) + '\log.trc'; SELECT  SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),36, PATINDEX('%executed%',TEXTData)-36) AS command ,       LoginName ,       StartTime ,       CONVERT(INT,SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%found%',TEXTData) +6,PATINDEX('%errors %',TEXTData)-PATINDEX('%found%',TEXTData)-6)) AS errors ,       CONVERT(INT,SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%repaired%',TEXTData) +9,PATINDEX('%errors.%',TEXTData)-PATINDEX('%repaired%',TEXTData)-9)) repaired ,       SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%time:%',TEXTData)+6,PATINDEX('%hours%',TEXTData)-PATINDEX('%time:%',TEXTData)-6)+':'+SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%hours%',TEXTData) +6,PATINDEX('%minutes%',TEXTData)-PATINDEX('%hours%',TEXTData)-6)+':'+SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%minutes%',TEXTData) +8,PATINDEX('%seconds.%',TEXTData)-PATINDEX('%minutes%',TEXTData)-8) AS time FROM::fn_trace_gettable( @base_tracefilename, DEFAULT) WHERE EventClass = 22 AND SUBSTRING(TEXTData,36,12) = 'DBCC CHECKDB' -- AND DatabaseName = @DatabaseName; Don’t get worried about the logic above. All it is doing is reading the trace files, parsing below entry and getting out information for underlined words. DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.  Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001. Hopefully now onwards you would run checkdb and understand the importance of it. As responsible DBAs I am sure you are already doing it, let me know how often do you actually run them on you production environment? Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • Why is the JVM stack-based and the DalvikVM register based?

    - by aioobe
    I'm curious: why did Sun decide to make the JVM stack-based, while Google decided to make the DalvikVM register-based? I suppose the JVM can't really assume that a certain number of registers is available on the target platform, since it is supposed to be platform independent; therefore it just postpones the register allocation, etc., to the JIT compiler. (Correct me if I'm wrong.) So the Android guys thought, "hey, that's inefficient, let's go for a register-based VM right away..."? But wait, there are multiple different Android devices; what number of registers did Dalvik target? Are the Dalvik opcodes hardcoded for a certain number of registers? Do all current Android devices on the market have about the same number of registers? Or is there a register re-allocation performed during dex loading? How does all this fit together?

    Read the article

  • Resource acquisition is initialization (RAII)

    - by hitech
    In the example below:

        class X {
            int *r;
        public:
            X()  { cout << "X is created";   r = new int[10]; }
            ~X() { cout << "X is destroyed"; delete [] r; }
        };

        class Y {
        public:
            Y()  { X x; throw 44; }
            ~Y() { cout << "Y is destroyed"; }
        };

    I got this example of RAII from one site and have some doubts, please help. In the constructor of X we are not considering the scenario "if the memory allocation fails". Here the destructor of Y is safe, as Y's constructor is not allocating any memory. What if we need to do some memory allocation in Y's constructor as well?

    Read the article

  • How to measure sum of collected memory of Young Generation?

    - by Marcel
    Hi, I'd like to measure memory allocation data from my Java application, i.e. the sum of the sizes of all objects that were allocated. Since object allocation is done in the young generation, this seems to be the right place to look. I know jconsole and I know the JMX beans, but I just can't find the right variable... At the moment we are parsing the GC log output file, but that's quite hard. Ideally we'd like to measure it via JMX... How can I get this value? Thanks, Marcel

    Read the article

  • Is there any limit on stack memory?

    - by Vikas
    I was going through one of the threads. A program crashed because it had declared an array of 10^6 elements locally inside a function. The reason given was that a memory allocation failure on the stack leads to a crash; when the same array was declared globally, it worked well (memory on the heap saved it). Now, for the moment, let us suppose the stack grows downward and the heap upward, so we have: ---STACK--- ---HEAP---. I believe that if there is a failure in allocation on the stack, it must fail on the heap too. So my question is: is there any limit on stack size? (Crossing the limit caused the program to crash.) Or am I missing something?
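    For what it's worth, on Linux/POSIX systems the stack limit can be queried at runtime; a minimal sketch (the array size is only illustrative):

        #include <stdio.h>
        #include <sys/resource.h>   /* getrlimit(), RLIMIT_STACK (POSIX) */

        static int global_buf[1000000];     /* lives in the data/BSS segment, not on the stack */

        void risky(void)
        {
            int local_buf[1000000];         /* ~4 MB of stack; may exceed the default limit */
            local_buf[0] = 0;
        }

        int main(void)
        {
            struct rlimit rl;
            if (getrlimit(RLIMIT_STACK, &rl) == 0)
                printf("stack limit: %ld bytes\n", (long)rl.rlim_cur);   /* often 8 MB on Linux */

            global_buf[0] = 1;              /* fine: not constrained by the stack limit */
            /* risky();                        calling this may crash with a stack overflow */
            return 0;
        }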

    Read the article

  • Can anyone tell me what the authors mean on this line?

    - by Anirudh Goel
    I was going through this link: FAT16 Basics to Assemble Clusters. I have read about the structures involved in defining a directory entry in FAT. Now, when giving the example for a FAT16 file, it says the data cluster is 0x03 for the example file MyFile.txt, which means that if we logically compute the data cluster we will be able to reach the first node, which happens to be cluster no. 3. But what I fail to understand is what the author is trying to say in the next line, where it asks "What we can see in the File Allocation Table at this moment?" How did we suddenly reach the File Allocation Table? Weren't we already there when we were going through the information of MyFile.txt? I couldn't see why the author suddenly jumped to offset location 00000200 and is checking whether the clusters are empty. It would be great if someone could help me understand. Thanks
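    For context, the directory entry only stores the file's first cluster; the File Allocation Table is a separate on-disk array that records, for each cluster, which cluster comes next. A rough sketch of where cluster 3's FAT16 entry lives (the BPB values are assumptions chosen for illustration):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* Illustrative BPB values for a small FAT16 volume (assumptions). */
            uint32_t bytes_per_sector = 512;
            uint32_t reserved_sectors = 1;      /* boot sector only */
            uint32_t first_cluster    = 3;      /* from the directory entry of MyFile.txt */

            /* The FAT region starts right after the reserved sectors; each
               FAT16 entry is 2 bytes, indexed directly by cluster number. */
            uint32_t fat_start = reserved_sectors * bytes_per_sector;   /* 0x00000200 here */
            uint32_t entry_off = fat_start + first_cluster * 2;

            printf("FAT entry for cluster %u is at byte offset 0x%08X\n",
                   (unsigned)first_cluster, (unsigned)entry_off);
            /* The 16-bit value stored there is the next cluster of the file,
               or an end-of-chain marker (0xFFF8-0xFFFF) if this is the last one. */
            return 0;
        }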

    Read the article

  • Resources for memory management in embedded application

    - by Elazar Leibovich
    How should I manage memory in my mission-critical embedded application? I found some articles with Google, but couldn't pinpoint a really useful, practical guide. DO-178B forbids dynamic memory allocation, but how do you manage the memory then? Preallocate everything in advance and send a pointer to each function that needs allocation? Allocate it on the stack? Use a global static allocator (but then it's very similar to dynamic allocation)? Answers can take the form of a regular answer, a reference to a resource, or a reference to a good open-source embedded system as an example. Clarification: the issue here is not whether memory management is available for the embedded system, but what a good design for an embedded system is, to maximize reliability.
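    As one illustration of the "preallocate everything in advance" option, a fixed-size block pool carved out of static storage might look roughly like this (block count and size are made up for the sketch; error handling kept minimal):

        #include <stddef.h>
        #include <stdint.h>

        #define POOL_BLOCKS      64
        #define POOL_BLOCK_SIZE  256    /* bytes per block, chosen for the example */

        /* All storage is reserved at link time: no malloc(), no heap growth. */
        static uint8_t pool_storage[POOL_BLOCKS][POOL_BLOCK_SIZE];
        static uint8_t pool_used[POOL_BLOCKS];

        void *pool_alloc(void)
        {
            for (size_t i = 0; i < POOL_BLOCKS; ++i) {
                if (!pool_used[i]) {
                    pool_used[i] = 1;
                    return pool_storage[i];
                }
            }
            return NULL;    /* exhaustion is reported deterministically, not by corrupting memory */
        }

        void pool_free(void *p)
        {
            for (size_t i = 0; i < POOL_BLOCKS; ++i) {
                if ((void *)pool_storage[i] == p) {
                    pool_used[i] = 0;
                    return;
                }
            }
        }

    Because every block has the same fixed size and the whole pool exists for the life of the program, worst-case memory use and timing can be established at design time, which is usually the point of forbidding dynamic allocation.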

    Read the article

  • Switching from C++ (with a lot of STL use) to C for interpreter building

    - by wndsr
    I'm switching from C++ to C because I'm rebuilding my toy interpreter. I was used to vectors for dynamic allocation of objects like the tokens or instructions of my programs, to stacks, and above all to strings in all their aspects. Now, in C, I'm not going to have any of these anymore. I know that I will have to do a lot of memory management too. I'm completely new to C; I only know the high-level, easy-life data structures from the STL. How can I get started with strings and dynamic memory allocation?
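    A minimal sketch of the usual C substitute for a vector or string: a byte buffer that grows with realloc (names are illustrative):

        #include <stdlib.h>
        #include <string.h>

        typedef struct {
            char  *data;
            size_t len;
            size_t cap;
        } Buffer;

        /* Append bytes, doubling the capacity when needed: roughly what
           std::vector / std::string do for you in C++. */
        int buffer_append(Buffer *b, const char *src, size_t n)
        {
            if (b->len + n > b->cap) {
                size_t new_cap = b->cap ? b->cap * 2 : 16;
                while (new_cap < b->len + n)
                    new_cap *= 2;
                char *tmp = realloc(b->data, new_cap);
                if (tmp == NULL)
                    return -1;            /* caller decides how to handle failure */
                b->data = tmp;
                b->cap  = new_cap;
            }
            memcpy(b->data + b->len, src, n);
            b->len += n;
            return 0;
        }

        void buffer_free(Buffer *b)
        {
            free(b->data);
            b->data = NULL;
            b->len = b->cap = 0;
        }

    A Buffer initialized to all zeros starts out empty; token lists, instruction streams and strings can all be built on this same pattern (for strings, append a terminating '\0' before use).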

    Read the article

  • Visual C++ 9 Linker file size limitation.

    - by Raindog
    It appears that the Visual C++ 9 linker has a file allocation algorithm that doubles the size of the file on every allocation, so you get 512 MB, 1024 MB, 2048 MB, 4096 MB. The problem is that it is using a library that cannot handle files larger than 2048 MB, and as such it crashes with an error such as "cannot read file at is the disk full or write protected". Is there a way to bypass this limitation, or otherwise replace the linker with something else that works? A bit of background: I have a code generator that produces a large number of files, ~15k .cpp files. I've managed to reduce the number of files to about 6k to get something that at least completes the linking process, but I would like to be able to include all 15k without having to create multiple libs.

    Read the article

  • Totally fed up with getting GTK widget height and width

    - by PP
    Trying to get the height and width of a GtkEventBox. I have tried the following things:

        GtkRequisition requisition;
        gtk_widget_get_child_requisition(widget, &requisition);
        // requisition.height comes back as 0

        widget->allocation.x       // getting 0
        widget->allocation.height  // getting -1

        gtk_widget_get_size_request(widget, &height, &width);
        // again getting 0

    It is really frustrating that GTK does not provide a simple function that gives you the actual displayed height and width of a widget. Has anyone managed to get the height and width of a GtkWidget?
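    In case it helps, the allocation only becomes meaningful after the widget has been laid out on screen; one common pattern (sketched here against the GTK+ 2.18 API) is to read it from a size-allocate handler:

        #include <gtk/gtk.h>

        /* Called every time the widget is given its on-screen size. */
        static void on_size_allocate(GtkWidget *widget, GtkAllocation *allocation,
                                     gpointer user_data)
        {
            g_print("width=%d height=%d\n", allocation->width, allocation->height);
        }

        void watch_size(GtkWidget *event_box)
        {
            g_signal_connect(event_box, "size-allocate",
                             G_CALLBACK(on_size_allocate), NULL);

            /* Or, once the widget has been realized and mapped: */
            GtkAllocation alloc;
            gtk_widget_get_allocation(event_box, &alloc);   /* available in GTK+ >= 2.18 */
            g_print("current: %d x %d\n", alloc.width, alloc.height);
        }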

    Read the article

  • memory management question -- releasing an object which has to be returned

    - by ulag
    Hi, I have an NSMutableArray called playlist. It is built in a method called getAllPlaylists. The code is something like this:

        -(NSMutableArray *)getAllPlaylists {
            // playlist is an instance variable
            playlist = [[NSMutableArray alloc] init];   // memory leak here
            .
            .
            // some code here which populates the playlist array
            [playlist addObject: object1];
            .
            .
            return playlist;
        }

    The allocation step for the playlist array is causing a memory leak. In such a scenario, where can I release this array? Or can I avoid the allocation and initialization of playlist here by doing something else?

    Read the article

  • What free space thresholds/limits are advisable for 640 GB and 2 TB hard disk drives with ZEVO ZFS on OS X?

    - by Graham Perrin
    Assuming that free space advice for ZEVO will not differ from advice for other modern implementations of ZFS … Question Please, what percentages or amounts of free space are advisable for hard disk drives of the following sizes? 640 GB 2 TB Thoughts A standard answer for modern implementations of ZFS might be "no more than 96 percent full". However if apply that to (say) a single-disk 640 GB dataset where some of the files most commonly used (by VirtualBox) are larger than 15 GB each, then I guess that blocks for those files will become sub optimally spread across the platters with around 26 GB free. I read that in most cases, fragmentation and defragmentation should not be a concern with ZFS. Sill, I like the mental picture of most fragments of a large .vdi in reasonably close proximity to each other. (Do features of ZFS make that wish for proximity too old-fashioned?) Side note: there might arise the question of how to optimise performance after a threshold is 'broken'. If it arises, I'll keep it separate. Background On a 640 GB StoreJet Transcend (product ID 0x2329) in the past I probably went beyond an advisable threshold. Currently the largest file is around 17 GB –  – and I doubt that any .vdi or other file on this disk will grow beyond 40 GB. (Ignore the purple masses, those are bundles of 8 MB band files.) Without HFS Plus: the thresholds of twenty, ten and five percent that I associate with Mobile Time Machine file system need not apply. I currently use ZEVO Community Edition 1.1.1 with Mountain Lion, OS X 10.8.2, but I'd like answers to be not too version-specific. References, chronological order ZFS Block Allocation (Jeff Bonwick's Blog) (2006-11-04) Space Maps (Jeff Bonwick's Blog) (2007-09-13) Doubling Exchange Performance (Bizarre ! Vous avez dit Bizarre ?) (2010-03-11) … So to solve this problem, what went in 2010/Q1 software release is multifold. The most important thing is: we increased the threshold at which we switched from 'first fit' (go fast) to 'best fit' (pack tight) from 70% full to 96% full. With TB drives, each slab is at least 5GB and 4% is still 200MB plenty of space and no need to do anything radical before that. This gave us the biggest bang. Second, instead of trying to reuse the same primary slabs until it failed an allocation we decided to stop giving the primary slab this preferential threatment as soon as the biggest allocation that could be satisfied by a slab was down to 128K (metaslab_df_alloc_threshold). At that point we were ready to switch to another slab that had more free space. We also decided to reduce the SMO bonus. Before, a slab that was 50% empty was preferred over slabs that had never been used. In order to foster more write aggregation, we reduced the threshold to 33% empty. This means that a random write workload now spread to more slabs where each one will have larger amount of free space leading to more write aggregation. Finally we also saw that slab loading was contributing to lower performance and implemented a slab prefetch mechanism to reduce down time associated with that operation. The conjunction of all these changes lead to 50% improved OLTP and 70% reduced variability from run to run … OLTP Improvements in Sun Storage 7000 2010.Q1 (Performance Profiles) (2010-03-11) Alasdair on Everything » ZFS runs really slowly when free disk usage goes above 80% (2010-07-18) where commentary includes: … OpenSolaris has changed this in onnv revision 11146 … [CFT] Improved ZFS metaslab code (faster write speed) (2010-08-22)

    Read the article

  • Oracle Linux 6 DVDs Now Available

    - by sergio.leunissen
    On Sunday 6 February 2011, Oracle Linux 6 was released on the Unbreakable Linux Network for customers with an Oracle Linux support subscription. Shortly after that, the Oracle Linux 6 RPMs were made available on our public yum server. Today we published the installation DVD images on edelivery.oracle.com/linux. Oracle Linux 6 is free to download, install and use. The full release notes are here, but similar to my recent post about Oracle Linux 5.6, I wanted to highlight a few items about this release. Unbreakable Enterprise Kernel As is the case with Oracle Linux 5.6, the default installed kernel on x86_64 platform in Oracle Linux 6 is the Unbreakable Enterprise Kernel. If you haven't already, I highly recommend you watch the replay of this webcast by Chris Mason on the performance improvements made in this kernel. # uname -r 2.6.32-100.28.5.el6.x86_64 The Unbreakable Enterprise Kernel is delivered via the package kernel-uek: [root@localhost ~]# yum info kernel-uek ... Installed Packages Name : kernel-uek Arch : x86_64 Version : 2.6.32 Release : 100.28.5.el6 Size : 84 M Repo : installed From repo : anaconda-OracleLinuxServer-201102031546.x86_64 Summary : The Linux kernel URL : http://www.kernel.org/ License : GPLv2 Description: The kernel package contains the Linux kernel (vmlinuz), the core of : any Linux operating system. The kernel handles the basic functions : of the operating system: memory allocation, process allocation, : device input and output, etc. ext4 file system The ext4 or fourth extended filesystem replaces ext3 as the default filesystem in Oracle Linux 6. # mount /dev/mapper/VolGroup-lv_root on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0") /dev/sda1 on /boot type ext4 (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) Red Hat compatible kernel Oracle Linux 6 also includes a Red Hat compatible kernel built directly from RHEL source. It's already installed, so booting it is a matter of editing /etc/grub.conf # rpm -qa | grep kernel-2.6.32 kernel-2.6.32-71.el6.x86_64 Oracle Linux 6 no longer includes a Red Hat compatible kernel with Oracle bug fixes. The only Red Hat compatible kernel included is the one built directly from RHEL source. Yum-only access to Unbreakable Linux Network (ULN) Oracle Linux 6 uses yum exclusively for access to Unbreakable Linux Network. To register your system with ULN, use the following command: # uln_register No Itanium Support Oracle Linux 6 is not supported on the Itanium (ia64) platform. Next Steps Read the release notes Download Oracle Linux 6 for free Discuss on the Oracle Linux forum

    Read the article

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion. That being said, the performance characteristics dramatically differ from native programming, and take some relearning if you’re used to doing performance optimization in most other languages, especially C, C++, and similar. However, there are times when revisiting tricks learned in native code plays a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be… The rules in C# when optimizing code are very different than in C or C++. Often, they’re exactly backwards. For example, in C and C++, lifting a variable out of loops in order to avoid memory allocations can often have huge advantages. If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler. Looking at the memory allocation graph is usually the key for spotting this routine, as it’s often “hidden” deep in the call graph. For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

        for (i=0; i<numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i]);
        }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data. As such, any performance optimization we could achieve would be greatly appreciated by our users. After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

        // Allocate some data required
        DataStructure* data = new DataStructure(num);

        // Call into a subroutine that passed around and manipulated this data highly
        CallSubroutine(data);

        // Read and use some values from here
        double values = data->Foo;

        // Cleanup
        delete data;
        // ...
        return bar;

    Normally, if “DataStructure” were a simple data type, I could just allocate it on the stack. However, its constructor, internally, allocated its own memory using new, so this wouldn’t eliminate the problem. In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same “data” memory instead of allocating. At the highest level, my code effectively changed to something like:

        DataStructure* data = new DataStructure(numberToProcess);
        for (i=0; i<numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i], data);
        }
        delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn’t something I wanted to do unless there was a significant benefit.
In this case, after profiling the new version, I found that it increased the overall performance dramatically – my main test case went from 35 minutes runtime down to 21 minutes.  This was such a significant improvement, I felt it was worth the reduction in maintainability. In C and C++, it’s generally a good idea (for performance) to: Reduce the number of memory allocations as much as possible, Use fewer, larger memory allocations instead of many smaller ones, and Allocate as high up the call stack as possible, and reuse memory I’ve seen many people try to make similar optimizations in C# code.  For good or bad, this is typically not a good idea.  The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea.  In this scenario, for example, I may have been much better off leaving the original code alone.  The reason for this is the garbage collector.  The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages.  First and foremost, it tends to make the code more maintainable – passing around object references tends to couple the methods together more than necessary, and overall increase the complexity of the code.  This is something that should be avoided unless there is a significant reason.  Second, (unlike C and C++) memory allocation of a single object in C# is normally cheap and fast.  Finally, and most critically, there is a large advantage to having short lived objects.  If you lift a variable out of the loop and reuse the memory, its much more likely that object will get promoted to Gen1 (or worse, Gen2).  This can cause expensive compaction operations to be required, and also lead to (at least temporary) memory fragmentation as well as more costly collections later. As such, I’ve found that it’s often (though not always) faster to leave memory allocations where you’d naturally place them – deep inside of the call graph, inside of the loops.  This causes the objects to stay very short lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole. In C#, I tend to: Keep variable declarations in the tightest scope possible Declare and allocate objects at usage While this tends to cause some of the same goals (reducing unnecessary allocations, etc), the goal here is a bit different – it’s about keeping the objects rooted for as little time as possible in order to (attempt) to keep them completely in Gen0, or worst case, Gen1.  It also has the huge advantage of keeping the code very maintainable – objects are used and “released” as soon as possible, which keeps the code very clean.  It does, however, often have the side effect of causing more allocations to occur, but keeping the objects rooted for a much shorter time. Now – nowhere here am I suggesting that these rules are hard, fast rules that are always true.  That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary.  In my current project, however, I ran across one of those nasty little pitfalls that’s something to keep in mind – interop changes the rules. In this case, I was dealing with an API that, internally, used some COM objects.  In this case, these COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph.  
Even though I was writing nice, clean managed code, the normal managed code rules for performance no longer apply.  After profiling to find the bottleneck in my code, I realized that my inner loop, a innocuous looking block of C# code, was effectively causing a set of native memory allocations in every iteration.  This required going back to a “native programming” mindset for optimization.  Lifting these variables and reusing them took a 1:10 routine down to 0:20 – again, a very worthwhile improvement. Overall, the lessons here are: Always profile if you suspect a performance problem – don’t assume any rule is correct, or any code is efficient just because it looks like it should be Remember to check memory allocations when profiling, not just CPU cycles Interop scenarios often cause managed code to act very differently than “normal” managed code. Native code can be hidden very cleverly inside of managed wrappers

    Read the article

  • Internal Mutation of Persistent Data Structures

    - by Greg Ros
    To clarify, when I mean use the terms persistent and immutable on a data structure, I mean that: The state of the data structure remains unchanged for its lifetime. It always holds the same data, and the same operations always produce the same results. The data structure allows Add, Remove, and similar methods that return new objects of its kind, modified as instructed, that may or may not share some of the data of the original object. However, while a data structure may seem to the user as persistent, it may do other things under the hood. To be sure, all data structures are, internally, at least somewhere, based on mutable storage. If I were to base a persistent vector on an array, and copy it whenever Add is invoked, it would still be persistent, as long as I modify only locally created arrays. However, sometimes, you can greatly increase performance by mutating a data structure under the hood. In more, say, insidious, dangerous, and destructive ways. Ways that might leave the abstraction untouched, not letting the user know anything has changed about the data structure, but being critical in the implementation level. For example, let's say that we have a class called ArrayVector implemented using an array. Whenever you invoke Add, you get a ArrayVector build on top of a newly allocated array that has an additional item. A sequence of such updates will involve n array copies and allocations. Here is an illustration: However, let's say we implement a lazy mechanism that stores all sorts of updates -- such as Add, Set, and others in a queue. In this case, each update requires constant time (adding an item to a queue), and no array allocation is involved. When a user tries to get an item in the array, all the queued modifications are applied under the hood, requiring a single array allocation and copy (since we know exactly what data the final array will hold, and how big it will be). Future get operations will be performed on an empty cache, so they will take a single operation. But in order to implement this, we need to 'switch' or mutate the internal array to the new one, and empty the cache -- a very dangerous action. However, considering that in many circumstances (most updates are going to occur in sequence, after all), this can save a lot of time and memory, it might be worth it -- you will need to ensure exclusive access to the internal state, of course. This isn't a question about the efficacy of such a data structure. It's a more general question. Is it ever acceptable to mutate the internal state of a supposedly persistent or immutable object in destructive and dangerous ways? Does performance justify it? Would you still be able to call it immutable? Oh, and could you implement this sort of laziness without mutating the data structure in the specified fashion?
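    For concreteness, a rough sketch of the queued-update idea described above, in C (memory reclamation, sharing bookkeeping and thread safety are all omitted; the names are made up):

        #include <stdlib.h>
        #include <string.h>

        typedef struct { size_t index; int value; } PendingSet;

        typedef struct {
            int        *items;          /* materialized contents, possibly shared  */
            size_t      count;
            PendingSet *pending;        /* queued Set operations, not yet applied  */
            size_t      pending_count;
        } ArrayVector;

        /* "Persistent" Set: records the update in a queue instead of copying the array. */
        ArrayVector *vector_set(const ArrayVector *v, size_t index, int value)
        {
            ArrayVector *copy = malloc(sizeof *copy);
            *copy = *v;                                   /* shares items for now */
            copy->pending = malloc((v->pending_count + 1) * sizeof *copy->pending);
            if (v->pending_count)
                memcpy(copy->pending, v->pending, v->pending_count * sizeof *copy->pending);
            copy->pending[v->pending_count] = (PendingSet){ index, value };
            copy->pending_count = v->pending_count + 1;
            return copy;
        }

        /* Get: the first read pays for one array copy that applies every queued update.
           The internal state is mutated here, but the observable value never changes. */
        int vector_get(ArrayVector *v, size_t index)
        {
            if (v->pending_count > 0) {
                int *fresh = malloc(v->count * sizeof *fresh);
                memcpy(fresh, v->items, v->count * sizeof *fresh);
                for (size_t i = 0; i < v->pending_count; ++i)
                    fresh[v->pending[i].index] = v->pending[i].value;
                v->items = fresh;                         /* the dangerous internal switch */
                free(v->pending);
                v->pending = NULL;
                v->pending_count = 0;
            }
            return v->items[index];
        }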

    Read the article
