Search Results

Search found 18811 results on 753 pages for 'dynamic memory allocation'.


  • dllimport C++ DLL In VB.net

    - by JFLow
    Hi experts out there, I'm stuck on DLL imports with a C++ DLL and I really need help getting past this. Here is the function in the C++ DLL that I want to call from my VB.NET code:

        bool LoadNewTestPlan(const char* szPlanFileName=" ");

    I've tried many ways in my VB.NET code but always get the error: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." I have tried passing in Byte(), marshalling with LPStr, SafeArray, and nothing works. Here is an example of my code, inside the module:

        <DllImport("HPVKIfc.dll", EntryPoint:="?LoadNewTestPlan@HPVKIfc@@QAE_NPBD@Z", CharSet:=CharSet.Ansi)> _
        Public Function LoadNewTestPlan(<MarshalAs(UnmanagedType.LPStr)> ByVal pln As String) As Boolean
        End Function

    Do you see anything wrong? Thanks in advance.

    Read the article

  • MIPS assembly: how to declare integer values in the .data section?

    - by Barney
    I'm trying to get my feet wet with MIPS assembly language using the MARS simulator. My main problem now is how do I initialize a set of memory locations so that I can access them later via assembly language instructions? For example, I want to initialize addresses 0x1001000 - 0x10001003 with the values 0x99, 0x87, 0x23, 0x45. I think this can be done in the data declaration (.data) section of my assembly program but I'm not sure of the syntax. Is this possible? Alternatively, in the .data section, how do I specify storing the integer values in some memory location (I don't care where, but I just want to reference them somewhere). So I'm looking for the C equivalent of "int x = 20, y=30, z=90;" I know how to do that using MIPS instructions but is it possible to declare something like that in the .data section of a MIPS assembly program?

    Read the article

  • Can connect to Samba server but cannot access shares?

    - by jlego
    I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web development server. It needs to share files with a PC running Windows 7 and a Mac running OS X Snow Leopard. I've set up Samba using the Samba configuration GUI tool, added users to Fedora and connected them as Samba users (the same usernames and passwords as on the Windows and Mac machines). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and the Samba client through the firewall and set the ethernet port to a trusted port in the firewall.

    Both the Windows and Mac machines can connect to the server and view the shares; however, when trying to access the shares, Windows throws error 0x80070035 "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but choosing a share gives the error "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for username and password, which when authenticated gives a list of shares; however, when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines behave similarly when trying to access the shares, I assume it is an issue with how Samba is configured.

    smb.conf:

        [global]
        workgroup = workgroup
        server string = Server
        log file = /var/log/samba/log.%m
        max log size = 50
        security = user
        load printers = yes
        cups options = raw
        printcap name = lpstat
        printing = cups

        [homes]
        comment = Home Directories
        browseable = no
        writable = yes

        [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = yes
        printable = yes

        [FileServ]
        comment = FileShare
        path = /media/FileServ
        read only = no
        browseable = yes
        valid users = user1, user2

        [webdev]
        comment = Web development
        path = /var/www/html/webdev
        read only = no
        browseable = yes
        valid users = user1

    How do I get Samba sharing working?

    UPDATE: Before this box I had another box with the same version of Fedora installed (16) and Samba working for these same computers. I started up the old machine and copied the smb.conf file from the old machine to the new one (editing the share definitions for the new shares, of course) and I still get the same errors on both client machines. The only difference in environment is the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs, though). Could either one of these be affecting Samba?

    UPDATE 2: As the directory I am trying to share is actually an entire internal disk, I have tried two things: 1) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine); 2) making a share that only includes one of the folders on the disk instead of the entire disk, again with my user as the owner. Both tests failed, giving me the same errors regarding the network address.

    UPDATE 3: Not sure exactly what I did, but now whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access denied message. However, I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem. Any suggestions?

    UPDATE 4: So I've completely reinstalled Fedora and Samba now. I've created a share on the first hard drive (the one Fedora is installed on) and I can access that fine from Windows. However, when I try to share any data on the second disk, I am receiving the same error. This, I believe, is the problem. I think I need to change some things in fstab or fdisk or something.

    UPDATE 5: So in fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mountpoint directory, which now allows me to access the shares on the Windows machine; however, I cannot see any of the files in the directory on the Windows machine. (They are there; I can see them in the Fedora file browser locally.)

    UPDATE 6: Figured it out. See answer below.

    Read the article

  • Does the CLR store small values in 'natural' sized locations?

    - by izb
    In Java, a byte or short is stored in the JVM's 'natural' word length, i.e. for the most part, 32-bits. An exception would be an array of bytes, where each byte occupies a byte of memory. Does the CLR do the same thing? If it does do this, in what situations are there exceptions to this? E.g. how much memory does this occupy?

        struct MyStruct
        {
            short s1;
            short s2;
        }

    Read the article

  • Asynchronous readback from opengl front buffer using multiple PBO's

    - by KillianDS
    I am developing an application that needs to read back the whole frame from the front buffer of an OpenGL application. I can hijack the application's OpenGL library and insert my code on swapbuffers. At the moment I am successfully using a simple but excruciatingly slow glReadPixels command without PBOs. Now I read about using multiple PBOs to speed things up. While I think I've found enough resources to actually program that (isn't that hard), I have some operational questions left. I would do something like this:

    1. Create a series (e.g. 3) of PBOs.
    2. Use glReadPixels in my swapBuffers override to read data from the front buffer to a PBO (should be fast and non-blocking, right?).
    3. Create a separate thread to call glMapBufferARB, once per PBO after a glReadPixels, because this will block until the pixels are in client memory.
    4. Process the data from step 3.

    Now my main concern is of course in steps 2 and 3. I read about glReadPixels used on PBOs being non-blocking; will this be an issue if I issue new OpenGL commands after that very fast? Will those OpenGL commands block? Or will they continue (my guess), and if so, I guess only swapbuffers can be a problem: will it stall, or will glReadPixels from the front buffer be many times faster than swapping (about every 15-30 ms), or, worst case, will swapbuffers be executed while glReadPixels is still reading data to the PBO? My current guess is this logic will do something like this: copy FRONT_BUFFER -> generic place in VRAM, then copy VRAM -> RAM. But I have no idea which of those 2 is the real bottleneck and, moreover, what the influence on the normal OpenGL command stream is.

    Then in step 3: is it wise to do this asynchronously in a thread separated from the normal OpenGL logic? At the moment I think not; it seems you have to restore buffer operations to normal after doing this and I can't install synchronization objects in the original code to temporarily block those. So I think my best option is to define a certain swapbuffer delay before reading them out, e.g. calling glReadPixels on PBO i%3 and glMapBufferARB on PBO (i+2)%3 in the same thread, resulting in a delay of 2 frames. Also, when I call glMapBufferARB to use data in client memory, will this be the bottleneck or will glReadPixels (asynchronously) be the bottleneck?

    And finally, if you have some better ideas to speed up frame readback from the GPU in OpenGL, please tell me, because this is a painful bottleneck in my current system. I hope my question is clear enough; I know the answer will probably also be somewhere on the internet, but I mostly came up with results that used PBOs to keep buffers in video memory and do processing there. I really need to read back the front buffer to RAM and I do not find any clear explanations about performance in that case (which I need; I cannot rely on "it's faster", I need to explain why it's faster). Thank you.

    Read the article

  • Python RSS Feeder + Jquery RSS Reader

    - by Panther24
    I'm trying to implement an RSS feeder in Python using PyRSS2Gen and it created an XML file. Now in jQuery I'm using the Google AJAX API's RSS reader and trying to read it. I get an error saying something is wrong. I'm unable to figure out the issue. I've checked some documentation but am not able to get it. This is how my XML file looks:

        <rss version="2.0">
          <channel>
            <title>Andrew's PyRSS2Gen feed</title>
            <link>http://www.dalkescientific.com/Python/PyRSS2Gen.html</link>
            <description>The latest news about PyRSS2Gen, a Python library for generating RSS2 feeds</description>
            <lastBuildDate>Fri, 26 Mar 2010 13:10:55 GMT</lastBuildDate>
            <generator>PyRSS2Gen-1.0.0</generator>
            <docs>http://blogs.law.harvard.edu/tech/rss</docs>
            <item>
              <title>PyRSS2Gen-0.0 released</title>
              <link>http://www.dalkescientific.com/news/030906-PyRSS2Gen.html</link>
              <description>Dalke Scientific today announced PyRSS2Gen-0.0, a library for generating RSS feeds for Python.</description>
              <guid isPermaLink="true">http://www.dalkescientific.com/news/030906-PyRSS2Gen.html</guid>
              <pubDate>Sat, 06 Sep 2003 21:31:00 GMT</pubDate>
            </item>
            <item>
              <title>Thoughts on RSS feeds for bioinformatics</title>
              <link>http://www.dalkescientific.com/writings/diary/archive/2003/09/06/RSS.html</link>
              <description>One of the reasons I wrote PyRSS2Gen was to experiment with RSS for data collection in bioinformatics. Last year I came across...</description>
              <guid isPermaLink="true">http://www.dalkescientific.com/writings/diary/archive/2003/09/06/RSS.html</guid>
              <pubDate>Sat, 06 Sep 2003 21:49:00 GMT</pubDate>
            </item>
          </channel>
        </rss>

    And my JS looks like this:

        <!-- ++Begin Dynamic Feed Wizard Generated Code++ -->
        <!--
        // Created with a Google AJAX Search and Feed Wizard
        // http://code.google.com/apis/ajaxsearch/wizards.html
        -->
        <!--
        // The following div element will end up holding the actual feed control.
        // You can place this anywhere on your page.
        -->
        <!-- Google Ajax Api -->
        <script src="../js/jsapi.js" type="text/javascript"></script>
        <!-- Dynamic Feed Control and Stylesheet -->
        <script src="../js/gfdynamicfeedcontrol.js" type="text/javascript"></script>
        <link rel="stylesheet" type="text/css" href="../css/gfdynamicfeedcontrol.css">
        <script type="text/javascript">
          function LoadDynamicFeedControl() {
            var feeds = [
              {title: 'Demo', url: 'wcarss.xml'}
            ];
            var options = {
              stacked: false,
              horizontal: false,
              title: ""
            };
            new GFdynamicFeedControl(feeds, 'feed-control', options);
          }
          // Load the feeds API and set the onload callback.
          google.load('feeds', '1');
          google.setOnLoadCallback(LoadDynamicFeedControl);
        </script>
        <!-- ++End Dynamic Feed Control Wizard Generated Code++ -->

    Can someone help me and tell me what I'm doing wrong here? NOTE: Does the XML file have to be deployed? Currently it's on my local machine and I'm trying to just call it.
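
    One note on the closing question: the Google AJAX Feeds API loads feeds through Google's servers, so the feed URL passed to GFdynamicFeedControl has to be publicly reachable over HTTP; an XML file that exists only on the local machine cannot be fetched. For reference, a minimal sketch of generating such a feed with PyRSS2Gen (the output path and item fields here are placeholders, not taken from the post above):

        import datetime
        import PyRSS2Gen

        # Build the feed object; each RSSItem becomes an <item> element.
        rss = PyRSS2Gen.RSS2(
            title="Demo feed",
            link="http://example.com/feed.xml",
            description="Example feed generated with PyRSS2Gen",
            lastBuildDate=datetime.datetime.utcnow(),
            items=[
                PyRSS2Gen.RSSItem(
                    title="First entry",
                    link="http://example.com/entries/1",
                    description="Body of the first entry.",
                    guid=PyRSS2Gen.Guid("http://example.com/entries/1"),
                    pubDate=datetime.datetime.utcnow(),
                ),
            ],
        )

        # Write the XML to a location the web server exposes publicly.
        with open("wcarss.xml", "w") as handle:
            rss.write_xml(handle)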

    Read the article

  • SqlDataReader / DbDataReader implementation question

    - by Jose
    Does anyone know how DbDataReaders actually work? We can use SqlDataReader as an example. When you do the following:

        cmd.CommandText = "SELECT * FROM Customers";
        var rdr = cmd.ExecuteReader();
        while (rdr.Read())
        {
            // Do something
        }

    Does the data reader have all of the rows in memory, or does it just grab one, and then when Read is called, does it go to the db and grab the next one? It seems just bringing one into memory would be bad performance, but bringing all of them would make it take a while on the call to ExecuteReader. I know I'm the consumer of the object and it doesn't really matter how they implement it, but I'm just curious, and I think that I would probably spend a couple hours in Reflector to get an idea of what it's doing, so thought I'd ask someone that might know. I'm just curious if anyone has an idea.
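
    For what it's worth, SqlDataReader is a forward-only, streaming reader: rows are pulled from the network buffers as Read() is called, so it neither materializes the whole result set up front nor makes a round trip per row. As a loose analogy only (a Python DB-API cursor, not the SqlClient implementation), the two styles the question contrasts look like this:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
        conn.executemany("INSERT INTO customers VALUES (?, ?)",
                         [(i, "name-%d" % i) for i in range(1000)])

        cursor = conn.execute("SELECT * FROM customers")

        # Streaming style: pull one row at a time, like looping on reader.Read().
        while True:
            row = cursor.fetchone()
            if row is None:
                break
            # process row here

        # Materializing style: the entire result set held in memory at once.
        all_rows = conn.execute("SELECT * FROM customers").fetchall()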

    Read the article

  • OutOfMemoryError what to increase and how?

    - by Pentium10
    I have a really long collection with 10k items, and running toString() on the object crashes. I need to use this output somehow.

        05-21 12:59:44.586: ERROR/dalvikvm-heap(6415): Out of memory on a 847610-byte allocation.
        05-21 12:59:44.636: ERROR/dalvikvm(6415): Out of memory: Heap Size=15559KB, Allocated=12932KB, Bitmap Size=613KB
        05-21 12:59:44.636: ERROR/AndroidRuntime(6415): Uncaught handler: thread main exiting due to uncaught exception
        05-21 12:59:44.636: ERROR/AndroidRuntime(6415): java.lang.OutOfMemoryError
        05-21 12:59:44.636: ERROR/AndroidRuntime(6415):     at java.lang.AbstractStringBuilder.enlargeBuffer(AbstractStringBuilder.java:97)
        05-21 12:59:44.636: ERROR/AndroidRuntime(6415):     at java.lang.AbstractStringBuilder.append0(AbstractStringBuilder.java:155)
        05-21 12:59:44.636: ERROR/AndroidRuntime(6415):     at java.lang.StringBuilder.append(StringBuilder.java:202)
        05-21 12:59:44.636: ERROR/AndroidRuntime(6415):     at java.util.AbstractCollection.toString(AbstractCollection.java:384)

    I need a step-by-step guide on how to increase the heap for an Android application. I don't use the command line.

    Read the article

  • JBoss Seam - Jetty - Virtualhosting

    - by Walter White
    Hi all, I am trying to cutback on the memory usage of my server and would like to optimize the architecture. I currently deploy 2 separate web applications to Jetty 6.1.22 that correspond to different virtualhosts. They have pretty much the same application stack except one has fewer components and are styled differently (content, images, css, etc.). If I change my design pattern over to EJB / EAR + 2 WARS embedded, will that lower the memory consumption? Will that give me a single instance of JBoss Seam, Quartz, and all of my components? They must use a different datasource. Thanks, Walter

    Read the article

  • pyinstaller: 2 instances of my cherrypy app exe get executed.

    - by d.c
    I have a CherryPy app that I've made into an exe with PyInstaller. Now when I run the exe it loads itself twice into memory. Watching the Task Manager shows the first instance load into about 1k, then a second later a second instance of the exe loads into about 3k RAM. If I close the bigger one, both processes die. If I close the smaller one, only that one dies. Loading the exe with subprocess, if I try proc.kill(), it only kills the small one, leaving the other running in memory. Is this a side effect of using CherryPy and PyInstaller together?
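
    Two things worth ruling out here (assumptions about this setup, not details from the original post): a PyInstaller --onefile build runs a small bootloader process that unpacks the bundle to a temp directory and then launches the real program as a child process, so a small parent plus a larger child is normal for one-file builds; and CherryPy's autoreload plugin can respawn the process when it thinks source files have changed, which also shows up as a second instance. A minimal sketch of starting CherryPy with autoreload turned off:

        import cherrypy

        class Root(object):
            def index(self):
                return "hello"
            index.exposed = True

        if __name__ == "__main__":
            # Autoreload watches source files and restarts the process when they
            # change; inside a frozen exe that restart appears as a second copy.
            cherrypy.config.update({"engine.autoreload.on": False})
            cherrypy.quickstart(Root())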

    Read the article

  • In LINQ-SQL, wrapping the DataContext in a using statement - pros and cons

    - by hIpPy
    Can someone pitch in their opinion about the pros/cons of wrapping the DataContext in a using statement or not in LINQ-SQL, in terms of factors such as performance, memory usage, ease of coding, the right thing to do, etc.?

    Update: In one particular application, I experienced that, without wrapping the DataContext in a using block, the amount of memory usage kept on increasing as the live objects were not released for GC. As in, in the example below, if I hold the reference to the List of q objects and access entities of q, I create an object graph that is not released for GC.

    DataContext with using:

        using (DBDataContext db = new DBDataContext())
        {
            var q = from x in db.Tables
                    where x.Id == someId
                    select x;
            return q.ToList();
        }

    DataContext without using and kept alive:

        DBDataContext db = new DBDataContext();
        var q = from x in db.Tables
                where x.Id == someId
                select x;
        return q.ToList();

    Thanks.

    Read the article

  • OSX 10.6 Cisco IPSEC strange behavior

    - by tair
    I'm trying to connect to my company's Cisco IPSEC VPN over DSL Internet. I managed to connect successfully using the Cisco VPN Client; now I'm trying to switch to the OS X 10.6 native client because of licensing issues. The problem is that the connection fails with a dialog box containing the message: "The negotiation with the VPN server failed. Verify the server address and try reconnecting." I checked the logs:

        Jun 29 13:10:39 racoon[4551]: Connecting.
        Jun 29 13:10:39 racoon[4551]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 1).
        Jun 29 13:10:39 racoon[4551]: IKEv1 Phase1 AUTH: success. (Initiator, Aggressive-Mode Message 2).
        Jun 29 13:10:39 racoon[4551]: IKE Packet: receive success. (Initiator, Aggressive-Mode message 2).
        Jun 29 13:10:39 racoon[4551]: IKEv1 Phase1 Initiator: success. (Initiator, Aggressive-Mode).
        Jun 29 13:10:39 racoon[4551]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 3).
        Jun 29 13:10:42 racoon[4551]: IKE Packet: transmit success. (Mode-Config message).
        Jun 29 13:10:42 racoon[4551]: IKEv1 XAUTH: success. (XAUTH Status is OK).
        Jun 29 13:10:42 racoon[4551]: IKE Packet: transmit success. (Mode-Config message).
        Jun 29 13:10:42 racoon[4551]: IKEv1 Config: retransmited. (Mode-Config retransmit).
        Jun 29 13:10:42 racoon[4551]: IKE Packet: receive success. (MODE-Config).
        Jun 29 13:10:42 configd[19]: event_callback: Address added. previous interface setting (name: en1, address: 192.168.1.107), current interface setting (name: u92.168.54.147, subnet: 255.255.255.0, destination: 192.168.54.147).
        Jun 29 13:10:42 configd[19]: network configuration changed.
        Jun 29 13:10:42 vmnet-bridge[111]: Dynamic store changed
        Jun 29 13:10:42 named[62]: not listening on any interfaces
        Jun 29 13:10:58: --- last message repeated 1 time ---
        Jun 29 13:10:58 configd[19]: SCNCController: Disconnecting. (Connection tried to negotiate for, 16 seconds).
        Jun 29 13:10:58 racoon[4551]: IKE Packet: transmit success. (Information message).
        Jun 29 13:10:58 racoon[4551]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA).
        Jun 29 13:10:58 racoon[4551]: Disconnecting. (Connection tried to negotiate for, 19.113382 seconds).
        Jun 29 13:10:58 named[62]: not listening on any interfaces
        Jun 29 13:10:58 vmnet-bridge[111]: Dynamic store changed
        Jun 29 13:10:58 named[62]: not listening on any interfaces
        Jun 29 13:10:58 configd[19]: network configuration changed.

    Then I opened Terminal, started pinging a server behind the VPN, and tried to connect again. Now the connection is OK! Logs this time:

        Jun 29 13:46:53 racoon[8136]: Connecting.
        Jun 29 13:46:53 racoon[8136]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 1).
        Jun 29 13:46:53 racoon[8136]: IKEv1 Phase1 AUTH: success. (Initiator, Aggressive-Mode Message 2).
        Jun 29 13:46:53 racoon[8136]: IKE Packet: receive success. (Initiator, Aggressive-Mode message 2).
        Jun 29 13:46:53 racoon[8136]: IKEv1 Phase1 Initiator: success. (Initiator, Aggressive-Mode).
        Jun 29 13:46:53 racoon[8136]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 3).
        Jun 29 13:46:56 racoon[8136]: IKE Packet: transmit success. (Mode-Config message).
        Jun 29 13:46:56 racoon[8136]: IKEv1 XAUTH: success. (XAUTH Status is OK).
        Jun 29 13:46:56 racoon[8136]: IKE Packet: transmit success. (Mode-Config message).
        Jun 29 13:46:56 racoon[8136]: IKEv1 Config: retransmited. (Mode-Config retransmit).
        Jun 29 13:46:56 racoon[8136]: IKE Packet: receive success. (MODE-Config).
        Jun 29 13:46:56 configd[19]: event_callback: Address added. previous interface setting (name: en1, address: 192.168.1.107), current interface setting (address: 192.168.54.149, subnet: 255.255.255.0, destination: 192.168.54.149).
        Jun 29 13:46:56 vmnet-bridge[111]: Dynamic store changed
        Jun 29 13:46:56 named[62]: not listening on any interfaces
        Jun 29 13:46:56 configd[19]: network configuration changed.
        Jun 29 13:46:56 named[62]: not listening on any interfaces
        Jun 29 13:46:56 racoon[8136]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1).
        Jun 29 13:46:56 racoon[8136]: IKE Packet: receive success. (Initiator, Quick-Mode message 2).
        Jun 29 13:46:56 racoon[8136]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3).
        Jun 29 13:46:56 racoon[8136]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode).
        Jun 29 13:46:56 racoon[8136]: Connected.
        Jun 29 13:46:56 configd[19]: SCNCController: Connected.

    I tested it several times and it consistently behaves the same. What is the magic?

    Read the article

  • Why won't the VisualVM Profiler profile my application?

    - by Luke
    I've created a simple 1-file Java application that iterates through a loop, calls some functions, allocates some memory, adds some numbers, etc. I run that application via Eclipse's Run As -> Java Application. The running application shows up in Java VisualVM under Local. I double-click on that application and go to the Profiler tab. The default settings are:

        Start profiling from classes: my.main.package.**
        Do not profile classes: java.*, javax.*, sun.*, sunw.*, com.sun.*

    I click on CPU. The CPU and Memory buttons gray out. Nothing happens. The Status says profiling inactive. When my application terminates, the Status says application terminated. What am I doing wrong here? Are there some settings I need to tweak? Do I need to set a VM flag when I launch my application?

    Read the article

  • Silverlight SOS (Son of Strike) documentation

    - by Kris Erickson
    Is there any Microsoft, or even non-official, documentation for SOS for Silverlight? Other than a few web posts I have seen zero documentation for it on MSDN. Even official documentation for the CLR version of SOS seems hard to find; this ancient article mentions a sos.htm file that is included in the Windows SDK, but it doesn't appear to be there any more. Any pointers to debugging Silverlight with SOS? I have found the following blog posts but am looking for more information:

        http://davybrion.com/blog/2009/08/finding-memory-leaks-in-silverlight-with-windbg/
        http://www.ningzhang.org/2008/12/19/silverlight-debugging-with-windbg-and-sos/
        http://debuggingblog.com/wp/2009/07/07/windbg-extension-sos-in-clr-40net-framework-40-ctp-net-runtime-dll-renamed-and-sos-commands-just-got-richer/
        http://www.netfxharmonics.com/label/debugging
        http://blogs.msdn.com/b/tess/archive/2008/08/21/debugging-silverlight-applications-with-windbg-and-sos-dll.aspx
        http://blogs.msdn.com/b/delay/archive/2009/03/11/where-s-your-leak-at-using-windbg-sos-and-gcroot-to-diagnose-a-net-memory-leak.aspx
        http://blogs.msdn.com/b/delay/archive/2009/03/09/controls-are-like-diapers-you-don-t-want-a-leaky-one-implementing-the-weakevent-pattern-on-silverlight-with-the-weakeventlistener-class.aspx

    Read the article

  • Convert a byte array to a class containing a byte array in C#

    - by Mathijs
    I've got a C# function that converts a byte array to a class, given its type:

        IntPtr buffer = Marshal.AllocHGlobal(rawsize);
        Marshal.Copy(data, 0, buffer, rawsize);
        object result = Marshal.PtrToStructure(buffer, type);
        Marshal.FreeHGlobal(buffer);

    I use sequential structs:

        [StructLayout(LayoutKind.Sequential)]
        public new class PacketFormat : Packet.PacketFormat { }

    This worked fine, until I tried to convert to a struct/class containing a byte array:

        [StructLayout(LayoutKind.Sequential)]
        public new class PacketFormat : Packet.PacketFormat
        {
            public byte header;
            public byte[] data = new byte[256];
        }

    Marshal.SizeOf(type) returns 16, which is too low (it should be 257) and causes Marshal.PtrToStructure to fail with the following error: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." I'm guessing that using a fixed array would be a solution, but can it also be done without having to resort to unsafe code?

    Read the article

  • OutOfMemoryException - out of ideas

    - by Captain Comic
    Hi, I have a .NET Windows service that is constantly throwing OutOfMemoryException. The service has two builds, for x86 and x64 Windows; however, on x64 it consumes a lot more memory. I have tried profiling it with various memory profilers, but I cannot get a clue what the problem is. The diagnosis so far: the service consumes a lot of VM size. I also tried to look at performance counters (perfmon.exe). What I can see is that the heap size is growing and %GC time is 19%. My application has threads and locking objects, DB connections and a WCF interface. See the first app in the list in this screenshot of the performance counters view: http://s006.radikal.ru/i215/1003/0b/ddb3d6c80809.jpg

    Read the article

  • openmp in mex : stackoverflow error

    - by Edwin
    I have the following fragment of code that is getting me the stack overflow error:

        #pragma omp parallel shared(Mo1, Mo2, sum_normalized_p_gn, Data, Mean_Out, Covar_Out, Prior_Out, det) private(i) num_threads( number_threads )
        {
            // every thread has a new copy
            double* normalized_p_gn = (double*)malloc(NMIX*sizeof(double));

            #pragma omp critical
            {
                int id = omp_get_thread_num();
                int threads = omp_get_num_threads();
                mexEvalString("drawnow");
            }

            #pragma omp for
            // some parallel process.....
        }

    The variables declared as shared are created by malloc and consume a large amount of memory. There are two questions regarding the code above: 1) Why would this generate the stack overflow error (i.e. segmentation fault) before it goes into the parallel for loop? It works fine when it runs in sequential mode. 2) Am I right to dynamically allocate memory for each thread like "normalized_p_gn" above? Regards, Edwin

    Read the article

  • ungetc in Python

    - by Dragos Toader
    Some file read functions in Python (readlines()) copy the file contents to memory (as a list). I need to process a file that's too large to be copied into memory, and as such need to use a file pointer (to access the file one byte at a time), as in C getc(). The additional requirement I have is that I'd like to rewind the file pointer to previous bytes, like C ungetc(). Is there a way to do this in Python? Also, in Python I can read one line at a time with readline(); is there a way to read the previous line, going backward?
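
    Python file objects don't expose getc/ungetc directly, but the same effect falls out of read(1) plus a relative seek(); a minimal sketch (the wrapper class and file name are just for illustration):

        class ByteReader(object):
            """Read a file one byte at a time, with a one-byte push-back."""

            def __init__(self, path):
                self.handle = open(path, "rb")   # binary mode so seek/tell use byte offsets

            def getc(self):
                byte = self.handle.read(1)
                return byte if byte else None    # None at end of file

            def ungetc(self):
                # Step the file pointer back one byte, like C ungetc().
                self.handle.seek(-1, 1)          # whence=1: relative to current position

            def close(self):
                self.handle.close()

        reader = ByteReader("big.dat")
        first = reader.getc()
        reader.ungetc()
        assert reader.getc() == first
        reader.close()

    Reading a previous line has no built-in equivalent; you would have to remember tell() offsets of line starts yourself and seek() back to them, since readline() only moves forward.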

    Read the article

  • System architecture: simple approach for setting up background tasks behind a web application -- wil

    - by Tim Molendijk
    I have a Django web application and I have some tasks that should operate (or actually: be initiated) on the background. The application is deployed as follows: apache2-mpm-worker; mod_wsgi in daemon mode (1 process, 15 threads). The background tasks have the following characteristics:

    - they need to operate at a regular interval (every 5 minutes or so);
    - they require the application context (i.e. the application packages need to be available in memory);
    - they do not need any input other than database access, in order to perform some not-so-heavy tasks such as sending out e-mail and updating the state of the database.

    Now I was thinking that the most simple approach to this problem would be to piggyback on the existing application process (as spawned by mod_wsgi). By implementing the task as part of the application and providing an HTTP interface for it, I would prevent the overhead of another process that is holding all of the application in memory. A simple cronjob can be set up that sends a request to this HTTP interface every 5 minutes and that would be it. Since the application process provides 15 threads and the tasks are quite lightweight and only run every 5 minutes, I figure they would not hinder the performance of the web application's user-facing operations.

    Yet... I have done some online research and I have seen nobody advocating this approach. Many articles suggest a significantly more complex approach based on a full-blown messaging component (such as Celery, which uses RabbitMQ). Although that's sexy, it sounds like overkill to me. Some articles suggest setting up a cronjob that executes a script which performs the tasks. But that doesn't feel very attractive either, as it results in creating a new process that loads the entire application into memory, performs some tiny task, and destroys the process again. And this is repeated every 5 minutes. It does not sound like an elegant solution.

    So, I'm looking for some feedback on my suggested approach as described two paragraphs above. Is my reasoning correct? Am I overlooking (potential) problems? What about my assumption that the application's performance will not be impeded?
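
    A minimal sketch of the piggyback approach described above, assuming a Django project; the view, URL, token value and task bodies are placeholders, not something taken from the original post:

        # tasks_view.py -- hypothetical view that cron pokes every 5 minutes.
        import threading

        from django.http import HttpResponse, HttpResponseForbidden

        TASK_LOCK = threading.Lock()     # prevents overlapping runs inside this process
        SHARED_SECRET = "change-me"      # placeholder; keep the real value out of source control

        def send_pending_emails():
            pass                         # placeholder for the real task

        def update_database_state():
            pass                         # placeholder for the real task

        def run_periodic_tasks(request):
            if request.GET.get("token") != SHARED_SECRET:
                return HttpResponseForbidden("wrong token")
            if not TASK_LOCK.acquire(False):         # a previous run is still busy
                return HttpResponse("already running")
            try:
                send_pending_emails()
                update_database_state()
            finally:
                TASK_LOCK.release()
            return HttpResponse("ok")

        # crontab entry on the server (placeholder URL):
        # */5 * * * * curl -s "http://localhost/internal/run-tasks/?token=change-me" > /dev/null

    Note that the lock only covers one process; that matches the single-process mod_wsgi daemon described above, but it would not coordinate runs across multiple daemon processes.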

    Read the article

  • Fast permutation -> number -> permutation mapping algorithms

    - by ijw
    I have n elements. For the sake of an example, let's say, 7 elements: 1234567. I know there are 7! = 5040 permutations possible of these 7 elements. I want a fast algorithm comprising two functions: f(number) maps a number between 0 and 5039 to a unique permutation, and f'(permutation) maps the permutation back to the number that it was generated from. I don't care about the correspondence between number and permutation, providing each permutation has its own unique number. So, for instance, I might have functions where

        f(0) = '1234567'
        f'('1234567') = 0

    The fastest algorithm that comes to mind is to enumerate all permutations and create a lookup table in both directions, so that, once the tables are created, f(0) would be O(1) and f('1234567') would be a lookup on a string. However, this is memory hungry, particularly when n becomes large. Can anyone propose another algorithm that would work quickly and without the memory disadvantage?
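
    This is exactly what the factorial number system (Lehmer code) provides: digit i of the index chooses which of the remaining elements comes next, so no lookup table is needed. A short sketch in Python (the function names are mine; the O(n^2) list operations are trivial for n = 7):

        from math import factorial

        def index_to_permutation(index, elements):
            """Map an index in [0, n!) to a permutation of 'elements' (Lehmer decode)."""
            pool = list(elements)
            result = []
            for remaining in range(len(pool), 0, -1):
                digit, index = divmod(index, factorial(remaining - 1))
                result.append(pool.pop(digit))   # take the digit-th remaining element
            return result

        def permutation_to_index(perm, elements):
            """Inverse mapping: a permutation back to its index (Lehmer encode)."""
            pool = list(elements)
            index = 0
            for item in perm:
                digit = pool.index(item)
                index += digit * factorial(len(pool) - 1)
                pool.pop(digit)
            return index

        digits = "1234567"
        assert "".join(index_to_permutation(0, digits)) == "1234567"
        assert permutation_to_index("1234567", digits) == 0
        assert permutation_to_index(index_to_permutation(1000, digits), digits) == 1000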

    Read the article

  • Are stack based arrays possible in C#?

    - by Bob
    Let's say, hypothetically (read: I don't think I actually need this, but I am curious as the idea popped into my head), one wanted an array of memory set aside locally on the stack, not on the heap. For instance, something like this:

        private void someFunction()
        {
            int[20] stackArray; // C style; I know the size and it's set in stone
        }

    I'm guessing the answer is no. All I've been able to find is heap based arrays. If someone were to need this, would there be any workarounds? Is there any way to set aside a certain amount of sequential memory in a "value type" way? Or are structs with named parameters the only way (like the way the Matrix struct in XNA has 16 named parameters (M11-M44))?

    Read the article

  • expr non-numeric argument shell script

    - by Kimi
    The check below is not working: expr reports a "non-numeric argument" error in my shell script, and the test always goes to the else branch. Please tell me what the mistake is. Earlier I was not using a while loop and the same thing was working fine; now that I have put it inside the while loop it is not working.

        echo "`${BOLD}` ***** Checking Memory Utilization User*****`${UNBOLD}`"
        echo "==================================================="
        IFS='|'
        cat configMachineDetails.txt | grep -v "^#" | while read MachineType UserName MachineName
        do
            export MEMORY_USAGE1=`ssh -f -T ${UserName}@${MachineName} prstat -t -s rss 1 2 | tr '%' ' ' | awk '$5>5.0'`
            export LEN=`echo "$MEMORY_USAGE1" | wc -l`
            export CNPROC=`echo "$MEMORY_USAGE1" | grep "NPROC" | wc -l`
            export CTotal=`echo "$MEMORY_USAGE1" | grep "Total" | wc -l`
            if [ $LEN = `expr $CNPROC + $CTotal` ]
            then
                echo "`${BOLD}`**************All usages are normal !!!!!! *************`${UNBOLD}`"
            else
                echo "`${BOLD}`**** Memory(%) is more than 5% in MachineType $MachineType UserName $UserName MachineName $MachineName *******`${UNBOLD}`"
                echo "===================================================="
                echo "$MEMORY_USAGE1"
            fi
        done

    Read the article
