Search Results

Search found 9889 results on 396 pages for 'pointer speed'.


  • Advantages of Thread pooling in embedded systems

    - by Microkernel
    I am looking at the advantages of the thread-pooling design pattern in embedded systems. I have listed a few advantages below; please go through them, comment, and suggest any other possible advantages I am missing.
    - Scalability in systems like ucos-2 where there is a limit on the number of threads.
    - The ability to give any task more capacity when necessary, e.g. garbage collection (in a normal system, if garbage collection runs under one task it's not possible to speed it up, but with a thread pool we can easily speed it up).
    - The ability to set a limit on the maximum system load.
    Please suggest anything I am missing.
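
    For what the pattern looks like in code, here is a generic C++11 sketch of a fixed-size pool (illustrative only -- not ucos-2 specific, and the task and queue types are assumptions, not anything from the question):

        #include <condition_variable>
        #include <cstddef>
        #include <functional>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <vector>

        // Minimal fixed-size thread pool: the worker count is capped up front,
        // which is the property that matters on an RTOS with a hard thread limit.
        class ThreadPool {
        public:
            explicit ThreadPool(std::size_t workers) {
                for (std::size_t i = 0; i < workers; ++i)
                    threads_.emplace_back([this] { Run(); });
            }
            ~ThreadPool() {
                {
                    std::lock_guard<std::mutex> lock(m_);
                    done_ = true;
                }
                cv_.notify_all();
                for (auto& t : threads_) t.join();
            }
            void Submit(std::function<void()> task) {
                {
                    std::lock_guard<std::mutex> lock(m_);
                    tasks_.push(std::move(task));
                }
                cv_.notify_one();
            }
        private:
            void Run() {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(m_);
                        cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                        if (done_ && tasks_.empty()) return;
                        task = std::move(tasks_.front());
                        tasks_.pop();
                    }
                    task();  // run the task outside the lock
                }
            }
            std::vector<std::thread> threads_;
            std::queue<std::function<void()>> tasks_;
            std::mutex m_;
            std::condition_variable cv_;
            bool done_ = false;
        };

    The fixed worker count is what matters when the OS caps the number of threads, and "speeding up" a job such as garbage collection then amounts to submitting its work as several tasks instead of one.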

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times, and I'm just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Is there a big difference in time between an i7 and a Xeon? Etc., etc. I just need advice from people who've put together serious build clusters. We've got most of the usual things to speed up builds in place (precompiled headers, ccache, local gigabit connections between hosts, tons of RAM, etc.), so please just give advice on the best processor to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay

    Read the article

  • Stack / base pointers in assembly

    - by flyingcrab
    I know this topic has been covered ad nauseam here and in other places on the internet, but hopefully the question is a simple one as I try to get my head around assembly... So if I understand correctly, the ebp (base pointer) will point to the top of the stack, and the esp (stack pointer) will point to the bottom -- since the stack grows downward, esp therefore points to the 'current location'. So on a function call, once you've saved ebp on the stack, you insert a new stack frame for the function. So in the case of the image below, if you started from N-3 you would go to N-2 with a function call. But when you are at N-2, is your ebp == 25 and esp == 24 (at least initially, before any data is placed on the stack)? Is this correct, or am I off on a tangent here? Thanks!
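
    For concreteness, a tiny function with the classic unoptimized 32-bit x86 prologue/epilogue sketched in comments (illustrative only -- a real compiler may omit the frame pointer entirely):

        // Classic (unoptimized) 32-bit x86 prologue/epilogue for this function:
        //
        //   push ebp          ; save the caller's base pointer (esp moves down by 4)
        //   mov  ebp, esp     ; ebp now marks the base of the NEW frame (ebp == esp here)
        //   sub  esp, 4       ; reserve space for 'local'; esp moves further down
        //   ...
        //   mov  esp, ebp     ; discard locals
        //   pop  ebp          ; restore the caller's frame
        //   ret
        //
        // So ebp stays fixed at the base of the new frame for the whole call, while
        // esp keeps moving downward as locals and arguments are pushed -- esp is the
        // 'current location', ebp the anchor used to address locals (ebp-4) and
        // incoming arguments (ebp+8).
        int callee(int x) {
            int local = x + 1;   // addressed as [ebp-4] in the layout sketched above
            return local;
        }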

    Read the article

  • ungetc in Python

    - by Dragos Toader
    Some file-read functions in Python, such as readlines(), copy the file contents to memory (as a list). I need to process a file that's too large to be copied into memory, and as such I need to use a file pointer (to access the file one byte at a time) -- as with getc() in C. The additional requirement I have is that I'd like to rewind the file pointer to previous bytes, like ungetc() in C. Is there a way to do this in Python? Also, in Python I can read one line at a time with readline(). Is there a way to read the previous line, going backward?

    Read the article

  • Regex matching very slow

    - by Ali Lown
    I am trying to parse a PDF to extract the text from it (please don't suggest any libraries to do this, as this is part of learning the format). I have already handled deflating it so the content is in plain-text form. I now need to extract the text from the text blocks. So, my current pattern is "BT.*?\((.*?)\).*?ET" (with DOTMATCHALL set) to match something like: BT /F13 12 Tf 288 720 Td (ABC) Tj ET The only bit I want is the text ABC inside the parentheses. The above pattern works, but is really slow; I assume it is because the regex library keeps failing to match the part of the pattern between BT and the (ABC). The regex is pre-compiled in an attempt to speed it up, but the difference seems negligible. How may I speed this up?
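
    The question doesn't say which regex library is in use, so purely as an illustration, here is the same idea in C++ std::regex: match only the parenthesized operand of Tj instead of spanning BT ... ET with lazy dot-alls, which is what forces the engine to rescan huge stretches of the stream (escaped parentheses inside PDF strings are ignored -- this is a sketch, not a full parser):

        #include <iostream>
        #include <regex>
        #include <string>

        int main() {
            std::string stream = "BT /F13 12 Tf 288 720 Td (ABC) Tj ET";

            // Match only "(text) Tj"; a negated character class replaces the lazy
            // dot-alls, so the engine never re-scans the whole stream per match.
            std::regex show_text(R"(\(([^)]*)\)\s*Tj)");

            for (std::sregex_iterator it(stream.begin(), stream.end(), show_text), end;
                 it != end; ++it) {
                std::cout << (*it)[1] << '\n';  // prints: ABC
            }
        }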

    Read the article

  • In a multithreaded app, would a multi-core or multiprocessor arrangement be better?

    - by Michael
    I've read a lot on this topic already both here (e.g., stackoverflow.com/questions/1713554/threads-processes-vs-multithreading-multi-core-multiprocessor-how-they-are or http://stackoverflow.com/questions/680684/multi-cpu-multi-core-and-hyper-thread) and elsewhere (e.g., ixbtlabs.com/articles2/cpu/rmmt-l2-cache.html or software.intel.com/en-us/articles/multi-core-introduction/), but I still am not sure about a couple things that seem very straightforward. So I thought I'd just ask. (1) Is a multi-core processor in which each core has dedicated cache effectively the same as a multiprocessor system (balanced of course for processor speed, cache size, and so on)? (2) Let's say I have some images to analyze (i.e., computer vision), and I have these images loaded into RAM. My app spawns a thread for each image that needs to be analyzed. Will this app on a shared cache multi-core processor run slower than on a dedicated cache multi-core processor, and would the latter run at the same speed as on an equivalent single-core multiprocessor machine? Thank you for the help!

    Read the article

  • Multiple rows with a single INSERT in SQL Server 2008

    - by Todd
    I am testing the speed of inserting multiple rows with a single INSERT statement. For example: INSERT INTO [MyTable] VALUES (5, 'dog'), (6, 'cat'), (3, 'fish') This is very fast until I pass 50 rows in a single statement; then the speed drops significantly. Inserting 10000 rows in batches of 50 takes 0.9 seconds. Inserting 10000 rows in batches of 51 takes 5.7 seconds. My question has two parts: Why is there such a hard performance drop at 50? Can I rely on this behavior and code my application to never send batches larger than 50? My tests were done in C++ with ADO.
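
    For the second part (coding to a batch cap), a plain C++ sketch of building multi-row INSERTs that never exceed a configurable row count; exec() here is a stand-in that just prints the statement where the real code would call ADO, and the value formatting does no escaping:

        #include <algorithm>
        #include <cstddef>
        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        struct Row { int id; std::string name; };

        // Stand-in for the real ADO/ODBC execute call: just print the statement.
        void exec(const std::string& sql) { std::cout << sql << "\n"; }

        // Emit multi-row INSERTs, never putting more than maxRowsPerStatement
        // value lists into a single statement.
        void insertBatched(const std::vector<Row>& rows, std::size_t maxRowsPerStatement = 50) {
            for (std::size_t i = 0; i < rows.size(); i += maxRowsPerStatement) {
                const std::size_t end = std::min(i + maxRowsPerStatement, rows.size());
                std::ostringstream sql;
                sql << "INSERT INTO [MyTable] VALUES ";
                for (std::size_t j = i; j < end; ++j) {
                    if (j != i) sql << ", ";
                    sql << "(" << rows[j].id << ", '" << rows[j].name << "')";  // no quoting/escaping: sketch only
                }
                exec(sql.str());
            }
        }

        int main() {
            insertBatched({ {5, "dog"}, {6, "cat"}, {3, "fish"} });
        }

    Keeping the cap as a parameter rather than hard-coding 50 leaves room to re-tune if the cliff turns out to be specific to this server or driver.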

    Read the article

  • PyOpenGL: glVertexPointer() offset problem

    - by SurvivalMachine
    My vertices are interleaved in a numpy array (dtype = float32) like this: ... tu, tv, nx, ny, nz, vx, vy, vz, ... When rendering, I'm calling gl*Pointer() like this (I have already enabled the arrays):
        stride = (2 + 3 + 3) * 4
        glTexCoordPointer( 2, GL_FLOAT, stride, self.vertArray )
        glNormalPointer( GL_FLOAT, stride, self.vertArray + 2 )
        glVertexPointer( 3, GL_FLOAT, stride, self.vertArray + 5 )
        glDrawElements( GL_TRIANGLES, len( self.indices ), GL_UNSIGNED_SHORT, self.indices )
    The result is that nothing renders. However, if I organize my array so that the vertex position is the first element ( ... vx, vy, vz, tu, tv, nx, ny, nz, ... ), I get correct positions for the vertices while rendering, but texture coords and normals aren't rendered correctly. This leads me to believe that I'm not setting the pointer offset right. How should I set it? I'm using almost exactly the same code in another app of mine in C++ and it works.
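
    For comparison, here is how those offsets look in the C-style fixed-function API the question alludes to: stride and offsets are in bytes, so the 2 and 5 have to be scaled by sizeof(float). In PyOpenGL the usual way to get a byte offset into a numpy array is to pass a slice such as self.vertArray[2:] (a view starting at that element); note that self.vertArray + 2 is elementwise addition in numpy, not a pointer offset -- offered as a probable explanation, not a verified one.

        // (On Windows, include <windows.h> before <GL/gl.h>.)
        #include <GL/gl.h>

        // Interleaved layout from the question: tu, tv, nx, ny, nz, vx, vy, vz (all float32).
        struct Vertex {
            float tu, tv;        // texture coordinates
            float nx, ny, nz;    // normal
            float vx, vy, vz;    // position
        };

        void setPointers(const Vertex* verts) {
            const GLsizei stride = sizeof(Vertex);  // (2 + 3 + 3) * 4 bytes
            const unsigned char* base = reinterpret_cast<const unsigned char*>(verts);

            // Offsets are in BYTES from the start of each vertex record.
            glTexCoordPointer(2, GL_FLOAT, stride, base);                     // tu, tv at offset 0
            glNormalPointer(GL_FLOAT, stride, base + 2 * sizeof(float));      // nx, ny, nz
            glVertexPointer(3, GL_FLOAT, stride, base + 5 * sizeof(float));   // vx, vy, vz
        }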

    Read the article

  • What are the real-world benefits of declarative-UI languages such as XAML and QML?

    - by Stu Mackellar
    I'm currently evaluating QtQuick (Qt User Interface Creation Kit) which will be released as part of Qt 4.7. QML is the JavaScript-based declarative language behind QtQuick. It seems to be a very powerful concept, but I'm wondering if anybody that's made extensive use of other, more mature declarative-UI languages like XAML in WPF or Silverlight can give any insight into the real-world benefits that can be gained from this style of programming. Various advantages are often cited:
    - Speed of development
    - Forces separation between presentation and logic
    - Better integration between coders and designers
    - UI changes don't require re-compilation
    Also, are there any downsides? A few potential areas of concern spring to mind:
    - Execution speed
    - Memory usage
    - Added complexity
    Are there any other considerations that should be taken into account?

    Read the article

  • classes and static variables in shared libraries

    - by abel
    I am trying to write something in C++ with an architecture like: App -- Core (.so) <-- Plugins (.so's) for Linux, Mac and Windows. Core is implicitly linked to App, and Plugins are explicitly linked to App with dlopen/LoadLibrary. The problems I have: static variables in Core are duplicated at run-time -- Plugins and App have different copies of them; and, at least on Mac, when a Plugin returns a pointer to App, dynamic_casting that pointer in App always results in NULL. Can anyone give me some explanation and instructions for the different platforms, please? I know this may seem lazy to ask it all here, but I really cannot find a systematic answer to this question.
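
    One direction worth checking (a sketch of a commonly cited remedy, not a guaranteed fix): duplicated statics and dynamic_cast returning NULL across module boundaries are the classic signs that each .so is resolving Core's symbols -- including its type_info -- against its own private copy. Opening plugins with RTLD_GLOBAL (and linking both App and the plugins against the same Core .so) lets the dynamic linker unify those symbols on Linux and Mac; on Windows the analogue is making Core a single DLL that both sides import. The factory name below is hypothetical.

        #include <dlfcn.h>
        #include <iostream>

        // Hypothetical factory the plugin exports with extern "C" linkage.
        using CreatePluginFn = void* (*)();

        int main() {
            // RTLD_GLOBAL asks the dynamic linker to resolve the plugin's symbols
            // (including Core's type_info objects) against the copies already
            // loaded into the process, instead of giving the plugin private duplicates.
            void* handle = dlopen("./libplugin.so", RTLD_NOW | RTLD_GLOBAL);
            if (!handle) {
                std::cerr << dlerror() << '\n';
                return 1;
            }

            CreatePluginFn create =
                reinterpret_cast<CreatePluginFn>(dlsym(handle, "create_plugin"));
            if (!create) {
                std::cerr << dlerror() << '\n';
                dlclose(handle);
                return 1;
            }

            void* plugin = create();  // real code would cast this to the Core interface type
            (void)plugin;

            dlclose(handle);
        }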

    Read the article

  • Why does one loop take longer to detect a shared memory update than another loop?

    - by Joseph Garvin
    I've written a 'server' program that writes to shared memory, and a client program that reads from the memory. The server has different 'channels' that it can be writing to, which are just different linked lists that it's appending items to. The client is interested in some of the linked lists, and wants to read every node that's added to those lists as it comes in, with the minimum latency possible. I have two approaches for the client:
    1. For each linked list, the client keeps a 'bookmark' pointer to keep its place within the linked list. It round-robins the linked lists, iterating through all of them over and over (it loops forever), moving each bookmark one node forward each time if it can. Whether it can is determined by the value of a 'next' member of the node. If it's non-null, then jumping to the next node is safe (the server switches it from null to non-null atomically). This approach works OK, but if there are a lot of lists to iterate over, and only a few of them are receiving updates, the latency gets bad.
    2. The server gives each list a unique ID. Each time the server appends an item to a list, it also appends the ID number of the list to a master 'update list'. The client only keeps one bookmark, a bookmark into the update list. It endlessly checks if the bookmark's next pointer is non-null ( while(node->next_ == NULL) {} ); if so, it moves ahead, reads the ID given, and then processes the new node on the linked list that has that ID. This, in theory, should handle large numbers of lists much better, because the client doesn't have to iterate over all of them each time.
    When I benchmarked the latency of both approaches (using gettimeofday), to my surprise #2 was terrible. The first approach, for a small number of linked lists, would often be under 20us of latency. The second approach would have short stretches of low latency but would often be between 4,000-7,000us! By inserting gettimeofday's here and there, I've determined that all of the added latency in approach #2 is spent in the loop repeatedly checking if the next pointer is non-null. This is puzzling to me; it's as if the change in one process is taking longer to 'publish' to the second process with the second approach. I assume there's some sort of cache interaction going on that I don't understand. What's going on?
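
    For reference, here is the kind of publish/observe pair the second approach depends on, rewritten with C++11 atomics (the node layout is an assumption, and the original code predates std::atomic). Making the store a release and the load an acquire pins down where the 'publish' happens, and a pause instruction in the spin loop is the conventional way to be gentle on the cache line being watched:

        #include <atomic>
        #include <immintrin.h>  // _mm_pause: x86-specific spin hint

        struct Node {
            int list_id;
            std::atomic<Node*> next{nullptr};
        };

        // Writer side: fill in the node completely, THEN publish it.
        void publish(Node* tail, Node* fresh) {
            // release: everything written to *fresh is visible to a reader that
            // acquires this pointer.
            tail->next.store(fresh, std::memory_order_release);
        }

        // Reader side: spin until the next pointer becomes non-null.
        Node* await_next(const Node* bookmark) {
            Node* n;
            while ((n = bookmark->next.load(std::memory_order_acquire)) == nullptr) {
                _mm_pause();  // politeness hint to the CPU while spinning on a shared line
            }
            return n;
        }

    Whether that accounts for 4,000-7,000us gaps is a separate question (numbers that size usually point at scheduling, or at the single update-list cache line being fought over by the writer and the spinning reader), but making the synchronization explicit is the first step toward measuring it.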

    Read the article

  • Can router configuration cause a decrease in download rate?

    - by Behrooz
    My download speed went crazy after I changed the router's IP, and nothing was fixed by factory-resetting it. The speed was 1024 kb/s (128 kB/s), but it is 200 kb/s (max) right now. I mean, it works fine if a request is small (e.g. an HTTP request), but it gets slow if a request has a big response. Help me please (I have been downloading VS2010 for three days): http://serverfault.com/questions/135243/ No one on serverfault answered my question when I posted it there; please migrate this one to serverfault. Thanks.

    Read the article

  • GWT PagingScrollTable (set a style on a particular cell of the header table)

    - by Mario
    I have a column definition for each column that extends AbstractColumnDefinition; these columns are put in a DefaultTableDefinition, part of a PagingScrollTable. Example: NAME | SIZE | RES | DELETE | The style at the end of this post is added to the column names; you'll notice all of them get cursor: pointer, meaning a hand shows up when I hover over each one. I want to remove the cursor for some cells in the header, like DELETE. How do you set/add/remove a style on a particular cell of the header table of a PagingScrollTable? Thank you.
        .gwt-ScrollTable .headerTable td {
            border-left: 1px solid #CCCCCC;
            border-right: 1px solid #CCCCCC;
            border-bottom: 1px solid black;
            vertical-align: bottom;
            cursor: pointer;
        }

    Read the article

  • Any Win32 APIs to get screenshots?

    - by Microkernel
    Hi all, I am writing an app which needs to take screenshots automatically (just like pressing the PrintScreen button), so please suggest how to get this done. A raw 24-bit BMP image would suffice. PLEASE NOTE: My app is in C, so what I am looking for is any Win32 API that can be called from my code. (Some time back I got example code from CodeProject which took screenshots, but the mouse pointer used to blink when the screenshot was taken. As multiple shots are taken this looks irritating to the user, so I don't want the mouse pointer to blink!) Regards, Chethan KR
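
    Since the question asks for plain Win32 callable from C, here is a minimal GDI capture sketch (a sketch only -- primary monitor, no error handling). A plain BitBlt does not draw the cursor at all, so nothing blinks; the blinking in the CodeProject sample most likely came from code that hid and redrew the cursor.

        #include <windows.h>

        // Capture the primary screen into a 24-bit DIB section. Returns an HBITMAP
        // the caller must DeleteObject(), or NULL on failure.
        HBITMAP CaptureScreen24(void)
        {
            int w = GetSystemMetrics(SM_CXSCREEN);
            int h = GetSystemMetrics(SM_CYSCREEN);

            HDC screenDC = GetDC(NULL);
            HDC memDC = CreateCompatibleDC(screenDC);

            BITMAPINFO bmi;
            ZeroMemory(&bmi, sizeof(bmi));
            bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
            bmi.bmiHeader.biWidth = w;
            bmi.bmiHeader.biHeight = -h;            /* negative height = top-down rows */
            bmi.bmiHeader.biPlanes = 1;
            bmi.bmiHeader.biBitCount = 24;
            bmi.bmiHeader.biCompression = BI_RGB;

            void* bits = NULL;
            HBITMAP dib = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
            if (dib) {
                HGDIOBJ old = SelectObject(memDC, dib);
                BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);  /* copy the screen */
                SelectObject(memDC, old);
                /* 'bits' now holds the raw 24-bit pixels (rows padded to 4 bytes). */
            }

            DeleteDC(memDC);
            ReleaseDC(NULL, screenDC);
            return dib;
        }

    To save it as a .bmp, prepend a BITMAPFILEHEADER plus the BITMAPINFOHEADER above to the pixel data.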

    Read the article

  • MPI: are there MPI libraries capable of message compression?

    - by osgx
    Sometimes MPI is used to send low-entropy data in messages, so it can be useful to try to compress messages before sending them. I know that MPI can work on very fast networks (10 Gbit/s and more), but many MPI programs are used with cheap networks like 0.1 or 1 Gbit/s Ethernet and with cheap (slow, low-bisection) network switches. There is the very fast Snappy (wikipedia) compression algorithm, with a compression speed of 250 MB/s and a decompression speed of 500 MB/s, so on compressible data and a slow network it will give some speedup. Is there any MPI library which can compress MPI messages (at the MPI layer, not compression of IP packets as in PPP)? MPI messages are also structured, so there could be some special method, like compressing the exponent part in an array of doubles.
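
    I don't know of an MPI implementation that compresses messages at the MPI layer for you, so the usual approach is to compress the payload by hand around MPI_Send/MPI_Recv. A sketch using zlib (chosen only because its one-call API keeps the example short; Snappy would slot in the same way, and the double-array payload is an assumption):

        #include <mpi.h>
        #include <zlib.h>
        #include <vector>

        // Compress 'data' and send it as MPI_BYTE; the receiver gets the original
        // and compressed lengths first so it can size its buffers.
        // (Return codes of compress/uncompress are ignored for brevity.)
        void send_compressed(const std::vector<double>& data, int dest, int tag, MPI_Comm comm) {
            const uLong srcLen = static_cast<uLong>(data.size() * sizeof(double));
            uLongf dstLen = compressBound(srcLen);
            std::vector<unsigned char> packed(dstLen);

            compress(packed.data(), &dstLen, reinterpret_cast<const Bytef*>(data.data()), srcLen);

            unsigned long header[2] = { srcLen, dstLen };
            MPI_Send(header, 2, MPI_UNSIGNED_LONG, dest, tag, comm);
            MPI_Send(packed.data(), static_cast<int>(dstLen), MPI_BYTE, dest, tag, comm);
        }

        std::vector<double> recv_compressed(int src, int tag, MPI_Comm comm) {
            unsigned long header[2];
            MPI_Recv(header, 2, MPI_UNSIGNED_LONG, src, tag, comm, MPI_STATUS_IGNORE);

            std::vector<unsigned char> packed(header[1]);
            MPI_Recv(packed.data(), static_cast<int>(header[1]), MPI_BYTE, src, tag, comm, MPI_STATUS_IGNORE);

            std::vector<double> out(header[0] / sizeof(double));
            uLongf outLen = header[0];
            uncompress(reinterpret_cast<Bytef*>(out.data()), &outLen, packed.data(), header[1]);
            return out;
        }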

    Read the article

  • Objective-C getter/setter question

    - by pic-o-matic
    Hi, I'm trying to work my way through an Objective-C tutorial. In the book there is this example:
        @interface {
            int width;
            int height;
            XYPoint *origin;
        }
        @property int width, height;
    I thought: hey, there's no getter/setter for the XYPoint object. The code does work, though. Now maybe I'm going to answer my own question :). I think it's because "origin" is already a pointer, and what's happening under the hood with "width" and "height" is that a pointer to them gets created.. Am I right, or am I talking BS :) ??

    Read the article

  • Our embedded Linux system won't recognize a USB device if it is plugged in before power-up. Suggestions?

    - by Blaine
    We are developing on a small embedded device. This device is a Gumstix Overo board running OpenEmbedded Linux. We have our development almost completely done, and have run into the strangest of bugs that we can't figure out. We have a USB device (a spectrophotometer) that has a USB 2.0 connection and an external power supply for the light source. Typical behavior is that you plug in the power supply, then the USB connection to a host. When the USB connection is detected by the device, the device boots up and enables the light source and fan. The device is then able to be used by the host system.
    The problem is that if the device is plugged into the Gumstix before we turn on the Gumstix, the USB device apparently is not probed by the system (and hence does not turn on). Under a normal situation, when the connection is initialized by plugging in the USB cable, the spectro turns itself on and becomes available to the system (this can typically be seen via "lsusb"). Neither of these things is happening. There is no device detected via "lsusb" and no dmesg errors of any kind that we can see. It is as if the device is not plugged in. The device does show up and work fine if we unplug the USB cable and plug it back in once the system is booted up. It turns on and shows up on the USB bus, and we can access it with our driver. On any other desktop or laptop, it does not matter if the host system is on or off when we plug in the spectrometer. This behavior is what I would consider to be "normal" - the USB system is probed and initialized at boot time, and the USB devices come online. In other words, our system is fully functional as long as we plug in the USB device after the system is booted up. Unfortunately this isn't possible in our final product - everything comes on at once.
    Additional info:
    1) We have tried a flash drive attached to the system when the system is turned off. Booting up the system brings the flash drive online, as expected.
    2) There are no messages regarding the spectro or USB device (using dmesg). "lsusb" only lists the USB hubs / controllers. It is literally as if the device is not present and not plugged in.
    3) We have tried a brand new image from Gumstix and an older image from last year. Both images have this problem. This problem exists on all 3 Gumstix devices we use.
    Does anyone have any suggestions? From what I can tell it isn't really possible to do a complete "reboot" of the USB system that is a complete emulation of unplugging and replugging a USB device. I feel like what is happening is that there is no initial probe on the USB bus that would trigger the USB handshaking, but this is somehow specific to the spectro. This seems to be a kernel issue, or at least an issue in how the kernel is initializing the USB subsystem. I'm not really sure, though. I have tried the Gumstix mailing list, but there doesn't seem to be anyone who has seen this issue before. Any advice or suggestions on where to start looking would be fantastic. Thank you! Blaine
    Output follows:
        $ uname -a
        Linux overo 2.6.33 #1 Tue Apr 27 08:35:38 PDT 2010 armv7l GNU/Linux
    When the system is up and running and the spectro is plugged in (working as intended), this is lsusb:
        Bus 001 Device 116: ID 2457:1022
        Device Descriptor:
          bLength 18, bDescriptorType 1, bcdUSB 2.00, bDeviceClass 0 (Defined at Interface level), bDeviceSubClass 0, bDeviceProtocol 0, bMaxPacketSize0 64, idVendor 0x2457, idProduct 0x1022, bcdDevice 0.02, iManufacturer 1 USB4000 1.01.11, iProduct 2 Ocean Optics USB4000, iSerial 0, bNumConfigurations 1
        Configuration Descriptor:
          bLength 9, bDescriptorType 2, wTotalLength 46, bNumInterfaces 1, bConfigurationValue 1, iConfiguration 0, bmAttributes 0x80 (Bus Powered), MaxPower 400mA
        Interface Descriptor:
          bLength 9, bDescriptorType 4, bInterfaceNumber 0, bAlternateSetting 0, bNumEndpoints 4, bInterfaceClass 255 Vendor Specific Class, bInterfaceSubClass 0, bInterfaceProtocol 0, iInterface 0
        Endpoint Descriptors (EP 1 OUT 0x01, EP 2 IN 0x82, EP 6 IN 0x86, EP 1 IN 0x81), each with:
          bLength 7, bDescriptorType 5, bmAttributes 2 (Transfer Type Bulk, Synch Type None, Usage Type Data), wMaxPacketSize 0x0200 (1x 512 bytes), bInterval 0
        Device Qualifier (for other device speed):
          bLength 10, bDescriptorType 6, bcdUSB 2.00, bDeviceClass 0 (Defined at Interface level), bDeviceSubClass 0, bDeviceProtocol 0, bMaxPacketSize0 64, bNumConfigurations 1
        Device Status: 0x0000 (Bus Powered)
    dmesg output:
        usb usb1: usb auto-resume
        hub 1-0:1.0: hub_resume
        usb usb2: usb auto-resume
        ehci-omap ehci-omap.0: resume root hub
        hub 1-0:1.0: state 7 ports 1 chg 0000 evt 0000
        hub 2-0:1.0: hub_resume
        hub 2-0:1.0: state 7 ports 3 chg 0000 evt 0000
        hub 1-0:1.0: hub_suspend
        usb usb1: bus auto-suspend
        hub 2-0:1.0: hub_suspend
        usb usb2: bus auto-suspend
        ehci-omap ehci-omap.0: suspend root hub
        usb usb2: usb resume
        ehci-omap ehci-omap.0: resume root hub
        hub 2-0:1.0: hub_resume
        ehci-omap ehci-omap.0: GetStatus port 2 status 001803 POWER sig=j CSC CONNECT
        hub 2-0:1.0: port 2: status 0501 change 0001
        hub 2-0:1.0: state 7 ports 3 chg 0004 evt 0000
        hub 2-0:1.0: port 2, status 0501, change 0000, 480 Mb/s
        ehci-omap ehci-omap.0: port 2 high speed
        ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT
        usb 2-2: new high speed USB device using ehci-omap and address 2
        ehci-omap ehci-omap.0: port 2 high speed
        ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT
        usb 2-2: default language 0x0409
        usb 2-2: udev 2, busnum 2, minor = 129
        usb 2-2: New USB device found, idVendor=2457, idProduct=1022
        usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        usb 2-2: Product: Ocean Optics USB4000
        usb 2-2: Manufacturer: USB4000 1.01.11
        usb 2-2: uevent
        usb 2-2: usb_probe_device
        usb 2-2: configuration #1 chosen from 1 choice
        usb 2-2: uevent
        usb 2-2: adding 2-2:1.0 (config #1, interface 0)
        usb 2-2:1.0: uevent
        drivers/usb/core/inode.c: creating file '002'
    dmesg has nothing to say, and lsusb simply lists nothing else but the two default USB controllers / hubs if we plug the device in before the system is turned on.

    Read the article

  • How to make better use of Eclipse code templates (PHP)?

    - by pocketfullofcheese
    One particular problem I was having was using ${word_selection} in an Eclipse PDT template. I was recently trying to use some code templates with Eclipse PDT 2.1 to speed up some common tasks. We use a lot of getters/setters, so I wrote the following template:
        function get${word_selection}() {
            return $$this->getData('${word_selection}');
        }
        function set${word_selection}($$${word_selection}) {
            $$this->setData('${word_selection}', $$${word_selection});
        }
    I named the template "getset", and the only way I know to use Code Assist is to type "getset" and then hit my code assist keys (I have it set to Esc, but I think the default was Ctrl+Space). The problem is, this doesn't actually let me select a word to be used by ${word_selection}. How do I type in my template name, hit the key combo, and have a word selected all at the same time? I also want to know what kinds of templates people have set up, and any other tips for using templates to speed up programming.

    Read the article

  • Memory cleanup on returned array from static method (Objective-C)

    - by Michael Bordelon
    In Objective-C, I have a utility class with a bunch of static methods that I call for various tasks. As an example, I have one method that returns an NSArray that I allocate in the static method. If I set the NSArray to autorelease, then some time later the NSArray in my calling method (the one assigned to the returned pointer) loses its reference, because the original from the static method is cleaned up. I can't release the NSArray object in the static method because it needs to be around for the return and assignment. What is the right way to return an object (like the NSArray) from a static method, have it hang around for the calling class, and then get cleaned up later when it is no longer needed? Do I have to create the object first in the caller, pass in a pointer to the object, and then return that same object from the static method? I know this is a basic OO problem; I just never had this issue in Java, and I do not do much C/C++. Thanks for your help.

    Read the article

  • Should I use Python or Assembly for a super fast copy program

    - by PyNEwbie
    As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest, but I do not believe I am getting close to the limits of the capabilities of my XP machine. I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies. I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages. Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which approach will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk, but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector-copy a 3/4-full 2 terabyte drive using the LogicCube in under seven hours, I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss.) The reason I need to speed it up is that I have had two or three cycles where something has happened during the copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.

    Read the article

  • Smart pointers and polymorphism

    - by qwerty
    Hello. I implemented reference-counting pointers (called SP in the example) and I'm having problems with polymorphism which I think I shouldn't have. In the following code:
        SP<BaseClass> foo()
        {
            // Some logic...
            SP<DerivedClass> retPtr = new DerivedClass();
            return retPtr;
        }
    DerivedClass inherits from BaseClass. With normal pointers this would have worked, but with the smart pointers it says "cannot convert from 'SP<T>' to 'const SP<T>&'", and I think it refers to the copy constructor of the smart pointer. How do I allow this kind of polymorphism with a reference-counting pointer? I'd appreciate code samples, because obviously I'm doing something wrong here if I'm having this problem. Thanks! :) [P.S. Please don't tell me to use the standard library smart pointers, because that's impossible at the moment.]
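
    What is almost certainly missing (shown as a sketch against an assumed SP layout, since the real class isn't posted) is a converting constructor: SP<DerivedClass> and SP<BaseClass> are unrelated types to the compiler, so unlike raw pointers the conversion has to be written explicitly, as a constructor template that accepts SP<U> whenever U* converts to T*:

        template <typename T>
        class SP {
        public:
            SP(T* p = nullptr) : ptr_(p), refs_(new long(1)) {}

            // Converting copy constructor: compiles whenever U* converts to T*
            // (e.g. DerivedClass* -> BaseClass*). Shares the reference count.
            template <typename U>
            SP(const SP<U>& other) : ptr_(other.get()), refs_(other.counter()) {
                ++*refs_;
            }

            SP(const SP& other) : ptr_(other.ptr_), refs_(other.refs_) { ++*refs_; }

            ~SP() {
                // Deleting through a base pointer assumes BaseClass has a virtual destructor.
                if (--*refs_ == 0) { delete ptr_; delete refs_; }
            }

            T* get() const { return ptr_; }
            long* counter() const { return refs_; }  // exposed so SP<U> can share the count
            T& operator*() const { return *ptr_; }
            T* operator->() const { return ptr_; }

        private:
            SP& operator=(const SP&);  // assignment omitted in this sketch
            T* ptr_;
            long* refs_;
        };

    With this in place, the return retPtr; in the question compiles, because an SP<BaseClass> can now be constructed from an SP<DerivedClass>.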

    Read the article

  • How to achieve Bing maps like InfoWindow in Google Maps?

    - by BillB
    I'm using Google Maps v3. I really like the InfoWindows found in Bing, as opposed to Google's. Screenshots and functionality comparing the two can be found here: http://www.axismaps.com/blog/2009/07/data-probing-and-info-window-design-on-web-based-maps/ Question: How can I replicate Bing-like InfoWindows while using Google Maps v3? UPDATE: To be more specific, the things I like about Bing's InfoWindows include:
    - The pointer dynamically changes sides between left/right/bottom/top, as opposed to Google being limited to having the InfoWindow pointer only on the bottom
    - Bing's InfoWindows use less space
    - You can configure Bing's InfoWindows to pop up outside of the map boundaries, so that you don't have to autopan the map to display the marker's InfoWindow

    Read the article

  • ASP.NET MVC and ApplicationPath

    - by user93422
    The question is about paths and domains. I have an out-of-the-box ASP.NET MVC project (generated by "File-New Project"). On the LogOn page it does: return Redirect("~/Account/LogOn");. I have a domain name, mycompany.com, and the following file structure on the server:
        /Root
            /MyApp (this is where my app goes into)
            Default.aspx
            ...
    I have set up the following domain pointer: mycompany.com -> \MyApp. When I go to mycompany.com I get an error, something about not being able to find mycompany.com/MyApp/MyApp/Account/LogOn. Question: Where does the second /MyApp path element come from? Note: If I don't use the domain pointer and deploy the site to the root, everything works just fine. Note: My hosting provider is webhost4life.com.

    Read the article

  • May volatile be used in user-defined types to help write thread-safe code?

    - by David Rodríguez - dribeas
    I know it has been made quite clear in a couple of questions/answers before that volatile is related to the visible state of the C++ memory model and not to multithreading. On the other hand, this article by Alexandrescu uses the volatile keyword not as a runtime feature but rather as a compile-time check, to force the compiler into refusing code that could be non-thread-safe. In the article the keyword is used more like a required_thread_safety tag than for the actual intended use of volatile. Is this (ab)use of volatile appropriate? What possible gotchas may be hidden in the approach? The first thing that comes to mind is added confusion: volatile is not related to thread safety, but for lack of a better tool I could accept it.
    Basic simplification of the article: if you declare a variable volatile, only volatile member functions can be called on it, so the compiler will block calls to the other members. Declaring an std::vector instance as volatile will block all uses of the class. Adding a wrapper in the shape of a locking pointer that performs a const_cast to remove the volatile requirement, any access through the locking pointer will be allowed. Stealing from the article:
        template <typename T>
        class LockingPtr {
        public:
            // Constructors/destructors
            LockingPtr(volatile T& obj, Mutex& mtx)
                : pObj_(const_cast<T*>(&obj)), pMtx_(&mtx)
            { mtx.Lock(); }
            ~LockingPtr() { pMtx_->Unlock(); }
            // Pointer behavior
            T& operator*() { return *pObj_; }
            T* operator->() { return pObj_; }
        private:
            T* pObj_;
            Mutex* pMtx_;
            LockingPtr(const LockingPtr&);
            LockingPtr& operator=(const LockingPtr&);
        };

        class SyncBuf {
        public:
            void Thread1() {
                LockingPtr<BufT> lpBuf(buffer_, mtx_);
                BufT::iterator i = lpBuf->begin();
                for (; i != lpBuf->end(); ++i) {
                    // ... use *i ...
                }
            }
            void Thread2();
        private:
            typedef vector<char> BufT;
            volatile BufT buffer_;
            Mutex mtx_;  // controls access to buffer_
        };

    Read the article

  • How to create a container that holds different types of function pointers in C++?

    - by Alex
    I'm doing a linear genetic programming project, where programs are bred and evolved by means of natural-evolution mechanisms. Their "DNA" is basically a container (I've used arrays and vectors successfully) which contains function pointers to a set of available functions. Now, for simple problems, such as mathematical problems, I could use one typedef'd function pointer which could point to functions that all return a double and all take two doubles as parameters. Unfortunately this is not very practical. I need to be able to have a container which can hold different sorts of function pointers, say a function pointer to a function which takes no arguments, or a function which takes one argument, or a function which returns something, etc. (you get the idea)... Is there any way to do this using any kind of container? Could I do it using a container which holds polymorphic classes, which in turn have various kinds of function pointers? I hope someone can direct me towards a solution, because redesigning everything I've done so far is going to be painful.
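
    One way to get this with a single container (a sketch; the primitives and the stack-based evaluation state are examples, not anything from the project): erase the differing signatures behind one uniform callable type, here std::function<void(Stack&)>, where each entry adapts its own arity internally by pulling operands from shared state.

        #include <cmath>
        #include <functional>
        #include <iostream>
        #include <vector>

        // Shared evaluation state: a simple operand stack.
        using Stack = std::vector<double>;

        // One uniform entry type for the "DNA" container; each entry adapts a
        // primitive of whatever arity to this common signature internally.
        using Instruction = std::function<void(Stack&)>;

        // Adapters from differently shaped primitives to the uniform signature.
        Instruction nullary(double (*f)()) {
            return [f](Stack& s) { s.push_back(f()); };
        }
        Instruction unary(double (*f)(double)) {
            return [f](Stack& s) { double a = s.back(); s.pop_back(); s.push_back(f(a)); };
        }
        Instruction binary(double (*f)(double, double)) {
            return [f](Stack& s) {
                double b = s.back(); s.pop_back();
                double a = s.back(); s.pop_back();
                s.push_back(f(a, b));
            };
        }

        // Example primitives with different signatures.
        double pi()                    { return 3.14159265358979; }
        double add(double a, double b) { return a + b; }
        double sqroot(double x)        { return std::sqrt(x); }

        int main() {
            std::vector<Instruction> dna = { nullary(pi), nullary(pi), binary(add), unary(sqroot) };

            Stack s;
            for (const Instruction& op : dna) op(s);  // run the evolved program
            std::cout << s.back() << '\n';            // sqrt(pi + pi), about 2.5066
        }

    An abstract Instruction base class with a virtual apply(Stack&) works just as well; std::function is simply the ready-made type-erased version of that idea.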

    Read the article
