Search Results

Search found 17 results on 1 page for 'wowus'.


  • Proper Commit Messages

    - by wowus
    What are commit messages for? I've always written them as an explanation of what I did, but I've recently gotten into a discussion about it with a colleague who writes commit messages explaining why he made the change. Which one is right, or is there another answer entirely?

    Read the article

  • How would I go about preventing DLL injection?

    - by wowus
    So the other day, I saw this: http://www.edgeofnowhere.cc/viewtopic.php?p=2483118 and it goes over three different methods of DLL injection. How would I prevent these from being used against my process? Or at a bare minimum, how do I prevent the first one? I was thinking a Ring 0 driver might be the only way to stop all three, but I'd like to see what the community thinks.

    Read the article
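
    A hedged sketch of a common user-mode mitigation, which is really detection rather than true prevention (as the question notes, stopping all three reliably likely needs a Ring 0 driver): periodically snapshot the modules loaded in the process and flag anything not on a whitelist. The function name and whitelist here are made up for the illustration.

        #include <windows.h>
        #include <tlhelp32.h>

        #include <cwchar>
        #include <set>
        #include <string>

        // Hypothetical helper: returns false if a module outside the expected set is loaded.
        bool only_expected_modules_loaded(const std::set<std::wstring>& expected)
        {
            HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE, GetCurrentProcessId());
            if (snap == INVALID_HANDLE_VALUE)
                return false;

            MODULEENTRY32W entry;
            entry.dwSize = sizeof(entry);

            bool clean = true;
            if (Module32FirstW(snap, &entry))
            {
                do
                {
                    if (expected.count(entry.szModule) == 0)
                    {
                        std::wprintf(L"Unexpected module: %s\n", entry.szModule);
                        clean = false;   // an injected DLL would show up here
                    }
                } while (Module32NextW(snap, &entry));
            }

            CloseHandle(snap);
            return clean;
        }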

  • Using memcpy in the STL

    - by wowus
    Why does C++'s vector class call copy constructors? Why doesn't it just memcpy the underlying data? Wouldn't that be a lot faster, and remove half of the need for move semantics? I can't imagine a use case where this would be worse, but then again, maybe it's just because I'm being quite unimaginative.

    Read the article
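
    For what it's worth, a minimal C++11 sketch of the usual answer (the Owner type is hypothetical): memcpy is only valid for trivially copyable element types; anything that owns a resource needs its copy constructor run, which is why vector cannot blindly memcpy in the general case.

        #include <string>
        #include <type_traits>
        #include <vector>

        // Hypothetical element type: it owns a heap allocation through std::string.
        // Byte-copying two Owner objects would leave both pointing at the same
        // buffer, and the second destructor run would free it twice.
        struct Owner
        {
            std::string name;
        };

        int main()
        {
            static_assert(!std::is_trivially_copyable<Owner>::value,
                          "Owner cannot be copied byte-for-byte");
            static_assert(std::is_trivially_copyable<int>::value,
                          "int can be copied byte-for-byte");

            // For Owner, vector has to call the copy constructor per element;
            // for int, implementations are free to use memmove/memcpy internally,
            // because the two are equivalent for trivially copyable types.
            std::vector<Owner> v(3);
            std::vector<Owner> copy_of_v = v;
            (void)copy_of_v;
            return 0;
        }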

  • unique_ptr boost equivalent?

    - by wowus
    Is there some equivalent class for C++1x's std::unique_ptr in the boost libraries? The behavior I'm looking for is being able to have an exception-safe factory function, like so...

        std::unique_ptr<Base> create_base()
        {
            return std::unique_ptr<Base>(new Derived);
        }

        void some_other_function()
        {
            std::unique_ptr<Base> b = create_base();

            // Do some stuff with b that may or may not throw an exception...

            // Now b is destructed automagically.
        }

    Read the article
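
    One possibility, sketched under the assumption of a Boost release new enough to ship Boost.Move's unique_ptr emulation (boost::movelib::unique_ptr); Base and Derived are stubbed out here for illustration, the factory shape matches the question.

        #include <boost/move/unique_ptr.hpp>   // assumes Boost.Move's unique_ptr emulation is available

        struct Base { virtual ~Base() {} };
        struct Derived : Base {};

        // Same shape as the factory in the question, using the Boost emulation.
        boost::movelib::unique_ptr<Base> create_base()
        {
            return boost::movelib::unique_ptr<Base>(new Derived);
        }

        void some_other_function()
        {
            boost::movelib::unique_ptr<Base> b = create_base();
            // Use b; it is released automatically when the scope exits,
            // even if an exception is thrown.
        }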

  • Using Boost.Asio to get "the whole packet"

    - by wowus
    I have a TCP client connecting to my server which is sending raw data packets. How, using Boost.Asio, can I get the "whole" packet every time (asynchronously, of course)? Assume these packets can be any size up to the full size of my memory. Basically, I want to avoid creating a statically sized buffer.

    Read the article
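
    One common pattern, sketched here under the assumption that the sender prefixes every packet with its length (the connection class and its members are hypothetical, and the acceptor wiring is omitted): read the fixed-size header first, then issue a second async_read for exactly that many bytes into a buffer sized on the fly.

        #include <boost/asio.hpp>
        #include <boost/bind.hpp>
        #include <boost/cstdint.hpp>
        #include <vector>

        // Hypothetical connection: reads a 4-byte length header, then exactly that
        // many bytes of body, so no statically sized buffer is needed.
        class connection
        {
        public:
            explicit connection(boost::asio::io_service& io) : socket_(io) {}

            boost::asio::ip::tcp::socket& socket() { return socket_; }

            void start()
            {
                boost::asio::async_read(socket_,
                    boost::asio::buffer(&length_, sizeof(length_)),
                    boost::bind(&connection::on_header, this,
                                boost::asio::placeholders::error));
            }

        private:
            void on_header(const boost::system::error_code& ec)
            {
                if (ec) return;
                // Assumes both ends agree on byte order (convert with ntohl if not);
                // real code should also validate length_ against a sane maximum.
                body_.resize(length_);
                boost::asio::async_read(socket_,
                    boost::asio::buffer(body_),
                    boost::bind(&connection::on_body, this,
                                boost::asio::placeholders::error));
            }

            void on_body(const boost::system::error_code& ec)
            {
                if (ec) return;
                // body_ now holds one whole packet; hand it to the application,
                // then go back to waiting for the next header.
                start();
            }

            boost::asio::ip::tcp::socket socket_;
            boost::uint32_t length_;
            std::vector<char> body_;
        };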

  • Setting the default stack size on Linux globally for the program

    - by wowus
    So I've noticed that the default stack size for threads on Linux is 8MB (if I'm wrong, PLEASE correct me) and, incidentally, 1MB on Windows. This is quite bad for my application, as on a 4-core processor that means 64 MB of space is used JUST for threads! The worst part is, I'm never using more than 100kb of stack per thread (I abuse the heap a LOT ;)).

    My solution right now is to limit the stack size of threads. However, I have no idea how to do this portably. Just for context, I'm using Boost.Thread for my threading needs. I'm okay with a little bit of #ifdef hell, but I'd like to know how to do it easily first. Basically, I want something like this (where windows_* is linked on Windows builds, and posix_* is linked on Linux builds):

        // windows_stack_limiter.c
        int limit_stack_size()
        {
            // Windows impl.
            return 0;
        }

        // posix_stack_limiter.c
        int limit_stack_size()
        {
            // Linux impl.
            return 0;
        }

        // stack_limiter.cpp
        int limit_stack_size();
        static volatile int placeholder = limit_stack_size();

    How do I flesh out those functions? Or, alternatively, am I just doing this entirely wrong? Remember I have no control over the actual thread creation (no new params to CreateThread on Windows), as I'm using Boost.Thread.

    Read the article
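
    For reference, a hedged sketch of the per-thread route rather than a global default: Boost.Thread 1.50 and later accept a boost::thread::attributes object with a stack size, which avoids the #ifdef entirely, though unlike the linker trick above it does require being able to touch the thread-creation sites.

        #include <boost/thread.hpp>

        void worker()
        {
            // Thread body; placeholder for illustration.
        }

        int main()
        {
            // Per-thread stack size via Boost.Thread attributes (requires Boost >= 1.50).
            boost::thread::attributes attrs;
            attrs.set_stack_size(100 * 1024);   // ~100 KB, matching the usage described above

            boost::thread t(attrs, &worker);
            t.join();
        }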

  • Get the last '/' or '\\' character in Python

    - by wowus
    If I have a string that looks like either ./A/B/c.d or .\A\B\c.d, how do I get just the "./A/B/" part? The slashes should stay in whichever direction they were passed in. This problem kinda boils down to: how do I find the last occurrence of a specific character in a string? Basically, I want the path of a file without the file part of it.

    Read the article

  • "variable tracking" is eating my compile time!

    - by wowus
    I have an auto-generated file which looks something like this...

        static void do_SomeFunc1(void* parameter)
        {
            // Do stuff.
        }

        // Continues on for another 4000 functions...

        void dispatch(int id, void* parameter)
        {
            switch(id)
            {
            case ::SomeClass1::id: return do_SomeFunc1(parameter);
            case ::SomeClass2::id: return do_SomeFunc2(parameter);
            // This continues for the next 4000 cases...
            }
        }

    When I build it like this, the build time is enormous. If I inline all the functions automagically into their respective cases using my script, the build time is cut in half. GCC 4.5.0 says ~50% of the build time is being taken up by "variable tracking" when I use -ftime-report. What does this mean, and how can I speed compilation while still maintaining the superior cache locality of keeping the functions pulled out of the switch?

    EDIT: Interestingly enough, the build time has exploded only on debug builds, as per the following profiling information for the whole project (which isn't just the file in question, but it's still a good metric; the file in question takes the most time to build):

        Debug:   8 minutes, 50 seconds
        Release: 4 minutes, 25 seconds

    Read the article

  • Virtual destructors for interfaces.

    - by wowus
    Do interfaces need a virtual destructor, or is the auto-generated one fine? For example, which of the following two code snippets is best, and why?

        class Base
        {
        public:
            virtual void foo() = 0;
            virtual ~Base() {}
        };

    OR...

        class Base
        {
        public:
            virtual void foo() = 0;
        };

    Read the article
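
    A hedged sketch of why the first snippet is the safer default (Derived is added purely for illustration): deleting a derived object through a Base* requires the virtual destructor; without it, the behaviour is undefined.

        #include <iostream>

        // Same Base as in the question, keeping the virtual destructor.
        class Base
        {
        public:
            virtual void foo() = 0;
            virtual ~Base() { std::cout << "~Base\n"; }
        };

        // Hypothetical implementation used only for the illustration.
        class Derived : public Base
        {
        public:
            void foo() {}
            ~Derived() { std::cout << "~Derived\n"; }
        };

        int main()
        {
            Base* p = new Derived;
            delete p;   // prints "~Derived" then "~Base".
                        // Without the virtual destructor in Base, deleting a Derived
                        // through a Base* is undefined behaviour and typically skips
                        // ~Derived entirely.
        }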

  • Heap Behavior in C++

    - by wowus
    Is there anything wrong with the optimization of overloading the global operator new to round up all allocations to the next power of two? Theoretically, this would lower fragmentation at the cost of higher worst-case memory consumption, but does the OS already have redundant behavior with this technique, or does it do its best to conserve memory? Basically, given that memory usage isn't as much of an issue as performance, should I do this?

    Read the article
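
    A hedged sketch of what such a replacement would look like (the rounding helper is made up); note that most allocators already bucket requests into size classes internally, so measuring is the only way to know whether this wins anything.

        #include <cstdlib>
        #include <new>

        // Hypothetical helper: round n up to the next power of two (n > 0).
        static std::size_t next_pow2(std::size_t n)
        {
            std::size_t p = 1;
            while (p < n)
                p <<= 1;
            return p;
        }

        // Replacement global operator new: pad every request to a power-of-two size.
        // (Real code would also replace the array and nothrow forms for consistency.)
        void* operator new(std::size_t size)
        {
            if (size == 0)
                size = 1;                       // operator new must return a unique pointer
            void* p = std::malloc(next_pow2(size));
            if (!p)
                throw std::bad_alloc();
            return p;
        }

        void operator delete(void* p) throw()
        {
            std::free(p);
        }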

  • In C++, what happens when you return a variable?

    - by wowus
    What happens, step by step, when a variable is returned? I know that if it's a built-in type and fits, it's thrown into rax/eax/ax. What happens when it doesn't fit, and/or isn't built-in? More importantly, is there a guaranteed copy constructor call? edit: What about the destructor? Is that called "sometimes", "always", or "never"?

    Read the article
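
    A hedged illustration (the Tracer type is invented for this sketch): instrumenting the copy constructor and destructor shows what the compiler actually does. The copy on return may be elided (RVO/NRVO), so a copy-constructor call is permitted but not guaranteed, while the destructor runs exactly once for every object that was actually constructed.

        #include <iostream>

        // Hypothetical type that reports its special member calls.
        struct Tracer
        {
            Tracer()              { std::cout << "construct\n"; }
            Tracer(const Tracer&) { std::cout << "copy\n"; }
            ~Tracer()             { std::cout << "destroy\n"; }
        };

        Tracer make()
        {
            Tracer t;       // constructed in the callee...
            return t;       // ...conceptually copied into the caller's object, but
                            // compilers may elide this copy (NRVO), so the copy
                            // constructor is NOT guaranteed to be called.
        }

        int main()
        {
            Tracer result = make();
            // With NRVO you typically see one "construct" and one "destroy";
            // without it, a "copy" and a matching extra "destroy" appear as well.
        }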

  • Trying to link my project with Boost.Thread using CMake

    - by wowus
    When I link Boost.Thread to my boost_test executable, it gives me

        make[2]: *** No rule to make target `/usr/lib64/libboost_thread-mt.so', needed by `gogo/test/test_boost'. Stop.

    when I make it. Here's the offending CMake code, what am I doing wrong?

        add_executable(boost_test boost_test.cpp)
        add_test(boost_test boost_test)

        # Boost auto-links for MSVC, so we exclude it.
        if(CMAKE_COMPILER_IS_GNUCXX)
            target_link_libraries(test_boost
                #LINK_INTERFACE_LIBRARIES
                ${Boost_THREAD_LIBRARY}
            )
        endif()

    Read the article
