Search Results

Search found 14 results on 1 page for 'caspin'.


  • Is it possible to add TCP autotuning to Windows XP?

    - by Caspin
    I have a network application that needs to send messages 60 times a second. The messages are usually 300-400 bytes but can be as large as 1500. The default setting for SO_SNDBUF is too small and limits the number of messages that can be sent when the network latency is anything greater than 100 ms. The naive solution is to just bump the SO_SNDBUF size to something large; however, depending on the latency and the packet size, that could be anywhere from 64 KB to 8 MB. One of Vista's new features is TCP autotuning: it monitors the TCP connection and dynamically adjusts the buffer sizes to allow for optimal communication. I would like to use autotuning on our Windows XP machines so I don't need to guess what my buffer sizes should be. Is there a way to install either a Microsoft or third-party TCP autotuner on Windows XP?
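
    For reference, the manual tuning I'm trying to avoid looks roughly like the sketch below; the 64 KB figure is just an illustrative guess, which is exactly the problem autotuning would solve.

        // Sketch: manually enlarging the Winsock send buffer.
        // The buffer size here is a guess, not a recommendation.
        #include <winsock2.h>

        void set_send_buffer( SOCKET s )
        {
            int bytes = 64 * 1024;
            setsockopt( s, SOL_SOCKET, SO_SNDBUF,
                        reinterpret_cast<const char*>(&bytes), sizeof(bytes) );
        }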

    Read the article

  • Why allow concatenation of string literals?

    - by Caspin
    I recently got bit by a subtle bug.

        const char* int2str[] = {
            "zero", // 0
            "one",  // 1
            "two"   // 2
            "three",// 3
            nullptr
        };

        assert( int2str[1] == "one"_s ); // passes
        assert( int2str[2] == "two"_s ); // fails

    If you have godlike code-review powers you'll notice I forgot the , after "two". After the considerable effort to find that bug, I've got to ask: why would anyone ever want this behavior? I can see how this might be useful for macro magic, but then why is this a "feature" in a modern language like Python? Have you ever used string literal concatenation in production code?
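
    For what it's worth, the only intended use I can see is splitting one long literal across source lines, since adjacent literals are merged into a single string:

        // Adjacent string literals are concatenated by the compiler.
        const char* usage =
            "usage: prog [options]\n"
            "  -h   print this help\n"
            "  -v   be verbose\n";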

    Read the article

  • Complex error handling

    - by Caspin
    I've got a particularly ornery piece of network code. I'm using asio, but that really doesn't matter for this question. I assume there is no way to unbind a socket other than closing it. The problem is that open(), bind(), and listen() can all throw a system_error, so I handled the code with a simple try/catch. The code as written is broken.

        using namespace boost::asio;

        class Thing
        {
        public:
            ip::tcp::endpoint m_address;
            ip::tcp::acceptor m_acceptor;

            /// connect should handle all of its exceptions internally.
            bool connect()
            {
                try
                {
                    m_acceptor.open( m_address.protocol() );
                    m_acceptor.set_option( ip::tcp::acceptor::reuse_address(true) );
                    m_acceptor.bind( m_address );
                    m_acceptor.listen();
                    m_acceptor.async_accept( /*stuff*/ );
                }
                catch( const boost::system::system_error& error )
                {
                    assert( m_acceptor.is_open() );
                    m_acceptor.close();
                    return false;
                }
                return true;
            }

            /// don't call disconnect unless connect previously succeeded.
            void disconnect()
            {
                // other stuff needed to disconnect is omitted
                m_acceptor.close();
            }
        };

    The error: if the connect attempt fails, the catch block will try to close an acceptor that may never have been opened, throwing another system_error. One solution is to add an if( m_acceptor.is_open() ) in the catch block, but that tastes wrong, kinda like mixing C-style error checking with C++ exceptions. If I were to go that route, I may as well use the non-throwing version of open().

        boost::system::error_code error;
        acceptor.open( address.protocol(), error );
        if( !error )
        {
            try
            {
                acceptor.set_option( ip::tcp::acceptor::reuse_address(true) );
                acceptor.bind( address );
                acceptor.listen();
                acceptor.async_accept( /*stuff*/ );
            }
            catch( const boost::system::system_error& error )
            {
                assert( acceptor.is_open() );
                acceptor.close();
                return false;
            }
        }
        return !error;

    Is there an elegant way to handle these possible exceptions using RAII and try/catch blocks? Am I just wrong-headed in trying to avoid if( error condition )-style error handling when using exceptions?
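
    One RAII shape I've been toying with is a small scope guard that closes the acceptor unless connect() reaches the end and dismisses it. This is just a sketch; CloseGuard is my own name, and I use the non-throwing close(ec) overload so the destructor can't throw.

        struct CloseGuard
        {
            boost::asio::ip::tcp::acceptor& acceptor;
            bool dismissed;

            ~CloseGuard()
            {
                if( !dismissed && acceptor.is_open() )
                {
                    boost::system::error_code ignored;
                    acceptor.close( ignored ); // non-throwing overload: safe in a destructor
                }
            }
        };

        bool Thing::connect() try
        {
            CloseGuard guard{ m_acceptor, false };
            m_acceptor.open( m_address.protocol() );
            m_acceptor.set_option( ip::tcp::acceptor::reuse_address(true) );
            m_acceptor.bind( m_address );
            m_acceptor.listen();
            m_acceptor.async_accept( /*stuff*/ );
            guard.dismissed = true; // success: keep the acceptor open
            return true;
        }
        catch( const boost::system::system_error& )
        {
            return false; // CloseGuard has already cleaned up
        }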

    Read the article

  • Why is the volatile qualifier used throughout std::atomic?

    - by Caspin
    From what I've read from Herb Sutter and others, you would think that volatile and concurrent programming were completely orthogonal concepts, at least as far as C/C++ are concerned. However, in GCC's C++0x implementation all of std::atomic's member functions have the volatile qualifier. The same is true in Anthony Williams's implementation of std::atomic. So what's the deal: do my atomic<> variables need to be volatile or not?
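
    As far as I can tell, the qualifier at least makes it legal to call the member functions on a volatile-qualified atomic object, as in this small sketch; whether that is the whole story is what I'm asking.

        #include <atomic>

        volatile std::atomic<int> flag(0);

        void poke()
        {
            flag.store(1);           // compiles only because store() has a volatile overload
            int value = flag.load(); // likewise for load()
            (void)value;
        }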

    Read the article

  • D callbacks in C functions

    - by Caspin
    I am writing D2 bindings for Lua. This is in one of the Lua header files:

        typedef int (*lua_CFunction) (lua_State *L);

    I assume the equivalent D2 statement would be:

        extern(C) alias int function( lua_State* L ) lua_CFunction;

    Lua also provides an API function:

        void lua_pushcfunction( lua_State* L, string name, lua_CFunction func );

    If I want to push a D2 function, does it have to be extern(C), or can I just use the function?

        int dfunc( lua_State* L )
        {
            std.stdio.writeln("dfunc");
            return 0; // number of results pushed
        }

        extern(C) int cfunc( lua_State* L )
        {
            std.stdio.writeln("cfunc");
            return 0; // number of results pushed
        }

        lua_State* L = lua_newstate();
        lua_pushcfunction(L, "cfunc", &cfunc); // This will definitely work.
        lua_pushcfunction(L, "dfunc", &dfunc); // Will this work?

    If I can only use cfunc, why? I don't need to do anything like that in C++. I can just pass the address of a C++ function to C and everything just works.

    Read the article

  • Compile-time string hashing

    - by Caspin
    I have read in a few different places that, using C++0x's new string literals, it might be possible to compute a string's hash at compile time. However, no one seems ready to come out and say that it will be possible or how it would be done. Is this possible? What would the operator look like? I'm particularly interested in use cases like this:

        void foo( const std::string& value )
        {
            switch( std::hash(value) )
            {
            case "one"_hash: one(); break;
            case "two"_hash: two(); break;
            /* many more cases */
            default: other(); break;
            }
        }

    Note: the compile-time hash function doesn't have to look exactly as I've written it. I did my best to guess what the final solution would look like, but meta_hash<"string"_meta>::value could also be a viable solution.
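
    For concreteness, here is the sort of thing I'm imagining: a recursive constexpr FNV-1a hash behind a literal operator, assuming C++11-style constexpr. The constants are the standard 64-bit FNV parameters; everything else is my guess at the shape.

        #include <cstddef>

        constexpr unsigned long long fnv1a( const char* s,
                                            unsigned long long h = 14695981039346656037ULL )
        {
            // Recurse over the null-terminated literal, folding in one byte at a time.
            return *s ? fnv1a( s + 1, (h ^ static_cast<unsigned char>(*s)) * 1099511628211ULL )
                      : h;
        }

        constexpr unsigned long long operator"" _hash( const char* s, std::size_t )
        {
            return fnv1a(s);
        }

        // Usable as a case label because the value is a constant expression.
        static_assert( "one"_hash != "two"_hash, "distinct strings hash differently here" );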

    Read the article

  • Debugging embedded Lua

    - by Caspin
    How do you debug Lua code embedded in a C++ application? From what I gather, either I need to buy a special IDE and link in their special Lua runtime (ugh), or I need to build a debug console into the game engine using the Lua debug API calls. I am leaning toward writing my own debug console, but it seems like a lot of work; time that I could better spend polishing the other portions of the game.
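
    If I do write my own console, I gather it would hang off Lua's debug hook API, roughly like this minimal line-tracing sketch (lua_sethook and lua_getinfo are the real API; line_hook and install_tracer are my own names, and error handling is omitted):

        #include <cstdio>
        #include <lua.hpp>

        // Called by Lua on every executed line once LUA_MASKLINE is set.
        static void line_hook( lua_State* L, lua_Debug* ar )
        {
            lua_getinfo( L, "Sl", ar ); // fill in source ("S") and current line ("l")
            std::printf( "%s:%d\n", ar->short_src, ar->currentline );
        }

        void install_tracer( lua_State* L )
        {
            lua_sethook( L, line_hook, LUA_MASKLINE, 0 );
        }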

    Read the article

  • immutable strings vs std::string

    - by Caspin
    I've recently been reading about immutable strings, here and here, as well as some of the reasoning behind why D chose immutable strings. There seem to be many advantages:

    - trivially thread safe
    - more secure
    - more memory efficient in most use cases
    - cheap substrings (tokenizing and slicing)

    Not to mention most new languages have immutable strings: D2.0, Java, C#, Python, Ruby, etc. Would C++ benefit from immutable strings? Is it possible to implement an immutable string class in C++ (or C++0x) that would have all of these advantages?
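
    To make the last point concrete, here's a rough sketch of the kind of class I have in mind. Because the buffer is never mutated after construction, sharing it across threads is safe and substr() is O(1). The name and details are illustrative only.

        #include <cstddef>
        #include <memory>
        #include <string>

        class ImmutableString
        {
            std::shared_ptr<const std::string> data_; // shared; never mutated
            std::size_t pos_ = 0;
            std::size_t len_ = 0;

        public:
            explicit ImmutableString( std::string s )
                : data_( std::make_shared<const std::string>( std::move(s) ) )
                , len_( data_->size() )
            {}

            // O(1): the substring aliases the parent's buffer instead of copying.
            ImmutableString substr( std::size_t pos, std::size_t len ) const
            {
                ImmutableString r(*this);
                r.pos_ = pos_ + pos;
                r.len_ = len;
                return r;
            }

            char operator[]( std::size_t i ) const { return (*data_)[pos_ + i]; }
            std::size_t size() const { return len_; }
        };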

    Read the article

  • inheritance and hidden overloads

    - by Caspin
    The following code doesn't compile.

        struct A {};
        struct B {};

        class Base
        {
        public:
            virtual void method( A param ) { }
            virtual void method( B param ) = 0;
        };

        class Derived : public Base
        {
        public:
            //using Base::method;
            void method( B param ) { }
        };

        int main()
        {
            Derived derived;
            derived.method(A());
        }

    The compiler can't find the overload of method() that takes an A parameter. The 'fix' is to add a using declaration in the derived class (shown commented out above). My question is why. What is the rationale for a weird language rule like this? I verified the error in both GCC and Comeau, so I assume this isn't a compiler bug but a feature of the language. Comeau at least gives me this warning:

        "ComeauTest.c", line 10: warning: overloaded virtual function "Base::method"
            is only partially overridden in class "Derived"
          class Derived : public Base
                ^

    Read the article

  • read subprocess stdout line by line

    - by Caspin
    My Python script uses subprocess to call a Linux utility that is very noisy. I want to store all of the output in a log file but show only some of it to the user. I thought the following would work, but the output doesn't show up in my application until the utility has produced a significant amount of output.

        # fake_utility.py, just generates lots of output over time
        import time
        i = 0
        while True:
            print hex(i) * 512
            i += 1
            time.sleep(0.5)

        # filters output
        import subprocess
        proc = subprocess.Popen(['python', 'fake_utility.py'], stdout=subprocess.PIPE)
        for line in proc.stdout:
            # the real code does filtering here
            print "test:", line.rstrip()

    The behavior I really want is for the filter script to print each line as it is received from the subprocess, sorta like what tee does but with Python code. What am I missing? Is this even possible?

    Read the article

  • Linux time-based sampling profiler

    - by Caspin
    Short version: is there a good time-based sampling profiler for Linux?

    Long version: I generally use OProfile to optimize my applications. I recently found a shortcoming that has me wondering. The problem was a tight loop spawning c++filt to demangle a C++ name. I only stumbled upon the code by accident while chasing down another bottleneck. OProfile didn't show anything unusual about the code, so I almost ignored it, but my code sense told me to optimize the call and see what happened. I changed the popen of c++filt to abi::__cxa_demangle. The runtime went from more than a minute to a little over a second; about a 60x speedup.

    Is there a way I could have configured OProfile to flag the popen call? As the profile data sits now, OProfile thinks the bottleneck was the heap and std::string calls (which, BTW, once optimized dropped the runtime to less than a second, more than a 2x speedup).

    Here is my OProfile configuration:

        $ sudo opcontrol --status
        Daemon not running
        Event 0: CPU_CLK_UNHALTED:90000:0:1:1
        Separate options: library
        vmlinux file: none
        Image filter: /path/to/excutable
        Call-graph depth: 7
        Buffer size: 65536

    Is there another profiler for Linux that could have found the bottleneck? I suspect the issue is that OProfile only logs its samples to the currently running process. I'd like it to always log its samples to the process I'm profiling, so that if the process is currently switched out (blocking on IO or a popen call) OProfile would just place its sample at the blocked call. If I can't fix this, OProfile will only be useful when the executable is pushing near 100% CPU. It can't help with executables that have inefficient blocking calls.

    Read the article

  • Can any function be a deleted-function?

    - by Caspin
    The working draft explicitly calls out that defaulted-functions must be special member functions (e.g. copy-constructor, default-constructor, etc.), which makes perfect sense. However, I don't see any such restriction on deleted-functions. Is that right? Or, in other words, are these three examples valid C++0x?

        struct Foo
        {
            // 1
            int bar( int ) = delete;
        };

        // 2
        int baz( int ) = delete;

        template< typename T >
        int boo( T t );

        // 3
        template<>
        int boo<int>( int t ) = delete;
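
    For context, the kind of non-member delete I'd like to rely on (assuming example 2 is indeed legal) is the usual conversion-blocking idiom:

        // If non-member functions can be deleted, unwanted implicit
        // conversions can be rejected at overload resolution.
        void take( double d );
        void take( int ) = delete;

        // take(3.14); // fine
        // take(7);    // error: use of deleted function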

    Read the article

  • Should checkins be small steps or complete features?

    - by Caspin
    Two of version control's uses seem to dictate different checkin styles.

    - Distribution centric: changesets will generally reflect a complete feature. In general these checkins will be larger. This style is more user/maintainer friendly.
    - Rollback centric: changesets will be individual small steps so the history can function like an incredibly powerful undo. In general these checkins will be smaller. This style is more developer friendly.

    I like to use my version control as a really powerful undo while I'm banging away at some stubborn code/bug. In this way I'm not afraid to make drastic changes just to try out a possible solution. However, this seems to give me a fragmented file history with lots of "well that didn't work" checkins. If instead I try to have my changesets reflect complete features, I lose the use of my version control software for experimentation. However, it is much easier for users/maintainers to figure out how the code is evolving, which has great advantages for code reviews, managing multiple branches, etc. So what's a developer to do: check in small steps or complete features?

    Read the article

  • Link compatibility between C++ and D

    - by Caspin
    D easily interfaces with C. D just as easily interfaces with C++, but (and it's a big but) the C++ needs to be extremely trivial. The code cannot use:

    - namespaces
    - templates
    - multiple inheritance
    - a mix of virtual with non-virtual methods
    - more?

    I completely understand the inheritance restriction. The rest, however, feel like artificial limitations. Now I don't want to be able to use std::vector<T> directly, but I would really like to be able to link with std::vector<int> as an externed template. The C++ interfacing page has this particularly depressing comment:

        D templates have little in common with C++ templates, and it is very unlikely
        that any sort of reasonable method could be found to express C++ templates in
        a link-compatible way with D. This means that the C++ STL, and C++ Boost,
        likely will never be accessible from D.

    Admittedly I'll probably never need std::vector while coding in D, but I'd love to use Qt or Boost. So what's the deal? Why is it so hard to express non-trivial C++ classes in D? Would it not be worth it to add some special annotations or something to express at least namespaces?
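
    The workaround I keep writing by hand is a flat extern "C" shim on the C++ side, which links trivially but is tedious; a sketch (the names are illustrative):

        // C++ side: flatten a std::vector<int> into extern "C" entry points
        // that D can declare and call directly.
        #include <vector>

        extern "C" {
            void* vec_int_new()             { return new std::vector<int>(); }
            void  vec_int_delete( void* v ) { delete static_cast<std::vector<int>*>(v); }
            void  vec_int_push( void* v, int x )
            {
                static_cast<std::vector<int>*>(v)->push_back(x);
            }
            int   vec_int_at( void* v, int i )
            {
                return (*static_cast<std::vector<int>*>(v))[i];
            }
        }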

    Read the article
