Search Results

Search found 15088 results on 604 pages for 'programming books'.


  • Help me understand this C code

    - by Benjamin
        INT GetTree (HWND hWnd, HTREEITEM hItem, HKEY *pRoot, TCHAR *pszKey, INT nMax)
        {
            TV_ITEM tvi;
            TCHAR szName[256];
            HTREEITEM hParent;
            HWND hwndTV = GetDlgItem (hWnd, ID_TREEV);

            memset (&tvi, 0, sizeof (tvi));
            hParent = TreeView_GetParent (hwndTV, hItem);
            if (hParent) {
                // Get the parent of the parent of the...
                GetTree (hWnd, hParent, pRoot, pszKey, nMax);
                // Get the name of the item.
                tvi.mask = TVIF_TEXT;
                tvi.hItem = hItem;
                tvi.pszText = szName;
                tvi.cchTextMax = dim(szName);
                TreeView_GetItem (hwndTV, &tvi); // send the TVM_GETITEM message?
                lstrcat (pszKey, TEXT ("\\"));
                lstrcat (pszKey, szName);
            } else {
                *pszKey = TEXT ('\0');
                szName[0] = TEXT ('\0');
                // Get the name of the item.
                tvi.mask = TVIF_TEXT | TVIF_PARAM;
                tvi.hItem = hItem;
                tvi.pszText = szName;
                tvi.cchTextMax = dim(szName);
                if (TreeView_GetItem (hwndTV, &tvi))
                    //*pRoot = (HTREEITEM)tvi.lParam; // original
                    hItem = (HTREEITEM)tvi.lParam;
                else {
                    INT rc = GetLastError();
                }
            }
            return 0;
        }

    The block of code that begins with the comment "Get the name of the item" does not make sense to me. If you are retrieving the tree-view item, why does the code set the parameters of the item being retrieved? If you already had the values, there would be no need to retrieve them. Secondly, near the comment "original" is the original line of code, which compiles with a warning under Embedded Visual C++ 4.0, but if you copy the exact same code into Visual Studio 2008 it will not compile. Since I did not write any of this code, and am trying to learn: is it possible the original author made a mistake on this line? *pRoot should point to an HKEY, yet he is casting to an HTREEITEM, which should never work since the data types don't match.

    Read the article

  • posmax: like argmax but gives the position(s) of the element x for which f[x] is maximal

    - by dreeves
    Mathematica has a built-in function ArgMax for functions over infinite domains, based on the standard mathematical definition. The analog for finite domains is a handy utility function: given a function and a list (call it the domain of the function), return the element(s) of the list that maximize the function. Here's an example of finite argmax in action: http://stackoverflow.com/questions/471029/canonicalize-nfl-team-names/472213#472213

    And here's my implementation of it (along with argmin for good measure):

        (* argmax[f, domain] returns the element of domain for which f of that
           element is maximal -- breaks ties in favor of first occurrence. *)
        SetAttributes[{argmax, argmin}, HoldFirst];
        argmax[f_, dom_List] := Fold[If[f[#1] >= f[#2], #1, #2]&, First[dom], Rest[dom]]
        argmin[f_, dom_List] := argmax[-f[#]&, dom]

    First, is that the most efficient way to implement argmax? What if you want the list of all maximal elements instead of just the first one? Second, how about the related function posmax that, instead of returning the maximal element(s), returns the position(s) of the maximal elements?
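
    For comparison, here is a rough sketch of the same idea outside Mathematica, written in Scala (the names and the 0-based indexing are my own, not from the post): one function that keeps every maximal element, and a posmax that returns their positions instead.

        // argmaxAll: every element of dom whose image under f is maximal (ties kept, in order)
        def argmaxAll[A, B: Ordering](f: A => B, dom: Seq[A]): Seq[A] = {
          val best = dom.map(f).max
          dom.filter(a => Ordering[B].equiv(f(a), best))
        }

        // posmax: the 0-based positions of the maximal elements rather than the elements themselves
        def posmax[A, B: Ordering](f: A => B, dom: Seq[A]): Seq[Int] = {
          val values = dom.map(f)
          val best = values.max
          values.zipWithIndex.collect { case (v, i) if Ordering[B].equiv(v, best) => i }
        }

        // e.g. posmax((x: Int) => x, List(3, 1, 3, 2)) == List(0, 2)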

    Read the article

  • How can I get the source code for ASTassistant?

    - by cyclotis04
    I'm trying to develop an application similar to ASTassistant, and in the article the author says that he included "the source code with the binaries." After downloading the ZIP folder, however, I've found no source. The program is written in REAL Basic, which I don't know anything about. Do I need to purchase REAL Basic to view ASTassistant's source code, or is it somewhere I haven't looked? Thanks

    Read the article

  • Searching for patterns to create a TCP Connection Pool for high performance messaging

    - by JoeGeeky
    I'm creating a new client/server application in C# and expect to have a fairly high rate of connections. That made me think of database connection pools, which help mitigate the expense of creating and disposing of connections between the client and the database. I would like to create a similar capability for my application but haven't been able to find any good examples of how to apply this pattern. Do I really need to spin up an instance of a TcpClient every time I want to send a message to the server and receive a receipt message? Each connection is expected to transport between 1 and 5 KB, with each receiving a 1 KB response message. I realize this question is somewhat vague, but I am starting from scratch, so I am open to suggestions, even if that means my suppositions are all wrong.
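
    The question is about C#, but as a rough illustration of the pooling pattern itself, here is a minimal sketch using plain JVM sockets in Scala (the class and method names are mine): idle connections are parked in a queue and handed back out instead of being re-established for every message.

        import java.net.{InetSocketAddress, Socket}
        import java.util.concurrent.ConcurrentLinkedQueue

        class SocketPool(host: String, port: Int) {
          private val idle = new ConcurrentLinkedQueue[Socket]()

          private def connect(): Socket = {
            val s = new Socket()
            s.connect(new InetSocketAddress(host, port), 5000) // 5 s connect timeout
            s
          }

          // Reuse an idle socket if one is available, otherwise open a new one.
          def borrow(): Socket = {
            val s = idle.poll()
            if (s == null || s.isClosed) connect() else s
          }

          // Return a still-usable socket to the pool for the next request.
          def give(s: Socket): Unit =
            if (!s.isClosed) { idle.offer(s); () }
        }

    A production pool would also need to cap the number of open sockets, evict stale ones, and validate a connection before reusing it, but the borrow/return shape above is the heart of the pattern.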

    Read the article

  • How to simulate a dial-up connection for testing purposes?

    - by mawg
    I have to code a server app where clients open a TCP/IP socket, send some data and close the connection. The data packets are small (< 100 bytes); however, there is talk of having the clients batch their transactions and send multiple packets. How can I best simulate a dial-up connection (using Delphi & Indy components, just FYI)? Is it as simple as:

        open connection
        wait a while (what is the definition of "a while"?)
        close connection
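
    Not Delphi/Indy, but as an illustration of one way to approximate a dial-up link, here is a sketch in Scala (the rate and chunk size are assumptions): throughput is capped at roughly 56 kbit/s by writing small chunks and pausing between them.

        import java.io.OutputStream

        // Crude dial-up simulation: pace writes to about bytesPerSec (~7 KB/s, roughly 56 kbit/s).
        def throttledWrite(out: OutputStream, data: Array[Byte], bytesPerSec: Int = 7000): Unit = {
          val chunk = 512
          var off = 0
          while (off < data.length) {
            val n = math.min(chunk, data.length - off)
            out.write(data, off, n)
            out.flush()
            off += n
            Thread.sleep((n * 1000L) / bytesPerSec) // sleep long enough to match the target rate
          }
        }

    Simply opening, waiting and closing only exercises connection latency; throttling the actual reads and writes (and occasionally dropping the connection mid-transfer) gets closer to what a dial-up client looks like to the server.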

    Read the article

  • PHP functions new PHP developers should be aware of

    - by John
    Can people suggest a list of common or popular PHP functions that new/junior programmers should be aware of, so that they don't "re-invent the wheel", so to speak? For example, I've seen a lot of new coders try to write their own date parsing functions when a combination of date(), strtotime() and time() can do everything they're looking for. Any other ones you guys want to add to this list? Thanks

    Read the article

  • My real-time network receive time differs a lot, can anyone help?

    - by sguox002
    I wrote a program using TCP/IP sockets to send commands to a device and receive data from it. The data size is around 200 KB to 600 KB, and the computer is directly connected to the device over a 100 Mbit/s network. I found that the packets sent by the device always arrive at the computer at full line speed (I have debugging information on the unit and I also verified this using network monitoring software), but the time to receive them varies a lot, from 40 ms to 250 ms, even when the size is the same (I have a receive buffer of about 700 KB and a receive window of 8092 bytes, and changing the window size does not change anything). The behaviour also differs between computers, but on any given computer the problem is very stable. For example, receiving 300 KB takes about 40 ms on computer A but may take 200 ms on another computer. I have disabled the firewall, the antivirus, and all network protocols other than TCP/IP. Can any experts give me some hints on this?
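
    As a first diagnostic step (sketched in Scala with plain JVM streams rather than the original environment; the names are mine), timing each individual read on the receiving side shows whether a slow transfer is one long stall at the start or many small gaps spread across the packets.

        import java.io.InputStream

        // Read `expected` bytes from the stream, logging how long each read blocks.
        def timedReceive(in: InputStream, expected: Int): Array[Byte] = {
          val buf = new Array[Byte](8192)
          val out = new java.io.ByteArrayOutputStream(expected)
          while (out.size < expected) {
            val t0 = System.nanoTime()
            val n = in.read(buf)
            val ms = (System.nanoTime() - t0) / 1000000
            if (n < 0) return out.toByteArray // peer closed the connection early
            out.write(buf, 0, n)
            println(s"read $n bytes after $ms ms (total ${out.size})")
          }
          out.toByteArray
        }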

    Read the article

  • Discovering a functional algorithm from a mutable one

    - by Garrett Rowe
    This isn't necessarily a Scala question; it's a design question that has to do with avoiding mutable state, functional thinking and that sort of thing. It just happens that I'm using Scala. Given this set of requirements:

    - Input comes from an essentially infinite stream of random numbers between 1 and 10.
    - Final output is either SUCCEED or FAIL.
    - There can be multiple objects 'listening' to the stream at any particular time, and they can begin listening at different times, so they all may have a different concept of the 'first' number; therefore listeners to the stream need to be decoupled from the stream itself.

    Pseudocode:

        if (first number == 1) SUCCEED
        else if (first number >= 9) FAIL
        else {
          first = first number
          rest = rest of stream
          for each (n in rest) {
            if (n == 1) FAIL
            else if (n == first) SUCCEED
            else continue
          }
        }

    Here is a possible mutable implementation:

        sealed trait Result
        case object Fail extends Result
        case object Succeed extends Result
        case object NoResult extends Result

        class StreamListener {
          private var target: Option[Int] = None

          def evaluate(n: Int): Result = target match {
            case None =>
              if (n == 1) Succeed
              else if (n >= 9) Fail
              else {
                target = Some(n)
                NoResult
              }
            case Some(t) =>
              if (n == t) Succeed
              else if (n == 1) Fail
              else NoResult
          }
        }

    This will work, but it smells to me. StreamListener.evaluate is not referentially transparent, and the use of the NoResult token just doesn't feel right. It does have the advantage, though, of being clear and easy to use/code. Besides, there has to be a functional solution to this, right? I've come up with 2 other possible options:

    - Having evaluate return a (possibly new) StreamListener, but this means I would have to make Result a subtype of StreamListener, which doesn't feel right.
    - Letting evaluate take a Stream[Int] as a parameter and letting the StreamListener be in charge of consuming as much of the Stream as it needs to determine failure or success. The problem I see with this approach is that the class that registers the listeners should query each listener after each number is generated and take appropriate action immediately upon failure or success. With this approach I don't see how that could happen, since each listener is forcing evaluation of the Stream until it completes evaluation. There is no concept here of a single number generation.

    Is there any standard Scala/FP idiom I'm overlooking here?
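
    For what it's worth, here is a rough sketch (my own illustration, not from the post) of the first option without making Result a subtype of StreamListener: evaluate returns a pair of the result and the listener to use for the next number, so nothing is mutated. It reuses the Result hierarchy defined above.

        // Pure variant of the listener: state travels in the return value instead of a var.
        final case class PureListener(target: Option[Int] = None) {
          def evaluate(n: Int): (Result, PureListener) = target match {
            case None =>
              if (n == 1) (Succeed, this)
              else if (n >= 9) (Fail, this)
              else (NoResult, PureListener(Some(n))) // remember the first number
            case Some(t) =>
              if (n == t) (Succeed, this)
              else if (n == 1) (Fail, this)
              else (NoResult, this)
          }
        }

        // Driving one listener over a finite prefix of the stream:
        // stop at the first Succeed/Fail, otherwise carry the updated listener forward.
        def run(numbers: List[Int], l: PureListener = PureListener()): Result =
          numbers match {
            case Nil => NoResult
            case n :: rest =>
              l.evaluate(n) match {
                case (NoResult, next) => run(rest, next)
                case (result, _)      => result
              }
          }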

    Read the article

  • Good language to learn in order to build small websites

    - by mkoryak
    I want to start building websites and charging people for them! My problem is that the stack I know well does not lend itself to quick development or cheap hosting. I am looking for languages that satisfy the following criteria:

    - Fast to develop in
    - Can find cheap hosting for it
    - Bonus points if it can also be 'enterprisey'

    Read the article

  • Telling someone to "let the world judge their development practices" without being condescending?

    - by leeand00
    There's a person in management on my team who:

    - Doesn't ask questions on Stack Overflow.
    - Doesn't read development blogs.
    - Doesn't use development best practices.

    This person is about to make some major decisions about the technology stack that will be used throughout the company. (I asked him what technology stack they were planning to use, and it included many things that are not even development tools.) How can I tell them to "let the world's experience" judge their development practices before they set them in stone, without being condescending or upsetting them?

    Read the article

  • Artificially create a connection timeout error

    - by Mark Ingram
    I've had a bug in our software that occurs when I receive a connection timeout. These errors are very rare (usually when my connection gets dropped by our internal network). How can I generate this kind of effect artificially so I can test our software? If it matters, the app is written in C++/MFC using CAsyncSocket classes. Edit: I've tried using a non-existent host, and I get the socket error WSAEINVAL (10022) Invalid argument. My next attempt was to use Alexander's suggestion of connecting to a different port, e.g. 81 (on my own server, though). That worked great: exactly the same as a dropped connection (60-second wait, then error). Thank you!
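
    Another trick that is often suggested for forcing a genuine connect timeout (sketched here with plain JVM sockets in Scala rather than CAsyncSocket; the address is an assumption, any unreachable one will do) is to connect to a non-routable address, so the connection attempt times out instead of being refused immediately.

        import java.net.{InetSocketAddress, Socket, SocketTimeoutException}

        val s = new Socket()
        try {
          // 10.255.255.1 is assumed to be unreachable: the SYN gets no reply,
          // so connect() times out rather than failing fast with "connection refused".
          s.connect(new InetSocketAddress("10.255.255.1", 80), 3000) // 3 s limit
        } catch {
          case e: SocketTimeoutException => println("connect timed out: " + e.getMessage)
        } finally {
          s.close()
        }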

    Read the article

  • Create UML diagrams before or after coding?

    - by ajsie
    I can clearly see the benefits of having UML diagrams showing the infrastructure of the application (class names, their members, how they communicate with each other, etc.). I'm starting a new project right now and have already structured the database (with Visual Paradigm). I want to use some design patterns to guide how I code the classes. I wonder: should I code the classes first and then create the UML diagram from them (generating the diagram out of the code seems possible), or should I first create the UML diagram and then code (or generate code from the UML, which also seems possible)? What do your experiences tell you is the best way?

    Read the article

  • How can I simplify this code, or is there a better design?

    - by Tattat
    I am developing a game, and the game has different modes: Easy, Normal, and Difficult. So I'm thinking about how to store the game mode. My first idea is to use a number to represent the difficulty:

        Easy = 0
        Normal = 1
        Difficult = 2

    So, my code will have something like this:

        switch (gameMode) {
          case 0:
            // easy
            break;
          case 1:
            // normal
            break;
          case 2:
            // difficult
            break;
        }

    But I think it has some problems: if I add a new mode, for example "Extreme", I need to add case 3... it doesn't seem like a good design. So I am thinking of making a GameMode object, with the different game modes as subclasses of the GameMode superclass. The GameMode object is something like this:

        class GameMode {
          int maxEnemyNumber;
          int maxWeaponNumber;

          public static GameMode init() {
            GameMode gm = new GameMode();
            gm.maxEnemyNumber = 0;
            gm.maxWeaponNumber = 0;
            return gm;
          }
        }

        class EasyMode extends GameMode {
          public static GameMode init() {
            GameMode gm = GameMode.init();
            gm.maxEnemyNumber = 10;
            gm.maxWeaponNumber = 100;
            return gm;
          }
        }

        class NormalMode extends GameMode {
          public static GameMode init() {
            GameMode gm = GameMode.init();
            gm.maxEnemyNumber = 20;
            gm.maxWeaponNumber = 80;
            return gm;
          }
        }

    But it seems too "bulky" to create a class hierarchy just to store the game mode; my "GameMode" only stores different variables for game settings. Is there any simple way to store data only, instead of making an object? Thanks.
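
    As an illustration of the "data only" direction (a sketch in Scala rather than the code above; the Difficult numbers are made up since the post doesn't give them), each mode can simply be a value holding its settings, so adding "Extreme" means adding one more value rather than another subclass or another switch case.

        // Each mode is plain data; the game logic just reads the fields of the current mode.
        final case class GameMode(name: String, maxEnemyNumber: Int, maxWeaponNumber: Int)

        object GameMode {
          val Easy      = GameMode("easy",      maxEnemyNumber = 10, maxWeaponNumber = 100)
          val Normal    = GameMode("normal",    maxEnemyNumber = 20, maxWeaponNumber = 80)
          val Difficult = GameMode("difficult", maxEnemyNumber = 30, maxWeaponNumber = 60) // assumed values
          val all = List(Easy, Normal, Difficult) // adding "Extreme" is one more entry here
        }

        // Usage: no switch on a magic number, just read the current mode's settings.
        def spawnEnemies(mode: GameMode): Unit =
          println(s"spawning up to ${mode.maxEnemyNumber} enemies")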

    Read the article

  • Explain ML type inference to a C++ programmer

    - by Tsubasa Gomamoto
    How does ML perform type inference in the following function definition?

        let add a b = a + b

    Is it like C++ templates, where no type-checking is performed until the point of template instantiation, after which, if the type supports the necessary operations, the function works, or else a compilation error is thrown? For example, the following function template

        template <typename NumType>
        NumType add(NumType a, NumType b) {
          return a + b;
        }

    will work for

        add<int>(23, 11);

    but won't work for

        add<ostream>(cout, fout);

    Is what I'm guessing correct, or does ML type inference work differently?

    PS: Sorry for my poor English; it's not my native language.

    Read the article
