Search Results

Search found 28744 results on 1150 pages for 'higher order functions'.

Page 579/1150

  • Best ways to teach a beginner to program?

    - by Justin Standard
    Original Question: I am currently engaged in teaching my brother to program. He is a total beginner, but very smart. (And he actually wants to learn.) I've noticed that some of our sessions have gotten bogged down in minor details, and I don't feel I've been very organized. (But the answers to this post have helped a lot.) What can I do better to teach him effectively? Is there a logical order that I can use to run through concept by concept? Are there complexities I should avoid till later? The language we are working with is Python, but advice in any language is welcome.
    How to Help: If you have good ones, please add the following in your answer: beginner exercises and project ideas; resources for teaching beginners; screencasts / blog posts / free e-books; print books that are good for beginners. Please describe the resource with a link to it so I can take a look. I want everyone to know that I have definitely been using some of these ideas. Your submissions will be aggregated in this post.
    Online resources for teaching beginners:
    - A Gentle Introduction to Programming Using Python
    - How to Think Like a Computer Scientist
    - Alice: a 3D program for beginners
    - Scratch (a system to develop programming skills)
    - How To Design Programs
    - Structure and Interpretation of Computer Programs
    - Learn To Program
    - Robert Read's How To Be a Programmer
    - Microsoft XNA
    - Spawning the Next Generation of Hackers
    - COMP1917 Higher Computing lectures by Richard Buckland (requires iTunes)
    - Dive into Python
    - Python Wikibook
    - Project Euler - sample problems (mostly mathematical)
    - pygame - an easy Python library for creating games
    - Create Your Own Games With Python ebook
    - Foundations of Programming, for a next step beyond the basics
    - Squeak by Example
    Recommended print books for teaching beginners:
    - Accelerated C++
    - Python Programming for the Absolute Beginner
    - Code by Charles Petzold

    Read the article

  • Good locations worldwide for a coder gypsy wannabe

    - by fung
    Yes, this is not programming related, but please bear with me =). I run a small niche SaaS business. Lately I've been thinking of traveling and experiencing life in other places. I would really appreciate suggestions for good places a developer could relocate to. In particular I'm looking for a place that: has a good internet connection (cheap stable broadband, lots of places that provide free wifi, etc.); has a low cost of living (rent and food fairly cheap); has at least half of the population speaking English; has a local courier agent (DHL, Fedex, any...); and whose government allows extended stays by foreigners. I'm thinking of staying for about 6 months at each location and maybe doing it for 3 years, so I am looking for 5 to 6 locations in total. So if any of you think you're staying in a place that would be great for a visiting developer, then please shout out. Include as detailed a description as possible, and include any cons about the place if there are any. The only place that pops to mind right now is Bali =). Isle of Skye also seems interesting, but I think immigration is tight and cost of living would definitely be higher. Thanks in advance for suggestions =)

    Read the article

  • Any way to loop through FPDF code with proper XY coordinates?

    - by JM4
    At the end of a form collection, I provide the consumer a printable PDF with the information they just entered. I already run through a loop to store the variables themselves, but am wondering if it is at all possible to build a loop that builds on itself for FPDF. The catch is this: each new variable (#1, #2, #3) will change location by a determined amount of space. For example: I print Member #1's first name at coordinate (95, 101). I print Member #2's first name at coordinate (95, 110)... and so on. Each known variable will be 9.5mm greater than its previous entry (therefore Member #9 will be 40mm higher than Member #6). My sample code for the FPDF itself is:

        $pdf->SetFont('Arial','', 7);
        $pdf->SetXY(8,76.5);
        $pdf->Cell(20,0,$f1name);
        $pdf->SetFont('Arial','', 5);
        $pdf->SetXY(50.5,76.5);
        $pdf->Cell(20,0,$f1address);
        $pdf->SetFont('Arial','', 7);
        $pdf->SetXY(95.7,76.5);
        $pdf->Cell(20,0,$f1city);
        $pdf->SetXY(129.5,76.5);
        $pdf->Cell(20,0,$f1state);
        $pdf->SetXY(139.1,76.5);
        $pdf->Cell(20,0,$f1zip);
        $pdf->SetXY(151,76.5);
        $pdf->Cell(20,0,$f1dob);
        $pdf->SetXY(168,76.5);
        $pdf->Cell(20,0,$f1ssn);
        $pdf->SetXY(186,76.5);
        $pdf->Cell(20,0,$f1phone);
        $pdf->SetXY(55,81.1);
        $pdf->Cell(20,0,$f1email);
        $pdf->SetXY(129,81.1);
        $pdf->Cell(20,0,$f1fednum);

    Ideally, all Y values for $f2 would be 9.5mm greater than $f1's Y values.
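    One way to avoid repeating the SetXY/Cell block per member is to compute each row's Y coordinate from the member index. A minimal sketch, assuming the form data has been gathered into a hypothetical $members array of associative arrays rather than separate $f1.../$f2... variables:

        // Sketch only: $members, $baseY and $rowStep are assumed names, not from the original code.
        $baseY   = 76.5;   // Y coordinate of the first member's row
        $rowStep = 9.5;    // vertical offset between members, in mm
        foreach ($members as $i => $m) {
            $y = $baseY + $i * $rowStep;
            $pdf->SetFont('Arial', '', 7);
            $pdf->SetXY(8, $y);      $pdf->Cell(20, 0, $m['name']);
            $pdf->SetFont('Arial', '', 5);
            $pdf->SetXY(50.5, $y);   $pdf->Cell(20, 0, $m['address']);
            $pdf->SetFont('Arial', '', 7);
            $pdf->SetXY(95.7, $y);   $pdf->Cell(20, 0, $m['city']);
            $pdf->SetXY(129.5, $y);  $pdf->Cell(20, 0, $m['state']);
            $pdf->SetXY(139.1, $y);  $pdf->Cell(20, 0, $m['zip']);
        }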

    Read the article

  • How to program a neural network for chess?

    - by marco92w
    Hello! I want to program a chess engine which learns to make good moves and win against other players. I've already coded a representation of the chess board and a function which outputs all possible moves. So I only need an evaluation function which says how good a given situation of the board is. Therefore, I would like to use an artificial neural network which should then evaluate a given position. The output should be a numerical value: the higher the value, the better the position is for the white player. My approach is to build a network of 385 neurons: there are six unique chess pieces and 64 fields on the board, so for every field we take 6 neurons (1 for every piece). If there is a white piece, the input value is 1. If there is a black piece, the value is -1. And if there is no piece of that sort on that field, the value is 0. In addition to that there should be 1 neuron for the player to move. If it is White's turn, the input value is 1, and if it's Black's turn, the value is -1. I think that configuration of the neural network is quite good. But the main part is missing: how can I implement this neural network in a programming language (e.g. Delphi)? I think the weights for each neuron should be the same in the beginning. Depending on the result of a match, the weights should then be adjusted. But how? I think I should let 2 computer players (both using my engine) play against each other. If White wins, Black gets the feedback that its weights aren't good. So it would be great if you could help me implement the neural network in a programming language (Delphi would be best, otherwise pseudo-code). Thanks in advance!
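    The input encoding described above can be made concrete in a few lines. A minimal sketch (Python standing in for the pseudo-code the question allows; the board representation is an assumption, not from the question):

        # 6 piece types x 64 squares + 1 side-to-move neuron = 385 inputs
        PIECES = ['P', 'N', 'B', 'R', 'Q', 'K']   # pawn, knight, bishop, rook, queen, king

        def encode_board(board, white_to_move):
            """board: dict mapping square index 0..63 to 'P'..'K' for white or 'p'..'k' for black."""
            inputs = [0.0] * (6 * 64 + 1)
            for square, piece in board.items():
                kind = PIECES.index(piece.upper())
                value = 1.0 if piece.isupper() else -1.0   # white piece = +1, black piece = -1
                inputs[square * 6 + kind] = value
            inputs[-1] = 1.0 if white_to_move else -1.0    # whose turn it is
            return inputs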

    Read the article

  • Async.Parallel or Array.Parallel.Map ?

    - by gurteen2
    Hello. I'm trying to implement a pattern I read on Don Syme's blog (http://blogs.msdn.com/dsyme/archive/2010/01/09/async-and-parallel-design-patterns-in-f-parallelizing-cpu-and-i-o-computations.aspx), which suggests that there are opportunities for massive performance improvements from leveraging asynchronous I/O. I am currently trying to take a piece of code that "works" one way, using Array.Parallel.map, and see if I can somehow achieve the same result using Async.Parallel, but I really don't understand Async.Parallel and cannot get anything to work. I have a piece of code (simplified below to illustrate the point) that successfully retrieves an array of data for one cusip (a price series, for example):

        let getStockData cusip =
            let D = DataProvider()
            let arr = D.GetPriceSeries(cusip)
            return arr

        let data = Array.Parallel.map (fun x -> getStockData x) stockCusips

    So this approach constructs an array of arrays, by making a connection over the internet to my data vendor for each stock (which could be as many as 3000), and returns me an array of arrays (1 per stock, with a price series for each one). I admittedly don't understand what goes on underneath Array.Parallel.map, but am wondering if this is a scenario where there are resources wasted under the hood, and it actually could be faster using asynchronous I/O. So to test this out, I have attempted to write this function using asyncs, and I think that the function below follows the pattern in Don Syme's article using the URLs, but it won't compile with "let!":

        let getStockDataAsync cusip =
            async {
                let D = DataProvider()
                let! arr = D.GetData(cusip)
                return arr
            }

    The error I get is: "This expression was expected to have type Async<'a> but here has type obj". It compiles fine with "let" instead of "let!", but I had thought the whole point was that you need the exclamation point in order for the command to run without blocking a thread. So the first question really is: what's wrong with my syntax above, in getStockDataAsync? And then at a higher level, can anyone offer some additional insight about asynchronous I/O and whether the scenario I have presented would benefit from it, making it potentially much, much faster than Array.Parallel.map? Thanks so much.
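    For reference, "let!" only binds a value whose type is already Async<'T>; the result of an ordinary synchronous call is bound with a plain "let". A minimal sketch of the Async.Parallel composition under that assumption (it runs the calls in parallel, but only gives true non-blocking I/O if the provider exposes genuinely asynchronous operations):

        let getStockDataAsync cusip =
            async {
                let D = DataProvider()
                // plain 'let': GetPriceSeries returns the data directly, not an Async<_>
                let arr = D.GetPriceSeries(cusip)
                return arr
            }

        let data =
            stockCusips
            |> Array.map getStockDataAsync   // build Async values; nothing runs yet
            |> Async.Parallel                // compose into a single Async<'T[]>
            |> Async.RunSynchronously        // run them and wait for all results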

    Read the article

  • How can you replace the src of an image that points to mjpeg?

    - by user165623
    I have an application that displays images from network cameras. The application will show multiple images on a single page. Each image container has a button on it that causes jQuery to scale that image up to fill a 640x480 div, versus a much smaller size when multiple images are present. I change the src of the image with jQuery as you'd expect:

        $("#imageId").attr("src", "http://cameraUrl.com:6000?width=640&height=480&d="+date.getTime());

    The date.getTime() is intended to override the caching in the browser, and it works for retrieving still images. Sometimes the video changes to the larger resolution. Other times it just sticks with the original image scaled up to fit the larger div, as if it is not loading the new URL. This is evident from the fact that it's grainy and the text overlay from the camera is scaled up. Occasionally the image finally loads in the higher resolution version and it works, as evidenced by the image clarity and the text overlay scaling to what it should be. Is there a correct way to force the setting of an image src, where the src points to a motion JPEG stream, to reload?
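    One commonly suggested approach (a sketch, not guaranteed to behave the same on every browser) is to detach the current MJPEG stream before assigning the new URL, so the browser drops the open multipart connection instead of keeping the stale stream:

        // Hypothetical helper; "#imageId" and the URL shape are taken from the question.
        function switchStream(newUrl) {
            var $img = $("#imageId");
            $img.attr("src", "");   // close the current multipart/x-mixed-replace stream
            setTimeout(function () {
                $img.attr("src", newUrl + "&d=" + new Date().getTime());
            }, 0);                  // reassign on the next tick
        }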

    Read the article

  • Examples of localization in Perl using gettext and Locale::TextDomain, with fallback if Locale::Text

    - by Jakub Narebski
    The "On the state of i18n in Perl" blog post from 26 April 2009 recommends using the Locale::TextDomain module from the libintl-perl distribution for l10n / i18n in Perl. Besides, I have to use gettext anyway, and gettext support in Locale::Messages / Locale::TextDomain is more natural than the gettext emulation in Locale::Maketext. The subsection "15.5.18 Perl" in chapter "15 Other Programming Languages" of the GNU gettext manual says: "Portability: The libintl-perl package is platform independent but is not part of the Perl core. The programmer is responsible for providing a dummy implementation of the required functions if the package is not installed on the target system." However, neither of the two examples in examples/hello-perl in the gettext sources (one using the lower-level Locale::Messages, one using the higher-level Locale::TextDomain) includes detecting whether the package is installed on the target system and providing a dummy implementation if it is not. What complicates matters (with respect to detecting whether the package is installed) is the following fragment of the Locale::TextDomain manpage:

        SYNOPSIS
            use Locale::TextDomain ('my-package', @locale_dirs);
            use Locale::TextDomain qw (my-package);

        USAGE
            It is crucial to remember that you use Locale::TextDomain(3) as specified in the
            section "SYNOPSIS", that means you have to use it, not require it. The module
            behaves quite differently compared to other modules.

    Could you please tell me how one should detect whether libintl-perl is present on the target system, and how to provide a dummy fallthrough implementation if it is not installed? Or give examples of programs / modules which do this?
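    A minimal sketch of one possible fallback (not from the gettext examples; the stub behaviour is an assumption): probe for the module at compile time in a BEGIN block, since Locale::TextDomain must be use'd rather than require'd, and install pass-through stubs if it is missing:

        BEGIN {
            if (eval { require Locale::TextDomain; 1 }) {
                Locale::TextDomain->import('my-package');
            } else {
                # dummy implementations: return the untranslated string
                no strict 'refs';
                *{"main::__"}  = sub { $_[0] };
                *{"main::__x"} = sub { my ($s, %v) = @_; $s =~ s/\{(\w+)\}/$v{$1}/g; $s };
            }
        }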

    Read the article

  • Source-to-source compiler framework wanted

    - by cheungcc_2000
    Dear all, I used to use OpenC++ (http://opencxx.sourceforge.net/opencxx/html/overview.html) to perform code generation like this. Source:

        class MyKeyword A {
        public:
            void myMethod(inarg double x, inarg const std::vector<int>& y, outarg double& z);
        };

    Generated:

        class A {
        public:
            void myMethod(double x, const std::vector<int>& y, double& z);
            // generated method below:
            void _myMethod(const string& serializedInput, string& serializedOutput)
            {
                double x;
                std::vector<int> y;
                // deserialize x and y from serializedInput
                double z;
                myMethod(x, y, z);
            }
        };

    This kind of code generation directly matches the use case in the tutorial of OpenC++ (http://www.csg.is.titech.ac.jp/~chiba/opencxx/tutorial.pdf), by writing a meta-level program that handles "MyKeyword", "inarg" and "outarg" and performs the code generation. However, OpenC++ is somewhat out of date and inactive now; my code generator only works with g++ 3.2, and it triggers errors when parsing header files from newer g++ versions. I have looked at VivaCore, but it does not provide the infrastructure for compiling a meta-level program. I'm also looking at LLVM, but I cannot find documentation that walks me through my source-to-source compilation use case. I'm also aware of the ROSE compiler framework, but I'm not sure whether it suits my usage, whether its proprietary C++ front-end binary can be used in a commercial product, or whether a Windows version is available. Any comments and pointers to specific tutorials/papers/documentation are much appreciated.

    Read the article

  • Does Google Maps API v3 allow larger zoom values ?

    - by Dr1Ku
    If you use the satellite GMapType in this Google-provided example in v3 of the API, the maximum zoom level has a scale of 2m / 10ft, whereas using the v2 version of another Google-provided example (I had to use another one since control-simple doesn't have the scale control) yields a maximum scale of 20m / 50ft. Is this a new "feature" of v3? I should mention that I've tested the examples in the same GLatLng regions, so my guess is that tile detail level doesn't influence it; am I mistaken? As mentioned in another question, v3 is to be considered of very Labs-y/beta quality, so use in production should be discouraged for the time being. I've been drawn to the subject since I have to "increase the zoom level of a GMap"; the answers here seem to suggest using GTileLayer, and I'm considering GMapCreator, although this will involve some effort. What I'm trying to achieve is a larger zoom level; a scale of 2m / 10ft would be perfect. I have a map where the tiles aren't that hi-res and quite a few markers; seeing that the area doesn't have hi-res tiles, the distance between the markers is really tiny, creating some problematic overlapping. Or better yet, how can you create a custom map which allows higher zoom levels, as around the Google campus where the 2m / 10ft scale is achieved, without using your own tile server? I've seen an example in a fellow Stackoverflower's GMaps sandbox, where the tiles are manually created based on the zoom level. I don't think I'm making any more sense, so I'm just going to end this big question here; I've been wandering around trying to find a solution for hours now. Hope that someone comes to my aid though! Thank you in advance!

    Read the article

  • Possible to change the alpha value of certain pixels on iPhone?

    - by emi1faber
    Is it possible to change just a portion of a Sprite's alpha in response to user interaction? A good example of what I mean is iFog or iSteam, where the user can wipe "steam" off the iPhone's screen. Swapping images out wouldn't be feasible due to the sheer number of possibilities where the user could touch and move... For example, say you have a simple app that has a brick wall in the background that has graffiti on it, so there'd be two sprites, one of the brick wall, then one of the graffiti that has a higher z value than the brick wall. Then, based upon where the user touches (assuming their touch controls a sandblaster), some of the graffiti should be removed, but not all of it, which could be accomplished by changing the alpha value on a portion of the graffiti sprite. Is there any way to do this in cocos2d-iphone? Or, do I need to drop down into openGL, and if so, where would be a good place to start my search on how to accomplish this? Ideally, I'd like to accomplish this on a cocos2d-iphone Sprite, but if it's not possible, where's the best place to start looking? Thanks in advance, Ben

    Read the article

  • Storing a digital signature for bookings on a web based system

    - by Duncan
    I have a web based bookings system built for a UK higher education client to allow students to sign out equipment (laptops, cameras etc). It's been in use successfully for a couple of years. In the current workflow, equipment is collected and the booking is printed, signed by the student and kept until the equipment is returned. Students are emailed a PDF copy of the booking and reminders if equipment is outstanding. Students can log in and pre-book equipment using their university LDAP credentials, and the booking is then authorised by staff for later collection; students can also walk in and have equipment booked out by staff. The client would like to remove the signed paper part of the process and replace it with some sort of digital signature. The suggestion was a graphics tablet, but with a web based system this would require a local software package and in my view be impractical. My thought is that students would enter their LDAP username and password upon collection of the equipment, verifying their identity and effectively digitally signing the booking. My question is: what would be best to store as a signature, or would simply authenticating the user and setting a boolean flag to indicate that this has been done be sufficient?
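    One option that goes a little further than a bare boolean, sketched below with assumed field names and PHP as an arbitrary choice of language: after the LDAP check succeeds, store who signed, when, and an HMAC computed by the server over those details, so the "signed" record is tied to that specific booking and cannot be recreated simply by flipping a flag in the database:

        // Sketch only: $bookingId, $ldapUsername and $serverSecret are assumed names.
        $record = array(
            'booking_id' => $bookingId,
            'signed_by'  => $ldapUsername,
            'signed_at'  => gmdate('c'),
        );
        // $serverSecret is a key known only to the application server.
        $record['signature'] = hash_hmac(
            'sha256',
            $record['booking_id'] . '|' . $record['signed_by'] . '|' . $record['signed_at'],
            $serverSecret
        );
        // store $record alongside the booking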

    Read the article

  • Lucene - querying with long strings

    - by Mikos
    I have an index with a field "Affiliation"; some example values are: "Stanford University School of Medicine, Palo Alto, CA USA", "Institute of Neurobiology, School of Medicine, Stanford University, Palo Alto, CA", "School of Medicine, Harvard University, Boston MA", "Brigham & Women's, Harvard University School of Medicine, Boston, MA", "Harvard University, Cambridge MA", and so on (the bottom line being that the affiliations are written in multiple ways with no apparent consistency). I query the index on the affiliation field using, say, "School of Medicine, Stanford University, Palo Alto, CA" (with QueryParser) to find all Stanford-related documents, and I get a lot of false positives, presumably because of the presence of "School of Medicine" etc. (Note: I cannot use a phrase query because of the variability in the way the affiliation is constructed.) I have tried the following: using a SpanNearQuery by splitting the search phrase on whitespace (here I get no results!), and boosting (using ^) by splitting on the comma and boosting the last parts, such as "Palo Alto CA", with a much higher boost than the initial phrases. Here I still get lots of false positives. Any suggestions on how to approach this? If SpanNearQuery is the way to go, any ideas on why I get 0 results?
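    A minimal sketch of the SpanNearQuery construction (field name from the question, slop value arbitrary). One detail worth checking: SpanTermQuery terms are not passed through the analyzer, so they must match the indexed tokens exactly; with a standard analyzer that means lower case ("stanford", not "Stanford"), and capitalized terms are a common reason a SpanNearQuery returns 0 hits:

        // SpanQuery, SpanTermQuery and SpanNearQuery live in org.apache.lucene.search.spans;
        // Term is in org.apache.lucene.index.
        SpanQuery[] clauses = new SpanQuery[] {
            new SpanTermQuery(new Term("Affiliation", "stanford")),
            new SpanTermQuery(new Term("Affiliation", "university")),
            new SpanTermQuery(new Term("Affiliation", "palo")),
            new SpanTermQuery(new Term("Affiliation", "alto"))
        };
        // slop = 10: terms may be up to 10 positions apart; false = order does not matter
        SpanNearQuery query = new SpanNearQuery(clauses, 10, false);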

    Read the article

  • Load Balance WCF and Share a Remote MSMQ for High Throughput

    - by BarDev
    After a ton of reading in books and on the web, I have noticed hints that WCF and MSMQ can be used to achieve high throughput. The information I have seen mentions using multiple WCF services in a farm that read from a single MSMQ queue. The problem is that I have found paragraphs here and there mentioning that high throughput can be done, but I cannot seem to find a document on how to implement it. The following excerpt is from the MSDN article "Best Practices for Queued Communication" (http://msdn.microsoft.com/en-us/library/ms731093.aspx): "To achieve higher throughput and availability, use a farm of WCF services that read from the queue. This requires that all of these services expose the same contract on the same endpoint. The farm approach works best for applications that have high production rates of messages because it enables a number of services to all read from the same queue." This is what I'm trying to solve. I have an intranet application where a client sends a request to a WCF service, but I want the ability to load balance the WCF services on multiple servers in a farm. I also want these WCF services in the farm to do transactional reads from a remote MSMQ queue when an item is available. One issue I have is that I do not understand the activation process WCF uses to retrieve messages from a remote queue. If this is possible, does anyone know of any articles or webcasts that explain it in detail? BarDev
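    For what it's worth, the "same contract on the same endpoint" requirement usually means a one-way contract hosted over netMsmqBinding on every server in the farm. A minimal sketch (names and the transactional attributes are assumptions, not taken from the MSDN article):

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class JobRequest
        {
            [DataMember] public string Payload;
        }

        [ServiceContract]
        public interface IJobSubmission
        {
            [OperationContract(IsOneWay = true)]   // queued operations must be one-way
            void SubmitJob(JobRequest request);
        }

        public class JobService : IJobSubmission
        {
            [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
            public void SubmitJob(JobRequest request)
            {
                // the message is removed from the queue in the same transaction as this call,
                // so a failure here puts it back for another server in the farm to pick up
            }
        }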

    Read the article

  • Viewing a large-resolution VNC server through a small-resolution viewer in Ubuntu

    - by Madiyaan Damha
    I have two Ubuntu computers, one with a large screen resolution (1920x1600) that is running the default Ubuntu VNC server. I have another computer with a resolution of about 1200x1024 that I use to VNC into the server (I use the default Ubuntu VNC viewer). Now everything works fine, except there are annoying scrollbars in the viewer because the server's desktop resolution is so much higher than the viewer's. Is there a way to:
    1) Scale the server's desktop down to the viewer's resolution? I know there will be a loss of image quality, but I am willing to try it out. This should be something like how Windows Media Player or VLC scales down the window (and does some interpolation of pixels).
    2) Automatically shrink the resolution of the server to the client's when I connect, and scale the resolution back when I disconnect? This seems like a less attractive solution.
    3) Any other solution that gurus out there use?
    I am sure someone has experienced this before (annoying scrollbars), so there must be a solution out there. Thanks,

    Read the article

  • When does a PHP <5.3.0 daemon script receive signals?

    - by MidnightLightning
    I've got a PHP script in the works that is a job worker; its main task is to check a database table for new jobs, and if there are any, to act on them. But jobs will be coming in in bursts, with long gaps in between, so I devised a sleep cycle like:

        while (true) {
            if ($jobs = get_new_jobs()) {
                // Act upon the jobs
            } else {
                // No new jobs now
                sleep(30);
            }
        }

    Good, but in some cases that means there might be a 30 second lag before a new job is acted upon. Since this is a daemon script, I figured I'd try the pcntl_signal hook to catch a SIGUSR1 signal to nudge the script to wake up, like:

        $_isAwake = true;
        function user_sig($signo) {
            global $_isAwake;
            daemon_log("Caught SIGUSR1");
            $_isAwake = true;
        }
        pcntl_signal(SIGUSR1, 'user_sig');

        while (true) {
            if ($jobs = get_new_jobs()) {
                // Act upon the jobs
            } else {
                // No new jobs now
                daemon_log("No new jobs, sleeping...");
                $_isAwake = false;
                $ts = time();
                while (time() < $ts + 30) {
                    sleep(1);
                    if ($_isAwake) break; // Did a signal happen while we were sleeping? If so, stop sleeping
                }
                $_isAwake = true;
            }
        }

    I broke the sleep(30) up into smaller sleep bits, in case a signal doesn't interrupt a sleep() command, thinking that this would cause at most a one-second delay. But in the log file, I'm seeing that the SIGUSR1 isn't being caught until after the full 30 seconds has passed (and maybe the outer while loop resets). I found the pcntl_signal_dispatch command, but that's only for PHP 5.3 and higher. If I were using that version, I could stick a call to that command before the if ($_isAwake) check, but as it currently stands I'm on 5.2.13. In what sort of situations does PHP process the signal queue in versions without the means to explicitly trigger it? Could I put some other otherwise-useless command in that sleep loop that would trigger signal processing there?
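    Before PHP 5.3, registered signal handlers only run when the engine emits "ticks", so the usual approach is declare(ticks = 1), which checks for pending signals between statements. A minimal sketch under that assumption, reusing the handler and helper functions from the question:

        declare(ticks = 1);   // without this, handlers registered via pcntl_signal never fire on PHP < 5.3

        pcntl_signal(SIGUSR1, 'user_sig');

        while (true) {
            if ($jobs = get_new_jobs()) {
                // Act upon the jobs
            } else {
                $_isAwake = false;
                for ($i = 0; $i < 30 && !$_isAwake; $i++) {
                    sleep(1);   // a pending SIGUSR1 is delivered around this call because ticks are enabled
                }
            }
        }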

    Read the article

  • iPad receiving memory warning with low memory use

    - by Fer
    I have a UIWebView with an HTML document; the HTML has several images and text, but just displaying it gives me a memory warning. So I did some tests: the same HTML with the images at full size, and then with the same images reduced to 50% of their original size (I went to Preview and reduced all the images by 50%). The surprising part is the 50% test: you can see that even with 16 images, the memory peak is 4.90MB. That's really surprising. Note that these values are not always the same; they change, but there's not a huge difference between tests. In the 50% case, with 8 and 16 images, although the memory is low, sometimes a memory warning still appears, but the performance improvement is noticeable compared to the full size images. ("standing still" = memory after scrolling the whole article.)

    Full size:
    1 image   = [standing still 5MB]     [rotating 5.6MB]
    2 images  = [standing still 6.99MB]  [rotating 7.7MB]
    3 images  = [standing still 9.04MB]  [rotating 10.9MB]
    4 images  = [standing still 10.89MB] [rotating 13.20MB]
    8 images  = [standing still 23.14MB] [rotating 25.20MB] (sometimes crashes)
    16 images = [standing still 27.14MB and app crashes]

    50%:
    1 image   = [standing still 3.2MB]   [rotating 3.67MB]
    2 images  = [standing still 3.2MB]   [rotating 3.70MB]
    3 images  = [standing still 3.3MB]   [rotating 3.79MB]
    4 images  = [standing still 3.3MB]   [rotating 3.80MB]
    8 images  = [standing still 4.29MB]  [rotating 4.63MB] (sometimes crashes)
    16 images = [standing still 4.79MB]  [rotating 4.90MB] (sometimes crashes)

    My question is: the app sometimes crashed with 16 small images. Why? The memory was much lower. What is the limit of memory use? These numbers are helpful if you also tell us the maximum. But the maximum seemed different with the 50% size images: 13.2MB works for large images and 3.8MB for small images, and anything higher sometimes crashes. That makes no sense.

    Read the article

  • php: how to return an HTTP 500 code on any error, no matter what

    - by Jake
    Hi guys. I'm writing an authentication script in PHP, to be called as an API, that needs to return 200 only in the case that it approves the request, and 403 (Forbidden) or 500 otherwise. The problem I'm running into is that PHP returns 200 in the case of error conditions, outputting the error as HTML instead. How can I make absolutely sure that PHP will return an HTTP 500 code unless I explicitly return the HTTP 200 or HTTP 403 myself? In other words, I want to turn any and all warning or error conditions into 500s, no exceptions, so that the default case is rejecting the authentication request, and the exception is approving it with a 200 code. I've fiddled with set_error_handler() and error_reporting(), but so far no luck. For example, if the code outputs something before I send the HTTP response code, PHP naturally reports that you can't modify header information after outputting anything. However, this is reported by PHP as a 200 response code with HTML explaining the problem. I need even this kind of thing to be turned into a 500 code. Is this possible in PHP? Or do I need to do this at a higher level, like using mod_rewrite somehow? If that's the case, any idea how I'd set that up? Thanks for any help. Jake
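    A minimal sketch of one way to do this entirely in PHP (the functions used are real PHP built-ins, but the overall structure is an assumption, not a drop-in solution): buffer all output so headers can still be changed, and convert every error, including fatals caught at shutdown, into a 500 before anything reaches the client:

        ob_start();   // nothing is sent to the client yet, so headers stay changeable

        function fail_with_500() {
            while (ob_get_level()) { ob_end_clean(); }   // discard any partial HTML error output
            header('HTTP/1.1 500 Internal Server Error');
            exit;
        }

        function convert_error_to_500($errno, $errstr) { fail_with_500(); }

        function convert_fatal_to_500() {
            $err = error_get_last();
            if ($err !== null && in_array($err['type'], array(E_ERROR, E_PARSE, E_COMPILE_ERROR))) {
                fail_with_500();   // fatal errors bypass set_error_handler, so catch them at shutdown
            }
        }

        set_error_handler('convert_error_to_500');
        register_shutdown_function('convert_fatal_to_500');

        // ... authentication logic; on approval:
        // header('HTTP/1.1 200 OK'); ob_end_flush();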

    Read the article

  • what type of bug causes a program to slowly use more processor power and all of a sudden go to 100%?

    - by reinier
    Hi, I was hoping to get some good ideas as to what might be causing a really nasty bug. This is a program which is transmitting data over a socket, and also receives messages back. I could explain lots more, but I don't think this will help here. I'm just searching for hypothetical problems which can cause the following behaviour: the program runs; processor time slowly accumulates (till around 60%); all of a sudden (could be after 30 but also after 60 seconds) the processor time shoots to 100%; the program halts completely. In my syslog it always ends on one line with a memory allocation (something similar to: myArray = new byte[16384]) in the same thread. Now here is the weird part: if I set the debugger anywhere... it immediately stops on that line. So just the act of setting a breakpoint made the thread continue (it wasn't running, since I saw no log output anymore). I was thinking 'deadlock', but that would not cause 100% processor power. If anything, the opposite. Also, setting a breakpoint would not cause a deadlock to end. Does anyone else have a theoretical suggestion as to what kind of 'construct' might cause this effect? (Apart from 'bad programming.') ;^) Thanks. EDIT: I just noticed... by setting the send speed slower, the problem shows itself much later than expected. I would have expected it around the same number of packets sent... but no, the number of packets sent is much higher this way before the same problem appears.

    Read the article

  • POSIX AIO Library and Callback Handlers

    - by Charles Salvia
    According to the documentation on aio_read/aio_write, there are basically 2 ways that the AIO library can inform your application that an async file I/O operation has completed: either 1) you can use a signal, or 2) you can use a callback function. I think that callback functions are vastly preferable to signals, and would probably be much easier to integrate into higher-level multi-threaded libraries. Unfortunately, the documentation for this functionality is a mess to say the least. Some sources, such as the man page for the sigevent struct, indicate that you need to set the sigev_notify data member in the sigevent struct to SIGEV_CALLBACK and then provide a function handler. Presumably, the handler is invoked in the same thread. Other documentation indicates you need to set sigev_notify to SIGEV_THREAD, which will invoke the callback handler in a newly created thread. In any case, on my Linux system (Ubuntu with a 2.6.28 kernel) SIGEV_CALLBACK doesn't seem to be defined anywhere, but SIGEV_THREAD works as advertised. Unfortunately, creating a new thread to invoke the callback handler seems really inefficient, especially if you need to invoke many handlers. It would be better to use an existing pool of threads, similar to the way most network I/O event demultiplexers work. Some versions of UNIX, such as QNX, include a SIGEV_SIGNAL_THREAD flag, which allows you to invoke handlers using a specified existing thread, but this doesn't seem to be available on Linux, nor does it seem to even be a part of the POSIX standard. So, is it possible to use the POSIX AIO library in a way that invokes user handlers in a pre-allocated background thread/threadpool, rather than creating/destroying a new thread every time a handler is invoked?
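    For reference, a minimal sketch of the SIGEV_THREAD notification path being discussed (file name and buffer size are placeholders). The callback runs on a thread the implementation creates for the notification, which is exactly the overhead the question is trying to avoid:

        /* compile with -lrt on glibc */
        #include <aio.h>
        #include <fcntl.h>
        #include <signal.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        static char buf[4096];

        static void on_complete(union sigval sv)
        {
            struct aiocb *cb = sv.sival_ptr;
            printf("read %zd bytes\n", aio_return(cb));
        }

        int main(void)
        {
            struct aiocb cb;
            memset(&cb, 0, sizeof cb);
            cb.aio_fildes = open("data.bin", O_RDONLY);
            cb.aio_buf    = buf;
            cb.aio_nbytes = sizeof buf;
            cb.aio_sigevent.sigev_notify          = SIGEV_THREAD;
            cb.aio_sigevent.sigev_notify_function = on_complete;
            cb.aio_sigevent.sigev_value.sival_ptr = &cb;   /* handed to the callback */

            aio_read(&cb);
            pause();   /* toy wait; a real program would synchronize properly */
            return 0;
        }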

    Read the article

  • Android Scoreloop, OpenFeint etc al

    - by theblitz
    I am looking to use one of the social gaming networks in my Android program. Most important for me is the ability to build a continuous leaderboard in which players move up and down depending on their wins/losses against others. The idea is for players to challenge others head-to-head: the winner gains points and the loser loses points. Equally important, I want this feature to include the possibility to "charge" the player game coins. Scoreloop includes the possibility of challenges, but there they exist in order to win coins off other players. In other words, they are the means to the end. In my case I need it to be the other way around: the "end" is to be higher on the leaderboard, and the "means" is to play others with coins. Scoreloop does have a continuous leaderboard, but it is not accessible from the program. I tried looking at OpenFeint, but their site is a real mess. It is impossible to understand from there exactly what is and isn't available. I signed up and tried to add my program. I ended up adding it four times and cannot delete it!

    Read the article

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read / written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512 byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs, I could go for a more memory hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness vs. performance)? Thanks in advance for any ideas, Adam
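    The snippet below is a hedged sketch of the kind of loop being described (the progress-callback shape is assumed, not the library's actual interface). As one data point for the buffer-size question: 80 KB is a common choice in .NET code because it stays under the 85,000-byte large-object-heap threshold while still amortizing the per-Read overhead:

        using System;
        using System.IO;

        public static class StreamCopier
        {
            // Copies source to destination in chunks, reporting bytes copied so far after each chunk.
            public static void CopyWithProgress(Stream source, Stream destination, Action<long> onProgress)
            {
                const int BufferSize = 80 * 1024;   // below the 85 KB large-object-heap threshold
                var buffer = new byte[BufferSize];
                long copied = 0;
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                {
                    destination.Write(buffer, 0, read);
                    copied += read;
                    onProgress(copied);   // e.g. raise the progress event the main app listens to
                }
            }
        }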

    Read the article

  • Can't connect to SQL Server 2008 - looks like Shared Memory problem

    - by Proposition Joe
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message: "A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)" Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason. When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer; perhaps this is making a difference in some way (as some of the discussions in this post about an "SuperSocketNetLib\Lpc" registry key seem to suggest).

    Read the article

  • iPhone - is it IMPOSSIBLE to grab the contents of a CALayers composition?

    - by Mike
    I have a CALayer transformed in 3D on an offscreen UIView (much larger than 320x480). How do I dump what is seen on this UIView to a UIImage? NOTE: I have edited the question to include this code. This is how I create the layer:

        CGRect area = CGRectMake(0, 0, 400, 600);
        vista3D = [[UIView alloc] initWithFrame:area];
        [self.view addSubview:vista3D];
        [vista3D release];

        transformed = [CALayer layer];
        transformed.frame = area;
        [vista3D.layer addSublayer:transformed];

        CALayer *imageLayer = [CALayer layer];
        imageLayer.doubleSided = YES;
        imageLayer.frame = area;
        imageLayer.transform = CATransform3DMakeRotation(40 * M_PI / 180.0f, 1.0f, 0.0f, 0.0f);
        imageLayer.contents = (id)myRawImage.CGImage;
        [transformed addSublayer:imageLayer];

        // Add a perspective effect
        CATransform3D initialTransform = transformed.sublayerTransform;
        initialTransform.m34 = 1.0 / -500;
        transformed.sublayerTransform = initialTransform;
        // now the layer is in perspective

        // my next step is to "flatten" the composition into a UIImage
        UIImage *thisIsTheResult = // some magic command

    Thanks for any help! EDIT 1: I have tried jessecurry's solution but it gives me a flat layer without any perspective. EDIT 2: I discovered a partial solution for this that works, but it gives me an image the size of the screen, and I was looking to obtain a higher resolution version, rendering off screen.
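    For reference, the usual flattening call is renderInContext: on the layer, sketched below with the view from the question. Be aware that renderInContext: composites sublayers without honoring their 3D transforms, which would explain the flat, perspective-less result mentioned in EDIT 1; capturing true 3D perspective generally means rendering the content yourself (e.g. with OpenGL):

        // A sketch, not a guaranteed solution: render the off-screen view's layer tree
        // into an image context of the view's own size (not the screen's).
        UIGraphicsBeginImageContext(vista3D.bounds.size);
        [vista3D.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *thisIsTheResult = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();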

    Read the article

  • Issues in Convergence of Sequential minimal optimization for SVM

    - by Amol Joshi
    I have been working on Support Vector Machines for about 2 months now. I have coded the SVM myself, and for the optimization problem of the SVM I have used Sequential Minimal Optimization (SMO) as described by John Platt. Right now I am in the phase where I am doing a grid search to find the optimal C value for my dataset. (Please find details of my project application and dataset here: http://stackoverflow.com/questions/2284059/svm-classification-minimum-number-of-input-sets-for-each-class) I have successfully checked my custom implemented SVM's accuracy for C values ranging from 2^0 to 2^6. But now I am having some issues regarding the convergence of the SMO for C = 128. I have tried to find the alpha values for C = 128 and it takes a long time before it actually converges and successfully gives alpha values. The time taken for the SMO to converge is about 5 hours for C = 100. This is huge, I think (because SMO is supposed to be fast), though I'm getting good accuracy. I am stuck right now because I cannot test the accuracy for higher values of C. I am actually displaying the number of alphas changed in every pass of SMO and getting 10, 13, 8... alphas changing continuously. The KKT conditions assure convergence, so what is so weird happening here? Please note that my implementation is working fine for C <= 100 with good accuracy, though the execution time is long. Please give me inputs on this issue. Thank you and cheers.

    Read the article

  • fast similarity detection

    - by reinierpost
    I have a large collection of objects and I need to figure out the similarities between them. To be exact: given two objects I can compute their dissimilarity as a number, a metric; higher values mean less similarity, and 0 means the objects have identical contents. The cost of computing this number is proportional to the size of the smaller object (each object has a given size). I need the ability to quickly find, given an object, the set of objects similar to it. To be exact: I need to produce a data structure that maps any object o to the set of objects no more dissimilar to o than d, for some dissimilarity value d, such that listing the objects in the set takes no more time than if they were in an array or linked list (and perhaps they actually are). Typically, the set will be very much smaller than the total number of objects, so it is really worthwhile to perform this computation. It's good enough if the data structure assumes a fixed d, but if it works for an arbitrary d, even better. Have you seen this problem before, or something similar to it? What is a good solution? To be exact: a straightforward solution involves computing the dissimilarities between all pairs of objects, but this is slow, O(n^2) where n is the number of objects. Is there a general solution with lower complexity?
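    If the dissimilarity really is a metric (in particular, if it satisfies the triangle inequality, which the question does not explicitly guarantee), metric trees such as BK-trees or vantage-point trees answer exactly this kind of range query without comparing the query object against the whole collection. A minimal vantage-point tree sketch in Python, with an arbitrary dist callable standing in for the real dissimilarity function:

        import random

        class VPTree:
            def __init__(self, items, dist):
                self.dist = dist
                self.vp = None
                items = list(items)
                if not items:
                    return
                self.vp = items.pop(random.randrange(len(items)))
                self.inside = self.outside = None
                self.radius = 0.0
                if not items:
                    return
                dists = [dist(self.vp, x) for x in items]
                self.radius = sorted(dists)[len(dists) // 2]   # median split
                self.inside  = VPTree([x for x, dx in zip(items, dists) if dx <= self.radius], dist)
                self.outside = VPTree([x for x, dx in zip(items, dists) if dx > self.radius], dist)

            def within(self, obj, d):
                """Return all stored objects whose dissimilarity to obj is <= d."""
                if self.vp is None:
                    return []
                dv = self.dist(obj, self.vp)
                found = [self.vp] if dv <= d else []
                # the triangle inequality lets us prune whole subtrees
                if self.inside is not None and dv - d <= self.radius:
                    found += self.inside.within(obj, d)
                if self.outside is not None and dv + d >= self.radius:
                    found += self.outside.within(obj, d)
                return found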

    Read the article
