Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • iPhone NSTimer OpenGL problem

    - by Toby Wilson
    I've got a problem that only seems to occur on the device, not in the simulator. My app's animation is started and stopped using these methods:

        NSTimer* animationTimer;

        -(void)startAnimation {
            if(animationTimer = nil)
                animationTimer = [NSTimer scheduledTimerWithTimeInterval:1.0f/60.0f
                                                                  target:self
                                                                selector:@selector(drawView)
                                                                userInfo:nil
                                                                 repeats:YES];
        }

        -(void)stopAnimation {
            [animationTimer invalidate];
            animationTimer = nil;
        }

    In the simulator this works fine and drawView starts being called at 60fps. On the device (testing on an iPod Touch), the scheduledTimerWithTimeInterval method doesn't seem to work. Furthermore, [animationTimer invalidate] causes EXC_BAD_ACCESS. I've spotted an obvious but minor flaw: adding if(animationTimer != nil) to the stopAnimation method should prevent the crash, but it doesn't solve the problem of the animation timer not being properly initialised.

    Edit: the above doesn't prevent the crash. animationTimer != nil, yet calling invalidate causes EXC_BAD_ACCESS. I should also add that this problem doesn't occur all the time on the device; maybe 40% of the time.
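
    One thing worth ruling out first (assuming the code above is transcribed verbatim from the real source): the guard in startAnimation uses a single = rather than ==, which assigns nil, evaluates to false, and so never schedules the timer at all. That it appears to work in the simulator suggests the real source may differ, which is exactly why it's worth checking. A corrected sketch:

        -(void)startAnimation {
            // Compare, don't assign: with "=", the body below never runs.
            if (animationTimer == nil) {
                animationTimer = [NSTimer scheduledTimerWithTimeInterval:1.0f/60.0f
                                                                  target:self
                                                                selector:@selector(drawView)
                                                                userInfo:nil
                                                                 repeats:YES];
            }
        }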

    Read the article

  • Technology and language for a stable Digital Audio Workstation development

    - by Kill KRT
    Hi, I'm designing a cross-platform (Windows/Linux/OS X) application, something like a digital audio workstation. I'd like to create software where users have a fully featured sequencer (multiple tracks with automation) and where it is possible to create instruments using a visual language (as in Pure Data/Max MSP).

    Ehm... I know I've already posted a question about a related issue, but in order to decide which technology I should use, I think I'd better investigate further. I'm quite an experienced user of audio trackers (Renoise, Protracker, ...) and sequencers (FL Studio, Cubase 5), but I've never tried to develop even a basic audio tracker. I know just the basic theory of mixing sound and how a DSP basically works.

    My questions are: Where can I find a good tutorial/guide/book about this issue? Do you think using C# (with NAudio) could dramatically reduce performance? I know C++ would be the best choice, but I find C# so elegant and easy to build and port, while C++ is so powerful and fast, but has too many #defines and other things not to my taste! ;-) Thank you.

    Read the article

  • Facebook Stream.Get -- How to access data in array?

    - by user316841
    On stream.get, I try to echo $feeds["posts"][$i]["attachment"]["href"]; and it returns the URL. But in the same array scope where "type" is located (which returns a string: video, etc.), trying $feeds["posts"][$i]["attachment"]["type"] returns nothing at all! Here's an array through PHP's var_dump: http://pastie.org/930475. So, from testing, I suppose this is protected by Facebook? Does that make sense at all? Here it is in full: http://pastie.org/930490, but not all attachment/media/types have values. It's also strange that I can't access the data through [attachment][media][href] or [attachment][media][type], and if I try [attachment][media][0][type] or href, it gives me a string offset error.

        ["attachment"]=> array(8) {
          ["media"]=> array(1) {
            [0]=> array(5) {
              ["href"]=> string(55) "http://www.facebook.com/video/video.php?v=1392999461587"
              ["alt"]=> string(13) "IN THE STUDIO"
              ["type"]=> string(5) "video"

    My question is: is this protected by Facebook, or can we actually access this array position?
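
    One diagnostic worth trying (plain PHP, nothing Facebook-specific): a "string offset" notice means the value being indexed is a string rather than an array, which happens when some posts carry an empty or stringified attachment. Guarding with is_array shows which posts actually have structured media:

        <?php
        // Sketch: only index into media when it really is an array.
        $attachment = $feeds["posts"][$i]["attachment"];
        if (is_array($attachment)
            && isset($attachment["media"][0])
            && is_array($attachment["media"][0])) {
            echo $attachment["media"][0]["type"];
        } else {
            var_dump($attachment);  // see what this post actually carries
        }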

    Read the article

  • Use different configurations with Simple Injection

    - by Ruben.Canton
    I'm using the "Simple Injector" library (http://simpleinjector.codeplex.com) and it looks cool and nice. But after building a configuration and using it, I now want to know how to change from one configuration to another.

    Scenario: let's imagine I've set up a configuration in Global.asax and I have the public, global Container there. Now I want to write some tests and I want them to use mock classes, so I want to change the configuration. I can, of course, build another configuration and assign it to the global Container created by default, so that every time I run a test the alternative configuration is used. But in doing that, even though I'm in a development context, the Container is changed for everyone, even for normal requests. I know I'm only testing in this context and that shouldn't matter, but I have the feeling that this is not the right way to do it, and I wonder how to switch from one configuration to another correctly.

    Note: the Simple Injector documentation says that you can ask questions on Stack Overflow, so that's why I'm here. =P

    PD: I'm new to this IoC and DI world, so go easy on me when explaining it :)
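
    One pattern that avoids mutating the shared Container at all (a sketch, not Simple Injector's prescribed approach; the interface and class names below are invented, while Register<TService, TImplementation> and GetInstance<T> are core Simple Injector calls): push the registrations into a factory method, let Global.asax build the production container once, and let each test build its own short-lived container with mocks swapped in:

        // Hypothetical composition root shared by Global.asax and the tests.
        public static class Bootstrapper
        {
            public static Container CreateContainer(bool useMocks)
            {
                var container = new Container();
                if (useMocks)
                    container.Register<IOrderRepository, FakeOrderRepository>();
                else
                    container.Register<IOrderRepository, SqlOrderRepository>();
                return container;
            }
        }

        // Global.asax: the one shared instance for real requests.
        Container = Bootstrapper.CreateContainer(false);

        // In a test fixture: a private container that never touches the global one.
        var container = Bootstrapper.CreateContainer(true);
        var sut = container.GetInstance<OrderService>();

    Since a container is cheap to construct relative to a test run, tests stay isolated from live requests by construction rather than by discipline.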

    Read the article

  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user has a feed of their friends' activities. I have two possible methods for generating the feeds and I would like to ask which is best in terms of ability to scale. Events from all users are collected in one central database table, event_log. Users are paired as friends in the table friends. The RDBMS we are using is MySQL.

    Standard method: When a user requests their feed page, the system generates the feed by inner joining event_log with friends. The result is then cached and set to time out after 5 minutes. Scaling is achieved by varying this timeout.

    Hypothesised method: A task runs in the background and, for each new, unprocessed item in event_log, it creates entries in the database table user_feed, pairing that event with all of the users who are friends with the user who initiated the event. One table row pairs one event with one user.

    The problems with the standard method are well known: what if a lot of people's caches expire at the same time? The solution also does not scale well; the brief is for feeds to update as close to real time as possible.

    The hypothesised solution in my eyes seems much better: all processing is done offline, so no user waits for a page to generate, and there are no joins, so database tables can be sharded across physical machines. However, if a user has 100,000 friends and creates 20 events in one session, that results in inserting 2,000,000 rows into the database.

    Question: The question boils down to two points: Is the worst-case scenario mentioned above problematic, i.e. does table size have an impact on MySQL performance, and are there any issues with this mass inserting of data for each event? Is there anything else I have missed?
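
    To make the hypothesised fan-out concrete, it could run as one set-based INSERT ... SELECT per batch rather than row-at-a-time inserts (a sketch; the processed flag and the friends columns user_id/friend_id are assumptions about the schema):

        -- Fan each unprocessed event out to all friends of its author.
        -- Run both statements inside one transaction to avoid losing events.
        INSERT INTO user_feed (user_id, event_id)
        SELECT f.friend_id, e.id
        FROM event_log AS e
        INNER JOIN friends AS f ON f.user_id = e.user_id
        WHERE e.processed = 0;

        UPDATE event_log SET processed = 1 WHERE processed = 0;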

    Read the article

  • Serialized NHibernate Configuration objects - detect out of date or rebuild on demand?

    - by fostandy
    I've been using serialized NHibernate Configuration objects (also discussed here and here) to speed up my application startup from about 8s to 1s. I also use Fluent NHibernate, so the pipeline is roughly: ClassMap class definitions in code → FluentConfiguration → XML → NHibernate Configuration → Configuration serialized to disk.

    The problem with doing this is that one runs the risk of out-of-date mappings: if I change the mappings but forget to rebuild the serialized configuration, I end up using the old mappings without realising it. This does not always result in an immediate and obvious error during testing, and several times the misbehaviour has been a real pain to detect and fix.

    Does anybody have any idea how I could detect whether my ClassMaps have changed, so that I could either issue an immediate warning/error or rebuild the configuration on demand? At the moment I am comparing timestamps on my compiled assembly against the serialized configuration. This picks up mapping changes, but unfortunately it generates a massive false-positive rate, as ANY change to the code results in an out-of-date flag. I can't move the ClassMaps to another assembly, as they are tightly integrated into the business logic. This has been niggling me for a while, so I was wondering if anybody had any suggestions.
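
    One direction worth sketching (with heavy assumptions: only direct ClassMap<T> subclasses are considered, and constructor IL is treated as a stand-in for "the mapping changed", so a compiler upgrade would also invalidate it): fingerprint the IL of the mapping-class constructors instead of the whole assembly's timestamp, store the fingerprint beside the serialized configuration, and rebuild when it differs:

        using System;
        using System.Linq;
        using System.Reflection;
        using System.Security.Cryptography;
        using FluentNHibernate.Mapping;

        static class MappingFingerprint  // name made up for this sketch
        {
            public static string Compute(Assembly mappingAssembly)
            {
                // Collect the IL of every ClassMap<T> constructor in a stable order.
                // Note: this only matches direct subclasses of ClassMap<T>.
                byte[] il = mappingAssembly.GetTypes()
                    .Where(t => t.BaseType != null && t.BaseType.IsGenericType
                             && t.BaseType.GetGenericTypeDefinition() == typeof(ClassMap<>))
                    .OrderBy(t => t.FullName)
                    .SelectMany(t => t.GetConstructors(
                        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
                    .SelectMany(c => c.GetMethodBody().GetILAsByteArray())
                    .ToArray();
                using (var sha = SHA1.Create())
                    return Convert.ToBase64String(sha.ComputeHash(il));
            }
        }

    Unrelated edits elsewhere in the assembly leave the ClassMap constructors' IL untouched, so the false positives shrink to changes in the mapping code itself.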

    Read the article

  • How to efficiently convert String, integer, double, datetime to binary and vice versa?

    - by Ben
    Hi, I'm quite new to C# (I'm using .NET 4.0), so please bear with me. I need to save some object properties (of types int, double, String, boolean, datetime) to a file, but I want to encrypt the files using my own encryption, so I can't just use a FileStream. I also don't want to use object serialization, because of performance issues.

    The idea is simple: first I need to somehow convert objects (their properties) to a binary array, then encrypt (some sort of XOR) the array and append it to the end of the file. When reading, I first decrypt the array and then somehow convert the binary array back into object properties (from which I'll generate the objects). I know (roughly =) ) how to convert these things by hand and I could code it, but it would be useless (too slow). I think the best way would be just to get the properties' in-memory representation and save that, but I don't know how to do that in C# (maybe using pointers?). I also thought about using MemoryStream, but again I think it would be inefficient. I'm considering the Convert class, but it does not support ToByte(DateTime) (the documentation says it always throws an exception). For converting back, I think the only option is the Convert class.

    Note: I know the structure of the objects and they will not change; the maximum String length is also known. Thank you for all your ideas and time.

    EDIT: I will be storing only parts of objects, and in some cases parts of different objects (a couple of properties from one object type and a couple from another), so I think serialization is not an option for me.
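
    For the conversion step itself, the usual building blocks are BinaryWriter/BinaryReader over a MemoryStream, plus DateTime.ToBinary()/FromBinary() for dates; they avoid reflection-based serialization entirely. A round-trip sketch with an invented set of properties:

        using System;
        using System.IO;

        // Sketch: pack known properties into bytes and read them back.
        static byte[] Pack(int id, double price, string name, bool active, DateTime when)
        {
            using (var ms = new MemoryStream())
            using (var w = new BinaryWriter(ms))
            {
                w.Write(id);
                w.Write(price);
                w.Write(name);             // length-prefixed string
                w.Write(active);
                w.Write(when.ToBinary());  // DateTime encoded as a long
                return ms.ToArray();
            }
        }

        static void Unpack(byte[] data)
        {
            using (var r = new BinaryReader(new MemoryStream(data)))
            {
                int id = r.ReadInt32();
                double price = r.ReadDouble();
                string name = r.ReadString();
                bool active = r.ReadBoolean();
                DateTime when = DateTime.FromBinary(r.ReadInt64());
                Console.WriteLine("{0} {1} {2} {3} {4}", id, price, name, active, when);
            }
        }

    The byte array returned by Pack can then be XOR-encrypted and appended to the file, with the same steps reversed on read.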

    Read the article

  • iPhone: Using dispatch_after to mimick NSTimer

    - by Joseph Tura
    I don't know a whole lot about blocks. How would you go about mimicking a repeating NSTimer with dispatch_after? My problem is that I want to "pause" a timer when the app moves to the background, but subclassing NSTimer does not seem to work. I tried something which seems to work, but I cannot judge its performance implications or whether it could be greatly optimized. Any input is welcome.

        #import "TimerWithPause.h"

        @implementation TimerWithPause

        @synthesize timeInterval;
        @synthesize userInfo;
        @synthesize invalid;
        @synthesize invocation;

        + (TimerWithPause *)scheduledTimerWithTimeInterval:(NSTimeInterval)aTimeInterval
                                                    target:(id)aTarget
                                                  selector:(SEL)aSelector
                                                  userInfo:(id)aUserInfo
                                                   repeats:(BOOL)aTimerRepeats
        {
            TimerWithPause *timer = [[[TimerWithPause alloc] init] autorelease];
            timer.timeInterval = aTimeInterval;

            NSMethodSignature *signature = [[aTarget class] instanceMethodSignatureForSelector:aSelector];
            NSInvocation *aInvocation = [NSInvocation invocationWithMethodSignature:signature];
            [aInvocation setSelector:aSelector];
            [aInvocation setTarget:aTarget];
            [aInvocation setArgument:&timer atIndex:2];
            timer.invocation = aInvocation;
            timer.userInfo = aUserInfo;

            if (!aTimerRepeats) {
                timer.invalid = YES;
            }

            [timer fireAfterDelay];
            return timer;
        }

        - (void)fireAfterDelay
        {
            dispatch_time_t delay = dispatch_time(DISPATCH_TIME_NOW, self.timeInterval * NSEC_PER_SEC);
            dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
            dispatch_after(delay, queue, ^{
                [invocation performSelectorOnMainThread:@selector(invoke) withObject:nil waitUntilDone:NO];
                if (!invalid) {
                    [self fireAfterDelay];
                }
            });
        }

        - (void)invalidate
        {
            invalid = YES;
            [invocation release];
            invocation = nil;
            [userInfo release];
            userInfo = nil;
        }

        - (void)dealloc
        {
            [self invalidate];
            [super dealloc];
        }

        @end

    Read the article

  • Why would this Lua optimization hack help?

    - by Ian Boyd
    I'm looking over a document that describes various techniques to improve the performance of Lua script code, and I'm shocked that such tricks would be required. (Although I'm quoting Lua, I've seen similar hacks in JavaScript.) Why would this optimization be required? For instance, the code

        for i = 1, 1000000 do
          local x = math.sin(i)
        end

    runs 30% slower than this one:

        local sin = math.sin
        for i = 1, 1000000 do
          local x = sin(i)
        end

    They're re-declaring the sin function locally. Why would this be helpful? It's the job of the compiler to do that anyway. Why is the programmer having to do the compiler's job? I've seen similar things in JavaScript, so obviously there must be a very good reason why the interpreting compiler isn't doing its job. What is it?

    I see it repeatedly in the Lua environment I'm fiddling in; people redeclaring variables as local:

        local strfind = strfind
        local strlen = strlen
        local gsub = gsub
        local pairs = pairs
        local ipairs = ipairs
        local type = type
        local tinsert = tinsert
        local tremove = tremove
        local unpack = unpack
        local max = max
        local min = min
        local floor = floor
        local ceil = ceil
        local loadstring = loadstring
        local tostring = tostring
        local setmetatable = setmetatable
        local getmetatable = getmetatable
        local format = format
        local sin = math.sin

    What is going on here that people have to do the work of the compiler? Is the compiler confused by how to find format? Why is this an issue that a programmer has to deal with? Why would this not have been taken care of in 1993? I also seem to have hit a logical paradox:

    1. Optimization should not be done without profiling
    2. Lua has no ability to be profiled
    3. Therefore Lua should not be optimized
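
    Some context on why the trick works (a general fact about the language, not from the quoted document): math.sin costs two dynamic table lookups on every iteration, first the global environment for math, then the math table for sin, and because those tables can be mutated at any time the compiler cannot legally hoist the lookup out of the loop; a local, by contrast, lives in a VM register. The difference can be measured with os.clock:

        -- Sketch: time the global-lookup loop against the local-alias loop.
        local t0 = os.clock()
        for i = 1, 10000000 do local x = math.sin(i) end
        print("global lookup:", os.clock() - t0)

        local sin = math.sin          -- one lookup, hoisted out of the loop
        local t1 = os.clock()
        for i = 1, 10000000 do local x = sin(i) end
        print("local alias:  ", os.clock() - t1)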

    Read the article

  • rspec and ruby 1.9.1: problem with dummy controller and routes

    - by giorgian
    I want to test a module that basically executes some verify statements, to ensure that actions are invoked with the correct method.

        # /lib/rest_verification.rb
        module RestVerification
          def self.included(base) # :nodoc:
            base.extend(ClassMethods)
          end

          module ClassMethods
            def verify_rest_actions
              verify :method => :post, :only => [:create], :redirect_to => { :action => :new }
              ...
            end
          end
        end

    I tried this:

        describe RestVerification do
          class FooController < ActionController::Base
            include RestVerification
            verify_rest_actions

            def new ; end
            def index ; end
            def create ; end
            def edit ; end
            def update ; end
            def destroy ; end
          end

          # controller_name 'foo'  # this only works with ruby 1.8.7; 1.9.1 says "uninitialized constant FooController"
          tests FooController      # this works with both

          before(:each) do
            ActionController::Routing::Routes.draw do |map|
              map.resources :foo
            end
          end

          after(:each) do
            ActionController::Routing::Routes.reload!
          end

          it ':create should redirect to :new if invoked with wrong verb' do
            [:get, :put, :delete].each do |verb|
              send verb, :create
              response.should redirect_to(new_foo_url)
            end
          end
          ...
        end

    When testing:

        $ ruby -v
        ruby 1.8.7 (2010-01-10 patchlevel 249) [i486-linux]
        $ rake
        RestVerification
          :create should redirect to :new if invoked with wrong verb
        Finished in 0.175586 seconds

        $ rvm use 1.9.1
        Using ruby 1.9.1 p378
        $ rake
        RestVerification
          :create should redirect to :new if invoked with wrong verb (FAILED - 1)
        1) 'RestVerification :create should redirect to :new if invoked with wrong verb' FAILED
        expected redirect to "http://test.host/foo/new", got redirect to
        "http://test.host/spec/rails/example/controller_example_group/subclass_1/foo/new"

    Is this a known issue? Is there a workaround?

    Read the article

  • Xcode "Build and Archive" from command line

    - by Dan Fabulich
    Xcode 3.2 provides an awesome new feature under the Build menu, "Build and Archive", which generates an .ipa file suitable for Ad Hoc distribution. You can also open the Organizer, go to "Archived Applications," and "Submit Application to iTunesConnect."

    Is there a way to use "Build and Archive" from the command line (as part of a build script)? I'd assume that xcodebuild would be involved somehow, but the man page doesn't seem to say anything about this.

    UPDATE: Michael Grinich requested clarification; here's exactly what you can't do with command-line builds, features you can ONLY do with Xcode's Organizer after you "Build and Archive."

    1. You can click "Share Application..." to share your IPA with beta testers. As Guillaume points out below, due to some Xcode magic, this IPA file does not require a separately distributed .mobileprovision file that beta testers need to install; that's magical. No command-line script can do it. For example, Arrix's script (submitted May 1) does not meet that requirement.

    2. More importantly, after you've beta tested a build, you can click "Submit Application to iTunes Connect" to submit that EXACT same build to Apple, the very binary you tested, without rebuilding it. That's impossible from the command line, because signing the app is part of the build process; you can sign bits for Ad Hoc beta testing OR you can sign them for submission to the App Store, but not both. No IPA built on the command line can be beta tested on phones and then submitted directly to Apple.

    I'd love for someone to come along and prove me wrong: both of these features work great in the Xcode GUI and cannot be replicated from the command line.

    Read the article

  • Maximum page fetch with maximum bandwidth

    - by Ehsan
    Hi, I want to create an application like a spider. I've implemented page fetching as in the following code in a multi-threaded application, but there are two problems:

    1) I want to use my maximum bandwidth to send/receive requests. How should I configure my requests to do so (like Download Accelerator and similar applications)? I've heard a normal application will only use about 66% of the available bandwidth.

    2) I don't know exactly what HttpWebRequest.KeepAlive does, but as its name implies, I think I can open a connection to a website and, without closing it, send another request to that site over the existing connection. Does that boost performance, or am I wrong?

        public PageFetchResult Fetch()
        {
            PageFetchResult fetchResult = new PageFetchResult();
            try
            {
                HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(URLAddress);
                HttpWebResponse resp = (HttpWebResponse)req.GetResponse();

                Uri requestedURI = new Uri(URLAddress);
                Uri responseURI = resp.ResponseUri;
                if (Uri.Equals(requestedURI, responseURI))
                {
                    string resultHTML = "";
                    byte[] reqHTML = ResponseAsBytes(resp);
                    if (!string.IsNullOrEmpty(FetchingEncoding))
                        resultHTML = Encoding.GetEncoding(FetchingEncoding).GetString(reqHTML);
                    else if (!string.IsNullOrEmpty(resp.CharacterSet))
                        resultHTML = Encoding.GetEncoding(resp.CharacterSet).GetString(reqHTML);
                    req.Abort();
                    resp.Close();
                    fetchResult.IsOK = true;
                    fetchResult.ResultHTML = resultHTML;
                }
                else
                {
                    URLAddress = responseURI.AbsoluteUri;
                    relayPageCount++;
                    if (relayPageCount > 5)
                    {
                        fetchResult.IsOK = false;
                        fetchResult.ErrorMessage = "Maximum page redirection occured.";
                        relayPageCount = 0;
                        return fetchResult;
                    }
                    req.Abort();
                    resp.Close();
                    return Fetch();
                }
            }
            catch (Exception ex)
            {
                fetchResult.IsOK = false;
                fetchResult.ErrorMessage = ex.Message;
            }
            return fetchResult;
        }
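
    Two facts that bear on both questions (standard System.Net behavior, though whether they fully explain the 66% figure is another matter): .NET caps concurrent HTTP connections per host at 2 by default for desktop applications, which throttles a multi-threaded spider long before raw bandwidth does, and KeepAlive defaults to true, reusing the TCP connection for subsequent requests to the same host and saving a handshake each time. Raising the cap is one line:

        using System.Net;

        // Raise the per-host connection cap before creating any requests
        // (2 is the classic desktop default; 16 here is an arbitrary choice).
        ServicePointManager.DefaultConnectionLimit = 16;

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://example.com/");
        req.KeepAlive = true;  // default: reuse the connection for the next request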

    Read the article

  • What would be different in Java if Enum declaration didn't have the recursive part

    - by atamur
    Please see http://stackoverflow.com/questions/211143/java-enum-definition and http://stackoverflow.com/questions/3061759/why-in-java-enum-is-declared-as-enume-extends-enume for general discussion. Here I would like to learn what exactly would be broken (no longer typesafe, or requiring additional casts, etc.) if the Enum class was defined as

        public class Enum<E extends Enum>

    I'm using this code for testing my ideas:

        interface MyComparable<T> {
          int myCompare(T o);
        }

        class MyEnum<E extends MyEnum> implements MyComparable<E> {
          public int myCompare(E o) { return -1; }
        }

        class FirstEnum extends MyEnum<FirstEnum> {}
        class SecondEnum extends MyEnum<SecondEnum> {}

    With it I wasn't able to find any benefits in this exact case.

    PS: the fact that I'm not allowed to do

        class ThirdEnum extends MyEnum<SecondEnum> {}

    when MyEnum is defined with recursion is a) not relevant, because with real enums you are not allowed to do that anyway, simply because you can't extend an enum yourself, and b) not true: try it in a compiler and you'll see that it does in fact compile without any errors.

    PPS: I'm more and more inclined to believe that the correct answer here would be "nothing would change if you remove the recursive part", but I just can't believe that.

    Read the article

  • Magic function `bash` not found

    - by inspectorG4dget
    I have a bunch of simulations that I want to run on a high-performance cluster, on which I have to make reservations to get computing time. Since the reservations are bounded by time, I am developing an automation script that I can scp to the cluster and run. This script will then download the relevant simulation files, run them, and upload the results. Part of this automation script is in bash (cp, scp, etc.) and the rest is in Python. In order to develop this automation, I am using an IPython notebook.

    So far, I've coded all the Python automation in my IPython notebook and am now trying to write the bash part. However, the %%bash magic doesn't seem to work in my IPython notebook. I get the following error when I have this code in my cell:

    Cell:

        %%bash
        echo hi

    Error:

          File "<ipython-input-22-62ec98e35224>", line 3
            echo hi
                  ^
        SyntaxError: invalid syntax

    On a whim, I tried this:

    Cell:

        %%bash
        print "hi"

    Error:

        hi
        ERROR: Magic function `bash` not found.

    I also tried %%system, %%! and %%shell, but none of those work; they all give me the same error. Why is this happening? How can I fix this?

    Metadata: IPython 0.13.dev, Python 2.7.1, Mac OS X Lion
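
    A possible workaround while sorting out the magic itself (the ! shell escape is long-standing IPython syntax independent of cell magics, which were still being added during the 0.13 development cycle, so a dev snapshot may simply predate %%bash; upgrading IPython is the first thing to try):

        # The "!" shell escape works from any IPython cell or prompt:
        !echo hi

        # Output can even be captured into a Python variable:
        files = !ls *.py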

    Read the article

  • When and why can sprintf fail?

    - by Srekel
    I'm using swprintf to build a string into a buffer (using a loop among other things).

        const int MaxStringLengthPerCharacter = 10 + 1;
        wchar_t* pTmp = pBuffer;
        for ( size_t i = 0; i < nNumPlayers ; ++i)
        {
            const int nPlayerId = GetPlayer(i);
            const int nWritten = swprintf(pTmp, MaxStringLengthPerCharacter, TEXT("%d,"), nPlayerId);
            assert(nWritten >= 0 );
            pTmp += nWritten;
        }
        *pTaskPlayers = '\0';

    If during testing the assert never hits, can I be sure that it will never hit in live code? That is, do I need to check whether nWritten < 0 and handle that, or can I safely assume there won't be a problem? Under which circumstances can it return -1? The documentation more or less just states "If the function fails". In one place I've read that it will fail if it can't match the arguments (i.e. the formatting string to the varargs), but that doesn't worry me. I'm also not worried about buffer overrun in this case; I know the buffer is big enough.
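
    For reference (this comes from C99 7.24.2.3 rather than the documentation excerpt above): swprintf returns a negative value when the output, including the terminating null, would exceed the count argument, or when an encoding error occurs; with a purely numeric format string the encoding case cannot arise, so buffer headroom is the thing to reason about. A self-contained illustration of the truncation case:

        #include <stdio.h>
        #include <wchar.h>

        int main(void)
        {
            wchar_t buf[4];
            /* Fits: 3 characters plus the terminator within a count of 4. */
            int ok = swprintf(buf, 4, L"%d", 123);
            /* Does not fit: 5 characters plus the terminator exceed the count,
               so the call returns a negative value (C99 7.24.2.3). */
            int fail = swprintf(buf, 4, L"%d", 12345);
            printf("%d %d\n", ok, fail);
            return 0;
        }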

    Read the article

  • EXC_BAD_ACCESS on button press for button dynamically added to UIView within UIScrollView

    - by alan
    OK. It's an iPad app. Within the DetailViewController I have added a UIScrollView through IB, and within that UIScrollView I have added a UIView (also added through IB) which holds various dynamically added UITableViews, UILabels and UIButtons. My problem is that I'm getting errors on the UIButton clicks. I have defined this method:

        - (void)translationSearch:(id)sender
        {
            NSLog(@"in transearch");
            [self doSearch];
        }

    This is how I'm adding the UIButton to the UIView:

        UIButton *translationButton = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        translationButton.frame = CGRectMake(6, 200, 200, 20);
        translationButton.backgroundColor = [UIColor clearColor];
        [translationButton setTitle:@"testing" forState:UIControlStateNormal];
        [translationButton addTarget:self
                              action:@selector(translationSearch:)
                    forControlEvents:UIControlEventTouchUpInside];
        [verbView addSubview:translationButton];

    Now the button is added to the form without any issue but, when I press it, I get an EXC_BAD_ACCESS error. I'm sure it's staring me in the face, but I've passed my usual time limit for getting a bug like this fixed, so any help would be greatly appreciated. The only thing I can think of is that the UIButton being within a UIView, which is within a UIScrollView, which is within the view controller, is somehow causing an issue. Cheers.

    Read the article

  • How to scale JPEG images with a non-standard sampling factor in Java?

    - by HRJ
    I am using Java AWT for scaling a JPEG image, to create thumbnails. The code works fine when the image has a normal sampling factor (2x2, 1x1, 1x1). However, an image which has a sampling factor of (1x1, 1x1, 1x1) creates problems when scaled: the colors get corrupted, though the features are recognizable.

    The original and the thumbnail:

    The code I am using is roughly equivalent to:

        static BufferedImage awtScaleImage(BufferedImage image, int maxSize, int hint)
        {
            // We use AWT Image scaling because it has far superior quality
            // compared to JAI scaling. It also performs better (speed)!
            System.out.println("AWT Scaling image to: " + maxSize);
            int w = image.getWidth();
            int h = image.getHeight();
            float scaleFactor = 1.0f;
            if (w > h)
                scaleFactor = ((float) maxSize / (float) w);
            else
                scaleFactor = ((float) maxSize / (float) h);
            w = (int)(w * scaleFactor);
            h = (int)(h * scaleFactor);

            // since this code can run both headless and in a graphics context
            // we will just create a standard rgb image here and take the
            // performance hit in a non-compatible image format if any
            Image i = image.getScaledInstance(w, h, hint);
            image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = image.createGraphics();
            g.drawImage(i, null, null);
            g.dispose();
            i.flush();
            return image;
        }

    (Code courtesy of this page)

    Is there a better way to do this? Here's a test image with a sampling factor of (1x1, 1x1, 1x1).

    Read the article

  • document.write not working when loading external JavaScript source

    - by jadent
    I'm trying to load an external JavaScript file dynamically into an HTML element to preview an ad tag. The script loads and executes, but the script contains document.write, which fails to execute properly without reporting any errors.

        <html>
        <head>
          <script src="//ajax.googleapis.com/ajax/libs/jquery/2.0.3/jquery.min.js"></script>
          <script type="text/javascript">
            $(function() {
              source = 'http://ib.adnxs.com/ttj?id=555281';

              // DOM Insert Approach
              // -----------------------------------
              var script = document.createElement('script');
              script.setAttribute('type', 'text/javascript');
              script.setAttribute('src', source);
              document.body.appendChild(script);
            });
          </script>
        </head>
        <body>
        </body>
        </html>

    I can get it to work if I move the source to the same domain for testing, or if the script is modified to use document.createElement and appendChild instead of document.write, like the code above. I don't have the ability to modify the script, since it is generated and hosted by a third party. Does anyone know why document.write will not work correctly? And is there a way to get around this? Thanks for the help!
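
    As background (general parser behavior, not specific to this ad tag): document.write only inserts markup in place while the HTML parser is still streaming the page; called after the document has loaded, as happens inside a $(function(){...}) handler or a dynamically appended script, it implicitly opens a fresh document, so its output is discarded or wipes the page. A common workaround, sketched below with a made-up #ad-slot container, is to temporarily redirect document.write:

        // Sketch: capture document.write output from a third-party tag.
        var originalWrite = document.write;
        document.write = function (html) {
          $('#ad-slot').append(html);       // route markup into our container
        };

        var script = document.createElement('script');
        script.src = 'http://ib.adnxs.com/ttj?id=555281';
        script.onload = function () {
          document.write = originalWrite;   // restore once the tag has run
        };
        document.body.appendChild(script);

    This only catches tags that write synchronously when executed; scripts they load in turn may need the same treatment.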

    Read the article

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file, and then loop until all of it successfully goes through send() (obviously handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on...

    Although I can use any other number than 8192 when I test the transfer on my local machine, once I try it over a network, it seems like 8191 is the largest number I can use. When I try any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special?

    I've done some more testing, and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me whether there is a number better than the others (better than 8000)?

    I have a feeling that it has something to do with the fact that the default send buffer allocated to the socket by Winsock is 8 KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option nor the TCP_NODELAY option.

    Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?
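
    For experimenting with the theory about the 8 KB send buffer (getsockopt/setsockopt with SO_SNDBUF are standard Winsock calls, though the connection to the 8191 threshold is conjecture), the buffer can be inspected and enlarged like this:

        #include <winsock2.h>
        #include <stdio.h>

        /* Sketch: inspect and enlarge the socket send buffer. */
        void tune_send_buffer(SOCKET s)
        {
            int size = 0;
            int len = sizeof(size);
            getsockopt(s, SOL_SOCKET, SO_SNDBUF, (char *)&size, &len);
            printf("default SO_SNDBUF: %d bytes\n", size);  /* commonly 8192 */

            size = 64 * 1024;  /* arbitrary larger value for testing */
            setsockopt(s, SOL_SOCKET, SO_SNDBUF, (char *)&size, sizeof(size));
        }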

    Read the article

  • Boost link error when using "--layout=system" on VS2005

    - by Kevin
    I'm new to Boost, and thought I'd try it out with some realistic deployment scenarios for the .dlls, so I used the following command to compile/install the libraries:

        .\bjam install --layout=system variant=debug runtime-link=shared link=shared --with-date_time --with-thread --with-regex --with-filesystem --includedir=<my include directory> --libdir=<my bin directory> > installlog.txt

    That seemed to work, but my simple program (taken right from the "Getting Started" page) fails:

        #include <boost/regex.hpp>
        #include <iostream>
        #include <string>

        // Place your functions after this line
        int main()
        {
            std::string line;
            boost::regex pat( "^Subject: (Re: |Aw: )*(.*)" );

            while (std::cin)
            {
                std::getline(std::cin, line);
                boost::smatch matches;
                if (boost::regex_match(line, matches, pat))
                    std::cout << matches[2] << std::endl;
            }
        }

    This fails with the following linker error:

        fatal error LNK1104: cannot open file 'libboost_regex-vc80-mt-1_42.lib'

    I'm sure that both the .lib and the .dlls are in that directory, named how I want them to be (i.e. boost_regex.lib, etc., all unversioned, as --layout=system says). So why is it looking for the versioned name? And how do I get it to look for the unversioned library?

    I've tried this with more "normal" options, such as:

        .\bjam stage --build-type=complete --with-date_time --with-thread --with-filesystem --with-regex > mybuildlog.txt

    and that works fine. I made sure my compiler saw the stage\lib directory, and it compiled and ran fine with nothing beyond having the environment look in the right lib directory. But when I took those "testing" directories away and wanted to use the others (unversioned), it failed. I'm under VS2005 here on XP. Any ideas?
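
    One detail that may explain the versioned name (standard Boost auto-link behavior, though whether it is the whole story for a --layout=system build is untested here): on MSVC, Boost headers auto-link via #pragma comment(lib, ...) generated in boost/config/auto_link.hpp, and by default they request the fully mangled, versioned name regardless of how the files were installed; --layout=system only renames the files on disk. The usual fixes are BOOST_AUTO_LINK_NOMANGLE, which makes auto-linking request unmangled names, or disabling auto-linking entirely:

        // Define before including any Boost header (or project-wide) to stop the
        // headers from emitting #pragma comment(lib, "libboost_regex-vc80-...").
        #define BOOST_ALL_NO_LIB
        #include <boost/regex.hpp>
        // Then add boost_regex.lib to the linker inputs by hand.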

    Read the article

  • Rails: creating a custom data type, to use with generator classes and a bunch of questions related t

    - by Shyam
    Hi, after being productive with Rails for some weeks, I have learned some tricks and gained some experience with the framework. About 10 days ago, I figured out it is possible to build a custom data type for migrations by adding some code to the table definition. Also, after learning a bit about floating point (and how evil it is) vs. integers, the money gem, and other possible solutions, I decided I didn't WANT to use the money gem, but would instead try to learn more about programming and find a solution myself. Some suggestions said that I should be using integers: one for the whole numbers and one for the cents. While playing in script/console, I discovered how easy it is to work with calculations and arrays.

    But I am talking too much (and the reason is to give sufficient background). Right now, while playing with the scaffold generator (yes, I use it, because I like the way I can quickly set up a prototype while I am still researching my objectives), I would like a DRY method. In my opinion, I should build a custom "object" that can hold two variables (Fixnum): one for the whole units, one for the cents. In my big dream, I would be able to do the following:

        script/generate scaffold Cake name:string description:text cost:mycustom

    where mycustom would create two integer columns (one for wholes, one for cents). Right now I could do this with:

        script/generate scaffold Cake name:string description:text cost_w:integer cost_c:integer

    I also had the idea of creating a "cost model", which would hold two integer columns and add a cost_id column to my scaffold. But wouldn't that be an extra table that would cause some kind of performance penalty? And wouldn't that defy the purpose of the Cake model in the first place, given that the costs are an attribute of individual Cake entries? The reason I want such functionality is that I am thinking of having multiple "costs" inside my Rails application.

    Thank you for your feedback, comments and answers! I hope my message got through as understandable; my apologies for incorrect grammar or weird sentences, as English is not my native language.
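
    For the model side of the "two integer columns, one value" idea, ActiveRecord's composed_of (a real API of this Rails era) already maps several columns onto one value object without an extra table; the generator would still be given the two integer columns. A sketch using the column names from the example above:

        # app/models/money.rb -- a plain value object over whole units and cents.
        class Money
          attr_reader :whole, :cents
          def initialize(whole, cents)
            @whole, @cents = whole, cents
          end
        end

        # app/models/cake.rb -- cost_w and cost_c are the two integer columns.
        class Cake < ActiveRecord::Base
          composed_of :cost,
                      :class_name => "Money",
                      :mapping    => [%w(cost_w whole), %w(cost_c cents)]
        end

    With that in place, cake.cost returns a Money built from the two columns, which generalizes naturally to multiple costs per model.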

    Read the article

  • select failing with C program but not shell

    - by Gary
    So I have a parent and child process, and the parent can read output from the child and send to the input of the child. So far, everything has been working fine with shell scripts, testing commands which input and output data. I just tested with a simple C program and couldn't get it to work. Here's the C program:

        #include <stdio.h>

        int main( void )
        {
            char stuff[80];
            printf("Enter some stuff:\n");
            scanf("%s", stuff);
            return 0;
        }

    The problem with the C program is that my select fails to read from the child fd, and hence the program cannot finish. Here's the bit that does the select:

        //wait till child is ready
        fd_set set;
        struct timeval timeout;

        FD_ZERO( &set );              // initialize fd set
        FD_SET( PARENT_READ, &set );  // add child in to set
        timeout.tv_sec = 3;
        timeout.tv_usec = 0;

        int r = select(FD_SETSIZE, &set, NULL, NULL, &timeout);
        if( r < 1 ) {
            // we didn't get any input
            exit(1);
        }

    Does anyone have any idea why this would happen with the C program and not a shell one? Thanks!
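
    A plausible culprit (standard C stdio behavior, though it can't be confirmed without seeing the parent's code): when stdout is a pipe rather than a terminal, C streams switch to full buffering, so the prompt from printf sits in the child's buffer until the process exits, while shell builtins like echo write immediately; the parent's 3-second select then times out with nothing to read. Flushing after the prompt fixes the test program:

        #include <stdio.h>

        int main(void)
        {
            char stuff[80];

            printf("Enter some stuff:\n");
            fflush(stdout);  /* push the prompt through the pipe right now      */
            /* or, once at startup: setvbuf(stdout, NULL, _IONBF, 0);           */

            scanf("%79s", stuff);
            return 0;
        }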

    Read the article

  • NSTimer as a timeout mechanism

    - by alexantd
    I'm pretty sure this is really simple, and I'm just missing something obvious. I have an app that needs to download data from a web service for display in a UITableView, and I want to display a UIAlertView if the operation takes more than X seconds to complete. So this is what I've got (simplified for brevity):

    MyViewController.h

        @interface MyViewController : UIViewController <UITableViewDelegate, UITableViewDataSource>
        {
            NSTimer *timer;
        }
        @property (nonatomic, retain) NSTimer *timer;

    MyViewController.m

        @implementation MyViewController

        @synthesize timer;

        - (void)viewDidLoad
        {
            timer = [NSTimer scheduledTimerWithTimeInterval:20
                                                     target:self
                                                   selector:@selector(initializationTimedOut:)
                                                   userInfo:nil
                                                    repeats:NO];
            [self doSomethingThatTakesALongTime];
            [timer invalidate];
        }

        - (void)doSomethingThatTakesALongTime
        {
            sleep(30); // for testing only
            // web service calls etc. go here
        }

        - (void)initializationTimedOut:(NSTimer *)theTimer
        {
            // show the alert view
        }

    My problem is that I'm expecting the [self doSomethingThatTakesALongTime] call to block while the timer keeps counting, and I'm thinking that if it finishes before the timer is done counting down, control will return to viewDidLoad, where [timer invalidate] will proceed to cancel the timer. Obviously my understanding of how timers/threads work is flawed here, because the way the code is written, the timer never goes off. However, if I remove the [timer invalidate], it does.
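
    The likely missing piece (a general Cocoa fact rather than anything stated in the snippet): an NSTimer fires from the run loop of the thread that scheduled it, and viewDidLoad never returns control to the main run loop while the blocking call executes, so the timer cannot fire before invalidate is reached. Moving the slow work off the main thread lets the timeout behave as intended; a sketch using performSelectorInBackground:

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            self.timer = [NSTimer scheduledTimerWithTimeInterval:20
                                                          target:self
                                                        selector:@selector(initializationTimedOut:)
                                                        userInfo:nil
                                                         repeats:NO];
            // Run the slow work off the main thread so the run loop keeps spinning.
            [self performSelectorInBackground:@selector(doSomethingThatTakesALongTime)
                                   withObject:nil];
        }

        - (void)doSomethingThatTakesALongTime
        {
            // ... web service calls ...
            // Invalidate on the main thread, the thread that scheduled the timer.
            [self.timer performSelectorOnMainThread:@selector(invalidate)
                                         withObject:nil
                                      waitUntilDone:NO];
        }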

    Read the article

  • Doing coding in Linux through a virtual machine on Windows VS partitioning

    - by ihaveitnow
    I already have experience with setting up virtual machines, running them, and other minor tasks. I'm a gamer, so I won't get rid of Windows (for now at least...), but I do want to be a great programmer and to be involved with the open-source community. I'd like to know whether it's a good idea to do my programming in Linux through a virtual machine, versus giving it a partitioned section of the HDD. I'd like to know about performance pros and cons, and about functionality. All responses are appreciated; thanks in advance.

    The type of programming I intend to dive into: Android dev, web dev, desktop dev... more Android and web right now though. So I'm looking at C#, C, C++, Java, PHP, HTML, MySQL... off the top of the dome. I do web design as well, so Dreamweaver is added as an "essential". But I'm sure I can work on Dreamweaver files and upload them to the server after programming in Linux... right? And any info on IDEs in Linux for the above languages is appreciated, but I would prefer going the coding route and understanding the essence of what's happening "under the covers".

    Thanks to all for reading, I appreciate it. Hope this isn't confusing :S

    Read the article

  • Possible to capture all events in a web browser?

    - by David
    I am working on a pet project and am at the research stage.

    Quick summary: I am trying to intercept all form submits, onclick events, and every single keydown. My library of choice is either jQuery, or jQuery + Prototype. I figure I can batch the events up into a queue/stack and send them back to the server in timed batches to keep performance relatively stable.

    Concerns: Form submits and changes would be relatively simple, something like:

        $("form :input").bind("change", function() { ... record event ... });

    But how do I ensure I get precedence over the application's own handlers? I have a habit of putting return false on a lot of my form handlers when there is a validation issue, and as I understand it, that effectively stops the event in its tracks (see the capture-phase sketch at the end of this entry).

    My project: For my smaller remote clients I will put their products onto a VPS or run them in my home data center. Currently I use a basic authentication system: given a username/password they see the website and then hopefully send me somewhat sane notes on what is broken or should be tweaked. As a better solution, I've written a simple proxy web server that does the above but allows me to have one DNS entry; depending on credentials it makes an internal request, relaying headers and rewriting URLs as needed. Every single html/text or application/* request is compressed and shoved into a sqlite table so I can partially replay what they've done. Now I am shifting to the frontend and would like to capture clicks, keydowns, and submits on everything on the page.
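
    On the precedence concern (general DOM facts, not from the original write-up): a jQuery handler that returns false stops propagation only in the bubbling phase; listeners registered for the capturing phase on document run before any of the page's own handlers and cannot be cancelled by them. A recording sketch along those lines (the /record endpoint is made up):

        // Sketch: record events in the capture phase, ahead of app handlers,
        // and flush the queue to the server on an interval.
        var queue = [];

        ['click', 'keydown', 'submit', 'change'].forEach(function (type) {
          document.addEventListener(type, function (e) {
            queue.push({
              type: type,
              target: e.target.tagName + (e.target.id ? '#' + e.target.id : ''),
              time: Date.now()
            });
          }, true);  // true = capturing phase: runs before bubbling handlers
        });

        setInterval(function () {
          if (!queue.length) return;
          $.post('/record', { events: JSON.stringify(queue.splice(0)) });
        }, 5000);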

    Read the article
