Search Results

Search found 9889 results on 396 pages for 'pointer speed'.

Page 327/396 | < Previous Page | 323 324 325 326 327 328 329 330 331 332 333 334  | Next Page >

  • jQuery .attr() crashing Internet Explorer

    - by Sunyatasattva
    Hello everyone, this is my first time posting a question here, as I usually try to find solutions myself. This one, though, being an IE issue, is driving me crazy. I use the jQuery cycle plug-in on a website I made and, to populate a caption div, I call a little function after each image is loaded, which reads the image's "alt" attribute. This seems to exasperate Internet Explorer, which apparently doesn't have the time to fulfill this so-complicated task: as the slideshow cycles, it enters an infinite loop and eventually crashes – the newer the version, the worse the crash. Older IEs just display an error message saying "The webpage cannot be displayed", while the newer ones (7 and 8) freeze the whole system. I have no idea how to solve or work around this. Here is the problematic code:

        function changeCaption() {
            var caption = $("img", this).attr("alt");
            $('#caption').fadeIn("slow").html(caption);
        }

    Thanks in advance for any pointer: I am amazed at how something so simple and universally supported (no other browser I tried had a problem with this) can cause a problem so big. I also read somewhere that being able to crash a browser remotely is a serious issue :) Lucio
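
    A minimal sketch of one workaround worth trying, assuming the crash is aggravated by fadeIn animations queuing up faster than IE can drain them (untested against the poster's page):

        function changeCaption() {
            var caption = $("img", this).attr("alt") || "";
            // stop and flush any queued animation before starting a new fade,
            // so repeated cycles cannot pile up effects in IE
            $('#caption').stop(true, true).hide().html(caption).fadeIn("slow");
        }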

    Read the article

  • What's the Matlab equivalent of NULL, when it's calling COM/ActiveX methods?

    - by David M
    Hi, I maintain a program which can be automated via COM. Generally customers use VBS to do their scripting, but we have a couple of customers who use Matlab's ActiveX support and are having trouble calling COM object methods with a NULL parameter. They've asked how to do this in Matlab – and I've been scouring Mathworks' COM/ActiveX documentation for a day or so now and can't figure it out. Their example code might look something like this:

        function do_something()
            OurAppInstance = actxserver('Foo.Application');
            OurAppInstance.Method('Hello', NULL)
        end

    where NULL stands for whatever in another language we'd write as NULL or nil or Nothing – or, of course, an actual object could be passed in. The problem is that this parameter is optional (and these are implemented as optional parameters in most, but not all, cases) – these methods expect to get NULL quite often. They tell me they've tried [] (which from my reading seemed the most likely), as well as '', Nothing, 'Nothing', None, Null, and 0. I have no idea how many of those are even valid Matlab keywords – certainly none work in this case. Can anyone help? What's Matlab's syntax for a null pointer / object for use as a COM method parameter? Update: thanks for all the replies so far! Unfortunately, none of the answers work, not even libpointer. The error is the same in all cases: Type mismatch, argument 2. The parameter is described in the COM type library's RIDL as:

        HRESULT _stdcall OurMethod([in] BSTR strParamOne,
                                   [in, optional] OurCoClass* oParamTwo,
                                   [out, retval] VARIANT_BOOL* bResult);

    The coclass in question implements a single interface descending from IDispatch.
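
    A hedged thought, not from the original post: since the parameter is declared [in, optional], Matlab may accept simply leaving the trailing argument off, or calling through invoke explicitly. Both are sketches to try rather than confirmed fixes:

        % sketch 1: omit the optional trailing parameter entirely
        OurAppInstance = actxserver('Foo.Application');
        OurAppInstance.Method('Hello')

        % sketch 2: route the call through invoke
        invoke(OurAppInstance, 'Method', 'Hello')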

    Read the article

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1m pages (URLs ending in a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches:

        curl = Curl::Easy.new
        batch_urls.each { |url_info|
            curl.url = url_info[:url]
            curl.perform
            file = File.new(url_info[:file], "wb")
            file << curl.body_str
            file.close
            # ... some other stuff
        }

    I have tried to download an 8000-page sample. Using the code above, I get 1000 pages in 2 minutes. When I write all the URLs into a file and do this in a shell:

        cat list | xargs curl

    I get all 8000 pages in two minutes. The thing is, I need to have it in Ruby code, because there is other monitoring and processing code. I have tried: Curl::Multi – it is somewhat faster, but misses 50-90% of the files (it does not download them and gives no reason/code); multiple threads with Curl::Easy – around the same speed as single-threaded. Why is a reused Curl::Easy slower than successive command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download manager code rather than handle downloading for this case differently. Before this, I was calling command-line wget, which I provided with a file containing the list of URLs. However, not all errors were handled, and it was also not possible to specify a separate output file for each URL when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly from Ruby? Code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help). Any hints appreciated.
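
    A sketch, not a diagnosis – the poster reports plain threading didn't help, so this is something to profile rather than a fix. It ensures each worker thread owns its own Curl::Easy handle so keep-alive connections are reused within a thread (the thread count of 8 is arbitrary):

        slices = batch_urls.each_slice((batch_urls.size / 8.0).ceil).to_a
        threads = slices.map do |slice|
          Thread.new do
            curl = Curl::Easy.new          # one reusable handle per thread
            slice.each do |url_info|
              curl.url = url_info[:url]
              curl.perform
              File.open(url_info[:file], "wb") { |f| f << curl.body_str }
            end
          end
        end
        threads.each(&:join)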

    Read the article

  • algorithm || method to write prog

    - by fatai
    I am a computer science student. I wonder whether everyone solves problems with different methods or the same one – actually I don't know whether they use a method at all, or whether there is a common method for approaching problems. Teachers give us problems, sometimes in simple form, but they don't introduce any approach or method(s) that would let us first choose a method, then apply it to the problem, find the solution, and then write the code. I found my own method after I failed the course. More precisely, when I face a problem in a language, I take some paper and then: First, the input/output step: my program will take this/these argument(s) and return X. For example, in C: the input length is not known in advance and the items are of the same type, so I must use a pointer; the desired output is in the form of a package, so I use a structure. Second, the execution part: in this step I write out all the steps that lead to the final output. For example, in Python:

        1) [+, [-, 4, [*, 1, 2]], 5]
        2) [+, [-, 4, 2], 5]
        3) [+, 2, 5]
        4) 7  ==> return 7

    Third, I write test code. For example, in C++:

        input:
            append 3 4 5 6 vector_x
            remove 0 1
        desired output:
            vector_x holds: 5 6

    But now I wonder about other methods: methods used to construct classes (for C++, Python, Java), methods used to make classes / computers communicate, and methods used for solving embedded-system problems (for C). Why do I wonder? Because I learned that if you don't construct the algorithm on paper, you may not achieve your goal. Like "no money, no lunch", I can say "no algorithm, no program". So feel free to describe your own method, or a way introduced by someone else that you use and find efficient.
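
    The execution trace above is essentially the evaluation of a nested prefix expression; a minimal sketch of that evaluation, not part of the original post, just an illustration:

        import operator

        OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

        def evaluate(expr):
            if not isinstance(expr, list):      # a bare number evaluates to itself
                return expr
            op, left, right = expr
            return OPS[op](evaluate(left), evaluate(right))

        assert evaluate(["+", ["-", 4, ["*", 1, 2]], 5]) == 7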

    Read the article

  • Dragging an UIView inside UIScrollView

    - by Sergey Mikhanov
    Hello community! I am trying to solve a basic problem with drag and drop on the iPhone. Here's my setup: I have a UIScrollView with one large content subview (I'm able to scroll and zoom it). The content subview has several small tiles as subviews that should be draggable around inside it. My UIScrollView subclass has this method:

        - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
            UIView *tile = [contentView pointInsideTiles:[self convertPoint:point toView:contentView] withEvent:event];
            if (tile) {
                return tile;
            } else {
                return [super hitTest:point withEvent:event];
            }
        }

    The content subview has this method:

        - (UIView *)pointInsideTiles:(CGPoint)point withEvent:(UIEvent *)event {
            for (TileView *tile in tiles) {
                if ([tile pointInside:[self convertPoint:point toView:tile] withEvent:event])
                    return tile;
            }
            return nil;
        }

    And the tile view has this method:

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            CGPoint location = [touch locationInView:self.superview];
            self.center = location;
        }

    This works, but not entirely correctly: the tile sometimes "falls down" during the drag. More precisely, it stops receiving touchesMoved: invocations, and the scroll view starts scrolling instead. I noticed that this depends on the drag speed: the faster I drag, the quicker the tile "falls". Any ideas on how to keep the tile glued to the dragging finger? Thanks in advance!
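
    A hedged guess, not from the post: the scroll view may be cancelling the tiles' touches once the finger moves far enough, which is exactly what UIScrollView does by default. The standard knobs for that are sketched here:

        // blanket approach: never cancel touches that subviews are handling
        scrollView.canCancelContentTouches = NO;
        scrollView.delaysContentTouches = NO;

        // or, more selectively, in the UIScrollView subclass:
        - (BOOL)touchesShouldCancelInContentView:(UIView *)view {
            // let scrolling cancel touches everywhere except on tiles
            return ![view isKindOfClass:[TileView class]];
        }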

    Read the article

  • Link checker ; how to avoid false positives

    - by Burnzy
    I'm working on a link checker / broken-link finder and I am getting many false positives. After double checking, I noticed that many URLs returned WebExceptions with error codes but were actually downloadable, and in some other cases the status code is 404 yet I can access the page from the browser. So here is the code; it's pretty ugly, and I'd like to have something more, let's say, practical. All the status codes in that big if are used to filter out the ones I don't want to add to brokenlinks, because they are valid links (I tested them all). What I need to fix is the structure (if possible) and how not to get false 404s. Thank you!

        try {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
            request.Method = "Head";
            request.MaximumResponseHeadersLength = 32; // FOR IE SLOW SPEED
            request.AllowAutoRedirect = true;
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse()) {
                request.Abort();
            }
            /*
            WebClient wc = new WebClient();
            wc.DownloadString(uri);
            */
            _validlinks.Add(strUri);
        }
        catch (WebException wex) {
            if (!wex.Message.Contains("The remote name could not be resolved:") &&
                wex.Status != WebExceptionStatus.ServerProtocolViolation) {
                if (wex.Status != WebExceptionStatus.Timeout) {
                    HttpStatusCode code = ((HttpWebResponse)wex.Response).StatusCode;
                    if (code != HttpStatusCode.OK &&
                        code != HttpStatusCode.BadRequest &&
                        code != HttpStatusCode.Accepted &&
                        code != HttpStatusCode.InternalServerError &&
                        code != HttpStatusCode.Forbidden &&
                        code != HttpStatusCode.Redirect &&
                        code != HttpStatusCode.Found) {
                        _brokenlinks.Add(new Href(new Uri(strUri, UriKind.RelativeOrAbsolute), UrlType.External));
                    }
                    else
                        _validlinks.Add(strUri);
                }
                else
                    _brokenlinks.Add(new Href(new Uri(strUri, UriKind.RelativeOrAbsolute), UrlType.External));
            }
            else
                _validlinks.Add(strUri);
        }
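
    A hedged sketch of one likely source of the false 404s: some servers answer HEAD requests with 404/405 but serve GET just fine, so retrying with GET before declaring a link broken often helps (method names here are illustrative):

        private static bool IsReachable(Uri uri, string method) {
            try {
                var request = (HttpWebRequest)WebRequest.Create(uri);
                request.Method = method;
                request.AllowAutoRedirect = true;
                using (var response = (HttpWebResponse)request.GetResponse())
                    return (int)response.StatusCode < 400;
            }
            catch (WebException) {
                return false;
            }
        }

        // treat as broken only if both HEAD and the GET fallback fail
        bool ok = IsReachable(uri, "Head") || IsReachable(uri, "Get");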

    Read the article

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this:

        INSERT INTO InvoiceDetail (LegacyId,InvoiceId,DetailTypeId,Fee,FeeTax,Investigatorid,SalespersonId,CreateDate,CreatedById,IsChargeBack,Expense,RepoAgentId,PayeeName,ExpensePaymentId,AdjustDetailId)
        VALUES(1,1,2,1500.0000,0.0000,163,1002,'11/30/2001 12:00:00 AM',1116,0,550.0000,850,NULL,@ExpensePay1,NULL);
        DECLARE @InvDetail1 INT;
        SET @InvDetail1 = (SELECT @@IDENTITY);

    This query is generated for only 110K rows, yet it takes 30 minutes for all of these queries to execute. I checked the query plan, and the largest-cost nodes are a Clustered Index Insert at 57% query cost, which has a long XML that I don't want to post, and a Table Spool at 38% query cost:

        <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0" EstimateRebinds="0"
               EstimateRewinds="0" EstimateRows="1" LogicalOp="Eager Spool" NodeId="80"
               Parallel="false" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109">
          <OutputList>
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" />
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" />
            <ColumnReference Column="Expr1054" />
            <ColumnReference Column="Expr1055" />
          </OutputList>
          <Spool PrimaryNodeId="3" />
        </RelOp>

    So my question is: what can I do to improve the speed of this thing? I already run ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL before the queries and ALTER TABLE TABLENAME CHECK CONSTRAINTS ALL after them, and that hardly shaved anything off the time. Note that I am running these queries from a .NET application that uses a SqlCommand object to send them. I then tried to output the SQL commands to a file and execute it using sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas or hints or help?
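
    A hedged sketch, not a guaranteed fix, of two commonly suggested tweaks for this pattern: wrap batches of statements in one explicit transaction (so you pay for one log flush per batch instead of one per row), and prefer SCOPE_IDENTITY() over @@IDENTITY, which can return the wrong value when triggers fire:

        BEGIN TRANSACTION;

        INSERT INTO InvoiceDetail (...) VALUES (...);  -- column list as above
        DECLARE @InvDetail1 INT;
        SET @InvDetail1 = SCOPE_IDENTITY();

        -- ... remaining inserts in this batch ...

        COMMIT TRANSACTION;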

    Read the article

  • Best Practice - Removing item from generic collection in C#

    - by Matt Davis
    I'm using C# in Visual Studio 2008 with .NET 3.5. I have a generic dictionary that maps types of events to a generic list of subscribers. A subscriber can be subscribed to more than one event.

        private static Dictionary<EventType, List<ISubscriber>> _subscriptions;

    To remove a subscriber from the subscription list, I can use either of these two options (event is a C# keyword, so the loop variable is renamed here). Option 1:

        ISubscriber subscriber; // defined elsewhere
        foreach (EventType eventType in _subscriptions.Keys) {
            if (_subscriptions[eventType].Contains(subscriber)) {
                _subscriptions[eventType].Remove(subscriber);
            }
        }

    Option 2:

        ISubscriber subscriber; // defined elsewhere
        foreach (EventType eventType in _subscriptions.Keys) {
            _subscriptions[eventType].Remove(subscriber);
        }

    I have two questions. First, notice that Option 1 checks for existence before removing the item, while Option 2 uses a brute-force removal, since Remove() does not throw an exception. Of these two, which is the preferred, "best-practice" way to do this? Second, is there another, "cleaner", more elegant way to do this, perhaps with a lambda expression or a LINQ extension? I'm still getting acclimated to these two features. Thanks. EDIT: Just to clarify, I realize that the choice between Options 1 and 2 is a choice of speed (Option 2) versus maintainability (Option 1). In this particular case, I'm not necessarily trying to optimize the code, although that is certainly a worthy consideration. What I'm trying to understand is whether there is a generally well-established practice for doing this. If not, which option would you use in your own code?
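
    For what it's worth (not from the post): List<T>.Remove already performs the same linear scan that Contains does, so Option 1 walks each list twice. A sketch of the compact variant, iterating Values to avoid the repeated dictionary lookups of both options:

        foreach (List<ISubscriber> subscribers in _subscriptions.Values) {
            subscribers.Remove(subscriber); // a no-op when the subscriber is absent
        }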

    Read the article

  • Fastest way to generate delimited string from 1d numpy array

    - by Abiel
    I have a program which needs to turn many large one-dimensional numpy arrays of floats into delimited strings. I find this operation quite slow relative to the mathematical operations in my program and am wondering if there is a way to speed it up. For example, consider the following loop, which takes 100,000 random numbers in a numpy array and joins each array into a comma-delimited string:

        import numpy as np
        x = np.random.randn(100000)
        for i in range(100):
            ",".join(map(str, x))

    This loop takes about 20 seconds to complete (total, not per cycle). In contrast, 100 cycles of something like elementwise multiplication (x*x) take less than 1/10 of a second to complete. Clearly the string-join operation creates a large performance bottleneck; in my actual application it will dominate total runtime. This makes me wonder: is there a faster way than ",".join(map(str, x))? Since map() is where almost all the processing time occurs, this comes down to the question of whether there is a faster way to convert a very large number of numbers to strings.
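
    Two hedged alternatives to time against the original – both shift the per-element conversion into numpy's C code, though the exact formatting of each float may differ slightly from str():

        import io
        import numpy as np

        x = np.random.randn(100000)

        # option 1: let numpy convert the whole array to strings at once
        s1 = ",".join(x.astype(str))

        # option 2: format through savetxt into an in-memory buffer
        buf = io.StringIO()
        np.savetxt(buf, x.reshape(1, -1), fmt="%.17g", delimiter=",")
        s2 = buf.getvalue().rstrip("\n")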

    Read the article

  • Passing row from UIPickerView to integer CoreData attribute

    - by Gordon Fontenot
    I'm missing something here, and feeling like an idiot about it. I'm using a UIPickerView in my app, and I need to assign the row number to a 32-bit integer attribute of a Core Data object. To do this, I am using this method:

        - (void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
            object.integerValue = row;
        }

    This is giving me a warning:

        warning: passing argument 1 of 'setIntegerValue:' makes pointer from integer without a cast

    What am I mixing up here? --EDIT 1-- OK, so I can get rid of the errors by changing the method to do the following:

        NSNumber *number = [NSNumber numberWithInteger:row];
        object.integerValue = rating;

    However, I still get a value of 0 for object.integerValue if I use NSLog to print it out. object.integerValue has a max value of 5, so I print out number instead, and then I'm getting a number above 62,000,000, which doesn't seem right to me, since there are 5 rows. If I NSLog the row variable, I get a number between 0 and 5. So why do I end up with a completely different number after casting the number to NSNumber?
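
    A hedged reading, not a confirmed answer: Core Data exposes scalar attributes as NSNumber properties, so the row has to be boxed and that same boxed object assigned (the EDIT assigns rating, not number), and the 62,000,000-ish figure looks like an NSNumber pointer printed with an integer format specifier. A sketch:

        // box the row and assign the boxed value itself
        object.integerValue = [NSNumber numberWithInteger:row];

        // print the boxed value with %@, or unbox it explicitly – not the pointer
        NSLog(@"value = %@ (%d)", object.integerValue, [object.integerValue intValue]);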

    Read the article

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect it in the range 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However, I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well. 1) Is there a danger that the pattern of deletion from the left plus random insertion will affect performance, e.g. by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use? 2) Would some other data structure give me better performance, either because of the deletion pattern or by taking advantage of the fact that I only ever need to find the smallest item? Update: glad I asked – the binary heap is clearly a better solution for this case, and as promised it turned out to be very easy to implement. Hugo
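
    A minimal sketch of the heap-driven loop the update settled on, assuming a hypothetical process() that does the work and yields the 0-2 larger items:

        import heapq

        heap = [initial_item]                # collection starts with one item
        while heap:
            smallest = heapq.heappop(heap)   # O(log n) removal of the minimum
            for item in process(smallest):   # each new item > smallest
                heapq.heappush(heap, item)   # O(log n) insertion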

    Read the article

  • DAL Layer : EF 4.0 or Normal Data access layer with Stored Procedure

    - by Harryboy
    Hello experts, Application: I am working on a mid-to-large-size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not share the same structure; patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order:

        1. Speed of development, with above-average performance
        2. Maintenance
        3. Future support and stability of the technology
        4. Performance

    Limitations: 1) As we must strictly stay with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0. 2) We can use any code generation tool (it should be open source or very cheap), but it should only generate code in .NET, so there would be no licensing issue on a per-copy basis. Questions: I have read many articles about EF 4.0; at the outset it looks like it still lacks features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel that we should go ahead with EF 4.0, or should we stick to ADO.NET and use a code generation tool like CodeSmith or any other you feel is best? I also need to answer questions like how long it would take to port the application from EF 4.0 to ADO.NET if in the future we get stuck on some EF 4.0 feature or hit a serious performance issue – and, in the reverse case, if we go ahead and choose ADO.NET, how long it would take to switch to EF 4.0. Lastly, as I was going through the articles I found that the code-only approach (with POCO classes) seems best suited to our requirement, as switching from one technology to the other is really easy. Please share your thoughts and please advise on the above questions.

    Read the article

  • Use a vector to index a matrix without linear index

    - by David_G
    G'day, I'm trying to find a way to use a vector of [x,y] points to index into a large matrix in MATLAB. Usually, I would convert the subscript points to the linear index of the matrix (as in, for example, Use a vector as an index to a matrix in MATLab). However, the matrix is 4-dimensional, and I want to take all of the elements of the 3rd and 4th dimensions that have the same 1st and 2nd dimension. Let me hopefully demonstrate with an example:

        Matrix = nan(4,4,2,2); % where the dimensions are (x,y,depth,time)
        Matrix(1,2,:,:) = 999; % note that this value could change with depth (3rd dim) and time (4th dim)
        Matrix(3,4,:,:) = 888; % note that this value could change with depth (3rd dim) and time (4th dim)
        Matrix(4,4,:,:) = 124;

    Now, I want to be able to index with the subscripts (1,2) and (3,4), etc., and return not only the 999 and 888 which exist in Matrix(:,:,1,1), but also the contents which exist at Matrix(:,:,1,2), Matrix(:,:,2,1) and Matrix(:,:,2,2), and so on (in real life, the dimensions of Matrix might be more like size(Matrix) = (300 250 30 200)). I don't want to use linear indices because I would like the results in a similar vector fashion. For example, I would like a result which is something like:

        ans(time=1)
            999 888 124
            999 888 124
        ans(time=2)
            etc etc etc
            etc etc etc

    I'd also like to add that due to the size of the matrix I'm dealing with, speed is an issue here – thus why I'd like to use subscript indices to index into the data. I should also mention that (unlike this question: Accessing values using subscripts without using sub2ind), since I want all the information stored in the extra dimensions 3 and 4 for the i-th and j-th indices, I suspect even a slightly faster version of sub2ind would not cut it.
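
    A hedged sketch of one sub2ind-plus-reshape approach, in case it turns out fast enough – the conversion runs once for the point list, not once per depth/time slice:

        pts = [1 2; 3 4; 4 4];                      % the (x,y) subscript pairs
        [nx, ny, nd, nt] = size(Matrix);
        idx = sub2ind([nx ny], pts(:,1), pts(:,2)); % linear index into one 2-D slice
        flat = reshape(Matrix, nx*ny, nd, nt);      % fold x,y into one dimension
        result = flat(idx, :, :);                   % points x depth x time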

    Read the article

  • Can Haskell's Parsec library be used to implement a recursive descent parser with backup?

    - by Thor Thurn
    I've been considering using Haskell's Parsec parsing library to parse a subset of Java with a recursive descent parser, as an alternative to more traditional parser-generator solutions like Happy. Parsec seems very easy to use, and parse speed is definitely not a factor for me. I'm wondering, though, if it's possible to implement "backup" with Parsec: a technique which finds the correct production to use by trying each one in turn. For a simple example, consider the very start of the JLS Java grammar:

        Literal:
            IntegerLiteral
            FloatingPointLiteral

    I'd like a way to not have to figure out how I should order these two rules to get the parse to succeed. As it stands, a naive implementation like this will not work:

        literal = do { x <- try (do { v <- integer; return (IntLiteral v) })
                            <|> (do { v <- float; return (FPLiteral v) });
                       return (Literal x) }

    Inputs like "15.2" will cause the integer parser to succeed first, and then the whole thing will choke on the "." symbol. In this case, of course, it's obvious that you can solve the problem by reordering the two productions. In the general case, though, finding things like this is going to be a nightmare, and it's very likely that I'll miss some cases. Ideally, I'd like a way to have Parsec figure out stuff like this for me. Is this possible, or am I simply trying to do too much with the library? The Parsec documentation claims that it can "parse context-sensitive, infinite look-ahead grammars", so it seems like I should be able to do something here.
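
    A hedged sketch of the usual Parsec idiom, assuming integer and float token parsers like the post's: wrap each alternative in try so a failed production backtracks instead of committing, and order the longest production first. This isn't automatic "backup" – the ordering still matters – but it keeps failures from consuming input:

        literal :: Parser Literal
        literal = try (do { v <- float;   return (FPLiteral v) })
              <|> try (do { v <- integer; return (IntLiteral v) })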

    Read the article

  • Does height and width not apply to span?

    - by Kyle Sevenoaks
    Total noob question, but here. CSS:

        .product__specfield_8_arrow {
            /*background-image:url(../../upload/orng_bg_arrow.png);
            background-repeat:no-repeat;*/
            background-color:#fc0;
            width:50px !important;
            height:33px !important;
            border: 1px solid #dddddd;
            border-left:none;
            border-radius:5px;
            -moz-border-radius:5px;
            -webkit-border-radius:5px;
            border-bottom-left-radius:0px;
            border-top-left-radius:0px;
            -moz-border-radius-bottomleft:0px;
            -moz-border-radius-topleft:0px;
            -webkit-border-bottom-left-radius:0px;
            -webkit-border-top-left-radius:0px;
            margin:0;
            padding:2px;
            cursor:pointer;
        }

    HTML:

        <span class="product__specfield_8_arrow">&nbsp;</span>

    Fiddle. Basically I'm trying to emulate a button – make a span (or something) look like a button next to an input field that doesn't actually need to be one, because of an auto-fill generator that generates errors onEnter. I thought this would be a quick fix for now, but obviously not. Thanks.
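
    For what it's worth (not part of the original post): width and height are ignored on inline boxes, and a span is inline by default, so one likely fix is:

        .product__specfield_8_arrow {
            display: inline-block; /* inline elements ignore width/height */
            width: 50px;
            height: 33px;
        }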

    Read the article

  • Which is the best jQuery-powered site?

    - by Reigel
    This "Top 10 JavaScript (jQuery) Powered Sites", posted about 2 years ago, was the one that invites me(after seeing the list) to use jQuery. All the sites in that list made me realize how cool it is to build sites powered by jQuery. And now more and more sites are being powered by jQuery. More and more developers are learning jQuery. Two years have past now but I still do a search to google for best sites and still got me to that link and which the link is not updated from the first time I have looked at it. Here at stackoverflow, there are lots of jQuery user. I was thinking if we can here show some of the sites that you know that is powered by jQuery which you think is best. If you can, please make it one site one answer, so that we can make a vote if it's really best jQuery powered site. We will take a look if it is really well implemented ( the way the codes are written, fast speed site, etc...) and deserves to be the best. Lot's of viewers will benefit from it. Like, we can view and have an idea of how we will make our next project cooler, faster, and powerful.

    Read the article

  • Define a swig interface file for generation of wrapper to every type from some header file

    - by Dmitriy Matveev
    Hi! We're using a C library in our Java project. Several years ago some other developer, who has since retired (as always), created all the wrappers for us. The wrappers were generated by SWIG, but the interface file is lost now. The basic idea of the library and its wrappers is the following: there is only one function, which returns a pointer to some complex object, and there is a wrapper for that function. The complex object is a tree-like structure with dozens of node kinds and types (C structures) used to represent them. There are hundreds of wrappers, for every field of every type, and we're using them all. The library was updated some time ago, and now there is some new data we aren't yet aware of but would like to use. This data is contained in objects indirectly contained in, or referenced from, the object created by the function we call (some new fields and types were added). I know that I shouldn't change the wrappers by hand and should rather modify the interface, but as I already wrote, it's missing. For now I only want to generate wrappers for the few types which were added or changed and add them to our old wrappers, but later I want to start creating an interface file which will define what should be wrapped and how. All the definitions necessary for us are in a single header file. Is it possible to tell SWIG to generate wrappers for every type in this header? If so, how can I write such an interface file?
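
    A hedged sketch of what such a minimal interface file typically looks like (module and header names here are made up); %include asks SWIG to generate wrappers for everything declared in the header:

        /* library.i – names are illustrative */
        %module library

        %{
        #include "library.h"   /* copied verbatim into the generated wrapper */
        %}

        %include "library.h"   /* wrap every type and function declared here */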

    Read the article

  • How to install DBD::mysql on OS X server?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (Mac mini server), but I'm apparently missing the MySQL headers. Since MySQL is already part of OS X Server 10.6, I would like to NOT install anything else (no Fink or DarwinPorts installs) – just whatever's needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere, and if so, where? (Again: I don't want to install another version of MySQL on the box; I want to use the version it came with.) Is there a way to install DBD::mysql without compiling any C files? This is the error I get (the actual output is much longer, but these are the most meaningful bits; this is the first error reported):

        Checking if your kit is complete... Looks good
        Unrecognized argument in LIBS ignored: '-pipe'
        Note (probably harmless): No library found for -lmysqlclient
        Multiple copies of Driver.xst found in:
            /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
            /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/
            at Makefile.PL line 907
        Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level)
            installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
        Writing Makefile for DBD::mysql
        cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
        cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
        cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
        cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
        gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c
        In file included from dbdimp.c:20:
        dbdimp.h:22:49: error: mysql.h: No such file or directory
        dbdimp.h:23:45: error: mysqld_error.h: No such file or directory
        dbdimp.h:25:49: error: errmsg.h: No such file or directory
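
    A hedged pointer, not a verified recipe: DBD::mysql's Makefile.PL can be told where the client library and headers live via a mysql_config script, so if the bundled server ships one (or a matching headers package is installed), something like this may get past the missing-header errors; the path below is a guess:

        # point Makefile.PL at the bundled server's layout
        which mysql_config
        perl Makefile.PL --mysql_config=/usr/bin/mysql_config
        make && make test && sudo make install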

    Read the article

  • Use auto_ptr in VC6 dll cause crash

    - by Yan Cheng CHEOK
    // dll

        #include <memory>
        __declspec(dllexport) std::auto_ptr<int> get();
        __declspec(dllexport) std::auto_ptr<int> get() {
            return std::auto_ptr<int>(new int());
        }

    // exe

        #include <iostream>
        #include <memory>
        __declspec(dllimport) std::auto_ptr<int> get();
        int main() {
            {
                std::auto_ptr<int> x = get();
            }
            std::cout << "done\n";
            getchar();
        }

    The above code runs perfectly OK under VC9. However, under VC6 I experience an immediate crash, with the following message:

        Debug Assertion Failed!
        Program: C:\Projects\use_dynamic_link\Debug\use_dynamic_link.exe
        File: dbgheap.c
        Line: 1044
        Expression: _CrtIsValidHeapPointer(pUserData)

    Is exporting auto_ptr under VC6 not allowed? It is a known problem that exporting STL collection classes through a DLL causes trouble: http://stackoverflow.com/questions/2451714/access-violation-when-accessing-an-stl-object-through-a-pointer-or-reference-in-a However, I Googled around and did not see anything mentioned about std::auto_ptr. Any workaround?
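
    A hedged workaround sketch, not from the post: _CrtIsValidHeapPointer usually fires when memory allocated in one module's CRT heap is freed in another's, which is exactly what auto_ptr's destructor does here when each module links its own static CRT. Linking both against the shared CRT DLL (/MD) is one standard fix; the other is keeping allocation and deallocation on the same side of the boundary (names below are illustrative):

        // dll side: allocation and deallocation both live in the dll
        __declspec(dllexport) int* create_int()        { return new int(); }
        __declspec(dllexport) void destroy_int(int* p) { delete p; }

        // exe side: never delete directly; hand the pointer back to the dll
        int main() {
            int* p = create_int();
            // ... use *p ...
            destroy_int(p);
        }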

    Read the article

  • Precision error on matrix multiplication

    - by Wam
    Hello all, coding a matrix multiplication in my program, I get precision errors (inaccurate results for large matrices). Here's my code. The current object has data stored in a flattened array, row after row. The other matrix B has data stored in a flattened array, column after column (so I can use pointer arithmetic).

        protected double[,] multiply(IMatrix B) {
            int columns = B.columns;
            int rows = Rows;
            int size = Columns;
            double[,] result = new double[rows, columns];
            for (int row = 0; row < rows; row++) {
                for (int col = 0; col < columns; col++) {
                    unsafe {
                        fixed (float* ptrThis = data)
                        fixed (float* ptrB = B.Data) {
                            float* mePtr = ptrThis + row * rows;
                            float* bPtr = ptrB + col * columns;
                            double value = 0.0;
                            for (int i = 0; i < size; i++) {
                                value += *(mePtr++) * *(bPtr++);
                            }
                            result[row, col] = value;
                        }
                    }
                }
            }
        }

    Actually, the code is a bit more complicated: I do the multiply thing in several chunks (so instead of i going from 0 to size, it goes from localStart to localStop), then sum up the resulting matrices. My problem: for a big matrix I get a precision error:

        NUnit.Framework.AssertionException: Error at (0,1)
        expected: <6.4209571409444209E+18>
        but was:  <6.4207619776304906E+18>

    Any idea?
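
    A hedged observation plus a sketch, not from the thread: the elements are stored as float (about 7 significant digits), so products of values this large cannot be exact even before summation; if the storage must stay float, compensated (Kahan) summation at least reduces the error that accumulates across the long dot products and the chunked partial sums. A sketch of the inner loop:

        // Kahan summation: 'c' carries the low-order bits lost at each add
        double sum = 0.0, c = 0.0;
        for (int i = 0; i < size; i++) {
            double term = (double)*(mePtr++) * *(bPtr++) - c;
            double t = sum + term;
            c = (t - sum) - term;   // what rounding discarded, re-applied next time
            sum = t;
        }
        result[row, col] = sum;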

    Read the article

  • Images in database vs file system

    - by Jesse
    We have a project coming up where we will be building a whole backend CMS system that will power our entire extranet and intranet with one package. The question I have been trying to find an answer to is which is better: storing images in the database (SQL Server 2005), so we get integrity, a single replication plan, etc., OR storing them on the file system? One issue we have is multiple load-balanced servers that must have the same data at all times. As of now, SQL replication takes care of that, but file replication seems a little tougher. Another concern is that we would like to have multiple resolutions of the same image; we are not sure whether creating and storing each version on the file system would be best, or perhaps dynamically creating the requested resolution on demand. Our concerns are with the following: data integrity; data replication; multiple resolutions; speed of database vs file system; overhead load of database vs file system; data management and backup. Does anyone have a similar situation or any input on what would be recommended? Thanks in advance for the help!

    Read the article

  • NSDate out of scope

    - by therealtkd
    Having problems with "out of scope" for NSDate in an iPhone app. I have an interface defined like this:

        @interface MyObject : NSObject {
            NSMutableArray *array;
            BOOL checkThis;
            NSDate *nextDue;
        }

    Now in the implementation I have this:

        - (id)init {
            if ((self = [super init])) {
                checkThis = NO;
                array = [[NSMutableArray alloc] init];
                nextDue = [[NSDate date] retain];
                NSDate *testDate = [NSDate date];
            }
            return self;
        }

    Now, if I trace through the init, before I actually assign the variables, checkThis shows as a boolean and array shows as pointer 0x0 because it hasn't been assigned, but nextDue shows as 'out of scope'. I don't understand why this one is out of scope but the other variables aren't. If I trace through the code until after the variables are assigned, array now shows as correctly assigned, but nextDue is still out of scope. Interestingly, the testDate variable is assigned just fine, and the debugger shows it as a valid date. A further interesting point: if I hover over the testDate variable while debugging, it shows as an 'NSDate *' type, which I would expect since that's its definition, yet nextDue, which to me is defined the same way, shows as an '_NSCFDate *'. Any googling I did on the subject said that the retain is the problem, but it's actually out of scope before I even try to assign the variable. However, in another class the same definition for NSDate works fine – it shows as nil before a value is assigned to it. Arghhh.

    Read the article

  • Building a custom (dynamic) dataset and grid

    - by marko.ivanovski.nz
    Hi, I'm in the process of building a dynamic table in which you can add/remove rows and columns, so it varies in size depending on what the user wants. Its purpose is to store properties for a product, but there can be from 1 to 10 different properties (columns) per product, and multiple instances (rows) of the product as well. Here's a screenshot of what I mean: http://i40.tinypic.com/nbqkxc.jpg As you can see, I need the structure to be completely up to the client, which is where I'm getting stuck. I've started writing a custom DataSet that has "add column" and "add row" buttons, and have built a custom Table with TextBoxes in each cell which builds from that DataSet. I have no idea how to store the data on submit, though, and to make it even more complex I need to store this in the database as a string, which I think I can do by converting it to XML. Any help is appreciated; I think I just need a pointer in the right direction and am happy to do research from there. Thanks in advance. Marko
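
    One possible direction, sketched under the assumption that a standard DataTable backs the grid: WriteXml/ReadXml round-trip both the data and the dynamic column structure through a plain string, which matches the store-as-string requirement (column names here are illustrative):

        var table = new DataTable("ProductProperties");
        table.Columns.Add("Size");
        table.Columns.Add("Colour");
        table.Rows.Add("Large", "Red");

        var sw = new StringWriter();
        table.WriteXml(sw, XmlWriteMode.WriteSchema); // schema keeps the columns dynamic
        string xml = sw.ToString();                   // store this string in the DB

        var restored = new DataTable();
        restored.ReadXml(new StringReader(xml));      // structure and rows come back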

    Read the article

  • Haskell math performance

    - by Travis Brown
    I'm in the middle of porting David Blei's original C implementation of Latent Dirichlet Allocation to Haskell, and I'm trying to decide whether to leave some of the low-level stuff in C. The following function is one example – it's an approximation of the second derivative of lgamma:

        double trigamma(double x) {
            double p;
            int i;
            x = x + 6;
            p = 1 / (x * x);
            p = (((((0.075757575757576 * p - 0.033333333333333) * p + 0.0238095238095238)
                * p - 0.033333333333333) * p + 0.166666666666667) * p + 1) / x + 0.5 * p;
            for (i = 0; i < 6; i++) {
                x = x - 1;
                p = 1 / (x * x) + p;
            }
            return p;
        }

    I've translated this into more or less idiomatic Haskell as follows:

        trigamma :: Double -> Double
        trigamma x = snd $ last $ take 7 $ iterate next (x' - 1, p')
          where
            x' = x + 6
            p  = 1 / x' ^ 2
            p' = p / 2 + c / x'
            c  = foldr1 (\a b -> a + b * p) [1, 1/6, -1/30, 1/42, -1/30, 5/66]
            next (x, p) = (x - 1, 1 / x ^ 2 + p)

    The problem is that when I run both through Criterion, my Haskell version is six or seven times slower (I'm compiling with -O2 on GHC 6.12.1). Some similar functions are even worse. I know practically nothing about Haskell performance, and I'm not terribly interested in digging through Core or anything like that, since I can always just call the handful of math-intensive C functions through the FFI. But I'm curious whether there's low-hanging fruit that I'm missing – some kind of extension or library or annotation that I could use to speed up this numeric stuff without making it too ugly.
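
    A hedged sketch of the usual first remedy: rewrite the iterate/take/last pipeline as a strict, tail-recursive worker with bang patterns, so GHC can keep the Doubles unboxed. Whether it closes the gap on GHC 6.12 is not something the post confirms; this version follows the C function directly:

        {-# LANGUAGE BangPatterns #-}

        trigamma :: Double -> Double
        trigamma x = go (6 :: Int) y0 p0
          where
            y0 = x + 6
            q  = 1 / (y0 * y0)
            p0 = (((((0.075757575757576*q - 0.033333333333333)*q + 0.0238095238095238)
                 *q - 0.033333333333333)*q + 0.166666666666667)*q + 1) / y0 + 0.5*q
            go :: Int -> Double -> Double -> Double
            go 0 _  !p = p
            go !n !y !p = let y' = y - 1 in go (n - 1) y' (1 / (y' * y') + p)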

    Read the article

  • is mysql index useful on column 'state' when only doing bit-operations on the column?

    - by Geert-Jan
    I have a lot of domain entities (stored in MySQL) which undergo lots of different operations. Each operation is executed from a different program. I need to keep (flow) state for these entities, which I implemented as a long 'flowstate' field used as a bitset. To query MySQL for entities which have undergone a certain operation, I do something like:

        select * from entities where state >> 7 & 1 = 1

    indicating that bit 7 (corresponding to operation 7) has run (<-- simplified). Anyway, I really didn't pay attention to the performance implications of this setup in the beginning, and I think I'm in a bit of trouble, since queries like the one above run pretty slowly. What I'd like to know: does a MySQL index on 'flowstate' help at all? After all, it's not a single value MySQL can quickly find using a binary sort or whatever. If it doesn't, are there any other things I could do to speed things up? Are there special 'mask indices' for fields with use cases like the above? TIA, Geert-Jan
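
    For what it's worth (a sketch, not from the post): an index on flowstate cannot be used for an expression like state >> 7 & 1, because MySQL only matches indexes against bare columns. One common alternative is a dedicated, indexed flag column maintained alongside the bitset; column and index names here are made up:

        ALTER TABLE entities ADD COLUMN op7_done TINYINT(1) NOT NULL DEFAULT 0,
                             ADD INDEX idx_op7 (op7_done);

        -- keep the flag in sync when the operation runs
        UPDATE entities SET flowstate = flowstate | (1 << 7), op7_done = 1 WHERE id = ?;

        SELECT * FROM entities WHERE op7_done = 1;  -- index-backed lookup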

    Read the article
