Search Results

Search found 6363 results on 255 pages for 'buford speed'.


  • Optimize Use of Ramdisk for Eclipse Development

    - by Eric J.
    We're developing Java/SpringSource applications with Eclipse on 32-bit Vista machines with 4GB RAM. The OS exposes roughly 3.3GB of RAM due to reservations for hardware etc. in the virtual address space. I came across several Ramdisk drivers that can create a virtual disk from the OS-hidden RAM and am looking for suggestions how best to use the 740MB virtual disk to speed development in our environment. The slowest part of development for us is compiling as well as launching SpringSource dm Server. One option is to configure Vista to swap to the Ramdisk. That works, and noticeably speeds up development in low memory situations. However, the 3.3GB available to the OS is often sufficient and there are many situations where we do not use the swap file(s) much. Another option is to use the Ramdisk as a location for temporary files. Using the Vista mklink command, I created a hard link from where the SpringSource dm Server's work area normally resides to the Ramdisk. That significantly improves server startup times but does nothing for compile times. There are roughly 500MB still free on the Ramdisk when the work directory is fully utilized, so room for plenty more. What other files/directories might be candidates to place on the Ramdisk? Eclipse-related files? (Parts of) the JDK? Is there a free/open source tool for Vista that will show me which files are used most frequently during a period of time to reduce the guesswork?

    Read the article
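
    For reference, a junction is one way to do the work-directory redirection described above; the sketch below assumes the Ramdisk is mounted as R: and uses made-up paths, so substitute your actual drive letter and dm Server location.

        mkdir R:\dm-server-work
        move "C:\springsource\dm-server\work" "C:\springsource\dm-server\work.bak"
        mklink /J "C:\springsource\dm-server\work" "R:\dm-server-work"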

  • Polymorphic Numerics on .Net and in C#

    - by Bent Rasmussen
    It's a real shame that in .Net there is no polymorphism for numbers, i.e. no INumeric interface that unifies the different kinds of numerical types such as bool, byte, uint, int, etc. In the extreme, one would like a complete package of abstract algebra types. Joe Duffy has an article about the issue: http://www.bluebytesoftware.com/blog/CommentView,guid,14b37ade-3110-4596-9d6e-bacdcd75baa8.aspx How would you express this in C#, in order to retrofit it, without having influence over .Net or C#? I have one idea that involves first defining one or more abstract types (interfaces such as INumeric - or more abstract than that) and then defining structs that implement these and wrap types such as int while providing operations that return the new type (e.g. Integer32 : INumeric), where addition would be defined as public Integer32 Add(Integer32 other) { return Return(Value + other.Value); } I am somewhat afraid of the execution speed of this code, but at least it is abstract. No operator overloading goodness... Any other ideas? I think .Net doesn't look like a viable long-term platform if it cannot have this kind of abstraction - and be efficient about it. Abstraction is reuse.

    Read the article

  • [Doxygen] How to document global dependencies for functions?

    - by Thomas Matthews
    I've got some C code from a 3rd party vendor (for an embedded platform) that uses global variables (for speed & space optimizations). I'm documenting the code, converting to Doxygen format. How do I put a note in the function documentation that the function depends on global variables and functions? Doxygen has special commands for annotating parameters and return values, as described here: Doxygen Special Commands. I did not see any commands for global variables. Example C code: extern unsigned char data_buffer[]; //!< Global variable. /*! Returns the next available data byte. * \return Next data byte. */ unsigned char Get_Byte(void) { static unsigned int index = 0; return data_buffer[index++]; //!< Uses global variable. } In the above code, I would like to add Doxygen comments indicating that the function depends on the global variable data_buffer.

    Read the article
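
    Doxygen has no built-in command for global-variable dependencies, but a custom alias built on \xrefitem (the same mechanism behind \todo and \bug) is a common workaround. A sketch: the alias name globaldep below is made up, and #data_buffer uses Doxygen's '#' auto-linking to the documented variable.

        /* In the Doxyfile (one line):
         *   ALIASES += "globaldep=\xrefitem globaldeps \"Global Dependencies\" \"Global Dependency List\""
         */

        extern unsigned char data_buffer[];  /**< Global data buffer. */

        /*!
         * Returns the next available data byte.
         * \globaldep Reads from #data_buffer and advances a static index.
         * \return Next data byte.
         */
        unsigned char Get_Byte(void)
        {
            static unsigned int index = 0;
            return data_buffer[index++];
        }

    Entries tagged this way are also collected on a generated "Global Dependency List" related page, which can double as an audit of which functions touch which globals.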

  • A better python property decorator

    - by leChuck
    I've inherited some Python code that contains a rather cryptic decorator. This decorator sets properties in classes all over the project. The problem is that I have traced my debugging problems to this decorator. It seems it "fubars" all debuggers I've tried, and trying to speed up the code with psyco breaks everything. (It seems psyco and this decorator don't play nice.) I think it would be best to change it. def Property(function): """Allow readable properties""" keys = 'fget', 'fset', 'fdel' func_locals = {'doc':function.__doc__} def probeFunc(frame, event, arg): if event == 'return': locals = frame.f_locals func_locals.update(dict((k,locals.get(k)) for k in keys)) sys.settrace(None) return probeFunc sys.settrace(probeFunc) function() return property(**func_locals) Used like so: class A(object): @Property def prop(): def fget(self): return self.__prop def fset(self, value): self.__prop = value ... etc. The errors I get say the problems are because of sys.settrace. (Perhaps this is abuse of settrace?) My question: is the same decorator achievable without sys.settrace? If not, I'm in for some heavy rewrites.

    Read the article
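
    One settrace-free variant, offered as a sketch rather than a drop-in replacement (the convention changes slightly: the inner function must end with return locals(), which the original recipe avoided), builds the property from the dictionary the decorated function returns:

        def Property(function):
            """Allow readable properties without sys.settrace."""
            keys = ('fget', 'fset', 'fdel')
            func_locals = function()                       # the function now returns its locals()
            kwargs = dict((k, func_locals.get(k)) for k in keys)
            kwargs['doc'] = function.__doc__
            return property(**kwargs)

        class A(object):
            @Property
            def prop():
                """The prop docstring."""
                def fget(self):
                    return self.__prop
                def fset(self, value):
                    self.__prop = value
                return locals()                            # the one extra line per property

    Since no trace function is installed, debuggers and psyco should no longer be disturbed by the decorator, though that is worth verifying against the actual project.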

  • learning to type - tips for programmers?

    - by OrbMan
    After hunting and pecking for about 35 years, I have decided to learn to type. I am learning QWERTY and have learned about 2/3 of the letters so far. While learning, I have noticed how asymmetrical the keyboard is, which really bothers me. (I will probably switch to a symmetrical keyboard eventually, but for now am trying to do everything as standard and "correct" as possible.) Although I am not there yet in my lessons, it seems that many of the keys I am going to use as a C# web developer are supposed to be typed by the pinky of my right hand. Are there any typing patterns you have developed that are more ergonomic (or faster) when typing large volumes of code rife with braces, colons, semi-colons and quotes? Or, should I just accept the fact that every other key is going to be hit with my right pinky? It is not that speed is such a huge concern, as much as that it seems so inefficient to rely on one finger so much... As an example, some of the conventions I use as a hunt-and-pecker, like typing open and close braces right away with my index and middle finger, and then hitting the left arrow key to fill in the inner content, don't seem to work as well with just a pinky. What are some typing patterns using a standard QWERTY keyboard that work really well for you as a programmer?

    Read the article

  • Which linear programming package should I use for high numbers of constraints and "warm starts"

    - by davidsd
    I have a "continuous" linear programming problem that involves maximizing a linear function over a curved convex space. In typical LP problems, the convex space is a polytope, but in this case the convex space is piecewise curved -- that is, it has faces, edges, and vertices, but the edges aren't straight and the faces aren't flat. Instead of being specified by a finite number of linear inequalities, I have a continuously infinite number. I'm currently dealing with this by approximating the surface by a polytope, which means discretizing the continuously infinite constraints into a very large finite number of constraints. I'm also in the situation where I'd like to know how the answer changes under small perturbations to the underlying problem. Thus, I'd like to be able to supply an initial condition to the solver based on a nearby solution. I believe this capability is called a "warm start." Can someone help me distinguish between the various LP packages out there? I'm not so concerned with user-friendliness as speed (for large numbers of constraints), high-precision arithmetic, and warm starts. Thanks!

    Read the article

  • Dragging a UIView inside UIScrollView

    - by Sergey Mikhanov
    Hello community! I am trying to solve a basic problem with drag and drop on iPhone. Here's my setup: I have a UIScrollView which has one large content subview (I'm able to scroll and zoom it), and the content subview has several small tiles as subviews that should be dragged around inside it. My UIScrollView subclass has this method: - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event { UIView *tile = [contentView pointInsideTiles:[self convertPoint:point toView:contentView] withEvent:event]; if (tile) { return tile; } else { return [super hitTest:point withEvent:event]; } } The content subview has this method: - (UIView *)pointInsideTiles:(CGPoint)point withEvent:(UIEvent *)event { for (TileView *tile in tiles) { if ([tile pointInside:[self convertPoint:point toView:tile] withEvent:event]) return tile; } return nil; } And the tile view has this method: - (void)touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event { UITouch *touch = [touches anyObject]; CGPoint location = [touch locationInView:self.superview]; self.center = location; } This works, but not entirely correctly: the tile sometimes "falls down" during the drag process. More precisely, it stops receiving touchesMoved: invocations, and the scroll view starts scrolling instead. I noticed that this depends on the drag speed: the faster I drag, the quicker the tile "falls". Any ideas on how to keep the tile glued to the dragging finger? Thanks in advance!

    Read the article

  • Link checker: how to avoid false positives

    - by Burnzy
    I'm working on a link checker/broken link finder and I am getting many false positives. After double-checking, I noticed that many links were returning WebExceptions but were actually downloadable, and in some other cases the status code is 404 yet I can access the page from the browser. So here is the code; it's pretty ugly, and I'd like to have something more, I'd say, practical. All the status codes in that big if are used to filter out the ones I don't want to add to _brokenlinks because they are valid links (I tested them all). What I need to fix is the structure (if possible) and how not to get false 404s. Thank you! try { HttpWebRequest request = ( HttpWebRequest ) WebRequest.Create ( uri ); request.Method = "Head"; request.MaximumResponseHeadersLength = 32; // FOR IE SLOW SPEED request.AllowAutoRedirect = true; using ( HttpWebResponse response = ( HttpWebResponse ) request.GetResponse() ) { request.Abort(); } /* WebClient wc = new WebClient(); wc.DownloadString( uri ); */ _validlinks.Add ( strUri ); } catch ( WebException wex ) { if ( !wex.Message.Contains ( "The remote name could not be resolved:" ) && wex.Status != WebExceptionStatus.ServerProtocolViolation ) { if ( wex.Status != WebExceptionStatus.Timeout ) { HttpStatusCode code = ( ( HttpWebResponse ) wex.Response ).StatusCode; if ( code != HttpStatusCode.OK && code != HttpStatusCode.BadRequest && code != HttpStatusCode.Accepted && code != HttpStatusCode.InternalServerError && code != HttpStatusCode.Forbidden && code != HttpStatusCode.Redirect && code != HttpStatusCode.Found ) { _brokenlinks.Add ( new Href ( new Uri ( strUri , UriKind.RelativeOrAbsolute ) , UrlType.External ) ); } else _validlinks.Add ( strUri ); } else _brokenlinks.Add ( new Href ( new Uri ( strUri , UriKind.RelativeOrAbsolute ) , UrlType.External ) ); } else _validlinks.Add ( strUri ); }

    Read the article

  • Best Practice - Removing item from generic collection in C#

    - by Matt Davis
    I'm using C# in Visual Studio 2008 with .NET 3.5. I have a generic dictionary that maps types of events to a generic list of subscribers. A subscriber can be subscribed to more than one event. private static Dictionary<EventType, List<ISubscriber>> _subscriptions; To remove a subscriber from the subscription list, I can use either of these two options. Option 1: ISubscriber subscriber; // defined elsewhere foreach (EventType event in _subscriptions.Keys) { if (_subscriptions[event].Contains(subscriber)) { _subscriptions[event].Remove(subscriber); } } Option 2: ISubscriber subscriber; // defined elsewhere foreach (EventType event in _subscriptions.Keys) { _subscriptions[event].Remove(subscriber); } I have two questions. First, notice that Option 1 checks for existence before removing the item, while Option 2 uses a brute force removal since Remove() does not throw an exception. Of these two, which is the preferred, "best-practice" way to do this? Second, is there another, "cleaner," more elegant way to do this, perhaps with a lambda expression or using a LINQ extension? I'm still getting acclimated to these two features. Thanks. EDIT Just to clarify, I realize that the choice between Options 1 and 2 is a choice of speed (Option 2) versus maintainability (Option 1). In this particular case, I'm not necessarily trying to optimize the code, although that is certainly a worthy consideration. What I'm trying to understand is if there is a generally well-established practice for doing this. If not, which option would you use in your own code?

    Read the article

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this INSERT INTO InvoiceDetail (LegacyId,InvoiceId,DetailTypeId,Fee,FeeTax,Investigatorid,SalespersonId,CreateDate,CreatedById,IsChargeBack,Expense,RepoAgentId,PayeeName,ExpensePaymentId,AdjustDetailId) VALUES(1,1,2,1500.0000,0.0000,163,1002,'11/30/2001 12:00:00 AM',1116,0,550.0000,850,NULL,@ExpensePay1,NULL); DECLARE @InvDetail1 INT; SET @InvDetail1 = (SELECT @@IDENTITY); This query is generated for only 110K rows. It takes 30 minutes for all of these queries to execute. I checked the query plan and the largest % nodes are a Clustered Index Insert at 57% query cost, which has a long XML that I don't want to post, and a Table Spool which is 38% query cost: <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="1" LogicalOp="Eager Spool" NodeId="80" Parallel="false" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109"> <OutputList> <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" /> <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" /> <ColumnReference Column="Expr1054" /> <ColumnReference Column="Expr1055" /> </OutputList> <Spool PrimaryNodeId="3" /> </RelOp> So my question is: what can I do to improve the speed of this thing? I already run ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL before the queries and then ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL after the queries. And that hardly shaved anything off of the time. Now, I am running these queries in a .NET application that uses a SqlCommand object to send the query. I then tried to output the SQL commands to a file and then execute them using sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas or hints or help?

    Read the article
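
    Two things that often help with this pattern, offered as a sketch rather than a measured fix: wrap the generated statements in one explicit transaction so the log is flushed at commit instead of after every statement, and read the identity with SCOPE_IDENTITY() rather than @@IDENTITY, which can be skewed by triggers. The table name is reused from the question; the column list is trimmed for brevity.

        BEGIN TRANSACTION;

        INSERT INTO InvoiceDetail (LegacyId, InvoiceId, DetailTypeId, Fee, FeeTax)
        VALUES (1, 1, 2, 1500.0000, 0.0000);
        DECLARE @InvDetail1 INT;
        SET @InvDetail1 = SCOPE_IDENTITY();

        -- ...the remaining generated INSERTs...

        COMMIT TRANSACTION;

    For pure bulk loading, SqlBulkCopy from the .NET side is usually a bigger win, but it does not hand back an identity per row, which this script appears to need.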

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1m pages (URLs ending with a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches: curl = Curl::Easy.new batch_urls.each { |url_info| curl.url = url_info[:url] curl.perform file = File.new(url_info[:file], "wb") file << curl.body_str file.close # ... some other stuff } I have tried to download an 8000-page sample. When using the code above, I get 1000 in 2 minutes. When I write all URLs into a file and do in the shell: cat list | xargs curl I get all 8000 pages in two minutes. The thing is, I need to have it in Ruby code, because there is other monitoring and processing code. I have tried: Curl::Multi - it is somehow faster, but misses 50-90% of files (does not download them and gives no reason/code); multiple threads with Curl::Easy - around the same speed as single threaded. Why is the reused Curl::Easy slower than subsequent command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download manager code rather than do the downloading for this case in a different way. Before this, I was calling command-line wget, which I provided with a file with a list of URLs. However, not all errors were handled, and it was also not possible to specify an output file for each URL separately when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly in Ruby? Code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help). Any hints appreciated.

    Read the article

  • Which is the best jQuery-powered site?

    - by Reigel
    This "Top 10 JavaScript (jQuery) Powered Sites", posted about 2 years ago, was the one that invites me(after seeing the list) to use jQuery. All the sites in that list made me realize how cool it is to build sites powered by jQuery. And now more and more sites are being powered by jQuery. More and more developers are learning jQuery. Two years have past now but I still do a search to google for best sites and still got me to that link and which the link is not updated from the first time I have looked at it. Here at stackoverflow, there are lots of jQuery user. I was thinking if we can here show some of the sites that you know that is powered by jQuery which you think is best. If you can, please make it one site one answer, so that we can make a vote if it's really best jQuery powered site. We will take a look if it is really well implemented ( the way the codes are written, fast speed site, etc...) and deserves to be the best. Lot's of viewers will benefit from it. Like, we can view and have an idea of how we will make our next project cooler, faster, and powerful.

    Read the article

  • DAL layer: EF 4.0 or a normal data access layer with stored procedures?

    - by Harryboy
    Hello experts, Application: I am working on a mid-to-large-size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure, and patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order: speed of development with above-average performance; maintenance; future support and stability of the technology; performance. Limitations: 1) As we need to strictly go ahead with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0. 2) We can use any code generation tool (it should be open source or very cheap), but it should only generate code in .Net, so there would not be any per-copy licensing issue. Questions: I read so many articles about EF 4.0; at the outset it looks like it is still lacking features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel that we should go ahead with EF 4.0, or should we stick to ADO.Net and use a code generation tool like CodeSmith or any other you feel is best? Also, I need to answer questions like how long it would take to port the application from EF 4.0 to ADO.Net if in the future we get stuck with EF 4.0 over some feature or have a serious performance issue, and, in the reverse case, if we go ahead and choose ADO.Net, how long it would take to switch to EF 4.0. Lastly, as I was going through the articles I found that the code-only approach (with POCO classes) seems best suited to our requirements, as switching from one technology to the other is really easy. Please share your thoughts and guide me on the above questions.

    Read the article

  • Can Haskell's Parsec library be used to implement a recursive descent parser with backup?

    - by Thor Thurn
    I've been considering using Haskell's Parsec parsing library to build a recursive descent parser for a subset of Java, as an alternative to more traditional parser-generator solutions like Happy. Parsec seems very easy to use, and parse speed is definitely not a factor for me. I'm wondering, though, if it's possible to implement "backup" with Parsec, a technique which finds the correct production to use by trying each one in turn. For a simple example, consider the very start of the JLS Java grammar: Literal: IntegerLiteral FloatingPointLiteral I'd like a way to not have to figure out how I should order these two rules to get the parse to succeed. As it stands, a naive implementation like this: literal = do { x <- try (do { v <- integer; return (IntLiteral v)}) <|> (do { v <- float; return (FPLiteral v)}); return(Literal x) } will not work... inputs like "15.2" will cause the integer parser to succeed first, and then the whole thing will choke on the "." symbol. In this case, of course, it's obvious that you can solve the problem by re-ordering the two productions. In the general case, though, finding things like this is going to be a nightmare, and it's very likely that I'll miss some cases. Ideally, I'd like a way to have Parsec figure out stuff like this for me. Is this possible, or am I simply trying to do too much with the library? The Parsec documentation claims that it can "parse context-sensitive, infinite look-ahead grammars", so it seems like I should be able to do something like this here.

    Read the article
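
    Parsec commits to an alternative as soon as it consumes input, so the usual pattern is to order the longer production first and wrap it in try. A small self-contained sketch using the Token-module lexer (the question's integer and float parsers are not shown, so these are stand-ins):

        import Text.Parsec
        import Text.Parsec.String (Parser)
        import qualified Text.Parsec.Token as T
        import Text.Parsec.Language (emptyDef)

        data Literal = IntLiteral Integer | FPLiteral Double deriving Show

        lexer :: T.TokenParser ()
        lexer = T.makeTokenParser emptyDef

        -- Try the floating-point form first; if no '.' or exponent shows up,
        -- `try` rewinds the consumed digits and the integer branch runs.
        literal :: Parser Literal
        literal = try (fmap FPLiteral (T.float lexer))
              <|> fmap IntLiteral (T.integer lexer)

        -- ghci> parse literal "" "15.2"   ==>  Right (FPLiteral 15.2)
        -- ghci> parse literal "" "15"     ==>  Right (IntLiteral 15)

    This does not make Parsec try every production automatically; ordering still matters, so for a full Java grammar it may be worth left-factoring or using combined lexer primitives such as naturalOrFloat to cut down on how many try wrappers are needed.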

  • Fastest way to generate delimited string from 1d numpy array

    - by Abiel
    I have a program which needs to turn many large one-dimensional numpy arrays of floats into delimited strings. I am finding this operation quite slow relative to the mathematical operations in my program and am wondering if there is a way to speed it up. For example, consider the following loop, which takes 100,000 random numbers in a numpy array and joins each array into a comma-delimited string. import numpy as np x = np.random.randn(100000) for i in range(100): ",".join(map(str, x)) This loop takes about 20 seconds to complete (total, not each cycle). In contrast, consider that 100 cycles of something like elementwise multiplication (x*x) would take less than 1/10 of a second to complete. Clearly the string join operation creates a large performance bottleneck; in my actual application it will dominate total runtime. This makes me wonder, is there a faster way than ",".join(map(str, x))? Since map() is where almost all the processing time occurs, this comes down to the question of whether there is a faster way to convert a very large number of numbers to strings.

    Read the article
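
    Two variations that are sometimes noticeably faster than ",".join(map(str, x)), depending on the numpy and Python versions, so timing them with timeit on the real data is the honest test; the "%.12g" precision below is an arbitrary choice:

        import numpy as np

        x = np.random.randn(100000)

        # 1) Convert to builtin Python floats first: str() on a plain float is
        #    usually cheaper than str() on a numpy scalar.
        s1 = ",".join(map(str, x.tolist()))

        # 2) Let numpy format the numbers element-wise, then join the string array.
        s2 = ",".join(np.char.mod("%.12g", x))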

  • Use a vector to index a matrix without linear index

    - by David_G
    G'day, I'm trying to find a way to use a vector of [x,y] points to index from a large matrix in MATLAB. Usually, I would convert the subscript points to the linear index of the matrix (e.g. Use a vector as an index to a matrix in MATLAB). However, the matrix is 4-dimensional, and I want to take all of the elements of the 3rd and 4th dimensions that have the same 1st and 2nd dimension. Let me hopefully demonstrate with an example: Matrix = nan(4,4,2,2); % where the dimensions are (x,y,depth,time) Matrix(1,2,:,:) = 999; % note that this value could change in depth (3rd dim) and time (4th dim) Matrix(3,4,:,:) = 888; % note that this value could change in depth (3rd dim) and time (4th dim) Matrix(4,4,:,:) = 124; Now, I want to be able to index with the subscripts (1,2) and (3,4), etc. and return not only the 999 and 888 which exist in Matrix(:,:,1,1) but also the contents which exist at Matrix(:,:,1,2), Matrix(:,:,2,1) and Matrix(:,:,2,2), and so on (IRL, the dimensions of Matrix might be more like size(Matrix) = (300 250 30 200)). I don't want to use linear indices because I would like the results to be in a similar vector fashion. For example, I would like a result which is something like: ans(time=1) 999 888 124 999 888 124 ans(time=2) etc etc etc etc etc etc I'd also like to add that due to the size of the matrix I'm dealing with, speed is an issue here, which is why I'd like to use subscript indices to index into the data. I should also mention that (unlike this question: Accessing values using subscripts without using sub2ind) since I want all the information stored in the extra dimensions, 3 and 4, of the i and jth indices, I don't think a slightly faster version of sub2ind would cut it.

    Read the article

  • Images in database vs file system

    - by Jesse
    We have a project coming up where we will be building a whole backend CMS system that will power our entire extranet and intranet with one package. The question I have been trying to find an answer to is which is better: storing images in the database (SQL Server 2005) so we may have integrity, a single replication plan, etc., OR storing them on the file system? One issue we have is that we have multiple load-balanced servers that are required to have the same data at all times. As of now we have SQL replication taking care of that, but file replication seems to be a little tougher. Another concern is that we would like to have multiple resolutions of the same image; we are not sure whether creating and storing each version on the file system would be best, or dynamically creating the resolution we would like upon request. Our concerns are with the following: data integrity, data replication, multiple resolutions, speed of database vs file system, overhead load of database vs file system, and data management and backup. Does anyone have a similar situation or have any input on what would be recommended? Thanks in advance for the help!

    Read the article

  • Haskell math performance

    - by Travis Brown
    I'm in the middle of porting David Blei's original C implementation of Latent Dirichlet Allocation to Haskell, and I'm trying to decide whether to leave some of the low-level stuff in C. The following function is one example—it's an approximation of the second derivative of lgamma: double trigamma(double x) { double p; int i; x=x+6; p=1/(x*x); p=(((((0.075757575757576*p-0.033333333333333)*p+0.0238095238095238) *p-0.033333333333333)*p+0.166666666666667)*p+1)/x+0.5*p; for (i=0; i<6 ;i++) { x=x-1; p=1/(x*x)+p; } return(p); } I've translated this into more or less idiomatic Haskell as follows: trigamma :: Double -> Double trigamma x = snd $ last $ take 7 $ iterate next (x' - 1, p') where x' = x + 6 p = 1 / x' ^ 2 p' = p / 2 + c / x' c = foldr1 (\a b -> (a + b * p)) [1, 1/6, -1/30, 1/42, -1/30, 5/66] next (x, p) = (x - 1, 1 / x ^ 2 + p) The problem is that when I run both through Criterion, my Haskell version is six or seven times slower (I'm compiling with -O2 on GHC 6.12.1). Some similar functions are even worse. I know practically nothing about Haskell performance, and I'm not terribly interested in digging through Core or anything like that, since I can always just call the handful of math-intensive C functions through FFI. But I'm curious about whether there's low-hanging fruit that I'm missing—some kind of extension or library or annotation that I could use to speed up this numeric stuff without making it too ugly.

    Read the article
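
    The usual low-hanging fruit for this kind of inner loop, sketched below without any claim that it closes the whole gap with C: replace the generic (^) (an Integer exponent plus typeclass dispatch) with plain multiplication, and swap the lazy iterate/take/last pipeline for a strict tail-recursive loop that GHC -O2 can keep in unboxed Doubles.

        {-# LANGUAGE BangPatterns #-}

        trigamma :: Double -> Double
        trigamma x = go (6 :: Int) (x' - 1) p'
          where
            x' = x + 6
            p  = 1 / (x' * x')
            p' = p / 2 + c / x'
            c  = foldr1 (\a b -> a + b * p) [1, 1/6, -1/30, 1/42, -1/30, 5/66]
            -- Same recurrence as the C for-loop: six strict iterations.
            go :: Int -> Double -> Double -> Double
            go 0 _  acc  = acc
            go n !y !acc = go (n - 1) (y - 1) (1 / (y * y) + acc)

    Whether this matters in practice is a Criterion question; if it does not, keeping the handful of numeric hot spots in C behind the FFI remains a reasonable fallback, as the question already suggests.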

  • What are some typing patterns using a standard QWERTY keyboard that work well for you as a programmer?

    - by OrbMan
    After hunting and pecking for about 35 years, I have decided to learn to type. I am learning QWERTY and have learned about 2/3 of the letters so far. While learning, I have noticed how asymmetrical the keyboard is, which really bothers me. (I will probably switch to a symmetrical keyboard eventually, but for now am trying to do everything as standard and "correct" as possible.) Although I am not there yet in my lessons, it seems that many of the keys I am going to use as a C# web developer are supposed to be typed by the pinky of my right hand. Are there any typing patterns you have developed that are more ergonomic (or faster) when typing large volumes of code rife with braces, colons, semi-colons and quotes? Or, should I just accept the fact that every other key is going to be hit with my right pinky? It is not that speed is such a huge concern, as much as that it seems so inefficient to rely on one finger so much... As an example, some of the conventions I use as a hunt-and-pecker, like typing open and close braces right away with my index and middle finger, and then hitting the left arrow key to fill in the inner content, don't seem to work as well with just a pinky. What are some typing patterns using a standard QWERTY keyboard that work really well for you as a programmer? Update: US layout, and I use the home row. Update 2: Despite my best efforts to the contrary, people are interpreting this question as "how do I learn to type" or "what keyboard should I use". Take it as a given that I will learn to type, and that I will be doing so on a standard QWERTY layout keyboard, not Dvorak. I am interested in acquiring a skill that will be useful wherever I go.

    Read the article

  • Is a MySQL index useful on column 'state' when only doing bit operations on the column?

    - by Geert-Jan
    I have a lot of domain entities (stored in MySQL) which undergo lots of different operations. Each operation is executed from a different program. I need to keep (flow) state for these entities, which I implemented as a long field 'flowstate' used as a bitset. To query MySQL for entities which have undergone a certain operation I do something like: select * from entities where state >> 7 & 1 = 1 indicating bit 7 (corresponding to operation 7) has run. (<-- simplified) Anyway, I really didn't pay attention to the performance implications of this setup in the beginning, and I think I'm in a bit of trouble since queries like the above run pretty slowly. What I'd like to know: Does a MySQL index on 'flowstate' help at all? After all, it's not a single value MySQL can quickly find using a binary sort or whatever. If it doesn't, are there any other things I could do to speed things up? Are there special 'mask indices' for fields with use cases like the above? TIA, Geert-jan

    Read the article
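
    On the index question: an expression like state >> 7 & 1 = 1 cannot use a B-tree index on flowstate, because MySQL only seeks on the stored value, not on arbitrary expressions over it, so an index on that column does not help these queries. A sketch of one common workaround (column and index names here are made up) is to give each frequently queried operation its own indexed flag:

        -- One TINYINT flag per operation that gets queried, each individually indexable.
        ALTER TABLE entities ADD COLUMN op7_done TINYINT(1) NOT NULL DEFAULT 0;
        CREATE INDEX idx_entities_op7_done ON entities (op7_done);

        -- Was:  SELECT * FROM entities WHERE state >> 7 & 1 = 1;
        SELECT * FROM entities WHERE op7_done = 1;

    Whether the optimizer actually picks such an index depends on how selective the flag is; for flags that are almost always set, a separate "pending work" table keyed by entity id can be the better shape.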

  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appear to be two ways of storing huge gigapixel images: Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format. Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later. In both cases, the reader must understand the format well enough to understand how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata? The main issues are: Storing images in a way that allows for rapid random access (tiling) Storing downsampled images for rapid zooming (pyramid) Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image) Storing important metadata, including associated images like a slide's label and thumbnail Support for lossy storage What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.

    Read the article

  • How to Manage CSS Explosion

    - by Jason
    I have been heavily relying on CSS for a website that I am working on (currently, everything is done as property values within each tag on the website and I'm trying to get away from that to make updates significantly easier). The problem I am running into, is I'm starting to get a bit of "CSS explosion" going on. It is becoming difficult for me to decide how to best organize and abstract data within the CSS file. For example: I am using a large number of div tags within the website (previously it was completely tables based). So I'm starting to get a lot of CSS that looks like this... div.title { background-color: Blue; color: White; text-align: center; } div.footer { /* Stuff Here */ } div.body { /* Stuff Here */ } etc. It's not too bad yet, but since I am learning here, I was wondering if recommendations could be made on how best to organize the various parts of a CSS file. What I don't want to get to is where I have a separate CSS attribute for every single thing on my website (which I have seen happen), and I always want the CSS file to be fairly intuitive. (P.S. I do realize this is a very generic, high-level question. My ultimate goal is to make it easy to use the CSS files and demonstrate their power to increase the speed of web development so other individuals that may work on this site in the future will also get into the practice of using them rather than hard-coding values everywhere.)

    Read the article

  • Dependency between operations in scala actors

    - by paradigmatic
    I am trying to parallelise some code using Scala actors. This is my first real code with actors, but I have some experience with Java multithreading and MPI in C. However, I am completely lost. The workflow I want to realise is a circular pipeline and can be described as follows: Each worker actor has a reference to another one, thus forming a circle. There is a coordinator actor which can trigger a computation by sending a StartWork() message. When a worker receives a StartWork() message, it processes some stuff locally and sends a DoWork(...) message to its neighbour in the circle. The neighbour does some other stuff and sends in turn a DoWork(...) message to its own neighbour. This continues until the initial worker receives a DoWork() message. The coordinator can send a GetResult() message to the initial worker and wait for a reply. The point is that the coordinator should only receive a result when the data is ready. How can a worker wait until the job has come back to it before answering the GetResult() message? To speed up computation, any worker can receive a StartWork() at any time. Here is my first-try pseudo-implementation of the worker: class Worker( neighbor: Worker, numWorkers: Int ) { var ready = Foo() def act() { case StartWork() => { val someData = doStuff() neighbor ! DoWork( someData, numWorkers-1 ) } case DoWork( resultData, remaining ) => if( remaining == 0 ) { ready = resultData } else { val someOtherData = doOtherStuff( resultData ) neighbor ! DoWork( someOtherData, remaining-1 ) } case GetResult() => reply( ready ) } } On the coordinator side: worker ! StartWork() val result = worker !? GetResult() // should wait

    Read the article
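
    One way to avoid polling a ready variable, sketched in the same scala.actors style as the question (doStuff/doOtherStuff are stubs, and wiring up the circle is unchanged): remember the sender of a GetResult() that arrives too early, and reply once the DoWork message has gone all the way around.

        import scala.actors.{Actor, OutputChannel}
        import scala.actors.Actor.loop

        case class StartWork()
        case class DoWork(data: Any, remaining: Int)
        case class GetResult()

        class Worker(neighbor: => Actor, numWorkers: Int) extends Actor {
          private var result: Option[Any] = None
          private var pending: List[OutputChannel[Any]] = Nil

          private def doStuff(): Any = "initial work"       // stub for the real work
          private def doOtherStuff(d: Any): Any = d          // stub for the real work

          def act(): Unit = loop {
            react {
              case StartWork() =>
                neighbor ! DoWork(doStuff(), numWorkers - 1)
              case DoWork(data, 0) =>
                result = Some(data)
                pending.foreach(_ ! data)          // answer anyone who asked too early
                pending = Nil
              case DoWork(data, remaining) =>
                neighbor ! DoWork(doOtherStuff(data), remaining - 1)
              case GetResult() =>
                result match {
                  case Some(r) => reply(r)
                  case None    => pending = sender :: pending  // defer the reply
                }
            }
          }
        }

    Because the deferred reply goes back over the stored sender channel, the coordinator's blocking worker !? GetResult() simply waits until the circle has closed, with no busy-waiting on either side.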

  • Problem when getting pageContent of an unavailable URL in Java

    - by tiendv
    I have some code for getting the page content from a URL: import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.net.URL; import java.net.URLConnection; public class GetPageFromURLAction extends Thread { public String stringPageContent; public String targerURL; public String getPageContent(String targetURL) throws IOException { String returnString=""; URL urlString = new URL(targetURL); URLConnection openConnection = urlString.openConnection(); String temp; BufferedReader in = new BufferedReader( new InputStreamReader(openConnection.getInputStream())); while ((temp = in.readLine()) != null) { returnString += temp + "\n"; } in.close(); // String nohtml = sb.toString().replaceAll("\\<.*?>",""); return returnString; } public String getStringPageContent() { return stringPageContent; } public void setStringPageContent(String stringPageContent) { this.stringPageContent = stringPageContent; } public String getTargerURL() { return targerURL; } public void setTargerURL(String targerURL) { this.targerURL = targerURL; } @Override public void run() { try { this.stringPageContent=this.getPageContent(targerURL); } catch (IOException e) { e.printStackTrace(); } } } Sometimes I receive an HTTP error of 405 or 403 and the result string is null. I have tried checking permission to connect to the URL with: URLConnection openConnection = urlString.openConnection(); openConnection.getPermission() but it usually returns null. Does that mean I don't have permission to access the link? I have tried stripping off the query portion of the URL with: String nohtml = sb.toString().replaceAll("\\<.*?>",""); where sb is a StringBuilder, but it doesn't seem to strip off the whole query substring. In an unrelated question, I'd like to use threads here because I must retrieve many URLs; how can I create a multi-thread client to improve the speed?

    Read the article
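
    On the threading part of the question, a sketch using java.util.concurrent rather than hand-managed Thread subclasses; it reuses the question's GetPageFromURLAction (assumed to be on the classpath) and calls run() directly so the pool thread does the work. Separately, some servers answer 403/405 to the default Java client, so setting a User-Agent header via openConnection.setRequestProperty("User-Agent", ...) before reading may be worth trying; whether that is the cause here is only a guess.

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class PageFetcher {
            public static void fetchAll(List<String> urls, int threads) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(threads);
                for (final String url : urls) {
                    pool.submit(new Runnable() {
                        public void run() {
                            GetPageFromURLAction action = new GetPageFromURLAction();
                            action.setTargerURL(url);
                            action.run();  // executes synchronously on the pool thread
                            String page = action.getStringPageContent();
                            // hand `page` to whatever processing/monitoring code needs it
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
            }
        }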

  • How can I make this method more Scalalicious

    - by Neil Chambers
    I have a function that calculates the left and right node values for some collection of treeNodes given a simple node.id, node.parentId association. It's very simple and works well enough...but, well, I am wondering if there is a more idiomatic approach. Specifically is there a way to track the left/right values without using some externally tracked value but still keep the tasty recursion. /* * A tree node */ case class TreeNode(val id:String, val parentId: String){ var left: Int = 0 var right: Int = 0 } /* * a method to compute the left/right node values */ def walktree(node: TreeNode) = { /* * increment state for the inner function */ var c = 0 /* * A method to set the increment state */ def increment = { c+=1; c } // poo /* * the tasty inner method * treeNodes is a List[TreeNode] */ def walk(node: TreeNode): Unit = { node.left = increment /* * recurse on all direct descendants */ treeNodes filter( _.parentId == node.id) foreach (walk(_)) node.right = increment } walk(node) } walktree(someRootNode) Edit - The list of nodes is taken from a database. Pulling the nodes into a proper tree would take too much time. I am pulling a flat list into memory and all I have is an association via node id's as pertains to parents and children. Adding left/right node values allows me to get a snapshop of all children (and childrens children) with a single SQL query. The calculation needs to run very quickly in order to maintain data integrity should parent-child associations change (which they do very frequently). In addition to using the awesome Scala collections I've also boosted speed by using parallel processing for some pre/post filtering on the tree nodes. I wanted to find a more idiomatic way of tracking the left/right node values. After looking at the answers listed I have settled on this synthesised version: def walktree(node: TreeNode) = { def walk(node: TreeNode, counter: Int): Int = { node.left = counter node.right = treeNodes .filter( _.parentId == node.id) .foldLeft(counter+1) { (counter, curnode) => walk(curnode, counter) + 1 } node.right } walk(node,1) }

    Read the article
