Search Results

Search found 6355 results on 255 pages for 'slow downs'.


  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to build an HTTP web chat using long-held HTTP requests (Comet), Ajax and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except perhaps the chat stream itself, which could also be written in JavaScript (node.js): I don't want to start a PHP process per user, because there is no good way to pass chat messages between those PHP children. So I thought about writing my own socket server, in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a pure web developer (PHP) I'm not very familiar with sockets, since I usually let the web server take care of connections. The chat messages won't be saved on disk or in MySQL but in RAM, as an array or object, for best speed. As far as I know there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a large number of socket connections and process them successively in a loop (read and write; on an incoming message, write to all socket connections). The problem is that there will most likely be lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users. My question is: can node.js handle such a socket server with better performance? Node.js is event-based, but I'm not sure whether it can process multiple events at the same time (wouldn't that need multi-threading?) or whether there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (far fewer users), but AFAIK there are single-threaded IRC servers (written in C++ or whatever) that can handle thousands of users, so maybe it's also possible in PHP. I would prefer PHP over node.js, because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.
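
    A minimal sketch of the event-loop model being asked about, using node.js's built-in net module and an in-memory client list (the port and the broadcast-everything behaviour are illustrative assumptions, not a finished chat server): a single thread accepts every connection and reacts to each message via callbacks, so no per-user process is needed.

        // chat_server.js - single-threaded, event-driven broadcast sketch
        var net = require('net');
        var clients = [];                          // all open sockets, kept in RAM

        var server = net.createServer(function (socket) {
          clients.push(socket);                    // a new chat user connected
          socket.on('data', function (chunk) {     // an incoming message...
            clients.forEach(function (c) {
              if (c !== socket) c.write(chunk);    // ...is written to every other connection
            });
          });
          socket.on('close', function () {         // drop disconnected users
            clients.splice(clients.indexOf(socket), 1);
          });
        });

        server.listen(8000);                       // port is arbitrary for the sketch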

    Read the article

  • how could application installations/configurations be easier in linux? [closed]

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking of config files and reading of manuals/tutorials before you have it running the way you want. I know that it gets much easier with time, and apt-get installation on Ubuntu/Debian is heading in the right direction. But how could Linux become more user-friendly in the future? I thought that more could be automated, like in an IDE: for example, typing svn would show all the subcommands and a description of each as you move between them with the keyboard. That would be great, but that's just one example. Another is navigating between folders in the terminal; right now you have to type a lot just to jump from one folder to another, and some more automation would help there too. I know these extra features would slow the server down a little, but it's 2010 now; these features are not that heavy on the CPU, yet they make a server more user-friendly and encourage maintenance instead of scaring you off. What do you think? Should/could we have a more user-friendly Linux environment on servers? Is there something that has annoyed you a lot? A lot of things are done in the Unix way, but maybe we should reinvent the wheel in some areas, because today it is so repetitive and difficult to do easy tasks. It should be easier, I think.

    Read the article

  • Keyboard shortcut to Un/Comment out code in Mathematica 7?

    - by dbjohn
    A keyboard shortcut to comment/uncomment a piece of code is common in IDEs for other languages like Java or .NET. I find it a very useful technique when experimenting by trial and error: temporarily commenting out and uncommenting lines, words and parts of the code to find out what is and isn't working. I cannot find any such keyboard shortcut in the Mathematica 7 front end. I know that it is possible to comment out code by selecting it, right-clicking and choosing Un/Comment from the menu that appears, but this is too slow while coding. I tried to access this using the Menu key on the keyboard, but the Mathematica front end doesn't respond to or recognise this key, unlike other applications; otherwise that could have allowed a key combination for commenting. Can someone else verify that this isn't unique to my machine and that the key isn't recognised by Mathematica? I looked at this question and at the KeyEventTranslations.tr file, but I don't think there is any way to create a shortcut to do this(?). Should I just live with it? Any other suggestions? (I have seen that there is an Emacs mode for Mathematica; I have never tried Emacs or that mode, and imagine it would have this ability, but I would prefer not to go to the trouble and uncertainty of installing it. I would also guess that Wolfram Workbench could do this, but that may not be worth the investment just for this.)

    Read the article

  • How to only fade content once it is ready in Jquery?

    - by Jared
    Hello, I've seen examples for load() and ready(), but they always seem to apply to the window or document body, or a div or img or something like that. I have a hover() handler, and when you hover over an element it uses fadeIn() to reveal a popup. The problem is that the images inside the popup div aren't loaded yet, so it ends up just appearing anyway. I need code that will only fade the popup in once its contents are fully loaded. When I just plug in other selectors like the following, it doesn't work:

        $(this).next(".menuPopup").ready(function () { // or load(); neither works
            fadeTo(300, 1);
        });

    EDIT: Relevant code:

        $('.leftMenuProductButton').hover(
            function () { $(this).next(".menuPopup").stop().css("opacity", 1).fadeIn('slow'); },
            function () { $(this).next(".menuPopup").stop().hide(); }
        );

    HTML:

        <div class="leftMenuProductButton">Product 1</div>
        <div id="raceramps56" class="menuPopup"><IMG SRC="myimage.jpeg"></div>
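
    One common pattern for this situation, shown here only as a sketch (the selectors and the 300ms duration are carried over from the question; the "loaded" data flag is an assumption): bind a load handler directly to the images inside the popup and fade the popup in only after every image has fired, remembering that cached images may never fire load.

        $('.leftMenuProductButton').hover(function () {
          var $popup = $(this).next('.menuPopup');
          var $imgs = $popup.find('img');
          var pending = $imgs.length;

          if (pending === 0 || $popup.data('loaded')) {   // nothing to wait for
            $popup.stop().fadeTo(300, 1);
            return;
          }
          $imgs.one('load', function () {                 // fires once per image
            if (--pending === 0) {
              $popup.data('loaded', true);
              $popup.stop().fadeTo(300, 1);
            }
          }).each(function () {
            if (this.complete) $(this).trigger('load');   // cover already-cached images
          });
        }, function () {
          $(this).next('.menuPopup').stop().hide();
        });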

    Read the article

  • jQuery weirdness. Div becomes attached to chained element markup?

    - by Scott B
    I've got a div in my app that is displayed each time my theme options panel is "saved". The markup is:

        <form method="post">
        <?php if ( $_REQUEST['saved'] ) { ?>
            <div id="message" class="updated fade"><p>Sweet! The settings were saved :)</p></div>
            <script type="text/javascript">$('#message').delay(3000).fadeOut(3000);</script>
        <?php } ?>

    This has the effect of showing the div (which is absolutely positioned to overlay the interface). I'm also using jQuery to fade the message offscreen after 3 seconds. This works fine; however, when I add a bit of script to my jQuery chain (see the commented-out block below), the message div is only visible when the jPicker popup appears.

        $(function() {
            $("#carousel").jCarouselLite({ btnNext: ".next", btnPrev: ".prev", visible: 6, speed: 700 });
            $('#carousel').show();
            $('#myTheme').change(function() {
                var myImage = $('#myTheme :selected').text();
                $('.selectedImage img').attr('src', '../wp-content/themes/myTheme/styles/' + myImage + '/screenshot.jpg');
            });
            $('#carousel ul li').click(function(e) {
                var myOption = $(this).children('img').attr('title');
                $("#myTheme option[value='" + myOption + "']").attr('selected', 'selected');
                $("#myTheme").css('backgroundColor', '#A9A9A9').animate({backgroundColor: "#ffffff"}, 'slow');
            });
            $('#carousel ul li').hover(function(e) {
                var img_src = $(this).children('img').attr('src');
                $('.selectedImage img').attr('src', img_src);
            }, function() {
                $('.selectedImage img').attr('src', '<?php echo $selectedThumb; ?>');
            });
            /*
            $('#myTheme_sidebar_color').jPicker({},
                function(color) { $(this).val(color.get_Hex()); },
                function(color) { $(this).val(color.get_Hex()); }
            );
            */
        });

    Read the article

  • Reload images in a UIWebView after they have been downloaded by a background thread

    - by dantastic
    I have an application that frequently checks in with a server and downloads a batch of articles to the iPhone. The articles are in HTML and are just stored using Core Data. An article has 0-n images on the page. Downloading all associated images at the same time as the text would be too slow and take too much bandwidth. Users are not likely to open every article, but if they open an article once it is likely they will open it several times. So I want to download and store the images locally when they are needed. These articles are listed in a UITableView. When you tap an article you pop open a UIWebView that displays the article. I have a function that checks whether I have already downloaded the images associated with the article. If I have, I just pop open the UIWebView and everything works fine. If I don't have the images, I go off and download them and store them in my Documents directory. Although this is working, the app hangs while the images are downloading. Not very tidy. I want the article to open in a snap and download the images while the article is open. So what I've done is check whether the images are downloaded; if they aren't, I go ahead and just "touch" the files I need and load the web view. The UIWebView opens up, but the referenced images contain no data. Then, in a background thread, I download the images and overwrite the "dummy" ones. This saves the images and everything, but it won't reload the images in my current UIWebView. I have to go back out of the article and back in again to see the images. Is there any way around this, such as reloading just an image in a UIWebView?
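
    One approach to sketch here (the element id, the relative path, and the idea of tagging each <img> in the article HTML with a known id are all assumptions): leave the web view open and, when the background download finishes, inject a bit of JavaScript with UIWebView's stringByEvaluatingJavaScriptFromString: that points the placeholder image at the freshly written file, with a cache-busting query so the web view re-reads it from disk.

        // Runs inside the already-loaded article page, e.g. invoked from the
        // download-complete callback via stringByEvaluatingJavaScriptFromString:.
        function refreshImage(imgId, localPath) {
          var img = document.getElementById(imgId);        // each <img> carries a known id (assumption)
          if (img) {
            img.src = localPath + '?t=' + new Date().getTime();  // force a re-read of the file
          }
        }
        refreshImage('article-img-0', 'img0.jpg');         // path relative to the page's baseURL (assumption)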

    Read the article

  • Converting to a column oriented array in Java

    - by halfwarp
    Although I have Java in the title, this could be for any OO language. I'd like to hear a few new ideas for improving the performance of something I'm trying to do. I have a method that constantly receives an Object[] array. I need to split the objects in this array across multiple lists, so that I have an independent list for each column of all the arrays the method receives. Example:

        List<List<Object>> columnOriented = new ArrayList<List<Object>>();

        public void newObject(Object[] obj) {
            for (int i = 0; i < obj.length; i++) {
                columnOriented.get(i).add(obj[i]);
            }
        }

    Note: for simplicity I've omitted the initialization of objects and such. The code shown above is of course slow. I've already tried a few other things, but would like to hear some new ideas. How would you do this, knowing it's very performance-sensitive?

    Read the article

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file, and then loop until all of it successfully goes through send() (obviously handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on... Although I can use any other number than 8192 when I test the transfer on my local machine, once I try it over a network, it seems like 8191 is the largest number I can use. When I try to use any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special? I've done some more testing and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me if there is a number better than the others (better than 8000)? I have a feeling that it has something to do with the fact that the default send buffer allocated to the socket by Winsock is 8KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option nor the TCP_NODELAY option. Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?

    Read the article

  • What is the best approach to 2D collision detection on the iPhone?

    - by Magic Bullet Dave
    Been working on this problem of collision detection, and there appear to be three main approaches I could take:

    1. Sprite and mask approach (AND the overlap of the sprites and check for a non-zero number in the resulting sprite pixel data).
    2. Bounding circles, rectangles or polygons (create one or more shapes that enclose the sprites and do the basic maths to check for overlaps).
    3. Use an existing sprite library.

    The first approach, even though it would have been the way I would have done it in the old days of 16x16 sprite blocks, seems impractical because there just isn't an easy way of getting at the individual image pixel data and/or alpha channel within Quartz (or OpenGL, for that matter). Detecting the overlap of the bounding boxes is easy, but creating a third image from the overlap and then testing it for pixels is complicated, and my gut feeling is that even if we could get it to work it would be slow. Am I missing something neat here? The second approach involves dividing our sprites into several polygons and testing them for overlaps; the more polygons, the more accurate the collision detection. The benefit is that it is fast and can be accurate. The downside is that it makes sprite creation more complicated, i.e. we have to create the polygons for each sprite; for speed the best approach is to create a tree of polygons. The third approach I'm not sure about, as it involves buying code (or using an open-source licence); I am not sure what the best library to use is, or whether this would make life easier or give us a problem integrating it into our app. So in short I am favouring the polygon-and-tree approach, and would appreciate your views on this before I go and write lots of code. Best regards, Dave
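
    For the second approach, the basic overlap test itself is tiny; here is a language-neutral sketch (written in JavaScript purely for illustration, with made-up coordinates) of the bounding-circle case, comparing squared distances so no square root is needed per test.

        // Returns true if two circles {cx, cy, r} overlap.
        function circlesOverlap(a, b) {
          var dx = a.cx - b.cx;
          var dy = a.cy - b.cy;
          var rSum = a.r + b.r;
          return dx * dx + dy * dy <= rSum * rSum;   // squared-distance comparison, no sqrt
        }

        // Example: two sprites enclosed by circles of radius 16
        circlesOverlap({cx: 100, cy: 80, r: 16}, {cx: 110, cy: 90, r: 16});   // true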

    Read the article

  • An online php debugger/code editor

    - by Zirak
    It's a simple deal: I'm sometimes in places where I don't have my laptop, and find myself with spare time and an idea for a project. But unfortunately, I can't do anything about it. I have tried a variety of solutions, including running IDEs (like PhpStorm or Aptana) from a disk-on-key or CD (very slow and unappealing) and several online solutions (like http://phpanywhere.net), and found that all of them are either buggy, overloaded or underloaded with features, just difficult to use, require FTP, etc. All that is required here is syntax highlighting and debugging alerts; no actual running of code. So the question is split into two: 1) Do you know of a good online PHP editor that you've used and enjoyed? 2) If not, how would you go about making one? The second part seems a bit general, so I'll try to expand: the idea is that if you can't find one, you make one. The question is about the concept of making a syntax highlighter (which shouldn't be too difficult), and the difficult part of catching PHP errors WITHOUT executing any PHP code. Thank you in advance.

    Read the article

  • Is it a good choice to move a sublayer around a view using UIPanGestureRecognizer?

    - by Pranjal Bikash Das
    I have a CALayer, and as a sublayer to this CALayer I have added an imageLayer which contains an image with a resolution of 276x183. I added a UIPanGestureRecognizer to the main view and calculate the coordinates of the CALayer as follows:

        - (void)panned:(UIPanGestureRecognizer *)sender {
            subLayer.frame = CGRectMake([sender locationInView:self.view].x - 138,
                                        [sender locationInView:self.view].y - 92,
                                        276, 183);
        }

    In viewDidLoad I have:

        subLayer.backgroundColor = [UIColor whiteColor].CGColor;
        subLayer.frame = CGRectMake(22, 33, 276, 183);
        imageLayer.contents = (id)[UIImage imageNamed:@"A.jpg"].CGImage;
        imageLayer.frame = subLayer.bounds;
        imageLayer.masksToBounds = YES;
        imageLayer.cornerRadius = 15.0;
        subLayer.shadowColor = [UIColor blackColor].CGColor;
        subLayer.cornerRadius = 15.0;
        subLayer.shadowOpacity = 0.8;
        subLayer.shadowOffset = CGSizeMake(0, 3);
        [subLayer addSublayer:imageLayer];
        [self.view.layer addSublayer:subLayer];

    It gives the desired output, but it is a bit slow in the simulator. I have not yet tested it on the device. So my question is: is it OK to move a CALayer containing an image this way?

    Read the article

  • Jquery click event propagation

    - by ozsenegal
    I have a table with click events bound to its rows (tr). There are also A elements, with their own click events assigned, inside those rows. The problem is that when I click on an A element, it also fires the click event from the row, and I don't want this behaviour; I just want the A element's click event to fire. Code:

        // Row (TR) event
        $("tr:not(:first)").click(function() {
            $(".window,.backFundo,.close").remove();
            var position = $(this).offset().top;
            position = position < 0 ? 20 : position;
            $("body").append($("<div></div>").addClass("backFundo"));
            $("body").append($("<div></div>").addClass("window")
                .html("<span class=close><img src=Images/close.png id=fechar /></span>")
                .append("<span class=titulo>O que deseja fazer?</span><span class=crud><a href=# id=edit>Editar</a></span><span class=crud><a href=# id=delete codigo=" + $(this).children("td:first").html() + ">Excluir</a></span>")
                .css({top: "20px"})
                .fadeIn("slow"));
            $(document).scrollTop(0);
        });

        // Anchor event
        $("a").live("click", function() { alert("clicked!"); });

    Whenever I click the anchor it also fires the event from its parent row. Any ideas?
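
    A sketch of two ways to keep the row handler out of anchor clicks (the intended behaviour is assumed, and the row logic is elided): a handler attached with live() runs only after the event has already bubbled to the document, so calling stopPropagation() there is too late for the row's own handler; either inspect the click target inside the row handler, or bind the anchor handler directly and stop the bubble there.

        // Option 1: ignore clicks that originated on (or inside) an anchor.
        $("tr:not(:first)").click(function (e) {
          if ($(e.target).closest("a").length) {
            return;                      // let the anchor's own handler deal with it
          }
          // ...original row logic goes here...
        });

        // Option 2: bind the anchor handler directly and stop the bubble.
        $("tr:not(:first) a").click(function (e) {
          e.stopPropagation();           // the row's click handler never sees this event
          alert("clicked!");
        });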

    Read the article

  • Database design: Calculating the Account Balance

    - by 001
    How do I design the database to calculate the account balance? 1) Currently I calculate the account balance from the transaction table. In my transaction table I have "description" and "amount", etc. I add up all the "amount" values, and that works out the user's account balance. I showed this to my friend and he said that is not a good solution: when my database grows it's going to slow down. He said I should create a separate table to store the calculated account balance. If I did this, I would have to maintain two tables, and it's risky; the account balance table could go out of sync. Any suggestions? EDIT: OPTION 2: should I add an extra column, "Balance", to my transaction table? Then I would not need to go through many rows of data to perform my calculation. Example: John buys $100 of credit, is debited $60, then adds $200 of credit:

        Amount  $100, Balance  $100.
        Amount  -$60, Balance   $40.
        Amount  $200, Balance  $240.
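
    A small sketch of Option 2's bookkeeping, written in JavaScript purely to illustrate the arithmetic (the field names are assumptions): each new transaction stores its amount plus the previous row's balance, so the current balance is simply the balance on the latest row and no summation over history is needed.

        // Append a transaction, carrying the running balance forward.
        function addTransaction(transactions, description, amount) {
          var previous = transactions.length
            ? transactions[transactions.length - 1].balance
            : 0;
          transactions.push({ description: description, amount: amount, balance: previous + amount });
        }

        var txns = [];
        addTransaction(txns, 'buy credit', 100);   // balance 100
        addTransaction(txns, 'purchase',   -60);   // balance 40
        addTransaction(txns, 'buy credit', 200);   // balance 240
        // current balance: txns[txns.length - 1].balance === 240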

    Read the article

  • Faster way to split a string and count characters using R?

    - by chrisamiller
    I'm looking for a faster way to calculate GC content for DNA strings read in from a FASTA file. This boils down to taking a string and counting the number of times that the letter 'G' or 'C' appears. I also want to specify the range of characters to consider. I have a working function that is fairly slow, and it's causing a bottleneck in my code. It looks like this:

        ##
        ## count the number of GCs in the characters between start and stop
        ##
        gcCount <- function(line, st, sp) {
          chars = strsplit(as.character(line), "")[[1]]
          numGC = 0
          for (j in st:sp) {
            ## nested ifs faster than an OR (|) construction
            if (chars[[j]] == "g") {
              numGC <- numGC + 1
            } else if (chars[[j]] == "G") {
              numGC <- numGC + 1
            } else if (chars[[j]] == "c") {
              numGC <- numGC + 1
            } else if (chars[[j]] == "C") {
              numGC <- numGC + 1
            }
          }
          return(numGC)
        }

    Running Rprof gives me the following output:

        > a = "GCCCAAAATTTTCCGGatttaagcagacataaattcgagg"
        > Rprof(filename="Rprof.out")
        > for(i in 1:500000){gcCount(a,1,40)};
        > Rprof(NULL)
        > summaryRprof(filename="Rprof.out")
                        self.time self.pct total.time total.pct
        "gcCount"           77.36     76.8     100.74     100.0
        "=="                18.30     18.2      18.30      18.2
        "strsplit"           3.58      3.6       3.64       3.6
        "+"                  1.14      1.1       1.14       1.1
        ":"                  0.30      0.3       0.30       0.3
        "as.logical"         0.04      0.0       0.04       0.0
        "as.character"       0.02      0.0       0.02       0.0

        $by.total
                       total.time total.pct self.time self.pct
        "gcCount"          100.74     100.0     77.36     76.8
        "=="                18.30      18.2     18.30     18.2
        "strsplit"           3.64       3.6      3.58      3.6
        "+"                  1.14       1.1      1.14      1.1
        ":"                  0.30       0.3      0.30      0.3
        "as.logical"         0.04       0.0      0.04      0.0
        "as.character"       0.02       0.0      0.02      0.0

        $sampling.time
        [1] 100.74

    Any advice for making this code faster?

    Read the article

  • Problem with Active Record

    - by kshchepelin
    Hello everyone. Let's assume we have a User model, and a user can plan some activities. There are about 50 types of activities. All activities have common properties, such as start_time, end_time, user_id, etc., but each of them also has some unique properties. Right now each activity lives in its own table in the DB, and that's why we end up with terrible SQL like this:

        SELECT * FROM `first_activities_table` WHERE (`first_activity`.`id` IN (17,18))
        SELECT * FROM `second_activities_table` WHERE (`second_activity`.`id` = 17)
        .....
        SELECT * FROM `n_activities_table` WHERE (`n_activity`.`id` = 44)

    About 50 queries. That's terrible. There are different ways to solve this. One is to take the activity type with the largest number of properties, create an 'Activities' table and use an STI model; but then we would have to name our columns in an awkward way, and records in that table would often have NULL fields. Another is still an STI model, but with only the columns common to all activity types plus a blob column holding serialized properties; however, we have to search on activities, so that could be a problem, and serialization is quite slow. Please help me deal with this. Maybe my problem has a quite different solution that fits my needs. Thanks for the help.

    Read the article

  • vectorizing loops in Matlab - performance issues

    - by Gacek
    This question is related to these two: http://stackoverflow.com/questions/2867901/introduction-to-vectorizing-in-matlab-any-good-tutorials and http://stackoverflow.com/questions/2561617/filter-that-uses-elements-from-two-arrays-at-the-same-time Based on the tutorials I read, I was trying to vectorize a procedure that takes a really long time. I've rewritten this:

        function B = bfltGray(A,w,sigma_r)
        dim = size(A);
        B = zeros(dim);
        for i = 1:dim(1)
           for j = 1:dim(2)
              % Extract local region.
              iMin = max(i-w,1);
              iMax = min(i+w,dim(1));
              jMin = max(j-w,1);
              jMax = min(j+w,dim(2));
              I = A(iMin:iMax,jMin:jMax);
              % Compute Gaussian intensity weights.
              F = exp(-0.5*(abs(I-A(i,j))/sigma_r).^2);
              B(i,j) = sum(F(:).*I(:))/sum(F(:));
           end
        end

    into this:

        function B = rngVect(A, w, sigma)
        W = 2*w+1;
        I = padarray(A, [w,w],'symmetric');
        I = im2col(I, [W,W]);
        H = exp(-0.5*(abs(I-repmat(A(:)', size(I,1),1))/sigma).^2);
        B = reshape(sum(H.*I,1)./sum(H,1), size(A, 1), []);

    But this version seems to be as slow as the first one, and in addition it uses a lot of memory and sometimes causes memory problems. I suppose I've done something wrong, probably some logic mistake in the vectorization. Well, in fact I'm not surprised: this method creates really big matrices, and probably the computations are proportionally longer. I have also tried to write it using nlfilter (similar to the second solution given by Jonas), but that seems to be hard since I use Matlab 6.5 (R13), where no sophisticated function handles are available. So once again, I'm asking not for a ready solution, but for some ideas that would help me solve this in a reasonable time. Maybe you can point out what I did wrong.

    Read the article

  • Python: Most efficient way to concatenate and rearrange files

    - by user300890
    Hi, I am reading from several files; each file is divided into two pieces, first a header section of a few thousand lines followed by a body of a few thousand. My problem is that I need to concatenate these files into one file where all the headers are at the top, followed by the bodies. Currently I am using two loops: one to pull out all the headers and write them, and a second to write the body of each file (I also include a tmp_count variable to limit the number of lines loaded into memory before dumping to file). This is pretty slow: about 6 minutes for a 13GB file. Can anyone tell me how to optimize this, or whether there is a faster way to do this in Python? Thanks! Here is my code:

        def cat_files_sam(final_file_name, work_directory_master, file_count):
            final_file = open(final_file_name, "w")
            if len(file_count) > 1:
                file_count = sort_output_files(file_count)
            # only for @ headers
            for bowtie_file in file_count:
                #print bowtie_file
                tmp_list = []
                tmp_count = 0
                for line in open(os.path.join(work_directory_master, bowtie_file)):
                    if line.startswith("@"):
                        if tmp_count == 1000000:
                            final_file.writelines(tmp_list)
                            tmp_list = []
                            tmp_count = 0
                        tmp_list.append(line)
                        tmp_count += 1
                    else:
                        final_file.writelines(tmp_list)
                        break
            for bowtie_file in file_count:
                #print bowtie_file
                tmp_list = []
                tmp_count = 0
                for line in open(os.path.join(work_directory_master, bowtie_file)):
                    if line.startswith("@"):
                        continue
                    if tmp_count == 1000000:
                        final_file.writelines(tmp_list)
                        tmp_list = []
                        tmp_count = 0
                    tmp_list.append(line)
                    tmp_count += 1
                final_file.writelines(tmp_list)
            final_file.close()

    Read the article

  • Why does first call to java.io.File.createTempFile(String,String,File) take 5 seconds on Citrix?

    - by Ben Roling
    While debugging slow startup of an Eclipse RCP app on a Citrix server, I found that java.io.File.createTempFile(String, String, File) is taking 5 seconds. It does this only on the first execution and only for certain user accounts; specifically, I am noticing it with Citrix anonymous user accounts. I have not tried many other types of accounts, but this behaviour is not exhibited with an administrator account. Also, it does not matter whether the user has access to write to the given directory or not: if the user does not have access, the call takes 5 seconds to fail; if they do have access, the call takes 5 seconds to succeed. This is on a Windows 2003 Server. I've tried Sun's 1.6.0_16 and 1.6.0_19 JREs and see the same behaviour. I googled a bit expecting this to be some sort of known issue, but didn't find anything. It seems like someone else would have had to run into this before. The Eclipse Platform uses File.createTempFile() to test various directories to see whether they are writable during initialization, and this issue adds 5 seconds to the startup time of our application. I imagine somebody has run into this before and might have some insight. Here is sample code I executed to see that it is indeed this call that is consuming the time. I also tried it with a second call to createTempFile and noticed that subsequent calls return nearly instantaneously.

        public static void main(final String[] args) throws IOException {
            final File directory = new File(args[0]);
            final long startTime = System.currentTimeMillis();
            File file = null;
            try {
                file = File.createTempFile("prefix", "suffix", directory);
                System.out.println(file.getAbsolutePath());
            } finally {
                System.out.println(System.currentTimeMillis() - startTime);
                if (file != null) {
                    file.delete();
                }
            }
        }

    Sample output of this program is the following:

        C:\java.exe -jar filetest.jar C:/Temp
        C:\Temp\prefix8098550723198856667suffix
        5093

    Read the article

  • Two radically different queries against 4 mil records execute in the same time - one uses brute force.

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute force to be slow and the indexed one to be fast. Not so. And I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you. [Execution plan image: brute-force query] [Execution plan image: index-based exception query] My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.

    Read the article

  • Why does Android allocate more memory than needed when loading images

    - by Simon
    Folks, I don't think that this is a duplicate, and it is NOT one of those "how do I avoid OOMs" questions. This is a genuine quest for knowledge, so hold off on those downvotes please... Imagine I have a JPEG of 500x500 pixels. I load it as ARGB_8888, which is as "bad as it gets". I would expect Android to allocate 500x500x4 bytes = a little under 1MB; however, look at a heap dump and you will see that Android allocates significantly more, often by a factor of 5-10. You frequently see questions on here about OOMs where the stack trace shows a heap request of, say, 15MB, and it is ALWAYS much larger than is required simply to hold the bytes of the image. The OP usually catches some downvotes, then is bombarded with stock answers and comments about using less memory (thanks Romain!) and about scaling. I think there is more than meets the eye here. Anybody know why this is? If there is no apparent answer, I will put together an SSCCE if it helps. PS. I assume that JPEG vs PNG etc. is irrelevant, since we're talking about the memory usage of the backing bitmap, which is simply x times y times BPP - or am I being slow?

    Read the article

  • Non standard interaction among two tables to avoid very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b determines univocally ts. A = pd.DataFrame( [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5), ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)], columns=['a', 'b', 'ts']).set_index(['a', 'b']) AA = A.reset_index() Table B is another one-column (ts) table with non-unique index (a). The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A. B = pd.DataFrame( dict(a=list('aaaaabbcccccc'), ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a') The semantics in this is that B contains observations of occurrences of an event of type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A for each value of b. In other words, I would like to get a table with the same shape of A, that instead of ts contains the "minimum value occurring after ts" as specified by table B. So, my goal would be: C: ('a', 'x') 4 ('a', 'y') 7 ('a', 'z') 5 ('b', 'x') 7 ('b', 'z') 7 ('c', 'y') 8 I have some working code, but is terribly slow. C = AA.apply(lambda row: ( row[0], row[1], B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1).set_index(['a', 'b']) Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))). However, standard solutions using merge/join would take too much RAM in the long run. Consider that now I have 1000 a's, assume constant the average number of b's per a (probably 100-200), and consider that the number of observations per a is probably in the order of 300. In production I will have 1000 more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?

    Read the article

  • Is my slide-to-anchor jQuery routine a correct use of JavaScript, or is there a better way?

    - by Stuart Robson
    I'm currently working on a project with a one-page design that slides up and down between sections via an <a href> link... Currently, I have it written as follows:

        <ul>
          <li><a href="javascript:void(0)" onClick="goToByScroll('top')">home</a></li>
          <li><a href="javascript:void(0)" onClick="goToByScroll('artistsmaterials')">artist's materials</a></li>
          <li><a href="javascript:void(0)" onClick="goToByScroll('pictureframing')">picture framing</a></li>
          <li><a href="javascript:void(0)" onClick="goToByScroll('gallery')">gallery</a></li>
          <li><a href="javascript:void(0)" onClick="goToByScroll('contactus')">contact us</a></li>
        </ul>

    ...with the most relevant portion being the links:

        <a href="javascript:void(0)" onClick="goToByScroll('contactus')">

    Then in a .js file I have:

        function goToByScroll(id) {
            $('html,body').animate({scrollTop: $("#" + id).offset().top}, 'slow');
        }

    Is this OK? Or should this be done a different way?
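
    One common alternative is sketched below (the #nav id is an assumption, and it assumes each section's id matches the link target): use real fragment links such as <a href="#contactus"> so the page still jumps to sections with JavaScript disabled, and bind one unobtrusive click handler instead of inline onClick attributes.

        // Bind once, for every in-page link; no inline onClick or javascript:void(0) needed.
        $('#nav a[href^="#"]').click(function (e) {
          e.preventDefault();                       // stop the instant browser jump
          var target = $(this.hash);                // this.hash is e.g. "#contactus"
          if (target.length) {
            $('html, body').animate({ scrollTop: target.offset().top }, 'slow');
          }
        });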

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, that these are slow over NFS, and that this will be even worse in production (where the front end and back end have even more network hardware between them) and as our database gets even bigger. While this is not a critical application, I would like to improve performance, and I have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

    1. Improve NFS performance by tuning parameters. My instinct is that we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
    2. Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
    3. Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?

    Read the article

  • Jquery .Show() .Hide() not working as expected

    - by fizgig07
    I'm trying to use show and hide to display a different set of select options when a certain report type is selected. I have a couple of problems with this: 1. The .show()/.hide() calls only execute properly if I pass the params ('slow'/'fast') in the first branch of my conditional statement. If I take out the params, or pass params in both branches, only one select shows and it never changes. Here is the code that currently kind of works:

        if ($('#ReportType').val() == 'PbuseExport') {
            $('#PbuseServices').show('fast');
            $('#ReportServiceDropdown').hide('fast');
        } else {
            $('#PbuseServices').hide();
            $('#ReportServiceDropdown').show();
        }

    2. After I've used this control I am taken to a different page. When I use the control again, it retains the original search values and repopulates the control. Again, I only want to show one select if a certain report is chosen. This works correctly if the report type I originally searched on is not "PbuseExport". If I searched on the report type "PbuseExport", then both selects show on the screen, and only when I change the report type does it show only one select. I know this probably isn't very clear... Here is the code that handles the change event on the report type dropdown:

        var serviceValue = $("#ReportType").val();
        switch (serviceValue) {
            case 'PbuseExport':
                $('#PbuseServices').show('fast');
                $('#ReportServiceDropdown').hide('fast');
            default:
                $('#PbuseServices').hide();
                $('#ReportServiceDropdown').show();
                break;
        }
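
    A sketch of the likely culprit in the second snippet: the 'PbuseExport' case has no break, so execution falls through into default and immediately undoes the show/hide. Adding the break, and running the same logic once on page load (an assumption about the desired behaviour for a repopulated form), would look like this:

        function toggleServiceDropdowns() {
          switch ($('#ReportType').val()) {
            case 'PbuseExport':
              $('#PbuseServices').show('fast');
              $('#ReportServiceDropdown').hide('fast');
              break;                                  // without this, the default branch runs too
            default:
              $('#PbuseServices').hide();
              $('#ReportServiceDropdown').show();
              break;
          }
        }

        $('#ReportType').change(toggleServiceDropdowns);
        toggleServiceDropdowns();   // apply once on page load so a repopulated form starts out correct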

    Read the article

  • Why does SQLite take such a long time to fetch the data?

    - by Derk
    I have two possible queries, both giving the result set I want. Query one takes about 30ms to run, but 150ms to fetch the data from the database:

        SELECT id FROM featurevalues as featval3
        WHERE featval3.feature IN (?,?,?,?)
        AND EXISTS (
            SELECT 1 FROM product_to_value, product_to_value as prod2, features, featurevalues
            WHERE product_to_value.feature = features.int
            AND product_to_value.value = featurevalues.id
            AND features.id = ?
            AND featurevalues.id IN (?,?)
            AND product_to_value.product = prod2.product
            AND prod2.value = featval3.id
        )

    Query two takes about 3ms (this is the one I therefore prefer), but also takes 170ms to fetch the data:

        SELECT (
            SELECT prod2.value
            FROM product_to_value, product_to_value as prod2, features, featurevalues
            WHERE product_to_value.feature = features.int
            AND product_to_value.value = featurevalues.id
            AND features.id = ?
            AND featurevalues.id IN (?,?)
            AND product_to_value.product = prod2.product
            AND prod2.value = featval3.id
        ) as id
        FROM featurevalues as featval3
        WHERE featval3.feature IN (?,?,?,?)

    The 170ms seems to be related to the number of rows in table featval3. After an index is used on featval3.feature IN (?,?,?,?), 151 items "remain" in featval3. Is there something obvious I am missing regarding the slow fetching? As far as I know everything is properly indexed. I am confused because the second query takes only a blazing 3ms to run.

    Read the article
