Search Results

Search found 27144 results on 1086 pages for 'tail call optimization'.

Page 104/1086

  • Linux PDF/Postscript Optimizing

    - by Sheldon Ross
    So I have a report system built using Java and iText. PDF templates are created in Scribus, the Java code merges data into the document with iText, the files are copied to an NFS share, and a BASH script prints them: I use acroread to convert each PDF to PostScript, then lpr the PS. (The FOSS application pdftops is horribly inefficient.)

    My main problem is that the PDFs generated with iText/Scribus are very large, and I've recently run into the problem where acroread pukes when it hits 4 GB of memory usage on large (300+ page) documents. (Adobe is painfully slow at updating its tools to 64-bit.) On Windows I can open the file in Adobe Reader and use the "Create Print PDF" option, or whatever it's called, which greatly (roughly 10x) reduces the size of the PDF (it appears to strip a lot of metadata about form fields and such) and produces a PDF that is basically a print image.

    My question: does anyone know of a good solution/program for doing something similar on Linux? Ideally it would optimize the PDF, reduce its size, and reduce the PostScript complexity so the printer could print faster; right now it takes about 15-20 seconds per page.
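
    A hedged sketch, not from the original post: Ghostscript's pdfwrite device can re-distill a PDF on Linux and often shrinks bloated output considerably. The flags below are standard Ghostscript options; whether /printer or /ebook is the right -dPDFSETTINGS preset depends on the print quality needed.

        gs -sDEVICE=pdfwrite \
           -dCompatibilityLevel=1.4 \
           -dPDFSETTINGS=/printer \
           -dNOPAUSE -dBATCH -dQUIET \
           -sOutputFile=report-small.pdf report-large.pdf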

    Read the article

  • Iteration speed of int vs long

    - by jqno
    I have the following two programs:

        long startTime = System.currentTimeMillis();
        for (int i = 0; i < N; i++);
        long endTime = System.currentTimeMillis();
        System.out.println("Elapsed time: " + (endTime - startTime) + " msecs");

    and

        long startTime = System.currentTimeMillis();
        for (long i = 0; i < N; i++);
        long endTime = System.currentTimeMillis();
        System.out.println("Elapsed time: " + (endTime - startTime) + " msecs");

    Note: the only difference is the type of the loop variable (int vs. long). When I run this, the first program consistently prints between 0 and 16 msecs, regardless of the value of N. The second takes a lot longer. For N == Integer.MAX_VALUE, it runs in about 1800 msecs on my machine, and the run time appears to be more or less linear in N.

    So why is this? I suppose the JIT compiler optimizes the int loop to death, and for good reason, because it obviously doesn't do anything. But why doesn't it do the same for the long loop? A colleague thought we might be measuring the JIT compiler doing its work in the long loop, but since the run time seems to be linear in N, this probably isn't the case.
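
    A hedged harness sketch (my addition, not from the question) that separates JIT compilation from the measurement by warming up first; if the long loop were slow only because of compilation work, the warmed-up timing would collapse:

        // Hypothetical benchmark harness: warm up, then time with System.nanoTime().
        public class LoopBench {
            static long timeLongLoop(long n) {
                long start = System.nanoTime();
                for (long i = 0; i < n; i++);        // empty long loop under test
                return System.nanoTime() - start;
            }

            public static void main(String[] args) {
                for (int w = 0; w < 5; w++) {
                    timeLongLoop(Integer.MAX_VALUE); // warm-up: let the JIT compile
                }
                long nanos = timeLongLoop(Integer.MAX_VALUE);
                System.out.println("Elapsed time: " + (nanos / 1000000) + " msecs");
            }
        }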

    Read the article

  • iPhone - dequeueReusableCellWithIdentifier usage

    - by Jukurrpa
    Hi, I'm working on an iPhone app which has a pretty large UITableView with data taken from the web, so I'm trying to optimize its creation and usage. I found out that dequeueReusableCellWithIdentifier is pretty useful, but after seeing many source codes using this, I'm wondering if my usage of this function is the right one. Here is what people usually do:

        UITableViewCell* cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
        if (cell == nil) {
            cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:@"Cell"];
            // Add elements to the cell
        }
        return cell;

    And here is the way I did it:

        NSString* identifier = [NSString stringWithFormat:@"Cell %d", indexPath.row]; // The cell row
        UITableViewCell* cell = [tableView dequeueReusableCellWithIdentifier:identifier];
        if (cell != nil)
            return cell;
        cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:identifier];
        // Add elements to the cell
        return cell;

    The difference is that people use the same identifier for every cell, so dequeuing one only avoids allocating a new one. For me, the point of queuing was to give each cell a unique identifier, so when the app asks for a cell it already displayed, neither allocation nor element adding has to be done.

    In the end I don't know which is best: the "common" method caps the table's memory usage at the exact number of cells it displays, whilst the method I use seems to favor speed as it keeps all calculated cells, but can cause large memory consumption (unless there's an inner limit to the queue). Am I wrong to use it this way? Or is it just up to the developer, depending on his needs?
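
    For contrast, a hedged sketch of the bounded-memory pattern (my addition; configureCell:forRow: is a hypothetical helper): one shared identifier, with the row's data re-applied on every call, so memory is capped by the number of visible cells:

        - (UITableViewCell *)tableView:(UITableView *)tableView
                 cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *kCellId = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:kCellId];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero
                                               reuseIdentifier:kCellId] autorelease];
                // One-time subview setup goes here.
            }
            // Per-row configuration runs on every call, including for reused cells.
            [self configureCell:cell forRow:indexPath.row]; // hypothetical helper
            return cell;
        }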

    Read the article

  • Jetty 6 - QueuedThreadPool versus ThreadPool

    - by Walter White
    Hi all, I am using Jetty 6 and was wondering when the QueuedThreadPool should be used over the ThreadPool. By default, Jetty 6 comes configured with the QueuedThreadPool:

        <New class="org.mortbay.thread.QueuedThreadPool">
          <Set name="minThreads">10</Set>
          <Set name="maxThreads">200</Set>
          <Set name="lowThreads">20</Set>
          <Set name="SpawnOrShrinkAt">2</Set>
        </New>

    My server has Java 6 installed, so I was thinking that I should use the ThreadPool:

        <New class="org.mortbay.thread.concurrent.ThreadPool">
          <Set name="corePoolSize">50</Set>
          <Set name="maximumPoolSize">50</Set>
        </New>

    Thanks, Walter

    Read the article

  • Speed improvements for Perl's chameneos-redux script in the Computer Language Benchmarks Game

    - by Robert P
    Ever looked at the Computer Language Benchmarks Game (formerly known as the Great Language Shootout)? Perl has some pretty healthy competition there at the moment. It also occurs to me that there are probably some places where Perl's scores could be improved. The biggest one right now is the chameneos-redux script: the Perl version runs the worst of any language, 1,626 times slower than the C baseline solution! There are some restrictions on how the programs can be made and optimized, and there is Perl's interpreted-runtime penalty, but 1,626 times? There's got to be something that can get the runtime of this program way down. Taking a look at the source code and the challenge, what do you think could be done to reduce this runtime?

    Read the article

  • Timing the Linux Kernel boot-time optimisation

    - by CVS-2600Hertz-wordpress-com
    I am trying to optimise the boot-up time of Linux on an embedded device (not a PC). Currently, to profile the boot-up sequence, I have enabled timing info on printk logs. Is this the best way? If not, how do I profile the boot-up sequence (with timing) with minimum overhead?

    PS: I have a terminal (on the device) over a serial connection, and I use TeraTerm on Windows XP to access it.
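
    A hedged sketch of one low-overhead approach (my addition, assuming a reasonably recent 2.6 kernel): keep the printk timestamps, add initcall_debug to the kernel command line, and post-process the log with the bootgraph script that ships in the kernel tree:

        # Kernel command line additions (set in the bootloader):
        #   printk.time=1    timestamps every printk line
        #   initcall_debug   logs entry/exit and duration of every initcall
        # On the device after boot:
        dmesg > boot.log
        # On the host, inside the kernel source tree:
        perl scripts/bootgraph.pl boot.log > boot.svg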

    Read the article

  • Problem with closing excel by c#

    - by phenevo
    Hi, I've got a unit test with this code:

        Excel.Application objExcel = new Excel.Application();
        Excel.Workbook objWorkbook = (Excel.Workbook)(objExcel.Workbooks._Open(
            @"D:\Selenium\wszystkieSeba2.xls", true, false,
            Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value,
            Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value));
        Excel.Worksheet ws = (Excel.Worksheet)objWorkbook.Sheets[1];
        Excel.Range r = ws.get_Range("A1", "I2575");

        DateTime dt = DateTime.Now;
        Excel.Range cellData = null;
        Excel.Range cellKwota = null;
        string cellValueData = null;
        string cellValueKwota = null;
        double dataTransakcji = 0;
        string dzien = null;
        string miesiac = null;
        int nrOperacji = 1;
        int wierszPoczatkowy = 11;
        int pozostalo = 526;

        cellData = r.Cells[wierszPoczatkowy, 1] as Excel.Range;
        cellKwota = r.Cells[wierszPoczatkowy, 6] as Excel.Range;
        if ((cellData != null) && (cellKwota != null))
        {
            object valData = cellData.Value2;
            object valKwota = cellKwota.Value2;
            if ((valData != null) && (valKwota != null))
            {
                cellValueData = valData.ToString();
                dataTransakcji = Convert.ToDouble(cellValueData);
                Console.WriteLine("data transakcji to: " + dataTransakcji);
                dt = DateTime.FromOADate((double)dataTransakcji);
                dzien = dt.Day.ToString();
                miesiac = dt.Month.ToString();
                cellValueKwota = valKwota.ToString();
            }
        }
        r.Cells[wierszPoczatkowy, 8] = "ok";
        objWorkbook.Save();
        objWorkbook.Close(true, @"C:\Documents and Settings\Administrator\Pulpit\Selenium\wszystkieSeba2.xls", true);
        objExcel.Quit();

    Why is Excel still among the running processes after the test finishes (it doesn't close)? And is there something I can improve for better performance?
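
    A hedged sketch of the usual COM cleanup (my addition, using only standard .NET interop calls): every Workbook/Worksheet/Range touched here holds a runtime callable wrapper that keeps EXCEL.EXE alive until it is released. Releasing the wrappers explicitly after Quit, then forcing finalization, normally lets the process exit:

        // Requires: using System.Runtime.InteropServices;
        // Release every COM object that was touched, innermost first.
        // (Guard against nulls in case a cell lookup failed.)
        if (cellData != null) Marshal.ReleaseComObject(cellData);
        if (cellKwota != null) Marshal.ReleaseComObject(cellKwota);
        Marshal.ReleaseComObject(r);
        Marshal.ReleaseComObject(ws);
        Marshal.ReleaseComObject(objWorkbook);
        Marshal.ReleaseComObject(objExcel);

        // Collect twice so RCW finalizers run and their references are dropped.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        GC.WaitForPendingFinalizers();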

    Read the article

  • YUI Compressor and .NET Apps

    - by objektivs
    I want to use the original YUI Compressor as part of a typical MS build process (Visual Studio 2008, MSBuild). Does anyone have any guidance or thoughts on this? For example, good ways of incorporating it into the project, what to do with existing CSS and JS references, and the like. I am happy to hear about the benefits of YUI Compressor .NET and alternatives, but I'm more interested in the use of the original. Thanks, Scott
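
    A hedged sketch (my addition; the jar version and file paths are assumptions, while the --type and -o switches are standard YUI Compressor usage): a plain MSBuild target can shell out to the jar, invoked with msbuild /t:CompressStatics or wired into BuildDependsOn:

        <Target Name="CompressStatics">
          <!-- Hypothetical paths; adjust to the project layout. -->
          <Exec Command="java -jar tools\yuicompressor-2.4.2.jar --type js -o Scripts\site.min.js Scripts\site.js" />
          <Exec Command="java -jar tools\yuicompressor-2.4.2.jar --type css -o Content\site.min.css Content\site.css" />
        </Target>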

    Read the article

  • Optimizing website - minification, sprites, etc...

    - by nivlam
    I'm looking at the product Aptimize Website Accelerator, which is an ISAPI filter that will concatenate files, minify CSS/JavaScript, and more. Does anyone have experience with this product, or any other "all-in-one" solutions? I'm interested in knowing whether something like this would be good long-term, or whether manually setting up all the components (integrating YUI Compressor into the build process, setting up gzip compression, tweaking expiration headers, etc.) would be more beneficial. An all-in-one solution like this looks very tempting, as it could save a lot of time if our website is "less than optimal". But how efficient are these products? Would setting up the components manually generate better results? Or is the gap between the all-in-one solution and a manual setup so small that it's negligible?

    Read the article

  • Java Integer: what is faster comparison or subtraction?

    - by Vladimir
    I've found that the java.lang.Integer implementation of the compareTo method looks as follows:

        public int compareTo(Integer anotherInteger) {
            int thisVal = this.value;
            int anotherVal = anotherInteger.value;
            return (thisVal<anotherVal ? -1 : (thisVal==anotherVal ? 0 : 1));
        }

    The question is why use comparison instead of subtraction:

        return thisVal - anotherVal;
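
    A worked illustration (my addition, not part of the original question) of why the subtraction trick can go wrong: the difference of two ints can overflow, flipping the sign of the result.

        // The true difference (about -4 billion) does not fit in an int,
        // so the subtraction wraps around to a positive number.
        int thisVal = -2000000000;
        int anotherVal = 2000000000;
        System.out.println(thisVal - anotherVal); // prints 294967296, implying thisVal > anotherVal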

    Read the article

  • optimize query: get all votes from a user's items

    - by Toni Michel Caubet
    Hi there! I did it my way because I'm very bad at getting results from two tables... Basically, first I get all the item ids that correspond to the user, and then I calculate the ratings of each item. But there are two different types of item object, so I do this two times. Let me show you:

        function votos_usuario($id){
            $previa = "SELECT id FROM preguntas WHERE id_usuario = '$id'";
            $r_previo = mysql_query($previa);
            $ids_p = '0, ';
            while($items_previos = mysql_fetch_array($r_previo)){
                $ids_p .= $items_previos['id'].", ";
                //echo "ids pregunta usuario: ".$items_previos['id']."<br>";
            }
            $ids = substr($ids_p,0,-2);
            //echo $ids;
            $consulta = "SELECT valor FROM votos_pregunta WHERE id_pregunta IN ( $ids )";
            //echo $consulta;
            $resultado = mysql_query($consulta);
            $votos_preguntas = 0;
            while($voto = mysql_fetch_array($resultado)){
                $votos_preguntas = $votos_preguntas + $voto['valor'];
            }

            $previa_r = "SELECT id FROM recetas WHERE id_usuario = '$id'";
            $r_previo_r = mysql_query($previa_r);
            $ids_r = '0, ';
            while($items_previos_r = mysql_fetch_array($r_previo_r)){
                $ids_r .= $items_previos_r['id'].", ";
            }
            $ids = substr($ids_r,0,-2);
            $consulta_b = "SELECT valor FROM votos_receta WHERE id_receta IN ( $ids )";
            $resultado_b = mysql_query($consulta_b);
            $votos_recetas = 0;
            while($voto_r = mysql_fetch_array($resultado_b)){
                $votos_recetas = $votos_recetas + $voto_r['valor'];
            }

            $total = $votos_preguntas + $votos_recetas;
            return $total;
        }

    As you can see this is too much... O(n^2). Feel like thinking? Thanks!
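
    A hedged alternative (my addition; the table and column names come from the post, everything else is an assumption): each pair of queries can collapse into one aggregate JOIN, and the two totals combine with a UNION ALL, so the whole function becomes a single round trip:

        SELECT COALESCE(SUM(total), 0) FROM (
          SELECT SUM(v.valor) AS total
            FROM votos_pregunta v
            JOIN preguntas p ON p.id = v.id_pregunta
           WHERE p.id_usuario = 123          -- hypothetical user id
          UNION ALL
          SELECT SUM(v.valor)
            FROM votos_receta v
            JOIN recetas r ON r.id = v.id_receta
           WHERE r.id_usuario = 123
        ) AS totals;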

    Read the article

  • Retrieve the last 2 posts for each category

    - by Savageman
    Hello, let's say I have 2 tables: blog_posts and categories. Each blog post belongs to only ONE category, so there is basically a foreign key between the 2 tables here. I would like to retrieve the 2 latest posts from each category; is it possible to achieve this in a single request? GROUP BY would group everything and leave me with only one row per category, but I want 2 of them.

    It would be easy to perform 1 + N queries (N = number of categories): first retrieve the categories, then retrieve 2 posts from each category. I believe it would also be quite easy to perform M queries (M = number of posts I want from each category): the first query selects the first post for each category (with a GROUP BY), the second query retrieves the second post for each category, and so on.

    I'm just wondering if someone has a better solution for this. I don't really mind doing 1 + N queries for that, but for curiosity and general SQL knowledge, it would be appreciated! Thanks in advance to whoever can help me with this.
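
    A hedged single-query sketch (my addition; blog_posts.category_id and an AUTO_INCREMENT id that grows with post date are assumptions): the classic greatest-n-per-group self-join keeps a row only when fewer than 2 newer posts share its category:

        SELECT p.*
          FROM blog_posts p
         WHERE (SELECT COUNT(*)
                  FROM blog_posts newer
                 WHERE newer.category_id = p.category_id
                   AND newer.id > p.id) < 2
         ORDER BY p.category_id, p.id DESC;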

    Read the article

  • The speed of Ruby and Java.

    - by Simon
    In every benchmark I've found on the web, it seems that Ruby is slow, much slower than Java. The Ruby folks just state that it doesn't matter. Could you give me an example where the speed of Ruby on Rails (and Ruby itself) really doesn't matter?

    Read the article

  • Optimizing Oracle query

    - by Omnipresent
        SELECT MAX(verification_id)
          FROM VERIFICATION_TABLE
         WHERE head = 687422
           AND mbr = 23102
           AND RTRIM(LTRIM(lname)) = '.iq bzw'
           AND TO_CHAR(dob,'MM/DD/YYYY') = '08/10/2004'
           AND system_code = 'M';

    This query is taking 153 seconds to run. There are millions of rows in VERIFICATION_TABLE. I think the query is taking long because of the functions in the WHERE clause. However, I need to do LTRIM/RTRIM on the columns, and the date has to be matched in MM/DD/YYYY format. How can I optimize this query?
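
    A hedged rewrite (my addition; assumes dob is a DATE column, which the TO_CHAR call suggests): moving the functions off the columns lets indexes work. The date predicate becomes a range, and the trim (TRIM is equivalent to RTRIM(LTRIM(...)) for spaces) can be backed by a function-based index:

        SELECT MAX(verification_id)
          FROM VERIFICATION_TABLE
         WHERE head = 687422
           AND mbr = 23102
           AND TRIM(lname) = '.iq bzw'
           AND dob >= DATE '2004-08-10'
           AND dob <  DATE '2004-08-11'
           AND system_code = 'M';

        -- Function-based index to support the TRIM(lname) predicate (index name is hypothetical):
        CREATE INDEX ix_verif_lname_trim ON VERIFICATION_TABLE (TRIM(lname));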

    Read the article

  • How to outperform this regex replacement?

    - by spender
    After considerable measurement, I have identified a hotspot in one of our Windows services that I'd like to optimize. We are processing strings that may have multiple consecutive spaces in them, and we'd like to reduce them to single spaces. We use a static compiled regex for this task:

        private static readonly Regex regex_select_all_multiple_whitespace_chars =
            new Regex(@"\s+", RegexOptions.Compiled);

    and then use it as follows:

        var cleanString = regex_select_all_multiple_whitespace_chars.Replace(dirtyString.Trim(), " ");

    This line is being invoked several million times and is proving to be fairly intensive. I've tried to write something better, but I'm stumped. Given the fairly modest processing requirements of the regex, surely there's something faster. Could unsafe processing with pointers speed things up further?
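
    A hedged sketch (my addition) of a single pass over the string without the regex engine; char.IsWhiteSpace matches a very similar, though not identical, set to \s, and runs collapse to one space:

        static string CollapseWhitespace(string input)
        {
            var sb = new System.Text.StringBuilder(input.Length);
            bool inRun = false;
            foreach (char c in input.Trim())
            {
                if (char.IsWhiteSpace(c))
                {
                    inRun = true;              // remember the run; emit nothing yet
                }
                else
                {
                    if (inRun) sb.Append(' '); // close the run with a single space
                    inRun = false;
                    sb.Append(c);
                }
            }
            return sb.ToString();
        }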

    Read the article

  • Is there an offline version of Smush.it available?

    - by Jonathon Watney
    Sometimes I use Smush.it via the YSlow Firefox plugin to non-destructively reduce the file size of JPG images. Is there an offline version available that runs on Windows? And if not, is there an alternative? The reason I'd like an offline version is that I'd like to optimize images before I deploy them. Currently Smush.it accepts only public-facing URLs for images or a web page (via YSlow) and can't access my internal network. That means I have to deploy, optimize, replace images, and deploy again. I'd really like to deploy the optimized images on the first deploy. Update: here's a very similar question.
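
    A hedged pointer (my addition, not something the question mentions): Smush.it's JPEG pass is reportedly lossless recompression of the kind jpegtran from the libjpeg tools performs, and jpegtran runs offline (Windows builds exist):

        jpegtran -copy none -optimize -outfile photo-smushed.jpg photo.jpg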

    Read the article

  • Strange C++ performance difference?

    - by STingRaySC
    I just stumbled upon a change that seems to have counterintuitive performance ramifications. Can anyone provide a possible explanation for this behavior? Original code:

        for (int i = 0; i < ct; ++i) {
            // do some stuff...

            int iFreq = getFreq(i);
            double dFreq = iFreq;

            if (iFreq != 0) {
                // do some stuff with iFreq...
                // do some calculations with dFreq...
            }
        }

    While cleaning up this code during a "performance pass," I decided to move the definition of dFreq inside the if block, as it was only used inside the if. There are several calculations involving dFreq, so I didn't eliminate it entirely, as it does save the cost of multiple run-time conversions from int to double. I expected no performance difference, or at most a negligible improvement. However, the performance decreased by nearly 10%. I have measured this many times, and this is indeed the only change I've made. The code snippet shown above executes inside a couple of other loops. I get very consistent timings across runs and can definitely confirm that the change I'm describing decreases performance by ~10%. I would expect performance to increase because the int-to-double conversion would only occur when iFreq != 0. Changed code:

        for (int i = 0; i < ct; ++i) {
            // do some stuff...

            int iFreq = getFreq(i);

            if (iFreq != 0) {
                // do some stuff with iFreq...
                double dFreq = iFreq;
                // do some stuff with dFreq...
            }
        }

    Can anyone explain this? I am using VC++ 9.0 with /O2. I just want to understand what I'm not accounting for here.

    Read the article

  • What's the difference between !col and col=false in MySQL?

    - by Mask
    The two statements have totally different performance:

        mysql> explain select * from jobs where createIndexed=false;
        +----+-------------+-------+------+----------------------+----------------------+---------+-------+------+-------+
        | id | select_type | table | type | possible_keys        | key                  | key_len | ref   | rows | Extra |
        +----+-------------+-------+------+----------------------+----------------------+---------+-------+------+-------+
        |  1 | SIMPLE      | jobs  | ref  | i_jobs_createIndexed | i_jobs_createIndexed | 1       | const |    1 |       |
        +----+-------------+-------+------+----------------------+----------------------+---------+-------+------+-------+
        1 row in set (0.01 sec)

        mysql> explain select * from jobs where !createIndexed;
        +----+-------------+-------+------+---------------+------+---------+------+-------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+-------+-------------+
        |  1 | SIMPLE      | jobs  | ALL  | NULL          | NULL | NULL    | NULL | 17996 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+-------+-------------+

    Column definition and related index, for aiding analysis:

        createIndexed tinyint(1) NOT NULL DEFAULT 0,
        create index i_jobs_createIndexed on jobs(createIndexed);
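
    A hedged observation (my addition, inferred from the EXPLAIN output above rather than asserted from documentation): !createIndexed is evaluated as an arbitrary expression per row, while a comparison against a constant is a sargable predicate the optimizer can serve from the index, so the workaround is to phrase the negation as a comparison:

        -- Equivalent sargable form of "!createIndexed"; the first EXPLAIN above
        -- shows this shape using i_jobs_createIndexed.
        SELECT * FROM jobs WHERE createIndexed = false;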

    Read the article

  • Software to Tune/Calibrate Properties for Heuristic Algorithms

    - by Karussell
    Today I read that there is a program called WinCalibra (scroll a bit down) which can take a text file with properties as input. This program can then optimize the input properties based on the output values of your algorithm. See this paper or the user documentation for more information (see link above; sadly the doc is a zipped exe). Do you know other software that can do the same and runs under Linux? (Preferably open source.) EDIT: Since I need this for a Java application, I will now invest my research in Java libraries like JGAP. Other ideas and links would be appreciated!

    Read the article

  • Benchmarking a particular method in Objective-C

    - by Jasconius
    I have a critical method in an Objective-C application that I need to optimize as much as possible. I first need to take some easy benchmarks on this one single method so I can compare my progress as I optimize. What is the easiest way to track the execution time of a given method in, say, milliseconds, and print that to the console?
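
    A hedged sketch (my addition; criticalMethod is a hypothetical stand-in for the method under test): CFAbsoluteTimeGetCurrent is a standard Core Foundation call with sub-millisecond resolution, enough to bracket the call and log milliseconds:

        #import <CoreFoundation/CoreFoundation.h>

        CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
        [self criticalMethod];  // the method being benchmarked (hypothetical name)
        CFAbsoluteTime elapsed = CFAbsoluteTimeGetCurrent() - start;
        NSLog(@"criticalMethod took %.3f ms", elapsed * 1000.0);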

    Read the article

  • Creating thousands of records in Rails

    - by willCosgrove
    Let me set the stage: my application deals with gift cards. When we create cards they have to have a unique string that the user can use to redeem them with. So when someone orders our gift cards, like a retailer, we need to make a lot of new card objects and store them in the DB. With that in mind, I'm trying to see how quickly I can have my application generate 100,000 cards. Database expert I am not, so I need someone to explain this little phenomenon: when I create 1,000 cards, it takes 5 seconds. When I create 100,000 cards it should take 500 seconds, right?

    Now I know what you're wanting to see: the card creation method I'm using, because the first assumption would be that it's getting slower because it's checking the uniqueness of a bunch of cards, more as it goes along. But I can show you my rake task:

        desc "Creates cards for a retailer"
        task :order_cards, [:number_of_cards, :value, :retailer_name] => :environment do |t, args|
          t = Time.now
          puts "Searching for retailer"
          @retailer = Retailer.find_by_name(args[:retailer_name])
          puts "Retailer found"
          puts "Generating codes"
          value = args[:value].to_i
          number_of_cards = args[:number_of_cards].to_i
          codes = []
          top_off_codes(codes, number_of_cards)
          while codes != codes.uniq
            codes.uniq!
            top_off_codes(codes, number_of_cards)
          end
          stored_codes = Card.all.collect do |c|
            c.code
          end
          while codes != (codes - stored_codes)
            codes -= stored_codes
            top_off_codes(codes, number_of_cards)
          end
          puts "Codes are unique and generated"
          puts "Creating bundle"
          @bundle = @retailer.bundles.create!(:value => value)
          puts "Bundle created"
          puts "Creating cards"
          @bundle.transaction do
            codes.each do |code|
              @bundle.cards.create!(:code => code)
            end
          end
          puts "Cards generated in #{Time.now - t}s"
        end

        def top_off_codes(codes, intended_number)
          (intended_number - codes.size).times do
            codes << ReadableRandom.get(CODE_LENGTH)
          end
        end

    I'm using a gem called readable_random for the unique code. If you read through all of that code, you'll see that it does all of its uniqueness testing before it ever starts creating cards. It also writes status updates to the screen while it's running, and it always sits for a while at "creating cards", while it flies through the uniqueness tests.

    So my question to the Stack Overflow community is: why is my database slowing down as I add more cards? Why is this not a linear function in regards to time per card? I'm sure the answer is simple and I'm just a moron who knows nothing about data storage. And if anyone has any suggestions, how would you optimize this method, and how fast do you think you could get it to create 100,000 cards? (When I plotted my times on a graph and did a quick curve fit to get my line formula, I calculated how long it would take to create 100,000 cards with my current code, and it says 5.5 hours. That may be completely wrong, I'm not sure. But if it stays on the curve I fitted, it would be right around there.)
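
    A hedged sketch (my addition; the cards table's column names are assumptions based on the associations above): most of the per-card cost is one INSERT statement, plus index maintenance, per create! call; batching rows into multi-row INSERTs usually flattens the curve:

        # Hypothetical bulk insert: one statement per 1,000 cards instead of 1,000 statements.
        # Note: this bypasses ActiveRecord validations, callbacks, and timestamps.
        conn = ActiveRecord::Base.connection
        codes.each_slice(1000) do |batch|
          values = batch.map { |code| "(#{@bundle.id}, #{conn.quote(code)})" }.join(", ")
          conn.execute("INSERT INTO cards (bundle_id, code) VALUES #{values}")
        end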

    Read the article

  • What are good memory management techniques in Flash/as3

    - by Parris
    Hello! So I am pretty familiar with memory management in Java, C and C++; however, what constructs does Flash offer for memory management? I assume Flash has a sort of virtual machine like Java, and I have been assuming that things get garbage collected when they are set to null, but I am not sure if this is actually the case. Also, is there a way to force garbage collection in Flash? Any other tips? Thanks
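
    A hedged sketch (my addition; dispatcher and onComplete are hypothetical names): the AVM2 garbage collector reclaims objects that are unreachable, not merely nulled, so the practical levers are removing (or weakly registering) event listeners and dropping references; System.gc() exists but only works in the debug player and AIR, as a test aid:

        import flash.events.Event;
        import flash.system.System;

        // Weak reference (5th argument) lets the listener's owner be collected.
        dispatcher.addEventListener(Event.COMPLETE, onComplete, false, 0, true);

        // When done with an object: unhook and drop references so the GC can reclaim it.
        dispatcher.removeEventListener(Event.COMPLETE, onComplete);
        dispatcher = null;

        System.gc(); // debug player / AIR only; a diagnostic, not a fix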

    Read the article
