Search Results

Search found 7517 results on 301 pages for 'fast debugger'.


  • How do I correctly add brackets to this code?

    - by Mohammad
    This code removes whitespace (FYI: it's credited as being very fast):

        function wSpaceTrim(s){
            var start = -1, end = s.length;
            while (s.charCodeAt(--end) < 33);   // here
            while (s.charCodeAt(++start) < 33); // here also
            return s.slice(start, end + 1);
        }

    The while loops don't have brackets. How would I correctly add brackets to this code?

        while (iMean) {
            like this;
        }

    Thank you so much!
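
    For reference, a braced version might look like this (a sketch, not from the original post). Note that the loop bodies are intentionally empty, so the braces enclose nothing:

        function wSpaceTrim(s) {
            var start = -1, end = s.length;
            // Walk end backwards past trailing whitespace/control chars (char codes < 33).
            while (s.charCodeAt(--end) < 33) {
            }
            // Walk start forwards past leading whitespace/control chars.
            while (s.charCodeAt(++start) < 33) {
            }
            return s.slice(start, end + 1);
        }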

    Read the article

  • jQuery: change a function into a plugin

    - by jaap Klevering
    Help would be greatly appreciated. What is the correct syntax to turn this function into a plugin? I tried, but can't make it work.

        $(document).ready(function() {
            $('ul.tabNav a').click(function() {
                var curChildIndex = $(this).parent().prevAll().length + 1;
                $(this).parent().parent().children('.current').removeClass('current');
                $(this).parent().addClass('current');
                $(this).parent().parent().next('.tabContainer').children('.current').slideUp('fast', function() {
                    $(this).removeClass('current');
                    $(this).parent().children('div:nth-child(' + curChildIndex + ')').slideDown('normal', function() {
                        $(this).addClass('current');
                    });
                });
                return true;
            });
        });
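
    A minimal jQuery plugin wrapper, as a sketch (the plugin name tabify is made up; the click logic is the original, unchanged):

        (function($) {
            $.fn.tabify = function() {
                // Return `this` so the plugin stays chainable.
                return this.each(function() {
                    $(this).find('a').click(function() {
                        var curChildIndex = $(this).parent().prevAll().length + 1;
                        $(this).parent().parent().children('.current').removeClass('current');
                        $(this).parent().addClass('current');
                        $(this).parent().parent().next('.tabContainer').children('.current').slideUp('fast', function() {
                            $(this).removeClass('current');
                            $(this).parent().children('div:nth-child(' + curChildIndex + ')').slideDown('normal', function() {
                                $(this).addClass('current');
                            });
                        });
                        return true;
                    });
                });
            };
        })(jQuery);

        // Usage, in place of the original document-ready handler body:
        $(document).ready(function() {
            $('ul.tabNav').tabify();
        });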

    Read the article

  • Need help with a nested loop of queries in PHP and MySQL?

    - by mysqllearner
    Hi, I am trying to do this:

        <?php
        $good_customer = 0;
        $q = mysql_query("SELECT user FROM users WHERE activated = '1'"); // this gives me about 40k users

        while ($r = mysql_fetch_assoc($q)) {
            $money_spent = 0;
            $user = $r['user'];

            // Do queries on another 20 tables
            for ($i = 1; $i <= 20; $i++) {
                $tbl_name = 'data' . $i;
                $q2 = mysql_query("SELECT money_spent FROM $tbl_name WHERE user = '{$user}'");
                while ($r2 = mysql_fetch_assoc($q2)) {
                    $money_spent += $r2['money_spent'];
                }
            }
            // Check the total once per user, after all 20 tables are summed.
            if ($money_spent > 1000000) {
                $good_customer += 1;
            }
        }

    This is just an example. I am testing on localhost; for a single user it returns very fast, but when I try 1000 it takes forever, never mind 40k users. Any way to optimise/improve this code? EDIT: By the way, each of the other 20 tables has ~20-40k records.
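
    A common optimisation, sketched under the assumption that the 20 data tables share the same schema: push the summing into SQL with one aggregated UNION ALL query instead of 40k x 20 round trips.

        -- Sum each user's spending across all 20 tables in one pass and count
        -- "good customers" server-side. Table/column names follow the post.
        SELECT COUNT(*) AS good_customers
        FROM (
            SELECT t.user
            FROM (
                SELECT user, money_spent FROM data1
                UNION ALL
                SELECT user, money_spent FROM data2
                -- ... repeat for data3 through data20
                UNION ALL
                SELECT user, money_spent FROM data20
            ) AS t
            INNER JOIN users u ON u.user = t.user AND u.activated = '1'
            GROUP BY t.user
            HAVING SUM(t.money_spent) > 1000000
        ) AS good;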

    Read the article

  • How to make a program like Fraps

    - by blood
    I want to make a program that will capture video. What is the best way to capture the video? I know C++ and I'm learning assembly, and I found in my assembly book that I can get data from the video card, I think. Would that be the best way? I know Fraps hooks into programs, but I want my program to capture the full screen. So I want something fast, with low memory usage if I can, and something I can use on other computers without them having the same hardware.
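
    For reference, a minimal full-screen capture sketch on Windows using plain GDI (BitBlt). It is slower than what Fraps does (Fraps hooks Direct3D/OpenGL inside the target process), but it is simple and hardware-independent:

        // A sketch: grab the whole screen into a bitmap with GDI.
        #include <windows.h>

        HBITMAP CaptureScreen() {
            int w = GetSystemMetrics(SM_CXSCREEN);
            int h = GetSystemMetrics(SM_CYSCREEN);

            HDC screenDC = GetDC(NULL);                  // device context for the whole screen
            HDC memDC    = CreateCompatibleDC(screenDC); // off-screen DC to copy into
            HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);

            HGDIOBJ old = SelectObject(memDC, bmp);
            BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY); // copy one frame
            SelectObject(memDC, old);

            DeleteDC(memDC);
            ReleaseDC(NULL, screenDC);
            return bmp; // caller owns the bitmap (DeleteObject when done)
        }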

    Read the article

  • Packing fields of a class into a byte array in C#

    - by alex
    Hi all: I am trying to find a fast way to convert C# classes into byte arrays. I thought of serializing the class directly to a byte array using an example I found:

        // Convert an object to a byte array
        private byte[] ObjectToByteArray(Object obj)
        {
            if (obj == null)
                return null;
            BinaryFormatter bf = new BinaryFormatter();
            MemoryStream ms = new MemoryStream();
            bf.Serialize(ms, obj);
            return ms.ToArray();
        }

    But the byte array I got contains some other information that is not related to the fields in the class. I guess it is also converting the properties of the class. Is there a way to serialize only the fields of the class to a byte array? Thanks
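
    A fields-only alternative, sketched with the interop marshaller; it assumes the type is a struct with [StructLayout(LayoutKind.Sequential)] and only blittable fields (no object references):

        using System;
        using System.Runtime.InteropServices;

        static class Packer
        {
            // Copies exactly the declared fields, laid out sequentially;
            // no BinaryFormatter type metadata is included.
            public static byte[] ToBytes<T>(T value) where T : struct
            {
                int size = Marshal.SizeOf(typeof(T));
                byte[] buffer = new byte[size];
                IntPtr ptr = Marshal.AllocHGlobal(size);
                try
                {
                    Marshal.StructureToPtr(value, ptr, false); // fields -> unmanaged memory
                    Marshal.Copy(ptr, buffer, 0, size);        // unmanaged memory -> byte[]
                }
                finally
                {
                    Marshal.FreeHGlobal(ptr);
                }
                return buffer;
            }
        }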

    Read the article

  • gcc options for fastest code

    - by rwallace
    I'm distributing a C++ program with a makefile for the Unix version, and I'm wondering what compiler options I should use to get the fastest possible code (it falls into the category of programs that can use all the computing power they can get and still come back for more), given that I don't know in advance what hardware, operating system or gcc version the user will have, and I want above all else to make sure it at least works correctly on every major Unix-like operating system. Thus far I have g++ -O3 -Wno-write-strings; are there any other options I should add? On Windows, the Microsoft compiler has options for things like the fast calling convention and link-time code generation that are worth using; are there any equivalents on gcc? (I'm assuming it will default to 64-bit on a 64-bit platform; please correct me if that's not the case.)
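
    A sketch of flags that commonly come up, hedged because the gains vary by program and gcc version:

        # Makefile fragment (variable names assumed): speed-oriented gcc flags.
        CXXFLAGS = -O3 -Wno-write-strings \
                   -fomit-frame-pointer   # frees a register; fine when not debugging

        # Closest gcc analogue of MSVC link-time code generation (needs gcc 4.5+):
        # CXXFLAGS += -flto
        # LDFLAGS  += -flto

        # Tunes for the build machine only; not safe for binaries you distribute:
        # CXXFLAGS += -march=native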

    Read the article

  • How to optimize an Oracle query that has TO_CHAR in the WHERE clause for a date

    - by panorama12
    I have a table that contains about 49403459 records. I want to query the table on a date range, say 04/10/2010 to 04/10/2010. However, the dates are stored in the table in a format like 10-APR-10 10.15.06.000000 AM (a timestamp). As a result, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE TO_CHAR(create_date, 'MM/DD/YYYY') >= '04/10/2010'
          AND TO_CHAR(create_date, 'MM/DD/YYYY') <= '04/10/2010'

    I get 529 rows, but in 255.59 seconds! Which is, I guess, because I am doing TO_CHAR on EACH record. However, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE create_date >= TO_DATE('04/10/2010', 'MM/DD/YYYY')
          AND create_date <= TO_DATE('04/10/2010', 'MM/DD/YYYY')

    then I get 0 results in 0.14 seconds. How can I make this query fast and still get valid (529) results? At this point I cannot change indexes. Right now I think the index is created on the create_date column.
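
    For what it's worth, the second query returns 0 rows because both bounds collapse to midnight while the stored values carry a time of day. A half-open range over the whole day keeps create_date bare (so the index stays usable) and still matches those rows; a sketch:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE create_date >= TO_DATE('04/10/2010', 'MM/DD/YYYY')
          AND create_date <  TO_DATE('04/10/2010', 'MM/DD/YYYY') + 1  -- next midnight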

    Read the article

  • Retrieving data from a database: retrieve only when needed, or get everything?

    - by RHaguiuda
    I have a simple application to store contacts. This application uses a simple relational database to store contact information, like name, address and other data fields. While designing it, a question came to my mind: when designing programs that use databases, should I retrieve all database records and store them in objects in my program, so I have very fast performance, or should I gather data only when required? Of course, retrieving all the data is only an option if there isn't too much of it, but do you use this approach when you can be sure the database will be small (< 300 records, for example)? I once designed a similar application that fetches data only when needed, but it was slow (using an Access database). Thanks for all help.

    Read the article

  • C: Pointers to any type?

    - by dragme
    I hear that C isn't so type-safe, and I think I could use that as an advantage for my current project. I'm designing an interpreter with the goal for the VM to be extremely fast, much faster than Ruby and Python, for example. Now I know that premature optimization "is the root of all evil", but this is rather a conceptual problem. I have to use some sort of struct to represent all values in my language (from number over string to list and map). Would the following be possible?

        struct Value {
            ValueType type;
            void* value;
        };

    I would store the actual values elsewhere, e.g. a separate array for strings and integers; value would then point to some member in this table. I would always know the type of the value via the type variable, so there wouldn't be any problems with type errors. Now: is this even possible in terms of syntax and typing?
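
    Yes, this is the classic tagged-value representation. A self-contained sketch (the enum values and the example data are made up for illustration):

        #include <stdio.h>

        typedef enum { VT_INT, VT_STRING } ValueType;

        struct Value {
            ValueType type;
            void *value;  /* points into a type-specific table elsewhere */
        };

        /* The tag tells us how to reinterpret the void pointer. */
        static void print_value(const struct Value *v) {
            switch (v->type) {
            case VT_INT:    printf("%d\n", *(const int *)v->value); break;
            case VT_STRING: printf("%s\n", (const char *)v->value); break;
            }
        }

        int main(void) {
            int n = 42;
            struct Value v = { VT_INT, &n };
            print_value(&v);  /* prints 42 */
            return 0;
        }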

    Read the article

  • PHP / JavaScript live chat using too much bandwidth

    - by David
    So I am learning about JavaScript, so I am making a live chat system with PHP and JavaScript. I have it so the JavaScript refreshes the log (each message gets logged in a file on the server), and it refreshes every second. I'm using Firebug to monitor the resource usage, and I see under the Net tab each time it's updated, and the bytes add up really fast. I know I can change it to update less often, but is there a way that when the user on the other end sends a message, it gets sent to the server, and then an alert gets sent to me saying that the chat log needs to update? That way it only updates when the log is updated. Let me know, thanks.
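
    What's being described is essentially long polling: the server holds each request open until the log actually changes, so the client only downloads when there is news. A rough PHP sketch (the file name and the since parameter are made up):

        <?php
        // Block until chatlog.txt is newer than the client's last-seen mtime.
        $log = 'chatlog.txt';
        $lastMtime = isset($_GET['since']) ? (int) $_GET['since'] : 0;

        $deadline = time() + 25; // give up before typical 30s request timeouts
        clearstatcache();
        while (filemtime($log) <= $lastMtime && time() < $deadline) {
            usleep(500000); // check the file twice a second, server-side
            clearstatcache();
        }

        header('Content-Type: application/json');
        echo json_encode(array(
            'mtime' => filemtime($log),
            'log'   => filemtime($log) > $lastMtime ? file_get_contents($log) : null,
        ));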

    Read the article

  • Is Python good for high-load web projects?

    - by Vitali Fokin
    Hello! I decided to start my own web project. It should be a high-load project, and I can't decide which technologies I should use. I'm good with ASP.NET MVC, but I like languages like Python more than C#. I read a lot about Python and Django/Pylons/etc, but I didn't find any good examples of high-load projects in Python. So, the question is: is Python good for a high-load project? Is it fast enough? And if it is, are Python frameworks like Django/Pylons/etc good for this? Or would ASP.NET MVC be the better choice? PS: I'm not interested in Java, Ruby or PHP :) So I'm choosing only between Python + Django/Pylons/etc and ASP.NET MVC. Thanks in advance. Please don't start holy wars :)

    Read the article

  • Multiple indexes for a Java Collection - most basic solution?

    - by chris_l
    Hi, I'm looking for the most basic solution to create multiple indexes on a Java Collection.

    Required functionality:
    - When a value is removed, all index entries associated with that value must be removed.
    - Index lookup must be faster than linear search (at least as fast as a TreeMap).

    Side conditions:
    - It should ideally work with Java SE (6.0) alone; no extra libraries, if possible.
    - If necessary, then only small (not something like Lucene), common and well-tested libraries.
    - No database!

    Of course, I could write a class that manages multiple Maps myself. But I'd like to know if it can be done without that, while still getting simple usage similar to a single indexed java.util.Map. Thanks, Chris
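
    A minimal sketch of the "manage multiple Maps yourself" baseline, in plain Java SE 6 (class and field names are made up):

        import java.util.TreeMap;

        // Two TreeMap indexes over the same entities, kept in sync so that a
        // remove clears every index entry. Lookups are O(log n), like TreeMap.
        class MultiIndex {
            private final TreeMap<Long, String> byId = new TreeMap<Long, String>();
            private final TreeMap<String, Long> byName = new TreeMap<String, Long>();

            public void put(long id, String name) {
                byId.put(id, name);
                byName.put(name, id);
            }

            public void removeById(long id) {
                String name = byId.remove(id);
                if (name != null) {
                    byName.remove(name); // keep the second index consistent
                }
            }

            public Long lookupByName(String name) {
                return byName.get(name);
            }
        }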

    Read the article

  • Slow query with unexpected scan

    - by zerkms
    Hello. I have this query:

        SELECT *
        FROM SAMPLE
        INNER JOIN TEST ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER
        INNER JOIN RESULT ON TEST.TEST_NUMBER = RESULT.TEST_NUMBER
        WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'

    The biggest table here is RESULT, which contains 11.1M records; the other two tables are about 1M each. This query works slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records; it executes in 300ms, and the plan shows a clustered index seek. If I replace * in the SELECT clause with RESULT.TEST_NUMBER (covered by the index), then everything becomes fast in the first case too. This points to HDD I/O issues, but doesn't explain the plan change. So, any ideas?
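
    What's described sounds like the optimizer's tipping point: once the estimated row count makes per-row lookups look more expensive than one scan, it switches plans. If the estimate is off, a hint can push it back; a sketch (FORCESEEK assumes SQL Server 2008+):

        SELECT *
        FROM SAMPLE
        INNER JOIN TEST ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER
        INNER JOIN RESULT WITH (FORCESEEK) ON TEST.TEST_NUMBER = RESULT.TEST_NUMBER
        WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'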

    Read the article

  • Implications of Fulltext Search over many columns

    - by Alex
    Hello, I have a really wide table which includes separate columns for billing address, shipping address, primary address, names, aliases, etc. (I can't normalize this table further, and that's not the question here anyway.) I'm implementing SQL Server fulltext search, and I'm wondering whether I should limit the search ability to just the primary fields (primary address and names, for example), or if I can extend the search across all columns without incurring too much of a performance or memory penalty. I've done some basic testing with 10,000 sample rows and it's quite fast, but I don't have much experience with fulltext indexing, especially its dictionary internals, so I don't know if the index is going to grow over time, or if there is anything else to consider. Thoughts?
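
    For reference, a single fulltext index can span many columns while queries still target a subset, so indexing broadly doesn't force searching broadly. A sketch with hypothetical table/column names (it assumes a default fulltext catalog exists):

        CREATE FULLTEXT INDEX ON dbo.Customers
        (
            PrimaryAddress, BillingAddress, ShippingAddress, FullName
        )
        KEY INDEX PK_Customers;  -- requires a unique, non-nullable key index

        -- Search every indexed column:
        SELECT * FROM dbo.Customers WHERE CONTAINS(*, 'elm AND street');

        -- Or just the primary fields:
        SELECT * FROM dbo.Customers WHERE CONTAINS((PrimaryAddress, FullName), 'smith');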

    Read the article

  • Trying to find a good JavaScript function for HMAC-SHA1

    - by Darxval
    So I have been searching the web for a JavaScript source for an HMAC-SHA1 algorithm. I saw crypto-js, but I can't seem to get it to work, mainly because the page has no idea what Crypto means (I copied the .js script functions into my script file): http://code.google.com/p/crypto-js/ I have my Base64 encode function already, which I got from here: http://nerds-central.blogspot.com/2007/01/fast-scalable-javascript-and-vbscript.html BTW, this is for a Twitter application using the new OAuth system. Any help or links to where I can find anything on this would be helpful. If you need me to elaborate, let me know. Thank you!
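
    A usage sketch, assuming the 2.x crypto-js builds from that Google Code page are loaded as ordinary script tags (they define a global Crypto object); myBase64Encode stands in for the existing encoder mentioned above:

        <!-- load order matters: SHA1 first, then the HMAC wrapper -->
        <script src="sha1.js"></script>
        <script src="hmac.js"></script>
        <script>
            // Binary HMAC-SHA1 digest, then Base64 for the oauth_signature value.
            var digest = Crypto.HMAC(Crypto.SHA1, baseString, signingKey,
                                     { asString: true });
            var oauthSignature = myBase64Encode(digest);
        </script>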

    Read the article

  • C++ union assignment, is there a good way to do this?

    - by Sqeaky
    I am working on a project with a library and I must work with unions. Specifically, I am working with SDL and the SDL_Event union. I need to make copies of the SDL_Events, and I could find no good information on overloading assignment operators with unions. Provided that I can overload the assignment operator, should I manually sift through the union members and copy the pertinent ones, or can I simply copy some members (this seems dangerous to me), or maybe just use memcpy() (this seems simple and fast, but slightly dangerous)? If I can't overload operators, what would my best options be from there? I guess I could make new copies and pass around a bunch of pointers, but in this situation I would prefer not to do that. Any ideas welcome!
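
    For what it's worth, SDL_Event is a plain C (POD) union, so the compiler-generated assignment already performs a full copy of the whole union; no overload is needed. A sketch:

        #include <SDL/SDL.h>
        #include <cstring>

        SDL_Event CopyEvent(const SDL_Event& src) {
            SDL_Event dst = src;  // built-in assignment copies the entire union
            // std::memcpy(&dst, &src, sizeof(SDL_Event)); // equivalent for PODs
            return dst;
        }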

    Read the article

  • How to store and compare time-zone sensitive times

    - by Chad Moran
    I have a data structure where an entity has times stored as an int (minutes into the day) for fast comparison. The entity also has a foreign key reference back to a TimeZone table, which contains the .NET CLR ID name and its Standard Time/Daylight Time acronyms. Since this information is stored time-zone insensitive, I was wondering how, in LINQ to SQL, I could convert it into a UTC DateTime for comparison against other times that will be in UTC. Just to be clear, this conversion has to be done server-side so that I can execute the filtering on the SQL Server and not the client. I am using .NET 3.5 SP1 and SQL Server 2008.
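
    For reference, the conversion itself is straightforward in .NET via TimeZoneInfo; the hard part is that this runs client-side, so pushing it into SQL would mean materializing the zone's UTC offset for the relevant date as a column the query can use. A sketch of the client-side half (names assumed):

        using System;

        static DateTime ToUtc(DateTime day, int minutesIntoDay, string clrZoneId)
        {
            TimeZoneInfo tz = TimeZoneInfo.FindSystemTimeZoneById(clrZoneId);
            DateTime local = DateTime.SpecifyKind(
                day.Date.AddMinutes(minutesIntoDay), DateTimeKind.Unspecified);
            return TimeZoneInfo.ConvertTimeToUtc(local, tz); // honors DST for that date
        }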

    Read the article

  • Is Sphinx better than LaTeX for writing manuals/books?

    - by Masi
    Only a few people recommended using Sphinx at the beginning of the year. Sphinx has developed rather fast recently. I noted today that Sage has changed from direct editing in LaTeX to Sphinx. This is evident in William Stein's answer on 2nd April about Sage's tutorial: "The tutorial is not a latex document anymore. It's an entirely different Sphinx document that can output pdf." This suggests to me that Sphinx may be at a level where it is suitable for me. Is Sphinx better than LaTeX for writing manuals/books?

    Read the article

  • How to handle a large table in MySQL?

    - by Frantz Miccoli
    I've a database used to store items and properties about these items. The number of properties is extensible, so there is a join table to store each property value associated with an item:

        CREATE TABLE `item_property` (
            `property_id` int(11) NOT NULL,
            `item_id` int(11) NOT NULL,
            `value` double NOT NULL,
            PRIMARY KEY (`property_id`,`item_id`),
            KEY `item_id` (`item_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    This database has two goals: storing (which has first priority and has to be very quick; I would like to perform many inserts, hundreds, in a few seconds), and retrieving data (selects using item_id and property_id), which is a second priority. It can be slower, but not too much, because that would ruin my usage of the DB. Currently this table hosts 1.6 billion entries and a simple count can take up to 2 minutes... Inserting isn't fast enough to be usable. I'm using Zend_Db to access my data and would really be happy if you don't suggest that I develop anything on the PHP side. Thanks for your advice!
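
    Two common levers for a table of this shape, sketched with caveats: partitioning needs MySQL 5.1+, and every unique key must include the partitioning column (the (property_id, item_id) primary key here does):

        -- Batch many rows per INSERT to cut round trips and index-update overhead:
        INSERT INTO item_property (property_id, item_id, value) VALUES
            (1, 100, 0.5),
            (1, 101, 1.25),
            (2, 100, 7.0);

        -- Partition so each lookup and insert touches a fraction of the rows:
        ALTER TABLE item_property
            PARTITION BY HASH (item_id)
            PARTITIONS 32;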

    Read the article

  • Two Android threads and unsynchronized data

    - by Sponge
    I have a (perhaps stupid) question: I'm using 2 threads, one permanently writing floats and one permanently reading those floats. My question is, what is the worst that could happen if I don't synchronize them? It would be no problem if some of the values were not correct, because they change just a little with every write operation. I'm running the application this way at the moment and don't have any problems, so I want to know what could go wrong. Would a read/write conflict cause a number like 12345, which is being overwritten with 54321 and read at the same time, to appear as, for example, 54345? Or could something worse happen? (I don't want to use synchronization, to keep the code as fast as possible.)
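
    For what it's worth, the Java memory model rules this out for float: a 32-bit write is atomic, so a torn value like 54345 can't be observed (long and double can tear, though). The real risk is visibility: without synchronization the reading thread may see stale values indefinitely. A volatile field is the cheap middle ground; a sketch:

        // A sketch: volatile gives cross-thread visibility without lock overhead.
        class SharedValue {
            private volatile float value; // volatile also prevents tearing for long/double

            void write(float v) { value = v; } // writer thread
            float read() { return value; }     // reader thread
        }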

    Read the article

  • Handling a fat webservice in .NET 3.5 C#

    - by Chris M
    I'm dealing with an obese 3rd-party webservice that returns about 3 MB of data for simple search results; about 50% of the data in that response is junk. Would it make sense, then, to remap this data to my own result object and ditch the response, so I'm storing 1-2 MB in memory for filtering and sorting rather than using the web response's own object and using 2-4 MB? Or am I missing a point? So far I've been accessing the webservice from a separate project and using a new class to provide the interaction and handle the persistence, so my project looks like this:

        |- Web (mvc2 proj)
        |- DAL (database/storage fluent-nhibernate)
        |- SVCGateway (interaction layer + webservice related models)
        |- Services
        --------------
        |- Tests
        |- Specs

    I'm trying to make the application behave fast, and I also need to store the result set temporarily in case a customer goes to view a product and wants to go back to the results. (The service returns only 500 of a possible 14K results.) So basically I'm looking for confirmation that I'm doing the right thing in pushing the results into my own objects, or whether I'm breaking some rule, or even if there's a better way of handling it. Thanks
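
    A sketch of the remapping idea in the gateway layer: project the fat response into a lean DTO so only the fields used for filtering and sorting stay in memory (all names here are made up):

        using System.Collections.Generic;
        using System.Linq;

        public class LeanResult
        {
            public string Id { get; set; }
            public string Title { get; set; }
            public decimal Price { get; set; }
        }

        // In SVCGateway, right after deserializing the service response:
        List<LeanResult> results = response.Items
            .Select(i => new LeanResult { Id = i.Id, Title = i.Title, Price = i.Price })
            .ToList(); // the 3 MB response object is now eligible for collection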

    Read the article

  • Which layout engine for finding coordinates of HTML elements on a web page?

    - by Mexx
    I am doing a web data classification task and was thinking I could get the coordinates of HTML elements as they would appear in a web browser, without taking into consideration any CSS or JavaScript referred to in the web page. My programming language is C++ and I need results for a couple million pages, so it has to be fast. I know there is a Microsoft COM component which renders the page in a web browser control and can then be queried for the position of different HTML tags. But this is not suitable in my case, as it first renders the whole page, which takes a lot of time. So, as I found out, there are the open-source layout engines WebKit and Gecko that can probably be used for this. But each is a huge piece of code, and I need someone to direct me to the right classes or modules to look into, or to any previous/similar work. Also, please let me know what you think would be a good choice if I want to customize the existing code to use multiple threads to make it faster. Thanks

    Read the article

  • Why is doing a TOP (1) on an indexed column in MSSQL slow?

    - by reinier
    I'm puzzled by the following. I have a DB with around 10 million rows, and (among other indices) one column has an index. Now I have 700k rows where the campaignid is indeed 3835. For all these rows, the connectionid is the same; I just want to find out this connectionid.

        USE messaging_db;

        SELECT TOP (1) connectionid
        FROM outgoing_messages WITH (NOLOCK)
        WHERE (campaignid_int = 3835)

    Now this query takes approx 30 seconds to perform! I (with my small DB knowledge) would expect that it would take any one of the rows and return me its connectionid. If I test this same query for a campaign which only has 1 entry, it goes really fast. So the index works. How would I tackle this, and why doesn't it work as expected?
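
    One likely explanation: the index on campaignid_int doesn't contain connectionid, so even TOP (1) has to look up into the base table, and with a 700k-row estimate the optimizer may pick a full scan instead of a seek. A covering index makes the query answerable from the index alone; a sketch (INCLUDE needs SQL Server 2005+; the index name is made up):

        CREATE NONCLUSTERED INDEX IX_outgoing_campaign_conn
            ON outgoing_messages (campaignid_int)
            INCLUDE (connectionid);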

    Read the article

  • What are some good ways to do intermachine locking?

    - by mike
    Our server cluster consists of 20 machines, each with 10 processes of 5 threads each. We'd like some way to prevent any two threads, in any process, on any machine, from modifying the same object at the same time. Our code is written in Python and runs on Linux, if that helps narrow things down. Also, it's a pretty rare case that two such threads want to do this, so we'd prefer something that optimizes the "only one thread needs this object" case to be really fast, even if it means that the "one thread has locked this object and another one needs it" case isn't great. What are some of the best practices?
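
    One common pattern is a lock service built on a shared store with an atomic "set if absent" operation; a sketch using python-memcached's add() (the host and key scheme are made up):

        import time
        import memcache

        mc = memcache.Client(['lockhost:11211'])

        def acquire(name, timeout=30):
            """Spin until the lock key can be created; add() is atomic."""
            deadline = time.time() + timeout
            while time.time() < deadline:
                # add() only succeeds if the key doesn't exist yet, and the
                # expiry means a crashed holder can't wedge the cluster forever.
                if mc.add('lock:' + name, 'held', time=timeout):
                    return True
                time.sleep(0.01)  # contention is rare, so a short sleep is fine
            return False

        def release(name):
            mc.delete('lock:' + name)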

    Read the article

  • Is it possible to generate plain-old XML using Haml?

    - by lsdr
    I've been working on a piece of software where I need to generate a custom XML file to send back to a client application. The current solutions in the Ruby/Rails world for generating XML files are slow, at best. Builder and even Nokogiri, while they have a nice syntax and are maintainable solutions, consume too much time and processing. I could definitely go with ERB, which provides good speed at the expense of building the whole XML by hand. Haml is a great tool: it has a nice, straightforward syntax and is fairly fast. But I'm struggling to build pure XML files using it, which makes me wonder: is it possible at all? Does anyone have pointers to code or docs showing how to do this, i.e. how to build a full, valid XML document from Haml?
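
    For what it's worth, Haml isn't HTML-specific: its !!! XML directive emits an XML prolog, and arbitrary element names work. A sketch (the data and tag names are made up):

        !!! XML
        %products
          - @products.each do |p|
            %product{ :id => p.id }
              %name= p.name
              %price= p.price

    Rendered with an XML content type, this produces a plain XML document; Haml's :format => :xhtml option keeps empty tags self-closing where needed.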

    Read the article
