Search Results

Search found 941 results on 38 pages for 'fastest'.

Page 10 of 38

  • What is the fastest and best PHP IDE on Windows?

    - by vegatron
    I've tried a few PHP IDEs, but I'm still searching for the fastest one. All the Java-based IDEs are too slow. I have two computers to work with: my home PC, which is very fast, and my laptop, which is decent but can't handle heavy software. I have to work on both, so I'm looking for the best free IDE that is also fast. I'm not talking about text editors (I already have Notepad++ and it's great); I'm looking for extra features that help me save time. Any tips?

  • What is the fastest way to compare two lists of items?

    - by edude05
    I have two folders with approximately 10,000 files each. I'd like to write a script or program that can tell me whether these folders are in sync and, if not, which files are missing from each. So, after generating a list of files for each folder, what is the fastest algorithm to find the unique files? What I'm thinking right now: compare the first file on each list; if they differ, remove one until they match, then remove both from the lists (because they are not unique). Is there a faster algorithm than this?
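
    A minimal sketch of one approach in Python, assuming only file names (not contents) need to be compared; dir_a and dir_b are hypothetical paths:

        import os

        def diff_folders(dir_a, dir_b):
            """Report which file names are missing from each folder."""
            names_a = set(os.listdir(dir_a))
            names_b = set(os.listdir(dir_b))
            missing_from_b = names_a - names_b  # in A, absent from B
            missing_from_a = names_b - names_a  # in B, absent from A
            return missing_from_a, missing_from_b

    Set difference is hash-based, so this avoids sorting entirely and runs in roughly linear time in the number of files.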

  • C#: What's the fastest way to make an integer positive?

    - by maxima120
    I asked this question badly before and was swamped with negative votes, so let me try again: what is the absolute fastest way to make an int positive (given a 50/50 distribution of positive/negative values over time)? To be nominated for an answer, I'll require MSIL analysis, not a guess or timing with granny's watch. P.S. One variation I proposed is i * i, not because I want to do Sqrt(i * i) afterwards, but because i is used only once, to be compared against a constant. If i * i wins the competition, I can simply square the constant. Hence the following solution is valid:

        int trigger = realTrigger * realTrigger;
        i = SomeCalcs();
        i = i * i;
        if (i < trigger) DoSomething();

    P.P.S. Pointless rants ("why do you need this, it's BS! C# cannot tolerate developers like you!") are not acceptable.
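
    A sketch of one branchless candidate, assuming a 32-bit two's-complement int; whether it actually beats Math.Abs would have to be settled by inspecting the MSIL/JIT output:

        // Branchless absolute value for a 32-bit two's-complement int.
        // Note: like Math.Abs, this cannot represent |int.MinValue|.
        static int FastAbs(int i)
        {
            int mask = i >> 31;        // arithmetic shift: 0 or -1
            return (i + mask) ^ mask;  // flips the sign only when mask == -1
        }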

  • What's the fastest way to check the availability of a SQL Server instance?

    - by mwolfe02
    I have an MS Access program in use in multiple locations. It connects to MS SQL Server tables, but the server name is different in each location. I am looking for the fastest way to test for the existence of a server. The code I am currently using looks like this:

        ShellWait "sc \\" & ServerName & " qdescription MSSQLSERVER > " & Qt(fn)
        FNum = FreeFile()
        Open fn For Input As #FNum
        Line Input #FNum, Result
        Close #FNum
        Kill fn
        If InStr(Result, "SUCCESS") Then ...

    Here ShellWait executes a shell command and waits for it to finish, Qt wraps a string in double quotes, and fn is a temporary filename variable. I run the above code against a list of server names (of which only one is normally available). The code takes about one second if the server is available and about 8 seconds for each server that is unavailable. I'd like to get both of these lower if possible, but especially the failure case, as it happens most often.
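
    One alternative sketch: open an ADO connection with a short timeout instead of shelling out to sc; the provider and connection-string details here are assumptions and would need adjusting to the real setup:

        ' Returns True if the server answers within two seconds.
        Function ServerIsUp(ServerName As String) As Boolean
            Dim cn As Object
            Set cn = CreateObject("ADODB.Connection")
            cn.ConnectionTimeout = 2   ' fail fast on unreachable servers
            On Error Resume Next
            cn.Open "Provider=SQLOLEDB;Data Source=" & ServerName & _
                    ";Integrated Security=SSPI"
            ServerIsUp = (cn.State = 1) ' 1 = adStateOpen
            If ServerIsUp Then cn.Close
        End Function

    This caps the failure case at the timeout instead of waiting on the service-controller call.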

  • I need a programmatic way to perform the fastest 'trans-wrap' of .mov to .mp4 in an iPhone/iPad application

    - by user1307877
    I want to change the container of .mov video files that I pick using UIImagePickerController and compress via AVAssetExportSession (with AVAssetExportPresetMediumQuality and shouldOptimizeForNetworkUse = YES) to an .mp4 container. I need a programmatic way / sample code to perform the fastest possible 'trans-wrap' in an iPhone/iPad application. I tried setting the AVAssetExportSession.outputFileType property to AVFileTypeMPEG4, but it is not supported and I got an exception. I also tried the transform with AVAssetWriter, specifying fileType:AVFileTypeMPEG4. I did get an .mp4 output file, but it was not a trans-wrap: the output was 3x bigger than the source, and the conversion took 128 seconds for a 60-second video. I need a solution that runs quickly and keeps the file size. Please help.
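
    A sketch of the passthrough approach, assuming the source tracks are already H.264/AAC so they can be rewrapped without re-encoding; not every source supports MP4 output this way, so supportedFileTypes should be checked first:

        AVAsset *asset = [AVAsset assetWithURL:sourceURL];
        AVAssetExportSession *session =
            [[AVAssetExportSession alloc] initWithAsset:asset
                                             presetName:AVAssetExportPresetPassthrough];
        session.outputURL = destinationURL;
        session.outputFileType = AVFileTypeMPEG4; // verify via session.supportedFileTypes
        session.shouldOptimizeForNetworkUse = YES;
        [session exportAsynchronouslyWithCompletionHandler:^{
            if (session.status == AVAssetExportSessionStatusCompleted) {
                // container changed; streams copied, not re-encoded
            }
        }];

    With the passthrough preset the export is mostly I/O-bound, so it should take seconds rather than minutes.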

  • What is the fastest way to trim blank lines from the beginning and end of an array?

    - by Edward Tanguay
    This script:

        <?php
        $lines[] = '';
        $lines[] = 'first line ';
        $lines[] = 'second line ';
        $lines[] = '';
        $lines[] = 'fourth line';
        $lines[] = '';
        $lines[] = '';
        $lineCount = 1;
        foreach ($lines as $line) {
            echo $lineCount . ': [' . trim($line) . ']<br/>';
            $lineCount++;
        }
        ?>

    produces this output:

        1: []
        2: [first line]
        3: [second line]
        4: []
        5: [fourth line]
        6: []
        7: []

    What is the fastest, most efficient way to change the above script so that it also deletes the preceding and trailing blank entries but not the interior blank entries, so that it outputs this:

        1: [first line]
        2: [second line]
        3: []
        4: [fourth line]

    I could use the foreach loop but I imagine there is a way with array_filter or something similar which is much more efficient.
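
    A minimal sketch of one approach: pop blank entries off each end and leave interior blanks alone (a plain array_filter would strip the interior blanks too, so it does not fit here):

        // Trim blank entries from the start and end of $lines only.
        while (!empty($lines) && trim(reset($lines)) === '') {
            array_shift($lines);
        }
        while (!empty($lines) && trim(end($lines)) === '') {
            array_pop($lines);
        }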

  • What is the fastest way to validate that a field has no more than n words?

    - by James A. Rosen
    I have a Ruby on Rails model:

        class Candidate < ActiveRecord::Base
          validates_presence_of :application_essay
          validate :validate_length_of_application_essay

          protected

          def validate_length_of_application_essay
            return if application_essay.blank? # don't add a second error if they didn't fill it out
            errors.add(:application_essay, :too_long) unless ...
          end
        end

    Without dropping into C, what is the fastest way to check that the application_essay contains no more than 500 words? You can assume that most essays will be at least 200 words, are unlikely to be more than 5,000 words, and are in English (or the pseudo-English sometimes called "business-ese"). You can also classify anything you want as a "word", as long as your classification would be immediately obvious to a typical user. (NB: this is not the place to debate what a "typical user" is :) )
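
    A sketch of one way to fill in the condition, counting runs of non-whitespace as words; whether that matches the intended definition of "word" is an assumption:

        def validate_length_of_application_essay
          return if application_essay.blank?
          # String#split with no argument splits on whitespace runs,
          # ignoring leading whitespace, so each token is one "word"
          word_count = application_essay.split.length
          errors.add(:application_essay, :too_long) unless word_count <= 500
        end

    split is a single pass over the string in C, so this stays fast even at 5,000 words.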

  • Fastest way to search through this data object? (Python)

    - by victor
    I have a data object that looks like this:

        {
            'node-16': {
                'tags': ['cuda'],
                'localNodes': [
                    {
                        'name': 'nC',
                        'consumesFrom': ['nA', 'nB'],
                        'classType': 'VectorAdder.VectorAdder'
                    },
                    {
                        'name': 'nB',
                        'consumesFrom': None,
                        'classType': 'RandomVector'
                    }
                ]
            },
            'node-17': {
                'tags': ['boring'],
                'localNodes': [
                    {
                        'name': 'nA',
                        'consumesFrom': None,
                        'classType': 'RandomVector'
                    }
                ]
            }
        }

    Notice that node nA is a producer for nC. What's the fastest way to find out if a given localNode is a producer for another localNode in the data structure (and not within the same list)? For example, I would like to know that nA (node-17) produces for nC (exists on node-16). But I don't need to know that nB produces for nC, since they exist in the same localNodes list.
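
    A sketch of one approach: one pass to index which host owns each local node, then a pass over the consumesFrom lists, skipping same-host pairs; data is assumed to be the structure above:

        def find_cross_node_producers(data):
            """Yield (producer, consumer, consumer_host) for cross-host pairs."""
            owner = {}  # local node name -> host key ('node-16', ...)
            for host, info in data.items():
                for local in info['localNodes']:
                    owner[local['name']] = host
            for host, info in data.items():
                for local in info['localNodes']:
                    for producer in (local['consumesFrom'] or []):
                        if owner.get(producer) != host:  # ignore same-list pairs
                            yield producer, local['name'], host

    Building the owner index once makes each producer lookup O(1), so the whole scan is linear in the number of local nodes and edges.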

  • Fastest way of converting a quad to a triangle strip?

    - by Tina Brooks
    What is the fastest way of converting a quadrilateral (made up of four x,y points) to a triangle strip? I'm well aware of the general triangulation algorithms that exist, but I need a short, well-optimized algorithm that deals with quadrilaterals only. My current algorithm does this, which works for most quads but still gets the points mixed up for some:

        #define fp(f) bounds.p##f

        /* Sort four points in ascending order by their Y values */
        point_sort4_y(&fp(1), &fp(2), &fp(3), &fp(4));

        /* Bottom two */
        if (fminf(-fp(1).x, -fp(2).x) == -fp(2).x) {
            out_quad.p1 = fp(2);
            out_quad.p2 = fp(1);
        } else {
            out_quad.p1 = fp(1);
            out_quad.p2 = fp(2);
        }

        /* Top two */
        if (fminf(-fp(3).x, -fp(4).x) == -fp(3).x) {
            out_quad.p3 = fp(3);
            out_quad.p4 = fp(4);
        } else {
            out_quad.p3 = fp(4);
            out_quad.p4 = fp(3);
        }

  • Fastest way to convert a file from latin1 to utf-8 in Python

    - by xsaero00
    I need the fastest way to convert files from latin1 to utf-8 in Python. The files are large, ~2 GB (I am moving DB data). So far I have:

        import codecs

        infile = codecs.open(tmpfile, 'r', encoding='latin1')
        outfile = codecs.open(tmpfile1, 'w', encoding='utf-8')
        for line in infile:
            outfile.write(line)
        infile.close()
        outfile.close()

    but it is still slow. The conversion takes one fourth of the whole migration time. I could also use a Linux command-line utility if it is faster than native Python code.
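
    A sketch of a chunked variant; since latin-1 is a single-byte encoding, the input can be split at any byte boundary, so large binary reads avoid the per-line overhead (the 1 MiB chunk size is an arbitrary choice):

        def latin1_to_utf8(src_path, dst_path, chunk_size=1 << 20):
            with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
                while True:
                    chunk = src.read(chunk_size)
                    if not chunk:
                        break
                    # safe: every byte is a complete latin-1 character
                    dst.write(chunk.decode('latin-1').encode('utf-8'))

    On the command line, iconv -f LATIN1 -t UTF-8 infile > outfile does the same job and is worth benchmarking against.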

  • What is the fastest way to compare 2 rows in SQL?

    - by Swoosh
    I have two different databases. When something changes in the big one (which I don't have access to), some rows get imported into a similar HUGE table in my database. I have a job checking for records in this table; if there are any, it executes a stored procedure, processes the rows, and deletes them from the table. Performance matters (huge amount of data). I would like to know the fastest way to tell whether something has changed between, say, two imported rows with 100 columns each. There are no FKs; I don't need them. Chances are that even though I have records in my table, nothing has actually changed. Also, suppose something has changed: is it possible, for example, to check for changes only inside the datetime columns? Thanks
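
    One sketch of a cheap change test in SQL Server; the table and column names are hypothetical, and BINARY_CHECKSUM can collide, so a matching checksum is a strong hint rather than a proof that nothing changed:

        -- Flag rows whose imported copy differs from the existing copy.
        SELECT i.id
        FROM   imported i
        JOIN   existing e ON e.id = i.id
        WHERE  BINARY_CHECKSUM(i.col1, i.col2, i.modified_at)
            <> BINARY_CHECKSUM(e.col1, e.col2, e.modified_at);

        -- Narrower variant: compare only the datetime columns.
        SELECT i.id
        FROM   imported i
        JOIN   existing e ON e.id = i.id
        WHERE  i.modified_at <> e.modified_at;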

  • What is the fastest way to get the persisted object after calling Hibernate's saveOrUpdate?

    - by Dave
    I'm using Hibernate 3.2.1.ga, Hibernate Annotations 3.2.1.ga, and hibernate-jpa-2.0-api. I can't upgrade at this time as I'm working with legacy code. I have this generic method for saving or updating objects:

        protected void saveOrUpdate(Object obj) {
            final Session session = sessionFactory.getCurrentSession();
            session.saveOrUpdate(obj);
        }

    You can assume that every argument "obj" will have a member field that is marked with the "@Id" annotation. I would like to change the return type so the method returns an object that represents the persisted object in the database (meaning that if "obj" didn't contain an id before, what is returned is the database object with a populated id). What is the fastest way to do this, given my versioning and generic constraints?
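
    A sketch of two options within those version constraints; saveOrUpdate mutates its argument, so after the call the @Id field is populated on the object you passed in, while merge is the variant that returns the managed instance:

        protected Object saveOrUpdate(Object obj) {
            final Session session = sessionFactory.getCurrentSession();
            session.saveOrUpdate(obj); // populates the @Id field on obj itself
            return obj;
        }

        // Variant: merge() returns the persistent instance, which may be a
        // different object if the session already held a copy of this entity.
        protected Object saveViaMerge(Object obj) {
            final Session session = sessionFactory.getCurrentSession();
            return session.merge(obj);
        }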

  • What's the fastest way to compare two objects in PHP?

    - by johnnietheblack
    Let's say I have an object (a User object in this case) and I'd like to be able to track changes to it with a separate class. The User object should not have to change its behavior in any way for this to happen. Therefore, my separate class creates a "clean" copy of it, stores it somewhere locally, and later can compare the User object to the original version to see if anything changed during its lifespan. Is there a function, a pattern, or anything that can quickly compare the two versions of the User object?

    Option 1: Maybe I could serialize each version and compare directly, or hash them and compare?

    Option 2: Maybe I should simply create a ReflectionClass, run through each of the properties of the class, and see if the two versions have the same property values?

    Option 3: Maybe there is a simple native function like objects_are_equal($object1, $object2)?

    What's the fastest way to do this?
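
    A sketch of the two cheapest checks, assuming comparing property values is the goal; in PHP, == on two objects is true when they are instances of the same class with equal attribute values, while === additionally requires the very same instance:

        // Loose comparison: same class and equal property values.
        $unchanged = ($cleanCopy == $currentUser);

        // Alternative: compare serialized snapshots; also usable when the
        // clean copy is stored as a string rather than a live object.
        $unchanged = (serialize($cleanCopy) === serialize($currentUser));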

  • All things equal, what is the fastest way to output data to disk in C++?

    - by user260197
    I am running simulation code that is largely bound by CPU speed. I am not interested in pushing data in/out to a user interface, simply saving it to disk as it is computed. What would be the fastest solution that would reduce overhead? iostreams? printf? I have previously read that printf is faster. Will this depend on my code and is it impossible to get an answer without profiling? Edit: Output data needs to be in text format, whether tab or comma separated. This will require formatting, precision, etc. Running in Windows.
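
    A sketch of a low-overhead C stdio setup; the 1 MiB buffer size is an arbitrary assumption, and only profiling the real workload can settle printf vs. iostreams:

        #include <cstdio>
        #include <vector>

        int main() {
            std::FILE* out = std::fopen("results.tsv", "w");
            if (!out) return 1;
            // Enlarge the stdio buffer so rows hit the disk in big batches.
            std::vector<char> buf(1 << 20);
            std::setvbuf(out, buf.data(), _IOFBF, buf.size());

            for (int step = 0; step < 1000000; ++step) {
                double value = step * 0.001; // stand-in for computed data
                std::fprintf(out, "%d\t%.6f\n", step, value);
            }
            std::fclose(out);
            return 0;
        }

    The same buffering idea applies to iostreams via rdbuf()->pubsetbuf before opening the file.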

  • What is the absolute fastest way to implement a concurrent queue with ONLY one consumer and one producer?

    - by JohnPristine
    java.util.concurrent.ConcurrentLinkedQueue comes to mind, but is it really optimal for this two-thread scenario? I am looking for the minimum latency possible on both sides (producer and consumer). If the queue is empty you can immediately return null, AND if the queue is full you can immediately discard the entry you are offering. Does ConcurrentLinkedQueue use super fast and light locks (AtomicBoolean)? Has anyone benchmarked ConcurrentLinkedQueue, or does anyone know about the ultimate fastest way of doing this? Additional details: I imagine the queue should be a fair one, meaning the producer should not make the consumer wait any longer than it needs to (by front-running it), and vice versa.
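
    For reference, a sketch of the classic single-producer/single-consumer ring buffer; with exactly one thread on each end, volatile head/tail indices are enough and no locks or CAS are needed (capacity must be a power of two here, and padding against false sharing is omitted for brevity):

        // Minimal single-producer/single-consumer ring buffer.
        final class SpscQueue<E> {
            private final E[] buffer;
            private final int mask;
            private volatile long head = 0; // next slot to read  (consumer-owned)
            private volatile long tail = 0; // next slot to write (producer-owned)

            @SuppressWarnings("unchecked")
            SpscQueue(int capacityPowerOfTwo) {
                buffer = (E[]) new Object[capacityPowerOfTwo];
                mask = capacityPowerOfTwo - 1;
            }

            boolean offer(E e) {            // producer thread only
                long t = tail;
                if (t - head == buffer.length) return false; // full: discard
                buffer[(int) (t & mask)] = e;
                tail = t + 1;               // volatile write publishes the element
                return true;
            }

            E poll() {                      // consumer thread only
                long h = head;
                if (h == tail) return null; // empty: return immediately
                E e = buffer[(int) (h & mask)];
                buffer[(int) (h & mask)] = null;
                head = h + 1;
                return e;
            }
        }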

  • How should I store data in MySQL to get the fastest performance?

    - by Oden
    Hey, I'm wondering which of the following two designs would give me the fastest performance for a user messaging module on my site. The first is a multi-table setup with a connection table and a main table. The connection table links accounts to the messages table. In this case, a query to get some data about the author along with the messages he has sent would look like this:

        SELECT m.*, a.username
        FROM messages AS m
        LEFT JOIN connection_table ON (message_id = m.id)
        LEFT JOIN accounts AS a ON (account_id = a.id)
        WHERE m.id = '32341'

    Inserting into it is a little more complicated. My other idea, which I think is the better solution, is to store the data I would otherwise keep in a connection table in the same table as the mail itself. That sounds like it would create lots of duplicated entries, but no: I have a text-type field that holds user ids like this: *24*32*249*. When I want to query them, I use MySQL's LIKE. Deleting is another problem, but for that I have one more field where I store who has deleted the post. Sadly, I don't know how to join on a field like that. So what would you recommend? Are there other ways?
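
    For reference, a sketch of the normalized variant of the first design, with hypothetical names; a packed-ID text column can only be searched with LIKE or FIND_IN_SET and cannot use an index, whereas a junction table makes both the join and a per-recipient delete flag straightforward:

        -- Hypothetical junction table linking accounts to messages.
        CREATE TABLE message_recipients (
            message_id INT NOT NULL,
            account_id INT NOT NULL,
            deleted    TINYINT NOT NULL DEFAULT 0,  -- per-recipient delete flag
            PRIMARY KEY (message_id, account_id),
            KEY idx_account (account_id)
        );

        SELECT m.*, a.username
        FROM messages m
        JOIN message_recipients r ON r.message_id = m.id
        JOIN accounts a ON a.id = m.author_id       -- assumes an author_id column
        WHERE r.account_id = 24 AND r.deleted = 0;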

  • What are the fastest-performing options for a read-only, unordered collection of unique strings?

    - by Dan Tao
    Disclaimer: I realize the totally obvious answer to this question is HashSet<string>. It is absurdly fast, it is unordered, and its values are unique. But I'm just wondering, because HashSet<T> is a mutable class, so it has Add, Remove, etc.; and so I am not sure if the underlying data structure that makes these operations possible makes certain performance sacrifices when it comes to read operations -- in particular, I'm concerned with Contains. Basically, I'm wondering what are the absolute fastest-performing data structures in existence that can supply a Contains method for objects of type string. Within or outside of the .NET framework itself. I'm interested in all kinds of answers, regardless of their limitations. For example I can imagine that some structure might be restricted to strings of a certain length, or may be optimized depending on the problem domain (e.g., range of possible input values), etc. If it exists, I want to hear about it. One last thing: I'm not restricting this to read-only data structures. Obviously any read-write data structure could be embedded inside a read-only wrapper. The only reason I even mentioned the word "read-only" is that I don't have any requirement for a data structure to allow adding, removing, etc. If it has those functions, though, I won't complain.

  • What is the fastest way to get a DataTable into SQL Server?

    - by John Gietzen
    I have a DataTable in memory that I need to dump straight into a SQL Server temp table. After the data has been inserted, I transform it a little bit, and then insert a subset of those records into a permanent table. The most time consuming part of this operation is getting the data into the temp table. Now, I have to use temp tables, because more than one copy of this app is running at once, and I need a layer of isolation until the actual insert into the permanent table happens. What is the fastest way to do a bulk insert from a C# DataTable into a SQL temp table? I can't use any 3rd party tools for this, since I am transforming the data in memory. My current method is to create a parameterized SqlCommand:

        INSERT INTO #table (col1, col2, ... col200)
        VALUES (@col1, @col2, ... @col200)

    and then for each row, clear and set the parameters and execute. There has to be a more efficient way. I'm able to read and write the records on disk in a matter of seconds...
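
    A sketch using SqlBulkCopy, which streams the DataTable in bulk instead of one INSERT per row; note that a temp table is only visible to the session that created it, so the copy must run on that same open connection:

        using System.Data;
        using System.Data.SqlClient;

        static void BulkLoad(SqlConnection conn, DataTable table)
        {
            // conn must be the connection on which #table was created.
            using (var bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "#table";
                bulk.BatchSize = 5000;      // tuning assumption
                bulk.WriteToServer(table);
            }
        }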

  • What is the fastest way to check if files are identical?

    - by ojblass
    If you have 1,000,000 source files, you suspect they are all the same, and you want to compare them, what is the current fastest method to compare those files? Assume they are Java files, and the platform where the comparison is done is not important. cksum is making me cry. When I say identical I mean ALL identical. Update: I know about generating checksums. diff is laughable... I want speed. Update: Don't get stuck on the fact that they are source files. Pretend, for example, that you took a million runs of a program with very regulated output. You want to prove all 1,000,000 versions of the output are the same. Update: Read the number of blocks rather than bytes, and immediately throw out the mismatches? Is that faster than finding the number of bytes? Update: Is this ANY different from the fastest way to compare two files?
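
    A sketch of the short-circuit approach, assuming the common case is that the files really are identical: a size check costs only a stat, and hashing stops at the first digest that disagrees (when all files do match, every byte must be read at least once, so that pass is unavoidable):

        import hashlib
        import os

        def all_identical(paths, chunk_size=1 << 20):
            # Cheap first pass: any size mismatch settles it without reading data.
            if len({os.path.getsize(p) for p in paths}) > 1:
                return False
            digests = set()
            for p in paths:
                h = hashlib.sha1()
                with open(p, 'rb') as f:
                    for chunk in iter(lambda: f.read(chunk_size), b''):
                        h.update(chunk)
                digests.add(h.digest())
                if len(digests) > 1:
                    return False     # stop at the first file that differs
            return True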

  • Interview question: What is the fastest way to generate prime numbers recursively?

    - by hilal
    Generating prime numbers is simple, but what is the fastest way to find and generate them recursively? Here is my solution. However, it is not the best way. I think it is O(N*sqrt(N)). Please correct me if I am wrong.

        public static boolean isPrime(int n) {
            if (n < 2) {
                return false;
            } else if (n % 2 == 0 && n != 2) {
                return false;
            } else {
                return isPrime(n, (int) Math.sqrt(n));
            }
        }

        private static boolean isPrime(int n, int i) {
            if (i < 2) {
                return true;
            } else if (n % i == 0) {
                return false;
            } else {
                return isPrime(n, --i);
            }
        }

        public static void generatePrimes(int n) {
            if (n < 2) {
                return;
            } else if (isPrime(n)) {
                System.out.println(n);
            }
            generatePrimes(--n);
        }

        public static void main(String[] args) {
            generatePrimes(200);
        }

  • What is the fastest way to insert 100,000 records from one database to another?

    - by Pentium10
    I have a mobile application. My client has a large data set, ~100,000 records. It's updated frequently. When we sync, we need to copy from one database to another. I have attached the second database to the main one and run:

        insert into table select * from sync.table

    This is extremely slow; it takes about 10 minutes, I think. I noticed that the journal file grows step by step. How can I speed this up? EDITED 1: I have indexes off, and I have the journal off. Using insert into table select * from sync.table, it still takes 10 minutes. EDITED 2: If I run a query like

        select id, invitem, invid, cost from inventory
        where itemtype = 1 order by invitem limit 50

    it takes 15-20 seconds. The table schema is:

        CREATE TABLE inventory (
            'id' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
            'serverid' INTEGER NOT NULL DEFAULT 0,
            'itemtype' INTEGER NOT NULL DEFAULT 0,
            'invitem' VARCHAR,
            'instock' FLOAT NOT NULL DEFAULT 0,
            'cost' FLOAT NOT NULL DEFAULT 0,
            'invid' VARCHAR,
            'categoryid' INTEGER DEFAULT 0,
            'pdacategoryid' INTEGER DEFAULT 0,
            'notes' VARCHAR,
            'threshold' INTEGER NOT NULL DEFAULT 0,
            'ordered' INTEGER NOT NULL DEFAULT 0,
            'supplier' VARCHAR,
            'markup' FLOAT NOT NULL DEFAULT 0,
            'taxfree' INTEGER NOT NULL DEFAULT 0,
            'dirty' INTEGER NOT NULL DEFAULT 1,
            'username' VARCHAR,
            'version' INTEGER NOT NULL DEFAULT 15
        )

    Indexes are created like this:

        CREATE INDEX idx_inventory_categoryid ON inventory (pdacategoryid);
        CREATE INDEX idx_inventory_invitem ON inventory (invitem);
        CREATE INDEX idx_inventory_itemtype ON inventory (itemtype);

    I am wondering: isn't insert into ... select * from the fastest built-in way to do a massive data copy? EDITED 3: SQLite is serverless, so please stop voting for a particular answer, because that is not the answer, I'm sure.
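
    For reference, a sketch of the usual SQLite bulk-copy setup; dropping the indexes before the copy and recreating them afterwards is often the biggest win at this row count, and the pragma trades durability for speed only for the duration of the copy:

        DROP INDEX IF EXISTS idx_inventory_categoryid;
        DROP INDEX IF EXISTS idx_inventory_invitem;
        DROP INDEX IF EXISTS idx_inventory_itemtype;

        PRAGMA synchronous = OFF;   -- skip fsync while copying
        BEGIN;
        INSERT INTO inventory SELECT * FROM sync.inventory;
        COMMIT;
        PRAGMA synchronous = FULL;  -- restore the safe default

        CREATE INDEX idx_inventory_categoryid ON inventory (pdacategoryid);
        CREATE INDEX idx_inventory_invitem ON inventory (invitem);
        CREATE INDEX idx_inventory_itemtype ON inventory (itemtype);

    Building each index once over the full table is much cheaper than updating three indexes on every inserted row.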

  • What is the fastest way to convert bool to byte?

    - by Amir Rezaei
    What is the fastest way to convert a bool to a byte? I want this mapping: false = 0, true = 1. Note: I don't want to use any if statement. Update: I don't want to use a conditional statement; I don't want the CPU to stall or mispredict the next instruction. I want to optimize this code:

        private static string ByteArrayToHex(byte[] barray)
        {
            char[] c = new char[barray.Length * 2];
            byte k;
            for (int i = 0; i < barray.Length; ++i)
            {
                k = (byte)(barray[i] >> 4);
                c[i * 2] = (char)(k > 9 ? k + 0x37 : k + 0x30);
                k = (byte)(barray[i] & 0xF);
                c[i * 2 + 1] = (char)(k > 9 ? k + 0x37 : k + 0x30);
            }
            return new string(c);
        }

    Update: The length of the array is very large, on the order of terabytes! Therefore I need to optimize where possible. I shouldn't need to explain myself; the question is still valid. Update: I'm working on a project and looking at others' code; that's why I didn't provide the function in the first place. I didn't want to spend time explaining things when people have opinions about the code. I shouldn't need to provide the background of my work, or a function not written by me, in my question. I am optimizing it part by part. If I needed help with the whole function, I would ask that in another question. That is why I asked this very simply at the beginning. Unfortunately, people couldn't keep to the question; so please, if you want to help, answer the question. Update: For those who want to see the point of this question, this example shows how two conditional expressions could be removed from the code:

        byte A = k > 9; // if only (k > 9) evaluated to 0 or 1
        c[i * 2] = A * (k + 0x30) - (A - 1) * (k + 0x30);
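
    A sketch of two branch-free candidates; the unsafe cast relies on the CLR storing a bool as a single 0/1 byte, which is an implementation detail worth verifying before trusting it at terabyte scale:

        static byte BoolToByte(bool flag)
        {
            // Convert.ToByte returns 1 for true and 0 for false; the JIT
            // can often lower this to a branch-free instruction sequence.
            return Convert.ToByte(flag);
        }

        static unsafe byte BoolToByteUnsafe(bool flag)
        {
            // Reinterpret the bool's single byte of storage as 0 or 1
            // (requires compiling with /unsafe).
            return *(byte*)&flag;
        }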

  • The fastest way to encode image+audio for YouTube from the command line?

    - by Pavel Vlasov
    I have an mp3 and an image, and I want to make a simple clip to upload to YouTube. Is there a fast solution? If video formats are so badly designed, then maybe it is possible to use a pre-rendered video-only clip? This works well, except that it takes as long as the audio lasts:

        ffmpeg -loop_input -r ntsc -i "%IMAGE%" -i "%AUDIO%" -r 1 -acodec copy -shortest -re -force_fps "%VIDEO%"

    This takes a second but results in a black-screen video that plays fine in a desktop video player but is not accepted by YouTube:

        ffmpeg -i "%IMAGE%" -i "%AUDIO%" -acodec copy "%VIDEO%"

    Windows 7. Preserving audio quality is preferred over video quality.
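
    For comparison, a sketch of the commonly suggested still-image recipe on a newer ffmpeg build (where -loop replaced -loop_input); encoding at a very low frame rate keeps it quick, the audio is stream-copied to preserve quality, and -pix_fmt yuv420p keeps the output widely playable:

        ffmpeg -loop 1 -framerate 2 -i "%IMAGE%" -i "%AUDIO%" ^
               -c:v libx264 -tune stillimage -pix_fmt yuv420p ^
               -c:a copy -shortest "%VIDEO%"

    Whether YouTube accepts mp3-in-mp4 from -c:a copy is an assumption; if it refuses the upload, re-encoding with -c:a aac is the fallback.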

  • What is the fastest way to resize a large partition?

    - by Jook
    Due to a new HDD configuration, I am currently handling larger backup/resize tasks with partitions of around 900 GB, which are 70-90% full. Some background: the first thing I noticed was that Acronis/Western Digital TrueImage was extremely slow while running under Windows 7, even on high priority. To create a normal backup of 650 GB of data (on a 900 GB partition), it would have taken 3 days! The same task done with the boot-CD version of Acronis took about 2 hours (SATA3 copy from one disk to another, both around 110 MB/s). Now, after finishing all my backups, I want to remove some obsolete partitions and resize the leftovers to the full HDD size. Of course, this usually takes quite some time; in this case, extending this 900 GB partition to 931 GB (30+ GB from the front, 1+ GB from the end) will take around 6 hours using GParted! Had I known that earlier, I would have just restored the image. But no: first it showed a reasonable time of 1:45h and 0 of 1 operations, but after finishing the 1:45h it started again, this time with 4h to go, still 0 of 1 operations, only now it was copying instead of moving. Question: why does resizing a partition have to be this slow? I am asking for a good explanation. This has bugged me since I started partitioning: why does it have to copy all the data around; can't it just stay in place?!

  • What is the fastest and best way to convert an rmvb video to mp4/mkv without losing any quality?

    - by Eric Leung
    The file will be played on a popbox3d. My old method was to convert the video using VidCoder (an offshoot of HandBrake) with normal settings, but I've recently confirmed that this significantly reduces video and audio quality. I bumped the conversion quality up to 'High Profile', which produced a higher-quality video but raised the conversion time to about twice the video length (95 minutes to convert a 45-minute video) on a Core 2 Duo laptop. This is less than ideal when a large number of videos need to be converted. I have tried a direct remux using MKVToolNix, but this produced a file that refused to display video on the popbox3d, which is consistent with what another old thread reported: "it is possible to put RealMedia A/V in MKV container (used MKVtoolnix) - however, it is awkward to play later. RV40 is only suspected to be based on H.264 - simply put, it is not consistent with the MPEG-4 AVC specification." I have also read, in another old thread, that "under normal circumstances, [ffmpeg] should convert the video to .video.mp4 and the audio to (.wav then to) .audio.mp4, then mux the video and audio into a new .mp4 file and delete the temporary video-only and audio-only files," and I am currently trying to discover how this is done. Help? PS: I download a lot of series from Asia, and for some strange reason rmvb is a really popular format over there. Sometimes it's the only format available. Unfortunately, it's a format that is incompatible with the popbox3d, so I have to convert the files before I can watch them on my TV.
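
    A sketch of what that multi-step description boils down to as a single call on a modern ffmpeg build; since RV40/RealAudio streams cannot be stream-copied into an MP4 container, both must be re-encoded, so "without losing any quality" really means near-transparent (-crf 18 is a quality assumption, not a lossless guarantee):

        ffmpeg -i input.rmvb -c:v libx264 -crf 18 -preset fast -c:a aac -b:a 192k output.mp4

    A faster -preset trades file size for conversion speed, which may matter more than size when batch-converting a whole series.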
