Search Results

Search found 6634 results on 266 pages for 'fast fashion'.


  • Utility to index a directory?

    - by achacha
    Here is what I am trying to do: I have a directory (with sub-directories) of source files, and I need to index them so I can find files fast (find-as-I-type) and open them for compare/analysis. I don't want it to scan the content, just build a filename index for quick lookup. I do this when trying to determine whether a class exists in a given tree (we maintain a directory tree for each release, which means a lot of files), and sometimes I want to quickly check files to see how something was implemented. Most of these directories are on remote servers (sometimes on the other side of the world) or on a VM (which is on a server far away), so I only want to read the directory trees once; this is why running find every time is way too slow, and doing 'find . foo.txt' and then searching the output is a bit tedious. It's kind of like how "Find Resource" works in Eclipse after it indexes all files, but it's a bit of a chore to import/remove directories in Eclipse every time, and Eclipse is also very slow when dealing with remote volumes. Any suggestions are appreciated :)
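
    A minimal sketch of the index-once approach (in Python for illustration; os.walk, the pickle cache, and all names here are assumptions, not from the question):

        import os
        import pickle

        def build_index(root, cache_path="file_index.pkl"):
            """Walk the tree once (the slow, remote part) and cache every path."""
            index = []
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    index.append(os.path.join(dirpath, name))
            with open(cache_path, "wb") as f:
                pickle.dump(index, f)
            return index

        def find_as_you_type(index, fragment):
            """Case-insensitive substring match on the filename only."""
            fragment = fragment.lower()
            return [p for p in index if fragment in os.path.basename(p).lower()]

    Subsequent lookups then run entirely against the local cache, so the remote volume is only touched when the index is rebuilt.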

    Read the article

  • jquery animate boxshadow

    - by mstef
    http://jsfiddle.net/mstefanko/w5aAn/877/ Below, I'm achieving the effect I wanted, but with a pending issue: since I'm using a separate span positioned absolutely on top of a relatively positioned box, I cannot access the inputs until the animation is finished. I'm guessing the only way to alleviate this would be to do something similar by animating just the border of the outer box? But nothing I tried for animating box-shadow:inset was working.

    HTML:

        <div id="wow">
            <span id="pulse"></span>
            <input id="form-input"/>
            <input id="form-input"/>
        </div>

    CSS:

        #wow {
            width: 500px;
            height: 200px;
            display: inline-block;
            position: relative;
            border: 1px solid black;
        }
        #pulse {
            width: 100%;
            height: 100%;
            box-shadow: inset 0 0 20px #6c95c3;
            -moz-box-shadow: inset 0 0 10px #6c95c3;
            position: absolute;
            z-index: 20000;
        }

    JS:

        $('#pulse').stop().animate({"opacity": 0}, "fast");
        $('#pulse').effect("pulsate", { times: 4 }, 500, function() {
            $(this).remove();
        });
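
    One hedged workaround (an assumption about the goal, not from the post): if the span is purely decorative, letting clicks fall through it keeps the inputs usable while it animates:

        #pulse {
            pointer-events: none; /* clicks pass through to the inputs underneath */
        }

    pointer-events has limited support in older browsers, so it is worth checking against the targets the fiddle needs.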

    Read the article

  • populating an NSArray

    - by MoKaM
    I intend to make a program that does the following: create an NSMutableArray populated with the numbers 1 to 100,000; loop over some code that deletes certain elements of the array when certain conditions are met; store the resultant array. The above steps will themselves be looped over many times, so I need a fast way of making this 100,000-element array. What is the fastest way of doing it? Is there an alternative to iteratively populating it with a for loop, such as an NSMutableArray method that could do this quickly for me? Or perhaps I could build the array with the 100,000 numbers by any means the first time, and then create every new array (for step 1) using arrayWithArray: (is that a quicker way of doing it?). Or perhaps you have something completely different in mind that will achieve what I want. (Edit: read NSMutableArray for NSArray throughout this post.)
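
    A minimal Objective-C sketch of the build-once, copy-later idea (whether mutableCopy actually beats re-running the loop is an assumption to profile, not a guarantee):

        NSMutableArray *template = [NSMutableArray arrayWithCapacity:100000];
        for (NSUInteger i = 1; i <= 100000; i++) {
            [template addObject:[NSNumber numberWithUnsignedInteger:i]];
        }

        // Each outer iteration starts from a copy instead of re-running the loop.
        NSMutableArray *working = [template mutableCopy];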

    Read the article

  • jQuery and div tags used in an unordered list

    - by shan2on
    hello everyone, I'm not sure where my errors are lying, but here is my scenario... I have an unordered list in HTML that handles my main link bar:

        <ul id="navlist">
            <li><a href="/" id="current">home</a></li>
            <li><a href="#" id="search">search</a></li>
            <li><a href="#" id="users">login</a></li>
        </ul>

    What I want is to call each id and have jQuery handle each one and display it where specified in my CSS file... for now, let's just ignore my "home" link. Here is a snippet of my inline jQuery:

        $(document).ready(function() {
            $("a").click(function () {
                $("div#login").slideToggle("fast");
            });
        });

    Now, whichever div I specify, it calls that one; however it will only ever call the one div, from either link in the unordered list shown above. My goal is to call a search bar when search is clicked, and a login box when login is clicked. To my knowledge (noob), I believe I should use an <a> as a class, but that is what brings me here. I believe my error is in my jQuery and not my CSS since, as stated above, the same div is displayed whichever link triggers the function. Any help is greatly appreciated. Thanks in advance, shan2on
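
    A hedged sketch of scoping the handler to the clicked link (assuming each link id matches a corresponding div id, e.g. div#search and div#login; those div names are an assumption, not from the post):

        $(document).ready(function() {
            $("#navlist a").click(function() {
                // Use the clicked link's own id to pick the matching div.
                $("div#" + this.id).slideToggle("fast");
                return false;
            });
        });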

    Read the article

  • Are reads and (transactional) writes faster for entities of the same group than otherwise?

    - by indiehacker
    What advantage is there to designing child-parent relationships, which allow us to do writes in transactions, when there is no real concern about consistency, contention, and that sort of more complex issue? Does it make writes and reads faster? Consider my situation: there are many .png images that each reference one mosaic layer, and these .png images are written just once, by a single user. The user can design many mosaic layers, and her mosaic layers and referenced image entities are never changed/updated; they are just deleted some time in the future. Other users can come to the web project site and interactively view the mosaic layer in different layouts/configurations of the images as they play (query) with different criteria, so reads should be very fast. There is no real worry about contention, or about users conflicting with one another when writing new image entities. Because of that, I am assuming there is no "requirement" for the .png image entities to be grouped under their mosaic layer in a child-parent relationship. However, since the documentation says entities in a group are stored close to one another, if the many image entities were grouped as children of a single mosaic layer parent, would writing (in a transaction) and reading happen much faster?
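
    For reference, a minimal sketch of the modelling choice in question (google.appengine.ext.db style; the entity and property names are invented for illustration):

        from google.appengine.ext import db

        class MosaicLayer(db.Model):
            name = db.StringProperty()

        class PngImage(db.Model):
            filename = db.StringProperty()

        layer = MosaicLayer(name="demo")
        layer.put()

        # parent= puts the image in the layer's entity group; that is what
        # permits transactional writes across the layer and its images.
        img = PngImage(parent=layer, filename="tile-001.png")
        img.put()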

    Read the article

  • Design Advice Needed For Synonyms Database

    - by James J
    I'm planning to put together a database that can be used to query synonyms of words. The database will end up huge, so the idea is to keep things running fast. I've been thinking about how to do this, but my database design skills are not up to scratch these days. My initial idea was to have each word stored in one table, and then another table with a one-to-many relationship where each word can be linked to another word, and that table can be queried. The application I'm developing allows users to highlight a word and then type in, or select, some synonyms from the database for that word. The application learns from the user input, so if someone highlights "car" and types in "motor", the database would be updated to link the relationship if it doesn't exist already. What I don't want is for a user to type in the word "shop" and link it to the word "car". So I'm thinking I will need to add some sort of weight to each relationship. Eventually the synonyms users enter will be used so they can auto-select common synonyms for a certain word. Lower-weight words will not be displayed, so "shop" could never be a synonym of "car" unless it had a very high weight, and chances are nobody is going to do that. Does the above sound right? Can you offer any suggestions or improvements?
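
    A hedged sketch of that layout in SQL (MySQL syntax; every table and column name here is illustrative):

        CREATE TABLE words (
            id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            word VARCHAR(100) NOT NULL UNIQUE
        );

        CREATE TABLE synonyms (
            word_id    INT UNSIGNED NOT NULL,
            synonym_id INT UNSIGNED NOT NULL,
            weight     INT NOT NULL DEFAULT 1,
            PRIMARY KEY (word_id, synonym_id),
            FOREIGN KEY (word_id)    REFERENCES words(id),
            FOREIGN KEY (synonym_id) REFERENCES words(id)
        );

        -- A user links "car" (id 1) to "motor" (id 2): create or reinforce.
        INSERT INTO synonyms (word_id, synonym_id)
        VALUES (1, 2)
        ON DUPLICATE KEY UPDATE weight = weight + 1;

    The composite primary key doubles as the lookup index, and the weight column gives the learning behaviour described above.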

    Read the article

  • Speed up CSV export from a MySQL database query using PHP

    - by John
    OK, so I've got a web system (built on CodeIgniter and running on MySQL) that allows people to query a database of postal address data by making selections in a series of forms until they arrive at the selection they want; pretty standard stuff. They can then buy that information and download it via the system. The queries run very fast, but when it comes to applying a query to the database and exporting the result to CSV, once the datasets reach around the 30,000-record mark (each row has around 40 columns, of which about 20 are populated with on average 20 characters of data per cell), it can take 5 or so minutes to export. So, my question is: what is the main cause of the slowness? Is the resultset of data from the query so large that it is running into memory issues, and should I therefore allow much more memory to the process? Or is there a much more efficient way of exporting to CSV from a MySQL query that I'm not using? Should I save the contents of the query to a temp table and simply export the temp table to CSV? Or am I going about this all wrong? Also, is the fact that I'm using CodeIgniter's Active Record prohibitive here, due to the way it stores the resultset? Any advice is welcome! Thank you for reading!
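
    One hedged option (assuming the MySQL server can write files locally and the account has the FILE privilege; the table and column names are placeholders): skip PHP's row-by-row loop and let MySQL emit the CSV itself:

        SELECT *
        INTO OUTFILE '/tmp/export.csv'
            FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
            LINES TERMINATED BY '\n'
        FROM postal_addresses
        WHERE region_id = 42;

    If the export must stream through PHP instead, fetching rows unbuffered and writing them with fputcsv as they arrive avoids holding all 30,000 rows in memory at once.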

    Read the article

  • Responsive: two different toggles on same element

    - by Mathijs Delva
    I'm having difficulties with the following problem. For a responsive website, I need the same toggle element to drive one toggle system at one window width and another toggle system at a second window width. I have the following snippets:

    1. A simple hover for a language dropdown, which should run at resolutions greater than 980px:

        $('#clickme').hover(function() {
            $(this).parent().find("#select-language").show();
            $(this).find("> a span").css({"opacity": "0.5"});
        }, function() {
            $(this).parent().find("#select-language").hide();
            $(this).find("> a span").css({"opacity": "1"});
        });

    2. A simple click for the same language toggle, which should run at resolutions smaller than 980px:

        jQuery('#clickme').click(function() {
            jQuery("#select-language-mobile").slideToggle("fast");
        });

    I need to combine these two, so that at one resolution the click handler fires and at the other the hover handler does. Can anybody help me with this?
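
    A hedged sketch of one way to gate the handlers (checking the width when the event fires; the 980px breakpoint is from the post, everything else is illustrative):

        $('#clickme').hover(function() {
            if ($(window).width() > 980) {
                $(this).parent().find('#select-language').show();
            }
        }, function() {
            if ($(window).width() > 980) {
                $(this).parent().find('#select-language').hide();
            }
        });

        $('#clickme').click(function() {
            if ($(window).width() <= 980) {
                $('#select-language-mobile').slideToggle('fast');
            }
        });

    Binding both and branching at event time avoids having to rebind handlers when the window is resized across the breakpoint.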

    Read the article

  • What are the steps for SQL optimization and changes without affecting the live system?

    - by Space Cracker
    We have a big portal built using SharePoint 2007, ASP.NET 3.5, and SQL Server 2005. Many developers have worked on it since 01/2008, and we are now doing a huge analysis of the current SQL databases (not the SharePoint DBs) to optimize and enhance them. The main DB has about 330 tables and 1,720 stored procedures (SPs) created from 01/2008 until now. Many table and column names are very long and we want to shorten them. We found the SP names are written in about 25 different formats :( and some of them are very complex, so we want to rename them; many SP parameters need to be renamed too. One of the biggest tables is the registered-user table, which will be split into more than one table for optimization, and many column names will be changed. I searched for a way to rename tables and columns and found the SQL Refactor tool, which I am still trying out. My questions: Is SQL Refactor the best tool for renaming, or is there another one? If I want to do it manually, are there any references or best practices for that? How can I make such changes in a fast and stable way? I am looking for recommendations and case studies, if any exist.
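
    For the manual route, a hedged sketch (sp_rename and synonyms are standard SQL Server 2005; all object names below are invented): rename, then leave a synonym behind so untouched callers keep working during the migration:

        EXEC sp_rename 'dbo.tblRegUsrDtl', 'RegisteredUserDetail';
        EXEC sp_rename 'dbo.RegisteredUserDetail.usr_nm', 'UserName', 'COLUMN';

        -- Compatibility alias for code still using the old table name:
        CREATE SYNONYM dbo.tblRegUsrDtl FOR dbo.RegisteredUserDetail;

    Running this against a restored copy of the live database first, with the dependency list from sys.sql_dependencies as a checklist, keeps the live system out of the loop until the script is proven.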

    Read the article

  • Split user.config into different files for faster saving (at runtime)

    - by HorstWalter
    In my C# Windows Forms application (.NET 3.5 / VS 2008) I have 3 settings files resulting in one user.config file. One settings file consists of larger data but is rarely changed; the frequently changed data are very few. However, since saving the settings always writes the whole (XML) file, it is always "slow":

        SettingsSmall.Default.Save(); // slow, even though SettingsSmall holds little data

    Could I configure the settings somehow to produce two files, so that:

        SettingsSmall.Default.Save(); // should be fast
        SettingsBig.Default.Save();   // could be slow, is seldom saved

    I have seen that I can use the SectionInformation class for further customizing, but what would be the easiest approach for me? Is this possible by just changing the app.config (configSections)?

    --- added information about App.config

    The reason I get one file is probably the configSections in the App.config. This is how it looks:

        <configSections>
            <sectionGroup name="userSettings" type="System.Configuration.UserSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
                <section name="XY.A.Properties.Settings2Class" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" allowExeDefinition="MachineToLocalUser" requirePermission="false" />
                <section name="XY.A.Properties.Settings3Class" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" allowExeDefinition="MachineToLocalUser" requirePermission="false" />
            </sectionGroup>
        </configSections>

    I got these sections when I added the 2nd and 3rd settings files. I did not pay any attention to this, so it was somehow the default of VS 2008. The single user.config has these 3 sections; it is absolutely transparent. I just do not know how to tell the App.config to create three independent files instead of one. I have "played around" with the config above, but e.g. when I remove the configSections my application terminates with an exception.
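
    For what it's worth, a hedged sketch of the usual extension point (SettingsProvider is the documented base class for custom settings storage; this skeleton only shows the shape, and the per-class file logic is left out):

        using System;
        using System.Configuration;

        public class SplitFileSettingsProvider : SettingsProvider
        {
            public override string ApplicationName { get; set; }

            public override SettingsPropertyValueCollection GetPropertyValues(
                SettingsContext context, SettingsPropertyCollection properties)
            {
                // Read from a file chosen per settings class.
                throw new NotImplementedException();
            }

            public override void SetPropertyValues(
                SettingsContext context, SettingsPropertyValueCollection values)
            {
                // Write only this class's file, keeping each Save() small.
                throw new NotImplementedException();
            }
        }

    Attaching it to one settings class via [SettingsProvider(typeof(SplitFileSettingsProvider))] routes that class's Save() to the custom file, leaving the other classes on the default provider.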

    Read the article

  • Simple Oracle File repository with folder hierarchy

    - by Ope
    I have an application that stores a large number of files (XML and binary) in folder hierarchies. Currently the main method is storing them in the file system or using a legacy CMS, which we want to get rid of. The CMS supports Oracle, and a customer wants to keep the files in Oracle because of enterprise policies (backup etc.). The question is: is there a simple implementation of a file repository with folder hierarchy for Oracle? What I am looking for is a small .NET component or example code (PL/SQL and/or .NET) with the following operations:

    - Create, Delete, Exists for folders
    - CRUD for files
    - Move, and potentially Copy, for a file or directory
    - Access to files and folders with paths like "/root/folder1/folder2/file.xml"
    - Getting all the files and folders in a folder, and potentially the entire directory tree
    - Tree traversal; getting the parent, all children etc. needs to be fast

    I need the implementation in .NET, but if it were just the stored procedures, I could write the .NET calling code. I have pointers to generic articles about creating hierarchies in a DB, so if I need to do it from scratch, I know where to start. What I am asking is: is there already an implementation I could take without building this from scratch? It seems like such a generic requirement... If the answer is a CMS, document management system or the like, it should be open source or at least quite cheap (some hundreds per server), and it should deploy by XCopy, hopefully with only a couple of DLLs. I do not need (or want) a full-featured big CMS with dozens of DLLs, and especially not an MSI installation. I have tried to google this, but words like "repository", "CMS", and "file hierarchy" give so many answers that the searches are pretty much useless. Thanks, OPe
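
    In case it has to be built from scratch after all, a hedged sketch of the core table plus the Oracle-side tree read (CONNECT BY is standard Oracle; the names are illustrative):

        CREATE TABLE repo_node (
            id        NUMBER PRIMARY KEY,
            parent_id NUMBER REFERENCES repo_node(id),
            name      VARCHAR2(255) NOT NULL,
            is_folder CHAR(1) DEFAULT 'N' NOT NULL,
            content   BLOB
        );

        -- Everything under the root, depth-first:
        SELECT id, parent_id, name, LEVEL
        FROM repo_node
        START WITH parent_id IS NULL
        CONNECT BY PRIOR id = parent_id;

    Path lookups like "/root/folder1/folder2/file.xml" would walk this one segment at a time, so an index on (parent_id, name) is the piece that keeps traversal fast.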

    Read the article

  • How to get results efficiently out of an Octree/Quadtree?

    - by Reveazure
    I am working on a piece of 3D software that sometimes has to perform intersections between massive numbers of curves (sometimes ~100,000). The most natural way to do this is an N^2 bounding-box check, after which the curves whose bounding boxes overlap get intersected. I heard good things about octrees, so I decided to try implementing one to see if I would get improved performance. Here's my design: each octree node is implemented as a class with a list of subnodes and an ordered list of object indices. When an object is added, it goes into the lowest node that entirely contains the object, or into some of that node's children if the object doesn't fill all of them. Now, what I want to do is retrieve all objects that share a tree node with a given object. To do this, I traverse all tree nodes, and if a node contains the given index, I add all of its other indices to an ordered list. This is efficient because the indices within each node are already ordered, so finding out whether each index is already in the list is fast. However, the list ends up having to be resized repeatedly, and this takes up most of the time in the algorithm. So what I need is some kind of tree-like data structure that will let me add ordered data efficiently while also being memory-efficient. Any suggestions?
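
    A hedged sketch of one option (C++; std::set is a balanced tree, so it absorbs ordered, deduplicated inserts without the reallocation described above; whether it beats a vector with reserve() is worth profiling):

        #include <algorithm>
        #include <set>
        #include <vector>

        struct Node {
            std::vector<int>  indices;   // sorted object indices in this node
            std::vector<Node> children;
        };

        // Gather every index that shares a node with `target`.
        void collectSharers(const Node& node, int target, std::set<int>& out) {
            if (std::binary_search(node.indices.begin(), node.indices.end(), target))
                out.insert(node.indices.begin(), node.indices.end());
            for (const Node& child : node.children)
                collectSharers(child, target, out);
        }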

    Read the article

  • How to debug properly and find causes for crashes?

    - by Newbie
    I don't know what to do anymore... it's hopeless. I'm getting tired of guessing what's causing the crashes. Recently I noticed some OpenGL calls crash programs randomly on some gfx cards, so I am getting really paranoid about what can cause crashes now. The bad thing about this crash is that it happens only after a long time of using the program, so I can only guess at the problem. I can't remember what changes I made that may cause the crashes; it's been so long. But luckily the previous version doesn't crash, so I could just copy-paste some code and waste 10 hours to see at which point it starts crashing... I don't think I want to do that yet. The program crashes after I make it process the same files about 5 times in a row; each time it uses about 200 megabytes of memory in the process. It crashes at random times during and after the reading process. I have created a "safe" free() function: it checks that the pointer is not NULL, frees the memory, and then sets the pointer to NULL. Isn't this how it should be done? I watched the task manager memory usage, and just before the crash the program started to eat twice as much memory as usual. Also, loading became exponentially slower every time I loaded the files: the first few loads didn't seem much slower than each other, but then the load times started rapidly doubling. What should this tell me about the crash? Also, do I have to manually free C++ vectors by using clear()? Or are they freed automatically, for example if I allocate a vector inside a function, will it be freed every time the function ends? I am not storing pointers in the vector. In short: I want to learn to catch the damn bugs as fast as possible. How do I do that? Using Visual Studio 2008.
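
    For reference, a minimal sketch of the "safe free" pattern described (plain C; note that free(NULL) is already defined as a no-op by the standard, so the check mainly documents intent):

        #include <stdlib.h>

        void safe_free(void **p) {
            if (p != NULL && *p != NULL) {
                free(*p);
                *p = NULL;   /* guards against double-free through this pointer */
            }
        }

    Nulling the pointer only helps callers that free through this one variable; copies of the pointer elsewhere can still dangle, which is a common source of exactly the delayed, random crashes described.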

    Read the article

  • jQuery $.each() problem

    - by Volmar
    Hi, I'm making a WordPress plugin, and I have a function where I import images. This is done with a $.each() loop that calls .load() on every iteration. The page the .load() call requests downloads the image and returns a number, and that number is written into a span element. The source and destination arrays are read from the LI elements of hidden ULs. This way the user sees a counter counting from zero up to the total number of images being imported. You can see my jQuery code below:

        jQuery(document).ready(function($) {
            $('#mrc_imp_img').click(function() {
                var dstA = [];
                var srcA = [];
                $("#mrc_dst li").each(function() { dstA.push($(this).text()); });
                $("#mrc_src li").each(function() { srcA.push($(this).text()); });
                $.each(srcA, function(i, v) {
                    $('#mrc_imgimport span.fc').load('/wp-content/plugins/myplugin/imp.php?num=' + i + '&dst=' + dstA[i] + '&src=' + srcA[i]);
                });
            });
        });

    This works pretty well, but sometimes the responses come back out of order, so the counter shows a number lower than the previous one, and almost every time a lower number replaces the final one at the end. How can I prevent this from happening, and how can I hide '#mrc_imp_img' when the $.each() loop is done?
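
    A hedged sketch of chaining the requests so the counter can only move forward (each load starts only after the previous one finishes; names reuse the post's):

        function importNext(i, srcA, dstA) {
            if (i >= srcA.length) {
                $('#mrc_imp_img').hide();   // all images imported
                return;
            }
            $('#mrc_imgimport span.fc').load(
                '/wp-content/plugins/myplugin/imp.php?num=' + i +
                '&dst=' + dstA[i] + '&src=' + srcA[i],
                function() { importNext(i + 1, srcA, dstA); }  // next only after this one lands
            );
        }

    Serializing the requests trades some throughput for a counter that is always monotonic.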

    Read the article

  • How to store and synchronize a big list of strings

    - by Joel
    I have a large database table in SQL Express on Windows, with a particular field of interest, 'code'. I have an Apache web server with MySQL on Linux. The web application on the Linux box needs access to the list of all codes; the only thing it will use the list for is checking whether a given code exists. Having the Linux server call out to the Windows server is impractical, as the Windows server is behind a NATed office internet connection and may not always be accessible. I have set things up so the Windows server pushes the list of codes to the web server by means of a simple HTTP POST request. However, at this point I have not implemented the storage of the codes on the Linux box. Should I store them in a MySQL table with a single field 'code'? Then I get fast indexed lookups, O(1); however, I think synchronization will be an issue: given an updated list of codes pushed from the Windows box, how would I optimally synchronize the list with the database? TRUNCATE followed by INSERT? Should I instead store them in a flat file? Then I have O(n) lookup time rather than O(1), plus an extra constant-time overhead, as I will be processing the file in Ruby. However, synchronization is easy: simply replace the file.
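
    A hedged sketch of one sync approach (MySQL; the table names are illustrative): load the pushed list into a staging table and swap it in, so lookups never observe a half-loaded table the way TRUNCATE-then-INSERT can:

        CREATE TABLE codes_staging LIKE codes;

        -- Bulk-load the pushed list into the staging table:
        INSERT INTO codes_staging (code) VALUES ('A1'), ('B2'), ('C3');

        -- Atomic swap, then discard the old data:
        RENAME TABLE codes TO codes_old, codes_staging TO codes;
        DROP TABLE codes_old;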

    Read the article

  • Using "as bool?" instead of "object something = ViewState["hi"]"

    - by Programmin Tool
    So I'm going through old code (2.0) and I came across this:

        object isReviewingValue = ViewState["IsReviewing"];
        if (isReviewingValue is bool)
        {
            return (bool)isReviewingValue;
        }

    My first thought was to use the "as" keyword to avoid the unneeded (bool)isReviewingValue cast. But "as" only works with non-value types. No problem: I just went ahead and did this:

        bool? isReviewingValue = ViewState["IsReviewing"] as bool?;
        if (isReviewingValue.HasValue)
        {
            return isReviewingValue.Value;
        }

    Question is: besides looking a bit more readable, is this in fact better?

    EDIT: So this is getting more interesting. I decided to test it using a simple Stopwatch, and it turns out that the second is much faster... which, after reading some of the responses here, I didn't expect at all. I was thinking for sure my way was much slower. Tell me what I did wrong:

        public Stopwatch AsRun()
        {
            Stopwatch watch = new Stopwatch();
            watch.Start();
            for (Int32 loopCounter = 0; loopCounter < 10000; loopCounter++)
            {
                Boolean? test = true as Boolean?;
                if (test.HasValue)
                {
                    Boolean something = test.Value;
                }
            }
            watch.Stop();
            return watch;
        }

        public Stopwatch ObjectIsRun()
        {
            Stopwatch watch = new Stopwatch();
            watch.Start();
            for (Int32 loopCounter = 0; loopCounter < 10000; loopCounter++)
            {
                Object test = true;
                if (test is Boolean)
                {
                    Boolean something = (Boolean)test;
                }
            }
            watch.Stop();
            return watch;
        }

    Every time I run these methods against each other, AsRun is twice as fast.
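
    One observation worth adding (a C# compiler fact, not from the post): the two loops don't convert the same thing. "true as Boolean?" never touches the heap, while "Object test = true" boxes on every iteration, so the benchmark partly measures boxing rather than the is-versus-as choice. A closer comparison would start both paths from an already-boxed object:

        Object boxed = true;                 // box once, outside the timed loop

        Boolean? viaAs = boxed as Boolean?;  // unbox attempt via "as"
        if (viaAs.HasValue) { Boolean a = viaAs.Value; }

        if (boxed is Boolean)                // unbox via "is" + cast
        { Boolean b = (Boolean)boxed; }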

    Read the article

  • what are good ways to implement search and search results using ajax?

    - by Amr ElGarhy
    I have some text boxes on a page, and on the same page there is a table ('grid') for holding the search results. When the user starts editing any of the text boxes, the search must start by sending all the textbox values to the server (Ajax), and the results come back to fill the grid below. Notes: the grid should support paging and sorting by clicking on headers, and it will contain some controls beside the results, such as checkboxes for boolean values and links for opening details in another page. I know several ways to do this, including: 1. an UpdatePanel around all of these controls, and that's it ("fast dirty solution"); 2. send the search criteria in an Ajax request (using jQuery's post function, for example), get back a JSON result, and draw the grid with a template ("clean, but it will take time to finish and will be harder to edit later"); 3. ... My question is: what do you think is the best choice for implementing this scenario? I face this scenario a lot and want to know which implementation is better regarding performance, optimization, and time to finish.
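
    A hedged sketch of option 2's client side (jQuery; the URL, field names, and JSON shape are all placeholders):

        $('#search-form input').keyup(function() {
            $.post('/search', $('#search-form').serialize(), function(results) {
                var rows = $.map(results, function(r) {
                    return '<tr><td>' + r.name + '</td>' +
                           '<td><input type="checkbox"' + (r.active ? ' checked' : '') + '/></td>' +
                           '<td><a href="/details/' + r.id + '">details</a></td></tr>';
                });
                $('#results tbody').html(rows.join(''));
            }, 'json');
        });

    Debouncing the keyup (waiting a couple of hundred milliseconds after the last keystroke) keeps this from firing one request per character.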

    Read the article

  • Update table with index is too slow

    - by pauloya
    Hi, I was watching the Profiler on a live system of our application and saw that an update we run periodically (every second) was quite slow: it took around 400 ms every time. The query includes this update (which is the slow part):

        UPDATE BufferTable
        SET LrbCount = LrbCount + 1, LrbUpdated = getdate()
        WHERE LrbId = @LrbId

    This is the table:

        CREATE TABLE BufferTable (
            LrbId [bigint] IDENTITY(1,1) NOT NULL,
            ...
            LrbInserted [datetime] NOT NULL,
            LrbProcessed [bit] NOT NULL,
            LrbUpdated [datetime] NOT NULL,
            LrbCount [tinyint] NOT NULL,
        )

    The table has two non-unique, non-clustered indexes with the fields in this order:

    - Index1: (LrbProcessed, LrbCount)
    - Index2: (LrbInserted, LrbCount, LrbProcessed)

    When I looked at this, I thought the problem would come from Index1, since LrbCount changes a lot and that changes the order of the data in the index. But after deactivating Index1, the query took the same time as initially. Then I rebuilt Index1 and deactivated Index2, and this time the query was very fast. It seems to me that Index2 should be faster to update: the order of the data shouldn't change, since LrbInserted is not modified. Can someone explain why Index2 is much heavier to update than Index1? Thank you!
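
    A hedged way to see where the write cost actually lands (a standard SQL Server 2005 DMV; only the table name comes from the post):

        SELECT i.name, s.leaf_insert_count, s.leaf_update_count, s.leaf_delete_count
        FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('BufferTable'), NULL, NULL) AS s
        JOIN sys.indexes AS i
            ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id;

    Comparing the per-index leaf counters before and after a burst of updates shows which index is absorbing the churn.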

    Read the article

  • NHibernate: Using value tables for optimization AND dynamic join

    - by Kostya
    Hi all, my situation is this: there are two entities with a many-to-many relation, e.g. Products and Categories. Categories also have a hierarchical, tree-like structure. I need to select all products that belong to one concrete category together with all its children (the whole branch). So I use the following SQL statement:

        SELECT * FROM Products p
        WHERE p.ID IN (
            SELECT DISTINCT pc.ProductID
            FROM ProductsCategories pc
            INNER JOIN Categories c ON c.ID = pc.CategoryID
            WHERE c.TLeft >= 1 AND c.TRight <= 33378
        )

    But with a big data set this query takes very long to execute, and I found a way to optimize it; look at this:

        DECLARE @CatProducts TABLE (
            ProductID int NOT NULL
        )

        INSERT INTO @CatProducts
        SELECT DISTINCT pc.ProductID
        FROM ProductsCategories pc
        INNER JOIN Categories c ON c.ID = pc.CategoryID
        WHERE c.TLeft >= 1 AND c.TRight <= 33378

        SELECT * FROM Products p
        INNER JOIN @CatProducts cp ON cp.ProductID = p.ID

    This query executes very fast, but I don't know how to do the same with NHibernate. Note that I need to use only ICriteria because of dynamic filtering/ordering. If someone knows a solution for this, it will be fantastic, but I'll be pleased with any suggestions, of course. Thank you ahead, Kostya
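
    For reference, a hedged sketch of the subquery form through ICriteria (DetachedCriteria and Subqueries are standard NHibernate; the entity and property names are guesses mapped from the SQL above, and this does not reproduce the table-variable trick):

        using NHibernate.Criterion;
        using NHibernate.Transform;

        var categoryIds = DetachedCriteria.For<Category>()
            .Add(Restrictions.Ge("TLeft", 1))
            .Add(Restrictions.Le("TRight", 33378))
            .SetProjection(Projections.Id());

        var products = session.CreateCriteria<Product>()
            .CreateAlias("Categories", "c")           // the many-to-many collection
            .Add(Subqueries.PropertyIn("c.Id", categoryIds))
            .SetResultTransformer(new DistinctRootEntityResultTransformer())
            .List<Product>();

    Because it is still ICriteria, dynamic filtering and ordering can be added to the outer criteria as usual.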

    Read the article

  • Remove the hash after Ajax loading (I'm ajaxing wordpress 8-) )

    - by Alberto
    Hi everybody, I followed this great tutorial to "ajaxify" my blog: http://www.deluxeblogtips.com/2010/05/how-to-ajaxify-wordpress-theme.html. But it creates some problems, and I think the problem is in the hash that the Ajax code creates. So, after the content is loaded, how can I remove the hash from the URL? I copy my code here:

        jQuery(document).ready(function($) {
            var $mainContent = $("#content"),
                siteUrl = "http://" + top.location.host.toString(),
                url = '';

            $(document).delegate("a[href^='" + siteUrl + "']:not([href*=/wp-admin/]):not([href*=/wp-login.php]):not([href$=/feed/]):not([href*=/go.php]):not(.comment-reply-link)", "click", function() {
                location.hash = this.pathname;
                $('html, body').animate({scrollTop: 0}, 'fast');
                return false;
            });

            $("#searchform").submit(function(e) {
                location.hash = '?s=' + $("#search").val();
                e.preventDefault();
            });

            $(window).bind('hashchange', function() {
                url = window.location.hash.substring(1);
                if (!url) {
                    return;
                }
                url = url + " #inside";
                $mainContent.html('<div id="loader">Caricamento in corso...</div>').load(url, function() {
                    //$mainContent.animate({opacity: "1"});
                    scriptss();
                });
            });

            $(window).trigger('hashchange');
        });

    Thank you all very much!
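
    A hedged sketch of clearing the hash once the content has arrived (history.replaceState is an HTML5 API, so this silently does nothing in older browsers):

        $mainContent.load(url, function() {
            scriptss();
            if (window.history && window.history.replaceState) {
                // Rewrite the address bar in place, keeping path and query intact.
                history.replaceState(null, document.title,
                    window.location.pathname + window.location.search);
            }
        });

    Note that dropping the hash also defeats the hashchange-based navigation from the tutorial (back/forward buttons rely on it), which may be part of the trade-off to weigh.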

    Read the article

  • UIDs for data objects in MySQL

    - by Callash
    Hi there, I am using C++ and MySQL. I have data objects I want to persist to the database, and they need a unique ID for identification purposes. The question is: how do I get this unique ID? Here is what I came up with:

    1. Use the auto_increment feature of MySQL. But how do I get the ID back? I am aware that MySQL offers the "SELECT LAST_INSERT_ID()" feature, but wouldn't that be a race condition, since two objects could be inserted quite quickly after each other? Also, there is nothing else that makes the objects discernible: two objects could be created at pretty much the same time with exactly the same data.

    2. Generate the UID on the C++ side. No dice either: there are multiple programs that write to and read from the database which do not know of each other.

    3. Insert with MAX(uid)+1 as the uid value. But then I basically have the same problem as in 1), because we still have the race condition.

    Now I am stumped. I assume this is a problem other people have run into as well, but so far I have not found any answers. Any ideas?
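
    On point 1, one fact worth knowing (documented MySQL behaviour, not from the post): LAST_INSERT_ID() is tracked per connection, so inserts from other clients cannot interleave with it. A minimal sketch with the MySQL C API (the table and column names are invented):

        #include <mysql/mysql.h>

        /* `conn` is an already-connected MYSQL* handle. */
        unsigned long long insert_object(MYSQL *conn) {
            mysql_query(conn, "INSERT INTO objects (payload) VALUES ('data')");
            /* Per-connection value: concurrent writers on other
               connections do not affect it. */
            return mysql_insert_id(conn);
        }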

    Read the article

  • Python2.7: How can I speed up this bit of code (loop/lists/tuple optimization)?

    - by user89
    I repeat the following idiom again and again: I read from a large file (sometimes up to 1.2 million records!) and store the output in an SQLite database. Putting stuff into the SQLite DB seems to be fairly fast.

        def readerFunction(recordSize, recordFormat, connection, outputDirectory, outputFile, numberOfObjects):
            insertString = "insert into NODE_DISP_INFO(node, analysis, timeStep, H1_translation, H2_translation, V_translation, H1_rotation, H2_rotation, V_rotation) values (?, ?, ?, ?, ?, ?, ?, ?, ?)"
            analysisNumber = int(outputDirectory[-3:])  # last three characters of the path hold the analysis number
            outputFileObject = open(os.path.join(outputDirectory, outputFile), "rb")
            outputFileObject, numberOfRecordsInFileObject = determineNumberOfRecordsInFileObjectGivenRecordSize(recordSize, outputFileObject)
            numberOfRecordsPerObject = numberOfRecordsInFileObject // numberOfObjects

            loop1StartTime = time.time()
            for i in range(numberOfRecordsPerObject):
                processedRecords = []
                loop2StartTime = time.time()
                for j in range(numberOfObjects):
                    fout = outputFileObject.read(recordSize)
                    processedRecords.append(tuple([j + 1, analysisNumber, i] + [x for x in list(struct.unpack(recordFormat, fout))]))
                loop2EndTime = time.time()
                print "Time taken to finish loop2: {}".format(loop2EndTime - loop2StartTime)

                dbInsertStartTime = time.time()
                connection.executemany(insertString, processedRecords)
                dbInsertEndTime = time.time()
            loop1EndTime = time.time()
            print "Time taken to finish loop1: {}".format(loop1EndTime - loop1StartTime)

            outputFileObject.close()
            print "Finished reading output file for analysis {}...".format(analysisNumber)

    When I run the code, it seems that "loop 2" and "inserting into the database" are where most of the execution time is spent. The average "loop 2" time is 0.003 s, but it runs up to 50,000 times in some analyses. The time spent putting stuff into the database is about the same: 0.004 s. Currently I insert into the database every time loop 2 finishes, so that I don't have to worry about running out of RAM. What could I do to speed up "loop 2"?
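
    A hedged tweak for loop 2 (standard library only; the size of the gain is an assumption to re-profile): precompile the record format with struct.Struct so the format string is parsed once, and build the tuple directly:

        import struct

        record = struct.Struct(recordFormat)   # parse the format string once

        for j in range(numberOfObjects):
            fout = outputFileObject.read(recordSize)
            # Struct.unpack already returns a tuple, so concatenation
            # avoids the list() + list-comprehension + tuple() round trip.
            processedRecords.append((j + 1, analysisNumber, i) + record.unpack(fout))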

    Read the article

  • PHP site scheduling Java execution?

    - by obfuscation
    I'm trying to get started on combining my (slightly limited) PHP experience with my (better) Java experience, in a project where I need to allow uploads of Java source files to the server, which the server then compiles by running javac on them. Then, at a set time (e.g. specified at upload), the server needs to run the compiled program once, which will generate some database info for the PHP site to display. To describe my current programming abilities: I have made many desktop Java programs and am confident in "pure" Java, but so far I have only undertaken a couple of PHP projects (including using the CodeIgniter framework). My motivation for using PHP as the frontend is that I know it is very fast and lightweight, and I will be able to display the results I need very easily with it (a simple DB readout). Ideally, the technology used should be something I can develop on a localhost (e.g. WAMP, Tomcat etc.). Is there any advice you could give on what technology to use to bridge this gap, and what resources could help in using it? I have looked at a few, but have struggled to find documentation covering what I need.
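
    A hedged sketch of the simplest bridge (PHP shelling out plus a cron-driven runner; every path, table, and field name here is a placeholder, and $pdo is assumed to be an existing PDO connection):

        <?php
        // On upload: compile the source and keep javac's messages.
        $output = shell_exec('javac /var/uploads/Job.java 2>&1');

        // Queue the run for the requested time; a cron job polls this table
        // and executes `java` on due entries, writing results to the DB.
        $stmt = $pdo->prepare('INSERT INTO jobs (class_dir, main_class, run_at) VALUES (?, ?, ?)');
        $stmt->execute(['/var/uploads', 'Job', $_POST['run_at']]);

    Note that compiling and running arbitrary uploaded Java on the server is a large security surface, so sandboxing (a restricted user, ulimits, or a container) belongs in the design from the start.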

    Read the article

  • What db fits me?

    - by afvasd
    Dear everyone, I am currently using MySQL, and I am finding that my schema is getting incredibly complicated. I am looking for a new DB that will suit my needs. Let's assume I am building a news aggregator (which collects news from multiple websites). I run algorithms to determine whether two news items from different sites are actually referring to the same topic, and use them to cluster news together. The relationship is depicted below:

        cluster
        \--news1
           \--word1
           \--word2
        \--news2
           \--word3
        \--news3
           \--word1
           \--word3

    Then I apply some magic to determine the importance of each word. Summing the importance of each word gives me the importance of a news article; summing the importance of each news article gives me the importance of a cluster. Note that above clusters there are also subgroups (split by region etc.) and categories (sports etc.), whose importance for a particular day I also have to determine. I have used views in the past to do this, but I realized that views are very slow, so I normally insert into an actual table and index it for better performance. As you can see, this leads to multiple derived tables like (cluster, importance), (news, importance), (words, importance), which can get pretty messy. Also, the "importance" metric will change. It has become increasingly difficult to alter tables and update data (for which I am using TRUNCATE TABLE and then inserting from scratch). I am currently looking into something schemaless like MongoDB. I do not need distributedness. I would very much like something reasonably fast (and indexable) and a lot more flexible than a traditional RDBMS. Also, I need something with some kind of ORM, because I personally like ORMs a lot; I am currently using SQLAlchemy. Please help!
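
    For a sense of the fit, a hedged sketch of one cluster as a single MongoDB document (all field names are invented; importances are stored inline instead of in derived tables):

        {
          "_id": "cluster-42",
          "region": "eu",
          "category": "sports",
          "importance": 17.3,
          "news": [
            { "title": "news1", "importance": 9.1,
              "words": [ { "w": "word1", "imp": 4.2 }, { "w": "word2", "imp": 4.9 } ] },
            { "title": "news2", "importance": 8.2,
              "words": [ { "w": "word3", "imp": 8.2 } ] }
          ]
        }

    Recomputing a changed importance metric then becomes an update over documents rather than a TRUNCATE-and-rebuild of several derived tables, and MongoDB can index nested fields like "news.words.w" for the lookups.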

    Read the article

  • Replacing repetitively occurring loops with eval in JavaScript - good or bad?

    - by Herc
    Hello stackoverflow! I have a certain loop occurring several times in various functions in my code. To illustrate with an example, it's pretty much along the lines of the following:

        for (var i = 0; i <= 5; i++) {
            function1(function2(arr[i], i), $('div' + i));
            $('span' + i).value = function3(arr[i]);
        }

    where i is the loop counter, of course. For the sake of reducing my code size and avoiding repeating the loop declaration, I thought I should replace it with the following:

        function loop(s) {
            for (var i = 0; i <= 5; i++) {
                eval(s);
            }
        }
        [...]
        loop("function1(function2(arr[i],i),$('div'+i));$('span'+i).value = function3(arr[i]);");

    Or should I? I've heard a lot about eval() slowing code execution, and I'd like this to run as fast as a proper loop even in the Nintendo DSi browser, but I'd also like to cut down on code. What would you suggest? Thank you in advance!
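
    A hedged alternative that keeps the size savings without eval (plain JavaScript; passing the body as a function lets the engine parse it once instead of re-evaluating a string on every iteration):

        function loop(fn) {
            for (var i = 0; i <= 5; i++) {
                fn(i);
            }
        }

        loop(function(i) {
            function1(function2(arr[i], i), $('div' + i));
            $('span' + i).value = function3(arr[i]);
        });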

    Read the article
