Search Results

Search found 6587 results on 264 pages for 'slow motion'.


  • MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?

    - by apenwarr
    I'm trying to run the code below to insert a whole lot of records (from a file with a weird file format) into my Access 2003 database from VBA. After many, many experiments, this code is the fastest I've been able to come up with: it does 10,000 records in about 15 seconds on my machine. At least 14.5 of those seconds (i.e. almost all the time) is spent in the single call to UpdateBatch. I've read elsewhere that the JET engine doesn't support UpdateBatch, so maybe there's a better way to do it. Now, I would just think the JET engine is plain slow, but that can't be it: after generating the 'testy' table with the code below, I right-clicked it, picked Export, and saved it as XML. Then I right-clicked, picked Import, and reloaded the XML. Total time to import the XML file? Less than one second, i.e. at least 15x faster. Surely there's an efficient way to insert data into Access that doesn't require writing a temp file?

        Sub TestBatchUpdate()
            CurrentDb.Execute "create table testy (x int, y int)"

            Dim rs As New ADODB.Recordset
            rs.CursorLocation = adUseServer
            rs.Open "testy", CurrentProject.AccessConnection, _
                adOpenStatic, adLockBatchOptimistic, adCmdTableDirect

            Dim n, v
            n = Array(0, 1)
            v = Array(50, 55)

            Debug.Print "starting loop", Time
            For i = 1 To 10000
                rs.AddNew n, v
            Next i
            Debug.Print "done loop", Time

            rs.UpdateBatch
            Debug.Print "done update", Time

            CurrentDb.Execute "drop table testy"
        End Sub

    I would be willing to resort to C/C++ if there's some API that would let me do fast inserts that way, but I can't seem to find it. It can't be that Application.ImportXML is using undocumented APIs, can it?

    Read the article

  • Dynamically created operators

    - by Gero
    I created a program using Dev-C++ and wxWidgets which solves a puzzle. The user fills in the operation blocks and the result blocks, and the program solves it. I'm solving it by brute force: I generate all 9-digit combinations without repeated digits using a recursive algorithm, and it does that pretty fast. Up to here all is great! The problem is when my program applies the operations according to the characters in the blocks. It's extremely slow (it never gets to the answer) because of the character comparisons against +, -, *, etc., since I'm doing a CASE. Is there some technique, or some programming language, which allows dynamic creation of operators? That way I could define the operator at ROW1/COL2 to be a +, and do the same for all the other operations. Here is a screenshot of the app, so it's easier to understand how the puzzle works: http://www.imageshare.web.id/images/9gg5cev8vyokp8rhlot9.png PS: The algorithm works; I tried it with a trivial puzzle and it solved it in a second.
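
    One common way to get this effect without language-level operator creation is a dispatch table: map each operator character to a function once, so the inner loop does a lookup instead of repeated comparisons. A minimal sketch of the idea in Python (the grid positions here are hypothetical):

        import operator

        # Map operator characters to functions once, up front.
        OPS = {
            '+': operator.add,
            '-': operator.sub,
            '*': operator.mul,
            '/': operator.floordiv,
        }

        # "Define" the operator for a block by storing the function itself,
        # e.g. the block at row 1, column 2 becomes addition.
        grid_ops = {(1, 2): OPS['+'], (2, 1): OPS['*']}

        def apply_block(row, col, lhs, rhs):
            # No per-evaluation character comparison: just call the function.
            return grid_ops[(row, col)](lhs, rhs)

        print(apply_block(1, 2, 3, 4))  # 7
        print(apply_block(2, 1, 3, 4))  # 12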

    Read the article

  • How is it that json serialization is so much faster than yaml serialization in python?

    - by guidoism
    I have code that relies heavily on yaml for cross-language serialization, and while working on speeding some stuff up I noticed that yaml was insanely slow compared to other serialization methods (e.g. pickle, json). So what really blows my mind is that json is so much faster than yaml when the output is nearly identical.

        >>> import yaml, cjson
        >>> from timeit import timeit
        >>> d = {'foo': {'bar': 1}}
        >>> yaml.dump(d, Dumper=yaml.SafeDumper)
        'foo: {bar: 1}\n'
        >>> cjson.encode(d)
        '{"foo": {"bar": 1}}'
        >>> timeit("yaml.dump(d, Dumper=yaml.SafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000)
        44.506911039352417
        >>> timeit("yaml.dump(d, Dumper=yaml.CSafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000)
        16.852826118469238
        >>> timeit("cjson.encode(d)", setup="import cjson; d={'foo': {'bar': 1}}", number=10000)
        0.073784112930297852

    PyYaml's CSafeDumper and cjson are both written in C, so it's not like this is a C vs. Python speed issue. I've even added some random data to it to see if cjson is doing any caching, but it's still way faster than PyYaml. I realize that yaml is a superset of json, but how could the yaml serializer be two orders of magnitude slower with such simple input?
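
    To pin down where the time actually goes, profiling the dump call is a quick first step; even with the C emitter, PyYAML still routes every value through its generic representer/resolver machinery in Python. A minimal sketch (the exact hot spots will vary by PyYAML version):

        import cProfile
        import yaml

        d = {'foo': {'bar': 1}}

        # Profile 10000 dumps and sort by cumulative time to see which
        # yaml-internal calls dominate.
        cProfile.run(
            "for _ in range(10000): yaml.dump(d, Dumper=yaml.SafeDumper)",
            sort='cumulative',
        )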

    Read the article

  • Enable cross app redirects

    - by Gogster
    Hi all, We have load balancing set up on our two web servers; however, a few users are being asked to log in when they are redirected to a particular server to upload a document (we are trying to keep all uploaded documents on one server only). Here is the code from web.config:

        <authentication mode="Forms">
          <forms name="EAAAuthCookie" loginUrl="/login" defaultUrl="/members/home"
                 protection="All" path="/" timeout="60000" slidingExpiration="true"
                 enableCrossAppRedirects="true" />
        </authentication>
        <machineKey decryption="AES" validation="SHA1"
                    decryptionKey="7B4EC5B0C83631DF25D5B179EDDBF91B1C175B81C6F52102267D3D097FBF272A"
                    validationKey="7D1F50788629CC342EE4985D85DE3D14F10654695912C0FFD439F54BED64F76A57A2D5E8180BC6FF052E0385C30558F5527D6C197C577A7F32DD8FF1CAC9F794" />

    Here is the transfer code to the upload form:

        $('#addReport').click(function() {
            if ($.cookie('TransferURL') != '') {
                $("#iframeUploadReport").attr('src', $.cookie('TransferURL'));
            };
            $('#overlay').fadeIn('slow');
        });

        <script type="text/C#" runat="server">
            void Page_Load()
            {
                string cookieName = FormsAuthentication.FormsCookieName;
                string userName = Request.Cookies["HiddenUsername"].ToString();
                string cookieValue = FormsAuthentication.GetAuthCookie(userName, false).Value;
                Response.Cookies["TransferURL"].Value =
                    "http://eaa.cms.necinteractive.net/members/media-upload"
                    + String.Format("?{0}={1}", cookieName, cookieValue);
            }
        </script>

        <iframe id="iframeUploadReport" src="http://eaa.cms.necinteractive.net/members/media-upload"
                width="500px" height="336px" frameborder="0" scrolling="no"></iframe>

    Can you see any obvious step we are missing? Thanks

    Read the article

  • MS Access form_current() firing multiple times

    - by Eric G
    I have a form with two subforms (on separate tab pages); it's an MDB project in Access 2003. When it initially opens, Form_Current on the active subform fires once, as it should. But when you move to another record (i.e. from the main form), Form_Current on the active subform fires 4 times, and subsequent record-moves make it fire 2 times. This is a pain, because the subforms have a lot of fields that get moved and/or hidden, so the display jumps around on every Form_Current, not to mention being slow. I am opening the form with a filter via DoCmd.OpenForm (actually it sends the filter in via OpenArgs). FilterOn is only set once, in Form_Open on the main form, never in the subforms. Form_Current is not called explicitly anywhere else in the code. When I look at the call stack when Form_Current fires on the first record move, it looks like:

        my_subform.Form_Current
        [<Debug Window>]
        my_subform.Form_Current

    So it seems like something in Form_Current is triggering another Form_Current event, but only on the first record move. The code in Form_Current is somewhat complex, involving custom classes and event callbacks, but generally does not touch the table data. The only thing I can think might be triggering a Form_Current is that it checks OldValue on form controls - could this be causing it? Or does anything else come to mind? Thanks. Eric

    Read the article

  • Autocomplete and IE7 - slowness, sluggishness as overall page size grows?

    - by wchrisjohnson
    Hi, I have the autocomplete plugin (http://bassistance.de/jquery-plugins/jquery-plugin-autocomplete/, versions 1.1 and 1.0.2) on a project, used to add pieces of "equipment" to a "project". On a fresh project the plugin works great: the data returned from the database comes back FAST, you can scroll the list quickly, and you can select an item and move on to the next one. Once I have a project established with equipment on it and I go to add more equipment, the performance is pretty bad. It takes 4-5 seconds to get the list of data back from the server, scrolling the list is painful, and the cursor takes several seconds to settle on an item. Repainting the page after the list goes away is slow. This occurs in IE7, latest version; FF3 and Chrome are fine, very snappy. The page size is about 40K overall. I'm thinking this is an issue with the IE7 JavaScript engine, or an edge case between this plugin and IE7, since it works quickly enough in FF3+. I would appreciate any ideas, solutions, known issues, or thoughts on how to pin this down more specifically. I'd love to post sample code, but this is a corporate app, and I'm not sure how useful it would be given that the server-side piece cannot be shown; i.e. you can't pull it down and test it like a self-contained piece of code. Thanks in advance! Chris

    Read the article

  • ASP.NET MVC on Cassini: How can I force the "content" directory to return 304s instead of 200s?

    - by Portman
    Scenario: I have an ASP.NET MVC application developed in Visual Studio 2008. There is a root folder named "Content" that stores images and stylesheets. When I run locally (using Cassini) and browse my application, every resource from the "Content" directory is always downloaded. Using Firebug, I can verify that the web server returns an HTTP 200 ("ok").

    Desired: I would like for Cassini to return HTTP 304 ("not modified") instead of 200. This is the behavior when running the site under IIS7.

    Reasoning: The site I am working on has a large number of static resources (often as many as 40 per page). Browsing the site is very fast on IIS7, because these resources are (correctly) cached by the browser. However, browsing the site on my local machine is painfully slow. Pages that render in under 1 second on IIS7 take over 30 seconds to render on Cassini. It's actually faster for me to upload the entire website every few minutes and test from there. (Yes, I recognize that this is perverse and crazy.)

    So: how can I instruct/trick Cassini into treating the "Content" directory like IIS7 does?
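
    For reference, what IIS7 is doing for static files is plain HTTP conditional GET: compare the request's If-Modified-Since header against the file's modification time, and answer 304 with no body when the cached copy is still current. A minimal sketch of that check (in Python, purely to illustrate the protocol, not Cassini's internals):

        import os
        from email.utils import formatdate, parsedate_to_datetime

        def conditional_get(path, if_modified_since=None):
            """Return (status, headers) for a static file request."""
            mtime = os.path.getmtime(path)
            headers = {'Last-Modified': formatdate(mtime, usegmt=True)}
            if if_modified_since:
                try:
                    since = parsedate_to_datetime(if_modified_since).timestamp()
                    if int(mtime) <= since:
                        return 304, headers  # browser's cached copy is current
                except (TypeError, ValueError):
                    pass  # unparseable header: fall through to a full 200
            return 200, headers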

    Read the article

  • Web Workers - Transferable Objects for JSON

    - by kclem06
    HTML5 web workers are very slow when using worker.postMessage on a large JSON object. I'm trying to figure out how to transfer a JSON object to a web worker using the 'Transferable Objects' types in Chrome, in order to increase the speed of this. Here is what I'm referring to, and it appears it should speed this up quite a bit: http://updates.html5rocks.com/2011/12/Transferable-Objects-Lightning-Fast I'm having trouble finding a good example of this (and I don't believe I want to use an ArrayBuffer). Any help would be appreciated. I'm imagining something like this:

        var worker = new Worker('workers.js');

        var large_json = {};
        for (var i = 0; i < 20000; ++i) {
            large_json[i] = i;
            large_json["test" + i] = "string";
        }

        // How do I make this call use Transferable Objects?
        // Serializing this takes approx. 2 seconds for me currently.
        worker.webkitPostMessage(large_json);

    Read the article

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos to test one thing: does ZTable use fetch techniques or not? We may migrate our smaller system to PostgreSQL in the future, and it currently uses "Table" components (like the BDE, but against an SQL-like server). These tables use real cursors, a "window" of N records, so lookup is very fast: the Locate/Lookup starts on the server, and only those N records are refreshed, no matter how many records are in the lookup table. PostgreSQL uses fetch techniques as far as I know, and I tested it with a table (id int, name varchar(100)) and 1 million records. (I also tried this with MySQL.) The adapter is Zeos. The columns below are the ID searched for, seconds to find it, and allocated client-side memory in bytes:

        MySQL
        ID        sec to find   client memory (bytes)
        500000    2.761         113 196 344
        1000000   3.214         225 471 232
        313800    0.437         225 471 232
        328066    0.468         225 471 232
        276374    0.390         225 471 232
        905984    1.264         225 471 232
        260253    0.359         225 471 232

        PGSQL
        ID        sec to find   client memory (bytes)
        500000    3.042         113 188 184
        1000000   3.744         225 463 064
        313800    0.436         225 463 064
        328066    0.452         225 463 064
        276374    0.375         225 463 064
        905984    1.295         225 463 064
        260253    0.359         225 463 064
        142023    0.203         225 463 064

    As you can see, the records are fetched locally, which causes the 225 MB usage, and searches are a little slow depending on where the record we must find is. I want to ask a few more things: a) Does PGDAC have some technique that lets us use lookups without paying for the fetch in memory and seconds? b) Or can the PostgreSQL ODBC driver help with this problem via ADO (as far as I know ADO can use server-side cursors)? c) Does anybody have experience with lookup tables and performance? Is this a critical question or not (client memory usage included)? d) If there is no chance to avoid fetch hell with lookups, what can we do? Server-side joins, and custom code for lookup-field changes instead of a real Lookup? Thanks for your help: dd

    Read the article

  • Can I spread out a long-running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus] I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards, in this case). Most of the time the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete. (Well, 59377 ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load.) When I watch the process, I see that SQL is only using a single processor to perform the whole thing, and typically only processor 0. Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.) My stored proc:

        USE [Commerce]
        GO
        /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
            @companyId UNIQUEIDENTIFIER,
            @DecryptionKey NVARCHAR (MAX)
        AS
        SET NoCount ON

        DECLARE @cardId uniqueidentifier
        DECLARE @tmpdecryptedCardData VarChar(MAX);
        DECLARE @decryptedCardData VarChar(MAX);
        DECLARE @tmpTable as Table
        (
            CardId uniqueidentifier,
            DecryptedCard NVarChar(Max)
        )

        DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR
            Select cardId from CreditCards
            where companyId = @companyId and Active=1
            order by addedBy desc --2

        OPEN creditCards --3
        FETCH creditCards INTO @cardId -- prime the cursor

        WHILE @@Fetch_Status = 0
        BEGIN
            DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR
                select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey))
                FROM CreditCardData
                where cardid = @cardId
                order by valueOrder

            OPEN creditCardData
            FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor

            WHILE @@Fetch_Status = 0
            BEGIN
                print 'CreditCardData'
                print @tmpdecryptedCardData
                set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
                print '@decryptedCardData'
                print @decryptedCardData;
                FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
            END

            CLOSE creditCardData
            DEALLOCATE creditCardData

            insert into @tmpTable (CardId, DecryptedCard)
            values ( @cardId, @decryptedCardData )

            set @decryptedCardData = ''

            FETCH NEXT FROM creditCards INTO @cardId -- fetch next
        END

        select CardId, DecryptedCard FROM @tmpTable

        CLOSE creditCards
        DEALLOCATE creditCards
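
    The "break the calls in half" hack generalizes to a client-side fan-out: partition the card ids, open one connection per partition, and let SQL Server schedule each connection on its own CPU. A rough sketch of the pattern (in Python with pyodbc for illustration; the connection string and the checksum-based split are assumptions, not the original app's code):

        from concurrent.futures import ThreadPoolExecutor
        import pyodbc  # any DB-API driver works the same way

        CONN_STR = "DSN=Commerce"  # hypothetical

        def decrypt_partition(company_id, key, part, parts):
            # Each worker gets its own connection, so the partitions can run
            # on different schedulers/CPUs inside SQL Server.
            cur = pyodbc.connect(CONN_STR).cursor()
            cur.execute(
                "SELECT cardId FROM CreditCards "
                "WHERE companyId = ? AND Active = 1 "
                "AND ABS(CHECKSUM(cardId)) % ? = ?",  # stable hash split
                company_id, parts, part)
            out = []
            for (card_id,) in cur.fetchall():
                cur.execute(
                    "SELECT CONVERT(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), "
                    "EncryptedCard, ?)) FROM CreditCardData "
                    "WHERE cardid = ? ORDER BY valueOrder",
                    key, card_id)
                out.append((card_id, ''.join(r[0] or '' for r in cur.fetchall())))
            return out

        def get_all_cards(company_id, key, parts=2):
            with ThreadPoolExecutor(max_workers=parts) as pool:
                chunks = pool.map(
                    lambda p: decrypt_partition(company_id, key, p, parts),
                    range(parts))
            return [row for chunk in chunks for row in chunk]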

    Read the article

  • Drawing to the canvas

    - by Mattl
    I'm writing an Android application that draws directly to the canvas in the onDraw event of a View. I'm drawing something that involves drawing each pixel individually; for this I use something like:

        for (int x = 0; x < xMax; x++) {
            for (int y = 0; y < yMax; y++) {
                MyColour = CalculateMyPoint(x, y);
                canvas.drawPoint(x, y, MyColour);
            }
        }

    The problem here is that this takes a long time to paint, as the CalculateMyPoint routine is quite an expensive method. Is there a more efficient way of painting to the canvas? For example, should I draw to a bitmap and then paint the whole bitmap to the canvas in the onDraw event? Or maybe evaluate my colours and fill in an array that the onDraw method can use to paint the canvas? Users of my application will be able to change parameters that affect the drawing on the canvas. This is incredibly slow at the moment.

    Read the article

  • Python pixel manipulation library

    - by silinter
    So I'm going through the beginning stages of producing a game in Python, and I'm looking for a library that is able to manipulate pixels and blit them relatively fast. My first thought was Pygame, as it deals in pure 2D surfaces, but it only allows pixel access through Surface.get_at(), Surface.set_at() and Surface.get_buffer(), all of which lock the surface each time they're called, making them slow to use. I can also use the PixelArray and surfarray classes, but they lock the surface for the duration of their lifetimes, and the only way to blit them to a surface is to either copy the pixels to a new surface, or use surfarray.blit_array, which requires creating a subsurface of the screen and blitting to that if the array is smaller than the screen (if it's bigger I can just use a slice of the array, which is no problem). I don't have much experience with PyOpenGL or Pyglet, but I'm wondering if there is a faster library for pixel manipulation, or a faster method in Pygame for doing pixel manipulation. I did some work with SDL and OpenGL in C, and I do like the idea of adding vertex/fragment shaders to my program. My program will chiefly be dealing in loading images and writing/reading to/from surfaces.
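
    For what it's worth, the usual fast path in Pygame is to compute the whole frame as a NumPy array and push it to the screen with one surfarray.blit_array call per frame, so the surface is only locked once. A minimal sketch, assuming pygame and numpy are installed:

        import numpy as np
        import pygame

        pygame.init()
        screen = pygame.display.set_mode((320, 240))
        w, h = screen.get_size()

        # Vectorized per-pixel math replaces thousands of set_at() calls.
        xs, ys = np.meshgrid(np.arange(w), np.arange(h), indexing='ij')
        frame = np.zeros((w, h, 3), dtype=np.uint8)
        frame[..., 0] = (xs * 255 // max(w - 1, 1)).astype(np.uint8)  # red ramp
        frame[..., 2] = (ys * 255 // max(h - 1, 1)).astype(np.uint8)  # blue ramp

        # One locked blit for the whole frame.
        pygame.surfarray.blit_array(screen, frame)
        pygame.display.flip()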

    Read the article

  • jQuery: Inserting li items one by one?

    - by Legend
    I wrote the following (part of a jQuery plugin) to insert a set of items from a JSON object into a <ul> element:

        ...
        query: function() {
            ...
            $.ajax({
                url: fetchURL,
                type: 'GET',
                dataType: 'jsonp',
                timeout: 5000,
                error: function() {
                    self.html("Network Error");
                },
                success: function(json) {
                    // Process JSON
                    $.each(json.results, function(i, item) {
                        $("<li></li>")
                            .html(mHTML)
                            .attr('id', "div_li" + i)
                            .attr('class', "divliclass")
                            .prependTo("#" + "div_ul");
                        $(slotname + "div_li" + i).hide();
                        $(slotname + "div_li" + i).show("slow");
                    });
                }
            });
        },
        ...

    This may be adding the <li> items one by one in theory, but when I load the page everything shows up instantaneously. Is there an efficient way to make them appear one by one, more slowly? I'll explain with a small example: if I have 3 items, this code makes all 3 appear at once (at least to my eyes). I want something like: 1 fades in, then 2 fades in, then 3 (something like a news ticker, perhaps). Does anyone have a suggestion?

    Read the article

  • Quickly determine if a number is prime in Python for numbers < 1 billion

    - by Frór
    Hi, my current algorithm to check the primality of numbers in Python is way too slow for numbers between 10 million and 1 billion. I want it improved, knowing that I will never get numbers bigger than 1 billion. The context is that I can't get an implementation that is quick enough for solving Project Euler problem 60: I'm getting the answer in 75 seconds where I need it in 60 seconds. http://projecteuler.net/index.php?section=problems&id=60 I have very little memory at my disposal, so I can't store all the prime numbers below 1 billion. I'm currently using standard trial division tuned with 6k±1. Is there anything better than this? Do I already need the Rabin-Miller method for numbers that are this large?

        primes_under_100 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43,
                            47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]

        def isprime(n):
            if n <= 100:
                return n in primes_under_100
            if n % 2 == 0 or n % 3 == 0:
                return False
            # note: the upper bound must be int(n ** .5) + 1, or squares of
            # primes (121, 169, ...) are wrongly reported as prime
            for f in range(5, int(n ** .5) + 1, 6):
                if n % f == 0 or n % (f + 2) == 0:
                    return False
            return True

    How can I improve this algorithm?
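
    For this range, a fully deterministic Miller-Rabin is available: testing the fixed witness set {2, 3, 5, 7} is known to be correct for all n < 3,215,031,751, which comfortably covers 1 billion, and it costs only a handful of modular exponentiations per call. A sketch:

        def is_prime(n):
            """Deterministic Miller-Rabin, valid for n < 3,215,031,751."""
            if n < 2:
                return False
            for p in (2, 3, 5, 7):
                if n % p == 0:
                    return n == p
            # write n - 1 as d * 2**s with d odd
            d, s = n - 1, 0
            while d % 2 == 0:
                d //= 2
                s += 1
            for a in (2, 3, 5, 7):  # sufficient witnesses for this range
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = x * x % n
                    if x == n - 1:
                        break
                else:
                    return False  # a witnessed that n is composite
            return True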

    Read the article

  • How to make a product catalog in C#?

    - by Ervin
    I need to develop a product catalog application (about 4000 products) which would be given to clients on CD or DVD. The catalog exists in webpage format using PHP and MySQL. IMPORTANT: the application is given to clients who might have old PCs and old systems; for minimal requirements I would assume Windows XP and Internet Explorer 6 (if needed). I need the following features:

    1. search (by productID AND by keyword)
    2. print (with multiple products selected)
    3. shopping cart (building a list which will be sent to an email address if there is any Internet connection on the computer)

    When I was asked to do it I had 2 days to produce a very basic version, so I exported the whole website as HTML pages and developed an application in C# which contains an embedded browser. So the whole website is now static and on a CD. Everything fine so far. Now here are the problems:

    1. The search option was realized by parsing the HTML files and reading the productID or looking for keywords inside them. Run from a CD it was extremely slow (searching in 600 MB of HTML files). FOR THIS I WOULD NEED A SOLUTION WITH A STATIC DATABASE (USING ACCESS OR SOMETHING) TO HAVE INDEXED ROWS, SO THE SEARCH COULD BE A VERY FAST ONE.
    2. The printing option was simply a call to the embedded Internet Explorer print functions. Here there are two problems: a) the user needs IE7 to print the website scaled (FIT TO PAGE), otherwise the edges of the page are cut off; b) the users of this app do not have even basic PC skills, so they can't adjust the print settings, and the page numbers and titles will appear in the header and footer. QUESTION: can I set these settings from CSS for printing?
    3. I couldn't make a shopping cart, as I don't use a database; I have static websites with the content inside the HTML.

    QUESTION: WHICH ARE THE BEST SOLUTIONS FOR THE PROBLEMS DESCRIBED ABOVE? PLEASE ANSWER EVEN IF YOUR ANSWER IS FOR ONE QUESTION ONLY. THANKS
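
    On problem 1, a single-file embedded database shipped on the CD is the usual answer; SQLite is a common choice and needs no installation. A minimal sketch of the idea in Python (the schema and names are made up; the same shape works from C# with an equivalent driver):

        import sqlite3

        # Built once, when mastering the CD.
        conn = sqlite3.connect('catalog.db')
        conn.execute("""CREATE TABLE IF NOT EXISTS products (
                            product_id TEXT PRIMARY KEY,
                            keywords   TEXT,
                            html_path  TEXT)""")
        conn.execute("INSERT OR REPLACE INTO products VALUES (?, ?, ?)",
                     ("AB-1234", "hammer steel tools", "products/ab-1234.html"))
        conn.commit()

        # At run time: the ID lookup hits the primary-key index, and the
        # keyword search scans a small keywords column instead of 600 MB
        # of HTML files.
        row = conn.execute("SELECT html_path FROM products WHERE product_id = ?",
                           ("AB-1234",)).fetchone()
        hits = conn.execute("SELECT product_id, html_path FROM products "
                            "WHERE keywords LIKE ?", ("%hammer%",)).fetchall()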

    Read the article

  • Troubleshooting a JavaScript Function in IE

    - by CreativeNotice
    So this function works fine in geko and webkit browsers, but not IE7. I've busted my brain trying to spot the issue. Anything stick out for you? Basic premise is you pass in a data object (in this case a response from jQuery's $.getJSON) we check for a response code, set the notification's class, append a layer and show it to the user. Then reverse the process after a time limit. function userNotice(data){ // change class based on error code returned var myClass = ''; if(data.code == 200){ myClass='success'; } else if(data.code == 400){ myClass='error'; } else{ myClass='notice'; } // create message html, add to DOM, FadeIn var myNotice = '<div id="notice" class="ajaxMsg '+myClass+'">'+data.msg+'</div>'; $("body").append(myNotice); $("#notice").fadeIn('fast'); // fadeout and remove from DOM after delay var t = setTimeout(function(){ $("#notice").fadeOut('slow',function(){ $(this).remove(); }); },5000); }

    Read the article

  • Toggle Blind Effect

    - by flipflopmedia
    Is there a way to alter this script to use the blind effect?

        // Andy Langton's show/hide/mini-accordion - updated 23/11/2009
        // Latest version @ http://andylangton.co.uk/jquery-show-hide

        // this tells jquery to run the function below once the DOM is ready
        $(document).ready(function() {
            // choose text for the show/hide link - can contain HTML (e.g. an image)
            var showText = '';
            var hideText = '';

            // initialise the visibility check
            var is_visible = false;

            // append show/hide links to the element directly preceding the element with a class of "toggle"
            $('.toggle').prev().append(' ' + showText + '');

            // hide all of the elements with a class of 'toggle'
            $('.toggle').hide();

            // capture clicks on the toggle links
            $('a.toggleLink').click(function() {
                // switch visibility
                is_visible = !is_visible;

                // toggle the display - uncomment the next line for a basic "accordion" style
                //$('.toggle').hide(); $('a.toggleLink').html(showText);
                $(this).parent().next('.toggle').toggle('slow');

                // return false so any link destination is not followed
                return false;
            });
        });

    FYI, where it says:

        var showText = '';
        var hideText = '';

    it was originally:

        var showText = 'Show';
        var hideText = 'Hide';

    I deleted the Show/Hide text because I am applying the link to different areas of text. I like the blind effect better than this toggle effect, and need to know how to apply it, if possible. I cannot find a basic blind-effect script that allows applying the link to ANY text, rather than a button or static text. Thanks! Hope you can help! Tracy

    Read the article

  • Performing text processing on flatpage content to handle a custom tag

    - by Dzejkob
    Hi. I'm using the flatpages app in my project to manage some HTML content. That content will include images, so I've made a ContentImage model allowing the user to upload images using the admin panel. The user should then be able to include those images in the content of the flatpages. He can of course do that by manually typing the image URL into an <img> tag, but that's not what I'm looking for. To make including images more convenient, I'm thinking about something like this:

    - The user edits an additional, let's say pre_content, field of the CustomFlatPage model (I'm already using a custom flatpage model).
    - Instead of defining <img> tags directly, he uses a custom tag, something like [img=...], where ... is the name of the ContentImage instance.
    - Now the hardest part: before the CustomFlatPage is saved, the pre_content field is checked for all [img=...] occurrences, and they are processed like this: the ContentImage model is searched for an instance with the given name, and if there is one, [img=...] is replaced with the proper <img> tag.
    - The flatpage's actual content is filled with the processed pre_content and then the flatpage is saved (pre_content is left unchanged, as edited by the user).

    The part I can't cope with is the text processing. Should I use regular expressions? Apparently they can be slow for large strings. And how should I organize the logic? I assume it's rather an algorithmic question, but I'm not familiar enough with text processing in Python to do it myself. Can somebody give me any clues?
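
    Regular expressions are a reasonable fit here: page content edited by hand stays small, and the substitution runs once per save rather than per request. A sketch of the save-time pass, assuming hypothetical names for the model and its fields (ContentImage with name and image):

        import re
        from myapp.models import ContentImage  # hypothetical app path

        IMG_TAG = re.compile(r'\[img=([\w-]+)\]')

        def expand_image_tags(pre_content):
            """Replace each [img=name] with an <img> tag for that ContentImage."""
            def replace(match):
                name = match.group(1)
                try:
                    img = ContentImage.objects.get(name=name)
                except ContentImage.DoesNotExist:
                    return match.group(0)  # unknown name: leave the tag as-is
                return '<img src="%s" alt="%s" />' % (img.image.url, name)
            return IMG_TAG.sub(replace, pre_content)

        # e.g. in CustomFlatPage.save():
        #     self.content = expand_image_tags(self.pre_content)
        #     super(CustomFlatPage, self).save(*args, **kwargs)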

    Read the article

  • Solution for cleaning an image cache directory on the SD card

    - by synic
    I've got an app that is heavily based on remote images. They are usually displayed alongside some data in a ListView. A lot of these images are new, and a lot of the old ones will never be seen again. I'm currently storing all of these images on the SD card in a custom cache directory (à la evancharlton's Magnatune app). I noticed that after about 10 days the directory totals ~30 MB. This is quite a bit more than I expected, and it leads me to believe that I need to come up with a good solution for cleaning out old files... and I just can't think of a great one. Maybe you can help. These are the ideas that I've had:

    1. Delete old files. When the app starts, start a background thread and delete all files older than X days. This seems to pose a problem, though, in that if the user actively uses the app, this could make the device sluggish if there are hundreds of files to delete.
    2. After creating the files on the SD card, call new File("/path/to/file").deleteOnExit();. This will cause all files to be deleted when the VM exits (I don't even know if this method works on Android). It is acceptable because, even though the files need to be cached for the session, they don't need to be cached for the next session. It seems like this will also slow the device down if there are a lot of files to be deleted when the VM exits.
    3. Delete old files, up to a maximum number of files. Same as #1, but only delete N files at a time. I don't really like this idea, and if the user was very active it may never be able to catch up and keep the cache directory clean.

    That's about all I've got. Any suggestions would be appreciated.
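
    A common variation on ideas 1 and 3 is to prune by total size instead of age: delete oldest files first until the directory is back under a byte budget, with a cap on deletions per pass so no single run hogs the device. A sketch of that policy (in Python, just to show the shape of it; the budget numbers are arbitrary):

        import os

        def prune_cache(cache_dir, max_bytes=10 * 1024 * 1024, max_deletes=50):
            """Delete oldest files until the cache fits the budget, bounded per pass."""
            entries, total = [], 0
            for name in os.listdir(cache_dir):
                path = os.path.join(cache_dir, name)
                if os.path.isfile(path):
                    st = os.stat(path)
                    entries.append((st.st_mtime, st.st_size, path))
                    total += st.st_size
            entries.sort()  # oldest first
            deleted = 0
            for _, size, path in entries:
                if total <= max_bytes or deleted >= max_deletes:
                    break
                os.remove(path)
                total -= size
                deleted += 1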

    Read the article

  • "Priming" a whole database in MSSQL for first-hit speed

    - by David Spillett
    For a particular app I have a set of queries that I run each time the database has been restarted for any reason (usually a server reboot). These "prime" SQL Server's page cache with the common core working set of the data, so that the app is not unusually slow the first time a user logs in afterwards. One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4 GB in the machine; the DB is under 1.5 GB currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM? It could be done the hard way by having a script scan sysobjects and sysindexes and run

        SELECT * FROM <table> WITH (INDEX(<index_name>)) ORDER BY <index_fields>

    for every key and index found, which should cause every used page to be read at least once and so be in RAM, but is there a cleaner or more efficient way? All planned instances where the database server is stopped are out of normal working hours (all the users are at most one timezone away, and unlike me none of them work at silly hours), so such a process slowing users down until it completes, more than an unprimed working set would, is not an issue.
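
    For reference, the hard way is mechanical enough to script from any client. A sketch of the driver loop in Python (assuming pyodbc and the modern sys.* catalog views in place of sysobjects/sysindexes; the DSN is hypothetical):

        import pyodbc

        cur = pyodbc.connect("DSN=AppDb").cursor()

        # Enumerate every named index on every user table.
        cur.execute("""
            SELECT t.name, i.name
            FROM sys.indexes i
            JOIN sys.tables t ON t.object_id = i.object_id
            WHERE i.name IS NOT NULL
        """)
        indexes = cur.fetchall()

        # COUNT_BIG(*) over the hinted index touches each of its pages,
        # pulling them into the buffer pool without shipping rows back.
        for table_name, index_name in indexes:
            cur.execute("SELECT COUNT_BIG(*) FROM [%s] WITH (INDEX([%s]))"
                        % (table_name, index_name))
            cur.fetchone()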

    Read the article

  • Is there a PHP benchmark that meets these specific criteria? [closed]

    - by Alex R
    I'm working on a tool which converts PHP code to Scala. As one of the finishing touches, I'm in need of a really good (er, somewhat biased) benchmark. By dumb luck my first benchmark attempt was with some code which uses bcmath extensively, which unfortunately is 1000x slower in Java, making the Scala code 22x slower overall than the original PHP. So I'm looking for a meaningful PHP benchmark with the following characteristics:

    - The PHP source needs to be in a single file.
    - It should solve a real-world problem; no silly looping over empty methods, etc.
    - It needs to be simple to set up: no databases, hard-to-find input files, etc. Simple text input and output preferred.
    - It should not use features that are slow in Java (BigInteger, trigonometric functions, etc).
    - It should not use esoteric or dynamic PHP functions (e.g. no "eval" or "variable variables").
    - It should not over-rely on built-in libraries, e.g. MD5, crypt, etc.
    - It should not be I/O bound. A CPU-bound, memory-hungry algorithm is preferred.

    Basically, intensive OO operations, integer and string manipulation, recursion, etc. would be great. Thanks

    Read the article

  • TinyURL API Example - Am I doing it right? :D

    - by Paul Weber
    Hi... we use super-long hashes for the registration of new users in our application. The problem is that these hashes break in some email clients, making the links unusable. I tried implementing the TinyURL API with a simple call, but I think it sometimes times out, and sometimes the mail does not reach the user. I updated the code, but now the URL is never converted. Is TinyURL really so slow, or am I doing something wrong? (I mean, hey, 5 seconds is a lot these days.) Can anybody recommend a more reliable service?

    Update: all my fault, I forgot a false in the fopen. But I will leave this sample of code here, because I often see this one-liner, which I think does not work very reliably:

        return file_get_contents('http://tinyurl.com/api-create.php?url='.$u);

    This is the (I think) fully working sample. I would like to hear about improvements.

        static function gettinyurl( $url )
        {
            $context = stream_context_create(
                array(
                    'http' => array(
                        'timeout' => 5 // 5 seconds should be enough
                    )
                )
            );

            // open (read) api-create.php with the long url as GET parameter;
            // note the false (use_include_path), so $context really is the context
            $fp = fopen( 'http://tinyurl.com/api-create.php?url=' . $url, 'r', false, $context );

            if( $fp ) {                             // check if open was ok
                $tinyurl = fgets( $fp );            // read response
                if( $tinyurl && !empty($tinyurl) )  // check if response is ok
                    $url = $tinyurl;                // set response as url
                fclose( $fp );                      // close connection
            }

            return $url; // return (tiny) url
        }

    Read the article

  • jQuery .attr() crashing Internet Explorer

    - by Sunyatasattva
    Hello everyone, this is my first time posting a question here, as I usually try to find solutions myself. This one, though, being an IE issue, just drives me crazy. I use the jQuery Cycle plugin on a website I made and, to populate a caption div, I use a little function that is called after each image is loaded, which uses the "alt" attribute of the image. This seems to exasperate Internet Explorer, which doesn't have the time to fulfill this apparently-so-complicated task: as the slideshow cycles, it enters an infinite loop and eventually crashes. The newer the version, the worse the crash: the older IEs just display an error message saying "The webpage cannot be displayed", while the newer ones (7 and 8) completely crash the system. I have no idea how to solve or work around this. Here is the problematic code:

        function changeCaption() {
            var caption = $("img", this).attr("alt");
            $('#caption').fadeIn("slow").html(caption);
        }

    Thanks in advance for any pointers: I am amazed at how something so simple and globally recognized (I didn't encounter any other browser that had a problem with this) can cause a problem so big. I also read somewhere that being able to crash a browser remotely is a serious issue :) Lucio

    Read the article
