Search Results

Search found 7179 results on 288 pages for 'slow logon'.


  • What is the fastest way to insert 100 000 records from one database to another?

    - by Pentium10
    I have a mobile application. My client has a large data set, ~100,000 records, and it is updated frequently. When we sync, we need to copy everything from one database to the other. I have attached the second database to the main one and run:

        INSERT INTO table SELECT * FROM sync.table;

    This is extremely slow -- it takes about 10 minutes, I think. I noticed that the journal file grows step by step. How can I speed this up?

    EDIT 1: I have indexes off, and I have the journal off. Using INSERT INTO table SELECT * FROM sync.table, it still takes 10 minutes.

    EDIT 2: If I run a query like

        SELECT id, invitem, invid, cost FROM inventory WHERE itemtype = 1 ORDER BY invitem LIMIT 50

    it takes 15-20 seconds. The table schema is:

        CREATE TABLE inventory (
            'id' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
            'serverid' INTEGER NOT NULL DEFAULT 0,
            'itemtype' INTEGER NOT NULL DEFAULT 0,
            'invitem' VARCHAR,
            'instock' FLOAT NOT NULL DEFAULT 0,
            'cost' FLOAT NOT NULL DEFAULT 0,
            'invid' VARCHAR,
            'categoryid' INTEGER DEFAULT 0,
            'pdacategoryid' INTEGER DEFAULT 0,
            'notes' VARCHAR,
            'threshold' INTEGER NOT NULL DEFAULT 0,
            'ordered' INTEGER NOT NULL DEFAULT 0,
            'supplier' VARCHAR,
            'markup' FLOAT NOT NULL DEFAULT 0,
            'taxfree' INTEGER NOT NULL DEFAULT 0,
            'dirty' INTEGER NOT NULL DEFAULT 1,
            'username' VARCHAR,
            'version' INTEGER NOT NULL DEFAULT 15
        )

    The indexes are created like this:

        CREATE INDEX idx_inventory_categoryid ON inventory (pdacategoryid);
        CREATE INDEX idx_inventory_invitem ON inventory (invitem);
        CREATE INDEX idx_inventory_itemtype ON inventory (itemtype);

    I am wondering: isn't INSERT INTO ... SELECT * FROM the fastest built-in way to do a massive data copy?

    EDIT 3: SQLite is serverless, so please stop voting for a particular answer -- I'm sure that is not the answer.
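
    A minimal sketch of the bulk-copy recipe commonly suggested for SQLite (the attached file name and PRAGMA choices here are assumptions, not tested against this schema): drop the indexes before the copy, relax syncing, do the copy in one explicit transaction, and rebuild the indexes afterwards.

        -- Sketch only: 'sync.db' and the PRAGMA values are assumptions.
        ATTACH DATABASE 'sync.db' AS sync;
        PRAGMA synchronous = OFF;          -- skip fsync during the bulk copy
        DROP INDEX IF EXISTS idx_inventory_categoryid;
        DROP INDEX IF EXISTS idx_inventory_invitem;
        DROP INDEX IF EXISTS idx_inventory_itemtype;
        BEGIN TRANSACTION;                 -- one commit for the whole copy
        INSERT INTO inventory SELECT * FROM sync.inventory;
        COMMIT;
        -- Rebuilding indexes once over the full table is cheaper than
        -- maintaining them row by row during the insert.
        CREATE INDEX idx_inventory_categoryid ON inventory (pdacategoryid);
        CREATE INDEX idx_inventory_invitem ON inventory (invitem);
        CREATE INDEX idx_inventory_itemtype ON inventory (itemtype);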

  • jQuery and MySQL

    - by Wayne
    I have taken a jQuery script which removes divs on a click, but I want it to also delete the corresponding records from a MySQL database. In delete.php:

        <?php
        // Cast to int so a crafted POST value cannot inject SQL.
        $photo_id = (int) $_POST['id'];
        $sql = "DELETE FROM photos WHERE id = '" . $photo_id . "'";
        $result = mysql_query($sql) or die(mysql_error());
        ?>

    The jQuery script:

        $(document).ready(function() {
            $('#load').hide();
        });

        $(function() {
            $(".delete").click(function() {
                $('#load').fadeIn();
                var commentContainer = $(this).parent();
                var id = $(this).attr("id");
                var string = 'id=' + id;
                $.ajax({
                    type: "POST",
                    url: "delete.php",
                    data: string,
                    cache: false,
                    success: function() {
                        commentContainer.slideUp('slow', function() {
                            $("#photo-" + id).remove();
                        });
                        $('#load').fadeOut();
                    }
                });
                return false;
            });
        });

    The div goes away when I click on it, but after I refresh the page it appears again... How do I get it to delete the record from the database as well? Thanks :)

    EDIT: Whoops... forgot to include db.php, so it works now.
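
    If the script ever moves off the legacy mysql_* API, a parameterized version avoids building SQL strings entirely. A sketch, assuming a mysqli connection object in $db:

        <?php
        // Sketch: bind the id as an integer parameter instead of
        // concatenating it into the query text.
        $stmt = $db->prepare("DELETE FROM photos WHERE id = ?");
        $stmt->bind_param("i", $_POST['id']);
        $stmt->execute();
        ?>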

  • C#: Any faster way of copying arrays?

    - by Yang
    I have three arrays that need to be combined into one two-dimensional (n x 3) array. The following code shows up as slow in Performance Explorer. Is there a faster solution?

        for (int i = 0; i < sortedIndex.Length; i++) {
            if (i < num_in_left) {
                // add instance to the left child
                leftnode[i, 0] = sortedIndex[i];
                leftnode[i, 1] = sortedInstances[i];
                leftnode[i, 2] = sortedLabels[i];
            } else {
                // add instance to the right child
                rightnode[i - num_in_left, 0] = sortedIndex[i];
                rightnode[i - num_in_left, 1] = sortedInstances[i];
                rightnode[i - num_in_left, 2] = sortedLabels[i];
            }
        }

    Update: I'm actually trying to do the following:

        // given three 1D arrays
        double[] sortedIndex, sortedInstances, sortedLabels;

        // copy them over to an n x 3 array (forget about rightnode for now);
        // note the declaration must be double[,] for a rectangular array
        double[,] leftnode = new double[sortedIndex.Length, 3];

        // some magic happens here so that
        // leftnode == { sortedIndex, sortedInstances, sortedLabels }
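
    One sketch worth considering, assuming the downstream code can be changed to index by column first: a jagged array can simply wrap the three source arrays without copying a single element.

        // Sketch: zero-copy wrap. The consumer must index node[column][row]
        // instead of node[row, column] -- that indexing change is the
        // assumption here.
        double[][] leftnode = { sortedIndex, sortedInstances, sortedLabels };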

  • Overlapping audio in IE when I show/hide videos

    - by user1062448
    I have a thumbnail list with links to individual videos. Everything works fine in all browsers but IE. In IE, if I start a video and (without clicking pause or stop) click the thumbnail for the next video, the audio of the first keeps playing -- in other words, the audio for both videos plays at once. Any suggestions?

    HTML:

        <ul class="videoButtons">
            <li><a class="vidButton" href="javascript:void(0)" id="1"><img src="images/videoPics/vid1Thumb.jpg" /><br />video title</a></li>
            <li><a class="vidButton" href="javascript:void(0)" id="2"><img src="images/videoPics/vid2Thumb.jpg" /><br />video title</a></li>
            <li><a class="vidButton" href="javascript:void(0)" id="3"><img src="images/videoPics/vid3Thumb.jpg" /><br />video title</a></li>
        </ul>

        <div class="box" id="video1">
            <!--flv embedded object - FLVPlayer-->
        </div>
        <div class="box" id="video2">
            <!--flv embedded object - FLVPlayer1-->
        </div>
        <div class="box" id="video3">
            <!--flv embedded object - FLVPlayer2-->
        </div>

    Show/hide code:

        $(".vidButton").click(function() {
            var buttonID = $(this).attr('id');        // get ID of the button clicked
            var video = $('#' + 'video' + buttonID);  // add ID number to video
            $('.box').hide();                         // hide all other divs
            video.fadeTo("slow", 1);                  // show video
        });

        // video objects
        swfobject.registerObject("FLVPlayer");
        swfobject.registerObject("FLVPlayer1");
        swfobject.registerObject("FLVPlayer2");
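
    One workaround sometimes used for this IE behaviour -- a sketch only, and it assumes each .box contains player markup the page can safely re-create -- is to remove the hidden players from the DOM instead of merely hiding them, since IE keeps a hidden Flash object playing:

        $('.vidButton').click(function () {
            var video = $('#video' + this.id);
            $('.box').not(video).each(function () {
                var $box = $(this);
                // remember the original markup once, then tear the player down
                $box.data('markup', $box.data('markup') || $box.html());
                $box.empty().hide();   // removing the object stops its audio in IE
            });
            // restore (and restart) the selected player
            video.html(video.data('markup') || video.html()).fadeTo('slow', 1);
        });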

  • jquery toggling more than one div

    - by Crays
    Hi guys, I'm trying to make toggling divs with only one script. The first div does what it is meant to do, but the second doesn't. Have a look:

        <body>
            <div>
                <div>He
                    <div>You
                        <div id="Me"><a id="me">Me</a></div>
                    </div>
                </div>
                <div id="This">We</div>
            </div>
            <div>
                <div>1
                    <div>2
                        <div id="Me"><a id="me">3</a></div>
                    </div>
                </div>
                <div id="This">4</div>
            </div>
            <script>
                $("#me").click(function () {
                    $(this).parent().parent().parent().siblings("#This").slideToggle("slow");
                });
            </script>
        </body>

    When I click "Me", "We" disappears, alright. But when I click "3", "4" doesn't disappear.
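
    The markup reuses the ids #me and #This, and an id selector only ever matches the first occurrence in the document, so the handler is bound to the first link only. A sketch of the usual fix, assuming the repeated ids are changed to classes (class="me", class="This") in the markup:

        $(".me").click(function () {
            $(this).parent().parent().parent().siblings(".This").slideToggle("slow");
        });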

  • Too many columns to index - use MySQL partitions?

    - by Christopher Padfield
    We have an application with a table of 20+ columns that are all searchable. Building indexes for all these columns would make write queries very slow, and any really useful index would often have to span multiple columns, increasing the number of indexes needed. However, 95% of searches only need to touch a small subset of the rows -- quite a small number, say 50,000 rows.

    So we have considered MySQL partitioned tables: a column, essentially isActive, which we would divide the two partitions by. Most search queries would run with isActive = 1, so they would hit the small 50,000-row partition and be quick without other indexes.

    The only issue is that isActive is not fixed for a row; i.e. it is not based on the row's date or anything static like that. We will need to update isActive based on how the data in that row is used. As I understand it, that is no problem: the row would just be moved from one partition to the other during the UPDATE query.

    We do have a primary key on id, though, and I am not sure whether this is a problem; the manual seems to say the partitioning column has to be part of every unique key, including the primary key. That would be a huge problem for us, because the primary key id says nothing about whether the row is active.
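
    A sketch of the workaround the manual's rule implies (table and column names are assumptions): since every unique key must include the partitioning column, the primary key is widened to (id, isActive) before partitioning by the flag.

        -- Sketch only: assumes isActive is an integer flag column.
        ALTER TABLE listings
            DROP PRIMARY KEY,
            ADD PRIMARY KEY (id, isActive);

        ALTER TABLE listings
            PARTITION BY LIST (isActive) (
                PARTITION p_active   VALUES IN (1),
                PARTITION p_inactive VALUES IN (0)
            );

    The id column alone still identifies a row in practice; widening the key only satisfies MySQL's partitioning requirement.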

  • own drawImage / drawLine in OpenGL

    - by Chrise
    I'm implementing some native 2D draw functions in my graphics engine for Android, but now another question comes up when I look at the performance of my program. At the moment I'm implementing drawLine/drawImage functions. In summary, the following values differ for each line/image drawn:

      - the color
      - the alpha value
      - the width of the line
      - rotation (only for images)
      - size/scale (also for images)
      - blending method (subtract, add, normal alpha)

    Now, when an imageLine is drawn, I put the CPU-calculated vertex positions and UV values for 6 vertices (2 triangles) into a FloatBuffer and draw it immediately with drawArrays, after passing the drawing information (color, alpha, etc.) to the shader via uniforms. When I draw an image, the pre-built VBO is drawn directly after passing the information.

    The first thing I noticed is, of course, that drawing images is much faster than imageLines (because of VBOs), but also: I cannot pre-load vertex data into a VBO for imageLines, because imageLines have no static shape like normal images (the line length, the line width, and the vertex positions x1,y1 and x2,y2 change too often). That's why I use a plain FloatBuffer instead of a VBO.

    So my question is: what is the best way to manage images and the other 2D graphics functions? It is quite important to me that a user of the engine can draw as many images/2D graphics as possible without losing too much performance. You can find the functions for drawing images, imagelines, rects, quads, etc. here: https://github.com/Chrise55/LLama3D/blob/master/Llama3DLibrary/src/com/llama3d/object/graphics/image/ImageBase.java

    Here is an example of how it looks with many images (testing artificial neural networks). It works fine, but is already a little slow with that many images... :(
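
    One technique that fits dynamic geometry like imageLines -- a sketch only; MAX_BATCH_BYTES, BYTES_PER_VERTEX and the attribute/shader setup are assumed to exist elsewhere -- is to stream all lines of a frame through one reusable VBO allocated with GL_DYNAMIC_DRAW, so a whole batch costs one upload and one draw call instead of one per line:

        // One-time setup: reserve worst-case space in a dynamic VBO.
        int[] vbo = new int[1];
        GLES20.glGenBuffers(1, vbo, 0);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, MAX_BATCH_BYTES,
                            null, GLES20.GL_DYNAMIC_DRAW);

        // Per frame: after filling lineVertices (a FloatBuffer) with the
        // triangles of every line, overwrite the buffer and draw the batch.
        lineVertices.position(0);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
        GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0,
                               vertexCount * BYTES_PER_VERTEX, lineVertices);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);

    Per-line values such as color and alpha would then travel as vertex attributes rather than uniforms, so the batch does not have to be broken per line.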

  • How to skip certain tests with Test::Unit

    - by Daniel Abrahamsson
    In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly because of that I have some test code that interacts with test servers just to see that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite.

    My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, BACKEND_TEST, and a conditional statement that checks whether the variable is set, for each test I would like to skip. But sometimes I would like to skip all tests in a test file without having to add an extra row at the beginning of each test.

    The tests that have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality. As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
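
    A sketch of one way to skip a whole file at once, assuming the test-unit 2.x gem (which provides omit) rather than the bare 1.x stdlib version: guard in setup, so every test method in the case is skipped with a single check.

        # Sketch: class and method names are illustrative.
        class BackendTest < Test::Unit::TestCase
          def setup
            omit("BACKEND_TEST not set") unless ENV["BACKEND_TEST"]
          end

          def test_server_roundtrip
            # ... talks to the slow test server ...
          end
        end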

  • PHP and storing stats

    - by John
    Using PHP 5 and the latest version of MySQL, I want to track impressions and clicks for business listings. My question is: if I do this myself, what is the best way to store the data so I can run reports on it?

    Previously I had a table with the listing id, the user's IP address, whether the hit was a click or an impression, and the date it was tracked. However, the database is approaching 2 GB of data and is very slow. Part of the problem is that it's a pretty simple script that logs impressions and clicks from everyone, including search engines and basically anyone or anything that accesses the listing page.

    Is there an API or file out there with an up-to-date list that can detect whether the visitor is an actual person and not a spider, so I don't fill up the database with unneeded stats?

    Just looking for suggestions: do I keep a raw table that records only the hits, then have a cron job at night tally them up per listing and per IP and store the cumulative stats in a different table? Also, what type of table should it be -- InnoDB or MyISAM?
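
    A sketch of the nightly rollup idea (table and column names are assumptions; it presumes a unique key on (listing_id, stat_date) in the summary table): fold yesterday's raw hits into per-listing daily counters, then purge the raw rows so the log table stays small.

        INSERT INTO listing_stats_daily (listing_id, stat_date, impressions, clicks)
        SELECT listing_id, DATE(tracked_at),
               SUM(action = 'impression'), SUM(action = 'click')
        FROM   listing_hits
        WHERE  tracked_at < CURDATE()
        GROUP  BY listing_id, DATE(tracked_at)
        ON DUPLICATE KEY UPDATE
               impressions = impressions + VALUES(impressions),
               clicks      = clicks      + VALUES(clicks);

        DELETE FROM listing_hits WHERE tracked_at < CURDATE();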

  • Populate on demand a TreeView with data in an XML

    - by m6a-uds
    Hi, I have a large XML file (3000+ nodes) that I want to represent in a TreeView in ASP.NET. I cannot databind it to an XmlDataSource because loading the TreeView would then be way too slow (I never even waited long enough to see it finish...).

    So the solution would be to use the PopulateOnDemand property of the TreeNodes to load data only when needed. Problem is, I can't think of a way to achieve this... How can I, based on the ID of a node, search an XmlDocument to get all the child nodes of the node having this ID? The XML looks like this:

        <document ID="1">
            <document ID="2">
                <document ID="3">
                </document>
            </document>
            <document ID="4">
            </document>
        </document>

    There are no rules on how many levels it can go down or anything...
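
    A sketch of the PopulateOnDemand wiring, assuming an XmlDocument already loaded into a field named xmlDoc and nodes whose Value holds the document ID: the TreeNodePopulate event looks up the clicked node's children with an XPath query and adds them, marking each child for on-demand population only if it has children of its own.

        protected void Tree_TreeNodePopulate(object sender, TreeNodeEventArgs e)
        {
            // Find the XML node whose ID matches the expanding tree node.
            XmlNode parent = xmlDoc.SelectSingleNode(
                string.Format("//document[@ID='{0}']", e.Node.Value));
            if (parent == null) return;

            foreach (XmlNode child in parent.SelectNodes("document"))
            {
                string id = child.Attributes["ID"].Value;
                var node = new TreeNode(id, id);
                // Only show an expand glyph if there is something beneath.
                node.PopulateOnDemand = child.SelectSingleNode("document") != null;
                e.Node.ChildNodes.Add(node);
            }
        }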

  • Starting a process synchronously, and "streaming" the output

    - by Benjol
    I'm looking at how to start a process from F#, wait till it's finished, but also read its output progressively. Is this the right/best way to do it? (In my case I'm trying to execute git commands, but that is tangential to the question.)

        let gitexecute (logger:string->unit) cmd =
            let procStartInfo = new ProcessStartInfo(@"C:\Program Files\Git\bin\git.exe", cmd)

            // Redirect to the Process.StandardOutput StreamReader.
            procStartInfo.RedirectStandardOutput <- true
            procStartInfo.UseShellExecute <- false

            // Do not create the black window.
            procStartInfo.CreateNoWindow <- true

            // Create a process, assign its ProcessStartInfo and start it
            let proc = new Process()
            proc.StartInfo <- procStartInfo
            proc.Start() |> ignore

            // Get the output into a string
            while not proc.StandardOutput.EndOfStream do
                proc.StandardOutput.ReadLine() |> logger

    What I don't understand is how proc.Start() can return a boolean and also be asynchronous enough for me to get the output out of the while loop progressively. Unfortunately, I don't currently have a large enough repository -- or a slow enough machine -- to be able to tell what order things happen in...
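
    For comparison, a sketch of the event-based alternative (same assumed git path; requires open System.Diagnostics): Start returns immediately, OutputDataReceived fires as each line arrives, and WaitForExit supplies the synchronous "wait till it's finished" part explicitly.

        let gitExecuteStreaming (logger: string -> unit) cmd =
            let psi = ProcessStartInfo(@"C:\Program Files\Git\bin\git.exe", cmd,
                                       RedirectStandardOutput = true,
                                       UseShellExecute = false,
                                       CreateNoWindow = true)
            use proc = new Process(StartInfo = psi)
            // Fires on the fly, once per line of output.
            proc.OutputDataReceived.Add(fun args ->
                if args.Data <> null then logger args.Data)
            proc.Start() |> ignore
            proc.BeginOutputReadLine()
            proc.WaitForExit()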

  • Running a process at the Windows 7 Welcome Screen

    - by peelman
    So here's the scoop: I wrote a tiny C# app a while back that displays the hostname, IP address, imaged date, thaw status (we use DeepFreeze), current domain, and the current date/time on the welcome screen of our Windows 7 lab machines. This replaced our previous information block, which was set statically at startup and actually embedded text into the background, with something a little more dynamic and functional. The app uses a Timer to update the IP address, DeepFreeze status, and clock every second, and it checks whether a user has logged in and kills itself when it detects that condition.

    If we just run it via our startup script (set via group policy), it holds the script open and the machine never makes it to the login prompt. If we use something like the start or cmd commands to launch it under a separate shell/process, it runs until the startup script finishes, at which point Windows seems to clean up any and all child processes of the script. We are currently able to bypass that by using psexec -s -d -i -x to fire it off, which lets it persist after the startup script has completed, but this can be incredibly slow, adding anywhere between 5 seconds and over a minute to our startup time.

    We have experimented with using another C# app to start the process via the Process class, with WMI calls (Win32_Process and Win32_ProcessStartup) using various startup flags, etc., but all end with the same result: the script finishes and the info-block process gets killed. I tinkered with rewriting the app as a service, but services were never designed to interact with the desktop, let alone the login window, and getting things to operate in the right context never really worked out.

    So, the question: does anybody have a good way to accomplish this -- launch a task so that it is independent of the startup script and runs on top of the welcome screen?

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LINQ to SQL to query a small, simple SQL Server CE database, and I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LINQ to SQL will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to write things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete.

    I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query, but this throws out a lot of what makes LINQ to SQL so appealing.

    So, I'm wondering if I can somehow get the whole database into RAM and just query it there, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually?

    Note: my database in its initial state is about 250 KB and I don't expect it to grow beyond 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update: here are the table definitions for the example I used in my question:

        create table Order
        (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer
        (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )
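
    One thing worth trying before caching everything by hand -- a sketch, with the context and association names assumed -- is DataLoadOptions, which tells LINQ to SQL to fetch the related rows eagerly in the same round-trip instead of issuing a lazy sub-query per parent row:

        // Sketch: MyDataContext and the Orders association are assumptions.
        var options = new DataLoadOptions();
        options.LoadWith<Customer>(c => c.Orders);

        using (var db = new MyDataContext(connectionString))
        {
            db.LoadOptions = options;
            // One query; the whole graph is then in memory for LINQ to Objects.
            var customers = db.Customers.ToList();
        }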

  • Efficient way to get highly correlated pairs from large data set in Python or R

    - by Akavall
    I have a large data set (let's say 10,000 variables with about 1,000 elements each); we can think of it as a 2D list, something like:

        [[variable_1],
         [variable_2],
         ............
         [variable_n]]

    I want to extract highly correlated variable pairs from that data, where "highly correlated" is a parameter I can choose. I don't need all pairs to be extracted, and I don't necessarily want the most correlated pairs; as long as there is an efficient method that gets me highly correlated pairs, I am happy. Also, it would be nice if a variable did not show up in more than one pair, although this might not be crucial.

    Of course, there is a brute-force way of finding such pairs, but it is too slow for me. I've googled around a bit and found some theoretical work on this issue, but I wasn't able to find a package that does what I am looking for. I mostly work in Python, so a Python package would be most helpful, but if there is an R package that does what I am looking for, that would be great too.

    Does anyone know of a package that does the above in Python or R? Or any other ideas? Thanks in advance.
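
    For scale, a sketch of a vectorized NumPy baseline (not a package, and note the full correlation matrix for 10,000 variables is large, roughly 800 MB in float64): compute all correlations in one call, then greedily take pairs above the threshold, strongest first, discarding variables already used so no variable appears twice.

        import numpy as np

        def correlated_pairs(data, threshold=0.9):
            # data: array of shape (n_vars, n_obs)
            corr = np.abs(np.corrcoef(data))
            n = corr.shape[0]
            i_idx, j_idx = np.triu_indices(n, k=1)     # upper triangle only
            order = np.argsort(-corr[i_idx, j_idx])    # strongest first
            used, pairs = set(), []
            for k in order:
                i, j = i_idx[k], j_idx[k]
                if corr[i, j] < threshold:
                    break
                if i not in used and j not in used:
                    pairs.append((i, j, corr[i, j]))
                    used.update((i, j))
            return pairs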

  • Multithreading for loop while maintaining order

    - by David
    I started messing around with multithreading for a CPU-intensive batch process I'm running. Essentially I'm trying to condense multiple single-page TIFFs into single PDF documents. This works fine with a foreach loop or standard iteration, but can be very slow for documents of several hundred pages. I tried the following, based on some examples I found, to use multithreading, and it gives a significant performance improvement; however, it obliterates the page order: instead of 1, 2, 3, 4 I get 1, 3, 4, 2, 6, 5, depending on which thread completes first.

    My question is: how would I use this technique while maintaining the page order, and if I can, will that negate the performance benefit of the multithreading? Thank you in advance.

        PdfDocument doc = new PdfDocument();
        string mail = textBox1.Text;
        string[] split = mail.Split(new string[] { Environment.NewLine }, StringSplitOptions.None);
        int counter = split.Count();

        // Source must be array or IList.
        var source = Enumerable.Range(0, 100000).ToArray();

        // Partition the entire source array.
        var rangePartitioner = Partitioner.Create(0, counter);

        double[] results = new double[counter];

        // Loop over the partitions in parallel.
        Parallel.ForEach(rangePartitioner, (range, loopState) =>
        {
            // Loop over each range element without a delegate invocation.
            for (int i = range.Item1; i < range.Item2; i++)
            {
                f_prime = split[i].Replace(" ", "");
                PdfPage page = doc.AddPage();
                XGraphics gfx = XGraphics.FromPdfPage(page);
                XImage image = XImage.FromFile(f_prime);
                double x = 0;
                gfx.DrawImage(image, x, 0);
            }
        });
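
    A sketch of one way to split the work (using the PDFsharp-style API from the question; whether XImage.FromFile is safe to call concurrently is an assumption to verify): do the expensive image loading in parallel into an array indexed by page number, then append pages serially, so the order is fixed by the serial loop rather than by thread timing.

        var images = new XImage[split.Length];
        Parallel.For(0, split.Length, i =>
        {
            var path = split[i].Replace(" ", "");
            images[i] = XImage.FromFile(path);   // heavy work, order-independent
        });

        var doc = new PdfDocument();
        for (int i = 0; i < split.Length; i++)   // serial, so page order is kept
        {
            var page = doc.AddPage();
            using (var gfx = XGraphics.FromPdfPage(page))
                gfx.DrawImage(images[i], 0, 0);
        }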

  • How to make this JavaScript much faster?

    - by Ralph
    Still trying to answer this question, and I think I finally found a solution, but it runs too slowly.

        var $div = $('<div>')
            .css({
                'border': '1px solid red',
                'position': 'absolute',
                'z-index': '65535'
            })
            .appendTo('body');

        $('body *').live('mousemove', function(e) {
            var topElement = null;
            $('body *').each(function() {
                if (this == $div[0]) return true;
                var $elem = $(this);
                var pos = $elem.offset();
                var width = $elem.width();
                var height = $elem.height();
                if (e.pageX > pos.left && e.pageY > pos.top &&
                    e.pageX < (pos.left + width) && e.pageY < (pos.top + height)) {
                    var zIndex = document.defaultView.getComputedStyle(this, null).getPropertyValue('z-index');
                    if (zIndex == 'auto') zIndex = $elem.parents().length;
                    if (topElement == null || zIndex > topElement.zIndex) {
                        topElement = { 'node': $elem, 'zIndex': zIndex };
                    }
                }
            });
            if (topElement != null) {
                var $elem = topElement.node;
                $div.offset($elem.offset()).width($elem.width()).height($elem.height());
            }
        });

    It basically loops through all the elements on the page and finds the top-most element beneath the cursor. Is there maybe some way I could use a quad-tree or something and segment the page so the loop runs faster?
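
    Since the loop is computing "top-most element under the cursor", the browser's native hit test may make the loop unnecessary. A sketch, assuming only the top-most element is wanted; note elementFromPoint takes viewport coordinates (clientX/clientY), not page coordinates:

        $(document).bind('mousemove', function (e) {
            $div.hide();   // don't hit-test the overlay itself
            var el = document.elementFromPoint(e.clientX, e.clientY);
            $div.show();
            if (el && el !== document.body) {
                var $el = $(el);
                $div.offset($el.offset()).width($el.width()).height($el.height());
            }
        });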

  • Help on MySQL table indexing when GROUP BY is used in a query

    - by Silver Light
    Thank you for your attention. There are two InnoDB tables:

        Table authors:
            id INT
            nickname VARCHAR(50)
            status ENUM('active', 'blocked')
            about TEXT

        Table books:
            author_id INT
            title VARCHAR(150)

    I'm running a query against these tables to get each author and a count of the books he has:

        SELECT a.*, COUNT(b.id) AS book_count
        FROM authors AS a, books AS b
        WHERE a.status != 'blocked'
          AND b.author_id = a.id
        GROUP BY a.id
        ORDER BY a.nickname

    This query is very slow (it takes about 6 seconds to execute). I have an index on books.author_id and it works perfectly, but I do not know how to create an index on the authors table so that this query could use it. Here is the current EXPLAIN output:

        id  select_type  table  type  possible_keys               key            key_len  ref   rows  Extra
        1   SIMPLE       a      ALL   PRIMARY,id_status_nickname  NULL           NULL     NULL  3305  Using where; Using temporary; Using filesort
        1   SIMPLE       b      ref   key_author_id               key_author_id  5        a.id  2     Using where; Using index

    I've looked at the MySQL manual on optimizing queries with GROUP BY, but could not figure out how to apply it to my query. I'd appreciate any help and hints on this: what should the index structure be so that MySQL can use it?
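
    A sketch of one common restructuring (index name assumed; whether it removes the filesort depends on the optimizer version, so treat it as something to EXPLAIN, not a guarantee): pre-aggregate the counts in a derived table so the GROUP BY runs over the narrow books index, and give authors a composite index covering the filter and the join key.

        ALTER TABLE authors ADD INDEX idx_status_id (status, id);

        SELECT a.*, b.book_count
        FROM   authors AS a
        JOIN  (SELECT author_id, COUNT(*) AS book_count
               FROM books
               GROUP BY author_id) AS b
               ON b.author_id = a.id
        WHERE  a.status != 'blocked'
        ORDER  BY a.nickname;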

  • (How) Can I approximate a "dynamic" index (key extractor) for Boost MultiIndex?

    - by Sarah
    I have a MultiIndex container of boost::shared_ptrs to members of class Host. These members contain private arrays bool infections[NUM_SEROTYPES] revealing the hosts' infection statuses with respect to each of 1, ..., NUM_SEROTYPES serotypes. I want to be able to determine, at any time in the simulation, the number of people infected with a given serotype, but I'm not sure how:

      - Ideally, Boost MultiIndex would allow me to sort by, for example, Host::isInfected(int s), where s is the serotype of interest. From what I understand, MultiIndex key extractors aren't allowed to take arguments.
      - An alternative would be to define an index for each serotype, but I don't see how to write the MultiIndex container typedef in such an extensible way. I will be changing the number of serotypes between simulations. (Do experienced programmers think this should be possible? I'll attempt it if so.)
      - There are 2^NUM_SEROTYPES possible infection statuses. For small numbers of serotypes, I could use a single index based on this number (or a binary string) and come up with some mapping from this key to the actual infection status. Counting is still darn slow.
      - I could maintain a separate structure counting the total numbers of infecteds with each serotype. The synchrony is a bit of a pain, but the memory is fine.

    I would prefer a slicker option, since I would like to do further sorts on other host attributes (e.g., after counting the number infected with serotype s, count the number of those infected who are also in a particular household and have a particular age). Thanks in advance.
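
    For the last option in the list, the bookkeeping can be quite small. A sketch (C++11's std::array assumed; boost::array would do the same on older compilers), where the counters are bumped at the same point the Host's infections array is flipped:

        #include <array>

        struct SerotypeCounts {
            std::array<long, NUM_SEROTYPES> counts{};   // zero-initialized

            void onInfect(int s)  { ++counts[s]; }      // call where infections[s] is set
            void onRecover(int s) { --counts[s]; }      // call where infections[s] is cleared
            long infectedWith(int s) const { return counts[s]; }
        };

    This makes "how many are infected with s?" O(1), while the compound queries (household, age) can still go through the MultiIndex as before.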

  • Jquery conditionals, window locations, and viewdata. Oh my!

    - by John Stuart
    I have one last thing left on a project and it's a doozy. Not only is this my first web application, it's also the first app where I've used jQuery, CSS, and MVC. I have no idea how to proceed. What I am trying to do is this: in my controller, a waste item is validated, and based on the results one of these things can happen:

      1. The validation passes and nothing bad happens, which sets ViewData["FailedWasteId"] to -9999.
      2. It's a new waste item and the validation did not pass, which sets ViewData["FailedWasteId"] to 0.
      3. It's an existing waste item and the validation did not pass, which sets ViewData["FailedWasteId"] to the id of the waste item.

    ViewData["FailedWasteId"] is written into the page on load using:

        <%= Html.Hidden("wFailId", int.Parse(ViewData["WasteFailID"].ToString())) %>

    When the validation does not pass, the page should zoom (by window.location) to an invisible div, open that div, etc. Hopefully my intentions are clear from this poor attempt at jQuery. The new-waste div and the existing-item divs are dynamically generated (this I know works).

    So my question here is... help? I can't even get the data to parse correctly, nor can I get the conditionals to work. And since this happens after the post, I can't get Firebug to help me step through the debugger, as the script isn't loaded yet.

        $(document).ready(function () {
            // Note: a hidden input's value is read with .val(), not .text().
            var wasteId = parseInt($('#wFailId').val(), 10);
            if (wasteId == -9999) {
                // No issue
            } else if (wasteId < 0) {
                // Waste not saved to database
            } else if (wasteId == 0) {
                // New waste
                window.location = '#0';
                $('.editPanel').hide();
                $('#GeneratedWasteGrid:first').before(newRow);
                $('.editPanel').appendTo('#edit-panel-row').slideDown('slow');
            } else if (wasteId > 0) {
                // Waste saved to database
            }
        });

  • Limiting object allocation over multiple threads

    - by John
    I have an application which retrieves and caches the results of a client's query. The client then requests different chunks of data, and the application sends the relevant results and removes them from the cache. A new requirement for this application is that there needs to be a runtime-configurable maximum number of results which may be cached.

    I've taken the naive approach and implemented this with a counter under a lock, which is incremented every time a result is cached and decremented whenever a result is removed from the cache. Unfortunately, this has drastically reduced the application's performance when processing a large number of concurrent requests. I have tried both a critical-section lock and a spin-lock; performance improves a bit with a spin-lock, but is still unacceptably slow. Is there a better way to solve this problem which may improve performance?

    Right now I have a thread pool that services requests, and each request is tied to a Request object which stores the cached results for that particular request. Here is a simplified pseudocode version of my current implementation:

        void ResultCallback(Result result, Request *request)
        {
            lock totalResultsCached
            lock cachedLimit

            if (totalResultsCached + 1 > cachedLimit)
            {
                unlock cachedLimit
                unlock totalResultsCached
                // cancel the request
                return;
            }

            ++totalResultsCached;
            unlock cachedLimit
            unlock totalResultsCached

            request.add(result)
        }

        void SendResults(int resultsToSend, Request *request)
        {
            while (resultsToSend > 0)
            {
                send(request.remove())

                lock totalResultsCached
                --totalResultsCached
                unlock totalResultsCached

                --resultsToSend;
            }
        }
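
    A sketch of a lock-free variant (assuming C++11 atomics are available): reserve a slot with a single atomic increment and roll back on overshoot, which replaces the two nested locks per result with one uncontended atomic operation each way.

        #include <atomic>

        std::atomic<long> totalResultsCached{0};
        std::atomic<long> cachedLimit{100000};   // runtime-configurable

        bool TryCacheResult() {
            // Optimistically claim a slot...
            if (totalResultsCached.fetch_add(1) >= cachedLimit.load()) {
                totalResultsCached.fetch_sub(1);  // ...and undo on overflow
                return false;                     // caller cancels the request
            }
            return true;
        }

        void ReleaseCachedResult() {
            totalResultsCached.fetch_sub(1);      // called when a result is sent
        }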

  • Faster Javascript text replace

    - by Stacey
    Given the following JavaScript (jQuery):

        $("#username").keyup(function () {
            selected.username = $("#username").val();
            var url = selected.protocol +
                (selected.prepend == true ? selected.username : selected.url) + "/" +
                (selected.prepend == true ? selected.url : selected.username);
            $("#identifier").val(url);
        });

    This code basically reads a textbox (username) and, as it is typed into, reconstructs the URL displayed in another textbox (identifier). This works fine -- there are no problems with its functionality. However, it feels slow and sluggish. Is there a cleaner/faster way to accomplish this task?

    Here is the HTML, as requested:

        <fieldset class="identifier delta">
            <form action="/authenticate/openid" method="post" target="_top">
                <input type="text" class="openid" id="identifier" name="identifier" readonly="readonly" />
                <input type="text" id="username" name="username" class="left" style="display: none;" />
                <input type="submit" value="Login" style="height: 32px; padding-top: 1px; margin-right: 0px;" class="login right" />
            </form>
        </fieldset>

    The identifier textbox just has its value set based on the hyperlink anchor of a button.
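
    A sketch of the usual micro-optimization (behaviour assumed unchanged): look the elements up once outside the handler, and read the raw value via this.value instead of re-querying the DOM on every keystroke.

        var $username = $('#username'),
            $identifier = $('#identifier');

        $username.keyup(function () {
            selected.username = this.value;   // no repeated selector lookups
            var url = selected.protocol +
                (selected.prepend ? selected.username : selected.url) + '/' +
                (selected.prepend ? selected.url : selected.username);
            $identifier.val(url);
        });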

  • R Random Data Sets within loops

    - by jugossery
    Here is what I want to do: I have a time-series data frame with, let us say, 100 time series of length 600, each in one column of the data frame. I want to pick 4 of the time series at random and then assign them random weights that sum to one (e.g. 0.1, 0.5, 0.3, 0.1). Using those, I want to compute the mean of the sum of the 4 weighted time series (a convex combination). I want to do this, let us say, 100k times and store each result in the form

        ts1.name, ts2.name, ts3.name, ts4.name, weight1, weight2, weight3, weight4, mean

    so that I get a 9 x 100k data frame. I tried some things already, but I know loops are slow in R and vector-oriented solutions are better because of R's design. Thanks.

    Here is what I did, and I know it is horrible. The data frame is in the form:

        v1,v2,v3,.....v100
        1,5,6,.......9
        2,4,6,.......10
        3,5,8,.......6
        2,2,8,.......2
        etc

        e = NULL
        for (x in 1:100000) {
            s = sample(1:100, 4)                   # pick 4 variables randomly
            a = sample(seq(0, 1, 0.01), 1)
            b = sample(seq(0, 1 - a, 0.01), 1)
            c = sample(seq(0, (1 - a - b), 0.01), 1)
            d = 1 - a - b - c
            e = c(a, b, c, d)                      # 4 random weights
            average = mean(timeseries.df[, s] %*% t(e))
            e = rbind(e, s, average)               # in the end I get the 9*100k df
        }

    The procedure runs way too slow.

    EDIT: Thanks for the help I've had. I am not used to thinking in R, and not very used to translating every problem into a matrix-algebra equation, which is what you need in R. The problem becomes a bit more complex if I want to calculate the standard deviation: I need the covariance matrix, and I am not sure if/how I can pick random elements for each sample from the original timeseries.df covariance matrix and then compute the sample variance, t(sampleweights) %*% sample_cov.mat %*% sampleweights, to get the ts.weighted_standard_dev matrix in the end.

    Last question: what is the best way to proceed if I want to bootstrap the original df x times and then apply the same computations, to test the robustness of my data? Thanks.
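
    A sketch of a mostly vectorized rewrite. One assumption to flag: the 0.01 weight grid is replaced by uniform sampling on the simplex via the normalized-exponentials trick, which changes the weight distribution slightly.

        n <- 100000
        picks <- replicate(n, sample(colnames(timeseries.df), 4))  # 4 x n names
        w <- matrix(rexp(4 * n), nrow = 4)
        w <- sweep(w, 2, colSums(w), "/")                          # columns sum to 1

        # Draw all randomness up front; only the matrix product stays in a loop.
        means <- vapply(seq_len(n), function(k)
            mean(as.matrix(timeseries.df[, picks[, k]]) %*% w[, k]),
            numeric(1))

        result <- data.frame(t(picks), t(w), mean = means)         # 100k x 9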

  • Hide / Show menu code not working after postback

    - by WraithNath
    I have a button on my web page that toggles the menu. After a postback the menu comes back, despite me updating a hidden field value to store its state. Am I doing something wrong here? If there is a better way of doing it, let me know!

    Markup:

        <asp:Button ID="btnMenu" runat="server" Text="Hide Menu" UseSubmitBehavior="False"
            OnClientClick="return toggleMenu(this);" />

        <asp:Panel runat="server" ID="pnlMenuToggle">
            //Main Menu
        </asp:Panel>

        <asp:Panel runat="server" ID="pnlSubMenuToggle">
            //Sub Menu
        </asp:Panel>

        <asp:HiddenField ID="hfMenuState" runat="server" Value="true" />

        <script>
            // Toggles menu visibility
            function toggleMenu(menuButton) {
                var menuVisible = $('#<%=hfMenuState.ClientID%>').val() == 'true' ? true : false;

                $('#<%=pnlMenuToggle.ClientID%>').slideToggleWidth();
                $('#<%=pnlSubMenuToggle.ClientID%>').slideToggle('slow');

                // Update whether the menu is visible
                menuVisible = !menuVisible;

                // Update menu button text
                $(menuButton).val(menuVisible ? 'Hide Menu' : 'Show Menu');
                $('#<%=hfMenuState.ClientID%>').val(menuVisible);

                return false;
            }
        </script>

    Code behind (Page_Load):

        bool menu = Convert.ToBoolean(hfMenuState.Value);
        pnlMenuToggle.Visible = menu;
        pnlSubMenuToggle.Visible = menu;

    The JavaScript updates the hidden field value, but it looks like this is never posted back to the server. What can I do to make sure the menu stays hidden after postbacks? I have also tried putting the hidden field in an UpdatePanel with UpdateMode set to Always.
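
    One thing that may be biting here: setting Panel.Visible = false removes the panel markup from the response entirely, so the client script has nothing left to toggle on the next round-trip. A sketch of a server-side alternative that keeps the panels in the page and hides them with CSS instead (assumption: this is the intended behaviour):

        // Sketch, in Page_Load: toggle CSS display rather than Visible.
        bool menu = Convert.ToBoolean(hfMenuState.Value);
        pnlMenuToggle.Style["display"]    = menu ? "" : "none";
        pnlSubMenuToggle.Style["display"] = menu ? "" : "none";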

  • Faster way to convert from a String to generic type T when T is a valuetype?

    - by Kumba
    Does anyone know of a fast way in VB to go from a string to a generic type T constrained to a value type (Of T As Structure), when I know that T will always be a number type? This is too slow for my taste:

        Return DirectCast(Convert.ChangeType(myStr, GetType(T)), T)

    But it seems to be the only sane method of getting from a String to T. I've tried using Reflector to see how Convert.ChangeType works, and while I can convert from the String to a given number type via a hacked-up version of that code, I have no idea how to jam that type back into T so it can be returned.

    I'll add that part of the speed penalty I'm seeing (in a timing loop) is because the return value is getting assigned to a Nullable(Of T) value. If I strongly type my class for a specific number type (i.e., UInt16), I can vastly increase the performance, but then the class would need to be duplicated for each numeric type that I use. It'd almost be nice if there were a converter to/from T while working on it in a generic method/class. Maybe there is and I'm oblivious to its existence?
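
    A sketch of one way to "jam the type back into T" (class and member names are illustrative): resolve the conversion once per closed generic type into a cached delegate, and box through Object so DirectCast can unbox into T. Each call then costs one delegate invocation instead of Convert.ChangeType's runtime type inspection.

        Public Class NumberParser(Of T As Structure)
            ' Initialized once per closed type (NumberParser(Of UInt16), etc.).
            Private Shared ReadOnly Parser As Func(Of String, T) = BuildParser()

            Public Shared Function Parse(s As String) As T
                Return Parser(s)
            End Function

            Private Shared Function BuildParser() As Func(Of String, T)
                Dim t = GetType(T)
                If t Is GetType(UInt16) Then
                    ' CObj boxes the UInt16; DirectCast unboxes it as T.
                    Return Function(s) DirectCast(CObj(UInt16.Parse(s)), T)
                ElseIf t Is GetType(Int32) Then
                    Return Function(s) DirectCast(CObj(Int32.Parse(s)), T)
                End If
                ' Fallback keeps correctness for any other numeric type.
                Return Function(s) DirectCast(Convert.ChangeType(s, t), T)
            End Function
        End Class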

  • Python : How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application. It works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then slows down due to lack of CPU.

    Problem: profiling the application gives me the time taken by functions. This time increases under high load. The time consumed may be due to complex calculation or to waiting for CPU. So: how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, or I might have some stupid code taking the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to profile my webapp; it gives me a throughput of 2 requests/sec with 100 users. I get an average time of 36 seconds over 100 requests vs. 1.25 seconds for a single request.

    More info on the configuration:

      - Nginx + uWSGI with 4 workers
      - No database used; responses come from a REST API
      - On the first hit the response of the REST API gets cached, so it doesn't make a difference
      - Using ujson for JSON parsing

    Curious to know: Python/Django is used by so many organizations for so many big sites, so there must be some high-end debug/memory/CPU analysis tools. All those I found were casual snippets of code that perform profiling.
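
    A sketch of separating the two suspects with the standard library (Unix-only, which matches the Nginx + uWSGI setup): wall-clock time includes waiting for a CPU under load, while getrusage counts only the CPU time this process actually consumed, so the gap between the two is the queueing.

        import time
        import resource

        def measure(func, *args, **kwargs):
            wall0 = time.time()
            ru0 = resource.getrusage(resource.RUSAGE_SELF)
            result = func(*args, **kwargs)
            wall1 = time.time()
            ru1 = resource.getrusage(resource.RUSAGE_SELF)
            cpu = (ru1.ru_utime - ru0.ru_utime) + (ru1.ru_stime - ru0.ru_stime)
            print("wall %.3fs  cpu %.3fs  waiting ~%.3fs" % (
                wall1 - wall0, cpu, (wall1 - wall0) - cpu))
            return result

    If the "waiting" share dominates under load, the code is CPU-starved and more workers/cores help; if the cpu share dominates, the code itself is the target for optimization.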
