Search Results

Search found 8611 results on 345 pages for 'apt fast'.

  • Branching logic in an MVC view

    - by Alex Kilpatrick
    I find myself writing a lot of code in my views that looks like the code below. In this case, I want to add some explanatory HTML for a novice, and different HTML for an expert user. <% if (ViewData["novice"] != null) { %> some extra HTML for a novice <% } else { %> some HTML for an expert <% } %> This is presentation logic, so it makes sense that it is in a view rather than the controller. However, it gets ugly really fast, especially when ReSharper wants to move all the braces around to make it even uglier (is there a way to turn that off for views?). My question is whether this is proper, or should I branch in the controller to two separate views? If I do two views, I will have a lot of duplicated HTML to maintain. Or should I do two separate views with a shared partial view for the stuff that is in common?
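
    For illustration, here is a rough sketch (controller, action, and view names are made up) of the "two views plus a shared partial" option mentioned above: the controller decides which view to render, and both views pull the common markup from one partial, so nothing is duplicated.

        // Hypothetical controller: picks one of two thin views based on user level.
        public class GuideController : Controller
        {
            public ActionResult Index()
            {
                // however the user level is actually determined
                bool isNovice = Session["novice"] != null;
                return View(isNovice ? "IndexNovice" : "IndexExpert");
            }
        }

        // IndexNovice.aspx and IndexExpert.aspx then each contain only their extra
        // HTML plus: <% Html.RenderPartial("SharedContent"); %>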

    Read the article

  • Recommendations for C# controls that can be used to create a hierarchical list of prioritised items?

    - by Mendokusai
    I need to be able to display and edit a hierarchical list of tasks in a C# app. It can either be a Windows Forms app or ASP.NET. Basically, I want similar behaviour to the way Microsoft Project handles tasks. The control would need to:
    1) Maintain a list of items made up of several fields
    2) Each item can have a number of children (at least 3 levels of nesting)
    3) It needs to be very easy to change the parents/children of an item
    4) It needs to be very easy to edit the fields (as fast as changing cells in Excel)
    5) It needs to be very easy to reorder the items by dragging and dropping or cut and paste
    6) If I can easily connect the control to a database, even better
    Anyone got any recommendations for controls for me to look at?
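
    Not a control recommendation, but a minimal sketch (purely illustrative, names made up) of the item shape such a control would typically bind to: each task carries its own fields plus a list of child tasks, nested to any depth, which also makes re-parenting an item a matter of moving it between two Children lists.

        // Hypothetical task model for a hierarchical, editable task list.
        public class TaskItem
        {
            public string Name { get; set; }
            public int Priority { get; set; }
            public DateTime? DueDate { get; set; }
            public TaskItem Parent { get; set; }
            public List<TaskItem> Children { get; set; } = new List<TaskItem>();
        }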

    Read the article

  • What is the fastest way to copy the contents of a DVD to hard disk using Linux?

    - by Ritesh
    I have gone through some links which talk about the fastest way of copying files in Windows using FILE_FLAG_NO_BUFFERING and FILE_FLAG_OVERLAPPED. They also mention that read and write requests with buffer sizes of 256KB and 128KB are faster than 1MB. The link for that is: Explanation for tiny reads (overlapped, buffered) outperforming large contiguous reads? I am looking for a similar method in Linux which allows me to copy the contents of my DVD to the hard disk quickly. So I wanted to know: are there file operation flags in Linux which would give me the best result, or which way of copying in Linux is the best? My code is all in C++.

    Read the article

  • jQuery Tools - Range Input with Mousewheel support

    - by pspahn
    I've got a nice looking horizontal scroller that uses the Range Input feature of jQ Tools. Mouse wheel scrolling support would be awesome, and I'm sure it can work, just not sure how to get it set up. I've got the generic setup: var scroll = jQuery('#scroll'); jQuery(':range').rangeinput({ speed: 0, progress: true, onSlide: function(ev, step) { scroll.css({left: -step}); }, change: function(e, i) { scroll.animate({left: -i}, "fast"); } }); The doc page ( http://jquerytools.org/documentation/toolbox/mousewheel.html ) for mousewheel gives an example: $("#myelement").mousewheel(function(event, delta) { }); But I'm not sure how this hooks in. Other tools in the package simply allow the option: mousewheel: true Unfortunately that doesn't work on range input. Any advice appreciated. Thank you.

    Read the article

  • How do I correctly add brackets to this code?

    - by Mohammad
    This code trims whitespace (FYI: it's reputed to be very fast): function wSpaceTrim(s){ var start = -1, end = s.length; while (s.charCodeAt(--end) < 33 ); //here while (s.charCodeAt(++start) < 33 ); //here also return s.slice( start, end + 1 ); } The while loops don't have brackets; how would I correctly add brackets to this code? while(iMean){ like this; } Thank you so much!

    Read the article

  • How might one develop a program like FRAPS?

    - by blood
    I would like to make a program to capture video. What is the best way to capture video? I know C++ and I'm learning assembly. I found in my assembly book that I can get data from the video card. Would that be the best way? I know FRAPS hooks into programs, but I would like my program to take video of the entire screen. I would like something fast, with low memory usage if possible. A requirement is that the program must be usable on other computers, despite dissimilar hardware.
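
    As a purely illustrative sketch (in C# with GDI+, even though the question mentions C++), this is about the simplest way to grab one frame of the entire screen; a real recorder would run this in a loop and feed the bitmaps to an encoder, and FRAPS-style tools instead hook the game's Direct3D/OpenGL back buffer, which is much faster.

        // Requires System.Drawing, System.Drawing.Imaging and System.Windows.Forms.
        Rectangle bounds = Screen.PrimaryScreen.Bounds;
        using (var frame = new Bitmap(bounds.Width, bounds.Height))
        using (var g = Graphics.FromImage(frame))
        {
            g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
            frame.Save("frame.png", System.Drawing.Imaging.ImageFormat.Png);   // one captured frame of the whole desktop
        }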

    Read the article

  • gcc options for fastest code

    - by rwallace
    I'm distributing a C++ program with a makefile for the Unix version, and I'm wondering what compiler options I should use to get the fastest possible code (it falls into the category of programs that can use all the computing power they can get and still come back for more). I don't know in advance what hardware, operating system or gcc version the user will have, and I want above all else to make sure it at least works correctly on every major Unix-like operating system. Thus far I have g++ -O3 -Wno-write-strings; are there any other options I should add? On Windows, the Microsoft compiler has options for things like fast calling conventions and link-time code generation that are worth using; are there any equivalents on gcc? (I'm assuming it will default to 64-bit on a 64-bit platform; please correct me if that's not the case.)

    Read the article

  • Efficient data structure design

    - by Sway
    Hi there, I need to match a series of user-entered words against a large dictionary of words (to ensure the entered value exists). So if the user entered "orange", it should match the entry "orange" in the dictionary. The catch is that the user can also enter a wildcard or a series of wildcard characters, say "or__ge", which would also match "orange". The key requirements are: (1) this should be as fast as possible, and (2) it should use the smallest amount of memory to achieve it. If the size of the word list were small, I could use a string containing all the words and use regular expressions. However, given that the word list could contain potentially hundreds of thousands of entries, I'm assuming this wouldn't work. So would some sort of 'tree' be the way to go for this? Any thoughts or suggestions on this would be totally appreciated! Thanks in advance, Matt
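
    A minimal sketch (not production code, and only one of several possible structures) of a character trie supporting '_' as a single-character wildcard, as described above: common prefixes are shared to save memory, and a lookup only has to branch where a wildcard appears.

        // Requires System.Collections.Generic.
        class TrieNode
        {
            public Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
            public bool IsWord;
        }

        class WordTrie
        {
            private readonly TrieNode root = new TrieNode();

            public void Add(string word)
            {
                var node = root;
                foreach (char c in word)
                {
                    if (!node.Children.TryGetValue(c, out var next))
                        node.Children[c] = next = new TrieNode();
                    node = next;
                }
                node.IsWord = true;
            }

            // '_' matches exactly one arbitrary character.
            public bool Matches(string pattern) => Matches(root, pattern, 0);

            private static bool Matches(TrieNode node, string pattern, int i)
            {
                if (i == pattern.Length) return node.IsWord;
                char c = pattern[i];
                if (c == '_')
                {
                    foreach (var child in node.Children.Values)
                        if (Matches(child, pattern, i + 1)) return true;
                    return false;
                }
                return node.Children.TryGetValue(c, out var next) && Matches(next, pattern, i + 1);
            }
        }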

    Read the article

  • Applying a pixel shader effect to a portion of an image

    - by Nick
    I have a ScrollViewer that contains a very large video (16 megapixel @ 10fps) and I want to apply a pixel shader effect to it. Given the size of the images, I can't apply the effect directly to the image, so I apply the effect to the ScrollContentPresenter in the control style, which is great: everything runs nice and fast. However, I'm also rendering annotations inside of the ScrollContentPresenter which I do NOT want effects applied to (but they need to move and scale along with the image). Is there a way to apply the effect just to the clipped and displayed portion of the image, or do I need to build a rather more complex control?

    Read the article

  • Implications of Fulltext Search over many columns

    - by Alex
    Hello, I have a really wide table which includes separate columns for billing address, shipping address, primary address, names, aliases, etc. (I can't normalize this table further, and that's not the question here anyway). I'm implementing SQL Server fulltext search, and I'm wondering whether I should limit the search ability to just the primary fields (primary address and names, for example), or if I can extend the search across all columns without incurring too much of a performance or memory penalty. I've done some basic testing with 10,000 sample rows and it's quite fast, but I don't have much experience with fulltext indexing, especially its dictionary internals, so I don't know if the index is going to grow over time, or if there is anything else to consider. Thoughts?

    Read the article

  • packing fields of a class into a byte array in c#

    - by alex
    Hi all: I am trying to create a fast way to convert C# classes into a byte array. I thought of serializing the class directly to a byte array using an example I found: // Convert an object to a byte array private byte[] ObjectToByteArray(Object obj) { if(obj == null) return null; BinaryFormatter bf = new BinaryFormatter(); MemoryStream ms = new MemoryStream(); bf.Serialize(ms, obj); return ms.ToArray(); } But the byte array I got contains some other information that is not related to the fields in the class. I guess it is also converting the properties of the class. Is there a way to serialize only the fields of the class to a byte array? Thanks
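
    As a hedged alternative sketch (not how BinaryFormatter itself works), the fields alone can be written out with reflection and a BinaryWriter. It only handles a few primitive field types and no nesting, so treat it as a starting point rather than a drop-in replacement.

        // Requires System.IO and System.Reflection.
        private static byte[] FieldsToByteArray(object obj)
        {
            if (obj == null) return null;
            using (var ms = new MemoryStream())
            using (var writer = new BinaryWriter(ms))
            {
                FieldInfo[] fields = obj.GetType().GetFields(
                    BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
                foreach (FieldInfo field in fields)
                {
                    object value = field.GetValue(obj);
                    if (value is int i) writer.Write(i);
                    else if (value is long l) writer.Write(l);
                    else if (value is double d) writer.Write(d);
                    else if (value is bool b) writer.Write(b);
                    else if (value is string s) writer.Write(s);
                    // ...extend with the other field types the class actually uses
                }
                return ms.ToArray();
            }
        }

    For plain structs containing only blittable fields, Marshal.StructureToPtr plus Marshal.Copy is another common route.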

    Read the article

  • does a ruby on rails rack class get access to the entire rails environment?

    - by Andrew Arrow
    That is, when the def call(env) method is invoked by hitting any URL, can I, inside that method, make some ActiveRecord queries, use classes defined in lib, etc.? Or is it more like an irb console without the Rails env loaded? Another way to put it, with a rake task example: task :foo => :environment do # with env end task :foo2 do # without env end I would think Rack classes would NOT get the environment, so they are super fast and don't take all the overhead of a normal Rails request. But that doesn't seem to be the case: I CAN make ActiveRecord queries inside my Rack class. So what is the advantage of Rack then?

    Read the article

  • What are some good ways to do intermachine locking?

    - by mike
    Our server cluster consists of 20 machines, each running 10 processes with 5 threads apiece. We'd like some way to prevent any two threads, in any process, on any machine, from modifying the same object at the same time. Our code's written in Python and runs on Linux, if that helps narrow things down. Also, it's a pretty rare case that two such threads want to do this, so we'd prefer something that optimizes the "only one thread needs this object" case to be really fast, even if it means that the "one thread has locked this object and another one needs it" case isn't great. What are some of the best practices?

    Read the article

  • PHP APC - Why is loading cached array op codes slow?

    - by Aaron Kreider
    I'm using APC to reduce the loading time for my PHP files. My files load very fast, except for one file where I define more than 100 arrays. This 270 KB file takes 200 ms to load. The rest of the files are full of objects, methods, and functions. I'm wondering: does opcode caching not work as well for arrays? My APC cache should be big enough to handle all of my classes. Currently 40% of my cache is free. My hit rate is 99%. apc.shm_size=32M apc.max_file_size=1M apc.shm_segments=1 APC 3.1.6 I'm using PHP 5.2, Apache 2, and Windows Vista.

    Read the article

  • Silverlight 4 application on localhost runs extremely slow

    - by rams
    Silverlight 4 app running in IE8 and hosted on the VS2010 internal web server. The website takes at least a minute to download the XAP, and the code runs slowly on the client (IE8). I am running the app in debug mode and have turned IntelliTrace off. Symbol loading is also turned off. However, if I kill the VS web server and clean the solution, the app runs fast. Three debugging sessions later, the app slows to a crawl. I have also tried turning off McAfee live scanning, but to no avail. I looked in the event log for any clue but found none. What could be the cause of the slowness? TIA rams

    Read the article

  • problem with HttpWebRequest.GetResponse performance in a multithreaded application

    - by Nikita
    I get very bad performance from the HttpWebRequest.GetResponse method when it is called from different threads for different URLs. For example, if we have one thread and execute url1, it requires 3 sec. If we execute url1 and url2 in parallel, it requires 10 sec: the first request ends after 8 sec and the second after 10 sec. If we execute 10 URLs (url1, url2, ... url10), it requires 1 min 4 sec!!! The first request ends after 50 sec! I use the GetResponse method. I tried to set DefaultConnectionLimit but it doesn't help. If I use the BeginGetResponse/EndGetResponse methods, it works very fast, but only if these methods are called from one thread; from different threads it is also very slow. I need to execute HTTP requests from many threads at one time. Has anyone ever encountered such a problem? Thank you for any suggestions.
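
    For what it's worth, a hedged sketch of the settings that most often cause exactly this slowdown: the per-host connection limit (2 by default for desktop apps) and automatic proxy detection. These are real System.Net settings, but whether they fix this particular case is only an assumption.

        // Requires System.Net and System.IO. Set the limits before the first request is created.
        ServicePointManager.DefaultConnectionLimit = 20;
        ServicePointManager.Expect100Continue = false;

        var request = (HttpWebRequest)WebRequest.Create(url);   // url as in the question
        request.Proxy = null;   // skip automatic proxy detection, a frequent source of multi-second stalls
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string body = reader.ReadToEnd();
        }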

    Read the article

  • Need help on nested loop of queries in php and mysql?

    - by mysqllearner
    Hi, I am trying to do this: <?php $good_customer = 0; $q = mysql_query("SELECT user FROM users WHERE activated = '1'"); // this gives me about 40k users while($r = mysql_fetch_assoc($q)){ $money_spent = 0; $user = $r['user']; // Do queries on another 20 tables for($i = 1; $i<=20 ; $i++){ $tbl_name = 'data' . $i; $q2 = mysql_query("SELECT money_spent FROM $tbl_name WHERE user = '{$user}'"); while($r2 = mysql_fetch_assoc($q2)){ $money_spent += $r2['money_spent']; } if($money_spent > 1000000){ $good_customer += 1; } } } This is just an example. I am testing on localhost; for a single user it returns very fast, but when I try 1,000 it takes forever, let alone 40k users. Any way to optimise/improve this code? EDIT: By the way, each of the other 20 tables has ~20-40k records.

    Read the article

  • Multiple indexes for a Java Collection - most basic solution?

    - by chris_l
    Hi, I'm looking for the most basic solution to create multiple indexes on a Java Collection. Required functionality:
    * When a value is removed, all index entries associated with that value must be removed.
    * Index lookup must be faster than a linear search (at least as fast as a TreeMap).
    Side conditions:
    * It should ideally work with Java SE (6.0) alone - no extra libraries, if possible.
    * If necessary, then only small (not something like Lucene), common and well-tested libraries.
    * No database!
    Of course, I could write a class that manages multiple Maps myself, but I'd like to know if it can be done without that - while still getting usage as simple as a single indexed java.util.Map. Thanks, Chris

    Read the article

  • Can this loop be sped up in pure Python?

    - by Noctis Skytower
    I was trying out an experiment with Python, trying to find out how many times it could add one to an integer in one minute's time. Assuming two computers are the same except for the speed of the CPUs, this should give an estimate of how fast some CPU operations are for the computer in question. The code below is an example of a test designed to fulfill the requirements given above. This version is about 20% faster than the first attempt and 150% faster than the third attempt. Can anyone make any suggestions as to how to get the most additions in a minute's time span? Higher numbers are desirable. EDIT: This experiment is being written in Python 3.1 and is 15% faster than the fourth speed-up attempt.

        def start(seconds):
            import time, _thread
            def stop(seconds, signal):
                time.sleep(seconds)
                signal.pop()
            total, signal = 0, [None]
            _thread.start_new_thread(stop, (seconds, signal))
            while signal:
                total += 1
            return total

        if __name__ == '__main__':
            print('Testing the CPU speed ...')
            print('Relative speed:', start(60))

    Read the article

  • how to optimize an oracle query that has to_char in where clause for date

    - by panorama12
    I have a table that contains about 49403459 records. I want to query the table on a date range, say 04/10/2010 to 04/10/2010. However, the dates are stored in the table in the format 10-APR-10 10.15.06.000000 AM (a timestamp). As a result, when I do: SELECT bunch,of,stuff,create_date FROM myTable WHERE TO_CHAR(create_date,'MM/DD/YYYY') >= '04/10/2010' AND TO_CHAR(create_date,'MM/DD/YYYY') <= '04/10/2010' I get 529 rows, but in 255.59 seconds! Which I guess is because I am doing TO_CHAR on EACH record. However, when I do SELECT bunch,of,stuff,create_date FROM myTable WHERE create_date >= to_date('04/10/2010','MM/DD/YYYY') AND create_date <= to_date('04/10/2010','MM/DD/YYYY') then I get 0 results in 0.14 seconds. How can I make this query fast and still get valid (529) results? At this point I cannot change indexes. Right now I think the index is created on the create_date column.

    Read the article

  • Is it possible to generate plain-old XML using Haml?

    - by lsdr
    I've been working on a piece of software where I need to generate a custom XML file to send back to a client application. The current solutions in the Ruby/Rails world for generating XML files are slow, at best. Builder and even Nokogiri, while they have a nice syntax and are maintainable solutions, consume too much time and processing. I could definitely go to ERB, which provides good speed at the expense of building the whole XML by hand. Haml is a great tool, has a nice and straightforward syntax, and is fairly fast, but I'm struggling to build pure XML files using it. Which makes me wonder: is it possible at all? Does anyone have pointers to code or docs showing how to do this - build a full, valid XML file from Haml?

    Read the article

  • What is the technological basis for Azure DocumentDB?

    - by user1703840
    Microsoft announced the availability of Azure DocumentDB as follows... A fully-managed, highly-scalable, NoSQL document database service.
    - Rich query over a schema-free JSON data model
    - Transactional execution of JavaScript logic
    - Scalable storage and throughput
    - Tunable consistency
    - Rapid development with familiar technologies
    - Blazingly fast and write optimized database service
    I really like the "transactional execution of JavaScript logic". Sounds like an approach similar to that of PostgreSQL NoSQL. Anyone know what is the technological basis for the Azure DocumentDB service? SQL Server?

    Read the article

  • PHP / javascript live chat using too much bandwidth

    - by David
    I am learning about JavaScript, so I am making a live chat system with PHP and JavaScript. I have it so the JavaScript refreshes the log (each message gets logged in a file on the server), and it refreshes every second. I'm using Firebug to monitor the resource usage, and under the Net tab I see each request as it updates, and the bytes add up really fast. I know I can make it update less often, but is there a way that, when the user I'm talking to sends a message, it gets sent to the server and the server then alerts my client that the chat log needs to update? That way it only updates when the log has actually changed. Let me know, thanks.

    Read the article

  • Slow query with unexpected scan

    - by zerkms
    Hello, I have this query: SELECT * FROM SAMPLE SAMPLE INNER JOIN TEST TEST ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER INNER JOIN RESULT RESULT ON TEST.TEST_NUMBER = RESULT.TEST_NUMBER WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00' The biggest table here is RESULT, which contains 11.1M records; the other two tables have about 1M each. This query runs slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change '2010-03-17 09:00' to '2010-03-17 10:00', I get about 40 records; it executes in 300 ms and the plan shows a clustered index seek. If I replace * in the SELECT clause with RESULT.TEST_NUMBER (covered by the index), then everything becomes fast in the first case too. This points to HDD I/O issues, but it doesn't explain the change in plan. So, any ideas?

    Read the article
