Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

Page 257/361 | < Previous Page | 253 254 255 256 257 258 259 260 261 262 263 264  | Next Page >

  • How to deal with Rounding-off TimeSpan?

    - by infant programmer
    I take the difference between two DateTime fields and store it in a TimeSpan variable. Now I have to round off the TimeSpan by the following rules: if the minutes in the TimeSpan are less than 30, then minutes and seconds must be set to zero; if the minutes are equal to or greater than 30, then hours must be incremented by 1 and minutes and seconds must be set to zero. The TimeSpan can also be a negative value, in which case I need to preserve the sign. I was able to achieve the requirement when the TimeSpan wasn't negative, and although I have written code that handles the negative case too, I am not happy with it because it is bulky and inefficient. Please suggest a simpler and more efficient method. Thanks.
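
    One way to keep the sign handling simple is to round the absolute value and restore the sign at the end. The snippet below is a minimal C# sketch of that idea (my own illustration, not the asker's code), assuming only the rounding rules described above:

        static TimeSpan RoundToHour(TimeSpan span)
        {
            int sign = span < TimeSpan.Zero ? -1 : 1;
            TimeSpan abs = span.Duration();        // absolute value of the span
            int hours = (int)abs.TotalHours;       // whole hours, fraction truncated
            if (abs.Minutes >= 30)                 // 30 minutes or more rounds up
                hours++;
            return TimeSpan.FromHours(sign * hours);
        }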

    Read the article

  • Developing browser plug-ins?

    - by JavaMan
    I have a project that I'd like to try that involves developing an internet browser plug-in. I have knowledge of Java and DHTML, but none of browser plug-in development. What is the most efficient way to develop a browser plug-in? If possible, I'd like to streamline the process so that getting the plug-in to work in different browsers involves as little work as possible. Can this be done? I'm not asking for a full tutorial, just a few pointers; I don't want to waste my time or anyone else's.

    Read the article

  • C++ Long switch statement or look up with a map?

    - by Rachel
    In my C++ application, I have some values that act as codes to represent other values. To translate the codes, I've been debating between using a switch statement and a std::map. The switch would look something like this:
        int code;
        int value;
        switch (code) {
            case 1: value = 10; break;
            case 2: value = 15; break;
        }
    The map would be a std::map<int, int>, and translation would be a simple lookup with the code used as the key. Which one is better/more efficient/cleaner/more widely accepted? Why?

    Read the article

  • Converting mxml Rect & SolidColor to actionscript

    - by touB
    I'm trying to learn how to use ActionScript instead of MXML for flexibility. I have this simple block of MXML that I'm trying to convert to ActionScript, but I'm stuck halfway through:
        <s:Rect id="theRect" x="0" y="50" width="15%" height="15%">
            <s:fill>
                <s:SolidColor color="black" alpha="0.9" />
            </s:fill>
        </s:Rect>
    I can convert the Rect with no problem to:
        private var theRect:Rect = new Rect();
        theRect.x = 0;
        theRect.y = 50;
        theRect.width = "15%";
        theRect.height = "15%";
    but then I'm stuck on the fill. What's the most efficient way to add the SolidColor, in as few lines of code as possible?

    Read the article

  • JQuery slideToggle replace image src

    - by Rob
    Hi. This function is called when an up/down arrow is clicked to slide a hidden div. If the div is hidden, the arrow points down and changes to up when the div is shown. If the div is shown, the arrow points up to hide the div and changes to down when the div is closed. I just wanted to know if there is a more efficient way of doing this, or if this is the correct way. Thanks.
        function showInfo(info_id) {
            var img_id = '#arrow_' + info_id;
            var div = '#info_' + info_id;
            $(div).slideToggle('normal', function() {
                if ($(this).is(':visible')) {
                    $(img_id).attr('src', $(img_id).attr('src').replace('_down', '_up'));
                } else {
                    $(img_id).attr('src', $(img_id).attr('src').replace('_up', '_down'));
                }
            });
        }

    Read the article

  • How to get columns from Excel files using Apache POI?

    - by posdef
    Hi. In order to do some statistical analysis I need to extract values in a column of an Excel sheet. I have been using the Apache POI package to read from Excel files, and it works fine when one needs to iterate over rows. However, I couldn't find anything about getting columns, neither in the API nor through Google searching. I need to get the max and min values of different columns and generate random numbers using these values, so without picking up individual columns, the only other option is to iterate over all rows and columns, collecting the values and comparing them one by one, which doesn't sound all that time-efficient. Any ideas on how to tackle this problem? Thanks.

    Read the article

  • How to return arrays with the biggest elements in C#?

    - by theateist
    I have multiple int arrays:
        1) [1, 202, 4, 55]
        2) [40, 7]
        3) [2, 48, 5]
        4) [40, 8, 90]
    I need to get the array that has the biggest numbers in all positions; in my case that would be array #4. Explanation: arrays #2 and #4 have the biggest number in the 1st position, so after the first iteration these 2 arrays will be returned ([40, 7] and [40, 8, 90]); now, after comparing the 2nd position of the arrays returned from the previous iteration, we get array #4 because 8 > 7, and so on... Can you suggest an efficient algorithm for this? Doing it with LINQ would be preferable. UPDATE: There is no limitation on the length, but as soon as some number in any position is greater, that array is the biggest.
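
    What the question describes is essentially a lexicographic (position-by-position) maximum. Below is a hedged C# sketch of that idea using the sample data above; the comparer and the tie-break for an equal prefix are my own assumptions, not something stated in the question:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class BiggestArrayDemo
        {
            // Compare two arrays position by position; the first differing position decides.
            static int CompareLex(int[] a, int[] b)
            {
                for (int i = 0; i < Math.Min(a.Length, b.Length); i++)
                    if (a[i] != b[i]) return a[i].CompareTo(b[i]);
                return a.Length.CompareTo(b.Length);   // assumption: on an equal prefix, the longer array wins
            }

            static void Main()
            {
                var arrays = new List<int[]>
                {
                    new[] { 1, 202, 4, 55 },
                    new[] { 40, 7 },
                    new[] { 2, 48, 5 },
                    new[] { 40, 8, 90 }
                };

                // Aggregate keeps whichever array compares greater, in a single pass over the list.
                int[] biggest = arrays.Aggregate((best, next) => CompareLex(next, best) > 0 ? next : best);

                Console.WriteLine(string.Join(", ", biggest));   // prints 40, 8, 90
            }
        }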

    Read the article

  • Shared library to minimise size of FLA file

    - by Dmitry
    In a project we use a large Flash FLA file with lots of graphic assets, but the data that actually changes lives in just a few symbols. It is not very efficient to transfer the whole FLA file, which is now up to 20MB. I was thinking about using Shared Libraries, but it seems that even if you import an external library, the whole set of assets is still copied into the destination file rather than linked from the external file. Consequently, the size of the FLA file remains the same. Is there any way to split FLA files into a few separate files, in order to minimise the size of the most frequently updated file and keep all unchanged data in another file?

    Read the article

  • How to roll my own index in c#?

    - by bill seacham
    I need a faster way to create an index file. The application generates pairs of items to be indexed. I currently add each pair as it is generated to a sorted dictionary and then write it out to a disk file. This works well until the number of items added exceeds one million, at which point it slows to an unacceptable degree. There can be as many as three million data items to be indexed. I prefer to avoid a database because I do not want to significantly increase the size of the deployment package, which is now less than one half of a megabyte. I tried Access, but it is even slower than the sorted dictionary; if it had an efficient bulk-load utility that might work, but I cannot find such a tool for Access. Is there a better way to roll my own index?
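
    One approach worth sketching: if the index only needs to be sorted once, collecting the pairs in a plain List and sorting at the end avoids paying a per-insert rebalancing cost in the sorted dictionary. This is a hedged C# illustration with made-up key/value types and file name, not the asker's code:

        using System;
        using System.Collections.Generic;
        using System.IO;

        class IndexBuilder
        {
            static void Main()
            {
                var entries = new List<KeyValuePair<string, long>>();

                // Placeholder generation loop; the real application produces these pairs.
                for (long i = 0; i < 3000000; i++)
                    entries.Add(new KeyValuePair<string, long>("key" + (i % 100000), i));

                // One O(n log n) sort instead of millions of tree insertions.
                entries.Sort((a, b) => string.CompareOrdinal(a.Key, b.Key));

                // Stream the sorted index to disk.
                using (var writer = new StreamWriter("index.txt"))
                    foreach (var e in entries)
                        writer.WriteLine(e.Key + "\t" + e.Value);
            }
        }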

    Read the article

  • Migrating a Core Data Store from iCloud to local

    - by schmok
    I'm currently struggling with Core Data iCloud migration. I want to move a store from an iCloud ubiquity container (.nosync) to a local URL. The problem is that whenever I call something like this:
        NSPersistentStore *newStore = [self.persistentStoreCoordinator
            migratePersistentStore: currentiCloudStore
                             toURL: localURL
                           options: nil
                          withType: NSSQLiteStoreType
                             error: &error];
    I get this error:
        -[NSPersistentStoreCoordinator addPersistentStoreWithType:configuration:URL:options:error:](1055): CoreData: Ubiquity: Error: A persistent store which has been previously added to a coordinator using the iCloud integration options must always be added to the coordinator with the options present in the options dictionary. If you wish to use the store without iCloud, migrate the data from the iCloud store file to a new store file in local storage. file://localhost/Users/sch/Library/Containers/bla/Data/Documents/tmp.sqlite. This will be a fatal error in a future release
    Anyone ever seen this error? Maybe I'm just missing the right migration options?

    Read the article

  • What is it about Fibonacci numbers?

    - by Ian Bishop
    Fibonacci numbers have become a popular introduction to recursion for Computer Science students and there's a strong argument that they persist within nature. For these reasons, many of us are familiar with them. They also exist within Computer Science elsewhere too; in surprisingly efficient data structures and algorithms based upon the sequence. There are two main examples that come to mind: Fibonacci heaps which have better amortized running time than binomial heaps. Fibonacci search which shares O(log N) running time with binary search on an ordered array. Is there some special property of these numbers that gives them an advantage over other numerical sequences? Is it a density quality? What other possible applications could they have? It seems strange to me as there are many natural number sequences that occur in other recursive problems, but I've never seen a Catalan heap.

    Read the article

  • Watermarking Flash Videos (server-side)

    - by Roberto Aloi
    Hi all, I have a bunch of Flash videos that I need to watermark with user-related information, to make illegal redistribution of these files harder. I'm wondering how this can be done server-side. If done client-side, it would be quite easy for the user to intercept the videos before they are watermarked. Since the watermark should contain user-specific information, I can't really watermark the videos before encoding them (unless I keep one encoded video per user, which is not feasible). I'm expecting this to affect streaming performance a lot, though. Any idea how this can be done (possibly in an efficient way)?

    Read the article

  • Embarrassingly parallel workflow creates too many output files

    - by Hooked
    On a Linux cluster I run many (N > 10^6) independent computations. Each computation takes only a few minutes and the output is a handful of lines. When N was small I was able to store each result in a separate file to be parsed later. With large N, however, I find that I am wasting storage space (on file-creation overhead) and that simple commands like ls require extra care due to internal limits of bash: -bash: /bin/ls: Argument list too long. Each computation is required to run through a qsub scheduling algorithm, so I am unable to create a master program which simply aggregates the output data into a single file. The simple solution of appending to a single file fails when two programs finish at the same time and interleave their output. I have no admin access to the cluster, so installing a system-wide database is not an option. How can I collate the output data from this embarrassingly parallel computation before it gets unmanageable?

    Read the article

  • Return specific HREF attribute using Xpath query

    - by Michael Pasqualone
    Having a major brain freeze. I have the following chunk of code:
        // Get web address
        $domQuery = query_HtmlDocument($html, '//a[@class="productLink"]');
        foreach ($domQuery as $rtn) {
            $web = $rtn->getAttribute('href');
        }
    This obviously gets the entire href attribute, but I only want one specific query parameter from within the href. I.e. if the href is:
        /website/product1234.do?code=1234&version=1.3&somethingelse=blaah
    I only want to return the value of "version", so in my example I want to return just "1.3". What's the most efficient way to do this?

    Read the article

  • LINQ to Entity, joining on NOT IN tables

    - by SlackerCoder
    My brain seems to be mush right now! I am using LINQ to Entities, and I need to get some data from one table that does NOT exist in another table. For example: I need the GroupID, GroupName and GroupNumber from TABLE A where they do not exist in TABLE B. The GroupID will exist in TABLE B, along with other relevant information. The tables do not have any relationship. In SQL it would be quite simple (there is a more elegant and efficient solution, but I want to paint a picture of what I need):
        SELECT GroupID, GroupName, GroupNumber
        FROM TableA
        WHERE GroupID NOT IN (SELECT GroupID FROM TableB)
    Is there an easy/elegant way to do this? Right now I have a bunch of queries hitting the db, then comparing, etc. It's pretty messy. Thanks.
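
    A common LINQ shape for this is to filter with a negated Contains over the second table's keys, which Entity Framework typically translates into a NOT IN subquery. A minimal sketch, assuming an EF context with TableA and TableB sets (the names are illustrative only, not taken from the question):

        // Keys present in TableB; stays as an IQueryable so it runs server-side.
        var groupIdsInB = context.TableB.Select(b => b.GroupID);

        var result = context.TableA
            .Where(a => !groupIdsInB.Contains(a.GroupID))   // "NOT IN" shape
            .Select(a => new { a.GroupID, a.GroupName, a.GroupNumber })
            .ToList();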

    Read the article

  • Efficiently compute the row sums of a 3d array in R

    - by Gavin Simpson
    Consider the array a:
        > a <- array(c(1:9, 1:9), c(3,3,2))
        > a
        , , 1
             [,1] [,2] [,3]
        [1,]    1    4    7
        [2,]    2    5    8
        [3,]    3    6    9
        , , 2
             [,1] [,2] [,3]
        [1,]    1    4    7
        [2,]    2    5    8
        [3,]    3    6    9
    How do we efficiently compute the row sums of the matrices indexed by the third dimension, such that the result is:
             [,1] [,2]
        [1,]   12   12
        [2,]   15   15
        [3,]   18   18
    The column sums are easy via the 'dims' argument of colSums():
        > colSums(a, dims = 1)
    but I cannot find a way to use rowSums() on the array to achieve the desired result, as it has a different interpretation of 'dims' to that of colSums(). It is simple to compute the desired row sums using:
        > apply(a, 3, rowSums)
             [,1] [,2]
        [1,]   12   12
        [2,]   15   15
        [3,]   18   18
    but that is just hiding the loop. Are there other efficient, truly vectorised ways of computing the required row sums?

    Read the article

  • Best practice -- Content Tracking Remote Data (cURL, file_get_contents, cron, et. al)?

    - by user322787
    I am attempting to build a script that will log data that changes every 1 second. The initial thought was "just run a PHP file that does a cURL every second from cron", but I have a very strong feeling that this isn't the right way to go about it. Here are my specifications: There are currently 10 sites I need to gather data from and log to a database; this number will invariably increase over time, so the solution needs to be scalable. Each site spits out data to a URL every second but only keeps 10 lines on the page, and it can sometimes spit out up to 10 lines each time, so I need to pick up that data every second to ensure I get all of it. As I will also be writing this data to my own DB, there is going to be I/O every second of every day for a considerably long time. Barring magic, what is the most efficient way to achieve this? It might help to know that the data I am getting every second is very small, under 500 bytes.

    Read the article

  • InnoDB not supported by webhost. What now?

    - by Peter Perhác
    I was developing a small WAMP web application on my laptop, where I have an instance of MySQL running, and I chose InnoDB for my DB engine. After several weeks of development I wanted to make it available to the public and found out that the database server provided by my web host does not support InnoDB, only MyISAM. The create-and-populate script generated from the InnoDB schema on my laptop, when executed against the live database, manages to create the individual TABLEs but then runs into problems creating the VIEWs. Are views not supported in MyISAM? I know FOREIGN KEYs are not, and that's very much why I chose InnoDB in the first place... What are my chances of making my InnoDB schema design work with MyISAM? Is there any straightforward way of converting the whole schema from one storage engine to the other? Should I look for another web host that does provide a MySQL db that supports InnoDB?

    Read the article

  • C# Importing Large Volume of Data from CSV to Database

    - by guazz
    What's the most efficient method to load large volumes of data from CSV (3 million+ rows) into a database? The data needs to be formatted (e.g. the name column needs to be split into first name and last name, etc.). I need to do this as efficiently as possible, i.e. there are time constraints. I am siding with the option of reading, transforming and loading the data row by row using a C# application. Is this ideal? If not, what are my options? Should I use multithreading?
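
    Assuming the target is SQL Server, one common pattern is to stream the CSV, transform each row in memory, and hand batches to SqlBulkCopy rather than issuing row-by-row INSERTs. The sketch below is a hedged illustration; the file name, column names, table name and connection string are placeholders, and the comma split is deliberately naive:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        class CsvBulkLoader
        {
            static void Main()
            {
                var table = new DataTable();
                table.Columns.Add("FirstName", typeof(string));
                table.Columns.Add("LastName", typeof(string));

                using (var reader = new StreamReader("people.csv"))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        if (string.IsNullOrWhiteSpace(line)) continue;
                        var fields = line.Split(',');            // naive CSV split (no quoted fields)
                        var nameParts = fields[0].Split(' ');    // split "First Last" into two columns
                        table.Rows.Add(nameParts[0], nameParts.Length > 1 ? nameParts[1] : "");
                    }
                }

                using (var bulk = new SqlBulkCopy("Server=.;Database=Demo;Integrated Security=true"))
                {
                    bulk.DestinationTableName = "People";
                    bulk.BatchSize = 10000;      // a real loader would also flush the DataTable periodically
                    bulk.WriteToServer(table);
                }
            }
        }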

    Read the article

  • c arrays: setting size dynamically?

    - by user336994
    Hello, I am new to C programming. I am trying to set the size of an array using a variable, but I am getting an error: storage size of 'array' isn't constant.
        01  int bound = bound*4;
        02  static GLubyte vertsArray[bound];
    I have noticed that when I replace bound (within the brackets on line 02) with a number, say 20, the program runs with no problems. But I am trying to set the size of the array dynamically... Any ideas why I am getting this error? Thanks much,

    Read the article

  • CUDA small kernel 2d convolution - how to do it

    - by paulAl
    I've been experimenting with CUDA kernels for days to perform a fast 2D convolution between a 500x500 image (but I could also vary the dimensions) and a very small 2D kernel (a Laplacian 2D kernel, so a 3x3 kernel, too small to take huge advantage of all the CUDA threads). I created a classic CPU implementation (two for loops, as easy as you would think) and then I started creating CUDA kernels. After a few disappointing attempts to perform a faster convolution, I ended up with this code: http://www.evl.uic.edu/sjames/cs525/final.html (see the Shared Memory section). It basically lets a 16x16 thread block load all the convolution data it needs into shared memory and then performs the convolution. Nothing: the CPU is still a lot faster. I didn't try the FFT approach because the CUDA SDK states that it is efficient with large kernel sizes. Whether or not you read everything I wrote, my question is: how can I perform a fast 2D convolution between a relatively large image and a very small kernel (3x3) with CUDA?

    Read the article

  • How do I request a single random row from a force.com database in SOQL?

    - by Ollie C
    Total row-count is in the range 10k-100k rows. Can I use RAND() on force.com? Unfortunately although all the rows have a unique numeric identifier, there are many gaps, and I'd often want to select a random row from a filtered subset anyway. I suspect there's no particularly efficient way to do this, but is it possible at all? Ultimately all I want to do is to extract one row from a table (or a subset based on specific filter criteria) at random. If force.com doesn't let me select a random row, then can I query the rows to select from, and assign sequential IDs to all the rows, say 1-1,035, and then select a random number in that range locally, say 349, and then get row 349?

    Read the article

  • Efficiency of the .NET garbage collector

    - by Jonas B
    OK, here's the deal. There are some people who put their lives in the hands of .NET's garbage collector and some who simply won't trust it. I am one of those who partially trusts it, as long as the code is not extremely performance-critical (I know, I know... performance-critical + .NET is not the favored combination), in which case I prefer to manually dispose of my objects and resources. What I am asking is whether there are any facts about how efficient or inefficient, performance-wise, the garbage collector really is. Please don't share personal opinions or likely assumptions based on experience; I want unbiased facts. I also don't want any pro/con discussions, because that won't answer the question. Thanks.
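
    One way to gather facts for a specific workload, rather than opinions, is to measure it directly. The snippet below is a rough, hedged C# sketch of such a measurement (entirely my own, not from the question): it times a burst of short-lived allocations and reports how many collections each generation performed over that window.

        using System;
        using System.Diagnostics;

        class GcProbe
        {
            static void Main()
            {
                // Collection counts before the workload.
                int gen0 = GC.CollectionCount(0);
                int gen1 = GC.CollectionCount(1);
                int gen2 = GC.CollectionCount(2);

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < 10000000; i++)
                {
                    var buffer = new byte[128];   // short-lived garbage for the collector to reclaim
                    buffer[0] = (byte)i;
                }
                sw.Stop();

                Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
                Console.WriteLine("Collections - Gen0: {0}, Gen1: {1}, Gen2: {2}",
                    GC.CollectionCount(0) - gen0,
                    GC.CollectionCount(1) - gen1,
                    GC.CollectionCount(2) - gen2);
            }
        }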

    Read the article

  • SQL Query: Using Cursors

    - by user2953138
    I need some directions for SQL Server and cursors. I have a table named Order:
        OrderID  Item  Amount
        1        A     10
        1        B     1
        2        A     5
        2        C     4
        2        D     21
        3        B     11
    I have a second table named Storage:
        Item  Amount
        A     40
        B     44
        C     20
        D     1
    For every OrderID, I want to check whether enough items are available; if not, I want to return an error message. Can this be done with cursors at all? Are nested cursors the solution? My main issue is understanding how I can fetch the OrderIDs as actual groups (ID = 1, 2, 3, etc.) instead of line by line.

    Read the article

  • SDCC and malloc() - allocating much less memory than is available

    - by Duncan Bayne
    When I compile this code with SDCC 3.1.0 and run it on an Amstrad CPC 464 (under emulation, with WinCPC 0.9.26 running on Wine):
        void _test_malloc() {
            long idx = 0;
            while (1) {
                if (malloc(5)) {
                    printf("%ld\r\n", ++idx);
                } else {
                    printf("done");
                    break;
                }
            }
        }
    ...it consistently taps out at 92 malloc()s. I make that 460 bytes, which leads me to a couple of questions: What is malloc() doing on this system? I was sort of hoping for an order of magnitude more storage, even on a 64kB system. The behaviour is consistent on 64kB and 128kB systems; do I have to perform some sort of magic to access the additional memory, like manual bank switching?

    Read the article
