Search Results

Search found 9017 results on 361 pages for 'efficient storage'.


  • Is a switch statement the fastest way to implement operator interpretation in Java?

    - by Mordan
    Is a switch statement the fastest way to implement operator interpretation in Java?

        public boolean accept(final int op, int x, int val) {
            switch (op) {
                case OP_EQUAL: return x == val;
                case OP_BIGGER: return x > val;
                case OP_SMALLER: return x < val;
                default: return true;
            }
        }

    In this simple example, obviously yes. Now imagine you have 1000 operators. Would it still be faster than a class hierarchy? Is there a threshold at which a class hierarchy becomes more efficient than a switch statement in speed? (In memory it obviously does not.)

        abstract class Op {
            abstract public boolean accept(int x, int val);
        }

    And then one class per operator.
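
    For comparison, here is a minimal sketch (the Operator/OperatorTable names and the lambda registrations are illustrative, not from the question) of the class-per-operator idea dispatched through an array indexed by opcode; a dense switch usually compiles to a table jump, so both dispatches stay O(1) and the measurable difference is mostly the extra indirect call:

        // Illustrative sketch (names are made up): one object per operator,
        // dispatched through an array indexed by the opcode, so lookup stays O(1)
        // no matter how many operators exist.
        interface Operator {
            boolean accept(int x, int val);
        }

        final class OperatorTable {
            static final Operator[] OPS = new Operator[1000];
            static {
                OPS[0] = (x, val) -> x == val; // OP_EQUAL
                OPS[1] = (x, val) -> x > val;  // OP_BIGGER
                OPS[2] = (x, val) -> x < val;  // OP_SMALLER
                // ... remaining operators registered the same way
            }

            static boolean accept(int op, int x, int val) {
                Operator impl = (op >= 0 && op < OPS.length) ? OPS[op] : null;
                return impl == null || impl.accept(x, val); // default: true, as in the question
            }
        }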

    Read the article

  • InnoDB setting in XAMPP: can't seem to locate the my.cnf file...

    - by bala3569
    I created a new MySQL database and I want to use foreign keys with it... I googled and found this: InnoDB is one of MySQL's storage engines; it supports transactions, row-level locking, and foreign keys. However, by default, InnoDB is not enabled by XAMPP. To enable it, locate the my.cnf configuration file (normally in the C:/xampp/mysql/bin directory) and search for the following line:

        # Comment the following if you are using InnoDB tables

    But the C:/xampp/mysql/bin directory on my system doesn't seem to have such a file... Look at this image: http://img691.imageshack.us/img691/524/mysqln.jpg Where is the my.cnf file? Any suggestions?

    Read the article

  • MERGE -v- UPSERT

    - by Kevin Ross
    Hi, I have an application I'm writing in Access with a SQL Server backend. One of the most heavily used parts is where the user selects an answer to a question; a stored procedure is then fired which checks whether an answer has already been given. If it has, an UPDATE is executed; if not, an INSERT is executed. This works just fine, but now that we have upgraded to SQL Server 2008 Express I was wondering if it would be better/quicker/more efficient to rewrite this stored procedure to use the new MERGE command. Does anyone have any idea if this is faster than doing a SELECT followed by either an INSERT or an UPDATE?

    Read the article

  • Architecture of an image hosting site

    - by kamziro
    I'm sure many here are aware of image hosting sites like imgur, min.us, photobucket, etc. Not that I want to develop one, but besides just uploading the file and organising it in some directory somewhere, what architectural considerations are involved in these sites? Especially when there are millions of page views a day (like imgur, I'd imagine). I'm curious about this because it seems that a lot of sites (say, dating websites, etc.) would be pretty image-intensive. Even if it's not for millions of page views, what are some basic architectural requirements for efficient image delivery online?

    Read the article

  • Open source alternative to WebEx WebOffice?

    - by Dieseltime
    I have a client who has been using WebOffice (from WebEx) for a variety of tasks within their small organization. The problem is that they only really need a small subset of the features WebOffice provides (Contact list, Database, and Document Storage). They've asked me to develop a website focused on these three features, the rationale being that this should be more cost-effective, since they currently aren't using many of the WebOffice features they pay for. What are some open-source alternatives that I could implement for them? SharePoint sounds like it would be too bloated, and Google Apps may not be as collaborative as they would like.

    Read the article

  • Core Data multi-threading

    - by JK
    My app starts by presenting a table view whose data source is a Core Data SQLite store. When the app starts, a secondary thread with its own store controller and context is created to obtain updates from the web for data in the store. However, the fetched results controller is not notified of any resulting changes to the store (I presume because it has its own coordinator), and consequently the table is not updated with the store changes. What would be the most efficient way to refresh the context on the main thread? I am considering tracking the objectIDs of any objects changed on the secondary thread, sending those to the main thread when the secondary thread completes, and invoking [context refreshObject:...]. Any help would be greatly appreciated.

    Read the article

  • Enum.values() vs EnumSet.allOf(): which one is preferable?

    - by Alexander Pogrebnyak
    I looked under the hood of EnumSet.allOf and it looks very efficient, especially for enums with fewer than 64 values. Basically all sets share the single array of all possible enum values, and the only other piece of information is a bitmask, which in the case of allOf is set in one swoop. On the other hand, Enum.values() seems to be a bit of black magic. Moreover, it returns an array, not a collection, so in many cases it must be wrapped with Arrays.asList( ) to be usable in any place that expects a collection. So, should EnumSet.allOf be preferred over Enum.values? More specifically, which form of for loop should be used:

        for ( final MyEnum val : MyEnum.values( ) )

    or

        for ( final MyEnum val : EnumSet.allOf( MyEnum.class ) )
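
    A compilable sketch of both loops (MyEnum and its constants are placeholders); note that EnumSet.allOf takes the enum's Class object, and that values() returns a fresh clone of the constants array on every call, so it is often cached when the loop runs frequently:

        import java.util.EnumSet;

        class EnumIterationSketch {
            enum MyEnum { A, B, C }

            // values() returns a fresh clone of the constants array on every call,
            // so cache it once if the loop runs frequently.
            private static final MyEnum[] VALUES = MyEnum.values();

            static void iterate() {
                for (final MyEnum val : VALUES) {
                    System.out.println(val);
                }
                // EnumSet.allOf takes the Class object, not the result of values().
                for (final MyEnum val : EnumSet.allOf(MyEnum.class)) {
                    System.out.println(val);
                }
            }
        }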

    Read the article

  • Stochastic calculus library in python

    - by LeMiz
    Hello, I am looking for a Python library that would allow me to compute stochastic calculus quantities, like the (conditional) expectation of a random process whose diffusion I would define. I had a look at simpy (simpy.sourceforge.net), but it does not seem to cover my needs. This is for quick prototyping and experimentation. In Java, I used with some success the (now inactive) http://martingale.berlios.de/Martingale.html library. The problem is not difficult in itself, but there are a lot of non-trivial, boilerplate things to do (efficient memory use, variance reduction techniques, and so on). Ideally, I would be able to write something like this (just illustrative):

        def my_diffusion(t, dt, past_values, world, **kwargs):
            W1, W2 = world.correlated_brownians_pair(correlation=kwargs['rho'])
            X = past_values[-1]
            sigma_1 = kwargs['sigma1']
            sigma_2 = kwargs['sigma2']
            dX = kwargs['mu'] * X * dt + sigma_1 * W1 * X * math.sqrt(dt) + sigma_2 * W2 * X * X * math.sqrt(dt)
            return X + dX

        X = RandomProcess(diffusion=my_diffusion, x0 = 1.0)
        print X.expectancy(T=252, dt = 1./252., N_simul= 50000, world=World(random_generator='sobol'),
                           sigma1 = 0.3, sigma2 = 0.01, rho=-0.1)

    Does anyone know of anything other than reimplementing it in numpy, for example?

    Read the article

  • One position right barrel shift using ALU Operators?

    - by Tomek
    I was wondering if there is an efficient way to perform a shift right on an 8-bit binary value using only ALU operators (NOT, OR, AND, XOR, ADD, SUB). Example:

        input:  00110101
        output: 10011010

    I have been able to implement a shift left by just adding the 8-bit binary value to itself, since a shift left is equivalent to multiplying by 2. However, I can't think of a way to do this for a shift right. The only method I have come up with so far is to just perform 7 left barrel shifts. Is this the only way?
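
    One alternative to seven left rotates, sketched below under stated assumptions (the class and method names are made up, and the loop relies on a compare-with-zero, i.e. a conditional branch, which is not itself one of the listed ALU operators): walk the bits with a small table of single-bit mask constants, using only AND and OR on the data.

        // Sketch: rotate an 8-bit value right by one position (bit 0 wraps to bit 7)
        // using AND, OR, immediate constants, and a zero test.
        public final class ShiftRightSketch {
            private static final int[] MASKS = {0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80};

            public static int rotateRight1(int x) {
                int result = 0;
                for (int i = 1; i < 8; i++) {
                    if ((x & MASKS[i]) != 0) {          // test bit i
                        result = result | MASKS[i - 1]; // copy it one place to the right
                    }
                }
                if ((x & MASKS[0]) != 0) {              // bit 0 wraps around to bit 7
                    result = result | MASKS[7];
                }
                return result;
            }

            public static void main(String[] args) {
                System.out.println(Integer.toBinaryString(rotateRight1(0b00110101))); // prints 10011010
            }
        }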

    Read the article

  • php selecting hash using wildcards

    - by tipu
    Say I have a hashmap:

        $hash = array('fox' => 'some value', 'fort' => 'some value 2', 'fork' => 'some value again');

    I am trying to implement an autocomplete feature. When the user types 'fo', I would like to retrieve, via AJAX, the 3 keys from $hash. When the user types 'for', I would like to retrieve only the keys fort and fork. Is this possible? What I was thinking was using binary search to isolate the keys starting with 'f', instead of brute-force searching, and then continuing to eliminate the indexes as the user types out their query. Is there a more efficient solution to this?
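
    One common way to get prefix matches without scanning everything is to keep the keys sorted and pull a contiguous range; a small sketch of the idea (shown in Java rather than PHP, using the same sample keys):

        import java.util.*;

        class PrefixLookupSketch {
            public static void main(String[] args) {
                NavigableMap<String, String> map = new TreeMap<>();
                map.put("fox", "some value");
                map.put("fort", "some value 2");
                map.put("fork", "some value again");

                String prefix = "for";
                // All keys with this prefix form a contiguous range in a sorted map:
                // from the prefix itself (inclusive) up to prefix + '\uffff' (exclusive).
                SortedMap<String, String> matches = map.subMap(prefix, prefix + Character.MAX_VALUE);
                System.out.println(matches.keySet()); // [fork, fort]
            }
        }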

    Read the article

  • Hook Response.Cache to memcache

    - by dvr
    Has anyone done this before? I have a 32-bit Windows 2003 server running ASP.NET 2.0 and have read the MS engineers' blog about min(60%, 1800mb) for cache limits, and our site (ASP.NET 2.0 / 3.5) is caching a lot. It throws System.OutOfMemoryException when the worker process is around 1.3 GB (unfortunately it is the 2.0 apps), and I would like to push a lot over to memcache, but I'm worried because at the moment the site is efficient using Response.Cache as is (though memory is an issue). I want to move most items over to memcache and have concerns about a) how to do this (an implementation of Response.Cache that reads/writes from memcache) and b) what performance will be like. Before I commit to doing this and possibly spending a few days running tests, I would like to hear from you if this has been done already and get some feedback. (And please don't tell me to buy an x64 machine; I have already requested this!) By the way, I ran a test requesting a single image 1000 times and Response.Cache was over 50% quicker than using the application cache. Does Response.Cache bypass the page lifecycle?

    Read the article

  • Removing duplicates without overriding hash method

    - by Javi
    Hello, I have a List which contains objects, and I want to remove from this list all the elements which have the same values in two of their attributes. I had thought about doing something like this:

        List<Class1> myList;
        ....
        Set<Class1> mySet = new HashSet<Class1>();
        mySet.addAll(myList);

    and overriding the hashCode method in Class1 so it returns a number which depends only on the attributes I want to consider. The problem is that I need to do a different filtering in another part of the application, so I can't override hashCode in this way (I would need two different hashCode methods). What's the most efficient way of doing this filtering without overriding hashCode? Thanks
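
    A sketch of one way to filter without touching Class1's own hashCode/equals (the stub Class1 and its attrA/attrB fields below are placeholders for the real attributes): track the attribute pairs already seen in a separate set. As a side note, a HashSet<Class1> would in any case need equals overridden alongside hashCode for the original approach to work.

        import java.util.*;

        class DedupSketch {
            // Stub standing in for the real Class1; attrA/attrB are the two attributes
            // that define what counts as a duplicate for this particular filtering.
            static class Class1 {
                String attrA;
                int attrB;
                Class1(String a, int b) { attrA = a; attrB = b; }
            }

            static List<Class1> removeDuplicates(List<Class1> myList) {
                Set<List<Object>> seen = new HashSet<>();
                List<Class1> result = new ArrayList<>();
                for (Class1 item : myList) {
                    List<Object> key = Arrays.<Object>asList(item.attrA, item.attrB);
                    if (seen.add(key)) {   // add() returns false if this pair was already seen
                        result.add(item);
                    }
                }
                return result;
            }
        }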

    Read the article

  • LINQ to SQL vs Entity Framework for an app with a future SQL Azure version

    - by Craig L
    I've got a vertical-market .NET Framework 1.1 C#/WinForms/SQL Server 2000 application. Currently it uses ADO.NET and Microsoft's SQLHelper for CRUD operations. I've successfully converted it to .NET Framework 4 C#/WinForms/SQL Server 2008. What I'd like to do is also offer my customers the ability to use SQL Azure as backend storage for their data instead of a local/LAN SQL Server. If I know SQL Azure is in my application's future, should I:

        A. Switch to LINQ to SQL
        B. Switch to Entity Framework
        C. Stick with ADO.NET and SQLHelper

    Thanks!

    Read the article

  • SqlServer2008 - Can I Alter a Scalar Function while it is referenced in many places

    - by Casey C.
    We have a scalar function that returns a DateTime. It performs a couple of quick table selects to get its return value. This function is already in use throughout the database - in default constraints, stored procs, etc. I would like to change the implementation of the function (to remove the table hits and make it more efficient) but apparently I can't do that while it is referenced by other objects in the database. Will I actually need to update every object in the database that references it to remove the reference, update the function and then update all those objects to restore the reference to the function? Thanks for any insight you can give.

    Read the article

  • Comparing multiple entity properties against list of entities

    - by roosteronacid
    Consider this snippet of code:

        var iList = new List<Entities.Ingredient>
        {
            new Entities.Ingredient { Name = "tomato", Amount = 2.0 },
            new Entities.Ingredient { Name = "cheese", Amount = 100.0 }
        };

        var matches = new DataContext().Ingredients.Where(i => Comparer(i, iList));

        private Boolean Comparer(Entities.Ingredient i, List<Entities.Ingredient> iList)
        {
            foreach (var candidate in iList)
            {
                if (i.Name == candidate.Name && i.Amount >= candidate.Amount)
                    return true;
            }
            return false;
        }

    Is there a more efficient way of doing this? Preferably without being too verbose (from x in y select z...), if that's at all possible.

    Read the article

  • Setting Sql server security rights for multiple situations

    - by DanDan
    We have an application which uses a local instance of SQL Server for its backend storage. The administrator Windows login has had its sysadmin right revoked, and instead two SQL logins have been created: one for the application with a secret password, and one read-only login we let users view the raw data with. This was working fine until we moved to FILESTREAM, which requires integrated Windows authentication. So now the SQL Server logins must be replaced. As a result, I am now reviewing all of our logins, but I am not sure how it is possible. It seems that the application needs full read/write access, yet I still need to lock down writing to the tables so the user cannot log in to the database and delete data randomly. Does anyone have any tips for setting multiple levels of security using integrated Windows logins, or can you direct me to any further reading? Some answers can also be found on Server Fault: http://serverfault.com/questions/138763/setting-sql-server-security-rights-for-multiple-situations

    Read the article

  • Trying to speed up a SQLITE UNION QUERY

    - by user142683
    I have the SQLite code below:

        SELECT x.t,
               CASE WHEN S.Status='A' AND M.Nomorebets=0 THEN S.PriceText ELSE '-' END AS Show_Price
        FROM sb_Market M
        LEFT OUTER JOIN (select 2010 t union select 2020 t union select 2030 t union
                         select 2040 t union select 2050 t union select 2060 t union
                         select 2070 t) as x
        LEFT OUTER JOIN sb_Selection S
            ON S.MeetingId=M.MeetingId AND S.EventId=M.EventId
            AND S.MarketId=M.MarketId AND x.t=S.team
        WHERE M.meetingid=8051 AND M.eventid=3 AND M.Name='Correct Score'

    With the current interface restrictions, I have to use the above code to ensure that if one selection is missing, a '-' appears. A sample feed would be something like the following:

        SelectionId  Name    Team  Status  PriceText
        =============================================
        1            Barney  2010  A       10
        2            Jim     2020  A       5
        3            Matt    2030  A       6
        4            John    2040  A       8
        5            Paul    2050  A       15/2
        6            Frank   2060  S       10/11
        7            Tom     2070  A       15

    Is using the above SQL code the quickest and most efficient way? Please advise of anything that could help; answers with updated code would be preferred.

    Read the article

  • Ruby on Rails: Model.all.each vs find_by_sql("SELECT * FROM model").each ?

    - by B_
    I'm fairly new to RoR. In my controller, I'm iterating over every tuple in the database. For every table, for every column, I used to call

        SomeOtherModel.find_by_sql("SELECT column FROM model").each {|x| #etc }

    which worked fine enough. When I later changed this to

        Model.all(:select => "column").each {|x| #etc }

    the loop starts out at roughly the same speed but quickly slows down to something like 100 times slower than the find_by_sql command. These calls should be identical, so I really don't know what's happening. I know these calls are not the most efficient, but this is just an intermediate step and I will optimize it more once this works correctly. Thanks!

    Read the article

  • Scipy sparse... arrays?

    - by spitzanator
    Hey, folks. I'm doing some k-means classification using numpy arrays that are quite sparse: lots and lots of zeroes. I figured that I'd use scipy's 'sparse' package to reduce the storage overhead, but I'm a little confused about how to create arrays, not matrices. I've gone through this tutorial on how to create sparse matrices: http://www.scipy.org/SciPy_Tutorial#head-c60163f2fd2bab79edd94be43682414f18b90df7 To mimic an array, I just create a 1xN matrix, but as you may guess, Asp.dot(Bsp) doesn't quite work, because you can't multiply two 1xN matrices. I'd have to transpose each array to Nx1, and that's pretty lame, since I'd be doing it for every dot-product calculation. Next, I tried to create an NxN matrix where column 1 == row 1 (so that you can multiply two matrices and just take the top-left corner as the dot product), but that turned out to be really inefficient. I'd love to use scipy's sparse package as a magic replacement for numpy's array(), but as yet, I'm not really sure what to do. Any advice? Thank you very much!

    Read the article

  • very large string in memory

    - by bushman
    Hi, I am writing a program for formatting hundreds of MB of string data (nearing a gig) into XML, and I am required to return it as the response to an HTTP GET request. I am using a StringWriter/XmlWriter to build the XML from the records in a loop and returning stringWriter.ToString(). During testing I saw a few out-of-memory exceptions, and I'm quite clueless about how to find a solution. Do you have any suggestions for a memory-optimized delivery of the response? Is there a memory-efficient way of encoding the data, or maybe a way of chunking it? I just cannot think of how to return it without building the whole thing into one HUGE string object. Thanks
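
    The question is about C#'s StringWriter/XmlWriter, but the general cure for this class of problem is the same everywhere: stream the XML to the HTTP output stream as it is generated instead of accumulating one huge string. A rough sketch of that idea (in Java, using the StAX writer; the record shape and method names are invented for illustration):

        import java.io.*;
        import javax.xml.stream.*;

        class StreamingXmlSketch {
            // Write records straight to the output stream instead of building one giant string.
            static void writeRecords(OutputStream out, Iterable<String> records) throws XMLStreamException {
                XMLStreamWriter xml = XMLOutputFactory.newFactory().createXMLStreamWriter(out, "UTF-8");
                xml.writeStartDocument("UTF-8", "1.0");
                xml.writeStartElement("records");
                for (String record : records) {
                    xml.writeStartElement("record");
                    xml.writeCharacters(record);
                    xml.writeEndElement();
                }
                xml.writeEndElement();
                xml.writeEndDocument();
                xml.flush();
            }
        }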

    Read the article

  • Must-have JavaScript pro developer tools, libs, utilities and workshop configuration

    - by WooYek
    This is a follow-up question to the Pro JavaScript programmer interview questions (with answers). What is considered the professional and industry standard for a professional browser-side JavaScript developer when it comes to his workshop configuration, and maybe the from-concept-to-shipment process? What are the most popular IDEs, utilities, and probably libraries, not limited to the free ones? Those that can help cut development time (e.g. an IDE), help achieve better quality (e.g. unit testing tools), reliability, and maintainability. I'm looking for a baseline to which I could compare potential candidates based on their ability to keep their tools sharp and their workshop efficient (pros should invest time and money in good tools, right?).

    Read the article

  • How do I regenerate statistics in Openx?

    - by Martin Bauer
    Due to faulty hardware, statistics generated over a 2-week period were significantly higher than normal (10,000 times higher than normal). After moving the application to a new server, the problem rectified itself. The issue I have is that there are 2 weeks of stats that are clearly wrong. I have checked the raw impressions table for the affected fortnight and it seems to be correct (i.e. stats per banner per day match the average for the previous month). Looking at the intermediate and summary impressions tables, the values are inflated. I understand from the OpenX forum (http://forum.openx.org/index.php?s=7796fd9dae40e020a010773746f3ada9&showtopic=503424297) that it's possible to regenerate stats from the raw data, but it will only regenerate stats per hour, meaning regenerating stats for 2 weeks would be very time-consuming. Is there another, more efficient way to regenerate the stats from the raw data for the affected fortnight?

    Read the article

  • Shared library linking and loading in BusyBox 0.61

    - by Alex Marshall
    Does anybody know how dynamic linking and shared library loading work in BusyBox 0.61? I can't seem to find how this is done. There's no 'ld' present on the embedded system I'm dealing with, nor is there an LD_LIBRARY_PATH variable set anywhere. My motivation is to be able to create a symlink in the /lib directory to another directory on a different device (with considerably more storage space) for adding in more shared libraries, since the file system that contains /lib is a ramdisk that gets reloaded on startup and is within a few KB of being completely full (so we can't add more libraries to the image, nor can we obtain devices with more memory for the ramdisk).

    Read the article

  • Web Services: more frequent "small" calls, or less frequent "big" calls

    - by Klay
    In general, is it better to have a web application make lots of calls to a web service getting smaller chunks of data back, or to have the web app make fewer calls and get larger chunks of data? In particular, I'm building a Silverlight app that needs to get large amounts of data back from the server in response to a query created by a user. Each query could return anywhere from a few hundred records to a few thousand. Each record has around thirty fields of mostly decimal-type data. I've run into the situation before where the payload size of the response exceeded the maximum allowed by the service. I'm wondering whether it's better (more efficient for the server/client/web service) to cut this payload vertically--getting all values for a single field with each call--or horizontally--getting batches of complete records with each call. Or does it matter?

    Read the article

  • Instantiating a context in LINQ to Entities

    - by Jagd
    I've seen two different manners in which programmers approach creating an entity context in their code. The first is like this, and you can find it all over the MSDN code examples:

        public void DoSomething()
        {
            using (TaxableEducationEntities context = new TaxableEducationEntities())
            {
                // business logic and whatever else
            }
        }

    The second is to create the context as a private attribute in some class that encapsulates your business logic. So you would have something like:

        public class Education_LINQ
        {
            private TaxableEducationEntities context = new TaxableEducationEntities();

            public void DoSomething()
            {
                var result = from a in context.luAction
                             select a;
                // business logic and whatever else
            }
        }

    Which way is more efficient? Assume that you have two methods, one called DoSomething1() and another called DoSomething2(), and both methods incorporate the using statement to open the context and do whatever with it. Were you to call one method after the other, would there be any superfluous overhead going on, since essentially both methods create the context and then clean it up when they're done? As opposed to having just one private attribute that is created when a class object is instantiated and then cleaned up when the object goes out of scope?

    Read the article
