Search Results

Search found 6805 results on 273 pages for 'fast formula'.

Page 184/273 | < Previous Page | 180 181 182 183 184 185 186 187 188 189 190 191  | Next Page >

  • NoSQL or Ehcache caching?

    - by paddydub
    I'm building a Route Planner webapp using Spring/Hibernate/Tomcat and a MySQL database. The database contains read-only data, such as bus stop coordinates and bus times, which is never updated. I'm trying to make the app run faster: each time a route is calculated, the application performs approx. 1,000 reads from the database. I have set up Ehcache, which greatly improves the read times, and I'm now setting up Terracotta + Ehcache distributed caching to share the cache across multiple Tomcat JVMs, but this seems a bit complicated. I've tried memcached, but it was not performing as fast as Ehcache. I'm wondering if MongoDB or Redis would be better suited. I have no experience with NoSQL, but I would appreciate any ideas. What I need is quick access to the read-only data.
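
    Since the data is read-only, a cache-aside read path covers this. A minimal sketch in Python with redis-py for brevity (the key scheme and the fetch_from_mysql accessor are made up for illustration; the same shape applies from Spring with a Java client such as Jedis):

        import json
        import redis  # third-party redis-py client

        r = redis.Redis(host="localhost", port=6379)

        def get_stop(stop_id, fetch_from_mysql):
            # Cache-aside: serve from Redis when possible, fall through to
            # MySQL once on a miss; read-only data never needs invalidation.
            key = "stop:%s" % stop_id
            cached = r.get(key)
            if cached is not None:
                return json.loads(cached)
            row = fetch_from_mysql(stop_id)  # hypothetical DB accessor
            r.set(key, json.dumps(row))
            return row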

    Read the article

  • WCF Caching Solution - Need Advice

    - by Brandon
    The company I work for is looking to implement a caching solution. We host several WCF web services and need to cache certain values that can be persisted and fetched regardless of a client's session to a service. I am looking at the following technologies:

        Caching Application Block 4.1
        A WCF TCP service using HttpRuntime caching
        Memcached Win32 and client
        Microsoft AppFabric Caching Beta 2

    Our test server is Windows Server 2003 with IIS 6, but our production server is Windows Server 2008, so any of the above options would work (except AppFabric Caching on our test server). Does anyone have experience with any of these? This caching solution will not store a lot of data, but it will need to be read frequently and fast. Thanks in advance.

    Read the article

  • Does any centralized version control faster than SVN exist?

    - by Savageman
    Hello, I've been using SVN for a long time and now we're trying out Git. I'm not getting into the centralized vs. decentralized debate here; my only concern is speed, and the latter tool is much faster. But sometimes I need to work with a centralized approach, which is much simpler and less complex than the decentralized one. Its learning curve is really short, which saves a lot of time, while digging into a decentralized tool takes much longer and we encounter more problems working with it. However, SVN is really slow compared to Git, and I don't think that has anything to do with it being centralized: decentralized systems also have to deal with server connections and file transfers. So I can easily imagine that a faster implementation of centralized version control could exist. Does anyone have any clue about this?

    Read the article

  • JavaScript library more efficient than Rickshaw for realtime visualizations

    - by dan kutz
    I want to visualize data as time-series graphs on mobile devices (tablets) and therefore stumbled upon Rickshaw, which is based on D3. First I must say I was a little confused when I realized that "realtime" in web design is defined totally differently from realtime in engineering, which has fixed (and often very short) timeframes. Anyway, my aim is to visualize the data as fast as possible, and on older tablets rendering with Rickshaw is quite slow. Can anybody recommend another library that may be more efficient at rendering? Or is there no way out and I have to go native? Regards, Dan.

    Read the article

  • label in my table cell

    - by ven in Iphone world
    Hi, this is lak. Thank you for your fast feedback, but that did not work for me. I am using labels in a table:

        UILabel *label1 = (UILabel *)[cell viewWithTag:1];
        label1.backgroundColor = [UIColor clearColor];
        label1.text = aStation.station_name;
        label1.textColor = [UIColor colorWithRed:0.76 green:0.21 blue:0.07 alpha:1.0];
        [label1 setFont:[UIFont fontWithName:@"Trebuchet MS" size:15]];

    For this type of label I want to limit the number of characters. I hope I will get an answer.

    Read the article

  • How to calculate 2^n-1 efficiently without overflow?

    - by Ludwig Weinzierl
    I want to calculate 2^n - 1 for a 64-bit integer value. What I currently do is this:

        for (i = 0; i < n; i++)
            r |= 1ULL << i;

    and I wonder if there is a more elegant way to do it. The line is in an inner loop, so I need it to be fast. I thought of

        r = (1ULL << n) - 1;

    but it doesn't work for n = 64, because << is only defined for shift amounts up to 63.
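
    One branch-light fix, sketched here in Python with an explicit 64-bit mask so it mirrors what the C has to do (the names are mine): shift an all-ones word right instead of shifting 1 left, so n = 64 is legal and only n = 0 needs a guard:

        U64 = 0xFFFFFFFFFFFFFFFF  # all-ones 64-bit word

        def low_mask(n):
            # 2**n - 1 for 0 <= n <= 64; mirrors the C expression
            # n ? (~0ULL >> (64 - n)) : 0, which never shifts by 64 or more.
            return (U64 >> (64 - n)) if n else 0

        assert low_mask(0) == 0 and low_mask(3) == 7 and low_mask(64) == U64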

    Read the article

  • Nested query to find details in table B for maximum value in table A

    - by jpatokal
    I've got a huge bunch of flights travelling between airports. Each airport has an ID and (x, y) coordinates. For a given list of flights, I want to find the northernmost (highest y) airport visited. Here's the query I'm currently using:

        SELECT name, iata, icao, apid, x, y
        FROM airports
        WHERE y = (SELECT MAX(y)
                   FROM airports AS a, flights AS f
                   WHERE f.src_apid = a.apid OR f.dst_apid = a.apid)

    This works beautifully and reasonably fast as long as y is unique, but it fails once it isn't. What I want instead is to find MAX(y) in the subquery but return the unique apid of the airport with the highest y. Any suggestions?
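
    One way out (a sketch, with SQLite standing in for MySQL and toy data): skip the MAX(y) subquery entirely, sort the joined rows by y, and take a single row, adding apid to the ORDER BY so ties are broken deterministically:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE airports (name TEXT, iata TEXT, icao TEXT,
                                   apid INTEGER PRIMARY KEY, x REAL, y REAL);
            CREATE TABLE flights (src_apid INTEGER, dst_apid INTEGER);
            INSERT INTO airports VALUES ('Alpha','AAA','AAAA',1,10.0,60.0),
                                        ('Bravo','BBB','BBBB',2,20.0,60.0),
                                        ('Charlie','CCC','CCCC',3,30.0,40.0);
            INSERT INTO flights VALUES (1,3), (2,3);
        """)
        row = con.execute("""
            SELECT a.name, a.iata, a.icao, a.apid, a.x, a.y
            FROM airports AS a
            JOIN flights AS f ON a.apid IN (f.src_apid, f.dst_apid)
            ORDER BY a.y DESC, a.apid
            LIMIT 1
        """).fetchone()
        print(row)  # ('Alpha', 'AAA', 'AAAA', 1, 10.0, 60.0): one apid even with tied y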

    Read the article

  • Google Spreadsheet API from Android: google-api-java-client or handmade?

    - by yetanothercoderu
    For working with the Google Spreadsheet API from Android (2.2), Google suggests using google-api-java-client for Android. For that you have to include five jars in your Android application:

        guava-r09.jar
        google-http-client-extensions-android2-1.6.0-beta.jar
        google-api-client-extensions-android2-1.6.0-beta.jar
        google-http-client-1.6.0-beta.jar
        google-api-client-1.6.0-beta.jar

    and dig into the google-api-java-client javadocs for a fast-changing API. Is it worth the effort, in terms of Android specifics and device fragmentation? Isn't it more reasonable to write your own simple HTTP response parser, or to take a small existing library like google-spreadsheet-lib-android? Thanks!
    UPDATE: I chose google-api-java-client in the end, as it has all the routine stuff (like HTTP and XML parsing) out of the box.

    Read the article

  • Querying a large text file containing JSON objects

    - by Maciek Sawicki
    Hi, I have a text file of a few gigabytes in this format:

        {"user_ip":"x.x.x.x", "action_type":"xxx", "action_data":{"some_key":"some_value"...},...}

    with one entry per line. First, I would like to easily find entries for a given IP. This part is easy because I can use grep, for example, but even here I would like a better solution, because I want the response as fast as possible. The next part is more complicated: I would like to find entries from a selected IP, of a selected type, and with a particular value of some_key in action_data. I would probably have to convert this file to an SQL DB (probably SQLite, because it will be a desktop app), but I wonder whether better solutions exist.
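
    Converting to SQLite is a reasonable call; a minimal loading sketch (file name and example values are made up), indexing exactly the fields being filtered on so both lookups become index scans instead of full-file greps:

        import json
        import sqlite3

        con = sqlite3.connect("actions.db")
        con.execute("""CREATE TABLE IF NOT EXISTS actions
                       (user_ip TEXT, action_type TEXT, action_data TEXT)""")
        with open("actions.log") as f:  # one JSON object per line
            rows = ((e["user_ip"], e["action_type"], json.dumps(e["action_data"]))
                    for e in map(json.loads, f))
            con.executemany("INSERT INTO actions VALUES (?, ?, ?)", rows)
        con.execute("CREATE INDEX IF NOT EXISTS by_ip_type ON actions (user_ip, action_type)")
        con.commit()

        # Both query shapes are now index lookups; the some_key filter can be
        # applied in Python after the indexed columns have cut the row count.
        hits = con.execute("SELECT action_data FROM actions "
                           "WHERE user_ip = ? AND action_type = ?",
                           ("1.2.3.4", "click")).fetchall()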

    Read the article

  • how to use a shared variable with a stored procedure in Crystal Reports

    - by sonia
    I have a parent report that contains two subreports.

    Subreport "Item" gets all its fields from a stored procedure named spGetReportItem, e.g.:

        ItemName  ItemQuantity  TotalItemCost
        ab        4             45
        dd        6             98

    Subreport "Labour" gets all its fields from a stored procedure named spGetReportLabour, e.g.:

        labourName  labourQuantity  TotalLabourCost
        ab          44              455
        dd          63              986

    I want to find the total of TotalItemCost and the total of TotalLabourCost, and then the grand total of both. I have seen many examples on the internet in which a shared variable is used in a formula, but they all use a table, whereas I am fetching data from a stored procedure. The examples look like:

        shared numbervar total := Sum({TableName.ColumnName});

    Since I use stored procedures instead of tables, how can I total a field from the result set that a stored procedure returns? Please answer as soon as possible; I need it urgently. Thanks.

    Read the article

  • Python fCGI + sqlAlchemy = malformed header from script. Bad header=FROM tags : index.py

    - by crgwbr
    I'm writing a FastCGI application that uses SQLAlchemy and MySQL for persistent data storage. I have no problem connecting to the DB and setting up the ORM (so that tables get mapped to classes); I can even add data to tables (in memory). But as soon as I query the DB (and push any changes from memory to storage) I get a 500 Internal Server Error, and my error.log records "malformed header from script. Bad header=FROM tags : index.py", where tags is the table name. Any idea what could be causing this? Also, I don't think it matters, but it's a Linux development server talking to an off-site (across the country) MySQL server.
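
    A likely culprit, offered as a guess consistent with that symptom: under FastCGI, stdout is the HTTP response, and an engine created with echo=True logs every statement ("SELECT ... FROM tags") to stdout, so the first echoed line gets parsed as a malformed header. A sketch of the fix (the connection string is a placeholder):

        from sqlalchemy import create_engine

        # Keep statement echoing off (or route SQLAlchemy's logging to stderr
        # via the standard logging module) so nothing hits stdout before the
        # real HTTP headers.
        engine = create_engine("mysql://user:password@dbhost/mydb", echo=False)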

    Read the article

  • Migrate from VB.net to C#

    - by rowmark
    Hello experts, I have been developing applications using VB.NET for the past 5 years. I tried to learn Java earlier and found it very difficult, so I stuck with VB.NET, and for me C# is more or less similar to Java. Now I cannot get away with that any longer: I have to code in C#. Is there a way I can get up to speed with C# fast? I would really appreciate your thoughts, and pointers to any good resources I can try.

    Read the article

  • keep the Windows console open after a Python syntax error

    - by basweber
    File associations on my machine (Windows XP Home) are such that a Python script is opened directly with the Python interpreter. If I double-click a Python script, a console window runs it and everything is fine, as long as there is no syntax error in the script. In that case the console window opens for a moment but is closed immediately, too fast to read the error message. Of course, I could manually open a console window and execute the script by typing python myscript.py, but I am sure there is a more convenient (i.e. double-click based) solution.
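
    One double-click friendly approach (a sketch, Python 3): associate .py files with a small runner like this instead of python.exe itself, or drop scripts onto it; it runs the real interpreter as a child process and holds the console open so the traceback stays readable:

        import subprocess
        import sys

        # sys.argv[1] is the script the user double-clicked or dropped here.
        ret = subprocess.call([sys.executable] + sys.argv[1:])
        input("exit code %d - press Enter to close" % ret)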

    Read the article

  • HBase as web app backend

    - by NathanD
    Can anyone advise whether it is a good idea to have HBase as the primary data source for a web-based application? My primary concern is HBase's response time to queries. Is it possible to have sub-second responses?
    Edit: more details about the app itself:

        Amount of data: ~500 GB of text data, expected to reach 1 TB soon
        Concurrent users: up to 50

    The app will be used to present reports about the data stored in HBase, like how many times the keyword "X" occurred in the last 24 h. For ~80% of the requests I will know the exact key; the other 20% will be scans (I'm looking into HBase schema design topics to make those run fast).

    Read the article

  • validate constructor arguments or method parameters with annotations, and let them throw an exception

    - by marius
    I am validating constructor and method arguments, as I want the software, especially its model part, to fail fast. As a result, constructor code often looks like this:

        public MyModelClass(String arg1, String arg2, OtherModelClass otherModelInstance) {
            if (arg1 == null) {
                throw new IllegalArgumentException("arg1 must not be null");
            }
            // further validation of constraints...
            // actual constructor code...
        }

    Is there a way to do that with an annotation-driven approach? Something like:

        public MyModelClass(@NotNull(raise=IllegalArgumentException.class, message="arg1 must not be null") String arg1,
                            @NotNull(raise=IllegalArgumentException.class) String arg2,
                            OtherModelClass otherModelInstance) {
            // actual constructor code...
        }

    In my eyes this would make the actual code a lot more readable. I understand that there are annotations to support IDE validation (like the existing @NotNull annotation). Thank you very much for your help.

    Read the article

  • What's a better choice for SQL-backed number crunching - Ruby 1.9, Python 2, Python 3, or PHP 5.3?

    - by Ivan
    Criteria for "better": fast at math and at simple DB transactions (few fields, many records); convenient to develop, read, and extend; flexible; connectible. The task is to use a common web-development scripting language to process and calculate long time series and multidimensional surfaces (mostly selecting/inserting sets of floats and doing maths with them). The choice is Ruby 1.9, Python 2, Python 3, PHP 5.3, Perl 5.12, or JavaScript (node.js). All the data is to be stored in a relational database (due to its heavily multidimensional nature), and all communication with the outer world is to be done by means of web services.

    Read the article

  • Animate an image after hovering over an item for a second

    - by mikep
    Hey, after some tries to get this to work, I'm asking whether you know where my mistake is. This is my code so far:

        $(".menu a").hover(function () {
            var link = $(this);  // capture the element; inside setTimeout, 'this' is no longer the link
            link.data('timeout', setTimeout(function () {
                link.next("em").animate({opacity: "show", top: "-65"}, "slow");
            }, 1000));
        }, function () {
            clearTimeout($(this).data('timeout'));
            $(this).next("em").animate({opacity: "hide", top: "-75"}, "fast");
        });

    I would be happy about some help.

    Read the article

  • PHP APC - Why is loading cached array op codes slow?

    - by Aaron Kreider
    I'm using APC to reduce the load time of my PHP files. My files load very fast, except for one file where I define more than 100 arrays: this 270 KB file takes 200 ms to load. The rest of the files are full of objects, methods, and functions. I'm wondering: does opcode caching not work as well for arrays? My APC cache should be big enough to handle all of my classes; currently 40% of my cache is free and my hit rate is 99%.

        apc.shm_size = 32M
        apc.max_file_size = 1M
        apc.shm_segments = 1

    This is APC 3.1.6 on PHP 5.2, Apache 2, and Windows Vista.

    Read the article

  • Can this rectangle to rectangle intersection code still work?

    - by Jeremy Rudd
    I was looking for fast-performing code to test whether two rectangles intersect. A search on the internet came up with this one-liner (WOOT!), but I don't understand how to write it in JavaScript; it seems to be written in an ancient form of C++. Can this thing still work? Can you make it work?

        typedef struct {
            LONG left;
            LONG top;
            LONG right;
            LONG bottom;
        } RECT;

        bool IntersectRect(const RECT *r1, const RECT *r2) {
            return !(r2->left   > r1->right  ||
                     r2->right  < r1->left   ||
                     r2->top    > r1->bottom ||
                     r2->bottom < r1->top);
        }
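
    The logic ports one-for-one; a sketch in Python with dicts standing in for the struct (the JavaScript version is the same apart from syntax, with object literals for the rectangles; y grows downward here, as in the C original):

        def rects_intersect(r1, r2):
            # Axis-aligned rectangles overlap unless one lies entirely to the
            # left of, right of, above, or below the other.
            return not (r2["left"] > r1["right"] or
                        r2["right"] < r1["left"] or
                        r2["top"] > r1["bottom"] or
                        r2["bottom"] < r1["top"])

        a = {"left": 0, "top": 0, "right": 10, "bottom": 10}
        b = {"left": 5, "top": 5, "right": 15, "bottom": 15}
        print(rects_intersect(a, b))  # True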

    Read the article

  • Can this loop be sped up in pure Python?

    - by Noctis Skytower
    I was trying out an experiment with Python: finding out how many times it could add one to an integer in one minute. Assuming two computers are the same except for the speed of their CPUs, this should give an estimate of how fast some CPU operations are on the computer in question. The code below is a test designed to fulfil the requirements given above. This version is about 20% faster than the first attempt and 150% faster than the third attempt. Can anyone suggest how to get the most additions in a minute's time span? Higher numbers are desirable.
    EDIT: This experiment is written in Python 3.1 and is 15% faster than the fourth speed-up attempt.

        def start(seconds):
            import time, _thread

            def stop(seconds, signal):
                time.sleep(seconds)
                signal.pop()

            total, signal = 0, [None]
            _thread.start_new_thread(stop, (seconds, signal))
            while signal:
                total += 1
            return total

        if __name__ == '__main__':
            print('Testing the CPU speed ...')
            print('Relative speed:', start(60))
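
    One direction worth trying (a sketch, not a benchmark of record): check a deadline every few thousand additions instead of testing the signal list on every single iteration, so most iterations are bare adds inside a tight for loop:

        import time

        def start(seconds, chunk=10000):
            total = 0
            deadline = time.time() + seconds
            while time.time() < deadline:
                # The inner loop does nothing but add, which is the
                # operation being measured.
                for _ in range(chunk):
                    total += 1
            return total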

    Read the article

  • Loading a DB table into nested dictionaries in Python

    - by Hossein
    Hi, I have a table in a MySQL DB which I want to load into a dictionary in Python. The table's columns are: id, url, tag, tagCount. tagCount is the number of times a tag has been repeated for a certain url, so I need a nested dictionary, in other words a dictionary of dictionaries, to load this table, because each url has several tags with different tagCounts. The code I used is this (the whole table is about 22,000 records):

        cursor.execute('''SELECT url, tag, tagCount FROM wtp''')
        urlTagCount = cursor.fetchall()
        d = defaultdict(defaultdict)
        for url, tag, tagCount in urlTagCount:
            d[url][tag] = tagCount
        print d

    First of all, I want to know if this is correct, and if it is, why it takes so much time. Are there any faster solutions? I am loading this table into memory for fast access, to get rid of the hassle of slow database operations, but at this speed it has become a bottleneck itself; it is even much slower than DB access. Can anyone help? Thanks.
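
    For what it's worth, the loop itself is fine; a sketch of two cheap changes (reusing the question's own DB-API cursor): iterate the cursor instead of materializing fetchall(), and drop the final print, since dumping a 22,000-entry nested dict to a console can easily dominate the measured time:

        from collections import defaultdict

        # `cursor` is the MySQLdb cursor from the question.
        d = defaultdict(dict)  # a plain inner dict is enough here
        cursor.execute("SELECT url, tag, tagCount FROM wtp")
        for url, tag, tag_count in cursor:  # stream rows as they arrive
            d[url][tag] = tag_count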

    Read the article

  • Get only new RSS entries with a PHP script?

    - by ArneRie
    What I'm trying to do: fetch X RSS feeds from my blogs and echo only the new entries. My problem is: how do I know which items have already been parsed? My solution so far: fetch each feed every 5 hours and store all titles in a database table or flat file; on the next run, check whether each title is already in the database, and if not, print it and save it to the database. But I am not sure this is best practice. If someone knows a fast way, that would be great. Sorry for my poor English.
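
    A common shape for this, sketched in Python with the third-party feedparser module standing in for the PHP feed parser (file and feed names are made up): key items by their guid/link rather than their title, since titles can collide or get edited:

        import sqlite3

        import feedparser  # third-party; pip install feedparser

        seen = sqlite3.connect("seen.db")
        seen.execute("CREATE TABLE IF NOT EXISTS seen (guid TEXT PRIMARY KEY)")
        for entry in feedparser.parse("http://example.com/feed").entries:
            guid = entry.get("id") or entry.link  # prefer the feed's own guid
            if seen.execute("SELECT 1 FROM seen WHERE guid = ?", (guid,)).fetchone():
                continue  # already echoed on an earlier run
            print(entry.title)
            seen.execute("INSERT INTO seen VALUES (?)", (guid,))
        seen.commit()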

    Read the article

  • Optimizing a MySQL statement with lots of count(row) and sum(row+row2)...

    - by Zombies
    I need to use the InnoDB storage engine on a table with about 1 million records in it at any given time. Records are inserted at a very fast rate and are then dropped within a few days, maybe a week. The ping table has about a million rows, whereas the website table has only about 10,000. My statement is this:

        SELECT url
        FROM website ws, ping pi
        WHERE ws.idproxy = pi.idproxy
          AND pi.entrytime > CURDATE() - 3
          AND contentping + tcpping IS NOT NULL
        GROUP BY url
        HAVING SUM(contentping + tcpping) / (COUNT(*) - COUNT(errortype)) < 500
           AND COUNT(*) > 3
           AND COUNT(errortype) / COUNT(*) < .15
        ORDER BY SUM(contentping + tcpping) / (COUNT(*) - COUNT(errortype)) ASC;

    I added an index on entrytime, yet no dice. Can anyone throw me a bone as to what I should look into for basic optimization of this query? The result set is only about 200 rows, so I'm not getting killed there.

    Read the article

  • Can I expect a performance gain from removing this JOIN?

    - by makeee
    I have a "items" table with 1 million rows and a "users" table with 20,000 rows. When I select from the "items" table I do a join on the "users" table (items.user_id = user.id), so that I can grab the "username" from the users table. I'm considering adding a username column to the items table and removing the join. Can I expect a decent performance increase from this? It's already quite fast, but it would be nice to decrease my load (which is pretty high). The downside is that if the user changes their username, items will still reflect their old username, but this is okay with me if I can expect a decent performance increase. I'm asking stackoverflow because benchmarks aren't telling me too much. Both queries finish very quickly. Regardless, I'm wondering if removing the join would lighten load on the database to any significant degree.

    Read the article
